e) Find all users who have visited only the OurPlanetTitle page. We use relation `b` to get the total count of `url`s each user has visited, and keep only users whose total is 1.
%%SQL select a.user_id from sessions a, (select user_id, count(url) as totalUrl from sessions group by user_id) b where a.user_id = b.user_id and a.navigation_page = 'OurPlanetTitle' and b.totalurl = 1
_____no_output_____
MIT
Netflix Exploration.ipynb
guicaro/guicaro.github.io
**[Python Home Page](https://www.kaggle.com/learn/python)** --- Try It Yourself. Functions are powerful. Try writing some yourself. As before, don't forget to run the setup code below before jumping into question 1.
# SETUP. You don't need to worry for now about what this code does or how it works. from learntools.core import binder; binder.bind(globals()) from learntools.python.ex2 import * print('Setup complete.')
Setup complete.
Apache-2.0
exercise-functions-and-getting-help.ipynb
Mohsenselseleh/My-Projects
Exercises 1. Complete the body of the following function according to its docstring. HINT: Python has a built-in function `round`.
def round_to_two_places(num): """Return the given number rounded to two decimal places. >>> round_to_two_places(3.14159) 3.14 """ # Replace this body with your own code. # ("pass" is a keyword that does literally nothing. We used it as a placeholder # because after we begin a code block, Python requires at least one line of code) return round(num,2) q1.check() # Uncomment the following for a hint # q1.hint() # Or uncomment the following to peek at the solution q1.solution()
_____no_output_____
Apache-2.0
exercise-functions-and-getting-help.ipynb
Mohsenselseleh/My-Projects
2. The help for `round` says that `ndigits` (the second argument) may be negative. What do you think will happen when it is? Try some examples in the following cell (an illustration also follows below). Can you think of a case where this would be useful?
# Check your answer (Run this code cell to receive credit!) q2.solution()
_____no_output_____
Apache-2.0
exercise-functions-and-getting-help.ipynb
Mohsenselseleh/My-Projects
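For the curious, a quick illustration (separate from the exercise checker above) of what a negative `ndigits` does: it rounds to the left of the decimal point.

```python
# Negative ndigits rounds to tens, hundreds, thousands, ...
# Useful for reporting rough figures, e.g. a cost "to the nearest hundred".
print(round(3141.59265, -1))  # 3140.0 (nearest ten)
print(round(3141.59265, -2))  # 3100.0 (nearest hundred)
print(round(3141.59265, -3))  # 3000.0 (nearest thousand)
```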
3. In a previous programming problem, the candy-sharing friends Alice, Bob and Carol tried to split candies evenly. For the sake of their friendship, any candies left over would be smashed. For example, if they collectively bring home 91 candies, they'll take 30 each and smash 1. Below is a simple function that will calculate the number of candies to smash for *any* number of total candies. Modify it so that it optionally takes a second argument representing the number of friends the candies are being split between. If no second argument is provided, it should assume 3 friends, as before. Update the docstring to reflect this new behaviour.
def to_smash(total_candies, friends=3): """Return the number of leftover candies that must be smashed after distributing the given number of candies evenly between the given number of friends (3 by default). >>> to_smash(91) 1 """ return total_candies % friends q3.check() q3.hint() q3.solution()
_____no_output_____
Apache-2.0
exercise-functions-and-getting-help.ipynb
Mohsenselseleh/My-Projects
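A quick sanity check of both the default and the explicit second argument:

```python
print(to_smash(91))     # 1: 91 candies among the default 3 friends, 30 each
print(to_smash(91, 4))  # 3: 91 candies among 4 friends, 22 each
```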
4. (Optional) It may not be fun, but reading and understanding error messages will be an important part of your Python career. Each code cell below contains some commented-out buggy code. For each cell... 1. Read the code and predict what you think will happen when it's run. 2. Then uncomment the code and run it to see what happens. (**Tip**: In the kernel editor, you can highlight several lines and press `ctrl`+`/` to toggle commenting.) 3. Fix the code (so that it accomplishes its intended purpose without throwing an exception)
round_to_two_places(9.9999) x = -10 y = 5 # Which of the two variables above has the smallest absolute value? smallest_abs = min(abs(x), abs(y)) print(smallest_abs) def f(x): y = abs(x) return y print(f(5))
_____no_output_____
Apache-2.0
exercise-functions-and-getting-help.ipynb
Mohsenselseleh/My-Projects
This notebook was put together by [Jake Vanderplas](http://www.vanderplas.com). Source and license info is on [GitHub](https://github.com/jakevdp/sklearn_tutorial/). Supervised Learning In-Depth: Random Forests. Previously we saw a powerful discriminative classifier, **Support Vector Machines**. Here we'll take a look at motivating another powerful algorithm. This one is a *non-parametric* algorithm called **Random Forests**.
%matplotlib inline import numpy as np import matplotlib.pyplot as plt from scipy import stats plt.style.use('seaborn')
_____no_output_____
BSD-3-Clause
notebooks/03.2-Regression-Forests.ipynb
pletzer/sklearn_tutorial
Motivating Random Forests: Decision Trees. Random forests are an example of an *ensemble learner* built on decision trees. For this reason we'll start by discussing decision trees themselves. Decision trees are extremely intuitive ways to classify or label objects: you simply ask a series of questions designed to zero in on the classification:
import fig_code fig_code.plot_example_decision_tree()
_____no_output_____
BSD-3-Clause
notebooks/03.2-Regression-Forests.ipynb
pletzer/sklearn_tutorial
The binary splitting makes this extremely efficient. As always, though, the trick is to *ask the right questions*. This is where the algorithmic process comes in: in training a decision tree classifier, the algorithm looks at the features and decides which questions (or "splits") contain the most information. Creating a Decision Tree. Here's an example of a decision tree classifier in scikit-learn. We'll start by defining some two-dimensional labeled data:
from sklearn.datasets import make_blobs X, y = make_blobs(n_samples=300, centers=4, random_state=0, cluster_std=1.0) plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='rainbow');
_____no_output_____
BSD-3-Clause
notebooks/03.2-Regression-Forests.ipynb
pletzer/sklearn_tutorial
We have some convenience functions in the repository that help with this visualization:
from fig_code import visualize_tree, plot_tree_interactive
_____no_output_____
BSD-3-Clause
notebooks/03.2-Regression-Forests.ipynb
pletzer/sklearn_tutorial
Now using IPython's ``interact`` (available in IPython 2.0+, and requiring a live kernel) we can view the decision tree splits:
plot_tree_interactive(X, y);
_____no_output_____
BSD-3-Clause
notebooks/03.2-Regression-Forests.ipynb
pletzer/sklearn_tutorial
Notice that at each increase in depth, every node is split in two **except** those nodes which contain only a single class. The result is a very fast **non-parametric** classification, which can be extremely useful in practice. **Question: Do you see any problems with this?** Decision Trees and over-fitting. One issue with decision trees is that it is very easy to create trees which **over-fit** the data. That is, they are flexible enough that they can learn the structure of the noise in the data rather than the signal! For example, take a look at two trees built on two subsets of this dataset:
from sklearn.tree import DecisionTreeClassifier clf = DecisionTreeClassifier() plt.figure() visualize_tree(clf, X[:200], y[:200], boundaries=False) plt.figure() visualize_tree(clf, X[-200:], y[-200:], boundaries=False)
_____no_output_____
BSD-3-Clause
notebooks/03.2-Regression-Forests.ipynb
pletzer/sklearn_tutorial
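To quantify the over-fitting that the two plots above only hint at, here is a minimal sketch (reusing the `X`, `y` blobs defined earlier) comparing training accuracy against held-out accuracy for an unconstrained tree:

```python
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=42)
tree = DecisionTreeClassifier().fit(Xtr, ytr)
# An unconstrained tree memorizes its training set (accuracy ~1.0)
# but generalizes noticeably worse to unseen points.
print("train accuracy:", tree.score(Xtr, ytr))
print("test accuracy: ", tree.score(Xte, yte))
```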
The details of the classifications are completely different! That is an indication of **over-fitting**: when you predict the value for a new point, the result reflects the noise in the model more than the signal. Ensembles of Estimators: Random Forests. One possible way to address over-fitting is to use an **Ensemble Method**: this is a meta-estimator which essentially averages the results of many individual estimators which over-fit the data. Somewhat surprisingly, the resulting estimates are much more robust and accurate than the individual estimates which make them up! One of the most common ensemble methods is the **Random Forest**, in which the ensemble is made up of many decision trees which are in some way perturbed. There are volumes of theory and precedent about how to randomize these trees, but as an example, let's imagine an ensemble of estimators fit on subsets of the data. We can get an idea of what these might look like as follows:
def fit_randomized_tree(random_state=0): X, y = make_blobs(n_samples=300, centers=4, random_state=0, cluster_std=2.0) clf = DecisionTreeClassifier(max_depth=15) rng = np.random.RandomState(random_state) i = np.arange(len(y)) rng.shuffle(i) visualize_tree(clf, X[i[:250]], y[i[:250]], boundaries=False, xlim=(X[:, 0].min(), X[:, 0].max()), ylim=(X[:, 1].min(), X[:, 1].max())) from ipywidgets import interact interact(fit_randomized_tree, random_state=(0, 100));
_____no_output_____
BSD-3-Clause
notebooks/03.2-Regression-Forests.ipynb
pletzer/sklearn_tutorial
See how the details of the model change as a function of the sample, while the larger characteristics remain the same! The random forest classifier will do something similar to this, but use a combined version of all these trees to arrive at a final answer:
from sklearn.ensemble import RandomForestClassifier clf = RandomForestClassifier(n_estimators=100, random_state=0) visualize_tree(clf, X, y, boundaries=False);
_____no_output_____
BSD-3-Clause
notebooks/03.2-Regression-Forests.ipynb
pletzer/sklearn_tutorial
By averaging over 100 randomly perturbed models, we end up with an overall model which is a much better fit to our data! *(Note: above we randomized the model through sub-sampling... Random Forests use more sophisticated means of randomization, which you can read about in, e.g., the [scikit-learn documentation](http://scikit-learn.org/stable/modules/ensemble.html#forest))* Quick Example: Moving to Regression. Above we were considering random forests within the context of classification. Random forests can also be made to work in the case of regression (that is, continuous rather than categorical variables). The estimator to use for this is ``sklearn.ensemble.RandomForestRegressor``. Let's quickly demonstrate how this can be used:
from sklearn.ensemble import RandomForestRegressor x = 10 * np.random.rand(100) def model(x, sigma=0.3): fast_oscillation = np.sin(5 * x) slow_oscillation = np.sin(0.5 * x) noise = sigma * np.random.randn(len(x)) return slow_oscillation + fast_oscillation + noise y = model(x) plt.errorbar(x, y, 0.3, fmt='o'); xfit = np.linspace(0, 10, 1000) yfit = RandomForestRegressor(100).fit(x[:, None], y).predict(xfit[:, None]) ytrue = model(xfit, 0) plt.errorbar(x, y, 0.3, fmt='o') plt.plot(xfit, yfit, '-r'); plt.plot(xfit, ytrue, '-k', alpha=0.5);
_____no_output_____
BSD-3-Clause
notebooks/03.2-Regression-Forests.ipynb
pletzer/sklearn_tutorial
As you can see, the non-parametric random forest model is flexible enough to fit the multi-period data, without us even specifying a multi-period model! Example: Random Forest for Classifying Digits. We previously saw the **hand-written digits** data. Let's use that here to test the efficacy of the SVM and Random Forest classifiers.
from sklearn.datasets import load_digits digits = load_digits() digits.keys() X = digits.data y = digits.target print(X.shape) print(y.shape)
_____no_output_____
BSD-3-Clause
notebooks/03.2-Regression-Forests.ipynb
pletzer/sklearn_tutorial
To remind us what we're looking at, we'll visualize the first few data points:
# set up the figure fig = plt.figure(figsize=(6, 6)) # figure size in inches fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05) # plot the digits: each image is 8x8 pixels for i in range(64): ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[]) ax.imshow(digits.images[i], cmap=plt.cm.binary, interpolation='nearest') # label the image with the target value ax.text(0, 7, str(digits.target[i]))
_____no_output_____
BSD-3-Clause
notebooks/03.2-Regression-Forests.ipynb
pletzer/sklearn_tutorial
We can quickly classify the digits using a decision tree as follows:
from sklearn.model_selection import train_test_split from sklearn import metrics Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, random_state=0) clf = DecisionTreeClassifier(max_depth=11) clf.fit(Xtrain, ytrain) ypred = clf.predict(Xtest)
_____no_output_____
BSD-3-Clause
notebooks/03.2-Regression-Forests.ipynb
pletzer/sklearn_tutorial
We can check the accuracy of this classifier:
metrics.accuracy_score(ypred, ytest)
_____no_output_____
BSD-3-Clause
notebooks/03.2-Regression-Forests.ipynb
pletzer/sklearn_tutorial
and for good measure, plot the confusion matrix:
metrics.plot_confusion_matrix(clf, Xtest, ytest, cmap=plt.cm.Blues) # (removed in scikit-learn >= 1.2; use metrics.ConfusionMatrixDisplay.from_estimator there) plt.grid(False)
_____no_output_____
BSD-3-Clause
notebooks/03.2-Regression-Forests.ipynb
pletzer/sklearn_tutorial
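The section promised a comparison of classifiers; here is a minimal sketch (reusing `Xtrain`, `ytrain`, `Xtest`, `ytest` and `metrics` from above) of the same digits task with a random forest, which typically scores noticeably higher than the single depth-11 tree:

```python
from sklearn.ensemble import RandomForestClassifier

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(Xtrain, ytrain)
# Averaging many randomized trees usually lifts digits accuracy
# well above the single decision tree's score.
print(metrics.accuracy_score(ytest, forest.predict(Xtest)))
```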
Understand the datasets for training and cross-validation
# Assume the datasets are downloaded to the location below import os.path as osp import numpy as np import matplotlib.pyplot as plt import torch import torchfcn root = osp.expanduser('~/data/datasets')
_____no_output_____
MIT
VOC-Data-Loaders.ipynb
1xyz/pytorch-fcn-ext
Pixel label values
# Map of the class names, e.g. 1 = aeroplane class_names = np.array([ 'background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', 'person', 'potted plant', 'sheep', 'sofa', 'train', 'tv/monitor', ])
_____no_output_____
MIT
VOC-Data-Loaders.ipynb
1xyz/pytorch-fcn-ext
Utility functions to show images and histogram
def imshow(img): plt.imshow(img) plt.show() def hist(img): plt.hist(img) plt.show()
_____no_output_____
MIT
VOC-Data-Loaders.ipynb
1xyz/pytorch-fcn-ext
Inspect the train dataset
# The train dataset is the Semantic Boundaries Dataset (SBD) benchmark # . http://home.bharathh.info/pubs/codes/SBD/download.html # Refer http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/semantic_contours/benchmark.tgz # Note: we set transform to False; this ensures that the result of __getitem__ is an # ndarray, not a tensor. train_dataset = torchfcn.datasets.SBDClassSeg(root, split='train', transform=False) print(train_dataset) print(f"Number of entries in the training set: {len(train_dataset)}") idx = 459 print("Shape of image: ", train_dataset[idx][0].shape, "shape of the label: ", train_dataset[idx][1].shape) imshow(train_dataset[idx][0]) imshow(train_dataset[idx][1]) # print(train_dataset[idx][1])
Shape of image: (480, 360, 3) shape of the label: (480, 360)
MIT
VOC-Data-Loaders.ipynb
1xyz/pytorch-fcn-ext
Print the histogram of the train dataset
label_dist = np.ravel(train_dataset[idx][1]) hist(label_dist)
_____no_output_____
MIT
VOC-Data-Loaders.ipynb
1xyz/pytorch-fcn-ext
Understand the validation (dev) dataset
# Load the validation dataset (Pascal VOC) # Again note that the transform is False, so the result is an ndarray and not a transformed tensor valid_dataset = torchfcn.datasets.VOC2011ClassSeg(root, split='seg11valid', transform=False) idx = 203 print("Shape of data: ", valid_dataset[idx][0].shape, "Shape of label: ", valid_dataset[idx][1].shape) imshow(valid_dataset[idx][0]) imshow(valid_dataset[idx][1]) label_dist = np.ravel(valid_dataset[idx][1]) print("Max", np.max(label_dist), "Min", np.min(label_dist)) hist(label_dist)
Shape of data: (375, 500, 3) Shape of label: (375, 500)
MIT
VOC-Data-Loaders.ipynb
1xyz/pytorch-fcn-ext
Inspect the transformed tensor
## Let us actually inspect the transformed tensor data instead valid_tensor_dataset = torchfcn.datasets.VOC2011ClassSeg(root, split='seg11valid', transform=True) label_dists = valid_tensor_dataset[idx][1] print(torch.min(label_dists)) label_dist = np.ravel(label_dists.numpy()) print("Max", np.max(label_dist), "Min", np.min(label_dist)) hist(label_dist)
tensor(-1) Max 8 Min -1
MIT
VOC-Data-Loaders.ipynb
1xyz/pytorch-fcn-ext
Inspect how the dataset is transformed
mean_bgr = np.array([104.00698793, 116.66876762, 122.67891434]) def transform(img): #img = img[:, :, ::-1] # RGB -> BGR img = img.astype(np.float64) img -= mean_bgr return img print(valid_dataset[idx][0].shape) transformed_image = transform(valid_dataset[idx][0]) print(transformed_image.shape) imshow(transformed_image)
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
MIT
VOC-Data-Loaders.ipynb
1xyz/pytorch-fcn-ext
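The clipping warning above comes from handing `imshow` float values that have had the BGR channel means subtracted. A minimal sketch of the inverse transform for display purposes (assuming the `transform`/`mean_bgr` defined above):

```python
def untransform(img):
    # Undo the mean subtraction and return a displayable uint8 image.
    img = img + mean_bgr
    return np.clip(img, 0, 255).astype(np.uint8)

imshow(untransform(transformed_image))
```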
Code generation for Linux Real-Time Preemption. Cf. https://wiki.linuxfoundation.org/realtime/start. The generated code can be compiled using a C++ compiler as follows: $ c++ main.cpp -o main
import openrtdynamics2.lang as dy # import aliases assumed, as in the OpenRTDynamics examples import openrtdynamics2.targets as tg dy.clear() system = dy.enter_system() # define system inputs u = dy.system_input( dy.DataTypeFloat64(1), name='input1', default_value=1.0, value_range=[0, 25], title="input #1") y = dy.signal() # introduce variable y x = y + u # x[k] = y[k] + u[k] y << dy.delay(x, initial_state = 2.0) # y[k+1] = x[k] = y[k] + u[k], y[0] = 2.0 # define sampling time delta_time = dy.float64(0.1) # define output(s) dy.append_output(delta_time, '__ORTD_CONTROL_delta_time__') dy.append_output(y, 'output') # generate code code_gen_results = dy.generate_code(template=tg.TargetLinuxRealtime(activate_print = True), folder='./')
compiling system simulation (level 0)... input1 1.0 double Generated code will be written to ./ . writing file ./simulation_manifest.json writing file ./main.cpp
MIT
examples/real-time/real-Time_linux.ipynb
OpenRTDynamics/openrtdynamics2
NumPy Exercises NumPy is a library for the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays. Import NumPy as np
import numpy as np
_____no_output_____
MIT
Numpy-Exercises.ipynb
smalik-hub/Numpy-Exercises
Create an array of 10 zeros
np.zeros(10)
_____no_output_____
MIT
Numpy-Exercises.ipynb
smalik-hub/Numpy-Exercises
Create an array of 10 ones
np.ones(10)
_____no_output_____
MIT
Numpy-Exercises.ipynb
smalik-hub/Numpy-Exercises
Create an array of 10 fives
np.ones(10) * 5
_____no_output_____
MIT
Numpy-Exercises.ipynb
smalik-hub/Numpy-Exercises
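An equivalent, arguably more direct idiom (not required by the exercise) is `np.full`:

```python
np.full(10, 5.0)  # one call, no intermediate array of ones
```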
Create an array of the integers from 10 to 50
np.arange(10,51)
_____no_output_____
MIT
Numpy-Exercises.ipynb
smalik-hub/Numpy-Exercises
Create an array of all the even integers from 10 to 50
np.arange(10,51,2)
_____no_output_____
MIT
Numpy-Exercises.ipynb
smalik-hub/Numpy-Exercises
Create a 3x3 matrix with values ranging from 0 to 8
np.arange(0,9).reshape(3,3)
_____no_output_____
MIT
Numpy-Exercises.ipynb
smalik-hub/Numpy-Exercises
Create a 3x3 identity matrix
np.eye(3)
_____no_output_____
MIT
Numpy-Exercises.ipynb
smalik-hub/Numpy-Exercises
Use NumPy to generate a random number between 0 and 1
np.random.rand(1)
_____no_output_____
MIT
Numpy-Exercises.ipynb
smalik-hub/Numpy-Exercises
Use NumPy to generate an array of 25 random numbers sampled from a standard normal distribution
np.random.randn(25)
_____no_output_____
MIT
Numpy-Exercises.ipynb
smalik-hub/Numpy-Exercises
Create the following matrix (a 10x10 array of the values 0.01 through 1.00 in steps of 0.01):
np.arange(1,101).reshape(10,10)/100
_____no_output_____
MIT
Numpy-Exercises.ipynb
smalik-hub/Numpy-Exercises
Create an array of 20 linearly spaced points between 0 and 1:
np.linspace(0,1,20)
_____no_output_____
MIT
Numpy-Exercises.ipynb
smalik-hub/Numpy-Exercises
Numpy Indexing and Selection. Now you will be given a few matrices, and be asked to replicate the resulting matrix outputs:
mat = np.arange(1,26).reshape(5,5) mat # WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW # BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T # BE ABLE TO SEE THE OUTPUT ANY MORE mat[2:,1:] # WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW # BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T # BE ABLE TO SEE THE OUTPUT ANY MORE mat[3,4] # WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW # BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T # BE ABLE TO SEE THE OUTPUT ANY MORE mat[:3,1:2] # WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW # BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T # BE ABLE TO SEE THE OUTPUT ANY MORE mat[4] # WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW # BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T # BE ABLE TO SEE THE OUTPUT ANY MORE mat[3:,:]
_____no_output_____
MIT
Numpy-Exercises.ipynb
smalik-hub/Numpy-Exercises
Now do the following: Get the sum of all the values in mat
np.sum(mat)
_____no_output_____
MIT
Numpy-Exercises.ipynb
smalik-hub/Numpy-Exercises
Get the standard deviation of the values in mat
np.std(mat)
_____no_output_____
MIT
Numpy-Exercises.ipynb
smalik-hub/Numpy-Exercises
Get the sum of all the columns in mat
np.sum(mat, axis=0)
_____no_output_____
MIT
Numpy-Exercises.ipynb
smalik-hub/Numpy-Exercises
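The `axis` argument is a common stumbling block: `axis=0` collapses the rows (one sum per column), while `axis=1` collapses the columns (one sum per row). A quick check with the `mat` defined above:

```python
print(np.sum(mat, axis=0))  # column sums: [55 60 65 70 75]
print(np.sum(mat, axis=1))  # row sums:    [ 15  40  65  90 115]
```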
Models in Pyro: From Primitive Distributions to Stochastic Functions. The basic unit of Pyro programs is the _stochastic function_. This is an arbitrary Python callable that combines two ingredients: - deterministic Python code; and - primitive stochastic functions. Concretely, a stochastic function can be any Python object with a `__call__()` method, like a function, a method, or a PyTorch `nn.Module`. Throughout the tutorials and documentation, we will often call stochastic functions *models*, since stochastic functions can be used to represent simplified or abstract descriptions of a process by which data are generated. Expressing models as stochastic functions in Pyro means that models can be composed, reused, imported, and serialized just like regular Python callables. Without further ado, let's introduce one of our basic building blocks: primitive stochastic functions. Primitive Stochastic Functions. Primitive stochastic functions, or distributions, are an important class of stochastic functions for which we can explicitly compute the probability of the outputs given the inputs. As of PyTorch 0.4 and Pyro 0.2, Pyro uses PyTorch's [distribution library](http://pytorch.org/docs/master/distributions.html). You can also create custom distributions using [transforms](http://pytorch.org/docs/master/distributions.html#module-torch.distributions.transforms). Using primitive stochastic functions is easy. For example, to draw a sample `x` from the unit normal distribution $\mathcal{N}(0,1)$ we do the following:
import torch import pyro import pyro.distributions as dist loc = 0. # mean zero scale = 1. # unit variance normal = dist.Normal(loc, scale) # create a normal distribution object x = normal.sample() # draw a sample from N(0,1) print("sample", x) print("log prob", normal.log_prob(x)) # score the sample from N(0,1)
_____no_output_____
MIT
tutorial/source/intro_part_i.ipynb
neerajprad/pyro
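As a sanity check on `log_prob`, the unit normal's log density can be written out by hand, $\log p(x) = -x^2/2 - \log\sqrt{2\pi}$; a short sketch comparing it against Pyro's value:

```python
import math

x = normal.sample()
manual = -0.5 * x**2 - math.log(math.sqrt(2 * math.pi))
# The two values agree up to floating-point precision.
print(normal.log_prob(x), manual)
```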
Here, `dist.Normal` is a callable instance of the `Distribution` class that takes parameters and provides sample and score methods. Note that the parameters passed to `dist.Normal` are (or are converted to) `torch.Tensor`s. This is necessary because we want to make use of PyTorch's fast tensor math and autograd capabilities during inference. The `pyro.sample` Primitive. One of the core language primitives in Pyro is the `pyro.sample` statement. Using `pyro.sample` is as simple as calling a primitive stochastic function, with one important difference:
x = pyro.sample("my_sample", dist.Normal(loc, scale)) print(x)
_____no_output_____
MIT
tutorial/source/intro_part_i.ipynb
neerajprad/pyro
Just like a direct call to `dist.Normal().sample()`, this returns a sample from the unit normal distribution. The crucial difference is that this sample is _named_. Pyro's backend uses these names to uniquely identify sample statements and _change their behavior at runtime_ depending on how the enclosing stochastic function is being used. As we will see, this is how Pyro can implement the various manipulations that underlie inference algorithms. A Simple Model. Now that we've introduced `pyro.sample` and `pyro.distributions` we can write a simple model. Since we're ultimately interested in probabilistic programming because we want to model things in the real world, let's choose something concrete. Let's suppose we have a bunch of data with daily mean temperatures and cloud cover. We want to reason about how temperature interacts with whether it was sunny or cloudy. A simple stochastic function that does that is given by:
def weather(): cloudy = pyro.sample('cloudy', dist.Bernoulli(0.3)) cloudy = 'cloudy' if cloudy.item() == 1.0 else 'sunny' mean_temp = {'cloudy': 55.0, 'sunny': 75.0}[cloudy] scale_temp = {'cloudy': 10.0, 'sunny': 15.0}[cloudy] temp = pyro.sample('temp', dist.Normal(mean_temp, scale_temp)) return cloudy, temp.item() for _ in range(3): print(weather())
_____no_output_____
MIT
tutorial/source/intro_part_i.ipynb
neerajprad/pyro
Let's go through this line-by-line. First, in lines 2-3 we use `pyro.sample` to define a binary random variable 'cloudy', which is given by a draw from the Bernoulli distribution with a parameter of `0.3`. Since the Bernoulli distribution returns `0`s or `1`s, in line 4 we convert the value `cloudy` to a string so that return values of `weather` are easier to parse. So according to this model, 30% of the time it's cloudy and 70% of the time it's sunny. In lines 5-6 we define the parameters we're going to use to sample the temperature in lines 7-9. These parameters depend on the particular value of `cloudy` we sampled in line 2. For example, the mean temperature is 55 degrees (Fahrenheit) on cloudy days and 75 degrees on sunny days. Finally we return the two values `cloudy` and `temp` in line 10. Procedurally, `weather()` is a non-deterministic Python callable that returns two random samples. Because the randomness is invoked with `pyro.sample`, however, it is much more than that. In particular `weather()` specifies a joint probability distribution over two named random variables: `cloudy` and `temp`. As such, it defines a probabilistic model that we can reason about using the techniques of probability theory. For example we might ask: if I observe a temperature of 70 degrees, how likely is it to be cloudy? How to formulate and answer these kinds of questions will be the subject of the next tutorial. We've now seen how to define a simple model. Building off of it is easy. For example:
def ice_cream_sales(): cloudy, temp = weather() expected_sales = 200. if cloudy == 'sunny' and temp > 80.0 else 50. ice_cream = pyro.sample('ice_cream', dist.Normal(expected_sales, 10.0)) return ice_cream
_____no_output_____
MIT
tutorial/source/intro_part_i.ipynb
neerajprad/pyro
This kind of modularity, familiar to any programmer, is obviously very powerful. But is it powerful enough to encompass all the different kinds of models we'd like to express? Universality: Stochastic Recursion, Higher-order Stochastic Functions, and Random Control Flow. Because Pyro is embedded in Python, stochastic functions can contain arbitrarily complex deterministic Python, and randomness can freely affect control flow. For example, we can construct recursive functions that terminate their recursion nondeterministically, provided we take care to pass `pyro.sample` unique sample names whenever it's called. For example we can define a geometric distribution like so:
def geometric(p, t=None): if t is None: t = 0 x = pyro.sample("x_{}".format(t), dist.Bernoulli(p)) if x.item() == 0: return x else: return x + geometric(p, t + 1) print(geometric(0.5))
_____no_output_____
MIT
tutorial/source/intro_part_i.ipynb
neerajprad/pyro
Note that the names `x_0`, `x_1`, etc., in `geometric()` are generated dynamically and that different executions can have different numbers of named random variables. We are also free to define stochastic functions that accept as input or produce as output other stochastic functions:
def normal_product(loc, scale): z1 = pyro.sample("z1", dist.Normal(loc, scale)) z2 = pyro.sample("z2", dist.Normal(loc, scale)) y = z1 * z2 return y def make_normal_normal(): mu_latent = pyro.sample("mu_latent", dist.Normal(0, 1)) fn = lambda scale: normal_product(mu_latent, scale) return fn print(make_normal_normal()(1.))
_____no_output_____
MIT
tutorial/source/intro_part_i.ipynb
neerajprad/pyro
Preamble
from flair.datasets import ColumnCorpus from flair.embeddings import FlairEmbeddings from flair.embeddings import TokenEmbeddings from flair.embeddings import StackedEmbeddings from flair.models import SequenceTagger from flair.trainers import ModelTrainer from typing import List import numpy as np import os import torch import random PATH_SPOTTING_DATASET = "../../data/concept-spotting/lists/" PATH_FLAIR_FOLDER = "../../data/flair-models/lists/"
_____no_output_____
MIT
notebooks/01-concept-spotting/06-lists-training.ipynb
fschlatt/CIKM-20
List-Spotter: Training
def set_seed(seed): # For reproducibility # (https://pytorch.org/docs/stable/notes/randomness.html) np.random.seed(seed) random.seed(seed) torch.manual_seed(seed) torch.cuda.manual_seed(seed) os.environ['PYTHONHASHSEED'] = str(seed) torch.backends.cudnn.deterministic = True torch.backends.cudnn.benchmark = False columns = {0: 'text', 1: 'pos', 2: 'chunk_BIO'} tag_type = "chunk_BIO" corpus = ColumnCorpus(PATH_SPOTTING_DATASET, columns) tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) print(corpus) set_seed(42) embedding_types: List[TokenEmbeddings] = [ FlairEmbeddings('news-forward'), FlairEmbeddings('news-backward')] embeddings: StackedEmbeddings = StackedEmbeddings(embeddings=embedding_types) set_seed(42) tagger: SequenceTagger = SequenceTagger(hidden_size=128, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type, use_crf=True, dropout=0.25, rnn_layers=2) set_seed(42) trainer: ModelTrainer = ModelTrainer(tagger, corpus) set_seed(42) result = trainer.train(PATH_FLAIR_FOLDER, learning_rate=0.3, mini_batch_size=16, max_epochs=20, shuffle=True, num_workers=0) assert result['test_score'] == 0.9154
_____no_output_____
MIT
notebooks/01-concept-spotting/06-lists-training.ipynb
fschlatt/CIKM-20
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png)[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/healthcare/NER_SIGN_SYMP.ipynb) **Detect signs and symptoms** To run this yourself, you will need to upload your license keys to the notebook. Just Run The Cell Below in order to do that. Also You can open the file explorer on the left side of the screen and upload `license_keys.json` to the folder that opens.Otherwise, you can look at the example outputs at the bottom of the notebook. 1. Colab Setup Import license keys
import json import os from google.colab import files license_keys = files.upload() with open(list(license_keys.keys())[0]) as f: license_keys = json.load(f) # Defining license key-value pairs as local variables locals().update(license_keys) # Adding license key-value pairs to environment variables os.environ.update(license_keys)
_____no_output_____
Apache-2.0
tutorials/streamlit_notebooks/healthcare/NER_SIGN_SYMP.ipynb
fcivardi/spark-nlp-workshop
Install dependencies
# Installing pyspark and spark-nlp ! pip install --upgrade -q pyspark==3.1.2 spark-nlp==$PUBLIC_VERSION # Installing Spark NLP Healthcare ! pip install --upgrade -q spark-nlp-jsl==$JSL_VERSION --extra-index-url https://pypi.johnsnowlabs.com/$SECRET # Installing Spark NLP Display Library for visualization ! pip install -q spark-nlp-display
_____no_output_____
Apache-2.0
tutorials/streamlit_notebooks/healthcare/NER_SIGN_SYMP.ipynb
fcivardi/spark-nlp-workshop
Import dependencies into Python and start the Spark session
import pandas as pd from pyspark.ml import Pipeline from pyspark.sql import SparkSession import pyspark.sql.functions as F import sparknlp from sparknlp.annotator import * from sparknlp_jsl.annotator import * from sparknlp.base import * import sparknlp_jsl spark = sparknlp_jsl.start(license_keys['SECRET']) # manually start session # params = {"spark.driver.memory" : "16G", # "spark.kryoserializer.buffer.max" : "2000M", # "spark.driver.maxResultSize" : "2000M"} # spark = sparknlp_jsl.start(license_keys['SECRET'],params=params)
_____no_output_____
Apache-2.0
tutorials/streamlit_notebooks/healthcare/NER_SIGN_SYMP.ipynb
fcivardi/spark-nlp-workshop
2. Select the NER model and construct the pipeline. Select the NER model - Sign/symptom models: **ner_clinical, ner_jsl**. For more details: https://github.com/JohnSnowLabs/spark-nlp-models#pretrained-models---spark-nlp-for-healthcare
# You can change this to the model you want to use and re-run cells below. # Sign / symptom models: ner_clinical, ner_jsl # All these models use the same clinical embeddings. MODEL_NAME = "ner_clinical"
_____no_output_____
Apache-2.0
tutorials/streamlit_notebooks/healthcare/NER_SIGN_SYMP.ipynb
fcivardi/spark-nlp-workshop
Create the pipeline
document_assembler = DocumentAssembler() \ .setInputCol('text')\ .setOutputCol('document') sentence_detector = SentenceDetector() \ .setInputCols(['document'])\ .setOutputCol('sentence') tokenizer = Tokenizer()\ .setInputCols(['sentence']) \ .setOutputCol('token') word_embeddings = WordEmbeddingsModel.pretrained('embeddings_clinical', 'en', 'clinical/models') \ .setInputCols(['sentence', 'token']) \ .setOutputCol('embeddings') clinical_ner = MedicalNerModel.pretrained(MODEL_NAME, "en", "clinical/models") \ .setInputCols(["sentence", "token", "embeddings"])\ .setOutputCol("ner") ner_converter = NerConverter()\ .setInputCols(['sentence', 'token', 'ner']) \ .setOutputCol('ner_chunk') nlp_pipeline = Pipeline(stages=[ document_assembler, sentence_detector, tokenizer, word_embeddings, clinical_ner, ner_converter])
embeddings_clinical download started this may take some time. Approximate size to download 1.6 GB [OK!] ner_clinical download started this may take some time. Approximate size to download 13.7 MB [OK!]
Apache-2.0
tutorials/streamlit_notebooks/healthcare/NER_SIGN_SYMP.ipynb
fcivardi/spark-nlp-workshop
3. Create example inputs
# Enter examples as strings in this array input_list = [ """The patient is a 21-day-old Caucasian male here for 2 days of congestion - mom has been suctioning yellow discharge from the patient's nares, plus she has noticed some mild problems with his breathing while feeding (but negative for any perioral cyanosis or retractions). One day ago, mom also noticed a tactile temperature and gave the patient Tylenol. Baby also has had some decreased p.o. intake. His normal breast-feeding is down from 20 minutes q.2h. to 5 to 10 minutes secondary to his respiratory congestion. He sleeps well, but has been more tired and has been fussy over the past 2 days. The parents noticed no improvement with albuterol treatments given in the ER. His urine output has also decreased; normally he has 8 to 10 wet and 5 dirty diapers per 24 hours, now he has down to 4 wet diapers per 24 hours. Mom denies any diarrhea. His bowel movements are yellow colored and soft in nature.""" ]
_____no_output_____
Apache-2.0
tutorials/streamlit_notebooks/healthcare/NER_SIGN_SYMP.ipynb
fcivardi/spark-nlp-workshop
4. Use the pipeline to create outputs
empty_df = spark.createDataFrame([['']]).toDF('text') pipeline_model = nlp_pipeline.fit(empty_df) df = spark.createDataFrame(pd.DataFrame({'text': input_list})) result = pipeline_model.transform(df)
_____no_output_____
Apache-2.0
tutorials/streamlit_notebooks/healthcare/NER_SIGN_SYMP.ipynb
fcivardi/spark-nlp-workshop
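Before visualizing, the detected chunks can be inspected as a plain table. A minimal sketch (field names follow the usual Spark NLP annotation schema, which may vary slightly across versions):

```python
# One row per detected entity: the chunk text and its NER label.
result.select(F.explode('ner_chunk').alias('chunk')) \
      .select(F.col('chunk.result').alias('chunk_text'),
              F.col('chunk.metadata')['entity'].alias('entity')) \
      .show(truncate=False)
```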
5. Visualize results
from sparknlp_display import NerVisualizer NerVisualizer().display( result = result.collect()[0], label_col = 'ner_chunk', document_col = 'document' )
_____no_output_____
Apache-2.0
tutorials/streamlit_notebooks/healthcare/NER_SIGN_SYMP.ipynb
fcivardi/spark-nlp-workshop
Purpose of this notebook. This notebook estimates the excitation (as photoisomerization rate at the photoreceptor level) that is expected to be caused by the images recorded with the UV/G mouse camera.
import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns # Global dictionary d = dict() # Global constants TWILIGHT = 0 DAYLIGHT = 1 UV_S = 0 UV_M = 1 G_S = 2 G_M = 3 CONE = 0 ROD = 1 CHAN_UV = 0 CHAN_G = 1
_____no_output_____
MIT
photoisomerization/cam_images_2_photoisomerization_v0_2.ipynb
yongrong-qiu/mouse-scene-cam
Approach. When calibrating the mouse camera, we used LEDs of defined wavelength and brightness to map normalized intensity (camera pixel values, 0..1) to power meter readings (see STAR Methods in the manuscript). To relate this power to the photon flux at the cornea and finally the photoisomerisation rate at the photoreceptor level, we need to consider: * How much light we lose in the camera, that is, we need the optical paths' **attenuation factors** from the fisheye lens to the camera chip for the UV ($ \mu_{lens2cam,UV} $) and green ($ \mu_{lens2cam,G} $) channels; * The wavelength-specific **transmission of the mouse optical apparatus** for UV ($T_{UV}$) and green ($T_G$) light; * The **ratio between pupil size and retinal area** ($R_{pup2ret}$) to estimate how much light reaches the retina, given that the pupil adapts to the overall brightness of the scene. Our approach consists of two main steps: 1. We first map electrical power ($P_{el}$, in $[W]$) to photon flux ($P_{Phi}$, in $[photons/s]$), $$ P_{Phi}(\lambda) = \frac{P_{el}(\lambda) \cdot a \cdot \lambda \cdot 10^{-9}}{c \cdot h} \cdot \frac{1}{\mu_{lens2cam}(\lambda)}. $$ For $\lambda$, we use the peak wavelength of the photoreceptor's spectral sensitivity curve ($\lambda_{S}=360 \: nm$, $\lambda_{M}=510 \: nm$). The rest are constants ($a=6.242 \cdot 10^{18} \: eV/J$, $c=299,792,458 \: m/s$, and $h=4.135667 \cdot 10^{-15} \: eV \cdot s$). 2. Next, we convert the photon flux to the photoisomerisation rate ($R_{Iso}$, in $[P^*/cone/s]$), $$ R_{Iso}(\lambda) = \frac{P_{Phi}(\lambda)}{A_{Stim}} \cdot A_{Collect} \cdot S_{Act} \cdot T(\lambda) \cdot R_{pup2ret} $$ where $A_{Stim}=10^8 \: \mu m^2$ is the area that is illuminated on the power meter sensor, and $A_{Collect}=0.2 \: \mu m^2$ is the photoreceptor's outer segment (OS) light collection area (see below). With $S_{Act}$ we take into account that the bandpass filters in the camera pathways do not perfectly match these sensitivity spectra (see below). A quick numeric sanity check of step 1 follows after the next cell. > Note: The OS light collection area ($[\mu m^2]$) is an experimentally determined value; e.g., for wt mouse cones that are fully dark-adapted, a value of 0.2 is assumed; for mouse rods, a value of 0.5 is considered realistic (for details, see [Nikonov et al., 2006](http://www.ncbi.nlm.nih.gov/pubmed/16567464)).
d.update({"ac_um2": [0.2, 0.5], "peak_S": 360, "peak_M": 510, "A_stim_um2": 1e8})
_____no_output_____
MIT
photoisomerization/cam_images_2_photoisomerization_v0_2.ipynb
yongrong-qiu/mouse-scene-cam
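To make step 1 concrete, a back-of-the-envelope check (ignoring the attenuation factor for the moment): 1 µW of 360 nm light corresponds to roughly $1.8\cdot10^{12}$ photons/s:

```python
h_eV = 4.135667e-15   # Planck's constant [eV*s]
c = 299792458         # speed of light [m/s]
a = 6.242e18          # [eV] per [J]

P_el_W = 1e-6         # 1 uW of optical power
lam_m = 360e-9        # at 360 nm (S-opsin peak)
P_phi = P_el_W * a * lam_m / (c * h_eV)
print(f"{P_phi:.2e} photons/s")  # ~1.81e+12
```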
Attenuation factors of the two camera pathways. We first calculated the attenuation factor from the fisheye lens to the focal plane of the camera chip. To this end, we used a spectrometer (STS-UV, Ocean Optics) with an optical fiber (P50-1-UV-VIS) to first measure the spectrum of the sky directly, and then at the camera focal planes of the UV and the green pathways. These readouts, which are spectra, are referred to as $P_{direct}$, $P_{UV}$ and $P_{G}$.
#%%capture #!wget -O sky_spectrum.npy https://www.dropbox.com/s/p8uk4k6losfu309/sky_spectrum.npy?dl=0 #spect = np.load('sky_spectrum.npy', allow_pickle=True).item() # Load spectra # The exposure times were 4 s for `direct` and `g`, and 30 s for `uv` spect = np.load('data/sky_spectrum.npy', allow_pickle=True).item() fig,axes=plt.subplots(nrows=1,ncols=1,figsize=(8,4)) axes.plot(spect["wavelength"], spect["direct"], color='k', label='P_Direct') axes.plot(spect["wavelength"], spect["g"], color='g', label='P_G') axes.plot(spect["wavelength"], spect["uv"], color='purple',label='P_UV') axes.set_xlabel ("Wavelength [nm]") axes.set_ylabel("Counts") axes.grid() axes.legend(loc='upper right', bbox_to_anchor=(1.6, 1.0))
_____no_output_____
MIT
photoisomerization/cam_images_2_photoisomerization_v0_2.ipynb
yongrong-qiu/mouse-scene-cam
Since the readout on the objective side (fisheye lens) is related to both visual angle and area, while the readout on the imaging side is only related to area, we define:$$\begin{align}P_{direct} &= P_{total} \cdot \frac{A_{fiber}}{A_{lens}} \cdot \frac{\theta_{fiber}}{\theta_{lens}}\\P_{UV} &= P_{total} \cdot \frac{A_{fiber}}{A_{chip}} \cdot \mu_{lens2cam,UV}\\P_{G} &= P_{total} \cdot \frac{A_{fiber}}{A_{chip}} \cdot \mu_{lens2cam,G}\end{align}$$ where $P_{total}$ denotes the total power of the incident light at the fisheye lens, $A_{fiber}$ the area of the fibre, $A_{lens}$ the area of the fisheye lens, $A_{chip}$ the imaging area of the camera chip, and $\theta_{fiber}$ and $\theta_{lens}$ the acceptance angles of fiber and fisheye lens, respectively. After rearranging the equations, we get: $$\begin{align}\mu_{lens2cam,UV} &= \frac{P_{UV}}{P_{direct}} \cdot \frac{A_{chip}}{A_{lens}} \cdot \frac{\theta_{fiber}}{\theta_{lens}}\\\mu_{lens2cam,G} &= \frac{P_{G}}{P_{direct}} \cdot \frac{A_{chip}}{A_{lens}} \cdot \frac{\theta_{fiber}}{\theta_{lens}}\end{align}$$ By calculating the ratio between the area under curve (AUC) of the spectrum for the respective chromatic channel (within in the spectral range of the respective bandpass filter) and the AUC of the spectrum for the direct measurement, we get:$$\frac{P_{UV}}{P_{direct}} = \frac{1}{21}, \frac{P_{G}}{P_{direct}} = \frac{1}{2}$$ Practically, we also take the different exposure times (4 s for $P_{direct}$ and $P_{G}$, and 30 s for $P_{UV}$) into account.
direct_exp_s = 4 UV_exp_s = 30 G_exp_s = 4 P_UV2direct = 1/(np.trapz(spect["direct"][350-300:420-300])/np.trapz(spect["uv"][350-300:420-300]) *UV_exp_s/direct_exp_s) P_G2direct = 1/(np.trapz(spect["direct"][470-300:550-300])/np.trapz(spect["g"][470-300:550-300]) *G_exp_s/direct_exp_s) print("P_UV/P_direct = {0:.3f}".format(P_UV2direct)) print("P_G/P_direct = {0:.3f}".format(P_G2direct))
P_UV/P_direct = 0.047 P_G/P_direct = 0.533
MIT
photoisomerization/cam_images_2_photoisomerization_v0_2.ipynb
yongrong-qiu/mouse-scene-cam
The diameters of the camera chip's imaging area and the fisheye lens were $2,185 \: \mu m$ and $15,000 \: \mu m$, respectively. The acceptance angles of the optical fiber and the fisheye lens were $\theta_{fibre}=24.8^{\circ}$ and $\theta_{lens}=180^{\circ}$, respectively.
A_cam = np.pi*(2185/2)**2 A_lens = np.pi*(15000/2)**2 theta_fiber = 24.8 theta_lens = 180
_____no_output_____
MIT
photoisomerization/cam_images_2_photoisomerization_v0_2.ipynb
yongrong-qiu/mouse-scene-cam
Now we can get the attenuation factors $ \mu_{lens2cam,UV} $ and $ \mu_{lens2cam,G} $, covering the optical path from the fisheye lens to the camera chip:
mu_lens2cam = [0,0] mu_lens2cam[CHAN_UV] = P_UV2direct *A_cam /A_lens *theta_fiber /theta_lens mu_lens2cam[CHAN_G] = P_G2direct *A_cam /A_lens * theta_fiber /theta_lens d.update({"mu_lens2cam": mu_lens2cam}) print("mu_lens2cam for UV,G = {0:.3e}, {1:.3e}".format(mu_lens2cam[CHAN_UV], mu_lens2cam[CHAN_G]))
mu_lens2cam for UV,G = 1.365e-04, 1.557e-03
MIT
photoisomerization/cam_images_2_photoisomerization_v0_2.ipynb
yongrong-qiu/mouse-scene-cam
Attenuation by mouse eye optics. Another factor we need to consider is the wavelength-dependent attenuation by the mouse eye optics. The relative transmission for UV ($T_{Rel}(UV)$, at $\lambda=360 \: nm$) and green ($T_{Rel}(G)$, at $\lambda=510 \: nm$) is approx. 35% and 55%, respectively ([Henriksson et al., 2010](https://pubmed.ncbi.nlm.nih.gov/19925789/)).
d.update({"T_rel": [0.35, 0.55]})
_____no_output_____
MIT
photoisomerization/cam_images_2_photoisomerization_v0_2.ipynb
yongrong-qiu/mouse-scene-cam
In addition, the light reaching the retina depends on the ratio ($R_{pup2ret}$) between pupil area and retinal area (both in $[mm^2]$) ([Rhim et al., 2020](https://www.biorxiv.org/content/10.1101/2020.11.03.366682v1)). Here, we assume pupil areas of $0.1 \: mm^2$ (maximally constricted) at daytime and $0.22 \: mm^2$ at twilight (approx. 10% of full pupil area; see [Pennesi et al., 1998](https://pubmed.ncbi.nlm.nih.gov/9761294/)). To calculate the retinal area of the mouse, we assume an eye axial length of approx. $3 \: mm$ and that the retina covers about 60% of the sphere's surface ([Schmucker & Schaeffel, 2004](https://www.sciencedirect.com/science/article/pii/S0042698904001257#FIG4)).
eye_axial_len_mm = 3 ret_area_mm2 = 0.6 *(eye_axial_len_mm/2)**2 *np.pi *4 pup_area_mm2 = [0.22, 0.1] R_pup2ret= [x /ret_area_mm2 for x in pup_area_mm2] d.update({"R_pup2ret": R_pup2ret, "pup_area_mm2": pup_area_mm2, "ret_area_mm2": ret_area_mm2}) print("mouse retinal area [mm²] = {0:.1f}".format(ret_area_mm2)) print("pupil area [mm²] = twilight: {0:.1f} \tdaylight: {1:.1f}".format(pup_area_mm2[TWILIGHT], pup_area_mm2[DAYLIGHT])) print("ratio of pupil area to retinal area = twilight: {0:.3f} \tdaylight: {1:.3f}".format(R_pup2ret[TWILIGHT],R_pup2ret[DAYLIGHT]))
mouse retinal area [mm²] = 17.0 pupil area [mm²] = twilight: 0.2 daylight: 0.1 ratio of pupil area to retinal area = twilight: 0.013 daylight: 0.006
MIT
photoisomerization/cam_images_2_photoisomerization_v0_2.ipynb
yongrong-qiu/mouse-scene-cam
Cross-activation of S- and M-opsins by the UV and green camera channels, yielding $S_{Act}(S,UV)$, $S_{Act}(S,G)$, $S_{Act}(M,UV)$, and $S_{Act}(M,G)$.
#%%capture #!wget -O opsin_filter_spectrum.npy https://www.dropbox.com/s/doh1jjqukdcpvpy/opsin_filter_spectrum.npy?dl=0 #spect = np.load('opsin_filter_spectrum.npy', allow_pickle=True).item() # Load opsin and filter spectra spect = np.load('data/opsin_filter_spectrum.npy', allow_pickle=True).item() wavelength = spect["wavelength"] mouseSOpsin = spect["mouseSOpsin"] mouseMOpsin = spect["mouseMOpsin"] filter_uv = spect["filter_uv"] filter_g = spect["filter_g"] filter_uv_scone = np.minimum(filter_uv,mouseSOpsin) filter_uv_mcone = np.minimum(filter_uv,mouseMOpsin) filter_g_scone = np.minimum(filter_g, mouseSOpsin) filter_g_mcone = np.minimum(filter_g, mouseMOpsin) S_act = [0]*4 S_act[UV_S] = np.trapz(filter_uv_scone)/np.trapz(filter_uv) S_act[UV_M] = np.trapz(filter_uv_mcone)/np.trapz(filter_g) S_act[G_S] = np.trapz(filter_g_scone)/np.trapz(filter_uv) S_act[G_M] = np.trapz(filter_g_mcone)/np.trapz(filter_g) d.update({"S_act": S_act}) fig,axes=plt.subplots(nrows=1,ncols=1,figsize=(8,4)) axes.plot(wavelength,mouseMOpsin,color='g', linestyle='-',label='M-cone') axes.plot(wavelength,mouseSOpsin,color='purple',linestyle='-',label='S-cone') axes.plot(wavelength,filter_g, color='g', linestyle='--',label='Filter-G') axes.plot(wavelength,filter_uv,color='purple', linestyle='--',label='Filter-UV') axes.fill_between(wavelength,y1=filter_g_mcone, y2=0,color='g', alpha=0.5) axes.fill_between(wavelength,y1=filter_g_scone, y2=0,color='g', alpha=0.5) axes.fill_between(wavelength,y1=filter_uv_mcone,y2=0,color='purple',alpha=0.5) axes.fill_between(wavelength,y1=filter_uv_scone,y2=0,color='purple',alpha=0.5) axes.set_xlabel ("Wavelength [nm]") axes.set_ylabel("Rel. sensitivity") axes.legend(loc='upper right', bbox_to_anchor=(1.4, 1.0)) print("S_act UV -> S = {0:.3f}".format(S_act[UV_S])) print(" UV -> M = {0:.3f}".format(S_act[UV_M])) print(" G -> S = {0:.3f}".format(S_act[G_S])) print(" G -> M = {0:.3f}".format(S_act[G_M]))
S_act UV -> S = 0.625 UV -> M = 0.118 G -> S = 0.000 G -> M = 0.858
MIT
photoisomerization/cam_images_2_photoisomerization_v0_2.ipynb
yongrong-qiu/mouse-scene-cam
Estimating photoisomerization rates. The following function converts normalized image intensities (0...1) to $P_{el}(\lambda)$ (in $[\mu W]$), $P_{Phi}(\lambda)$ (in $[photons /s]$), and $R_{Iso}(\lambda)$ (in $[P^*/cone/s]$).
def inten2Riso(intensities, pup_area_mm2, pr_type=CONE): """ Transfer the normalized image intensities (0...1) to power (unit: uW), photon flux (unit: photons/s) and photoisomerisation rate (P*/cone/s) Input: intensities : image intensities (0...1) for both channels as tuple pup_area_mm2 : pupil area in mm^2 Output: P_el : tuple (CHAN_UV, CHAN_G) P_Phi : tuple (CHAN_UV, CHAN_G) R_Iso : tuple (UV_S, UV_M, G_S, G_M) """ global d h = 4.135667e-15 # Planck's constant [eV*s] c = 299792458 # speed of light [m/s] eV_per_J = 6.242e+18 # [eV] per [J] # Convert normalized image intensities (0...1) to power ([uW]) # (Constants from camera calibration, see STAR Methods for details) P_el = [0]*2 P_el[CHAN_UV] = intensities[CHAN_UV] *0.755 +0.0049 P_el[CHAN_G] = intensities[CHAN_G] *6.550 +0.0097 # Convert electrical power ([uW]) to photon flux ([photons/s]) P_Phi = [0]*2 P_Phi[CHAN_UV] = (P_el[CHAN_UV] *1e-6) *eV_per_J *(d["peak_S"]*1e-9)/(c*h) *(1/d["mu_lens2cam"][CHAN_UV]) P_Phi[CHAN_G] = (P_el[CHAN_G] *1e-6) *eV_per_J *(d["peak_M"]*1e-9)/(c*h) *(1/d["mu_lens2cam"][CHAN_G]) # Convert photon flux ([photons/s]) to photoisomerisation rate ([P*/cone/s]) R_pup2ret = pup_area_mm2 /d["ret_area_mm2"] R_Iso = [0]*4 for j in [UV_S, UV_M, G_S, G_M]: chan = CHAN_UV if j < G_S else CHAN_G R_Iso[j] = P_Phi[chan] /d["A_stim_um2"] *d["ac_um2"][pr_type]* d["S_act"][j] *d["T_rel"][chan] *R_pup2ret return P_el, P_Phi, R_Iso
_____no_output_____
MIT
photoisomerization/cam_images_2_photoisomerization_v0_2.ipynb
yongrong-qiu/mouse-scene-cam
Example `[[0.18, 0.11], [0.06, 0.14]]`, with the following format `[upper[UV,G],lower[UV,G]]`
intensities=[[0.18, 0.11], [0.06, 0.14]] for j, i in enumerate(intensities): l = inten2Riso(i, 0.2) print("{0:2d} (UV, G) P_el = {1:.3f}, {2:.3f}\t P_Phi = {3:.1e}, {4:.1e} ".format(j, l[0][0], l[0][1], l[1][0], l[1][1])) print(" UV->S = {0:.1e} \t UV->M = {1:.1e} \t G->S = {2:.1e} \t G->M = {3:.1e}".format(l[2][0], l[2][1], l[2][2], l[2][3]))
0 (UV, G) P_el = 0.141, 0.730 P_Phi = 1.9e+15, 1.2e+15 UV->S = 9.6e+03 UV->M = 1.8e+03 G->S = 7.0e-01 G->M = 1.3e+04 1 (UV, G) P_el = 0.050, 0.927 P_Phi = 6.7e+14, 1.5e+15 UV->S = 3.4e+03 UV->M = 6.5e+02 G->S = 8.8e-01 G->M = 1.7e+04
MIT
photoisomerization/cam_images_2_photoisomerization_v0_2.ipynb
yongrong-qiu/mouse-scene-cam
Generate Supplementary Table 1
col_names = ['Mean intensity<br>group', 'Visual<br>field', 'Camera<br>channel', 'Norm.<br>intensity', 'P_el<br>in [µW]',\ 'P_Phi<br>in [photons/s]', 'Pupil area<br>in [mm2]',\ 'R_Iso<br>in [P*/cone/s], S', 'R_Iso<br>in [P*/cone/s], M', 'R_Iso<br>in [P*/rod/s], rod'] data_df = pd.DataFrame(columns = col_names) group = ['Low', 'Medium', 'High', 'Twilight'] group = [item for item in group for i in range(4)] data_df['Mean intensity<br>group'] = group visual_field=['Upper', 'Upper', 'Lower', 'Lower']*4 data_df['Visual<br>field'] = visual_field camera_channel=['UV', 'G']*8 data_df['Camera<br>channel'] = camera_channel norm_intensity = [0.18, 0.11, 0.06, 0.14, 0.28, 0.16, 0.09, 0.21, 0.50, 0.34, 0.22, 0.46, 0.05, 0.06, 0.02, 0.05] data_df['Norm.<br>intensity'] = norm_intensity # Pupil area data_df['Pupil area<br>in [mm2]'] = np.where(data_df['Mean intensity<br>group'] == 'Twilight', \ d['pup_area_mm2'][TWILIGHT], d['pup_area_mm2'][DAYLIGHT]) # Photoisomerisations for ii in range(int(len(data_df.index)/2)): tempUV, tempG = data_df.iloc[ii*2, 3], data_df.iloc[ii*2+1, 3] templ = inten2Riso([tempUV, tempG], data_df.iloc[ii*2, 6]) data_df.iloc[ii*2, 4], data_df.iloc[ii*2+1, 4] = templ[0][0], templ[0][1] data_df.iloc[ii*2, 5], data_df.iloc[ii*2+1, 5] = templ[1][0], templ[1][1] data_df.iloc[ii*2,7], data_df.iloc[ii*2,8], data_df.iloc[ii*2+1,7], data_df.iloc[ii*2+1,8] =\ templ[2][0], templ[2][1], templ[2][2], templ[2][3] templ = inten2Riso([tempUV,tempG], data_df.iloc[ii*2, 6], pr_type=ROD) data_df.iloc[ii*2,9] = templ[2][1] data_df.iloc[ii*2+1,9] = templ[2][3] # Show table ''' # Set colormap equal to seaborns light green color palette cmG = sns.light_palette("green", n_colors=50, as_cmap=True, reverse=False) cmUV = sns.light_palette("purple", n_colors=50, as_cmap=True, reverse=False) # Set CSS properties for th elements in dataframe th_props = [ ('font-size', '14px'), ('text-align', 'center'), ('font-weight', 'bold'), ('color', '#6d6d6d'), ('background-color', '#f7f7f9') ] # Set CSS properties for td elements in dataframe td_props = [ ('font-size', '14px') ] # Set table styles styles = [ dict(selector="th", props=th_props), dict(selector="td", props=td_props) ] (data_df.style .background_gradient(cmap=cmUV, subset=['R_Iso<br>in [P*/cone/s], S']) .background_gradient(cmap=cmG, subset=['R_Iso<br>in [P*/cone/s], M']) .background_gradient(cmap=cmG, subset=['R_Iso<br>in [P*/rod/s], rod']) #.highlight_max(subset=['R_Iso<br>in [P*/cone/s], S','R_Iso<br>in [P*/cone/s], M']) .format({"Norm.<br>intensity": "{:.2f}","P_el<br>in [µW]": "{:.3f}", "P_Phi<br>in [photons/s]": "{:.3e}", "Pupil area<br>in [mm2]": "{:.1f}", "R_Iso<br>in [P*/cone/s], S": "{:.0f}", "R_Iso<br>in [P*/cone/s], M": "{:.0f}", "R_Iso<br>in [P*/rod/s], rod": "{:.0f}"}) .set_table_styles(styles) .set_properties(**{'white-space': 'pre-wrap',})) ''' display(data_df)
_____no_output_____
MIT
photoisomerization/cam_images_2_photoisomerization_v0_2.ipynb
yongrong-qiu/mouse-scene-cam
[![Azure Notebooks](https://notebooks.azure.com/launch.png)](https://notebooks.azure.com/import/gh/Alireza-Akhavan/class.vision) The convolution operator. **Convolution** with a filter of 2x2 and a stride of 1 (stride = amount you move the window each time you slide). The following site is a great resource for getting familiar with kernels: http://setosa.io/ev/image-kernels/ What should we do if we want the output image to be the same size as the input image? (See the note after the cell below.) Convolutions and Blurring
import cv2 import numpy as np image = cv2.imread('images/input.jpg') cv2.imshow('Original Image', image) cv2.waitKey(0) # Creating our 3 x 3 kernel kernel_3x3 = np.ones((3, 3), np.float32) / 9 # We use cv2.filter2D to convolve the kernel with an image blurred = cv2.filter2D(image, -1, kernel_3x3) cv2.imshow('3x3 Kernel Blurring', blurred) cv2.waitKey(0) # Creating our 7 x 7 kernel kernel_7x7 = np.ones((7, 7), np.float32) / 49 blurred2 = cv2.filter2D(image, -1, kernel_7x7) cv2.imshow('7x7 Kernel Blurring', blurred2) cv2.waitKey(0) cv2.destroyAllWindows()
_____no_output_____
MIT
12-Convolutions and Blurring.ipynb
moh3n9595/class.vision
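Regarding the question above about keeping the output the same size as the input: `cv2.filter2D` already does this, because OpenCV pads the image borders before convolving (reflection padding by default, controllable via the `borderType` argument). A small sketch with a sharpening kernel on the same image:

```python
# Classic 3x3 sharpening kernel: boost the center pixel, subtract the
# 4-neighbours; the weights sum to 1 so overall brightness is preserved.
kernel_sharpen = np.array([[ 0, -1,  0],
                           [-1,  5, -1],
                           [ 0, -1,  0]], np.float32)
sharpened = cv2.filter2D(image, -1, kernel_sharpen)
print(image.shape == sharpened.shape)  # True: borders are padded internally
cv2.imshow('Sharpened', sharpened)
cv2.waitKey(0)
cv2.destroyAllWindows()
```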
Other commonly used blurring methods in OpenCV
import cv2 import numpy as np image = cv2.imread('images/input.jpg') cv2.imshow('original', image) cv2.waitKey(0) # Averaging is done by convolving the image with a normalized box filter. # This takes the pixels under the box and replaces the central element. # Box size needs to be odd and positive blur = cv2.blur(image, (3,3)) cv2.imshow('Averaging', blur) cv2.waitKey(0) # Instead of a box filter, use a Gaussian kernel Gaussian = cv2.GaussianBlur(image, (7,7), 0) cv2.imshow('Gaussian Blurring', Gaussian) cv2.waitKey(0) # Takes the median of all the pixels under the kernel area; the central # element is replaced with this median value median = cv2.medianBlur(image, 5) cv2.imshow('Median Blurring', median) cv2.waitKey(0) # Bilateral filtering is very effective in noise removal while keeping edges sharp bilateral = cv2.bilateralFilter(image, 9, 75, 75) cv2.imshow('Bilateral Blurring', bilateral) cv2.waitKey(0) cv2.destroyAllWindows()
_____no_output_____
MIT
12-Convolutions and Blurring.ipynb
moh3n9595/class.vision
Image De-noising - Non-Local Means Denoising
import numpy as np import cv2 image = cv2.imread('images/taj-rgb-noise.jpg') # Parameters after None: the filter strength 'h' (5-10 is a good range), # then hForColorComponents (set to the same value as h), # then templateWindowSize (7) and searchWindowSize (21) dst = cv2.fastNlMeansDenoisingColored(image, None, 6, 6, 7, 21) cv2.imshow('Fast Means Denoising', dst) cv2.imshow('original image', image) cv2.waitKey(0) cv2.destroyAllWindows()
_____no_output_____
MIT
12-Convolutions and Blurring.ipynb
moh3n9595/class.vision
Quick demonstration of R-notebooks using the r-oce library. The IOOS notebook [environment](https://github.com/ioos/notebooks_demos/blob/229dabe0e7dd207814b9cfb96e024d3138f19abf/environment.yml#L73-L76) installs the `R` language and the `Jupyter` kernel needed to run `R` notebooks. Conda can also install extra `R` packages, and those packages that are unavailable in `conda` can be installed directly from CRAN with `install.packages(pkg_name)`. You can start `jupyter` from any other environment and change the kernel later using the drop-down menu. (Check the `R` logo at the top right to ensure you are in the `R` jupyter kernel.) In this simple example we will use two libraries aimed at the oceanography community written in `R`: [`r-gsw`](https://cran.r-project.org/web/packages/gsw/index.html) and [`r-oce`](http://dankelley.github.io/oce/). (The original post for the examples below can be found on the author's blog: [http://dankelley.github.io/blog/](http://dankelley.github.io/blog/))
library(gsw) library(oce)
_____no_output_____
MIT
notebooks/2017-01-23-R-notebook.ipynb
kellydesent/notebooks_demos
Example 1: calculating the day length.
daylength <- function(t, lon=-38.5, lat=-13) { t <- as.numeric(t) alt <- function(t) sunAngle(t, longitude=lon, latitude=lat)$altitude rise <- uniroot(alt, lower=t-86400/2, upper=t)$root set <- uniroot(alt, lower=t, upper=t+86400/2)$root set - rise } t0 <- as.POSIXct("2017-01-01 12:00:00", tz="UTC") t <- seq.POSIXt(t0, by="1 day", length.out=1*356) dayLength <- unlist(lapply(t, daylength)) par(mfrow=c(2,1), mar=c(3, 3, 1, 1), mgp=c(2, 0.7, 0)) plot(t, dayLength/3600, type='o', pch=20, xlab="", ylab="Day length (hours)") grid() solstice <- as.POSIXct("2013-12-21", tz="UTC") plot(t[-1], diff(dayLength), type='o', pch=20, xlab="Day in 2017", ylab="Seconds gained per day") grid()
_____no_output_____
MIT
notebooks/2017-01-23-R-notebook.ipynb
kellydesent/notebooks_demos
Example 2: least-square fit.
x <- 1:100
y <- 1 + x/100 + sin(x/5)
yn <- y + rnorm(100, sd=0.1)
L <- 4
calc <- runlm(x, y, L=L, deriv=0)
plot(x, y, type='l', lwd=7, col='gray')
points(x, yn, pch=20, col='blue')
lines(x, calc, lwd=2, col='red')

data(ctd)
rho <- swRho(ctd)
z <- swZ(ctd)
drhodz <- runlm(z, rho, deriv = 1)
g <- 9.81
rho0 <- mean(rho, na.rm = TRUE)
N2 <- -g * drhodz / rho0
plot(ctd, which = "N2")
lines(N2, -z, col = "blue")
legend("bottomright", lwd = 2, col = c("brown", "blue"),
       legend = c("spline", "runlm"), bg = "white")
_____no_output_____
MIT
notebooks/2017-01-23-R-notebook.ipynb
kellydesent/notebooks_demos
Example 3: T-S diagram.
# Alter next three lines as desired; a and b are watermasses.
Sa <- 30
Ta <- 10
Sb <- 40

library(oce)

# Should not need to edit below this line
rho0 <- swRho(Sa, Ta, 0)
Tb <- uniroot(function(T) rho0 - swRho(Sb, T, 0), lower=0, upper=100)$root
Sc <- (Sa + Sb) / 2
Tc <- (Ta + Tb) / 2

## density change, and equiv temp change
drho <- swRho(Sc, Tc, 0) - rho0
dT <- drho / rho0 / swAlpha(Sc, Tc, 0)

plotTS(as.ctd(c(Sa, Sb, Sc), c(Ta, Tb, Tc), 0), pch=20, cex=2)
drawIsopycnals(levels=rho0, col="red", cex=0)
segments(Sa, Ta, Sb, Tb, col="blue")
text(Sb, Tb, "b", pos=4)
text(Sa, Ta, "a", pos=4)
text(Sc, Tc, "c", pos=4)
legend("topleft",
       legend=sprintf("Sa=%.1f, Ta=%.1f, Sb=%.1f -> Tb=%.1f, drho=%.2f, dT=%.2f",
                      Sa, Ta, Sb, Tb, drho, dT),
       bg="white")
_____no_output_____
MIT
notebooks/2017-01-23-R-notebook.ipynb
kellydesent/notebooks_demos
Example 4: find the halocline depth.
findHalocline <- function(ctd, deltap=5, plot=TRUE)
{
    S <- ctd[['salinity']]
    p <- ctd[['pressure']]
    n <- length(p)
    ## trim df to be no larger than n/2 and no smaller than 3.
    N <- deltap / median(diff(p))
    df <- min(n/2, max(3, n / N))
    spline <- smooth.spline(S~p, df=df)
    SS <- predict(spline, p)
    dSSdp <- predict(spline, p, deriv=1)
    H <- p[which.max(dSSdp$y)]
    if (plot) {
        par(mar=c(3, 3, 1, 1), mgp=c(2, 0.7, 0))
        plotProfile(ctd, xtype="salinity")
        lines(SS$y, SS$x, col='red')
        abline(h=H, col='blue')
        mtext(sprintf("%.2f m", H), side=4, at=H, cex=3/4, col='blue')
        mtext(sprintf(" deltap: %.0f, N: %.0f, df: %.0f", deltap, N, df),
              side=1, line=-1, adj=0, cex=3/4)
    }
    return(H)
}

# Plot two panels to see the influence of deltap.
par(mfrow=c(1, 2))
data(ctd)
findHalocline(ctd)
findHalocline(ctd, 1)
_____no_output_____
MIT
notebooks/2017-01-23-R-notebook.ipynb
kellydesent/notebooks_demos
Exploring a generic Markov model of chromatin accessibility

Last updated by: Jonathan Liu, 4/23/2021

Here, we will explore a generic Markov chain model of chromatin accessibility, where we model chromatin with a series of states and Markov transitions between them. Of interest is the onset time, the time it takes for the system to reach the final, transcriptionally competent state. We will show that the limit of equal, irreversible reactions sets a bound on noise performance, and that allowing for some reversibility weakens performance. We will then show that with a transient input, the model can achieve much better performance.
#Import necessary packages
%matplotlib inline
import numpy as np
from scipy.spatial import ConvexHull
import matplotlib.pyplot as plt
import scipy.special as sps
from IPython.core.debugger import set_trace
from numba import njit, prange
import numba as numba
from datetime import date
import time as Time
import seaborn as sns

#Set number of threads
numba.set_num_threads(4)

# PBoC plotting style (borrowed from Manuel's github)
def set_plotting_style():
    """
    Formats the plotting environment to that used in
    Physical Biology of the Cell, 2nd edition.
    To format all plots within a script, simply execute
    `set_plotting_style()` in the preamble.
    """
    rc = {'lines.linewidth': 1.25,
          'axes.labelsize': 12,
          'axes.titlesize': 12,
          'axes.facecolor': '#E3DCD0',
          'xtick.labelsize': 12,
          'ytick.labelsize': 12,
          'xtick.color': 'white',
          'xtick.direction': 'in',
          'xtick.top': True,
          'xtick.bottom': True,
          'xtick.labelcolor': 'black',
          'ytick.color': 'white',
          'ytick.direction': 'in',
          'ytick.left': True,
          'ytick.right': True,
          'ytick.labelcolor': 'black',
          'font.family': 'Arial',
          #'grid.linestyle': '-',  # Don't use a grid
          #'grid.linewidth': 0.5,
          #'grid.color': '#ffffff',
          'axes.grid': False,
          'legend.fontsize': 8}
    plt.rc('text.latex', preamble=r'\usepackage{sfmath}')
    #plt.rc('xtick.major', pad=5)
    #plt.rc('ytick.major', pad=5)
    plt.rc('mathtext', fontset='stixsans', sf='sansserif')
    plt.rc('figure', figsize=[3.5, 2.5])
    plt.rc('svg', fonttype='none')
    plt.rc('legend', title_fontsize='12', frameon=True,
           facecolor='#E3DCD0', framealpha=1)
    sns.set_style('darkgrid', rc=rc)
    sns.set_palette("colorblind", color_codes=True)
    sns.set_context('notebook', rc=rc)

# Some post-modification fixes that I can't seem to set in the rcParams
def StandardFigure(ax):
    ax.tick_params(labelcolor='black')
    ax.xaxis.label.set_color('black')
    ax.yaxis.label.set_color('black')

set_plotting_style()

#Function to generate a random transition matrix for a generic Markov chain
#with n states and an irreversible transition into the final state.
#Inputs:
#    n: number of states
#    k_min: minimum transition rate
#    k_max: maximum transition rate
#Pseudocode:
#    generate a 2D matrix based on n
#    loop over each index; if indices are adjacent, generate a value (except for the final state)
#    calculate the diagonal elements so that each column sums to zero
def MakeRandomTransitionMatrix(n, k_min, k_max):
    #Initialize the transition matrix
    Q = np.zeros((n,n))

    #Loop through transition indices (note that the final column is all zeros
    #since it's an absorbing state)
    for i in range(n):
        for j in range(n-1):
            #If the indices are exactly one apart (i.e. adjacent states),
            #then make a transition rate
            if np.abs(i-j) == 1:
                Q[i,j] = np.random.uniform(k_min, k_max)

    #Calculate the diagonal elements by taking the negative of the sum of the column
    for i in range(n-1):
        Q[i,i] = -np.sum(Q[:,i])

    return Q

#Function to generate a transition matrix for equal, irreversible transitions
#(i.e. Gamma distribution results). We assume the final state is absorbing.
#Inputs:
#    n: number of states
#    k: transition rate
def MakeGammaDistMatrix(n, k):
    #Initialize the transition matrix
    Q = np.zeros((n,n))

    #Loop through transition indices (note that the final column is all zeros
    #since it's an absorbing state)
    for i in range(n):
        for j in range(n-1):
            #All forward transitions are equal to k
            if i == j + 1:
                Q[i,j] = k

    #Calculate the diagonal elements by taking the negative of the sum of the column
    for i in range(n-1):
        Q[i,i] = -np.sum(Q[:,i])

    return Q

#Similar function for making a transition matrix with equal forward transitions
#of magnitude k and equal backward transitions of magnitude k * f
def MakeEqualBidirectionalMatrix(n, k, f):
    #Initialize the transition matrix
    Q = np.zeros((n,n))

    #Loop through transition indices (note that the final column is all zeros
    #since it's an absorbing state)
    for i in range(n):
        for j in range(n-1):
            #All forward transitions are equal to k
            if i == j + 1:
                Q[i,j] = k
            #All backward transitions are equal to k * f
            elif i == j - 1:
                Q[i,j] = k * f

    #Calculate the diagonal elements by taking the negative of the sum of the column
    for i in range(n-1):
        Q[i,i] = -np.sum(Q[:,i])

    return Q

#Simulation for calculating onset times for a generic Markov chain using the
#Gillespie algorithm, using a vectorized formulation for faster speed
def CalculatetOn_GenericMarkovChainGillespie(Q, n, N_cells):
    #Calculates the onset time for a linear Markov chain with forward and
    #backward rates. The model assumes n states, beginning in the 1st state.
    #Using the Gillespie algorithm and a Markov chain formalism, it simulates
    #N_cells realizations of the overall time it takes to reach the nth state.

    #For now, this only works with steady transition rates; the function below
    #extends this to account for time-varying rates.

    # Inputs:
    #    Q: transition rate matrix, where q_ji is the transition rate from
    #       state i to j for i =/= j and q_ii is the sum of transition rates
    #       out of state i
    #    n: number of states
    #    N_cells: number of cells to simulate
    # Outputs:
    #    t_on: time to reach the final state for each cell (length = N_cells)

    ## Setup variables
    t_on = np.zeros(N_cells)              #Time to transition to final ON state for each cell
    state = np.zeros(N_cells, dtype=int)  #State vector describing current state of each cell

    ## Run simulation
    # We will simulate waiting times for each transition for each cell and stop
    # once each cell has reached the final state

    #Set diagonal entries in the transition matrix to zero since self transitions don't count
    for i in range(n):
        Q[i,i] = 0

    #Construct the transition vector out of each cell's current state
    Q_states = np.zeros((N_cells,n))

    while np.sum(state) < (n-1)*N_cells:
        Q_states = np.transpose(Q[:,state])

        #Generate random numbers in [0,1] for each cell
        randNums = np.random.random(Q_states.shape)

        #Calculate waiting times for each entry in the transition matrix
        #Make sure to suppress divide by zero warning
        with np.errstate(divide='ignore'):
            tau = (1/Q_states) * np.log(1/randNums)

        #Find the shortest waiting time to figure out which state we transitioned to for each cell
        tau_min = np.amin(tau, axis=1)
        newState = np.argmin(tau, axis=1)

        #Replace infinities with zero, corresponding to having reached the final state
        newState[tau_min==np.inf] = n-1
        tau_min[tau_min==np.inf] = 0

        #Update the state and add the waiting time to the overall waiting time
        state = newState
        t_on += tau_min

    return t_on

#Simulation for calculating onset times for a generic Markov chain using the
#Gillespie algorithm, using a vectorized formulation for faster speed
def CalculatetOn_GenericMarkovChainGillespieTime(Q, n, t_d, N_cells):
    #Calculates the onset time for a linear Markov chain with forward and
    #backward rates. The transition rate can be time-varying, but is the same
    #global rate for each transition. The model assumes n states, beginning in
    #the 1st state. Using the Gillespie algorithm and a Markov chain formalism,
    #it simulates N_cells realizations of the overall time it takes to reach
    #the nth state.

    #This considers time-dependent transition rates parameterized by a
    #diffusion timescale t_d. The time-dependent rate has the form
    #r ~ (1 - exp(-t/t_d)). For now, we assume only the forwards rates have
    #the time-dependent profile, and that backwards rates are time-independent.

    # Inputs:
    #    Q: matrix of asymptotic transition rates, where q_ji is the transition
    #       rate from state i to j for i =/= j and q_ii is the sum of
    #       transition rates out of state i
    #    n: number of states
    #    t_d: diffusion timescale of the time-dependent transition rate
    #    N_cells: number of cells to simulate
    # Outputs:
    #    t_on: time to reach the final state for each cell (length = N_cells)

    ## Setup variables
    t_on = np.zeros(N_cells)              #Time to transition to final ON state for each cell
    time = np.zeros(N_cells)              #Vector of current time for each cell
    state = np.zeros(N_cells, dtype=int)  #State vector describing current state of each cell

    ## Run simulation
    # We will simulate waiting times for each transition for each cell and stop
    # once each cell has reached the final state

    #Set diagonal entries in the transition matrix to zero since self transitions don't count
    for i in range(n):
        Q[i,i] = 0

    #Define the diffusion timescale matrix t_d (finite for forwards rates,
    #effectively 0 for backwards rates)
    t_d_mat = np.zeros((n,n))
    t_d_mat[:,:] = 0.00000001  #Non-forwards transitions have essentially 0 diffusive timescale
    for i in range(n):
        for j in range(n-1):
            #Forwards rates
            if i == j + 1:
                t_d_mat[i,j] = t_d

    #Construct the transition vector out of each cell's current state
    Q_states = np.zeros((N_cells,n))

    #Construct the diffusion timescale vector for each cell
    t_d_states = np.zeros((N_cells,n))

    while np.sum(state) < (n-1)*N_cells:
        Q_states = np.transpose(Q[:,state])
        t_d_states = np.transpose(t_d_mat[:,state])

        #Construct the current time vector for each cell
        time_states = np.transpose(np.tile(time,(n,1)))

        #Generate random numbers in [0,1] for each cell
        randNums = np.random.random(Q_states.shape)

        #Calculate waiting times for each entry in the transition matrix
        #Make sure to suppress divide by zero warning
        #For the exponential profile, this uses the lambertw/productlog
        #function. The steady-state case corresponds to t_d -> 0.
        with np.errstate(divide='ignore', invalid='ignore'):
            #Temp variables for readability
            a = 1/Q_states * np.log(1/randNums)
            b = -np.exp(-(a + t_d_states * np.exp(-time_states/t_d_states) + time_states)/t_d_states)
            tau = np.real(t_d_states * sps.lambertw(b) + a +
                          t_d_states * np.exp(-time_states / t_d_states))

        #Find the shortest waiting time to figure out which state we transitioned to for each cell
        tau_min = np.amin(tau, axis=1)
        newState = np.argmin(tau, axis=1)

        #Replace infinities with zero, corresponding to having reached the final state
        newState[tau_min==np.inf] = n-1
        tau_min[tau_min==np.inf] = 0

        #Update the state and add the waiting time to the overall waiting time
        state = newState
        t_on += tau_min
        time += tau_min

    return t_on
_____no_output_____
MIT
GenericModelExploration.ipynb
GarciaLab/OnsetTimeTransientInputs
The steady-state regime

First, let's get a feel for the model in the steady-state case. We consider a Markov chain with $k+1$ states labeled with indices $i$, with the first state labeled with index $0$. The system will begin in state $0$ at time $t=0$ and we will assume the final state $k$ is absorbing. For example, this could correspond to the transcriptionally competent state. We will allow for forwards and backwards transition rates between all states, except for the final absorbing state, which will have no backwards transition out of it. Denote the transition from state $i$ to state $j$ with the transition rate $\beta_{i,j}$. So, we have the reaction network:

\begin{equation}
0 \underset{\beta_{1,0}}{\overset{\beta_{0,1}}{\rightleftharpoons}} 1 \underset{\beta_{2,1}}{\overset{\beta_{1,2}}{\rightleftharpoons}} ... \overset{\beta_{k-1,k}}{\rightarrow} k
\end{equation}

We will be interested in the mean and variance of the distribution of times $P_k(t)$ to start at state $0$ and reach the final state $k$.

We will first consider the simple case where the transition rates $\beta$ are constant in time, and where we have only forward transitions that are all equal in magnitude. In this case, the distribution $P_k(t)$ is simply given by a Gamma distribution with shape parameter $k$ and rate parameter $\beta$. $P_k(t)$ then has the form

\begin{equation}
P_k(t) = \frac{\beta^k}{\Gamma(k)}t^{k-1}e^{-\beta t}
\end{equation}

where $\Gamma$ is the Gamma function. Below we show analytical and simulated results for the distribution of onset times.
#Let's visualize the distribution of onset times for the Gamma distribution case

#Function for the analytical Gamma distribution, P_k(t) = rate^shape * t^(shape-1) * exp(-rate*t) / Gamma(shape)
def GamPDF(x, shape, rate):
    return x**(shape-1) * np.exp(-x*rate) * rate**shape / sps.gamma(shape)

#Pick some parameters
beta = 1              #transition rate
n = np.array([2,3,4]) #number of states
k = n - 1             #number of steps

#Simulate the distributions
N_cells = 10000
t_on = np.zeros((len(n),N_cells))
for i in range(len(n)):
    Q = MakeGammaDistMatrix(n[i], beta)  #Transition matrix
    t_on[i,:] = CalculatetOn_GenericMarkovChainGillespie(Q, n[i], N_cells)

#Plot results
colors = ['tab:blue','tab:red','tab:green']
bins = np.arange(0,10,0.5)
t = np.arange(0,10,0.1)

ToyModelDist = plt.figure()
#plt.title('Onset distributions for equal, irreversible transitions')
for i in range(len(k)):
    plt.hist(t_on[i,:], bins=bins, density=True, alpha=0.5,
             label='simulation k=' + str(k[i]), color=colors[i], linewidth=0)
    plt.plot(t, GamPDF(t, n[i]-1, beta), '--', label='theory k=' + str(k[i]),
             color=colors[i])
plt.xlabel('onset time')
plt.ylabel('frequency')
plt.legend()
StandardFigure(plt.gca())
plt.show()
_____no_output_____
MIT
GenericModelExploration.ipynb
GarciaLab/OnsetTimeTransientInputs
The mean $\mu_k$ and variance $\sigma^2_k$ have simple analytical expressions and are given by

\begin{equation}
\mu_k = \frac{k}{\beta} \\
\sigma^2_k = \frac{k}{\beta^2}
\end{equation}

For this analysis, we will consider a two-dimensional feature space consisting of the mean onset time on the x-axis and the squared CV (variance divided by squared mean) in the onset time on the y-axis. The squared CV is a measure of the "noise" of the system at a given mean. For this simple example, then:

\begin{equation}
\mu_k = \frac{k}{\beta} \\
CV^2_k = \frac{1}{k}
\end{equation}

Thus, for this scenario with equal, irreversible reactions, the squared CV is independent of the transition rate $\beta$ and depends only on the number of steps $k$. Plotting in our feature space results in a series of horizontal lines, with each line corresponding to the particular number of steps in the model.
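Before running the full simulation, a quick sanity check of these expressions (a minimal sketch, assuming `scipy.stats` is available in this environment; the specific values of $k$ and $\beta$ are illustrative choices):

from scipy import stats

#Analytic check of the Gamma first-passage statistics: for k steps at rate
#beta, the mean is k/beta and CV^2 = 1/k, independent of beta
k_check, beta_check = 3, 1.0
dist = stats.gamma(a=k_check, scale=1/beta_check)
print(dist.mean())                    # 3.0, i.e. k/beta
print(dist.var() / dist.mean()**2)    # 0.333..., i.e. 1/k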
#Setting up our feature space
beta_min = 0.5   #Minimum transition rate
beta_max = 5     #Maximum transition rate
beta_step = 0.1  #Resolution in transition rates
beta_range = np.arange(beta_min, beta_max, beta_step)
n = np.array([2,3,4,5])  #Number of states

means = np.zeros((len(n),len(beta_range)))
CV2s = np.zeros((len(n),len(beta_range)))

#Simulate results
for i in range(len(n)):
    for j in range(len(beta_range)):
        Q = MakeGammaDistMatrix(n[i], beta_range[j])
        t_on = CalculatetOn_GenericMarkovChainGillespie(Q, n[i], N_cells)
        means[i,j] = np.mean(t_on)
        CV2s[i,j] = np.var(t_on)/np.mean(t_on)**2

#Plot results
meanVals = np.arange(0,10,0.1)
CV2Pred = np.zeros((len(n),len(meanVals)))
colors = ['tab:blue','tab:red','tab:green','tab:purple']
for i in range(len(n)):
    CV2Pred[i,:] = (1/(n[i]-1)) * np.ones(len(meanVals))

ToyModelFeatureSpace = plt.figure()
#plt.title('Feature space for equal, irreversible reactions')
for i in range(len(n)):
    plt.plot(means[i,:], CV2s[i,:], '.', label='simulation k=' + str(n[i]-1),
             color=colors[i])
    plt.plot(meanVals, CV2Pred[i,:], '--', label='theory k=' + str(n[i]-1),
             color=colors[i])
plt.xlabel('mean')
plt.ylabel('CV^2')
plt.legend()
StandardFigure(plt.gca())
plt.show()
_____no_output_____
MIT
GenericModelExploration.ipynb
GarciaLab/OnsetTimeTransientInputs
What happens if we now allow for backwards transitions as an extension to this ideal case? We'll retain the idea of equal forward transition rates $\beta$, but now allow for equal backwards transitions of magnitude $\beta f$ (except from the final absorbing state $k$).

\begin{equation}
0 \underset{\beta f}{\overset{\beta}{\rightleftharpoons}} 1 \underset{\beta f}{\overset{\beta}{\rightleftharpoons}} ... \overset{\beta}{\rightarrow} k
\end{equation}

We will investigate what happens when we vary $f$. Let's see what happens for $k=2$ steps (i.e. $n=3$ states).
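To make the setup concrete, here is the transition rate matrix produced by `MakeEqualBidirectionalMatrix` (defined above) for $n=3$ states; the values $\beta=1$ and $f=0.5$ are illustrative choices, not parameters used elsewhere in this notebook:

#Each column sums to zero except the final, absorbing state, which has no
#transitions out of it
Q_example = MakeEqualBidirectionalMatrix(3, 1.0, 0.5)
print(Q_example)
# [[-1.   0.5  0. ]
#  [ 1.  -1.5  0. ]
#  [ 0.   1.   0. ]]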
#Setting up parameters
n = 3
beta_min = 0.1
beta_max = 5.1
beta_step = 0.1
beta_range = np.arange(beta_min, beta_max, beta_step)
N_cells = 10000

#Backwards transitions
f = np.arange(0,4,1)  #fractional magnitude of backwards transition relative to forwards

means = np.zeros((len(beta_range),len(f)))
CV2s = np.zeros((len(beta_range),len(f)))

for i in range(len(beta_range)):
    for j in range(len(f)):
        Q = MakeEqualBidirectionalMatrix(n, beta_range[i], f[j])
        t_on = CalculatetOn_GenericMarkovChainGillespie(Q, n, N_cells)
        means[i,j] = np.mean(t_on)
        CV2s[i,j] = np.var(t_on) / np.mean(t_on)**2

#Plot results
#Distribution for fixed beta and varying f
beta = 1
bins = np.arange(0,20,0.5)
t = np.arange(0,10,0.1)
colors = ['tab:blue','tab:red','tab:green','tab:purple']

BackwardsDist = plt.figure()
for i in range(len(f)):
    Q = MakeEqualBidirectionalMatrix(n, beta, f[i])
    t_on = CalculatetOn_GenericMarkovChainGillespie(Q, n, N_cells)
    plt.hist(t_on, bins=bins, density=True, alpha=0.3, label='f = ' + str(f[i]),
             linewidth=0, color=colors[i])
plt.xlabel('onset time')
plt.ylabel('frequency')
plt.legend()
StandardFigure(plt.gca())
plt.show()

BackwardsFeatureSpace = plt.figure()
#plt.title('Investigation of impact of backwards rates on feature space (k=2)')
plt.plot((n-1)/beta_range, (1/(n-1))*np.ones(beta_range.shape), 'k--',
         label='Gamma dist. limit')
for i in range(len(f)):
    plt.plot(means[:,i], CV2s[:,i], '.', label='f = ' + str(f[i]), color=colors[i])
plt.xlabel('mean')
plt.ylabel('CV^2')
plt.legend()
StandardFigure(plt.gca())
plt.show()
_____no_output_____
MIT
GenericModelExploration.ipynb
GarciaLab/OnsetTimeTransientInputs
We see that as the backwards transition rate increases, the overall noise increases! This makes intuitive sense, since with a backwards transition rate, the system is more likely to spend extra time hopping between states before reaching the final absorbing state, increasing both the overall time to finish and the variability in finishing times.

Because truly irreversible reactions are effectively impossible to achieve in reality, the performance of the Gamma distribution model (i.e. equal, irreversible forward transitions) represents a bound on the noise performance of a real system. With the more realistic scenario of backwards transitions, the overall noise is higher.

Transients help improve noise performance

In the steady-state regime, the only way to decrease the noise (i.e. squared CV) in onset times was to increase the number of steps. What about in the transient regime?

Here, we will investigate the changes to this parameter space by using a transient rate $\beta(t)$. This is of biological interest because many developmental processes occur out of steady state. For example, several models of chromatin accessibility hypothesize that the rate of chromatin state transitioning is coupled to the activity of pioneer factors like Zelda. During each rapid cell cycle division event in the early fly embryo, the nuclear membrane breaks down and reforms again, and transcription factors are expelled from and re-introduced back into the nucleus. Thus, after each division event, there is a transient period during which the concentration of pioneer factors at a given gene locus is out of steady state.

For now, we will assume a reasonable form for the transition rate. Let's assume that forward transition rates are mediated by the concentration of a pioneer factor like Zelda, e.g. in some on-rate fashion. Considering $\beta$ to be a proxy for Zelda concentration, for example, we will write down this transient $\beta(t)$ as the result of a simple diffusive process with form

\begin{equation}
\beta(t) = \beta (1 - e^{-t / \tau})
\end{equation}

Here, $\beta$ is the asymptotic, saturating value of $\beta(t)$, and $\tau$ is the time constant governing the time-varying nature of the transition rate. For a diffusive process, $\tau$ would depend strongly on the diffusion constant, for example.

For comparison, the time plots of the constant and transient inputs are shown below, for $\tau = 3$ and $\beta = 1$.
#Looking at steady-state vs transient input profiles
time = np.arange(0,10,0.1)
dt = 0.1
w_base = 1
w_const = w_base * np.ones(time.shape)
N_trans = 2
N_cells = 1000

#Now with the transient exponential rate
tau = 3
w_trans = w_base * (1 - np.exp(-time / tau))

#Plot the inputs
TransientInputs = plt.figure()
#plt.title('Input transition rate profiles')
plt.plot(time, w_const, label='constant', color='tab:blue')
plt.plot(time, w_trans, label='transient', color='tab:red')
plt.xlabel('time')
plt.ylabel('rate')
plt.legend()
StandardFigure(plt.gca())
TransientInputs.set_figheight(1)  #Make this figure short for formatting purposes
plt.show()
_____no_output_____
MIT
GenericModelExploration.ipynb
GarciaLab/OnsetTimeTransientInputs
Because of the time-varying nature of $\beta(t)$, the resulting distribution $P_k(t)$ for the case of equal, irreversible forward transition rates no longer obeys a simple Gamma distribution, and an analytical solution is difficult (or even impossible). Nevertheless, we can easily simulate the distributions numerically, shown below.
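As a brief aside on how the simulation handles a time-dependent rate: for $\beta(t) = \beta(1 - e^{-t/\tau})$, a waiting time starting at time $t_0$ can be drawn by solving $H(t) = E$ for $t$, where $H$ is the integrated (cumulative) hazard and $E$ is a unit-mean exponential random number. `CalculatetOn_GenericMarkovChainGillespieTime` above solves this in closed form with the Lambert W function; the generic numerical version below is a minimal sketch of the same idea (the helper name and the root-bracketing interval `1000/beta` are illustrative choices, not part of the model above):

from scipy.optimize import brentq

def sample_waiting_time(beta, tau, t0, rng=np.random.default_rng()):
    E = rng.exponential()  #unit-mean exponential deviate
    #Integrated hazard from t0 to t for beta(t) = beta*(1 - exp(-t/tau)), minus E
    H = lambda t: beta * ((t - t0) + tau * (np.exp(-t/tau) - np.exp(-t0/tau))) - E
    return brentq(H, t0, t0 + 1000/beta) - t0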
#Let's visualize the distribution of onset times for the case of equal,
#irreversible forward transition rates, comparing steady-state and transient
#input profiles and varying the "diffusion" constant tau

#Pick some parameters
beta = 1            #transition rate
n = 3               #Number of states
tau = np.array([1,3])

#Simulate the distributions
N_cells = 10000

#Steady state
Q_steady = MakeGammaDistMatrix(n, beta)
t_on_steady = CalculatetOn_GenericMarkovChainGillespie(Q_steady, n, N_cells)

#Transient
t_on_trans = np.zeros((len(tau),N_cells))
for i in range(len(tau)):
    Q = MakeGammaDistMatrix(n, beta)  #Transition matrix
    t_on_trans[i,:] = CalculatetOn_GenericMarkovChainGillespieTime(Q, n, tau[i], N_cells)

#Plot results
bins = np.arange(0,10,0.25)
colors = ['tab:red','tab:green']

TransientDist = plt.figure()
#plt.title('Onset distributions for k=2 equal, irreversible transitions, steady-state vs. transient input')
plt.hist(t_on_steady, bins=bins, density=True, alpha=0.5, label='steady state',
         linewidth=0, color='tab:blue')
for i in range(len(tau)):
    plt.hist(t_on_trans[i,:], bins=bins, density=True, alpha=0.5, linewidth=0,
             label='transient tau=' + str(tau[i]), color=colors[i])
plt.xlabel('onset time')
plt.ylabel('frequency')
plt.legend()
StandardFigure(plt.gca())
plt.show()
_____no_output_____
MIT
GenericModelExploration.ipynb
GarciaLab/OnsetTimeTransientInputs
We see that increasing the time constant $\tau$ results in a rightward shift of the onset time distribution, as expected, since the time-varying transition rate profile results in slower initial transition rates. What impact does this have on the noise? Below we show the feature space holding $k=2$ fixed while varying $\tau$, and then holding $\tau=3$ fixed and varying $k$.
#Exploring the impact of transient inputs

#First, fix k and vary tau
n = 3  #number of states
beta_min = 0.1
beta_max = 5.1
beta_step = 0.1
beta_range = np.arange(beta_min, beta_max, beta_step)
tau = np.arange(1,10,3)

#Simulate the distributions
N_cells = 5000

#Steady state
means_steady = np.zeros(len(beta_range))
CV2s_steady = np.zeros(len(beta_range))
for i in range(len(beta_range)):
    Q = MakeGammaDistMatrix(n, beta_range[i])
    t_on = CalculatetOn_GenericMarkovChainGillespie(Q, n, N_cells)
    means_steady[i] = np.mean(t_on)
    CV2s_steady[i] = np.var(t_on) / np.mean(t_on)**2

#Transient
means_trans = np.zeros((len(tau),len(beta_range)))
CV2s_trans = np.zeros((len(tau),len(beta_range)))
for i in range(len(tau)):
    for j in range(len(beta_range)):
        Q = MakeGammaDistMatrix(n, beta_range[j])  #Transition matrix
        t_on = CalculatetOn_GenericMarkovChainGillespieTime(Q, n, tau[i], N_cells)
        means_trans[i,j] = np.mean(t_on)
        CV2s_trans[i,j] = np.var(t_on) / np.mean(t_on)**2

#Plot results
colors = ['tab:blue','tab:red','tab:green']

TransientFeatureSpaceFixedK = plt.figure()
#plt.title('Investigation of transient inputs on feature space (k=2, tau varying)')
plt.plot((n-1)/beta_range, (1/(n-1))*np.ones(beta_range.shape), 'k--',
         label='Gamma dist. limit in steady-state', color='black')
plt.plot(means_steady, CV2s_steady, 'k.', label='steady state simulation',
         color='black')
for i in range(len(tau)):
    plt.plot(means_trans[i,:], CV2s_trans[i,:], '.',
             label='transient simulation tau=' + str(tau[i]), color=colors[i])
plt.xlabel('mean')
plt.ylabel('CV^2')
plt.legend()
StandardFigure(plt.gca())
plt.show()

#Now fix tau and vary k
n = np.array([2,3,4,5])
tau = 3

#Simulate the distributions
N_cells = 5000

#Transient
means_trans = np.zeros((len(n),len(beta_range)))
CV2s_trans = np.zeros((len(n),len(beta_range)))
for i in range(len(n)):
    for j in range(len(beta_range)):
        Q = MakeGammaDistMatrix(n[i], beta_range[j])  #Transition matrix
        t_on = CalculatetOn_GenericMarkovChainGillespieTime(Q, n[i], tau, N_cells)
        means_trans[i,j] = np.mean(t_on)
        CV2s_trans[i,j] = np.var(t_on) / np.mean(t_on)**2

#Plot results
colors = ['black','tab:red','tab:green','tab:blue']

TransientFeatureSpaceFixedTau = plt.figure()
#plt.title('Investigation of transient inputs on feature space (k varying, tau=3)')
for i in range(len(n)):
    plt.plot((n[i]-1)/beta_range, (1/(n[i]-1))*np.ones(beta_range.shape), '--',
             color=colors[i], label='steady state k=' + str(n[i]-1))
    plt.plot(means_trans[i,:], CV2s_trans[i,:], '.', color=colors[i],
             label='transient simulation k=' + str(n[i]-1))
plt.xlabel('mean')
plt.ylabel('CV^2')
plt.legend()
StandardFigure(plt.gca())
plt.show()
_____no_output_____
MIT
GenericModelExploration.ipynb
GarciaLab/OnsetTimeTransientInputs
In each case, the transient input reduces noise! It seems that for increasing $\tau$, the performance improves. This makes intuitive sense because a time-dependent input profile makes earlier transitions "weaker," so transitions that happen before the expected time are less likely, tightening the overall distribution of onset times. The relevant quantity is the dimensionless product $\beta \tau$: the faster the intrinsic transition rate $\beta$ is relative to the transient input timescale $\tau$, the larger the effects of the transient input. This manifests in the feature space at low values of the mean onset time, where the discrepancy between steady-state and transient is most apparent.

Transient inputs can improve performance in non-ideal models

Earlier, we saw that in the steady-state case, the presence of finite backwards transition rates decreased the overall noise performance of the model. The greater the backwards transition rates, the worse the performance. Here, we'll show that transient inputs can counteract this performance loss.

As before, we'll assume a model with equal forward transition rates $\beta$ and equal backward transition rates $\beta f$. We'll compare the steady-state case with the transient input case, parameterized by the timescale $\tau$. Note that we'll only consider the forward transition rates to be transient, and assume the backward transition rates are still time-independent. Biologically, this would correspond to the forward transitions being on-rates of some pioneer factor like Zelda that is coupled to a time-dependent concentration profile, while backward transitions are time-independent off-rates.

Below, we explore the feature space in the steady-state vs. transient cases, with the steady-state ideal case of equal, irreversible transitions as a reference. We'll first consider the case fixing $k=2$ and $f=0.2$ and varying $\tau$.
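Before running that comparison, a quick numerical check of the $\beta \tau$ scaling argument above (a minimal sketch using the simulation functions defined earlier; the parameter pairs and cell count are illustrative choices). Rescaling $\beta \rightarrow c\beta$ and $\tau \rightarrow \tau/c$ leaves $\beta \tau$, and hence the squared CV, unchanged up to sampling error, while the mean onset time scales as $1/\beta$:

#Each (beta, tau) pair below has beta*tau = 3; CV^2 should agree across rows
for beta_c, tau_c in [(1.0, 3.0), (2.0, 1.5), (4.0, 0.75)]:
    Q_c = MakeGammaDistMatrix(3, beta_c)
    t_on_c = CalculatetOn_GenericMarkovChainGillespieTime(Q_c, 3, tau_c, 10000)
    print(beta_c, tau_c, np.var(t_on_c) / np.mean(t_on_c)**2)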
#Setting up parameters
n = 3
beta_min = 0.1
beta_max = 5.1
beta_step = 0.1
beta_range = np.arange(beta_min, beta_max, beta_step)
f = 0.2
tau = np.array([0.25,0.5,1,3])

#Simulate results
N_cells = 10000

#Steady state
means_steady = np.zeros(len(beta_range))
CV2s_steady = np.zeros(len(beta_range))
for i in range(len(beta_range)):
    Q = MakeEqualBidirectionalMatrix(n, beta_range[i], f)
    t_on = CalculatetOn_GenericMarkovChainGillespie(Q, n, N_cells)
    means_steady[i] = np.mean(t_on)
    CV2s_steady[i] = np.var(t_on) / np.mean(t_on)**2

#Transient
means_trans = np.zeros((len(beta_range),len(tau)))
CV2s_trans = np.zeros((len(beta_range),len(tau)))
for i in range(len(beta_range)):
    for j in range(len(tau)):
        Q = MakeEqualBidirectionalMatrix(n, beta_range[i], f)
        t_on = CalculatetOn_GenericMarkovChainGillespieTime(Q, n, tau[j], N_cells)
        means_trans[i,j] = np.mean(t_on)
        CV2s_trans[i,j] = np.var(t_on) / np.mean(t_on)**2

#Plot results
colors = ['tab:blue','tab:red','tab:green','tab:purple']

TransientFeatureSpaceBackwards = plt.figure()
#plt.title('Impact of transient inputs on feature space with backward rates (k=2, f=' + str(f) + ')')
plt.plot((n-1)/beta_range, (1/(n-1))*np.ones(beta_range.shape), 'k--',
         label='Steady-state ideal limit')
plt.plot(means_steady, CV2s_steady, 'k.', label='Steady-state, f=' + str(f))
for i in range(len(tau)):
    plt.plot(means_trans[:,i], CV2s_trans[:,i], '.',
             label='Transient, f=' + str(f) + ', tau=' + str(tau[i]),
             color=colors[i])
plt.xlabel('mean')
plt.ylabel('CV^2')
plt.legend()
StandardFigure(plt.gca())
plt.show()
_____no_output_____
MIT
GenericModelExploration.ipynb
GarciaLab/OnsetTimeTransientInputs
Interesting! As shown earlier, the steady-state case with a backwards transition rate is worse than the ideal limit with equal, irreversible forward rates. However, using a transient rate can counterbalance this and still achieve performance better than the ideal limit in the steady-state case.

This suggests that given a backwards transition rate that is some fraction $f$ in magnitude of the forwards transition rate, there exists some "diffusion" timescale $\tau$ of the input transition rate that can bring the squared CV back to the ideal steady-state limit with no backwards rates.
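One could search numerically for the $\tau$ that restores the ideal noise floor for a given $f$; below is a rough sketch using the simulation functions above (the coarse grid search, grid spacing, and cell count are illustrative stand-ins for proper root-finding with error bars):

def find_matching_tau(n, beta, f, taus=np.arange(0.25, 5.0, 0.25), N_cells=5000):
    target = 1.0 / (n - 1)  #ideal steady-state Gamma-distribution limit, CV^2 = 1/k
    for tau in taus:
        Q = MakeEqualBidirectionalMatrix(n, beta, f)
        t_on = CalculatetOn_GenericMarkovChainGillespieTime(Q, n, tau, N_cells)
        if np.var(t_on) / np.mean(t_on)**2 <= target:
            return tau
    return None  #no tau in the grid was large enough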
# Export figures
ToyModelDist.savefig('figures/ToyModelDist.pdf')
ToyModelFeatureSpace.savefig('figures/ToyModelFeatureSpace.pdf')
BackwardsDist.savefig('figures/BackwardsDist.pdf')
BackwardsFeatureSpace.savefig('figures/BackwardsFeatureSpace.pdf')
TransientInputs.savefig('figures/TransientInputs.pdf')
TransientDist.savefig('figures/TransientDist.pdf')
TransientFeatureSpaceFixedK.savefig('figures/TransientFeatureSpaceFixedK.pdf')
TransientFeatureSpaceFixedTau.savefig('figures/TransientFeatureSpaceFixedTau.pdf')
TransientFeatureSpaceBackwards.savefig('figures/TransientFeatureSpaceBackwards.pdf')
_____no_output_____
MIT
GenericModelExploration.ipynb
GarciaLab/OnsetTimeTransientInputs
Mining the Social Web (3rd Edition)

Preface

Welcome! Allow me to be the first to offer my congratulations on your decision to take an interest in [_Mining the Social Web (3rd Edition)_](http://bit.ly/135dHfs)! This collection of [Jupyter Notebooks](http://ipython.org/notebook.html) provides an interactive way to follow along with and explore the numbered examples from the book. Whereas many technical books require you to type in the code examples one character at a time or download a source code archive (that may or may not be maintained by the author), this book reinforces the concepts from the sample code in a fun, convenient, and interactive way that really does make the learning experience superior to what you may have previously experienced, so even if you are skeptical, please give it a try. I think you'll be pleasantly surprised at the amazing user experience that the Jupyter Notebook affords and just how much easier it is to follow along and adapt the code to your own particular needs. In the somewhat unlikely event that you've somehow stumbled across this notebook outside of its context on GitHub, [you can find the full source code repository here](https://github.com/mikhailklassen/Mining-the-Social-Web-3rd-Edition).

If you haven't previously encountered the Jupyter Notebook, you really should take a moment to learn more about it at https://jupyter.org. It's essentially a platform that allows you to author and run Python source code in the web browser and lends itself very well to data science experiments in which you're taking notes and learning along the way. Personally, I like to think of it as a special-purpose notepad that allows me to embed and run arbitrary Python code, and I find myself increasingly using it as my default development environment for many of my Python-based projects. The source code for the _Mining the Social Web_ book employs the Jupyter Notebook rather exclusively to present the source code as a means of streamlining and enhancing the learning experience, so it is highly recommended that you take a few minutes to learn more about how it works and why it's such an excellent learning (and development) platform. The [same GitHub source code repository](https://github.com/mikhailklassen/Mining-the-Social-Web-3rd-Edition) that contains this file also contains all of the Jupyter Notebooks for _Mining the Social Web_, so once you've followed along with the instructions in Appendix A and gotten your virtual machine environment installed, just open the corresponding notebook from [http://localhost:8888](http://localhost:8888). From that point, following along with the code is literally as easy as pressing Shift-Enter in Jupyter Notebook cells.

If you experience any problems along the way or have any feedback about this book, its software, or anything else at all, please reach out on Twitter, Facebook, or GitHub for help.

* Twitter: [http://twitter.com/socialwebmining](http://twitter.com/socialwebmining) (@SocialWebMining)
* Facebook: [http://facebook.com/MiningTheSocialWeb](http://facebook.com/MiningTheSocialWeb)
* GitHub: [https://github.com/mikhailklassen/Mining-the-Social-Web-3rd-Edition](https://github.com/mikhailklassen/Mining-the-Social-Web-3rd-Edition)

Thanks once again for your interest in _Mining the Social Web_. I truly hope that you learn a lot of new things (and have more fun than you ever expected) from this book.

Best Regards,

Matthew A. Russell
Twitter: @ptwobrussell

Mikhail Klassen
Twitter: @MikhailKlassen

P.S. Even if you are a savvy and accomplished developer, you will still find it worthwhile to use the turn-key Docker support that's been provided, since it is tested and comes pre-loaded with all of the correct dependencies for following along with the examples.
# This is a Python source code comment in a Jupyter Notebook cell.
# Try executing this cell by placing your cursor in it and typing Shift-Enter

print("Hello, Social Web!")

# See Appendix A to get your virtual machine installed
# See Appendix C for a brief overview of some Python idioms and IPython Notebook tips
Hello, Social Web!
BSD-2-Clause
notebooks/Chapter 0 - Preface.ipynb
ohshane71/Mining-the-Social-Web-3rd-Edition
Copyright 2021 The TensorFlow Cloud Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
_____no_output_____
Apache-2.0
g3doc/tutorials/hp_tuning_wide_and_deep_model.ipynb
anukaal/cloud
Tuning a wide and deep model using Google Cloud

In this example we will use CloudTuner and Google Cloud to tune a [Wide and Deep Model](https://ai.googleblog.com/2016/06/wide-deep-learning-better-together-with.html) based on the tunable model introduced in [structured data learning with Wide, Deep, and Cross networks](https://keras.io/examples/structured_data/wide_deep_cross_networks/). We will use the data set from [CAIIS Dogfood Day](https://www.kaggle.com/c/caiis-dogfood-day-2020/overview).
import datetime
import uuid
import numpy as np
import pandas as pd
import tensorflow as tf
import os
import sys
import subprocess

from tensorflow.keras import datasets, layers, models
from sklearn.model_selection import train_test_split

# Install the latest version of tensorflow_cloud and other required packages.
if os.environ.get("TF_KERAS_RUNNING_REMOTELY", True):
    subprocess.run(
        ['python3', '-m', 'pip', 'install', 'tensorflow-cloud', '-q'])
    subprocess.run(
        ['python3', '-m', 'pip', 'install', 'google-cloud-storage', '-q'])
    subprocess.run(
        ['python3', '-m', 'pip', 'install', 'fsspec', '-q'])
    subprocess.run(
        ['python3', '-m', 'pip', 'install', 'gcsfs', '-q'])

import tensorflow_cloud as tfc
print(tfc.__version__)

tf.version.VERSION
_____no_output_____
Apache-2.0
g3doc/tutorials/hp_tuning_wide_and_deep_model.ipynb
anukaal/cloud
Project Configurations

Setting project parameters. For more details on Google Cloud specific parameters please refer to [Google Cloud Project Setup Instructions](https://www.kaggle.com/nitric/google-cloud-project-setup-instructions/).
# Set Google Cloud Specific parameters

# TODO: Please set GCP_PROJECT_ID to your own Google Cloud project ID.
GCP_PROJECT_ID = 'YOUR_PROJECT_ID' #@param {type:"string"}

# TODO: Change the Service Account Name to your own Service Account
SERVICE_ACCOUNT_NAME = 'YOUR_SERVICE_ACCOUNT_NAME' #@param {type:"string"}
SERVICE_ACCOUNT = f'{SERVICE_ACCOUNT_NAME}@{GCP_PROJECT_ID}.iam.gserviceaccount.com'

# TODO: set GCS_BUCKET to your own Google Cloud Storage (GCS) bucket.
GCS_BUCKET = 'YOUR_GCS_BUCKET_NAME' #@param {type:"string"}

# DO NOT CHANGE: Currently only the 'us-central1' region is supported.
REGION = 'us-central1'

# Set Tuning Specific parameters

# OPTIONAL: You can change the job name to any string.
JOB_NAME = 'wide_and_deep' #@param {type:"string"}

# OPTIONAL: Set the number of concurrent tuning jobs that you would like to run.
NUM_JOBS = 5 #@param {type:"string"}

# TODO: Set the study ID for this run. Study_ID can be any unique string.
# Reusing the same Study_ID will cause the Tuner to continue tuning the
# same Study parameters. This can be used to continue on a terminated job,
# or to load stats from a previous study.
STUDY_NUMBER = '00001' #@param {type:"string"}
STUDY_ID = f'{GCP_PROJECT_ID}_{JOB_NAME}_{STUDY_NUMBER}'

# Setting the location where training logs and checkpoints will be stored
GCS_BASE_PATH = f'gs://{GCS_BUCKET}/{JOB_NAME}/{STUDY_ID}'
TENSORBOARD_LOGS_DIR = os.path.join(GCS_BASE_PATH, "logs")
_____no_output_____
Apache-2.0
g3doc/tutorials/hp_tuning_wide_and_deep_model.ipynb
anukaal/cloud
Authenticating the notebook to use your Google Cloud Project

For Kaggle Notebooks click on "Add-ons"->"Google Cloud SDK" before running the cell below.
# Using tfc.remote() to ensure this code only runs in the notebook
if not tfc.remote():

    # Authentication for Kaggle Notebooks
    if "kaggle_secrets" in sys.modules:
        from kaggle_secrets import UserSecretsClient
        UserSecretsClient().set_gcloud_credentials(project=GCP_PROJECT_ID)

    # Authentication for Colab Notebooks
    if "google.colab" in sys.modules:
        from google.colab import auth
        auth.authenticate_user()
        os.environ["GOOGLE_CLOUD_PROJECT"] = GCP_PROJECT_ID
_____no_output_____
Apache-2.0
g3doc/tutorials/hp_tuning_wide_and_deep_model.ipynb
anukaal/cloud
Load the data

Read the raw data and split it into train and test data sets. For this step you will need to copy the dataset to your GCS bucket so it can be accessed during training. For this example we are using the dataset from https://www.kaggle.com/c/caiis-dogfood-day-2020.

To do this you can run the following commands to download and copy the dataset to your GCS bucket, or manually download the dataset via the [Kaggle UI](https://www.kaggle.com/c/caiis-dogfood-day-2020/data) and upload the `train.csv` file to your [GCS bucket via the GCS UI](https://console.cloud.google.com/storage/browser).

```python
# Download the dataset
!kaggle competitions download -c caiis-dogfood-day-2020

# Copy the training file to your bucket
!gsutil cp ./caiis-dogfood-day-2020/train.csv $GCS_BASE_PATH/caiis-dogfood-day-2020/train.csv
```
train_URL = f'{GCS_BASE_PATH}/caiis-dogfood-day-2020/train.csv'
data = pd.read_csv(train_URL)
train, test = train_test_split(data, test_size=0.1)

# A utility method to create a tf.data dataset from a Pandas Dataframe
def df_to_dataset(df, shuffle=True, batch_size=32):
    df = df.copy()
    labels = df.pop('target')
    ds = tf.data.Dataset.from_tensor_slices((dict(df), labels))
    if shuffle:
        ds = ds.shuffle(buffer_size=len(df))
    ds = ds.batch(batch_size)
    return ds

sm_batch_size = 1000  # A small batch size is used for demonstration purposes

train_ds = df_to_dataset(train, batch_size=sm_batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=sm_batch_size)
_____no_output_____
Apache-2.0
g3doc/tutorials/hp_tuning_wide_and_deep_model.ipynb
anukaal/cloud
Preprocess the data

Setting up preprocessing layers for categorical and numerical input data. For more details on preprocessing layers please refer to [working with preprocessing layers](https://www.tensorflow.org/guide/keras/preprocessing_layers).
from tensorflow.keras.layers.experimental import preprocessing

def create_model_inputs():
    inputs = {}

    for name, column in data.items():
        if name in ('id', 'target'):
            continue

        dtype = column.dtype
        if dtype == object:
            dtype = tf.string
        else:
            dtype = tf.float32

        inputs[name] = tf.keras.Input(shape=(1,), name=name, dtype=dtype)

    return inputs

# Preprocessing the numeric inputs, and running them through a normalization layer.
def preprocess_numeric_inputs(inputs):

    numeric_inputs = {name: input for name, input in inputs.items()
                      if input.dtype == tf.float32}

    x = layers.Concatenate()(list(numeric_inputs.values()))
    norm = preprocessing.Normalization()
    norm.adapt(np.array(data[numeric_inputs.keys()]))
    numeric_inputs = norm(x)
    return numeric_inputs

# Preprocessing the categorical inputs.
def preprocess_categorical_inputs(inputs):
    categorical_inputs = []
    for name, input in inputs.items():
        if input.dtype == tf.float32:
            continue

        lookup = preprocessing.StringLookup(vocabulary=np.unique(data[name]))
        one_hot = preprocessing.CategoryEncoding(max_tokens=lookup.vocab_size())

        x = lookup(input)
        x = one_hot(x)
        categorical_inputs.append(x)

    return layers.concatenate(categorical_inputs)
_____no_output_____
Apache-2.0
g3doc/tutorials/hp_tuning_wide_and_deep_model.ipynb
anukaal/cloud
Define the model architecture and hyperparameters

In this section we define our tuning parameters using [Keras Tuner Hyper Parameters](https://keras-team.github.io/keras-tuner/#the-search-space-may-contain-conditional-hyperparameters) and a model-building function. The model-building function takes an argument `hp` from which you can sample hyperparameters, such as `hp.Int('units', min_value=32, max_value=512, step=32)` (an integer from a certain range).
import kerastuner

# Configure the search space
HPS = kerastuner.engine.hyperparameters.HyperParameters()
HPS.Float('learning_rate', min_value=1e-4, max_value=1e-2, sampling='log')

HPS.Int('num_layers', min_value=2, max_value=5)
for i in range(5):
    HPS.Float('dropout_rate_' + str(i), min_value=0.0, max_value=0.3, step=0.1)
    HPS.Choice('num_units_' + str(i), [32, 64, 128, 256])

from tensorflow.keras import layers
from tensorflow.keras.optimizers import Adam

def create_wide_and_deep_model(hp):
    inputs = create_model_inputs()
    wide = preprocess_categorical_inputs(inputs)
    wide = layers.BatchNormalization()(wide)

    deep = preprocess_numeric_inputs(inputs)
    for i in range(hp.get('num_layers')):
        deep = layers.Dense(hp.get('num_units_' + str(i)))(deep)
        deep = layers.BatchNormalization()(deep)
        deep = layers.ReLU()(deep)
        deep = layers.Dropout(hp.get('dropout_rate_' + str(i)))(deep)

    both = layers.concatenate([wide, deep])
    outputs = layers.Dense(1, activation='sigmoid')(both)
    model = tf.keras.Model(inputs=inputs, outputs=outputs)

    metrics = [
        tf.keras.metrics.Precision(name='precision'),
        tf.keras.metrics.Recall(name='recall'),
        'accuracy',
        'mse'
    ]

    model.compile(
        optimizer=Adam(lr=hp.get('learning_rate')),
        loss='binary_crossentropy',
        metrics=metrics)
    return model
_____no_output_____
Apache-2.0
g3doc/tutorials/hp_tuning_wide_and_deep_model.ipynb
anukaal/cloud
Configure a CloudTuner

In this section we configure the cloud tuner for both remote and local execution. The main difference between the two is the distribution strategy.
from tensorflow_cloud import CloudTuner

distribution_strategy = None
if not tfc.remote():
    # Using MirroredStrategy to use a single instance with multiple GPUs
    # during remote execution while using no strategy for local.
    distribution_strategy = tf.distribute.MirroredStrategy()

tuner = CloudTuner(
    create_wide_and_deep_model,
    project_id=GCP_PROJECT_ID,
    project_name=JOB_NAME,
    region=REGION,
    objective='accuracy',
    hyperparameters=HPS,
    max_trials=100,
    directory=GCS_BASE_PATH,
    study_id=STUDY_ID,
    overwrite=True,
    distribution_strategy=distribution_strategy)

# Configure Tensorboard logs
callbacks = [
    tf.keras.callbacks.TensorBoard(log_dir=TENSORBOARD_LOGS_DIR)]

# Setting to run tuning remotely; you can run the tuner locally to validate it works first.
if tfc.remote():
    tuner.search(train_ds, epochs=20,
                 validation_data=test_ds,
                 callbacks=callbacks)

# You can uncomment the code below to run tuner.search() locally to validate
# everything works before submitting the job to Cloud. Stop the job manually
# after one epoch.

# else:
#     tuner.search(train_ds, epochs=1,
#                  validation_data=test_ds,
#                  callbacks=callbacks)
_____no_output_____
Apache-2.0
g3doc/tutorials/hp_tuning_wide_and_deep_model.ipynb
anukaal/cloud
Start the remote training

This step will prepare your code from this notebook for remote execution and start NUM_JOBS parallel runs remotely to train the model. Once the jobs are submitted you can go to the next step to monitor the jobs' progress via Tensorboard.
# Optional: Some recommended base images. If you provide none the system will choose one for you.
TF_GPU_IMAGE = "gcr.io/deeplearning-platform-release/tf2-gpu.2-5"
TF_CPU_IMAGE = "gcr.io/deeplearning-platform-release/tf2-cpu.2-5"

tfc.run_cloudtuner(
    distribution_strategy='auto',
    docker_config=tfc.DockerConfig(
        parent_image=TF_GPU_IMAGE,
        image_build_bucket=GCS_BUCKET
    ),
    chief_config=tfc.MachineConfig(
        cpu_cores=16,
        memory=60,
    ),
    job_labels={'job': JOB_NAME},
    service_account=SERVICE_ACCOUNT,
    num_jobs=NUM_JOBS
)
_____no_output_____
Apache-2.0
g3doc/tutorials/hp_tuning_wide_and_deep_model.ipynb
anukaal/cloud