markdown (stringlengths 0-37k) | code (stringlengths 1-33.3k) | path (stringlengths 8-215) | repo_name (stringlengths 6-77) | license (stringclasses, 15 values) |
---|---|---|---|---|
from IPython.core.display import HTML
def css_styling():
styles = open("styles/custom.css", "r").read()
return HTML(styles)
css_styling() | Chapter 8 - Parsing XML.ipynb | mikekestemont/ghent1516 | mit |
|
We use pairinteraction's StateOne class to define the single-atom states $\left|n,l,j,m_j\right\rangle$ for which the matrix elements should be calculated. | array_n = range(51,61)
array_nprime = range(51,61)
array_state_final = [pi.StateOne("Rb", n, 0, 0.5, 0.5) for n in array_n]
array_state_initial = [pi.StateOne("Rb", n, 1, 0.5, 0.5) for n in array_nprime] | doc/sphinx/examples_python/matrix_elements.ipynb | hmenke/pairinteraction | gpl-3.0 |
The method MatrixElementCache.getRadial(state_f, state_i, power) returns the value of the radial matrix element of $r^p$ in units of $\mu\text{m}^p$. | matrixelements = np.empty((len(array_state_final), len(array_state_initial)))
for idx_f, state_f in enumerate(array_state_final):
for idx_i, state_i in enumerate(array_state_initial):
matrixelements[idx_f, idx_i] = np.abs(cache.getRadial(state_f, state_i, 1)) | doc/sphinx/examples_python/matrix_elements.ipynb | hmenke/pairinteraction | gpl-3.0 |
We visualize the calculated matrix elements with matplotlib. | fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.imshow(matrixelements, extent=(array_nprime[0]-0.5, array_nprime[-1]+0.5, array_n[0]-0.5, array_n[-1]+0.5),origin='lower')
ax.set_ylabel(r"n")
ax.set_xlabel(r"n'")
ax.yaxis.set_major_locator(MaxNLocator(integer=True))
ax.xaxis.set_major_locator(MaxNLocator(integer=True)); | doc/sphinx/examples_python/matrix_elements.ipynb | hmenke/pairinteraction | gpl-3.0 |
Now given our errors of estimation, what is our best expectation for the true expression?
The frequentist approach is based on maximum likelihood estimation: one computes the probability of each observation given a fixed true gene expression value, and then takes the product of these probabilities over all data points:
$$ L(E|E_{true}) = \prod_{i=1}^{N}{ P (E_i|E_{true}) } $$
We want the value of $E_{true}$ that maximizes the (log) likelihood. In this case the maximum can be found analytically, giving the formula below (in general it has to be approximated numerically via optimization, and even that is not always possible):
$$ E_{est} = \underset{E_{true}}{\operatorname{argmax}}\, L(E|E_{true}) = \underset{E_{true}}{\operatorname{argmin}} \left\{ - \sum_{i=1}^{N}\log P(E_i|E_{true}) \right\} \approx \frac{\sum_i{w_i E_i}}{\sum_i{w_i}}, \qquad w_i = 1/e_i^2 $$
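To see where the weighted mean comes from, assume Gaussian measurement errors $e_i$ (the same assumption the code below makes). Then
$$ -\log P(E_i|E_{true}) = \frac{(E_i - E_{true})^2}{2 e_i^2} + \frac{1}{2}\log(2\pi e_i^2), $$
and setting the derivative of the summed negative log likelihood with respect to $E_{true}$ to zero gives
$$ \sum_{i=1}^{N} \frac{E_i - E_{true}}{e_i^2} = 0 \quad\Rightarrow\quad E_{est} = \frac{\sum_i w_i E_i}{\sum_i w_i}, \qquad w_i = \frac{1}{e_i^2}. $$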
In this case we can also estimate the measurement error using a Gaussian approximation of the likelihood function at its maximum: | w = 1. / err ** 2
print("""
E_true = {0}
E_est = {1:.0f} +/- {2:.0f} (based on {3} measurements)
""".format(E_true, (w * E).sum() / w.sum(), w.sum() ** -0.5, N)) | day2/stats_learning.ipynb | grokkaine/biopycourse | cc0-1.0 |
In the Bayesian approach we estimate the probability of the model parameters given the data, so there is no single absolute estimate. This is called the posterior probability. It is computed from the likelihood and the model prior, which encodes our expectations about the parameters before seeing the data. The data probability expresses how likely the observed data is overall and is usually treated as a normalization term. The formula below is Bayes' theorem, used here with a Bayesian interpretation of probability.
$$ P(E_{true}|E) = \frac{P(E|E_{true})P(E_{true})}{P(E)}$$
$$ {posterior} = \frac{{likelihood}~\cdot~{prior}}{data~probability}$$ | import pymc3 as pm
with pm.Model():
mu = pm.Normal('mu', 900, 1.)
sigma = 1.
E_obs = pm.Normal('E_obs', mu=mu, sd=sigma, observed=E)
step = pm.Metropolis()
trace = pm.sample(15000, step)
#sns.distplot(trace[2000:]['mu'], label='PyMC3 sampler');
#sns.distplot(posterior[500:], label='Hand-written sampler');
pm.traceplot(trace)
plt.show()
| day2/stats_learning.ipynb | grokkaine/biopycourse | cc0-1.0 |
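As a cross-check of the MCMC result above, the posterior can also be approximated on a simple grid, which makes Bayes' theorem explicit. This is a minimal sketch, assuming the measurement array `E` and the per-measurement errors `err` from the earlier cells, a Gaussian likelihood and a flat prior:

```python
import numpy as np

# grid of candidate values for the true expression (the range is an arbitrary choice)
E_grid = np.linspace(E.min() - 50, E.max() + 50, 1000)

# log likelihood of all measurements for every grid point (Gaussian errors assumed)
log_like = -0.5 * np.sum(np.log(2 * np.pi * err ** 2)
                         + (E[None, :] - E_grid[:, None]) ** 2 / err ** 2, axis=1)

# flat prior: the posterior is proportional to the likelihood; normalize on the grid
posterior_grid = np.exp(log_like - log_like.max())
posterior_grid /= np.trapz(posterior_grid, E_grid)

print("grid posterior mean: %.1f" % np.trapz(E_grid * posterior_grid, E_grid))
```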
Task:
- How did we know to start with 900 as the expected mean? Try putting 0, then 2000 and come up with a general strategy!
- Use a Bayesian parametrization for sigma as well. What do you observe?
- Try another sampler (a minimal sketch with a different step method follows the next code cell). | def log_prior(E_true):
return 1 # flat prior
def log_likelihood(E_true, E, e):
return -0.5 * np.sum(np.log(2 * np.pi * e ** 2)
+ (E - E_true) ** 2 / e ** 2)
def log_posterior(E_true, E, e):
return log_prior(E_true) + log_likelihood(E_true, E, e)
import pymc3 as pm
basic_model = pm.Model()
with basic_model:
# Priors for unknown model parameters
alpha = pm.Normal('alpha', mu=0, sd=10)
beta = pm.Normal('beta', mu=0, sd=10)
sigma = pm.HalfNormal('sigma', sd=1)
# Expected value of outcome
mu = alpha + beta*E
# Likelihood (sampling distribution) of observations
E_obs = pm.Normal('Y_obs', mu = mu, sd = sigma, observed = E)
start = pm.find_MAP(model=basic_model)
step = pm.Metropolis()
# draw 20000 posterior samples
trace = pm.sample(20000, step=step, start=start)
_ = pm.traceplot(trace)
plt.show()
import pymc3 as pm
help(pm.Normal) | day2/stats_learning.ipynb | grokkaine/biopycourse | cc0-1.0 |
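For the last task ("try another sampler"), here is a minimal sketch of the same single-parameter model sampled with PyMC3's Slice step method instead of Metropolis; it assumes the observed array `E` from the earlier cells and is only one of several possible choices:

```python
import pymc3 as pm
import matplotlib.pyplot as plt

with pm.Model():
    mu = pm.Normal('mu', 900, 1.)
    E_obs = pm.Normal('E_obs', mu=mu, sd=1., observed=E)
    # Slice sampler instead of Metropolis
    trace = pm.sample(5000, step=pm.Slice())

pm.traceplot(trace)
plt.show()
```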
Check your submission form
Please evaluate the following cell to check your submission form.
In case of errors, please go back to the corresponding information cells and update your information accordingly. | # to be completed ..
report = checks.check_report(sf,"sub")
checks.display_report(report) | dkrz_forms/Templates/DKRZ_CDP_submission_form.ipynb | IS-ENES-Data/submission_forms | apache-2.0 |
Now let's visualize the data. We are going to assume that the price of the house depends on the size of the property. | #rename columns to make indexing easier
data.columns = ['property_size', 'price']
plt.scatter(data.property_size, data.price, color='black')
plt.ylabel("Price of House ($million)")
plt.xlabel("Size of Property (m^2)")
plt.title("Price vs Size of House")
| tutorials/Non-Linear-Regression-Tutorial.ipynb | datascienceguide/datascienceguide.github.io | mit |
We will learn how to implement cross validation properly soon, but for now let us put the data in a random order (shuffle the rows) and use linear regression to fit a line on 75% of the data. We will then test the fit on the remaining 25%. Normally you would use scikit-learn's cross-validation functions, but we are going to implement the cross-validation method ourselves (so you understand what is going on).
DO NOT use this method for doing cross validation in practice. You will later learn how to do k-fold cross-validation using scikit-learn's implementation. In this tutorial, I implement cross validation manually to build your intuition for what hold-out cross validation is, but in the future we will learn a better way to do it (a scikit-learn alternative is sketched after the next code cell). | # generate pseudorandom number
# by setting a seed, the same random number is always generated
# this way by following along, you get the same plots
# meaning the results are reproducible.
# try changing the seed to a different number
np.random.seed(3)
# shuffle data since we want to randomly split the data
shuffled_data= data.iloc[np.random.permutation(len(data))]
#notice how the x labels remain, but are now random
print shuffled_data[0:5]
#train on the first element to 75% of the dataset
training_data = shuffled_data[0:len(shuffled_data)*3/4]
#test on the remaining 25% of the dataset
#note the +1 is since there is an odd number of datapoints
#the better practice is use shufflesplit which we will learn in a future tutorial
testing_data = shuffled_data[-len(shuffled_data)/4+1:-1]
#plot the training and test data on the same plot
plt.scatter(training_data.property_size, training_data.price, color='blue', label='training')
plt.scatter(testing_data.property_size, testing_data.price, color='red', label='testing')
plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
ncol=2, mode="expand", borderaxespad=0.)
plt.ylabel("Price of House ($Million)")
plt.xlabel("Size of Land (m^2)")
plt.title("Price vs Size of Land")
X_train = training_data.property_size.reshape((len(training_data.property_size), 1))
y_train = training_data.price.reshape((len(training_data.property_size), 1))
X_test = testing_data.property_size.reshape((len(testing_data.property_size), 1))
y_test = testing_data.price.reshape((len(testing_data.property_size), 1))
X = np.linspace(0,800000)
X = X.reshape((len(X), 1))
# Create linear regression object
regr = linear_model.LinearRegression()
#Train the model using the training sets
regr.fit(X_train,y_train)
# The coefficients
print('Coefficients: \n', regr.coef_)
# The mean square error
print("Residual sum of squares: %.2f"
% np.mean((regr.predict(X_test) - y_test) ** 2))
plt.plot(X, regr.predict(X), color='black',
linewidth=3)
plt.scatter(training_data.property_size, training_data.price, color='blue', label='training')
plt.scatter(testing_data.property_size, testing_data.price, color='red', label='testing')
plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
ncol=2, mode="expand", borderaxespad=0.)
plt.ylabel("Price of House ($Million)")
plt.xlabel("Size of Land (m^2)")
plt.title("Price vs Size of Land") | tutorials/Non-Linear-Regression-Tutorial.ipynb | datascienceguide/datascienceguide.github.io | mit |
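For reference, here is a minimal sketch of the same 75/25 hold-out split done with scikit-learn instead of the manual shuffle above. It assumes a reasonably recent scikit-learn (in older versions `train_test_split` lives in `sklearn.cross_validation` rather than `sklearn.model_selection`):

```python
import numpy as np
from sklearn import linear_model
from sklearn.model_selection import train_test_split

# same data as above: predict price from property_size
X = data.property_size.values.reshape(-1, 1)
y = data.price.values
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=3)

model = linear_model.LinearRegression()
model.fit(X_tr, y_tr)
print("Residual sum of squares: %.2f" % np.mean((model.predict(X_te) - y_te) ** 2))
```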
We can see that the fit is obviously poor: the residual sum of squares is very high and there is no linear relationship. Since the data appears to follow $e^y = x$, we can apply a log transform to the data:
$$y = \ln(x)$$
For the purpose of this tutorial, I will apply the log transform, fit a linear model, then invert the log transform and plot the fit against the original data. | # map applies the log() function to every element
X_train_after_log = training_data.property_size.map(log)
#reshape back to matrix with 1 column
X_train_after_log = X_train_after_log.reshape((len(X_train_after_log), 1))
X_test_after_log = testing_data.property_size.map(log)
#reshape back to matrix with 1 column
X_test_after_log = X_test_after_log.reshape((len(X_test_after_log), 1))
X_after_log = np.linspace(min(X_train_after_log),max(X_train_after_log))
X_after_log = X_after_log.reshape((len(X_after_log), 1))
regr2 = linear_model.LinearRegression()
#fit linear regression
regr2.fit(X_train_after_log,y_train)
# The coefficients
print('Coefficients: \n', regr.coef_)
# The mean square error
print("Residual sum of squares: %.2f"
% np.mean((regr2.predict(X_test_after_log) - y_test) ** 2))
#np.exp takes the e^x, efficiently inversing the log transform
plt.plot(np.exp(X_after_log), regr2.predict(X_after_log), color='black',
linewidth=3)
plt.scatter(training_data.property_size, training_data.price, color='blue', label='training')
plt.scatter(testing_data.property_size, testing_data.price, color='red', label='testing')
plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
ncol=2, mode="expand", borderaxespad=0.)
plt.ylabel("Price of House ($Million)")
plt.xlabel("Size of Land (m^2)")
plt.title("Price vs Size of Land") | tutorials/Non-Linear-Regression-Tutorial.ipynb | datascienceguide/datascienceguide.github.io | mit |
The residual sum of squares on the test data after the log transform (0.07) is much lower in this example than when we fit the data without the transform (0.32). The plot also looks much better: the model fits well for the smaller land sizes and still roughly fits the larger ones. As an analyst, one might naively settle for this model after applying the log transform. But as we learnt from the last tutorial, ALWAYS plot your data after you transform the features, since there might be hidden structure in the data!
Run the code below to see hidden insight left in the data (after the log transform) | plt.scatter(X_train_after_log, training_data.price, color='blue', label='training')
plt.scatter(X_test_after_log, testing_data.price, color='red', label='testing')
plt.plot(X_after_log, regr2.predict(X_after_log), color='black', linewidth=3)
plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
ncol=2, mode="expand", borderaxespad=0.)
| tutorials/Non-Linear-Regression-Tutorial.ipynb | datascienceguide/datascienceguide.github.io | mit |
The lesson learnt here is: always plot your data (even after a transform) before blindly running a predictive model!
Generalized linear models
Now let's extend our knowledge to generalized linear models for the remaining three Anscombe quartet datasets. We will try to use our intuition to determine the best model. | #read csv
anscombe_ii = pd.read_csv('../datasets/anscombe_ii.csv')
plt.scatter(anscombe_ii.x, anscombe_ii.y, color='black')
plt.ylabel("Y")
plt.xlabel("X")
| tutorials/Non-Linear-Regression-Tutorial.ipynb | datascienceguide/datascienceguide.github.io | mit |
Instead of fitting a linear model to a transformation, we can also fit a polynomial to the data: | X_ii = anscombe_ii.x
# X_ii_noisey = X_ii_noisey.reshape((len(X_ii_noisey), 1))
y_ii = anscombe_ii.y
#y_ii = anscombe_ii.y.reshape((len(anscombe_ii.y), 1))
X_fit = np.linspace(min(X_ii),max(X_ii))
polynomial_degree = 2
p = np.polyfit(X_ii, anscombe_ii.y, polynomial_degree)
yfit = np.polyval(p, X_fit)
plt.plot(X_fit, yfit, '-b')
plt.scatter(X_ii, y_ii) | tutorials/Non-Linear-Regression-Tutorial.ipynb | datascienceguide/datascienceguide.github.io | mit |
Let's add some random noise to the data, fit a polynomial and calculate the residual error. | np.random.seed(1)
x_noise = np.random.random(len(anscombe_ii.x))
X_ii_noisey = anscombe_ii.x + x_noise*3
X_fit = np.linspace(min(X_ii_noisey),max(X_ii_noisey))
polynomial_degree = 1
p = np.polyfit(X_ii_noisey, anscombe_ii.y, polynomial_degree)
yfit = np.polyval(p, X_fit)
plt.plot(X_fit, yfit, '-b')
plt.scatter(X_ii_noisey, y_ii)
print("Residual sum of squares: %.2f"
% np.mean((np.polyval(p, X_ii_noisey) - y_ii)**2))
| tutorials/Non-Linear-Regression-Tutorial.ipynb | datascienceguide/datascienceguide.github.io | mit |
Now, can we fit a higher-degree polynomial and reduce the error? Let's try and see: | polynomial_degree = 5
p2 = np.polyfit(X_ii_noisey, anscombe_ii.y, polynomial_degree)
yfit = np.polyval(p2, X_fit)
plt.plot(X_fit, yfit, '-b')
plt.scatter(X_ii_noisey, y_ii)
print("Residual sum of squares: %.2f"
% np.mean((np.polyval(p2, X_ii_noisey) - y_ii)**2)) | tutorials/Non-Linear-Regression-Tutorial.ipynb | datascienceguide/datascienceguide.github.io | mit |
What if we use a really high degree polynomial? Can we bring the error to zero? YES! | polynomial_degree = 10
p2 = np.polyfit(X_ii_noisey, anscombe_ii.y, polynomial_degree)
yfit = np.polyval(p2, X_fit)
plt.plot(X_fit, yfit, '-b')
plt.scatter(X_ii_noisey, y_ii)
print("Residual sum of squares: %.2f"
% np.mean((np.polyval(p2, X_ii_noisey) - y_ii)**2)) | tutorials/Non-Linear-Regression-Tutorial.ipynb | datascienceguide/datascienceguide.github.io | mit |
It is intuitive to see that we are overfitting: the high-degree polynomial hits every single point (driving our mean squared error (MSE) to zero), but it would not generalize well. For example, at x=5 it would estimate y to be around -45 when you would expect a value above 0.
When you are dealing with more than one variable, it becomes increasingly difficult to prevent overfitting, since you cannot plot past four or five dimensions (x axis, y axis, z axis, color and size). For this reason we should always use cross validation to reduce our variance error (due to overfitting) while also keeping bias (due to underfitting) in check. Throughout the course we will learn more about what this means, along with practical tips.
The key takeaway here is that more complex models are not always better. Use visualizations and cross validation to prevent overfitting! (We will learn more about this soon!)
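To make this concrete, here is a rough sketch that reuses the noisy `X_ii_noisey` and `y_ii` arrays from above, holds out the last three points (a crude split, purely for illustration), and compares train and hold-out error as the polynomial degree grows:

```python
import numpy as np

x_all = np.asarray(X_ii_noisey, dtype=float)
y_all = np.asarray(y_ii, dtype=float)
x_tr, x_te = x_all[:-3], x_all[-3:]   # crude hold-out of the last three points
y_tr, y_te = y_all[:-3], y_all[-3:]

for degree in (1, 2, 3, 5):
    p = np.polyfit(x_tr, y_tr, degree)
    mse_tr = np.mean((np.polyval(p, x_tr) - y_tr) ** 2)
    mse_te = np.mean((np.polyval(p, x_te) - y_te) ** 2)
    print("degree %d: train MSE %.2f, hold-out MSE %.2f" % (degree, mse_tr, mse_te))
```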
Now, let us work on the third dataset from the quartet. | #read csv
anscombe_iii = pd.read_csv('../datasets/anscombe_iii.csv')
plt.scatter(anscombe_iii.x, anscombe_iii.y, color='black')
plt.ylabel("Y")
plt.xlabel("X")
| tutorials/Non-Linear-Regression-Tutorial.ipynb | datascienceguide/datascienceguide.github.io | mit |
It is obvious that there is an outlier which is going to cause a poor fit for an ordinary linear regression. One option is filtering out the outlier: you could manually hardcode the removal of any value which seems incorrect, or, better, remove any point lying more than a given number of standard deviations away from an initial linear fit and then refit a line to the remaining data points (a sketch of this approach follows the RANSAC example below). Arguably, an even better method is the RANSAC algorithm (demonstrated below) from the scikit-learn documentation on linear models, or Theil-Sen regression. | from sklearn import linear_model
X_iii = anscombe_iii.x.reshape((len(anscombe_iii), 1))
#fit basic linear model
model = linear_model.LinearRegression()
model.fit(X_iii, anscombe_iii.y)
# Robustly fit linear model with RANSAC algorithm
model_ransac = linear_model.RANSACRegressor(linear_model.LinearRegression())
model_ransac.fit(X_iii, anscombe_iii.y)
inlier_mask = model_ransac.inlier_mask_
outlier_mask = np.logical_not(inlier_mask)
plt.plot(X_iii,model.predict(X_iii), color='blue',linewidth=3, label='Linear regressor')
plt.plot(X_iii,model_ransac.predict(X_iii), color='red', linewidth=3, label='RANSAC regressor')
plt.plot(X_iii[inlier_mask], anscombe_iii.y[inlier_mask], '.k', label='Inliers')
plt.plot(X_iii[outlier_mask], anscombe_iii.y[outlier_mask], '.g', label='Outliers')
plt.ylabel("Y")
plt.xlabel("X")
plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
ncol=2, mode="expand", borderaxespad=0.)
| tutorials/Non-Linear-Regression-Tutorial.ipynb | datascienceguide/datascienceguide.github.io | mit |
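For completeness, here is a rough sketch of the sigma-clipping idea mentioned above (fit a line, drop points with large residuals, refit). It reuses `X_iii` and `anscombe_iii` from the cell above; the 2-standard-deviation threshold is an arbitrary choice:

```python
import numpy as np
from sklearn import linear_model

# initial fit on all points
base = linear_model.LinearRegression().fit(X_iii, anscombe_iii.y)
residuals = anscombe_iii.y - base.predict(X_iii)

# keep only the points whose residual is within 2 standard deviations
keep = np.abs(residuals) < 2 * residuals.std()
clipped = linear_model.LinearRegression().fit(X_iii[keep], anscombe_iii.y[keep])

print("kept %d of %d points" % (keep.sum(), len(keep)))
print("slope before: %.3f, after clipping: %.3f" % (base.coef_[0], clipped.coef_[0]))
```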
The takeaway here is to read the documentation and see if there is an already-implemented method for solving your problem. Chances are there are prepackaged solutions; you just need to learn about them. Let's move on to the final quartet. | #read csv
anscombe_ii = pd.read_csv('../datasets/anscombe_iv.csv')
plt.scatter(anscombe_ii.x, anscombe_ii.y, color='black')
plt.ylabel("Y")
plt.xlabel("X") | tutorials/Non-Linear-Regression-Tutorial.ipynb | datascienceguide/datascienceguide.github.io | mit |
In this example, we can see that the x values stay constant except for one measurement where x varies. Since we are trying to predict y in terms of x, as an analyst I would not use any model to describe this data, and would state that more data with different values of x is required. Additionally, depending on the problem, I could remove the outliers and treat this as univariate data.
The takeaway here is that sometimes a useful model cannot be made (garbage in, garbage out) until better data is available.
Non-linear and robust regression
Due to time restrictions, I cannot present every method for regression, but depending on your specific problem and data, there are many other regression techniques which can be used:
http://scikit-learn.org/stable/auto_examples/ensemble/plot_adaboost_regression.html#example-ensemble-plot-adaboost-regression-py
http://scikit-learn.org/stable/auto_examples/neighbors/plot_regression.html#example-neighbors-plot-regression-py
http://scikit-learn.org/stable/auto_examples/svm/plot_svm_regression.html#example-svm-plot-svm-regression-py
http://scikit-learn.org/stable/auto_examples/plot_isotonic_regression.html
http://statsmodels.sourceforge.net/devel/examples/notebooks/generated/ols.html
http://statsmodels.sourceforge.net/devel/examples/notebooks/generated/robust_models_0.html
http://statsmodels.sourceforge.net/devel/examples/notebooks/generated/glm.html
http://statsmodels.sourceforge.net/devel/examples/notebooks/generated/gls.html
http://statsmodels.sourceforge.net/devel/examples/notebooks/generated/wls.html
http://cars9.uchicago.edu/software/python/lmfit/
Bonus example: Piecewise linear curve fitting
While I usually prefer more robustly implemented algorithms such as ridge or decision-tree-based regression (because with many features it becomes difficult to determine an adequate model for each feature), regression can also be done by fitting a piecewise function. Taken from here. | import numpy as np
x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], dtype=float)
y = np.array([5, 7, 9, 11, 13, 15, 28.92, 42.81, 56.7, 70.59, 84.47, 98.36, 112.25, 126.14, 140.03])
plt.scatter(x, y)
from scipy import optimize
def piecewise_linear(x, x0, y0, k1, k2):
return np.piecewise(x, [x < x0], [lambda x:k1*x + y0-k1*x0, lambda x:k2*x + y0-k2*x0])
p , e = optimize.curve_fit(piecewise_linear, x, y)
xd = np.linspace(0, 15, 100)
plt.scatter(x, y)
plt.plot(xd, piecewise_linear(xd, *p)) | tutorials/Non-Linear-Regression-Tutorial.ipynb | datascienceguide/datascienceguide.github.io | mit |
Bonus example 2: Piecewise Non-linear Curve Fitting
Now let us extend this to piecewise non-linear Curve Fitting. Taken from here | #Piecewise function defining 2nd deg, 1st degree and 3rd degree exponentials
def piecewise_linear(x, x0, x1, y0, y1, k1, k2, k3, k4, k5, k6):
return np.piecewise(x, [x < x0, x>= x0, x> x1], [lambda x:k1*x + k2*x**2, lambda x:k3*x + y0, lambda x: k4*x + k5*x**2 + k6*x**3 + y1])
#Getting data using Pandas
df = pd.read_csv("../datasets/non-linear-piecewise.csv")
ms = df["ms"].values
degrees = df["Degrees"].values
plt.scatter(ms, degrees)
#Setting linspace and making the fit, make sure to make you data numpy arrays
x_new = np.linspace(ms[0], ms[-1], dtype=float)
m = np.array(ms, dtype=float)
deg = np.array(degrees, dtype=float)
guess = np.array( [100, 500, -30, 350, -0.1, 0.0051, 1, -0.01, -0.01, -0.01], dtype=float)
p , e = optimize.curve_fit(piecewise_linear, m, deg, p0=guess)
#Plotting data and fit
plt.plot(x_new, piecewise_linear(x_new, *p), '-', ms[::20], degrees[::20], 'o')
| tutorials/Non-Linear-Regression-Tutorial.ipynb | datascienceguide/datascienceguide.github.io | mit |
So this evidence doesn't "move the needle" very much.
Exercise: Suppose other evidence had made you 90% confident of Oliver's guilt. How much would this exculpatory evidence change your beliefs? What if you initially thought there was only a 10% chance of his guilt?
Notice that evidence with the same strength has a different effect on probability, depending on where you started. | # Solution
post_odds = Odds(0.9) * like1 / like2
Probability(post_odds)
# Solution
post_odds = Odds(0.1) * like1 / like2
Probability(post_odds) | code/chap05soln.ipynb | NathanYee/ThinkBayes2 | gpl-2.0 |
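To see why the starting point matters, it helps to write the update in terms of odds (a sketch; here $K$ stands for the likelihood ratio `like1/like2` used in the solution cells above):
$$ o_{post} = K \, o_{prior}, \qquad p_{post} = \frac{o_{post}}{1 + o_{post}} $$
Because the mapping from odds to probability is nonlinear, multiplying the odds by the same factor $K$ moves the probability by different amounts: the shift is largest near $p = 0.5$ and becomes tiny as the prior approaches 0 or 1.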
Exercises
Exercise: Suppose you are having a dinner party with 10 guests and 4 of them are allergic to cats. Because you have cats, you expect 50% of the allergic guests to sneeze during dinner. At the same time, you expect 10% of the non-allergic guests to sneeze. What is the distribution of the total number of guests who sneeze? | # Solution
n_allergic = 4
n_non = 6
p_allergic = 0.5
p_non = 0.1
pmf = MakeBinomialPmf(n_allergic, p_allergic) + MakeBinomialPmf(n_non, p_non)
thinkplot.Hist(pmf)
# Solution
pmf.Mean() | code/chap05soln.ipynb | NathanYee/ThinkBayes2 | gpl-2.0 |
Exercise This study from 2015 showed that many subjects diagnosed with non-celiac gluten sensitivity (NCGS) were not able to distinguish gluten flour from non-gluten flour in a blind challenge.
Here is a description of the study:
"We studied 35 non-CD subjects (31 females) that were on a gluten-free diet (GFD), in a double-blind challenge study. Participants were randomised to receive either gluten-containing flour or gluten-free flour for 10 days, followed by a 2-week washout period and were then crossed over. The main outcome measure was their ability to identify which flour contained gluten.
"The gluten-containing flour was correctly identified by 12 participants (34%)..."
Since 12 out of 35 participants were able to identify the gluten flour, the authors conclude "Double-blind gluten challenge induces symptom recurrence in just one-third of patients fulfilling the clinical diagnostic criteria for non-coeliac gluten sensitivity."
This conclusion seems odd to me, because if none of the patients were sensitive to gluten, we would expect some of them to identify the gluten flour by chance. So the results are consistent with the hypothesis that none of the subjects are actually gluten sensitive.
We can use a Bayesian approach to interpret the results more precisely. But first we have to make some modeling decisions.
Of the 35 subjects, 12 identified the gluten flour based on resumption of symptoms while they were eating it. Another 17 subjects wrongly identified the gluten-free flour based on their symptoms, and 6 subjects were unable to distinguish. So each subject gave one of three responses. To keep things simple I follow the authors of the study and lump together the second two groups; that is, I consider two groups: those who identified the gluten flour and those who did not.
I assume (1) people who are actually gluten sensitive have a 95% chance of correctly identifying gluten flour under the challenge conditions, and (2) subjects who are not gluten sensitive have only a 40% chance of identifying the gluten flour by chance (and a 60% chance of either choosing the other flour or failing to distinguish).
Using this model, estimate the number of study participants who are sensitive to gluten. What is the most likely number? What is the 95% credible interval? | # Solution
# Here's a class that models the study
class Gluten(Suite):
def Likelihood(self, data, hypo):
"""Computes the probability of the data under the hypothesis.
data: tuple of (number who identified, number who did not)
hypothesis: number of participants who are gluten sensitive
"""
# compute the number who are gluten sensitive, `gs`, and
# the number who are not, `ngs`
gs = hypo
yes, no = data
n = yes + no
ngs = n - gs
pmf1 = MakeBinomialPmf(gs, 0.95)
pmf2 = MakeBinomialPmf(ngs, 0.4)
pmf = pmf1 + pmf2
return pmf[yes]
# Solution
prior = Gluten(range(0, 35+1))
thinkplot.Pdf(prior)
# Solution
posterior = prior.Copy()
data = 12, 23
posterior.Update(data)
# Solution
thinkplot.Pdf(posterior)
thinkplot.Config(xlabel='# who are gluten sensitive',
ylabel='PMF', legend=False)
# Solution
posterior.CredibleInterval(95) | code/chap05soln.ipynb | NathanYee/ThinkBayes2 | gpl-2.0 |
Exercise Coming soon: the space invaders problem. | # Solution
# Solution
# Solution
# Solution
# Solution
# Solution
# Solution
# Solution
| code/chap05soln.ipynb | NathanYee/ThinkBayes2 | gpl-2.0 |
MBEYA is leading with 1046.
We can visualize our data further using seaborn. | sns.set_context("notebook")
#lets get mean Pass Rate
mean_pass = df.PASS_RATE.mean()
print mean_pass, df.PASS_RATE.median()
with sns.axes_style("whitegrid"):
df.PASS_RATE.hist(bins=30, alpha=0.4);
plt.axvline(mean_pass, 0, 0.75, color='r', label='Mean')
plt.xlabel("Pass Rate")
plt.ylabel("Counts")
    plt.title("Passing Rate Histogram")
plt.legend()
sns.despine()
with sns.axes_style("whitegrid"):
df.CHANGE_PREVIOUS_YEAR.hist(bins=15, alpha=0.6, color='r');
    plt.xlabel("Change of passing rate compared to 2013")
    plt.ylabel("Number of schools")
    plt.title("Change of Passing Rate Histogram")
plt.legend()
with sns.axes_style("whitegrid"):
df.AVG_MARK.hist(bins=40,alpha=0.6, color='g')
    plt.xlabel("Average mark per school")
    plt.ylabel("Number of schools")
    plt.title("Average Marks Histogram")
plt.legend() | .ipynb_checkpoints/Education-checkpoint.ipynb | MAKOSCAFEE/AllNotebooks | mit |
Definition of time-dependent Qobj
A QobjEvo is defined from a list of Qobj:
[Qobj0, [Qobj1, coeff1], [Qobj2, coeff2]]
coeff can be one of:
- function
- string
- np.array | # Definition of base Qobj and
N = 4
def sin_w(t, args):
    return np.sin(args["w"]*t)
def cos_w(t, args):
return np.cos(args["w"]*t)
tlist = np.linspace(0,10,10000)
tlistlog = np.logspace(-3,1,10000)
# constant QobjEvo
cte_QobjEvo = QobjEvo(destroy(N))
cte_QobjEvo(1)
# QobjEvo with function based coeff
func_QobjEvo = QobjEvo([destroy(N),[qeye(N),cos_w]],args={"w":2})
func_QobjEvo(1)
# QobjEvo with sting based coeff
str_QobjEvo = QobjEvo([destroy(N),[qeye(N),"cos(w*t)"]],args={"w":2})
str_QobjEvo(1)
# QobjEvo with array based coeff
array_QobjEvo = QobjEvo([destroy(N),[qeye(N),np.cos(2*tlist)]],tlist=tlist)
array_QobjEvo(1)
# QobjEvo with array based coeff, log timescale
Log_array_QobjEvo = QobjEvo([destroy(N),[qeye(N),np.cos(2*tlistlog)]],tlist=tlistlog)
Log_array_QobjEvo(1)
# Reference
destroy(N) + qeye(N) * np.cos(2) | development/development-qobjevo.ipynb | qutip/qutip-notebooks | lgpl-3.0 |
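The different coefficient types can also be combined within a single QobjEvo (the next section notes that mixing coefficient types is supported); a minimal sketch reusing `N` and `cos_w` from above:

```python
# one constant term, one function-based and one string-based coefficient
mixed_QobjEvo = QobjEvo([destroy(N),
                         [qeye(N), cos_w],
                         [create(N), "sin(w*t)"]],
                        args={"w": 2})
mixed_QobjEvo(1)
```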
Mathematical operations
addition (QobjEvo, Qobj)
subtraction (QobjEvo, Qobj)
product (QobjEvo, Qobj, scalar)
division (scalar)
The examples are done with function type coefficients only, but work for any type of coefficient.
Mixing coefficient types is possible; however, this support would be removed if QobjEvo * QobjEvo is to be implemented. | # Build objects
o1 = QobjEvo([qeye(N),[destroy(N),sin_w]],args={"w":2})
o2 = QobjEvo([qeye(N),[create(N),cos_w]],args={"w":2})
t = np.random.random()*10
# addition and subtraction
o3 = o1 + o2
print(o3(t) == o1(t) + o2(t))
o3 = o1 - o2
print(o3(t) == o1(t) - o2(t))
o3 = o1 + destroy(N)
print(o3(t) == o1(t) + destroy(N))
o3 = o1 - destroy(N)
print(o3(t) == o1(t) - destroy(N))
# product
oc = QobjEvo([qeye(N)])
o3 = o1 * destroy(N)
print(o3(t) == o1(t) * destroy(N))
o3 = o1 * (0.5+0.5j)
print(o3(t) == o1(t) * (0.5+0.5j))
o3 = o1 / (0.5+0.5j)
print(o3(t) == o1(t) / (0.5+0.5j))
o3 = o1 * oc
print(o3(t) == o1(t) * oc(t))
o3 = oc * o1
print(o3(t) == oc(t) * o1(t))
o3 = o1 * o2
print(o3(t) == o1(t) * o2(t))
o1 = QobjEvo([qeye(N),[destroy(N),sin_w]],args={"w":2})
o2 = QobjEvo([qeye(N),[create(N),cos_w]],args={"w":2})
o1 += o2
print(o1(t) == (qeye(N)*2 + destroy(N)*sin_w(t,args={"w":2}) + create(N)*cos_w(t,args={"w":2})))
o1 = QobjEvo([qeye(N),[destroy(N),sin_w]],args={"w":2})
o2 = QobjEvo([qeye(N),[create(N),cos_w]],args={"w":2})
o1 -= o2
print(o1(t) == (destroy(N)*sin_w(t,args={"w":2}) - create(N)*cos_w(t,args={"w":2})))
o1 = QobjEvo([qeye(N),[destroy(N),sin_w]],args={"w":2})
o2 = QobjEvo([qeye(N),[create(N),cos_w]],args={"w":2})
o1 += -o2
print(o1(t) == (destroy(N)*sin_w(t,args={"w":2}) - create(N)*cos_w(t,args={"w":2})))
o1 = QobjEvo([qeye(N),[destroy(N),sin_w]],args={"w":2})
o1 *= destroy(N)
print(o1(t) == (destroy(N) + destroy(N)*destroy(N)*sin_w(t,args={"w":2}))) | development/development-qobjevo.ipynb | qutip/qutip-notebooks | lgpl-3.0 |
Conjugation, transpose and adjoint:
conj
dag
trans
_cdc: QobjEvo.dag * QobjEvo | o_real = QobjEvo([qeye(N),[destroy(N), sin_w]], args={"w":2})
o_cplx = QobjEvo([qeye(N),[create(N), cos_w]], args={"w":-1j})
print(o_real(t).trans() == o_real.trans()(t))
print(o_real(t).conj() == o_real.conj()(t))
print(o_real(t).dag() == o_real.dag()(t))
print(o_cplx(t).trans() == o_cplx.trans()(t))
print(o_cplx(t).conj() == o_cplx.conj()(t))
print(o_cplx(t).dag() == o_cplx.dag()(t))
# the operator norm correspond to c.dag * c.
td_cplx_f0 = qobjevo.QobjEvo([qeye(N)])
td_cplx_f1 = qobjevo.QobjEvo([qeye(N),[destroy(N)*create(N),sin_w]], args={'w':2.+0.001j})
td_cplx_f2 = qobjevo.QobjEvo([qeye(N),[destroy(N),cos_w]], args={'w':2.+0.001j})
td_cplx_f3 = qobjevo.QobjEvo([qeye(N),[create(N),1j*np.sin(tlist)]], tlist=tlist)
print(td_cplx_f0(t).dag()*td_cplx_f0(t) == td_cplx_f0._cdc()(t))
print(td_cplx_f1(t).dag()*td_cplx_f1(t) == td_cplx_f1._cdc()(t))
print(td_cplx_f2(t).dag()*td_cplx_f2(t) == td_cplx_f2._cdc()(t))
print(td_cplx_f3(t).dag()*td_cplx_f3(t) == td_cplx_f3._cdc()(t)) | development/development-qobjevo.ipynb | qutip/qutip-notebooks | lgpl-3.0 |
Liouvillian and Lindblad dissipator, to use in solvers
Functions in qutip.superoperator can be used for QobjEvo. | td_L = liouvillian(H=func_QobjEvo)
L = liouvillian(H=func_QobjEvo(t))
td_L(t) == L
td_cplx_f0 = qobjevo.QobjEvo([qeye(N)])
td_cplx_f1 = qobjevo.QobjEvo([[destroy(N)*create(N),sin_w]], args={'w':2.})
td_L = liouvillian(H=func_QobjEvo,c_ops=[td_cplx_f0,td_cplx_f1])
L = liouvillian(H=func_QobjEvo(t),c_ops=[td_cplx_f0(t),td_cplx_f1(t)])
print(td_L(t) == L)
td_P = spre(td_cplx_f1)
P = spre(td_cplx_f1(t))
print(td_P(t) == P) | development/development-qobjevo.ipynb | qutip/qutip-notebooks | lgpl-3.0 |
Getting the list back for the object | print(td_L.to_list()) | development/development-qobjevo.ipynb | qutip/qutip-notebooks | lgpl-3.0 |
Arguments modification
To change the args: qobjevo.arguments(new_args)
Call with other arguments without changing them: qobjevo.with_args(t, new_args) | def Args(t, args):
return args['w']
td_args = qobjevo.QobjEvo([qeye(N), Args],args={'w':1.})
print(td_args(t) == qeye(N))
td_args.arguments({'w':2.})
print(td_args(t) == qeye(N)*2)
print(td_args(t,args={'w':3.}) == qeye(N)*3) | development/development-qobjevo.ipynb | qutip/qutip-notebooks | lgpl-3.0 |
When summing QobjEvo objects that have an argument in common, only one value is kept. | td_args_1 = qobjevo.QobjEvo([qeye(N), [destroy(N), Args]],args={'w':1.})
td_args_2 = qobjevo.QobjEvo([qeye(N), [destroy(N), Args]],args={'w':2.})
td_str_sum = td_args_1 + td_args_2
# Only one value for args is kept
print(td_str_sum(t) == td_args_1(t) + td_args_2(t))
print(td_str_sum(t) == 2*td_args_2(t))
# Updating args affect all part
td_str_sum.arguments({'w':1.})
print(td_str_sum(t) == 2*td_args_1(t)) | development/development-qobjevo.ipynb | qutip/qutip-notebooks | lgpl-3.0 |
Arguments with different names are fine. | def Args2(t, args):
return args['x']
td_args_1 = qobjevo.QobjEvo([qeye(N), [destroy(N), cos_w]],args={'w':1.})
td_args_2 = qobjevo.QobjEvo([qeye(N), [destroy(N), Args2]],args={'x':2.})
td_str_sum = td_args_1 + td_args_2
# Only one value for args is kept
print(td_str_sum(t) == td_args_1(t) + td_args_2(t)) | development/development-qobjevo.ipynb | qutip/qutip-notebooks | lgpl-3.0 |
Other | # Obtain the sparce matrix at a time t instead of a Qobj
str_QobjEvo(1, data=True)
# Test is the QobjEvo does depend on time
print(cte_QobjEvo.const)
print(str_QobjEvo.const)
# Obtain the size, shape, oper flag etc:
# The QobjEvo.cte always exist and contain the constant part of the QobjEvo
# It can be used to get the shape, etc. since the QobjEvo do not directly have them.
td_cplx_f1 = qobjevo.QobjEvo([[destroy(N)*create(N),sin_w]], args={'w':2.})
print(td_cplx_f1.cte.dims)
print(td_cplx_f1.cte.shape)
print(td_cplx_f1.cte.isoper)
print(td_cplx_f1.cte)
# Creating a copy
str_QobjEvo_2 = str_QobjEvo.copy()
str_QobjEvo_2 += 1
str_QobjEvo_2(1) - str_QobjEvo(1)
about() | development/development-qobjevo.ipynb | qutip/qutip-notebooks | lgpl-3.0 |
Aperture photometry with SExtractor | # sep is a Python interface to the code SExtractor libraries.
# See https://sep.readthedocs.io/ for documentation.
import sep
import numpy as np
from astropy.io import fits
import matplotlib.pyplot as plt
from matplotlib import rcParams
%matplotlib inline
rcParams['figure.figsize'] = [10., 8.]
# read image into standard 2-d numpy array
hdul = fits.open("three_sources_two_overlap.fits")
data = hdul[2].data
data = data.byteswap().newbyteorder()
# show the image
m, s = np.mean(data), np.std(data)
plt.imshow(data, interpolation='nearest', cmap='gray', vmin=m-s, vmax=m+s, origin='lower')
plt.colorbar();
# measure a spatially varying background on the image
bkg = sep.Background(data)
# get a "global" mean and noise of the image background:
print(bkg.globalback)
print(bkg.globalrms)
# evaluate background as 2-d array, same size as original image
bkg_image = bkg.back()
# bkg_image = np.array(bkg) # equivalent to above
# show the background
plt.imshow(bkg_image, interpolation='nearest', cmap='gray', origin='lower')
plt.colorbar();
# subtract the background
data_sub = data - bkg
objs = sep.extract(data_sub, 1.5, err=bkg.globalrms)
# how many objects were detected
len(objs)
from matplotlib.patches import Ellipse
# plot background-subtracted image
fig, ax = plt.subplots()
m, s = np.mean(data_sub), np.std(data_sub)
im = ax.imshow(data_sub, interpolation='nearest', cmap='gray',
vmin=m-s, vmax=m+s, origin='lower')
# plot an ellipse for each object
for i in range(len(objs)):
e = Ellipse(xy=(objs['x'][i], objs['y'][i]),
width=6*objs['a'][i],
height=6*objs['b'][i],
angle=objs['theta'][i] * 180. / np.pi)
e.set_facecolor('none')
e.set_edgecolor('red')
ax.add_artist(e)
nelecs_per_nmgy = hdul[2].header["CLIOTA"]
data_sub.sum() / nelecs_per_nmgy
kronrad, krflag = sep.kron_radius(data, objs['x'], objs['y'], objs['a'], objs['b'], objs['theta'], 6.0)
flux, fluxerr, flag = sep.sum_ellipse(data, objs['x'], objs['y'], objs['a'], objs['b'], objs['theta'],
kronrad, subpix=1)
flux_nmgy = flux / nelecs_per_nmgy
fluxerr_nmgy = fluxerr / nelecs_per_nmgy
for i in range(len(objs)):
print("object {:d}: flux = {:f} +/- {:f}".format(i, flux_nmgy[i], fluxerr_nmgy[i]))
kronrad, krflag = sep.kron_radius(data, objs['x'], objs['y'], objs['a'], objs['b'], objs['theta'], 4.5)
flux, fluxerr, flag = sep.sum_ellipse(data, objs['x'], objs['y'], objs['a'], objs['b'], objs['theta'],
kronrad, subpix=1)
flux_nmgy = flux / nelecs_per_nmgy
fluxerr_nmgy = fluxerr / nelecs_per_nmgy
for i in range(len(objs)):
print("object {:d}: flux = {:f} +/- {:f}".format(i, flux_nmgy[i], fluxerr_nmgy[i])) | experiments/galsim.ipynb | jeff-regier/Celeste.jl | mit |
Celeste.jl estimates these flux densities much better. The galsim_julia.ipynb notebook shows a run of Celeste.jl on the same data.
Comparison to the Hyper Suprime-Cam (HSC) software pipeline
HSC often fails to deblend images with three light sources in a row, including the following one:
"The single biggest failure mode of the deblender occurs when three or more peaks in a blend appear in a straight
line" -- Bosch, et al. "The Hyper Suprime-Cam software pipeline." (2018)
So let's use galsim to generate an image with three peaks in a row! | def three_sources_in_a_row(test_case):
x = [-11, -1, 12]
test_case.add_galaxy().offset_arcsec(x[0], 0.3 * x[0]).gal_angle_deg(45)
test_case.add_galaxy().offset_arcsec(x[1], 0.3 * x[1]).flux_r_nmgy(3)
test_case.add_star().offset_arcsec(x[2], 0.3 * x[2]).flux_r_nmgy(3)
test_case.include_noise = True
galsim_helper.generate_fits_file("three_sources_in_a_row", [three_sources_in_a_row, ])
hdul = fits.open("three_sources_in_a_row.fits")
data = hdul[2].data
data = data.byteswap().newbyteorder()
# show the image
m, s = np.mean(data), np.std(data)
fig = plt.imshow(data, interpolation='nearest', cmap='gray', vmin=m-s, vmax=m+s, origin='lower')
plt.colorbar(); | experiments/galsim.ipynb | jeff-regier/Celeste.jl | mit |
The control freak sequence
Let us first explore a time-dependent pump which for all times $t$ is in the dimerized limit, that is, either the intracell hopping $v(t)$ or the intercell hopping $w(t)$ is zero. | def f(t):
'''
A piecewise function for the control freak sequence
used to define u(t),v(t),w(t)
'''
t=mod(t,1);
return (
8*t*((t>=0)&(t<1/8))+\
(0*t+1)*((t>=1/8)&(t<3/8))+\
(4-8*t)*((t>=3/8)&(t<1/2))+\
0*t*((t>=1/2)&(t<1)));
def uvwCF(t):
'''
u,v and w functions of the control freak sequence
'''
return array([f(t)-f(t-1/2),2*f(t+1/4),f(t-1/4)]) | RM.ipynb | oroszl/topins | gpl-2.0 |
Below we write a generic function that takes the functions $u(t)$, $v(t)$ and $w(t)$ as arguments and then visualizes the pumping process in $d$-space. We will use this function to explore the control freak sequence and, later on, the not-so-control-freak sequence. | def seq_and_d(funcs,ti=10):
'''
A figure generating function for the Rice Mele model.
It plots the functions defining the sequence and the d-space structure.
'''
figsize(10,5)
fig=figure()
func=eval(funcs);
ax1=fig.add_subplot(121)
ftsz=20
# plotting the functions defining the sequence
plot(tran[:,0],func(tran[:,0])[1],'k-',label=r'$v$',linewidth=3)
plot(tran[:,0],func(tran[:,0])[2],'g--',label=r'$w$',linewidth=3)
plot(tran[:,0],func(tran[:,0])[0],'m-',label=r'$u$',linewidth=3)
plot([tran[ti,0],tran[ti,0]],[-3,3],'r-',linewidth=3)
# this is just to make things look like in the book
ylim(-1.5,2.5)
legend(fontsize=20,loc=3)
xlabel(r'time $t/T$',fontsize=ftsz)
xticks(linspace(0,1,5),[r'$0$',r'$0.25$',r'$0.5$',r'$0.75$',r'$1$'],fontsize=ftsz)
ylabel(r'amplitudes $u,v,w$',fontsize=ftsz)
yticks([-1,0,1,2],[r'$-1$',r'$0$',r'$1$',r'$2$'],fontsize=ftsz)
grid(True)
ax2=fig.add_subplot(122, projection='3d')
# plotting d space image of the pumping sequence
plot(*dkt(kran[ti,:],tran[ti,:],func),marker='o',mec='red',mfc='red',ls='-',lw=6,color='red')
plot(*dkt(kran.flatten(),tran.flatten(),func),color='blue',alpha=0.5)
# this is just to make things look like in the book
# basically everything below is just to make things look nice..
ax2.w_xaxis.set_pane_color((1.0, 1.0, 1.0, 1.0))
ax2.w_yaxis.set_pane_color((1.0, 1.0, 1.0, 1.0))
ax2.w_zaxis.set_pane_color((1.0, 1.0, 1.0, 1.0))
ax2.set_axis_off()
ax2.grid(False)
arrprop=dict(mutation_scale=20, lw=1,arrowstyle='-|>,head_length=1.4,head_width=0.6',color="k")
ax2.add_artist(Arrow3D([-2,4],[0,0],[0,0], **arrprop))
ax2.add_artist(Arrow3D([0,0],[-2,3.3],[0,0], **arrprop))
ax2.add_artist(Arrow3D([0,0],[0,0],[-1,2], **arrprop))
ftsz2=30
ax2.text(4.4, -1, 0, r'$d_x$', None,fontsize=ftsz2)
ax2.text(0.3, 3.0, 0, r'$d_y$', None,fontsize=ftsz2)
ax2.text(0, 0.6, 2.0, r'$d_z$', None,fontsize=ftsz2)
ax2.plot([0],[0],[0],'ko',markersize=8)
ax2.view_init(elev=21., azim=-45)
ax2.set_aspect(1.0)
ax2.set_zlim3d(-0.5, 2)
ax2.set_ylim3d(-0.5, 2)
ax2.set_xlim3d(-0.5, 2)
tight_layout()
| RM.ipynb | oroszl/topins | gpl-2.0 |
Now let us see what happens as time proceeds! | interact(seq_and_d,funcs=fixed('uvwCF'),ti=(0,len(tran[:,0])-1)); | RM.ipynb | oroszl/topins | gpl-2.0 |
Now that we have explored the momentum-space behaviour, let us again look at a small real-space sample! First we define a function that generates Rice-Mele-type finite lattice Hamiltonians for given values of $u$, $v$ and $w$. | def H_RM_reals(L,u,v,w,**kwargs):
'''
    A function to build a finite RM chain.
    The number of unit cells is L.
    As usual v is the intracell and w is the intercell hopping.
    We also now have an asymmetric sublattice potential u.
'''
idL=eye(L); # identity matrix of dimension L
odL=diag(ones(L-1),1);# upper off diagonal matrix with ones of size L
odc=matrix(diag([1],-L+1));#lower corner for periodic boundary condition
U=matrix([[u,v],[v,-u]]) # intracell
T=matrix([[0,0],[1,0]]) # intercell
p=0
if kwargs.get('periodic',False):
p=1
H=(kron(idL,U)+
kron(odL,w*T)+
kron(odL,w*T).H+
p*(kron(odc,w*T)+kron(odc,w*T).H))
return H
| RM.ipynb | oroszl/topins | gpl-2.0 |
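As a quick sanity check of the function above (a sketch, relying on the pylab-style imports used throughout this notebook): in the fully dimerized limits the open chain should only show zero-energy eigenvalues when the nonzero hopping is the intercell one.

```python
# u=0 everywhere; compare the two dimerized limits of a short open chain
for (u, v, w) in [(0, 1, 0), (0, 0, 1)]:
    evals = eigh(H_RM_reals(6, u, v, w))[0]
    print((u, v, w), around(evals, 3))
```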
Next we define a class that we will mainly use to hold data about our pumping sequence. The information in these objects will be used to visualize the spectrum and wavefunctions of bulk and edge localized states. | class pumpdata:
'''
A class that holds information on spectrum and wavefunctions
of a pump sequence performed on a finite lattice model.
Default values are tailored to the control freak sequence.
'''
def __init__(self,L=10,numLoc=1,norm_treshold=0.99,func=uvwCF,**kwargs):
'''
Initialization function. The default values are set in such a way that they correspond
to the control freak sequence.
'''
self.L=L
self.dat=[] # We will collect the data to be
self.vecdat=[] # plotted in these arrays.
self.lefty=[]
self.righty=[]
self.lefty=[]
self.righty=[]
tlim=kwargs.get('edge_tlim',(0,1)) # We can use this to restrict classification
# of left and right localized states in time
for t in tran[:,0]:
u,v,w=func(t) # obtain u(t),v(t) and w(t)
H=H_RM_reals(L,u,v,w) #
eigdat=eigh(H); # for a given t here we calculate the eigensystem (values and vectors)
if tlim[0]<t<tlim[1]:
# for the interesting time intervall we look for states localized to the edge
for i in range(2*L):
if sum((array(eigdat[1][0::2,i])**2+array(eigdat[1][1::2,i])**2)[0:2*numLoc:2])>norm_treshold:
self.lefty=append(self.lefty,[[t,eigdat[0][i]]]);
if sum((array(eigdat[1][0::2,i])**2+array(eigdat[1][1::2,i])**2)[:L-2*numLoc:-2])>norm_treshold:
self.righty=append(self.righty,[[t,eigdat[0][i]]]);
self.dat=append(self.dat,eigdat[0]);
self.vecdat=append(self.vecdat,eigdat[1]);
self.dat=reshape(self.dat,[len(tran[:,0]),2*L]); # rewraping the data
self.vecdat=reshape(self.vecdat,[len(tran[:,0]),2*L,2*L]) # to be more digestable
| RM.ipynb | oroszl/topins | gpl-2.0 |
Now let us create an instance of the above class with the data of the control freak pump sequence: | # Filling up data for the control freak sequence
CFdata=pumpdata(edge_tlim=(0.26,0.74)) | RM.ipynb | oroszl/topins | gpl-2.0 |
Finally we write a simple function to visualize the spectrum and the wavefunctions in a symmilar fashion as we did for the SSH model. We shall now explicitly mark the edge states in the spectrum with red and blue. | def enpsi(PD,ti=10,n=10):
figsize(14,5)
subplot(121)
lcol='#53a4d7'
rcol='#d7191c'
# Plotting the eigenvalues and
# a marker showing for which state
# we are exploring the wavefunction
plot(tran[:,0],PD.dat,'k-');
(lambda x:plot(x[:,0],x[:,1],'o',mec=lcol,mfc=lcol,
markersize=10))(reshape(PD.lefty,(PD.lefty.size/2,2)))
(lambda x:plot(x[:,0],x[:,1],'o',mec=rcol,mfc=rcol,
markersize=10))(reshape(PD.righty,(PD.righty.size/2,2)))
plot(tran[ti,0],PD.dat[ti,n],'o',markersize=13,mec='k',mfc='w')
# Make it look like the book
xlabel(r'$t/T$',fontsize=25);
xticks(linspace(0,1,5),fontsize=25)
ylabel(r'energy $E$',fontsize=25);
yticks(fontsize=25)
ylim(-2.99,2.99)
grid()
subplot(122)
# Plotting the sublattice resolved wavefunction
bar(array(range(0,2*PD.L,2)), real(array(PD.vecdat[ti][0::2,n].T)),0.9,color='grey',label='A') # sublattice A
bar(array(range(0,2*PD.L,2))+1,real(array(PD.vecdat[ti][1::2,n].T)),0.9,color='white',label='B') # sublattice B
# Make it look like the book
xticks(2*(array(range(10))),[' '+str(i) for i in array(range(11))[1:]],fontsize=25)
ylim(-1.2,1.2)
yticks(linspace(-1,1,5),fontsize=25,x=1.2)
ylabel('Wavefunction',fontsize=25,labelpad=-460,rotation=-90)
grid()
legend(loc='lower right')
xlabel(r'cell index $m$',fontsize=25);
tight_layout() | RM.ipynb | oroszl/topins | gpl-2.0 |
We can now interact with the above function and see the evolution of the surface states. | interact(enpsi,PD=fixed(CFdata),ti=(0,len(tran[:,0])-1),n=(0,19)); | RM.ipynb | oroszl/topins | gpl-2.0 |
To complete the analysis of the control freak sequence we now investigate the flow of Wannier centers in time in a chain with periodic boundary conditions. We again first define a class that holds the appropriate data and then write a plotting function. |
class wannierflow:
'''
A class that holds information on Wannier center flow.
'''
def __init__(self,L=6,func=uvwCF,periodic=True,tspan=linspace(0,1,200),**kwargs):
self.L=L
self.func=func
self.periodic=periodic
self.tspan=tspan
# get position operator
if self.periodic:
POS=matrix(kron(diag(exp(2.0j*pi*arange(L)/(L))),eye(2)))
else:
POS=matrix(kron(diag(arange(1,L+1)),eye(2)))
Lwanflow=[]
Hwanflow=[]
Lwane=[]
Hwane=[]
for t in tspan:
u,v,w=self.func(t)
H=H_RM_reals(L,u,v,w,periodic=periodic)
sys=eigh(H)
Lval=sys[0][sys[0]<0]
Lvec=matrix(sys[1][:,sys[0]<0])
LP=Lvec*Lvec.H
LW=LP*POS*LP
LWval,LWvec=eig(LW)
LWvec=LWvec[:,abs(LWval)>1e-10]
LWe=real(diag(LWvec.H*H*LWvec))
Hval=sys[0][sys[0]>0]
Hvec=matrix(sys[1][:,sys[0]>0])
HP=Hvec*Hvec.H
HW=HP*POS*HP
HWval,HWvec=eig(HW)
HWvec=HWvec[:,abs(HWval)>1e-10]
HWe=real(diag(HWvec.H*H*HWvec))
Lwane=append(Lwane,LWe)
Hwane=append(Hwane,HWe)
if periodic:
Lwanflow=append(Lwanflow,L/(2*pi)*sort(angle(LWval[abs(LWval)>1e-10])))
Hwanflow=append(Hwanflow,L/(2*pi)*sort(angle(HWval[abs(HWval)>1e-10])))
else:
Lwanflow=append(Lwanflow,sort(LWval[abs(LWval)>1e-10]))
Hwanflow=append(Hwanflow,sort(HWval[abs(HWval)>1e-10]))
self.Lwanflow=Lwanflow
self.Hwanflow=Hwanflow
self.Lwane=Lwane
self.Hwane=Hwane
def plot_w_vs_t(self,LorH='Lower band',*args,**kwargs):
'''
A function for plotting the Wannier flow.
The Wannier centers against time are plotted.
'''
#figsize(7,5)
data=eval('self.'+(LorH[0] if (LorH[0] in ['L','H']) else 'L')+'wanflow')
for i in range(self.L):
descr=(LorH if i==0 else '')
plot(real(data[i::self.L]),self.tspan,*args,label=descr,**kwargs)
if self.periodic:
xticks(arange(self.L)-self.L/2+0.5*mod(self.L,2),fontsize=25)
else:
xticks(arange(self.L)+1,fontsize=25)
yticks(linspace(0,1,5),fontsize=25)
xlabel(r'position $\langle \hat{x}\rangle$',fontsize=25);
ylabel(r"time $t/T$",fontsize=25);
grid()
def plot_w_vs_e(self,LorH='Lower band',*args,**kwargs):
'''
A function for plotting the Wannier flow.
The Wannier centers against energy are plotted.
'''
#figsize(7,5)
dataw=eval('self.'+(LorH[0] if (LorH[0] in ['L','H']) else 'L')+'wanflow')
datae=eval('self.'+(LorH[0] if (LorH[0] in ['L','H']) else 'L')+'wane')
for i in range(self.L):
descr=(LorH if i==0 else '')
plot(dataw[i::self.L],datae[i::self.L],*args,label=descr,**kwargs)
pos=100
vx=real(dataw[i::self.L][pos:(pos+2)])
vy=real(datae[i::self.L][pos:(pos+2)])
#plot(vx[0],vy[0],'bo')
arrow(vx[0],vy[0],
(vx[1]-vx[0])/2,
(vy[1]-vy[0])/2,fc='k',zorder=1000,
head_width=0.3, head_length=0.1)
if self.periodic:
xticks(arange(self.L)-self.L/2+0.5*mod(self.L,2),fontsize=25)
else:
xticks(arange(self.L)+1,fontsize=25)
yticks(fontsize=25)
xlabel(r'position $\langle \hat{x}\rangle$',fontsize=25);
ylabel(r'energy $\langle \hat{H}\rangle$',fontsize=25);
grid()
def polar_w_vs_t(self,LorH='Lower band',*args,**kwargs):
'''
A function for plotting the Wannier flow.
A figure in polar coordinates is produced.
'''
if self.periodic==False:
print('This feature is only supported for periodic boundary conditions')
return
#figsize(7,7)
data=eval('self.'+(LorH[0] if (LorH[0] in ['L','H']) else 'L')+'wanflow')
for i in range(self.L):
descr=(LorH if i==0 else '')
plot((self.tspan+0.5)*cos((2*pi)/self.L*data[i::self.L]),
(self.tspan+0.5)*sin((2*pi)/self.L*data[i::self.L]),
*args,label=descr,**kwargs)
phi=linspace(0,2*pi,100);
plot(0.5*sin(phi),0.5*cos(phi),'k-',linewidth=2);
plot(1.5*sin(phi),1.5*cos(phi),'k-',linewidth=2);
xlim(-1.5,1.5);
ylim(-1.5,1.5);
phiran=linspace(-pi,pi,self.L+1)
for i in range(len(phiran)-1):
phi0=0
plot([0.5*sin(phiran[i]+phi0),1.5*sin(phiran[i]+phi0)],
[0.5*cos(phiran[i]+phi0),1.5*cos(phiran[i]+phi0)],'k--')
text(1.3*cos(phiran[i]+pi/self.L/2),1.3*sin(phiran[i]+pi/self.L/2),i+1,fontsize=20)
axis('off')
text(-0.45,-0.1,r'$t/T=0$',fontsize=20)
text(1.1,-1.1,r'$t/T=1$',fontsize=20)
CFwan=wannierflow()
figsize(12,4)
subplot(121)
CFwan.plot_w_vs_t('Lower band','ko',ms=10)
CFwan.plot_w_vs_t('Higher band','o',mec='grey',mfc='grey')
legend(fontsize=15,numpoints=100);
subplot(122)
CFwan.plot_w_vs_e('Lower band','k.')
CFwan.plot_w_vs_e('Higher band','.',mec='grey',mfc='grey')
#legend(fontsize=15,numpoints=100);
tight_layout() | RM.ipynb | oroszl/topins | gpl-2.0 |
An alternative way to visualize Wannier flow of a periodic system is shown below. The inner circle represent $t/T=0$ and the outer $t/T=1$, the sections of the disc correspond to unitcells. | figsize(6,6)
CFwan.polar_w_vs_t('Lower band','ko',ms=10)
CFwan.polar_w_vs_t('Higher band','o',mec='grey',mfc='grey')
legend(numpoints=100,fontsize=15,ncol=2,bbox_to_anchor=(1,0)); | RM.ipynb | oroszl/topins | gpl-2.0 |
If we investigate pumping in a finite but sample without periodic boundary condition we will see that the edgestates cross the gap! | CFwan_finite=wannierflow(periodic=False)
figsize(6,4)
CFwan_finite.plot_w_vs_t('Lower band','ko',ms=10)
CFwan_finite.plot_w_vs_t('Higher band','o',mec='grey',mfc='grey')
legend(fontsize=15,numpoints=100);
xlim(0,7); | RM.ipynb | oroszl/topins | gpl-2.0 |
We have now done all the heavy lifting as far as coding goes. From here on we can reuse all the plotting and data-generating classes and functions for other sequences.
Moving away from the control freak sequence
Let us now relax the control freak attitude and consider a model which is not strictly localized at all times! | def uvwNSCF(t):
'''
The u,v and w functions of the not so control freak sequence.
    For the time being we assume vbar to be fixed.
'''
vbar=1
return array([sin(t*(2*pi)),vbar+cos(t*(2*pi)),1*t**0]) | RM.ipynb | oroszl/topins | gpl-2.0 |
The $d$ space story can now be easily explored via the seq_and_d function we have defined earlier. | interact(seq_and_d,funcs=fixed('uvwNSCF'),ti=(0,len(tran[:,0])-1)); | RM.ipynb | oroszl/topins | gpl-2.0 |
Similarly the spectrum and wavefunctions can also be investigated via the pumpdata class: | # Generating the not-so control freak data
NSCFdata=pumpdata(numLoc=2,norm_treshold=0.6,func=uvwNSCF)
interact(enpsi,PD=fixed(NSCFdata),ti=(0,len(tran[:,0])-1),n=(0,19)); | RM.ipynb | oroszl/topins | gpl-2.0 |
Finally wannierflow class let us see the movement of the Wannier centers. | NSCFwan=wannierflow(periodic=True,func=uvwNSCF)
figsize(12,4)
subplot(121)
NSCFwan.plot_w_vs_t('Lower band','ko',ms=10)
NSCFwan.plot_w_vs_t('Higher band','o',mec='grey',mfc='grey')
legend(fontsize=15,numpoints=100);
subplot(122)
NSCFwan.plot_w_vs_e('Lower band','k.')
NSCFwan.plot_w_vs_e('Higher band','.',mec='grey',mfc='grey')
#legend(fontsize=15,numpoints=100);
tight_layout()
| RM.ipynb | oroszl/topins | gpl-2.0 |
Next we're going to need to authenticate using the service account on the Datalab host. | from httplib2 import Http
from oauth2client.client import GoogleCredentials
credentials = GoogleCredentials.get_application_default()
http = Http()
credentials.authorize(http) | notebooks/BRAF-V600 study using CCLE data.ipynb | isb-cgc/examples-Python | apache-2.0 |
Now we can create a client for the Genomics API. NOTE that in order to use the Genomics API, you need to have enabled it for your GCP project. | from apiclient import discovery
ggSvc = discovery.build ( 'genomics', 'v1', http=http ) | notebooks/BRAF-V600 study using CCLE data.ipynb | isb-cgc/examples-Python | apache-2.0 |
We're also going to want to work with BigQuery, so we'll need the bigquery module. We will also be using the pandas and time modules. | import gcp.bigquery as bq
import pandas as pd
import time | notebooks/BRAF-V600 study using CCLE data.ipynb | isb-cgc/examples-Python | apache-2.0 |
The ISB-CGC group has assembled metadata as well as molecular data from the CCLE project into an open-access BigQuery dataset called isb-cgc:ccle_201602_alpha. In this notebook we will make use of two tables in this dataset: Mutation_calls and DataFile_info. You can explore the entire dataset using the BigQuery web UI.
Let's say that we're interested in cell-lines with BRAF V600 mutations, and in particular we want to see if there is evidence in both the DNA-seq and the RNA-seq data for these mutations. Let's start by making sure that there are some cell-lines with these mutations in our dataset: | %%sql
SELECT CCLE_name, Hugo_Symbol, Protein_Change, Genome_Change
FROM [isb-cgc:ccle_201602_alpha.Mutation_calls]
WHERE ( Hugo_Symbol="BRAF" AND Protein_Change CONTAINS "p.V600" )
ORDER BY Cell_line_primary_name
LIMIT 5 | notebooks/BRAF-V600 study using CCLE data.ipynb | isb-cgc/examples-Python | apache-2.0 |
OK, so let's get the complete list of cell-lines with this particular mutation: | %%sql --module get_mutated_samples
SELECT CCLE_name
FROM [isb-cgc:ccle_201602_alpha.Mutation_calls]
WHERE ( Hugo_Symbol="BRAF" AND Protein_Change CONTAINS "p.V600" )
ORDER BY Cell_line_primary_name
r = bq.Query(get_mutated_samples).results()
list1 = r.to_dataframe()
print " Found %d samples with a BRAF V600 mutation. " % len(list1) | notebooks/BRAF-V600 study using CCLE data.ipynb | isb-cgc/examples-Python | apache-2.0 |
Now we want to know, from the DataFile_info table, which cell lines have both DNA-seq and RNA-seq data imported into Google Genomics. (To find these samples, we will look for samples that have non-null readgroupset IDs from "DNA" and "RNA" pipelines.) | %%sql --module get_samples_with_data
SELECT
a.CCLE_name AS CCLE_name
FROM (
SELECT
CCLE_name
FROM
[isb-cgc:ccle_201602_alpha.DataFile_info]
WHERE
( Pipeline CONTAINS "DNA"
AND GG_readgroupset_id<>"NULL" ) ) a
JOIN (
SELECT
CCLE_name
FROM
[isb-cgc:ccle_201602_alpha.DataFile_info]
WHERE
( Pipeline CONTAINS "RNA"
AND GG_readgroupset_id<>"NULL" ) ) b
ON
a.CCLE_name = b.CCLE_name
r = bq.Query(get_samples_with_data).results()
list2 = r.to_dataframe()
print " Found %d samples with both DNA-seq and RNA-seq reads. " % len(list2) | notebooks/BRAF-V600 study using CCLE data.ipynb | isb-cgc/examples-Python | apache-2.0 |
Now let's find out which samples are in both of these lists: | list3 = pd.merge ( list1, list2, how='inner', on=['CCLE_name'] )
print " Found %d mutated samples with DNA-seq and RNA-seq data. " % len(list3) | notebooks/BRAF-V600 study using CCLE data.ipynb | isb-cgc/examples-Python | apache-2.0 |
Now we're going to take a closer look at the reads from each of these samples. First, we'll need to be able to get the readgroupset IDs for each sample from the BigQuery table. To do this, we'll define a parameterized function: | %%sql --module get_readgroupsetid
SELECT Pipeline, GG_readgroupset_id
FROM [isb-cgc:ccle_201602_alpha.DataFile_info]
WHERE CCLE_name=$c AND GG_readgroupset_id<>"NULL" | notebooks/BRAF-V600 study using CCLE data.ipynb | isb-cgc/examples-Python | apache-2.0 |
Let's take a look at how this will work: | aName = list3['CCLE_name'][0]
print aName
ids = bq.Query(get_readgroupsetid,c=aName).to_dataframe()
print ids | notebooks/BRAF-V600 study using CCLE data.ipynb | isb-cgc/examples-Python | apache-2.0 |
Ok, so we see that for this sample, we have two readgroupset IDs, one based on DNA-seq and one based on RNA-seq. This is what we expect, based on how we chose this list of samples.
Now we'll define a function we can re-use that calls the GA4GH API reads.search method to find all reads that overlap the V600 mutation position. Note that we will query all of the readgroupsets that we get for each sample at the same time by passing in a list of readGroupSetIds. Once we have the reads, we'll organize them into a dictionary based on the local context centered on the mutation hotspot. | chr = "7"
pos = 140453135
width = 11
rgsList = ids['GG_readgroupset_id'].tolist()
def getReads ( rgsList, pos, width):
payload = { "readGroupSetIds": rgsList,
"referenceName": chr,
"start": pos-(width/2),
"end": pos+(width/2),
"pageSize": 2048
}
r = ggSvc.reads().search(body=payload).execute()
context = {}
for a in r['alignments']:
rgsid = a['readGroupSetId']
seq = a['alignedSequence']
seqStartPos = int ( a['alignment']['position']['position'] )
relPos = pos - (width/2) - seqStartPos
if ( relPos >=0 and relPos+width<len(seq) ):
# print rgsid, seq[relPos:relPos+width]
c = seq[relPos:relPos+width]
if (c not in context):
context[c] = {}
context[c][rgsid] = 1
else:
if (rgsid not in context[c]):
context[c][rgsid] = 1
else:
context[c][rgsid] += 1
for c in context:
numReads = 0
for a in context[c]:
numReads += context[c][a]
# write it out only if we have at least 4 reads or information from two or more readgroupsets
if ( numReads>3 or len(context[c])>1 ):
print " --> ", c, context[c] | notebooks/BRAF-V600 study using CCLE data.ipynb | isb-cgc/examples-Python | apache-2.0 |
Here we define the position (0-based) of the BRAF V600 mutation: | chr = "7"
pos = 140453135
width = 11 | notebooks/BRAF-V600 study using CCLE data.ipynb | isb-cgc/examples-Python | apache-2.0 |
OK, now we can loop over all of the samples we found earlier: | for aName in list3['CCLE_name']:
print " "
print " "
print aName
r = bq.Query(get_readgroupsetid,c=aName).to_dataframe()
for i in range(r.shape[0]):
print " ", r['Pipeline'][i], r['GG_readgroupset_id'][i]
rgsList = r['GG_readgroupset_id'].tolist()
getReads ( rgsList, pos, width)
| notebooks/BRAF-V600 study using CCLE data.ipynb | isb-cgc/examples-Python | apache-2.0 |
I'm going to be fitting a model to the alignment box image. This model will be the alignment box itself, plus a single 2D gaussian star. The following class is an astropy.models model of the trapezoidal shape of the MOSFIRE alignment box. | class mosfireAlignmentBox(Fittable2DModel):
amplitude = Parameter(default=1)
x_0 = Parameter(default=0)
y_0 = Parameter(default=0)
x_width = Parameter(default=1)
y_width = Parameter(default=1)
@staticmethod
def evaluate(x, y, amplitude, x_0, y_0, x_width, y_width):
'''MOSFIRE Alignment Box.
Typical widths are 22.5 pix horizontally and 36.0 pix vertically.
Angle of slit relative to pixels is 3.78 degrees.
'''
slit_angle = -3.7 # in degrees
x0_of_y = x_0 + (y-y_0)*np.sin(slit_angle*np.pi/180)
x_range = np.logical_and(x >= x0_of_y - x_width / 2.,
x <= x0_of_y + x_width / 2.)
y_range = np.logical_and(y >= y_0 - y_width / 2.,
y <= y_0 + y_width / 2.)
result = np.select([np.logical_and(x_range, y_range)], [amplitude], 0)
if isinstance(amplitude, u.Quantity):
return Quantity(result, unit=amplitude.unit, copy=False)
else:
return result
@property
def input_units(self):
if self.x_0.unit is None:
return None
else:
return {'x': self.x_0.unit,
'y': self.y_0.unit}
def _parameter_units_for_data_units(self, inputs_unit, outputs_unit):
return OrderedDict([('x_0', inputs_unit['x']),
('y_0', inputs_unit['y']),
('x_width', inputs_unit['x']),
('y_width', inputs_unit['y']),
('amplitude', outputs_unit['z'])]) | SlitAlign/TestAlign.ipynb | joshwalawender/KeckUtilities | bsd-2-clause |
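As a quick standalone sanity check (not part of the original notebook's flow; the widths below are just the typical values quoted in the docstring and the grid size is arbitrary), the box model can be instantiated and evaluated over a small pixel grid:
yy, xx = np.mgrid[:50, :50]
box_check = mosfireAlignmentBox(amplitude=1.0, x_0=25, y_0=25, x_width=22.5, y_width=36.0)
im = box_check(xx, yy)                        # 1 inside the slightly slanted box, 0 outside
print(f"pixels inside box: {im.sum():.0f}")   # roughly x_width * y_width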
This is a simple helper function which I stole from my CSU_initializer project. It may not be necessary as I am effectively fitting the location of the alignment box twice. | def fit_edges(profile):
fitter = fitting.LevMarLSQFitter()
amp1_est = profile[profile == min(profile)][0]
mean1_est = np.argmin(profile)
amp2_est = profile[profile == max(profile)][0]
mean2_est = np.argmax(profile)
g_init1 = models.Gaussian1D(amplitude=amp1_est, mean=mean1_est, stddev=2.)
g_init1.amplitude.max = 0
g_init1.amplitude.min = amp1_est*0.9
g_init1.stddev.max = 3
g_init2 = models.Gaussian1D(amplitude=amp2_est, mean=mean2_est, stddev=2.)
g_init2.amplitude.min = 0
g_init2.amplitude.min = amp2_est*0.9
g_init2.stddev.max = 3
model = g_init1 + g_init2
fit = fitter(model, np.arange(profile.shape[0]), profile)  # fit the profile that was passed in
# Check Validity of Fit
if abs(fit.stddev_0.value) <= 3 and abs(fit.stddev_1.value) <= 3\
and fit.amplitude_0.value < -1 and fit.amplitude_1.value > 1\
and fit.mean_0.value > fit.mean_1.value:
x1 = fit.mean_0.value
x2 = fit.mean_1.value
else:
x1 = None
x2 = None
return x1, x2 | SlitAlign/TestAlign.ipynb | joshwalawender/KeckUtilities | bsd-2-clause |
Create Master Flat
Rather than take time to obtain a sky frame for each mask alignment, I am going to treat the sky background as a constant over the alignment box area (roughly 4 x 7 arcsec). To do that, I need to flat field the image.
Note that this flat field is built using data from a different night than the alignment box image we will be processing. | filepath = '../../../KeckData/MOSFIRE_FCS/'
dark = CCDData.read(os.path.join(filepath, 'm180130_0001.fits'), unit='adu')
flatfiles = ['m180130_0320.fits',
'm180130_0321.fits',
'm180130_0322.fits',
'm180130_0323.fits',
'm180130_0324.fits',
]
flats = []
for i,file in enumerate(flatfiles):
flat = CCDData.read(os.path.join(filepath, file), unit='adu')
flat = flat.subtract(dark)
flats.append(flat)
flat_combiner = Combiner(flats)
flat_combiner.sigma_clipping()
scaling_func = lambda arr: 1/np.ma.average(arr)
flat_combiner.scaling = scaling_func
masterflat = flat_combiner.median_combine()
# masterflat.write('masterflat.fits', overwrite=True) | SlitAlign/TestAlign.ipynb | joshwalawender/KeckUtilities | bsd-2-clause |
Reduce Alignment Image | # align1 = CCDData.read(os.path.join(filepath, 'm180130_0052.fits'), unit='adu')
align1 = CCDData.read(os.path.join(filepath, 'm180210_0254.fits'), unit='adu')
align1ds = align1.subtract(dark)
align1f = flat_correct(align1ds, masterflat) | SlitAlign/TestAlign.ipynb | joshwalawender/KeckUtilities | bsd-2-clause |
Find Alignment Box and Star
For now, I am manually entering the rough location of the alignment box within the CCD. This should be read from header. | # box_loc = (1257, 432) # for m180130_0052
# box_loc = (1544, 967) # for m180210_0254
box_loc = (821, 1585) # for m180210_0254
# box_loc = (1373, 1896) # for m180210_0254
# box_loc = (791, 921) # for m180210_0254
# box_loc = (1268, 301) # for m180210_0254
box_size = 30
fits_section = f'[{box_loc[0]-box_size:d}:{box_loc[0]+box_size:d}, {box_loc[1]-box_size:d}:{box_loc[1]+box_size:d}]'
print(fits_section)
region = trim_image(align1f, fits_section=fits_section) | SlitAlign/TestAlign.ipynb | joshwalawender/KeckUtilities | bsd-2-clause |
The code below estimates the center of the alignment box | threshold_pct = 70
window = region.data > np.percentile(region.data, threshold_pct)
alignment_box_position = ndimage.measurements.center_of_mass(window) | SlitAlign/TestAlign.ipynb | joshwalawender/KeckUtilities | bsd-2-clause |
The code below finds the edges of the box and measures its width and height. | gradx = np.gradient(region.data, axis=1)
horizontal_profile = np.sum(gradx, axis=0)
grady = np.gradient(region.data, axis=0)
vertical_profile = np.sum(grady, axis=1)
h_edges = fit_edges(horizontal_profile)
print(h_edges, h_edges[0]-h_edges[1])
v_edges = fit_edges(vertical_profile)
print(v_edges, v_edges[0]-v_edges[1]) | SlitAlign/TestAlign.ipynb | joshwalawender/KeckUtilities | bsd-2-clause |
This code estimates the initial location of the star. The fit to the star is quite rudimentary and could be replaced by more sophisticated methods. | maxr = region.data.max()
starloc = (np.where(region.data == maxr)[0][0], np.where(region.data == maxr)[1][0]) | SlitAlign/TestAlign.ipynb | joshwalawender/KeckUtilities | bsd-2-clause |
Build Model for Box + Star
Build an astropy.models model of the alignment box and star and fit the compound model to the data. | boxamplitude = 1 #np.percentile(region.data, 90)
star_amplitude = region.data.max() - boxamplitude
box = mosfireAlignmentBox(boxamplitude, alignment_box_position[1], alignment_box_position[0],\
abs(h_edges[0]-h_edges[1]), abs(v_edges[0]-v_edges[1]))
box.amplitude.fixed = True
box.x_width.min = 10
box.y_width.min = 10
star = models.Gaussian2D(star_amplitude, starloc[1], starloc[0])  # Gaussian2D expects (amplitude, x_mean, y_mean); starloc is (row, col)
star.amplitude.min = 0
star.x_stddev.min = 1
star.x_stddev.max = 8
star.y_stddev.min = 1
star.y_stddev.max = 8
sky = models.Const2D(np.percentile(region.data, 90))
sky.amplitude.min = 0
model = box*(sky + star)
fitter = fitting.LevMarLSQFitter()
y, x = np.mgrid[:2*box_size+1, :2*box_size+1]
fit = fitter(model, x, y, region.data)
print(fitter.fit_info['message'])
# Evaluate the initial model and the best-fit model over the full pixel grid for visualization
modelim = model(x, y)
fitim = fit(x, y)
resid = region.data-fitim
for i,name in enumerate(fit.param_names):
print(f"{name:15s} = {fit.parameters[i]:.2f}") | SlitAlign/TestAlign.ipynb | joshwalawender/KeckUtilities | bsd-2-clause |
Results
The cell below shows the image, the initial model guess, the fitted model, and the difference between the data and the model. | plt.figure(figsize=(16,24))
plt.subplot(1,4,1)
plt.imshow(region.data, vmin=fit.amplitude_1.value*0.9, vmax=fit.amplitude_1.value+fit.amplitude_2.value)
plt.subplot(1,4,2)
plt.imshow(modelim, vmin=fit.amplitude_1.value*0.9, vmax=fit.amplitude_1.value+fit.amplitude_2.value)
plt.subplot(1,4,3)
plt.imshow(fitim, vmin=fit.amplitude_1.value*0.9, vmax=fit.amplitude_1.value+fit.amplitude_2.value)
plt.subplot(1,4,4)
plt.imshow(resid, vmin=-1000, vmax=1000)
plt.show() | SlitAlign/TestAlign.ipynb | joshwalawender/KeckUtilities | bsd-2-clause |
Results
Show the image with an overlay marking the determined center of the alignment box and the position of the star.
Please note that this code fits the location of the box and so it can confirm the FCS operation has placed the box in a consistent location when checked against the header.
It should also be able to issue a warning and respond automatically if the star is not found or is very faint (i.e. it has lower than expected flux); a minimal sketch of such a check is included after this cell. | pixelscale = u.pixel_scale(0.1798*u.arcsec/u.pixel)
FWHMx = 2*(2*np.log(2))**0.5*fit.x_stddev_2 * u.pix
FWHMy = 2*(2*np.log(2))**0.5*fit.y_stddev_2 * u.pix
FWHM = (FWHMx**2 + FWHMy**2)**0.5/2**0.5
stellar_flux = 2*np.pi*fit.amplitude_2.value*fit.x_stddev_2.value*fit.y_stddev_2.value
plt.figure(figsize=(8,8))
plt.imshow(region.data, vmin=fit.amplitude_1.value*0.9, vmax=fit.amplitude_1.value+fit.amplitude_2.value)
plt.plot([fit.x_mean_2.value], [fit.y_mean_2.value], 'go', ms=10)
plt.text(fit.x_mean_2.value+1, fit.y_mean_2.value-1, 'Star', color='green', fontsize=18)
plt.plot([fit.x_0_0.value], [fit.y_0_0.value], 'bx', ms=15)
plt.text(fit.x_0_0.value+2, fit.y_0_0.value, 'Box Center', color='blue', fontsize=18)
plt.show()
boxpos_x = box_loc[1] - box_size + fit.x_0_0.value
boxpos_y = box_loc[0] - box_size + fit.y_0_0.value
starpos_x = box_loc[1] - box_size + fit.x_mean_2.value
starpos_y = box_loc[0] - box_size + fit.y_mean_2.value
print(f"Sky Brightness = {fit.amplitude_1.value:.0f} ADU")
print(f"Box X Center = {boxpos_x:.0f}")
print(f"Box Y Center = {boxpos_y:.0f}")
print(f"Stellar FWHM = {FWHM.to(u.arcsec, equivalencies=pixelscale):.2f}")
print(f"Stellar Xpos = {starpos_x:.0f}")
print(f"Stellar Xpos = {starpos_y:.0f}")
print(f"Stellar Amplitude = {fit.amplitude_2.value:.0f} ADU")
print(f"Stellar Flux (fit) = {stellar_flux:.0f} ADU") | SlitAlign/TestAlign.ipynb | joshwalawender/KeckUtilities | bsd-2-clause |
Python versions
There are currently two different supported versions of Python, 2.7 and 3.4. Somewhat confusingly, Python 3.0 introduced many backwards-incompatible changes to the language, so code written for 2.7 may not work under 3.4 and vice versa. For this class all code will use Python 2.7.
You can check your Python version at the command line by running python --version.
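You can also check the version programmatically from within Python:
import sys
print (sys.version)        # full version string
print (sys.version_info)   # structured version information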
Basic data types
Numbers
Integers and floats work as you would expect from other languages: | x = 3
print (x, type(x))
print ("Addition:", x + 1) # Addition;
print ("Subtraction:", x - 1) # Subtraction;
print ("Multiplication:", x * 2) # Multiplication;
print ("Exponentiation:", x ** 2) # Exponentiation;
x += 1
print ("Incrementing:", x) # Prints "4"
x *= 2
print ("Exponentiating:", x) # Prints "8"
y = 2.5
print ("Type of y:", type(y)) # Prints "<type 'float'>"
print ("Many values:", y, y + 1, y * 2, y ** 2) # Prints "2.5 3.5 5.0 6.25" | Stanford CS228 - Python and Numpy Tutorial.ipynb | dataventureutc/Kaggle-HandsOnLab | gpl-3.0 |
Note that unlike many languages, Python does not have unary increment (x++) or decrement (x--) operators.
Python also has built-in types for long integers and complex numbers; you can find all of the details in the documentation.
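As a brief illustration (standard Python, nothing project-specific), complex numbers work out of the box:
z = 3 + 4j                       # complex literal
print (type(z))                  # <type 'complex'> under Python 2.7
print (z.real, z.imag, abs(z))   # 3.0 4.0 5.0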
Booleans
Python implements all of the usual operators for Boolean logic, but uses English words rather than symbols (&&, ||, etc.): | t, f = True, False
print (type(t)) # Prints "<type 'bool'>"
print ("True AND False:", t and f) # Logical AND;
print ("True OR False:", t or f) # Logical OR;
print ("NOT True:", not t) # Logical NOT;
print ("True XOR False:", t != f) # Logical XOR; | Stanford CS228 - Python and Numpy Tutorial.ipynb | dataventureutc/Kaggle-HandsOnLab | gpl-3.0 |
String objects have a bunch of useful methods; for example: | s = "hello"
print ("Capitalized String:", s.capitalize()) # Capitalize a string; prints "Hello"
print ("Uppercase String:", s.upper()) # Convert a string to uppercase; prints "HELLO"
print ("Right justified String with padding of '7':", s.rjust(7)) # Right-justify a string, padding with spaces; prints " hello"
print ("Centered String with padding of '7':", s.center(7)) # Center a string, padding with spaces; prints " hello "
print ("Replace 'l' with '(ell)':", s.replace('l', '(ell)')) # Replace all instances of one substring with another;
# prints "he(ell)(ell)o"
print ("Stripped String:", ' world '.strip()) # Strip leading and trailing whitespace; prints "world" | Stanford CS228 - Python and Numpy Tutorial.ipynb | dataventureutc/Kaggle-HandsOnLab | gpl-3.0 |
You can find a list of all string methods in the documentation
Containers
Python includes several built-in container types: lists, dictionaries, sets, and tuples.
Lists
A list is the Python equivalent of an array, but is resizeable and can contain elements of different types: | xs = [3, 1, 2] # Create a list
print (xs, xs[2])
print (xs[-1]) # Negative indices count from the end of the list; prints "2"
xs[2] = 'foo' # Lists can contain elements of different types
print (xs)
xs.append('bar') # Add a new element to the end of the list
print (xs)
x = xs.pop() # Remove and return the last element of the list
print (x, xs) | Stanford CS228 - Python and Numpy Tutorial.ipynb | dataventureutc/Kaggle-HandsOnLab | gpl-3.0 |
As usual, you can find all the gory details about lists in the documentation
Slicing
In addition to accessing list elements one at a time, Python provides concise syntax to access sublists; this is known as slicing: | nums = list(range(5)) # range is a built-in function that creates a list of integers
print (nums) # Prints "[0, 1, 2, 3, 4]"
print (nums[2:4]) # Get a slice from index 2 to 4 (exclusive); prints "[2, 3]"
print (nums[2:]) # Get a slice from index 2 to the end; prints "[2, 3, 4]"
print (nums[:2]) # Get a slice from the start to index 2 (exclusive); prints "[0, 1]"
print (nums[:]) # Get a slice of the whole list; prints "[0, 1, 2, 3, 4]"
print (nums[:-1]) # Slice indices can be negative; prints "[0, 1, 2, 3]"
nums[2:4] = [8, 9] # Assign a new sublist to a slice
print (nums) # Prints "[0, 1, 8, 9, 4]"
Dictionaries
A dictionary stores (key, value) pairs, similar to a Map in Java or an object in Javascript. You can use it like this: | d = {'cat': 'cute', 'dog': 'furry'} # Create a new dictionary with some data
print ("Value of the dictionary for the key 'cat':", d['cat']) # Get an entry from a dictionary; prints "cute"
print ("Is 'cat' is the dictionary d:", 'cat' in d) # Check if a dictionary has a given key; prints "True"
d['fish'] = 'wet' # Set an entry in a dictionary
print ("Value of the dictionary for the key 'fish':", d['fish']) # Prints "wet"
try:
    print (d['monkey']) # KeyError: 'monkey' is not a key of d
except KeyError as e:
    print ("KeyError:", e)
print ("Get 'monkey' value or default:", d.get('monkey', 'N/A')) # Get an element with a default; prints "N/A"
print ("Get 'fish' value or default:", d.get('fish', 'N/A')) # Get an element with a default; prints "wet"
del d['fish'] # Remove an element from a dictionary
print ("Get 'fish' value or default:", d.get('fish', 'N/A')) # "fish" is no longer a key; prints "N/A" | Stanford CS228 - Python and Numpy Tutorial.ipynb | dataventureutc/Kaggle-HandsOnLab | gpl-3.0 |
Sets
A set is an unordered collection of distinct elements. As a simple example, consider the following: | animals = {'cat', 'dog'}
print ("Is 'cat' in the set:", 'cat' in animals) # Check if an element is in a set; prints "True"
print ("Is 'fish' in the set:", 'fish' in animals) # prints "False"
animals.add('fish') # Add an element to a set
print ("Is 'fish' in the set:", 'fish' in animals)
print ("What is the length of the set:", len(animals)) # Number of elements in a set;
animals.add('cat') # Adding an element that is already in the set does nothing
print ("What is the length of the set:", len(animals))
animals.remove('cat') # Remove an element from a set
print ("What is the length of the set:", len(animals)) | Stanford CS228 - Python and Numpy Tutorial.ipynb | dataventureutc/Kaggle-HandsOnLab | gpl-3.0 |
Set comprehensions: Like lists and dictionaries, we can easily construct sets using set comprehensions: | from math import sqrt
set_comprehension = {int(sqrt(x)) for x in range(30)}
print (set_comprehension)
print (type(set_comprehension)) | Stanford CS228 - Python and Numpy Tutorial.ipynb | dataventureutc/Kaggle-HandsOnLab | gpl-3.0 |
Tuples
A tuple is an (immutable) ordered list of values. A tuple is in many ways similar to a list; one of the most important differences is that tuples can be used as keys in dictionaries and as elements of sets, while lists cannot. Here is a trivial example: | d = {(x, x + 1): x for x in range(10)} # Create a dictionary with tuple keys
t = (5, 6) # Create a tuple
print (type(t))
print (d[t])
print (d[(1, 2)])
print ("Access the 1st value of Tuple:", t[0])
print ("Access the 2nd value of Tuple:", t[1])
try:
    t[0] = 1 # This does NOT work: tuples are immutable, so a TypeError is raised
except TypeError as e:
    print ("TypeError:", e)
t = (1, t[1]) # This DOES work: build a new tuple instead
print (t) | Stanford CS228 - Python and Numpy Tutorial.ipynb | dataventureutc/Kaggle-HandsOnLab | gpl-3.0 |
Numpy
Numpy is the core library for scientific computing in Python. It provides a high-performance multidimensional array object, and tools for working with these arrays. If you are already familiar with MATLAB, you might find this tutorial useful to get started with Numpy.
To use Numpy, we first need to import the numpy package: | import numpy as np
import warnings
warnings.filterwarnings('ignore') # To remove warnings about "deprecated" or "future" features | Stanford CS228 - Python and Numpy Tutorial.ipynb | dataventureutc/Kaggle-HandsOnLab | gpl-3.0 |
Arrays
A numpy array is a grid of values, all of the same type, and is indexed by a tuple of nonnegative integers. The number of dimensions is the rank of the array; the shape of an array is a tuple of integers giving the size of the array along each dimension.
Why use Numpy arrays over Python lists?
NumPy's arrays are more compact than Python lists -- a list of lists as you describe, in Python, would take at least 20 MB or so, while a NumPy 3D array with single-precision floats in the cells would fit in 4 MB. Access in reading and writing items is also faster with NumPy.
Maybe you don't care that much for just a million cells, but you definitely would for a billion cells -- neither approach would fit in a 32-bit architecture, but with 64-bit builds NumPy would get away with 4 GB or so, Python alone would need at least about 12 GB (lots of pointers which double in size) -- a much costlier piece of hardware!
The difference is mostly due to "indirectness" -- a Python list is an array of pointers to Python objects, at least 4 bytes per pointer plus 16 bytes for even the smallest Python object (4 for type pointer, 4 for reference count, 4 for value -- and the memory allocator rounds up to 16). A NumPy array is an array of uniform values -- single-precision numbers take 4 bytes each, double-precision ones, 8 bytes. Less flexible, but you pay substantially for the flexibility of standard Python lists!
Author: Alex Martelli
Source: StackOverFlow
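As a rough, platform-dependent illustration of the size difference described above (the exact numbers will vary with the Python build):
import sys
import numpy as np

n = 1000000
py_list = list(range(n))
np_array = np.arange(n, dtype=np.float32)

# list memory = the list's pointer array plus one Python int object per element
list_bytes = sys.getsizeof(py_list) + sum(sys.getsizeof(i) for i in py_list)
print ("list : ~%.1f MB" % (list_bytes / 1e6))
print ("array: ~%.1f MB" % (np_array.nbytes / 1e6))   # 4 bytes per float32 element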
We can initialize numpy arrays from nested Python lists, and access elements using square brackets: | a = np.array([1, 2, 3]) # Create a rank 1 array
print (type(a), a.shape, a[0], a[1], a[2])
a[0] = 5 # Change an element of the array
print (a)
b = np.array([[1,2,3],[4,5,6]]) # Create a rank 2 array
print (b)
print (b.shape)
print (b[0, 0], b[0, 1], b[1, 0]) | Stanford CS228 - Python and Numpy Tutorial.ipynb | dataventureutc/Kaggle-HandsOnLab | gpl-3.0 |
Array indexing
Numpy offers several ways to index into arrays.
Slicing: Similar to Python lists, numpy arrays can be sliced. Since arrays may be multidimensional, you must specify a slice for each dimension of the array: | # Create the following rank 2 array with shape (3, 4)
# [[ 1 2 3 4]
# [ 5 6 7 8]
# [ 9 10 11 12]]
a = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])
# Use slicing to pull out the subarray consisting of the first 2 rows
# and columns 1 and 2; b is the following array of shape (2, 2):
# [[2 3]
# [6 7]]
b = a[:2, 1:3]
print (b) | Stanford CS228 - Python and Numpy Tutorial.ipynb | dataventureutc/Kaggle-HandsOnLab | gpl-3.0 |
A slice of an array is a view into the same data, so modifying it will modify the original array. | print ("Original Matrix before modification:", a[0, 1])
b[0, 0] = 77 # b[0, 0] is the same piece of data as a[0, 1]
print ("Original Matrix after modification:", a[0, 1]) | Stanford CS228 - Python and Numpy Tutorial.ipynb | dataventureutc/Kaggle-HandsOnLab | gpl-3.0 |
Two ways of accessing the data in the middle row of the array. Mixing integer indexing with slices yields an array of lower rank, while using only slices yields an array of the same rank as the original array: | row_r1 = a[1, :] # Rank 1 view of the second row of a
row_r2 = a[1:2, :] # Rank 2 view of the second row of a
row_r3 = a[[1], :] # Rank 2 view of the second row of a
print ("Rank 1 access of the 2nd row:", row_r1, row_r1.shape)
print ("Rank 2 access of the 2nd row:", row_r2, row_r2.shape)
print ("Rank 2 access of the 2nd row:", row_r3, row_r3.shape)
# We can make the same distinction when accessing columns of an array:
col_r1 = a[:, 1]
col_r2 = a[:, 1:2]
print ("Rank 1 access of the 2nd column:", col_r1, col_r1.shape)
print ()
print ("Rank 2 access of the 2nd column:\n", col_r2, col_r2.shape) | Stanford CS228 - Python and Numpy Tutorial.ipynb | dataventureutc/Kaggle-HandsOnLab | gpl-3.0 |
Integer array indexing: When you index into numpy arrays using slicing, the resulting array view will always be a subarray of the original array. In contrast, integer array indexing allows you to construct arbitrary arrays using the data from another array.
Here is an example: | a = np.array([[1,2], [3, 4], [5, 6]])
# An example of integer array indexing.
# The returned array will have shape (3,); prints "[1 4 5]"
print (a[[0, 1, 2], [0, 1, 0]])
# The above example of integer array indexing is equivalent to this:
print (np.array([a[0, 0], a[1, 1], a[2, 0]]))
# When using integer array indexing, you can reuse the same
# element from the source array:
print (a[[0, 0], [1, 1]])
# Equivalent to the previous integer array indexing example
print (np.array([a[0, 1], a[0, 1]]))
# Create a new array from which we will select elements
a = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
print (a)
# Create an array of indices
b = np.array([0, 2, 0, 1])
b_range = np.arange(4)
print ("b_range:", b_range)
# Select one element from each row of a using the indices in b
print ("Selected Matrix Values:", a[b_range, b]) # Prints "[ 1 6 7 11]"
# Mutate one element from each row of a using the indices in b
a[b_range, b] += 10 # Only the selected values are modified in the "a" matrix.
print ("Modified 'a' Matrix:\n", a) | Stanford CS228 - Python and Numpy Tutorial.ipynb | dataventureutc/Kaggle-HandsOnLab | gpl-3.0 |
Boolean array indexing: Boolean array indexing lets you pick out arbitrary elements of an array. Frequently this type of indexing is used to select the elements of an array that satisfy some condition.
Here is an example: | a = np.array([[1,2], [3, 4], [5, 6]])
bool_idx = (a > 2) # Find the elements of a that are bigger than 2;
# this returns a numpy array of Booleans of the same
# shape as a, where each slot of bool_idx tells
# whether that element of a is > 2.
print (bool_idx)
# We use boolean array indexing to construct a rank 1 array
# consisting of the elements of a corresponding to the True values
# of bool_idx
print (a[bool_idx])
# We can do all of the above in a single concise statement:
print (a[a > 2]) | Stanford CS228 - Python and Numpy Tutorial.ipynb | dataventureutc/Kaggle-HandsOnLab | gpl-3.0 |
Every element of a numpy array has the same type, given by the array's dtype. Numpy tries to guess a datatype when you create an array, but the array-construction functions also take an optional argument to specify the datatype explicitly. You can read all about numpy datatypes in the documentation.
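A short illustration (the printed type names may differ by platform, e.g. int32 vs int64):
x = np.array([1, 2])                   # numpy chooses the datatype
y = np.array([1.0, 2.0])               # numpy chooses the datatype
z = np.array([1, 2], dtype=np.int64)   # force a particular datatype
print (x.dtype, y.dtype, z.dtype)      # e.g. int64 float64 int64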
Array math
Basic mathematical functions operate elementwise on arrays, and are available both as operator overloads and as functions in the numpy module: | x = np.array([[1,2],[3,4]], dtype=np.float64)
y = np.array([[5,6],[7,8]], dtype=np.float64)
# Elementwise sum; both produce the array
print (x + y)
print ()
print (np.add(x, y))
# Elementwise difference; both produce the array
print (x - y)
print ()
print (np.subtract(x, y))
# Elementwise product; both produce the array
print (x * y)
print ()
print (np.multiply(x, y))
# Elementwise division; both produce the array
# [[ 0.2 0.33333333]
# [ 0.42857143 0.5 ]]
print (x / y)
print ()
print (np.divide(x, y))
# Elementwise square root; produces the array
# [[ 1. 1.41421356]
# [ 1.73205081 2. ]]
print (np.sqrt(x)) | Stanford CS228 - Python and Numpy Tutorial.ipynb | dataventureutc/Kaggle-HandsOnLab | gpl-3.0 |
Note that unlike MATLAB, * is elementwise multiplication, not matrix multiplication. We instead use the dot function to compute inner products of vectors, to multiply a vector by a matrix, and to multiply matrices. dot is available both as a function in the numpy module and as an instance method of array objects: | x = np.array([[1,2],[3,4]])
y = np.array([[5,6],[7,8]])
v = np.array([9,10])
w = np.array([11, 12])
# Inner product of vectors; both produce 219
print ("v.w 'dot' product:", v.dot(w))
print ("numpy 'dot' product (v,w):", np.dot(v, w))
# Matrix / vector product; both produce the rank 1 array [29 67]
print ("x.v 'dot' product:", x.dot(v))
print ("numpy 'dot' product (x,v):", np.dot(x, v))
# Matrix / matrix product; both produce the rank 2 array
# [[19 22]
# [43 50]]
print ("x.y 'dot' product:\n", x.dot(y))
print ("numpy 'dot' product (x,y):\n", np.dot(x, y)) | Stanford CS228 - Python and Numpy Tutorial.ipynb | dataventureutc/Kaggle-HandsOnLab | gpl-3.0 |
Numpy provides many useful functions for performing computations on arrays; one of the most useful is sum: | x = np.array([[1,2],[3,4]])
print ("Sum of all element:", np.sum(x)) # Compute sum of all elements; prints "10"
print ("Sum of each column:", np.sum(x, axis=0)) # Compute sum of each column; prints "[4 6]"
print ("Sum of each row:", np.sum(x, axis=1)) # Compute sum of each row; prints "[3 7]" | Stanford CS228 - Python and Numpy Tutorial.ipynb | dataventureutc/Kaggle-HandsOnLab | gpl-3.0 |
You can find the full list of mathematical functions provided by numpy in the documentation.
Apart from computing mathematical functions using arrays, we frequently need to reshape or otherwise manipulate data in arrays. The simplest example of this type of operation is transposing a matrix; to transpose a matrix, simply use the T attribute of an array object: | print ("Matrix x:\n", x)
print ()
print ("Matrix x transposed:\n", x.T)
v = np.array([[1,2,3]])
print ("Matrix v:\n", v)
print ()
print ("Matrix v transposed:\n", v.T) | Stanford CS228 - Python and Numpy Tutorial.ipynb | dataventureutc/Kaggle-HandsOnLab | gpl-3.0 |
Adding the vector v to each row of the matrix x with an explicit Python loop works; however when the matrix x is very large, computing an explicit loop in Python could be slow. Note that adding the vector v to each row of the matrix x is equivalent to forming a matrix vv by stacking multiple copies of v vertically, then performing elementwise summation of x and vv. We could implement this approach like this: | x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
v = np.array([1, 0, 1])
vv = np.tile(v, (4, 1)) # Stack 4 copies of v on top of each other
print (vv) # Prints "[[1 0 1]
# [1 0 1]
# [1 0 1]
# [1 0 1]]"
y = x + vv # Add x and vv elementwise
print (y)
# We will add the vector v to each row of the matrix x,
# storing the result in the matrix y
x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
v = np.array([1, 0, 1])
y = x + v # Add v to each row of x using broadcasting
print (y) | Stanford CS228 - Python and Numpy Tutorial.ipynb | dataventureutc/Kaggle-HandsOnLab | gpl-3.0 |
tICA Histogram
We can histogram our data projected along the two slowest degrees of freedom (as found by tICA). You have to do this in a Python script. | from msmbuilder.dataset import dataset
ds = dataset('tica_trajs.h5')
%matplotlib inline
import msmexplorer as msme
import numpy as np
txx = np.concatenate(ds)
_ = msme.plot_histogram(txx) | examples/Fs-Peptide-command-line.ipynb | msultan/msmbuilder | lgpl-2.1 |
We now set up the SimulationArchive and integrate like we normally would (SimulationArchive.ipynb): | sim.automateSimulationArchive("archive.bin", interval=1e3, deletefile=True)
sim.integrate(1.e6) | ipython_examples/SimulationArchive.ipynb | dtamayo/reboundx | gpl-3.0 |
Once we're ready to inspect our simulation, we use the reboundx.SimulationArchive wrapper that additionally takes a REBOUNDx binary: | sa = reboundx.SimulationArchive("archive.bin", rebxfilename = "rebxarchive.bin") | ipython_examples/SimulationArchive.ipynb | dtamayo/reboundx | gpl-3.0 |