Dataset columns: markdown (string, 0–37k chars), code (string, 1–33.3k chars), path (string, 8–215 chars), repo_name (string, 6–77 chars), license (string, 15 classes)
Question: What do you do when you get an exception? You can get information about exceptions.
#1/0

def divide2(numerator, denominator):
    try:
        result = numerator / denominator
        print("result = %f" % result)
    except (ZeroDivisionError, TypeError):
        print("Got an exception")

divide2(1, "x")

# Why doesn't this catch the exception?
# How do we fix it?
divide2("x", 2)

# Exceptions in file handling
def read_safely(path):
    try:
        with open(path, "r") as fd:
            lines = fd.readlines()
        print('\n'.join(lines))  # lines is a list, so join it directly (no call)
    except FileNotFoundError:
        print("File %s does not exist. Try again." % path)

read_safely("unknown.txt")

# Handle division by 0 by using a small number
SMALL_NUMBER = 1e-3

def divide2(numerator, denominator):
    try:
        result = numerator / denominator
    except ZeroDivisionError:
        result = numerator / SMALL_NUMBER
    print("result = %f" % result)

divide2(1, 0)
Spring2018/Debugging-and-Exceptions/Exceptions.ipynb
UWSEDS/LectureNotes
bsd-2-clause
Generating Exceptions Why generate exceptions? (Don't I have enough unintentional errors?)
import pandas as pd

def func(df):
    """
    :param pd.DataFrame df: should have a column named "hours"
    """
    if "hours" not in df.columns:
        raise ValueError("DataFrame should have a column named 'hours'.")

df = pd.DataFrame({'hours': range(10)})
func(df)

df = pd.DataFrame({'years': range(10)})
# Generates an exception
#func(df)
Spring2018/Debugging-and-Exceptions/Exceptions.ipynb
UWSEDS/LectureNotes
bsd-2-clause
Frequency tables Ibis provides the value_counts API, just like pandas, for computing a frequency table for a table column or array expression. You might have seen it used already earlier in the tutorial.
lineitem = con.table('tpch_lineitem') orders = con.table('tpch_orders') items = (orders.join(lineitem, orders.o_orderkey == lineitem.l_orderkey) [lineitem, orders]) items.o_orderpriority.value_counts()
docs/source/notebooks/tutorial/8-More-Analytics-Helpers.ipynb
deepfield/ibis
apache-2.0
This can be customized, of course:
freq = (items.group_by(items.o_orderpriority) .aggregate([items.count().name('nrows'), items.l_extendedprice.sum().name('total $')])) freq
docs/source/notebooks/tutorial/8-More-Analytics-Helpers.ipynb
deepfield/ibis
apache-2.0
Binning and histograms Numeric array expressions (columns with numeric type and other array expressions) have bucket and histogram methods which produce different kinds of binning. These produce category values (the computed bins) that can be used in grouping and other analytics. Let's have a look at a few examples. I'll use the summary function to see the general distribution of lineitem prices in the order data joined above:
items.l_extendedprice.summary()
docs/source/notebooks/tutorial/8-More-Analytics-Helpers.ipynb
deepfield/ibis
apache-2.0
Alright then, now suppose we want to split the item prices up into some buckets of our choosing:
buckets = [0, 5000, 10000, 50000, 100000]
docs/source/notebooks/tutorial/8-More-Analytics-Helpers.ipynb
deepfield/ibis
apache-2.0
The bucket function creates a bucketed category from the prices:
bucketed = items.l_extendedprice.bucket(buckets).name('bucket')
docs/source/notebooks/tutorial/8-More-Analytics-Helpers.ipynb
deepfield/ibis
apache-2.0
The buckets we wrote down define 4 buckets numbered 0 through 3. The NaN is a pandas NULL value (since that's how pandas represents nulls in numeric arrays), so don't worry too much about that. Since the bucketing ends at 100000, we see there are 4122 values that are over 100000. These can be included in the bucketing with include_over:
bucketed = (items.l_extendedprice .bucket(buckets, include_over=True) .name('bucket')) bucketed.value_counts()
docs/source/notebooks/tutorial/8-More-Analytics-Helpers.ipynb
deepfield/ibis
apache-2.0
Category values can either have a known or unknown cardinality. In this case, there are either 4 or 5 buckets depending on how we used the bucket function. Labels can be assigned to the buckets at any time using the label function:
bucket_counts = bucketed.value_counts() labeled_bucket = (bucket_counts.bucket .label(['0 to 5000', '5000 to 10000', '10000 to 50000', '50000 to 100000', 'Over 100000']) .name('bucket_name')) expr = (bucket_counts[labeled_bucket, bucket_counts] .sort_by('bucket')) expr
docs/source/notebooks/tutorial/8-More-Analytics-Helpers.ipynb
deepfield/ibis
apache-2.0
Nice, huh? histogram is a linear (fixed size bin) equivalent:
t = con.table('functional_alltypes') d = t.double_col tier = d.histogram(10).name('hist_bin') expr = (t.group_by(tier) .aggregate([d.min(), d.max(), t.count()]) .sort_by('hist_bin')) expr
docs/source/notebooks/tutorial/8-More-Analytics-Helpers.ipynb
deepfield/ibis
apache-2.0
Filtering in aggregations Suppose that you want to compute an aggregation with a subset of the data for only one of the metrics / aggregates in question, and the complete data set with the other aggregates. Most aggregation functions are thus equipped with a where argument. Let me show it to you in action:
t = con.table('functional_alltypes') d = t.double_col s = t.string_col cond = s.isin(['3', '5', '7']) metrics = [t.count().name('# rows total'), cond.sum().name('# selected'), d.sum().name('total'), d.sum(where=cond).name('selected total')] color = (t.float_col .between(3, 7) .ifelse('red', 'blue') .name('color')) t.group_by(color).aggregate(metrics)
docs/source/notebooks/tutorial/8-More-Analytics-Helpers.ipynb
deepfield/ibis
apache-2.0
Visualization for a single continuous variable
plt.hist(df["mpg"], bins = 30) plt.title("Histogram plot of mpg") plt.xlabel("mpg") plt.ylabel("Frequency") plt.boxplot(df["mpg"]) plt.title("Boxplot of mpg\n ") plt.ylabel("mpg") #plt.figure(figsize = (10, 6)) plt.subplot(2, 1, 1) n, bins, patches = plt.hist(df["mpg"], bins = 50, normed = True) plt.title("Histogram plot of mpg") plt.xlabel("MPG") pdf = normpdf(bins, df["mpg"].mean(), df["mpg"].std()) plt.plot(bins, pdf, color = "red") plt.subplot(2, 1, 2) plt.boxplot(df["mpg"], vert=False) plt.title("Boxplot of mpg") plt.tight_layout() plt.xlabel("MPG") normpdf(bins, df["mpg"].mean(), df["mpg"].std()) # using pandas plot function plt.figure(figsize = (10, 6)) df.mpg.plot.hist(bins = 50, normed = True) plt.title("Histogram plot of mpg") plt.xlabel("mpg")
Scikit - 02 Visualization.ipynb
abulbasar/machine-learning
apache-2.0
Visualization for single categorical variable - frequency plot
counts = df["year"].value_counts().sort_index() plt.figure(figsize = (10, 4)) plt.bar(range(len(counts)), counts, align = "center") plt.xticks(range(len(counts)), counts.index) plt.xlabel("Year") plt.ylabel("Frequency") plt.title("Frequency distribution by year")
Scikit - 02 Visualization.ipynb
abulbasar/machine-learning
apache-2.0
Bar plot using matplotlib visualization
plt.figure(figsize = (10, 4)) df.year.value_counts().sort_index().plot.bar()
Scikit - 02 Visualization.ipynb
abulbasar/machine-learning
apache-2.0
Association plot between two continuous variables Continuous vs continuous
corr = np.corrcoef(df["weight"], df["mpg"])[0, 1] plt.scatter(df["weight"], df["mpg"]) plt.xlabel("Weight") plt.ylabel("Mpg") plt.title("Mpg vs Weight, correlation: %.2f" % corr)
Scikit - 02 Visualization.ipynb
abulbasar/machine-learning
apache-2.0
Scatter plot using pandas dataframe plot function
df.plot.scatter(x= "weight", y = "mpg") plt.title("Mpg vs Weight, correlation: %.2f" % corr)
Scikit - 02 Visualization.ipynb
abulbasar/machine-learning
apache-2.0
Continuous vs Categorical
mpg_by_year = df.groupby("year")["mpg"].agg([np.median, np.std]) mpg_by_year.head() mpg_by_year["median"].plot.bar(yerr = mpg_by_year["std"], ecolor = "red") plt.title("MPG by year") plt.xlabel("year") plt.ylabel("MPG")
Scikit - 02 Visualization.ipynb
abulbasar/machine-learning
apache-2.0
Show the boxplot of MPG by year
plt.figure(figsize=(10, 5)) sns.boxplot("year", "mpg", data = df)
Scikit - 02 Visualization.ipynb
abulbasar/machine-learning
apache-2.0
Association plot between 2 categorical variables
plt.figure(figsize=(10, 8)) sns.heatmap(df.corr(), cmap=sns.color_palette("RdBu", 10), annot=True) plt.figure(figsize=(10, 8)) aggr = df.groupby(["year", "cylinders"])["mpg"].agg(np.mean).unstack() sns.heatmap(aggr, cmap=sns.color_palette("Blues", n_colors= 10), annot=True)
Scikit - 02 Visualization.ipynb
abulbasar/machine-learning
apache-2.0
Classification plot
iris = pd.read_csv("https://raw.githubusercontent.com/abulbasar/data/master/iris.csv") iris.head() fig, ax = plt.subplots() x1, x2 = "SepalLengthCm", "PetalLengthCm" cmap = sns.color_palette("husl", n_colors=3) for i, c in enumerate(iris.Species.unique()): iris[iris.Species == c].plot.scatter(x1, x2, color = cmap[i], label = c, ax = ax) plt.legend()
Scikit - 02 Visualization.ipynb
abulbasar/machine-learning
apache-2.0
QQ Plot for normality test
import scipy.stats as stats p = stats.probplot(df["mpg"], dist="norm", plot=plt)
Scikit - 02 Visualization.ipynb
abulbasar/machine-learning
apache-2.0
Loading the Input Data Next, we must load the data. There are some example datasets that come with Spark by default. Example data related to machine learning in particular is located in the $SPARK_HOME/data/mllib directory. For this part, we will be working with the $SPARK_HOME/data/mllib/als/test.data file. This is a small dataset, so it is easy to see what is happening.
data = sc.textFile("/Users/george/Panzer/Softwares/spark-1.5.2-bin-hadoop2.6/data/mllib/als/test.data")
tutorial_spark.ipynb
ADBI-george2/Spark-Tutorial
apache-2.0
Even though we have the environment variable $SPARK_HOME defined, it can't be used here. You must specify the full path, or the relative path based on where you initiated IPython. The textFile command will create an RDD where each element is a line of the input file. In the below cell, write some code to (1) print the number of elements and (2) print the fifth element. Print your result in a single line with the format: "There are X elements. The fifth element is: Y".
rows = data.collect() x = len(rows) y = rows[4] print("There are %d elements. The fifth element is : %s"%(x,y))
tutorial_spark.ipynb
ADBI-george2/Spark-Tutorial
apache-2.0
Transforming the Input Data This data isn't in a great format, since each element in the RDD is currently a string. However, we will assume that the first column of the string represents a user ID, the second column represents a product ID, and the third column represents a user-specified rating of that product. In the below cell, write a function that takes a string (that has the same format as lines in this file) as input and returns a tuple where the first and second elements are ints and the third element is a float. Call your function parser. We will then use this function to transform the RDD.
def parser(line): splits = line.strip().split(",") return (int(splits[0]), int(splits[1]), float(splits[2])) ratings = data.map(parser).map(lambda l: Rating(*l)) ratings.collect()
tutorial_spark.ipynb
ADBI-george2/Spark-Tutorial
apache-2.0
Your output should look like the following: [Rating(user=1, product=1, rating=5.0), Rating(user=1, product=2, rating=1.0), Rating(user=1, product=3, rating=5.0), Rating(user=1, product=4, rating=1.0), Rating(user=2, product=1, rating=5.0), Rating(user=2, product=2, rating=1.0), Rating(user=2, product=3, rating=5.0), Rating(user=2, product=4, rating=1.0), Rating(user=3, product=1, rating=1.0), Rating(user=3, product=2, rating=5.0), Rating(user=3, product=3, rating=1.0), Rating(user=3, product=4, rating=5.0), Rating(user=4, product=1, rating=1.0), Rating(user=4, product=2, rating=5.0), Rating(user=4, product=3, rating=1.0), Rating(user=4, product=4, rating=5.0)] If it doesn't, then you did something wrong! If it does match, then you are ready to move to the next step. Building and Running the Model Now we are ready to build the actual recommendation model using the Alternating Least Squares algorithm. The documentation can be found here, and the papers the algorithm is based on are linked off the collaborative filtering page.
rank = 10 numIterations = 10 model = ALS.train(ratings, rank, numIterations) # Let's define some test data testdata = ratings.map(lambda p: (p[0], p[1])) # Running the model on all possible user->product predictions predictions = model.predictAll(testdata) predictions.collect()
tutorial_spark.ipynb
ADBI-george2/Spark-Tutorial
apache-2.0
Transforming the Model Output This result is not really in a nice format. Write some code that will transform the RDD so that each element is a user ID and a dictionary of product->rating pairs. Note that for a Rating object (which is what the elements of the RDD are), you can access the different fields via the .user, .product, and .rating attributes. For example, predictions.take(1)[0].user. Call the new RDD userPredictions. It should look as follows (when using userPredictions.collect()): [(4, {1: 1.0011434289237737, 2: 4.996713610813412, 3: 1.0011434289237737, 4: 4.996713610813412}), (1, {1: 4.996411869659315, 2: 1.0012037253934976, 3: 4.996411869659315, 4: 1.0012037253934976}), (2, {1: 4.996411869659315, 2: 1.0012037253934976, 3: 4.996411869659315, 4: 1.0012037253934976}), (3, {1: 1.0011434289237737, 2: 4.996713610813412, 3: 1.0011434289237737, 4: 4.996713610813412})]
def format_ratings(lst): ratings = {} for rating in lst: ratings[rating.product] = rating.rating return ratings userPredictions = predictions.groupBy(lambda r: r.user).mapValues(format_ratings) userPredictions.collect()
tutorial_spark.ipynb
ADBI-george2/Spark-Tutorial
apache-2.0
Evaluating the Model Now, let's calculate the mean squared error.
# Key both RDDs by (user, product) so they can be joined
predictionsKeyed = predictions.map(lambda r: ((r[0], r[1]), r[2]))
ratesAndPreds = ratings.map(lambda r: ((r[0], r[1]), r[2])).join(predictionsKeyed)
MSE = ratesAndPreds.map(lambda r: (r[1][0] - r[1][1]) ** 2).mean()
print("Mean Squared Error = " + str(MSE))
tutorial_spark.ipynb
ADBI-george2/Spark-Tutorial
apache-2.0
Running Sampling The next cell is the first piece of code that differs substantially from other work flows. In it, we create the model and likelihood as normal, and then register priors to each of the parameters of the model. Note that we can directly register priors to transformed parameters (e.g., "lengthscale") rather than raw ones (e.g., "raw_lengthscale"). This is useful; however, you'll need to specify a prior whose support is fully contained in the domain of the parameter. For example, a lengthscale prior must have support only over the positive reals or a subset thereof.
# this is for running the notebook in our testing framework import os smoke_test = ('CI' in os.environ) num_samples = 2 if smoke_test else 100 warmup_steps = 2 if smoke_test else 200 from gpytorch.priors import LogNormalPrior, NormalPrior, UniformPrior # Use a positive constraint instead of usual GreaterThan(1e-4) so that LogNormal has support over full range. likelihood = gpytorch.likelihoods.GaussianLikelihood(noise_constraint=gpytorch.constraints.Positive()) model = ExactGPModel(train_x, train_y, likelihood) model.mean_module.register_prior("mean_prior", UniformPrior(-1, 1), "constant") model.covar_module.base_kernel.register_prior("lengthscale_prior", UniformPrior(0.01, 0.5), "lengthscale") model.covar_module.base_kernel.register_prior("period_length_prior", UniformPrior(0.05, 2.5), "period_length") model.covar_module.register_prior("outputscale_prior", UniformPrior(1, 2), "outputscale") likelihood.register_prior("noise_prior", UniformPrior(0.05, 0.3), "noise") mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model) def pyro_model(x, y): model.pyro_sample_from_prior() output = model(x) loss = mll.pyro_factor(output, y) return y nuts_kernel = NUTS(pyro_model, adapt_step_size=True) mcmc_run = MCMC(nuts_kernel, num_samples=num_samples, warmup_steps=warmup_steps, disable_progbar=smoke_test) mcmc_run.run(train_x, train_y)
examples/01_Exact_GPs/GP_Regression_Fully_Bayesian.ipynb
jrg365/gpytorch
mit
Loading Samples In the next cell, we load the samples generated by NUTS in to the model. This converts model from a single GP to a batch of num_samples GPs, in this case 100.
model.pyro_load_from_samples(mcmc_run.get_samples()) model.eval() test_x = torch.linspace(0, 1, 101).unsqueeze(-1) test_y = torch.sin(test_x * (2 * math.pi)) expanded_test_x = test_x.unsqueeze(0).repeat(num_samples, 1, 1) output = model(expanded_test_x)
examples/01_Exact_GPs/GP_Regression_Fully_Bayesian.ipynb
jrg365/gpytorch
mit
Plot Mean Functions In the next cell, we plot the first 25 mean functions on the same plot. This particular example has a fairly large amount of data for only 1 dimension, so the hyperparameter posterior is quite tight and there is relatively little variance.
with torch.no_grad(): # Initialize plot f, ax = plt.subplots(1, 1, figsize=(4, 3)) # Plot training data as black stars ax.plot(train_x.numpy(), train_y.numpy(), 'k*', zorder=10) for i in range(min(num_samples, 25)): # Plot predictive means as blue line ax.plot(test_x.numpy(), output.mean[i].detach().numpy(), 'b', linewidth=0.3) # Shade between the lower and upper confidence bounds # ax.fill_between(test_x.numpy(), lower.numpy(), upper.numpy(), alpha=0.5) ax.set_ylim([-3, 3]) ax.legend(['Observed Data', 'Sampled Means'])
examples/01_Exact_GPs/GP_Regression_Fully_Bayesian.ipynb
jrg365/gpytorch
mit
Simulate Loading Model from Disk Loading a fully Bayesian model from disk is slightly different from loading a standard model because the process of sampling changes the shapes of the model's parameters. To account for this, you'll need to call load_strict_shapes(False) on the model before loading the state dict. In the cell below, we demonstrate this by recreating the model and loading from the state dict. Note that without the load_strict_shapes call, this would fail.
state_dict = model.state_dict() model = ExactGPModel(train_x, train_y, likelihood) # Load parameters without standard shape checking. model.load_strict_shapes(False) model.load_state_dict(state_dict)
examples/01_Exact_GPs/GP_Regression_Fully_Bayesian.ipynb
jrg365/gpytorch
mit
We need to download parameter values for the pretrained network
!wget https://s3.amazonaws.com/lasagne/recipes/pretrained/imagenet/blvc_googlenet.pkl
examples/imagecaption/COCO Preprocessing.ipynb
ebenolson/Recipes
mit
The Neverending Search for Periodicity: Techniques Beyond Lomb-Scargle Version 0.1 By AA Miller 28 Apr 2018 In this lecture we will examine alternative methods to search for periodic signals in astronomical time series. The problems will provide a particular focus on a relatively new technique, which is to model the periodic behavior as a Gaussian Process, and then sample the posterior to identify the optimal period via Markov Chain Monte Carlo analysis. A lot of this work has been pioneered by previous DSFP lecturer Suzanne Aigrain. For a refresher on GPs, see Suzanne's previous lectures: part 1 & part 2. For a refresher on MCMC, see Andy Connolly's previous lectures: part 1, part 2, & part 3. An Incomplete Whirlwind Tour In addition to LS, the following techniques are employed to search for periodic signals: String Length The string length method (Dworetsky 1983) phase folds the data at trial periods and then minimizes the distance needed to connect the phase-ordered observations. <img style="display: block; margin-left: auto; margin-right: auto" src="./images/StringLength.png" align="middle"> <div align="right"> <font size="-3">(credit: Gaveen Freer - http://slideplayer.com/slide/4212629/#) </font></div> Phase Dispersion Minimization Phase Dispersion Minimization (PDM; Jurkevich 1971, Stellingwerf 1978), like LS, folds the data at a large number of trial frequencies $f$. The phased data are then binned, and the variance is calculated in each bin, combined, and compared to the overall variance of the signal. No functional form of the signal is assumed, and thus non-sinusoidal signals can be found. Challenge: how to select the number of bins? <img style="display: block; margin-left: auto; margin-right: auto" src="./images/PDM.jpg" align="middle"> <div align="right"> <font size="-3">(credit: Gaveen Freer - http://slideplayer.com/slide/4212629/#) </font></div> Analysis of Variance Analysis of Variance (AOV; Schwarzenberg-Czerny 1989) is similar to PDM. Optimal periods are defined via hypothesis testing, and these methods are found to perform best for certain types of astronomical signals. Supersmoother Supersmoother (Reimann) is a least-squares approach wherein a flexible, non-parametric model is fit to the folded observations at many trial frequencies. The use of this flexible model reduces aliasing issues relative to models that assume a sinusoidal shape; however, this comes at the cost of requiring considerable computational time. Conditional Entropy Conditional Entropy (CE; Graham et al. 2013), and other entropy-based methods, aim to minimize the entropy in binned (normalized magnitude, phase) space. CE, in particular, is good at suppressing signal due to the window function. When tested on real observations, CE outperforms most of the alternatives (e.g., LS, PDM, etc.). <img style="display: block; margin-left: auto; margin-right: auto" src="./images/CE.png" align="middle"> <div align="right"> <font size="-3">(credit: Graham et al. 2013) </font></div> Bayesian Methods There have been some efforts to frame the period-finding problem in a Bayesian framework. Bretthorst 1988 developed Bayesian generalized LS models, while Gregory & Loredo 1992 applied Bayesian techniques to phase-binned models. More recently, efforts to use Gaussian processes (GPs) to model and extract a period from the light curve have been developed (Wang et al. 2012). These methods have proved to be especially useful for detecting stellar rotation in Kepler light curves (Angus et al. 2018).
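To make the PDM description above concrete, here is a minimal, illustrative sketch of the phase-dispersion statistic (it assumes only numpy; the binning choices and variable names are mine, not taken from the references above):

```python
import numpy as np

def pdm_theta(t, y, period, n_bins=10):
    """Phase Dispersion Minimization statistic for a single trial period.

    Folds the data at the trial period, bins the phased light curve, and
    compares the pooled within-bin variance to the total variance.
    Smaller theta indicates a better candidate period.
    """
    phase = (t / period) % 1.0
    total_var = np.var(y, ddof=1)
    bin_edges = np.linspace(0, 1, n_bins + 1)
    num, den = 0.0, 0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = y[(phase >= lo) & (phase < hi)]
        if len(in_bin) > 1:
            num += (len(in_bin) - 1) * np.var(in_bin, ddof=1)
            den += len(in_bin) - 1
    return (num / den) / total_var

# Usage: scan a grid of trial periods and keep the minimum of the statistic
# (t, y would be arrays of observation times and measurements).
# trial_periods = np.linspace(0.1, 2.0, 5000)
# best_period = trial_periods[np.argmin([pdm_theta(t, y, p) for p in trial_periods])]
```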
[Think of Suzanne's lectures during session 4] For this lecture we will focus on the use of GPs, combined with an MCMC analysis (and we will take some shortcuts in the interest of time), to identify periodic signals in astronomical data. Problem 1) Helper Functions We are going to create a few helper functions, similar to the previous lecture, that will help minimize repetition for some common tasks in this notebook. Problem 1a Adjust the variable ncores to match the number of CPUs on your machine.
ncores = # adjust to number of CPUs on your machine np.random.seed(23)
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 1b Create a function gen_periodic_data that returns $$y = C + A\cos\left(\frac{2\pi x}{P}\right) + \sigma_y$$ where $C$, $A$, and $P$ are constants, $x$ is input data and $\sigma_y$ represents Gaussian noise. Hint - this should only require a minor adjustment to your function from lecture 1.
def gen_periodic_data( # complete y = # complete return y
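One way the stub above might be completed (a hedged sketch only, not the official solution; the signature and the interpretation of the noise argument as a variance are my assumptions):

```python
import numpy as np

def gen_periodic_data(x, period=1.0, amplitude=1.0, offset=0.0, noise=0.0):
    """Return offset + amplitude*cos(2*pi*x/period) plus Gaussian noise.

    `noise` is interpreted here as the variance of the Gaussian noise.
    """
    y = offset + amplitude * np.cos(2 * np.pi * x / period) \
        + np.random.normal(0, np.sqrt(noise), size=len(x))
    return y
```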
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 1c Later, we will be using MCMC. Execute the following cell which will plot the chains from emcee to follow the MCMC walkers.
def plot_chains(sampler, nburn, paramsNames): Nparams = len(paramsNames) # + 1 fig, ax = plt.subplots(Nparams,1, figsize = (8,2*Nparams), sharex = True) fig.subplots_adjust(hspace = 0) ax[0].set_title('Chains') xplot = range(len(sampler.chain[0,:,0])) for i,p in enumerate(paramsNames): for w in range(sampler.chain.shape[0]): ax[i].plot(xplot[:nburn], sampler.chain[w,:nburn,i], color="0.5", alpha = 0.4, lw = 0.7, zorder = 1) ax[i].plot(xplot[nburn:], sampler.chain[w,nburn:,i], color="k", alpha = 0.4, lw = 0.7, zorder = 1) ax[i].set_ylabel(p) fig.tight_layout() return ax
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 1d Using gen_periodic_data generate 250 observations taken at random times between 0 and 10, with $C = 10$, $A = 2$, $P = 0.4$, and variance of the noise = 0.1. Create an uncertainty array dy with the same length as y and each value equal to $\sqrt{0.1}$. Plot the resulting data over the exact (noise-free) signal.
x = # complete y = # complete dy = # complete # complete fig, ax = plt.subplots() ax.errorbar( # complete ax.plot( # complete # complete # complete fig.tight_layout()
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 2) Maximum-Likelihood Optimization A common approach$^\dagger$ in the literature for problems where there is good reason to place a strong prior on the signal (i.e. to only try and fit a single model) is maximum likelihood optimization [this is sometimes also called $\chi^2$ minimization]. $^\dagger$The fact that this approach is commonly used does not mean it should be commonly used. In this case, where we are fitting for a known signal in simulated data, we are justified in assuming an extremely strong prior and fitting a sinusoidal model to the data. Problem 2a Write a function, correct_model, that returns the expected signal for our data given input time $t$: $$f(t) = a + b\cos\left(\frac{2\pi t}{c}\right)$$ where $a, b, c$ are model parameters. Hint - store the model parameters in a single variable (this will make things easier later).
def correct_model( # complete # complete return # complete
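A possible completion of this stub (an illustrative sketch that follows the formula above; it assumes numpy is available as np):

```python
import numpy as np

def correct_model(theta, t):
    """Evaluate f(t) = a + b*cos(2*pi*t/c) for parameters theta = (a, b, c)."""
    a, b, c = theta
    return a + b * np.cos(2 * np.pi * t / c)
```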
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
For these data the log likelihood of the data can be written as: $$\ln \mathcal{L} = -\frac{1}{2} \sum \left(\frac{y - f(t)}{\sigma_y}\right)^2$$ Ultimately, it is easier to minimize the negative log likelihood, so we will do that. Problem 2b Write a function, lnlike1, that returns the log likelihood for the data given model parameters $\theta$, and $t, y, \sigma_y$. Write a second function, nll, that returns the negative log likelihood.
def lnlike1( # complete return # complete def nll( # complete return # complete
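A sketch of how these two functions might look, given the log likelihood above (illustrative only; it assumes correct_model from Problem 2a and numpy as np):

```python
import numpy as np

def lnlike1(theta, t, y, dy):
    """Log likelihood for the sinusoidal model (Gaussian errors, known uncertainties)."""
    return -0.5 * np.sum(((y - correct_model(theta, t)) / dy) ** 2)

def nll(theta, t, y, dy):
    """Negative log likelihood, suitable for scipy.optimize.minimize."""
    return -1 * lnlike1(theta, t, y, dy)
```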
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 2c Use the minimize function from scipy.optimize to determine maximum likelihood estimates for the model parameters for the data simulated in problem 1d. What is the best fit period? The optimization routine requires an initial guess for the model parameters, use 10 for the offset, 1 for the amplitude of variations, and 0.39 for the period. Hint - as arguments, minimize takes the function, nll, the initial guess, and optional keyword args, which should be (x, y, dy) in this case.
initial_theta = # complete res = minimize( # complete print("The maximum likelihood estimate for the period is: {:.5f}".format( # complete
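For reference, a hedged sketch of how the call might look (assuming nll from Problem 2b and the x, y, dy arrays simulated in Problem 1d; the period is the third model parameter here):

```python
from scipy.optimize import minimize

initial_theta = [10, 1, 0.39]  # offset, amplitude, period
res = minimize(nll, initial_theta, args=(x, y, dy))
print("The maximum likelihood estimate for the period is: {:.5f}".format(res.x[2]))
```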
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 2d Plot the input model, the noisy data, and the maximum likelihood model. How does the model fit look?
fig, ax = plt.subplots() ax.errorbar( # complete ax.plot( # complete ax.plot( # complete ax.set_xlabel('x') ax.set_ylabel('y') fig.tight_layout()
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 2e Repeat the maximum likelihood optimization, but this time use an initial guess of 10 for the offset, 1 for the amplitude of variations, and 0.393 for the period.
initial_theta = # complete res = minimize( # complete print("The ML estimate for a, b, c is: {:.5f}, {:.5f}, {:.5f}".format( # complete
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Given the lecture order this is a little late, but we have now identified the fundamental challenge in identifying periodic signals in astrophysical observations: periodic models are highly non-linear! This can easily be seen in the LS periodograms from the previous lecture: period estimates essentially need to be perfect to properly identify the signal. Take for instance the previous example, where we adjusted the initial guess for the period by less than 1% and it made the difference between correct estimates and catastrophic errors. This also means that classic optimization procedures (e.g., gradient descent) are helpless for this problem. If you guess the wrong period there is no obvious way to know whether the subsequent guess should use a larger or smaller period. Problem 3) Sampling Techniques Given our lack of success with maximum likelihood techniques, we will now attempt a Bayesian approach. As a brief reminder, Bayes' theorem tells us that: $$P(\theta|X) \propto P(X|\theta) P(\theta).$$ In words, the posterior probability is proportional to the likelihood multiplied by the prior. We will use sampling techniques, MCMC, to estimate the posterior. Remember - we already calculated the likelihood above. Problem 3a Write a function lnprior1 to calculate the log of the prior on $\theta$. Use a reasonable, wide and flat prior for all the model parameters. Hint - for emcee the log prior should return 0 within the prior and $-\infty$ otherwise.
def lnprior1( # complete a, b, c = # complete if # complete return 0.0 return -np.inf
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 3b Write a function lnprob1 to calculate the log of the posterior probability. This function should take $\theta$ and x, y, dy as inputs.
def lnprob1( # complete lp = lnprior1(theta) if np.isfinite(lp): return # complete return -np.inf
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 3c Initialize the walkers for emcee, which we will use to draw samples from the posterior. Like before, we need to include an initial guess (the parameters of which don't matter much beyond the period). Start with a guess of 0.6 for the period. As a quick reminder, emcee is a pure python implementation of Goodman & Weare's affine Invariant Markov Chain Monte Carlo (MCMC) Ensemble sampler. emcee seeds several "walkers" which are members of the ensemble. You can think of each walker as its own Metropolis-Hastings chain, but the key detail is that the chains are not independent. Thus, the proposal distribution for each new step in the chain is dependent upon the position of all the other walkers in the chain. Choosing the initial position for each of the walkers does not significantly affect the final results (though it will affect the burn in time). Standard procedure is to create several walkers in a small ball around a reasonable guess [the samplers will quickly explore beyond the extent of the initial ball].
guess = [10, 1, 0.6] ndim = len(guess) nwalkers = 100 p0 = [np.array(guess) + 1e-8 * np.random.randn(ndim) for i in range(nwalkers)] sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob1, args=(x, y, dy), threads = ncores)
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 3d Run the walkers through 1000 steps. Hint - The run_mcmc method on the sampler object may be useful.
sampler.run_mcmc( # complete
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 3e Use the previously created plot_chains helper function to plot the chains from the MCMC sampling. Note - you may need to adjust nburn after examining the chains. Have your chains converged? Will extending the chains improve this?
params_names = # complete nburn = # complete plot_chains( # complete
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 3f Make a corner plot (use corner) to examine the post burn-in samples from the MCMC chains.
samples = sampler.chain[:, nburn:, :].reshape((-1, ndim)) fig = # complete
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
As you can see - force feeding this problem into a Bayesian framework does not automatically generate more reasonable answers. While some of the chains appear to have identified periods close to the correct period, most of them are stuck in local minima. There are sampling techniques designed to handle multimodal posteriors, but the non-linear nature of this problem makes it difficult for the various walkers to explore the full parameter space in the way that we would like. Problem 4) GPs and MCMC to identify a best-fit period We will now attempt to model the data via a Gaussian Process (GP). As a very brief reminder, a GP is a collection of random variables, in which any finite subset has a multivariate Gaussian distribution. A GP is fully specified by a mean function and a covariance matrix $K$. In this case, we wish to model the simulated data from problem 1. If we specify a cosine kernel for the covariance: $$K_{ij} = k(x_i - x_j) = \cos\left(\frac{2\pi \left|x_i - x_j\right|}{P}\right)$$ then the mean function is simply the offset, b. Problem 4a Write a function model2 that returns the mean function for the GP given input parameters $\theta$. Hint - no significant computation is required to complete this task.
def model2( # complete # complete return # complete
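A minimal sketch of what this could look like, assuming the parameter ordering $[\ln P, \ln a, b]$ used in the next cell:

```python
import numpy as np

def model2(theta, t):
    """Mean function for the GP: a constant offset b, with theta = [ln P, ln a, b]."""
    return theta[2] * np.ones_like(t)
```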
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
To model the GP in this problem we will use the george package (first introduced during session 4) written by Dan Foreman-Mackey. george is a fast and flexible tool for GP regression in python. It includes several built-in kernel functions, which we will take advantage of. Problem 4b Write a function lnlike2 to calculate the likelihood for the GP model assuming a cosine kernel, and mean model defined by model2. Note - george takes $\ln P$ as an argument and not $P$. We will see why this is useful later. Hint - there isn't a lot you need to do for this one! But pay attention to the functional form of the model.
def lnlike2(theta, t, y, yerr): lnper, lna = theta[:2] gp = george.GP(np.exp(lna) * kernels.CosineKernel(lnper)) gp.compute(t, yerr) return gp.lnlikelihood(y - model2(theta, t), quiet=True)
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 4c Write a function lnprior2 to calculate $\ln P(\theta)$, the log prior for the model parameters. Use a wide flat prior for the parameters. Note - a flat prior in log space is not flat in the parameters.
def lnprior2( # complete # complete # complete # complete # complete # complete # complete
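An illustrative sketch with the same parameter ordering as above; the specific bounds are placeholders of mine, not recommendations:

```python
import numpy as np

def lnprior2(theta):
    """Wide, flat prior on (ln P, ln a, b); the bounds below are placeholders."""
    lnper, lna, b = theta
    if -5 < lnper < 5 and -10 < lna < 10 and 0 < b < 20:
        return 0.0
    return -np.inf
```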
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 4d Write a function lnprob2 to calculate the log posterior given the model parameters and data.
def lnprob2(# complete # complete # complete # complete # complete # complete # complete
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 4e Initialize 100 walkers in an emcee.EnsembleSampler variable called sampler. For your initial guess at the parameter values set $\ln a = 1$, $\ln P = 1$, and $b = 8$. Note - this is very similar to what you did previously.
initial = # complete ndim = len(initial) p0 = [np.array(initial) + 1e-4 * np.random.randn(ndim) for i in range(nwalkers)] sampler = emcee.EnsembleSampler( # complete
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 4f Run the chains for 200 steps. Hint - you'll notice these are shorter chains than we previously used. That is because the computational time is longer, as will be the case for this and all the remaining problems.
p0, _, _ = sampler.run_mcmc( # complete
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 4g Plot the chains from the MCMC.
params_names = ['ln(P)', 'ln(a)', 'b'] nburn = # complete plot_chains( # complete
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
It should be clear that the chains have not, in this case, converged. This will be true even if you were to continue to run them for a very long time. Nevertheless, if we treat this entire run as a burn in, we can actually extract some useful information from this initial run. In particular, we will look at the posterior values for the different walkers at the end of their chains. From there we will re-initialize our walkers. We are actually free to initialize the walkers at any location we choose, so this approach is not cheating. However, one thing that should make you a bit uneasy about the way in which we are re-initializing the walkers is that we have no guarantee that the initial run we just performed found a global maximum for the posterior. Thus, it may be that our continued analysis is not "right." Problem 4h Below you are given two arrays, chain_lnp_end and chain_lnprob_end, that contain the final $\ln P$ and log posterior, respectively, for each of the walkers. Plot these two arrays against each other, to get a sense of what period is "best."
chain_lnp_end = sampler.chain[:,-1,0] chain_lnprob_end = sampler.lnprobability[:,-1] fig, ax = plt.subplots() ax.scatter( # complete # complete # complete fig.tight_layout()
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 4i Reinitialize the walkers in a ball around the maximum log posterior value from the walkers in the previous burn in. Then run the MCMC sampler for 200 steps. Hint - you'll want to run sampler.reset() prior to the running the MCMC, but after selecting the new starting point for the walkers.
p = # complete sampler.reset() p0 = # complete p0, _, _ = sampler.run_mcmc( # complete
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 4j Plot the chains. Have they converged?
paramsNames = ['ln(P)', 'ln(a)', 'b'] nburn = # complete plot_chains( # complete
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 4k Make a corner plot of the samples. Does the marginalized distribution on $P$ make sense?
fig =
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
If you run the cell below, you will see random samples from the posterior overplotted on the data. Do the posterior samples seem reasonable in this case?
fig, ax = plt.subplots() ax.errorbar(x, y, dy, fmt='o') ax.set_xlabel('x') ax.set_ylabel('y') for s in samples[np.random.randint(len(samples), size=5)]: # Set up the GP for this sample. lnper, lna = s[:2] gp = george.GP(np.exp(lna) * kernels.CosineKernel(lnper)) gp.compute(x, dy) # Compute the prediction conditioned on the observations and plot it. m = gp.sample_conditional(y - model2(s, x), x_grid) + model2(s, x_grid) ax.plot(x_grid, m, color="0.2", alpha=0.3) fig.tight_layout()
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 4l What is the marginalized best period estimate, including uncertainties?
# complete print('ln(P) = {:.6f} +{:.6f} -{:.6f}'.format( # complete print('True period = 0.4, GP Period = {:.4f}'.format( # complete
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
In this way - it is possible to use GPs + MCMC to determine the period in noisy irregular data. Furthermore, unlike with LS, we actually have a direct estimate of the uncertainty on that period. As I previously alluded to, however, the solution does depend on how we initialize the walkers. Because this is simulated data, we know that the correct period has been estimated in this case, but there's no guarantee of that once we start working with astronomical sources. This is something to keep in mind if you plan on using GPs to search for periodic signals... Problem 5) The Quasi-Periodic Kernel As we saw in the first lecture, there are many sources with periodic light curves that are not strictly sinusoidal. Thus, the use of the cosine kernel (on its own) may not be sufficient to model the signal. As Suzanne told us during the session, the quasi-periodic kernel: $$K_{ij} = k(x_i - x_j) = \exp \left(-\Gamma \sin^2\left[\frac{\pi}{P} \left|x_i - x_j\right|\right]\right)$$ is useful for non-sinusoidal signals. We will now use this kernel to model the variations in the simulated data. Problem 5a Write a function lnprob3 to calculate the log posterior given model parameters $\theta$ and data x, y, dy. Hint - it may be useful to write this out as multiple functions.
# complete # complete # complete def lnprob3( # complete # complete # complete
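A hedged sketch of one way to structure this, mirroring the ExpSine2Kernel usage that appears in the plotting cell near the end of this notebook; the prior bounds and the model3 mean function are assumptions:

```python
import numpy as np
import george
from george import kernels

def model3(theta, t):
    """Mean function: a constant offset b, with theta = [ln P, ln a, b, ln gamma]."""
    return theta[2] * np.ones_like(t)

def lnlike3(theta, t, y, yerr):
    lnper, lna, b, lngamma = theta
    gp = george.GP(np.exp(lna) * kernels.ExpSine2Kernel(np.exp(lngamma), lnper))
    gp.compute(t, yerr)
    return gp.lnlikelihood(y - model3(theta, t), quiet=True)

def lnprior3(theta):
    """Wide, flat prior; the bounds are placeholders, not recommendations."""
    lnper, lna, b, lngamma = theta
    if -5 < lnper < 5 and -10 < lna < 10 and 0 < b < 20 and -10 < lngamma < 10:
        return 0.0
    return -np.inf

def lnprob3(theta, t, y, yerr):
    lp = lnprior3(theta)
    if not np.isfinite(lp):
        return -np.inf
    return lp + lnlike3(theta, t, y, yerr)
```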
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 5b Initialize 100 walkers around a reasonable starting point. Be sure that $\ln P = 0$ in this initialization. Run the MCMC for 200 steps. Hint - it may be helpful to run this second step in a separate cell.
# complete # complete # complete sampler = emcee.EnsembleSampler( # complete p0, _, _ = sampler.run_mcmc( # complete
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 5c Plot the chains from the MCMC. Did the chains converge?
paramsNames = ['ln(P)', 'ln(a)', 'b', '$ln(\gamma)$'] nburn = # complete plot_chains( # complete
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 5d Plot the final $\ln P$ vs. log posterior for each of the walkers. Do you notice anything interesting? Hint - recall that you are plotting the log posterior, and not the posterior.
# complete # complete # complete # complete # complete # complete
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 5e Re-initialize the walkers around the chain with the maximum log posterior value. Run the MCMC for 500 steps.
p = # complete sampler.reset() # complete sampler.run_mcmc( # complete
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 5f Plot the chains for the MCMC. Hint - you may need to adjust the length of the burn in.
paramsNames = ['ln(P)', 'ln(a)', 'b', '$ln(\gamma)$'] nburn = # complete plot_chains( # complete
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 5g Make a corner plot for the samples. Is the marginalized estimate for the period reasonable?
# complete fig = # complete
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 6) GPs + MCMC for actual astronomical data We will now apply this model to the same light curve that we studied in the LS lecture. In this case we do not know the actual period (that's only sorta true), so we will have to be even more careful about initializing the walkers and performing burn in than we were previously. Problem 6a Read in the data for the light curve stored in example_asas_lc.dat.
# complete
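A possible way to read the file (a sketch only; the column names hjd, mag, mag_unc are inferred from the plotting cell at the end of the notebook, and the whitespace delimiter is an assumption):

```python
import pandas as pd

# Column names inferred from the plotting cell at the end of this notebook;
# adjust the delimiter/column order if the file is formatted differently.
lc = pd.read_csv('example_asas_lc.dat', delim_whitespace=True,
                 names=['hjd', 'mag', 'mag_unc'])
x, y, dy = lc['hjd'].values, lc['mag'].values, lc['mag_unc'].values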
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 6b Adjust the prior from problem 5 to be appropriate for this data set.
def lnprior3( # complete # complete # complete # complete # complete # complete # complete # complete
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Because we have no idea where to initialize our walkers in this case, we are going to use an ad hoc common sense + brute force approach. Problem 6c Run LombScarge on the data and determine the top three peaks in the periodogram. Set nterms = 2, and the maximum frequency to 5 (this is arbitrary but sufficient in this case). Hint - you may need to search more than the top 3 periodogram values to find the 3 peaks.
from astropy.stats import LombScargle frequency, power = # complete print('Top LS period is {}'.format(# complete print( # complete
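One hedged way to complete this, assuming the light curve was read into lc as sketched above (the 0.1 frequency-separation cutoff used to isolate distinct peaks is an arbitrary choice of mine):

```python
import numpy as np
from astropy.stats import LombScargle

frequency, power = LombScargle(lc['hjd'], lc['mag'], lc['mag_unc'],
                               nterms=2).autopower(maximum_frequency=5)

# Walk down the periodogram in order of decreasing power and keep the first
# three frequencies that are well separated (the 0.1 cutoff is arbitrary).
top_freqs = []
for f in frequency[np.argsort(power)[::-1]]:
    if all(abs(f - g) > 0.1 for g in top_freqs):
        top_freqs.append(f)
    if len(top_freqs) == 3:
        break
top_periods = [1 / f for f in top_freqs]
print('Top LS period is {}'.format(top_periods[0]))
print('The next two candidate periods are {:.4f} and {:.4f}'.format(*top_periods[1:]))
```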
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 6d Initialize one third of your 100 walkers around each of the periods identified in the previous problem (note - the total number of walkers must be an even number, so use 34 walkers around one of the top 3 frequency peaks). Run the MCMC for 500 steps following this initialization.
initial1 = # complete # complete # complete initial2 = # complete # complete # complete initial3 = # complete # complete # complete # complete sampler = emcee.EnsembleSampler( # complete p0, _, _ = sampler.run_mcmc( # complete
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 6e Plot the chains.
paramsNames = ['ln(P)', 'ln(a)', 'b', '$ln(\gamma)$'] nburn = # complete plot_chains( # complete
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 6f Plot $\ln P$ vs. log posterior.
# complete # complete # complete # complete # complete # complete
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 6g Reinitialize the walkers around the previous walker with the maximum posterior value. Run the MCMC for 500 steps. Plot the chains. Have they converged?
# complete sampler.reset() # complete # complete sampler.run_mcmc( # complete paramsNames = ['ln(P)', 'ln(a)', 'b', '$ln(\gamma)$'] nburn = # complete plot_chains( # complete
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Problem 6h Make a corner plot of the samples. What is the marginalized estimate for the period of this source? How does this estimate compare to LS?
# complete fig = corner.corner( # complete # complete print('ln(P) = {:.6f} +{:.6f} -{:.6f}'.format( # complete print('GP Period = {:.6f}'.format( # complete
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
The cell below shows marginalized samples overplotted on the actual data. How well does the model perform?
fig, ax = plt.subplots() ax.errorbar(lc['hjd'], lc['mag'], lc['mag_unc'], fmt='o') ax.set_xlabel('HJD (d)') ax.set_ylabel('mag') hjd_grid = np.linspace(2800, 3000,3000) for s in samples[np.random.randint(len(samples), size=5)]: # Set up the GP for this sample. lnper, lna, b, lngamma = s gp = george.GP(np.exp(lna) * kernels.ExpSine2Kernel(np.exp(lngamma), lnper)) gp.compute(lc['hjd'], lc['mag_unc']) # Compute the prediction conditioned on the observations and plot it. m = gp.sample_conditional(lc['mag'] - model3(s, lc['hjd']), hjd_grid) + model3(s, hjd_grid) ax.plot(hjd_grid, m, color="0.2", alpha=0.3) fig.tight_layout()
Sessions/Session06/Day1/GaussianProcessPeriodicity.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Data pre-processing Now we need to do data cleaning and preprocessing on the raw data. Note that this part could vary for different datasets. For the machine_usage data, the pre-processing contains 2 parts: 1. Convert the time step in seconds to a timestamp starting from 2018-01-01 2. Generate a built-in TSDataset to resample the average of cpu_usage in minutes and impute missing data
df_1932["time_step"] = pd.to_datetime(df_1932["time_step"], unit='s', origin=pd.Timestamp('2018-01-01')) from bigdl.chronos.data import TSDataset tsdata = TSDataset.from_pandas(df_1932, dt_col="time_step", target_col="cpu_usage") df = tsdata.resample(interval='1min', merge_mode="mean")\ .impute(mode="last")\ .to_pandas() df['cpu_usage'].plot(figsize=(16,6))
python/chronos/use-case/AIOps/AIOps_anomaly_detect_unsupervised.ipynb
intel-analytics/BigDL
apache-2.0
Anomaly Detection by DBScan Detector DBScanDetector uses DBSCAN clustering for anomaly detection. The DBSCAN algorithm tries to cluster the points and labels the points that do not belong to any cluster as -1. It thus detects outliers in the input time series. DBScanDetector assigns anomaly score 1 to anomaly samples, and 0 to normal samples.
from bigdl.chronos.detector.anomaly import DBScanDetector ad = DBScanDetector(eps=0.1, min_samples=6) ad.fit(df['cpu_usage'].to_numpy()) anomaly_scores = ad.score() anomaly_indexes = ad.anomaly_indexes() print("The anomaly scores are:", anomaly_scores) print("The anomaly indexes are:", anomaly_indexes)
python/chronos/use-case/AIOps/AIOps_anomaly_detect_unsupervised.ipynb
intel-analytics/BigDL
apache-2.0
Anomaly Detection by AutoEncoder Detector AEDetector is an unsupervised anomaly detector. It builds an autoencoder network, tries to fit the model to the input data, and calculates the reconstruction error. The samples with larger reconstruction errors are more likely to be anomalies.
from bigdl.chronos.detector.anomaly import AEDetector ad = AEDetector(roll_len=10, ratio=0.05) ad.fit(df['cpu_usage'].to_numpy()) anomaly_scores = ad.score() anomaly_indexes = ad.anomaly_indexes() print("The anomaly scores are:", anomaly_scores) print("The anomaly indexes are:", anomaly_indexes)
python/chronos/use-case/AIOps/AIOps_anomaly_detect_unsupervised.ipynb
intel-analytics/BigDL
apache-2.0
Anomaly Detection by Threshold Detector ThresholdDetector is a simple anomaly detector that detects anomalies based on a threshold. The target value for anomaly testing can be either 1) the sample value itself or 2) the difference between the forecasted value and the actual value. In this notebook we demonstrate the first type. The threshold can be set by the user or estimated from the training data according to the anomaly ratio and statistical distributions.
from bigdl.chronos.detector.anomaly import ThresholdDetector thd=ThresholdDetector() thd.set_params(threshold=(20, 80)) thd.fit(df['cpu_usage'].to_numpy()) anomaly_scores = thd.score() anomaly_indexes = thd.anomaly_indexes() print("The anomaly scores are:", anomaly_scores) print("The anomaly indexes are:", anomaly_indexes)
python/chronos/use-case/AIOps/AIOps_anomaly_detect_unsupervised.ipynb
intel-analytics/BigDL
apache-2.0
Palettable API
from palettable.colorbrewer.qualitative import Set1_9 Set1_9.name Set1_9.type Set1_9.number Set1_9.colors Set1_9.hex_colors Set1_9.mpl_colors Set1_9.mpl_colormap # requires ipythonblocks Set1_9.show_as_blocks() Set1_9.show_continuous_image() Set1_9.show_discrete_image()
demo/Palettable Demo.ipynb
mikecharles/palettable
mit
Setting the matplotlib Color Cycle Adapted from the example at http://matplotlib.org/examples/color/color_cycle_demo.html. Use the .mpl_colors attribute to change the color cycle used by matplotlib when colors for plots are not specified.
from palettable.wesanderson import Aquatic1_5, Moonrise4_5 x = np.linspace(0, 2 * np.pi) offsets = np.linspace(0, 2*np.pi, 4, endpoint=False) # Create array with shifted-sine curve along each column yy = np.transpose([np.sin(x + phi) for phi in offsets]) plt.rc('lines', linewidth=4) plt.rc('axes', color_cycle=Aquatic1_5.mpl_colors) fig, (ax0, ax1) = plt.subplots(nrows=2) ax0.plot(yy) ax0.set_title('Set default color cycle to Aquatic1_5') ax1.set_color_cycle(Moonrise4_5.mpl_colors) ax1.plot(yy) ax1.set_title('Set axes color cycle to Moonrise4_5') # Tweak spacing between subplots to prevent labels from overlapping plt.subplots_adjust(hspace=0.3)
demo/Palettable Demo.ipynb
mikecharles/palettable
mit
Using a Continuous Palette Adapted from http://matplotlib.org/examples/pylab_examples/hist2d_log_demo.html. Use the .mpl_colormap attribute any place you need a matplotlib colormap.
from palettable.colorbrewer.sequential import YlGnBu_9 from matplotlib.colors import LogNorm #normal distribution center at x=0 and y=5 x = np.random.randn(100000) y = np.random.randn(100000)+5 plt.hist2d(x, y, bins=40, norm=LogNorm(), cmap=YlGnBu_9.mpl_colormap) plt.colorbar()
demo/Palettable Demo.ipynb
mikecharles/palettable
mit
1. Read structure
st = smart_structure_read('Cu/POSCARCU.vasp') # read required structure
tutorials/surfaces.ipynb
dimonaks/siman
gpl-2.0
2. Choose new vectors The initial structure is an FCC lattice in the conventional setting, i.e. a cubic unit cell. As a first step we create an orthogonal supercell with a {111}cub surface on one side. Below the directions orthogonal to {111} are shown. We will choose [-1-1-1], [01-1] and [2-1-1].
Image(filename='figs/Thompson-tetrahedron-notation-for-FCC-slip-systems.png')
tutorials/surfaces.ipynb
dimonaks/siman
gpl-2.0
3. Build supercell with new vectors
# create supercell using chosen directions, the *mul* allows to choose one half of the third vector sc = create_supercell(st, [ [-1,-1,-1], [0,1,-1], [2,-1,-1]], mul = (1,1,0.5))
tutorials/surfaces.ipynb
dimonaks/siman
gpl-2.0
4. Build slab Now we need to create vacuum and rotate the cell. This can be done using create_surface2 function
# here we choose [100] normal in supercell, which is equivalent to [111]cub # combinations of *min_slab_size* and *cut_thickness* (small cut of slab from one side) allows create symmetrical slab st_suf = create_surface2(sc, [1, 0, 0], min_vacuum_size = 10, min_slab_size = 16, cut_thickness = 3, oxidation = {'Cu':'Cu0+' }, return_one = 1, surface_i = 0)
tutorials/surfaces.ipynb
dimonaks/siman
gpl-2.0
5. Scale slab Above, the slab with the minimum surface area was obtained. If you need a larger surface you can use the supercell() function, for which you need to provide the required sizes in Angstroms
st_sufsc112 = supercell(st_suf, [10,10,32]) # make 2x2 slab st_sufsc112.write_poscar() # save file as POSCAR for VASP
tutorials/surfaces.ipynb
dimonaks/siman
gpl-2.0
Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of target_text. This will help the neural network predict when the sentence should end. You can get the &lt;EOS&gt; word id by doing: python target_vocab_to_int['&lt;EOS&gt;'] You can get other word ids using source_vocab_to_int and target_vocab_to_int.
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. :param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ # TODO: Implement Function source_id_text = [[source_vocab_to_int.get(vocab, source_vocab_to_int['<UNK>']) for vocab in line.split()] for line in source_text.split('\n')] target_id_text = [[target_vocab_to_int.get(vocab, target_vocab_to_int['<UNK>']) for vocab in line.split()] + [target_vocab_to_int['<EOS>']] for line in target_text.split('\n')] return source_id_text, target_id_text """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids)
language-translation/dlnd_language_translation.ipynb
rishizek/deep-learning
mit
Build the Neural Network You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below: - model_inputs - process_decoder_input - encoding_layer - decoding_layer_train - decoding_layer_infer - decoding_layer - seq2seq_model Input Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: Input text placeholder named "input" using the TF Placeholder name parameter with rank 2. Targets placeholder with rank 2. Learning rate placeholder with rank 0. Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0. Target sequence length placeholder named "target_sequence_length" with rank 1 Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0. Source sequence length placeholder named "source_sequence_length" with rank 1 Return the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
def model_inputs(): """ Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences. :return: Tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) """ # TODO: Implement Function input_data = tf.placeholder(tf.int32, [None, None], name='input') targets = tf.placeholder(tf.int32, [None, None]) learning_rate = tf.placeholder(tf.float32) keep_probability = tf.placeholder(tf.float32, name='keep_prob') target_sequence_length = tf.placeholder(tf.int32, [None], name='target_sequence_length') max_target_sequence_length = tf.reduce_max(target_sequence_length, name='max_target_len') source_sequence_length = tf.placeholder(tf.int32, [None], name='source_sequence_length') return input_data, targets, learning_rate, keep_probability, target_sequence_length, \ max_target_sequence_length, source_sequence_length """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs)
language-translation/dlnd_language_translation.ipynb
rishizek/deep-learning
mit
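As a quick, hedged sanity check (TensorFlow 1.x assumed), the sketch below builds the placeholders and feeds a made-up batch of ids just to confirm the expected ranks; it resets the default graph so it can be run on its own.
import tensorflow as tf

tf.reset_default_graph()
inp, tgt, lr, keep_prob, tgt_len, max_tgt_len, src_len = model_inputs()

with tf.Session() as sess:
    # Invented batch of two id sequences of length three, purely for shape checking.
    feed = {inp: [[4, 5, 1], [6, 7, 1]],
            src_len: [3, 3],
            keep_prob: 1.0}
    print(sess.run(tf.shape(inp), feed_dict=feed))      # [2 3] -> rank 2, as required
    print(sess.run(tf.shape(src_len), feed_dict=feed))  # [2]   -> rank 1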
Encoding Implement encoding_layer() to create an Encoder RNN layer: * Embed the encoder input using tf.contrib.layers.embed_sequence * Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper * Pass the cell and the embedded input to tf.nn.dynamic_rnn()
from imp import reload reload(tests) def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :param source_sequence_length: a list of the lengths of each sequence in the batch :param source_vocab_size: vocabulary size of source data :param encoding_embedding_size: embedding size of source data :return: tuple (RNN output, RNN state) """ # TODO: Implement Function # Encoder embedding enc_embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size) # RNN cell def make_cell(rnn_size): lstm = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) # Add dropout to the cell drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) return drop enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)]) enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, sequence_length=source_sequence_length, dtype=tf.float32) return enc_output, enc_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer)
language-translation/dlnd_language_translation.ipynb
rishizek/deep-learning
mit
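A small, hedged shape check for the encoder follows (TensorFlow 1.x with tf.contrib assumed; the vocabulary, embedding and RNN sizes are arbitrary illustration values, not the tutorial's hyperparameters, and the graph is reset so the sketch stands alone).
import tensorflow as tf

tf.reset_default_graph()
demo_inputs = tf.placeholder(tf.int32, [None, None])
demo_seq_len = tf.placeholder(tf.int32, [None])

enc_output, enc_state = encoding_layer(demo_inputs, rnn_size=32, num_layers=2,
                                       keep_prob=tf.constant(1.0),
                                       source_sequence_length=demo_seq_len,
                                       source_vocab_size=50,
                                       encoding_embedding_size=13)
print(enc_output)  # Tensor of shape (batch, time, 32): one output vector per time step
print(enc_state)   # Tuple of 2 LSTMStateTuples, one (c, h) pair per stacked layer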
Decoding - Training Create a training decoding layer: * Create a tf.contrib.seq2seq.TrainingHelper * Create a tf.contrib.seq2seq.BasicDecoder * Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_summary_length, output_layer, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param target_sequence_length: The lengths of each sequence in the target batch :param max_summary_length: The length of the longest sequence in the batch :param output_layer: Function to apply the output layer :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing training logits and sample_id """ # TODO: Implement Function # Training Decoder # Helper for the training process. Used by BasicDecoder to read inputs. training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input, sequence_length=target_sequence_length, time_major=False) # Basic decoder training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, encoder_state, output_layer) # Perform dynamic decoding using the decoder training_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder, impute_finished=True, maximum_iterations=max_summary_length) return training_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train)
language-translation/dlnd_language_translation.ipynb
rishizek/deep-learning
mit
Decoding - Inference Create an inference decoder: * Create a tf.contrib.seq2seq.GreedyEmbeddingHelper * Create a tf.contrib.seq2seq.BasicDecoder * Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
                         end_of_sequence_id, max_target_sequence_length,
                         vocab_size, output_layer, batch_size, keep_prob):
    """
    Create a decoding layer for inference
    :param encoder_state: Encoder state
    :param dec_cell: Decoder RNN Cell
    :param dec_embeddings: Decoder embeddings
    :param start_of_sequence_id: GO ID
    :param end_of_sequence_id: EOS Id
    :param max_target_sequence_length: Maximum length of target sequences
    :param vocab_size: Size of decoder/target vocabulary
    :param output_layer: Function to apply the output layer
    :param batch_size: Batch size
    :param keep_prob: Dropout keep probability
    :return: BasicDecoderOutput containing inference logits and sample_id
    """
    # TODO: Implement Function
    # Reuses the same parameters trained by the training process
    start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size],
                           name='start_tokens')

    # Helper for the inference process: greedily feeds back the previous prediction.
    inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,
                                                                start_tokens,
                                                                end_of_sequence_id)

    # Basic decoder
    inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,
                                                        inference_helper,
                                                        encoder_state,
                                                        output_layer)

    # Perform dynamic decoding using the decoder
    inference_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder,
                                                                       impute_finished=True,
                                                                       maximum_iterations=max_target_sequence_length)

    return inference_decoder_output


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)
language-translation/dlnd_language_translation.ipynb
rishizek/deep-learning
mit
Build the Decoding Layer Implement decoding_layer() to create a Decoder RNN layer. Embed the target sequences Construct the decoder LSTM cell (just like you constructed the encoder cell above) Create an output layer to map the outputs of the decoder to the elements of our vocabulary Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits. Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits. Note: You'll need to use tf.variable_scope to share variables between training and inference (a short standalone sketch of this sharing appears after the cell below).
from tensorflow.python.layers.core import Dense  # Dense may already be imported earlier in the notebook; importing here keeps the cell self-contained.


def decoding_layer(dec_input, encoder_state,
                   target_sequence_length, max_target_sequence_length,
                   rnn_size,
                   num_layers, target_vocab_to_int, target_vocab_size,
                   batch_size, keep_prob, decoding_embedding_size):
    """
    Create decoding layer
    :param dec_input: Decoder input
    :param encoder_state: Encoder state
    :param target_sequence_length: The lengths of each sequence in the target batch
    :param max_target_sequence_length: Maximum length of target sequences
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param target_vocab_size: Size of target vocabulary
    :param batch_size: The size of the batch
    :param keep_prob: Dropout keep probability
    :param decoding_embedding_size: Decoding embedding size
    :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
    """
    # TODO: Implement Function
    # 1. Decoder Embedding
    dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
    dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)

    # 2. Construct the decoder cell
    def make_cell(rnn_size):
        lstm = tf.contrib.rnn.LSTMCell(rnn_size,
                                       initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
        # Add dropout to the cell
        drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
        return drop

    dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])

    # 3. Dense layer to translate the decoder's output at each time
    # step into a choice from the target vocabulary
    output_layer = Dense(target_vocab_size,
                         kernel_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1))

    # 4. Set up a training decoder and an inference decoder
    # Training Decoder
    with tf.variable_scope("decode"):
        training_decoder_output = decoding_layer_train(encoder_state, dec_cell,
                                                       dec_embed_input, target_sequence_length,
                                                       max_target_sequence_length, output_layer,
                                                       keep_prob)

    # 5. Inference Decoder
    # Reuses the same parameters trained by the training process
    with tf.variable_scope("decode", reuse=True):
        inference_decoder_output = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings,
                                                        target_vocab_to_int['<GO>'],
                                                        target_vocab_to_int['<EOS>'],
                                                        max_target_sequence_length,
                                                        target_vocab_size, output_layer,
                                                        batch_size, keep_prob)

    return training_decoder_output, inference_decoder_output


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)
language-translation/dlnd_language_translation.ipynb
rishizek/deep-learning
mit
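The tf.variable_scope sharing mentioned in the instructions is easy to see in isolation. The sketch below (TensorFlow 1.x; the variable name is made up, and the graph is reset so it runs standalone) shows that reuse=True hands back the very same variable instead of creating a second copy, which is why the inference decoder reuses the weights learned by the training decoder.
import tensorflow as tf

tf.reset_default_graph()
with tf.variable_scope("decode"):
    w_train = tf.get_variable("proj_w", shape=[4, 4])   # created once, e.g. by the training path

with tf.variable_scope("decode", reuse=True):
    w_infer = tf.get_variable("proj_w", shape=[4, 4])   # fetched again by the inference path

print(w_train is w_infer)  # True -- both paths share the same weights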
Build the Neural Network Apply the functions you implemented above to: Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size). Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function. Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
                  source_sequence_length, target_sequence_length,
                  max_target_sentence_length,
                  source_vocab_size, target_vocab_size,
                  enc_embedding_size, dec_embedding_size,
                  rnn_size, num_layers, target_vocab_to_int):
    """
    Build the Sequence-to-Sequence part of the neural network
    :param input_data: Input placeholder
    :param target_data: Target placeholder
    :param keep_prob: Dropout keep probability placeholder
    :param batch_size: Batch Size
    :param source_sequence_length: Sequence Lengths of source sequences in the batch
    :param target_sequence_length: Sequence Lengths of target sequences in the batch
    :param max_target_sentence_length: Maximum length of target sequences
    :param source_vocab_size: Source vocabulary size
    :param target_vocab_size: Target vocabulary size
    :param enc_embedding_size: Encoder embedding size
    :param dec_embedding_size: Decoder embedding size
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
    """
    # TODO: Implement Function
    # Pass the input data through the encoder. We'll ignore the encoder output, but use the state
    _, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob,
                                  source_sequence_length, source_vocab_size,
                                  enc_embedding_size)

    # Prepare the target sequences we'll feed to the decoder in training mode
    dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)

    # Pass encoder state and decoder inputs to the decoders
    training_decoder_output, inference_decoder_output = decoding_layer(dec_input, enc_state,
                                                                       target_sequence_length,
                                                                       max_target_sentence_length,
                                                                       rnn_size, num_layers,
                                                                       target_vocab_to_int,
                                                                       target_vocab_size,
                                                                       batch_size, keep_prob,
                                                                       dec_embedding_size)

    return training_decoder_output, inference_decoder_output


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)
language-translation/dlnd_language_translation.ipynb
rishizek/deep-learning
mit
Neural Network Training Hyperparameters Tune the following parameters: Set epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set num_layers to the number of layers. Set encoding_embedding_size to the size of the embedding for the encoder. Set decoding_embedding_size to the size of the embedding for the decoder. Set learning_rate to the learning rate. Set keep_probability to the dropout keep probability. Set display_step to the number of steps between each debug output statement.
# Number of Epochs epochs = 7 # Batch Size batch_size = 100 # RNN Size rnn_size = 256 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 300 decoding_embedding_size = 300 # Learning Rate learning_rate = 0.001 # Dropout Keep Probability keep_probability = 1.0 display_step = 100
language-translation/dlnd_language_translation.ipynb
rishizek/deep-learning
mit
Anonymous functions Until now, every function we created was given a name at the moment it was defined, but when we need a function that is only one line long and is not used in many different places, we can use lambda functions:
help("lambda") mi_funcion = lambda x, y: x+y resultado = mi_funcion(1,2) print resultado
Clase 04 - Excepciones, funciones lambda, búsquedas y ordenamientos.ipynb
gsorianob/fiuba-python
apache-2.0
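Lambdas shine as short, throw-away arguments to other functions. The snippet below is a small illustrative sketch (the data is invented) written for Python 3:
# Sorting with a key function is the classic use case for a lambda.
words = ['banana', 'fig', 'apple']
print(sorted(words, key=lambda w: len(w)))          # ['fig', 'apple', 'banana']

# Lambdas also pair naturally with map() and filter().
numbers = [1, 2, 3, 4, 5]
print(list(map(lambda n: n * n, numbers)))          # [1, 4, 9, 16, 25]
print(list(filter(lambda n: n % 2 == 0, numbers)))  # [2, 4]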
Every case is different and there is no single ideal place to catch the exception; it is up to the developer to decide where it is most convenient to handle it for each problem. Catching multiple exceptions A single line can raise different exceptions, so catching one particular exception type does not guarantee that the program cannot fail on that supposedly safe line: In some cases we anticipate that the code may raise an exception such as ZeroDivisionError, but that may not be enough:
def dividir_numeros(x, y):
    try:
        resultado = x / y
        print('The result is: %s' % resultado)
    except ZeroDivisionError:
        print('ERROR: cannot divide by zero')


dividir_numeros(1, 0)     # caught by the except clause
dividir_numeros(10, 2)    # works fine
dividir_numeros("10", 2)  # raises an uncaught TypeError: the handler above is not enough
Clase 04 - Excepciones, funciones lambda, búsquedas y ordenamientos.ipynb
gsorianob/fiuba-python
apache-2.0
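One way to make the function above robust to both failure modes is to add a second handler (or to catch several exception types in a single except clause). The sketch below is an illustrative variant, not the original course solution:
def dividir_numeros(x, y):
    try:
        resultado = x / y
        print('The result is: %s' % resultado)
    except ZeroDivisionError:
        print('ERROR: cannot divide by zero')
    except TypeError:
        print('ERROR: incompatible types, both arguments must be numbers')


dividir_numeros(1, 0)     # caught: division by zero
dividir_numeros(10, 2)    # prints the result
dividir_numeros("10", 2)  # caught: incompatible types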
The next step is to create an instance of the System class and to seed ESPResSo's random number generator. This instance is used as a handle to the simulation system. At any time, only one instance of the System class can exist.
system = espressomd.System(box_l=BOX_L) system.seed = 42
doc/tutorials/01-lennard_jones/01-lennard_jones.ipynb
psci2195/espresso-ffans
gpl-3.0
At this point, we have set the necessary environment and warmed up our system. Now, we integrate the equations of motion and take measurements. We first plot the radial distribution function which describes how the density varies as a function of distance from a tagged particle. The radial distribution function is averaged over several measurements to reduce noise. The potential and kinetic energies can be monitored using the analysis method <tt>system.analysis.energy()</tt>. <tt>kinetic_temperature</tt> here refers to the measured temperature obtained from kinetic energy and the number of degrees of freedom in the system. It should fluctuate around the preset temperature of the thermostat. The mean square displacement of particle $i$ is given by: \begin{equation} \mathrm{msd}_i(t) =\langle (\vec{x}_i(t_0+t) -\vec{x}_i(t_0))^2\rangle, \end{equation} and can be calculated using "observables and correlators". An observable is an object which takes a measurement on the system. It can depend on parameters specified when the observable is instanced, such as the ids of the particles to be considered.
# Integration parameters sampling_interval = 100 sampling_iterations = 100 from espressomd.observables import ParticlePositions from espressomd.accumulators import Correlator # Pass the ids of the particles to be tracked to the observable. part_pos = ParticlePositions(ids=range(N_PART)) # Initialize MSD correlator msd_corr = Correlator(obs1=part_pos, tau_lin=10, delta_N=10, tau_max=1000 * TIME_STEP, corr_operation="square_distance_componentwise") # Calculate results automatically during the integration system.auto_update_accumulators.add(msd_corr) # Set parameters for the radial distribution function r_bins = 70 r_min = 0.0 r_max = system.box_l[0] / 2.0 avg_rdf = np.zeros((r_bins,)) # Take measurements time = np.zeros(sampling_iterations) instantaneous_temperature = np.zeros(sampling_iterations) etotal = np.zeros(sampling_iterations) for i in range(1, sampling_iterations + 1): system.integrator.run(sampling_interval) # Measure radial distribution function r, rdf = system.analysis.rdf(rdf_type="rdf", type_list_a=[0], type_list_b=[0], r_min=r_min, r_max=r_max, r_bins=r_bins) avg_rdf += rdf / sampling_iterations # Measure energies energies = system.analysis.energy() kinetic_temperature = energies['kinetic'] / (1.5 * N_PART) etotal[i - 1] = energies['total'] time[i - 1] = system.time instantaneous_temperature[i - 1] = kinetic_temperature # Finalize the correlator and obtain the results msd_corr.finalize() msd = msd_corr.result()
doc/tutorials/01-lennard_jones/01-lennard_jones.ipynb
psci2195/espresso-ffans
gpl-3.0
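As a cross-check on the correlator result, the MSD definition above can also be evaluated directly from a stored trajectory with plain NumPy. The sketch below uses a randomly generated toy random-walk trajectory rather than the ESPResSo data, so the numbers are only illustrative.
import numpy as np

# Toy trajectory: 200 time frames, 10 particles, 3 spatial dimensions (a simple random walk).
rng = np.random.default_rng(42)
positions = np.cumsum(rng.normal(size=(200, 10, 3)), axis=0)

def msd_direct(pos, lag):
    """MSD at a single lag time, averaged over particles and time origins."""
    displacements = pos[lag:] - pos[:-lag]
    return np.mean(np.sum(displacements**2, axis=-1))

lags = np.arange(1, 50)
msd_values = [msd_direct(positions, lag) for lag in lags]
print(msd_values[:3])  # grows roughly linearly with the lag for a random walk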