repo (stringlengths 2-99) | file (stringlengths 13-225) | code (stringlengths 0-18.3M) | file_length (int64 0-18.3M) | avg_line_length (float64 0-1.36M) | max_line_length (int64 0-4.26M) | extension_type (stringclasses: 1 value)
---|---|---|---|---|---|---
trieste-develop | trieste-develop/docs/notebooks/scalable_thompson_sampling_using_sparse_gaussian_processes.pct.py | # -*- coding: utf-8 -*-
# %% [markdown]
# # Scalable Thompson Sampling using Sparse Gaussian Process Models
# %% [markdown]
# In our other [Thompson sampling notebook](thompson_sampling.pct.py) we demonstrate how to perform batch optimization using a traditional implementation of Thompson sampling that samples exactly from an underlying Gaussian Process surrogate model. Unfortunately, this approach incurs a large computational overhead that scales polynomially with the optimization budget and so cannot be applied to settings with larger optimization budgets, e.g. those where large batches (>>10) of points can be collected.
#
# Luckily, Trieste also supports a scalable formulation of Thompson sampling. In particular, by coupling a sparse Gaussian process surrogate model with an approximate sampling scheme we can apply Thompson sampling over very large optimization budgets and batch sizes, i.e. for those settings where standard BO methods are infeasible. The implementation in this notebook follows our paper (see <cite data-cite="vakili2021scalable"/>, https://arxiv.org/pdf/2006.05356.pdf).
# %%
import numpy as np
import tensorflow as tf
np.random.seed(1793)
tf.random.set_seed(1793)
# %% [markdown]
# ## Define the problem and model
#
# We'll use a continuous bounded search space, and build a noisy observer. We then collect an initial design of 15 random points. In order to build a toy problem that requires a large optimization budget, we have contaminated our observations with a large amount of Gaussian noise.
# %%
import trieste
from trieste.objectives import Hartmann6
from trieste.types import TensorType
hartmann_6 = Hartmann6.objective
search_space = Hartmann6.search_space
def noisy_hartmann_6(
x: TensorType,
) -> TensorType: # contaminate observations with Gaussian noise
return hartmann_6(x) + tf.random.normal([len(x), 1], 0, 1, tf.float64)
num_initial_data_points = 15
initial_query_points = search_space.sample(num_initial_data_points)
observer = trieste.objectives.utils.mk_observer(noisy_hartmann_6)
initial_data = observer(initial_query_points)
# %% [markdown]
# We'll use sparse Gaussian process regression to model the function, as implemented in GPflow. The GPflow models cannot be used directly in our Bayesian optimization routines, so we build GPflow's `SVGP` model using Trieste's convenient model building function `build_svgp` and pass it to the `SparseVariational` wrapper. Note that we also define a `KMeansInducingPointSelector`, i.e. we reallocate the 50 inducing points of our `SVGP` model at the start of each BO step to be the centroids of a k-means clustering of the observations. As the optimization progresses, observations are likely to be concentrated in the optimal regions, so clustering provides “targeted” inducing points for BO.
# %%
from trieste.models.gpflow import (
SparseVariational,
build_svgp,
KMeansInducingPointSelector,
)
from trieste.models.optimizer import BatchOptimizer
gpflow_model = build_svgp(
initial_data, search_space, likelihood_variance=0.01, num_inducing_points=50
)
inducing_point_selector = KMeansInducingPointSelector()
model = SparseVariational(
gpflow_model,
num_rff_features=1_000,
inducing_point_selector=inducing_point_selector,
optimizer=BatchOptimizer(
tf.optimizers.Adam(0.1), max_iter=100, batch_size=50, compile=True
),
)
# %% [markdown]
# ## Create the Thompson sampling acquisition rule
#
# Thompson sampling chooses query points as the minimizers of random samples from the model of our objective
# function.
#
# Using a [decoupled sampling scheme](https://arxiv.org/abs/2002.09309), we can build approximate samples from our sparse GP surrogate model at low cost. As we can cheaply evaluate the values and gradients of these approximate samples at any point across the search space, our acquisition function optimizers can be used to find the minimizers of the samples across the whole search space. We can increase the quality of these approximate samples at the expense of computational cost by increasing `num_rff_features` (as specified when defining our model above).
#
# We either build batches element by element with `GreedyContinuousThompsonSampling` or allocate a whole batch at once with `ParallelContinuousThompsonSampling`. The latter is faster but has much higher memory usage. Memory usage can be controlled using the `split_acquisition_function_calls` utility, which limits the number of individual evaluations that can be made in parallel (in our case we set this to 100_000).
# %%
from trieste.acquisition.rule import EfficientGlobalOptimization
from trieste.acquisition import ParallelContinuousThompsonSampling
from trieste.acquisition.optimizer import automatic_optimizer_selector
from trieste.acquisition.utils import split_acquisition_function_calls
num_query_points = 100
acq_rule = EfficientGlobalOptimization(
builder=ParallelContinuousThompsonSampling(),
num_query_points=num_query_points,
optimizer=split_acquisition_function_calls(
automatic_optimizer_selector, split_size=100_000
),
)
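# %% [markdown]
# For comparison, the greedy alternative mentioned above can be configured in much the same way. The following sketch is not used in the rest of this notebook; it assumes that `GreedyContinuousThompsonSampling` is exposed alongside the parallel builder and takes no required constructor arguments.
# %%
from trieste.acquisition import GreedyContinuousThompsonSampling
greedy_acq_rule = EfficientGlobalOptimization(
    builder=GreedyContinuousThompsonSampling(),
    num_query_points=num_query_points,
)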
# %% [markdown]
# ## Run the optimization loop
#
# Once the optimization loop is complete, the optimizer will return `num_query_points` new query points for every step in the loop. With only five steps, that's already five hundred points!
# %%
bo = trieste.bayesian_optimizer.BayesianOptimizer(observer, search_space)
num_steps = 5
result = bo.optimize(
num_steps, initial_data, model, acq_rule, track_state=False
)
dataset = result.try_get_final_dataset()
# %% [markdown]
# ## Visualising the result
#
# By plotting the regret achieved by the algorithm, we see the convergence over five BO steps after the initial design (denoted by a vertical line and cross). At least one element from each batch of evaluations (dots) achieves better objective values than its predecessor. The evolution of the best solution is highlighted with an orange line.
# %%
from trieste.experimental.plotting import plot_regret
from matplotlib import pyplot as plt
ground_truth_regret = hartmann_6(dataset.query_points) - Hartmann6.minimum
best_found_truth_idx = tf.squeeze(tf.argmin(ground_truth_regret, axis=0))
fig, ax = plt.subplots()
plot_regret(
ground_truth_regret.numpy(), ax, num_init=num_initial_data_points, idx_best=best_found_truth_idx
)
ax.set_yscale("log")
ax.set_ylabel("Regret")
ax.set_xlabel("# evaluations")
# %% [markdown]
# ## LICENSE
#
# [Apache License 2.0](https://github.com/secondmind-labs/trieste/blob/develop/LICENSE)
| 6,553 | 46.839416 | 703 | py |
trieste-develop | trieste-develop/docs/notebooks/recovering_from_errors.pct.py | # %% [markdown]
# # Recovering from errors during optimization
# %%
import numpy as np
import tensorflow as tf
import random
np.random.seed(1793)
tf.random.set_seed(1793)
random.seed(3)
# %% [markdown]
# Sometimes the Bayesian optimization process encounters an error from which we can recover, without the need to restart the run from the beginning. In this tutorial, we'll simulate such an error and show how to recover from it.
#
# We'll use a similar setup to the [EI notebook](expected_improvement.ipynb), but use an observer that intermittently breaks when evaluated, and needs manual attention to get running again. We can simulate fixing the observer with its `manual_fix` method.
# %%
import trieste
from trieste.objectives import Branin
class FaultyBranin:
def __init__(self):
self._is_broken = False
def manual_fix(self):
self._is_broken = False
def __call__(self, x):
if random.random() < 0.05:
self._is_broken = True
if self._is_broken:
raise Exception("Observer is broken")
return trieste.data.Dataset(x, Branin.objective(x))
observer = FaultyBranin()
# %% [markdown]
# ## Set up the problem
# We'll use the same setup as before, except for the acquisition rule, where we'll use `TrustRegion`. `TrustRegion` is stateful, and we'll need to account for its state to recover, so using this rule gives the reader a more comprehensive overview of how to recover.
# %%
from trieste.models.gpflow import build_gpr, GaussianProcessRegression
search_space = trieste.space.Box(
tf.cast([0.0, 0.0], tf.float64), tf.cast([1.0, 1.0], tf.float64)
)
initial_data = observer(search_space.sample(5))
gpr = build_gpr(initial_data, search_space)
model = GaussianProcessRegression(gpr)
acquisition_rule = trieste.acquisition.rule.TrustRegion()
# %% [markdown]
# ## Run the optimization loop
#
# In this tutorial we'll try to complete fifteen optimization steps, which, with the broken observer, may take more than one attempt. The optimizer returns an `OptimizationResult`, which is simply a container for both:
#
# * the `final_result`, which uses a `Result` type (not to be confused with `OptimizationResult`) to safely encapsulate the final data, models and acquisition state if the process completed successfully, or an error if one occurred
# * the `history` of the successful optimization steps.
# %%
bo = trieste.bayesian_optimizer.BayesianOptimizer(observer, search_space)
num_steps = 15
result = bo.optimize(num_steps, initial_data, model, acquisition_rule, None)
# %% [markdown]
# We can see from the logs that the optimization loop failed, and this can be sufficient to know what to do next if we're working in a notebook. However, sometimes our setup means we don't have access to the logs. We'll pretend from here that's the case.
# %% [markdown]
# ## Handling success
#
# We don't know if the optimization completed successfully or not, so we'll only try to access and plot the data if it was successful. We can find out if this was the case with `result`'s `is_ok` attribute. If it was successful, we know there is data in the `result`, which we can get using `try_get_final_dataset` and view.
# %%
if result.is_ok:
data = result.try_get_final_dataset()
print("best observation: ", tf.reduce_min(data.observations))
# %% [markdown]
# ## Handling failure
#
# If, on the other hand, the optimization didn't complete successfully, we can fix our observer and try again by calling the `continue_optimization` method: this is just like `optimize` except it is passed the `OptimizationResult` of a previous run, from which it extracts the last successful data, model and acquisition state. It also automatically calculates the number of remaining optimization steps.
#
# Note that we can view the `final_result` by printing it. We'll do that here to see what exception was caught.
# %%
if result.is_err:
print("result: ", result.final_result)
observer.manual_fix()
result = bo.continue_optimization(num_steps, result, acquisition_rule)
# %% [markdown]
# We can repeat this until we've spent our optimization budget, using a loop if appropriate. But here, we'll just plot the data if it exists, safely by using `result`'s `is_ok` attribute.
# %%
from trieste.experimental.plotting import plot_bo_points, plot_function_2d
if result.is_ok:
data = result.try_get_final_dataset()
arg_min_idx = tf.squeeze(tf.argmin(data.observations, axis=0))
_, ax = plot_function_2d(
Branin.objective,
search_space.lower,
search_space.upper,
30,
contour=True,
)
plot_bo_points(data.query_points.numpy(), ax[0, 0], 5, arg_min_idx)
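# %% [markdown]
# For completeness, the fix-and-continue pattern above can be wrapped in a simple retry loop, as suggested earlier. This is only a sketch: we assume each failure can be repaired by `manual_fix`, and we cap the number of retries rather than looping indefinitely.
# %%
max_retries = 5
for _ in range(max_retries):
    if result.is_ok:
        break
    observer.manual_fix()
    result = bo.continue_optimization(num_steps, result, acquisition_rule)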
# %% [markdown]
# ## Saving results to disk
#
# For convenience, tracked state is stored in memory by default. However, this can potentially result in Out of Memory errors and also makes it difficult to recover from intentional or unintentional Python process shutdowns. You can instead store the result on disk by passing in a `track_path` argument to `optimize`.
#
# **Note that trieste currently saves models using pickling, which is not portable and not secure. You should only try to load optimization results that you generated yourself on the same system (or a system with the same version libraries).**
# %%
result = bo.optimize(
num_steps, initial_data, model, acquisition_rule, None, track_path="history"
)
# %% [markdown]
# The returned `history` records are now stored in files rather than in memory. Their constituents can be accessed just as before, which loads the content into memory only when required. The `result` is automatically loaded into memory, but is also saved to disk with the rest of the history.
# %%
print(result.history[-1])
print(result.history[-1].model)
# %% [markdown]
# It is also possible to reload the `OptimizationResult` in a new Python process:
# %%
trieste.bayesian_optimizer.OptimizationResult.from_path("history")
# %% [markdown]
# ## Out of memory errors
#
# Since Out Of Memory errors normally result in the Python process shutting down, saving tracked state to disk as described above is an important tool in recovering from them. One possible cause of memory errors is trying to evaluate an acquisition function over a large dataset, e.g. when initializing our gradient-based optimizers. To work around this, you can specify that evaluations of the acquisition function be split up: this splits them (on the first dimension) into batches of a given size, then stitches them back together. To do this, you need to provide an explicit split optimizer and specify a desired batch size.
# %%
from trieste.acquisition.optimizer import automatic_optimizer_selector
from trieste.acquisition.rule import EfficientGlobalOptimization
from trieste.acquisition.utils import split_acquisition_function_calls
optimizer = split_acquisition_function_calls(
automatic_optimizer_selector, split_size=10_000
)
query_rule = EfficientGlobalOptimization(optimizer=optimizer)
acquisition_rule = trieste.acquisition.rule.TrustRegion(rule=query_rule)
# %% [markdown]
# ## LICENSE
#
# [Apache License 2.0](https://github.com/secondmind-labs/trieste/blob/develop/LICENSE)
| 7,238 | 43.140244 | 628 | py |
trieste-develop | trieste-develop/docs/notebooks/deep_gaussian_processes.pct.py | # %% [markdown]
# # Using deep Gaussian processes with GPflux for Bayesian optimization.
# %%
import numpy as np
import tensorflow as tf
np.random.seed(1794)
tf.random.set_seed(1794)
# %% [markdown]
# ## Describe the problem
#
# In this notebook, we show how to use deep Gaussian processes (DGPs) for Bayesian optimization using Trieste and GPflux. DGPs may be better for modeling non-stationary objective functions than standard GP surrogates, as discussed in <cite data-cite="dutordoir2017deep,hebbal2019bayesian"/>.
#
# In this example, we look to find the minimum value of the two- and five-dimensional [Michalewicz functions](https://www.sfu.ca/~ssurjano/michal.html) over the hypercubes $[0, \pi]^2$ and $[0, \pi]^5$. We compare a two-layer DGP model with GPR, using Thompson sampling for both.
#
# The Michalewicz functions are highly non-stationary and have a global minimum that's hard to find, so DGPs might be more suitable than standard GPs, which may struggle because they typically have stationary kernels that cannot easily model non-stationarities.
# %%
import gpflow
from trieste.objectives import Michalewicz2, Michalewicz5
from trieste.objectives.utils import mk_observer
from trieste.experimental.plotting import plot_function_plotly
function = Michalewicz2.objective
F_MINIMIZER = Michalewicz2.minimum
search_space = Michalewicz2.search_space
fig = plot_function_plotly(function, search_space.lower, search_space.upper)
fig.show()
# %% [markdown]
# ## Sample the observer over the search space
#
# We set up the observer as usual, using Sobol sampling to sample the initial points.
# %%
import trieste
observer = mk_observer(function)
num_initial_points = 20
num_steps = 20
initial_query_points = search_space.sample_sobol(num_initial_points)
initial_data = observer(initial_query_points)
# %% [markdown]
# ## Model the objective function
#
# The Bayesian optimization procedure estimates the next best points to query by using a probabilistic model of the objective. We'll use a two layer deep Gaussian process (DGP), built using GPflux. We also compare to a (shallow) GP.
#
# Since DGPs can be hard to build, Trieste provides some basic architectures: here we use the `build_vanilla_deep_gp` function which returns a GPflux model of the `DeepGP` class. As with other models (e.g. GPflow), we cannot use it directly in Bayesian optimization routines, so we need to pass it through an appropriate wrapper, the `DeepGaussianProcess` wrapper in this case. Additionally, since the GPflux interface does not currently support copying DGP architectures, if we wish to have the Bayesian optimizer track the model state, we need to pass in the DGP as a callable closure so that the architecture can be recreated when required (alternatively, we can set `track_state=False` on the optimize call).
#
# A few other useful notes regarding building a DGP model: The DGP model requires us to specify the number of inducing points, as we don't have the true posterior. To train the model we have to use a stochastic optimizer; Adam is used by default, but we can use other stochastic optimizers from TensorFlow. GPflux allows us to use the Keras `fit` method, which makes optimizing a lot easier - this method is used in the background for training the model.
# %%
from functools import partial
from trieste.models.gpflux import DeepGaussianProcess, build_vanilla_deep_gp
def build_dgp_model(data, search_space):
dgp = partial(
build_vanilla_deep_gp,
data,
search_space,
2,
100,
likelihood_variance=1e-5,
trainable_likelihood=False,
)
return DeepGaussianProcess(dgp)
dgp_model = build_dgp_model(initial_data, search_space)
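# %% [markdown]
# As noted above, Adam is used by default, but a different stochastic optimizer from TensorFlow can be passed to the wrapper. The sketch below is not used in the rest of this notebook; it assumes that the `KerasOptimizer` wrapper in `trieste.models.optimizer` accepts a TensorFlow optimizer together with a dictionary of Keras `fit` arguments, and that `DeepGaussianProcess` accepts it via its `optimizer` argument.
# %%
from trieste.models.optimizer import KerasOptimizer
custom_fit_args = {"verbose": 0, "epochs": 200, "batch_size": 100}  # forwarded to Keras fit
custom_dgp_model = DeepGaussianProcess(
    partial(build_vanilla_deep_gp, initial_data, search_space, 2, 100),
    optimizer=KerasOptimizer(tf.optimizers.RMSprop(0.01), fit_args=custom_fit_args),
)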
# %% [markdown]
# ## Run the optimization loop
#
# We can now run the Bayesian optimization loop by defining a `BayesianOptimizer` and calling its `optimize` method.
#
# The optimizer uses an acquisition rule to choose where in the search space to try on each optimization step. We'll start by using Thompson sampling.
#
# We'll run the optimizer for twenty steps. Note: this may take a while!
# %%
from trieste.acquisition.rule import DiscreteThompsonSampling
bo = trieste.bayesian_optimizer.BayesianOptimizer(observer, search_space)
grid_size = 1000
acquisition_rule = DiscreteThompsonSampling(grid_size, 1)
dgp_result = bo.optimize(
num_steps,
initial_data,
dgp_model,
acquisition_rule=acquisition_rule,
)
dgp_dataset = dgp_result.try_get_final_dataset()
# %% [markdown]
# ## Explore the results
#
# We can now get the best point found by the optimizer. Note this isn't necessarily the point that was last evaluated.
# %%
dgp_query_points = dgp_dataset.query_points.numpy()
dgp_observations = dgp_dataset.observations.numpy()
dgp_arg_min_idx = tf.squeeze(tf.argmin(dgp_observations, axis=0))
print(f"query point: {dgp_query_points[dgp_arg_min_idx, :]}")
print(f"observation: {dgp_observations[dgp_arg_min_idx, :]}")
# %% [markdown]
# We can visualise how the optimizer performed as a three-dimensional plot
# %%
from trieste.experimental.plotting import add_bo_points_plotly
fig = plot_function_plotly(
function, search_space.lower, search_space.upper, alpha=0.5
)
fig = add_bo_points_plotly(
x=dgp_query_points[:, 0],
y=dgp_query_points[:, 1],
z=dgp_observations[:, 0],
num_init=num_initial_points,
idx_best=dgp_arg_min_idx,
fig=fig,
)
fig.show()
# %% [markdown]
# We can visualise the model over the objective function by plotting the mean and 95% confidence intervals of its predictive distribution. Note that the DGP model is able to model the local structure of the true objective function.
# %%
import matplotlib.pyplot as plt
from trieste.experimental.plotting import (
plot_regret,
plot_model_predictions_plotly,
)
fig = plot_model_predictions_plotly(
dgp_result.try_get_final_model(),
search_space.lower,
search_space.upper,
num_samples=100,
)
fig = add_bo_points_plotly(
x=dgp_query_points[:, 0],
y=dgp_query_points[:, 1],
z=dgp_observations[:, 0],
num_init=num_initial_points,
idx_best=dgp_arg_min_idx,
fig=fig,
figrow=1,
figcol=1,
)
fig.show()
# %% [markdown]
# We now compare to a GP model with priors over the hyperparameters. We do not expect this to do as well because GP models cannot deal with non-stationary functions well.
# %%
import gpflow
import tensorflow_probability as tfp
from trieste.models.gpflow import GaussianProcessRegression, build_gpr
gpflow_model = build_gpr(initial_data, search_space, likelihood_variance=1e-7)
gp_model = GaussianProcessRegression(gpflow_model)
bo = trieste.bayesian_optimizer.BayesianOptimizer(observer, search_space)
result = bo.optimize(
num_steps,
initial_data,
gp_model,
acquisition_rule=acquisition_rule,
)
gp_dataset = result.try_get_final_dataset()
gp_query_points = gp_dataset.query_points.numpy()
gp_observations = gp_dataset.observations.numpy()
gp_arg_min_idx = tf.squeeze(tf.argmin(gp_observations, axis=0))
print(f"query point: {gp_query_points[gp_arg_min_idx, :]}")
print(f"observation: {gp_observations[gp_arg_min_idx, :]}")
fig = plot_model_predictions_plotly(
result.try_get_final_model(),
search_space.lower,
search_space.upper,
)
fig = add_bo_points_plotly(
x=gp_query_points[:, 0],
y=gp_query_points[:, 1],
z=gp_observations[:, 0],
num_init=num_initial_points,
idx_best=gp_arg_min_idx,
fig=fig,
figrow=1,
figcol=1,
)
fig.show()
# %% [markdown]
# We see that the DGP model does a much better job at understanding the structure of the function. The standard Gaussian process model has a large signal variance and small lengthscales, which do not result in a good model of the true objective. On the other hand, the DGP model is at least able to infer the local structure around the observations.
#
# We can also plot the regret curves of the two models side-by-side.
# %%
gp_suboptimality = gp_observations - F_MINIMIZER.numpy()
dgp_suboptimality = dgp_observations - F_MINIMIZER.numpy()
_, ax = plt.subplots(1, 2)
plot_regret(
dgp_suboptimality,
ax[0],
num_init=num_initial_points,
idx_best=dgp_arg_min_idx,
)
plot_regret(
gp_suboptimality,
ax[1],
num_init=num_initial_points,
idx_best=gp_arg_min_idx,
)
ax[0].set_yscale("log")
ax[0].set_ylabel("Regret")
ax[0].set_ylim(0.5, 3)
ax[0].set_xlabel("# evaluations")
ax[0].set_title("DGP")
ax[1].set_title("GP")
ax[1].set_yscale("log")
ax[1].set_ylim(0.5, 3)
ax[1].set_xlabel("# evaluations")
# %% [markdown]
# We might also expect that the DGP model will do better on higher-dimensional data. We explore this by testing a higher-dimensional version of the Michalewicz function.
#
# Set up the problem.
# %%
function = Michalewicz5.objective
F_MINIMIZER = Michalewicz5.minimum
search_space = Michalewicz5.search_space
observer = mk_observer(function)
num_initial_points = 50
num_steps = 50
initial_query_points = search_space.sample_sobol(num_initial_points)
initial_data = observer(initial_query_points)
# %% [markdown]
# Build the DGP model and run the Bayes opt loop.
# %%
dgp_model = build_dgp_model(initial_data, search_space)
bo = trieste.bayesian_optimizer.BayesianOptimizer(observer, search_space)
acquisition_rule = DiscreteThompsonSampling(grid_size, 1)
dgp_result = bo.optimize(
num_steps,
initial_data,
dgp_model,
acquisition_rule=acquisition_rule,
)
dgp_dataset = dgp_result.try_get_final_dataset()
dgp_query_points = dgp_dataset.query_points.numpy()
dgp_observations = dgp_dataset.observations.numpy()
dgp_arg_min_idx = tf.squeeze(tf.argmin(dgp_observations, axis=0))
print(f"query point: {dgp_query_points[dgp_arg_min_idx, :]}")
print(f"observation: {dgp_observations[dgp_arg_min_idx, :]}")
dgp_suboptimality = dgp_observations - F_MINIMIZER.numpy()
# %% [markdown]
# Repeat the above for the GP model.
# %%
gpflow_model = build_gpr(initial_data, search_space, likelihood_variance=1e-7)
gp_model = GaussianProcessRegression(gpflow_model)
bo = trieste.bayesian_optimizer.BayesianOptimizer(observer, search_space)
result = bo.optimize(
num_steps,
initial_data,
gp_model,
acquisition_rule=acquisition_rule,
)
gp_dataset = result.try_get_final_dataset()
gp_query_points = gp_dataset.query_points.numpy()
gp_observations = gp_dataset.observations.numpy()
gp_arg_min_idx = tf.squeeze(tf.argmin(gp_observations, axis=0))
print(f"query point: {gp_query_points[gp_arg_min_idx, :]}")
print(f"observation: {gp_observations[gp_arg_min_idx, :]}")
gp_suboptimality = gp_observations - F_MINIMIZER.numpy()
# %% [markdown]
# Plot the regret.
# %%
_, ax = plt.subplots(1, 2)
plot_regret(
dgp_suboptimality,
ax[0],
num_init=num_initial_points,
idx_best=dgp_arg_min_idx,
)
plot_regret(
gp_suboptimality,
ax[1],
num_init=num_initial_points,
idx_best=gp_arg_min_idx,
)
ax[0].set_yscale("log")
ax[0].set_ylabel("Regret")
ax[0].set_ylim(1.5, 6)
ax[0].set_xlabel("# evaluations")
ax[0].set_title("DGP")
ax[1].set_title("GP")
ax[1].set_yscale("log")
ax[1].set_ylim(1.5, 6)
ax[1].set_xlabel("# evaluations")
# %% [markdown]
# While the DGP result is still far from the optimum, it is considerably better than that of the GP.
# %% [markdown]
# ## LICENSE
#
# [Apache License 2.0](https://github.com/secondmind-labs/trieste/blob/develop/LICENSE)
| 11,347 | 30.348066 | 699 | py |
trieste-develop | trieste-develop/docs/notebooks/asynchronous_nongreedy_batch_ray.pct.py | # %% [markdown]
# # Asynchronous batch Bayesian optimization
#
# As shown in [Asynchronous Bayesian Optimization](asynchronous_greedy_multiprocessing.ipynb) tutorial, Trieste provides support for running observations asynchronously. In that tutorial we used a greedy batch acquisition function called Local Penalization, and requested one new point whenever an observation was received. We also used the Python multiprocessing module to run distributed observations in parallel.
#
# Here, we demonstrate a slightly different way of doing asynchronous Bayesian optimization. First, we make use of a non-greedy batch acquisition function, known as Batch Monte Carlo Expected Improvement. Second, we wait for several workers to finish, and then launch a new batch of points. However, since our batch size is smaller than the number of workers available, this approach is a hybrid between completely asynchronous and completely synchronous batch optimization. Note that greedy acquisition functions also support batch sizes. Third, we use [Ray](https://www.ray.io/) to hide away most of the complexity of managing distributed workloads. There is no hard dependency in Trieste on a particular tool for parallel processing, and other libraries, such as [Dask](https://dask.org/), can be used.
#
# Together these two notebooks give a comprehensive overview of how to use Trieste in asynchronous scenarios.
# %%
# silence TF warnings and info messages, only print errors
# https://stackoverflow.com/questions/35911252/disable-tensorflow-debugging-information
import os
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"
import tensorflow as tf
tf.get_logger().setLevel("ERROR")
import ray
import numpy as np
import time
# %% [markdown]
# Just as in the other [notebook on asynchronous optimization](asynchronous_greedy_multiprocessing.ipynb), we use the Branin function with delays.
# %%
from trieste.objectives import ScaledBranin
def objective(points, sleep=True):
if points.shape[1] != 2:
raise ValueError(
f"Incorrect input shape, expected (*, 2), got {points.shape}"
)
observations = []
for point in points:
observation = ScaledBranin.objective(point).numpy()
if sleep:
# insert some artificial delay that
# increases linearly with the absolute value of points
# which means our evaluations will take different time
delay = 3 * np.sum(point)
time.sleep(delay)
observations.append((point, observation))
return observations
# %%
# Let's confirm our objective function works as expected
objective(np.array([[0.1, 0.5]]), sleep=False)
# %% [markdown]
# To turn our objective function into a Ray task, we wrap it in a function with an appropriate decorator. We are not using anything beyond the Ray tasks API in this tutorial, and refer interested readers to the [Ray documentation](https://docs.ray.io/en/latest/walkthrough.html) and [Ray crash course](https://github.com/anyscale/academy/blob/main/ray-crash-course/01-Ray-Tasks.ipynb) for more details.
# %%
@ray.remote
def ray_objective(points, sleep=True):
return objective(points, sleep)
# %% [markdown]
# We prepare the model and some initial data to kick-start the optimization process.
# %%
from trieste.space import Box
from trieste.data import Dataset
search_space = Box([0, 0], [1, 1])
num_initial_points = 3
initial_query_points = search_space.sample(num_initial_points)
initial_observations = objective(initial_query_points, sleep=False)
initial_data = Dataset(
query_points=initial_query_points,
observations=tf.constant(
[x[1] for x in initial_observations], dtype=tf.float64
),
)
import gpflow
from trieste.models.gpflow import GaussianProcessRegression, build_gpr
# We set the likelihood variance to a small number because
# we are dealing with a noise-free problem.
gpflow_model = build_gpr(initial_data, search_space, likelihood_variance=1e-7)
model = GaussianProcessRegression(gpflow_model)
# %% [markdown]
# Here we set up the configuration of our optimization run. See comments below for details.
# %%
# Number of worker processes to run simultaneously
# Setting this to 1 will reduce our optimization to non-batch sequential
num_workers = 4
# Number of observations to collect
num_observations = 30
# Batch size of the acquisition function. We will wait for that many workers to return before launching a new batch
batch_size = 2
# Set this flag to False to disable sleep delays in case you want the notebook to execute quickly
enable_sleep_delays = True
# %% [markdown]
# Now we are ready to define the optimizer. Notice how we set the acquisition function to be `BatchMonteCarloExpectedImprovement`. It is also the default function used by the `AsynchronousOptimization` rule, but here we specify it explicitly for clarity. We also set the batch size.
# %%
from trieste.acquisition.rule import AsynchronousOptimization
from trieste.acquisition.function import BatchMonteCarloExpectedImprovement
from trieste.ask_tell_optimization import AskTellOptimizer
monte_carlo_sample_size = 10000
acquisition_function = BatchMonteCarloExpectedImprovement(
sample_size=monte_carlo_sample_size
)
async_rule = AsynchronousOptimization(acquisition_function, num_query_points=batch_size) # type: ignore
async_bo = AskTellOptimizer(search_space, initial_data, model, async_rule)
# %% [markdown]
# Initialize Ray. This line will output the dashboard URL, which you can open in a separate tab to watch workers doing observations.
# %%
ray.init(ignore_reinit_error=True)
# %% [markdown]
# Here is the main optimization loop. First we ask for several batches of points to make sure all allocated workers are busy. Then we keep waiting for the workers to complete their tasks. Whenever `batch_size` tasks come back, we tell Trieste the new observations and ask for another batch of points.
# %%
points_observed = 0
workers = []
# a helper function to launch a worker for a numpy array representing a single point
def launch_worker(x):
worker = ray_objective.remote(np.atleast_2d(x), enable_sleep_delays)
workers.append(worker)
# get first couple of batches of points and init all workers
for _ in range(int(num_workers / batch_size)):
points = async_bo.ask().numpy()
np.apply_along_axis(launch_worker, axis=1, arr=points)
finished_workers = []
while points_observed < num_observations:
ready_workers, remaining_workers = ray.wait(workers, timeout=0)
finished_workers += ready_workers
workers = remaining_workers
if len(finished_workers) < batch_size:
continue
# we saw enough results to ask for a new batch
new_observations = [
observation
for worker in finished_workers
for observation in ray.get(worker)
]
# new_observations is a list of tuples (point, observation value)
# here we turn it into a Dataset and tell it to Trieste
points_observed += len(new_observations)
new_data = Dataset(
query_points=tf.constant(
[x[0] for x in new_observations], dtype=tf.float64
),
observations=tf.constant(
[x[1] for x in new_observations], dtype=tf.float64
),
)
async_bo.tell(new_data)
# get a new batch of points
# and launch workers for each point in the batch
points = async_bo.ask().numpy()
np.apply_along_axis(launch_worker, axis=1, arr=points)
finished_workers = []
# %% [markdown]
# Let's plot the objective function and the points the optimization procedure explored.
# %%
from trieste.experimental.plotting import plot_function_2d, plot_bo_points
dataset = async_bo.to_result().try_get_final_dataset()
arg_min_idx = tf.squeeze(tf.argmin(dataset.observations, axis=0))
query_points = dataset.query_points.numpy()
observations = dataset.observations.numpy()
_, ax = plot_function_2d(
ScaledBranin.objective,
search_space.lower,
search_space.upper,
contour=True,
)
plot_bo_points(
query_points, ax[0, 0], num_initial_points, arg_min_idx, c_pass="tab:red"
)
# %%
ray.shutdown() # "Undo ray.init()". Terminate all the processes started in this notebook.
# %% [markdown]
# ## LICENSE
#
# [Apache License 2.0](https://github.com/secondmind-labs/trieste/blob/develop/LICENSE)
| 8,288 | 38.28436 | 804 | py |
trieste-develop | trieste-develop/docs/notebooks/code_overview.pct.py | # %% [markdown]
# # An overview of Trieste types
# %% [markdown]
# Trieste is dedicated to Bayesian optimization, the process of finding the *optimal values of an expensive, black-box objective function by employing probabilistic models over observations*. This notebook explains how the different parts of this process are represented by different types in the code, and how these types can be extended.
# %% [markdown]
# ## Key types
#
# The following types represent the key concepts in Trieste. For a full listing of all the types in Trieste, see the API reference.
# %% [markdown]
# ### `Observer`
#
# The `Observer` type definition represents the black box objective function. Observers are functions that accept query points and return datasets that contain the observations. Observations are either a single objective value that we wish to optimize, or a dictionary of multiple tagged values that must be combined somehow, for example an objective and an inequality constraint. Objective values can be either single or multi-dimensional (see [multi-objective optimization](multi_objective_ehvi.ipynb)).
# %% [markdown]
# ### `Dataset`
#
# The `Dataset` container class represents the query points and observations from a single observer. Observers with multiple observations are represented by a dictionary of multiple tagged `Dataset`s.
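# %% [markdown]
# For instance, a minimal `Dataset` can be constructed directly from matching tensors of query points and observations (a small sketch purely for illustration):
# %%
import tensorflow as tf
from trieste.data import Dataset
example_dataset = Dataset(
    query_points=tf.constant([[0.1, 0.2], [0.3, 0.4]], dtype=tf.float64),
    observations=tf.constant([[0.5], [0.7]], dtype=tf.float64),
)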
# %% [markdown]
# ### `ProbabilisticModel`
#
# The `ProbabilisticModel` protocol represents any probabilistic model used to model observations. Like for `Dataset`, observers with multiple observations are modelled by a dictionary of multiple tagged models.
#
# At its simplest, a `ProbabilisticModel` is anything that implements a `predict` and `sample` method. However, many algorithms in Trieste depend on models with additional features, which are represented by the various subclasses of `ProbabilisticModel`. The standard Bayesian optimizer uses `TrainableProbabilisticModel` models, which also implement an `update` method (to update the model structure when new data is added to the training set) and an `optimize` method (to optimize the model training loss). Specific acquisition functions may require other features, represented by classes like `SupportsPredictJoint` (ability to predict the joint distribution at several inputs) and `SupportsGetObservationNoise` (ability to predict the observation noise variance). Since these are defined as protocols, it is possible to define and depend on the intersections of different model types (e.g. only support models that are both `SupportsPredictJoint` and `SupportsGetObservationNoise`).
#
# Multiple models can also be combined into a single `ModelStack` model that combines their outputs for prediction and sampling. This can be useful when modelling multi-objective observations with independent single-output models. There are also constructors like `TrainableModelStack` and `PredictJointModelStack` that combine specific types of model for use with code that requires that type, delegating the relevant methods where appropriate.
# %% [markdown]
# ### `SearchSpace`
#
# The `SearchSpace` base class represents the domain over which the objective function is to be optimized. Spaces can currently be either continuous `Box` spaces, discrete `DiscreteSearchSpace` spaces, or a `TaggedProductSearchSpace` product of multiple spaces. All search spaces expose their dimensions, bounds and a sampler.
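# %% [markdown]
# As a short illustrative sketch, spaces can be constructed and combined as follows (the tags below are arbitrary names chosen for this example):
# %%
import tensorflow as tf
from trieste.space import Box, DiscreteSearchSpace, TaggedProductSearchSpace
continuous_space = Box([0.0, 0.0], [1.0, 1.0])
discrete_space = DiscreteSearchSpace(tf.constant([[0.0], [0.5], [1.0]], dtype=tf.float64))
hybrid_space = TaggedProductSearchSpace(
    [continuous_space, discrete_space], tags=["continuous", "discrete"]
)
hybrid_space.sample(3)  # three random points from the three-dimensional product space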
# %% [markdown]
# ### `AcquisitionRule`
#
# The `AcquisitionRule` base class represents a routine for selecting new query points during a Bayesian optimization loop (via an `acquire` method). It is generic on three types:
#
# * **ResultType**: the output of the rule, typically this is just tensors representing the query points. However, it can also be functions that accept some *acquisition state* and return the query points with a new state.
# * **SearchSpaceType**: the type of the search space; any optimizer that the rule uses must support this.
# * **ProbabilisticModelType**: the type of the models; any acquisition functions or samplers that the rule uses must support this.
#
# Examples of rules include the following (a brief construction sketch is given after this list):
#
# 1. `EfficientGlobalOptimization` (EGO) is the most commonly used rule, and uses acquisition functions and optimizers to select new query points.
# 1. `AsynchronousOptimization` is similar to EGO but uses acquisition state to keep track of points whose observations are still pending.
# 1. `DiscreteThompsonSampling` uses Thompson samplers rather than acquisition functions to select new query points.
#
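# %% [markdown]
# As a brief sketch of how the rules listed above are constructed (using default settings and argument values consistent with the other tutorials in this documentation):
# %%
from trieste.acquisition import ExpectedImprovement
from trieste.acquisition.rule import (
    DiscreteThompsonSampling,
    EfficientGlobalOptimization,
)
ego_rule = EfficientGlobalOptimization(builder=ExpectedImprovement())
ts_rule = DiscreteThompsonSampling(1000, 1)  # 1000 candidate samples, 1 query point per step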
# %% [markdown]
# ### `AcquisitionFunction` and `AcquisitionFunctionBuilder`
#
# The `AcquisitionFunction` type definition represents any acquisition function: that is, a function that maps a set of query points to a single value that describes how useful it would be to evaluate all these points together.
#
# The `AcquisitionFunctionBuilder` base class, meanwhile, represents something that builds and updates acquisition functions. At the start of the Bayesian optimization, the builder's `prepare_acquisition_function` method is called by the acquisition rule to create an acquisition function from the current observations and probabilistic models. To avoid unnecessary tensorflow compilations, most builders also define an `update_acquisition_function` method for updating the function using the updated observations and models. (The ones that don't instead generate a new acquisition function when necessary.)
#
# Acquisition functions that support only one probabilistic model are more easily defined using the `SingleModelAcquisitionBuilder` convenience class, which avoids having to deal with dictionaries.
#
# Acquisition functions that are suitable for greedily building batches of points can be defined using `GreedyAcquisitionFunctionBuilder` (or `SingleModelGreedyAcquisitionBuilder`) using a `prepare_acquisition_function` method that also accepts the points already chosen to be in the current batch.
# %% [markdown]
# ### `AcquisitionOptimizer`
#
# The `AcquisitionOptimizer` type definition represents an optimizer function that maximizes an acquisition function over a search space. Trieste provides a `generate_continuous_optimizer` function for generating gradient-based optimizers for continuous (or hybrid) spaces, an `optimize_discrete` function for optimizing discrete spaces, and `automatic_optimizer_selector` for quickly selecting an appropriate optimizer.
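# %% [markdown]
# For example, a gradient-based optimizer with custom settings can be generated and passed to a rule via its `optimizer` argument. This is a sketch: the keyword arguments shown here are assumptions about the `generate_continuous_optimizer` signature and may need adjusting.
# %%
from trieste.acquisition.optimizer import generate_continuous_optimizer
from trieste.acquisition.rule import EfficientGlobalOptimization
custom_rule = EfficientGlobalOptimization(
    optimizer=generate_continuous_optimizer(
        num_initial_samples=1_000, num_optimization_runs=10
    )
)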
# %% [markdown]
# ### `BayesianOptimizer` and `AskTellOptimizer`
#
# The `BayesianOptimizer` and `AskTellOptimizer` classes are the two Bayesian optimization interfaces provided by Trieste. Both classes provide [monitoring using TensorBoard](visualizing_with_tensorboard.ipynb).
#
# `BayesianOptimizer` exposes an `optimize` method for running a Bayesian optimization loop with given initial datasets and models, and a given number of steps (and an optional early stop callback).
#
# `AskTellOptimizer` provides [greater control](ask_tell_optimization.ipynb) over the loop, by providing separate `ask` and `tell` steps for suggesting query points and updating the models with new observations.
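# %% [markdown]
# As a small sketch of the ask/tell flow, reusing helpers demonstrated in the other tutorials (this cell runs a short toy optimization and is purely illustrative):
# %%
from trieste.acquisition.rule import EfficientGlobalOptimization
from trieste.ask_tell_optimization import AskTellOptimizer
from trieste.models.gpflow import GaussianProcessRegression, build_gpr
from trieste.objectives import ScaledBranin
from trieste.objectives.utils import mk_observer
toy_observer = mk_observer(ScaledBranin.objective)
toy_space = ScaledBranin.search_space
toy_data = toy_observer(toy_space.sample(5))
toy_model = GaussianProcessRegression(build_gpr(toy_data, toy_space))
ask_tell = AskTellOptimizer(toy_space, toy_data, toy_model, EfficientGlobalOptimization())
for _ in range(3):
    new_points = ask_tell.ask()  # suggest the next query point(s)
    ask_tell.tell(toy_observer(new_points))  # update the data and model with the new observation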
# %% [markdown]
# ## Extending the key types
# %%
from __future__ import annotations
from typing import Optional
import tensorflow as tf
from trieste.types import TensorType
# %% [markdown]
# This section explains how to define new observers, model types and acquisition functions.
# %% [markdown]
# ### New observers
#
# Defining an observer with a single observation can be as simple as defining a function that returns that observation:
# %%
from trieste.objectives.utils import mk_observer
def simple_quadratic(x: TensorType) -> TensorType:
"A trivial quadratic function over :math:`[0, 1]^2`."
return -tf.math.reduce_sum(x, axis=-1, keepdims=True) ** 2
observer = mk_observer(simple_quadratic)
observer(tf.constant([[0, 1], [1, 1]], dtype=tf.float64))
# %% [markdown]
# A multi-observation observer can similarly be constructed from multiple functions:
# %%
from trieste.objectives.utils import mk_multi_observer
def simple_constraint(x: TensorType) -> TensorType:
"A trivial constraints over :math:`[0, 1]^2`."
return tf.math.reduce_min(x, axis=-1, keepdims=True)
multiobserver = mk_multi_observer(
OBJECTIVE=simple_quadratic, CONSTRAINT=simple_constraint
)
multiobserver(tf.constant([[0, 1], [1, 1]], dtype=tf.float64))
# %% [markdown]
# Note however that observers are not restricted to returning datasets containing precisely the observed query points: if need be, they can also return arbitrary Datasets with missing or additional points and observations.
# %% [markdown]
# ### New probabilistic model types
#
# Defining a new probabilistic model type simply involves writing a class that implements all the relevant methods (at the very least `predict` and `sample`). For clarity, it is best to also explicitly inherit from the supported feature protocols.
# %%
from trieste.models.interfaces import (
TrainableProbabilisticModel,
HasReparamSampler,
ReparametrizationSampler,
SupportsGetObservationNoise,
)
class GizmoModel(
TrainableProbabilisticModel, HasReparamSampler, SupportsGetObservationNoise
):
"Made-up trainable model type that supports both reparam_sampler and get_observation_noise."
def predict(
self, query_points: TensorType
) -> tuple[TensorType, TensorType]:
raise NotImplementedError
def reparam_sampler(
self, num_samples: int
) -> ReparametrizationSampler[GizmoModel]:
raise NotImplementedError
... # sample, update, optimize, get_observation_noise
# %% [markdown]
# If the new model type has an additional feature on which you'd like to depend, e.g. in a new acquisition function, then you can define that feature as a protocol. Marking it runtime_checkable will allow you to check for the feature elsewhere in your code too.
# %%
from trieste.models.interfaces import ProbabilisticModel
from typing_extensions import Protocol, runtime_checkable
@runtime_checkable
class HasGizmo(ProbabilisticModel, Protocol):
"A probabilistic model that has a 'gizmo' method."
def gizmo(self) -> int:
"A 'gizmo' method."
raise NotImplementedError
# %% [markdown]
# If your acquisition function depends on a combination of features, then you can define an intersection protocol and use it when defining the acquisition function:
# %%
@runtime_checkable
class HasGizmoReparamSamplerAndObservationNoise(
HasGizmo, HasReparamSampler, SupportsGetObservationNoise, Protocol
):
"""A model that supports both gizmo, reparam_sampler and get_observation_noise."""
pass
# %% [markdown]
# ### New acquisition function builders
#
# To define a new acquisition function builder, you simply need to define a class with a `prepare_acquisition_function` method that returns an `AcquisitionFunction`. If the acquisition function depends on just one model/dataset (as is often the case) then you can define it as a `SingleModelAcquisitionBuilder`; if it depends on more than one (e.g. both an objective and a constraint) then you must define it as an `AcquisitionFunctionBuilder` instead. You can also specify (in brackets) the type of probabilistic models that the acquisition function supports: e.g. a `SingleModelAcquisitionBuilder[HasReparamSampler]` only supports models with a reparametrization sampler. This allows the type checker to warn you if you try to use an incompatible model type.
# %%
from trieste.acquisition import (
AcquisitionFunction,
SingleModelAcquisitionBuilder,
)
from trieste.data import Dataset
class ProbabilityOfValidity(SingleModelAcquisitionBuilder[ProbabilisticModel]):
def prepare_acquisition_function(
self, model: ProbabilisticModel, dataset: Optional[Dataset] = None
) -> AcquisitionFunction:
def acquisition(at: TensorType) -> TensorType:
mean, _ = model.predict_y(tf.squeeze(at, -2))
return mean
return acquisition
# %% [markdown]
# For efficiency, it usually makes sense to compile the generated acquisition function into a TensorFlow graph using `tf.function`. Furthermore, to avoid generating (and compiling) a new acquisition function on each Bayesian optimization loop, you can define an `update_acquisition_function` method that can instead update the previously generated acquisition function using the new models and data. This may involve updating the acquisition function's internal state (which you should store in `tf.Variable`s), though if the function has no internal state then it is sufficient to simply return the old function unchanged.
# %%
class ProbabilityOfValidity2(SingleModelAcquisitionBuilder[ProbabilisticModel]):
def prepare_acquisition_function(
self, model: ProbabilisticModel, dataset: Optional[Dataset] = None
) -> AcquisitionFunction:
@tf.function
def acquisition(at: TensorType) -> TensorType:
mean, _ = model.predict_y(tf.squeeze(at, -2))
return mean
return acquisition
def update_acquisition_function(
self,
function: AcquisitionFunction,
model: ProbabilisticModel,
dataset: Optional[Dataset] = None,
) -> AcquisitionFunction:
return function # no need to update anything
# %% [markdown]
# ## LICENSE
#
# [Apache License 2.0](https://github.com/secondmind-labs/trieste/blob/develop/LICENSE)
| 13,431 | 54.04918 | 985 | py |
trieste-develop | trieste-develop/docs/notebooks/expected_improvement.pct.py | # %% [markdown]
# # Noise-free optimization with Expected Improvement
# %%
import numpy as np
import tensorflow as tf
np.random.seed(1793)
tf.random.set_seed(1793)
# %% [markdown]
# ## Describe the problem
#
# In this example, we look to find the minimum value of the two-dimensional Branin function over the hypercube $[0, 1]^2$. The Branin function is a popular toy function used in Bayesian optimization literature. Trieste provides a selection of toy functions in `trieste.objectives` package, where besides the functions we also provide their minimizers, minima and search space definitions.
#
# Below we use a version of the Branin function scaled to the hypercube search space. For the Branin we use the predefined search space, but otherwise one would define the search space directly using a `Box` object (illustrated below as well). We also plot contours of the Branin over the search space.
#
#
# %%
from trieste.objectives import ScaledBranin
from trieste.experimental.plotting import plot_function_plotly
from trieste.space import Box
scaled_branin = ScaledBranin.objective
search_space = ScaledBranin.search_space # predefined search space
search_space = Box([0, 0], [1, 1]) # define the search space directly
fig = plot_function_plotly(
scaled_branin,
search_space.lower,
search_space.upper,
)
fig.show()
# %% [markdown]
# ## Sample the observer over the search space
#
# Sometimes we don't have direct access to the objective function. We only have an observer that indirectly observes it. In _Trieste_, an observer can output a number of datasets. In our case, we only have one dataset, the objective. We can convert a function with `branin`'s signature to a single-output observer using `mk_observer`.
#
# The optimization procedure will benefit from having some starting data from the objective function to base its search on. We sample a five point space-filling design from the search space and evaluate it with the observer. For continuous search spaces, Trieste supports random, Sobol and Halton initial designs.
# %%
import trieste
observer = trieste.objectives.utils.mk_observer(scaled_branin)
num_initial_points = 5
initial_query_points = search_space.sample_sobol(num_initial_points)
initial_data = observer(initial_query_points)
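# %% [markdown]
# For reference, the random and Halton designs mentioned above can be generated analogously. This is a sketch only (we assume the corresponding sampling methods exist on the continuous search space); we continue with the Sobol points below.
# %%
random_initial_query_points = search_space.sample(num_initial_points)
halton_initial_query_points = search_space.sample_halton(num_initial_points)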
# %% [markdown]
# ## Model the objective function
#
# The Bayesian optimization procedure estimates the next best points to query by using a probabilistic model of the objective. We'll use Gaussian Process (GP) regression for this, as provided by GPflow. The model will need to be trained on each step as more points are evaluated, by default it uses GPflow's Scipy optimizer.
#
# The GPflow models cannot be used directly in our Bayesian optimization routines, only through a valid model wrapper. Trieste has wrappers that support several popular models. For instance, `GPR` and `SGPR` models from GPflow have to be used with `GaussianProcessRegression` wrapper. These wrappers standardise outputs from all models, deal with preparation of the data and implement additional methods needed for Bayesian optimization. Below we construct a `GPR` model from GPflow and pass it to the `GaussianProcessRegression` wrapper. Wrappers as a rule have an `optimizer` argument and potentially some additional model arguments (for example, `num_kernel_samples` as explained below). All arguments except for the model are set to sensible defaults, users will need to look up the wrapper to check how to customize these settings.
#
# Note below that we put priors on the parameters of our GP model's kernel in order to stabilize model fitting. We found the priors below to be highly effective for objective functions defined over the unit hypercube and with an output standardized to have zero mean and unit variance. For objective functions with different scaling, other priors will likely be more appropriate. Our fitted model uses the maximum a posteriori estimate of these kernel parameters, as found by optimizing the kernel parameters starting from the best of `num_kernel_samples` random samples from the kernel parameter priors. For illustration we set the `num_kernel_samples` to 100 (default value is 10). If we do not specify kernel priors, then Trieste returns the maximum likelihood estimate of the kernel parameters.
#
# %%
import gpflow
import tensorflow_probability as tfp
from trieste.models.gpflow import GaussianProcessRegression
def build_model(data):
variance = tf.math.reduce_variance(data.observations)
kernel = gpflow.kernels.Matern52(variance=variance, lengthscales=[0.2, 0.2])
prior_scale = tf.cast(1.0, dtype=tf.float64)
kernel.variance.prior = tfp.distributions.LogNormal(
tf.cast(-2.0, dtype=tf.float64), prior_scale
)
kernel.lengthscales.prior = tfp.distributions.LogNormal(
tf.math.log(kernel.lengthscales), prior_scale
)
gpr = gpflow.models.GPR(data.astuple(), kernel, noise_variance=1e-5)
gpflow.set_trainable(gpr.likelihood, False)
return GaussianProcessRegression(gpr, num_kernel_samples=100)
model = build_model(initial_data)
# %% [markdown]
# Constructing a GPflow model can be somewhat involved and take a dozen lines of non-trivial code. Hence, Trieste has build functions for the supported GPflow models. For example, for the GPR model we would use a `build_gpr` model building function, that sets sensible initial parameters and priors, almost exactly the same as seen above. We have found these settings to be effective in most cases.
# %%
from trieste.models.gpflow import build_gpr
gpflow_model = build_gpr(initial_data, search_space, likelihood_variance=1e-7)
model = GaussianProcessRegression(gpflow_model, num_kernel_samples=100)
# %% [markdown]
# ## Run the optimization loop
#
# We can now run the Bayesian optimization loop by defining a `BayesianOptimizer` and calling its `optimize` method.
#
# The optimizer uses an acquisition rule to choose where in the search space to try on each optimization step. We'll use the default acquisition rule, which is Efficient Global Optimization with Expected Improvement.
#
# We'll run the optimizer for fifteen steps.
#
# The optimization loop catches errors so as not to lose progress, which means the optimization loop might not complete and the data from the last step may not exist. Here we'll handle this crudely by asking for the data regardless, using `.try_get_final_datasets()`, which will re-raise the error if one did occur. For a review of how to handle errors systematically, there is a [dedicated tutorial](recovering_from_errors.ipynb).
# %%
bo = trieste.bayesian_optimizer.BayesianOptimizer(observer, search_space)
num_steps = 15
result = bo.optimize(num_steps, initial_data, model)
dataset = result.try_get_final_dataset()
# %% [markdown]
# In this tutorial we will manually explore the results once the optimization loop completes. For how to monitor the loop in realtime, including some of the plots shown below, see [visualizing and tracking optimizations using Tensorboard](visualizing_with_tensorboard.ipynb).
# %% [markdown]
# ## Explore the results
#
# We can now get the best point found by the optimizer. Note this isn't necessarily the point that was last evaluated.
# %%
query_point, observation, arg_min_idx = result.try_get_optimal_point()
print(f"query point: {query_point}")
print(f"observation: {observation}")
# %% [markdown]
# We can visualise how the optimizer performed by plotting all the acquired observations, along with the true function values and optima, either in a two-dimensional contour plot ...
# %%
from trieste.experimental.plotting import plot_bo_points, plot_function_2d
query_points = dataset.query_points.numpy()
observations = dataset.observations.numpy()
_, ax = plot_function_2d(
scaled_branin,
search_space.lower,
search_space.upper,
contour=True,
)
plot_bo_points(query_points, ax[0, 0], num_initial_points, arg_min_idx)
ax[0, 0].set_xlabel(r"$x_1$")
ax[0, 0].set_xlabel(r"$x_2$")
# %% [markdown]
# ... or as a three-dimensional plot
# %%
from trieste.experimental.plotting import add_bo_points_plotly
fig = plot_function_plotly(
scaled_branin,
search_space.lower,
search_space.upper,
)
fig = add_bo_points_plotly(
x=query_points[:, 0],
y=query_points[:, 1],
z=observations[:, 0],
num_init=num_initial_points,
idx_best=arg_min_idx,
fig=fig,
)
fig.show()
# %% [markdown]
# We can also visualise how each successive point compares to the current best.
#
# We produce two plots. The left hand plot shows the observations (crosses and dots), the current best (orange line), and the start of the optimization loop (blue line). The right hand plot shows the same observations as the earlier two-dimensional contour plot, but without the objective function contours. The best point is shown in each (purple dot).
# %%
import matplotlib.pyplot as plt
from trieste.experimental.plotting import plot_regret
suboptimality = observations - ScaledBranin.minimum.numpy()
_, ax = plt.subplots(1, 2)
plot_regret(
suboptimality, ax[0], num_init=num_initial_points, idx_best=arg_min_idx
)
plot_bo_points(
query_points, ax[1], num_init=num_initial_points, idx_best=arg_min_idx
)
ax[0].set_yscale("log")
ax[0].set_ylabel("Regret")
ax[0].set_ylim(0.001, 100)
ax[0].set_xlabel("# evaluations")
# %% [markdown]
# We can visualise the model over the objective function by plotting the mean and 95% confidence intervals of its predictive distribution. Like with the data before, we can get the model with `.try_get_final_model()`.
# %%
from trieste.experimental.plotting import plot_model_predictions_plotly
fig = plot_model_predictions_plotly(
result.try_get_final_model(),
search_space.lower,
search_space.upper,
)
fig = add_bo_points_plotly(
x=query_points[:, 0],
y=query_points[:, 1],
z=observations[:, 0],
num_init=num_initial_points,
idx_best=arg_min_idx,
fig=fig,
figrow=1,
figcol=1,
)
fig.show()
# %% [markdown]
# We can also inspect the model hyperparameters, and use the history to see how the length scales evolved over iterations. By default, the model history is kept in memory, though it's possible to store it to disk instead using optimize's `track_path` argument (see [this tutorial](recovering_from_errors.ipynb)). Note also the history is saved at the *start* of each step, and as such never includes the final result, so we'll add that ourselves.
# %%
gpflow.utilities.print_summary(
result.try_get_final_model().model # type: ignore
)
variance_list = [
step.model.model.kernel.variance.numpy() # type: ignore
for step in result.history + [result.final_result.unwrap()]
]
ls_list = [
step.model.model.kernel.lengthscales.numpy() # type: ignore
for step in result.history + [result.final_result.unwrap()]
]
variance = np.array(variance_list)
ls = np.array(ls_list)
fig, ax = plt.subplots(1, 2)
ax[0].plot(variance, label="Kernel variance")
ax[0].legend(loc="upper left")
ax[0].set_xlabel("# Evaluations")
ax[0].set_xlabel("Parameter Value")
ax[1].plot(ls[:, 0], label="Kernel lengthscale 1")
ax[1].plot(ls[:, 1], label="Kernel lengthscale 2")
ax[1].legend(loc="upper left")
ax[1].set_xlabel("# Evaluations")
fig.tight_layout()
# %% [markdown]
# ## Run the optimizer for more steps
#
# If we need more iterations for better convergence, we can run the optimizer again using the data produced from the last run, as well as the model. We'll visualise the final data.
# %%
num_steps = 10
result = bo.optimize(
num_steps, result.try_get_final_dataset(), result.try_get_final_model()
)
dataset = result.try_get_final_dataset()
_, _, arg_min_idx = result.try_get_optimal_point()
_, ax = plot_function_2d(
scaled_branin,
search_space.lower,
search_space.upper,
grid_density=40,
contour=True,
)
plot_bo_points(
dataset.query_points.numpy(),
ax=ax[0, 0],
num_init=len(dataset.query_points),
idx_best=arg_min_idx,
)
ax[0, 0].set_xlabel(r"$x_1$")
ax[0, 0].set_xlabel(r"$x_2$")
# %% [markdown]
# ## Save the results
#
# Trieste provides two ways to save and restore optimization results. The first uses pickling to save the results (including the datasets and models), allowing them to be easily reloaded. **Note however that this is not portable and not secure.** You should only try to load optimization results that you generated yourself on the same system (or a system with the same library versions).
# %%
# save the results to a given path
result.save("results_path")
# load the results
saved_result = trieste.bayesian_optimizer.OptimizationResult.from_path( # type: ignore
"results_path"
)
saved_result.try_get_final_model().model # type: ignore
# %% [markdown]
# The second approach is to save the model using the tensorflow SavedModel format. This requires explicitly exporting the methods to be saved and results in a portable model that can be safely loaded and evaluated, but which can no longer be used in subsequent BO steps.
# %%
# save the model to a given path, exporting just the predict method
module = result.try_get_final_model().get_module_with_variables()
module.predict = tf.function(
model.predict,
input_signature=[tf.TensorSpec(shape=[None, 2], dtype=tf.float64)],
)
tf.saved_model.save(module, "model_path")
# load the model
saved_model = tf.saved_model.load("model_path")
saved_model.predict(initial_query_points)
# compare prediction results
query_points = search_space.sample_sobol(1)
print("Original model prediction: ", model.predict(query_points))
print("Saved model prediction: ", saved_model.predict(query_points))
# %% [markdown]
# ## LICENSE
#
# [Apache License 2.0](https://github.com/secondmind-labs/trieste/blob/develop/LICENSE)
| 13,740 | 41.541796 | 836 | py |
trieste-develop | trieste-develop/docs/notebooks/lunar_lander_videos/generate_video.py | """This script is used to generate videos for the OpenAI Gym notebook.
First two functions, as well as constants, shall be in sync with the notebook.
At the bottom of this file there are parameters and random seeds used to generate each video.
The video and several json files will be created in this folder, with some auto-generated names.
Depending on your environment, there might be various dependencies you'd need to run this script.
In general, these may include:
apt install python-opengl
apt install ffmpeg
apt install xvfb
apt install x11-utils
pip install pyglet
That is, of course, in addition to `gym` and `box2d` required to run the environment itself.
You may also need additional software depending on your OS setup (e.g. if you are using Ubuntu on WSL).
"""
import numpy as np
import gym
from gym import wrappers
# copied verbatim from https://github.com/uber-research/TuRBO
def heuristic_Controller(s, w):
angle_targ = s[0] * w[0] + s[2] * w[1]
if angle_targ > w[2]:
angle_targ = w[2]
if angle_targ < -w[2]:
angle_targ = -w[2]
hover_targ = w[3] * np.abs(s[0])
angle_todo = (angle_targ - s[4]) * w[4] - (s[5]) * w[5]
hover_todo = (hover_targ - s[1]) * w[6] - (s[3]) * w[7]
if s[6] or s[7]:
angle_todo = w[8]
hover_todo = -(s[3]) * w[9]
a = 0
if hover_todo > np.abs(angle_todo) and hover_todo > w[10]:
a = 2
elif angle_todo < -w[11]:
a = 3
elif angle_todo > +w[11]:
a = 1
return a
STEPS_LIMIT = 1000
TIMEOUT_REWARD = -100
def demo_heuristic_lander(env, w, seed=None):
total_reward = 0
steps = 0
env = wrappers.Monitor(env, "./", force=True)
env.reset(seed=seed)
s = env.reset()
while True:
if steps > STEPS_LIMIT:
total_reward -= TIMEOUT_REWARD
return total_reward
a = heuristic_Controller(s, w)
s, r, done, info = env.step(a)
total_reward += r
steps += 1
if done:
break
return total_reward
##### crash #####
# seed = 243
# w = np.array([0.43302354137170807, 0.4347569063236112, 0.9030431488833419, 0.4571912304653558, 0.16031696311264643, 0.42618502575658734, 0.06646770791282308, 0.007448066139267295, 0.41012140687808296, 0.11476564314453963, 0.7826389658545991, 0.31918239952190985])
##### timeout #####
# seed = 155
# w = np.array([0.06803627803169543, 0.4287189458093279, 0.476930399661873, 0.5592808413250296, 0.5573280433913701, 0.5095367359357519, 0.7429874662019844, 0.7249766383642469, 0.1320130664358321, 0.7567430455054414, 0.014051753581426185, 0.07791685682019334])
##### out of bounds #####
# seed = 5
# w = np.array([0.9679939623340275, 0.2721022418748966, 0.24515670795541378, 0.8011176119748256, 0.13565253791220666, 0.7385592285062779, 0.3511027202815271, 0.44112350462209715, 0.02897150418914718, 0.8063915664159489, 0.21076948458335876, 0.8336549469213406])
##### slam on the surface #####
# seed = 351
# w = np.array([0.7605584916628452, 0.09770936735877278, 0.012443723883917679, 0.9793154713136014, 0.7693185448538669, 0.46137680182673924, 0.6242939767792566, 0.41012520079510484, 0.5981279203315495, 0.8882190266088754, 0.4184679411903651, 0.17956309170693419])
##### success #####
# seed = 1
# w = np.array([0.3408491530995111, 0.21393609845608644, 0.6957018757563389, 0.0, 0.9776271241238772, 0.2960463399024492, 0.7020102045624167, 1.006012538196605, 0.0, 0.0, 0.0, 0.0])
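# NOTE: exactly one of the (seed, w) pairs above needs to be uncommented before
# running this script, otherwise the call below fails with a NameError.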
demo_heuristic_lander(gym.make("LunarLander-v2"), w, seed) # type: ignore
| 3,532 | 34.686869 | 265 | py |
trieste-develop | trieste-develop/docs/notebooks/quickrun/quickrun.py | # Copyright 2021 The Trieste Contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
A script to apply modifications to the notebook scripts based on YAML config,
used to make them run more quickly in continuous integration.
"""
from jsonschema import validate
from pathlib import Path
import re
import sys
import yaml
import logging
import argparse
logging.basicConfig(format="%(asctime)s %(levelname)-8s %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
# The YAML config files specify a sequence of line replacements to apply.
# The "from" field is a regular expression that must match the entire line (apart from leading
# whitespace).
YAML_CONFIG_SCHEMA = """
type: object
properties:
replace:
type: array
items:
type: object
properties:
from:
type: string
to:
type: string
required:
- from
- to
required:
- replace
"""
JSON_SCHEMA = yaml.safe_load(YAML_CONFIG_SCHEMA)
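# Illustrative example (hypothetical values, not a config shipped with the
# notebooks): a notebook's quickrun YAML could look like
#
#   replace:
#     - from: "num_steps = 25"
#       to: "num_steps = 2"
#
# With this config, modify_notebook() rewrites the matching notebook line as
#   num_steps = 2  # quickrun num_steps = 25
# and revert_notebook() later restores the original text after the marker.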
def modify_all(revert: bool = False) -> None:
"""
Modify all the notebooks that have corresponding YAML config to/from quickrun.
:param revert: whether to revert the modifications
:raise ValidationError: if any of the YAML configs are invalid
"""
base = Path(sys.path[0])
for path in sorted(base.glob("../*.pct.py")):
config_path = Path(base / path.name.replace(".pct.py", ".yaml"))
if config_path.exists():
with config_path.open(encoding="utf-8") as fp:
config = yaml.safe_load(fp)
validate(config, JSON_SCHEMA)
revert_notebook(path) if revert else modify_notebook(path, config)
else:
logger.info("No YAML config found for %s", path.name)
def modify_notebook(path: Path, config: dict) -> None:
"""
Modify a notebook using the given config.
:param path: notebook path
:param config: loaded config
:raise ValueError: If the config specifies a substitution that doesn't match the notebook.
"""
notebook = path.read_text(encoding="utf-8")
if "# quickrun" in notebook:
logger.warning("Already modified %s for quickrun", path.name)
return
for repl in config["replace"]:
repl_from = "^(( *){})$".format(repl["from"])
repl_to = "\\2{} # quickrun \\1".format(repl["to"])
if not re.search(repl_from, notebook, flags=re.MULTILINE):
raise ValueError("Quickrun replacement %r doesn't match file %s" % (repl["from"], path))
notebook = re.sub(repl_from, repl_to, notebook, flags=re.MULTILINE)
path.write_text(notebook, encoding="utf-8")
logger.info("Modified %s", path.name)
def revert_notebook(path: Path) -> None:
"""
Revert a notebook from quickrun format.
    :param path: notebook path
"""
notebook = path.read_text(encoding="utf-8")
if "# quickrun" not in notebook:
logger.info("No need to revert %s", path.name)
return
notebook = re.sub("^.* # quickrun (.*)$", "\\1", notebook, flags=re.MULTILINE)
path.write_text(notebook, encoding="utf-8")
logger.info("Reverted %s", path.name)
def main() -> None:
"""Script entry point."""
parser = argparse.ArgumentParser(
description="Modify notebook files for continuous integration."
)
parser.add_argument("--revert", action="store_true", help="Revert notebook files")
args = parser.parse_args()
modify_all(revert=args.revert)
if __name__ == "__main__":
print(Path(sys.path[0]))
main()
| 4,104 | 30.821705 | 100 | py |
ba-complement | ba-complement-master/experimental/experimental-compare.py | #!/usr/bin/env python3
"""
Script for automated experimental evaluation.
@title experimental-compare.py
@author Vojtech Havlena, June 2019
"""
import sys
import getopt
import subprocess
import string
import re
import os
import os.path
import resource
import xml.etree.ElementTree as ET
VALIDLINE = -2
TIMELINE = -1
STATESLINE = -3
DELAYSIM = -4
TIMEOUT = 300 #in seconds
INPUTG = "in.gff"
QUOTIENT = "-qdel"
def main():
#Input parsing
if len(sys.argv) < 4:
help_err()
sys.exit()
try:
opts, args = getopt.getopt(sys.argv[4:], "ta:", ["tex", "auts="])
except getopt.GetoptError as err:
help_err()
sys.exit()
complbin = sys.argv[1]
gbin = sys.argv[2]
autfolder = sys.argv[3]
texout = False
AUTOMATA = 20
for o, a in opts:
if o in ("-t", "--tex"):
texout = True
if o in ("-a", "--auts"):
AUTOMATA = int(a)
#Experiments
files = [f for f in os.listdir(autfolder) \
if os.path.isfile(os.path.join(autfolder, f)) and \
f.endswith(".ba")]
files.sort()
files = files[:AUTOMATA]
print_config(AUTOMATA)
print("Automaton: safra, ramsey, slice")
res = [None] * 3
for autfile in files:
filename = os.path.join(autfolder, autfile)
subprocess.run([complbin, "--goal", filename, "-o", INPUTG, QUOTIENT])
res[0] = get_output([gbin, "complement", "-m", "safra", "-t", str(TIMEOUT+1), INPUTG])
res[1] = get_output([gbin, "complement", "-m", "ramsey", "-r", "-t", str(TIMEOUT+1), INPUTG])
res[2] = get_output([gbin, "complement", "-m", "slice", "-r", "-t", str(TIMEOUT+1), INPUTG])
print_output(filename, res)
def get_output(args):
try:
output = subprocess.check_output(args, \
timeout=TIMEOUT).decode("utf-8")
parse = parse_output(output)
except subprocess.TimeoutExpired:
parse = None
return parse
def parse_output(output):
root = ET.fromstring(output)
states = len(root.findall("StateSet/State"))
return states
def print_config(formulas):
print("Timeout: {0}".format(TIMEOUT))
print("Number of automata: {0}".format(formulas))
def format_output_con(parse):
return "{0}".format("TO" if parse is None else parse)
def print_output(filename, out):
print("{0}: {1}\t {2}\t {3}".format(filename, format_output_con(out[0]), \
format_output_con(out[1]), format_output_con(out[2])))
def help_err():
sys.stderr.write("Bad input arguments. \nFormat: ./experimental-compare [compl-bin]"\
" [g-bin] [automata folder] [--tex] [--auts=X]\n")
if __name__ == "__main__":
main()
| 2,684 | 23.189189 | 101 | py |
ba-complement | ba-complement-master/experimental/experimental.py | #!/usr/bin/env python3
"""
Script for automated experimental evaluation.
@title experimental.py
@author Vojtech Havlena, April 2019
"""
import sys
import getopt
import subprocess
import string
import re
import os
import os.path
import resource
VALIDLINE = -2
TIMELINE = -1
STATESLINE = -2
DELAYSIM = -4
TIMEOUT = 300 #in seconds
QUOTIENT = ""
def main():
#Input parsing
if len(sys.argv) < 3:
help_err()
sys.exit()
try:
opts, args = getopt.getopt(sys.argv[3:], "ta:", ["tex", "auts="])
except getopt.GetoptError as err:
help_err()
sys.exit()
complbin = sys.argv[1]
autfolder = sys.argv[2]
texout = False
AUTOMATA = 20
for o, a in opts:
if o in ("-t", "--tex"):
texout = True
if o in ("-a", "--auts"):
AUTOMATA = int(a)
#Experiments
files = [f for f in os.listdir(autfolder) \
if os.path.isfile(os.path.join(autfolder, f)) and \
f.endswith(".ba")]
files.sort()
files = files[:AUTOMATA]
print_config(AUTOMATA)
print("Automaton: valid, Schewe, Schewe+comb, Schewe+purge")
res = [None] * 4
for autfile in files:
filename = os.path.join(autfolder, autfile)
res[0] = get_output([complbin, "--schewe", filename, QUOTIENT])
res[1] = get_output([complbin, "--schewesim", filename, QUOTIENT])
#res[2] = get_output([complbin, "--schewesimsat", filename])
res[2] = get_output([complbin, "--schewesimrem", filename, QUOTIENT])
print_output(filename, res)
def get_output(args):
try:
output = subprocess.check_output(args, \
timeout=TIMEOUT).decode("utf-8")
parse = parse_output(output)
except subprocess.TimeoutExpired:
parse = None, None, None, None
return parse
def parse_output(output):
lines = output.split('\n')
lines = list(filter(None, lines)) #Remove empty lines
valid = None #lines[VALIDLINE] == "Check: True"
match = re.search("Time: ([0-9]+.[0-9]+)s", lines[TIMELINE])
time = round(float(match.group(1)), 2)
states = int(re.search("States: ([0-9]+)", lines[STATESLINE]).group(1))
delsim = None #int(re.search("Delayed simulation: ([0-9]+)", lines[DELAYSIM]).group(1))
return valid, time, states, delsim
def print_config(formulas):
print("Timeout: {0}".format(TIMEOUT))
print("Number of automata: {0}".format(formulas))
def format_output(parse):
return "{0} {1}".format("N/A" if parse[0] is None else parse[0], "TO" if parse[2] is None else parse[2])
def format_output_con(parse):
return "{0}".format("TO" if parse[2] is None else parse[2])
def print_output(filename, out):
print("{0}: {1}\t {2}\t {3}".format(filename, format_output_con(out[0]), \
format_output_con(out[1]), format_output_con(out[2])))
def help_err():
sys.stderr.write("Bad input arguments. \nFormat: ./experimental [compl-bin]"\
" [automata folder] [--tex] [--auts=X]\n")
if __name__ == "__main__":
main()
| 3,042 | 25.008547 | 108 | py |
tensiometer | tensiometer-master/.material.py | """
This is random material, do not read it :)
"""
def _vec_to_log_pdm(vec, d):
"""
"""
# get indexes:
ind = np.tril_indices(d, 0)
# initialize:
mat = np.zeros((d, d))
mat[ind] = vec
# take exponential of the diagonal to ensure positivity:
mat[np.diag_indices(d)] = np.exp(np.diagonal(mat))
#
return np.dot(mat, mat.T)
def _log_pdm_to_vec(pdm, d):
"""
"""
# decompose:
mat = np.linalg.cholesky(pdm)
# take log of diagonal:
mat[np.diag_indices(d)] = np.log(np.diagonal(mat))
#
return mat[np.tril_indices(d, 0)]
def _temp_vec_kde_pdf(x, samples, weights):
"""
Utility function to compute the KDE
"""
X = np.subtract(x[np.newaxis, :, :], samples[:, np.newaxis, :])
_temp = np.dot(weights, np.exp(-0.5*(X*X).sum(axis=2)))
return np.log(_temp)
C_p1, C_p12 = chain_2.cov(pars=param_names), chain_12.cov(pars=param_names)
C_Pi = chain_prior.cov(pars=param_names)
theta_1 = chain_2.getMeans(pars=[chain_1.index[name]
for name in param_names])
theta_12 = chain_12.getMeans(pars=[chain_12.index[name]
for name in param_names])
KL_eig, KL_eigv = utils.KL_decomposition(C_p1, C_p12)
KL_eig
prior_factor = 1000.0
temp_C_p1 = utils.QR_inverse(utils.QR_inverse(C_p1) +prior_factor*utils.QR_inverse(C_Pi))
temp_C_p12 = utils.QR_inverse(utils.QR_inverse(C_p12) +prior_factor*utils.QR_inverse(C_Pi))
temp_theta_12 = np.dot(temp_C_p12,np.dot(theta_12,utils.QR_inverse(C_p12))+np.dot(theta_1,prior_factor*utils.QR_inverse(C_Pi)))
KL_eig_2, KL_eigv_2 = utils.KL_decomposition(temp_C_p1, temp_C_p12)
KL_eig_2
plt.plot(KL_eig)
plt.plot(KL_eig_2)
helper_stat(theta_1-theta_12, KL_eig,KL_eigv, 1.05)
helperplot(KL_eig,KL_eigv)
helper_stat(theta_1-temp_theta_12, KL_eig_2,KL_eigv_2, 1.05)
helperplot(KL_eig_2,KL_eigv_2)
def helper_stat(shift, eig, eigv, lower_cutoff):
upper_cutoff = 100.
_filter = np.logical_and(eig > lower_cutoff, eig < upper_cutoff)
Q_UDM = np.sum(np.dot(eigv.T, shift)[_filter]**2./(eig[_filter]-1.))
dofs = np.sum(_filter)
P = scipy.stats.chi2.cdf(Q_UDM, dofs)
return utils.from_confidence_to_sigma(P)
def helperplot(eig, eigv):
fish = np.sum(eigv*eigv/eig, axis=1)
fish = ((eigv*eigv/eig).T/fish).T
im1 = plt.imshow(fish, cmap='viridis')
num_params = len(fish)
for i in range(num_params):
for j in range(num_params):
if fish[j,i]>0.5:
col = 'k'
else:
col = 'w'
plt.text(i, j, np.round(fish[j,i],2), va='center', ha='center', color=col)
# label on the axis:
plt.xlabel('KL mode ($\\lambda^a-1$)');
plt.ylabel('Parameters');
# x axis:
ticks = np.arange(num_params)
labels = [ str(t+1)+'\n ('+str(l)+')' for t,l in zip(ticks,np.round(eig-1.,2))]
plt.xticks(ticks, labels, horizontalalignment='center')
labels = [ '$'+chain_12.getParamNames().parWithName(name).label+'$' for name in param_names ]
plt.yticks(ticks, labels, horizontalalignment='right')
plt.show()
| 3,098 | 28.514286 | 127 | py |
tensiometer | tensiometer-master/setup.py | #!/usr/bin/env python
import re
import os
import sys
import setuptools
# warn against python 2
if sys.version_info[0] == 2:
print('tensiometer does not support Python 2, \
please upgrade to Python 3')
sys.exit(1)
# version control:
def find_version():
version_file = open(os.path.join(os.path.dirname(__file__),
'tensiometer/__init__.py')).read()
version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]",
version_file, re.M)
if version_match:
return version_match.group(1)
raise RuntimeError('Unable to find version string.')
# long description (parse readme):
def get_long_description():
with open('README.rst', encoding='utf-8-sig') as f:
lines = f.readlines()
i = -1
while '=====' not in lines[i]:
i -= 1
return ''.join(lines[:i])
# get requirements:
def get_requirements():
requirements = []
with open('requirements.txt', encoding='utf-8-sig') as f:
lines = f.readlines()
for line in lines:
line = line.rstrip()
if '>=' in line:
pc, ver = line.split('>=')
requirements.append(pc+' (>='+ver+')')
else:
requirements.append(line)
return requirements
# setup:
setuptools.setup(name='tensiometer',
version=find_version(),
description='Tension tools for posterior distributions',
long_description=get_long_description(),
author='Marco Raveri',
url='https://tensiometer.readthedocs.io',
license='GPL',
project_urls={
'Source': 'https://github.com/mraveri/tensiometer',
'Tracker': 'https://github.com/mraveri/tensiometer/issues',
'Reference': 'https://arxiv.org/abs/1806.04649',
'Licensing': 'https://raw.githubusercontent.com/mraveri/tensiometer/master/LICENSE'
},
packages=setuptools.find_packages(),
platforms='any',
install_requires=get_requirements(),
classifiers=[
'Development Status :: 4 - Beta',
'Operating System :: OS Independent',
'Intended Audience :: Science/Research',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
],
python_requires='>=3.7',
zip_safe=False,
keywords=['MCMC']
)
| 2,749 | 33.375 | 103 | py |
tensiometer | tensiometer-master/tensiometer/gaussian_tension.py | """
This file contains the functions and utilities to compute agreement and
disagreement between two different chains using a Gaussian approximation
for the posterior.
For more details on the method implemented see
`arxiv 1806.04649 <https://arxiv.org/pdf/1806.04649.pdf>`_
and `arxiv 1912.04880 <https://arxiv.org/pdf/1912.04880.pdf>`_.
"""
"""
For testing purposes:
from getdist import loadMCSamples, MCSamples, WeightedSamples
chain_1 = loadMCSamples('./tensiometer/test_chains/DES')
chain_2 = loadMCSamples('./tensiometer/test_chains/Planck18TTTEEE')
chain_12 = loadMCSamples('./tensiometer/test_chains/Planck18TTTEEE_DES')
chain_prior = loadMCSamples('./tensiometer/test_chains/prior')
prior_chain = chain_prior
chain = chain_1
param_names = None
import tensiometer.utilities as utils
import matplotlib.pyplot as plt
"""
###############################################################################
# initial imports:
import scipy
import numpy as np
from getdist import MCSamples
from getdist.gaussian_mixtures import GaussianND
import matplotlib.pyplot as plt
from . import utilities as utils
###############################################################################
# series of helpers to check input of functions:
def _check_param_names(chain, param_names):
"""
Utility to check input param names.
"""
if param_names is None:
param_names = chain.getParamNames().getRunningNames()
else:
param_list = chain.getParamNames().list()
if not np.all([name in param_list for name in param_names]):
raise ValueError('Input parameter is not in the chain',
chain.name_tag, '\n'
'Input parameters ', param_names, '\n'
'Possible parameters', param_list)
#
return param_names
def _check_common_names(param_names_1, param_names_2):
"""
Utility to get the common param names between two chains.
"""
param_names = [name for name in param_names_1 if name in param_names_2]
if len(param_names) == 0:
raise ValueError('Chains do not have shared parameters.\n',
'Parameters for chain_1 ', param_names_1, '\n',
'Parameters for chain_2 ', param_names_2, '\n')
#
return param_names
def _check_chain_type(chain):
"""
Check if an object is a GetDist chain.
"""
# test the type of the chain:
if not isinstance(chain, MCSamples):
raise TypeError('Input chain is not of MCSamples type.')
###############################################################################
def get_prior_covariance(chain, param_names=None):
"""
Utility to estimate the prior covariance from the ranges of a chain.
The flat range prior covariance
(`link <https://en.wikipedia.org/wiki/Uniform_distribution_(continuous)>`_)
is given by:
.. math:: C_{ij} = \\delta_{ij} \\frac{( max(p_i) - min(p_i) )^2}{12}
:param chain: :class:`~getdist.mcsamples.MCSamples` the input chain.
:param param_names: optional choice of parameter names to
restrict the calculation.
:return: the estimated covariance of the prior.
"""
# get the parameter names to use:
param_names = _check_param_names(chain, param_names)
# get the ranges:
_prior_min = []
_prior_max = []
for name in param_names:
# lower bound:
if name in chain.ranges.lower.keys():
_prior_min.append(chain.ranges.lower[name])
else:
_prior_min.append(-1.e30)
# upper bound:
if name in chain.ranges.upper.keys():
_prior_max.append(chain.ranges.upper[name])
else:
_prior_max.append(1.e30)
_prior_min = np.array(_prior_min)
_prior_max = np.array(_prior_max)
#
return np.diag((_prior_max-_prior_min)**2/12.)
def get_localized_covariance(chain_1, chain_2, param_names,
localize_params=None, scale=10.):
"""
Get localized covariance of chain_1 localized with chain_2
"""
# initialize param names:
param_names_1 = _check_param_names(chain_1, param_names)
param_names_2 = _check_param_names(chain_2, param_names)
param_names = _check_common_names(param_names_1, param_names_2)
# check localized parameters:
if localize_params is None:
localize_params = param_names
else:
if not np.all([name in param_names for name in localize_params]):
raise ValueError('Input localize_params is not in param_names')
# get mean and covariance of the chain that we use for localization:
mean = chain_2.getMeans(pars=[chain_2.index[name]
for name in localize_params])
cov = chain_2.cov(pars=localize_params)
inv_cov = np.linalg.inv(scale**2*cov)
sqrt_inv_cov = scipy.linalg.sqrtm(inv_cov)
# get the Gaussian chi2:
idx = [chain_1.index[name] for name in localize_params]
X = np.dot(sqrt_inv_cov, (chain_1.samples[:, idx] - mean).T).T
logLikes = (X*X).sum(axis=1)
max_logLikes = np.amin(logLikes)
# compute weights:
new_weights = chain_1.weights * np.exp(-(logLikes - max_logLikes))
# check that weights are reasonable:
old_neff_samples = np.sum(chain_1.weights)**2 / np.sum(chain_1.weights**2)
new_neff_samples = np.sum(new_weights)**2 / np.sum(new_weights**2)
if old_neff_samples / new_neff_samples > 10.:
print('WARNING: localization of covariance is resulting in too many '
+ 'samples being under-weighted.\n'
+ 'Neff original = ', round(old_neff_samples, 3), '\n'
+ 'Neff new = ', round(new_neff_samples, 3), '\n'
+ 'this can result in large errors and can be improved with '
+ 'more samples in chain_1.')
# compute covariance with all parameters:
idx_full = [chain_1.index[name] for name in param_names]
# compute covariance:
cov2 = np.cov(chain_1.samples[:, idx_full].T, aweights=new_weights)
# remove localization covariance:
idx_rel = [param_names.index(name) for name in localize_params]
inv_cov2 = np.linalg.inv(cov2)
inv_cov2[np.ix_(idx_rel, idx_rel)] = inv_cov2[np.ix_(idx_rel, idx_rel)] \
- inv_cov
cov2 = np.linalg.inv(inv_cov2)
#
return cov2
###############################################################################
def get_Neff(chain, prior_chain=None, param_names=None,
prior_factor=1.0, localize=False, **kwargs):
"""
Function to compute the number of effective parameters constrained by a
chain over the prior.
The number of effective parameters is defined as in Eq. (29) of
(`Raveri and Hu 18 <https://arxiv.org/pdf/1806.04649.pdf>`_) as:
.. math:: N_{\\rm eff} \\equiv
N -{\\rm tr}[ \\mathcal{C}_\\Pi^{-1}\\mathcal{C}_p ]
where :math:`N` is the total number of nominal parameters of the chain,
:math:`\\mathcal{C}_\\Pi` is the covariance of the prior and
:math:`\\mathcal{C}_p` is the posterior covariance.
:param chain: :class:`~getdist.mcsamples.MCSamples` the input chain.
:param prior_chain: (optional) the prior chain.
If the prior is not well approximated by
a ranged prior and is informative it is better to explicitly
use a prior only chain.
If this is not given the algorithm will assume ranged priors with the
ranges computed from the input chain.
:param param_names: (optional) parameter names to restrict the
calculation of :math:`N_{\\rm eff}`.
If none is given the default assumes that all running parameters
should be used.
:param prior_factor: (optional) factor to scale the prior covariance.
In case of strongly non-Gaussian posteriors it might be useful to
artificially tighten the prior to have less noise in telling apart
parameter space directions that are constrained by data and prior.
Default is no scaling, prior_factor=1.
:return: the number of effective parameters.
"""
# initialize param names:
param_names = _check_param_names(chain, param_names)
# initialize prior covariance:
if prior_chain is not None:
# check parameter names:
param_names = _check_param_names(prior_chain, param_names)
# get the prior covariance:
if localize:
C_Pi = get_localized_covariance(prior_chain, chain,
param_names, **kwargs)
else:
C_Pi = prior_chain.cov(pars=param_names)
else:
C_Pi = get_prior_covariance(chain, param_names=param_names)
# multiply by prior factor:
C_Pi = prior_factor*C_Pi
# get the posterior covariance:
C_p = chain.cov(pars=param_names)
# compute the number of effective parameters
_temp = np.dot(np.linalg.inv(C_Pi), C_p)
# compute Neff from the regularized spectrum of the eigenvalues:
_eigv, _eigvec = np.linalg.eig(_temp)
_eigv[_eigv > 1.] = 1.
_eigv[_eigv < 0.] = 0.
#
_Ntot = len(_eigv)
_Neff = _Ntot - np.real(np.sum(_eigv))
#
return _Neff
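# Usage sketch (illustrative only): `chain` and `prior_chain` are assumed to be
# getdist.MCSamples objects, e.g. the test chains listed in the module
# docstring above:
#
#   from getdist import loadMCSamples
#   chain = loadMCSamples('./tensiometer/test_chains/DES')
#   prior_chain = loadMCSamples('./tensiometer/test_chains/prior')
#   neff = get_Neff(chain, prior_chain=prior_chain)
#
# neff estimates how many parameters the data constrain over the prior.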
###############################################################################
def gaussian_approximation(chain, param_names=None):
"""
Function that computes the Gaussian approximation of a given chain.
:param chain: :class:`~getdist.mcsamples.MCSamples` the input chain.
:param param_names: (optional) parameter names to restrict the
Gaussian approximation.
If none is given the default assumes that all parameters
should be used.
:return: :class:`~getdist.gaussian_mixtures.GaussianND` object with the
Gaussian approximation of the chain.
"""
# initial checks:
_check_chain_type(chain)
if param_names is None:
param_names = chain.getParamNames().list()
param_names = _check_param_names(chain, param_names)
# get the mean:
mean = chain.getMeans(pars=[chain.index[name]
for name in param_names])
# get the covariance:
cov = chain.cov(pars=param_names)
# get the labels:
param_labels = [_n.label for _n
in chain.getParamNames().parsWithNames(param_names)]
# get label:
if chain.label is not None:
label = 'Gaussian '+chain.label
elif chain.name_tag is not None:
label = 'Gaussian_'+chain.name_tag
else:
label = None
# initialize the Gaussian distribution:
gaussian_approx = GaussianND(mean, cov,
names=param_names,
labels=param_labels,
label=label)
#
return gaussian_approx
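# Usage sketch (illustrative only): the returned GaussianND can be plotted with
# getdist next to the original samples as a quick visual check of Gaussianity:
#
#   from getdist import plots
#   g = plots.get_subplot_plotter()
#   g.triangle_plot([chain, gaussian_approximation(chain)], filled=True)
#
# where `chain` is assumed to be a getdist.MCSamples object.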
###############################################################################
def Q_DM(chain_1, chain_2, prior_chain=None, param_names=None,
cutoff=0.05, prior_factor=1.0):
"""
Compute the value and degrees of freedom of the quadratic form giving the
probability of a difference between the means of the two input chains,
in the Gaussian approximation.
This is defined as in
(`Raveri and Hu 18 <https://arxiv.org/pdf/1806.04649.pdf>`_) to be:
.. math:: Q_{\\rm DM} \\equiv (\\theta_1-\\theta_2)
(\\mathcal{C}_1+\\mathcal{C}_2
-\\mathcal{C}_1\\mathcal{C}_\\Pi^{-1}\\mathcal{C}_2
-\\mathcal{C}_2\\mathcal{C}_\\Pi^{-1}\\mathcal{C}_1)^{-1}
(\\theta_1-\\theta_2)^T
where :math:`\\theta_i` is the parameter mean of the i-th posterior,
:math:`\\mathcal{C}` the posterior covariance and :math:`\\mathcal{C}_\\Pi`
the prior covariance.
:math:`Q_{\\rm DM}` is :math:`\\chi^2` distributed with number of degrees
of freedom equal to the rank of the shift covariance.
:param chain_1: :class:`~getdist.mcsamples.MCSamples`
the first input chain.
:param chain_2: :class:`~getdist.mcsamples.MCSamples`
the second input chain.
:param prior_chain: (optional) the prior only chain.
If the prior is not well approximated by a ranged prior and is
informative it is better to explicitly use a prior only chain.
If this is not given the algorithm will assume ranged priors
with the ranges computed from the input chains.
:param param_names: (optional) parameter names of the parameters to be used
in the calculation. By default all running parameters.
    :param cutoff: (optional) the algorithm needs to detect prior
constrained directions (that do not contribute to the test)
from data constrained directions.
This is achieved through a Karhunen–Loeve decomposition to avoid issues
with physical dimensions of parameters and cutoff sets the minimum
improvement with respect to the prior that is used.
Default is five percent.
:param prior_factor: (optional) factor to scale the prior covariance.
In case of strongly non-Gaussian posteriors it might be useful to
artificially tighten the prior to have less noise in telling apart
parameter space directions that are constrained by data and prior.
Default is no scaling, prior_factor=1.
:return: :math:`Q_{\\rm DM}` value and number of degrees of freedom.
Since :math:`Q_{\\rm DM}` is :math:`\\chi^2` distributed the
probability to exceed the test can be computed
using the cdf method of :py:data:`scipy.stats.chi2` or
:meth:`tensiometer.utilities.from_chi2_to_sigma`.
"""
# initial checks:
if cutoff < 0.0:
raise ValueError('The KL cutoff has to be greater than zero.\n',
'Input value ', cutoff)
# initialize param names:
param_names_1 = _check_param_names(chain_1, param_names)
param_names_2 = _check_param_names(chain_2, param_names)
# get common names:
param_names = [name for name in param_names_1 if name in param_names_2]
if len(param_names) == 0:
raise ValueError('Chains do not have shared parameters.\n',
'Parameters for chain_1 ', param_names_1, '\n',
'Parameters for chain_2 ', param_names_2, '\n')
# initialize prior covariance:
if prior_chain is not None:
param_names = _check_param_names(prior_chain, param_names)
# get the prior covariance:
C_Pi = prior_chain.cov(pars=param_names)
else:
C_Pi1 = get_prior_covariance(chain_1, param_names=param_names)
C_Pi2 = get_prior_covariance(chain_2, param_names=param_names)
if not np.allclose(C_Pi1, C_Pi2):
raise ValueError('The chains have different priors.')
else:
C_Pi = C_Pi1
# scale prior covariance:
C_Pi = prior_factor*C_Pi
# get the posterior covariances:
C_p1, C_p2 = chain_1.cov(pars=param_names), chain_2.cov(pars=param_names)
# get the means:
theta_1 = chain_1.getMeans(pars=[chain_1.index[name]
for name in param_names])
theta_2 = chain_2.getMeans(pars=[chain_2.index[name]
for name in param_names])
param_diff = theta_1-theta_2
# do the calculation of Q:
C_Pi_inv = utils.QR_inverse(C_Pi)
temp = np.dot(np.dot(C_p1, C_Pi_inv), C_p2)
diff_covariance = C_p1 + C_p2 - temp - temp.T
# take the directions that are best constrained over the prior:
eig_1, eigv_1 = utils.KL_decomposition(C_p1, C_Pi)
eig_2, eigv_2 = utils.KL_decomposition(C_p2, C_Pi)
# get the smallest spectrum, if same use first:
if np.sum(1./eig_1-1. > cutoff) <= np.sum(1./eig_2-1. > cutoff):
eig, eigv = eig_1, eigv_1
else:
eig, eigv = eig_2, eigv_2
# get projection matrix:
proj_matrix = eigv[1./eig-1. > cutoff]
# get dofs of Q:
dofs = np.sum(1./eig-1. > cutoff)
# project parameter difference:
param_diff = np.dot(proj_matrix, param_diff)
# project covariance:
temp_cov = np.dot(np.dot(proj_matrix, diff_covariance), proj_matrix.T)
# compute Q:
Q_DM = np.dot(np.dot(param_diff, utils.QR_inverse(temp_cov)), param_diff)
#
return Q_DM, dofs
###############################################################################
def KL_PCA(chain_1, chain_12, param_names=None,
conditional_params=[], param_map=None, normparam=None,
num_modes=None, localize=True, dimensional_reduce=True,
dimensional_threshold=0.1, verbose=True, **kwargs):
"""
Perform the KL analysis of two chains.
    Reports the directions along which chain_12 improves over chain_1.
:param chain_1: :class:`~getdist.mcsamples.MCSamples` the first input chain.
:param chain_12: :class:`~getdist.mcsamples.MCSamples` the second input chain.
    :param param_names: (optional) parameter names to restrict the
        calculation.
        If none is given the default assumes that all running parameters
        should be used.
:param conditional_params: (optional) list of parameters to treat as fixed,
i.e. for KL_PCA conditional on fixed values of these parameters
:param param_map: (optional) a transformation to apply to parameter values;
A list or string containing either N (no transformation)
or L (for log transform) or M (for minus log transform of negative
parameters) for each parameter.
By default uses log if no parameter values cross zero.
The transformed parameters are added to the joint chain.
:param normparam: (optional) name of parameter to normalize result
(i.e. this parameter will have unit power)
        By default scales to the parameter that has the largest impact on the KL mode variance.
:param num_modes: (optional) only return the num_modes best modes.
:param localize: (optional) localize the first covariance with the second,
useful when chain_1 spans a much larger region with respect to chain_12.
:param dimensional_reduce: (optional) perform dimensional reduction of the KL modes considered
keeping only parameters with a large impact on KL mode variances.
Default is True.
    :param dimensional_threshold: (optional) threshold for dimensional reduction.
        Default is 10% so that parameters contributing less than 10% of a KL mode
        variance are neglected for that mode.
:param verbose: (optional) chatty output. Default True.
"""
# initialize param names:
param_names_1 = _check_param_names(chain_1, param_names)
param_names_12 = _check_param_names(chain_12, param_names)
param_names = _check_common_names(param_names_1, param_names_12)
num_params = len(param_names)
# initialize conditional parameters:
if len(conditional_params) > 0:
conditional_params_1 = _check_param_names(chain_1, conditional_params)
conditional_params_12 = _check_param_names(chain_12, conditional_params)
conditional_params = _check_common_names(conditional_params_1, conditional_params_12)
# other initialization:
labels = [chain_1.parLabel(chain_1.index[name]) for name in param_names]
if num_modes is not None:
num_modes = min(num_modes, num_params)
else:
num_modes = num_params
if normparam is not None:
normparam = param_names.index(normparam)
# initialize parameter map:
if param_map is None:
param_map = ''
for name in param_names:
idx_1, idx_12 = chain_1.index[name], chain_12.index[name]
# decide the mapping:
positive_1 = np.all(chain_1.samples[:, idx_1] > 0)
positive_12 = np.all(chain_12.samples[:, idx_12] > 0)
if not positive_1:
negative_1 = np.all(chain_1.samples[:, idx_1] < 0)
else:
negative_1 = False
if not positive_12:
negative_12 = np.all(chain_12.samples[:, idx_12] < 0)
else:
negative_12 = False
if positive_1 and positive_12:
param_map += 'L'
elif negative_1 and negative_12:
param_map += 'M'
else:
param_map += 'N'
else:
if len(param_map) == 1:
param_map = ''.join([param_map for name in param_names])
if len(param_map) != len(param_names):
raise ValueError('param_map can be either one element for all'
+ 'parameters or', num_params,
'got', param_map, 'instead')
for map in param_map:
if map not in ['L', 'M', 'N']:
raise ValueError('param_map can contain only L, M, N values',
'got', param_map, 'instead')
doexp = 'L' in param_map or 'M' in param_map
# add the relevant derived parameters to the chains:
param_names_to_use = []
for i in range(num_params):
name, map = param_names[i], param_map[i]
idx_1, idx_12 = chain_1.index[name], chain_12.index[name]
if map == 'L':
# log parameter to chain 1:
try:
chain_1.addDerived(np.log(chain_1.samples[:, idx_1]),
name='log_'+name,
label='\\log '+labels[i])
except ValueError:
pass
# log parameter for chain 12:
try:
chain_12.addDerived(np.log(chain_12.samples[:, idx_12]),
name='log_'+name,
label='\\log '+labels[i])
except ValueError:
pass
# add names:
param_names_to_use.append('log_'+name)
elif map == 'M':
# - log parameter to chain 1:
try:
chain_1.addDerived(np.log(chain_1.samples[:, idx_1]),
name='log_m_'+name,
label='\\log -'+labels[i])
except ValueError:
pass
# - log parameter for chain 12:
try:
chain_12.addDerived(np.log(chain_12.samples[:, idx_12]),
name='log_m_'+name,
label='\\log -'+labels[i])
except ValueError:
pass
# add names:
param_names_to_use.append('log_m_'+name)
elif map == 'N':
# add names:
param_names_to_use.append(name)
# make sure chains are initialized:
if chain_1.needs_update:
chain_1.updateBaseStatistics()
if chain_12.needs_update:
chain_12.updateBaseStatistics()
# indexes to use:
idx_to_use = [chain_12.index[name] for name in param_names_to_use]
# get the posterior covariances:
if localize:
localize_params = kwargs.pop('localize_params', None)
if localize_params is not None:
idx = [param_names.index(name) for name in localize_params]
localize_params = [param_names_to_use[i] for i in idx]
C_p1 = get_localized_covariance(chain_1, chain_12,
param_names_to_use+conditional_params,
localize_params=localize_params,
**kwargs)
else:
C_p1 = chain_1.cov(pars=param_names_to_use+conditional_params)
C_p12 = chain_12.cov(pars=param_names_to_use+conditional_params)
    # get the Fisher matrix:
if len(conditional_params) > 0:
F_p1 = utils.QR_inverse(C_p1)[:, :num_params][:num_params, :]
F_p12 = utils.QR_inverse(C_p12)[:, :num_params][:num_params, :]
C_p1 = utils.QR_inverse(F_p1)
C_p12 = utils.QR_inverse(F_p12)
# perform the KL decomposition:
KL_eig, KL_eigv = utils.KL_decomposition(C_p1, C_p12)
# sort:
idx = np.argsort(KL_eig)[::-1]
KL_eig, KL_eigv = KL_eig[idx], KL_eigv[:, idx]
# do initial calculations:
inv_KL_eigv = utils.QR_inverse(KL_eigv.T)
inv_cov_12 = utils.QR_inverse(C_p12)
# compute joint covariance contributions:
temp = inv_KL_eigv*np.dot(inv_cov_12, inv_KL_eigv.T).T
contributions = (np.abs(temp.T)/np.sum(np.abs(temp), axis=1)).T
# compute the dimensional reduction matrix:
if dimensional_reduce:
reduction_filter = contributions > dimensional_threshold
else:
reduction_filter = np.ones((num_params, num_params), dtype=bool)
if normparam is not None:
reduction_filter[:, normparam] = True
reduced_projector = KL_eigv.copy().T
reduced_projector[np.logical_not(reduction_filter)] = 0
# compute correlation matrix of parameters with KL components:
proj_samples = np.dot(reduced_projector, (chain_12.samples[:, idx_to_use]-chain_12.getMeans(idx_to_use)).T)
proj_cov = np.cov(np.vstack((proj_samples, chain_12.samples.T)),
aweights=chain_12.weights)
temp = np.diag(1./np.sqrt(np.diag(proj_cov)))
proj_corr = np.dot(np.dot(temp, proj_cov), temp)[:num_params, :]
# prepare return of the function:
results_dict = {}
results_dict['kl_eig'] = KL_eig
results_dict['kl_eigv'] = KL_eigv
results_dict['kl_var_contributions'] = contributions
results_dict['kl_var_filter'] = reduction_filter
results_dict['reduced_kl_projector'] = reduced_projector
results_dict['param_names'] = param_names_to_use
results_dict['param_map'] = param_map
# all calculations are done, write out text:
PCAtext = 'KLCA for '+str(num_params)+' parameters:\n\n'
# parameter names:
if verbose:
for i in range(num_params):
if param_map[i] == 'L':
temp_lab = 'ln(' + labels[i] + ')'
elif param_map[i] == 'M':
temp_lab = 'ln(-' + labels[i] + ')'
else:
temp_lab = labels[i]
PCAtext += "%10s : %s\n" % (str(i + 1), temp_lab)
PCAtext += '\n'
# fixed parameter names:
if verbose:
if len(conditional_params) > 0:
PCAtext += 'With '+str(len(conditional_params))+' parameters fixed:\n'
for i, name in enumerate(conditional_params):
temp_lab = chain_12.parLabel(chain_12.index[name])
PCAtext += "%10s : %s\n" % (str(i + 1), temp_lab)
PCAtext += '\n'
# write out KL eigenvalues:
PCAtext += 'KL amplitudes - 1 (covariance/variance improvement per mode)\n'
for i in range(num_modes):
PCAtext += 'KLC%2i: %8.4f' % (i + 1, KL_eig[i]-1.)
if KL_eig[i]-1. > 0.:
PCAtext += ' (%8.1f %%)' % (np.sqrt(KL_eig[i]-1.)*100.)
PCAtext += '\n'
# write out KL eigenvectors:
if verbose:
PCAtext += '\n'
PCAtext += 'KL-modes\n'
for j in range(num_modes):
PCAtext += '%3i:' % (j + 1)
for i in range(num_modes):
PCAtext += '%8.3f' % (KL_eigv.T[j, i])
PCAtext += '\n'
# write out parameter contributions to KL mode variance:
PCAtext += '\n'
PCAtext += 'Parameter contribution to KL-mode variance\n'
PCAtext += '%12s :' % 'mode number'
for j in range(num_modes):
PCAtext += '%8i' % (j+1)
PCAtext += '\n'
for i in range(num_params):
PCAtext += '%12s :' % param_names_to_use[i]
for j in range(num_modes):
PCAtext += '%8.3f' % (contributions[j, i])
PCAtext += '\n'
# write out KL components:
PCAtext += '\n'
PCAtext += 'KL Principal Components\n'
for i in range(num_modes):
summary = 'KLC%2i: %8.4f' % (i + 1, KL_eig[i]-1.)
if KL_eig[i]-1. > 0.:
summary += ' (%8.1f %%)' % (np.sqrt(KL_eig[i]-1.)*100.)
summary += '\n'
if normparam is not None:
norm = KL_eigv.T[i, normparam]
else:
norm = KL_eigv.T[i, np.argmax(contributions[i, :])]
for j in range(num_params):
if reduction_filter[i, j]:
label = labels[j]
mean = chain_12.getMeans([idx_to_use[j]])
expo = "%f" % (KL_eigv.T[i, j]/norm)
if param_map[j] in ['L', 'M']:
if param_map[j] == "M":
div = "%f" % (-np.exp(mean))
else:
div = "%f" % (np.exp(mean))
summary += '(%s/%s)^{%s}\n' % (label, div, expo)
else:
if doexp:
summary += 'exp((%s-%f)/%s)\n' % (label, mean, expo)
else:
summary += '(%s-%f)/%s)\n' % (label, mean, expo)
temp_mean = np.average((proj_samples[i, :]/norm), weights=chain_12.weights)
temp_var = np.sqrt(np.cov((proj_samples[i, :]/norm), aweights=chain_12.weights))
if doexp:
temp_mean = np.exp(temp_mean)
temp_var = np.exp(temp_mean)*temp_var
summary += ' = %f +- %f\n' % (temp_mean, temp_var)
summary += '\n'
PCAtext += summary
# Correlation with other parameters:
if verbose:
PCAtext += 'Correlations of KLPC\n'
PCAtext += '%5s :' % 'mode'
for i in range(num_modes):
PCAtext += '%8i' % (i+1)
PCAtext += '\n'
auto_block = proj_corr[:, :num_params]
for i in range(num_modes):
PCAtext += ' PC%2i :' % (i + 1)
for j in range(num_modes):
PCAtext += '%8.3f' % auto_block[i, j]
PCAtext += '\n'
auto_block = proj_corr[:, num_params:].T
for i in range(auto_block.shape[0]):
PCAtext += ' p %2i :' % (i + 1)
for j in range(num_modes):
PCAtext += '%8.3f' % auto_block[i, j]
PCAtext += ' (%s)\n' % (chain_12.parLabel(i))
#
return PCAtext, results_dict
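# Usage sketch (illustrative only): with `chain_1` and `chain_12` the single and
# joint getdist.MCSamples chains (as in the module docstring above):
#
#   PCAtext, results = KL_PCA(chain_1, chain_12)
#   print(PCAtext)       # human readable report of the KL modes
#   results['kl_eig']    # KL eigenvalues, from the returned results dictionary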
###############################################################################
def Q_UDM_KL_components(chain_1, chain_12, param_names=None):
"""
Function that computes the Karhunen–Loeve (KL) decomposition of the
covariance of a chain with the covariance of that chain joint with another
one.
This function is used for the parameter shift algorithm in
update form.
:param chain_1: :class:`~getdist.mcsamples.MCSamples`
the first input chain.
:param chain_12: :class:`~getdist.mcsamples.MCSamples`
the joint input chain.
:param param_names: (optional) parameter names of the parameters to be used
in the calculation. By default all running parameters.
:return: the KL eigenvalues, the KL eigenvectors and the parameter names
that are used, sorted in decreasing order.
"""
# initialize param names:
param_names_1 = _check_param_names(chain_1, param_names)
param_names_12 = _check_param_names(chain_12, param_names)
# get common names:
param_names = [name for name in param_names_1
if name in param_names_12]
if len(param_names) == 0:
raise ValueError('Chains do not have shared parameters.\n',
'Parameters for chain_1 ', param_names_1, '\n',
'Parameters for chain_12 ', param_names_12, '\n')
# get the posterior covariances:
C_p1, C_p12 = chain_1.cov(pars=param_names), chain_12.cov(pars=param_names)
# perform the KL decomposition:
KL_eig, KL_eigv = utils.KL_decomposition(C_p1, C_p12)
# sort:
idx = np.argsort(KL_eig)[::-1]
KL_eig = KL_eig[idx]
KL_eigv = KL_eigv[:, idx]
#
return KL_eig, KL_eigv, param_names
###############################################################################
def Q_UDM_get_cutoff(chain_1, chain_2, chain_12,
prior_chain=None, param_names=None, prior_factor=1.0):
"""
Function to estimate the cutoff for the spectrum of parameter
differences in update form to match Delta Neff.
:param chain_1: :class:`~getdist.mcsamples.MCSamples`
the first input chain.
:param chain_2: :class:`~getdist.mcsamples.MCSamples`
the second chain that joined with the first one (modulo the prior)
should give the joint chain.
:param chain_12: :class:`~getdist.mcsamples.MCSamples`
the joint input chain.
:param prior_chain: :class:`~getdist.mcsamples.MCSamples` (optional)
If the prior is not well approximated by
a ranged prior and is informative it is better to explicitly
use a prior only chain.
If this is not given the algorithm will assume ranged priors with the
ranges computed from the input chain.
:param param_names: (optional) parameter names of the parameters to be used
in the calculation. By default all running parameters.
:param prior_factor: (optional) factor to scale the prior covariance.
In case of strongly non-Gaussian posteriors it might be useful to
artificially tighten the prior to have less noise in telling apart
parameter space directions that are constrained by data and prior.
Default is no scaling, prior_factor=1.
:return: the optimal KL cutoff, KL eigenvalues, KL eigenvectors and the
parameter names that are used.
"""
# get all shared parameters:
param_names_1 = _check_param_names(chain_1, param_names)
param_names_2 = _check_param_names(chain_2, param_names)
# get common names:
param_names = [name for name in param_names_1 if name in param_names_2]
if len(param_names) == 0:
raise ValueError('Chains do not have shared parameters.\n',
'Parameters for chain_1 ', param_names_1, '\n',
'Parameters for chain_2 ', param_names_2, '\n')
# get the KL decomposition:
KL_eig, KL_eigv, param_names = Q_UDM_KL_components(chain_1,
chain_12,
param_names=param_names)
# get the cutoff that matches the dofs of Q_DMAP:
N_1 = get_Neff(chain_1,
prior_chain=prior_chain,
param_names=param_names,
prior_factor=prior_factor)
N_2 = get_Neff(chain_2,
prior_chain=prior_chain,
param_names=param_names,
prior_factor=prior_factor)
N_12 = get_Neff(chain_12,
prior_chain=prior_chain,
param_names=param_names,
prior_factor=prior_factor)
target_dofs = round(N_1 + N_2 - N_12)
# compute the cutoff:
def _helper(_c):
return np.sum(KL_eig[KL_eig > 1.] > _c)-target_dofs
# define the extrema:
_a = 1.0
_b = np.amax(KL_eig)
# check bracketing:
if _helper(_a)*_helper(_b) > 0:
raise ValueError('Cannot find optimal cutoff.\n',
'This might be a problem with the prior.\n',
'You may try providing a prior chain.\n',
'KL spectrum:', KL_eig,
'Target dofs:', target_dofs)
else:
KL_cutoff = scipy.optimize.bisect(_helper, _a, _b)
#
return KL_cutoff, KL_eig, KL_eigv, param_names
###############################################################################
def Q_UDM_fisher_components(chain_1, chain_12, param_names=None, which='1'):
"""
Compute the decomposition of the Fisher matrix in terms of KL modes.
:param chain_1: :class:`~getdist.mcsamples.MCSamples`
the first input chain.
:param chain_12: :class:`~getdist.mcsamples.MCSamples`
the joint input chain.
:param param_names: (optional) parameter names of the parameters to be used
in the calculation. By default all running parameters.
:param which: (optional) which decomposition to report. Possibilities are
'1' for the chain 1 Fisher matrix, '2' for the chain 2 Fisher matrix
and '12' for the joint Fisher matrix.
    :return: parameter names used in the calculation, values of improvement,
        the fractional Fisher matrix and the diagonal of the Fisher matrix.
"""
KL_eig, KL_eigv, param_names = Q_UDM_KL_components(chain_1,
chain_12,
param_names=param_names)
# compute Fisher and fractional fisher matrix:
if which == '1':
fisher = np.sum(KL_eigv*KL_eigv/KL_eig, axis=1)
fractional_fisher = ((KL_eigv*KL_eigv/KL_eig).T/fisher).T
elif which == '2':
fisher = np.sum(KL_eigv*KL_eigv*(KL_eig-1.)/(KL_eig), axis=1)
fractional_fisher = ((KL_eigv*KL_eigv*(KL_eig-1.)/(KL_eig)).T/fisher).T
elif which == '12':
fisher = np.sum(KL_eigv*KL_eigv, axis=1)
fractional_fisher = ((KL_eigv*KL_eigv).T/fisher).T
else:
raise ValueError('Input parameter which can only be: 1, 2, 12.')
#
return param_names, KL_eig, fractional_fisher, fisher
def Q_UDM_covariance_components(chain_1, chain_12, param_names=None,
which='1'):
"""
Compute the decomposition of the covariance matrix in terms of KL modes.
:param chain_1: :class:`~getdist.mcsamples.MCSamples`
the first input chain.
:param chain_12: :class:`~getdist.mcsamples.MCSamples`
the joint input chain.
:param param_names: (optional) parameter names of the parameters to be used
in the calculation. By default all running parameters.
:param which: (optional) which decomposition to report. Possibilities are
'1' for the chain 1 covariance matrix, '2' for the chain 2 covariance
matrix and '12' for the joint covariance matrix.
    :return: parameter names used in the calculation, values of improvement
        and the fractional covariance matrix.
"""
KL_eig, KL_eigv, param_names = Q_UDM_KL_components(chain_1,
chain_12,
param_names=param_names)
# inverse KL components:
KL_eigv = utils.QR_inverse(KL_eigv)
# compute covariance and fractional covariance matrix:
if which == '1':
diag_cov = np.sum(KL_eigv*KL_eigv*KL_eig, axis=1)
fractional_cov = ((KL_eigv*KL_eigv*KL_eig).T/diag_cov).T
elif which == '2':
diag_cov = np.sum(KL_eigv*KL_eigv*KL_eig/(KL_eig-1.), axis=1)
fractional_cov = ((KL_eigv*KL_eigv*KL_eig/(KL_eig-1.)).T/diag_cov).T
elif which == '12':
diag_cov = np.sum(KL_eigv*KL_eigv, axis=1)
fractional_cov = ((KL_eigv*KL_eigv).T/diag_cov).T
else:
raise ValueError('Input parameter which can only be: 1, 2, 12.')
#
return param_names, KL_eig, fractional_cov
###############################################################################
def Q_UDM(chain_1, chain_12, lower_cutoff=1.05, upper_cutoff=100.,
param_names=None):
"""
Compute the value and degrees of freedom of the quadratic form giving the
probability of a difference between the means of the two input chains,
in update form with the Gaussian approximation.
This is defined as in
(`Raveri and Hu 18 <https://arxiv.org/pdf/1806.04649.pdf>`_) to be:
.. math:: Q_{\\rm UDM} \\equiv (\\theta_1-\\theta_{12})
(\\mathcal{C}_1-\\mathcal{C}_{12})^{-1}
(\\theta_1-\\theta_{12})^T
where :math:`\\theta_1` is the parameter mean of the first posterior,
:math:`\\theta_{12}` is the parameter mean of the joint posterior,
:math:`\\mathcal{C}` the posterior covariance and :math:`\\mathcal{C}_\\Pi`
the prior covariance.
:math:`Q_{\\rm UDM}` is :math:`\\chi^2` distributed with number of degrees
of freedom equal to the rank of the shift covariance.
In case of uninformative priors the statistical significance of
:math:`Q_{\\rm UDM}` is the same as the one reported by
:math:`Q_{\\rm DM}` but offers likely mitigation against non-Gaussianities
of the posterior distribution.
In the case where both chains are Gaussian :math:`Q_{\\rm UDM}` is
symmetric if the first input chain is swapped :math:`1\\leftrightarrow 2`.
If the input distributions are not Gaussian it is better to use the most
constraining chain as the base for the parameter update.
:param chain_1: :class:`~getdist.mcsamples.MCSamples`
the first input chain.
:param chain_12: :class:`~getdist.mcsamples.MCSamples`
the joint input chain.
    :param lower_cutoff: (optional) the algorithm needs to detect prior
constrained directions (that do not contribute to the test)
from data constrained directions.
This is achieved through a Karhunen–Loeve decomposition to avoid issues
with physical dimensions of parameters and cutoff sets the minimum
improvement with respect to the prior that is used.
Default is five percent.
:param upper_cutoff: (optional) upper cutoff for the selection of KL modes.
:param param_names: (optional) parameter names of the parameters to be used
in the calculation. By default all running parameters.
:return: :math:`Q_{\\rm UDM}` value and number of degrees of freedom.
Since :math:`Q_{\\rm UDM}` is :math:`\\chi^2` distributed the
probability to exceed the test can be computed
using the cdf method of :py:data:`scipy.stats.chi2` or
:meth:`tensiometer.utilities.from_chi2_to_sigma`.
"""
# get the cutoff and perform the KL decomposition:
_temp = Q_UDM_KL_components(chain_1, chain_12, param_names=param_names)
KL_eig, KL_eigv, param_names = _temp
# get the parameter means:
theta_1 = chain_1.getMeans(pars=[chain_1.index[name]
for name in param_names])
theta_12 = chain_12.getMeans(pars=[chain_12.index[name]
for name in param_names])
shift = theta_1 - theta_12
# do the Q_UDM calculation:
_filter = np.logical_and(KL_eig > lower_cutoff, KL_eig < upper_cutoff)
Q_UDM = np.sum(np.dot(KL_eigv.T, shift)[_filter]**2./(KL_eig[_filter]-1.))
dofs = np.sum(_filter)
#
return Q_UDM, dofs
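# Usage sketch (illustrative only): as noted in the docstring, the value of
# Q_UDM can be converted to a statistical significance with scipy and the
# utilities module:
#
#   import scipy.stats
#   Q, dofs = Q_UDM(chain_1, chain_12)
#   P = scipy.stats.chi2.cdf(Q, dofs)
#   nsigma = utils.from_chi2_to_sigma(Q, dofs)
#
# where `chain_1` and `chain_12` are assumed to be the single and joint
# getdist.MCSamples chains.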
###############################################################################
# Likelihood based estimators:
def get_MAP_loglike(chain, feedback=True):
"""
Utility function to obtain the data part of the maximum posterior for
a given chain.
    The best case is that a separate file with the explicit MAP of the
    posterior is given. If this is not the case then the function will try
    to get the likelihood at MAP from the samples, which is in general far
    noisier.
:param chain: :class:`~getdist.mcsamples.MCSamples`
the input chain.
:param feedback: logical flag to set whether the function should print
a warning every time the explicit MAP file is not found.
By default this is true.
:return: the data log likelihood at maximum posterior.
"""
# we first try to get the best fit from explicit maximization:
try:
# get the best fit from the explicit MAP:
best_fit = chain.getBestFit(max_posterior=True)
if len(best_fit.chiSquareds) == 0:
_best_fit_data_like = best_fit.logLike
if 'prior' in best_fit.getParamDict().keys():
_best_fit_data_like -= best_fit.getParamDict()['prior']
else:
# get the total data likelihood:
_best_fit_data_like = 0.0
for _dat in best_fit.chiSquareds:
_best_fit_data_like += _dat[1].chisq
except Exception as ex:
# we use the best fit from the chains.
# This is noisy so we print a warning:
if feedback:
print(ex)
print('WARNING: using MAP from samples. This can be noisy.')
_best_fit_data_like = 0.0
# get chi2 list:
chi_list = [name for name in chain.getLikeStats().list()
if 'chi2_' in name]
        # assume that we have the chi2 of the data and the chi2 of the prior:
if 'chi2_prior' in chi_list:
chi_list = chi_list[:chi_list.index('chi2_prior')]
# if empty we have to guess:
if len(chi_list) == 0:
_best_fit_data_like = chain.getLikeStats().logLike_sample
else:
for name in chi_list:
_best_fit_data_like += \
chain.getLikeStats().parWithName(name).bestfit_sample
# normalize:
_best_fit_data_like = -0.5*_best_fit_data_like
#
return _best_fit_data_like
###############################################################################
def Q_MAP(chain, num_data, prior_chain=None,
normalization_factor=0.0, prior_factor=1.0, feedback=True):
"""
Compute the value and degrees of freedom of the quadratic form giving
the goodness of fit measure at maximum posterior (MAP), in
Gaussian approximation.
This is defined as in
(`Raveri and Hu 18 <https://arxiv.org/pdf/1806.04649.pdf>`_) to be:
.. math:: Q_{\\rm MAP} \\equiv -2\\ln \\mathcal{L}(\\theta_{\\rm MAP})
where :math:`\\mathcal{L}(\\theta_{\\rm MAP})` is the data likelihood
evaluated at MAP.
In Gaussian approximation this is distributed as:
.. math:: Q_{\\rm MAP} \\sim \\chi^2(d-N_{\\rm eff})
where :math:`d` is the number of data points and :math:`N_{\\rm eff}`
is the number of effective parameters, as computed by the function
:func:`tensiometer.gaussian_tension.get_Neff`.
:param chain: :class:`~getdist.mcsamples.MCSamples`
the input chain.
:param num_data: number of data points.
:param prior_chain: (optional) the prior chain.
If the prior is not well approximated by
a ranged prior and is informative it is better to explicitly
use a prior only chain.
If this is not given the algorithm will assume ranged priors with the
ranges computed from the input chain.
:param normalization_factor: (optional) likelihood normalization factor.
This should make the likelihood a chi square.
:param prior_factor: (optional) factor to scale the prior covariance.
In case of strongly non-Gaussian posteriors it might be useful to
artificially tighten the prior to have less noise in telling apart
parameter space directions that are constrained by data and prior.
Default is no scaling, prior_factor=1.
:param feedback: logical flag to set whether the function should print
a warning every time the explicit MAP file is not found.
By default this is true.
:return: :math:`Q_{\\rm MAP}` value and number of degrees of freedom.
Since :math:`Q_{\\rm MAP}` is :math:`\\chi^2` distributed the
probability to exceed the test can be computed
using the cdf method of :py:data:`scipy.stats.chi2` or
:meth:`tensiometer.utilities.from_chi2_to_sigma`.
"""
# get the best fit:
best_fit_data_like = get_MAP_loglike(chain, feedback=feedback)
# get the number of effective parameters:
Neff = get_Neff(chain, prior_chain=prior_chain, prior_factor=prior_factor)
# compute Q_MAP:
Q_MAP = -2.*best_fit_data_like + normalization_factor
# compute the number of degrees of freedom:
dofs = float(num_data) - Neff
#
return Q_MAP, dofs
###############################################################################
def Q_DMAP(chain_1, chain_2, chain_12, prior_chain=None,
param_names=None, prior_factor=1.0, feedback=True):
"""
Compute the value and degrees of freedom of the quadratic form giving
the goodness of fit loss measure, in Gaussian approximation.
This is defined as in
(`Raveri and Hu 18 <https://arxiv.org/pdf/1806.04649.pdf>`_) to be:
.. math:: Q_{\\rm DMAP} \\equiv Q_{\\rm MAP}^{12} -Q_{\\rm MAP}^{1}
-Q_{\\rm MAP}^{2}
    where :math:`Q_{\\rm MAP}^{12}` is the goodness of fit at maximum
    posterior (MAP) for the joint data set and :math:`Q_{\\rm MAP}^{i}` is
    the same quantity at MAP for each of the two single data sets.
In Gaussian approximation this is distributed as:
.. math:: Q_{\\rm DMAP} \\sim \\chi^2(N_{\\rm eff}^1 + N_{\\rm eff}^2 -
N_{\\rm eff}^{12})
where :math:`N_{\\rm eff}` is the number of effective parameters,
as computed by the function :func:`tensiometer.gaussian_tension.get_Neff`
for the joint and the two single data sets.
:param chain_1: :class:`~getdist.mcsamples.MCSamples`
the first input chain.
:param chain_2: :class:`~getdist.mcsamples.MCSamples`
the second input chain.
:param chain_12: :class:`~getdist.mcsamples.MCSamples`
the joint input chain.
:param prior_chain: (optional) the prior chain.
If the prior is not well approximated by
a ranged prior and is informative it is better to explicitly
use a prior only chain.
If this is not given the algorithm will assume ranged priors with the
ranges computed from the input chain.
:param param_names: (optional) parameter names of the parameters to be used
in the calculation. By default all running parameters.
:param prior_factor: (optional) factor to scale the prior covariance.
In case of strongly non-Gaussian posteriors it might be useful to
artificially tighten the prior to have less noise in telling apart
parameter space directions that are constrained by data and prior.
Default is no scaling, prior_factor=1.
:param feedback: logical flag to set whether the function should print
a warning every time the explicit MAP file is not found.
By default this is true.
:return: :math:`Q_{\\rm DMAP}` value and number of degrees of freedom.
Since :math:`Q_{\\rm DMAP}` is :math:`\\chi^2` distributed the
probability to exceed the test can be computed
using the cdf method of :py:data:`scipy.stats.chi2` or
:meth:`tensiometer.utilities.from_chi2_to_sigma`.
"""
# check that all chains have the same running parameters:
# get the data best fit for the chains:
best_fit_data_like_1 = get_MAP_loglike(chain_1, feedback=feedback)
best_fit_data_like_2 = get_MAP_loglike(chain_2, feedback=feedback)
best_fit_data_like_12 = get_MAP_loglike(chain_12, feedback=feedback)
# get the number of effective parameters:
Neff_1 = get_Neff(chain_1,
prior_chain=prior_chain,
param_names=param_names,
prior_factor=prior_factor)
Neff_2 = get_Neff(chain_2,
prior_chain=prior_chain,
param_names=param_names,
prior_factor=prior_factor)
Neff_12 = get_Neff(chain_12,
prior_chain=prior_chain,
param_names=param_names,
prior_factor=prior_factor)
# compute delta Neff:
dofs = Neff_1 + Neff_2 - Neff_12
# compute Q_DMAP:
Q_DMAP = -2.*best_fit_data_like_12 \
+ 2.*best_fit_data_like_1 \
+ 2.*best_fit_data_like_2
#
return Q_DMAP, dofs
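# Example (editor's illustrative sketch, not part of the original module):
# significance of the goodness of fit loss when combining two data sets.
# The three MCSamples inputs (and the optional prior chain) are assumptions
# of this illustration.
def _example_Q_DMAP_significance(chain_1, chain_2, chain_12, prior_chain=None):
    import scipy.stats
    from . import utilities as _utils
    value, dofs = Q_DMAP(chain_1, chain_2, chain_12, prior_chain=prior_chain)
    # probability to exceed and corresponding effective number of sigmas:
    return scipy.stats.chi2.sf(value, dofs), \
        _utils.from_chi2_to_sigma(value, dofs)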
| 51,236 | 43.246114 | 111 | py |
tensiometer | tensiometer-master/tensiometer/cosmosis_interface.py | """
File with tools to interface Cosmosis chains with GetDist.
"""
"""
For testing purposes:
chain = loadMCSamples('./../test_chains/1p2_SN1_zcut0p3_abs')
chain_root = './test_chains/DES_multinest_cosmosis'
chain_root = './chains_lcdm/chain_1x2pt_lcdm'
chain_min_root = './chains_lcdm/chain_1x2pt_lcdm_MAP.maxlike'
param_label_dict=None
param_name_dict=None
settings = None
# test that everything is working:
test = MCSamplesFromCosmosis(chain_root, chain_min_root)
print(test.bestfit)
"""
import os
import numpy as np
import functools
import types
from getdist.chains import loadNumpyTxt
from getdist.mcsamples import MCSamples
from getdist.types import BestFit
from getdist.paramnames import ParamInfo
def MCSamplesFromCosmosis(chain_root, chain_min_root=None,
param_name_dict=None, param_label_dict=None,
name_tag=None, settings=None):
"""
Function to import Cosmosis chains in GetDist.
:param chain_root: the name and path to the chain or the path to the
folder that contains it.
:param chain_min_root: (optional) name of the file containing the
explicit best fit.
:param param_name_dict: (optional) a dictionary with the mapping between
cosmosis names and reasonable parameter names.
:param param_label_dict: (optional) dictionary with the mapping between
parameter names and parameter labels, since Cosmosis does not save
the labels in the chain.
:param name_tag: (optional) a string with the name tag for the chain.
:param settings: (optional) dictionary of analysis settings to override
getdist defaults
:return: The :class:`~getdist.mcsamples.MCSamples` instance
"""
# decide if the user passed a folder or a chain:
if os.path.isfile(chain_root+'.txt'):
chain_file = chain_root+'.txt'
elif os.path.isdir(chain_root):
# look for the chain file:
temp = list(filter(lambda x: 'chain.txt' in x, os.listdir(chain_root)))
if len(temp) == 0:
raise ValueError('No chain file found in folder', chain_root)
chain_file = chain_root+'/'+temp[0]
else:
raise ValueError('Input chain root is not a folder nor a file.')
# get all the commented lines in the chain file:
info = get_cosmosis_info(chain_file)
# get the parameter names:
param_names = get_param_names(info)
# get the parameter labels from the user provided dictionary:
param_labels = get_param_labels(info, param_names, param_label_dict)
# get the sampler:
sampler = get_sampler_type(info)
# get the name tag:
if name_tag is None:
name_tag = get_name_tag(info)
# get the samples weights and likelihood:
chain = loadNumpyTxt(chain_file, skiprows=0)
# parse the chain depending on the sampler that produced it:
if sampler == 'nested':
# get number of samples to use:
nsamples = int(list(filter(lambda x: 'nsample=' in x, info))
[0].replace(' ', '').split('=')[1])
# get the chain:
chain = chain[-nsamples:]
# get all quantities:
indexes = [i for i in range(len(param_names))
if i != param_names.index('weight')
and i != param_names.index('post')]
samples = chain[:, indexes]
weights = chain[:, param_names.index('weight')]
loglike = chain[:, param_names.index('post')]
# delete the weights and likelihood from names:
if param_labels is not None:
param_labels.pop(param_names.index('weight'))
param_labels.pop(param_names.index('post'))
param_names.pop(param_names.index('weight'))
param_names.pop(param_names.index('post'))
elif sampler == 'mcmc':
# get all quantities:
indexes = [i for i in range(len(param_names))
if i != param_names.index('post')]
samples = chain[:, indexes]
loglike = chain[:, param_names.index('post')]
# Cosmosis does not weight samples:
samples, idx, weights = np.unique(samples, return_index=True, return_counts=True, axis=0)
loglike = loglike[idx]
# delete the weights and likelihood from names:
if param_labels is not None:
param_labels.pop(param_names.index('post'))
param_names.pop(param_names.index('post'))
elif sampler == 'uncorrelated':
# get all quantities:
indexes = [i for i in range(len(param_names))
if i != param_names.index('post')]
samples = chain[:, indexes]
loglike = chain[:, param_names.index('post')]
weights = None
# delete the weights and likelihood from names:
if param_labels is not None:
param_labels.pop(param_names.index('post'))
param_names.pop(param_names.index('post'))
else:
raise ValueError('Unknown sampler')
# get the ranges:
ranges = get_ranges(info, param_names)
# transform param names:
if param_name_dict is not None:
for i, name in enumerate(param_names):
if name in param_name_dict.keys():
param_names[i] = param_name_dict[name]
if name in ranges.keys():
ranges[param_name_dict[name]] = ranges.pop(name)
# initialize the samples:
mc_samples = MCSamples(samples=samples, weights=weights,
loglikes=-2.*loglike,
sampler=sampler, names=param_names,
labels=param_labels, ranges=ranges,
ignore_rows=0, name_tag=name_tag,
settings=settings)
# set running parameters:
for name in mc_samples.getParamNames().parsWithNames(
mc_samples.getParamNames().list()):
if name.name in ranges.keys():
name.isDerived = False
else:
name.isDerived = True
# polish the samples removing nans:
mc_samples = polish_samples(mc_samples)
# get the best fit:
if chain_min_root is not None:
# since getdist does not cache the best fit we have to override the
# method in this brute way:
funcType = types.MethodType
mc_samples.getBestFit = funcType(functools.partial(get_maximum_likelihood,
chain_min_root=chain_min_root,
param_name_dict=param_name_dict,
param_label_dict=param_label_dict),
mc_samples)
# update statistics:
mc_samples.updateBaseStatistics()
#
return mc_samples
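# Example (editor's illustrative sketch, not part of the original module):
# a typical call with hypothetical chain paths and a label dictionary; the
# file names and the parameter name follow the usual cosmosis
# section--parameter convention but are not shipped with the package.
def _example_load_cosmosis_chain():
    labels = {'cosmological_parameters--omega_m': '\\Omega_m'}
    chain = MCSamplesFromCosmosis('./chains/chain_1x2pt_lcdm',
                                  chain_min_root='./chains/chain_1x2pt_lcdm_MAP',
                                  param_label_dict=labels,
                                  name_tag='1x2pt LCDM')
    return chain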
def get_cosmosis_info(file):
"""
Parse a file to get all the information about a Cosmosis run.
:param file: path and name of the file to parse.
:return: a list of strings with the cosmosis parameters for the run.
"""
with open(file, 'r') as _file:
info = []
for ln in _file:
if ln.startswith('#'):
info.append(ln[1:])
# polish the info strings:
info = list(filter(None, [line.replace('#', '').strip() for line in info]))
#
return info
def get_param_names(info):
"""
Get the parameter names for a Cosmosis run.
:param info: a list of strings with the cosmosis parameters for the run.
:return: a list of strings with the parameter names.
"""
return info[0].split('\t')
def get_param_labels(info, param_names, param_label_dict):
"""
Get the labels for the parameter names of a Cosmosis run.
:param info: a list of strings with the cosmosis parameters for the run.
:param param_names: a list of strings with the parameter names.
:param param_label_dict: a dictionary with the mapping between names and
labels.
:return: a list of strings with the parameter labels.
"""
if param_label_dict is not None:
param_labels = []
for name in param_names:
if name in param_label_dict.keys():
param_labels.append(param_label_dict[name])
else:
param_labels.append(name)
else:
param_labels = None
#
return param_labels
def get_sampler_type(info):
"""
    Get the sampler type for a chain given the list of strings containing the
    cosmosis run parameter information.
To process the sampler type the function defines internally a dictionary
with the mapping from sampler name to sampler type.
:param info: a list of strings with the cosmosis parameters for the run.
:return: a string with the sampler type if any, otherwise returns none.
"""
# define the dictionary with the mapping:
sampler_dict = {
'polychord': 'nested',
'multinest': 'nested',
'apriori': 'uncorrelated',
'emcee': 'mcmc',
'pmaxlike': 'max_like',
'maxlike': 'max_like'
}
# find the sampler in the parameters:
temp = list(filter(lambda x: 'sampler' in x, info))
if len(temp) > 0:
sampler = temp[0].replace(' ', '').split('=')[1].lower()
if sampler in sampler_dict.keys():
sampler = sampler_dict[sampler]
else:
raise ValueError('Unknown input sampler')
else:
sampler = None
#
return sampler
def get_name_tag(info):
"""
    Get the name tag for a chain given the list of strings containing the
    cosmosis run parameter information.
:param info: a list of strings with the cosmosis parameters for the run.
:return: a string with the name tag if any, otherwise returns none.
"""
temp = list(filter(lambda x: 'run_name' in x, info))
if len(temp) > 0:
name_tag = temp[0].replace(' ', '').split('=')[1].lower()
name_tag = name_tag.rstrip().lstrip()
else:
name_tag = None
#
return name_tag
def get_ranges(info, param_names):
"""
Get the ranges for the parameters from the info file.
:param info: a list of strings with the cosmosis parameters for the run.
:param param_names: a list with the parameter names.
:return: a dictionary with the parameter ranges.
"""
ranges = {}
for name in param_names:
try:
# look for the section:
section = '['+name.split('--')[0]+']'
pname = name.split('--')[1]
section_index = info.index(section)
temp = list(filter(lambda x:
pname in x[1] and x[0] > section_index,
enumerate(info)))[0][1]
_min = float(list(filter(None, temp.split()))[2])
_max = float(list(filter(None, temp.split()))[4])
ranges[name] = [_min, _max]
        except Exception:
            # the parameter has no explicit range in the header, skip it:
            pass
#
return ranges
def polish_samples(chain):
"""
Remove fixed parameters and samples with some parameter that is Nan
from the input chain.
:param chain: :class:`~getdist.mcsamples.MCSamples` the input chain.
:return: :class:`~getdist.mcsamples.MCSamples` the polished chain.
"""
# delete fixed parameters:
chain.deleteFixedParams()
# we need to delete nan columns:
fixed = []
values = []
for i in range(chain.samples.shape[1]):
if np.all(np.isnan(chain.samples[:, i])):
fixed.append(i)
values.append(chain.samples[0, i])
chain.changeSamples(np.delete(chain.samples, fixed, 1))
if hasattr(chain, 'ranges'):
for ix, value in zip(fixed, values):
chain.ranges.setFixed(chain.paramNames.names[ix].name,
value)
chain.paramNames.deleteIndices(fixed)
chain._getParamIndices()
# now we need to delete nan samples:
nan_filter = np.all(np.logical_not(np.isnan(chain.samples)), axis=1)
chain.filter(where=nan_filter)
#
return chain
def get_maximum_likelihood(dummy, max_posterior, chain_min_root,
param_name_dict, param_label_dict):
"""
Import the maximum likelihood file for a Cosmosis run, if present.
    :param dummy: dummy argument for interfacing, not used in practice
    :param max_posterior: whether to return the maximum posterior point,
        passed through by the getdist interface, not used in practice
    :param chain_min_root: name of the minimum file or the folder that contains
        it.
:param param_name_dict: a dictionary with the mapping between
cosmosis names and reasonable names.
:param param_label_dict: dictionary with the mapping between the parameter
names and the labels.
:return: :class:`~getdist.types.BestFit` the best fit object.
"""
# decide if the user passed a folder or a chain:
if os.path.isfile(chain_min_root+'.txt'):
minimum_file = chain_min_root+'.txt'
elif os.path.isdir(chain_min_root):
# look for the chain file:
temp = list(filter(lambda x: 'chain_pmaxlike.txt' in x,
os.listdir(chain_min_root)))
if len(temp) == 0:
raise ValueError('No minimum file found in folder', chain_min_root)
minimum_file = chain_min_root+'/'+temp[0]
else:
raise ValueError('Input chain root is not a folder nor a file.')
# get the info:
info = get_cosmosis_info(minimum_file)
# check the sampler:
sampler = get_sampler_type(info)
if sampler != 'max_like':
raise ValueError('Minimum file appears to have a strange sampler:',
sampler)
# get the parameter names:
param_names = get_param_names(info)
# get the parameter labels from the user provided dictionary:
param_labels = get_param_labels(info, param_names, param_label_dict)
# best fit:
best_fit = BestFit()
# set parameter names:
if param_name_dict is not None:
best_fit.names = []
for name in param_names:
if name in param_name_dict.keys():
best_fit.names.append(ParamInfo(param_name_dict[name]))
else:
best_fit.names.append(ParamInfo(name))
else:
best_fit.names = [ParamInfo(name) for name in param_names]
if param_labels is not None:
for name, label in zip(best_fit.names, param_labels):
name.label = label
# get the numbers:
best_fit_params = loadNumpyTxt(minimum_file, skiprows=0)[0]
# look for weight:
    if 'weight' in param_names:
        best_fit.weight = best_fit_params[param_names.index('weight')]
else:
best_fit.weight = 1.0
# look for log like:
if 'post' in param_names:
best_fit.logLike = -best_fit_params[param_names.index('post')]
else:
raise ValueError('Best fit file does not contain the best fit like')
# correct log likes in getdist format:
if 'post' in param_names:
best_fit_params[param_names.index('post')] = \
-best_fit_params[param_names.index('post')]
if 'prior' in param_names:
best_fit_params[param_names.index('prior')] = \
-best_fit_params[param_names.index('prior')]
if 'like' in param_names:
best_fit_params[param_names.index('like')] = \
-best_fit_params[param_names.index('like')]
# get the data chi squareds:
best_fit.chiSquareds = []
# get the ranges to get the running parameters:
ranges = get_ranges(info, param_names)
# now initialize all parameters:
for ind, param in enumerate(best_fit.names):
param.isDerived = param.name not in ranges.keys()
param.number = ind
param.best_fit = best_fit_params[ind]
#
return best_fit
| 15,767 | 37.179177 | 97 | py |
tensiometer | tensiometer-master/tensiometer/chains_convergence.py | """
This file contains some functions to study convergence of the chains and
to compare the two posteriors.
"""
"""
For test purposes:
from getdist import loadMCSamples, MCSamples, WeightedSamples
chain = loadMCSamples('./test_chains/DES')
chains = chain
param_names = None
import tensiometer.utilities as utils
import matplotlib.pyplot as plt
import tensiometer.gaussian_tension as gtens
import tensiometer.tensor_eigenvalues as teig
"""
###############################################################################
# initial imports:
import copy
import time
import numpy as np
from getdist import MCSamples
from . import utilities as utils
from . import gaussian_tension as gtens
from . import tensor_eigenvalues as teig
###############################################################################
# Helpers for input tests:
def _helper_chains_to_chainlist(chains):
if isinstance(chains, list):
for ch in chains:
if not isinstance(ch, MCSamples):
raise TypeError('Input list does not contain MCSamples')
chainlist = chains
elif isinstance(chains, MCSamples):
chainlist = utils.get_separate_mcsamples(chains)
else:
        raise TypeError('Input chains is not of MCSamples type nor a '
                        'list of chains.')
# check:
if len(chainlist) < 2:
raise ValueError('List of chains has less than two elements.')
#
return chainlist
###############################################################################
# Gelman Rubin for the means:
def GR_test(chains, param_names=None):
"""
Function performing the Gelman Rubin (GR) test
(described in
`Gelman and Rubin 92 <http://www.stat.columbia.edu/~gelman/research/published/itsim.pdf>`_
and
`Brooks and Gelman 98 <http://www.stat.columbia.edu/~gelman/research/published/brooksgelman2.pdf>`_)
on a list of :class:`~getdist.mcsamples.MCSamples` or on a single
:class:`~getdist.mcsamples.MCSamples` with different sub-chains.
This test compares the variation of the mean across a pool of chains
with the expected variation of the mean under the pdf that is being
sampled.
If we define the covariance of the mean as:
.. math::
C_{ij} \\equiv {\\rm Cov}_c({\\rm Mean}_s(\\theta))_{ij}
and the mean covariance as:
.. math::
M_{ij} \\equiv {\\rm Mean}_c[{\\rm Cov}_s(\\theta)_{ij}]
then we seek to maximize:
.. math::
R-1 = {\\rm max_{\\theta}}\\frac{C_{ij} \\theta^i \\theta^j}
{M_{ij}\\theta^i \\theta^j}
    where the subscript :math:`c` means that the statistic is computed across
    chains while the subscript :math:`s` indicates that it is computed across
    samples.
In this case the maximization is solved by finding the maximum eigenvalue
of :math:`C M^{-1}`.
:param chains: single or list of :class:`~getdist.mcsamples.MCSamples`
:param param_names: names of the parameters involved in the test.
By default uses all non-derived parameters.
:returns: value of the GR test and corresponding parameter combination
"""
# digest chain or chains:
chainlist = _helper_chains_to_chainlist(chains)
# digest parameter names:
for ch in chainlist:
param_names = gtens._check_param_names(ch, param_names)
# get samples and weights:
idx = [ch.index[name] for name in param_names]
samples = [ch.samples[:, idx] for ch in chainlist]
weights = [ch.weights for ch in chainlist]
#
return GR_test_from_samples(samples, weights)
def GR_test_from_samples(samples, weights):
"""
Lower level function to perform the Gelman Rubin (GR) test.
This works on a list of samples from different chains and corresponding
weights.
Refer to :meth:`tensiometer.chains_convergence.GR_test` for
more details of what this function is doing.
:param samples: list of samples from different chains
:param weights: weights of the samples for each chain
:returns: value of the GR test and corresponding parameter combination
"""
# initialization:
num_chains = len(samples)
# sum of weights:
tot_weights = np.array([np.sum(wh) for wh in weights])
# means and covariances:
means = [np.dot(weights[ind], samples[ind])/tot_weights[ind]
for ind in range(num_chains)]
covs = [np.cov(samples[ind].T, aweights=weights[ind], ddof=0)
for ind in range(num_chains)]
# compute R-1:
VM = np.cov(np.array(means).T)
MV = np.mean(covs, axis=0)
#
if VM.ndim == 0:
res, mode = VM/MV, np.array([1])
else:
eig, eigv = np.linalg.eig(np.dot(VM, utils.QR_inverse(MV)))
ind = np.argmax(eig)
res, mode = np.abs(eig[ind]), np.abs(eigv[:, ind])
#
return res, mode
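# Example (editor's illustrative sketch, not part of the original module):
# the test applied to four synthetic chains drawn from the same 3D Gaussian;
# for well mixed chains the returned R-1 value should be close to zero.
def _example_GR_test_from_samples():
    rng = np.random.default_rng(0)
    samples = [rng.normal(size=(10000, 3)) for _ in range(4)]
    weights = [np.ones(len(samps)) for samps in samples]
    return GR_test_from_samples(samples, weights)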
###############################################################################
# Gelman Rubin like test for higher moments and in 1D:
def GRn_test_1D(chains, n, param_name, theta0=None):
"""
One dimensional higher moments test. Compares the variation of a given
moment among the population of chains with the expected variation
of that quantity from the samples pdf.
This test is defined by:
.. math::
R_n(\\theta_0)-1 = \\frac{{\\rm Var}_c
({\\rm Mean}_s(\\theta-\\theta_0)^n)}{{\\rm Mean}_c
({\\rm Var}_s(\\theta-\\theta_0)^n) }
    where the subscript :math:`c` means that the statistic is computed across
    chains while the subscript :math:`s` indicates that it is computed across
    samples.
:param chains: single or list of :class:`~getdist.mcsamples.MCSamples`
:param n: order of the moment
:param param_name: names of the parameter involved in the test.
:param theta0: center of the moments. By default equal to the mean.
:returns: value of the GR moment test and corresponding parameter
combination (an array with one since this works in 1D)
"""
# digest chain or chains:
chainlist = _helper_chains_to_chainlist(chains)
# digest parameter names:
param_name = utils.make_list(param_name)
for ch in chainlist:
param_name = gtens._check_param_names(ch, param_name)
if len(param_name) != 1:
raise ValueError('GRn_test_1D works for one parameter only.')
# get the weights:
weights = [ch.weights for ch in chainlist]
# get the samples:
samples = [ch.samples[:, ch.index[param_name[0]]] for ch in chainlist]
#
return GRn_test_1D_samples(samples, weights, n, theta0)
def GRn_test_1D_samples(samples, weights, n, theta0=None):
"""
Lower level function to compute the one dimensional higher moments
test.
This works on a list of samples from different chains and corresponding
weights.
Refer to :meth:`tensiometer.chains_convergence.GRn_test_1D` for
more details of what this function is doing.
:param samples: list of samples from different chains
:param weights: weights of the samples for each chain
:param n: order of the moment
:param theta0: center of the moments. By default equal to the mean.
:returns: value of the GR moment test and corresponding parameter
combination (an array with one since this works in 1D)
"""
# initialize:
num_chains = len(samples)
# get the weights:
tot_weights = np.array([np.sum(wh) for wh in weights])
# get the central samples:
if theta0 is None:
means = [np.dot(weights[ind], samples[ind])/tot_weights[ind]
for ind in range(num_chains)]
central_samples = [samples[ind] - means[ind]
for ind in range(num_chains)]
else:
central_samples = [samples[ind] - theta0 for ind in range(num_chains)]
# compute moments:
moments = np.array([np.dot(weights[ind], central_samples[ind]**n)
/ tot_weights[ind] for ind in range(num_chains)])
moments2 = np.array([np.dot(weights[ind], central_samples[ind]**(2*n))
/ tot_weights[ind] for ind in range(num_chains)])
#
return np.var(moments)/(np.mean(moments2-moments**2))
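# Example (editor's illustrative sketch, not part of the original module):
# the second moment (n=2) version of the test on synthetic one dimensional
# chains; small values indicate that the chains agree on that moment.
def _example_GRn_test_1D_from_samples():
    rng = np.random.default_rng(1)
    samples = [rng.normal(size=20000) for _ in range(4)]
    weights = [np.ones(len(samps)) for samps in samples]
    return GRn_test_1D_samples(samples, weights, n=2)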
###############################################################################
# Gelman Rubin like test for higher moments:
def _helper_1(wh, samps, n, temp_EQ):
for w, s in zip(wh, samps):
res = s
for rk in range(n-1):
res = np.multiply.outer(res, s)
temp_EQ += w*res
return temp_EQ/np.sum(wh)
def _helper_2(wh, samps, n, temp_VQ, temp_EQ):
for w, s in zip(wh, samps):
res = s
for rk in range(n-1):
res = np.multiply.outer(res, s)
temp_VQ += w*np.multiply.outer(res-temp_EQ, res-temp_EQ)
return temp_VQ/np.sum(wh)
def GRn_test(chains, n, theta0=None, param_names=None, feedback=0,
optimizer='ParticleSwarm', **kwargs):
"""
Multi dimensional higher order moments convergence test.
Compares the variation of a given
moment among the population of chains with the expected variation
of that quantity from the samples pdf.
We first build the :math:`k` order tensor of parameter differences around a
point :math:`\\tilde{\\theta}`:
.. math::
Q^{(k)} \\equiv Q_{i_1, \\dots, i_k} \\equiv (\\theta_{i_1}
-\\tilde{\\theta}_{i_1}) \\cdots (\\theta_{i_k}
-\\tilde{\\theta}_{i_k})
then we build the tensor encoding its covariance across chains
.. math::
V_M = {\\rm Var}_c (E_s [Q^{(k)}])
    which is a rank :math:`2k` tensor with dimension equal to the number of
    parameters, and then build the second tensor encoding the mean across
    chains of the in-chain covariance of the moments:
.. math::
M_V = {\\rm Mean}_c (E_s[(Q^{(k)}-E_s[Q^{(k)}])
\\otimes(Q^{(k)}-E_s[Q^{(k)}])])
where we have suppressed all indexes to not crowd the notation.
Then we maximize over parameters:
.. math::
R_n -1 \\equiv {\\rm max}_\\theta
\\frac{V_M \\theta^{2k}}{M_V \\theta^{2k}}
where :math:`\\theta^{2k}` is the tensor product of :math:`\\theta` for
:math:`2k` times.
    Differently from the quadratic (matrix) case, this problem has no known
    solution in terms of tensor eigenvalues, so the solution is obtained by
    numerical optimization on the sphere with the pymanopt library.
:param chains: single or list of :class:`~getdist.mcsamples.MCSamples`
:param n: order of the moment
:param theta0: center of the moments. By default equal to the mean
:param param_names: names of the parameters involved in the test.
By default uses all non-derived parameters.
:param feedback: level of feedback. 0=no feedback, >0 increasingly chatty
:param optimizer: choice of optimization algorithm for pymanopt.
Default is ParticleSwarm, other possibility is TrustRegions.
:param kwargs: keyword arguments for the optimizer.
:returns: value of the GR moment test and corresponding parameter
combination
"""
# if n=1 we return the standard GR test:
if n == 1:
return GR_test(chains, param_names=param_names)
# digest chain or chains:
chainlist = _helper_chains_to_chainlist(chains)
# digest parameter names:
for ch in chainlist:
param_names = gtens._check_param_names(ch, param_names)
# if there is only one parameter call the specific function:
if len(param_names) == 1:
        return GRn_test_1D(chainlist, n, param_name=param_names,
                           theta0=theta0), np.array([1])
# get the weights:
weights = [ch.weights for ch in chainlist]
# get the samples:
samples = [ch.samples[:, [ch.index[name] for name in param_names]]
for ch in chainlist]
# call the samples function:
return GRn_test_from_samples(samples, weights, n, theta0=theta0,
feedback=feedback, optimizer=optimizer,
**kwargs)
def GRn_test_from_samples(samples, weights, n, theta0=None, feedback=0,
optimizer='ParticleSwarm', **kwargs):
"""
Lower level function to compute the multi dimensional higher moments
test.
This works on a list of samples from different chains and corresponding
weights.
Refer to :meth:`tensiometer.chains_convergence.GRn_test` for
more details of what this function is doing.
:param samples: list of samples from different chains
:param weights: weights of the samples for each chain
:param n: order of the moment
:param theta0: center of the moments. By default equal to the mean
:param feedback: level of feedback. 0=no feedback, >0 increasingly chatty
:param optimizer: choice of optimization algorithm for pymanopt.
Default is ParticleSwarm, other possibility is TrustRegions.
:param kwargs: keyword arguments for the optimizer.
:returns: value of the GR moment test and corresponding parameter
combination
"""
# initialization:
initial_time = time.time()
num_chains = len(samples)
num_params = samples[0].shape[1]
tot_weights = np.array([np.sum(wh) for wh in weights])
# get the central samples:
if theta0 is None:
means = [np.dot(weights[ind], samples[ind])/tot_weights[ind]
for ind in range(num_chains)]
central_samples = [samples[ind] - means[ind]
for ind in range(num_chains)]
else:
central_samples = [samples[ind] - theta0 for ind in range(num_chains)]
# loop over the chains:
EQ, VQ = [], []
if feedback > 0:
print('Started tensor calculations')
for ind in range(num_chains):
t0 = time.time()
samps = central_samples[ind]
wh = weights[ind]
# compute expectation of Q:
temp_EQ = np.zeros(tuple([num_params for i in range(n)]))
temp_EQ = _helper_1(wh, samps, n, temp_EQ)
# compute the covariance:
temp_VQ = np.zeros(tuple([num_params for i in range(2*n)]))
temp_VQ = _helper_2(wh, samps, n, temp_VQ, temp_EQ)
# save results:
EQ.append(copy.deepcopy(temp_EQ))
VQ.append(copy.deepcopy(temp_VQ))
# feedback:
t1 = time.time()
if feedback > 0:
print('Chain '+str(ind+1)+') time', round(t1-t0, 1), '(s)')
# compute statistics over chains:
MV = np.mean(VQ, axis=0)
VM = np.zeros(tuple([num_params for i in range(2*n)]))
temp = np.mean(EQ, axis=0)
for temp_EQ in EQ:
VM += np.multiply.outer(temp_EQ-temp, temp_EQ-temp)
VM = VM/float(len(EQ))
# do the tensor optimization:
if optimizer == 'GEAP':
results = teig.max_GtRq_geap_power(VM, MV, **kwargs)
else:
results = teig.max_GtRq_brute(VM, MV, feedback=0,
optimizer=optimizer, **kwargs)
# finalize:
final_time = time.time()
if feedback > 0:
print('Total time ', round(final_time-initial_time, 1), '(s)')
#
return results
| 14,904 | 36.638889 | 104 | py |
tensiometer | tensiometer-master/tensiometer/tensor_eigenvalues.py | """
This file contains a set of utilities to compute tensor eigenvalues
since there is no standard library to do so.
"""
###############################################################################
# initial imports:
from itertools import permutations
import numpy as np
import scipy.linalg
import scipy.integrate
import scipy
import sys
import functools
###############################################################################
# Utilities:
def random_symm_tensor(d, m, vmin=0.0, vmax=1.0):
"""
Generate a random symmetric tensor of dimension d and rank m.
There is no guarantee on the distribution of the elements, just that
they are all different...
:param d: number of dimensions
:param m: rank of the tensor
:param vmin: minimum value of the tensor elements
:param vmax: maximum value of the tensor elements
:returns: the random symmetric tensor
"""
# output tensor:
tensor_shape = [d for i in range(m)]
out_tens = np.zeros(tensor_shape)
# generate elements:
in_tens = vmax*np.random.rand(*tensor_shape) + vmin
# symmetrize:
num_perm = 0
for i in permutations([i for i in range(m)]):
num_perm += 1
out_tens += np.transpose(in_tens, axes=list(i))
out_tens = out_tens/num_perm
#
return out_tens
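# Example (editor's illustrative sketch, not part of the original module):
# a quick check that the generated tensor is symmetric under the exchange of
# any two indices, here the first two.
def _example_random_symm_tensor_check(d=3, m=4):
    tens = random_symm_tensor(d, m)
    swapped = np.transpose(tens, axes=[1, 0] + list(range(2, m)))
    return np.allclose(tens, swapped)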
def random_symm_positive_tensor(d, m, vmin=0.0, vmax=1.0):
"""
Generate a random positive symmetric tensor of even order.
There is no guarantee on the distribution of the elements, just that
they are all different...
:param d: number of dimensions
:param m: rank of the tensor
:param vmin: minimum value of the tensor elements
:param vmax: maximum value of the tensor elements
    :returns: the random symmetric positive tensor
"""
# Generate the starting tensor:
A = random_symm_tensor(d, m, vmin=vmin, vmax=vmax)
# Compute the product with itself:
A = np.tensordot(A, A, ([i for i in range(m)][m//2:],
[i for i in range(m)][:m//2]))
#
return A
def identity_tensor(d, m):
"""
Returns the identity tensor that has 1 on the (multidimensional) diagonal
and 0 elsewhere.
:param d: number of dimensions
:param m: rank of the tensor
    :returns: the identity tensor
"""
# output tensor:
tensor_shape = [d for i in range(m)]
out_tens = np.zeros(tensor_shape)
# initialize:
for i in range(d):
out_tens[tuple([i for j in range(m)])] = 1.
#
return out_tens
def number_eigenvalues(d, m):
"""
Number of eigenvalues of a symmetric tensor of order m and dimension d.
:param d: number of dimensions
:param m: rank of the tensor
"""
return d*(m-1)**(d-1)
def tensor_deflation(A, l, x):
"""
    Deflates a tensor by a scalar multiplied by a vector.
:param A: the input tensor
:param l: the scalar to deflate
:param x: the vector to deflate
:return : the deflated tensor
"""
# get dimension and rank:
m = len(A.shape)
# prepare the outer product of the input vector:
vec = x
for i in range(m-1):
vec = np.multiply.outer(vec, x)
#
return A - l * vec
###############################################################################
# Tensor contractions utilities
def tensor_contraction_brute_1(A, x, n=1):
"""
Contracts a symmetric tensor of rank m with a given vector n times.
This function is meant to be as fast as possible, no check is
performed.
    :param A: the input symmetric tensor of rank m
:param x: the input vector to contract
:param n: the number of times to contract
:returns: the tensor contracted n times. This is a tensor of rank m-n.
"""
res = A
for i in range(n):
res = np.dot(res, x)
return res
def tensor_contraction_brute_2(A, x, n=1):
"""
Contracts a symmetric tensor of rank m with a given vector n times.
This function is meant to be as fast as possible, no check is
performed.
    :param A: the input symmetric tensor of rank m
:param x: the input vector to contract
:param n: the number of times to contract
:returns: the tensor contracted n times. This is a tensor of rank m-n.
"""
return functools.reduce(np.dot, [A]+[x for i in range(n)])
# choose the contraction function to use:
tensor_contraction = tensor_contraction_brute_2
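# Example (editor's illustrative sketch, not part of the original module):
# contracting a random symmetric tensor with a vector; contracting m times
# gives the scalar A x^m that enters the tensor Rayleigh quotient below.
def _example_tensor_contraction(d=4, m=3):
    A = random_symm_tensor(d, m)
    x = np.random.rand(d)
    partial = tensor_contraction(A, x, m-1)   # vector of shape (d,)
    full = tensor_contraction(A, x, m)        # scalar A x^m
    return np.isclose(full, np.dot(partial, x))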
###############################################################################
# Optimization on the sphere:
def eu_to_sphere_grad(x, egrad):
"""
Converts euclidean gradient to gradient on the n-sphere.
:param x: vector x on the sphere
:param egrad: euclidean gradient
"""
return egrad - np.tensordot(x, egrad, axes=x.ndim) * x
def eu_to_sphere_hess(x, egrad, ehess, u):
"""
Derivative of gradient in direction u (tangent to the sphere)
:param x: vector x on the sphere
:param egrad: euclidean gradient
:param ehess: euclidean Hessian matrix
:param u: direction vector that should belong on the tangent space
of the sphere
"""
ehess = np.dot(ehess, u)
temp = ehess - np.tensordot(x, ehess, axes=x.ndim) * x \
- np.tensordot(x, egrad, axes=x.ndim) * u
return temp
###############################################################################
# Rayleight quotient definition and derivatives:
def tRq(x, A):
"""
Symmetric Tensor Rayleigh quotient.
:param x: the input vector
:param A: the input tensor
"""
# get dimension and rank:
m = len(A.shape)
# do the products:
return tensor_contraction(A, x, m)
def tRq_nder(x, A, n):
"""
Euclidean derivative of order n of the Tensor Rayleigh quotient problem.
:param x: the input vector
:param A: the input tensor
:param n: the order of the derivative
"""
# get dimension and rank:
m = len(A.shape)
# do the products:
res = tensor_contraction(A, x, m-n)
# get the prefactor:
    fac = float(np.prod([(m - j) for j in range(n)]))
#
return fac*res
###############################################################################
# manifold brute force maximization:
import autograd.numpy as anp
# prevent pymanopt from running with tensorflow:
import pymanopt.tools.autodiff._tensorflow as ptf
ptf.tf = None
from pymanopt.manifolds import Sphere
from pymanopt import Problem
import pymanopt.solvers
def _tRq_brute_autograd(x, A):
"""
Tensor Rayleigh quotient. Brute force implementation with autograd.
"""
# get dimension and rank:
m = len(A.shape)
# do the products:
res = functools.reduce(anp.dot, [A]+[x for i in range(m)])
#
return res
def max_tRq_brute(A, feedback=0, optimizer='ParticleSwarm', **kwargs):
"""
Brute force maximization of the Tensor Rayleigh quotient on the sphere.
Optimization is performed with Pymanopt.
:param A: the input tensor
:param feedback: the feedback level for pymanopt
:param optimizer: the name of the pymanopt minimizer
:param kwargs: keyword arguments to pass to the pymanopt solver
"""
# get dimension and rank:
d = A.shape[0]
# initialize:
manifold = Sphere(d)
problem = Problem(manifold=manifold,
cost=lambda x: -_tRq_brute_autograd(x, A),
verbosity=feedback)
# optimization:
if optimizer == 'ParticleSwarm':
solver = pymanopt.solvers.ParticleSwarm(logverbosity=0, **kwargs)
Xopt = solver.solve(problem)
elif optimizer == 'TrustRegions':
solver = pymanopt.solvers.TrustRegions(logverbosity=0, **kwargs)
Xopt = solver.solve(problem)
# finalize:
return _tRq_brute_autograd(Xopt, A), Xopt
def tRq_brute_2D(A, num_points=2000):
"""
Brute force maximization of the Tensor Rayleigh quotient on the circle.
    Works for problems of any rank in 2 dimensions. Since the problem
    is effectively one dimensional, the quotient is sampled at num_points
    points on the circle and the local extrema are returned.
:param A: the input tensor
:param num_points: the number of points of the search
"""
theta = np.linspace(0., np.pi, num_points)
res = np.array([tRq([x, y], A)
for x, y in zip(np.cos(theta), np.sin(theta))])
sol = np.where(np.diff(np.sign(res[1:]-res[0:-1])))
eig = np.concatenate((res[sol], res[sol]))
eigv = np.concatenate(([[x, y] for x, y in zip(np.cos(theta[sol]),
np.sin(theta[sol]))],
[[x, y] for x, y in zip(np.cos(theta[sol]+np.pi),
np.sin(theta[sol]+np.pi))]
))
#
return eig, eigv
###############################################################################
# power iterations:
def max_tRq_power(A, maxiter=1000, tol=1.e-10, x0=None, history=False):
"""
    Symmetric power iterations, also called S-HOPM, for tensor eigenvalues.
    Described in https://arxiv.org/abs/1007.1267
    The algorithm is not guaranteed to produce the global maximum but only
    a convex maximum. We advise running the algorithm multiple times to
    make sure that the solution that is found is the global maximum.
    :param A: the input symmetric tensor
    :param maxiter: (default 1000) maximum number of iterations
    :param tol: (default 1.e-10) tolerance on the solution of the eigenvalue
        problem
    :param x0: (default random on the sphere) starting point
    :param history: (default False) whether to return the history of the
        power iterations
"""
# get dimension and rank:
d, m = A.shape[0], len(A.shape)
# get random (normalized) initial guess:
if x0 is None:
x = 2.*np.random.rand(d) - 1.
else:
x = x0
x = x / np.sqrt(np.dot(x, x))
# initialization:
res_history = []
# do the power iterations:
for i in range(maxiter):
# precomputations:
Axmm1 = tensor_contraction(A, x, m-1)
Axm = tensor_contraction(Axmm1, x)
# save history:
res_history.append(Axm)
# check for termination:
test = Axmm1 - Axm * x
test = np.sqrt(np.dot(test, test))
if test < tol:
break
# perform the iteration:
x = Axmm1
x = x / np.sqrt(np.dot(x, x))
# check for rightful termination:
if i == maxiter-1:
print('WARNING(max_tRq_power)'
+ ' maximum number of iterations ('+str(maxiter)+') exceeded.')
# return:
if history:
return Axm, x, np.array(res_history)
else:
return Axm, x
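# Example (editor's illustrative sketch, not part of the original module):
# since S-HOPM only converges to a convex maximum, a common strategy is to
# restart it from several random points and keep the best candidate.
def _example_max_tRq_power_restarts(A, num_restarts=10):
    best_val, best_vec = -np.inf, None
    for _ in range(num_restarts):
        val, vec = max_tRq_power(A)
        if val > best_val:
            best_val, best_vec = val, vec
    return best_val, best_vec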
def max_tRq_shift_power(A, alpha, maxiter=1000, tol=1.e-10, x0=None,
history=False):
"""
Shifted symmetric power iterations, also called SS-HOPM, for tensor
eigenvalues. Described in https://arxiv.org/abs/1007.1267
    The algorithm is not guaranteed to produce the global maximum but only
    a convex maximum. We advise running the algorithm multiple times to
    make sure that the solution that is found is the global maximum.
    :param A: the input symmetric tensor
    :param alpha: the input fixed shift
    :param maxiter: (default 1000) maximum number of iterations.
    :param tol: (default 1.e-10) tolerance on the solution of the eigenvalue
        problem
    :param x0: (default random on the sphere) starting point
    :param history: (default False) whether to return the history of the
power iterations
"""
# get dimension and rank:
d, m = A.shape[0], len(A.shape)
# get random (normalized) initial guess:
if x0 is None:
x = 2.*np.random.rand(d) - 1.
else:
x = x0
x = x / np.sqrt(np.dot(x, x))
# initialization:
res_history = []
# do the power iterations:
for i in range(maxiter):
# precomputations:
Axmm1 = tensor_contraction(A, x, m-1)
Axm = tensor_contraction(Axmm1, x)
# save history:
res_history.append(Axm)
# check for termination:
test = Axmm1 - Axm * x
test = np.sqrt(np.dot(test, test))
if test < tol:
break
# perform the iteration:
x = Axmm1 + alpha * x
x = x / np.sqrt(np.dot(x, x))
# check for rightful termination:
if i == maxiter-1:
print('WARNING(max_tRq_shift_power)'
+ ' maximum number of iterations ('+str(maxiter)+') exceeded.')
# return:
if history:
return Axm, x, np.array(res_history)
else:
return Axm, x
def max_tRq_geap(A, tau=1.e-6, maxiter=1000, tol=1.e-10, x0=None,
history=False):
"""
Shifted adaptive power iterations algorithm, also called GEAP, for tensor
eigenvalues. Described in https://arxiv.org/pdf/1401.1183.pdf
    The algorithm is not guaranteed to produce the global maximum but only
    a convex maximum. We advise running the algorithm multiple times to
    make sure that the solution that is found is the global maximum.
    :param A: the input symmetric tensor
    :param tau: (default 1.e-6) tolerance on being positive definite
    :param maxiter: (default 1000) maximum number of iterations.
    :param tol: (default 1.e-10) tolerance on the solution of the eigenvalue
        problem
    :param x0: (default random on the sphere) starting point
    :param history: (default False) whether to return the history of the
power iterations
"""
# get dimension and rank:
d, m = A.shape[0], len(A.shape)
# get random (normalized) initial guess:
if x0 is None:
x = 2.*np.random.rand(d) - 1.
else:
x = x0
x = x / np.sqrt(np.dot(x, x))
# initialization:
res_history = []
# do the power iterations:
for i in range(maxiter):
# precompute:
Axmm2 = tensor_contraction(A, x, m-2)
Axmm1 = tensor_contraction(Axmm2, x)
Axm = tensor_contraction(Axmm1, x)
H_k = m*(m-1)*Axmm2
alpha_k = max(0., (tau - np.amin(np.linalg.eigvals(H_k)))/m)
# save history:
res_history.append([Axm, alpha_k])
# check for termination:
test = Axmm1 - Axm * x
test = np.sqrt(np.dot(test, test))
if test < tol:
break
# iteration:
x = Axmm1 + alpha_k * x
x = x / np.sqrt(np.dot(x, x))
# check for rightful termination:
if i == maxiter-1:
print('WARNING(max_tRq_geap_power)'
+ ' maximum number of iterations ('+str(maxiter)+') exceeded.')
# return:
if history:
return Axm, x, np.array(res_history)
else:
return Axm, x
###############################################################################
# maximum Z-eigenvalue and Z-eigenvector of a tensor:
def tRq_dyn_sys_brute(t, x, A, d, m):
"""
Dynamical system to solve for the biggest tensor eigenvalue.
Derivative function.
Described in https://arxiv.org/abs/1805.00903
:param t: input time
:param x: input position
:param A: input symmetric tensor
:param d: input number of dimensions
:param m: input rank of the tensor A
:return: the derivative of the dynamical system
"""
# do the product and compute the 2D matrix:
in_A = tensor_contraction(A, x, m-2)
# eigenvalues:
eig, eigv = np.linalg.eig(in_A)
# selection:
idx = np.argmax(np.real(eig))
out_x = np.real(eigv[:, idx])
out_x = out_x*np.sign(out_x[0])
#
return out_x - x
def max_tRq_dynsys(A, maxiter=1000, tol=1.e-10, x0=None, h0=0.5,
history=False):
"""
Solves for the maximum eigenvalue with a dynamical system.
Described in https://arxiv.org/abs/1805.00903
Uses odeint to perform the differential equation evolution.
:param A: the input symmetric tensor
    :param maxiter: (default 1000) maximum number of iterations.
    :param tol: (default 1.e-10) tolerance on the solution of the eigenvalue
        problem
    :param x0: (default random on the sphere) starting point
    :param h0: (default 0.5) initial time step
    :param history: (default False) whether to return the history of the
        iterations
"""
# get dimension and rank:
d, m = A.shape[0], len(A.shape)
# get random (normalized) initial guess:
if x0 is None:
x = 2.*np.random.rand(d) - 1.
else:
x = x0
x = x / np.sqrt(np.dot(x, x))
# initialize:
t1 = 10.*h0
res_history = []
# perform the dynamical system iterations:
for i in range(maxiter):
res = scipy.integrate.odeint(lambda x, t: tRq_dyn_sys_brute(t,
x, A, d, m),
y0=x, t=[0., t1], h0=h0,
full_output=True)
# process results:
h0 = res[1]['hu'][0]
t1 = 10.*h0
x = res[0][-1, :]
Axmm1 = tensor_contraction(A, x, m-1)
Axm = tensor_contraction(Axmm1, x)
# save history:
res_history.append([Axm, h0])
        # termination 1, the Rayleigh coefficient should not decrease:
# if last_Axm > Axm:
# break
# last_Axm = Axm
# termination 2, the inverse stepsize cannot be smaller than machine
# precision, an equilibrium has been reached
if 1./h0 < 10*np.finfo(np.float32).eps:
break
# termination 3, the eigenvalue problem is solved to desired accuracy:
test = Axmm1 - Axm * x
test = np.sqrt(np.dot(test, test))
if test < tol:
break
# check for rightful termination:
if i == maxiter-1:
print('WARNING(ds_max_eig)'
+ ' maximum number of iterations ('+str(maxiter)+') exceeded.')
# return:
if history:
return Axm, x, np.array(res_history)
else:
return Axm, x
###############################################################################
# Generalized Rayleigh quotient definition and derivatives:
def GtRq(x, A, B):
"""
Generalized tensor Rayleigh quotient.
:param x: the input vector
:param A: the input tensor at the numerator
    :param B: the input tensor at the denominator
"""
# get rank:
m = len(A.shape)
#
return tensor_contraction(A, x, m) / tensor_contraction(B, x, m)
def GtRq_Jac_brute(x, m, Axm, Bxm, Axmm1, Bxmm1):
"""
The Euclidean Jacobian of the Generalized tensor Rayleigh quotient problem.
Taken from https://arxiv.org/abs/1401.1183
Requires precomputations since many things can be cached.
:param x: the input vector
:param m: the rank of the tensor
:param Axm: first tensor contraction A*x^m
:param Bxm: second tensor contraction B*x^m
:param Axmm1: first tensor contraction A*x^(m-1)
:param Bxmm1: second tensor contraction B*x^(m-1)
"""
#
return m / Bxm * (Axm*x + Axmm1 - Axm/Bxm*Bxmm1)
def GtRq_Hess_brute(x, m, Axm, Bxm, Axmm1, Bxmm1, Axmm2, Bxmm2):
"""
The Euclidean Hessian of the Generalized tensor Rayleigh quotient problem.
Taken from https://arxiv.org/abs/1401.1183
Requires precomputations since many things can be cached.
:param x: the input vector
:param m: the rank of the tensor
:param Axm: first tensor contraction A*x^m
:param Bxm: second tensor contraction B*x^m
:param Axmm1: first tensor contraction A*x^(m-1)
:param Bxmm1: second tensor contraction B*x^(m-1)
:param Axmm2: first tensor contraction A*x^(m-2)
:param Bxmm2: second tensor contraction B*x^(m-2)
"""
# get dimension:
d = len(x)
# start accumulating Hessian:
Hess = m**2*Axm/Bxm**3*(np.outer(Bxmm1, Bxmm1) + np.outer(Bxmm1, Bxmm1))
Hess += m/Bxm*((m-1.)*Axmm2 + Axm*(np.identity(d)+(m-2.)*np.outer(x, x))
+ m*(np.outer(Axmm1, x) + np.outer(x, Axmm1)))
Hess -= m/Bxm**2*((m-1.)*Axm*Bxmm2 + m*(np.outer(Axmm1, Bxmm1)
+ np.outer(Bxmm1, Axmm1))
+ m*Axm*(np.outer(x, Bxmm1) + np.outer(Bxmm1, x)))
#
return Hess
###############################################################################
# Brute force maximization:
def _GtRq_brute_autograd(x, A, B):
"""
Generalized Tensor Rayleigh quotient. Brute force implementation.
"""
# get dimension and rank:
m = len(A.shape)
# do the products:
res1 = functools.reduce(anp.dot, [A]+[x for i in range(m)])
res2 = functools.reduce(anp.dot, [B]+[x for i in range(m)])
#
return res1 / res2
def max_GtRq_brute(A, B, feedback=0, optimizer='ParticleSwarm', **kwargs):
"""
Brute force maximization of the Generalized Tensor Rayleigh quotient
on the sphere. Optimization is performed with Pymanopt.
:param A: the input tensor
:param B: the second input tensor
:param feedback: the feedback level for pymanopt
:param optimizer: the name of the pymanopt minimizer
:param kwargs: keyword arguments to pass to the pymanopt solver
"""
# get dimension:
d = A.shape[0]
# initialize:
manifold = Sphere(d)
problem = Problem(manifold=manifold,
cost=lambda x: -_GtRq_brute_autograd(x, A, B),
verbosity=feedback)
# optimization:
if optimizer == 'ParticleSwarm':
solver = pymanopt.solvers.ParticleSwarm(logverbosity=0, **kwargs)
Xopt = solver.solve(problem)
elif optimizer == 'TrustRegions':
solver = pymanopt.solvers.TrustRegions(logverbosity=0, **kwargs)
Xopt = solver.solve(problem)
# finalize:
return _GtRq_brute_autograd(Xopt, A, B), Xopt
def GtRq_brute_2D(A, B, num_points=2000):
"""
Brute force maximization of the Generalized Tensor Rayleigh quotient
on the circle.
    Works for problems of any rank in 2 dimensions. Since the problem
    is effectively one dimensional, the quotient is sampled at num_points
    points on the circle and the local extrema are returned.
:param A: the first input tensor
:param B: the second input tensor
:param num_points: the number of points of the search
"""
theta = np.linspace(0., np.pi, num_points)
res = np.array([GtRq([x, y], A, B)
for x, y in zip(np.cos(theta), np.sin(theta))])
sol = np.where(np.diff(np.sign(res[1:]-res[0:-1])))
eig = np.concatenate((res[sol], res[sol]))
eigv = np.concatenate(([[x, y] for x, y in zip(np.cos(theta[sol]),
np.sin(theta[sol]))],
[[x, y] for x, y in zip(np.cos(theta[sol]+np.pi),
np.sin(theta[sol]+np.pi))]
))
#
return eig, eigv
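# Example (editor's illustrative sketch, not part of the original module):
# for rank-2 tensors (matrices) the maximum of the generalized Rayleigh
# quotient is the largest generalized eigenvalue, so the brute force 2D
# search can be cross-checked against scipy.linalg.eigh.
def _example_GtRq_2D_crosscheck():
    A = random_symm_tensor(2, 2)
    B = random_symm_positive_tensor(2, 2)
    eig, _ = GtRq_brute_2D(A, B)
    reference = np.max(scipy.linalg.eigh(A, B, eigvals_only=True))
    # the two values should agree to the accuracy of the angular grid:
    return np.max(eig), reference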
###############################################################################
# Power method (GEAP):
def max_GtRq_geap_power(A, B, maxiter=1000, tau=1.e-6, tol=1.e-10, x0=None,
history=False):
"""
Shifted adaptive power iterations algorithm, also called GEAP, for the
Generalized Tensor Rayleigh quotient.
Described in https://arxiv.org/pdf/1401.1183.pdf
    The algorithm is not guaranteed to produce the global maximum but only
    a convex maximum. We advise running the algorithm multiple times to
    make sure that the solution that is found is the global maximum.
    :param A: the input symmetric tensor
    :param B: the second input symmetric tensor
    :param maxiter: (default 1000) maximum number of iterations.
    :param tau: (default 1.e-6) tolerance on being positive definite
    :param tol: (default 1.e-10) tolerance on the solution of the eigenvalue
        problem
    :param x0: (default random on the sphere) starting point
    :param history: (default False) whether to return the history of the
power iterations
"""
# get dimension and rank:
d, m = A.shape[0], len(A.shape)
# get random (normalized) initial guess:
if x0 is None:
x = 2.*np.random.rand(d) - 1.
else:
x = x0
x = x / np.sqrt(np.dot(x, x))
# maximum iterations:
if maxiter is None:
maxiter = sys.maxsize
# initialize:
res_history = []
# do the power iterations:
for i in range(maxiter):
# precomputations:
Axmm2 = functools.reduce(np.dot, [A]+[x for i in range(m-2)])
Bxmm2 = functools.reduce(np.dot, [B]+[x for i in range(m-2)])
Axmm1 = np.dot(Axmm2, x)
Bxmm1 = np.dot(Bxmm2, x)
Axm = np.dot(Axmm1, x)
Bxm = np.dot(Bxmm1, x)
lambda_k = Axm/Bxm
# termination check:
term = Axmm1 - lambda_k * Bxmm1
diff = np.sqrt(np.dot(term, term))
if diff < tol:
break
# quantities:
H_k = GtRq_Hess_brute(x, m, Axm, Bxm, Axmm1, Bxmm1, Axmm2, Bxmm2)
alpha_k = max(0., (tau - np.amin(np.linalg.eigvals(H_k)))/m)
# advance and normalize:
x = (Axmm1 - lambda_k*Bxmm1 + (alpha_k + lambda_k)*Bxm*x)
x = x / np.sqrt(np.dot(x, x))
# history:
res_history.append([lambda_k, alpha_k])
# if going for too long warn:
if i % 10000 == 0 and i > 1:
print('WARNING(max_GtRq_geap_power)'
+ ' large number of iterations ('+str(i)+').')
# check for rightful termination:
if i == maxiter-1:
print('WARNING(max_GtRq_geap_power)'
+ ' maximum number of iterations ('+str(maxiter)+') exceeded.')
# returns:
if history:
return lambda_k, x, np.array(res_history)
else:
return lambda_k, x
| 25,511 | 32.436435 | 79 | py |
tensiometer | tensiometer-master/tensiometer/utilities.py | """
This file contains some utilities that are used in the tensiometer package.
"""
###############################################################################
# initial imports:
import numpy as np
import scipy
import scipy.special
import scipy.stats
from scipy.linalg import sqrtm
from getdist import MCSamples
###############################################################################
def from_confidence_to_sigma(P):
"""
Transforms a probability to effective number of sigmas.
This matches the input probability with the number of standard deviations
that an event with the same probability would have had in a Gaussian
distribution as in Eq. (G1) of
(`Raveri and Hu 18 <https://arxiv.org/pdf/1806.04649.pdf>`_).
.. math::
n_{\\sigma}^{\\rm eff}(P) \\equiv \\sqrt{2} {\\rm Erf}^{-1}(P)
:param P: the input probability.
:return: the effective number of standard deviations.
"""
if (np.all(P < 0.) or np.all(P > 1.)):
raise ValueError('Input probability has to be between zero and one.\n',
'Input value is ', P)
return np.sqrt(2.)*scipy.special.erfinv(P)
###############################################################################
def from_sigma_to_confidence(nsigma):
"""
Gives the probability of an event at a given number of standard deviations
in a Gaussian distribution.
:param nsigma: the input number of standard deviations.
:return: the probability to exceed the number of standard deviations.
"""
if (np.all(nsigma < 0.)):
raise ValueError('Input nsigma has to be positive.\n',
'Input value is ', nsigma)
return scipy.special.erf(nsigma/np.sqrt(2.))
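# Example (editor's illustrative sketch, not part of the original module):
# the two functions above are inverses of each other; 2 sigma corresponds to
# the familiar 95.45% confidence level.
def _example_sigma_confidence_roundtrip():
    P = from_sigma_to_confidence(2.)     # approximately 0.9545
    return from_confidence_to_sigma(P)   # recovers 2.0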
###############################################################################
def from_chi2_to_sigma(val, dofs, exact_threshold=6):
"""
Computes the effective number of standard deviations for a chi squared
variable.
This matches the probability computed from the chi squared variable
to the number of standard deviations that an event with the same
probability would have had in a Gaussian
distribution as in Eq. (G1) of
(`Raveri and Hu 18 <https://arxiv.org/pdf/1806.04649.pdf>`_).
.. math::
n_{\\sigma}^{\\rm eff}(x, {\\rm dofs}) \\equiv
\\sqrt{2} {\\rm Erf}^{-1}({\\rm CDF}(\\chi^2_{\\rm dofs}(x)))
    For very statistically significant events this function
    switches from the direct formula to an accurate asymptotic expansion.
:param val: value of the chi2 variable
:param dofs: number of degrees of freedom of the chi2 variable
    :param exact_threshold: (default 6) threshold of value/dofs above which
        the asymptotic formula is used.
:return: the effective number of standard deviations.
"""
# check:
if (np.all(val < 0.)):
raise ValueError('Input chi2 value has to be positive.\n',
'Input value is ', val)
    if (np.any(dofs < 0.)):
raise ValueError('Input number of dofs has to be positive.\n',
'Input value is ', dofs)
# prepare:
x = val/dofs
    # if value over dofs is low use direct calculation:
    if x < exact_threshold:
        res = from_confidence_to_sigma(scipy.stats.chi2.cdf(val, dofs))
    # if value is high use first order asymptotic expansion:
else:
lgamma = 2*np.log(scipy.special.gamma(dofs/2.))
res = np.sqrt(dofs*(x + np.log(2)) - (-4 + dofs)*np.log(x*dofs)
- 2*np.log(-2 + dofs + x*dofs) + lgamma
- np.log(2*np.pi*(dofs*(x + np.log(2)) - np.log(2*np.pi)
- (-4 + dofs)*np.log(x*dofs)
- 2*np.log(-2 + dofs + x*dofs) + lgamma)))
#
return res
###############################################################################
def KL_decomposition(matrix_a, matrix_b):
"""
Computes the Karhunen–Loeve (KL) decomposition of the matrix A and B. \n
Notice that B has to be real, symmetric and positive. \n
The algorithm is taken from
`this link <http://fourier.eng.hmc.edu/e161/lectures/algebra/node7.html>`_.
The algorithm is NOT optimized for speed but for precision.
:param matrix_a: the first matrix.
:param matrix_b: the second matrix.
:return: the KL eigenvalues and the KL eigenvectors.
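    Example (an illustrative sketch; when B is the identity this reduces to the
    standard eigenvalue problem for A):
    .. code-block:: python
        import numpy as np
        import tensiometer.utilities as utils
        A = np.array([[2., 0.5], [0.5, 1.]])
        B = np.identity(2)
        kl_eig, kl_eigv = utils.KL_decomposition(A, B)
        assert np.allclose(kl_eig, np.linalg.eigh(A)[0])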
"""
# compute the eigenvalues of b, lambda_b:
_lambda_b, _phi_b = np.linalg.eigh(matrix_b)
# check that this is positive:
if np.any(_lambda_b < 0.):
raise ValueError('B is not positive definite\n',
'KL eigenvalues are ', _lambda_b)
_sqrt_lambda_b = np.diag(1./np.sqrt(_lambda_b))
_phib_prime = np.dot(_phi_b, _sqrt_lambda_b)
_a_prime = np.dot(np.dot(_phib_prime.T, matrix_a), _phib_prime)
_lambda, _phi_a = np.linalg.eigh(_a_prime)
_phi = np.dot(np.dot(_phi_b, _sqrt_lambda_b), _phi_a)
return _lambda, _phi
###############################################################################
def QR_inverse(matrix):
"""
Invert a matrix with the QR decomposition.
This is much slower than standard inversion but has better accuracy
for matrices with higher condition number.
:param matrix: the input matrix.
:return: the inverse of the matrix.
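    Example (an illustrative sketch with a small, well conditioned symmetric
    matrix):
    .. code-block:: python
        import numpy as np
        import tensiometer.utilities as utils
        mat = np.array([[2., 0.5], [0.5, 1.]])
        assert np.allclose(utils.QR_inverse(mat), np.linalg.inv(mat))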
"""
_Q, _R = np.linalg.qr(matrix)
return np.dot(_Q, np.linalg.inv(_R.T))
###############################################################################
def clopper_pearson_binomial_trial(k, n, alpha=0.32):
"""
http://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval
alpha confidence intervals for a binomial distribution of k expected
successes on n trials.
:param k: number of success.
:param n: total number of trials.
:param alpha: (optional) confidence level.
:return: lower and upper bound.
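    Example (an illustrative sketch):
    .. code-block:: python
        import tensiometer.utilities as utils
        # 68% confidence interval on the success fraction after observing
        # 10 successes in 100 trials:
        lo, hi = utils.clopper_pearson_binomial_trial(10., 100.)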
"""
lo = scipy.stats.beta.ppf(alpha/2, k, n-k+1)
hi = scipy.stats.beta.ppf(1 - alpha/2, k+1, n-k)
return lo, hi
###############################################################################
def min_samples_for_tension(nsigma, sigma_err):
"""
Computes the minimum number of uncorrelated samples that are
needed to quantify a tension of a given significance with a given error
through binomial trials.
This function works by inverting the Clopper Pearson binomial trial and
likely delivers an underestimate of the points needed.
:param nsigma: number of effective sigmas of the given tension.
:param sigma_err: the desired error on the determination of nsigma.
:returns: minimum number of samples.
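    Example (an illustrative sketch; the actual number depends on the requested
    precision):
    .. code-block:: python
        import tensiometer.utilities as utils
        # rough number of independent samples needed to resolve a 3 sigma
        # tension with an error of about 0.2 sigma:
        n = utils.min_samples_for_tension(3., 0.2)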
"""
P = from_sigma_to_confidence(nsigma)
def dummy(n):
_dn, _up = clopper_pearson_binomial_trial(max(P, 1.-P)*n, n)
_err_up = from_confidence_to_sigma(max(P, 1.-P)) - from_confidence_to_sigma(_dn)
_err_dn = from_confidence_to_sigma(_up) - from_confidence_to_sigma(max(P, 1.-P))
return 0.5*(_err_up + _err_dn) - sigma_err
try:
n = scipy.optimize.brentq(lambda x: dummy(np.exp(x)), 0., 30.)
n = np.exp(n)
except ValueError:
n = np.nan
return n
###############################################################################
def get_separate_mcsamples(chain):
"""
Function that returns separate :class:`~getdist.mcsamples.MCSamples`
for each sampler chain.
:param chain: :class:`~getdist.mcsamples.MCSamples` the input chain.
:return: list of :class:`~getdist.mcsamples.MCSamples` with the separate
chains.
"""
# get separate chains:
_chains = chain.getSeparateChains()
# copy the param names and ranges:
_mc_samples = []
for ch in _chains:
temp = MCSamples()
temp.paramNames = chain.getParamNames()
temp.setSamples(ch.samples, weights=ch.weights, loglikes=ch.loglikes)
temp.sampler = chain.sampler
temp.ranges = chain.ranges
temp.updateBaseStatistics()
_mc_samples.append(temp.copy())
#
return _mc_samples
###############################################################################
def bernoulli_thin(chain, temperature=1, num_repeats=1):
"""
Function that thins a chain with a Bernoulli process.
:param chain: :class:`~getdist.mcsamples.MCSamples` the input chain.
:param temperature: temperature of the Bernoulli process. If T=1 then
this produces a unit weight chain.
:param num_repeats: number of repetitions of the Bernoulli process.
:return: a :class:`~getdist.mcsamples.MCSamples` chain with the
reweighted chain.
"""
# check input:
# get the trial vector:
test = np.log(chain.weights / np.sum(chain.weights))
new_weights = np.exp((1. - temperature) * test)
test = temperature*(test - np.amax(test))
# do the trial:
    _filter = np.zeros(len(test)).astype(bool)
    _sample_repeat = np.zeros(len(test)).astype(int)
    for i in range(num_repeats):
        _temp = np.random.binomial(1, np.exp(test))
        _sample_repeat += _temp.astype(int)
        _filter = np.logical_or(_filter, _temp.astype(bool))
new_weights = _sample_repeat*new_weights
# filter the chain:
chain.setSamples(samples=chain.samples[_filter, :],
weights=new_weights[_filter],
loglikes=chain.loglikes[_filter])
# update:
chain._weightsChanged()
chain.updateBaseStatistics()
#
return chain
###############################################################################
def random_samples_reshuffle(chain):
"""
Performs a coherent random reshuffle of the samples.
:param chain: :class:`~getdist.mcsamples.MCSamples` the input chain.
:return: a :class:`~getdist.mcsamples.MCSamples` chain with the
reshuffled chain.
"""
# check input:
# get the reshuffling vector:
_reshuffle_indexes = np.arange(len(chain.weights))
np.random.shuffle(_reshuffle_indexes)
# filter the chain:
chain.setSamples(samples=chain.samples[_reshuffle_indexes, :],
weights=chain.weights[_reshuffle_indexes],
loglikes=chain.loglikes[_reshuffle_indexes])
# update:
chain._weightsChanged()
chain.updateBaseStatistics()
#
return chain
###############################################################################
def make_list(elements):
"""
Checks if elements is a list.
If yes returns elements without modifying it.
If not creates and return a list with elements inside.
:param elements: an element or a list of elements.
:return: a list containing elements.
"""
if isinstance(elements, (list, tuple)):
return elements
else:
return [elements]
###############################################################################
def PDM_to_vector(pdm):
"""
Transforms a positive definite matrix of dimension :math:`d \\times d`
into an unconstrained vector of dimension :math:`d(d+1)/2`.
    This does not use the Cholesky decomposition since we need a guarantee of
    strict positive definiteness.
    The absolute values of the elements of the returned vector at the indexes
    that satisfy:
.. code-block:: python
np.tril_indices(d, 0)[0] == np.tril_indices(d, 0)[1]
are the eigenvalues of the matrix. The sign of these elements define
the orientation of the eigenvectors.
Note that this is not strictly the inverse of
:meth:`tensiometer.utilities.vector_to_PDM`
since there are a number of discrete symmetries in the definition of the
eigenvectors that we ignore since they are irrelevant for the sake of
representing the matrix.
:param pdm: the input positive definite matrix.
:return: output vector representation.
:reference: https://arxiv.org/abs/1906.00587
"""
# get dimension:
d = pdm.shape[0]
# get the eigenvalues of the matrix:
Lambda, Phi = np.linalg.eigh(pdm)
# get triangular decomposition:
(P, L, U) = scipy.linalg.lu(Phi.T, permute_l=False)
    # QR decomposition:
Q, R = np.linalg.qr(L)
L = np.dot(np.dot(R, U), L)
# pivot the eigenvalues:
Lambda2 = np.dot(np.dot(P.T, np.diag(Lambda)), P)
# prepare output:
mat = L
mat[np.diag_indices(d)] = np.sign(mat[np.diag_indices(d)])*np.diag(Lambda2)
#
return mat[np.tril_indices(d, 0)]
def vector_to_PDM(vec):
"""
Transforms an unconstrained vector of dimension :math:`d(d+1)/2`
into a positive definite matrix of dimension :math:`d \\times d`.
    The absolute values of the elements of the input vector at the indexes
    that satisfy:
.. code-block:: python
np.tril_indices(d, 0)[0] == np.tril_indices(d, 0)[1]
are the eigenvalues of the matrix. The sign of these elements define
the orientation of the eigenvectors.
The purpose of this function is to allow optimization over the space
of positive definite matrices that is either unconstrained or
has constraints on the condition number of the matrix.
    :param vec: the input vector.
:return: output positive definite matrix.
:reference: https://arxiv.org/abs/1906.00587
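    Example (an illustrative sketch of the round trip with a random vector):
    .. code-block:: python
        import numpy as np
        import tensiometer.utilities as utils
        d = 4
        vec = 2.*np.random.rand(d*(d+1)//2) - 1.
        mat = utils.vector_to_PDM(vec)
        assert np.all(np.linalg.eigvalsh(mat) > 0)
        # matrix -> vector -> matrix is stable:
        assert np.allclose(mat, utils.vector_to_PDM(utils.PDM_to_vector(mat)))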
"""
d = int(np.sqrt(1 + 8*len(vec)) - 1)//2
L = np.zeros((d, d))
# get the diagonal with eigenvalues:
L[np.tril_indices(d, 0)] = vec
Lambda2 = np.diag(np.abs(L[np.diag_indices(d)]))
L[np.diag_indices(d)] = np.sign(L[np.diag_indices(d)]) * np.ones(d)
# qr decompose L:
Q, R = np.linalg.qr(L)
Phi2 = np.dot(L, np.linalg.inv(R))
# rebuild matrix
return np.dot(np.dot(Phi2.T, Lambda2), Phi2)
###############################################################################
def whiten_samples(samples, weights):
"""
Rescales samples by the square root of their inverse covariance.
The resulting samples have identity covariance. This amounts to a change of
coordinates so the physical meaning of different coordinates is changed.
:param samples: the input samples.
:param weights: the input weights of the samples.
:return: whitened samples with identity covariance.
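    Example (an illustrative sketch with uniform weights):
    .. code-block:: python
        import numpy as np
        import tensiometer.utilities as utils
        samples = np.random.multivariate_normal([0., 0.], [[2., 0.5], [0.5, 1.]], 10000)
        weights = np.ones(len(samples))
        white = utils.whiten_samples(samples, weights)
        # the whitened samples have (approximately) identity covariance:
        print(np.cov(white.T, aweights=weights))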
"""
# compute sample covariance:
_cov = np.cov(samples.T, aweights=weights)
# compute its inverse square root:
_temp = sqrtm(QR_inverse(_cov))
# whiten the samples:
white_samples = samples.dot(_temp)
#
return white_samples
###############################################################################
def is_outlier(points, thresh=3.5):
"""
Returns a boolean array with True if points are outliers and False
otherwise.
:param points: An num-observations by num-dimensions array of observations
:param thresh: The modified z-score to use as a threshold. Observations with
a modified z-score (based on the median absolute deviation) greater
than this value will be classified as outliers.
:return: A num-observations-length boolean array.
:reference: Boris Iglewicz and David Hoaglin (1993), "Volume 16: How to Detect and
Handle Outliers", The ASQC Basic References in Quality Control:
Statistical Techniques, Edward F. Mykytka, Ph.D., Editor.
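    Example (an illustrative sketch; the exact mask depends on the random draw):
    .. code-block:: python
        import numpy as np
        import tensiometer.utilities as utils
        points = np.append(np.random.normal(0., 1., 100), 10.)
        mask = utils.is_outlier(points)
        # the last point is (almost certainly) flagged as an outlier:
        print(mask[-1])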
"""
if len(points.shape) == 1:
points = points[:, None]
median = np.median(points, axis=0)
diff = np.sum((points - median)**2, axis=-1)
diff = np.sqrt(diff)
med_abs_deviation = np.median(diff)
modified_z_score = 0.6745 * diff / med_abs_deviation
return modified_z_score > thresh
| 15,550 | 34.997685 | 88 | py |
tensiometer | tensiometer-master/tensiometer/experimental.py | """
Experimental features.
For test purposes:
import os, sys
import time
import gc
from numba import jit
import numpy as np
import getdist.chains as gchains
gchains.print_load_details = False
from getdist import MCSamples, WeightedSamples
import scipy
from scipy.linalg import sqrtm
from scipy.integrate import simps
from scipy.spatial import cKDTree
# imports for parallel calculations:
import multiprocessing
import joblib
# number of threads available:
if 'OMP_NUM_THREADS' in os.environ.keys():
n_threads = int(os.environ['OMP_NUM_THREADS'])
else:
n_threads = multiprocessing.cpu_count()
from getdist import loadMCSamples, MCSamples, WeightedSamples
# add path for correct version of tensiometer:
here = './'
temp_path = os.path.realpath(os.path.join(os.getcwd(), here+'tensiometer'))
sys.path.insert(0, temp_path)
import tensiometer.mcmc_tension as tmt
import tensiometer.utilities as utils
chain_1 = loadMCSamples('./test_chains/DES')
chain_2 = loadMCSamples('./test_chains/Planck18TTTEEE')
chain_12 = loadMCSamples('./test_chains/Planck18TTTEEE_DES')
chain_prior = loadMCSamples('./test_chains/prior')
import matplotlib.pyplot as plt
diff_chain = tmt.parameter_diff_chain(chain_1, chain_2, boost=1)
num_params, num_samples = diff_chain.samples.T.shape
param_names = None
scale = None
method = 'brute_force'
feedback=2
n_threads = 1
"""
import os
import time
import gc
from numba import jit, njit
import numpy as np
import getdist.chains as gchains
gchains.print_load_details = False
from getdist import MCSamples, WeightedSamples
import scipy
import scipy.optimize
from scipy.linalg import sqrtm
from scipy.integrate import simps
from scipy.spatial import cKDTree
from . import mcmc_tension as tmt
"""
# initialize param names:
if param_names is None:
param_names = diff_chain.getParamNames().getRunningNames()
else:
chain_params = diff_chain.getParamNames().list()
if not np.all([name in chain_params for name in param_names]):
raise ValueError('Input parameter is not in the diff chain.\n',
'Input parameters ', param_names, '\n'
'Possible parameters', chain_params)
# indexes:
ind = [diff_chain.index[name] for name in param_names]
# some initial calculations:
_num_samples = np.sum(diff_chain.weights)
_num_params = len(ind)
# number of effective samples:
_num_samples_eff = np.sum(diff_chain.weights)**2 / \
np.sum(diff_chain.weights**2)
    # whiten samples:
white_samples = utils.whiten_samples(diff_chain.samples[:, ind],
diff_chain.weights)
weights = diff_chain.weights
white_samples = white_samples
"""
################# minimize scale:
def UCV_SP_bandwidth(white_samples, weights, near=1, near_max=200):
"""
near = 10
near_max = 200
"""
# digest input:
n, d = white_samples.shape
fac = 2**(-d/2.)
# prepare the Tree with the samples:
data_tree = cKDTree(white_samples, balanced_tree=True)
# compute the weights vectors:
wtot = np.sum(weights)
weights2 = weights**2
neff = wtot**2 / np.sum(weights2)
alpha = wtot / (wtot - weights)
# query the Tree for the maximum number of nearest neighbours:
dist, idx = data_tree.query(white_samples, np.arange(2, near_max+1), n_jobs=-1)
r2 = np.square(dist)
# do all sort of precomputations:
R = dist[:, near]
R2 = r2[:, near]
R2s = R2[:, None] + R2[idx]
term_1 = fac*np.sum(weights2/R**d)
weight_term = weights[:, None]*weights[idx]
R2sd = R2s**(-d/2)
Rd = R[:, None]**d
R21 = r2/R2s
R22 = r2/R2[:, None]
alpha_temp = alpha[:, None]
# define helper for minimization:
@njit
def _helper(gamma):
# compute the i != j sum:
temp = weight_term*(R2sd*gamma**(-d/2)*np.exp(-0.5*R21/gamma) - 2.*alpha_temp/Rd/gamma**d*np.exp(-0.5*R22/gamma))
# sum:
_ucv = term_1/gamma**d + np.sum(temp)
_ucv = _ucv / wtot**2
#
return _ucv
# initial guess:
x0 = tmt.AMISE_bandwidth(d, neff)[0, 0]
# call optimizer:
res = scipy.optimize.minimize(lambda x: _helper(np.exp(x)), x0=np.log(x0), method='Nelder-Mead')
res.x = np.exp(res.x)
#
return res
pass
| 4,211 | 26.350649 | 121 | py |
tensiometer | tensiometer-master/tensiometer/__init__.py | __author__ = 'Marco Raveri'
__version__ = "0.1.2"
__url__ = "https://tensiometer.readthedocs.io"
from . import gaussian_tension, mcmc_tension, cosmosis_interface
| 163 | 26.333333 | 64 | py |
tensiometer | tensiometer-master/tensiometer/tests/test_utilities.py | ###############################################################################
# initial imports:
import unittest
import tensiometer.utilities as ttu
import numpy as np
###############################################################################
class test_confidence_to_sigma(unittest.TestCase):
def setUp(self):
pass
# test against known output:
def test_from_confidence_to_sigma_result(self):
result = ttu.from_confidence_to_sigma(np.array([0.68, 0.95, 0.997]))
known_result = np.array([0.99445788, 1.95996398, 2.96773793])
assert np.allclose(result, known_result)
def test_from_sigma_to_confidence_result(self):
result = ttu.from_sigma_to_confidence(np.array([1., 2., 3.]))
known_result = np.array([0.68268949, 0.95449974, 0.9973002])
assert np.allclose(result, known_result)
# test that one function is the inverse of the other:
def test_sigma_confidence_inverse(self):
test_numbers = np.arange(1, 6)
test_confidence = ttu.from_sigma_to_confidence(test_numbers)
test_sigma = ttu.from_confidence_to_sigma(test_confidence)
assert np.allclose(test_numbers, test_sigma)
# test raises:
def test_errors(self):
with self.assertRaises(ValueError):
ttu.from_confidence_to_sigma(-1.)
with self.assertRaises(ValueError):
ttu.from_confidence_to_sigma(2.)
with self.assertRaises(ValueError):
ttu.from_sigma_to_confidence(-1.)
###############################################################################
class test_chi2_to_sigma(unittest.TestCase):
def setUp(self):
pass
# test values:
def test_values(self):
assert np.allclose(ttu.from_chi2_to_sigma(1., 1.), 1.0)
assert np.allclose(ttu.from_chi2_to_sigma(20.0, 2.),
ttu.from_chi2_to_sigma(20.0, 2., 1000))
# test raises:
def test_errors(self):
with self.assertRaises(ValueError):
ttu.from_chi2_to_sigma(-2., 2.)
with self.assertRaises(ValueError):
ttu.from_chi2_to_sigma(2., -2.)
###############################################################################
class test_KL_decomposition(unittest.TestCase):
def setUp(self):
np.random.seed(0)
# generate two random positive matrices:
self.mat_1 = np.random.rand(10)
self.mat_2 = np.random.rand(10)
self.mat_1 = ttu.vector_to_PDM(self.mat_1)
self.mat_2 = ttu.vector_to_PDM(self.mat_2)
# test values:
def test_values(self):
# test with random matrices:
ttu.KL_decomposition(self.mat_1, self.mat_2)
# test that, if the second matrix is the identity then this is equal to eigenvalues:
kl_eig, kl_eigv = ttu.KL_decomposition(self.mat_1, np.identity(self.mat_2.shape[0]))
eig, eigv = np.linalg.eigh(self.mat_1)
assert np.allclose(eig, kl_eig)
assert np.allclose(eigv, kl_eigv)
# test raises:
def test_errors(self):
d = 10
wrong_mat = np.random.rand(d, d)
right_mat = ttu.vector_to_PDM(np.random.rand(d*(d+1)//2))
with self.assertRaises(ValueError):
ttu.KL_decomposition(right_mat, wrong_mat)
###############################################################################
class test_QR_inverse(unittest.TestCase):
def setUp(self):
d = 10
self.mat = ttu.vector_to_PDM(np.random.rand(d*(d+1)//2))
# test values:
def test_values(self):
assert np.allclose(np.linalg.inv(self.mat), ttu.QR_inverse(self.mat))
###############################################################################
class test_clopper_pearson_binomial_trial(unittest.TestCase):
def setUp(self):
pass
# test values:
def test_values(self):
low, high = ttu.clopper_pearson_binomial_trial(1., 2.)
# test raises:
def test_errors(self):
pass
###############################################################################
class test_PDM_vectorization(unittest.TestCase):
def setUp(self):
pass
# test values:
def test_values(self):
# generate a random vector between -1, 1 (seeded so reproducible):
np.random.seed(0)
# sweep dimensions from low to medium
for d in range(2, 20):
num = d*(d+1)//2
# get some random matrices:
for i in range(10):
vec = 2.*np.random.rand(num) -1.
# get the corresponding PDM matrix:
mat = ttu.vector_to_PDM(vec)
# check that it is positive definite:
assert np.all(np.linalg.eig(mat)[0] > 0)
# transform back. This can be different from the previous one
# because of many discrete symmetries in defining the
# eigenvectors
vec2 = ttu.PDM_to_vector(mat)
# transform again. This has to be equal, discrete symmetries for
# eigenvectors do not matter once they are paired with eigenvalues:
mat2 = ttu.vector_to_PDM(vec2)
assert np.allclose(mat, mat2)
# test raises:
def test_errors(self):
pass
###############################################################################
if __name__ == '__main__':
unittest.main(verbosity=2)
| 5,405 | 31.371257 | 92 | py |
tensiometer | tensiometer-master/tensiometer/tests/test_tensor_eigenvalues.py | ###############################################################################
# initial imports:
import unittest
import tensiometer.tensor_eigenvalues as te
import os
import numpy as np
###############################################################################
class test_utilities(unittest.TestCase):
def setUp(self):
pass
def test_random_symm_tensor(self):
assert te.random_symm_tensor(d=2, m=2).shape == (2, 2)
assert te.random_symm_tensor(d=2, m=4).shape == (2, 2, 2, 2)
assert te.random_symm_tensor(d=4, m=2).shape == (4, 4)
assert te.random_symm_tensor(d=4, m=4).shape == (4, 4, 4, 4)
temp = te.random_symm_tensor(d=8, m=2)
assert np.allclose(temp, temp.T)
temp = te.random_symm_tensor(d=8, m=4)
assert np.allclose(temp, temp.T)
def test_random_symm_positive_tensor(self):
assert te.random_symm_positive_tensor(d=2, m=2).shape == (2, 2)
assert te.random_symm_positive_tensor(d=2, m=4).shape == (2, 2, 2, 2)
assert te.random_symm_positive_tensor(d=4, m=2).shape == (4, 4)
assert te.random_symm_positive_tensor(d=4, m=4).shape == (4, 4, 4, 4)
temp = te.random_symm_positive_tensor(d=8, m=2)
assert np.allclose(temp, temp.T)
temp = te.random_symm_positive_tensor(d=8, m=4)
assert np.allclose(temp, temp.T)
def test_identity_tensor(self):
assert te.identity_tensor(d=2, m=2).shape == (2, 2)
###############################################################################
if __name__ == '__main__':
unittest.main(verbosity=2)
| 1,612 | 32.604167 | 79 | py |
tensiometer | tensiometer-master/tensiometer/tests/test_chains_convergence.py | ###############################################################################
# initial imports:
import unittest
import tensiometer.chains_convergence as conv
import tensiometer.utilities as ttu
from getdist import loadMCSamples
import os
import numpy as np
###############################################################################
class test_convergence(unittest.TestCase):
def setUp(self):
# get path:
self.here = os.path.dirname(os.path.abspath(__file__))
# get chains:
self.chain = loadMCSamples(self.here+'/../../test_chains/DES')
# test standard Gelman Rubin for multiple chains:
def test_GR_test(self):
res1 = conv.GR_test(self.chain)
res2 = conv.GR_test(ttu.get_separate_mcsamples(self.chain))
assert np.allclose(res1[0], res2[0]) and np.allclose(res1[1], res2[1])
res3 = conv.GR_test(ttu.get_separate_mcsamples(self.chain),
param_names=self.chain.getParamNames().getRunningNames())
assert np.allclose(res1[0], res3[0]) and np.allclose(res1[1], res3[1])
# test standard Gelman Rubin for two chains:
def test_GR_test_two_chains(self):
res2 = conv.GR_test(ttu.get_separate_mcsamples(self.chain)[:2])
res3 = conv.GR_test(ttu.get_separate_mcsamples(self.chain)[:2],
param_names=self.chain.getParamNames().getRunningNames())
assert np.allclose(res2[0], res3[0]) and np.allclose(res2[1], res3[1])
# test higher moments test:
def test_GRn_test(self):
kwargs = {}
print(conv.GRn_test(self.chain, n=2, param_names=None, feedback=2,
optimizer='ParticleSwarm', **kwargs))
print(conv.GRn_test(self.chain, n=3, param_names=None, feedback=2,
optimizer='ParticleSwarm', **kwargs))
print(conv.GRn_test(self.chain, n=2, param_names=None, feedback=2,
optimizer='TrustRegions', **kwargs))
# test higher moments test with two chains:
def test_GRn_test_two_chains(self):
kwargs = {}
print(conv.GRn_test(ttu.get_separate_mcsamples(self.chain)[:2], n=2, param_names=None, feedback=0,
optimizer='ParticleSwarm', **kwargs))
def test_errors(self):
#self.assertRaises(TypeError, conv.GR_test('test'))
#self.assertRaises(TypeError, conv.GR_test(['test']))
#self.assertRaises(ValueError, conv.GR_test([]))
pass
###############################################################################
if __name__ == '__main__':
unittest.main(verbosity=2)
| 2,635 | 38.343284 | 106 | py |
tensiometer | tensiometer-master/tensiometer/tests/test_mcmc_tension_flow.py | ###############################################################################
# initial imports:
import unittest
import tensiometer.mcmc_tension.param_diff as pd
import tensiometer.mcmc_tension.flow as mt
import tensiometer.utilities as tut
from getdist import loadMCSamples
import os
import numpy as np
###############################################################################
class test_mcmc_shift(unittest.TestCase):
def setUp(self):
# get path:
self.here = os.path.dirname(os.path.abspath(__file__))
# get chains:
self.chain_1 = loadMCSamples(self.here+'/../../test_chains/DES')
self.chain_2 = loadMCSamples(self.here+'/../../test_chains/Planck18TTTEEE')
self.chain_12 = loadMCSamples(self.here+'/../../test_chains/Planck18TTTEEE_DES')
self.chain_prior = loadMCSamples(self.here+'/../../test_chains/prior')
# thin the chain:
self.chain_1.getConvergeTests()
self.chain_2.getConvergeTests()
self.chain_12.getConvergeTests()
self.chain_prior.getConvergeTests()
self.chain_1.weighted_thin(int(self.chain_1.indep_thin))
self.chain_2.weighted_thin(int(self.chain_2.indep_thin))
self.chain_12.weighted_thin(int(self.chain_12.indep_thin))
self.chain_prior.weighted_thin(int(self.chain_prior.indep_thin))
# get difference chain:
self.diff_chain = pd.parameter_diff_chain(self.chain_1,
self.chain_2,
boost=1)
# test that the MAF can be initialized and trained for few epochs:
def test_flow_runs(self):
diff_flow_callback = mt.DiffFlowCallback(self.diff_chain, feedback=0)
# Train model
diff_flow_callback.train(epochs=5)
# Compute tension
diff_flow_callback.estimate_shift(tol=1.0, max_iter=10)
###############################################################################
if __name__ == '__main__':
unittest.main(verbosity=2)
| 2,043 | 36.163636 | 88 | py |
tensiometer | tensiometer-master/tensiometer/tests/test_gaussian_tension.py | ###############################################################################
# initial imports:
import unittest
import tensiometer.gaussian_tension as gt
import os
import numpy as np
from getdist.gaussian_mixtures import GaussianND
from getdist import loadMCSamples
###############################################################################
# initial setup common for all tests:
def setup_test(type):
# define two Gaussian distributions:
type.n1 = 2
type.n2 = 3
type.mean1 = 1.*np.ones(type.n1)
type.mean2 = 2.*np.ones(type.n2)
type.cov1 = 1.*np.diag(np.ones(type.n1))
type.cov2 = 2.*np.diag(np.ones(type.n2))
type.param_names1 = ['p'+str(i) for i in range(type.n1)]
type.param_names2 = ['p'+str(i) for i in range(type.n2)]
type.Gaussian1 = GaussianND(type.mean1, type.cov1,
names=type.param_names1)
type.Gaussian2 = GaussianND(type.mean2, type.cov2,
names=type.param_names2)
type.chain_1 = type.Gaussian1.MCSamples(1000)
type.chain_2 = type.Gaussian2.MCSamples(1000)
# define the prior:
type.GaussianPrior = GaussianND(type.mean2, 10.*type.cov2,
names=type.param_names2)
type.prior_chain = type.GaussianPrior.MCSamples(1000)
###############################################################################
class test_helpers(unittest.TestCase):
def setUp(self):
setup_test(self)
def test_helpers(self):
assert self.chain_1.getParamNames().getRunningNames() == \
gt._check_param_names(self.chain_1, param_names=None)
assert gt._check_param_names(self.chain_1, param_names=['p1']) \
== ['p1']
gt._check_chain_type(self.chain_1)
def test_errors(self):
with self.assertRaises(ValueError):
gt._check_param_names(self.chain_1, param_names=['test'])
with self.assertRaises(TypeError):
gt._check_chain_type(self.Gaussian1)
###############################################################################
class test_utilities(unittest.TestCase):
def setUp(self):
setup_test(self)
def test_get_prior_covariance(self):
self.chain_1.setRanges({'p0': [0., 1.0],
'p1': [0., 1.0]})
gt.get_prior_covariance(self.chain_1)
gt.get_prior_covariance(self.chain_2)
def test_get_Neff(self):
assert np.allclose(gt.get_Neff(self.chain_1), 2.0)
gt.get_Neff(self.chain_1, prior_chain=self.prior_chain)
assert np.allclose(gt.get_Neff(self.chain_1, param_names=['p1']), 1.0)
assert np.allclose(gt.get_Neff(self.chain_1, prior_factor=1.0), 2.0)
def test_gaussian_approximation(self):
gt.gaussian_approximation(self.chain_1)
gt.gaussian_approximation(self.chain_1, param_names=['p1'])
self.chain_1.label = 'chain_1'
temp = gt.gaussian_approximation(self.chain_1)
assert temp.label == 'Gaussian '+self.chain_1.label
self.chain_1.label = None
self.chain_1.name_tag = 'chain_1'
temp = gt.gaussian_approximation(self.chain_1)
assert temp.label == 'Gaussian_'+self.chain_1.name_tag
###############################################################################
if __name__ == '__main__':
unittest.main(verbosity=2)
| 3,369 | 34.473684 | 79 | py |
tensiometer | tensiometer-master/tensiometer/tests/test_cosmosis_interface.py | ###############################################################################
# initial imports:
import unittest
import tensiometer.cosmosis_interface as ci
import os
###############################################################################
class test_cosmosis_interface(unittest.TestCase):
def setUp(self):
# get path:
self.here = os.path.dirname(os.path.abspath(__file__))
# chain dir:
self.chain_dir = self.here+'/../../test_chains/'
def test_MCSamplesFromCosmosis(self):
# import the chain:
chain_name = self.chain_dir+'DES_multinest_cosmosis'
chain = ci.MCSamplesFromCosmosis(chain_name)
###############################################################################
if __name__ == '__main__':
unittest.main(verbosity=2)
| 815 | 24.5 | 79 | py |
tensiometer | tensiometer-master/tensiometer/tests/test_mcmc_tension_kde.py | ###############################################################################
# initial imports:
import unittest
import tensiometer.mcmc_tension.param_diff as pd
import tensiometer.mcmc_tension.kde as mt
import tensiometer.utilities as tut
from getdist import loadMCSamples
import os
import numpy as np
###############################################################################
class test_mcmc_shift(unittest.TestCase):
def setUp(self):
# get path:
self.here = os.path.dirname(os.path.abspath(__file__))
# get chains:
self.chain_1 = loadMCSamples(self.here+'/../../test_chains/DES')
self.chain_2 = loadMCSamples(self.here+'/../../test_chains/Planck18TTTEEE')
self.chain_12 = loadMCSamples(self.here+'/../../test_chains/Planck18TTTEEE_DES')
self.chain_prior = loadMCSamples(self.here+'/../../test_chains/prior')
# thin the chain:
self.chain_1.getConvergeTests()
self.chain_2.getConvergeTests()
self.chain_12.getConvergeTests()
self.chain_prior.getConvergeTests()
self.chain_1.weighted_thin(int(self.chain_1.indep_thin))
self.chain_2.weighted_thin(int(self.chain_2.indep_thin))
self.chain_12.weighted_thin(int(self.chain_12.indep_thin))
self.chain_prior.weighted_thin(int(self.chain_prior.indep_thin))
# get difference chain:
self.diff_chain = pd.parameter_diff_chain(self.chain_1,
self.chain_2,
boost=1)
# test that different exact methods give the same result:
def test_kde_shift(self):
# get brute force resuls:
res_1 = mt.kde_parameter_shift(self.diff_chain,
method='brute_force',
scale=0.5)
# get nearest elimination results:
res_2 = mt.kde_parameter_shift(self.diff_chain,
method='neighbor_elimination',
scale=0.5)
print(res_1, res_2)
assert np.allclose(res_1, res_2)
# now with high feedback:
res_3 = mt.kde_parameter_shift(self.diff_chain,
method='brute_force',
feedback=2,
scale=0.5)
assert np.allclose(res_1, res_3)
res_4 = mt.kde_parameter_shift(self.diff_chain,
method='neighbor_elimination',
feedback=2,
scale=0.5)
print(res_3, res_4)
assert np.allclose(res_2, res_4)
assert np.allclose(res_3, res_4)
# now with given parameter names:
param_names = ['delta_omegam', 'delta_sigma8']
res_1 = mt.kde_parameter_shift(self.diff_chain,
param_names=param_names,
method='brute_force',
scale=0.5)
# get nearest elimination results:
res_2 = mt.kde_parameter_shift(self.diff_chain,
param_names=param_names,
method='neighbor_elimination',
scale=0.5)
print(res_1, res_2)
assert np.allclose(res_1, res_2)
# test band selectors:
def test_band(self):
# prepare:
n, d = self.diff_chain.samples.shape
weights = self.diff_chain.weights
wtot = np.sum(weights)
neff = wtot**2 / np.sum(weights**2)
# compute bands:
mt.Scotts_bandwidth(d, neff)
mt.AMISE_bandwidth(d, neff)
mt.MAX_bandwidth(d, neff)
mt.MISE_bandwidth_1d(d, neff)
mt.MISE_bandwidth(d, neff)
# whiten samples:
white_samples = tut.whiten_samples(self.diff_chain.samples, weights)
mt.UCV_bandwidth(weights, white_samples, mode='1d', feedback=1)
mt.UCV_SP_bandwidth(white_samples, weights, near=1, near_max=20, feedback=1)
# test FFT methods in 1 and 2d:
def test_fft_shift(self):
# test FFT in 1d:
param_names = ['delta_sigma8']
mt.kde_parameter_shift_1D_fft(self.diff_chain, param_names=param_names, feedback=2)
# test FFT in 2d:
param_names = ['delta_omegam', 'delta_sigma8']
mt.kde_parameter_shift_2D_fft(self.diff_chain, param_names=param_names, feedback=2)
# test ball and ellipse estimators:
def test_ball_kde(self):
res_1 = mt.kde_parameter_shift(self.diff_chain,
method='brute_force',
feedback=2,
scale='BALL')
res_2 = mt.kde_parameter_shift(self.diff_chain,
method='neighbor_elimination',
feedback=2,
scale='BALL')
assert np.allclose(res_1, res_2)
res_1 = mt.kde_parameter_shift(self.diff_chain,
method='brute_force',
feedback=2,
scale='ELL')
res_2 = mt.kde_parameter_shift(self.diff_chain,
method='neighbor_elimination',
feedback=2,
scale='ELL')
assert np.allclose(res_1, res_2)
###############################################################################
if __name__ == '__main__':
unittest.main(verbosity=2)
| 5,748 | 41.585185 | 91 | py |
tensiometer | tensiometer-master/tensiometer/mcmc_tension/flow.py | """
"""
###############################################################################
# initial imports and set-up:
import os
import time
import gc
from numba import jit
import numpy as np
import getdist.chains as gchains
gchains.print_load_details = False
from getdist import MCSamples, WeightedSamples
import scipy
from scipy.linalg import sqrtm
from scipy.integrate import simps
from scipy.spatial import cKDTree
import scipy.stats
import pickle
from collections.abc import Iterable
from matplotlib import pyplot as plt
from .. import utilities as utils
from .. import gaussian_tension
try:
import tensorflow as tf
import tensorflow_probability as tfp
tfb = tfp.bijectors
tfd = tfp.distributions
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input
from tensorflow.keras.callbacks import Callback
HAS_FLOW = True
except Exception as e:
print("Could not import tensorflow or tensorflow_probability: ", e)
Callback = object
HAS_FLOW = False
try:
from IPython.display import clear_output, set_matplotlib_formats
except ModuleNotFoundError:
pass
###############################################################################
# helper class to build a masked-autoregressive flow:
class SimpleMAF(object):
"""
A class to implement a simple Masked AutoRegressive Flow (MAF) using the implementation :class:`tfp.bijectors.AutoregressiveNetwork` from from `Tensorflow Probability <https://www.tensorflow.org/probability/>`_. Additionally, this class provides utilities to load/save models, including random permutations.
:param num_params: number of parameters, ie the dimension of the space of which the bijector is defined.
:type num_params: int
:param n_maf: number of MAFs to stack. Defaults to None, in which case it is set to `2*num_params`.
:type n_maf: int, optional
:param hidden_units: a list of the number of nodes per hidden layers. Defaults to None, in which case it is set to `[num_params*2]*2`.
:type hidden_units: list, optional
:param permutations: whether to use shuffle dimensions between stacked MAFs, defaults to True.
:type permutations: bool, optional
:param activation: activation function to use in all layers, defaults to :func:`tf.math.asinh`.
:type activation: optional
:param kernel_initializer: kernel initializer, defaults to 'glorot_uniform'.
:type kernel_initializer: str, optional
:param feedback: print the model architecture, defaults to 0.
:type feedback: int, optional
:reference: George Papamakarios, Theo Pavlakou, Iain Murray (2017). Masked Autoregressive Flow for Density Estimation. `arXiv:1705.07057 <https://arxiv.org/abs/1705.07057>`_
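    Example (a minimal usage sketch, assuming tensorflow and
    tensorflow_probability are available; the architecture choices below are
    arbitrary):
    .. code-block:: python
        import numpy as np
        from tensiometer.mcmc_tension import flow
        maf = flow.SimpleMAF(num_params=2, n_maf=2, hidden_units=[8, 8])
        # map a batch of gaussianized samples through the bijector:
        z = np.zeros((5, 2), dtype=np.float32)
        y = maf.bijector.forward(z)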
"""
def __init__(self, num_params, n_maf=None, hidden_units=None, permutations=True, activation=tf.math.asinh, kernel_initializer='glorot_uniform', feedback=0, **kwargs):
if n_maf is None:
n_maf = 2*num_params
event_shape = (num_params,)
if hidden_units is None:
hidden_units = [num_params*2]*2
if permutations is None:
_permutations = False
elif isinstance(permutations, Iterable):
assert len(permutations) == n_maf
_permutations = permutations
elif isinstance(permutations, bool):
if permutations:
_permutations = [np.random.permutation(num_params) for _ in range(n_maf)]
else:
_permutations = False
self.permutations = _permutations
# Build transformed distribution
bijectors = []
for i in range(n_maf):
if _permutations:
bijectors.append(tfb.Permute(_permutations[i].astype(np.int32)))
made = tfb.AutoregressiveNetwork(params=2, event_shape=event_shape, hidden_units=hidden_units, activation=activation, kernel_initializer=kernel_initializer, **kwargs)
bijectors.append(tfb.MaskedAutoregressiveFlow(shift_and_log_scale_fn=made))
self.bijector = tfb.Chain(bijectors)
if feedback > 0:
print("Building MAF")
print(" - number of MAFs:", n_maf)
print(" - activation:", activation)
print(" - hidden_units:", hidden_units)
def save(self, path):
"""
Save a `SimpleMAF` object.
:param path: path of the directory where to save.
:type path: str
"""
checkpoint = tf.train.Checkpoint(bijector=self.bijector)
checkpoint.write(path)
pickle.dump(self.permutations, open(path+'_permutations.pickle', 'wb'))
@classmethod
def load(cls, num_params, path, **kwargs):
"""
Load a saved `SimpleMAF` object. The number of parameters and all other keyword arguments (except for `permutations`) must be included as the MAF is first created with random weights and then these weights are restored.
:param num_params: number of parameters, ie the dimension of the space of which the bijector is defined.
:type num_params: int
:param path: path of the directory from which to load.
:type path: str
:return: a :class:`~.SimpleMAF`.
"""
permutations = pickle.load(open(path+'_permutations.pickle', 'rb'))
maf = SimpleMAF(num_params=num_params, permutations=permutations, **kwargs)
checkpoint = tf.train.Checkpoint(bijector=maf.bijector)
checkpoint.read(path)
return maf
###############################################################################
# main class to compute NF-based tension:
class DiffFlowCallback(Callback):
"""
A class to compute the normalizing flow estimate of the probability of a parameter shift given an input parameter difference chain.
    A normalizing flow is trained to approximate the difference distribution and then used to numerically evaluate the probability of a parameter shift (see REF). To do so, it defines a bijective mapping that is optimized to gaussianize the difference chain samples. This mapping is performed in two steps, using the gaussian approximation as pre-whitening. The notations used in the code are:
* `X` designates samples in the original parameter difference space;
* `Y` designates samples in the gaussian approximation space, `Y` is obtained by shifting and scaling `X` by its mean and covariance (like a PCA);
* `Z` designates samples in the gaussianized space, connected to `Y` with a normalizing flow denoted `Z2Y_bijector`.
The user may provide the `Z2Y_bijector` as a :class:`~tfp.bijectors.Bijector` object from `Tensorflow Probability <https://www.tensorflow.org/probability/>`_ or make use of the utility class :class:`~.SimpleMAF` to instantiate a Masked Autoregressive Flow (with `Z2Y_bijector='MAF'`).
This class derives from :class:`~tf.keras.callbacks.Callback` from Keras, which allows for visualization during training. The normalizing flows (X->Y->Z) are implemented as :class:`~tfp.bijectors.Bijector` objects and encapsulated in a Keras :class:`~tf.keras.Model`.
Here is an example:
.. code-block:: python
# Initialize the flow and model
diff_flow_callback = DiffFlowCallback(diff_chain, Z2Y_bijector='MAF')
# Train the model
diff_flow_callback.train()
# Compute the shift probability and confidence interval
p, p_low, p_high = diff_flow_callback.estimate_shift_significance()
:param diff_chain: input parameter difference chain.
:type diff_chain: :class:`~getdist.mcsamples.MCSamples`
:param param_names: parameter names of the parameters to be used
in the calculation. By default all running parameters.
:type param_names: list, optional
:param Z2Y_bijector: either a :class:`~tfp.bijectors.Bijector` object
representing the mapping from `Z` to `Y`, or 'MAF' to instantiate a :class:`~.SimpleMAF`, defaults to 'MAF'.
:type Z2Y_bijector: optional
:param pregauss_bijector: not implemented yet, defaults to None.
:type pregauss_bijector: optional
:param learning_rate: initial learning rate, defaults to 1e-3.
:type learning_rate: float, optional
:param feedback: feedback level, defaults to 1.
:type feedback: int, optional
:param validation_split: fraction of samples to use for the validation sample, defaults to 0.1
:type validation_split: float, optional
:param early_stop_nsigma: absolute error on the tension at the zero-shift point to be used
as an approximate convergence criterion for early stopping, defaults to 0.
:type early_stop_nsigma: float, optional
:param early_stop_patience: minimum number of epochs to use when `early_stop_nsigma` is non-zero, defaults to 10.
:type early_stop_patience: int, optional
:raises NotImplementedError: if `pregauss_bijector` is not None.
:reference: George Papamakarios, Theo Pavlakou, Iain Murray (2017). Masked Autoregressive Flow for Density Estimation. `arXiv:1705.07057 <https://arxiv.org/abs/1705.07057>`_
"""
def __init__(self, diff_chain, param_names=None, Z2Y_bijector='MAF', pregauss_bijector=None, learning_rate=1e-3, feedback=1, validation_split=0.1, early_stop_nsigma=0., early_stop_patience=10, **kwargs):
self.feedback = feedback
# Chain
self._init_diff_chain(diff_chain, param_names=param_names, validation_split=validation_split)
# Transformed distribution
self._init_transf_dist(Z2Y_bijector, learning_rate=learning_rate, **kwargs)
if feedback > 0:
print("Building flow")
print(" - trainable parameters:", self.model.count_params())
# Metrics
keys = ["loss", "val_loss", "shift0_chi2", "shift0_pval", "shift0_nsigma", "chi2Z_ks", "chi2Z_ks_p"]
self.log = {_k: [] for _k in keys}
self.chi2Y = np.sum(self.Y_test**2, axis=1)
self.chi2Y_ks, self.chi2Y_ks_p = scipy.stats.kstest(self.chi2Y, 'chi2', args=(self.num_params,))
# Options
self.early_stop_nsigma = early_stop_nsigma
self.early_stop_patience = early_stop_patience
# Pre-gaussianization
if pregauss_bijector is not None:
# The idea is to introduce yet another step of deterministic gaussianization, eg using the prior CDF
# or double prior (convolved with itself, eg a triangular distribution)
raise NotImplementedError
def _init_diff_chain(self, diff_chain, param_names=None, validation_split=0.1):
# initialize param names:
if param_names is None:
param_names = diff_chain.getParamNames().getRunningNames()
else:
chain_params = diff_chain.getParamNames().list()
if not np.all([name in chain_params for name in param_names]):
raise ValueError('Input parameter is not in the diff chain.\n',
'Input parameters ', param_names, '\n'
'Possible parameters', chain_params)
# indexes:
ind = [diff_chain.index[name] for name in param_names]
self.num_params = len(ind)
# Gaussian approximation (full chain)
mcsamples_gaussian_approx = gaussian_tension.gaussian_approximation(diff_chain, param_names=param_names)
self.dist_gaussian_approx = tfd.MultivariateNormalTriL(loc=mcsamples_gaussian_approx.means[0].astype(np.float32), scale_tril=tf.linalg.cholesky(mcsamples_gaussian_approx.covs[0].astype(np.float32)))
self.Y2X_bijector = self.dist_gaussian_approx.bijector
# Samples
# Split training/test
n = diff_chain.samples.shape[0]
indices = np.random.permutation(n)
n_split = int(validation_split*n)
test_idx, training_idx = indices[:n_split], indices[n_split:]
# Training
self.X = diff_chain.samples[training_idx, :][:, ind]
self.weights = diff_chain.weights[training_idx]
self.weights *= len(self.weights) / np.sum(self.weights) # weights normalized to number of samples
self.has_weights = np.any(self.weights != self.weights[0])
self.Y = np.array(self.Y2X_bijector.inverse(self.X.astype(np.float32)))
assert not np.any(np.isnan(self.Y))
self.num_samples = len(self.X)
# Test
self.X_test = diff_chain.samples[test_idx, :][:, ind]
self.Y_test = np.array(self.Y2X_bijector.inverse(self.X_test.astype(np.float32)))
self.weights_test = diff_chain.weights[test_idx]
self.weights_test *= len(self.weights_test) / np.sum(self.weights_test) # weights normalized to number of samples
# Training sample generator
Y_ds = tf.data.Dataset.from_tensor_slices((self.Y.astype(np.float32), # input
np.zeros(self.num_samples, dtype=np.float32), # output (dummy zero)
self.weights.astype(np.float32),)) # weights
Y_ds = Y_ds.prefetch(tf.data.experimental.AUTOTUNE).cache()
self.Y_ds = Y_ds.shuffle(self.num_samples, reshuffle_each_iteration=True).repeat()
if self.feedback:
print("Building training/test samples")
if self.has_weights:
print(" - {}/{} training/test samples and non-uniform weights.".format(self.num_samples, self.X_test.shape[0]))
else:
print(" - {}/{} training/test samples and uniform weights.".format(self.num_samples, self.X_test.shape[0]))
def _init_transf_dist(self, Z2Y_bijector, learning_rate=1e-4, **kwargs):
# Model
if Z2Y_bijector == 'MAF':
self.MAF = SimpleMAF(self.num_params, feedback=self.feedback, **kwargs)
Z2Y_bijector = self.MAF.bijector
assert isinstance(Z2Y_bijector, tfp.bijectors.Bijector)
# Bijector and transformed distribution
self.Z2Y_bijector = Z2Y_bijector
self.dist_transformed = tfd.TransformedDistribution(distribution=tfd.MultivariateNormalDiag(np.zeros(self.num_params, dtype=np.float32), np.ones(self.num_params, dtype=np.float32)), bijector=Z2Y_bijector)
# Full bijector
self.Z2X_bijector = tfb.Chain([self.Y2X_bijector, self.Z2Y_bijector])
# Full distribution
self.dist_learned = tfd.TransformedDistribution(distribution=tfd.MultivariateNormalDiag(np.zeros(self.num_params, dtype=np.float32), np.ones(self.num_params, dtype=np.float32)), bijector=self.Z2X_bijector) # samples from std gaussian mapped to X
# Construct model
x_ = Input(shape=(self.num_params,), dtype=tf.float32)
log_prob_ = self.dist_transformed.log_prob(x_)
self.model = Model(x_, log_prob_)
loss = lambda _, log_prob: -log_prob
self.model.compile(optimizer=tf.optimizers.Adam(learning_rate=learning_rate), loss=loss)
def train(self, epochs=100, batch_size=None, steps_per_epoch=None, callbacks=[], verbose=1, **kwargs):
"""
        Train the normalizing flow model. Internally, this runs the fit method of the Keras :class:`~tf.keras.Model`, to which ``**kwargs`` are passed.
:param epochs: number of training epochs, defaults to 100.
:type epochs: int, optional
:param batch_size: number of samples per batch, defaults to None. If None, the training sample is divided into `steps_per_epoch` batches.
:type batch_size: int, optional
:param steps_per_epoch: number of steps per epoch, defaults to None. If None and `batch_size` is also None, then `steps_per_epoch` is set to 100.
:type steps_per_epoch: int, optional
:param callbacks: a list of additional Keras callbacks, such as :class:`~tf.keras.callbacks.ReduceLROnPlateau`, defaults to [].
:type callbacks: list, optional
:param verbose: verbosity level, defaults to 1.
:type verbose: int, optional
:return: A :class:`~tf.keras.callbacks.History` object. Its `history` attribute is a dictionary of training and validation loss values and metrics values at successive epochs: `"shift0_chi2"` is the squared norm of the zero-shift point in the gaussianized space, with the probability-to-exceed and corresponding tension in `"shift0_pval"` and `"shift0_nsigma"`; `"chi2Z_ks"` and `"chi2Z_ks_p"` contain the :math:`D_n` statistic and probability-to-exceed of the Kolmogorov-Smironov test that squared norms of the transformed samples `Z` are :math:`\\chi^2` distributed (with a number of degrees of freedom equal to the number of parameters).
"""
# We're trying to loop through the full sample each epoch
if batch_size is None:
if steps_per_epoch is None:
steps_per_epoch = 100
batch_size = int(self.num_samples/steps_per_epoch)
else:
if steps_per_epoch is None:
steps_per_epoch = int(self.num_samples/batch_size)
# Run !
hist = self.model.fit(x=self.Y_ds.batch(batch_size),
batch_size=batch_size,
epochs=epochs,
steps_per_epoch=steps_per_epoch,
validation_data=(self.Y_test, np.zeros(len(self.Y_test), dtype=np.float32), self.weights_test),
verbose=verbose,
callbacks=[tf.keras.callbacks.TerminateOnNaN(), self]+callbacks,
**kwargs)
return hist
def estimate_shift(self, tol=0.05, max_iter=1000, step=100000):
"""
Compute the normalizing flow estimate of the probability of a parameter shift given the input parameter difference chain. This is done with a Monte Carlo estimate by comparing the probability density at the zero-shift point to that at samples drawn from the normalizing flow approximation of the distribution.
:param tol: absolute tolerance on the shift significance, defaults to 0.05.
:type tol: float, optional
:param max_iter: maximum number of sampling steps, defaults to 1000.
:type max_iter: int, optional
:param step: number of samples per step, defaults to 100000.
:type step: int, optional
:return: probability value and error estimate.
"""
err = np.inf
counter = max_iter
_thres = self.dist_learned.log_prob(np.zeros(self.num_params, dtype=np.float32))
_num_filtered = 0
_num_samples = 0
while err > tol and counter >= 0:
counter -= 1
_s = self.dist_learned.sample(step)
_s_prob = self.dist_learned.log_prob(_s)
_t = np.array(_s_prob > _thres)
_num_filtered += np.sum(_t)
_num_samples += step
_P = float(_num_filtered)/float(_num_samples)
_low, _upper = utils.clopper_pearson_binomial_trial(float(_num_filtered),
float(_num_samples),
alpha=0.32)
err = np.abs(utils.from_confidence_to_sigma(_upper)-utils.from_confidence_to_sigma(_low))
return _P, _low, _upper
def _compute_shift_proba(self):
zero = np.array(self.Z2X_bijector.inverse(np.zeros(self.num_params, dtype=np.float32)))
chi2Z0 = np.sum(zero**2)
pval = scipy.stats.chi2.cdf(chi2Z0, df=self.num_params)
nsigma = utils.from_confidence_to_sigma(pval)
return zero, chi2Z0, pval, nsigma
def _plot_loss(self, ax, logs={}):
self.log["loss"].append(logs.get('loss'))
self.log["val_loss"].append(logs.get('val_loss'))
if ax is not None:
ax.plot(self.log["loss"], label='Training')
ax.plot(self.log["val_loss"], label='Testing')
ax.set_title("Training Loss")
ax.set_xlabel("Epoch #")
ax.set_ylabel("Loss")
ax.legend()
def _plot_shift_proba(self, ax, logs={}):
# Compute chi2 at zero shift
zero, chi2Z0, pval, nsigma = self._compute_shift_proba()
self.log["shift0_chi2"].append(chi2Z0)
self.log["shift0_pval"].append(pval)
self.log["shift0_nsigma"].append(nsigma)
# Plot
if ax is not None:
ax.plot(self.log["shift0_chi2"])
ax.set_title(r"$\chi^2$ at zero-shift")
ax.set_xlabel("Epoch #")
ax.set_ylabel(r"$\chi^2$")
def _plot_chi2_dist(self, ax, logs={}):
# Compute chi2 and make sure some are finite
chi2Z = np.sum(np.array(self.Z2Y_bijector.inverse(self.Y_test))**2, axis=1)
_s = np.isfinite(chi2Z)
assert np.any(_s)
chi2Z = chi2Z[_s]
# Run KS test
try:
# Note that scipy.stats.kstest does not handle weights yet so we need to resample.
if self.has_weights:
chi2Z = np.random.choice(chi2Z, size=len(chi2Z), replace=True, p=self.weights_test[_s]/np.sum(self.weights_test[_s]))
chi2Z_ks, chi2Z_ks_p = scipy.stats.kstest(chi2Z, 'chi2', args=(self.num_params,))
        except Exception:
chi2Z_ks, chi2Z_ks_p = 0., 0.
self.log["chi2Z_ks"].append(chi2Z_ks)
self.log["chi2Z_ks_p"].append(chi2Z_ks_p)
xx = np.linspace(0, self.num_params*4, 1000)
bins = np.linspace(0, self.num_params*4, 100)
# Plot
if ax is not None:
ax.plot(xx, scipy.stats.chi2.pdf(xx, df=self.num_params), label='$\\chi^2_{{{}}}$ PDF'.format(self.num_params), c='k', lw=1)
ax.hist(self.chi2Y, bins=bins, density=True, histtype='step', weights=self.weights_test, label='Pre-gauss ($D_n$={:.3f})'.format(self.chi2Y_ks))
ax.hist(chi2Z, bins=bins, density=True, histtype='step', weights=self.weights_test[_s], label='Post-gauss ($D_n$={:.3f})'.format(chi2Z_ks))
ax.set_title(r'$\chi^2_{{{}}}$ PDF'.format(self.num_params))
ax.set_xlabel(r'$\chi^2$')
ax.legend(fontsize=8)
def _plot_chi2_ks_p(self, ax, logs={}):
# Plot
if ax is not None:
ln1 = ax.plot(self.log["chi2Z_ks_p"], label='$p$')
ax.set_title(r"KS test ($\chi^2$)")
ax.set_xlabel("Epoch #")
ax.set_ylabel(r"$p$-value")
ax2 = ax.twinx()
ln2 = ax2.plot(self.log["chi2Z_ks"], ls='--', label='$D_n$')
            ax2.set_ylabel(r'$D_n$')
lns = ln1+ln2
labs = [l.get_label() for l in lns]
ax2.legend(lns, labs, loc=1)
def on_epoch_end(self, epoch, logs={}):
"""
This method is used by Keras to show progress during training if `feedback` is True.
"""
if self.feedback:
if isinstance(self.feedback, int):
if epoch % self.feedback:
return
clear_output(wait=True)
fig, axes = plt.subplots(1, 4, figsize=(16, 3))
else:
axes = [None]*4
self._plot_loss(axes[0], logs=logs)
self._plot_shift_proba(axes[1], logs=logs)
self._plot_chi2_dist(axes[2], logs=logs)
self._plot_chi2_ks_p(axes[3], logs=logs)
for k in self.log.keys():
logs[k] = self.log[k][-1]
if self.early_stop_nsigma > 0.:
if len(self.log["shift0_nsigma"]) > self.early_stop_patience and \
np.std(self.log["shift0_nsigma"][-self.early_stop_patience:]) < self.early_stop_nsigma and \
self.log["chi2Z_ks_p"][-1] > 1e-6:
self.model.stop_training = True
if self.feedback:
plt.tight_layout()
plt.show()
return fig
###############################################################################
# helper function to compute tension with default MAF:
def flow_parameter_shift(diff_chain, param_names=None, epochs=100, batch_size=None, steps_per_epoch=None, callbacks=[], verbose=1, tol=0.05, max_iter=1000, step=100000, **kwargs):
"""
Wrapper function to compute a normalizing flow estimate of the probability of a parameter shift given the input parameter difference chain with a standard MAF. It creates a :class:`~.DiffFlowCallback` object with a :class:`~.SimpleMAF` model (to which kwargs are passed), trains the model and returns the estimated shift probability.
:param diff_chain: input parameter difference chain.
:type diff_chain: :class:`~getdist.mcsamples.MCSamples`
:param param_names: parameter names of the parameters to be used
in the calculation. By default all running parameters.
:type param_names: list, optional
:param epochs: number of training epochs, defaults to 100.
:type epochs: int, optional
:param batch_size: number of samples per batch, defaults to None. If None, the training sample is divided into `steps_per_epoch` batches.
:type batch_size: int, optional
:param steps_per_epoch: number of steps per epoch, defaults to None. If None and `batch_size` is also None, then `steps_per_epoch` is set to 100.
:type steps_per_epoch: int, optional
:param callbacks: a list of additional Keras callbacks, such as :class:`~tf.keras.callbacks.ReduceLROnPlateau`, defaults to [].
:type callbacks: list, optional
:param verbose: verbosity level, defaults to 1.
:type verbose: int, optional
:param tol: absolute tolerance on the shift significance, defaults to 0.05.
:type tol: float, optional
:param max_iter: maximum number of sampling steps, defaults to 1000.
:type max_iter: int, optional
:param step: number of samples per step, defaults to 100000.
:type step: int, optional
:return: probability value and error estimate.
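    Example (an illustrative sketch; ``diff_chain`` is assumed to be a parameter
    difference chain, e.g. obtained with
    :func:`tensiometer.mcmc_tension.param_diff.parameter_diff_chain`):
    .. code-block:: python
        p, p_low, p_high = flow_parameter_shift(diff_chain, epochs=50)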
"""
# Callback/model handler
diff_flow_callback = DiffFlowCallback(diff_chain, param_names=param_names, **kwargs)
# Train model
diff_flow_callback.train(epochs=epochs, batch_size=batch_size, steps_per_epoch=steps_per_epoch, callbacks=callbacks, verbose=verbose)
# Compute tension
return diff_flow_callback.estimate_shift(tol=tol, max_iter=max_iter, step=step)
| 26,170 | 49.040153 | 648 | py |
tensiometer | tensiometer-master/tensiometer/mcmc_tension/kde.py | """
"""
"""
For test purposes:
from getdist import loadMCSamples, MCSamples, WeightedSamples
chain_1 = loadMCSamples('./test_chains/DES')
chain_2 = loadMCSamples('./test_chains/Planck18TTTEEE')
chain_12 = loadMCSamples('./test_chains/Planck18TTTEEE_DES')
chain_prior = loadMCSamples('./test_chains/prior')
import tensiometer.utilities as utils
import matplotlib.pyplot as plt
diff_chain = parameter_diff_chain(chain_1, chain_2, boost=1)
num_params, num_samples = diff_chain.samples.T.shape
param_names = None
scale = None
method = 'brute_force'
feedback=2
n_threads = 1
"""
###############################################################################
# initial imports and set-up:
import os
import time
import gc
from numba import jit
import numpy as np
import getdist.chains as gchains
gchains.print_load_details = False
from getdist import MCSamples, WeightedSamples
import scipy
from scipy.linalg import sqrtm
from scipy.integrate import simps
from scipy.spatial import cKDTree
from .. import utilities as utils
# imports for parallel calculations:
import multiprocessing
import joblib
# number of threads available:
if 'OMP_NUM_THREADS' in os.environ.keys():
n_threads = int(os.environ['OMP_NUM_THREADS'])
else:
n_threads = multiprocessing.cpu_count()
###############################################################################
# KDE bandwidth selection:
def Scotts_bandwidth(num_params, num_samples):
"""
Compute Scott's rule of thumb bandwidth covariance scaling.
This should be a fast approximation of the 1d MISE estimate.
:param num_params: the number of parameters in the chain.
:param num_samples: the number of samples in the chain.
:return: Scott's scaling matrix.
:reference: Chacón, J. E., Duong, T. (2018).
Multivariate Kernel Smoothing and Its Applications.
United States: CRC Press.
"""
return num_samples**(-2./(num_params+4.)) * np.identity(int(num_params))
def AMISE_bandwidth(num_params, num_samples):
"""
    Compute Silverman's rule of thumb AMISE bandwidth covariance scaling.
This is the default scaling that is used to compute the KDE estimate of
parameter shifts.
:param num_params: the number of parameters in the chain.
:param num_samples: the number of samples in the chain.
:return: AMISE bandwidth matrix.
:reference: Chacón, J. E., Duong, T. (2018).
Multivariate Kernel Smoothing and Its Applications.
United States: CRC Press.
"""
coeff = (num_samples * (num_params + 2.) / 4.)**(-2. / (num_params + 4.))
return coeff * np.identity(int(num_params))
def MAX_bandwidth(num_params, num_samples):
"""
Compute the maximum bandwidth matrix.
This bandwidth is generally oversmoothing.
:param num_params: the number of parameters in the chain.
:param num_samples: the number of samples in the chain.
:return: MAX bandwidth matrix.
:reference: Chacón, J. E., Duong, T. (2018).
Multivariate Kernel Smoothing and Its Applications.
United States: CRC Press.
"""
d, n = num_params, num_samples
coeff = (d + 8.)**((d + 6.) / (d + 4.)) / 4.
coeff = coeff*(1./n/(d + 2.)/scipy.special.gamma(d/2. + 4))**(2./(d + 4.))
return coeff*np.identity(int(num_params))
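###############################################################################
# Minimal usage sketch for the rule of thumb bandwidths above. The dimension
# and number of samples below are hypothetical and the function is not called
# on import; all three estimators return a covariance scaling matrix
# proportional to the identity.
def _example_rule_of_thumb_bandwidths(num_params=2, num_samples=1000):
    """
    Illustrative sketch: compare the Scott, AMISE and MAX covariance scalings
    for a hypothetical chain size.
    """
    scott = Scotts_bandwidth(num_params, num_samples)
    amise = AMISE_bandwidth(num_params, num_samples)
    # MAX is generally oversmoothing, as noted in its docstring:
    max_band = MAX_bandwidth(num_params, num_samples)
    return scott, amise, max_band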
@jit(nopython=True, fastmath=True)
def _mise1d_optimizer(alpha, d, n):
"""
Utility function that is minimized to obtain the MISE 1d bandwidth.
"""
tmp = 2**(-d/2.) - 2/(2 + alpha)**(d/2.) + (2 + 2*alpha)**(-d/2.) \
+ (alpha**(-d/2.) - (1 + alpha)**(-d/2.))/(2**(d/2.)*n)
return tmp
@jit(nopython=True, fastmath=True)
def _mise1d_optimizer_jac(alpha, d, n):
"""
Jacobian of the MISE 1d bandwidth optimizer.
"""
tmp = d*(2 + alpha)**(-1 - d/2.) - d*(2 + 2*alpha)**(-1 - d/2.) \
+ (2**(-1 - d/2.)*d*(-alpha**(-1 - d/2.)
+ (1 + alpha)**(-1 - d/2.)))/n
return tmp
def MISE_bandwidth_1d(num_params, num_samples, **kwargs):
"""
Computes the MISE bandwidth matrix. All coordinates are considered the same
so the MISE estimate just rescales the identity matrix.
:param num_params: the number of parameters in the chain.
:param num_samples: the number of samples in the chain.
:param kwargs: optional arguments to be passed to the optimizer algorithm.
:return: MISE 1d bandwidth matrix.
:reference: Chacón, J. E., Duong, T. (2018).
Multivariate Kernel Smoothing and Its Applications.
United States: CRC Press.
"""
# initial calculations:
alpha0 = kwargs.pop('alpha0', None)
if alpha0 is None:
alpha0 = AMISE_bandwidth(num_params, num_samples)[0, 0]
d, n = num_params, num_samples
# explicit optimization:
opt = scipy.optimize.minimize(lambda alpha:
_mise1d_optimizer(np.exp(alpha), d, n),
np.log(alpha0),
jac=lambda alpha:
_mise1d_optimizer_jac(np.exp(alpha), d, n),
**kwargs)
# check for success:
if not opt.success:
print(opt)
#
return np.exp(opt.x[0]) * np.identity(num_params)
@jit(nopython=True, fastmath=True)
def _mise_optimizer(H, d, n):
"""
Optimizer function to compute the MISE over the space of SPD matrices.
"""
Id = np.identity(d)
tmp = 1./np.sqrt(np.linalg.det(2.*H))/n
tmp = tmp + (1.-1./n)/np.sqrt(np.linalg.det(2.*H + 2.*Id)) \
- 2./np.sqrt(np.linalg.det(H + 2.*Id)) + np.power(2., -d/2.)
return tmp
def MISE_bandwidth(num_params, num_samples, feedback=0, **kwargs):
"""
Computes the MISE bandwidth matrix by numerically minimizing the MISE
over the space of positive definite symmetric matrices.
:param num_params: the number of parameters in the chain.
:param num_samples: the number of samples in the chain.
:param feedback: feedback level. If > 2 prints a lot of information.
:param kwargs: optional arguments to be passed to the optimizer algorithm.
:return: MISE bandwidth matrix.
:reference: Chacón, J. E., Duong, T. (2018).
Multivariate Kernel Smoothing and Its Applications.
United States: CRC Press.
"""
# initial calculations:
alpha0 = kwargs.pop('alpha0', None)
if alpha0 is None:
alpha0 = MISE_bandwidth_1d(num_params, num_samples)
alpha0 = utils.PDM_to_vector(alpha0)
d, n = num_params, num_samples
# build a constraint:
bounds = kwargs.pop('bounds', None)
if bounds is None:
bounds = np.array([[None, None] for i in range(d*(d+1)//2)])
bounds[np.tril_indices(d, 0)[0] == np.tril_indices(d, 0)[1]] = [alpha0[0]/100, alpha0[0]*100]
# explicit optimization:
opt = scipy.optimize.minimize(lambda x: _mise_optimizer(utils.vector_to_PDM(x), d, n),
x0=alpha0, bounds=bounds, **kwargs)
# check for success:
if not opt.success or feedback > 2:
print('MISE_bandwidth')
print(opt)
#
return utils.vector_to_PDM(opt.x)
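###############################################################################
# Minimal usage sketch for the MISE estimators above. The sizes below are
# hypothetical and the function is not called on import; the 1d version
# returns a rescaled identity while the full version optimizes over the
# space of symmetric positive definite matrices.
def _example_mise_bandwidths(num_params=3, num_samples=5000):
    """
    Illustrative sketch: compute the 1d and full MISE bandwidth matrices for
    a hypothetical chain size.
    """
    band_1d = MISE_bandwidth_1d(num_params, num_samples)
    band_full = MISE_bandwidth(num_params, num_samples)
    return band_1d, band_full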
@jit(nopython=True, fastmath=True, parallel=True)
def _UCV_optimizer_brute_force(H, weights, white_samples):
"""
Optimizer for the cross validation bandwidth estimator.
    This does the computation with a brute force algorithm that scales as
:math:`n_{\\rm samples}^2`. For this reason this is really never used.
Note this solves for sqrt(H).
"""
# digest:
n, d = white_samples.shape
fac = 2**(-d/2.)
# compute the weights vectors:
wtot = np.sum(weights)
neff = wtot**2 / np.sum(weights**2)
alpha = wtot / (wtot - weights)
# compute determinant:
detH = np.linalg.det(H)
# whiten samples with inverse H:
samps = white_samples.dot(np.linalg.inv(H))
# brute force summation:
res = 0.
for i in range(1, n):
for j in range(i):
temp_samp = samps[i]-samps[j]
r2 = np.dot(temp_samp, temp_samp)
temp = fac*np.exp(-0.25*r2) - 2.*alpha[i]*np.exp(-0.5*r2)
res += weights[i]*weights[j]*temp
res = 2. * res / wtot**2
#
return (fac/neff + res)/detH
def _UCV_optimizer_nearest(H, weights, white_samples, n_nearest=20):
"""
Optimizer for the cross validation bandwidth estimator.
    This does the computation using a truncated KD-tree, keeping only a limited
number of nearest neighbours.
Note this solves for sqrt(H).
This is the algorithm that is always used in practice.
"""
# digest:
n, d = white_samples.shape
fac = 2**(-d/2.)
# compute the weights vectors:
wtot = np.sum(weights)
neff = wtot**2 / np.sum(weights**2)
alpha = wtot / (wtot - weights)
# compute determinant:
detH = np.linalg.det(H)
# whiten samples with inverse H:
samps = white_samples.dot(np.linalg.inv(H))
# KD-tree computation:
data_tree = cKDTree(samps, balanced_tree=True)
# query for nearest neighbour:
r2, idx = data_tree.query(samps, np.arange(2, n_nearest), workers=-1)
r2 = np.square(r2)
temp = weights[:, None]*weights[idx]*(fac*np.exp(-0.25*r2)
- 2.*np.exp(-0.5*r2)*alpha[:, None])
res = np.sum(temp) / wtot**2
#
return (fac/neff + res)/detH
def UCV_bandwidth(weights, white_samples, alpha0=None, feedback=0, mode='full', **kwargs):
"""
Computes the optimal unbiased cross validation bandwidth for the input samples
by numerical minimization.
:param weights: input sample weights.
:param white_samples: pre-whitened samples (identity covariance)
:param alpha0: (optional) initial guess for the bandwidth. If none is
given then the AMISE band is used as the starting point for minimization.
:param feedback: (optional) how verbose is the algorithm. Default is zero.
:param mode: (optional) selects the space for minimization. Default is
over the full space of SPD matrices. Other options are `diag` to perform
minimization over diagonal matrices and `1d` to perform minimization
over matrices that are proportional to the identity.
:param kwargs: other arguments passed to :func:`scipy.optimize.minimize`
:return: UCV bandwidth matrix.
:reference: Chacón, J. E., Duong, T. (2018).
Multivariate Kernel Smoothing and Its Applications.
United States: CRC Press.
"""
# digest input:
n, d = white_samples.shape
n_nearest = kwargs.pop('n_nearest', 20)
# get number of effective samples:
wtot = np.sum(weights)
neff = wtot**2 / np.sum(weights**2)
# initial guess calculations:
t0 = time.time()
if alpha0 is None:
alpha0 = AMISE_bandwidth(d, neff)
# select mode:
if mode == '1d':
opt = scipy.optimize.minimize(lambda alpha: _UCV_optimizer_nearest(np.sqrt(np.exp(alpha)) * np.identity(d), weights, white_samples, n_nearest),
np.log(alpha0[0, 0]), **kwargs)
res = np.exp(opt.x[0]) * np.identity(d)
elif mode == 'diag':
opt = scipy.optimize.minimize(lambda alpha: _UCV_optimizer_nearest(np.diag(np.sqrt(np.exp(alpha))), weights, white_samples, n_nearest),
x0=np.log(np.diag(alpha0)), **kwargs)
res = np.diag(np.exp(opt.x))
elif mode == 'full':
# build a constraint:
bounds = kwargs.pop('bounds', None)
if bounds is None:
bounds = np.array([[None, None] for i in range(d*(d+1)//2)])
bounds[np.tril_indices(d, 0)[0] == np.tril_indices(d, 0)[1]] = [alpha0[0, 0]/10, alpha0[0, 0]*10]
# explicit optimization:
alpha0 = utils.PDM_to_vector(sqrtm(alpha0))
opt = scipy.optimize.minimize(lambda alpha: _UCV_optimizer_nearest(utils.vector_to_PDM(alpha), weights, white_samples, n_nearest),
x0=alpha0, bounds=bounds, **kwargs)
res = utils.vector_to_PDM(opt.x)
res = np.dot(res, res)
# check for success and final feedback:
if not opt.success or feedback > 2:
print(opt)
if feedback > 0:
t1 = time.time()
print('Time taken for UCV_bandwidth '+mode+' calculation:',
round(t1-t0, 1), '(s)')
#
return res
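###############################################################################
# Minimal usage sketch for the UCV estimator above. The synthetic whitened
# samples and uniform weights are hypothetical and the function is not called
# on import.
def _example_ucv_bandwidth(num_samples=1000, num_params=2):
    """
    Illustrative sketch: cross validation bandwidth for synthetic
    unit-covariance Gaussian samples with uniform weights.
    """
    white_samples = np.random.normal(size=(num_samples, num_params))
    weights = np.ones(num_samples)
    # the `1d` mode optimizes a single scaling of the identity matrix:
    return UCV_bandwidth(weights, white_samples, mode='1d')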
def UCV_SP_bandwidth(white_samples, weights, feedback=0, near=1, near_max=20):
"""
Computes the optimal unbiased cross validation bandwidth scaling for the
BALL sampling point KDE estimator.
:param white_samples: pre-whitened samples (identity covariance).
:param weights: input sample weights.
:param feedback: (optional) how verbose is the algorithm. Default is zero.
:param near: (optional) number of nearest neighbour to use. Default is 1.
    :param near_max: (optional) number of nearest neighbour to use for the UCV calculation. Default is 20.
    :return: the :func:`scipy.optimize.minimize` result; the optimal bandwidth
        scaling is stored in its `x` attribute.
    """
# digest input:
n, d = white_samples.shape
fac = 2**(-d/2.)
t0 = time.time()
# prepare the Tree with the samples:
data_tree = cKDTree(white_samples, balanced_tree=True)
# compute the weights vectors:
wtot = np.sum(weights)
weights2 = weights**2
neff = wtot**2 / np.sum(weights2)
alpha = wtot / (wtot - weights)
# query the Tree for the maximum number of nearest neighbours:
dist, idx = data_tree.query(white_samples, np.arange(2, near_max+1), workers=-1)
r2 = np.square(dist)
# do all sort of precomputations:
R = dist[:, near]
R2 = r2[:, near]
R2s = R2[:, None] + R2[idx]
term_1 = fac*np.sum(weights2/R**d)
weight_term = weights[:, None]*weights[idx]
R2sd = R2s**(-d/2)
Rd = R[:, None]**d
R21 = r2/R2s
R22 = r2/R2[:, None]
alpha_temp = alpha[:, None]
# define helper for minimization:
@jit(nopython=True)
def _helper(gamma):
# compute the i != j sum:
temp = weight_term*(R2sd*gamma**(-d/2)*np.exp(-0.5*R21/gamma) - 2.*alpha_temp/Rd/gamma**d*np.exp(-0.5*R22/gamma))
# sum:
_ucv = term_1/gamma**d + np.sum(temp)
_ucv = _ucv / wtot**2
#
return _ucv
# initial guess:
x0 = AMISE_bandwidth(d, neff)[0, 0]
# call optimizer:
res = scipy.optimize.minimize(lambda x: _helper(np.exp(x)), x0=np.log(x0), method='Nelder-Mead')
res.x = np.exp(res.x)
#
if feedback > 0:
t1 = time.time()
print('Time taken for UCV_SP_bandwidth calculation:',
round(t1-t0, 1), '(s)')
#
return res
def OptimizeBandwidth_1D(diff_chain, param_names=None, num_bins=1000):
"""
Compute an estimate of an optimal bandwidth for covariance scaling as in
GetDist. This is performed on whitened samples (with identity covariance),
in 1D, and then scaled up with a dimensionality correction.
:param diff_chain: :class:`~getdist.mcsamples.MCSamples`
input parameter difference chain
:param param_names: (optional) parameter names of the parameters to be used
in the calculation. By default all running parameters.
:param num_bins: number of bins used for the 1D estimate
:return: scaling vector for the whitened parameters
"""
# initialize param names:
if param_names is None:
param_names = diff_chain.getParamNames().getRunningNames()
else:
chain_params = diff_chain.getParamNames().list()
if not np.all([name in chain_params for name in param_names]):
raise ValueError('Input parameter is not in the diff chain.\n',
'Input parameters ', param_names, '\n'
'Possible parameters', chain_params)
# indexes:
ind = [diff_chain.index[name] for name in param_names]
# some initial calculations:
_samples_cov = diff_chain.cov(pars=param_names)
_num_params = len(ind)
# whiten the samples:
_temp = sqrtm(utils.QR_inverse(_samples_cov))
white_samples = diff_chain.samples[:, ind].dot(_temp)
    # make these samples into an MCSamples object so that we can use
    # GetDist bandwidth optimization:
temp_samples = MCSamples(samples=white_samples,
weights=diff_chain.weights,
ignore_rows=0, sampler=diff_chain.sampler)
# now get optimal band for each parameter:
bands = []
for i in range(_num_params):
# get the parameter:
par = temp_samples._initParamRanges(i, paramConfid=None)
# get the bins:
temp_result = temp_samples._binSamples(temp_samples.samples[:, i],
par, num_bins)
bin_indices, bin_width, binmin, binmax = temp_result
bins = np.bincount(bin_indices, weights=temp_samples.weights,
minlength=num_bins)
# get the optimal smoothing scale:
N_eff = temp_samples._get1DNeff(par, i)
band = temp_samples.getAutoBandwidth1D(bins, par, i, N_eff=N_eff,
mult_bias_correction_order=0,
kernel_order=0) \
* (binmax - binmin)
# correction for dimensionality:
dim_factor = Scotts_bandwidth(_num_params, N_eff)[0, 0]/Scotts_bandwidth(1., N_eff)[0, 0]
#
bands.append(band**2.*dim_factor)
#
return np.array(bands)
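###############################################################################
# Minimal usage sketch for the GetDist-based bandwidth estimate above. The
# synthetic chain is hypothetical and the function is not called on import.
def _example_optimize_bandwidth_1d(num_samples=2000):
    """
    Illustrative sketch: per-parameter (whitened) smoothing scales for a
    synthetic two parameter Gaussian chain.
    """
    samples = np.random.normal(size=(num_samples, 2))
    chain = MCSamples(samples=samples, names=['p_1', 'p_2'],
                      labels=['p_1', 'p_2'], ignore_rows=0)
    return OptimizeBandwidth_1D(chain)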
###############################################################################
# Parameter difference integrals:
def _gauss_kde_logpdf(x, samples, weights):
"""
Utility function to compute the Gaussian log KDE probability at x from
already whitened samples, possibly with weights.
Normalization constants are ignored.
"""
X = x-samples
return scipy.special.logsumexp(-0.5*(X*X).sum(axis=1), b=weights)
def _gauss_ballkde_logpdf(x, samples, weights, distance_weights):
"""
Utility function to compute the Gaussian log KDE probability
with variable ball bandwidth at x from already whitened samples,
possibly with weights. Each element has its own smoothing scale
that is passed as `distance_weights`.
Normalization constants are ignored.
"""
X = x-samples
return scipy.special.logsumexp(-0.5*(X*X).sum(axis=1)/distance_weights**2,
b=weights)
def _gauss_ellkde_logpdf(x, samples, weights, distance_weights):
"""
Utility function to compute the Gaussian log KDE probability
with variable ellipsoid bandwidth at x from already whitened samples,
possibly with weights. Each element has its own smoothing matrix
that is passed as `distance_weights`.
Normalization constants are ignored.
"""
X = x-samples
X = np.einsum('...j,...jk,...k', X, distance_weights, X)
return scipy.special.logsumexp(-0.5*X, b=weights)
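###############################################################################
# Minimal usage sketch for the un-normalized KDE log-density helpers above.
# They all evaluate log sum_i w_i K(x - x_i) on whitened samples (up to
# constant normalization factors); the synthetic data below are hypothetical
# and the function is not called on import.
def _example_gauss_kde_logpdf(num_samples=500, num_params=2):
    """
    Illustrative sketch: evaluate the Gaussian KDE log-density at zero for
    synthetic whitened samples with unit weights.
    """
    samples = np.random.normal(size=(num_samples, num_params))
    weights = np.ones(num_samples)
    return _gauss_kde_logpdf(np.zeros(num_params), samples, weights)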
def _brute_force_kde_param_shift(white_samples, weights, zero_prob,
num_samples, feedback, weights_norm=None,
distance_weights=None):
"""
Brute force parallelized algorithm for parameter shift.
"""
# get feedback:
if feedback > 1:
from tqdm import tqdm
def feedback_helper(x): return tqdm(x, ascii=True)
else:
def feedback_helper(x): return x
# prepare:
if distance_weights is not None:
if len(distance_weights.shape) == 1:
_num_params = white_samples.shape[1]
weights_norm = weights/distance_weights**_num_params
_log_pdf = _gauss_ballkde_logpdf
_args = white_samples, weights_norm, distance_weights
if len(distance_weights.shape) == 3:
_log_pdf = _gauss_ellkde_logpdf
_args = white_samples, weights_norm, distance_weights
else:
_log_pdf = _gauss_kde_logpdf
_args = white_samples, weights
# run:
with joblib.Parallel(n_jobs=n_threads) as parallel:
_kde_eval_pdf = parallel(joblib.delayed(_log_pdf)
(samp, *_args)
for samp in feedback_helper(white_samples))
# filter for probability calculation:
_filter = _kde_eval_pdf > zero_prob
# compute number of filtered elements:
_num_filtered = np.sum(weights[_filter])
#
return _num_filtered
def _neighbor_parameter_shift(white_samples, weights, zero_prob, num_samples,
feedback, weights_norm=None, distance_weights=None, **kwargs):
"""
Parameter shift calculation through neighbour elimination.
"""
# import specific for this function:
if feedback > 1:
from tqdm import tqdm
def feedback_helper(x): return tqdm(x, ascii=True)
else:
def feedback_helper(x): return x
# get options:
stable_cycle = kwargs.get('stable_cycle', 2)
chunk_size = kwargs.get('chunk_size', 40)
smallest_improvement = kwargs.get('smallest_improvement', 1.e-4)
# the tree elimination has to work with probabilities to go incremental:
_zero_prob = np.exp(zero_prob)
# build tree:
if feedback > 1:
print('Building KD-Tree with leafsize =', 10*chunk_size)
data_tree = cKDTree(white_samples, leafsize=10*chunk_size,
balanced_tree=True)
# make sure that the weights are floats:
_weights = weights.astype(float)
# initialize the calculation to zero:
_num_elements = len(_weights)
_kde_eval_pdf = np.zeros(_num_elements)
_filter = np.ones(_num_elements, dtype=bool)
_last_n = 0
_stable_cycle = 0
# loop over the neighbours:
if feedback > 1:
print('Neighbours elimination')
for i in range(_num_elements//chunk_size):
ind_min = chunk_size*i
ind_max = chunk_size*i+chunk_size
_dist, _ind = data_tree.query(white_samples[_filter],
ind_max, workers=-1)
if distance_weights is not None:
if len(distance_weights.shape) == 1:
# BALL case:
_kde_eval_pdf[_filter] += np.sum(
weights_norm[_ind[:, ind_min:ind_max]]
* np.exp(-0.5*np.square(_dist[:, ind_min:ind_max]/distance_weights[_ind[:, ind_min:ind_max]])), axis=1)
if len(distance_weights.shape) == 3:
# ELL case:
X = white_samples[_ind[:, ind_min:ind_max]] - white_samples[_ind[:, 0], np.newaxis, :]
d2 = np.einsum('...j,...jk,...k', X, distance_weights[_ind[:, ind_min:ind_max]], X)
_kde_eval_pdf[_filter] += np.sum(
weights_norm[_ind[:, ind_min:ind_max]] * np.exp(-0.5*d2), axis=1)
else:
# standard case:
_kde_eval_pdf[_filter] += np.sum(
_weights[_ind[:, ind_min:ind_max]]
* np.exp(-0.5*np.square(_dist[:, ind_min:ind_max])), axis=1)
_filter[_filter] = _kde_eval_pdf[_filter] < _zero_prob
_num_filtered = np.sum(_filter)
if feedback > 2:
print('neighbor_elimination: chunk', i+1)
print(' surviving elements', _num_filtered,
'of', _num_elements)
# check if calculation has converged:
_term_check = float(np.abs(_num_filtered-_last_n)) \
/ float(_num_elements) < smallest_improvement
if _term_check and _num_filtered < _num_elements:
_stable_cycle += 1
if _stable_cycle >= stable_cycle:
break
elif not _term_check and _stable_cycle > 0:
_stable_cycle = 0
elif _num_filtered == 0:
break
else:
_last_n = _num_filtered
# clean up memory:
del(data_tree)
# brute force the leftovers:
if feedback > 1:
print('neighbor_elimination: polishing')
# prepare:
if distance_weights is not None:
if len(distance_weights.shape) == 1:
_num_params = white_samples.shape[1]
weights_norm = weights/distance_weights**_num_params
_log_pdf = _gauss_ballkde_logpdf
_args = white_samples, weights_norm, distance_weights
if len(distance_weights.shape) == 3:
_log_pdf = _gauss_ellkde_logpdf
_args = white_samples, weights_norm, distance_weights
else:
_log_pdf = _gauss_kde_logpdf
_args = white_samples, weights
# run:
with joblib.Parallel(n_jobs=n_threads) as parallel:
_kde_eval_pdf[_filter] = parallel(joblib.delayed(_log_pdf)
(samp, *_args)
for samp in feedback_helper(white_samples[_filter]))
_filter[_filter] = _kde_eval_pdf[_filter] < np.log(_zero_prob)
if feedback > 1:
print(' surviving elements', np.sum(_filter),
'of', _num_elements)
# compute number of filtered elements:
_num_filtered = num_samples - np.sum(weights[_filter])
#
return _num_filtered
def kde_parameter_shift_1D_fft(diff_chain, param_names=None,
scale=None, nbins=1024, feedback=1,
boundary_correction_order=1,
mult_bias_correction_order=1,
**kwarks):
"""
Compute the MCMC estimate of the probability of a parameter shift given
    an input parameter difference chain in 1 dimension, using FFT.
This function uses GetDist 1D fft and optimal bandwidth estimates to
perform the MCMC parameter shift integral discussed in
(`Raveri, Zacharegkas and Hu 19 <https://arxiv.org/pdf/1912.04880.pdf>`_).
:param diff_chain: :class:`~getdist.mcsamples.MCSamples`
input parameter difference chain
:param param_names: (optional) parameter names of the parameters to be used
in the calculation. By default all running parameters.
:param scale: (optional) scale for the KDE smoothing.
If none is provided the algorithm uses GetDist optimized bandwidth.
:param nbins: (optional) number of 1D bins for the fft. Powers of 2 work best. Default is 1024.
:param mult_bias_correction_order: (optional) multiplicative bias
correction passed to GetDist.
        See :meth:`~getdist.mcsamples.MCSamples.get1DDensity`.
:param boundary_correction_order: (optional) boundary correction
passed to GetDist.
        See :meth:`~getdist.mcsamples.MCSamples.get1DDensity`.
:param feedback: (optional) print to screen the time taken
for the calculation.
:return: probability value and error estimate.
:reference: `Raveri, Zacharegkas and Hu 19 <https://arxiv.org/pdf/1912.04880.pdf>`_
"""
# initialize param names:
if param_names is None:
param_names = diff_chain.getParamNames().getRunningNames()
else:
chain_params = diff_chain.getParamNames().list()
if not np.all([name in chain_params for name in param_names]):
raise ValueError('Input parameter is not in the diff chain.\n',
'Input parameters ', param_names, '\n'
'Possible parameters', chain_params)
    # check that we only have one parameter:
    if len(param_names) != 1:
        raise ValueError('Calling 1D algorithm with a number of parameters '
                         'different from 1')
# initialize scale:
if scale is None or isinstance(scale, str):
scale = -1
# indexes:
ind = [diff_chain.index[name] for name in param_names]
# compute the density with GetDist:
t0 = time.time()
density = diff_chain.get1DDensity(name=ind[0], normalized=True,
num_bins=nbins,
smooth_scale_1D=scale,
boundary_correction_order=boundary_correction_order,
mult_bias_correction_order=mult_bias_correction_order)
# initialize the spline:
density._initSpline()
# get density of zero:
prob_zero = density.Prob([0.])[0]
# do the MC integral:
probs = density.Prob(diff_chain.samples[:, ind[0]])
# filter:
_filter = probs > prob_zero
# if there are samples above zero then use MC:
if np.sum(_filter) > 0:
_num_filtered = float(np.sum(diff_chain.weights[_filter]))
_num_samples = float(np.sum(diff_chain.weights))
_P = float(_num_filtered)/float(_num_samples)
_low, _upper = utils.clopper_pearson_binomial_trial(_num_filtered,
_num_samples,
alpha=0.32)
# if there are no samples try to do the integral:
else:
norm = simps(density.P, density.x)
_second_filter = density.P < prob_zero
density.P[_second_filter] = 0
_P = simps(density.P, density.x)/norm
_low, _upper = None, None
#
t1 = time.time()
if feedback > 0:
print('Time taken for 1D FFT-KDE calculation:', round(t1-t0, 1), '(s)')
#
return _P, _low, _upper
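###############################################################################
# Minimal usage sketch for the 1D FFT estimator above. The synthetic
# difference chain is hypothetical and the function is not called on import.
def _example_kde_parameter_shift_1d_fft(num_samples=10000):
    """
    Illustrative sketch: shift probability for a synthetic one dimensional
    Gaussian difference chain displaced from zero by one sigma.
    """
    samples = np.random.normal(loc=1.0, scale=1.0, size=(num_samples, 1))
    diff_chain = MCSamples(samples=samples, names=['delta_p'],
                           labels=['\\Delta p'], ignore_rows=0)
    return kde_parameter_shift_1D_fft(diff_chain, feedback=0)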
def kde_parameter_shift_2D_fft(diff_chain, param_names=None,
scale=None, nbins=1024, feedback=1,
boundary_correction_order=1,
mult_bias_correction_order=1,
**kwarks):
"""
Compute the MCMC estimate of the probability of a parameter shift given
    an input parameter difference chain in 2 dimensions, using FFT.
This function uses GetDist 2D fft and optimal bandwidth estimates to
perform the MCMC parameter shift integral discussed in
(`Raveri, Zacharegkas and Hu 19 <https://arxiv.org/pdf/1912.04880.pdf>`_).
:param diff_chain: :class:`~getdist.mcsamples.MCSamples`
input parameter difference chain
:param param_names: (optional) parameter names of the parameters to be used
in the calculation. By default all running parameters.
:param scale: (optional) scale for the KDE smoothing.
If none is provided the algorithm uses GetDist optimized bandwidth.
:param nbins: (optional) number of 2D bins for the fft. Powers of 2 work best. Default is 1024.
:param mult_bias_correction_order: (optional) multiplicative bias
correction passed to GetDist.
See :meth:`~getdist.mcsamples.MCSamples.get2DDensity`.
:param boundary_correction_order: (optional) boundary correction
passed to GetDist.
See :meth:`~getdist.mcsamples.MCSamples.get2DDensity`.
:param feedback: (optional) print to screen the time taken
for the calculation.
:return: probability value and error estimate.
:reference: `Raveri, Zacharegkas and Hu 19 <https://arxiv.org/pdf/1912.04880.pdf>`_
"""
# initialize param names:
if param_names is None:
param_names = diff_chain.getParamNames().getRunningNames()
else:
chain_params = diff_chain.getParamNames().list()
if not np.all([name in chain_params for name in param_names]):
raise ValueError('Input parameter is not in the diff chain.\n',
'Input parameters ', param_names, '\n'
'Possible parameters', chain_params)
# check that we only have two parameters:
if len(param_names) != 2:
        raise ValueError('Calling 2D algorithm with a number of parameters '
                         'different from 2')
# initialize scale:
if scale is None or isinstance(scale, str):
scale = -1
# indexes:
ind = [diff_chain.index[name] for name in param_names]
# compute the density with GetDist:
t0 = time.time()
density = diff_chain.get2DDensity(x=ind[0], y=ind[1], normalized=True,
fine_bins_2D=nbins,
smooth_scale_2D=scale,
boundary_correction_order=boundary_correction_order,
mult_bias_correction_order=mult_bias_correction_order)
# initialize the spline:
density._initSpline()
# get density of zero:
prob_zero = density.spl([0.], [0.])[0][0]
# do the MC integral:
probs = density.spl.ev(diff_chain.samples[:, ind[0]],
diff_chain.samples[:, ind[1]])
# filter:
_filter = probs > prob_zero
# if there are samples above zero then use MC:
if np.sum(_filter) > 0:
_num_filtered = float(np.sum(diff_chain.weights[_filter]))
_num_samples = float(np.sum(diff_chain.weights))
_P = float(_num_filtered)/float(_num_samples)
_low, _upper = utils.clopper_pearson_binomial_trial(_num_filtered,
_num_samples,
alpha=0.32)
# if there are no samples try to do the integral:
else:
norm = simps(simps(density.P, density.y), density.x)
_second_filter = density.P < prob_zero
density.P[_second_filter] = 0
_P = simps(simps(density.P, density.y), density.x)/norm
_low, _upper = None, None
#
t1 = time.time()
if feedback > 0:
print('Time taken for 2D FFT-KDE calculation:', round(t1-t0, 1), '(s)')
#
return _P, _low, _upper
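###############################################################################
# Minimal usage sketch for the 2D FFT estimator above. The synthetic
# difference chain is hypothetical and the function is not called on import.
def _example_kde_parameter_shift_2d_fft(num_samples=10000):
    """
    Illustrative sketch: shift probability for a synthetic two dimensional
    Gaussian difference chain displaced from zero.
    """
    samples = np.random.normal(loc=[1.0, 0.0], scale=1.0,
                               size=(num_samples, 2))
    diff_chain = MCSamples(samples=samples, names=['delta_1', 'delta_2'],
                           labels=['\\Delta_1', '\\Delta_2'], ignore_rows=0)
    return kde_parameter_shift_2D_fft(diff_chain, feedback=0)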
@jit(nopython=True)
def _ell_helper(_ind, _white_samples, _num_params):
"""
Small helper for ellipse smoothing
"""
mats = []
dets = []
for idx in _ind:
temp_samp = _white_samples[idx]
temp_samp = temp_samp[1:, :] - temp_samp[0, :]
mat = np.zeros((_num_params, _num_params))
for v in temp_samp:
mat += np.outer(v, v)
mats.append(np.linalg.inv(mat))
dets.append(np.linalg.det(mat))
return dets, mats
def kde_parameter_shift(diff_chain, param_names=None,
scale=None, method='neighbor_elimination',
feedback=1, **kwargs):
"""
Compute the KDE estimate of the probability of a parameter shift given
an input parameter difference chain.
This function uses a Kernel Density Estimate (KDE) algorithm discussed in
(`Raveri, Zacharegkas and Hu 19 <https://arxiv.org/pdf/1912.04880.pdf>`_).
If the difference chain contains :math:`n_{\\rm samples}` this algorithm
scales as :math:`O(n_{\\rm samples}^2)` and might require long run times.
For this reason the algorithm is parallelized with the
joblib library.
If the problem is 1d or 2d use the fft algorithm in :func:`kde_parameter_shift_1D_fft`
and :func:`kde_parameter_shift_2D_fft`.
:param diff_chain: :class:`~getdist.mcsamples.MCSamples`
input parameter difference chain
:param param_names: (optional) parameter names of the parameters to be used
in the calculation. By default all running parameters.
:param scale: (optional) scale for the KDE smoothing.
        The scale always refers to whitened samples with unit covariance.
If none is provided the algorithm uses MISE estimate.
Options are:
#. a scalar for fixed scaling over all dimensions;
        #. a matrix for anisotropic smoothing;
#. `MISE`, `AMISE`, `MAX` for the corresponding smoothing scale;
#. `BALL` or `ELL` for variable adaptive smoothing with nearest neighbour;
:param method: (optional) a string containing the indication for the method
to use in the KDE calculation. This can be very intensive so different
techniques are provided.
#. method = `brute_force` is a parallelized brute force method. This
method scales as :math:`O(n_{\\rm samples}^2)` and can be afforded
only for small tensions. When suspecting a difference that is
larger than 95% other methods are better.
#. method = `neighbor_elimination` is a KD Tree based elimination method.
For large tensions this scales as
:math:`O(n_{\\rm samples}\\log(n_{\\rm samples}))`
           and in worst case scenarios, with small tensions, this can scale
as :math:`O(n_{\\rm samples}^2)` but with significant overheads
with respect to the brute force method.
When expecting a statistically significant difference in parameters
           this is the recommended algorithm.
           The suggestion is to go with brute force for small problems, neighbor
           elimination for big problems with significant tensions.
Default is `neighbor_elimination`.
:param feedback: (optional) print to screen the time taken
for the calculation.
:param kwargs: extra options to pass to the KDE algorithm.
The `neighbor_elimination` algorithm accepts the following optional
arguments:
#. stable_cycle: (default 2) number of elimination cycles that show
no improvement in the result.
#. chunk_size: (default 40) chunk size for elimination cycles.
           For best performance this parameter should be tuned to result
in the greatest elimination rates.
#. smallest_improvement: (default 1.e-4) minimum percentage improvement
rate before switching to brute force.
#. near: (default 1) n-nearest neighbour to use for variable bandwidth KDE estimators.
#. near_alpha: (default 1.0) scaling for nearest neighbour distance.
:return: probability value and error estimate from binomial.
:reference: `Raveri, Zacharegkas and Hu 19 <https://arxiv.org/pdf/1912.04880.pdf>`_
"""
# initialize param names:
if param_names is None:
param_names = diff_chain.getParamNames().getRunningNames()
else:
chain_params = diff_chain.getParamNames().list()
if not np.all([name in chain_params for name in param_names]):
raise ValueError('Input parameter is not in the diff chain.\n',
'Input parameters ', param_names, '\n'
'Possible parameters', chain_params)
# indexes:
ind = [diff_chain.index[name] for name in param_names]
# some initial calculations:
_num_samples = np.sum(diff_chain.weights)
_num_params = len(ind)
# number of effective samples:
_num_samples_eff = np.sum(diff_chain.weights)**2 / \
np.sum(diff_chain.weights**2)
    # whiten samples:
_white_samples = utils.whiten_samples(diff_chain.samples[:, ind],
diff_chain.weights)
# scale for the kde:
distance_weights = None
weights_norm = None
if (isinstance(scale, str) and scale == 'MISE') or scale is None:
scale = MISE_bandwidth_1d(_num_params, _num_samples_eff, **kwargs)
elif isinstance(scale, str) and scale == 'AMISE':
scale = AMISE_bandwidth(_num_params, _num_samples_eff)
elif isinstance(scale, str) and scale == 'MAX':
scale = MAX_bandwidth(_num_params, _num_samples_eff)
elif isinstance(scale, str) and scale == 'BALL':
near = kwargs.pop('near', 1)
near_alpha = kwargs.pop('near_alpha', 1.0)
data_tree = cKDTree(_white_samples, balanced_tree=True)
_dist, _ind = data_tree.query(_white_samples, near+1, workers=-1)
distance_weights = np.sqrt(near_alpha)*_dist[:, near]
weights_norm = diff_chain.weights/distance_weights**_num_params
del(data_tree)
elif isinstance(scale, str) and scale == 'ELL':
# build tree:
data_tree = cKDTree(_white_samples, balanced_tree=True)
_dist, _ind = data_tree.query(_white_samples, _num_params+1, workers=-1)
del(data_tree)
# compute the covariances:
dets, mats = _ell_helper(_ind, _white_samples, _num_params)
weights_norm = diff_chain.weights/np.sqrt(dets)
distance_weights = np.array(mats)
elif isinstance(scale, int) or isinstance(scale, float):
scale = scale*np.identity(int(_num_params))
elif isinstance(scale, np.ndarray):
if not scale.shape == (_num_params, _num_params):
raise ValueError('Input scaling matrix does not have correct '
+ 'size \n Input shape: '+str(scale.shape)
+ '\nNumber of parameters: '+str(_num_params))
scale = scale
else:
raise ValueError('Unrecognized option for scale')
# feedback:
if feedback > 0:
with np.printoptions(precision=3):
print(f'Dimension : {int(_num_params)}')
print(f'N samples : {int(_num_samples)}')
print(f'Neff samples : {_num_samples_eff:.2f}')
            if not isinstance(scale, str):
                if np.count_nonzero(scale - np.diag(np.diagonal(scale))) == 0:
                    print('Smoothing scale :', np.diag(scale))
                else:
                    print('Smoothing scale :', scale)
            elif scale == 'BALL':
                print('BALL smoothing scale')
            elif scale == 'ELL':
                print('ELL smoothing scale')
# prepare the calculation:
if not isinstance(scale, str):
_kernel_cov = sqrtm(np.linalg.inv(scale))
_white_samples = _white_samples.dot(_kernel_cov)
_log_pdf = _gauss_kde_logpdf
_args = _white_samples, diff_chain.weights
elif scale == 'BALL':
weights_norm = diff_chain.weights/distance_weights**_num_params
_log_pdf = _gauss_ballkde_logpdf
_args = _white_samples, weights_norm, distance_weights
elif scale == 'ELL':
_log_pdf = _gauss_ellkde_logpdf
_args = _white_samples, weights_norm, distance_weights
# probability of zero:
_kde_prob_zero = _log_pdf(np.zeros(_num_params), *_args)
# compute the KDE:
t0 = time.time()
if method == 'brute_force':
_num_filtered = _brute_force_kde_param_shift(_white_samples,
diff_chain.weights,
_kde_prob_zero,
_num_samples,
feedback,
weights_norm=weights_norm,
distance_weights=distance_weights)
elif method == 'neighbor_elimination':
_num_filtered = _neighbor_parameter_shift(_white_samples,
diff_chain.weights,
_kde_prob_zero,
_num_samples,
feedback,
weights_norm=weights_norm,
distance_weights=distance_weights,
**kwargs)
else:
raise ValueError('Unknown method provided:', method)
t1 = time.time()
# clean up:
gc.collect()
# feedback:
if feedback > 0:
print('KDE method:', method)
print('Time taken for KDE calculation:', round(t1-t0, 1), '(s)')
# probability and binomial error estimate:
_P = float(_num_filtered)/float(_num_samples)
_low, _upper = utils.clopper_pearson_binomial_trial(float(_num_filtered),
float(_num_samples),
alpha=0.32)
#
return _P, _low, _upper
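###############################################################################
# Minimal usage sketch for the general KDE estimator above. The synthetic
# difference chain, its dimension and the choice of method are hypothetical
# and the function is not called on import.
def _example_kde_parameter_shift(num_samples=5000):
    """
    Illustrative sketch: shift probability for a synthetic three dimensional
    Gaussian difference chain displaced from zero, using the default MISE
    smoothing scale and neighbor elimination.
    """
    samples = np.random.normal(loc=[1.0, 0.0, 0.0], scale=1.0,
                               size=(num_samples, 3))
    diff_chain = MCSamples(samples=samples,
                           names=['delta_1', 'delta_2', 'delta_3'],
                           labels=['\\Delta_1', '\\Delta_2', '\\Delta_3'],
                           ignore_rows=0)
    return kde_parameter_shift(diff_chain, feedback=0,
                               method='neighbor_elimination')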
| 43,456 | 41.688605 | 151 | py |
tensiometer | tensiometer-master/tensiometer/mcmc_tension/param_diff.py | """
This file contains the functions and utilities to compute the distribution of
parameter differences from the parameter posteriors of two experiments.
"""
"""
For test purposes:
from getdist import loadMCSamples, MCSamples, WeightedSamples
chain_1 = loadMCSamples('./test_chains/DES')
chain_2 = loadMCSamples('./test_chains/Planck18TTTEEE')
chain_12 = loadMCSamples('./test_chains/Planck18TTTEEE_DES')
chain_prior = loadMCSamples('./test_chains/prior')
import tensiometer.utilities as utils
import matplotlib.pyplot as plt
diff_chain = parameter_diff_chain(chain_1, chain_2, boost=1)
num_params, num_samples = diff_chain.samples.T.shape
param_names = None
scale = None
method = 'brute_force'
feedback=2
"""
###############################################################################
# initial imports and set-up:
import os
import numpy as np
import getdist.chains as gchains
gchains.print_load_details = False
from getdist import MCSamples, WeightedSamples
import scipy
from scipy.linalg import sqrtm
from scipy.integrate import simps
from scipy.spatial import cKDTree
from .. import utilities as utils
###############################################################################
# Parameter difference chain:
def parameter_diff_weighted_samples(samples_1, samples_2, boost=1,
indexes_1=None, indexes_2=None):
"""
Compute the parameter differences of two input weighted samples.
The parameters of the difference samples are related to the
parameters of the input samples, :math:`\\theta_1` and
:math:`\\theta_2` by:
.. math:: \\Delta \\theta \\equiv \\theta_1 - \\theta_2
This function does not assume Gaussianity of the chain.
    This function does assume that the parameter determinations from the two
chains (i.e. the underlying data sets) are uncorrelated.
Do not use this function for chains that are correlated.
:param samples_1: :class:`~getdist.chains.WeightedSamples`
first input weighted samples with :math:`n_1` samples.
:param samples_2: :class:`~getdist.chains.WeightedSamples`
second input weighted samples with :math:`n_2` samples.
:param boost: (optional) boost the number of samples in the
difference. By default the length of the difference samples
will be the length of the longest one.
Given two samples the full difference samples can contain
:math:`n_1\\times n_2` samples but this is usually prohibitive
for realistic chains.
    The boost parameter will increase the number of samples to be
:math:`{\\rm boost}\\times {\\rm max}(n_1,n_2)`.
Default boost parameter is one.
If boost is None the full difference chain is going to be computed
(and will likely require a lot of memory and time).
:param indexes_1: (optional) array with the indexes of the parameters to
use for the first samples. By default this tries to use all
parameters.
:param indexes_2: (optional) array with the indexes of the parameters to
use for the second samples. By default this tries to use all
parameters.
:return: :class:`~getdist.chains.WeightedSamples` the instance with the
parameter difference samples.
"""
# test for type, this function assumes that we are working with MCSamples:
if not isinstance(samples_1, WeightedSamples):
raise TypeError('Input samples_1 is not of WeightedSamples type.')
if not isinstance(samples_2, WeightedSamples):
raise TypeError('Input samples_2 is not of WeightedSamples type.')
# get indexes:
if indexes_1 is None:
indexes_1 = np.arange(samples_1.samples.shape[1])
if indexes_2 is None:
indexes_2 = np.arange(samples_2.samples.shape[1])
# check:
if not len(indexes_1) == len(indexes_2):
        raise ValueError('The samples do not contain the same number '
                         'of parameters.')
num_params = len(indexes_1)
# order the chains so that the second chain is always with less points:
if (len(samples_1.weights) >= len(samples_2.weights)):
ch1, ch2 = samples_1, samples_2
sign = +1.
ind1, ind2 = indexes_1, indexes_2
else:
ch1, ch2 = samples_2, samples_1
sign = -1.
ind1, ind2 = indexes_2, indexes_1
# get number of samples:
num_samps_1 = len(ch1.weights)
num_samps_2 = len(ch2.weights)
if boost is None:
sample_boost = num_samps_2
else:
sample_boost = min(boost, num_samps_2)
# create the arrays (these might be big depending on boost level...):
weights = np.empty((num_samps_1*sample_boost))
difference_samples = np.empty((num_samps_1*sample_boost, num_params))
if ch1.loglikes is not None and ch2.loglikes is not None:
loglikes = np.empty((num_samps_1*sample_boost))
else:
loglikes = None
# compute the samples:
for ind in range(sample_boost):
base_ind = int(float(ind)/float(sample_boost)*num_samps_2)
_indexes = range(base_ind, base_ind+num_samps_1)
# compute weights (as the product of the weights):
weights[ind*num_samps_1:(ind+1)*num_samps_1] = \
ch1.weights*np.take(ch2.weights, _indexes, mode='wrap')
# compute the likelihood:
if ch1.loglikes is not None and ch2.loglikes is not None:
loglikes[ind*num_samps_1:(ind+1)*num_samps_1] = \
ch1.loglikes+np.take(ch2.loglikes, _indexes, mode='wrap')
# compute the difference samples:
difference_samples[ind*num_samps_1:(ind+1)*num_samps_1, :] = \
ch1.samples[:, ind1] \
- np.take(ch2.samples[:, ind2], _indexes, axis=0, mode='wrap')
# get additional informations:
if samples_1.name_tag is not None and samples_2.name_tag is not None:
name_tag = samples_1.name_tag+'_diff_'+samples_2.name_tag
else:
name_tag = None
if samples_1.label is not None and samples_2.label is not None:
label = samples_1.label+' diff '+samples_2.label
else:
label = None
    if samples_1.min_weight_ratio is not None and \
       samples_2.min_weight_ratio is not None:
        min_weight_ratio = min(samples_1.min_weight_ratio,
                               samples_2.min_weight_ratio)
    else:
        min_weight_ratio = None
# initialize the weighted samples:
diff_samples = WeightedSamples(ignore_rows=0,
samples=sign*difference_samples,
weights=weights, loglikes=loglikes,
name_tag=name_tag, label=label,
min_weight_ratio=min_weight_ratio)
#
return diff_samples
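###############################################################################
# Minimal usage sketch for the weighted samples difference above. The two
# synthetic samples are hypothetical and the function is not called on import.
def _example_parameter_diff_weighted_samples(num_samples=1000):
    """
    Illustrative sketch: difference of two synthetic WeightedSamples objects
    sharing the same two parameters.
    """
    samples_1 = WeightedSamples(samples=np.random.normal(size=(num_samples, 2)),
                                weights=np.ones(num_samples), ignore_rows=0)
    samples_2 = WeightedSamples(samples=np.random.normal(size=(num_samples, 2)),
                                weights=np.ones(num_samples), ignore_rows=0)
    return parameter_diff_weighted_samples(samples_1, samples_2, boost=1)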
###############################################################################
def parameter_diff_chain(chain_1, chain_2, boost=1):
"""
Compute the chain of the parameter differences between the two input
chains. The parameters of the difference chain are related to the
parameters of the input chains, :math:`\\theta_1` and :math:`\\theta_2` by:
.. math:: \\Delta \\theta \\equiv \\theta_1 - \\theta_2
This function only returns the differences for the parameters that are
common to both chains.
This function preserves the chain separation (if any) so that the
convergence of the difference chain can be tested.
This function does not assume Gaussianity of the chain.
    This function does assume that the parameter determinations from the two
chains (i.e. the underlying data sets) are uncorrelated.
Do not use this function for chains that are correlated.
:param chain_1: :class:`~getdist.mcsamples.MCSamples`
first input chain with :math:`n_1` samples
:param chain_2: :class:`~getdist.mcsamples.MCSamples`
second input chain with :math:`n_2` samples
:param boost: (optional) boost the number of samples in the
difference chain. By default the length of the difference chain
will be the length of the longest chain.
Given two chains the full difference chain can contain
:math:`n_1\\times n_2` samples but this is usually prohibitive
for realistic chains.
    The boost parameter will increase the number of samples to be
:math:`{\\rm boost}\\times {\\rm max}(n_1,n_2)`.
Default boost parameter is one.
If boost is None the full difference chain is going to be computed
(and will likely require a lot of memory and time).
:return: :class:`~getdist.mcsamples.MCSamples` the instance with the
parameter difference chain.
"""
# check input:
if boost is not None:
if boost < 1:
raise ValueError('Minimum boost is 1\n Input value is ', boost)
# test for type, this function assumes that we are working with MCSamples:
if not isinstance(chain_1, MCSamples):
raise TypeError('Input chain_1 is not of MCSamples type.')
if not isinstance(chain_2, MCSamples):
raise TypeError('Input chain_2 is not of MCSamples type.')
# get the parameter names:
param_names_1 = chain_1.getParamNames().list()
param_names_2 = chain_2.getParamNames().list()
# get the common names:
param_names = [_p for _p in param_names_1 if _p in param_names_2]
num_params = len(param_names)
if num_params == 0:
raise ValueError('There are no shared parameters to difference')
# get the names and labels:
diff_param_names = ['delta_'+name for name in param_names]
diff_param_labels = ['\\Delta '+name.label for name in
chain_1.getParamNames().parsWithNames(param_names)]
# get parameter indexes:
indexes_1 = [chain_1.index[name] for name in param_names]
indexes_2 = [chain_2.index[name] for name in param_names]
# get separate chains:
if not hasattr(chain_1, 'chain_offsets'):
_chains_1 = [chain_1]
else:
if chain_1.chain_offsets is None:
_chains_1 = [chain_1]
else:
_chains_1 = chain_1.getSeparateChains()
if not hasattr(chain_2, 'chain_offsets'):
_chains_2 = [chain_2]
else:
if chain_2.chain_offsets is None:
_chains_2 = [chain_2]
else:
_chains_2 = chain_2.getSeparateChains()
# set the boost:
if chain_1.sampler == 'nested' \
or chain_2.sampler == 'nested' or boost is None:
chain_boost = max(len(_chains_1), len(_chains_2))
sample_boost = None
else:
chain_boost = min(boost, max(len(_chains_1), len(_chains_2)))
sample_boost = boost
# get the combinations:
if len(_chains_1) > len(_chains_2):
temp_ind = np.indices((len(_chains_2), len(_chains_1)))
else:
temp_ind = np.indices((len(_chains_1), len(_chains_2)))
ind1 = np.concatenate([np.diagonal(temp_ind, offset=i, axis1=1, axis2=2)[0]
for i in range(chain_boost)])
ind2 = np.concatenate([np.diagonal(temp_ind, offset=i, axis1=1, axis2=2)[1]
for i in range(chain_boost)])
chains_combinations = [[_chains_1[i], _chains_2[j]]
for i, j in zip(ind1, ind2)]
# compute the parameter difference samples:
diff_chain_samples = [parameter_diff_weighted_samples(samp1,
samp2, boost=sample_boost, indexes_1=indexes_1,
indexes_2=indexes_2) for samp1, samp2
in chains_combinations]
# create the samples:
diff_samples = MCSamples(names=diff_param_names, labels=diff_param_labels)
diff_samples.chains = diff_chain_samples
diff_samples.makeSingle()
# get the ranges:
_ranges = {}
for name, _min, _max in zip(diff_param_names,
np.amin(diff_samples.samples, axis=0),
np.amax(diff_samples.samples, axis=0)):
_ranges[name] = [_min, _max]
diff_samples.setRanges(_ranges)
# initialize other things:
if chain_1.name_tag is not None and chain_2.name_tag is not None:
diff_samples.name_tag = chain_1.name_tag+'_diff_'+chain_2.name_tag
# set distinction between base and derived parameters:
_temp = diff_samples.getParamNames().list()
_temp_paramnames = chain_1.getParamNames()
for _nam in diff_samples.getParamNames().parsWithNames(_temp):
_temp_name = _nam.name.replace('delta_', '', 1)
_nam.isDerived = _temp_paramnames.parWithName(_temp_name).isDerived
# update and compute everything:
diff_samples.updateBaseStatistics()
diff_samples.deleteFixedParams()
#
return diff_samples
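###############################################################################
# Minimal usage sketch for the parameter difference chain above. The two
# synthetic chains are hypothetical and the function is not called on import.
def _example_parameter_diff_chain(num_samples=5000):
    """
    Illustrative sketch: difference chain of two synthetic Gaussian chains
    sharing the same two parameters, one of them shifted by half a sigma.
    """
    names, labels = ['p1', 'p2'], ['p_1', 'p_2']
    chain_1 = MCSamples(samples=np.random.normal(size=(num_samples, 2)),
                        names=names, labels=labels, ignore_rows=0)
    chain_2 = MCSamples(samples=np.random.normal(loc=0.5, size=(num_samples, 2)),
                        names=names, labels=labels, ignore_rows=0)
    return parameter_diff_chain(chain_1, chain_2, boost=1)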
| 12,497 | 42.852632 | 79 | py |
tensiometer | tensiometer-master/tensiometer/mcmc_tension/__init__.py | """
This module contains the functions and utilities to compute non-Gaussian
Monte Carlo tension estimators.
The submodule `param_diff` contains the functions and utilities to compute the distribution
of parameter differences from the parameter posterior of two experiments.
The submodule `kde` contains the functions to compute the statistical significance
of a difference in parameters with KDE methods.
This submodule `flow` contains the functions and utilities to compute the statistical significance
of a difference in parameters with normalizing flow methods.
For more details on the method implemented see
`arxiv 1806.04649 <https://arxiv.org/pdf/1806.04649.pdf>`_
and `arxiv 1912.04880 <https://arxiv.org/pdf/1912.04880.pdf>`_.
"""
# parameter difference module import:
from .param_diff import parameter_diff_chain, parameter_diff_weighted_samples
# kde module import:
from .kde import kde_parameter_shift_1D_fft, kde_parameter_shift_2D_fft, kde_parameter_shift
# flow module import:
from .flow import HAS_FLOW
if HAS_FLOW:
from .flow import SimpleMAF, DiffFlowCallback, flow_parameter_shift
| 1,111 | 37.344828 | 98 | py |
tensiometer | tensiometer-master/docs/example_notebooks/pymaxent.py | #!/usr/bin/env python
"""PyMaxEnt.py: Implements a maximum entropy reconstruction of distributions with known moments."""
__author__ = "Tony Saad and Giovanna Ruai"
__copyright__ = "Copyright (c) 2019, Tony Saad"
__credits__ = ["University of Utah Department of Chemical Engineering", "University of Utah UROP office"]
__license__ = "MIT"
__version__ = "1.0.0"
__maintainer__ = "Tony Saad"
__email__ = "[email protected]"
__status__ = "Production"
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve
def moments_c(f, k=0, bnds=[-np.inf, np.inf]):
'''
Creates "k" moments: μ0, μ1, ..., μ(k-1) for a function "f" on the support given by "bnds".
Parameters:
f (function): distribution function **must be in the form of a function**
k (int): integer number of moments to compute. Will evaluate the first k moments of f, μ0, μ1, ..., μ(k-1)
bnds (tuple): boundaries for the integration
Returns:
moments: an array of moments of length "k"
Example:
        μ = moments_c(f, 3, [-1, 1])
'''
def mom(x, k):
return x**k*f(x)
moms = np.zeros(k)
a = bnds[0]
b = bnds[1]
for i in range(0,k):
moms[i] = quad(mom,a,b,args = i)[0]
return moms
def moments_d(f,k,x):
'''
Calculates the first "k" moments: μ0, μ1, ..., μ(k-1) of a discrete distribution "f".
Parameters:
f (array): an array of values for a discrete distribution
k (int): number of moments to compute. Will evaluate the first k moments of f, μ0, μ1, ..., μ(k-1)
x (array): list or array containing the values of the random variable over which the distribution is to be integrated
Returns:
mom: an array of length k containing the moments for the known distribution
'''
moms = []
for i in range(0,k):
xp = np.power(x,i) # compute x^p
xpf = np.dot(xp,f) # compute x^p * f(x)
        moms.append(np.sum(xpf)) # compute moment: sum(x^p * f(x))
return np.array(moms)
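# Minimal usage sketch for moments_d above; illustrative only and not called
# on import. The fair-die probabilities below are hypothetical input data.
def _example_moments_d():
    '''
    Illustrative sketch: the first two moments of a fair six-faced die
    computed from its discrete distribution.
    '''
    x = np.array([1, 2, 3, 4, 5, 6])
    f = np.ones(6)/6.0
    # for a fair die this returns [mu_0, mu_1] = [1.0, 3.5]
    return moments_d(f, 2, x)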
def moments(f, k, rndvar=None, bnds=None):
'''
    Computes the first "k" moments of a function "f" on the support given by "bnds". If "rndvar" is provided, then a discrete distribution is assumed and "f" **must** be a list or array of scalar values.
Parameters:
f (function): distribution function **must be in the form of a function**
k (integer): will evaluate the first k moments of f, μ0, μ1, ..., μ(k-1)
        rndvar (array): optional - designates a list or array of discrete values for a random variable. If rndvar is provided, then the moments will be computed based on a discrete distribution. This means that f must be an array as well.
bnds (tuple): a list of two numbers consisting of the lower and upper bounds of the support
Returns:
moments: an array of moments of length `k`
Example:
        μ = moments(f, 3, bnds=[-1, 1])
'''
if rndvar is not None:
if bnds is not None:
print('WARNING: You specified BOTH x and boundaries. I will assume this is a discrete distribution. If you want to calculate a continuous distribution, please specify bnd ONLY.')
return moments_d(f,k,rndvar)
else:
return moments_c(f,k,bnds)
def integrand(x, lamb, k=0, discrete=False):
'''
Calculates the integrand of the \(k^\mathrm{th}\) moment.
Parameters:
x (array): linear space or set of values for a random variable on which the integrand is applied
lamb (array): an array of Lagrange multipliers used to approximate the distribution
k (integer): a constant representing the order of the moment being calculated
Returns:
integrand: the caclulated portion of the integrand at each x value
'''
neqs = len(lamb)
xi = np.array([x**i for i in range(0, neqs)])
if discrete:
return x**k * np.exp(np.dot(lamb, xi))
else:
return x**k * np.exp(np.dot(lamb, xi))
def residual_d(lamb,x,k,mu):
'''
Calculates the residual of the moment approximation function.
Parameters:
lamb (array): an array of Lagrange constants used to approximate the distribution
        x (array): values of the discrete random variable over which the moments are computed
k (integer): order of the moment
mu (array): an array of the known moments needed to approximate the distribution function
Returns:
rhs: the integrated right hand side of the moment approximation function
'''
l_sum = []
for i in range(0,len(lamb)):
l_sum.append( np.sum(integrand(x,lamb,i,discrete=True)) - mu[i] )
return np.array(l_sum)
def maxent_reconstruct_d(rndvar, mu):
'''
Computes the most likely distribution from the moments given using maximum entropy theorum.
Parameters:
rndvar (array): a list or array of known dependent variables. For example, for a 6-faced die, rndvar=[1,2,3,4,5,6]
mu (array): vector of size m containing the known moments of a distribution. This does NOT assume that μ0 = 1. This vector contains moments μ_k starting with μ_0, μ_1, etc... For example, μ = [1,0,0]
Returns:
        probabilities: vector of the same length as rndvar containing the probabilities for the distribution
lambsol: vector of lagrangian multipliers
'''
lambguess = np.zeros(len(mu))
lambguess[0] = -np.log(np.sqrt(2*np.pi))
k = len(mu)
lambsol = fsolve(residual_d, lambguess, args = (rndvar,k,mu))
    probabilities = integrand(rndvar, lambsol, k=0, discrete=True)
    return probabilities, lambsol
def residual_c(lamb, mu, bnds):
'''
Calculates the residual of the moment approximation function.
Parameters:
lamb (array): an array of Lagrange constants used to approximate the distribution
mu (array): an array of the known moments needed to approximate the distribution function
bnds (tuple): support bounds
Returns:
        rhs: array of residuals of the moment conditions (integrated moments minus the known moments)
'''
a = bnds[0]
b = bnds[1]
neqs = len(lamb)
rhs = np.zeros(neqs)
for k in range(0, neqs):
rhs[k] = quad(integrand, a, b, args=(lamb, k))[0] - mu[k]
return rhs
def maxent_reconstruct_c(mu, bnds=[-np.inf, np.inf]):
'''
Used to construct a continuous distribution from a limited number of known moments(μ). This function applies Maximum Entropy Theory in order to solve for the constraints found in the approximation equation that is given as an output.
Parameters:
μ: vector of size m containing the known moments of a distribution. This does NOT assume that μ0 = 1. This vector contains moments μ_k starting with μ_0, μ_1, etc...
Ex. μ = [1,0,0]
bnds: Support for the integration [a,b]
## It is important the bounds include roughly all non-zero values of the distribution that is being recreated ##
Returns:
        Distribution Function: the reconstructed probability distribution function from the input moment vector (μ); requires a support to be plotted
Example:
>>> f, sol = maxent([1,0,0], [-1,1])
'''
neqs = len(mu)
lambguess = np.zeros(neqs) # initialize guesses
lambguess[0] = -np.log(np.sqrt(2*np.pi)) # set the first initial guess - this seems to work okay
lambsol = fsolve(residual_c, lambguess, args=(mu,bnds), col_deriv=True)
recon = lambda x: integrand(x, lambsol, k=0)
return recon, lambsol
def reconstruct(mu, rndvar=None, bnds=None):
'''
This is the main function call to generate maximum entropy solutions.
Parameters:
mu (array): a list or array of known moments
rndvar (array): optional - a list or array of known dependent variables. For example, for a 6-faced die, rndvar=[1,2,3,4,5,6]. If rndvar is provided, we will assume a discrete reconstruction.
bnds (tuple): a tuple [a,b] containing the bounds or support of the reconstructed solution. This is only required for continuous distributions and will be neglected if rndvar is provided.
Returns:
recon: reconstructed distribution. If continuous, then `recon` is a Python function, `f(x)`. If discrete, then recon is an array of probabilities.
lambsol (array): array containing the lagrangian multipliers
Examples:
### reconstruct a discrete distribution
>>> from pymaxent import *
>>> mu = [1,3.5]
>>> x = [1,2,3,4,5,6]
>>> sol, lambdas = reconstruct(mu,rndvar=x)
### reconstruct a continuous distribution
>>> from pymaxent import *
>>> mu = [1,0,0.04]
>>> sol, lambdas = reconstruct(mu,bnds=[-1,1])
>>> x = np.linspace(-1,1)
>>> plot(x,sol(x))
'''
result = 0
# Discrete case
if rndvar is not None:
rndvar = np.array(rndvar) # convert things to numpy arrays
if bnds is not None:
print('WARNING: You specified BOTH x and boundaries. I will assume this is a discrete distribution. If you want to calculate a continuous distribution, please specify bnd ONLY.')
result = maxent_reconstruct_d(rndvar, mu)
# Continuous case
else:
result = maxent_reconstruct_c(mu, bnds)
return result
| 9,329 | 40.838565 | 237 | py |
tensiometer | tensiometer-master/docs/source/conf.py | # -*- coding: utf-8 -*-
#
# MyProj documentation build configuration file, created by
# sphinx-quickstart on Thu Jun 18 20:57:49 2015.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys
import os
autoclass_content = 'both'
nitpicky = True
nitpick_ignore = [('py:class', 'optional')]
# Prevent spurious errors for every field ivar (not sure why..)
def on_missing_reference(app, env, node, contnode):
if node['reftype'] == 'obj':
return contnode
else:
return None
def setup(app):
app.connect('missing-reference', on_missing_reference)
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('exts'))
here = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, os.path.abspath(here+'/../../'))
import tensiometer
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
needs_sphinx = '1.3'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autodoc', 'sphinx.ext.intersphinx', 'sphinx.ext.viewcode', 'sphinx.ext.autosummary',
'matplotlib.sphinxext.plot_directive'
]
intersphinx_mapping = {'python': ('https://docs.python.org/3', None),
'numpy': ('https://docs.scipy.org/doc/numpy/', None),
'scipy': ('https://docs.scipy.org/doc/scipy/reference/', None),
'matplotlib': ('https://matplotlib.org/', None),
'getdist': ('https://getdist.readthedocs.io/en/latest/', None),
"tensorflow": ("https://www.tensorflow.org/api_docs/python", "https://github.com/GPflow/tensorflow-intersphinx/raw/master/tf2_py_objects.inv"),
"tensorflow_probability": ("https://www.tensorflow.org/probability/api_docs/python", "https://github.com/GPflow/tensorflow-intersphinx/raw/master/tfp_py_objects.inv")}
# plot_formats = [('png', 80)]
plot_html_show_formats = False
plot_html_show_source_link = False
autosummary_generate = True
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'tensiometer'
copyright = u'Marco Raveri'
author = u'Marco Raveri'
version = tensiometer.__version__
release = tensiometer.__version__
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'sphinx_rtd_theme'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
html_theme_options = {
'collapse_navigation': True,
}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
html_extra_path = ['../example_notebooks']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_domain_indices = True
# If false, no index is generated.
# html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Language to be used for generating the HTML full-text search index.
# Sphinx supports the following languages:
# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'
# html_search_language = 'en'
# A dictionary with options for the search language support, empty by default.
# Now only 'ja' uses this config value
# html_search_options = {'type': 'default'}
# The name of a javascript file (relative to the configuration directory) that
# implements a search results scorer. If empty, the default will be used.
# html_search_scorer = 'scorer.js'
# Output file base name for HTML help builder.
htmlhelp_basename = 'TensionToolsDoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
# 'preamble': '',
# Latex figure (float) alignment
# 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'TensionTools.tex', u'TensionTools Documentation',
author, 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# If true, show page references after internal links.
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'TensionTools', u'TensionTools Documentation',
[author], 1)
]
# If true, show URL addresses after external links.
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'TensionTools', u'TensionTools Documentation',
author, 'TensionTools', 'Concordance and discordance analysis.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
# texinfo_appendices = []
# If false, no module index is generated.
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False
| 9,842 | 32.593857 | 190 | py |
Atari-5 | Atari-5-main/atari_util.py | import matplotlib.pyplot as plt
cmap10 = plt.get_cmap('tab10')
cmap20 = plt.get_cmap('tab20')
def color_fade(x, factor=0.5):
if len(x) == 3:
r,g,b = x
a = 1.0
else:
r,g,b,a = x
r = (1*factor+(1-factor)*r)
g = (1*factor+(1-factor)*g)
b = (1*factor+(1-factor)*b)
return (r, g, b, a)
GAME_GENRE = {
'alien': "Maze",
'amidar': "Maze",
'assault': "Fixed shooter",
'asterix': "Action",
'asteroids': "Multidirectional shooter",
'atlantis': "Fixed shooter",
'bankheist': "Maze",
'battlezone': "First-person shooter",
'beamrider': "Fixed shooter",
'berzerk': "Multidirectional shooter",
'bowling': "Sports",
'boxing': "Sports",
'breakout': "Action",
'centipede': "Fixed shooter",
'choppercommand': "Scrolling shooter",
'crazyclimber': "Climbing",
'defender': "Scrolling shooter",
'demonattack': "Fixed shooter",
'doubledunk': "Sports",
'enduro': "Sports", # racing
'fishingderby': "Sports",
'freeway': "Action",
'frostbite': "Action",
    'gopher': "Action", # genre is hard to classify
'gravitar': "Multidirectional shooter",
'hero': "Action",
'icehockey': "Sports",
'jamesbond': "Scrolling shooter",
'kangaroo': "Platform",
'krull': "Action",
'kungfumaster': "Beat 'em up",
'montezumarevenge': "Platform",
'mspacman': "Maze",
'namethisgame': "Action",
'phoenix': "Fixed shooter",
'pitfall': "Platform",
'pong': "Sports",
'privateeye': "Action",
'qbert': "Action",
'riverraid': "Scrolling shooter",
'roadrunner': "Racing",
'robotank': "First-person shooter", # Wiki says Shoot 'em up, but it's clearly the same genre as battlezone.
'seaquest': "Shoot 'em up",
'skiing': "Sports",
'solaris': "Space combat simulator",
'spaceinvaders': "Shoot 'em up",
'stargunner': "Scrolling shooter",
'surround': "Action",
'tennis': "Sports",
'timepilot': "Multidirectional shooter",
'tutankham': "Maze", # action-adventure / maze-shooter
'upndown': "Racing",
'venture': "Action",
'videopinball': "Pinball",
'wizardofwor': "Maze",
'yarsrevenge': "Multidirectional shooter",
'zaxxon': "Scrolling shooter",
}
GENRE_COLOR = {
'Platform': cmap20(0),
'Maze': cmap20(1),
'Racing': cmap20(2),
'Sports': cmap20(3),
'Pinball': cmap20(4),
'Climbing': cmap20(5),
"Shoot 'em up": cmap20(6),
"Beat 'em up": cmap20(7),
'Fixed shooter': cmap20(8),
'Scrolling shooter': cmap20(9),
'First-person shooter': cmap20(10),
'Multidirectional shooter': cmap20(11),
'Space combat simulator': cmap20(12),
'Action': cmap20(13),
}
GENRE_TO_CATEGORY = {
'Platform': 'Maze',
'Maze': 'Maze',
'Racing': 'Sports',
'Sports': 'Sports',
'Pinball': 'Other',
'Climbing': 'Other',
"Shoot 'em up": 'Combat',
"Beat 'em up": 'Combat',
'Fixed shooter': 'Combat',
'Scrolling shooter': 'Combat',
'First-person shooter': 'Combat',
'Multidirectional shooter': 'Combat',
'Space combat simulator': 'Combat',
'Action': 'Action',
}
CATEGORY_COLOR = {
'Combat': cmap10(1),
'Action': cmap10(0),
'Sports': cmap10(2),
'Maze': cmap10(3),
'Other': cmap10(7), # gray
}
CATEGORY_HATCH = {
'Combat': "+++",
'Action': "|||",
'Sports': "---",
'Maze': "...",
'Other': "xxx",
}
# these are the names of the games in the standard 57-game ALE benchmark.
canonical_57 = [
"Alien",
"Amidar",
"Assault",
"Asterix",
"Asteroids",
"Atlantis",
"Bank Heist",
"Battle Zone",
"Beam Rider",
"Berzerk",
"Bowling",
"Boxing",
"Breakout",
"Centipede",
"Chopper Command",
"Crazy Climber",
"Defender",
"Demon Attack",
"Double Dunk",
"Enduro",
"Fishing Derby",
"Freeway",
"Frostbite",
"Gopher",
"Gravitar",
"Hero",
"Ice Hockey",
"James Bond",
"Kangaroo",
"Krull",
"Kung Fu Master",
"Montezuma Revenge",
"Ms Pacman",
"Name This Game",
"Phoenix",
"Pitfall",
"Pong",
"Private Eye",
"QBert",
"Riverraid",
"Road Runner",
"Robotank",
"Seaquest",
"Skiing",
"Solaris",
"Space Invaders",
"Star Gunner",
"Surround",
"Tennis",
"Time Pilot",
"Tutankham",
"Up n Down",
"Venture",
"Video Pinball",
"Wizard of Wor",
"Yars Revenge",
"Zaxxon"
]
printable_57 = canonical_57.copy()
canonical_57 = ["".join(x.split(" ")).lower() for x in canonical_57]
for k, v in CATEGORY_COLOR.items():
CATEGORY_COLOR[k] = color_fade(v, 0.33)
def clean_name(game):
"""
Converts from a print name to a lowercase no spaces name. E.g. "Space Invaders" -> "spaceinvaders"
"""
return "".join(c for c in game.lower() if c in "abcdefghijklmnopqrstuvwxyz")
def print_name(game):
"""
    Converts from a clean name to a print name. E.g. "spaceinvaders" -> "Space Invaders"
"""
    game = clean_name(game) # standardize input
for canonical in printable_57:
if clean_name(canonical) == game:
return canonical
return "Unknown"
def get_game_genre_color(game):
# look up the genre
genre = GAME_GENRE.get(clean_name(game), "none")
if genre in GENRE_COLOR:
return GENRE_COLOR[genre]
else:
print(f"no color for game {game} genre {genre}")
return (0.8, 0.8, 0.8)
def get_game_category_color(game):
# look up the genre
genre = GAME_GENRE.get(clean_name(game), "none")
category = GENRE_TO_CATEGORY[genre]
return CATEGORY_COLOR[category]
def get_game_category_hatch(game):
# look up the genre
genre = GAME_GENRE.get(clean_name(game), "none")
category = GENRE_TO_CATEGORY[genre]
return CATEGORY_HATCH[category]
| 5,851 | 23.082305 | 112 | py |
Atari-5 | Atari-5-main/atari5.py | import numpy as np
import pandas
import pandas as pd
import itertools
import sklearn
import sklearn.linear_model
import statsmodels
import statsmodels.api as sm
import json
import csv
import matplotlib.pyplot as plt
import multiprocessing
import functools
import time
from sklearn.model_selection import cross_val_score
from tqdm.contrib.concurrent import process_map # or thread_map
from tqdm import tqdm
from atari_util import printable_57, canonical_57
# not using an intercept means input scores of 0 -> output scores 0.
USE_INTERCEPT = False
# adds a fix to the riverraid results in the rainbow paper...
USE_RAINBOW_RIVERRAID = True # should be true
USE_UPDATED_A3C = False # Atari-5 paper used scores 'from papers with code', which exclude phoenix;
# including the phoenix scores changes the coefficients only very slightly.
# First return used different settings so is not included, but sometimes I need to load it so that I can evaluate
# performance on it.
INCLUDE_FIRSTRETURN = False
# number of CPU workers to use
PROCESSES = 12
BANNED_ALGOS = {
# these algorithms are not included in the dataset. Reason is given for each one.
"PapersWithCode.Go-Explore", # used 400k max_frames
"PapersWithCode.UCT", # use of emulator as model
"PapersWithCode.Rainbow", # incomplete data, but added via RAINBOW.csv
"PapersWithCode.DQN noop", # this is fine, but I use the version from RAINBOW
"PapersWithCode.Duel noop", # this is fine, but I use the version from RAINBOW
"PapersWithCode.DDQN (tuned) noop", # this is fine, but I use the version from RAINBOW
"SEED.R2D2 Ref", # remove (matched) duplicate
"PapersWithCode.Advantage Learning", # using alternative
"PapersWithCode.Persistent AL", # using alternative
"PapersWithCode.C51", # using alternative
"PapersWithCode.C51 noop", # using alternative
"PapersWithCode.Bootstrapped DQN", # using alternative
"PapersWithCode.QR-DQN-1", # using alternative
}
game_map = {
"montezumasrevenge": "montezumarevenge",
'upanddown': 'upndown',
# these games are ignored
'pooyan': None,
'journeyescape': None,
'elevatoraction': None,
'carnival': None,
'airraid': None,
}
# global vars
algo_scores = None
# also allow just the first word in a multi word game
for game in printable_57:
if " " in game:
game_map[game.split(" ")[0].lower()] = "".join(game.split(' ')).lower()
class bcolors:
HEADER = '\033[95m'
OKBLUE = '\033[94m'
OKCYAN = '\033[96m'
OKGREEN = '\033[92m'
WARNING = '\033[93m'
FAIL = '\033[91m'
ENDC = '\033[0m'
BOLD = '\033[1m'
UNDERLINE = '\033[4m'
class Algorithm():
def __init__(self, name: str, source: str, paper_title: str, paper_year: int):
self.name = name
self.source = source
self.paper = paper_title
self.year = paper_year
self.game_scores = {}
self.log_target = None
@property
def full_name(self):
return self.source+'.'+self.name
def __str__(self):
return self.full_name
@property
def a57_games(self):
return {k: v for k, v in self.game_scores.items() if k in canonical_57}
def games_matched(s1, s2):
"""
Checks if two sets of game scores are similar to each other.
    s1 and s2 are dictionaries mapping from game name to score.
"""
result = 0
for game, score in s1.items():
if game in s2 and abs(s2[game] - score) / (abs(score)+1e-3) < 0.01:
result += 1
return result
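# Illustrative sketch (not called anywhere): made-up score dictionaries that agree, within
# the 1% relative tolerance used above, on two of the three shared games.
def _games_matched_example():
    s1 = {'pong': 20.0, 'breakout': 400.0, 'qbert': 15000.0}
    s2 = {'pong': 20.05, 'breakout': 401.0, 'qbert': 9000.0}
    return games_matched(s1, s2)  # -> 2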
def transform(x):
return np.log10(1+np.clip(x, 0, float('inf')))
def inv_transform(x):
return (10 ** x) - 1
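# Illustrative sketch (not called anywhere): `transform` clips scores to be non-negative and
# maps them onto a log10(1 + x) scale; `inv_transform` undoes it on that range. The example
# values below are made up.
def _transform_example():
    raw = np.array([-50.0, 0.0, 100.0, 1000.0])
    logged = transform(raw)            # negatives clip to 0 before the log
    recovered = inv_transform(logged)  # approximately [0, 0, 100, 1000]
    return logged, recovered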
def clean_up_atari_game_name(name, keep_games=False):
"""
Takes the name of an atari game and returns it's normalized form
"""
name = "".join([c for c in name if c.isalpha()])
name = name.lower()
if name in game_map:
result = game_map[name]
if result is None and keep_games:
return name
else:
return result
else:
return name
def convert_atari_data(path, output_file, algorithm_name_map = None):
"""
Extract data from json file (source from https://github.com/paperswithcode/paperswithcode-data)
Path is to JSON file from paperswithcode
Saves a CSV file containing results
"""
with open(path, 'r') as t:
data = json.load(t)
atari_key = [i for i in range(len(data)) if 'Atari' in data[i]['task']]
assert len(atari_key) == 1 # usually item 407, but this might change...
atari_datasets = data[atari_key[0]]["datasets"]
games = [atari_datasets[i]['dataset'] for i in range(len(atari_datasets))]
print(f'Found {len(games)} games')
def sanitize(s: str):
return "".join(c for c in s if c not in [','])
atari_prefix = 'atari 2600 '
with open(output_file, 'w', newline='', encoding='utf-8') as t:
csv_writer = csv.writer(t, delimiter=',')
csv_writer.writerow(['Algorithm', 'Score', 'Extra Training Data', 'Paper Title', 'Date', 'Game'])
for dataset in atari_datasets:
for row in dataset['sota']['rows']:
if 'Score' not in row['metrics']:
continue
algorithm = row['model_name']
if algorithm_name_map is not None and algorithm in algorithm_name_map:
algorithm = algorithm_name_map[algorithm]
score = sanitize(row['metrics']['Score'])
extra_training_data = row['uses_additional_data'] # for now
paper_title = row['paper_title']
paper_date = row['paper_date']
if paper_date is not None:
paper_date = paper_date[:4] # year is enough
game = dataset['dataset'].lower()
if game.startswith(atari_prefix):
game = game[len(atari_prefix):]
game = clean_up_atari_game_name(game)
if game is None:
continue
if game not in canonical_57:
print(f" -game {game} ignored")
continue
csv_writer.writerow([algorithm, score, extra_training_data, paper_title, paper_date, game])
def count_57_games(algo):
""" Returns number of games within the canonical set of 57 that
this algorithmh has scores for. """
return 57-len(missing_57_games(algo))
def excess_57_games(algo):
""" Returns number of games within the canonical set of 57 that
this algorithmh has scores for. """
subset = algo_scores[algo_scores["Algorithm"] == algo]
# filter by 57 games
subset = subset[np.logical_not(subset["In57"])]
return subset
def missing_57_games(algo):
""" Returns all games missing from the Atari-57 datset. """
subset = algo_scores[algo_scores["Algorithm"] == algo]
return [game for game in canonical_57 if game not in list(subset["Game"])]
def calculate_median(algo, scores):
""" Calculate the median score for a given algorithm. """
# filter by algorithm
subset = scores[scores["Algorithm"] == algo]
if len(subset) == 0:
return float("nan")
return np.median(subset["Normalized"])
def get_subset(games_set, scores=None):
""" Returns rows matching any algorithm in games_set"""
scores = scores if scores is not None else algo_scores
return scores[[game in games_set for game in scores["Game"]]]
def fit_model(games_subset, algo_scores, intercept=False):
"""
Fit a linear regression model to games subset.
@algo_scores: dataframe containing scores for each algorithm / environment,
@returns:
        lm, (X, y)
        lm is the fitted linear model
        X is the input matrix used to fit the model
        y is the vector of targets used to fit the model
"""
# scores = get_subset(games_subset, algo_scores)
# scores = scores[scores["train"]]
#
# X_all = scores.pivot_table(
# index='Algorithm',
# columns="Game",
# values="LogNormalized",
# fill_value=None,
# )[list(games_subset)]
X_all = x_pivot[list(games_subset)]
# switch to numpy (because I'm paranoid about pandas and removing rows...)
X = np.asarray(X_all)
y = np.asarray([algorithms[algo].log_target for algo in X_all.index])
algorithm_names = np.asarray(X_all.index)
mask = np.all(np.logical_not(np.isnan(X)), axis=1)
X = X[mask]
y = y[mask]
algorithm_names = algorithm_names[mask]
X = pandas.DataFrame(X, index=algorithm_names, columns=games_subset)
if len(X) < 20:
print(f"Warning! Too many missing samples for: {games_subset}")
lm = sklearn.linear_model.LinearRegression(fit_intercept=intercept)
lm.fit(X, y)
return lm, (X, y)
class RegressionEvaluation:
def __init__(self, games_subset, intercept=False):
"""
        Fits a linear model predicting the log median-57 score from the given games subset
        and stores the in-sample errors. Cross-validation scores are computed lazily via
        the cv_mse / cv_mae / cv_r2 properties, since K-fold evaluation is fairly slow.
"""
lm, (X, log_y) = fit_model(games_subset, algo_scores, intercept=intercept)
self.intercept = intercept
self.games_subset = games_subset
self.N = len(X)
log_predicted_scores = lm.predict(X)
self.log_errors = log_y - log_predicted_scores
self.log_mse = (self.log_errors ** 2).mean()
self.log_mae = np.abs(self.log_errors).mean()
self.log_tss = np.var(log_y, ddof=0) * self.N
self.r2 = 1 - (self.log_mse * self.N) / self.log_tss
# cross validation score is calculated later
self._cross_validation_mse = None
self._cross_validation_mae = None
self._coef = [lm.intercept_] + [lm.coef_]
def _get_cross_validation_score(self, squared=True):
# mean of means, shouldn't really do this, but bins will be either 6 or 7, so shouldn't
# make much difference.
lm, (X, y_raw) = fit_model(self.games_subset, algo_scores, intercept=self.intercept)
return -np.mean(cross_val_score(
lm, X, y_raw, cv=sklearn.model_selection.KFold(10, shuffle=True, random_state=1982),
scoring='neg_mean_squared_error' if squared else 'neg_mean_absolute_error'
))
@property
def cv_mse(self):
# lazy cross validation score
if self._cross_validation_mse is None:
self._cross_validation_mse = self._get_cross_validation_score()
return self._cross_validation_mse
@property
def cv_mae(self):
# lazy cross validation score
if self._cross_validation_mae is None:
self._cross_validation_mae = self._get_cross_validation_score(squared=False)
return self._cross_validation_mae
@property
def cv_r2(self):
# lazy cross validation score
return 1 - (self.cv_mse * self.N) / self.log_tss
def __str__(self):
return f"{self.games_subset}={self.cv_mse ** 0.5:.3f}"
def search_regression(r=3, always_envs=None, banned_games=None, verbose=True, game_set=None, top_k=57, intercept=False):
"""
    Exhaustively search over all size-r subsets of game_set (defaulting to the canonical 57),
    fit a regression for each, and return the top_k models ordered so that the best subset
    (lowest cross-validated MSE) is last.
"""
if game_set is None:
game_set = canonical_57
if banned_games is None:
banned_games = []
else:
banned_games = list(banned_games)
if always_envs is None:
always_envs = tuple()
counter = 0
print("Checking all sets of {} games.".format(r))
# these games are always banned, as they do not have enough datapoints.
games_to_search_through = [game for game in game_set if game.lower() not in banned_games and game.lower() not in always_envs]
combinations = list(itertools.combinations(games_to_search_through, r-len(always_envs)))
# add always envs in...
combinations = [x+always_envs for x in combinations]
start_time = time.time()
args = {
'intercept': intercept,
}
if PROCESSES > 1:
results = process_map(
functools.partial(RegressionEvaluation, **args),
combinations,
max_workers=PROCESSES,
chunksize=1024
)
results = list(results)
else:
results = []
for c in tqdm(combinations):
results.append(RegressionEvaluation(c, **args))
time_taken = time.time() - start_time
fps = len(results) / time_taken
# print(f"Generated {len(results)} models in {time_taken:.1f}s at {fps:.1f} models/second.")
# we take the top_k in terms of log_mse, then calculate validation scores for those, and sort by validation.
# Due to the small number of parameters the ordering generated from the cross validation scores and log_mse scores
# match very closely, and we only care about the top one anyway.
results.sort(reverse=True, key=lambda x: x.log_mse)
results = results[-top_k:]
results.sort(reverse=True, key=lambda x: x.cv_mse)
print(f"{'subset':<60} {'rmse':<10} {'cv_rmse':<10} {'coef'}")
for re in results:
if verbose:
print(f"{str(re.games_subset):<60} {re.log_mse**0.5:<10.3f} {re.cv_mse ** 0.5:<10.3f} {re._coef}")
counter += 1
time.sleep(0.100) # wait for TQDM process to die.
print()
return results
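# Illustrative sketch (not called anywhere): a small search over all size-3 subsets of an
# arbitrary pool of games, keeping the best (last) result as done in __main__ below.
# Assumes run_init() has already been called.
def _search_regression_example():
    pool = ['breakout', 'qbert', 'seaquest', 'pong', 'boxing']
    best = search_regression(3, game_set=pool, top_k=5, intercept=USE_INTERCEPT)[-1]
    return best.games_subset, best.cv_mse ** 0.5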
def bold(x):
return f"{bcolors.BOLD}{x}{bcolors.ENDC}"
def run_init(do_not_train_on=None, verbose=False):
"""
Run initialization.
    Sets the following global variables:
    all_algo_scores: same as algo_scores but includes algorithms that do not meet criteria.
    algo_scores: dataframe where each row is a score for an algorithm on a game (with bad algorithms filtered out).
    algorithms: dictionary of Algorithm instances with useful stats for each algorithm.
    x_pivot: pivot table of log-normalized scores (algorithms x games) used when fitting models.
"""
global all_algo_scores
global algo_scores
global x_pivot # pivot table version
global algorithms
do_not_train_on = do_not_train_on or []
if verbose:
print("Number of canonical games {}".format(len(canonical_57)))
pd.set_option('display.max_columns', None)
# load human and random results
human_scores = pd.read_csv("Atari-Human.csv", dtype={"Score": float})
    # load in the Papers With Code data (this is in a slightly different format)
pwc_file = "PapersWithCode.csv"
convert_atari_data('evaluation-tables (Feb22).json', pwc_file)
algo_scores = pd.read_csv(pwc_file, dtype={"Score": float})
algo_scores['Source'] = pwc_file.split('.')[0]
def load_data(filename):
"""
Expected format is Game, <algo1>, <algo2>, ...
"""
new_data = pd.read_csv(filename)
# standardise names
new_data['Game'] = [clean_up_atari_game_name(x) for x in new_data['Game']]
        # convert from wide to long
new_data = pd.melt(
new_data,
id_vars=new_data.columns[0:1],
value_vars=new_data.columns[1:],
value_name='Score',
var_name='Algorithm'
)
# change '-' to none
for idx, row in new_data.iterrows():
if row['Score'] == "-":
row['Score'] = None
else:
# force float type
# silly NGU...
if type(row['Score']) is str and row['Score'] != '' and row['Score'][-1] == 'k':
row['Score'] = float(row['Score'][:-1]) * 1000
else:
row['Score'] = float(row['Score'])
        # drop rows with ignored games or missing scores
new_data = new_data.dropna()
new_data = new_data.copy()
# add source
new_data['Source'] = filename.split('.')[0]
new_data['Score'] = new_data['Score'].astype('float')
return new_data
# append data
ADDITIONAL_DATA_SETS = [
'RAINBOW (fixed).csv' if USE_RAINBOW_RIVERRAID else 'RAINBOW.csv',
'SEED.csv',
'INCREASE.csv',
'DRLQR.csv',
'BOOTDQN.csv',
'LASER.csv',
'REVISITING.csv',
'NEAT.csv',
'REACTOR.csv',
'NGU.csv',
'SIMPLE.csv',
]
if USE_UPDATED_A3C:
# use the scores from the paper, rather than from paperswithcode, as PWC misses some games.
ADDITIONAL_DATA_SETS.append('A3C.csv')
global BANNED_ALGOS
BANNED_ALGOS = BANNED_ALGOS.union([
"PapersWithCode.A3C FF hs",
"PapersWithCode.A3C LSTM hs",
"PapersWithCode.A3C FF (1 day) hs",
])
if INCLUDE_FIRSTRETURN:
ADDITIONAL_DATA_SETS.append("FIRSTRETURN.csv")
# load in data from each paper
new_data = [
load_data(x) for x in ADDITIONAL_DATA_SETS
]
algo_scores = pd.concat([algo_scores, *new_data])
# make all names lower case
human_scores['Game'] = [clean_up_atari_game_name(x) for x in human_scores['Game']]
algo_scores["Game"] = algo_scores["Game"].str.lower()
# remove data from banned algorithms
algo_scores.reset_index(drop=True, inplace=True) # needed as indexes are duplicated...
drop_rows = []
keep_rows = []
for idx, row in algo_scores.iterrows():
full_name = row["Source"] + "." + row["Algorithm"]
if full_name in BANNED_ALGOS or row["Algorithm"][0] == "_":
drop_rows.append(idx)
else:
keep_rows.append(idx)
algo_scores.drop(drop_rows, axis=0, inplace=True)
algo_scores.reset_index(drop=True, inplace=True)
if len(drop_rows) > 0:
print(f"Dropped {len(drop_rows)} rows.")
for index, row in algo_scores.iterrows():
if row["Game"] not in canonical_57:
print(f"Invalid game '{row['Game']}' on algorithm {row['Algorithm']}")
algo_scores = algo_scores.merge(human_scores[["Game", "Random", "Human"]], on="Game", how="left")
algo_scores["Normalized"] = 100 * (algo_scores["Score"] - algo_scores["Random"]) / (
algo_scores["Human"] - algo_scores["Random"])
algo_scores["LogNormalized"] = transform(algo_scores["Normalized"])
algo_scores["In57"] = [game in canonical_57 for game in algo_scores["Game"]]
all_algorithms_list = set(algo_scores["Algorithm"])
if verbose:
print("All algorithms:", all_algorithms_list)
for game in do_not_train_on:
assert game in all_algorithms_list, f"{game} missing from algorithms list"
algo_scores["train"] = [game not in do_not_train_on for game in algo_scores["Algorithm"]]
# get algorithm stats and look for (algorithm,score) pair duplications
algorithms = {}
for index, row in algo_scores.iterrows():
name = row['Algorithm']
game = row['Game']
norm_score = 100 * (row["Score"] - row["Random"]) / (row["Human"] - row["Random"])
source = row['Source']
paper_title = row['Paper Title']
if type(paper_title) is not str:
paper_title = "?"
try:
paper_year = int(row['Date'])
except:
paper_year = 0
if name in algorithms:
# just verify
algo = algorithms[name]
if game in algo.game_scores:
print(f"Warning, found duplicate entries for pair ({name}, {game}) in {source}")
if source != algo.source:
print(f"Warning, found algorithm {name} in multiple sources. {algo.source}, {source}")
else:
algo = Algorithm(name=name, source=source, paper_title=paper_title, paper_year=paper_year)
        if not np.isnan(norm_score):
algo.game_scores[game] = norm_score
else:
print(f"Warning {algo}:{game} has nan score.")
algorithms[name] = algo
for algo in algorithms.values():
algo.a57_median = np.median(list(algo.a57_games.values()))
algo.log_target = transform(algo.a57_median)
algo.is_good = len(algo.a57_games.values()) >= 40 and algo.a57_median > 40 and algo.full_name not in BANNED_ALGOS
good_algos = [algo.name for algo in algorithms.values() if algo.is_good]
all_algorithms = [algo.name for algo in algorithms.values()]
print(f"Found {bcolors.BOLD}{len(all_algorithms)}{bcolors.ENDC} algorithms, with {bcolors.WARNING}{len(good_algos)}{bcolors.ENDC} meeting requirements.")
# stub:
print(sorted(good_algos))
# make sure we don't have duplicates (again...)
for a1_name in algorithms.keys():
for a2_name in algorithms.keys():
if a1_name == a2_name:
continue
a1 = algorithms[a1_name]
a2 = algorithms[a2_name]
matches = games_matched(a1.a57_games, a2.a57_games)
if matches > 25:
print(f"Algorithms {bold(a1)} and {bold(a2)} look similar, matched on {matches}/{len(algorithms[a1_name].a57_games)} of games.")
# median before algorithm filter...
all_median_scores = {
k: calculate_median(k, get_subset(canonical_57)) for k in all_algorithms
}
# filter bad algorithms
algo_scores["good"] = [algo in good_algos for algo in algo_scores["Algorithm"]]
algo_scores = algo_scores[algo_scores['good']]
algo_scores["Game"] = algo_scores["Game"].astype('category') # faster?
median_sorted_scores = sorted([(v, k) for k, v in all_median_scores.items()])
# filtered results, and calculate true targets
all_algo_scores = algo_scores.copy()
algo_scores = algo_scores[[algo in good_algos for algo in algo_scores["Algorithm"]]]
algo_scores['LogNormalized'] = transform(algo_scores['Normalized'])
# pregenerate the pivot table
scores = algo_scores[algo_scores["train"]]
x_pivot = scores.pivot_table(
index='Algorithm',
columns="Game",
values="LogNormalized",
fill_value=None,
)
if verbose:
print(f"Found {len(good_algos)} datapoints with 40 or more games.")
for n_games in reversed(range(1, 57 + 1)):
matching = [algo for algo in all_algorithms if count_57_games(algo) == n_games]
if len(matching) > 0:
print(f"[{n_games:02d}] {matching}")
print()
print("Missing games:")
for algo in good_algos:
print(f" -{algo}: {missing_57_games(algo)}")
print()
print("Median_57 scores:")
for score, algo in median_sorted_scores:
if algo not in good_algos:
continue
marker = "*" if algo in do_not_train_on else ""
if algo not in good_algos:
marker = marker + " -"
print(f" -{algo}: {score:.0f} {marker}")
if __name__ == "__main__":
# ---------------------------------------------------
# find a good subsets...
run_init()
algo_scores.to_csv('dataset.csv')
args = {
'top_k': 20,
'intercept': USE_INTERCEPT,
#'k_fold_threshold': 0.15, # makes things a bit faster
#'k_fold_threshold': (3.5**2) # makes things a bit faster
}
results = {}
# games with 50+/62
#not_enough_data_games = ["defender", "phoenix", "pitfall", "skiing", "solaris", "surround", "yarsrevenge"]
# games with 40+/62
not_enough_data_games = ["surround"]
print("-" * 60)
print(" Test")
print("-" * 60)
results['Atari_Single'] = search_regression(1, intercept=True, top_k=10)[-1]
args['banned_games'] = not_enough_data_games
results['Atari_5'] = atari5 = search_regression(5, **args)[-1]
results['Atari_3'] = atari3 = search_regression(3, game_set=atari5.games_subset, **args)[-1]
results['Atari_1'] = atari1 = search_regression(1, game_set=atari3.games_subset, **args)[-1]
print("-" * 60)
print(" Validation")
print("-" * 60)
args['banned_games'] = not_enough_data_games + list(atari5.games_subset)
# atari3_val is done first as three games is usually enough for validation.
results['Atari_3_Val'] = atari3_val = search_regression(3, **args)[-1]
results['Atari_5_Val'] = atari5_val = search_regression(5, always_envs=atari3_val.games_subset, **args)[-1]
results['Atari_1_Val'] = atari1_val = search_regression(1, game_set=atari3_val.games_subset, **args)[-1]
print("-" * 60)
print(" Atari-10")
print("-" * 60)
# for when you really need a precise result
args['banned_games'] = not_enough_data_games + list(atari5_val.games_subset)
results['Atari_7'] = search_regression(7, always_envs=atari5.games_subset, **args)[-1]
results['Atari_10'] = search_regression(10, always_envs=atari5.games_subset, **args)[-1]
# the overlapped version uses validation set games, This is just to see how much more we gain using 10 environments
# rather than 5.
# args['banned_games'] = not_enough_data_games
# results['Atari_10_Overlap'] = atari10_overlap = search_regression(10, always_envs=atari5.games_subset, **args)[-1]
# show atari_score normalizer settings
print("SUBSETS = {")
for k, v in results.items():
relative_error = v.cv_mae * np.log(10) * 100
print(f"'{k}': ({list(v.games_subset)}, {list(v._coef[1])}, {relative_error:.1f}),")
print("}") | 25,279 | 33.301221 | 157 | py |
white_box_rarl | white_box_rarl-main/wbrarl_plotting.py |
from pathlib import Path
import numpy as np
import pickle
import matplotlib.pyplot as plt
from matplotlib import rc
from scipy import stats
rc('font', **{'family': 'serif', 'serif': ['Palatino']})
plt.rcParams['pdf.fonttype'] = 42
results_path = Path('./results/')
N_TRAIN_STEPS = 2000000
FS = 15
N_EXCLUDE = 20
TOTAL_N = 40
COEF_DICT = {'HalfCheetah-v3': {'mass': [0.2, 0.3, 0.4, 0.5, 1.5, 2.0, 2.5, 3.0],
'friction': [0.05, 0.1, 0.2, 0.3, 1.3, 1.5, 1.7, 1.9]},
'Hopper-v3': {'mass': [0.2, 0.3, 0.4, 0.5, 1.05, 1.1, 1.15, 1.2],
'friction': [0.2, 0.3, 0.4, 0.5, 1.4, 1.6, 1.8, 2.0]},
}
def get_env_name(names):
if 'Cheetah' in names[0]:
env_name = 'HalfCheetah-v3'
elif 'Hopper' in names[0]:
env_name = 'Hopper-v3'
else:
raise NotImplementedError
return env_name
def get_learning_curves(fnames):
paths = results_path.glob(fnames + '*')
ys = []
for path in paths:
with open(str(path), 'rb') as f:
y = pickle.load(f)
ys.append(y)
ys = np.array(ys)
idxs = np.argsort(ys[:, -1])
ys = ys[idxs[N_EXCLUDE:]]
return np.array(ys)
def heatmap(data, row_labels, col_labels, ax=None, col_i=0, do_cbar=True,
cbar_kw={}, cbarlabel="", **kwargs):
if not ax:
ax = plt.gca()
# Plot the heatmap
im = ax.imshow(data, **kwargs)
# Create colorbar
if do_cbar:
cbar = ax.figure.colorbar(im, ax=ax, **cbar_kw)
cbar.ax.set_ylabel(cbarlabel, rotation=-90, va="bottom")
else:
cbar = None
# Show all ticks and label them with the respective list entries.
ax.set_xticks(np.arange(data.shape[1]))
ax.set_xticklabels(col_labels)
if col_i == 0:
ax.set_yticks(np.arange(data.shape[0]))
ax.set_yticklabels(labels=row_labels)
else:
ax.set_yticks([])
ax.set_yticklabels([])
# Let the horizontal axes labeling appear on top.
ax.tick_params(top=False, bottom=True,
labeltop=False, labelbottom=True)
# Rotate the tick labels and set their alignment.
plt.setp(ax.get_xticklabels(), rotation=30, ha="right",
rotation_mode="anchor")
# Turn spines off and create white grid.
# ax.spines[:].set_visible(False)
ax.set_xticks(np.arange(data.shape[1]+1)-.5, minor=True)
ax.set_yticks(np.arange(data.shape[0]+1)-.5, minor=True)
ax.grid(which="minor", color="w", linestyle='-', linewidth=3)
ax.tick_params(which="minor", bottom=False, left=False)
return im, cbar
def annotate_heatmap(im, data=None,
textcolors=("black", "white"),
threshold=None, **textkw):
if not isinstance(data, (list, np.ndarray)):
data = im.get_array()
# Normalize the threshold to the images color range.
if threshold is not None:
threshold = im.norm(threshold)
else:
threshold = im.norm(data.max())/2.
# Set default alignment to center, but allow it to be
# overwritten by textkw.
kw = dict(horizontalalignment="center",
verticalalignment="center")
kw.update(textkw)
# Get the formatter in case a string is supplied
# if isinstance(valfmt, str):
# valfmt = matplotlib.ticker.StrMethodFormatter(valfmt)
# Loop over the data and create a `Text` for each "pixel".
# Change the text's color depending on the data.
texts = []
for i in range(data.shape[0]):
for j in range(data.shape[1]):
kw.update(color=textcolors[int(im.norm(data[i, j]) > threshold)])
text = im.axes.text(j, i, int(data[i, j]), **kw)
texts.append(text)
return texts
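# Illustrative sketch (not called anywhere): rendering a small random grid with the two
# helpers above. The labels and data are made up.
def _heatmap_example():
    fig, ax = plt.subplots()
    data = np.random.rand(3, 4) * 100
    im, _ = heatmap(data, ['r1', 'r2', 'r3'], ['c1', 'c2', 'c3', 'c4'], ax=ax, cmap='YlGn')
    annotate_heatmap(im)
    return fig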
def plot_learning_curves(results_names):
env_name = get_env_name(results_names)
ys = []
for name in results_names:
ys.append(get_learning_curves(name))
fig = plt.figure()
colors = ['royalblue', 'firebrick', 'darkorange']
labels = ['RL', 'RARL', 'Latent/Action WB-RARL']
for i, y in enumerate(ys):
x = np.linspace(1, N_TRAIN_STEPS, num=len(y[0]))
y_mean = np.mean(y, axis=0)
plt.plot(x, y_mean, color=colors[i])
plt.legend(labels[:len(results_names)], fontsize=FS-2, loc='lower right')
for i, y in enumerate(ys):
x = np.linspace(1, N_TRAIN_STEPS, num=len(y[0]))
y_mean = np.mean(y, axis=0)
y_sem = stats.sem(y, axis=0)
plt.fill_between(x, y_mean-y_sem, y_mean+y_sem, color=colors[i], alpha=0.2)
plt.title(env_name.replace('-v3', '') + f', n={TOTAL_N - N_EXCLUDE}', fontsize=FS)
plt.ylabel('Reward per Episode', fontsize=FS)
plt.grid(alpha=0.4)
plt.show()
fig.savefig(f"./{env_name.replace('-v3', '')}_train.pdf", format='pdf', bbox_inches='tight')
def plot_and_print_eval(eval_names, do_cbar=True):
eval_results = []
all_hm_means = []
for names in eval_names:
paths = results_path.glob(names + '_eval*')
hmaps = []
for path in paths:
with open(str(path), 'rb') as f:
data = pickle.load(f)
hmaps.append(np.array(data))
hmaps = np.array(hmaps)
hm_means = np.mean(hmaps, axis=(1, 2))
idxs = np.argsort(hm_means)
hmaps = hmaps[idxs[N_EXCLUDE:]]
if len(hmaps) > 0:
hmap_means = np.mean(hmaps, axis=0)
hmap_means = np.round(hmap_means)
eval_results.append(hmap_means)
all_hm_means.append([np.mean(hm) for hm in hmaps])
env_name = get_env_name(eval_names)
mass_vals = COEF_DICT[env_name]['mass']
friction_vals = COEF_DICT[env_name]['friction']
xvals = [str(v) + 'x mass' for v in mass_vals]
yvals = [str(v) + 'x fric' for v in friction_vals]
titles = ['RL', 'RARL', 'Latent/Action WB-RARL']
all_min = min([np.min(er) for er in eval_results])
all_max = max([np.max(er) for er in eval_results])
wid = 18 if do_cbar else 16
fig, axes = plt.subplots(1, 3, figsize=(wid, 6))
for i in range(len(eval_results)):
im, cbar = heatmap(eval_results[i], xvals, yvals, ax=axes[i], col_i=i, do_cbar=do_cbar,
cmap="YlGn", **{'vmin': all_min, 'vmax': all_max})
_ = annotate_heatmap(im)
fig.tight_layout()
axes[i].set_title(titles[i] + f', n={TOTAL_N - N_EXCLUDE}', fontsize=FS+4)
axes[0].set_ylabel(env_name.replace('-v3', ''), fontsize=FS+8)
fig.tight_layout(rect=[0, 0.15, 0.95, 1.0])
all_hm_means = np.array(all_hm_means)
all_means = []
for hm in all_hm_means:
print(np.mean(np.array(hm)), np.std(np.array(hm)))
all_means.append(int(np.round(np.mean(np.array(hm)))))
pairs = [(0, 1), (0, 2), (1, 2)]
all_ps = []
for pair in pairs:
result = stats.ttest_ind(all_hm_means[pair[0]], all_hm_means[pair[1]], alternative='less')
print(pair, result)
all_ps.append(round(result[1], 3))
axes[0].set_xlabel(f'Mean: {all_means[0]}\nWB-RARL v. RL p: {all_ps[1]}', fontsize=FS+3)
axes[1].set_xlabel(f'Mean: {all_means[1]}\nRARL v. RL p: {all_ps[2]}', fontsize=FS+3)
axes[2].set_xlabel(f'Mean: {all_means[2]}', fontsize=FS+3)
plt.show()
# plt.imsave(f"{env_name.replace('-v3', '')}_eval.pdf", fig)
fig.savefig(f"./{env_name.replace('-v3', '')}_eval_cbar={do_cbar}.pdf", format='pdf', bbox_inches='tight')
def plot_learning_curves_and_row_col_means(results_names, eval_names, errbar=True):
env_name = get_env_name(results_names)
fig, axes = plt.subplots(1, 3, figsize=(15, 5))
colors = ['royalblue', 'firebrick', 'darkorange']
labels = ['RL', 'RARL', 'Latent/Action WB-RARL']
# plot the learning curves
ys = []
y_means = []
for name in results_names:
ys.append(get_learning_curves(name))
for i, y in enumerate(ys):
x = np.linspace(1, N_TRAIN_STEPS, num=len(y[0]))
y_mean = np.mean(y, axis=0)
axes[0].plot(x, y_mean, color=colors[i])
axes[0].legend(labels[:len(results_names)], fontsize=FS, loc='lower right')
for i, y in enumerate(ys):
x = np.linspace(1, N_TRAIN_STEPS, num=len(y[0]))
y_mean = np.mean(y, axis=0)
y_means.append(y_mean)
y_sem = stats.sem(y, axis=0)
axes[0].fill_between(x, y_mean-y_sem, y_mean+y_sem, color=colors[i], alpha=0.2)
axes[0].set_title(env_name.replace('-v3', '') + f' Train, n={TOTAL_N - N_EXCLUDE}', fontsize=FS+2)
axes[0].set_ylabel('Reward per Episode', fontsize=FS)
axes[0].set_xlabel('t', fontsize=FS)
axes[0].grid(alpha=0.4)
all_rowmeans = []
all_colmeans = []
for names in eval_names:
paths = results_path.glob(names + '_eval*')
hmaps = []
for path in paths:
with open(str(path), 'rb') as f:
data = pickle.load(f)
hmaps.append(np.array(data))
hmaps = np.array(hmaps)
hm_means = np.mean(hmaps, axis=(1, 2))
idxs = np.argsort(hm_means)
hmaps = hmaps[idxs[N_EXCLUDE:]]
if len(hmaps) > 0:
all_rowmeans.append(np.mean(hmaps, axis=1))
all_colmeans.append(np.mean(hmaps, axis=2))
all_rowmeans = np.array(all_rowmeans)
all_colmeans = np.array(all_colmeans)
rowmeans = np.mean(all_rowmeans, axis=1)
rowsems = stats.sem(all_rowmeans, axis=1)
colmeans = np.mean(all_colmeans, axis=1)
colsems = stats.sem(all_colmeans, axis=1)
mass_vals = COEF_DICT[env_name]['mass']
friction_vals = COEF_DICT[env_name]['friction']
xlbls = [''] + [str(v) + 'x' for v in mass_vals]
ylbls = [''] + [str(v) + 'x' for v in friction_vals]
xvals = list(range(rowmeans.shape[1]))
if errbar:
for i in range(len(rowmeans)):
axes[1].errorbar(xvals, rowmeans[i], rowsems[i], color=colors[i], fmt='s', lw=4, capsize=8, capthick=4,
alpha=0.5)
axes[2].errorbar(xvals, colmeans[i], colsems[i], color=colors[i], fmt='s', lw=4, capsize=8, capthick=4,
alpha=0.5)
else:
for i in range(len(rowmeans)):
axes[1].plot(xvals, rowmeans[i], color=colors[i])
axes[1].fill_between(xvals, rowmeans[i] - rowsems[i], rowmeans[i] + rowsems[i], color=colors[i], alpha=0.2)
axes[2].plot(xvals, colmeans[i], color=colors[i])
axes[2].fill_between(xvals, colmeans[i] - colsems[i], colmeans[i] + colsems[i], color=colors[i], alpha=0.2)
axes[1].set_xticklabels(xlbls)
axes[2].set_xticklabels(ylbls)
axes[1].set_xlabel('Mass Multiplier', fontsize=FS)
axes[2].set_xlabel('Friction Multiplier', fontsize=FS)
axes[1].set_title(env_name.replace('-v3', '') + f' Test, n={TOTAL_N - N_EXCLUDE}', fontsize=FS+2)
axes[2].set_title(env_name.replace('-v3', '') + f' Test, n={TOTAL_N - N_EXCLUDE}', fontsize=FS+2)
axes[1].grid(alpha=0.4)
axes[2].grid(alpha=0.4)
plt.show()
fig.savefig(f"./{env_name.replace('-v3', '')}_train_and_eval_errbar={errbar}.pdf", format='pdf', bbox_inches='tight')
hopper_results_names = [
'agent_control_Hopper-v3_2000000_*_rewards',
'agent_rarl_Hopper-v3_2000000_id=*_rewards',
'agent_lat_act_rarl_Hopper-v3_2000000_id=*_rewards']
half_cheetah_results_names = [
'agent_control_HalfCheetah-v3_2000000_*_rewards',
'agent_rarl_HalfCheetah-v3_2000000_id=*_rewards',
'agent_lat_act_rarl_HalfCheetah-v3_2000000_id=*_rewards']
half_cheetah_eval_names = ['best_agent_control_HalfCheetah-v3_2000000_id=*',
'best_agent_rarl_HalfCheetah-v3_2000000_id=*',
'best_agent_lat_act_rarl_HalfCheetah-v3_2000000_id=*']
hopper_eval_names = ['best_agent_control_Hopper-v3_2000000_id=*',
'best_agent_rarl_Hopper-v3_2000000_id=*',
'best_agent_lat_act_rarl_Hopper-v3_2000000_id=*']
plot_learning_curves(half_cheetah_results_names)
plot_and_print_eval(half_cheetah_eval_names)
plot_learning_curves_and_row_col_means(half_cheetah_results_names, half_cheetah_eval_names)
plot_learning_curves(hopper_results_names)
plot_and_print_eval(hopper_eval_names)
plot_learning_curves_and_row_col_means(hopper_results_names, hopper_eval_names) | 12,269 | 34.877193 | 121 | py |
white_box_rarl | white_box_rarl-main/wbrarl.py | import sys
import os
import time
import random
import argparse
import multiprocessing
import pickle
import copy
from multiprocessing import freeze_support
import numpy as np
import torch
import gym
from stable_baselines3.ppo import PPO
from stable_baselines3.sac import SAC
from stable_baselines3.common.vec_env import SubprocVecEnv, VecNormalize
from stable_baselines3.common.utils import set_random_seed
import warnings
LAST_LAYER_DIM = 256
HYPERS_SAC = {'Hopper-v3': {'learning_starts': 4000, 'learning_rate': 0.0002}}
HYPERS_PPO = {'HalfCheetah-v3': {'batch_size': 64,
'ent_coef': 0.0025,
'n_steps': 128, # orig was 512, made smaller because n_envs is high
'gamma': 0.98,
'learning_rate': 2.0633e-05,
'gae_lambda': 0.92,
'n_epochs': 12, # orig was 20
'max_grad_norm': 0.5,
'vf_coef': 0.58096,
'clip_range': 0.06,
'policy_kwargs': {'log_std_init': -2.0, 'ortho_init': False,
'activation_fn': torch.nn.ReLU,
'net_arch': dict(pi=[256, 256], vf=[256, 256])}}}
ADV_HYPERS_SAC = {'Hopper-v3': {'ent_coef': 0.15, 'learning_starts': 4000}}
ADV_HYPERS_PPO = {'HalfCheetah-v3': {'ent_coef': 0.0075}}
COEF_DICT = {'HalfCheetah-v3': {'mass': [0.2, 0.3, 0.4, 0.5, 1.5, 2.0, 2.5, 3.0],
'friction': [0.05, 0.1, 0.2, 0.3, 1.3, 1.5, 1.7, 1.9]},
'Hopper-v3': {'mass': [0.2, 0.3, 0.4, 0.5, 1.05, 1.1, 1.15, 1.2],
'friction': [0.2, 0.3, 0.4, 0.5, 1.4, 1.6, 1.8, 2.0]},
}
np.set_printoptions(suppress=True)
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument('--experiment_type', type=str, default='') # 'ctrl', 'rarl', or 'rarl' with 'act', 'val', and/or 'lat' as prefixes
parser.add_argument('--agent_ckpt', type=str, default='')
parser.add_argument('--env_ckpt', type=str, default='')
parser.add_argument('--env', type=str, default='HalfCheetah-v3')
parser.add_argument('--id', type=int, default=0)
parser.add_argument('--model_dir', type=str, default='./models/')
parser.add_argument('--results_dir', type=str, default='./results/')
parser.add_argument('--n_test_episodes', type=int, default=10)
parser.add_argument('--n_envs', type=int, default=16) # epoch size is n_steps * n_envs
parser.add_argument('--n_train', type=int, default=int(2e6))
parser.add_argument('--n_train_per_iter', type=int, default=10000) # how often to switch advs and report results
parser.add_argument('--test_each', type=int, default=2) # how often to test
parser.add_argument('--start_adv_training', type=int, default=200000) # when to start the adv
parser.add_argument('--n_advs', type=int, default=1) # how many adversaries to train in an ensemble
parser.add_argument('--delta_action', type=float, default=0.075) # how much to let the adv maximally perturb
parser.add_argument('--lam', type=float, default=0.05) # how much to penalize the adversary's action L1 norm
parser.add_argument('--device', type=str, default='cuda')
parser.add_argument('--mode', type=str, default='train')
parser.add_argument('--perturb_style', type=str, default='action') # 'body' or 'action' depending on what the adversary perturbs
parser.add_argument('--n_report', type=int, default=2)
args = parser.parse_args()
return args
def get_seed():
# gets a random seed from the current time
return int(str(time.time()).replace('.', '')[-5:])
class DummyAdvEnv(gym.Wrapper):
# this is used for initializing adversarial policies
def __init__(self, env, lat, act, val, act_space):
self.env = env
obs_dict = {'ob': self.env.observation_space}
if lat:
lat_size = LAST_LAYER_DIM
obs_dict['lat'] = gym.spaces.Box(np.float32(-np.inf * np.ones(lat_size)),
np.float32(np.inf * np.ones(lat_size)))
if act:
obs_dict['act'] = self.env.action_space
if val:
obs_dict['val'] = gym.spaces.Box(np.float32(np.array([-np.inf])), np.float32(np.array([np.inf])))
self.observation_space = gym.spaces.Dict(obs_dict)
self.action_space = act_space
class RARLEnv(gym.Wrapper):
# this can be an env for either the protagonist or adversary depending on whether agent_mode or adv_mode is called
def __init__(self, env, args, agent_ckpt, adv_ckpts, mode, obs_mean=0, obs_var=1):
super().__init__(env)
self.env = env
self.args = copy.deepcopy(args)
self.sd = get_seed()
self.lat = 'lat' in self.args.experiment_type
self.act = 'act' in self.args.experiment_type
self.val = 'val' in self.args.experiment_type
self.observation = None
self.agent_action = None
self.agent_ckpt = agent_ckpt
if isinstance(adv_ckpts, str):
adv_ckpts = [adv_ckpts]
self.adv_ckpts = adv_ckpts
self.obs_mean = obs_mean
self.obs_var = obs_var
if mode == 'agent':
self.agent_mode()
elif mode == 'adv':
self.adv_mode()
def agent_mode(self):
# get observation space, action space, agents, step, and reset
self.observation_space = self.env.observation_space
self.action_space = self.env.action_space
if self.adv_ckpts[0]:
self.advs = [args.alg.load(self.args.model_dir + self.adv_ckpts[i], device='cpu')
for i in range(args.n_advs)]
else:
dummy_adv_env = DummyAdvEnv(copy.deepcopy(self.env), self.lat, self.act, self.val, self.get_adv_action_space())
self.advs = [self.args.alg('MultiInputPolicy', dummy_adv_env, seed=self.sd, device='cpu', **self.args.adv_hypers[self.args.env])
for _ in range(args.n_advs)]
if self.agent_ckpt:
self.agent = self.args.alg.load(self.args.model_dir + self.agent_ckpt, device='cpu')
else:
self.agent = self.args.alg('MlpPolicy', self, device='cpu', seed=self.sd, **self.args.hypers[self.args.env])
self.step = self.step_agent
self.reset = self.reset_agent
self.adv_i = 0
def adv_mode(self):
# get observation space, action space, agents, step, and reset
obs_dict = {'ob': self.env.observation_space}
if self.lat:
lat_size = LAST_LAYER_DIM
obs_dict['lat'] = gym.spaces.Box(np.float32(-np.inf * np.ones(lat_size)),
np.float32(np.inf * np.ones(lat_size)))
if self.act:
obs_dict['act'] = self.env.action_space
if self.val:
obs_dict['val'] = gym.spaces.Box(np.float32(np.array([-np.inf])), np.float32(np.array([np.inf])))
self.observation_space = gym.spaces.Dict(obs_dict)
self.action_space = self.get_adv_action_space()
if self.agent_ckpt:
self.agent = self.args.alg.load(self.args.model_dir + self.agent_ckpt, device='cpu')
else:
self.agent = self.args.alg('MlpPolicy', self.env, device='cpu', seed=self.sd, **self.args.hypers[self.args.env])
self.step = self.step_adv
self.reset = self.reset_adv
def reset_agent(self):
self.observation = self.env.reset()
self.adv_i = random.randint(0, len(self.advs)-1)
return self.observation
def reset_adv(self):
self.observation = self.env.reset()
self.agent_action = self.agent.predict(self.observation, deterministic=True)[0]
return self.get_adv_obs(self.agent_action)
def get_adv_obs(self, agent_action):
obs = {'ob': self.observation}
if self.lat:
tens_ob = torch.unsqueeze(torch.from_numpy(self.observation), dim=0).float()
if self.args.alg == SAC:
latent_pi_val = self.agent.policy.actor.latent_pi(tens_ob)
else:
features = self.agent.policy.extract_features(tens_ob)
latent_pi_val, _ = self.agent.policy.mlp_extractor(features)
self.agent_latent = latent_pi_val.detach().numpy()
obs['lat'] = np.squeeze(self.agent_latent)
if self.act:
obs['act'] = agent_action
if self.val:
raise NotImplementedError
return obs
def step_agent(self, agent_action):
adv_obs = self.get_adv_obs(agent_action)
adv_action = self.advs[self.adv_i].predict(adv_obs, deterministic=False)[0]
if self.args.perturb_style == 'body':
self.adv_to_xfrc(adv_action)
if self.args.perturb_style == 'action':
agent_action += adv_action
agent_action = np.clip(agent_action, self.env.action_space.low, self.env.action_space.high)
obs, reward, done, info = self.env.step(agent_action)
return obs, reward, done, info
def step_adv(self, adv_action):
if self.args.perturb_style == 'body':
self.adv_to_xfrc(adv_action)
self.observation, reward, done, infos = self.env.step(self.agent_action)
norm_penalty = self.args.lam * np.mean(np.abs(adv_action))
adv_reward = -1 * reward - norm_penalty
norm_obs = np.clip((self.observation - self.obs_mean) / np.sqrt(self.obs_var + 1e-8), -10, 10)
self.agent_action = self.agent.predict(norm_obs, deterministic=False)[0]
if self.args.perturb_style == 'action':
self.agent_action += adv_action
self.agent_action = np.clip(self.agent_action, self.env.action_space.low, self.env.action_space.high)
obs = self.get_adv_obs(self.agent_action)
return obs, adv_reward, done, infos
def get_adv_action_space(self):
if self.args.perturb_style == 'body':
high_adv = np.float32(np.ones(self.n_dim * len(self.body_idx)) * self.args.delta_body)
return gym.spaces.Box(-high_adv, high_adv)
elif self.args.perturb_style == 'action':
high_adv = self.env.action_space.high * self.args.delta_action
return gym.spaces.Box(-high_adv, high_adv)
else:
raise NotImplementedError
def make_rarl_env(wrapper, args, agent_ckpt, adv_ckpts, mode, obs_mean, obs_var, rank):
def _init():
gym_env = gym.make(args.env)
env = wrapper(gym_env, args, agent_ckpt, adv_ckpts, mode, obs_mean, obs_var)
env.seed(rank)
return env
set_random_seed(rank)
return _init
def make_env(args, rank, mc=1.0, fc=1.0):
def _init():
env = gym.make(args.env)
env.seed(rank)
body_mass = env.model.body_mass * mc
env.model.body_mass[:] = body_mass
geom_friction = env.model.geom_friction * fc
env.model.geom_friction[:] = geom_friction
return env
set_random_seed(rank)
return _init
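# Illustrative sketch (not called anywhere): building a single-environment evaluation VecEnv
# with the body mass scaled 2x, similar to the robustness grid used at test time. The seed
# and coefficient values here are arbitrary.
def _make_shifted_eval_env(args):
    raw = SubprocVecEnv([make_env(args, 42, mc=2.0, fc=1.0)])
    return VecNormalize(raw, norm_reward=False)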
def get_save_suff(args, iter):
savename = f'rarl_{args.env}_{iter * args.n_train_per_iter}_id={args.id}'
if 'act' in args.experiment_type:
savename = 'act_' + savename
if 'val' in args.experiment_type:
savename = 'val_' + savename
if 'lat' in args.experiment_type:
savename = 'lat_' + savename
return savename
def simple_eval(policy, eval_env, n_episodes):
all_rewards = []
observation = eval_env.reset()
for _ in range(n_episodes):
done = False
ep_reward = 0.0
while not done:
action = policy.predict(observation=observation, deterministic=False)[0]
observation, reward, done, infos = eval_env.step(action)
done = done[0]
ep_reward += reward[0]
all_rewards.append(ep_reward)
observation = eval_env.reset()
return sum(all_rewards) / n_episodes
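# Illustrative sketch (not called anywhere): loading a saved agent checkpoint and scoring it
# on an unmodified environment. The checkpoint filename is hypothetical, and args.alg is
# assumed to have been set to PPO or SAC as it is elsewhere in this script.
def _simple_eval_example(args):
    eval_env = VecNormalize(SubprocVecEnv([make_env(args, 42)]), norm_reward=False)
    policy = args.alg.load(args.model_dir + 'example_agent_checkpoint.zip', device='cpu')
    return simple_eval(policy, eval_env, args.n_test_episodes)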
def train_rarl(args):
env_wrapper = RARLEnv
n_iters = (args.n_train // args.n_train_per_iter)
sd = get_seed()
agent_rewards = []
adv_improvements = []
last_saved_agent = ''
last_saved_adv = ''
best_mean_reward = -np.inf
obs_mean = 0
obs_var = 1
adv_envs_raw = [SubprocVecEnv([make_rarl_env(env_wrapper, args, last_saved_agent, last_saved_adv,
'adv', obs_mean, obs_var, sd + i)
for i in range(args.n_envs)]) for _ in range(args.n_advs)]
adv_envs = [VecNormalize(adv_envs_raw[j], norm_reward=False) for j in range(args.n_advs)]
adv_policies = [args.alg('MultiInputPolicy', adv_envs[j], device=args.device, seed=sd, **args.adv_hypers[args.env]) for j in range(args.n_advs)]
agent_env_raw = SubprocVecEnv([make_rarl_env(env_wrapper, args, last_saved_agent, last_saved_adv,
'agent', obs_mean, obs_var, sd + i)
for i in range(args.n_envs)])
agent_env = VecNormalize(agent_env_raw, norm_reward=False)
agent_policy = args.alg('MlpPolicy', agent_env, device=args.device, seed=sd, **args.hypers[args.env])
last_saved_agent = 'agent_' + get_save_suff(args, 0)
agent_policy.save(args.model_dir + last_saved_agent + '.zip')
adv_eval_envs_raw = [SubprocVecEnv([make_rarl_env(env_wrapper, args, last_saved_agent, last_saved_adv,
'adv', obs_mean, obs_var, 42)])
for _ in range(args.n_advs)]
adv_eval_envs = [VecNormalize(adv_eval_envs_raw[j], norm_reward=False) for j in range(args.n_advs)]
agent_eval_env_raw = SubprocVecEnv([make_env(args, 42)])
agent_eval_env = VecNormalize(agent_eval_env_raw, norm_reward=False)
last_saved_advs = [] # for deleting files no longer needed
for i in range(1, n_iters + 1):
save_suff = get_save_suff(args, i)
n_train_this_iter = args.n_train_per_iter + args.hypers[args.env].get('learning_starts', 0)
# train adv
if ((args.perturb_style == 'body' and args.delta_body > 0.0) or
(args.perturb_style == 'action' and args.delta_action > 0.0)) and \
args.n_train_per_iter * i > args.start_adv_training:
obs_mean = agent_env.obs_rms.mean
obs_var = agent_env.obs_rms.var
for adv_policy, adv_env, adv_eval_env in zip(adv_policies, adv_envs, adv_eval_envs):
adv_env_raw = SubprocVecEnv([make_rarl_env(env_wrapper, args, last_saved_agent, last_saved_adv,
'adv', obs_mean, obs_var, sd + i)
for i in range(args.n_envs)])
adv_env_state = adv_env.__getstate__()
adv_env.__setstate__(adv_env_state)
adv_env.set_venv(adv_env_raw)
adv_policy.env = adv_env
adv_eval_env_raw = SubprocVecEnv([make_rarl_env(env_wrapper, args, last_saved_agent, adv_policy,
'adv', obs_mean, obs_var, 42)])
adv_eval_env.__setstate__(adv_env_state)
adv_eval_env.set_venv(adv_eval_env_raw)
if (i - 1) % args.test_each == 0:
mean_rewards_pre = [simple_eval(adv_policy, adv_eval_envs[j], args.n_test_episodes) for j, adv_policy in enumerate(adv_policies)]
else:
mean_rewards_pre = 0
for adv_policy in adv_policies:
adv_policy.learn(n_train_this_iter)
for adv_policy, adv_env, adv_eval_env in zip(adv_policies, adv_envs, adv_eval_envs):
adv_env_state = adv_env.__getstate__()
adv_eval_env_raw = SubprocVecEnv([make_rarl_env(env_wrapper, args, last_saved_agent, adv_policy,
'adv', obs_mean, obs_var, 42)])
adv_eval_env.__setstate__(adv_env_state)
adv_eval_env.set_venv(adv_eval_env_raw)
if (i - 1) % args.test_each == 0:
mean_rewards_post = [simple_eval(adv_policy, adv_eval_envs[j], args.n_test_episodes) for j, adv_policy in enumerate(adv_policies)]
adv_improvements.append(round((sum(mean_rewards_post) - sum(mean_rewards_pre)) / args.n_advs))
if i % args.n_report == 0:
print(f'{args.experiment_type} id={args.id} adv_improvements:', adv_improvements, sum(adv_improvements))
sys.stdout.flush()
for lsa in last_saved_advs:
os.remove(args.model_dir + lsa + '.zip')
last_saved_advs = [f'adv{j}_' + save_suff for j in range(args.n_advs)]
for i_policy, adv_policy in enumerate(adv_policies):
adv_policy.save(args.model_dir + last_saved_advs[i_policy] + '.zip')
# train agent
agent_env_raw = SubprocVecEnv([make_rarl_env(env_wrapper, args, last_saved_agent, last_saved_advs,
'agent', obs_mean, obs_var, sd + j)
for j in range(args.n_envs)])
agent_env_state = agent_env.__getstate__()
agent_env.__setstate__(agent_env_state)
agent_env.set_venv(agent_env_raw)
agent_policy.env = agent_env
agent_policy.learn(n_train_this_iter)
agent_env_state = agent_env.__getstate__()
agent_eval_env_raw = SubprocVecEnv([make_env(args, 42)])
agent_eval_env.__setstate__(agent_env_state)
agent_eval_env.set_venv(agent_eval_env_raw)
if (i - 1) % args.test_each == 0:
mean_reward = simple_eval(agent_policy, agent_eval_env, args.n_test_episodes)
if mean_reward >= best_mean_reward:
best_mean_reward = mean_reward
best_save_suff = get_save_suff(args, n_iters)
agent_savename = 'best_agent_' + best_save_suff
agent_policy.save(args.model_dir + agent_savename + '.zip')
agent_rewards.append(round(mean_reward))
if i % args.n_report == 0:
print(f'{args.env} {args.experiment_type} id={args.id} timestep: {i * args.n_train_per_iter}, mean agent rewards: {agent_rewards}')
sys.stdout.flush()
os.remove(args.model_dir + last_saved_agent + '.zip')
last_saved_agent = 'agent_' + save_suff
agent_policy.save(args.model_dir + last_saved_agent + '.zip')
savename = 'agent_' + get_save_suff(args, n_iters)
agent_policy.save(args.model_dir + savename + '.zip')
with open(args.results_dir + savename + '_rewards.pkl', 'wb') as f:
pickle.dump(agent_rewards, f)
agent_eval_env.save(args.model_dir + savename + '_eval_env')
def train_control(args):
n_iters = (args.n_train // args.n_train_per_iter)
sd = get_seed()
env = VecNormalize(SubprocVecEnv([make_env(args, sd + i) for i in range(args.n_envs)]), norm_reward=False)
eval_env = VecNormalize(SubprocVecEnv([make_env(args, 42)]), norm_reward=False)
policy = args.alg('MlpPolicy', env, device=args.device, seed=sd, **args.hypers[args.env])
best_mean_reward = -np.inf
savename = f'best_agent_control_{args.env}_{args.n_train}_id={args.id}'
rewards = []
for i in range(1, n_iters + 1):
n_train_this_iter = args.n_train_per_iter + args.hypers[args.env].get('learning_starts', 0)
policy.learn(n_train_this_iter)
# update the state of the eval env to be the same as the regular env
env_state = env.__getstate__()
eval_env_raw = SubprocVecEnv([make_env(args, 42)])
eval_env.__setstate__(env_state)
eval_env.set_venv(eval_env_raw)
if i % args.n_report == 0:
mean_reward = simple_eval(policy, eval_env, args.n_test_episodes)
rewards.append(round(mean_reward))
if mean_reward >= best_mean_reward:
best_mean_reward = mean_reward
policy.save(args.model_dir + savename + '.zip')
if i % args.n_report == 0:
print(f'{args.env} {args.experiment_type} id={args.id} timestep: {i * args.n_train_per_iter}, mean agent rewards: {rewards}')
sys.stdout.flush()
with open(args.results_dir + f'agent_control_{args.env}_{args.n_train}_id={args.id}' + '_rewards.pkl', 'wb') as f:
pickle.dump(rewards, f)
eval_env.save(args.model_dir + f'agent_control_{args.env}_{args.n_train}_id={args.id}_eval_env')
env.close()
eval_env.close()
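# Robustness evaluation: sweep a grid of mass/friction coefficients (COEF_DICT)
# and score a saved agent checkpoint under each perturbed dynamics setting.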
def eval_agent_grid(args):
mass_coeffs = COEF_DICT[args.env]['mass']
friction_coeffs = COEF_DICT[args.env]['friction']
assert args.agent_ckpt, 'Must give --agent_ckpt to test an agent'
assert args.env_ckpt, 'Must give --env_ckpt to test an agent'
all_mean_rewards = []
for mc in mass_coeffs:
all_mean_rewards.append([])
for fc in friction_coeffs:
eval_env = SubprocVecEnv([make_env(args, 42, mc, fc)])
eval_env = VecNormalize.load(args.model_dir + args.env_ckpt, eval_env)
agent_policy = args.alg.load(args.model_dir + args.agent_ckpt, device=args.device)
mean_reward = simple_eval(agent_policy, eval_env, 16)
print(f'{args.agent_ckpt} mass={mc} friction={fc} mean eval reward: {mean_reward}')
all_mean_rewards[-1].append(mean_reward)
    with open(args.results_dir + args.agent_ckpt + '_eval.pkl', 'wb') as f:
pickle.dump(all_mean_rewards, f)
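# eval_adv trains a fresh adversary against a frozen agent checkpoint; the reward
# the adversary can attain serves as a measure of how exploitable the agent remains.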
def eval_adv(args):
args.lam = 0
env_wrapper = RARLEnv
n_iters = (args.n_train // args.n_train_per_iter)
sd = get_seed()
assert args.agent_ckpt, 'Must give --agent_ckpt to test an agent'
assert args.env_ckpt, 'Must give --env_ckpt to test an agent'
agent_env = SubprocVecEnv([make_env(args, 42)])
agent_env = VecNormalize.load(args.model_dir + args.env_ckpt, agent_env)
obs_mean = agent_env.obs_rms.mean
obs_var = agent_env.obs_rms.var
adv_env_raw = SubprocVecEnv([make_rarl_env(env_wrapper, args, args.agent_ckpt, '',
'adv', obs_mean, obs_var, sd + i)
for i in range(args.n_envs)])
adv_eval_env_raw = SubprocVecEnv([make_rarl_env(env_wrapper, args, args.agent_ckpt, '',
'adv', obs_mean, obs_var, 42)])
adv_env = VecNormalize(adv_env_raw, norm_reward=False)
adv_eval_env = VecNormalize(adv_eval_env_raw, norm_reward=False)
adv_env_state = adv_env.__getstate__()
agent_env_state = agent_env.__getstate__()
adv_env_state['obs_rms']['ob'] = agent_env_state['obs_rms']
adv_env_state['ret_rms'] = agent_env_state['ret_rms']
adv_env.__setstate__(adv_env_state)
adv_env_raw = SubprocVecEnv([make_rarl_env(env_wrapper, args, args.agent_ckpt, '',
'adv', obs_mean, obs_var, sd + i)
for i in range(args.n_envs)])
adv_env.set_venv(adv_env_raw)
adv_policy = args.alg('MultiInputPolicy', adv_env, device=args.device, seed=sd, **args.adv_hypers[args.env])
n_train_per_iter = args.n_train_per_iter + args.hypers[args.env].get('learning_starts', 0)
for i in range(1, n_iters + 1):
adv_policy.learn(n_train_per_iter)
if (i - 1) % args.test_each == 0:
adv_eval_env_raw = SubprocVecEnv([make_rarl_env(env_wrapper, args, args.agent_ckpt, '',
'adv', obs_mean, obs_var, 42)])
adv_env_state = adv_env.__getstate__()
adv_eval_env.__setstate__(adv_env_state)
adv_eval_env.set_venv(adv_eval_env_raw)
mean_adv_reward = simple_eval(adv_policy, adv_eval_env, args.n_test_episodes)
print(f'adv eval id={args.id} mean_adv_reward:', mean_adv_reward)
sys.stdout.flush()
# TODO save
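# Entry point: PPO hyperparameters are used for HalfCheetah and SAC for the
# remaining environments; args.mode selects grid evaluation or adversary
# evaluation, otherwise args.experiment_type dispatches to RARL or control training.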
if __name__ == '__main__':
warnings.filterwarnings("ignore")
freeze_support()
multiprocessing.set_start_method('spawn')
args = parse_args()
if 'HalfCheetah' in args.env:
args.alg = PPO
args.hypers = HYPERS_PPO
args.adv_hypers = ADV_HYPERS_PPO
else:
args.alg = SAC
args.hypers = HYPERS_SAC
args.adv_hypers = ADV_HYPERS_SAC
if args.mode == 'eval':
eval_agent_grid(args)
elif args.mode == 'eval_adv':
eval_adv(args)
elif 'rarl' in args.experiment_type:
train_rarl(args)
elif args.experiment_type == 'ctrl':
train_control(args)
else:
raise NotImplementedError()
print('Done :)')
| 24,786 | 43.341682 | 148 | py |
neurotron_experiments | neurotron_experiments-main/run_sim05.py | # %% Import packages
import numpy as np
from pathlib import Path
from neurotron import NeuroTron
from sim_setup import output_path, sim05_setup
# %% Create output path if it does not exist
output_path.mkdir(parents=True, exist_ok=True)
# %% Set the seed
np.random.seed(sim05_setup['seed'])
# %% Instantiate NeuroTron
neurotron = NeuroTron(sample_data=sim05_setup['sample_data'])
# %% Run neurotron
tron_error, sgd_error = neurotron.run(
sim05_setup['filterlist'],
sim05_setup['dlist'],
sim05_setup['boundlist'],
sim05_setup['betalist'],
sim05_setup['etalist_tron'],
sim05_setup['blist'],
sim05_setup['width'],
sim05_setup['num_iters'],
etalist_sgd=sim05_setup['etalist_sgd']
)
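# The transposed arrays are saved so that the CSVs have one row per iteration and
# one column per hyperparameter setting, which is the layout the plotting scripts
# expect.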
# %% Save output
np.savetxt(output_path.joinpath(sim05_setup['name']+'_tron.csv'), np.transpose(tron_error), delimiter=',')
np.savetxt(output_path.joinpath(sim05_setup['name']+'_sgd.csv'), np.transpose(sgd_error), delimiter=',')
| 956 | 22.341463 | 106 | py |
neurotron_experiments | neurotron_experiments-main/plot_sim01.py | # %% Import packages
import numpy as np
from matplotlib import pyplot as plt
from pathlib import Path
from sim_setup import output_path, sim01_setup
# %% Create output path if it does not exist
output_path.mkdir(parents=True, exist_ok=True)
# %% Load numerical output
tron_error_loaded = np.loadtxt(output_path.joinpath(sim01_setup['name']+'_tron.csv'), delimiter=',')
sgd_error_loaded = np.loadtxt(output_path.joinpath(sim01_setup['name']+'_sgd.csv'), delimiter=',')
# %% Set font size
fontsize = 13
# %% Set transparency
transparent = False
# %% Set y axis limits and ticks
ylims = [
[-16.2, 2.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2]
]
yticks = [
[-16, -14, -12, -10, -8, -6, -4, -2, 0, 2],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1]
]
ylabels = [
['1e-16', '1e-14', '1e-12', '1e-10', '1e-8', '1e-6', '1e-4', '1e-2', '1e-0', '1e+2'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1']
]
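# The first entry corresponds to theta_* = 0 (no data poisoning), where the
# parameter error drops to machine precision, hence the much wider log-scale range.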
# %% Generate and save NeuroTron-vs-SGD figure
save = True
for i in range(tron_error_loaded.shape[1]):
plt.figure(figsize=[8, 5])
xrange = range(1, tron_error_loaded.shape[0]+1)
labels = ['Neurotron', 'SGD']
plt.plot(
xrange,
np.log10(tron_error_loaded[:, i]),
linewidth=2.,
label=labels[0]
)
plt.plot(
xrange,
np.log10(sgd_error_loaded[:, i]),
linewidth=2.,
label=labels[1]
)
plt.ylim(ylims[i])
plt.title(r'Normal data ($\sigma=1$), $\theta_\ast$ = {}'.format(sim01_setup['boundlist'][i]))
plt.xlabel('Iteration', fontsize=fontsize)
plt.ylabel(r'Parameter error ($\log_{10}$ scale)', fontsize=fontsize)
xtickstep = 5000
xticks = range(0, sgd_error_loaded.shape[0]+xtickstep, xtickstep)
plt.xticks(ticks=xticks, fontsize=fontsize)
plt.yticks(ticks=yticks[i], labels=ylabels[i], fontsize=fontsize)
leg = plt.legend(frameon=False, fontsize=fontsize, handletextpad=0.5, ncol=2)
for legobj in leg.legendHandles:
legobj.set_linewidth(3.)
if save:
plt.savefig(
output_path.joinpath(
sim01_setup['name']+'_tron_vs_sgd_theta_val'+str(i+1).zfill(len(str(tron_error_loaded.shape[1])))+'.png'
),
dpi=300,
pil_kwargs={'quality': 100},
transparent=transparent,
bbox_inches='tight',
pad_inches=0.1
)
| 2,838 | 24.123894 | 120 | py |
neurotron_experiments | neurotron_experiments-main/run_sim07.py | # %% Import packages
import numpy as np
from pathlib import Path
from neurotron import NeuroTron
from sim_setup import output_path, sim07_setup
# %% Create output path if it does not exist
output_path.mkdir(parents=True, exist_ok=True)
# %% Set the seed
np.random.seed(sim07_setup['seed'])
# %% Instantiate NeuroTron
neurotron = NeuroTron(sample_data=sim07_setup['sample_data'])
# %% Run neurotron
tron_error, sgd_error = neurotron.run(
sim07_setup['filterlist'],
sim07_setup['dlist'],
sim07_setup['boundlist'],
sim07_setup['betalist'],
sim07_setup['etalist_tron'],
sim07_setup['blist'],
sim07_setup['width'],
sim07_setup['num_iters'],
etalist_sgd=sim07_setup['etalist_sgd']
)
# %% Save output
np.savetxt(output_path.joinpath(sim07_setup['name']+'_tron.csv'), np.transpose(tron_error), delimiter=',')
np.savetxt(output_path.joinpath(sim07_setup['name']+'_sgd.csv'), np.transpose(sgd_error), delimiter=',')
| 956 | 22.341463 | 106 | py |
neurotron_experiments | neurotron_experiments-main/plot_sim05.py | # %% Import packages
import numpy as np
from matplotlib import pyplot as plt
from pathlib import Path
from sim_setup import output_path, sim05_setup
# %% Create output path if it does not exist
output_path.mkdir(parents=True, exist_ok=True)
# %% Load numerical output
tron_error_loaded = np.loadtxt(output_path.joinpath(sim05_setup['name']+'_tron.csv'), delimiter=',')
sgd_error_loaded = np.loadtxt(output_path.joinpath(sim05_setup['name']+'_sgd.csv'), delimiter=',')
# %% Set font size
fontsize = 13
# %% Set transparency
transparent = False
# %% Set y axis limits and ticks
ylims = [
[-16.2, 2.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2]
]
yticks = [
[-16, -14, -12, -10, -8, -6, -4, -2, 0, 2],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1]
]
ylabels = [
['1e-16', '1e-14', '1e-12', '1e-10', '1e-8', '1e-6', '1e-4', '1e-2', '1e-0', '1e+2'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1']
]
# %% Generate and save NeuroTron-vs-SGD figure
save = True
for i in range(tron_error_loaded.shape[1]):
plt.figure(figsize=[8, 5])
xrange = range(1, tron_error_loaded.shape[0]+1)
labels = ['Neurotron', 'SGD']
plt.plot(
xrange,
np.log10(tron_error_loaded[:, i]),
linewidth=2.,
label=labels[0]
)
plt.plot(
xrange,
np.log10(sgd_error_loaded[:, i]),
linewidth=2.,
label=labels[1]
)
plt.ylim(ylims[i])
plt.title(r'Normal data ($\sigma=1$), $\beta$ = {}'.format(sim05_setup['betalist'][i]))
plt.xlabel('Iteration', fontsize=fontsize)
plt.ylabel(r'Parameter error ($\log_{10}$ scale)', fontsize=fontsize)
xtickstep = 5000
xticks = range(0, sgd_error_loaded.shape[0]+xtickstep, xtickstep)
plt.xticks(ticks=xticks, fontsize=fontsize)
plt.yticks(ticks=yticks[i], labels=ylabels[i], fontsize=fontsize)
leg = plt.legend(frameon=False, fontsize=fontsize, handletextpad=0.5, ncol=2)
for legobj in leg.legendHandles:
legobj.set_linewidth(3.)
if save:
plt.savefig(
output_path.joinpath(
sim05_setup['name']+'_tron_vs_sgd_beta_val'+str(i+1).zfill(len(str(tron_error_loaded.shape[1])))+'.png'
),
dpi=300,
pil_kwargs={'quality': 100},
transparent=transparent,
bbox_inches='tight',
pad_inches=0.1
)
| 2,830 | 24.053097 | 119 | py |
neurotron_experiments | neurotron_experiments-main/plot_sim06.py | # %% Import packages
import numpy as np
from matplotlib import pyplot as plt
from pathlib import Path
from sim_setup import output_path, sim06_setup
# %% Create output path if it does not exist
output_path.mkdir(parents=True, exist_ok=True)
# %% Load numerical output
tron_error_loaded = np.loadtxt(output_path.joinpath(sim06_setup['name']+'_tron.csv'), delimiter=',')
sgd_error_loaded = np.loadtxt(output_path.joinpath(sim06_setup['name']+'_sgd.csv'), delimiter=',')
# %% Set font size
fontsize = 13
# %% Set transparency
transparent = False
# %% Set y axis limits and ticks
ylims = [
[-16.2, 2.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2]
]
yticks = [
[-16, -14, -12, -10, -8, -6, -4, -2, 0, 2],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1]
]
ylabels = [
['1e-16', '1e-14', '1e-12', '1e-10', '1e-8', '1e-6', '1e-4', '1e-2', '1e-0', '1e+2'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1']
]
# %% Generate and save NeuroTron-vs-SGD figure
save = True
for i in range(tron_error_loaded.shape[1]):
plt.figure(figsize=[8, 5])
xrange = range(1, tron_error_loaded.shape[0]+1)
labels = ['Neurotron', 'SGD']
plt.plot(
xrange,
np.log10(tron_error_loaded[:, i]),
linewidth=2.,
label=labels[0]
)
plt.plot(
xrange,
np.log10(sgd_error_loaded[:, i]),
linewidth=2.,
label=labels[1]
)
plt.ylim(ylims[i])
plt.title(r'Normal data ($\sigma=3$), $\beta$ = {}'.format(sim06_setup['betalist'][i]))
plt.xlabel('Iteration', fontsize=fontsize)
plt.ylabel(r'Parameter error ($\log_{10}$ scale)', fontsize=fontsize)
xtickstep = 5000
xticks = range(0, sgd_error_loaded.shape[0]+xtickstep, xtickstep)
plt.xticks(ticks=xticks, fontsize=fontsize)
plt.yticks(ticks=yticks[i], labels=ylabels[i], fontsize=fontsize)
leg = plt.legend(frameon=False, fontsize=fontsize, handletextpad=0.5, ncol=2)
for legobj in leg.legendHandles:
legobj.set_linewidth(3.)
if save:
plt.savefig(
output_path.joinpath(
sim06_setup['name']+'_tron_vs_sgd_beta_val'+str(i+1).zfill(len(str(tron_error_loaded.shape[1])))+'.png'
),
dpi=300,
pil_kwargs={'quality': 100},
transparent=transparent,
bbox_inches='tight',
pad_inches=0.1
)
| 2,830 | 24.053097 | 119 | py |
neurotron_experiments | neurotron_experiments-main/plot_tron_theta_no_attack.py | # %% Import packages
import numpy as np
from matplotlib import pyplot as plt
from pathlib import Path
from sim_setup import output_path, sim01_setup, sim02_setup, sim03_setup, sim04_setup
# %% Create output path if it does not exist
output_path.mkdir(parents=True, exist_ok=True)
# %% Load numerical output
tron_error_loaded = []
for sim_setup in [sim01_setup, sim02_setup, sim03_setup, sim04_setup]:
tron_error_loaded.append(np.loadtxt(output_path.joinpath(sim_setup['name']+'_tron.csv'), delimiter=','))
# %% Set font size
fontsize = 13
# %% Set transparency
transparent = False
# %% Set axis limits and ticks
xticks = np.linspace(0, 40000, num=9)
xticklabels = [str(round(i)) for i in xticks]
ylims = [-16.2, 2.2]
yticks = [-16, -14, -12, -10, -8, -6, -4, -2, 0, 2]
yticklabels = ['1e-16', '1e-14', '1e-12', '1e-10', '1e-8', '1e-6', '1e-4', '1e-2', '1e-0', '1e+2']
# %%
save = True
fig, axes = plt.subplots(nrows=2, ncols=2, sharex=True, sharey=True, figsize=(16, 8))
plt.subplots_adjust(hspace=0.15, wspace=0.03)
xrange = range(1, tron_error_loaded[0].shape[0]+1)
labels = [
r'$\theta_\ast$ = {}'.format(sim01_setup['boundlist'][0]),
r'$\theta_\ast$ = {}'.format(sim01_setup['boundlist'][1])
]
for i in range(2):
axes[0, 0].plot(
xrange,
np.log10(tron_error_loaded[0][:, i]),
linewidth=2. # ,
# label=labels[0]
)
axes[0, 0].set_title(r'Normal data ($\sigma=1$)', fontsize=fontsize)
# axes[0, 0].set_ylabel(r'Parameter error ($\log_{10}$ scale)', fontsize=fontsize)
axes[0, 0].set_ylim(ylims)
axes[0, 0].set_yticks(yticks)
axes[0, 0].set_yticklabels(yticklabels, rotation=0, fontsize=fontsize)
for i in range(2):
axes[1, 0].plot(
xrange,
np.log10(tron_error_loaded[1][:, i]),
linewidth=2.,
label=labels[0]
)
axes[1, 0].set_title(r'Normal data ($\sigma=3$)', fontsize=fontsize)
# axes[1, 0].set_ylabel(r'Parameter error ($\log_{10}$ scale)', fontsize=fontsize)
axes[1, 0].set_xticks(xticks)
axes[1, 0].set_xticklabels(xticklabels, rotation=0, fontsize=fontsize)
axes[1, 0].set_ylim(ylims)
axes[1, 0].set_yticks(yticks)
axes[1, 0].set_yticklabels(yticklabels, rotation=0, fontsize=fontsize)
for i in range(2):
axes[0, 1].plot(
xrange,
np.log10(tron_error_loaded[2][:, i]),
linewidth=2. # ,
# label=labels[0]
)
axes[0, 1].set_title(r'Laplace data ($scale=2$)', fontsize=fontsize)
for i in range(2):
axes[1, 1].plot(
xrange,
np.log10(tron_error_loaded[3][:, i]),
linewidth=2. # ,
# label=labels[0]
)
axes[1, 1].set_title(r't-distributed data ($df=4$)', fontsize=fontsize)
axes[1, 1].set_xticks(xticks)
axes[1, 1].set_xticklabels(xticklabels, rotation=0, fontsize=fontsize)
# %%
if save:
plt.savefig(
output_path.joinpath('tron_theta_no_attack.png'),
dpi=300,
pil_kwargs={'quality': 100},
transparent=transparent,
bbox_inches='tight',
pad_inches=0.1
)
| 3,020 | 22.787402 | 108 | py |
neurotron_experiments | neurotron_experiments-main/plot_sim04.py | # %% Import packages
import numpy as np
from matplotlib import pyplot as plt
from pathlib import Path
from sim_setup import output_path, sim04_setup
# %% Create output path if it does not exist
output_path.mkdir(parents=True, exist_ok=True)
# %% Load numerical output
tron_error_loaded = np.loadtxt(output_path.joinpath(sim04_setup['name']+'_tron.csv'), delimiter=',')
sgd_error_loaded = np.loadtxt(output_path.joinpath(sim04_setup['name']+'_sgd.csv'), delimiter=',')
# %% Set font size
fontsize = 13
# %% Set transparency
transparent = False
# %% Set y axis limits and ticks
ylims = [
[-16.2, 2.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2]
]
yticks = [
[-16, -14, -12, -10, -8, -6, -4, -2, 0, 2],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1]
]
ylabels = [
['1e-16', '1e-14', '1e-12', '1e-10', '1e-8', '1e-6', '1e-4', '1e-2', '1e-0', '1e+2'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1']
]
# %% Generate and save NeuroTron-vs-SGD figure
save = True
for i in range(tron_error_loaded.shape[1]):
plt.figure(figsize=[8, 5])
xrange = range(1, tron_error_loaded.shape[0]+1)
labels = ['Neurotron', 'SGD']
plt.plot(
xrange,
np.log10(tron_error_loaded[:, i]),
linewidth=2.,
label=labels[0]
)
plt.plot(
xrange,
np.log10(sgd_error_loaded[:, i]),
linewidth=2.,
label=labels[1]
)
plt.ylim(ylims[i])
plt.title(r't-distributed data ($df=4$), $\theta_\ast$ = {}'.format(sim04_setup['boundlist'][i]))
plt.xlabel('Iteration', fontsize=fontsize)
plt.ylabel(r'Parameter error ($\log_{10}$ scale)', fontsize=fontsize)
xtickstep = 5000
xticks = range(0, sgd_error_loaded.shape[0]+xtickstep, xtickstep)
plt.xticks(ticks=xticks, fontsize=fontsize)
plt.yticks(ticks=yticks[i], labels=ylabels[i], fontsize=fontsize)
leg = plt.legend(frameon=False, fontsize=fontsize, handletextpad=0.5, ncol=2)
for legobj in leg.legendHandles:
legobj.set_linewidth(3.)
if save:
plt.savefig(
output_path.joinpath(
sim04_setup['name']+'_tron_vs_sgd_theta_val'+str(i+1).zfill(len(str(tron_error_loaded.shape[1])))+'.png'
),
dpi=300,
pil_kwargs={'quality': 100},
transparent=transparent,
bbox_inches='tight',
pad_inches=0.1
)
| 2,841 | 24.150442 | 120 | py |
neurotron_experiments | neurotron_experiments-main/run_sim02.py | # %% Import packages
import numpy as np
from pathlib import Path
from neurotron import NeuroTron
from sim_setup import output_path, sim02_setup
# %% Create output path if it does not exist
output_path.mkdir(parents=True, exist_ok=True)
# %% Set the seed
np.random.seed(sim02_setup['seed'])
# %% Instantiate NeuroTron
neurotron = NeuroTron(sample_data=sim02_setup['sample_data'])
# %% Run neurotron
tron_error, sgd_error = neurotron.run(
sim02_setup['filterlist'],
sim02_setup['dlist'],
sim02_setup['boundlist'],
sim02_setup['betalist'],
sim02_setup['etalist_tron'],
sim02_setup['blist'],
sim02_setup['width'],
sim02_setup['num_iters'],
etalist_sgd=sim02_setup['etalist_sgd']
)
# %% Save output
np.savetxt(output_path.joinpath(sim02_setup['name']+'_tron.csv'), np.transpose(tron_error), delimiter=',')
np.savetxt(output_path.joinpath(sim02_setup['name']+'_sgd.csv'), np.transpose(sgd_error), delimiter=',')
| 956 | 22.341463 | 106 | py |
neurotron_experiments | neurotron_experiments-main/plot_tron_merged_theta.py | # %% Import packages
import numpy as np
from matplotlib import pyplot as plt
from pathlib import Path
from sim_setup import output_path, sim01_setup, sim02_setup, sim03_setup, sim04_setup
# %% Create output path if it does not exist
output_path.mkdir(parents=True, exist_ok=True)
# %% Load numerical output
tron_error_loaded = []
for sim_setup in [sim01_setup, sim02_setup, sim03_setup, sim04_setup]:
tron_error_loaded.append(np.loadtxt(output_path.joinpath(sim_setup['name']+'_tron.csv'), delimiter=','))
# %% Set font size
fontsize = 13
# %% Set transparency
transparent = False
# %% Set y axis limits and ticks
ylims = [
# [-16.2, 2.2],
[-4.3, 1.2],
[-4.3, 1.2],
[-4.3, 1.2],
[-4.3, 1.2]
]
yticks = [
# [-16, -14, -12, -10, -8, -6, -4, -2, 0, 2],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1]
]
ylabels = [
# ['1e-16', '1e-14', '1e-12', '1e-10', '1e-8', '1e-6', '1e-4', '1e-2', '1e-0', '1e+2'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1']
]
# %% Set sub-plots titles
titles = [
r'Normal data ($\sigma=1$)',
r'Normal data ($\sigma=3$)',
r'Laplace data ($scale=2$)',
r't-distributed data ($df=4$)'
]
# %% Set line labels
sim01_theta_labels = [
r'$\theta_\ast$ = {}'.format(sim01_setup['boundlist'][0]),
r'$\theta_\ast$ = {}'.format(sim01_setup['boundlist'][1]),
r'$\theta_\ast$ = {}'.format(sim01_setup['boundlist'][2]),
r'$\theta_\ast$ = {}'.format(sim01_setup['boundlist'][3]),
r'$\theta_\ast$ = {}'.format(sim01_setup['boundlist'][4]),
r'$\theta_\ast$ = {}'.format(sim01_setup['boundlist'][5]),
r'$\theta_\ast$ = {}'.format(sim01_setup['boundlist'][6])
]
sim02_theta_labels = [
r'$\theta_\ast$ = {}'.format(sim02_setup['boundlist'][0]),
r'$\theta_\ast$ = {}'.format(sim02_setup['boundlist'][1]),
r'$\theta_\ast$ = {}'.format(sim02_setup['boundlist'][2]),
r'$\theta_\ast$ = {}'.format(sim02_setup['boundlist'][3]),
r'$\theta_\ast$ = {}'.format(sim02_setup['boundlist'][4]),
r'$\theta_\ast$ = {}'.format(sim02_setup['boundlist'][5]),
r'$\theta_\ast$ = {}'.format(sim02_setup['boundlist'][6])
]
sim03_theta_labels = [
r'$\theta_\ast$ = {}'.format(sim03_setup['boundlist'][0]),
r'$\theta_\ast$ = {}'.format(sim03_setup['boundlist'][1]),
r'$\theta_\ast$ = {}'.format(sim03_setup['boundlist'][2]),
r'$\theta_\ast$ = {}'.format(sim03_setup['boundlist'][3]),
r'$\theta_\ast$ = {}'.format(sim03_setup['boundlist'][4]),
r'$\theta_\ast$ = {}'.format(sim03_setup['boundlist'][5]),
r'$\theta_\ast$ = {}'.format(sim03_setup['boundlist'][6])
]
sim04_theta_labels = [
r'$\theta_\ast$ = {}'.format(sim04_setup['boundlist'][0]),
r'$\theta_\ast$ = {}'.format(sim04_setup['boundlist'][1]),
r'$\theta_\ast$ = {}'.format(sim04_setup['boundlist'][2]),
r'$\theta_\ast$ = {}'.format(sim04_setup['boundlist'][3]),
r'$\theta_\ast$ = {}'.format(sim04_setup['boundlist'][4]),
r'$\theta_\ast$ = {}'.format(sim04_setup['boundlist'][5]),
r'$\theta_\ast$ = {}'.format(sim04_setup['boundlist'][6])
]
theta_labels = [sim01_theta_labels, sim02_theta_labels, sim03_theta_labels, sim04_theta_labels]
# %% Selection of theta values to plot
theta_vals = [1, 2, 3, 4, 5, 6]
# %%
save = True
xrange = range(1, tron_error_loaded[0].shape[0]+1)
fig, axes = plt.subplots(nrows=4, ncols=1, sharex=True, figsize=(8, 18))
plt.subplots_adjust(hspace = 0.15)
for i in range(4):
for j in range(len(theta_vals)):
axes[i].plot(
xrange,
np.log10(tron_error_loaded[i][:, theta_vals[j]]),
linewidth=2.,
label=theta_labels[i][theta_vals[j]]
)
axes[i].set_ylim(ylims[i])
    axes[i].set_title(titles[i], y=1.0, fontsize=fontsize)
axes[i].set_yticks(yticks[i])
axes[i].set_yticklabels(ylabels[i], fontsize=fontsize)
axes[i].legend(loc='upper right', ncol=2, fontsize=fontsize, frameon=False)
xticks = np.linspace(0, 40000, num=9)
xticklabels = [str(round(i)) for i in xticks]
axes[3].set_xticks(xticks)
axes[3].set_xticklabels(xticklabels, rotation=0, fontsize=fontsize)
if save:
plt.savefig(
output_path.joinpath('all_sims_tron_merged_theta_vals.png'),
dpi=300,
pil_kwargs={'quality': 100},
transparent=transparent,
bbox_inches='tight',
pad_inches=0.1
)
| 4,623 | 28.832258 | 108 | py |
neurotron_experiments | neurotron_experiments-main/neurotron_torch.py | # %% [markdown]
# # Settings
# %%
import torch
import matplotlib.pyplot as plt
import numpy as np
import torch.nn as nn
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from torch.utils.data import DataLoader, Dataset
# %% [markdown]
# # The NeuroTron class
# %%
class NeuroTron(nn.Module):
def __init__(self, n, r, h, activation=nn.functional.relu, w_init='const', dtype=torch.float32):
"""
Arguments:
n: number of input features
r: number of parameters
h: hidden layer width
            activation: activation function
            w_init: weight initialization scheme, 'const' or 'unif'
            dtype: tensor dtype used for the parameters and sensing matrices
"""
super().__init__()
self.w = nn.Parameter(torch.empty(r, dtype=dtype), requires_grad=False)
self.M = torch.randn(r, n, dtype=dtype)
self.set_A(n, r, h, dtype=dtype)
self.set_w(w_init)
self.activation = activation
def set_A(self, n, r, h, dtype=torch.float32):
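        # Build h sensing matrices A_i = M + factor * C with factor ranging over
        # {-k, ..., -1, 1, ..., k}; all h slices are filled only when h is even.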
self.A = torch.empty(h, r, n, dtype=dtype)
C = torch.randn(r, n, dtype=dtype)
k = h // 2
i = 0
for factor in range(-k, k+1):
if factor != 0:
Z = self.M + factor * C
self.A[i, :, :] = Z
i += 1
def set_w(self, init):
if init == 'const':
nn.init.constant_(self.w, 1.)
elif init == 'unif':
nn.init.uniform_(self.w)
def num_A(self):
return self.A.shape[0]
def forward(self, x):
postactivation = 0.
for i in range(self.num_A()):
preactivation = torch.matmul(torch.matmul(self.w, self.A[i, :, :]), x.t())
postactivation += self.activation(preactivation)
return postactivation / self.num_A()
def gradient(self, x, output, y):
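        # NeuroTron-style update direction: the fixed matrix M stands in for the
        # true gradient of the output with respect to w (no autograd is used).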
return torch.matmul(self.M, torch.matmul(y - output, x) / x.shape[0])
def update_parameters(self, x, output, y, stepsize):
self.w.data.add_(stepsize * self.gradient(x, output, y))
def train(self, train_loader, stepsize, loss, log_step=200, test_loader=None):
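        # Note: this shadows nn.Module.train(); it performs one pass over
        # train_loader and logs losses every `log_step` batches.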
train_losses, test_losses = [], []
for train_batch_idx, (train_data, train_targets) in enumerate(train_loader):
train_output = self.forward(train_data)
self.update_parameters(train_data, train_output, train_targets, stepsize)
if (train_batch_idx % log_step == 0):
train_losses.append(loss(train_targets, train_output))
if (test_loader is not None):
test_data, test_targets = next(iter(test_loader))
test_output = self.forward(test_data)
                    test_losses.append(loss(test_targets, test_output))
if (test_loader is not None):
test_losses = torch.stack(test_losses)
return torch.stack(train_losses), test_losses
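# %% [markdown]
# Minimal shape check for the forward pass (an illustrative sketch, not part of the
# original experiments; the `_check_*` names below are ad hoc): a batch of 5 inputs
# with 6 features should map to 5 scalar outputs.

# %%
_check_model = NeuroTron(n=6, r=4, h=10, dtype=torch.float32)
_check_output = _check_model(torch.randn(5, 6))
assert _check_output.shape == torch.Size([5])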
# %% [markdown]
# # The PoisonedDataset class
# %%
class PoisonedDataset(Dataset):
def __init__(self, x, y, beta, theta):
self.x = x
self.y = y
self.beta = beta
self.theta = theta
def attack(self, y):
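        # With probability beta each target receives additive noise drawn uniformly
        # from [-theta, theta]; otherwise it is returned unchanged.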
a = torch.bernoulli(torch.full_like(y, self.beta))
xi = torch.distributions.uniform.Uniform(torch.full_like(y, -self.theta), torch.full_like(y, self.theta)).sample()
return y + a * xi
def __repr__(self):
return f'PoisonedDataset'
def __len__(self):
return len(self.x)
def __getitem__(self, i):
return self.x[i], self.attack(self.y[i])
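# %% [markdown]
# Quick sanity check of the poisoning model (an illustrative sketch with ad hoc
# `_toy_*` names, not part of the original experiments): with beta=1 every target
# is perturbed, and the perturbation stays within [-theta, theta].

# %%
_toy_dataset = PoisonedDataset(torch.zeros(4, 2), torch.zeros(4), beta=1.0, theta=0.25)
_, _toy_target = _toy_dataset[0]
assert -0.25 <= float(_toy_target) <= 0.25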
# %% [markdown]
# # Standard normal example
# %% [markdown]
# ## Prepare the data
# %%
num_samples = 125000
num_features = 100
sampling_distribution = torch.distributions.multivariate_normal.MultivariateNormal(
torch.zeros(num_features, dtype=torch.float32), torch.eye(num_features, dtype=torch.float32)
)
normal_data = sampling_distribution.sample([num_samples])
normal_targets = torch.stack([sampling_distribution.log_prob(normal_data[i, :]).exp() for i in range(num_samples)], dim=0)
# normal_targets = normal_data.norm(p=2, dim=1)
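# The regression targets are the multivariate normal density evaluated at each
# sample (log_prob(...).exp()); the commented line above is an alternative target
# based on the Euclidean norm of each input.
# (A vectorized equivalent would be sampling_distribution.log_prob(normal_data).exp().)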
print(normal_data.shape, normal_targets.shape)
# %%
x_train, x_test, y_train, y_test = train_test_split(normal_data, normal_targets, test_size=0.2)
print(x_train.shape, x_test.shape, y_train.shape, y_test.shape)
# %%
beta = 0.5
theta = 0.125
train_dataset = PoisonedDataset(x_train, y_train, beta=beta, theta=theta)
test_dataset = PoisonedDataset(x_test, y_test, beta=beta, theta=theta)
# %%
train_batch_size = 16
test_batch_size = 3 * train_batch_size
train_loader = DataLoader(train_dataset, batch_size=train_batch_size, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=test_batch_size, shuffle=True)
# %% [markdown]
# ## Instantiate NeuroTron class
# %%
neurotron = NeuroTron(n=num_features, r=25, h=10, dtype=torch.float32)
# %% [markdown]
# ## Training
# %%
num_epochs = 2
train_losses = []
test_losses = []
verbose_msg = 'Train epoch {:' + str(len(str(num_epochs))) + '} of {:' + str(len(str(num_epochs))) +'}'
for epoch in range(num_epochs):
print(verbose_msg.format(epoch+1, num_epochs))
train_losses_in_epoch, test_losses_in_epoch = neurotron.train(
train_loader, stepsize=0.0001, loss=nn.MSELoss(reduction='mean'), log_step=10, test_loader=test_loader
)
train_losses.append(train_losses_in_epoch)
test_losses.append(test_losses_in_epoch)
train_losses = torch.stack(train_losses, dim=0)
test_losses = torch.stack(test_losses, dim=0)
# %% [markdown]
# ## Plotting training and test loss
# %%
plt.plot(torch.flatten(train_losses), label='Train loss')
plt.plot(torch.flatten(test_losses), label='Test loss')
plt.yscale('log')
plt.legend(loc='upper right')
# %% [markdown]
# # California housing example
# %% [markdown]
# ## Prepare the data
# %%
california_housing = fetch_california_housing(as_frame=True)
# california_housing.frame
# california_housing.data
# california_housing.target
# %%
x_train, x_test, y_train, y_test = train_test_split(california_housing.data, california_housing.target, test_size=0.25)
# %%
x_train = StandardScaler().fit_transform(x_train.to_numpy(dtype=np.float32))
x_test = StandardScaler().fit_transform(x_test.to_numpy(dtype=np.float32))
y_train = y_train.to_numpy(dtype=np.float32)
y_test = y_test.to_numpy(dtype=np.float32)
print(x_train.shape, x_test.shape, y_train.shape, y_test.shape)
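# Note that the train and test splits are standardized with independently fitted
# scalers here, rather than reusing the training-set statistics for the test set.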
# %%
beta = 0.
theta = 0.01
train_dataset = PoisonedDataset(torch.from_numpy(x_train), torch.from_numpy(y_train), beta=beta, theta=theta)
# test_dataset = PoisonedDataset(torch.from_numpy(x_test), torch.from_numpy(y_test), beta=beta, theta=theta)
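# With beta = 0. the training targets are left unpoisoned in this experiment;
# theta only has an effect when beta > 0.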
# %%
train_loader = DataLoader(train_dataset, batch_size=16, shuffle=True)
# test_loader = DataLoader(test_dataset, batch_size=len(test_dataset), shuffle=False)
# %% [markdown]
# ## Instantiate NeuroTron class
# %%
neurotron = NeuroTron(n=8, r=6, h=10, dtype=torch.float32)
# %% [markdown]
# ## Training
# %%
num_epochs = 10
train_losses = []
verbose_msg = 'Train epoch {:' + str(len(str(num_epochs))) + '} of {:' + str(len(str(num_epochs))) +'}'
for epoch in range(num_epochs):
print(verbose_msg.format(epoch+1, num_epochs))
train_losses_in_epoch, _ = neurotron.train(
train_loader, stepsize=0.00001, loss=nn.MSELoss(reduction='mean'), log_step=10, test_loader=None
)
train_losses.append(train_losses_in_epoch)
train_losses = torch.stack(train_losses, dim=0)
# %% [markdown]
# ## Plotting training loss
# %%
plt.plot(torch.flatten(train_losses), label="Train loss")
plt.yscale('log')
plt.legend(loc='upper right')
# %% [markdown]
# ## Printing dimensions of various tensors
# %%
x, y = next(iter(train_loader))
# %%
x.shape, x.shape[0], x.shape[1], y.shape
# %%
neurotron.w.shape, neurotron.A.shape, neurotron.M.shape
# %%
output = neurotron.forward(x)
# %%
output.shape
# %%
neurotron.w.shape, neurotron.A[0, :, :].shape, x.t().shape, x.shape
# %%
torch.matmul(neurotron.w, neurotron.A[0, :, :]).shape
# %%
torch.matmul(torch.matmul(neurotron.w, neurotron.A[0, :, :]), x.t()).shape
# %%
| 8,128 | 25.478827 | 122 | py |
neurotron_experiments | neurotron_experiments-main/plot_tron_q_assist_sim.py | # %% Import packages
import numpy as np
from matplotlib import pyplot as plt
from pathlib import Path
from sim_setup import output_path
# %% Create output path if it does not exist
output_path.mkdir(parents=True, exist_ok=True)
# %% Load numerical output
tron_neuron1_error_loaded = []
for k in range(3):
tron_neuron1_error_loaded.append(np.loadtxt(output_path.joinpath('q_assist_neuro1_'+str(k)+'_tron.csv'), delimiter=','))
tron_neuron10_error_loaded = []
for k in range(3):
tron_neuron10_error_loaded.append(np.loadtxt(output_path.joinpath('q_assist_neuro10_'+str(k)+'_tron.csv'), delimiter=','))
# %% Set font size
fontsize = 13
# %% Set transparency
transparent = False
# %% Set axis limits and ticks
xticks = np.linspace(0, 40000, num=9)
xlabels = [str(round(i)) for i in xticks]
ylims = [
[-6.2, 1.2],
[-13.2, 1.2]
]
yticks = [
[-6, -5, -4, -3, -2, -1, 0, 1],
[-13, -11, -9, -7, -5, -3, -1, 1]
]
ylabels = [
['1e-6', '1e-5', '1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-13', '1e-11', '1e-9', '1e-7', '1e-5', '1e-3', '1e-1', '1e+1']
]
# %%
iterations = 4*(10**4)
beta_val = 0.05
theta_vals = [0, 0.5, 1]
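# The three loaded runs per file family correspond to the theta_* values above at
# the fixed beta; the neuro1/neuro10 families are the q=1 and q=10 runs referenced
# in the commented-out titles below.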
# %%
save = True
xrange = range(1, iterations+1)
plt.figure(figsize=[8, 4])
for i in range(3):
plt.plot(
xrange,
np.log10(tron_neuron1_error_loaded[i]),
linewidth=2.,
label=r'$\beta = {}$, $\theta_\ast$ = {}'.format(beta_val, theta_vals[i])
)
plt.ylim(ylims[0])
# plt.title("Neurotron (q=1)", fontsize=fontsize)
# plt.xlabel('Iteration', fontsize=fontsize)
# plt.ylabel(r'Parameter error ($\log_{10}$ scale)', fontsize=fontsize)
plt.xticks(ticks=xticks, labels=xlabels, fontsize=fontsize)
plt.yticks(ticks=yticks[0], labels=ylabels[0], fontsize=fontsize)
plt.legend(frameon=False, fontsize=fontsize, handletextpad=0.5, ncol=1)
if save:
plt.savefig(
output_path.joinpath('q_assist_neuro1_tron.png'),
dpi=300,
pil_kwargs={'quality': 100},
transparent=transparent,
bbox_inches='tight',
pad_inches=0.1
)
# %%
xrange = range(1, iterations+1)
plt.figure(figsize=[8, 4])
for i in range(3):
plt.plot(
xrange,
np.log10(tron_neuron10_error_loaded[i]),
linewidth=2.,
label=r'$\beta = {}$, $\theta_\ast$ = {}'.format(beta_val, theta_vals[i])
)
plt.ylim(ylims[1])
# plt.title("Neurotron (q=10)", fontsize=fontsize)
# plt.xlabel('Iteration', fontsize=fontsize)
# plt.ylabel(r'Parameter error ($\log_{10}$ scale)', fontsize=fontsize)
plt.xticks(ticks=xticks, labels=xlabels, fontsize=fontsize)
plt.yticks(ticks=yticks[1], labels=ylabels[1], fontsize=fontsize)
plt.legend(frameon=False, fontsize=fontsize, handletextpad=0.5, ncol=1)
if save:
plt.savefig(
output_path.joinpath('q_assist_neuro10_tron.png'),
dpi=300,
pil_kwargs={'quality': 100},
transparent=transparent,
bbox_inches='tight',
pad_inches=0.1
)
| 2,981 | 20.608696 | 126 | py |
neurotron_experiments | neurotron_experiments-main/plot_merged_sim05.py | # %% Import packages
import numpy as np
from matplotlib import pyplot as plt
from pathlib import Path
from sim_setup import output_path, sim05_setup
# %% Create output path if it does not exist
output_path.mkdir(parents=True, exist_ok=True)
# %% Load numerical output
tron_error_loaded = np.loadtxt(output_path.joinpath(sim05_setup['name']+'_tron.csv'), delimiter=',')
sgd_error_loaded = np.loadtxt(output_path.joinpath(sim05_setup['name']+'_sgd.csv'), delimiter=',')
# %% Set font size
fontsize = 13
# %% Set transparency
transparent = False
# %% Set y axis limits and ticks
ylims = [
[-16.2, 2.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2]
]
yticks = [
[-16, -14, -12, -10, -8, -6, -4, -2, 0, 2],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1]
]
ylabels = [
['1e-16', '1e-14', '1e-12', '1e-10', '1e-8', '1e-6', '1e-4', '1e-2', '1e-0', '1e+2'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1']
]
# %% Selection of theta values to plot
beta_vals = [1, 3, 4, 5, 6]
# %%
save = True
xrange = range(1, tron_error_loaded.shape[0]+1)
labels = ['Neuro-Tron', 'SGD']
fig, axes = plt.subplots(nrows=len(beta_vals), ncols=1, sharex=True, figsize=(8, 18))
plt.subplots_adjust(hspace = 0.15)
for i in range(len(beta_vals)):
axes[i].plot(
xrange,
np.log10(tron_error_loaded[:, beta_vals[i]]),
linewidth=2.,
label=labels[0]
)
axes[i].plot(
xrange,
np.log10(sgd_error_loaded[:, beta_vals[i]]),
linewidth=2.,
label=labels[1]
)
axes[i].set_ylim(ylims[beta_vals[i]])
axes[i].set_title(
r'$\beta$ = {}'.format(sim05_setup['betalist'][beta_vals[i]]), y=1.0, pad=-23, fontsize=fontsize
)
axes[i].set_yticks(yticks[beta_vals[i]])
axes[i].set_yticklabels(ylabels[beta_vals[i]], fontsize=fontsize)
axes[i].legend(labels=labels, loc='upper right', ncol=1, fontsize=fontsize, frameon=False)
xticks = np.linspace(0, 40000, num=9)
xticklabels = [str(round(i)) for i in xticks]
axes[4].set_xticks(xticks)
axes[4].set_xticklabels(xticklabels, rotation=0, fontsize=fontsize)
if save:
plt.savefig(
output_path.joinpath(
sim05_setup['name']+'_tron_vs_sgd_merged_beta_vals.png'
),
dpi=300,
pil_kwargs={'quality': 100},
transparent=transparent,
bbox_inches='tight',
pad_inches=0.1
)
| 2,834 | 23.025424 | 104 | py |
neurotron_experiments | neurotron_experiments-main/sim_setup.py | # %% Import packages
import numpy as np
from pathlib import Path
# %% Set output path
output_path = Path().joinpath('output')
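# Each simulation setup below shares the same keys: 'sample_data' draws the input
# data, 'filterlist' and 'dlist' give the filter size r and input dimension n,
# 'boundlist' the attack magnitude theta_*, 'betalist' the corruption probability,
# 'etalist_tron'/'etalist_sgd' the learning rates, 'blist' the batch size b,
# 'width' the network width k, and 'num_iters' the iteration budget per run.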
# %% Setup for simulation 1: data ~ normal(mu=0, sigma=1), varying theta_{*}
sim01_setup = {
'sample_data' : lambda s : np.random.normal(loc=0.0, scale=1.0, size=s),
'filterlist' : [25 for _ in range(7)], # r: filter size
'dlist' : [100 for _ in range(7)], # n: input dimension
'boundlist' : [0, 0.125, 0.25, 0.5, 1., 2., 4.], # theta_{*}
'betalist' : [0.5 for _ in range(7)], # beta
'etalist_tron' : [0.0001 for _ in range(7)], # eta: learning rate for NeuroTron
'blist' : [16 for _ in range(7)], # b
'width' : 10, # k: width
'num_iters' : 40000,
'etalist_sgd' : [0.0001 for _ in range(7)], # eta: learning rate for SGD
'seed' : 1,
'name' : 'sim01'
}
# %% Setup for simulation 2: data ~ normal(mu=0, sigma=3), varying theta_{*}
sim02_setup = {
'sample_data' : lambda s : np.random.normal(loc=0.0, scale=3.0, size=s),
'filterlist' : [25 for _ in range(7)], # r: filter size
'dlist' : [50 for _ in range(7)], # n: input dimension
'boundlist' : [0, 0.125, 0.25, 0.5, 1., 2., 4.], # theta_{*}
'betalist' : [0.5 for _ in range(7)], # beta
'etalist_tron' : [0.00005 for _ in range(7)], # eta: learning rate for NeuroTron
'blist' : [16 for _ in range(7)], # b
'width' : 10, # k: width
'num_iters' : 40000,
'etalist_sgd' : [0.00005 for _ in range(7)], # eta: learning rate for SGD
'seed' : 2,
'name' : 'sim02'
}
# %% Setup for simulation 3: data ~ Laplace(loc=0, scale=2), varying theta_{*}
sim03_setup = {
'sample_data' : lambda s : np.random.laplace(loc=0.0, scale=2.0, size=s),
'filterlist' : [25 for _ in range(7)], # r: filter size
'dlist' : [50 for _ in range(7)], # n: input dimension
'boundlist' : [0, 0.125, 0.25, 0.5, 1., 2., 4.], # theta_{*}
'betalist' : [0.5 for _ in range(7)], # beta
'etalist_tron' : [0.00005 for _ in range(7)], # eta: learning rate for NeuroTron
'blist' : [16 for _ in range(7)], # b
'width' : 10, # k: width
'num_iters' : 40000,
'etalist_sgd' : [0.00005 for _ in range(7)], # eta: learning rate for SGD
'seed' : 3,
'name' : 'sim03'
}
# %% Setup for simulation 4: data ~ student(df=4), varying theta_{*}
sim04_setup = {
'sample_data' : lambda s : np.random.standard_t(4., size=s),
'filterlist' : [25 for _ in range(7)], # r: filter size
'dlist' : [100 for _ in range(7)], # n: input dimension
'boundlist' : [0, 0.125, 0.25, 0.5, 1., 2., 4.], # theta_{*}
'betalist' : [0.5 for _ in range(7)], # beta
'etalist_tron' : [0.0001 for _ in range(7)], # eta: learning rate for NeuroTron
'blist' : [16 for _ in range(7)], # b
'width' : 10, # k: width
'num_iters' : 40000,
'etalist_sgd' : [0.0001 for _ in range(7)], # eta: learning rate for SGD
'seed' : 4,
'name' : 'sim04'
}
# %% Setup for simulation 5: data ~ normal(mu=0, sigma=1), varying beta
sim05_setup = {
'sample_data' : lambda s : np.random.normal(loc=0.0, scale=1.0, size=s),
'filterlist' : [25 for _ in range(7)], # r: filter size
'dlist' : [100 for _ in range(7)], # n: input dimension
'boundlist' : [0.25 for _ in range(7)], # theta_{*}
'betalist' : [0., 0.005, 0.05, 0.1, 0.2, 0.5, 0.9], # beta
'etalist_tron' : [0.0001 for _ in range(7)], # eta: learning rate for NeuroTron
'blist' : [16 for _ in range(7)], # b
'width' : 10, # k: width
'num_iters' : 40000,
'etalist_sgd' : [0.0001 for _ in range(7)], # eta: learning rate for SGD
'seed' : 5,
'name' : 'sim05'
}
# %% Setup for simulation 6: data ~ normal(mu=0, sigma=3), varying beta
sim06_setup = {
'sample_data' : lambda s : np.random.normal(loc=0.0, scale=3.0, size=s),
'filterlist' : [25 for _ in range(7)], # r: filter size
'dlist' : [50 for _ in range(7)], # n: input dimension
'boundlist' : [0.25 for _ in range(7)], # theta_{*}
'betalist' : [0., 0.005, 0.05, 0.1, 0.2, 0.5, 0.9], # beta
'etalist_tron' : [0.00005 for _ in range(7)], # eta: learning rate for NeuroTron
'blist' : [16 for _ in range(7)], # b
'width' : 10, # k: width
'num_iters' : 40000,
'etalist_sgd' : [0.00005 for _ in range(7)], # eta: learning rate for SGD
'seed' : 6,
'name' : 'sim06'
}
# %% Setup for simulation 7: data ~ Laplace(loc=0, scale=2), varying beta
sim07_setup = {
'sample_data' : lambda s : np.random.laplace(loc=0.0, scale=2.0, size=s),
'filterlist' : [25 for _ in range(7)], # r: filter size
'dlist' : [50 for _ in range(7)], # n: input dimension
'boundlist' : [0.25 for _ in range(7)], # theta_{*}
'betalist' : [0., 0.005, 0.05, 0.1, 0.2, 0.5, 0.9], # beta
'etalist_tron' : [0.00005 for _ in range(7)], # eta: learning rate for NeuroTron
'blist' : [16 for _ in range(7)], # b
'width' : 10, # k: width
'num_iters' : 40000,
'etalist_sgd' : [0.00005 for _ in range(7)], # eta: learning rate for SGD
'seed' : 7,
'name' : 'sim07'
}
# %% Setup for simulation 8: data ~ student(df=4), varying beta
sim08_setup = {
'sample_data' : lambda s : np.random.standard_t(4., size=s),
'filterlist' : [25 for _ in range(7)], # r: filter size
'dlist' : [100 for _ in range(7)], # n: input dimension
'boundlist' : [0.25 for _ in range(7)], # theta_{*}
'betalist' : [0., 0.005, 0.05, 0.1, 0.2, 0.5, 0.9], # beta
'etalist_tron' : [0.0001 for _ in range(7)], # eta: learning rate for NeuroTron
'blist' : [16 for _ in range(7)], # b
'width' : 10, # k: width
'num_iters' : 40000,
'etalist_sgd' : [0.0001 for _ in range(7)], # eta: learning rate for SGD
'seed' : 8,
'name' : 'sim08'
}
| 5,716 | 38.157534 | 84 | py |
neurotron_experiments | neurotron_experiments-main/plot_merged_sim01.py | # %% Import packages
import numpy as np
from matplotlib import pyplot as plt
from pathlib import Path
from sim_setup import output_path, sim01_setup
# %% Create output path if it does not exist
output_path.mkdir(parents=True, exist_ok=True)
# %% Load numerical output
tron_error_loaded = np.loadtxt(output_path.joinpath(sim01_setup['name']+'_tron.csv'), delimiter=',')
sgd_error_loaded = np.loadtxt(output_path.joinpath(sim01_setup['name']+'_sgd.csv'), delimiter=',')
# %% Set font size
fontsize = 13
# %% Set transparency
transparent = False
# %% Set y axis limits and ticks
ylims = [
[-16.2, 2.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2]
]
yticks = [
[-16, -14, -12, -10, -8, -6, -4, -2, 0, 2],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1]
]
ylabels = [
['1e-16', '1e-14', '1e-12', '1e-10', '1e-8', '1e-6', '1e-4', '1e-2', '1e-0', '1e+2'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1']
]
# %% Selection of theta values to plot
theta_vals = [1, 3, 4, 5, 6]
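# Only a subset of the seven theta_* settings (indices 1 and 3-6) is shown in the
# merged figure.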
# %%
save = True
xrange = range(1, tron_error_loaded.shape[0]+1)
labels = ['Neuro-Tron', 'SGD']
fig, axes = plt.subplots(nrows=len(theta_vals), ncols=1, sharex=True, figsize=(8, 18))
plt.subplots_adjust(hspace = 0.15)
for i in range(len(theta_vals)):
axes[i].plot(
xrange,
np.log10(tron_error_loaded[:, theta_vals[i]]),
linewidth=2.,
label=labels[0]
)
axes[i].plot(
xrange,
np.log10(sgd_error_loaded[:, theta_vals[i]]),
linewidth=2.,
label=labels[1]
)
axes[i].set_ylim(ylims[theta_vals[i]])
axes[i].set_title(
r'$\theta_\ast$ = {}'.format(sim01_setup['boundlist'][theta_vals[i]]), y=1.0, pad=-23, fontsize=fontsize
)
axes[i].set_yticks(yticks[theta_vals[i]])
axes[i].set_yticklabels(ylabels[theta_vals[i]], fontsize=fontsize)
axes[i].legend(labels=labels, loc='upper right', ncol=1, fontsize=fontsize, frameon=False)
xticks = np.linspace(0, 40000, num=9)
xticklabels = [str(round(i)) for i in xticks]
axes[4].set_xticks(xticks)
axes[4].set_xticklabels(xticklabels, rotation=0, fontsize=fontsize)
if save:
plt.savefig(
output_path.joinpath(
sim01_setup['name']+'_tron_vs_sgd_merged_theta_vals.png'
),
dpi=300,
pil_kwargs={'quality': 100},
transparent=transparent,
bbox_inches='tight',
pad_inches=0.1
)
| 2,851 | 23.169492 | 112 | py |
neurotron_experiments | neurotron_experiments-main/plot_sim08.py | # %% Import packages
import numpy as np
from matplotlib import pyplot as plt
from pathlib import Path
from sim_setup import output_path, sim08_setup
# %% Create output path if it does not exist
output_path.mkdir(parents=True, exist_ok=True)
# %% Load numerical output
tron_error_loaded = np.loadtxt(output_path.joinpath(sim08_setup['name']+'_tron.csv'), delimiter=',')
sgd_error_loaded = np.loadtxt(output_path.joinpath(sim08_setup['name']+'_sgd.csv'), delimiter=',')
# %% Set font size
fontsize = 13
# %% Set transparency
transparent = False
# %% Set y axis limits and ticks
ylims = [
[-16.2, 2.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2]
]
yticks = [
[-16, -14, -12, -10, -8, -6, -4, -2, 0, 2],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1]
]
ylabels = [
['1e-16', '1e-14', '1e-12', '1e-10', '1e-8', '1e-6', '1e-4', '1e-2', '1e-0', '1e+2'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1']
]
# %% Generate and save NeuroTron-vs-SGD figure
save = True
for i in range(tron_error_loaded.shape[1]):
plt.figure(figsize=[8, 5])
xrange = range(1, tron_error_loaded.shape[0]+1)
labels = ['Neurotron', 'SGD']
plt.plot(
xrange,
np.log10(tron_error_loaded[:, i]),
linewidth=2.,
label=labels[0]
)
plt.plot(
xrange,
np.log10(sgd_error_loaded[:, i]),
linewidth=2.,
label=labels[1]
)
plt.ylim(ylims[i])
plt.title(r't-distributed data ($df=4$), $\beta$ = {}'.format(sim08_setup['betalist'][i]))
plt.xlabel('Iteration', fontsize=fontsize)
plt.ylabel(r'Parameter error ($\log_{10}$ scale)', fontsize=fontsize)
xtickstep = 5000
xticks = range(0, sgd_error_loaded.shape[0]+xtickstep, xtickstep)
plt.xticks(ticks=xticks, fontsize=fontsize)
plt.yticks(ticks=yticks[i], labels=ylabels[i], fontsize=fontsize)
leg = plt.legend(frameon=False, fontsize=fontsize, handletextpad=0.5, ncol=2)
for legobj in leg.legendHandles:
legobj.set_linewidth(3.)
if save:
plt.savefig(
output_path.joinpath(
sim08_setup['name']+'_tron_vs_sgd_beta_val'+str(i+1).zfill(len(str(tron_error_loaded.shape[1])))+'.png'
),
dpi=300,
pil_kwargs={'quality': 100},
transparent=transparent,
bbox_inches='tight',
pad_inches=0.1
)
| 2,833 | 24.079646 | 119 | py |
neurotron_experiments | neurotron_experiments-main/plot_merged_sim03.py | # %% Import packages
import numpy as np
from matplotlib import pyplot as plt
from pathlib import Path
from sim_setup import output_path, sim03_setup
# %% Create output path if it does not exist
output_path.mkdir(parents=True, exist_ok=True)
# %% Load numerical output
tron_error_loaded = np.loadtxt(output_path.joinpath(sim03_setup['name']+'_tron.csv'), delimiter=',')
sgd_error_loaded = np.loadtxt(output_path.joinpath(sim03_setup['name']+'_sgd.csv'), delimiter=',')
# %% Set font size
fontsize = 13
# %% Set transparency
transparent = False
# %% Set y axis limits and ticks
ylims = [
[-16.2, 2.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2]
]
yticks = [
[-16, -14, -12, -10, -8, -6, -4, -2, 0, 2],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1]
]
ylabels = [
['1e-16', '1e-14', '1e-12', '1e-10', '1e-8', '1e-6', '1e-4', '1e-2', '1e-0', '1e+2'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1']
]
# %% Selection of theta values to plot
theta_vals = [1, 3, 4, 5, 6]
# %%
save = True
xrange = range(1, tron_error_loaded.shape[0]+1)
labels = ['Neuro-Tron', 'SGD']
fig, axes = plt.subplots(nrows=len(theta_vals), ncols=1, sharex=True, figsize=(8, 18))
plt.subplots_adjust(hspace = 0.15)
for i in range(len(theta_vals)):
axes[i].plot(
xrange,
np.log10(tron_error_loaded[:, theta_vals[i]]),
linewidth=2.,
label=labels[0]
)
axes[i].plot(
xrange,
np.log10(sgd_error_loaded[:, theta_vals[i]]),
linewidth=2.,
label=labels[1]
)
axes[i].set_ylim(ylims[theta_vals[i]])
axes[i].set_title(
r'$\theta_\ast$ = {}'.format(sim03_setup['boundlist'][theta_vals[i]]), y=1.0, pad=-23, fontsize=fontsize
)
axes[i].set_yticks(yticks[theta_vals[i]])
axes[i].set_yticklabels(ylabels[theta_vals[i]], fontsize=fontsize)
axes[i].legend(labels=labels, loc='upper right', ncol=1, fontsize=fontsize, frameon=False)
xticks = np.linspace(0, 40000, num=9)
xticklabels = [str(round(i)) for i in xticks]
axes[4].set_xticks(xticks)
axes[4].set_xticklabels(xticklabels, rotation=0, fontsize=fontsize)
if save:
plt.savefig(
output_path.joinpath(
sim03_setup['name']+'_tron_vs_sgd_merged_theta_vals.png'
),
dpi=300,
pil_kwargs={'quality': 100},
transparent=transparent,
bbox_inches='tight',
pad_inches=0.1
)
| 2,851 | 23.169492 | 112 | py |
neurotron_experiments | neurotron_experiments-main/plot_sim07.py | # %% Import packages
import numpy as np
from matplotlib import pyplot as plt
from pathlib import Path
from sim_setup import output_path, sim07_setup
# %% Create output path if it does not exist
output_path.mkdir(parents=True, exist_ok=True)
# %% Load numerical output
tron_error_loaded = np.loadtxt(output_path.joinpath(sim07_setup['name']+'_tron.csv'), delimiter=',')
sgd_error_loaded = np.loadtxt(output_path.joinpath(sim07_setup['name']+'_sgd.csv'), delimiter=',')
# %% Set font size
fontsize = 13
# %% Set transparency
transparent = False
# %% Set y axis limits and ticks
ylims = [
[-16.2, 2.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2]
]
yticks = [
[-16, -14, -12, -10, -8, -6, -4, -2, 0, 2],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1]
]
ylabels = [
['1e-16', '1e-14', '1e-12', '1e-10', '1e-8', '1e-6', '1e-4', '1e-2', '1e-0', '1e+2'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1']
]
# %% Generate and save NeuroTron-vs-SGD figure
save = True
for i in range(tron_error_loaded.shape[1]):
plt.figure(figsize=[8, 5])
xrange = range(1, tron_error_loaded.shape[0]+1)
labels = ['Neurotron', 'SGD']
plt.plot(
xrange,
np.log10(tron_error_loaded[:, i]),
linewidth=2.,
label=labels[0]
)
plt.plot(
xrange,
np.log10(sgd_error_loaded[:, i]),
linewidth=2.,
label=labels[1]
)
plt.ylim(ylims[i])
plt.title(r'Laplace data ($scale=2$), $\beta$ = {}'.format(sim07_setup['betalist'][i]))
plt.xlabel('Iteration', fontsize=fontsize)
plt.ylabel(r'Parameter error ($\log_{10}$ scale)', fontsize=fontsize)
xtickstep = 5000
xticks = range(0, sgd_error_loaded.shape[0]+xtickstep, xtickstep)
plt.xticks(ticks=xticks, fontsize=fontsize)
plt.yticks(ticks=yticks[i], labels=ylabels[i], fontsize=fontsize)
leg = plt.legend(frameon=False, fontsize=fontsize, handletextpad=0.5, ncol=2)
for legobj in leg.legendHandles:
legobj.set_linewidth(3.)
if save:
plt.savefig(
output_path.joinpath(
sim07_setup['name']+'_tron_vs_sgd_beta_val'+str(i+1).zfill(len(str(tron_error_loaded.shape[1])))+'.png'
),
dpi=300,
pil_kwargs={'quality': 100},
transparent=transparent,
bbox_inches='tight',
pad_inches=0.1
)
| 2,830 | 24.053097 | 119 | py |
neurotron_experiments | neurotron_experiments-main/plot_merged_sim08.py | # %% Import packages
import numpy as np
from matplotlib import pyplot as plt
from pathlib import Path
from sim_setup import output_path, sim08_setup
# %% Create output path if it does not exist
output_path.mkdir(parents=True, exist_ok=True)
# %% Load numerical output
tron_error_loaded = np.loadtxt(output_path.joinpath(sim08_setup['name']+'_tron.csv'), delimiter=',')
sgd_error_loaded = np.loadtxt(output_path.joinpath(sim08_setup['name']+'_sgd.csv'), delimiter=',')
# %% Set font size
fontsize = 13
# %% Set transparency
transparent = False
# %% Set y axis limits and ticks
ylims = [
[-16.2, 2.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2]
]
yticks = [
[-16, -14, -12, -10, -8, -6, -4, -2, 0, 2],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1]
]
ylabels = [
['1e-16', '1e-14', '1e-12', '1e-10', '1e-8', '1e-6', '1e-4', '1e-2', '1e-0', '1e+2'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1']
]
# %% Selection of beta values to plot
beta_vals = [1, 3, 4, 5, 6]
# %%
save = True
xrange = range(1, tron_error_loaded.shape[0]+1)
labels = ['Neuro-Tron', 'SGD']
fig, axes = plt.subplots(nrows=len(beta_vals), ncols=1, sharex=True, figsize=(8, 18))
plt.subplots_adjust(hspace = 0.15)
for i in range(len(beta_vals)):
axes[i].plot(
xrange,
np.log10(tron_error_loaded[:, beta_vals[i]]),
linewidth=2.,
label=labels[0]
)
axes[i].plot(
xrange,
np.log10(sgd_error_loaded[:, beta_vals[i]]),
linewidth=2.,
label=labels[1]
)
axes[i].set_ylim(ylims[beta_vals[i]])
axes[i].set_title(
r'$\beta$ = {}'.format(sim08_setup['betalist'][beta_vals[i]]), y=1.0, pad=-23, fontsize=fontsize
)
axes[i].set_yticks(yticks[beta_vals[i]])
axes[i].set_yticklabels(ylabels[beta_vals[i]], fontsize=fontsize)
axes[i].legend(labels=labels, loc='upper right', ncol=1, fontsize=fontsize, frameon=False)
xticks = np.linspace(0, 40000, num=9)
xticklabels = [str(round(i)) for i in xticks]
axes[4].set_xticks(xticks)
axes[4].set_xticklabels(xticklabels, rotation=0, fontsize=fontsize)
if save:
plt.savefig(
output_path.joinpath(
sim08_setup['name']+'_tron_vs_sgd_merged_beta_vals.png'
),
dpi=300,
pil_kwargs={'quality': 100},
transparent=transparent,
bbox_inches='tight',
pad_inches=0.1
)
| 2,834 | 23.025424 | 104 | py |
neurotron_experiments | neurotron_experiments-main/run_sim03.py | # %% Import packages
import numpy as np
from pathlib import Path
from neurotron import NeuroTron
from sim_setup import output_path, sim03_setup
# %% Create output path if it does not exist
output_path.mkdir(parents=True, exist_ok=True)
# %% Set the seed
np.random.seed(sim03_setup['seed'])
# %% Instantiate NeuroTron
neurotron = NeuroTron(sample_data=sim03_setup['sample_data'])
# %% Run neurotron
tron_error, sgd_error = neurotron.run(
sim03_setup['filterlist'],
sim03_setup['dlist'],
sim03_setup['boundlist'],
sim03_setup['betalist'],
sim03_setup['etalist_tron'],
sim03_setup['blist'],
sim03_setup['width'],
sim03_setup['num_iters'],
etalist_sgd=sim03_setup['etalist_sgd']
)
# %% Save output
np.savetxt(output_path.joinpath(sim03_setup['name']+'_tron.csv'), np.transpose(tron_error), delimiter=',')
np.savetxt(output_path.joinpath(sim03_setup['name']+'_sgd.csv'), np.transpose(sgd_error), delimiter=',')
| 956 | 22.341463 | 106 | py |
neurotron_experiments | neurotron_experiments-main/plot_merged_sim06.py | # %% Import packages
import numpy as np
from matplotlib import pyplot as plt
from pathlib import Path
from sim_setup import output_path, sim06_setup
# %% Create output path if it does not exist
output_path.mkdir(parents=True, exist_ok=True)
# %% Load numerical output
tron_error_loaded = np.loadtxt(output_path.joinpath(sim06_setup['name']+'_tron.csv'), delimiter=',')
sgd_error_loaded = np.loadtxt(output_path.joinpath(sim06_setup['name']+'_sgd.csv'), delimiter=',')
# %% Set font size
fontsize = 13
# %% Set transparency
transparent = False
# %% Set y axis limits and ticks
ylims = [
[-16.2, 2.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2]
]
yticks = [
[-16, -14, -12, -10, -8, -6, -4, -2, 0, 2],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1]
]
ylabels = [
['1e-16', '1e-14', '1e-12', '1e-10', '1e-8', '1e-6', '1e-4', '1e-2', '1e-0', '1e+2'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1']
]
# %% Selection of beta values to plot
beta_vals = [1, 3, 4, 5, 6]
# %%
save = True
xrange = range(1, tron_error_loaded.shape[0]+1)
labels = ['Neuro-Tron', 'SGD']
fig, axes = plt.subplots(nrows=len(beta_vals), ncols=1, sharex=True, figsize=(8, 18))
plt.subplots_adjust(hspace = 0.15)
for i in range(len(beta_vals)):
axes[i].plot(
xrange,
np.log10(tron_error_loaded[:, beta_vals[i]]),
linewidth=2.,
label=labels[0]
)
axes[i].plot(
xrange,
np.log10(sgd_error_loaded[:, beta_vals[i]]),
linewidth=2.,
label=labels[1]
)
axes[i].set_ylim(ylims[beta_vals[i]])
axes[i].set_title(
r'$\beta$ = {}'.format(sim06_setup['betalist'][beta_vals[i]]), y=1.0, pad=-23, fontsize=fontsize
)
axes[i].set_yticks(yticks[beta_vals[i]])
axes[i].set_yticklabels(ylabels[beta_vals[i]], fontsize=fontsize)
axes[i].legend(labels=labels, loc='upper right', ncol=1, fontsize=fontsize, frameon=False)
xticks = np.linspace(0, 40000, num=9)
xticklabels = [str(round(i)) for i in xticks]
axes[4].set_xticks(xticks)
axes[4].set_xticklabels(xticklabels, rotation=0, fontsize=fontsize)
if save:
plt.savefig(
output_path.joinpath(
sim06_setup['name']+'_tron_vs_sgd_merged_beta_vals.png'
),
dpi=300,
pil_kwargs={'quality': 100},
transparent=transparent,
bbox_inches='tight',
pad_inches=0.1
)
| 2,834 | 23.025424 | 104 | py |
neurotron_experiments | neurotron_experiments-main/run_sim08.py | # %% Import packages
import numpy as np
from pathlib import Path
from neurotron import NeuroTron
from sim_setup import output_path, sim08_setup
# %% Create output path if it does not exist
output_path.mkdir(parents=True, exist_ok=True)
# %% Set the seed
np.random.seed(sim08_setup['seed'])
# %% Instantiate NeuroTron
neurotron = NeuroTron(sample_data=sim08_setup['sample_data'])
# %% Run neurotron
tron_error, sgd_error = neurotron.run(
sim08_setup['filterlist'],
sim08_setup['dlist'],
sim08_setup['boundlist'],
sim08_setup['betalist'],
sim08_setup['etalist_tron'],
sim08_setup['blist'],
sim08_setup['width'],
sim08_setup['num_iters'],
etalist_sgd=sim08_setup['etalist_sgd']
)
# %% Save output
np.savetxt(output_path.joinpath(sim08_setup['name']+'_tron.csv'), np.transpose(tron_error), delimiter=',')
np.savetxt(output_path.joinpath(sim08_setup['name']+'_sgd.csv'), np.transpose(sgd_error), delimiter=',')
| 956 | 22.341463 | 106 | py |
neurotron_experiments | neurotron_experiments-main/run_sim01.py | # %% Import packages
import numpy as np
from pathlib import Path
from neurotron import NeuroTron
from sim_setup import output_path, sim01_setup
# %% Create output path if it does not exist
output_path.mkdir(parents=True, exist_ok=True)
# %% Set the seed
np.random.seed(sim01_setup['seed'])
# %% Instantiate NeuroTron
neurotron = NeuroTron(sample_data=sim01_setup['sample_data'])
# %% Run neurotron
tron_error, sgd_error = neurotron.run(
sim01_setup['filterlist'],
sim01_setup['dlist'],
sim01_setup['boundlist'],
sim01_setup['betalist'],
sim01_setup['etalist_tron'],
sim01_setup['blist'],
sim01_setup['width'],
sim01_setup['num_iters'],
etalist_sgd=sim01_setup['etalist_sgd']
)
# %% Save output
np.savetxt(output_path.joinpath(sim01_setup['name']+'_tron.csv'), np.transpose(tron_error), delimiter=',')
np.savetxt(output_path.joinpath(sim01_setup['name']+'_sgd.csv'), np.transpose(sgd_error), delimiter=',')
| 956 | 22.341463 | 106 | py |
neurotron_experiments | neurotron_experiments-main/plot_merged_sim07.py | # %% Import packages
import numpy as np
from matplotlib import pyplot as plt
from pathlib import Path
from sim_setup import output_path, sim07_setup
# %% Create output path if it does not exist
output_path.mkdir(parents=True, exist_ok=True)
# %% Load numerical output
tron_error_loaded = np.loadtxt(output_path.joinpath(sim07_setup['name']+'_tron.csv'), delimiter=',')
sgd_error_loaded = np.loadtxt(output_path.joinpath(sim07_setup['name']+'_sgd.csv'), delimiter=',')
# %% Set font size
fontsize = 13
# %% Set transparency
transparent = False
# %% Set y axis limits and ticks
ylims = [
[-16.2, 2.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2]
]
yticks = [
[-16, -14, -12, -10, -8, -6, -4, -2, 0, 2],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1]
]
ylabels = [
['1e-16', '1e-14', '1e-12', '1e-10', '1e-8', '1e-6', '1e-4', '1e-2', '1e-0', '1e+2'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1']
]
# %% Selection of beta values to plot
beta_vals = [1, 3, 4, 5, 6]
# %%
save = True
xrange = range(1, tron_error_loaded.shape[0]+1)
labels = ['Neuro-Tron', 'SGD']
fig, axes = plt.subplots(nrows=len(beta_vals), ncols=1, sharex=True, figsize=(8, 18))
plt.subplots_adjust(hspace = 0.15)
for i in range(len(beta_vals)):
axes[i].plot(
xrange,
np.log10(tron_error_loaded[:, beta_vals[i]]),
linewidth=2.,
label=labels[0]
)
axes[i].plot(
xrange,
np.log10(sgd_error_loaded[:, beta_vals[i]]),
linewidth=2.,
label=labels[1]
)
axes[i].set_ylim(ylims[beta_vals[i]])
axes[i].set_title(
r'$\beta$ = {}'.format(sim07_setup['betalist'][beta_vals[i]]), y=1.0, pad=-23, fontsize=fontsize
)
axes[i].set_yticks(yticks[beta_vals[i]])
axes[i].set_yticklabels(ylabels[beta_vals[i]], fontsize=fontsize)
axes[i].legend(labels=labels, loc='upper right', ncol=1, fontsize=fontsize, frameon=False)
xticks = np.linspace(0, 40000, num=9)
xticklabels = [str(round(i)) for i in xticks]
axes[4].set_xticks(xticks)
axes[4].set_xticklabels(xticklabels, rotation=0, fontsize=fontsize)
if save:
plt.savefig(
output_path.joinpath(
sim07_setup['name']+'_tron_vs_sgd_merged_beta_vals.png'
),
dpi=300,
pil_kwargs={'quality': 100},
transparent=transparent,
bbox_inches='tight',
pad_inches=0.1
)
| 2,834 | 23.025424 | 104 | py |
neurotron_experiments | neurotron_experiments-main/plot_merged_sim04.py | # %% Import packages
import numpy as np
from matplotlib import pyplot as plt
from pathlib import Path
from sim_setup import output_path, sim04_setup
# %% Create output path if it does not exist
output_path.mkdir(parents=True, exist_ok=True)
# %% Load numerical output
tron_error_loaded = np.loadtxt(output_path.joinpath(sim04_setup['name']+'_tron.csv'), delimiter=',')
sgd_error_loaded = np.loadtxt(output_path.joinpath(sim04_setup['name']+'_sgd.csv'), delimiter=',')
# %% Set font size
fontsize = 13
# %% Set transparency
transparent = False
# %% Set y axis limits and ticks
ylims = [
[-16.2, 2.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2]
]
yticks = [
[-16, -14, -12, -10, -8, -6, -4, -2, 0, 2],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1]
]
ylabels = [
['1e-16', '1e-14', '1e-12', '1e-10', '1e-8', '1e-6', '1e-4', '1e-2', '1e-0', '1e+2'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1']
]
# %% Selection of theta values to plot
theta_vals = [1, 3, 4, 5, 6]
# %%
save = True
xrange = range(1, tron_error_loaded.shape[0]+1)
labels = ['Neuro-Tron', 'SGD']
fig, axes = plt.subplots(nrows=len(theta_vals), ncols=1, sharex=True, figsize=(8, 18))
plt.subplots_adjust(hspace = 0.15)
for i in range(len(theta_vals)):
axes[i].plot(
xrange,
np.log10(tron_error_loaded[:, theta_vals[i]]),
linewidth=2.,
label=labels[0]
)
axes[i].plot(
xrange,
np.log10(sgd_error_loaded[:, theta_vals[i]]),
linewidth=2.,
label=labels[1]
)
axes[i].set_ylim(ylims[theta_vals[i]])
axes[i].set_title(
r'$\theta_\ast$ = {}'.format(sim04_setup['boundlist'][theta_vals[i]]), y=1.0, pad=-23, fontsize=fontsize
)
axes[i].set_yticks(yticks[theta_vals[i]])
axes[i].set_yticklabels(ylabels[theta_vals[i]], fontsize=fontsize)
axes[i].legend(labels=labels, loc='upper right', ncol=1, fontsize=fontsize, frameon=False)
xticks = np.linspace(0, 40000, num=9)
xticklabels = [str(round(i)) for i in xticks]
axes[4].set_xticks(xticks)
axes[4].set_xticklabels(xticklabels, rotation=0, fontsize=fontsize)
if save:
plt.savefig(
output_path.joinpath(
sim04_setup['name']+'_tron_vs_sgd_merged_theta_vals.png'
),
dpi=300,
pil_kwargs={'quality': 100},
transparent=transparent,
bbox_inches='tight',
pad_inches=0.1
)
| 2,851 | 23.169492 | 112 | py |
neurotron_experiments | neurotron_experiments-main/run_sim04.py | # %% Import packages
import numpy as np
from pathlib import Path
from neurotron import NeuroTron
from sim_setup import output_path, sim04_setup
# %% Create output path if it does not exist
output_path.mkdir(parents=True, exist_ok=True)
# %% Set the seed
np.random.seed(sim04_setup['seed'])
# %% Instantiate NeuroTron
neurotron = NeuroTron(sample_data=sim04_setup['sample_data'])
# %% Run neurotron
tron_error, sgd_error = neurotron.run(
sim04_setup['filterlist'],
sim04_setup['dlist'],
sim04_setup['boundlist'],
sim04_setup['betalist'],
sim04_setup['etalist_tron'],
sim04_setup['blist'],
sim04_setup['width'],
sim04_setup['num_iters'],
etalist_sgd=sim04_setup['etalist_sgd']
)
# %% Save output
np.savetxt(output_path.joinpath(sim04_setup['name']+'_tron.csv'), np.transpose(tron_error), delimiter=',')
np.savetxt(output_path.joinpath(sim04_setup['name']+'_sgd.csv'), np.transpose(sgd_error), delimiter=',')
| 956 | 22.341463 | 106 | py |
neurotron_experiments | neurotron_experiments-main/run_tron_q_assist_sim.py | #%% -*- coding: utf-8 -*-
"""NC submission Neurotron q assist
Automatically generated by Colaboratory.
Original file is located at
https://colab.research.google.com/drive/1Grqd8YloStHVD0eoAnUtOJ3jSxX1A8rA
#Introduction
"""
# %% Import packages
import numpy as np
import random
from random import sample
import pandas as pd
from numpy.linalg import matrix_rank as rank
from sim_setup import output_path
# %% Set seed
np.random.seed(seed=1001)
#%% An example of the basic plotting mechanism
#type(data),np.shape(data)
y = [[i**2 for i in range(1,10)],[i**3 for i in range(1,10)] ]
x = [i for i in range(1,10)]
# %% Defining the SGD & Stochastic_NeuroTron class
class SGD_NeuroTron:
def __init__(self,w_star,d,eta,b,width,filter,q,M,C):
assert(len(w_star) == filter)
self.w_true = w_star
self.dim = d
self.w_now = np.ones((filter,1)) #np.random.randn(filter,1) #random normal initialization
self.step = eta
self.minibatch = b
self.w = width #the w in the paper is the width of the net
self.r = filter #the r in the paper - the filter dimension << dim
self.q = q
self.M = M
self.C = C
self.A_list = []
c = 0
k = width/2
for i in range (width+1):
factor = (-k+c)
#print ("factor = ",factor)
if factor != 0:
Z = self.M + factor*self.C
Z = Z/1
self.A_list.append(Z)
c+=1
#Elements of A_list are r x dim
#Here we check that average(A_list) = M = something of full rank = r
sum = 0
for i in range(width):
sum += 1*self.A_list[i]
avg = sum/width
# print ("Filter dimension = ",self.r," Input dimension = ",self.dim," Shape of Avg-A = ",np.shape(avg)," Shape of M = ",np.shape(self.M))
# print ("Rank of the (M,average A_i) =(", rank(self.M), ",", rank(avg), ") ||Avg-A - M|| =",np.linalg.norm(avg - self.M))
#print ("True weight =",self.w_true)
#print ("Intiial weight =",self.w_now)
def err(self):
return np.linalg.norm(self.w_true-self.w_now)
def sample(self,mu,sigma):
return mu + sigma*np.random.randn(self.minibatch,self.dim)
def attack(self,bound,beta):
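        # Label-poisoning step: with probability beta each of the b oracle
        # responses is shifted by +/- bound (signs alternate via np.resize),
        # otherwise it is left unchanged.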
b = self.minibatch
u = np.random.uniform(0,1,b)
v = u <= beta
x = v* np.resize([-1*bound,1*bound], b)
return x
#A_i in filter x dim
#weight is filter x 1
#data is b x dim
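    # net returns, for every row x of data, the q-assisted network output
    # (q/w) * sum_i max(0, weight^T A_i x) over the fixed matrices A_i.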
def net (self,data,weight):
sum = 0
for i in range(self.w):
#print ("Shape of data^T =",np.shape(data.transpose()))
#print ("Shape of A[i] =",np.shape(self.A_list[i]))
#print ("Shape of weight^T =",np.shape(weight.transpose()))
y_fake_now = np.matmul(weight.transpose(),np.matmul(self.A_list[i],data.transpose())) #= w^TA_ix
#y_fake_now is 1 x b
indi = (y_fake_now > 0).astype(float)
sum += self.q*indi*y_fake_now #(= max(0, xA_i^Tw)) #random.uniform(5, 10) does the job of qi
return (sum/self.w).flatten()
def net_der (self,data,weight):
sum = 0
for i in range(self.w):
##print ("Shape of data^T =",np.shape(data.transpose())) #data^T is dim x b ?
#print ("Shape of A[i] =",np.shape(self.A_list[i]))
#print ("Shape of weight^T =",np.shape(weight.transpose()))
Aix = np.matmul(self.A_list[i],data.transpose())
##print ("Shape of A_ix =",np.shape(Aix)) #A_ix is r x b ?
y_fake_now = np.matmul(weight.transpose(),Aix) #= w^TA_ix
#y_fake_now is 1 x b
indi = (y_fake_now > 0).astype(float) #1(w^TA_ix >0) is 1 x b
indi = np.diag(indi[0])
##print ("Shape of indi=",np.shape(indi)) # b x b ?
##print (indi)
indAix = self.q*np.matmul(Aix,indi)
##print ("Shape of indi*A_ix =",np.shape(indAix)) #ind*A_ix is r x b ?
##print (indAix)
sum += indAix
#final = (sum/self.w).flatten()
final = (sum/self.w) # r x b
##print ("Shape of final =",np.shape(final)) #final is r x b ?
return final
#M is rx dim
#w_now is a r x 1 current point
#inputs are 1 x dim
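    # update_neuro performs one Neuro-Tron step: it draws a minibatch, poisons
    # the oracle outputs, forms (1/b) * M * sum_i (y_oracle_i - y_now_i) x_i
    # and moves w_now by eta times that surrogate gradient; update_sgd below
    # instead descends along the network derivative net_der.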
def update_neuro (self,w_now,mu,sigma,bound,beta):
data = self.sample(mu,sigma) #b x dim sized data matrix sampled from N(mu,sigma)
y_oracle = self.net(data,self.w_true)
poison = self.attack(bound,beta)
#print ("Shape of poison =",np.shape(poison),poison) # [b,]
#print ("Shape of y_oracle =",np.shape(y_oracle),y_oracle) # 1 x b
y_oracle += poison #np.reshape(poison,(self.minibatch,1))
#print ("Shape of y_oracle post-attack =",np.shape(y_oracle),y_oracle)
y_now = self.net(data,self.w_now)
#print ("Shape of y_now =",np.shape(y_now),y_now) # 1 x b
sum = 0
for i in range(0,self.minibatch):
#print ("y_oracle[i] = ",y_oracle[i])
#print ("y_now[i] = ", y_now[i])
sum += (y_oracle[i] -y_now[i])*data[i,:]
g_tron = (1/self.minibatch)*np.matmul(self.M,sum.reshape(self.dim,1))
self.w_now += self.step * g_tron
return self.err()
def update_sgd (self,w_now,mu,sigma,bound,beta):
data = self.sample(mu,sigma) #b x dim sized data matrix sampled from N(mu,sigma)
y_oracle = self.net(data,self.w_true)
poison = self.attack(bound,beta)
#print ("Shape of poison =",np.shape(poison),poison) # [b,]
#print ("Shape of y_oracle =",np.shape(y_oracle),y_oracle) # 1 x b
y_oracle += poison #np.reshape(poison,(self.minibatch,1))
#print ("Shape of y_oracle post-attack =",np.shape(y_oracle),y_oracle)
y_now = self.net(data,self.w_now)
#print ("Shape of y_now =",np.shape(y_now),y_now) # 1 x b
net_der_now = self.net_der(data,self.w_now)
sum = 0
for i in range(0,self.minibatch):
#print ("y_oracle[i] = ",y_oracle[i])
#print ("y_now[i] = ", y_now[i])
sum += (y_oracle[i] -y_now[i])*np.reshape(net_der_now[:,0],(self.r,1))
g_sgd = (1/self.minibatch)*sum
self.w_now += self.step * g_sgd
return self.err()
#%% Running SGD & Stochastic NeuroTron : fixed beta, varying theta*
#Choose w_initial as the all ones vector but sample w_star from a normal
#theta* = 0 works : dim = 100, filter = 5, width = 4, eta = 10^(-4), b = 2^4
#All theta* works : dim = 50, filter = 20, width = 100, eta = 10^(-4), b = 2^6
dlist = [50] #data dimension in 100, 50, 20, 10
width = 100 #choose "width" as an even number
etalist = [0.0001] #step-lengths in 0.05,0.1,0.2
blist = [2**6] #[2**2,2**4,2**6,2**8,2**10] #mini-batch
mu = 0 #mu of the Gaussian data distribution N(mu,sigma)
sigma = 1 #sigma of the Gaussian data distribution N(mu,sigma)
#works for dlist 50, filter 20, width 100,eta = 10^(-4), b = 2^6
#betalist = [0.005,0.5] #[0,0.005,0.05,0.1,0.2,0.5,0.9]
#boundlist = [0] #[0,2**(-1),2**0,2**1] #[0,2**(-3),2**(-2),2**0,2**1,2**2]
betalist = [0.05] #[0.005,0.5] #[0,0.005,0.05,0.1,0.2,0.5,0.9]
boundlist = [0,2**(-1),2**0] #[0,2**(-3),2**(-2),2**0,2**1,2**2]
filterlist = [20]
d0=50
filter0=20
#Choosing the "M" matrix
M_X = np.random.randn(filter0,filter0)
M_Y = np.random.randn(filter0,d0-filter0)
M = np.concatenate((M_X,M_Y),axis=1)
C = np.random.randn(filter0,d0)
samples = 1 # 5
iterations = 4*(10**4)
k = 0
for filter in filterlist:
w_star = np.random.randn(filter,1) #Choosing the w_* from a Normal distribution
#print(w_star)
for d in dlist:
for bound in boundlist:
for beta in betalist:
for eta in etalist:
for b in blist:
#err_final_sgd = []
err_final_neuro1 = []
err_final_neuro10 = []
for s in range(samples):
err_list_sgd = []
err_list_neuro1 = []
err_list_neuro10 = []
#(self,w_star,d,eta,b,width,filter)
q0 = 10
SN_neuro10 = SGD_NeuroTron(w_star,d,eta/5,b,width,filter,10,M,C)
SN_neuro1 = SGD_NeuroTron(w_star,d,eta,b,width,filter,1,M,C)
#SN_sgd = SGD_NeuroTron(w_star,d,eta/10,b,width,filter,q0)
for i in range(iterations):
#(SN.w_now,mu,sigma,bound,beta)
err_list_neuro1.append(SN_neuro1.update_neuro(SN_neuro1.w_now,mu,sigma,bound,beta))
err_list_neuro10.append(SN_neuro10.update_neuro(SN_neuro1.w_now,mu,sigma,bound,beta))
#err_list_sgd.append(SN_sgd.update_sgd(SN_sgd.w_now,mu,sigma,bound,beta))
#print ("At sample index =",s," the error =",err_list[iterations-1])
#err_final_sgd = np.sum(np.array([err_final_sgd,err_list_sgd]),axis=0)
err_final_neuro1 = np.sum(np.array([err_final_neuro1,err_list_neuro1]),axis=0)
err_final_neuro10 = np.sum(np.array([err_final_neuro10,err_list_neuro10]),axis=0)
np.savetxt(output_path.joinpath('q_assist_neuro1_'+str(k)+'_tron.csv'), err_final_neuro1, delimiter=',')
np.savetxt(output_path.joinpath('q_assist_neuro10_'+str(k)+'_tron.csv'), err_final_neuro10, delimiter=',')
# np.savetxt(output_path.joinpath('q_assist_neuro1_'+str(k)+'_tron.csv'), err_final_neuro1/samples, delimiter=',')
# np.savetxt(output_path.joinpath('q_assist_neuro10_'+str(k)+'_tron.csv'), err_final_neuro10/samples, delimiter=',')
k = k + 1
#print(s)
# print ("(dim,iterations,eta,b,sigma,(beta,attack-bound)) =", (d,iterations,eta,b,filter,(beta,bound)))
#print ("final sample averaged error for SGD =", err_final_sgd[iterations-1]/samples)
# print ("final sample averaged error for NeuroTron, (q=1)=", err_final_neuro1[iterations-1]/samples)
# print ("final sample averaged error for NeuroTron, (q=10)=", err_final_neuro10[iterations-1]/samples)
| 10,704 | 39.703422 | 146 | py |
neurotron_experiments | neurotron_experiments-main/plot_tron_merged_beta.py | # %% Import packages
import numpy as np
from matplotlib import pyplot as plt
from pathlib import Path
from sim_setup import output_path, sim05_setup, sim06_setup, sim07_setup, sim08_setup
# %% Create output path if it does not exist
output_path.mkdir(parents=True, exist_ok=True)
# %% Load numerical output
tron_error_loaded = []
for sim_setup in [sim05_setup, sim06_setup, sim07_setup, sim08_setup]:
tron_error_loaded.append(np.loadtxt(output_path.joinpath(sim_setup['name']+'_tron.csv'), delimiter=','))
# %% Set font size
fontsize = 13
# %% Set transparency
transparent = False
# %% Set y axis limits and ticks
ylims = [
# [-16.2, 2.2],
[-4.3, 1.2],
[-4.3, 1.2],
[-4.3, 1.2],
[-4.3, 1.2]
]
yticks = [
# [-16, -14, -12, -10, -8, -6, -4, -2, 0, 2],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1]
]
ylabels = [
# ['1e-16', '1e-14', '1e-12', '1e-10', '1e-8', '1e-6', '1e-4', '1e-2', '1e-0', '1e+2'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1']
]
# %% Set sub-plots titles
titles = [
r'Normal data ($\sigma=1$)',
r'Normal data ($\sigma=3$)',
r'Laplace data ($scale=2$)',
r't-distributed data ($df=4$)'
]
# %% Set line labels
sim01_beta_labels = [
r'$\beta$ = {}'.format(sim05_setup['betalist'][0]),
r'$\beta$ = {}'.format(sim05_setup['betalist'][1]),
r'$\beta$ = {}'.format(sim05_setup['betalist'][2]),
r'$\beta$ = {}'.format(sim05_setup['betalist'][3]),
r'$\beta$ = {}'.format(sim05_setup['betalist'][4]),
r'$\beta$ = {}'.format(sim05_setup['betalist'][5]),
r'$\beta$ = {}'.format(sim05_setup['betalist'][6])
]
sim02_beta_labels = [
r'$\beta$ = {}'.format(sim06_setup['betalist'][0]),
r'$\beta$ = {}'.format(sim06_setup['betalist'][1]),
r'$\beta$ = {}'.format(sim06_setup['betalist'][2]),
r'$\beta$ = {}'.format(sim06_setup['betalist'][3]),
r'$\beta$ = {}'.format(sim06_setup['betalist'][4]),
r'$\beta$ = {}'.format(sim06_setup['betalist'][5]),
r'$\beta$ = {}'.format(sim06_setup['betalist'][6])
]
sim03_beta_labels = [
r'$\beta$ = {}'.format(sim07_setup['betalist'][0]),
r'$\beta$ = {}'.format(sim07_setup['betalist'][1]),
r'$\beta$ = {}'.format(sim07_setup['betalist'][2]),
r'$\beta$ = {}'.format(sim07_setup['betalist'][3]),
r'$\beta$ = {}'.format(sim07_setup['betalist'][4]),
r'$\beta$ = {}'.format(sim07_setup['betalist'][5]),
r'$\beta$ = {}'.format(sim07_setup['betalist'][6])
]
sim04_beta_labels = [
r'$\beta$ = {}'.format(sim08_setup['betalist'][0]),
r'$\beta$ = {}'.format(sim08_setup['betalist'][1]),
r'$\beta$ = {}'.format(sim08_setup['betalist'][2]),
r'$\beta$ = {}'.format(sim08_setup['betalist'][3]),
r'$\beta$ = {}'.format(sim08_setup['betalist'][4]),
r'$\beta$ = {}'.format(sim08_setup['betalist'][5]),
r'$\beta$ = {}'.format(sim08_setup['betalist'][6])
]
beta_labels = [sim01_beta_labels, sim02_beta_labels, sim03_beta_labels, sim04_beta_labels]
# %% Selection of beta values to plot
beta_vals = [1, 2, 3, 4, 5, 6]
# %%
save = True
xrange = range(1, tron_error_loaded[0].shape[0]+1)
fig, axes = plt.subplots(nrows=4, ncols=1, sharex=True, figsize=(8, 18))
plt.subplots_adjust(hspace = 0.15)
for i in range(4):
for j in range(len(beta_vals)):
axes[i].plot(
xrange,
np.log10(tron_error_loaded[i][:, beta_vals[j]]),
linewidth=2.,
label=beta_labels[i][beta_vals[j]]
)
axes[i].set_ylim(ylims[i])
    axes[i].set_title(titles[i], y=1.0, fontsize=fontsize)
axes[i].set_yticks(yticks[i])
axes[i].set_yticklabels(ylabels[i], fontsize=fontsize)
axes[i].legend(loc='upper right', ncol=2, fontsize=fontsize, frameon=False)
xticks = np.linspace(0, 40000, num=9)
xticklabels = [str(round(i)) for i in xticks]
axes[3].set_xticks(xticks)
axes[3].set_xticklabels(xticklabels, rotation=0, fontsize=fontsize)
if save:
plt.savefig(
output_path.joinpath('all_sims_tron_merged_beta_vals.png'),
dpi=300,
pil_kwargs={'quality': 100},
transparent=transparent,
bbox_inches='tight',
pad_inches=0.1
)
| 4,409 | 27.451613 | 108 | py |
neurotron_experiments | neurotron_experiments-main/run_sim06.py | # %% Import packages
import numpy as np
from pathlib import Path
from neurotron import NeuroTron
from sim_setup import output_path, sim06_setup
# %% Create output path if it does not exist
output_path.mkdir(parents=True, exist_ok=True)
# %% Set the seed
np.random.seed(sim06_setup['seed'])
# %% Instantiate NeuroTron
neurotron = NeuroTron(sample_data=sim06_setup['sample_data'])
# %% Run neurotron
tron_error, sgd_error = neurotron.run(
sim06_setup['filterlist'],
sim06_setup['dlist'],
sim06_setup['boundlist'],
sim06_setup['betalist'],
sim06_setup['etalist_tron'],
sim06_setup['blist'],
sim06_setup['width'],
sim06_setup['num_iters'],
etalist_sgd=sim06_setup['etalist_sgd']
)
# %% Save output
np.savetxt(output_path.joinpath(sim06_setup['name']+'_tron.csv'), np.transpose(tron_error), delimiter=',')
np.savetxt(output_path.joinpath(sim06_setup['name']+'_sgd.csv'), np.transpose(sgd_error), delimiter=',')
| 956 | 22.341463 | 106 | py |
neurotron_experiments | neurotron_experiments-main/neurotron.py | import numpy as np
class NeuroTron:
def __init__(self, sample_data=None, w_star=None, d=None, eta_tron=None, eta_sgd=None, b=None, width=None, filter=None):
self.sample_data = sample_data
self.reset(w_star, d, eta_tron, b, width, filter)
def reset(self, w_star, d, eta_tron, b, width, filter, eta_sgd=None):
if (w_star is not None) and (filter is not None):
assert(len(w_star) == filter)
self.w_true = w_star.copy() if (w_star is not None) else w_star
self.dim = d # Corresponds to n in the paper (input dimension)
self.step_tron = eta_tron
self.step_sgd = eta_sgd
self.minibatch = b
self.w = width # The w in the paper is the width of the net
self.r = filter # The r in the paper - the filter dimension < dim
if (d is not None) and (width is not None) and (filter is not None):
self.w_now_tron = np.ones((filter, 1)) if (filter is not None) else None # Initial point for NeuroTron
self.w_now_sgd = np.ones((filter, 1)) if (filter is not None) else None # Initial point for SGD
# Choosing the M matrix
M_X = np.random.randn(filter, filter)
M_Y = np.random.randn(filter, d-filter)
self.M = np.concatenate((M_X, M_Y), axis=1)
# Fixing the neural net
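            # Each A_i below equals M + factor*C; for even width the nonzero
            # factors are symmetric about zero, so the A_i average back to
            # the full-rank matrix M.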
self.A_list = []
C = np.random.randn(filter, d)
c = 0
k = width/2
for i in range (width+1):
factor = (-k+c)
if factor != 0:
Z = self.M+factor*C
self.A_list.append(Z)
c+=1
def err_tron(self):
return np.linalg.norm(self.w_true-self.w_now_tron)
def err_sgd(self):
return np.linalg.norm(self.w_true-self.w_now_sgd)
def attack(self,bound,beta):
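        # Poisoning model: a fraction beta of the b oracle outputs (chosen at
        # random) is corrupted by an additive +/- bound term.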
b = self.minibatch
u = np.random.uniform(0,1,b)
v = u <= beta
x = v* np.resize([-1*bound, 1*bound], b)
return x
# A_i in filter x dim
# weight is filter x 1
# data is b x dim
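    # net evaluates the width-w ReLU network defined by the fixed A_i:
    # for each data row x it returns (1/w) * sum_i max(0, weight^T A_i x).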
def net(self, data, weight):
sum = 0
for i in range(self.w):
y_fake_now = np.matmul(weight.transpose(), np.matmul(self.A_list[i], data.transpose())) # = w^TA_ix
# y_fake_now is 1 x b
indi = (y_fake_now > 0).astype(float)
sum += indi*y_fake_now # (= max(0, xA_i^Tw))
return (sum/self.w).flatten()
def net_der(self, data,weight):
sum = 0
for i in range(self.w):
# data^T is dim x b
Aix = np.matmul(self.A_list[i], data.transpose())
# A_ix is r x b
y_fake_now = np.matmul(weight.transpose(), Aix) # = w^TA_ix
# y_fake_now is 1 x b
indi = (y_fake_now > 0).astype(float)
# 1(w^TA_ix >0) is 1 x b
indi = np.diag(indi[0])
# indi is b x b
indAix = np.matmul(Aix, indi)
# ind*A_ix is r x b
sum += indAix
final = (sum/self.w) # r x b
return final
# M is r x dim
# w_now_tron is a r x 1 current point
# inputs are 1 x dim
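    # update_tron: one Neuro-Tron iteration on a freshly sampled, poisoned
    # minibatch; the iterate moves along the surrogate gradient
    # (1/b) * M * sum_i (y_oracle_i - y_now_i) x_i scaled by step_tron.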
def update_tron(self, bound, beta):
data = self.sample_data([self.minibatch, self.dim])
y_oracle = self.net(data, self.w_true)
poison = self.attack(bound, beta)
y_oracle += poison
y_now = self.net(data, self.w_now_tron) # 1 x b
sum = 0
for i in range(0, self.minibatch):
sum += (y_oracle[i]-y_now[i])*data[i, :]
g_tron = (1/self.minibatch)*np.matmul(self.M, sum.reshape(self.dim, 1))
self.w_now_tron += self.step_tron*g_tron
return self.err_tron()
def update_sgd(self, bound, beta):
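        # SGD baseline: same sampling and poisoning as update_tron, but the
        # descent direction is built from the network derivative net_der
        # instead of the fixed matrix M.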
data = self.sample_data([self.minibatch, self.dim])
y_oracle = self.net(data, self.w_true)
poison = self.attack(bound, beta)
y_oracle += poison
y_now = self.net(data, self.w_now_sgd) # 1 x b
net_der_now = self.net_der(data, self.w_now_sgd)
sum = 0
for i in range(0,self.minibatch):
sum += (y_oracle[i]-y_now[i])*np.reshape(net_der_now[:, 0], (self.r, 1))
g_sgd = (1/self.minibatch)*sum
self.w_now_sgd += self.step_sgd*g_sgd
return self.err_sgd()
def run(
self, filterlist, dlist, boundlist, betalist, etalist_tron, blist, width, num_iters,
etalist_sgd=None, verbose=True
):
num_runs = len(filterlist)
if verbose:
ir = 0
msg = 'Iteration {:' + str(len(str(num_runs))) + '} out of ' + str(num_runs)
tron_error = np.empty([num_runs, num_iters])
if etalist_sgd is not None:
sgd_error = np.empty([num_runs, num_iters])
else:
sgd_error = None
for i in range(num_runs):
if verbose:
ir += 1
print(msg.format(ir, num_runs))
# Choosing the ground truth w_* from a Normal distribution
w_star = np.random.randn(filterlist[i], 1)
self.reset(w_star, dlist[i], etalist_tron[i], blist[i], width, filterlist[i], eta_sgd=etalist_sgd[i])
for j in range(num_iters):
tron_error[i, j] = self.update_tron(boundlist[i], betalist[i])
if etalist_sgd is not None:
sgd_error[i, j] = self.update_sgd(boundlist[i], betalist[i])
return tron_error, sgd_error
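# Minimal usage sketch (illustrative only): the parameter values below are
# placeholders for a quick smoke test, and the lambda stands in for the
# `sample_data` callable that the run_sim*.py scripts supply via sim_setup.
if __name__ == '__main__':
    demo = NeuroTron(sample_data=lambda size: np.random.standard_normal(size))
    demo_tron_err, demo_sgd_err = demo.run(
        filterlist=[20], dlist=[50], boundlist=[0.25], betalist=[0.05],
        etalist_tron=[0.0001], blist=[16], width=10, num_iters=100,
        etalist_sgd=[0.0001]
    )
    print(demo_tron_err[0, -1], demo_sgd_err[0, -1])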
| 5,583 | 33.68323 | 124 | py |
neurotron_experiments | neurotron_experiments-main/plot_merged_sim02.py | # %% Import packages
import numpy as np
from matplotlib import pyplot as plt
from pathlib import Path
from sim_setup import output_path, sim02_setup
# %% Create output path if it does not exist
output_path.mkdir(parents=True, exist_ok=True)
# %% Load numerical output
tron_error_loaded = np.loadtxt(output_path.joinpath(sim02_setup['name']+'_tron.csv'), delimiter=',')
sgd_error_loaded = np.loadtxt(output_path.joinpath(sim02_setup['name']+'_sgd.csv'), delimiter=',')
# %% Set font size
fontsize = 13
# %% Set transparency
transparent = False
# %% Set y axis limits and ticks
ylims = [
[-16.2, 2.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2]
]
yticks = [
[-16, -14, -12, -10, -8, -6, -4, -2, 0, 2],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1]
]
ylabels = [
['1e-16', '1e-14', '1e-12', '1e-10', '1e-8', '1e-6', '1e-4', '1e-2', '1e-0', '1e+2'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1']
]
# %% Selection of theta values to plot
theta_vals = [1, 3, 4, 5, 6]
# %%
save = True
xrange = range(1, tron_error_loaded.shape[0]+1)
labels = ['Neuro-Tron', 'SGD']
fig, axes = plt.subplots(nrows=len(theta_vals), ncols=1, sharex=True, figsize=(8, 18))
plt.subplots_adjust(hspace = 0.15)
for i in range(len(theta_vals)):
axes[i].plot(
xrange,
np.log10(tron_error_loaded[:, theta_vals[i]]),
linewidth=2.,
label=labels[0]
)
axes[i].plot(
xrange,
np.log10(sgd_error_loaded[:, theta_vals[i]]),
linewidth=2.,
label=labels[1]
)
axes[i].set_ylim(ylims[theta_vals[i]])
axes[i].set_title(
r'$\theta_\ast$ = {}'.format(sim02_setup['boundlist'][theta_vals[i]]), y=1.0, pad=-23, fontsize=fontsize
)
axes[i].set_yticks(yticks[theta_vals[i]])
axes[i].set_yticklabels(ylabels[theta_vals[i]], fontsize=fontsize)
axes[i].legend(labels=labels, loc='upper right', ncol=1, fontsize=fontsize, frameon=False)
xticks = np.linspace(0, 40000, num=9)
xticklabels = [str(round(i)) for i in xticks]
axes[4].set_xticks(xticks)
axes[4].set_xticklabels(xticklabels, rotation=0, fontsize=fontsize)
if save:
plt.savefig(
output_path.joinpath(
sim02_setup['name']+'_tron_vs_sgd_merged_theta_vals.png'
),
dpi=300,
pil_kwargs={'quality': 100},
transparent=transparent,
bbox_inches='tight',
pad_inches=0.1
)
| 2,851 | 23.169492 | 112 | py |
neurotron_experiments | neurotron_experiments-main/plot_sim03.py | # %% Import packages
import numpy as np
from matplotlib import pyplot as plt
from pathlib import Path
from sim_setup import output_path, sim03_setup
# %% Create output path if it does not exist
output_path.mkdir(parents=True, exist_ok=True)
# %% Load numerical output
tron_error_loaded = np.loadtxt(output_path.joinpath(sim03_setup['name']+'_tron.csv'), delimiter=',')
sgd_error_loaded = np.loadtxt(output_path.joinpath(sim03_setup['name']+'_sgd.csv'), delimiter=',')
# %% Set font size
fontsize = 13
# %% Set transparency
transparent = False
# %% Set y axis limits and ticks
ylims = [
[-16.2, 2.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2]
]
yticks = [
[-16, -14, -12, -10, -8, -6, -4, -2, 0, 2],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1]
]
ylabels = [
['1e-16', '1e-14', '1e-12', '1e-10', '1e-8', '1e-6', '1e-4', '1e-2', '1e-0', '1e+2'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1']
]
# %% Generate and save NeuroTron-vs-SGD figure
save = True
for i in range(tron_error_loaded.shape[1]):
plt.figure(figsize=[8, 5])
xrange = range(1, tron_error_loaded.shape[0]+1)
labels = ['Neurotron', 'SGD']
plt.plot(
xrange,
np.log10(tron_error_loaded[:, i]),
linewidth=2.,
label=labels[0]
)
plt.plot(
xrange,
np.log10(sgd_error_loaded[:, i]),
linewidth=2.,
label=labels[1]
)
plt.ylim(ylims[i])
plt.title(r'Laplace data ($scale=2$), $\theta_\ast$ = {}'.format(sim03_setup['boundlist'][i]))
plt.xlabel('Iteration', fontsize=fontsize)
plt.ylabel(r'Parameter error ($\log_{10}$ scale)', fontsize=fontsize)
xtickstep = 5000
xticks = range(0, sgd_error_loaded.shape[0]+xtickstep, xtickstep)
plt.xticks(ticks=xticks, fontsize=fontsize)
plt.yticks(ticks=yticks[i], labels=ylabels[i], fontsize=fontsize)
leg = plt.legend(frameon=False, fontsize=fontsize, handletextpad=0.5, ncol=2)
for legobj in leg.legendHandles:
legobj.set_linewidth(3.)
if save:
plt.savefig(
output_path.joinpath(
sim03_setup['name']+'_tron_vs_sgd_theta_val'+str(i+1).zfill(len(str(tron_error_loaded.shape[1])))+'.png'
),
dpi=300,
pil_kwargs={'quality': 100},
transparent=transparent,
bbox_inches='tight',
pad_inches=0.1
)
| 2,838 | 24.123894 | 120 | py |
neurotron_experiments | neurotron_experiments-main/plot_sim02.py | # %% Import packages
import numpy as np
from matplotlib import pyplot as plt
from pathlib import Path
from sim_setup import output_path, sim02_setup
# %% Create output path if it does not exist
output_path.mkdir(parents=True, exist_ok=True)
# %% Load numerical output
tron_error_loaded = np.loadtxt(output_path.joinpath(sim02_setup['name']+'_tron.csv'), delimiter=',')
sgd_error_loaded = np.loadtxt(output_path.joinpath(sim02_setup['name']+'_sgd.csv'), delimiter=',')
# %% Set font size
fontsize = 13
# %% Set transparency
transparent = False
# %% Set y axis limits and ticks
ylims = [
[-16.2, 2.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2],
[-4.2, 1.2]
]
yticks = [
[-16, -14, -12, -10, -8, -6, -4, -2, 0, 2],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1],
[-4, -3, -2, -1, 0, 1]
]
ylabels = [
['1e-16', '1e-14', '1e-12', '1e-10', '1e-8', '1e-6', '1e-4', '1e-2', '1e-0', '1e+2'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1'],
['1e-4', '1e-3', '1e-2', '1e-1', '1e-0', '1e+1']
]
# %% Generate and save NeuroTron-vs-SGD figure
save = True
for i in range(tron_error_loaded.shape[1]):
plt.figure(figsize=[8, 5])
xrange = range(1, tron_error_loaded.shape[0]+1)
labels = ['Neurotron', 'SGD']
plt.plot(
xrange,
np.log10(tron_error_loaded[:, i]),
linewidth=2.,
label=labels[0]
)
plt.plot(
xrange,
np.log10(sgd_error_loaded[:, i]),
linewidth=2.,
label=labels[1]
)
plt.ylim(ylims[i])
plt.title(r'Normal data ($\sigma=3$), $\theta_\ast$ = {}'.format(sim02_setup['boundlist'][i]))
plt.xlabel('Iteration', fontsize=fontsize)
plt.ylabel(r'Parameter error ($\log_{10}$ scale)', fontsize=fontsize)
xtickstep = 5000
xticks = range(0, sgd_error_loaded.shape[0]+xtickstep, xtickstep)
plt.xticks(ticks=xticks, fontsize=fontsize)
plt.yticks(ticks=yticks[i], labels=ylabels[i], fontsize=fontsize)
leg = plt.legend(frameon=False, fontsize=fontsize, handletextpad=0.5, ncol=2)
for legobj in leg.legendHandles:
legobj.set_linewidth(3.)
if save:
plt.savefig(
output_path.joinpath(
sim02_setup['name']+'_tron_vs_sgd_theta_val'+str(i+1).zfill(len(str(tron_error_loaded.shape[1])))+'.png'
),
dpi=300,
pil_kwargs={'quality': 100},
transparent=transparent,
bbox_inches='tight',
pad_inches=0.1
)
| 2,838 | 24.123894 | 120 | py |
presto | presto-master/setup.py | from __future__ import print_function
import os
import sys
import numpy
# setuptools has to be imported before numpy.distutils.core
import setuptools
from numpy.distutils.core import Extension, setup
version = "4.0"
define_macros = []
undef_macros = []
extra_compile_args = ["-DUSEFFTW"]
include_dirs = [numpy.get_include()]
# For MacOS with MacPorts use the following
# include_dirs.append("/opt/local/include")
# Note: you might need to add "gfortran" to the following list if
# you see errors relating to missing "g" functions....
ppgplot_libraries = ["cpgplot", "pgplot", "X11", "png", "m"]
ppgplot_library_dirs = ["/usr/X11R6/lib"]
presto_libraries = ["presto", "fftw3f", "m"]
presto_library_dirs = []
ppgplot_include_dirs = include_dirs
presto_include_dirs = include_dirs
undef_macros.append('USE_NUMARRAY')
if os.name != "posix":
raise Exception("os not supported")
if "PGPLOT_DIR" in os.environ:
ppgplot_library_dirs.append(os.environ["PGPLOT_DIR"])
ppgplot_include_dirs.append(os.environ["PGPLOT_DIR"])
else:
print("PGPLOT_DIR env var not defined!", file=sys.stderr)
if "PRESTO" in os.environ:
presto_library_dirs.append(os.path.join(os.environ["PRESTO"], "lib"))
presto_include_dirs.append(os.path.join(os.environ["PRESTO"], "include"))
extra_link_args = ["-Wl,-rpath,"+os.path.join(os.environ["PRESTO"], "lib")]
else:
print("PRESTO env var not defined!", file=sys.stderr)
presto_include_dirs.append(os.path.join(os.path.dirname(__file__),
'include'))
extra_link_args = ["-Wl,-rpath,"+os.path.join(os.path.dirname(__file__),
"lib")]
ext_ppgplot = Extension('_ppgplot',
['python/ppgplot_src/_ppgplot.c'],
include_dirs=ppgplot_include_dirs,
libraries=ppgplot_libraries,
library_dirs=ppgplot_library_dirs,
define_macros=define_macros,
extra_compile_args=extra_compile_args,
extra_link_args=extra_link_args)
ext_presto = Extension('_presto',
['python/presto_src/presto_wrap.c'],
include_dirs=presto_include_dirs,
libraries=presto_libraries,
library_dirs=presto_library_dirs,
define_macros=define_macros,
extra_compile_args=extra_compile_args,
extra_link_args=extra_link_args)
ext_fftfit = Extension('_fftfit', sources=['python/fftfit_src/brent.f',
'python/fftfit_src/cprof.f',
'python/fftfit_src/fccf.f',
'python/fftfit_src/ffft.f',
'python/fftfit_src/fftfit.f',
'python/fftfit_src/_fftfit.pyf'])
scripts = ['bin/' + i for i in os.listdir('bin') if i.endswith('.py') or i.endswith('.sh')]
setup(name="presto",
version=version,
install_requires=['numpy', 'future', 'six', 'scipy', 'matplotlib', 'astropy', 'pyslalib'],
scripts=scripts,
description="Python interfaces to PGPLOT and PRESTO",
author="Scott Ransom (ppgplot from Nick Patavlis)",
author_email="[email protected]",
url="https://github.com/scottransom/presto",
packages=['presto', 'presto.ppgplot', 'presto.presto', 'presto.singlepulse'],
package_dir={'presto.ppgplot': 'python/ppgplot_src',
'presto.presto': 'python/presto_src',
'presto': 'python/presto',
'presto.singlepulse': 'python/presto/singlepulse',
},
package_data={'presto': ['*.json']},
ext_modules=[ext_ppgplot, ext_presto, ext_fftfit])
| 3,895 | 40.010526 | 96 | py |
presto | presto-master/python/presto_src/prestoswig.py | # This file was automatically generated by SWIG (http://www.swig.org).
# Version 4.1.0
#
# Do not make changes to this file unless you know what you are doing--modify
# the SWIG interface file instead.
from sys import version_info as _swig_python_version_info
if _swig_python_version_info < (2, 7, 0):
raise RuntimeError("Python 2.7 or later required")
import _presto
try:
import builtins as __builtin__
except ImportError:
import __builtin__
def _swig_repr(self):
try:
strthis = "proxy of " + self.this.__repr__()
except __builtin__.Exception:
strthis = ""
return "<%s.%s; %s >" % (self.__class__.__module__, self.__class__.__name__, strthis,)
def _swig_setattr_nondynamic_instance_variable(set):
def set_instance_attr(self, name, value):
if name == "thisown":
self.this.own(value)
elif name == "this":
set(self, name, value)
elif hasattr(self, name) and isinstance(getattr(type(self), name), property):
set(self, name, value)
else:
raise AttributeError("You cannot add instance attributes to %s" % self)
return set_instance_attr
def _swig_setattr_nondynamic_class_variable(set):
def set_class_attr(cls, name, value):
if hasattr(cls, name) and not isinstance(getattr(cls, name), property):
set(cls, name, value)
else:
raise AttributeError("You cannot add class attributes to %s" % cls)
return set_class_attr
def _swig_add_metaclass(metaclass):
"""Class decorator for adding a metaclass to a SWIG wrapped class - a slimmed down version of six.add_metaclass"""
def wrapper(cls):
return metaclass(cls.__name__, cls.__bases__, cls.__dict__.copy())
return wrapper
class _SwigNonDynamicMeta(type):
"""Meta class to enforce nondynamic attributes (no new attributes) for a class"""
__setattr__ = _swig_setattr_nondynamic_class_variable(type.__setattr__)
class fcomplex(object):
thisown = property(lambda x: x.this.own(), lambda x, v: x.this.own(v), doc="The membership flag")
__repr__ = _swig_repr
r = property(_presto.fcomplex_r_get, _presto.fcomplex_r_set)
i = property(_presto.fcomplex_i_get, _presto.fcomplex_i_set)
def __init__(self):
_presto.fcomplex_swiginit(self, _presto.new_fcomplex())
__swig_destroy__ = _presto.delete_fcomplex
# Register fcomplex in _presto:
_presto.fcomplex_swigregister(fcomplex)
SQRT2 = _presto.SQRT2
PI = _presto.PI
TWOPI = _presto.TWOPI
DEGTORAD = _presto.DEGTORAD
RADTODEG = _presto.RADTODEG
PIBYTWO = _presto.PIBYTWO
SOL = _presto.SOL
SECPERJULYR = _presto.SECPERJULYR
SECPERDAY = _presto.SECPERDAY
ARCSEC2RAD = _presto.ARCSEC2RAD
SEC2RAD = _presto.SEC2RAD
LOWACC = _presto.LOWACC
HIGHACC = _presto.HIGHACC
INTERBIN = _presto.INTERBIN
INTERPOLATE = _presto.INTERPOLATE
NO_CHECK_ALIASED = _presto.NO_CHECK_ALIASED
CHECK_ALIASED = _presto.CHECK_ALIASED
CONV = _presto.CONV
CORR = _presto.CORR
INPLACE_CONV = _presto.INPLACE_CONV
INPLACE_CORR = _presto.INPLACE_CORR
FFTDK = _presto.FFTDK
FFTD = _presto.FFTD
FFTK = _presto.FFTK
NOFFTS = _presto.NOFFTS
RAW = _presto.RAW
PREPPED = _presto.PREPPED
FFT = _presto.FFT
SAME = _presto.SAME
def read_wisdom() -> "void":
return _presto.read_wisdom()
def good_factor(nn: "long long") -> "long long":
return _presto.good_factor(nn)
def fftwcall(indata: "fcomplex", isign: "int") -> "void":
return _presto.fftwcall(indata, isign)
def tablesixstepfft(indata: "fcomplex", isign: "int") -> "void":
return _presto.tablesixstepfft(indata, isign)
def realfft(data: "float *", isign: "int") -> "void":
return _presto.realfft(data, isign)
class infodata(object):
thisown = property(lambda x: x.this.own(), lambda x, v: x.this.own(v), doc="The membership flag")
__repr__ = _swig_repr
ra_s = property(_presto.infodata_ra_s_get, _presto.infodata_ra_s_set)
dec_s = property(_presto.infodata_dec_s_get, _presto.infodata_dec_s_set)
N = property(_presto.infodata_N_get, _presto.infodata_N_set)
dt = property(_presto.infodata_dt_get, _presto.infodata_dt_set)
fov = property(_presto.infodata_fov_get, _presto.infodata_fov_set)
mjd_f = property(_presto.infodata_mjd_f_get, _presto.infodata_mjd_f_set)
dm = property(_presto.infodata_dm_get, _presto.infodata_dm_set)
freq = property(_presto.infodata_freq_get, _presto.infodata_freq_set)
freqband = property(_presto.infodata_freqband_get, _presto.infodata_freqband_set)
chan_wid = property(_presto.infodata_chan_wid_get, _presto.infodata_chan_wid_set)
wavelen = property(_presto.infodata_wavelen_get, _presto.infodata_wavelen_set)
waveband = property(_presto.infodata_waveband_get, _presto.infodata_waveband_set)
energy = property(_presto.infodata_energy_get, _presto.infodata_energy_set)
energyband = property(_presto.infodata_energyband_get, _presto.infodata_energyband_set)
num_chan = property(_presto.infodata_num_chan_get, _presto.infodata_num_chan_set)
mjd_i = property(_presto.infodata_mjd_i_get, _presto.infodata_mjd_i_set)
ra_h = property(_presto.infodata_ra_h_get, _presto.infodata_ra_h_set)
ra_m = property(_presto.infodata_ra_m_get, _presto.infodata_ra_m_set)
dec_d = property(_presto.infodata_dec_d_get, _presto.infodata_dec_d_set)
dec_m = property(_presto.infodata_dec_m_get, _presto.infodata_dec_m_set)
bary = property(_presto.infodata_bary_get, _presto.infodata_bary_set)
numonoff = property(_presto.infodata_numonoff_get, _presto.infodata_numonoff_set)
notes = property(_presto.infodata_notes_get, _presto.infodata_notes_set)
name = property(_presto.infodata_name_get, _presto.infodata_name_set)
object = property(_presto.infodata_object_get, _presto.infodata_object_set)
instrument = property(_presto.infodata_instrument_get, _presto.infodata_instrument_set)
observer = property(_presto.infodata_observer_get, _presto.infodata_observer_set)
analyzer = property(_presto.infodata_analyzer_get, _presto.infodata_analyzer_set)
telescope = property(_presto.infodata_telescope_get, _presto.infodata_telescope_set)
band = property(_presto.infodata_band_get, _presto.infodata_band_set)
filt = property(_presto.infodata_filt_get, _presto.infodata_filt_set)
def __init__(self):
_presto.infodata_swiginit(self, _presto.new_infodata())
__swig_destroy__ = _presto.delete_infodata
# Register infodata in _presto:
_presto.infodata_swigregister(infodata)
def readinf(data: "infodata", filenm: "char *") -> "void":
return _presto.readinf(data, filenm)
def writeinf(data: "infodata") -> "void":
return _presto.writeinf(data)
class orbitparams(object):
thisown = property(lambda x: x.this.own(), lambda x, v: x.this.own(v), doc="The membership flag")
__repr__ = _swig_repr
p = property(_presto.orbitparams_p_get, _presto.orbitparams_p_set)
e = property(_presto.orbitparams_e_get, _presto.orbitparams_e_set)
x = property(_presto.orbitparams_x_get, _presto.orbitparams_x_set)
w = property(_presto.orbitparams_w_get, _presto.orbitparams_w_set)
t = property(_presto.orbitparams_t_get, _presto.orbitparams_t_set)
pd = property(_presto.orbitparams_pd_get, _presto.orbitparams_pd_set)
wd = property(_presto.orbitparams_wd_get, _presto.orbitparams_wd_set)
def __init__(self):
_presto.orbitparams_swiginit(self, _presto.new_orbitparams())
__swig_destroy__ = _presto.delete_orbitparams
# Register orbitparams in _presto:
_presto.orbitparams_swigregister(orbitparams)
class psrparams(object):
thisown = property(lambda x: x.this.own(), lambda x, v: x.this.own(v), doc="The membership flag")
__repr__ = _swig_repr
jname = property(_presto.psrparams_jname_get, _presto.psrparams_jname_set)
bname = property(_presto.psrparams_bname_get, _presto.psrparams_bname_set)
alias = property(_presto.psrparams_alias_get, _presto.psrparams_alias_set)
ra2000 = property(_presto.psrparams_ra2000_get, _presto.psrparams_ra2000_set)
dec2000 = property(_presto.psrparams_dec2000_get, _presto.psrparams_dec2000_set)
dm = property(_presto.psrparams_dm_get, _presto.psrparams_dm_set)
timepoch = property(_presto.psrparams_timepoch_get, _presto.psrparams_timepoch_set)
p = property(_presto.psrparams_p_get, _presto.psrparams_p_set)
pd = property(_presto.psrparams_pd_get, _presto.psrparams_pd_set)
pdd = property(_presto.psrparams_pdd_get, _presto.psrparams_pdd_set)
f = property(_presto.psrparams_f_get, _presto.psrparams_f_set)
fd = property(_presto.psrparams_fd_get, _presto.psrparams_fd_set)
fdd = property(_presto.psrparams_fdd_get, _presto.psrparams_fdd_set)
orb = property(_presto.psrparams_orb_get, _presto.psrparams_orb_set)
def __init__(self):
_presto.psrparams_swiginit(self, _presto.new_psrparams())
__swig_destroy__ = _presto.delete_psrparams
# Register psrparams in _presto:
_presto.psrparams_swigregister(psrparams)
def get_psr_at_epoch(psrname: "char *", epoch: "double", psr: "psrparams") -> "int":
return _presto.get_psr_at_epoch(psrname, epoch, psr)
def get_psr_from_parfile(parfilenm: "char *", epoch: "double", psr: "psrparams") -> "int":
return _presto.get_psr_from_parfile(parfilenm, epoch, psr)
def mjd_to_datestr(mjd: "double", datestr: "char *") -> "void":
return _presto.mjd_to_datestr(mjd, datestr)
def fresnl(xxa: "double") -> "double *, double *":
return _presto.fresnl(xxa)
class rderivs(object):
thisown = property(lambda x: x.this.own(), lambda x, v: x.this.own(v), doc="The membership flag")
__repr__ = _swig_repr
pow = property(_presto.rderivs_pow_get, _presto.rderivs_pow_set)
phs = property(_presto.rderivs_phs_get, _presto.rderivs_phs_set)
dpow = property(_presto.rderivs_dpow_get, _presto.rderivs_dpow_set)
dphs = property(_presto.rderivs_dphs_get, _presto.rderivs_dphs_set)
d2pow = property(_presto.rderivs_d2pow_get, _presto.rderivs_d2pow_set)
d2phs = property(_presto.rderivs_d2phs_get, _presto.rderivs_d2phs_set)
locpow = property(_presto.rderivs_locpow_get, _presto.rderivs_locpow_set)
def __init__(self):
_presto.rderivs_swiginit(self, _presto.new_rderivs())
__swig_destroy__ = _presto.delete_rderivs
# Register rderivs in _presto:
_presto.rderivs_swigregister(rderivs)
class fourierprops(object):
thisown = property(lambda x: x.this.own(), lambda x, v: x.this.own(v), doc="The membership flag")
__repr__ = _swig_repr
r = property(_presto.fourierprops_r_get, _presto.fourierprops_r_set)
rerr = property(_presto.fourierprops_rerr_get, _presto.fourierprops_rerr_set)
z = property(_presto.fourierprops_z_get, _presto.fourierprops_z_set)
zerr = property(_presto.fourierprops_zerr_get, _presto.fourierprops_zerr_set)
w = property(_presto.fourierprops_w_get, _presto.fourierprops_w_set)
werr = property(_presto.fourierprops_werr_get, _presto.fourierprops_werr_set)
pow = property(_presto.fourierprops_pow_get, _presto.fourierprops_pow_set)
powerr = property(_presto.fourierprops_powerr_get, _presto.fourierprops_powerr_set)
sig = property(_presto.fourierprops_sig_get, _presto.fourierprops_sig_set)
rawpow = property(_presto.fourierprops_rawpow_get, _presto.fourierprops_rawpow_set)
phs = property(_presto.fourierprops_phs_get, _presto.fourierprops_phs_set)
phserr = property(_presto.fourierprops_phserr_get, _presto.fourierprops_phserr_set)
cen = property(_presto.fourierprops_cen_get, _presto.fourierprops_cen_set)
cenerr = property(_presto.fourierprops_cenerr_get, _presto.fourierprops_cenerr_set)
pur = property(_presto.fourierprops_pur_get, _presto.fourierprops_pur_set)
purerr = property(_presto.fourierprops_purerr_get, _presto.fourierprops_purerr_set)
locpow = property(_presto.fourierprops_locpow_get, _presto.fourierprops_locpow_set)
def __init__(self):
_presto.fourierprops_swiginit(self, _presto.new_fourierprops())
__swig_destroy__ = _presto.delete_fourierprops
# Register fourierprops in _presto:
_presto.fourierprops_swigregister(fourierprops)
class foldstats(object):
thisown = property(lambda x: x.this.own(), lambda x, v: x.this.own(v), doc="The membership flag")
__repr__ = _swig_repr
numdata = property(_presto.foldstats_numdata_get, _presto.foldstats_numdata_set)
data_avg = property(_presto.foldstats_data_avg_get, _presto.foldstats_data_avg_set)
data_var = property(_presto.foldstats_data_var_get, _presto.foldstats_data_var_set)
numprof = property(_presto.foldstats_numprof_get, _presto.foldstats_numprof_set)
prof_avg = property(_presto.foldstats_prof_avg_get, _presto.foldstats_prof_avg_set)
prof_var = property(_presto.foldstats_prof_var_get, _presto.foldstats_prof_var_set)
redchi = property(_presto.foldstats_redchi_get, _presto.foldstats_redchi_set)
def __init__(self):
_presto.foldstats_swiginit(self, _presto.new_foldstats())
__swig_destroy__ = _presto.delete_foldstats
# Register foldstats in _presto:
_presto.foldstats_swigregister(foldstats)
def gen_fvect(nl: "long") -> "long *":
return _presto.gen_fvect(nl)
def gen_cvect(nl: "long") -> "long *":
return _presto.gen_cvect(nl)
def power_arr(dft: "fcomplex") -> "long *":
return _presto.power_arr(dft)
def phase_arr(dft: "fcomplex") -> "long *":
return _presto.phase_arr(dft)
def frotate(data: "float *", bins_to_left: "float") -> "void":
return _presto.frotate(data, bins_to_left)
def drotate(data: "double *", bins_to_left: "double") -> "void":
return _presto.drotate(data, bins_to_left)
def keplers_eqn(t: "double", p_orb: "double", e: "double", Eacc: "double") -> "double":
return _presto.keplers_eqn(t, p_orb, e, Eacc)
def E_to_phib(E: "double *", orb: "orbitparams") -> "void":
return _presto.E_to_phib(E, orb)
def E_to_v(E: "double *", orb: "orbitparams") -> "void":
return _presto.E_to_v(E, orb)
def E_to_p(E: "double *", p_psr: "double", orb: "orbitparams") -> "void":
return _presto.E_to_p(E, p_psr, orb)
def E_to_z(E: "double *", p_psr: "double", T: "double", orb: "orbitparams") -> "void":
return _presto.E_to_z(E, p_psr, T, orb)
def E_to_phib_BT(E: "double *", orb: "orbitparams") -> "void":
return _presto.E_to_phib_BT(E, orb)
def dorbint(Eo: "double", numpts: "long", dt: "double", orb: "orbitparams") -> "long *":
return _presto.dorbint(Eo, numpts, dt, orb)
def binary_velocity(T: "double", orbit: "orbitparams") -> "double *, double *":
return _presto.binary_velocity(T, orbit)
def r_resp_halfwidth(accuracy: "presto_interp_acc") -> "int":
return _presto.r_resp_halfwidth(accuracy)
def z_resp_halfwidth(z: "double", accuracy: "presto_interp_acc") -> "int":
return _presto.z_resp_halfwidth(z, accuracy)
def w_resp_halfwidth(z: "double", w: "double", accuracy: "presto_interp_acc") -> "int":
return _presto.w_resp_halfwidth(z, w, accuracy)
def bin_resp_halfwidth(ppsr: "double", T: "double", orbit: "orbitparams") -> "int":
return _presto.bin_resp_halfwidth(ppsr, T, orbit)
def gen_r_response(roffset: "double", numbetween: "int", numkern: "int") -> "long *":
return _presto.gen_r_response(roffset, numbetween, numkern)
def gen_z_response(roffset: "double", numbetween: "int", numkern: "int", z: "double") -> "long *":
return _presto.gen_z_response(roffset, numbetween, numkern, z)
def gen_w_response(roffset: "double", numbetween: "int", numkern: "int", z: "double", w: "double") -> "long *":
return _presto.gen_w_response(roffset, numbetween, numkern, z, w)
def gen_w_response2(roffset: "double", numbetween: "int", numkern: "int", z: "double", w: "double") -> "long *":
return _presto.gen_w_response2(roffset, numbetween, numkern, z, w)
def gen_bin_response(roffset: "double", numbetween: "int", numkern: "int", ppsr: "double", T: "double", orbit: "orbitparams") -> "long *":
return _presto.gen_bin_response(roffset, numbetween, numkern, ppsr, T, orbit)
def get_localpower(data: "fcomplex", r: "double") -> "float":
return _presto.get_localpower(data, r)
def get_localpower3d(data: "fcomplex", r: "double", z: "double", w: "double") -> "float":
return _presto.get_localpower3d(data, r, z, w)
def get_derivs3d(data: "fcomplex", numdata: "long", r: "double", z: "double", w: "double", localpower: "float", result: "rderivs") -> "void":
return _presto.get_derivs3d(data, numdata, r, z, w, localpower, result)
def calc_props(data: "rderivs", r: "double", z: "double", w: "double", result: "fourierprops") -> "void":
return _presto.calc_props(data, r, z, w, result)
def calc_binprops(props: "fourierprops", T: "double", lowbin: "int", nfftbins: "int", result: "binaryprops *") -> "void":
return _presto.calc_binprops(props, T, lowbin, nfftbins, result)
def calc_rzwerrs(props: "fourierprops", T: "double", result: "rzwerrs *") -> "void":
return _presto.calc_rzwerrs(props, T, result)
def extended_equiv_gaussian_sigma(logp: "double") -> "double":
return _presto.extended_equiv_gaussian_sigma(logp)
def log_asymtotic_incomplete_gamma(a: "double", z: "double") -> "double":
return _presto.log_asymtotic_incomplete_gamma(a, z)
def log_asymtotic_gamma(z: "double") -> "double":
return _presto.log_asymtotic_gamma(z)
def equivalent_gaussian_sigma(logp: "double") -> "double":
return _presto.equivalent_gaussian_sigma(logp)
def chi2_logp(chi2: "double", dof: "double") -> "double":
return _presto.chi2_logp(chi2, dof)
def chi2_sigma(chi2: "double", dof: "double") -> "double":
return _presto.chi2_sigma(chi2, dof)
def candidate_sigma(power: "double", numsum: "int", numtrials: "double") -> "double":
return _presto.candidate_sigma(power, numsum, numtrials)
def power_for_sigma(sigma: "double", numsum: "int", numtrials: "double") -> "double":
return _presto.power_for_sigma(sigma, numsum, numtrials)
def switch_f_and_p(_in: "double", ind: "double", indd: "double") -> "double *, double *, double *":
return _presto.switch_f_and_p(_in, ind, indd)
def chisqr(data: "double *", avg: "double", var: "double") -> "double":
return _presto.chisqr(data, avg, var)
def z2n(data: "double *", var: "double", n: "int") -> "double":
return _presto.z2n(data, var, n)
def print_candidate(cand: "fourierprops", dt: "double", N: "unsigned long", nph: "float", numerrdigits: "int") -> "void":
return _presto.print_candidate(cand, dt, N, nph, numerrdigits)
def print_bin_candidate(cand: "binaryprops *", numerrdigits: "int") -> "void":
return _presto.print_bin_candidate(cand, numerrdigits)
def read_rzw_cand(file: "FILE *", cands: "fourierprops") -> "int":
return _presto.read_rzw_cand(file, cands)
def get_rzw_cand(filenm: "char *", candnum: "int", cand: "fourierprops") -> "void":
return _presto.get_rzw_cand(filenm, candnum, cand)
def read_bin_cand(file: "FILE *", cands: "binaryprops *") -> "int":
return _presto.read_bin_cand(file, cands)
def get_bin_cand(filenm: "char *", candnum: "int", cand: "binaryprops *") -> "void":
return _presto.get_bin_cand(filenm, candnum, cand)
def next2_to_n(x: "long long") -> "long long":
return _presto.next2_to_n(x)
def is_power_of_10(n: "long long") -> "int":
return _presto.is_power_of_10(n)
def choose_good_N(orig_N: "long long") -> "long long":
return _presto.choose_good_N(orig_N)
def dms2rad(deg: "int", min: "int", sec: "double") -> "double":
return _presto.dms2rad(deg, min, sec)
def hms2rad(hour: "int", min: "int", sec: "double") -> "double":
return _presto.hms2rad(hour, min, sec)
def hours2hms(hours: "double") -> "int *, int *, double *":
return _presto.hours2hms(hours)
def deg2dms(degrees: "double") -> "int *, int *, double *":
return _presto.deg2dms(degrees)
def sphere_ang_diff(ra1: "double", dec1: "double", ra2: "double", dec2: "double") -> "double":
return _presto.sphere_ang_diff(ra1, dec1, ra2, dec2)
def corr_rz_plane(data: "fcomplex", numbetween: "int", startbin: "int", zlo: "double", zhi: "double", numz: "int", fftlen: "int", accuracy: "presto_interp_acc") -> "long *":
return _presto.corr_rz_plane(data, numbetween, startbin, zlo, zhi, numz, fftlen, accuracy)
def corr_rzw_vol(data: "fcomplex", numbetween: "int", startbin: "int", zlo: "double", zhi: "double", numz: "int", wlo: "double", whi: "double", numw: "int", fftlen: "int", accuracy: "presto_interp_acc") -> "long *":
return _presto.corr_rzw_vol(data, numbetween, startbin, zlo, zhi, numz, wlo, whi, numw, fftlen, accuracy)
def max_r_arr(data: "fcomplex", rin: "double", derivs: "rderivs") -> "double *, double *":
return _presto.max_r_arr(data, rin, derivs)
def max_rz_arr(data: "fcomplex", rin: "double", zin: "double", derivs: "rderivs") -> "double *, double *, double *":
return _presto.max_rz_arr(data, rin, zin, derivs)
def max_rz_arr_harmonics(data: "fcomplex", rin: "double", zin: "double", derivdata: "double *") -> "double *, double *":
return _presto.max_rz_arr_harmonics(data, rin, zin, derivdata)
def max_rzw_arr_harmonics(data: "fcomplex", rin: "double", zin: "double", win: "double", derivdata: "double *") -> "double *, double *, double *":
return _presto.max_rzw_arr_harmonics(data, rin, zin, win, derivdata)
def max_rzw_arr(data: "fcomplex", rin: "double", zin: "double", win: "double", derivs: "rderivs") -> "double *, double *, double *, double *":
return _presto.max_rzw_arr(data, rin, zin, win, derivs)
def barycenter(topotimes: "double *", barytimes: "double *", voverc: "double *", ra: "char *", dec: "char *", obs: "char *", ephem: "char *") -> "void":
return _presto.barycenter(topotimes, barytimes, voverc, ra, dec, obs, ephem)
def DOF_corr(dt_per_bin: "double") -> "double":
return _presto.DOF_corr(dt_per_bin)
def simplefold(data: "float *", dt: "double", tlo: "double", prof: "double *", startphs: "double", f0: "double", fdot: "double", fdotdot: "double", standard: "int") -> "double":
return _presto.simplefold(data, dt, tlo, prof, startphs, f0, fdot, fdotdot, standard)
def nice_output_1(output: "char *", val: "double", err: "double", len: "int") -> "int":
return _presto.nice_output_1(output, val, err, len)
def nice_output_2(output: "char *", val: "double", err: "double", len: "int") -> "int":
return _presto.nice_output_2(output, val, err, len)
| 22,317 | 45.11157 | 215 | py |
presto | presto-master/python/presto_src/__init__.py | from __future__ import print_function
from __future__ import absolute_import
from builtins import input
from builtins import range
from .prestoswig import *
import os.path
import numpy as np
from presto import Pgplot
from presto import psr_utils
def val_with_err(value, error, length=0, digits=2, latex=0):
"""
    val_with_err(value, error, length=0, digits=2, latex=0):
        Returns a string of length 'length' (auto if 0) with 'value'
        rounded to the appropriate decimal place and the
        'error' in parentheses as in scientific journals.
The error has 'digits' decimal places.
Notes:
'length' should be ~20 to show full double precision
if the base 10 exponent of the error needs to be shown.
If length == 0, left-justified minimum length string is returned.
If length > 0, the string returned is right justified.
If length < 0, the string returned is left justified.
If latex=1, the string is converted into LaTeX markup.
"""
slen = 40
outstr = ' ' * slen
if abs(length) > slen:
slen = abs(length)
if digits == 2:
slen = nice_output_2(outstr, value, error, length)
else:
slen = nice_output_1(outstr, value, error, length)
outstr = outstr[:slen].strip() # remove null termination and any space
if length < 0:
outstr = outstr + (20 - len(outstr)) * ' '
if length > 0:
outstr = (20 - len(outstr)) * ' ' + outstr
if latex:
if outstr.find("x10") > 0:
outstr = outstr.replace("x10^", r"$\times$10$^{") + "}$"
return outstr
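# Example of how val_with_err is typically called.  The exact string returned
# depends on the underlying nice_output_* C routines, so the output shown in
# the comment is only indicative:
#
#     val_with_err(1234.56789, 0.00123)            # e.g. '1234.5679(12)'
#     val_with_err(1234.56789, 0.00123, latex=1)   # LaTeX-ready version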
def read_inffile(filename, verbose=True):
"""
read_inffile(filename, verbose=True):
Return an infodata 'C' structure containing the data from the
'inf' file in 'filename'.
"""
fname = filename[:-4] if (filename[-4:] == ".inf") else filename
id = infodata()
if verbose:
print("Reading information from", "\"" + fname + ".inf\"")
readinf(id, fname)
return id
def write_inffile(infodata, verbose=True):
"""
    write_inffile(infodata, verbose=True):
Write an '.inf' file based on its input structure
"""
if verbose:
print("Writing .inf file to '%s.inf'" % infodata.name)
writeinf(infodata)
def psrepoch(psrname, epoch, verbose=True):
"""
psrepoch(psrname or parname, epoch):
Return a psrparams 'C' structure which includes data for
PSR 'psrname' (a string of the B1950 or J2000 name of the
pulsar -- without PSR, J, or B included) at epoch 'epoch'
        (in MJD format) from the ATNF database, or, if a parfile is
        passed, read the pulsar information from it instead.
"""
pp = psrparams()
if os.path.isfile(psrname):
get_psr_from_parfile(psrname, epoch, pp)
if verbose:
print('Retrieved data at MJD %f from "%s"' % (epoch, psrname))
else:
num = get_psr_at_epoch(psrname, epoch, pp)
if verbose:
print('Retrieved data at MJD %f for %s' % (epoch, pp.jname))
print('The pulsar was #%d in the database.' % num)
return pp
def read_rzwcands(filename):
"""
read_rzwcands(filename):
Return a list of all of the rzw search candidates from
the file 'filename'.
"""
infile = open(filename, "r")
cands = []
nextcand = fourierprops()
while (read_rzw_cand(infile, nextcand)):
cands.append(nextcand)
nextcand = fourierprops()
infile.close()
return cands
def read_rawbincands(filename):
"""
read_rawbincands(filename):
Return a list of all of the raw binary search candidates
from the file 'filename'.
"""
infile = open(filename, "r")
cands = []
nextcand = rawbincand()
while (read_rawbin_cand(infile, nextcand)):
cands.append(nextcand)
nextcand = rawbincand()
infile.close()
return cands
def next2_to_n(x):
"""
next2_to_n(x):
Return the first value of 2^n >= x.
"""
i = 1
while (i < x): i = i << 1
return i
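# Worked examples for the helper above (exact, since it is pure integer math):
#     next2_to_n(1000) -> 1024
#     next2_to_n(1024) -> 1024
#     next2_to_n(1025) -> 2048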
def rfft(data, sign=-1):
"""
rfft(data, sign=-1):
Return the FFT of the real-valued, 32-bit floating point 'data'
Note: This only returns the positive frequency half of the FFT,
since the other half is symmetric. The Nyquist frequency
is stored in the complex part of frequency 0 as per
Numerical Recipes.
The optional value 'sign' should be -1 (forward) or +1 (inverse).
"""
# Default to sign = -1 if the user gives a bad value
if (sign == -1 or sign != 1):
tmp = np.array(data, copy=1).astype(np.float32)
realfft(tmp, -1)
return tmp.view(np.complex64)
else:
tmp = np.array(data.view(np.float32), copy=1).astype(np.float32)
realfft(tmp, 1)
return tmp.view(np.float32)
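# Sketch of a forward/inverse round trip with rfft.  The time series must be
# real-valued (it is cast to float32), and a power-of-two length is the usual
# choice; the inverse transform follows realfft()'s scaling convention, so an
# overall normalization factor may be needed:
#
#     ts = np.random.standard_normal(4096).astype(np.float32)
#     ft = rfft(ts)           # complex64 array of length 2048 (Nyquist in bin 0)
#     ts2 = rfft(ft, sign=1)  # back to a float32 array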
def spectralpower(fftarray):
"""
spectralpower(fftarray):
Return the power spectrum of a complex FFT 'fftarray'.
"""
return power_arr(np.asarray(fftarray).astype(np.complex64))
def spectralphase(fftarray):
"""
spectralphase(fftarray):
Return the spectral phase (deg) of a complex FFT 'fftarray'.
"""
return phase_arr(np.asarray(fftarray).astype(np.complex64))
def rzw_response(roffset, z, w, numbetween=1, numkern=None):
"""
rzw_response(roffset, z, w, numbetween=1, numkern=None):
Return the response of a signal offset from a Fourier bin
by roffset bins, with a Fourier f-dot of z, and a
Fourier f-dotdot of w. The Fourier interpolation
        factor is the integer numbetween, and the length
of the resulting kernel will be auto-determined if
numkern is None.
"""
if numkern is None:
numkern = w_resp_halfwidth(z, w, LOWACC)
return gen_w_response(roffset, numbetween, numkern, z, w)
def maximize_r(data, r, norm=None):
"""
maximize_r(data, r, norm = None):
Optimize the detection of a signal at Fourier frequency 'r' in
a FFT 'data'. The routine returns a list containing
the optimized values of the maximum normalized power, rmax,
and an rderivs structure for the peak.
"""
rd = rderivs()
(rmax, maxpow) = max_r_arr(data, r, rd)
maxpow = maxpow / rd.locpow if norm is None else maxpow / norm
return [maxpow, rmax, rd]
def maximize_rz(data, r, z, norm=None):
"""
maximize_rz(data, r, z, norm = None):
Optimize the detection of a signal at location 'r', 'z' in
the F-Fdot plane. The routine returns a list containing
the optimized values of the maximum normalized power, rmax,
zmax, and an rderivs structure for the peak.
"""
rd = rderivs()
(rmax, zmax, maxpow) = max_rz_arr(data, r, z, rd)
maxpow = maxpow / rd.locpow if norm is None else maxpow / norm
return [maxpow, rmax, zmax, rd]
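# Typical use of the two maximizers above (hypothetical numbers; 'fft' would be
# the complex FFT of the time series and r/z a candidate location from a search):
#
#     maxpow, rmax, rd = maximize_r(fft, 12345.6)
#     maxpow, rmax, zmax, rd = maximize_rz(fft, 12345.6, 4.0)
#
# The returned rderivs structure can then be passed to calc_props() to fill a
# fourierprops candidate structure.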
def maximize_rz_harmonics(data, r, z, numharm, norm=None):
"""
maximize_rz_harmonics(data, r, z, numharm, norm = None):
Optimize the detection of a signal at location 'r', 'z' in
the F-Fdot plane, including harmonic summing of the harmonics.
The routine returns a list containing the optimized values of
the maximum normalized power, rmax, zmax, and a list of
rderivs structures for the peak.
"""
rds = [rderivs() for ii in range(numharm)]
derivdata = np.zeros(7 * numharm, dtype=np.float64)
rmax, zmax = max_rz_arr_harmonics(data, r, z, derivdata)
maxpow = 0.0
for ii in range(numharm):
rds[ii].pow = derivdata[ii * 7 + 0]
rds[ii].phs = derivdata[ii * 7 + 1]
rds[ii].dpow = derivdata[ii * 7 + 2]
rds[ii].dphs = derivdata[ii * 7 + 3]
rds[ii].d2pow = derivdata[ii * 7 + 4]
rds[ii].d2phs = derivdata[ii * 7 + 5]
rds[ii].locpow = derivdata[ii * 7 + 6]
maxpow += rds[ii].pow / rds[ii].locpow if norm is None else rds[ii].pow / norm
return [maxpow, rmax, zmax, rds]
def maximize_rzw(data, r, z, w, norm=None):
"""
maximize_rzw(data, r, z, w, norm = None):
Optimize the detection of a signal at location 'r', 'z', 'w' in
the F-Fdot-Fdotdot plane. The routine returns a list containing
the optimized values of the maximum normalized power, rmax,
zmax, wmax, and an rderivs structure for the peak.
"""
rd = rderivs()
(rmax, zmax, wmax, maxpow) = max_rzw_arr(data, r, z, w, rd)
maxpow = maxpow / rd.locpow if norm is None else maxpow / norm
return [maxpow, rmax, zmax, wmax, rd]
def maximize_rzw_harmonics(data, r, z, w, numharm, norm=None):
"""
maximize_rzw_harmonics(data, r, z, w, numharm, norm = None):
Optimize the detection of a signal at location 'r', 'z', 'w' in
the F-Fd-Fdd volume, including harmonic summing of the harmonics.
The routine returns a list containing the optimized values of
the maximum normalized power, rmax, zmax, wmax, and a list of
rderivs structures for the peak.
"""
rds = [rderivs() for ii in range(numharm)]
derivdata = np.zeros(7 * numharm, dtype=np.float64)
rmax, zmax, wmax = max_rzw_arr_harmonics(data, r, z, w, derivdata)
maxpow = 0.0
for ii in range(numharm):
rds[ii].pow = derivdata[ii * 7 + 0]
rds[ii].phs = derivdata[ii * 7 + 1]
rds[ii].dpow = derivdata[ii * 7 + 2]
rds[ii].dphs = derivdata[ii * 7 + 3]
rds[ii].d2pow = derivdata[ii * 7 + 4]
rds[ii].d2phs = derivdata[ii * 7 + 5]
rds[ii].locpow = derivdata[ii * 7 + 6]
maxpow += rds[ii].pow / rds[ii].locpow if norm is None else rds[ii].pow / norm
return [maxpow, rmax, zmax, wmax, rds]
def search_fft(data, numcands, norm='default'):
"""
search_fft(data, numcands):
Search a short FFT and return a list containing the powers and
Fourier frequencies of the 'numcands' highest candidates in 'data'.
        'norm' is the value to multiply each power by to get
a normalized power spectrum (defaults to 1.0/(Freq 0) value)
"""
if (norm == 'default'): norm = 1.0 / data[0].real
hp = np.zeros(numcands, 'f')
hf = np.zeros(numcands, 'f')
search_minifft(data, len(data), norm, numcands, hp, hf)
cands = []
for i in range(numcands):
cands.append([hp[i], hf[i]])
return cands
def ffdot_plane(data, lor, dr, numr, loz, dz, numz):
"""
ffdot_plane(data, lor, dr, numr, loz, dz, numz):
Generate an F-Fdot plane with the 'lower-left' corners
at the point 'lor', 'loz'. The plane will have 'numr' frequency
bins and 'numz' slices in the fdot direction, separated by 'dr'
and 'dz' respectively. 'lor', 'numr', and 'numz' should all be
integers. 'data' is the input FFT.
        Note: 'dr' must be the reciprocal of an integer
(i.e. 1 / numbetween). Also, 'r' is considered to be
the average frequency (r = ro + z / 2).
"""
lor = int(lor)
numr = int(numr)
numz = int(numz)
numbetween = int(1.0 / dr)
hiz = loz + (numz - 1) * dz
maxabsz = max(abs(loz), abs(hiz))
kern_half_width = z_resp_halfwidth(maxabsz, LOWACC)
fftlen = next2_to_n(numr + 2 * numbetween * kern_half_width)
ffd = corr_rz_plane(data, numbetween, lor, loz, hiz,
numz, fftlen, LOWACC)
return np.array(ffd[:, 0:numr], copy=1)
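# Example sketch: a 100 x 40 bin plane centered near Fourier bin 10000, with
# numbetween = 2 (dr = 0.5) and z running from -20 to +19 (values are arbitrary;
# 'fft' is the complex FFT of the time series):
#
#     ffd = ffdot_plane(fft, 10000 - 25, 0.5, 100, -20.0, 1.0, 40)
#     powers = spectralpower(ffd.ravel()).reshape(40, 100)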
def fdotdot_vol(data, lor, dr, numr, loz, dz, numz, low, dw, numw):
"""
fdotdot_vol(data, lor, dr, numr, loz, dz, numz, low, dw, numw):
Generate an F-Fdot-Fdotdot volume with the 'lower-left' corners
at the point 'lor', 'loz', 'low'. The vol will have 'numr' frequency
bins, 'numz'/'numw' slices in the fdot/fdotdot direction, separated
by 'dr', 'dz', and 'dw' respectively. 'lor', 'numr', 'numz', and
'numw' should all be integers. 'data' is the input FFT.
        Note: 'dr' must be the reciprocal of an integer
(i.e. 1 / numbetween). Also, 'r' is considered to be
the average frequency (r = r0 + w/6 + z0/2), and 'z'
is the average fdot (z = z0 + w / 2).
"""
lor = int(lor)
numr, numz, numw = int(numr), int(numz), int(numw)
numbetween = int(1.0 / dr)
hiz = loz + (numz - 1) * dz
maxabsz = max(abs(loz), abs(hiz))
hiw = low + (numw - 1) * dw
maxabsw = max(abs(low), abs(hiw))
kern_half_width = w_resp_halfwidth(maxabsz, maxabsw, LOWACC)
fftlen = next2_to_n(numr + 2 * numbetween * kern_half_width)
ffd = corr_rzw_vol(data, numbetween, lor, loz, hiz,
numz, low, hiw, numw, fftlen, LOWACC)
return np.array(ffd[:, :, 0:numr], copy=1)
def estimate_rz(psr, T, show=0, device='/XWIN'):
"""
estimate_rz(psr, T, show=0, device='/XWIN'):
Return estimates of a pulsar's average Fourier freq ('r')
relative to its nominal Fourier freq as well as its
        Fourier f-dot ('z') in bins.
'psr' is a psrparams structure describing the pulsar.
'T' is the length of the observation in sec.
'show' if true, displays plots of 'r' and 'z'.
        'device' is the device to plot to if 'show' is true.
"""
startE = keplers_eqn(psr.orb.t, psr.orb.p, psr.orb.e, 1.0E-15)
numorbpts = int(T / psr.orb.p + 1.0) * 1024 + 1
dt = T / (numorbpts - 1)
E = dorbint(startE, numorbpts, dt, psr.orb)
z = z_from_e(E, psr, T)
r = T / p_from_e(E, psr) - T / psr.p
if show:
times = np.arange(numorbpts) * dt
Pgplot.plotxy(r, times, labx='Time', \
laby='Fourier Frequency (r)', device=device)
if device == '/XWIN':
print('Press enter to continue:')
try:
i = raw_input()
except NameError:
i = input()
Pgplot.nextplotpage()
Pgplot.plotxy(z, times, labx='Time',
laby='Fourier Frequency Derivative (z)', device=device)
Pgplot.closeplot()
return r.mean(), z.mean()
def alias(r, rny):
"""
    alias(r, rny):
Convert an aliased Fourier frequency into the 'true' Fourier
        frequency of a signal.  Or vice versa -- the transformation is
symmetric about the Nyquist Freq.
'r' is the signal's Fourier frequency to convert.
'rny' is the Nyquist frequency (in bins). For an FFT
of real data, 'rny' = number of data points FFT'd / 2.
"""
return 2.0 * rny - r
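# Worked example: for an FFT of 2**20 real points the Nyquist bin is
# rny = 2**19 = 524288, so an apparent detection at r = 600000 maps to
# alias(600000, 524288) = 2*524288 - 600000 = 448576 (and vice versa).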
def show_ffdot_plane(data, r, z, dr=0.125, dz=0.5,
numr=300, numz=300, T=None,
contours=None, title=None,
image="astro", device="/XWIN", norm=1.0):
"""
show_ffdot_plane(data, r, z):
Show a color plot of the F-Fdot plane centered on the point 'r', 'z'.
"""
ffdp = ffdot_plane(data, r, dr, numr, z, dz, numz)
ffdpow = spectralpower(ffdp.ravel())
ffdpow.shape = (numz, numr)
startbin = int(r - (numr * dr) / 2)
startz = int(z - (numz * dz) / 2)
x = np.arange(numr, dtype="d") * dr + startbin
y = np.arange(numz, dtype="d") * dz + startz
highpt = np.argmax(ffdpow.ravel())
hir = highpt % numr
    hiz = highpt // numr  # integer division: used as a row index below
print("")
print("Fourier Freqs from ", min(x), "to", max(x), ".")
print("Fourier Fdots from ", min(y), "to", max(y), ".")
print("Maximum normalized power is ", ffdpow[hiz][hir])
print("The max value is located at: r =", startbin + hir * dr, \
" z =", startz + hiz * dz)
print("")
if not T:
Pgplot.plot2d(ffdpow, x, y, labx="Fourier Frequency (bins)", \
laby="Fourier Frequency Derivative", \
title=title, image=image, \
contours=contours, device=device)
else:
Pgplot.plot2d(ffdpow, x / T, y / (T ** 2.0), labx="Frequency (hz)", \
laby="Frequency Derivative (Hz/sec)", \
rangex2=[x[0], x[-1]], rangey2=[y[0], y[-1]], \
labx2="Fourier Frequency", \
laby2="Fourier Frequency Derivative", \
title=title, image=image, \
contours=contours, device=device)
def v_from_e(e, psr):
"""
v_from_e(e, psr):
Return a vector of velocities (km/s) from a vector of Eccentric
        anomalies.
        'e' is the vector of Eccentric anomalies.
'psr' is a psrparams instance containing info about the pulsar.
"""
oldw = psr.orb.w
v = np.array(e, copy=1)
E_to_v(v, psr.orb)
psr.orb.w = oldw
return v
def d_from_e(e, psr):
"""
d_from_e(e, psr):
Return a vector of time delays (s) from a vector of Eccentric
        anomalies.
        'e' is the vector of Eccentric anomalies.
'psr' is a psrparams instance containing info about the pulsar.
"""
oldw = psr.orb.w
d = np.array(e, copy=1)
E_to_phib(d, psr.orb)
psr.orb.w = oldw
return d
def p_from_e(e, psr):
"""
p_from_e(e, psr):
Return a vector of pulsar periods (s) from a vector of Eccentric
        anomalies.
        'e' is the vector of Eccentric anomalies.
'psr' is a psrparams instance containing info about the pulsar.
"""
oldw = psr.orb.w
psr.orb.w = psr.orb.w * DEGTORAD
p = np.array(e, copy=1)
E_to_p(p, psr.p, psr.orb)
psr.orb.w = oldw
return p
def z_from_e(e, psr, T):
"""
    z_from_e(e, psr, T):
        Return a vector of Fourier F-dots (bins) from a vector of Eccentric
        anomalies.
        'e' is the vector of Eccentric anomalies.
'psr' is a psrparams instance containing info about the pulsar.
'T' is the total length of the observation (s).
"""
oldw = psr.orb.w
psr.orb.w = psr.orb.w * DEGTORAD
z = np.array(e, copy=1)
E_to_z(z, psr.p, T, psr.orb)
psr.orb.w = oldw
return z
def pcorr(data, kernel, numbetween, lo, hi):
"""
pcorr(data, kernel, numbetween, lo, hi):
Perform a correlation with the raw complex vectors 'data' and
'kernel'. The returned vector should start at frequency
'lo' (must be an integer), and go up to but not include 'hi'
(also an integer).
"""
    kern_half_width = len(kernel) // (2 * numbetween)
result = np.zeros((hi - lo) * numbetween, 'F')
corr_complex(data, len(data), RAW,
kernel, len(kernel), RAW,
result, len(result), lo,
numbetween, kern_half_width, CORR)
return result
def p_to_f(p, pd, pdd):
"""
p_to_f(p, pd, pdd):
Convert period, period derivative and period second
derivative to the equivalent frequency counterparts.
Will also convert from f to p.
"""
f = 1.0 / p
fd = -pd / (p * p)
if (pdd == 0.0):
fdd = 0.0
else:
fdd = 2.0 * pd * pd / (p ** 3.0) - pdd / (p * p)
return [f, fd, fdd]
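# Worked example (exact arithmetic): a 0.5 s pulsar with pd = 1e-15 s/s gives
#     f   = 1 / 0.5           =  2.0 Hz
#     fd  = -1e-15 / 0.5**2   = -4e-15 Hz/s
#     fdd = 0.0 (since pdd = 0.0)
# i.e. p_to_f(0.5, 1e-15, 0.0) -> [2.0, -4e-15, 0.0]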
def bary_to_topo(pb, pbd, pbdd, infofilenm, ephem="DE200"):
"""
bary_to_topo(pb, pbd, pbdd, infofilenm, ephem="DE200"):
        Use least squares to calculate topocentric period,
period derivative, and period second derivative
for the corresponding barycentric values. The data
for the observation must be found in the info file.
"""
    from numpy.linalg import lstsq as linear_least_squares
if infofilenm[-4:] == ".inf": infofilenm = infofilenm[:-4]
obs = read_inffile(infofilenm)
T = obs.N * obs.dt
dt = 10.0
tto = obs.mjd_i + obs.mjd_f
tts = np.arange(tto, tto + (T + dt) / SECPERDAY, dt / SECPERDAY)
nn = len(tts)
bts = np.zeros(nn, 'd')
vel = np.zeros(nn, 'd')
ra = psr_utils.coord_to_string(obs.ra_h, obs.ra_m, obs.ra_s)
dec = psr_utils.coord_to_string(obs.dec_d, obs.dec_m, obs.dec_s)
if (obs.telescope == 'Parkes'):
tel = 'PK'
elif (obs.telescope == 'Effelsberg'):
tel = 'EB'
elif (obs.telescope == 'Arecibo'):
tel = 'AO'
elif (obs.telescope == 'MMT'):
tel = 'MT'
else:
print("Telescope not recognized.")
return 0
    barycenter(tts, bts, vel, ra, dec, tel, ephem)
print("Topocentric start time = %17.11f" % tts[0])
print("Barycentric start time = %17.11f" % bts[0])
avgvel = np.add.reduce(vel) / nn
print("Average Earth velocity = %10.5e c" % (avgvel))
tts = np.arange(nn, dtype='d') * dt
bts = (bts - bts[0]) * SECPERDAY
[fb, fbd, fbdd] = p_to_f(pb, pbd, pbdd)
b = fb * bts + fbd * bts ** 2.0 / 2.0 + fbdd * bts ** 3.0 / 6.0
a = np.transpose(np.asarray([tts, tts ** 2.0, tts ** 3.0]))
    [ft, ftd, ftdd], residuals, rank, sv = linear_least_squares(a, b, rcond=None)
[pt, ptd, ptdd] = p_to_f(ft, ftd, ftdd)
print(" Topocentric period = %15.12f" % pt)
print(" Topocentric p-dot = %15.9e" % ptd)
print(" Topocentric p-dotdot = %15.9e" % ptdd)
print(" Quick Topo period = %15.12f" % (pb * (1.0 + avgvel)))
print(" Quick Topo p-dot = %15.9e" % (pbd * (1.0 + avgvel)))
print(" Quick Topo p-dotdot = %15.9e" % (pbdd * (1.0 + avgvel)))
return [pt, ptd, ptdd]
def measure_phase(profile, template, sigma, fwhm):
"""
measure_phase(profile, template, sigma, fwhm):
TOA measurement technique from J. H. Taylor's talk
_Pulsar_Timing_and_Relativistic_Gravity_. Routine
takes two profiles, the first measured and the
second a high S/N template and determines the phase
offset of 'profile' from 'template'. Both profiles
must have the same number of points. 'sigma' denotes
the RMS noise level of the 'profile'. 'fwhm' is the
approximate width of the template pulse (0-1). The phase
returned is cyclic (i.e. from 0-1). The routine
        returns a tuple containing (tau, tau_err, b, b_err, a),
        where 'tau' is the phase, 'b' is the scaling factor,
and 'a' is the DC offset. The error values are
estimates of the 1 sigma errors.
"""
    from presto.simple_roots import newton_raphson
N = len(profile)
if not (N == len(template)):
print("Lengths of 'profile' and 'template' must")
print(" be equal in measure_phase().")
return 0.0
ft = rfft(profile)
p0 = ft[0].real
# Nyquist freq
ft[0] = complex(ft[0].imag, 0.0)
P_k = abs(ft)
    frotate(P_k, 1)
Theta_k = np.arctan2(-ft.imag, ft.real)
    frotate(Theta_k, 1)
ft = rfft(template)
s0 = ft[0].real
# Nyquist freq
ft[0] = complex(ft[0].imag, 0.0)
S_k = abs(ft)
    frotate(S_k, 1)
Phi_k = np.arctan2(-ft.imag, ft.real)
    frotate(Phi_k, 1)
# Estimate of the noise sigma (This needs to be checked)
# Note: Checked 10 Jul 2000. Looks OK.
sig = sigma * np.sqrt(N)
k = np.arange(len(ft), dtype='d') + 1.0
def fn(tau, k=k, p=P_k, s=S_k, theta=Theta_k, phi=Phi_k):
# Since Nyquist freq always has phase = 0.0
k[-1] = 0.0
return np.add.reduce(k * p * s *
np.sin(phi - theta + k * tau))
def dfn(tau, k=k, p=P_k, s=S_k, theta=Theta_k, phi=Phi_k):
# Since Nyquist freq always has phase = 0.0
k[-1] = 0.0
return np.add.reduce(k * k * p * s *
np.cos(phi - theta + k * tau))
numphases = 200
ddchidt = np.zeros(numphases, 'd')
phases = np.arange(numphases, dtype='d') / \
float(numphases - 1) * TWOPI - PI
for i in np.arange(numphases):
ddchidt[i] = dfn(phases[i])
maxdphase = phases[np.argmax(ddchidt)] + \
0.5 * TWOPI / (numphases - 1.0)
# Solve for tau
tau = newton_raphson(fn, dfn, maxdphase - 0.5 * fwhm * TWOPI,
maxdphase + 0.5 * fwhm * TWOPI)
# Solve for b
c = P_k * S_k * np.cos(Phi_k - Theta_k + k * tau)
d = np.add.reduce(S_k ** 2.0)
b = np.add.reduce(c) / d
# tau sigma
tau_err = sig * np.sqrt(1.0 / (2.0 * b *
np.add.reduce(k ** 2.0 * c)))
# b sigma (Note: This seems to be an underestimate...)
b_err = sig * np.sqrt(1.0 / (2.0 * d))
# Solve for a
a = (p0 - b * s0) / float(N)
return (tau / TWOPI, tau_err / TWOPI, b, b_err, a)
def get_baryv(ra, dec, mjd, T, obs="PK"):
"""
get_baryv(ra, dec, mjd, T, obs="PK"):
Determine the average barycentric velocity towards 'ra', 'dec'
during an observation from 'obs'. The RA and DEC are in the
standard string format (i.e. 'hh:mm:ss.ssss' and 'dd:mm:ss.ssss').
'T' is in sec and 'mjd' is (of course) in MJD. The obs variable
is the standard two character string from TEMPO: PK, GB, AO, GM, JB, ...
"""
tts = np.linspace(mjd, mjd + T / 86400.0, 100)
nn = len(tts)
bts = np.zeros(nn, dtype=np.float64)
vel = np.zeros(nn, dtype=np.float64)
barycenter(tts, bts, vel, ra, dec, obs, "DE421")
return vel.mean()
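# Sketch of a typical call (coordinates and MJD are made-up values):
#
#     v = get_baryv("12:34:56.78", "-01:23:45.6", 55555.0, 7200.0, obs="GB")
#
# Following the same first-order convention as the "Quick Topo" estimates in
# bary_to_topo() above, a barycentric period pb then maps to roughly
# pb * (1 + v) at the telescope.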
def fold(indata, dt, nbins, f, fd=0.0, fdd=0.0, startphs=0.0, tlo=0.0, standard=True):
"""
fold(indata, dt, nbins, f, fd=0.0, fdd=0.0, startphs=0.0, tlo=0.0):
This is an interface into PRESTO's fold() code, which is what
prepfold uses to fold data. It will return a tuple of a
double-precision profile of length nbins, and the ending phase
(0-1) of the fold.
indata is an array of floats to fold
dt is the duration in sec of each of the indata bins
f, fd, and fdd are the freq, freq deriv, and freq 2nd deriv to fold (Hz)
startphs (0-1) is the phase for the beginning of the first bin
tlo is the time (in sec) referring to the start of the first bin,
with respect to the reference time of f, fd, and fdd (i.e. tlo=0.0).
If standard (bool), then traditional prepfold "drizzling" will be
used. Otherwise, treat each input data point as a sample and put
it fully in a single profile bin.
"""
prof = np.zeros(nbins, dtype=np.float64)
data = indata.astype(np.float32)
phs = simplefold(data, dt, tlo, prof, startphs, f, fd, fdd,
1 if standard else 0)
return (prof, phs)
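# Minimal usage sketch (arbitrary test values): fold 100 s of noise sampled at
# 1 ms into a 64-bin profile at an assumed spin frequency of 10 Hz.
#
#     data = np.random.standard_normal(100000)
#     prof, endphs = fold(data, 0.001, 64, 10.0)
#
# The resulting profile can then be tested for pulsations with compute_chi2()
# or compute_Z2N() below.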
def compute_chi2(data, avg, var):
"""Compute chi^2 as a pulsation test for a folded pulse profile 'data'
To get the reduced-chi^2, you would typically divide the result by
the number of profile bins minus 1 (but beware of prepfold's inter-bin
correlations! See DOF_corr() in prepfold.py for details.)
See Leahy et al. 1983 for details:
https://ui.adsabs.harvard.edu/abs/1983ApJ...266..160L/abstract
Parameters
----------
data : [double precision numpy array]
        A folded pulse profile on which to compute chi^2
avg : [double]
The average level of the data (should be the background average).
var : [double]
The variance of the data (should be the background variance).
Beware prepfold's bin correlations!
"""
return chisqr(data, avg, var)
def compute_Z2N(data, N, var):
"""Compute Z^2_N statistic for a folded pulse profile 'data'
See Bachetti et al. 2021 for details:
https://ui.adsabs.harvard.edu/abs/2021ApJ...909...33B/abstract
Parameters
----------
data : [double precision numpy array]
A folded pulse profile on which to compute Z^2_N
N : [integer]
The number of harmonics to include in the Z^2_N calculation
var : [double]
The variance of the data (should be the background variance).
Beware prepfold's bin correlations!
"""
return z2n(data, var, N)
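# Both statistics above operate on a folded profile such as the one returned by
# fold().  A sketch, assuming off_avg/off_var have been estimated from
# off-pulse data (mind prepfold's bin correlations, as noted above):
#
#     chi2 = compute_chi2(prof, off_avg, off_var)
#     z2n = compute_Z2N(prof, 2, off_var)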
| 27,908 | 36.562584 | 86 | py |
presto | presto-master/python/binresponses/monte_short.py | from __future__ import print_function
from builtins import range
from time import clock
from math import *
from Numeric import *
from presto import *
from miscutils import *
from Statistics import *
import Pgplot
# Some admin variables
showplots = 0 # True or false
showsumplots = 0 # True or false
debugout = 0 # True or false
outfiledir = '/home/ransom'
outfilenm = 'monte'
pmass = 1.35 # Pulsar mass in solar masses
cmass = {'WD': 0.3, 'NS': 1.35, 'BH': 10.0} # Companion masses to use
ecc = {'WD': 0.0, 'NS': 0.6, 'BH': 0.6} # Eccentricities to use
orbsperpt = {'WD': 20, 'NS': 20, 'BH': 20} # # of orbits to avg per pt
ppsr = [0.002, 0.02, 0.2] # Pulsar periods to test
# Simulation parameters
ctype = 'BH' # The type of binary companion: 'WD', 'NS', or 'BH'
Pb = 7200.0 # Orbital period in seconds
dt = 0.0001 # The duration of each data sample (s)
searchtype = 'short' # One of 'ffdot', 'sideband', 'short'
Tfft = 60.0 # Length of FFTs in seconds (must evenly divide Pb)
numbetween = 2
##################################################
# You shouldn't need to edit anything below here. #
##################################################
outfilenm = (outfiledir+'/'+outfilenm+
'_'+searchtype+repr(Tfft)+'_'+ctype+'.out')
def psrparams_from_list(pplist):
psr = psrparams()
psr.p = pplist[0]
psr.orb.p = pplist[1]
psr.orb.x = pplist[2]
psr.orb.e = pplist[3]
psr.orb.w = pplist[4]
psr.orb.t = pplist[5]
return psr
####################################################################
# Open a file to save each orbit calculation
file = open(outfilenm,'w')
numffts = int(Pb / Tfft)
TbyPb = (arange(numffts, typecode='d')+1.0)/numffts
xb = asini_c(Pb, mass_funct2(pmass, cmass[ctype], pi / 3.0))
for pp in ppsr:
pows = zeros(orbsperpt[ctype], 'd')
stim = clock()
numbins = 0
for ct in range(orbsperpt[ctype]):
wb = ct * 180.0 / orbsperpt[ctype]
psr = psrparams_from_list([pp, Pb, xb, ecc[ctype], wb, 0.0])
tmpnumbins = 2 * numbetween * bin_resp_halfwidth(psr.p, Pb, psr.orb)
if tmpnumbins > numbins: numbins = tmpnumbins
# Powers averaged over orb.t as a function of orb.w
pwrs_w = zeros((orbsperpt[ctype], numbins), Float32)
for ct in range(orbsperpt[ctype]):
wb = ct * 180.0 / orbsperpt[ctype]
if debugout: print('wb = '+repr(wb))
psr = psrparams_from_list([pp, Pb, xb, ecc[ctype], wb, 0.0])
for i in range(numffts):
psr.orb.t = i * Tfft
tmppwrs = spectralpower(gen_bin_response(0.0, numbetween,
psr.p, Tfft,
psr.orb, numbins))
if debugout: print(' tb = '+repr(psr.orb.t)+' Max pow = '+\
repr(max(tmppwrs)))
if showplots:
Pgplot.plotxy(tmppwrs)
Pgplot.closeplot()
pwrs_w[ct] = pwrs_w[ct] + tmppwrs
if showsumplots:
Pgplot.plotxy(pwrs_w[ct], title='power(w) averaged over orb.t')
Pgplot.closeplot()
pwrs_w = pwrs_w / numffts
max_avg_pow = average(maximum.reduce(pwrs_w,1))
if showsumplots:
Pgplot.plotxy(add.reduce(pwrs_w), title='power(w) averaged over orb.t')
Pgplot.closeplot()
tim = clock() - stim
if debugout:
print('Time for this point was ',tim, ' s.')
file.write('%8.6f %10.5f %10d %13.9f\n' % \
(pp, Tfft, int(Tfft/dt), max_avg_pow))
file.flush()
file.close()
| 3,677 | 36.530612 | 79 | py |
presto | presto-master/python/binresponses/monte_ffdot.py | from __future__ import print_function
from builtins import range
from time import clock
from math import *
from Numeric import *
from presto import *
from miscutils import *
from Statistics import *
# Some admin variables
parallel = 0 # True or false
showplots = 0 # True or false
debugout = 0 # True or false
outfiledir = '/home/ransom'
outfilenm = 'monte'
pmass = 1.35 # Pulsar mass in solar masses
cmass = {'WD': 0.3, 'NS': 1.35, 'BH': 10.0} # Companion masses to use
ecc = {'WD': 0.0, 'NS': 0.6, 'BH': 0.6} # Eccentricities to use
orbsperpt = {'WD': 20, 'NS': 100, 'BH': 100} # # of orbits to avg per pt
ppsr = [0.002, 0.02, 0.2, 2.0] # Pulsar periods to test
# Simulation parameters
numTbyPb = 100 # The number of points along the x axis
minTbyPb = 0.01 # Minimum Obs Time / Orbital Period
maxTbyPb = 10.0 # Maximum Obs Time / Orbital Period
ctype = 'BH' # The type of binary companion: 'WD', 'NS', or 'BH'
Pb = 7200.0 # Orbital period in seconds
dt = 0.0001 # The duration of each data sample (s)
searchtype = 'ffdot' # One of 'ffdot', 'sideband', 'shortffts'
maxTbyPb_ffdot = 11.0
##################################################
# You shouldn't need to edit anything below here. #
##################################################
# Figure out our environment
if showplots:
import Pgplot
if parallel:
import mpi
from mpihelp import *
myid = mpi.comm_rank()
numprocs = mpi.comm_size()
outfilenm = (outfiledir+'/'+outfilenm+repr(myid)+
'_'+searchtype+'_'+ctype+'.out')
else:
myid = 0
numprocs = 1
outfilenm = (outfiledir+'/'+outfilenm+
'_'+searchtype+'_'+ctype+'.out')
def psrparams_from_list(pplist):
psr = psrparams()
psr.p = pplist[0]
psr.orb.p = pplist[1]
psr.orb.x = pplist[2]
psr.orb.e = pplist[3]
psr.orb.w = pplist[4]
psr.orb.t = pplist[5]
return psr
def predict_mini_r(fftlen, Pb, T):
nyquist = fftlen / 2
r = fftlen * Pb / T
if (r > nyquist): rpred = alias(r, nyquist)
else: rpred = r
return rpred
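# Worked example of the aliasing logic above: with fftlen = 1024 and T = 3*Pb,
# r = 1024/3 ~ 341.3 < 512, so it is returned unchanged; with T = 1.5*Pb,
# r = 1024/1.5 ~ 682.7 > 512 and the aliased value 2*512 - 682.7 ~ 341.3 is
# returned instead.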
####################################################################
# Calculate the values of our X and Y axis points
logTbyPb = span(log(minTbyPb), log(maxTbyPb), numTbyPb)
TbyPb = exp(logTbyPb)
# Open a file to save each orbit calculation
file = open(outfilenm,'w')
# The Simulation loops
# Loop over T / Porb
for x in range(numTbyPb)[51:66]:
T = Pb * TbyPb[x]
xb = asini_c(Pb, mass_funct2(pmass, cmass[ctype], pi / 3.0))
eb = ecc[ctype]
# Loop over ppsr
for y in range(len(ppsr)):
# Each processor calculates its own point
if not (y % numprocs == myid): continue
else:
pows = zeros(orbsperpt[ctype], 'd')
stim = clock()
if (TbyPb[x] <= maxTbyPb_ffdot):
# Loop over the number of tries per point
for ct in range(orbsperpt[ctype]):
if (eb == 0.0):
wb, tp = 0.0, ct * Pb / orbsperpt[ctype]
else:
(orbf, orbi) = modf(ct / sqrt(orbsperpt[ctype]))
orbi = orbi / sqrt(orbsperpt[ctype])
wb, tp = orbf * 180.0, Pb * orbi
if debugout:
print('T = '+repr(T)+' ppsr = '+repr(ppsr[y])+\
' Pb = '+repr(Pb)+' xb = '+repr(xb)+' eb = '+\
repr(eb)+' wb = '+repr(wb)+' tp = '+repr(tp))
psr = psrparams_from_list([ppsr[y], Pb, xb, eb, wb, tp])
psr_numbins = 2 * bin_resp_halfwidth(psr.p, T, psr.orb)
psr_resp = gen_bin_response(0.0, 1, psr.p, T, psr.orb,
psr_numbins)
if showplots:
print("The raw response:")
Pgplot.plotxy(spectralpower(psr_resp))
Pgplot.closeplot()
# The following places the nominative psr freq
# approx in bin len(data)/2
datalen = next2_to_n(psr_numbins * 2)
if datalen < 1024: datalen = 1024
data = zeros(datalen, 'F')
lo = (len(data) - len(psr_resp)) / 2
hi = lo + len(psr_resp)
data[lo:hi] = array(psr_resp, copy=1)
(tryr, tryz) = estimate_rz(psr, T, show=showplots)
tryr = tryr + len(data) / 2.0
numr = 200
numz = 200
dr = 0.5
dz = 1.0
if debugout:
# print 'psr_numbins = '+`psr_numbins`+\
# ' TbyPb[x] = '+`TbyPb[x]`+\
# ' ppsr[y] = '+`ppsr[y]`+\
# ' len(data) = '+`len(data)`
print(' tryr = %11.5f tryz = %11.5f' % \
(tryr, tryz))
ffd = ffdot_plane(data, tryr, dr, numr,
tryz, dz, numz)
maxarg = argmax(spectralpower(ffd.flat))
rind = maxarg % numr
zind = maxarg / numr
peakr = (rind * dr + int(tryr - (numr * dr) / 2.0))
peakz = (zind * dz + tryz - (numz * dz) / 2.0)
peakpow = ffd[zind][rind].real**2 + \
ffd[zind][rind].imag**2
if showplots:
xvals = arange(numr, typecode="d") * dr + \
int(tryr - (numr * dr) / 2.0)
yvals = arange(numz, typecode="d") * dz + \
tryz - (numz * dz) / 2.0
ffdpow = spectralpower(ffd.flat)
ffdpow.shape = (numz, numr)
Pgplot.plot2d(ffdpow, xvals, yvals, \
labx="Fourier Freq (bins)", \
laby="Fourier Freq Deriv", \
image="astro")
Pgplot.closeplot()
if debugout:
print('peakr = %11.5f peakz = %11.5f' % \
(peakr, peakz))
if (peakpow < 0.2):
pows[ct] = peakpow
rmax = peakr
zmax = peakz
else:
[pows[ct], rmax, zmax, rd] = \
maximize_rz(data, peakr, peakz, \
norm=1.0)
if debugout:
print('max_r = %11.5f max_z = %11.5f' % \
(rmax, zmax))
if debugout:
print(repr(x)+' '+repr(y)+' '+repr(TbyPb[x])+' ', end=' ')
print(repr(ppsr[y])+' '+repr(pows[ct]))
tim = clock() - stim
if debugout:
print('Time for this point was ',tim, ' s.')
file.write('%5d %9.6f %8.6f %11.9f %11.9f %11.9f\n' % \
(y * numTbyPb + x, TbyPb[x], ppsr[y], \
average(pows), max(pows), min(pows)))
file.flush()
file.close()
| 7,526 | 40.585635 | 85 | py |
presto | presto-master/python/binresponses/monte_sideb.py | from __future__ import print_function
from builtins import range
from time import clock
from math import *
from Numeric import *
from presto import *
from miscutils import *
from Statistics import *
from random import expovariate
import RNG
global theo_sum_pow, b_pows, bsum_pows, newpows, noise, fftlen
# Some admin variables
parallel = 0 # True or false
showplots = 0 # True or false
debugout = 0 # True or false
outfiledir = '/home/ransom'
outfilenm = 'monte'
pmass = 1.35 # Pulsar mass in solar masses
cmass = {'WD': 0.3, 'NS': 1.35, 'BH': 10.0} # Companion masses to use
ecc = {'WD': 0.0, 'NS': 0.6, 'BH': 0.6} # Eccentricities to use
orbsperpt = {'WD': 20, 'NS': 100, 'BH': 100} # # of orbits to avg per pt
ppsr = [0.002, 0.02, 0.2, 2.0] # Pulsar periods to test
# Simulation parameters
ctype = 'BH' # One of 'WD', 'NS', or 'BH'
Pb = 7200.0 # Orbital period in seconds
dt = 0.0001 # The duration of each data sample (s)
searchtype = 'sideband' # One of 'ffdot', 'sideband', 'shortffts'
scope = 'PK' # One of the following 'PK' = Parkes Multibeam (20cm)
# 'EB' = Effelsberg (20cm)
scopename = {'PK':'Parkes Multibeam', 'EB':'Effelsberg'}
gain = {'PK':1.5, 'EB':0.666} # Antenna gain (Jy/K)
tsys = {'PK':23.5, 'EB':25.0} # System noise temp (K)
bw = {'PK':64.0, 'EB':90.0} # Bandwidth in MHz
numpol = 2 # Number of polarizations
dt = 0.0001 # Length of each time series bin
P_orb = 7200.0 # The orbital period (s) used
detect_sigma = 6.0 # What sigma constitutes a detection
# Calculated parameters
sigma_t = 1000.0 * gain[scope] * tsys[scope] / \
sqrt(numpol * dt * bw[scope] * 1e6)
dist = RNG.ExponentialDistribution(1.0)
rng = RNG.CreateGenerator(0, dist)
##################################################
# You shouldn't need to edit anything below here. #
##################################################
# Figure out our environment
if showplots:
import Pgplot
if parallel:
import mpi
from mpihelp import *
myid = mpi.comm_rank()
numprocs = mpi.comm_size()
outfilenm = (outfiledir+'/'+outfilenm+repr(myid)+
'_'+searchtype+'_'+ctype+'.out')
else:
myid = 0
numprocs = 1
outfilenm = (outfiledir+'/'+outfilenm+
'_'+searchtype+'_'+ctype+'.out')
def secant(func, oldx, x, TOL=1e-6): # f(x)=func(x)
"""
Summary
Solve for a zero of function using Secant method
Usage
real = func(real)
real = secant(func, real, real [, TOL=real])
Similar to Newton's method, but the derivative is estimated by divided
difference using only function calls. A root is estimated by
x = x - f(x) (x - oldx)/(f(x) - f(oldx))
where oldx = x[i-1] and x = x[i].
"""
oldf, f = func(oldx), func(x)
if (abs(f) > abs(oldf)): # swap so that f(x) is closer to 0
oldx, x = x, oldx
oldf, f = f, oldf
count = 0
while 1:
dx = f * (x - oldx) / float(f - oldf)
if abs(dx) < TOL * (1 + abs(x)): return x - dx
if count > 50:
x = average([x, oldx, x - dx])
f = func(x)
# print "secant(%d): x=%s, f(x)=%s" % (count, x, f)
return x
oldx, x = x, x - dx
oldf, f = f, func(x)
count = count + 1
# print "secant(%d): x=%s, f(x)=%s" % (count, x, f)
def mini_fft_sum_pows(tryamp):
global theo_sum_pow, b_pows, bsum_pows, newpows, noise, fftlen
fdata = rfft(newpows * tryamp + noise)
norm = fdata[0].real
rpred = predict_mini_r(fftlen, psr.orb.p, T)
[b_pows[ct], rmax, rd] = \
maximize_r(fdata, rpred, norm=norm)
# print 'avg(dat) = ',average(newpows * tryamp[ct] + noise)
# print 'avg(fft) = ',average(spectralpower(fdata)[1:]/norm)
# print tryamp
if debugout:
print('Nyquist = '+repr(fftlen/2))
print(' rpred = %10.3f power = %10.7f' % \
(rpred, b_pows[ct]))
bsum_pows[ct] = b_pows[ct]
if (TbyPb[x] > 2.0):
for harmonic in arange(int(TbyPb[x]-1.0))+2:
hrpred = predict_mini_r(fftlen, harmonic * \
psr.orb.p, T)
[tmppow, hrmax, rd] = \
maximize_r(fdata, hrpred, norm=norm)
bsum_pows[ct] = bsum_pows[ct] + tmppow
if debugout:
print(' hrpred = %10.3f power = %10.7f' % \
(hrpred, tmppow))
if debugout:
print(' r = %10.3f meas_r = %10.3f '\
'alias_r = %10.3f' % \
(fftlen * psr.orb.p / T, rmax,
alias(rmax, fftlen/2)))
print(' p = %10.3f meas_p = %10.3f '\
'alias_p = %10.3f' % \
(psr.orb.p, rmax * T / fftlen,
alias(rmax, fftlen/2) * T / fftlen))
print(' BigPow = %10.7f SumPow = %10.7f' % \
(b_pows[ct], bsum_pows[ct]))
return bsum_pows[ct] - theo_sum_pow
def psrparams_from_list(pplist):
psr = psrparams()
psr.p = pplist[0]
psr.orb.p = pplist[1]
psr.orb.x = pplist[2]
psr.orb.e = pplist[3]
psr.orb.w = pplist[4]
psr.orb.t = pplist[5]
return psr
def predict_mini_r(fftlen, Pb, T):
nyquist = fftlen / 2
r = fftlen * Pb / T
if (r > nyquist): rpred = alias(r, nyquist)
else: rpred = r
return rpred
def slice_resp(psr, T, response):
c1 = TWOPI * psr.orb.x / (psr.orb.p * sqrt(1.0 - psr.orb.e * psr.orb.e))
c2 = psr.orb.e * cos(psr.orb.w * DEGTORAD)
v1 = c1 * (c2 + 1.0);
v2 = c1 * (c2 - 1.0);
if (v1 < v2): lo, hi = v2, v1
else: lo, hi = v1, v2
lo = len(response)/2 - int(fabs(T * fabs(lo) / \
(psr.p * (1.0 + fabs(lo)))))
hi = len(response)/2 + int(fabs(T * fabs(hi) / \
(psr.p * (1.0 + fabs(hi)))))
diff = hi-lo
newlen = (diff/100 + 1)*100
newresp = zeros(newlen, 'f')
newresp[0:diff] = response[lo:hi]
return newresp
####################################################################
# Calculate the values of our X and Y axis points
TbyPb = arange(1.15, 10.15, 0.2)
# Open a file to save each orbit calculation
file = open(outfilenm,'w')
# The Simulation loops
# Loop over T / Porb
xb = asini_c(Pb, mass_funct2(pmass, cmass[ctype], pi / 3.0))
for x in range(len(TbyPb)):
T = Pb * TbyPb[x]
N = T / dt
# Loop over ppsr
for y in range(len(ppsr)):
# Each processor calculates its own point
z = 2 * pi * xb / ppsr[y]
if not (y % numprocs == myid): continue
else:
b_pows = zeros(orbsperpt[ctype], 'd')
tryamp = zeros(orbsperpt[ctype], 'd')
bsum_pows = zeros(orbsperpt[ctype], 'd')
fftlen = 0
# Loop over the number of tries per point
for ct in range(orbsperpt[ctype]):
stim = clock()
if (ecc[ctype] == 0.0):
wb, tp = 0.0, ct * Pb / orbsperpt[ctype]
else:
(orbf, orbi) = modf(ct / sqrt(orbsperpt[ctype]))
orbi = orbi / sqrt(orbsperpt[ctype])
wb, tp = orbf * 180.0, Pb * orbi
# Generate the PSR response
psr = psrparams_from_list([ppsr[y], Pb, xb, ecc[ctype], wb, tp])
psr_numbins = 2 * bin_resp_halfwidth(psr.p, T, psr.orb)
psr_resp = gen_bin_response(0.0, 1, psr.p, T, psr.orb,
psr_numbins)
if debugout:
print('T = %9.3f Pb = %9.3f Ppsr = %9.7f' % \
(T, psr.orb.p, psr.p))
newpows = slice_resp(psr, T, spectralpower(psr_resp))
if showplots:
print("The raw response:")
Pgplot.plotxy(newpows)
Pgplot.closeplot()
fftlen = len(newpows)
noise = rng.sample(fftlen)
tryamp[ct] = 500.0
theo_sum_pow = powersum_at_sigma(detect_sigma,
int(T/psr.orb.p))
if debugout:
print('theo_sum_pow = ', theo_sum_pow)
newloop = 1
tryamp[ct] = secant(mini_fft_sum_pows, tryamp[ct]/2,
tryamp[ct], 0.01)
# Pgplot.plotxy(spectralpower(fdata)[1:]/norm, \
# arange(len(fdata))*T/fftlen, \
# labx='Orbital Period (s))', \
# laby='Power')
# Pgplot.closeplot()
#print ' BigPow = %10.7f SumPow = %10.7f S(mJy) = %10.5f' % \
# (b_pows[ct], bsum_pows[ct]-theo_sum_pow, 2 * sigma_t * sqrt(tryamp[ct]/N))
tim = clock() - stim
if debugout:
print('Time for this point was ',tim, ' s.')
# Note: The output contains the average value of tryamp. To convert this
# to a minimum flux density, use the formula
# S(mJy) = 2 * sigma_t * sqrt(tryamp / N)
file.write('%9.6f %8.6f %10d %7d %13.9f %13.9f %13.7f\n' % \
(TbyPb[x], ppsr[y], N, fftlen, average(b_pows),
average(bsum_pows), average(tryamp)))
file.flush()
file.close()
| 9,594 | 37.075397 | 97 | py |
presto | presto-master/python/binresponses/montebinresp.py | from __future__ import print_function
from builtins import range
from time import clock
from math import *
from Numeric import *
from presto import *
from miscutils import *
from Statistics import *
# Some admin variables
parallel = 0 # True or false
showplots = 1 # True or false
debugout = 1 # True or false
outfiledir = '/home/ransom'
outfilenm = 'montebinresp'
pmass = 1.35 # Pulsar mass in solar masses
cmass = {'WD': 0.3, 'NS': 1.35, 'BH': 10.0} # Companion masses to use
ecc = {'WD': 0.0, 'NS': 0.6, 'BH': 0.6} # Eccentricities to use
orbsperpt = {'WD': 20, 'NS': 100, 'BH': 100} # # of orbits to avg per pt
ppsr = [0.002, 0.02, 0.2, 2.0] # Pulsar periods to test
# Simulation parameters
numTbyPb = 100 # The number of points along the x axis
minTbyPb = 0.01 # Minimum Obs Time / Orbital Period
maxTbyPb = 10.0 # Maximum Obs Time / Orbital Period
ctype = 'BH' # The type of binary companion: 'WD', 'NS', or 'BH'
Pb = 7200.0 # Orbital period in seconds
dt = 0.0001 # The duration of each data sample (s)
searchtype = 'sideband' # One of 'ffdot', 'sideband', 'shortffts'
maxTbyPb_ffdot = 11.0
minTbyPb_sideband = 1.75
fftlen_shortffts = 0.05
##################################################
# You shouldn't need to edit anything below here. #
##################################################
# Figure out our environment
if showplots:
import Pgplot
if parallel:
import mpi
from mpihelp import *
myid = mpi.comm_rank()
numprocs = mpi.comm_size()
outfilenm = (outfiledir+'/'+outfilenm+repr(myid)+
'_'+searchtype+'_'+ctype+'.out')
else:
myid = 0
numprocs = 1
outfilenm = (outfiledir+'/'+outfilenm+
'_'+searchtype+'_'+ctype+'.out')
def psrparams_from_list(pplist):
psr = psrparams()
psr.p = pplist[0]
psr.orb.p = pplist[1]
psr.orb.x = pplist[2]
psr.orb.e = pplist[3]
psr.orb.w = pplist[4]
psr.orb.t = pplist[5]
return psr
def predict_mini_r(fftlen, Pb, T):
nyquist = fftlen / 2
r = fftlen * Pb / T
if (r > nyquist): rpred = alias(r, nyquist)
else: rpred = r
return rpred
####################################################################
# Calculate the values of our X and Y axis points
logTbyPb = span(log(minTbyPb), log(maxTbyPb), numTbyPb)
TbyPb = exp(logTbyPb)
# Open a file to save each orbit calculation
file = open(outfilenm,'w')
# The Simulation loops
# Loop over T / Porb
for x in range(numTbyPb):
T = Pb * TbyPb[x]
xb = asini_c(Pb, mass_funct2(pmass, cmass[ctype], pi / 3.0))
eb = ecc[ctype]
# Loop over ppsr
for y in range(len(ppsr)):
# Each processor calculates its own point
if not (y % numprocs == myid): continue
else:
pows = zeros(orbsperpt[ctype], 'd')
stim = clock()
if ((searchtype == 'ffdot' and
TbyPb[x] <= maxTbyPb_ffdot) or
(searchtype == 'sideband' and
TbyPb[x] >= minTbyPb_sideband) or
(searchtype == 'shortffts')):
# Loop over the number of tries per point
for ct in range(orbsperpt[ctype]):
if (eb == 0.0):
wb, tp = 0.0, ct * Pb / orbsperpt[ctype]
else:
(orbf, orbi) = modf(ct / sqrt(orbsperpt[ctype]))
orbi = orbi / sqrt(orbsperpt[ctype])
wb, tp = orbf * 180.0, Pb * orbi
if debugout:
print('T = '+repr(T)+' ppsr = '+repr(ppsr[y])+\
' Pb = '+repr(Pb)+' xb = '+repr(xb)+' eb = '+\
repr(eb)+' wb = '+repr(wb)+' tp = '+repr(tp))
psr = psrparams_from_list([ppsr[y], Pb, xb, eb, wb, tp])
psr_numbins = 2 * bin_resp_halfwidth(psr.p, T, psr.orb)
psr_resp = gen_bin_response(0.0, 1, psr.p, T, psr.orb,
psr_numbins)
if showplots:
print("The raw response:")
Pgplot.plotxy(spectralpower(psr_resp))
Pgplot.closeplot()
if searchtype == 'ffdot':
# The following places the nominative psr freq
# approx in bin len(data)/2
datalen = next2_to_n(psr_numbins * 2)
if datalen < 1024: datalen = 1024
data = zeros(datalen, 'F')
lo = (len(data) - len(psr_resp)) / 2
hi = lo + len(psr_resp)
data[lo:hi] = array(psr_resp, copy=1)
(tryr, tryz) = estimate_rz(psr, T, show=showplots)
tryr = tryr + len(data) / 2.0
numr = 200
numz = 200
dr = 0.5
dz = 1.0
if debugout:
# print 'psr_numbins = '+`psr_numbins`+\
# ' TbyPb[x] = '+`TbyPb[x]`+\
# ' ppsr[y] = '+`ppsr[y]`+\
# ' len(data) = '+`len(data)`
print(' tryr = %11.5f tryz = %11.5f' % \
(tryr, tryz))
ffd = ffdot_plane(data, tryr, dr, numr,
tryz, dz, numz)
maxarg = argmax(spectralpower(ffd.flat))
rind = maxarg % numr
zind = maxarg / numr
peakr = (rind * dr + int(tryr - (numr * dr) / 2.0))
peakz = (zind * dz + tryz - (numz * dz) / 2.0)
peakpow = ffd[zind][rind].real**2 + \
ffd[zind][rind].imag**2
if showplots:
xvals = arange(numr, typecode="d") * dr + \
int(tryr - (numr * dr) / 2.0)
yvals = arange(numz, typecode="d") * dz + \
tryz - (numz * dz) / 2.0
ffdpow = spectralpower(ffd.flat)
ffdpow.shape = (numz, numr)
Pgplot.plot2d(ffdpow, xvals, yvals, \
labx="Fourier Freq (bins)", \
laby="Fourier Freq Deriv", \
image="astro")
Pgplot.closeplot()
if debugout:
print('peakr = %11.5f peakz = %11.5f' % \
(peakr, peakz))
if (peakpow < 0.2):
pows[ct] = peakpow
rmax = peakr
zmax = peakz
else:
[pows[ct], rmax, zmax, rd] = \
maximize_rz(data, peakr, peakz, \
norm=1.0)
if debugout:
print('max_r = %11.5f max_z = %11.5f' % \
(rmax, zmax))
elif searchtype == 'sideband':
if debugout:
print('T = %9.3f Pb = %9.3f Ppsr = %9.7f' % \
(T, psr.orb.p, psr.p))
# The biggest FFT first
psr_pows = spectralpower(psr_resp)
fftlen = int(next2_to_n(len(psr_pows)))
fdata = zeros(fftlen, 'f')
fdata[0:len(psr_pows)] = array(psr_pows, copy=1)
fdata = rfft(fdata)
rpred = predict_mini_r(fftlen, psr.orb.p, T)
[pows[ct], rmax, rd] = \
maximize_r(fdata, rpred, norm=1.0)
if debugout:
print('theo_r = %f alias_r = %f' % \
(fftlen * psr.orb.p / T, rpred))
print('pow1 = %f Porb = %f' % \
(pows[ct], rmax * T / fftlen))
#cands = search_fft(fdata, 15, norm=1.0/fftlen)
#if debugout:
#print 'rpred = %11.5f '\
# 'max_r = %11.5f max_pow = %11.5f' % \
# (rpred, rmax, pows[ct])
if showplots:
Pgplot.plotxy(spectralpower(fdata), \
arange(len(fdata))*T/fftlen, \
labx='Orbital Period (s))', \
laby='Power')
Pgplot.closeplot()
#if debugout:
#for ii in range(15):
# print ' r = %11.5f pow = %9.7f' % \
# (cands[ii][1], cands[ii][0])
# Do the first half-length FFT
fftlen = fftlen / 2
fdata = zeros(fftlen, 'f')
fdata[0:fftlen] = array(psr_pows[0:fftlen], copy=1)
fdata = rfft(fdata)
rpred = predict_mini_r(fftlen, psr.orb.p, T)
[tmppow, rmax, rd] = \
maximize_r(fdata, rpred, norm=1.0)
if tmppow > pows[ct]: pows[ct] = tmppow
if debugout:
print('theo_r = %f alias_r = %f' % \
(fftlen * psr.orb.p / T, rpred))
print('pow1 = %f Porb = %f' % \
(tmppow, rmax * T / fftlen))
#cands = search_fft(fdata, 15, norm=1.0/fftlen)
if showplots:
Pgplot.plotxy(spectralpower(fdata), \
arange(len(fdata))*T/fftlen, \
labx='Orbital Period (s))', \
laby='Power')
Pgplot.closeplot()
#if debugout:
#for ii in range(15):
# print ' r = %11.5f pow = %9.7f' % \
# (cands[ii][1], cands[ii][0])
# Do the second half-length FFT
fdata = zeros(fftlen, 'f')
lencopy = len(psr_pows[fftlen:])
fdata[0:lencopy] = array(psr_pows[fftlen:], copy=1)
fdata = rfft(fdata)
rpred = predict_mini_r(fftlen, psr.orb.p, T)
[tmppow, rmax, rd] = \
maximize_r(fdata, rpred, norm=1.0)
if tmppow > pows[ct]: pows[ct] = tmppow
if debugout:
print('theo_r = %f alias_r = %f' % \
(fftlen * psr.orb.p / T, rpred))
print('pow1 = %f Porb = %f' % \
(tmppow, rmax * T / fftlen))
#cands = search_fft(fdata, 15, norm=1.0/fftlen)
if showplots:
Pgplot.plotxy(spectralpower(fdata), \
arange(len(fdata))*T/fftlen, \
labx='Orbital Period (s))', \
laby='Power')
Pgplot.closeplot()
#if debugout:
#for ii in range(15):
# print ' r = %11.5f pow = %9.7f' % \
# (cands[ii][1], cands[ii][0])
if debugout:
print(repr(x)+' '+repr(y)+' '+repr(TbyPb[x])+' ', end=' ')
print(repr(ppsr[y])+' '+repr(pows[ct]))
tim = clock() - stim
if debugout:
print('Time for this point was ',tim, ' s.')
file.write('%5d %9.6f %8.6f %11.9f %11.9f %11.9f\n' % \
(y * numTbyPb + x, TbyPb[x], ppsr[y], \
average(pows), max(pows), min(pows)))
file.flush()
file.close()
| 12,905 | 46.623616 | 85 | py |
presto | presto-master/python/binopttest/comb.py | from __future__ import print_function
from builtins import range
from Numeric import *
from presto import *
from LeastSquares import leastSquaresFit
from orbitstuff import *
# Observation parameters
dt = 0.000125 # The duration of each data sample
N = 2**28 # The number of points in the observation
T = N*dt # The total observation time
ctype = 'NS' # The type of binary companion: 'WD', 'NS', or 'BH'
mpsr = mc = 0.0
psr = fake_mspsr(companion = ctype)
psr.orb.e = 0.0
psr.orb.t = 0.0
psr.orb.w = 0.0
z = 2*pi*psr.orb.x/psr.p
print('')
print(' PSR mass =', mpsr)
print(' Companion mass =', mc)
print(' PSR period (s) =', psr.p)
print(' PSR frequency (hz) =', 1.0/psr.p)
print(' Orbit period (s) =', psr.orb.p)
print(' Orbit asini/c (lt-s) =', psr.orb.x)
print(' Orbit eccentricity =', psr.orb.e)
print(' Orbit angle (deg) =', psr.orb.w)
print(' Orbit time (s) =', psr.orb.t)
print(' Orbit Fourier Freq =', T/psr.orb.p)
print(' Orbit z =', z)
print('')
m = 0
kernel = presto.gen_bin_response(0.0, 1, psr.p, T, psr.orb,
                                 presto.LOWACC, m)
fftlen = next2_to_n(len(kernel))
comb = zeros(fftlen, 'F')
comb[0:len(kernel)] = kernel
minifft = rfft(spectralpower(comb))
minipow = spectralpower(minifft)
miniphs = spectralphase(minifft)
miniphs = array(miniphs, typecode='d', copy=1)
for i in range(len(minifft)-1):
if miniphs[i+1]-miniphs[i] < 0.0:
miniphs[i+1:] = miniphs[i+1:] + 360.0
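# The loop above unwraps the mini-FFT phases: whenever the phase drops from one
# bin to the next, 360 degrees is added to the rest of the array so that
# miniphs increases monotonically.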
| 1,549 | 31.978723 | 71 | py |
presto | presto-master/python/binopttest/bindata.py | from __future__ import print_function
from future import standard_library
standard_library.install_aliases()
from builtins import range
def catvar(col):
ret = []
global a, b, c, d, e
for i in range(shape(a)[0]):
ret.append(a[i][col])
for i in range(shape(b)[0]):
ret.append(b[i][col])
for i in range(shape(c)[0]):
ret.append(c[i][col])
for i in range(shape(d)[0]):
ret.append(d[i][col])
for i in range(shape(e)[0]):
ret.append(e[i][col])
return ret
def readsaves(file='montebinopt_saves.txt'):
    from pickle import load  # 'load' is not imported at module level, so bring it in here
    f = open(file, 'r')
result = []
while 1:
try:
result.append(load(f))
except ValueError:
continue
except EOFError:
break
f.close()
return result
def saveresults(file="testresults.txt"):
    from pickle import dump  # 'from pickle import *' is not allowed inside a function
global psrp, orbp, orbx, orbe, orbw, orbt
global widthp, widthx, widtht, widthe, widthw
global mf, z
vars = ('psrp', 'orbp', 'orbx', 'orbe', 'orbw', 'orbt',
'widthp', 'widthx', 'widtht', 'widthe', 'widthw',
'mf', 'z')
f = open(file, 'w')
for var in vars:
print('Saving ', var, '...')
exec('dump(%s, f)' % (var))
f.close()
print('Saved em.')
def quadratic(parameters, x):
a = parameters[0]
b = parameters[1]
c = parameters[2]
return (a * x + b) * x + c
def linear(parameters, x):
m = parameters[0]
b = parameters[1]
return m * x + b
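# Illustrative use of the model functions above with ScientificPython's
# leastSquaresFit (the data points are made up):
#     from LeastSquares import leastSquaresFit
#     fit = leastSquaresFit(linear, (1.0, 0.0), [(0.0, 0.1), (1.0, 1.1), (2.0, 2.0)])
#     slope, intercept = fit[0]   # fit[1] is the quality of the fit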
def genfits():
from LeastSquares import leastSquaresFit
global psrp, orbp, orbx, orbe, orbw, orbt
global widthp, widthx, widtht, widthe, widthw
global mf, z
yvals = {'Orb p':widthp, 'Orb x':widthx, 'Orb t':widtht}
xvals = {'Orb p':orbp/z, 'Orb x':1.0/z, 'Orb t':1.0/z}
xtits = {'Orb p':'Orb p/z', 'Orb x':'1.0/z', 'Orb t':'1.0/z'}
for fitvar in ['Orb p', 'Orb x', 'Orb t']:
vals = []
for i in range(len(xvals[fitvar])):
vals.append((xvals[fitvar][i], yvals[fitvar][i]))
fit = leastSquaresFit(linear, (1.0, 0.0), vals)
print('%s width = %10.7f * %s + %10.7f (Acc: %f)' % (fitvar,
fit[0][0],
xtits[fitvar],
fit[0][1],
fit[1]))
plotxy(yvals[fitvar], xvals[fitvar],
laby=fitvar+' Width (Fractional)',
labx=xtits[fitvar], line=None, font=2,
symbol=2, color='red', device='fits.ps/CPS')
plotxy([fit[0][0]*min(xvals[fitvar])+fit[0][1],
fit[0][0]*max(xvals[fitvar])+fit[0][1]],
[min(xvals[fitvar]), max(xvals[fitvar])],
line=1, symbol=None, color='blue')
nextplotpage(1)
closeplot()
def genlogfits():
from LeastSquares import leastSquaresFit
global psrp, orbp, orbx, orbe, orbw, orbt
global widthp, widthx, widtht, widthe, widthw
global mf, z
yvals = {'Orb p':log(widthp), 'Orb x':log(widthx), 'Orb t':log(widtht)}
xvals = {'Orb p':log(orbp/z), 'Orb x':log(1.0/z), 'Orb t':log(1.0/z)}
xtits = {'Orb p':'log(Orb p/z)', 'Orb x':'log(1.0/z)',
'Orb t':'log(1.0/z)'}
for fitvar in ['Orb p', 'Orb x', 'Orb t']:
vals = []
for i in range(len(xvals[fitvar])):
vals.append((xvals[fitvar][i], yvals[fitvar][i]))
fit = leastSquaresFit(linear, (1.0, 0.0), vals)
print('log(%s) width = %10.7f * %s + %10.7f (Acc: %f)' % (fitvar,
fit[0][0],
xtits[fitvar],
fit[0][1],
fit[1]))
plotxy(yvals[fitvar], xvals[fitvar],
laby='log('+fitvar+') Width (Fractional)',
labx=xtits[fitvar], line=None, font=2,
symbol=2, color='red', device='logfits.ps/CPS')
plotxy([fit[0][0]*min(xvals[fitvar])+fit[0][1],
fit[0][0]*max(xvals[fitvar])+fit[0][1]],
[min(xvals[fitvar]), max(xvals[fitvar])],
line=1, symbol=None, color='blue')
nextplotpage(1)
closeplot()
if __name__ == '__main__':
from math import *
from Numeric import *
from stats import *
from Statistics import *
from Pgplot import *
from miscutils import *
def help(funct):
"""
help(funct):
Print the documentation string of a function or method.
"""
print(eval(funct + '.__doc__'))
from pickle import *
vars = ('psrp', 'orbp', 'orbx', 'orbe', 'orbw', 'orbt',
'widthp', 'widthx', 'widtht', 'widthe', 'widthw',
'mf', 'z')
f = open("testresults.txt")
for var in vars:
print('Loading ', var, '...')
exec(var + ' = asarray(load(f))')
f.close()
print('Got em.')
| 5,131 | 34.638889 | 81 | py |
presto | presto-master/python/binopttest/montebinopt.py | from __future__ import print_function
from future import standard_library
standard_library.install_aliases()
from builtins import str
from builtins import input
from builtins import range
import math, string, Numeric, presto, random, sys, pickle
from LeastSquares import leastSquaresFit
from orbitstuff import *
# Some admin variables
parallel = 0 # True or false
showplots = 0 # True or false
debugout = 0 # True or false
numloops = 200 # Number of orbits to test
numbetween = 16 # The number of bins to interpolate
# Observation parameters
dt = 0.000125 # The duration of each data sample
N = 2**28 # The number of points in the observation
T = N*dt # The total observation time
ctype = 'NS' # The type of binary companion: 'WD', 'NS', or 'BH'
# These are the minimum distances to measure from the true values
Dp = 0.00002 # fraction of orbital period
Dx = 0.0002 # fraction of projected orbital semi-major axis
De = 0.01 # eccentricity (absolute)
Dw = 0.5 # degrees (absolute)
Dt = 0.00002 # fraction of orbital period
if showplots:
import Pgplot
if parallel:
import mpi
from mpihelp import *
myid = mpi.comm_rank()
numprocs = mpi.comm_size()
if ctype=='WD':
if numprocs!=3:
raise SystemExit('You need 3 procs for the NS-WD simulation.')
else:
if numprocs!=5:
raise SystemExit('You need 5 procs for a NS-NS or NS-BH simulation.')
else:
myid = 0
# The mathematical model of the Fourier Peak.
def quadratic(parameters, x):
a = parameters[0]
b = parameters[1]
c = parameters[2]
return (a * x + b) * x + c
# Store important psrparams info in a list
def psrparams_to_list(psr):
result = []
result.append(psr.p)
result.append(psr.orb.p)
result.append(psr.orb.x)
result.append(psr.orb.e)
result.append(psr.orb.w)
result.append(psr.orb.t)
return result
# Get important psrparams info from a list
def psrparams_from_list(pplist):
psr = presto.psrparams()
psr.p = pplist[0]
psr.orb.p = pplist[1]
psr.orb.x = pplist[2]
psr.orb.e = pplist[3]
psr.orb.w = pplist[4]
psr.orb.t = pplist[5]
return psr
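# Round-trip note: psrparams_from_list(psrparams_to_list(psr)) rebuilds a
# psrparams object carrying just the spin period and the five Keplerian
# elements, which is all that needs to be passed between MPI processes below.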
# Correlate a kernel with a data array. Return the good parts.
def corr(data, kernel, numbetween, firsttime=0):
    m = len(kernel)//2
    lobin = m // numbetween
numbins = len(data) - 2 * lobin
if firsttime: samedata = 0
else: samedata = 1
result = Numeric.zeros(numbins * numbetween, 'F')
presto.corr_complex(data, len(data), lobin, numbins,
numbetween, kernel, m, result, samedata, 0)
return result
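# Usage sketch (this mirrors how corr() is called in the main loop below):
#     kernel = presto.gen_bin_response(0.0, numbetween, psr.p, T, cand,
#                                      presto.LOWACC, m)
#     result = corr(data, kernel, numbetween, firsttime)
#     respow = presto.spectralpower(result)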
# Open a file to save each orbit calculation
file = open('montebinopt_saves.txt','w')
# Decide what work we have to perform
work = ['p', 'x', 't', 'e', 'w']
if not parallel:
if ctype=='WD':
numjobs = 3
else:
numjobs = 5
else:
numjobs = 1
# Begin the loop over the candidates
widths = []
for i in range(numloops):
# Generate the fake pulsar parameters
if myid==0:
psr = fake_mspsr(companion = ctype)
psrlist = psrparams_to_list(psr)
else:
psr = presto.psrparams()
psrlist = None
if parallel:
psrlist = bcast_general(psrlist, myid, 0)
if myid==0: psrlist = psrparams_to_list(psr)
else: psr = psrparams_from_list(psrlist)
if debugout:
allproc_print(numprocs, 'Psr period =', psr.p)
print('')
print('Trial', i)
if debugout:
print('')
print(' PSR mass =', mpsr)
print(' Companion mass =', mc)
print(' PSR period (s) =', psr.p)
print(' PSR frequency (hz) =', 1.0/psr.p)
print(' Orbit period (s) =', psr.orb.p)
print(' Orbit asini/c (lt-s) =', psr.orb.x)
print(' Orbit eccentricity =', psr.orb.e)
print(' Orbit angle (deg) =', psr.orb.w)
print(' Orbit time (s) =', psr.orb.t)
print(' Orbit Fourier Freq =', T/psr.orb.p)
print(' Orbit z =', \
presto.TWOPI*psr.orb.x/psr.p)
print('')
# Create the data set
cand = presto.orbitparams()
m = 0
comb = presto.gen_bin_response(0.0, 1, psr.p, T, psr.orb ,
presto.LOWACC, m)
ind = len(comb)
    # The following is performed automatically in gen_bin_resp() now
# m = (ind / 2 + 10) * numbetween
data = Numeric.zeros(3 * ind, 'F')
data[ind:2*ind] = comb
if showplots and not parallel:
Pgplot.plotxy(presto.spectralpower(data), color='red',
title='Data', labx='Fourier Frequency',
laby='Relative Power')
a = input("Press enter to continue...")
Pgplot.nextplotpage(1)
# Perform the loops over the Keplerian parameters
for job in range(numjobs):
if parallel:
myjob = work[myid]
else:
myjob = work[job]
if myjob=='p':
Dd = Dp
psrref = psr.orb.p
if myjob=='x':
Dd = Dx
psrref = psr.orb.x
if myjob=='t':
Dd = Dt
psrref = psr.orb.p
if myjob=='e':
Dd = De
psrref = 1.0
if myjob=='w':
Dd = Dw
psrref = 1.0
firsttime = 1
vals = []
vals.append((0.0, 1.0))
vals.append((0.0, 1.0))
ddelta = Dd * psrref
delta = ddelta
while vals[-1][1] > 0.5 and vals[-2][1] > 0.5:
for currentdelta in [delta, -delta]:
# Adjust our candidate orbital period
cand = copyorb(psr.orb, cand)
if myjob=='p': cand.p = psr.orb.p + currentdelta
if myjob=='x': cand.x = psr.orb.x + currentdelta
if myjob=='t': cand.t = psr.orb.t + currentdelta
if myjob=='e': cand.e = psr.orb.e + currentdelta
if myjob=='w': cand.w = psr.orb.w + currentdelta
# Generate the new correlation kernel
kernel = presto.gen_bin_response(0.0, numbetween,
psr.p, T, cand,
presto.LOWACC, m)
# Perform the correlation
result = corr(data, kernel, numbetween, firsttime)
firsttime = 0
# Convert to a power spectrum
respow = presto.spectralpower(result)
vals.append((currentdelta/psrref,
Numeric.maximum.reduce(respow)))
if debugout:
# Print the most recent results
print(' %s: Delta = %10.6f Response = %8.5f' % \
(myjob, vals[-1][0], vals[-1][1]))
if showplots and not parallel:
# Plot the results of the correlation
Pgplot.plotxy(respow, labx='Frequency',
laby='Relative Power')
a = input("Press enter to continue...")
Pgplot.nextplotpage(1)
# A very rough adaptive stepsize
if abs(vals[-3][1] - vals[-1][1]) < 0.04:
ddelta = ddelta * 2.0
delta = delta + ddelta
# Fit a quadratic to the width values
fit = leastSquaresFit(quadratic, (-1.0, 0.0, 1.0), vals)
if debugout:
print('\n %sfit = %fx^2 + %fx + %f\n' % (myjob, fit[0][0],
fit[0][1],
fit[0][2]))
width = 2.0*math.sqrt(-0.5/fit[0][0])
if parallel:
newwidths = mpi.gather_string(str(width), 0)
if myid==0:
for proc in range(numprocs):
                psrlist.append(float(newwidths[proc]))
else:
psrlist.append(width)
widths.append(psrlist)
if debugout:
print('Widths are', widths[i])
# Save our most recent orbit and width information
pickle.dump(widths[i], file, 1)
file.close()
| 8,294 | 34.448718 | 81 | py |
presto | presto-master/python/presto/sifting.py | #!/usr/bin/env python
from __future__ import print_function
from __future__ import absolute_import
from builtins import zip, str, range, object
from operator import attrgetter
import sys, re, os, copy
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import os.path
import glob
from presto import infodata
from presto.presto import candidate_sigma
# Note: the following are global variables that can
# (and should) be set in whatever module
# imports this file. The following values are
# "OK" defaults for some searches....
# How close a candidate has to be to another candidate to
# consider it the same candidate (in Fourier bins)
r_err = 1.1
# Longest period candidates to consider (s)
long_period = 15.0
# Shortest period candidates to consider (s)
short_period = 0.0005
# Ignore candidates with a sigma (from incoherent power summation) less than this
sigma_threshold = 6.0
# Ignore candidates with a coherent power less than this
c_pow_threshold = 100.0
# Ignore any candidates where at least one harmonic does not exceed this power
harm_pow_cutoff = 8.0
# If the birds file works well, the following shouldn't
# be needed at all...
# (ms, err)
known_birds_p = []
# (Hz, err)
known_birds_f = []
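# A minimal sketch of how a search script might override these module-level
# settings before sifting (only attributes defined above are touched; the
# numbers are illustrative):
#
#     from presto import sifting
#     sifting.sigma_threshold = 8.0            # keep only cands above 8 sigma
#     sifting.short_period = 0.001             # ignore anything below 1 ms
#     sifting.known_birds_f = [(60.0, 0.05)]   # e.g. notch 60 Hz mains +/- 0.05 Hz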
#---------------------------------------------------
fund_re = re.compile(r"^\d")
harms_re = re.compile(r"^[ ]\d")
DM_re = re.compile(r"DM(\d+\.\d{2})")
# Add some functions to maintain support for the old
# sifting API
def remove_duplicate_candidates(candlist, *args, **kwargs):
"""Remove duplicate candidates. The candlist is modified
**in-place**.
Note: This function is defined to maintain support
for old code. It simply calls the
'remove_duplicate_candidates' method of candlist.
Inputs:
** All arguments are passed onto the
'remove_duplicate_candidates' method of candlist.
Output:
candlist: The modified candidate list.
"""
candlist.remove_duplicate_candidates(*args, **kwargs)
return candlist
def remove_DM_problems(candlist, *args, **kwargs):
"""Remove candidates with DM problems. The candlist is modified
**in-place**.
Note: This function is defined to maintain support
for old code. It simply calls the
'remove_DM_problems' method of candlist.
Inputs:
** All arguments are passed onto the
'remove_DM_problems' method of candlist.
Output:
candlist: The modified candidate list.
"""
candlist.remove_DM_problems(*args, **kwargs)
return candlist
def remove_harmonics(candlist, *args, **kwargs):
"""Remove harmonics. The candlist is modified
**in-place**.
Note: This function is defined to maintain support
for old code. It simply calls the
'remove_harmonics' method of candlist.
Inputs:
** All arguments are passed onto the
'remove_harmonics' method of candlist.
Output:
candlist: The modified candidate list.
"""
candlist.remove_harmonics(*args, **kwargs)
return candlist
def write_candlist(candlist, *args, **kwargs):
candlist.to_file(*args, **kwargs)
def sigma_to_size(sigmas):
"""Given a numpy array of sigma values, return an array
of same size with sizes of markers to plot.
Inputs:
sigmas: Numpy array of sigma values.
Output:
sizes: Numpy array of marker sizes.
"""
# return 8+sigmas**1.7
return np.clip(20**(sigmas/6), 5, 400)
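# Worked example of the mapping above (follows directly from the formula):
#     sigma_to_size(np.array([3.0, 6.0, 12.0])) -> array([  5.,  20., 400.])
# i.e. 20**(3/6) ~ 4.5 is clipped up to 5, and 20**(12/6) = 400 hits the upper clip.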
def print_sift_globals():
print("r_err =", r_err)
print("short_period =", short_period)
print("long_period =", long_period)
print("sigma_threshold =", sigma_threshold)
print("c_pow_threshold =", c_pow_threshold)
print("harm_pow_cutoff =", harm_pow_cutoff)
print("known_birds_p =", known_birds_p)
print("known_birds_f =", known_birds_f)
def parse_power(pow):
power = float(pow.split("(")[0])
if ("^" in pow): # add exponent...
try:
expon = float(pow.split("^")[1])
except ValueError:
expon = 5 # power gets chopped off if this large
power *= 10.0**(expon)
return power
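# Example of the simple case handled above (the "(err)" part is ignored):
#     parse_power("117.27(36)") -> 117.27
# If the string contains a "^", the exponent after the caret is applied as a
# power of ten; if that exponent cannot be parsed, 10**5 is assumed, matching
# the truncation that can occur in ACCEL files for very large powers.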
class Candidate(object):
def __init__(self, candnum, sigma, numharm, ipow, cpow, bin, z,
DMstr, filename, T):
self.path, self.filename = os.path.split(filename)
self.candnum = candnum
self.sigma = sigma
self.numharm = numharm
self.ipow_det = ipow
self.cpow = cpow
self.r = bin
self.f = bin/T
self.z = z
self.T = T
self.p = 1.0/self.f
self.DMstr = DMstr
self.DM = float(DMstr)
self.harm_pows = None
self.harm_amps = None
self.snr = 0.0
self.hits = []
self.note = ""
def add_as_hit(self, other):
self.hits.extend(other.hits)
def __str__(self):
cand = self.filename + ':' + repr(self.candnum)
return "%-65s %7.2f %6.2f %6.2f %s %7.1f %7.1f " \
"%12.6f %10.2f %8.2f " % \
(cand, self.DM, self.snr, self.sigma, \
"%2d".center(7)%self.numharm,
self.ipow_det, self.cpow, self.p*1000, self.r, self.z)
def harms_to_snr(self):
# Remove the average power level
harmamps = np.asarray(self.harm_pows) - 1.0
# Set the S/N to 0.0 for harmonics with "negative" amplitudes
harmamps[harmamps < 0.0] = 0.0
self.snr = np.sum(np.sqrt(harmamps))
class Candlist(object):
def __init__(self, cands=None, trackbad=False, trackdupes=False):
if cands is None:
self.cands = []
else:
self.cands = cands
self.trackbad = trackbad # Should we keep track of bad candidates
self.trackdupes = trackdupes # Should we keep track of duplicates
# Set default badlists
self.badlists = {'knownbirds': [], \
'longperiod': [], \
'shortperiod': [], \
'threshold': [], \
'harmpowcutoff': [], \
'rogueharmpow': [], \
'harmonic': [], \
'dmproblem': []}
self.duplicates = []
def __iter__(self):
return iter(self.cands)
def __getitem__(self, key):
return self.cands[key]
def __delitem__(self, key):
del(self.cands[key])
def sort(self, *args, **kwargs):
self.cands.sort(*args, **kwargs)
def plot_summary(self, usefreqs=True):
"""Produce a plot summarizing the sifiting performed.
Input:
usefreqs: If True, the horizontal axis will use
frequency. If False, use period.
Output:
fig: A matplotlib figure instance.
"""
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(10,8))
ax = plt.axes((0.08, 0.18, 0.87, 0.80))
# Get all candidates and sort by sigma
allcands = self.get_all_cands()
sigmas = np.array([c.sigma for c in allcands])
isort = sigmas.argsort()
sigmas = sigmas[isort]
if usefreqs:
xdata = np.array([c.f for c in allcands])[isort]
xlabel = "Freq (Hz)"
xscale = "log"
else:
xdata = np.array([c.p for c in allcands])[isort]
xlabel = "Period (s)"
xscale = "loglin"
dms = np.array([c.DM for c in allcands])[isort]
numharms = np.array([c.numharm for c in allcands])[isort]
# Plot the all candidates
scatt = plt.scatter(xdata, dms, s=sigma_to_size(sigmas), \
c=np.log2(numharms), \
marker='o', alpha=0.7, zorder=-1)
plt.set_cmap("Spectral")
# Add colorbar
fmtr = matplotlib.ticker.FuncFormatter(lambda x, pos: "%d" % 2**x)
cax = plt.axes((0.18, 0.06, 0.67, 0.035))
cb = plt.colorbar(scatt, cax=cax, ticks=(0,1,2,3,4), format=fmtr, \
orientation="horizontal")
cb.set_label("Num harmonics summed")
plt.axes(ax) # Set scatter plot's axes as current
plt.xscale(xscale)
plt.xlabel(xlabel)
mindm = np.min(dms)
maxdm = np.max(dms)
dmrange = np.ptp(dms)
# Use log-scale y-axis if max DM > 2000
yscale = "log" if maxdm > 2000.0 else "linear"
plt.yscale(yscale)
if yscale is "log":
plt.ylim(1.0, maxdm+0.1*dmrange)
else:
plt.ylim(mindm-0.1*dmrange, maxdm+0.1*dmrange)
plt.ylabel(r"DM (pc cm$^{-3}$)")
if not usefreqs:
plt.gca().xaxis.set_ticks(np.concatenate((\
np.logspace(-4,0,4, endpoint=False), \
np.linspace(1,15,8))))
plt.gca().xaxis.set_ticks(np.logspace(-4,0,40), minor=True)
plt.gca().xaxis.set_ticklabels([r"10$^{-4}$", r"10$^{-3}$", \
r"10$^{-2}$", r"10$^{-1}$", "1", "3", "5", "7", \
"9", "11", "13", "15"])
plt.xlim(max(short_period/5.0, min(xdata)/5.0), \
min(long_period+0.5, max(xdata)+0.5))
else:
plt.xlim(min(xdata)/5.0, max(xdata)*2.0)
ax.format_coord = lambda x,y: "x=%g, y=%g" % (x,y)
return fig
def plot_rejects(self, usefreqs=True):
"""Produce a plot showing why candidates were rejected by
            the sifting performed.
Input:
usefreqs: If True, the horizontal axis will use
frequency. If False, use period.
Output:
fig: A matplotlib figure instance.
"""
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(10,8))
ax = plt.axes((0.08, 0.18, 0.87, 0.80))
# Plot bad candidates
candlists = [self.badlists['knownbirds'], self.badlists['longperiod'], \
self.badlists['shortperiod'], self.badlists['threshold'], \
self.badlists['harmpowcutoff'], self.badlists['rogueharmpow'], \
self.badlists['harmonic'], self.badlists['dmproblem'], \
self.cands, self.duplicates]
        labels = ['Known birdies', 'Long period', 'Short period', \
'Threshold', 'Harm power cutoff', 'Rogue harm power', \
'Harmonic cand', 'DM problem', 'Good cands', 'Hits']
colours = ['#FF0000', '#800000', '#008000', '#00FF00', \
'#00FFFF', '#0000FF', '#FF00FF', '#800080', 'r', 'k']
markers = ['o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'x', 's']
zorders = [-2, -2, -2, -2, -2, -2, -2, -2, 0, 0]
sizes = [50, 50, 50, 50, 50, 50, 50, 50, 100, 10]
fixedsizes = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
lws = [1,1,1,1,1,1,1,1,2,1]
handles = []
for cands, colour, marker, zorder, size, fixedsize, lw in \
zip(candlists, colours, markers, zorders, sizes, fixedsizes, lws):
if len(cands):
sigmas = np.array([c.sigma for c in cands])
isort = sigmas.argsort()
sigmas = sigmas[isort]
if usefreqs:
xdata = np.array([c.f for c in cands])[isort]
xlabel = "Freq (Hz)"
xscale = "log"
else:
xdata = np.array([c.p for c in cands])[isort]
xlabel = "Period (s)"
xscale = "loglin"
dms = np.array([c.DM for c in cands])[isort]
# Plot the candidates
if fixedsize:
plt.scatter(xdata, dms, s=size, lw=lw, \
c=colour, marker=marker, alpha=0.7, zorder=zorder)
else:
plt.scatter(xdata, dms, s=sigma_to_size(sigmas), lw=lw, \
c=colour, marker=marker, alpha=0.7, zorder=zorder)
handles.append(plt.scatter([0], [0], s=size, c=colour, \
marker=marker, alpha=0.7))
fig.legend(handles, labels, 'lower center', \
prop={'size':'x-small'}, ncol=4)
plt.xscale(xscale)
plt.xlabel(xlabel)
alldms = np.array([c.DM for c in self.get_all_cands()])
mindm = np.min(alldms)
maxdm = np.max(alldms)
dmrange = np.ptp(alldms)
# Use log-scale y-axis if max DM > 2000
yscale = "log" if maxdm > 2000.0 else "linear"
plt.yscale(yscale)
if yscale is "log":
plt.ylim(1.0, maxdm+0.1*dmrange)
else:
plt.ylim(mindm-0.1*dmrange, maxdm+0.1*dmrange)
plt.ylabel(r"DM (pc cm$^{-3}$)")
if not usefreqs:
all_xdata = np.array([c.p for c in self.get_all_cands()])
plt.gca().xaxis.set_ticks(np.concatenate((\
np.logspace(-4,0,4, endpoint=False), \
np.linspace(1,15,8))))
plt.gca().xaxis.set_ticks(np.logspace(-4,0,40), minor=True)
plt.gca().xaxis.set_ticklabels([r"10$^{-4}$", r"10$^{-3}$", \
r"10$^{-2}$", r"10$^{-1}$", "1", "3", "5", "7", \
"9", "11", "13", "15"])
plt.xlim(max(short_period/5.0, min(all_xdata)/5.0), \
min(long_period+0.5, max(all_xdata)+0.5))
else:
all_xdata = np.array([c.f for c in self.get_all_cands()])
plt.xlim(min(all_xdata)/5.0, max(all_xdata)*2.0)
return fig
def plot_goodcands(self, usefreqs=True):
"""Produce a plot highlighting good candidates as selected by
            the sifting performed.
Input:
usefreqs: If True, the horizontal axis will use
frequency. If False, use period.
Output:
fig: A matplotlib figure instance.
"""
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(10,8))
ax = plt.axes((0.08, 0.18, 0.87, 0.80))
# Plot candidates
labels = []
candlists = []
for key in self.badlists:
labels.append(key.title())
candlists.append(self.badlists[key])
candlists.append(self.cands)
labels.append('Good cands')
colours = ['#FF0000', '#800000', '#008000', '#00FF00', \
'#00FFFF', '#0000FF', '#FF00FF', '#800080', 'r']
markers = ['o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o']
zorders = [-2, -2, -2, -2, -2, -2, -2, -2, 0]
sizes = [10, 10, 10, 10, 10, 10, 10, 10, 50]
fixedsizes = [1, 1, 1, 1, 1, 1, 1, 1, 0]
lws = [1,1,1,1,1,1,1,1,1,1]
ecs = ['none', 'none', 'none', 'none', 'none', 'none', 'none', 'none', 'k']
alphas = [1,1,1,1,1,1,1,1,0.7]
handles = []
for cands, colour, marker, zorder, size, fixedsize, lw, alpha, ec in \
zip(candlists, colours, markers, zorders, sizes, fixedsizes, lws, alphas, ecs):
sigmas = []
dms = []
xdata = []
for c in cands:
sigmas.extend([h.sigma for h in c.hits])
dms.extend([h[0] for h in c.hits])
if usefreqs:
xval = c.f
else:
xval = c.p
xdata.extend([xval]*len(c.hits))
sigmas = np.array(sigmas)
dms = np.array(dms)
xdata = np.array(xdata)
isort = sigmas.argsort()
sigmas = sigmas[isort]
dms = dms[isort]
xdata = xdata[isort]
if usefreqs:
xlabel = "Freq (Hz)"
xscale = "log"
else:
xlabel = "Period (s)"
xscale = "loglin"
# Plot the candidates
if fixedsize:
plt.scatter(xdata, dms, s=size, lw=lw, edgecolors=ec, \
c=colour, marker=marker, alpha=alpha, zorder=zorder)
else:
plt.scatter(xdata, dms, s=sigma_to_size(sigmas), lw=lw, edgecolors=ec, \
c=colour, marker=marker, alpha=alpha, zorder=zorder)
handles.append(plt.scatter([], [], s=size, c=colour, \
marker=marker, alpha=0.7))
fig.legend(handles, labels, 'lower center', \
prop={'size':'x-small'}, ncol=4)
plt.xscale(xscale)
plt.xlabel(xlabel)
mindm = np.min(dms)
maxdm = np.max(dms)
dmrange = np.ptp(dms)
plt.ylim(mindm-0.1*dmrange, maxdm+0.1*dmrange)
plt.ylabel(r"DM (pc cm$^{-3}$)")
if not usefreqs:
plt.gca().xaxis.set_ticks(np.concatenate((\
np.logspace(-4,0,4, endpoint=False), \
np.linspace(1,15,8))))
plt.gca().xaxis.set_ticks(np.logspace(-4,0,40), minor=True)
plt.gca().xaxis.set_ticklabels([r"10$^{-4}$", r"10$^{-3}$", \
r"10$^{-2}$", r"10$^{-1}$", "1", "3", "5", "7", \
"9", "11", "13", "15"])
plt.xlim(max(short_period/5.0, min(xdata)/5.0), \
min(long_period+0.5, max(xdata)+0.5))
return fig
def mark_as_bad(self, icand, badlistname):
cand = self.cands.pop(icand)
if self.trackbad:
badlist = self.badlists.setdefault(badlistname, [])
badlist.append(cand)
def mark_as_duplicate(self, icand):
cand = self.cands.pop(icand)
if self.trackdupes:
self.duplicates.append(cand)
def get_all_cands(self):
        return self.get_all_goodcands() + self.get_all_badcands()
def get_all_goodcands(self):
return self.cands + self.duplicates
def get_all_badcands(self):
cands = []
for key in list(self.badlists.keys()):
cands += self.badlists[key]
return cands
def reject_longperiod(self, long_period=None):
"""Find and remove very long period candidates.
Long period candidates are moved to the 'longperiod' bad-list.
Inputs:
long_period: The longest allowed period for a 'good' cand.
(Default: Globally defined limit, "long_period")
Outputs:
None
"""
if long_period is None:
long_period = globals()['long_period']
for ii in reversed(list(range(len(self.cands)))):
cand = self.cands[ii]
if (cand.p > long_period):
cand.note = "Period is too long (%g ms > %g ms)" % \
(cand.p*1000, long_period*1000)
self.mark_as_bad(ii, 'longperiod')
def reject_shortperiod(self, short_period=None):
"""Find and remove very short period candidates.
Short period candidates are moved to the 'shortperiod' bad-list.
Inputs:
short_period: The shortest allowed period for a 'good' cand.
(Default: Globally defined limit, "short_period")
Outputs:
None
"""
if short_period is None:
short_period = globals()['short_period']
for ii in reversed(list(range(len(self.cands)))):
cand = self.cands[ii]
if (cand.p < short_period):
cand.note = "Period is too short (%g ms < %g ms)" % \
(cand.p*1000, short_period*1000)
self.mark_as_bad(ii, 'shortperiod')
    def reject_knownbirds(self, known_birds_f=None, known_birds_p=None):
"""Find and remove candidates conincident with known birds.
Inputs:
known_birds_f: A list of tuples containing bad frequencies
and widths. The tuples should contain
(<bad freq (Hz)>, <one-sided width (Hz)>)
(Default: Globally defined "known_birds_f")
            known_birds_p: A list of tuples containing bad periodicities
                            and widths. The tuples should contain
                            (<bad period (ms)>, <one-sided width (ms)>)
(Default: Globally defined "known_birds_p")
Outputs:
None
"""
if known_birds_f is None:
known_birds_f = globals()['known_birds_f']
if known_birds_p is None:
known_birds_p = globals()['known_birds_p']
for ii in reversed(list(range(len(self.cands)))):
cand = self.cands[ii]
known_bird = 0
for bird, err in known_birds_f:
if (np.fabs(cand.f-bird) < err):
known_bird = 1
cand.note = "Freq (%.2f Hz) is within %g Hz " \
"of a known birdie centred at %.2f Hz" % \
(cand.f, err, bird)
break
if known_bird:
self.mark_as_bad(ii, 'knownbirds')
continue
for bird, err in known_birds_p:
if (np.fabs(cand.p*1000.0-bird) < err):
known_bird = 1
cand.note = "Period (%.2f ms) is within %g ms " \
"of a known birdie centred at %.2f ms" % \
                                (cand.p*1000, err, bird)
break
if known_bird:
self.mark_as_bad(ii, 'knownbirds')
continue
def reject_threshold(self, sigma_threshold=None, \
c_pow_threshold=None):
"""Find and remove candidates that don't pass our threshold.
The conditions for rejection are different for candidates
with a single harmonic, and multiple harmonics.
Inputs:
sigma_threshold: The threshold for sigma.
(Default: Globally defined threshold, "sigma_threshold")
c_pow_threshold: The threshold for coherent power.
(Default: Globally defined threshold, "c_pow_threshold")
Outputs:
None
"""
if sigma_threshold is None:
sigma_threshold = globals()['sigma_threshold']
if c_pow_threshold is None:
c_pow_threshold = globals()['c_pow_threshold']
for ii in reversed(list(range(len(self.cands)))):
cand = self.cands[ii]
if cand.numharm == 1:
# Single harmonic case
if (cand.sigma < sigma_threshold) and \
(cand.cpow < c_pow_threshold):
cand.note = "Only 1 harmonic and both sigma " \
"(%g < %g) and coherent power (%g < %g) are " \
"too low." % (cand.sigma, sigma_threshold, \
cand.cpow, c_pow_threshold)
self.mark_as_bad(ii, 'threshold')
else:
# Multiple harmonic case
if cand.sigma < sigma_threshold:
cand.note = "%d harmonics and sigma " \
"(%g < %g) is too low." % \
(cand.numharm, cand.sigma, sigma_threshold)
self.mark_as_bad(ii, 'threshold')
def reject_harmpowcutoff(self, harm_pow_cutoff=None):
"""Find and remove the candidates where the harmonic with the
highest power is not more than harm_pow_cutoff.
Inputs:
harm_pow_cutoff: Minimum power for a good harmonic.
(Default: Globally defined "harm_pow_cutoff")
Outputs:
None
"""
if harm_pow_cutoff is None:
harm_pow_cutoff = globals()['harm_pow_cutoff']
for ii in reversed(list(range(len(self.cands)))):
cand = self.cands[ii]
maxharm = np.argmax(cand.harm_pows)
maxpow = cand.harm_pows[maxharm]
if maxpow < harm_pow_cutoff:
cand.note = "All harmonics have power < %g" % harm_pow_cutoff
self.mark_as_bad(ii, 'harmpowcutoff')
def reject_rogueharmpow(self):
"""Find and remove candidates which are dominated by a single
high-power but high-numbered harmonic.
Inputs:
None
Ouputs:
None
"""
for ii in reversed(list(range(len(self.cands)))):
cand = self.cands[ii]
maxharm = np.argmax(cand.harm_pows)
maxpow = cand.harm_pows[maxharm]
# Sort the harmonics by power
sortedpows = np.sort(cand.harm_pows)
if (cand.numharm >= 8 and maxharm > 4 and \
maxpow > 2*sortedpows[-2]):
# Max-power harmonic is at least 2x more powerful
# than the next highest-power harmonic, and is the
                # 4+th harmonic out of 8+ harmonics
cand.note = "High-numbered harmonic (%d) has too " \
"much power" % maxharm
self.mark_as_bad(ii, 'rogueharmpow')
elif (cand.numharm >= 4 and maxharm > 2 and \
maxpow > 3*sortedpows[-2]):
# Max-power harmonic is at least 3x more powerful
# than the next highest-power harmonic, and is the
                # 2+th harmonic out of 4+ harmonics
cand.note = "High-numbered harmonic (%d) has too " \
"much power" % maxharm
self.mark_as_bad(ii, 'rogueharmpow')
def default_rejection(self):
"""Run all rejection methonds with default arguments.
Inputs:
None
Outputs:
None
"""
self.reject_longperiod()
self.reject_shortperiod()
self.reject_knownbirds()
self.reject_threshold()
self.reject_harmpowcutoff()
self.reject_rogueharmpow()
def remove_duplicate_candidates(self, verbosity=1):
"""Remove lower-significance 'duplicate' (i.e. same period)
candidates from a list of candidates. For the highest
significance candidate, include a list of the DMs (and SNRs)
of all the other detections.
Inputs:
verbosity: Verbosity level. (Default: 1)
Ouputs:
None
"""
if verbosity >= 1:
print(" Sorting the %d candidates by frequency..." % \
self.get_numcands())
self.cands.sort(key=attrgetter('r'))
if verbosity >= 1:
print(" Searching for dupes...")
ii = 0
# Find any match
while ii < self.get_numcands():
jj = ii + 1
if jj < self.get_numcands() and \
np.fabs(self.cands[ii].r-self.cands[jj].r) < r_err:
# Find others that match
jj += 1
while jj < self.get_numcands() and \
np.fabs(self.cands[ii].r-self.cands[jj].r) < r_err:
jj += 1
matches = self.cands[ii:jj]
matches.sort(key=attrgetter('sigma'), reverse=True)
bestindex = self.cands.index(matches[0])
#sigmas = [c.sigma for c in matches]
#bestindex = np.argmax(sigmas)+ii
# flag the duplicates
bestcand = self.cands[bestindex]
# Add other matching cands as hit of highest-sigma cand
for matchind in reversed(list(range(ii, jj))):
if matchind == bestindex:
# The current candidate is the highest-sigma cand
# Don't remove it
continue
match = self.cands[matchind]
bestcand.add_as_hit(match)
match.note = "This candidate is a duplicate of %s:%d" % \
(bestcand.filename, bestcand.candnum)
self.mark_as_duplicate(matchind)
if verbosity >= 2:
print("Removing %s:%d (index: %d)" % \
(match.filename, match.candnum, matchind))
print(" %s" % match.note)
# If the best candidate isn't at the same freq
# as ii, then it's possible even more hits should
# be added. So we don't increment the index
# (note that the best cand has moved into position ii).
else:
ii += 1 # No candidates to be added as hits, move on
if verbosity >= 1:
print("Found %d candidates.\n" % self.get_numcands())
self.cands.sort(key=attrgetter('sigma'), reverse=True)
def remove_harmonics(self, verbosity=1):
"""Remove the candidates that are lower significance harmonics
of other candidates from the candlist.
Inputs:
verbosity: Verbosity level. (Default: 1)
Ouputs:
None
"""
# Note: should probably put the harmonics into the fundamental as hits (use sets)
numremoved = 0
self.cands.sort(key=attrgetter('sigma'), reverse=True)
f_err = r_err/self.cands[0].T
if verbosity >= 1:
print("\nSearching for duplicate harmonics...")
ii = 0
while 1:
fundcand = self.cands[ii]
jj = len(self.cands) - 1
zapj = 0
while 1:
harmcand = self.cands[jj]
if zapj: print("Hey!")
for factor in np.arange(1.0, 17.0):
if np.fabs(fundcand.f - harmcand.f*factor) < f_err*factor:
zapj = 1
harmstr = "1/%dth" % factor
elif np.fabs(fundcand.f - harmcand.f/factor) < f_err/factor:
zapj = 1
if factor==2.0:
harmstr = "%dnd" % factor
else:
harmstr = "%dth" % factor
if zapj:
if verbosity >= 2:
print("Removing %s:%d (%.2f Hz) because it is " \
"a harmonic (%s) of %s:%d (%.2f Hz)" % \
(harmcand.filename, \
harmcand.candnum, \
harmcand.f, \
harmstr, \
fundcand.filename, \
fundcand.candnum, \
fundcand.f))
break
# Check a few other common ratios
for numer,denom in zip([3.0, 5.0, 2.0, 4.0, 5.0, \
3.0, 5.0, 2.0, 3.0, 4.0],
[2.0, 2.0, 3.0, 3.0, 3.0, \
4.0, 4.0, 5.0, 5.0, 5.0]):
factor = numer/denom
if np.fabs(fundcand.f-harmcand.f*factor) < f_err*factor:
if verbosity >= 2:
print("Removing %s:%d (%.2f Hz) because it is " \
"a harmonic (%d/%dth) of %s:%d (%.2f Hz)" % \
(harmcand.filename, \
harmcand.candnum, \
harmcand.f, \
denom, \
numer, \
fundcand.filename, \
fundcand.candnum, \
fundcand.f))
harmstr = "%d/%dth" % (denom, numer)
zapj = 1
break
if zapj:
harmcand.note = "This candidate (P=%.4f s, DM=%.2f) is " \
"a harmonic (%s) of %s:%d " \
"(P=%.4f s, DM=%.2f)." % \
(harmcand.p, harmcand.DM, harmstr, \
fundcand.filename, fundcand.candnum, \
fundcand.p, fundcand.DM)
numremoved += 1
self.mark_as_bad(jj, 'harmonic')
if verbosity >= 2:
print("Removing %s:%d (index: %d)" % \
(harmcand.filename, harmcand.candnum, jj))
print(" %s" % harmcand.note)
zapj = 0
jj -= 1
if jj == ii:
break
ii += 1
if ii >= len(self.cands) - 1:
break
if verbosity >= 1:
print("Removed a total of %d harmonics.\n" % numremoved)
def remove_DM_problems(self, numdms, dmlist, low_DM_cutoff, verbosity=1):
"""Remove the candidates where any of the following are true:
1) The number of hits is < numdms
2) The highest S/N candidate occurs below a DM of low_DM_cutoff
3) The minimum difference in DM indices between the hits is > 1
Inputs:
numdms: The minimum number of hits for a good candidate.
dmlist: List of DMs.
low_DM_cutoff: The lowest DM possible for a good candidate.
verbosity: Verbosity level. (Default: 1)
Ouputs:
None
"""
# Create a dictionary where the key is the dmstr
# and the values are the index
dmdict = {}
dms = np.unique([float(dm) for dm in dmlist])
dmstrs = ['%.2f'%dm for dm in dms]
dmdict = dict(list(zip(dmstrs, list(range(len(dms))))))
numremoved = 0
num_toofew = 0
num_toolow = 0
num_gaps = 0
self.cands.sort(key=attrgetter('sigma'), reverse=True)
for ii in reversed(list(range(len(self.cands)))):
currcand = self.cands[ii]
# Remove all the candidates without enough DM hits
if len(currcand.hits) < numdms:
numremoved += 1
num_toofew += 1
currcand.note = "Candidate has only %d DM hits. This is less " \
"than minimum for 'good' cands (%d hits)" % \
(len(currcand.hits), numdms)
self.mark_as_bad(ii, 'dmproblem')
if verbosity >= 2:
print("Removing %s:%d (index: %d)" % \
(currcand.filename, currcand.candnum, ii))
print(" %s" % currcand.note)
continue
# Remove all the candidates where the max sigma DM is
# less than the cutoff DM
# Recall - A hit is a 3-tuple: (DM, SNR, sigma)
imax = np.argmax(np.array([hit[2] for hit in currcand.hits]))
hitdm, hitsnr, hitsigma = currcand.hits[imax]
if float(hitdm) <= low_DM_cutoff:
numremoved += 1
num_toolow += 1
currcand.note = "Hit with max sigma (%g) has dm (%.2f) " \
"<= low DM cutoff (%.2f) " % \
(hitsigma, hitdm, low_DM_cutoff)
self.mark_as_bad(ii, 'dmproblem')
if verbosity >= 2:
print("Removing %s:%d (index: %d)" % \
(currcand.filename, currcand.candnum, ii))
print(" %s" % currcand.note)
continue
# Remove all the candidates where there are no hits at consecutive DMs
if len(currcand.hits) > 1:
currcand.hits.sort(key=lambda cand: float(cand[0]))
dm_indices = np.asarray([dmdict["%.2f"%currcand.hits[jj][0]]
for jj in range(len(currcand.hits))])
min_dmind_diff = min(dm_indices[1:] - dm_indices[:-1])
if min_dmind_diff > 1:
numremoved += 1
num_gaps += 1
currcand.note = "DM list of hits has gaps (i.e. " \
"consecutive DMs don't have hits)."
self.mark_as_bad(ii, 'dmproblem')
if verbosity >= 2:
print("Removing %s:%d (index: %d)" % \
(currcand.filename, currcand.candnum, ii))
print(" %s" % currcand.note)
continue
if verbosity >= 1:
print("Removed %d candidates with DM problems.\n" % numremoved)
if verbosity >= 2:
print(" # with too few hits:", num_toofew)
print(" # with peak SNR too low:", num_toolow)
print(" # with gaps in DM hits:", num_gaps)
def print_cand_summary(self, summaryfilenm=None):
"""Write a summary of all candidates to file (or stdout).
Input:
summaryfilenm: Name of file to write to. If None write to stdout.
(Default: write to stdout).
Outputs:
None
"""
if summaryfilenm is None:
summaryfile = sys.stdout
elif summaryfilenm in [sys.stdout, sys.stderr]:
summaryfile = summaryfilenm
else:
summaryfile = open(summaryfilenm, "w")
summaryfile.write(" Candlist contains %d 'good' candidates\n" % \
len(self.cands))
summaryfile.write(" # Known RFI rejects: %d\n" % \
len(self.badlists['knownbirds']))
summaryfile.write(" # Short period rejects: %d\n" % \
len(self.badlists['shortperiod']))
summaryfile.write(" # Long period rejects: %d\n" % \
len(self.badlists['longperiod']))
summaryfile.write(" # Missed threshold: %d\n" % \
len(self.badlists['threshold']))
summaryfile.write(" # No good harmonics: %d\n" % \
len(self.badlists['harmpowcutoff']))
summaryfile.write(" # One bad harmonic: %d\n" % \
len(self.badlists['rogueharmpow']))
summaryfile.write(" # Duplicate candidates: %d\n" % \
len(self.duplicates))
summaryfile.write(" # Harmonic candidates: %d\n" % \
len(self.badlists['harmonic']))
summaryfile.write(" # Candidates with DM problems: %d\n" % \
len(self.badlists['dmproblem']))
if summaryfilenm not in [None, sys.stdout, sys.stderr]:
summaryfile.close()
def write_cand_report(self, reportfilenm=None):
"""Write a report of all bad candidates to file (or stdout).
Input:
reportfilenm: Name of file to write to. If None write to stdout.
(Default: write to stdout).
Outputs:
None
"""
if reportfilenm is None:
reportfile = sys.stdout
else:
reportfile = open(reportfilenm, "w")
reportfile.write("#" + "file:candnum".center(66) + "DM".center(9) +
"SNR".center(8) + "sigma".center(8) + "numharm".center(9) +
"ipow".center(9) + "cpow".center(9) + "P(ms)".center(14) +
"r".center(12) + "z".center(8) + "numhits".center(9) + "\n")
badcands = self.get_all_badcands()
for badcand in badcands:
reportfile.write("%s (%d)\n" % (str(badcand), len(badcand.hits)))
reportfile.write(" Note: %s\n\n" % badcand.note)
if reportfilenm is not None:
reportfile.close()
def __add__(self, other):
copy_of_self = copy.deepcopy(self)
copy_of_self.extend(other)
return copy_of_self
def get_numcands(self):
"""Get the number of good candidates (i.e. len(self.cands)).
Inputs:
None
Outputs:
                The number of good candidates (same as len(self)).
"""
return len(self)
def __len__(self):
# return the number of good candidates
return len(self.cands)
def extend(self, other):
"""Extend Candlist with another. This combines
the candidates, as well as the lists of bad cands.
Inputs:
other: A second Candlist object to extend from.
Outputs:
None - the original Candlist object is extended in place.
"""
self.cands.extend(other.cands)
self.duplicates.extend(other.duplicates)
for key in other.badlists:
bad = self.badlists.setdefault(key, [])
bad.extend(other.badlists[key])
def to_file(self, candfilenm=None):
"""Write Candlist to file (or stdout).
Input:
candfilenm: Name of file to write to. If None,
write to stdout. (Default: write to stdout).
Outputs:
None
"""
if candfilenm is None:
candfile = sys.stdout
else:
candfile = open(candfilenm, "w")
candfile.write("#" + "file:candnum".center(66) + "DM".center(9) +
"SNR".center(8) + "sigma".center(8) + "numharm".center(9) +
"ipow".center(9) + "cpow".center(9) + "P(ms)".center(14) +
"r".center(12) + "z".center(8) + "numhits".center(9) + "\n")
for goodcand in self.cands:
candfile.write("%s (%d)\n" % (str(goodcand), len(goodcand.hits)))
if (len(goodcand.hits) > 1):
goodcand.hits.sort(key=lambda cand: float(cand[0]))
for hit in goodcand.hits:
numstars = int(hit[2]/3.0)
candfile.write(" DM=%6.2f SNR=%5.2f Sigma=%5.2f "%hit + \
numstars*'*' + '\n')
if candfilenm is not None:
candfile.close()
def candlist_from_candfile(filename, trackbad=False, trackdupes=False):
candfile = open(filename, 'r')
# First identify the length of the observation searched
for line in candfile:
if line.startswith(" Number of bins in the time series"):
numsamp = int(line.split()[-1])
if line.startswith(" Width of each time series bin (sec)"):
dt = float(line.split()[-1])
tobs = numsamp * dt
# Go back to the start of the file to read the candidates
candfile.seek(0)
cands = []
candnums = []
current_goodcandnum = 0
last_candnum = 0
last_goodcandnum = 0
for line in candfile:
# Identify the candidates in the top of the file
if fund_re.match(line):
split_line = line.split()
candnum = int(split_line[0])
if len(cands) and len(split_line[0])==4 and cands[-1].candnum >= 9999:
candnum = last_candnum + 1
sigma = float(split_line[1])
i_pow_det = float(split_line[2])
c_pow = float(split_line[3])
numharm = int(split_line[4])
bin = float(split_line[7].split("(")[0])
z = float(split_line[9].split("(")[0])
f = bin / tobs # Spin freq in hz
p = 1.0 / f # Spin period in sec
# Add it to the candidates list
DMstr = DM_re.search(filename).groups()[0]
cands.append(Candidate(candnum, sigma, numharm,
i_pow_det, c_pow, bin, z,
DMstr, filename, tobs))
candnums.append(candnum)
last_candnum = candnum
continue
# Parse the harmonic powers
elif harms_re.match(line):
split_line = line.split()
candnum = int(split_line[0])
# For rare cases where >10K cands, candnum is wrong in ACCEL file
if len(split_line[0])==4 and last_goodcandnum >= 9999:
candnum = last_goodcandnum + 1
if candnum in candnums:
cand = cands[candnums.index(candnum)]
cand.harm_pows = np.zeros(cand.numharm, dtype=np.float64)
cand.harm_amps = np.zeros(cand.numharm, dtype=np.complex64)
power = parse_power(split_line[3])
phase = float(split_line[9].split("(")[0])
cand.harm_pows[0] = power
cand.harm_amps[0] = np.sqrt(power) * np.exp(phase*1.0j)
if (cand.numharm > 1):
current_goodcandnum = candnum
current_harmnum = 1
else:
last_goodcandnum = candnum
current_goodcandnum = 0
# Compute the S/N
cand.harms_to_snr()
# These are the "optimized" power...
opt_ipow = cand.harm_pows[0]
# and sigma (calculated assuming _1_ trial!)
opt_sigma = candidate_sigma(opt_ipow, 1, 1)
cand.sigma = opt_sigma
# Now that S/N and sigma are available
# List candidate as a hit of itself
cand.hits = [(cand.DM, cand.snr, cand.sigma)]
cand.ipow_det = opt_ipow
continue
# Parse the higher (than the first) harmonic powers
if current_goodcandnum:
cand = cands[candnums.index(current_goodcandnum)]
power = parse_power(line.split()[2])
phase = float(line.split()[8].split("(")[0])
cand.harm_pows[current_harmnum] = power
cand.harm_amps[current_harmnum] = np.sqrt(power) * np.exp(phase*1.0j)
current_harmnum += 1
# Calculate other stats after all harmonics have been read in
if (current_harmnum==cand.numharm):
# Compute the S/N
cand.harms_to_snr()
# Compute sigma and incoherent power
opt_ipow = sum(cand.harm_pows)
opt_sigma = candidate_sigma(opt_ipow, cand.numharm, 1)
cand.sigma = opt_sigma
# Now that S/N and sigma are available
# List candidate as a hit of itself
cand.hits = [(cand.DM, cand.snr, cand.sigma)]
cand.ipow_det = opt_ipow
last_goodcandnum = candnum
current_goodcandnum = 0
candfile.close()
return Candlist(cands, trackbad=trackbad, trackdupes=trackdupes)
def read_candidates(filenms, prelim_reject=True, track=False):
"""Read in accelsearch candidates from the test ACCEL files.
Return a Candlist object of Candidate instances.
Inputs:
filenms: A list of files to read candidates from.
prelim_reject: If True, perform preliminary rejection of
candidates. (Default: True)
track: If True, keep track of bad/duplicate candidates.
(Default: False)
"""
candlist = Candlist(trackbad=track, trackdupes=track)
numfiles = len(filenms)
if filenms:
print("\nReading candidates from %d files...." % len(filenms))
for ii, filenm in enumerate(filenms):
curr_candlist = candlist_from_candfile(filenm, trackbad=track, trackdupes=track)
if prelim_reject:
curr_candlist.default_rejection()
candlist.extend(curr_candlist)
sys.stdout.write(" Read %d of %d files (%d cands)\r" % (ii+1, numfiles, len(candlist)))
sys.stdout.flush()
print("\nDone")
else:
print("Error: There are no candidate files to read!")
return candlist
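# A minimal usage sketch (the glob pattern and output name are hypothetical):
#     import glob
#     cands = read_candidates(glob.glob("*ACCEL_0"), prelim_reject=True, track=True)
#     cands.remove_duplicate_candidates()
#     cands.remove_harmonics()
#     cands.to_file("mysearch.accelcands")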
def sift_directory(dir, outbasenm):
"""Sift candidates in given directory.
Inputs:
dir: The directory containing candidates to sift.
outbasenm: The base name of the output files.
Outputs:
None
"""
# TODO: Remove hard-coded values in this function
# replace with command line options.
global sigma_threshold
sigma_threshold = 5.0
print_sift_globals()
# Get list of DMs from *.inf files
inffns = glob.glob(os.path.join(dir, '*.inf'))
dmstrs = ['%.2f'%infodata.infodata(inffn).DM for inffn in inffns]
# Read candidates found in low acceleration searching
lo_accel_fns = glob.glob(os.path.join(dir, '*ACCEL_0'))
lo_accel_cands = read_candidates(lo_accel_fns, track=True)
print("Read %d candidates from %d files" % \
(len(lo_accel_cands), len(lo_accel_fns)))
print("%d candidates passed default rejection" % len(lo_accel_cands))
if len(lo_accel_cands):
lo_accel_cands.remove_duplicate_candidates()
if len(lo_accel_cands):
lo_accel_cands.remove_DM_problems(2, dmstrs, 2.0)
lo_accel_cands.print_cand_summary()
# Read candidates found in high acceleration searching
hi_accel_fns = glob.glob(os.path.join(dir, '*ACCEL_50'))
hi_accel_cands = read_candidates(hi_accel_fns, track=True)
print("Read %d candidates from %d files" % \
(len(hi_accel_cands), len(hi_accel_fns)))
print("%d candidates passed default rejection" % len(hi_accel_cands))
if len(hi_accel_cands):
hi_accel_cands.remove_duplicate_candidates()
if len(hi_accel_cands):
hi_accel_cands.remove_DM_problems(2, dmstrs, 2.0)
hi_accel_cands.print_cand_summary()
all_accel_cands = lo_accel_cands + hi_accel_cands
if len(all_accel_cands):
all_accel_cands.remove_harmonics()
# Note: the candidates will be sorted in _sigma_ order, not _SNR_!
all_accel_cands.cands.sort(key=attrgetter('sigma'), reverse=True)
print("Found %d good candidates" % len(all_accel_cands))
all_accel_cands.to_file(outbasenm+".accelcands")
all_accel_cands.write_cand_report(outbasenm+".accelcands.report")
all_accel_cands.print_cand_summary()
all_accel_cands.plot_rejects(usefreqs=True)
plt.savefig(outbasenm+".accelcands.rejects.png")
#all_accel_cands.plot_goodcands(usefreqs=False)
#plt.savefig(outbasenm+".accelcands.goodcands.png")
all_accel_cands.plot_summary(usefreqs=True)
plt.savefig(outbasenm+".accelcands.summary.png")
plt.show()
def LogLinScaleFactory(b):
class LogLinScale(matplotlib.scale.ScaleBase):
name = 'loglin'
def __init__(self, axis, **kwargs):
matplotlib.scale.ScaleBase.__init__(self)
self.thresh = kwargs.pop("thresh", 1e-5)
if self.thresh <= 0.0:
raise ValueError("thresh must be larger than 0")
def get_transform(self):
return self.LogLinTransform(self.thresh)
def set_default_locators_and_formatters(self, axis):
pass
def limit_range_for_scale(self, vmin, vmax, minpos):
return max(vmin, self.thresh), vmax
class LogLinTransform(matplotlib.transforms.Transform):
input_dims = 1
output_dims = 1
is_separable = True
brk = b
def __init__(self, thresh):
matplotlib.transforms.Transform.__init__(self)
self.thresh = thresh
def transform(self, a):
aa = np.ma.masked_where(a<self.thresh, a)
if aa.mask.any():
aa[a<self.brk] = np.ma.log10(a[a<self.brk]) - \
np.log10(self.brk)+self.brk
else:
aa[a<self.brk] = np.log10(a[a<self.brk]) - \
np.log10(self.brk)+self.brk
return aa
def inverted(self):
return LogLinScale.InvertedLogLinTransform(self.thresh)
class InvertedLogLinTransform(matplotlib.transforms.Transform):
input_dims = 1
output_dims = 1
is_separable = True
brk = b
def __init__(self, thresh):
matplotlib.transforms.Transform.__init__(self)
self.thresh = thresh
def transform(self, a):
aa = a.copy()
aa[a<self.brk] = np.ma.power(10, a[a<self.brk]-self.brk + \
np.log10(self.brk))
return aa
def inverted(self):
return LogLinScale.LogLinTransform(self.thresh)
return LogLinScale
matplotlib.scale.register_scale(LogLinScaleFactory(1))
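# The factory above builds and registers a hybrid axis scale named "loglin":
# logarithmic below the break value given to the factory (1 here, i.e. 1 s when
# plotting periods) and linear above it.  plot_summary() and plot_rejects()
# select it via plt.xscale("loglin") when plotting periods rather than
# frequencies.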
def main():
# Sift candidates in PWD
outbasenm = sys.argv[1]
sift_directory(os.getcwd(), outbasenm)
if __name__ == '__main__':
main()
| 54,077 | 39.146993 | 99 | py |
presto | presto-master/python/presto/infodata.py | from builtins import object
## Automatically adapted for numpy Apr 14, 2006 by convertcode.py
class infodata(object):
def __init__(self, filenm):
self.breaks = 0
for line in open(filenm, encoding="latin-1"):
if line.startswith(" Data file name"):
self.basenm = line.split("=")[-1].strip()
continue
if line.startswith(" Telescope"):
self.telescope = line.split("=")[-1].strip()
continue
if line.startswith(" Instrument"):
self.instrument = line.split("=")[-1].strip()
continue
if line.startswith(" Object being observed"):
self.object = line.split("=")[-1].strip()
continue
if line.startswith(" J2000 Right Ascension"):
self.RA = line.split("=")[-1].strip()
continue
if line.startswith(" J2000 Declination"):
self.DEC = line.split("=")[-1].strip()
continue
if line.startswith(" Data observed by"):
self.observer = line.split("=")[-1].strip()
continue
if line.startswith(" Epoch"):
self.epoch = float(line.split("=")[-1].strip())
continue
if line.startswith(" Barycentered?"):
self.bary = int(line.split("=")[-1].strip())
continue
if line.startswith(" Number of bins"):
self.N = int(line.split("=")[-1].strip())
continue
if line.startswith(" Width of each time series bin"):
self.dt = float(line.split("=")[-1].strip())
continue
if line.startswith(" Any breaks in the data?"):
self.breaks = int(line.split("=")[-1].strip())
if self.breaks:
self.onoff = []
continue
if line.startswith(" On/Off bin pair"):
vals = line.split("=")[-1].strip().split(",")
self.onoff.append((int(vals[0]), int(vals[1])))
continue
if line.startswith(" Type of observation"):
self.waveband = line.split("=")[-1].strip()
continue
if line.startswith(" Beam diameter"):
self.beam_diam = float(line.split("=")[-1].strip())
continue
if line.startswith(" Dispersion measure"):
self.DM = float(line.split("=")[-1].strip())
continue
if line.startswith(" Central freq of low channel"):
self.lofreq = float(line.split("=")[-1].strip())
continue
if line.startswith(" Total bandwidth"):
self.BW = float(line.split("=")[-1].strip())
continue
if line.startswith(" Number of channels"):
self.numchan = int(line.split("=")[-1].strip())
continue
if line.startswith(" Channel bandwidth"):
self.chan_width = float(line.split("=")[-1].strip())
continue
if line.startswith(" Data analyzed by"):
self.analyzer = line.split("=")[-1].strip()
continue
def to_file(self, inffn, notes=None):
if not inffn.endswith(".inf"):
raise ValueError("PRESTO info files must end with '.inf'. "
"Got: %s" % inffn)
with open(inffn, 'w') as ff:
if hasattr(self, 'basenm'):
ff.write(" Data file name without suffix = %s\n" %
self.basenm)
if hasattr(self, 'telescope'):
ff.write(" Telescope used = %s\n" %
self.telescope)
if hasattr(self, 'instrument'):
ff.write(" Instrument used = %s\n" %
self.instrument)
if hasattr(self, 'object'):
ff.write(" Object being observed = %s\n" %
self.object)
if hasattr(self, 'RA'):
ff.write(" J2000 Right Ascension (hh:mm:ss.ssss) = %s\n" %
self.RA)
if hasattr(self, 'DEC'):
ff.write(" J2000 Declination (dd:mm:ss.ssss) = %s\n" %
self.DEC)
if hasattr(self, 'observer'):
ff.write(" Data observed by = %s\n" %
self.observer)
if hasattr(self, 'epoch'):
ff.write(" Epoch of observation (MJD) = %05.15f\n" %
self.epoch)
if hasattr(self, 'bary'):
ff.write(" Barycentered? (1=yes, 0=no) = %d\n" %
self.bary)
if hasattr(self, 'N'):
ff.write(" Number of bins in the time series = %-11.0f\n" %
self.N)
if hasattr(self, 'dt'):
ff.write(" Width of each time series bin (sec) = %.15g\n" %
self.dt)
if hasattr(self, 'breaks') and self.breaks:
ff.write(" Any breaks in the data? (1 yes, 0 no) = 1\n")
if hasattr(self, 'onoff'):
for ii, (on, off) in enumerate(self.onoff, 1):
ff.write(" On/Off bin pair #%3d = %-11.0f, %-11.0f\n" %
(ii, on, off))
else:
ff.write(" Any breaks in the data? (1 yes, 0 no) = 0\n")
if hasattr(self, 'DM'):
ff.write(" Dispersion measure (cm-3 pc) = %.12g\n" %
self.DM)
if hasattr(self, 'lofreq'):
ff.write(" Central freq of low channel (Mhz) = %.12g\n" %
self.lofreq)
if hasattr(self, 'BW'):
ff.write(" Total bandwidth (Mhz) = %.12g\n" %
self.BW)
if hasattr(self, 'numchan'):
ff.write(" Number of channels = %d\n" %
self.numchan)
if hasattr(self, 'chan_width'):
ff.write(" Channel bandwidth (Mhz) = %.12g\n" %
self.chan_width)
if hasattr(self, 'analyzer'):
ff.write(" Data analyzed by = %s\n" %
self.analyzer)
if hasattr(self, 'deorbited'):
ff.write(" Orbit removed? (1=yes, 0=no) = %d\n" %
self.deorbited)
ff.write(" Any additional notes:\n")
if notes is not None:
ff.write(" %s\n" % notes.strip())
| 6,965 | 47.041379 | 98 | py |
presto | presto-master/python/presto/binary_psr.py | from __future__ import print_function
from __future__ import absolute_import
from builtins import object
import numpy as Num
from presto import parfile, psr_utils
from presto.psr_constants import *
def myasarray(a):
    if type(a) in [float, int, complex]:
a = Num.asarray([a])
if len(a) == 0:
a = Num.asarray([a])
return a
def shapR(m2):
"""
shapR(m2):
Return the Shapiro 'R' parameter (in sec) with m2 in
solar units.
"""
return Tsun * m2
def shapS(m1, m2, x, pb):
"""
shapS(m1, m2, x, pb):
Return the Shapiro 'S' parameter with m1 and m2 in
solar units, x (asini/c) in sec, and pb in days.
The Shapiro S param is also equal to sin(i).
"""
return x * (pb*SECPERDAY/TWOPI)**(-2.0/3.0) * \
Tsun**(-1.0/3.0) * (m1 + m2)**(2.0/3.0) * 1.0/m2
# Note: S is also equal to sin(i)
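# A hedged usage sketch (the masses and orbital values are purely illustrative):
#     r = shapR(0.3)                  # Shapiro 'range' in s for a 0.3 Msun companion
#     s = shapS(1.4, 0.3, 2.0, 1.5)   # sin(i) for m1=1.4, m2=0.3 Msun, x=2 lt-s, Pb=1.5 d
# Both follow the standard GR expressions, with Tsun = G*Msun/c**3 in seconds.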
class binary_psr(object):
"""
class binary_psr
This class reads in a parfile (the only option for instantiation) of
a binary pulsar. It allows access the calculation of the mean,
eccentric, and true anomalies, orbital position, radial velocity,
and predicted spin period as a function of time.
"""
def __init__(self, parfilenm):
self.par = parfile.psr_par(parfilenm)
if not hasattr(self.par, 'BINARY'):
print("'%s' doesn't contain parameters for a binary pulsar!")
return None
self.PBsec = self.par.PB*SECPERDAY
self.T0 = self.par.T0
def calc_anoms(self, MJD):
"""
calc_anoms(MJD):
Return a tuple of the mean, eccentric, and true anomalies (all
in radians) at the barycentric epoch MJD(s).
"""
MJD = myasarray(MJD)
difft = (MJD - self.T0)*SECPERDAY
sec_since_peri = Num.fmod(difft, self.PBsec)
sec_since_peri[sec_since_peri < 0.0] += self.PBsec
mean_anom = sec_since_peri/self.PBsec*TWOPI
ecc_anom = self.eccentric_anomaly(mean_anom)
true_anom = psr_utils.true_anomaly(ecc_anom, self.par.E)
return (mean_anom, ecc_anom, true_anom)
def most_recent_peri(self, MJD):
"""
most_recent_peri(MJD):
Return the MJD(s) of the most recent periastrons that occurred
before the input MJD(s).
"""
MJD = myasarray(MJD)
difft = MJD - self.T0
days_since_peri = Num.fmod(difft, self.par.PB)
        days_since_peri[days_since_peri < 0.0] += self.par.PB
return MJD - days_since_peri
def eccentric_anomaly(self, mean_anomaly):
"""
eccentric_anomaly(mean_anomaly):
Return the eccentric anomaly in radians, given a set of mean_anomalies
in radians.
"""
ma = Num.fmod(mean_anomaly, TWOPI)
ma = Num.where(ma < 0.0, ma+TWOPI, ma)
eccentricity = self.par.E
ecc_anom_old = ma
ecc_anom = ma + eccentricity*Num.sin(ecc_anom_old)
# This is a simple iteration to solve Kepler's Equation
while (Num.maximum.reduce(Num.fabs(ecc_anom-ecc_anom_old)) > 5e-15):
ecc_anom_old = ecc_anom[:]
ecc_anom = ma + eccentricity*Num.sin(ecc_anom_old)
return ecc_anom
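    # The loop above is a fixed-point solution of Kepler's equation,
    #     E - e*sin(E) = M,
    # iterating E_{n+1} = M + e*sin(E_n) until successive iterates agree to
    # ~5e-15 rad.  This converges quickly for the small-to-moderate
    # eccentricities typical of binary pulsars; only as e approaches 1 would a
    # Newton-Raphson update be worth the extra bookkeeping.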
def calc_omega(self, MJD):
"""
calc_omega(MJD):
Return the argument of periastron (omega in radians) at
time (or times) MJD(s).
"""
MJD = myasarray(MJD)
difft = (MJD - self.T0)*SECPERDAY
if hasattr(self.par, 'OMDOT'):
# Note: This is an array
return (self.par.OM + difft/SECPERJULYR*self.par.OMDOT)*DEGTORAD
else:
return self.par.OM*DEGTORAD
def radial_velocity(self, MJD):
"""
radial_velocity(MJD):
Return the radial velocity of the pulsar (km/s) at the given MJD(s).
"""
ma, ea, ta = self.calc_anoms(MJD)
ws = self.calc_omega(MJD)
c1 = TWOPI*self.par.A1/self.PBsec
c2 = Num.cos(ws)*Num.sqrt(1-self.par.E*self.par.E)
sws = Num.sin(ws)
cea = Num.cos(ea)
return SOL/1000.0*c1*(c2*cea - sws*Num.sin(ea)) / (1.0 - self.par.E*cea)
def doppler_period(self, MJD):
"""
doppler_period(MJD):
Return the observed pulse spin period in sec at the given MJD(s).
"""
vs = self.radial_velocity(MJD)*1000.0 # m/s
return self.par.P0*(1.0+vs/SOL)
def position(self, MJD, inc=60.0, returnz=False):
"""
position(MJD, inc=60.0, returnz=False):
Return the 'x' (along the LOS with + being towards us) and 'y' (in the
plane of the sky with + being away from the line of nodes and -
being in the direction of the line of nodes) positions of the
pulsar with respect to the center of mass in units of lt-sec.
(Note: This places the observer at (+inf,0.0) and the line of nodes
extending towards (0.0,-inf) with the pulsar orbiting (0.0,0.0)
clockwise). 'inc' is the inclination of the orbit in degrees.
MJD can be an array. The return value is (xs, ys). If returnz
is True, return (xs, ys, zs), where 'z' is the other in-the-sky
direction. These coordinates correspond to the I, J, and K vectors
in Damour & Taylor (1992) in the following way:
x = -K
y = -I
z = -J
"""
ma, ea, ta = self.calc_anoms(MJD)
ws = self.calc_omega(MJD)
orb_phs = ta + ws
sini = Num.sin(inc*DEGTORAD)
        x = self.par.A1/sini # This is the semi-major axis a, since A1 is a*sin(i)
r = x*(1.0-self.par.E*self.par.E)/(1.0+self.par.E*Num.cos(ta))
if returnz:
return -r*Num.sin(orb_phs)*sini, -r*Num.cos(orb_phs), \
-r*Num.sin(orb_phs)*Num.cos(inc*DEGTORAD)
else:
return -r*Num.sin(orb_phs)*sini, -r*Num.cos(orb_phs)
def reflex_motion(self, MJD, inc, Omega, dist):
"""
reflex_motion(MJD, inc, Omega, dist):
Return the projected on-sky orbital reflex motion in mas referenced
to Omega, which is the line-of-nodes, clockwise from East towards
North. This is the definition of Omega used by e.g. Damour &
Taylor (1992) and Kopeikin (1996), but note that it differs from
most non-pulsar applications (in which Omega is measured counter-
clockwise from North to East). The distance to the pulsar is in
kpc. The returned values are dRA (corrected by cos(dec)), dDEC.
"""
xs, ys, zs = self.position(MJD, inc, returnz=True)
ys = -ys / dist * 2.003988804115705e-03 # in mas, (i.e. DT92 "I")
zs = -zs / dist * 2.003988804115705e-03 # in mas, (i.e. DT92 "J")
        sino, coso = Num.sin(Omega*DEGTORAD), Num.cos(Omega*DEGTORAD)
# Convert from DT92 I, J to I_0, J_0 (= RA, Dec)
dRA = (coso * ys - sino * zs) / Num.cos(self.par.DEC_RAD)
dDEC = (sino * ys + coso * zs)
return dRA, dDEC
def demodulate_TOAs(self, MJD):
"""
demodulate_TOAs(MJD):
Return arrival times correctly orbitally de-modulated using
the iterative procedure described in Deeter, Boynton, and Pravdo
(1981ApJ...247.1003D, thanks, Deepto!). This corrects for the
fact that the emitted times are what you want when you only
have the arrival times. MJD can be an array. The returned
values are in MJD as well.
"""
ts = MJD[:] # start of iteration
dts = Num.ones_like(MJD)
# This is a simple Newton's Method iteration based on
# the code orbdelay.c written by Deepto Chakrabarty
while (Num.maximum.reduce(Num.fabs(dts)) > 1e-10):
# radial position in lt-days
xs = -self.position(ts, inc=90.0)[0]/86400.0
# radial velocity in units of C
dxs = self.radial_velocity(ts)*1000.0/SOL
dts = (ts + xs - MJD) / (1.0 + dxs)
ts = ts - dts
return ts
def shapiro_delays(self, R, S, MJD):
"""
shapiro_delays(R, S, MJD):
Return the predicted Shapiro delay (in us) for a variety of
barycentric MJDs, given the R and S parameters.
"""
ma, ea, ta = self.calc_anoms(MJD)
ws = self.calc_omega(MJD)
canoms = Num.cos(ea)
sanoms = Num.sin(ea)
ecc = self.par.E
cw = Num.cos(ws)
sw = Num.sin(ws)
delay = -2.0e6*R*Num.log(1.0 - ecc*canoms -
S*(sw*(canoms-ecc) +
Num.sqrt((1.0 - ecc*ecc)) * cw * sanoms))
return delay
def shapiro_measurable(self, R, S, MJD):
"""
shapiro_measurable(R, S, MJD):
Return the predicted _measurable_ Shapiro delay (in us) for a
variety of barycentric MJDs, given the R and S parameters.
This is eqn 28 in Freire & Wex 2010 and is only valid in
the low eccentricity limit.
"""
ma, ea, ta = self.calc_anoms(MJD)
ws = self.calc_omega(MJD)
Phi = ma + ws
cbar = Num.sqrt(1.0 - S**2.0)
zeta = S / (1.0 + cbar)
h3 = R * zeta**3.0
sPhi = Num.sin(Phi)
delay = -2.0e6 * h3 * (
Num.log(1.0 + zeta*zeta - 2.0 * zeta * sPhi) / zeta**3.0 +
2.0 * sPhi / zeta**2.0 -
Num.cos(2.0 * Phi) / zeta)
return delay
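# The helper below is an illustrative addition (not part of the original
# module): it sketches how shapR(), shapS(), and binary_psr.shapiro_delays()
# can be combined to predict a Shapiro-delay curve over one orbit. The par
# file name and the pulsar/companion masses are hypothetical placeholders.
def example_shapiro_curve(parfilenm, m1=1.4, m2=0.3, npts=200):
    psr = binary_psr(parfilenm)
    R = shapR(m2)  # Shapiro 'R' (sec) for the assumed companion mass (Msun)
    S = shapS(m1, m2, psr.par.A1, psr.par.PB)  # also equals sin(i)
    mjds = psr.T0 + Num.linspace(0.0, psr.par.PB, npts)  # one full orbit
    return mjds, psr.shapiro_delays(R, S, mjds)  # delays in microseconds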
if __name__=='__main__':
import presto.Pgplot as pg
# The following reproduces the RV plot in Hulse & Taylor, 1975
psrA = binary_psr("B1913+16.par")
T0 = 42320.933 # From Hulse & Taylor, 1975
times = psr_utils.span(0.0, psrA.par.PB, 1000) + T0
rv = psrA.radial_velocity(times)
pg.plotxy(rv, (times-T0)*24, \
labx="Hours since Periastron", laby="Radial Velocity (km.s)")
pg.closeplot()
| 10,195 | 38.366795 | 84 | py |
presto | presto-master/python/presto/parfile.py | from __future__ import print_function
from __future__ import absolute_import
from builtins import object
import six
import math, re
from presto import psr_utils as pu
from presto import psr_constants as pc
try:
from slalib import sla_ecleq, sla_eqecl, sla_eqgal
slalib = True
except ImportError:
slalib = False
# MJD of the J2000.0 epoch
J2000 = 51544.5
#
# The following are the parameters that are accepted in a
# par file when trying to determine a pulsar ephemeris.
#
# PEPOCH Epoch of period/frequency parameters and position (MJD)
#  F0           Pulsar rotation frequency (s^-1)
# F Alternative for F0
# F1 Pulsar rotation frequency derivative (s^-2)
# F2 Pulsar rotation frequency second derivative
# P0 Pulsar period (s).
# P Alternative for P0
# P1 Pulsar period derivative (10^-15).
# DM Dispersion measure (pc cm^-3)
# A1 Projected pulsar semi-major axis of 1st orbit
# E Eccentricity of 1st orbit
# T0 Epoch of periastron passage of 1st orbit (MJD)
# TASC Epoch of ascending node passage (MJD)
# PB Period of 1st orbit (days)
#  OM           Longitude of periastron passage, 1st orbit (deg)
# EPS1 First Laplace parameter [eccentricity times sin(omega)]
# EPS2 Second Laplace parameter [eccentricity times cos(omega)]
# EPS1DOT Time derivative of EPS1
# EPS2DOT Time derivative of EPS2
# OMDOT Rate of periastron advance (deg/yr)
# PBDOT Rate of change of orbital period (10^-12)
# XDOT Rate of change of projected semi-major axis (-12)
# EDOT Rate of change of eccentricity (-12)
#
# The following are _not_ currently implemented:
# F3, F4, F5,... Higher order frequency derivative terms
# OM2DOT Second time derivative of angle of periastron (rad/s^2)
# X2DOT Second time derivative of projected semi-major axis (1/s)
#
float_keys = ["PEPOCH", "POSEPOCH", "DM", "START", "FINISH", "NTOA",
"TRES", "TZRMJD", "TZRFRQ", "TZRSITE", "NITS",
"A1", "XDOT", "E", "ECC", "EDOT", "T0", "PB", "PBDOT", "OM", "OMDOT",
"EPS1", "EPS2", "EPS1DOT", "EPS2DOT", "TASC", "LAMBDA", "BETA",
"RA_RAD", "DEC_RAD", "GAMMA", "SINI", "M2", "MTOT", "XPBDOT",
"ELAT", "ELONG", "PMLAMBDA", "PMBETA", "PX",
"PMRA", "PMDEC", "PB_2", "A1_2", "E_2", "T0_2", "OM_2"]
floatn_keys = ["F", "P", "FB", "FD", "DMX_", "DMXEP_", "DMXR1_",
"DMXR2_", "DMXF1_", "DMXF2_"]
str_keys = ["FILE", "PSR", "PSRJ", "RAJ", "DECJ", "EPHEM", "CLK", "BINARY"]
class psr_par(object):
def __init__(self, parfilenm):
self.FILE = parfilenm
pf = open(parfilenm)
for line in pf.readlines():
# Skip comments
if line[0]=='#':
continue
# Convert any 'D-' or 'D+' to 'E-' or 'E+'
line = line.replace("D-", "E-")
line = line.replace("D+", "E+")
splitline = line.split()
# Skip blank lines
if len(splitline)==0:
continue
key = splitline[0]
# Regex checks for non-digit chars, followed by digit chars
m1 = re.search(r'(\D+)(\d+)$', key)
# This one looks for the DMX[RF][12]_* params
m2 = re.search(r'(\D+\d+_)(\d+)$', key)
if key == "JUMP":
if splitline[3] not in ['0', '1']:
setattr(self, key+'_%s'%splitline[2], float(splitline[3]))
if len(splitline)==5:
if splitline[4] not in ['0', '1']:
setattr(self, key+'_%s'%splitline[2]+'_ERR',
float(splitline[4]))
elif len(splitline)==6:
setattr(self, key+'_%s'%splitline[2]+'_ERR',
float(splitline[5]))
if key in str_keys:
setattr(self, key, splitline[1])
elif key in float_keys:
try:
setattr(self, key, float(splitline[1]))
except ValueError:
pass
elif m1 is not None:
m = m1
if m2 is not None:
m = m2
if m.group(1) in floatn_keys:
try:
setattr(self, key, float(splitline[1]))
except ValueError:
pass
if len(splitline)==3: # Some parfiles don't have flags, but do have errors
if splitline[2] not in ['0', '1']:
setattr(self, key+'_ERR', float(splitline[2]))
if len(splitline)==4:
setattr(self, key+'_ERR', float(splitline[3]))
# Deal with Ecliptic coords
if (hasattr(self, 'BETA') and hasattr(self, 'LAMBDA')):
setattr(self, 'ELAT', self.BETA)
setattr(self, 'ELONG', self.LAMBDA)
if (slalib and hasattr(self, 'ELAT') and hasattr(self, 'ELONG')):
# TEMPO's ecliptic coords are always based on J2000 epoch
ra_rad, dec_rad = sla_ecleq(self.ELONG*pc.DEGTORAD,
self.ELAT*pc.DEGTORAD, J2000)
rstr = pu.coord_to_string(*pu.rad_to_hms(ra_rad))
dstr = pu.coord_to_string(*pu.rad_to_dms(dec_rad))
setattr(self, 'RAJ', rstr)
setattr(self, 'DECJ', dstr)
if hasattr(self, 'RAJ'):
setattr(self, 'RA_RAD', pu.ra_to_rad(self.RAJ))
if hasattr(self, 'DECJ'):
setattr(self, 'DEC_RAD', pu.dec_to_rad(self.DECJ))
# Compute the Galactic coords
if (slalib and hasattr(self, 'RA_RAD') and hasattr(self, 'DEC_RAD')):
l, b = sla_eqgal(self.RA_RAD, self.DEC_RAD)
setattr(self, 'GLONG', l*pc.RADTODEG)
setattr(self, 'GLAT', b*pc.RADTODEG)
# Compute the Ecliptic coords
if (slalib and hasattr(self, 'RA_RAD') and hasattr(self, 'DEC_RAD')):
# TEMPO's ecliptic coords are always based on J2000 epoch
elon, elat = sla_eqecl(self.RA_RAD, self.DEC_RAD, J2000)
setattr(self, 'ELONG', elon*pc.RADTODEG)
setattr(self, 'ELAT', elat*pc.RADTODEG)
if hasattr(self, 'P'):
setattr(self, 'P0', self.P)
if hasattr(self, 'P0'):
setattr(self, 'F0', 1.0/self.P0)
if hasattr(self, 'F0'):
setattr(self, 'P0', 1.0/self.F0)
if hasattr(self, 'FB0'):
setattr(self, 'PB', (1.0/self.FB0)/pc.SECPERDAY)
if hasattr(self, 'P0_ERR'):
if hasattr(self, 'P1_ERR'):
f, ferr, fd, fderr = pu.pferrs(self.P0, self.P0_ERR,
self.P1, self.P1_ERR)
setattr(self, 'F0_ERR', ferr)
setattr(self, 'F1', fd)
setattr(self, 'F1_ERR', fderr)
else:
f, fd, = pu.p_to_f(self.P0, self.P1)
setattr(self, 'F0_ERR', self.P0_ERR/(self.P0*self.P0))
setattr(self, 'F1', fd)
else:
if hasattr(self, 'P1'):
f, fd, = pu.p_to_f(self.P0, self.P1)
setattr(self, 'F1', fd)
elif hasattr(self, 'F1'):
p, pd, = pu.p_to_f(self.F0, self.F1)
setattr(self, 'P1', pd)
if (hasattr(self, 'F0_ERR') and hasattr(self, 'F1_ERR')):
p, perr, pd, pderr = pu.pferrs(self.F0, self.F0_ERR,
self.F1, self.F1_ERR)
setattr(self, 'P0_ERR', perr)
setattr(self, 'P1', pd)
setattr(self, 'P1_ERR', pderr)
elif (hasattr(self, 'F0') and hasattr(self, 'F0_ERR')):
setattr(self, 'P0_ERR', self.F0_ERR/(self.F0*self.F0))
if hasattr(self, 'EPS1') and hasattr(self, 'EPS2'):
ecc = math.sqrt(self.EPS1 * self.EPS1 + self.EPS2 * self.EPS2)
omega = math.atan2(self.EPS1, self.EPS2)
setattr(self, 'E', ecc)
setattr(self, 'OM', omega * pc.RADTODEG)
setattr(self, 'T0', self.TASC + self.PB * omega/pc.TWOPI)
if hasattr(self, 'PB') and hasattr(self, 'A1') and not \
(hasattr(self, 'E') or hasattr(self, 'ECC')):
setattr(self, 'E', 0.0)
if hasattr(self, 'T0') and not hasattr(self, 'TASC'):
setattr(self, 'TASC', self.T0 - self.PB * self.OM/360.0)
if hasattr(self, 'E') and not hasattr(self, 'ECC'):
setattr(self, 'ECC', self.E)
if not hasattr(self, 'EPS1'):
if hasattr(self, 'E_ERR'):
setattr(self, 'ECC_ERR', self.E_ERR)
if hasattr(self, 'ECC') and not hasattr(self, 'E'):
setattr(self, 'E', self.ECC)
setattr(self, 'E_ERR', self.ECC_ERR)
pf.close()
def __str__(self):
out = ""
for k, v in list(self.__dict__.items()):
if k[:2]!="__":
if type(self.__dict__[k]) in six.string_types:
out += "%10s = '%s'\n" % (k, v)
else:
out += "%10s = %-20.15g\n" % (k, v)
return out
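# The helper below is an illustrative addition (not part of the original
# module): it shows a typical use of psr_par to read a TEMPO-style ephemeris
# and print a few commonly used quantities, if they are present.
def print_par_summary(parfilenm):
    par = psr_par(parfilenm)
    name = getattr(par, 'PSRJ', getattr(par, 'PSR', parfilenm))
    print("PSR %s" % name)
    for attr, label in [('P0', 'Period (s)'), ('DM', 'DM (pc cm^-3)'),
                        ('PB', 'Orbital period (d)'), ('A1', 'a*sin(i)/c (lt-s)')]:
        if hasattr(par, attr):
            print("  %-20s = %.12g" % (label, getattr(par, attr)))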
def ELL1_check(par_file, output=False):
"""
ELL1_check(par_file):
Check the parfile to see if ELL1 can be safely used as the
binary model. To work properly, we should have:
asini/c * ecc**2 << timing precision / sqrt(# TOAs)
"""
psr = psr_par(par_file)
try:
lhs = psr.A1 * psr.E ** 2.0 * 1e6
except:
if output:
print("Can't compute asini/c * ecc**2, maybe parfile doesn't have a binary?")
return
try:
        rhs = psr.TRES / math.sqrt(psr.NTOA)
except:
if output:
print("Can't compute TRES / sqrt(# TOAs), maybe this isn't a TEMPO output parfile?")
return
if output:
print("Condition is asini/c * ecc**2 << timing precision / sqrt(# TOAs) to use ELL1:")
print(" asini/c * ecc**2 = %8.3g us" % lhs)
print(" TRES / sqrt(# TOAs) = %8.3g us" % rhs)
if lhs * 50.0 < rhs:
if output:
print("Should be fine.")
return True
elif lhs * 5.0 < rhs:
if output:
print("Should be OK, but not optimal.")
return True
else:
if output:
print("Should probably use BT or DD instead.")
return False
if __name__ == '__main__':
a = psr_par("2140-2310A.par")
print(a)
| 10,504 | 41.703252 | 96 | py |
presto | presto-master/python/presto/events.py | from __future__ import print_function
import bisect
from presto.psr_constants import PI, TWOPI, PIBYTWO
from presto.simple_roots import newton_raphson
from scipy.special import iv, chdtri, ndtr, ndtri
from scipy.integrate import quad
from presto.cosine_rand import *
import numpy as np
import numpy as Num
def sine_events(pulsed_frac, Nevents, phase=0.0):
"""
sine_events(pulsed_frac, Nevents, phase=0.0):
Return an array of 'Nevents' of phase values [0,1)
simulating a folded profile with a pulsed fraction
'pulsed_frac', a phase offset 'phase', and with a
sinusoidal pulse profile.
"""
Nsrc = int(pulsed_frac*Nevents+0.5)
Nbak = Nevents - Nsrc
phases = Num.zeros(Nevents, dtype=float)
phases[:Nsrc] += cosine_rand(Nsrc) + phase
phases[Nsrc:] += Num.random.random(Nbak)
phases = Num.fmod(phases, 1.0)
phases[phases<0.0] += 1.0
return phases
def gaussian_events(pulsed_frac, Nevents, fwhm, phase=0.0):
"""
gaussian_events(pulsed_frac, Nevents, phase=0.0):
Return an array of 'Nevents' of phase values [0,1)
simulating a folded profile with a pulsed fraction
'pulsed_frac', a phase offset 'phase', and with a
gaussian pulse profile of width 'fwhm'
"""
sigma = fwhm / 2.35482
Nsrc = int(pulsed_frac*Nevents+0.5)
Nbak = Nevents - Nsrc
phases = Num.zeros(Nevents, dtype=float)
phases[:Nsrc] += Num.random.standard_normal(Nsrc)*sigma + phase
phases[Nsrc:] += Num.random.random(Nbak)
phases = Num.fmod(phases, 1.0)
phases[phases<0.0] += 1.0
return phases
def harm_to_sum(fwhm):
"""
harm_to_sum(fwhm):
For an MVMD profile of width 'fwhm', returns the
optimal number of harmonics to sum incoherently
"""
fwhms = [0.0108, 0.0110, 0.0113, 0.0117, 0.0119, 0.0124, 0.0127, 0.0132,
0.0134, 0.0140, 0.0145, 0.0151, 0.0154, 0.0160, 0.0167, 0.0173,
0.0180, 0.0191, 0.0199, 0.0207, 0.0220, 0.0228, 0.0242, 0.0257,
0.0273, 0.0295, 0.0313, 0.0338, 0.0366, 0.0396, 0.0437, 0.0482,
0.0542, 0.0622, 0.0714, 0.0836, 0.1037, 0.1313, 0.1799, 0.2883]
return len(fwhms)-bisect.bisect(fwhms, fwhm)+1
def DFTexact(times, f, maxnumharms=20):
"""
DFTexact(times, f, maxnumharms=20):
Return an array of 'maxnumharms' complex amplitudes
corresponding to the harmonics of the 'times' (in sec)
with a fundamental at frequency 'f' Hz.
"""
const = -TWOPI*(Num.arange(maxnumharms, dtype=float)+1.0)*f*complex(0.0, 1.0)
    return Num.add.reduce(Num.exp(Num.outer(const, times)), axis=1)
def incoherent_sum(amps):
"""
incoherent_sum(amps):
Return the incoherent sum of an array of complex Fourier
amplitudes. Usually these correspond to the complex
harmonics of a periodic signal.
"""
return Num.add.accumulate(Num.abs(amps)**2.0)
def coherent_sum(amps):
"""
coherent_sum(amps):
Return the coherent sum (i.e. including phase information)
of an array of complex Fourier amplitudes. Usually these
correspond to the complex harmonics of a periodic signal.
"""
phss = Num.arctan2(amps.imag, amps.real)
phs0 = phss[0]
phscorr = phs0 - Num.fmod(Num.arange(1.0, len(amps)+1,
dtype=float)*phs0, TWOPI)
sumamps = Num.add.accumulate(amps*Num.exp(complex(0.0, 1.0)*phscorr))
return Num.abs(sumamps)**2.0
def Htest_exact(phases, maxnumharms=20, weights=None):
"""
Htest_exact(phases, maxnumharms=20, weights=None):
Return an exactly computed (i.e. unbinned) H-test statistic
for periodicity for the events with folded phases 'phases' [0,1).
Also return the best number of harmonics. The H-statistic and
harmonic number are returned as a tuple: (hstat, harmnum).
This routine returns the Leahy normalized H-statistic, and the
best number of harmonics summed. If weights are set to be
fractional photon weights, then the weighted Htest is returned
(see Kerr 2011: http://arxiv.org/pdf/1103.2128.pdf)
"""
N = len(phases)
    Zm2s = np.zeros(maxnumharms, dtype=float)
rad_phases = 2.0*np.pi*phases
weightfact = 1.0/(np.sum(weights**2.0) / N) if \
weights is not None else 1.0
for harmnum in range(1, maxnumharms+1):
phss = harmnum*rad_phases
Zm2s[harmnum-1] = 2.0/N*(np.add.reduce(np.sin(phss))**2.0+
np.add.reduce(np.cos(phss))**2.0)
Zm2s[harmnum-1] *= weightfact
hs = np.add.accumulate(Zm2s) - \
4.0*np.arange(1.0, maxnumharms+1)+4.0
bestharm = hs.argmax()
return (hs[bestharm], bestharm+1)
def Hstat_prob(h):
"""
Hstat_prob(h):
Return the probability associated with an H-test statistic
of value 'h'. Uses de Jager & Busching 2010 result.
"""
return Num.exp(-0.4 * h)
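# The helper below is an illustrative addition (not part of the original
# module): it simulates a sinusoidally pulsed event list with sine_events()
# and recovers the signal with the unbinned H-test above. The pulsed fraction
# and number of events are arbitrary placeholders.
def example_htest(pulsed_frac=0.2, Nevents=2000):
    phases = sine_events(pulsed_frac, Nevents)
    hstat, nharm = Htest_exact(phases)
    # Convert the H statistic into an approximate false-alarm probability
    return hstat, nharm, Hstat_prob(hstat)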
def gauss_sigma_to_prob(sigma):
"""
gauss_sigma_to_prob(sigma):
Returns the area under the Gaussian probability density
function, integrated from 'sigma' to infinity.
"""
if sigma < 5.0:
return 1.0 - ndtr(sigma)
else:
# From A&S page 932, eqn 26.2.12 for Q(x)
x = sigma
Z = 1.0/Num.sqrt(2.0*Num.pi) * Num.exp(-0.5*x*x)
series = Num.sum(Num.asarray([1.0, -1.0/(x*x), 3.0/(x**4.0),
-15.0/(x**6.0), 105.0/(x**8.0)]))
return Z/x*series
def prob_to_gauss_sigma(prob):
"""
prob_to_gauss_sigma(prob):
Returns the Gaussian sigma for which the area under the
Gaussian probability density function (integrated from minus
infinity to 'sigma') is equal to 'prob'.
"""
return ndtri(prob)
def xray_time_to_detect(ctrate, pfract, dt, fpsr, bins=0, confidence=0.99,
detectfract=0.99):
"""
xray_time_to_detect(ctrate, pfract, dt, fpsr, bins=0, confidence=0.99,
detectfract=0.99):
Return the observation duration required (assuming no breaks
and a sinusoidal pulse profile) to detect pulsations at
frequency 'fpsr' while looking in a number of Fourier
bins equal to 'bins' (Note: the default value of 0 means
that all bins will be examined). 'dt' is the bin duration in
sec, 'ctrate' is the total expected count rate, and 'pfract' is
the expected pulsed fraction. 'confidence' is the confidence
level that the signal is not caused by noise, and 'detectfract'
is the fraction of the time that you want this observation to
occur (i.e. if set to 0.5, 50% of observations of this duration
would detect the specified signal at 'confidence' level).
"""
nyquist_freq = 0.5 / dt
factor = binning_factor(fpsr, nyquist_freq)**2.0
A = pfract * ctrate # Signal ct rate
if (bins):
P_detect = max_noise_power(bins, confidence=confidence)
power_required = required_signal_power(P_detect, confidence=detectfract)
# The following is from para 1, sect 3.3, of Ransom, Gaensler, and Slane, 2002
        #return (power_required - 1.0) * 4 * ctrate * dt**2.0 / (A**2.0 * factor)
else:
print("Not implemented yet...I think we need to iterate.")
# The following routines are based on the method of signal
# estimation described by Vaughan et al., 1994, ApJ, 435, p362.
# The math comes from Groth, 1975, ApJS, 29, p285.
def power_average(signal_power, n=1):
"""
power_average(signal_power, n=1):
Return the expectation value of the measured power given
a signal with intrinsic power 'signal_power' and 'n'
summed powers. This is from equation 14 in Groth, 1975.
"""
return signal_power + n
def power_variance(signal_power, n=1):
"""
power_variance(signal_power, n=1):
Return the variance of the measured power given a signal
with intrinsic power 'signal_power' and 'n' summed
powers. This is from equation 14 in Groth, 1975.
"""
return 2.0 * signal_power + n
def power_sigma(signal_power, n=1):
"""
power_sigma(signal_power, n=1):
Return the standard deviation of the measured power
given a signal with intrinsic power 'signal_power' and
'n' summed powers. This is from equation 14 in Groth, 1975.
"""
return Num.sqrt(power_variance(signal_power, n))
def log_fact_table(maxn):
"""
log_fact_table(maxn):
Return a table of the natural logarithms of the
first 'maxn'+1 factorials.
"""
table = Num.arange(maxn+1, dtype='d')
table[0] = 1.0
return Num.add.accumulate(Num.log(table))
def binning_factor(freq, nyquist_freq):
"""
binning_factor(freq, nyquist_freq):
Return the factor that causes high frequency Fourier
Amplitudes to be decreased if the time series is
made of binned events. Square this for a power
spectrum adjustment. 'freq' is the frequency of
interest and 'nyquist_freq' is the Nyquist Frequency
which can be defined as N/(2*T).
"""
x = 0.5 * Num.asarray(freq) / nyquist_freq
return Num.sinc(x) # numpy sinc is defined with pi
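# The helper below is an illustrative addition (not part of the original
# module): it gives the fractional power lost to finite time binning for a
# hypothetical signal frequency and sample time, using binning_factor() above.
def example_binning_loss(freq=200.0, dt=1e-3):
    nyquist_freq = 0.5 / dt
    return 1.0 - binning_factor(freq, nyquist_freq)**2.0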
def max_noise_power(bins, n=1, confidence=0.99):
"""
max_noise_power(bins, n=1, confidence=0.99):
Return the power level that gives you some
'confidence' that spectral noise could not cause
that level in your power spectrum. The total number
of independent frequencies searched is 'bins'.
This is P_detect in Vaughan et. al, 1994, and is also
known as P_threshold.
"""
if (n==1):
return -Num.log((1.0 - confidence) / bins)
else:
return 0.5 * chdtri(2.0 * n, (1.0 - confidence) / bins)
def prob_power_series(power, signal_power, n=1, TOL=1.0e-14):
"""
prob_power_series(power, signal_power, n=1, TOL=1.0e-14):
Return the integrated probability from P=0 to 'power'
that a signal with theoretical power 'signal_power'
will show up in a power spectrum with power 'power'.
This method evaluates the integral using an infinite
sum and is equation 16 in Groth, 1975.
"""
fact = Num.exp(-(power + signal_power))
lf = log_fact_table((power + signal_power) * 5)
lp, lps = Num.log(power), Num.log(signal_power)
sum = 0.0
term = 1.0
m = 0
while (1):
kmax = m + n
term = fact * Num.add.reduce(Num.exp((Num.arange(kmax)*lp + m*lps) - \
(lf[0:kmax] + lf[m])))
sum = sum + term
if (m > signal_power and term < TOL): break
m = m + 1
return 1.0 - sum
def prob_power_integral(power, signal_power, n=1):
"""
prob_power_integral(power, signal_power, n=1):
Return the integrated probability from P=0 to 'power'
that a signal with theoretical power 'signal_power'
will show up in a power spectrum with power 'power'.
This method evaluates the integral numerically and
is equation 18 in Groth, 1975.
"""
def integrand(theta, p, ps, n):
t1 = 2 * n * theta
t2 = Num.sin(2.0 * theta)
A = t1 + ps * t2
B = t1 + (ps - p) * t2
sintheta = Num.sin(theta)
sin2theta = sintheta**2.0
return (Num.exp(-2.0 * ps * sin2theta) *
(Num.sin(A - theta) - Num.exp(-2.0 * p * sin2theta) *
Num.sin(B - theta)) / sintheta)
(val, err) = quad(integrand, 0.0, PIBYTWO, (power, signal_power, n))
return val/PI
def power_probability(power, signal_power, n=1):
"""
power_probability(power, signal_power, n=1):
Return the probability of a signal with power
'signal_power' actually showing up with power
'power' in a power spectrum' This is equation
12 in Groth, 1975 and is the integrand of the
prob_power_* functions (which integrate it from 0 to P)
"""
return (power / signal_power)**(0.5 * (n - 1)) * \
Num.exp(-(power + signal_power)) * \
iv(n - 1.0, 2 * Num.sqrt(power * signal_power))
def required_signal_power(power, n=1, confidence=0.99):
"""
required_signal_power(power, n=1, confidence=0.99):
Return the required power of a signal that will cause
at least a power 'power' in a power spectrum a fraction
'confidence' of the time. This is the inverse of
equation 16 in Groth, 1975, with solves for P_signal.
If called with 'power' = P_detect the result is
the search sensitivity. If called with 'power' = P_max,
then the result is the upper limit on the signal power
in the power spectrum.
"""
prob = 1.0 - confidence
def func(x, power=power, prob=prob, n=n):
return prob_power_series(power, x, n) - prob
def dfunc(x, power=power, n=n):
return power_probability(power, x, n)
P_signal = newton_raphson(func, dfunc, 0.0001, 100.0)
return P_signal
def fft_sensitivity(N, bins=0, n=1, confidence=0.99):
"""
fft_sensitivity(N, bins=0, n=1, confidence=0.99):
Return a measure of the weakest signal power you can
confidently detect in an FFT search containing 'N' data
points (this is the number of bins in the time series -- the
number of Frequency bins searched is usually N/2). 'bins' is
only different from 0 if the number of independent frequencies
searched does not equal N/2 (i.e. when an acceleration search
is performed). 'confidence' is our fractional confidence in
the result (i.e. 0.99 = 99% limit). This calculation does not
include the correction to sensitivity due to binning effects.
These calculations are based on the Vaughan et al 1994 paper
and compute P_sens.
"""
if not (bins): bins = N / 2
P_threshold = max_noise_power(bins, n, confidence)
return required_signal_power(P_threshold, n, confidence)
def rzw_sensitivity(N, zlo=-100.0, zhi=100.0, n=1, confidence=0.99):
"""
rzw_sensitivity(N, zlo=-100.0, zhi=100.0, n=1, confidence=0.99):
Return a measure of the weakest signal power you can
confidently detect in an RZW (Fourier acceleration) search
containing 'N' data points (this is the number of bins in the
time series) and low and high acceleration values of 'zlo'
and 'zhi'. 'confidence' is our fractional confidence in
the result (i.e. 0.99 = 99% limit). This calculation does not
include the correction to sensitivity due to binning effects.
These calculations are based on the Vaughan et al 1994 paper
and compute P_sens.
"""
bins = N / 2.0 * (zhi - zlo + 1.0) / 6.95
P_threshold = max_noise_power(bins, n, confidence)
return required_signal_power(P_threshold, n, confidence)
def binned_fft_sensitivity(N, dt, freq, bins=0, n=1, confidence=0.99):
"""
binned_fft_sensitivity(N, dt, freq, bins=0, n=1, confidence=0.99):
Return a measure of the weakest signal power of frequency 'freq'
Hz you can confidently detect in an FFT search containing 'N'
data points (this is the number of bins in the time series --
the number of Frequency bins searched is usually 1/2 of this
value) each of which was binned into 'dt' sec bins.
'bins' is only different from 0 if the number of independent
frequencies searched does not equal N/2 (i.e. when an
acceleration search is performed). 'confidence' is our
fractional confidence in the result (i.e. 0.99 = 99% limit).
This calculation includes the correction to sensitivity
due to binning effects. These calculations are based on
the Vaughan et al 1994 paper and compute P_sens.
"""
nyquist_freq = 0.5 / dt
factor = binning_factor(freq, nyquist_freq)**2.0
return fft_sensitivity(N, bins, n, confidence) / factor
def binned_rzw_sensitivity(N, dt, freq, zlo=-100.0, zhi=100.0,
n=1, confidence=0.99):
"""
binned_rzw_sensitivity(N, dt, freq, zlo=-100.0, zhi=100.0,
n=1, confidence=0.99):
Return a measure of the weakest signal power of frequency 'freq'
Hz you can confidently detect in an RZW (Fourier acceleration)
search containing 'N' data points (this is the number of bins in
the time series) each of which was binned into 'dt' sec bins.
Low and high acceleration values of 'zlo' and 'zhi' were used.
'confidence' is our fractional confidence in the result (i.e.
0.99 = 99% limit). This calculation includes the correction to
sensitivity due to binning effects. These calculations are
based on the Vaughan et al 1994 paper and compute P_sens.
"""
bins = N / 2.0 * (zhi - zlo + 1.0) / 6.95
nyquist_freq = 0.5 / dt
factor = binning_factor(freq, nyquist_freq)**2.0
return fft_sensitivity(N, bins, n, confidence) / factor
def pulsed_fraction_limit(Nphot, Pow):
"""
    pulsed_fraction_limit(Nphot, Pow):
Return an _observational_ (i.e. not intrinsic) upper limit
to the pulsed fraction of a signal that is in the data but
was not detected. By observational, I mean that some of the
unpulsed events do not come from the source you are looking
for pulsations in. The data contain a total of 'Nphot'
photons and the largest measured power (or P_sens as
calculated using the *_sensitivity functions in this module)
is 'Pow'. If you want the _intrinsic_ pulsed fraction,
you should divide the returned value by the fraction of Nphot
that actually comes from the _source_ (i.e. the NS).
"""
return Num.sqrt(4.0 * (Pow - 1.0) / Nphot)
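# The helper below is an illustrative addition (not part of the original
# module): it chains binned_fft_sensitivity() and pulsed_fraction_limit() to
# turn a non-detection into an observational pulsed-fraction upper limit.
# All of the default numbers are arbitrary placeholders.
def example_pulsed_fraction_limit(Nphot=100000, N=2**22, dt=1e-3, freq=100.0):
    P_sens = binned_fft_sensitivity(N, dt, freq)
    return pulsed_fraction_limit(Nphot, P_sens)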
if __name__=="__main__":
from presto.psr_utils import *
from presto.Pgplot import *
from presto.presto import *
    from numpy.random import normal
prof = expcos_profile(128, 0.0, 0.1) + normal(0.0, 5.0, 128)
plotxy(prof)
closeplot()
fprof = rfft(prof)
fprof = fprof/Num.sqrt(fprof[0].real)
pows = spectralpower(fprof)
tcsum = Num.add.accumulate(Num.sqrt(pows[1:10]))**2.0
csum = coherent_sum(fprof[1:10])
isum = incoherent_sum(fprof[1:10])
print(isum)
print(csum)
print(tcsum)
for ii in range(len(csum)):
print(candidate_sigma(isum[ii], ii+1, 1), candidate_sigma(csum[ii]/(ii+1), 1, 1))
| 18,498 | 40.947846 | 89 | py |
presto | presto-master/python/presto/pypsrcat.py | from __future__ import print_function
from __future__ import absolute_import
from builtins import object
from operator import attrgetter
import struct
import os.path
import math
import csv
import astropy.coordinates as c
import astropy.units as u
from presto import presto
import presto.psr_utils as pu
import presto.psr_constants as pc
## The most recent catalogs are available here:
##
## http://www.atnf.csiro.au/research/pulsar/psrcat/
version = 'v1.63'
## And here is the command used to get the data:
# Note version number now!
# http://www.atnf.csiro.au/people/pulsar/psrcat/proc_form.php?version=1.63&Name=Name&JName=JName&RaJ=RaJ&DecJ=DecJ&PMRA=PMRA&PMDec=PMDec&PX=PX&PosEpoch=PosEpoch&GL=GL&GB=GB&F0=F0&F1=F1&F2=F2&F3=F3&PEpoch=PEpoch&DM=DM&DM1=DM1&S400=S400&S1400=S1400&Binary=Binary&T0=T0&PB=PB&A1=A1&OM=OM&Ecc=Ecc&Tasc=Tasc&Eps1=Eps1&Eps2=Eps2&Dist=Dist&Assoc=Assoc&Survey=Survey&Type=Type&startUserDefined=true&c1_val=&c2_val=&c3_val=&c4_val=&sort_attr=jname&sort_order=asc&condition=&pulsar_names=&ephemeris=short&coords_unit=raj%2Fdecj&radius=&coords_1=&coords_2=&style=Long+csv+with+errors&no_value=*&x_axis=&x_scale=linear&y_axis=&y_scale=linear&state=query&table_bottom.x=40&table_bottom.y=0
params = ["NAME", "PSRJ", "RAJ", "DECJ", "PMRA", "PMDEC", "PX", "POSEPOCH",
"Gl", "Gb", "F0", "F1", "F2", "F3", "PEPOCH", "DM", "DM1",
"S400", "S1400", "BINARY", "T0", "PB", "A1", "OM", "ECC",
"TASC", "EPS1", "EPS2", "DIST", "ASSOC", "SURVEY", "PSR"]
params_with_errs = ["RAJ", "DECJ", "PMRA", "PMDEC", "PX", "P0", "P1", "F2", "F3",
"DM", "DM1", "S400", "S1400", "T0", "PB", "A1", "OM", "ECC",
"TASC", "EPS1", "EPS2"]
digits = '0123456789'
class psr(object):
def __init__(self, parts, indices):
# Do RAJ and DECJ first
posn = c.SkyCoord(parts[indices['RAJ']]+" "+parts[indices['DECJ']],
frame=c.ICRS, unit=(u.hourangle, u.deg))
for param in params:
part_index = indices[param]
if param=="NAME":
if not parts[part_index]=='*':
self.name = parts[part_index][1:]
else:
self.name = ""
elif param=="PSRJ":
if not parts[part_index]=='*':
self.jname = parts[part_index][1:]
if self.name == self.jname:
self.name = ""
elif param=="RAJ":
if not parts[part_index]=='*':
self.rajstr = parts[part_index]
self.ra = posn.ra.to(u.rad).value
self.raerr = float(parts[part_index+1]) * pc.SECTORAD
elif param=="DECJ":
if not parts[part_index]=='*':
self.decjstr = parts[part_index]
self.dec = posn.dec.to(u.rad).value
self.decerr = float(parts[part_index+1]) * pc.ARCSECTORAD
elif param=="PMRA":
if not parts[part_index]=='*':
self.pmra, self.pmraerr = float(parts[part_index]), float(parts[part_index+1])
elif param=="PMDEC":
if not parts[part_index]=='*':
self.pmdec, self.pmdecerr = float(parts[part_index]), float(parts[part_index+1])
elif param=="PX":
if not parts[part_index]=='*':
self.px, self.pxerr = float(parts[part_index]), float(parts[part_index+1])
elif param=="POSEPOCH":
if not parts[part_index]=='*':
self.posepoch = float(parts[part_index])
elif param=="Gl":
if not parts[part_index]=='*':
self.l = float(parts[part_index])
elif param=="Gb":
if not parts[part_index]=='*':
self.b = float(parts[part_index])
elif param=="F0":
if not parts[part_index]=='*':
self.f, self.ferr = float(parts[part_index]), float(parts[part_index+1])
self.p, self.perr = pu.pferrs(self.f, self.ferr)
else:
self.f = self.ferr = self.p = self.perr = 0.0
self.fd = self.fdd = self.fddd = 0.0
self.pd = self.pdd = self.pddd = 0.0
self.fderr = self.fdderr = self.fddderr = 0.0
self.pderr = self.pdderr = self.pddderr = 0.0
elif param=="F1":
if not parts[part_index]=='*':
self.fd, self.fderr = float(parts[part_index]), float(parts[part_index+1])
self.p, self.perr, self.pd, self.pderr = pu.pferrs(self.f, self.ferr, self.fd, self.fderr)
elif param=="F2":
if not parts[part_index]=='*':
self.fdd, self.fdderr = float(parts[part_index]), float(parts[part_index+1])
self.p, self.pd, self.pdd = presto.p_to_f(self.f, self.fd, self.fdd)
elif param=="F3":
if not parts[part_index]=='*':
self.fddd, self.fddderr = float(parts[part_index]), float(parts[part_index+1])
elif param=="PEPOCH":
if parts[part_index]=='*':
self.pepoch = 51000.0 # Just to pick a reasonable value
else:
self.pepoch = float(parts[part_index])
elif param=="DM":
if not parts[part_index]=='*':
self.dm, self.dmerr = float(parts[part_index]), float(parts[part_index+1])
else:
self.dm = self.dmerr = 0.0
elif param=="DM1":
if not parts[part_index]=='*':
self.ddm, self.ddmerr = float(parts[part_index]), float(parts[part_index+1])
elif param=="S400":
if not parts[part_index]=='*':
self.s400, self.s400err = float(parts[part_index]), float(parts[part_index+1])
else:
self.s400 = None
elif param=="S1400":
if not parts[part_index]=='*':
self.s1400, self.s1400err = float(parts[part_index]), float(parts[part_index+1])
else:
self.s1400 = None
elif param=="BINARY":
if not parts[part_index]=='*':
self.binary_model = parts[part_index]
self.binary = 1
self.pb = self.x = self.w = self.To = self.e = None
self.pberr = self.xerr = self.werr = self.Toerr =self.eerr = None
else:
self.binary = 0
elif param=="T0":
if self.binary and not parts[part_index]=='*':
self.To, self.Toerr = float(parts[part_index]), float(parts[part_index+1])
elif param=="PB":
if self.binary and not parts[part_index]=='*':
self.pb, self.pberr = float(parts[part_index]), float(parts[part_index+1])
elif param=="A1":
if self.binary and not parts[part_index]=='*':
self.x, self.xerr = float(parts[part_index]), float(parts[part_index+1])
elif param=="OM":
if self.binary and not parts[part_index]=='*':
self.w, self.werr = float(parts[part_index]), float(parts[part_index+1])
elif param=="ECC":
if self.binary and not parts[part_index]=='*':
self.e, self.eerr = float(parts[part_index]), float(parts[part_index+1])
elif param=="TASC":
if self.binary and self.binary_model=="ELL1" and not parts[part_index]=='*':
self.To, self.Toerr = float(parts[part_index]), float(parts[part_index+1])
elif param=="EPS1":
if self.binary and self.binary_model=="ELL1" and not parts[part_index]=='*':
self.eps1, self.eps1err = float(parts[part_index]), float(parts[part_index+1])
elif param=="EPS2":
if self.binary and self.binary_model=="ELL1" and not parts[part_index]=='*':
self.eps2, self.eps2err = float(parts[part_index]), float(parts[part_index+1])
if not hasattr(self, 'eps1'): self.eps1 = 0.0
self.e = math.sqrt(self.eps1*self.eps1 + self.eps2*self.eps2)
self.eerr = 0.0001 # This needs fixing...
self.w = pc.RADTODEG*math.atan2(self.eps1, self.eps2)
if (self.w < 0.0): self.w += 360.0
self.werr = 1.0 # This needs fixing...
elif param=="DIST":
if not parts[part_index]=='*':
self.dist = float(parts[part_index])
else:
self.dist = None
elif param=="ASSOC":
if not parts[part_index]=='*':
self.assoc = parts[part_index]
else:
self.assoc = None
elif param=="SURVEY":
if not parts[part_index]=='*':
self.survey = parts[part_index]
else:
self.survey = None
elif param=="PSR":
if not parts[part_index]=='*':
self.type = parts[part_index]
else:
self.type = None
self.alias = ""
def __str__(self):
out = ''
if (self.name):
out = out + "\nPulsar B%s (J%s)\n" % \
(self.name, self.jname)
else:
out = out + "\nPulsar J%s\n" % (self.jname)
if (self.alias):
out = out + " Alias = %s\n" % self.alias
if (self.assoc is not None):
out = out + " Association = %s\n" % self.assoc
if (self.survey is not None):
out = out + " Survey Detections = %s\n" % self.survey
out = out + " (Discoverer first)\n"
if (self.type is not None):
out = out + " Type = %s\n" % self.type
(h, m, s) = pu.rad_to_hms(self.ra)
serr = pc.RADTOSEC * self.raerr
out = out + " RA (J2000) = %s +/- %.4fs\n" % \
(pu.coord_to_string(h, m, s), serr)
(d, m, s) = pu.rad_to_dms(self.dec)
serr = pc.RADTOARCSEC * self.decerr
out = out + " DEC (J2000) = %s +/- %.4f\"\n" % \
(pu.coord_to_string(d, m, s), serr)
out = out + " (l, b) = (%.2f, %.2f)\n" % \
(self.l, self.b)
out = out + " DM (cm-3 pc) = %.8g +/- %.5g\n" % \
(self.dm, self.dmerr)
if (self.s400 is not None):
out = out + " S_400MHz (mJy) = %.3g +/- %.2g\n" % \
(self.s400, self.s400err)
if (self.s1400 is not None):
out = out + " S_1400MHz (mJy) = %.3g +/- %.2g\n" % \
(self.s1400, self.s1400err)
if (self.dist is not None):
out = out + " Distance (kpc) = %.3g\n" % self.dist
out = out + " Period (s) = %.15g +/- %.15g\n" % \
(self.p, self.perr)
out = out + " P-dot (s/s) = %.8g +/- %.8g\n" % \
(self.pd, self.pderr)
out = out + " Epoch (MJD) = %.10g\n" % self.pepoch
if (self.binary):
out = out + " P_binary (s) = %.10g +/- %.10g\n" % \
(self.pb*86400.0, self.pberr*86400.0)
out = out + " P_binary (d) = %.10g +/- %.10g\n" % \
(self.pb, self.pberr)
if self.x is not None:
out = out + " a*sin(i)/c (s) = %.8g +/- %.8g\n" % \
(self.x, self.xerr)
if self.e is not None:
out = out + " Eccentricity = %.8g +/- %.8g\n" % \
(self.e, self.eerr)
if (self.e > 0.0):
if self.w is not None:
out = out + " Long of Peri (deg) = %.10g +/- %.10g\n" % \
(self.w, self.werr)
if self.To is not None:
out = out + " Time of Peri (MJD) = %.12g +/- %.12g\n" % \
(self.To, self.Toerr)
else:
if self.To is not None:
out = out + " T of Ascd Node (MJD) = %.12g +/- %.12g\n" % \
(self.To, self.Toerr)
return out
def pack_structs(self):
out = struct.Struct("13s9s10s12d")
packed = out.pack(self.jname.encode('utf-8'),
self.name.encode('utf-8'),
self.alias.lower().encode('utf-8'),
self.ra, self.raerr,
self.dec, self.decerr,
self.p, self.perr, self.pd, self.pderr,
self.dm, self.dmerr, self.pepoch, self.binary)
if self.binary:
if self.pb is None: self.pb = 0.0
if self.pberr is None: self.pberr = 0.0
if self.x is None: self.x = 0.0
if self.xerr is None: self.xerr = 0.0
if self.e is None: self.e = 0.0
if self.eerr is None: self.eerr = 0.0
if self.w is None: self.w = 0.0
if self.werr is None: self.werr = 0.0
if self.To is None: self.To = 0.0
if self.Toerr is None: self.Toerr = 0.0
packed += struct.pack("10d",
self.pb, self.pberr, self.x, self.xerr,
self.e, self.eerr, self.w, self.werr,
self.To, self.Toerr)
return packed
pulsars = {}
num_binaries = 0
# Read the file that was taken from the ATNF database
presto_path = os.getenv("PRESTO")
with open(os.path.join(presto_path, "lib", "psr_catalog.txt")) as csvfile:
reader = csv.reader(csvfile, delimiter=';')
first = next(reader)
indices = {}
for ii in range(len(first)):
        if first[ii]!='' and first[ii]!="#":
indices[first[ii]] = ii
units = next(reader)
for row in reader:
currentpulsar = psr(row, indices)
pulsars[currentpulsar.jname] = currentpulsar
if currentpulsar.binary: num_binaries += 1
# Now add the aliases to the pulsars
infile = open(os.path.join(presto_path, "lib", "aliases.txt"))
for line in infile.readlines()[1:]:
if line[0]=='J':
vals = line.split()
jname = vals[0][1:]
if jname in pulsars:
pulsars[jname].alias = vals[2]
infile.close()
psrs = list(pulsars.values())
psrs.sort(key=attrgetter('jname'))
# Now create a new dictionary of pulsars with aliases
psr_aliases = {}
for psr in psrs:
if psr.alias:
psr_aliases[psr.alias] = psr
# Now create a new master dictionary with all pulsar names and aliases
allpsrs = {}
for psr in psrs:
allpsrs[psr.jname] = psr
allpsrs["j%s" % psr.jname] = psr
allpsrs["J%s" % psr.jname] = psr
if psr.alias:
allpsrs[psr.alias] = psr
if psr.name:
allpsrs[psr.name] = psr
allpsrs["b%s" % psr.name] = psr
allpsrs["B%s" % psr.name] = psr
# Update/extend the parameters of a couple of important pulsars
for psr in psrs:
if psr.jname=="1614-23":
psr.jname=="1614-2318"
psr.f = 29.8475387364133766
psr.fd = -4.683105034721e-17
psr.p, psr.pd = pu.p_to_f(psr.f, psr.fd)
psr.x = 1.327490
psr.e = 0.0
psr.To = 52819.878171
psr.pb = 3.15238573
psr.w = 0.0
psr.dm = 52.43
psr.l = 351.91856
psr.b = 19.74496
psr.dist = 1.80
if psr.jname=="2204+27":
psr.x = 0.1
psr.xerr = 1.0
psr.e = 0.129
psr.eerr = 0.05
psr.To = 57000.0
psr.Toerr = 16.0
psr.w = 180.0
psr.werr = 180.0
psr.pb = 32.0*24.0
psr.pberr = 1.0
# If calling this as a main program, then write out the new pulsars.cat file
if __name__ == '__main__' :
presto_path = os.getenv("PRESTO")
outfilename = os.path.join(presto_path, "lib", "pulsars.cat")
outfile = open(outfilename, "wb")
print("Writing %d pulsars (%d binaries) to %s" % \
(len(psrs), num_binaries, outfilename))
for ii, psr in enumerate(psrs):
try:
outfile.write(psr.pack_structs())
except:
print(ii, psr.jname)
outfile.close()
| 16,621 | 44.539726 | 676 | py |
presto | presto-master/python/presto/mpfit.py | """
Perform Levenberg-Marquardt least-squares minimization, based on MINPACK-1.
AUTHORS
The original version of this software, called LMFIT, was written in FORTRAN
   as part of the MINPACK-1 package by Jorge More' and collaborators.
Craig Markwardt converted the FORTRAN code to IDL. The information for the
IDL version is:
Craig B. Markwardt, NASA/GSFC Code 662, Greenbelt, MD 20770
[email protected]
UPDATED VERSIONs can be found on my WEB PAGE:
http://cow.physics.wisc.edu/~craigm/idl/idl.html
Mark Rivers created this Python version from Craig's IDL version.
Mark Rivers, University of Chicago
Building 434A, Argonne National Laboratory
9700 South Cass Avenue, Argonne, IL 60439
[email protected]
Updated versions can be found at http://cars.uchicago.edu/software
DESCRIPTION
MPFIT uses the Levenberg-Marquardt technique to solve the
least-squares problem. In its typical use, MPFIT will be used to
fit a user-supplied function (the "model") to user-supplied data
points (the "data") by adjusting a set of parameters. MPFIT is
based upon MINPACK-1 (LMDIF.F) by More' and collaborators.
For example, a researcher may think that a set of observed data
points is best modelled with a Gaussian curve. A Gaussian curve is
parameterized by its mean, standard deviation and normalization.
MPFIT will, within certain constraints, find the set of parameters
which best fits the data. The fit is "best" in the least-squares
sense; that is, the sum of the weighted squared differences between
the model and data is minimized.
The Levenberg-Marquardt technique is a particular strategy for
iteratively searching for the best fit. This particular
implementation is drawn from MINPACK-1 (see NETLIB), and is much faster
and more accurate than the version provided in the Scientific Python package
in Scientific.Functions.LeastSquares.
This version allows upper and lower bounding constraints to be placed on each
parameter, or the parameter can be held fixed.
The user-supplied Python function should return an array of weighted
deviations between model and data. In a typical scientific problem
the residuals should be weighted so that each deviate has a
gaussian sigma of 1.0. If X represents values of the independent
variable, Y represents a measurement for each value of X, and ERR
represents the error in the measurements, then the deviates could
be calculated as follows:
DEVIATES = (Y - F(X)) / ERR
where F is the analytical function representing the model. You are
recommended to use the convenience functions MPFITFUN and
MPFITEXPR, which are driver functions that calculate the deviates
for you. If ERR are the 1-sigma uncertainties in Y, then
TOTAL( DEVIATES^2 )
will be the total chi-squared value. MPFIT will minimize the
chi-square value. The values of X, Y and ERR are passed through
MPFIT to the user-supplied function via the FUNCTKW keyword.
Simple constraints can be placed on parameter values by using the
PARINFO keyword to MPFIT. See below for a description of this
keyword.
MPFIT does not perform more general optimization tasks. See TNMIN
instead. MPFIT is customized, based on MINPACK-1, to the
least-squares minimization problem.
USER FUNCTION
The user must define a function which returns the appropriate
values as specified above. The function should return the weighted
deviations between the model and the data. It should also return a status
flag and an optional partial derivative array. For applications which
use finite-difference derivatives -- the default -- the user
function should be declared in the following way:
   def myfunct(p, fjac=None, x=None, y=None, err=None):
# Parameter values are passed in "p"
# If fjac==None then partial derivatives should not be
# computed. It will always be None if MPFIT is called with default
# flag.
model = F(x, p)
# Non-negative status value means MPFIT should continue, negative means
# stop the calculation.
status = 0
      return([status, (y-model)/err])
See below for applications with analytical derivatives.
The keyword parameters X, Y, and ERR in the example above are
suggestive but not required. Any parameters can be passed to
MYFUNCT by using the functkw keyword to MPFIT. Use MPFITFUN and
MPFITEXPR if you need ideas on how to do that. The function *must*
accept a parameter list, P.
In general there are no restrictions on the number of dimensions in
X, Y or ERR. However the deviates *must* be returned in a
one-dimensional numpy array of type float.
User functions may also indicate a fatal error condition using the
status return described above. If status is set to a number between
-15 and -1 then MPFIT will stop the calculation and return to the caller.
ANALYTIC DERIVATIVES
In the search for the best-fit solution, MPFIT by default
calculates derivatives numerically via a finite difference
approximation. The user-supplied function need not calculate the
derivatives explicitly. However, if you desire to compute them
analytically, then the AUTODERIVATIVE=0 keyword must be passed to MPFIT.
As a practical matter, it is often sufficient and even faster to allow
MPFIT to calculate the derivatives numerically, and so
AUTODERIVATIVE=0 is not necessary.
If AUTODERIVATIVE=0 is used then the user function must check the parameter
FJAC, and if FJAC!=None then return the partial derivative array in the
return list.
   def myfunct(p, fjac=None, x=None, y=None, err=None):
# Parameter values are passed in "p"
      # If FJAC!=None then partial derivatives must be computed.
# FJAC contains an array of len(p), where each entry
# is 1 if that parameter is free and 0 if it is fixed.
model = F(x, p)
      # Non-negative status value means MPFIT should continue, negative means
# stop the calculation.
status = 0
      if (fjac is not None):
pderiv = numpy.zeros([len(x), len(p)], numpy.float64)
for j in range(len(p)):
pderiv[:,j] = FGRAD(x, p, j)
else:
pderiv = None
      return([status, (y-model)/err, pderiv])
where FGRAD(x, p, i) is a user function which must compute the
derivative of the model with respect to parameter P[i] at X. When
finite differencing is used for computing derivatives (ie, when
AUTODERIVATIVE=1), or when MPFIT needs only the errors but not the
derivatives the parameter FJAC=None.
Derivatives should be returned in the PDERIV array. PDERIV should be an m x
n array, where m is the number of data points and n is the number
of parameters. dp[i,j] is the derivative at the ith point with
respect to the jth parameter.
The derivatives with respect to fixed parameters are ignored; zero
is an appropriate value to insert for those derivatives. Upon
input to the user function, FJAC is set to a vector with the same
length as P, with a value of 1 for a parameter which is free, and a
value of zero for a parameter which is fixed (and hence no
derivative needs to be calculated).
If the data is higher than one dimensional, then the *last*
dimension should be the parameter dimension. Example: fitting a
50x50 image, "dp" should be 50x50xNPAR.
CONSTRAINING PARAMETER VALUES WITH THE PARINFO KEYWORD
The behavior of MPFIT can be modified with respect to each
parameter to be fitted. A parameter value can be fixed; simple
boundary constraints can be imposed; limitations on the parameter
changes can be imposed; properties of the automatic derivative can
be modified; and parameters can be tied to one another.
These properties are governed by the PARINFO structure, which is
passed as a keyword parameter to MPFIT.
PARINFO should be a list of dictionaries, one list entry for each parameter.
Each parameter is associated with one element of the array, in
numerical order. The dictionary can have the following keys
(none are required, keys are case insensitive):
'value' - the starting parameter value (but see the START_PARAMS
parameter for more information).
'fixed' - a boolean value, whether the parameter is to be held
fixed or not. Fixed parameters are not varied by
MPFIT, but are passed on to MYFUNCT for evaluation.
'limited' - a two-element boolean array. If the first/second
element is set, then the parameter is bounded on the
lower/upper side. A parameter can be bounded on both
sides. Both LIMITED and LIMITS must be given
together.
'limits' - a two-element float array. Gives the
parameter limits on the lower and upper sides,
respectively. Zero, one or two of these values can be
set, depending on the values of LIMITED. Both LIMITED
and LIMITS must be given together.
'parname' - a string, giving the name of the parameter. The
fitting code of MPFIT does not use this tag in any
way. However, the default iterfunct will print the
parameter name if available.
'step' - the step size to be used in calculating the numerical
derivatives. If set to zero, then the step size is
computed automatically. Ignored when AUTODERIVATIVE=0.
'mpside' - the sidedness of the finite difference when computing
numerical derivatives. This field can take four
values:
0 - one-sided derivative computed automatically
1 - one-sided derivative (f(x+h) - f(x) )/h
-1 - one-sided derivative (f(x) - f(x-h))/h
2 - two-sided derivative (f(x+h) - f(x-h))/(2*h)
Where H is the STEP parameter described above. The
"automatic" one-sided derivative method will chose a
direction for the finite difference which does not
violate any constraints. The other methods do not
perform this check. The two-sided method is in
principle more precise, but requires twice as many
function evaluations. Default: 0.
'mpmaxstep' - the maximum change to be made in the parameter
value. During the fitting process, the parameter
will never be changed by more than this value in
one iteration.
A value of 0 indicates no maximum. Default: 0.
'tied' - a string expression which "ties" the parameter to other
free or fixed parameters. Any expression involving
constants and the parameter array P are permitted.
Example: if parameter 2 is always to be twice parameter
                 1 then use the following: parinfo[2]['tied'] = '2 * p[1]'.
Since they are totally constrained, tied parameters are
considered to be fixed; no errors are computed for them.
[ NOTE: the PARNAME can't be used in expressions. ]
'mpprint' - if set to 1, then the default iterfunct will print the
parameter value. If set to 0, the parameter value
will not be printed. This tag can be used to
selectively print only a few parameter values out of
many. Default: 1 (all parameters printed)
Future modifications to the PARINFO structure, if any, will involve
adding dictionary tags beginning with the two letters "MP".
Therefore programmers are urged to avoid using tags starting with
the same letters; otherwise they are free to include their own
fields within the PARINFO structure, and they will be ignored.
PARINFO Example:
parinfo = [{'value':0., 'fixed':0, 'limited':[0,0], 'limits':[0.,0.]}]*5
parinfo[0]['fixed'] = 1
parinfo[4]['limited'][0] = 1
parinfo[4]['limits'][0] = 50.
values = [5.7, 2.2, 500., 1.5, 2000.]
for i in range(5): parinfo[i]['value']=values[i]
A total of 5 parameters, with starting values of 5.7,
2.2, 500, 1.5, and 2000 are given. The first parameter
is fixed at a value of 5.7, and the last parameter is
constrained to be above 50.
EXAMPLE
import mpfit
import numpy
   x = numpy.arange(100, dtype=numpy.float64)
   p0 = [5.7, 2.2, 500., 1.5, 2000.]
   y = ( p0[0] + p0[1]*x + p0[2]*x**2 + p0[3]*numpy.sqrt(x) +
         p0[4]*numpy.log(x))
   err = numpy.ones(x.shape, dtype=numpy.float64)  # assumed 1-sigma uncertainties
   fa = {'x':x, 'y':y, 'err':err}
   m = mpfit.mpfit(myfunct, p0, functkw=fa)
print('status = ', m.status)
if (m.status <= 0): print('error message = ', m.errmsg)
print('parameters = ', m.params)
Minimizes sum of squares of MYFUNCT. MYFUNCT is called with the X,
Y, and ERR keyword parameters that are given by FUNCTKW. The
results can be obtained from the returned object m.
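   A more complete sketch of the same fit (added for illustration; it assumes
   the x, y, err, and p0 arrays defined in the example above, and that the
   module is imported as "mpfit", so the class is reached as mpfit.mpfit):
      import numpy
      import mpfit
      def myfunct(p, fjac=None, x=None, y=None, err=None):
         model = (p[0] + p[1]*x + p[2]*x**2 + p[3]*numpy.sqrt(x) +
                  p[4]*numpy.log(x))
         status = 0
         return [status, (y - model)/err]
      m = mpfit.mpfit(myfunct, p0, functkw={'x':x, 'y':y, 'err':err})
      print('best-fit parameters = ', m.params)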
THEORY OF OPERATION
There are many specific strategies for function minimization. One
very popular technique is to use function gradient information to
realize the local structure of the function. Near a local minimum
the function value can be taylor expanded about x0 as follows:
f(x) = f(x0) + f'(x0) . (x-x0) + (1/2) (x-x0) . f''(x0) . (x-x0)
----- --------------- ------------------------------- (1)
Order 0th 1st 2nd
Here f'(x) is the gradient vector of f at x, and f''(x) is the
Hessian matrix of second derivatives of f at x. The vector x is
the set of function parameters, not the measured data vector. One
can find the minimum of f, f(xm) using Newton's method, and
arrives at the following linear equation:
f''(x0) . (xm-x0) = - f'(x0) (2)
If an inverse can be found for f''(x0) then one can solve for
(xm-x0), the step vector from the current position x0 to the new
projected minimum. Here the problem has been linearized (ie, the
gradient information is known to first order). f''(x0) is
symmetric n x n matrix, and should be positive definite.
The Levenberg - Marquardt technique is a variation on this theme.
It adds an additional diagonal term to the equation which may aid the
convergence properties:
(f''(x0) + nu I) . (xm-x0) = -f'(x0) (2a)
where I is the identity matrix. When nu is large, the overall
matrix is diagonally dominant, and the iterations follow steepest
descent. When nu is small, the iterations are quadratically
convergent.
In principle, if f''(x0) and f'(x0) are known then xm-x0 can be
determined. However the Hessian matrix is often difficult or
impossible to compute. The gradient f'(x0) may be easier to
compute, if even by finite difference techniques. So-called
quasi-Newton techniques attempt to successively estimate f''(x0)
by building up gradient information as the iterations proceed.
In the least squares problem there are further simplifications
which assist in solving eqn (2). The function to be minimized is
a sum of squares:
f = Sum(hi^2) (3)
where hi is the ith residual out of m residuals as described
above. This can be substituted back into eqn (2) after computing
the derivatives:
f' = 2 Sum(hi hi')
f'' = 2 Sum(hi' hj') + 2 Sum(hi hi'') (4)
If one assumes that the parameters are already close enough to a
minimum, then one typically finds that the second term in f'' is
negligible [or, in any case, is too difficult to compute]. Thus,
equation (2) can be solved, at least approximately, using only
gradient information.
In matrix notation, the combination of eqns (2) and (4) becomes:
hT' . h' . dx = - hT' . h (5)
Where h is the residual vector (length m), hT is its transpose, h'
is the Jacobian matrix (dimensions n x m), and dx is (xm-x0). The
user function supplies the residual vector h, and in some cases h'
when it is not found by finite differences (see MPFIT_FDJAC2,
which finds h and hT'). Even if dx is not the best absolute step
to take, it does provide a good estimate of the best *direction*,
so often a line minimization will occur along the dx vector
direction.
The method of solution employed by MINPACK is to form the Q . R
factorization of h', where Q is an orthogonal matrix such that QT .
Q = I, and R is upper right triangular. Using h' = Q . R and the
   orthogonality of Q, eqn (5) becomes
(RT . QT) . (Q . R) . dx = - (RT . QT) . h
RT . R . dx = - RT . QT . h (6)
R . dx = - QT . h
where the last statement follows because R is upper triangular.
Here, R, QT and h are known so this is a matter of solving for dx.
The routine MPFIT_QRFAC provides the QR factorization of h, with
pivoting, and MPFIT_QRSOLV provides the solution for dx.
REFERENCES
MINPACK-1, Jorge More', available from netlib (www.netlib.org).
"Optimization Software Guide," Jorge More' and Stephen Wright,
SIAM, *Frontiers in Applied Mathematics*, Number 14.
More', Jorge J., "The Levenberg-Marquardt Algorithm:
Implementation and Theory," in *Numerical Analysis*, ed. Watson,
G. A., Lecture Notes in Mathematics 630, Springer-Verlag, 1977.
MODIFICATION HISTORY
Translated from MINPACK-1 in FORTRAN, Apr-Jul 1998, CM
Copyright (C) 1997-2002, Craig Markwardt
This software is provided as is without any warranty whatsoever.
Permission to use, copy, modify, and distribute modified or
unmodified copies is granted, provided this copyright and disclaimer
are included unchanged.
Translated from MPFIT (Craig Markwardt's IDL package) to Python,
August, 2002. Mark Rivers
"""
from __future__ import print_function
from builtins import str
from builtins import range
from builtins import object
import numpy
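# Illustrative usage sketch (not part of the original module; the data
# arrays x, y, err and the starting guess p0 are hypothetical).  The user
# function returns [status, weighted residuals]:
#
#     def myfunct(p, fjac=None, x=None, y=None, err=None):
#         model = p[0] + p[1]*x + p[2]*x*x
#         return [0, (y - model) / err]
#
#     m = mpfit(myfunct, p0, functkw={'x': x, 'y': y, 'err': err})
#     print(m.status, m.params, m.perror)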
# Original FORTRAN documentation
# **********
#
# subroutine lmdif
#
# the purpose of lmdif is to minimize the sum of the squares of
# m nonlinear functions in n variables by a modification of
# the levenberg-marquardt algorithm. the user must provide a
# subroutine which calculates the functions. the jacobian is
# then calculated by a forward-difference approximation.
#
# the subroutine statement is
#
# subroutine lmdif(fcn,m,n,x,fvec,ftol,xtol,gtol,maxfev,epsfcn,
# diag,mode,factor,nprint,info,nfev,fjac,
# ldfjac,ipvt,qtf,wa1,wa2,wa3,wa4)
#
# where
#
# fcn is the name of the user-supplied subroutine which
# calculates the functions. fcn must be declared
# in an external statement in the user calling
# program, and should be written as follows.
#
# subroutine fcn(m,n,x,fvec,iflag)
# integer m,n,iflag
# double precision x(n),fvec(m)
# ----------
# calculate the functions at x and
# return this vector in fvec.
# ----------
# return
# end
#
# the value of iflag should not be changed by fcn unless
# the user wants to terminate execution of lmdif.
# in this case set iflag to a negative integer.
#
# m is a positive integer input variable set to the number
# of functions.
#
# n is a positive integer input variable set to the number
# of variables. n must not exceed m.
#
# x is an array of length n. on input x must contain
# an initial estimate of the solution vector. on output x
# contains the final estimate of the solution vector.
#
# fvec is an output array of length m which contains
# the functions evaluated at the output x.
#
# ftol is a nonnegative input variable. termination
# occurs when both the actual and predicted relative
# reductions in the sum of squares are at most ftol.
# therefore, ftol measures the relative error desired
# in the sum of squares.
#
# xtol is a nonnegative input variable. termination
# occurs when the relative error between two consecutive
# iterates is at most xtol. therefore, xtol measures the
# relative error desired in the approximate solution.
#
# gtol is a nonnegative input variable. termination
# occurs when the cosine of the angle between fvec and
# any column of the jacobian is at most gtol in absolute
# value. therefore, gtol measures the orthogonality
# desired between the function vector and the columns
# of the jacobian.
#
# maxfev is a positive integer input variable. termination
# occurs when the number of calls to fcn is at least
# maxfev by the end of an iteration.
#
# epsfcn is an input variable used in determining a suitable
# step length for the forward-difference approximation. this
# approximation assumes that the relative errors in the
# functions are of the order of epsfcn. if epsfcn is less
# than the machine precision, it is assumed that the relative
# errors in the functions are of the order of the machine
# precision.
#
# diag is an array of length n. if mode = 1 (see
# below), diag is internally set. if mode = 2, diag
# must contain positive entries that serve as
# multiplicative scale factors for the variables.
#
# mode is an integer input variable. if mode = 1, the
# variables will be scaled internally. if mode = 2,
# the scaling is specified by the input diag. other
# values of mode are equivalent to mode = 1.
#
# factor is a positive input variable used in determining the
# initial step bound. this bound is set to the product of
# factor and the euclidean norm of diag*x if nonzero, or else
# to factor itself. in most cases factor should lie in the
# interval (.1,100.). 100. is a generally recommended value.
#
# nprint is an integer input variable that enables controlled
# printing of iterates if it is positive. in this case,
# fcn is called with iflag = 0 at the beginning of the first
# iteration and every nprint iterations thereafter and
# immediately prior to return, with x and fvec available
# for printing. if nprint is not positive, no special calls
# of fcn with iflag = 0 are made.
#
# info is an integer output variable. if the user has
# terminated execution, info is set to the (negative)
# value of iflag. see description of fcn. otherwise,
# info is set as follows.
#
# info = 0 improper input parameters.
#
# info = 1 both actual and predicted relative reductions
# in the sum of squares are at most ftol.
#
# info = 2 relative error between two consecutive iterates
# is at most xtol.
#
# info = 3 conditions for info = 1 and info = 2 both hold.
#
# info = 4 the cosine of the angle between fvec and any
# column of the jacobian is at most gtol in
# absolute value.
#
# info = 5 number of calls to fcn has reached or
# exceeded maxfev.
#
# info = 6 ftol is too small. no further reduction in
# the sum of squares is possible.
#
# info = 7 xtol is too small. no further improvement in
# the approximate solution x is possible.
#
# info = 8 gtol is too small. fvec is orthogonal to the
# columns of the jacobian to machine precision.
#
# nfev is an integer output variable set to the number of
# calls to fcn.
#
# fjac is an output m by n array. the upper n by n submatrix
# of fjac contains an upper triangular matrix r with
# diagonal elements of nonincreasing magnitude such that
#
# t t t
# p *(jac *jac)*p = r *r,
#
# where p is a permutation matrix and jac is the final
# calculated jacobian. column j of p is column ipvt(j)
# (see below) of the identity matrix. the lower trapezoidal
# part of fjac contains information generated during
# the computation of r.
#
# ldfjac is a positive integer input variable not less than m
# which specifies the leading dimension of the array fjac.
#
# ipvt is an integer output array of length n. ipvt
# defines a permutation matrix p such that jac*p = q*r,
# where jac is the final calculated jacobian, q is
# orthogonal (not stored), and r is upper triangular
# with diagonal elements of nonincreasing magnitude.
# column j of p is column ipvt(j) of the identity matrix.
#
# qtf is an output array of length n which contains
# the first n elements of the vector (q transpose)*fvec.
#
# wa1, wa2, and wa3 are work arrays of length n.
#
# wa4 is a work array of length m.
#
# subprograms called
#
# user-supplied ...... fcn
#
# minpack-supplied ... dpmpar,enorm,fdjac2,lmpar,qrfac
#
# fortran-supplied ... dabs,dmax1,dmin1,dsqrt,mod
#
# argonne national laboratory. minpack project. march 1980.
# burton s. garbow, kenneth e. hillstrom, jorge j. more
#
# **********
class mpfit(object):
def __init__(self, fcn, xall=None, functkw={}, parinfo=None,
ftol=1.e-10, xtol=1.e-10, gtol=1.e-10,
damp=0., maxiter=200, factor=100., nprint=1,
iterfunct='default', iterkw={}, nocovar=0,
fastnorm=0, rescale=0, autoderivative=1, quiet=0,
diag=None, epsfcn=None, debug=0):
"""
Inputs:
fcn:
The function to be minimized. The function should return the weighted
deviations between the model and the data, as described above.
xall:
An array of starting values for each of the parameters of the model.
The number of parameters should be fewer than the number of measurements.
This parameter is optional if the parinfo keyword is used (but see
parinfo). The parinfo keyword provides a mechanism to fix or constrain
individual parameters.
Keywords:
autoderivative:
If this is set, derivatives of the function will be computed
automatically via a finite differencing procedure. If not set, then
fcn must provide the (analytical) derivatives.
Default: set (=1)
NOTE: to supply your own analytical derivatives,
explicitly pass autoderivative=0
fastnorm:
Set this keyword to select a faster algorithm to compute sum-of-square
values internally. For systems with large numbers of data points, the
standard algorithm can become prohibitively slow because it cannot be
vectorized well. By setting this keyword, MPFIT will run faster, but
it will be more prone to floating point overflows and underflows. Thus, setting
this keyword may sacrifice some stability in the fitting process.
Default: clear (=0)
ftol:
A nonnegative input variable. Termination occurs when both the actual
and predicted relative reductions in the sum of squares are at most
ftol (and status is accordingly set to 1 or 3). Therefore, ftol
measures the relative error desired in the sum of squares.
Default: 1E-10
functkw:
A dictionary which contains the parameters to be passed to the
user-supplied function specified by fcn via the standard Python
keyword dictionary mechanism. This is the way you can pass additional
data to your user-supplied function without using global variables.
Consider the following example:
if functkw = {'xval':[1.,2.,3.], 'yval':[1.,4.,9.],
'errval':[1.,1.,1.] }
then the user supplied function should be declared like this:
def myfunct(p, fjac=None, xval=None, yval=None, errval=None):
Default: {} No extra parameters are passed to the user-supplied
function.
gtol:
A nonnegative input variable. Termination occurs when the cosine of
the angle between fvec and any column of the jacobian is at most gtol
in absolute value (and status is accordingly set to 4). Therefore,
gtol measures the orthogonality desired between the function vector
and the columns of the jacobian.
Default: 1e-10
iterkw:
The keyword arguments to be passed to iterfunct via the dictionary
keyword mechanism. This should be a dictionary and is similar in
operation to FUNCTKW.
Default: {} No arguments are passed.
iterfunct:
The name of a function to be called upon each NPRINT iteration of the
MPFIT routine. It should be declared in the following way:
def iterfunct(myfunct, p, iter, fnorm, functkw=None,
parinfo=None, quiet=0, dof=None, [iterkw keywords here])
# perform custom iteration update
iterfunct must accept all three keyword parameters (FUNCTKW, PARINFO
and QUIET).
myfunct: The user-supplied function to be minimized,
p: The current set of model parameters
iter: The iteration number
functkw: The arguments to be passed to myfunct.
fnorm: The chi-squared value.
quiet: Set when no textual output should be printed.
dof: The number of degrees of freedom, normally the number of points
less the number of free parameters.
See below for documentation of parinfo.
In implementation, iterfunct can perform updates to the terminal or
graphical user interface, to provide feedback while the fit proceeds.
If the fit is to be stopped for any reason, then iterfunct should return a
status value between -15 and -1. Otherwise it should return None
(e.g. no return statement) or 0.
In principle, iterfunct should probably not modify the parameter values,
because it may interfere with the algorithm's stability. In practice it
is allowed.
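A minimal custom iterfunct might look like this (purely illustrative):
   def my_iter(myfunct, p, iter, fnorm, functkw=None,
               parinfo=None, quiet=0, dof=None):
       print(iter, fnorm, p)   # report progress
       return 0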
Default: an internal routine is used to print the parameter values.
Set iterfunct=None if there is no user-defined routine and you don't
want the internal default routine to be called.
maxiter:
The maximum number of iterations to perform. If the number is exceeded,
then the status value is set to 5 and MPFIT returns.
Default: 200 iterations
nocovar:
Set this keyword to prevent the calculation of the covariance matrix
before returning (see COVAR)
Default: clear (=0) The covariance matrix is returned
nprint:
The frequency with which iterfunct is called. A value of 1 indicates
that iterfunct is called with every iteration, while 2 indicates every
other iteration, etc. Note that several Levenberg-Marquardt attempts
can be made in a single iteration.
Default value: 1
parinfo:
Provides a mechanism for more sophisticated constraints to be placed on
parameter values. When parinfo is not passed, then it is assumed that
all parameters are free and unconstrained. Values in parinfo are never
modified during a call to MPFIT.
See description above for the structure of PARINFO.
Default value: None All parameters are free and unconstrained.
quiet:
Set this keyword when no textual output should be printed by MPFIT
damp:
A scalar number, indicating the cut-off value of residuals where
"damping" will occur. Residuals with magnitudes greater than this
number will be replaced by their hyperbolic tangent. This partially
mitigates the so-called large residual problem inherent in
least-squares solvers (as for the test problem CURVI,
http://www.maxthis.com/curviex.htm).
A value of 0 indicates no damping.
Default: 0
Note: DAMP doesn't work with autoderivative=0
xtol:
A nonnegative input variable. Termination occurs when the relative error
between two consecutive iterates is at most xtol (and status is
accordingly set to 2 or 3). Therefore, xtol measures the relative error
desired in the approximate solution.
Default: 1E-10
Outputs:
Returns an object of type mpfit. The results are attributes of this class,
e.g. mpfit.status, mpfit.errmsg, mpfit.params, mpfit.niter, mpfit.covar.
.status
An integer status code is returned. All values greater than zero can
represent success (however .status == 5 may indicate failure to
converge). It can have one of the following values:
-16
A parameter or function value has become infinite or an undefined
number. This is usually a consequence of numerical overflow in the
user's model function, which must be avoided.
-15 to -1
These are error codes that either MYFUNCT or iterfunct may return to
terminate the fitting process. Values from -15 to -1 are reserved
for the user functions and will not clash with MPFIT.
0 Improper input parameters.
1 Both actual and predicted relative reductions in the sum of squares
are at most ftol.
2 Relative error between two consecutive iterates is at most xtol
3 Conditions for status = 1 and status = 2 both hold.
4 The cosine of the angle between fvec and any column of the jacobian
is at most gtol in absolute value.
5 The maximum number of iterations has been reached.
6 ftol is too small. No further reduction in the sum of squares is
possible.
7 xtol is too small. No further improvement in the approximate solution
x is possible.
8 gtol is too small. fvec is orthogonal to the columns of the jacobian
to machine precision.
.fnorm
The value of the summed squared residuals for the returned parameter
values.
.covar
The covariance matrix for the set of parameters returned by MPFIT.
The matrix is NxN where N is the number of parameters. The square root
of the diagonal elements gives the formal 1-sigma statistical errors on
the parameters if errors were treated "properly" in fcn.
Parameter errors are also returned in .perror.
To compute the correlation matrix, pcor, use this example:
cov = mpfit.covar
pcor = cov * 0.
for i in range(n):
for j in range(n):
pcor[i,j] = cov[i,j]/numpy.sqrt(cov[i,i]*cov[j,j])
If nocovar is set or MPFIT terminated abnormally, then .covar is set to
a scalar with value None.
.errmsg
A string error or warning message is returned.
.nfev
The number of calls to MYFUNCT performed.
.niter
The number of iterations completed.
.perror
The formal 1-sigma errors in each parameter, computed from the
covariance matrix. If a parameter is held fixed, or if it touches a
boundary, then the error is reported as zero.
If the fit is unweighted (i.e. no errors were given, or the weights
were uniformly set to unity), then .perror will probably not represent
the true parameter uncertainties.
*If* you can assume that the true reduced chi-squared value is unity --
meaning that the fit is implicitly assumed to be of good quality --
then the estimated parameter uncertainties can be computed by scaling
.perror by the measured chi-squared value.
dof = len(x) - len(mpfit.params) # deg of freedom
# scaled uncertainties
pcerror = mpfit.perror * numpy.sqrt(mpfit.fnorm / dof)
"""
self.niter = 0
self.params = None
self.covar = None
self.perror = None
self.status = 0 # Invalid input flag set while we check inputs
self.debug = debug
self.errmsg = ''
self.fastnorm = fastnorm
self.nfev = 0
self.damp = damp
self.machar = machar(double=1)
machep = self.machar.machep
if (fcn is None):
self.errmsg = "Usage: parms = mpfit('myfunt', ... )"
return
if (iterfunct == 'default'): iterfunct = self.defiter
## Parameter damping doesn't work when user is providing their own
## gradients.
if (self.damp != 0) and (autoderivative == 0):
self.errmsg = 'ERROR: keywords DAMP and AUTODERIVATIVE are mutually exclusive'
return
## Parameters can either be stored in parinfo, or x. x takes precedence if it exists
if (xall is None) and (parinfo is None):
self.errmsg = 'ERROR: must pass parameters in P or PARINFO'
return
## Be sure that PARINFO is of the right type
if (parinfo is not None):
if (type(parinfo) != list):
self.errmsg = 'ERROR: PARINFO must be a list of dictionaries.'
return
else:
if (type(parinfo[0]) != dict):
self.errmsg = 'ERROR: PARINFO must be a list of dictionaries.'
return
if ((xall is not None) and (len(xall) != len(parinfo))):
self.errmsg = 'ERROR: number of elements in PARINFO and P must agree'
return
## If the parameters were not specified at the command line, then
## extract them from PARINFO
if (xall is None):
xall = self.parinfo(parinfo, 'value')
if (xall is None):
self.errmsg = 'ERROR: either P or PARINFO(*)["value"] must be supplied.'
return
## Make sure parameters are numpy arrays of type numpy.float64
xall = numpy.asarray(xall, numpy.float64)
npar = len(xall)
self.fnorm = -1.
fnorm1 = -1.
## TIED parameters?
ptied = self.parinfo(parinfo, 'tied', default='', n=npar)
self.qanytied = 0
for i in range(npar):
ptied[i] = ptied[i].strip()
if (ptied[i] != ''): self.qanytied = 1
self.ptied = ptied
## FIXED parameters ?
pfixed = self.parinfo(parinfo, 'fixed', default=0, n=npar)
pfixed = (pfixed == 1)
for i in range(npar):
pfixed[i] = pfixed[i] or (ptied[i] != '') ## Tied parameters are also effectively fixed
## Finite differencing step, absolute and relative, and sidedness of deriv.
step = self.parinfo(parinfo, 'step', default=0., n=npar)
dstep = self.parinfo(parinfo, 'relstep', default=0., n=npar)
dside = self.parinfo(parinfo, 'mpside', default=0, n=npar)
## Maximum and minimum steps allowed to be taken in one iteration
maxstep = self.parinfo(parinfo, 'mpmaxstep', default=0., n=npar)
minstep = self.parinfo(parinfo, 'mpminstep', default=0., n=npar)
qmin = minstep * 0 ## Remove minstep for now!!
qmax = maxstep != 0
wh, = numpy.nonzero(((qmin != 0.) & (qmax != 0.)) & (maxstep < minstep))
if (len(wh) > 0):
self.errmsg = 'ERROR: MPMINSTEP is greater than MPMAXSTEP'
return
wh, = numpy.nonzero((qmin!=0.) & (qmax!=0.))
qminmax = len(wh) > 0   ## True if any parameters have both min and max step limits
## Finish up the free parameters
ifree, = numpy.nonzero(pfixed != 1)
nfree = len(ifree)
if nfree == 0:
self.errmsg = 'ERROR: no free parameters'
return
dside = dside.take(ifree)
## Compose only VARYING parameters
self.params = xall ## self.params is the set of parameters to be returned
x = numpy.take(self.params, ifree) ## x is the set of free parameters
## LIMITED parameters ?
limited = self.parinfo(parinfo, 'limited', default=[0,0], n=npar)
limits = self.parinfo(parinfo, 'limits', default=[0.,0.], n=npar)
if (limited is not None) and (limits is not None):
## Error checking on limits in parinfo
wh, = numpy.nonzero((limited[:,0] & (xall < limits[:,0])) |
(limited[:,1] & (xall > limits[:,1])))
if (len(wh) > 0):
self.errmsg = 'ERROR: parameters are not within PARINFO limits'
return
wh, = numpy.nonzero((limited[:,0] & limited[:,1]) &
(limits[:,0] >= limits[:,1]) &
(pfixed == 0))
if (len(wh) > 0):
self.errmsg = 'ERROR: PARINFO parameter limits are not consistent'
return
## Transfer structure values to local variables
qulim = numpy.take(limited[:,1], ifree)
ulim = numpy.take(limits [:,1], ifree)
qllim = numpy.take(limited[:,0], ifree)
llim = numpy.take(limits [:,0], ifree)
wh, = numpy.nonzero((qulim!=0.) | (qllim!=0.))
if (len(wh) > 0): qanylim = 1
else: qanylim = 0
else:
## Fill in local variables with dummy values
qulim = numpy.zeros(nfree)
ulim = x * 0.
qllim = qulim
llim = x * 0.
qanylim = 0
n = len(x)
## Check input parameters for errors
if ((n < 0) or (ftol <= 0) or (xtol <= 0) or (gtol <= 0)
or (maxiter <= 0) or (factor <= 0)):
self.errmsg = 'ERROR: input keywords are inconsistent'
return
if (rescale != 0):
self.errmsg = 'ERROR: DIAG parameter scales are inconsistent'
if (len(diag) < n): return
wh, = numpy.nonzero(diag <= 0)
if (len(wh) > 0): return
self.errmsg = ''
# Make sure x is a numpy array of type numpy.float64
x = numpy.asarray(x, numpy.float64)
[self.status, fvec] = self.call(fcn, self.params, functkw)
if (self.status < 0):
self.errmsg = 'ERROR: first call to "'+str(fcn)+'" failed'
return
m = len(fvec)
if (m < n):
self.errmsg = 'ERROR: number of parameters must not exceed data'
return
self.fnorm = self.enorm(fvec)
## Initialize Levenberg-Marquardt parameter and iteration counter
par = 0.
self.niter = 1
qtf = x * 0.
self.status = 0
## Beginning of the outer loop
while(1):
## If requested, call fcn to enable printing of iterates
numpy.put(self.params, ifree, x)
if (self.qanytied): self.params = self.tie(self.params, ptied)
if (nprint > 0) and (iterfunct is not None):
if (((self.niter-1) % nprint) == 0):
mperr = 0
xnew0 = self.params.copy()
dof = max(len(fvec) - len(x), 0)
status = iterfunct(fcn, self.params, self.niter, self.fnorm**2,
functkw=functkw, parinfo=parinfo, quiet=quiet,
dof=dof, **iterkw)
if (status is not None): self.status = status
## Check for user termination
if (self.status < 0):
self.errmsg = 'WARNING: premature termination by ' + str(iterfunct)
return
## If parameters were changed (grrr..) then re-tie
if (max(abs(xnew0-self.params)) > 0):
if (self.qanytied): self.params = self.tie(self.params, ptied)
x = numpy.take(self.params, ifree)
## Calculate the jacobian matrix
self.status = 2
catch_msg = 'calling MPFIT_FDJAC2'
fjac = self.fdjac2(fcn, x, fvec, step, qulim, ulim, dside,
epsfcn=epsfcn,
autoderivative=autoderivative, dstep=dstep,
functkw=functkw, ifree=ifree, xall=self.params)
if (fjac is None):
self.errmsg = 'WARNING: premature termination by FDJAC2'
return
## Determine if any of the parameters are pegged at the limits
if (qanylim):
catch_msg = 'zeroing derivatives of pegged parameters'
whlpeg, = numpy.nonzero(qllim & (x == llim))
nlpeg = len(whlpeg)
whupeg, = numpy.nonzero(qulim & (x == ulim))
nupeg = len(whupeg)
## See if any "pegged" values should keep their derivatives
if (nlpeg > 0):
## Total derivative of sum wrt lower pegged parameters
for i in range(nlpeg):
sum = numpy.sum(fvec * fjac[:,whlpeg[i]])
if (sum > 0): fjac[:,whlpeg[i]] = 0
if (nupeg > 0):
## Total derivative of sum wrt upper pegged parameters
for i in range(nupeg):
sum = numpy.sum(fvec * fjac[:,whupeg[i]])
if (sum < 0): fjac[:,whupeg[i]] = 0
## Compute the QR factorization of the jacobian
[fjac, ipvt, wa1, wa2] = self.qrfac(fjac, pivot=1)
## On the first iteration if "diag" is unspecified, scale
## according to the norms of the columns of the initial jacobian
catch_msg = 'rescaling diagonal elements'
if (self.niter == 1):
if ((rescale==0) or (len(diag) < n)):
diag = wa2.copy()
wh, = numpy.nonzero(diag == 0)
numpy.put(diag, wh, 1.)
## On the first iteration, calculate the norm of the scaled x
## and initialize the step bound delta
wa3 = diag * x
xnorm = self.enorm(wa3)
delta = factor*xnorm
if (delta == 0.): delta = factor
## Form (q transpose)*fvec and store the first n components in qtf
catch_msg = 'forming (q transpose)*fvec'
wa4 = fvec.copy()
for j in range(n):
lj = ipvt[j]
temp3 = fjac[j,lj]
if (temp3 != 0):
fj = fjac[j:,lj]
wj = wa4[j:]
## *** optimization wa4(j:*)
wa4[j:] = wj - fj * numpy.sum(fj*wj) / temp3
fjac[j,lj] = wa1[j]
qtf[j] = wa4[j]
## From this point on, only the square matrix, consisting of the
## triangle of R, is needed.
fjac = fjac[0:n, 0:n]
fjac.shape = [n, n]
temp = fjac.copy()
for i in range(n):
temp[:,i] = fjac[:, ipvt[i]]
fjac = temp.copy()
## Check for overflow. This should be a cheap test here since FJAC
## has been reduced to a (small) square matrix, and the test is
## O(N^2).
#wh = where(finite(fjac) EQ 0, ct)
#if ct GT 0 then goto, FAIL_OVERFLOW
## Compute the norm of the scaled gradient
catch_msg = 'computing the scaled gradient'
gnorm = 0.
if (self.fnorm != 0):
for j in range(n):
l = ipvt[j]
if (wa2[l] != 0):
sum = numpy.sum(fjac[0:j+1,j]*qtf[0:j+1])/self.fnorm
gnorm = max([gnorm,abs(sum/wa2[l])])
## Test for convergence of the gradient norm
if (gnorm <= gtol):
self.status = 4
return
## Rescale if necessary
if (rescale == 0):
diag = numpy.choose(diag>wa2, (wa2, diag))
## Beginning of the inner loop
while(1):
## Determine the levenberg-marquardt parameter
catch_msg = 'calculating LM parameter (MPFIT_)'
[fjac, par, wa1, wa2] = self.lmpar(fjac, ipvt, diag, qtf,
delta, wa1, wa2, par=par)
## Store the direction p and x+p. Calculate the norm of p
wa1 = -wa1
if (qanylim == 0) and (qminmax == 0):
## No parameter limits, so just move to new position WA2
alpha = 1.
wa2 = x + wa1
else:
## Respect the limits. If a step were to go out of bounds, then
## we should take a step in the same direction but shorter distance.
## The step should take us right to the limit in that case.
alpha = 1.
if (qanylim):
## Do not allow any steps out of bounds
catch_msg = 'checking for a step out of bounds'
if (nlpeg > 0):
numpy.put(wa1, whlpeg, numpy.clip(
numpy.take(wa1, whlpeg), 0., max(wa1)))
if (nupeg > 0):
numpy.put(wa1, whupeg, numpy.clip(
numpy.take(wa1, whupeg), min(wa1), 0.))
dwa1 = abs(wa1) > machep
whl, = numpy.nonzero(((dwa1!=0.) & qllim) & ((x + wa1) < llim))
if (len(whl) > 0):
t = ((numpy.take(llim, whl) - numpy.take(x, whl)) /
numpy.take(wa1, whl))
alpha = min(alpha, min(t))
whu, = numpy.nonzero(((dwa1!=0.) & qulim) & ((x + wa1) > ulim))
if (len(whu) > 0):
t = ((numpy.take(ulim, whu) - numpy.take(x, whu)) /
numpy.take(wa1, whu))
alpha = min(alpha, min(t))
## Obey any max step values.
if (qminmax):
nwa1 = wa1 * alpha
whmax, = numpy.nonzero((qmax != 0.) & (maxstep > 0))
if (len(whmax) > 0):
mrat = max(numpy.take(nwa1, whmax) /
numpy.take(maxstep, whmax))
if (mrat > 1): alpha = alpha / mrat
## Scale the resulting vector
wa1 = wa1 * alpha
wa2 = x + wa1
## Adjust the final output values. If the step put us exactly
## on a boundary, make sure it is exact.
wh, = numpy.nonzero((qulim!=0.) & (wa2 >= ulim*(1-machep)))
if (len(wh) > 0): numpy.put(wa2, wh, numpy.take(ulim, wh))
wh, = numpy.nonzero((qllim!=0.) & (wa2 <= llim*(1+machep)))
if (len(wh) > 0): numpy.put(wa2, wh, numpy.take(llim, wh))
# endelse
wa3 = diag * wa1
pnorm = self.enorm(wa3)
## On the first iteration, adjust the initial step bound
if (self.niter == 1): delta = min([delta,pnorm])
numpy.put(self.params, ifree, wa2)
## Evaluate the function at x+p and calculate its norm
mperr = 0
catch_msg = 'calling '+str(fcn)
[self.status, wa4] = self.call(fcn, self.params, functkw)
if (self.status < 0):
self.errmsg = 'WARNING: premature termination by "'+str(fcn)+'"'
return
fnorm1 = self.enorm(wa4)
## Compute the scaled actual reduction
catch_msg = 'computing convergence criteria'
actred = -1.
if ((0.1 * fnorm1) < self.fnorm): actred = - (fnorm1/self.fnorm)**2 + 1.
## Compute the scaled predicted reduction and the scaled directional
## derivative
for j in range(n):
wa3[j] = 0
wa3[0:j+1] = wa3[0:j+1] + fjac[0:j+1,j]*wa1[ipvt[j]]
## Remember, alpha is the fraction of the full LM step actually
## taken
temp1 = self.enorm(alpha*wa3)/self.fnorm
temp2 = (numpy.sqrt(alpha*par)*pnorm)/self.fnorm
prered = temp1*temp1 + (temp2*temp2)/0.5
dirder = -(temp1*temp1 + temp2*temp2)
## Compute the ratio of the actual to the predicted reduction.
ratio = 0.
if (prered != 0): ratio = actred/prered
## Update the step bound
if (ratio <= 0.25):
if (actred >= 0): temp = .5
else: temp = .5*dirder/(dirder + .5*actred)
if ((0.1*fnorm1) >= self.fnorm) or (temp < 0.1): temp = 0.1
delta = temp*min([delta,pnorm/0.1])
par = par/temp
else:
if (par == 0) or (ratio >= 0.75):
delta = pnorm/.5
par = .5*par
## Test for successful iteration
if (ratio >= 0.0001):
## Successful iteration. Update x, fvec, and their norms
x = wa2
wa2 = diag * x
fvec = wa4
xnorm = self.enorm(wa2)
self.fnorm = fnorm1
self.niter = self.niter + 1
## Tests for convergence
if ((abs(actred) <= ftol) and (prered <= ftol)
and (0.5 * ratio <= 1)): self.status = 1
if delta <= xtol*xnorm: self.status = 2
if ((abs(actred) <= ftol) and (prered <= ftol)
and (0.5 * ratio <= 1) and (self.status == 2)): self.status = 3
if (self.status != 0): break
## Tests for termination and stringent tolerances
if (self.niter >= maxiter): self.status = 5
if ((abs(actred) <= machep) and (prered <= machep)
and (0.5*ratio <= 1)): self.status = 6
if delta <= machep*xnorm: self.status = 7
if gnorm <= machep: self.status = 8
if (self.status != 0): break
## End of inner loop. Repeat if iteration unsuccessful
if (ratio >= 0.0001): break
## Check for over/underflow - SKIP FOR NOW
##wh = where(finite(wa1) EQ 0 OR finite(wa2) EQ 0 OR finite(x) EQ 0, ct)
##if ct GT 0 OR finite(ratio) EQ 0 then begin
## errmsg = ('ERROR: parameter or function value(s) have become '+$
## 'infinite# check model function for over- '+$
## 'and underflow')
## self.status = -16
## break
if (self.status != 0): break;
## End of outer loop.
catch_msg = 'in the termination phase'
## Termination, either normal or user imposed.
if (len(self.params) == 0):
return
if (nfree == 0): self.params = xall.copy()
else: numpy.put(self.params, ifree, x)
if (nprint > 0) and (self.status > 0):
catch_msg = 'calling ' + str(fcn)
[status, fvec] = self.call(fcn, self.params, functkw)
catch_msg = 'in the termination phase'
self.fnorm = self.enorm(fvec)
if ((self.fnorm is not None) and (fnorm1 is not None)):
self.fnorm = max([self.fnorm, fnorm1])
self.fnorm = self.fnorm**2.
self.covar = None
self.perror = None
## (very carefully) set the covariance matrix COVAR
if ((self.status > 0) and (nocovar==0) and (n is not None)
and (fjac is not None) and (ipvt is not None)):
sz = numpy.shape(fjac)
if ((n > 0) and (sz[0] >= n) and (sz[1] >= n)
and (len(ipvt) >= n)):
catch_msg = 'computing the covariance matrix'
cv = self.calc_covar(fjac[0:n,0:n], ipvt[0:n])
cv.shape = [n, n]
nn = len(xall)
## Fill in actual covariance matrix, accounting for fixed
## parameters.
self.covar = numpy.zeros([nn, nn], numpy.float64)
for i in range(n):
indices = ifree*nn + ifree[i]   ## flat (row-major) indices of covar[ifree, ifree[i]]
numpy.put(self.covar, indices, cv[:,i])
## Compute errors in parameters
catch_msg = 'computing parameter errors'
self.perror = numpy.zeros(nn, numpy.float64)
d = numpy.diagonal(self.covar).copy()
wh, = numpy.nonzero(d >= 0)
if len(wh) > 0:
numpy.put(self.perror, wh, numpy.sqrt(numpy.take(d, wh)))
return
## Default procedure to be called every iteration. It simply prints
## the parameter values.
def defiter(self, fcn, x, iter, fnorm=None, functkw=None,
quiet=0, iterstop=None, parinfo=None,
format=None, pformat='%.10g', dof=1):
if (self.debug): print('Entering defiter...')
if (quiet): return
if (fnorm is None):
[status, fvec] = self.call(fcn, x, functkw)
fnorm = self.enorm(fvec)**2
## Determine which parameters to print
nprint = len(x)
print("Iter ", ('%6i' % iter)," CHI-SQUARE = ",('%.10g' % fnorm)," DOF = ", ('%i' % dof))
for i in range(nprint):
if (parinfo is not None) and ('parname' in parinfo[i]):
p = ' ' + parinfo[i]['parname'] + ' = '
else:
p = ' P' + str(i) + ' = '
if (parinfo is not None) and ('mpprint' in parinfo[i]):
iprint = parinfo[i]['mpprint']
else:
iprint = 1
if (iprint):
print(p + (pformat % x[i]) + ' ')
return(0)
## DO_ITERSTOP:
## if keyword_set(iterstop) then begin
## k = get_kbrd(0)
## if k EQ string(byte(7)) then begin
## message, 'WARNING: minimization not complete', /info
## print, 'Do you want to terminate this procedure? (y/n)', $
## format='(A,$)'
## k = ''
## read, k
## if strupcase(strmid(k,0,1)) EQ 'Y' then begin
## message, 'WARNING: Procedure is terminating.', /info
## mperr = -1
## endif
## endif
## endif
## Procedure to parse the parameter values in PARINFO, which is a list of dictionaries
def parinfo(self, parinfo=None, key='a', default=None, n=0):
if (self.debug): print('Entering parinfo...')
if (n == 0) and (parinfo is not None): n = len(parinfo)
if (n == 0):
values = default
return(values)
values = []
for i in range(n):
if ((parinfo is not None) and (key in parinfo[i])):
values.append(parinfo[i][key])
else:
values.append(default)
# Convert to numeric arrays if possible
test = default
if (type(default) == list): test=default[0]
if isinstance(test, int):
values = numpy.asarray(values, numpy.int64)
elif isinstance(test, float):
values = numpy.asarray(values, numpy.float64)
return(values)
## Call user function or procedure, with _EXTRA or not, with
## derivatives or not.
def call(self, fcn, x, functkw, fjac=None):
if (self.debug): print('Entering call...')
if (self.qanytied): x = self.tie(x, self.ptied)
self.nfev = self.nfev + 1
if (fjac is None):
[status, f] = fcn(x, fjac=fjac, **functkw)
if (self.damp > 0):
## Apply the damping if requested. This replaces the residuals
## with their hyperbolic tangent. Thus residuals larger than
## DAMP are essentially clipped.
f = numpy.tanh(f/self.damp)
return([status, f])
else:
return(fcn(x, fjac=fjac, **functkw))
def enorm(self, vec):
if (self.debug): print('Entering enorm...')
## NOTE: it turns out that, for systems that have a lot of data
## points, this routine is a big computing bottleneck. The extended
## computations that need to be done cannot be effectively
## vectorized. The introduction of the FASTNORM configuration
## parameter allows the user to select a faster routine, which is
## based on TOTAL() alone.
# Very simple-minded sum-of-squares
if (self.fastnorm):
ans = numpy.sqrt(numpy.sum(vec*vec))
else:
agiant = self.machar.rgiant / len(vec)
adwarf = self.machar.rdwarf * len(vec)
## This is hopefully a compromise between speed and robustness.
## Need to do this because of the possibility of over- or underflow.
mx = max(vec)
mn = min(vec)
mx = max(abs(mx), abs(mn))
if mx == 0: return(vec[0]*0.)
if mx > agiant or mx < adwarf:
ans = mx * numpy.sqrt(numpy.sum((vec/mx)*(vec/mx)))
else:
ans = numpy.sqrt(numpy.sum(vec*vec))
return(ans)
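# Illustrative note on enorm above (not part of the original code): for
# vec = numpy.array([3e200, 4e200]), numpy.sqrt(numpy.sum(vec*vec))
# overflows to inf, whereas the rescaled form used above,
# mx * numpy.sqrt(numpy.sum((vec/mx)*(vec/mx))) with mx = 4e200,
# returns the correct value 5e200.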
def fdjac2(self, fcn, x, fvec, step=None, ulimited=None, ulimit=None, dside=None,
epsfcn=None, autoderivative=1,
functkw=None, xall=None, ifree=None, dstep=None):
if (self.debug): print('Entering fdjac2...')
machep = self.machar.machep
if epsfcn is None: epsfcn = machep
if xall is None: xall = x
if ifree is None: ifree = numpy.arange(len(xall))
if step is None: step = x * 0.
nall = len(xall)
eps = numpy.sqrt(max([epsfcn, machep]))
m = len(fvec)
n = len(x)
## Compute analytical derivative if requested
if (autoderivative == 0):
mperr = 0
fjac = numpy.zeros(nall, numpy.float64)
numpy.put(fjac, ifree, 1.0) ## Specify which parameters need derivatives
[status, fp, pderiv] = self.call(fcn, xall, functkw, fjac=fjac)
fjac = pderiv
if fjac.shape != (m, nall):
print('ERROR: Derivative matrix was not computed properly.')
return(None)
## This definition is consistent with CURVEFIT
## Sign error found (thanks Jesus Fernandez <[email protected]>)
fjac = -fjac
## Select only the free parameters
if len(ifree) < nall:
fjac = fjac[:,ifree]
fjac.shape = [m, n]
return(fjac)
fjac = numpy.zeros([m, n], numpy.float64)
h = eps * abs(x)
## if STEP is given, use that
if step is not None:
stepi = numpy.take(step, ifree)
wh, = numpy.nonzero(stepi > 0)
if (len(wh) > 0): numpy.put(h, wh, numpy.take(stepi, wh))
## if relative step is given, use that
if (len(dstep) > 0):
dstepi = numpy.take(dstep, ifree)
wh, = numpy.nonzero(dstepi > 0)
if len(wh) > 0: numpy.put(h, wh, abs(numpy.take(dstepi,wh)*numpy.take(x,wh)))
## In case any of the step values are zero
wh, = numpy.nonzero(h == 0)
if len(wh) > 0: numpy.put(h, wh, eps)
## Reverse the sign of the step if we are up against the parameter
## limit, or if the user requested it.
mask = dside == -1
if len(ulimited) > 0 and len(ulimit) > 0:
mask = mask | (ulimited & (x > ulimit-h))
wh, = numpy.nonzero(mask)
if len(wh) > 0: numpy.put(h, wh, -numpy.take(h, wh))
## Loop through parameters, computing the derivative for each
for j in range(n):
xp = xall.copy()
xp[ifree[j]] = xp[ifree[j]] + h[j]
[status, fp] = self.call(fcn, xp, functkw)
if (status < 0): return(None)
if abs(dside[j]) <= 1:
## COMPUTE THE ONE-SIDED DERIVATIVE
## Note optimization fjac(0:*,j)
fjac[0:,j] = (fp-fvec)/h[j]
else:
## COMPUTE THE TWO-SIDED DERIVATIVE
xp[ifree[j]] = xall[ifree[j]] - h[j]
mperr = 0
[status, fm] = self.call(fcn, xp, functkw)
if (status < 0): return(None)
## Note optimization fjac(0:*,j)
fjac[0:,j] = (fp-fm)/(2*h[j])
return(fjac)
# Original FORTRAN documentation
# **********
#
# subroutine qrfac
#
# this subroutine uses householder transformations with column
# pivoting (optional) to compute a qr factorization of the
# m by n matrix a. that is, qrfac determines an orthogonal
# matrix q, a permutation matrix p, and an upper trapezoidal
# matrix r with diagonal elements of nonincreasing magnitude,
# such that a*p = q*r. the householder transformation for
# column k, k = 1,2,...,min(m,n), is of the form
#
# t
# i - (1/u(k))*u*u
#
# where u has zeros in the first k-1 positions. the form of
# this transformation and the method of pivoting first
# appeared in the corresponding linpack subroutine.
#
# the subroutine statement is
#
# subroutine qrfac(m,n,a,lda,pivot,ipvt,lipvt,rdiag,acnorm,wa)
#
# where
#
# m is a positive integer input variable set to the number
# of rows of a.
#
# n is a positive integer input variable set to the number
# of columns of a.
#
# a is an m by n array. on input a contains the matrix for
# which the qr factorization is to be computed. on output
# the strict upper trapezoidal part of a contains the strict
# upper trapezoidal part of r, and the lower trapezoidal
# part of a contains a factored form of q (the non-trivial
# elements of the u vectors described above).
#
# lda is a positive integer input variable not less than m
# which specifies the leading dimension of the array a.
#
# pivot is a logical input variable. if pivot is set true,
# then column pivoting is enforced. if pivot is set false,
# then no column pivoting is done.
#
# ipvt is an integer output array of length lipvt. ipvt
# defines the permutation matrix p such that a*p = q*r.
# column j of p is column ipvt(j) of the identity matrix.
# if pivot is false, ipvt is not referenced.
#
# lipvt is a positive integer input variable. if pivot is false,
# then lipvt may be as small as 1. if pivot is true, then
# lipvt must be at least n.
#
# rdiag is an output array of length n which contains the
# diagonal elements of r.
#
# acnorm is an output array of length n which contains the
# norms of the corresponding columns of the input matrix a.
# if this information is not needed, then acnorm can coincide
# with rdiag.
#
# wa is a work array of length n. if pivot is false, then wa
# can coincide with rdiag.
#
# subprograms called
#
# minpack-supplied ... dpmpar,enorm
#
# fortran-supplied ... dmax1,dsqrt,min0
#
# argonne national laboratory. minpack project. march 1980.
# burton s. garbow, kenneth e. hillstrom, jorge j. more
#
# **********
# NOTE: in IDL the factors appear slightly differently than described
# above. The matrix A is still m x n where m >= n.
#
# The "upper" triangular matrix R is actually stored in the strict
# lower left triangle of A under the standard notation of IDL.
#
# The reflectors that generate Q are in the upper trapezoid of A upon
# output.
#
# EXAMPLE: decompose the matrix [[9.,2.,6.],[4.,8.,7.]]
# aa = [[9.,2.,6.],[4.,8.,7.]]
# mpfit_qrfac, aa, aapvt, rdiag, aanorm
# IDL> print, aa
# 1.81818* 0.181818* 0.545455*
# -8.54545+ 1.90160* 0.432573*
# IDL> print, rdiag
# -11.0000+ -7.48166+
#
# The components marked with a * are the components of the
# reflectors, and those marked with a + are components of R.
#
# To reconstruct Q and R we proceed as follows. First R.
# r = fltarr(m, n)
# for i = 0, n-1 do r(0:i,i) = aa(0:i,i) # fill in lower diag
# r(lindgen(n)*(m+1)) = rdiag
#
# Next, Q, which are composed from the reflectors. Each reflector v
# is taken from the upper trapezoid of aa, and converted to a matrix
# via (I - 2 vT . v / (v . vT)).
#
# hh = ident ## identity matrix
# for i = 0, n-1 do begin
# v = aa(*,i) & if i GT 0 then v(0:i-1) = 0 ## extract reflector
# hh = hh ## (ident - 2*(v # v)/total(v * v)) ## generate matrix
# endfor
#
# Test the result:
# IDL> print, hh ## transpose(r)
# 9.00000 4.00000
# 2.00000 8.00000
# 6.00000 7.00000
#
# Note that it is usually never necessary to form the Q matrix
# explicitly, and MPFIT does not.
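# A rough numpy analogue of the reconstruction check above (illustrative
# only; note that mpfit's qrfac stores R and the Householder reflectors
# in-place, unlike numpy.linalg.qr):
#
#     import numpy
#     a = numpy.array([[9., 4.], [2., 8.], [6., 7.]])   # the 3x2 example
#     q, r = numpy.linalg.qr(a)
#     numpy.allclose(q.dot(r), a)                       # -> True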
def qrfac(self, a, pivot=0):
if (self.debug): print('Entering qrfac...')
machep = self.machar.machep
sz = numpy.shape(a)
m = sz[0]
n = sz[1]
## Compute the initial column norms and initialize arrays
acnorm = numpy.zeros(n, numpy.float64)
for j in range(n):
acnorm[j] = self.enorm(a[:,j])
rdiag = acnorm.copy()
wa = rdiag.copy()
ipvt = numpy.arange(n)
## Reduce a to r with householder transformations
minmn = min([m,n])
for j in range(minmn):
if (pivot != 0):
## Bring the column of largest norm into the pivot position
rmax = max(rdiag[j:])
kmax, = numpy.nonzero(rdiag[j:] == rmax)
ct = len(kmax)
kmax = kmax + j
if ct > 0:
kmax = kmax[0]
## Exchange rows via the pivot only. Avoid actually exchanging
## the rows, in case there is lots of memory transfer. The
## exchange occurs later, within the body of MPFIT, after the
## extraneous columns of the matrix have been shed.
if kmax != j:
temp = ipvt[j] ; ipvt[j] = ipvt[kmax] ; ipvt[kmax] = temp
rdiag[kmax] = rdiag[j]
wa[kmax] = wa[j]
## Compute the householder transformation to reduce the jth
## column of A to a multiple of the jth unit vector
lj = ipvt[j]
ajj = a[j:,lj]
ajnorm = self.enorm(ajj)
if ajnorm == 0: break
if a[j,j] < 0: ajnorm = -ajnorm
ajj = ajj / ajnorm
ajj[0] = ajj[0] + 1
## *** Note optimization a(j:*,j)
a[j:,lj] = ajj
## Apply the transformation to the remaining columns
## and update the norms
## NOTE to SELF: tried to optimize this by removing the loop,
## but it actually got slower. Reverted to "for" loop to keep
## it simple.
if (j+1 < n):
for k in range(j+1, n):
lk = ipvt[k]
ajk = a[j:,lk]
## *** Note optimization a(j:*,lk)
## (corrected 20 Jul 2000)
if a[j,lj] != 0:
a[j:,lk] = ajk - ajj * numpy.sum(ajk*ajj)/a[j,lj]
if ((pivot != 0) and (rdiag[k] != 0)):
temp = a[j,lk]/rdiag[k]
rdiag[k] = rdiag[k] * numpy.sqrt(max((1.-temp**2), 0.))
temp = rdiag[k]/wa[k]
if ((0.05*temp*temp) <= machep):
rdiag[k] = self.enorm(a[j+1:,lk])
wa[k] = rdiag[k]
rdiag[j] = -ajnorm
return([a, ipvt, rdiag, acnorm])
# Original FORTRAN documentation
# **********
#
# subroutine qrsolv
#
# given an m by n matrix a, an n by n diagonal matrix d,
# and an m-vector b, the problem is to determine an x which
# solves the system
#
# a*x = b , d*x = 0 ,
#
# in the least squares sense.
#
# this subroutine completes the solution of the problem
# if it is provided with the necessary information from the
# factorization, with column pivoting, of a. that is, if
# a*p = q*r, where p is a permutation matrix, q has orthogonal
# columns, and r is an upper triangular matrix with diagonal
# elements of nonincreasing magnitude, then qrsolv expects
# the full upper triangle of r, the permutation matrix p,
# and the first n components of (q transpose)*b. the system
# a*x = b, d*x = 0, is then equivalent to
#
# t t
# r*z = q *b , p *d*p*z = 0 ,
#
# where x = p*z. if this system does not have full rank,
# then a least squares solution is obtained. on output qrsolv
# also provides an upper triangular matrix s such that
#
# t t t
# p *(a *a + d*d)*p = s *s .
#
# s is computed within qrsolv and may be of separate interest.
#
# the subroutine statement is
#
# subroutine qrsolv(n,r,ldr,ipvt,diag,qtb,x,sdiag,wa)
#
# where
#
# n is a positive integer input variable set to the order of r.
#
# r is an n by n array. on input the full upper triangle
# must contain the full upper triangle of the matrix r.
# on output the full upper triangle is unaltered, and the
# strict lower triangle contains the strict upper triangle
# (transposed) of the upper triangular matrix s.
#
# ldr is a positive integer input variable not less than n
# which specifies the leading dimension of the array r.
#
# ipvt is an integer input array of length n which defines the
# permutation matrix p such that a*p = q*r. column j of p
# is column ipvt(j) of the identity matrix.
#
# diag is an input array of length n which must contain the
# diagonal elements of the matrix d.
#
# qtb is an input array of length n which must contain the first
# n elements of the vector (q transpose)*b.
#
# x is an output array of length n which contains the least
# squares solution of the system a*x = b, d*x = 0.
#
# sdiag is an output array of length n which contains the
# diagonal elements of the upper triangular matrix s.
#
# wa is a work array of length n.
#
# subprograms called
#
# fortran-supplied ... dabs,dsqrt
#
# argonne national laboratory. minpack project. march 1980.
# burton s. garbow, kenneth e. hillstrom, jorge j. more
#
def qrsolv(self, r, ipvt, diag, qtb, sdiag):
if (self.debug): print('Entering qrsolv...')
sz = numpy.shape(r)
m = sz[0]
n = sz[1]
## copy r and (q transpose)*b to preserve input and initialize s.
## in particular, save the diagonal elements of r in x.
for j in range(n):
r[j:n,j] = r[j,j:n]
x = numpy.diagonal(r).copy()
wa = qtb.copy()
## Eliminate the diagonal matrix d using a givens rotation
for j in range(n):
l = ipvt[j]
if (diag[l] == 0): break
sdiag[j:] = 0
sdiag[j] = diag[l]
## The transformations to eliminate the row of d modify only a
## single element of (q transpose)*b beyond the first n, which
## is initially zero.
qtbpj = 0.
for k in range(j,n):
if (sdiag[k] == 0): break
if (abs(r[k,k]) < abs(sdiag[k])):
cotan = r[k,k]/sdiag[k]
sine = 0.5/numpy.sqrt(.25 + .25*cotan*cotan)
cosine = sine*cotan
else:
tang = sdiag[k]/r[k,k]
cosine = 0.5/numpy.sqrt(.25 + .25*tang*tang)
sine = cosine*tang
## Compute the modified diagonal element of r and the
## modified element of ((q transpose)*b,0).
r[k,k] = cosine*r[k,k] + sine*sdiag[k]
temp = cosine*wa[k] + sine*qtbpj
qtbpj = -sine*wa[k] + cosine*qtbpj
wa[k] = temp
## Accumulate the transformation in the row of s
if (n > k+1):
temp = cosine*r[k+1:n,k] + sine*sdiag[k+1:n]
sdiag[k+1:n] = -sine*r[k+1:n,k] + cosine*sdiag[k+1:n]
r[k+1:n,k] = temp
sdiag[j] = r[j,j]
r[j,j] = x[j]
## Solve the triangular system for z. If the system is singular
## then obtain a least squares solution
nsing = n
wh, = numpy.nonzero(sdiag == 0)
if (len(wh) > 0):
nsing = wh[0]
wa[nsing:] = 0
if (nsing >= 1):
wa[nsing-1] = wa[nsing-1]/sdiag[nsing-1] ## Degenerate case
## *** Reverse loop ***
for j in range(nsing-2,-1,-1):
sum = numpy.sum(r[j+1:nsing,j]*wa[j+1:nsing])
wa[j] = (wa[j]-sum)/sdiag[j]
## Permute the components of z back to components of x
numpy.put(x, ipvt, wa)
return(r, x, sdiag)
# Original FORTRAN documentation
#
# subroutine lmpar
#
# given an m by n matrix a, an n by n nonsingular diagonal
# matrix d, an m-vector b, and a positive number delta,
# the problem is to determine a value for the parameter
# par such that if x solves the system
#
# a*x = b , sqrt(par)*d*x = 0 ,
#
# in the least squares sense, and dxnorm is the euclidean
# norm of d*x, then either par is zero and
#
# (dxnorm-delta) .le. 0.1*delta ,
#
# or par is positive and
#
# abs(dxnorm-delta) .le. 0.1*delta .
#
# this subroutine completes the solution of the problem
# if it is provided with the necessary information from the
# qr factorization, with column pivoting, of a. that is, if
# a*p = q*r, where p is a permutation matrix, q has orthogonal
# columns, and r is an upper triangular matrix with diagonal
# elements of nonincreasing magnitude, then lmpar expects
# the full upper triangle of r, the permutation matrix p,
# and the first n components of (q transpose)*b. on output
# lmpar also provides an upper triangular matrix s such that
#
# t t t
# p *(a *a + par*d*d)*p = s *s .
#
# s is employed within lmpar and may be of separate interest.
#
# only a few iterations are generally needed for convergence
# of the algorithm. if, however, the limit of 10 iterations
# is reached, then the output par will contain the best
# value obtained so far.
#
# the subroutine statement is
#
# subroutine lmpar(n,r,ldr,ipvt,diag,qtb,delta,par,x,sdiag,
# wa1,wa2)
#
# where
#
# n is a positive integer input variable set to the order of r.
#
# r is an n by n array. on input the full upper triangle
# must contain the full upper triangle of the matrix r.
# on output the full upper triangle is unaltered, and the
# strict lower triangle contains the strict upper triangle
# (transposed) of the upper triangular matrix s.
#
# ldr is a positive integer input variable not less than n
# which specifies the leading dimension of the array r.
#
# ipvt is an integer input array of length n which defines the
# permutation matrix p such that a*p = q*r. column j of p
# is column ipvt(j) of the identity matrix.
#
# diag is an input array of length n which must contain the
# diagonal elements of the matrix d.
#
# qtb is an input array of length n which must contain the first
# n elements of the vector (q transpose)*b.
#
# delta is a positive input variable which specifies an upper
# bound on the euclidean norm of d*x.
#
# par is a nonnegative variable. on input par contains an
# initial estimate of the levenberg-marquardt parameter.
# on output par contains the final estimate.
#
# x is an output array of length n which contains the least
# squares solution of the system a*x = b, sqrt(par)*d*x = 0,
# for the output par.
#
# sdiag is an output array of length n which contains the
# diagonal elements of the upper triangular matrix s.
#
# wa1 and wa2 are work arrays of length n.
#
# subprograms called
#
# minpack-supplied ... dpmpar,enorm,qrsolv
#
# fortran-supplied ... dabs,dmax1,dmin1,dsqrt
#
# argonne national laboratory. minpack project. march 1980.
# burton s. garbow, kenneth e. hillstrom, jorge j. more
#
def lmpar(self, r, ipvt, diag, qtb, delta, x, sdiag, par=None):
if (self.debug): print('Entering lmpar...')
dwarf = self.machar.minnum
sz = numpy.shape(r)
m = sz[0]
n = sz[1]
## Compute and store in x the gauss-newton direction. If the
## jacobian is rank-deficient, obtain a least-squares solution
nsing = n
wa1 = qtb.copy()
wh, = numpy.nonzero(numpy.diagonal(r) == 0)
if len(wh) > 0:
nsing = wh[0]
wa1[wh[0]:] = 0
if nsing > 1:
## *** Reverse loop ***
for j in range(nsing-1,-1,-1):
wa1[j] = wa1[j]/r[j,j]
if (j-1 >= 0):
wa1[0:j] = wa1[0:j] - r[0:j,j]*wa1[j]
## Note: ipvt here is a permutation array
numpy.put(x, ipvt, wa1)
## Initialize the iteration counter. Evaluate the function at the
## origin, and test for acceptance of the gauss-newton direction
iter = 0
wa2 = diag * x
dxnorm = self.enorm(wa2)
fp = dxnorm - delta
if (fp <= 0.1*delta):
return[r, 0., x, sdiag]
## If the jacobian is not rank deficient, the newton step provides a
## lower bound, parl, for the zero of the function. Otherwise set
## this bound to zero.
parl = 0.
if nsing >= n:
wa1 = numpy.take(diag, ipvt)*numpy.take(wa2, ipvt)/dxnorm
wa1[0] = wa1[0] / r[0,0] ## Degenerate case
for j in range(1,n): ## Note "1" here, not zero
sum = numpy.sum(r[0:j,j]*wa1[0:j])
wa1[j] = (wa1[j] - sum)/r[j,j]
temp = self.enorm(wa1)
parl = ((fp/delta)/temp)/temp
## Calculate an upper bound, paru, for the zero of the function
for j in range(n):
sum = numpy.sum(r[0:j+1,j]*qtb[0:j+1])
wa1[j] = sum/diag[ipvt[j]]
gnorm = self.enorm(wa1)
paru = gnorm/delta
if paru == 0: paru = dwarf/min([delta,0.1])
## If the input par lies outside of the interval (parl,paru), set
## par to the closer endpoint
par = max([par,parl])
par = min([par,paru])
if par == 0: par = gnorm/dxnorm
## Beginning of an iteration
while(1):
iter = iter + 1
## Evaluate the function at the current value of par
if par == 0: par = max([dwarf, paru*0.001])
temp = numpy.sqrt(par)
wa1 = temp * diag
[r, x, sdiag] = self.qrsolv(r, ipvt, wa1, qtb, sdiag)
wa2 = diag*x
dxnorm = self.enorm(wa2)
temp = fp
fp = dxnorm - delta
if ((abs(fp) <= 0.1*delta) or
((parl == 0) and (fp <= temp) and (temp < 0)) or
(iter == 10)): break;
## Compute the newton correction
wa1 = numpy.take(diag, ipvt)*numpy.take(wa2, ipvt)/dxnorm
for j in range(n-1):
wa1[j] = wa1[j]/sdiag[j]
wa1[j+1:n] = wa1[j+1:n] - r[j+1:n,j]*wa1[j]
wa1[n-1] = wa1[n-1]/sdiag[n-1] ## Degenerate case
temp = self.enorm(wa1)
parc = ((fp/delta)/temp)/temp
## Depending on the sign of the function, update parl or paru
if fp > 0: parl = max([parl,par])
if fp < 0: paru = min([paru,par])
## Compute an improved estimate for par
par = max([parl, par+parc])
## End of an iteration
## Termination
return[r, par, x, sdiag]
## Procedure to tie one parameter to another.
def tie(self, p, ptied=None):
if (self.debug): print('Entering tie...')
if (ptied is None): return
for i in range(len(ptied)):
if ptied[i] == '': continue
cmd = 'p[' + str(i) + '] = ' + ptied[i]
exec(cmd)
return(p)
# Original FORTRAN documentation
# **********
#
# subroutine covar
#
# given an m by n matrix a, the problem is to determine
# the covariance matrix corresponding to a, defined as
#
# t
# inverse(a *a) .
#
# this subroutine completes the solution of the problem
# if it is provided with the necessary information from the
# qr factorization, with column pivoting, of a. that is, if
# a*p = q*r, where p is a permutation matrix, q has orthogonal
# columns, and r is an upper triangular matrix with diagonal
# elements of nonincreasing magnitude, then covar expects
# the full upper triangle of r and the permutation matrix p.
# the covariance matrix is then computed as
#
# t t
# p*inverse(r *r)*p .
#
# if a is nearly rank deficient, it may be desirable to compute
# the covariance matrix corresponding to the linearly independent
# columns of a. to define the numerical rank of a, covar uses
# the tolerance tol. if l is the largest integer such that
#
# abs(r(l,l)) .gt. tol*abs(r(1,1)) ,
#
# then covar computes the covariance matrix corresponding to
# the first l columns of r. for k greater than l, column
# and row ipvt(k) of the covariance matrix are set to zero.
#
# the subroutine statement is
#
# subroutine covar(n,r,ldr,ipvt,tol,wa)
#
# where
#
# n is a positive integer input variable set to the order of r.
#
# r is an n by n array. on input the full upper triangle must
# contain the full upper triangle of the matrix r. on output
# r contains the square symmetric covariance matrix.
#
# ldr is a positive integer input variable not less than n
# which specifies the leading dimension of the array r.
#
# ipvt is an integer input array of length n which defines the
# permutation matrix p such that a*p = q*r. column j of p
# is column ipvt(j) of the identity matrix.
#
# tol is a nonnegative input variable used to define the
# numerical rank of a in the manner described above.
#
# wa is a work array of length n.
#
# subprograms called
#
# fortran-supplied ... dabs
#
# argonne national laboratory. minpack project. august 1980.
# burton s. garbow, kenneth e. hillstrom, jorge j. more
#
# **********
def calc_covar(self, rr, ipvt=None, tol=1.e-14):
if (self.debug): print('Entering calc_covar...')
if rr.ndim != 2:
print('ERROR: r must be a two-dimensional matrix')
return(-1)
s = numpy.shape(rr)
n = s[0]
if s[0] != s[1]:
print('ERROR: r must be a square matrix')
return(-1)
if (ipvt is None): ipvt = numpy.arange(n)
r = rr.copy()
r.shape = [n,n]
## Form the inverse of r in the full upper triangle of r
l = -1
tolr = tol * abs(r[0,0])
for k in range(n):
if (abs(r[k,k]) <= tolr): break
r[k,k] = 1./r[k,k]
for j in range(k):
temp = r[k,k] * r[j,k]
r[j,k] = 0.
r[0:j+1,k] = r[0:j+1,k] - temp*r[0:j+1,j]
l = k
## Form the full upper triangle of the inverse of (r transpose)*r
## in the full upper triangle of r
if l >= 0:
for k in range(l+1):
for j in range(k):
temp = r[j,k]
r[0:j+1,j] = r[0:j+1,j] + temp*r[0:j+1,k]
temp = r[k,k]
r[0:k+1,k] = temp * r[0:k+1,k]
## Form the full lower triangle of the covariance matrix
## in the strict lower triangle of r and in wa
wa = numpy.repeat([r[0,0]], n)
for j in range(n):
jj = ipvt[j]
sing = j > l
for i in range(j+1):
if sing: r[i,j] = 0.
ii = ipvt[i]
if ii > jj: r[ii,jj] = r[i,j]
if ii < jj: r[jj,ii] = r[i,j]
wa[jj] = r[j,j]
## Symmetrize the covariance matrix in r
for j in range(n):
r[0:j+1,j] = r[j,0:j+1]
r[j,j] = wa[j]
return(r)
class machar(object):
def __init__(self, double=1):
if (double == 0):
self.machep = 1.19209e-007
self.maxnum = 3.40282e+038
self.minnum = 1.17549e-038
self.maxgam = 171.624376956302725
else:
self.machep = 2.2204460e-016
self.maxnum = 1.7976931e+308
self.minnum = 2.2250739e-308
self.maxgam = 171.624376956302725
self.maxlog = numpy.log(self.maxnum)
self.minlog = numpy.log(self.minnum)
self.rdwarf = numpy.sqrt(self.minnum*1.5) * 10
self.rgiant = numpy.sqrt(self.maxnum) * 0.1
| 88,531 | 38.190792 | 97 | py |
presto | presto-master/python/presto/sigproc.py | #!/usr/bin/env python
from __future__ import print_function
from __future__ import absolute_import
from builtins import zip
import os
import struct
import sys
import math
import warnings
from presto.psr_constants import ARCSECTORAD
telescope_ids = {"Fake": 0, "Arecibo": 1, "ARECIBO 305m": 1,
"Ooty": 2, "Nancay": 3, "Parkes": 4, "Jodrell": 5,
"GBT": 6, "GMRT": 7, "Effelsberg": 8, "ATA": 9,
"SRT": 10, "LOFAR": 11, "VLA": 12, "CHIME": 20,
"FAST": 21, "MeerKAT": 64, "KAT-7": 65}
ids_to_telescope = dict(list(zip(list(telescope_ids.values()), list(telescope_ids.keys()))))
machine_ids = {"FAKE": 0, "PSPM": 1, "Wapp": 2, "WAPP": 2, "AOFTM": 3,
"BCPM1": 4, "BPP": 4, "OOTY": 5, "SCAMP": 6,
"GBT Pulsar Spigot": 7, "SPIGOT": 7, "BG/P": 11,
"PDEV": 12, "CHIME+PSR": 20, "KAT": 64, "KAT-DC2": 65}
ids_to_machine = dict(list(zip(list(machine_ids.values()), list(machine_ids.keys()))))
header_params = {
"HEADER_START": 'flag',
"telescope_id": 'i',
"machine_id": 'i',
"data_type": 'i',
"rawdatafile": 'str',
"source_name": 'str',
"barycentric": 'i',
"pulsarcentric": 'i',
"az_start": 'd',
"za_start": 'd',
"src_raj": 'd',
"src_dej": 'd',
"tstart": 'd',
"tsamp": 'd',
"nbits": 'i',
"signed": 'b',
"nsamples": 'i',
"nbeams": "i",
"ibeam": "i",
"fch1": 'd',
"foff": 'd',
"FREQUENCY_START": 'flag',
"fchannel": 'd',
"FREQUENCY_END": 'flag',
"nchans": 'i',
"nifs": 'i',
"refdm": 'd',
"period": 'd',
"npuls": 'q',
"nbins": 'i',
"HEADER_END": 'flag'}
def dec2radians(src_dej):
"""
dec2radians(src_dej):
Convert the SIGPROC-style DDMMSS.SSSS declination to radians
"""
sign = 1.0
if (src_dej < 0): sign = -1.0;
xx = math.fabs(src_dej)
dd = int(math.floor(xx / 10000.0))
mm = int(math.floor((xx - dd * 10000.0) / 100.0))
ss = xx - dd * 10000.0 - mm * 100.0
return sign * ARCSECTORAD * (60.0 * (60.0 * dd + mm) + ss)
def ra2radians(src_raj):
"""
ra2radians(src_raj):
Convert the SIGPROC-style HHMMSS.SSSS right ascension to radians
"""
return 15.0 * dec2radians(src_raj)
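# Hedged example (not part of the original module): SIGPROC packs coordinates
# as HHMMSS.SSSS / DDMMSS.SSSS floats, so for instance
#   ra_rad = ra2radians(123456.789)      # 12h 34m 56.789s -> radians
#   dec_rad = dec2radians(-123456.789)   # -12d 34m 56.789s -> radians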
def read_doubleval(filfile, stdout=False):
dblval = struct.unpack('d', filfile.read(8))[0]
if stdout:
print(" double value = '%20.15f'"%dblval)
return dblval
def read_intval(filfile, stdout=False):
intval = struct.unpack('i', filfile.read(4))[0]
if stdout:
print(" int value = '%d'"%intval)
return intval
def read_charval(filfile, stdout=False):
charval = struct.unpack('b', filfile.read(1))[0]
if stdout:
print(" char value = '%d'"%charval)
return charval
def read_longintval(filfile, stdout=False):
longintval = struct.unpack('q', filfile.read(8))[0]
if stdout:
print(" long int value = '%d'"%longintval)
return longintval
def read_string(filfile, stdout=False):
strlen = struct.unpack('i', filfile.read(4))[0]
strval = filfile.read(strlen)
if stdout:
print(" string = '%s'"%strval)
return strval.decode('utf-8')
def read_paramname(filfile, stdout=False):
paramname = read_string(filfile, stdout=False)
if stdout:
print("Read '%s'"%paramname)
return paramname
def read_hdr_val(filfile, stdout=False):
paramname = read_paramname(filfile, stdout)
try:
if header_params[paramname] == 'd':
return paramname, read_doubleval(filfile, stdout)
elif header_params[paramname] == 'i':
return paramname, read_intval(filfile, stdout)
elif header_params[paramname] == 'q':
return paramname, read_longintval(filfile, stdout)
elif header_params[paramname] == 'b':
return paramname, read_charval(filfile, stdout)
elif header_params[paramname] == 'str':
return paramname, read_string(filfile, stdout)
elif header_params[paramname] == 'flag':
return paramname, None
except KeyError:
warnings.warn("key '%s' is unknown!" % paramname)
return None, None
def prep_string(string):
return struct.pack('i', len(string))+string.encode('utf-8')
def prep_double(name, value):
return prep_string(name)+struct.pack('d', float(value))
def prep_int(name, value):
return prep_string(name)+struct.pack('i', int(value))
def prep_char(name, value):
return prep_string(name)+struct.pack('b', int(value))
def addto_hdr(paramname, value):
try:
if header_params[paramname] == 'd':
return prep_double(paramname, value)
elif header_params[paramname] == 'i':
return prep_int(paramname, value)
elif header_params[paramname] == 'b':
return prep_char(paramname, value)
elif header_params[paramname] == 'str':
return prep_string(paramname) + prep_string(value)
elif header_params[paramname] == 'flag':
return prep_string(paramname)
except KeyError:
warnings.warn("key '%s' is unknown!" % paramname)
    return b""
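# Hedged example (not part of the original module): a minimal SIGPROC header
# can be assembled by concatenating addto_hdr() calls between the two flags,
# giving a bytes object ready to be written ahead of the raw data:
#   hdr = (addto_hdr("HEADER_START", None) +
#          addto_hdr("telescope_id", 6) +
#          addto_hdr("tsamp", 64e-6) +
#          addto_hdr("nchans", 1024) +
#          addto_hdr("HEADER_END", None))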
def read_header(infile):
"""
read_header(infile):
Read a SIGPROC-style header and return the keys/values in a dictionary,
as well as the length of the header: (hdrdict, hdrlen)
"""
hdrdict = {}
if type(infile) == type("abc"):
infile = open(infile,'rb')
param = ""
while (param != "HEADER_END"):
param, val = read_hdr_val(infile, stdout=False)
hdrdict[param] = val
hdrlen = infile.tell()
infile.close()
return hdrdict, hdrlen
def samples_per_file(infile, hdrdict, hdrlen):
"""
samples_per_file(infile, hdrdict, hdrlen):
Given an input SIGPROC-style filterbank file and a header
dictionary and length (as returned by read_header()),
return the number of (time-domain) samples in the file.
"""
numbytes = os.stat(infile)[6] - hdrlen
bytes_per_sample = hdrdict['nchans'] * (hdrdict['nbits']/8)
if numbytes % bytes_per_sample:
print("Warning!: File does not appear to be of the correct length!")
numsamples = numbytes / bytes_per_sample
return numsamples
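# Hedged example (not part of the original module; "obs.fil" is a hypothetical
# filterbank file):
#   hdr, hdrlen = read_header("obs.fil")
#   nsamp = samples_per_file("obs.fil", hdr, hdrlen)
#   print(hdr["nchans"], hdr["tsamp"], nsamp)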
if __name__ == "__main__":
if len(sys.argv)==1:
print("\nusage: mod_filterbank_hdr.py infile.fil [outfile.fil]\n")
sys.exit()
filhdr = {}
newhdr = ""
infile = open(sys.argv[1], 'rb')
# Loop over the values in the .fil file
while 1:
param, val = read_hdr_val(infile, stdout=True)
filhdr[param] = val
# Add lines here to correct stuff
#if param=="nchans": val = 768
# Append to the new hdr string
# newhdr += addto_hdr(param, val)
# Break out of the loop if the header is over
if param=="HEADER_END": break
if len(sys.argv) > 2:
print("Writing new header to '%s'"%sys.argv[2])
outfile = open(sys.argv[2], 'wb')
outfile.write(newhdr)
outfile.close()
else:
print(filhdr)
| 7,132 | 31.130631 | 92 | py |
presto | presto-master/python/presto/waterfaller.py | ../../bin/waterfaller.py | 24 | 24 | 24 | py |
presto | presto-master/python/presto/spectra.py | from builtins import str
from builtins import range
from builtins import object
import copy
import numpy as np
import scipy.signal
from presto import psr_utils
class Spectra(object):
"""A class to store spectra. This is mainly to provide
reusable functionality.
"""
def __init__(self, freqs, dt, data, starttime=0, dm=0):
"""Spectra constructor.
Inputs:
freqs: Observing frequencies for each channel.
dt: Sample time (in seconds).
data: A 2D numpy array containing pulsar data.
Axis 0 should contain channels. (e.g. data[0,:])
Axis 1 should contain spectra. (e.g. data[:,0])
starttime: Start time (in seconds) of the spectra
with respect to the start of the observation.
(Default: 0).
dm: Dispersion measure (in pc/cm^3). (Default: 0)
Output:
spectra_obj: Spectrum object.
"""
self.numchans, self.numspectra = data.shape
assert len(freqs)==self.numchans
self.freqs = freqs
self.data = data.astype('float')
self.dt = dt
self.starttime = starttime
        self.dm = dm
def __str__(self):
return str(self.data)
def __getitem__(self, key):
return self.data[key]
def __setitem__(self, key, value):
self.data[key] = value
def get_chan(self, channum):
return self.data[channum,:]
def get_spectrum(self, specnum):
return self.data[:,specnum]
def shift_channels(self, bins, padval=0):
"""Shift each channel to the left by the corresponding
value in bins, an array.
Inputs:
bins: An array containing the number of bins
to shift each channel by.
padval: Value to use when shifting near the edge
of a channel. This can be a numeric value,
'median', 'mean', or 'rotate'.
The values 'median' and 'mean' refer to the
median and mean of the channel. The value
'rotate' takes values from one end of the
channel and shifts them to the other.
Outputs:
None
*** Shifting happens in-place ***
"""
assert self.numchans == len(bins)
for ii in range(self.numchans):
chan = self.get_chan(ii)
# Use 'chan[:]' so update happens in-place
# this way the change effects self.data
chan[:] = psr_utils.rotate(chan, bins[ii])
if padval!='rotate':
# Get padding value
if padval=='mean':
pad = np.mean(chan)
elif padval=='median':
pad = np.median(chan)
else:
pad = padval
# Replace rotated values with padval
if bins[ii]>0:
chan[-bins[ii]:] = pad
elif bins[ii]<0:
chan[:-bins[ii]] = pad
def subband(self, nsub, subdm=None, padval=0):
"""Reduce the number of channels to 'nsub' by subbanding.
The channels within a subband are combined using the
DM 'subdm'. 'padval' is passed to the call to
'Spectra.shift_channels'.
Inputs:
nsub: Number of subbands. Must be a factor of
the number of channels.
subdm: The DM with which to combine channels within
each subband (Default: don't shift channels
within each subband)
padval: The padding value to use when shifting
channels during dedispersion. See documentation
of Spectra.shift_channels. (Default: 0)
Outputs:
None
*** Subbanding happens in-place ***
"""
assert (self.numchans % nsub) == 0
assert (subdm is None) or (subdm >= 0)
nchan_per_sub = self.numchans // nsub
sub_hifreqs = self.freqs[np.arange(nsub)*nchan_per_sub]
sub_lofreqs = self.freqs[(1+np.arange(nsub))*nchan_per_sub-1]
sub_ctrfreqs = 0.5*(sub_hifreqs+sub_lofreqs)
if subdm is not None:
# Compute delays
ref_delays = psr_utils.delay_from_DM(subdm-self.dm, sub_ctrfreqs)
delays = psr_utils.delay_from_DM(subdm-self.dm, self.freqs)
rel_delays = delays-ref_delays.repeat(nchan_per_sub) # Relative delay
rel_bindelays = np.round(rel_delays/self.dt).astype('int')
# Shift channels
self.shift_channels(rel_bindelays, padval)
# Subband
self.data = np.array([np.sum(sub, axis=0) for sub in \
np.vsplit(self.data, nsub)])
self.freqs = sub_ctrfreqs
self.numchans = nsub
def scaled(self, indep=False):
"""Return a scaled version of the Spectra object.
When scaling subtract the median from each channel,
and divide by global std deviation (if indep==False), or
divide by std deviation of each row (if indep==True).
Input:
indep: Boolean. If True, scale each row
independantly (Default: False).
Output:
scaled_spectra: A scaled version of the
Spectra object.
"""
other = copy.deepcopy(self)
if not indep:
std = other.data.std()
for ii in range(other.numchans):
chan = other.get_chan(ii)
median = np.median(chan)
if indep:
std = chan.std()
chan[:] = (chan-median)/std
return other
def scaled2(self, indep=False):
"""Return a scaled version of the Spectra object.
When scaling subtract the min from each channel,
and divide by global max (if indep==False), or
divide by max of each row (if indep==True).
Input:
indep: Boolean. If True, scale each row
independantly (Default: False).
Output:
scaled_spectra: A scaled version of the
Spectra object.
"""
other = copy.deepcopy(self)
if not indep:
max = other.data.max()
for ii in range(other.numchans):
chan = other.get_chan(ii)
min = chan.min()
if indep:
max = chan.max()
chan[:] = (chan-min)/max
return other
    def masked(self, mask, maskval='median-mid80'):
        """Replace masked data with 'maskval'. Masking is done
            in-place; the modified Spectra object is returned.
Inputs:
mask: An array of boolean values of the same size and shape
as self.data. True represents an entry to be masked.
maskval: Value to use when masking. This can be a numeric
value, 'median', 'mean', or 'median-mid80'.
The values 'median' and 'mean' refer to the median and
mean of the channel, respectively. The value 'median-mid80'
refers to the median of the channel after the top and bottom
10% of the sorted channel is removed.
(Default: 'median-mid80')
Output:
maskedspec: A masked version of the Spectra object.
"""
assert self.data.shape == mask.shape
maskvals = np.ones(self.numchans)
for ii in range(self.numchans):
chan = self.get_chan(ii)
# Use 'chan[:]' so update happens in-place
if maskval=='mean':
maskvals[ii]=np.mean(chan)
elif maskval=='median':
maskvals[ii]=np.median(chan)
elif maskval=='median-mid80':
n = int(np.round(0.1*self.numspectra))
maskvals[ii]=np.median(sorted(chan)[n:-n])
else:
maskvals[ii]=maskval
if np.all(mask[ii]):
self.data[ii] = np.ones_like(self.data[ii])*(maskvals[:,np.newaxis][ii])
return self
def dedisperse(self, dm=0, padval=0):
"""Shift channels according to the delays predicted by
the given DM.
Inputs:
dm: The DM (in pc/cm^3) to use.
padval: The padding value to use when shifting
channels during dedispersion. See documentation
of Spectra.shift_channels. (Default: 0)
Outputs:
None
*** Dedispersion happens in place ***
"""
assert dm >= 0
ref_delay = psr_utils.delay_from_DM(dm-self.dm, np.max(self.freqs))
delays = psr_utils.delay_from_DM(dm-self.dm, self.freqs)
rel_delays = delays-ref_delay # Relative delay
rel_bindelays = np.round(rel_delays/self.dt).astype('int')
# Shift channels
self.shift_channels(rel_bindelays, padval)
self.dm=dm
def smooth(self, width=1, padval=0):
"""Smooth each channel by convolving with a top hat
        of given width. The height of the top hat is
        chosen such that RMS=1 after smoothing.
Overlap values are determined by 'padval'.
Inputs:
width: Number of bins to smooth by (Default: no smoothing)
padval: Padding value to use. Possible values are
float-value, 'mean', 'median', 'wrap'.
(Default: 0).
Ouputs:
None
This bit of code is taken from Scott Ransom's
PRESTO's single_pulse_search.py (line ~ 423).
*** Smoothing is done in place. ***
"""
if width > 1:
kernel = np.ones(width, dtype='float32')/np.sqrt(width)
for ii in range(self.numchans):
chan = self.get_chan(ii)
if padval=='wrap':
tosmooth = np.concatenate([chan[-width:], \
chan, chan[:width]])
elif padval=='mean':
tosmooth = np.ones(self.numspectra+width*2) * \
np.mean(chan)
tosmooth[width:-width] = chan
elif padval=='median':
tosmooth = np.ones(self.numspectra+width*2) * \
np.median(chan)
tosmooth[width:-width] = chan
else: # padval is a float
tosmooth = np.ones(self.numspectra+width*2) * \
padval
tosmooth[width:-width] = chan
smoothed = scipy.signal.convolve(tosmooth, kernel, 'same')
chan[:] = smoothed[width:-width]
def trim(self, bins=0):
"""Trim the end of the data by 'bins' spectra.
Input:
bins: Number of spectra to trim off the end of the observation.
If bins is negative trim spectra off the beginning of the
observation.
Outputs:
None
*** Trimming is irreversible ***
"""
assert bins < self.numspectra
if bins == 0:
return
elif bins > 0:
self.data = self.data[:,:-bins]
self.numspectra = self.numspectra-bins
elif bins < 0:
            self.data = self.data[:,-bins:]
            self.numspectra = self.numspectra+bins
            self.starttime = self.starttime-bins*self.dt
def downsample(self, factor=1, trim=True):
"""Downsample (in-place) the spectra by co-adding
'factor' adjacent bins.
Inputs:
factor: Reduce the number of spectra by this
factor. Must be a factor of the number of
spectra if 'trim' is False.
trim: Trim off excess bins.
Ouputs:
None
*** Downsampling is done in place ***
"""
assert trim or not (self.numspectra % factor)
new_num_spectra = self.numspectra // factor
num_to_trim = self.numspectra%factor
self.trim(num_to_trim)
self.data = np.array(np.column_stack([np.sum(subint, axis=1) for \
subint in np.hsplit(self.data,new_num_spectra)]))
self.numspectra = new_num_spectra
self.dt = self.dt*factor
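# Hedged usage sketch (not part of the original module): build a Spectra object
# from a (nchan, nspectra) array, dedisperse it, then downsample and subband:
#   import numpy as np
#   freqs = np.linspace(1500.0, 1200.0, 64)   # channel freqs in MHz, high to low
#   data = np.random.randn(64, 4096)          # fake filterbank block
#   spec = Spectra(freqs, dt=64e-6, data=data, starttime=0.0, dm=0.0)
#   spec.dedisperse(dm=100.0, padval='median')
#   spec.downsample(factor=8)
#   spec.subband(16, subdm=100.0)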
| 12,864 | 36.616959 | 88 | py |
presto | presto-master/python/presto/psr_utils.py | from __future__ import print_function
from __future__ import absolute_import
from builtins import str
from builtins import range
import bisect
import numpy as Num
import numpy.fft as FFT
from scipy.special import ndtr, ndtri, chdtrc, chdtri, fdtrc, i0, kolmogorov
from scipy.optimize import leastsq
import scipy.optimize.zeros as zeros
from presto import Pgplot, ppgplot, sinc_interp
import presto.psr_constants as pc
isintorlong = lambda x: isinstance(x, int)
def span(Min, Max, Number):
"""
span(Min, Max, Number):
Create a range of 'Num' floats given inclusive 'Min' and 'Max' values.
"""
return Num.linspace(Min, Max, Number)
def distance(width):
"""
distance(width):
Return a 'width' x 'width' Num Python array with each
point set to the geometric distance from the array's center.
"""
x = Num.arange(-width / 2.0 + 0.5, width / 2.0 + 0.5, 1.0) ** 2
x = Num.resize(x, (width, width))
return Num.sqrt(x + Num.transpose(x))
def is_power_of_10(n):
"""
is_power_of_10(n):
If n is a power of 10, return True.
"""
N = int(n)
while (N > 9 and N % 10 == 0):
N //= 10
return N == 1
def choose_N(orig_N):
"""
choose_N(orig_N):
Choose a time series length that is larger than
the input value but that is highly factorable.
Note that the returned value must be divisible
by at least the maximum downsample factor * 2.
Currently, this is 8 * 2 = 16.
"""
    # A list of 4-digit numbers that are highly factorable by small primes
goodfactors = [1000, 1008, 1024, 1056, 1120, 1152, 1200, 1232, 1280,
1296, 1344, 1408, 1440, 1536, 1568, 1584, 1600, 1680,
1728, 1760, 1792, 1920, 1936, 2000, 2016, 2048, 2112,
2160, 2240, 2304, 2352, 2400, 2464, 2560, 2592, 2640,
2688, 2800, 2816, 2880, 3024, 3072, 3136, 3168, 3200,
3360, 3456, 3520, 3584, 3600, 3696, 3840, 3872, 3888,
3920, 4000, 4032, 4096, 4224, 4320, 4400, 4480, 4608,
4704, 4752, 4800, 4928, 5040, 5120, 5184, 5280, 5376,
5488, 5600, 5632, 5760, 5808, 6000, 6048, 6144, 6160,
6272, 6336, 6400, 6480, 6720, 6912, 7040, 7056, 7168,
7200, 7392, 7680, 7744, 7776, 7840, 7920, 8000, 8064,
8192, 8400, 8448, 8624, 8640, 8800, 8960, 9072, 9216,
9408, 9504, 9600, 9680, 9856, 10000]
if orig_N < 10000:
return 0
# Get the number represented by the first 4 digits of orig_N
first4 = int(str(orig_N)[:4])
# Now get the number that is just bigger than orig_N
# that has its first 4 digits equal to "factor"
for factor in goodfactors:
if (factor == first4 and
orig_N % factor == 0 and
is_power_of_10(orig_N//factor)): break
if factor > first4: break
new_N = factor
while new_N < orig_N:
new_N *= 10
if new_N == orig_N:
return orig_N
# Finally, compare new_N to the closest power_of_two
# greater than orig_N. Take the closest.
two_N = 2
while two_N < orig_N:
two_N *= 2
return min(two_N, new_N)
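# Hedged example (not part of the original module), by my reading of the code:
#   choose_N(1000000)   # -> 1000000 (already 1000 * 10**3)
#   choose_N(1234567)   # -> 1280000 (1280 * 10**3 beats the next power of two)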
def running_avg(arr, navg):
"""
running_avg(arr, navg):
Return an array of the running average of 'navg' bins from the
input array 'arr'.
"""
a = Num.asarray(arr, 'd')
a.shape = (len(a) // navg, navg)
return Num.add.reduce(Num.transpose(a)) / navg
def hist(data, bins, range=None, laby="Number", **kwargs):
"""
hist(data, bins, range=None, laby="Number", **kwargs):
Return and plot a histogram in one variable.
data -- a sequence of data points
bins -- the number of bins into which the data is to be sorted
range -- a tuple of two values, specifying the lower and
the upper end of the interval spanned by the bins.
Any data point outside this interval will be ignored.
If no range is given, the smallest and largest
data values are used to define the interval.
    Note:  This command also accepts all the keyword args of plotbinned().
"""
ys, bin_edges = Num.histogram(data, bins, range)
dx = bin_edges[1] - bin_edges[0]
xs = bin_edges[:-1] + 0.5 * dx
maxy = int(1.1 * max(ys))
if maxy < max(ys):
maxy = max(ys) + 1.0
if 'rangey' not in list(kwargs.keys()):
kwargs['rangey'] = [0, maxy]
Pgplot.plotbinned(ys, xs, laby=laby, **kwargs)
return (xs, ys)
def KS_test(data, cumdist, output=0):
"""
KS_test(data, cumdist, output=0):
Perform a Kolmogorov-Smirnov test on data compared to the
cumulative-distribution function cumdist.
"""
nn = len(data)
sdata = Num.sort(Num.asarray(data))
D1 = Num.maximum.reduce(Num.absolute(cumdist(sdata) -
Num.arange(nn, dtype='d') / nn))
D2 = Num.maximum.reduce(Num.absolute(cumdist(sdata) -
Num.arange(1, nn + 1, dtype='d') / nn))
D = max((D1, D2))
P = kolmogorov(Num.sqrt(nn) * D)
if (output):
print("Max distance between the cumulative distributions (D) = %.5g" % D)
        print("Prob the data is from the specified distribution (P) = %.3g" % P)
return (D, P)
def weighted_mean(arrin, weights_in, inputmean=None, calcerr=False, sdev=False):
"""
NAME:
weighted_mean()
PURPOSE:
Calculate the weighted mean, error, and optionally standard deviation of
an input array. By default error is calculated assuming the weights are
1/err^2, but if you send calcerr=True this assumption is dropped and the
error is determined from the weighted scatter.
CALLING SEQUENCE:
        wmean, werr = weighted_mean(arr, weights, inputmean=None, calcerr=False, sdev=False)
INPUTS:
arr: A numpy array or a sequence that can be converted.
        weights: A set of weights for each element in the array.
OPTIONAL INPUTS:
inputmean:
An input mean value, around which the mean is calculated.
calcerr=False:
Calculate the weighted error. By default the error is calculated as
1/sqrt( weights.sum() ). If calcerr=True it is calculated as sqrt(
(w**2 * (arr-mean)**2).sum() )/weights.sum()
sdev=False:
If True, also return the weighted standard deviation as a third
element in the tuple.
OUTPUTS:
wmean, werr: A tuple of the weighted mean and error. If sdev=True the
tuple will also contain sdev: wmean,werr,wsdev
REVISION HISTORY:
Converted from IDL: 2006-10-23. Erin Sheldon, NYU
"""
# no copy made if they are already arrays
arr = Num.array(arrin, ndmin=1, copy=False)
# Weights is forced to be type double. All resulting calculations
# will also be double
weights = Num.array(weights_in, ndmin=1, dtype='f8', copy=False)
wtot = weights.sum()
# user has input a mean value
if inputmean is None:
wmean = (weights * arr).sum() / wtot
else:
wmean = float(inputmean)
# how should error be calculated?
if calcerr:
werr2 = (weights ** 2 * (arr - wmean) ** 2).sum()
werr = Num.sqrt(werr2) / wtot
else:
werr = 1.0 / Num.sqrt(wtot)
# should output include the weighted standard deviation?
if sdev:
wvar = (weights * (arr - wmean) ** 2).sum() / wtot
wsdev = Num.sqrt(wvar)
return wmean, werr, wsdev
else:
return wmean, werr
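# Hedged example (not part of the original function): with weights taken as
# 1/err**2, the default error estimate is 1/sqrt(weights.sum()):
#   vals = Num.array([10.0, 11.0, 12.0])
#   errs = Num.array([0.5, 1.0, 2.0])
#   wmean, werr = weighted_mean(vals, 1.0 / errs**2)
#   wmean, werr, wsdev = weighted_mean(vals, 1.0 / errs**2, sdev=True)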
def MJD_to_JD(MJD):
"""
MJD_to_JD(MJD):
Convert Modified Julian Date (MJD) to Julian Date (JD)
"""
return MJD + 2400000.5
def JD_to_MJD(JD):
"""
JD_to_MJD(JD):
Convert Julian Date (JD) to Modified Julian Date (MJD)
"""
return JD - 2400000.5
def MJD_to_Julian_Epoch(MJD):
"""
MJD_to_Julian_Epoch(MJD):
Convert Modified Julian Date (MJD) to Julian Epoch
"""
return 2000.0 + (MJD - 51544.5) / 365.25
def Julian_Epoch_to_MJD(jepoch):
"""
Julian_Epoch_to_MJD(jepoch):
Convert Julian Epoch to Modified Julian Date (MJD)
"""
return 51544.5 + (jepoch - 2000.0) * 365.25
def MJD_to_Besselian_Epoch(MJD):
"""
MJD_to_Besselian_Epoch(MJD):
Convert Modified Julian Date (MJD) to Besselian Epoch
"""
return 1900.0 + (MJD - 15019.81352) / 365.242198781
def Besselian_Epoch_to_MJD(bepoch):
"""
Besselian_Epoch_to_MJD(bepoch):
Convert Besselian Epoch to Modified Julian Date (MJD)
"""
return 15019.81352 + (bepoch - 1900.0) * 365.242198781
def rad_to_dms(rad):
"""
rad_to_dms(rad):
Convert radians to degrees, minutes, and seconds of arc.
"""
if (rad < 0.0):
sign = -1
else:
sign = 1
arc = pc.RADTODEG * Num.fmod(Num.fabs(rad), pc.PI)
d = int(arc)
arc = (arc - d) * 60.0
m = int(arc)
s = (arc - m) * 60.0
if sign == -1 and d == 0:
return (sign * d, sign * m, sign * s)
else:
return (sign * d, m, s)
def dms_to_rad(deg, min, sec):
"""
dms_to_rad(deg, min, sec):
Convert degrees, minutes, and seconds of arc to radians.
"""
if (deg < 0.0):
sign = -1
elif (deg == 0.0 and (min < 0.0 or sec < 0.0)):
sign = -1
else:
sign = 1
return sign * pc.ARCSECTORAD * \
(60.0 * (60.0 * Num.fabs(deg) +
Num.fabs(min)) + Num.fabs(sec))
def dms_to_deg(deg, min, sec):
"""
dms_to_deg(deg, min, sec):
Convert degrees, minutes, and seconds of arc to degrees.
"""
return pc.RADTODEG * dms_to_rad(deg, min, sec)
def rad_to_hms(rad):
"""
rad_to_hms(rad):
        Convert radians to hours, minutes, and seconds of time.
"""
rad = Num.fmod(rad, pc.TWOPI)
if (rad < 0.0): rad = rad + pc.TWOPI
arc = pc.RADTOHRS * rad
h = int(arc)
arc = (arc - h) * 60.0
m = int(arc)
s = (arc - m) * 60.0
return (h, m, s)
def hms_to_rad(hour, min, sec):
"""
hms_to_rad(hour, min, sec):
        Convert hours, minutes, and seconds of time to radians
"""
if (hour < 0.0):
sign = -1
else:
sign = 1
return sign * pc.SECTORAD * \
(60.0 * (60.0 * Num.fabs(hour) +
Num.fabs(min)) + Num.fabs(sec))
def hms_to_hrs(hour, min, sec):
"""
hms_to_hrs(hour, min, sec):
        Convert hours, minutes, and seconds of time to hours.
"""
return pc.RADTOHRS * hms_to_rad(hour, min, sec)
def coord_to_string(h_or_d, m, s):
"""
coord_to_string(h_or_d, m, s):
Return a formatted string of RA or DEC values as
'hh:mm:ss.ssss' if RA, or 'dd:mm:ss.ssss' if DEC.
"""
retstr = ""
if h_or_d < 0:
retstr = "-"
elif abs(h_or_d) == 0:
if (m < 0.0) or (s < 0.0):
retstr = "-"
h_or_d, m, s = abs(h_or_d), abs(m), abs(s)
if (s >= 9.9995):
return retstr + "%.2d:%.2d:%.4f" % (h_or_d, m, s)
else:
return retstr + "%.2d:%.2d:0%.4f" % (h_or_d, m, s)
def ra_to_rad(ra_string):
"""
    ra_to_rad(ra_string):
Given a string containing RA information as
'hh:mm:ss.ssss', return the equivalent decimal
radians.
"""
h, m, s = ra_string.split(":")
return hms_to_rad(int(h), int(m), float(s))
def dec_to_rad(dec_string):
"""
dec_to_rad(dec_string):
Given a string containing DEC information as
'dd:mm:ss.ssss', return the equivalent decimal
radians.
"""
d, m, s = dec_string.split(":")
if "-" in d and int(d) == 0:
m, s = '-' + m, '-' + s
return dms_to_rad(int(d), int(m), float(s))
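# Hedged example (not part of the original module): round-tripping coordinates
# through these helpers,
#   ra = ra_to_rad("12:34:56.7890")            # radians
#   dec = dec_to_rad("-01:23:45.678")          # radians
#   print(coord_to_string(*rad_to_hms(ra)))    # back to "12:34:56.7890"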
def delta_m(flux_factor):
"""
delta_m(flux_factor):
Return the change in magnitudes caused by a change
in flux of flux_factor.
"""
return -2.5 * Num.log10(flux_factor)
def flux_factor(delta_m):
"""
flux_factor(delta_m):
Return the change in flux caused by a change
in magnitude of delta_m magnitudes
"""
return 10.0 ** (delta_m / -2.5)
def distance_modulus_to_distance(dm, absorption=0.0):
"""
distance_modulus_to_distance(dm, absorption=0.0):
Return the distance (kpc) given a distance modulus dm and
an optional absorption.
"""
return 10.0 ** (((dm - absorption) + 5.0) / 5.0) / 1000.0
def distance_to_distance_modulus(d, absorption=0.0):
"""
distance_to_distance_modulus(d, absorption=0.0):
Return the distance modulus given a distance d and
an optional absorption.
"""
return 5.0 * Num.log10(d * 1000.0) - 5.0 + absorption
def true_anomaly(E, ecc):
"""
true_anomaly(E, ecc):
Return the True Anomaly (in radians) given the Eccentric anomaly
(E in radians) and the eccentricity (ecc)
"""
return 2.0 * Num.arctan(Num.sqrt((1.0 + ecc) / (1.0 - ecc)) * Num.tan(E / 2.0))
def mass_funct(pb, x):
"""
mass_funct(pb, x):
Return the mass function of an orbit given the following:
'pb' is the binary period in days.
'x' is the projected semi-major axis in lt-sec.
"""
pbs = pb * pc.SECPERDAY
return 8015123.37129 * x ** 3.0 / (pbs * pbs)
def mass_funct2(mp, mc, i):
"""
mass_funct2(mp, mc, i):
Return the mass function of an orbit given the following:
'mp' is the mass of the primary in solar masses.
'mc' is the mass of the companion in solar masses.
'i' is the orbital inclination (rad).
Note: An 'average' orbit has cos(i) = 0.5, or i = 60 deg
"""
return (mc * Num.sin(i)) ** 3.0 / (mc + mp) ** 2.0
def asini_c(pb, mf):
"""
asini_c(pb, mf):
Return the orbital projected semi-major axis (lt-sec) given:
'pb' is the binary period in sec.
'mf' is the mass function of the orbit.
"""
return (mf * pb * pb / 8015123.37129) ** (1.0 / 3.0)
def TS99_WDmass(pb, pop="I+II"):
"""
TS99_WDmass(pb, pop="I+II"):
Return the mass of the predicted WD companion for an MSP-HE WD
        system, with an orbital period of 'pb' days. The options
for the pop parameter are "I", "II", or the default "I+II".
That is the population of the stars that formed the system
(i.e. pop II stars are older and more metal poor)
From Tauris & Savonije, 1999, ApJ.
"""
vals = {"I": (4.50, 1.2e5, 0.120),
"I+II": (4.75, 1.1e5, 0.115),
"II": (5.00, 1.0e5, 0.110)}
if pop not in vals.keys():
print("Not a valid stellar pop: should be 'I', 'I+II', or 'II'")
return None
else:
a, b, c = vals[pop]
return (pb/b)**(1.0/a) + c
def ELL1_check(A1, E, TRES, NTOA, output=False):
"""
ELL1_check(A1, E, TRES, NTOA, output=False):
Check if a binary pulsar to see if ELL1 can be safely used as the
binary model. To work properly, we should have:
asini/c * ecc**2 << timing precision / sqrt(# TOAs)
or A1 * E**2 << TRES / sqrt(NTOA)
"""
lhs = A1 * E ** 2.0 * 1e6
rhs = TRES / Num.sqrt(NTOA)
if output:
print("Condition is asini/c * ecc**2 << timing precision / sqrt(# TOAs) to use ELL1:")
print(" asini/c * ecc**2 = %8.3g us" % lhs)
print(" TRES / sqrt(# TOAs) = %8.3g us" % rhs)
if lhs * 50.0 < rhs:
if output:
print("Should be fine.")
return True
elif lhs * 5.0 < rhs:
if output:
print("Should be OK, but not optimal.")
return True
else:
if output:
print("Should probably use BT or DD instead.")
return False
def accel_to_z(accel, T, reffreq, harm=1):
"""
accel_to_z(accel, T, reffreq, harm=1):
Return the accelsearch 'z' (i.e. number of bins drifted)
at a reference frequency 'reffreq', for an observation
of duration 'T' seconds and with acceleration (in m/s/s)
'accel'. You can specify the harmonic number in 'harm'.
"""
return accel * harm * reffreq * T * T / pc.SOL
def z_to_accel(z, T, reffreq, harm=1):
"""
z_to_accel(z, T, reffreq, harm=1):
Return the acceleration (in m/s/s) corresponding to the
accelsearch 'z' (i.e. number of bins drifted) at a
reference frequency 'reffreq', for an observation
of duration 'T'. You can specify the harmonic number
in 'harm'.
"""
return z * pc.SOL / (harm * reffreq * T * T)
def bins_to_accel(z, T, f=[1.0, 1000.0], device="/XWIN"):
"""
bins_to_accel(z, T, f=[1.0, 1000.0], device="/XWIN"):
Make a plot showing the acceleration which corresponds
to a certain number of Fourier bins drifted 'z' during
an observation of length 'T'.
"""
fs = span(Num.log10(f[0]), Num.log10(f[1]), 1000)
accels = z_to_accel(z, T, 10.0 ** fs)
if (device):
Pgplot.plotxy(Num.log10(accels), fs, logx=1, logy=1,
labx="Frequency (Hz)",
laby=r"Acceleration (m/s\u2\d)", device=device)
ppgplot.pgmtxt("T", -2.0, 0.75, 0.0, "T = %.0f sec" % T)
ppgplot.pgmtxt("T", -3.5, 0.75, 0.0, r"r\B\u\.\d = %.1f bins" % z)
if (device != '/XWIN'):
Pgplot.closeplot()
else:
return accels
def pulsar_mass(pb, x, mc, inc):
"""
pulsar_mass(pb, x, mc, inc):
Return the pulsar mass (in solar mass units) for a binary
system with the following characteristics:
'pb' is the binary period in days.
'x' is the projected semi-major axis in lt-sec.
'inc' is the orbital inclination in degrees.
'mc' is the mass of the companion in solar mass units.
"""
massfunct = mass_funct(pb, x)
def localmf(mp, mc=mc, mf=massfunct, i=inc * pc.DEGTORAD):
return mass_funct2(mp, mc, i) - mf
return zeros.bisect(localmf, 0.0, 1000.0)
def companion_mass(pb, x, inc=60.0, mpsr=1.4):
"""
companion_mass(pb, x, inc=60.0, mpsr=1.4):
Return the companion mass (in solar mass units) for a binary
system with the following characteristics:
'pb' is the binary period in days.
'x' is the projected semi-major axis in lt-sec.
'inc' is the orbital inclination in degrees.
'mpsr' is the mass of the pulsar in solar mass units.
"""
massfunct = mass_funct(pb, x)
def localmf(mc, mp=mpsr, mf=massfunct, i=inc * pc.DEGTORAD):
return mass_funct2(mp, mc, i) - mf
return zeros.bisect(localmf, 0.0, 1000.0)
def companion_mass_limit(pb, x, mpsr=1.4):
"""
companion_mass_limit(pb, x, mpsr=1.4):
Return the lower limit (corresponding to i = 90 degrees) of the
companion mass (in solar mass units) in a binary system with
the following characteristics:
'pb' is the binary period in days.
'x' is the projected semi-major axis in lt-sec.
'mpsr' is the mass of the pulsar in solar mass units.
"""
return companion_mass(pb, x, inc=90.0, mpsr=mpsr)
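# Hedged example (not part of the original module): for a hypothetical binary
# with pb = 10 d and x = 5 lt-s,
#   mf = mass_funct(10.0, 5.0)                  # Keplerian mass function (Msun)
#   mc_med = companion_mass(10.0, 5.0)          # median (i = 60 deg, Mp = 1.4)
#   mc_min = companion_mass_limit(10.0, 5.0)    # i = 90 deg lower limit
#   mp = pulsar_mass(10.0, 5.0, mc=mc_med, inc=60.0)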
def OMDOT(porb, e, Mp, Mc):
"""
OMDOT(porb, e, Mp, Mc):
        Return the predicted advance of periastron (deg/yr) given the
orbital period (days), eccentricity, and pulsar and companion masses.
"""
return 3.0 * (porb * pc.SECPERDAY / pc.TWOPI) ** (-5.0 / 3.0) * \
(pc.Tsun * (Mp + Mc)) ** (2.0 / 3.0) / (1.0 - e ** 2.0) * \
pc.RADTODEG * pc.SECPERJULYR
def GAMMA(porb, e, Mp, Mc):
"""
GAMMA(porb, e, Mp, Mc):
Return the predicted value of relativistic gamma (sec) given the
orbital period (days), eccentricity, and pulsar and companion masses.
"""
return e * (porb * pc.SECPERDAY / pc.TWOPI) ** (1.0 / 3.0) * \
pc.Tsun ** (2.0 / 3.0) * (Mp + Mc) ** (-4.0 / 3.0) * Mc * (Mp + 2.0 * Mc)
def PBDOT(porb, e, Mp, Mc):
"""
PBDOT(porb, e, Mp, Mc):
Return the predicted orbital period derivative (s/s) given the
        orbital period (days), eccentricity, and pulsar and companion masses.
"""
return -192.0 * pc.PI / 5.0 * (porb * pc.SECPERDAY / pc.TWOPI) ** (-5.0 / 3.0) * \
(1.0 + 73.0 / 24.0 * e ** 2.0 + 37.0 / 96.0 * e ** 4.0) * \
(1.0 - e ** 2.0) ** (-7.0 / 2.0) * pc.Tsun ** (5.0 / 3.0) * \
Mp * Mc * (Mp + Mc) ** (-1.0 / 3.0)
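# Hedged example (not part of the original module): for a compact, eccentric
# double-neutron-star system with roughly Hulse-Taylor-like parameters
# (pb ~ 0.323 d, e ~ 0.617, Mp ~ 1.44, Mc ~ 1.39), these should give an
# advance of periastron of a few deg/yr and a Pbdot of order -1e-12 s/s:
#   wdot = OMDOT(0.323, 0.617, 1.44, 1.39)    # deg/yr
#   gam = GAMMA(0.323, 0.617, 1.44, 1.39)     # sec
#   pbdot = PBDOT(0.323, 0.617, 1.44, 1.39)   # s/s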
def OMDOT_to_Mtot(OMDOT, porb, e):
"""
OMDOT_to_Mtot(OMDOT, porb, e):
Return the total mass (in solar units) of a system given an advance
of periastron (OMDOT) in deg/yr. The orbital period should be in days.
"""
wd = OMDOT / pc.SECPERJULYR * pc.DEGTORAD # rad/s
return (wd / 3.0 * (1.0 - e * e) * (porb * pc.SECPERDAY / \
pc.TWOPI) ** (5.0 / 3.0)) ** (3.0 / 2.0) / pc.Tsun
def GAMMA_to_Mc(gamma, porb, e, Mp):
"""
GAMMA_to_Mc(gamma, porb, e, Mp):
Given the relativistic gamma in sec, the orbital period in days,
the eccentricity and the pulsar mass in solar units, return the
predicted companion mass.
"""
def funct(mc, mp=Mp, porb=porb, e=e, gamma=gamma):
return GAMMA(porb, e, mp, mc) - gamma
return zeros.bisect(funct, 0.01, 20.0)
def shklovskii_effect(pm, D):
"""
shklovskii_effect(pm, D):
Return the 'acceleration' due to the transverse Doppler effect
(i.e. the Shklovskii Effect) given the proper motion (pm) in mas/yr
and the distance (D) in kpc. Note: What is returned is a_pm/C,
or equivalently, Pdot_pm/P.
"""
return (pm / 1000.0 * pc.ARCSECTORAD / pc.SECPERJULYR) ** 2.0 * \
pc.KMPERKPC * D / (pc.C / 1000.0)
def galactic_accel_simple(l, b, D, v_o=240.0, R_o=8.34):
"""
galactic_accel_simple(l, b, D, v_o=240.0, R_o = 8.34):
Return the approximate projected acceleration/c (in s^-1)
(a_p - a_ssb) dot n / c, where a_p and a_ssb are acceleration
        vectors, and n is the los vector.  This assumes a simple spherically
        symmetric isothermal sphere with circular velocity v_o (in km/s) and
        distance R_o (in kpc) from the SSB to the center of the sphere.  l and
b are the galactic longitude and latitude (in deg) respectively,
and D is the distance in kpc. This is eqn 2.4 of Phinney 1992.
The default v_o and R_o values are from Reid et al 2014.
"""
A_sun = v_o * v_o / (pc.C / 1000.0 * R_o * pc.KMPERKPC)
d = D / R_o
cbcl = Num.cos(b * pc.DEGTORAD) * Num.cos(l * pc.DEGTORAD)
return -A_sun * (cbcl + (d - cbcl) / (1.0 + d * d - 2.0 * d * cbcl))
def galactic_accel(l, b, D, v_o=240.0, R_o=8.34):
"""
galactic_accel(l, b, D, v_o=240.0, R_o = 8.34):
Return the approximate projected acceleration/c (in s^-1)
(a_p - a_ssb) dot n / c, where a_p and a_ssb are acceleration
        vectors, and n is the los vector.  This assumes circular velocity
        v_o (in km/s) and distance R_o (in kpc) to the center of the Galaxy.  l and
b are the galactic longitude and latitude (in deg) respectively,
and D is the distance in kpc. This is eqn 5 of Nice & Taylor 1995.
The default v_o and R_o values are from Reid et al 2014.
"""
A_sun = v_o * v_o / (pc.C / 1000.0 * R_o * pc.KMPERKPC)
cb = Num.cos(b * pc.DEGTORAD)
cl = Num.cos(l * pc.DEGTORAD)
sl = Num.sin(l * pc.DEGTORAD)
beta = D / R_o * cb - cl
return -A_sun * cb * (cl + beta / (sl ** 2 + beta ** 2))
def gal_z_accel(l, b, D):
"""
gal_z_accel(l, b, D):
Return the approximate projected acceleration/c (in s^-1)
(a_p - a_ssb) dot n / c, where a_p and a_ssb are acceleration
vectors, and n is the los vector, caused by the acceleration
of the pulsar towards the plane of the galaxy. l and b are
the galactic longitude and latitude (in deg) respectively, and D
is the distance in kpc. This is eqn 3+4 of Nice & Taylor 1995.
"""
sb = Num.sin(b * pc.DEGTORAD)
z = D * sb
az = 1.08e-19 * (1.25 * z / Num.sqrt(z ** 2 + 0.0324) + 0.58 * z)
return az * sb
def beam_halfwidth(obs_freq, dish_diam):
"""
beam_halfwidth(obs_freq, dish_diam):
Return the telescope beam halfwidth in arcmin
            'obs_freq' = the observing frequency in MHz
'dish_diam' = the telescope diameter in m
"""
return 1.2 * pc.SOL / (obs_freq * 10.0 ** 6) / dish_diam * pc.RADTODEG * 60 / 2
def limiting_flux_dens(Ttot, G, BW, T, P=0.01, W=0.05, polar=2, factor=15.0):
"""
limiting_flux_dens(Ttot, G, BW, T, P=0.01, W=0.05, polar=2, factor=15.0):
Return the approximate limiting flux density for a pulsar
        survey in mJy based on the following characteristics:
'Ttot' = sky + system temperature (K)
'G' = forward gain of the antenna (K/Jy)
'BW' = observing bandwidth (MHz)
'T' = integration time (s)
'P' = pulsar period (s) (default = 0.01)
'W' = duty cycle of pulsar (0-1) (default = 0.05)
'polar' = number of polarizations (default = 2)
            'factor' = normalization factor that takes into account
limiting SNR, hardware limitations etc. (default = 15.0)
Note: This is a _very_ approximate calculation. For a better
calculation, see Cordes and Chernoff, ApJ, 482, p971, App. A.
Observatories:
Parkes Multibeam: Tsys = 21 K, G = 0.735 K/Jy
"""
w = W * P
return Num.sqrt(w / ((P - w) * polar * BW * T)) * factor * Ttot / G
def dm_info(dm=None, dmstep=1.0, freq=1390.0, numchan=512, chanwidth=0.5):
"""
dm_info(dm=None, dmstep=1.0, freq=1390.0, numchan=512, chanwidth=0.5):
Return info about potential DM smearing during an observation.
"""
BW = chanwidth * numchan
print(" Center freq (MHz) = %.3f" % (freq))
print(" Number of channels = %d" % (numchan))
print(" Channel width (MHz) = %.3g" % (chanwidth))
print(" Total bandwidth (MHz) = %.3g" % (BW))
print(" DM offset (0.5*step) = %.3g" % (0.5 * dmstep))
print(" Smearing over BW (ms) = %.3g" % \
(1000.0 * dm_smear(0.5 * dmstep, BW, freq)))
if (dm):
print(" Smearing per chan (ms) = %.3g" % \
(1000.0 * dm_smear(dm, chanwidth, freq)))
def best_dm_step(maxsmear=0.1, dt=0.00080, dm=0.0, freq=1390.0, numchan=512, chanwidth=0.5):
"""
best_dm_step(maxsmear=0.1, dt=0.00080, dm=0.0, freq=1390.0, numchan=512, chanwidth=0.5):
Return the required DM step to keep the total smearing below 'maxsmear' (in ms).
"""
BW = chanwidth * numchan
tau_tot = maxsmear / 1000.0
tau_chan = dm_smear(dm, chanwidth, freq)
tau_samp = dt
if (tau_tot ** 2.0 < (tau_chan ** 2.0 + tau_samp ** 2.0)):
print("The requested total smearing is smaller than one or more of the components.")
return 0.0
else:
return 0.0001205 * freq ** 3.0 * 2.0 / BW * Num.sqrt(tau_tot ** 2.0 - tau_chan ** 2.0 - tau_samp ** 2.0)
def dm_smear(dm, BW, center_freq):
"""
dm_smear(dm, BW, center_freq):
Return the smearing in sec caused by a 'dm' over a bandwidth
of 'BW' MHz centered at 'center_freq' MHz.
"""
return dm * BW / (0.0001205 * center_freq * center_freq * center_freq)
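# Hedged example (not part of the original module): the smearing across a
# single 0.5 MHz channel at 1400 MHz for DM = 100 pc cm^-3 is
#   dm_smear(100.0, 0.5, 1400.0)   # ~1.5e-4 s, i.e. ~0.15 ms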
def diagonal_DM(dt, chanBW, center_freq):
"""
diagonal_DM(dt, chanBW, center_freq):
Return the so-called "diagonal DM" where the smearing across
one channel is equal to the sample time.
"""
return (0.0001205 * center_freq * center_freq * center_freq) * dt / chanBW
def pulse_broadening(DM, f_ctr):
"""
pulse_broadening(DM, f_ctr):
Return the approximate pulse broadening (tau) in ms due to scattering
based on the rough relation in Cordes' 'Pulsar Observations I' paper.
'f_ctr' should be in MHz. The approximate error is 0.65 in log(tau).
"""
logDM = Num.log10(DM)
return 10.0 ** (-3.59 + 0.129 * logDM + 1.02 * logDM ** 2.0 -
4.4 * Num.log10(f_ctr / 1000.0)) / 1000.0
def rrat_period(times, numperiods=20, output=True):
"""
rrat_period(times, numperiods=20, output=True):
Try to determine a RRAT pulse period using a brute force
search when the input times are (real!) single-pulse
arrival times. numperiods is the number of integer pulses
to try between the first two pulses. If output is True,
print some diagnostic information
"""
ts = Num.asarray(sorted(times))
ps = (ts[1] - ts[0]) / Num.arange(1, numperiods + 1)
dts = Num.diff(ts)
xs = dts / ps[:, Num.newaxis]
metric = Num.sum(Num.fabs((xs - xs.round())), axis=1)
pnum = metric.argmin()
numrots = xs.round()[pnum].sum()
p = (ts[-1] - ts[0]) / numrots
if output:
print("Min, avg, std metric values are %.4f, %.4f, %.4f" % \
(metric.min(), metric.mean(), metric.std()))
print(" Approx period is likely:", ps[pnum])
print("Refined period is likely:", p)
print("Rotations between pulses are:")
print(dts / p)
return p
def rrat_period_multiday(days_times, numperiods=20, output=True):
"""
rrat_period_multiday(days_times, numperiods=20, output=True):
Try to determine a RRAT pulse period using a brute force
search when the input times are (real!) single-pulse
arrival times. numperiods is the maximum number of periods
        to try in the smallest interval between pulses.
If output is True, print some diagnostic information.
days_times should be a list where each entry is the list
you would pass to rrat_period for a single day/observation.
e.g.
[[times, from, one, day], [times from, another, day], ...]
"""
all_dt = []
for times in days_times:
daily_dt = Num.diff(sorted(times))
all_dt.extend(daily_dt.tolist())
dts = Num.asarray(sorted(all_dt))
ps = dts[0] / Num.arange(1, numperiods + 1)
xs = dts / ps[:, Num.newaxis]
metric = Num.sum(Num.fabs((xs - xs.round())), axis=1)
pnum = metric.argmin()
numrots = xs.round()[pnum].sum()
p = dts.sum() / numrots
if output:
print("Min, avg, std metric values are %.4f, %.4f, %.4f" % \
(metric.min(), metric.mean(), metric.std()))
print(" Approx period is likely:", ps[pnum])
print("Refined period is likely:", p)
print("Rotations between pulses are:")
print(dts / p)
return p
def guess_DMstep(DM, dt, BW, f_ctr):
"""
guess_DMstep(DM, dt, BW, f_ctr):
Choose a reasonable DMstep by setting the maximum smearing across the
'BW' to equal the sampling time 'dt'.
"""
return dt * 0.0001205 * f_ctr ** 3.0 / (0.5 * BW)
def delay_from_DM(DM, freq_emitted):
"""
Return the delay in seconds caused by dispersion, given
a Dispersion Measure (DM) in cm-3 pc, and the emitted
frequency (freq_emitted) of the pulsar in MHz.
"""
if (type(freq_emitted) == type(0.0)):
if (freq_emitted > 0.0):
return DM / (0.000241 * freq_emitted * freq_emitted)
else:
return 0.0
else:
return Num.where(freq_emitted > 0.0,
DM / (0.000241 * freq_emitted * freq_emitted), 0.0)
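# Hedged example (not part of the original module): the cold-plasma dispersion
# delay at 1400 MHz for DM = 100 pc cm^-3 is
#   delay_from_DM(100.0, 1400.0)    # ~0.212 s
# and the delay across a band follows by differencing two frequencies:
#   delay_from_DM(100.0, 1200.0) - delay_from_DM(100.0, 1600.0)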
def delay_from_foffsets(df, dfd, dfdd, times):
"""
Return the delays in phase caused by offsets in
frequency (df), and two frequency derivatives (dfd, dfdd)
at the given times in seconds.
"""
f_delays = df * times
fd_delays = dfd * times ** 2 / 2.0
fdd_delays = dfdd * times ** 3 / 6.0
return (f_delays + fd_delays + fdd_delays)
def smear_plot(dm=[1.0, 1000.0], dmstep=1.0, subdmstep=10.0, freq=1390.0,
numchan=512, numsub=32, chanwidth=0.5, dt=0.000125,
device='/xwin'):
"""
smear_plot(dm=[0.0,1000.0], dmstep=1.0, subdmstep=10.0, freq=1390.0,
numchan=512, numsub=32, chanwidth=0.5, dt=0.000125,
device='/xwin'):
Show a plot that displays the expected smearing in ms
from various effects during a radio pulsar search.
"""
numpts = 500
BW = numchan * chanwidth
subBW = numchan / numsub * chanwidth
maxDMerror = 0.5 * dmstep
maxsubDMerror = 0.5 * subdmstep
ldms = span(Num.log10(dm[0]), Num.log10(dm[1]), numpts)
dms = 10.0 ** ldms
# Smearing from sample rate
dts = Num.zeros(numpts) + 1000.0 * dt
# Smearing due to the intrinsic channel width
chan_smear = 1000.0 * dm_smear(dms, chanwidth, freq)
# Smearing across the full BW due to max DM mismatch
BW_smear = Num.zeros(numpts) + \
1000.0 * dm_smear(maxDMerror, BW, freq)
# Smearing in each subband due to max DM mismatch
subband_smear = Num.zeros(numpts) + \
1000.0 * dm_smear(maxsubDMerror, subBW, freq)
total_smear = Num.sqrt(dts ** 2.0 + chan_smear ** 2.0 +
subband_smear ** 2.0 + BW_smear ** 2.0)
maxval = Num.log10(2.0 * max(total_smear))
minval = Num.log10(0.5 * min([min(dts), min(chan_smear),
min(BW_smear), min(subband_smear)]))
Pgplot.plotxy(Num.log10(total_smear), ldms, rangey=[minval, maxval],
logx=1, logy=1, labx="Dispersion Measure",
laby="Smearing (ms)", device=device)
ppgplot.pgsch(0.8)
ppgplot.pgmtxt("t", 1.5, 1.0 / 12.0, 0.5, r"\(2156)\dcenter\u = %gMHz" % freq)
ppgplot.pgmtxt("t", 1.5, 3.0 / 12.0, 0.5, r"N\dchan\u = %d" % numchan)
ppgplot.pgmtxt("t", 1.5, 5.0 / 12.0, 0.5, r"N\dsub\u = %d" % numsub)
ppgplot.pgmtxt("t", 1.5, 7.0 / 12.0, 0.5, r"BW\dchan\u = %gMHz" % chanwidth)
ppgplot.pgmtxt("t", 1.5, 9.0 / 12.0, 0.5, r"\gDDM = %g" % dmstep)
ppgplot.pgmtxt("t", 1.5, 11.0 / 12.0, 0.5, r"\gDDM\dsub\u = %g" % subdmstep)
ppgplot.pgsch(1.0)
ppgplot.pgmtxt("b", -7.5, 0.95, 1.0, "Total")
Pgplot.plotxy(Num.log10(dts), ldms, color="green",
logx=1, logy=1)
ppgplot.pgmtxt("b", -6.0, 0.95, 1.0, "Sample Rate")
Pgplot.plotxy(Num.log10(chan_smear), ldms, color="purple",
logx=1, logy=1)
ppgplot.pgmtxt("b", -4.5, 0.95, 1.0, "Channel")
Pgplot.plotxy(Num.log10(BW_smear), ldms, color="red",
logx=1, logy=1)
ppgplot.pgmtxt("b", -3.0, 0.95, 1.0, "Full BW")
Pgplot.plotxy(Num.log10(subband_smear), ldms, color="blue",
logx=1, logy=1)
ppgplot.pgmtxt("b", -1.5, 0.95, 1.0, "Subband")
ppgplot.pgsci(1)
def search_sensitivity(Ttot, G, BW, chan, freq, T, dm, ddm, dt, Pmin=0.001,
Pmax=1.0, W=0.1, polar=2, factor=15.0, pts=1000):
"""
(periods, S_min) = search_sensitivity(Ttot, G, BW, chan, freq, T, dm,
ddm, dt, Pmin=0.001, Pmax=1.0, W=0.1, polar=2, factor=15.0, pts=1000):
Return the approximate limiting flux density for a pulsar
        survey in mJy based on the following characteristics:
'Ttot' = sky + system temperature (K)
'G' = forward gain of the antenna (K/Jy)
'BW' = observing bandwidth (MHz)
'chan' = number of channels in the filterbank
'freq' = central observing frequency (MHz)
'T' = integration time (s)
'dm' = Dispersion Measure in pc cm^-3
'ddm' = Dispersion Measure stepsize in pc cm^-3
'dt' = Sample time for each data point in sec
'Pmin' = minimum pulsar period (s) (default = 0.001)
'Pmax' = maximum pulsar period (s) (default = 1.0)
'W' = duty cycle of pulsar (0-1) (default = 0.1)
'polar' = number of polarizations (default = 2)
            'factor' = normalization factor that takes into account
limiting SNR, hardware limitations etc. (default = 15.0)
'pts' = the number of points to calculate
Note: This is a _very_ approximate calculation. For a better
calculation, see Cordes and Chernoff, ApJ, 482, p971, App. A.
Observatories:
Parkes Multibeam: Tsys = 21 K, G = 0.735 K/Jy
"""
periods = span(Pmin, Pmax, pts)
widths = Num.sqrt((W * periods) ** 2.0 +
dm_smear(dm, BW / chan, freq) ** 2.0 + \
dm_smear(ddm / 2.0, BW, freq) ** 2.0 + \
dt ** 2.0) / periods
return (periods, limiting_flux_dens(Ttot, G, BW, T, periods, widths,
polar=polar, factor=factor))
def smin_noise(Ttot, G, BW, dt):
"""
smin_noise(Ttot, G, BW, dt):
Return the 1 sigma Gaussian noise level (mJy) for each time
series bin in a pulsar data simulation. Default is for a
sinusoidal pulse (i.e. W = P / 2) with freq << Nyquist freq.
'Ttot' = sky + system temperature (K)
'G' = forward gain of the antenna (K/Jy)
'BW' = observing bandwidth (MHz)
'dt' = time per time series bin (s)
Observatories:
Parkes Multibeam: Tsys = 21 K, G = 0.735 K/Jy
"""
return Ttot / (G * Num.sqrt(2 * BW * dt))
def read_profile(filenm, normalize=0):
"""
read_profile(filenm, normalize=0):
Read a simple ASCII profile with one bin per line
from the file 'filenm'. Comments are allowed
if they begin with '#'. The profile is pseudo-
normalized if 'normalize' is true.
"""
prof = []
for line in open(filenm):
if line.startswith("#"):
continue
else:
prof.append(float(line.split()[-1]))
prof = Num.asarray(prof)
if normalize:
prof -= min(prof)
prof /= max(prof)
return prof
def calc_phs(MJD, refMJD, *args):
"""
calc_phs(MJD, refMJD, *args):
Return the rotational phase (0-1) at MJD (can be an array)
given a reference MJD and the rotational freq (f0) and
optional freq derivs (f1...) as ordered in the *args
list (e.g. [f0, f1, f2, ...]).
"""
t = (MJD - refMJD) * pc.SECPERDAY
n = len(args) # polynomial order
nargs = Num.concatenate(([0.0], args))
taylor_coeffs = Num.concatenate(([0.0],
Num.cumprod(1.0 / (Num.arange(float(n)) + 1.0))))
p = Num.poly1d((taylor_coeffs * nargs)[::-1])
return Num.fmod(p(t), 1.0)
def calc_freq(MJD, refMJD, *args):
"""
calc_freq(MJD, refMJD, *args):
Return the instantaneous frequency at an MJD (can be an array)
given a reference MJD and the rotational freq (f0) and
optional freq derivs (f1...) as ordered in the *args
list (e.g. [f0, f1, f2, ...]).
"""
t = (MJD - refMJD) * pc.SECPERDAY
n = len(args) # polynomial order
taylor_coeffs = Num.concatenate(([1.0],
Num.cumprod(1.0 / (Num.arange(float(n - 1)) + 1.0))))
p = Num.poly1d((taylor_coeffs * args)[::-1])
return p(t)
def calc_t0(MJD, refMJD, *args):
"""
calc_t0(MJD, refMJD, *args):
Return the closest previous MJD corresponding to phase=0 of the pulse.
*args are the spin freq (f0) and optional freq derivs (f1...)
"""
phs = calc_phs(MJD, refMJD, *args)
p = 1.0 / calc_freq(MJD, refMJD, *args)
return MJD - phs * p / pc.SECPERDAY
def write_princeton_toa(toa_MJDi, toa_MJDf, toaerr, freq, dm, obs='@', name=' ' * 13):
"""
Princeton Format
columns item
1-1 Observatory (one-character code) '@' is barycenter
2-2 must be blank
16-24 Observing frequency (MHz)
25-44 TOA (decimal point must be in column 30 or column 31)
45-53 TOA uncertainty (microseconds)
69-78 DM correction (pc cm^-3)
"""
# Splice together the fractional and integer MJDs
toa = "%5d" % int(toa_MJDi) + ("%.13f" % toa_MJDf)[1:]
if dm != 0.0:
print(obs + " %13s %8.3f %s %8.2f %9.4f" % \
(name, freq, toa, toaerr, dm))
else:
print(obs + " %13s %8.3f %s %8.2f" % \
(name, freq, toa, toaerr))
def write_tempo2_toa(toa_MJDi, toa_MJDf, toaerr, freq, dm, obs='@', name='unk', flags=""):
"""
Write Tempo2 format TOAs.
Note that first line of file should be "FORMAT 1"
TOA format is "file freq sat satErr siteID <flags>"
"""
toa = "%5d" % int(toa_MJDi) + ("%.13f" % toa_MJDf)[1:]
if dm != 0.0:
flags += "-dm %.4f" % (dm,)
print("%s %f %s %.2f %s %s" % (name, freq, toa, toaerr, obs, flags))
def rotate(arr, bins):
"""
rotate(arr, bins):
Return an array rotated by 'bins' places to the left
"""
bins = int(bins) % len(arr)
if bins==0:
return arr
else:
return Num.concatenate((arr[bins:], arr[:bins]))
def interp_rotate(arr, bins, zoomfact=10):
"""
interp_rotate(arr, bins, zoomfact=10):
Return a sinc-interpolated array rotated by 'bins' places to the left.
'bins' can be fractional and will be rounded to the closest
whole-number of interpolated bins. The resulting vector will
        have the same length as the original.
"""
newlen = len(arr) * zoomfact
rotbins = int(Num.floor(bins * zoomfact + 0.5)) % newlen
newarr = sinc_interp.periodic_interp(arr, zoomfact)
return rotate(newarr, rotbins)[::zoomfact]
def fft_rotate(arr, bins):
"""
fft_rotate(arr, bins):
Return array 'arr' rotated by 'bins' places to the left. The
rotation is done in the Fourier domain using the Shift Theorem.
'bins' can be fractional. The resulting vector will have
the same length as the original.
"""
arr = Num.asarray(arr)
freqs = Num.arange(arr.size / 2 + 1, dtype=float)
phasor = Num.exp(complex(0.0, pc.TWOPI) * freqs * bins / float(arr.size))
return Num.fft.irfft(phasor * Num.fft.rfft(arr), arr.size)
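# Hedged example (not part of the original module): rotate() shifts by whole
# bins, while fft_rotate() and interp_rotate() allow fractional shifts:
#   rotate(Num.array([1., 2., 3., 4., 5.]), 2)        # -> [3., 4., 5., 1., 2.]
#   fft_rotate(Num.array([1., 2., 3., 4., 5.]), 0.5)  # half-bin shift to the left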
def corr(profile, template):
"""
corr(profile, template):
Cross-correlate (using FFTs) a 'profile' and a 'template'.
"""
return FFT.irfft(FFT.rfft(template) * Num.conjugate(FFT.rfft(profile)),
profile.size)
def autocorr(x):
"""
autocorr(x):
Circular normalized auto-correlation of the (real) function x
using FFTs. Returns only N/2+1 points as the remaining N/2-1
points are symmetric (corresponding to negative lags).
"""
fftx = FFT.rfft(x)
acf = FFT.irfft(fftx * Num.conjugate(fftx), x.size)[:len(x) // 2 + 1]
return acf / acf[0]
def maxphase(profile, template):
"""
maxphase(profile, template):
Return the phase offset required to get the 'profile' to best
match the 'template'.
"""
return float(Num.argmax(corr(profile, template))) / len(profile)
def linear_interpolate(vector, zoom=10):
"""
linear_interpolate(vector, zoom=10):
Linearly interpolate 'vector' by a factor of 'zoom'.
"""
n = len(vector)
ivect = Num.zeros(zoom * n, dtype='d')
nvect = Num.concatenate((vector, vector[:1]))
ivals = Num.arange(zoom, dtype='d') / zoom
loy = nvect[0]
for ii in range(n):
hiy = nvect[ii + 1]
ivect[ii * zoom:(ii + 1) * zoom] = ivals * (hiy - loy) + loy
loy = hiy
return ivect
def downsample(vector, factor):
"""
downsample(vector, factor):
Downsample (i.e. co-add consecutive numbers) a short section
of a vector by an integer factor.
"""
if (len(vector) % factor):
print("Length of 'vector' is not divisible by 'factor'=%d!" % factor)
return 0
newvector = Num.reshape(vector, (len(vector) // factor, factor))
return Num.add.reduce(newvector, 1)
def measure_phase_corr(profile, template, zoom=10):
"""
measure_phase_corr(profile, template, zoom=10):
Return the phase offset required to get the 'profile' to best
match the 'template', each of which has been interpolated
by a factor of 'zoom'.
"""
zoomprof = zoomtemp = zoom
if (len(template) != len(profile)):
if (len(template) % len(profile) == 0):
zoomprof = zoom * len(template) // len(profile)
else:
print("Warning!: The lengths of the template (%d) and profile (%d)" % \
(len(template), len(profile)))
print(" are not the same!")
# itemp = linear_interpolate(rotate(template, Num.argmax(template)), zoomtemp)
itemp = linear_interpolate(template, zoomtemp)
iprof = linear_interpolate(profile, zoomprof)
return maxphase(iprof, itemp)
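# Hedged example (not part of the original module): recover a known shift by
# cross-correlating a rotated profile against its template,
#   template = gaussian_profile(128, 0.0, 0.05)
#   profile = fft_rotate(template, 128 * 0.3)      # shift left by 0.3 in phase
#   phs = measure_phase_corr(profile, template)    # should come out near 0.3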
def spike_profile(N, phase, fwhm):
"""
spike_profile(N, phase, fwhm):
Return a triangular pulse profile with 'N' bins and
an integrated 'flux' of 1 unit.
'N' = the number of points in the profile
'phase' = the pulse phase (0-1)
            'fwhm' = the triangular pulse's full width at half-max
"""
phsval = Num.arange(N, dtype='d') / float(N)
peakst = 0.5 - fwhm
peakend = 0.5 + fwhm
normalize = 1.0 / fwhm
    # Fold the requested pulse phase into [0, 1); it is used below to wrap
    # phsval around the pulse (mirrors the handling in gaussian_profile)
    mean = phase % 1.0
    if (mean < 0.5):
        phsval = Num.where(Num.greater(phsval, mean + 0.5),
                           phsval - 1.0, phsval)
    else:
        phsval = Num.where(Num.less(phsval, mean - 0.5),
                           phsval + 1.0, phsval)
return Num.where(Num.less_equal(phsval, 0.5),
Num.where(Num.less_equal(phsval, peakst),
0.0, (phsval - peakst) *
normalize * normalize),
Num.where(Num.greater(phsval, peakend),
0.0, (1.0 - (phsval - 0.5) *
normalize) * normalize))
def harm_to_sum(fwhm):
"""
harm_to_sum(fwhm):
For an MVMD profile returns the optimal number
of harmonics to sum incoherently
"""
fwhms = [0.0108, 0.0110, 0.0113, 0.0117, 0.0119, 0.0124, 0.0127, 0.0132,
0.0134, 0.0140, 0.0145, 0.0151, 0.0154, 0.0160, 0.0167, 0.0173,
0.0180, 0.0191, 0.0199, 0.0207, 0.0220, 0.0228, 0.0242, 0.0257,
0.0273, 0.0295, 0.0313, 0.0338, 0.0366, 0.0396, 0.0437, 0.0482,
0.0542, 0.0622, 0.0714, 0.0836, 0.1037, 0.1313, 0.1799, 0.2883]
return len(fwhms) - bisect.bisect(fwhms, fwhm) + 1
def expcos_profile(N, phase, fwhm):
"""
expcos_profile(N, phase, fwhm):
Return a pulse profile with 'N' bins and an integrated 'flux'
of 1 unit based on the 'Exponentiated Sinusoid'.
'N' = the number of points in the profile
'phase' = the pulse phase (0-1)
'fwhm' = pulse full width at half-max (0.0 < fwhm <= 0.5)
"""
    from presto.simple_roots import secant
def fwhm_func(k, fwhm=fwhm):
if (fwhm < 0.02):
return Num.arccos(1.0 - Num.log(2.0) / k) / pc.PI - fwhm
else:
return Num.arccos(Num.log(0.5 * (Num.exp(k) +
Num.exp(-k))) / k) / pc.PI - fwhm
phsval = pc.TWOPI * Num.arange(N, dtype='d') / float(N)
phi = -phase * pc.TWOPI
if (fwhm >= 0.5):
return Num.cos(phsval + phi) + 1.0
elif (fwhm < 0.02):
        # The following is from the expansion of I0(x) as x -> Infinity.
k = Num.log(2.0) / (1.0 - Num.cos(pc.PI * fwhm))
# print("Expansion: k = %f FWHM = %f" % (k, fwhm_func(k, 0.0)))
phsval = Num.fmod(phsval + phi, pc.TWOPI)
phsval = Num.where(Num.greater(phsval, pc.PI),
phsval - pc.TWOPI, phsval)
denom = ((1 + 1 / (8 * k) + 9 / (128 * k * k) + 75 / (1024 * k ** 3) +
3675 / (32768 * k ** 4) + 59535 / (262144 * k ** 5)) / Num.sqrt(pc.TWOPI * k))
return Num.where(Num.greater(Num.fabs(phsval / pc.TWOPI), 3.0 * fwhm), 0.0,
Num.exp(k * (Num.cos(phsval) - 1.0)) / denom)
else:
k = secant(fwhm_func, 1e-8, 0.5)
norm = 1.0 / (i0(k) - Num.exp(-k))
# print("Full Calc: k = %f FWHM = %f" % (k, fwhm_func(k, 0.0)))
if (k < 0.05):
tmp = Num.cos(phsval + phi)
tmp2 = tmp * tmp
return norm * (k * (tmp + 1) +
k * k * (tmp2 - 1.0) / 2.0 +
k * k * k * (tmp2 * tmp + 1.0) / 6.0)
else:
return norm * (Num.exp(k * Num.cos(phsval + phi)) -
Num.exp(-k))
def read_gaussfitfile(gaussfitfile, proflen):
"""
read_gaussfitfile(gaussfitfile, proflen):
Read a Gaussian-fit file as created by the output of pygaussfit.py.
The input parameters are the name of the file and the number of
bins to include in the resulting template file. A numpy array
of that length is returned.
"""
phass = []
ampls = []
fwhms = []
for line in open(gaussfitfile):
if line.lstrip().startswith("phas"):
phass.append(float(line.split()[2]))
if line.lstrip().startswith("ampl"):
ampls.append(float(line.split()[2]))
if line.lstrip().startswith("fwhm"):
fwhms.append(float(line.split()[2]))
if not (len(phass) == len(ampls) == len(fwhms)):
print("Number of phases, amplitudes, and FWHMs are not the same in '%s'!" % gaussfitfile)
return 0.0
phass = Num.asarray(phass)
ampls = Num.asarray(ampls)
fwhms = Num.asarray(fwhms)
# Now sort them all according to decreasing amplitude
new_order = Num.argsort(ampls)
new_order = new_order[::-1]
ampls = Num.take(ampls, new_order)
phass = Num.take(phass, new_order)
fwhms = Num.take(fwhms, new_order)
# Now put the biggest gaussian at phase = 0.0
phass = phass - phass[0]
phass = Num.where(phass < 0.0, phass + 1.0, phass)
template = Num.zeros(proflen, dtype='d')
for ii in range(len(ampls)):
template += ampls[ii] * gaussian_profile(proflen, phass[ii], fwhms[ii])
return template
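# Illustrative note (not part of the original module): read_gaussfitfile()
# takes the third whitespace-separated token of each 'phas'/'ampl'/'fwhm'
# line, so the file is assumed to contain lines like (hypothetical values):
#   phas1 = 0.213
#   ampl1 = 1.507
#   fwhm1 = 0.032
# e.g.  template = read_gaussfitfile("cand.gaussians", 128)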
def gaussian_profile(N, phase, fwhm):
"""
gaussian_profile(N, phase, fwhm):
Return a gaussian pulse profile with 'N' bins and
an integrated 'flux' of 1 unit.
'N' = the number of points in the profile
'phase' = the pulse phase (0-1)
        'fwhm' = the gaussian pulse's full width at half-max
Note: The FWHM of a gaussian is approx 2.35482 sigma
"""
sigma = fwhm / 2.35482
mean = phase % 1.0 # Ensures between 0-1
phss = Num.arange(N, dtype=Num.float64) / N - mean
# Following two lines allow the Gaussian to wrap in phase
phss[phss > 0.5] -= 1.0
phss[phss < -0.5] += 1.0
zs = Num.fabs(phss) / sigma
# The following avoids overflow by truncating the Gaussian at 20 sigma
return Num.where(zs < 20.0, Num.exp(-0.5 * zs ** 2.0) / \
(sigma * Num.sqrt(2 * Num.pi)), 0.0)
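# Quick check (illustrative): the profile integrates to ~1 unit of 'flux'
# over one turn of phase, so the mean bin value is ~1:
#   prof = gaussian_profile(128, 0.25, 0.05)
#   prof.sum() / 128   # ~= 1.0, up to sampling error
#   prof.argmax()      # == 32, the bin at phase 0.25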
def gauss_profile_params(profile, output=0):
"""
gauss_profile_params(profile, output=0):
Return parameters of a best-fit gaussian to a profile.
        The function returns a tuple containing the following values:
ret[0] = Best-fit gaussian integrated 'flux'.
ret[1] = Best-fit gaussian FWHM.
ret[2] = Best-fit gaussian phase (0.0-1.0).
ret[3] = Baseline (i.e. noise) average value.
ret[4] = Residuals average value.
ret[5] = Residuals standard deviation.
If 'output' is true, the fit will be plotted and
the return values will be printed.
"""
profile = Num.asarray(profile)
def funct(afpo, profile):
return afpo[0] * gaussian_profile(len(profile), afpo[2], afpo[1]) \
+ afpo[3] - profile
ret = leastsq(funct, [profile.max() - profile.min(),
0.25, profile.argmax() / float(len(profile)),
profile.min()], args=(profile))
if (output):
phases = Num.arange(0.0, 1.0,
1.0 / len(profile)) + 0.5 / len(profile)
Pgplot.plotxy(profile, phases, rangex=[0.0, 1.0],
labx='Pulse Phase', laby='Pulse Intensity')
bestfit = ret[0][0] * gaussian_profile(len(profile),
ret[0][2], ret[0][1]) \
+ ret[0][3]
if (output):
Pgplot.plotxy(bestfit, phases, color='red')
Pgplot.closeplot()
residuals = bestfit - profile
resid_avg = residuals.mean()
resid_std = residuals.std()
if (output):
Pgplot.plotxy(residuals, phases, rangex=[0.0, 1.0],
rangey=[min(residuals) - 2 * resid_std,
max(residuals) + 2 * resid_std],
labx='Pulse Phase', laby='Residuals',
line=None, symbol=3)
ppgplot.pgerrb(6, phases, residuals,
Num.zeros(len(residuals), 'd') + \
resid_std, 2)
Pgplot.plotxy([resid_avg, resid_avg], [0.0, 1.0], line=2)
Pgplot.closeplot()
print("")
print(" Best-fit gaussian integrated 'flux' = ", ret[0][0])
print(" Best-fit gaussian FWHM = ", ret[0][1])
print(" Best-fit gaussian phase (0.0-1.0) = ", ret[0][2])
print(" Baseline (i.e. noise) average = ", ret[0][3])
print(" Residuals average = ", resid_avg)
print(" Residuals standard deviation = ", resid_std)
print("")
return (ret[0][0], ret[0][1], ret[0][2], ret[0][3], resid_avg, resid_std)
def twogauss_profile_params(profile, output=0):
"""
twogauss_profile_params(profile, output=0):
        Return the parameters of the two best-fit gaussians to a profile.
        The function returns a tuple containing the following values:
ret[0] = Best-fit gaussian integrated 'flux'.
ret[1] = Best-fit gaussian FWHM.
ret[2] = Best-fit gaussian phase (0.0-1.0).
ret[3] = Best-fit gaussian integrated 'flux'.
ret[4] = Best-fit gaussian FWHM.
ret[5] = Best-fit gaussian phase (0.0-1.0).
ret[6] = Baseline (i.e. noise) average value.
ret[7] = Residuals average value.
ret[8] = Residuals standard deviation.
If 'output' is true, the fit will be plotted and
the return values will be printed.
"""
def yfunct(afpo, n):
return afpo[0] * gaussian_profile(n, afpo[2], afpo[1]) + \
afpo[3] * gaussian_profile(n, afpo[5], afpo[4]) + afpo[6]
def min_funct(afpo, profile):
return yfunct(afpo, len(profile)) - profile
ret = leastsq(min_funct, [max(profile) - min(profile),
0.05,
Num.argmax(profile) / float(len(profile)),
0.2 * max(profile) - min(profile),
0.1,
Num.fmod(Num.argmax(profile) / float(len(profile)) + 0.5, 1.0),
min(profile)], args=(profile))
if (output):
phases = Num.arange(0.0, 1.0,
1.0 / len(profile)) + 0.5 / len(profile)
Pgplot.plotxy(profile, phases, rangex=[0.0, 1.0],
labx='Pulse Phase', laby='Pulse Intensity')
bestfit = yfunct(ret[0], len(profile))
if (output):
Pgplot.plotxy(bestfit, phases, color='red')
Pgplot.closeplot()
residuals = bestfit - profile
resid_avg = residuals.mean()
resid_std = residuals.std()
if (output):
Pgplot.plotxy(residuals, phases, rangex=[0.0, 1.0],
rangey=[min(residuals) - 2 * resid_std,
max(residuals) + 2 * resid_std],
labx='Pulse Phase', laby='Residuals',
line=None, symbol=3)
ppgplot.pgerrb(6, phases, residuals,
Num.zeros(len(residuals), 'd') + \
resid_std, 2)
Pgplot.plotxy([resid_avg, resid_avg], [0.0, 1.0], line=2)
Pgplot.closeplot()
print("")
print(" Best-fit gaussian integrated 'flux' = ", ret[0][0])
print(" Best-fit gaussian FWHM = ", ret[0][1])
print(" Best-fit gaussian phase (0.0-1.0) = ", ret[0][2])
print(" Best-fit gaussian integrated 'flux' = ", ret[0][3])
print(" Best-fit gaussian FWHM = ", ret[0][4])
print(" Best-fit gaussian phase (0.0-1.0) = ", ret[0][5])
print(" Baseline (i.e. noise) average = ", ret[0][6])
print(" Residuals average = ", resid_avg)
print(" Residuals standard deviation = ", resid_std)
print("")
return (ret[0][0], ret[0][1], ret[0][2], ret[0][3], ret[0][4],
ret[0][5], ret[0][6], resid_avg, resid_std)
def estimate_flux_density(profile, N, dt, Ttot, G, BW, prof_stdev, display=0):
"""
estimate_flux_density(profile, N, dt, Ttot, G, BW, prof_stdev, display=0):
Return an estimate of the flux density (mJy) of a pulsar.
'profile' = the pulse profile you are using
'N' = number of time series bins folded
'dt' = time per time series bin (s)
'Ttot' = sky + system temperature (K)
'G' = forward gain of the antenna (K/Jy)
'BW' = observing bandwidth (MHz)
'prof_stdev' = profile standard deviation
'display' = if set, the gaussian fit plots are shown
Observatories:
Parkes Multibeam: Tsys = 21 K, G = 0.735 K/Jy
"""
(amp, fwhm, phase, offset, resid_avg, resid_std) = \
gauss_profile_params(profile, display)
T = N * dt
norm_fact = (prof_stdev * len(profile)) / \
smin_noise(Ttot, G, BW, T / len(profile))
return Num.add.reduce(profile - offset) / norm_fact
def max_spike_power(FWHM):
"""
max_spike_power(FWHM):
Return the (approx.) ratio of the highest power from a
triangular spike pulse profile to the power from a
perfect sinusoidal pulse profile. In other words, if a
sine gives you a power of 1, what power does a spike profile
give you? Both the spike and the sine are assumed to have
an area under one full pulse of 1 unit. Note: A gaussian
profile gives almost identical powers as a spike profile
of the same width. This expression was determined using
a least-squares fit (Max abs error ~ 0.016).
'FWHM' is the full width at half-max of the spike.
(0.0 < FWHM <= 0.5)
"""
return ((36.4165309504 * FWHM - 32.0107844537) * FWHM \
+ 0.239948319674) * FWHM + 4.00277916584
def num_spike_powers(FWHM):
"""
num_spike_powers(FWHM):
Return the (approx.) number of powers from a triangular spike
pulse profile which are greater than one half the power
perfect sinusoidal pulse profile. Both the spike and the
sine are assumed to have an area under one full pulse of 1 unit.
Note: A gaussian profile gives almost identical numbers of
high powers as a spike profile of the same width. This
expression was determined using a least-squares fit.
(Errors get large as FWHM -> 0).
'FWHM' is the full width at half-max of the spike.
(0.0 < FWHM <= 0.5)
"""
return -3.95499721563e-05 / FWHM ** 2 + 0.562069634689 / FWHM - \
0.683604041138
def incoherent_sum(amps):
"""
incoherent_sum(amps):
Given a series of complex Fourier amplitudes, return a vector
showing the accumulated incoherently-summed powers.
"""
return Num.add.accumulate(Num.absolute(amps) ** 2.0)
def coherent_sum(amps):
"""
coherent_sum(amps):
Given a series of complex Fourier amplitudes, return a vector
showing the accumulated coherently-summed powers.
"""
phss = Num.arctan2(amps.imag, amps.real)
phs0 = phss[0]
phscorr = phs0 - Num.fmod((Num.arange(len(amps), dtype='d') + 1.0) * phs0, pc.TWOPI)
sumamps = Num.add.accumulate(amps * Num.exp(complex(0.0, 1.0) * phscorr))
return Num.absolute(sumamps) ** 2.0
def dft_vector_response(roff, z=0.0, w=0.0, phs=0.0, N=1000):
"""
dft_vector_response(roff, z=0.0, w=0.0, phs=0.0, N=1000):
Return a complex vector addition of N vectors showing the DFT
response for a noise-less signal with Fourier frequency
offset roff, (roff=0 would mean that we are exactly at the
signal freq), average Fourier f-dot, z, and Fourier 2nd
deriv, w. An optional phase in radians can be added.
"""
r0 = roff - 0.5 * z + w / 12.0 # Make symmetric for all z and w
z0 = z - 0.5 * w
us = Num.linspace(0.0, 1.0, N)
phss = 2.0 * Num.pi * (us * (us * (us * w / 6.0 + z0 / 2.0) + r0) + phs)
    return Num.cumsum(Num.exp(complex(0.0, 1.0) * phss)) / N
def prob_power(power):
"""
prob_power(power):
Return the probability for noise to exceed a normalized power
level of 'power' in a power spectrum.
"""
return Num.exp(-power)
def Ftest(chi2_1, dof_1, chi2_2, dof_2):
"""
Ftest(chi2_1, dof_1, chi2_2, dof_2):
Compute an F-test to see if a model with extra parameters is
significant compared to a simpler model. The input values are the
(non-reduced) chi^2 values and the numbers of DOF for '1' the
original model and '2' for the new model (with more fit params).
The probability is computed exactly like Sherpa's F-test routine
(in Ciao) and is also described in the Wikipedia article on the
F-test: http://en.wikipedia.org/wiki/F-test
The returned value is the probability that the improvement in
chi2 is due to chance (i.e. a low probability means that the
new fit is quantitatively better, while a value near 1 means
that the new model should likely be rejected).
"""
delta_chi2 = chi2_1 - chi2_2
delta_dof = dof_1 - dof_2
new_redchi2 = chi2_2 / dof_2
F = (delta_chi2 / delta_dof) / new_redchi2
return fdtrc(delta_dof, dof_2, F)
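# Illustrative example: comparing against a model with 2 extra parameters,
#   Ftest(chi2_1=200.0, dof_1=100, chi2_2=150.0, dof_2=98)
# returns a small probability (<<1), i.e. the extra parameters are warranted,
# while values near 1 mean the more complex model should be rejected.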
def equivalent_gaussian_sigma(p):
"""
equivalent_gaussian_sigma(p):
Return the equivalent gaussian sigma corresponding
to the cumulative gaussian probability p. In other
words, return x, such that Q(x) = p, where Q(x) is the
        cumulative normal distribution.  For very small
        probabilities (log(p) < -30), the asymptotic expansion in
        extended_equiv_gaussian_sigma() is used instead.
    """
if Num.isscalar(p):
logp = Num.log(p)
return ndtri(1.0 - p) if logp > -30.0 else extended_equiv_gaussian_sigma(logp)
    else: # p is an array
return _vec_equivalent_gaussian_sigma(p)
_vec_equivalent_gaussian_sigma = Num.vectorize(equivalent_gaussian_sigma,
doc="Vectorized `equivalent_gaussian_sigma` over p")
def extended_equiv_gaussian_sigma(logp):
"""
extended_equiv_gaussian_sigma(logp):
Return the equivalent gaussian sigma corresponding
to the log of the cumulative gaussian probability logp.
In other words, return x, such that Q(x) = p, where Q(x)
is the cumulative normal distribution. This version uses
the rational approximation from Abramowitz and Stegun,
eqn 26.2.23. Using the log(P) as input gives a much
extended range.
"""
t = Num.sqrt(-2.0 * logp)
num = 2.515517 + t * (0.802853 + t * 0.010328)
denom = 1.0 + t * (1.432788 + t * (0.189269 + t * 0.001308))
return t - num / denom
def log_asymtotic_incomplete_gamma(a, z):
"""
log_asymtotic_incomplete_gamma(a, z):
Return the log of the incomplete gamma function in its
        asymptotic limit as z->infty.  This is from Abramowitz
and Stegun eqn 6.5.32.
"""
x = 1.0
newxpart = 1.0
term = 1.0
ii = 1
while (Num.fabs(newxpart) > 1e-15):
term *= (a - ii)
newxpart = term / z ** ii
x += newxpart
ii += 1
return (a - 1.0) * Num.log(z) - z + Num.log(x)
def log_asymtotic_gamma(z):
"""
log_asymtotic_gamma(z):
        Return the log of the gamma function in its asymptotic limit
as z->infty. This is from Abramowitz and Stegun eqn 6.1.41.
"""
x = (z - 0.5) * Num.log(z) - z + 0.91893853320467267
y = 1.0 / (z * z)
x += (((- 5.9523809523809529e-4 * y
+ 7.9365079365079365079365e-4) * y
- 2.7777777777777777777778e-3) * y
          + 8.3333333333333333333333e-2) / z
return x
def prob_sum_powers(power, nsum):
"""
prob_sum_powers(power, nsum):
Return the probability for noise to exceed 'power' in
the sum of 'nsum' normalized powers from a power spectrum.
"""
# Notes:
# prob_sum_powers(power, nsum)
# = scipy.special.gammaincc(nsum, power)
# = statdists.chi_prob(power*2, nsum*2)
# = scipy.special.chdtrc(nsum*2, power*2)
# = Q(power*2|nsum*2) (from A&S 26.4.19)
# = Gamma(nsum,power)/Gamma(nsum)
# = [Gamma(nsum) - gamma(nsum,power)]/Gamma(nsum)
return chdtrc(2 * nsum, 2.0 * power)
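# Sanity check (illustrative): for a single power this reduces to prob_power(),
# since chdtrc(2, 2*power) = exp(-power); e.g.
#   prob_sum_powers(5.0, 1) ~= Num.exp(-5.0)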
def log_prob_sum_powers(power, nsum):
"""
log_prob_sum_powers(power, nsum):
Return the log of the probability for noise to exceed
'power' in the sum of 'nsum' normalized powers from a
        power spectrum.  This version allows the use of very
        large powers by using asymptotic expansions from
Abramowitz and Stegun Chap 6.
"""
# Notes:
# prob_sum_powers(power, nsum)
# = scipy.special.gammaincc(nsum, power)
# = statdists.chi_prob(power*2, nsum*2)
# = scipy.special.chdtrc(nsum*2, power*2)
# = Q(power*2|nsum*2) (from A&S 26.4.19)
# = Gamma(nsum,power)/Gamma(nsum)
# = [Gamma(nsum) - gamma(nsum,power)]/Gamma(nsum)
#
# For chi^2 dist with dof=2*nsum, mean=dof and var=2*dof
# And our powers are 1/2 what they should be in chi^2 dist
# Set our cutoff above ~10 sigma
thresh = 0.5 * (2 * nsum + 10 * Num.sqrt(4 * nsum)) # (mean + 10*std) / 2
if Num.isscalar(power):
return Num.log(prob_sum_powers(power, nsum)) if power < thresh \
else log_asymtotic_incomplete_gamma(nsum, power) - log_asymtotic_gamma(nsum)
else: # power is an array
return _vec_log_prob_sum_powers(power, nsum)
_vec_log_prob_sum_powers = Num.vectorize(log_prob_sum_powers,
doc="Vectorized `log_prob_sum_powers` over powers")
def sigma_power(power):
"""
sigma_power(power):
Return the approximate equivalent Gaussian sigma for noise
to exceed a normalized power level given as 'power'
in a power spectrum.
"""
if Num.isscalar(power):
return Num.sqrt(2.0 * power - Num.log(pc.PI * power)) if power > 36.0 \
else equivalent_gaussian_sigma(prob_power(power))
else: # power is an array
return _vec_sigma_power(power)
_vec_sigma_power = Num.vectorize(sigma_power,
doc="Vectorized `sigma_power` over powers")
def sigma_sum_powers(power, nsum):
"""
sigma_sum_powers(power, nsum):
Return the approximate equivalent Gaussian sigma for noise
to exceed a sum of 'nsum' normalized powers given by 'power'
in a power spectrum.
"""
# For chi^2 dist with dof=2*nsum, mean=dof and var=2*dof
# And our powers are 1/2 what they should be in chi^2 dist
# Set our cutoff above ~10 sigma
thresh = 0.5 * (2 * nsum + 10 * Num.sqrt(4 * nsum)) # (mean + 10*std) / 2
if Num.isscalar(power):
return equivalent_gaussian_sigma(prob_sum_powers(power, nsum)) if power < thresh \
else extended_equiv_gaussian_sigma(log_prob_sum_powers(power, nsum))
else: # power is an array
return _vec_sigma_sum_powers(power, nsum)
_vec_sigma_sum_powers = Num.vectorize(sigma_sum_powers,
doc="Vectorized `sigma_sum_powers` over powers")
def power_at_sigma(sigma):
"""
power_at_sigma(sigma):
Return the approximate normalized power level that is
equivalent to a detection of significance 'sigma'.
"""
return sigma ** 2 / 2.0 + Num.log(Num.sqrt(pc.PIBYTWO)
* sigma)
def powersum_at_sigma(sigma, nsum):
"""
powersum_at_sigma(sigma, nsum):
Return the approximate sum of 'nsum' normalized powers that is
equivalent to a detection of significance 'sigma'.
"""
return 0.5 * chdtri(2.0 * nsum, 1.0 - ndtr(sigma))
def cand_sigma(N, power):
"""
cand_sigma(N, power):
Return the sigma of a candidate found in a power
spectrum of 'N' bins after taking into account the
number of bins searched.
"""
return ndtri(1.0 - N * prob_power(power))
def fft_max_pulsed_frac(N, numphot, sigma=3.0):
"""
fft_max_pulsed_frac(N, numphot, sigma=3.0):
Return the approximate maximum pulsed fraction for a
sinusoidal signal that _wasn't_ found in a FFT-based
search. 'N' is the number of bins searched in the FFT.
'numphot' is the number of photons present in the data.
And 'sigma' is your confidence (in sigma) that you
have in expressing this limit.
"""
# The following is the power level required to get a
# noise spike that would appear somewhere in N bins
# at the 'sigma' level
power_required = -Num.log((1.0 - ndtr(sigma)) / N)
return Num.sqrt(4.0 * numphot * power_required) / N
def p_to_f(p, pd, pdd=None):
"""
p_to_f(p, pd, pdd=None):
Convert period, period derivative and period second
derivative to the equivalent frequency counterparts.
Will also convert from f to p.
"""
f = 1.0 / p
fd = -pd / (p * p)
if (pdd is None):
return [f, fd]
else:
if (pdd == 0.0):
fdd = 0.0
else:
fdd = 2.0 * pd * pd / (p ** 3.0) - pdd / (p * p)
return [f, fd, fdd]
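# Worked example (illustrative, Crab-like numbers): a pulsar with
# p = 0.033 s and pd = 4.2e-13 s/s gives
#   p_to_f(0.033, 4.2e-13)  ->  [~30.3 Hz, ~-3.9e-10 Hz/s]
# since f = 1/p and fd = -pd/p**2.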
def pferrs(porf, porferr, pdorfd=None, pdorfderr=None):
"""
pferrs(porf, porferr, pdorfd=None, pdorfderr=None):
Calculate the period or frequency errors and
the pdot or fdot errors from the opposite one.
"""
if (pdorfd is None):
return [1.0 / porf, porferr / porf ** 2.0]
else:
forperr = porferr / porf ** 2.0
fdorpderr = Num.sqrt((4.0 * pdorfd ** 2.0 * porferr ** 2.0) / porf ** 6.0 +
pdorfderr ** 2.0 / porf ** 4.0)
[forp, fdorpd] = p_to_f(porf, pdorfd)
return [forp, forperr, fdorpd, fdorpderr]
def pdot_from_B(p, B):
"""
pdot_from_B(p, B):
Return a pdot (or p, actually) that a pulsar with spin
period (or pdot) 'p' (in sec) would experience given a
magnetic field strength 'B' in gauss.
"""
return (B / 3.2e19) ** 2.0 / p
def pdot_from_age(p, age):
"""
pdot_from_age(p, age):
Return the pdot that a pulsar with spin period 'p' (in sec)
would experience given a characteristic age 'age' (in yrs).
"""
return p / (2.0 * age * pc.SECPERJULYR)
def pdot_from_edot(p, edot, I=1.0e45):
"""
pdot_from_edot(p, edot, I=1.0e45):
Return the pdot that a pulsar with spin period 'p (in sec)
would experience given an Edot 'edot' (in ergs/s) and a
moment of inertia I.
"""
return (p ** 3.0 * edot) / (4.0 * pc.PI * pc.PI * I)
def pulsar_age(f, fdot, n=3, fo=1e99):
"""
pulsar_age(f, fdot, n=3, fo=1e99):
Return the age of a pulsar (in years) given the spin frequency
and frequency derivative. By default, the characteristic age
is returned (assuming a braking index 'n'=3 and an initial
        spin frequency fo >> f).  But 'n' and 'fo' can be set.
"""
return -f / ((n - 1.0) * fdot) * (1.0 - (f / fo) ** (n - 1.0)) / pc.SECPERJULYR
def pulsar_edot(f, fdot, I=1.0e45):
"""
pulsar_edot(f, fdot, I=1.0e45):
Return the pulsar Edot (in erg/s) given the spin frequency and
frequency derivative. The NS moment of inertia is assumed to be
I = 1.0e45 g cm^2
"""
return -4.0 * pc.PI * pc.PI * I * f * fdot
def pulsar_B(f, fdot):
"""
pulsar_B(f, fdot):
Return the estimated pulsar surface magnetic field strength
(in Gauss) given the spin frequency and frequency derivative.
"""
return 3.2e19 * Num.sqrt(-fdot / f ** 3.0)
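# Worked example (illustrative, Crab-like numbers): with f ~= 30 Hz and
# fdot ~= -3.9e-10 Hz/s,
#   pulsar_B(30.0, -3.9e-10)  ->  ~3.8e12 Gauss,
# comparable to the Crab pulsar's surface field.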
def pulsar_B_lightcyl(f, fdot):
"""
pulsar_B_lightcyl(f, fdot):
Return the estimated pulsar magnetic field strength at the
light cylinder (in Gauss) given the spin frequency and
frequency derivative.
"""
p, pd = p_to_f(f, fdot)
return 2.9e8 * p ** (-5.0 / 2.0) * Num.sqrt(pd)
def psr_info(porf, pdorfd, time=None, input=None, I=1e45):
"""
psr_info(porf, pdorfd, time=None, input=None, I=1e45):
Print a list of standard derived pulsar parameters based
on the period (or frequency) and its first derivative. The
routine will automatically assume you are using periods if
'porf' <= 1.0 and frequencies otherwise. You can override this
by setting input='p' or 'f' appropriately. If time is specified
(duration of an observation) it will also return the Fourier
frequency 'r' and Fourier fdot 'z'. I is the NS moment of inertia.
"""
    if ((input is None and porf > 1.0) or
(input == 'f' or input == 'F')):
pdorfd = - pdorfd / (porf * porf)
porf = 1.0 / porf
[f, fd] = p_to_f(porf, pdorfd)
print("")
print(" Period = %f s" % porf)
print(" P-dot = %g s/s" % pdorfd)
print(" Frequency = %f Hz" % f)
print(" F-dot = %g Hz/s" % fd)
if (time):
print(" Fourier Freq = %g bins" % (f * time))
print(" Fourier F-dot = %g bins" % (fd * time * time))
print(" E-dot = %g ergs/s" % pulsar_edot(f, fd, I))
print(" Surface B Field = %g gauss" % pulsar_B(f, fd))
print(" Characteristic Age = %g years" % pulsar_age(f, fd))
print(" Assumed I = %g g cm^2" % I)
print("")
def doppler(freq_observed, voverc):
"""doppler(freq_observed, voverc):
This routine returns the frequency emitted by a pulsar
(in MHz) given that we observe the pulsar at frequency
freq_observed (MHz) while moving with radial velocity
(in units of v/c) of voverc wrt the pulsar.
"""
return freq_observed * (1.0 + voverc)
| 75,060 | 36.399601 | 112 | py |
presto | presto-master/python/presto/psr_constants.py | ## Automatically adapted for numpy Apr 14, 2006 by convertcode.py
ARCSECTORAD = float('4.8481368110953599358991410235794797595635330237270e-6')
RADTOARCSEC = float('206264.80624709635515647335733077861319665970087963')
SECTORAD = float('7.2722052166430399038487115353692196393452995355905e-5')
RADTOSEC = float('13750.987083139757010431557155385240879777313391975')
RADTODEG = float('57.295779513082320876798154814105170332405472466564')
DEGTORAD = float('1.7453292519943295769236907684886127134428718885417e-2')
RADTOHRS = float('3.8197186342054880584532103209403446888270314977710')
HRSTORAD = float('2.6179938779914943653855361527329190701643078328126e-1')
PI = float('3.1415926535897932384626433832795028841971693993751')
TWOPI = float('6.2831853071795864769252867665590057683943387987502')
PIBYTWO = float('1.5707963267948966192313216916397514420985846996876')
SECPERDAY = float('86400.0')
SECPERJULYR = float('31557600.0')
KMPERPC = float('3.0856776e13')
KMPERKPC = float('3.0856776e16')
Tsun = float('4.925490947e-6') # sec
Msun = float('1.9891e30') # kg
Mjup = float('1.8987e27') # kg
Rsun = float('6.9551e8') # m
Rearth = float('6.378e6') # m
SOL = float('299792458.0') # m/s
G = float('6.673e-11') # m^3/s^2/kg
C = SOL
| 1,369 | 51.692308 | 77 | py |
presto | presto-master/python/presto/prepfold.py | from __future__ import print_function
from __future__ import absolute_import
from builtins import range
from builtins import object
import sys
import numpy as Num
import copy, random, struct
from presto import psr_utils, infodata, polycos, Pgplot
import six
import numbers
from presto.bestprof import bestprof
from presto.presto import chi2_sigma
class pfd(object):
def __init__(self, filename):
self.pfd_filename = filename
infile = open(filename, "rb")
# See if the .bestprof file is around
try:
self.bestprof = bestprof(filename+".bestprof")
except IOError:
self.bestprof = 0
swapchar = '<' # this is little-endian
data = infile.read(5*4)
testswap = struct.unpack(swapchar+"i"*5, data)
# This is a hack to try and test the endianness of the data.
# None of the 5 values should be a large positive number.
if (Num.fabs(Num.asarray(testswap))).max() > 100000:
swapchar = '>' # this is big-endian
(self.numdms, self.numperiods, self.numpdots, self.nsub, self.npart) = \
struct.unpack(swapchar+"i"*5, data)
(self.proflen, self.numchan, self.pstep, self.pdstep, self.dmstep, \
self.ndmfact, self.npfact) = struct.unpack(swapchar+"i"*7, infile.read(7*4))
self.filenm = infile.read(struct.unpack(swapchar+"i", infile.read(4))[0])
self.candnm = infile.read(struct.unpack(swapchar+"i", infile.read(4))[0]).decode("utf-8")
self.telescope = infile.read(struct.unpack(swapchar+"i", infile.read(4))[0]).decode("utf-8")
self.pgdev = infile.read(struct.unpack(swapchar+"i", infile.read(4))[0])
test = infile.read(16)
if not test[:8]==b"Unknown" and b':' in test:
self.rastr = test[:test.find(b'\0')]
test = infile.read(16)
self.decstr = test[:test.find(b'\0')]
else:
self.rastr = "Unknown"
self.decstr = "Unknown"
            if b':' not in test:
infile.seek(-16, 1) # rewind the file before the bad read
(self.dt, self.startT) = struct.unpack(swapchar+"dd", infile.read(2*8))
(self.endT, self.tepoch, self.bepoch, self.avgvoverc, self.lofreq, \
self.chan_wid, self.bestdm) = struct.unpack(swapchar+"d"*7, infile.read(7*8))
# The following "fixes" (we think) the observing frequency of the Spigot
# based on tests done by Ingrid on 0737 (comparing it to GASP)
# The same sorts of corrections should be made to WAPP data as well...
# The tepoch corrections are empirically determined timing corrections
# Note that epoch is only double precision and so the floating
# point accuracy is ~1 us!
if self.telescope=='GBT':
if (Num.fabs(Num.fmod(self.dt, 8.192e-05) < 1e-12) and \
("spigot" in filename.lower() or "guppi" not in filename.lower()) and \
(self.tepoch < 54832.0)):
sys.stderr.write("Assuming SPIGOT data...\n")
if self.chan_wid==800.0/1024: # Spigot 800 MHz mode 2
self.lofreq -= 0.5 * self.chan_wid
# original values
#if self.tepoch > 0.0: self.tepoch += 0.039334/86400.0
#if self.bestprof: self.bestprof.epochf += 0.039334/86400.0
# values measured with 1713+0747 wrt BCPM2 on 13 Sept 2007
if self.tepoch > 0.0: self.tepoch += 0.039365/86400.0
if self.bestprof: self.bestprof.epochf += 0.039365/86400.0
elif self.chan_wid==800.0/2048:
self.lofreq -= 0.5 * self.chan_wid
if self.tepoch < 53700.0: # Spigot 800 MHz mode 16 (downsampled)
if self.tepoch > 0.0: self.tepoch += 0.039352/86400.0
if self.bestprof: self.bestprof.epochf += 0.039352/86400.0
else: # Spigot 800 MHz mode 14
# values measured with 1713+0747 wrt BCPM2 on 13 Sept 2007
if self.tepoch > 0.0: self.tepoch += 0.039365/86400.0
if self.bestprof: self.bestprof.epochf += 0.039365/86400.0
elif self.chan_wid==50.0/1024 or self.chan_wid==50.0/2048: # Spigot 50 MHz modes
self.lofreq += 0.5 * self.chan_wid
# Note: the offset has _not_ been measured for the 2048-lag mode
if self.tepoch > 0.0: self.tepoch += 0.039450/86400.0
if self.bestprof: self.bestprof.epochf += 0.039450/86400.0
(self.topo_pow, tmp) = struct.unpack(swapchar+"f"*2, infile.read(2*4))
(self.topo_p1, self.topo_p2, self.topo_p3) = struct.unpack(swapchar+"d"*3, \
infile.read(3*8))
(self.bary_pow, tmp) = struct.unpack(swapchar+"f"*2, infile.read(2*4))
(self.bary_p1, self.bary_p2, self.bary_p3) = struct.unpack(swapchar+"d"*3, \
infile.read(3*8))
(self.fold_pow, tmp) = struct.unpack(swapchar+"f"*2, infile.read(2*4))
(self.fold_p1, self.fold_p2, self.fold_p3) = struct.unpack(swapchar+"d"*3, \
infile.read(3*8))
# Save current p, pd, pdd
# NOTE: Fold values are actually frequencies!
self.curr_p1, self.curr_p2, self.curr_p3 = \
psr_utils.p_to_f(self.fold_p1, self.fold_p2, self.fold_p3)
self.pdelays_bins = Num.zeros(self.npart, dtype='d')
(self.orb_p, self.orb_e, self.orb_x, self.orb_w, self.orb_t, self.orb_pd, \
self.orb_wd) = struct.unpack(swapchar+"d"*7, infile.read(7*8))
self.dms = Num.asarray(struct.unpack(swapchar+"d"*self.numdms, \
infile.read(self.numdms*8)))
if self.numdms==1:
self.dms = self.dms[0]
self.periods = Num.asarray(struct.unpack(swapchar+"d"*self.numperiods, \
infile.read(self.numperiods*8)))
self.pdots = Num.asarray(struct.unpack(swapchar+"d"*self.numpdots, \
infile.read(self.numpdots*8)))
self.numprofs = self.nsub*self.npart
if (swapchar=='<'): # little endian
self.profs = Num.zeros((self.npart, self.nsub, self.proflen), dtype='d')
for ii in range(self.npart):
for jj in range(self.nsub):
self.profs[ii,jj,:] = Num.fromfile(infile, Num.float64, self.proflen)
else:
self.profs = Num.asarray(struct.unpack(swapchar+"d"*self.numprofs*self.proflen, \
infile.read(self.numprofs*self.proflen*8)))
self.profs = Num.reshape(self.profs, (self.npart, self.nsub, self.proflen))
if (self.numchan==1):
try:
idata = infodata.infodata(self.filenm[:self.filenm.rfind(b'.')]+b".inf")
try:
if idata.waveband=="Radio":
self.bestdm = idata.DM
self.numchan = idata.numchan
except:
self.bestdm = 0.0
self.numchan = 1
except IOError:
print("Warning! Can't open the .inf file for "+filename+"!")
self.binspersec = self.fold_p1*self.proflen
self.chanpersub = self.numchan // self.nsub
self.subdeltafreq = self.chan_wid*self.chanpersub
self.hifreq = self.lofreq + (self.numchan-1)*self.chan_wid
self.losubfreq = self.lofreq + self.subdeltafreq - self.chan_wid
self.subfreqs = Num.arange(self.nsub, dtype='d')*self.subdeltafreq + \
self.losubfreq
self.subdelays_bins = Num.zeros(self.nsub, dtype='d')
# Save current DM
self.currdm = 0
self.killed_subbands = []
self.killed_intervals = []
self.pts_per_fold = []
# Note: a foldstats struct is read in as a group of 7 doubles
# the correspond to, in order:
# numdata, data_avg, data_var, numprof, prof_avg, prof_var, redchi
self.stats = Num.zeros((self.npart, self.nsub, 7), dtype='d')
for ii in range(self.npart):
currentstats = self.stats[ii]
for jj in range(self.nsub):
if (swapchar=='<'): # little endian
currentstats[jj] = Num.fromfile(infile, Num.float64, 7)
else:
currentstats[jj] = Num.asarray(struct.unpack(swapchar+"d"*7, \
infile.read(7*8)))
self.pts_per_fold.append(self.stats[ii][0][0]) # numdata from foldstats
self.start_secs = Num.add.accumulate([0]+self.pts_per_fold[:-1])*self.dt
self.pts_per_fold = Num.asarray(self.pts_per_fold)
self.mid_secs = self.start_secs + 0.5*self.dt*self.pts_per_fold
if (not self.tepoch==0.0):
self.start_topo_MJDs = self.start_secs/86400.0 + self.tepoch
self.mid_topo_MJDs = self.mid_secs/86400.0 + self.tepoch
if (not self.bepoch==0.0):
self.start_bary_MJDs = self.start_secs/86400.0 + self.bepoch
self.mid_bary_MJDs = self.mid_secs/86400.0 + self.bepoch
self.Nfolded = Num.add.reduce(self.pts_per_fold)
self.T = self.Nfolded*self.dt
self.avgprof = (self.profs/self.proflen).sum()
self.varprof = self.calc_varprof()
# nominal number of degrees of freedom for reduced chi^2 calculation
self.DOFnom = float(self.proflen) - 1.0
# corrected number of degrees of freedom due to inter-bin correlations
self.dt_per_bin = self.curr_p1 / self.proflen / self.dt
self.DOFcor = self.DOFnom * self.DOF_corr()
infile.close()
self.barysubfreqs = None
if self.avgvoverc==0:
if self.candnm.startswith("PSR_"):
# If this doesn't work, we should try to use the barycentering calcs
# in the presto module.
try:
psrname = self.candnm[4:]
self.polycos = polycos.polycos(psrname,
filenm=self.pfd_filename+".polycos")
midMJD = self.tepoch + 0.5*self.T/86400.0
self.avgvoverc = self.polycos.get_voverc(int(midMJD), midMJD-int(midMJD))
#sys.stderr.write("Approximate Doppler velocity (in c) is: %.4g\n"%self.avgvoverc)
# Make the Doppler correction
self.barysubfreqs = self.subfreqs*(1.0+self.avgvoverc)
except IOError:
self.polycos = 0
if self.barysubfreqs is None:
self.barysubfreqs = self.subfreqs
def __str__(self):
out = ""
for k, v in list(self.__dict__.items()):
if k[:2]!="__":
if isinstance(self.__dict__[k], six.string_types):
out += "%10s = '%s'\n" % (k, v)
elif isinstance(self.__dict__[k], numbers.Integral):
out += "%10s = %d\n" % (k, v)
elif isinstance(self.__dict__[k], numbers.Real):
out += "%10s = %-20.15g\n" % (k, v)
return out
def dedisperse(self, DM=None, interp=0, doppler=0):
"""
dedisperse(DM=self.bestdm, interp=0, doppler=0):
Rotate (internally) the profiles so that they are de-dispersed
at a dispersion measure of DM. Use FFT-based interpolation if
'interp' is non-zero (NOTE: It is off by default!).
Doppler shift subband frequencies if doppler is non-zero.
(NOTE: It is also off by default.)
"""
if DM is None:
DM = self.bestdm
# Note: Since TEMPO Doppler corrects observing frequencies, for
# TOAs, at least, we need to de-disperse using topocentric
# observing frequencies.
if doppler:
freqs = psr_utils.doppler(self.subfreqs, self.avgvoverc)
else:
freqs = self.subfreqs
self.subdelays = psr_utils.delay_from_DM(DM, freqs)
self.hifreqdelay = self.subdelays[-1]
self.subdelays = self.subdelays-self.hifreqdelay
delaybins = self.subdelays*self.binspersec - self.subdelays_bins
if interp:
new_subdelays_bins = delaybins
for ii in range(self.npart):
for jj in range(self.nsub):
tmp_prof = self.profs[ii,jj,:]
self.profs[ii,jj] = psr_utils.fft_rotate(tmp_prof, delaybins[jj])
# Note: Since the rotation process slightly changes the values of the
# profs, we need to re-calculate the average profile value
self.avgprof = (self.profs/self.proflen).sum()
else:
new_subdelays_bins = Num.floor(delaybins+0.5)
for ii in range(self.nsub):
rotbins = int(new_subdelays_bins[ii])%self.proflen
if rotbins: # i.e. if not zero
subdata = self.profs[:,ii,:]
self.profs[:,ii] = Num.concatenate((subdata[:,rotbins:],
subdata[:,:rotbins]), 1)
self.subdelays_bins += new_subdelays_bins
self.sumprof = self.profs.sum(0).sum(0)
if Num.fabs((self.sumprof/self.proflen).sum() - self.avgprof) > 1.0:
print("self.avgprof is not the correct value!")
self.currdm = DM
def freq_offsets(self, p=None, pd=None, pdd=None):
"""
freq_offsets(p=*bestp*, pd=*bestpd*, pdd=*bestpdd*):
Return the offsets between given frequencies
and fold frequencies.
If p, pd or pdd are None use the best values.
A 3-tuple is returned.
"""
if self.fold_pow == 1.0:
bestp = self.bary_p1
bestpd = self.bary_p2
bestpdd = self.bary_p3
else:
if self.topo_p1 == 0.0:
bestp = self.fold_p1
bestpd = self.fold_p2
bestpdd = self.fold_p3
else:
bestp = self.topo_p1
bestpd = self.topo_p2
bestpdd = self.topo_p3
if p is not None:
bestp = p
if pd is not None:
bestpd = pd
if pdd is not None:
bestpdd = pdd
# self.fold_p[123] are actually frequencies, convert to periods
foldf, foldfd, foldfdd = self.fold_p1, self.fold_p2, self.fold_p3
foldp, foldpd, foldpdd = psr_utils.p_to_f(self.fold_p1, \
self.fold_p2, self.fold_p3)
# Get best f, fd, fdd
# Use folding values to be consistent with prepfold_plot.c
bestfdd = psr_utils.p_to_f(foldp, foldpd, bestpdd)[2]
bestfd = psr_utils.p_to_f(foldp, bestpd)[1]
bestf = 1.0/bestp
# Get frequency and frequency derivative offsets
f_diff = bestf - foldf
fd_diff = bestfd - foldfd
# bestpdd=0.0 only if there was no searching over pdd
if bestpdd != 0.0:
fdd_diff = bestfdd - foldfdd
else:
fdd_diff = 0.0
return (f_diff, fd_diff, fdd_diff)
def DOF_corr(self):
"""
DOF_corr():
Return a multiplicative correction for the effective number of
degrees of freedom in the chi^2 measurement resulting from a
pulse profile folded by PRESTO's fold() function
(i.e. prepfold). This is required because there are
correlations between the bins caused by the way that prepfold
folds data (i.e. treating a sample as finite duration and
        smearing it over potentially several bins in the profile as
opposed to instantaneous and going into just one profile bin).
The correction is semi-analytic (thanks to Paul Demorest and
Walter Brisken) but the values for 'power' and 'factor' have
been determined from Monte Carlos. The correction is good to
a fractional error of less than a few percent as long as
dt_per_bin is > 0.5 or so (which it usually is for pulsar
candidates). There is a very minimal number-of-bins
dependence, which is apparent when dt_per_bin < 0.7 or so.
dt_per_bin is the width of a profile bin in samples (a float),
and so for prepfold is pulse period / nbins / sample time. Note
that the sqrt of this factor can be used to 'inflate' the RMS
of the profile as well, for radiometer eqn flux density estimates,
for instance.
"""
power, factor = 1.806, 0.96 # From Monte Carlo
return self.dt_per_bin * factor * \
(1.0 + self.dt_per_bin**(power))**(-1.0/power)
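    # Rough behaviour of DOF_corr() (illustrative): for dt_per_bin >> 1 the
    # correction tends to the Monte-Carlo 'factor' of ~0.96, while for
    # dt_per_bin ~ 1 it drops to roughly 0.65, i.e. neighbouring profile
    # bins are significantly correlated.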
def use_for_timing(self):
"""
use_for_timing():
This method returns True or False depending on whether
the .pfd file can be used for timing or not. For this
to return true, the pulsar had to have been folded with
a parfile and -no[p/pd]search (this includes -timing), or
with a p/pdot/pdotdot and a corresponding -no[p/pd]search.
In other words, if you let prepfold search for the best
p/pdot/pdotdot, you will get bogus TOAs if you try timing
with it.
"""
T = self.T
bin_dphi = 1.0/self.proflen
# If any of the offsets causes more than a 0.1-bin rotation over
# the obs, then prepfold searched and we can't time using it
# Allow up to a 0.5 bin shift for pdd/fdd since the conversions
# back and forth can cause float issues.
offsets = Num.fabs(Num.asarray(self.freq_offsets()))
dphis = offsets * Num.asarray([T, T**2.0/2.0, T**3.0/6.0])
if max(dphis[:2]) > 0.1 * bin_dphi or dphis[2] > 0.5 * bin_dphi:
return False
else:
return True
def time_vs_phase(self, p=None, pd=None, pdd=None, interp=0):
"""
time_vs_phase(p=*bestp*, pd=*bestpd*, pdd=*bestpdd*):
Return the 2D time vs. phase profiles shifted so that
the given period and period derivative are applied.
Use FFT-based interpolation if 'interp' is non-zero.
(NOTE: It is off by default as in prepfold!).
"""
# Cast to single precision and back to double precision to
# emulate prepfold_plot.c, where parttimes is of type "float"
# but values are upcast to "double" during computations.
# (surprisingly, it affects the resulting profile occasionally.)
parttimes = self.start_secs.astype('float32').astype('float64')
# Get delays
f_diff, fd_diff, fdd_diff = self.freq_offsets(p, pd, pdd)
#print "DEBUG: in myprepfold.py -- parttimes", parttimes
delays = psr_utils.delay_from_foffsets(f_diff, fd_diff, fdd_diff, parttimes)
# Convert from delays in phase to delays in bins
bin_delays = Num.fmod(delays * self.proflen, self.proflen) - self.pdelays_bins
# Rotate subintegrations
# subints = self.combine_profs(self.npart, 1)[:,0,:] # Slower than sum by ~9x
subints = Num.sum(self.profs, axis=1).squeeze()
if interp:
new_pdelays_bins = bin_delays
for ii in range(self.npart):
tmp_prof = subints[ii,:]
# Negative sign in num bins to shift because we calculated delays
# Assuming +ve is shift-to-right, psr_utils.rotate assumes +ve
# is shift-to-left
subints[ii,:] = psr_utils.fft_rotate(tmp_prof, -new_pdelays_bins[ii])
else:
new_pdelays_bins = Num.floor(bin_delays+0.5)
indices = Num.outer(Num.arange(self.proflen), Num.ones(self.npart))
indices = Num.mod(indices-new_pdelays_bins, self.proflen).T
indices += Num.outer(Num.arange(self.npart)*self.proflen, \
Num.ones(self.proflen))
subints = subints.flatten('C')[indices.astype('i8')]
return subints
def adjust_period(self, p=None, pd=None, pdd=None, interp=0):
"""
adjust_period(p=*bestp*, pd=*bestpd*, pdd=*bestpdd*):
Rotate (internally) the profiles so that they are adjusted to
the given period and period derivatives. By default,
        use the 'best' values as determined by prepfold's search.
This should orient all of the profiles so that they are
almost identical to what you see in a prepfold plot which
used searching. Use FFT-based interpolation if 'interp'
is non-zero. (NOTE: It is off by default, as in prepfold!)
"""
if self.fold_pow == 1.0:
bestp = self.bary_p1
bestpd = self.bary_p2
bestpdd = self.bary_p3
else:
bestp = self.topo_p1
bestpd = self.topo_p2
bestpdd = self.topo_p3
if p is None:
p = bestp
if pd is None:
pd = bestpd
if pdd is None:
pdd = bestpdd
# Cast to single precision and back to double precision to
# emulate prepfold_plot.c, where parttimes is of type "float"
# but values are upcast to "double" during computations.
# (surprisingly, it affects the resulting profile occasionally.)
parttimes = self.start_secs.astype('float32').astype('float64')
# Get delays
f_diff, fd_diff, fdd_diff = self.freq_offsets(p, pd, pdd)
delays = psr_utils.delay_from_foffsets(f_diff, fd_diff, fdd_diff, parttimes)
# Convert from delays in phase to delays in bins
bin_delays = Num.fmod(delays * self.proflen, self.proflen) - self.pdelays_bins
if interp:
new_pdelays_bins = bin_delays
else:
new_pdelays_bins = Num.floor(bin_delays+0.5)
# Rotate subintegrations
for ii in range(self.nsub):
for jj in range(self.npart):
tmp_prof = self.profs[jj,ii,:]
# Negative sign in num bins to shift because we calculated delays
# Assuming +ve is shift-to-right, psr_utils.rotate assumes +ve
# is shift-to-left
if interp:
self.profs[jj,ii] = psr_utils.fft_rotate(tmp_prof, -new_pdelays_bins[jj])
else:
self.profs[jj,ii] = psr_utils.rotate(tmp_prof, \
-new_pdelays_bins[jj])
self.pdelays_bins += new_pdelays_bins
if interp:
# Note: Since the rotation process slightly changes the values of the
# profs, we need to re-calculate the average profile value
self.avgprof = (self.profs/self.proflen).sum()
self.sumprof = self.profs.sum(0).sum(0)
if Num.fabs((self.sumprof/self.proflen).sum() - self.avgprof) > 1.0:
print("self.avgprof is not the correct value!")
# Save current p, pd, pdd
self.curr_p1, self.curr_p2, self.curr_p3 = p, pd, pdd
def combine_profs(self, new_npart, new_nsub):
"""
combine_profs(self, new_npart, new_nsub):
Combine intervals and/or subbands together and return a new
array of profiles.
"""
if (self.npart % new_npart):
print("Warning! The new number of intervals (%d) is not a" % new_npart)
print(" divisor of the original number of intervals (%d)!" % self.npart)
print("Doing nothing.")
return None
if (self.nsub % new_nsub):
print("Warning! The new number of subbands (%d) is not a" % new_nsub)
print(" divisor of the original number of subbands (%d)!" % self.nsub)
print("Doing nothing.")
return None
dp = self.npart // new_npart
ds = self.nsub // new_nsub
newprofs = Num.zeros((new_npart, new_nsub, self.proflen), 'd')
for ii in range(new_npart):
# Combine the subbands if required
if (self.nsub > 1):
for jj in range(new_nsub):
subprofs = Num.add.reduce(self.profs[:,jj*ds:(jj+1)*ds], 1)
# Combine the time intervals
newprofs[ii][jj] = Num.add.reduce(subprofs[ii*dp:(ii+1)*dp])
else:
newprofs[ii][0] = Num.add.reduce(self.profs[ii*dp:(ii+1)*dp,0])
return newprofs
def kill_intervals(self, intervals):
"""
kill_intervals(intervals):
Set all the subintervals (internally) from the list of
subintervals to all zeros, effectively 'killing' them.
"""
for part in intervals:
self.profs[part,:,:] *= 0.0
self.killed_intervals.append(part)
# Update the stats
self.avgprof = (self.profs/self.proflen).sum()
self.varprof = self.calc_varprof()
def kill_subbands(self, subbands):
"""
kill_subbands(subbands):
Set all the profiles (internally) from the list of
subbands to all zeros, effectively 'killing' them.
"""
for sub in subbands:
self.profs[:,sub,:] *= 0.0
self.killed_subbands.append(sub)
# Update the stats
self.avgprof = (self.profs/self.proflen).sum()
self.varprof = self.calc_varprof()
def plot_sumprof(self, device='/xwin'):
"""
plot_sumprof(self, device='/xwin'):
Plot the dedispersed and summed profile.
"""
if 'subdelays' not in self.__dict__:
print("Dedispersing first...")
self.dedisperse()
normprof = self.sumprof - min(self.sumprof)
normprof /= max(normprof)
Pgplot.plotxy(normprof, labx="Phase Bins", laby="Normalized Flux",
device=device)
def greyscale(self, array2d, **kwargs):
"""
greyscale(array2d, **kwargs):
Plot a 2D array as a greyscale image using the same scalings
as in prepfold.
"""
# Use the same scaling as in prepfold_plot.c
global_max = Num.maximum.reduce(Num.maximum.reduce(array2d))
if (global_max==0.0): global_max = 1.0
min_parts = Num.minimum.reduce(array2d, 1)
array2d = (array2d-min_parts[:,Num.newaxis])/Num.fabs(global_max)
Pgplot.plot2d(array2d, image='antigrey', **kwargs)
def plot_intervals(self, phasebins='All', device='/xwin'):
"""
plot_intervals(self, phasebins='All', device='/xwin'):
Plot the subband-summed profiles vs time. Restrict
the bins in the plot to the (low:high) slice defined
by the phasebins option if it is a tuple (low,high)
instead of the string 'All'.
"""
if 'subdelays' not in self.__dict__:
print("Dedispersing first...")
self.dedisperse()
if phasebins != 'All':
lo, hi = phasebins
profs = self.profs[:,:,lo:hi].sum(1)
else:
lo, hi = 0.0, self.proflen
profs = self.profs.sum(1)
self.greyscale(profs, rangex=[lo, hi], rangey=[0.0, self.npart],
labx="Phase Bins", labx2="Pulse Phase", laby="Time Intervals",
rangex2=Num.asarray([lo, hi])*1.0/self.proflen,
laby2="Time (s)", rangey2=[0.0, self.T],
device=device)
def plot_subbands(self, phasebins='All', device='/xwin'):
"""
plot_subbands(self, phasebins='All', device='/xwin'):
Plot the interval-summed profiles vs subband. Restrict
the bins in the plot to the (low:high) slice defined
by the phasebins option if it is a tuple (low,high)
instead of the string 'All'.
"""
if 'subdelays' not in self.__dict__:
print("Dedispersing first...")
self.dedisperse()
if phasebins != 'All':
lo, hi = phasebins
profs = self.profs[:,:,lo:hi].sum(0)
else:
lo, hi = 0.0, self.proflen
profs = self.profs.sum(0)
lof = self.lofreq - 0.5*self.chan_wid
hif = lof + self.chan_wid*self.numchan
self.greyscale(profs, rangex=[lo, hi], rangey=[0.0, self.nsub],
labx="Phase Bins", labx2="Pulse Phase", laby="Subbands",
rangex2=Num.asarray([lo, hi])*1.0/self.proflen,
laby2="Frequency (MHz)", rangey2=[lof, hif],
device=device)
def calc_varprof(self):
"""
calc_varprof(self):
This function calculates the summed profile variance of the
current pfd file. Killed profiles are ignored.
"""
varprof = 0.0
for part in range(self.npart):
if part in self.killed_intervals: continue
for sub in range(self.nsub):
if sub in self.killed_subbands: continue
varprof += self.stats[part][sub][5] # foldstats prof_var
return varprof
def calc_redchi2(self, prof=None, avg=None, var=None):
"""
calc_redchi2(self, prof=None, avg=None, var=None):
Return the calculated reduced-chi^2 of the current summed profile.
"""
if 'subdelays' not in self.__dict__:
print("Dedispersing first...")
self.dedisperse()
if prof is None: prof = self.sumprof
if avg is None: avg = self.avgprof
if var is None: var = self.varprof
# Note: use the _corrected_ DOF for reduced chi^2 calculation
return ((prof-avg)**2.0/var).sum() / self.DOFcor
def calc_sigma(self):
"""
calc_sigma(self):
Return the calculated sigma (equivalent gaussian sig) of the summed profile.
"""
return chi2_sigma(self.calc_redchi2() * self.DOFcor, self.DOFcor)
def plot_chi2_vs_DM(self, loDM, hiDM, N=100, interp=0, device='/xwin'):
"""
plot_chi2_vs_DM(self, loDM, hiDM, N=100, interp=0, device='/xwin'):
Plot (and return) an array showing the reduced-chi^2 versus
DM (N DMs spanning loDM-hiDM). Use sinc_interpolation
if 'interp' is non-zero.
"""
# Sum the profiles in time
sumprofs = self.profs.sum(0)
if not interp:
profs = sumprofs
else:
profs = Num.zeros(Num.shape(sumprofs), dtype='d')
DMs = psr_utils.span(loDM, hiDM, N)
chis = Num.zeros(N, dtype='f')
subdelays_bins = self.subdelays_bins.copy()
for ii, DM in enumerate(DMs):
subdelays = psr_utils.delay_from_DM(DM, self.barysubfreqs)
hifreqdelay = subdelays[-1]
subdelays = subdelays - hifreqdelay
delaybins = subdelays*self.binspersec - subdelays_bins
if interp:
interp_factor = 16
for jj in range(self.nsub):
profs[jj] = psr_utils.interp_rotate(sumprofs[jj], delaybins[jj],
zoomfact=interp_factor)
# Note: Since the interpolation process slightly changes the values of the
# profs, we need to re-calculate the average profile value
avgprof = (profs/self.proflen).sum()
else:
new_subdelays_bins = Num.floor(delaybins+0.5)
for jj in range(self.nsub):
profs[jj] = psr_utils.rotate(profs[jj], int(new_subdelays_bins[jj]))
subdelays_bins += new_subdelays_bins
avgprof = self.avgprof
sumprof = profs.sum(0)
chis[ii] = self.calc_redchi2(prof=sumprof, avg=avgprof)
# Now plot it
Pgplot.plotxy(chis, DMs, labx="DM", laby=r"Reduced-\gx\u2\d", device=device)
return (chis, DMs)
def plot_chi2_vs_sub(self, device='/xwin'):
"""
plot_chi2_vs_sub(self, device='/xwin'):
Plot (and return) an array showing the reduced-chi^2 versus
the subband number.
"""
# Sum the profiles in each subband
profs = self.profs.sum(0)
# Compute the averages and variances for the subbands
avgs = profs.sum(1)/self.proflen
vars = []
for sub in range(self.nsub):
var = 0.0
if sub in self.killed_subbands:
vars.append(var)
continue
for part in range(self.npart):
if part in self.killed_intervals:
continue
var += self.stats[part][sub][5] # foldstats prof_var
vars.append(var)
chis = Num.zeros(self.nsub, dtype='f')
for ii in range(self.nsub):
chis[ii] = self.calc_redchi2(prof=profs[ii], avg=avgs[ii], var=vars[ii])
# Now plot it
Pgplot.plotxy(chis, labx="Subband Number", laby=r"Reduced-\gx\u2\d",
rangey=[0.0, max(chis)*1.1], device=device)
return chis
def estimate_offsignal_redchi2(self, numtrials=20):
"""
estimate_offsignal_redchi2():
Estimate the reduced-chi^2 off of the signal based on randomly shifting
and summing all of the component profiles.
"""
redchi2s = []
for count in range(numtrials):
prof = Num.zeros(self.proflen, dtype='d')
for ii in range(self.npart):
for jj in range(self.nsub):
tmpprof = copy.copy(self.profs[ii][jj])
prof += psr_utils.rotate(tmpprof, random.randrange(0,self.proflen))
redchi2s.append(self.calc_redchi2(prof=prof))
return Num.mean(redchi2s)
def adjust_fold_frequency(self, phasebins, profs=None, shiftsubs=False):
"""
adjust_fold_frequency(phasebins, profs=None, shiftsubs=False):
Linearly shift the intervals by phasebins over the course of
the observation in order to change the apparent folding
frequency. Return a 2D array containing the de-dispersed
profiles as a function of time (i.e. shape = (npart, proflen)),
and the reduced chi^2 of the resulting summed profile.
If profs is not None, then use profs instead of self.profs.
If shiftsubs is not False, then actually correct the subbands
instead of a 2D projection of them.
"""
if 'subdelays' not in self.__dict__:
print("Dedispersing first...")
self.dedisperse()
if shiftsubs:
print("Shifting all the subbands...")
if profs is None:
profs = self.profs
for ii in range(self.npart):
bins_to_shift = int(round(float(ii)/self.npart * phasebins))
for jj in range(self.nsub):
profs[ii,jj] = psr_utils.rotate(profs[ii,jj], bins_to_shift)
redchi = self.calc_redchi2(prof=profs.sum(0).sum(0))
else:
print("Shifting just the projected intervals (not individual subbands)...")
if profs is None:
profs = self.profs.sum(1)
for ii in range(self.npart):
bins_to_shift = int(round(float(ii)/self.npart * phasebins))
profs[ii] = psr_utils.rotate(profs[ii], bins_to_shift)
redchi = self.calc_redchi2(prof=profs.sum(0))
print("New reduced-chi^2 =", redchi)
return profs, redchi
def dynamic_spectra(self, onbins, combineints=1, combinechans=1,
calibrate=True, plot=True, device='/xwin'):
"""
dynamic_spectra(onbins, combineints=1, combinechans=1,
calibrate=True, plot=True, device='/xwin'):
Return (and plot) the dynamic spectrum (DS) resulting
from the folds in the .pfd assuming that the pulsar
is 'on' during the bins specified in 'onbins' and
off elsewhere (ON-OFF). If calibrate is True, the
DS will be (ON-OFF)/OFF. combineints and combinechans
describe how many adjacent intervals or frequency
channels will be combined when making the DS.
"""
# Determine the indices of the off-pulse region
indices = Num.arange(self.proflen)
Num.put(indices, Num.asarray(onbins), -1)
offbins = Num.compress(indices >= 0, Num.arange(self.proflen))
numon = len(onbins)
numoff = len(offbins)
# De-disperse if required first
if 'subdelays' not in self.__dict__:
print("Dedispersing first...")
self.dedisperse()
# The following is the average offpulse level
offpulse = Num.sum(Num.take(self.profs, offbins, 2), 2)/float(numoff)
# The following is the average onpulse level
onpulse = Num.sum(Num.take(self.profs, onbins, 2), 2)/float(numon)
# Now make the DS
self.DS = onpulse - offpulse
self.DSnpart = self.npart
self.DSstart_secs = self.start_secs
self.DSintdt = self.DSstart_secs[1] - self.DSstart_secs[0]
self.DSnsub = self.nsub
self.DSsubfreqs = self.subfreqs
self.DSsubdeltafreq = self.subdeltafreq
if (calibrate):
# Protect against division by zero
offpulse[offpulse==0.0] = 1.0
self.DS /= offpulse
# Combine intervals if required
if (combineints > 1):
# First chop off any extra intervals
if (self.npart % combineints):
self.DSnpart = (self.npart // combineints) * combineints
self.DS = self.DS[:self.DSnpart,:]
# Now reshape and add the neighboring intervals
self.DS = Num.reshape(self.DS, (self.DSnpart // combineints,
combineints, self.DSnsub))
print(Num.shape(self.DS))
self.DS = Num.sum(self.DS, 1)
self.DSstart_secs = self.DSstart_secs[::combineints]
self.DSintdt *= combineints
self.DSnpart //= combineints
# Combine channels if required
if (combinechans > 1):
# First chop off any extra channels
if (self.nsub % combinechans):
self.DSnsub = (self.nsub // combinechans) * combinechans
self.DS = self.DS[:,:self.DSnsub]
# Now reshape and add the neighboring intervals
self.DS = Num.reshape(self.DS, (self.DSnpart,
self.DSnsub // combinechans, combinechans))
self.DS = Num.sum(self.DS, 2)
self.DSsubfreqs = psr_utils.running_avg(self.subfreqs[:self.DSnsub], combinechans)
self.DSsubdeltafreq *= combinechans
self.DSnsub //= combinechans
print("DS shape = ", Num.shape(self.DS))
# Plot it if required
if plot:
lof = self.subfreqs[0]-0.5*self.DSsubdeltafreq
hif = self.subfreqs[-1]+0.5*self.DSsubdeltafreq
lot = 0.0
hit = self.DSstart_secs[-1] + self.DSintdt
self.greyscale(self.DS, rangex=[lof, hif], rangey=[lot, hit],
labx="Frequency (MHz)", labx2="Subband Number",
laby="Time (s)", laby2="Interval Number",
rangex2=[0, self.DSnsub], rangey2=[0, self.DSnpart],
device=device)
return self.DS
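# Typical interactive usage of the pfd class (illustrative sketch; the file
# name is hypothetical):
#   p = pfd("candidate.pfd")
#   p.dedisperse()                  # rotate subbands to the best DM
#   print(p.calc_redchi2(), p.calc_sigma())
#   p.plot_sumprof(device='/xwin')  # requires PGPLOT support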
if __name__ == "__main__":
#testpfd = "/home/ransom/tmp_pfd/M5_52725_W234_PSR_1518+0204A.pfd"
#testpfd = "/home/ransom/tmp_pfd/M13_52724_W234_PSR_1641+3627C.pfd"
testpfd = "M13_53135_W34_rficlean_DM30.10_PSR_1641+3627C.pfd"
tp = pfd(testpfd)
if (0):
print(tp.start_secs)
print(tp.mid_secs)
print(tp.start_topo_MJDs)
print(tp.mid_topo_MJDs)
print(tp.T)
#tp.kill_subbands([6,7,8,9,30,31,32,33])
#tp.kill_intervals([2,3,4,5,6])
#tp.plot_chi2_vs_sub()
#(chis, DMs) = tp.plot_chi2_vs_DM(0.0, 50.0, 501, interp=1)
#best_index = Num.argmax(chis)
#print "Best DM = ", DMs[best_index]
(chis, DMs) = tp.plot_chi2_vs_DM(0.0, 50.0, 501)
best_index = Num.argmax(chis)
print("Best DM = ", DMs[best_index])
tp.dedisperse()
tp.plot_subbands()
tp.plot_sumprof()
print("DM =", tp.bestdm, "gives reduced chi^2 =", tp.calc_redchi2())
tp.dedisperse(27.0)
tp.plot_subbands()
tp.plot_sumprof()
print("DM = 27.0 gives reduced chi^2 =", tp.calc_redchi2())
tp.dedisperse(33.0)
tp.plot_subbands()
tp.plot_sumprof()
print("DM = 33.0 gives reduced chi^2 =", tp.calc_redchi2())
tp.plot_intervals()
| 41,422 | 46.071591 | 103 | py |
presto | presto-master/python/presto/injectpsr.py | #!/usr/bin/env python
"""Inject a fake pulsar into real data, creating
a filterbank file.
Patrick Lazarus, June 26, 2012
"""
from __future__ import print_function
from builtins import zip
from builtins import object
import sys
import argparse
import warnings
import copy
import numpy as np
import scipy.integrate
import scipy.interpolate
import matplotlib
matplotlib.use('agg') # Use a non-interactive backend
import matplotlib.pyplot as plt
from presto import filterbank
from presto import psr_utils
DEBUG = False # Print debugging messages
NUMSECS = 1.0 # Number of seconds of data to use to determine global scale
# when repacking floating-point data into integers
BLOCKSIZE = 1e4 # Number of spectra to manipulate at once
NUMPOINTS = 512 # Number of points to use for spline profiles when applying DM
NINTEG_PER_BIN = 1 # Number of points to average integrate per time bin
def integrate_phase_bin(prof_func, phs, dphs, nseg=1):
intervals = np.linspace(0, dphs, nseg+1, endpoint=True)
edges = intervals+np.asarray(phs)[...,np.newaxis]
return np.trapz(prof_func(edges), edges, axis=-1)
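# Illustrative example: integrating a flat (unit) profile across a phase bin
# of width 0.1 returns ~0.1, i.e. the bin width times the mean intensity:
#   integrate_phase_bin(lambda phs: np.ones_like(phs), 0.0, 0.1)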
class Profile(object):
"""A class to represent a generic pulse profile.
"""
def __init__(self, prof_func, scale=1):
"""Construct a profile.
Inputs:
prof_func: A function of a single variable.
This function should:
1) Represent the pulse profile.
2) Expect input values of phase ranging between
0 and 1.
3) Work when provided with a numpy array.
scale: An overall scaling factor to multiply
the profile by.
Output:
prof: The profile object.
"""
self.prof_func = prof_func
self.scale = scale
def __call__(self, phs):
"""Return the value of the profile at the given phase.
Inputs:
phs: The phase of the profile (between 0 and 1) where
the profile should be evaluated.
Output:
vals: The values of the profile at the requested phases.
"""
profvals = self.prof_func(phs)
return self.scale*profvals
def plot(self, nbin=1024, scale=1):
x0 = np.linspace(0, 1.0, nbin+1, endpoint=True)
plt.plot(x0, self(x0)*scale)
plt.xlabel("Phase")
def get_area(self, npts=4096, **kwargs):
"""Return the area under the pulse in units of (intensity x phase).
The area is calculated by evaluating the profile at many points
and numerically integrated using the trapezoid rule.
NOTE: the scale-factor will be applied.
Input:
npts: The number of points to use when evaluating the
profile.
Ouput:
area: The area under the pulse in units of (intensity x phase).
"""
phs = np.linspace(0, 1.0, npts+1, endpoint=True)
area = np.trapz(y=self(phs), x=phs)
#area, err = scipy.integrate.quadrature(self, 0, 1, maxiter=250)
return area
def get_max(self, npts=4096):
"""Return the maximum value of the profile.
The profile is evaluated at many points. The quantity returned
is the maximum value evaluated.
NOTE: the scale-factor will be applied.
Inputs:
npts: The number of points to use when evaluating the
profile.
            Output:
profmax: The profile maximum.
"""
phs = np.linspace(0, 1.0, npts+1, endpoint=True)
profmax = np.max(self(phs))
return profmax
def get_fwhm(self, npts=4096):
"""Determine and return the FWHM of the profile, in phase.
This only works if two points in the profile are at half-maximum,
and all points in between are larger than half-max.
Input:
npts: The number of points to use when evaluating the
profile.
            Output:
fwhm: The full-width at half-maximum of the profile, in phase.
"""
phs = np.linspace(0, 1.0, npts+1, endpoint=True)
vals = self(phs)
profmax = np.max(self(phs))
halfmax = profmax/2.0
ma = np.ma.masked_less(vals, halfmax)
# Unmasked region has vals >= halfmax
unmasked = np.ma.notmasked_contiguous(ma)
if len(unmasked) == 1:
sl = unmasked[0]
return ((sl.stop-1) - (sl.start))/float(npts)
elif len(unmasked) == 2:
sl1 = unmasked[0]
sl2 = unmasked[1]
if sl1.start==0 and sl2.stop==len(phs):
# profile straddles phase=0
return (npts+(sl1.stop-1) - sl2.start)/float(npts)
else:
return None
def get_equivalent_width(self, npts=4096):
"""Determine and return the equivalent width of the profile, in phase.
The equivalent width is the area under the pulse divided
by the profile's maximum value.
Input:
npts: The number of points to use when evaluating the
profile.
            Output:
weq: The equivalent width of the profile, in phase.
"""
return self.get_area(npts=npts)/self.get_max(npts=npts)
def set_scaling(self, scale):
"""Set the profile's scaling factor.
Input:
scale: The scaling factor to use.
Outputs:
None
"""
self.scale = scale
def delay(self, phasedelay):
"""Delay the profile and return a new Profile object.
Input:
phasedelay: The amount of phase to delay the profile by.
Output:
delayed: The delayed Profile.
"""
delayed_prof = Profile(lambda ph: self((ph-phasedelay) % 1)/self.scale, \
scale=self.scale)
return delayed_prof
def convolve_with(self, other, npts=4096, conserve_area=True):
"""Convolve Profile with another. Return a SplineProfile
with the requested number of points.
Inputs:
other: The Profile to convolve with.
npts: The number of points to use when creating the
resulting convolution (i.e. a SplineProfile).
(Default: 4096)
                conserve_area: If true, scale the convolution such that
it has the same area as the input profile (i.e. 'self')
            Outputs:
convolution: The convolution, a SplineProfile object.
"""
phs = np.linspace(0, 1, npts, endpoint=False)
conv_vals = np.fft.irfft(np.fft.rfft(self(phs))*np.fft.rfft(other(phs)))
if conserve_area:
# scale convolution such that area is same as before
# Isn't this already a property of convolutions? Should it always be true?
scale = self.get_area()/np.trapz(conv_vals, phs)
convolution = SplineProfile(conv_vals*scale)
return convolution
def smear(self, smearphs, delayphs=0, npts=4096):
"""Smear the profile with a boxcar of width 'smearphs'. Return
a SplineProfile object sampled with 'npts' points.
Inputs:
smearphs: The amount (in phase) to smear the profile by.
delayphs: The amount (in phase) to delay the pulse by.
(Default: No delay)
npts: The number of points to use when creating the
smeared SplineProfile. (Default: 4096)
Output:
smeared: The smeared Profile.
"""
if smearphs < 0:
            raise ValueError("Amount of phase to smear by (%g) " \
                             "cannot be negative!" % smearphs)
elif smearphs == 0:
# No scattering to do, return profile
return copy.deepcopy(self)
bc = boxcar_factory(smearphs, delayphs)
return self.convolve_with(bc, npts=npts)
def scatter(self, scatterphs, npts=4096):
"""Scatter the profile with a one-sided exponential of width
'scatterphs'. Return a SplineProfile object sampled
with 'npts' points.
Inputs:
scatterphs: The time-scale (in phase) of the exponential
scattering function.
npts: The number of points to use when creating the scattered
SplineProfile. (Default: 4096)
Outputs:
scattered: The scattered Profile.
"""
if scatterphs < 0:
raise ValueError("Amount of phase to scatter by (%g) " \
"cannot be negative!" % scatterphs)
elif scatterphs == 0:
# No scattering to do, return profile
return copy.deepcopy(self)
ex = exponential_factory(scatterphs)
return self.convolve_with(ex, npts=npts)
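# Illustrative usage sketch (not part of the original module): build a Profile
# from a simple Gaussian-shaped function of phase and query a few of its
# properties. The pulse width used here is an arbitrary example value.
def _demo_profile_basics():
    prof = Profile(lambda ph: np.exp(-0.5*((np.asarray(ph)-0.5)/0.05)**2))
    area = prof.get_area()                  # intensity x phase
    weq = prof.get_equivalent_width()       # area / maximum, in phase
    shifted = prof.delay(0.25)              # peak moves from phase 0.5 to 0.75
    return area, weq, shifted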
class SplineProfile(Profile):
def __init__(self, profvals, scale=1, **spline_kwargs):
"""Construct a profile that uses a spline to interpolate a function.
Inputs:
profvals: The values of the profile to be interpolated.
scale: An overall scaling factor to multiply
the profile by.
**All additional keyword arguments are passed to the
spline constructor.
Output:
prof: The profile object.
"""
self.profvals = profvals
# TODO: Should we evaluate at the centre of the bins?
phs = np.linspace(0,1, len(profvals)+1, endpoint=True)
# Manually set value at phs=1.0 to the value at phs=0.0
vals = np.concatenate((profvals, [profvals[0]]))
# Create spline object and use it as the profile function
spline = scipy.interpolate.InterpolatedUnivariateSpline(phs, \
vals, **spline_kwargs)
super(SplineProfile, self).__init__(spline, scale)
def __call__(self, phs):
"""Return the value of the profile at the given phase.
Inputs:
phs: The phase of the profile (between 0 and 1) where
the profile should be evaluated.
Output:
vals: The values of the profile at the requested phases.
"""
vals = super(SplineProfile, self).__call__(phs.flat)
        # Re-shape values because the spline returns a flattened array.
vals = np.atleast_1d(vals)
vals.shape = phs.shape
return vals
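# Illustrative sketch (not part of the original module): tabulate a pulse shape
# on a coarse grid and let SplineProfile interpolate it on a finer one. Note
# that SplineProfile.__call__ expects a numpy array of phases (it uses .flat
# and .shape), not a bare Python float.
def _demo_spline_profile():
    coarse_phs = np.linspace(0, 1, 64, endpoint=False)
    coarse_vals = np.exp(-0.5*((coarse_phs-0.5)/0.05)**2)
    sprof = SplineProfile(coarse_vals)
    fine_phs = np.linspace(0, 1, 512, endpoint=False)
    return sprof(fine_phs)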
class MultiComponentProfile(Profile):
"""A class to represent a pulse profile made up of
multiple components.
"""
def __init__(self, components=None, scale=1):
"""Construct a multi-component profile.
Input:
components: A list of Profile objects that serve
as the components of this MultiComponentProfile
object. (Default: Create a multi-component profile
with no components.)
scale: An overall scaling factor to multiply
the profile by.
Output:
prof: The MultiComponentProfile object.
"""
self.scale = scale
self.components = []
for component in components:
self.add_component(component)
super(MultiComponentProfile, self).__init__(self._get_profile(), scale)
def _get_profile(self):
"""Private method to get the pulse profile vs. phase
function.
"""
if self.components:
prof = lambda ph: np.sum([comp(ph) for comp \
in self.components], axis=0)
else:
prof = lambda ph: 0
return prof
def add_component(self, comp):
self.components.append(comp)
def plot(self, nbin=1024):
super(MultiComponentProfile, self).plot(nbin=nbin)
for comp in self.components:
comp.plot(nbin=nbin, scale=self.scale)
class VectorProfile(object):
"""A class to represent a vector of pulse profiles.
This can be used to encode intrinsic profile variation,
        or extrinsic smearing/scattering across the band
"""
def __init__(self, profiles, scales=1):
"""Construct a vector of profiles.
Inputs:
profiles: A list of Profile objects.
scales: A numpy array of scaling factors to multiply
the profile vector by when evaluating.
Output:
prof: The profile object.
"""
self.profiles = profiles
self.nprofs = len(profiles)
self.scales = scales
def __call__(self, phs):
phs = np.atleast_1d(np.asarray(phs))
nphs = phs.shape[-1]
vals = np.zeros((nphs, self.nprofs))
if phs.ndim == 1:
# Evaluate all profiles at the same phases
for ii, (prof, scale) in enumerate(zip(self.profiles, self.scales)):
# If scale is not a finite number don't inject anything for this
# channel
if np.isfinite(scale):
vals[:,ii] = prof(phs)*scale
elif phs.ndim == 2:
# Evaluate each profile at a different set of phases
nphs_vecs = phs.shape[0]
if nphs_vecs != self.nprofs:
                raise ValueError("Length of axis=0 of 'phs' (%d) must be " \
"equal to the number of profiles in the " \
"vector (%d)." % (nphs_vecs, self.nprofs))
for ii, (prof, ph, scale) in enumerate(zip(self.profiles, phs, self.scales)):
# If scale is not a finite number don't inject anything for this
# channel
if np.isfinite(scale):
vals[:,ii] = prof(ph)*scale
else:
raise ValueError("VectorProfile can only be evaluated with " \
"1D or 2D arrays")
return vals
def set_scaling(self, scales):
"""Set the profiles' scaling factors.
Input:
scales: The scaling factors to use.
Outputs:
None
"""
arr = np.asarray(scales)
if arr.ndim not in (0, 1):
raise ValueError("Bad number of dimensions for 'scales'. " \
"Number of dimensions must be 0 or 1. " \
"Scales provided has %d dimensions." % arr.ndim)
self.scales = arr
def get_area(self, *args, **kwargs):
"""Return the area under the pulse in units of (intensity x phase).
The area is calculated by evaluating the profile at many points
and numerically integrated using the trapezoid rule.
NOTE: the scale-factor will be applied.
Input:
npts: The number of points to use when evaluating the
profile.
            Output:
area: The area under the pulse in units of (intensity x phase).
"""
areas = np.zeros(len(self.profiles))
for ii, prof in enumerate(self.profiles):
areas[ii] = prof.get_area(*args, **kwargs)
return areas
def plot(self, nbin=1024, scale=1):
phs = np.linspace(0, 1.0, nbin+1, endpoint=True)
data = self(phs).transpose()
imax = plt.axes((0.1, 0.1, 0.6, 0.8))
plt.imshow(data, interpolation='nearest', \
extent=(0, 1, 0, self.nprofs), aspect='auto')
plt.xlabel("Phase")
plt.ylabel("Channel number")
plt.axes((0.7, 0.1, 0.25, 0.8), sharey=imax)
plt.plot(np.sum(data, axis=1)[::-1], np.arange(self.nprofs), 'k-')
# Re-set axes for image
imax.set_xlim(0, 1)
imax.set_ylim(0, self.nprofs)
class DispersedProfile(VectorProfile):
def __init__(self, profiles, dm, freqs, period, scales=1, intrinsic=None, delayed=True):
super(DispersedProfile, self).__init__(profiles, scales)
self.dm = dm
self.freqs = freqs
self.period = period
self.intrinsic = intrinsic
self.delayed = delayed
def get_equivalent_width(self, *args, **kwargs):
return np.ones_like(self.freqs)*self.intrinsic.get_equivalent_width(*args, **kwargs)
def get_area(self, *args, **kwargs):
return np.ones_like(self.freqs)*self.intrinsic.get_area(*args, **kwargs)
def get_max(self, *args, **kwargs):
return np.ones_like(self.freqs)*self.intrinsic.get_max(*args, **kwargs)
def plot(self, nbin=1024, scale=1, dedisp=False):
phs = np.linspace(0, 1.0, nbin+1, endpoint=True)
data = self(phs).transpose()
if self.delayed:
delays = get_phasedelays(self.dm, self.freqs, self.period)
else:
delays = np.zeros_like(self.freqs)
delayedphs = (phs+delays[:,np.newaxis]) % 1
dedispdata = self(delayedphs).transpose()
imax = plt.axes((0.1, 0.1, 0.6, 0.6))
if dedisp:
plt.imshow(dedispdata, interpolation='nearest', \
extent=(0, 1, 0, self.nprofs), aspect='auto')
else:
plt.imshow(data, interpolation='nearest', \
extent=(0, 1, 0, self.nprofs), aspect='auto')
plt.set_cmap('gist_yarg')
plt.xlabel("Phase")
plt.ylabel("Channel number")
plt.axes((0.7, 0.1, 0.25, 0.6), sharey=imax)
plt.plot(np.sum(data, axis=1)[::-1], np.arange(self.nprofs), 'k-')
plt.axes((0.1, 0.7, 0.6, 0.25), sharex=imax)
plt.plot(phs, np.sum(dedispdata, axis=0), ls='-',
c='k', lw=1, label='Smeared and scattered', zorder=2)
if self.intrinsic is not None:
plt.plot(phs, self.intrinsic(phs)*np.ma.masked_invalid(self.scales).sum(),
ls='-', c='#bbbbbb', lw=3, label='Input', zorder=1)
plt.legend(loc='best', prop=dict(size='small'))
plt.figtext(0.05, 0.05, "Period = %.3f ms" % (self.period*1000), size='xx-small')
plt.figtext(0.05, 0.035, r"DM = %.3f cm$\mathrm{^{-3}}$pc" % self.dm, size='xx-small')
# Re-set axes for image
imax.set_xlim(0, 1)
imax.set_ylim(0, self.nprofs)
def get_phasedelays(dm, freqs, period):
"""Return phase delays corresponding to a particular DM.
Inputs:
dm: DM (in pc cm-3)
freqs: The list of frequencies (in MHz)
period: The profiles period (in seconds)
Outputs:
phasedelays: The corresponding phase delays.
"""
# Prepare delays
timedelays = psr_utils.delay_from_DM(dm, freqs)
# Reference all delays to highest frequency channel, which remains
# unchanged
# TODO: Do we really want to refer to high freq?
timedelays -= timedelays[np.argmax(freqs)]
phasedelays = timedelays/period
return phasedelays
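# Illustrative sketch (not part of the original module): phase delays across a
# toy 5-channel band for a 10 ms pulsar at DM = 50 pc cm^-3. The numbers are
# arbitrary example values; psr_utils is already imported at the top of this
# file.
def _demo_phasedelays():
    freqs = np.linspace(1100.0, 1500.0, 5)      # MHz
    return get_phasedelays(dm=50.0, freqs=freqs, period=0.01)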
def apply_dm(inprof, period, dm, chan_width, freqs, tsamp, \
do_delay=True, do_smear=True, do_scatter=True,
verbose=True):
"""Given a profile apply DM delays, smearing, and scattering
within each channel as is appropriate for the given params.
Inputs:
inprof: The profile to modify.
period: The profiles period (in seconds)
dm: The DM (in pc cm-3)
chan_width: The width of each channel (in MHz)
freqs: The list of frequencies (in MHz)
tsamp: Sample time of the recipient filterbank file (in seconds).
do_delay: Boolean, if True apply DM delays to each channel.
The highest freq channel is not shifted. (Default: True)
do_smear: Boolean, if True apply DM smearing to each channel.
(Default: True)
do_scatter: Boolean, if True apply scattering to each channel.
(Default: True)
Outputs:
vecprof: The delayed and smeared VectorProfile.
"""
weq = inprof.get_equivalent_width()
nfreqs = len(freqs)
if verbose:
print("Applying DM to profile (DM = %.2f; %d channels)..." % \
(dm, nfreqs))
# A list of profiles, one for each channel
profiles = []
if dm <= 0:
warnings.warn("DM will not be applied because it is 0 (or smaller?!)")
do_delay = False
do_smear = False
do_scatter = False
if do_delay:
phasedelays = get_phasedelays(dm, freqs, period)
else:
phasedelays = np.zeros(nfreqs)
# Prepare for smear campaign
smeartimes = psr_utils.dm_smear(dm, abs(chan_width), freqs) # In seconds
smearphases = smeartimes/period
# Prepare to scatter
scattertimes = psr_utils.pulse_broadening(dm, freqs)*1e-3 # In seconds
scatterphases = scattertimes/period
if DEBUG:
for ichan, (freq, smear, scatt, delay) in \
enumerate(zip(freqs, smearphases, scatterphases, phasedelays)):
print(" Chan #%d - Freq: %.3f MHz -- " \
"Smearing, scattering, delay (all in phase): " \
"%g, %g, %g" % (ichan, freq, smear, scatt, delay))
oldprogress = 0
sys.stdout.write(" %3.0f %%\r" % oldprogress)
sys.stdout.flush()
# ylim = None
# ylim2 = None
# ylim3 = None
# ylim4 = None
# ylim5 = None
for ii, (delayphs, smearphs, scattphs) in \
enumerate(zip(phasedelays, smearphases, scatterphases)):
#########
# DEBUG: plot all profiles
# plt.clf()
# ax = plt.subplot(5,1,1)
# inprof.plot()
# if ylim is not None:
# ax.set_ylim(ylim)
# else:
# ylim = ax.get_ylim()
if do_smear and not ((smearphs < 0.2*weq) or (smearphs < (tsamp/period))):
# Only smear if requested and smearing-phase is large enough
# bc = boxcar_factory(smearphs, delayphs)
# ax2 = plt.subplot(5,1,2,sharex=ax)
# bc.plot()
# if ylim2 is not None:
# ax2.set_ylim(ylim2)
# else:
# ylim2 = ax2.get_ylim()
if DEBUG:
print("Smearing")
tmpprof = inprof.smear(smearphs, delayphs, npts=NUMPOINTS)
else:
tmpprof = inprof.delay(delayphs)
phs = np.linspace(0, 1, NUMPOINTS+1)
tmpprof = SplineProfile(tmpprof(phs))
# ax3 = plt.subplot(5,1,3,sharex=ax)
# if ylim3 is not None:
# ax3.set_ylim(ylim3)
# else:
# ylim3 = ax3.get_ylim()
# tmpprof.plot()
if do_scatter and not ((scattphs < 0.2*weq) or (scattphs < (tsamp/period))):
# Only scatter if requested and scattering-phase is large enough
# ex = exponential_factory(scattphs)
# ax4 = plt.subplot(5,1,4,sharex=ax)
# ex.plot()
# if ylim4 is not None:
# ax4.set_ylim(ylim4)
# else:
# ylim4 = ax4.get_ylim()
if DEBUG:
print("Scattering")
tmpprof = tmpprof.scatter(scattphs, npts=NUMPOINTS)
# ax5 = plt.subplot(5,1,5,sharex=ax)
# tmpprof.plot()
# if ylim5 is not None:
# ax5.set_ylim(ylim5)
# else:
# ylim5 = ax5.get_ylim()
profiles.append(tmpprof)
# plt.xlim(0,1)
# plt.xlabel("Phase")
# plt.suptitle("Prof %d (%f MHz)" % (ii, freqs[ii]))
# plt.savefig("prof%d.png" % ii)
#########
# Print progress to screen
progress = int(100.0*ii/nfreqs)
if progress > oldprogress:
sys.stdout.write(" %3.0f %%\r" % progress)
sys.stdout.flush()
oldprogress = progress
sys.stdout.write("Done \n")
sys.stdout.flush()
dispersedprof = DispersedProfile(profiles, dm=dm, freqs=freqs, period=period,
intrinsic=inprof, delayed=do_delay)
return dispersedprof
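# Illustrative sketch (not part of the original module): disperse a narrow
# Gaussian-shaped pulse across a toy 16-channel band. The period, DM, channel
# width and sample time are made-up values chosen only for illustration; the
# returned DispersedProfile should still be given per-channel scalings (see
# scale_profile below) before it is evaluated or injected.
def _demo_apply_dm():
    prof = Profile(lambda ph: np.exp(-0.5*((np.asarray(ph)-0.5)/0.02)**2))
    freqs = np.linspace(1500.0, 1400.0, 16)     # MHz, descending like a real band
    return apply_dm(prof, 0.5, 100.0, 6.25, freqs, 64e-6, verbose=False)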
def get_spline_profile(prof, npts=1024, **spline_kwargs):
"""Given a profile object evaluate it and return
a SplineProfile object. If the input profile object
is already an instance of SplineProfile, do nothing
and return the input profile.
Inputs:
            prof: The profile object to convert to a SplineProfile.
npts: The number of points to use when evaluating the
profile. (Default: 1024)
**All additional keyword arguments are passed to the
spline constructor.
Outputs:
spline_prof: The resulting SplineProfile object.
"""
if isinstance(prof, SplineProfile):
# Input profile is already a SplineProfile. Do nothing. Return it.
return prof
else:
phs = np.linspace(0,1, npts, endpoint=False)
profvals = prof(phs)/prof.scale
spline_prof = SplineProfile(profvals, scale=prof.scale, **spline_kwargs)
return spline_prof
def vonmises_factory(amp,shape,loc):
"""Return a Profile with a single von Mises component.
Inputs:
amp: Amplitude of the von Mises function.
shape: A parameter characterising the width
(NOTE: this value is not directly the width.)
loc: Phase of the peak of the von Mises function.
Output:
vm_prof: A Profile object with a von Mises profile described
by the input parameters.
"""
# Need to use a factory for the von Mises functions
# to make sure the lambda uses amp,shape,loc from a local
# scope. The values in a lambda function are stored by reference
# and only looked up dynamically when the function is called.
def vm(ph):
return amp*np.exp(shape*(np.cos(2*np.pi*(ph-loc))-1))
return Profile(vm)
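# Illustrative sketch (not part of the original module): a single narrow
# von Mises component peaking at phase 0.5. The shape value of 50 is an
# arbitrary example; larger values give narrower pulses.
def _demo_vonmises():
    vm = vonmises_factory(1.0, 50.0, 0.5)
    phs = np.linspace(0, 1, 256, endpoint=False)
    return vm(phs)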
def boxcar_factory(width, delay=0):
"""Return a boxcar Profile scaled to have unit area.
Inputs:
width: The width of the boxcar function in phase.
NOTE: if width > 1, it will be folded
delay: The delay, in phase, to apply to the boxcar.
                NOTE: positive delays cause the boxcar to be shifted
                    to the right (i.e. larger pulse phase).
                (Default: No delay)
Output:
boxcar_prof: A boxcar Profile object with the given width.
"""
width = float(width) # Make sure width is a floating-point number
delay = delay % 1 # Make sure delay is in [0, 1)
nwraps = int(width/1)
rem = ((width%1)+delay)%1
if delay < rem:
# Boxcar does not straddle phase=1
def bc(ph):
return (nwraps + ((delay<=ph) & (ph<rem)).astype('int'))/width
else:
# Boxcar is split
def bc(ph):
return (nwraps + ((ph<rem) | (delay<=ph)).astype('int'))/width
return Profile(bc)
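# Illustrative sketch (not part of the original module): boxcars returned by
# boxcar_factory are normalised to unit area, so numerically integrating one
# should give a value close to 1 regardless of its width or delay.
def _demo_boxcar_area():
    bc = boxcar_factory(0.1, delay=0.3)
    return bc.get_area()        # ~1.0, up to sampling error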
def exponential_factory(efold):
"""Return a one-sided exponential Profile.
Inputs:
efold: The e-folding phase of the exponential function.
Output:
exp_prof: A one-sided exponential Profile object.
"""
denom = (1-np.exp(-1/efold))*efold
def ex(ph):
# Denominator comes from sum of geometric series
return np.exp(-ph/efold)/denom
return Profile(ex)
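# Illustrative sketch (not part of the original module): a one-sided
# exponential with an e-folding scale of 0.05 in phase. Like the boxcar above
# it integrates to ~1 over a single rotation because of the geometric-series
# normalisation in the denominator.
def _demo_exponential_area():
    ex = exponential_factory(0.05)
    return ex.get_area()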
def create_vonmises_components(vonmises_strs):
if not vonmises_strs:
warnings.warn("Using default von Mises profile (Amplitude=1.0 " \
"b=5, and phase=0.5)")
vonmises_comps = [vonmises_factory(1.0, 5, 0.5)]
else:
vonmises_comps = []
for vonmises_str in vonmises_strs:
split = vonmises_str.split()
if len(split) != 3:
                raise ValueError("Bad number of von Mises parameters: " \
                                 "should be 3 (amplitude shape phase), got %d" % len(split))
amp = float(split[0])
shape = float(split[1])
loc = float(split[2])
# Need to use a factory for the von Mises functions
# to make sure the lambda uses amp,shape,loc from a local
# scope. The values in a lambda function are stored by reference
# and only looked up dynamically when the function is called.
vonmises_comps.append(vonmises_factory(amp,shape,loc))
return vonmises_comps
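# Illustrative sketch (not part of the original module): component strings are
# whitespace-separated "amplitude shape phase" triples, exactly as accepted by
# the -v/--vonmises command-line option defined at the bottom of this file.
def _demo_vonmises_components():
    comps = create_vonmises_components(["1.0 5 0.5", "0.3 40 0.52"])
    return MultiComponentProfile(comps)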
def scale_from_snr(fil, prof, snr, rms):
    """Return the profile scaling factor such that the simulated
        injected pulsar signal will have the given signal-to-noise ratio.
Inputs:
fil: A FilterbankFile object.
prof: The Profile object representing the profile to inject.
snr: The desired signal-to-noise ratio
rms: The RMS of the recipient file's DM=0 time series.
Outputs:
scale: The scaling factor to apply to the profile.
"""
    # Set scale to 1, just in case it has been altered already.
prof.set_scaling(1)
# Characterise the pulse
area = prof.get_area()
profmax = prof.get_max()
scale = snr*rms/fil.nchans/np.sqrt(fil.nspec*profmax*area)
print("Average area %s, average profile maximum: %s" % \
(np.mean(area), np.mean(profmax)))
print("Average recommended scale factor: %s" % np.mean(scale))
return scale
def snr_from_smean(fil, prof, smean, gain, tsys):
    """Return the target signal-to-noise ratio such that the simulated
        injected pulsar signal will have the given mean flux density (Smean).
Inputs:
fil: A FilterbankFile object.
prof: The Profile object representing the profile to inject.
smean: The mean flux density to simulate, in mJy.
gain: The telescope's gain, in K/Jy.
tsys: The observing system's temperature, in K.
Outputs:
snr: The target signal-to-noise ratio.
"""
dutycycle = prof.get_equivalent_width() # in phase
# Because width is reported in phase,
# it is actually the duty cycle
# Characterise the recipient filterbank file
tint = fil.nspec*fil.tsamp
bw = np.abs(fil.foff*fil.nchans)
# Target SNR
warnings.warn("Assuming 2 (summed) polarizations.")
snr = smean*gain*np.sqrt(2*tint*bw)/tsys*np.sqrt(1/dutycycle-1)
print("Expected SNR of injected pulsar signal (after folding " \
"and integrating over frequency): %s" % snr)
return snr
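# Worked example (illustrative, not part of the original module) of the
# radiometer scaling used in snr_from_smean above, with made-up numbers:
# smean = 0.1 mJy, gain = 2 K/Jy, tsys = 25 K, a 600 s integration, 100 MHz of
# bandwidth and a 5 per cent duty cycle. The mJy/MHz unit convention matches
# the one used in the function (the factors of 1e-3 and 1e6 cancel).
def _demo_radiometer_snr():
    smean, gain, tsys = 0.1, 2.0, 25.0          # mJy, K/Jy, K
    tint, bw, dutycycle = 600.0, 100.0, 0.05    # s, MHz, phase
    snr = smean*gain*np.sqrt(2*tint*bw)/tsys*np.sqrt(1/dutycycle-1)
    return snr                                  # ~12 for these numbers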
def inject(infile, outfn, prof, period, dm, nbitsout=None,
block_size=BLOCKSIZE, pulsar_only=False, inplace=False):
if isinstance(infile, filterbank.FilterbankFile):
fil = infile
elif inplace:
fil = filterbank.FilterbankFile(infile, 'readwrite')
else:
fil = filterbank.FilterbankFile(infile, 'read')
print("Injecting pulsar signal into: %s" % fil.filename)
if False:
delays = psr_utils.delay_from_DM(dm, fil.frequencies)
delays -= delays[np.argmax(fil.frequencies)]
get_phases = lambda times: (times-delays)/period % 1
else:
get_phases = lambda times: times/period % 1
# Create the output filterbank file
if nbitsout is None:
nbitsout = fil.nbits
if inplace:
warnings.warn("Injecting pulsar signal *in-place*")
outfil = fil
else:
# Start an output file
print("Creating out file: %s" % outfn)
outfil = filterbank.create_filterbank_file(outfn, fil.header, \
nbits=nbitsout, mode='append')
if outfil.nbits == 8:
raise NotImplementedError("This code is out of date. 'delays' is not " \
"done in this way anymore..")
# Read the first second of data to get the global scaling to use
onesec = fil.get_timeslice(0, 1).copy()
onesec_nspec = onesec.shape[0]
times = np.atleast_2d(np.arange(onesec_nspec)*fil.tsamp).T+delays
phases = times/period % 1
onesec += prof(phases)
minimum = np.min(onesec)
median = np.median(onesec)
# Set median to 1/3 of dynamic range
global_scale = (256.0/3.0) / median
del onesec
else:
# No scaling to be performed
# These values will cause scaling to keep data unchanged
minimum = 0
global_scale = 1
sys.stdout.write(" %3.0f %%\r" % 0)
sys.stdout.flush()
oldprogress = -1
# Loop over data
lobin = 0
spectra = fil.get_spectra(0, block_size)
numread = spectra.shape[0]
while numread:
if pulsar_only:
# Do not write out data from input file
# zero it out
spectra *= 0
hibin = lobin+numread
# Sample at middle of time bin
times = (np.arange(lobin, hibin, 1.0/NINTEG_PER_BIN)+0.5/NINTEG_PER_BIN)*fil.dt
#times = (np.arange(lobin, hibin)+0.5)*fil.dt
phases = get_phases(times)
profvals = prof(phases)
shape = list(profvals.shape)
shape[1:1] = [NINTEG_PER_BIN]
        shape[0] //= NINTEG_PER_BIN  # integer division so the reshape below works under Python 3
profvals.shape = shape
toinject = profvals.mean(axis=1)
#toinject = profvals
if np.ndim(toinject) > 1:
injected = spectra+toinject
else:
injected = spectra+toinject[:,np.newaxis]
scaled = (injected-minimum)*global_scale
if inplace:
outfil.write_spectra(scaled, lobin)
else:
outfil.append_spectra(scaled)
# Print progress to screen
progress = int(100.0*hibin/fil.nspec)
if progress > oldprogress:
sys.stdout.write(" %3.0f %%\r" % progress)
sys.stdout.flush()
oldprogress = progress
# Prepare for next iteration
lobin = hibin
spectra = fil.get_spectra(lobin, lobin+block_size)
numread = spectra.shape[0]
sys.stdout.write("Done \n")
sys.stdout.flush()
def load_profile(infn, verbose=True):
if verbose:
print("Loading profile from file (%s)" % infn)
data = np.load(infn)
profiles = []
for key in sorted(data.keys()):
profiles.append(SplineProfile(data[key]))
prof = VectorProfile(profiles)
data.close()
return prof
def save_profile(prof, outfn, verbose=True):
if verbose:
print("Writing %s instance to file (%s)" % \
(type(prof).__name__, outfn))
outfile = open(outfn, 'wb')
profvals = {}
for ii, pp in enumerate(prof.profiles):
profvals['chan%d' % ii] = pp.profvals
np.savez(outfile, **profvals)
outfile.close()
def parse_cfgstr(cfgstrs):
cfgs = {}
for cfgstr in cfgstrs:
for cfg in cfgstr.split(','):
key, val = cfg.split('=')
cfgs[key] = val
return cfgs
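# Illustrative sketch (not part of the original module): configuration strings
# are comma-separated key=value pairs; several strings are merged into one
# dictionary of (string) values, which the scaling functions below convert to
# floats themselves.
def _demo_parse_cfgstr():
    return parse_cfgstr(["snr=8.0,rms=2.5"])    # {'snr': '8.0', 'rms': '2.5'}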
def get_scaling(fil, prof, cfgstrs):
"""Given a target filterbank file, a profile, and
a configuration string return the corresponding
scaling factor.
Inputs:
fil: A filterbank.FilterbankFile object.
prof: A Profile object.
cfgstrs: A list of strings containing configurations.
Output:
scaling: The corresponding scaling.
"""
cfgs = parse_cfgstr(cfgstrs)
return float(cfgs['scale'])
def get_scaling_from_snr(fil, prof, cfgstrs):
"""Given a target filterbank file, a profile, and
a configuration string compute the scaling factor
given the SNR.
Inputs:
fil: A filterbank.FilterbankFile object.
prof: A Profile object.
cfgstrs: A list of strings containing configurations.
Output:
scaling: The corresponding scaling.
"""
cfgs = parse_cfgstr(cfgstrs)
snr = float(cfgs['snr'])
rms = float(cfgs['rms'])
scale = scale_from_snr(fil, prof, snr=snr, rms=rms)
return scale
def get_scaling_from_smean(fil, prof, cfgstrs):
"""Given a target filterbank file, a profile, and
a configuration string compute the scaling factor
given the target mean flux density.
Inputs:
fil: A filterbank.FilterbankFile object.
prof: A Profile object.
cfgstrs: A list of strings containing configurations.
Output:
scaling: The corresponding scaling.
"""
cfgs = parse_cfgstr(cfgstrs)
smean = float(cfgs['smean'])
rms = float(cfgs['rms'])
gain = float(cfgs['gain'])
tsys = float(cfgs['tsys'])
snr = snr_from_smean(fil, prof, smean=smean, \
gain=gain, tsys=tsys)
scale = scale_from_snr(fil, prof, snr, rms=rms)
return scale
def get_scaling_from_file(fil, prof, cfgstrs):
"""Given a target filterbank file, a profile, and
a configuration string read scaling factors from
a text file. The file should have one floating point
number per line. There should be as many lines as there
are frequency channels in the filterbank file.
Inputs:
fil: A filterbank.FilterbankFile object.
prof: A Profile object.
cfgstrs: A list of strings containing configurations.
Output:
scaling: The corresponding scaling.
"""
cfgs = parse_cfgstr(cfgstrs)
fn = cfgs['file']
smean = float(cfgs['smean'])
col = int(cfgs.get('col', 0))
area = prof.get_area()
# 'conversion_factors' contains the per-channel conversion between mJy
# and receiver units (in units of mJy/rcvr)
conversion_factors = np.loadtxt(fn, usecols=(col,))
scales = smean/conversion_factors/area
return scales
SCALE_METHODS = {'scale': get_scaling, \
'snr': get_scaling_from_snr, \
'radiometer': get_scaling_from_smean, \
'scalefile': get_scaling_from_file}
def scale_profile(prof, scale_name, scale_cfgstrs, fil, verbose=True):
"""Scale the profile.
Inputs:
prof: The profile object to scale.
scale_name: The name of the type of scaler to use.
scale_cfgstrs: A list of configuration strings for the scaler.
fil: A FilterbankFile object.
verbose: Print extra information. (Default: True)
Outputs:
None - the scalings are applied directly to the input profile.
"""
# Determine scaling
scale_getter = SCALE_METHODS[scale_name]
scaling = scale_getter(fil, prof, scale_cfgstrs)
if verbose:
print("Band-averaged scale-factor: %g" % np.ma.masked_invalid(scaling).mean())
prof.set_scaling(scaling)
def make_profile(vonmises, verbose=True):
"""Create the profile object to use.
Inputs:
vonmises: A list of strings defining von mises components.
verbose: Print extra information. (Default: True)
"""
comps = create_vonmises_components(vonmises)
prof = MultiComponentProfile(comps)
if verbose:
print("Creating profile. Number of components: %d" % len(comps))
print("Profile area (intensity x phase): %g" % prof.get_area())
print("Equivalent width (phase): %g" % prof.get_equivalent_width())
print("FWHM (phase): %g" % prof.get_fwhm())
print("Profile maximum: %g" % prof.get_max())
prof = get_spline_profile(prof)
return prof
def main():
fn = args.infile
if args.inplace:
fil = filterbank.FilterbankFile(fn, mode='readwrite')
else:
fil = filterbank.FilterbankFile(fn, mode='read')
if args.inprof is not None:
warnings.warn("Saved profiles already may be tuned to a particular " \
"DM, period and filterbank file (freq, nchans, " \
"tsamp, etc).")
prof = load_profile(args.inprof)
else:
prof = make_profile(args.vonmises)
prof = apply_dm(prof, args.period, args.dm, \
fil.foff, fil.frequencies, fil.tsamp)
scale_profile(prof, args.scale_name, args.scale_cfgstrs, fil)
if args.outprof is not None:
save_profile(prof, args.outprof)
outfn = args.outname % fil.header
print("Showing plot of profile to be injected...")
plt.figure()
plt.clf()
prof.plot(dedisp=True)
plt.xlim(0,1)
plt.savefig(outfn+".ps")
if args.dryrun:
sys.exit()
inject(fil, outfn, prof, args.period, args.dm, \
nbitsout=args.output_nbits, block_size=args.block_size, \
pulsar_only=args.pulsar_only, inplace=args.inplace)
def parse_model_file(modelfn):
    """Parse a paas model file (*.m) written by paas.
Return a list of parameters describing each component.
In particular (amplitude, shape, phase).
Input:
modelfn: The name of the model file.
Outputs:
params: List of parameters for each component.
(i.e. "amplitude shape phase")
"""
mfile = open(modelfn, 'r')
return [" ".join(reversed(line.split())) \
for line in mfile.readlines()]
class ParseMfileAction(argparse.Action):
def __call__(self, parser, namespace, values, option_string=None):
vonmises = getattr(namespace, self.dest)
vonmises.extend(parse_model_file(values))
class ScaleHelpAction(argparse.Action):
    def __call__(self, parser, namespace, values, option_string=None):
helptext = "Scaling methods:\n\n" + \
"scale - Multiply injected signal by a scaling factor\n" + \
"Configs: 'scale' - (float) the scaling factor\n\n" + \
"snr - Scale the injected signal so the integrated \n" + \
" profile has the given SNR\n" + \
"Configs: 'snr' - (float) the target SNR\n" + \
" 'rms' - (float) the RMS of the cleaned DM=0 timeseries\n\n" + \
"radiometer - Scale the injected signal so the integrated \n" + \
" profile has the given mean flux density\n" + \
"Configs: 'smean' - (float) the target mean flux density (in mJy)\n" + \
" 'gain' - (float) the telescope's gain (in K/Jy)\n" + \
" 'tsys' - (float) the observing system's temperature (in K)\n" + \
" 'rms' - (float) the RMS of the cleaned DM=0 timeseries\n\n" + \
"scalefile - Scale the signal in each channel independently\n" + \
" according to the scaling factors in the file\n" + \
"Configs: 'smean' - (float) the target mean flux density\n" + \
" 'file' - (string) a text file containing per-channel \n" + \
" conversion factors from flux-density to 'receiver' \n" + \
" units (in mJy/rcvr).\n" + \
" Each row should have a single floating-point \n" + \
" number. The number and order of the rows should \n" + \
" correspond to the input filterbank file.\n" + \
" 'col' - (int) The column number to read conversion factors \n" +\
" from. Columns are numbered starting at 0.\n" + \
" (Default: first column)\n\n"
sys.stderr.write(helptext)
sys.exit(1)
if __name__ == '__main__':
parser = argparse.ArgumentParser(prog='injectpsr.py', \
description="v0.7 Patrick Lazarus (Jan 16, 2014)")
parser.add_argument("--dm", dest='dm', type=float, \
                        help="The DM of the (fake) injected pulsar signal. " \
                             "(This argument is required.)", \
default=None)
parser.add_argument("-p", "--period", dest='period', \
default=None, type=float, \
help="The *topocentric* period (in seconds) of the (fake) injected " \
"pulsar signal. (This argument is required.)")
parser.add_argument("-c", "--scale-configs", dest='scale_cfgstrs', type=str, \
required=True, default=[], action='append', \
help="A string of comma-separated parameters to " \
"configure how the injected signal is scaled. " \
"Format is '<param-name>=<value>,...'")
parser.add_argument("-s", "--scale-method", dest='scale_name', type=str, \
required=True, \
help="Name of a method for determining how the " \
"injected signal is scaled.")
parser.add_argument("--scale-help", dest='show_scale_help', \
nargs=0, action=ScaleHelpAction, \
help="Show help text for scaling methods, parameters.")
    parser.add_argument("-v", "--vonmises", dest='vonmises', action='append', \
                        help="A string of 3 parameters defining a vonmises " \
                             "component to be injected. Be sure to quote the " \
                             "3 parameters together. The params are: 'amplitude " \
                             "shape phase'. Amplitude is not related to SNR in " \
                             "any way. Also, phase should be between 0 and 1. " \
                             "(Default: if no components are provided " \
"a von Mises with amplitude=1.0, shape=5, and " \
"phase=0.5 will be used.)", \
default=[])
parser.add_argument("-m", "--model-file", dest="vonmises", \
type=str, action=ParseMfileAction, \
help="A model file (*.m) as written by 'paas'.")
parser.add_argument("--block-size", dest='block_size', default=BLOCKSIZE, \
type=float, \
help="Number of spectra per block. This is the amount " \
"of data manipulated/written at a time. (Default: " \
" %d spectra)" % BLOCKSIZE)
parser.add_argument("--nbits", dest='output_nbits', default=None, type=int, \
                        help="Number of bits per sample to use in output " \
"filterbank file. (Default: same as input file)")
parser.add_argument("-n", "--dryrun", dest="dryrun", action="store_true", \
help="Show the pulse profile to be injected and exit. " \
"(Default: do not show profile, inject it)")
parser.add_argument("--no-apply-dm", dest='apply_dm', action='store_false', \
default=True, \
help="Do not apply the DM (i.e. do not delay or smear " \
"the pulse; Default: Apply DM)")
parser.add_argument("--load-prof", dest="inprof", default=None, \
help="Load a profile object from file. (Default: " \
"create a fresh profile object.)")
parser.add_argument("--save-prof", dest='outprof', default=None, \
help="Save a profile object to file. (Default: " \
"do not save profile object.)")
parser.add_argument("-o", "--outname", dest='outname', \
default="injected.fil", \
help="The name of the output file.")
parser.add_argument("--write-pulsar-only", dest='pulsar_only', \
action='store_true', \
help="Only write the pulsar signal to the output file. "
"That is, do not include the data from the input "
"file. This is useful for debugging. (Default: "
"write data from input file _and_ pulsar signal.)")
parser.add_argument("--in-place", dest='inplace', \
action='store_true', \
help="Inject the pulsar signal in-place. " \
"THIS WILL OVERWRITE THE INPUT DATAFILE!" \
"(Default: Do _not_ inject in-place)")
parser.add_argument("infile", \
help="File that will receive synthetic pulses.")
args = parser.parse_args()
if args.period is None or args.dm is None:
raise ValueError("Both a period and a DM _must_ be provided!")
if args.scale_name is not None and args.inprof is not None:
raise ValueError("Loading a saved profile via the " \
"'--load-prof' args is incompatible " \
"with scaling the profile.")
main()
| 48,769 | 37.371361 | 94 | py |
presto | presto-master/python/presto/fftfit.py | from _fftfit import * | 21 | 21 | 21 | py |
presto | presto-master/python/presto/Pgplot.py | """
Routine for easy to use 1-D and 2-D plotting using 'PGPLOT'
and the Python 'PPGPLOT' package
Written by Scott M. Ransom ([email protected])
last revision: 01 Jul 2000
'PGPLOT' was written by Tim Pearson <[email protected]>,
and can be found at http://astro.caltech.edu/~tjp/pgplot/
'PPGPLOT' was written by Nick Patavalis <[email protected]>,
and can be found at http://ariadne.di.uoa.gr/ppgplot/
_or_ an updated version is available in the same directory
where this file was found: ftp://cfa-ftp.harvard.edu/pub/ransom
"""
from __future__ import print_function
from __future__ import absolute_import
from builtins import range
from builtins import object
import sys
import numpy as Num
from presto import ppgplot
# Check if string in Py2 and Py3 compatible way
def isstr(var):
return isinstance(var, str if sys.version_info[0] >= 3 else basestring)
# True if we have an /XWIN or /XSERVE device open yet
ppgplot_dev_open_ = 0
# True if we have already scaled and/or prepped the current page
ppgplot_dev_prep_ = 0
# Default plotting device
ppgplot_device_ = '/XWIN'
# Default font to use
ppgplot_font_ = 1
# Default font size to use
ppgplot_font_size_ = 1.0
# Default line style to use
ppgplot_linestyle_ = 1
# Default line width to use
ppgplot_linewidth_ = 2
# Default symbol to plot
ppgplot_symbol_ = None
# Default label color for contour lines
ppgplot_labels_ = None
# Default label interval for labeling contour lines
ppgplot_labelint_ = 20
# Default minimum label interval for labeling contour lines
ppgplot_labelmin_ = 20
# Default line colors to use
ppgplot_color_ = 'white'
# Default color palette for IMAG routines
ppgplot_palette_ = 'rainbow'
# The set of colors for PGPLOT
ppgplot_colors_ = { \
'black':0, 'Black':0, 'BLACK':0, \
'white':1, 'White':1, 'WHITE':1, \
'red':2, 'Red':2, 'RED':2, \
'green':3, 'Green':3, 'GREEN':3, \
'blue':4, 'Blue':4, 'BLUE':4, \
'cyan':5, 'Cyan':5, 'CYAN':5, \
'magenta':6, 'Magenta':6, 'MAGENTA':6, \
'yellow':7, 'Yellow':7, 'YELLOW':7, \
'orange':8, 'Orange':8, 'ORANGE':8, \
'green2':9, 'Green2':9, 'GREEN2':9, \
'green3':10, 'Green3':10, 'GREEN3':10, \
    'blue2':11, 'Blue2':11, 'BLUE2':11, \
'purple':12, 'Purple':12, 'PURPLE':12, \
'pink':13, 'Pink':13, 'PINK':13, \
'darkgray':14, 'DarkGray':14, 'DARKGRAY':14, \
'dark gray':14, 'Dark Gray':14, 'DARK GRAY':14, \
'lightgray':15, 'LightGray':15, 'LIGHTGRAY':15, \
'light gray':15, 'Light Gray':15, 'LIGHT GRAY':15 \
}
# Data should be a Numpy Array
def scalerange(data):
"""
scalerange(data):
Adjust the range to be plotted so that it fits nicely on the page.
Return a list with adjusted minimum and maximum values from 'data'.
"""
min = Num.minimum.reduce(data)
max = Num.maximum.reduce(data)
extra = 0.1 * (max - min)
return [min - extra, max + extra]
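# Illustrative sketch (not part of the original module): scalerange() pads the
# data range by 10 per cent of its span on each side so that plotted curves do
# not touch the frame, e.g. data spanning [0, 1] maps to [-0.1, 1.1].
def _demo_scalerange():
    return scalerange(Num.array([0.0, 0.5, 1.0]))   # -> [-0.1, 1.1]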
# Reset global variables to defaults
def resetdefaults():
"""
resetdefaults():
Reset global plotting variables to default values.
"""
global ppgplot_font_, ppgplot_linestyle_, ppgplot_linewidth_, \
ppgplot_color_, ppgplot_font_size_
ppgplot.pgscf(ppgplot_font_)
ppgplot.pgsls(ppgplot_linestyle_)
ppgplot.pgslw(ppgplot_linewidth_)
ppgplot.pgsci(ppgplot_colors_[ppgplot_color_])
ppgplot.pgsch(ppgplot_font_size_)
# Go to a subsequent plotting page
def nextplotpage(reset=0):
"""
nextplotpage():
Advance the plotting device to a new page.
The optional entry is:
reset: reset defaults or not (default = 0 (no))
"""
global ppgplot_dev_open_, ppgplot_dev_prep_
if (ppgplot_dev_open_):
ppgplot.pgpage()
ppgplot_dev_prep_ = 0
else:
print("Can't go to the next plot page unless a plotting device is open.")
if (reset): resetdefaults()
# Reset the color indices to the default values
def reset_colors():
lo_col_ind, hi_col_ind = ppgplot.pgqcol()
ppgplot.pgscir(lo_col_ind, hi_col_ind)
ppgplot.pgscr( 0, 0.00, 0.00, 0.00) # Black (background)
ppgplot.pgscr( 1, 1.00, 1.00, 1.00) # White (default)
ppgplot.pgscr( 2, 1.00, 0.00, 0.00) # Red
ppgplot.pgscr( 3, 0.00, 1.00, 0.00) # Green
ppgplot.pgscr( 4, 0.00, 0.00, 1.00) # Blue
ppgplot.pgscr( 5, 0.00, 1.00, 1.00) # Cyan (Green + Blue)
ppgplot.pgscr( 6, 1.00, 0.00, 1.00) # Magenta (Red + Blue)
ppgplot.pgscr( 7, 1.00, 1.00, 0.00) # Yellow (Red + Green)
ppgplot.pgscr( 8, 1.00, 0.50, 0.00) # Red + Yellow (Orange)
ppgplot.pgscr( 9, 0.50, 1.00, 0.00) # Green + Yellow
ppgplot.pgscr(10, 0.00, 1.00, 0.50) # Green + Cyan
ppgplot.pgscr(11, 0.00, 0.50, 1.00) # Blue + Cyan
ppgplot.pgscr(12, 0.50, 0.00, 1.00) # Blue + Magenta
ppgplot.pgscr(13, 1.00, 0.00, 0.50) # Red + Magenta
ppgplot.pgscr(14, 0.33, 0.33, 0.33) # Dark Gray
ppgplot.pgscr(15, 0.66, 0.66, 0.66) # Light Gray
for ci in range(16, hi_col_ind+1):
ppgplot.pgscr(ci, 0.00, 0.00, 0.00) # Black (background)
# Open a plotting device
def prepplot(rangex, rangey, title=None, labx=None, laby=None, \
rangex2=None, rangey2=None, labx2=None, laby2=None, \
logx=0, logy=0, logx2=0, logy2=0, font=ppgplot_font_, \
fontsize=ppgplot_font_size_, id=0, aspect=1, ticks='in', \
panels=[1,1], device=ppgplot_device_):
"""
prepplot(rangex, rangey, ...)
Open a PGPLOT device for plotting.
'rangex' and 'rangey' are sequence objects giving min and
max values for each axis.
The optional entries are:
title: graph title (default = None)
labx: label for the x-axis (default = None)
laby: label for the y-axis (default = None)
rangex2: ranges for 2nd x-axis (default = None)
rangey2: ranges for 2nd y-axis (default = None)
labx2: label for the 2nd x-axis (default = None)
laby2: label for the 2nd y-axis (default = None)
logx: make the 1st x-axis log (default = 0 (no))
logy: make the 1st y-axis log (default = 0 (no))
logx2: make the 2nd x-axis log (default = 0 (no))
logy2: make the 2nd y-axis log (default = 0 (no))
font: PGPLOT font to use (default = 1 (normal))
fontsize: PGPLOT font size to use (default = 1.0 (normal))
id: Show ID line on plot (default = 0 (no))
aspect: Aspect ratio (default = 1 (square))
ticks: Ticks point in or out (default = 'in')
panels: Number of subpanels [r,c] (default = [1,1])
device: PGPLOT device to use (default = '/XWIN')
Note: Many default values are defined in global variables
with names like ppgplot_font_ or ppgplot_device_.
"""
global ppgplot_dev_open_, ppgplot_dev_prep_
# Check if we will use second X or Y axes
# Note: if using a 2nd X axis, the range should correspond
# to the minimum and maximum values of the 1st X axis. If
# using a 2nd Y axis, the range should correspond to the
# scalerange() values of the 1st Y axis.
if rangex2 is None:
rangex2=rangex
otherxaxis=0
else: otherxaxis=1
if rangey2 is None:
rangey2=rangey
otheryaxis=0
else: otheryaxis=1
# Open the plot device
if (not ppgplot_dev_open_):
ppgplot.pgopen(device)
# Let the routines know that we already have a device open
ppgplot_dev_open_ = 1
# Set the aspect ratio
ppgplot.pgpap(0.0, aspect)
if (panels != [1,1]):
# Set the number of panels
ppgplot.pgsubp(panels[0], panels[1])
ppgplot.pgpage()
# Choose the font
ppgplot.pgscf(font)
# Choose the font size
ppgplot.pgsch(fontsize)
    # Set the line width
ppgplot.pgslw(ppgplot_linewidth_)
# Plot the 2nd axis if needed first
if otherxaxis or otheryaxis:
ppgplot.pgvstd()
ppgplot.pgswin(rangex2[0], rangex2[1], rangey2[0], rangey2[1])
# Decide how the axes will be drawn
if ticks=='in': env = "CMST"
else: env = "CMSTI"
if logx2: lxenv='L'
else: lxenv=''
if logy2: lyenv='L'
else: lyenv=''
if otherxaxis and otheryaxis:
ppgplot.pgbox(env+lxenv, 0.0, 0, env+lyenv, 0.0, 0)
elif otheryaxis:
ppgplot.pgbox("", 0.0, 0, env+lyenv, 0.0, 0)
else:
ppgplot.pgbox(env+lxenv, 0.0, 0, "", 0.0, 0)
# Now setup the primary axis
ppgplot.pgvstd()
ppgplot.pgswin(rangex[0], rangex[1], rangey[0], rangey[1])
# Decide how the axes will be drawn
if ticks=='in': env = "ST"
else: env = "STI"
if logx: lxenv='L'
else: lxenv=''
if logy: lyenv='L'
else: lyenv=''
if otherxaxis and otheryaxis:
ppgplot.pgbox("BN"+env+lxenv, 0.0, 0, "BN"+env+lyenv, 0.0, 0)
elif otheryaxis:
ppgplot.pgbox("BCN"+env+lxenv, 0.0, 0, "BN"+env+lyenv, 0.0, 0)
elif otherxaxis:
ppgplot.pgbox("BN"+env+lxenv, 0.0, 0, "BCN"+env+lyenv, 0.0, 0)
else:
ppgplot.pgbox("BCN"+env+lxenv, 0.0, 0, "BCN"+env+lyenv, 0.0, 0)
# Add labels
if not title is None: ppgplot.pgmtxt("T", 3.2, 0.5, 0.5, title)
ppgplot.pgmtxt("B", 3.0, 0.5, 0.5, labx)
ppgplot.pgmtxt("L", 2.6, 0.5, 0.5, laby)
if otherxaxis: ppgplot.pgmtxt("T", 2.0, 0.5, 0.5, labx2)
if otheryaxis: ppgplot.pgmtxt("R", 3.0, 0.5, 0.5, laby2)
# Add ID line if required
if (id==1): ppgplot.pgiden()
# Let the routines know that we have already prepped the device
ppgplot_dev_prep_ = 1
# Close plotting device
def closeplot():
"""
closeplot():
Close the currently open plotting device
"""
global ppgplot_dev_open_, ppgplot_dev_prep_
ppgplot.pgend()
ppgplot_dev_open_ = 0
ppgplot_dev_prep_ = 0
# Plot simple XY line plots with optional arguments and keywords
def plotxy(y, x=None, title=None, rangex=None, rangey=None, \
labx='', laby='', rangex2=None, rangey2=None, \
labx2='', laby2='', symbol=ppgplot_symbol_, \
line=ppgplot_linestyle_, width=ppgplot_linewidth_, \
color=ppgplot_color_, font=ppgplot_font_, logx=0, logy=0, \
logx2=0, logy2=0, errx=None, erry=None, id=0, noscale=0, \
aspect=0.7727, fontsize=ppgplot_font_size_, ticks='in', \
panels=[1,1], device=ppgplot_device_, setup=1):
"""
plotxy(y, ...)
An interface to make various XY style plots using PGPLOT.
'y' is the 1D sequence object to plot.
The optional entries are:
x: x values (default = 0, 1, ...)
title: graph title (default = None)
rangex: ranges for the x-axis (default = automatic)
rangey: ranges for the y-axis (default = automatic)
labx: label for the x-axis (default = None)
laby: label for the y-axis (default = None)
rangex2: ranges for 2nd x-axis (default = None)
rangey2: ranges for 2nd y-axis (default = None)
labx2: label for the 2nd x-axis (default = None)
laby2: label for the 2nd y-axis (default = None)
logx: make the 1st x-axis log (default = 0 (no))
logy: make the 1st y-axis log (default = 0 (no))
logx2: make the 2nd x-axis log (default = 0 (no))
logy2: make the 2nd y-axis log (default = 0 (no))
errx: symmetric x errors (default = None)
erry: symmetric y errors (default = None)
symbol: symbol for points (default = None)
line: line style (default = 1 (solid))
width: line width (default = 1 (thin))
color: line and/or symbol color (default = 'white')
font: PGPLOT font to use (default = 1 (normal))
fontsize: PGPLOT font size to use (default = 1.0 (normal))
id: show ID line on plot (default = 0 (no))
noscale: turn off auto scaling (default = 0 (no))
aspect: aspect ratio (default = 0.7727 (rect))
ticks: Ticks point in or out (default = 'in')
panels: Number of subpanels [r,c] (default = [1,1])
device: PGPLOT device to use (default = '/XWIN')
setup: Auto-setup the plot (default = 1)
Note: Many default values are defined in global variables
with names like ppgplot_font_ or ppgplot_device_.
"""
# Make sure the input data is an array
y = Num.asarray(y);
# Announce the global variables we will be using
global ppgplot_dev_open_, ppgplot_dev_prep_, ppgplot_colors_
# Define the X axis limits if needed
if x is None: x=Num.arange(len(y), dtype='f')
else: x = Num.asarray(x)
# Determine the scaling to use for the first axis
if rangex is None: rangex=[x.min(), x.max()]
if rangey is None:
if noscale: rangey=[y.min(), y.max()]
else: rangey=scalerange(y)
# Prep the plotting device...
if (not ppgplot_dev_prep_ and setup):
prepplot(rangex, rangey, title, labx, laby, \
rangex2, rangey2, labx2, laby2, \
logx, logy, logx2, logy2, font, fontsize, \
id, aspect, ticks, panels, device=device)
# Choose the line color
if isstr(color):
ppgplot.pgsci(ppgplot_colors_[color])
else:
ppgplot.pgsci(color)
# Plot symbols (and errors) if requested
if not symbol is None:
ppgplot.pgpt(x, y, symbol)
# Error bars
if errx is not None:
if not logx:
errx = Num.asarray(errx)
ppgplot.pgerrx(x+errx, x-errx, y, 1.0)
else:
errx = 10.0**Num.asarray(errx)
ppgplot.pgerrx(Num.log10(10.0**x + errx),
Num.log10(10.0**x - errx), y, 1.0)
if erry is not None:
if not logy:
erry = Num.asarray(erry)
ppgplot.pgerry(x, y+erry, y-erry, 1.0)
else:
erry = 10.0**Num.asarray(erry)
ppgplot.pgerry(x, Num.log10(10.0**y + erry),
Num.log10(10.0**y - erry), 1.0)
# Plot connecting lines if requested
if not line is None:
# Choose the line style
ppgplot.pgsls(line)
# Choose the line width
ppgplot.pgslw(width)
ppgplot.pgline(x, y)
# Make an X-Y plot of binned data (i.e. useful for histograms)
def plotbinned(y, x=None, title=None, labx='Bins', laby='Counts', \
rangex=None, rangey=None, labx2='', laby2='', \
rangex2=None, rangey2=None, \
line=ppgplot_linestyle_, width=ppgplot_linewidth_, \
color=ppgplot_color_, font=ppgplot_font_, logx=0, logy=0, \
logx2=0, logy2=0, erry=None, id=0, noscale=0, \
aspect=0.7727, fontsize=ppgplot_font_size_, \
ticks='out', panels=[1,1], device=ppgplot_device_, setup=1):
"""
plotbinned(y, ...):
Plot x-y data that is binned. This routine differs from
plotxy() in that instead of each point being connected
by diagonal lines, each point is actually a flat-line
with the width of a bin.
'y' is the numerical sequence of binned data to plot.
The optional entries are:
x: x-centers of each bin. (default = auto)
title: graph title (default = None)
labx: label for the x-axis (default = 'Bins')
laby: label for the y-axis (default = 'Counts')
rangex: ranges for the x-axis (default = automatic)
rangey: ranges for the y-axis (default = automatic)
labx2: label for the 2nd x-axis (default = None)
laby2: label for the 2nd y-axis (default = None)
rangex2: ranges for 2nd x-axis (default = None)
rangey2: ranges for 2nd y-axis (default = None)
logx: make the 1st x-axis log (default = 0 (no))
logy: make the 1st y-axis log (default = 0 (no))
logx2: make the 2nd x-axis log (default = 0 (no))
logy2: make the 2nd y-axis log (default = 0 (no))
erry: symmetric y errors (default = None)
line: line style (default = 1 (solid))
width: line width (default = 1 (thin))
color: line and/or symbol color (default = 'white')
font: PGPLOT font to use (default = 1 (normal))
fontsize: PGPLOT font size to use (default = 1.0 (normal))
id: show ID line on plot (default = 0 (no))
aspect: aspect ratio (default = 0.7727 (rect))
ticks: Ticks point in or out (default = 'in')
panels: Number of subpanels [r,c] (default = [1,1])
device: PGPLOT device to use (default = '/XWIN')
setup: Auto-setup the plot (default = 1)
Note: Many default values are defined in global variables
with names like ppgplot_font_ or ppgplot_device_.
"""
# Make sure our entry sequences are Num arrays
    y = Num.asarray(y)
    if x is None: x = Num.arange(len(y)) + 0.5
    else: x = Num.asarray(x)
dx = x[1] - x[0]
# Correct for the fact that 'x' are the bin centers
x = x - 0.5 * dx
# Make the repeat array
r = Num.zeros(len(x), dtype=Num.int32)+2
ny = Num.repeat(y, r)
r[0] = 1
nx = Num.repeat(x, r)
# Add the right side of the right-most bin
nx = Num.concatenate((nx, Num.zeros(1)+nx[-1]+dx))
plotxy(ny, nx, title, labx=labx, laby=laby, line=line, \
labx2=labx2, laby2=laby2, \
rangex2=rangex2, rangey2=rangey2, logx=logx, logy=logy, \
logx2=logx2, logy2=logy2, noscale=noscale, \
width=width, color=color, font=font, fontsize=fontsize, \
id=id, aspect=aspect, rangex=rangex, rangey=rangey, \
ticks=ticks, panels=panels, device=device, setup=setup)
if erry is not None:
ppgplot.pgerry(Num.arange(len(y))+0.5, y+erry, y-erry, 1.0)
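# Illustrative usage sketch (not part of the original module): plot a small
# histogram with plotbinned(). Opening a device has side effects, so this is
# wrapped in a function rather than executed at import time; it uses the
# module's default device (/XWIN).
def _demo_plotbinned():
    counts = Num.array([3.0, 7.0, 12.0, 9.0, 4.0])
    plotbinned(counts, labx='Bin', laby='Counts')
    closeplot()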
# Show a 2D color intensity plot with optional arguments and keywords
def plot2d(z, x=None, y=None, title=None, rangex=None, rangey=None, \
rangez=None, labx='', laby='', rangex2=None, rangey2=None, \
labx2='', laby2='', image=ppgplot_palette_, contours=None, \
logx=0, logy=0, logx2=0, logy2=0, \
line=ppgplot_linestyle_, width=ppgplot_linewidth_, \
color=ppgplot_color_, labels=ppgplot_labels_, \
labelint=ppgplot_labelint_, labelmin=ppgplot_labelmin_, \
font=ppgplot_font_, id=0, noscale=0, aspect=1, \
fontsize=ppgplot_font_size_, ticks='out', panels=[1,1], \
device=ppgplot_device_):
"""
plot2d(z, ...)
An interface to make various 2D plots using PGPLOT.
'z' is the 2D Numpy array to be plotted.
The optional entries are:
x: x values (default = 0, 1, ...)
y: y values (default = 0, 1, ...)
title: graph title (default = None)
rangex: range for the x-axis (default = automatic)
rangey: range for the y-axis (default = automatic)
rangez: range for the z-axis (default = automatic)
labx: label for the x-axis (default = None)
laby: label for the y-axis (default = None)
rangex2: range for 2nd x-axis (default = None)
rangey2: range for 2nd y-axis (default = None)
labx2: label for the 2nd x-axis (default = None)
laby2: label for the 2nd y-axis (default = None)
logx: make the 1st x-axis log (default = 0 (no))
logy: make the 1st y-axis log (default = 0 (no))
logx2: make the 2nd x-axis log (default = 0 (no))
logy2: make the 2nd y-axis log (default = 0 (no))
image: color palette for image (default = 'rainbow')
contours: list of contour values (default = None)
line: contour line style (default = 1 (solid))
width: contour line width (default = 1 (thin))
color: contour line color (default = 'white')
labels: color of contour labels (default = None)
labelint: contour label spacing (default = 20)
labelmin: min contour label spacing (default = 20)
font: PGPLOT font to use (default = 1 (normal))
fontsize: PGPLOT font size to use (default = 1.0 (normal))
id: show ID line on plot (default = 0 (no))
noscale: turn off auto scaling (default = 0 (no))
aspect: Aspect ratio (default = 1 (square))
ticks: Ticks point in or out (default = 'out')
panels: Number of subpanels [r,c] (default = [1,1])
device: PGPLOT device to use (default = '/XWIN')
Note: Many default values are defined in global variables
with names like ppgplot_font_ or ppgplot_device_.
"""
# Make sure the input data is a 2D array
z = Num.asarray(z);
if not len(z.shape)==2:
print('Input data array must be 2 dimensional.')
return
# Announce the global variables we will be using
global ppgplot_dev_open_, ppgplot_dev_prep_, pgpalette
# Define the X and Y axis limits if needed
if x is None: x=Num.arange(z.shape[1], dtype='f')
else: x = Num.asarray(x)
if y is None: y=Num.arange(z.shape[0], dtype='f')
else: y = Num.asarray(y)
# Determine the scaling to use for the axes
if rangex is None:
dx = x[-1]-x[-2]
rangex=[x[0], x[-1]+dx]
if rangey is None:
dy = y[-1]-y[-2]
rangey=[y[0], y[-1]+dy]
if rangez is None: rangez=[Num.minimum.reduce(Num.ravel(z)), \
Num.maximum.reduce(Num.ravel(z))]
# Prep the plotting device...
if (not ppgplot_dev_prep_):
prepplot(rangex, rangey, title, labx, laby, \
rangex2, rangey2, labx2, laby2, logx, logy, \
logx2, logy2, font, fontsize, id, aspect, \
ticks, panels, device=device)
if image is not None:
# Set the color indices and the color table
lo_col_ind, hi_col_ind = ppgplot.pgqcol()
lo_col_ind = lo_col_ind + 2
ppgplot.pgscir(lo_col_ind, hi_col_ind)
pgpalette.setpalette(image)
ppgplot.pgctab(pgpalette.l,pgpalette.r,pgpalette.g,pgpalette.b)
# Construct the image
ppgplot.pgimag_s(z, 0.0, 0.0, rangex[0], rangey[0], \
rangex[1], rangey[1])
reset_colors()
if contours is not None:
contours = Num.asarray(contours)
# Choose the line style
ppgplot.pgsls(line)
# Choose the line width
ppgplot.pgslw(width)
# Choose the line color for the contourlines
if isstr(color):
ppgplot.pgsci(ppgplot_colors_[color])
else:
ppgplot.pgsci(color)
# Construct the contours
ppgplot.pgcont_s(z, len(contours), contours, rangex[0], \
rangey[0], rangex[1], rangey[1])
# Label the contours if requested
if labels is not None:
# Choose the line color for the contourlines
if isstr(labels):
ppgplot.pgsci(ppgplot_colors_[labels])
else:
ppgplot.pgsci(labels)
for i in range(len(contours)):
ppgplot.pgconl_s(z, contours[i], str(contours[i]),
labelint, labelmin)
#########################################################################
class Palette(object):
# Set the color palette
def setpalette(self, palette):
"""
setpalette(self, palette):
Set the color palette for imag-style routines
"""
if (palette == 'rainbow'):
self.l = Num.array([0.0, 0.015, 0.225, 0.4, 0.59,
0.6, 0.775, 0.955, 0.965, 1.0])
self.r = Num.array([1.0, 1.0, 1.0, 0.0, 0.0,
0.0, 0.0, 0.947, 1.0, 1.0])
self.g = Num.array([0.0, 0.0, 1.0, 1.0, 1.0,
0.946, 0.0, 0.8, 0.844, 1.0])
self.b = Num.array([0.0, 0.0, 0.0, 0.0, 0.95,
1.0, 1.0, 1.0, 1.0, 1.0])
elif (palette == 'antirainbow'):
self.l = Num.array([0.0, 0.035, 0.045, 0.225, 0.4,
0.41, 0.6, 0.775, 0.985, 1.0])
self.r = Num.array([1.0, 1.0, 0.947, 0.0, 0.0,
0.0, 0.0, 1.0, 1.0, 1.0])
self.g = Num.array([1.0, 0.844, 0.8, 0.0, 0.946,
1.0, 1.0, 1.0, 0.0, 0.0])
self.b = Num.array([1.0, 1.0, 1.0, 1.0, 1.0,
0.95, 0.0, 0.0, 0.0, 0.0])
elif (palette == 'astro'):
self.l = Num.array([0.0, 0.167, 0.333, 0.5,
0.667, 0.833, 1.0])
self.r = Num.array([0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
self.g = Num.array([0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0])
self.b = Num.array([0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0])
elif (palette == 'hue'):
self.l = Num.array([0.0, 0.167, 0.333, 0.5,
0.667, 0.833, 1.0])
self.r = Num.array([1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0])
self.g = Num.array([0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
self.b = Num.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0])
elif (palette == 'heat'):
self.l = Num.array([0.0, 0.48, 0.7, 0.75, 1.0])
self.r = Num.array([0.0, 1.0, 1.0, 1.0, 1.0])
self.g = Num.array([0.0, 0.0, 0.423, 0.519, 1.0])
self.b = Num.array([0.0, 0.0, 0.0, 0.0, 1.0])
elif (palette == 'gamma'):
self.l = Num.array([0.0, 0.33, 0.66, 1.0])
self.r = Num.array([0.3, 1.0, 0.0, 0.0])
self.g = Num.array([0.0, 0.3, 1.0, 0.0])
self.b = Num.array([0.0, 0.0, 0.3, 1.0])
elif (palette == 'antigray' or palette == 'antigrey'):
self.l = Num.array([0.0, 1.0])
self.r = Num.array([1.0, 0.0])
self.g = Num.array([1.0, 0.0])
self.b = Num.array([1.0, 0.0])
elif (palette == 'apjgray' or palette == 'apjgrey'):
self.l = Num.array([0.0, 1.0])
self.r = Num.array([1.0, 0.25])
self.g = Num.array([1.0, 0.25])
self.b = Num.array([1.0, 0.25])
else:
self.l = Num.array([0.0, 1.0])
self.r = Num.array([0.0, 1.0])
self.g = Num.array([0.0, 1.0])
self.b = Num.array([0.0, 1.0])
pgpalette = Palette()
#
# Demo code
#
if __name__ == '__main__':
from math import *
from numpy import *
def distance(width):
"""
distance(width):
Return a 'width' x 'width' Numpy array with each
point set to the geometric distance from the array's center.
"""
x = Num.arange(-width/2.0+0.5, width/2.0+0.5, 1.0)**2
x = Num.resize(x, (width,width))
return Num.sqrt(x + Num.transpose(x))
# Do a couple 1-D plots
x = arange(0.0, 10.0, 0.05)
xcm = x * 2.54
rx2 = [min(xcm), max(xcm)]
ry2 = [-0.25, 0.25]
y = cos(x)
f = exp(-0.1*x)
# Show the simplest calling sequence
plotxy(y)
closeplot()
# Show something a little more useful
plotxy(y, x, rangex2=rx2, rangey2=ry2, \
labx='inches', laby='foobar activity', labx2='cm', \
laby2='aged foobar activity', id=1)
# Over-plot the following
plotxy(y*f, x, color='red', line=2, width=6)
closeplot()
# Show a couple 2-D examples
a = exp(-0.02*distance(200))
ca = a*cos(0.04*distance(200))
# Show the simplest calling sequence
plot2d(a)
closeplot()
# Show 3 related plots which are a little more useful
plot2d(ca, x, x, title='Contours', labx='x', laby='y', image=None, \
contours=[0.0, 0.4, 0.8], labels='yellow', \
color='red', labelint=40, labelmin=20)
closeplot()
# Show the same thing but with an image
plot2d(ca, x, x, title='Image', labx='x', laby='y', image='heat')
closeplot()
# Show the same thing but with an image and contours
plot2d(ca, x, x, title='Image+Contours', labx='x', laby='y', \
image='heat', contours=[0.0, 0.4, 0.8])
closeplot()
| 29,817 | 41.965418 | 81 | py |
presto | presto-master/python/presto/polycos.py | from __future__ import absolute_import
from builtins import range
from builtins import object
import os
import sys
import subprocess
from presto import parfile
import numpy as Num
# Constants
NUMCOEFFS_DEFAULT = 12
SPAN_DEFAULT = 60 # span of each polyco in minutes
# Telescope name to TEMPO observatory code conversion
telescope_to_id = {"GBT": '1', \
"Arecibo":' 3', \
"VLA": '6', \
"Parkes": '7', \
"Jodrell": '8', \
"GB43m": 'a', \
"GB 140FT": 'a', \
"Nancay": 'f', \
"Effelsberg": 'g', \
"WSRT": 'i', \
"FAST": 'k', \
"GMRT": 'r', \
"CHIME": 'y', \
"Geocenter": '0', \
"Barycenter": '@'}
# TEMPO observatory code to Telescope name conversion
id_to_telescope = {'1': "GBT", \
'3': "Arecibo", \
'6': "VLA", \
'7': "Parkes", \
'8': "Jodrell", \
'a': "GB43m", \
'a': "GB 140FT", \
'f': "Nancay", \
'g': "Effelsberg", \
'i': "WSRT", \
'k': "FAST", \
'r': "GMRT", \
'y': "CHIME", \
'0': "Geocenter", \
'@': "Barycenter"}
# Telescope name to track length (max hour angle) conversion
telescope_to_maxha = {"GBT": 12, \
"Arecibo": 3, \
"FAST": 5, \
"VLA": 6, \
"Parkes": 12, \
"Jodrell": 12, \
"GB43m": 12, \
"GB 140FT": 12, \
"Nancay": 4, \
"Effelsberg": 12, \
"WSRT": 12, \
"GMRT": 12, \
"CHIME": 1, \
"Geocenter": 12, \
"Barycenter": 12}
class polyco(object):
def __init__(self, fileptr):
line = fileptr.readline()
if (line==""):
self.psr = None
else:
sl = line.split()
self.psr = sl[0]
self.date = sl[1]
self.UTC = sl[2]
self.TMIDi = float(sl[3].split(".")[0])
self.TMIDf = float("0."+sl[3].split(".")[1])
self.TMID = self.TMIDi+self.TMIDf
self.DM = float(sl[4])
if (len(sl)==7):
self.doppler = float(sl[5])*1e-4
self.log10rms = float(sl[6])
else:
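                # The Doppler and log10(rms) columns have run together (the
                # rms is negative, so TEMPO drops the separating space);
                # split them back apart at the final minus sign.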
self.log10rms = "-"+sl[-1].split("-")[-1]
self.doppler = float(sl[-1][:sl[-1].find(self.log10rms)])*1e-4
self.log10rms = float(self.log10rms)
sl = fileptr.readline().split()
self.RPHASE = float(sl[0])
self.F0 = float(sl[1])
self.obs = sl[2]
self.dataspan = int(sl[3])
self.numcoeff = int(sl[4])
self.obsfreq = float(sl[5])
if (len(sl)==7):
self.binphase = float(sl[6])
self.coeffs = Num.zeros(self.numcoeff, 'd')
for linenum in range(self.numcoeff//3):
sl = fileptr.readline().split()
self.coeffs[linenum*3+0] = float(sl[0].replace('D', 'E'))
self.coeffs[linenum*3+1] = float(sl[1].replace('D', 'E'))
self.coeffs[linenum*3+2] = float(sl[2].replace('D', 'E'))
if self.numcoeff%3 != 0: # get remaining terms if needed
sl = fileptr.readline().split()
nlines = self.numcoeff//3
for coeffnum in range(len(sl)):
self.coeffs[nlines*3+coeffnum] = float(sl[coeffnum].replace('D', 'E'))
self.phasepoly = Num.polynomial.polynomial.Polynomial(self.coeffs)
def phase(self, mjdi, mjdf):
"""
self.phase(mjdi, mjdf):
            Return the predicted pulsar phase at a given integer and fractional MJD.
"""
return self.rotation(mjdi, mjdf) % 1
def rotation(self, mjdi, mjdf):
"""
self.rotation(mjdi, mjdf):
Return the predicted pulsar (fractional) rotation at a
given integer and fractional MJD.
"""
DT = ((mjdi-self.TMIDi)+(mjdf-self.TMIDf))*1440.0
phase = self.phasepoly(DT)
#phase = self.coeffs[self.numcoeff-1]
#for ii in range(self.numcoeff-1, 0, -1):
# phase = DT*phase + self.coeffs[ii-1]
phase += self.RPHASE + DT*60.0*self.F0
return phase
def freq(self, mjdi, mjdf):
"""
self.freq(mjdi, mjdf):
            Return the predicted pulsar spin frequency at a given integer and fractional MJD.
"""
DT = ((mjdi-self.TMIDi)+(mjdf-self.TMIDf))*1440.0
psrfreq = 0.0
for ii in range(self.numcoeff-1, 0, -1):
psrfreq = DT*psrfreq + ii*self.coeffs[ii]
return self.F0 + psrfreq/60.0
class polycos(object):
def __init__(self, psrname, filenm="polyco.dat"):
self.psr = psrname
self.file = filenm
self.polycos = []
self.TMIDs = []
infile = open(filenm, "r")
tmppoly = polyco(infile)
while(tmppoly.psr):
if (len(self.polycos)):
if (tmppoly.dataspan != self.dataspan):
sys.stderr.write("Data span is changing!\n")
else:
self.dataspan = tmppoly.dataspan
if (tmppoly.psr==psrname):
self.polycos.append(tmppoly)
self.TMIDs.append(tmppoly.TMID)
tmppoly = polyco(infile)
sys.stderr.write("Read %d polycos for PSR %s\n" % (len(self.polycos), psrname))
self.TMIDs = Num.asarray(self.TMIDs)
infile.close()
self.validrange = 0.5*self.dataspan/1440.0
def select_polyco(self, mjdi, mjdf):
"""
self.select_polyco(mjdi, mjdf):
Return the polyco number that is valid for the specified time.
"""
goodpoly = Num.argmin(Num.fabs(self.TMIDs-(mjdi+mjdf)))
if (Num.fabs(self.TMIDs[goodpoly]-(mjdi+mjdf)) > self.validrange):
sys.stderr.write("Cannot find a valid polyco at %f!\n" % (mjdi+mjdf))
return goodpoly
def get_phase(self, mjdi, mjdf):
"""
self.get_phase(mjdi, mjdf):
Return the predicted pulsar phase for the specified time.
"""
goodpoly = self.select_polyco(mjdi, mjdf)
return self.polycos[goodpoly].phase(mjdi, mjdf)
def get_rotation(self, mjdi, mjdf):
"""
self.get_rotation(mjdi, mjdf):
Return the predicted pulsar (fractional) rotation
number for the specified time.
"""
goodpoly = self.select_polyco(mjdi, mjdf)
return self.polycos[goodpoly].rotation(mjdi, mjdf)
def get_freq(self, mjdi, mjdf):
"""
self.get_freq(mjdi, mjdf):
            Return the predicted pulsar spin frequency for the specified time.
"""
goodpoly = self.select_polyco(mjdi, mjdf)
return self.polycos[goodpoly].freq(mjdi, mjdf)
def get_phs_and_freq(self, mjdi, mjdf):
"""
        self.get_phs_and_freq(mjdi, mjdf):
            Return the predicted pulsar phase and spin frequency for the specified time.
"""
goodpoly = self.select_polyco(mjdi, mjdf)
return (self.polycos[goodpoly].phase(mjdi, mjdf),
self.polycos[goodpoly].freq(mjdi, mjdf))
def get_voverc(self, mjdi, mjdf):
"""
self.get_voverc(mjdi, mjdf):
Return the (approximate) topocentric v/c for the specified time.
"""
goodpoly = self.select_polyco(mjdi, mjdf)
return self.polycos[goodpoly].doppler
def create_polycos(parfn, telescope_id, center_freq, start_mjd, end_mjd, \
max_hour_angle=None, span=SPAN_DEFAULT, \
numcoeffs=NUMCOEFFS_DEFAULT, keep_file=False):
"""Create polycos object from a parfile.
Inputs:
parfn: parfile's filename, or a parfile object.
telescope_id: The TEMPO 1-character telescope identifier.
        center_freq: The observation's center frequency in MHz.
        start_mjd: MJD on which the polycos should start.
        end_mjd: MJD up to which the polycos should extend.
max_hour_angle: The maximum hour angle as expected by tempo.
(Default: Use default value chosen for given telescope).
span: Span of each set of polycos in min.
(Default: 60 min).
numcoeffs: Number of coefficients to use.
(Default: 12).
keep_file: If true do not delete polyco.dat file.
(Default: delete polyco.dat file).
Output:
new_polycos: a polycos object.
"""
    if isinstance(parfn, (str, bytes)):
# assume parfn is a filename
par = parfile.psr_par(parfn)
else:
# assume par is already a parfile.psr_par object
par = parfn
if max_hour_angle is None:
telescope_name = id_to_telescope[telescope_id]
max_hour_angle = telescope_to_maxha[telescope_name]
tzfile = open("tz.in", "w")
    # First tz.in line: observatory code, max hour angle, span, numcoeffs, frequency
    tzfile.write("%s %d %d %d %0.5f\n" % (telescope_id, max_hour_angle, \
                                span, numcoeffs, center_freq))
# TEMPO ignores lines 2 and 3 in tz.in file
tzfile.write("\n\n")
if hasattr(par, "PSR"):
psrname = par.PSR
else:
psrname = par.PSRJ
tzfile.write("%s %d %d %d %0.5f\n" % (psrname, SPAN_DEFAULT, \
NUMCOEFFS_DEFAULT, max_hour_angle, center_freq))
tzfile.close()
    tempo = subprocess.Popen("tempo -z -f %s" % par.FILE, shell=True, \
                             stdin=subprocess.PIPE, stdout=subprocess.PIPE, \
                             stderr=subprocess.PIPE, universal_newlines=True)
(out, err) = tempo.communicate("%d %d\n" % (start_mjd, end_mjd))
if err:
raise TempoError("The following error message was encountered " \
"when running TEMPO to generate polycos from " \
"the input parfile (%s):\n\n%s\n" % (parfn, err))
new_polycos = polycos(psrname, filenm='polyco.dat')
# Remove files created by us and by TEMPO
os.remove("tz.in")
if not keep_file:
os.remove("polyco.dat")
return new_polycos
class TempoError(Exception):
pass
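# A minimal usage sketch (not executed on import).  The parfile name, MJD range
# and observing setup below are invented for illustration; a real call needs
# TEMPO installed and a valid parfile on disk.
#
#   from presto import polycos
#   pcs = polycos.create_polycos("pulsar.par", telescope_id='1',
#                                center_freq=1400.0, start_mjd=55000,
#                                end_mjd=55001, keep_file=False)
#   phs, f0 = pcs.get_phs_and_freq(55000, 0.25)   # integer MJD, fractional MJD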
| 10,605 | 36.878571 | 92 | py |
presto | presto-master/python/presto/kuiper.py | from __future__ import print_function
from __future__ import absolute_import
from builtins import range
import numpy as num
from presto import Pgplot
from functools import reduce
def noverk(n,k):
# This is the combinations formula
return float(reduce(lambda a,b: a*(n-b)/(b+1), range(k),1))
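# Tt() below is a helper used only by kuiper_prob3(): one factor in the exact
# finite-N Kuiper distribution given by Paltani (2004).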
def Tt(t, z, N):
overN = 1.0/float(N)
y = z + t*overN
return y**(t-3.0) * (y**3.0*N
- y*y*t*(3.0-2.0*overN)*overN
- (t*(t-1.0)*(t-2.0))*overN*overN)
def kuiper_prob(D, N):
# From section 14.3 in Numerical Recipes
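    # i.e. the asymptotic Kuiper survival function
    #   Q_KP(lam) = 2 * sum_{j>=1} (4*j^2*lam^2 - 1) * exp(-2*j^2*lam^2),
    # evaluated at lam = (sqrt(N) + 0.155 + 0.24/sqrt(N)) * D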
EPS1 = 1e-6
EPS2 = 1e-12
en = num.sqrt(N)
lamda = (en + 0.155 + 0.24 / en) * D
if (lamda < 0.4): return 1.0
probks = termbf = 0.0
a2 = -2.0 * lamda * lamda
for ii in range(1, 100):
a2ii2 = a2 * ii * ii
term = 2.0 * (-2.0 * a2ii2 - 1.0) * num.exp(a2ii2)
probks += term
if (num.fabs(term) <= EPS1*termbf or
num.fabs(term) <= EPS2*probks):
return probks
termbf = num.fabs(term)
return 1.0
def kuiper_prob2(D, N):
# From Paltani 2004, eqn 3 (for large N)
EPS1 = 1e-6
EPS2 = 1e-12
z = D * num.sqrt(N)
term1bf = term2bf = 0.0
term1 = term2 = 0.0
for m in range(1, 1000):
x = 4.0*m*m*z*z
term = 2.0 * (x - 1.0) * num.exp(-0.5*x)
term1 += term
if (num.fabs(term) <= EPS1*term1bf or
num.fabs(term) <= EPS2*term1):
break
term1bf = num.fabs(term1)
for m in range(1, 1000):
x = 4.0*m*m*z*z
term = m * m * (x - 3.0) * num.exp(-0.5*x)
term2 += term
if (num.fabs(term) <= EPS1*term2bf or
num.fabs(term) <= EPS2*term2):
break
term2bf = num.fabs(term2)
return term1 - 8.0*z/(3.0*num.sqrt(N)) * term2
def kuiper_prob3(D, N):
# From Paltani 2004, eqn 6 (for large D)
# note: this equation does not seem consistent with the other 2...
EPS1 = 1e-6
EPS2 = 1e-12
prob = termbf = 0.0
for t in range(1000):
term = noverk(N, t) * (1.0-D-t/float(N))**(N-t-1) * Tt(t, D, N)
prob += term
if (num.fabs(term) <= EPS1*termbf or
num.fabs(term) <= EPS2*prob):
return prob
termbf = num.fabs(term)
return 1.0
def kuiper_uniform_test(data, output=0):
"""
kuiper_uniform_test(data, output=0):
Conduct a Kuiper test on the data. The data must be values
within [0,1) (e.g. phases from a periodicity search). They
        will be compared to a uniform distribution.  The return value
        is (D, P): the Kuiper statistic and the probability of getting
        a D at least this large if the data really are uniformly
        distributed.
"""
sdata = num.asarray(data)
N = sdata.size
sdata.sort()
f0 = num.arange(N, dtype=num.float64)/N
fn = (num.arange(N, dtype=num.float64)+1.0)/N
Dp = (fn - sdata).max()
Dm = (sdata - f0).max()
D = Dp + Dm
P = kuiper_prob(D, N)
if (output):
xs = (num.arange(N+3, dtype=num.float64)/(N+2.0)).repeat(2)[1:-1]
ys = num.concatenate((num.asarray([0.0]), sdata, num.asarray([1.0]))).repeat(2)
Pgplot.plotxy(ys, xs, rangex=[-0.03, 1.03], rangey=[-0.03, 1.03], aspect=1.0,
labx="Fraction of Data", laby="Cumulative Value", width=2)
Pgplot.plotxy(num.asarray([0.0, 1.0]), num.asarray([0.0, 1.0]), width=1)
Pgplot.closeplot()
print("Max distance between the cumulative distributions (D) = %.5g" % D)
print("Prob the data is from the specified distrbution (P) = %.3g" % P)
return (D, P)
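# A short usage sketch (the data here are made up): test a set of pulse phases
# in [0,1) against a uniform distribution.  With purely random phases P should
# not be small; a tiny P indicates significant non-uniformity (i.e. a pulse).
#
#   phases = num.random.random(500)
#   D, P = kuiper_uniform_test(phases)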
if __name__=="__main__":
if (0):
from kstest import *
for ii in range(4):
N = 200
data = num.random.random(N)
print("-------")
print("K-S:")
(D, P) = KS_test(data, cum_uniform_dist, output=1)
print("Kuiper:")
(D, P) = kuiper_uniform_test(data, output=1)
if (1):
p1s = []
p2s = []
p3s = []
N = 50
Ds = num.arange(0.01, 0.6, 0.01)
for D in Ds:
p1s.append(kuiper_prob(D, N))
p2s.append(kuiper_prob2(D, N))
p3s.append(kuiper_prob3(D, N))
Pgplot.plotxy(num.log10(num.asarray(p1s)), Ds, color='red')
Pgplot.plotxy(num.log10(num.asarray(p2s)), Ds, color='blue')
Pgplot.plotxy(num.log10(num.asarray(p3s)), Ds, color='green')
Pgplot.closeplot()
| 4,426 | 32.793893 | 87 | py |
presto | presto-master/python/presto/residuals.py | from __future__ import print_function
from builtins import range
from builtins import object
import struct
import numpy as Num
#
# From the TEMPO Documentation:
#
# The file resid2.tmp contains residuals, etc. in binary format.
# Each record contains eight real*8 values:
# --TOA (MJD, referenced to solar system barycenter)
# --Postfit residual (pulse phase, from 0 to 1)
# --Postfit residual (seconds)
# --Orbital phase (where applicable)
# --Observing frequency (in barycenter frame)
# --Weight of point in the fit
# --Timing uncertainty (according to input file)
# --Prefit residual (seconds)
#
class residuals(object):
pass
def read_residuals(filename="resid2.tmp"):
"""
read_residuals(filename="resid2.tmp"):
Read a TEMPO1 style binary residuals file and return all the elements
in a residuals 'class'. The class instance will have an attribute
called .numTOAs with the number of TOAs and up to 8 arrays with
the following (as appropriate):
.bary_TOA Barycentric TOA (MJD)
.uncertainty TOA uncertainty (seconds)
.bary_freq Observing frequency (in barycenter frame)
.prefit_phs Prefit residual (pulse phase, from 0 to 1)
.prefit_sec Prefit residual (seconds)
.postfit_phs Postfit residual (pulse phase, from 0 to 1)
.postfit_sec Postfit residual (seconds)
.orbit_phs Orbital phase (where applicable)
.weight Weight of point in the fit
"""
r = residuals()
infile = open(filename, "rb")
swapchar = '<' # this is little-endian (default)
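    # The first value of each Fortran unformatted record is a 4- or 8-byte
    # record-length marker; use it to guess the byte order and marker size.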
data = infile.read(8)
test_int32 = struct.unpack(swapchar+"i", data[:4])[0]
test_int64 = struct.unpack(swapchar+"q", data)[0]
if ((test_int32 > 100 or test_int32 < 0) and
(test_int64 > 100 or test_int64 < 0)):
swapchar = '>' # this is big-endian
if (test_int32 < 100 and test_int32 > 0):
marktype = 'i' # 32-bit int
reclen = test_int32 + 2 * 4
else:
marktype = 'q' # long long
reclen = test_int64 + 2 * 8
rectype = swapchar+marktype+9*'d'+marktype
# print test_int32, test_int64, marktype, reclen, rectype
infile.seek(0, 2) # position at file end
filelen = infile.tell()
if (filelen % reclen or
not (reclen==struct.calcsize(rectype))):
print("Warning: possibly reading residuals incorrectly... don't understand record size")
infile.seek(0, 0) # position at file start
r.numTOAs = filelen // reclen
r.bary_TOA = Num.zeros(r.numTOAs, 'd')
r.postfit_phs = Num.zeros(r.numTOAs, 'd')
r.postfit_sec = Num.zeros(r.numTOAs, 'd')
r.orbit_phs = Num.zeros(r.numTOAs, 'd')
r.bary_freq = Num.zeros(r.numTOAs, 'd')
r.weight = Num.zeros(r.numTOAs, 'd')
r.uncertainty = Num.zeros(r.numTOAs, 'd')
r.prefit_phs = Num.zeros(r.numTOAs, 'd')
for ii in range(r.numTOAs):
rec = struct.unpack(rectype, infile.read(reclen))
(r.bary_TOA[ii],
r.postfit_phs[ii],
r.postfit_sec[ii],
r.orbit_phs[ii],
r.bary_freq[ii],
r.weight[ii],
r.uncertainty[ii],
r.prefit_phs[ii]) = (rec[1], rec[2], rec[3], rec[4], \
rec[5], rec[6], rec[7], rec[8])
infile.close()
    # Drop optional columns that are identically zero (i.e. unused in the fit)
    if not Num.any(r.orbit_phs): del r.orbit_phs
    if not Num.any(r.bary_freq): del r.bary_freq
    if not Num.any(r.weight): del r.weight
r.prefit_sec = r.postfit_sec/r.postfit_phs*r.prefit_phs
r.uncertainty *= 1.e-6 # Convert uncertainties in usec to sec
return r
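# A brief usage sketch (assumes a TEMPO run has left resid2.tmp in the current
# directory; the printed quantities are only examples):
#
#   from presto.residuals import read_residuals
#   r = read_residuals("resid2.tmp")
#   print("Read %d TOAs" % r.numTOAs)
#   print("RMS postfit residual = %.3g us" % (1e6 * r.postfit_sec.std()))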
| 3,668 | 38.451613 | 97 | py |
presto | presto-master/python/presto/__init__.py | 0 | 0 | 0 | py |
|
presto | presto-master/python/presto/simple_roots.py | from __future__ import print_function
from builtins import range
# 'Safe' Newton-Raphson and Secant method
# for numerical root-finding
#
# Written by Scott M. Ransom <[email protected]>
def bisect(func, lox, hix, TOL=1e-14, MAXIT=200):
"""
bisect(func, lox, hix, TOL=1e-14, MAXIT=200):
Try to find a root between 'lox' and 'hix' using a simple
bisection of the region. 'TOL' is an _absolute_
tolerance. 'MAXIT' is the maximum number of iterations
"""
f = func(lox)
fmid = func(hix)
if (f * fmid >= 0.0):
print("Root must be bracketed in bisect()!")
return 0.0
if (f < 0.0):
dx, rtb = hix - lox, lox
else:
dx, rtb = lox - hix, hix
for i in range(MAXIT):
dx = dx * 0.5
xmid = rtb + dx
fmid = func(xmid)
if (fmid <= 0.0):
rtb = xmid
if (abs(dx) < TOL or fmid == 0.0):
return rtb
print("Too many bisections in bisect()!")
return 0.0
def secant(func, oldx, x, TOL=1e-14):
"""
secant(func, oldx, x, TOL=1e-14):
Similar to Newton's method, but the derivative is estimated
by divided difference using only function calls. A root is
estimated by x = x - f(x) (x - oldx)/(f(x) - f(oldx))
where oldx = x[i-1] and x = x[i].
"""
oldf, f = func(oldx), func(x)
if (abs(f) > abs(oldf)):
oldx, x = x, oldx
oldf, f = f, oldf
count = 0
while 1:
dx = f * (x - oldx) / float(f - oldf)
if abs(dx) < TOL * (1 + abs(x)): return x - dx
oldx, x = x, x - dx
oldf, f = f, func(x)
count = count + 1
# print "secant(%d): x=%s, f(x)=%s" % (count, x, f)
def newton_raphson(func, dfunc, lox, hix, TOL=1.0e-14):
"""
newton_raphson(func, dfunc, lox, hix, TOL):
Finds the root of |func| which is bracketed by values
|lox| and |hix| to an accuracy of +/- |TOL|. The algorithm
used is a safe version of Newton-Raphson (see page 366 of NR in
C, 2ed). |func| must be a function of one variable whose
derivative is the function 'dfunc'.
"""
maxit = 500
fl, fh = func(lox), func(hix)
if ((fl > 0.0 and fh > 0.0) or (fl < 0.0 and fh < 0.0)):
print("Root must be bracketed in newtonRaphson()")
return 0.0
if (fl == 0.0): return lox
if (fh == 0.0): return hix
if (fl < 0.0):
xl=lox
xh=hix
else:
xh=lox
xl=hix
rts=0.5*(lox+hix)
dxold=abs(hix-lox)
dx=dxold
f, df = func(rts), dfunc(rts)
for j in range(maxit):
if ((((rts-xh)*df-f)*((rts-xl)*df-f) > 0.0)
or (abs(2.0*f) > abs(dxold*df))):
dxold=dx
dx=0.5*(xh-xl)
rts=xl+dx
if (xl == rts): return rts
else:
dxold=dx
dx=f/df
temp=rts
rts=rts-dx
if (temp == rts): return rts
if (abs(dx) < TOL): return rts
f, df = func(rts), dfunc(rts)
if (f < 0.0):
xl=rts
else:
xh=rts
print("Maximum number of iterations exceeded in newton_raphson()")
return 0.0
# Test code
if __name__ == '__main__':
    from numpy import pi, sin, cos
def func(x):
return sin(x) - x + pi/4.0
def dfunc(x):
return cos(x) - 1
nr = newton_raphson(func, dfunc, 0.0, 3.0)
s = secant(func, 0.0, 3.0)
bs = bisect(func, 0.0, 3.0)
theo = 1.766340286602865756325301235707
print('')
print('Finding the root (between 0.0 and 3.0) of:')
print(' x - sin(x) = pi/4')
print('')
print(' Newton-Raphson gives (default accuracy) = %15.14f' % nr)
print(' Secant method gives (default accuracy) = %15.14f' % s)
print(' Bisection method gives (default accuracy) = %15.14f' % bs)
print('Theoretical result (correct to all shown digits) = %15.14f' % theo)
print('')
| 3,967 | 29.75969 | 78 | py |
presto | presto-master/python/presto/bestprof.py | from builtins import object
## Automatically adapted for numpy Apr 14, 2006 by convertcode.py
import numpy as num
def get_epochs(line):
i, f = line.split("=")[-1].split(".")
f = "0."+f
epochi = float(i)
epochf = float(f)
# Check to see if it is very close to 1 sec
# If it is, assume the epoch was _exactly_ at the second
fsec = epochf*86400.0 + 1e-10
if (num.fabs(fsec - int(fsec)) < 1e-6):
# print "Looks like an exact second"
epochf = float(int(fsec))/86400.0
return epochi, epochf
class bestprof(object):
def __init__(self, filenm):
infile = open(filenm)
self.topo = 0
self.profile = []
for line in infile.readlines():
if line[0]=="#":
if line.startswith("# Input file"):
self.datnm = line.split("=")[-1][:-1]
continue
if line.startswith("# Candidate"):
if line.startswith("# Candidate = PSR_"):
self.psr = line.split("=")[-1].split("_")[1][:-1]
continue
else:
self.psr = None
if line.startswith("# T_sample"):
self.dt = float(line.split("=")[-1])
continue
if line.startswith("# Data Folded"):
self.N = float(line.split("=")[-1])
continue
if line.startswith("# Data Avg"):
self.data_avg = float(line.split("=")[-1])
continue
if line.startswith("# Data StdDev"):
self.data_std = float(line.split("=")[-1])
continue
if line.startswith("# Profile Avg"):
self.prof_avg = float(line.split("=")[-1])
continue
if line.startswith("# Profile StdDev"):
self.prof_std = float(line.split("=")[-1])
continue
if line.startswith("# Reduced chi-sqr"):
self.chi_sqr = float(line.split("=")[-1])
continue
if line.startswith("# Epoch_topo"):
try:
self.epochi, self.epochf = get_epochs(line)
self.epochi_topo, self.epochf_topo = self.epochi, self.epochf
self.topo = 1
except ValueError:
pass
continue
if line.startswith("# Epoch_bary"):
try:
self.epochi_bary, self.epochf_bary = get_epochs(line)
if not self.topo:
self.epochi, self.epochf = self.epochi_bary, self.epochf_bary
except ValueError:
pass
continue
if line.startswith("# P_topo"):
try:
self.p0_topo = float(line.split("=")[-1].split("+")[0])/1000.0
self.p0err_topo = float(line.split("=")[-1].split("+")[1][2:])/1000.0
if self.topo:
self.p0, self.p0err = self.p0_topo, self.p0err_topo
except:
pass
continue
if line.startswith("# P_bary"):
try:
self.p0_bary = float(line.split("=")[-1].split("+")[0])/1000.0
self.p0err_bary = float(line.split("=")[-1].split("+")[1][2:])/1000.0
if not self.topo:
self.p0, self.p0err = self.p0_bary, self.p0err_bary
except:
pass
continue
if line.startswith("# P'_topo"):
try:
self.p1_topo = float(line.split("=")[-1].split("+")[0])
self.p1err_topo = float(line.split("=")[-1].split("+")[1][2:])
if self.topo:
self.p1, self.p1err = self.p1_topo, self.p1err_topo
except:
pass
continue
if line.startswith("# P'_bary"):
try:
self.p1_bary = float(line.split("=")[-1].split("+")[0])
self.p1err_bary = float(line.split("=")[-1].split("+")[1][2:])
if not self.topo:
self.p1, self.p1err = self.p1_bary, self.p1err_bary
except:
pass
continue
if line.startswith("# P''_topo"):
try:
self.p2_topo = float(line.split("=")[-1].split("+")[0])
self.p2err_topo = float(line.split("=")[-1].split("+")[1][2:])
if self.topo:
self.p2, self.p2err = self.p2_topo, self.p2err_topo
except:
pass
continue
if line.startswith("# P''_bary"):
try:
self.p2_bary = float(line.split("=")[-1].split("+")[0])
self.p2err_bary = float(line.split("=")[-1].split("+")[1][2:])
if not self.topo:
self.p2, self.p2err = self.p2_bary, self.p2err_bary
except:
pass
continue
else:
self.profile.append(float(line.split()[-1]))
infile.close()
self.T = self.dt*self.N
self.proflen = len(self.profile)
def normalize(self):
normprof = num.asarray(self.profile)
normprof -= min(normprof)
normprof /= max(normprof)
return normprof
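# A minimal usage sketch (the filename is hypothetical; any prepfold
# .bestprof file will do):
#
#   from presto.bestprof import bestprof
#   bp = bestprof("obs_PSR_1234+56.pfd.bestprof")
#   print("P0 = %.12f s, reduced chi^2 = %.2f" % (bp.p0, bp.chi_sqr))
#   prof = bp.normalize()   # profile rescaled to the range [0, 1]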
| 5,982 | 42.671533 | 93 | py |