Dataset columns:
- code: string (lengths 26 to 870k)
- docstring: string (lengths 1 to 65.6k)
- func_name: string (lengths 1 to 194)
- language: string (1 distinct value)
- repo: string (lengths 8 to 68)
- path: string (lengths 5 to 194)
- url: string (lengths 46 to 254)
- license: string (4 distinct values)
def __init__(
    self,
    active_dims: tp.Union[list[int], slice, None] = None,
    lengthscale: tp.Union[LengthscaleCompatible, nnx.Variable[Lengthscale]] = 1.0,
    variance: tp.Union[ScalarFloat, nnx.Variable[ScalarArray]] = 1.0,
    period: tp.Union[ScalarFloat, nnx.Variable[ScalarArray]] = 1.0,
    n_dims: tp.Union[int, None] = None,
    compute_engine: AbstractKernelComputation = DenseKernelComputation(),
):
    """Initializes the kernel.

    Args:
        active_dims: the indices of the input dimensions that the kernel operates on.
        lengthscale: the lengthscale(s) of the kernel ℓ. If a scalar or an array of
            length 1, the kernel is isotropic, meaning that the same lengthscale is
            used for all input dimensions. If an array with length > 1, the kernel is
            anisotropic, meaning that a different lengthscale is used for each input.
        variance: the variance of the kernel σ.
        period: the period of the kernel p.
        n_dims: the number of input dimensions. If `lengthscale` is an array, this
            argument is ignored.
        compute_engine: the computation engine that the kernel uses to compute the
            covariance matrix.
    """
    if isinstance(period, nnx.Variable):
        self.period = period
    else:
        self.period = PositiveReal(period)

    super().__init__(active_dims, lengthscale, variance, n_dims, compute_engine)
Initializes the kernel. Args: active_dims: the indices of the input dimensions that the kernel operates on. lengthscale: the lengthscale(s) of the kernel ℓ. If a scalar or an array of length 1, the kernel is isotropic, meaning that the same lengthscale is used for all input dimensions. If an array with length > 1, the kernel is anisotropic, meaning that a different lengthscale is used for each input. variance: the variance of the kernel σ. period: the period of the kernel p. n_dims: the number of input dimensions. If `lengthscale` is an array, this argument is ignored. compute_engine: the computation engine that the kernel uses to compute the covariance matrix.
__init__
python
JaxGaussianProcesses/GPJax
gpjax/kernels/stationary/periodic.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/kernels/stationary/periodic.py
Apache-2.0
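The row above only shows the constructor. As a rough illustration of what the `lengthscale`, `variance`, and `period` parameters control, the following computes the standard exponentiated sine-squared (periodic) covariance in plain jax.numpy; the exact constants inside GPJax's `Periodic` kernel may differ, so treat this as a sketch rather than the library's implementation.

import jax.numpy as jnp

def periodic_covariance(x, y, lengthscale=1.0, variance=1.0, period=1.0):
    # k(x, y) = sigma^2 * exp(-0.5 * sum_d sin^2(pi * (x_d - y_d) / p) / l^2)
    sine_term = (jnp.sin(jnp.pi * (x - y) / period) / lengthscale) ** 2
    return variance * jnp.exp(-0.5 * jnp.sum(sine_term))

# Identical points, and points exactly one period apart, are maximally correlated.
print(periodic_covariance(jnp.array([0.2]), jnp.array([0.2])))              # == variance
print(periodic_covariance(jnp.array([0.2]), jnp.array([1.2]), period=1.0))  # == variance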
def __init__(
    self,
    active_dims: tp.Union[list[int], slice, None] = None,
    variance: tp.Union[ScalarFloat, nnx.Variable[ScalarArray]] = 1.0,
    n_dims: tp.Union[int, None] = None,
    compute_engine: AbstractKernelComputation = ConstantDiagonalKernelComputation(),
):
    """Initializes the kernel.

    Args:
        active_dims: The indices of the input dimensions that the kernel operates on.
        variance: the variance of the kernel σ.
        n_dims: The number of input dimensions.
        compute_engine: The computation engine that the kernel uses to compute the
            covariance matrix
    """
    super().__init__(active_dims, 1.0, variance, n_dims, compute_engine)
Initializes the kernel. Args: active_dims: The indices of the input dimensions that the kernel operates on. variance: the variance of the kernel σ. n_dims: The number of input dimensions. compute_engine: The computation engine that the kernel uses to compute the covariance matrix
__init__
python
JaxGaussianProcesses/GPJax
gpjax/kernels/stationary/white.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/kernels/stationary/white.py
Apache-2.0
def __init__(
    self,
    active_dims: tp.Union[list[int], slice, None] = None,
    lengthscale: tp.Union[LengthscaleCompatible, nnx.Variable[Lengthscale]] = 1.0,
    variance: tp.Union[ScalarFloat, nnx.Variable[ScalarArray]] = 1.0,
    n_dims: tp.Union[int, None] = None,
    compute_engine: AbstractKernelComputation = DenseKernelComputation(),
):
    """Initializes the kernel.

    Args:
        active_dims: The indices of the input dimensions that the kernel operates on.
        lengthscale: the lengthscale(s) of the kernel ℓ. If a scalar or an array of
            length 1, the kernel is isotropic, meaning that the same lengthscale is
            used for all input dimensions. If an array with length > 1, the kernel is
            anisotropic, meaning that a different lengthscale is used for each input.
        variance: the variance of the kernel σ.
        n_dims: The number of input dimensions. If `lengthscale` is an array, this
            argument is ignored.
        compute_engine: The computation engine that the kernel uses to compute the
            covariance matrix.
    """
    super().__init__(active_dims, n_dims, compute_engine)

    self.n_dims = _validate_lengthscale(lengthscale, self.n_dims)

    if isinstance(lengthscale, nnx.Variable):
        self.lengthscale = lengthscale
    else:
        self.lengthscale = PositiveReal(lengthscale)

        # static typing
        if tp.TYPE_CHECKING:
            self.lengthscale = tp.cast(PositiveReal[Lengthscale], self.lengthscale)

    if isinstance(variance, nnx.Variable):
        self.variance = variance
    else:
        self.variance = PositiveReal(variance)

        # static typing
        if tp.TYPE_CHECKING:
            self.variance = tp.cast(PositiveReal[ScalarFloat], self.variance)
Initializes the kernel. Args: active_dims: The indices of the input dimensions that the kernel operates on. lengthscale: the lengthscale(s) of the kernel ℓ. If a scalar or an array of length 1, the kernel is isotropic, meaning that the same lengthscale is used for all input dimensions. If an array with length > 1, the kernel is anisotropic, meaning that a different lengthscale is used for each input. variance: the variance of the kernel σ. n_dims: The number of input dimensions. If `lengthscale` is an array, this argument is ignored. compute_engine: The computation engine that the kernel uses to compute the covariance matrix.
__init__
python
JaxGaussianProcesses/GPJax
gpjax/kernels/stationary/base.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/kernels/stationary/base.py
Apache-2.0
def _check_lengthscale(lengthscale: tp.Any):
    """Check that the lengthscale is a valid value."""
    if isinstance(lengthscale, nnx.Variable):
        _check_lengthscale(lengthscale.value)
        return

    if not isinstance(lengthscale, (int, float, jnp.ndarray, list, tuple)):
        raise TypeError(
            f"Expected `lengthscale` to be array-like. Got {lengthscale}."
        )

    if isinstance(lengthscale, (jnp.ndarray, list)):
        ls_shape = jnp.shape(jnp.asarray(lengthscale))
        if len(ls_shape) > 1:
            raise ValueError(
                f"Expected `lengthscale` to be a scalar or 1D array. "
                f"Got `lengthscale` with shape {ls_shape}."
            )
Check that the lengthscale is a valid value.
_check_lengthscale
python
JaxGaussianProcesses/GPJax
gpjax/kernels/stationary/base.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/kernels/stationary/base.py
Apache-2.0
def __init__(
    self,
    active_dims: tp.Union[list[int], slice, None] = None,
    lengthscale: tp.Union[LengthscaleCompatible, nnx.Variable[Lengthscale]] = 1.0,
    variance: tp.Union[ScalarFloat, nnx.Variable[ScalarArray]] = 1.0,
    alpha: tp.Union[ScalarFloat, nnx.Variable[ScalarArray]] = 1.0,
    n_dims: tp.Union[int, None] = None,
    compute_engine: AbstractKernelComputation = DenseKernelComputation(),
):
    """Initializes the kernel.

    Args:
        active_dims: The indices of the input dimensions that the kernel operates on.
        lengthscale: the lengthscale(s) of the kernel ℓ. If a scalar or an array of
            length 1, the kernel is isotropic, meaning that the same lengthscale is
            used for all input dimensions. If an array with length > 1, the kernel is
            anisotropic, meaning that a different lengthscale is used for each input.
        variance: the variance of the kernel σ.
        alpha: the alpha parameter of the kernel α.
        n_dims: The number of input dimensions. If `lengthscale` is an array, this
            argument is ignored.
        compute_engine: The computation engine that the kernel uses to compute the
            covariance matrix.
    """
    if isinstance(alpha, nnx.Variable):
        self.alpha = alpha
    else:
        self.alpha = PositiveReal(alpha)

    super().__init__(active_dims, lengthscale, variance, n_dims, compute_engine)
Initializes the kernel. Args: active_dims: The indices of the input dimensions that the kernel operates on. lengthscale: the lengthscale(s) of the kernel ℓ. If a scalar or an array of length 1, the kernel is isotropic, meaning that the same lengthscale is used for all input dimensions. If an array with length > 1, the kernel is anisotropic, meaning that a different lengthscale is used for each input. variance: the variance of the kernel σ. alpha: the alpha parameter of the kernel α. n_dims: The number of input dimensions. If `lengthscale` is an array, this argument is ignored. compute_engine: The computation engine that the kernel uses to compute the covariance matrix.
__init__
python
JaxGaussianProcesses/GPJax
gpjax/kernels/stationary/rational_quadratic.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/kernels/stationary/rational_quadratic.py
Apache-2.0
def __init__(
    self,
    active_dims: tp.Union[list[int], slice, None] = None,
    lengthscale: tp.Union[LengthscaleCompatible, nnx.Variable[Lengthscale]] = 1.0,
    variance: tp.Union[ScalarFloat, nnx.Variable[ScalarArray]] = 1.0,
    power: tp.Union[ScalarFloat, nnx.Variable[ScalarArray]] = 1.0,
    n_dims: tp.Union[int, None] = None,
    compute_engine: AbstractKernelComputation = DenseKernelComputation(),
):
    """Initializes the kernel.

    Args:
        active_dims: the indices of the input dimensions that the kernel operates on.
        lengthscale: the lengthscale(s) of the kernel ℓ. If a scalar or an array of
            length 1, the kernel is isotropic, meaning that the same lengthscale is
            used for all input dimensions. If an array with length > 1, the kernel is
            anisotropic, meaning that a different lengthscale is used for each input.
        variance: the variance of the kernel σ.
        power: the power of the kernel κ.
        n_dims: the number of input dimensions. If `lengthscale` is an array, this
            argument is ignored.
        compute_engine: the computation engine that the kernel uses to compute the
            covariance matrix.
    """
    if isinstance(power, nnx.Variable):
        self.power = power
    else:
        self.power = SigmoidBounded(power)

    super().__init__(active_dims, lengthscale, variance, n_dims, compute_engine)
Initializes the kernel. Args: active_dims: the indices of the input dimensions that the kernel operates on. lengthscale: the lengthscale(s) of the kernel ℓ. If a scalar or an array of length 1, the kernel is isotropic, meaning that the same lengthscale is used for all input dimensions. If an array with length > 1, the kernel is anisotropic, meaning that a different lengthscale is used for each input. variance: the variance of the kernel σ. power: the power of the kernel κ. n_dims: the number of input dimensions. If `lengthscale` is an array, this argument is ignored. compute_engine: the computation engine that the kernel uses to compute the covariance matrix.
__init__
python
JaxGaussianProcesses/GPJax
gpjax/kernels/stationary/powered_exponential.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/kernels/stationary/powered_exponential.py
Apache-2.0
def __init__(
    self,
    laplacian: Num[Array, "N N"],
    active_dims: tp.Union[list[int], slice, None] = None,
    lengthscale: tp.Union[ScalarFloat, Float[Array, " D"], Parameter] = 1.0,
    variance: tp.Union[ScalarFloat, Parameter] = 1.0,
    smoothness: ScalarFloat = 1.0,
    n_dims: tp.Union[int, None] = None,
    compute_engine: AbstractKernelComputation = EigenKernelComputation(),
):
    """Initializes the kernel.

    Args:
        laplacian: the Laplacian matrix of the graph.
        active_dims: The indices of the input dimensions that the kernel operates on.
        lengthscale: the lengthscale(s) of the kernel ℓ. If a scalar or an array of
            length 1, the kernel is isotropic, meaning that the same lengthscale is
            used for all input dimensions. If an array with length > 1, the kernel is
            anisotropic, meaning that a different lengthscale is used for each input.
        variance: the variance of the kernel σ.
        smoothness: the smoothness parameter of the Matérn kernel.
        n_dims: The number of input dimensions. If `lengthscale` is an array, this
            argument is ignored.
        compute_engine: The computation engine that the kernel uses to compute the
            covariance matrix.
    """
    if isinstance(smoothness, Parameter):
        self.smoothness = smoothness
    else:
        self.smoothness = PositiveReal(smoothness)

    self.laplacian = Static(laplacian)
    evals, eigenvectors = jnp.linalg.eigh(self.laplacian.value)
    self.eigenvectors = Static(eigenvectors)
    self.eigenvalues = Static(evals.reshape(-1, 1))
    self.num_vertex = self.eigenvalues.value.shape[0]

    super().__init__(active_dims, lengthscale, variance, n_dims, compute_engine)
Initializes the kernel. Args: laplacian: the Laplacian matrix of the graph. active_dims: The indices of the input dimensions that the kernel operates on. lengthscale: the lengthscale(s) of the kernel ℓ. If a scalar or an array of length 1, the kernel is isotropic, meaning that the same lengthscale is used for all input dimensions. If an array with length > 1, the kernel is anisotropic, meaning that a different lengthscale is used for each input. variance: the variance of the kernel σ. smoothness: the smoothness parameter of the Matérn kernel. n_dims: The number of input dimensions. If `lengthscale` is an array, this argument is ignored. compute_engine: The computation engine that the kernel uses to compute the covariance matrix.
__init__
python
JaxGaussianProcesses/GPJax
gpjax/kernels/non_euclidean/graph.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/kernels/non_euclidean/graph.py
Apache-2.0
def __post_init__(self):
    """
    At initialisation we check that the posterior handlers and datasets are
    consistent (i.e. have the same tags), and then initialise the posteriors,
    optimizing them using the corresponding datasets.
    """
    self.datasets = copy.copy(
        self.datasets
    )  # Ensure initial datasets passed in to DecisionMaker are not mutated from within

    if self.batch_size < 1:
        raise ValueError(
            f"Batch size must be greater than 0, got {self.batch_size}."
        )

    # Check that posterior handlers and datasets are consistent
    if self.posterior_handlers.keys() != self.datasets.keys():
        raise ValueError(
            "Posterior handlers and datasets must have the same keys. "
            f"Got posterior handlers keys {self.posterior_handlers.keys()} and "
            f"datasets keys {self.datasets.keys()}."
        )

    # Initialize posteriors
    self.posteriors: Dict[str, AbstractPosterior] = {}
    for tag, posterior_handler in self.posterior_handlers.items():
        self.posteriors[tag] = posterior_handler.get_posterior(
            self.datasets[tag], optimize=True, key=self.key
        )
At initialisation we check that the posterior handlers and datasets are consistent (i.e. have the same tags), and then initialise the posteriors, optimizing them using the corresponding datasets.
__post_init__
python
JaxGaussianProcesses/GPJax
gpjax/decision_making/decision_maker.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/decision_making/decision_maker.py
Apache-2.0
def ask(self, key: KeyArray) -> Float[Array, "B D"]:
    """
    Get the point(s) to be queried next.

    Args:
        key (KeyArray): JAX PRNG key for controlling random state.

    Returns:
        Float[Array, "1 D"]: Point to be queried next
    """
    raise NotImplementedError
Get the point(s) to be queried next. Args: key (KeyArray): JAX PRNG key for controlling random state. Returns: Float[Array, "1 D"]: Point to be queried next
ask
python
JaxGaussianProcesses/GPJax
gpjax/decision_making/decision_maker.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/decision_making/decision_maker.py
Apache-2.0
def tell(self, observation_datasets: Mapping[str, Dataset], key: KeyArray):
    """
    Add newly observed data to datasets and update the corresponding posteriors.

    Args:
        observation_datasets: dictionary of datasets containing new observations.
            Tags are used to distinguish datasets, and correspond to tags in
            `posterior_handlers` and `self.datasets`.
        key: JAX PRNG key for controlling random state.
    """
    if observation_datasets.keys() != self.datasets.keys():
        raise ValueError(
            "Observation datasets and existing datasets must have the same keys. "
            f"Got observation datasets keys {observation_datasets.keys()} and "
            f"existing datasets keys {self.datasets.keys()}."
        )

    for tag, observation_dataset in observation_datasets.items():
        self.datasets[tag] += observation_dataset

    for tag, posterior_handler in self.posterior_handlers.items():
        key, _ = jr.split(key)
        self.posteriors[tag] = posterior_handler.update_posterior(
            self.datasets[tag], self.posteriors[tag], optimize=True, key=key
        )
Add newly observed data to datasets and update the corresponding posteriors. Args: observation_datasets: dictionary of datasets containing new observations. Tags are used to distinguish datasets, and correspond to tags in `posterior_handlers` and `self.datasets`. key: JAX PRNG key for controlling random state.
tell
python
JaxGaussianProcesses/GPJax
gpjax/decision_making/decision_maker.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/decision_making/decision_maker.py
Apache-2.0
def run(
    self, n_steps: int, black_box_function_evaluator: FunctionEvaluator
) -> Mapping[str, Dataset]:
    """
    Run the decision making loop continuously for `n_steps`. This is broken down
    into three main steps:
    1. Call the `ask` method to get the point to be queried next.
    2. Call the `black_box_function_evaluator` to evaluate the black box functions
    of interest at the point chosen to be queried.
    3. Call the `tell` method to update the datasets and posteriors with the newly
    observed data.

    In addition to this, after the `ask` step, the functions in the `post_ask` list
    are executed, taking as arguments the decision maker and the point chosen to be
    queried next. Similarly, after the `tell` step, the functions in the `post_tell`
    list are executed, taking the decision maker as the sole argument.

    Args:
        n_steps (int): Number of steps to run the decision making loop for.
        black_box_function_evaluator (FunctionEvaluator): Function evaluator which
            evaluates the black box functions of interest at supplied points.

    Returns:
        Mapping[str, Dataset]: Dictionary of datasets containing the observations
        made throughout the decision making loop, as well as the initial data
        supplied when initialising the `DecisionMaker`.
    """
    for _ in range(n_steps):
        query_point = self.ask(self.key)

        for post_ask_method in self.post_ask:
            post_ask_method(self, query_point)

        self.key, _ = jr.split(self.key)
        observation_datasets = black_box_function_evaluator(query_point)
        self.tell(observation_datasets, self.key)

        for post_tell_method in self.post_tell:
            post_tell_method(self)

    return self.datasets
Run the decision making loop continuously for `n_steps`. This is broken down into three main steps: 1. Call the `ask` method to get the point to be queried next. 2. Call the `black_box_function_evaluator` to evaluate the black box functions of interest at the point chosen to be queried. 3. Call the `tell` method to update the datasets and posteriors with the newly observed data. In addition to this, after the `ask` step, the functions in the `post_ask` list are executed, taking as arguments the decision maker and the point chosen to be queried next. Similarly, after the `tell` step, the functions in the `post_tell` list are executed, taking the decision maker as the sole argument. Args: n_steps (int): Number of steps to run the decision making loop for. black_box_function_evaluator (FunctionEvaluator): Function evaluator which evaluates the black box functions of interest at supplied points. Returns: Mapping[str, Dataset]: Dictionary of datasets containing the observations made throughout the decision making loop, as well as the initial data supplied when initialising the `DecisionMaker`.
run
python
JaxGaussianProcesses/GPJax
gpjax/decision_making/decision_maker.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/decision_making/decision_maker.py
Apache-2.0
def ask(self, key: KeyArray) -> Float[Array, "B D"]:
    """
    Get updated utility function(s) and return the point(s) which maximises it/them.

    This method also stores the utility function(s) in
    `self.current_utility_functions` so that they can be accessed after the ask
    function has been called. This is useful for non-deterministic utility
    functions, which may differ between calls to `ask` due to the splitting of
    `self.key`.

    Note that in general `SinglePointUtilityFunction`s are only capable of
    generating one point to be queried at each iteration of the decision making
    loop (i.e. `self.batch_size` must be 1). However, Thompson sampling can be used
    in a batched setting by drawing a batch of different samples from the GP
    posterior. This is done by calling `build_utility_function` with different keys
    sequentially, and optimising each of these individual samples in sequence in
    order to obtain `self.batch_size` points to query next.

    Args:
        key (KeyArray): JAX PRNG key for controlling random state.

    Returns:
        Float[Array, "B D"]: Point(s) to be queried next.
    """
    self.current_utility_functions = []
    maximizers = []

    # We currently only allow Thompson sampling to be run with batch size > 1. More
    # batched utility functions may be added in the future.
    if isinstance(self.utility_function_builder, ThompsonSampling) or (
        (not isinstance(self.utility_function_builder, ThompsonSampling))
        and (self.batch_size == 1)
    ):
        # Draw 'self.batch_size' Thompson samples and optimize each of them in order
        # to obtain 'self.batch_size' points to query next.
        for _ in range(self.batch_size):
            decision_function = (
                self.utility_function_builder.build_utility_function(
                    self.posteriors, self.datasets, key
                )
            )
            self.current_utility_functions.append(decision_function)

            _, key = jr.split(key)
            maximizer = self.utility_maximizer.maximize(
                decision_function, self.search_space, key
            )
            maximizers.append(maximizer)
            _, key = jr.split(key)

        maximizers = jnp.concatenate(maximizers)
        return maximizers
    else:
        raise NotImplementedError(
            "Only Thompson sampling currently supports batch size > 1."
        )
Get updated utility function(s) and return the point(s) which maximises it/them. This method also stores the utility function(s) in `self.current_utility_functions` so that they can be accessed after the ask function has been called. This is useful for non-deterministic utility functions, which may differ between calls to `ask` due to the splitting of `self.key`. Note that in general `SinglePointUtilityFunction`s are only capable of generating one point to be queried at each iteration of the decision making loop (i.e. `self.batch_size` must be 1). However, Thompson sampling can be used in a batched setting by drawing a batch of different samples from the GP posterior. This is done by calling `build_utility_function` with different keys sequentially, and optimising each of these individual samples in sequence in order to obtain `self.batch_size` points to query next. Args: key (KeyArray): JAX PRNG key for controlling random state. Returns: Float[Array, "B D"]: Point(s) to be queried next.
ask
python
JaxGaussianProcesses/GPJax
gpjax/decision_making/decision_maker.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/decision_making/decision_maker.py
Apache-2.0
def build_function_evaluator(
    functions: Dict[str, Callable[[Float[Array, "N D"]], Float[Array, "N 1"]]],
) -> FunctionEvaluator:
    """
    Takes a dictionary of functions and returns a `FunctionEvaluator` which can be
    used to evaluate each of the functions at a supplied set of points and return a
    dictionary of datasets storing the evaluated points.
    """
    return lambda x: {tag: Dataset(x, f(x)) for tag, f in functions.items()}
Takes a dictionary of functions and returns a `FunctionEvaluator` which can be used to evaluate each of the functions at a supplied set of points and return a dictionary of datasets storing the evaluated points.
build_function_evaluator
python
JaxGaussianProcesses/GPJax
gpjax/decision_making/utils.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/decision_making/utils.py
Apache-2.0
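A minimal usage sketch of `build_function_evaluator` above. It assumes GPJax is installed, that `Dataset` is the `gpjax.Dataset` container taking `(X, y)`, that the import path shown is still current, and that the tag names are purely illustrative:

import jax.numpy as jnp
from gpjax import Dataset
from gpjax.decision_making.utils import build_function_evaluator  # path assumed current

# Two toy black-box functions keyed by illustrative tags.
functions = {
    "OBJECTIVE": lambda x: jnp.sin(x).sum(axis=-1, keepdims=True),
    "CONSTRAINT": lambda x: (x**2).sum(axis=-1, keepdims=True) - 1.0,
}
evaluator = build_function_evaluator(functions)

x = jnp.array([[0.0], [0.5], [1.0]])
datasets = evaluator(x)  # dict of tag -> Dataset holding the evaluated points
print({tag: d.y.shape for tag, d in datasets.items()})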
def get_best_latent_observation_val(
    posterior: AbstractPosterior, dataset: Dataset
) -> Float[Array, ""]:
    """
    Takes a posterior and dataset and returns the best (latent) function value in
    the dataset, corresponding to the minimum of the posterior mean value evaluated
    at locations in the dataset. In the noiseless case, this corresponds to the
    minimum value in the dataset.
    """
    return jnp.min(posterior(dataset.X, dataset).mean())
Takes a posterior and dataset and returns the best (latent) function value in the dataset, corresponding to the minimum of the posterior mean value evaluated at locations in the dataset. In the noiseless case, this corresponds to the minimum value in the dataset.
get_best_latent_observation_val
python
JaxGaussianProcesses/GPJax
gpjax/decision_making/utils.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/decision_making/utils.py
Apache-2.0
def get_posterior(
    self, dataset: Dataset, optimize: bool, key: Optional[KeyArray] = None
) -> AbstractPosterior:
    """
    Initialise (and optionally optimize) a posterior using the given dataset.

    Args:
        dataset: dataset to get posterior for.
        optimize: whether to optimize the posterior hyperparameters.
        key: a JAX PRNG key which is used for optimizing the posterior
            hyperparameters.

    Returns:
        Posterior for the given dataset.
    """
    posterior = self.prior * self.likelihood_builder(dataset.n)

    if optimize:
        if key is None:
            raise ValueError(
                "A key must be provided in order to optimize the posterior."
            )
        posterior = self._optimize_posterior(posterior, dataset, key)

    return posterior
Initialise (and optionally optimize) a posterior using the given dataset. Args: dataset: dataset to get posterior for. optimize: whether to optimize the posterior hyperparameters. key: a JAX PRNG key which is used for optimizing the posterior hyperparameters. Returns: Posterior for the given dataset.
get_posterior
python
JaxGaussianProcesses/GPJax
gpjax/decision_making/posterior_handler.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/decision_making/posterior_handler.py
Apache-2.0
def update_posterior(
    self,
    dataset: Dataset,
    previous_posterior: AbstractPosterior,
    optimize: bool,
    key: Optional[KeyArray] = None,
) -> AbstractPosterior:
    """
    Update the given posterior with the given dataset. This needs to be done when
    the number of datapoints in the (training) dataset of the posterior changes, as
    the `AbstractLikelihood` class requires the number of datapoints to be
    specified. Hyperparameters may or may not be optimized, depending on the value
    of the `optimize` parameter. Note that the updated posterior will be
    initialised with the same prior hyperparameters as the previous posterior, but
    the likelihood will be re-initialised with the new number of datapoints, and
    hyperparameters set as in the `likelihood_builder` function.

    Args:
        dataset: dataset to get posterior for.
        previous_posterior: posterior being updated. This is supplied as one may
            wish to simply increase the number of datapoints in the likelihood,
            without optimizing the posterior hyperparameters, in which case the
            previous posterior can be used to obtain the previously set prior
            hyperparameters.
        optimize: whether to optimize the posterior hyperparameters.
        key: A JAX PRNG key which is used for optimizing the posterior
            hyperparameters.
    """
    posterior = previous_posterior.prior * self.likelihood_builder(dataset.n)

    if optimize:
        if key is None:
            raise ValueError(
                "A key must be provided in order to optimize the posterior."
            )
        posterior = self._optimize_posterior(posterior, dataset, key)
    return posterior
Update the given posterior with the given dataset. This needs to be done when the number of datapoints in the (training) dataset of the posterior changes, as the `AbstractLikelihood` class requires the number of datapoints to be specified. Hyperparameters may or may not be optimized, depending on the value of the `optimize` parameter. Note that the updated posterior will be initialised with the same prior hyperparameters as the previous posterior, but the likelihood will be re-initialised with the new number of datapoints, and hyperparameters set as in the `likelihood_builder` function. Args: dataset: dataset to get posterior for. previous_posterior: posterior being updated. This is supplied as one may wish to simply increase the number of datapoints in the likelihood, without optimizing the posterior hyperparameters, in which case the previous posterior can be used to obtain the previously set prior hyperparameters. optimize: whether to optimize the posterior hyperparameters. key: A JAX PRNG key which is used for optimizing the posterior hyperparameters.
update_posterior
python
JaxGaussianProcesses/GPJax
gpjax/decision_making/posterior_handler.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/decision_making/posterior_handler.py
Apache-2.0
def _optimize_posterior(
    self, posterior: AbstractPosterior, dataset: Dataset, key: KeyArray
) -> AbstractPosterior:
    """
    Takes a posterior and corresponding dataset and optimizes the posterior using
    the GPJax `fit` method.

    Args:
        posterior: Posterior being optimized.
        dataset: Dataset used for optimizing posterior.
        key: A JAX PRNG key for generating random numbers.

    Returns:
        Optimized posterior.
    """
    opt_posterior, _ = gpx.fit(
        model=posterior,
        objective=self.optimization_objective,
        train_data=dataset,
        optim=self.optimizer,
        num_iters=self.num_optimization_iters,
        safe=True,
        key=key,
        verbose=False,
    )

    return opt_posterior
Takes a posterior and corresponding dataset and optimizes the posterior using the GPJax `fit` method. Args: posterior: Posterior being optimized. dataset: Dataset used for optimizing posterior. key: A JAX PRNG key for generating random numbers. Returns: Optimized posterior.
_optimize_posterior
python
JaxGaussianProcesses/GPJax
gpjax/decision_making/posterior_handler.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/decision_making/posterior_handler.py
Apache-2.0
def sample(self, num_points: int, key: KeyArray) -> Float[Array, "N D"]:
    """Sample points from the search space.

    Args:
        num_points (int): Number of points to be sampled from the search space.
        key (KeyArray): JAX PRNG key.

    Returns:
        Float[Array, "N D"]: `num_points` points sampled from the search space.
    """
    raise NotImplementedError
Sample points from the search space. Args: num_points (int): Number of points to be sampled from the search space. key (KeyArray): JAX PRNG key. Returns: Float[Array, "N D"]: `num_points` points sampled from the search space.
sample
python
JaxGaussianProcesses/GPJax
gpjax/decision_making/search_space.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/decision_making/search_space.py
Apache-2.0
def dimensionality(self) -> int:
    """Dimensionality of the search space.

    Returns:
        int: Dimensionality of the search space.
    """
    raise NotImplementedError
Dimensionality of the search space. Returns: int: Dimensionality of the search space.
dimensionality
python
JaxGaussianProcesses/GPJax
gpjax/decision_making/search_space.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/decision_making/search_space.py
Apache-2.0
def sample(self, num_points: int, key: KeyArray) -> Float[Array, "N D"]:
    """Sample points from the search space using a Halton sequence.

    Args:
        num_points (int): Number of points to be sampled from the search space.
        key (KeyArray): JAX PRNG key.

    Returns:
        Float[Array, "N D"]: `num_points` points sampled using the Halton sequence
        from the search space.
    """
    if num_points <= 0:
        raise ValueError("Number of points must be greater than 0.")
    initial_sample = tfp.mcmc.sample_halton_sequence(
        dim=self.dimensionality, num_results=num_points, seed=key
    )
    return (
        self.lower_bounds + (self.upper_bounds - self.lower_bounds) * initial_sample
    )
Sample points from the search space using a Halton sequence. Args: num_points (int): Number of points to be sampled from the search space. key (KeyArray): JAX PRNG key. Returns: Float[Array, "N D"]: `num_points` points sampled using the Halton sequence from the search space.
sample
python
JaxGaussianProcesses/GPJax
gpjax/decision_making/search_space.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/decision_making/search_space.py
Apache-2.0
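The scaling step in the `sample` method above is just an affine map from the unit hypercube onto the box bounds. A small standalone sketch, assuming TensorFlow Probability's JAX substrate is available and using illustrative bounds:

import jax.numpy as jnp
import jax.random as jr
from tensorflow_probability.substrates import jax as tfp

lower_bounds = jnp.array([0.0, -1.0])
upper_bounds = jnp.array([2.0, 1.0])

# Quasi-random points on the unit square, then rescaled into the box.
unit_sample = tfp.mcmc.sample_halton_sequence(dim=2, num_results=5, seed=jr.PRNGKey(0))
points = lower_bounds + (upper_bounds - lower_bounds) * unit_sample
print(points)  # shape (5, 2), each row inside the bounds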
def _get_discrete_maximizer(
    query_points: Float[Array, "N D"], utility_function: SinglePointUtilityFunction
) -> Float[Array, "1 D"]:
    """Get the point which maximises the utility function evaluated at a given set of points.

    Args:
        query_points: set of points at which to evaluate the utility function, as an
            array of shape `[n_points, n_dims]`.
        utility_function: the single point utility function to be evaluated at
            `query_points`.

    Returns:
        Array of shape `[1, n_dims]` representing the point which maximises the
        utility function.
    """
    utility_function_values = utility_function(query_points)
    max_utility_function_value_idx = jnp.argmax(
        utility_function_values, axis=0, keepdims=True
    )
    best_sample_point = jnp.take_along_axis(
        query_points, max_utility_function_value_idx, axis=0
    )
    return best_sample_point
Get the point which maximises the utility function evaluated at a given set of points. Args: query_points: set of points at which to evaluate the utility function, as an array of shape `[n_points, n_dims]`. utility_function: the single point utility function to be evaluated at `query_points`. Returns: Array of shape `[1, n_dims]` representing the point which maximises the utility function.
_get_discrete_maximizer
python
JaxGaussianProcesses/GPJax
gpjax/decision_making/utility_maximizer.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/decision_making/utility_maximizer.py
Apache-2.0
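A tiny self-contained check of the argmax/`take_along_axis` pattern used by `_get_discrete_maximizer`, with a toy utility that peaks at x = 0.5:

import jax.numpy as jnp

utility = lambda x: -((x - 0.5) ** 2)  # shape [N, 1] in, [N, 1] out

query_points = jnp.array([[0.0], [0.4], [0.8]])
values = utility(query_points)
idx = jnp.argmax(values, axis=0, keepdims=True)
best = jnp.take_along_axis(query_points, idx, axis=0)
print(best)  # [[0.4]] -- the candidate closest to the peak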
def maximize(
    self,
    utility_function: SinglePointUtilityFunction,
    search_space: AbstractSearchSpace,
    key: KeyArray,
) -> Float[Array, "1 D"]:
    """Maximize the given utility function over the search space provided.

    Args:
        utility_function: utility function to be maximized.
        search_space: search space over which to maximize the utility function.
        key: JAX PRNG key.

    Returns:
        Float[Array, "1 D"]: Point at which the utility function is maximized.
    """
    raise NotImplementedError
Maximize the given utility function over the search space provided. Args: utility_function: utility function to be maximized. search_space: search space over which to maximize the utility function. key: JAX PRNG key. Returns: Float[Array, "1 D"]: Point at which the utility function is maximized.
maximize
python
JaxGaussianProcesses/GPJax
gpjax/decision_making/utility_maximizer.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/decision_making/utility_maximizer.py
Apache-2.0
def _scalar_utility_function(x: Float[Array, "1 D"]) -> ScalarFloat:
    """
    The Jaxopt minimizer requires a function which returns a scalar. It calls the
    utility function with one point at a time, so the utility function returns an
    array of shape [1, 1], so we index to return a scalar. Note that we also return
    the negative of the utility function - this is because utility functions should
    be *maximized* but the Jaxopt minimizer minimizes functions.
    """
    return -utility_function(x)[0][0]
The Jaxopt minimizer requires a function which returns a scalar. It calls the utility function with one point at a time, so the utility function returns an array of shape [1, 1], so we index to return a scalar. Note that we also return the negative of the utility function - this is because utility functions should be *maximized* but the Jaxopt minimizer minimizes functions.
maximize._scalar_utility_function
python
JaxGaussianProcesses/GPJax
gpjax/decision_making/utility_maximizer.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/decision_making/utility_maximizer.py
Apache-2.0
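The indexing and negation in `_scalar_utility_function` can be seen with a toy batched utility; the lambda below is an illustration, not the GPJax utility itself:

import jax.numpy as jnp

# A batched utility returns an array of shape [1, 1] for a single [1, D] point.
utility_function = lambda x: jnp.sum(jnp.sin(x), axis=-1, keepdims=True)

x = jnp.array([[0.3, 0.7]])          # shape [1, D]
value = utility_function(x)          # shape [1, 1]
scalar_objective = -value[0][0]      # scalar, negated so a minimizer maximises utility
print(value.shape, float(scalar_objective))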
def maximize(
    self,
    utility_function: SinglePointUtilityFunction,
    search_space: ContinuousSearchSpace,
    key: KeyArray,
) -> Float[Array, "1 D"]:
    max_observed_utility_function_value = None
    maximizer = None

    for _ in range(self.num_restarts):
        key, _ = jr.split(key)
        initial_sample_points = search_space.sample(
            self.num_initial_samples, key=key
        )
        best_initial_sample_point = _get_discrete_maximizer(
            initial_sample_points, utility_function
        )

        def _scalar_utility_function(x: Float[Array, "1 D"]) -> ScalarFloat:
            """
            The Jaxopt minimizer requires a function which returns a scalar. It
            calls the utility function with one point at a time, so the utility
            function returns an array of shape [1, 1], so we index to return a
            scalar. Note that we also return the negative of the utility function -
            this is because utility functions should be *maximized* but the Jaxopt
            minimizer minimizes functions.
            """
            return -utility_function(x)[0][0]

        lbfgsb = ScipyBoundedMinimize(
            fun=_scalar_utility_function, method="l-bfgs-b"
        )
        bounds = (search_space.lower_bounds, search_space.upper_bounds)
        optimized_point = lbfgsb.run(
            best_initial_sample_point, bounds=bounds
        ).params
        optimized_utility_function_value = _scalar_utility_function(optimized_point)

        if (max_observed_utility_function_value is None) or (
            optimized_utility_function_value > max_observed_utility_function_value
        ):
            max_observed_utility_function_value = optimized_utility_function_value
            maximizer = optimized_point

    return maximizer
The Jaxopt minimizer requires a function which returns a scalar. It calls the utility function with one point at a time, so the utility function returns an array of shape [1, 1], so we index to return a scalar. Note that we also return the negative of the utility function - this is because utility functions should be *maximized* but the Jaxopt minimizer minimizes functions.
maximize
python
JaxGaussianProcesses/GPJax
gpjax/decision_making/utility_maximizer.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/decision_making/utility_maximizer.py
Apache-2.0
def generate_dataset(
    self, num_points: int, key: KeyArray, obs_stddev: float = 0.0
) -> Dataset:
    """
    Generate a toy dataset from the test function.

    Args:
        num_points (int): Number of points to sample.
        key (KeyArray): JAX PRNG key.
        obs_stddev (float): (Optional) standard deviation of Gaussian distributed
            noise added to observations.

    Returns:
        Dataset: Dataset of points sampled from the test function.
    """
    X = self.search_space.sample(num_points=num_points, key=key)
    gaussian_noise = tfp.distributions.Normal(
        jnp.zeros(num_points), obs_stddev * jnp.ones(num_points)
    )
    y = self.evaluate(X) + jnp.transpose(
        gaussian_noise.sample(sample_shape=[1], seed=key)
    )
    return Dataset(X=X, y=y)
Generate a toy dataset from the test function. Args: num_points (int): Number of points to sample. key (KeyArray): JAX PRNG key. obs_stddev (float): (Optional) standard deviation of Gaussian distributed noise added to observations. Returns: Dataset: Dataset of points sampled from the test function.
generate_dataset
python
JaxGaussianProcesses/GPJax
gpjax/decision_making/test_functions/continuous_functions.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/decision_making/test_functions/continuous_functions.py
Apache-2.0
def generate_test_points(
    self, num_points: int, key: KeyArray
) -> Float[Array, "N D"]:
    """
    Generate test points from the search space of the test function.

    Args:
        num_points (int): Number of points to sample.
        key (KeyArray): JAX PRNG key.

    Returns:
        Float[Array, 'N D']: Test points sampled from the search space.
    """
    return self.search_space.sample(num_points=num_points, key=key)
Generate test points from the search space of the test function. Args: num_points (int): Number of points to sample. key (KeyArray): JAX PRNG key. Returns: Float[Array, 'N D']: Test points sampled from the search space.
generate_test_points
python
JaxGaussianProcesses/GPJax
gpjax/decision_making/test_functions/continuous_functions.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/decision_making/test_functions/continuous_functions.py
Apache-2.0
def evaluate(self, x: Float[Array, "N D"]) -> Float[Array, "N 1"]: """ Evaluate the test function at a set of points. Args: x (Float[Array, 'N D']): Points to evaluate the test function at. Returns: Float[Array, 'N 1']: Values of the test function at the points. """ raise NotImplementedError
Evaluate the test function at a set of points. Args: x (Float[Array, 'N D']): Points to evaluate the test function at. Returns: Float[Array, 'N 1']: Values of the test function at the points.
evaluate
python
JaxGaussianProcesses/GPJax
gpjax/decision_making/test_functions/continuous_functions.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/decision_making/test_functions/continuous_functions.py
Apache-2.0
def generate_dataset(self, num_points: int, key: KeyArray) -> Dataset:
    """
    Generate a toy dataset from the test function.

    Args:
        num_points (int): Number of points to sample.
        key (KeyArray): JAX PRNG key.

    Returns:
        Dataset: Dataset of points sampled from the test function.
    """
    X = self.search_space.sample(num_points=num_points, key=key)
    y = self.evaluate(X)
    return Dataset(X=X, y=y)
Generate a toy dataset from the test function. Args: num_points (int): Number of points to sample. key (KeyArray): JAX PRNG key. Returns: Dataset: Dataset of points sampled from the test function.
generate_dataset
python
JaxGaussianProcesses/GPJax
gpjax/decision_making/test_functions/non_conjugate_functions.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/decision_making/test_functions/non_conjugate_functions.py
Apache-2.0
def generate_test_points(
    self, num_points: int, key: KeyArray
) -> Float[Array, "N D"]:
    """
    Generate test points from the search space of the test function.

    Args:
        num_points (int): Number of points to sample.
        key (KeyArray): JAX PRNG key.

    Returns:
        Float[Array, 'N D']: Test points sampled from the search space.
    """
    return self.search_space.sample(num_points=num_points, key=key)
Generate test points from the search space of the test function. Args: num_points (int): Number of points to sample. key (KeyArray): JAX PRNG key. Returns: Float[Array, 'N D']: Test points sampled from the search space.
generate_test_points
python
JaxGaussianProcesses/GPJax
gpjax/decision_making/test_functions/non_conjugate_functions.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/decision_making/test_functions/non_conjugate_functions.py
Apache-2.0
def evaluate(self, x: Float[Array, "N 1"]) -> Int[Array, "N 1"]: """ Evaluate the test function at a set of points. Function taken from https://docs.jaxgaussianprocesses.com/_examples/poisson/#dataset. Args: x (Float[Array, 'N D']): Points to evaluate the test function at. Returns: Float[Array, 'N 1']: Values of the test function at the points. """ key = jr.key(42) f = lambda x: 2.0 * jnp.sin(3 * x) + 0.5 * x return jr.poisson(key, jnp.exp(f(x)))
Evaluate the test function at a set of points. Function taken from https://docs.jaxgaussianprocesses.com/_examples/poisson/#dataset. Args: x (Float[Array, 'N D']): Points to evaluate the test function at. Returns: Float[Array, 'N 1']: Values of the test function at the points.
evaluate
python
JaxGaussianProcesses/GPJax
gpjax/decision_making/test_functions/non_conjugate_functions.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/decision_making/test_functions/non_conjugate_functions.py
Apache-2.0
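The same latent log-rate and Poisson sampling as the `evaluate` method above, runnable on its own to show what the generated targets look like:

import jax.numpy as jnp
import jax.random as jr

key = jr.key(42)
x = jnp.linspace(-2.0, 2.0, 5).reshape(-1, 1)

f = lambda x: 2.0 * jnp.sin(3 * x) + 0.5 * x   # latent log-rate
counts = jr.poisson(key, jnp.exp(f(x)))        # integer observations, one per row
print(counts.ravel())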
def build_utility_function(
    self,
    posteriors: Mapping[str, ConjugatePosterior],
    datasets: Mapping[str, Dataset],
    key: KeyArray,
) -> SinglePointUtilityFunction:
    """
    Constructs the probability of improvement utility function using the predictive
    posterior of the objective function.

    Args:
        posteriors (Mapping[str, AbstractPosterior]): Dictionary of posteriors to be
            used to form the utility function. One of the posteriors must correspond
            to the `OBJECTIVE` key, as we sample from the objective posterior to form
            the utility function.
        datasets (Mapping[str, Dataset]): Dictionary of datasets which may be used to
            form the utility function. Keys in `datasets` should correspond to keys in
            `posteriors`. One of the datasets must correspond to the `OBJECTIVE` key.
        key (KeyArray): JAX PRNG key used for random number generation. Since the
            probability of improvement is computed deterministically from the
            predictive posterior, the key is not used.

    Returns:
        SinglePointUtilityFunction: the probability of improvement utility function.
    """
    self.check_objective_present(posteriors, datasets)

    objective_posterior = posteriors[OBJECTIVE]
    if not isinstance(objective_posterior, ConjugatePosterior):
        raise ValueError(
            "Objective posterior must be a ConjugatePosterior to compute the "
            "Probability of Improvement using a Gaussian CDF."
        )

    objective_dataset = datasets[OBJECTIVE]
    if (
        objective_dataset.X is None
        or objective_dataset.n == 0
        or objective_dataset.y is None
    ):
        raise ValueError(
            "Objective dataset must be non-empty to compute the "
            "Probability of Improvement (since we need a "
            "`best_y` value)."
        )

    def probability_of_improvement(x_test: Num[Array, "N D"]):
        best_y = get_best_latent_observation_val(
            objective_posterior, objective_dataset
        )
        predictive_dist = objective_posterior.predict(x_test, objective_dataset)
        normal_dist = tfp.distributions.Normal(
            loc=predictive_dist.mean(),
            scale=predictive_dist.stddev(),
        )
        return normal_dist.cdf(best_y).reshape(-1, 1)

    return probability_of_improvement
Constructs the probability of improvement utility function using the predictive posterior of the objective function. Args: posteriors (Mapping[str, AbstractPosterior]): Dictionary of posteriors to be used to form the utility function. One of the posteriors must correspond to the `OBJECTIVE` key, as we sample from the objective posterior to form the utility function. datasets (Mapping[str, Dataset]): Dictionary of datasets which may be used to form the utility function. Keys in `datasets` should correspond to keys in `posteriors`. One of the datasets must correspond to the `OBJECTIVE` key. key (KeyArray): JAX PRNG key used for random number generation. Since the probability of improvement is computed deterministically from the predictive posterior, the key is not used. Returns: SinglePointUtilityFunction: the probability of improvement utility function.
build_utility_function
python
JaxGaussianProcesses/GPJax
gpjax/decision_making/utility_functions/probability_of_improvement.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/decision_making/utility_functions/probability_of_improvement.py
Apache-2.0
def build_utility_function(
    self,
    posteriors: Mapping[str, ConjugatePosterior],
    datasets: Mapping[str, Dataset],
    key: KeyArray,
) -> SinglePointUtilityFunction:
    """
    Draw an approximate sample from the posterior of the objective model and return
    the *negative* of this sample as a utility function, as utility functions are
    *maximised*.

    Args:
        posteriors (Mapping[str, ConjugatePosterior]): Dictionary of posteriors to be
            used to form the utility function. One of the posteriors must correspond
            to the `OBJECTIVE` key, as we sample from the objective posterior to form
            the utility function.
        datasets (Mapping[str, Dataset]): Dictionary of datasets which may be used to
            form the utility function. Keys in `datasets` should correspond to keys in
            `posteriors`. One of the datasets must correspond to the `OBJECTIVE` key.
        key (KeyArray): JAX PRNG key used for random number generation. This can be
            changed to draw different samples.

    Returns:
        SinglePointUtilityFunction: An approximate sample from the objective model
        posterior to be *maximised* in order to decide which point to query next.
    """
    self.check_objective_present(posteriors, datasets)

    objective_posterior = posteriors[OBJECTIVE]
    if not isinstance(objective_posterior, ConjugatePosterior):
        raise ValueError(
            "Objective posterior must be a ConjugatePosterior to draw an approximate sample."
        )

    objective_dataset = datasets[OBJECTIVE]
    thompson_sample = objective_posterior.sample_approx(
        num_samples=1,
        train_data=objective_dataset,
        key=key,
        num_features=self.num_features,
    )

    return lambda x: -1.0 * thompson_sample(x)  # Utility functions are *maximised*
Draw an approximate sample from the posterior of the objective model and return the *negative* of this sample as a utility function, as utility functions are *maximised*. Args: posteriors (Mapping[str, ConjugatePosterior]): Dictionary of posteriors to be used to form the utility function. One of the posteriors must correspond to the `OBJECTIVE` key, as we sample from the objective posterior to form the utility function. datasets (Mapping[str, Dataset]): Dictionary of datasets which may be used to form the utility function. Keys in `datasets` should correspond to keys in `posteriors`. One of the datasets must correspond to the `OBJECTIVE` key. key (KeyArray): JAX PRNG key used for random number generation. This can be changed to draw different samples. Returns: SinglePointUtilityFunction: An approximate sample from the objective model posterior to be *maximised* in order to decide which point to query next.
build_utility_function
python
JaxGaussianProcesses/GPJax
gpjax/decision_making/utility_functions/thompson_sampling.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/decision_making/utility_functions/thompson_sampling.py
Apache-2.0
def check_objective_present(
    self,
    posteriors: Mapping[str, AbstractPosterior],
    datasets: Mapping[str, Dataset],
) -> None:
    """
    Check that the objective posterior and dataset are present in the posteriors and
    datasets.

    Args:
        posteriors: dictionary of posteriors to be used to form the utility function.
        datasets: dictionary of datasets which may be used to form the utility
            function.

    Raises:
        ValueError: If the objective posterior or dataset are not present in the
            posteriors or datasets.
    """
    if OBJECTIVE not in posteriors.keys():
        raise ValueError("Objective posterior not found in posteriors")
    elif OBJECTIVE not in datasets.keys():
        raise ValueError("Objective dataset not found in datasets")
Check that the objective posterior and dataset are present in the posteriors and datasets. Args: posteriors: dictionary of posteriors to be used to form the utility function. datasets: dictionary of datasets which may be used to form the utility function. Raises: ValueError: If the objective posterior or dataset are not present in the posteriors or datasets.
check_objective_present
python
JaxGaussianProcesses/GPJax
gpjax/decision_making/utility_functions/base.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/decision_making/utility_functions/base.py
Apache-2.0
def build_utility_function(
    self,
    posteriors: Mapping[str, AbstractPosterior],
    datasets: Mapping[str, Dataset],
    key: KeyArray,
) -> SinglePointUtilityFunction:
    """
    Build a `UtilityFunction` from a set of posteriors and datasets.

    Args:
        posteriors: dictionary of posteriors to be used to form the utility function.
        datasets: dictionary of datasets which may be used to form the utility
            function.
        key: JAX PRNG key used for random number generation.

    Returns:
        SinglePointUtilityFunction: Utility function to be *maximised* in order to
        decide which point to query next.
    """
    raise NotImplementedError
Build a `UtilityFunction` from a set of posteriors and datasets. Args: posteriors: dictionary of posteriors to be used to form the utility function. datasets: dictionary of datasets which may be used to form the utility function. key: JAX PRNG key used for random number generation. Returns: SinglePointUtilityFunction: Utility function to be *maximised* in order to decide which point to query next.
build_utility_function
python
JaxGaussianProcesses/GPJax
gpjax/decision_making/utility_functions/base.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/gpjax/decision_making/utility_functions/base.py
Apache-2.0
def process_file(file: Path, out_file: Path | None = None, execute: bool = False):
    """Converts a python file to markdown using jupytext and nbconvert."""
    out_dir = out_file.parent
    command = f"cd {out_dir.as_posix()} && "
    out_file = out_file.relative_to(out_dir).as_posix()
    if execute:
        command += f"jupytext --to ipynb {file} --output - "
        command += (
            f"| jupyter nbconvert --to markdown --execute --stdin --output {out_file}"
        )
    else:
        command += f"jupytext --to markdown {file} --output {out_file}"
    subprocess.run(command, shell=True, check=False)
Converts a python file to markdown using jupytext and nbconvert.
process_file
python
JaxGaussianProcesses/GPJax
docs/scripts/gen_examples.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/docs/scripts/gen_examples.py
Apache-2.0
def is_modified(file: Path, out_file: Path):
    """Check if the output file is older than the input file."""
    return out_file.exists() and out_file.stat().st_mtime < file.stat().st_mtime
Check if the output file is older than the input file.
is_modified
python
JaxGaussianProcesses/GPJax
docs/scripts/gen_examples.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/docs/scripts/gen_examples.py
Apache-2.0
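A sketch of the freshness check these two helpers support: regenerate an output only when it is missing or older than its source. The paths are hypothetical placeholders, not the script's actual layout.

from pathlib import Path

src = Path("examples/regression.py")          # hypothetical source script
out = Path("docs/_examples/regression.md")    # hypothetical generated markdown

# Rebuild only when the output is missing or stale, mirroring how a caller
# could combine `is_modified` with `process_file`.
if not out.exists() or out.stat().st_mtime < src.stat().st_mtime:
    print(f"would regenerate {out} from {src}")
else:
    print(f"{out} is up to date")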
def confidence_ellipse(x, y, ax, n_std=3.0, facecolor="none", **kwargs):
    """
    Create a plot of the covariance confidence ellipse of *x* and *y*.

    Parameters
    ----------
    x, y : array-like, shape (n, )
        Input data.
    ax : matplotlib.axes.Axes
        The axes object to draw the ellipse into.
    n_std : float
        The number of standard deviations to determine the ellipse's radii.
    **kwargs
        Forwarded to `~matplotlib.patches.Ellipse`

    Returns
    -------
    matplotlib.patches.Ellipse
    """
    x = np.array(x)
    y = np.array(y)

    if x.size != y.size:
        raise ValueError("x and y must be the same size")

    cov = np.cov(x, y)
    pearson = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
    # Using a special case to obtain the eigenvalues of this
    # two-dimensional dataset.
    ell_radius_x = np.sqrt(1 + pearson)
    ell_radius_y = np.sqrt(1 - pearson)
    ellipse = Ellipse(
        (0, 0),
        width=ell_radius_x * 2,
        height=ell_radius_y * 2,
        facecolor=facecolor,
        **kwargs,
    )

    # Calculating the standard deviation of x from
    # the square root of the variance and multiplying
    # with the given number of standard deviations.
    scale_x = np.sqrt(cov[0, 0]) * n_std
    mean_x = np.mean(x)

    # calculating the standard deviation of y ...
    scale_y = np.sqrt(cov[1, 1]) * n_std
    mean_y = np.mean(y)

    transf = (
        transforms.Affine2D()
        .rotate_deg(45)
        .scale(scale_x, scale_y)
        .translate(mean_x, mean_y)
    )

    ellipse.set_transform(transf + ax.transData)
    return ax.add_patch(ellipse)
Create a plot of the covariance confidence ellipse of *x* and *y*. Parameters ---------- x, y : array-like, shape (n, ) Input data. ax : matplotlib.axes.Axes The axes object to draw the ellipse into. n_std : float The number of standard deviations to determine the ellipse's radii. **kwargs Forwarded to `~matplotlib.patches.Ellipse` Returns ------- matplotlib.patches.Ellipse
confidence_ellipse
python
JaxGaussianProcesses/GPJax
examples/utils.py
https://github.com/JaxGaussianProcesses/GPJax/blob/master/examples/utils.py
Apache-2.0
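Typical usage of the `confidence_ellipse` helper above, with synthetic correlated data; this assumes the function (and its `numpy`, `matplotlib.transforms`, and `Ellipse` imports) is already in scope, e.g. imported from the repository's `examples/utils.py`.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 0.8 * x + 0.3 * rng.normal(size=500)

fig, ax = plt.subplots()
ax.scatter(x, y, s=4)
confidence_ellipse(x, y, ax, n_std=2.0, edgecolor="red")  # 2-sigma covariance ellipse
plt.show()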
def pytest_addoption(parser):
    """Define pytest command-line option"""
    group = parser.getgroup("jupyter_book")
    group.addoption(
        "--jb-tempdir",
        dest="jb_tempdir",
        default=None,
        help="Specify a directory in which to create tempdirs",
    )
Define pytest command-line option
pytest_addoption
python
jupyter-book/jupyter-book
conftest.py
https://github.com/jupyter-book/jupyter-book/blob/master/conftest.py
BSD-3-Clause
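A hypothetical companion fixture showing how a test could read the `--jb-tempdir` option registered above; the fixture name is illustrative and not necessarily what the repository defines.

import pytest

@pytest.fixture()
def jb_tempdir_option(request):
    # None unless the test run was invoked with --jb-tempdir=<dir>.
    return request.config.getoption("--jb-tempdir")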
def test_myst_init(cli: CliRunner, temp_with_override):
    """Test adding myst metadata to text files."""
    path = temp_with_override.joinpath("tmp.md").absolute()
    text = "TEST"
    with open(path, "w") as ff:
        ff.write(text)
    init_myst_file(path, kernel="python3")

    # Make sure it runs properly. Default kernel should be python3
    new_text = path.read_text(encoding="utf8")
    assert "format_name: myst" in new_text
    assert "TEST" == new_text.strip().split("\n")[-1]
    assert "name: python3" in new_text

    # Make sure the CLI works too
    with warnings.catch_warnings():
        warnings.simplefilter("error")
        result = cli.invoke(myst_init, f"{path} --kernel python3".split())
    # old versions of jupytext give: UserWarning: myst-parse failed unexpectedly
    assert result.exit_code == 0

    # Non-existent kernel
    with pytest.raises(Exception) as err:
        init_myst_file(path, kernel="blah")
    assert "Did not find kernel: blah" in str(err)

    # Missing file
    with pytest.raises(Exception) as err:
        init_myst_file(path.joinpath("MISSING"), kernel="python3")
    assert "Markdown file not found:" in str(err)
Test adding myst metadata to text files.
test_myst_init
python
jupyter-book/jupyter-book
tests/test_utils.py
https://github.com/jupyter-book/jupyter-book/blob/master/tests/test_utils.py
BSD-3-Clause
def test_toc_startwithlist(cli: CliRunner, temp_with_override, file_regression):
    """Testing a basic _toc.yml for tableofcontents directive"""
    path_output = temp_with_override.joinpath("mybook").absolute()

    # Regular TOC should work
    p_toc = path_books.joinpath("toc")
    path_toc = p_toc.joinpath("_toc_startwithlist.yml")
    result = cli.invoke(
        build,
        [
            p_toc.as_posix(),
            "--path-output",
            path_output.as_posix(),
            "--toc",
            path_toc.as_posix(),
            "-W",
        ],
    )
    # print(result.output)
    assert result.exit_code == 0

    path_toc_directive = path_output.joinpath("_build", "html", "index.html")
    # print(path_toc_directive.read_text(encoding="utf8"))

    # get the tableofcontents markup
    soup = BeautifulSoup(path_toc_directive.read_text(encoding="utf8"), "html.parser")
    toc = soup.find_all("div", class_="toctree-wrapper")
    assert len(toc) == 1
    file_regression.check(toc[0].prettify(), extension=".html", encoding="utf8")
Testing a basic _toc.yml for tableofcontents directive
test_toc_startwithlist
python
jupyter-book/jupyter-book
tests/test_tocdirective.py
https://github.com/jupyter-book/jupyter-book/blob/master/tests/test_tocdirective.py
BSD-3-Clause
def test_toc_parts(cli: CliRunner, temp_with_override, file_regression): """Testing `header` in _toc.yml""" path_input = temp_with_override.joinpath("mybook_input").absolute() path_output = temp_with_override.joinpath("mybook").absolute() # Regular TOC should work p_toc = path_books.joinpath("toc") shutil.copytree(p_toc, path_input) # setup correct files (path_input / "subfolder" / "asubpage.md").unlink() for i in range(4): (path_input / "subfolder" / f"asubpage{i+1}.md").write_text( f"# A subpage {i+1}\n", encoding="utf8" ) path_toc = path_input.joinpath("_toc_parts.yml") result = cli.invoke( build, [ path_input.as_posix(), "--path-output", path_output.as_posix(), "--toc", path_toc.as_posix(), "-W", ], ) # print(result.output) assert result.exit_code == 0 path_index = path_output.joinpath("_build", "html", "index.html") # get the tableofcontents markup soup = BeautifulSoup(path_index.read_text(encoding="utf8"), "html.parser") toc = soup.find_all("div", class_="toctree-wrapper") assert len(toc) == 2 file_regression.check( toc[0].prettify(), basename="test_toc_parts_directive", extension=f"{SPHINX_VERSION}.html", encoding="utf8", ) # check the sidebar structure is correct file_regression.check( soup.select(".bd-links")[0].prettify(), basename="test_toc_parts_sidebar", extension=f"{SPHINX_VERSION}.html", encoding="utf8", )
Testing `header` in _toc.yml
test_toc_parts
python
jupyter-book/jupyter-book
tests/test_tocdirective.py
https://github.com/jupyter-book/jupyter-book/blob/master/tests/test_tocdirective.py
BSD-3-Clause
def test_toc_urllink(cli: CliRunner, temp_with_override, file_regression): """Testing with additional `url` link key in _toc.yml""" path_output = temp_with_override.joinpath("mybook").absolute() # Regular TOC should work p_toc = path_books.joinpath("toc") path_toc = p_toc.joinpath("_toc_urllink.yml") result = cli.invoke( build, [ p_toc.as_posix(), "--path-output", path_output.as_posix(), "--toc", path_toc.as_posix(), ], ) print(result.output) assert result.exit_code == 0 path_toc_directive = path_output.joinpath("_build", "html", "index.html") # get the tableofcontents markup soup = BeautifulSoup(path_toc_directive.read_text(encoding="utf8"), "html.parser") toc = soup.find_all("div", class_="toctree-wrapper") assert len(toc) == 1 file_regression.check(toc[0].prettify(), extension=".html", encoding="utf8")
Testing with additional `url` link key in _toc.yml
test_toc_urllink
python
jupyter-book/jupyter-book
tests/test_tocdirective.py
https://github.com/jupyter-book/jupyter-book/blob/master/tests/test_tocdirective.py
BSD-3-Clause
def test_toc_latex_parts(cli: CliRunner, temp_with_override, file_regression):
    """Testing LaTeX output"""
    path_input = temp_with_override.joinpath("mybook_input").absolute()
    path_output = temp_with_override.joinpath("mybook").absolute()

    # Regular TOC should work
    p_toc = path_books.joinpath("toc")
    shutil.copytree(p_toc, path_input)
    # setup correct files
    (path_input / "subfolder" / "asubpage.md").unlink()
    for i in range(4):
        (path_input / "subfolder" / f"asubpage{i+1}.md").write_text(
            f"# A subpage {i+1}\n", encoding="utf8"
        )

    path_toc = path_input.joinpath("_toc_parts.yml")
    result = cli.invoke(
        build,
        [
            path_input.as_posix(),
            "--path-output",
            path_output.as_posix(),
            "--toc",
            path_toc.as_posix(),
            "--builder",
            "pdflatex",
            "-W",
        ],
    )
    assert result.exit_code == 0, result.output

    # reading the tex file
    path_output_file = path_output.joinpath("_build", "latex", "toc-tests.tex")
    file_content = TexSoup(path_output_file.read_text())

    # checking the table of contents which is a list with '\begin{itemize}'
    itemizes = file_content.find_all("itemize")

    file_regression.check(
        str(itemizes[0]) + "\n" + str(itemizes[1]), extension=".tex", encoding="utf8"
    )
Testing LaTeX output
test_toc_latex_parts
python
jupyter-book/jupyter-book
tests/test_tocdirective.py
https://github.com/jupyter-book/jupyter-book/blob/master/tests/test_tocdirective.py
BSD-3-Clause
def test_toc_latex_urllink(cli: CliRunner, temp_with_override, file_regression):
    """Testing LaTeX output"""
    path_output = temp_with_override.joinpath("mybook").absolute()

    # Regular TOC should work
    p_toc = path_books.joinpath("toc")
    path_toc = p_toc.joinpath("_toc_urllink.yml")
    result = cli.invoke(
        build,
        [
            p_toc.as_posix(),
            "--path-output",
            path_output.as_posix(),
            "--toc",
            path_toc.as_posix(),
            "--builder",
            "pdflatex",
        ],
    )
    assert result.exit_code == 0, result.output

    # reading the tex file
    path_output_file = path_output.joinpath("_build", "latex", "toc-tests.tex")
    file_content = TexSoup(path_output_file.read_text())

    # checking the table of contents which is a list with the first '\begin{itemize}'
    file_regression.check(str(file_content.itemize), extension=".tex", encoding="utf8")
Testing LaTeX output
test_toc_latex_urllink
python
jupyter-book/jupyter-book
tests/test_tocdirective.py
https://github.com/jupyter-book/jupyter-book/blob/master/tests/test_tocdirective.py
BSD-3-Clause
def build_resources(temp_with_override):
    """Copies ./books to a temporary directory and yields the paths to the copy and
    to its toc/ subfolder as `pathlib.Path` objects.
    """
    src = Path(__file__).parent.resolve().joinpath("books").absolute()
    dst = temp_with_override / "books"
    shutil.copytree(src, dst)
    yield Path(dst), Path(dst) / "toc"
    shutil.rmtree(dst)
Copies ./books to a temporary directory and yields the paths to the copy and to its toc/ subfolder as `pathlib.Path` objects.
build_resources
python
jupyter-book/jupyter-book
tests/conftest.py
https://github.com/jupyter-book/jupyter-book/blob/master/tests/conftest.py
BSD-3-Clause
def pages(temp_with_override):
    """Copies ./pages to a temporary directory and yields the path as a `pathlib.Path`
    object.
    """
    src = Path(__file__).parent.joinpath("pages").absolute()
    dst = temp_with_override / "pages"
    shutil.copytree(src, dst)
    yield Path(dst)
    shutil.rmtree(dst)
Copies ./pages to a temporary directory and yields the path as a `pathlib.Path` object.
pages
python
jupyter-book/jupyter-book
tests/conftest.py
https://github.com/jupyter-book/jupyter-book/blob/master/tests/conftest.py
BSD-3-Clause
def cli(): """Provides a click.testing CliRunner object for invoking CLI commands.""" runner = CliRunner() yield runner del runner
Provides a click.testing CliRunner object for invoking CLI commands.
cli
python
jupyter-book/jupyter-book
tests/conftest.py
https://github.com/jupyter-book/jupyter-book/blob/master/tests/conftest.py
BSD-3-Clause
def test_build_from_template(temp_with_override, cli): """Test building the book template and a few test configs.""" # Create the book from the template book = temp_with_override / "new_book" _ = cli.invoke(commands.create, book.as_posix()) build_result = cli.invoke( commands.build, [book.as_posix(), "-n", "-W", "--keep-going"] ) assert build_result.exit_code == 0, build_result.output html = book.joinpath("_build", "html") assert html.joinpath("index.html").exists() assert html.joinpath("intro.html").exists()
Test building the book template and a few test configs.
test_build_from_template
python
jupyter-book/jupyter-book
tests/test_build.py
https://github.com/jupyter-book/jupyter-book/blob/master/tests/test_build.py
BSD-3-Clause
def test_build_dirhtml_from_template(temp_with_override, cli): """Test building the book template with dirhtml.""" # Create the book from the template book = temp_with_override / "new_book" _ = cli.invoke(commands.create, book.as_posix()) build_result = cli.invoke( commands.build, [book.as_posix(), "-n", "-W", "--builder", "dirhtml"] ) assert build_result.exit_code == 0, build_result.output html = book.joinpath("_build", "dirhtml") assert html.joinpath("index.html").exists() assert html.joinpath("intro", "index.html").exists()
Test building the book template with dirhtml.
test_build_dirhtml_from_template
python
jupyter-book/jupyter-book
tests/test_build.py
https://github.com/jupyter-book/jupyter-book/blob/master/tests/test_build.py
BSD-3-Clause
def test_build_singlehtml_from_template(temp_with_override, cli): """Test building the book template with singlehtml.""" # Create the book from the template book = temp_with_override / "new_book" _ = cli.invoke(commands.create, book.as_posix()) build_result = cli.invoke( commands.build, [book.as_posix(), "-n", "-W", "--builder", "singlehtml"] ) # TODO: Remove when docutils>=0.20 is pinned in jupyter-book # https://github.com/mcmtroffaes/sphinxcontrib-bibtex/issues/322 if (0, 18) <= docutils.__version_info__ < (0, 20): assert build_result.exit_code == 1, build_result.output else: assert build_result.exit_code == 0, build_result.output html = book.joinpath("_build", "singlehtml") assert html.joinpath("index.html").exists() assert html.joinpath("intro.html").exists()
Test building the book template with singlehtml.
test_build_singlehtml_from_template
python
jupyter-book/jupyter-book
tests/test_build.py
https://github.com/jupyter-book/jupyter-book/blob/master/tests/test_build.py
BSD-3-Clause
def test_custom_config(cli, build_resources): """Test a variety of custom configuration values.""" books, _ = build_resources config = books.joinpath("config") result = cli.invoke(commands.build, [config.as_posix(), "-n", "-W", "--keep-going"]) assert result.exit_code == 0, result.output html = config.joinpath("_build", "html", "index.html").read_text(encoding="utf8") soup = BeautifulSoup(html, "html.parser") assert '<p class="title logo__title">TEST PROJECT NAME</p>' in html assert '<div class="tab-set docutils">' in html assert '<link rel="stylesheet" type="text/css" href="_static/mycss.css" />' in html assert '<script src="_static/js/myjs.js"></script>' in html # Check that our comments engines were correctly added assert soup.find("script", attrs={"kind": "hypothesis"}) assert soup.find("script", attrs={"kind": "utterances"})
Test a variety of custom configuration values.
test_custom_config
python
jupyter-book/jupyter-book
tests/test_build.py
https://github.com/jupyter-book/jupyter-book/blob/master/tests/test_build.py
BSD-3-Clause
def test_toc_builds(cli, build_resources, toc): """Test building the book template with several different TOC files.""" books, tocs = build_resources result = cli.invoke( commands.build, [tocs.as_posix(), "--toc", (tocs / toc).as_posix(), "-n", "-W", "--keep-going"], ) assert result.exit_code == 0, result.output
Test building the book template with several different TOC files.
test_toc_builds
python
jupyter-book/jupyter-book
tests/test_build.py
https://github.com/jupyter-book/jupyter-book/blob/master/tests/test_build.py
BSD-3-Clause
def test_toc_rebuild(cli, build_resources): """Changes to the TOC should force a re-build of pages. Also tests for changes to the relative ordering of content pages. """ _, tocs = build_resources toc = tocs / "_toc_simple.yml" index_html = tocs.joinpath("_build", "html", "index.html") # Not using -W because we expect warnings for pages not listed in TOC result = cli.invoke( commands.build, [tocs.as_posix(), "--toc", toc.as_posix(), "-n"], ) html = BeautifulSoup(index_html.read_text(encoding="utf8"), "html.parser") tags = html.find_all("a", "reference internal") assert result.exit_code == 0, result.output assert tags[1].attrs["href"] == "content1.html" assert tags[2].attrs["href"] == "content2.html" # Clean build manually (to avoid caching of sidebar) build_path = tocs.joinpath("_build") shutil.rmtree(build_path) # Build with secondary ToC toc = tocs / "_toc_simple_changed.yml" result = cli.invoke( commands.build, [tocs.as_posix(), "--toc", toc.as_posix(), "-n"], ) assert result.exit_code == 0, result.output html = BeautifulSoup(index_html.read_text(encoding="utf8"), "html.parser") tags = html.find_all("a", "reference internal") # The rendered TOC should reflect the order in the modified _toc.yml assert tags[1].attrs["href"] == "content2.html" assert tags[2].attrs["href"] == "content1.html"
Changes to the TOC should force a re-build of pages. Also tests for changes to the relative ordering of content pages.
test_toc_rebuild
python
jupyter-book/jupyter-book
tests/test_build.py
https://github.com/jupyter-book/jupyter-book/blob/master/tests/test_build.py
BSD-3-Clause
def test_build_page(pages, cli): """Test building a page.""" page = pages.joinpath("single_page.ipynb") html = pages.joinpath("_build", "_page", "single_page", "html") index = html.joinpath("index.html") result = cli.invoke(commands.build, [page.as_posix(), "-n", "-W", "--keep-going"]) assert result.exit_code == 0, result.output assert html.joinpath("single_page.html").exists() assert not html.joinpath("extra_page.html").exists() assert 'url=single_page.html" />' in index.read_text(encoding="utf8")
Test building a page.
test_build_page
python
jupyter-book/jupyter-book
tests/test_build.py
https://github.com/jupyter-book/jupyter-book/blob/master/tests/test_build.py
BSD-3-Clause
def test_build_page_nested(build_resources, cli): """Test building a page.""" books, _ = build_resources src = books.joinpath("nested") page = src.joinpath("contents", "markdown.md") html = src.joinpath("_build", "_page", "contents-markdown", "html") index = html.joinpath("index.html") result = cli.invoke(commands.build, [page.as_posix(), "-n", "-W", "--keep-going"]) assert result.exit_code == 0, result.output assert html.joinpath("markdown.html").exists() assert not html.joinpath("extra_page.html").exists() assert 'url=markdown.html" />' in index.read_text(encoding="utf8")
Test building a page.
test_build_page_nested
python
jupyter-book/jupyter-book
tests/test_build.py
https://github.com/jupyter-book/jupyter-book/blob/master/tests/test_build.py
BSD-3-Clause
def test_execution_timeout(pages, build_resources, cli): """Testing timeout execution for a page.""" books, _ = build_resources path_page = pages.joinpath("loop_unrun.ipynb") path_c = books.joinpath("config", "_config_timeout.yml") path_html = pages.joinpath("_build", "_page", "loop_unrun", "html") result = cli.invoke( commands.build, [ path_page.as_posix(), "--config", path_c.as_posix(), "-n", "-W", "--keep-going", ], ) assert "Executing notebook failed:" in result.stdout assert path_html.joinpath("reports", "loop_unrun.err.log").exists()
Testing timeout execution for a page.
test_execution_timeout
python
jupyter-book/jupyter-book
tests/test_build.py
https://github.com/jupyter-book/jupyter-book/blob/master/tests/test_build.py
BSD-3-Clause
def test_build_using_custom_builder(cli, build_resources): """Test building the book template using a custom builder""" books, _ = build_resources config = books.joinpath("config_custombuilder") result = cli.invoke( commands.build, [ config.as_posix(), "--builder=custom", "--custom-builder=mycustombuilder", "-n", "-W", "--keep-going", ], ) assert result.exit_code == 0, result.output html = config.joinpath("_build", "mycustombuilder", "index.html").read_text( encoding="utf8" ) assert '<p class="title logo__title">TEST PROJECT NAME</p>' in html assert '<link rel="stylesheet" type="text/css" href="_static/mycss.css" />' in html assert '<script src="_static/js/myjs.js"></script>' in html
Test building the book template using a custom builder
test_build_using_custom_builder
python
jupyter-book/jupyter-book
tests/test_build.py
https://github.com/jupyter-book/jupyter-book/blob/master/tests/test_build.py
BSD-3-Clause
def test_toc_numbered( toc_file: str, cli: CliRunner, temp_with_override, file_regression ): """Testing that numbers make it into the sidebar""" path_output = temp_with_override.joinpath("book1").absolute() p_toc = PATH_BOOKS.joinpath("toc") path_toc = p_toc.joinpath(toc_file) result = cli.invoke( commands.build, [ p_toc.as_posix(), "--path-output", path_output.as_posix(), "--toc", path_toc.as_posix(), "-W", ], ) assert result.exit_code == 0, result.output path_toc_directive = path_output.joinpath("_build", "html", "index.html") # get the tableofcontents markup soup = BeautifulSoup(path_toc_directive.read_text(encoding="utf8"), "html.parser") toc = soup.select("nav.bd-links")[0] file_regression.check( toc.prettify(), basename=toc_file.split(".")[0], extension=f"{SPHINX_VERSION}.html", )
Testing that numbers make it into the sidebar
test_toc_numbered
python
jupyter-book/jupyter-book
tests/test_build.py
https://github.com/jupyter-book/jupyter-book/blob/master/tests/test_build.py
BSD-3-Clause
def test_toc_numbered_multitoc_numbering_false( toc_file, cli, build_resources, file_regression ): """Testing use_multitoc_numbering: false""" books, tocs = build_resources config = books.joinpath("config").joinpath("_config_sphinx_multitoc_numbering.yml") toc = tocs.joinpath(toc_file) # TODO: commented out because of the issue described below. Uncomment when it is resolved. # Issue #1339: There is an issue when using CliRunner and building projects # that make use of --config. The internal state of Sphinx appears to # be correct, but the written outputs (i.e. html) are not correct # suggesting some type of caching is going on. # result = cli.invoke( # commands.build, # [ # tocs.as_posix(), # "--path-output", # books.as_posix(), # "--toc", # toc.as_posix(), # "--config", # config.as_posix(), # "-W", # ], # ) # assert result.exit_code == 0, result.output process = subprocess.Popen( [ "jb", "build", tocs.as_posix(), "--path-output", books.as_posix(), "--toc", toc.as_posix(), "--config", config.as_posix(), "-W", ], stdout=subprocess.PIPE, stderr=subprocess.PIPE, ) stdout, stderr = process.communicate() assert process.returncode == 0, stderr path_toc_directive = books.joinpath("_build", "html", "index.html") # get the tableofcontents markup soup = BeautifulSoup(path_toc_directive.read_text(encoding="utf8"), "html.parser") toc = soup.select("nav.bd-links")[0] file_regression.check( toc.prettify(), basename=toc_file.split(".")[0] + "_multitoc_numbering_false", extension=f"{SPHINX_VERSION}.html", )
Testing use_multitoc_numbering: false
test_toc_numbered_multitoc_numbering_false
python
jupyter-book/jupyter-book
tests/test_sphinx_multitoc_numbering.py
https://github.com/jupyter-book/jupyter-book/blob/master/tests/test_sphinx_multitoc_numbering.py
BSD-3-Clause
def test_toc_fail(cli: CliRunner, build_resources): """Folder with no content should return none""" books, tocs = build_resources p_empty = tocs.parent result = cli.invoke(create_toc, [p_empty.as_posix()]) assert result.exit_code != 0 assert isinstance(result.exception, OSError) assert "path does not contain a root file" in str(result.exception)
Folder with no content should return none
test_toc_fail
python
jupyter-book/jupyter-book
tests/test_toc.py
https://github.com/jupyter-book/jupyter-book/blob/master/tests/test_toc.py
BSD-3-Clause
def init_myst_file(path, kernel, verbose=True):
    """Initialize a file with a Jupytext header that marks it as MyST markdown.

    Parameters
    ----------
    path : string
        A path to a markdown file to be initialized for Jupytext
    kernel : string
        A kernel name to add to the markdown file. See a list of kernel names with
        `jupyter kernelspec list`.
    """
    try:
        from jupytext.cli import jupytext
    except ImportError:
        raise ImportError(
            "In order to use myst markdown features, " "please install jupytext first."
        )
    if not Path(path).exists():
        raise FileNotFoundError(f"Markdown file not found: {path}")

    kernels = list(find_kernel_specs().keys())
    kernels_text = "\n".join(kernels)
    if kernel is None:
        if len(kernels) > 1:
            _error(
                "There are multiple kernel options, so you must give one manually"
                " with `--kernel`.\nPlease specify one of the following kernels.\n\n"
                f"{kernels_text}"
            )
        else:
            kernel = kernels[0]

    if kernel not in kernels:
        raise ValueError(
            f"Did not find kernel: {kernel}\nPlease specify one of the "
            f"installed kernels:\n\n{kernels_text}"
        )

    args = (str(path), "-q", "--set-kernel", kernel, "--set-formats", "myst")
    jupytext(args)

    if verbose:
        print(f"Initialized file: {path}\nWith kernel: {kernel}")
Initialize a file with a Jupytext header that marks it as MyST markdown. Parameters ---------- path : string A path to a markdown file to be initialized for Jupytext kernel : string A kernel name to add to the markdown file. See a list of kernel names with `jupyter kernelspec list`.
init_myst_file
python
jupyter-book/jupyter-book
jupyter_book/utils.py
https://github.com/jupyter-book/jupyter-book/blob/master/jupyter_book/utils.py
BSD-3-Clause
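A usage sketch for `init_myst_file`, assuming jupytext is installed and a `python3` kernel is registered; `notes.md` is an illustrative file name.

from pathlib import Path

from jupyter_book.utils import init_myst_file

page = Path("notes.md")  # illustrative path
page.write_text("# My notes\n\nSome content.\n", encoding="utf8")

# Adds a Jupytext/MyST front-matter block and records the kernel name in it.
init_myst_file(page, kernel="python3", verbose=True)
print(page.read_text(encoding="utf8").splitlines()[:6])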
def get_default_sphinx_config(): """Some configuration values that are really sphinx-specific.""" return dict( extensions=[ "sphinx_togglebutton", "sphinx_copybutton", "myst_nb", "jupyter_book", "sphinx_thebe", "sphinx_comments", "sphinx_external_toc", "sphinx.ext.intersphinx", "sphinx_design", "sphinx_book_theme", ], pygments_style="sphinx", html_theme="sphinx_book_theme", html_theme_options={"search_bar_text": "Search this book..."}, html_sourcelink_suffix="", numfig=True, recursive_update=False, suppress_warnings=["myst.domains"], )
Some configuration values that are really sphinx-specific.
get_default_sphinx_config
python
jupyter-book/jupyter-book
jupyter_book/config.py
https://github.com/jupyter-book/jupyter-book/blob/master/jupyter_book/config.py
BSD-3-Clause
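A small sketch of how these defaults are typically used: start from the returned dict and layer overrides on top, much as the YAML translation below does. The override keys shown are illustrative.

from jupyter_book.config import get_default_sphinx_config

config = get_default_sphinx_config()
config["html_title"] = "My Book"                     # analogous to `title:` in _config.yml
config["extensions"].append("sphinx.ext.autodoc")    # analogous to sphinx.extra_extensions
print(config["html_theme"], len(config["extensions"]))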
def validate_yaml(yaml: dict, raise_on_errors=False, print_func=print): """Validate the YAML configuration against a JSON schema.""" errors = sorted(get_validator().iter_errors(yaml), key=lambda e: e.path) error_msg = "\n".join( [ "- {} [key path: '{}']".format( error.message, "/".join([str(p) for p in error.path]) ) for error in errors ] ) if not errors: return if raise_on_errors: raise jsonschema.ValidationError(error_msg) return _message_box( f"Warning: Validation errors in config:\n{error_msg}", color="orange", print_func=print_func, )
Validate the YAML configuration against a JSON schema.
validate_yaml
python
jupyter-book/jupyter-book
jupyter_book/config.py
https://github.com/jupyter-book/jupyter-book/blob/master/jupyter_book/config.py
BSD-3-Clause
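A sketch of both calling modes: by default validation errors are printed in a warning box, while `raise_on_errors=True` raises a `jsonschema.ValidationError`. The config keys here are illustrative and assumed to be schema-valid.

from jupyter_book.config import validate_yaml

user_config = {
    "title": "My Book",
    "author": "Me",
    "execute": {"execute_notebooks": "off"},
}

validate_yaml(user_config)                        # warn-only (default)
validate_yaml(user_config, raise_on_errors=True)  # raise instead of warning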
def get_final_config( *, user_yaml: Optional[Union[dict, Path]] = None, cli_config: Optional[dict] = None, sourcedir: Optional[Path] = None, validate: bool = True, raise_on_invalid: bool = False, use_external_toc: bool = True, ): """Create the final configuration dictionary, to parser to sphinx :param user_config_path: A path to a YAML file written by the user :param cli_config: Configuration coming directly from the CLI :param sourcedir: path to source directory. If it contains a `_static` folder, we ad that to the final `html_static_path` :param validate: Validate user yaml against the data schema :param raise_on_invalid: Raise a ValidationError, or only log a warning Order of precedence is: 1. CLI Sphinx Configuration 2. User JB(YAML) Configuration 3. Default JB (YAML) Configuration 4. Default Sphinx Configuration """ # get the default sphinx configuration sphinx_config = get_default_sphinx_config() # get the default yaml configuration yaml_config, default_yaml_update, add_paths = yaml_to_sphinx( yaml.safe_load(PATH_YAML_DEFAULT.read_text(encoding="utf8")) ) yaml_config.update(default_yaml_update) # if available, get the user defined configuration user_yaml_recurse, user_yaml_update = {}, {} user_yaml_path = None if user_yaml: if isinstance(user_yaml, Path): user_yaml_path = user_yaml user_yaml = yaml.safe_load(user_yaml.read_text(encoding="utf8")) else: user_yaml = user_yaml if validate: validate_yaml(user_yaml, raise_on_errors=raise_on_invalid) user_yaml_recurse, user_yaml_update, add_paths = yaml_to_sphinx(user_yaml) # add paths from yaml config if user_yaml_path: for path in add_paths: path = (user_yaml_path.parent / path).resolve() sys.path.append(path.as_posix()) # first merge the user yaml into the default yaml _recursive_update(yaml_config, user_yaml_recurse) # then merge this into the default sphinx config _recursive_update(sphinx_config, yaml_config) # TODO: deprecate this in version 0.14 # https://github.com/executablebooks/jupyter-book/issues/1502 if "mathjax_config" in user_yaml_update: # Switch off warning if user has specified mathjax v2 if ( "mathjax_path" in user_yaml_update and "@2" in user_yaml_update["mathjax_path"] ): # use mathjax2_config so not to trigger deprecation warning in future user_yaml_update["mathjax2_config"] = user_yaml_update.pop("mathjax_config") else: _message_box( ( f"[Warning] Mathjax configuration has changed for sphinx>=4.0 [Using sphinx: {sphinx.__version__}]\n" # noqa: E501 "Your _config.yml needs to be updated:\n" # noqa: E501 "mathjax_config -> mathjax3_config\n" # noqa: E501 "To continue using `mathjax v2` you will need to use the `mathjax_path` configuration\n" # noqa: E501 "\n" "See Sphinx Documentation:\n" "https://www.sphinx-doc.org/en/master/usage/extensions/math.html#module-sphinx.ext.mathjax" # noqa: E501 ), color="orange", print_func=print, ) # Automatically make the configuration name substitution so older projects build user_yaml_update["mathjax3_config"] = user_yaml_update.pop("mathjax_config") # Recursively update sphinx config if option is specified, # otherwise forcefully override options non-recursively if sphinx_config.pop("recursive_update") is True: _recursive_update(sphinx_config, user_yaml_update) else: sphinx_config.update(user_yaml_update) # This is to deal with a special case, where the override needs to be applied after # the sphinx app is initialised (since the default is a function) # TODO I'm not sure if there is a better way to deal with this? 
config_meta = { "latex_doc_overrides": sphinx_config.pop("latex_doc_overrides"), "latex_individualpages": cli_config.pop("latex_individualpages"), } if sphinx_config.get("use_jupyterbook_latex"): sphinx_config["extensions"].append("sphinx_jupyterbook_latex") # Add sphinx_multitoc_numbering extension if necessary if sphinx_config.get("use_multitoc_numbering"): sphinx_config["extensions"].append("sphinx_multitoc_numbering") # finally merge in CLI configuration _recursive_update(sphinx_config, cli_config or {}) # Initialize static files if sourcedir and Path(sourcedir).joinpath("_static").is_dir(): # Add the `_static` folder to html_static_path, only if it exists paths_static = sphinx_config.get("html_static_path", []) paths_static.append("_static") sphinx_config["html_static_path"] = paths_static # Search the static files paths and initialize any CSS or JS files. for path in paths_static: path = Path(sourcedir).joinpath(path) for path_css in path.rglob("*.css"): css_files = sphinx_config.get("html_css_files", []) css_files.append((path_css.relative_to(path)).as_posix()) sphinx_config["html_css_files"] = css_files for path_js in path.rglob("*.js"): js_files = sphinx_config.get("html_js_files", []) js_files.append((path_js.relative_to(path)).as_posix()) sphinx_config["html_js_files"] = js_files if not use_external_toc: # TODO perhaps a better logic for this? # remove all configuration related to sphinx_external_toc try: idx = sphinx_config["extensions"].index("sphinx_external_toc") except ValueError: pass else: sphinx_config["extensions"].pop(idx) sphinx_config.pop("external_toc_path", None) sphinx_config.pop("external_toc_exclude_missing", None) return sphinx_config, config_meta
Create the final configuration dictionary to pass to Sphinx.

:param user_yaml: A path to a YAML file written by the user, or the loaded dict
:param cli_config: Configuration coming directly from the CLI
:param sourcedir: path to source directory.
    If it contains a `_static` folder, we add that to the final `html_static_path`
:param validate: Validate user yaml against the data schema
:param raise_on_invalid: Raise a ValidationError, or only log a warning

Order of precedence is:

1. CLI Sphinx Configuration
2. User JB (YAML) Configuration
3. Default JB (YAML) Configuration
4. Default Sphinx Configuration
get_final_config
python
jupyter-book/jupyter-book
jupyter_book/config.py
https://github.com/jupyter-book/jupyter-book/blob/master/jupyter_book/config.py
BSD-3-Clause
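A sketch of the precedence order the docstring describes, with illustrative keys; note that the code above pops `latex_individualpages` from the CLI config, so that key must be supplied.

from jupyter_book.config import get_final_config

user_yaml = {"title": "From user _config.yml"}
cli_config = {"html_title": "From the CLI", "latex_individualpages": False}

sphinx_config, config_meta = get_final_config(user_yaml=user_yaml, cli_config=cli_config)
assert sphinx_config["html_title"] == "From the CLI"   # CLI wins over user YAML
print(config_meta["latex_doc_overrides"])              # carries the user title into LaTeX builds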
def yaml_to_sphinx(yaml: dict): """Convert a Jupyter Book style config structure into a Sphinx config dict. :returns: (recursive_updates, override_updates, add_paths) add_paths collects paths that are specified in the _config.yml (such as those provided in local_extensions) and returns them for adding to sys.path in a context where the _config.yml path is known """ sphinx_config = {} # top-level, string type YAML_TRANSLATIONS = { "title": "html_title", "author": "author", "copyright": "copyright", "logo": "html_logo", "project": "project", } for key, newkey in YAML_TRANSLATIONS.items(): if key in yaml: val = yaml.get(key) if val is None: val = "" sphinx_config[newkey] = val # exclude patterns if "exclude_patterns" in yaml: # we always include these excludes, so as not to break back-compatibility defaults = {"_build", "Thumbs.db", ".DS_Store", "**.ipynb_checkpoints"} defaults.update(yaml["exclude_patterns"]) sphinx_config["exclude_patterns"] = list(sorted(defaults)) if "only_build_toc_files" in yaml: sphinx_config["external_toc_exclude_missing"] = yaml["only_build_toc_files"] # Theme sphinx_config["html_theme_options"] = theme_options = {} if "launch_buttons" in yaml: theme_options["launch_buttons"] = yaml["launch_buttons"] repository_config = yaml.get("repository", {}) for spx_key, yml_key in [ ("path_to_docs", "path_to_book"), ("repository_url", "url"), ("repository_branch", "branch"), ]: if yml_key in repository_config: theme_options[spx_key] = repository_config[yml_key] # HTML html = yaml.get("html") if html: for spx_key, yml_key in [ ("html_favicon", "favicon"), ("html_baseurl", "baseurl"), ("comments_config", "comments"), ("use_multitoc_numbering", "use_multitoc_numbering"), ]: if yml_key in html: sphinx_config[spx_key] = html[yml_key] for spx_key, yml_key in [ ("navbar_footer_text", "navbar_footer_text"), # Deprecate navbar_footer_text after a release cycle ("extra_footer", "extra_footer"), ("home_page_in_toc", "home_page_in_navbar"), ("announcement", "announcement"), ]: if yml_key in html: theme_options[spx_key] = html[yml_key] # Fix for renamed field spx_analytics = theme_options["analytics"] = {} google_analytics_id = html.get("google_analytics_id") if google_analytics_id is not None: _message_box( ( "[Warning] The `html.google_analytics_id` configuration value has moved to `html.analytics.google_analytics_id`" # noqa: E501 ), color="orange", print_func=print, ) spx_analytics["google_analytics_id"] = google_analytics_id # Analytics yml_analytics = html.get("analytics", {}) for spx_key, yml_key in [ ("google_analytics_id", "google_analytics_id"), ("plausible_analytics_domain", "plausible_analytics_domain"), ("plausible_analytics_url", "plausible_analytics_url"), ]: if yml_key in yml_analytics: spx_analytics[spx_key] = yml_analytics[yml_key] # Pass through the buttons btns = ["use_repository_button", "use_edit_page_button", "use_issues_button"] use_buttons = {btn: html.get(btn) for btn in btns if btn in html} if any(use_buttons.values()): if not repository_config.get("url"): raise ValueError( "To use 'repository' buttons, you must specify the repository URL" ) # Update our config theme_options.update(use_buttons) # Parse and Rendering parse = yaml.get("parse") if parse: # Enable extra extensions extensions = sphinx_config.get("myst_enable_extensions", []) # TODO: deprecate this in v0.11.0 if parse.get("myst_extended_syntax") is True: extensions.append( [ "colon_fence", "dollarmath", "amsmath", "deflist", "html_image", ] ) _message_box( ( "myst_extended_syntax is deprecated, instead 
specify extensions " "you wish to be enabled. See https://myst-parser.readthedocs.io/en/latest/using/syntax-optional.html" # noqa: E501 ), color="orange", print_func=print, ) for ext in parse.get("myst_enable_extensions", []): if ext not in extensions: extensions.append(ext) if extensions: sphinx_config["myst_enable_extensions"] = extensions # Configuration values we'll just pass-through for ikey in ["myst_substitutions", "myst_url_schemes"]: if ikey in parse: sphinx_config[ikey] = parse.get(ikey) # Execution execute = yaml.get("execute") if execute: for spx_key, yml_key in [ ("nb_execution_allow_errors", "allow_errors"), ("nb_execution_raise_on_error", "raise_on_error"), ("nb_eval_name_regex", "eval_regex"), ("nb_execution_show_tb", "show_tb"), ("nb_execution_in_temp", "run_in_temp"), ("nb_output_stderr", "stderr_output"), ("nb_execution_timeout", "timeout"), ("nb_execution_cache_path", "cache"), ("nb_execution_mode", "execute_notebooks"), ("nb_execution_excludepatterns", "exclude_patterns"), ]: if yml_key in execute: sphinx_config[spx_key] = execute[yml_key] if sphinx_config.get("nb_execution_mode") is False: # Special case because YAML treats `off` as "False". sphinx_config["nb_execution_mode"] = "off" # LaTeX latex = yaml.get("latex") if latex: for spx_key, yml_key in [ ("latex_engine", "latex_engine"), ("use_jupyterbook_latex", "use_jupyterbook_latex"), ]: if yml_key in latex: sphinx_config[spx_key] = latex[yml_key] sphinx_config["latex_doc_overrides"] = {} if "title" in yaml: sphinx_config["latex_doc_overrides"]["title"] = yaml["title"] for key, val in yaml.get("latex", {}).get("latex_documents", {}).items(): sphinx_config["latex_doc_overrides"][key] = val # Sphinx Configuration extra_extensions = yaml.get("sphinx", {}).get("extra_extensions") if extra_extensions: sphinx_config["extensions"] = get_default_sphinx_config()["extensions"] if not isinstance(extra_extensions, list): extra_extensions = [extra_extensions] for extension in extra_extensions: if extension not in sphinx_config["extensions"]: sphinx_config["extensions"].append(extension) local_extensions = yaml.get("sphinx", {}).get("local_extensions") # add_paths collects additional paths for sys.path add_paths = [] if local_extensions: if "extensions" not in sphinx_config: sphinx_config["extensions"] = get_default_sphinx_config()["extensions"] for extension, path in local_extensions.items(): if extension not in sphinx_config["extensions"]: sphinx_config["extensions"].append(extension) if path not in sys.path: add_paths.append(path) # Overwrite sphinx config or not if "recursive_update" in yaml.get("sphinx", {}): sphinx_config["recursive_update"] = yaml.get("sphinx", {}).get( "recursive_update" ) # Citations sphinxcontrib_bibtex_configs = ["bibtex_bibfiles", "bibtex_reference_style"] if any(bibtex_config in yaml for bibtex_config in sphinxcontrib_bibtex_configs): # Load sphincontrib-bibtex if "extensions" not in sphinx_config: sphinx_config["extensions"] = get_default_sphinx_config()["extensions"] sphinx_config["extensions"].append("sphinxcontrib.bibtex") # Report Bug in Specific Docutils Versions # TODO: Remove when docutils>=0.20 is pinned in jupyter-book # https://github.com/mcmtroffaes/sphinxcontrib-bibtex/issues/322 if (0, 18) <= docutils.__version_info__ < (0, 20): logger.warning( "[sphinxcontrib-bibtex] Beware that docutils versions 0.18 and 0.19 " "(you are running {}) are known to generate invalid html for citations. " "If this issue affects you, please use docutils<0.18 or >=0.20 instead. 
" "For more details, see https://sourceforge.net/p/docutils/patches/195/".format( docutils.__version__ ) ) # Pass through configuration if yaml.get("bibtex_bibfiles"): if isinstance(yaml.get("bibtex_bibfiles"), str): yaml["bibtex_bibfiles"] = [yaml["bibtex_bibfiles"]] sphinx_config["bibtex_bibfiles"] = yaml["bibtex_bibfiles"] # items in sphinx.config will override defaults, # rather than recursively updating them return sphinx_config, yaml.get("sphinx", {}).get("config") or {}, add_paths
Convert a Jupyter Book style config structure into a Sphinx config dict. :returns: (recursive_updates, override_updates, add_paths) add_paths collects paths that are specified in the _config.yml (such as those provided in local_extensions) and returns them for adding to sys.path in a context where the _config.yml path is known
yaml_to_sphinx
python
jupyter-book/jupyter-book
jupyter_book/config.py
https://github.com/jupyter-book/jupyter-book/blob/master/jupyter_book/config.py
BSD-3-Clause
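A sketch of the translation with a minimal _config.yml-style dict; the keys follow the mappings visible in the function above.

from jupyter_book.config import yaml_to_sphinx

config_yaml = {
    "title": "My Book",
    "html": {"favicon": "logo.png", "use_repository_button": True},
    "repository": {"url": "https://github.com/me/my-book", "branch": "main"},
    "sphinx": {"config": {"html_show_copyright": False}},
}

recursive, overrides, add_paths = yaml_to_sphinx(config_yaml)
print(recursive["html_title"])                            # "My Book"
print(recursive["html_favicon"])                          # "logo.png"
print(recursive["html_theme_options"]["repository_url"])  # the repository URL
print(overrides, add_paths)                               # {'html_show_copyright': False} []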
def _recursive_update(config, update, list_extend=False): """Update the dict `config` with `update` recursively. This *updates* nested dicts / lists instead of replacing them. """ for key, val in update.items(): if isinstance(config.get(key), dict): # if a dict value update is set to None, # then the entire dict will be "wiped", # otherwise it is recursively updated. if isinstance(val, dict): _recursive_update(config[key], val, list_extend) else: config[key] = val elif isinstance(config.get(key), list): if isinstance(val, list) and list_extend: config[key].extend(val) else: config[key] = val else: config[key] = val
Update the dict `config` with `update` recursively. This *updates* nested dicts / lists instead of replacing them.
_recursive_update
python
jupyter-book/jupyter-book
jupyter_book/config.py
https://github.com/jupyter-book/jupyter-book/blob/master/jupyter_book/config.py
BSD-3-Clause
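A small demonstration of the merge behaviour, assuming `_recursive_update` from the snippet above is in scope: nested dicts are merged key by key rather than replaced, and lists are extended only when `list_extend=True`.

config = {
    "html_theme_options": {"repository_url": "https://github.com/me/book", "use_issues_button": True}
}
_recursive_update(config, {"html_theme_options": {"use_issues_button": False}})
print(config["html_theme_options"])
# {'repository_url': 'https://github.com/me/book', 'use_issues_button': False}

lists = {"extensions": ["myst_nb"]}
_recursive_update(lists, {"extensions": ["sphinx_design"]}, list_extend=True)
print(lists["extensions"])  # ['myst_nb', 'sphinx_design']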
def build_sphinx( sourcedir, outputdir, *, use_external_toc=True, confdir=None, path_config=None, noconfig=False, confoverrides=None, doctreedir=None, filenames=None, force_all=False, quiet=False, really_quiet=False, builder="html", freshenv=False, warningiserror=False, tags=None, verbosity=0, jobs=None, keep_going=False, ) -> Union[int, Exception]: """Sphinx build "main" command-line entry. This is a slightly modified version of https://github.com/sphinx-doc/sphinx/blob/3.x/sphinx/cmd/build.py#L198. """ ####################### # Configuration creation sphinx_config, config_meta = get_final_config( user_yaml=Path(path_config) if path_config else None, cli_config=confoverrides or {}, sourcedir=Path(sourcedir), use_external_toc=use_external_toc, ) ################################## # Preparing Sphinx build arguments # Configuration directory if noconfig: confdir = None elif not confdir: confdir = sourcedir # Doctrees directory if not doctreedir: doctreedir = Path(outputdir).parent.joinpath(".doctrees") if jobs is None: jobs = 1 # Manually re-building files in filenames if filenames is None: filenames = [] missing_files = [] for filename in filenames: if not op.isfile(filename): missing_files.append(filename) if missing_files: raise IOError("cannot find files %r" % missing_files) if force_all and filenames: raise ValueError("cannot combine -a option and filenames") # Debug args (hack to get this to pass through properly) def debug_args(): pass debug_args.pdb = False debug_args.verbosity = False debug_args.traceback = False # Logging behavior status = sys.stdout warning = sys.stderr error = sys.stderr if quiet: status = None if really_quiet: status = warning = None ################### # Build with Sphinx app = None # In case we fail, this allows us to handle the exception try: # These patches temporarily override docutils global variables, # such as the dictionaries of directives, roles and nodes # NOTE: this action is not thread-safe and not suitable for asynchronous use! 
with patch_docutils(confdir), docutils_namespace(): app = Sphinx( srcdir=sourcedir, confdir=confdir, outdir=outputdir, doctreedir=doctreedir, buildername=builder, confoverrides=sphinx_config, status=status, warning=warning, freshenv=freshenv, warningiserror=warningiserror, tags=tags, verbosity=verbosity, parallel=jobs, keep_going=keep_going, ) # We have to apply this update after the sphinx initialisation, # since default_latex_documents is dynamically generated # see sphinx/builders/latex/__init__.py:default_latex_documents new_latex_documents = update_latex_documents( app.config.latex_documents, config_meta["latex_doc_overrides"] ) app.config.latex_documents = new_latex_documents # set the below flag to always to enable maths in singlehtml builder if app.builder.name == "singlehtml": app.set_html_assets_policy("always") # setting up sphinx-multitoc-numbering if app.config["use_multitoc_numbering"]: # if sphinx-external-toc is used if "external_toc_path" in app.config: import yaml site_map = app.config.external_site_map site_map_str = yaml.dump(site_map.as_json()) # only if there is at least one numbered: true in the toc file if "numbered: true" in site_map_str: app.setup_extension("sphinx_multitoc_numbering") else: app.setup_extension("sphinx_multitoc_numbering") # Build latex_doc tuples based on --individualpages option request if config_meta["latex_individualpages"]: from .pdf import autobuild_singlepage_latexdocs # Ask Builder to read the source files to fetch titles and documents app.builder.read() latex_documents = autobuild_singlepage_latexdocs(app) app.config.latex_documents = latex_documents app.build(force_all, filenames) return app.statuscode except (Exception, KeyboardInterrupt) as exc: handle_exception(app, debug_args, exc, error) return exc
Sphinx build "main" command-line entry. This is a slightly modified version of https://github.com/sphinx-doc/sphinx/blob/3.x/sphinx/cmd/build.py#L198.
build_sphinx
python
jupyter-book/jupyter-book
jupyter_book/sphinx.py
https://github.com/jupyter-book/jupyter-book/blob/master/jupyter_book/sphinx.py
BSD-3-Clause
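A minimal sketch of calling `build_sphinx` directly, mirroring how the `jb build` command invokes it (with `noconfig=True` and the configuration generated from `_config.yml`). Paths are illustrative; the source folder is assumed to contain `_config.yml` and `_toc.yml`.

from pathlib import Path

from jupyter_book.sphinx import build_sphinx

src = Path("mybook")                  # illustrative book folder
out = src / "_build" / "html"

status = build_sphinx(
    src,
    out,
    path_config=src / "_config.yml",
    noconfig=True,                                   # no conf.py; config is generated
    confoverrides={"latex_individualpages": False},  # popped by get_final_config above
    builder="html",
)
print("exit status:", status)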
def html_to_pdf(html_file, pdf_file): """ Convert arbitrary HTML file to PDF using playwright. Parameters ---------- html_file : str A path to an HTML file to convert to PDF pdf_file : str A path to an output PDF file that will be created """ asyncio.run(_html_to_pdf(html_file, pdf_file))
Convert arbitrary HTML file to PDF using playwright. Parameters ---------- html_file : str A path to an HTML file to convert to PDF pdf_file : str A path to an output PDF file that will be created
html_to_pdf
python
jupyter-book/jupyter-book
jupyter_book/pdf.py
https://github.com/jupyter-book/jupyter-book/blob/master/jupyter_book/pdf.py
BSD-3-Clause
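A usage sketch; it assumes playwright (and its browser binaries) are installed and that the HTML has already been built at the illustrative path below.

from pathlib import Path

from jupyter_book.pdf import html_to_pdf

pdf_dir = Path("mybook/_build/pdf")
pdf_dir.mkdir(parents=True, exist_ok=True)
html_to_pdf("mybook/_build/html/index.html", pdf_dir / "book.pdf")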
def update_latex_documents(latex_documents, latexoverrides): """ Apply latexoverrides from _config.yml to latex_documents tuple """ if len(latex_documents) > 1: _message_box( "Latex documents specified as a multi element list in the _config", "This suggests the user has made custom settings to their build", "[Skipping] processing of automatic latex overrides", ) return latex_documents # Extract latex document tuple latex_document = latex_documents[0] # Apply single overrides from _config.yml updated_latexdocs = [] for loc, item in enumerate(LATEX_DOCUMENTS): # the last element toctree_only seems optionally included if loc >= len(latex_document): break if item in latexoverrides.keys(): updated_latexdocs.append(latexoverrides[item]) else: updated_latexdocs.append(latex_document[loc]) return [tuple(updated_latexdocs)]
Apply latexoverrides from _config.yml to latex_documents tuple
update_latex_documents
python
jupyter-book/jupyter-book
jupyter_book/pdf.py
https://github.com/jupyter-book/jupyter-book/blob/master/jupyter_book/pdf.py
BSD-3-Clause
def latex_document_components(latex_documents):
    """Return a dictionary of latex_document components by name"""
    latex_tuple_components = {}
    for idx, item in enumerate(LATEX_DOCUMENTS):
        # skip if latex_documents doesn't contain all elements
        # of the LATEX_DOCUMENT specification tuple
        if idx >= len(latex_documents):
            continue
        latex_tuple_components[item] = latex_documents[idx]
    return latex_tuple_components
Return a dictionary of latex_document components by name
latex_document_components
python
jupyter-book/jupyter-book
jupyter_book/pdf.py
https://github.com/jupyter-book/jupyter-book/blob/master/jupyter_book/pdf.py
BSD-3-Clause
def latex_document_tuple(components): """Return a tuple for latex_documents from named components dictionary""" latex_doc = [] for item in LATEX_DOCUMENTS: if item not in components.keys(): continue else: latex_doc.append(components[item]) return tuple(latex_doc)
Return a tuple for latex_documents from named components dictionary
latex_document_tuple
python
jupyter-book/jupyter-book
jupyter_book/pdf.py
https://github.com/jupyter-book/jupyter-book/blob/master/jupyter_book/pdf.py
BSD-3-Clause
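A sketch of the round trip these two helpers provide, assuming both functions and the `LATEX_DOCUMENTS` field ordering from `jupyter_book/pdf.py` are in scope; the tuple values are illustrative.

latex_documents = [("intro", "book.tex", "My Book", "Me", "jupyterBook", False)]

components = latex_document_components(latex_documents[0])
components["title"] = "My Book (single page)"      # override one named field
rebuilt = latex_document_tuple(components)
print(rebuilt)
# ('intro', 'book.tex', 'My Book (single page)', 'Me', 'jupyterBook', False)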
def autobuild_singlepage_latexdocs(app): """ Build list of tuples for each document in the Project [((startdocname, targetname, title, author, theme, toctree_only))] https://www.sphinx-doc.org/en/3.x/usage/configuration.html#confval-latex_documents """ latex_documents = app.config.latex_documents if len(latex_documents) > 1: _message_box( "Latex documents specified as a multi element list in the _config", "This suggests the user has made custom settings to their build", "[Skipping] --individualpages option", ) return latex_documents # Extract latex_documents updated tuple latex_documents = latex_documents[0] titles = app.env.titles master_doc = app.config.master_doc sourcedir = os.path.dirname(master_doc) # Construct Tuples DEFAULT_VALUES = latex_document_components(latex_documents) latex_documents = [] for doc, title in titles.items(): latex_doc = copy(DEFAULT_VALUES) # if doc has a subdir relative to src dir docname = None parts = Path(doc).parts latex_doc["startdocname"] = doc if DEFAULT_VALUES["startdocname"] == doc: targetdoc = DEFAULT_VALUES["targetname"] else: if sourcedir in parts: parts = list(parts) # assuming we need to remove only the first instance parts.remove(sourcedir) docname = "-".join(parts) targetdoc = docname + ".tex" latex_doc["targetname"] = targetdoc latex_doc["title"] = title.astext() latex_doc = latex_document_tuple(latex_doc) latex_documents.append(latex_doc) return latex_documents
Build list of tuples for each document in the Project [((startdocname, targetname, title, author, theme, toctree_only))] https://www.sphinx-doc.org/en/3.x/usage/configuration.html#confval-latex_documents
autobuild_singlepage_latexdocs
python
jupyter-book/jupyter-book
jupyter_book/pdf.py
https://github.com/jupyter-book/jupyter-book/blob/master/jupyter_book/pdf.py
BSD-3-Clause
def version_callback(ctx, param, value): """Callback for supplying version information""" if not value or ctx.resilient_parsing: return from jupyter_cache import __version__ as jcv from myst_nb import __version__ as mnbv from myst_parser import __version__ as mpv from nbclient import __version__ as ncv from sphinx_book_theme import __version__ as sbtv from sphinx_external_toc import __version__ as etoc from jupyter_book import __version__ as jbv versions = { "Jupyter Book": jbv, "External ToC": etoc, "MyST-Parser": mpv, "MyST-NB": mnbv, "Sphinx Book Theme": sbtv, "Jupyter-Cache": jcv, "NbClient": ncv, } versions_string = "\n".join(f"{tt:<18}: {vv}" for tt, vv in versions.items()) click.echo(versions_string) ctx.exit()
Callback for supplying version information
version_callback
python
jupyter-book/jupyter-book
jupyter_book/cli/main.py
https://github.com/jupyter-book/jupyter-book/blob/master/jupyter_book/cli/main.py
BSD-3-Clause
def main(): """Build and manage books with Jupyter.""" pass
Build and manage books with Jupyter.
main
python
jupyter-book/jupyter-book
jupyter_book/cli/main.py
https://github.com/jupyter-book/jupyter-book/blob/master/jupyter_book/cli/main.py
BSD-3-Clause
def build( path_source, path_output, config, toc, warningiserror, nitpick, keep_going, freshenv, builder, custom_builder, verbose, quiet, individualpages, get_config_only=False, ): """Convert your book's or page's content to HTML or a PDF.""" from sphinx_external_toc.parsing import MalformedError, parse_toc_yaml from jupyter_book import __version__ as jbv from jupyter_book.sphinx import build_sphinx if not get_config_only: click.secho(f"Running Jupyter-Book v{jbv}", bold=True, fg="green") # Paths for the notebooks PATH_SRC_FOLDER = Path(path_source).absolute() config_overrides = {} use_external_toc = True found_config = find_config_path(PATH_SRC_FOLDER) BUILD_PATH = path_output if path_output is not None else found_config[0] # Set config for --individualpages option (pages, documents) if individualpages: if builder != "pdflatex": _error( """ Specified option --individualpages only works with the following builders: pdflatex """ ) # Build Page if not PATH_SRC_FOLDER.is_dir(): # it is a single file build_type = "page" use_external_toc = False subdir = None PATH_SRC = Path(path_source) PATH_SRC_FOLDER = PATH_SRC.parent.absolute() PAGE_NAME = PATH_SRC.with_suffix("").name # checking if the page is inside a sub directory # then changing the build_path accordingly if str(BUILD_PATH) in str(PATH_SRC_FOLDER): subdir = str(PATH_SRC_FOLDER.relative_to(BUILD_PATH)) if subdir and subdir != ".": subdir = subdir.replace("/", "-") subdir = subdir + "-" + PAGE_NAME BUILD_PATH = Path(BUILD_PATH).joinpath("_build", "_page", subdir) else: BUILD_PATH = Path(BUILD_PATH).joinpath("_build", "_page", PAGE_NAME) # Find all files that *aren't* the page we're building and exclude them to_exclude = [ op.relpath(ifile, PATH_SRC_FOLDER) for ifile in iglob(str(PATH_SRC_FOLDER.joinpath("**", "*")), recursive=True) if ifile != str(PATH_SRC.absolute()) ] to_exclude.extend(["_build", "Thumbs.db", ".DS_Store", "**.ipynb_checkpoints"]) # Now call the Sphinx commands to build config_overrides = { "master_doc": PAGE_NAME, "exclude_patterns": to_exclude, # --individualpages option set to True for page call "latex_individualpages": True, } # Build Project else: build_type = "book" PAGE_NAME = None BUILD_PATH = Path(BUILD_PATH).joinpath("_build") # Table of contents toc = PATH_SRC_FOLDER.joinpath("_toc.yml") if toc is None else Path(toc) if not get_config_only: if not toc.exists(): _error( "Couldn't find a Table of Contents file. " "To auto-generate one, run:" f"\n\n\tjupyter-book toc from-project {path_source}" ) # we don't need to read the toc here, but do so to control the error message try: parse_toc_yaml(toc) except MalformedError as exc: _error( f"The Table of Contents file is malformed: {exc}\n" "You may need to migrate from the old format, using:" f"\n\n\tjupyter-book toc migrate {toc} -o {toc}" ) # TODO could also check/warn if the format is not set to jb-article/jb-book? 
config_overrides["external_toc_path"] = ( toc.relative_to(PATH_SRC_FOLDER).as_posix() if get_config_only else toc.as_posix() ) # --individualpages option passthrough config_overrides["latex_individualpages"] = individualpages # Use the specified configuration file, or one found in the root directory path_config = config or ( found_config[0].joinpath("_config.yml") if found_config[1] else None ) if path_config and not Path(path_config).exists(): raise IOError(f"Config file path given, but not found: {path_config}") if builder in ["html", "pdfhtml", "linkcheck"]: OUTPUT_PATH = BUILD_PATH.joinpath("html") elif builder in ["latex", "pdflatex"]: OUTPUT_PATH = BUILD_PATH.joinpath("latex") elif builder in ["dirhtml"]: OUTPUT_PATH = BUILD_PATH.joinpath("dirhtml") elif builder in ["singlehtml"]: OUTPUT_PATH = BUILD_PATH.joinpath("singlehtml") elif builder in ["custom"]: OUTPUT_PATH = BUILD_PATH.joinpath(custom_builder) BUILDER_OPTS["custom"] = custom_builder if nitpick: config_overrides["nitpicky"] = True # If we only want config (e.g. for printing/validation), stop here if get_config_only: return (path_config, PATH_SRC_FOLDER, config_overrides) # print information about the build click.echo( click.style("Source Folder: ", bold=True, fg="blue") + click.format_filename(f"{PATH_SRC_FOLDER}") ) click.echo( click.style("Config Path: ", bold=True, fg="blue") + click.format_filename(f"{path_config}") ) click.echo( click.style("Output Path: ", bold=True, fg="blue") + click.format_filename(f"{OUTPUT_PATH}") ) # Now call the Sphinx commands to build result = build_sphinx( PATH_SRC_FOLDER, OUTPUT_PATH, use_external_toc=use_external_toc, noconfig=True, path_config=path_config, confoverrides=config_overrides, builder=BUILDER_OPTS[builder], warningiserror=warningiserror, keep_going=keep_going, freshenv=freshenv, verbosity=verbose, quiet=quiet > 0, really_quiet=quiet > 1, ) builder_specific_actions( result, builder, OUTPUT_PATH, build_type, PAGE_NAME, click.echo )
Convert your book's or page's content to HTML or a PDF.
build
python
jupyter-book/jupyter-book
jupyter_book/cli/main.py
https://github.com/jupyter-book/jupyter-book/blob/master/jupyter_book/cli/main.py
BSD-3-Clause
def create(path_book, cookiecutter, no_input): """Create a Jupyter Book template that you can customize.""" book = Path(path_book) if not cookiecutter: # this will be the more common option template_path = Path(__file__).parent.parent.joinpath("book_template") sh.copytree(template_path, book) else: cc_url = "gh:executablebooks/cookiecutter-jupyter-book" try: from cookiecutter.main import cookiecutter except ModuleNotFoundError as e: _error( f"{e}. To install, run\n\n\tpip install cookiecutter", kind=e.__class__, ) book = cookiecutter(cc_url, output_dir=Path(path_book), no_input=no_input) _message_box(f"Your book template can be found at\n\n {book}{os.sep}")
Create a Jupyter Book template that you can customize.
create
python
jupyter-book/jupyter-book
jupyter_book/cli/main.py
https://github.com/jupyter-book/jupyter-book/blob/master/jupyter_book/cli/main.py
BSD-3-Clause
def remove_option(path, option, rm_both=False): """Remove folder specified under option. If rm_both is True, remove folder and skip message_box.""" option_path = path.joinpath(option) if not option_path.is_dir(): return sh.rmtree(option_path) if not rm_both: _message_box(f"Your {option} directory has been removed")
Remove folder specified under option. If rm_both is True, remove folder and skip message_box.
clean.remove_option
python
jupyter-book/jupyter-book
jupyter_book/cli/main.py
https://github.com/jupyter-book/jupyter-book/blob/master/jupyter_book/cli/main.py
BSD-3-Clause
def remove_html_latex(path): """Remove both html and latex folders.""" print_msg = False for opt in ["html", "latex"]: if path.joinpath(opt).is_dir(): print_msg = True remove_option(path, opt, True) if print_msg: _message_box("Your html and latex directories have been removed")
Remove both html and latex folders.
clean.remove_html_latex
python
jupyter-book/jupyter-book
jupyter_book/cli/main.py
https://github.com/jupyter-book/jupyter-book/blob/master/jupyter_book/cli/main.py
BSD-3-Clause
def remove_all(path): """Remove _build directory entirely.""" sh.rmtree(path) _message_box("Your _build directory has been removed")
Remove _build directory entirely.
clean.remove_all
python
jupyter-book/jupyter-book
jupyter_book/cli/main.py
https://github.com/jupyter-book/jupyter-book/blob/master/jupyter_book/cli/main.py
BSD-3-Clause
def remove_default(path): """Remove all subfolders in _build except .jupyter_cache.""" to_remove = [ dd for dd in path.iterdir() if dd.is_dir() and dd.name != ".jupyter_cache" ] for dd in to_remove: sh.rmtree(path.joinpath(dd.name)) _message_box("Your _build directory has been emptied except for .jupyter_cache")
Remove all subfolders in _build except .jupyter_cache.
clean.remove_default
python
jupyter-book/jupyter-book
jupyter_book/cli/main.py
https://github.com/jupyter-book/jupyter-book/blob/master/jupyter_book/cli/main.py
BSD-3-Clause
def clean(path_book, all_, html, latex): """Empty the _build directory except jupyter_cache. If the all option has been flagged, it will remove the entire _build. If html/latex option is flagged, it will remove the html/latex subdirectories.""" def remove_option(path, option, rm_both=False): """Remove folder specified under option. If rm_both is True, remove folder and skip message_box.""" option_path = path.joinpath(option) if not option_path.is_dir(): return sh.rmtree(option_path) if not rm_both: _message_box(f"Your {option} directory has been removed") def remove_html_latex(path): """Remove both html and latex folders.""" print_msg = False for opt in ["html", "latex"]: if path.joinpath(opt).is_dir(): print_msg = True remove_option(path, opt, True) if print_msg: _message_box("Your html and latex directories have been removed") def remove_all(path): """Remove _build directory entirely.""" sh.rmtree(path) _message_box("Your _build directory has been removed") def remove_default(path): """Remove all subfolders in _build except .jupyter_cache.""" to_remove = [ dd for dd in path.iterdir() if dd.is_dir() and dd.name != ".jupyter_cache" ] for dd in to_remove: sh.rmtree(path.joinpath(dd.name)) _message_box("Your _build directory has been emptied except for .jupyter_cache") PATH_OUTPUT = Path(path_book).absolute() if not PATH_OUTPUT.is_dir(): _error(f"Path to book isn't a directory: {PATH_OUTPUT}") build_path = PATH_OUTPUT.joinpath("_build") if not build_path.is_dir(): return if all_: remove_all(build_path) elif html and latex: remove_html_latex(build_path) elif html: remove_option(build_path, "html") elif latex: remove_option(build_path, "latex") else: remove_default(build_path)
Empty the _build directory except jupyter_cache. If the all option has been flagged, it will remove the entire _build. If html/latex option is flagged, it will remove the html/latex subdirectories.
clean
python
jupyter-book/jupyter-book
jupyter_book/cli/main.py
https://github.com/jupyter-book/jupyter-book/blob/master/jupyter_book/cli/main.py
BSD-3-Clause
def myst(): """Manipulate MyST markdown files.""" pass
Manipulate MyST markdown files.
myst
python
jupyter-book/jupyter-book
jupyter_book/cli/main.py
https://github.com/jupyter-book/jupyter-book/blob/master/jupyter_book/cli/main.py
BSD-3-Clause
def init(path, kernel):
    """Add Jupytext metadata for your markdown file(s), with optional Kernel name."""
    from jupyter_book.utils import init_myst_file

    for ipath in path:
        init_myst_file(ipath, kernel, verbose=True)
Add Jupytext metadata for your markdown file(s), with optional Kernel name.
init
python
jupyter-book/jupyter-book
jupyter_book/cli/main.py
https://github.com/jupyter-book/jupyter-book/blob/master/jupyter_book/cli/main.py
BSD-3-Clause
def config():
    """Inspect your _config.yml file."""
    pass
Inspect your _config.yml file.
config
python
jupyter-book/jupyter-book
jupyter_book/cli/main.py
https://github.com/jupyter-book/jupyter-book/blob/master/jupyter_book/cli/main.py
BSD-3-Clause
def sphinx(ctx, path_source, config, toc):
    """Generate a Sphinx conf.py representation of the build configuration."""
    from jupyter_book.config import get_final_config

    path_config, full_path_source, config_overrides = ctx.invoke(
        build, path_source=path_source, config=config, toc=toc, get_config_only=True
    )
    sphinx_config, _ = get_final_config(
        user_yaml=Path(path_config) if path_config else None,
        sourcedir=Path(full_path_source),
        cli_config=config_overrides,
    )
    lines = [
        "###############################################################################",
        "# Auto-generated by `jupyter-book config`",
        "# If you wish to continue using _config.yml, make edits to that file and",
        "# re-generate this one.",
        "###############################################################################",
    ]
    for key in sorted(sphinx_config):
        lines.append(f"{key} = {sphinx_config[key]!r}")
    content = "\n".join(lines).rstrip() + "\n"
    out_folder = Path(path_config).parent if path_config else Path(full_path_source)
    out_folder.joinpath("conf.py").write_text(content, encoding="utf8")
    click.secho(f"Wrote conf.py to {out_folder}", fg="green")
Generate a Sphinx conf.py representation of the build configuration.
sphinx
python
jupyter-book/jupyter-book
jupyter_book/cli/main.py
https://github.com/jupyter-book/jupyter-book/blob/master/jupyter_book/cli/main.py
BSD-3-Clause
def find_config_path(path: Path) -> Tuple[Path, bool]:
    """Check for a _config.yml file in the current or parent directories.

    If found, return the directory containing _config.yml; otherwise return
    the present directory as the path.
    """
    if path.is_dir():
        current_dir = path
    else:
        current_dir = path.parent

    if (current_dir / "_config.yml").is_file():
        return (current_dir, True)

    while current_dir != current_dir.parent:
        if (current_dir / "_config.yml").is_file():
            return (current_dir, True)
        current_dir = current_dir.parent

    if not path.is_dir():
        return (path.parent, False)

    return (path, False)
Check for a _config.yml file in the current or parent directories. If found, return the directory containing _config.yml; otherwise return the present directory as the path.
find_config_path
python
jupyter-book/jupyter-book
jupyter_book/cli/main.py
https://github.com/jupyter-book/jupyter-book/blob/master/jupyter_book/cli/main.py
BSD-3-Clause
def builder_specific_actions(
    result, builder, output_path, cmd_type, page_name=None, print_func=print
):
    """Run post-sphinx-build actions.

    :param result: the result of the build execution; a status code or an exception
    """
    from jupyter_book.pdf import html_to_pdf
    from jupyter_book.sphinx import REDIRECT_TEXT

    if isinstance(result, Exception):
        msg = (
            f"There was an error in building your {cmd_type}. "
            "Look above for the cause."
        )
        # TODO ideally we probably only want the original traceback here
        raise RuntimeError(_message_box(msg, color="red", doprint=False)) from result
    elif result:
        msg = (
            f"Building your {cmd_type}, returns a non-zero exit code ({result}). "
            "Look above for the cause."
        )
        _message_box(msg, color="red", print_func=click.echo)
        sys.exit(result)

    # Builder-specific options
    if builder == "html":
        path_output_rel = Path(op.relpath(output_path, Path()))
        if cmd_type == "page":
            path_page = path_output_rel.joinpath(f"{page_name}.html")

            # Write an index file if it doesn't exist so we get redirects
            path_index = path_output_rel.joinpath("index.html")
            if not path_index.exists():
                path_index.write_text(REDIRECT_TEXT.format(first_page=path_page.name))

            _message_box(
                dedent(
                    f"""
                    Page build finished.

                    Your page folder is: {path_page.parent}{os.sep}
                    Open your page at: {path_page}
                    """
                )
            )
        elif cmd_type == "book":
            path_output_rel = Path(op.relpath(output_path, Path()))
            path_index = path_output_rel.joinpath("index.html")
            _message_box(
                f"""\
            Finished generating HTML for {cmd_type}.

            Your book's HTML pages are here:
                {path_output_rel}{os.sep}

            You can look at your book by opening this file in a browser:
                {path_index}

            Or paste this line directly into your browser bar:
                file://{path_index.resolve()}\
            """
            )

    if builder == "pdfhtml":
        print_func(f"Finished generating HTML for {cmd_type}...")
        print_func(f"Converting {cmd_type} HTML into PDF...")
        path_pdf_output = output_path.parent.joinpath("pdf")
        path_pdf_output.mkdir(exist_ok=True)
        if cmd_type == "book":
            path_pdf_output = path_pdf_output.joinpath("book.pdf")
            html_to_pdf(output_path.joinpath("index.html"), path_pdf_output)
        elif cmd_type == "page":
            path_pdf_output = path_pdf_output.joinpath(page_name + ".pdf")
            html_to_pdf(output_path.joinpath(page_name + ".html"), path_pdf_output)
        path_pdf_output_rel = Path(op.relpath(path_pdf_output, Path()))
        _message_box(
            f"""\
        Finished generating PDF via HTML for {cmd_type}. Your PDF is here:

            {path_pdf_output_rel}\
        """
        )

    if builder == "pdflatex":
        print_func(f"Finished generating latex for {cmd_type}...")
        print_func(f"Converting {cmd_type} latex into PDF...")

        # Convert to PDF via tex and template built Makefile and make.bat
        if sys.platform == "win32":
            makecmd = os.environ.get("MAKE", "make.bat")
        else:
            makecmd = os.environ.get("MAKE", "make")
        try:
            output = subprocess.run([makecmd, "all-pdf"], cwd=output_path)
            if output.returncode != 0:
                _error("Error: Failed to build pdf")
                return output.returncode
            _message_box(
                f"""\
            A PDF of your {cmd_type} can be found at:

                {output_path}
            """
            )
        except OSError:
            _error("Error: Failed to run: %s" % makecmd)
            return 1
Run post-sphinx-build actions.

:param result: the result of the build execution; a status code or an exception
builder_specific_actions
python
jupyter-book/jupyter-book
jupyter_book/cli/main.py
https://github.com/jupyter-book/jupyter-book/blob/master/jupyter_book/cli/main.py
BSD-3-Clause
def __init__(self, *, entry_point_group: str, **kwargs: Any):
    """Initialize with entry point group."""
    self.exclude_external_plugins = False
    self._entry_point_group: str = entry_point_group
    self._use_internal: Set[str] = kwargs.pop("use_internal", set())
    super().__init__(**kwargs)
Initialize with entry point group.
__init__
python
jupyter-book/jupyter-book
jupyter_book/cli/pluggable.py
https://github.com/jupyter-book/jupyter-book/blob/master/jupyter_book/cli/pluggable.py
BSD-3-Clause
def list_commands(self, ctx: click.Context) -> Iterable[str]:
    """Add entry point names of available plugins to the command list."""
    subcommands = super().list_commands(ctx)
    if not self.exclude_external_plugins:
        subcommands.extend(get_entry_point_names(self._entry_point_group))
    return subcommands
Add entry point names of available plugins to the command list.
list_commands
python
jupyter-book/jupyter-book
jupyter_book/cli/pluggable.py
https://github.com/jupyter-book/jupyter-book/blob/master/jupyter_book/cli/pluggable.py
BSD-3-Clause
def get_command(self, ctx: click.Context, name: str) -> click.BaseCommand:
    """Try to load a subcommand from entry points, else defer to super."""
    command = None
    if self.exclude_external_plugins or name in self._use_internal:
        command = super().get_command(ctx, name)
    else:
        try:
            command = load_entry_point(self._entry_point_group, name)
        except KeyError:
            command = super().get_command(ctx, name)
    return command
Try to load a subcommand from entry points, else defer to super.
get_command
python
jupyter-book/jupyter-book
jupyter_book/cli/pluggable.py
https://github.com/jupyter-book/jupyter-book/blob/master/jupyter_book/cli/pluggable.py
BSD-3-Clause
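The three methods above implement a click Group that discovers extra subcommands from installed plugins. The following self-contained sketch shows the same entry-point pattern using importlib.metadata (Python 3.10+); the class name and the entry-point group "my_app.commands" are illustrative, not the names jupyter-book actually uses.

# Illustrative sketch of the entry-point plugin pattern, under assumed names.
from importlib.metadata import entry_points

import click


class PluggableGroup(click.Group):
    def __init__(self, *args, entry_point_group: str, **kwargs):
        self._entry_point_group = entry_point_group
        super().__init__(*args, **kwargs)

    def _plugin_entry_points(self):
        return {ep.name: ep for ep in entry_points(group=self._entry_point_group)}

    def list_commands(self, ctx):
        # Built-in commands plus anything advertised by installed plugins
        return sorted(set(super().list_commands(ctx)) | set(self._plugin_entry_points()))

    def get_command(self, ctx, name):
        eps = self._plugin_entry_points()
        if name in eps:
            return eps[name].load()  # lazily load the plugin's click command
        return super().get_command(ctx, name)


@click.group(cls=PluggableGroup, entry_point_group="my_app.commands")
def cli():
    """Top-level command group that accepts plugin subcommands."""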
def check_source(source_name):
    """Chooses C or pyx source files, and raises if C is needed but missing"""
    source_ext = ".pyx"
    if not HAS_CYTHON:
        source_name = source_name.replace(".pyx.in", ".c")
        source_name = source_name.replace(".pyx", ".c")
        source_ext = ".c"
        if not os.path.exists(source_name):
            msg = (
                "C source not found. You must have Cython installed to "
                "build if the C source files have not been generated."
            )
            raise OSError(msg)
    return source_name, source_ext
Chooses C or pyx source files, and raises if C is needed but missing
check_source
python
statsmodels/statsmodels
setup.py
https://github.com/statsmodels/statsmodels/blob/master/setup.py
BSD-3-Clause
def process_tempita(source_name):
    """Runs pyx.in files through tempita if needed"""
    if source_name.endswith("pyx.in"):
        with open(source_name, encoding="utf-8") as templated:
            pyx_template = templated.read()
        pyx = Tempita.sub(pyx_template)
        pyx_filename = source_name[:-3]
        with open(pyx_filename, "w", encoding="utf-8") as pyx_file:
            pyx_file.write(pyx)
        file_stats = os.stat(source_name)
        try:
            os.utime(
                pyx_filename,
                ns=(file_stats.st_atime_ns, file_stats.st_mtime_ns),
            )
        except AttributeError:
            os.utime(pyx_filename, (file_stats.st_atime, file_stats.st_mtime))
        source_name = pyx_filename
    return source_name
Runs pyx.in files through tempita if needed
process_tempita
python
statsmodels/statsmodels
setup.py
https://github.com/statsmodels/statsmodels/blob/master/setup.py
BSD-3-Clause
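In a setup script these two helpers typically run back to back: process_tempita expands templated .pyx.in sources (Tempita substitutes {{...}} placeholders), and check_source then decides whether to compile the .pyx (Cython available) or fall back to the pre-generated .c file. The sketch below assumes it lives in the same setup.py as the helpers above; the file path is a placeholder.

# Hedged sketch of the build pipeline around the two helpers above.
from setuptools import Extension

templated_sources = ["statsmodels/tsa/_example.pyx.in"]  # placeholder path

extensions = []
for src in templated_sources:
    src = process_tempita(src)        # expand .pyx.in -> .pyx when Tempita is available
    source, ext = check_source(src)   # prefer .pyx with Cython, else fall back to .c
    module = src.rsplit(".", 1)[0].replace("/", ".")
    extensions.append(Extension(module, [source]))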
def close_figures():
    """
    Fixture that closes all figures after a test function has completed

    Returns
    -------
    closer : callable
        Function that will close all figures when called.

    Notes
    -----
    Used by passing as an argument to the function that produces a plot,
    for example

    def test_some_plot(close_figures):
        <test code>

    If a function creates many figures, then these can be destroyed within a
    test function by calling close_figures to ensure that the number of
    figures does not become too large.

    def test_many_plots(close_figures):
        for i in range(100000):
            plt.plot(x,y)
            close_figures()
    """
    try:
        import matplotlib.pyplot

        def close():
            matplotlib.pyplot.close("all")

    except ImportError:

        def close():
            pass

    yield close
    close()
Fixture that closes all figures after a test function has completed

Returns
-------
closer : callable
    Function that will close all figures when called.

Notes
-----
Used by passing as an argument to the function that produces a plot,
for example

def test_some_plot(close_figures):
    <test code>

If a function creates many figures, then these can be destroyed within a
test function by calling close_figures to ensure that the number of
figures does not become too large.

def test_many_plots(close_figures):
    for i in range(100000):
        plt.plot(x,y)
        close_figures()
close_figures
python
statsmodels/statsmodels
statsmodels/conftest.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/conftest.py
BSD-3-Clause
def reset_randomstate():
    """
    Fixture that sets the global RandomState to the fixed seed 1

    Notes
    -----
    Used by passing as an argument to the function that uses the global
    RandomState

    def test_some_plot(reset_randomstate):
        <test code>

    Returns the state after the test function exits
    """
    state = np.random.get_state()
    np.random.seed(1)
    yield
    np.random.set_state(state)
Fixture that sets the global RandomState to the fixed seed 1

Notes
-----
Used by passing as an argument to the function that uses the global
RandomState

def test_some_plot(reset_randomstate):
    <test code>

Returns the state after the test function exits
reset_randomstate
python
statsmodels/statsmodels
statsmodels/conftest.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/conftest.py
BSD-3-Clause
def test(extra_args=None, exit=False):
    """
    Run the test suite

    Parameters
    ----------
    extra_args : list[str]
        List of arguments to pass to pytest when running the test suite.
        The default is ['--tb=short', '--disable-pytest-warnings'].
    exit : bool
        Flag indicating whether the test runner should exit when finished.

    Returns
    -------
    int
        The status code from the test run if exit is False.
    """
    from .tools._test_runner import PytestTester

    tst = PytestTester(package_path=__file__)
    return tst(extra_args=extra_args, exit=exit)
Run the test suite

Parameters
----------
extra_args : list[str]
    List of arguments to pass to pytest when running the test suite.
    The default is ['--tb=short', '--disable-pytest-warnings'].
exit : bool
    Flag indicating whether the test runner should exit when finished.

Returns
-------
int
    The status code from the test run if exit is False.
test
python
statsmodels/statsmodels
statsmodels/__init__.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/__init__.py
BSD-3-Clause
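For example, the runner can be called directly from an interactive session, with the extra arguments passed straight through to pytest (the "-k arima" selector is only illustrative):

import statsmodels

# Run only tests whose names match "arima", with full tracebacks, and
# return the pytest status code instead of exiting the interpreter.
status = statsmodels.test(extra_args=["--tb=long", "-k", "arima"], exit=False)
print("pytest exit status:", status)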
def fit(self, slice_n=20, **kwargs):
    """
    Estimate the EDR space using Sliced Inverse Regression.

    Parameters
    ----------
    slice_n : int, optional
        Target number of observations per slice
    """
    # Sample size per slice
    if len(kwargs) > 0:
        msg = "SIR.fit does not take any extra keyword arguments"
        warnings.warn(msg)

    # Number of slices
    n_slice = self.exog.shape[0] // slice_n

    self._prep(n_slice)

    mn = [z.mean(0) for z in self._split_wexog]
    n = [z.shape[0] for z in self._split_wexog]
    mn = np.asarray(mn)
    n = np.asarray(n)

    # Estimate Cov E[X | Y=y]
    mnc = np.dot(mn.T, n[:, None] * mn) / n.sum()

    a, b = np.linalg.eigh(mnc)
    jj = np.argsort(-a)
    a = a[jj]
    b = b[:, jj]
    params = np.linalg.solve(self._covxr.T, b)

    results = DimReductionResults(self, params, eigs=a)
    return DimReductionResultsWrapper(results)
Estimate the EDR space using Sliced Inverse Regression.

Parameters
----------
slice_n : int, optional
    Target number of observations per slice
fit
python
statsmodels/statsmodels
statsmodels/regression/dimred.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/dimred.py
BSD-3-Clause
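A small synthetic illustration of the estimator above. It assumes the class is exposed as SlicedInverseReg in statsmodels.regression.dimred and that the results object carries the params and eigs attributes built in fit; the data and settings are arbitrary.

import numpy as np

from statsmodels.regression.dimred import SlicedInverseReg  # assumed public name

rng = np.random.default_rng(0)
n, p = 500, 6
x = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:2] = [1.0, -1.0]                       # true single EDR direction
y = np.sin(x @ beta) + 0.1 * rng.standard_normal(n)

res = SlicedInverseReg(y, x).fit(slice_n=25)
print(res.params[:, 0])                      # leading estimated direction
print(res.eigs[:3])                          # eigenvalues suggest the subspace dimension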
def fit_regularized(self, ndim=1, pen_mat=None, slice_n=20, maxiter=100,
                    gtol=1e-3, **kwargs):
    """
    Estimate the EDR space using regularized SIR.

    Parameters
    ----------
    ndim : int
        The number of EDR directions to estimate
    pen_mat : array_like
        A 2d array such that the squared Frobenius norm of
        `dot(pen_mat, dirs)` is added to the objective function, where
        `dirs` is an orthogonal array whose columns span the estimated
        EDR space.
    slice_n : int, optional
        Target number of observations per slice
    maxiter : int
        The maximum number of iterations for estimating the EDR space.
    gtol : float
        If the norm of the gradient of the objective function falls
        below this value, the algorithm has converged.

    Returns
    -------
    A results class instance.

    Notes
    -----
    If each row of `exog` can be viewed as containing the values of a
    function evaluated at equally-spaced locations, then setting the
    rows of `pen_mat` to [[1, -2, 1, ...], [0, 1, -2, 1, ..], ...] will
    give smooth EDR coefficients.  This is a form of "functional SIR"
    using the squared second derivative as a penalty.

    References
    ----------
    L. Ferre, A.F. Yao (2003).  Functional sliced inverse regression
    analysis.  Statistics: a journal of theoretical and applied
    statistics 37(6) 475-488.
    """
    if len(kwargs) > 0:
        msg = "SIR.fit_regularized does not take keyword arguments"
        warnings.warn(msg)

    if pen_mat is None:
        raise ValueError("pen_mat is a required argument")

    start_params = kwargs.get("start_params", None)

    # Sample size per slice
    slice_n = kwargs.get("slice_n", 20)

    # Number of slices
    n_slice = self.exog.shape[0] // slice_n

    # Sort the data by endog
    ii = np.argsort(self.endog)
    x = self.exog[ii, :]
    x -= x.mean(0)

    covx = np.cov(x.T)

    # Split the data into slices
    split_exog = np.array_split(x, n_slice)

    mn = [z.mean(0) for z in split_exog]
    n = [z.shape[0] for z in split_exog]
    mn = np.asarray(mn)
    n = np.asarray(n)
    self._slice_props = n / n.sum()

    self.ndim = ndim
    self.k_vars = covx.shape[0]
    self.pen_mat = pen_mat
    self._covx = covx
    self.n_slice = n_slice

    self._slice_means = mn

    if start_params is None:
        params = np.zeros((self.k_vars, ndim))
        params[0:ndim, 0:ndim] = np.eye(ndim)
        params = params
    else:
        if start_params.shape[1] != ndim:
            msg = "Shape of start_params is not compatible with ndim"
            raise ValueError(msg)
        params = start_params

    params, _, cnvrg = _grass_opt(params, self._regularized_objective,
                                  self._regularized_grad, maxiter, gtol)

    if not cnvrg:
        g = self._regularized_grad(params.ravel())
        gn = np.sqrt(np.dot(g, g))
        msg = "SIR.fit_regularized did not converge, |g|=%f" % gn
        warnings.warn(msg)

    results = DimReductionResults(self, params, eigs=None)
    return DimReductionResultsWrapper(results)
Estimate the EDR space using regularized SIR.

Parameters
----------
ndim : int
    The number of EDR directions to estimate
pen_mat : array_like
    A 2d array such that the squared Frobenius norm of
    `dot(pen_mat, dirs)` is added to the objective function, where
    `dirs` is an orthogonal array whose columns span the estimated
    EDR space.
slice_n : int, optional
    Target number of observations per slice
maxiter : int
    The maximum number of iterations for estimating the EDR space.
gtol : float
    If the norm of the gradient of the objective function falls below
    this value, the algorithm has converged.

Returns
-------
A results class instance.

Notes
-----
If each row of `exog` can be viewed as containing the values of a
function evaluated at equally-spaced locations, then setting the rows
of `pen_mat` to [[1, -2, 1, ...], [0, 1, -2, 1, ..], ...] will give
smooth EDR coefficients.  This is a form of "functional SIR" using the
squared second derivative as a penalty.

References
----------
L. Ferre, A.F. Yao (2003).  Functional sliced inverse regression
analysis.  Statistics: a journal of theoretical and applied statistics
37(6) 475-488.
fit_regularized
python
statsmodels/statsmodels
statsmodels/regression/dimred.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/dimred.py
BSD-3-Clause
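The Notes section above describes a second-difference penalty matrix for "functional SIR". A sketch of building such a matrix and passing it in follows; the weight and the commented usage line (reusing illustrative data like the earlier SIR sketch) are arbitrary.

import numpy as np

def second_diff_penalty(k_vars, weight=1.0):
    # Each row is [..., 1, -2, 1, ...], so dot(pen_mat, dirs) approximates the
    # second derivative of each EDR direction viewed as a function.
    pen = np.zeros((k_vars - 2, k_vars))
    for i in range(k_vars - 2):
        pen[i, i:i + 3] = [1.0, -2.0, 1.0]
    return weight * pen

# Hypothetical usage, assuming y and x as in the SIR sketch above:
# pen = second_diff_penalty(x.shape[1], weight=10.0)
# res = SlicedInverseReg(y, x).fit_regularized(ndim=1, pen_mat=pen)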
def fit(self, **kwargs):
    """
    Estimate the EDR space using PHD.

    Parameters
    ----------
    resid : bool, optional
        If True, use least squares regression to remove the linear
        relationship between each covariate and the response, before
        conducting PHD.

    Returns
    -------
    A results instance which can be used to access the estimated
    parameters.
    """
    resid = kwargs.get("resid", False)

    y = self.endog - self.endog.mean()
    x = self.exog - self.exog.mean(0)

    if resid:
        from statsmodels.regression.linear_model import OLS

        r = OLS(y, x).fit()
        y = r.resid

    cm = np.einsum('i,ij,ik->jk', y, x, x)
    cm /= len(y)

    cx = np.cov(x.T)
    cb = np.linalg.solve(cx, cm)

    a, b = np.linalg.eig(cb)
    jj = np.argsort(-np.abs(a))
    a = a[jj]
    params = b[:, jj]

    results = DimReductionResults(self, params, eigs=a)
    return DimReductionResultsWrapper(results)
Estimate the EDR space using PHD.

Parameters
----------
resid : bool, optional
    If True, use least squares regression to remove the linear
    relationship between each covariate and the response, before
    conducting PHD.

Returns
-------
A results instance which can be used to access the estimated
parameters.
fit
python
statsmodels/statsmodels
statsmodels/regression/dimred.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/dimred.py
BSD-3-Clause
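The PHD estimator follows the same model interface. A hedged, self-contained sketch, assuming the class is exported as PrincipalHessianDirections in statsmodels.regression.dimred and with arbitrary synthetic data:

import numpy as np

from statsmodels.regression.dimred import PrincipalHessianDirections  # assumed name

rng = np.random.default_rng(1)
x = rng.standard_normal((500, 6))
y = (x[:, 0] - x[:, 1]) ** 2 + 0.1 * rng.standard_normal(500)

res = PrincipalHessianDirections(y, x).fit(resid=True)  # remove the linear trend first
print(res.params[:, 0])                                 # leading estimated direction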
def fit(self, **kwargs):
    """
    Estimate the EDR space.

    Parameters
    ----------
    slice_n : int
        Number of observations per slice
    """
    # Sample size per slice
    slice_n = kwargs.get("slice_n", 50)

    # Number of slices
    n_slice = self.exog.shape[0] // slice_n

    self._prep(n_slice)

    cv = [np.cov(z.T) for z in self._split_wexog]
    ns = [z.shape[0] for z in self._split_wexog]

    p = self.wexog.shape[1]

    if not self.bc:
        # Cook's original approach
        vm = 0
        for w, cvx in zip(ns, cv):
            icv = np.eye(p) - cvx
            vm += w * np.dot(icv, icv)
        vm /= len(cv)
    else:
        # The bias-corrected approach of Li and Zhu

        # \Lambda_n in Li, Zhu
        av = 0
        for c in cv:
            av += np.dot(c, c)
        av /= len(cv)

        # V_n in Li, Zhu
        vn = 0
        for x in self._split_wexog:
            r = x - x.mean(0)
            for i in range(r.shape[0]):
                u = r[i, :]
                m = np.outer(u, u)
                vn += np.dot(m, m)
        vn /= self.exog.shape[0]

        c = np.mean(ns)
        k1 = c * (c - 1) / ((c - 1)**2 + 1)
        k2 = (c - 1) / ((c - 1)**2 + 1)
        av2 = k1 * av - k2 * vn
        vm = np.eye(p) - 2 * sum(cv) / len(cv) + av2

    a, b = np.linalg.eigh(vm)
    jj = np.argsort(-a)
    a = a[jj]
    b = b[:, jj]
    params = np.linalg.solve(self._covxr.T, b)

    results = DimReductionResults(self, params, eigs=a)
    return DimReductionResultsWrapper(results)
Estimate the EDR space.

Parameters
----------
slice_n : int
    Number of observations per slice
fit
python
statsmodels/statsmodels
statsmodels/regression/dimred.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/dimred.py
BSD-3-Clause
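A hedged sketch of calling the SAVE estimator above. The class name SlicedAverageVarianceEstimation and the bc keyword (which should select the bias-corrected branch referenced through self.bc) are assumptions; the data are arbitrary.

import numpy as np

from statsmodels.regression.dimred import SlicedAverageVarianceEstimation  # assumed name

rng = np.random.default_rng(2)
x = rng.standard_normal((600, 5))
y = (x[:, 0] ** 2) * (1.0 + x[:, 1]) + 0.1 * rng.standard_normal(600)

res = SlicedAverageVarianceEstimation(y, x, bc=True).fit(slice_n=50)
print(res.params[:, :2])   # first two estimated EDR directions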
def _grass_opt(params, fun, grad, maxiter, gtol):
    """
    Minimize a function on a Grassmann manifold.

    Parameters
    ----------
    params : array_like
        Starting value for the optimization.
    fun : function
        The function to be minimized.
    grad : function
        The gradient of fun.
    maxiter : int
        The maximum number of iterations.
    gtol : float
        Convergence occurs when the gradient norm falls below this value.

    Returns
    -------
    params : array_like
        The minimizing value for the objective function.
    fval : float
        The smallest achieved value of the objective function.
    cnvrg : bool
        True if the algorithm converged to a limit point.

    Notes
    -----
    `params` is 2-d, but `fun` and `grad` should take 1-d arrays
    `params.ravel()` as arguments.

    Reference
    ---------
    A Edelman, TA Arias, ST Smith (1998).  The geometry of algorithms with
    orthogonality constraints. SIAM J Matrix Anal Appl.
    http://math.mit.edu/~edelman/publications/geometry_of_algorithms.pdf
    """
    p, d = params.shape
    params = params.ravel()

    f0 = fun(params)
    cnvrg = False

    for _ in range(maxiter):

        # Project the gradient to the tangent space
        g = grad(params)
        g -= np.dot(g, params) * params / np.dot(params, params)

        if np.sqrt(np.sum(g * g)) < gtol:
            cnvrg = True
            break

        gm = g.reshape((p, d))
        u, s, vt = np.linalg.svd(gm, 0)

        paramsm = params.reshape((p, d))
        pa0 = np.dot(paramsm, vt.T)

        def geo(t):
            # Parameterize the geodesic path in the direction
            # of the gradient as a function of a real value t.
            pa = pa0 * np.cos(s * t) + u * np.sin(s * t)
            return np.dot(pa, vt).ravel()

        # Try to find a downhill step along the geodesic path.
        step = 2.
        while step > 1e-10:
            pa = geo(-step)
            f1 = fun(pa)
            if f1 < f0:
                params = pa
                f0 = f1
                break
            step /= 2

    params = params.reshape((p, d))
    return params, f0, cnvrg
Minimize a function on a Grassmann manifold.

Parameters
----------
params : array_like
    Starting value for the optimization.
fun : function
    The function to be minimized.
grad : function
    The gradient of fun.
maxiter : int
    The maximum number of iterations.
gtol : float
    Convergence occurs when the gradient norm falls below this value.

Returns
-------
params : array_like
    The minimizing value for the objective function.
fval : float
    The smallest achieved value of the objective function.
cnvrg : bool
    True if the algorithm converged to a limit point.

Notes
-----
`params` is 2-d, but `fun` and `grad` should take 1-d arrays
`params.ravel()` as arguments.

Reference
---------
A Edelman, TA Arias, ST Smith (1998). The geometry of algorithms with
orthogonality constraints. SIAM J Matrix Anal Appl.
http://math.mit.edu/~edelman/publications/geometry_of_algorithms.pdf
_grass_opt
python
statsmodels/statsmodels
statsmodels/regression/dimred.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/dimred.py
BSD-3-Clause
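As a concrete check of the optimizer above, minimizing the negative projected trace of a symmetric matrix over p-by-d frames should recover the dominant eigenspace. The sketch assumes the private helper _grass_opt is importable from statsmodels.regression.dimred; the matrix and dimensions are arbitrary.

import numpy as np

from statsmodels.regression.dimred import _grass_opt  # private helper, assumed importable

p, d = 6, 2
rng = np.random.default_rng(3)
m = rng.standard_normal((p, p))
a = m @ m.T                                  # symmetric positive definite test matrix

def fun(v):                                  # objective takes the raveled frame
    b = v.reshape((p, d))
    return -np.trace(b.T @ a @ b)

def grad(v):                                 # Euclidean gradient, also raveled
    b = v.reshape((p, d))
    return (-2 * a @ b).ravel()

start = np.eye(p)[:, :d]                     # orthonormal starting frame
params, fval, cnvrg = _grass_opt(start, fun, grad, maxiter=200, gtol=1e-4)
print(cnvrg, -fval)                          # -fval should approach the sum of the two largest eigenvalues
print(np.linalg.eigvalsh(a)[-2:].sum())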