# Model
## Probability Density Function
A PDF is defined by
$$ pdf_{A}(x_i; \theta) := \frac{f(x_i; \theta)}{N_{A}(f)}$$
where $f$ is a non-negative function, $x_i$ is an m-dimensional data point, $\theta$ is a set of parameter values and $N_A(f)$ is the normalization of $f$ over $A$. By default this is
given by
$$N_A(f) := \int_A f(a_i, b_i; \theta) \, da$$
where $a_i$ and $b_i$ are mutually exclusive subsets of $x_i$ and $\theta$.
The integration over the limits $A$ in the variables $a_i$ is the normalization term of the PDF.
The variables $x_i$ are event-dependent and are called "observables", while the $\theta$ are independent
of the event and are referred to as "parameters".
### Integration
In zfit, an integral can be taken over any variable.
### Expected counts: yield
A leading `ext_` in a method name means that the method returns an extended term. The extended PDF term
is defined as
$$pdf^{ext}_{A} := pdf_{A} \cdot yield_{A}$$
where the $yield$ is a term that describes the absolute scaling of the PDF. It is a parameter which
is defined as
$$yield_{A} := \frac{\int_A f(x_i; \theta) }{N_A(f)}.$$
In other words, $yield_{A}$ is the *expected count* of events *in the region A*.
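For concreteness, a minimal sketch using the zfit implementation of this interface (the names, signatures and the Gauss PDF are zfit's and serve only as an illustration of the definitions above):

```python
import numpy as np
import zfit

obs = zfit.Space("x", limits=(-5, 5))      # the region A
mu = zfit.Parameter("mu", 0.0)             # parameters theta
sigma = zfit.Parameter("sigma", 1.0)
nevents = zfit.Parameter("yield", 1000.0)  # the expected count in A

pdf = zfit.pdf.Gauss(obs=obs, mu=mu, sigma=sigma)
ext_pdf = pdf.create_extended(nevents)

x = np.array([[0.5], [1.2]])               # two one-dimensional data points x_i
print(pdf.pdf(x))                          # pdf_A(x_i; theta), normalized over A
print(ext_pdf.ext_pdf(x))                  # pdf_A * yield_A
print(pdf.integrate(limits=zfit.Space("x", limits=(-1, 1))))  # integral over a sub-region of A
```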
| zfit-interface | /zfit_interface-0.0.3.tar.gz/zfit_interface-0.0.3/docs/api_definition/model.ipynb | model.ipynb |
************
zfit physics
************
.. image:: https://scikit-hep.org/assets/images/Scikit--HEP-Affiliated-blue.svg
:target: https://scikit-hep.org
.. image:: https://img.shields.io/pypi/v/zfit_physics.svg
:target: https://pypi.python.org/pypi/zfit_physics
|zfit_logo|
.. |zfit_logo| image:: docs/images/zfit-fin_400x168.png
:target: https://github.com/zfit/zfit
:alt: zfit logo
Tools and models to extend `zfit <https://github.com/zfit/zfit>`_.
The structure of this package is still in flux, so before opening any PR
you may want to open an issue to discuss how your proposed functionality will fit into `zfit-physics`.
It is currently under heavy development. Use it to test currently experimental
features. It offers a similar structure to zfit and will in the future be
simply integrated into the zfit namespace (given that zfit-physics is installed).
What's new
=============
Check the `changelog <CHANGELOG.rst>`_ for recent changes.
Installing
==========
zfit-physics is available on pip. If possible, use a conda or virtual environment and do:
.. code-block:: console
$ pip install zfit-physics
For the newest development version, you can install the version from git with
.. code-block:: console
$ pip install git+https://github.com/zfit/zfit-physics
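Once installed, a minimal usage sketch (assuming the ARGUS PDF added in this package is exposed as ``zfit_physics.pdf.Argus``; check the package for the exact API):

.. code-block:: python

    import numpy as np
    import zfit
    import zfit_physics as zphys

    obs = zfit.Space("m", limits=(0.0, 5.0))
    m0 = zfit.Parameter("m0", 5.0)
    c = zfit.Parameter("c", -2.0)
    p = zfit.Parameter("p", 0.5)
    argus = zphys.pdf.Argus(obs=obs, m0=m0, c=c, p=p)
    probs = argus.pdf(np.array([[1.0], [2.5]]))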
Contact
=======
You can contact us directly:
- via e-mail: [email protected]
- join our `Gitter channel <https://gitter.im/zfit/zfit>`_
Contributors
============
| zfit-physics | /zfit_physics-0.6.0.tar.gz/zfit_physics-0.6.0/README.rst | README.rst |
*********
Changelog
*********
Develop
=======
Major Features and Improvements
-------------------------------
Breaking changes
------------------
Deprecations
-------------
Bug fixes and small changes
---------------------------
Experimental
------------
Requirement changes
-------------------
Thanks
------
0.6.0 (20 Jul 2023)
===================
Upgrade to zfit >= 0.12, support Python 3.8-3.11
0.4.0 (27 Jan 2023)
===================
Compatibility with zfit >= 0.11, < 0.13
0.3.0 (26 Jan 2023)
===================
Compatibility with zfit >= 0.10, < 0.11
0.2.0
=======
Added relativistic Breit-Wigner PDF
Many thanks to Simon Thor <[email protected]> for contributing the Relativistic BW
0.1.0
=======
Upgrade to zfit >= 0.6
0.0.3 (14.05.2020)
==================
Major Features and Improvements
-------------------------------
- added ARGUS pdf
Bug fixes and small changes
---------------------------
- fix KDE with numerical integration
Requirement changes
-------------------
- zfit >= 5.2
Thanks
------
- Colm Murphy <[email protected]> for help in contributing the ARGUS PDF
| zfit-physics | /zfit_physics-0.6.0.tar.gz/zfit_physics-0.6.0/CHANGELOG.rst | CHANGELOG.rst |
**Questions**: feel free to [ask on StackOverflow](https://stackoverflow.com/questions/ask) with
the **zfit** tag (so we also get notified).
**Bugs or Feature Requests**: Please fill in the form below _if possible_.
<!--- Provide a general summary of the issue in the Title above -->
## Current Behaviour
<!--- Tell us what happens instead of the expected behavior -->
<!--- Include the commands you ran and the output. -->
```
Paste the command(s) you ran and the output.
If there was a crash, please include the traceback here.
```
## Expected Behaviour
<!--- Tell us what should happen. -->
## Context (Environment)
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
* zfit version:
* zfit-physics version:
* Python version:
* Are you using conda, pipenv, etc? :
* Operating System:
* Tensorflow version:
## Possible Solution/Implementation
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- and maybe suggest an idea for implementing addition or change -->
| zfit-physics | /zfit_physics-0.6.0.tar.gz/zfit_physics-0.6.0/.github/ISSUE_TEMPLATE.md | ISSUE_TEMPLATE.md |
---
name: zfit usage question
about: Questions about the usage and best ways in zfit
title: "[QUESTION]"
labels: ''
assignees: ''
---
Please use preferably one of the following channels:
- read the FAQ, it may already hold your answer
- [ask on StackOverflow](https://stackoverflow.com/questions/ask) with
the **zfit** tag (so we also get notified).
- if you want to ask a broader community right away, feel free to ask in the [Gitter channel of zfit](https://gitter.im/zfit/zfit)
- or post an issue here with the question
| zfit-physics | /zfit_physics-0.6.0.tar.gz/zfit_physics-0.6.0/.github/ISSUE_TEMPLATE/zfit-usage-question.md | zfit-usage-question.md |
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: bug
assignees: ''
---
**Bugs**: Please fill in the form below _if possible_.
<!--- Provide a general summary of the issue in the Title above -->
## Current Behaviour
<!--- Tell us what happens instead of the expected behavior -->
<!--- Include the commands you ran and the output. -->
```
Paste the command(s) you ran and the output.
If there was a crash, please include the traceback here.
```
## Expected Behaviour
<!--- Tell us what should happen. -->
## Context (Environment)
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
* zfit version:
* zfit-physics version:
* Python version:
* Are you using conda, pipenv, etc? :
* Operating System:
* Tensorflow version:
## Possible Solution/Implementation
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- and maybe suggest an idea for implementing addition or change -->
| zfit-physics | /zfit_physics-0.6.0.tar.gz/zfit_physics-0.6.0/.github/ISSUE_TEMPLATE/bug_report.md | bug_report.md |
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: discussion
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| zfit-physics | /zfit_physics-0.6.0.tar.gz/zfit_physics-0.6.0/.github/ISSUE_TEMPLATE/feature-request.md | feature-request.md |
---
name: BehaviorUnderDiscussion
about: If the code raises a BehaviorUnderDiscussion error
title: "[BEHAVIOR]"
labels: discussion
assignees: mayou36
---
## Describe what happens
A description of what you do with zfit-physics and where the error arose, if you know about it.
## StackTrace
Please include the stacktrace here
## Environment
Please fill in the relevant points (if possible).
* zfit version:
* zfit-physics version:
* Python version:
* Are you using conda, pipenv, etc? :
* Operating System:
* Tensorflow version:
## Desired behavior
What would you expect to happen and why? Be aware that there are many more use cases than your specific one; if you can think of cases where your proposal would break, please include them here.
| zfit-physics | /zfit_physics-0.6.0.tar.gz/zfit_physics-0.6.0/.github/ISSUE_TEMPLATE/behaviorunderdiscussion.md | behaviorunderdiscussion.md |
from typing import Optional
import tensorflow as tf
import tensorflow_probability as tfp
import zfit
import zfit.models.functor
from zfit import z
from zfit.exception import FunctionNotImplementedError
from zfit.util import exception, ztyping
from zfit.util.exception import WorkInProgressError
class NumConvPDFUnbinnedV1(zfit.models.functor.BaseFunctor):
def __init__(
self,
func: zfit.pdf.BasePDF,
kernel: zfit.pdf.BasePDF,
limits: ztyping.ObsTypeInput,
obs: ztyping.ObsTypeInput,
ndraws: int = 20000,
*,
extended: Optional[ztyping.ParamTypeInput] = None,
name: str = "Convolution",
experimental_pdf_normalized=False,
):
"""Numerical Convolution pdf of *func* convoluted with *kernel*.
Args:
func (:py:class:`zfit.pdf.BasePDF`): PDF with `pdf` method that takes x and returns the function value.
Here x is a `Data` with the obs and limits of *limits*.
kernel (:py:class:`zfit.pdf.BasePDF`): PDF with `pdf` method that takes x acting as the kernel.
Here x is a `Data` with the obs and limits of *limits*.
limits (:py:class:`zfit.Space`): Limits for the numerical integration.
obs (:py:class:`zfit.Space`): Observables of the class
extended: If the PDF should be extended, i.e. a yield.
ndraws (int): Number of draws for the mc integration
name (str): Human readable name of the pdf
"""
super().__init__(obs=obs, pdfs=[func, kernel], params={}, name=name, extended=extended)
limits = self._check_input_limits(limits=limits)
if limits.n_limits == 0:
raise exception.LimitsNotSpecifiedError("obs have to have limits to define where to integrate over.")
if limits.n_limits > 1:
raise WorkInProgressError("Multiple Limits not implemented")
# if not isinstance(func, zfit.pdf.BasePDF):
# raise TypeError(f"func has to be a PDF, not {type(func)}")
# if isinstance(kernel, zfit.pdf.BasePDF):
# raise TypeError(f"kernel has to be a PDF, not {type(kernel)}")
# func = lambda x: func.unnormalized_pdf(x=x)
# kernel = lambda x: kernel.unnormalized_pdf(x=x)
self._grid_points = None # true vars # callable func of reco - true vars
self._func_values = None
self.conv_limits = limits
self._ndraws = ndraws
self._experimental_pdf_normalized = experimental_pdf_normalized
@z.function
def _unnormalized_pdf(self, x):
limits = self.conv_limits
# area = limits.area() # new spaces
area = limits.rect_area()[0] # new spaces
samples = self._grid_points
func_values = self._func_values
# if func_values is None:
if True:
# create sample for numerical integral
lower, upper = limits.rect_limits
lower = z.convert_to_tensor(lower, dtype=self.dtype)
upper = z.convert_to_tensor(upper, dtype=self.dtype)
samples_normed = tfp.mcmc.sample_halton_sequence(
dim=limits.n_obs,
num_results=self._ndraws,
dtype=self.dtype,
randomized=False,
)
samples = samples_normed * (upper - lower) + lower # samples is [0, 1], stretch it
samples = zfit.Data.from_tensor(obs=limits, tensor=samples)
self._grid_points = samples
func_values = self.pdfs[0].pdf(samples, norm=False) # func of true vars
self._func_values = func_values
return tf.map_fn(
lambda xi: area * tf.reduce_mean(func_values * self.pdfs[1].pdf(xi - samples.value(), norm=False)),
x.value(),
)
# func of reco vars
@z.function
def _pdf(self, x, norm_range):
if not self._experimental_pdf_normalized:
raise FunctionNotImplementedError
limits = self.conv_limits
# area = limits.area() # new spaces
area = limits.rect_area()[0] # new spaces
samples = self._grid_points
func_values = self._func_values
# if func_values is None:
if True:
# create sample for numerical integral
lower, upper = limits.rect_limits
lower = z.convert_to_tensor(lower, dtype=self.dtype)
upper = z.convert_to_tensor(upper, dtype=self.dtype)
samples_normed = tfp.mcmc.sample_halton_sequence(
dim=limits.n_obs,
num_results=self._ndraws,
dtype=self.dtype,
randomized=False,
)
samples = samples_normed * (upper - lower) + lower # samples is [0, 1], stretch it
samples = zfit.Data.from_tensor(obs=limits, tensor=samples)
self._grid_points = samples
func_values = self.pdfs[0].pdf(samples) # func of true vars
self._func_values = func_values
return tf.map_fn(
lambda xi: area * tf.reduce_mean(func_values * self.pdfs[1].pdf(xi - samples.value())),
x.value(),
) | zfit-physics | /zfit_physics-0.6.0.tar.gz/zfit_physics-0.6.0/zfit_physics/models/pdf_conv.py | pdf_conv.py |
from collections import OrderedDict
from typing import Optional
import tensorflow as tf
import tensorflow_probability.python.distributions as tfd
import zfit
from zfit import z
from zfit.models.dist_tfp import WrapDistribution
from zfit.util import ztyping
from zfit.util.container import convert_to_container
from zfit.util.exception import WorkInProgressError
class GaussianKDE(WrapDistribution): # multidimensional kde with gaussian kernel
def __init__(
self,
data: tf.Tensor,
bandwidth: ztyping.ParamTypeInput,
obs: ztyping.ObsTypeInput,
name: str = "GaussianKDE",
*,
extended: Optional[ztyping.ParamTypeInput] = None,
):
"""Gaussian Kernel Density Estimation using Silverman's rule of thumb.
Args:
data: Data points to build a kernel around
bandwidth: sigmas for the covariance matrix of the multivariate gaussian
obs:
name: Name of the PDF
"""
dtype = zfit.settings.ztypes.float
if isinstance(data, zfit.core.interfaces.ZfitData):
raise WorkInProgressError("Currently, no dataset supported yet")
# size = data.nevents
# dims = data.n_obs
# with data.
# data = data.value()
# if data.weights is not None:
else:
if not isinstance(data, tf.Tensor):
data = z.convert_to_tensor(value=data)
data = z.to_real(data)
shape_data = tf.shape(data)
size = tf.cast(shape_data[0], dtype=dtype)
dims = tf.cast(shape_data[-1], dtype=dtype)
bandwidth = convert_to_container(bandwidth)
# Bandwidth definition, use silverman's rule of thumb for nd
def reshaped_kerner_factory():
cov_diag = [
tf.square((4.0 / (dims + 2.0)) ** (1 / (dims + 4)) * size ** (-1 / (dims + 4)) * s) for s in bandwidth
]
# cov = tf.linalg.diag(cov_diag)
# kernel prob output shape: (n,)
# kernel = tfd.MultivariateNormalFullCovariance(loc=data, covariance_matrix=cov)
kernel = tfd.MultivariateNormalDiag(loc=data, scale_diag=cov_diag)
return kernel
# return tfd.Independent(kernel)
# reshaped_kernel = kernel
probs = tf.broadcast_to(1 / size, shape=(tf.cast(size, tf.int32),))
categorical = tfd.Categorical(probs=probs) # no grad -> no need to recreate
dist_kwargs = lambda: dict(
mixture_distribution=categorical,
components_distribution=reshaped_kerner_factory(),
)
distribution = tfd.MixtureSameFamily
# TODO lambda for params
params = OrderedDict((f"bandwidth_{i}", h) for i, h in enumerate(bandwidth))
super().__init__(
distribution=distribution,
dist_params={},
dist_kwargs=dist_kwargs,
params=params,
obs=obs,
name=name,
extended=extended,
)
# @zfit.supports()
# def _analytic_integrate(self, limits, norm_range):
# raise AnalyticIntegralNotImplementedError | zfit-physics | /zfit_physics-0.6.0.tar.gz/zfit_physics-0.6.0/zfit_physics/models/pdf_kde.py | pdf_kde.py |
from typing import Optional
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp
import zfit
from zfit import z
from zfit.util import ztyping
@z.function(wraps="tensor")
def argus_func(
m: ztyping.NumericalType,
m0: ztyping.NumericalType,
c: ztyping.NumericalType,
p: ztyping.NumericalType,
) -> tf.Tensor:
r"""`ARGUS shape <https://en.wikipedia.org/wiki/ARGUS_distribution>`_ describing the invariant mass of a particle in
a continuous background.
It is defined as
.. math::
\mathrm{Argus}(m, m_0, c, p) = m \cdot \left[ 1 - \left( \frac{m}{m_0} \right)^2 \right]^p
\cdot \exp\left[ c \cdot \left(1 - \left(\frac{m}{m_0}\right)^2 \right) \right]
The implementation follows the `RooFit version <https://root.cern.ch/doc/master/classRooArgusBG.html>`_
Args:
m: Mass of the particle
m0: Maximal energetically allowed mass, cutoff
c: peakiness of the distribution
p: Generalized ARGUS shape, for p = 0.5, the normal ARGUS shape is recovered
Returns:
`tf.Tensor`: the values matching the (broadcasted) shapes of the input
"""
m = tfp.math.clip_by_value_preserve_gradient(m, 0.0, m0)
m_frac = m / m0
m_factor = 1 - z.square(m_frac)
argus = m * z.pow(m_factor, p) * (z.exp(c * m_factor))
return argus
class Argus(zfit.pdf.BasePDF):
def __init__(
self,
obs: ztyping.ObsTypeInput,
m0,
c,
p,
name: str = "ArgusPDF",
extended: Optional[ztyping.ParamTypeInput] = None,
):
r"""`ARGUS shape <https://en.wikipedia.org/wiki/ARGUS_distribution>`_ describing the invariant mass of a particle
in a continuous background.
The ARGUS shaped function describes the reconstructed invariant mass of a decayed particle, especially at the
kinematic boundaries of the maximum beam energy. It is defined as
.. math::
\mathrm{Argus}(m, m_0, c, p) = m \cdot \left[ 1 - \left( \frac{m}{m_0} \right)^2 \right]^p
\cdot \exp\left[ c \cdot \left(1 - \left(\frac{m}{m_0}\right)^2 \right) \right]
and normalized to one over the `norm_range` (which defaults to `obs`).
The implementation follows the `RooFit version <https://root.cern.ch/doc/master/classRooArgusBG.html>`_
Args:
obs: Observable the PDF is defined on
m0: Maximal energetically allowed mass, cutoff
c: Shape parameter; "peakiness" of the distribution
p: Generalization of the ARGUS shape, for p = 0.5, the normal ARGUS shape is recovered
Returns:
`tf.Tensor`: the values matching the (broadcasted) shapes of the input
"""
params = {"m0": m0, "c": c, "p": p}
super().__init__(obs=obs, name=name, params=params, extended=extended)
_N_OBS = 1
def _unnormalized_pdf(self, x):
"""
Calculation of ARGUS PDF value
(Docs: https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.argus.html)
"""
m = zfit.z.unstack_x(x)
m0 = self.params["m0"]
c = self.params["c"]
p = self.params["p"]
return argus_func(m, m0, c, p)
# Keep? move to math?
# @z.function_tf
def uppergamma(s, x):
return tf.math.igammac(s, x=x) * z.exp(tf.math.lgamma(s))  # upper incomplete gamma: Gamma(s, x) = Q(s, x) * Gamma(s)
@z.function(wraps="tensor")
def argus_cdf_p_half_nonpositive(lim, c, m0):
lim = tf.clip_by_value(lim, 0.0, m0)
cdf = tf.cond(
tf.math.less(c, 0.0),
lambda: argus_cdf_p_half_c_neg(lim=lim, c=c, m0=m0),
lambda: argus_cdf_p_half_c_zero(lim=lim, c=c, m0=m0),
)
return cdf
# Does not work, why?
# # @z.function_tf
# def argus_cdf_p_half_sympy(lim, c, m0):
# # lim = tf.where(tf.less_equal(lim, m0), lim, m0) # take the smaller one, only integrate up to m0
# # lim = tf.where(tf.greater(lim, 0.), lim, z.constant(0.)) # start from 0 as minimum
# lim = tf.clip_by_value(lim, 0., m0)
# lim_square = z.square(lim)
# m0_squared = z.square(m0)
# return (-0.5 * m0_squared * z.pow(-c * (1 - lim_square / m0_squared), -0.5)
# * z.sqrt(1 - lim_square / m0_squared) * uppergamma((z.constant(1.5)),
# -c * (1 - lim_square / m0_squared)) / c)
@z.function(wraps="tensor")
def argus_cdf_p_half_c_neg(lim, c, m0):
f1 = 1 - z.square(lim / m0)
cdf = -0.5 * z.square(m0)
cdf *= z.exp(c * f1) * z.sqrt(f1) / c + 0.5 / z.pow(-c, 1.5) * z.sqrt(z.pi) * tf.math.erf(z.sqrt(-c * f1))
return cdf
@z.function(wraps="tensor")
def argus_cdf_p_half_c_zero(lim, c, m0):
del c
f1 = 1 - z.square(lim / m0)
cdf = -z.square(m0) / 3.0 * f1 * z.sqrt(f1)
return cdf
# TODO: add Faddeev function approximation
# def argus_cdf_p_half_c_pos(lim, c, m0):
# f1 = 1 - z.square(lim)
# cdf = 0.5 * z.square(m0) * z.exp(c * f1) / (c * z.sqrt(c))
# # cdf *= (0.5 * z.sqrt(z.pi) * (RooMath::faddeeva(sqrt(c * f1))).imag() - z.sqrt(c * f1))
# return cdf
@z.function(wraps="tensor")
def argus_integral_p_half_func(lower, upper, c, m0):
return argus_cdf_p_half_nonpositive(upper, c=c, m0=m0) - argus_cdf_p_half_nonpositive(lower, c=c, m0=m0)
def argus_integral_p_half(limits, params, model):
p = params["p"]
if not isinstance(p, zfit.param.ConstantParameter) or not np.isclose(p.static_value, 0.5):
raise zfit.exception.AnalyticIntegralNotImplementedError()
c = params["c"]
if not isinstance(c, zfit.param.ConstantParameter) or c.static_value > 0:
raise zfit.exception.AnalyticIntegralNotImplementedError()
m0 = params["m0"]
lower, upper = limits.limit1d
lower = z.convert_to_tensor(lower)
upper = z.convert_to_tensor(upper)
integral = argus_integral_p_half_func(lower=lower, upper=upper, c=c, m0=m0)
return integral
argus_integral_limits = zfit.Space(axes=(0,), limits=(zfit.Space.ANY_LOWER, zfit.Space.ANY_UPPER))
Argus.register_analytic_integral(func=argus_integral_p_half, limits=argus_integral_limits)
if __name__ == "__main__":
# create the integral
import sympy as sp
N = sp.Symbol("N")
m = sp.Symbol("m")
m0 = sp.Symbol("m0")
c = sp.Symbol("c")
t = sp.Symbol("t")
mu = sp.Symbol("mu")
sigma = sp.Symbol("sigma")
# p = sp.Symbol('p')
p = 0.5
low = sp.Symbol("low")
lim = sp.Symbol("up")
from sympy.assumptions.assume import global_assumptions
global_assumptions.add(sp.Q.positive(N))
global_assumptions.add(sp.Q.finite(N))
global_assumptions.add(sp.Q.positive(sigma))
global_assumptions.add(sp.Q.finite(sigma))
global_assumptions.add(sp.Q.positive(m))
global_assumptions.add(sp.Q.finite(m))
global_assumptions.add(sp.Q.positive(m / m0))
global_assumptions.add(sp.Q.finite(m / m0))
global_assumptions.add(sp.Q.positive(p))
global_assumptions.add(sp.Q.finite(p))
# global_assumptions.add(sp.Q.integer(p))
global_assumptions.add(sp.Q.finite(c))
global_assumptions.add(sp.Q.positive(c))
m_factor = 1 - (m / m0) ** 2
integral_expression = m * m_factor**p * (sp.exp(c * m_factor))
# integral_expression = (N * m * (1 - (m / m0) ** 2) ** p * sp.exp(c * (1 - (m / m0) ** 2)))
integral = sp.integrate(integral_expression, m)
print(integral)
func1 = sp.lambdify(integral.free_symbols, integral, "tensorflow")
import inspect
source = inspect.getsource(func1)
print(source)
# sp.lambdify() | zfit-physics | /zfit_physics-0.6.0.tar.gz/zfit_physics-0.6.0/zfit_physics/models/pdf_argus.py | pdf_argus.py |
from typing import Optional
import numpy as np
import tensorflow as tf
import zfit
from zfit import z
from zfit.core.space import ANY_LOWER, ANY_UPPER, Space
from zfit.util import ztyping
@z.function(wraps="tensor")
def relbw_pdf_func(x, m, gamma):
"""Calculate the relativistic Breit-Wigner PDF.
Args:
x: value(s) for which the PDF will be calculated.
m: Mean value
gamma: width
Returns:
`tf.Tensor`: The calculated PDF values.
Notes:
Based on code from this [github gist](https://gist.github.com/andrewfowlie/cd0ed7e6c96f7c9e88f85eb3b9665b97#file-bw-py-L87-L110)
"""
x = z.unstack_x(x)
alpha = gamma / m
gamma2 = m**2 * (1.0 + alpha**2) ** 0.5
k = 2.0 ** (3.0 / 2.0) * m**2 * alpha * gamma2 / (np.pi * (m**2 + gamma2) ** 0.5)
return k / ((x**2 - m**2) ** 2 + m**4 * alpha**2)
class RelativisticBreitWigner(zfit.pdf.BasePDF):
_N_OBS = 1
def __init__(
self,
m: ztyping.ParamTypeInput,
gamma: ztyping.ParamTypeInput,
obs: ztyping.ObsTypeInput,
name: str = "RelativisticBreitWigner",
extended: Optional[ztyping.ParamTypeInput] = None,
):
"""Relativistic Breit-Wigner distribution.
Formula for PDF and CDF are based on https://gist.github.com/andrewfowlie/cd0ed7e6c96f7c9e88f85eb3b9665b97
Args:
m: the average value
gamma: the width of the distribution
"""
params = dict(m=m, gamma=gamma)
super().__init__(obs=obs, params=params, name=name, extended=extended)
def _unnormalized_pdf(self, x: tf.Tensor) -> tf.Tensor:
"""Calculate the PDF at value(s) x.
Args:
x : Either one value or an array
Returns:
`tf.Tensor`: The value(s) of the unnormalized PDF at x.
"""
return relbw_pdf_func(x, m=self.params["m"], gamma=self.params["gamma"])
@z.function(wraps="tensor")
def arctan_complex(x):
r"""Function that evaluates arctan(x) using tensorflow but also supports complex numbers. It is defined as.
.. math::
\mathrm{arctan}(x) = \frac{i}{2} \left(\ln(1-ix) - \ln(1+ix)\right)
Args:
x: tf.Tensor
Returns:
.. math:: \mathrm{arctan}(x)
Notes:
Formula is taken from https://www.wolframalpha.com/input/?i=arctan%28a%2Bb*i%29
TODO: move somewhere?
"""
return 1 / 2 * 1j * (tf.math.log(1 - 1j * x) - tf.math.log(1 + 1j * x))
@z.function(wraps="tensor")
def relbw_cdf_func(x, m, gamma):
"""Analytical function for the CDF of the relativistic Breit-Wigner distribution.
Args:
x: value(s) for which the CDF will be calculated.
m: Mean value
gamma: width
Returns:
`tf.Tensor`: The calculated CDF values.
Notes:
Based on code from this [github gist](https://gist.github.com/andrewfowlie/cd0ed7e6c96f7c9e88f85eb3b9665b97#file-bw-py-L112-L154)
"""
gamma = z.to_complex(gamma)
m = z.to_complex(m)
x = z.to_complex(z.unstack_x(x))
alpha = gamma / m
gamma2 = m**2 * (1.0 + alpha**2) ** 0.5
k = 2.0 ** (3.0 / 2.0) * m**2 * alpha * gamma2 / (np.pi * (m**2 + gamma2) ** 0.5)
arg_1 = z.to_complex(-1) ** (1.0 / 4.0) / (-1j + alpha) ** 0.5 * x / m
arg_2 = z.to_complex(-1) ** (3.0 / 4.0) / (1j + alpha) ** 0.5 * x / m
shape = -1j * arctan_complex(arg_1) / (-1j + alpha) ** 0.5 - arctan_complex(arg_2) / (1j + alpha) ** 0.5
norm = z.to_complex(-1) ** (1.0 / 4.0) * k / (2.0 * alpha * m**3)
cdf_ = shape * norm
cdf_ = z.to_real(cdf_)
return cdf_
def relbw_integral(limits: ztyping.SpaceType, params: dict, model) -> tf.Tensor:
"""Calculates the analytic integral of the relativistic Breit-Wigner PDF.
Args:
limits: An object with attribute rect_limits.
params: A hashmap from which the parameters that defines the PDF will be extracted.
model: Will be ignored.
Returns:
The calculated integral.
"""
lower, upper = limits.rect_limits
lower_cdf = relbw_cdf_func(x=lower, m=params["m"], gamma=params["gamma"])
upper_cdf = relbw_cdf_func(x=upper, m=params["m"], gamma=params["gamma"])
return upper_cdf - lower_cdf
# These lines of code adds the analytic integral function to RelativisticBreitWigner PDF.
relbw_integral_limits = Space(axes=(0,), limits=(((ANY_LOWER,),), ((ANY_UPPER,),)))
RelativisticBreitWigner.register_analytic_integral(func=relbw_integral, limits=relbw_integral_limits) | zfit-physics | /zfit_physics-0.6.0.tar.gz/zfit_physics-0.6.0/zfit_physics/models/pdf_relbw.py | pdf_relbw.py |
|zfit_logo|
*******************************
zfit: scalable pythonic fitting
*******************************
.. image:: https://scikit-hep.org/assets/images/Scikit--HEP-Affiliated-blue.svg
:target: https://scikit-hep.org
.. image:: https://img.shields.io/pypi/v/zfit.svg
:target: https://pypi.python.org/pypi/zfit
.. image:: https://img.shields.io/conda/vn/conda-forge/zfit
:alt: conda-forge
:target: https://anaconda.org/conda-forge/zfit
.. image:: https://github.com/zfit/zfit/workflows/CI/badge.svg
:target: https://github.com/zfit/zfit/actions
.. image:: https://github.com/zfit/zfit/workflows/build/badge.svg
:target: https://github.com/zfit/zfit/actions
.. image:: https://coveralls.io/repos/github/zfit/zfit/badge.svg?branch=meta_changes
:target: https://coveralls.io/github/zfit/zfit?branch=meta_changes
.. image:: https://www.codefactor.io/repository/github/zfit/zfit/badge
:target: https://www.codefactor.io/repository/github/zfit/zfit
:alt: CodeFactor
.. |zfit_logo| image:: docs/images/zfit-logo_hires.png
:target: https://github.com/zfit/zfit
:alt: zfit logo
.. |scikit-hep_logo| image:: docs/images/scikit-hep-logo_168x168.png
:target: https://scikit-hep.org/affiliated
:alt: scikit-hep logo
zfit is a highly scalable and customizable model manipulation and fitting library. It uses
`TensorFlow <https://www.tensorflow.org/>`_ as its computational backend
and is optimised for simple and direct manipulation of probability density functions. The project is affiliated with
and well integrated into `Scikit-HEP <https://scikit-hep.org/>`_, the HEP Python ecosystem.
- **Tutorials**: `Interactive introduction and tutorials <https://zfit-tutorials.readthedocs.io/en/latest/>`_
- **Quick start**: `Example scripts <examples>`_
- **Documentation**: See `stable documentation`_ or `latest documentation`_
- **Questions**: see the `FAQ <https://github.com/zfit/zfit/wiki/FAQ>`_,
`ask on StackOverflow <https://stackoverflow.com/questions/ask?tags=zfit>`_ (with the **zfit** tag) or `contact`_ us directly.
- **Physics, HEP**: `zfit-physics <https://github.com/zfit/zfit-physics>`_ is the place to contribute and find more HEP
related content
- **Statistical inference**: `hepstats <https://github.com/scikit-hep/hepstats>`_ for limits, CI, sWeights and more
If you use zfit in **research**, please
consider `citing <https://www.sciencedirect.com/science/article/pii/S2352711019303851>`_.
*N.B.*: zfit is currently in *beta stage*, so while most core parts are established,
some may still be missing and bugs may be encountered.
It is, however, mostly ready for production and is being used in analysis projects.
If you want to use it for your project and you are not sure if all the needed functionality is there,
feel free to `contact`_.
Installation
=============
zfit is available on pip. To install it (recommended: use a virtual/conda env!) with all the dependencies (minimizers, uproot, ...), use
.. code-block:: bash
pip install -U zfit[all]
(the ``-U`` just indicates to upgrade zfit, in case you have it already installed)
or for minimal dependencies
.. code-block:: bash
pip install zfit
Why?
====
The basic idea behind zfit is to offer a Python-oriented alternative to the very successful RooFit library
from the `ROOT <https://root.cern.ch/>`_ data analysis package that can integrate with the other packages
that are part of the scientific Python ecosystem.
Contrary to the monolithic approach of ROOT/RooFit, the aim of zfit is to be light and flexible enough
to integrate with any state-of-the-art tools and to allow scalability going to larger datasets.
These core ideas are supported by two basic pillars:
- The skeleton and extension of the code is minimalist, simple and finite:
the zfit library is exclusively designed for the purpose of model fitting and sampling with no attempt to extend its
functionalities to features such as statistical methods or plotting.
- zfit is designed for optimal parallelisation and scalability by making use of TensorFlow as its backend.
The use of TensorFlow provides crucial features in the context of model fitting like taking care of the
parallelisation and analytic derivatives.
How to use
==========
While the zfit library provides a model fitting and sampling framework for a broad list of applications,
we will illustrate its main features with a simple example by fitting a Gaussian distribution with an unbinned
likelihood fit and a parameter uncertainty estimation.
Example in short
----------------
.. code-block:: python
obs = zfit.Space('x', limits=(-10, 10))
# create the model
mu = zfit.Parameter("mu" , 2.4, -1, 5)
sigma = zfit.Parameter("sigma", 1.3, 0, 5)
gauss = zfit.pdf.Gauss(obs=obs, mu=mu, sigma=sigma)
# load the data
data_np = np.random.normal(size=10000)
data = zfit.Data.from_numpy(obs=obs, array=data_np)
# build the loss
nll = zfit.loss.UnbinnedNLL(model=gauss, data=data)
# minimize
minimizer = zfit.minimize.Minuit()
result = minimizer.minimize(nll)
# calculate errors
param_errors = result.hesse()
This follows the zfit workflow
.. image:: docs/images/zfit_workflow_v2.png
:alt: zfit workflow
Full explanation
----------------
The default space (e.g. normalization range) of a PDF is defined by an *observable space*, which is created using the ``zfit.Space`` class:
.. code-block:: python
obs = zfit.Space('x', limits=(-10, 10))
To create a simple Gaussian PDF, we define its parameters and their limits using the ``zfit.Parameter`` class.
.. code-block:: python
# syntax: zfit.Parameter("any_name", value, lower, upper)
mu = zfit.Parameter("mu" , 2.4, -1, 5)
sigma = zfit.Parameter("sigma", 1.3, 0, 5)
gauss = zfit.pdf.Gauss(obs=obs, mu=mu, sigma=sigma)
For simplicity, we create the dataset to be fitted starting from a numpy array, but zfit allows for the use of other sources such as ROOT files:
.. code-block:: python
mu_true = 0
sigma_true = 1
data_np = np.random.normal(mu_true, sigma_true, size=10000)
data = zfit.Data.from_numpy(obs=obs, array=data_np)
Fits are performed in three steps:
1. Creation of a loss function, in our case a negative log-likelihood.
2. Instantiation of our minimiser of choice, in the example the ``Minuit``.
3. Minimisation of the loss function.
.. code-block:: python
# Stage 1: create an unbinned likelihood with the given PDF and dataset
nll = zfit.loss.UnbinnedNLL(model=gauss, data=data)
# Stage 2: instantiate a minimiser (in this case a basic minuit)
minimizer = zfit.minimize.Minuit()
# Stage 3: minimise the given negative log-likelihood
result = minimizer.minimize(nll)
Errors are calculated with a further function call to avoid running potentially expensive operations if not needed:
.. code-block:: python
param_errors = result.hesse()
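Profile-likelihood (asymmetric) uncertainties can be obtained in a similar way; a sketch (the exact return structure of ``errors`` may differ between versions):

.. code-block:: python

    param_errors_asym, _ = result.errors()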
Once we've performed the fit and obtained the corresponding uncertainties, we can examine the fit results:
.. code-block:: python
print("Function minimum:", result.fmin)
print("Converged:", result.converged)
print("Full minimizer information:", result)
# Information on all the parameters in the fit
params = result.params
print(params)
# Printing information on specific parameters, e.g. mu
print("mu={}".format(params[mu]['value']))
And that's it!
For more details and information of what you can do with zfit, checkout the `latest documentation`_.
Prerequisites
=============
``zfit`` works with Python versions 3.8, 3.9, 3.10 and 3.11.
The following packages (amongst others) are required:
- `tensorflow <https://www.tensorflow.org/>`_ >= 2.6
- `tensorflow_probability <https://www.tensorflow.org/probability>`_
- `scipy <https://www.scipy.org/>`_ >=1.2
- `uproot <https://github.com/scikit-hep/uproot>`_
- `iminuit <https://github.com/scikit-hep/iminuit>`_
... and some other packages. For a full list, check the `requirements <requirements.txt>`_.
Installing
==========
zfit is currently *only available on pip*. The **conda version is highly outdated and should not be used**.
If possible, use a conda or virtual environment and do:
.. code-block:: console
$ pip install zfit
For the newest development version, you can install the version from git with
.. code-block:: console
$ pip install git+https://github.com/zfit/zfit
Contributing
============
Any idea of how to improve the library? Or interested to write some code?
Contributions are always welcome, please have a look at the `Contributing guide`_.
.. _Contributing guide: CONTRIBUTING.rst
Contact
========
You can contact us directly:
- via e-mail: [email protected]
- join our `Gitter channel <https://gitter.im/zfit/zfit>`_
Original Authors
================
| Jonas Eschle <[email protected]>
| Albert Puig <[email protected]>
| Rafael Silva Coutinho <[email protected]>
See here for `all authors and contributors`_
.. _all authors and contributors: AUTHORS.rst
Acknowledgements
================
zfit has been developed with support from the University of Zurich and the Swiss National Science Foundation (SNSF) under contracts 168169 and 174182.
The idea of zfit is inspired by the `TensorFlowAnalysis <https://gitlab.cern.ch/poluekt/TensorFlowAnalysis>`_ framework
developed by Anton Poluektov and `TensorProb <https://github.com/tensorprob/tensorprob>`_ by Chris Burr and Igor Babuschkin
using the TensorFlow open source library and more libraries.
.. _`latest documentation`: https://zfit.readthedocs.io/en/latest/
.. _`stable documentation`: https://zfit.readthedocs.io/en/stable/
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/README.rst | README.rst |
.. highlight:: shell
=================
How to contribute
=================
Contributions are welcome, and they are greatly appreciated! Every little bit
helps, and credit will always be given.
* You can report bugs at https://github.com/zfit/zfit/issues.
* You can send feedback by filing an issue at https://github.com/zfit/zfit/issues or,
for more informal discussions, you can also join our `Gitter channel <https://gitter.im/zfit/zfit>`_.
Get Started!
------------
Ready to contribute? Here's how to set up *zfit* for local development.
1. Fork the *zfit* repo on GitHub.
2. Clone your fork locally::
$ git clone [email protected]:your_name_here/zfit.git
3. Install your local copy into a virtualenv. Assuming you have virtualenvwrapper installed, this is how you set up your fork for local development::
$ mkvirtualenv zfit
$ cd zfit/
$ pip install -e .[alldev] # (or [dev] if this fails)
4. Create a branch for local development::
$ git checkout -b name-of-your-bugfix-or-feature
Now you can make your changes locally.
5. When you're done making changes, check that your changes pass the
tests (this can take a while ~30 mins)::
$ pytest
6. Commit your changes and push your branch to GitHub::
$ git add .
$ git commit -m "Your detailed description of your changes."
$ git push origin name-of-your-bugfix-or-feature
7. Submit a pull request through the GitHub website. The test suite is going
to run again, testing all the necessary Python versions.
Pull Request Guidelines
-----------------------
Before you submit a pull request, check that it meets these guidelines:
1. The pull request should include tests.
2. If the pull request adds functionality, the docs may need to be updated. Put
your new functionality into a function with a docstring (and add the
necessary explanations in the corresponding rst file in the docs).
If any math is involved, please document the exact formulae implemented
in the docstring/docs.
3. The pull request should work for all Python versions. Check
https://travis-ci.org/zfit/zfit/pull_requests
and make sure that the tests pass for all supported Python versions.
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/CONTRIBUTING.rst | CONTRIBUTING.rst |
*********
Changelog
*********
.. _newest-changelog:
Develop
========================
Major Features and Improvements
-------------------------------
Breaking changes
------------------
Deprecations
-------------
Bug fixes and small changes
---------------------------
Experimental
------------
Requirement changes
-------------------
Thanks
------
0.15.5 (26 July 2023)
========================
Bug fixes and small changes
---------------------------
- fix a bug in histmodifier that would not properly take into account the yield of the wrapped PDF
0.15.2 (20 July 2023)
========================
Fix missing ``attrs`` dependency
Major Features and Improvements
-------------------------------
- add option ``full`` in loss to return the full, unoptimized value (currently not default), allowing for easier statistical tests using the loss
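A sketch of how the new option can be used (assuming an existing ``model`` and ``data``; the ``full`` keyword belongs to ``loss.value``):

.. code-block:: python

    nll = zfit.loss.UnbinnedNLL(model=model, data=data)
    value_default = nll.value()        # potentially with constant terms removed
    value_full = nll.value(full=True)  # full, unoptimized value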
0.15.0 (13 July 2023)
========================
Update to TensorFlow 2.13.x
Requirement changes
-------------------
- TensorFlow upgraded to ~=2.13.0
- as TF 2.13.0 ships with arm64 macOS wheels, the requirement of ``tensorflow_macos`` is removed
Thanks
------
- Iason Krommydas for helping with the macos requirements for TF
0.14.1 (1 July 2023)
========================
Major Features and Improvements
-------------------------------
- zfit broke with the upgrade to pydantic 2.
Requirement changes
-------------------
- restrict pydantic to <2.0.0
0.14.0 (22 June 2023)
========================
Major Features and Improvements
-------------------------------
- support for Python 3.11, dropped support for Python 3.7
Bug fixes and small changes
---------------------------
- fix a longstanding bug in parameter caching
Requirement changes
-------------------
- update to TensorFlow 2.12
- removed tf_quant_finance
0.13.2 (15. June 2023)
========================
Bug fixes and small changes
---------------------------
- fix a caching problem with parameters (could cause issues with larger PDFs as params would be "remembered" wrongly)
- more helpful error message when jacobian (as used for weighted corrections) is analytically asked but fails
- make analytical gradient for CB integral work
0.13.1 (20 Apr 2023)
========================
Bug fixes and small changes
---------------------------
- array bandwidth for KDE works now correctly
Requirement changes
-------------------
- fixed uproot for Python 3.7 to <5
Thanks
------
- @schmitse for reporting and solving the bug in the KDE bandwidth with arrays
0.13.0 (19 April 2023)
========================
Major Features and Improvements
-------------------------------
This is the last version supporting Python 3.7.
Bug fixes and small changes
---------------------------
- ``SampleData`` is not used anymore, a ``Data`` object is returned (for simple sampling). The ``create_sampler`` will still return a ``SamplerData`` object though as this differs from ``Data``.
Experimental
------------
- Added support, on a best-effort basis, for human-readable serialization of objects including an HS3-like representation; find a `tutorial on serialization here <https://zfit-tutorials.readthedocs.io/en/latest/tutorials/components/README.html#serialization>`_. Most built-in unbinned PDFs are supported. This is still experimental and not yet fully supported. Dumping can be performed safely, but loading may easily break (also between versions), so do not rely on it yet. Everything else - apart from trying to dump - should only be used for playing around and for giving feedback.
Requirement changes
-------------------
- allow uproot 5 (remove previous restriction)
Thanks
------
- to Johannes Lade for the amazing work on the serialization, which made this HS3 implementation possible!
0.12.1 (1 April 2023)
========================
Bug fixes and small changes
---------------------------
- added ``extended`` as a parameter to all PDFs: a PDF can now directly be extended without the need for
``create_extended`` (or ``set_yield``).
- ``to_pandas`` and ``from_pandas`` now also support weights as columns. Default column name is ``""``.
- add ``numpy`` and ``backend`` to options when setting the seed
- reproducibility by fixing the seed in zfit is restored, ``zfit.run.set_seed`` now also sets the seed for the backend(numpy, tensorflow, etc.) if requested (on by default)
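A minimal sketch of the seeding behaviour described above:

.. code-block:: python

    import zfit

    zfit.run.set_seed(42)  # by default also seeds numpy and the TensorFlow backend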
Thanks
------
- Sebastian Schmitt @schmitse for reporting the bug in the non-reproducibility of the seed.
0.12.0 (13 March 2023)
========================
Bug fixes and small changes
---------------------------
- ``create_extended`` added ``None`` to the name; this has been removed.
- ``SimpleConstraint`` now also takes a function that has an explicit ``params`` argument.
- add ``name`` argument to ``create_extended``.
- adding binned losses would error due to the removed ``fit_range`` argument.
- setting a global seed made the sampler return constant values, fixed (unoptimized but correct). If you ran
a fit with a global seed, you might want to rerun it.
- histogramming and limit checks failed due to a stricter Numpy check, fixed.
Thanks
------
- @P-H-Wagner for finding the bug in ``SimpleConstraint``.
- Dan Johnson for finding the bug in the binned loss that would fail to sum them up.
- Hanae Tilquin for spotting the bug with TensorFlow's changed behavior of random states inside a tf.function,
leading to biased samples whenever a global seed was set.
0.11.1 (20 Nov 2022)
=========================
Hotfix for wrong import
0.11.0 (29 Nov 2022)
========================
Major Features and Improvements
-------------------------------
- columns of unbinned ``data`` can be accessed with the obs like a mapping (like a dataframe)
- speedup builtin ``errors`` method and make it more robust
Breaking changes
------------------
- ``Data`` can no longer be used directly as an array-like object but got mapping-like behavior.
- some old deprecated methods were removed
Bug fixes and small changes
---------------------------
- improved caching speed, reduced tradeoff against memory
- yields were not added correctly in some (especially binned) PDFs and the fit would fail
Requirement changes
-------------------
- add jacobi (many thanks at @HansDembinski for the package)
0.10.1 (31 Aug 2022)
========================
Major Features and Improvements
-------------------------------
- reduce the memory footprint of (some) fits, especially repetitive ones (in loops).
This reduces the number of cached compiled functions. The cache size can be set with
``zfit.run.set_cache_size(int)``
and specifies the number of compiled functions that are kept in memory. The default is 10, but
this can be tuned. Lower values can reduce memory usage, but potentially increase runtime.
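For example (a sketch):

.. code-block:: python

    import zfit

    zfit.run.set_cache_size(5)  # keep at most 5 compiled functions in memory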
Bug fixes and small changes
---------------------------
- Enable uniform binning for n-dimensional distributions with integer(s).
- Sum of histograms failed when calling the pdf method (possibly indirectly); it integrated over the wrong axis.
- Binned PDFs expected binned spaces for limits, now unbinned limits are also allowed and automatically
converted to binned limits using the PDFs binning.
- Speedup sampling of binned distributions.
- add ``to_binned`` and ``to_unbinned`` methods to PDF
Thanks
------
- Justin Skorupa for finding the bug in the sum of histograms and the missing automatic
conversion of unbinned spaces to binned spaces.
0.10.0 (22. August 2022)
========================
Public release of binned fits and upgrade to Python 3.10 and TensorFlow 2.9.
Major Features and Improvements
-------------------------------
- improved data handling in constructors ``from_pandas`` (which now allows
weights as columns and dataframes that are a superset of the obs) and
``from_root`` (obs can now be spaces and therefore cuts can be directly applied)
- add hashing of unbinned datasets with a ``hashint`` attribute. None if no hash was possible.
Breaking changes
------------------
Deprecations
-------------
Bug fixes and small changes
---------------------------
- SimpleLoss correctly supports both functions with implicit and explicit parameters, also if they
are decorated.
- extended sampling errored for some cases of binned PDFs.
- ``ConstantParameter`` errored when converted to numpy.
- Simultaneous binned fits could error with different binning due to a missing sum over
a dimension.
- improved stability in loss evaluation of constraints and poisson/chi2 loss.
- reduce gradient evaluation time in ``errors`` for many parameters.
- Speedup of Parameter value assignment in fits, which is most noticeable when the parameter update time is
comparable to the fit evaluation time, such as is the case for binned fits with many nuisance
parameters.
- fix: ipyopt was not pickleable in a FitResult
- treat parameters sometimes as "stateless", possibly reducing the number of retraces and reducing the
memory footprint.
Experimental
------------
Requirement changes
-------------------
- nlopt and ipyopt are now optional dependencies.
- Python 3.10 added
- TensorFlow >= 2.9.0, <2.11 is now required and the corresponding TensorFlow-Probability
version >= 0.17.0, <0.19.0
Thanks
------
- @YaniBion for discovering the bug in the extended sampling and testing the alpha release
- @ResStump for reporting the bug with the simultaneous binned fit
0.9.0a2
========
Major Features and Improvements
-------------------------------
- Save results by pickling, unpickling a frozen (``FitResult.freeze()``) result and using
``zfit.param.set_values(params, result)`` to set the values of ``params``.
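A sketch of this workflow (the file name and the ``params`` list are illustrative):

.. code-block:: python

    import pickle

    result.freeze()
    with open("result.pkl", "wb") as f:
        pickle.dump(result, f)

    with open("result.pkl", "rb") as f:
        loaded_result = pickle.load(f)
    zfit.param.set_values(params, loaded_result)  # restore the parameter values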
Deprecations
-------------
- the default name of the uncertainty methods ``hesse`` and ``errors`` depended on
the method used (such as 'minuit_hesse', 'zfit_errors' etc.) and would be the exact method name.
New names are now 'hesse' and 'errors', independent of the method used. This reflects better that the
methods, while internally different, produce the same result.
To update, use 'hesse' instead of 'minuit_hesse' or 'hesse_np' and 'errors' instead of 'zfit_errors'
or 'minuit_minos' in order to access the uncertainties in the fitresult.
Currently, the old names are still available for backwards compatibility.
If a name was explicitly chosen in the error method, nothing changed.
Bug fixes and small changes
---------------------------
- KDE datasets are now correctly mirrored around observable space limits
- multinomial sampling would return wrong results when invoked multiple times in graph mode due to
a non-dynamic shape. This is fixed and the sampling is now working as expected.
- increase precision in FitResult string representation and add that the value is rounded
Thanks
------
- schmitse for finding and fixing a mirroring bug in the KDEs
- Sebastian Bysiak for finding a bug in the multinomial sampling
0.9.0a0
========
Major Features and Improvements
-------------------------------
- Binned fits support, although limited in content, is here! This includes BinnedData, binned PDFs, and
binned losses. TODO: extend to include changes/point to binned introduction.
- new Poisson PDF
- added Poisson constraint, LogNormal Constraint
- Save results by pickling, unpickling a frozen (``FitResult.freeze()``) result and using
``zfit.param.set_values(params, result)`` to set the values of ``params``.
Breaking changes
------------------
- params given in ComposedParameters are not sorted anymore. Rely on their name instead.
- ``norm_range`` is now called ``norm`` and should be replaced everywhere if possible. This will break in
the future.
Deprecation
-------------
Bug fixes and small changes
---------------------------
- remove warning when using ``rect_limits`` or similar.
- gauss integral accepts now also tensor inputs in limits
- parameters at limits are now shown correctly
Experimental
------------
Requirement changes
-------------------
- add TensorFlow 2.7 support
Thanks
------
0.8.3 (5 Apr 2022)
===================
- fixate nlopt to < 2.7.1
0.8.2 (20 Sep 2021)
====================
Bug fixes and small changes
---------------------------
- fixed a longstanding bug in the DoubleCB implementation of the integral.
- remove outdated deprecations
0.8.1 (14. Sep. 2021)
======================
Major Features and Improvements
-------------------------------
- allow ``FitResult`` to ``freeze()``, making it pickleable. The parameters
are replaced by their name, the objects such as loss and minimizer as well.
- improve the numerical integration by adding an efficient one-dimensional integrator and testing for the accuracy of
multidimensional integrals. If there is a sharp peak, this may fail to integrate and the number of points
has to be manually raised
- add highly performant kernel density estimation (mainly contributed by Marc Steiner)
in 1 dimension, which allows
for the choice of arbitrary kernels, supports
boundary mirroring of the data and allows for large (millions of) data samples:
- :class:`~zfit.pdf.KDE1DimExact` for the normal density estimation
- :class:`~zfit.pdf.KDE1DimGrid` using a binning
- :class:`~zfit.pdf.KDE1DimFFT` using a binning and FFT
- :class:`~zfit.pdf.KDE1DimISJ` using a binning and an algorithm (ISJ) to solve the optimal bandwidth
For an introduction, see either :ref:`sec-kernel-density-estimation` or the tutorial :ref:`sec-components-model`
- add windows in CI
Breaking changes
------------------
- the numerical integration improved with more sensible values for tolerance. This means, however, that some fits will
greatly increase in runtime. To restore the old behavior, call
``pdf.update_integration_options(draws_per_dim=40_000, max_draws=40_000, tol=1)`` on each instance.
This will integrate regardless of the chosen precision and may be non-optimal.
However, the precision estimate in the integrator is also not perfect and maybe overestimates the error, so that
the integration by default takes longer than necessary. Feel free to play around with the parameters and report back.
Bug fixes and small changes
---------------------------
- Double crystallball: move a minus sign down, vectorize the integral, fix wrong output shape of pdf
- add a minimal value in the loss to avoid NaNs when taking the log of 0
- improve feedback when taking the derivative with respect to a parameter that
a function does not depend on or if the function is purely Python.
- make parameters deletable, especially it works now to create parameters in a function only
and no NameAlreadyTakenError will be thrown.
Requirement changes
-------------------
- add TensorFlow 2.6 support (now 2.5 and 2.6 are supported)
Thanks
------
- Marc Steiner for contributing many new KDE methods!
0.7.2 (7. July 2021)
======================
Bug fixes and small changes
---------------------------
- fix wrong arguments to ``minimize``
- make BaseMinimizer arguments optional
0.7.1 (6. July 2021)
======================
Bug fixes and small changes
---------------------------
- make loss callable with array arguments and therefore compatible with iminuit cost functions.
- fix a bug that allowed FitResults to be valid that are actually invalid (reported by Maxime Schubiger).
0.7.0 (03 Jun 2021)
=====================
Major Features and Improvements
-------------------------------
- add Python 3.9 support
- upgrade to TensorFlow 2.5
Bug fixes and small changes
---------------------------
- Scipy minimizers with hessian arguments now use ``BFGS`` as default
Requirement changes
-------------------
- remove Python 3.6 support
- boost-histogram
0.6.6 (12.05.2021)
==================
Update ipyopt requirement to < 0.12 to allow a numpy version compatible with TensorFlow
0.6.5 (04.05.2021)
==================
- hotfix for wrong argument in exponential PDF
- removed requirement ipyopt, can be installed with ``pip install zfit[ipyopt]``
or by manually installing ``pip install ipyopt``
0.6.4 (16.4.2021)
==================
Bug fixes and small changes
---------------------------
- remove requirement of Ipyopt on MacOS as no wheels are available. This rendered zfit
basically non-installable.
0.6.3 (15.4.2021)
==================
Bug fixes and small changes
---------------------------
- fix loss failed for large datasets
- catch hesse failing for iminuit
0.6.2
========
Minor small fixes.
Bug fixes and small changes
---------------------------
- add ``loss`` to callback signature that gives full access to the model
- add :meth:`~zfit.loss.UnbinnedNLL.create_new` to losses in order to re-instantiate
them with new models and data
preserving their current (and future) options and other arguments
0.6.1 (31.03.2021)
===================
Release to fix minimizers that performed too badly
Breaking changes
------------------
- remove badly performing Scipy minimizers :class:`~zfit.minimize.ScipyTrustKrylovV1` and
:class:`~zfit.minimize.ScipyTrustNCGV1`
Bug fixes and small changes
---------------------------
- fix auto conversion to complex parameter using constructor
0.6.0 (30.3.2021)
===================
Added many new minimizers from different libraries, all with uncertainty estimation available.
Major Features and Improvements
-------------------------------
- upgraded to TensorFlow 2.4
- Added many new minimizers. A full list can be found in :ref:`minimize_user_api`.
- :class:`~zfit.minimize.IpyoptV1` that wraps the powerful Ipopt large scale minimization library
- Scipy minimizers now have their own, dedicated wrapper for each instance such as
:class:`~zfit.minimize.ScipyLBFGSBV1`, or :class:`~zfit.minimize.ScipySLSQPV1`
- NLopt library wrapper that contains many algorithms for local searches such as
:class:`~zfit.minimize.NLoptLBFGSV1`, :class:`~zfit.minimize.NLoptTruncNewtonV1` or
:class:`~zfit.minimize.NLoptMMAV1` but also includes more global minimizers such as
:class:`~zfit.minimize.NLoptMLSLV1` and :class:`~zfit.minimize.NLoptESCHV1`.
- Completely new and overhauled minimizers design, including:
- minimizers can now be used with arbitrary Python functions and an initial array independent of zfit
- a minimization can be 'continued' by passing ``init`` to ``minimize``
- more streamlined arguments for minimizers, harmonized names and behavior.
- Adding a flexible criterion (currently EDM) that will terminate the minimization.
- Making the minimizer fully stateless.
- Moving the loss evaluation and strategy into a LossEval that simplifies the handling of printing and NaNs.
- Callbacks are added to the strategy.
- Major overhaul of the ``FitResult``, including:
- improved ``zfit_error`` (equivalent of ``MINOS``)
- ``minuit_hesse`` and ``minuit_minos`` are now available with all minimizers as well thanks to a great
improvement in iminuit.
- Added an ``approx`` hesse that returns the approximate hessian (if available, otherwise empty)
- upgrade to iminuit v2 changes the way it works and also the Minuit minimizer in zfit,
including a new step size heuristic.
Possible problems can be caused by iminuit itself, please report
in case your fits don't converge anymore.
- improved ``compute_errors`` in speed by caching values and the reliability
by making the solution unique.
- increased stability for large datasets with a constant subtraction in the NLL
Breaking changes
------------------
- NLL (and extended) subtracts now by default a constant value. This can be changed with a new ``options`` argument.
  COMPARISON OF DIFFERENT NLLs (their absolute values) fails now! (the flag can be deactivated)
- BFGS (from TensorFlow Probability) has been removed as it is not working properly. There are many alternatives
such as ScipyLBFGSV1 or NLoptLBFGSV1
- Scipy (the minimizer) has been removed. Use specialized ``Scipy*`` minimizers instead.
- Creating a ``zfit.Parameter``, using ``set_value`` or ``set_values`` now raises a ``ValueError``
  if the value is outside the limits. Use ``assign`` to suppress it.
Deprecation
-------------
- strategy to minimizer should now be a class, not an instance anymore.
Bug fixes and small changes
---------------------------
- ``zfit_error`` moved only one parameter to the correct initial position. Speedup and more reliable.
- FFTconv was shifted if the kernel limits were not symmetrical; this is now properly taken into account.
- circumvent overflow error in sampling
- shuffle samples from sum pdfs to ensure uniformity and remove conv sampling bias
- ``create_sampler`` now samples immediately to allow for precompile, a new hook that will allow objects to optimize
themselves.
Requirement changes
-------------------
- ipyopt
- nlopt
- iminuit>=2.3
- tensorflow ~= 2.4
- tensorflow-probability~=0.12
For devs:
- pre-commit
- pyyaml
- docformatter
Thanks
------
- Hans Dembinski for the help on the upgrade to iminuit v2
- Thibaud Humair for helpful remarks on the parameters
0.5.6 (26.1.2020)
=================
Update to fix iminuit version
Bug fixes and small changes
---------------------------
- Fix issue when using a ``ComposedParameter`` as the ``rate`` argument of a ``Poisson`` PDF
Requirement changes
-------------------
- require iminuit < 2 to avoid breaking changes
0.5.5 (20.10.2020)
==================
Upgrade to TensorFlow 2.3 and support for weighted hessian error estimation.
Added a one dimensional Convolution PDF
Major Features and Improvements
-------------------------------
- upgrade to TensorFlow 2.3
Breaking changes
------------------
Deprecation
-------------
Bug fixes and small changes
---------------------------
- print parameter inside function context works now correctly
Experimental
------------
- Computation of the covariance matrix and hessian errors with weighted data
- Convolution PDF (FFT in 1Dim) added (experimental, feedback welcome!)
Requirement changes
-------------------
- TensorFlow==2.3 (before 2.2)
- tensorflow_probability==0.11
- tensorflow-addons # spline interpolation in convolution
Thanks
------
0.5.4 (16.07.2020)
==================
Major Features and Improvements
-------------------------------
- completely new doc design
Breaking changes
------------------
- Minuit uses its own, internal gradient by default. To change this back, use ``use_minuit_grad=False``
- ``minimize(params=...)`` now filters correctly non-floating parameters.
- ``z.log`` has been moved to ``z.math.log`` (following TF)
Bug fixes and small changes
---------------------------
- ncalls now correctly uses the internal heuristic or the explicitly given ncalls
- ``minimize(params=...)`` automatically extracts independent parameters.
- fix copy issue of KDEV1 and change name to 'adaptive' (instead of 'adaptiveV1')
- change the exponential PDF parameter name from ``lambda_`` to ``lam`` (in init)
- add ``set_yield`` to BasePDF to allow setting the yield in place
- Fix possible bug in SumPDF with extended pdfs (automatically)
Experimental
------------
Requirement changes
-------------------
- upgrade to iminuit>=1.4
- remove cloudpickle hack fix
Thanks
------
Johannes for the docs re-design
0.5.3 (02.07.20)
================
Kernel density estimation for 1 dimension.
Major Features and Improvements
-------------------------------
- add correlation method to FitResult
- Gaussian (truncated) kernel density estimation in one dimension: ``zfit.pdf.GaussianKDE1DimV1``, with fixed and
  adaptive bandwidth, added as V1. This
  is a feature that needs to be improved and feedback is welcome.
- Non-relativistic Breit-Wigner PDF, called Cauchy, implementation added.
Breaking changes
------------------
- change human-readable name of ``Gauss``, ``Uniform`` and ``TruncatedGauss`` to remove the ``'_tfp'`` at the end of the name
Bug fixes and small changes
---------------------------
- fix wrong color in printout of results, params
- packaging: moved to pyproject.toml and a setup.cfg mainly, development requirements can
be installed with the ``dev`` extra as (e.g.) ``pip install zfit[dev]``
- Fix shape issue in TFP distributions for partial integration
- change zfit internal algorithm (``zfit_error``) to compute errors/intervals from the profile likelihood,
  which is 2-3 times faster than the previous algorithm.
- add ``from_minuit`` constructor to ``FitResult`` allowing to create it when
using directly iminuit
- fix possible bias with sampling using accept-reject
Requirement changes
-------------------
- pin down cloudpickle version (upstream bug with pip install) and TF, TFP versions
0.5.2 (13.05.2020)
==================
Major Features and Improvements
-------------------------------
- Python 3.8 and TF 2.2 support
- easier debugging with ``set_graph_mode`` that can also be used temporarily
  with a context manager. False will make everything execute Numpy-like.
Bug fixes and small changes
---------------------------
- added ``get_params`` to loss
- fix a bug with the ``fixed_params`` when creating a sampler
- improve exponential PDF stability and shift when normalized
- improve accept reject sampling to account for low statistics
Requirement changes
-------------------
- TensorFlow >= 2.2
0.5.1 (24.04.2020)
==================
(0.5.0 was skipped)
Complete refactoring of Spaces to allow arbitrary function.
New, more consistent behavior with extended PDFs.
SumPDF refactoring, more explicit handling of fracs and yields.
Improved graph building allowing for more fine-grained control of tracing.
Stabilized minimization including a push-back for NaNs.
Major Features and Improvements
-------------------------------
- Arbitrary limits as well as vectorization (experimental)
are now fully supported. The new ``Space`` has an additional argument for a function that
tests if a vector x is inside.
To test if a value is inside a space, ``Space.inside`` can be used. To filter values, ``Space.filter``.
The limits returned are now by default numpy arrays with the shape (1, n_obs). This corresponds well
to the old layout and can, using ``z.unstack_x(lower)`` be treated like ``Data``. This has also some
consequences for the output format of ``rect_area``: this is now a vector.
  Due to the ambiguity of the names ``limits``, ``area`` etc. (since they only reflect the rectangular case),
  methods with a leading ``rect_*`` have been added (``rect_limits``, ``rect_area`` etc.) and are encouraged to be used.
- Extending a PDF is more straightforward and removes any "magic". The philosophy is: a PDF can be extended
or not. But it does not change the fundamental behavior of functions.
- SumPDF has been refactored and behaves now as follows:
Giving in pdfs (extended or not or mixed) *and* fracs (either length pdfs or one less) will create a
non-extended SumPDF using the fracs. The fact that the pdfs are maybe extended is ignored.
This will lead to highly consistent behavior.
If the number of fracs given equals the number of pdfs, it is up to the user (currently) to take care of
the normalization.
  *Only* if *all* pdfs are extended **and** no fracs are given, the SumPDF will use the yields as
  normalized fracs and be extended (see the sketch below this list).
- Improved graph building and ``z.function``
* the ``z.function`` can now, as with ``tf.function``, be used either as a decorator without arguments or as a
decorator with arguments. They are the same as in ``tf.function``, except of a few additional ones.
* ``zfit.run.set_mode`` allows to set the policy for whether everything is run in eager mode (``graph=False``),
everything in graph, or most of it (``graph=True``) or an optimized variant, doing graph building only with
losses but not just models (e.g. ``pdf`` won't trigger a graph build, ``loss.value()`` will) with ``graph='auto'``.
* The graph cache can be cleaned manually using ``zfit.run.clear_graph_cache()`` in order to prevent slowness
in repeated tasks.
- Switch for numerical gradients has been added as well in ``zfit.run.set_mode(autograd=True/False)``.
- Resetting to the default can be done with ``zfit.run.set_mode_default()``
- Improved stability of minimizer by adding penalty (currently in ``Minuit``) as default. To have a
better behavior with toys (e.g. never fail on NaNs but return an invalid ``FitResult``), use the
  ``DefaultToyStrategy`` in ``zfit.minimize``.
- Exceptions are now publicly available in ``zfit.exception``
- Added nice printout for ``FitResult`` and ``FitResult.params``.
- ``get_params`` is now more meaningful, returning by default all independent parameters of the pdf, including yields.
Arguments (``floating``, ``is_yield``) allow for more fine-grained control.
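
As a rough sketch of the new SumPDF behavior described above (the names and values here are purely illustrative; ``create_extended`` is assumed for attaching the yields and exact names may differ between versions):

.. code-block:: python

    import zfit

    obs = zfit.Space('x', limits=(-10, 10))
    pdf1 = zfit.pdf.Gauss(obs=obs, mu=0., sigma=1.)
    pdf2 = zfit.pdf.Gauss(obs=obs, mu=2., sigma=3.)

    # giving fracs always creates a non-extended SumPDF; any yields are ignored
    frac = zfit.Parameter('frac', 0.3, 0, 1)
    sum_nonext = zfit.pdf.SumPDF([pdf1, pdf2], fracs=frac)

    # only if all pdfs are extended and no fracs are given, the yields act as
    # normalized fracs and the resulting SumPDF is extended
    n1 = zfit.Parameter('n1', 1000, 0, 1e6)
    n2 = zfit.Parameter('n2', 5000, 0, 1e6)
    sum_ext = zfit.pdf.SumPDF([pdf1.create_extended(n1), pdf2.create_extended(n2)])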
Breaking changes
------------------
- Multiple limits are now handled by a MultiSpace class. Each Space has only "one limit"
and no complicated layout has to be remembered. If you want to have a space that is
defined in disconnected regions, use the ``+`` operator or functionally ``zfit.dimension.add_spaces``
To extract limits from multiple limits, ``MultiSpace`` and ``Space`` are both iterables, returning
the containing spaces respectively itself (for the ``Space`` case).
- SumPDF changed in the behavior. Read above in the Major Features and Improvement.
- Integrals of extended PDFs are not extended anymore, but ``ext_integrate`` now returns the
integral multiplied by the yield.
Deprecations
-------------
- ``ComposedParameter`` takes now ``params`` instead of ``dependents`` as argument, it acts now as
  the arguments to the ``value_fn``. To stay future compatible, create e.g. ``def value_fn(p1, p2)``
  and use ``params = [param1, param2]``; ``value_fn`` will then be called as ``value_fn(param1, param2)``.
``value_fn`` without arguments will probably break in the future.
- ``FitResult.error`` has been renamed to ``errors`` to better reflect that multiple errors, the lower and
upper are returned.
Bug fixes and small changes
---------------------------
- fix a (nasty, rounding) bug in sampling with multiple limits
- fix bug in numerical calculation
- fix bug in SimplePDF
- fix wrong caching signature may lead to graph not being rebuild
- add ``zfit.param.set_values`` method that allows to set the values of multiple
parameters with one command. Can, as the ``set_value`` method be used with a context manager.
- wrong size of weights when applying cuts in a dataset
- ``with_coords`` did drop axes/obs
- Fix function not traced when an error was raised during first trace
- MultipleLimits support for analytic integrals
- ``zfit.param.set_values(..)`` now also can use a ``FitResult`` as ``values`` argument to set the values
from.
Experimental
------------
- added a new error method, 'zfit_error' that is equivalent to 'minuit_minos', but not fully
stable. It can be used with other minimizers as well, not only Minuit.
Requirement changes
-------------------
- remove the outdated typing module
- add tableformatter, colored, colorama for colored table printout
Thanks
------
- Johannes Lade for code review and discussions.
- Hans Dembinski for useful inputs to the uncertainties.
0.4.3 (11.3.2020)
=================
Major Features and Improvements
-------------------------------
- refactor ``hesse_np`` with covariance matrix, make it available to all minimizers
Behavioral changes
------------------
Bug fixes and small changes
---------------------------
- fix bug in ``hesse_np``
Requirement changes
-------------------
Thanks
------
0.4.2 (27.2.2020)
=================
Major Features and Improvements
-------------------------------
- Refactoring of the Constraints, dividing into ``ProbabilityConstraint`` that can be
sampled from and more general constraints (e.g. for parameter boundaries) that
can not be sampled from.
- Doc improvements in the constraints.
- Add ``hesse`` error method ('hesse_np') available to all minimizers (not just Minuit).
Behavioral changes
------------------
- Changed default step size to an adaptive scheme, a fraction (1e-4) of the range between the lower and upper limits.
Bug fixes and small changes
---------------------------
- Add ``use_minuit_grad`` option to Minuit optimizer to use the internal gradient, often for more stable fits
- added experimental flag ``zfit.experimental_loss_penalty_nan``, which adds a penalty to the loss in case the value is
nan. Can help with the optimisation. Feedback welcome!
Requirement changes
-------------------
Thanks
------
0.4.1 (12.1.20)
===============
Release to keep up with TensorFlow 2.1
Major Features and Improvements
-------------------------------
- Fixed the comparison in caching the graph (implementation detail) that leads to an error.
0.4.0 (7.1.2020)
================
This release switched to TensorFlow 2.0 eager mode. In case this breaks things for you and you urgently need
a running version, install a version
< 0.4. It is highly recommended to upgrade and make the small changes required.
Please read the `upgrade guide <docs/project/upgrade_guide.rst>`_ for a more detailed explanation of how to upgrade.
TensorFlow 2.0 is eager executing and uses functions to abstract the performance critical parts away.
Major Features and Improvements
-------------------------------
- Dependents (currently, and probably also in the future) need more manual tracking. This mostly
  affects ``ComposedParameter`` and ``SimpleLoss``, which now require specifying the dependents by giving
  the objects they depend (indirectly) on. For example, it is sufficient to give a ``ComplexParameter`` (which
  itself is not independent but has dependents) to a ``SimpleLoss`` as dependents (assuming the loss
  function depends on it).
- ``ComposedParameter`` no longer allows giving a Tensor but requires a function that, when evaluated,
  returns the value. It depends on the ``dependents`` that are now required.
- Added numerical differentiation, which allows now to wrap any function with ``z.py_function`` (``zfit.z``).
This can be switched on with ``zfit.settings.options['numerical_grad'] = True``
- Added gradient and hessian calculation options to the loss. Support numerical calculation as well.
- Add caching system for graph to prevent recursive graph building
- changed backend name to ``z`` and can be used as ``zfit.z`` or imported from it. Added:
- ``function`` decorator that can be used to trace a function. Respects dependencies of inputs and automatically
caches/invalidates the graph and recreates.
- ``py_function``, same as ``tf.py_function``, but checks and may extends in the future
- ``math`` module that contains autodiff and numerical differentiation methods, both working with tensors.
Behavioral changes
------------------
- The EDM goal of the Minuit minimizer has been reduced by a factor of 10 to 10E-3, in agreement with
  the goal in RooFit's Minuit minimizer. This can be varied by specifying the tolerance.
- known issue: the ``projection_pdf`` has troubles with the newest TF version and may not work properly (runs out of
memory)
Bug fixes and small changes
---------------------------
Requirement changes
-------------------
- added numdifftools (for numerical differentiation)
Thanks
------
0.3.7 (6.12.19)
================
This is a legacy release to add some fixes, next release is TF 2 eager mode only release.
Major Features and Improvements
-------------------------------
- mostly TF 2.0 compatibility in graph mode, tests against 1.x and 2.x
Behavioral changes
------------------
Bug fixes and small changes
---------------------------
- ``get_dependents`` now returns an OrderedSet
- errordef is now a (hidden) attribute and can be changed
- fix bug in polynomials
Requirement changes
-------------------
- added ordered-set
0.3.6 (12.10.19)
================
**Special release for conda deployment and version fix (TF 2.0 is out)**
**This is the last release before breaking changes occur**
Major Features and Improvements
-------------------------------
- added ConstantParameter and ``zfit.param`` namespace
- Available on conda-forge
Behavioral changes
------------------
- an implicitly created parameter with a Python numerical (e.g. when instantiating a model)
will be converted to a ConstantParameter instead of a fixed Parameter and therefore
cannot be set to floating later on.
Bug fixes and small changes
---------------------------
- added native support TFP distributions for analytic sampling
- fix Gaussian (TFP Distribution) Constraint with mixed up order of parameters
- ``from_numpy`` automatically converts to default float regardless the original numpy dtype,
``dtype`` has to be used as an explicit argument
Requirement changes
-------------------
- TensorFlow >= 1.14 is required
Thanks
------
- Chris Burr for the conda-forge deployment
0.3.4 (30-07-19)
================
**This is the last release before breaking changes occur**
Major Features and Improvements
-------------------------------
- create ``Constraint`` class which allows for more fine grained control and information on the applied constraints.
- Added Polynomial models
- Improved and fixed sampling (can still be slightly biased)
Behavioral changes
------------------
None
Bug fixes and small changes
---------------------------
- fixed various small bugs
Thanks
------
Matthieu Marinangeli <[email protected]> for the contribution of the Constraints
0.3.3 (15-05-19)
================
Fixed Partial numeric integration
Bugfixes mostly, a few major fixes. Partial numeric integration works now.
Bugfixes
- data_range cuts are now applied correctly, also in several dimensions when a subset is selected
(which happens internally of some Functors, e.g. ProductPDF). Before, only the selected obs was respected for cuts.
- partial integration had a wrong take on checking limits (now uses supports).
0.3.2 (01-05-19)
================
With 0.3.2, bugfixes and three changes in the API/behavior
Breaking changes
----------------
- tfp distributions wrapping is now different with dist_kwargs allowing for non-Parameter arguments (like other dists)
- sampling allows now for importance sampling (sampler in Model specified differently)
- ``model.sample`` now also returns a tensor, being consistent with ``pdf`` and ``integrate``
Bugfixes
--------
- shape handling of tfp dists was "wrong" (though not producing wrong results!), fixed. TFP distributions now get a tensor with shape (nevents, nobs) instead of a list of tensors with (nevents,)
Improvements
------------
- refactor the sampling for more flexibility and performance (less graph constructed)
- allow to use more sophisticated importance sampling (e.g. phasespace)
- on-the-fly normalization (experimentally) implemented with correct gradient
0.3.1 (30-04-19)
================
Minor improvements and bugfixes including:
- improved importance sampling, allowing objects to be pre-instantiated before being called inside the while loop
- fixing a problem with ``ztf.sqrt``
0.3.0 (2019-03-20)
==================
Beta stage and first pip release
0.0.1 (2018-03-22)
==================
- First creation of the package.
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/CHANGELOG.rst | CHANGELOG.rst |
========
Authors
========
| Jonas Eschle <[email protected]>
| Albert Puig <[email protected]>
| Rafael Silva Coutinho <[email protected]>
| Nicola Serra <[email protected]>
| Matthieu Marinangeli <[email protected]>
Development Lead
----------------
* zfit <[email protected]>
Contributors
------------
| Marc Steiner <[email protected]>
| Chris Burr <[email protected]>
| Martina Ferrillo <[email protected]>
| Abhijit Mathad <[email protected]>
| Oliver Lantwin <[email protected]>
| Johannes Lade <[email protected]>
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/AUTHORS.rst | AUTHORS.rst |
.. |zfit_logo| image:: images/zfit-logo_400x168.png
:target: https://github.com/zfit/zfit/
:alt: zfit logo
.. |scikit-hep_logo| image:: images/scikit-hep-logo_168x168.png
:target: https://scikit-hep.org/affiliated
:alt: scikit-hep logo
|zfit_logo|
.. toctree::
:maxdepth: 2
:hidden:
whats_new/index
getting_started/index
tutorials/index
user_api/index
project/index
ask_a_question
=========================
Scalable pythonic fitting
=========================
.. link-button:: https://zfit-tutorials.readthedocs.io/en/latest/
:type: url
:text: Online interactive tutorials
:classes: btn-outline-primary btn-block
.. panels::
:header: text-center
.. dropdown:: Getting started
.. link-button:: getting_started/installation
:type: ref
:text: Installation
:classes: btn-outline-primary btn-block
.. link-button:: getting_started/5_minutes_to_zfit
:type: ref
:text: 5 minutes to zfit
:classes: btn-outline-primary btn-block
.. link-button:: ask_a_question
:type: ref
:text: Asking questions
:classes: btn-outline-primary btn-block
---
.. dropdown:: Tutorials
.. link-button:: tutorials/introduction
:type: ref
:text: Introduction
:classes: btn-outline-primary btn-block
.. link-button:: https://zfit-tutorials.readthedocs.io/en/latest/
:type: url
:text: Interactive tutorials
:classes: btn-outline-primary btn-block
.. link-button:: tutorials/components/index
:type: ref
:text: Components
:classes: btn-outline-primary btn-block
---
.. link-button:: whats_new/index
:type: ref
:text: What's new?
:classes: btn-outline-primary btn-block
---
.. link-button:: user_api/index
:type: ref
:text: API documentation
:classes: btn-outline-primary btn-block
The zfit package is a model fitting library based on `TensorFlow <https://www.tensorflow.org/>`_ and optimised for simple and direct manipulation of probability density functions. The main focus is on scalability, parallelisation and a user-friendly experience (no Cython, no C++ needed to extend). The basic idea is to offer a pythonic alternative to
the very successful RooFit library from the `ROOT <https://root.cern.ch/>`_ data analysis package. While RooFit has provided a stable platform for most of the needs of the High Energy Physics (HEP) community in the last few years, it has become increasingly difficult to integrate all the developments in the scientific Python ecosystem into RooFit due to its monolithic nature. Conversely, the core of zfit aims at becoming a solid ground for model fitting while providing enough flexibility to incorporate state-of-art tools and to allow scalability going to larger datasets.
This challenging task is tackled by following two basic design pillars:
- The skeleton and extension of the code is minimalist, simple and finite:
the zfit library is exclusively designed for the purpose of model fitting and sampling, in contrast to the self-contained RooFit/ROOT frameworks, with no attempt to extend its functionalities to features such as statistical methods or plotting.
This design philosophy is well exemplified by examining maximum likelihood fits: while zfit works as a backend for likelihood fits and can be integrated to packages such as `hepstats <https://github.com/scikit-hep/hepstats>`_ and `matplotlib <https://matplotlib.org/>`_, RooFit performs the fit, the statistical treatment and plotting within.
This wider scope of RooFit results in a lack of flexibility with respect to new minimisers, statistic methods and, broadly speaking, any new tool that might come.
- Another paramount aspect of zfit is its design for optimal parallelisation and scalability. Even though the choice of TensorFlow as backend introduces a strong software dependency, its use provides several interesting features in the context of model fitting.
The key concept is that TensorFlow is built under the `dataflow programming model <https://en.wikipedia.org/wiki/Dataflow_programming>`_.
Put simply, TensorFlow creates a computational graph with the operations as the nodes of the graph and tensors as its edges. Hence, the computation only happens when the graph is executed in a session, which simplifies the parallelisation by identifying the dependencies between the edges and operations, or even the partitioning across multiple devices (more details can be found in the `TensorFlow guide <https://www.tensorflow.org/guide/>`_).
The architecture of zfit is built upon this idea and it aims to provide a high level interface to these features, *i.e.*, most of the operations of graphs and evaluations are hidden for the user, leaving a natural and friendly model fitting and sampling experience.
The zfit package is Free software, using an Open Source license. Both the software and this document are works in progress.
Source code can be found in `our github page <https://github.com/zfit/zfit/>`_.
.. hint::
Yay, the new website design is here!
We are proud to present the new look and feel of our website. While most of the
things are where they used to be, here's a short explanation of the sections:
* **What's new?**: Changelog and other new features of zfit.
* **Getting started**: Installation guide, quickstart and examples.
* **API reference**: Dive deep into the API.
* **Project**: Learn who wrote zfit, how to contribute and other information about the project.
* **Ask a question**: Does exactly what it says on the tin.
If you have suggestions, contact us on our `Gitter channel`_ or open an issue on `GitHub`_.
Thanks to pandas for open sourcing `pydata-sphinx-theme <https://pydata-sphinx-theme.readthedocs.io/en/latest/>`_.
|
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/docs/index.rst | index.rst |
===============
Outreach
===============
The project zfit aims to establish a basis in terms of API and basic functionality for a (likelihood) fitting
ecosystem that is capable of dealing with the demands from High Energy Physics (HEP).
Papers and proceedings
=======================
- `Original zfit paper <https://www.sciencedirect.com/science/article/pii/S2352711019303851>`_
- `Computing in High Energy Physics (CHEP) 2019 <https://www.epj-conferences.org/articles/epjconf/ref/2020/21/epjconf_chep2020_06025/epjconf_chep2020_06025.html>`_
- `PyHEP 2020 tutorial <https://zenodo.org/record/4147540#.YHSdF3UzZH4>`_
- `PyHEP 2020 presentation <https://zenodo.org/record/4147528#.YHSdkHUzZH4>`_
- `PyHEP 2019 presentation <https://zenodo.org/record/3960059#.YHSdUnUzZH4>`_
Used by
========
The following analysis have used zfit
- Search for long-lived particles decaying to :math:`e^{\pm} \mu^{\mp} \nu`,
  `Eur. Phys. J. C 81, 261 (2021) <https://doi.org/10.1140/epjc/s10052-021-08994-0>`_
- Angular analysis of
  :math:`B^0 \to D^{\ast -} D_s^{\ast +}` with :math:`D_s^{\ast +} \to D_s^{+}\gamma` decays,
  `J. High Energ. Phys. 2021, 177 (2021) <https://doi.org/10.1007/JHEP06(2021)177>`_
.. _section-citing:
Citing
======
If you use zfit in research, please consider citing:
.. code-block:: latex
@article{ESCHLE2020100508,
title = {zfit: Scalable pythonic fitting},
journal = {SoftwareX},
volume = {11},
pages = {100508},
year = {2020},
issn = {2352-7110},
doi = {https://doi.org/10.1016/j.softx.2020.100508},
url = {https://www.sciencedirect.com/science/article/pii/S2352711019303851},
author = {Jonas Eschle and Albert {Puig Navarro} and Rafael {Silva Coutinho} and Nicola Serra},
keywords = {Model fitting, Data analysis, Statistical inference, Python},
abstract = {Statistical modeling is a key element in many scientific fields and especially in
High-Energy Physics (HEP) analysis. The standard framework to perform this task in HEP is the
C++ ROOT/RooFit toolkit; with Python bindings that are only loosely integrated into the
scientific Python ecosystem. In this paper, zfit, a new alternative to RooFit written in pure Python,
is presented. Most of all, zfit provides a well defined high-level API and workflow
for advanced model building and fitting, together with an implementation on top of TensorFlow,
allowing a transparent usage of CPUs and GPUs. It is designed to be extendable in a
very simple fashion, allowing the usage of cutting-edge developments from
the scientific Python ecosystem in a transparent way. The main features of zfit are introduced,
and its extension to data analysis, especially in the context of HEP experiments, is discussed.}
}
Material
=========
:download:`zfit logo high resolution <../images/zfit-logo_veryhires.png>`
:download:`zfit logo normal resolution <../images/zfit-logo_hires.png>`
:download:`zfit vectorgraphics <../images/zfit-vector.svg>`
:download:`zfit favicon <../images/zfit-favicon.png>`
:download:`zfit workflow <../images/zfit_workflow_v2.png>`
If there is material missing, do not hesitate to contact us.
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/docs/project/outreach.rst | outreach.rst |
PDF
---
Basic PDFs
##########
Basic shapes are fundamental PDFs, often with a well-known functional form.
They are usually implemented fully analytically and are often a thin
wrapper around :py:class:`~tensorflow_probability.distributions.Distribution`.
Any missing shape can easily be wrapped using :py:class:`~zfit.pdf.WrapDistribution`.
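
For illustration, a minimal sketch of instantiating one of these shapes (the parameter values are arbitrary):

.. code-block:: python

    import zfit

    obs = zfit.Space('x', limits=(-5, 5))
    mu = zfit.Parameter('mu', 0.0, -1, 1)
    sigma = zfit.Parameter('sigma', 1.0, 0.1, 5)
    gauss = zfit.pdf.Gauss(obs=obs, mu=mu, sigma=sigma)

    # the PDF can be evaluated and sampled like any other zfit model
    probs = gauss.pdf([0.0, 0.5, 1.0])
    sample = gauss.sample(1000)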
.. toctree::
:maxdepth: 2
pdf/basic
Binned PDFs
###########
Binned PDFs extend the functionality of unbinned PDFs by providing more histogram-like features.
They interface with the
`Unified Histogram Interface (uhi) <https://uhi.readthedocs.io/en/latest/?badge=latest>`_
that is provided by `boost-histogram <https://boost-histogram.readthedocs.io/en/latest/>`_ and especially
`Hist <https://github.com/scikit-hep/hist>`_.
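
As a short sketch (assuming that :py:class:`~zfit.pdf.HistogramPDF` accepts a UHI-compatible histogram such as a ``hist.Hist`` directly; see the API below for the exact options), a histogram template can be used as a binned PDF:

.. code-block:: python

    import numpy as np
    import hist
    import zfit

    # fill a histogram with the Hist library
    h = hist.Hist(hist.axis.Regular(40, -3, 3, name='x'))
    h.fill(x=np.random.normal(size=5_000))

    # use the histogram as a binned PDF template
    hist_pdf = zfit.pdf.HistogramPDF(h)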
.. toctree::
:maxdepth: 2
pdf/binned_pdf
Polynomials
#############
While polynomials are also basic PDFs, they represent mathematically
a more special class of functions.
They constitute a sum of terms of different degrees.
.. toctree::
:maxdepth: 2
pdf/polynomials
Kernel Density Estimations
#############################
KDEs provide a means of non-parametric density estimation.
An extensive introduction and explanation can be found in
:ref:`sec-kernel-density-estimation`.
.. toctree::
:maxdepth: 2
pdf/kde_api
Composed PDFs
#############################
Composed PDFs build on top of others and provide sums, products and more.
.. toctree::
:maxdepth: 2
pdf/composed_pdf
Custom base class
#############################
These base classes are used internally to build PDFs and can also be
used to implement custom PDFs.
They offer more or less support and freedom.
.. toctree::
:maxdepth: 2
pdf/custom_base
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/docs/user_api/zfit.pdf.rst | zfit.pdf.rst |
Data
----
Data can be unbinned or binned. If unbinned, the data is stored in a ``Data`` object while binned data,
or histogram-like data, is stored in a ``BinnedData`` object.
The binned data is modelled closely after the histograms of the
`boost-histogram <https://boost-histogram.readthedocs.io/en/latest/>`_ and especially
`Hist <https://github.com/scikit-hep/hist>`_ libraries and has ``to`` and ``from`` methods to seamlessly go back and
forth between the libraries. Furthermore, ``BinnedData`` implements the
`Unified Histogram Interface, UHI <https://github.com/scikit-hep/uhi>`_ and zfit often expects only an
object that implements the UHI.
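
As a short sketch of this round trip (assuming the optional ``hist`` package is installed and using the ``from_hist``/``to_hist`` methods mentioned above):

.. code-block:: python

    import numpy as np
    import hist
    import zfit

    # fill a histogram with the Hist library
    h = hist.Hist(hist.axis.Regular(50, -3, 3, name='x'))
    h.fill(x=np.random.normal(size=10_000))

    # convert it to a zfit BinnedData and back
    binned_data = zfit.data.BinnedData.from_hist(h)
    h_again = binned_data.to_hist()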
.. autosummary::
:toctree: _generated/data
zfit.data.Data
zfit.data.BinnedData
zfit.data.RegularBinning
zfit.data.VariableBinning
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/docs/user_api/zfit.data.rst | zfit.data.rst |
=============
API reference
=============
This is the API reference for the zfit fitting package.
Most classes and functions are documented with docstrings, but don't hesitate to contact us if this documentation is
insufficient!
.. attention::
This document is in the process of being built. If you want to add to it you can simply click the 'Edit this
page' button at the top of the right column on which ever page you'd like to change. You need a
`GitHub <https://github.com/>`_ account to do this.
.. toctree::
:maxdepth: 3
zfit.dimension
zfit.data
zfit.param
zfit.func
zfit.pdf
zfit.sample
zfit.loss
zfit.constraint
zfit.minimize
zfit.result
zfit.settings
zfit.hs3
zfit.exception
zfit.util.typing
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/docs/user_api/index.rst | index.rst |
Loss
----
The loss, also called "cost", describes the disagreement between the data and the model.
Most commonly, the likelihood (or, to be precise, the *negative log* likelihood)
is used, as the maximum likelihood estimation provides many
beneficial characteristics.
Binned losses require the PDF and data to be binned as well.
Extended losses take the expected count ("yield") of a PDF into account and require the
PDF to be extended in the first place.
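
As a minimal sketch (the model, data and parameter values are placeholders; ``create_extended`` is assumed here to attach a yield to the PDF):

.. code-block:: python

    import zfit

    obs = zfit.Space('x', limits=(-10, 10))
    mu = zfit.Parameter('mu', 0.0, -5, 5)
    sigma = zfit.Parameter('sigma', 1.0, 0.1, 10)
    gauss = zfit.pdf.Gauss(obs=obs, mu=mu, sigma=sigma)
    data = gauss.sample(1000)

    # unbinned negative log likelihood
    nll = zfit.loss.UnbinnedNLL(model=gauss, data=data)

    # the extended version requires a PDF with a yield
    n_sig = zfit.Parameter('n_sig', 1000, 0, 1e5)
    ext_nll = zfit.loss.ExtendedUnbinnedNLL(model=gauss.create_extended(n_sig), data=data)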
.. autosummary::
:toctree: _generated/loss
zfit.loss.UnbinnedNLL
zfit.loss.ExtendedUnbinnedNLL
zfit.loss.BinnedNLL
zfit.loss.ExtendedBinnedNLL
zfit.loss.BinnedChi2
zfit.loss.ExtendedBinnedChi2
zfit.loss.BaseLoss
zfit.loss.SimpleLoss
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/docs/user_api/zfit.loss.rst | zfit.loss.rst |
.. _minimize_user_api:
Minimize
--------
This module contains everything related to minimization in zfit.
Minimizers
##########
zfit supplies wrappers for different minimizers from multiple libraries. Most of them are local
minimizers (such as :class:`~zfit.minimize.Minuit`, :class:`~zfit.minimize.IpyoptV1` or
:class:`~zfit.minimize.ScipyLBFGSBV1`) while there are also a few global ones such as
:class:`~zfit.minimize.NLoptISRESV1` or :class:`~zfit.minimize.NLoptStoGOV1`.
.. toctree::
:maxdepth: 2
minimize/minimizers
Strategy
#############
Strategy to deal with NaNs and to provide callbacks.
.. toctree::
:maxdepth: 2
minimize/strategy
Criterion
#############
Criterion for the convergence of the minimization.
.. toctree::
:maxdepth: 2
minimize/criterion
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/docs/user_api/zfit.minimize.rst | zfit.minimize.rst |
zfit supplies wrappers for different minimizers from multiple libraries. Most of them are local
minimizers (such as :class:`~zfit.minimize.Minuit`, :class:`~zfit.minimize.IpyoptV1` or
:class:`~zfit.minimize.ScipyLBFGSBV1`) while there are also a few global ones such as
:class:`~zfit.minimize.NLoptISRESV1` or :class:`~zfit.minimize.NLoptStoGOV1`.
While the former are usually faster and preferred, they depend more on the initial values than
the latter. Especially in higher dimensions, a global search of the parameters
can increase the minimization time drastically and is often infeasible. It is also possible to
couple the minimizers by first doing an approximate global minimization and then polishing the
minimum found with a local minimizer.
All minimizers support similar arguments, most notably ``tol``, which denotes the termination
value. The minimization terminates once the value of the convergence criterion, which defaults to
:class:`~zfit.minimize.EDM` (the same that is also used in :class:`~zfit.minimize.Minuit`), falls below ``tol``.
Apart from that, there are also a few minimizer-specific arguments that differ between minimizers.
They all have the exact same minimization method :meth:`~zfit.minimize.BaseMinimizer.minimize`,
which takes a loss, parameters and (optionally) a :class:`~zfit.result.FitResult` from which it can
take information to get a better starting point for the minimization, as sketched below.
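
As a minimal sketch of this common interface (the loss is a placeholder built from a toy Gaussian model; any of the minimizers listed below can be substituted for :class:`~zfit.minimize.Minuit`):

.. code-block:: python

    import zfit

    obs = zfit.Space('x', limits=(-10, 10))
    mu = zfit.Parameter('mu', 1.0, -5, 5)
    sigma = zfit.Parameter('sigma', 1.3, 0.1, 10)
    gauss = zfit.pdf.Gauss(obs=obs, mu=mu, sigma=sigma)
    data = gauss.sample(1000)
    nll = zfit.loss.UnbinnedNLL(model=gauss, data=data)

    minimizer = zfit.minimize.Minuit(tol=1e-3)
    result = minimizer.minimize(nll)

    # a previous FitResult can be passed as a starting point, e.g. to polish
    # a rough global minimum with a local minimizer:
    # result2 = zfit.minimize.ScipyLBFGSBV1().minimize(nll, init=result)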
Minuit
======
.. autosummary::
:toctree: _generated/minimizers
zfit.minimize.Minuit
Ipyopt
======
.. autosummary::
:toctree: _generated/minimizers
zfit.minimize.IpyoptV1
Scipy
=====
.. autosummary::
:toctree: _generated/minimizers
zfit.minimize.ScipyLBFGSBV1
zfit.minimize.ScipyTrustConstrV1
zfit.minimize.ScipyPowellV1
zfit.minimize.ScipySLSQPV1
zfit.minimize.ScipyTruncNCV1
NLopt
=====
.. autosummary::
:toctree: _generated/minimizers
zfit.minimize.NLoptLBFGSV1
zfit.minimize.NLoptTruncNewtonV1
zfit.minimize.NLoptSLSQPV1
zfit.minimize.NLoptMMAV1
zfit.minimize.NLoptCCSAQV1
zfit.minimize.NLoptSubplexV1
zfit.minimize.NLoptCOBYLAV1
zfit.minimize.NLoptMLSLV1
zfit.minimize.NLoptStoGOV1
zfit.minimize.NLoptBOBYQAV1
zfit.minimize.NLoptISRESV1
zfit.minimize.NLoptESCHV1
zfit.minimize.NLoptShiftVarV1
Tensorflow
======================
.. autosummary::
:toctree: _generated/minimizers
zfit.minimize.Adam
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/docs/user_api/minimize/minimizers.rst | minimizers.rst |
===============
Getting started
===============
.. toctree::
:maxdepth: 1
:hidden:
installation
5_minutes_to_zfit
../tutorials/introduction
Welcome to zfit!
How to set it up: :ref:`Installing zfit <zfit_installation>`
:ref:`Quickstart: 5 minutes to zfit <5-minutes-to-zfit>`
A more complete introduction to zfit can be found :ref:`in the introduction section.<introduction_tutorial>`
For a thorough explanation, see the :ref:`tutorial_section`. Some of the tutorials are also available on the
`zfit YouTube channel <https://www.youtube.com/channel/UC_IOWLYdQSkWr54K-o8BTCA>`_
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/docs/getting_started/index.rst | index.rst |
.. highlight:: shell
.. _zfit_installation:
============
Installation
============
Stable release
--------------
To install the newest stable version of zfit, use pip and run this command in your terminal (recommended to do inside a
`conda <https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html>`_ or
`virtualenv <https://virtualenv.pypa.io/en/latest/>`_ virtual environment):
.. code-block:: console
$ pip install zfit
This is the preferred method to install zfit, as it will always install the most recent stable release.
Cutting edge version
-----------------------
The cutting edge development version (unstable) can be installed from the `Github repo`_ with
.. code-block:: console
$ pip install git+https://github.com/zfit/zfit
Alternatively, you can clone the public repository:
.. code-block:: console
$ git clone git://github.com/zfit/zfit
.. _Github repo: https://github.com/zfit/zfit
.. _tarball: https://github.com/zfit/zfit/tarball/master
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/docs/getting_started/installation.rst | installation.rst |
.. _5-minutes-to-zfit:
=================
5 minutes to zfit
=================
The zfit library provides a simple model fitting and sampling framework for a broad list of applications.
This section is designed to give an overview of the main concepts and features in the context of likelihood fits in
a *crash course* manner. The simplest example is to generate, fit and plot a Gaussian distribution.
The first step is to import ``zfit`` and to verify that the installation has been done successfully:
.. jupyter-execute::
:hide-output:
:hide-code:
import os
os.environ["ZFIT_DISABLE_TF_WARNINGS"] = "1"
import numpy as np
.. jupyter-execute::
import tensorflow as tf
import zfit
from zfit import z # math backend of zfit
.. thebe-button:: Run this interactively
Since we want to generate/fit a Gaussian within a given range, the domain of the PDF is defined by
an *observable space*. This can be created using the :py:class:`~zfit.Space` class
.. jupyter-execute::
obs = zfit.Space('x', limits=(-10, 10))
The best interpretation of the observable at this stage is that it defines the name and range of the observable axis.
Using this domain, we can now create a simple Gaussian PDF.
The most common PDFs are already pre-defined within the :py:mod:`~zfit.pdf` module, including a simple Gaussian.
First, we have to define the parameters of the PDF and their limits using the :py:class:`~zfit.Parameter` class:
.. jupyter-execute::
mu = zfit.Parameter("mu", 2.4, -1, 5)
sigma = zfit.Parameter("sigma", 1.3, 0, 5)
With these parameters we can instantiate the Gaussian PDF from the library
.. jupyter-execute::
gauss = zfit.pdf.Gauss(obs=obs, mu=mu, sigma=sigma)
It is recommended to pass the arguments of the PDF as keyword arguments.
The next stage is to create a dataset to be fitted. There are several ways of producing this within the
zfit framework (see the :ref:`Data <data-section>` section). In this case, for simplicity we simply produce
it using numpy and the :func:`Data.from_numpy <zfit.Data.from_numpy>` method:
.. jupyter-execute::
data_np = np.random.normal(0, 1, size=10000)
data = zfit.Data.from_numpy(obs=obs, array=data_np)
Now we have all the ingredients in order to perform a maximum likelihood fit.
Conceptually this corresponds to three basic steps:
1. create a loss function, in our case a negative log-likelihood :math:`\log\mathcal{L}`;
2. instantiate our choice of minimiser;
3. and minimise the log-likelihood.
.. jupyter-execute::
# Stage 1: create an unbinned likelihood with the given PDF and dataset
nll = zfit.loss.UnbinnedNLL(model=gauss, data=data)
# Stage 2: instantiate a minimiser (in this case a basic minuit minimizer)
minimizer = zfit.minimize.Minuit()
# Stage 3: minimise the given negative likelihood
result = minimizer.minimize(nll)
This corresponds to the most basic example where the negative log-likelihood is defined within the pre-determined
observable range and all the parameters in the PDF are floated in the fit. It is often the case that we want to
vary only a given set of parameters. In this case it is necessary to specify which parameters are to be floated
(so all the remaining ones are fixed to their initial values).
Also note that we can now do various things with the PDF, such as plotting the fit result
with the Gaussian model, without extracting the loss-minimizing
parameters from ``result``. This is possible because parameters are mutable. This means that the
minimizer can directly manipulate the values of the floating parameters. So when you call the ``minimizer.minimize()``
method, the value of ``mu`` changes during the optimisation; ``gauss.pdf()`` then uses this new value to calculate the
PDF.
.. jupyter-execute::
# Stage 3: minimise the given negative likelihood but floating only specific parameters (e.g. mu)
result2 = minimizer.minimize(nll, params=[mu])
It is important to highlight that conceptually zfit separates the minimisation of the loss function from the error calculation,
in order to give the freedom of calculating this error whenever needed and to allow the use of external error calculation packages.
In order to get an estimate for the errors, it is possible to call ``Hesse`` that will calculate
the parameter uncertainties. This uses the inverse Hessian to approximate the minimum of the loss and returns a symmetric estimate.
When using weighted datasets, this will automatically perform the asymptotic correction to the fit covariance matrix,
returning corrected parameter uncertainties to the user. The correction applied is based on Equation 18 in `this paper <https://arxiv.org/abs/1911.01303>`_.
To call ``Hesse``, do:
.. jupyter-execute::
param_hesse = result.hesse()
print(param_hesse)
which will return a dictionary of the fit parameters as keys with ``error`` values for each one.
The errors will also be added to the result object and show up when printing the result.
While the Hessian approximation has many advantages, it may not hold well for certain loss functions, especially for
asymmetric uncertainties. It is also possible to use a more CPU-intensive error calculation with the ``errors`` method.
This has the advantage of taking into account all the correlations and can describe well
a loss minimum that is not well approximated by a quadratic function *(it is however not valid in the case of weights and takes
considerably longer)*. It estimates the lower and upper uncertainty independently.
As an example, with the :py:class:`~zfit.minimize.Minuit` one can calculate the ``MINOS`` uncertainties with:
.. jupyter-execute::
:hide-output:
param_errors, _ = result.errors()
.. jupyter-execute::
print(param_errors)
Once we've performed the fit and obtained the corresponding uncertainties,
it is now important to examine the fit results.
The object ``result`` (:py:class:`~zfit.minimizers.fitresult.FitResult`) has all the relevant information we need:
.. jupyter-execute::
print(f"Function minimum: {result.fmin}")
print(f"Converged: {result.converged}")
print(f"Valid: {result.valid}")
This is all available if we print the fit result (not shown here due to display problems).
.. jupyter-execute::
:hide-output:
print(result)
Similarly one can obtain only the information on the fitted parameters with
.. jupyter-execute::
# Information on all the parameters in the fit
print(result.params)
# Printing information on specific parameters, e.g. mu
print("mu={}".format(result.params[mu]['value']))
As already mentioned, there is no dedicated plotting feature within zfit. However, we can easily use external
libraries, such as ``matplotlib`` and `mplhep, a library for HEP-like plots <https://github.com/scikit-hep/mplhep>`_ ,
to do the job:
.. jupyter-execute::
import mplhep
import matplotlib.pyplot as plt
import numpy as np
lower, upper = obs.limits
data_np = zfit.run(data.value()[:, 0])
    # plot the data as a histogram
bins = 80
counts, bin_edges = np.histogram(data_np, bins, range=(lower[-1][0], upper[0][0]))
mplhep.histplot((counts, bin_edges), yerr=True, color='black', histtype='errorbar')
# evaluate the func at multiple x and plot
x_plot = np.linspace(lower[-1][0], upper[0][0], num=1000)
y_plot = zfit.run(gauss.pdf(x_plot, norm_range=obs))
plt.plot(x_plot, y_plot * data_np.shape[0] / bins * obs.area(), color='xkcd:blue')
plt.show()
The specific call to :func:`zfit.run` simply converts the Eager Tensor (that is already array-like) to a Numpy array.
Often, this conversion is however not necessary and a Tensor can directly be used.
The full script :jupyter-download:script:`can be downloaded here <5 minutes to zfit>`.
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/docs/getting_started/5_minutes_to_zfit.rst | 5_minutes_to_zfit.rst |
================================
Upgrade guide
================================
Upgrade from zfit 0.3.x to 0.4.0
================================
zfit moved from TensorFlow 1.x to 2.x. The main difference is that in 1.x, you would mostly build
a graph all the time and execute it when needed. In TF 2.x, this has gone and happens implicitly
if a function is decorated with the right decorator. But it is also possible to build no graph at all
and execute the code *eagerly*, just as Numpy would. So writing plain TF 2.x code is, if not wrapped
by a :py:func:`tf.function`, no different from executing Numpy code.
In short: write TF 2.x as if you would write Numpy. If something is supposed to *change*, it has to be
newly generated each time, e.g. be a function that can be called.
zfit offers objects that still keep track of everything.
Consequences for zfit:
Dependents
----------
This implies that zfit does not rely on the graph structure anymore.
Therefore, dependencies have to be given manually (although in the future, a certain degree of automation
can surely be added).
Affected by this is the :py:class:`~zfit.ComposedParameter`. Instead of giving a Tensor,
a function returning a value has to be given *and* the dependents have to be specified
explicitly.
.. code-block:: python
mu = zfit.Parameter(...)
shift = zfit.Parameter(...)
def shifted_mu_func():
return mu + shift
    shifted_mu = zfit.param.ComposedParameter(shifted_mu_func, dependents=[mu, shift])
The same is true for the :py:class:`~zfit.loss.SimpleLoss`
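
A minimal sketch of the corresponding ``SimpleLoss`` usage follows the same pattern (the keyword name ``dependents`` is assumed here, analogous to ``ComposedParameter``; it may differ in later versions):

.. code-block:: python

    mu = zfit.Parameter("mu", 1.0)

    def squared_distance():
        return (mu - 3) ** 2

    loss = zfit.loss.SimpleLoss(squared_distance, dependents=[mu])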
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/docs/whats_new/upgrade_guide.rst | upgrade_guide.rst |
.. _introduction_tutorial:
=========================
Introduction tutorial
=========================
:ref:`Quickstart: 5 minutes to zfit <5-minutes-to-zfit>`
A more extensive introduction: 20 minutes to zfit.
.. youtube:: https://youtu.be/uYl5jArxhmM?t=170
:width: 480
.. toctree::
:maxdepth: 1
:hidden:
../getting_started/5_minutes_to_zfit
components_introduction
.. toctree::
:maxdepth: 1
../_tmp/zfit-tutorials/introduction/Introduction.ipynb
:ref:`Getting started with the individual components of zfit <components_introduction>`
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/docs/tutorials/introduction.rst | introduction.rst |
.. _example-scripts:
========
Examples
========
Minimizing Python functions
---------------------------
zfit minimizers can also be used to minimize normal Python functions as shown here.
.. literalinclude:: /../examples/minimize_python_func.py
Composing PDFs
--------------
.. literalinclude:: /../examples/composing_pdf.py
Custom 3D Functor
-----------------
.. literalinclude:: /../examples/custom_3d_functor.py
Custom PDF simple
-----------------
.. literalinclude:: /../examples/custom_pdf_simple.py
Custom PDF advanced
-------------------
.. literalinclude:: /../examples/custom_pdf_advanced.py
Custom 3D PDF
-------------
.. literalinclude:: /../examples/custom_3d_pdf_simple.py
Multidimensional preprocess fit
-------------------------------
.. literalinclude:: /../examples/multidim_preprocess_fit.py
Plot signal background
----------------------
.. literalinclude:: /../examples/plot_sig_bkg.py
Simple fit
----------
.. literalinclude:: /../examples/simple_fit.py
Signal-background-mass fit
--------------------------
.. literalinclude:: /../examples/signal_bkg_mass_fit.py
Signal-background-mass fit extended
-----------------------------------
.. literalinclude:: /../examples/signal_bkg_mass_extended_fit.py
Simultaneous fit
----------------
.. literalinclude:: /../examples/simultaneous_fit.py
.. _spaces-example:
Spaces
------
.. literalinclude:: /../examples/spaces.py
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/docs/tutorials/examples.rst | examples.rst |
==============
Components
==============
This tutorial describes the main components of zfit in more depth. For a brief overview, see the quickstart :ref:`5-minutes-to-zfit`.
zfit general
============
Variables
=========
.. toctree::
:maxdepth: 1
intro/space
intro/parameter
Data
====
.. toctree::
:maxdepth: 1
intro/data
.. _sec-components-model:
Model
=====
.. toctree::
:maxdepth: 1
intro/model
models/kde_models
../../../_tmp/zfit-tutorials/components/13 - Kernel Density Estimation.ipynb
The following tutorials can also be found in
`zfit tutorials as Jupyter Notebooks. <https://github.com/zfit/zfit-tutorials>`_
.. toctree::
:maxdepth: 1
../../../_tmp/zfit-tutorials/components/20 - Composite Models.ipynb
../../../_tmp/zfit-tutorials/components/60 - Custom PDF.ipynb
../../../_tmp/zfit-tutorials/components/62 - Multidim Custom PDF.ipynb
Loss
====
.. toctree::
:maxdepth: 1
intro/loss
Minimizer
=========
.. toctree::
:maxdepth: 1
intro/minimizer
Result
======
.. toctree::
:maxdepth: 1
intro/result
Statistical Inference
======================
.. toctree::
:maxdepth: 1
intro/toy_studies
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/docs/tutorials/components/index.rst | index.rst |
.. _sec-kernel-density-estimation:
Kernel Density Estimation
""""""""""""""""""""""""""""
:Authors: Marc Steiner; Jonas Eschle
.. jupyter-kernel::
:id: zfit_kde_introduction.ipynb
An introduction to kernel density estimation, explanations of all methods implemented in zfit and a thorough
comparison of the performance can be found in
`Performance of univariate kernel density estimation methods in TensorFlow <https://astroviking.github.io/ba-thesis/>`_
by Marc Steiner, from which many parts here are taken.
Estimating the density of a population can be done in a non-parametric manner. The simplest form is to create a
density histogram, which is notably not very precise.
A more sophisticated non-parametric method is the kernel density estimation (KDE), which can be looked at as a sort of
generalized histogram. In a kernel density estimation each data point is substituted with a so-called kernel function
that specifies how much it influences its neighboring regions. These kernel functions can then be summed up to get an
estimate of the probability density distribution, quite similarly to summing up data points inside bins.
However, since
the kernel functions are centered on the data points directly, KDE circumvents the problem of arbitrary bin positioning.
Since KDE still depends on the kernel bandwidth (a measure of the spread of the kernel function) instead of the bin width,
one might argue that this is not a major improvement. Upon closer inspection, one finds that the resulting estimate
depends less strongly on the kernel bandwidth than histograms do on the bin width and it is much easier to specify
rules for an approximately optimal kernel bandwidth than it is to do so for bin width.
Given a set of :math:`n` sample points :math:`x_k` (:math:`k = 1,\cdots,n`), an exact kernel density
estimation :math:`\widehat{f}_h(x)` can be calculated as
.. math::
:label: eq-exact-kde
\widehat{f}_h(x) = \frac{1}{nh} \sum_{k=1}^n K\Big(\frac{x-x_k}{h}\Big)
where :math:`K(x)` is called the kernel function, :math:`h` is the bandwidth of the kernel and :math:`x` is the
value for which the estimate is calculated. The kernel function defines the shape and size of influence of a single
data point over the estimation, whereas the bandwidth defines the range of influence. Most typically a simple
Gaussian distribution (:math:`K(x) :=\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2}`) is used as kernel function.
The larger the bandwidth parameter :math:`h`, the larger the range of influence of
a single data point on the estimated distribution.
.. _section-exact-kdes:
Exact KDEs
''''''''''
**Summary**
*An exact KDE calculates the true sum of the kernels around the input points without approximating the
dataset, leaving possibilities for the choice of a bandwidth.
The computational complexity -- especially the peak memory consumption -- is proportional to the sample size.
Therefore, this kind of PDF is better used for smaller datasets and a Grid KDE is preferred for larger data.*
**Explanation**
Exact KDEs implement exactly Eq. :eq:`eq-exact-kde` without any approximation and therefore no loss of information.
The computational complexity of the exact KDE above is given by :math:`\mathcal{O}(nm)` where :math:`n`
is the number of sample points to estimate from and :math:`m` is the number of evaluation points
(the points where you want to calculate the estimate).
Due to this cost, the method is most often used for smaller datasamples.
There exist several approximative methods to decrease this complexity and therefore decrease the runtime as well.
**Implementation**
In zfit, the exact KDE :py:class:`~zfit.pdf.KDE1DimExact` takes an arbitrary kernel, which is a
TensorFlow-Probability distribution.
.. jupyter-execute::
:hide-code:
:hide-output:
import os
os.environ["ZFIT_DISABLE_TF_WARNINGS"] = "1"
import zfit
from zfit import z
import numpy as np
import matplotlib.pyplot as plt
.. jupyter-execute::
obs = zfit.Space('x', (-5, 5))
gauss = zfit.pdf.Gauss(obs=obs, mu=0, sigma=2)
data = gauss.sample(1000)
kde = zfit.pdf.KDE1DimExact(data,
# obs, bandwidth, kernel,
# padding, weights, name
)
x = np.linspace(-5, 5, 200)
plt.plot(x, kde.pdf(x), label='Exact KDE')
plt.plot(x, gauss.pdf(x), label='True PDF')
plt.legend()
.. thebe-button:: Run interactively
.. _sec-grid-kdes:
Grid KDEs
'''''''''
**Summary** *To reduce the computational complexity, the input data can be finely binned into a histogram, where each
bin serves as the sample point to a kernel. This is well suited for larger datasets. Like the exact KDE, this
leaves the possibility to choose a bandwidth.*
**Explanation**
The most straightforward way to decrease the computational complexity is by limiting the number of sample points.
This can be done by a binning routine, where the values at a smaller number of regular grid points are estimated
from the original larger number of sample points.
Given a set of sample points :math:`X = \{x_1, ..., x_k, ..., x_n\}` with weights :math:`w_k` and a set of
equally spaced grid points :math:`G = \{g_1, ..., g_l, ..., g_N\}` where :math:`N < n`,
we can assign an estimate
(or a count) :math:`c_l` to each grid point :math:`g_l` and use the newly found :math:`g_l` to calculate
the kernel density estimation instead.
.. math::
:label: eq-kdebin
\widehat{f}_h(x) = \frac{1}{nh} \sum_{l=1}^N c_l \cdot K\Big(\frac{x-g_l}{h}\Big)
This lowers the computational complexity down to :math:`\mathcal{O}(N \cdot m)`.
Depending on the number of grid points :math:`N` there is a tradeoff between accuracy and speed.
However, as we will see in the comparison chapter later as well, even for ten million sample points, a grid of size
:math:`1024` is enough to capture the true density with high accuracy. As described in the extensive overview
by Artur Gramacki [@gramacki2018fft], simple binning or linear binning can be used, although the latter is often
preferred since it is more accurate and the difference in computational complexity is negligible.
**Implementation**
The implementation of Grid KDEs is similar to the exact KDEs, except that the former first bins the incomming data and
uses this as a grid for the kernel. Therefore, it also takes parameters for the binning, such as the number of bins
and the method.
.. jupyter-execute::
data = gauss.sample(100_000)
kde = zfit.pdf.KDE1DimGrid(data,
# obs, bandwidth, kernel, num_grid_points,
# binning_method, padding, weights, name
)
plt.plot(x, kde.pdf(x), label='Grid KDE')
plt.plot(x, gauss.pdf(x), label='True PDF')
plt.legend()
.. _sec-kde-bandwidth:
Bandwidth
'''''''''
**Summary** *Bandwidth denotes the width of the kernel and is usually chosen to reduce the integrated squared error.
There are two main distinctions, global and local bandwidths. The former
is the same width for all kernels while the latter uses a different bandwidth for each kernel and therefore can, in
general, better reflect the local density, often at a computational cost.*
**Explanation**
The optimal bandwidth is often defined as the one that minimizes the
mean integrated squared error (:math:`MISE`) between the kernel density
estimation :math:`\widehat{f}_{h,norm}(x)` and the true probability
density function :math:`f(x)`, where :math:`\mathbb{E}_f` denotes the
expected value with respect to the sample which was used to calculate
the KDE.
.. math::
:label: eq-mise
MISE(h) = \mathbb{E}_f\int [\widehat{f}_{h,norm}(x) - f(x)]^2 dx
To find the optimal bandwidth it is useful to look at the second order
derivative :math:`f^{(2)}` of the unknown distribution as it indicates
how many peaks the distribution has and how steep they are. For a
distribution with many narrow peaks close together a smaller bandwidth
leads to better result since the peaks do not get smeared together to a
single peak for instance.
An asymptotically optimal bandwidth
:math:`h_{AMISE}` which minimizes a first-order asymptotic approximation
of the :math:`MISE` is then given by
.. math::
:label: eq-amise
h_{AMISE}(x) = \Big( \frac{1}{2N\sqrt{\pi} \| f^{(2)}(x)\|^2}\Big)^{\frac{1}{5}}
where :math:`N` is the number of sample points (or grid points if
binning is used).
**Implementation**
zfit implements a few different bandwidth methods, of which not all are available for all KDEs.
Rule of thumb
Scott and Silverman both proposed rules of thumb for the kernel bandwidth. These are approximate estimates
that yield a single, global bandwidth.
Adaptive methods
These methods calculate a local density parameter that is approximately :math:`\propto f(x)^{-1/2}`,
where :math:`f(x)` is an initial estimate of the density. This can be calculated for example by using
a rule of thumb to obtain an initial estimate.
The main disadvantage is a higher computational complexity, both for the initial overhead and
for the evaluation of the PDF. Most notably, the memory consumption can be significantly higher.
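For instance, a global bandwidth can be set explicitly when creating the KDE. The following is only a sketch: a
fixed numerical bandwidth is always accepted, while the string options ``'silverman'`` and ``'adaptive_zfit'`` are
assumed here for illustration; the exact names available should be checked in the docstring of
:py:class:`~zfit.pdf.KDE1DimExact`.

.. code-block:: python

    # fixed, global bandwidth given as a number
    kde_fixed = zfit.pdf.KDE1DimExact(data, bandwidth=0.5)

    # rule-of-thumb (global) and adaptive (local) bandwidths via string options (names assumed)
    kde_rot = zfit.pdf.KDE1DimExact(data, bandwidth='silverman')
    kde_adaptive = zfit.pdf.KDE1DimExact(data, bandwidth='adaptive_zfit')

    plt.plot(x, kde_fixed.pdf(x), label='KDE, h=0.5')
    plt.plot(x, kde_rot.pdf(x), label='KDE, rule of thumb')
    plt.plot(x, kde_adaptive.pdf(x), label='KDE, adaptive')
    plt.legend()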
.. _sec-fft-kdes:
FFT KDEs
'''''''''
**Summary** *By rewriting the KDE as a discrete convolution and using the FFT, the density can be
approximated by interpolating between the discretized values.*
Another technique to speed up the computation is rewriting the kernel
density estimation as a convolution operation between the kernel function
and the grid counts (bin counts) calculated by the binning routine given
above.
By using the fact that a convolution is just a multiplication in Fourier
space and only evaluating the KDE at grid points one can reduce the
computational complexity down to
:math:`\mathcal{O}(\log{N} \cdot N)`
Using Eq. :eq:`eq-kdebin` from above only evaluated at grid
points gives us
.. math::
:label: eq-binkdegrid
\widehat{f}_h(g_j) = \frac{1}{nh} \sum_{l=1}^N c_l \cdot K\Big(\frac{g_j-g_l}{h}\Big) = \frac{1}{nh}
\sum_{l=1}^N k_{j-l} \cdot c_l
where :math:`k_{j-l} = K(\frac{g_j-g_l}{h})`.
If we set :math:`c_l = 0` for all :math:`l` not in the set
:math:`\{1, ..., N\}` and notice that :math:`K(-x) = K(x)` we can extend
Eq. :eq:`eq-binkdegrid` to a discrete convolution as follows
.. math::
\widehat{f}_h(g_j) = \frac{1}{nh} \sum_{l=-N}^N k_{j-l} \cdot c_l = \vec{c} \ast \vec{k}
By using the convolution theorem we can Fourier transform
:math:`\vec{c}` and :math:`\vec{k}`, multiply them and inverse Fourier
transform them again to get the result of the discrete convolution.
Due to the limitation of evaluating only at the grid points
themselves, one needs to interpolate to get values for the estimated
distribution at points in between.
**Implementation**
This is implemented in zfit as :py:class:`~zfit.pdf.KDE1DimFFT`. It
supports similar arguments as the grid KDEs, except that the
bandwidth cannot be variable.
.. jupyter-execute::
kde = zfit.pdf.KDE1DimFFT(data,
# obs, bandwidth, kernel, num_grid_points, fft_method,
# binning_method, padding, weights, name
)
plt.plot(x, kde.pdf(x), label='FFT KDE')
plt.plot(x, gauss.pdf(x), label='True PDF')
plt.legend()
.. _sec-isj-kde:
ISJ KDEs
'''''''''
**Summary** *A different take on KDEs is
a new adaptive kernel density estimator based on linear
diffusion processes. It performs especially well on data that do not follow
a normal distribution. The method also includes an estimation for the optimal
bandwidth.*
The method is described completely in the paper ‘Kernel density
estimation by diffusion’ by :cite:t:`Botev_2010`.
The general idea is briefly sketched below.
As explained in :ref:`sec-kde-bandwidth`, the optimal bandwidth is often
defined as the one that minimizes the
(:math:`MISE`) as defined in Eq. :eq:`eq-mise` respectively a
first-order asymptotic approximation :math:`h_{AMISE}` as defined in
Eq. :eq:`eq-amise`.
As Sheather and Jones showed, this second order derivative can be
estimated, starting from an even higher order derivative
:math:`\|f^{(l+2)}\|^2` by using the fact that
:math:`\|f^{(j)}\|^2 = (-1)^j \mathbb{E}_f[f^{(2j)}(X)], \text{ } j\geq 1`
.. math::
h_j=\left(\frac{1+1 / 2^{j+1 / 2}}{3} \frac{1 \times 3 \times 5 \times \cdots \times(2 j-1)}{N \sqrt{\pi / 2}\left\|f^{(j+1)}\right\|^{2}}\right)^{1 /(3+2 j)} = \gamma_j(h_{j+1})
where :math:`h_j` is the optimal bandwidth for the :math:`j`-th
derivative of :math:`f` and the function :math:`\gamma_j` defines the
dependency of :math:`h_j` on :math:`h_{j+1}`
Their proposed plug-in method works as follows:
1. Compute :math:`\|\widehat{f}^{(l+2)}\|^2` by assuming that :math:`f`
is the normal pdf with mean and variance estimated from the sample
data
2. Using :math:`\|\widehat{f}^{(l+2)}\|^2` compute :math:`h_{l+1}`
3. Using :math:`h_{l+1}` compute :math:`\|\widehat{f}^{(l+1)}\|^2`
4. Repeat steps 2 and 3 to compute :math:`h_{l}`,
:math:`\|\widehat{f}^{(l)}\|^2`, :math:`h_{l-1}`, :math:`\cdots` and
so on until :math:`\|\widehat{f}^{(2)}\|^2` is calculated
5. Use :math:`\|\widehat{f}^{(2)}\|^2` to compute :math:`h_{AMISE}`
The weakest point of this procedure is the assumption that the true
distribution is a Gaussian density function in order to compute
:math:`\|\widehat{f}^{(l+2)}\|^2`. This can lead to arbitrarily bad
estimates of :math:`h_{AMISE}`, when the true distribution is far from
being normal.
Therefore Botev et al. took this idea further. Given
the function :math:`\gamma^{[k]}` such that
.. math::
\gamma^{[k]}(h)=\underbrace{\gamma_{1}\left(\cdots \gamma_{k-1}\left(\gamma_{k}\right.\right.}_{k \text { times }}(h)) \cdots)
:math:`h_{AMISE}` can be calculated as
.. math::
h_{AMISE} = h_{1}=\gamma^{[1]}(h_{2})= \gamma^{[2]}(h_{3})=\cdots=\gamma^{[l]}(h_{l+1})
By setting :math:`h_{AMISE}=h_{l+1}` and using fixed point iteration to
solve the equation
.. math::
h_{AMISE} = \gamma^{[l]}(h_{AMISE})
the optimal bandwidth :math:`h_{AMISE}` can be found directly.
This **eliminates the need to assume normally distributed data** for the
initial estimate and leads to improved accuracy, especially for
density distributions that are far from normal.
According to their paper, increasing :math:`l` beyond
:math:`l=5` does not increase the accuracy in any practically meaningful
way. The computation is especially efficient if :math:`\gamma^{[5]}` is
computed using the Discrete Cosine Transform, an FFT-related
transformation.
The optimal bandwidth :math:`h_{AMISE}` can then either be used for
other kernel density estimation methods (like the FFT-approach discussed
above) or also to compute the kernel density estimation directly using
another Discrete Cosine Transform.
.. jupyter-execute::
kde = zfit.pdf.KDE1DimISJ(data,
# obs, num_grid_points, binning_method,
# padding, weights, name
)
plt.plot(x, kde.pdf(x), label='ISJ KDE')
plt.plot(x, gauss.pdf(x), label='True PDF')
plt.legend()
.. _sec-boundary-bias-and-padding:
Boundary bias and padding
''''''''''''''''''''''''''
KDEs have a peculiar weakness: the boundaries, as the density outside of them is zero. This makes the KDE
go down towards the boundary as well, approaching zero, no matter what the
density inside the boundary was.
.. jupyter-execute::
obs = zfit.Space('x', (-2, 0.5))  # will cut off data at -2, 0.5
data_narrow = gauss.sample(1000, limits=obs)
kde = zfit.pdf.KDE1DimExact(data_narrow)
x = np.linspace(-2, 0.5, 200)
plt.plot(x, kde.pdf(x), label='Biased KDE')
plt.plot(x, gauss.pdf(x, obs), label='True PDF')
plt.legend()
There are two ways to circumvent this problem:
The best solution is to provide a dataset that covers a larger range than the default space the PDF is used in.
.. jupyter-execute::
obs_wide = zfit.Space('x', (-5, 5))
data_wide = gauss.sample(1000, limits=obs_wide)
kde = zfit.pdf.KDE1DimExact(data_wide, obs=obs)
plt.plot(x, kde.pdf(x), label='Wide KDE')
plt.plot(x, gauss.pdf(x, obs), label='True PDF')
plt.legend()
To get an idea of what happened, this is actually the full PDF. Notice that it is normalized over
``obs``.
.. jupyter-execute::
:hide-code:
x = np.linspace(-5, 5, 200)
plt.plot(x, kde.pdf(x), label='Wide KDE')
plt.plot(x, gauss.pdf(x, obs), label='True PDF')
plt.legend()
x = np.linspace(-2, 0.5, 200)
Another technique, as we may not have more data on the edges, is to mirror
the existing data at the boundaries, which is equivalent to a boundary condition
with a zero derivative. This is a padding technique and can improve the boundaries.
.. jupyter-execute::
kde = zfit.pdf.KDE1DimExact(data_narrow, obs=obs, padding=0.2)
plt.plot(x, kde.pdf(x), label='Padded KDE')
plt.plot(x, gauss.pdf(x, obs), label='True PDF')
plt.legend()
However, an important drawback of this method to keep in mind is that it actually
alters the PDF *to look mirrored*. Plotting the PDF over a larger range makes this
clear.
.. jupyter-execute::
:hide-code:
x = np.linspace(-5, 5, 200)
plt.plot(x, kde.pdf(x), label='Padded KDE')
plt.plot(x, gauss.pdf(x, obs), label='True PDF')
plt.legend()
Download this tutorial :jupyter-download:notebook:`notebook <zfit_kde_introduction.ipynb>`,
:jupyter-download:script:`script <zfit_kde_introduction.ipynb>`
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/docs/tutorials/components/models/kde_models.rst | kde_models.rst |
Parameter
=========
Several objects in zfit, most importantly models, have one or more parameters which typically
parametrise a function or distribution. There are two different kinds of parameters in zfit:
* Independent: can be changed in a fit (or explicitly be set to ``fixed``).
* Dependent: **cannot** be directly changed but *may* depend on independent parameters.
Unique names
-------------
Parameters in zfit are global, unique objects. No two parameters with the same name can therefore exist, as their meaning would
be ambiguous. If a new parameter with an already existing name is created, a :class:`~zfit.exception.NameAlreadyTakenError`
will be raised.
For Jupyter notebooks, see also :ref:`about parameters in Jupyter <params-in-jupyter>` for
additional information.
Independent Parameter
---------------------
To create a parameter that can be changed, *e.g.*, to fit a model, a :py:class:`~zfit.Parameter` has to
be instantiated.
The syntax is as follows:
.. code:: python
param1 = zfit.Parameter("unique_param_name", start_value[, lower_limit, upper_limit])
Furthermore, a ``step_size`` can be specified. If not, it is set to a default value around 0.1.
:py:class:`~zfit.Parameter` can have limits (tested with :py:meth:`~zfit.Parameter.has_limits`), which will
clip the value to the limits given by :py:meth:`~zfit.Parameter.lower_limit` and
:py:meth:`~zfit.Parameter.upper_limit`.
.. note:: Comparison to RooFit
While this closely follows the RooFit syntax, it is very important to note that the optional limits
of the parameter behave differently:
if not given, the parameter will be "unbounded", not fixed (as in RooFit).
Parameters are therefore floating by default, but their value can be fixed by setting the attribute
``floating`` to ``False`` or already specifying it in the init.
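A minimal sketch of the above (the name ``"sigma_param"`` is an arbitrary example):

.. code:: python

    sigma = zfit.Parameter("sigma_param", 1.0, 0.1, 10., step_size=0.01)
    print(sigma.has_limits)   # True, the value will be clipped to (0.1, 10.)
    sigma.floating = False    # the parameter is now fixed in a fit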
The value of the parameter can be changed with the :py:func:`~zfit.Parameter.set_value` method.
Using this method as a context manager, the value can also be changed temporarily.
However, be aware that anything *dependent* on the parameter will be evaluated
with the new value at run-time:
.. jupyter-execute::
:hide-code:
:hide-output:
import zfit
.. jupyter-execute::
mu = zfit.Parameter("mu_one", 1) # no limits, but FLOATING (!)
with mu.set_value(3):
    print(f'inside the context, {mu}')  # mu is temporarily set to 3
# here, mu is again 1
print(f'after the context, {mu}')
Dependent Parameter
-------------------
A parameter can be composed of several other parameters. They can be used equivalently to :py:class:`~zfit.Parameter`.
.. jupyter-execute::
mu2 = zfit.Parameter("mu_two", 7)
def dependent_func(mu, mu2):
return mu * 5 + mu2 # or any kind of computation
dep_param = zfit.ComposedParameter("dependent_param", dependent_func, params=[mu, mu2])
print(dep_param.get_params())
A special case of the above is :py:class:`~zfit.ComplexParameter`: it
provides a few special methods (like :py:func:`~zfit.ComplexParameter.real`,
:py:func:`~zfit.ComplexParameter.conj` etc.)
to easier deal with complex numbers.
Additionally, the :py:func:`~zfit.ComplexParameter.from_cartesian` and :py:func:`~zfit.ComplexParameter.from_polar`
methods can be used to initialize complex parameters from floats, avoiding the need to create complex
:py:class:`tf.Tensor` objects.
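A minimal sketch of both constructors (the parameter names are arbitrary; check the class documentation for the exact
signatures):

.. code:: python

    mod = zfit.Parameter("mod", 1.2)
    arg = zfit.Parameter("arg", 0.3)
    c_polar = zfit.ComplexParameter.from_polar("c_polar", mod, arg)
    c_cart = zfit.ComplexParameter.from_cartesian("c_cart", 0.5, 1.5)  # directly from floats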
.. _params-in-jupyter:
Parameters in Jupyter
----------------------
Parameters are unique, global objects. This can conflict with the typical workflow in a Jupyter notebook as cells are
often executed multiple times. If a cell that creates a parameter is executed again (meaning a parameter with the same
name as an already existing one should be created), it raises a :class:`~zfit.exception.NameAlreadyTakenError`
(there is `an extensive discussion of the why <https://github.com/zfit/zfit/issues/186>`_).
To circumvent this, which comes from the fact that Jupyter is stateful, there are a few ways:
- if possible, simply rerun everything.
- move the creation of the variables into a separate cell at the beginning. Remember that you can set a value on a
variable anytime using :meth:`~zfit.Parameter.set_value` which can be called as often as desired.
- create a wrapper that returns the same parameter again if it exists. This way it is clear what is done
  and it is convenient to use as a de-facto drop-in replacement for :class:`~zfit.Parameter` (using it in
  places other than exploratory work may have unintended side effects).
Example wrapper:
.. jupyter-execute::
all_params = {}
def get_param(name, value=None, lower=None, upper=None, step_size=None, **kwargs):
"""Either create a parameter or return existing if a parameter with this name already exists.
If anything else than *name* is given, this will be used to change the existing parameter.
Args:
name: Name of the Parameter
value : starting value
lower : lower limit
upper : upper limit
step_size : step size
Returns:
``zfit.Parameter``
"""
if name in all_params:
parameter = all_params[name]
if lower is not None:
parameter.lower = lower
if upper is not None:
parameter.upper = upper
if step_size is not None:
parameter.step_size = step_size
if value is not None:
parameter.set_value(value)
return parameter
# otherwise create new one
parameter = zfit.Parameter(name, value, lower, upper, step_size)
all_params[name] = parameter
return parameter
This wrapper can then be used instead of :class:`~zfit.Parameter` as
.. jupyter-execute::
param1 = get_param('param1', 5., 3., 10., step_size=5)
# Do something with it
param2 = get_param('param1', 3., 1., 10.) # will have step_size 5 as we don't change that
assert param2 is param1
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/docs/tutorials/components/intro/parameter.rst | parameter.rst |
.. _data-section:
====
Data
====
Easy and fast data manipulation is among the crucial aspects of High Energy Particle physics data analysis.
With the increasing data availability (e.g. with the advent of the LHC), this challenge has been pursued in different
manners. Common strategies range from multidimensional arrays with attached row/column labels (e.g. ``DataFrame`` in *pandas*) to compressed binary formats (e.g. ROOT). While each of these data structure designs has its own advantages in terms of speed and accessibility, the data concept implemented in ``zfit`` closely follows the features of ``DataFrame`` in *pandas*.
The :py:class:`~zfit.Data` class provides simple and structured access/manipulation of *data* -- similar to the multidimensional array approach of *pandas*. The key feature of :py:class:`~zfit.Data` is its relation to the :py:class:`~zfit.Space`, or more explicitly to its axis or name. An equivalent convention is to refer to the :py:class:`~zfit.Space` in this context as the *observable* under investigation. Note that no explicit range for the :py:class:`~zfit.Space` is required at the moment of the data definition, since this is only needed once some calculation is performed (e.g. integrals, fits, etc.).
Import dataset from a ROOT file
--------------------------------
With the proliferation of the ROOT framework in the context of particle physics, it is often the case that the user will have access to a ROOT file in their analysis. A simple method is provided to handle this conversion:
.. code-block::
data = zfit.Data.from_root(root_file, root_tree, branches)
where ``root_file`` is the path to the ROOT file, ``root_tree`` is the tree name and ``branches`` is the list of branches (or a single branch) that the user wants to import from the ROOT file.
In addition to the default conversion of the dataset there are two optional functionalities for the user, i.e. the use of weights and the renaming of the specified branches. The nominal structure follows:
.. code-block:: pycon
>>> data = zfit.Data.from_root(root_file,
... root_tree,
... branches,
... branches_alias=None,
... weights=None)
The ``branches_alias`` can be seen as a list of strings that renames the original ``branches``. The ``weights`` argument accepts two different inputs: (1) a 1-D column with shape equal to the data (number of events) or (2) a string corresponding to a column of the ROOT file. Note that in case multiple weights are required, the weight manipulation has to be performed by the user beforehand, e.g. using NumPy/pandas or similar.
.. note::
The implementation of the ``from_root`` method makes use of the uproot package,
which uses NumPy to cast blocks of data from the ROOT file into NumPy arrays in a time-optimised manner.
This also means that the *goodies* from uproot can be used by specifying the ``root_dir_options``,
such as cuts on the dataset. However, such selections can also be applied later when examining the produced dataset,
which is the advised way of doing this.
Import dataset from a pandas DataFrame or Numpy ndarray
-------------------------------------------------------
A very simple conversion is provided from a pandas DataFrame. Naturally this is simplified since the :py:class:`~zfit.Space` (observable) is not mandatory and can be obtained directly from the columns:
.. code-block:: pycon
>>> data = zfit.Data.from_pandas(pandas.DataFrame,
... obs=None,
... weights=None)
In the case of NumPy, the only difference is that a ``numpy.ndarray`` is required as input and the :py:class:`~zfit.Space` (``obs``) is mandatory:
.. code-block:: pycon
>>> data = zfit.Data.from_numpy(numpy.ndarray,
... obs,
... weights=None)
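As a concrete sketch combining the two (the column and observable names are arbitrary examples):

.. code-block:: pycon

    >>> import numpy as np
    >>> import pandas as pd
    >>> obs = zfit.Space('x', limits=(-5, 5))
    >>> df = pd.DataFrame({'x': np.random.normal(size=1000)})
    >>> data_from_df = zfit.Data.from_pandas(df, obs=obs)
    >>> data_from_np = zfit.Data.from_numpy(obs=obs, array=df[['x']].to_numpy())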
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/docs/tutorials/components/intro/data.rst | data.rst |
.. _basic-model:
Building a model
================
In order to build a generic model the concepts of function and probability density function (PDF) need to be clarified.
The PDF, or density of a continuous random variable X, is a function f(x) that describes the relative likelihood for this random variable to take on a given value.
In this sense, for any two numbers a and b with :math:`a \leq b`,
:math:`P(a \leq X \leq b) = \int^{b}_{a}f(X)dx`
That is, the probability that X takes on a value in the interval :math:`[a, b]` is the area above this interval and under the graph of the density function.
In other words, in order for a function to be a PDF it must satisfy two criteria:
1. :math:`f(x) \geq 0` for all x;
2. :math:`\int^{\infty}_{-\infty}f(x)dx = 1`, i.e. the area under the entire graph of :math:`f(x)` equals 1.
In zfit these distinctions are respected, *i.e.*, a function can be converted into a PDF by imposing the basic two criteria above.
Predefined PDFs and basic properties
------------------------------------
A series of predefined PDFs are available to the users and can be easily accessed using autocompletion (if available). In fact, all of these can also be seen in
.. jupyter-kernel::
:id: zfit_model_introduction.ipynb
.. jupyter-execute::
:hide-output:
:hide-code:
import os
os.environ["ZFIT_DISABLE_TF_WARNINGS"] = "1"
import zfit
from zfit import z
import numpy as np
.. thebe-button:: Run interactively
.. jupyter-execute::
print(zfit.pdf.__all__)
These include the basic functions but also some operations discussed below. Let's consider
the simple example of a ``CrystalBall``.
PDF objects must also be initialised giving their named parameters. For example:
.. jupyter-execute::
obs = zfit.Space('x', limits=(4800, 6000))
# Creating the parameters for the crystal ball
mu = zfit.Parameter("mu", 5279, 5100, 5300)
sigma = zfit.Parameter("sigma", 20, 0, 50)
a = zfit.Parameter("a", 1, 0, 10)
n = zfit.Parameter("n", 1, 0, 10)
# Single crystal Ball
model_cb = zfit.pdf.CrystalBall(obs=obs, mu=mu, sigma=sigma, alpha=a, n=n)
In this case the CB object corresponds to a normalised PDF. The main properties of a PDF, e.g.
evaluating the probability for a given normalisation range or even
setting a temporary normalisation range, can be used as follows:
.. jupyter-execute::
# Get the probabilities of some random generated events
probs = model_cb.pdf(x=np.random.random(10))
# And now execute the tensorflow graph
result = zfit.run(probs)
print(result)
.. jupyter-execute::
# The norm range of the pdf can be changed any time with a contextmanager (temporary) or without (permanent)
with model_cb.set_norm_range((5000, 6000)):
print(model_cb.norm_range)
Another feature of the PDF is to calculate its integral within certain limits. This can be easily achieved by
.. jupyter-execute::
# Calculate the integral between 5000 and 5250 over the PDF normalized
integral_norm = model_cb.integrate(limits=(5000, 5250))
print(f"Integral={integral_norm}")
In this case the CB has been normalised using the range defined in the observable.
Conversely, the ``norm_range`` in which the PDF is normalised can also be specified as input.
Composite PDF
-------------
A common feature in building composite models is the ability to combine different PDFs in terms of sums and products.
There are two ways to create such models, either with the class API or with simple Python syntax.
Let's consider a second crystal ball with the same mean position and width, but different tail parameters
.. jupyter-execute::
# New tail parameters for the second CB
a2 = zfit.Parameter("a2", -1, -10, 0)
n2 = zfit.Parameter("n2", 1, 0, 10)
# New crystal Ball function defined in the same observable range
model_cb2 = zfit.pdf.CrystalBall(obs=obs, mu=mu, sigma=sigma, alpha=a2, n=n2)
We can now combine these two PDFs to create a double Crystal Ball with a single mean and width through the :py:class:`zfit.pdf.SumPDF` class:
.. jupyter-execute::
# or via the class API
frac = 0.3 # can also be a Parameter
double_cb_class = zfit.pdf.SumPDF(pdfs=[model_cb, model_cb2], fracs=frac)
Notice that the new PDF has the same observables as the original ones, as they coincide.
Alternatively one could consider having PDFs for different axes, which would then create a total PDF of higher dimension.
A simple extension of these operations is, instead of a sum of PDFs, to model a two-dimensional Gaussian, e.g. as a product:
.. jupyter-execute::
# Defining two Gaussians in two different observables
mu1 = zfit.Parameter("mu1", 1.)
sigma1 = zfit.Parameter("sigma1", 1.)
gauss1 = zfit.pdf.Gauss(obs=obs, mu=mu1, sigma=sigma1)
obs2 = zfit.Space('y', limits=(5, 11))
mu2 = zfit.Parameter("mu2", 1.)
sigma2 = zfit.Parameter("sigma2", 1.)
gauss2 = zfit.pdf.Gauss(obs=obs2, mu=mu2, sigma=sigma2)
# Producing the product of two PDFs
prod_gauss = gauss1 * gauss2
# Or alternatively
prod_gauss_class = zfit.pdf.ProductPDF(pdfs=[gauss2, gauss1]) # notice the different order or the pdf
The new PDF is now in two dimensions.
The order of the observables follows the order of the PDFs given.
.. jupyter-execute::
print("python syntax product obs", prod_gauss.obs)
.. jupyter-execute::
print("class API product obs", prod_gauss_class.obs)
Extended PDF
------------
In the event there are different *species* of distributions in a given observable,
the simple sum of PDFs does not a priori provide the absolute number of events for each species but rather the fraction, as seen above.
An example is a Gaussian mass distribution with an exponential background, e.g.
:math:`P = f_{S}\frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{(x-\mu)^{2}}{2\sigma^{2}}} + (1 - f_{S}) e^{-\alpha x}`
Since we are interested in expressing a measurement of the number of events,
the expression :math:`M(x) = N_{S}S(x) + N_{B}B(x)` is such that :math:`M(x)` is normalised to :math:`N_{S} + N_{B} = N` instead of one.
This means that :math:`M(x)` is not a true PDF but rather an expression for two quantities, the shape and the number of events in the distributions.
An extended PDF can be easily implemented in zfit in two ways:
.. jupyter-execute::
# Create a parameter for the number of events
yield_gauss = zfit.Parameter("yield_gauss", 100, 0, 1000)
# Extended PDF using a predefined method
extended_gauss = gauss1.create_extended(yield_gauss)
This will leave ``gauss1`` unextended while the ``extended_gauss`` is now extended. However, there are cases where
:meth:`~zfit.pdf.BasePDF.create_extended` may fail, such as if it can't copy the original PDF. A PDF can also be
extended in-place
.. jupyter-execute::
print(f"Gauss is extended: {gauss1.is_extended}")
gauss1.set_yield(yield_gauss)
print(f"Gauss is extended: {gauss1.is_extended}")
.. note::
An extended PDF in zfit *does not fundamentally alter the behavior*. Most importantly,
**anything that works for a non-extended PDF will work in the exact same way if the PDF is extended** (only things that
do not work, e.g. exceptions, may differ). This implies that the output of :meth:`~zfit.pdf.BasePDF.pdf` and
:meth:`~zfit.pdf.BasePDF.integrate` will remain the same.
An extended PDF will have *more* available functionality such as the methods :meth:`~zfit.pdf.BasePDF.ext_pdf` and
:meth:`~zfit.pdf.BasePDF.ext_integrate`, which will scale the output by the yield.
This means that there is no damage done in extending a PDF. It also implies that the other way around,
"de-extending" is not possible but also never required.
Custom PDF
----------
A fundamental design choice of zfit is the ability to create custom PDFs and functions in an easy way.
Let's consider a simplified implementation
.. jupyter-execute::
class MyGauss(zfit.pdf.ZPDF):
"""Simple implementation of a Gaussian similar to zfit.pdf.Gauss class"""
_N_OBS = 1 # dimension, can be omitted
_PARAMS = ['mean', 'std'] # name of the parameters
def _unnormalized_pdf(self, x):
x = z.unstack_x(x)
mean = self.params['mean']
std = self.params['std']
return z.exp(- ((x - mean) / std) ** 2)
This is the basic information required for this custom PDF.
With this new PDF one can access the same features as for the predefined PDFs, e.g.
.. jupyter-execute::
obs_own = zfit.Space("my obs", limits=(-4, 4))
mean = zfit.Parameter("mean", 1.)
std = zfit.Parameter("std", 1.)
my_gauss = MyGauss(obs=obs_own, mean=mean, std=std)
# For instance sampling, integral and probabilities
data = my_gauss.sample(15)
integral = my_gauss.integrate(limits=(-1, 2))
probs = my_gauss.pdf(data, norm_range=(-3, 4))
print(f"Probs: {probs} and integral: {integral}")
Finally, we could also improve the description of the PDF by providing an analytic integral for the ``MyGauss`` PDF:
.. jupyter-execute::
def gauss_integral_from_any_to_any(limits, params, model):
(lower,), (upper,) = limits.limits
mean = params['mean']
std = params['std']
# Write your integral here
return 42. # Dummy value
# Register the integral
limits = zfit.Space(axes=0, limits=(zfit.Space.ANY_LOWER, zfit.Space.ANY_UPPER))
MyGauss.register_analytic_integral(func=gauss_integral_from_any_to_any, limits=limits)
Sampling from a Model
'''''''''''''''''''''
In order to sample from a model, there are two different methods,
:py:meth:`~zfit.core.basemodel.BaseModel.sample` for **advanced** sampling returning a Tensor, and
:py:meth:`~zfit.core.basemodel.BaseModel.create_sampler` for **multiple sampling** as used for toys.
Tensor sampling
'''''''''''''''''
The sample from :py:meth:`~zfit.core.basemodel.BaseModel.sample` is a Tensor that samples when executed.
This is for advanced use cases only.
Advanced sampling and toy studies
'''''''''''''''''''''''''''''''''''''
More advanced and repeated sampling, such as used in toy studies, will be
explained in :ref:`playing_with_toys`.
Download this tutorial :jupyter-download:notebook:`notebook <zfit_model_introduction.ipynb>`,
:jupyter-download:script:`script <zfit_model_introduction.ipynb>`
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/docs/tutorials/components/intro/model.rst | model.rst |
.. _loss-section:
====
Loss
====
A *loss function* can be defined as a measurement of the discrepancy between the observed data and the data predicted by the fitted function.
To some extent it can be visualised as a metric of the goodness of a given prediction as you change the settings of your algorithm.
For example, in a general linear model the loss function is essentially the sum of squared deviations from the fitted line or plane.
A more useful application in the context of High Energy Physics (HEP) is the Maximum Likelihood Estimator (MLE).
The MLE is a specific type of probability model estimation, where the loss function is the negative log-likelihood (NLL).
In zfit, loss functions inherit from the :py:class:`~zfit.loss.BaseLoss` class and they follow a common interface, in which the model and
the dataset **must** be given, and
where parameter constraints **may** be given.
As an example, we can create an unbinned negative log-likelihood loss (:py:class:`~zfit.loss.UnbinnedNLL`) from the model described in the :ref:`Basic model section <basic-model>` and the data from the :ref:`Data section <data-section>`:
.. code-block::
my_loss = zfit.loss.UnbinnedNLL(model_cb, data)
Adding constraints
------------------
Constraints (or, in general, penalty terms) can be added to the loss function either by using the ``constraints`` keyword when creating the loss object or by using the :py:func:`~zfit.loss.BaseLoss.add_constraints` method.
These constraints are specified as a list of penalty terms, which can be any object inheriting from :py:class:`~zfit.constraint.BaseConstraint` that is simply added to the calculation of the loss.
Useful implementations of penalties can be found in the :py:mod:`zfit.constraint` module.
For example, if we wanted to add a gaussian constraint on the ``mu`` parameter of the previous model, we would write:
.. code-block:: pycon
>>> constraint = zfit.constraint.GaussianConstraint(params=mu,
...                                                  observation=5279.,
...                                                  uncertainty=10.)
>>> my_loss = zfit.loss.UnbinnedNLL(model_cb,
...                                 data,
...                                 constraints=constraint)
Custom penalties can also be added to the loss function, for instance if you want to set limits on a parameter:
.. code-block:: pycon
>>> import tensorflow as tf
>>> def custom_constraint(param):
...     max_value = 5400
...     return tf.cond(tf.greater_equal(param, max_value), lambda: 10000., lambda: 0.)
The custom penalty needs to be wrapped in a ``SimpleConstraint`` to be added to the loss function, whereas ``mu`` will be used
as the argument to the constraint:
.. code-block:: pycon
>>> simple_constraint = zfit.constraint.SimpleConstraint(custom_constraint, params=mu)
>>> my_loss.add_constraints(simple_constraint)
In this example, if the value of ``param`` is larger than ``max_value``, a large value is added to the loss function,
driving it away from the minimum.
Simultaneous fits
-----------------
There are currently two loss function implementations in the ``zfit`` library, the :py:class:`~zfit.loss.UnbinnedNLL` and :py:class:`~zfit.loss.ExtendedUnbinnedNLL` classes, which cover non-extended and extended negative log-likelihoods.
A very common use case of likelihood fits in HEP is the possibility to examine simultaneously different datasets (that can be independent or somehow correlated).
To build loss functions for simultaneous fits, the addition operator can be used (the particular combination that is performed depends on the type of loss function):
.. code-block:: pycon
>>> models = [model1, model2]
>>> datasets = [data1, data2]
>>> my_loss1 = zfit.loss.UnbinnedNLL(models[0], datasets[0], fit_range=(-10, 10))
>>> my_loss2 = zfit.loss.UnbinnedNLL(models[1], datasets[1], fit_range=(-10, 10))
>>> my_loss_sim_operator = my_loss1 + my_loss2
The same result can be achieved by passing a list of PDFs on instantiation, along with the same number of datasets:
.. code-block:: pycon
>>> # Adding a list of models and datasets
>>> my_loss_sim = zfit.loss.UnbinnedNLL(model=[model1, model2, ...], data=[data1, data2, ...])
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/docs/tutorials/components/intro/loss.rst | loss.rst |
.. _result-section:
Result
============
As seen before, the ``result`` object contains all the information about the minimization result:
.. code-block:: pycon
>>> print("Function minimum:", result.fmin)
Function minimum: 14170.396450111948
>>> print("Converged:", result.converged)
Converged: True
>>> print("Valid:", result.valid)
Valid: True
>>> print("Full minimizer information:", result)
.. code-block:: pycon
>>> # Information on all the parameters in the fit
>>> params = result.params
>>> # Printing information on specific parameters, e.g. mu
>>> print("mu={}".format(params[mu]['value']))
mu=0.012464509810750313
Estimating uncertainties
----------------------------
In order to get an estimate for the uncertainty of the parameters after the minimization, the FitResult can be used.
.. code-block:: pycon
>>> param_hesse = result.hesse()
>>> print(param_hesse)
This will print out the uncertainties of the parameter using a Hessian approximation at the minimum of the loss.
While the approximation is fast and often good enough, it is symmetric and may not describe the uncertainty
well.
The :py:func:`~zfit.minimizers.fitresult.FitResult.errors` method can be used to perform the CPU-intensive
error calculation.
It returns two objects: the first is the parameter errors and the second is a new ``FitResult`` *in case a new
minimum was found during the profiling*; this will also render the original result invalid as can
be checked with ``result.valid``.
.. code-block:: pycon
>>> param_errors, _ = result.errors()
>>> print(param_errors)
This will print out the uncertainties of the parameter using a profiling method (like :meth:`~iminuit.Minuit.minos`
does)
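Both ``param_hesse`` and the first return value of ``errors`` are dictionaries mapping each parameter to its
uncertainty, roughly as sketched below (the exact dictionary keys may differ slightly between versions):

.. code-block:: pycon

    >>> print(param_hesse[mu])   # e.g. {'error': 0.1}, symmetric
    >>> print(param_errors[mu])  # e.g. {'lower': -0.1, 'upper': 0.1}, asymmetric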
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/docs/tutorials/components/intro/result.rst | result.rst |
Minimization
============
Minimizers are the second-to-last key element in the API framework of zfit.
In particular, they are connected to the loss function that they minimize.
zfit minimizers are stateless and have two main components:
- the creation of a minimizer instance, which has some common arguments such as ``tol``, ``verbosity`` or ``maxiter``
and many minimizer-specific arguments such as ``gradient`` or ``max_trust_radius`` that only one or a few
minimizers take. This instance is stateless; anything (inherently) stateful, such as the convergence
criterion, is created anew during each minimization process.
In this sense, a zfit minimizer is a way of "storing a configuration".
- the actual minimization which is done through the :meth:`~zfit.minimize.BaseMinimizer.minimize` method, which
takes a loss (or a callable), optionally the parameters and optionally a previous result to start from. This method
looks exactly the same for all algorithms.
This makes minimizers interchangeable as they all are invoked in the same way and return a
:class:`~zfit.result.FitResult`, which has the same structure for all minimizers.
The zfit library is designed such that it is simple to introduce new sets of minimizers.
Minimization
-------------------
There are multiple minimizers currently included in the package: :class:`~zfit.minimize.IpyoptV1`,
:class:`~zfit.minimize.Minuit`, the SciPy optimizers (such as :class:`~zfit.minimize.ScipyLBFGSBV1`) and the
NLopt library (such as :class:`~zfit.minimize.NLoptLBFGSV1`). TensorFlow minimizers are also included; however, due
to the different nature of the problems they usually intend to solve, their performance is often inferior.
.. code-block:: pycon
>>> # Minuit minimizer
>>> minimizer_minuit = zfit.minimize.Minuit()
>>> # Ipyopt minimizer
>>> minimizer_ipyopt = zfit.minimize.IpyoptV1()
>>> # One of the NLopt minimizer
>>> minimizer_nloptbfgs = zfit.minimize.NLoptLBFGSV1()
>>> # One of the SciPy minimizer
>>> minimizer_scipybfgs = zfit.minimize.ScipyLBFGSBV1()
.. note:: Why the "V1" at the end of the name?
These minimizers and their API have been introduced recently. Due to their stochastic nature, it is hard
to reliably assess their performance *without large scale testing by the community*. So there will be
improved versions, called V2 etc, which can be tested easily against the V1 in order to have a direct
comparison. At some point later, there will be only one minimizer version, the one without any V.
Any of these minimizers can then be used to minimize the loss function we created
in the :ref:`previous section <loss-section>`, or a pure Python function
.. code-block:: pycon
>>> result = minimizer_minuit.minimize(loss=my_loss)
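A pure Python function can be minimized in the same way; the ``params`` argument then takes the starting values.
This is only a sketch: depending on the zfit version, the function may additionally need an ``errordef`` attribute
(the loss convention, 0.5 for a negative log-likelihood, 1 for a least-squares loss).

.. code-block:: pycon

    >>> def squared(params):
    ...     return (params[0] - 2) ** 2 + (params[1] + 1) ** 2
    >>> squared.errordef = 0.5  # assumed to be required, see above
    >>> result_func = minimizer_scipybfgs.minimize(squared, params=[1.0, -3.0])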
The choice of which parameters of your model should be floating in the fit can also be made at this stage
.. code-block:: pycon
>>> # In the case of a Gaussian (e.g.)
>>> result = minimizer_minuit.minimize(loss=my_loss, params=[mu, sigma])
**Only** the parameters given in ``params`` are floated in the optimisation process.
If this argument is not provided or ``params=None``, all the floating parameters in the loss function are
floated in the minimization process.
The third argument to minimize can be a :class:`~zfit.result.FitResult` that initializes the minimization. This can be
used to possibly chain minimizations and for example first search with a global minimizer at a high tolerance and then
refine with a local minimizer.
.. code-block:: pycon
>>> result_refined = minimizer_ipyopt.minimize(loss=my_loss, params=[mu, sigma], init=result)
The result of the fit is returned as a :py:class:`~zfit.minimizers.fitresult.FitResult` object,
which provides access to the minimizer state.
zfit separates the minimisation of the loss function from the error calculation
in order to give the freedom of calculating this error whenever needed.
The ``result`` object contains all the information about the minimization result:
.. code-block:: pycon
>>> print("Function minimum:", result.fmin)
Function minimum: 14170.396450111948
>>> print("Converged:", result.converged)
Converged: True
>>> print("Valid:", result.valid)
Valid: True
>>> print("Full minimizer information:", result)
.. code-block:: pycon
>>> # Information on all the parameters in the fit
>>> params = result.params
>>> # Printing information on specific parameters, e.g. mu
>>> print("mu={}".format(params[mu]['value']))
mu=0.012464509810750313
More on the result and how to get an estimate of the uncertainty is described in
the :ref:`next section <result-section>`.
Creating your own minimizer
----------------------------
Adding new minimizers is well supported in zfit as there are convenient base classes offered that do most of the heavy
lifting.
While this is a feature of zfit that can be fully used, it will not be as stable as the simple usage of a minimizer
until the 1.0 release.
A wrapper for TensorFlow optimisers is also available to allow new ideas to be easily integrated into the framework.
For instance, the Adam minimizer could have been initialised by
.. code-block:: pycon
>>> # Adam's TensorFlow optimiser using a wrapper
>>> minimizer_wrapper = zfit.minimize.WrapOptimizer(tf.keras.optimizers.Adam())
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/docs/tutorials/components/intro/minimize.rst | minimize.rst |
.. _playing_with_toys:
Toy studies and inference
================================================
While a single fit is useful, it does not say a lot about the *uncertainty* of
the result and whether the fit is biased in any way or not.
Many statistical methods, such as obtaining sWeights,
(Feldman and Cousins) confidence intervals, setting limits and more are
all covered in the
`hepstats library <https://github.com/scikit-hep/hepstats>`_,
which works directly with zfit parts.
For other toy studies, models offer a sampler function that can be used
for repeated sampling.
Playing with toys: Multiple samplings
''''''''''''''''''''''''''''''''''''''
The method :py:meth:`~zfit.core.basemodel.BaseModel.create_sampler` returns a sampler that can be used
like a :py:class:`~zfit.Data` object (e.g. for building a :py:class:`~zfit.core.interfaces.ZfitLoss`).
The sampling itself is *not yet done* but only when :py:meth:`~zfit.core.data.Sampler.resample` is
invoked. The sample generated depends on the original pdf at this point, e.g. parameters have the
value they have when the :py:meth:`~zfit.core.data.Sampler.resample` is invoked. To have certain
parameters fixed, they have to be specified *either* on :py:meth:`~zfit.core.basemodel.BaseModel.create_sampler`
via ``fixed_params``, on :py:meth:`~zfit.core.data.Sampler.resample` by specifying which parameter
will take which value via ``param_values`` or by changing the attribute of :py:class:`~zfit.core.data.Sampler`.
Reusing the model, obs and parameters from :ref:`basic-model`,
this is typically how toys look:
.. jupyter-execute::
:hide-output:
:hide-code:
import os
os.environ["ZFIT_DISABLE_TF_WARNINGS"] = "1"
import zfit
from zfit import z
import numpy as np
obs = zfit.Space('x', limits=(4800, 6000))
mu1 = zfit.Parameter("mu1", 1.)
sigma1 = zfit.Parameter("sigma1", 1.)
gauss1 = zfit.pdf.Gauss(obs=obs, mu=mu1, sigma=sigma1)
mu2 = zfit.Parameter("mu2", 1.)
sigma2 = zfit.Parameter("sigma2", 1.)
.. jupyter-execute::
# using the previous gaussians and obs to create a model
gauss3 = zfit.pdf.Gauss(obs=obs, mu=mu2, sigma=sigma2)
model = zfit.pdf.SumPDF([gauss1, gauss3], fracs=0.4)
sampler = model.create_sampler(n=1000, fixed_params=True)
nll = zfit.loss.UnbinnedNLL(model=model, data=sampler)
minimizer = zfit.minimize.Minuit()
results = []
nruns = 5
for run_number in range(nruns):
# initialize the parameters randomly
sampler.resample() # now the resampling gets executed
mu1.set_value(np.random.normal())
sigma1.set_value(abs(np.random.normal()) + 0.5)
result = minimizer.minimize(nll)
results.append(result)
# save the result, collect the values, calculate errors...
Here we fixed all parameters to the values they had at initialization and then sample. If we do not provide any
arguments to ``resample``, it will always sample from the distribution with the parameters set to the values they had when
the sampler was created.
To give another, though not very useful example:
.. jupyter-execute::
# create a model depending on mu1, sigma1, mu2, sigma2
sampler = model.create_sampler(n=1000, fixed_params=[mu1, mu2])
nll = zfit.loss.UnbinnedNLL(model=model, data=sampler)
sampler.resample() # now it sampled
# do something with nll
minimizer.minimize(nll) # minimize
sampler.resample()
# note that the nll, being dependent on ``sampler``, also changed!
The sample is now resampled with the *current values* (minimized values) of ``sigma1``, ``sigma2`` and with
the initial values of ``mu1``, ``mu2`` (because they have been fixed).
We can also specify the parameter values explicitly by
using the following argument. Reusing the example above
.. jupyter-execute::
sigma1.set_value(np.random.normal())
sampler.resample(param_values={sigma1: 5})
The sample (and therefore also the sample the ``nll`` depends on) is now sampled with ``sigma1`` set to 5.
If some parameters are constrained to values observed from external measurements, usually Gaussian constraints,
then sampling of the observed values might be needed to obtain an unbiased sample from the model. Example:
.. jupyter-execute::
# same model depending on mu1, sigma1, mu2, sigma2
constraint = zfit.constraint.GaussianConstraint(params=[sigma1, sigma2],
observation=[1.0, 0.5],
uncertainty=[0.1, 0.05])
n_samples = 5
sampler = model.create_sampler(n=n_samples, fixed_params=[mu1, mu2])
nll = zfit.loss.UnbinnedNLL(model=model, data=sampler, constraints=constraint)
constr_values = constraint.sample(n=n_samples)
for constr_params, constr_vals in constr_values.items():
sampler.resample()
# do something with nll, temporarily assigning values to the parameters
with zfit.param.set_values(constr_params, constr_vals):
minimizer.minimize(nll) # minimize
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/docs/tutorials/components/intro/toy_studies.rst | toy_studies.rst |
============================
Space, Observable and Range
============================
Inside zfit, :py:class:`~zfit.Space` defines the domain of objects by specifying the observables/axes and *maybe* also
the limits. Any model and data needs to be specified in a certain domain, which is usually done using the
``obs`` argument. It is crucial that the axes used by the observables of the data and the model match, and this matching is
handled by the :py:class:`~zfit.Space` class.
.. jupyter-execute::
:hide-code:
:hide-output:
import zfit
from zfit import z
import numpy as np
.. code-block::
obs = zfit.Space("x")
model = zfit.pdf.Gauss(obs=obs, ...)
data = zfit.Data.from_numpy(obs=obs, ...)
Definitions
-----------
**Space**: an *n*-dimensional definition of a domain (either by using one or more observables or axes),
with or without limits.
.. note:: Difference to RooFit
:class: dropdown
*compared to ``RooFit``, a space is **not** the equivalent of an observable but rather corresponds
to an object combining **a set** of observables (which of course can be of size 1). Furthermore,
there is a **strong** distinction in zfit between a :py:class:`~zfit.Space` (or observables)
and a :py:class:`~zfit.Parameter`, both conceptually and in terms of implementation and usage.*
**Observable**: a string defining the axes; a named axis.
*(for advanced usage only, can be skipped on first read)*
**Axis**: integer defining the axes *internally* of a model. There is always a mapping of observables <-> axes *once inside a model*.
**Limit** The range on a certain axis. Typically defines an interval. In fact, there are two types of limits:
* **rectangular**: This type is the usual limit as e.g. ``(-2, 5)`` for a simple, 1 dimensional interval. It is
rectangular. This can either be given as ``limits`` of a :py:class:`~zfit.Space` or as ``rect_limits``.
* **functional**: In order to define arbitrary limits, a function can be used that receives a tensor-like
object ``x`` and returns ``True`` on every position that is inside the limits, ``False`` for every value outside.
When a functional limit is given, rectangular limits that contain the functional limit as a subset **must** be
defined as well.
Since every object has a well defined domain, it is possible to combine them in an unambiguous way.
While not enforced, a space should usually be created with limits that define the default space of an object.
This corresponds, for example, to the default normalization range ``norm_range`` or sampling range.
.. code-block::
lower1, upper1 = [0, 1], [2, 3]
lower2, upper2 = [-4, 1], [10, 3]
obs1 = zfit.Space(['x', 'y'], limits=(lower1, upper1))
obs2 = zfit.Space(['z', 'y'], limits=(lower2, upper2))
model1 = zfit.pdf.Gauss(obs=obs1, ...)
model2 = zfit.pdf.Gauss(obs=obs2, ...)
# creating a composite pdf
product = model1 * model2
# OR, equivalently
product = zfit.pdf.ProductPDF([model1, model2])
assert obs1 * obs2 == product.space
The ``product`` is now defined in the space with observables ``['x', 'y', 'z']``. Any :py:class:`~zfit.Data` object
to be combined with ``product`` has to be specified in the same space.
.. code-block::
# create the space
combined_obs = obs1 * obs2
data = zfit.Data.from_numpy(obs=combined_obs, ...)
Now we have a :py:class:`~zfit.Data` object that is defined in the same domain as ``product``
and can be used to build a loss function.
Limits
------
In many places, just defining the observables is not enough and an interval, specified by its limits, is required.
Examples are a normalization range, the limits of an integration or sampling in a certain region.
Simple, 1-dimensional limits can be specified as follows. Operations like addition (creating a space with
two intervals) or combination (increase the dimensionality) are also possible.
.. jupyter-execute::
simple_limit1 = zfit.Space(obs='obs1', limits=(-5, 1))
simple_limit2 = zfit.Space(obs='obs1', limits=(3, 7.5))
added_limits = simple_limit1 + simple_limit2
In this case, ``added_limits`` is now a :py:class:`zfit.Space` with observable ``'obs1'`` defined in the intervals
(-5, 1) and (3, 7.5). This can be useful, *e.g.*, when fitting in two regions.
An example of the product of different :py:class:`zfit.Space` instances has been shown before as ``combined_obs``.
Functional limits
'''''''''''''''''
Limits can be defined by a function that returns whether a value is inside the boundaries or not **and** rectangular
limits (note that specifying ``rect_limits`` does *not* enforce them, the function itself has to take care of that).
This example specifies the bounds between (-4, 0.5) with the ``limit_fn`` (which, in this simple case, could be better
achieved by directly specifying them as rectangular limits).
.. code:: python
def limit_fn(x):
x = z.unstack_x(x)
inside_lower = tf.greater_equal(x, -4)
inside_upper = tf.less_equal(x, 0.5)
inside = tf.logical_and(inside_lower, inside_upper)
return inside
space = zfit.Space(obs='obs1', limits=limit_fn, rect_limits=(-5, 1))
Combining limits
''''''''''''''''
To define simple, 1-dimensional limits, a tuple with two numbers or a functional
limit in 1 dimension is enough. For anything more complicated,
the operators product ``*`` or addition ``+`` respectively their functional API
:py:func:`zfit.dimension.combine_spaces`
and :py:func:`zfit.dimension.add_spaces` can be used.
A working code example of :py:class:`~zfit.Space` handling is provided in :ref:`spaces.py <spaces-example>`.
Using the limits
'''''''''''''''''
To use the limits of any object, the methods :py:meth:`~zfit.Space.inside`
(to test if values are inside or outside of the boundaries)
and :py:meth:`~zfit.Space.filter` can be used.
The rectangular limits can also be accessed directly via ``rect_limits``, ``rect_lower`` or ``rect_upper``.
The returned shape is
``(n_events, n_obs)`` for the lower and the upper limit respectively (``rect_limits`` is a tuple of ``(lower, upper)``).
This should be used with caution and only if the rectangular limits are desired.
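A minimal sketch of these methods on a one-dimensional space:

.. code:: python

    import numpy as np
    import zfit

    space = zfit.Space('obs1', limits=(-2, 3))
    values = np.array([[-3.], [0.5], [4.]])  # shape (n_events, n_obs)

    print(space.inside(values))  # boolean mask marking which points are inside
    print(space.filter(values))  # keeps only the points inside: [[0.5]]
    print(space.rect_limits)     # the rectangular (lower, upper) limits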
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/docs/tutorials/components/intro/space.rst | space.rst |
================================
Applied Guides
================================
Full guides through applied examples
The following tutorials can be found
in `zfit tutorials as Jupyter Notebooks. <https://github.com/zfit/zfit-tutorials>`_
.. youtube:: nGzyXVXfHnw
:width: 320
.. toctree::
:maxdepth: 1
../../../_tmp/zfit-tutorials/guides/constraints_simultaneous_fit_discovery_splot.ipynb
.. youtube:: wdzPvoSDq6o
:width: 320
.. toctree::
:maxdepth: 1
../../../_tmp/zfit-tutorials/guides/custom_models.ipynb
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/docs/tutorials/guides/index.rst | index.rst |
import tensorflow as tf
import zfit
from zfit import z
class CustomPDF2D(zfit.pdf.BasePDF):
"""My custom, 2 dimensional pdf where the axes are: Energy, Momentum."""
def __init__(
self,
param1,
param2,
param3,
obs,
name="CustomPDF",
):
# we can now do complicated stuff here if needed
# only thing: we have to specify explicitly here what is which parameter
params = {
"super_param": param1, # we can change/compose etc parameters
"param2": param2,
"param3": param3,
}
super().__init__(obs, params, name=name)
def _unnormalized_pdf(self, x):
energy, momentum = x.unstack_x()
param1 = self.params["super_param"]
param2 = self.params["param2"]
param3 = self.params["param3"]
# just a fantasy function
probs = (
param1 * tf.cos(energy**2) + tf.math.log(param2 * momentum**2) + param3
)
return probs
# add an analytic integral
# define the integral function
def integral_full(limits, norm_range, params, model):
(
lower,
upper,
) = limits.rect_limits # for a more detailed guide, see the space.py example
param1 = params["super_param"]
param2 = params["param2"]
param3 = params["param3"]
lower = z.convert_to_tensor(lower)
upper = z.convert_to_tensor(upper)
# calculate the integral here, dummy integral, wrong!
integral = param1 * param2 * param3 + z.reduce_sum([lower, upper])
return integral
# define the space over which it is defined. Here, we use the axes
lower_full = (-10, zfit.Space.ANY_LOWER)
upper_full = (10, zfit.Space.ANY_UPPER)
integral_full_limits = zfit.Space(axes=(0, 1), limits=(lower_full, upper_full))
CustomPDF2D.register_analytic_integral(func=integral_full, limits=integral_full_limits)
# define the partial integral function
def integral_axis1(x, limits, norm_range, params, model):
data_0 = x.unstack_x() # data from axis 0
param1 = params["super_param"]
param2 = params["param2"]
param3 = params["param3"]
lower, upper = limits.limit1d # for a more detailed guide, see the space.py example
lower = z.convert_to_tensor(lower) # the limits are now 1-D, for axis 1
upper = z.convert_to_tensor(upper)
# calculate the integral here, dummy integral
integral = data_0**2 * param1 * param2 * param3 + z.reduce_sum([lower, upper])
# notice that the returned shape will be in the same as data_0, e.g. the number of events given in x
return integral
# define the space over which it is defined. Here, we use the axes
lower_axis1 = ((zfit.Space.ANY_LOWER,),)
upper_axis1 = ((zfit.Space.ANY_UPPER,),)
integral_axis1_limits = zfit.Space(
axes=(1,), # axes one corresponds to the second obs, here obs2
limits=(lower_axis1, upper_axis1),
)
CustomPDF2D.register_analytic_integral(
func=integral_axis1, limits=integral_axis1_limits
)
if __name__ == "__main__":
import numpy as np
obs = zfit.Space("obs1", (-10, 10)) * zfit.Space("obs2", (-3, 5))
pdf = CustomPDF2D(1, 2, 3, obs=obs)
sample = pdf.sample(n=1000)
pdf.pdf([[2.0, 2.5], [5.4, 3.2]])
x_part = zfit.Data.from_numpy(array=np.array([2.1, 2.2, 3.2]), obs="obs1")
# integrate over obs2 with limits 1, 2 for the `x_part`. This will use the analytic integral above
pdf.partial_integrate(x=x_part, limits=zfit.Space("obs2", (1, 2)))
# we can explicitly call the analytic integral. Without registering it (e.g. comment the line with the `register`
# and run again), it will raise an error
pdf.partial_analytic_integrate(x=x_part, limits=zfit.Space("obs2", (1, 2))) | zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/examples/custom_pdf_advanced.py | custom_pdf_advanced.py |
import matplotlib.pyplot as plt
import mplhep
import numpy as np
import zfit
n_bins = 50
# create space
obs_binned = zfit.Space("x", binning=zfit.binned.RegularBinning(50, -10, 10, name="x"))
obs = obs_binned.with_binning(None)
# parameters
mu = zfit.Parameter("mu", 1.0, -4, 6)
sigma = zfit.Parameter("sigma", 1.0, 0.1, 10)
lambd = zfit.Parameter("lambda", -0.06, -1, -0.01)
# model building, pdf creation
gauss = zfit.pdf.Gauss(mu=mu, sigma=sigma, obs=obs)
exponential = zfit.pdf.Exponential(lambd, obs=obs)
n_bkg = zfit.Parameter("n_bkg", 20000)
n_sig = zfit.Parameter("n_sig", 1000)
gauss_extended = gauss.create_extended(n_sig)
exp_extended = exponential.create_extended(n_bkg)
model_unbinned = zfit.pdf.SumPDF([gauss_extended, exp_extended])
# make binned
model = zfit.pdf.BinnedFromUnbinnedPDF(model_unbinned, space=obs_binned)
# data
n_sample = 21200
data = model.sample(n=n_sample)
n_bins = 50
plot_scaling = n_sample / n_bins * obs.area()
x = np.linspace(-10, 10, 1000)
def plot_pdf(title):
plt.figure()
plt.title(title)
y = model.pdf(x).numpy()
y_gauss = (gauss.pdf(x) * model_unbinned.params["frac_0"]).numpy()
y_exp = (exponential.pdf(x) * model_unbinned.params["frac_1"]).numpy()
plt.plot(x, y * plot_scaling, label="Sum - Model")
plt.plot(x, y_gauss * plot_scaling, label="Gauss - Signal")
plt.plot(x, y_exp * plot_scaling, label="Exp - Background")
# mplhep.histplot(np.histogram(data_np, bins=n_bins), yerr=True, color='black', histtype='errorbar')
mplhep.histplot(data.to_hist(), yerr=True, color="black", histtype="errorbar")
plt.ylabel("Counts")
plt.xlabel("obs: $B_{mass}$")
plt.legend()
# set the values to a start value for the fit
mu.set_value(0.5)
sigma.set_value(1.2)
lambd.set_value(-0.05)
# create NLL
nll = zfit.loss.ExtendedBinnedNLL(model=model, data=data)
# create a minimizer
minimizer = zfit.minimize.Minuit()
plot_pdf("before fit")
result = minimizer.minimize(nll)
print(result.params)
# do the error calculations, here with hesse, then with minos
param_hesse = result.hesse()
(
param_errors,
_,
) = result.errors() # this returns a new FitResult if a new minimum was found
print(result.valid) # check if the result is still valid
# plot the data
plot_pdf("after fit")
# uncomment to display plots
plt.show() | zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/examples/signal_bkg_mass_extended_fit_binned.py | signal_bkg_mass_extended_fit_binned.py |
import argparse
import os
import re
import yaml
here = os.path.dirname(os.path.realpath(__file__))
parser = argparse.ArgumentParser(
description="Replace arguments with central stored ones",
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
)
parser.add_argument("files", nargs="*", help="Files to be processed.")
parser.add_argument("--dry", action="store_true", help="Dry run WITHOUT replacing.")
cfg = parser.parse_args()
with open(here + "/argdocs.yaml") as replfile:
replacements = yaml.load(replfile, Loader=yaml.Loader)
# Replace the target string
# auto_end_old = r'|@docend|'
for filepath in cfg.files:
if not filepath.endswith(".py"):
continue
with open(filepath) as file:
filedata = file.read()
infile = False
needs_replacement = False
for param, replacement in replacements.items():
replacement = replacement.rstrip("\n")
while replacement[:1] == " ": # we want to remove the whitespace
replacement = replacement[1:]
auto_start = rf"|@doc:{param}|"
auto_end = rf"|@docend:{param}|"
matches = re.findall(
auto_start.replace("|", r"\|") + r".*?" + auto_end.replace("|", r"\|"),
filedata,
re.DOTALL,
)
if not matches:
continue
infile = True
replacement_mod = f"{auto_start} {replacement} {auto_end}"
for match in matches:
if auto_start in match[len(auto_start) :]: # sanity check
raise ValueError(
f"Docstring formatting error,"
f" has more than one start until an end command: {match}"
)
if match != replacement_mod:
needs_replacement = True
filedata = filedata.replace(match, replacement_mod)
# Write the file out again
replace_msg = "replaced docs" if needs_replacement else "docs already there"
filename = filepath.split("/")[-1]
if infile:
if cfg.dry:
print(
f"Match in {filename}, {replace_msg}, not writing to {filepath}, dry run."
)
else:
if needs_replacement:
with open(filepath, "w") as file:
file.write(filedata)
print(f"Modified {filename}.") | zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/utils/api/replace_argdocs.py | replace_argdocs.py |
**Questions**: feel free to [ask on StackOverflow](https://stackoverflow.com/questions/ask) with the **zfit** tag (so we
also get notified).
**Bugs or Feature Requests** Please fill the below form _if possible_.
<!--- Provide a general summary of the issue in the Title above -->
## Current Behaviour
<!--- Tell us what happens instead of the expected behavior -->
<!--- Include the commands you ran and the output. -->
```
Paste the command(s) you ran and the output.
If there was a crash, please include the traceback here.
```
## Expected Behaviour
<!--- Tell us what should happen. -->
## Context (Environment)
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
- zfit version:
- Python version:
- Are you using conda, pipenv, etc? :
- Operating System:
- Tensorflow version:
## Possible Solution/Implementation
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- and maybe suggest an idea for implementing addition or change -->
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/.github/ISSUE_TEMPLATE.md | ISSUE_TEMPLATE.md |
---
name: zfit usage question
about: Questions about the usage and best ways in zfit
title: "[QUESTION]"
labels: ""
assignees: ""
---
Please use preferably one of the following channels:
- read the FAQ, it may holds your answer already
- [ask on StackOverflow](https://stackoverflow.com/questions/ask) with the **zfit** tag (so we also get notified).
- if you want to ask immediately a broader community, feel free to ask in
the [Gitter channel of zfit](https://gitter.im/zfit/zfit)
- or post an issue here with the question
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/.github/ISSUE_TEMPLATE/zfit-usage-question.md | zfit-usage-question.md |
---
name: Bug report
about: Create a report to help us improve
title: ""
labels: bug
assignees: ""
---
**Bugs** Please fill the below form _if possible_.
<!--- Provide a general summary of the issue in the Title above -->
## Current Behaviour
<!--- Tell us what happens instead of the expected behavior -->
<!--- Include the commands you ran and the output. -->
```
Paste the command(s) you ran and the output.
If there was a crash, please include the traceback here.
```
## Expected Behaviour
<!--- Tell us what should happen. -->
## Context (Environment)
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
- zfit version:
- Python version:
- Are you using conda, pipenv, etc? :
- Operating System:
- Tensorflow version:
## Possible Solution/Implementation
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- and maybe suggest an idea for implementing addition or change -->
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/.github/ISSUE_TEMPLATE/bug_report.md | bug_report.md |
---
name: Feature request
about: Suggest an idea for this project
title: ""
labels: discussion
assignees: ""
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/.github/ISSUE_TEMPLATE/feature-request.md | feature-request.md |
---
name: BehaviorUnderDiscussion
about: If the code raises a BehaviorUnderDiscussion error
title: "[BEHAVIOR]"
labels: discussion
assignees: mayou36
---
## Describe what happens
A description of what you do with zfit and where the error arose, if you know about it.
## StackTrace
Please include the stacktrace here
## Environment
Please fill in the relevant points (if possible).
- zfit version:
- Python version:
- Are you using conda, pipenv, etc? :
- Operating System:
- Tensorflow version:
## Desired behavior
What would you expect to happen and why? Be aware that there are also many more usecases than your specific one, if you
can think of where it would break, please include this here.
| zfit | /zfit-0.15.5.tar.gz/zfit-0.15.5/.github/ISSUE_TEMPLATE/behaviorunderdiscussion.md | behaviorunderdiscussion.md |
Introduction
============
This Python module is for fitting electrical models to measured impedance data. It also includes a command-line program to assist with automated fitting.
Installation
============
The easiest way is using the command line command:
$ pip install .
zfit
====
Here's an example of how model fitting can be performed using a Python script:
``` python
from zfitpy import zfit
net = "(CPE('K', 'alpha') | R('R2')) + R('R1')"
ranges = {'R1': (1e-3, 1e3), 'K': (1e-3, 1e3), 'alpha': (-1, 1), 'R2': (100, 1e4)}
data, fitmodel = zfit('E4990A-example1.csv', net, ranges, Ns=10)
print(fitmodel)
print(fitmodel.error)
```
Here `Ns` specifies the number of steps in each search range. It can be explicitly defined for each search range, for example,
``` python
ranges = {'R1': (1e-3, 1e3, 10), 'K': (1e-3, 1e3, 20), 'alpha': (-1, 1), 'R2': (100, 1e4)}
```
The error between the measured data and best-fit model can be plotted using:
``` python
from zfitpy import Plotter
plotter = Plotter()  # instantiate the plotter (constructor arguments, if any, are not shown in this README)
plotter.Z_error(data, fitmodel)
```
Note, a parameter name cannot be a substring of another parameter
name, i.e., 'R' cannot be used if there is a parameter 'R1'.
zfitpy
======
zfitpy is a command-line Python program. It is designed for fitting
electrical models to impedance data. For example:
``` bash
$ zfitpy --net "L('L1') + (R('R1') | (L('L2') + R('R2')))" --ranges="{'R1':(0,5e3),'L1':(1e-3,20e-3),'R2':(0,0.1),'L2':(1e-3,20e-3)}" --input demo/E4990A-example1.csv --plot-error
```
The network is specified using Lcapy notation for networks. This example uses a network comprised of a parallel combination of RL series networks. The network can be drawn using:
``` bash
$ zfitpy --net "L('L1') + (R('R1') | (L('L2') + R('R2')))" --draw
```

The network in this example has four parameters: `R1`, `L1`, `R2`, and
`L2`. A brute force search is performed for each component using the
specified ranges; this is refined with a finishing search. The ranges
are specified as a Python dictionary, keyed by component name, with
the range for each component specified as a tuple. The number of
steps in each range defaults to 20 and can be altered with the `--steps` option.
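For example, assuming `--steps` takes an integer argument, the search grid could be made finer like this (a sketch based on the command shown above):

``` bash
$ zfitpy --plot-fit --steps 30 --net "L('L1') + (R('R1') | (L('L2') + R('R2')))" --ranges="{'R1':(0,5e3),'L1':(1e-3,20e-3),'R2':(0,0.1),'L2':(1e-3,20e-3)}" --input demo/E4990A-example1.csv
```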
The impedance of the data and model can be plotted using:
```
$ zfitpy --plot-fit --net "L('L1') + (R('R1') | (L('L2') + R('R2')))" --ranges="{'R1':(0,5e3),'L1':(1e-3,20e-3),'R2':(0,0.1),'L2':(1e-3,20e-3)}" --input demo/E4990A-example1.csv
```

The impedance error between the data and model can be plotted using:
```
$ zfitpy --plot-error --net "L('L1') + (R('R1') | (L('L2') + R('R2')))" --ranges="{'R1':(0,5e3),'L1':(1e-3,20e-3),'R2':(0,0.1),'L2':(1e-3,20e-3)}" --input demo/E4990A-example1.csv
```

Here's another network using a constant phase element (CPE).
``` bash
$ zfitpy --net "(CPE('K', 'alpha') | R('R2')) + R('R1')" --draw
```

```
$ zfitpy --plot-error --net "(CPE('K', 'alpha') | R('R2')) + R('R1')" --ranges="{'R1':(0,1e3),'K':(1e-3,1e3),'alpha':(-1,1),'R2':(1e2,1e4)}" --input demo/E4990A-example1.csv
```

The output format of the plots depends on the file extension. matplotlib is
used for the plotting and so the pdf, png, pgf, and jpg formats are
all supported. For example:
``` bash
$ zfitpy --net "CPE('K', 'alpha')" --draw --output CPE.png
```
The data can be plotted without fitting if the `ranges` option is not specified. For example:
``` bash
$ zfitpy --plot-data --input demo/E4990A-example1.csv
```

A Nyquist plot is generated if the `--nyquist` option is specified. Magnitude and phase are plotted if the `--magphase` option is specified. The plot style can be altered using the `--style` option to specify a Matplotlib style file.
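For example, the following hypothetical invocation combines these flags with the fit command shown earlier:

``` bash
$ zfitpy --plot-fit --nyquist --style=z.mplstyle --net "L('L1') + (R('R1') | (L('L2') + R('R2')))" --ranges="{'R1':(0,5e3),'L1':(1e-3,20e-3),'R2':(0,0.1),'L2':(1e-3,20e-3)}" --input demo/E4990A-example1.csv
```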
Other command line options for zfitpy can be found with the --help option.
Here's another example that loads the network and ranges from files.
It uses a Matplotlib style file and annotates the title with the
model, the optimization method, and the rms error.
``` bash
$ zfitpy --net RL2.net --ranges=RL2.ranges --input data/data.csv --plot-error --method='brute' --title='%model, %method, %rmse' --style=z.mplstyle --output RL2-brute.pdf
```
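The `RL2.net` and `RL2.ranges` files are not included here; presumably the first holds a network string and the second a ranges dictionary, along the lines of (hypothetical contents):

```
# RL2.net
L('L1') + (R('R1') | (L('L2') + R('R2')))

# RL2.ranges
{'R1': (0, 5e3), 'L1': (1e-3, 20e-3), 'R2': (0, 0.1), 'L2': (1e-3, 20e-3)}
```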
| zfitpy | /zfitpy-0.4.0.tar.gz/zfitpy-0.4.0/README.md | README.md |
# zfl-ai
[](https://pypi.org/project/django-hatch)
[](https://pypi.org/project/django-hatch)

-----
**Table of Contents**
- [Details](#details)
- [Installation and creating a Django project](#installation-and-creating-a-django-project)
- [Plugging into a Django project](#plugging-into-a-django-project)
- [License](#license)
## Details
A Django app that can be installed into a Django project.
See below for how to use it.
## Installation and creating a Django project
The runtime environment is Python 3.7 and Django 2.2.5.
```console
$ python3 --version
Python 3.7.0
```
Install the Django app with pip.
```console
$ pip install zfl-ai
```
Create a Django project.
```console
$ django-admin startproject myproject .
```
Set the language.
`myproject/settings.py`
```python
# Internationalization
# https://docs.djangoproject.com/en/4.2/topics/i18n/
LANGUAGE_CODE = 'ja'
TIME_ZONE = 'Asia/Tokyo'
USE_I18N = True
USE_TZ = True
```
## Plugging into a Django project
Register the Django app.
`myproject/settings.py`
```python
INSTALLED_APPS = [
...,
"ai",
]
```
Configure Django to load the `templates` directory at the project root.
```python
TEMPLATES = [
{
...
'DIRS': [os.path.join(BASE_DIR, 'templates')],
...
},
]
```
`myproject/urls.py`
```python
...
from django.urls import path, include
urlpatterns = [
...,
path('', include('ai.urls')),
]
```
The zfl-ai template `index.html` loads `base.html` via the `{% extends '...' %}` template tag, so you need to create `templates/base.html`.
```console
$ mkdir templates && touch templates/base.html
```
Contents of base.html
```html
{% load static %}
<html lang="ja" prefix="og: http://ogp.me/ns#">
<head>
<!-- BootStrap CSS Link -->
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css" integrity="sha384-ggOyR0iXCbMQv3Xipma34MD+dH/1fQ784/j6cY/iJTQUOhcWr7x9JvoRxT2MZw1T" crossorigin="anonymous">
<!-- StaticFiles CSS -->
{% block ai-style %}{% endblock %}
</head>
<body>
<!-- Django template tags -->
{% block content %}
{% endblock %}
<!-- BootStrap jQuery -->
<script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.7/umd/popper.min.js" integrity="sha384-UO2eT0CpHqdSJQ6hJty5KVphtPhzWj9WO1clHTMGa3JDZwrnQq4sF86dIHNDz0W1" crossorigin="anonymous"></script>
<script src="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js" integrity="sha384-JjSmVgyd0p3pXB1rRibZUAYoIIy6OrQ6VrjIEaFf/nJGzIxFDsf4x0xIM+B07jRM" crossorigin="anonymous"></script>
</body>
</html>
```
Start the server.
```console
$ python3 manage.py runserver
```
Visit `127.0.0.1:8000/ai/`.
## License
`zfl-ai` is distributed under the terms of the [MIT](https://spdx.org/licenses/MIT.html) license.
| zfl-ai | /zfl_ai-0.0.14.tar.gz/zfl_ai-0.0.14/README.md | README.md |
from django.shortcuts import render # type: ignore
# from blogs.models import Category, Blog
from django_pandas.io import read_frame # type: ignore
from .models import GoogleAccess
# from matplotlib.backends.backend_agg import FigureCanvasAgg
# import io
# import matplotlib.pyplot as plt
# import matplotlib.dates as mdates
# import seaborn as sns
# sns.set_style('darkgrid')
# import numpy as np
# import japanize_matplotlib
def index(request):
"""
Top page of the Data app.
Uses pandas to process the table data for the chart display.
"""
qs = GoogleAccess.objects.all()
# convert the data into a table (DataFrame)
df = read_frame(qs, fieldnames=["date_data", "access_data"])
# total number of accesses
access_sum = "{:,}".format(df["access_data"].sum())
# reformat the dates
df["date_data"] = df["date_data"].apply(lambda df: df.strftime("%Y年%m月"))
# aggregate the access counts by date
df = df["access_data"].groupby(df["date_data"]).sum().reset_index()
# create a column with the access counts shifted by one row (previous month)
# df = df.join(df['access_data'].shift(1).rename('previous_month'))
# create a column with the access counts shifted by twelve rows (same month last year)
# df = df.join(df['access_data'].shift(12).rename('past_months'))
# keep the last twelve rows
df = df.tail(12)
# change the dtype to integer
# df['past_months'] = df['past_months'].astype(int)
# create a column with the difference from the same month last year
# df['months_error'] = df['access_data'] - df['past_months']
# create a column with the difference from the previous month
# df['previous_month_error'] = df['access_data'] - df['previous_month']
# change the dtype to integer
# df['previous_month_error'] = df['previous_month_error'].astype(int)
context = {"df": df, "access_sum": access_sum}
return render(request, "data/index.html", context)
# def blogs_plot(request):
# """
# Chart of the blog categories
#
# """
# plt.rcParams.update({'figure.autolayout': True})
# fig = plt.figure()
# ax = fig.add_subplot(1, 1, 1)
# fig.patch.set_facecolor('whitesmoke')  # set the background color
#
# """ここにデータを作成する"""
# category_choice = Category.objects.all()
# blogs_choice = Blog.objects.select_related('category').all()
#
# x1 = [data.title for data in category_choice]
# y1 = [data.category_id for data in blogs_choice]
# y1 = np.unique(y1, return_counts=True)
#
# colorlist = ['r', 'y', 'g', 'b', 'm', 'c', '#ffff33', '#f781bf']
# ax.bar(x1, y1[1], color=colorlist, width=0.3, alpha=0.5)
#
# buf = io.BytesIO()
# canvas = FigureCanvasAgg(fig)
# canvas.print_png(buf)
# response = HttpResponse(buf.getvalue(), content_type='image/png')
# fig.clear()
# response['Content-Length'] = str(len(response.content))
# return response | zfl-data | /zfl_data-0.0.5.tar.gz/zfl_data-0.0.5/data/views.py | views.py |
from apiclient.discovery import build
from oauth2client.service_account import ServiceAccountCredentials
import os
import datetime
from dateutil.relativedelta import relativedelta
from django.conf import settings
SCOPES = ['https://www.googleapis.com/auth/analytics.readonly']
KEY_FILE_LOCATION = os.path.join(settings.ZEROFROMLIGHT_DIR, 'client_secrets.json')
VIEW_ID = settings.ZEROFROMLIGHT_KEYS['VIEW_ID']
# get the date (year/month/day) from 13 months ago
months_age_13 = (datetime.datetime.today() - relativedelta(months=13)).strftime("%Y-%m-%d")
def initialize_analyticsreporting():
credentials = ServiceAccountCredentials.from_json_keyfile_name(
KEY_FILE_LOCATION, SCOPES)
analytics = build('analyticsreporting', 'v4', credentials=credentials)
return analytics
# def get_report(analytics):
# """
# Get blog page-view counts
#
# """
# return analytics.reports().batchGet(
# body={
# 'reportRequests': [{
# 'viewId': VIEW_ID,
# 'pageSize': 10,
# 'dateRanges': [{"startDate": "3daysAgo", "endDate": "today"}],
# 'dimensions': [{'name': 'ga:pagePath'}, {'name': 'ga:pageTitle'}],
# 'dimensionFilterClauses': [{'filters': [{'dimensionName': 'ga:pagePath',
# 'expressions': ['/blogs/detail/']}]
# }],
# 'metrics': [{'expression': 'ga:pageviews'}],
# 'orderBys': [{'fieldName': 'ga:pageviews', 'sortOrder': 'DESCENDING'}],
# }]
# }
# ).execute()
def get_report_web(analytics):
"""
Get the access counts (daily users)
"""
return analytics.reports().batchGet(
body={
'reportRequests': [{
'viewId': VIEW_ID,
'dateRanges': [{"startDate": months_age_13, "endDate": "today"}],
# 'dateRanges': [{"startDate": "5daysAgo", "endDate": "today"}],
'dimensions': [{'name': 'ga:date'}],
'metrics': [{'expression': 'ga:users'}],
}]
}
).execute()
def print_response():
"""
Process the blog page-view results.
NOTE: this relies on get_report(), which is currently commented out above.
"""
analytics = initialize_analyticsreporting()
response = get_report(analytics)
for row in response['reports'][0]['data']['rows']:
row_path = 'https://zerofromlight.com' + row['dimensions'][0]
#row_path = row['dimensions'][0][6:]
#row_path = row['dimensions'][0].split('/')[3]
row_title = row['dimensions'][1]
row_view = row['metrics'][0]['values'][0]
yield row_title, row_path, row_view
def print_response_web():
"""
Process the access-count results.
"""
analytics = initialize_analyticsreporting()
response = get_report_web(analytics)
for row in response['reports'][0]['data']['rows']:
row_user = row['metrics'][0]['values'][0]
yield row_user | zfl-data | /zfl_data-0.0.5.tar.gz/zfl_data-0.0.5/data/management/commands/analytics_api.py | analytics_api.py |
import requests
class bencoding:
# http://fileformats.wikia.com/wiki/Torrent_file
def __init__(self, magnet):
self.special = {'i': self.integer_eval,
'l': self.list_eval,
'd': self.dict_eval
}
# FIRST we need to decode the magnet link and convert it in a torrent
# file, so we can parse it and get infos.
self.torrent = self.download_torrent(magnet)
self.reader = self.create_gen(self.torrent)
self.result = None
def download_torrent(self, magnet):
"""
Download a torrent file from torrage according to the magnet link
passed as argument.
DOC:
https://en.wikipedia.org/wiki/Magnet_URI_scheme
"""
url = 'https://torrage.com/torrent/'
# First we should get the BitTorrent Info Hash ("xt=urn:btih:")
search = "xt=urn:btih:"
i = 0
while (i < (len(magnet) - len(search))) \
and ((magnet[i:i + len(search)]) != search):
i += 1
if i == (len(magnet) - len(search)):
raise Exception()
i += len(search)
# Adding the search length to set i at the start of the BIH
j = i # END of the info hash
while magnet[j] != '&' and j < len(magnet):
j += 1
# magnet[i:j] represent the BIH
print("Downloading the torrent from " + url + magnet[i:j])
torrent = requests.get(url + magnet[i:j].upper())
return torrent.text
def string_eval(self, i):
res = ''
for x in range(i):
current = next(self.reader, None)
if current is None:
break
res += current
return res
def integer_eval(self):
current = next(self.reader)
res = ''
while current != 'e':
res += current
current = next(self.reader)
return int(res) if res else ''
def list_eval(self):
current = next(self.reader)
res = []
while current != 'e' and current is not None:
if self.special.get(current, False):
new = self.special[current]()
res.append(new)
elif current.isdigit():
while current.isdigit() and current is not None:
current += next(self.reader, None)
new = self.string_eval(int(current[:-1]))
res.append(new)
current = next(self.reader, None)
return res
def dict_eval(self):
current = next(self.reader)
res = {}
while (current != 'e') and (current is not None):
if self.special.get(current, False):
key = self.special[current]()
elif current.isdigit():
while current.isdigit() and (current is not None):
current += next(self.reader, None)
key = self.string_eval(int(current[:-1]))
current = next(self.reader, None)
if self.special.get(current, False):
value = self.special[current]()
elif current.isdigit():
while current.isdigit() and (current is not None):
current += next(self.reader, None)
value = self.string_eval(int(current[:-1]))
res[key] = value
current = next(self.reader, None)
return res
def decode(self):
"""
Find the first data structure to parse.
"""
if self.result is not None:
return self.result
current = next(self.reader)
if current == "d":
res = self.dict_eval()
if current == "l":
res = self.list_eval()
self.result = res
return res
def create_gen(self, fileContent):
"""
Create a generator from a torrent file, to return the next letter,
each time he is called.
"""
# torrent = open(fileName, 'r')
# current = torrent.read(1)
# while current != '':
# yield current
# current = torrent.read(1)
# torrent.close()
for i in fileContent:
yield i
def get_info(magnet, destdir):
"""
Return a list of all files the torrent will output.
"""
torrent = bencoding(magnet)
info = torrent.decode()
info = info['info']
if info.get('files', False):
# If there is more than 1 file.
res = []
for torrentFile in info['files']:
res.append({'name': '/'.join(torrentFile['path']),
'length': torrentFile['length'],
'folder': info['name']}
)
else:
res = [{'name': info['name'],
'length': info['length'],
'folder': None}]
if destdir != "/tmp":
# If the user wants to save the torrent file.
with open(destdir + res[0]['name'], "w") as torrent_file:
torrent_file.write(torrent.torrent)
print("Torrent saved in " + destdir)
return res
if __name__ == '__main__':
# For debugging.
import sys
print(get_info(sys.argv[1], "/tmp")) | zflix | /zflix-0.31.tar.gz/zflix-0.31/src/torrent_info.py | torrent_info.py
import os
import sys
import argparse
import subprocess
from configParser import parse_config, parse_default
from multiprocessing import Process, Manager
from torrent_info import get_info
from subtitle.opensubtitle import opensubtitle
from subtitle.opensubtitle import NoSubtitleFound
class bcolors:
HEADER = '\033[95m'
BLUE = '\033[94m'
RED = '\033[31m'
GREEN = '\033[92m'
WARNING = '\033[93m'
FAIL = '\033[91m'
ENDC = '\033[0m'
BOLD = '\033[1;1m'
UNDERLINE = '\033[4m'
def start_search(tracker, query, queryResult):
"""
Function used to make tracker queries easier (the tracker module can just
send back its results and doesn't have to touch the shared dict).
"""
result = tracker().search_torrent(query)
if result:
# If they are result to display else no need to include it in
# the dict
queryResult[tracker] = result
def main(option):
# trackers = json.load('trackers.json')
# TODO Should find a way to import all of the trackers
from trackers.torrentz import torrentz
from trackers.kat import kat
trackers = [torrentz, kat]
manager = Manager()
tmpResult = manager.dict()
processes = []
for x in trackers:
# Create all processes and stock them in a list to be sure that
# they are all finished.
print("Searching %s with %s" % (option.search, x))
tmpProcess = Process(target=start_search,
args=(x, option.search, tmpResult)
)
tmpProcess.start()
processes.append(tmpProcess)
for process in processes:
# Wait for every process to be finished.
process.join()
queryResult = dict(tmpResult)
# Convert into a traditional dict to make it easier to work with
# CREATING the torrent selection output.
# for a number of torrent the user specified in the option.
i = 0
outputList = []
while i < option.number_of_output and queryResult:
# while there is torrent to display.
maxSeeds = max(queryResult,
key=lambda x: int(queryResult[x][0]['seeds'])
)
# Find the torrent with the most seeders through all tracker results.
# It's basically a merge of all the trackers' results.
out = queryResult[maxSeeds].pop(0)
if not len(queryResult[maxSeeds]):
# If the list is empty delete so they are no error
del queryResult[maxSeeds]
# TODO Add the reference here
# out["ref"] = maxSeeds # Set a reference so the program can call the
# class later.
outputList.append(out)
print('%2i| % 50s | Size:% 9s | S:% 5s | P:% 5s' %
(i,
out['title'] if len(out['title']) < 50 else out['title'][:50],
#bcolors.UNDERLINE + out['size'] + bcolors.ENDC,
out['size'],
#bcolors.BOLD + bcolors.BLUE + out['seeds'] + bcolors.ENDC,
out['seeds'],
out['peers']
)
)
i += 1
if not outputList:
# If no result found
print("No result found, exiting...")
exit()
# ASKING the user which torrent he wants to retrieve.
try:
print("Enter the torrent number you want to get. ", end="")
torrentNum = input()
except KeyboardInterrupt:
print("\nExiting.")
exit()
if (not torrentNum.isdigit() or int(torrentNum) > len(outputList)):
# Good value check.
print("Exiting.")
exit()
else:
torrentNum = int(torrentNum)
pageLink = outputList[torrentNum]
torrentLink = pageLink['link']
###############################################################
# Getting the torrent.
# Use the magnet link to save the torrent.
ref = pageLink['ref'] # Reference to the tracker.
magnetLink = ref.get_magnet(torrentLink)
###############################################################
# Getting the torrent metadata.
info = get_info(magnetLink, option.destdir)
# TODO add the aptitude to save the torrent.
command = "peerflix '%s' --%s" \
% (magnetLink, option.player)
if option.destdir != "/tmp":
command += " --path " + option.destdir
###############################################################
# Getting the subtitle.
if option.subtitle:
os = opensubtitle()
print("Getting the subtitle from OpenSubtitle...", end=" ")
# TODO Add better support for multi-file torrents.
fileInfo = info[0]
# For now we will just use the first one
try:
subtitle = os.get_subtitle(fileInfo['name'],
option.language,
fileInfo['length'],
option.destdir)
# TODO ADD SIZE
print("Saved as " + subtitle)
command += " --subtitles '%s'" % subtitle
except NoSubtitleFound:
pass
try:
peerflix = subprocess.Popen(command, shell=True)
peerflix.wait()
except KeyboardInterrupt:
# If the user exits during the stream, the program will ask whether to
# remove the download, since it may not have finished.
if option.destdir == '/tmp':
print('Exiting')
exit()
#for dirFile in os.path.dirname(option.destdir):
# for i in range(min(len()))
# TODO not functionnal
# remove = raw_input("Do you want to remove the file ? [(y)es/(n)o]")
print("Do you want to remove the file ? [(y)es/(n)o] ", end="")
remove = sys.stdin.readline().strip()
if remove.lower() in ['yes', 'y', 'ye', 'ys']:
toRemove = info[0]['folder']
if toRemove is None:
toRemove = info[0]['name']
os.remove(toRemove)
except:
print('Peerflix is not installed, please type the following command'
+ ' to install it (if npm is installed):')
print('sudo npm install -g peerflix')
if __name__ == "__main__":
parser = argparse.ArgumentParser()
config = parse_config() # User config file
default = parse_default() # Default config file
try:
parser.add_argument('search',
nargs='?',
default=None,
type=str
)
####################################################################
try:
defaultDestdir = config.get('general', 'destdir')
except:
# If no argument.
print('No destination dir specified in the config file.')
defaultDestdir = default.get('general', 'destdir')
print('Using: ' + defaultDestdir)
parser.add_argument('-d', '--destdir',
default=defaultDestdir,
type=str,
help='Destination of the downloaded torrent'
)
####################################################################
try:
defaultMagnet = config.getboolean('general', 'magnet')
except:
# If no argument.
print('No magnet preference specified in the config file.')
defaultMagnet = default.getboolean('general', 'magnet')
actionMagnet = "store_false" if defaultMagnet else "store_true"
parser.add_argument('-m', '--magnet',
default=defaultMagnet,
action=actionMagnet,
help=("Use magnet link (no torrent download.")
)
# This option will call the get_magnet option of a tracker.
# instead of the .download one.
####################################################################
try:
defaultPlayer = config.get('general', 'player')
except:
# If no argument.
print('No player specified in the config file.')
defaultPlayer = default.get('general', 'player')
parser.add_argument('-p', '--player',
default=defaultPlayer,
type=str,
help=("Choose the player you want to use to watch"
+ " your streamed torrent")
)
####################################################################
try:
defaultLang = config.get('general', 'language')
except:
# If no argument.
print('No language specified in the config file.')
defaultLang = default.get('general', 'language')
parser.add_argument('-l', '--language',
default=defaultLang,
type=str,
help=("Set the language you want to use for the "
+ "subtitles")
)
####################################################################
try:
defaultSub = config.getboolean('general', 'subtitle')
except:
# If no argument.
print('No subtitle option specified in the config file.')
defaultSub = default.getboolean('general', 'subtitle')
actionSub = "store_false" if defaultSub else "store_true"
parser.add_argument('-s', '--subtitle',
default=defaultSub,
action=actionSub,
help=("Make the program download subtitle (or not "
+ "if the option is already set to 'True' in"
+ "your config file (default))")
)
####################################################################
try:
defaultNumber = config.getint('general', 'number_of_output')
except:
# If no argument.
print('No number_of_output specified in the config file.')
defaultNumber = default.getint('general', 'number_of_output')
parser.add_argument('-out', '--number_of_output',
default=defaultNumber,
type=int,
help=("Set the number of torrent displayed with "
+ "your search.")
)
####################################################################
parser.add_argument('-no', '--no_data',
default=False,
action='store_true',
help="No data are saved, stream goes into /tmp"
)
option = parser.parse_args()
except Exception as e:
# Would happen if config file is lacking of argument
print('Error parsing in the config file.')
print(e)
else:
if option.search is None:
# If the user entered no "search" option.
print("Enter keywords you want to search: ", end="")
option.search = input()
if option.no_data:
# option.magnet = True
option.destdir = "/tmp"
option.destdir = os.path.expanduser(option.destdir)
main(option) | zflix | /zflix-0.31.tar.gz/zflix-0.31/src/zflix.py | zflix.py |
import bs4
import requests
class kat:
domain = 'https://kat.cr/'
def __init__(self):
self.results = {}
def get_magnet(self, pageLink):
"""
Function returning the magnet link for the torrent.
ARGUMENT:
pageLink: The link of the page you want to get the torrent.
RETURN VALUE:
A magnet link.
"""
return (self.results[pageLink])["magnet"]
def _get_torrents_from_link(self, pageLink):
"""
Return the torrent from the "pageLink".
ARGUMENTS:
pageLink: The link of the page you want to get the torrents infos.
RETURN VALUE:
A dict with the following structure
https://github.com/thomacer/zflix/wiki/Adding-trackers.
"""
result = []
request = requests.get(pageLink)
data = request.text
soup = bs4.BeautifulSoup(data)
odd = soup.find_all("tr", class_="odd")
# First torrent on the page is considered odd
even = soup.find_all("tr", class_="even")
for torrent1, torrent2 in zip(odd, even):
newEntry = {}
mainCell = torrent1.find("a", class_="cellMainLink")
newEntry['title'] = mainCell.text
newEntry['link'] = mainCell.get("href")
description = torrent1.find_all("td", class_="center")
newEntry['size'] = description[0].text
newEntry['seeds'] = description[3].text.replace(',', '')
newEntry['peers'] = description[4].text.replace(',', '')
magnet = torrent1.find("a", class_="imagnet icon16")
newEntry['magnet'] = magnet.get("href")
download = torrent1.find("a", class_="idownload icon16")
newEntry['download'] = download.get("href")
newEntry['ref'] = self
result.append(newEntry)
self.results[newEntry["link"]] = newEntry
# Doing the same for the second torrent.
newEntry = {}
mainCell = torrent2.find("a", class_="cellMainLink")
newEntry['title'] = mainCell.text
newEntry['link'] = mainCell.get("href")
description = torrent2.find_all("td", class_="center")
newEntry['size'] = description[0].text
# Don't need to be converted in an int.
newEntry['seeds'] = description[3].text.replace(',', '')
newEntry['peers'] = description[4].text.replace(',', '')
magnet = torrent2.find("a", class_="imagnet icon16")
newEntry['magnet'] = magnet.get("href")
download = torrent2.find("a", class_="idownload icon16")
newEntry['download'] = download.get("href")
newEntry['ref'] = self
result.append(newEntry)
self.results[newEntry["link"]] = newEntry
return result
def _get_popular(self):
"""
Get the currently popular torrents from kat.
RETURN VALUE:
A dict with the following structure
https://github.com/thomacer/zflix/wiki/Adding-trackers.
"""
pageByType = ['https://kat.cr/movies/', 'https://kat.cr/tv/',
'https://kat.cr/anime/']
pageResult = {}
for pageLink in pageByType:
pageResult[pageLink] = self._get_torrents_from_link(pageLink)
result = []
for i in range(sum([len(pageResult[x]) for x in pageResult])):
maxSeeds = max(pageResult,
key=lambda x: int(pageResult[x][0]['seeds'])
)
# Find the torrent with the most seeders through all tracker results.
out = pageResult[maxSeeds].pop(0)
if not len(pageResult[maxSeeds]):
# If the list is empty delete so they are no error
del pageResult[maxSeeds]
result.append(out)
return result
def search_torrent(self, search):
"""
Search in kat the argument "search".
ARGUMENTS:
search: The user searched torrents arguments.
RETURN VALUE:
A dict with the following structure
https://github.com/thomacer/zflix/wiki/Adding-trackers.
"""
if search == '':
# If the user just want to search through popular torrents
return self._get_popular()
pageLink = self.domain + "usearch/?q=" + search
return self._get_torrents_from_link(pageLink) | zflix | /zflix-0.31.tar.gz/zflix-0.31/src/trackers/kat.py | kat.py |
import requests
try:
import bs4
except:
print("BeautifulSoup4 is not installed.")
exit()
class torrentz:
def __init__(self):
self.domain = 'https://www.torrentz.com'
def get_magnet_from_tracker(self, trackerLink):
"""
Get a magnet link on a webpage
ARGUMENT:
trackerLink: A link to a tracker.
RETURN VALUE:
A magnet link.
"""
page = requests.get(trackerLink)
page = page.text
# Getting all the link in the page to try to get the magnet link.
soup = bs4.BeautifulSoup(page)
urls = soup.find_all('a')
magnetLink = None
for link in urls:
inPageLink = link.get('href')
if inPageLink is not None and 'magnet:' in inPageLink:
print('Getting ' + link.get('href'))
magnetLink = link.get('href')
return magnetLink
def get_specific_tracker(self, pageLink):
"""
Get the a supported tracker from a torrentz page.
ARGUMENT:
pageLink: A torrentz download page link.
RETURN VALUE:
The link of a supported tracker.
"""
# Getting the page.
page = requests.get(pageLink)
page = page.text
soup = bs4.BeautifulSoup(page)
trackersUrls = soup.find('div', class_="download")
# Get the div with the links first
trackersUrls = trackersUrls.find_all('dl')
trackersUrls.pop(0) # First is a sponsored link
# Every trackers listed in the page
for tracker in trackersUrls:
# YIELD every tracker link to try to get the magnet link there.
trackerLink = tracker.find('a')
yield trackerLink.get('href')
print("Error: Torrent found in none of the locations")
def get_magnet(self, pageLink):
"""
Function returning the magnet link for the torrent.
ARGUMENT:
pageLink: The link of the page you want to get the torrent.
RETURN VALUE:
A magnet link.
"""
downloadLocationTest = self.get_specific_tracker(self.domain + pageLink)
magnet = False
while magnet is False:
trackerLink = next(downloadLocationTest, None)
if trackerLink is not None:
magnet = self.get_magnet_from_tracker(trackerLink)
else:
break
return magnet
def search_torrent(self, search):
"""
Search torrentz for the given terms and return a list of torrent
entries (one dict per torrent).
ARGUMENTS:
search: The user's search terms.
"""
torrentzPage = requests.get(self.domain + '/any?f=' + search)
torrentzPage = torrentzPage.text
soup = bs4.BeautifulSoup(torrentzPage)
torrentLinks = soup.find('div', class_="results")
torrentLinks = torrentLinks.find_all('dl')
result = []
for link in torrentLinks[:-1]:
newEntry = {}
mainCell = link.find("a")
newEntry['title'] = mainCell.text
newEntry['link'] = mainCell.get("href")
try:
newEntry['size'] = link.find("span", class_="s").text
except AttributeError:
newEntry['size'] = "Pending"
# Don't need to be converted in an int.
newEntry['seeds'] = link.find("span", class_="u"
).text.replace(',', '')
newEntry['peers'] = link.find("span", class_="d"
).text.replace(',', '')
newEntry['ref'] = self
result.append(newEntry)
return result | zflix | /zflix-0.31.tar.gz/zflix-0.31/src/trackers/torrentz.py | torrentz.py |
from xmlrpc.client import ServerProxy
import struct, os
import requests
import zipfile
class NoSubtitleFound(Exception):
def __init__(self, arg=None):
print("No subtitle found.")
class opensubtitle:
def __init__(self):
self.domain = "http://api.opensubtitles.org/xml-rpc"
self.xmlrpc_server = ServerProxy(self.domain)
login = self.xmlrpc_server.LogIn("", "", "eng" , "OSTestUserAgent")
self.token = login["token"]
# Save the token to use it to query the webapi.
def hash_name(self, name):
"""
Code from:
http://trac.opensubtitles.org/projects/opensubtitles/wiki/HashSourceCodes
Used to return a movie hash, from the name of a movie.
"""
try:
longlongformat = '<q' # little-endian long long
bytesize = struct.calcsize(longlongformat)
f = open(name, "rb")
filesize = os.path.getsize(name)
hash = filesize
if filesize < 65536 * 2:
return "SizeError"
for x in range(65536 // bytesize):
buffer = f.read(bytesize)
(l_value,)= struct.unpack(longlongformat, buffer)
hash += l_value
hash = hash & 0xFFFFFFFFFFFFFFFF # to remain as 64bit number
f.seek(max(0,filesize-65536),0)
for x in range(65536 // bytesize):
buffer = f.read(bytesize)
(l_value,) = struct.unpack(longlongformat, buffer)
hash += l_value
hash = hash & 0xFFFFFFFFFFFFFFFF
f.close()
returnedhash = "%016x" % hash
return returnedhash
except(IOError):
return "IOError"
def search_query(self, query, lang):
"""
Return type: a list of movies, returned by the opensubtitle api.
"""
res = self.xmlrpc_server.SearchSubtitles(self.token,
[{"query": query,
"sublanguageid": lang,
}]
)["data"]
return res
def search_hash(self, name, lang, length):
"""
"""
hashed_name = self.hash_name(name)
searchTerms = [{"sublanguageid": lang,
"moviehash": hashed_name,
"moviebytesize": length,
}]
res = self.xmlrpc_server.SearchSubtitles(self.token, searchTerms
)['data']
# If there is no result, the api return False.
return res
def download(self, link, location, name):
"""
ARGUMENTS:
link: URL of the zip archive containing the subtitle.
location: Destination dir of the streamed torrent.
name: Name of your streamed torrent
RETURN VALUE:
the name of extracted files
"""
if location[-1] != '/':
# Double checking.
location += '/'
archive = requests.get(link).content # .content get the bytes.
newName = '/tmp/' + name + '.zip'
tmpZipFile = open(newName, "wb")
tmpZipFile.write(archive)
tmpZipFile.close()
zfile = zipfile.ZipFile(newName)
for name in zfile.namelist():
if '.nfo' not in name:
extractedFile = location + name
zfile.extract(name, location)
# TODO: os.remove the created zip file
return extractedFile
def get_subtitle(self, name, lang, length, location):
"""
Try to find the best subtitle according to the name passed as
argument, then download it into the movie folder.
"""
subtitle = self.search_hash(name, lang, length)
if not subtitle:
print('No subtitle found by hash, falling back to a query search.')
# Query searches are often more flexible and give more results.
subtitle = self.search_query(name, lang)
if subtitle:
subtitle = subtitle.pop(0)
else:
raise NoSubtitleFound
# for now we get the first subtitle of "subtitle", then we will try
# to be more precise
# Get the download.
return self.download(subtitle["ZipDownloadLink"], location, name) | zflix | /zflix-0.31.tar.gz/zflix-0.31/src/subtitle/opensubtitle.py | opensubtitle.py |
# New ZhengFang Educational Administration System API
For more details, visit the project homepage:
> [ZhengFang educational administration system API](https://neroasmar.top/zfnew/)
An API for the new ZhengFang educational administration system (the home page shown in the image below is the new system). It implements the basic query features of the system; course grabbing and one-click evaluation will be added in the future.

## Implemented and to-do
* [x] Automatic login and cookie retrieval
* [x] Personal information
* [x] School notices
* [x] Class adjustment / course change messages
* [x] Grades
* [x] Class schedule
* [x] Exam information
* [ ] Automatic course grabbing
* [ ] One-click course evaluation
* [ ] Course selection lists
* [ ] Lab exams
## Getting started
1. Install with pip: `pip install zfnew`
2. Or download the package from [Pypi](https://pypi.org/project/zfnew/#files) and then run:
```bash
tar -zxvf <package-name>.tar.gz
cd <package-name>
python setup.py build
python setup.py install
```
Then run the following Python code to verify the installation:
```python
from zfnew import *
base_url = 'homepage url of the school educational administration system'
lgn = Login(base_url=base_url)
lgn.login('account', 'password')
cookies = lgn.cookies # how to get the cookies as a cookiejar
person = GetInfo(base_url=base_url, cookies=cookies)
print(person.get_pinfo())
```
If it prints your personal information as JSON, everything is working.
## Detailed API reference
**It is recommended to read this together with the calling examples:** [examples](https://github.com/NeroAsmarr/zfnew/tree/master/examples)
### Login class
- Import: `from zfnew import Login`
- Constructor arguments:
- `base_url # homepage url of the school educational administration system`
- Attributes:
- `self.base_url # homepage url of the school educational administration system`
- `self.headers # headers used for login`
- `self.sess # the login session`
- `self.cookies # the login cookiejar`
- `self.cookies_str # the login cookie string`
- Methods:
- `login(sid, password)`
- sid: student ID
- password: password
### GetInfo class
- Import: `from zfnew import GetInfo`
- Constructor arguments:
- `base_url # homepage url of the school educational administration system`
- `self.cookies # the login cookiejar`
- Attributes:
- `self.base_url # homepage url of the school educational administration system`
- `self.headers # headers used for login`
- `self.cookies # the login cookiejar`
- Methods:
- `get_pinfo() # get personal information`
- `get_notice(sid, password) # get school notices`
- `get_message(sid, password) # get class adjustment / course change messages`
- `get_grade(year, term) # get grades`
- year: school year (e.g. 2019)
- term: semester (0: whole year, 1: first semester, 2: second semester)
- `get_exam(sid, password) # get exam information`
- year: school year (e.g. 2019)
- term: semester (1: first semester, 2: second semester)
- `get_schedule(sid, password) # get the class schedule`
- year: school year (e.g. 2019)
- term: semester (1: first semester, 2: second semester)
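A minimal usage sketch based on the methods listed above (the URL, account, and password are placeholders):
```python
from zfnew import Login, GetInfo

base_url = 'homepage url of the school system'  # placeholder
lgn = Login(base_url=base_url)
lgn.login('account', 'password')

person = GetInfo(base_url=base_url, cookies=lgn.cookies)
grades = person.get_grade(2019, 1)       # grades for the first semester of 2019
schedule = person.get_schedule(2019, 2)  # schedule for the second semester of 2019
```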
#### JSON data returned by the methods
- get_pinfo: personal information
```json
{
"name": "name",
"studentId": "student ID",
"brithday": "birthday",
"idNumber": "ID card number",
"candidateNumber": "exam candidate number",
"status": "enrolment status",
"collegeName": "college name",
"majorName": "major name",
"className": "class name",
"entryDate": "entry date",
"graduationSchool": "graduating high school",
"domicile": "registered residence",
"politicalStatus": "political status",
"national": "ethnicity",
"education": "education level",
"postalCode": "postal code"
}
```
- get_notice: school notices
```json
[
{
"title": "notice title",
"publisher": "publisher",
"ctime": "publish time",
"vnum": "view count",
"content": "notice content",
"doc_urls": ["links to attached files", "..."]
},
"..."
]
```
- get_message: class adjustment / course change messages
```json
[
{
"message": "message content",
"ctime": "publish time"
},
"..."
]
```
- get_grade: grades
```json
{
"name": "name",
"studentId": "student ID",
"schoolYear": "school year",
"schoolTerm": "semester",
"course":
[
{
"courseTitle": "course title",
"teacher": "teacher",
"courseId": "course ID",
"className": "teaching class",
"courseNature": "course nature",
"credit": "course credits",
"grade": "grade",
"gradePoint": "grade point",
"gradeNature": "exam nature",
"startCollege": "offering college",
"courseMark": "course mark",
"courseCategory": "course category",
"courseAttribution": "course attribution"
},
"..."
]
}
```
- get_exam: exam information
```json
{
"name": "name",
"studentId": "student ID",
"schoolYear": "school year",
"schoolTerm": "semester",
"exams":
[
{
"courseTitle": "course title",
"teacher": "teacher",
"courseId": "course ID",
"reworkMark": "retake mark",
"selfeditingMark": "self-study mark",
"examName": "exam name",
"paperId": "exam paper ID",
"examTime": "exam time",
"eaxmLocation": "exam location",
"campus": "exam campus",
"examSeatNumber": "seat number"
},
"..."
]
}
```
- get_schedule: class schedule
```json
{
"name": "name",
"studentId": "student ID",
"schoolYear": "school year",
"schoolTerm": "semester",
"normalCourse":
[
{
"courseTitle": "course title",
"teacher": "teacher",
"courseId": "course ID",
"courseSection": "course section (periods)",
"courseWeek": "course weeks",
"courseRoom": "classroom",
"className": "teaching class name",
"hoursComposition": "course hours composition",
"weeklyHours": "weekly hours",
"totalHours": "total hours",
"credit": "credits"
},
"..."
],
"otherCourses": ["other course information", "..."]
}
``` | zfnew | /zfnew-0.0.3.tar.gz/zfnew-0.0.3/README.md | README.md |
# Iterating over the basic types dict, list, set, tuple
### Operate on the data while iterating
### Test code
```
from zforeach import foreach_object
from zforeach import foreach_data
def foreach_fun(item: foreach_object):
print('foreach:', item)
if item.key == 'level2':
item.key = 'change_level2'
data = [
'level1',
{
'level2':
[
'level3'
],
}
]
value = foreach_data(data, foreach_fun, deep=True)
print('result:', value)
```
> foreach: <level:1> level1
> foreach: <level:3> level3
> foreach: <level:2> level2: ['level3']
> foreach: <level:1> {'change_level2': ['level3']}
> result: ['level1', {'change_level2': ['level3']}]
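Besides renaming keys, the callback can also drop elements with `item.discard()`. A small sketch using the same API (the expected output is shown as a comment, assuming the behaviour described above):
```
from zforeach import foreach_object, foreach_data

def drop_level3(item: foreach_object):
    if item.value == 'level3':
        item.discard()  # remove this element from the result

data = ['level1', {'level2': ['level3', 'keep']}]
print(foreach_data(data, drop_level3, deep=True))
# expected: ['level1', {'level2': ['keep']}]
```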
Changelog:
> 1.0.0
> Initial release
- - -
This project is for learning and exchange purposes only; commercial use is prohibited.
| zforeach | /zforeach-1.0.0.tar.gz/zforeach-1.0.0/README.md | README.md |
__author__ = 'Zhang Fan'
class _discard_foreach_data:
pass
class foreach_object():
def __init__(self, value, main_type: type, main_type_text: str, level=None, key=None):
assert main_type, '请设置main_type'
assert main_type_text, '请设置main_type_text'
self._value = value
self._value_type_text = None
self._main_type = main_type
self._main_type_text = main_type_text
self._level = level or 1
self._key = key
self._is_discard = False
self._retry_deep_foreach = False
@property
def value(self):
return self._value
@value.setter
def value(self, data):
self.set_value(data)
@property
def main_type(self):
return self._main_type
@property
def main_type_text(self):
return self._main_type_text
@property
def level(self):
return self._level
@property
def key(self):
return self._key
@key.setter
def key(self, data):
assert self.main_type is dict, 'only dict has a key attribute'
self._key = data
@property
def value_type_text(self):
if self._value_type_text is None:
self._value_type_text = get_base_type_text_of_value(self._value)
return self._value_type_text
def is_base_data_type(self):
return self.value_type_text
def is_main_type(self):
return self.value_type_text == self._main_type_text
def is_discard(self):
return self._is_discard or (self.value is _discard_foreach_data)
def is_retry_deep_foreach(self):
return self._retry_deep_foreach
def set_value(self, data):
self._value = data
self._value_type_text = None
def discard(self):
self._is_discard = True
def retry_deep_foreach(self):
self._retry_deep_foreach = True
def __str__(self):
if self._key:
return '<level:{}> {}: {}'.format(self._level, self._key, self._value)
return '<level:{}> {}'.format(self._level, self._value)
def get_base_type_text_of_value(value):
if isinstance(value, dict):
return 'dict'
if isinstance(value, list):
return 'list'
if isinstance(value, tuple):
return 'tuple'
if isinstance(value, set):
return 'set'
def get_base_type_text_of_type(main_type: type):
if issubclass(main_type, dict):
return 'dict'
if issubclass(main_type, list):
return 'list'
if issubclass(main_type, tuple):
return 'tuple'
if issubclass(main_type, set):
return 'set'
def get_base_type_of_type_text(type_text: str):
if type_text == 'dict':
return dict
if type_text == 'list':
return list
if type_text == 'tuple':
return tuple
if type_text == 'set':
return set
def get_base_type_of_value(value):
main_type = type(value)
if issubclass(main_type, dict):
return dict
if issubclass(main_type, list):
return list
if issubclass(main_type, tuple):
return tuple
if issubclass(main_type, set):
return set
def _value_transform(item: foreach_object, callback, deep=False, allow_other_type=False):
if item.is_discard():
return _discard_foreach_data
if not deep or not item.is_base_data_type():
callback(item)
if item.is_discard():
return _discard_foreach_data
if item.is_retry_deep_foreach() and item.is_base_data_type() and item.value:
item._retry_deep_foreach = False
return _value_transform(item, callback=callback, deep=deep, allow_other_type=allow_other_type)
return item.value
return _deep_value_transform(item, callback=callback, allow_other_type=allow_other_type)
def _deep_value_transform(item: foreach_object, callback, allow_other_type=False):
if allow_other_type:
item.value = foreach_data(item.value, callback=callback, deep=True, level=item.level + 1)
callback(item)
elif item.is_main_type():
func_name = 'foreach_{}'.format(item.main_type_text)
foreach_func = globals().get(func_name)
# assert foreach_func, 'undefined function: {}'.format(func_name)
item.value = foreach_func(item.value, callback=callback, deep=True, allow_other_type=allow_other_type,
level=item.level + 1)
callback(item)
else:
return item.value
if item.is_discard():
return _discard_foreach_data
if item.is_retry_deep_foreach() and item.is_base_data_type() and item.value:
item._retry_deep_foreach = False
return _deep_value_transform(item, callback=callback, allow_other_type=allow_other_type)
return item.value
def foreach_data(data: dict or list or tuple or list, callback, deep=False, level=None):
value_type_text = get_base_type_text_of_value(data)
assert value_type_text, 'only dict, list, tuple, set may be passed in, got: {}'.format(type(data))
func_name = 'foreach_{}'.format(value_type_text)
foreach_func = globals().get(func_name)
# assert foreach_func, 'undefined function: {}'.format(func_name)
return foreach_func(data, callback=callback, deep=deep, allow_other_type=True, level=level)
def foreach_dict(data: dict, callback, deep=False, allow_other_type=False, level=None):
assert isinstance(data, dict), 'only dict may be passed in, got: {}'.format(type(data))
result_item = dict()
for key, value in data.items():
item = foreach_object(value, main_type=dict, main_type_text='dict', level=level, key=key)
value = _value_transform(item, callback=callback, deep=deep, allow_other_type=allow_other_type)
if value is _discard_foreach_data:
continue
result_item[item.key] = value
return result_item
def _foreach_iter(data, main_type: list or tuple or set, callback, deep=False, allow_other_type=False, level=None):
main_type_text = get_base_type_text_of_type(main_type)
assert isinstance(data, main_type), 'only {} may be passed in, got: {}'.format(main_type_text, type(data))
result_item = []
for value in data:
item = foreach_object(value, main_type=main_type, main_type_text=main_type_text, level=level)
value = _value_transform(item, callback=callback, deep=deep, allow_other_type=allow_other_type)
if value is _discard_foreach_data:
continue
result_item.append(value)
if main_type is list:
return result_item
return main_type(result_item)
def foreach_list(data: list, callback, deep=False, allow_other_type=False, level=None):
return _foreach_iter(data, main_type=list, callback=callback, deep=deep, allow_other_type=allow_other_type,
level=level)
def foreach_tuple(data: tuple, callback, deep=False, allow_other_type=False, level=None):
return _foreach_iter(data, main_type=tuple, callback=callback, deep=deep, allow_other_type=allow_other_type,
level=level)
def foreach_set(data: set, callback, deep=False, allow_other_type=False, level=None):
return _foreach_iter(data, main_type=set, callback=callback, deep=deep, allow_other_type=allow_other_type,
level=level)
if __name__ == '__main__':
def foreach_fun(item: foreach_object):
print('foreach:', item)
if item.key == 'level2':
item.key = 'change_level2'
data = [
'level1',
{
'level2':
[
'level3'
],
}
]
value = foreach_data(data, foreach_fun, deep=True)
print('result:', value) | zforeach | /zforeach-1.0.0.tar.gz/zforeach-1.0.0/zforeach.py | zforeach.py
import re
def format_css(code, action='compact', indentation='\t'):
actFuns = {
'expand' : expand_rules,
'expand-bs' : expand_rules, # expand (break selectors)
'compact' : compact_rules,
'compact-bs' : compact_rules, # compact (break selectors)
'compact-ns' : compact_ns_rules, # compact (no spaces)
'compact-bs-ns' : compact_ns_rules, # compact (break selectors, no spaces)
'compress' : compress_rules
}
if action not in actFuns:
return code
# Comments
if action == 'compress':
# Remove comments
code = re.sub(r'\s*\/\*[\s\S]*?\*\/\s*', '', code)
else:
# Protect comments
commentReg = r'[ \t]*\/\*[\s\S]*?\*\/'
comments = re.findall(commentReg, code)
code = re.sub(commentReg, '!comment!', code)
# Protect strings
stringReg = r'(content\s*:|[\w-]+\s*=)\s*(([\'\"]).*?\3)\s*'
strings = re.findall(stringReg, code)
code = re.sub(stringReg, r'\1!string!', code)
# Protect urls
urlReg = r'((?:url|url-prefix|regexp)\([^\)]+\))'
urls = re.findall(urlReg, code)
code = re.sub(urlReg, '!url!', code)
# Pre process
code = re.sub(r'\s*([\{\}:;,])\s*', r'\1', code) # remove \s before and after characters {}:;,
code = re.sub(r'([\[\(])\s*', r'\1', code) # remove space inner [ or (
code = re.sub(r'\s*([\)\]])', r'\1', code) # remove space inner ) or ]
# code = re.sub(r'(\S+)\s*([\+>~])\s*(\S+)', r'\1\2\3', code) # remove \s before and after relationship selectors
code = re.sub(r',[\d\s\.\#\+>~:]*\{', '{', code) # remove invalid selectors without \w
code = re.sub(r'([;,])\1+', r'\1', code) # remove repeated ;,
if action != 'compress':
# Group selector
if re.search('-bs', action):
code = break_selectors(code) # break after selectors' ,
else:
code = re.sub(r',\s*', ', ', code) # add space after ,
# Add space
if re.search('-ns', action):
code = re.sub(r', +', ',', code) # remove space after ,
code = re.sub(r'\s+!important', '!important', code) # remove space before !important
else:
code = re.sub(r'([A-Za-z-](?:\+_?)?):([^;\{]+[;\}])', r'\1: \2', code) # add space after properties' :
code = re.sub(r'\s*!important', ' !important', code) # add space before !important
# Process action rules
code = actFuns[action](code)
if action == 'compress':
# Remove last semicolon
code = code.replace(';}', '}')
else:
# Add blank line between each block in `expand-bs` mode
if action == 'expand-bs':
code = re.sub(r'\}\s*', '}\n\n', code) # double \n after }
# Fix comments
code = re.sub(r'\s*!comment!\s*@', '\n\n!comment!\n@', code)
code = re.sub(r'\s*!comment!\s*([^\/\{\};]+?)\{', r'\n\n!comment!\n\1{', code)
code = re.sub(r'\s*\n!comment!', '\n\n!comment!', code)
# Backfill comments
for i in range(len(comments)):
code = re.sub(r'[ \t]*!comment!', comments[i], code, 1)
# Indent
code = indent_code(code, indentation)
# Backfill strings
for i in range(len(strings)):
code = code.replace('!string!', strings[i][1], 1)
# Backfill urls
for i in range(len(urls)):
code = code.replace('!url!', urls[i], 1)
# Trim
code = re.sub(r'^\s*(\S+(\s+\S+)*)\s*$', r'\1', code)
return code
# Expand Rules
def expand_rules(code):
code = re.sub('{', ' {\n', code) # add space before { and add \n after {
code = re.sub(';', ';\n', code) # add \n after ;
code = re.sub(r';\s*([^\{\};]+?)\{', r';\n\n\1{', code) # double \n between ; and include selector
code = re.sub(r'\s*(!comment!)\s*;\s*', r' \1 ;\n', code) # fix comment before ;
code = re.sub(r'(:[^:;]+;)\s*(!comment!)\s*', r'\1 \2\n', code) # fix comment after ;
code = re.sub(r'\s*\}', '\n}', code) # add \n before }
code = re.sub(r'\}\s*', '}\n', code) # add \n after }
return code
# Compact Rules
def compact_rules(code):
code = re.sub('{', ' { ', code) # add space before and after {
code = re.sub(r'(@[\w-]*(document|font-feature-values|keyframes|media|supports)[^;]*?\{)\s*', r'\1\n', code)
# add \n after @xxx {
code = re.sub(';', '; ', code) # add space after ;
code = re.sub(r'(@(charset|import|namespace).+?;)\s*', r'\1\n', code) # add \n after @charset & @import
code = re.sub(r';\s*([^\};]+?\{)', r';\n\1', code) # add \n before included selector
code = re.sub(r'\s*(!comment!)\s*;', r' \1 ;', code) # fix comment before ;
code = re.sub(r'(:[^:;]+;)\s*(!comment!)\s*', r'\1 \2 ', code) # fix comment after ;
code = re.sub(r'\s*\}', ' }', code) # add space before }
code = re.sub(r'\}\s*', '}\n', code) # add \n after }
return code
# Compact Rules (no space)
def compact_ns_rules(code):
code = re.sub(r'(@[\w-]*(document|font-feature-values|keyframes|media|supports)[^;]*?\{)\s*', r'\1\n', code)
# add \n after @xxx {
code = re.sub(r'(@(charset|import|namespace).+?;)\s*', r'\1\n', code) # add \n after @charset & @import
code = re.sub(r';\s*([^\};]+?\{)', r';\n\1', code) # add \n before included selector
code = re.sub(r'\s*(!comment!)\s*;', r'\1;', code) # fix comment before ;
code = re.sub(r'(:[^:;]+;)\s*(!comment!)\s*', r'\1\2', code) # fix comment after ;
code = re.sub(r'\}\s*', '}\n', code) # add \n after }
return code
# Compress Rules
def compress_rules(code):
code = re.sub(r'\s*([\{\}:;,])\s*', r'\1', code) # remove \s before and after characters {}:;, again
code = re.sub(r'\s+!important', '!important', code) # remove space before !important
code = re.sub(r'((?:@charset|@import)[^;]+;)\s*', r'\1\n', code) # add \n after @charset & @import
return code
# Break after Selector
def break_selectors(code):
block = code.split('}')
for i in range(len(block)):
b = block[i].split('{')
bLen = len(b)
for j in range(bLen):
if j == bLen - 1:
b[j] = re.sub(r',\s*', ', ', b[j]) # add space after properties' ,
else:
s = b[j].split(';')
sLen = len(s)
sLast = s[sLen - 1]
for k in range(sLen - 1):
s[k] = re.sub(r',\s*', ', ', s[k]) # add space after properties' ,
# For @document, @media
if re.search(r'\s*@(document|media)', sLast):
s[sLen - 1] = re.sub(r',\s*', ', ', sLast) # add space after @media's ,
# For mixins
elif re.search(r'(\(|\))', sLast):
u = sLast.split(')')
for m in range(len(u)):
v = u[m].split('(')
vLen = len(v)
if vLen < 2:
continue
v[0] = re.sub(r',\s*', ',\n', v[0])
v[1] = re.sub(r',\s*', ', ', v[1]) # do not break arguments
u[m] = '('.join(v)
s[sLen - 1] = ')'.join(u)
# For selectors
else:
s[sLen - 1] = re.sub(r',\s*', ',\n', sLast) # add \n after selectors' ,
b[j] = ';'.join(s)
block[i] = '{'.join(b)
code = '}'.join(block)
return code
# Code Indent
def indent_code(code, indentation='\t'):
lines = code.split('\n')
level = 0
inComment = False
outPrefix = ''
for i in range(len(lines)):
if not inComment:
# Quote level adjustment
validCode = re.sub(r'\/\*[\s\S]*?\*\/', '', lines[i])
validCode = re.sub(r'\/\*[\s\S]*', '', validCode)
adjustment = validCode.count('{') - validCode.count('}')
# Trim
m = re.match(r'^(\s+)\/\*.*', lines[i])
if m is not None:
outPrefix = m.group(1)
                lines[i] = re.sub(r'^' + outPrefix + r'(.*)\s*$', r'\1', lines[i])
else:
lines[i] = re.sub(r'^\s*(.*)\s*$', r'\1', lines[i])
else:
# Quote level adjustment
adjustment = 0
# Trim
            lines[i] = re.sub(r'^' + outPrefix + r'(.*)\s*$', r'\1', lines[i])
# Is next line in comment?
commentQuotes = re.findall(r'\/\*|\*\/', lines[i])
for quote in commentQuotes:
if inComment and quote == '*/':
inComment = False
elif quote == '/*':
inComment = True
# Quote level adjustment
nextLevel = level + adjustment
thisLevel = level if adjustment > 0 else nextLevel
level = nextLevel
# Add indentation
lines[i] = indentation * thisLevel + lines[i] if lines[i] != '' else ''
code = '\n'.join(lines)
return code | zformat | /zformat-0.9.2-py3-none-any.whl/cssformatter.py | cssformatter.py |
# Standard Python Library Imports
import os.path
import argparse
import sys
import re
# 3rd Party Imports
import jsbeautifier
import cssformatter
from jsmin import jsmin
# 1st Party Imports
def remove_css_variables(css_content):
"""Remove CSS variables from the CSS and replace each usage of the variable with its value (via RegEx).
- Useful for IE 11, which does not support CSS variables: http://caniuse.com/#feat=css-variables
"""
# Find all of the CSS variable declarations
css_var_definitions = re.compile(r"--(?P<var_name>[\w-]+?):\s+?(?P<var_val>.+?);")
variables = re.findall(css_var_definitions, css_content)
# Some CSS variables use other nested CSS variables in their definition. Replace each by working backwards.
for var_name, var_value in reversed(variables):
css_content = css_content.replace("var(--" + var_name + ")", var_value)
# Remove the :root{} section with all of the CSS variable declarations
    root_match = re.search(r":root \{[\s\S]*?\}", css_content)
    if root_match is not None:
        css_content = css_content.replace(root_match.group(0), "")
return css_content
def format_file(path, minify, check_format):
"""Unified formatting function for files.
"""
path = os.path.abspath(path)
if not os.path.isfile(path):
sys.exit("Error: file does not exist {}".format(path))
content = open(path, 'r').read()
root, extension = os.path.splitext(path)
if minify and check_format:
sys.exit("Error: cannot minify and check the file's formatting")
result = ""
bad_formatting = False # Used as return code for check_format
if extension == '.js':
if check_format:
            result = jsbeautifier.beautify(content)
if result != content:
bad_formatting = True
print("{} not formatted correctly".format(path))
elif minify:
result = jsmin(content, quote_chars="""'\"`""")
else: # Format in-place
result = jsbeautifier.beautify(content)
elif extension == '.css':
if check_format:
result = cssformatter.format_css(content, 'expand-bs', ' ')
if result != content:
bad_formatting = True
print("{} not formatted correctly".format(path))
elif minify:
result = remove_css_variables(content)
result = cssformatter.format_css(result, 'compress')
else: # Format in-place
result = cssformatter.format_css(content, 'expand-bs', ' ')
else:
sys.exit("Error: unknown file extension {}".format(extension))
if check_format:
return bad_formatting
elif minify:
path = root + ".min" + extension
open(path, 'w').write(result)
else: # Format in-place
open(path, 'w').write(result)
def main():
"""Command Line entry point.
"""
arg_parser = argparse.ArgumentParser(
description="An opinionated CLI formatter for JavaScript and CSS (check formatting, format, or minimize). "
"If the file is being beautified, the file's contents are replaced with the new "
"formatting. If the file is being minimized, we create a new file with `.min` "
"before the normal file extensions (e.g. `.min.js` or `.min.css`).\n\n For CSS "
"minification, this will remove CSS variables by using search and replace to "
"support IE 11."
)
arg_parser.add_argument('path', type=str, help="file's path to format")
arg_parser.add_argument('-m', '--minify', default=False, action="store_true",
help="minimize the file's content instead of beautifying")
arg_parser.add_argument('-c', '--check', default=False, action="store_true",
help="check the file's formatting only (print and return error code if incorrect)")
args = arg_parser.parse_args()
format_file(args.path, args.minify, args.check)
if __name__ == "__main__":
main() | zformat | /zformat-0.9.2-py3-none-any.whl/zformat.py | zformat.py |
# z-format
An opinionated CLI formatter for JavaScript and CSS (check formatting, beautify, or minimize).
If the file is being beautified, the file's contents are replaced with the new
formatting. If the file is being minimized, we create a new file with `.min`
before the normal file extensions (e.g. `.min.js` or `.min.css`).
## Usage
```sh
usage: format.py [-h] [-m] [-c] path
An opinionated CLI formatter for JavaScript and CSS (check formatting, format, or minimize).
positional arguments:
  path          file's path to format
optional arguments:
  -h, --help    show this help message and exit
  -m, --minify  minimize the file's content instead of beautifying
  -c, --check   check the file's formatting only (print and return error code if incorrect)
```
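The same functionality can also be called from Python; here is a minimal sketch using the `format_file` helper from the bundled `zformat` module (argument names follow the source shipped with this package):
```python
from zformat import format_file
# Beautify in place
format_file("static/app.css", minify=False, check_format=False)
# Write a minified copy next to the original (static/app.min.css)
format_file("static/app.css", minify=True, check_format=False)
```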
## Install
```sh
pip install zformat
```
## Dependencies
This is a simple wrapper for several libraries which it installs by default:
- jsbeautifier - JavaScript beautification
- jsmin - JavaScript minification
- cssformatter - CSS beautification and minification
## License
MIT License
Copyright (c) 2017 Joshua S. Ziegler
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| zformat | /zformat-0.9.2-py3-none-any.whl/zformat-0.9.2.dist-info/DESCRIPTION.rst | DESCRIPTION.rst |
[](https://github.com/seung-lab/zfpc/actions/workflows/test-suite.yml)
# zfpc: zfp container format
_An unofficial project unaffiliated with the [`zfp`](https://github.com/LLNL/zfp/) project._
An experimental container format for `zfp`-encoded vector fields. As described in the [zfp documentation](https://zfp.readthedocs.io/en/latest/faq.html#q-vfields), datasets such as vector fields are not optimally compressed within a single zfp stream, because the X and Y components are uncorrelated with each other. Compressing the X and Y components as separate `zfp` arrays yields a higher compression ratio.
However, keeping separate files is cumbersome, must be maintained per project, and is not compatible with existing data viewers (such as Neuroglancer) that expect to download a single file per image tile. `zfpc` provides a means of splitting a 1-4D array along its (user-specified) uncorrelated dimensions, compressing those slices into separate `zfp` streams, and encoding them into a single file. This file can then be decompressed back into its original form seamlessly using `zfpc`. In the future, it may become possible to determine automatically which dimensions are uncorrelated using statistical tests.
In fixed rate mode, it should still be possible to perform random access though this feature is not available yet.
```python
import zfpc
# example shape: (1202, 1240, 64, 2)
vector_field = np.array(...) # dtype must be float or int, 32 or 64-bit
# For data that are arranged as a Z stack of planar XY vectors
# e.g. arr[x,y,z,channel] mostly smoothly vary in the XY plane
# per a channel. Therefore, we set correlated_dims as
# [True,True,False,False] as the z and channel dimensions
# do not smoothly vary to obtain optimal compression.
#
# tolerance, rate, and precision are supported modes.
# By default, lossless compression is used.
correlated_dims = [True, True, False, False]
binary = zfpc.compress(
vector_field,
tolerance=0.01,
correlated_dims=correlated_dims,
)
recovered_img = zfpc.decompress(binary)
```
## Container Format
header,index,streams
### Header
The header is 23 bytes long (the sum of the field sizes below) and is written in little endian, in the following format.
| Field | Type | Description |
|-------------------|---------|----------------------------------------------------------------------------------------------------------|
| magic | char[4] | "zfpc" magic number for file format. |
| format version | uint8 | Always version 0 (for now). |
| dtype,mode,order | uint8 | bits 1-3: zfp data type; bits 4-6: zfp mode; bit 7: unused; bit 8: true indicates c order (bits: DDDMMMUC) |
| nx | uint32 | Size of x axis. |
| ny | uint32 | Size of y axis. |
| nz | uint32 | Size of z axis. |
| nw | uint32 | Number of channels. |
| correlated dims | uint8 | Bottom 4 bits are a bitfield with 1 indicating correlated, 0 uncorrelated. Top 4 bits unused. (xyzw0000) |
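As an illustration only (this sketch is derived from the table above, not from the `zfpc` source; the bit numbering is assumed to start at the least-significant bit, and `parse_zfpc_header` is a hypothetical helper):
```python
import struct
def parse_zfpc_header(buf):
    # "<4sBB4IB" mirrors the table: magic, version, packed dtype/mode/order byte, nx..nw, correlated mask
    magic, version, packed, nx, ny, nz, nw, corr = struct.unpack_from("<4sBB4IB", buf, 0)
    assert magic == b"zfpc"
    dtype = packed & 0b111          # bits 1-3: zfp data type
    mode = (packed >> 3) & 0b111    # bits 4-6: zfp mode
    c_order = bool(packed & 0x80)   # bit 8: true indicates C order
    # bottom 4 bits of the mask: 1 = correlated (bit order x,y,z,w assumed)
    correlated = [bool(corr & (1 << i)) for i in range(4)]
    return {"version": version, "dtype": dtype, "mode": mode, "c_order": c_order,
            "shape": (nx, ny, nz, nw), "correlated": correlated}
```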
### Index
All entries in the index are uint64 (8 bytes) little endian.
The index consists of the stream offset followed by the size of each stream. The number of streams is the product of the sizes of all uncorrelated dimensions.
The stream offset is not strictly necessary, but it allows the format to change while still letting older decompressors function.
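Continuing the sketch (again illustrative; the 23-byte header size and the single leading stream offset follow the description above, and the names are hypothetical):
```python
import struct
def parse_zfpc_index(buf, shape, correlated, header_size=23):
    # one zfp stream per combination of coordinates along the uncorrelated axes
    num_streams = 1
    for size, corr in zip(shape, correlated):
        if not corr:
            num_streams *= max(size, 1)
    pos = header_size
    stream_offset = struct.unpack_from("<Q", buf, pos)[0]  # where the concatenated streams begin
    pos += 8
    sizes = struct.unpack_from("<{}Q".format(num_streams), buf, pos)
    return stream_offset, list(sizes)
```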
### Streams
All zfp streams are concatenated together in Fortran order. The streams are written with a full header so that they can be decompressed independently.
In the future, it might make sense to save space by condensing them into a single header and writing headerless streams. However, writing full headers leaves open the possibility of using different compression settings for each stream, which could pay off for different components.
| zfpc | /zfpc-0.1.2.tar.gz/zfpc-0.1.2/README.md | README.md |
ZFP
===
[](https://travis-ci.org/LLNL/zfp)
[](https://ci.appveyor.com/project/lindstro/zfp)
[](https://zfp.readthedocs.io/en/release0.5.5/?badge=release0.5.5)
[](https://codecov.io/gh/LLNL/zfp)
zfp is a compressed format for representing multidimensional floating-point
and integer arrays. zfp provides compressed-array classes that support high
throughput read and write random access to individual array elements. zfp
also supports serial and parallel (OpenMP and CUDA) compression of whole
arrays, e.g., for applications that read and write large data sets to and
from disk.
zfp uses lossy but optionally error-bounded compression to achieve high
compression ratios. Bit-for-bit lossless compression is also possible
through one of zfp's compression modes. zfp works best for 2D, 3D, and 4D
arrays that exhibit spatial correlation, such as continuous fields from
physics simulations, natural images, regularly sampled terrain surfaces, etc.
zfp compression of 1D arrays is possible but generally discouraged.
zfp is freely available as open source and is distributed under a BSD license.
zfp is primarily written in C and C++ but also includes Python and Fortran
bindings. zfp conforms to various language standards, including C89, C99,
C11, C++98, C++11, and C++14, and is supported on Linux, macOS, and Windows.
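Since this copy of the README accompanies the `zfpy` Python bindings, here is a minimal sketch of the NumPy interface (using the `compress_numpy`/`decompress_numpy` helpers; see the documentation linked below for the authoritative API and options):
```python
import numpy as np
import zfpy
data = np.linspace(0.0, 1.0, num=64 * 64).reshape(64, 64)
# fixed-accuracy mode: absolute error bounded by the tolerance
compressed = zfpy.compress_numpy(data, tolerance=1e-4)
recovered = zfpy.decompress_numpy(compressed)
assert np.allclose(data, recovered, atol=1e-4)
```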
Quick Start
-----------
To download zfp, type:
git clone https://github.com/LLNL/zfp.git
zfp may be built using either [CMake](https://cmake.org/) or
[GNU make](https://www.gnu.org/software/make/). To use CMake, type:
cd zfp
mkdir build
cd build
cmake ..
cmake --build . --config Release
ctest
This builds the zfp library in the `build/lib` directory and the zfp
command-line executable in the `build/bin` directory. It then runs
the regression tests.
zfp may also be built using GNU make:
cd zfp
make
make test
Note: GNU builds are less flexible and do not support all available features,
e.g., CUDA support.
For further configuration and build instructions, please consult the
[documentation](https://zfp.readthedocs.io/en/latest/installation.html).
Documentation
-------------
Full HTML [documentation](http://zfp.readthedocs.io/) is available online.
A [PDF](http://readthedocs.org/projects/zfp/downloads/pdf/latest/) version
is also available.
Contributing
------------
The zfp project uses the
[Gitflow](https://nvie.com/posts/a-successful-git-branching-model/)
development model. Contributions should be made as pull requests on the
`develop` branch. Although this branch is under continuous development,
it should be robust enough to pass all regression tests.
The `master` branch is updated with each release and reflects the most
recent official release of zfp. See the
[Releases Page](https://github.com/LLNL/zfp/releases) for a history
of releases.
Authors
-------
zfp was originally developed by [Peter Lindstrom](https://people.llnl.gov/pl)
at [Lawrence Livermore National Laboratory](https://www.llnl.gov/). Please
see the [Contributors Page](https://github.com/LLNL/zfp/graphs/contributors)
for a full list of contributors.
### Citing zfp
If you use zfp for scholarly research, please cite this paper:
* Peter Lindstrom.
[Fixed-Rate Compressed Floating-Point Arrays](https://www.researchgate.net/publication/264417607_Fixed-Rate_Compressed_Floating-Point_Arrays).
IEEE Transactions on Visualization and Computer Graphics, 20(12):2674-2683, December 2014.
[doi:10.1109/TVCG.2014.2346458](http://doi.org/10.1109/TVCG.2014.2346458).
Additional Resources
--------------------
For more information on zfp, please see the
[zfp website](https://computing.llnl.gov/casc/zfp/).
For bug reports and feature requests, please consult the
[GitHub issue tracker](https://github.com/LLNL/zfp/issues/).
For questions and comments not answered here or in the
[documentation](http://zfp.readthedocs.io),
please send e-mail to [[email protected]](mailto:[email protected]).
License
-------
zfp is distributed under the terms of the BSD 3-Clause license. See the
files [LICENSE](https://github.com/LLNL/zfp/blob/develop/LICENSE) and
[NOTICE](https://github.com/LLNL/zfp/blob/develop/NOTICE) for details.
SPDX-License-Identifier: BSD-3-Clause
LLNL-CODE-663824
| zfpy | /zfpy-0.5.5rc5.tar.gz/zfpy-0.5.5rc5/README.md | README.md |
# ZFS autobackup
[](https://github.com/psy0rz/zfs_autobackup/actions?query=workflow%3A%22Regression+tests%22) [](https://coveralls.io/github/psy0rz/zfs_autobackup) [](https://pypi.org/project/zfs-autobackup/)
## Introduction
ZFS-autobackup tries to be the most reliable and easiest to use tool, while having all the features.
You can either use it as a **backup** tool, **replication** tool or **snapshot** tool.
You can select what to back up by setting a custom `ZFS property`. This makes it easy to add or remove specific datasets, or to just back up your whole pool.
Other settings are specified on the command line: simply set up and test your zfs-autobackup command and fix any issues you encounter. When you're done you can copy/paste the command into a cron job or script.
Since it uses plain ZFS commands, you can see what it's actually doing by specifying `--debug`. This also helps a lot if you run into a strange problem or error: you can copy-paste the failing command and play around with it on the command line. (Something I missed in other tools.)
An important feature that's missing from other tools is a reliable `--test` option: it lets you see what zfs-autobackup will do and tune your parameters. It does everything except make changes to your system.
## Features
* Works across operating systems: Tested with **Linux**, **FreeBSD/FreeNAS** and **SmartOS**.
* Low learning curve: no complex daemons or services, no additional software or networking needed. (Only read this page)
* Plays nicely with existing replication systems. (Like Proxmox HA)
* Automatically selects filesystems to backup by looking at a simple ZFS property.
* Creates consistent snapshots. (takes all snapshots at once, atomically.)
* Multiple backup modes:
  * Back up local data on the same server.
* "push" local data to a backup-server via SSH.
* "pull" remote data from a server via SSH and backup it locally.
* "pull+push": Zero trust between source and target.
* Can be scheduled via simple cronjob or run directly from commandline.
* ZFS encryption support: Can decrypt / encrypt or even re-encrypt datasets during transfer.
* Supports sending with compression. (Using pigz, zstd etc)
* IO buffering to speed up transfer.
* Bandwidth rate limiting.
* Multiple backups from and to the same datasets are no problem.
* Resilient to errors.
* Ability to manually 'finish' failed backups to see what's going on.
* Easy to debug and has a test-mode. Actual unix commands are printed.
* Uses progressive thinning for older snapshots.
* Uses zfs-holds on important snapshots to prevent accidental deletion.
* Automatic resuming of failed transfers.
* Easy migration from existing zfs backups.
* Gracefully handles datasets that no longer exist on source.
* Complete and clean logging.
* Easy installation:
* Just install zfs-autobackup via pip.
* Only needs to be installed on one side.
  * Written in Python and uses plain zfs commands; no special third-party dependencies or compiled libraries needed.
* No annoying config files or properties.
## Getting started
Please look at our wiki to [Get started](https://github.com/psy0rz/zfs_autobackup/wiki).
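For orientation, a typical run looks roughly like this (pool, dataset and host names below are placeholders, and the label `offsite1` is arbitrary; the wiki remains the authoritative guide):
```sh
# On the source: select what to back up by setting the autobackup property
zfs set autobackup:offsite1=true rpool/data
# Snapshot, replicate and thin everything carrying that label
# (the target dataset backuppool/rpool must already exist)
zfs-autobackup -v --ssh-target backupserver offsite1 backuppool/rpool
```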
## Sponsor list
This project was sponsored by:
* JetBrains (Provided me with a license for their whole professional product line, https://www.jetbrains.com/pycharm/ )
| zfs-autobackup | /zfs_autobackup-3.2a1.tar.gz/zfs_autobackup-3.2a1/README.md | README.md |
from __future__ import print_function
import sys
class LogConsole:
"""Log-class that outputs to console, adding colors if needed"""
def __init__(self, show_debug, show_verbose, color):
self.last_log = ""
self.show_debug = show_debug
self.show_verbose = show_verbose
if color:
            # try to use color, fall back if colorama is not available
            self.colorama = False
            try:
                # declare the global name before binding it via import,
                # so the module is also available to the other methods
                global colorama
                import colorama
                self.colorama = True
except ImportError:
pass
else:
self.colorama=False
def error(self, txt):
if self.colorama:
print(colorama.Fore.RED + colorama.Style.BRIGHT + "! " + txt + colorama.Style.RESET_ALL, file=sys.stderr)
else:
print("! " + txt, file=sys.stderr)
sys.stderr.flush()
def warning(self, txt):
if self.colorama:
print(colorama.Fore.YELLOW + colorama.Style.BRIGHT + " NOTE: " + txt + colorama.Style.RESET_ALL)
else:
print(" NOTE: " + txt)
sys.stdout.flush()
def verbose(self, txt):
if self.show_verbose:
if self.colorama:
print(colorama.Style.NORMAL + " " + txt + colorama.Style.RESET_ALL)
else:
print(" " + txt)
sys.stdout.flush()
def debug(self, txt):
if self.show_debug:
if self.colorama:
print(colorama.Fore.GREEN + "# " + txt + colorama.Style.RESET_ALL)
else:
print("# " + txt)
sys.stdout.flush()
def progress(self, txt):
"""print progress output to stderr (stays on same line)"""
self.clear_progress()
print(">>> {}\r".format(txt), end='', file=sys.stderr)
sys.stderr.flush()
def clear_progress(self):
        if self.colorama:
            print(colorama.ansi.clear_line(), end='', file=sys.stderr)
        else:
            # without colorama, just return the cursor to the start of the line
            print("\r", end='', file=sys.stderr)
sys.stderr.flush() | zfs-autobackup | /zfs_autobackup-3.2a1.tar.gz/zfs_autobackup-3.2a1/zfs_autobackup/LogConsole.py | LogConsole.py |
import time
import argparse
from .ZfsAuto import ZfsAuto
from . import compressors
from .ExecuteNode import ExecuteNode
from .Thinner import Thinner
from .ZfsDataset import ZfsDataset
from .ZfsNode import ZfsNode
from .ThinnerRule import ThinnerRule
class ZfsAutobackup(ZfsAuto):
"""The main zfs-autobackup class. Start here, at run() :)"""
def __init__(self, argv, print_arguments=True):
# NOTE: common options and parameters are in ZfsAuto
super(ZfsAutobackup, self).__init__(argv, print_arguments)
def parse_args(self, argv):
"""do extra checks on common args"""
args = super(ZfsAutobackup, self).parse_args(argv)
if not args.no_holds:
self.verbose("Hold name : {}".format(self.hold_name))
if args.allow_empty:
args.min_change = 0
if args.destroy_incompatible:
args.rollback = True
if args.resume:
self.warning("The --resume option isn't needed anymore (its autodetected now)")
if args.raw:
self.warning(
"The --raw option isn't needed anymore (its autodetected now). Also see --encrypt and --decrypt.")
if args.compress and args.ssh_source is None and args.ssh_target is None:
self.warning("Using compression, but transfer is local.")
if args.compress and args.zfs_compressed:
self.warning("Using --compress with --zfs-compressed, might be inefficient.")
return args
def get_parser(self):
"""extend common parser with extra stuff needed for zfs-autobackup"""
parser = super(ZfsAutobackup, self).get_parser()
group = parser.add_argument_group("Snapshot options")
group.add_argument('--no-snapshot', action='store_true',
help='Don\'t create new snapshots (useful for finishing uncompleted backups, or cleanups)')
group.add_argument('--pre-snapshot-cmd', metavar="COMMAND", default=[], action='append',
help='Run COMMAND before snapshotting (can be used multiple times.')
group.add_argument('--post-snapshot-cmd', metavar="COMMAND", default=[], action='append',
help='Run COMMAND after snapshotting (can be used multiple times.')
group.add_argument('--min-change', metavar='BYTES', type=int, default=1,
help='Only create snapshot if enough bytes are changed. (default %('
'default)s)')
group.add_argument('--allow-empty', action='store_true',
help='If nothing has changed, still create empty snapshots. (Faster. Same as --min-change=0)')
group.add_argument('--other-snapshots', action='store_true',
help='Send over other snapshots as well, not just the ones created by this tool.')
group = parser.add_argument_group("Transfer options")
group.add_argument('--no-send', action='store_true',
                           help='Don\'t transfer snapshots (useful for cleanups, or if you want a separate send-cronjob)')
group.add_argument('--no-holds', action='store_true',
help='Don\'t hold snapshots. (Faster. Allows you to destroy common snapshot.)')
group.add_argument('--clear-refreservation', action='store_true',
help='Filter "refreservation" property. (recommended, safes space. same as '
'--filter-properties refreservation)')
group.add_argument('--clear-mountpoint', action='store_true',
help='Set property canmount=noauto for new datasets. (recommended, prevents mount '
'conflicts. same as --set-properties canmount=noauto)')
group.add_argument('--filter-properties', metavar='PROPERTY,...', type=str,
help='List of properties to "filter" when receiving filesystems. (you can still restore '
'them with zfs inherit -S)')
group.add_argument('--set-properties', metavar='PROPERTY=VALUE,...', type=str,
help='List of propererties to override when receiving filesystems. (you can still restore '
'them with zfs inherit -S)')
group.add_argument('--rollback', action='store_true',
help='Rollback changes to the latest target snapshot before starting. (normally you can '
'prevent changes by setting the readonly property on the target_path to on)')
group.add_argument('--destroy-incompatible', action='store_true',
help='Destroy incompatible snapshots on target. Use with care! (implies --rollback)')
group.add_argument('--ignore-transfer-errors', action='store_true',
help='Ignore transfer errors (still checks if received filesystem exists. useful for '
'acltype errors)')
group.add_argument('--decrypt', action='store_true',
help='Decrypt data before sending it over.')
group.add_argument('--encrypt', action='store_true',
help='Encrypt data after receiving it.')
group.add_argument('--zfs-compressed', action='store_true',
help='Transfer blocks that already have zfs-compression as-is.')
group = parser.add_argument_group("ZFS send/recv pipes")
group.add_argument('--compress', metavar='TYPE', default=None, nargs='?', const='zstd-fast',
choices=compressors.choices(),
help='Use compression during transfer, defaults to zstd-fast if TYPE is not specified. ({})'.format(
", ".join(compressors.choices())))
group.add_argument('--rate', metavar='DATARATE', default=None,
help='Limit data transfer rate (e.g. 128K. requires mbuffer.)')
group.add_argument('--buffer', metavar='SIZE', default=None,
help='Add zfs send and recv buffers to smooth out IO bursts. (e.g. 128M. requires mbuffer)')
group.add_argument('--send-pipe', metavar="COMMAND", default=[], action='append',
help='pipe zfs send output through COMMAND (can be used multiple times)')
group.add_argument('--recv-pipe', metavar="COMMAND", default=[], action='append',
help='pipe zfs recv input through COMMAND (can be used multiple times)')
group = parser.add_argument_group("Thinner options")
group.add_argument('--no-thinning', action='store_true', help="Do not destroy any snapshots.")
group.add_argument('--keep-source', metavar='SCHEDULE', type=str, default="10,1d1w,1w1m,1m1y",
help='Thinning schedule for old source snapshots. Default: %(default)s')
group.add_argument('--keep-target', metavar='SCHEDULE', type=str, default="10,1d1w,1w1m,1m1y",
help='Thinning schedule for old target snapshots. Default: %(default)s')
group.add_argument('--destroy-missing', metavar="SCHEDULE", type=str, default=None,
help='Destroy datasets on target that are missing on the source. Specify the time since '
'the last snapshot, e.g: --destroy-missing 30d')
# obsolete
parser.add_argument('--resume', action='store_true', help=argparse.SUPPRESS)
parser.add_argument('--raw', action='store_true', help=argparse.SUPPRESS)
return parser
# NOTE: this method also uses self.args. args that need extra processing are passed as function parameters:
def thin_missing_targets(self, target_dataset, used_target_datasets):
"""thin target datasets that are missing on the source."""
self.debug("Thinning obsolete datasets")
missing_datasets = [dataset for dataset in target_dataset.recursive_datasets if
dataset not in used_target_datasets]
count = 0
for dataset in missing_datasets:
count = count + 1
if self.args.progress:
self.progress("Analysing missing {}/{}".format(count, len(missing_datasets)))
try:
dataset.debug("Missing on source, thinning")
dataset.thin()
except Exception as e:
dataset.error("Error during thinning of missing datasets ({})".format(str(e)))
if self.args.progress:
self.clear_progress()
# NOTE: this method also uses self.args. args that need extra processing are passed as function parameters:
def destroy_missing_targets(self, target_dataset, used_target_datasets):
"""destroy target datasets that are missing on the source and that meet the requirements"""
self.debug("Destroying obsolete datasets")
missing_datasets = [dataset for dataset in target_dataset.recursive_datasets if
dataset not in used_target_datasets]
count = 0
for dataset in missing_datasets:
count = count + 1
if self.args.progress:
self.progress("Analysing destroy missing {}/{}".format(count, len(missing_datasets)))
try:
# cant do anything without our own snapshots
if not dataset.our_snapshots:
if dataset.datasets:
# its not a leaf, just ignore
dataset.debug("Destroy missing: ignoring")
else:
dataset.verbose(
"Destroy missing: has no snapshots made by us. (please destroy manually)")
else:
# past the deadline?
deadline_ttl = ThinnerRule("0s" + self.args.destroy_missing).ttl
now = int(time.time())
if dataset.our_snapshots[-1].timestamp + deadline_ttl > now:
dataset.verbose("Destroy missing: Waiting for deadline.")
else:
dataset.debug("Destroy missing: Removing our snapshots.")
                        # remove all our snapshots, except the last, to save space in case we fail later on
for snapshot in dataset.our_snapshots[:-1]:
snapshot.destroy(fail_exception=True)
# does it have other snapshots?
has_others = False
for snapshot in dataset.snapshots:
if not snapshot.is_ours():
has_others = True
break
if has_others:
dataset.verbose("Destroy missing: Still in use by other snapshots")
else:
if dataset.datasets:
dataset.verbose("Destroy missing: Still has children here.")
else:
dataset.verbose("Destroy missing.")
dataset.our_snapshots[-1].destroy(fail_exception=True)
dataset.destroy(fail_exception=True)
except Exception as e:
if self.args.progress:
self.clear_progress()
dataset.error("Error during --destroy-missing: {}".format(str(e)))
if self.args.progress:
self.clear_progress()
def get_send_pipes(self, logger):
"""determine the zfs send pipe"""
ret = []
# IO buffer
if self.args.buffer:
logger("zfs send buffer : {}".format(self.args.buffer))
ret.extend([ExecuteNode.PIPE, "mbuffer", "-q", "-s128k", "-m" + self.args.buffer])
# custom pipes
for send_pipe in self.args.send_pipe:
ret.append(ExecuteNode.PIPE)
ret.extend(send_pipe.split(" "))
logger("zfs send custom pipe : {}".format(send_pipe))
# compression
        if self.args.compress is not None:
ret.append(ExecuteNode.PIPE)
cmd = compressors.compress_cmd(self.args.compress)
ret.extend(cmd)
logger("zfs send compression : {}".format(" ".join(cmd)))
# transfer rate
if self.args.rate:
logger("zfs send transfer rate : {}".format(self.args.rate))
ret.extend([ExecuteNode.PIPE, "mbuffer", "-q", "-s128k", "-m16M", "-R" + self.args.rate])
return ret
def get_recv_pipes(self, logger):
ret = []
# decompression
        if self.args.compress is not None:
cmd = compressors.decompress_cmd(self.args.compress)
ret.extend(cmd)
ret.append(ExecuteNode.PIPE)
logger("zfs recv decompression : {}".format(" ".join(cmd)))
# custom pipes
for recv_pipe in self.args.recv_pipe:
ret.extend(recv_pipe.split(" "))
ret.append(ExecuteNode.PIPE)
logger("zfs recv custom pipe : {}".format(recv_pipe))
# IO buffer
if self.args.buffer:
            # only add a second buffer if it's useful (e.g. non-local transfer or other pipes active)
            if self.args.ssh_source is not None or self.args.ssh_target is not None or self.args.recv_pipe or self.args.send_pipe or self.args.compress is not None:
logger("zfs recv buffer : {}".format(self.args.buffer))
ret.extend(["mbuffer", "-q", "-s128k", "-m" + self.args.buffer, ExecuteNode.PIPE])
return ret
# NOTE: this method also uses self.args. args that need extra processing are passed as function parameters:
def sync_datasets(self, source_node, source_datasets, target_node):
"""Sync datasets, or thin-only on both sides
:type target_node: ZfsNode
:type source_datasets: list of ZfsDataset
:type source_node: ZfsNode
"""
send_pipes = self.get_send_pipes(source_node.verbose)
recv_pipes = self.get_recv_pipes(target_node.verbose)
fail_count = 0
count = 0
target_datasets = []
for source_dataset in source_datasets:
# stats
if self.args.progress:
count = count + 1
self.progress("Analysing dataset {}/{} ({} failed)".format(count, len(source_datasets), fail_count))
try:
# determine corresponding target_dataset
target_name = self.make_target_name(source_dataset)
target_dataset = target_node.get_dataset(target_name)
target_datasets.append(target_dataset)
                # ensure parents exist
                # TODO: this isn't perfect yet, in some cases it can create parents when it shouldn't.
if not self.args.no_send \
and target_dataset.parent not in target_datasets \
and not target_dataset.parent.exists:
target_dataset.parent.create_filesystem(parents=True)
# determine common zpool features (cached, so no problem we call it often)
source_features = source_node.get_pool(source_dataset).features
target_features = target_node.get_pool(target_dataset).features
                common_features = [feature for feature in source_features if feature in target_features]
# sync the snapshots of this dataset
source_dataset.sync_snapshots(target_dataset, show_progress=self.args.progress,
features=common_features, filter_properties=self.filter_properties_list(),
set_properties=self.set_properties_list(),
ignore_recv_exit_code=self.args.ignore_transfer_errors,
holds=not self.args.no_holds, rollback=self.args.rollback,
also_other_snapshots=self.args.other_snapshots,
no_send=self.args.no_send,
destroy_incompatible=self.args.destroy_incompatible,
send_pipes=send_pipes, recv_pipes=recv_pipes,
decrypt=self.args.decrypt, encrypt=self.args.encrypt,
zfs_compressed=self.args.zfs_compressed)
except Exception as e:
if self.args.progress:
self.clear_progress()
fail_count = fail_count + 1
source_dataset.error("FAILED: " + str(e))
if self.args.debug:
self.verbose("Debug mode, aborting on first error")
raise
if self.args.progress:
self.clear_progress()
target_path_dataset = target_node.get_dataset(self.args.target_path)
if not self.args.no_thinning:
self.thin_missing_targets(target_dataset=target_path_dataset, used_target_datasets=target_datasets)
if self.args.destroy_missing is not None:
self.destroy_missing_targets(target_dataset=target_path_dataset, used_target_datasets=target_datasets)
return fail_count
def thin_source(self, source_datasets):
self.set_title("Thinning source")
for source_dataset in source_datasets:
source_dataset.thin(skip_holds=True)
def filter_properties_list(self):
if self.args.filter_properties:
filter_properties = self.args.filter_properties.split(",")
else:
filter_properties = []
if self.args.clear_refreservation:
filter_properties.append("refreservation")
return filter_properties
def set_properties_list(self):
if self.args.set_properties:
set_properties = self.args.set_properties.split(",")
else:
set_properties = []
if self.args.clear_mountpoint:
set_properties.append("canmount=noauto")
return set_properties
def run(self):
try:
################ create source zfsNode
self.set_title("Source settings")
description = "[Source]"
if self.args.no_thinning:
source_thinner = None
else:
source_thinner = Thinner(self.args.keep_source)
source_node = ZfsNode(snapshot_time_format=self.snapshot_time_format, hold_name=self.hold_name, logger=self,
ssh_config=self.args.ssh_config,
ssh_to=self.args.ssh_source, readonly=self.args.test,
debug_output=self.args.debug_output, description=description, thinner=source_thinner)
################# select source datasets
self.set_title("Selecting")
source_datasets = source_node.selected_datasets(property_name=self.property_name,
exclude_received=self.args.exclude_received,
exclude_paths=self.exclude_paths,
exclude_unchanged=self.args.exclude_unchanged,
min_change=self.args.min_change)
if not source_datasets:
self.print_error_sources()
return 255
################# snapshotting
if not self.args.no_snapshot:
self.set_title("Snapshotting")
snapshot_name = time.strftime(self.snapshot_time_format)
source_node.consistent_snapshot(source_datasets, snapshot_name,
min_changed_bytes=self.args.min_change,
pre_snapshot_cmds=self.args.pre_snapshot_cmd,
post_snapshot_cmds=self.args.post_snapshot_cmd)
################# sync
# if target is specified, we sync the datasets, otherwise we just thin the source. (e.g. snapshot mode)
if self.args.target_path:
# create target_node
self.set_title("Target settings")
if self.args.no_thinning:
target_thinner = None
else:
target_thinner = Thinner(self.args.keep_target)
target_node = ZfsNode(snapshot_time_format=self.snapshot_time_format, hold_name=self.hold_name,
logger=self, ssh_config=self.args.ssh_config,
ssh_to=self.args.ssh_target,
readonly=self.args.test, debug_output=self.args.debug_output,
description="[Target]",
thinner=target_thinner)
target_node.verbose("Receive datasets under: {}".format(self.args.target_path))
self.set_title("Synchronising")
# check if exists, to prevent vague errors
target_dataset = target_node.get_dataset(self.args.target_path)
if not target_dataset.exists:
raise (Exception(
"Target path '{}' does not exist. Please create this dataset first.".format(target_dataset)))
# do the actual sync
                # NOTE: even with no_send, no_thinning and no_snapshot this does a useful thing, because it checks the common snapshots and shows incompatible snapshots
fail_count = self.sync_datasets(
source_node=source_node,
source_datasets=source_datasets,
target_node=target_node)
# no target specified, run in snapshot-only mode
else:
if not self.args.no_thinning:
self.thin_source(source_datasets)
fail_count = 0
if not fail_count:
if self.args.test:
self.set_title("All tests successful.")
else:
self.set_title("All operations completed successfully")
if not self.args.target_path:
self.verbose("(No target_path specified, only operated as snapshot tool.)")
else:
if fail_count != 255:
self.error("{} dataset(s) failed!".format(fail_count))
if self.args.test:
self.verbose("")
self.warning("TEST MODE - DID NOT MAKE ANY CHANGES!")
return fail_count
except Exception as e:
self.error("Exception: " + str(e))
if self.args.debug:
raise
return 255
except KeyboardInterrupt:
self.error("Aborted")
return 255
def cli():
import sys
sys.exit(ZfsAutobackup(sys.argv[1:], False).run())
if __name__ == "__main__":
cli() | zfs-autobackup | /zfs_autobackup-3.2a1.tar.gz/zfs_autobackup-3.2a1/zfs_autobackup/ZfsAutobackup.py | ZfsAutobackup.py |
import subprocess
import os
import select
try:
from shlex import quote as cmd_quote
except ImportError:
from pipes import quote as cmd_quote
class CmdItem:
"""one command item, to be added to a CmdPipe"""
def __init__(self, cmd, readonly=False, stderr_handler=None, exit_handler=None, shell=False):
"""create item. caller has to make sure cmd is properly escaped when using shell.
:type cmd: list of str
"""
self.cmd = cmd
self.readonly = readonly
self.stderr_handler = stderr_handler
self.exit_handler = exit_handler
self.shell = shell
self.process = None
def __str__(self):
"""return copy-pastable version of command."""
if self.shell:
# its already copy pastable for a shell:
return " ".join(self.cmd)
else:
# make it copy-pastable, will make a mess of quotes sometimes, but is correct
return " ".join(map(cmd_quote, self.cmd))
def create(self, stdin):
"""actually create the subprocess (called by CmdPipe)"""
# make sure the command gets all the data in utf8 format:
# (this is necessary if LC_ALL=en_US.utf8 is not set in the environment)
encoded_cmd = []
for arg in self.cmd:
encoded_cmd.append(arg.encode('utf-8'))
self.process = subprocess.Popen(encoded_cmd, env=os.environ, stdout=subprocess.PIPE, stdin=stdin,
stderr=subprocess.PIPE, shell=self.shell)
class CmdPipe:
"""a pipe of one or more commands. also takes care of utf-8 encoding/decoding and line based parsing"""
def __init__(self, readonly=False, inp=None):
"""
:param inp: input string for stdin
:param readonly: Only execute if entire pipe consist of readonly commands
"""
# list of commands + error handlers to execute
self.items = []
self.inp = inp
self.readonly = readonly
self._should_execute = True
def add(self, cmd_item):
"""adds a CmdItem to pipe.
:type cmd_item: CmdItem
"""
self.items.append(cmd_item)
if not cmd_item.readonly and self.readonly:
self._should_execute = False
def __str__(self):
"""transform whole pipe into oneliner for debugging and testing. this should generate a copy-pastable string for in a console """
ret = ""
for item in self.items:
if ret:
ret = ret + " | "
ret = ret + "({})".format(item) # this will do proper escaping to make it copypastable
return ret
def should_execute(self):
return self._should_execute
def execute(self, stdout_handler):
"""run the pipe. returns True all exit handlers returned true"""
if not self._should_execute:
return True
# first process should have actual user input as stdin:
selectors = []
# create processes
last_stdout = None
stdin = subprocess.PIPE
for item in self.items:
item.create(stdin)
selectors.append(item.process.stderr)
if last_stdout is None:
# we're the first process in the pipe, do we have some input?
if self.inp is not None:
# TODO: make streaming to support big inputs?
item.process.stdin.write(self.inp.encode('utf-8'))
item.process.stdin.close()
else:
                # last stdout was already piped to this stdin, so close it because we don't need it anymore
last_stdout.close()
last_stdout = item.process.stdout
stdin = last_stdout
# monitor last stdout as well
selectors.append(last_stdout)
while True:
# wait for output on one of the stderrs or last_stdout
(read_ready, write_ready, ex_ready) = select.select(selectors, [], [])
eof_count = 0
done_count = 0
# read line and call appropriate handlers
if last_stdout in read_ready:
line = last_stdout.readline().decode('utf-8').rstrip()
if line != "":
stdout_handler(line)
else:
eof_count = eof_count + 1
for item in self.items:
if item.process.stderr in read_ready:
line = item.process.stderr.readline().decode('utf-8').rstrip()
if line != "":
item.stderr_handler(line)
else:
eof_count = eof_count + 1
if item.process.poll() is not None:
done_count = done_count + 1
# all filehandles are eof and all processes are done (poll() is not None)
if eof_count == len(selectors) and done_count == len(self.items):
break
# close filehandles
last_stdout.close()
for item in self.items:
item.process.stderr.close()
# call exit handlers
success = True
for item in self.items:
if item.exit_handler is not None:
success=item.exit_handler(item.process.returncode) and success
return success | zfs-autobackup | /zfs_autobackup-3.2a1.tar.gz/zfs_autobackup-3.2a1/zfs_autobackup/CmdPipe.py | CmdPipe.py |
import os
import time
from .ExecuteNode import ExecuteNode
from .ZfsAuto import ZfsAuto
from .ZfsDataset import ZfsDataset
from .ZfsNode import ZfsNode
import sys
import platform
def tmp_name(suffix=""):
"""create temporary name unique to this process and node"""
#we could use uuids but those are ugly and confusing
name="zfstmp_{}_{}".format(platform.node(), os.getpid())
name=name+suffix
return name
def hash_tree_tar(node, path):
"""calculate md5sum of a directory tree, using tar"""
node.debug("Hashing filesystem {} ".format(path))
cmd=[ "tar", "-cf", "-", "-C", path, ".",
ExecuteNode.PIPE, "md5sum"]
stdout = node.run(cmd)
if node.readonly:
hashed=None
else:
hashed = stdout[0].split(" ")[0]
node.debug("Hash of {} filesytem is {}".format(path, hashed))
return hashed
def compare_trees_tar(source_node, source_path, target_node, target_path):
"""compare two trees using tar. compatible and simple"""
source_hash= hash_tree_tar(source_node, source_path)
target_hash= hash_tree_tar(target_node, target_path)
if source_hash != target_hash:
raise Exception("md5hash difference: {} != {}".format(source_hash, target_hash))
def compare_trees_rsync(source_node, source_path, target_node, target_path):
"""use rsync to compare two trees.
Advantage is that we can see which individual files differ.
    But requires rsync and can't do remote to remote."""
cmd = ["rsync", "-rcn", "--info=COPY,DEL,MISC,NAME,SYMSAFE", "--msgs2stderr", "--delete" ]
#local
if source_node.ssh_to is None and target_node.ssh_to is None:
cmd.append("{}/".format(source_path))
cmd.append("{}/".format(target_path))
source_node.debug("Running rsync locally, on source.")
stdout, stderr = source_node.run(cmd, return_stderr=True)
#source is local
elif source_node.ssh_to is None and target_node.ssh_to is not None:
cmd.append("{}/".format(source_path))
cmd.append("{}:{}/".format(target_node.ssh_to, target_path))
source_node.debug("Running rsync locally, on source.")
stdout, stderr = source_node.run(cmd, return_stderr=True)
#target is local
elif source_node.ssh_to is not None and target_node.ssh_to is None:
cmd.append("{}:{}/".format(source_node.ssh_to, source_path))
cmd.append("{}/".format(target_path))
source_node.debug("Running rsync locally, on target.")
stdout, stderr=target_node.run(cmd, return_stderr=True)
else:
raise Exception("Source and target cant both be remote when verifying. (rsync limitation)")
if stderr:
raise Exception("Dataset verify failed, see above list for differences")
def verify_filesystem(source_snapshot, source_mnt, target_snapshot, target_mnt, method):
"""Compare the contents of two zfs filesystem snapshots """
try:
# mount the snapshots
source_snapshot.mount(source_mnt)
target_snapshot.mount(target_mnt)
if method=='rsync':
compare_trees_rsync(source_snapshot.zfs_node, source_mnt, target_snapshot.zfs_node, target_mnt)
elif method == 'tar':
compare_trees_tar(source_snapshot.zfs_node, source_mnt, target_snapshot.zfs_node, target_mnt)
else:
raise(Exception("program errror, unknown method"))
finally:
source_snapshot.unmount()
target_snapshot.unmount()
def hash_dev(node, dev):
"""calculate md5sum of a device on a node"""
node.debug("Hashing volume {} ".format(dev))
cmd = [ "md5sum", dev ]
stdout = node.run(cmd)
if node.readonly:
hashed=None
else:
hashed = stdout[0].split(" ")[0]
node.debug("Hash of volume {} is {}".format(dev, hashed))
return hashed
# def activate_volume_snapshot(dataset, snapshot):
# """enables snapdev, waits and tries to findout /dev path to the volume, in a compatible way. (linux/freebsd/smartos)"""
#
# dataset.set("snapdev", "visible")
#
# #NOTE: add smartos location to this list as well
# locations=[
# "/dev/zvol/" + snapshot.name
# ]
#
# dataset.debug("Waiting for /dev entry to appear...")
# time.sleep(0.1)
#
# start_time=time.time()
# while time.time()-start_time<10:
# for location in locations:
# stdout, stderr, exit_code=dataset.zfs_node.run(["test", "-e", location], return_all=True, valid_exitcodes=[0,1])
#
# #fake it in testmode
# if dataset.zfs_node.readonly:
# return location
#
# if exit_code==0:
# return location
# time.sleep(1)
#
# raise(Exception("Timeout while waiting for {} entry to appear.".format(locations)))
#
# def deacitvate_volume_snapshot(dataset):
# dataset.inherit("snapdev")
#NOTE: https://www.google.com/search?q=Mount+Path+Limit+freebsd
#FreeBSD has limitations regarding path length, so we can't use the above method.
#Instead we create a temporary clone
def get_tmp_clone_name(snapshot):
pool=snapshot.zfs_node.get_pool(snapshot)
return pool.name+"/"+tmp_name()
def activate_volume_snapshot(snapshot):
"""clone volume, waits and tries to findout /dev path to the volume, in a compatible way. (linux/freebsd/smartos)"""
clone_name=get_tmp_clone_name(snapshot)
clone=snapshot.clone(clone_name)
#NOTE: add smartos location to this list as well
locations=[
"/dev/zvol/" + clone_name
]
clone.debug("Waiting for /dev entry to appear...")
time.sleep(0.1)
start_time=time.time()
while time.time()-start_time<10:
for location in locations:
stdout, stderr, exit_code=clone.zfs_node.run(["test", "-e", location], return_all=True, valid_exitcodes=[0,1])
#fake it in testmode
if clone.zfs_node.readonly:
return location
if exit_code==0:
return location
time.sleep(1)
raise(Exception("Timeout while waiting for {} entry to appear.".format(locations)))
def deacitvate_volume_snapshot(snapshot):
clone_name=get_tmp_clone_name(snapshot)
clone=snapshot.zfs_node.get_dataset(clone_name)
clone.destroy()
def verify_volume(source_dataset, source_snapshot, target_dataset, target_snapshot):
"""compare the contents of two zfs volume snapshots"""
try:
source_dev= activate_volume_snapshot(source_snapshot)
target_dev= activate_volume_snapshot(target_snapshot)
source_hash= hash_dev(source_snapshot.zfs_node, source_dev)
target_hash= hash_dev(target_snapshot.zfs_node, target_dev)
if source_hash!=target_hash:
raise Exception("md5hash difference: {} != {}".format(source_hash, target_hash))
finally:
deacitvate_volume_snapshot(source_snapshot)
deacitvate_volume_snapshot(target_snapshot)
def create_mountpoints(source_node, target_node):
# prepare mount points
source_node.debug("Create temporary mount point")
source_mnt = "/tmp/"+tmp_name("source")
source_node.run(["mkdir", source_mnt])
target_node.debug("Create temporary mount point")
target_mnt = "/tmp/"+tmp_name("target")
target_node.run(["mkdir", target_mnt])
return source_mnt, target_mnt
def cleanup_mountpoint(node, mnt):
node.debug("Cleaning up temporary mount point")
node.run([ "rmdir", mnt ], hide_errors=True, valid_exitcodes=[] )
class ZfsAutoverify(ZfsAuto):
"""The zfs-autoverify class, default agruments and stuff come from ZfsAuto"""
def __init__(self, argv, print_arguments=True):
# NOTE: common options and parameters are in ZfsAuto
super(ZfsAutoverify, self).__init__(argv, print_arguments)
def parse_args(self, argv):
"""do extra checks on common args"""
args=super(ZfsAutoverify, self).parse_args(argv)
        if args.target_path is None:
self.log.error("Please specify TARGET-PATH")
sys.exit(255)
return args
def get_parser(self):
"""extend common parser with extra stuff needed for zfs-autobackup"""
parser=super(ZfsAutoverify, self).get_parser()
group=parser.add_argument_group("Verify options")
group.add_argument('--fs-compare', metavar='METHOD', default="tar", choices=["tar", "rsync"],
help='Compare method to use for filesystems. (tar, rsync) Default: %(default)s ')
return parser
def verify_datasets(self, source_mnt, source_datasets, target_node, target_mnt):
fail_count=0
count = 0
for source_dataset in source_datasets:
# stats
if self.args.progress:
count = count + 1
self.progress("Analysing dataset {}/{} ({} failed)".format(count, len(source_datasets), fail_count))
try:
# determine corresponding target_dataset
target_name = self.make_target_name(source_dataset)
target_dataset = target_node.get_dataset(target_name)
# find common snapshots to verify
source_snapshot = source_dataset.find_common_snapshot(target_dataset)
target_snapshot = target_dataset.find_snapshot(source_snapshot)
if source_snapshot is None or target_snapshot is None:
raise(Exception("Cant find common snapshot"))
target_snapshot.verbose("Verifying...")
if source_dataset.properties['type']=="filesystem":
verify_filesystem(source_snapshot, source_mnt, target_snapshot, target_mnt, self.args.fs_compare)
elif source_dataset.properties['type']=="volume":
verify_volume(source_dataset, source_snapshot, target_dataset, target_snapshot)
else:
raise(Exception("{} has unknown type {}".format(source_dataset, source_dataset.properties['type'])))
except Exception as e:
if self.args.progress:
self.clear_progress()
fail_count = fail_count + 1
target_dataset.error("FAILED: " + str(e))
if self.args.debug:
self.verbose("Debug mode, aborting on first error")
raise
if self.args.progress:
self.clear_progress()
return fail_count
def run(self):
source_node=None
source_mnt=None
target_node=None
target_mnt=None
try:
################ create source zfsNode
self.set_title("Source settings")
description = "[Source]"
source_node = ZfsNode(snapshot_time_format=self.snapshot_time_format, hold_name=self.hold_name, logger=self,
ssh_config=self.args.ssh_config,
ssh_to=self.args.ssh_source, readonly=self.args.test,
debug_output=self.args.debug_output, description=description)
################# select source datasets
self.set_title("Selecting")
source_datasets = source_node.selected_datasets(property_name=self.property_name,
exclude_received=self.args.exclude_received,
exclude_paths=self.exclude_paths,
exclude_unchanged=self.args.exclude_unchanged,
min_change=0)
if not source_datasets:
self.print_error_sources()
return 255
# create target_node
self.set_title("Target settings")
target_node = ZfsNode(snapshot_time_format=self.snapshot_time_format, hold_name=self.hold_name,
logger=self, ssh_config=self.args.ssh_config,
ssh_to=self.args.ssh_target,
readonly=self.args.test, debug_output=self.args.debug_output,
description="[Target]")
target_node.verbose("Verify datasets under: {}".format(self.args.target_path))
self.set_title("Verifying")
source_mnt, target_mnt= create_mountpoints(source_node, target_node)
fail_count = self.verify_datasets(
source_mnt=source_mnt,
source_datasets=source_datasets,
target_mnt=target_mnt,
target_node=target_node)
if not fail_count:
if self.args.test:
self.set_title("All tests successful.")
else:
self.set_title("All datasets verified ok")
else:
if fail_count != 255:
self.error("{} dataset(s) failed!".format(fail_count))
if self.args.test:
self.verbose("")
self.warning("TEST MODE - DID NOT VERIFY ANYTHING!")
return fail_count
except Exception as e:
self.error("Exception: " + str(e))
if self.args.debug:
raise
return 255
except KeyboardInterrupt:
self.error("Aborted")
return 255
finally:
# cleanup
if source_mnt is not None:
cleanup_mountpoint(source_node, source_mnt)
if target_mnt is not None:
cleanup_mountpoint(target_node, target_mnt)
def cli():
import sys
sys.exit(ZfsAutoverify(sys.argv[1:], False).run())
if __name__ == "__main__":
cli() | zfs-autobackup | /zfs_autobackup-3.2a1.tar.gz/zfs_autobackup-3.2a1/zfs_autobackup/ZfsAutoverify.py | ZfsAutoverify.py |
import re
import time
from .CachedProperty import CachedProperty
from .ExecuteNode import ExecuteError
class ZfsDataset:
"""a zfs dataset (filesystem/volume/snapshot/clone) Note that a dataset
doesn't have to actually exist (yet/anymore) Also most properties are cached
for performance-reasons, but also to allow --test to function correctly.
"""
# illegal properties per dataset type. these will be removed from --set-properties and --filter-properties
ILLEGAL_PROPERTIES = {
'filesystem': [],
'volume': ["canmount"],
}
def __init__(self, zfs_node, name, force_exists=None):
"""
Args:
:type zfs_node: ZfsNode.ZfsNode
:type name: str
:type force_exists: bool
"""
self.zfs_node = zfs_node
self.name = name # full name
self._virtual_snapshots = []
self.invalidate()
self.force_exists = force_exists
def __repr__(self):
return "{}: {}".format(self.zfs_node, self.name)
def __str__(self):
return self.name
def __eq__(self, obj):
if not isinstance(obj, ZfsDataset):
return False
return self.name == obj.name
def verbose(self, txt):
"""
Args:
:type txt: str
"""
self.zfs_node.verbose("{}: {}".format(self.name, txt))
def error(self, txt):
"""
Args:
:type txt: str
"""
self.zfs_node.error("{}: {}".format(self.name, txt))
def debug(self, txt):
"""
Args:
:type txt: str
"""
self.zfs_node.debug("{}: {}".format(self.name, txt))
def invalidate(self):
"""clear caches"""
CachedProperty.clear(self)
self.force_exists = None
self._virtual_snapshots = []
def split_path(self):
"""return the path elements as an array"""
return self.name.split("/")
def lstrip_path(self, count):
"""return name with first count components stripped
Args:
:type count: int
"""
return "/".join(self.split_path()[count:])
def rstrip_path(self, count):
"""return name with last count components stripped
Args:
:type count: int
"""
return "/".join(self.split_path()[:-count])
@property
def filesystem_name(self):
"""filesystem part of the name (before the @)"""
if self.is_snapshot:
(filesystem, snapshot) = self.name.split("@")
return filesystem
else:
return self.name
@property
def snapshot_name(self):
"""snapshot part of the name"""
if not self.is_snapshot:
raise (Exception("This is not a snapshot"))
(filesystem, snapshot_name) = self.name.split("@")
return snapshot_name
@property
def is_snapshot(self):
"""true if this dataset is a snapshot"""
return self.name.find("@") != -1
def is_selected(self, value, source, inherited, exclude_received, exclude_paths, exclude_unchanged, min_change):
"""determine if dataset should be selected for backup (called from
ZfsNode)
Args:
:type exclude_paths: list of str
:type value: str
:type source: str
:type inherited: bool
:type exclude_received: bool
:type exclude_unchanged: bool
            :type min_change: int
:param value: Value of the zfs property ("false"/"true"/"child"/"-")
:param source: Source of the zfs property ("local"/"received", "-")
:param inherited: True of the value/source was inherited from a higher dataset.
"""
# sanity checks
if source not in ["local", "received", "-"]:
# probably a program error in zfs-autobackup or new feature in zfs
raise (Exception(
"{} autobackup-property has illegal source: '{}' (possible BUG)".format(self.name, source)))
if value not in ["false", "true", "child", "-"]:
# user error
raise (Exception(
"{} autobackup-property has illegal value: '{}'".format(self.name, value)))
# non specified, ignore
if value == "-":
return False
        # only select children of this dataset, ignore
if value == "child" and not inherited:
return False
# manually excluded by property
if value == "false":
self.verbose("Excluded")
return False
# from here on the dataset is selected by property, now do additional exclusion checks
# our path starts with one of the excluded paths?
for exclude_path in exclude_paths:
# if self.name.startswith(exclude_path):
if (self.name + "/").startswith(exclude_path + "/"):
# too noisy for verbose
self.debug("Excluded (path in exclude list)")
return False
if source == "received":
if exclude_received:
self.verbose("Excluded (dataset already received)")
return False
if exclude_unchanged and not self.is_changed(min_change):
self.verbose("Excluded (unchanged since last snapshot)")
return False
self.verbose("Selected")
return True
@CachedProperty
def parent(self):
"""get zfs-parent of this dataset. for snapshots this means it will get
the filesystem/volume that it belongs to. otherwise it will return the
parent according to path
we cache this so everything in the parent that is cached also stays.
"""
if self.is_snapshot:
return self.zfs_node.get_dataset(self.filesystem_name)
else:
return self.zfs_node.get_dataset(self.rstrip_path(1))
# NOTE: unused for now
# def find_prev_snapshot(self, snapshot, also_other_snapshots=False):
# """find previous snapshot in this dataset. None if it doesn't exist.
#
# also_other_snapshots: set to true to also return snapshots that where
# not created by us. (is_ours)
#
# Args:
# :type snapshot: str or ZfsDataset.ZfsDataset
# :type also_other_snapshots: bool
# """
#
# if self.is_snapshot:
# raise (Exception("Please call this on a dataset."))
#
# index = self.find_snapshot_index(snapshot)
# while index:
# index = index - 1
# if also_other_snapshots or self.snapshots[index].is_ours():
# return self.snapshots[index]
# return None
def find_next_snapshot(self, snapshot, also_other_snapshots=False):
"""find next snapshot in this dataset. None if it doesn't exist
Args:
:type snapshot: ZfsDataset
:type also_other_snapshots: bool
"""
if self.is_snapshot:
raise (Exception("Please call this on a dataset."))
index = self.find_snapshot_index(snapshot)
while index is not None and index < len(self.snapshots) - 1:
index = index + 1
if also_other_snapshots or self.snapshots[index].is_ours():
return self.snapshots[index]
return None
@CachedProperty
def exists(self):
"""check if dataset exists. Use force to force a specific value to be
cached, if you already know. Useful for performance reasons
"""
if self.force_exists is not None:
self.debug("Checking if filesystem exists: was forced to {}".format(self.force_exists))
return self.force_exists
else:
self.debug("Checking if filesystem exists")
return (self.zfs_node.run(tab_split=True, cmd=["zfs", "list", self.name], readonly=True, valid_exitcodes=[0, 1],
hide_errors=True) and True)
def create_filesystem(self, parents=False):
"""create a filesystem
Args:
:type parents: bool
"""
if parents:
self.verbose("Creating filesystem and parents")
self.zfs_node.run(["zfs", "create", "-p", self.name])
else:
self.verbose("Creating filesystem")
self.zfs_node.run(["zfs", "create", self.name])
self.force_exists = True
def destroy(self, fail_exception=False):
"""destroy the dataset. by default failures are not an exception, so we
can continue making backups
Args:
:type fail_exception: bool
"""
self.verbose("Destroying")
if self.is_snapshot:
self.release()
try:
self.zfs_node.run(["zfs", "destroy", self.name])
self.invalidate()
self.force_exists = False
return True
except ExecuteError:
if not fail_exception:
return False
else:
raise
@CachedProperty
def properties(self):
"""all zfs properties"""
cmd = [
"zfs", "get", "-H", "-o", "property,value", "-p", "all", self.name
]
if not self.exists:
return {}
self.debug("Getting zfs properties")
ret = {}
for pair in self.zfs_node.run(tab_split=True, cmd=cmd, readonly=True, valid_exitcodes=[0]):
if len(pair) == 2:
ret[pair[0]] = pair[1]
return ret
def is_changed(self, min_changed_bytes=1):
"""dataset is changed since ANY latest snapshot ?
Args:
:type min_changed_bytes: int
"""
self.debug("Checking if dataset is changed")
if min_changed_bytes == 0:
return True
if int(self.properties['written']) < min_changed_bytes:
return False
else:
return True
def is_ours(self):
"""return true if this snapshot name has format"""
try:
test = self.timestamp
        except ValueError:
return False
return True
@property
def holds(self):
"""get list of holds for dataset"""
output = self.zfs_node.run(["zfs", "holds", "-H", self.name], valid_exitcodes=[0], tab_split=True,
readonly=True)
return map(lambda fields: fields[1], output)
def is_hold(self):
"""did we hold this snapshot?"""
return self.zfs_node.hold_name in self.holds
def hold(self):
"""hold dataset"""
self.debug("holding")
self.zfs_node.run(["zfs", "hold", self.zfs_node.hold_name, self.name], valid_exitcodes=[0, 1])
def release(self):
"""release dataset"""
if self.zfs_node.readonly or self.is_hold():
self.debug("releasing")
self.zfs_node.run(["zfs", "release", self.zfs_node.hold_name, self.name], valid_exitcodes=[0, 1])
@property
def timestamp(self):
"""get timestamp from snapshot name. Only works for our own snapshots
with the correct format.
"""
time_secs = time.mktime(time.strptime(self.snapshot_name, self.zfs_node.snapshot_time_format))
return time_secs
def from_names(self, names):
"""convert a list of names to a list ZfsDatasets for this zfs_node
Args:
:type names: list of str
"""
ret = []
for name in names:
ret.append(self.zfs_node.get_dataset(name))
return ret
# def add_virtual_snapshot(self, snapshot):
# """pretend a snapshot exists (usefull in test mode)"""
#
# # NOTE: we could just call self.snapshots.append() but this would trigger a zfs list which is not always needed.
# if CachedProperty.is_cached(self, 'snapshots'):
# # already cached so add it
# print ("ADDED")
# self.snapshots.append(snapshot)
# else:
# # self.snapshots will add it when requested
# print ("ADDED VIRT")
# self._virtual_snapshots.append(snapshot)
@CachedProperty
def snapshots(self):
"""get all snapshots of this dataset"""
if not self.exists:
return []
self.debug("Getting snapshots")
cmd = [
"zfs", "list", "-d", "1", "-r", "-t", "snapshot", "-H", "-o", "name", self.name
]
return self.from_names(self.zfs_node.run(cmd=cmd, readonly=True))
@property
def our_snapshots(self):
"""get list of snapshots creates by us of this dataset"""
ret = []
for snapshot in self.snapshots:
if snapshot.is_ours():
ret.append(snapshot)
return ret
def find_snapshot(self, snapshot):
"""find snapshot by snapshot (can be a snapshot_name or a different
ZfsDataset )
Args:
:rtype: ZfsDataset
:type snapshot: str or ZfsDataset
"""
if not isinstance(snapshot, ZfsDataset):
snapshot_name = snapshot
else:
snapshot_name = snapshot.snapshot_name
for snapshot in self.snapshots:
if snapshot.snapshot_name == snapshot_name:
return snapshot
return None
def find_snapshot_index(self, snapshot):
"""find snapshot index by snapshot (can be a snapshot_name or
ZfsDataset)
Args:
:type snapshot: str or ZfsDataset
"""
if not isinstance(snapshot, ZfsDataset):
snapshot_name = snapshot
else:
snapshot_name = snapshot.snapshot_name
index = 0
for snapshot in self.snapshots:
if snapshot.snapshot_name == snapshot_name:
return index
index = index + 1
return None
@CachedProperty
def written_since_ours(self):
"""get number of bytes written since our last snapshot"""
latest_snapshot = self.our_snapshots[-1]
self.debug("Getting bytes written since our last snapshot")
cmd = ["zfs", "get", "-H", "-ovalue", "-p", "written@" + str(latest_snapshot), self.name]
output = self.zfs_node.run(readonly=True, tab_split=False, cmd=cmd, valid_exitcodes=[0])
return int(output[0])
def is_changed_ours(self, min_changed_bytes=1):
"""dataset is changed since OUR latest snapshot?
Args:
:type min_changed_bytes: int
"""
if min_changed_bytes == 0:
return True
if not self.our_snapshots:
return True
# NOTE: filesystems can have a very small amount written without actual changes in some cases
if self.written_since_ours < min_changed_bytes:
return False
return True
@CachedProperty
def recursive_datasets(self, types="filesystem,volume"):
"""get all (non-snapshot) datasets recursively under us
Args:
:type types: str
"""
self.debug("Getting all recursive datasets under us")
names = self.zfs_node.run(tab_split=False, readonly=True, valid_exitcodes=[0], cmd=[
"zfs", "list", "-r", "-t", types, "-o", "name", "-H", self.name
])
return self.from_names(names[1:])
@CachedProperty
def datasets(self, types="filesystem,volume"):
"""get all (non-snapshot) datasets directly under us
Args:
:type types: str
"""
self.debug("Getting all datasets under us")
names = self.zfs_node.run(tab_split=False, readonly=True, valid_exitcodes=[0], cmd=[
"zfs", "list", "-r", "-t", types, "-o", "name", "-H", "-d", "1", self.name
])
return self.from_names(names[1:])
def send_pipe(self, features, prev_snapshot, resume_token, show_progress, raw, send_properties, write_embedded, send_pipes, zfs_compressed):
"""returns a pipe with zfs send output for this snapshot
resume_token: resume sending from this token. (in that case we don't
need to know snapshot names)
Args:
:param send_pipes: output cmd array that will be added to actual zfs send command. (e.g. mbuffer or compression program)
:type send_pipes: list of str
:type features: list of str
:type prev_snapshot: ZfsDataset
:type resume_token: str
:type show_progress: bool
:type raw: bool
"""
# build source command
cmd = []
cmd.extend(["zfs", "send", ])
# all kind of performance options:
if 'large_blocks' in features and "-L" in self.zfs_node.supported_send_options:
cmd.append("--large-block") # large block support (only if recordsize>128k which is seldomly used)
if write_embedded and 'embedded_data' in features and "-e" in self.zfs_node.supported_send_options:
cmd.append("--embed") # WRITE_EMBEDDED, more compact stream
if zfs_compressed and "-c" in self.zfs_node.supported_send_options:
cmd.append("--compressed") # use compressed WRITE records
# raw? (send over encrypted data in its original encrypted form without decrypting)
if raw:
cmd.append("--raw")
# progress output
if show_progress:
cmd.append("--verbose")
cmd.append("--parsable")
# resume a previous send? (don't need more parameters in that case)
if resume_token:
cmd.extend(["-t", resume_token])
else:
# send properties
if send_properties:
cmd.append("--props")
# incremental?
if prev_snapshot:
cmd.extend(["-i", "@" + prev_snapshot.snapshot_name])
cmd.append(self.name)
cmd.extend(send_pipes)
output_pipe = self.zfs_node.run(cmd, pipe=True, readonly=True)
return output_pipe
def recv_pipe(self, pipe, features, recv_pipes, filter_properties=None, set_properties=None, ignore_exit_code=False):
"""starts a zfs recv for this snapshot and uses pipe as input
        note: you can call this on either a snapshot or a filesystem object. The
resulting zfs command is the same, only our object cache is invalidated
differently.
Args:
:param recv_pipes: input cmd array that will be prepended to actual zfs recv command. (e.g. mbuffer or decompression program)
:type pipe: subprocess.pOpen
:type features: list of str
:type filter_properties: list of str
:type set_properties: list of str
:type ignore_exit_code: bool
"""
if set_properties is None:
set_properties = []
if filter_properties is None:
filter_properties = []
# build target command
cmd = []
cmd.extend(recv_pipes)
cmd.extend(["zfs", "recv"])
# don't mount filesystem that is received
cmd.append("-u")
for property_ in filter_properties:
cmd.extend(["-x", property_])
for property_ in set_properties:
cmd.extend(["-o", property_])
# verbose output
cmd.append("-v")
if 'extensible_dataset' in features and "-s" in self.zfs_node.supported_recv_options:
# support resuming
self.debug("Enabled resume support")
cmd.append("-s")
cmd.append(self.filesystem_name)
if ignore_exit_code:
valid_exitcodes = []
else:
valid_exitcodes = [0]
self.zfs_node.reset_progress()
self.zfs_node.run(cmd, inp=pipe, valid_exitcodes=valid_exitcodes)
# invalidate cache, but we at least know we exist now
self.invalidate()
# in test mode we assume everything was ok and it exists
if self.zfs_node.readonly:
self.force_exists = True
# check if transfer was really ok (exit codes have been wrong before due to bugs in zfs-utils and some
        # errors should be ignored, that's what ignore_exit_code is for.)
if not self.exists:
self.error("error during transfer")
raise (Exception("Target doesn't exist after transfer, something went wrong."))
def transfer_snapshot(self, target_snapshot, features, prev_snapshot, show_progress,
filter_properties, set_properties, ignore_recv_exit_code, resume_token,
raw, send_properties, write_embedded, send_pipes, recv_pipes, zfs_compressed):
"""transfer this snapshot to target_snapshot. specify prev_snapshot for
incremental transfer
connects a send_pipe() to recv_pipe()
Args:
:type send_pipes: list of str
:type recv_pipes: list of str
:type target_snapshot: ZfsDataset
:type features: list of str
:type prev_snapshot: ZfsDataset
:type show_progress: bool
:type filter_properties: list of str
:type set_properties: list of str
:type ignore_recv_exit_code: bool
:type resume_token: str
:type raw: bool
"""
if set_properties is None:
set_properties = []
if filter_properties is None:
filter_properties = []
self.debug("Transfer snapshot to {}".format(target_snapshot.filesystem_name))
if resume_token:
target_snapshot.verbose("resuming")
# initial or increment
if not prev_snapshot:
target_snapshot.verbose("receiving full".format(self.snapshot_name))
else:
# incremental
target_snapshot.verbose("receiving incremental".format(self.snapshot_name))
# do it
pipe = self.send_pipe(features=features, show_progress=show_progress, prev_snapshot=prev_snapshot,
resume_token=resume_token, raw=raw, send_properties=send_properties, write_embedded=write_embedded, send_pipes=send_pipes, zfs_compressed=zfs_compressed)
target_snapshot.recv_pipe(pipe, features=features, filter_properties=filter_properties,
set_properties=set_properties, ignore_exit_code=ignore_recv_exit_code, recv_pipes=recv_pipes)
def abort_resume(self):
"""abort current resume state"""
self.debug("Aborting resume")
self.zfs_node.run(["zfs", "recv", "-A", self.name])
def rollback(self):
"""rollback to latest existing snapshot on this dataset"""
for snapshot in reversed(self.snapshots):
if snapshot.exists:
self.debug("Rolling back")
self.zfs_node.run(["zfs", "rollback", snapshot.name])
return
def get_resume_snapshot(self, resume_token):
"""returns snapshot that will be resumed by this resume token (run this
on source with target-token)
Args:
:type resume_token: str
"""
# use zfs send -n option to determine this
# NOTE: on smartos stderr, on linux stdout
(stdout, stderr) = self.zfs_node.run(["zfs", "send", "-t", resume_token, "-n", "-v"], valid_exitcodes=[0, 255],
readonly=True, return_stderr=True)
if stdout:
lines = stdout
else:
lines = stderr
for line in lines:
matches = re.findall("toname = .*@(.*)", line)
if matches:
snapshot_name = matches[0]
snapshot = self.zfs_node.get_dataset(self.filesystem_name + "@" + snapshot_name)
snapshot.debug("resume token belongs to this snapshot")
return snapshot
return None
def thin_list(self, keeps=None, ignores=None):
"""determines list of snapshots that should be kept or deleted based on
the thinning schedule. cull the herd!
returns: ( keeps, obsoletes )
Args:
:param keeps: list of snapshots to always keep (usually the last)
:param ignores: snapshots to completely ignore (usually incompatible target snapshots that are going to be destroyed anyway)
:type keeps: list of ZfsDataset
:type ignores: list of ZfsDataset
"""
if ignores is None:
ignores = []
if keeps is None:
keeps = []
snapshots = [snapshot for snapshot in self.our_snapshots if snapshot not in ignores]
return self.zfs_node.thin(snapshots, keep_objects=keeps)
def thin(self, skip_holds=False):
"""destroys snapshots according to thin_list, except last snapshot
Args:
:type skip_holds: bool
"""
(keeps, obsoletes) = self.thin_list(keeps=self.our_snapshots[-1:])
for obsolete in obsoletes:
if skip_holds and obsolete.is_hold():
obsolete.verbose("Keeping (common snapshot)")
else:
obsolete.destroy()
self.snapshots.remove(obsolete)
def find_common_snapshot(self, target_dataset):
"""find latest common snapshot between us and target returns None if its
an initial transfer
Args:
:type target_dataset: ZfsDataset
"""
if not target_dataset.snapshots:
# target has nothing yet
return None
else:
for source_snapshot in reversed(self.snapshots):
if target_dataset.find_snapshot(source_snapshot):
source_snapshot.debug("common snapshot")
return source_snapshot
target_dataset.error("Cant find common snapshot with source.")
raise (Exception("You probably need to delete the target dataset to fix this."))
def find_start_snapshot(self, common_snapshot, also_other_snapshots):
"""finds first snapshot to send :rtype: ZfsDataset or None if we cant
find it.
Args:
:type common_snapshot: ZfsDataset
:type also_other_snapshots: bool
"""
if not common_snapshot:
if not self.snapshots:
start_snapshot = None
else:
# no common snapshot, start from beginning
start_snapshot = self.snapshots[0]
if not start_snapshot.is_ours() and not also_other_snapshots:
                    # try to start at a snapshot that's ours
start_snapshot = self.find_next_snapshot(start_snapshot, also_other_snapshots)
else:
# normal situation: start_snapshot is the one after the common snapshot
start_snapshot = self.find_next_snapshot(common_snapshot, also_other_snapshots)
return start_snapshot
def find_incompatible_snapshots(self, common_snapshot):
"""returns a list of snapshots that is incompatible for a zfs recv onto
the common_snapshot. all direct followup snapshots with written=0 are
compatible.
Args:
:type common_snapshot: ZfsDataset
"""
ret = []
if common_snapshot and self.snapshots:
followup = True
for snapshot in self.snapshots[self.find_snapshot_index(common_snapshot) + 1:]:
if not followup or int(snapshot.properties['written']) != 0:
followup = False
ret.append(snapshot)
return ret
def get_allowed_properties(self, filter_properties, set_properties):
"""only returns lists of allowed properties for this dataset type
Args:
:type filter_properties: list of str
:type set_properties: list of str
"""
allowed_filter_properties = []
allowed_set_properties = []
illegal_properties = self.ILLEGAL_PROPERTIES[self.properties['type']]
for set_property in set_properties:
(property_, value) = set_property.split("=")
if property_ not in illegal_properties:
allowed_set_properties.append(set_property)
for filter_property in filter_properties:
if filter_property not in illegal_properties:
allowed_filter_properties.append(filter_property)
return allowed_filter_properties, allowed_set_properties
def _add_virtual_snapshots(self, source_dataset, source_start_snapshot, also_other_snapshots):
"""add snapshots from source to our snapshot list. (just the in memory
list, no disk operations)
Args:
:type source_dataset: ZfsDataset
:type source_start_snapshot: ZfsDataset
:type also_other_snapshots: bool
"""
self.debug("Creating virtual target snapshots")
snapshot = source_start_snapshot
while snapshot:
            # create virtual target snapshot
            # NOTE: with force_exists we're telling the dataset it doesn't exist yet. (e.g. it's virtual)
virtual_snapshot = self.zfs_node.get_dataset(self.filesystem_name + "@" + snapshot.snapshot_name, force_exists=False)
self.snapshots.append(virtual_snapshot)
snapshot = source_dataset.find_next_snapshot(snapshot, also_other_snapshots)
def _pre_clean(self, common_snapshot, target_dataset, source_obsoletes, target_obsoletes, target_keeps):
"""cleanup old stuff before starting snapshot syncing
Args:
:type common_snapshot: ZfsDataset
:type target_dataset: ZfsDataset
:type source_obsoletes: list of ZfsDataset
:type target_obsoletes: list of ZfsDataset
:type target_keeps: list of ZfsDataset
"""
        # on source: destroy all obsoletes before common. (since we can't send them anyway)
# But after common, only delete snapshots that target also doesn't want
if common_snapshot:
before_common = True
else:
before_common = False
for source_snapshot in self.snapshots:
if common_snapshot and source_snapshot.snapshot_name == common_snapshot.snapshot_name:
before_common = False
# never destroy common snapshot
else:
target_snapshot = target_dataset.find_snapshot(source_snapshot)
if (source_snapshot in source_obsoletes) and (before_common or (target_snapshot not in target_keeps)):
source_snapshot.destroy()
        # on target: destroy everything that's obsolete, except common_snapshot
for target_snapshot in target_dataset.snapshots:
if (target_snapshot in target_obsoletes) \
and ( not common_snapshot or (target_snapshot.snapshot_name != common_snapshot.snapshot_name)):
if target_snapshot.exists:
target_snapshot.destroy()
def _validate_resume_token(self, target_dataset, start_snapshot):
"""validate and get (or destory) resume token
Args:
:type target_dataset: ZfsDataset
:type start_snapshot: ZfsDataset
"""
if 'receive_resume_token' in target_dataset.properties:
            if start_snapshot is None:
                target_dataset.verbose("Aborting resume, it's obsolete.")
target_dataset.abort_resume()
else:
resume_token = target_dataset.properties['receive_resume_token']
# not valid anymore
resume_snapshot = self.get_resume_snapshot(resume_token)
if not resume_snapshot or start_snapshot.snapshot_name != resume_snapshot.snapshot_name:
target_dataset.verbose("Aborting resume, its no longer valid.")
target_dataset.abort_resume()
else:
return resume_token
def _plan_sync(self, target_dataset, also_other_snapshots):
"""plan where to start syncing and what to sync and what to keep
Args:
:rtype: ( ZfsDataset, ZfsDataset, list of ZfsDataset, list of ZfsDataset, list of ZfsDataset, list of ZfsDataset )
:type target_dataset: ZfsDataset
:type also_other_snapshots: bool
"""
# determine common and start snapshot
target_dataset.debug("Determining start snapshot")
common_snapshot = self.find_common_snapshot(target_dataset)
start_snapshot = self.find_start_snapshot(common_snapshot, also_other_snapshots)
incompatible_target_snapshots = target_dataset.find_incompatible_snapshots(common_snapshot)
        # let thinner decide what's obsolete on source
source_obsoletes = []
if self.our_snapshots:
source_obsoletes = self.thin_list(keeps=[self.our_snapshots[-1]])[1]
# let thinner decide keeps/obsoletes on target, AFTER the transfer would be done (by using virtual snapshots)
target_dataset._add_virtual_snapshots(self, start_snapshot, also_other_snapshots)
target_keeps = []
target_obsoletes = []
if target_dataset.our_snapshots:
(target_keeps, target_obsoletes) = target_dataset.thin_list(keeps=[target_dataset.our_snapshots[-1]],
ignores=incompatible_target_snapshots)
return common_snapshot, start_snapshot, source_obsoletes, target_obsoletes, target_keeps, incompatible_target_snapshots
def handle_incompatible_snapshots(self, incompatible_target_snapshots, destroy_incompatible):
"""destroy incompatbile snapshots on target before sync, or inform user
what to do
Args:
:type incompatible_target_snapshots: list of ZfsDataset
:type destroy_incompatible: bool
"""
if incompatible_target_snapshots:
if not destroy_incompatible:
for snapshot in incompatible_target_snapshots:
snapshot.error("Incompatible snapshot")
raise (Exception("Please destroy incompatible snapshots or use --destroy-incompatible."))
else:
for snapshot in incompatible_target_snapshots:
snapshot.verbose("Incompatible snapshot")
snapshot.destroy()
self.snapshots.remove(snapshot)
def sync_snapshots(self, target_dataset, features, show_progress, filter_properties, set_properties,
ignore_recv_exit_code, holds, rollback, decrypt, encrypt, also_other_snapshots,
no_send, destroy_incompatible, send_pipes, recv_pipes, zfs_compressed):
"""sync this dataset's snapshots to target_dataset, while also thinning
out old snapshots along the way.
Args:
:type send_pipes: list of str
:type recv_pipes: list of str
:type target_dataset: ZfsDataset
:type features: list of str
:type show_progress: bool
:type filter_properties: list of str
:type set_properties: list of str
:type ignore_recv_exit_code: bool
:type holds: bool
:type rollback: bool
:type decrypt: bool
:type also_other_snapshots: bool
:type no_send: bool
:type destroy_incompatible: bool
"""
(common_snapshot, start_snapshot, source_obsoletes, target_obsoletes, target_keeps,
incompatible_target_snapshots) = \
self._plan_sync(target_dataset=target_dataset, also_other_snapshots=also_other_snapshots)
        # NOTE: we do this because we don't want filesystems to fill up when backups keep failing.
        # Also useful with no_send to still clean up stuff.
self._pre_clean(
common_snapshot=common_snapshot, target_dataset=target_dataset,
target_keeps=target_keeps, target_obsoletes=target_obsoletes, source_obsoletes=source_obsoletes)
# handle incompatible stuff on target
target_dataset.handle_incompatible_snapshots(incompatible_target_snapshots, destroy_incompatible)
# now actually transfer the snapshots, if we want
if no_send:
return
# check if we can resume
resume_token = self._validate_resume_token(target_dataset, start_snapshot)
# rollback target to latest?
if rollback:
target_dataset.rollback()
#defaults for these settings if there is no encryption stuff going on:
send_properties = True
raw = False
write_embedded = True
(active_filter_properties, active_set_properties) = self.get_allowed_properties(filter_properties, set_properties)
# source dataset encrypted?
if self.properties.get('encryption', 'off')!='off':
# user wants to send it over decrypted?
if decrypt:
                # when decrypting, zfs can't send properties
send_properties=False
else:
# keep data encrypted by sending it raw (including properties)
raw=True
# encrypt at target?
if encrypt and not raw:
# filter out encryption properties to let encryption on the target take place
active_filter_properties.extend(["keylocation","pbkdf2iters","keyformat", "encryption"])
write_embedded=False
# now actually transfer the snapshots
prev_source_snapshot = common_snapshot
source_snapshot = start_snapshot
while source_snapshot:
target_snapshot = target_dataset.find_snapshot(source_snapshot) # still virtual
# does target actually want it?
if target_snapshot not in target_obsoletes:
source_snapshot.transfer_snapshot(target_snapshot, features=features,
prev_snapshot=prev_source_snapshot, show_progress=show_progress,
filter_properties=active_filter_properties,
set_properties=active_set_properties,
ignore_recv_exit_code=ignore_recv_exit_code,
resume_token=resume_token, write_embedded=write_embedded, raw=raw, send_properties=send_properties, send_pipes=send_pipes, recv_pipes=recv_pipes, zfs_compressed=zfs_compressed)
resume_token = None
# hold the new common snapshots and release the previous ones
if holds:
target_snapshot.hold()
source_snapshot.hold()
if prev_source_snapshot:
if holds:
prev_source_snapshot.release()
target_dataset.find_snapshot(prev_source_snapshot).release()
                    # we may now destroy the previous source snapshot if it's obsolete
if prev_source_snapshot in source_obsoletes:
prev_source_snapshot.destroy()
# destroy the previous target snapshot if obsolete (usually this is only the common_snapshot,
# the rest was already destroyed or will not be send)
prev_target_snapshot = target_dataset.find_snapshot(prev_source_snapshot)
if prev_target_snapshot in target_obsoletes:
prev_target_snapshot.destroy()
prev_source_snapshot = source_snapshot
else:
source_snapshot.debug("skipped (target doesn't need it)")
# was it actually a resume?
if resume_token:
target_dataset.verbose("Aborting resume, we dont want that snapshot anymore.")
target_dataset.abort_resume()
resume_token = None
source_snapshot = self.find_next_snapshot(source_snapshot, also_other_snapshots)
def mount(self, mount_point):
self.debug("Mounting")
cmd = [
"mount", "-tzfs", self.name, mount_point
]
self.zfs_node.run(cmd=cmd, valid_exitcodes=[0])
def unmount(self):
self.debug("Unmounting")
cmd = [
"umount", self.name
]
self.zfs_node.run(cmd=cmd, valid_exitcodes=[0])
def clone(self, name):
"""clones this snapshot and returns ZfsDataset of the clone"""
self.debug("Cloning to {}".format(name))
cmd = [
"zfs", "clone", self.name, name
]
self.zfs_node.run(cmd=cmd, valid_exitcodes=[0])
return self.zfs_node.get_dataset(name, force_exists=True)
def set(self, prop, value):
"""set a zfs property"""
self.debug("Setting {}={}".format(prop, value))
cmd = [
"zfs", "set", "{}={}".format(prop, value), self.name
]
self.zfs_node.run(cmd=cmd, valid_exitcodes=[0])
self.invalidate()
def inherit(self, prop):
"""inherit zfs property"""
self.debug("Inheriting property {}".format(prop))
cmd = [
"zfs", "inherit", prop, self.name
]
self.zfs_node.run(cmd=cmd, valid_exitcodes=[0])
self.invalidate() | zfs-autobackup | /zfs_autobackup-3.2a1.tar.gz/zfs_autobackup-3.2a1/zfs_autobackup/ZfsDataset.py | ZfsDataset.py |
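
# Example usage (illustrative sketch; `zfs_node` is assumed to be an existing
# ZfsNode instance and the dataset name is a placeholder):
#
#   dataset = zfs_node.get_dataset("rpool/data")
#   if dataset.exists and dataset.is_changed_ours(min_changed_bytes=1):
#       for snapshot in dataset.our_snapshots:
#           print(snapshot.snapshot_name, snapshot.timestamp)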
import re
class ThinnerRule:
"""a thinning schedule rule for Thinner"""
TIME_NAMES = {
'y': 3600 * 24 * 365.25,
'm': 3600 * 24 * 30,
'w': 3600 * 24 * 7,
'd': 3600 * 24,
'h': 3600,
'min': 60,
's': 1,
}
TIME_DESC = {
'y': 'year',
'm': 'month',
'w': 'week',
'd': 'day',
'h': 'hour',
'min': 'minute',
's': 'second',
}
def __init__(self, rule_str):
"""parse scheduling string
example:
daily snapshot, remove after a week: 1d1w
weekly snapshot, remove after a month: 1w1m
monthly snapshot, remove after 6 months: 1m6m
yearly snapshot, remove after 2 year: 1y2y
keep all snapshots, remove after a day 1s1d
keep nothing: 1s1s
"""
rule_str = rule_str.lower()
matches = re.findall("([0-9]*)([a-z]*)([0-9]*)([a-z]*)", rule_str)[0]
if '' in matches:
raise (Exception("Invalid schedule string: '{}'".format(rule_str)))
period_amount = int(matches[0])
period_unit = matches[1]
ttl_amount = int(matches[2])
ttl_unit = matches[3]
if period_unit not in self.TIME_NAMES:
raise (Exception("Invalid period string in schedule: '{}'".format(rule_str)))
if ttl_unit not in self.TIME_NAMES:
raise (Exception("Invalid ttl string in schedule: '{}'".format(rule_str)))
self.period = period_amount * self.TIME_NAMES[period_unit]
self.ttl = ttl_amount * self.TIME_NAMES[ttl_unit]
if self.period > self.ttl:
raise (Exception("Period cant be longer than ttl in schedule: '{}'".format(rule_str)))
self.rule_str = rule_str
self.human_str = "Keep every {} {}{}, delete after {} {}{}.".format(
period_amount, self.TIME_DESC[period_unit], period_amount != 1 and "s" or "", ttl_amount,
self.TIME_DESC[ttl_unit], ttl_amount != 1 and "s" or "")
def __str__(self):
"""get schedule as a schedule string"""
return self.rule_str | zfs-autobackup | /zfs_autobackup-3.2a1.tar.gz/zfs_autobackup-3.2a1/zfs_autobackup/ThinnerRule.py | ThinnerRule.py |
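
# Example (illustrative sketch, not used by the library itself):
#
#   rule = ThinnerRule("1d1w")
#   rule.period     # 86400 (one day, in seconds)
#   rule.ttl        # 604800 (one week, in seconds)
#   rule.human_str  # "Keep every 1 day, delete after 1 week."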
import time
from .ThinnerRule import ThinnerRule
class Thinner:
"""progressive thinner (universal, used for cleaning up snapshots)"""
def __init__(self, schedule_str=""):
"""
Args:
            schedule_str: comma separated list of ThinnerRules. A plain number specifies how many snapshots to always keep.
"""
self.rules = []
self.always_keep = 0
if schedule_str == "":
return
rule_strs = schedule_str.split(",")
for rule_str in rule_strs:
if rule_str.lstrip('-').isdigit():
self.always_keep = int(rule_str)
if self.always_keep < 0:
raise (Exception("Number of snapshots to keep cant be negative: {}".format(self.always_keep)))
else:
self.rules.append(ThinnerRule(rule_str))
def human_rules(self):
"""get list of human readable rules"""
ret = []
if self.always_keep:
ret.append("Keep the last {} snapshot{}.".format(self.always_keep, self.always_keep != 1 and "s" or ""))
for rule in self.rules:
ret.append(rule.human_str)
return ret
def thin(self, objects, keep_objects=None, now=None):
"""thin list of objects with current schedule rules. objects: list of
objects to thin. every object should have timestamp attribute.
return( keeps, removes )
Args:
objects: list of objects to check (should have a timestamp attribute)
            keep_objects: objects to always keep (if they are also in the normal objects list)
now: if specified, use this time as current time
"""
if not keep_objects:
keep_objects = []
        # always keep a number of the last objects?
if self.always_keep:
# all of them
if len(objects) <= self.always_keep:
return objects, []
# determine which ones
always_keep_objects = objects[-self.always_keep:]
else:
always_keep_objects = []
# determine time blocks
time_blocks = {}
for rule in self.rules:
time_blocks[rule.period] = {}
if not now:
now = int(time.time())
keeps = []
removes = []
# traverse objects
for thisobject in objects:
# important they are ints!
timestamp = int(thisobject.timestamp)
age = int(now) - timestamp
# store in the correct time blocks, per period-size, if not too old yet
# e.g.: look if there is ANY timeblock that wants to keep this object
keep = False
for rule in self.rules:
if age <= rule.ttl:
block_nr = int(timestamp / rule.period)
if block_nr not in time_blocks[rule.period]:
time_blocks[rule.period][block_nr] = True
keep = True
# keep it according to schedule, or keep it because it is in the keep_objects list
if keep or thisobject in keep_objects or thisobject in always_keep_objects:
keeps.append(thisobject)
else:
removes.append(thisobject)
return keeps, removes | zfs-autobackup | /zfs_autobackup-3.2a1.tar.gz/zfs_autobackup-3.2a1/zfs_autobackup/Thinner.py | Thinner.py |
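
# Example (illustrative sketch; `snapshots` stands for any list of objects with a
# numeric `timestamp` attribute, ordered oldest to newest):
#
#   thinner = Thinner("5,1d1w,1w1m")
#   keeps, removes = thinner.thin(snapshots)
#
# Here "5" always keeps the last 5 snapshots, "1d1w" keeps one snapshot per day
# for a week and "1w1m" keeps one per week for a month; everything else ends up
# in `removes`.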
import argparse
import os.path
import sys
from .LogConsole import LogConsole
class ZfsAuto(object):
"""Common Base class, this class is always used subclassed. Look at ZfsAutobackup and ZfsAutoverify ."""
# also used by setup.py
VERSION = "3.2-alpha1"
HEADER = "{} v{} - (c)2021 E.H.Eefting ([email protected])".format(os.path.basename(sys.argv[0]), VERSION)
def __init__(self, argv, print_arguments=True):
self.hold_name = None
self.snapshot_time_format = None
self.property_name = None
self.exclude_paths = None
# helps with investigating failed regression tests:
if print_arguments:
print("ARGUMENTS: " + " ".join(argv))
self.args = self.parse_args(argv)
def parse_args(self, argv):
"""parse common arguments, setup logging, check and adjust parameters"""
parser=self.get_parser()
args = parser.parse_args(argv)
if args.help:
parser.print_help()
sys.exit(255)
if args.version:
print(self.HEADER)
sys.exit(255)
# auto enable progress?
if sys.stderr.isatty() and not args.no_progress:
args.progress = True
if args.debug_output:
args.debug = True
if args.test:
args.verbose = True
if args.debug:
args.verbose = True
self.log = LogConsole(show_debug=args.debug, show_verbose=args.verbose, color=sys.stdout.isatty())
self.verbose(self.HEADER)
self.verbose("")
        if args.backup_name is None:
parser.print_usage()
self.log.error("Please specify BACKUP-NAME")
sys.exit(255)
if args.target_path is not None and args.target_path[0] == "/":
self.log.error("Target should not start with a /")
sys.exit(255)
if args.ignore_replicated:
self.warning("--ignore-replicated has been renamed, using --exclude-unchanged")
args.exclude_unchanged = True
# Note: Before version v3.1-beta5, we always used exclude_received. This was a problem if you wanted to
# replicate an existing backup to another host and use the same backupname/snapshots. However, exclude_received
# may still need to be used to explicitly exclude a backup with the 'received' source property to avoid accidental
# recursive replication of a zvol that is currently being received in another session (as it will have changes).
self.exclude_paths = []
if args.ssh_source == args.ssh_target:
if args.target_path:
# target and source are the same, make sure to exclude target_path
self.verbose("NOTE: Source and target are on the same host, excluding target-path from selection.")
self.exclude_paths.append(args.target_path)
else:
self.verbose("NOTE: Source and target are on the same host, excluding received datasets from selection.")
args.exclude_received = True
if args.test:
self.warning("TEST MODE - SIMULATING WITHOUT MAKING ANY CHANGES")
#format all the names
self.property_name = args.property_format.format(args.backup_name)
self.snapshot_time_format = args.snapshot_format.format(args.backup_name)
self.hold_name = args.hold_format.format(args.backup_name)
self.verbose("")
self.verbose("Selecting dataset property : {}".format(self.property_name))
self.verbose("Snapshot format : {}".format(self.snapshot_time_format))
return args
def get_parser(self):
parser = argparse.ArgumentParser(description=self.HEADER, add_help=False,
epilog='Full manual at: https://github.com/psy0rz/zfs_autobackup')
#positional arguments
parser.add_argument('backup_name', metavar='BACKUP-NAME', default=None, nargs='?',
help='Name of the backup to select')
parser.add_argument('target_path', metavar='TARGET-PATH', default=None, nargs='?',
help='Target ZFS filesystem (optional)')
# Basic options
group=parser.add_argument_group("Basic options")
group.add_argument('--help', '-h', action='store_true', help='show help')
group.add_argument('--test', '--dry-run', '-n', action='store_true',
help='Dry run, dont change anything, just show what would be done (still does all read-only '
'operations)')
group.add_argument('--verbose', '-v', action='store_true', help='verbose output')
group.add_argument('--debug', '-d', action='store_true',
help='Show zfs commands that are executed, stops after an exception.')
group.add_argument('--debug-output', action='store_true',
help='Show zfs commands and their output/exit codes. (noisy)')
group.add_argument('--progress', action='store_true',
                           help='show zfs progress output. Enabled automatically on ttys. (use --no-progress to disable)')
group.add_argument('--no-progress', action='store_true',
help=argparse.SUPPRESS) # needed to workaround a zfs recv -v bug
group.add_argument('--version', action='store_true',
help='Show version.')
group.add_argument('--strip-path', metavar='N', default=0, type=int,
help='Number of directories to strip from target path (use 1 when cloning zones between 2 '
'SmartOS machines)')
# SSH options
group=parser.add_argument_group("SSH options")
group.add_argument('--ssh-config', metavar='CONFIG-FILE', default=None, help='Custom ssh client config')
group.add_argument('--ssh-source', metavar='USER@HOST', default=None,
help='Source host to get backup from.')
group.add_argument('--ssh-target', metavar='USER@HOST', default=None,
help='Target host to push backup to.')
group=parser.add_argument_group("String formatting options")
group.add_argument('--property-format', metavar='FORMAT', default="autobackup:{}",
help='Dataset selection string format. Default: %(default)s')
group.add_argument('--snapshot-format', metavar='FORMAT', default="{}-%Y%m%d%H%M%S",
help='ZFS Snapshot string format. Default: %(default)s')
group.add_argument('--hold-format', metavar='FORMAT', default="zfs_autobackup:{}",
help='ZFS hold string format. Default: %(default)s')
group=parser.add_argument_group("Selection options")
group.add_argument('--ignore-replicated', action='store_true', help=argparse.SUPPRESS)
group.add_argument('--exclude-unchanged', action='store_true',
help='Exclude datasets that have no changes since any last snapshot. (Useful in combination with proxmox HA replication)')
group.add_argument('--exclude-received', action='store_true',
help='Exclude datasets that have the origin of their autobackup: property as "received". '
'This can avoid recursive replication between two backup partners.')
return parser
def verbose(self, txt):
self.log.verbose(txt)
def warning(self, txt):
self.log.warning(txt)
def error(self, txt):
self.log.error(txt)
def debug(self, txt):
self.log.debug(txt)
def progress(self, txt):
self.log.progress(txt)
def clear_progress(self):
self.log.clear_progress()
def set_title(self, title):
self.log.verbose("")
self.log.verbose("#### " + title)
def print_error_sources(self):
self.error(
"No source filesystems selected, please do a 'zfs set autobackup:{0}=true' on the source datasets "
"you want to select.".format(
self.args.backup_name))
def make_target_name(self, source_dataset):
"""make target_name from a source_dataset"""
return self.args.target_path + "/" + source_dataset.lstrip_path(self.args.strip_path) | zfs-autobackup | /zfs_autobackup-3.2a1.tar.gz/zfs_autobackup-3.2a1/zfs_autobackup/ZfsAuto.py | ZfsAuto.py |
import os
import select
import subprocess
from .CmdPipe import CmdPipe, CmdItem
from .LogStub import LogStub
try:
from shlex import quote as cmd_quote
except ImportError:
from pipes import quote as cmd_quote
class ExecuteError(Exception):
pass
class ExecuteNode(LogStub):
"""an endpoint to execute local or remote commands via ssh"""
PIPE=1
def __init__(self, ssh_config=None, ssh_to=None, readonly=False, debug_output=False):
"""ssh_config: custom ssh config
ssh_to: server you want to ssh to. none means local
readonly: only execute commands that don't make any changes (useful for testing-runs)
debug_output: show output and exit codes of commands in debugging output.
"""
self.ssh_config = ssh_config
self.ssh_to = ssh_to
self.readonly = readonly
self.debug_output = debug_output
def __repr__(self):
if self.ssh_to is None:
return "(local)"
else:
return self.ssh_to
def _parse_stdout(self, line):
"""parse stdout. can be overridden in subclass"""
if self.debug_output:
self.debug("STDOUT > " + line.rstrip())
def _parse_stderr(self, line, hide_errors):
"""parse stderr. can be overridden in subclass"""
if hide_errors:
self.debug("STDERR > " + line.rstrip())
else:
self.error("STDERR > " + line.rstrip())
def _quote(self, cmd):
"""return quoted version of command. if it has value PIPE it will add an actual | """
if cmd==self.PIPE:
return('|')
else:
return(cmd_quote(cmd))
def _shell_cmd(self, cmd):
"""prefix specified ssh shell to command and escape shell characters"""
ret=[]
#add remote shell
if not self.is_local():
ret=["ssh"]
if self.ssh_config is not None:
ret.extend(["-F", self.ssh_config])
ret.append(self.ssh_to)
ret.append(" ".join(map(self._quote, cmd)))
return ret
def is_local(self):
return self.ssh_to is None
def run(self, cmd, inp=None, tab_split=False, valid_exitcodes=None, readonly=False, hide_errors=False,
return_stderr=False, pipe=False, return_all=False):
"""run a command on the node , checks output and parses/handle output and returns it
Either uses a local shell (sh -c) or remote shell (ssh) to execute the command.
        Therefore the command can have stuff like actual pipes in it, if you don't want to use pipe=True to pipe stuff.
:param cmd: the actual command, should be a list, where the first item is the command
and the rest are parameters. use ExecuteNode.PIPE to add an unescaped |
(if you want to use system piping instead of python piping)
:param pipe: return CmdPipe instead of executing it.
:param inp: Can be None, a string or a CmdPipe that was previously returned.
:param tab_split: split tabbed files in output into a list
:param valid_exitcodes: list of valid exit codes for this command (checks exit code of both sides of a pipe)
Use [] to accept all exit codes. Default [0]
:param readonly: make this True if the command doesn't make any changes and is safe to execute in testmode
:param hide_errors: don't show stderr output as error, instead show it as debugging output (use to hide expected errors)
:param return_stderr: return both stdout and stderr as a tuple. (normally only returns stdout)
:param return_all: return both stdout and stderr and exit_code as a tuple. (normally only returns stdout)
"""
# create new pipe?
if not isinstance(inp, CmdPipe):
cmd_pipe = CmdPipe(self.readonly, inp)
else:
# add stuff to existing pipe
cmd_pipe = inp
# stderr parser
error_lines = []
returned_exit_code=None
def stderr_handler(line):
if tab_split:
error_lines.append(line.rstrip().split('\t'))
else:
error_lines.append(line.rstrip())
self._parse_stderr(line, hide_errors)
        # exit code handler
if valid_exitcodes is None:
valid_exitcodes = [0]
def exit_handler(exit_code):
if self.debug_output:
self.debug("EXIT > {}".format(exit_code))
if (valid_exitcodes != []) and (exit_code not in valid_exitcodes):
self.error("Command \"{}\" returned exit code {} (valid codes: {})".format(cmd_item, exit_code, valid_exitcodes))
return False
return True
# add shell command and handlers to pipe
cmd_item=CmdItem(cmd=self._shell_cmd(cmd), readonly=readonly, stderr_handler=stderr_handler, exit_handler=exit_handler, shell=self.is_local())
cmd_pipe.add(cmd_item)
# return pipe instead of executing?
if pipe:
return cmd_pipe
# stdout parser
output_lines = []
def stdout_handler(line):
if tab_split:
output_lines.append(line.rstrip().split('\t'))
else:
output_lines.append(line.rstrip())
self._parse_stdout(line)
if cmd_pipe.should_execute():
self.debug("CMD > {}".format(cmd_pipe))
else:
self.debug("CMDSKIP> {}".format(cmd_pipe))
# execute and calls handlers in CmdPipe
if not cmd_pipe.execute(stdout_handler=stdout_handler):
raise(ExecuteError("Last command returned error"))
if return_all:
return output_lines, error_lines, cmd_item.process and cmd_item.process.returncode
elif return_stderr:
return output_lines, error_lines
else:
return output_lines | zfs-autobackup | /zfs_autobackup-3.2a1.tar.gz/zfs_autobackup-3.2a1/zfs_autobackup/ExecuteNode.py | ExecuteNode.py |
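
# Example usage (illustrative sketch; host name and command are placeholders):
#
#   node = ExecuteNode(ssh_to="root@backupserver")  # or ExecuteNode() for local commands
#   for name in node.run(["zfs", "list", "-H", "-o", "name"], readonly=True):
#       print(name)
#
# With pipe=True, run() returns a CmdPipe instead of executing it; that pipe can
# be passed as inp= to another node's run(), which is how zfs send/recv streams
# are chained between hosts.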
from __future__ import print_function
import re
import shlex
import subprocess
import sys
import time
from .ExecuteNode import ExecuteNode
from .Thinner import Thinner
from .CachedProperty import CachedProperty
from .ZfsPool import ZfsPool
from .ZfsDataset import ZfsDataset
from .ExecuteNode import ExecuteError
class ZfsNode(ExecuteNode):
"""a node that contains zfs datasets. implements global (systemwide/pool wide) zfs commands"""
def __init__(self, snapshot_time_format, hold_name, logger, ssh_config=None, ssh_to=None, readonly=False,
description="",
debug_output=False, thinner=None):
self.snapshot_time_format = snapshot_time_format
self.hold_name = hold_name
self.description = description
self.logger = logger
if ssh_config:
self.verbose("Using custom SSH config: {}".format(ssh_config))
if ssh_to:
self.verbose("Datasets on: {}".format(ssh_to))
else:
self.verbose("Datasets are local")
if thinner is not None:
rules = thinner.human_rules()
if rules:
for rule in rules:
self.verbose(rule)
else:
self.verbose("Keep no old snaphots")
self.__thinner = thinner
# list of ZfsPools
self.__pools = {}
self.__datasets = {}
self._progress_total_bytes = 0
self._progress_start_time = time.time()
ExecuteNode.__init__(self, ssh_config=ssh_config, ssh_to=ssh_to, readonly=readonly, debug_output=debug_output)
def thin(self, objects, keep_objects):
# NOTE: if thinning is disabled with --no-thinning, self.__thinner will be none.
if self.__thinner is not None:
return self.__thinner.thin(objects, keep_objects)
else:
return (keep_objects, [])
@CachedProperty
def supported_send_options(self):
"""list of supported options, for optimizing sends"""
# not every zfs implementation supports them all
ret = []
for option in ["-L", "-e", "-c"]:
if self.valid_command(["zfs", "send", option, "zfs_autobackup_option_test"]):
ret.append(option)
return ret
@CachedProperty
def supported_recv_options(self):
"""list of supported options"""
# not every zfs implementation supports them all
ret = []
for option in ["-s"]:
if self.valid_command(["zfs", "recv", option, "zfs_autobackup_option_test"]):
ret.append(option)
return ret
def valid_command(self, cmd):
"""test if a specified zfs options are valid exit code. use this to determine support options"""
try:
self.run(cmd, hide_errors=True, valid_exitcodes=[0, 1])
except ExecuteError:
return False
return True
def get_pool(self, dataset):
"""get a ZfsPool() object from dataset. stores objects internally to enable caching"""
if not isinstance(dataset, ZfsDataset):
raise (Exception("{} is not a ZfsDataset".format(dataset)))
zpool_name = dataset.name.split("/")[0]
return self.__pools.setdefault(zpool_name, ZfsPool(self, zpool_name))
def get_dataset(self, name, force_exists=None):
"""get a ZfsDataset() object from name. stores objects internally to enable caching"""
return self.__datasets.setdefault(name, ZfsDataset(self, name))
def reset_progress(self):
"""reset progress output counters"""
self._progress_total_bytes = 0
self._progress_start_time = time.time()
def parse_zfs_progress(self, line, hide_errors, prefix):
"""try to parse progress output of zfs recv -Pv, and don't show it as error to the user """
# is it progress output?
progress_fields = line.rstrip().split("\t")
if (line.find("nvlist version") == 0 or
line.find("resume token contents") == 0 or
len(progress_fields) != 1 or
line.find("skipping ") == 0 or
re.match("send from .*estimated size is ", line)):
            # always output for debugging, of course
self.debug(prefix + line.rstrip())
# actual useful info
if len(progress_fields) >= 3:
if progress_fields[0] == 'full' or progress_fields[0] == 'size':
self._progress_total_bytes = int(progress_fields[2])
elif progress_fields[0] == 'incremental':
self._progress_total_bytes = int(progress_fields[3])
elif progress_fields[1].isnumeric():
bytes_ = int(progress_fields[1])
if self._progress_total_bytes:
percentage = min(100, int(bytes_ * 100 / self._progress_total_bytes))
speed = int(bytes_ / (time.time() - self._progress_start_time) / (1024 * 1024))
bytes_left = self._progress_total_bytes - bytes_
minutes_left = int((bytes_left / (bytes_ / (time.time() - self._progress_start_time))) / 60)
self.logger.progress(
"Transfer {}% {}MB/s (total {}MB, {} minutes left)".format(percentage, speed, int(
self._progress_total_bytes / (1024 * 1024)), minutes_left))
return
# still do the normal stderr output handling
if hide_errors:
self.debug(prefix + line.rstrip())
else:
self.error(prefix + line.rstrip())
# def _parse_stderr_pipe(self, line, hide_errors):
# self.parse_zfs_progress(line, hide_errors, "STDERR|> ")
def _parse_stderr(self, line, hide_errors):
self.parse_zfs_progress(line, hide_errors, "STDERR > ")
def verbose(self, txt):
self.logger.verbose("{} {}".format(self.description, txt))
def error(self, txt):
self.logger.error("{} {}".format(self.description, txt))
def warning(self, txt):
self.logger.warning("{} {}".format(self.description, txt))
def debug(self, txt):
self.logger.debug("{} {}".format(self.description, txt))
def consistent_snapshot(self, datasets, snapshot_name, min_changed_bytes, pre_snapshot_cmds=[],
post_snapshot_cmds=[]):
"""create a consistent (atomic) snapshot of specified datasets, per pool.
"""
pools = {}
# collect snapshots that we want to make, per pool
# self.debug(datasets)
for dataset in datasets:
if not dataset.is_changed_ours(min_changed_bytes):
dataset.verbose("No changes since {}".format(dataset.our_snapshots[-1].snapshot_name))
continue
# force_exist, since we're making it
snapshot = self.get_dataset(dataset.name + "@" + snapshot_name, force_exists=True)
pool = dataset.split_path()[0]
if pool not in pools:
pools[pool] = []
pools[pool].append(snapshot)
# update cache, but try to prevent an unneeded zfs list
if self.readonly or CachedProperty.is_cached(dataset, 'snapshots'):
dataset.snapshots.append(snapshot) # NOTE: this will trigger zfs list if its not cached
if not pools:
self.verbose("No changes anywhere: not creating snapshots.")
return
try:
for cmd in pre_snapshot_cmds:
self.verbose("Running pre-snapshot-cmd")
self.run(cmd=shlex.split(cmd), readonly=False)
# create consistent snapshot per pool
for (pool_name, snapshots) in pools.items():
cmd = ["zfs", "snapshot"]
cmd.extend(map(lambda snapshot_: str(snapshot_), snapshots))
self.verbose("Creating snapshots {} in pool {}".format(snapshot_name, pool_name))
self.run(cmd, readonly=False)
finally:
for cmd in post_snapshot_cmds:
self.verbose("Running post-snapshot-cmd")
try:
self.run(cmd=shlex.split(cmd), readonly=False)
except Exception as e:
pass
def selected_datasets(self, property_name, exclude_received, exclude_paths, exclude_unchanged, min_change):
"""determine filesystems that should be backed up by looking at the special autobackup-property, systemwide
returns: list of ZfsDataset
"""
self.debug("Getting selected datasets")
# get all source filesystems that have the backup property
lines = self.run(tab_split=True, readonly=True, cmd=[
"zfs", "get", "-t", "volume,filesystem", "-o", "name,value,source", "-H",
property_name
])
# The returnlist of selected ZfsDataset's:
selected_filesystems = []
# list of sources, used to resolve inherited sources
sources = {}
for line in lines:
(name, value, raw_source) = line
dataset = self.get_dataset(name, force_exists=True)
# "resolve" inherited sources
sources[name] = raw_source
if raw_source.find("inherited from ") == 0:
inherited = True
inherited_from = re.sub("^inherited from ", "", raw_source)
source = sources[inherited_from]
else:
inherited = False
source = raw_source
# determine it
if dataset.is_selected(value=value, source=source, inherited=inherited, exclude_received=exclude_received,
exclude_paths=exclude_paths, exclude_unchanged=exclude_unchanged,
min_change=min_change):
selected_filesystems.append(dataset)
return selected_filesystems | zfs-autobackup | /zfs_autobackup-3.2a1.tar.gz/zfs_autobackup-3.2a1/zfs_autobackup/ZfsNode.py | ZfsNode.py |
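
# Example usage (illustrative sketch; `logger` is assumed to provide the
# verbose/error/warning/debug/progress methods used above, all other names are
# placeholders):
#
#   node = ZfsNode(snapshot_time_format="offsite1-%Y%m%d%H%M%S",
#                  hold_name="zfs_autobackup:offsite1", logger=logger)
#   datasets = node.selected_datasets(property_name="autobackup:offsite1",
#                                     exclude_received=False, exclude_paths=[],
#                                     exclude_unchanged=False, min_change=1)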
# Remote debug with vscode
- Install debugpy in remote virtualenv
```bash
(virtualenv)me@remote $ pip install debugpy
```
- Create launch configuration in vscode
```json
{
"name": "Python: Attach remote",
"type": "python",
"request": "attach",
"connect": {
"host": "hostname",
"port": 5678
},
"pathMappings": [
{
"localRoot": "${workspaceFolder}",
"remoteRoot": "remote_host_project_dir"
}
],
"redirectOutput": true
}
```
- Launch tool with debug parameter in remote virtualenv
```bash
(virtualenv)me@remote $ zcm -D ls
```
- Launch debug config in vscode
| zfs-clone-manager | /zfs-clone-manager-3.4.0.tar.gz/zfs-clone-manager-3.4.0/DEBUG.md | DEBUG.md |
# ZFS Clone Manager
Tool to add version control and historic data of a directory with ZFS. The functionality is similar to Solaris beadm but generalized for any ZFS filesystem, not just ROOT and VAR.
The suggested workflow is:
1. Initialize a manager (zcm init)
2. Make changes in active clone
3. Create new clone (zcm clone)
4. Make changes in new clone
5. Activate new clone (zcm activate)
6. [Remove older clones (zcm rm)]
7. Go to step 2
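
A compact end-to-end run of this cycle looks roughly like this (dataset, path and file names are illustrative; the Usage section below shows the full output of each command):

```bash
$ zcm init rpool/directory /directory         # step 1
$ vi /directory/config.xml                    # step 2: change the active clone
$ zcm clone /directory                        # step 3: creates clone 00000001
$ vi /directory/.clones/00000001/config.xml   # step 4: change the new clone
$ zcm activate /directory 00000001            # step 5
$ zcm rm /directory 00000000                  # step 6 (optional): drop an old clone
```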
## Usage
- Initialize a ZCM manager
```bash
$ zcm init rpool/directory /directory
ZCM initialized ZFS rpool/directory at path /directory
```
"rpool/directory" -> root of the ZFS for clones and snapshots.
"/directory" -> path of the filesystem (mountpoint of the active clone).
- Show ZCM information
```bash
$ zcm info /directory
Path: /directory
Root ZFS: rpool/directory
Root ZFS size: 63.00 KB
Total clone count: 1
Older clone count: 0
Newer clone count: 0
Oldest clone ID: 00000000
Active clone ID: 00000000
Newest clone ID: 00000000
Next clone ID: 00000001
$ zcm ls /directory
MANAGER A ID CLONE MOUNTPOINT ORIGIN DATE SIZE
rpool/directory * 00000000 rpool/directory/00000000 /directory 2021-02-16 10:46:59 32.00 KB
```
- Create new clones (derived from active)
```bash
$ zcm clone /directory
Created clone 00000001 at path /directory/.clones/00000001
$ zcm clone /directory
Created clone 00000002 at path /directory/.clones/00000002
$ zcm ls /directory
MANAGER A ID CLONE MOUNTPOINT ORIGIN DATE SIZE
rpool/directory * 00000000 rpool/directory/00000000 /directory 2021-02-20 06:51:14 32.00 KB
rpool/directory 00000001 rpool/directory/00000001 /directory/.clones/00000001 00000000 2021-02-20 06:57:01 18.00 KB
rpool/directory 00000002 rpool/directory/00000002 /directory/.clones/00000002 00000000 2021-02-20 06:57:02 18.00 KB
```
- Activate the previously created clone, mounting it at ZCM path
```bash
$ zcm activate /directory 00000002
Activated clone 00000002
```
The activate command cannot be executed from inside the path, therefore the path must always be given explicitly as an argument.
- All the clones are visible at <path>/.clones
```bash
$ ls /directory/.clones
0000000 00000001 00000002
```
- Show differences of a clone from its origin
```bash
$ mkdir /directory/tmp
$ mkfile 10m /directory/tmp/file
$ zcm diff /directory
MOUNTPOINT DATE CHANGE FILE FILE_TYPE
/directory 2021-02-22 06:19:34.094470 Modified . directory
/directory 2021-02-22 06:21:07.236145 Added tmp directory
/directory 2021-02-22 06:21:07.309059 Added tmp/file file
```
- Remove clones
```bash
$ zcm rm /directory 00000001
WARNING!!!!!!!!
All the filesystems, snapshots and directories associated with clone 00000001 will be permanently deleted.
This operation is not reversible.
Do you want to proceed? (yes/NO) yes
Removed clone 00000001
```
- Destroy ZCM related data
This is dangerous, you should backup data first.
```bash
$ zcm destroy /directory
WARNING!!!!!!!!
All the filesystems, clones, snapshots and directories associated with rpool/directory will be permanently deleted.
This operation is not reversible.
Do you want to proceed? (yes/NO) yes
Destroyed ZCM rpool/directory
```
- Initialize a ZCM manager based on an existing directory:
```bash
$ zcm ls /directory
There is no ZCM manager at /directory
$ mkdir -p /directory/tmp
$ mkfile 10m /directory/tmp/file
$ zcm init -M rpool/directory /directory
ZCM initialized ZFS rpool/directory at path /directory
$ zcm ls rpool/directory
MANAGER A ID CLONE MOUNTPOINT ORIGIN DATE SIZE
rpool/directory * 00000000 rpool/directory/00000000 /directory 2021-02-22 13:37:28 10.04 MB
$ ls /directory
.clones/ tmp/
$ ls /directory/tmp
file
```
- Initialize a ZCM manager based on an existing ZFS:
```bash
$ zfs create -o mountpoint=/directory rpool/directory
$ mkdir /directory/tmp
$ mkfile 10m /directory/tmp/file
$ zcm init -m rpool/directory /directory
ZCM initialized ZFS rpool/directory at path /directory
$ zcm ls rpool/directory
MANAGER A ID CLONE MOUNTPOINT ORIGIN DATE SIZE
rpool/directory * 00000000 rpool/directory/00000000 /directory 2021-02-22 13:39:43 10.04 MB
$ ls /directory
tmp
$ ls /directory/tmp
file
``` | zfs-clone-manager | /zfs-clone-manager-3.4.0.tar.gz/zfs-clone-manager-3.4.0/README.md | README.md |
# Change log for ZFS Clone Manager
## 2021-03-05: Version 3.4.0
- Added __main__.py for calling zcm as module (with python -m zcm)
- Added solaris packaging information
## 2021-02-23: Version 3.3.0
- Changed zfs_set(zfs_name,mounted=*) to zfs_[un]mount(zfs_name)
- Fix zfs not mounted after migration
- Changed shutil.copytree and shutil.copy2 to "cd source; tar cf - . | (cd target; tar xf -)" command
- Fixed "option not recongnized" bug in zfs/get_cmd()
- Library zfs* functions now raise ZFSError on error and return stdout instead of returncode
- Added JSON output to commands clone and list
## 2021-02-22: Version 3.2.1
- Fix some copy paste issues with some strings
## 2021-02-22: Version 3.2.0
- Added new check in Manager.load()
- Added command diff
- Added pagination to print_table (and print commands)
- Added suppress headers to print_table (and print commands)
- Added parameter to initialize command with migration
- Renamed Manager.initialize_zfs() to Manager.initialize_manager()
## 2021-02-21: Version 3.1.0
- Fix Manager.size uninitialized
- Removed loop for path detection in get_zcm_for_path()
- Removed ZFS property zfs_clone_manager:active.
Active detection going back to clone_zfs.mountpoint==root_zfs.zfs_clone_manager:path
- Renamed back create command to clone
## 2021-02-20: Version 3.0.0
- Moved parameter -p,--path as subcommand argument filesystem|path
- Changed subcommand name and aliases behavior
- Added zfs_clone_manager:path and zfs_clone_manager:active ZFS properties handling
- Renamed zfs property to name in Manager
- Better handling of on/off properties in zfs
- Added Clone class to use it instead of a dict
- Renamed name properties to zfs in Manager and Clone
## 2021-02-17: Version 2.2.0
- Added --auto-remove to activate and clone commands
- Unified helper functions in lib module
- Added confirmation message to remove command
- Added --max-total to activate command
- Moved print from Manager to CLI
- Added parseable output to information command
- Added Clone class for use instead of dict
- Renamed instance properties to clone in Manager
## 2021-02-16: Version 2.1.0
- Added --max-newer and --max-older options to activate command
## 2021-02-16: Version 2.0.0
- Renamed project to ZFS Clone Manager
- Renamed CLI tool to zcm
## 2021-02-15: Version 1.1.0
- Added quiet mode
- Added info command
- Added zfs size info
- Renamed Manager.clones to Manager.clones
- Added older and newer lists
- Added --max-newer and --max-total options to clone command
## 2021-02-15: Version 1.0.0
- First production release
## 2021-02-11: Version 0.0.1
- Start project
| zfs-clone-manager | /zfs-clone-manager-3.4.0.tar.gz/zfs-clone-manager-3.4.0/CHANGELOG.md | CHANGELOG.md |
import logging
import shutil
from pathlib import Path
from zcm.api.clone import Clone
from zcm.exceptions import ZCMError, ZCMException
from zcm.lib.helpers import copy_directory, id_generator
from zcm.lib.zfs import (ZFSError, zfs_clone, zfs_create, zfs_destroy,
zfs_inherit, zfs_list, zfs_mount, zfs_promote,
zfs_rename, zfs_set, zfs_snapshot, zfs_unmount)
log = logging.getLogger(__name__)
def get_zcm_for_path(path_str):
path = Path(path_str).absolute()
if not path.is_dir():
return None
absolute_path_str = str(path)
zfs_list_output = zfs_list(absolute_path_str, zfs_type='filesystem', properties=[
'name', 'zfs_clone_manager:path', 'mountpoint'])
if len(zfs_list_output) != 1:
return None
zfs = zfs_list_output[0]
if str(zfs['zfs_clone_manager:path']) == absolute_path_str and str(zfs['mountpoint']) == absolute_path_str:
splitted_name = zfs['name'].split('/')
name = '/'.join(splitted_name[:-1])
try:
int(splitted_name[-1], base=16)
except ValueError:
return None
return name
def snapshot_to_origin_id(snapshot):
# snapshot -> rpool/zfsa/zfsb/00000004@00000005
# snapshot.split('/') -> ['rpool','zfsa','zfsb','00000004@00000005']
# snapshot.split('/')[-1] -> '00000004@00000005'
# snapshot.split('/')[-1].split('@') -> ['00000004','00000005']
# snapshot.split('/')[-1].split('@')[0] -> '00000004'
if snapshot:
return snapshot.split('/')[-1].split('@')[0]
return None
class Manager:
def __init__(self, zfs_or_path):
self.zfs = None
self.path = None
self.clones = []
self.older_clones = []
self.newer_clones = []
self.active_clone = None
self.next_id = None
self.size = None
zfs = get_zcm_for_path(zfs_or_path)
self.zfs = zfs_or_path if zfs is None else zfs
self.load()
@staticmethod
def get_managers():
zfs_list_output = zfs_list(
properties=['name', 'zfs_clone_manager:path', 'mountpoint'])
return [Manager(zfs['name'])
for zfs in zfs_list_output
if zfs['zfs_clone_manager:path'] is not None and
zfs['mountpoint'] == Path(zfs['zfs_clone_manager:path'], '.clones')]
@staticmethod
def initialize_manager(zfs_str, path_str, migrate=None):
path = Path(path_str)
zfs_list_output = zfs_list(zfs_str, zfs_type='all', properties=[
'name', 'type', 'zfs_clone_manager:path', 'origin', 'mountpoint'], recursive=True)
if zfs_list_output:
zfs = zfs_list_output[0]
if zfs['zfs_clone_manager:path']:
raise ZCMError(
'The ZFS %s is a ZCM manager, will not initialize' % zfs_str)
if len(zfs_list_output) > 1:
raise ZCMError(
'The ZFS %s has children, can not initialize ZCM with it' % zfs_str)
if zfs['type'] != 'filesystem':
raise ZCMError('The ZFS %s is of type %s, can not initialize ZCM with it' % (
zfs_str, zfs['type']))
if migrate != 'ZFS':
raise ZCMError(
'ZFS %s already exists, will not initialize a manager with it' % zfs_str)
if zfs['mountpoint'] != path and path.exists():
raise ZCMError(
                'Path %s already exists (and it is not the mountpoint of ZFS %s), can not use it' % (path_str, zfs_str))
random_id = id_generator()
zfs_parts = zfs_str.split('/')[:-1]
zfs_parts.append(random_id)
random_zfs = '/'.join(zfs_parts)
try:
zfs_unmount(zfs_str)
zfs_rename(zfs_str, random_zfs)
zfs_create(zfs_str, zcm_path=path_str, recursive=True)
zfs_unmount(zfs_str)
zfs_rename(random_zfs, zfs_str + '/00000000')
zfs_set(zfs_str + '/00000000', mountpoint=path)
zfs_mount(zfs_str + '/00000000')
zfs_set(zfs_str, mountpoint=Path(path, '.clones'))
zfs_mount(zfs_str)
log.info('Migrated ZFS %s at path %s to ZCM' % (zfs_str, path_str))
except ZFSError as e:
raise ZCMError(e.message)
return
original_path = None
if path.exists():
if migrate != 'PATH':
raise ZCMError(
'Path %s already exists, will not initialize a manager' % path_str)
random_id = id_generator()
original_path = Path(path.parents[0], random_id)
path.rename(original_path)
try:
zfs_create(zfs_str, zcm_path=path_str, recursive=True)
zfs_unmount(zfs_str)
zfs_create('00000000', zfs_str, mountpoint=path)
zfs_set(zfs_str, mountpoint=Path(path, '.clones'))
zfs_mount(zfs_str)
log.info('Created ZCM %s at path %s' % (zfs_str, path_str))
except ZFSError as e:
raise ZCMError(e.message)
if original_path:
if copy_directory(original_path, path) != 0:
raise ZCMError('Could not move content of original directory, kept at %s' % original_path)
shutil.rmtree(original_path)
log.info('Moved content of path %s to clone' % path_str)
def load(self):
if not isinstance(self.zfs, str):
raise ZCMError(
'The name property is invalid: ' + str(self.zfs))
self.path = None
self.clones = []
self.older_clones = []
self.newer_clones = []
self.active_clone = None
self.next_id = None
self.size = None
last_id = 0
zfs_list_output = zfs_list(self.zfs, zfs_type='filesystem', properties=[
'name', 'zfs_clone_manager:path', 'origin', 'mountpoint',
'creation', 'used'], recursive=True)
if not zfs_list_output:
raise ZCMError(
'There is no ZCM manager at %s' % self.zfs)
for zfs in zfs_list_output:
if self.path is None:
self.zfs = zfs['name']
if zfs['zfs_clone_manager:path'] is None:
raise ZCMError(
'The ZFS %s is not a valid ZCM manager' % zfs['name'])
self.path = zfs['zfs_clone_manager:path']
self.size = zfs['used']
else:
splitted_name = zfs['name'].split('/')
name = '/'.join(splitted_name[:-1])
id = splitted_name[-1]
if name != self.zfs:
raise ZCMError(
'The ZFS %s is not a valid ZCM clone' % zfs['name'])
try:
last_id = max(last_id, int(id, base=16))
except ValueError:
raise ZCMError(
'The ZFS %s is not a valid ZCM clone' % zfs['name'])
origin_id = snapshot_to_origin_id(zfs['origin'])
clone = Clone(id, zfs['name'], zfs['origin'], origin_id,
zfs['mountpoint'], zfs['creation'], zfs['used'])
if not isinstance(self.path, Path) or not self.path.is_dir():
raise ZCMError(
                    'The path property is invalid: ' + str(self.path))
if zfs['mountpoint'] == self.path:
self.active_clone = clone
else:
if self.active_clone:
self.newer_clones.append(clone)
else:
self.older_clones.append(clone)
self.clones.append(clone)
self.next_id = format(last_id + 1, '08x')
def clone(self, max_newer=None, max_total=None, auto_remove=False):
if not self.active_clone:
raise ZCMError('There is no active clone, activate one first')
if not auto_remove and max_newer is not None and len(self.newer_clones) >= max_newer:
raise ZCMException(
'There are already %d newer clones, can not create another' % len(self.newer_clones))
if not auto_remove and max_total is not None and len(self.clones) >= max_total:
raise ZCMException(
'There are already %d clones, can not create another' % len(self.clones))
id = self.next_id
try:
snapshot = zfs_snapshot(id, self.active_clone.zfs)
zfs = zfs_clone(self.zfs + '/' + id, snapshot)
except ZFSError as e:
raise ZCMError(e.message)
self.load()
clone = self.get_clone(id)
log.info('Created clone ' + clone.id)
self.auto_remove(max_newer=max_newer, max_total=max_total)
return clone
def auto_remove(self, max_newer=None, max_older=None, max_total=None):
while max_older is not None and len(self.older_clones) > max_older:
self.remove(self.older_clones[0].id)
while max_newer is not None and len(self.newer_clones) > max_newer:
self.remove(self.newer_clones[0].id)
while max_total is not None and len(self.clones) > max_total:
if self.older_clones:
self.remove(self.older_clones[0].id)
elif self.newer_clones:
self.remove(self.newer_clones[0].id)
else:
raise ZCMError(
                    'There are no more clones to remove in order to satisfy max limit of ' + str(max_total))
def get_clone(self, id):
for clone in self.clones:
if clone.id == id:
return clone
raise ZCMError('There is no clone with id ' + id)
def unmount(self):
try:
for clone in self.clones:
if clone != self.active_clone:
zfs_unmount(clone.zfs)
zfs_unmount(self.zfs)
if self.active_clone is not None:
zfs_unmount(self.active_clone.zfs)
except ZFSError as e:
            # at least one unmount failed, remount all and fail
self.mount()
raise ZCMError(e.message)
def mount(self):
if not self.active_clone:
raise ZCMError('There is no active clone, activate one first')
try:
zfs_mount(self.active_clone.zfs)
zfs_mount(self.zfs)
for clone in self.clones:
if clone != self.active_clone:
zfs_mount(clone.zfs)
except ZFSError as e:
raise ZCMError(e.message)
def activate(self, id, max_newer=None, max_older=None, max_total=None, auto_remove=False):
next_active = self.get_clone(id)
if next_active == self.active_clone:
            raise ZCMException('Clone %s is already active' % id)
if not auto_remove and (max_newer is not None or max_older is not None):
newer_count = 0
older_count = 0
has_reach_active = False
for clone in self.clones:
if clone == next_active:
has_reach_active = True
else:
if has_reach_active:
newer_count += 1
else:
older_count += 1
if not auto_remove and max_newer is not None and newer_count > max_newer:
raise ZCMException(
'Command denied, Activating %s violates the maximum number of newer clones (%d/%d)'
% (id, newer_count, max_newer))
if not auto_remove and max_older is not None and older_count > max_older:
raise ZCMException(
'Command denied, Activating %s violates the maximum number of older clones (%d/%d)'
% (id, older_count, max_older))
self.unmount()
try:
if self.active_clone is not None:
zfs_inherit(self.active_clone.zfs, 'mountpoint')
zfs_set(next_active.zfs, mountpoint=self.path)
except ZFSError as e:
raise ZCMError(e.message)
self.active_clone = next_active
self.mount()
log.info('Activated clone ' + id)
self.load()
self.auto_remove(max_newer=max_newer,
max_older=max_older, max_total=max_total)
return next_active
def find_clones_with_origin(self, id):
clones = []
for clone in self.clones:
if clone.origin_id == id:
clones.append(clone)
return clones
def remove(self, id):
clone = self.get_clone(id)
if clone == self.active_clone:
raise ZCMError(
                'Clone with id %s is active, can not remove' % id)
clones = self.find_clones_with_origin(id)
promoted = None
if clones:
promoted = clones[-1]
zfs_promote(promoted.zfs)
try:
zfs_destroy(clone.zfs)
if clone.origin:
zfs_destroy(clone.origin)
if promoted:
zfs_destroy('%s@%s' % (promoted.zfs, promoted.id))
log.info('Removed clone ' + clone.id)
self.load()
except ZFSError as e:
raise ZCMError(e.message)
def destroy(self):
try:
self.unmount()
zfs_destroy(self.zfs, recursive=True)
self.path.rmdir()
except ZFSError as e:
raise ZCMError(e.message)
except OSError as e:
            raise ZCMError('Could not destroy path ' + str(self.path))
def to_dictionary(self):
return {
'zfs': self.zfs,
'path': str(self.path),
'size': self.size,
'clones': [ clone.id for clone in self.clones ],
'older_clones': [ clone.id for clone in self.older_clones ],
'newer_clones': [ clone.id for clone in self.newer_clones ],
'active_clone': self.active_clone.id,
'next_id': self.next_id
} | zfs-clone-manager | /zfs-clone-manager-3.4.0.tar.gz/zfs-clone-manager-3.4.0/zcm/api/manager.py | manager.py |
import argparse
import json
from zcm.api.manager import Manager
from zcm.lib.print import format_bytes, print_table
class List:
name = 'list'
aliases = ['ls']
@staticmethod
def init_parser(parent_subparsers):
parent_parser = argparse.ArgumentParser(add_help=False)
parser = parent_subparsers.add_parser(List.name,
parents=[parent_parser],
aliases=List.aliases,
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
description='List clones',
help='List clones')
parser.add_argument('-j', '--json',
action='store_true',
help='Output a JSON object')
parser.add_argument('-T', '--no-trunc',
help='Don\'t truncate output',
action='store_true')
parser.add_argument('-H', '--no-header',
help='Don\'t show header line(s)',
action='store_true')
parser.add_argument('-P', '--page-size',
help='If list is longer than <page-size>, ask for more to continue in <page-size> intervals.\
Enter 0 to avoid pagination',
type=int,
default=25)
parser.add_argument('path',
nargs='*',
metavar='filesystem|path',
help='zfs filesystem or path to show')
def __init__(self, options):
table = []
managers = []
if options.path:
managers = [Manager(path) for path in options.path]
else:
managers = Manager.get_managers()
if options.json:
clones = []
for manager in managers:
for clone in manager.clones:
clones.append(clone.to_dictionary())
print(json.dumps(clones, indent=4))
else:
for manager in managers:
for clone in manager.clones:
table.append({
'manager': manager.zfs,
'a': '*' if manager.active_clone == clone else ' ',
'id': clone.id,
'clone': clone.zfs,
'mountpoint': str(clone.mountpoint),
'origin': clone.origin_id if clone.origin_id else '',
'date': clone.creation,
'size': format_bytes(clone.size)
})
print_table(table, header=(not options.no_header), truncate=(
not options.no_trunc), page_size=options.page_size) | zfs-clone-manager | /zfs-clone-manager-3.4.0.tar.gz/zfs-clone-manager-3.4.0/zcm/cli/list.py | list.py |
import argparse
import json
from zcm.api.manager import Manager
from zcm.lib.helpers import check_one_or_more
class Clone:
name = 'clone'
aliases = []
@staticmethod
def init_parser(parent_subparsers):
parent_parser = argparse.ArgumentParser(add_help=False)
parser = parent_subparsers.add_parser(Clone.name,
parents=[parent_parser],
aliases=Clone.aliases,
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
description='Create a new clone from the active clone',
help='Create a new clone')
parser.add_argument('-j', '--json',
action='store_true',
help='Output a JSON object')
parser.add_argument('-m', '--max-newer',
type=check_one_or_more,
help='Do not create if there are <max-newer> newer clones')
parser.add_argument('-t', '--max-total',
type=check_one_or_more,
help='Do not create if there are <max-total> clones')
parser.add_argument('-a', '--auto-remove',
action='store_true',
                            help='Remove clones if maximum limits exceeded')
parser.add_argument('path',
metavar='filesystem|path',
help='zfs filesystem or path of ZCM')
def __init__(self, options):
manager = Manager(options.path)
clone = manager.clone(
options.max_newer, options.max_total, options.auto_remove)
if not options.quiet:
if options.json:
print(json.dumps(clone.to_dictionary(), indent=4))
else:
print('Created clone %s at path %s' %
(clone.id, clone.mountpoint)) | zfs-clone-manager | /zfs-clone-manager-3.4.0.tar.gz/zfs-clone-manager-3.4.0/zcm/cli/clone.py | clone.py |
import argparse
from zcm.api.manager import Manager
from zcm.lib.print import print_table
from zcm.lib.zfs import zfs_diff
class Difference:
name = 'difference'
aliases = ['diff']
@staticmethod
def init_parser(parent_subparsers):
parent_parser = argparse.ArgumentParser(add_help=False)
parser = parent_subparsers.add_parser(Difference.name,
parents=[parent_parser],
aliases=Difference.aliases,
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
description='Gives a high-level description of the differences between\
a clone and its origin',
help='Differencies between a clone and its origin')
parser.add_argument('-T', '--no-trunc',
help='Don\'t truncate output',
action='store_true')
parser.add_argument('-H', '--no-header',
help='Don\'t show header line(s)',
action='store_true')
parser.add_argument('-P', '--page-size',
help='If list is longer than <page-size>, ask for more to continue in <page-size> intervals.\
Enter 0 to avoid pagination',
type=int,
default=25)
parser.add_argument('path',
metavar='filesystem|path',
help='zfs filesystem or path of ZCM')
parser.add_argument('id',
nargs='?',
help='clone id',
default='active')
def __init__(self, options):
manager = Manager(options.path)
id = manager.active_clone.id if options.id == 'active' else options.id
clone = manager.get_clone(id)
table = zfs_diff(clone.zfs, clone.origin, include_file_types=True)
print_table(table, header=(not options.no_header), truncate=(
not options.no_trunc), page_size=options.page_size) | zfs-clone-manager | /zfs-clone-manager-3.4.0.tar.gz/zfs-clone-manager-3.4.0/zcm/cli/difference.py | difference.py |
import argparse
from zcm.api.manager import Manager
from zcm.lib.helpers import check_one_or_more, check_positive
class Activate:
name = 'activate'
aliases = []
@staticmethod
def init_parser(parent_subparsers):
parent_parser = argparse.ArgumentParser(add_help=False)
parser = parent_subparsers.add_parser(Activate.name,
parents=[parent_parser],
aliases=Activate.aliases,
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
                                              description='Activate a clone',
                                              help='Activate a clone')
parser.add_argument('-m', '--max-newer',
type=check_positive,
help='Do not activate if there will be more than <max-newer> newer clones')
parser.add_argument('-M', '--max-older',
type=check_positive,
help='Do not activate if there will be more than <max-older> older clones')
parser.add_argument('-t', '--max-total',
type=check_one_or_more,
                            help='Do not activate if there will be more than <max-total> clones')
parser.add_argument('-a', '--auto-remove',
action='store_true',
                            help='Remove clones if maximum limits exceeded')
parser.add_argument('path',
metavar='filesystem|path',
help='zfs filesystem or path of ZCM')
parser.add_argument('id',
help='clone id to activate')
def __init__(self, options):
manager = Manager(options.path)
manager.activate(options.id, options.max_newer,
options.max_older, options.max_total, options.auto_remove)
if not options.quiet:
print('Activated clone ' + options.id) | zfs-clone-manager | /zfs-clone-manager-3.4.0.tar.gz/zfs-clone-manager-3.4.0/zcm/cli/activate.py | activate.py |
import argparse
import logging
from zcm import __version__
from zcm.cli.activate import Activate
from zcm.cli.clone import Clone
from zcm.cli.destroy import Destroy
from zcm.cli.difference import Difference
from zcm.cli.information import Information
from zcm.cli.initialize import Initialize
from zcm.cli.list import List
from zcm.cli.remove import Remove
from zcm.exceptions import ZCMException
log = logging.getLogger(__name__)
def set_log_level(log_level):
if log_level == 'none':
logging.disable(logging.CRITICAL)
else:
levels = {
'debug': logging.DEBUG,
'info': logging.INFO,
'warn': logging.WARNING,
'error': logging.ERROR,
'critical': logging.CRITICAL
}
logging.basicConfig(level=levels[log_level])
class CustomFormatter(argparse.ArgumentDefaultsHelpFormatter,
argparse.RawDescriptionHelpFormatter):
pass
class CLI:
commands = [Initialize, Information, List, Clone, Activate, Difference, Remove, Destroy]
def __init__(self):
parser = argparse.ArgumentParser(
formatter_class=CustomFormatter,
description='Tool to manage ZFS clones with history metadata')
parser.add_argument('-V', '--version',
help='Print version information and quit',
action='version',
version='%(prog)s version ' + __version__)
parser.add_argument('-l', '--log-level',
help='Set the logging level ("debug"|"info"|"warn"|"error"|"fatal")',
choices=[
'debug',
'info',
'warn',
'error',
'critical',
'none'
],
metavar='LOG_LEVEL',
default='none')
parser.add_argument('-D', '--debug',
help='Enable debug mode',
action='store_true')
parser.add_argument('-q', '--quiet',
help='Enable quiet mode',
action='store_true')
subparsers = parser.add_subparsers(
dest='command',
metavar='COMMAND',
required=True)
for command in CLI.commands:
command.init_parser(subparsers)
options = parser.parse_args()
set_log_level(options.log_level)
if options.debug:
import debugpy
debugpy.listen(('0.0.0.0', 5678))
log.info("Waiting for IDE to attach...")
debugpy.wait_for_client()
try:
for command in self.commands:
if options.command == command.name or options.command in command.aliases:
command(options)
break
except ZCMException as e:
log.error(e.message)
print(e.message)
exit(-1)
def main():
CLI()
if __name__ == '__main__':
main() | zfs-clone-manager | /zfs-clone-manager-3.4.0.tar.gz/zfs-clone-manager-3.4.0/zcm/cli/main.py | main.py |
import argparse
from zcm.api import Manager
from zcm.lib.print import format_bytes, print_info, print_table
class Information:
name = 'information'
aliases = ['info']
@staticmethod
def init_parser(parent_subparsers):
parent_parser = argparse.ArgumentParser(add_help=False)
parser = parent_subparsers.add_parser(Information.name,
parents=[parent_parser],
aliases=Information.aliases,
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
description='Show ZCM information',
help='Show ZCM information')
parser.add_argument('-t', '--table',
help='Show information as table',
action='store_true')
parser.add_argument('path',
nargs='*',
metavar='filesystem|path',
help='zfs filesystem or path to show')
def __init__(self, options):
managers = []
if options.path:
managers = [ Manager(path) for path in options.path ]
else:
managers = Manager.get_managers()
if options.table:
table = []
for manager in managers:
table.append({
'path': manager.path,
'zfs': manager.zfs,
'size': format_bytes(manager.size),
'total': len(manager.clones),
'older': len(manager.older_clones),
'newer': len(manager.newer_clones),
'oldest_id': manager.clones[0].id,
'active_id': manager.active_clone.id,
'newest_id': manager.clones[-1].id,
'next_id': manager.next_id
})
print_table(table)
else:
for manager in managers:
data = {
'Path': manager.path,
'Root ZFS': manager.zfs,
'Root ZFS size': format_bytes(manager.size),
'Total clone count': len(manager.clones),
'Older clone count': len(manager.older_clones),
'Newer clone count': len(manager.newer_clones),
'Oldest clone ID': manager.clones[0].id,
'Active clone ID': manager.active_clone.id,
'Newest clone ID': manager.clones[-1].id,
'Next clone ID': manager.next_id
}
print_info(data)
print() | zfs-clone-manager | /zfs-clone-manager-3.4.0.tar.gz/zfs-clone-manager-3.4.0/zcm/cli/information.py | information.py |
from collections import deque
from itertools import islice
from zcm import zcm_config
# TODO: use from prettytable import PrettyTable ?
def print_table(table, header=True, truncate=True, separation=2, identation=0, page_size=25):
if page_size <= 0:
page_size = None
i = iter(table)
ask_for_more = False
while True:
page = tuple(islice(i, 0, page_size))
if len(page):
if ask_for_more:
answer = input('Do you want to see more? (Y/n) ')
if answer and answer.upper()[0] == 'N':
return
print_table_page(page, header, truncate, separation, identation)
ask_for_more = True
else:
return
def print_table_page(page, header=True, truncate=True, separation=2, identation=0):
MAX_COLUMN_LENGTH = zcm_config['max_column_length']
if len(page) == 0:
return
columns = []
# initialize columns from first row's keys
for key in page[0]:
columns.append({
'key': key,
'tittle': key.upper(),
'length': len(key)
})
# adjust columns lenghts to max record sizes
for column in columns:
for row in page:
value = str(row[column['key']]).replace('\t', ' ')
row[column['key']] = value
column['length'] = max(column['length'], len(value))
if truncate:
for column in columns:
column['length'] = min(column['length'], MAX_COLUMN_LENGTH)
separation_string = ' ' * separation
# print headers
if header:
strings = [''] * identation if identation > 0 else []
for column in columns:
str_format = '{:%s}' % str(column['length'])
strings.append(str_format.format(column['tittle']))
print(separation_string.join(strings))
for row in page:
strings = [''] * identation if identation > 0 else []
for column in columns:
value = row[column['key']]
if truncate and len(value) > MAX_COLUMN_LENGTH:
value = value[:MAX_COLUMN_LENGTH-3] + '...'
str_format = '{:%s}' % str(column['length'])
strings.append(str_format.format(value))
print(separation_string.join(strings))
def format_bytes(size):
# 2**10 = 1024
power = 2**10
n = 0
power_labels = {0: 'B', 1: 'KB', 2: 'MB', 3: 'GB', 4: 'TB', 5: 'PB'}
while size > power:
size /= power
n += 1
return '{:.2f} {:s}'.format(size, power_labels[n])
def print_info(data):
for key, value in data.items():
print('%s: %s' % (key, value)) | zfs-clone-manager | /zfs-clone-manager-3.4.0.tar.gz/zfs-clone-manager-3.4.0/zcm/lib/print.py | print.py |
import io
import logging
import pathlib
import subprocess
from datetime import datetime
log = logging.getLogger(__name__)
class ZFSError(Exception):
def __init__(self, message="ZFS Error"):
super().__init__()
self.message = message
def get_cmd(command, arguments, options):
cmd = ['/usr/sbin/zfs', command]
if options is not None:
for option in options:
cmd += ['-o', option]
if arguments is not None:
cmd += arguments
log.debug('Running command: "' + ' '.join(cmd) + '"')
return cmd
def _zfs(command, arguments=None, options=None, stdout=None):
cmd = get_cmd(command, arguments, options)
return subprocess.Popen(cmd, stdout=stdout)
def zfs(command, arguments=None, options=None):
cmd = get_cmd(command, arguments, options)
process = subprocess.run(cmd, capture_output=True, text=True)
if process.stdout:
for line in iter(process.stdout.splitlines()):
log.info(line)
if process.stderr:
for line in iter(process.stderr.splitlines()):
if process.returncode == 0:
log.warning(line)
else:
log.error(line)
if process.returncode != 0:
raise ZFSError(process.stderr)
return process.stdout
def zfs_create(zfs_name, parent=None, mountpoint=None, compression=None, recursive=False, zcm_path=None):
filesystem = zfs_name
if parent is None:
last_index = zfs_name.rfind('/')
parent = filesystem[:last_index]
zfs_name = filesystem[last_index+1:]
else:
filesystem = '%s/%s' % (parent, zfs_name)
if recursive:
zfs_path = ''
is_zpool = True
for zfs_fs in parent.split('/'):
zfs_path += '/' if zfs_path else ''
zfs_path += zfs_fs
if not zfs_is_filesystem(zfs_path):
if is_zpool:
return None
zfs('create', [zfs_path])
is_zpool = False
options = []
if compression is not None:
options.append('compression=' + compression)
if mountpoint is not None:
options.append('mountpoint=' + str(mountpoint))
if zcm_path is not None:
options.append('zfs_clone_manager:path=' + str(zcm_path))
if len(options) == 0:
options = None
zfs('create', [filesystem], options)
return filesystem
def zfs_clone(zfs_name, snapshot, parent=None, mountpoint=None):
if not isinstance(zfs_name, str):
log.error('The ZFS clone name must be provided')
if not isinstance(snapshot, str):
log.error('The ZFS snapshot must be provided')
filesystem = zfs_name
if parent is not None:
filesystem = parent + '/' + zfs_name
options = []
if mountpoint is not None:
options.append('mountpoint=' + str(mountpoint))
zfs('clone', [snapshot, filesystem], options)
return filesystem
def zfs_set(zfs_name, readonly=None, mountpoint=None, zcm_path=None):
result = []
if readonly is not None:
option = 'readonly=' + ('on' if readonly else 'off')
result.append(zfs('set', [option, zfs_name]))
if mountpoint is not None:
result.append(zfs('set', ['mountpoint=' + str(mountpoint), zfs_name]))
if zcm_path is not None:
result.append(zfs(
'set', ['zfs_clone_manager:path=' + str(zcm_path), zfs_name]))
return '\n'.join(result)
def zfs_inherit(zfs_name, property_name):
return zfs('inherit', [property_name, zfs_name])
def value_convert(property_name, value):
if value == 'on':
return True
if value == 'off':
return False
if value == '-':
return None
if property_name in ['mountpoint', 'zfs_clone_manager:path']:
return pathlib.Path(value)
if property_name in ['creation', 'st_ctim', 'mtime', 'atime', 'crtime']:
try:
return datetime.fromtimestamp(float(value))
except ValueError:
pass
try:
return int(value)
except ValueError:
return value
def zfs_get(zfs_name, property_name):
if property_name == 'all':
raise NotImplementedError()
cmd = ['/usr/sbin/zfs', 'get', '-Hp', property_name, zfs_name]
with open('/dev/null', 'w') as dev_null:
log.debug('Running command: "' + ' '.join(cmd) + '"')
output = subprocess.check_output(cmd, stderr=dev_null)
value = output.decode('utf-8').split('\t')[2]
return value_convert(property_name, value)
return None
def zfs_snapshot(zfs_name, filesystem, recursive=False):
arguments = []
if recursive:
arguments.append('-r')
snapshot = filesystem + '@' + zfs_name
arguments.append(snapshot)
zfs('snapshot', arguments)
return snapshot
def zfs_destroy(zfs_name, recursive=False, synchronous=True):
arguments = []
if recursive:
arguments.append('-r')
if synchronous:
arguments.append('-s')
arguments.append(zfs_name)
return zfs('destroy', arguments)
def zfs_promote(zfs_name):
return zfs('promote', [zfs_name])
def zfs_send(last_snapshot, target_file_path, first_snapshot=None, recursive=False):
arguments = []
if recursive:
arguments.append('-R')
if first_snapshot is not None:
arguments += ['-I', first_snapshot]
arguments.append(last_snapshot)
with open(target_file_path, 'wb') as target_file:
return _zfs('send', arguments, stdout=target_file)
def zfs_list(zfs_name=None, zfs_type=None, recursive=False,
properties=['name', 'used', 'avail', 'refer', 'mountpoint']):
cmd = ['/usr/sbin/zfs', 'list', '-Hp']
if recursive:
cmd.append('-r')
if zfs_type is not None and zfs_type in ['all', 'filesystem',
'snapshot', 'volume']:
cmd += ['-t', zfs_type]
if properties is not None:
cmd += ['-o', ','.join(properties)]
if zfs_name is not None:
cmd.append(zfs_name)
try:
with open('/dev/null', 'w') as dev_null:
filesystems = []
log.debug('Running command: "' + ' '.join(cmd) + '"')
output = subprocess.check_output(cmd, stderr=dev_null)
for line in output.decode('utf-8').strip().split('\n'):
values = line.split('\t')
filesystem = {}
for property_name, value in zip(properties, values):
filesystem[property_name] = value_convert(
property_name, value)
filesystems.append(filesystem)
return filesystems
except:
return []
def zfs_exists(zfs_name):
filesystems = zfs_list(zfs_name, zfs_type='all', properties=['name'])
return len(filesystems) == 1
def zfs_is_filesystem(zfs_name):
try:
return zfs_get(zfs_name, 'type') == 'filesystem'
except:
return False
def zfs_is_snapshot(zfs_name):
try:
return zfs_get(zfs_name, 'type') == 'snapshot'
except:
return False
def zfs_diff(zfs_name, origin_snapshot=None, include_file_types=False, recursive=False):
# Implemented as generator, in case it is too big
file_types = {
'F': 'file',
'/': 'directory',
'B': 'device',
'>': 'door',
'|': 'fifo',
'@': 'link',
'P': 'portal',
'=': 'socket'
}
change_types = {
'+': 'Added',
'-': 'Removed',
'M': 'Modified',
'R': 'Renamed'
}
arguments = ['-H', '-t']
if include_file_types:
arguments.append('-F')
if recursive:
arguments.append('-r')
if origin_snapshot is None:
arguments.append('-E')
else:
arguments.append(origin_snapshot)
arguments.append(zfs_name)
mountpoint = None
if not '@' in zfs_name:
mountpoint = zfs_get(zfs_name, 'mountpoint')
process = _zfs('diff', arguments, stdout=subprocess.PIPE)
for line in io.TextIOWrapper(process.stdout, encoding="utf-8"):
records = line.strip().split('\t')
data = {}
file = pathlib.Path(records[-1])
if mountpoint is not None:
data['mountpoint'] = mountpoint
file = file.relative_to(mountpoint)
data['date'] = value_convert('st_ctim', records[0])
data['change'] = change_types[records[1]]
data['file'] = file
if include_file_types:
data['file_type'] = file_types[records[2]]
yield data
def zfs_rename(original_zfs_name, new_zfs_name):
arguments = [original_zfs_name, new_zfs_name]
return zfs('rename', arguments)
def zfs_mount(zfs_name):
return zfs('mount', [zfs_name])
def zfs_unmount(zfs_name):
return zfs('unmount', [zfs_name]) | zfs-clone-manager | /zfs-clone-manager-3.4.0.tar.gz/zfs-clone-manager-3.4.0/zcm/lib/zfs.py | zfs.py |
# zfs-replicate
https://github.com/alunduil/zfs-replicate
By Alex Brandt <[email protected]>
## Description
zfs-replicate sends all ZFS snapshots to a remote host via SSH. zfs-replicate
does **not** create ZFS snapshots.
zfs-replicate is based on [autorepl.py] used by [FreeNAS].
zfs-replicate is related to several other projects which fit other niches:
1. [sanoid]: A full snapshot management system. Its companion application,
syncoid, handles replication with many available options.
1. [zfs-replicate (BASH)]: A very similar project. The major differences are
configuration style (our project uses parameters whereas this project uses a
BASH script), and the system expectations (e.g., logging controls).
1. [znapzend]: Another scheduling and replicating system.
1. [zrep]: A SH script with several control commands for snapshot replication.
## Terms of Use
You are free to copy, modify, and distribute zfs-replicate with attribution
under the terms of the BSD-2-Clause licence. See the LICENSE for details.
## Prerequisites
1. A remote system with a ZFS filesystem and the zfs CLI tools
1. SSH access to that remote system
1. If you're not using the root user remotely:
1. Ensure the user can mount filesystems
1. [FreeBSD]: `sysctl -w vfs.usermount=1`
1. Add ZFS permissions
1. Command: `zfs allow ${USER} ${PERMISSIONS} ${BACKUP_DATASET}`
1. Permissions
1. clone
1. create
1. destroy
1. hold
1. mount
1. promote
1. quota
1. readonly
1. receive
1. rename
1. reservation
1. rollback
1. send
1. snapshot
1. A local ZFS filesystem and zfs CLI tools
_N.B., don't use the root user to access your remote system._
## How to use zfs-replicate
1. `poetry install`
1. `poetry run -- zfs-replicate --help`
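
A typical invocation might look like the following; the host name, dataset names, and identity file path here are placeholders, not defaults:

    poetry run -- zfs-replicate --verbose -l zfs -i ~/.ssh/backup_key backuphost.example.com backuppool/backups tank/data

This replicates the local `tank/data` tree to `backuppool/backups` on `backuphost.example.com`, connecting over SSH as the `zfs` user.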
## Documentation
* `zfs-replicate --help`: Help for zfs-replicate.
* LICENSE: Licence file explaining usage of zfs-replicate.
* [Survey of ZFS Replication Tools][survey]: Overview of various ZFS replication
tools and their uses.
* [Working With Oracle Solaris ZFS Snapshots and Clones]: Oracle's guide to
working with ZFS snapshots.
* [ZFS REMOTE REPLICATION SCRIPT WITH REPORTING]
* [ZFS replication without using Root user]: How to configure ZFS replication
for a non-root user.
## Getting Support
* [GitHub issues]: Report any problems or features requests to GitHub issues.
[autorepl.py]: https://github.com/freenas/freenas/blob/master/gui/tools/autorepl.py
[FreeBSD]: https://www.freebsd.org/
[FreeNAS]: http://www.freenas.org/
[GitHub issues]: https://github.com/alunduil/zfs-replicate/issues
[sanoid]: https://github.com/jimsalterjrs/sanoid
[survey]: https://www.reddit.com/r/zfs/comments/7fqu1y/a_small_survey_of_zfs_remote_replication_tools/
[Working With Oracle Solaris ZFS Snapshots and Clones]: https://docs.oracle.com/cd/E26505_01/html/E37384/gavvx.html#scrolltoc
[ZFS REMOTE REPLICATION SCRIPT WITH REPORTING]: https://techblog.jeppson.org/2014/10/zfs-remote-replication-script-with-reporting/
[zfs-replicate (BASH)]: https://github.com/leprechau/zfs-replicate
[ZFS replication without using Root user]: https://forums.freenas.org/index.php?threads/zfs-replication-without-using-root-user.21731/
[znapzend]: http://www.znapzend.org/
[zrep]: http://www.bolthole.com/solaris/zrep/
| zfs-replicate | /zfs_replicate-3.1.1.tar.gz/zfs_replicate-3.1.1/README.md | README.md |
import itertools
from typing import List, Tuple
from .. import filesystem, optional, snapshot
from ..compress import Compression
from ..filesystem import FileSystem
from .type import Action, Task
def execute(
remote: FileSystem,
tasks: List[Tuple[FileSystem, List[Task]]],
ssh_command: str,
follow_delete: bool,
compression: Compression,
) -> None:
"""Execute all tasks."""
sorted_tasks = sorted(tasks, key=lambda x: len(x[0].name.split("/")), reverse=True)
for _, filesystem_tasks in sorted_tasks:
action_tasks = {
action: list(action_tasks)
for action, action_tasks in itertools.groupby(
filesystem_tasks, key=lambda x: x.action
)
}
for action, a_tasks in action_tasks.items():
if action == Action.CREATE:
_create(a_tasks, ssh_command=ssh_command)
elif action == Action.DESTROY:
_destroy(a_tasks, ssh_command=ssh_command)
elif action == Action.SEND:
_send(
remote,
a_tasks,
ssh_command=ssh_command,
follow_delete=follow_delete,
compression=compression,
)
def _create(tasks: List[Task], ssh_command: str) -> None:
for task in tasks:
filesystem.create(task.filesystem, ssh_command=ssh_command)
def _destroy(tasks: List[Task], ssh_command: str) -> None:
for task in tasks:
if task.snapshot is None:
filesystem.destroy(task.filesystem, ssh_command=ssh_command)
else:
snapshot.destroy(task.snapshot, ssh_command=ssh_command)
def _send(
remote: FileSystem,
tasks: List[Task],
ssh_command: str,
follow_delete: bool,
compression: Compression,
) -> None:
for task in tasks:
snapshot.send(
remote,
optional.value(task.snapshot),
ssh_command=ssh_command,
compression=compression,
follow_delete=follow_delete,
previous=optional.value(task.snapshot).previous,
) | zfs-replicate | /zfs_replicate-3.1.1.tar.gz/zfs_replicate-3.1.1/zfs/replicate/task/execute.py | execute.py |
import itertools
from typing import List, Tuple
from ..filesystem import FileSystem
from ..snapshot import Snapshot
from .type import Action, Task
LIMITS = {"filesystem": 6, "action": 4, "snapshot": 13}
AFTERS = {"filesystem": "action", "action": "snapshot"}
def report(tasks: List[Task]) -> str:
"""Pretty printed report on given Tasks."""
filesystems = [
(filesystem, list(tasks))
for filesystem, tasks in itertools.groupby(tasks, key=lambda x: x.filesystem)
]
if len(filesystems) > LIMITS["filesystem"]:
return _counts("filesystem", tasks)
return _report_filesystem(filesystems)
def _report_filesystem(filesystems: List[Tuple[FileSystem, List[Task]]]) -> str:
output = ""
for filesystem, tasks in filesystems:
output += f"\nfilesystem: {filesystem.name}\n"
actions = [
(action, list(tasks))
for action, tasks in itertools.groupby(tasks, key=_action)
]
if len(actions) > LIMITS["action"]:
output += _counts("action", tasks, indentation=" ")
else:
output += _report_action(actions, indentation=" - ")
return output
def _report_action(
actions: List[Tuple[Action, List[Task]]], indentation: str = ""
) -> str:
output = ""
for action, tasks in actions:
output += f"{indentation}action: {action}\n"
snapshots = [
(snapshot, list(tasks))
for snapshot, tasks in itertools.groupby(tasks, key=lambda x: x.snapshot)
if snapshot is not None
]
if len(snapshots) > LIMITS["snapshot"]:
output += _counts("snapshot", tasks, indentation=" " + indentation)
else:
output += _report_snapshot(snapshots, indentation=" " + indentation)
return output
def _report_snapshot(
snapshots: List[Tuple[Snapshot, List[Task]]], indentation: str = ""
) -> str:
output = "\n".join(
[f"{indentation}snapshot: {s.filesystem.name}@{s.name}" for s, _ in snapshots]
)
if output:
output += "\n"
return output
def _counts(current: str, tasks: List[Task], indentation: str = "") -> str:
group = {getattr(x, current) for x in tasks}
output = f"{indentation}{current}:{len(group)}\n"
if current in AFTERS:
output += _counts(AFTERS[current], tasks, indentation=indentation)
return output
# Excising this makes typing happy. Check if it can be injected.
def _action(task: Task) -> Action:
return task.action | zfs-replicate | /zfs_replicate-3.1.1.tar.gz/zfs_replicate-3.1.1/zfs/replicate/task/report.py | report.py |
from typing import Dict, List
from ..filesystem import FileSystem
from ..filesystem import filesystem as filesystem_t
from ..filesystem import remote_filesystem
from ..list import venn
from ..snapshot import Snapshot
from .type import Action, Task
def generate(
remote: FileSystem,
local_snapshots: Dict[FileSystem, List[Snapshot]],
remote_snapshots: Dict[FileSystem, List[Snapshot]],
follow_delete: bool = False,
) -> List[Task]:
"""Generate Tasks for replicating local snapshots to remote snapshots."""
tasks = []
for filesystem in local_snapshots:
remote_snapshots = {
filesystem_t(
name=key.name.replace(remote.name + "/", ""),
readonly=filesystem.readonly,
): value
for key, value in remote_snapshots.items()
}
if filesystem not in remote_snapshots:
tasks.append(
Task(
action=Action.CREATE,
filesystem=remote_filesystem(remote, filesystem),
snapshot=None,
)
)
tasks.extend(
[
Task(action=Action.SEND, filesystem=remote, snapshot=s)
for s in local_snapshots[filesystem]
]
)
continue
lefts: List[Snapshot]
middles: List[Snapshot]
rights: List[Snapshot]
lefts, middles, rights = venn(
local_snapshots[filesystem], remote_snapshots[filesystem]
)
if not middles:
tasks.extend(
[
Task(
action=Action.DESTROY,
filesystem=remote_filesystem(remote, filesystem),
snapshot=s,
)
for s in rights
],
)
tasks.extend(
[Task(action=Action.SEND, filesystem=remote, snapshot=s) for s in lefts]
)
if middles and follow_delete:
tasks.extend(
[
Task(
action=Action.DESTROY,
filesystem=remote_filesystem(remote, filesystem),
snapshot=s,
)
for s in rights
],
)
for filesystem in remote_snapshots:
filesystem = filesystem_t(
name=filesystem.name.replace(remote.name + "/", ""),
readonly=filesystem.readonly,
)
if filesystem not in local_snapshots:
tasks.extend(
[
Task(
action=Action.DESTROY,
filesystem=remote_filesystem(remote, filesystem),
snapshot=s,
)
for s in remote_snapshots[filesystem]
],
)
tasks.append(
Task(
action=Action.DESTROY,
filesystem=remote_filesystem(remote, filesystem),
snapshot=None,
)
)
return tasks | zfs-replicate | /zfs_replicate-3.1.1.tar.gz/zfs_replicate-3.1.1/zfs/replicate/task/generate.py | generate.py |
import itertools
import click
from .. import filesystem, snapshot, ssh, task
from ..compress import Compression
from ..filesystem import FileSystem
from ..filesystem import filesystem as filesystem_t
from ..ssh import Cipher
from .click import EnumChoice
@click.command() # type: ignore[misc]
@click.option("--verbose", "-v", is_flag=True, help="Print additional output.") # type: ignore[misc]
@click.option( # type: ignore[misc]
"--dry-run",
is_flag=True,
help="Generate replication tasks but do not execute them.",
)
@click.option( # type: ignore[misc]
"--follow-delete",
is_flag=True,
help="Delete snapshots on REMOTE_FS that have been deleted from LOCAL_FS.",
)
@click.option("--recursive", is_flag=True, help="Recursively replicate snapshots.") # type: ignore[misc]
@click.option( # type: ignore[misc]
"--port",
"-p",
type=click.IntRange(1, 65535),
metavar="PORT",
default=22,
help="Connect to SSH on PORT.",
)
@click.option( # type: ignore[misc]
"--login",
"-l",
"--user",
"-u",
"user",
metavar="USER",
help="Connect to SSH as USER.",
)
@click.option( # type: ignore[misc]
"-i",
"--identity-file",
type=click.Path(exists=True, dir_okay=False),
required=True,
help="SSH identity file to use.",
)
@click.option( # type: ignore[misc]
"--cipher",
type=EnumChoice(Cipher),
default=Cipher.STANDARD,
help="One of: disable (no ciphers), fast (only fast ciphers), or standard (default ciphers).",
)
@click.option( # type: ignore[misc]
"--compression",
type=EnumChoice(Compression),
default=Compression.LZ4,
help="One of: off (no compression), lz4 (fastest), pigz (all rounder), or plzip (best compression).",
)
@click.argument("host", required=True) # type: ignore[misc]
@click.argument("remote_fs", type=filesystem_t, required=True, metavar="REMOTE_FS") # type: ignore[misc]
@click.argument("local_fs", type=filesystem_t, required=True, metavar="LOCAL_FS") # type: ignore[misc]
def main( # pylint: disable=R0914,R0913
verbose: bool,
dry_run: bool,
follow_delete: bool,
recursive: bool,
port: int,
user: str,
identity_file: str,
cipher: Cipher,
compression: Compression,
host: str,
remote_fs: FileSystem,
local_fs: FileSystem,
) -> None:
"""Replicate LOCAL_FS to REMOTE_FS on HOST."""
ssh_command = ssh.command(cipher, user, identity_file, port, host)
if verbose:
click.echo(f"checking filesystem {local_fs.name}")
l_snaps = snapshot.list(local_fs, recursive=recursive)
# Improvement: exclusions from snapshots to replicate.
if verbose:
click.echo(f"found {len(l_snaps)} snapshots on {local_fs.name}")
click.echo()
r_filesystem = filesystem.remote_dataset(remote_fs, local_fs)
filesystem.create(r_filesystem, ssh_command=ssh_command)
if verbose:
click.echo(f"checking filesystem {host}/{r_filesystem.name}")
r_snaps = snapshot.list(r_filesystem, recursive=recursive, ssh_command=ssh_command)
if verbose:
click.echo(f"found {len(r_snaps)} snapshots on {r_filesystem.name}")
click.echo()
filesystem_l_snaps = {
filesystem: list(l_snaps)
for filesystem, l_snaps in itertools.groupby(
l_snaps, key=lambda x: x.filesystem
)
}
filesystem_r_snaps = {
filesystem: list(r_snaps)
for filesystem, r_snaps in itertools.groupby(
r_snaps, key=lambda x: x.filesystem
)
}
tasks = task.generate(
remote_fs, filesystem_l_snaps, filesystem_r_snaps, follow_delete=follow_delete
)
if verbose:
click.echo(task.report(tasks))
if not dry_run:
filesystem_tasks = [
(filesystem, list(tasks))
for filesystem, tasks in itertools.groupby(
tasks, key=lambda x: x.filesystem
)
]
task.execute(
remote_fs,
filesystem_tasks,
follow_delete=follow_delete,
compression=compression,
ssh_command=ssh_command,
) | zfs-replicate | /zfs_replicate-3.1.1.tar.gz/zfs_replicate-3.1.1/zfs/replicate/cli/main.py | main.py |
from typing import List, Optional
from .. import subprocess # nosec
from ..error import ZFSReplicateError
from ..filesystem import FileSystem, filesystem
from .type import Snapshot
def list( # pylint: disable=W0622
filesystem: FileSystem, # pylint: disable=W0621
recursive: bool,
ssh_command: Optional[str] = None,
) -> List[Snapshot]:
"""List ZFS snapshots."""
command = _list(filesystem, recursive)
if ssh_command is not None:
command = ssh_command + " " + command
proc = subprocess.open(command)
output, error = proc.communicate()
if error is not None:
error = (
error.strip(b"\n")
.strip(b"\r")
.replace(b"WARNING: ENABLED NONE CIPHER", b"")
)
if proc.returncode:
raise ZFSReplicateError(
f"error encountered while listing snapshots of '{filesystem.name}': {error!r}",
filesystem,
error,
)
return _snapshots(output)
def _list(filesystem: FileSystem, recursive: bool) -> str: # pylint: disable=W0621
"""ZFS List Snapshot command."""
options = ["-H", "-t snapshot", "-p", "-o name,creation", "-r"]
if not recursive:
options.append("-d 1")
return f"/usr/bin/env - zfs list {' '.join(options)} '{filesystem.name}'"
def _snapshots(zfs_list_output: bytes) -> List[Snapshot]:
snapshots = [_snapshot(x) for x in zfs_list_output.split(b"\n") if x != b""]
if not snapshots:
return snapshots
snapshots[0] = _add_previous(snapshots[0], None)
return [snapshots[0]] + [
_add_previous(s, p) for s, p in zip(snapshots[1:], snapshots)
]
def _snapshot(zfs_list_line: bytes) -> Snapshot:
name, timestamp = zfs_list_line.split(b"\t")
my_filesystem, name = name.split(b"@")
return Snapshot(
filesystem=filesystem(name=my_filesystem.decode("utf-8")),
previous=None,
name=name.decode("utf-8"),
timestamp=int(timestamp),
)
def _add_previous(snapshot: Snapshot, previous: Optional[Snapshot] = None) -> Snapshot:
if previous is not None and snapshot.filesystem != previous.filesystem:
previous = None
return Snapshot(
filesystem=snapshot.filesystem,
name=snapshot.name,
previous=previous,
timestamp=snapshot.timestamp,
) | zfs-replicate | /zfs_replicate-3.1.1.tar.gz/zfs_replicate-3.1.1/zfs/replicate/snapshot/list.py | list.py |
import subprocess # nosec
from typing import Optional
from .. import compress, filesystem
from ..compress import Compression
from ..error import ZFSReplicateError
from ..filesystem import FileSystem
from .type import Snapshot
def send( # pylint: disable=R0913
remote: FileSystem,
current: Snapshot,
ssh_command: str,
compression: Compression,
follow_delete: bool,
previous: Optional[Snapshot] = None,
) -> None:
"""Send ZFS Snapshot."""
send_command = _send(current, previous, follow_delete=follow_delete)
compress_command, decompress_command = compress.command(compression)
receive_command = (
compress_command
+ ssh_command
+ " "
+ f'"{_receive(remote, current, decompress_command)}"'
)
command = send_command + " | " + receive_command
proc = subprocess.Popen( # pylint: disable=R1732
command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE # nosec
)
output, error = proc.communicate()
output = (
output.strip(b"\n").strip(b"\r").replace(b"WARNING: ENABLED NONE CIPHER", b"")
)
if proc.returncode:
if b"failed to create mountpoint" in error:
return # Ignore this error.
raise ZFSReplicateError(
f"failed to create snapshot: '{current.filesystem.name}@{current.name}': {error!r}",
current,
error,
)
def _send(
current: Snapshot, previous: Optional[Snapshot] = None, follow_delete: bool = False
) -> str:
options = ["--raw"] # ["-V"]
if follow_delete:
options.append("-p")
if previous is not None:
options.append(f"-i '{previous.filesystem.name}@{previous.name}'")
return f"/usr/bin/env - zfs send {' '.join(options)} '{current.filesystem.name}@{current.name}'"
def _receive(remote: FileSystem, current: Snapshot, decompress_command: str) -> str:
destination = filesystem.remote_dataset(remote, current.filesystem)
return f"{decompress_command}/usr/bin/env - zfs receive -F -d '{destination.name}'" | zfs-replicate | /zfs_replicate-3.1.1.tar.gz/zfs_replicate-3.1.1/zfs/replicate/snapshot/send.py | send.py |
# ZFS tools
*This repo has diverged from the upstream
[Rudd-O/zfs-tools](http://github.com/Rudd-O/zfs-tools), as described
[here](https://github.com/Rudd-O/zfs-tools/issues/25). It is now
being maintained directly here by the original author of the `zbackup`
utility, and has its own release numbering.*
*Differences from upstream include (but may not be limited to):*
* *parallel replication, for increased performance of `zreplicate`*
* *locking of filesystems during replication, to avoid problems
arising from very long replications*
* *resumable replications using the facility introduced in zfs 0.7.0*
*This version of the zfs-tools suite is used in production on the
author's system comprising 7 fileservers, 3400 filesystems, and 550TB
of data.*
The ZFS backup tools will help you graft an entire ZFS pool as a filesystem
into a backup machine, without having to screw around snapshot names or
complicated shell commands or crontabs.
The utilities let you do this:
1. zfs-shell:
a shell that allows remote ZFS administration and nothing more
3. zsnap:
a command that snapshots a dataset or pool, then deletes old snapshots
4. zreplicate
a command that replicates an entire dataset tree using ZFS replication
streams. Best used in combination with zsnap as in:
- zsnap on the local machine
- zreplicate from the local machine to the destination machine
Obsolete snapshots deleted by zsnap will be automatically purged on
the destination machine by zreplicate, as a side effect of using
replication streams. To inhibit this, use the
--no-replication-stream option.
Run `zreplicate --help` for a compendium of options you may use.
4. zbackup:
a command to snapshot and replicate filesystems according to their user properties.
This uses zsnap and zreplicate to do the work, which is all driven by properties.
For details, see this [further description of zbackup](doc/README-zbackup.md).
5. zflock:
a command to lock a filesystem against replication by zbackup.
For details, see this [further description of zbackup](doc/README-zbackup.md).
## Setting up
Setup is rather complicated. It assumes that you already have ZFS running
and vaults on both the machine you're going to back up and the machine that
will be receiving the backup.
### On the machine to back up
- Install the zfs-shell command
`cp zfs-shell /usr/local/sbin`
`chmod 755 /usr/local/sbin/zfs-shell`
`chown root.root /usr/local/sbin/zfs-shell`
- Create a user with a home directory and shell `zfs-shell`
`useradd -rUm -b /var/lib -s /usr/local/sbin/zfs-shell zfs`
- Let `sudo` know that the new user can run the zfs command
`zfs ALL = NOPASSWD: /usr/local/sbin/zfs`
(ensure you remove the `requiretty` default on `/etc/sudoers`)
(check `sudoers.zfs-tools` in `contrib/` for an example)
- Set up a cron job to run `zsnap` as frequently as you want to,
snapshotting the dataset you intend to replicate.
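
  For example, a minimal hourly crontab entry (the pool name matches the test below; adjust the schedule and any zsnap options to taste):

      0 * * * * zsnap senderpool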
### On the backup machine
- Set up public key authentication for SSH so the backup machine
  may log in as the user `zfs` (as laid out above) on the machine to
be backed up.
- Create a dataset to receive the backup stream.
- Set up a cron job to fetch the dataset snapshotted by zsnap
from the remote machine into the newly created dataset. You
will use `zreplicate` for that (see below for examples).
- After the first replication, you may want to set the `mountpoint`
attributes on the received datasets so they do not automount
on the backup machine.
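
  For example, using the dataset name from the test below (setting `canmount=noauto` instead is an alternative):

      zfs set mountpoint=none receiverpool/senderpool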
### Test
If all went well, you should be able to do this without issue:
(on the machine to back up)
[root@peter]
zsnap senderpool
(on the machine to receive)
[root@paul]
zfs create receiverpool/senderpool # <--- run this ONLY ONCE
zreplicate -o zfs@paul:senderpool receiverpool/senderpool
# this should send the entire senderpool with all snapshots
# over from peter to paul, placing it in receiverpool/senderpool
(on the machine to back up)
[root@peter]
zsnap senderpool
(on the machine to receive)
[root@paul]
zreplicate -o zfs@paul:senderpool receiverpool/senderpool
# this should send an incremental stream of senderpool
# into receiverpool/senderpool
And that's it, really.
| zfs-tools-zbackup | /zfs-tools-zbackup-0.5.1.tar.gz/zfs-tools-zbackup-0.5.1/README.md | README.md |
# zbackup
## Introduction
zbackup is a front-end for a backup service using ZFS snapshots and replication to safely replicate a set of ZFS filesystems onto another server. It makes use of zsnap and zreplicate, so ensure these are working nicely before trying to get going with zbackup.
## ZFS properties governing zbackup behaviour
zbackup is driven by ZFS properties, so your scripts and/or crontab entries need make no mention of particular ZFS datasets, number of snapshots to keep, etc.
The following user properties define the behaviour, where *tier* is arbitrary, but expected to be e.g. hourly, daily, weekly, etc. All properties must be in the module `com.github.tesujimath.zbackup`, so prefix each property listed here with `com.github.tesujimath.zbackup:`, following the best practice for user properties as described on the zfs man page.
- `*tier*-snapshots` - turns on snapshots, and limits how many snapshots to keep in given tier
- `*tier*-snapshot-limit` - limits how many snapshots to keep in given tier (overrides *tier*-snapshots)
- `replica` - comma-separated list of dstdatasetname, as used by zreplicate
- `replicate` - *tier*, which tier to replicate
See `zbackup --list`, `zbackup --set`, and `zbackup --unset` below for an easy interface to listing, setting, and unsetting these properties.
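These are ordinary ZFS user properties, so they can also be set and inspected with plain `zfs` commands. A hedged sketch, using the same example dataset as further down (this should be equivalent to the `zbackup --set` example, but `zbackup --set` remains the recommended interface):
```
# zfs set com.github.tesujimath.zbackup:daily-snapshots=6 zpoolsrc/playpen
# zfs set com.github.tesujimath.zbackup:replicate=daily zpoolsrc/playpen
# zfs get -s local,received all zpoolsrc/playpen | grep com.github.tesujimath.zbackup
```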
Snapshotting for a given tier will be active as soon as `*tier*-snapshots` is defined with an integer value, with a property source of local. Received properties will not cause new snapshots to be taken.
However, old snapshots will be reaped if the property source is local or received. This means that reaping old snapshots on a destination replica is driven by the received property `*tier*-snapshots`, or the property `*tier*-snapshot-limit`, with the latter overriding the former if both are present. Note that the limit property functions even if its source is inherited.
Replication is done for a single tier only, as per the 'replicate' property. Again, these properties must have the source being local to have any effect. Note that the `--no-replication-stream` option for zreplicate is used, so that no destination replica snapshots and filesystems are deleted as a side-effect of running a backup. To purge obsolete snapshots from the destination, it is recommended to use the behaviour described in the previous paragraph.
## Locking of filesystems during replication
Replicating a large filesystem can take many hours, perhaps so long that another zbackup instance is started by cron in the meantime. For this reason, all filesystems to be replicated are locked using `zflock`. A subsequent `zbackup` will simply skip any such locked filesystem.
To manually disable replication of a filesystem, `zflock` may be run by hand. This may be useful for example when migrating replicas from one replica server to another. See `zflock --help` for details.
Note that `zflock` has no effect beyond disabling replication by `zbackup`. (It does nothing at the ZFS filesystem level. It simply creates a lockfile in */var/lib/zfs-tools/zflock*, which is checked only by later instances of `zflock`.)
A previous release of these tools included a `zlock` command, which used lock directories. In contrast, zflock uses flock, and therefore only locks filesystems for the duration of the command it runs.
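As a sketch (the dataset name is illustrative; see `zflock --help` for the authoritative syntax), holding a lock for the duration of a command looks like this; while the command runs, zbackup will skip replicating that filesystem:
```
# zflock zpoolsrc/playpen -- sleep 3600
```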
## ssh authentication
It is up to you to arrange your own ssh authentication. For example, you could use an ssh agent and ssh public key authentication, or say Kerberos. (The crontab example below assumes Kerberos, which explains the call to kinit to acquire a Kerberos ticket from the local keytab file.)
## Interfacing with cron
zbackup is best run as a cron job. It is up to you to define the tiers which make sense to you, but these would be reasonable entries in a crontab:
```
0 8,9,10,11,12,13,14,15,16,17,18 * * * zbackup -v -t '\%Y-\%m-\%d-\%H\%M' hourly >/root/zbackup.hourly 2>&1
0 23 * * * kinit -k -C root/HOST.YOURDOMAIN ; zbackup -v -t '\%Y-\%m-\%d-\%H\%M' -d hourly daily >/root/zbackup.daily 2>&1
30 23 * * 0 zbackup -v -t '\%Y-\%m-\%d-\%H\%M' weekly >/root/zbackup.weekly 2>&1
45 23 1 * * zbackup -v -t '\%Y-\%m-\%d-\%H\%M' monthly >/root/zbackup.monthly 2>&1
```
### Notes
1. Percent signs need to be escaped in crontabs.
2. I specify the timestamp format explicitly, to avoid seconds appearing in the timestamps. You may choose to not care about that.
3. My daily tier sets up ssh authentication by acquiring a Kerberos ticket from the local keytab. This is for a system which performs replication in the daily tier. You will have to change this to match your system requirements.
4. The `-d hourly` option in the daily zbackup deletes all hourly snapshots, so these do not appear on the destination replica. It is up to you to decide what behaviour you want.
## Getting started
Run `zbackup --help` for the usage, and complete options.
Run `zbackup --list` to see what backup properties are set.
Setting and unsetting the properties used by zbackup is most easily done using `zbackup --set` and `zbackup --unset`. For example:
```
# zbackup --set zpoolsrc/playpen daily-snapshots=6 weekly-snapshots=5 replica=MYREPLICASERVER:zpooldst/playpen2/replica/zpoolsrc/playpen replicate=daily
# zbackup --list
```
## Error reporting
It is clearly rather important to know if zbackup fails. Any or all of these three mechanisms can be used:
1. Non-zero exit status
2. Error text appearing on stderr.
3. Optionally, use the -e option to email the given recipient on failure.
It is recommended to use these to check carefully that replication in particular is working as intended.
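For example, the daily run could be invoked as follows (the recipient address is illustrative):
```
# zbackup -v -e [email protected] -d hourly daily
```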
## The dangers of ZFS replication streams when used for backup
The default behaviour of zreplicate is to use ZFS replication streams, i.e. `zfs send -R`. This is inhibited when zreplicate is called from zbackup, using the `--no-replication-stream` option to zreplicate.
The problem with ZFS replication streams arises when you have nested ZFS datasets, e.g. home, with a dataset for each user's home directory. If a user's home is deleted on the source side, using zfs destroy, then `zfs send -R` will replicate this deletion to the destination side.
zbackup avoids this unsafe behaviour following a `zfs destroy` operation on the source side.
## Author
zbackup was written by Simon Guest, developed in the [tesujimath fork of zfs-tools](https://github.com/tesujimath/zfs-tools-zbackup), and initially contributed upstream, although later diverging. Thanks to the original author of zfs-tools for providing an excellent framework on which to base zbackup.
| zfs-tools-zbackup | /zfs-tools-zbackup-0.5.1.tar.gz/zfs-tools-zbackup-0.5.1/doc/README-zbackup.md | README-zbackup.md |
import sys
import os
import subprocess
def simplify(x):
'''Take a list of tuples where each tuple is in form [v1,v2,...vn]
and then coalesce all tuples tx and ty where tx[v1] equals ty[v2],
    preserving v3...vn of ty and discarding v3...vn of tx.
m = [
(1,2,"one"),
(2,3,"two"),
(3,4,"three"),
(8,9,"three"),
(4,5,"four"),
(6,8,"blah"),
]
    simplify(m) -> [[1, 5, 'one'], [6, 9, 'blah']]
'''
y = list(x)
if len(x) < 2: return y
for idx,o in enumerate(list(y)):
for idx2,p in enumerate(list(y)):
if idx == idx2: continue
if o and p and o[0] == p[1]:
y[idx] = None
y[idx2] = list(p)
y[idx2][0] = p[0]
y[idx2][1] = o[1]
return [ n for n in y if n is not None ]
def uniq(seq, idfun=None):
'''Makes a sequence 'unique' in the style of UNIX command uniq'''
# order preserving
if idfun is None:
def idfun(x): return x
seen = {}
result = []
for item in seq:
marker = idfun(item)
# in old Python versions:
# if seen.has_key(marker)
# but in new ones:
if marker in seen: continue
seen[marker] = 1
result.append(item)
return result
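# Illustrative examples of uniq:
#   uniq([1, 2, 2, 3, 1])                  -> [1, 2, 3]
#   uniq(["a", "A", "b"], idfun=str.lower) -> ['a', 'b']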
class SpecialPopen(subprocess.Popen):
def __init__(self, *a, **kw):
self._saved_args = a[0] if kw.get("args") is None else kw.get("args")
subprocess.Popen.__init__(self, *a, **kw)
def progressbar(pipe, bufsize=-1, ratelimit=-1):
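    '''Wrap `pipe` in a progress bar process (clpbar/bar or pv, whichever is found) and return the resulting SpecialPopen object.'''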
def clpbar(cmdname):
barargs = []
if bufsize != -1:
barargs = ["-bs", str(bufsize)]
if ratelimit != -1:
barargs = barargs + ['-th', str(ratelimit)]
barprg = SpecialPopen(
[cmdname, "-dan"] + barargs,
stdin=pipe, stdout=subprocess.PIPE, bufsize=bufsize)
return barprg
def pv(cmdname):
barargs = []
if bufsize != -1:
barargs = ["-B", str(bufsize)]
if ratelimit != -1:
barargs = barargs + ['-L', str(ratelimit)]
barprg = SpecialPopen(
[cmdname, "-ptrb"] + barargs,
stdin=pipe, stdout=subprocess.PIPE, bufsize=bufsize)
return barprg
barprograms = [
("bar", clpbar),
("clpbar", clpbar),
("pv", pv),
]
for name, func in barprograms:
try:
subprocess.call([name, '-h'], stdout=file(os.devnull, "w"), stderr=file(os.devnull, "w"), stdin=file(os.devnull, "r"))
except OSError, e:
if e.errno == 2: continue
assert 0, "not reached while searching for clpbar or pv"
return func(name)
raise OSError(2, "no such file or directory searching for clpbar or pv")
def stderr(text):
"""print out something to standard error, followed by an ENTER"""
sys.stderr.write(text)
sys.stderr.write("\n")
__verbose = False
def verbose_stderr(*args, **kwargs):
global __verbose
if __verbose: stderr(*args, **kwargs)
def set_verbose(boolean):
global __verbose
__verbose = boolean | zfs-tools-zbackup | /zfs-tools-zbackup-0.5.1.tar.gz/zfs-tools-zbackup-0.5.1/src/zfstools/util.py | util.py |
import subprocess
import os
from zfstools.models import PoolSet
from zfstools.util import progressbar, SpecialPopen, verbose_stderr
from Queue import Queue
from threading import Thread
# Work-around for check_output not existing on Python 2.6, as per
# http://stackoverflow.com/questions/4814970/subprocess-check-output-doesnt-seem-to-exist-python-2-6-5
# The implementation is lifted from
# http://hg.python.org/cpython/file/d37f963394aa/Lib/subprocess.py#l544
if "check_output" not in dir( subprocess ): # duck punch it in!
def f(*popenargs, **kwargs):
if 'stdout' in kwargs:
raise ValueError('stdout argument not allowed, it will be overridden.')
process = subprocess.Popen(stdout=subprocess.PIPE, *popenargs, **kwargs)
output, unused_err = process.communicate()
retcode = process.poll()
if retcode:
cmd = kwargs.get("args")
if cmd is None:
cmd = popenargs[0]
raise subprocess.CalledProcessError(retcode, cmd) # , output=output)
return output
subprocess.check_output = f
class ZFSConnection:
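    '''Runs zfs commands on the local host or on a remote host over ssh, caching a PoolSet built from `zfs list` output.'''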
host = None
_poolset = None
_dirty = True
_trust = False
_properties = None
def __init__(self,host="localhost", trust=False, sshcipher=None, properties=None, identityfile=None, knownhostsfile=None, verbose=False):
self.host = host
self._trust = trust
self._properties = properties if properties else []
self._poolset= PoolSet()
self.verbose = verbose
if host in ['localhost','127.0.0.1']:
self.command = []
else:
self.command = ["ssh","-o","BatchMode=yes","-a","-x"]
if self._trust:
self.command.extend(["-o","CheckHostIP=no"])
self.command.extend(["-o","StrictHostKeyChecking=no"])
if sshcipher != None:
self.command.extend(["-c",sshcipher])
if identityfile != None:
self.command.extend(["-i",identityfile])
if knownhostsfile != None:
self.command.extend(["-o","UserKnownHostsFile=%s" % knownhostsfile])
self.command.extend([self.host])
def _get_poolset(self):
if self._dirty:
properties = [ 'creation' ] + self._properties
stdout2 = subprocess.check_output(self.command + ["zfs", "list", "-Hpr", "-o", ",".join( ['name'] + properties ), "-t", "all"])
self._poolset.parse_zfs_r_output(stdout2,properties)
self._dirty = False
return self._poolset
pools = property(_get_poolset)
def create_dataset(self, name, parents=False):
parents_opt = ["-p"] if parents else []
subprocess.check_call(self.command + ["zfs", "create"] + parents_opt + [name])
self._dirty = True
return self.pools.lookup(name)
def destroy_dataset(self, name):
subprocess.check_call(self.command + ["zfs", "destroy", name])
self._dirty = True
def destroy_recursively(self, name, returnok=False):
"""If returnok, then simply return success as a boolean."""
ok = True
cmd = self.command + ["zfs", "destroy", '-r', name]
if returnok:
ok = subprocess.call(cmd) == 0
else:
subprocess.check_call(cmd)
self._dirty = True
return ok
def snapshot_recursively(self,name,snapshotname,properties={}):
plist = sum( map( lambda x: ['-o', '%s=%s' % x ], properties.items() ), [] )
subprocess.check_call(self.command + ["zfs", "snapshot", "-r" ] + plist + [ "%s@%s" % (name, snapshotname)])
self._dirty = True
def send(self,name,opts=None,bufsize=-1,compression=False,lockdataset=None):
if not opts: opts = []
cmd = list(self.command)
if compression and cmd[0] == 'ssh': cmd.insert(1,"-C")
if lockdataset is not None:
cmd += ["zflock"]
if self.verbose:
cmd += ["-v"]
cmd += [lockdataset, "--"]
cmd += ["zfs", "send"] + opts
if "-t" not in opts:
# if we're resuming, don't specify the name of what to send
cmd += [name]
verbose_stderr("%s\n" % ' '.join(cmd))
p = SpecialPopen(cmd,stdin=file(os.devnull),stdout=subprocess.PIPE,bufsize=bufsize)
return p
def receive(self,name,pipe,opts=None,bufsize=-1,compression=False,lockdataset=None):
if not opts: opts = []
cmd = list(self.command)
if compression and cmd[0] == 'ssh': cmd.insert(1,"-C")
if lockdataset is not None:
cmd += ["zflock"]
if self.verbose:
cmd += ["-v"]
cmd += [lockdataset, "--"]
cmd = cmd + ["zfs", "receive"] + opts + [name]
verbose_stderr("%s\n" % ' '.join(cmd))
p = SpecialPopen(cmd,stdin=pipe,bufsize=bufsize)
return p
def transfer(self, dst_conn, s, d, fromsnapshot=None, showprogress=False, bufsize=-1, send_opts=None, receive_opts=None, ratelimit=-1, compression=False, locksrcdataset=None, lockdstdataset=None, verbose=False, resumable=False):
if send_opts is None: send_opts = []
if receive_opts is None: receive_opts = []
try:
resume_token = dst_conn.pools.lookup(d).get_property("receive_resume_token")
except:
resume_token = None
queue_of_killables = Queue()
if fromsnapshot: fromsnapshot=["-i",fromsnapshot]
else: fromsnapshot = []
if verbose: verbose_opts = ["-v"]
else: verbose_opts = []
        # Regardless of whether resumable is requested this time, if
# there's a resume token on the destination, we have to use it.
if resume_token is not None:
all_send_opts = ["-t", resume_token] + verbose_opts
else:
all_send_opts = [] + fromsnapshot + send_opts + verbose_opts
sndprg = self.send(s, opts=all_send_opts, bufsize=bufsize, compression=compression, lockdataset=locksrcdataset)
sndprg_supervisor = Thread(target=lambda: queue_of_killables.put((sndprg, sndprg.wait())))
sndprg_supervisor.start()
if showprogress:
try:
barprg = progressbar(pipe=sndprg.stdout,bufsize=bufsize,ratelimit=ratelimit)
barprg_supervisor = Thread(target=lambda: queue_of_killables.put((barprg, barprg.wait())))
barprg_supervisor.start()
sndprg.stdout.close()
except OSError:
os.kill(sndprg.pid,15)
raise
else:
barprg = sndprg
try:
if resumable: resumable_recv_opts = ["-s"]
else: resumable_recv_opts = []
all_recv_opts = ["-Fu"] + verbose_opts + resumable_recv_opts + receive_opts
rcvprg = dst_conn.receive(d,pipe=barprg.stdout,opts=all_recv_opts,bufsize=bufsize,compression=compression, lockdataset=lockdstdataset)
rcvprg_supervisor = Thread(target=lambda: queue_of_killables.put((rcvprg, rcvprg.wait())))
rcvprg_supervisor.start()
barprg.stdout.close()
except OSError:
os.kill(sndprg.pid, 15)
if sndprg.pid != barprg.pid: os.kill(barprg.pid, 15)
raise
dst_conn._dirty = True
allprocesses = set([rcvprg, sndprg]) | ( set([barprg]) if showprogress else set() )
while allprocesses:
diedprocess, retcode = queue_of_killables.get()
allprocesses = allprocesses - set([diedprocess])
if retcode != 0:
[ p.kill() for p in allprocesses ]
raise subprocess.CalledProcessError(retcode, diedprocess._saved_args) | zfs-tools-zbackup | /zfs-tools-zbackup-0.5.1.tar.gz/zfs-tools-zbackup-0.5.1/src/zfstools/connection.py | connection.py |
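
# Hypothetical usage sketch (host, dataset, and snapshot names are illustrative only):
#
#   local = ZFSConnection()                  # localhost
#   remote = ZFSConnection("paul")           # remote host over ssh
#   local.snapshot_recursively("senderpool", "2020-01-01-0000")
#   local.transfer(remote, "senderpool@2020-01-01-0000", "receiverpool/senderpool",
#                  showprogress=True, resumable=True)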