id
int64 11
59.9k
| original
stringlengths 33
150k
| modified
stringlengths 37
150k
|
---|---|---|
5,751 | def poisson_means_test(k1, n1, k2, n2, diff=0, alternative='two-sided'):
r"""
Calculates the poisson mean test, the "E-test", for the mean difference of
two samples that follow a Poisson distribution from descriptive statistics.
This is a two-sided test. The null hypothesis is that two independent
samples have identical average (expected) values.
Let :math:`X_{11},...,X_{1n_1}` and :math:`X_{21},...,X_{2n_2}` be
independent samples from distributions :math:`Poisson(\lambda_1)` and
:math:`Poisson(\lambda_2)`. It is well known that :math:`X_1`
and :math:`X_2` are independent:
.. math:: X_1 = \sum_{i=1}^{n_1} X_{1i} \sim Poisson(n_1\lambda_1)
.. math:: X_2 = \sum_{i=1}^{n_2} X_{2i} \sim Poisson(n_2\lambda_2)
Let `count1` and `count2` be the observed values of :math:`X_1` and
:math:`X_2`, respectively. The null hypothesis and alternative
hypothesis under comparison are
.. math::
H_0: \lambda_1 = \lambda_2 + \mathtt{diff} \quad vs. \quad
H_a: \lambda_1 \ne \lambda_2 + \mathtt{diff}
for ``alternative=two-sided``, where :math:`\mathtt{diff} \ge 0`.
Parameters
----------
k1 : int
Sample values of interest from sample 1.
n1: int
Sample size from sample 1.
k2 : int
Sample values of interest from sample 2.
n2: int
Sample size from sample 2.
diff : int or float, optional
The difference of mean between two samples under the null hypothesis
alternative : {'two-sided', 'less', 'greater'}, optional
Defines the alternative hypothesis.
The following options are available (default is 'two-sided'):
* 'two-sided': :math:`\lambda_1 \ne \lambda_2 + \mathtt{diff}`
* 'less': :math:`\lambda_1 \le \lambda_2 + \mathtt{diff}`
* 'greater': :math:`\lambda_1 \ge \lambda_2 + \mathtt{diff}`
Returns
-------
statistic : float
The test statistic calculated from observed samples
pvalue : float
The associated p-value based on the estimated p-value of the
standardized difference.
Notes
-----
A benefit of the E-test is that it maintains its power even
with smaller sample sizes which can reduce sampling costs [1]_. It has
been evaluated and determined to be more powerful than the comparable
C-test, sometimes referred to as the poisson exact test.
References
----------
.. [1] Krishnamoorthy, K., & Thomson, J. (2004). A more powerful test for
comparing two Poisson means. Journal of Statistical Planning and
Inference, 119(1), 23-35.
.. [2] Przyborowski, J., & Wilenski, H. (1940). Homogeneity of results in
testing samples from Poisson series: With an application to testing
clover seed for dodder. Biometrika, 31(3/4), 313-323.
Examples
--------
Suppose that a gardener wishes to test the number of dodder seeds, a weed,
in a sack of clover seeds that they buy from a seed company.
A 100 gram sample is drawn from the sack before being shipped to the
gardener. The sample is analyzed, and it is found to contain no dodder
seeds; that is, `k1` is 0. However, upon arrival, the gardener draws
another 100 gram sample from the sack. This time, three dodder seeds are
found in the sample; that is, `k2` is 3. The gardener would like to
know if the difference between is significant and not due to chance. The
null hypothesis is that the difference between the two samples is merely
due to chance, or that :math:`\lambda_1 = \lambda_2 + \mathtt{diff}`
where :math:`\mathtt{diff} = 0`. The alternative hypothesis is that the
difference is not due to chance, or :math:`\lambda_1 \ne \lambda_2 + 0`.
The gardener selects a significance level of 5% to reject the null
hypothesis in favor of the alternative [2]_.
>>> res = stats.poisson_means_test(0, 100, 3, 100)
>>> res.statistic, res.pvalue
(-1.7320508075688772, 0.08837900929018157)
The p-value is .088, indicating a near 9% chance of observing a value of
the test statistic under the null hypothesis. This exceeds 5%, so the
gardener does not reject the null hypothesis as the difference cannot be
regarded as significant at this level.
"""
_chck_args_poisson_mean_test(k1, n1, k2, n2, diff, alternative)
# "for a given k_1 and k_2, an estimate of \lambda_2 is given by" [1] (3.4)
lmbd_hat2 = ((k1 + k2) / (n1 + n2) - diff * n1 / (n1 + n2))
# "\hat{\lambda_{2k}} may be less than or equal to zero ... and in this
# case the null hypothesis cannot be rejected ... [and] it is not necessary
# to compute the p-value". [1] page 26 below eq. (3.6).
if lmbd_hat2 <= 0:
return PoissonMeansTestResult(0, 1)
# the unbiased variance estimate [1] (3.2)
var = k1 / (n1 ** 2) + k2 / (n2 ** 2)
# the _observed_ pivot statistic from the input. It follows the
# unnumbered equation following equation (3.3) This is used later in
# comparison with the computed pivot statistics in an indicator function.
t_k1k2 = (k1 / n1 - k2 / n2 - diff) / np.sqrt(var)
# equation (3.5) of [1] is lengthy, so it is broken into several parts,
# beginning here. Note that the probability mass function of poisson is
# exp^(-\mu)*\mu^k/k!, so and this is called with shape \mu, here noted
# here as nlmbd_hat*. The strategy for evaluating the double summation in
# (3.5) is to create two arrays of the values of the two products inside
# the summation and then broadcast them together into a matrix, and then
# sum across the entire matrix.
# compute constants (as seen in the first and second separated products in
# (3.5).). (This is the shape (\mu) parameter of the poisson distribution.)
nlmbd_hat1 = n1 * (lmbd_hat2 + diff)
nlmbd_hat2 = n2 * lmbd_hat2
# determine summation bounds for tail ends of distribution rather than
# summing to infinity. `x1*` is for the outer sum and `x2*` is the inner
# sum
x1_lb, x1_ub = distributions.poisson.ppf([1e-10, 1 - 1e-16], nlmbd_hat1)
x2_lb, x2_ub = distributions.poisson.ppf([1e-10, 1 - 1e-16], nlmbd_hat2)
# construct arrays to function as the x_1 and x_2 counters on the summation
# in (3.5). `x1` is in columns and `x2` is in rows to allow for
# broadcasting.
x1 = np.arange(x1_lb, x1_ub + 1)
x2 = np.arange(x2_lb, x2_ub + 1)[:, None]
# these are the two products in equation (3.5) with `prob_x1` being the
# first (left side) and `prob_x2` being the second (right side). (To
# make as clear as possible: the 1st contains a "+ d" term, the 2nd does
# not.)
prob_x1 = distributions.poisson.pmf(x1, nlmbd_hat1)
prob_x2 = distributions.poisson.pmf(x2, nlmbd_hat2)
# compute constants for use in the the "pivot statistic" per the
# unnumbered equation following (3.3).
lmbd_x1 = x1 / n1
lmbd_x2 = x2 / n2
lmbds_diff = lmbd_x1 - lmbd_x2 - diff
var_x1x2 = lmbd_x1 / n1 + lmbd_x2 / n2
# this is the 'pivot statistic' for use in the indicator of the summation
# (left side of "I[.]"). Before dividing, mask zero-elements in the
# denominator with infinity so that they are `false` in the indicator.
mask_out_invalid = (np.abs(lmbd_x1 - lmbd_x2) > diff
if alternative == 'two-sided' else lmbds_diff > 0)
var_x1x2[~mask_out_invalid] = np.inf
t_x1x2 = lmbds_diff / np.sqrt(var_x1x2)
if alternative == 'two-sided':
alternative_comparison = lambda x, y: np.abs(x) >= np.abs(y)
elif alternative == 'less':
alternative_comparison = lambda x, y: np.less_equal(x, y)
else:
alternative_comparison = lambda x, y: np.less_equal(x, y)
# `[indicator]` implements the "I[.] ... the indicator function" per
# the paragraph following equation (3.5).
indicator = alternative_comparison(t_x1x2, t_k1k2)
# multiply all combinations of the products together, exclude terms
# based on the `indicator` and then sum. (3.5)
pvalue = np.sum((prob_x1 * prob_x2)[indicator])
return PoissonMeansTestResult(t_k1k2, pvalue)
| def poisson_means_test(k1, n1, k2, n2, diff=0, alternative='two-sided'):
r"""
Calculates the poisson mean test, the "E-test", for the mean difference of
two samples that follow a Poisson distribution from descriptive statistics.
This is a two-sided test. The null hypothesis is that two independent
samples have identical average (expected) values.
Let :math:`X_{11},...,X_{1n_1}` and :math:`X_{21},...,X_{2n_2}` be
independent samples from distributions :math:`Poisson(\lambda_1)` and
:math:`Poisson(\lambda_2)`. It is well known that :math:`X_1`
and :math:`X_2` are independent:
.. math:: X_1 = \sum_{i=1}^{n_1} X_{1i} \sim Poisson(n_1\lambda_1)
.. math:: X_2 = \sum_{i=1}^{n_2} X_{2i} \sim Poisson(n_2\lambda_2)
Let `count1` and `count2` be the observed values of :math:`X_1` and
:math:`X_2`, respectively. The null hypothesis and alternative
hypothesis under comparison are
.. math::
H_0: \lambda_1 = \lambda_2 + \mathtt{diff} \quad vs. \quad
H_a: \lambda_1 \ne \lambda_2 + \mathtt{diff}
for ``alternative=two-sided``, where :math:`\mathtt{diff} \ge 0`.
Parameters
----------
k1 : int
Sample values of interest from sample 1.
n1: int
Sample size from sample 1.
k2 : int
Sample values of interest from sample 2.
n2: int
Sample size from sample 2.
diff : int or float, optional
The difference of mean between two samples under the null hypothesis
alternative : {'two-sided', 'less', 'greater'}, optional
Defines the alternative hypothesis.
The following options are available (default is 'two-sided'):
* 'two-sided': :math:`\lambda_1 \ne \lambda_2 + \mathtt{diff}`
* 'less': :math:`\lambda_1 \le \lambda_2 + \mathtt{diff}`
* 'greater': :math:`\lambda_1 \ge \lambda_2 + \mathtt{diff}`
Returns
-------
statistic : float
The test statistic calculated from observed samples.
pvalue : float
The associated p-value based on the estimated p-value of the
standardized difference.
Notes
-----
A benefit of the E-test is that it maintains its power even
with smaller sample sizes which can reduce sampling costs [1]_. It has
been evaluated and determined to be more powerful than the comparable
C-test, sometimes referred to as the poisson exact test.
References
----------
.. [1] Krishnamoorthy, K., & Thomson, J. (2004). A more powerful test for
comparing two Poisson means. Journal of Statistical Planning and
Inference, 119(1), 23-35.
.. [2] Przyborowski, J., & Wilenski, H. (1940). Homogeneity of results in
testing samples from Poisson series: With an application to testing
clover seed for dodder. Biometrika, 31(3/4), 313-323.
Examples
--------
Suppose that a gardener wishes to test the number of dodder seeds, a weed,
in a sack of clover seeds that they buy from a seed company.
A 100 gram sample is drawn from the sack before being shipped to the
gardener. The sample is analyzed, and it is found to contain no dodder
seeds; that is, `k1` is 0. However, upon arrival, the gardener draws
another 100 gram sample from the sack. This time, three dodder seeds are
found in the sample; that is, `k2` is 3. The gardener would like to
know if the difference between is significant and not due to chance. The
null hypothesis is that the difference between the two samples is merely
due to chance, or that :math:`\lambda_1 = \lambda_2 + \mathtt{diff}`
where :math:`\mathtt{diff} = 0`. The alternative hypothesis is that the
difference is not due to chance, or :math:`\lambda_1 \ne \lambda_2 + 0`.
The gardener selects a significance level of 5% to reject the null
hypothesis in favor of the alternative [2]_.
>>> res = stats.poisson_means_test(0, 100, 3, 100)
>>> res.statistic, res.pvalue
(-1.7320508075688772, 0.08837900929018157)
The p-value is .088, indicating a near 9% chance of observing a value of
the test statistic under the null hypothesis. This exceeds 5%, so the
gardener does not reject the null hypothesis as the difference cannot be
regarded as significant at this level.
"""
_chck_args_poisson_mean_test(k1, n1, k2, n2, diff, alternative)
# "for a given k_1 and k_2, an estimate of \lambda_2 is given by" [1] (3.4)
lmbd_hat2 = ((k1 + k2) / (n1 + n2) - diff * n1 / (n1 + n2))
# "\hat{\lambda_{2k}} may be less than or equal to zero ... and in this
# case the null hypothesis cannot be rejected ... [and] it is not necessary
# to compute the p-value". [1] page 26 below eq. (3.6).
if lmbd_hat2 <= 0:
return PoissonMeansTestResult(0, 1)
# the unbiased variance estimate [1] (3.2)
var = k1 / (n1 ** 2) + k2 / (n2 ** 2)
# the _observed_ pivot statistic from the input. It follows the
# unnumbered equation following equation (3.3) This is used later in
# comparison with the computed pivot statistics in an indicator function.
t_k1k2 = (k1 / n1 - k2 / n2 - diff) / np.sqrt(var)
# equation (3.5) of [1] is lengthy, so it is broken into several parts,
# beginning here. Note that the probability mass function of poisson is
# exp^(-\mu)*\mu^k/k!, so and this is called with shape \mu, here noted
# here as nlmbd_hat*. The strategy for evaluating the double summation in
# (3.5) is to create two arrays of the values of the two products inside
# the summation and then broadcast them together into a matrix, and then
# sum across the entire matrix.
# compute constants (as seen in the first and second separated products in
# (3.5).). (This is the shape (\mu) parameter of the poisson distribution.)
nlmbd_hat1 = n1 * (lmbd_hat2 + diff)
nlmbd_hat2 = n2 * lmbd_hat2
# determine summation bounds for tail ends of distribution rather than
# summing to infinity. `x1*` is for the outer sum and `x2*` is the inner
# sum
x1_lb, x1_ub = distributions.poisson.ppf([1e-10, 1 - 1e-16], nlmbd_hat1)
x2_lb, x2_ub = distributions.poisson.ppf([1e-10, 1 - 1e-16], nlmbd_hat2)
# construct arrays to function as the x_1 and x_2 counters on the summation
# in (3.5). `x1` is in columns and `x2` is in rows to allow for
# broadcasting.
x1 = np.arange(x1_lb, x1_ub + 1)
x2 = np.arange(x2_lb, x2_ub + 1)[:, None]
# these are the two products in equation (3.5) with `prob_x1` being the
# first (left side) and `prob_x2` being the second (right side). (To
# make as clear as possible: the 1st contains a "+ d" term, the 2nd does
# not.)
prob_x1 = distributions.poisson.pmf(x1, nlmbd_hat1)
prob_x2 = distributions.poisson.pmf(x2, nlmbd_hat2)
# compute constants for use in the the "pivot statistic" per the
# unnumbered equation following (3.3).
lmbd_x1 = x1 / n1
lmbd_x2 = x2 / n2
lmbds_diff = lmbd_x1 - lmbd_x2 - diff
var_x1x2 = lmbd_x1 / n1 + lmbd_x2 / n2
# this is the 'pivot statistic' for use in the indicator of the summation
# (left side of "I[.]"). Before dividing, mask zero-elements in the
# denominator with infinity so that they are `false` in the indicator.
mask_out_invalid = (np.abs(lmbd_x1 - lmbd_x2) > diff
if alternative == 'two-sided' else lmbds_diff > 0)
var_x1x2[~mask_out_invalid] = np.inf
t_x1x2 = lmbds_diff / np.sqrt(var_x1x2)
if alternative == 'two-sided':
alternative_comparison = lambda x, y: np.abs(x) >= np.abs(y)
elif alternative == 'less':
alternative_comparison = lambda x, y: np.less_equal(x, y)
else:
alternative_comparison = lambda x, y: np.less_equal(x, y)
# `[indicator]` implements the "I[.] ... the indicator function" per
# the paragraph following equation (3.5).
indicator = alternative_comparison(t_x1x2, t_k1k2)
# multiply all combinations of the products together, exclude terms
# based on the `indicator` and then sum. (3.5)
pvalue = np.sum((prob_x1 * prob_x2)[indicator])
return PoissonMeansTestResult(t_k1k2, pvalue)
|
56,516 | def _expand_data_to_arrays(
data: list[tuple[Any, ...]], paramspecs: Sequence[ParamSpecBase]
) -> None:
types = [param.type for param in paramspecs]
# if we have array type parameters expand all other parameters
# to arrays
if 'array' in types:
if ('numeric' in types or 'text' in types
or 'complex' in types):
first_array_element = types.index('array')
types_mapping: dict[int, Callable[[str], np.dtype[Any]]] = {}
for i, x in enumerate(types):
if x == "numeric":
types_mapping[i] = lambda _: np.dtype(np.float64)
elif x == "complex":
types_mapping[i] = lambda _: np.dtype(np.complex128)
elif x == "text":
types_mapping[i] = lambda array: np.dtype(f"U{len(array)}")
for i_row, row in enumerate(data):
# todo should we handle int/float types here
# we would in practice have to perform another
# loop to check that all elements of a given can be cast to
# int without loosing precision before choosing an integer
# representation of the array
data[i_row] = tuple(
np.full_like(
row[first_array_element], array, dtype=types_mapping[i](array)
)
if i in types_mapping
else array
for i, array in enumerate(row)
)
for i_row, row in enumerate(data):
# now expand all one element arrays to match the expected size
# one element arrays are introduced if scalar values are stored
# with an explicit array storage type
max_size = 0
for i, array in enumerate(row):
if array.size > max_size:
if max_size > 1:
log.warning(
f"Cannot expand array of size {max_size} "
f"to size {array.size}"
)
max_size, max_index = array.size, i
data[i_row] = tuple(
np.full_like(row[max_index], array, dtype=array.dtype)
if array.size != max_size
else array
for array in row
)
| def _expand_data_to_arrays(
data: list[tuple[Any, ...]], paramspecs: Sequence[ParamSpecBase]
) -> None:
types = [param.type for param in paramspecs]
# if we have array type parameters expand all other parameters
# to arrays
if 'array' in types:
if ('numeric' in types or 'text' in types
or 'complex' in types):
first_array_element = types.index('array')
types_mapping: dict[int, Callable[[str], np.dtype[Any]]] = {}
for i, x in enumerate(types):
if x == "numeric":
types_mapping[i] = lambda _: np.dtype(np.float64)
elif x == "complex":
types_mapping[i] = lambda _: np.dtype(np.complex128)
elif x == "text":
types_mapping[i] = lambda array: np.dtype(f"U{len(array)}")
for i_row, row in enumerate(data):
# todo should we handle int/float types here
# we would in practice have to perform another
# loop to check that all elements of a given can be cast to
# int without loosing precision before choosing an integer
# representation of the array
data[i_row] = tuple(
np.full_like(
row[first_array_element], array, dtype=types_mapping[i](array)
)
if i in types_mapping
else array
for i, array in enumerate(row)
)
for i_row, row in enumerate(data):
# now expand all one element arrays to match the expected size
# one element arrays are introduced if scalar values are stored
# with an explicit array storage type
max_size = 0
for i, array in enumerate(row):
if array.size > max_size:
if max_size > 1:
log.warning(
f"Cannot expand array of size {array.size} "
f"to size {max_size}"
)
max_size, max_index = array.size, i
data[i_row] = tuple(
np.full_like(row[max_index], array, dtype=array.dtype)
if array.size != max_size
else array
for array in row
)
|
12,352 | def sim(g, nu_ext_over_nu_thr, sim_time, ax_spikes, ax_rates, rate_tick_step):
"""
g -- relative inhibitory to excitatory synaptic strength
nu_ext_over_nu_thr -- ratio of external stimulus rate to threshold rate
sim_time -- simulation time
ax_spikes -- matplotlib axes to plot spikes on
ax_rates -- matplotlib axes to plot rates on
rate_tick_step -- step size for rate axis ticks
"""
# network parameters
N_E = 10000
gamma = 0.25
N_I = round(gamma * N_E)
N = N_E + N_I
epsilon = 0.1
C_E = epsilon * N_E
C_ext = C_E
# neuron parameters
tau = 20 * ms
theta = 20 * mV
V_r = 10 * mV
tau_rp = 2 * ms
# synapse parameters
J = 0.1 * mV
D = 1.5 * ms
# external stimulus
nu_thr = theta / (J * C_E * tau)
defaultclock.dt = 0.1 * ms
neurons = NeuronGroup(N,
"""
dv/dt = -v/tau : volt (unless refractory)
index : integer (constant)
""",
threshold="v > theta",
reset="v = V_r",
refractory=tau_rp,
method="exact",
)
excitatory_neurons = neurons[:N_E]
inhibitory_neurons = neurons[N_E:]
exc_synapses = Synapses(excitatory_neurons, target=neurons, on_pre="v += J", delay=D)
exc_synapses.connect(p=epsilon)
inhib_synapses = Synapses(inhibitory_neurons, target=neurons, on_pre="v += -g*J", delay=D)
inhib_synapses.connect(p=epsilon)
nu_ext = nu_ext_over_nu_thr * nu_thr
external_poisson_input = PoissonInput(
target=neurons, target_var="v", N=C_ext, rate=nu_ext, weight=J
)
rate_monitor = PopulationRateMonitor(neurons)
# shuffle indices to randomly select neurons
indices = neurons.i[:]
np.random.shuffle(indices)
neurons.index = indices
spike_monitor = SpikeMonitor(neurons[:50])
run(sim_time)
ax_spikes.plot(spike_monitor.t / ms, spike_monitor.i, "|")
ax_rates.plot(rate_monitor.t / ms, rate_monitor.rate / Hz)
ax_spikes.set_yticks([], [])
ax_spikes.set_xlim(*params["t_range"])
ax_rates.set_xlim(*params["t_range"])
ax_rates.set_ylim(*params["rate_range"])
ax_rates.set_xlabel("t [ms]")
ax_rates.set_yticks(
np.arange(
params["rate_range"][0], params["rate_range"][1] + rate_tick_step, rate_tick_step
)
)
plt.subplots_adjust(hspace=0)
| def sim(g, nu_ext_over_nu_thr, sim_time, ax_spikes, ax_rates, rate_tick_step):
"""
g -- relative inhibitory to excitatory synaptic strength
nu_ext_over_nu_thr -- ratio of external stimulus rate to threshold rate
sim_time -- simulation time
ax_spikes -- matplotlib axes to plot spikes on
ax_rates -- matplotlib axes to plot rates on
rate_tick_step -- step size for rate axis ticks
"""
# network parameters
N_E = 10000
gamma = 0.25
N_I = round(gamma * N_E)
N = N_E + N_I
epsilon = 0.1
C_E = epsilon * N_E
C_ext = C_E
# neuron parameters
tau = 20 * ms
theta = 20 * mV
V_r = 10 * mV
tau_rp = 2 * ms
# synapse parameters
J = 0.1 * mV
D = 1.5 * ms
# external stimulus
nu_thr = theta / (J * C_E * tau)
defaultclock.dt = 0.1 * ms
neurons = NeuronGroup(N,
"""
dv/dt = -v/tau : volt (unless refractory)
index : integer (constant)
""",
threshold="v > theta",
reset="v = V_r",
refractory=tau_rp,
method="exact",
)
excitatory_neurons = neurons[:N_E]
inhibitory_neurons = neurons[N_E:]
exc_synapses = Synapses(excitatory_neurons, target=neurons, on_pre="v += J", delay=D)
exc_synapses.connect(p=epsilon)
inhib_synapses = Synapses(inhibitory_neurons, target=neurons, on_pre="v += -g*J", delay=D)
inhib_synapses.connect(p=epsilon)
nu_ext = nu_ext_over_nu_thr * nu_thr
external_poisson_input = PoissonInput(
target=neurons, target_var="v", N=C_ext, rate=nu_ext, weight=J
)
rate_monitor = PopulationRateMonitor(neurons)
# shuffle indices to randomly select neurons
indices = neurons.i[:]
np.random.shuffle(indices)
neurons.index = indices
spike_monitor = SpikeMonitor(neurons[:50])
run(sim_time)
ax_spikes.plot(spike_monitor.t / ms, spike_monitor.i, "|")
ax_rates.plot(rate_monitor.t / ms, rate_monitor.rate / Hz)
ax_spikes.set_yticks([])
ax_spikes.set_xlim(*params["t_range"])
ax_rates.set_xlim(*params["t_range"])
ax_rates.set_ylim(*params["rate_range"])
ax_rates.set_xlabel("t [ms]")
ax_rates.set_yticks(
np.arange(
params["rate_range"][0], params["rate_range"][1] + rate_tick_step, rate_tick_step
)
)
plt.subplots_adjust(hspace=0)
|
41,461 | def read_pathos_lightcurve(filename,
flux_column="PSF_FLUX_COR",
quality_bitmask="default"):
"""Returns a `TessLightCurve`.
Parameters
----------
filename : str
Local path or remote url of PATHOS light curve FITS file.
flux_column : 'psf_flux_cor' or 'ap#_flux_cor' (# = 1, 2, 3, or 4)
or 'psf_flux_raw' or 'ap#_flux_raw' (# = 1, 2, 3, or 4)
Which column in the FITS file contains the preferred flux data?
quality_bitmask : str or int
Bitmask (integer) which identifies the quality flag bitmask that should
be used to mask out bad cadences. If a string is passed, it has the
following meaning:
* "none": no cadences will be ignored (`quality_bitmask=0`).
* "default": cadences with severe quality issues will be ignored
(`quality_bitmask=1130799`).
* "hard": more conservative choice of flags to ignore
(`quality_bitmask=1664431`). This is known to remove good data.
* "hardest": removes all data that has been flagged
(`quality_bitmask=2096639`). This mask is not recommended.
See the :class:`TessQualityFlags` class for details on the bitmasks.
"""
lc = read_generic_lightcurve(filename,
flux_column=flux_column.lower(),
time_format='btjd',
quality_column="DQUALITY")
# Filter out poor-quality data
# NOTE: Unfortunately Astropy Table masking does not yet work for columns
# that are Quantity objects, so for now we remove poor-quality data instead
# of masking. Details: https://github.com/astropy/astropy/issues/10119
quality_mask = TessQualityFlags.create_quality_mask(
quality_array=lc['quality'],
bitmask=quality_bitmask)
lc = lc[quality_mask]
lc.meta['TARGETID'] = lc.meta.get('TICID')
lc.meta['QUALITY_BITMASK'] = quality_bitmask
lc.meta['QUALITY_MASK'] = quality_mask
# QLP light curves are normalized by default
lc.meta['NORMALIZED'] = True
return TessLightCurve(data=lc)
| def read_pathos_lightcurve(filename,
flux_column="PSF_FLUX_COR",
quality_bitmask="default"):
"""Returns a `TessLightCurve`.
Parameters
----------
filename : str
Local path or remote url of PATHOS light curve FITS file.
flux_column : 'psf_flux_cor' or 'ap#_flux_cor' (# = 1, 2, 3, or 4)
or 'psf_flux_raw' or 'ap#_flux_raw' (# = 1, 2, 3, or 4)
Which column in the FITS file contains the preferred flux data?
quality_bitmask : str or int
Bitmask (integer) which identifies the quality flag bitmask that should
be used to mask out bad cadences. If a string is passed, it has the
following meaning:
* "none": no cadences will be ignored (`quality_bitmask=0`).
* "default": cadences with severe quality issues will be ignored
(`quality_bitmask=1130799`).
* "hard": more conservative choice of flags to ignore
(`quality_bitmask=1664431`). This is known to remove good data.
* "hardest": removes all data that has been flagged
(`quality_bitmask=2096639`). This mask is not recommended.
See the :class:`TessQualityFlags` class for details on the bitmasks.
"""
lc = read_generic_lightcurve(filename,
flux_column=flux_column.lower(),
time_format='btjd',
quality_column="DQUALITY")
# Filter out poor-quality data
# NOTE: Unfortunately Astropy Table masking does not yet work for columns
# that are Quantity objects, so for now we remove poor-quality data instead
# of masking. Details: https://github.com/astropy/astropy/issues/10119
quality_mask = TessQualityFlags.create_quality_mask(
quality_array=lc['dquality'],
bitmask=quality_bitmask)
lc = lc[quality_mask]
lc.meta['TARGETID'] = lc.meta.get('TICID')
lc.meta['QUALITY_BITMASK'] = quality_bitmask
lc.meta['QUALITY_MASK'] = quality_mask
# QLP light curves are normalized by default
lc.meta['NORMALIZED'] = True
return TessLightCurve(data=lc)
|
17,367 | def _build_discrete_cmap(cmap, levels, extend, filled):
"""
Build a discrete colormap and normalization of the data.
"""
import matplotlib as mpl
if not filled:
# non-filled contour plots
extend = "max"
if extend == "both":
ext_n = 2
elif extend in ["min", "max"]:
ext_n = 1
else:
ext_n = 0
n_colors = len(levels) + ext_n - 1
pal = _color_palette(cmap, n_colors)
new_cmap, cnorm = mpl.colors.from_levels_and_colors(levels, pal, extend=extend)
# copy the old cmap name, for easier testing
new_cmap.name = getattr(cmap, "name", cmap)
# copy colors to use for bad, under, and over values in case they have been set to
# non-default values
new_cmap._rgba_bad = getattr(cmap, "_rgba_bad")
new_cmap._rgba_under = getattr(cmap, "_rgba_under")
new_cmap._rgba_over = getattr(cmap, "_rgba_over")
return new_cmap, cnorm
| def _build_discrete_cmap(cmap, levels, extend, filled):
"""
Build a discrete colormap and normalization of the data.
"""
import matplotlib as mpl
if not filled:
# non-filled contour plots
extend = "max"
if extend == "both":
ext_n = 2
elif extend in ["min", "max"]:
ext_n = 1
else:
ext_n = 0
n_colors = len(levels) + ext_n - 1
pal = _color_palette(cmap, n_colors)
new_cmap, cnorm = mpl.colors.from_levels_and_colors(levels, pal, extend=extend)
# copy the old cmap name, for easier testing
new_cmap.name = getattr(cmap, "name", cmap)
# copy colors to use for bad, under, and over values in case they have been set to
# non-default values
new_cmap._rgba_bad = getattr(cmap, "_rgba_bad", None)
new_cmap._rgba_under = getattr(cmap, "_rgba_under")
new_cmap._rgba_over = getattr(cmap, "_rgba_over")
return new_cmap, cnorm
|
21,966 | def authIfV2(sydent, request, requireTermsAgreed=True):
if request.path.startswith('/_matrix/identity/v2'):
token = tokenFromRequest(request)
if token is None:
raise MatrixRestError(403, "M_UNAUTHORIZED", "Unauthorized")
accountStore = AccountStore(sydent)
account = accountStore.getAccountByToken(token)
if account is None:
raise MatrixRestError(403, "M_UNAUTHORIZED", "Unauthorized")
if requireTermsAgreed:
terms = get_terms(sydent)
if (
terms.getMasterVersion() is not None and
account.consentVersion != terms.getMasterVersion()
):
raise MatrixRestError(403, "M_TERMS_NOT_SIGNED", "Terms not signed")
return account
return None
| def authIfV2(sydent, request, requireTermsAgreed=True):
if request.path.startswith('/_matrix/identity/v2'):
token = tokenFromRequest(request)
if token is None:
raise MatrixRestError(401, "M_UNAUTHORIZED", "Unauthorized")
accountStore = AccountStore(sydent)
account = accountStore.getAccountByToken(token)
if account is None:
raise MatrixRestError(403, "M_UNAUTHORIZED", "Unauthorized")
if requireTermsAgreed:
terms = get_terms(sydent)
if (
terms.getMasterVersion() is not None and
account.consentVersion != terms.getMasterVersion()
):
raise MatrixRestError(403, "M_TERMS_NOT_SIGNED", "Terms not signed")
return account
return None
|
8,359 | def _isophote_list_to_table(isophote_list, key_properties=['main']):
"""
Convert an `~photutils.isophote.IsophoteList` instance to
a `~astropy.table.QTable`.
Parameters
----------
isophote_list : list of `~photutils.isophote.Isophote` or \
`~photutils.isophote.IsophoteList` instance
A list of isophotes.
key_properties : A list of properties to export from the isophote_list
If key_properties = ['all'] or ['main'], it will pick all or few
of the main properties.
Returns
-------
result : `~astropy.table.QTable`
An astropy QTable with the selected or all isophote parameters.
"""
properties = OrderedDict()
isotable = QTable()
# main_properties: `List`
# A list of main parameters matching the original names of
# the isophote_list parameters
def __rename_properties(properties,
orig_names = ['int_err', 'eps', 'ellip_err',
'grad_r_error', 'nflag'],
new_names = ['intens_err', 'ellipticity',
'ellipticity_err', 'grad_rerror',
'nflag']
):
'''
Simple renaming for some of the isophote_list parameters.
Parameters
----------
properties: `OrderedDict`
An OrderedDict with the list of the isophote_list parameters
orig_names: `List`
A list of original names in the isophote_list parameters
to be renamed
new_names: `List`
A list of new names matching in length of the orig_names
Returns
-------
properties: `OrderedDict`
An OrderedDict with the list of the renamed isophote_list
parameters
'''
main_properties = ['sma', 'intens', 'int_err', 'eps', 'ellip_err',
'pa', 'pa_err', 'grad', 'grad_error',
'grad_r_error', 'x0', 'x0_err', 'y0', 'y0_err',
'ndata', 'nflag', 'niter', 'stop_code']
for an_item in main_properties:
if an_item in orig_names:
properties[an_item] = new_names[orig_names.index(an_item)]
else:
properties[an_item] = an_item
return properties
if 'all' in key_properties:
properties = _get_properties(isophote_list)
properties = __rename_properties(properties)
elif 'main' in key_properties:
properties = __rename_properties(properties)
else:
for an_item in key_properties:
properties[an_item] = an_item
for k, v in properties.items():
isotable[v] = np.array([getattr(iso, k) for iso in isophote_list])
if k in ('pa', 'pa_err'):
isotable[v] = isotable[v] * 180. / np.pi * u.deg
return isotable
| def _isophote_list_to_table(isophote_list, key_properties=['main']):
"""
Convert an `~photutils.isophote.IsophoteList` instance to
a `~astropy.table.QTable`.
Parameters
----------
isophote_list : list of `~photutils.isophote.Isophote` or \
`~photutils.isophote.IsophoteList` instance
A list of isophotes.
key_properties : A list of properties to export from the isophote_list
If ``columns`` is 'all' or 'main', it will pick all or few
of the main properties.
Returns
-------
result : `~astropy.table.QTable`
An astropy QTable with the selected or all isophote parameters.
"""
properties = OrderedDict()
isotable = QTable()
# main_properties: `List`
# A list of main parameters matching the original names of
# the isophote_list parameters
def __rename_properties(properties,
orig_names = ['int_err', 'eps', 'ellip_err',
'grad_r_error', 'nflag'],
new_names = ['intens_err', 'ellipticity',
'ellipticity_err', 'grad_rerror',
'nflag']
):
'''
Simple renaming for some of the isophote_list parameters.
Parameters
----------
properties: `OrderedDict`
An OrderedDict with the list of the isophote_list parameters
orig_names: `List`
A list of original names in the isophote_list parameters
to be renamed
new_names: `List`
A list of new names matching in length of the orig_names
Returns
-------
properties: `OrderedDict`
An OrderedDict with the list of the renamed isophote_list
parameters
'''
main_properties = ['sma', 'intens', 'int_err', 'eps', 'ellip_err',
'pa', 'pa_err', 'grad', 'grad_error',
'grad_r_error', 'x0', 'x0_err', 'y0', 'y0_err',
'ndata', 'nflag', 'niter', 'stop_code']
for an_item in main_properties:
if an_item in orig_names:
properties[an_item] = new_names[orig_names.index(an_item)]
else:
properties[an_item] = an_item
return properties
if 'all' in key_properties:
properties = _get_properties(isophote_list)
properties = __rename_properties(properties)
elif 'main' in key_properties:
properties = __rename_properties(properties)
else:
for an_item in key_properties:
properties[an_item] = an_item
for k, v in properties.items():
isotable[v] = np.array([getattr(iso, k) for iso in isophote_list])
if k in ('pa', 'pa_err'):
isotable[v] = isotable[v] * 180. / np.pi * u.deg
return isotable
|
31,543 | def main():
try:
incidents = get_campaign_incidents_from_context()
if incidents:
update_incident_with_required_keys(incidents, KEYS_FETCHED_BY_QUERY)
update_empty_fields()
readable_output = get_incidents_info_md(incidents)
else:
readable_output = NO_CAMPAIGN_INCIDENTS_MSG
result = CommandResults(
readable_output=readable_output,
outputs_prefix='',
outputs_key_field=''
)
return_results(result)
except Exception as err:
return_error(str(err))
| def main():
try:
incidents = get_campaign_incidents_from_context()
if incidents:
update_incident_with_required_keys(incidents, KEYS_FETCHED_BY_QUERY)
update_empty_fields()
readable_output = get_incidents_info_md(incidents)
else:
readable_output = NO_CAMPAIGN_INCIDENTS_MSG
return_results(readable_output)
except Exception as err:
return_error(str(err))
|
845 | def test_do_poly_distance():
# Non-intersecting polygons
square1 = Polygon (Point(0,0), Point(0,1), Point(1,1), Point(1,0))
triangle1 = Polygon(Point(1, 2), Point(2, 2), Point(2, 1))
assert square1._do_poly_distance(triangle1) == sqrt(2)/2
# Polygons which sides intersect
square2 = Polygon(Point(1,0), Point(2,0), Point(2,1), Point(1,1))
with warns(UserWarning, \
match="Polygons may intersect producing erroneous output"):
assert square1._do_poly_distance(square2) == 0
# Polygons which bodies intersect
triangle2 = Polygon(Point(0, -1), Point(2, -1), Point(S.Half, S.Half))
with warns(UserWarning, \
match="Polygons may intersect producing erroneous output"):
assert triangle2._do_poly_distance(square1) == 0
| def test_do_poly_distance():
# Non-intersecting polygons
square1 = Polygon (Point(0,0), Point(0,1), Point(1,1), Point(1,0))
triangle1 = Polygon(Point(1, 2), Point(2, 2), Point(2, 1))
assert square1._do_poly_distance(triangle1) == sqrt(2)/2
# Polygons which sides intersect
square2 = Polygon(Point(1,0), Point(2,0), Point(2,1), Point(1,1))
with warns(UserWarning, \
match="Polygons may intersect producing erroneous output"):
assert square1._do_poly_distance(square2) == 0
# Polygons which bodies intersect
triangle2 = Polygon(Point(0, -1), Point(2, -1), Point(S.Half, S.Half))
with warns(UserWarning, \
match="Polygons may intersect producing erroneous output"):
assert triangle2._do_poly_distance(square1) == 0
|
8,336 | def main():
"""Parse arguments and launch process."""
parser = argparse.ArgumentParser(description="Prometheus exporter.")
parser.add_argument(
"--host",
help="IP address to bind",
default="0.0.0.0",
)
parser.add_argument(
"--port",
help="Port to use",
default=8811,
type=int,
)
parser.add_argument(
"--contest-id",
"-c",
help="Obtain metrics for the specified contest only",
type=int,
)
parser.add_argument(
"--no-submissions",
help="Do not export submissions metrics",
action="store_true",
)
parser.add_argument(
"--no-workers",
help="Do not export workers metrics",
action="store_true",
)
parser.add_argument(
"--no-queue",
help="Do not export queue metrics",
action="store_true",
)
parser.add_argument(
"--no-communications",
help="Do not export communications metrics",
action="store_true",
)
parser.add_argument(
"--no-users",
help="Do not export users metrics",
action="store_true",
)
args = parser.parse_args()
service = PrometheusExporter(args)
REGISTRY.register(service)
start_http_server(args.port, addr=args.host)
print("Started at http://%s:%s/metric" % (args.host, args.port))
Service.run(service)
| def main():
"""Parse arguments and launch process."""
parser = argparse.ArgumentParser(description="Prometheus exporter.")
parser.add_argument(
"--host",
help="IP address to bind to",
default="0.0.0.0",
)
parser.add_argument(
"--port",
help="Port to use",
default=8811,
type=int,
)
parser.add_argument(
"--contest-id",
"-c",
help="Obtain metrics for the specified contest only",
type=int,
)
parser.add_argument(
"--no-submissions",
help="Do not export submissions metrics",
action="store_true",
)
parser.add_argument(
"--no-workers",
help="Do not export workers metrics",
action="store_true",
)
parser.add_argument(
"--no-queue",
help="Do not export queue metrics",
action="store_true",
)
parser.add_argument(
"--no-communications",
help="Do not export communications metrics",
action="store_true",
)
parser.add_argument(
"--no-users",
help="Do not export users metrics",
action="store_true",
)
args = parser.parse_args()
service = PrometheusExporter(args)
REGISTRY.register(service)
start_http_server(args.port, addr=args.host)
print("Started at http://%s:%s/metric" % (args.host, args.port))
Service.run(service)
|
29,624 | def test_client_repr(c):
x = c.submit(inc, 1)
who_has = c.who_has()
has_what = c.has_what()
assert type(who_has) is WhoHas
assert type(has_what) is HasWhat
| def test_client_repr_html(c):
x = c.submit(inc, 1)
who_has = c.who_has()
has_what = c.has_what()
assert type(who_has) is WhoHas
assert type(has_what) is HasWhat
|
32,822 | def get_error_codes():
error_codes = []
try:
error_str = config.http_server.error_statuses
except AttributeError:
error_str = None
if error_str is None:
return [[500, 599]]
error_ranges = error_str.split(",")
for error_range in error_ranges:
values = error_range.split("-")
min_code = int(values[0])
if len(values) == 2:
max_code = int(values[1])
else:
max_code = min_code
if min_code > max_code:
tmp = min_code
min_code = max_code
max_code = tmp
error_codes.append([min_code, max_code])
return error_codes
| def get_error_codes():
error_codes = []
try:
error_str = config.http_server.error_statuses
except AttributeError:
return [[500, 599]]
if error_str is None:
return [[500, 599]]
error_ranges = error_str.split(",")
for error_range in error_ranges:
values = error_range.split("-")
min_code = int(values[0])
if len(values) == 2:
max_code = int(values[1])
else:
max_code = min_code
if min_code > max_code:
tmp = min_code
min_code = max_code
max_code = tmp
error_codes.append([min_code, max_code])
return error_codes
|
23,178 | def apply_gufunc(
func,
signature,
*args,
axes=None,
axis=None,
keepdims=False,
output_dtypes=None,
output_sizes=None,
vectorize=None,
allow_rechunk=False,
meta=None,
**kwargs,
):
"""
Apply a generalized ufunc or similar python function to arrays.
``signature`` determines if the function consumes or produces core
dimensions. The remaining dimensions in given input arrays (``*args``)
are considered loop dimensions and are required to broadcast
naturally against each other.
In other terms, this function is like ``np.vectorize``, but for
the blocks of dask arrays. If the function itself shall also
be vectorized use ``vectorize=True`` for convenience.
Parameters
----------
func : callable
Function to call like ``func(*args, **kwargs)`` on input arrays
(``*args``) that returns an array or tuple of arrays. If multiple
arguments with non-matching dimensions are supplied, this function is
expected to vectorize (broadcast) over axes of positional arguments in
the style of NumPy universal functions [1]_ (if this is not the case,
set ``vectorize=True``). If this function returns multiple outputs,
``output_core_dims`` has to be set as well.
signature: string
Specifies what core dimensions are consumed and produced by ``func``.
According to the specification of numpy.gufunc signature [2]_
*args : numeric
Input arrays or scalars to the callable function.
axes: List of tuples, optional, keyword only
A list of tuples with indices of axes a generalized ufunc should operate on.
For instance, for a signature of ``"(i,j),(j,k)->(i,k)"`` appropriate for
matrix multiplication, the base elements are two-dimensional matrices
and these are taken to be stored in the two last axes of each argument. The
corresponding axes keyword would be ``[(-2, -1), (-2, -1), (-2, -1)]``.
For simplicity, for generalized ufuncs that operate on 1-dimensional arrays
(vectors), a single integer is accepted instead of a single-element tuple,
and for generalized ufuncs for which all outputs are scalars, the output
tuples can be omitted.
axis: int, optional, keyword only
A single axis over which a generalized ufunc should operate. This is a short-cut
for ufuncs that operate over a single, shared core dimension, equivalent to passing
in axes with entries of (axis,) for each single-core-dimension argument and ``()`` for
all others. For instance, for a signature ``"(i),(i)->()"``, it is equivalent to passing
in ``axes=[(axis,), (axis,), ()]``.
keepdims: bool, optional, keyword only
If this is set to True, axes which are reduced over will be left in the result as
a dimension with size one, so that the result will broadcast correctly against the
inputs. This option can only be used for generalized ufuncs that operate on inputs
that all have the same number of core dimensions and with outputs that have no core
dimensions , i.e., with signatures like ``"(i),(i)->()"`` or ``"(m,m)->()"``.
If used, the location of the dimensions in the output can be controlled with axes
and axis.
output_dtypes : Optional, dtype or list of dtypes, keyword only
Valid numpy dtype specification or list thereof.
If not given, a call of ``func`` with a small set of data
is performed in order to try to automatically determine the
output dtypes.
output_sizes : dict, optional, keyword only
Optional mapping from dimension names to sizes for outputs. Only used if
new core dimensions (not found on inputs) appear on outputs.
vectorize: bool, keyword only
If set to ``True``, ``np.vectorize`` is applied to ``func`` for
convenience. Defaults to ``False``.
allow_rechunk: Optional, bool, keyword only
Allows rechunking, otherwise chunk sizes need to match and core
dimensions are to consist only of one chunk.
Warning: enabling this can increase memory usage significantly.
Defaults to ``False``.
meta: Optional, tuple, keyword only
tuple of empty ndarrays describing the shape and dtype of the output of the gufunc.
Defaults to ``None``.
**kwargs : dict
Extra keyword arguments to pass to `func`
Returns
-------
Single dask.array.Array or tuple of dask.array.Array
Examples
--------
>>> import dask.array as da
>>> import numpy as np
>>> def stats(x):
... return np.mean(x, axis=-1), np.std(x, axis=-1)
>>> a = da.random.normal(size=(10,20,30), chunks=(5, 10, 30))
>>> mean, std = da.apply_gufunc(stats, "(i)->(),()", a)
>>> mean.compute().shape
(10, 20)
>>> def outer_product(x, y):
... return np.einsum("i,j->ij", x, y)
>>> a = da.random.normal(size=( 20,30), chunks=(10, 30))
>>> b = da.random.normal(size=(10, 1,40), chunks=(5, 1, 40))
>>> c = da.apply_gufunc(outer_product, "(i),(j)->(i,j)", a, b, vectorize=True)
>>> c.compute().shape
(10, 20, 30, 40)
References
----------
.. [1] https://docs.scipy.org/doc/numpy/reference/ufuncs.html
.. [2] https://docs.scipy.org/doc/numpy/reference/c-api/generalized-ufuncs.html
"""
# Input processing:
## Signature
if not isinstance(signature, str):
raise TypeError("`signature` has to be of type string")
# NumPy versions before https://github.com/numpy/numpy/pull/19627
# would not ignore whitespace characters in `signature` like they
# are supposed to. We remove the whitespace here as a workaround.
signature = re.sub(r"\s+", "", signature)
input_coredimss, output_coredimss = _parse_gufunc_signature(signature)
## Determine nout: nout = None for functions of one direct return; nout = int for return tuples
nout = None if not isinstance(output_coredimss, list) else len(output_coredimss)
## Consolidate onto `meta`
if meta is not None and output_dtypes is not None:
raise ValueError(
"Only one of `meta` and `output_dtypes` should be given (`meta` is preferred)."
)
if meta is None:
if output_dtypes is None:
## Infer `output_dtypes`
if vectorize:
tempfunc = np.vectorize(func, signature=signature)
else:
tempfunc = func
output_dtypes = apply_infer_dtype(
tempfunc, args, kwargs, "apply_gufunc", "output_dtypes", nout
)
## Turn `output_dtypes` into `meta`
if (
nout is None
and isinstance(output_dtypes, (tuple, list))
and len(output_dtypes) == 1
):
output_dtypes = output_dtypes[0]
sample = args[0] if args else None
if nout is None:
meta = meta_from_array(sample, dtype=output_dtypes)
else:
meta = tuple(meta_from_array(sample, dtype=odt) for odt in output_dtypes)
## Normalize `meta` format
meta = meta_from_array(meta)
if isinstance(meta, list):
meta = tuple(meta)
## Validate `meta`
if nout is None:
if isinstance(meta, tuple):
if len(meta) == 1:
meta = meta[0]
else:
raise ValueError(
"For a function with one output, must give a single item for `output_dtypes`/`meta`, "
"not a tuple or list."
)
else:
if not isinstance(meta, tuple):
raise ValueError(
f"For a function with {nout} outputs, must give a tuple or list for `output_dtypes`/`meta`, "
"not a single item."
)
if len(meta) != nout:
raise ValueError(
f"For a function with {nout} outputs, must give a tuple or list of {nout} items for "
f"`output_dtypes`/`meta`, not {len(meta)}."
)
## Vectorize function, if required
if vectorize:
otypes = [x.dtype for x in meta] if isinstance(meta, tuple) else [meta.dtype]
func = np.vectorize(func, signature=signature, otypes=otypes)
## Miscellaneous
if output_sizes is None:
output_sizes = {}
## Axes
input_axes, output_axes = _validate_normalize_axes(
axes, axis, keepdims, input_coredimss, output_coredimss
)
# Main code:
## Cast all input arrays to dask
args = [asarray(a) for a in args]
if len(input_coredimss) != len(args):
raise ValueError(
"According to `signature`, `func` requires %d arguments, but %s given"
% (len(input_coredimss), len(args))
)
## Axes: transpose input arguments
transposed_args = []
for arg, iax, _ in zip(args, input_axes, input_coredimss):
shape = arg.shape
iax = tuple(a if a < 0 else a - len(shape) for a in iax)
tidc = tuple(i for i in range(-len(shape) + 0, 0) if i not in iax) + iax
transposed_arg = arg.transpose(tidc)
transposed_args.append(transposed_arg)
args = transposed_args
## Assess input args for loop dims
input_shapes = [a.shape for a in args]
input_chunkss = [a.chunks for a in args]
num_loopdims = [len(s) - len(cd) for s, cd in zip(input_shapes, input_coredimss)]
max_loopdims = max(num_loopdims) if num_loopdims else None
core_input_shapes = [
dict(zip(icd, s[n:]))
for s, n, icd in zip(input_shapes, num_loopdims, input_coredimss)
]
core_shapes = merge(*core_input_shapes)
core_shapes.update(output_sizes)
loop_input_dimss = [
tuple("__loopdim%d__" % d for d in range(max_loopdims - n, max_loopdims))
for n in num_loopdims
]
input_dimss = [l + c for l, c in zip(loop_input_dimss, input_coredimss)]
loop_output_dims = max(loop_input_dimss, key=len) if loop_input_dimss else tuple()
## Assess input args for same size and chunk sizes
### Collect sizes and chunksizes of all dims in all arrays
dimsizess = {}
chunksizess = {}
for dims, shape, chunksizes in zip(input_dimss, input_shapes, input_chunkss):
for dim, size, chunksize in zip(dims, shape, chunksizes):
dimsizes = dimsizess.get(dim, [])
dimsizes.append(size)
dimsizess[dim] = dimsizes
chunksizes_ = chunksizess.get(dim, [])
chunksizes_.append(chunksize)
chunksizess[dim] = chunksizes_
### Assert correct partitioning, for case:
for dim, sizes in dimsizess.items():
#### Check that the arrays have same length for same dimensions or dimension `1`
if set(sizes) | {1} != {1, max(sizes)}:
raise ValueError(f"Dimension `'{dim}'` with different lengths in arrays")
if not allow_rechunk:
chunksizes = chunksizess[dim]
#### Check if core dimensions consist of only one chunk
if (dim in core_shapes) and (chunksizes[0][0] < core_shapes[dim]):
raise ValueError(
"Core dimension `'{}'` consists of multiple chunks. To fix, rechunk into a single \
chunk along this dimension or set `allow_rechunk=True`, but beware that this may increase memory usage \
significantly.".format(
dim
)
)
#### Check if loop dimensions consist of same chunksizes, when they have sizes > 1
relevant_chunksizes = list(
unique(c for s, c in zip(sizes, chunksizes) if s > 1)
)
if len(relevant_chunksizes) > 1:
raise ValueError(
f"Dimension `'{dim}'` with different chunksize present"
)
## Apply function - use blockwise here
arginds = list(concat(zip(args, input_dimss)))
### Use existing `blockwise` but only with loopdims to enforce
### concatenation for coredims that appear also at the output
### Modifying `blockwise` could improve things here.
tmp = blockwise(
func, loop_output_dims, *arginds, concatenate=True, meta=meta, **kwargs
)
# NOTE: we likely could just use `meta` instead of `tmp._meta`,
# but we use it and validate it anyway just to be sure nothing odd has happened.
metas = tmp._meta
if nout is None:
assert not isinstance(
metas, (list, tuple)
), f"meta changed from single output to multiple output during blockwise: {meta} -> {metas}"
metas = (metas,)
else:
assert isinstance(
metas, (list, tuple)
), f"meta changed from multiple output to single output during blockwise: {meta} -> {metas}"
assert (
len(metas) == nout
), f"Number of outputs changed from {nout} to {len(metas)} during blockwise"
## Prepare output shapes
loop_output_shape = tmp.shape
loop_output_chunks = tmp.chunks
keys = list(flatten(tmp.__dask_keys__()))
name, token = keys[0][0].split("-")
### *) Treat direct output
if nout is None:
output_coredimss = [output_coredimss]
## Split output
leaf_arrs = []
for i, (ocd, oax, meta) in enumerate(zip(output_coredimss, output_axes, metas)):
core_output_shape = tuple(core_shapes[d] for d in ocd)
core_chunkinds = len(ocd) * (0,)
output_shape = loop_output_shape + core_output_shape
output_chunks = loop_output_chunks + core_output_shape
leaf_name = "%s_%d-%s" % (name, i, token)
leaf_dsk = {
(leaf_name,)
+ key[1:]
+ core_chunkinds: ((getitem, key, i) if nout else key)
for key in keys
}
graph = HighLevelGraph.from_collections(leaf_name, leaf_dsk, dependencies=[tmp])
meta = meta_from_array(meta, len(output_shape))
leaf_arr = Array(
graph, leaf_name, chunks=output_chunks, shape=output_shape, meta=meta
)
### Axes:
if keepdims:
slices = len(leaf_arr.shape) * (slice(None),) + len(oax) * (np.newaxis,)
leaf_arr = leaf_arr[slices]
tidcs = [None] * len(leaf_arr.shape)
for ii, oa in zip(range(-len(oax), 0), oax):
tidcs[oa] = ii
j = 0
for ii in range(len(tidcs)):
if tidcs[ii] is None:
tidcs[ii] = j
j += 1
leaf_arr = leaf_arr.transpose(tidcs)
leaf_arrs.append(leaf_arr)
return (*leaf_arrs,) if nout else leaf_arrs[0] # Undo *) from above
| def apply_gufunc(
func,
signature,
*args,
axes=None,
axis=None,
keepdims=False,
output_dtypes=None,
output_sizes=None,
vectorize=None,
allow_rechunk=False,
meta=None,
**kwargs,
):
"""
Apply a generalized ufunc or similar python function to arrays.
``signature`` determines if the function consumes or produces core
dimensions. The remaining dimensions in given input arrays (``*args``)
are considered loop dimensions and are required to broadcast
naturally against each other.
In other terms, this function is like ``np.vectorize``, but for
the blocks of dask arrays. If the function itself shall also
be vectorized use ``vectorize=True`` for convenience.
Parameters
----------
func : callable
Function to call like ``func(*args, **kwargs)`` on input arrays
(``*args``) that returns an array or tuple of arrays. If multiple
arguments with non-matching dimensions are supplied, this function is
expected to vectorize (broadcast) over axes of positional arguments in
the style of NumPy universal functions [1]_ (if this is not the case,
set ``vectorize=True``). If this function returns multiple outputs,
``output_core_dims`` has to be set as well.
signature: string
Specifies what core dimensions are consumed and produced by ``func``.
According to the specification of numpy.gufunc signature [2]_
*args : numeric
Input arrays or scalars to the callable function.
axes: List of tuples, optional, keyword only
A list of tuples with indices of axes a generalized ufunc should operate on.
For instance, for a signature of ``"(i,j),(j,k)->(i,k)"`` appropriate for
matrix multiplication, the base elements are two-dimensional matrices
and these are taken to be stored in the two last axes of each argument. The
corresponding axes keyword would be ``[(-2, -1), (-2, -1), (-2, -1)]``.
For simplicity, for generalized ufuncs that operate on 1-dimensional arrays
(vectors), a single integer is accepted instead of a single-element tuple,
and for generalized ufuncs for which all outputs are scalars, the output
tuples can be omitted.
axis: int, optional, keyword only
A single axis over which a generalized ufunc should operate. This is a short-cut
for ufuncs that operate over a single, shared core dimension, equivalent to passing
in axes with entries of (axis,) for each single-core-dimension argument and ``()`` for
all others. For instance, for a signature ``"(i),(i)->()"``, it is equivalent to passing
in ``axes=[(axis,), (axis,), ()]``.
keepdims: bool, optional, keyword only
If this is set to True, axes which are reduced over will be left in the result as
a dimension with size one, so that the result will broadcast correctly against the
inputs. This option can only be used for generalized ufuncs that operate on inputs
that all have the same number of core dimensions and with outputs that have no core
dimensions , i.e., with signatures like ``"(i),(i)->()"`` or ``"(m,m)->()"``.
If used, the location of the dimensions in the output can be controlled with axes
and axis.
output_dtypes : Optional, dtype or list of dtypes, keyword only
Valid numpy dtype specification or list thereof.
If not given, a call of ``func`` with a small set of data
is performed in order to try to automatically determine the
output dtypes.
output_sizes : dict, optional, keyword only
Optional mapping from dimension names to sizes for outputs. Only used if
new core dimensions (not found on inputs) appear on outputs.
vectorize: bool, keyword only
If set to ``True``, ``np.vectorize`` is applied to ``func`` for
convenience. Defaults to ``False``.
allow_rechunk: Optional, bool, keyword only
Allows rechunking, otherwise chunk sizes need to match and core
dimensions are to consist only of one chunk.
Warning: enabling this can increase memory usage significantly.
Defaults to ``False``.
meta: Optional, tuple, keyword only
tuple of empty ndarrays describing the shape and dtype of the output of the gufunc.
Defaults to ``None``.
**kwargs : dict
Extra keyword arguments to pass to `func`
Returns
-------
Single dask.array.Array or tuple of dask.array.Array
Examples
--------
>>> import dask.array as da
>>> import numpy as np
>>> def stats(x):
... return np.mean(x, axis=-1), np.std(x, axis=-1)
>>> a = da.random.normal(size=(10,20,30), chunks=(5, 10, 30))
>>> mean, std = da.apply_gufunc(stats, "(i)->(),()", a)
>>> mean.compute().shape
(10, 20)
>>> def outer_product(x, y):
... return np.einsum("i,j->ij", x, y)
>>> a = da.random.normal(size=( 20,30), chunks=(10, 30))
>>> b = da.random.normal(size=(10, 1,40), chunks=(5, 1, 40))
>>> c = da.apply_gufunc(outer_product, "(i),(j)->(i,j)", a, b, vectorize=True)
>>> c.compute().shape
(10, 20, 30, 40)
References
----------
.. [1] https://docs.scipy.org/doc/numpy/reference/ufuncs.html
.. [2] https://docs.scipy.org/doc/numpy/reference/c-api/generalized-ufuncs.html
"""
# Input processing:
## Signature
if not isinstance(signature, str):
raise TypeError("`signature` has to be of type string")
# NumPy versions before https://github.com/numpy/numpy/pull/19627
# would not ignore whitespace characters in `signature` like they
# are supposed to. We remove the whitespace here as a workaround.
signature = re.sub(r"\s+", "", signature)
input_coredimss, output_coredimss = _parse_gufunc_signature(signature)
## Determine nout: nout = None for functions of one direct return; nout = int for return tuples
nout = None if not isinstance(output_coredimss, list) else len(output_coredimss)
## Consolidate onto `meta`
if meta is not None and output_dtypes is not None:
raise ValueError(
"Only one of `meta` and `output_dtypes` should be given (`meta` is preferred)."
)
if meta is None:
if output_dtypes is None:
## Infer `output_dtypes`
if vectorize:
tempfunc = np.vectorize(func, signature=signature)
else:
tempfunc = func
output_dtypes = apply_infer_dtype(
tempfunc, args, kwargs, "apply_gufunc", "output_dtypes", nout
)
## Turn `output_dtypes` into `meta`
if (
nout is None
and isinstance(output_dtypes, (tuple, list))
and len(output_dtypes) == 1
):
output_dtypes = output_dtypes[0]
sample = args[0] if args else None
if nout is None:
meta = meta_from_array(sample, dtype=output_dtypes)
else:
meta = tuple(meta_from_array(sample, dtype=odt) for odt in output_dtypes)
## Normalize `meta` format
meta = meta_from_array(meta)
if isinstance(meta, list):
meta = tuple(meta)
## Validate `meta`
if nout is None:
if isinstance(meta, tuple):
if len(meta) == 1:
meta = meta[0]
else:
raise ValueError(
"For a function with one output, must give a single item for `output_dtypes`/`meta`, "
"not a tuple or list."
)
else:
if not isinstance(meta, tuple):
raise ValueError(
f"For a function with {nout} outputs, must give a tuple or list for `output_dtypes`/`meta`, "
"not a single item."
)
if len(meta) != nout:
raise ValueError(
f"For a function with {nout} outputs, must give a tuple or list of {nout} items for "
f"`output_dtypes`/`meta`, not {len(meta)}."
)
## Vectorize function, if required
if vectorize:
otypes = [x.dtype for x in meta] if isinstance(meta, tuple) else [meta.dtype]
func = np.vectorize(func, signature=signature, otypes=otypes)
## Miscellaneous
if output_sizes is None:
output_sizes = {}
## Axes
input_axes, output_axes = _validate_normalize_axes(
axes, axis, keepdims, input_coredimss, output_coredimss
)
# Main code:
## Cast all input arrays to dask
args = [asarray(a) for a in args]
if len(input_coredimss) != len(args):
raise ValueError(
"According to `signature`, `func` requires %d arguments, but %s given"
% (len(input_coredimss), len(args))
)
## Axes: transpose input arguments
transposed_args = []
for arg, iax in zip(args, input_axes):
shape = arg.shape
iax = tuple(a if a < 0 else a - len(shape) for a in iax)
tidc = tuple(i for i in range(-len(shape) + 0, 0) if i not in iax) + iax
transposed_arg = arg.transpose(tidc)
transposed_args.append(transposed_arg)
args = transposed_args
## Assess input args for loop dims
input_shapes = [a.shape for a in args]
input_chunkss = [a.chunks for a in args]
num_loopdims = [len(s) - len(cd) for s, cd in zip(input_shapes, input_coredimss)]
max_loopdims = max(num_loopdims) if num_loopdims else None
core_input_shapes = [
dict(zip(icd, s[n:]))
for s, n, icd in zip(input_shapes, num_loopdims, input_coredimss)
]
core_shapes = merge(*core_input_shapes)
core_shapes.update(output_sizes)
loop_input_dimss = [
tuple("__loopdim%d__" % d for d in range(max_loopdims - n, max_loopdims))
for n in num_loopdims
]
input_dimss = [l + c for l, c in zip(loop_input_dimss, input_coredimss)]
loop_output_dims = max(loop_input_dimss, key=len) if loop_input_dimss else tuple()
## Assess input args for same size and chunk sizes
### Collect sizes and chunksizes of all dims in all arrays
dimsizess = {}
chunksizess = {}
for dims, shape, chunksizes in zip(input_dimss, input_shapes, input_chunkss):
for dim, size, chunksize in zip(dims, shape, chunksizes):
dimsizes = dimsizess.get(dim, [])
dimsizes.append(size)
dimsizess[dim] = dimsizes
chunksizes_ = chunksizess.get(dim, [])
chunksizes_.append(chunksize)
chunksizess[dim] = chunksizes_
### Assert correct partitioning, for case:
for dim, sizes in dimsizess.items():
#### Check that the arrays have same length for same dimensions or dimension `1`
if set(sizes) | {1} != {1, max(sizes)}:
raise ValueError(f"Dimension `'{dim}'` with different lengths in arrays")
if not allow_rechunk:
chunksizes = chunksizess[dim]
#### Check if core dimensions consist of only one chunk
if (dim in core_shapes) and (chunksizes[0][0] < core_shapes[dim]):
raise ValueError(
"Core dimension `'{}'` consists of multiple chunks. To fix, rechunk into a single \
chunk along this dimension or set `allow_rechunk=True`, but beware that this may increase memory usage \
significantly.".format(
dim
)
)
#### Check if loop dimensions consist of same chunksizes, when they have sizes > 1
relevant_chunksizes = list(
unique(c for s, c in zip(sizes, chunksizes) if s > 1)
)
if len(relevant_chunksizes) > 1:
raise ValueError(
f"Dimension `'{dim}'` with different chunksize present"
)
## Apply function - use blockwise here
arginds = list(concat(zip(args, input_dimss)))
### Use existing `blockwise` but only with loopdims to enforce
### concatenation for coredims that appear also at the output
### Modifying `blockwise` could improve things here.
tmp = blockwise(
func, loop_output_dims, *arginds, concatenate=True, meta=meta, **kwargs
)
# NOTE: we likely could just use `meta` instead of `tmp._meta`,
# but we use it and validate it anyway just to be sure nothing odd has happened.
metas = tmp._meta
if nout is None:
assert not isinstance(
metas, (list, tuple)
), f"meta changed from single output to multiple output during blockwise: {meta} -> {metas}"
metas = (metas,)
else:
assert isinstance(
metas, (list, tuple)
), f"meta changed from multiple output to single output during blockwise: {meta} -> {metas}"
assert (
len(metas) == nout
), f"Number of outputs changed from {nout} to {len(metas)} during blockwise"
## Prepare output shapes
loop_output_shape = tmp.shape
loop_output_chunks = tmp.chunks
keys = list(flatten(tmp.__dask_keys__()))
name, token = keys[0][0].split("-")
### *) Treat direct output
if nout is None:
output_coredimss = [output_coredimss]
## Split output
leaf_arrs = []
for i, (ocd, oax, meta) in enumerate(zip(output_coredimss, output_axes, metas)):
core_output_shape = tuple(core_shapes[d] for d in ocd)
core_chunkinds = len(ocd) * (0,)
output_shape = loop_output_shape + core_output_shape
output_chunks = loop_output_chunks + core_output_shape
leaf_name = "%s_%d-%s" % (name, i, token)
leaf_dsk = {
(leaf_name,)
+ key[1:]
+ core_chunkinds: ((getitem, key, i) if nout else key)
for key in keys
}
graph = HighLevelGraph.from_collections(leaf_name, leaf_dsk, dependencies=[tmp])
meta = meta_from_array(meta, len(output_shape))
leaf_arr = Array(
graph, leaf_name, chunks=output_chunks, shape=output_shape, meta=meta
)
### Axes:
if keepdims:
slices = len(leaf_arr.shape) * (slice(None),) + len(oax) * (np.newaxis,)
leaf_arr = leaf_arr[slices]
tidcs = [None] * len(leaf_arr.shape)
for ii, oa in zip(range(-len(oax), 0), oax):
tidcs[oa] = ii
j = 0
for ii in range(len(tidcs)):
if tidcs[ii] is None:
tidcs[ii] = j
j += 1
leaf_arr = leaf_arr.transpose(tidcs)
leaf_arrs.append(leaf_arr)
return (*leaf_arrs,) if nout else leaf_arrs[0] # Undo *) from above
|
3,332 | def query_tag_data(
params: Mapping[str, str],
filter_query: Optional[str] = None,
aggregate_column: Optional[str] = None,
orderby: Optional[str] = None,
referrer: Optional[str] = None,
):
with sentry_sdk.start_span(
op="discover.discover", description="facets.filter_transform"
) as span:
span.set_data("query", filter_query)
snuba_filter = discover.get_filter(filter_query, params)
# Resolve the public aliases into the discover dataset names.
snuba_filter, translated_columns = discover.resolve_discover_aliases(snuba_filter)
# TODO(k-fish): Remove this and pass aliases instead, we can convert them back for raw_query
column_map = {
"duration": "transaction.duration",
"measurements[lcp]": "measurements.lcp",
"span_op_breakdowns[ops.browser]": "spans.browser",
"span_op_breakdowns[ops.http]": "spans.http",
"span_op_breakdowns[ops.db]": "spans.db",
"span_op_breakdowns[ops.resource]": "spans.resource",
}
mapped_aggregate_column = column_map.get(aggregate_column)
# Only operate on allowed columns
if not mapped_aggregate_column:
return None
initial_selected_columns = ["count()", f"avg({mapped_aggregate_column}) as aggregate"]
with sentry_sdk.start_span(op="discover.discover", description="facets.frequent_tags"):
# Get the most relevant tag keys
tag_data = discover.query(
selected_columns=initial_selected_columns,
query=filter_query,
params=params,
orderby=["-count"],
referrer="{}.{}".format(referrer, "all_transactions"),
)
counts = [r["count"] for r in tag_data["data"]]
aggregates = [r["aggregate"] for r in tag_data["data"]]
# Return early to avoid doing more queries with 0 count transactions or aggregates for columns that dont exist
if len(counts) != 1 or counts[0] == 0 or aggregates[0] is None:
return None
if not tag_data["data"][0]:
return None
return tag_data["data"][0]
| def query_tag_data(
params: Mapping[str, str],
filter_query: Optional[str] = None,
aggregate_column: Optional[str] = None,
orderby: Optional[str] = None,
referrer: Optional[str] = None,
):
with sentry_sdk.start_span(
op="discover.discover", description="facets.filter_transform"
) as span:
span.set_data("query", filter_query)
snuba_filter = discover.get_filter(filter_query, params)
# Resolve the public aliases into the discover dataset names.
snuba_filter, translated_columns = discover.resolve_discover_aliases(snuba_filter)
# TODO(k-fish): Remove this and pass aliases instead, we can convert them back for raw_query
column_map = {
"duration": "transaction.duration",
"measurements[lcp]": "measurements.lcp",
"span_op_breakdowns[ops.browser]": "spans.browser",
"span_op_breakdowns[ops.http]": "spans.http",
"span_op_breakdowns[ops.db]": "spans.db",
"span_op_breakdowns[ops.resource]": "spans.resource",
}
mapped_aggregate_column = column_map.get(aggregate_column)
# Only operate on allowed columns
if not mapped_aggregate_column:
return None
initial_selected_columns = ["count()", f"avg({mapped_aggregate_column}) as aggregate"]
with sentry_sdk.start_span(op="discover.discover", description="facets.frequent_tags"):
# Get the most relevant tag keys
tag_data = discover.query(
selected_columns=initial_selected_columns,
query=filter_query,
params=params,
orderby=["-count"],
referrer=f"{referrer}.all_transactions",
)
counts = [r["count"] for r in tag_data["data"]]
aggregates = [r["aggregate"] for r in tag_data["data"]]
# Return early to avoid doing more queries with 0 count transactions or aggregates for columns that dont exist
if len(counts) != 1 or counts[0] == 0 or aggregates[0] is None:
return None
if not tag_data["data"][0]:
return None
return tag_data["data"][0]
|
7,270 | def local_binary_pattern(image, P, R, method='default'):
"""Gray scale and rotation invariant LBP (Local Binary Patterns).
LBP is an invariant descriptor that can be used for texture classification.
Parameters
----------
image : (N, M) array
Graylevel image.
P : int
Number of circularly symmetric neighbour set points (quantization of
the angular space).
R : float
Radius of circle (spatial resolution of the operator).
method : {'default', 'ror', 'uniform', 'var'}
Method to determine the pattern:
``default``
Original local binary pattern which is gray scale but not
rotation invariant.
``ror``
Extension of default implementation which is gray scale and
rotation invariant.
``uniform``
Improved rotation invariance with uniform patterns and finer
quantization of the angular space which is gray scale and
rotation invariant.
``nri_uniform``
Non rotation-invariant uniform patterns variant which is
only gray scale invariant [2]_.
``var``
Rotation invariant variance measures of the contrast of local
image texture which is rotation but not gray scale invariant.
Returns
-------
output : (N, M) array
LBP image.
References
----------
.. [1] Multiresolution Gray-Scale and Rotation Invariant Texture
Classification with Local Binary Patterns.
Timo Ojala, Matti Pietikainen, Topi Maenpaa.
http://www.ee.oulu.fi/research/mvmp/mvg/files/pdf/pdf_94.pdf, 2002.
.. [2] Face recognition with local binary patterns.
Timo Ahonen, Abdenour Hadid, Matti Pietikainen,
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.214.6851,
2004.
"""
check_nD(image, 2)
methods = {
'default': ord('D'),
'ror': ord('R'),
'uniform': ord('U'),
'nri_uniform': ord('N'),
'var': ord('V')
}
image = np.ascontiguousarray(image, dtype=np.double)
output = _local_binary_pattern(image, P, R, methods[method.lower()])
return output
| def local_binary_pattern(image, P, R, method='default'):
"""Gray scale and rotation invariant LBP (Local Binary Patterns).
LBP is an invariant descriptor that can be used for texture classification.
Parameters
----------
image : (N, M) array
Graylevel image.
P : int
Number of circularly symmetric neighbour set points (quantization of
the angular space).
R : float
Radius of circle (spatial resolution of the operator).
method : {'default', 'ror', 'uniform', 'var'}
Method to determine the pattern:
``default``
Original local binary pattern which is gray scale but not
rotation invariant.
``ror``
Extension of default implementation which is gray scale and
rotation invariant.
``uniform``
Improved rotation invariance with uniform patterns and finer
quantization of the angular space which is grayscale and
rotation invariant.
``nri_uniform``
Non rotation-invariant uniform patterns variant which is
only gray scale invariant [2]_.
``var``
Rotation invariant variance measures of the contrast of local
image texture which is rotation but not gray scale invariant.
Returns
-------
output : (N, M) array
LBP image.
References
----------
.. [1] Multiresolution Gray-Scale and Rotation Invariant Texture
Classification with Local Binary Patterns.
Timo Ojala, Matti Pietikainen, Topi Maenpaa.
http://www.ee.oulu.fi/research/mvmp/mvg/files/pdf/pdf_94.pdf, 2002.
.. [2] Face recognition with local binary patterns.
Timo Ahonen, Abdenour Hadid, Matti Pietikainen,
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.214.6851,
2004.
"""
check_nD(image, 2)
methods = {
'default': ord('D'),
'ror': ord('R'),
'uniform': ord('U'),
'nri_uniform': ord('N'),
'var': ord('V')
}
image = np.ascontiguousarray(image, dtype=np.double)
output = _local_binary_pattern(image, P, R, methods[method.lower()])
return output
|
17,332 | def list_equiv(first, second):
equiv = True
if len(first) != len(second):
return False
else:
for i in range(len(first)):
equiv = equiv and equivalent(first[i], second[i])
return equiv
| def list_equiv(first, second):
equiv = True
if len(first) != len(second):
return False
else:
for f, s in zip(first, second):
equiv = equiv and equivalent(first[i], second[i])
return equiv
|
20,202 | def purge(url=None):
akamai_config = settings.WAGTAILFRONTENDCACHE.get('akamai', {})
cloudfront_config = settings.WAGTAILFRONTENDCACHE.get(
'cloudfront', {})
if url:
# Use the Wagtail frontendcache PurgeBatch to perform the purge
batch = PurgeBatch()
batch.add_url(url)
# If the URL matches any of our cloudfront distributions, invalidate
# with that backend
if any(k for k in cloudfront_config.get('DISTRIBUTION_ID', {})
if k in url):
logger.info('Purging {} from cloudfront'.format(url))
batch.purge(backends='cloudfront')
# Otherwise invalidate with our default backend
else:
logger.info('Purging {} from akamai'.format(url))
batch.purge(backends='akamai')
return "Submitted invalidation for %s" % url
else:
# purge_all only exists on our AkamaiBackend
backend = AkamaiBackend(akamai_config)
logger.info('Purging entire site from akamai')
backend.purge_all()
return "Submitted invalidation for the entire site."
| def purge(url=None):
akamai_config = settings.WAGTAILFRONTENDCACHE.get('akamai', {})
cloudfront_config = settings.WAGTAILFRONTENDCACHE.get(
'cloudfront', {})
if url:
# Use the Wagtail frontendcache PurgeBatch to perform the purge
batch = PurgeBatch()
batch.add_url(url)
# If the URL matches any of our CloudFront distributions, invalidate
# If the URL matches any of our cloudfront distributions, invalidate
# with that backend
if any(k for k in cloudfront_config.get('DISTRIBUTION_ID', {})
if k in url):
logger.info('Purging {} from cloudfront'.format(url))
batch.purge(backends='cloudfront')
# Otherwise invalidate with our default backend
else:
logger.info('Purging {} from akamai'.format(url))
batch.purge(backends='akamai')
return "Submitted invalidation for %s" % url
else:
# purge_all only exists on our AkamaiBackend
backend = AkamaiBackend(akamai_config)
logger.info('Purging entire site from akamai')
backend.purge_all()
return "Submitted invalidation for the entire site."
|
16,919 | def slug_url(hass, url) -> str | None:
"""Convert a camera url into a string suitable for a camera name."""
if not url:
return None
if not isinstance(url, template_helper.Template):
url = cv.template(url)
url.hass = hass
# We shouldn't have any exceptions here, because we verified the url earlier
url = url.async_render(parse_result=False)
return slugify(yarl.URL(url).host)
| def slug_url(hass, url) -> str | None:
"""Convert a camera url into a string suitable for a camera name."""
if not url:
return None
if not isinstance(url, template_helper.Template):
url = template_helper.Template(url, hass)
# We shouldn't have any exceptions here, because we verified the url earlier
url = url.async_render(parse_result=False)
return slugify(yarl.URL(url).host)
|
3,648 | def _get_mem_available():
"""Return available memory in bytes, or None if unknown."""
try:
import psutil
return psutil.virtual_memory().available
except (ImportError, AttributeError):
pass
if sys.platform.startswith('linux'):
info = {}
with open('/proc/meminfo', 'r') as f:
for line in f:
p = line.split()
info[p[0].strip(':').lower()] = float(p[1]) * 1e3
if 'memavailable' in info:
# Linux >= 3.14
return info['memavailable']
else:
return info['memfree'] + info['cached']
return None
| def _get_mem_available():
"""Return available memory in bytes, or None if unknown."""
try:
import psutil
return psutil.virtual_memory().available
except (ImportError, AttributeError):
pass
if sys.platform.startswith('linux'):
info = {}
with open('/proc/meminfo', 'r') as f:
for line in f:
p = line.split()
info[p[0].strip(':').lower()] = int(p[1]) * 1024
if 'memavailable' in info:
# Linux >= 3.14
return info['memavailable']
else:
return info['memfree'] + info['cached']
return None
|
17,671 | def parse_gitconfig_dump(dump, cwd=None, multi_value=True):
"""Parse a dump-string from `git config -z --list`
This parser has limited support for discarding unrelated output
that may contaminate the given dump. It does so performing a
relatively strict matching of configuration key syntax, and discarding
lines in the output that are not valid git-config keys.
There is also built-in support for parsing outputs generated
with --show-origin (see return value).
Parameters
----------
dump : str
Null-byte separated output
cwd : path-like, optional
Use this absolute path to convert relative paths for origin reports
into absolute paths. By default, the process working directory
PWD is used.
multi_value : bool, optional
If True, report values from multiple specifications of the
same key as a tuple of values assigned to this key. Otherwise,
the last configuration is reported.
Returns:
--------
dict, set
Configuration items are returned as key/value pairs in a dictionary.
The second tuple-item will be a set of path objects comprising all
source files, if origin information was included in the dump
(--show-origin). An empty set is returned otherwise.
"""
dct = {}
fileset = set()
for line in dump.split('\0'):
# line is a null-delimited chunk
k = None
# in anticipation of output contamination, process within a loop
# scope we can reject non syntax compliant pieces
while line:
if line.startswith('file:'):
# origin line
fname = Path(line[5:])
if not fname.is_absolute():
fname = Path(cwd) / fname if cwd else Path.cwd() / fname
fileset.add(fname)
break
if line.startswith('command line:'):
# no origin that we could as a pathobj
break
# try getting key/value pair from the present chunk
k, v = _gitcfg_rec_to_keyvalue(line)
if k is not None:
# we are done with this chunk when there is a good key
break
# discard the first line and start over
ignore, line = line.split('\n', maxsplit=1)
lgr.debug('Non-standard git-config output, ignoring: %s', ignore)
if not k:
# nothing else to log, all ignored dump was reported before
continue
# multi-value reporting
present_v = dct.get(k, None)
if present_v is None or not multi_value:
dct[k] = v
else:
if isinstance(present_v, tuple):
dct[k] = present_v + (v,)
else:
dct[k] = (present_v, v)
return dct, fileset
| def parse_gitconfig_dump(dump, cwd=None, multi_value=True):
"""Parse a dump-string from `git config -z --list`
This parser has limited support for discarding unrelated output
that may contaminate the given dump. It does so performing a
relatively strict matching of configuration key syntax, and discarding
lines in the output that are not valid git-config keys.
There is also built-in support for parsing outputs generated
with --show-origin (see return value).
Parameters
----------
dump : str
Null-byte separated output
cwd : path-like, optional
Use this absolute path to convert relative paths for origin reports
into absolute paths. By default, the process working directory
PWD is used.
multi_value : bool, optional
If True, report values from multiple specifications of the
same key as a tuple of values assigned to this key. Otherwise,
the last configuration is reported.
Returns:
--------
dict, set
Configuration items are returned as key/value pairs in a dictionary.
The second tuple-item will be a set of path objects comprising all
source files, if origin information was included in the dump
(--show-origin). An empty set is returned otherwise.
"""
dct = {}
fileset = set()
for line in dump.split('\0'):
# line is a null-delimited chunk
k = None
# in anticipation of output contamination, process within a loop
# where we can reject non syntax compliant pieces
while line:
if line.startswith('file:'):
# origin line
fname = Path(line[5:])
if not fname.is_absolute():
fname = Path(cwd) / fname if cwd else Path.cwd() / fname
fileset.add(fname)
break
if line.startswith('command line:'):
# no origin that we could as a pathobj
break
# try getting key/value pair from the present chunk
k, v = _gitcfg_rec_to_keyvalue(line)
if k is not None:
# we are done with this chunk when there is a good key
break
# discard the first line and start over
ignore, line = line.split('\n', maxsplit=1)
lgr.debug('Non-standard git-config output, ignoring: %s', ignore)
if not k:
# nothing else to log, all ignored dump was reported before
continue
# multi-value reporting
present_v = dct.get(k, None)
if present_v is None or not multi_value:
dct[k] = v
else:
if isinstance(present_v, tuple):
dct[k] = present_v + (v,)
else:
dct[k] = (present_v, v)
return dct, fileset
|
29,572 | def start_worker(logdir, scheduler_addr, scheduler_port, worker_addr, nthreads, nprocs,
ssh_username, ssh_port, ssh_private_key, nohost,
memory_limit,
worker_port,
nanny_port,
remote_python=None,
remote_dask_worker=None):
cmd = ('{python} -m {remote_dask_worker} '
'{scheduler_addr}:{scheduler_port} '
'--nthreads {nthreads} --nprocs {nprocs} ')
if not nohost:
cmd += ' --host {worker_addr} '
if memory_limit:
cmd += '--memory-limit {memory_limit} '
if worker_port:
cmd += '--worker-port {worker_port} '
if nanny_port:
cmd += '--nanny-port {nanny_port} '
cmd = cmd.format(
python=remote_python or sys.executable,
remote_dask_worker=remote_dask_worker or 'distributed.cli.dask_worker',
scheduler_addr=scheduler_addr,
scheduler_port=scheduler_port,
worker_addr=worker_addr,
nthreads=nthreads,
nprocs=nprocs,
memory_limit=memory_limit,
worker_port=worker_port,
nanny_port=nanny_port)
# Optionally redirect stdout and stderr to a logfile
if logdir is not None:
cmd = 'mkdir -p {logdir} && '.format(logdir=logdir) + cmd
cmd += '&> {logdir}/dask_scheduler_{addr}.log'.format(
addr=worker_addr, logdir=logdir)
label = 'worker {addr}'.format(addr=worker_addr)
# Create a command dictionary, which contains everything we need to run and
# interact with this command.
input_queue = Queue()
output_queue = Queue()
cmd_dict = {'cmd': cmd, 'label': label, 'address': worker_addr,
'input_queue': input_queue, 'output_queue': output_queue,
'ssh_username': ssh_username, 'ssh_port': ssh_port,
'ssh_private_key': ssh_private_key}
# Start the thread
thread = Thread(target=async_ssh, args=[cmd_dict])
thread.daemon = True
thread.start()
return merge(cmd_dict, {'thread': thread})
| def start_worker(logdir, scheduler_addr, scheduler_port, worker_addr, nthreads, nprocs,
ssh_username, ssh_port, ssh_private_key, nohost,
memory_limit,
worker_port,
nanny_port,
remote_python=None,
remote_dask_worker='distributed.cli.dask_worker'):
cmd = ('{python} -m {remote_dask_worker} '
'{scheduler_addr}:{scheduler_port} '
'--nthreads {nthreads} --nprocs {nprocs} ')
if not nohost:
cmd += ' --host {worker_addr} '
if memory_limit:
cmd += '--memory-limit {memory_limit} '
if worker_port:
cmd += '--worker-port {worker_port} '
if nanny_port:
cmd += '--nanny-port {nanny_port} '
cmd = cmd.format(
python=remote_python or sys.executable,
remote_dask_worker=remote_dask_worker or 'distributed.cli.dask_worker',
scheduler_addr=scheduler_addr,
scheduler_port=scheduler_port,
worker_addr=worker_addr,
nthreads=nthreads,
nprocs=nprocs,
memory_limit=memory_limit,
worker_port=worker_port,
nanny_port=nanny_port)
# Optionally redirect stdout and stderr to a logfile
if logdir is not None:
cmd = 'mkdir -p {logdir} && '.format(logdir=logdir) + cmd
cmd += '&> {logdir}/dask_scheduler_{addr}.log'.format(
addr=worker_addr, logdir=logdir)
label = 'worker {addr}'.format(addr=worker_addr)
# Create a command dictionary, which contains everything we need to run and
# interact with this command.
input_queue = Queue()
output_queue = Queue()
cmd_dict = {'cmd': cmd, 'label': label, 'address': worker_addr,
'input_queue': input_queue, 'output_queue': output_queue,
'ssh_username': ssh_username, 'ssh_port': ssh_port,
'ssh_private_key': ssh_private_key}
# Start the thread
thread = Thread(target=async_ssh, args=[cmd_dict])
thread.daemon = True
thread.start()
return merge(cmd_dict, {'thread': thread})
|
32,541 | def create_indicator_fields(indicator, indicator_type, indicator_value):
"""Creating an indicator fields from a raw indicator"""
params = demisto.params()
indicator_fields = TC_INDICATOR_TO_XSOAR_INDICATOR[indicator_type]
fields: dict = {}
for indicator_key, xsoar_indicator_key in indicator_fields.items():
fields[xsoar_indicator_key] = indicator.get(indicator_key, '')
raw_tags = indicator.get('tags', {}).get('data', [])
tags = [tag.get('name', '') for tag in raw_tags]
fields['tags'] = tags
fields['reportedby'] = [name for name in [indicator.get('ownerName', ''), indicator.get('source', '')] if name]
fields['feedrelatedindicators'] = indicator.get("associatedIndicators", {}).get('data') or []
fields['feedrelatedindicators'].extend(indicator.get("associatedGroups", {}).get('data') or [])
if 'description' not in fields:
fields['description'] = indicator.get('attributes', {}).get('description', '')
if indicator_type == 'Course of Action':
fields['action'] = indicator.get('attributes', {}).get('action', '')
if indicator_type == 'Registry Key':
fields['namefield'] = indicator.get('Key Name', '')
tlp_color = params.get('tlp_color', '')
if tlp_color:
fields['trafficlightprotocol'] = tlp_color # type: ignore
if argToBoolean(params.get('retrieveRelationships')):
relationships = create_indicator_relationships(fields, indicator_type, indicator_value)
if relationships:
fields['relationships'] = relationships
remove_nulls_from_dictionary(fields)
return fields
| def create_indicator_fields(indicator, indicator_type, indicator_value):
"""Creating an indicator fields from a raw indicator"""
params = demisto.params()
indicator_fields_mapping = TC_INDICATOR_TO_XSOAR_INDICATOR[indicator_type]
fields: dict = {}
for indicator_key, xsoar_indicator_key in indicator_fields.items():
fields[xsoar_indicator_key] = indicator.get(indicator_key, '')
raw_tags = indicator.get('tags', {}).get('data', [])
tags = [tag.get('name', '') for tag in raw_tags]
fields['tags'] = tags
fields['reportedby'] = [name for name in [indicator.get('ownerName', ''), indicator.get('source', '')] if name]
fields['feedrelatedindicators'] = indicator.get("associatedIndicators", {}).get('data') or []
fields['feedrelatedindicators'].extend(indicator.get("associatedGroups", {}).get('data') or [])
if 'description' not in fields:
fields['description'] = indicator.get('attributes', {}).get('description', '')
if indicator_type == 'Course of Action':
fields['action'] = indicator.get('attributes', {}).get('action', '')
if indicator_type == 'Registry Key':
fields['namefield'] = indicator.get('Key Name', '')
tlp_color = params.get('tlp_color', '')
if tlp_color:
fields['trafficlightprotocol'] = tlp_color # type: ignore
if argToBoolean(params.get('retrieveRelationships')):
relationships = create_indicator_relationships(fields, indicator_type, indicator_value)
if relationships:
fields['relationships'] = relationships
remove_nulls_from_dictionary(fields)
return fields
|
6,619 | def update_invoice_status():
"""Updates status as Overdue for applicable invoices. Runs daily."""
today = getdate()
payment_schedule = frappe.qb.DocType("Payment Schedule")
for doctype in ("Sales Invoice", "Purchase Invoice"):
invoice = frappe.qb.DocType(doctype)
consider_base_amount = invoice.party_account_currency != invoice.currency
payment_amount = (
frappe.qb.terms.Case()
.when(consider_base_amount, payment_schedule.base_payment_amount)
.else_(payment_schedule.payment_amount)
)
payable_amount = (
frappe.qb.from_(payment_schedule)
.select(Sum(payment_amount))
.where(
(payment_schedule.parent == invoice.name)
& (payment_schedule.due_date < today)
)
)
total = (
frappe.qb.terms.Case()
.when(invoice.disable_rounded_total, invoice.grand_total)
.else_(invoice.rounded_total)
)
base_total = (
frappe.qb.terms.Case()
.when(invoice.disable_rounded_total, invoice.base_grand_total)
.else_(invoice.base_rounded_total)
)
total_amount = (
frappe.qb.terms.Case()
.when(consider_base_amount, base_total)
.else_(total)
)
is_overdue = total_amount - invoice.outstanding_amount < payable_amount
conditions = (
(invoice.docstatus == 1)
& (invoice.outstanding_amount > 0)
& (
invoice.status.like("Unpaid%")
| invoice.status.like("Partly Paid%")
)
& (
(invoice.is_pos & invoice.due_date < today) | is_overdue
if doctype == "Sales Invoice"
else is_overdue
)
)
status = (
frappe.qb.terms.Case()
.when(invoice.status.like("%Discounted"), "Overdue and Discounted")
.else_("Overdue")
)
frappe.qb.update(invoice).set("status", status).where(conditions).run()
| def update_invoice_status():
"""Updates status as Overdue for applicable invoices. Runs daily."""
today = getdate()
payment_schedule = frappe.qb.DocType("Payment Schedule")
for doctype in ("Sales Invoice", "Purchase Invoice"):
invoice = frappe.qb.DocType(doctype)
consider_base_amount = invoice.party_account_currency != invoice.currency
payment_amount = (
frappe.qb.terms.Case()
.when(consider_base_amount, payment_schedule.base_payment_amount)
.else_(payment_schedule.payment_amount)
)
payable_amount = (
((invoice.is_pos & invoice.due_date < today) | is_overdue)
.select(Sum(payment_amount))
.where(
(payment_schedule.parent == invoice.name)
& (payment_schedule.due_date < today)
)
)
total = (
frappe.qb.terms.Case()
.when(invoice.disable_rounded_total, invoice.grand_total)
.else_(invoice.rounded_total)
)
base_total = (
frappe.qb.terms.Case()
.when(invoice.disable_rounded_total, invoice.base_grand_total)
.else_(invoice.base_rounded_total)
)
total_amount = (
frappe.qb.terms.Case()
.when(consider_base_amount, base_total)
.else_(total)
)
is_overdue = total_amount - invoice.outstanding_amount < payable_amount
conditions = (
(invoice.docstatus == 1)
& (invoice.outstanding_amount > 0)
& (
invoice.status.like("Unpaid%")
| invoice.status.like("Partly Paid%")
)
& (
(invoice.is_pos & invoice.due_date < today) | is_overdue
if doctype == "Sales Invoice"
else is_overdue
)
)
status = (
frappe.qb.terms.Case()
.when(invoice.status.like("%Discounted"), "Overdue and Discounted")
.else_("Overdue")
)
frappe.qb.update(invoice).set("status", status).where(conditions).run()
|
11,973 | def load_stylesheet_from_environment(is_pyqtgraph=False):
"""
Load the stylesheet from QT_API (or PYQTGRAPH_QT_LIB) environment variable.
Args:
is_pyqtgraph (bool): True if it is to be set using PYQTGRAPH_QT_LIB.
Raises:
KeyError: if PYQTGRAPH_QT_LIB does not exist.
Returns:
str: the stylesheet string.
"""
warnings.warn(DEPRECATION_MSG, DeprecationWarning)
if is_pyqtgraph:
stylesheet = _load_stylesheet(qt_api=os.environ.get('PYQTGRAPH_QT_LIB', None))
else:
stylesheet = _load_stylesheet()
return stylesheet
| def load_stylesheet_from_environment(is_pyqtgraph=False):
"""
Load the stylesheet from QT_API (or PYQTGRAPH_QT_LIB) environment variable.
Args:
is_pyqtgraph (bool): True if it is to be set using PYQTGRAPH_QT_LIB.
Raises:
KeyError: if PYQTGRAPH_QT_LIB does not exist.
Returns:
str: the stylesheet string.
"""
warnings.warn(DEPRECATION_MSG, DeprecationWarning)
if is_pyqtgraph:
stylesheet = _load_stylesheet(qt_api=os.environ.get('PYQTGRAPH_QT_LIB'))
else:
stylesheet = _load_stylesheet()
return stylesheet
|
57,021 | def validate_cmd(cmd_name, valid_cmd_attribute_specs, actual_cmd_attributes):
"""Validates that the attributes of a command contain all the required
attributes and some/all of optional attributes. It also checks that
the values of attributes belong to a set of allowed values if any.
Args:
cmd_name: str. The command for which validation process is being done.
valid_cmd_attribute_specs: dict. A dict containing the required and
optional attributes for a command along with allowed values
for attributes if any.
actual_cmd_attributes: dict. A dict containing the actual
attributes of a command with values for the attributes.
Raises:
ValidationError. Any required attribute is missing or an extra attribute
exists or the value of an attribute is not allowed.
"""
required_attribute_names = valid_cmd_attribute_specs[
'required_attribute_names']
optional_attribute_names = valid_cmd_attribute_specs[
'optional_attribute_names']
actual_attribute_names = list(actual_cmd_attributes.keys())
missing_attribute_names = [
key for key in required_attribute_names if key not in (
actual_attribute_names)]
extra_attribute_names = [
key for key in actual_attribute_names if key not in (
required_attribute_names + optional_attribute_names)]
error_msg_list = []
if missing_attribute_names:
error_msg_list.append(
'The following required attributes are missing: %s' % (
(', ').join(sorted(missing_attribute_names))))
if extra_attribute_names:
error_msg_list.append(
'The following extra attributes are present: %s' % (
(', ').join(sorted(extra_attribute_names))))
if error_msg_list:
raise utils.ValidationError((', ').join(error_msg_list))
deprecated_values = valid_cmd_attribute_specs.get('deprecated_values')
if deprecated_values:
for attribute_name, attribute_values in deprecated_values.items():
actual_value = actual_cmd_attributes[attribute_name]
if actual_value in attribute_values:
raise utils.DeprecatedError(
'Value for %s in cmd %s: %s is deprecated' % (
attribute_name, cmd_name, actual_value))
allowed_values = valid_cmd_attribute_specs.get('allowed_values')
if not allowed_values:
return
for attribute_name, attribute_values in allowed_values.items():
actual_value = actual_cmd_attributes[attribute_name]
if actual_value not in attribute_values:
raise utils.ValidationError(
'Value for %s in cmd %s: %s is not allowed' % (
attribute_name, cmd_name, actual_value))
| def validate_cmd(cmd_name, valid_cmd_attribute_specs, actual_cmd_attributes):
"""Validates that the attributes of a command contain all the required
attributes and some/all of optional attributes. It also checks that
the values of attributes belong to a set of allowed values if any.
Args:
cmd_name: str. The command for which validation process is being done.
valid_cmd_attribute_specs: dict. A dict containing the required and
optional attributes for a command along with allowed values
for attributes if any.
actual_cmd_attributes: dict. A dict containing the actual
attributes of a command with values for the attributes.
Raises:
ValidationError. Any required attribute is missing or an extra attribute
exists or the value of an attribute is not allowed.
"""
required_attribute_names = valid_cmd_attribute_specs[
'required_attribute_names']
optional_attribute_names = valid_cmd_attribute_specs[
'optional_attribute_names']
actual_attribute_names = list(actual_cmd_attributes.keys())
missing_attribute_names = [
key for key in required_attribute_names if key not in (
actual_attribute_names)]
extra_attribute_names = [
key for key in actual_attribute_names if key not in (
required_attribute_names + optional_attribute_names)]
error_msg_list = []
if missing_attribute_names:
error_msg_list.append(
'The following required attributes are missing: %s' % (
(', ').join(sorted(missing_attribute_names))))
if extra_attribute_names:
error_msg_list.append(
'The following extra attributes are present: %s' % (
(', ').join(sorted(extra_attribute_names))))
if error_msg_list:
raise utils.ValidationError((', ').join(error_msg_list))
deprecated_values = valid_cmd_attribute_specs.get('deprecated_values', {})
for attribute_name, attribute_values in deprecated_values.items():
actual_value = actual_cmd_attributes[attribute_name]
if actual_value in attribute_values:
raise utils.DeprecatedError(
'Value for %s in cmd %s: %s is deprecated' % (
attribute_name, cmd_name, actual_value))
allowed_values = valid_cmd_attribute_specs.get('allowed_values')
if not allowed_values:
return
for attribute_name, attribute_values in allowed_values.items():
actual_value = actual_cmd_attributes[attribute_name]
if actual_value not in attribute_values:
raise utils.ValidationError(
'Value for %s in cmd %s: %s is not allowed' % (
attribute_name, cmd_name, actual_value))
|
8,947 | def auth_after_register(bot):
"""Do NickServ/AuthServ auth
:param bot: a connected Sopel instance
:type bot: :class:`sopel.bot.Sopel`
This function can be used, **after** the bot is connected, to handle one of
these auth methods:
* ``nickserv``: send a private message to the NickServ service
* ``authserv``: send a ``AUTHSERV`` command
* ``Q``: send a ``AUTH`` command
* ``userserv``: send a private message to the UserServ service
.. important::
If ``core.auth_method`` is set, then ``core.nick_auth_method`` will be
ignored. If none is set, then this function does nothing.
"""
if bot.config.core.auth_method:
auth_method = bot.config.core.auth_method
auth_username = bot.config.core.auth_username
auth_password = bot.config.core.auth_password
auth_target = bot.config.core.auth_target
elif bot.config.core.nick_auth_method:
auth_method = bot.config.core.nick_auth_method
auth_username = (bot.config.core.nick_auth_username or
bot.config.core.nick)
auth_password = bot.config.core.nick_auth_password
auth_target = bot.config.core.nick_auth_target
else:
return
# nickserv-based auth method needs to check for current nick
if auth_method == 'nickserv':
if bot.nick != bot.settings.core.nick:
LOGGER.warning('Sending nickserv GHOST command.')
bot.say(
'GHOST %s %s' % (bot.settings.core.nick, auth_password),
auth_target or 'NickServ')
else:
bot.say('IDENTIFY %s' % auth_password, auth_target or 'NickServ')
# other methods use account instead of nick
elif auth_method == 'authserv':
bot.write(('AUTHSERV', 'auth', auth_username, auth_password))
elif auth_method == 'Q':
bot.write(('AUTH', auth_username, auth_password))
elif auth_method == 'userserv':
bot.say("LOGIN %s %s" % (auth_username, auth_password),
auth_target or 'UserServ')
| def auth_after_register(bot):
"""Do NickServ/AuthServ auth.
:param bot: a connected Sopel instance
:type bot: :class:`sopel.bot.Sopel`
This function can be used, **after** the bot is connected, to handle one of
these auth methods:
* ``nickserv``: send a private message to the NickServ service
* ``authserv``: send a ``AUTHSERV`` command
* ``Q``: send a ``AUTH`` command
* ``userserv``: send a private message to the UserServ service
.. important::
If ``core.auth_method`` is set, then ``core.nick_auth_method`` will be
ignored. If none is set, then this function does nothing.
"""
if bot.config.core.auth_method:
auth_method = bot.config.core.auth_method
auth_username = bot.config.core.auth_username
auth_password = bot.config.core.auth_password
auth_target = bot.config.core.auth_target
elif bot.config.core.nick_auth_method:
auth_method = bot.config.core.nick_auth_method
auth_username = (bot.config.core.nick_auth_username or
bot.config.core.nick)
auth_password = bot.config.core.nick_auth_password
auth_target = bot.config.core.nick_auth_target
else:
return
# nickserv-based auth method needs to check for current nick
if auth_method == 'nickserv':
if bot.nick != bot.settings.core.nick:
LOGGER.warning('Sending nickserv GHOST command.')
bot.say(
'GHOST %s %s' % (bot.settings.core.nick, auth_password),
auth_target or 'NickServ')
else:
bot.say('IDENTIFY %s' % auth_password, auth_target or 'NickServ')
# other methods use account instead of nick
elif auth_method == 'authserv':
bot.write(('AUTHSERV', 'auth', auth_username, auth_password))
elif auth_method == 'Q':
bot.write(('AUTH', auth_username, auth_password))
elif auth_method == 'userserv':
bot.say("LOGIN %s %s" % (auth_username, auth_password),
auth_target or 'UserServ')
|
2,931 | def test_groupby_groups_in_BaseGrouper():
# https://github.com/pandas-dev/pandas/issues/26326
m_index = pd.MultiIndex.from_product([['A', 'B'],
['C', 'D']], names=['alpha', 'beta'])
df_sample = pd.DataFrame({'foo': [1, 2, 1, 2], 'bar': [1, 2, 3, 4]},
index=m_index)
dfGBY_BaseGrouper = df_sample.groupby([pd.Grouper(level='alpha'), 'beta'])
dfGBY_noBaseGrouper = df_sample.groupby(['alpha', 'beta'])
assert(dfGBY_BaseGrouper.groups == dfGBY_noBaseGrouper.groups)
| def test_groupby_groups_in_BaseGrouper():
# GH 26326
m_index = pd.MultiIndex.from_product([['A', 'B'],
['C', 'D']], names=['alpha', 'beta'])
df_sample = pd.DataFrame({'foo': [1, 2, 1, 2], 'bar': [1, 2, 3, 4]},
index=m_index)
dfGBY_BaseGrouper = df_sample.groupby([pd.Grouper(level='alpha'), 'beta'])
dfGBY_noBaseGrouper = df_sample.groupby(['alpha', 'beta'])
assert(dfGBY_BaseGrouper.groups == dfGBY_noBaseGrouper.groups)
|
38,863 | def get_spacy_model(
spacy_model_name: str, parse: bool, ner: bool, pos_tags: bool = True
) -> SpacyModelType:
"""
In order to avoid loading spacy models a whole bunch of times, we'll save references to them,
keyed by the options we used to create the spacy model, so any particular configuration only
gets loaded once.
"""
options = (spacy_model_name, pos_tags, parse, ner)
if options not in LOADED_SPACY_MODELS:
disable = ["vectors", "textcat"]
if not pos_tags:
disable.append("tagger")
if not parse:
disable.append("parser")
if not ner:
disable.append("ner")
try:
spacy_model = spacy.load(spacy_model_name, disable=disable)
except OSError:
logger.warning(
f"Spacy models '{spacy_model_name}' not found. Downloading and installing."
)
spacy_download(spacy_model_name)
# Import the downloaded model module directly and load from there
spacy_model_module = __import__(spacy_model_name)
spacy_model = spacy_model_module.load(disable=disable) # type: ignore
LOADED_SPACY_MODELS[options] = spacy_model
return LOADED_SPACY_MODELS[options]
| def get_spacy_model(
spacy_model_name: str, pos_tags: bool = True, parse: bool = False, ner: bool = False
) -> SpacyModelType:
"""
In order to avoid loading spacy models a whole bunch of times, we'll save references to them,
keyed by the options we used to create the spacy model, so any particular configuration only
gets loaded once.
"""
options = (spacy_model_name, pos_tags, parse, ner)
if options not in LOADED_SPACY_MODELS:
disable = ["vectors", "textcat"]
if not pos_tags:
disable.append("tagger")
if not parse:
disable.append("parser")
if not ner:
disable.append("ner")
try:
spacy_model = spacy.load(spacy_model_name, disable=disable)
except OSError:
logger.warning(
f"Spacy models '{spacy_model_name}' not found. Downloading and installing."
)
spacy_download(spacy_model_name)
# Import the downloaded model module directly and load from there
spacy_model_module = __import__(spacy_model_name)
spacy_model = spacy_model_module.load(disable=disable) # type: ignore
LOADED_SPACY_MODELS[options] = spacy_model
return LOADED_SPACY_MODELS[options]
|
31,653 | def main():
params = demisto.params()
args = demisto.args()
url = params.get('url')
verify_certificate = not params.get('insecure', False)
proxy = params.get('proxy', False)
headers = {}
mock_data = str(args.get('mock-data', ''))
if mock_data.lower() == "true":
headers['Mock-Data'] = "True"
headers['Authorization'] = f'Bearer {params["api_key"]}'
headers['Soar-Integration-Origin'] = "Cortex XSOAR"
command = demisto.command()
demisto.debug(f'Command being called is {command}')
try:
requests.packages.urllib3.disable_warnings()
client = Client(urljoin(url, ''), verify_certificate, proxy, headers=headers, auth=None)
commands = {
'abxcortexxsoar-check-the-status-of-an-action-requested-on-a-case':
check_the_status_of_an_action_requested_on_a_case_command,
'abxcortexxsoar-check-the-status-of-an-action-requested-on-a-threat':
check_the_status_of_an_action_requested_on_a_threat_command,
'abxcortexxsoar-get-a-list-of-abnormal-cases-identified-by-abnormal-security':
get_a_list_of_abnormal_cases_identified_by_abnormal_security_command,
'abxcortexxsoar-get-a-list-of-threats':
get_a_list_of_threats_command,
'abxcortexxsoar-get-details-of-a-threat':
get_details_of_a_threat_command,
'abxcortexxsoar-get-details-of-an-abnormal-case':
get_details_of_an_abnormal_case_command,
'abxcortexxsoar-get-the-latest-threat-intel-feed': get_the_latest_threat_intel_feed_command,
'abxcortexxsoar-manage-a-threat-identified-by-abnormal-security':
manage_a_threat_identified_by_abnormal_security_command,
'abxcortexxsoar-manage-an-abnormal-case':
manage_an_abnormal_case_command,
'abxcortexxsoar-submit-an-inquiry-to-request-a-report-on-misjudgement-by-abnormal-security':
submit_an_inquiry_to_request_a_report_on_misjudgement_by_abnormal_security_command,
}
if command == 'test-module':
headers['Mock-Data'] = "True"
test_client = Client(urljoin(url, ''), verify_certificate, proxy, headers=headers, auth=None)
test_module(test_client)
elif command in commands:
return_results(commands[command](client, args))
else:
raise NotImplementedError(f'{command} command is not implemented.')
except Exception as e:
return_error(str(e))
| def main():
params = demisto.params()
args = demisto.args()
url = params.get('url')
verify_certificate = not params.get('insecure', False)
proxy = params.get('proxy', False)
headers = {}
mock_data = str(args.get('mock-data', ''))
if mock_data.lower() == "true":
headers['Mock-Data'] = "True"
headers['Authorization'] = f'Bearer {params["api_key"]}'
headers['Soar-Integration-Origin'] = "Cortex XSOAR"
command = demisto.command()
demisto.debug(f'Command being called is {command}')
try:
requests.packages.urllib3.disable_warnings()
client = Client(urljoin(url, ''), verify_certificate, proxy, headers=headers, auth=None)
commands = {
'abnormal-security-check-case-action-status':
check_the_status_of_an_action_requested_on_a_case_command,
'abxcortexxsoar-check-the-status-of-an-action-requested-on-a-threat':
check_the_status_of_an_action_requested_on_a_threat_command,
'abxcortexxsoar-get-a-list-of-abnormal-cases-identified-by-abnormal-security':
get_a_list_of_abnormal_cases_identified_by_abnormal_security_command,
'abxcortexxsoar-get-a-list-of-threats':
get_a_list_of_threats_command,
'abxcortexxsoar-get-details-of-a-threat':
get_details_of_a_threat_command,
'abxcortexxsoar-get-details-of-an-abnormal-case':
get_details_of_an_abnormal_case_command,
'abxcortexxsoar-get-the-latest-threat-intel-feed': get_the_latest_threat_intel_feed_command,
'abxcortexxsoar-manage-a-threat-identified-by-abnormal-security':
manage_a_threat_identified_by_abnormal_security_command,
'abxcortexxsoar-manage-an-abnormal-case':
manage_an_abnormal_case_command,
'abxcortexxsoar-submit-an-inquiry-to-request-a-report-on-misjudgement-by-abnormal-security':
submit_an_inquiry_to_request_a_report_on_misjudgement_by_abnormal_security_command,
}
if command == 'test-module':
headers['Mock-Data'] = "True"
test_client = Client(urljoin(url, ''), verify_certificate, proxy, headers=headers, auth=None)
test_module(test_client)
elif command in commands:
return_results(commands[command](client, args))
else:
raise NotImplementedError(f'{command} command is not implemented.')
except Exception as e:
return_error(str(e))
|
40,732 | def setup_common_training_handlers(
trainer: Engine,
train_sampler: Optional[DistributedSampler] = None,
to_save: Optional[Mapping] = None,
save_every_iters: int = 1000,
output_path: Optional[str] = None,
lr_scheduler: Optional[Union[ParamScheduler, _LRScheduler]] = None,
with_gpu_stats: bool = False,
output_names: Optional[Iterable[str]] = None,
with_pbars: bool = True,
with_pbar_on_iters: bool = True,
log_every_iters: int = 100,
stop_on_nan: bool = True,
clear_cuda_cache: bool = True,
save_handler: Optional[Union[Callable, BaseSaveHandler]] = None,
**kwargs: Any,
) -> None:
"""Helper method to setup trainer with common handlers (it also supports distributed configuration):
- :class:`~ignite.handlers.TerminateOnNan`
- handler to setup learning rate scheduling
- :class:`~ignite.handlers.ModelCheckpoint`
- :class:`~ignite.metrics.RunningAverage` on `update_function` output
- Two progress bars on epochs and optionally on iterations
Args:
trainer: trainer engine. Output of trainer's `update_function` should be a dictionary
or sequence or a single tensor.
train_sampler: Optional distributed sampler used to call
`set_epoch` method on epoch started event.
to_save: dictionary with objects to save in the checkpoint. This argument is passed to
:class:`~ignite.handlers.Checkpoint` instance.
save_every_iters: saving interval. By default, `to_save` objects are stored
each 1000 iterations.
output_path: output path to indicate where `to_save` objects are stored. Argument is mutually
exclusive with ``save_handler``.
lr_scheduler: learning rate scheduler
as native torch LRScheduler or ignite's parameter scheduler.
with_gpu_stats: if True, :class:`~ignite.contrib.metrics.handlers.GpuInfo` is attached to the
trainer. This requires `pynvml` package to be installed.
output_names: list of names associated with `update_function` output dictionary.
with_pbars: if True, two progress bars on epochs and optionally on iterations are attached.
Default, True.
with_pbar_on_iters: if True, a progress bar on iterations is attached to the trainer.
Default, True.
log_every_iters: logging interval for :class:`~ignite.contrib.metrics.handlers.GpuInfo` and for
epoch-wise progress bar. Default, 100.
stop_on_nan: if True, :class:`~ignite.handlers.TerminateOnNan` handler is added to the trainer.
Default, True.
clear_cuda_cache: if True, `torch.cuda.empty_cache()` is called every end of epoch.
Default, True.
save_handler: Method or callable
class to use to store ``to_save``. See :class:`~ignite.handlers.checkpoint.Checkpoint` for more details.
Argument is mutually exclusive with ``output_path``.
kwargs: optional keyword args to be passed to construct :class:`~ignite.handlers.checkpoint.Checkpoint`.
"""
if idist.get_world_size() > 1:
_setup_common_distrib_training_handlers(
trainer,
train_sampler=train_sampler,
to_save=to_save,
save_every_iters=save_every_iters,
output_path=output_path,
lr_scheduler=lr_scheduler,
with_gpu_stats=with_gpu_stats,
output_names=output_names,
with_pbars=with_pbars,
with_pbar_on_iters=with_pbar_on_iters,
log_every_iters=log_every_iters,
stop_on_nan=stop_on_nan,
clear_cuda_cache=clear_cuda_cache,
save_handler=save_handler,
**kwargs,
)
else:
if train_sampler is not None and isinstance(train_sampler, DistributedSampler):
warnings.warn(
"Argument train_sampler is a distributed sampler,"
" but either there is no distributed setting or world size is < 2. "
"Train sampler argument will be ignored",
UserWarning,
)
_setup_common_training_handlers(
trainer,
to_save=to_save,
save_every_iters=save_every_iters,
output_path=output_path,
lr_scheduler=lr_scheduler,
with_gpu_stats=with_gpu_stats,
output_names=output_names,
with_pbars=with_pbars,
with_pbar_on_iters=with_pbar_on_iters,
log_every_iters=log_every_iters,
stop_on_nan=stop_on_nan,
clear_cuda_cache=clear_cuda_cache,
save_handler=save_handler,
**kwargs,
)
| def setup_common_training_handlers(
trainer: Engine,
train_sampler: Optional[DistributedSampler] = None,
to_save: Optional[Mapping] = None,
save_every_iters: int = 1000,
output_path: Optional[str] = None,
lr_scheduler: Optional[Union[ParamScheduler, _LRScheduler]] = None,
with_gpu_stats: bool = False,
output_names: Optional[Iterable[str]] = None,
with_pbars: bool = True,
with_pbar_on_iters: bool = True,
log_every_iters: int = 100,
stop_on_nan: bool = True,
clear_cuda_cache: bool = True,
save_handler: Optional[Union[Callable, BaseSaveHandler]] = None,
**kwargs: Any,
) -> None:
"""Helper method to setup trainer with common handlers (it also supports distributed configuration):
- :class:`~ignite.handlers.TerminateOnNan`
- handler to setup learning rate scheduling
- :class:`~ignite.handlers.ModelCheckpoint`
- :class:`~ignite.metrics.RunningAverage` on `update_function` output
- Two progress bars on epochs and optionally on iterations
Args:
trainer: trainer engine. Output of trainer's `update_function` should be a dictionary
or sequence or a single tensor.
train_sampler: Optional distributed sampler used to call
`set_epoch` method on epoch started event.
to_save: dictionary with objects to save in the checkpoint. This argument is passed to
:class:`~ignite.handlers.Checkpoint` instance.
save_every_iters: saving interval. By default, `to_save` objects are stored
each 1000 iterations.
output_path: output path to indicate where `to_save` objects are stored. Argument is mutually
exclusive with ``save_handler``.
lr_scheduler: learning rate scheduler
as native torch LRScheduler or ignite's parameter scheduler.
with_gpu_stats: if True, :class:`~ignite.contrib.metrics.handlers.GpuInfo` is attached to the
trainer. This requires `pynvml` package to be installed.
output_names: list of names associated with `update_function` output dictionary.
with_pbars: if True, two progress bars on epochs and optionally on iterations are attached.
Default, True.
with_pbar_on_iters: if True, a progress bar on iterations is attached to the trainer.
Default, True.
log_every_iters: logging interval for :class:`~ignite.contrib.metrics.GpuInfo` and for
epoch-wise progress bar. Default, 100.
stop_on_nan: if True, :class:`~ignite.handlers.TerminateOnNan` handler is added to the trainer.
Default, True.
clear_cuda_cache: if True, `torch.cuda.empty_cache()` is called every end of epoch.
Default, True.
save_handler: Method or callable
class to use to store ``to_save``. See :class:`~ignite.handlers.checkpoint.Checkpoint` for more details.
Argument is mutually exclusive with ``output_path``.
kwargs: optional keyword args to be passed to construct :class:`~ignite.handlers.checkpoint.Checkpoint`.
"""
if idist.get_world_size() > 1:
_setup_common_distrib_training_handlers(
trainer,
train_sampler=train_sampler,
to_save=to_save,
save_every_iters=save_every_iters,
output_path=output_path,
lr_scheduler=lr_scheduler,
with_gpu_stats=with_gpu_stats,
output_names=output_names,
with_pbars=with_pbars,
with_pbar_on_iters=with_pbar_on_iters,
log_every_iters=log_every_iters,
stop_on_nan=stop_on_nan,
clear_cuda_cache=clear_cuda_cache,
save_handler=save_handler,
**kwargs,
)
else:
if train_sampler is not None and isinstance(train_sampler, DistributedSampler):
warnings.warn(
"Argument train_sampler is a distributed sampler,"
" but either there is no distributed setting or world size is < 2. "
"Train sampler argument will be ignored",
UserWarning,
)
_setup_common_training_handlers(
trainer,
to_save=to_save,
save_every_iters=save_every_iters,
output_path=output_path,
lr_scheduler=lr_scheduler,
with_gpu_stats=with_gpu_stats,
output_names=output_names,
with_pbars=with_pbars,
with_pbar_on_iters=with_pbar_on_iters,
log_every_iters=log_every_iters,
stop_on_nan=stop_on_nan,
clear_cuda_cache=clear_cuda_cache,
save_handler=save_handler,
**kwargs,
)
|
30,746 | def get_modified_files_for_testing(files_string):
"""Get a string of the modified files"""
is_conf_json = False
is_reputations_json = False
is_indicator_json = False
sample_tests = []
pack_sample_tests = []
changed_common = []
modified_files_list = []
modified_tests_list = []
all_files = files_string.split('\n')
for _file in all_files:
file_data = _file.split()
if not file_data:
continue
file_status = file_data[0]
if file_status.lower().startswith('r'):
file_path = file_data[2]
else:
file_path = file_data[1]
# ignoring deleted files.
# also, ignore files in ".circle", ".github" and ".hooks" directories and .gitignore
if ((file_status.lower() == 'm' or file_status.lower() == 'a' or file_status.lower().startswith('r'))
and not file_path.startswith('.')):
if checked_type(file_path, CODE_FILES_REGEX) and validate_not_a_package_test_script(file_path):
dir_path = os.path.dirname(file_path)
file_path = glob.glob(dir_path + "/*.yml")[0]
# Common scripts (globally used so must run all tests)
if checked_type(file_path, COMMON_YML_LIST):
changed_common.append(file_path)
# integrations, scripts, playbooks, test-scripts
elif checked_type(file_path, CHECKED_TYPES_REGEXES):
modified_files_list.append(file_path)
# tests
elif checked_type(file_path, YML_TEST_PLAYBOOKS_REGEXES):
modified_tests_list.append(file_path)
# reputations.json
elif re.match(INDICATOR_TYPES_REPUTATIONS_REGEX, file_path, re.IGNORECASE) or \
re.match(PACKS_INDICATOR_TYPES_REPUTATIONS_REGEX, file_path, re.IGNORECASE) or \
re.match(PACKS_INDICATOR_TYPE_JSON_REGEX, file_path, re.IGNORECASE):
is_reputations_json = True
elif checked_type(file_path, INCIDENT_FIELD_REGEXES):
is_indicator_json = True
# conf.json
elif re.match(CONF_PATH, file_path, re.IGNORECASE):
is_conf_json = True
# docs and test files do not influence integration tests filtering
elif checked_type(file_path, FILES_IN_SCRIPTS_OR_INTEGRATIONS_DIRS_REGEXES):
if os.path.splitext(file_path)[-1] not in FILE_TYPES_FOR_TESTING:
continue
elif re.match(DOCS_REGEX, file_path) or os.path.splitext(file_path)[-1] in ['.md', '.png']:
continue
elif any(file in file_path for file in (PACKS_PACK_META_FILE_NAME, PACKS_WHITELIST_FILE_NAME)):
pack_sample_tests.append(file_path)
elif SECRETS_WHITE_LIST not in file_path:
sample_tests.append(file_path)
return (modified_files_list, modified_tests_list, changed_common, is_conf_json, sample_tests, pack_sample_tests,
is_reputations_json, is_indicator_json)
| def get_modified_files_for_testing(files_string):
"""Get a string of the modified files"""
is_conf_json = False
is_reputations_json = False
is_indicator_json = False
sample_tests = []
pack_tests = []
changed_common = []
modified_files_list = []
modified_tests_list = []
all_files = files_string.split('\n')
for _file in all_files:
file_data = _file.split()
if not file_data:
continue
file_status = file_data[0]
if file_status.lower().startswith('r'):
file_path = file_data[2]
else:
file_path = file_data[1]
# ignoring deleted files.
# also, ignore files in ".circle", ".github" and ".hooks" directories and .gitignore
if ((file_status.lower() == 'm' or file_status.lower() == 'a' or file_status.lower().startswith('r'))
and not file_path.startswith('.')):
if checked_type(file_path, CODE_FILES_REGEX) and validate_not_a_package_test_script(file_path):
dir_path = os.path.dirname(file_path)
file_path = glob.glob(dir_path + "/*.yml")[0]
# Common scripts (globally used so must run all tests)
if checked_type(file_path, COMMON_YML_LIST):
changed_common.append(file_path)
# integrations, scripts, playbooks, test-scripts
elif checked_type(file_path, CHECKED_TYPES_REGEXES):
modified_files_list.append(file_path)
# tests
elif checked_type(file_path, YML_TEST_PLAYBOOKS_REGEXES):
modified_tests_list.append(file_path)
# reputations.json
elif re.match(INDICATOR_TYPES_REPUTATIONS_REGEX, file_path, re.IGNORECASE) or \
re.match(PACKS_INDICATOR_TYPES_REPUTATIONS_REGEX, file_path, re.IGNORECASE) or \
re.match(PACKS_INDICATOR_TYPE_JSON_REGEX, file_path, re.IGNORECASE):
is_reputations_json = True
elif checked_type(file_path, INCIDENT_FIELD_REGEXES):
is_indicator_json = True
# conf.json
elif re.match(CONF_PATH, file_path, re.IGNORECASE):
is_conf_json = True
# docs and test files do not influence integration tests filtering
elif checked_type(file_path, FILES_IN_SCRIPTS_OR_INTEGRATIONS_DIRS_REGEXES):
if os.path.splitext(file_path)[-1] not in FILE_TYPES_FOR_TESTING:
continue
elif re.match(DOCS_REGEX, file_path) or os.path.splitext(file_path)[-1] in ['.md', '.png']:
continue
elif any(file in file_path for file in (PACKS_PACK_META_FILE_NAME, PACKS_WHITELIST_FILE_NAME)):
pack_sample_tests.append(file_path)
elif SECRETS_WHITE_LIST not in file_path:
sample_tests.append(file_path)
return (modified_files_list, modified_tests_list, changed_common, is_conf_json, sample_tests, pack_sample_tests,
is_reputations_json, is_indicator_json)
|
3,120 | def maybe_cast_result_dtype(dtype: DtypeObj, how: str) -> DtypeObj:
"""
Get the desired dtype of a result based on the
input dtype and how it was computed.
Parameters
----------
dtype : DtypeObj
Input dtype.
how : str
How the result was computed.
Returns
-------
DtypeObj
The desired dtype of the result.
"""
from pandas.core.arrays.boolean import BooleanDtype
from pandas.core.arrays.floating import Float64Dtype
from pandas.core.arrays.integer import Int64Dtype
if how in ["add", "cumsum", "sum"] and (dtype == np.dtype(bool)):
return np.dtype(np.int64)
elif how in ["add", "cumsum", "sum"] and isinstance(dtype, BooleanDtype):
return Int64Dtype()
elif how in ["mean", "median", "var"] and isinstance(dtype, Int64Dtype):
return Float64Dtype()
return dtype
| def maybe_cast_result_dtype(dtype: DtypeObj, how: str) -> DtypeObj:
"""
Get the desired dtype of a result based on the
input dtype and how it was computed.
Parameters
----------
dtype : DtypeObj
Input dtype.
how : str
How the result was computed.
Returns
-------
DtypeObj
The desired dtype of the result.
"""
from pandas.core.arrays.boolean import BooleanDtype
from pandas.core.arrays.floating import Float64Dtype
from pandas.core.arrays.integer import Int64Dtype
if how in ["add", "cumsum", "sum"] and (dtype == np.dtype(bool)):
return np.dtype(np.int64)
elif how in ["add", "cumsum", "sum"] and isinstance(dtype, (BooleanDtype, IntegerDtype)):
return Int64Dtype()
elif how in ["mean", "median", "var"] and isinstance(dtype, Int64Dtype):
return Float64Dtype()
return dtype
|
15,270 | def calculate_next_active_alarms(alarms):
"""
Calculate garmin next active alarms from settings.
Alarms are sorted by time
"""
active_alarms = []
_LOGGER.debug(alarms)
for alarm_setting in alarms:
if alarm_setting["alarmMode"] != "ON":
continue
for day in alarm_setting["alarmDays"]:
if day == "ONCE":
midnight = datetime.combine(date.today(), datetime.min.time())
alarmtime = alarm_setting["alarmTime"]
alarm = midnight + timedelta(minutes=alarmtime)
if alarm < datetime.now():
alarm += timedelta(days=1)
active_alarms.append(alarm.isoformat())
else:
start_of_week = datetime.combine(
date.today() - timedelta(days=datetime.today().isoweekday() % 7),
datetime.min.time(),
)
alarmtime = alarm_setting["alarmTime"]
days_to_add = DAY_TO_NUMBER[day] % 7
alarm = start_of_week + timedelta(minutes=alarmtime, days=days_to_add)
if alarm < datetime.now():
alarm += timedelta(days=7)
active_alarms.append(alarm.isoformat())
return sorted(active_alarms) if len(active_alarms) > 0 else None
| def calculate_next_active_alarms(alarms):
"""
Calculate garmin next active alarms from settings.
Alarms are sorted by time
"""
active_alarms = []
_LOGGER.debug(alarms)
for alarm_setting in alarms:
if alarm_setting["alarmMode"] != "ON":
continue
for day in alarm_setting["alarmDays"]:
if day == "ONCE":
midnight = datetime.combine(date.today(), datetime.min.time())
alarm_time = alarm_setting["alarmTime"]
alarm = midnight + timedelta(minutes=alarmtime)
if alarm < datetime.now():
alarm += timedelta(days=1)
active_alarms.append(alarm.isoformat())
else:
start_of_week = datetime.combine(
date.today() - timedelta(days=datetime.today().isoweekday() % 7),
datetime.min.time(),
)
alarmtime = alarm_setting["alarmTime"]
days_to_add = DAY_TO_NUMBER[day] % 7
alarm = start_of_week + timedelta(minutes=alarmtime, days=days_to_add)
if alarm < datetime.now():
alarm += timedelta(days=7)
active_alarms.append(alarm.isoformat())
return sorted(active_alarms) if len(active_alarms) > 0 else None
|
58,010 | def format_incidents(resp, eventTypes):
"""
Format the incidents to feed into XSOAR
:param resp: events fetched from the server
:param eventTypes: event types available
:return: incidents to feed into XSOAR
"""
alerts: List[Dict[str, Any]] = []
try:
for eachalert in resp['data']:
for e_type in list(eachalert['alert']['services'].keys()):
e_id = eachalert['alert']['id']
e_priority = eachalert['alert']['priority']
e_created = eachalert['alert']['created_at']
e_keyword = eachalert['alert']['tag_name']
e_bucket = eachalert['alert']['bucket']['name']
alert_details = {
"name": "Cyble Intel Alert on {}".format(eventTypes[e_type]),
"cybleeventtype": "{}".format(e_type),
"severity": INCIDENT_SEVERITY[e_priority.lower()],
"occurred": "{}".format(e_created),
"cybleeventid": "{}".format(e_id),
"cybleeventname": "Incident of {} type".format(eventTypes[e_type]),
"cybleeventbucket": "{}".format(e_bucket),
"cybleeventkeyword": "{}".format(e_keyword),
"cybleeventalias": "{}".format(eventTypes[e_type]),
}
alerts.append(alert_details)
return alerts
except Exception as e:
return "Format incident issue"
| def format_incidents(resp, eventTypes):
"""
Format the incidents to feed into XSOAR
:param resp: events fetched from the server
:param eventTypes: event types available
:return: incidents to feed into XSOAR
"""
alerts: List[Dict[str, Any]] = []
try:
alerts = resp.get('data') or []
for alert in alerts:
alert_data = alert.get('alert', {})
alert_id = alert_data.get('id')
alert_priority = alert_data.get('priority')
alert_created_at = alert_data.get('created_at')
alert_keyword = alert_data.get('tag_name')
alert_bucket_name = alert_data.get('bucket', {}).get('name')
for e_type in list(alert_data.get('services', {}).keys()):
event_type = eventTypes.get(e_type)
alert_details = {
"name": "Cyble Intel Alert on {}".format(event_type),
"cybleeventtype": "{}".format(e_type),
"severity": INCIDENT_SEVERITY.get(alert_priority.lower()),
"occurred": "{}".format(alert_created_at),
"cybleeventid": "{}".format(alert_id),
"cybleeventname": "Incident of {} type".format(event_type),
"cybleeventbucket": "{}".format(alert_bucket_name),
"cybleeventkeyword": "{}".format(alert_keyword),
"cybleeventalias": "{}".format(event_type),
}
alerts.append(alert_details)
return alerts
except Exception as e:
return "Format incident issue"
|
49,643 | def validate_html_favicon(app: Sphinx, config: Config) -> None:
"""Check html_favicon setting."""
if config.html_favicon and not path.isfile(path.join(app.confdir, config.html_favicon)) \
and not urlparse(config.html_favicon).scheme:
logger.warning(__('favicon file %r does not exist'), config.html_favicon)
config.html_favicon = None # type: ignore
| def validate_html_favicon(app: Sphinx, config: Config) -> None:
"""Check html_favicon setting."""
if (config.html_favicon and not path.isfile(path.join(app.confdir, config.html_favicon)) and
not urlparse(config.html_favicon).scheme):
logger.warning(__('favicon file %r does not exist'), config.html_favicon)
config.html_favicon = None # type: ignore
|
42,059 | def run(args: argparse.Namespace) -> None:
kurobako_cmd = os.path.join(args.path_to_kurobako, "kurobako")
subprocess.run(f"{kurobako_cmd} --version", shell=True)
if not (os.path.exists(args.data_dir) and os.path.isdir(args.data_dir)):
raise ValueError(f"Data directory {args.data_dir} cannot be found.")
os.makedirs(args.out_dir, exist_ok=True)
study_json_fn = os.path.join(args.out_dir, "studies.json")
subprocess.check_call(f"echo >| {study_json_fn}", shell=True)
solvers_filename = os.path.join(args.out_dir, "solvers.json")
subprocess.check_call(f"echo >| {solvers_filename}", shell=True)
problems_filename = os.path.join(args.out_dir, "problems.json")
subprocess.check_call(f"echo >| {problems_filename}", shell=True)
# Create ZDT problems
cmd = f"{kurobako_cmd} problem-suite zdt | tee -a {problems_filename}"
subprocess.run(cmd, shell=True)
# Create NAS bench problem(C) (for Multi-Objective Settings).
dataset = os.path.join(args.data_dir, "nasbench_full.bin")
cmd = (
f'{kurobako_cmd} problem nasbench "{dataset}"'
f"--encoding C --metrics accuracy params | tee -a {problems_filename}"
)
subprocess.run(cmd, shell=True)
# Create solvers.
sampler_list = args.sampler_list.split()
sampler_kwargs_list = args.sampler_kwargs_list.split()
if len(sampler_list) != len(sampler_kwargs_list):
raise ValueError(
"The number of samplers does not match the given keyword arguments. \n"
f"sampler_list: {sampler_list}, sampler_kwargs_list: {sampler_kwargs_list}."
)
for sampler, sampler_kwargs in zip(sampler_list, sampler_kwargs_list):
name = f"{args.name_prefix}_{sampler}"
python_command = f"mo_runner.py {sampler} {sampler_kwargs}"
cmd = (
f"{kurobako_cmd} solver --name {name} command python {python_command}"
f"| tee -a {solvers_filename}"
)
subprocess.run(cmd, shell=True)
# Create study.
cmd = (
f"{kurobako_cmd} studies --budget 1000 "
f"--solvers $(cat {solvers_filename}) --problems $(cat {problems_filename}) "
f"--repeats {args.n_runs} --seed {args.seed} "
f"> {study_json_fn}"
)
subprocess.run(cmd, shell=True)
result_filename = os.path.join(args.out_dir, "results.json")
cmd = (
f"cat {study_json_fn} | {kurobako_cmd} run --parallelism {args.n_jobs} "
f"> {result_filename}"
)
subprocess.run(cmd, shell=True)
# Report
report_filename = os.path.join(args.out_dir, "report.md")
cmd = f"cat {result_filename} | {kurobako_cmd} report > {report_filename}"
subprocess.run(cmd, shell=True)
# Plot pareto-front.
problem_names = ["NASBench", "ZDT1", "ZDT2", "ZDT3", "ZDT4", "ZDT5", "ZDT6"]
for problem_name in problem_names:
cmd = (
f"cat {result_filename} | grep {problem_name} | "
f"{kurobako_cmd} plot pareto-front -o {args.out_dir}"
)
subprocess.run(cmd, shell=True)
| def run(args: argparse.Namespace) -> None:
kurobako_cmd = os.path.join(args.path_to_kurobako, "kurobako")
subprocess.run(f"{kurobako_cmd} --version", shell=True)
if not (os.path.exists(args.data_dir) and os.path.isdir(args.data_dir)):
raise ValueError(f"Data directory {args.data_dir} cannot be found.")
os.makedirs(args.out_dir, exist_ok=True)
study_json_fn = os.path.join(args.out_dir, "studies.json")
subprocess.check_call(f"echo >| {study_json_fn}", shell=True)
solvers_filename = os.path.join(args.out_dir, "solvers.json")
subprocess.check_call(f"echo >| {solvers_filename}", shell=True)
problems_filename = os.path.join(args.out_dir, "problems.json")
subprocess.check_call(f"echo >| {problems_filename}", shell=True)
# Create ZDT problems
cmd = f"{kurobako_cmd} problem-suite zdt | tee -a {problems_filename}"
subprocess.run(cmd, shell=True)
# Create NAS bench problem(C) (for Multi-Objective Settings).
dataset = os.path.join(args.data_dir, "nasbench_full.bin")
cmd = (
f'{kurobako_cmd} problem nasbench "{dataset}"'
f"--encoding C --metrics accuracy params | tee -a {problems_filename}"
)
subprocess.run(cmd, shell=True)
# Create solvers.
sampler_list = args.sampler_list.split()
sampler_kwargs_list = args.sampler_kwargs_list.split()
if len(sampler_list) != len(sampler_kwargs_list):
raise ValueError(
"The number of samplers does not match the given keyword arguments. \n"
f"sampler_list: {sampler_list}, sampler_kwargs_list: {sampler_kwargs_list}."
)
for sampler, sampler_kwargs in zip(sampler_list, sampler_kwargs_list):
name = f"{args.name_prefix}_{sampler}"
python_command = f"mo_runner.py {sampler} {sampler_kwargs}"
cmd = (
f"{kurobako_cmd} solver --name {name} command python {python_command}"
f"| tee -a {solvers_filename}"
)
subprocess.run(cmd, shell=True)
# Create study.
cmd = (
f"{kurobako_cmd} studies --budget 300 "
f"--solvers $(cat {solvers_filename}) --problems $(cat {problems_filename}) "
f"--repeats {args.n_runs} --seed {args.seed} "
f"> {study_json_fn}"
)
subprocess.run(cmd, shell=True)
result_filename = os.path.join(args.out_dir, "results.json")
cmd = (
f"cat {study_json_fn} | {kurobako_cmd} run --parallelism {args.n_jobs} "
f"> {result_filename}"
)
subprocess.run(cmd, shell=True)
# Report
report_filename = os.path.join(args.out_dir, "report.md")
cmd = f"cat {result_filename} | {kurobako_cmd} report > {report_filename}"
subprocess.run(cmd, shell=True)
# Plot pareto-front.
problem_names = ["NASBench", "ZDT1", "ZDT2", "ZDT3", "ZDT4", "ZDT5", "ZDT6"]
for problem_name in problem_names:
cmd = (
f"cat {result_filename} | grep {problem_name} | "
f"{kurobako_cmd} plot pareto-front -o {args.out_dir}"
)
subprocess.run(cmd, shell=True)
|
29,891 | def DefaultFromEnv(
item_parser: Callable[[str], T], default_src: str
) -> Callable[[str], OptionalType[T]]: # noqa: D401
"""An option of type T or a default.
The default is sourced from an environment variable with the name ``default_src``.
Either the option or the default must be provided
"""
default = None
env = os.getenv(default_src)
if env:
default = Optional(item_parser)(env)
def default_from_env(text: str) -> OptionalType[T]:
if text:
return Optional(item_parser)(text)
if default:
return default
raise ValueError("No value provided.")
return default_from_env
| def DefaultFromEnv(
item_parser: Callable[[str], T], default_src: str
) -> Callable[[str], OptionalType[T]]: # noqa: D401
"""An option of type T or a default.
The default is sourced from an environment variable with the name specified in ``default_src``.
Either the option or the default must be provided
"""
default = None
env = os.getenv(default_src)
if env:
default = Optional(item_parser)(env)
def default_from_env(text: str) -> OptionalType[T]:
if text:
return Optional(item_parser)(text)
if default:
return default
raise ValueError("No value provided.")
return default_from_env
|
327 | def _transform_samples(samples, model, keep_untransformed=False):
# Find out which RVs we need to compute:
free_rv_names = {x.name for x in model.free_RVs}
unobserved_names = {x.name for x in model.unobserved_RVs}
names_to_compute = unobserved_names - free_rv_names
ops_to_compute = [x for x in model.unobserved_RVs if x.name in names_to_compute]
# Create function graph for these:
fgraph = theano.graph.fg.FunctionGraph(model.free_RVs, ops_to_compute)
# Jaxify, which returns a list of functions, one for each op
jax_fns = jax_funcify(fgraph)
# Put together the inputs
inputs = [samples[x.name] for x in model.free_RVs]
for cur_op, cur_jax_fn in zip(ops_to_compute, jax_fns):
# We need a function taking a single argument to run vmap, while the
# jax_fn takes a list, so:
to_run = lambda x: cur_jax_fn(*x)
result = jax.vmap(jax.vmap(to_run))(inputs)
# Add to sample dict
samples[cur_op.name] = result
# Discard unwanted transformed variables, if desired:
vars_to_keep = set(
pm.util.get_default_varnames(list(samples.keys()), include_transformed=keep_untransformed)
)
samples = {x: y for x, y in samples.items() if x in vars_to_keep}
return samples
| def _transform_samples(samples, model, keep_untransformed=False):
# Find out which RVs we need to compute:
free_rv_names = {x.name for x in model.free_RVs}
unobserved_names = {x.name for x in model.unobserved_RVs}
names_to_compute = unobserved_names - free_rv_names
ops_to_compute = [x for x in model.unobserved_RVs if x.name in names_to_compute]
# Create function graph for these:
fgraph = theano.graph.fg.FunctionGraph(model.free_RVs, ops_to_compute)
# Jaxify, which returns a list of functions, one for each op
jax_fns = jax_funcify(fgraph)
# Put together the inputs
inputs = [samples[x.name] for x in model.free_RVs]
for cur_op, cur_jax_fn in zip(ops_to_compute, jax_fns):
# We need a function taking a single argument to run vmap, while the
# jax_fn takes a list, so:
result = jax.vmap(jax.vmap(cur_jax_fn))(*inputs)
# Add to sample dict
samples[cur_op.name] = result
# Discard unwanted transformed variables, if desired:
vars_to_keep = set(
pm.util.get_default_varnames(list(samples.keys()), include_transformed=keep_untransformed)
)
samples = {x: y for x, y in samples.items() if x in vars_to_keep}
return samples
|
740 | def record_user_interface(d: MitmCliDirector):
tmux = d.start_session(width=120, height=36)
window = tmux.attached_window
d.start_recording("recordings/mitmproxy_user_interface.cast")
d.message("Welcome to the mitmproxy tutorial. In this lesson we cover the user interface.")
d.pause(1)
d.exec("mitmproxy")
d.pause(3)
d.message("This is the default view of mitmproxy.")
d.message("mitmproxy adds rows to the view as new requests come in.")
d.message("Let’s generate some requests using `curl` in a separate terminal.")
pane_top = d.current_pane
pane_bottom = window.split_window(attach=True)
pane_bottom.resize_pane(height=12)
d.focus_pane(pane_bottom)
d.pause(2)
d.type("curl")
d.message("Use curl’s `-x` option to specify a proxy, e.g., `curl -x http://127.0.0.1:8080` to use mitmproxy.")
d.type(" -x http://127.0.0.1:8080")
d.message("We use the text-based weather service `wttr.in`.")
d.exec(" \"http://wttr.in/Paris?0\"")
d.pause(2)
d.press_key("Up")
d.press_key("Left", count=3)
d.press_key("BSpace", count=5)
d.exec("Miami")
d.pause(2)
d.press_key("Up")
d.press_key("Left", count=3)
d.press_key("BSpace", count=5)
d.exec("Tokio")
d.pause(2)
d.press_key("Up")
d.press_key("Left", count=3)
d.press_key("BSpace", count=5)
d.exec("London")
d.pause(2)
d.exec("exit", target=pane_bottom)
d.focus_pane(pane_top)
d.message("You see the requests to `wttr.in` in the list of flows.")
d.message("mitmproxy is controlled using keyboard shortcuts.")
d.message("Use your arrow keys `↑` and `↓` to change the focused flow (`>>`).")
d.press_key("Down", count=3, pause=0.5)
d.press_key("Up", count=2, pause=0.5)
d.press_key("Down", count=2, pause=0.5)
d.message("The focused flow (`>>`) is used as a target for various commands.")
d.message("One such command shows the flow details, it is bound to `↵`.")
d.message("Press `↵` to view the details of the focused flow.")
d.press_key("Enter")
d.message("The flow details view has 3 panes: request, response, and detail.")
d.message("Use your arrow keys `←` and `→` to switch between panes.")
d.press_key("Right", count=2, pause=2.5)
d.press_key("Left", count=2, pause=1)
d.message("Press `q` to exit the current view.",)
d.type("q")
d.message("Press `?` to get a list of all available keyboard shortcuts.")
d.type("?")
d.pause(2)
d.press_key("Down", count=20, pause=0.25)
d.message("Press `q` to exit the current view.")
d.type("q")
d.message("Each shortcut is internally bound to a command.")
d.message("You can also execute commands directly (without using shortcuts).")
d.message("Press `:` to open the command prompt at the bottom.")
d.type(":")
d.message("Enter `console.view.flow @focus`.")
d.type("console.view.flow @focus")
d.message("The command `console.view.flow` opens the details view for a flow.")
d.message("The argument `@focus` defines the target flow.")
d.message("Press `↵` to execute the command.")
d.press_key("Enter")
d.message("Commands unleash the full power of mitmproxy, i.e., to configure interceptions.")
d.message("You now know basics of mitmproxy’s UI and how to control it.")
d.pause(1)
d.save_instructions("recordings/mitmproxy_user_interface_instructions.json")
d.end()
| def record_user_interface(d: MitmCliDirector):
tmux = d.start_session(width=120, height=36)
window = tmux.attached_window
d.start_recording("recordings/mitmproxy_user_interface.cast")
d.message("Welcome to the mitmproxy tutorial. In this lesson we cover the user interface.")
d.pause(1)
d.exec("mitmproxy")
d.pause(3)
d.message("This is the default view of mitmproxy.")
d.message("mitmproxy adds rows to the view as new requests come in.")
d.message("Let’s generate some requests using `curl` in a separate terminal.")
pane_top = d.current_pane
pane_bottom = window.split_window(attach=True)
pane_bottom.resize_pane(height=12)
d.focus_pane(pane_bottom)
d.pause(2)
d.type("curl")
d.message("Use curl’s `-x` option to specify a proxy, e.g., `curl -x http://127.0.0.1:8080` to use mitmproxy.")
d.type(" -x http://127.0.0.1:8080")
d.message("We use the text-based weather service `wttr.in`.")
d.exec(" \"http://wttr.in/Paris?0\"")
d.pause(2)
d.press_key("Up")
d.press_key("Left", count=3)
d.press_key("BSpace", count=5)
d.exec("Miami")
d.pause(2)
d.press_key("Up")
d.press_key("Left", count=3)
d.press_key("BSpace", count=5)
d.exec("Tokio")
d.pause(2)
d.press_key("Up")
d.press_key("Left", count=3)
d.press_key("BSpace", count=5)
d.exec("London")
d.pause(2)
d.exec("exit", target=pane_bottom)
d.focus_pane(pane_top)
d.message("You see the requests to `wttr.in` in the list of flows.")
d.message("mitmproxy is controlled using keyboard shortcuts.")
d.message("Use your arrow keys `↑` and `↓` to change the focused flow (`>>`).")
d.press_key("Down", count=3, pause=0.5)
d.press_key("Up", count=2, pause=0.5)
d.press_key("Down", count=2, pause=0.5)
d.message("The focused flow (`>>`) is used as a target for various commands.")
d.message("One such command shows the flow details, it is bound to the enter key.")
d.message("Press `↵` to view the details of the focused flow.")
d.press_key("Enter")
d.message("The flow details view has 3 panes: request, response, and detail.")
d.message("Use your arrow keys `←` and `→` to switch between panes.")
d.press_key("Right", count=2, pause=2.5)
d.press_key("Left", count=2, pause=1)
d.message("Press `q` to exit the current view.",)
d.type("q")
d.message("Press `?` to get a list of all available keyboard shortcuts.")
d.type("?")
d.pause(2)
d.press_key("Down", count=20, pause=0.25)
d.message("Press `q` to exit the current view.")
d.type("q")
d.message("Each shortcut is internally bound to a command.")
d.message("You can also execute commands directly (without using shortcuts).")
d.message("Press `:` to open the command prompt at the bottom.")
d.type(":")
d.message("Enter `console.view.flow @focus`.")
d.type("console.view.flow @focus")
d.message("The command `console.view.flow` opens the details view for a flow.")
d.message("The argument `@focus` defines the target flow.")
d.message("Press `↵` to execute the command.")
d.press_key("Enter")
d.message("Commands unleash the full power of mitmproxy, i.e., to configure interceptions.")
d.message("You now know basics of mitmproxy’s UI and how to control it.")
d.pause(1)
d.save_instructions("recordings/mitmproxy_user_interface_instructions.json")
d.end()
|
57,868 | def search_group_members(default_base_dn, page_size):
# this command is equivalent to ADGetGroupMembers script
args = demisto.args()
member_type = args.get('member-type')
group_dn = args.get('group-dn')
nested_search = '' if args.get('disable-nested-search') == 'true' else ':1.2.840.113556.1.4.1941:'
time_limit = int(args.get('time_limit', 180))
account_name = args.get('sAMAccountName')
custom_attributes: List[str] = []
if member_type == 'person':
default_attributes = DEFAULT_PERSON_ATTRIBUTES
elif member_type == 'computer':
default_attributes = DEFAULT_COMPUTER_ATTRIBUTES
else:
default_attributes = DEFAULT_GROUP_ATTRIBUTES
if args.get('attributes'):
custom_attributes = args['attributes'].split(",")
attributes = list(set(custom_attributes + default_attributes))
if member_type == 'group':
query = "(&(objectCategory={})(memberOf{}={})(sAMAccountName={}))".format(member_type, nested_search, group_dn,
account_name)
else:
query = "(&(objectCategory={})(objectClass=user)(memberOf{}={})(sAMAccountName={}))"\
.format(member_type, nested_search, group_dn, account_name)
entries = search_with_paging(
query,
default_base_dn,
attributes=attributes,
page_size=page_size,
time_limit=time_limit
)
members = [{'dn': entry['dn'], 'category': member_type} for entry in entries['flat']]
demisto_entry = {
'ContentsFormat': formats['json'],
'Type': entryTypes['note'],
'Contents': entries['raw'],
'ReadableContentsFormat': formats['markdown'],
'HumanReadable': tableToMarkdown("Active Directory - Get Group Members", entries['flat']),
'EntryContext': {
'ActiveDirectory.Groups(obj.dn ==' + group_dn + ')': {
'dn': group_dn,
'members': members
}
}
}
if member_type == 'person':
demisto_entry['EntryContext']['ActiveDirectory.Users(obj.dn == val.dn)'] = entries['flat']
demisto_entry['EntryContext']['Account'] = [account_entry(
entry, custom_attributes) for entry in entries['flat']]
elif member_type == 'computer':
demisto_entry['EntryContext']['ActiveDirectory.Computers(obj.dn == val.dn)'] = entries['flat']
demisto_entry['EntryContext']['Endpoint'] = [endpoint_entry(
entry, custom_attributes) for entry in entries['flat']]
elif member_type == 'group':
demisto_entry['EntryContext']['ActiveDirectory.Groups(obj.dn == val.dn)'] = entries['flat']
demisto_entry['EntryContext']['Group'] = [group_entry(
entry, custom_attributes) for entry in entries['flat']]
demisto.results(demisto_entry)
| def search_group_members(default_base_dn, page_size):
# this command is equivalent to ADGetGroupMembers script
args = demisto.args()
member_type = args.get('member-type')
group_dn = args.get('group-dn')
nested_search = '' if args.get('disable-nested-search') == 'true' else ':1.2.840.113556.1.4.1941:'
time_limit = int(args.get('time_limit', 180))
account_name = args.get('sAMAccountName')
custom_attributes: List[str] = []
default_attribute_mapping = {
'person': DEFAULT_PERSON_ATTRIBUTES,
'group': DEFAULT_GROUP_ATTRIBUTES,
'computer': DEFAULT_COMPUTER_ATTRIBUTES,
}
default_attributes = default_attribute_mapping.get(member_type, DEFAULT_COMPUTER_ATTRIBUTES)
if args.get('attributes'):
custom_attributes = args['attributes'].split(",")
attributes = list(set(custom_attributes + default_attributes))
if member_type == 'group':
query = "(&(objectCategory={})(memberOf{}={})(sAMAccountName={}))".format(member_type, nested_search, group_dn,
account_name)
else:
query = "(&(objectCategory={})(objectClass=user)(memberOf{}={})(sAMAccountName={}))"\
.format(member_type, nested_search, group_dn, account_name)
entries = search_with_paging(
query,
default_base_dn,
attributes=attributes,
page_size=page_size,
time_limit=time_limit
)
members = [{'dn': entry['dn'], 'category': member_type} for entry in entries['flat']]
demisto_entry = {
'ContentsFormat': formats['json'],
'Type': entryTypes['note'],
'Contents': entries['raw'],
'ReadableContentsFormat': formats['markdown'],
'HumanReadable': tableToMarkdown("Active Directory - Get Group Members", entries['flat']),
'EntryContext': {
'ActiveDirectory.Groups(obj.dn ==' + group_dn + ')': {
'dn': group_dn,
'members': members
}
}
}
if member_type == 'person':
demisto_entry['EntryContext']['ActiveDirectory.Users(obj.dn == val.dn)'] = entries['flat']
demisto_entry['EntryContext']['Account'] = [account_entry(
entry, custom_attributes) for entry in entries['flat']]
elif member_type == 'computer':
demisto_entry['EntryContext']['ActiveDirectory.Computers(obj.dn == val.dn)'] = entries['flat']
demisto_entry['EntryContext']['Endpoint'] = [endpoint_entry(
entry, custom_attributes) for entry in entries['flat']]
elif member_type == 'group':
demisto_entry['EntryContext']['ActiveDirectory.Groups(obj.dn == val.dn)'] = entries['flat']
demisto_entry['EntryContext']['Group'] = [group_entry(
entry, custom_attributes) for entry in entries['flat']]
demisto.results(demisto_entry)
|
32,938 | def _set_cookies(span, cookies):
# type: (Span, Dict[str, Union[str, List[str]]]) -> None
for k in cookies:
cookie = cookies[k]
# flask~=0.12.0 create a list of values for each cookie instead a string
if (isinstance(cookie, list) or isinstance(cookie, tuple)) and len(cookie) == 1:
cookie = cookie[0]
# since the header value can be a list, use `set_tag()` to ensure it is converted to a string
span.set_tag(_normalize_tag_name("request", k, kind="cookies"), cookie)
| def _set_cookies(span, cookies):
# type: (Span, Dict[str, Union[str, List[str]]]) -> None
for k in cookies:
cookie = cookies[k]
# flask~=0.12.0 create a list of values for each cookie instead a string
if isinstance(cookie, (list, tuple)) and len(cookie) == 1:
cookie = cookie[0]
# since the header value can be a list, use `set_tag()` to ensure it is converted to a string
span.set_tag(_normalize_tag_name("request", k, kind="cookies"), cookie)
|
45,501 | def main(args, pacu_main: 'Main'):
session = pacu_main.get_active_session()
###### Don't modify these. They can be removed if you are not using the function.
args = parser.parse_args(args)
print = pacu_main.print
input = pacu_main.input
key_info = pacu_main.key_info
fetch_data = pacu_main.fetch_data
######
summary_data = {
'users_confirmed': 0,
'roles_confirmed': 0
}
users = []
roles = []
if args.all_users is True:
if fetch_data(['IAM', 'Users'], module_info['prerequisite_modules'][0], '--users') is False:
print('FAILURE')
print(' SUB-MODULE EXECUTION FAILED')
return
fetched_users = session.IAM['Users']
for user in fetched_users:
users.append({
'UserName': user['UserName'],
'PermissionsConfirmed': True,
'Permissions': {
'Allow': {},
'Deny': {}
}
})
elif args.user_name is not None:
users.append({
'UserName': args.user_name,
'PermissionsConfirmed': True,
'Permissions': {
'Allow': {},
'Deny': {}
}
})
summary_data['single_user'] = args.user_name
if args.all_roles is True:
if fetch_data(['IAM', 'Roles'], module_info['prerequisite_modules'][0], '--roles') is False:
print('FAILURE')
print(' SUB-MODULE EXECUTION FAILED')
return
fetched_roles = session.IAM['Roles']
for role in fetched_roles:
roles.append({
'RoleName': role['RoleName'],
'PermissionsConfirmed': True,
'Permissions': {
'Allow': {},
'Deny': {}
}
})
elif args.role_name is not None:
roles.append({
'RoleName': args.role_name,
'PermissionsConfirmed': True,
'Permissions': {
'Allow': {},
'Deny': {}
}
})
summary_data['single_role'] = args.role_name
is_user = is_role = False
if not any([args.all_users, args.user_name, args.all_roles, args.role_name]):
client = pacu_main.get_boto3_client('sts')
identity = client.get_caller_identity()
active_aws_key = session.get_active_aws_key(pacu_main.database)
if re.match(r'arn:aws:iam::\d{12}:user/', identity['Arn']) is not None:
is_user = True
# GetCallerIdentity away return user's ARN like this if it was a user
# arn:aws:iam::123456789012:user/username
username = identity['Arn'].split(':user/')[1]
active_aws_key.update(
pacu_main.database,
user_name=username,
arn=identity['Arn'],
user_id=identity['UserId'],
account_id=identity['Account']
)
elif re.match(r'arn:aws:sts::\d{12}:assumed-role/', identity['Arn']) is not None:
is_role = True
active_aws_key.update(
pacu_main.database,
role_name=identity['Arn'].split(':assumed-role/')[1].split('/')[-2],
arn=identity['Arn'],
user_id=identity['UserId'],
account_id=identity['Account']
)
else:
print('Not an IAM user or role. Exiting...\n')
return False
if is_user:
user = key_info(alias=session.key_alias)
user['PermissionsConfirmed'] = True
user['Permissions'] = {'Allow': {}, 'Deny': {}}
users.append(user)
summary_data['single_user'] = user['UserName']
elif is_role:
roles.append({
'RoleName': active_aws_key.role_name,
'PermissionsConfirmed': True,
'Permissions': {
'Allow': {},
'Deny': {}
}
})
summary_data['single_role'] = active_aws_key.role_name
# list-groups-for-user
# list-user-policies
# list-group-policies
# list-role-policies
# list-attached-role-policies
# list-attached-group-policies
# list-attached-user-policies
# get-policy
# get-policy-version
# get-user-policy
# get-group-policy
# get-role-policy
client = pacu_main.get_boto3_client('iam')
if any([args.all_users, args.user_name, args.all_roles, args.role_name]):
print('Permission Document Location:')
print(' {}/confirmed_permissions/\n'.format(downloads_dir()))
if roles:
print('Confirming permissions for roles:')
for role in roles:
print(' {}...'.format(role['RoleName']))
role['Policies'] = []
try:
# Get inline role policies
policies = []
try:
response = client.list_role_policies(
RoleName=role['RoleName']
)
policies = response['PolicyNames']
while 'IsTruncated' in response and response['IsTruncated'] is True:
response = client.list_role_policies(
RoleName=role['RoleName'],
Marker=response['Marker']
)
policies += response['PolicyNames']
for policy in policies:
role['Policies'].append({
'PolicyName': policy
})
except ClientError as error:
print(' List role policies failed')
if error.response['Error']['Code'] == 'AccessDenied':
print(' FAILURE: MISSING REQUIRED AWS PERMISSIONS')
else:
print(' {}'.format(error.response['Error']['Code']))
role['PermissionsConfirmed'] = False
# Get document for each inline policy
for policy in policies:
try:
document = client.get_role_policy(
RoleName=role['RoleName'],
PolicyName=policy
)['PolicyDocument']
except ClientError as error:
print(' Get role policy failed')
if error.response['Error']['Code'] == 'AccessDenied':
print(' FAILURE: MISSING REQUIRED AWS PERMISSIONS')
else:
print(' {}'.format(error.response['Error']['Code']))
role['PermissionsConfirmed'] = False
role = parse_document(document, role)
# Get attached role policies
attached_policies = []
try:
response = client.list_attached_role_policies(
RoleName=role['RoleName']
)
attached_policies = response['AttachedPolicies']
while 'IsTruncated' in response and response['IsTruncated'] is True:
response = client.list_attached_role_policies(
RoleName=role['RoleName'],
Marker=response['Marker']
)
attached_policies += response['AttachedPolicies']
role['Policies'] += attached_policies
except ClientError as error:
print(' List attached role policies failed')
if error.response['Error']['Code'] == 'AccessDenied':
print(' FAILURE: MISSING REQUIRED AWS PERMISSIONS')
else:
print(' {}'.format(error.response['Error']['Code']))
role['PermissionsConfirmed'] = False
role = parse_attached_policies(client, attached_policies, role)
if role['PermissionsConfirmed']:
summary_data['roles_confirmed'] += 1
if args.role_name is None and args.all_roles is False:
print(' Confirmed permissions for {}'.format(role['RoleName']))
active_aws_key.update(
pacu_main.database,
role_name=role['RoleName'],
policies=role['Policies'],
permissions_confirmed=role['PermissionsConfirmed'],
allow_permissions=role['Permissions']['Allow'],
deny_permissions=role['Permissions']['Deny']
)
else:
with save('confirmed_permissions/role-{}.json'.format(role['RoleName']), 'w+') as f:
json.dump(role, f, indent=2, default=str)
print(' Permissions stored in role-{}.json'.format(role['RoleName']))
except ClientError as error:
if error.response['Error']['Code'] == 'AccessDenied':
print(' FAILURE: MISSING REQUIRED AWS PERMISSIONS')
else:
print(' {}'.format(error.response['Error']['Code']))
print('Skipping {}'.format(role['RoleName']))
if users:
print()
if users:
print('Confirming permissions for users:')
for user in users:
print(' {}...'.format(user['UserName']))
user['Groups'] = []
user['Policies'] = []
try:
policies = []
# Get groups that the user is in
try:
response = client.list_groups_for_user(
UserName=user['UserName']
)
user['Groups'] = response['Groups']
while 'IsTruncated' in response and response['IsTruncated'] is True:
response = client.list_groups_for_user(
UserName=user['UserName'],
Marker=response['Marker']
)
user['Groups'] += response['Groups']
except ClientError as error:
print(' List groups for user failed')
if error.response['Error']['Code'] == 'AccessDenied':
print(' FAILURE: MISSING REQUIRED AWS PERMISSIONS')
else:
print(' {}'.format(error.response['Error']['Code']))
user['PermissionsConfirmed'] = False
# Get inline and attached group policies
for group in user['Groups']:
group['Policies'] = []
# Get inline group policies
try:
response = client.list_group_policies(
GroupName=group['GroupName']
)
policies = response['PolicyNames']
while 'IsTruncated' in response and response['IsTruncated'] is True:
response = client.list_group_policies(
GroupName=group['GroupName'],
Marker=response['Marker']
)
policies += response['PolicyNames']
except ClientError as error:
print(' List group policies failed')
if error.response['Error']['Code'] == 'AccessDenied':
print(' FAILURE: MISSING REQUIRED AWS PERMISSIONS')
else:
print(' {}'.format(error.response['Error']['Code']))
user['PermissionsConfirmed'] = False
# Get document for each inline policy
for policy in policies:
group['Policies'].append({ # Add policies to list of policies for this group
'PolicyName': policy
})
try:
document = client.get_group_policy(
GroupName=group['GroupName'],
PolicyName=policy
)['PolicyDocument']
except ClientError as error:
print(' Get group policy failed')
if error.response['Error']['Code'] == 'AccessDenied':
print(' FAILURE: MISSING REQUIRED AWS PERMISSIONS')
else:
print(' {}'.format(error.response['Error']['Code']))
user['PermissionsConfirmed'] = False
user = parse_document(document, user)
# Get attached group policies
attached_policies = []
try:
response = client.list_attached_group_policies(
GroupName=group['GroupName']
)
attached_policies = response['AttachedPolicies']
while 'IsTruncated' in response and response['IsTruncated'] is True:
response = client.list_attached_group_policies(
GroupName=group['GroupName'],
Marker=response['Marker']
)
attached_policies += response['AttachedPolicies']
group['Policies'] += attached_policies
except ClientError as error:
print(' List attached group policies failed')
if error.response['Error']['Code'] == 'AccessDenied':
print(' FAILURE: MISSING REQUIRED AWS PERMISSIONS')
else:
print(' {}'.format(error.response['Error']['Code']))
user['PermissionsConfirmed'] = False
user = parse_attached_policies(client, attached_policies, user)
# Get inline user policies
policies = []
if 'Policies' not in user:
user['Policies'] = []
try:
response = client.list_user_policies(
UserName=user['UserName']
)
policies = response['PolicyNames']
while 'IsTruncated' in response and response['IsTruncated'] is True:
response = client.list_user_policies(
UserName=user['UserName'],
Marker=response['Marker']
)
policies += response['PolicyNames']
for policy in policies:
user['Policies'].append({
'PolicyName': policy
})
except ClientError as error:
print(' List user policies failed')
if error.response['Error']['Code'] == 'AccessDenied':
print(' FAILURE: MISSING REQUIRED AWS PERMISSIONS')
else:
print(' {}'.format(error.response['Error']['Code']))
user['PermissionsConfirmed'] = False
# Get document for each inline policy
for policy in policies:
try:
document = client.get_user_policy(
UserName=user['UserName'],
PolicyName=policy
)['PolicyDocument']
except ClientError as error:
print(' Get user policy failed')
if error.response['Error']['Code'] == 'AccessDenied':
print(' FAILURE: MISSING REQUIRED AWS PERMISSIONS')
else:
print(' {}'.format(error.response['Error']['Code']))
user['PermissionsConfirmed'] = False
user = parse_document(document, user)
# Get attached user policies
attached_policies = []
try:
response = client.list_attached_user_policies(
UserName=user['UserName']
)
attached_policies = response['AttachedPolicies']
while 'IsTruncated' in response and response['IsTruncated'] is True:
response = client.list_attached_user_policies(
UserName=user['UserName'],
Marker=response['Marker']
)
attached_policies += response['AttachedPolicies']
user['Policies'] += attached_policies
except ClientError as error:
print(' List attached user policies failed')
if error.response['Error']['Code'] == 'AccessDenied':
print(' FAILURE: MISSING REQUIRED AWS PERMISSIONS')
else:
print(' {}'.format(error.response['Error']['Code']))
user['PermissionsConfirmed'] = False
user = parse_attached_policies(client, attached_policies, user)
if user['PermissionsConfirmed']:
summary_data['users_confirmed'] += 1
if args.user_name is None and args.all_users is False:
print(' Confirmed Permissions for {}'.format(user['UserName']))
active_aws_key.update(
pacu_main.database,
user_name=user['UserName'],
arn=user['Arn'],
user_id=user['UserId'],
groups=user['Groups'],
policies=user['Policies'],
permissions_confirmed=user['PermissionsConfirmed'],
allow_permissions=user['Permissions']['Allow'],
deny_permissions=user['Permissions']['Deny']
)
else:
with save('confirmed_permissions/user-{}.json'.format(session.name, user['UserName']), 'w+') as f:
json.dump(user, f, indent=2, default=str)
print(' Permissions stored in user-{}.json'.format(user['UserName']))
except ClientError as error:
if error.response['Error']['Code'] == 'AccessDenied':
print(' FAILURE: MISSING REQUIRED AWS PERMISSIONS')
else:
print(' {}'.format(error.response['Error']['Code']))
print('Skipping {}'.format(user['UserName']))
return summary_data
| def main(args, pacu_main: 'Main'):
session = pacu_main.get_active_session()
###### Don't modify these. They can be removed if you are not using the function.
args = parser.parse_args(args)
print = pacu_main.print
input = pacu_main.input
key_info = pacu_main.key_info
fetch_data = pacu_main.fetch_data
######
summary_data = {
'users_confirmed': 0,
'roles_confirmed': 0
}
users = []
roles = []
if args.all_users is True:
if fetch_data(['IAM', 'Users'], module_info['prerequisite_modules'][0], '--users') is False:
print('FAILURE')
print(' SUB-MODULE EXECUTION FAILED')
return
fetched_users = session.IAM['Users']
for user in fetched_users:
users.append({
'UserName': user['UserName'],
'PermissionsConfirmed': True,
'Permissions': {
'Allow': {},
'Deny': {}
}
})
elif args.user_name is not None:
users.append({
'UserName': args.user_name,
'PermissionsConfirmed': True,
'Permissions': {
'Allow': {},
'Deny': {}
}
})
summary_data['single_user'] = args.user_name
if args.all_roles is True:
if fetch_data(['IAM', 'Roles'], module_info['prerequisite_modules'][0], '--roles') is False:
print('FAILURE')
print(' SUB-MODULE EXECUTION FAILED')
return
fetched_roles = session.IAM['Roles']
for role in fetched_roles:
roles.append({
'RoleName': role['RoleName'],
'PermissionsConfirmed': True,
'Permissions': {
'Allow': {},
'Deny': {}
}
})
elif args.role_name is not None:
roles.append({
'RoleName': args.role_name,
'PermissionsConfirmed': True,
'Permissions': {
'Allow': {},
'Deny': {}
}
})
summary_data['single_role'] = args.role_name
is_user = is_role = False
if not any([args.all_users, args.user_name, args.all_roles, args.role_name]):
client = pacu_main.get_boto3_client('sts')
identity = client.get_caller_identity()
active_aws_key = session.get_active_aws_key(pacu_main.database)
if re.match(r'arn:aws:iam::\d{12}:user/', identity['Arn']) is not None:
is_user = True
# GetCallerIdentity away return user's ARN like this if it was a user
# arn:aws:iam::123456789012:user/username
username = identity['Arn'].split(':user/')[1]
active_aws_key.update(
pacu_main.database,
user_name=username.split('/')[-1],
arn=identity['Arn'],
user_id=identity['UserId'],
account_id=identity['Account']
)
elif re.match(r'arn:aws:sts::\d{12}:assumed-role/', identity['Arn']) is not None:
is_role = True
active_aws_key.update(
pacu_main.database,
role_name=identity['Arn'].split(':assumed-role/')[1].split('/')[-2],
arn=identity['Arn'],
user_id=identity['UserId'],
account_id=identity['Account']
)
else:
print('Not an IAM user or role. Exiting...\n')
return False
if is_user:
user = key_info(alias=session.key_alias)
user['PermissionsConfirmed'] = True
user['Permissions'] = {'Allow': {}, 'Deny': {}}
users.append(user)
summary_data['single_user'] = user['UserName']
elif is_role:
roles.append({
'RoleName': active_aws_key.role_name,
'PermissionsConfirmed': True,
'Permissions': {
'Allow': {},
'Deny': {}
}
})
summary_data['single_role'] = active_aws_key.role_name
# list-groups-for-user
# list-user-policies
# list-group-policies
# list-role-policies
# list-attached-role-policies
# list-attached-group-policies
# list-attached-user-policies
# get-policy
# get-policy-version
# get-user-policy
# get-group-policy
# get-role-policy
client = pacu_main.get_boto3_client('iam')
if any([args.all_users, args.user_name, args.all_roles, args.role_name]):
print('Permission Document Location:')
print(' {}/confirmed_permissions/\n'.format(downloads_dir()))
if roles:
print('Confirming permissions for roles:')
for role in roles:
print(' {}...'.format(role['RoleName']))
role['Policies'] = []
try:
# Get inline role policies
policies = []
try:
response = client.list_role_policies(
RoleName=role['RoleName']
)
policies = response['PolicyNames']
while 'IsTruncated' in response and response['IsTruncated'] is True:
response = client.list_role_policies(
RoleName=role['RoleName'],
Marker=response['Marker']
)
policies += response['PolicyNames']
for policy in policies:
role['Policies'].append({
'PolicyName': policy
})
except ClientError as error:
print(' List role policies failed')
if error.response['Error']['Code'] == 'AccessDenied':
print(' FAILURE: MISSING REQUIRED AWS PERMISSIONS')
else:
print(' {}'.format(error.response['Error']['Code']))
role['PermissionsConfirmed'] = False
# Get document for each inline policy
for policy in policies:
try:
document = client.get_role_policy(
RoleName=role['RoleName'],
PolicyName=policy
)['PolicyDocument']
except ClientError as error:
print(' Get role policy failed')
if error.response['Error']['Code'] == 'AccessDenied':
print(' FAILURE: MISSING REQUIRED AWS PERMISSIONS')
else:
print(' {}'.format(error.response['Error']['Code']))
role['PermissionsConfirmed'] = False
role = parse_document(document, role)
# Get attached role policies
attached_policies = []
try:
response = client.list_attached_role_policies(
RoleName=role['RoleName']
)
attached_policies = response['AttachedPolicies']
while 'IsTruncated' in response and response['IsTruncated'] is True:
response = client.list_attached_role_policies(
RoleName=role['RoleName'],
Marker=response['Marker']
)
attached_policies += response['AttachedPolicies']
role['Policies'] += attached_policies
except ClientError as error:
print(' List attached role policies failed')
if error.response['Error']['Code'] == 'AccessDenied':
print(' FAILURE: MISSING REQUIRED AWS PERMISSIONS')
else:
print(' {}'.format(error.response['Error']['Code']))
role['PermissionsConfirmed'] = False
role = parse_attached_policies(client, attached_policies, role)
if role['PermissionsConfirmed']:
summary_data['roles_confirmed'] += 1
if args.role_name is None and args.all_roles is False:
print(' Confirmed permissions for {}'.format(role['RoleName']))
active_aws_key.update(
pacu_main.database,
role_name=role['RoleName'],
policies=role['Policies'],
permissions_confirmed=role['PermissionsConfirmed'],
allow_permissions=role['Permissions']['Allow'],
deny_permissions=role['Permissions']['Deny']
)
else:
with save('confirmed_permissions/role-{}.json'.format(role['RoleName']), 'w+') as f:
json.dump(role, f, indent=2, default=str)
print(' Permissions stored in role-{}.json'.format(role['RoleName']))
except ClientError as error:
if error.response['Error']['Code'] == 'AccessDenied':
print(' FAILURE: MISSING REQUIRED AWS PERMISSIONS')
else:
print(' {}'.format(error.response['Error']['Code']))
print('Skipping {}'.format(role['RoleName']))
if users:
print()
if users:
print('Confirming permissions for users:')
for user in users:
print(' {}...'.format(user['UserName']))
user['Groups'] = []
user['Policies'] = []
try:
policies = []
# Get groups that the user is in
try:
response = client.list_groups_for_user(
UserName=user['UserName']
)
user['Groups'] = response['Groups']
while 'IsTruncated' in response and response['IsTruncated'] is True:
response = client.list_groups_for_user(
UserName=user['UserName'],
Marker=response['Marker']
)
user['Groups'] += response['Groups']
except ClientError as error:
print(' List groups for user failed')
if error.response['Error']['Code'] == 'AccessDenied':
print(' FAILURE: MISSING REQUIRED AWS PERMISSIONS')
else:
print(' {}'.format(error.response['Error']['Code']))
user['PermissionsConfirmed'] = False
# Get inline and attached group policies
for group in user['Groups']:
group['Policies'] = []
# Get inline group policies
try:
response = client.list_group_policies(
GroupName=group['GroupName']
)
policies = response['PolicyNames']
while 'IsTruncated' in response and response['IsTruncated'] is True:
response = client.list_group_policies(
GroupName=group['GroupName'],
Marker=response['Marker']
)
policies += response['PolicyNames']
except ClientError as error:
print(' List group policies failed')
if error.response['Error']['Code'] == 'AccessDenied':
print(' FAILURE: MISSING REQUIRED AWS PERMISSIONS')
else:
print(' {}'.format(error.response['Error']['Code']))
user['PermissionsConfirmed'] = False
# Get document for each inline policy
for policy in policies:
group['Policies'].append({ # Add policies to list of policies for this group
'PolicyName': policy
})
try:
document = client.get_group_policy(
GroupName=group['GroupName'],
PolicyName=policy
)['PolicyDocument']
except ClientError as error:
print(' Get group policy failed')
if error.response['Error']['Code'] == 'AccessDenied':
print(' FAILURE: MISSING REQUIRED AWS PERMISSIONS')
else:
print(' {}'.format(error.response['Error']['Code']))
user['PermissionsConfirmed'] = False
user = parse_document(document, user)
# Get attached group policies
attached_policies = []
try:
response = client.list_attached_group_policies(
GroupName=group['GroupName']
)
attached_policies = response['AttachedPolicies']
while 'IsTruncated' in response and response['IsTruncated'] is True:
response = client.list_attached_group_policies(
GroupName=group['GroupName'],
Marker=response['Marker']
)
attached_policies += response['AttachedPolicies']
group['Policies'] += attached_policies
except ClientError as error:
print(' List attached group policies failed')
if error.response['Error']['Code'] == 'AccessDenied':
print(' FAILURE: MISSING REQUIRED AWS PERMISSIONS')
else:
print(' {}'.format(error.response['Error']['Code']))
user['PermissionsConfirmed'] = False
user = parse_attached_policies(client, attached_policies, user)
# Get inline user policies
policies = []
if 'Policies' not in user:
user['Policies'] = []
try:
response = client.list_user_policies(
UserName=user['UserName']
)
policies = response['PolicyNames']
while 'IsTruncated' in response and response['IsTruncated'] is True:
response = client.list_user_policies(
UserName=user['UserName'],
Marker=response['Marker']
)
policies += response['PolicyNames']
for policy in policies:
user['Policies'].append({
'PolicyName': policy
})
except ClientError as error:
print(' List user policies failed')
if error.response['Error']['Code'] == 'AccessDenied':
print(' FAILURE: MISSING REQUIRED AWS PERMISSIONS')
else:
print(' {}'.format(error.response['Error']['Code']))
user['PermissionsConfirmed'] = False
# Get document for each inline policy
for policy in policies:
try:
document = client.get_user_policy(
UserName=user['UserName'],
PolicyName=policy
)['PolicyDocument']
except ClientError as error:
print(' Get user policy failed')
if error.response['Error']['Code'] == 'AccessDenied':
print(' FAILURE: MISSING REQUIRED AWS PERMISSIONS')
else:
print(' {}'.format(error.response['Error']['Code']))
user['PermissionsConfirmed'] = False
user = parse_document(document, user)
# Get attached user policies
attached_policies = []
try:
response = client.list_attached_user_policies(
UserName=user['UserName']
)
attached_policies = response['AttachedPolicies']
while 'IsTruncated' in response and response['IsTruncated'] is True:
response = client.list_attached_user_policies(
UserName=user['UserName'],
Marker=response['Marker']
)
attached_policies += response['AttachedPolicies']
user['Policies'] += attached_policies
except ClientError as error:
print(' List attached user policies failed')
if error.response['Error']['Code'] == 'AccessDenied':
print(' FAILURE: MISSING REQUIRED AWS PERMISSIONS')
else:
print(' {}'.format(error.response['Error']['Code']))
user['PermissionsConfirmed'] = False
user = parse_attached_policies(client, attached_policies, user)
if user['PermissionsConfirmed']:
summary_data['users_confirmed'] += 1
if args.user_name is None and args.all_users is False:
print(' Confirmed Permissions for {}'.format(user['UserName']))
active_aws_key.update(
pacu_main.database,
user_name=user['UserName'],
arn=user['Arn'],
user_id=user['UserId'],
groups=user['Groups'],
policies=user['Policies'],
permissions_confirmed=user['PermissionsConfirmed'],
allow_permissions=user['Permissions']['Allow'],
deny_permissions=user['Permissions']['Deny']
)
else:
with save('confirmed_permissions/user-{}.json'.format(session.name, user['UserName']), 'w+') as f:
json.dump(user, f, indent=2, default=str)
print(' Permissions stored in user-{}.json'.format(user['UserName']))
except ClientError as error:
if error.response['Error']['Code'] == 'AccessDenied':
print(' FAILURE: MISSING REQUIRED AWS PERMISSIONS')
else:
print(' {}'.format(error.response['Error']['Code']))
print('Skipping {}'.format(user['UserName']))
return summary_data
|
2,223 | def test_liblinear_dual_random_state():
# random_state is relevant for liblinear solver only if dual=True
X, y = make_classification(n_samples=20, random_state=0)
lr1 = LogisticRegression(random_state=0, dual=True, max_iter=1, tol=1e-15,
solver='liblinear', multi_class='ovr')
lr1.fit(X, y)
lr2 = LogisticRegression(random_state=0, dual=True, max_iter=1, tol=1e-15,
solver='liblinear', multi_class='ovr')
lr2.fit(X, y)
lr3 = LogisticRegression(random_state=8, dual=True, max_iter=1, tol=1e-15,
solver='liblinear', multi_class='ovr')
lr3.fit(X, y)
# same result for same random state
assert_array_almost_equal(lr1.coef_, lr2.coef_)
# different results for different random states
msg = "Arrays are not almost equal to 6 decimals"
with pytest.raises(ValueError, match=msg):
assert_array_almost_equal(lr1.coef_, lr3.coef_)
| def test_liblinear_dual_random_state():
# random_state is relevant for liblinear solver only if dual=True
X, y = make_classification(n_samples=20, random_state=0)
lr1 = LogisticRegression(random_state=0, dual=True, max_iter=1, tol=1e-15,
solver='liblinear', multi_class='ovr')
lr1.fit(X, y)
lr2 = LogisticRegression(random_state=0, dual=True, max_iter=1, tol=1e-15,
solver='liblinear', multi_class='ovr')
lr2.fit(X, y)
lr3 = LogisticRegression(random_state=8, dual=True, max_iter=1, tol=1e-15,
solver='liblinear', multi_class='ovr')
lr3.fit(X, y)
# same result for same random state
assert_array_almost_equal(lr1.coef_, lr2.coef_)
# different results for different random states
msg = "Arrays are not almost equal to 6 decimals"
with pytest.raises(AssertionError, match=msg):
assert_array_almost_equal(lr1.coef_, lr3.coef_)
|
1,255 | def _infer_intent_from_filename(filename):
"""Parses output filename for common suffixes and fetches corresponding intent code"""
from pathlib import Path
ext = Path(filename).suffixes[0]
return CIFTI_EXTENSIONS_TO_INTENTS.get(ext)
| def _infer_intent_from_filename(filename):
"""Parses output filename for common suffixes and fetches corresponding intent code"""
from pathlib import Path
ext = Path(filename).suffixes[-2]
return CIFTI_EXTENSIONS_TO_INTENTS.get(ext)
|
3,242 | def quantize_time(time, key_hash, duration=300):
""" Adds jitter based on the key_hash around start/end times for caching snuba queries
Given a time and a key_hash this should result in a timestamp that remains the same for a duration
The end of the duration will be different per key_hash which avoids spikes in the number of queries
Must be based on the key_hash so they cache keys are consistent per query
For example: the time is 17:02:00, there's two queries query A has a key_hash of 30, query B has a key_hash of
60, we have the default duration of 300 (5 Minutes)
- query A will have the suffix of 17:00:30 for a timewindow from 17:00:30 until 17:05:30
- eg. Even when its 17:05:00 the suffix will still be 17:00:30
- query B will have the suffix of 17:01:00 for a timewindow from 17:01:00 until 17:06:00
"""
# Use the hash so that seconds past the hour gets rounded differently per query.
jitter = key_hash % duration
seconds_past_hour = time.minute * 60 + time.second
# Round seconds to a multiple of duration, cause this uses "floor" division shouldn't give us a future window
time_window_start = seconds_past_hour // duration * duration + jitter
# If the time is past the rounded seconds then we want our key to be for this timewindow
if time_window_start < seconds_past_hour:
seconds_past_hour = time_window_start
# Otherwise we're in the previous time window, subtract duration to give us the previous timewindows start
else:
seconds_past_hour = time_window_start - duration
return (
# Since we're adding seconds past the hour, we want time but without minutes or seconds
time.replace(minute=0, second=0, microsecond=0)
+
# Use timedelta here so keys are consistent around hour boundaries
timedelta(seconds=seconds_past_hour)
)
| def quantize_time(time, key_hash, duration=300):
""" Adds jitter based on the key_hash around start/end times for caching snuba queries
Given a time and a key_hash this should result in a timestamp that remains the same for a duration
The end of the duration will be different per key_hash which avoids spikes in the number of queries
Must be based on the key_hash so they cache keys are consistent per query
For example: the time is 17:02:00, there's two queries query A has a key_hash of 30, query B has a key_hash of
60, we have the default duration of 300 (5 Minutes)
- query A will have the suffix of 17:00:30 for a timewindow from 17:00:30 until 17:05:30
- eg. Even when its 17:05:00 the suffix will still be 17:00:30
- query B will have the suffix of 17:01:00 for a timewindow from 17:01:00 until 17:06:00
"""
# Use the hash so that seconds past the hour gets rounded differently per query.
jitter = key_hash % duration
seconds_past_hour = time.minute * 60 + time.second
# Round seconds to a multiple of duration, because this uses "floor" division shouldn't give us a future window
time_window_start = seconds_past_hour // duration * duration + jitter
# If the time is past the rounded seconds then we want our key to be for this timewindow
if time_window_start < seconds_past_hour:
seconds_past_hour = time_window_start
# Otherwise we're in the previous time window, subtract duration to give us the previous timewindows start
else:
seconds_past_hour = time_window_start - duration
return (
# Since we're adding seconds past the hour, we want time but without minutes or seconds
time.replace(minute=0, second=0, microsecond=0)
+
# Use timedelta here so keys are consistent around hour boundaries
timedelta(seconds=seconds_past_hour)
)
|
22,391 | def check_binary(name, file_path=True):
# Handles files if file_path is True or text if file_path is False
if file_path:
temp = open(name, "rb")
read_start = int(os.stat(name).st_size/2)
else:
temp = BytesIO(name)
read_start = int(len(name)/2)
try:
# Read 1024 from the middle of the file,
# to avoid issues with long txt headers on binary files.
temp.seek(read_start)
return util.is_binary(temp.read(1024))
finally:
temp.close()
| def check_binary(name, file_path=True):
# Handles files if file_path is True or text if file_path is False
if file_path:
temp = open(name, "rb")
read_start = int(os.stat(name).st_size/2)
else:
temp = BytesIO(name)
read_start = int(len(name) / 2)
try:
# Read 1024 from the middle of the file,
# to avoid issues with long txt headers on binary files.
temp.seek(read_start)
return util.is_binary(temp.read(1024))
finally:
temp.close()
|
10,357 | def db_dump(module, host, user, password, db_name, target, all_databases, port,
config_file, socket=None, ssl_cert=None, ssl_key=None, ssl_ca=None,
single_transaction=None, quick=None, ignore_tables=None, hex_blob=None,
encoding=None, only_tables=None, force=False):
cmd = module.get_bin_path('mysqldump', True)
# If defined, mysqldump demands --defaults-extra-file be the first option
if config_file:
cmd += " --defaults-extra-file=%s" % shlex_quote(config_file)
if user is not None:
cmd += " --user=%s" % shlex_quote(user)
if password is not None:
cmd += " --password=%s" % shlex_quote(password)
if ssl_cert is not None:
cmd += " --ssl-cert=%s" % shlex_quote(ssl_cert)
if ssl_key is not None:
cmd += " --ssl-key=%s" % shlex_quote(ssl_key)
if ssl_ca is not None:
cmd += " --ssl-ca=%s" % shlex_quote(ssl_ca)
if force:
cmd += " --force"
if socket is not None:
cmd += " --socket=%s" % shlex_quote(socket)
else:
cmd += " --host=%s --port=%i" % (shlex_quote(host), port)
if all_databases:
cmd += " --all-databases"
elif only_tables and len(db_name) == 1:
cmd += " %s" % db_name[0]
else:
cmd += " --databases {0} --skip-lock-tables".format(' '.join(db_name))
if (encoding is not None) and (encoding != ""):
cmd += " --default-character-set=%s" % shlex_quote(encoding)
if single_transaction:
cmd += " --single-transaction=true"
if quick:
cmd += " --quick"
if ignore_tables:
for an_ignored_table in ignore_tables:
cmd += " --ignore-table={0}".format(an_ignored_table)
if hex_blob:
cmd += " --hex-blob"
path = None
if os.path.splitext(target)[-1] == '.gz':
path = module.get_bin_path('gzip', True)
elif os.path.splitext(target)[-1] == '.bz2':
path = module.get_bin_path('bzip2', True)
elif os.path.splitext(target)[-1] == '.xz':
path = module.get_bin_path('xz', True)
if path:
cmd = '%s | %s > %s' % (cmd, path, shlex_quote(target))
else:
cmd += " > %s" % shlex_quote(target)
executed_commands.append(cmd)
rc, stdout, stderr = module.run_command(cmd, use_unsafe_shell=True)
return rc, stdout, stderr
| def db_dump(module, host, user, password, db_name, target, all_databases, port,
config_file, socket=None, ssl_cert=None, ssl_key=None, ssl_ca=None,
single_transaction=None, quick=None, ignore_tables=None, hex_blob=None,
encoding=None, only_tables=None, force=False):
cmd = module.get_bin_path('mysqldump', True)
# If defined, mysqldump demands --defaults-extra-file be the first option
if config_file:
cmd += " --defaults-extra-file=%s" % shlex_quote(config_file)
if user is not None:
cmd += " --user=%s" % shlex_quote(user)
if password is not None:
cmd += " --password=%s" % shlex_quote(password)
if ssl_cert is not None:
cmd += " --ssl-cert=%s" % shlex_quote(ssl_cert)
if ssl_key is not None:
cmd += " --ssl-key=%s" % shlex_quote(ssl_key)
if ssl_ca is not None:
cmd += " --ssl-ca=%s" % shlex_quote(ssl_ca)
if force:
cmd += " --force"
if socket is not None:
cmd += " --socket=%s" % shlex_quote(socket)
else:
cmd += " --host=%s --port=%i" % (shlex_quote(host), port)
if all_databases:
cmd += " --all-databases"
elif only_tables:
cmd += " %s" % db_name[0]
else:
cmd += " --databases {0} --skip-lock-tables".format(' '.join(db_name))
if (encoding is not None) and (encoding != ""):
cmd += " --default-character-set=%s" % shlex_quote(encoding)
if single_transaction:
cmd += " --single-transaction=true"
if quick:
cmd += " --quick"
if ignore_tables:
for an_ignored_table in ignore_tables:
cmd += " --ignore-table={0}".format(an_ignored_table)
if hex_blob:
cmd += " --hex-blob"
path = None
if os.path.splitext(target)[-1] == '.gz':
path = module.get_bin_path('gzip', True)
elif os.path.splitext(target)[-1] == '.bz2':
path = module.get_bin_path('bzip2', True)
elif os.path.splitext(target)[-1] == '.xz':
path = module.get_bin_path('xz', True)
if path:
cmd = '%s | %s > %s' % (cmd, path, shlex_quote(target))
else:
cmd += " > %s" % shlex_quote(target)
executed_commands.append(cmd)
rc, stdout, stderr = module.run_command(cmd, use_unsafe_shell=True)
return rc, stdout, stderr
|
51,508 | def run_query(data_source, parameter_values, query_text, query_id, max_age=0, parameter_schema={}):
if data_source.paused:
if data_source.pause_reason:
message = '{} is paused ({}). Please try later.'.format(data_source.name, data_source.pause_reason)
else:
message = '{} is paused. Please try later.'.format(data_source.name)
return error_response(message)
query = ParameterizedQuery(query_text, parameter_schema).apply(parameter_values)
if query.missing_params:
return error_response(u'Missing parameter value for: {}'.format(u", ".join(query.missing_params)))
if max_age == 0:
query_result = None
else:
query_result = models.QueryResult.get_latest(data_source, query.text, max_age)
if query_result:
return {'query_result': query_result.to_dict()}
else:
job = enqueue_query(query.text, data_source, current_user.id, metadata={"Username": current_user.email, "Query ID": query_id})
return {'job': job.to_dict()}
| def run_query(data_source, parameter_values, query_text, query_id, max_age=0, parameter_schema=None):
if parameter_schema is None:
parameter_schema = {}
if data_source.paused:
if data_source.pause_reason:
message = '{} is paused ({}). Please try later.'.format(data_source.name, data_source.pause_reason)
else:
message = '{} is paused. Please try later.'.format(data_source.name)
return error_response(message)
query = ParameterizedQuery(query_text, parameter_schema).apply(parameter_values)
if query.missing_params:
return error_response(u'Missing parameter value for: {}'.format(u", ".join(query.missing_params)))
if max_age == 0:
query_result = None
else:
query_result = models.QueryResult.get_latest(data_source, query.text, max_age)
if query_result:
return {'query_result': query_result.to_dict()}
else:
job = enqueue_query(query.text, data_source, current_user.id, metadata={"Username": current_user.email, "Query ID": query_id})
return {'job': job.to_dict()}
|
57,683 | def get_machine_guid(machine_name):
query_fields = ['elementDisplayName']
path = [
{
'requestedType': 'Machine',
'filters': [
{'facetName': 'elementDisplayName', 'values': [machine_name]}
],
'isResult': True
}
]
json_body = build_query(query_fields, path)
response = http_request('POST', '/rest/visualsearch/query/simple', json_body=json_body).json()
data = dict_safe_get(response, ['data', 'resultIdToElementDataMap'], {}, dict)
return dict_safe_get(data.keys(), [0])
| def get_machine_guid(machine_name):
query_fields = ['elementDisplayName']
path = [
{
'requestedType': 'Machine',
'filters': [
{'facetName': 'elementDisplayName', 'values': [machine_name]}
],
'isResult': True
}
]
json_body = build_query(query_fields, path)
response = http_request('POST', '/rest/visualsearch/query/simple', json_body=json_body).json()
data = dict_safe_get(response, ['data', 'resultIdToElementDataMap'], {}, dict)
if not data:
raise ValueError('Could not find machine')
return data.keys()[0]
|
54,840 | def sample_to_event(sample: list, max_count_per_mode: int) -> Union[int, None]:
r"""Provides the event corresponding to a given sample.
For an input ``max_count_per_mode``, events are expressed here simply by the total photon
number :math:`k`.
**Example usage:**
>>> sample = [1, 2, 0, 0, 1, 1, 0, 3]
>>> sample_to_event(sample, 4)
8
>>> sample_to_event(sample, 2)
None
Args:
sample (list[int]): a sample from GBS
max_count_per_mode (int): the maximum number of photons counted in any given mode for a
sample to be categorized as an event. Samples with counts exceeding this value are
attributed the event ``None``.
Returns:
int or None: the event of the sample
"""
if max(sample) <= max_count_per_mode:
return sum(sample)
return None
| def sample_to_event(sample: list, max_count_per_mode: int) -> Union[int, None]:
r"""Provides the event corresponding to a given sample.
For an input ``max_count_per_mode``, events are expressed here simply by the total photon
number :math:`k`.
return int(factorial(modes, exact=False) / np.prod(factorial(counts, exact=False)))
**Example usage:**
>>> sample = [1, 2, 0, 0, 1, 1, 0, 3]
>>> sample_to_event(sample, 4)
8
>>> sample_to_event(sample, 2)
None
Args:
sample (list[int]): a sample from GBS
max_count_per_mode (int): the maximum number of photons counted in any given mode for a
sample to be categorized as an event. Samples with counts exceeding this value are
attributed the event ``None``.
Returns:
int or None: the event of the sample
"""
if max(sample) <= max_count_per_mode:
return sum(sample)
return None
|
40,478 | def main():
dataset = 'Cora'
path = osp.join(osp.dirname(osp.realpath(__file__)), '..', 'data', dataset)
dataset = Planetoid(path, dataset)
data = dataset[0]
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = Node2Vec(data.edge_index, embedding_dim=128, walk_length=20,
context_size=10, walks_per_node=10,
num_negative_samples=1, p=1, q=1, sparse=True).to(device)
if sys.platform.startswith('win'):
num_workers = 0
else:
num_workers = 4
loader = model.loader(batch_size=128, shuffle=True,
num_workers=num_workers)
optimizer = torch.optim.SparseAdam(list(model.parameters()), lr=0.01)
def train():
model.train()
total_loss = 0
for pos_rw, neg_rw in loader:
optimizer.zero_grad()
loss = model.loss(pos_rw.to(device), neg_rw.to(device))
loss.backward()
optimizer.step()
total_loss += loss.item()
return total_loss / len(loader)
@torch.no_grad()
def test():
model.eval()
z = model()
acc = model.test(z[data.train_mask], data.y[data.train_mask],
z[data.test_mask], data.y[data.test_mask],
max_iter=150)
return acc
for epoch in range(1, 101):
loss = train()
acc = test()
print(f'Epoch: {epoch:02d}, Loss: {loss:.4f}, Acc: {acc:.4f}')
@torch.no_grad()
def plot_points(colors):
model.eval()
z = model(torch.arange(data.num_nodes, device=device))
z = TSNE(n_components=2).fit_transform(z.cpu().numpy())
y = data.y.cpu().numpy()
plt.figure(figsize=(8, 8))
for i in range(dataset.num_classes):
plt.scatter(z[y == i, 0], z[y == i, 1], s=20, color=colors[i])
plt.axis('off')
plt.show()
colors = [
'#ffc0cb', '#bada55', '#008080', '#420420', '#7fe5f0', '#065535',
'#ffd700'
]
plot_points(colors)
| def main():
dataset = 'Cora'
path = osp.join(osp.dirname(osp.realpath(__file__)), '..', 'data', dataset)
dataset = Planetoid(path, dataset)
data = dataset[0]
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = Node2Vec(data.edge_index, embedding_dim=128, walk_length=20,
context_size=10, walks_per_node=10,
num_negative_samples=1, p=1, q=1, sparse=True).to(device)
num_workers = 0 if sys.platform.startswith('win') else 4
loader = model.loader(batch_size=128, shuffle=True,
num_workers=num_workers)
optimizer = torch.optim.SparseAdam(list(model.parameters()), lr=0.01)
def train():
model.train()
total_loss = 0
for pos_rw, neg_rw in loader:
optimizer.zero_grad()
loss = model.loss(pos_rw.to(device), neg_rw.to(device))
loss.backward()
optimizer.step()
total_loss += loss.item()
return total_loss / len(loader)
@torch.no_grad()
def test():
model.eval()
z = model()
acc = model.test(z[data.train_mask], data.y[data.train_mask],
z[data.test_mask], data.y[data.test_mask],
max_iter=150)
return acc
for epoch in range(1, 101):
loss = train()
acc = test()
print(f'Epoch: {epoch:02d}, Loss: {loss:.4f}, Acc: {acc:.4f}')
@torch.no_grad()
def plot_points(colors):
model.eval()
z = model(torch.arange(data.num_nodes, device=device))
z = TSNE(n_components=2).fit_transform(z.cpu().numpy())
y = data.y.cpu().numpy()
plt.figure(figsize=(8, 8))
for i in range(dataset.num_classes):
plt.scatter(z[y == i, 0], z[y == i, 1], s=20, color=colors[i])
plt.axis('off')
plt.show()
colors = [
'#ffc0cb', '#bada55', '#008080', '#420420', '#7fe5f0', '#065535',
'#ffd700'
]
plot_points(colors)
|
830 | def test_issue_16350():
d = Derivative(y, x)
assert Derivative(d, x) == Derivative(y, (x, 2))
assert Derivative(d, x, evaluate = False) == Derivative(Derivative(y, x), x)
| def test_issue_16350():
d = Derivative(y, x)
assert Derivative(d, x) == Derivative(y, (x, 2))
assert Derivative(d, x, evaluate=False) == Derivative(Derivative(y, x), x)
|
54,905 | def main():
"""Bandit CLI."""
# bring our logging stuff up as early as possible
debug = (logging.DEBUG if '-d' in sys.argv or '--debug' in sys.argv else
logging.INFO)
_init_logger(debug)
extension_mgr = _init_extensions()
baseline_formatters = [f.name for f in filter(lambda x:
hasattr(x.plugin,
'_accepts_baseline'),
extension_mgr.formatters)]
# now do normal startup
parser = argparse.ArgumentParser(
description='Bandit - a Python source code security analyzer',
formatter_class=argparse.RawDescriptionHelpFormatter
)
parser.add_argument(
'targets', metavar='targets', type=str, nargs='*',
help='source file(s) or directory(s) to be tested'
)
parser.add_argument(
'-r', '--recursive', dest='recursive',
action='store_true', help='find and process files in subdirectories'
)
parser.add_argument(
'-a', '--aggregate', dest='agg_type',
action='store', default='file', type=str,
choices=['file', 'vuln'],
help='aggregate output by vulnerability (default) or by filename'
)
parser.add_argument(
'-n', '--number', dest='context_lines',
action='store', default=3, type=int,
help='maximum number of code lines to output for each issue'
)
parser.add_argument(
'-c', '--configfile', dest='config_file',
action='store', default=None, type=str,
help='optional config file to use for selecting plugins and '
'overriding defaults'
)
parser.add_argument(
'-p', '--profile', dest='profile',
action='store', default=None, type=str,
help='profile to use (defaults to executing all tests)'
)
parser.add_argument(
'-t', '--tests', dest='tests',
action='store', default=None, type=str,
help='comma-separated list of test IDs to run'
)
parser.add_argument(
'-s', '--skip', dest='skips',
action='store', default=None, type=str,
help='comma-separated list of test IDs to skip'
)
severity_group = parser.add_mutually_exclusive_group(required=False)
severity_group.add_argument(
'-l', '--level', dest='severity', action='count',
default=1, help='report only issues of a given severity level or '
'higher (-l for LOW, -ll for MEDIUM, -lll for HIGH)'
)
severity_group.add_argument(
'--severity-level', dest='severity_string', action='store',
help='report only issues of a given severity level or higher.'
' "all" and "low" are likely to produce the same results, but it'
' is possible for rules to be undefined which will'
' not be listed in "low".',
choices=['all', 'low', 'medium', 'high']
)
confidence_group = parser.add_mutually_exclusive_group(required=False)
confidence_group.add_argument(
'-i', '--confidence', dest='confidence', action='count',
default=1, help='report only issues of a given confidence level or '
'higher (-i for LOW, -ii for MEDIUM, -iii for HIGH)'
)
confidence_group.add_argument(
'--confidence-level', dest='confidence_string', action='store',
help='report only issues of a given confidence level or higher.'
' "all" and "low" are likely to produce the same results, but it'
' is possible for rules to be undefined which will'
' not be listed in "low".',
choices=["all", "low", "medium", "high"]
)
output_format = 'screen' if sys.stdout.isatty() else 'txt'
parser.add_argument(
'-f', '--format', dest='output_format', action='store',
default=output_format, help='specify output format',
choices=sorted(extension_mgr.formatter_names)
)
parser.add_argument(
'--msg-template', action='store',
default=None, help='specify output message template'
' (only usable with --format custom),'
' see CUSTOM FORMAT section'
' for list of available values',
)
parser.add_argument(
'-o', '--output', dest='output_file', action='store', nargs='?',
type=argparse.FileType('w', encoding='utf-8'), default=sys.stdout,
help='write report to filename'
)
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument(
'-v', '--verbose', dest='verbose', action='store_true',
help='output extra information like excluded and included files'
)
parser.add_argument(
'-d', '--debug', dest='debug', action='store_true',
help='turn on debug mode'
)
group.add_argument(
'-q', '--quiet', '--silent', dest='quiet', action='store_true',
help='only show output in the case of an error'
)
parser.add_argument(
'--ignore-nosec', dest='ignore_nosec', action='store_true',
help='do not skip lines with # nosec comments'
)
parser.add_argument(
'-x', '--exclude', dest='excluded_paths', action='store',
default=','.join(constants.EXCLUDE),
help='comma-separated list of paths (glob patterns '
'supported) to exclude from scan '
'(note that these are in addition to the excluded '
'paths provided in the config file) (default: ' +
','.join(constants.EXCLUDE) + ')'
)
parser.add_argument(
'-b', '--baseline', dest='baseline', action='store',
default=None, help='path of a baseline report to compare against '
'(only JSON-formatted files are accepted)'
)
parser.add_argument(
'--ini', dest='ini_path', action='store', default=None,
help='path to a .bandit file that supplies command line arguments'
)
exit_zero_group = parser.add_mutually_exclusive_group(required=False)
exit_zero_group.add_argument(
'--exit-zero', action='store_true', dest='exit_zero', default=False,
help='exit with 0, even with results found'
)
exit_zero_group.add_argument(
'--exit-zero-severity', dest='exit_zero_severity_string',
action='store', default=None,
help='control which severity makes bandit to exit with zero '
'status code. Lower severities to the specified one are '
'included implicitly '
'(low for LOW, '
'medium for MEDIUM, '
'high for HIGH).'
)
python_ver = sys.version.replace('\n', '')
parser.add_argument(
'--version', action='version',
version='%(prog)s {version}\n python version = {python}'.format(
version=bandit.__version__, python=python_ver)
)
parser.set_defaults(debug=False)
parser.set_defaults(verbose=False)
parser.set_defaults(quiet=False)
parser.set_defaults(ignore_nosec=False)
plugin_info = ["%s\t%s" % (a[0], a[1].name) for a in
extension_mgr.plugins_by_id.items()]
blacklist_info = []
for a in extension_mgr.blacklist.items():
for b in a[1]:
blacklist_info.append('%s\t%s' % (b['id'], b['name']))
plugin_list = '\n\t'.join(sorted(set(plugin_info + blacklist_info)))
dedent_text = textwrap.dedent('''
CUSTOM FORMATTING
-----------------
Available tags:
{abspath}, {relpath}, {line}, {col}, {test_id},
{severity}, {msg}, {confidence}, {range}
Example usage:
Default template:
bandit -r examples/ --format custom --msg-template \\
"{abspath}:{line}: {test_id}[bandit]: {severity}: {msg}"
Provides same output as:
bandit -r examples/ --format custom
Tags can also be formatted in python string.format() style:
bandit -r examples/ --format custom --msg-template \\
"{relpath:20.20s}: {line:03}: {test_id:^8}: DEFECT: {msg:>20}"
See python documentation for more information about formatting style:
https://docs.python.org/3/library/string.html
The following tests were discovered and loaded:
-----------------------------------------------
''')
parser.epilog = dedent_text + "\t{0}".format(plugin_list)
# setup work - parse arguments, and initialize BanditManager
args = parser.parse_args()
# Check if `--msg-template` is not present without custom formatter
if args.output_format != 'custom' and args.msg_template is not None:
parser.error("--msg-template can only be used with --format=custom")
# Check if confidence or severity level have been specified with strings
if args.severity_string is not None:
if args.severity_string == "all":
args.severity = 1
elif args.severity_string == "low":
args.severity = 2
elif args.severity_string == "medium":
args.severity = 3
elif args.severity_string == "high":
args.severity = 4
# Other strings will be blocked by argparse
if args.confidence_string is not None:
if args.confidence_string == "all":
args.confidence = 1
elif args.confidence_string == "low":
args.confidence = 2
elif args.confidence_string == "medium":
args.confidence = 3
elif args.confidence_string == "high":
args.confidence = 4
# Other strings will be blocked by argparse
if args.exit_zero_severity_string is not None:
if args.exit_zero_severity_string == "all":
args.exit_zero_severity = 1
elif args.exit_zero_severity_string == "low":
args.exit_zero_severity = 2
elif args.exit_zero_severity_string == "medium":
args.exit_zero_severity = 3
elif args.exit_zero_severity_string == "high":
args.exit_zero_severity = 4
# Other strings will be blocked by argparse
try:
b_conf = b_config.BanditConfig(config_file=args.config_file)
except utils.ConfigError as e:
LOG.error(e)
sys.exit(2)
# Handle .bandit files in projects to pass cmdline args from file
ini_options = _get_options_from_ini(args.ini_path, args.targets)
if ini_options:
# prefer command line, then ini file
args.excluded_paths = _log_option_source(
args.excluded_paths,
ini_options.get('exclude'),
'excluded paths')
args.skips = _log_option_source(
args.skips,
ini_options.get('skips'),
'skipped tests')
args.tests = _log_option_source(
args.tests,
ini_options.get('tests'),
'selected tests')
ini_targets = ini_options.get('targets')
if ini_targets:
ini_targets = ini_targets.split(',')
args.targets = _log_option_source(
args.targets,
ini_targets,
'selected targets')
# TODO(tmcpeak): any other useful options to pass from .bandit?
args.recursive = _log_option_source(
args.recursive,
ini_options.get('recursive'),
'recursive scan')
args.agg_type = _log_option_source(
args.agg_type,
ini_options.get('aggregate'),
'aggregate output type')
args.context_lines = _log_option_source(
args.context_lines,
ini_options.get('number'),
'max code lines output for issue')
args.profile = _log_option_source(
args.profile,
ini_options.get('profile'),
'profile')
args.severity = _log_option_source(
args.severity,
ini_options.get('level'),
'severity level')
args.confidence = _log_option_source(
args.confidence,
ini_options.get('confidence'),
'confidence level')
args.output_format = _log_option_source(
args.output_format,
ini_options.get('format'),
'output format')
args.msg_template = _log_option_source(
args.msg_template,
ini_options.get('msg-template'),
'output message template')
args.output_file = _log_option_source(
args.output_file,
ini_options.get('output'),
'output file')
args.verbose = _log_option_source(
args.verbose,
ini_options.get('verbose'),
'output extra information')
args.debug = _log_option_source(
args.debug,
ini_options.get('debug'),
'debug mode')
args.quiet = _log_option_source(
args.quiet,
ini_options.get('quiet'),
'silent mode')
args.ignore_nosec = _log_option_source(
args.ignore_nosec,
ini_options.get('ignore-nosec'),
'do not skip lines with # nosec')
args.baseline = _log_option_source(
args.baseline,
ini_options.get('baseline'),
'path of a baseline report')
if not args.targets:
LOG.error("No targets found in CLI or ini files, exiting.")
sys.exit(2)
# if the log format string was set in the options, reinitialize
if b_conf.get_option('log_format'):
log_format = b_conf.get_option('log_format')
_init_logger(log_level=logging.DEBUG, log_format=log_format)
if args.quiet:
_init_logger(log_level=logging.WARN)
try:
profile = _get_profile(b_conf, args.profile, args.config_file)
_log_info(args, profile)
profile['include'].update(args.tests.split(',') if args.tests else [])
profile['exclude'].update(args.skips.split(',') if args.skips else [])
extension_mgr.validate_profile(profile)
except (utils.ProfileNotFound, ValueError) as e:
LOG.error(e)
sys.exit(2)
b_mgr = b_manager.BanditManager(b_conf, args.agg_type, args.debug,
profile=profile, verbose=args.verbose,
quiet=args.quiet,
ignore_nosec=args.ignore_nosec)
if args.baseline is not None:
try:
with open(args.baseline) as bl:
data = bl.read()
b_mgr.populate_baseline(data)
except IOError:
LOG.warning("Could not open baseline report: %s", args.baseline)
sys.exit(2)
if args.output_format not in baseline_formatters:
LOG.warning('Baseline must be used with one of the following '
'formats: ' + str(baseline_formatters))
sys.exit(2)
if args.output_format != "json":
if args.config_file:
LOG.info("using config: %s", args.config_file)
LOG.info("running on Python %d.%d.%d", sys.version_info.major,
sys.version_info.minor, sys.version_info.micro)
# initiate file discovery step within Bandit Manager
b_mgr.discover_files(args.targets, args.recursive, args.excluded_paths)
if not b_mgr.b_ts.tests:
LOG.error('No tests would be run, please check the profile.')
sys.exit(2)
# initiate execution of tests within Bandit Manager
b_mgr.run_tests()
LOG.debug(b_mgr.b_ma)
LOG.debug(b_mgr.metrics)
# trigger output of results by Bandit Manager
sev_level = constants.RANKING[args.severity - 1]
conf_level = constants.RANKING[args.confidence - 1]
b_mgr.output_results(args.context_lines,
sev_level,
conf_level,
args.output_file,
args.output_format,
args.msg_template)
if args.exit_zero:
sys.exit(0)
if ("exit_zero_severity" in args
and not b_mgr.above_threshold_results(args.exit_zero_severity)):
sys.exit(0)
if b_mgr.results_count(sev_filter=sev_level, conf_filter=conf_level) > 0:
sys.exit(1)
else:
sys.exit(0)
| def main():
"""Bandit CLI."""
# bring our logging stuff up as early as possible
debug = (logging.DEBUG if '-d' in sys.argv or '--debug' in sys.argv else
logging.INFO)
_init_logger(debug)
extension_mgr = _init_extensions()
baseline_formatters = [f.name for f in filter(lambda x:
hasattr(x.plugin,
'_accepts_baseline'),
extension_mgr.formatters)]
# now do normal startup
parser = argparse.ArgumentParser(
description='Bandit - a Python source code security analyzer',
formatter_class=argparse.RawDescriptionHelpFormatter
)
parser.add_argument(
'targets', metavar='targets', type=str, nargs='*',
help='source file(s) or directory(s) to be tested'
)
parser.add_argument(
'-r', '--recursive', dest='recursive',
action='store_true', help='find and process files in subdirectories'
)
parser.add_argument(
'-a', '--aggregate', dest='agg_type',
action='store', default='file', type=str,
choices=['file', 'vuln'],
help='aggregate output by vulnerability (default) or by filename'
)
parser.add_argument(
'-n', '--number', dest='context_lines',
action='store', default=3, type=int,
help='maximum number of code lines to output for each issue'
)
parser.add_argument(
'-c', '--configfile', dest='config_file',
action='store', default=None, type=str,
help='optional config file to use for selecting plugins and '
'overriding defaults'
)
parser.add_argument(
'-p', '--profile', dest='profile',
action='store', default=None, type=str,
help='profile to use (defaults to executing all tests)'
)
parser.add_argument(
'-t', '--tests', dest='tests',
action='store', default=None, type=str,
help='comma-separated list of test IDs to run'
)
parser.add_argument(
'-s', '--skip', dest='skips',
action='store', default=None, type=str,
help='comma-separated list of test IDs to skip'
)
severity_group = parser.add_mutually_exclusive_group(required=False)
severity_group.add_argument(
'-l', '--level', dest='severity', action='count',
default=1, help='report only issues of a given severity level or '
'higher (-l for LOW, -ll for MEDIUM, -lll for HIGH)'
)
severity_group.add_argument(
'--severity-level', dest='severity_string', action='store',
help='report only issues of a given severity level or higher.'
' "all" and "low" are likely to produce the same results, but it'
' is possible for rules to be undefined which will'
' not be listed in "low".',
choices=['all', 'low', 'medium', 'high']
)
confidence_group = parser.add_mutually_exclusive_group(required=False)
confidence_group.add_argument(
'-i', '--confidence', dest='confidence', action='count',
default=1, help='report only issues of a given confidence level or '
'higher (-i for LOW, -ii for MEDIUM, -iii for HIGH)'
)
confidence_group.add_argument(
'--confidence-level', dest='confidence_string', action='store',
help='report only issues of a given confidence level or higher.'
' "all" and "low" are likely to produce the same results, but it'
' is possible for rules to be undefined which will'
' not be listed in "low".',
choices=["all", "low", "medium", "high"]
)
output_format = 'screen' if sys.stdout.isatty() else 'txt'
parser.add_argument(
'-f', '--format', dest='output_format', action='store',
default=output_format, help='specify output format',
choices=sorted(extension_mgr.formatter_names)
)
parser.add_argument(
'--msg-template', action='store',
default=None, help='specify output message template'
' (only usable with --format custom),'
' see CUSTOM FORMAT section'
' for list of available values',
)
parser.add_argument(
'-o', '--output', dest='output_file', action='store', nargs='?',
type=argparse.FileType('w', encoding='utf-8'), default=sys.stdout,
help='write report to filename'
)
group = parser.add_mutually_exclusive_group(required=False)
group.add_argument(
'-v', '--verbose', dest='verbose', action='store_true',
help='output extra information like excluded and included files'
)
parser.add_argument(
'-d', '--debug', dest='debug', action='store_true',
help='turn on debug mode'
)
group.add_argument(
'-q', '--quiet', '--silent', dest='quiet', action='store_true',
help='only show output in the case of an error'
)
parser.add_argument(
'--ignore-nosec', dest='ignore_nosec', action='store_true',
help='do not skip lines with # nosec comments'
)
parser.add_argument(
'-x', '--exclude', dest='excluded_paths', action='store',
default=','.join(constants.EXCLUDE),
help='comma-separated list of paths (glob patterns '
'supported) to exclude from scan '
'(note that these are in addition to the excluded '
'paths provided in the config file) (default: ' +
','.join(constants.EXCLUDE) + ')'
)
parser.add_argument(
'-b', '--baseline', dest='baseline', action='store',
default=None, help='path of a baseline report to compare against '
'(only JSON-formatted files are accepted)'
)
parser.add_argument(
'--ini', dest='ini_path', action='store', default=None,
help='path to a .bandit file that supplies command line arguments'
)
exit_zero_group = parser.add_mutually_exclusive_group(required=False)
exit_zero_group.add_argument(
'--exit-zero', action='store_true', dest='exit_zero', default=False,
help='exit with 0, even with results found'
)
exit_zero_group.add_argument(
'--exit-zero-severity', dest='exit_zero_severity_string',
action='store', default=None, choices=["all", "low", "medium", "high"],
help='control which severity makes bandit to exit with zero '
'status code. Lower severities to the specified one are '
'included implicitly. '
)
python_ver = sys.version.replace('\n', '')
parser.add_argument(
'--version', action='version',
version='%(prog)s {version}\n python version = {python}'.format(
version=bandit.__version__, python=python_ver)
)
parser.set_defaults(debug=False)
parser.set_defaults(verbose=False)
parser.set_defaults(quiet=False)
parser.set_defaults(ignore_nosec=False)
plugin_info = ["%s\t%s" % (a[0], a[1].name) for a in
extension_mgr.plugins_by_id.items()]
blacklist_info = []
for a in extension_mgr.blacklist.items():
for b in a[1]:
blacklist_info.append('%s\t%s' % (b['id'], b['name']))
plugin_list = '\n\t'.join(sorted(set(plugin_info + blacklist_info)))
dedent_text = textwrap.dedent('''
CUSTOM FORMATTING
-----------------
Available tags:
{abspath}, {relpath}, {line}, {col}, {test_id},
{severity}, {msg}, {confidence}, {range}
Example usage:
Default template:
bandit -r examples/ --format custom --msg-template \\
"{abspath}:{line}: {test_id}[bandit]: {severity}: {msg}"
Provides same output as:
bandit -r examples/ --format custom
Tags can also be formatted in python string.format() style:
bandit -r examples/ --format custom --msg-template \\
"{relpath:20.20s}: {line:03}: {test_id:^8}: DEFECT: {msg:>20}"
See python documentation for more information about formatting style:
https://docs.python.org/3/library/string.html
The following tests were discovered and loaded:
-----------------------------------------------
''')
parser.epilog = dedent_text + "\t{0}".format(plugin_list)
# setup work - parse arguments, and initialize BanditManager
args = parser.parse_args()
# Check if `--msg-template` is not present without custom formatter
if args.output_format != 'custom' and args.msg_template is not None:
parser.error("--msg-template can only be used with --format=custom")
# Check if confidence or severity level have been specified with strings
if args.severity_string is not None:
if args.severity_string == "all":
args.severity = 1
elif args.severity_string == "low":
args.severity = 2
elif args.severity_string == "medium":
args.severity = 3
elif args.severity_string == "high":
args.severity = 4
# Other strings will be blocked by argparse
if args.confidence_string is not None:
if args.confidence_string == "all":
args.confidence = 1
elif args.confidence_string == "low":
args.confidence = 2
elif args.confidence_string == "medium":
args.confidence = 3
elif args.confidence_string == "high":
args.confidence = 4
# Other strings will be blocked by argparse
if args.exit_zero_severity_string is not None:
if args.exit_zero_severity_string == "all":
args.exit_zero_severity = 1
elif args.exit_zero_severity_string == "low":
args.exit_zero_severity = 2
elif args.exit_zero_severity_string == "medium":
args.exit_zero_severity = 3
elif args.exit_zero_severity_string == "high":
args.exit_zero_severity = 4
# Other strings will be blocked by argparse
try:
b_conf = b_config.BanditConfig(config_file=args.config_file)
except utils.ConfigError as e:
LOG.error(e)
sys.exit(2)
# Handle .bandit files in projects to pass cmdline args from file
ini_options = _get_options_from_ini(args.ini_path, args.targets)
if ini_options:
# prefer command line, then ini file
args.excluded_paths = _log_option_source(
args.excluded_paths,
ini_options.get('exclude'),
'excluded paths')
args.skips = _log_option_source(
args.skips,
ini_options.get('skips'),
'skipped tests')
args.tests = _log_option_source(
args.tests,
ini_options.get('tests'),
'selected tests')
ini_targets = ini_options.get('targets')
if ini_targets:
ini_targets = ini_targets.split(',')
args.targets = _log_option_source(
args.targets,
ini_targets,
'selected targets')
# TODO(tmcpeak): any other useful options to pass from .bandit?
args.recursive = _log_option_source(
args.recursive,
ini_options.get('recursive'),
'recursive scan')
args.agg_type = _log_option_source(
args.agg_type,
ini_options.get('aggregate'),
'aggregate output type')
args.context_lines = _log_option_source(
args.context_lines,
ini_options.get('number'),
'max code lines output for issue')
args.profile = _log_option_source(
args.profile,
ini_options.get('profile'),
'profile')
args.severity = _log_option_source(
args.severity,
ini_options.get('level'),
'severity level')
args.confidence = _log_option_source(
args.confidence,
ini_options.get('confidence'),
'confidence level')
args.output_format = _log_option_source(
args.output_format,
ini_options.get('format'),
'output format')
args.msg_template = _log_option_source(
args.msg_template,
ini_options.get('msg-template'),
'output message template')
args.output_file = _log_option_source(
args.output_file,
ini_options.get('output'),
'output file')
args.verbose = _log_option_source(
args.verbose,
ini_options.get('verbose'),
'output extra information')
args.debug = _log_option_source(
args.debug,
ini_options.get('debug'),
'debug mode')
args.quiet = _log_option_source(
args.quiet,
ini_options.get('quiet'),
'silent mode')
args.ignore_nosec = _log_option_source(
args.ignore_nosec,
ini_options.get('ignore-nosec'),
'do not skip lines with # nosec')
args.baseline = _log_option_source(
args.baseline,
ini_options.get('baseline'),
'path of a baseline report')
if not args.targets:
LOG.error("No targets found in CLI or ini files, exiting.")
sys.exit(2)
# if the log format string was set in the options, reinitialize
if b_conf.get_option('log_format'):
log_format = b_conf.get_option('log_format')
_init_logger(log_level=logging.DEBUG, log_format=log_format)
if args.quiet:
_init_logger(log_level=logging.WARN)
try:
profile = _get_profile(b_conf, args.profile, args.config_file)
_log_info(args, profile)
profile['include'].update(args.tests.split(',') if args.tests else [])
profile['exclude'].update(args.skips.split(',') if args.skips else [])
extension_mgr.validate_profile(profile)
except (utils.ProfileNotFound, ValueError) as e:
LOG.error(e)
sys.exit(2)
b_mgr = b_manager.BanditManager(b_conf, args.agg_type, args.debug,
profile=profile, verbose=args.verbose,
quiet=args.quiet,
ignore_nosec=args.ignore_nosec)
if args.baseline is not None:
try:
with open(args.baseline) as bl:
data = bl.read()
b_mgr.populate_baseline(data)
except IOError:
LOG.warning("Could not open baseline report: %s", args.baseline)
sys.exit(2)
if args.output_format not in baseline_formatters:
LOG.warning('Baseline must be used with one of the following '
'formats: ' + str(baseline_formatters))
sys.exit(2)
if args.output_format != "json":
if args.config_file:
LOG.info("using config: %s", args.config_file)
LOG.info("running on Python %d.%d.%d", sys.version_info.major,
sys.version_info.minor, sys.version_info.micro)
# initiate file discovery step within Bandit Manager
b_mgr.discover_files(args.targets, args.recursive, args.excluded_paths)
if not b_mgr.b_ts.tests:
LOG.error('No tests would be run, please check the profile.')
sys.exit(2)
# initiate execution of tests within Bandit Manager
b_mgr.run_tests()
LOG.debug(b_mgr.b_ma)
LOG.debug(b_mgr.metrics)
# trigger output of results by Bandit Manager
sev_level = constants.RANKING[args.severity - 1]
conf_level = constants.RANKING[args.confidence - 1]
b_mgr.output_results(args.context_lines,
sev_level,
conf_level,
args.output_file,
args.output_format,
args.msg_template)
if args.exit_zero:
sys.exit(0)
if ("exit_zero_severity" in args
and not b_mgr.above_threshold_results(args.exit_zero_severity)):
sys.exit(0)
if b_mgr.results_count(sev_filter=sev_level, conf_filter=conf_level) > 0:
sys.exit(1)
else:
sys.exit(0)
|
37,155 | def schedule_circuit(circuit: QuantumCircuit,
schedule_config: ScheduleConfig,
method: Optional[str] = None) -> Schedule:
"""
Basic scheduling pass from a circuit to a pulse Schedule, using the backend. If no method is
specified, then a basic, as late as possible scheduling pass is performed, i.e. pulses are
scheduled to occur as late as possible.
Supported methods:
* ``as_soon_as_possible``: Schedule pulses greedily, as early as possible on a qubit resource.
(alias: ``asap``)
* ``as_late_as_possible``: Schedule pulses late-- keep qubits in the ground state when possible.
(alias: ``alap``)
Args:
circuit: The quantum circuit to translate.
schedule_config: Backend specific parameters used for building the Schedule.
method: The scheduling pass method to use.
Returns:
Schedule corresponding to the input circuit.
Raises:
QiskitError: If method isn't recognized.
"""
methods = {
'as_soon_as_possible': as_soon_as_possible,
'asap': as_soon_as_possible,
'as_late_as_possible': as_late_as_possible,
'alap': as_late_as_possible
}
if method is None:
method = 'as_late_as_possible'
try:
return methods[method](circuit, schedule_config)
except KeyError:
raise QiskitError("Scheduling method {method} isn't recognized.".format(method=method))
| def schedule_circuit(circuit: QuantumCircuit,
schedule_config: ScheduleConfig,
method: Optional[str] = None) -> Schedule:
"""
Basic scheduling pass from a circuit to a pulse Schedule, using the backend. If no method is
specified, then a basic, as late as possible scheduling pass is performed, i.e. pulses are
scheduled to occur as late as possible.
Supported methods:
* ``'as_soon_as_possible'``: Schedule pulses greedily, as early as possible on a qubit resource.
(alias: ``asap``)
* ``as_late_as_possible``: Schedule pulses late-- keep qubits in the ground state when possible.
(alias: ``alap``)
Args:
circuit: The quantum circuit to translate.
schedule_config: Backend specific parameters used for building the Schedule.
method: The scheduling pass method to use.
Returns:
Schedule corresponding to the input circuit.
Raises:
QiskitError: If method isn't recognized.
"""
methods = {
'as_soon_as_possible': as_soon_as_possible,
'asap': as_soon_as_possible,
'as_late_as_possible': as_late_as_possible,
'alap': as_late_as_possible
}
if method is None:
method = 'as_late_as_possible'
try:
return methods[method](circuit, schedule_config)
except KeyError:
raise QiskitError("Scheduling method {method} isn't recognized.".format(method=method))
|
42,058 | def run(args: argparse.Namespace) -> None:
kurobako_cmd = os.path.join(args.path_to_kurobako, "kurobako")
subprocess.run(f"{kurobako_cmd} --version", shell=True)
if not (os.path.exists(args.data_dir) and os.path.isdir(args.data_dir)):
raise ValueError(f"Data directory {args.data_dir} cannot be found.")
os.makedirs(args.out_dir, exist_ok=True)
study_json_fn = os.path.join(args.out_dir, "studies.json")
subprocess.check_call(f"echo >| {study_json_fn}", shell=True)
solvers_filename = os.path.join(args.out_dir, "solvers.json")
subprocess.check_call(f"echo >| {solvers_filename}", shell=True)
problems_filename = os.path.join(args.out_dir, "problems.json")
subprocess.check_call(f"echo >| {problems_filename}", shell=True)
# Create ZDT problems
cmd = f"{kurobako_cmd} problem-suite zdt | tee -a {problems_filename}"
subprocess.run(cmd, shell=True)
# Create NAS bench problem(C) (for Multi-Objective Settings).
dataset = os.path.join(args.data_dir, "nasbench_full.bin")
cmd = (
f'{kurobako_cmd} problem nasbench "{dataset}"'
f"--encoding C --metrics accuracy params | tee -a {problems_filename}"
)
subprocess.run(cmd, shell=True)
# Create solvers.
sampler_list = args.sampler_list.split()
sampler_kwargs_list = args.sampler_kwargs_list.split()
if len(sampler_list) != len(sampler_kwargs_list):
raise ValueError(
"The number of samplers does not match the given keyword arguments. \n"
f"sampler_list: {sampler_list}, sampler_kwargs_list: {sampler_kwargs_list}."
)
for sampler, sampler_kwargs in zip(sampler_list, sampler_kwargs_list):
name = f"{args.name_prefix}_{sampler}"
python_command = f"mo_runner.py {sampler} {sampler_kwargs}"
cmd = (
f"{kurobako_cmd} solver --name {name} command python {python_command}"
f"| tee -a {solvers_filename}"
)
subprocess.run(cmd, shell=True)
# Create study.
cmd = (
f"{kurobako_cmd} studies --budget 1000 "
f"--solvers $(cat {solvers_filename}) --problems $(cat {problems_filename}) "
f"--repeats {args.n_runs} --seed {args.seed} "
f"> {study_json_fn}"
)
subprocess.run(cmd, shell=True)
result_filename = os.path.join(args.out_dir, "results.json")
cmd = (
f"cat {study_json_fn} | {kurobako_cmd} run --parallelism {args.n_jobs} "
f"> {result_filename}"
)
subprocess.run(cmd, shell=True)
# Report
report_filename = os.path.join(args.out_dir, "report.md")
cmd = f"cat {result_filename} | {kurobako_cmd} report > {report_filename}"
subprocess.run(cmd, shell=True)
# Plot pareto-front.
problem_names = ["NASBench", "ZDT1", "ZDT2", "ZDT3", "ZDT4", "ZDT5", "ZDT6"]
for problem_name in problem_names:
cmd = (
f"cat {result_filename} | grep {problem_name} | "
f"{kurobako_cmd} plot pareto-front -o {args.out_dir}"
)
subprocess.run(cmd, shell=True)
| def run(args: argparse.Namespace) -> None:
kurobako_cmd = os.path.join(args.path_to_kurobako, "kurobako")
subprocess.run(f"{kurobako_cmd} --version", shell=True)
if not (os.path.exists(args.data_dir) and os.path.isdir(args.data_dir)):
raise ValueError(f"Data directory {args.data_dir} cannot be found.")
os.makedirs(args.out_dir, exist_ok=True)
study_json_fn = os.path.join(args.out_dir, "studies.json")
subprocess.check_call(f"echo >| {study_json_fn}", shell=True)
solvers_filename = os.path.join(args.out_dir, "solvers.json")
subprocess.check_call(f"echo >| {solvers_filename}", shell=True)
problems_filename = os.path.join(args.out_dir, "problems.json")
subprocess.check_call(f"echo >| {problems_filename}", shell=True)
# Create ZDT problems
cmd = f"{kurobako_cmd} problem-suite zdt | tee -a {problems_filename}"
subprocess.run(cmd, shell=True)
# Create NAS bench problem(A) (for Multi-Objective Settings).
dataset = os.path.join(args.data_dir, "nasbench_full.bin")
cmd = (
f'{kurobako_cmd} problem nasbench "{dataset}"'
f"--encoding C --metrics accuracy params | tee -a {problems_filename}"
)
subprocess.run(cmd, shell=True)
# Create solvers.
sampler_list = args.sampler_list.split()
sampler_kwargs_list = args.sampler_kwargs_list.split()
if len(sampler_list) != len(sampler_kwargs_list):
raise ValueError(
"The number of samplers does not match the given keyword arguments. \n"
f"sampler_list: {sampler_list}, sampler_kwargs_list: {sampler_kwargs_list}."
)
for sampler, sampler_kwargs in zip(sampler_list, sampler_kwargs_list):
name = f"{args.name_prefix}_{sampler}"
python_command = f"mo_runner.py {sampler} {sampler_kwargs}"
cmd = (
f"{kurobako_cmd} solver --name {name} command python {python_command}"
f"| tee -a {solvers_filename}"
)
subprocess.run(cmd, shell=True)
# Create study.
cmd = (
f"{kurobako_cmd} studies --budget 1000 "
f"--solvers $(cat {solvers_filename}) --problems $(cat {problems_filename}) "
f"--repeats {args.n_runs} --seed {args.seed} "
f"> {study_json_fn}"
)
subprocess.run(cmd, shell=True)
result_filename = os.path.join(args.out_dir, "results.json")
cmd = (
f"cat {study_json_fn} | {kurobako_cmd} run --parallelism {args.n_jobs} "
f"> {result_filename}"
)
subprocess.run(cmd, shell=True)
# Report
report_filename = os.path.join(args.out_dir, "report.md")
cmd = f"cat {result_filename} | {kurobako_cmd} report > {report_filename}"
subprocess.run(cmd, shell=True)
# Plot pareto-front.
problem_names = ["NASBench", "ZDT1", "ZDT2", "ZDT3", "ZDT4", "ZDT5", "ZDT6"]
for problem_name in problem_names:
cmd = (
f"cat {result_filename} | grep {problem_name} | "
f"{kurobako_cmd} plot pareto-front -o {args.out_dir}"
)
subprocess.run(cmd, shell=True)
|
38,990 | def make_named_tuple_validator(type_: Type[NamedTupleT]) -> Callable[[Tuple[Any, ...]], NamedTupleT]:
from .main import create_model
# A named tuple can be created with `typing,NamedTuple` with types
# but also with `collections.namedtuple` with just the fields
# in which case we consider the type to be `Any`
named_tuple_annotations: Dict[str, Type[Any]] = getattr(type_, '__annotations__', {k: Any for k in type_._fields})
field_definitions: Dict[str, Any] = {
field_name: (field_type, ...) for field_name, field_type in named_tuple_annotations.items()
}
NamedTupleModel: Type['BaseModel'] = create_model('NamedTupleModel', **field_definitions)
def named_tuple_validator(values: Tuple[Any, ...]) -> NamedTupleT:
dict_values: Dict[str, Any] = dict(zip(named_tuple_annotations, values))
validated_dict_values: Dict[str, Any] = dict(NamedTupleModel(**dict_values))
return type_(**validated_dict_values)
return named_tuple_validator
| def make_named_tuple_validator(type_: Type[NamedTupleT]) -> Callable[[Tuple[Any, ...]], NamedTupleT]:
from .main import create_model
# A named tuple can be created with `typing.NamedTuple` with types
# but also with `collections.namedtuple` with just the fields
# in which case we consider the type to be `Any`
named_tuple_annotations: Dict[str, Type[Any]] = getattr(type_, '__annotations__', {k: Any for k in type_._fields})
field_definitions: Dict[str, Any] = {
field_name: (field_type, ...) for field_name, field_type in named_tuple_annotations.items()
}
NamedTupleModel: Type['BaseModel'] = create_model('NamedTupleModel', **field_definitions)
def named_tuple_validator(values: Tuple[Any, ...]) -> NamedTupleT:
dict_values: Dict[str, Any] = dict(zip(named_tuple_annotations, values))
validated_dict_values: Dict[str, Any] = dict(NamedTupleModel(**dict_values))
return type_(**validated_dict_values)
return named_tuple_validator
|
31,025 | def panorama_zone_lookup_command():
"""
Gets the outgoing interface from the Palo Alto Firewall route table, and the list of interfaces
comparing the two
"""
dest_ip = demisto.args().get("dest_ip")
vr = demisto.args().get("virtual_router", None)
route = panorama_route_lookup(dest_ip, vr)
if not route:
demisto.results(f"Could find a matching route to {dest_ip}.")
return
interface = route["interface"]
interfaces = panorama_get_interfaces()
r = {}
if "ifnet" in interfaces["response"]["result"]:
for entry in interfaces["response"]["result"]["ifnet"]["entry"]:
if entry["name"] == interface:
if "zone" in entry:
r = {**entry, **route}
if r:
demisto.results({
'Type': entryTypes['note'],
'ContentsFormat': formats['json'],
'Contents': r,
'ReadableContentsFormat': formats['markdown'],
'HumanReadable': f'The IP {dest_ip} is in zone {r["zone"]}',
'EntryContext': {"Panorama.ZoneLookup(val.Name == obj.Name)": r} # add key -> deleted: true
})
return r
else:
demisto.results(f"Could not map {dest_ip} to zone.")
return {}
| def panorama_zone_lookup_command():
"""
Gets the outgoing interface from the Palo Alto Firewall route table, and the list of interfaces
comparing the two
"""
dest_ip = demisto.args().get("dest_ip")
vr = demisto.args().get("virtual_router", None)
route = panorama_route_lookup(dest_ip, vr)
if not route:
demisto.results(f"Could find a matching route to {dest_ip}.")
return
interface = route["interface"]
interfaces = panorama_get_interfaces()
r = {}
if "ifnet" in interfaces["response"].get("result"):
for entry in interfaces["response"]["result"]["ifnet"]["entry"]:
if entry["name"] == interface:
if "zone" in entry:
r = {**entry, **route}
if r:
demisto.results({
'Type': entryTypes['note'],
'ContentsFormat': formats['json'],
'Contents': r,
'ReadableContentsFormat': formats['markdown'],
'HumanReadable': f'The IP {dest_ip} is in zone {r["zone"]}',
'EntryContext': {"Panorama.ZoneLookup(val.Name == obj.Name)": r} # add key -> deleted: true
})
return r
else:
demisto.results(f"Could not map {dest_ip} to zone.")
return {}
|
10,895 | def test_union_of_float_and_double_keeps_precision():
"""https://github.com/fastavro/fastavro/issues/437"""
schema = ["float", "double"]
records = [
1.0,
1e200, # Turns into float("+inf") if parsed as 32 bit float
]
parsed_schema = fastavro.parse_schema(schema)
assert records == roundtrip(parsed_schema, records)
| def test_union_of_float_and_double_keeps_precision():
"""https://github.com/fastavro/fastavro/issues/437"""
schema = ["float", "string", "double"]
records = [
1.0,
1e200, # Turns into float("+inf") if parsed as 32 bit float
]
parsed_schema = fastavro.parse_schema(schema)
assert records == roundtrip(parsed_schema, records)
|
10,307 | def has_list_changed(new_list, old_list, sort_lists=True):
"""
Check two lists have differences. Sort lists by default.
"""
if new_list is None:
return False
old_list = old_list or []
if len(new_list) != len(old_list):
return True
if sort_lists:
zip_data = zip(sort(new_list), sort(old_list))
else:
zip_data = zip(new_list, old_list)
for new_item, old_item in zip_data:
is_same_type = type(new_item) == type(old_item)
if not is_same_type:
return True
if isinstance(new_item, dict):
if has_dict_changed(new_item, old_item):
return True
elif new_item != old_item:
return True
return False
| def has_list_changed(new_list, old_list, sort_lists=True):
"""
Check two lists have differences. Sort lists by default.
"""
if new_list is None:
return False
old_list = old_list or []
if len(new_list) != len(old_list):
return True
if sort_lists:
zip_data = zip(sorted(new_list), sorted(old_list))
else:
zip_data = zip(new_list, old_list)
for new_item, old_item in zip_data:
is_same_type = type(new_item) == type(old_item)
if not is_same_type:
return True
if isinstance(new_item, dict):
if has_dict_changed(new_item, old_item):
return True
elif new_item != old_item:
return True
return False
|
32,078 | def parse_host(host):
"""
Parse the host and return if domain or ip (using CommonServerPython is_ip_valid).
Args:
host(str)- host to parse.
Returns:
ip/domain(str).
"""
return 'ip' if is_ip_valid(host) else 'domain'
| def determine_host_ioc_type(host):
"""
Parse the host and return if domain or ip (using CommonServerPython is_ip_valid).
Args:
host(str)- host to parse.
Returns:
ip/domain(str).
"""
return 'ip' if is_ip_valid(host) else 'domain'
|
888 | def Rician(name, alpha, beta):
r"""Creates a continuous random variable with a Rician distribution.
Parameters
==========
alpha : Real number, :math:`0 < alpha`
beta : Real number, :math:`0 < beta`
Returns
=======
RandomSymbol
Examples
========
>>> from sympy.stats import Rician, density, cdf
>>> from sympy import symbols
>>> alpha, beta, x = symbols('alpha, beta, x', positive=True)
>>> R = Rician('R', alpha, beta)
>>> density(R)(x)
x*exp((-alpha**2 - x**2)/(2*beta**2))*besseli(0, alpha*x/beta**2)
>>> cdf(R)(x)
marcumq(1, alpha/beta, x/beta)
Reference
=========
.. [1] https://en.wikipedia.org/wiki/Rice_distribution
.. [1] https://reference.wolfram.com/language/ref/RiceDistribution.html
"""
return rv(name, RicianDistribution, (alpha, beta))
| def Rician(name, alpha, beta):
r"""Creates a continuous random variable with a Rician distribution.
Parameters
==========
alpha : Real number, :math:`0 < alpha`
beta : Real number, :math:`0 < beta`
Returns
=======
RandomSymbol
Examples
========
>>> from sympy.stats import Rician, density, cdf
>>> from sympy import symbols
>>> alpha, beta, x = symbols('alpha, beta, x', positive=True)
>>> R = Rician('R', alpha, beta)
>>> density(R)(x)
x*exp((-alpha**2 - x**2)/(2*beta**2))*besseli(0, alpha*x/beta**2)
>>> cdf(R)(x)
marcumq(1, alpha/beta, x/beta)
References
=========
.. [1] https://en.wikipedia.org/wiki/Rice_distribution
.. [1] https://reference.wolfram.com/language/ref/RiceDistribution.html
"""
return rv(name, RicianDistribution, (alpha, beta))
|
31,267 | def splunk_results_command(service):
res = []
sid = demisto.args().get('sid', '')
count = int(demisto.args().get('count'))
try:
job = service.job(sid)
except HTTPError as error:
if error.message == 'HTTP 404 Not Found -- Unknown sid.':
demisto.results("Found no job for sid: {}".format(sid))
else:
return_error(error.message, error)
else:
for result in results.ResultsReader(job.results(count=count)):
if isinstance(result, results.Message):
demisto.results({"Type": 1, "ContentsFormat": "json", "Contents": json.dumps(result.message)})
elif isinstance(result, dict):
# Normal events are returned as dicts
res.append(result)
demisto.results({"Type": 1, "ContentsFormat": "json", "Contents": json.dumps(res)})
| def splunk_results_command(service):
res = []
sid = demisto.args().get('sid', '')
count = int(demisto.args().get('count', 100))
try:
job = service.job(sid)
except HTTPError as error:
if error.message == 'HTTP 404 Not Found -- Unknown sid.':
demisto.results("Found no job for sid: {}".format(sid))
else:
return_error(error.message, error)
else:
for result in results.ResultsReader(job.results(count=count)):
if isinstance(result, results.Message):
demisto.results({"Type": 1, "ContentsFormat": "json", "Contents": json.dumps(result.message)})
elif isinstance(result, dict):
# Normal events are returned as dicts
res.append(result)
demisto.results({"Type": 1, "ContentsFormat": "json", "Contents": json.dumps(res)})
|
49,626 | def from_dict(data, npartitions=None, orient='columns', dtype=None, columns=None):
"""
Construct a Dask DataFrame from a Python Dictionary
Parameters
----------
data : dict
Of the form {field : array-like} or {field : dict}.
npartitions : int, optional
The number of partitions of the index to create. Note that depending on
the size and index of the dataframe, the output may have fewer
partitions than requested.
orient : {'columns', 'index', 'tight'}, default 'columns'
The "orientation" of the data. If the keys of the passed dict should be the columns of the resulting DataFrame, pass 'columns' (default). Otherwise if the keys should be rows, pass 'index'. If 'tight', assume a dict with keys ['index', 'columns', 'data', 'index_names', 'column_names'].
dtype: bool
Data type to force, otherwise infer.
columns: string, optional
Column labels to use when ``orient='index'``. Raises a ValueError if used with ``orient='columns'`` or ``orient='tight'``.
Examples
--------
>>> import dask.dataframe as dd
>>> ddf = dd.from_dict({"num1": [1, 2, 3, 4], "num2": [7, 8, 9, 10]}, npartitions=2)
"""
pdf = pd.DataFrame.from_dict(data, orient, dtype, columns)
ddf = from_pandas(pdf, npartitions)
return ddf
| def from_dict(data, npartitions=None, orient='columns', dtype=None, columns=None):
"""
Construct a Dask DataFrame from a Python Dictionary
Parameters
----------
data : dict
Of the form {field : array-like} or {field : dict}.
npartitions : int, optional
The number of partitions of the index to create. Note that depending on
the size and index of the dataframe, the output may have fewer
partitions than requested.
orient : {'columns', 'index', 'tight'}, default 'columns'
The "orientation" of the data. If the keys of the passed dict
should be the columns of the resulting DataFrame, pass 'columns'
(default). Otherwise if the keys should be rows, pass 'index'.
If 'tight', assume a dict with keys
['index', 'columns', 'data', 'index_names', 'column_names'].
dtype: bool
Data type to force, otherwise infer.
columns: string, optional
Column labels to use when ``orient='index'``. Raises a ValueError if used with ``orient='columns'`` or ``orient='tight'``.
Examples
--------
>>> import dask.dataframe as dd
>>> ddf = dd.from_dict({"num1": [1, 2, 3, 4], "num2": [7, 8, 9, 10]}, npartitions=2)
"""
pdf = pd.DataFrame.from_dict(data, orient, dtype, columns)
ddf = from_pandas(pdf, npartitions)
return ddf
|
10,429 | def main():
argument_spec = dict(
state=dict(default='present', choices=['present', 'absent', 'enabled', 'disabled']),
name=dict(default='default'),
enable_logging=dict(default=True, type='bool'),
s3_bucket_name=dict(),
s3_key_prefix=dict(),
sns_topic_name=dict(),
is_multi_region_trail=dict(default=False, type='bool'),
enable_log_file_validation=dict(type='bool', aliases=['log_file_validation_enabled']),
include_global_events=dict(default=True, type='bool', aliases=['include_global_service_events']),
cloudwatch_logs_role_arn=dict(),
cloudwatch_logs_log_group_arn=dict(),
kms_key_id=dict(),
tags=dict(default={}, type='dict'),
)
required_if = [('state', 'present', ['s3_bucket_name']), ('state', 'enabled', ['s3_bucket_name'])]
required_together = [('cloudwatch_logs_role_arn', 'cloudwatch_logs_log_group_arn')]
module = AnsibleAWSModule(argument_spec=argument_spec, supports_check_mode=True, required_together=required_together, required_if=required_if)
# collect parameters
if module.params['state'] in ('present', 'enabled'):
state = 'present'
elif module.params['state'] in ('absent', 'disabled'):
state = 'absent'
tags = module.params['tags']
enable_logging = module.params['enable_logging']
ct_params = dict(
Name=module.params['name'],
S3BucketName=module.params['s3_bucket_name'],
IncludeGlobalServiceEvents=module.params['include_global_events'],
IsMultiRegionTrail=module.params['is_multi_region_trail'],
)
if module.params['s3_key_prefix']:
ct_params['S3KeyPrefix'] = module.params['s3_key_prefix'].rstrip('/')
if module.params['sns_topic_name']:
ct_params['SnsTopicName'] = module.params['sns_topic_name']
if module.params['cloudwatch_logs_role_arn']:
ct_params['CloudWatchLogsRoleArn'] = module.params['cloudwatch_logs_role_arn']
if module.params['cloudwatch_logs_log_group_arn']:
ct_params['CloudWatchLogsLogGroupArn'] = module.params['cloudwatch_logs_log_group_arn']
if module.params['enable_log_file_validation'] is not None:
ct_params['EnableLogFileValidation'] = module.params['enable_log_file_validation']
if module.params['kms_key_id']:
ct_params['KmsKeyId'] = module.params['kms_key_id']
client = module.client('cloudtrail')
region = get_aws_connection_info(module, boto3=True)[0]
results = dict(
changed=False,
exists=False
)
# Get existing trail facts
trail = get_trail_facts(module, client, ct_params['Name'])
# If the trail exists set the result exists variable
if trail is not None:
results['exists'] = True
if state == 'absent' and results['exists']:
# If Trail exists go ahead and delete
results['changed'] = True
results['exists'] = False
results['trail'] = dict()
if not module.check_mode:
delete_trail(module, client, trail['TrailARN'])
elif state == 'present' and results['exists']:
# If Trail exists see if we need to update it
do_update = False
for key in ct_params:
tkey = str(key)
# boto3 has inconsistent parameter naming so we handle it here
if key == 'EnableLogFileValidation':
tkey = 'LogFileValidationEnabled'
# We need to make an empty string equal None
if ct_params.get(key) == '':
val = None
else:
val = ct_params.get(key)
if val != trail.get(tkey):
do_update = True
results['changed'] = True
# If we are in check mode copy the changed values to the trail facts in result output to show what would change.
if module.check_mode:
trail.update({tkey: ct_params.get(key)})
if not module.check_mode and do_update:
update_trail(module, client, ct_params)
trail = get_trail_facts(module, client, ct_params['Name'])
# Check if we need to start/stop logging
if enable_logging and not trail['IsLogging']:
results['changed'] = True
trail['IsLogging'] = True
if not module.check_mode:
set_logging(module, client, name=ct_params['Name'], action='start')
if not enable_logging and trail['IsLogging']:
results['changed'] = True
trail['IsLogging'] = False
if not module.check_mode:
set_logging(module, client, name=ct_params['Name'], action='stop')
# Check if we need to update tags on resource
tag_dry_run = False
if module.check_mode:
tag_dry_run = True
tags_changed = tag_trail(module, client, tags=tags, trail_arn=trail['TrailARN'], curr_tags=trail['tags'], dry_run=tag_dry_run)
if tags_changed:
results['changed'] = True
trail['tags'] = tags
# Populate trail facts in output
results['trail'] = camel_dict_to_snake_dict(trail)
elif state == 'present' and not results['exists']:
# Trail doesn't exist just go create it
results['changed'] = True
if not module.check_mode:
# If we aren't in check_mode then actually create it
created_trail = create_trail(module, client, ct_params)
# Apply tags
tag_trail(module, client, tags=tags, trail_arn=created_trail['TrailARN'])
# Get the trail status
try:
status_resp = client.get_trail_status(Name=created_trail['Name'])
except (BotoCoreError, ClientError) as err:
module.fail_json_aws(err, msg="Failed to fetch Trail statuc")
# Set the logging state for the trail to desired value
if enable_logging and not status_resp['IsLogging']:
set_logging(module, client, name=ct_params['Name'], action='start')
if not enable_logging and status_resp['IsLogging']:
set_logging(module, client, name=ct_params['Name'], action='stop')
# Get facts for newly created Trail
trail = get_trail_facts(module, client, ct_params['Name'])
# If we are in check mode create a fake return structure for the newly minted trail
if module.check_mode:
acct_id = '123456789012'
try:
sts_client = module.client('sts')
acct_id = sts_client.get_caller_identity()['Account']
except (BotoCoreError, ClientError):
pass
trail = dict()
trail.update(ct_params)
if 'EnableLogFileValidation' not in ct_params:
ct_params['EnableLogFileValidation'] = False
trail['EnableLogFileValidation'] = ct_params['EnableLogFileValidation']
trail.pop('EnableLogFileValidation')
fake_arn = 'arn:aws:cloudtrail:' + region + ':' + acct_id + ':trail/' + ct_params['Name']
trail['HasCustomEventSelectors'] = False
trail['HomeRegion'] = region
trail['TrailARN'] = fake_arn
trail['IsLogging'] = enable_logging
trail['tags'] = tags
# Populate trail facts in output
results['trail'] = camel_dict_to_snake_dict(trail)
module.exit_json(**results)
| def main():
argument_spec = dict(
state=dict(default='present', choices=['present', 'absent', 'enabled', 'disabled']),
name=dict(default='default'),
enable_logging=dict(default=True, type='bool'),
s3_bucket_name=dict(),
s3_key_prefix=dict(),
sns_topic_name=dict(),
is_multi_region_trail=dict(default=False, type='bool'),
enable_log_file_validation=dict(type='bool', aliases=['log_file_validation_enabled']),
include_global_events=dict(default=True, type='bool', aliases=['include_global_service_events']),
cloudwatch_logs_role_arn=dict(),
cloudwatch_logs_log_group_arn=dict(),
kms_key_id=dict(),
tags=dict(default={}, type='dict'),
)
required_if = [('state', 'present', ['s3_bucket_name']), ('state', 'enabled', ['s3_bucket_name'])]
required_together = [('cloudwatch_logs_role_arn', 'cloudwatch_logs_log_group_arn')]
module = AnsibleAWSModule(argument_spec=argument_spec, supports_check_mode=True, required_together=required_together, required_if=required_if)
# collect parameters
if module.params['state'] in ('present', 'enabled'):
state = 'present'
elif module.params['state'] in ('absent', 'disabled'):
state = 'absent'
tags = module.params['tags']
enable_logging = module.params['enable_logging']
ct_params = dict(
Name=module.params['name'],
S3BucketName=module.params['s3_bucket_name'],
IncludeGlobalServiceEvents=module.params['include_global_events'],
IsMultiRegionTrail=module.params['is_multi_region_trail'],
)
if module.params['s3_key_prefix']:
ct_params['S3KeyPrefix'] = module.params['s3_key_prefix'].rstrip('/')
if module.params['sns_topic_name']:
ct_params['SnsTopicName'] = module.params['sns_topic_name']
if module.params['cloudwatch_logs_role_arn']:
ct_params['CloudWatchLogsRoleArn'] = module.params['cloudwatch_logs_role_arn']
if module.params['cloudwatch_logs_log_group_arn']:
ct_params['CloudWatchLogsLogGroupArn'] = module.params['cloudwatch_logs_log_group_arn']
if module.params['enable_log_file_validation'] is not None:
ct_params['EnableLogFileValidation'] = module.params['enable_log_file_validation']
if module.params['kms_key_id']:
ct_params['KmsKeyId'] = module.params['kms_key_id']
client = module.client('cloudtrail')
region = module.params.get('region')
results = dict(
changed=False,
exists=False
)
# Get existing trail facts
trail = get_trail_facts(module, client, ct_params['Name'])
# If the trail exists set the result exists variable
if trail is not None:
results['exists'] = True
if state == 'absent' and results['exists']:
# If Trail exists go ahead and delete
results['changed'] = True
results['exists'] = False
results['trail'] = dict()
if not module.check_mode:
delete_trail(module, client, trail['TrailARN'])
elif state == 'present' and results['exists']:
# If Trail exists see if we need to update it
do_update = False
for key in ct_params:
tkey = str(key)
# boto3 has inconsistent parameter naming so we handle it here
if key == 'EnableLogFileValidation':
tkey = 'LogFileValidationEnabled'
# We need to make an empty string equal None
if ct_params.get(key) == '':
val = None
else:
val = ct_params.get(key)
if val != trail.get(tkey):
do_update = True
results['changed'] = True
# If we are in check mode copy the changed values to the trail facts in result output to show what would change.
if module.check_mode:
trail.update({tkey: ct_params.get(key)})
if not module.check_mode and do_update:
update_trail(module, client, ct_params)
trail = get_trail_facts(module, client, ct_params['Name'])
# Check if we need to start/stop logging
if enable_logging and not trail['IsLogging']:
results['changed'] = True
trail['IsLogging'] = True
if not module.check_mode:
set_logging(module, client, name=ct_params['Name'], action='start')
if not enable_logging and trail['IsLogging']:
results['changed'] = True
trail['IsLogging'] = False
if not module.check_mode:
set_logging(module, client, name=ct_params['Name'], action='stop')
# Check if we need to update tags on resource
tag_dry_run = False
if module.check_mode:
tag_dry_run = True
tags_changed = tag_trail(module, client, tags=tags, trail_arn=trail['TrailARN'], curr_tags=trail['tags'], dry_run=tag_dry_run)
if tags_changed:
results['changed'] = True
trail['tags'] = tags
# Populate trail facts in output
results['trail'] = camel_dict_to_snake_dict(trail)
elif state == 'present' and not results['exists']:
# Trail doesn't exist just go create it
results['changed'] = True
if not module.check_mode:
# If we aren't in check_mode then actually create it
created_trail = create_trail(module, client, ct_params)
# Apply tags
tag_trail(module, client, tags=tags, trail_arn=created_trail['TrailARN'])
# Get the trail status
try:
status_resp = client.get_trail_status(Name=created_trail['Name'])
except (BotoCoreError, ClientError) as err:
module.fail_json_aws(err, msg="Failed to fetch Trail statuc")
# Set the logging state for the trail to desired value
if enable_logging and not status_resp['IsLogging']:
set_logging(module, client, name=ct_params['Name'], action='start')
if not enable_logging and status_resp['IsLogging']:
set_logging(module, client, name=ct_params['Name'], action='stop')
# Get facts for newly created Trail
trail = get_trail_facts(module, client, ct_params['Name'])
# If we are in check mode create a fake return structure for the newly minted trail
if module.check_mode:
acct_id = '123456789012'
try:
sts_client = module.client('sts')
acct_id = sts_client.get_caller_identity()['Account']
except (BotoCoreError, ClientError):
pass
trail = dict()
trail.update(ct_params)
if 'EnableLogFileValidation' not in ct_params:
ct_params['EnableLogFileValidation'] = False
trail['EnableLogFileValidation'] = ct_params['EnableLogFileValidation']
trail.pop('EnableLogFileValidation')
fake_arn = 'arn:aws:cloudtrail:' + region + ':' + acct_id + ':trail/' + ct_params['Name']
trail['HasCustomEventSelectors'] = False
trail['HomeRegion'] = region
trail['TrailARN'] = fake_arn
trail['IsLogging'] = enable_logging
trail['tags'] = tags
# Populate trail facts in output
results['trail'] = camel_dict_to_snake_dict(trail)
module.exit_json(**results)
|
7,183 | def circle_perimeter_aa(r, c, radius, shape=None):
"""Generate anti-aliased circle perimeter coordinates.
Parameters
----------
r, c : int
Centre coordinate of circle.
radius: int
Radius of circle.
shape : tuple, optional
Image shape which is used to determine the maximum extent of output
pixel coordinates. This is useful for circles that exceed the image
size. If None, the full extent of the circle is used. Must be at least
length 2. Only the first two values are used to determine the extent of
the input image.
Returns
-------
rr, cc, val : (N,) ndarray (int, int, float)
Indices of pixels (`rr`, `cc`) and intensity values (`val`).
``img[rr, cc] = val``.
Notes
-----
Wu's method draws anti-aliased circle. This implementation doesn't use
lookup table optimization.
To apply circle_perimeter_aa to color images use function draw.set_color.
References
----------
.. [1] X. Wu, "An efficient antialiasing technique", In ACM SIGGRAPH
Computer Graphics, 25 (1991) 143-152.
Examples
--------
>>> from skimage.draw import circle_perimeter_aa
>>> img = np.zeros((10, 10), dtype=np.uint8)
>>> rr, cc, val = circle_perimeter_aa(4, 4, 3)
>>> img[rr, cc] = val * 255
>>> img
array([[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 60, 211, 255, 211, 60, 0, 0, 0],
[ 0, 60, 194, 43, 0, 43, 194, 60, 0, 0],
[ 0, 211, 43, 0, 0, 0, 43, 211, 0, 0],
[ 0, 255, 0, 0, 0, 0, 0, 255, 0, 0],
[ 0, 211, 43, 0, 0, 0, 43, 211, 0, 0],
[ 0, 60, 194, 43, 0, 43, 194, 60, 0, 0],
[ 0, 0, 60, 211, 255, 211, 60, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=uint8)
>>> from skimage import data, draw
>>> image = data.chelsea()
>>> rr, cc, val = draw.circle_perimeter_aa(r=100, c=100, radius=75)
>>> draw.set_color(image, (rr, cc), [1, 0, 0], alpha=val)
"""
return _circle_perimeter_aa(r, c, radius, shape)
| def circle_perimeter_aa(r, c, radius, shape=None):
"""Generate anti-aliased circle perimeter coordinates.
Parameters
----------
r, c : int
Centre coordinate of circle.
radius: int
Radius of circle.
shape : tuple, optional
Image shape which is used to determine the maximum extent of output
pixel coordinates. This is useful for circles that exceed the image
size. If None, the full extent of the circle is used. Must be at least
length 2. Only the first two values are used to determine the extent of
the input image.
Returns
-------
rr, cc, val : (N,) ndarray (int, int, float)
Indices of pixels (`rr`, `cc`) and intensity values (`val`).
``img[rr, cc] = val``.
Notes
-----
Wu's method draws anti-aliased circle. This implementation doesn't use
lookup table optimization.
To apply ``circle_perimeter_aa`` to color images use function ``draw.set_color``.
References
----------
.. [1] X. Wu, "An efficient antialiasing technique", In ACM SIGGRAPH
Computer Graphics, 25 (1991) 143-152.
Examples
--------
>>> from skimage.draw import circle_perimeter_aa
>>> img = np.zeros((10, 10), dtype=np.uint8)
>>> rr, cc, val = circle_perimeter_aa(4, 4, 3)
>>> img[rr, cc] = val * 255
>>> img
array([[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 60, 211, 255, 211, 60, 0, 0, 0],
[ 0, 60, 194, 43, 0, 43, 194, 60, 0, 0],
[ 0, 211, 43, 0, 0, 0, 43, 211, 0, 0],
[ 0, 255, 0, 0, 0, 0, 0, 255, 0, 0],
[ 0, 211, 43, 0, 0, 0, 43, 211, 0, 0],
[ 0, 60, 194, 43, 0, 43, 194, 60, 0, 0],
[ 0, 0, 60, 211, 255, 211, 60, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=uint8)
>>> from skimage import data, draw
>>> image = data.chelsea()
>>> rr, cc, val = draw.circle_perimeter_aa(r=100, c=100, radius=75)
>>> draw.set_color(image, (rr, cc), [1, 0, 0], alpha=val)
"""
return _circle_perimeter_aa(r, c, radius, shape)
|
58,111 | def main():
# get command and args
command = demisto.command()
args = getArgs()
# initialize common args
api_key = demisto.params().get('api_key')
account_uuid = demisto.params().get('account_uuid')
global HEADERS
HEADERS = {
'Authorization': 'IBToken ' + api_key,
'User-Agent': 'Cortex_Insight.v3',
'Content-Type': 'application/json',
}
# attempt command execution
try:
if command == 'test-module':
response = sendRequest('GET', 'Sensors', 'sensors')
demisto.results('ok')
if command == 'fetch-incidents':
# default first fetch to -7days
first_fetch_time = datetime.now() - timedelta(days=7)
max_results = arg_to_number(
arg=demisto.params().get('max_fetch'),
arg_name='max_fetch',
required=False
)
next_run, incidents = fetchIncidents(
account_uuid=account_uuid,
max_results=max_results,
last_run=demisto.getLastRun(),
first_fetch_time=first_fetch_time
)
demisto.setLastRun(next_run)
demisto.incidents(incidents)
elif command == 'insight-get-events':
if args['response_type'] == "metadata":
response_type = "metadata"
elif args['response_type'] == "aggregations":
pattern = r"^.*[Gg][Rr][Oo][Uu][Pp]\s+[Bb][Yy].*$"
if not re.search(pattern, args['query']):
demisto.results("Error: No 'group by' statement in query. Aggregation requires a 'group by' statement.")
else:
response_type = "aggregations"
else:
response_type = "events"
args.pop('response_type')
response = sendRequest('POST', 'Events', None, args)
response = formatEvents(response, response_type)
if response_type in ("metadata", "aggregations"):
responseToEntry(response, 'Events', 'Data')
else:
responseToEntry(response, 'Events', 'Events')
elif command == 'insight-get-history':
response = sendRequest('GET', 'Events', 'history')
responseToEntry(response, 'UserQueryHistory', 'History')
elif command == 'insight-get-saved-searches':
response = sendRequest('GET', 'Events', 'saved')
responseToEntry(response, 'SavedSearches', 'Saved Queries')
elif command == 'insight-get-sensors':
response = sendRequest('GET', 'Sensors', 'sensors')
responseToEntry(response, 'Sensors', 'Sensors')
elif command == 'insight-get-devices':
response = sendRequest('GET', 'Sensors', 'devices')
responseToEntry(response, 'Devices', 'Device List')
elif command == 'insight-get-tasks':
if 'task_uuid' in args:
endpoint = 'pcaptasks/' + args['task_uuid']
response = sendRequest('GET', 'Sensors', endpoint)
responseToEntry(response, 'Tasks', 'PCAP Task')
else:
response = sendRequest('GET', 'Sensors', 'pcaptasks')
responseToEntry(response, 'Tasks', 'PCAPTasks')
elif command == 'insight-create-task':
sensor_ids = [args['sensor_ids']]
args.pop('sensor_ids')
args['sensor_ids'] = sensor_ids
response = sendRequest('POST', 'Sensors', 'pcaptasks', args)
demisto.results("Task created successfully")
elif command == 'insight-get-detections':
response = sendRequest('GET', 'Detections', 'detections', None, encodeArgsToURL(args))
if response['total_count'] > MAX_DETECTIONS:
if 'limit' not in args or int(args['limit']) > MAX_DETECTIONS:
# pull the remaining detections incrementally
response = getDetectionsInc(response, args)
# filter out training detections
detections = []
for detection in response['detections']:
if detection['account_uuid'] != TRAINING_ACC:
detections.append(detection)
response['detections'] = detections
if 'include' in args:
if args['include'] == 'rules':
response = addDetectionRules(response)
responseToEntry(response, 'Detections', 'Detections')
elif command == 'insight-get-detection-rules':
response = sendRequest('GET', 'Detections', 'rules', None, encodeArgsToURL(args))
responseToEntry(response, 'Rules', 'Rules')
elif command == 'insight-get-detection-rule-events':
rule_uuid = args['rule_uuid']
endpoint = "rules/" + rule_uuid + "/events"
args.pop('rule_uuid')
response = sendRequest('GET', 'Detections', endpoint, None, encodeArgsToURL(args))
responseToEntry(response, 'Detections', 'Events')
elif command == 'insight-resolve-detection':
endpoint = "detections/" + args['detection_uuid'] + "/resolve"
body = {"resolution": args['resolution'], "resolution_comment": args['resolution_comment']}
sendRequest('PUT', 'Detections', endpoint, body, None)
demisto.results("Detection resolved successfully")
elif command == 'insight-create-detection-rule':
run_accts = [args['run_account_uuids']]
dev_ip_fields = [args['device_ip_fields']]
args.pop('run_account_uuids')
args.pop('device_ip_fields')
args['run_account_uuids'] = run_accts
args['device_ip_fields'] = dev_ip_fields
sendRequest('POST', 'Detections', 'rules', args, None)
demisto.results("Rule created successfully")
elif command == 'insight-get-entity-summary':
endpoint = args['entity'] + "/summary"
response = sendRequest('GET', 'Entity', endpoint, None, None)
responseToEntry(response, 'Entity.Summary', 'Summary')
elif command == 'insight-get-entity-pdns':
endpoint = args['entity'] + "/pdns"
response = sendRequest('GET', 'Entity', endpoint, None, None)
responseToEntry(response, 'Entity.PDNS', 'PassiveDNS')
elif command == 'insight-get-entity-dhcp':
endpoint = args['entity'] + "/dhcp"
response = sendRequest('GET', 'Entity', endpoint, None, None)
responseToEntry(response, 'Entity.DHCP', 'DHCP')
elif command == 'insight-get-entity-file':
endpoint = args['hash'] + "/file"
response = sendRequest('GET', 'Entity', endpoint, None, None)
responseToEntry(response, 'Entity.File', 'File')
elif command == 'insight-get-telemetry-events':
response = sendRequest('GET', 'Sensors', 'telemetry/events', None, encodeArgsToURL(args))
responseToEntry(response, 'Telemetry.Events', 'Data')
elif command == 'insight-get-telemetry-network':
response = sendRequest('GET', 'Sensors', 'telemetry/network', None, encodeArgsToURL(args))
responseToEntry(response, 'Telemetry.Network', 'Data')
elif command == 'insight-get-telemetry-packetstats':
response = sendRequest('GET', 'Sensors', 'telemetry/packetstats', None, encodeArgsToURL(args))
responseToEntry(response, 'Telemetry.Packetstats', 'Data')
# catch exceptions
except Exception as e:
return_error(str(e))
| def main():
# get command and args
command = demisto.command()
args = getArgs()
# initialize common args
api_key = demisto.params().get('api_key')
account_uuid = demisto.params().get('account_uuid')
global HEADERS
HEADERS = {
'Authorization': 'IBToken ' + api_key,
'User-Agent': 'Cortex_Insight.v3',
'Content-Type': 'application/json',
}
# attempt command execution
try:
if command == 'test-module':
response = sendRequest('GET', 'Sensors', 'sensors')
demisto.results('ok')
if command == 'fetch-incidents':
# default first fetch to -7days
first_fetch_time = datetime.now() - timedelta(days=7)
max_results = arg_to_number(
arg=demisto.params().get('max_fetch'),
arg_name='max_fetch',
required=False
)
next_run, incidents = fetchIncidents(
account_uuid=account_uuid,
max_results=max_results,
last_run=demisto.getLastRun(),
first_fetch_time=first_fetch_time
)
demisto.setLastRun(next_run)
demisto.incidents(incidents)
elif command == 'insight-get-events':
if args['response_type'] == "metadata":
response_type = "metadata"
elif args['response_type'] == "aggregations":
pattern = r"^.*[Gg][Rr][Oo][Uu][Pp]\s+[Bb][Yy].*$"
if not re.search(pattern, args['query']):
demisto.results("Error: No 'group by' statement in query. Aggregation requires a 'group by' statement.")
else:
response_type = "aggregations"
else:
response_type = "events"
args.pop('response_type')
response = sendRequest('POST', 'Events', None, args)
response = formatEvents(response, response_type)
if response_type in ("metadata", "aggregations"):
responseToEntry(response, 'Events', 'Data')
else:
responseToEntry(response, 'Events', 'Events')
elif command == 'insight-get-history':
response = sendRequest('GET', 'Events', 'history')
responseToEntry(response, 'UserQueryHistory', 'History')
elif command == 'insight-get-saved-searches':
response = sendRequest('GET', 'Events', 'saved')
responseToEntry(response, 'SavedSearches', 'Saved Queries')
elif command == 'insight-get-sensors':
response = sendRequest('GET', 'Sensors', 'sensors')
responseToEntry(response, 'Sensors', 'Sensors')
elif command == 'insight-get-devices':
response = sendRequest('GET', 'Sensors', 'devices')
responseToEntry(response, 'Devices', 'Device List')
elif command == 'insight-get-tasks':
if 'task_uuid' in args:
endpoint = 'pcaptasks/' + args['task_uuid']
response = sendRequest('GET', 'Sensors', endpoint)
responseToEntry(response, 'Tasks', 'PCAP Task')
else:
response = sendRequest('GET', 'Sensors', 'pcaptasks')
responseToEntry(response, 'Tasks', 'PCAPTasks')
elif command == 'insight-create-task':
sensor_ids = [args['sensor_ids']]
args.pop('sensor_ids')
args['sensor_ids'] = sensor_ids
response = sendRequest('POST', 'Sensors', 'pcaptasks', args)
demisto.results("Task created successfully")
elif command == 'insight-get-detections':
response = sendRequest('GET', 'Detections', 'detections', None, encodeArgsToURL(args))
if response['total_count'] > MAX_DETECTIONS:
if 'limit' not in args or int(args['limit']) > MAX_DETECTIONS:
# pull the remaining detections incrementally
response = getDetectionsInc(response, args)
# filter out training detections
detections = []
for detection in response['detections']:
if detection['account_uuid'] != TRAINING_ACC:
detections.append(detection)
response['detections'] = detections
if 'include' in args:
if args['include'] == 'rules':
response = addDetectionRules(response)
responseToEntry(response, 'Detections', 'Detections')
elif command == 'insight-get-detection-rules':
response = sendRequest('GET', 'Detections', 'rules', None, encodeArgsToURL(args))
responseToEntry(response, 'Rules', 'Rules')
elif command == 'insight-get-detection-rule-events':
rule_uuid = args['rule_uuid']
endpoint = "rules/" + rule_uuid + "/events"
args.pop('rule_uuid')
response = sendRequest('GET', 'Detections', endpoint, None, encodeArgsToURL(args))
responseToEntry(response, 'Detections', 'Events')
elif command == 'insight-resolve-detection':
endpoint = "detections/" + args['detection_uuid'] + "/resolve"
body = {"resolution": args['resolution'], "resolution_comment": args['resolution_comment']}
sendRequest('PUT', 'Detections', endpoint, body, None)
demisto.results("Detection resolved successfully")
elif command == 'insight-create-detection-rule':
run_accts = [args['run_account_uuids']]
dev_ip_fields = [args['device_ip_fields']]
args.pop('run_account_uuids')
args.pop('device_ip_fields')
args['run_account_uuids'] = run_accts
args['device_ip_fields'] = dev_ip_fields
sendRequest('POST', 'Detections', 'rules', args, None)
demisto.results("Rule created successfully")
elif command == 'insight-get-entity-summary':
endpoint = args['entity'] + "/summary"
response = sendRequest('GET', 'Entity', endpoint, None, None)
responseToEntry(response, 'Entity.Summary', 'Summary')
elif command == 'insight-get-entity-pdns':
endpoint = args['entity'] + "/pdns"
response = sendRequest('GET', 'Entity', endpoint, None, None)
responseToEntry(response, 'Entity.PDNS', 'PassiveDNS')
elif command == 'insight-get-entity-dhcp':
endpoint = args['entity'] + "/dhcp"
response = sendRequest('GET', 'Entity', endpoint, None, None)
responseToEntry(response, 'Entity.DHCP', 'DHCP')
elif command == 'insight-get-entity-file':
endpoint = args['hash'] + "/file"
response = sendRequest('GET', 'Entity', endpoint, None, None)
responseToEntry(response, 'Entity.File', 'File')
elif command == 'insight-get-telemetry-events':
response = sendRequest('GET', 'Sensors', 'telemetry/events', None, encodeArgsToURL(args))
responseToEntry(response, 'Telemetry.Events', 'Data')
elif command == 'insight-get-telemetry-network':
response = sendRequest('GET', 'Sensors', 'telemetry/network', None, encodeArgsToURL(args))
responseToEntry(response, 'Telemetry.Network', 'Data')
elif command == 'insight-get-telemetry-packetstats':
response = sendRequest('GET', 'Sensors', 'telemetry/packetstats', None, encodeArgsToURL(args))
responseToEntry(response, 'Telemetry.Packetstats', 'Data')
# catch exceptions
except Exception as e:
demisto.error(traceback.format_exc())
return_error(f'Failed to execute {demisto.command()} command.\nError:\n{str(e)}', e)
|
32,346 | def main():
try:
if demisto.command() == "fetch-indicators":
fetch_indicators()
elif demisto.command() == "reset-data-stream":
days = demisto.getArg("reset")
days = int(days)
new_date = reset_data_stream(int(days))
demisto.results(new_date)
elif demisto.command() == "test-module":
connect()
return_results("ok")
except Exception as e:
demisto.error(traceback.format_exc())
return_error(f"Failed to execute {demisto.command()} command.\nError:\n{str(e)}")
| def main():
try:
if demisto.command() == "fetch-indicators":
fetch_indicators()
elif demisto.command() == "reset-data-stream":
days = demisto.getArg("reset")
days = int(days)
new_date = reset_data_stream(days)
demisto.results(new_date)
elif demisto.command() == "test-module":
connect()
return_results("ok")
except Exception as e:
demisto.error(traceback.format_exc())
return_error(f"Failed to execute {demisto.command()} command.\nError:\n{str(e)}")
|
6,629 | def start_import(invoices):
errors = 0
names = []
for idx, d in enumerate(invoices):
try:
invoice_number = ''
set_child_names = False
if d.invoice_number:
invoice_number = d.invoice_number
set_child_names = True
publish(idx, len(invoices), d.doctype)
doc = frappe.get_doc(d)
doc.flags.ignore_mandatory = True
doc.insert(set_name=invoice_number, set_child_names=set_child_names)
doc.submit()
frappe.db.commit()
names.append(doc.name)
except Exception:
errors += 1
frappe.db.rollback()
message = "\n".join(["Data:", dumps(d, default=str, indent=4), "--" * 50, "\nException:", traceback.format_exc()])
frappe.log_error(title="Error while creating Opening Invoice", message=message)
frappe.db.commit()
if errors:
frappe.msgprint(_("You had {} errors while creating opening invoices. Check {} for more details")
.format(errors, "<a href='/app/List/Error Log' class='variant-click'>Error Log</a>"), indicator="red", title=_("Error Occured"))
return names
| def start_import(invoices):
errors = 0
names = []
for idx, d in enumerate(invoices):
try:
invoice_number = ''
set_child_names = False
if d.invoice_number:
invoice_number = d.invoice_number
set_child_names = True
publish(idx, len(invoices), d.doctype)
doc = frappe.get_doc(d)
doc.flags.ignore_mandatory = True
doc.insert(set_name=invoice_number)
doc.submit()
frappe.db.commit()
names.append(doc.name)
except Exception:
errors += 1
frappe.db.rollback()
message = "\n".join(["Data:", dumps(d, default=str, indent=4), "--" * 50, "\nException:", traceback.format_exc()])
frappe.log_error(title="Error while creating Opening Invoice", message=message)
frappe.db.commit()
if errors:
frappe.msgprint(_("You had {} errors while creating opening invoices. Check {} for more details")
.format(errors, "<a href='/app/List/Error Log' class='variant-click'>Error Log</a>"), indicator="red", title=_("Error Occured"))
return names
|
12,300 | def mcsolve(H, psi0, tlist, c_ops=None, e_ops=None, ntraj=1, *,
args=None, options=None, seeds=None, target_tol=None, timeout=0):
r"""
Monte Carlo evolution of a state vector :math:`|\psi \rangle` for a
given Hamiltonian and sets of collapse operators. Options for the
underlying ODE solver are given by the Options class.
Parameters
----------
H : :class:`qutip.Qobj`, :class:`qutip.QobjEvo`, ``list``, callable.
System Hamiltonian as a Qobj, QobjEvo, can also be a function or list
that can be made into a Qobjevo. (See :class:`qutip.QobjEvo`'s
documentation). ``H`` can be a superoperator (liouvillian) if some
collapse operators are to be treated deterministically.
psi0 : :class:`qutip.Qobj`
Initial state vector
tlist : array_like
Times at which results are recorded.
ntraj : int
Maximum number of trajectories to run. Can be cut short if a time limit
is passed in options (per default, mcsolve will stop after 1e8 sec)::
``options.mcsolve['map_options']['timeout'] = max_sec``
Or if the target tolerance is reached, see ``target_tol``.
c_ops : ``list``
A ``list`` of collapse operators. They must be operators even if ``H``
is a superoperator.
e_ops : ``list``, [optional]
A ``list`` of operator as Qobj, QobjEvo or callable with signature of
(t, state: Qobj) for calculating expectation values. When no ``e_ops``
are given, the solver will default to save the states.
args : dict, [optional]
Arguments for time-dependent Hamiltonian and collapse operator terms.
options : SolverOptions, [optional]
Options for the evolution.
seeds : int, SeedSequence, list, [optional]
Seed for the random number generator. It can be a single seed used to
spawn seeds for each trajectories or a list of seed, one for each
trajectories. Seed are saved in the result, they can be reused with::
seeds=prev_result.seeds
target_tol : float, list, [optional]
Target tolerance of the evolution. The evolution will compute
trajectories until the error on the expectation values is lower than
this tolerance. The error is computed using jackknife resampling.
``target_tol`` can be an absolute tolerance, a pair of absolute and
relative tolerance, in that order. Lastly, it can be a list of pairs of
(atol, rtol) for each e_ops.
timeout : float [optional]
Maximum time for the evolution in second. When reached, no more
trajectories will be computed. Overwrite the option of the same name.
Returns
-------
results : :class:`qutip.solver.Result`
Object storing all results from the simulation. Which results is saved
depend on the presence of ``e_ops`` and the options used. ``collapse``
and ``photocurrent`` is available to Monte Carlo simulation results.
"""
H = QobjEvo(H, args=args, tlist=tlist)
c_ops = c_ops if c_ops is not None else []
if not isinstance(c_ops, (list, tuple)):
c_ops = [c_ops]
c_ops = [QobjEvo(c_op, args=args, tlist=tlist) for c_op in c_ops]
if len(c_ops) == 0:
return mesolve(H, psi0, tlist, e_ops=e_ops, args=args, options=options)
if isinstance(ntraj, list):
if isinstance(options, dict):
options = SolverOptions(**options)
options = copy(options) or SolverOptions()
options.results['keep_runs_results'] = True
max_ntraj = max(ntraj)
else:
max_ntraj = ntraj
mc = McSolver(H, c_ops, options=options)
result = mc.run(psi0, tlist=tlist, ntraj=max_ntraj, e_ops=e_ops,
seed=seeds, target_tol=target_tol, timeout=timeout)
if isinstance(ntraj, list):
result.traj_batch = ntraj
return result
| def mcsolve(H, psi0, tlist, c_ops=None, e_ops=None, ntraj=1, *,
args=None, options=None, seeds=None, target_tol=None, timeout=0):
r"""
Monte Carlo evolution of a state vector :math:`|\psi \rangle` for a
given Hamiltonian and sets of collapse operators. Options for the
underlying ODE solver are given by the Options class.
Parameters
----------
H : :class:`qutip.Qobj`, :class:`qutip.QobjEvo`, ``list``, callable.
System Hamiltonian as a Qobj, QobjEvo, can also be a function or list
that can be made into a Qobjevo. (See :class:`qutip.QobjEvo`'s
documentation). ``H`` can be a superoperator (liouvillian) if some
collapse operators are to be treated deterministically.
psi0 : :class:`qutip.Qobj`
Initial state vector.
tlist : array_like
Times at which results are recorded.
ntraj : int
Maximum number of trajectories to run. Can be cut short if a time limit
is passed in options (per default, mcsolve will stop after 1e8 sec)::
``options.mcsolve['map_options']['timeout'] = max_sec``
Or if the target tolerance is reached, see ``target_tol``.
c_ops : ``list``
A ``list`` of collapse operators. They must be operators even if ``H``
is a superoperator.
e_ops : ``list``, [optional]
A ``list`` of operator as Qobj, QobjEvo or callable with signature of
(t, state: Qobj) for calculating expectation values. When no ``e_ops``
are given, the solver will default to save the states.
args : dict, [optional]
Arguments for time-dependent Hamiltonian and collapse operator terms.
options : SolverOptions, [optional]
Options for the evolution.
seeds : int, SeedSequence, list, [optional]
Seed for the random number generator. It can be a single seed used to
spawn seeds for each trajectories or a list of seed, one for each
trajectories. Seed are saved in the result, they can be reused with::
seeds=prev_result.seeds
target_tol : float, list, [optional]
Target tolerance of the evolution. The evolution will compute
trajectories until the error on the expectation values is lower than
this tolerance. The error is computed using jackknife resampling.
``target_tol`` can be an absolute tolerance, a pair of absolute and
relative tolerance, in that order. Lastly, it can be a list of pairs of
(atol, rtol) for each e_ops.
timeout : float [optional]
Maximum time for the evolution in second. When reached, no more
trajectories will be computed. Overwrite the option of the same name.
Returns
-------
results : :class:`qutip.solver.Result`
Object storing all results from the simulation. Which results is saved
depend on the presence of ``e_ops`` and the options used. ``collapse``
and ``photocurrent`` is available to Monte Carlo simulation results.
"""
H = QobjEvo(H, args=args, tlist=tlist)
c_ops = c_ops if c_ops is not None else []
if not isinstance(c_ops, (list, tuple)):
c_ops = [c_ops]
c_ops = [QobjEvo(c_op, args=args, tlist=tlist) for c_op in c_ops]
if len(c_ops) == 0:
return mesolve(H, psi0, tlist, e_ops=e_ops, args=args, options=options)
if isinstance(ntraj, list):
if isinstance(options, dict):
options = SolverOptions(**options)
options = copy(options) or SolverOptions()
options.results['keep_runs_results'] = True
max_ntraj = max(ntraj)
else:
max_ntraj = ntraj
mc = McSolver(H, c_ops, options=options)
result = mc.run(psi0, tlist=tlist, ntraj=max_ntraj, e_ops=e_ops,
seed=seeds, target_tol=target_tol, timeout=timeout)
if isinstance(ntraj, list):
result.traj_batch = ntraj
return result
|
52,211 | def destdir_join(d1: str, d2: str) -> str:
if not d1:
return d2
d2_path = Path(d2)
d2_parts = d2_path.parts
# c:\destdir + c:\prefix must produce c:\destdir\prefix
if d2_path.drive or d2_path.root:
d2_parts = d2_parts[1:]
return str(Path(d1, *d2_parts))
| def destdir_join(d1: str, d2: str) -> str:
if not d1:
return d2
d2_path = Path(d2)
d2_parts = d2_path.parts
# c:\destdir + c:\prefix must produce c:\destdir\prefix
d2_parts = Path(d2).parts[1:]
return str(Path(d1, *d2_parts))
|
25,979 | def flexible_server_restart(cmd, client, resource_group_name, server_name, fail_over=None):
instance = client.get(resource_group_name, server_name)
if fail_over is not None and instance.high_availability.mode != "ZoneRedundant":
raise ArgumentUsageError("Failing over can only be triggered for zone redundant servers.")
if fail_over is not None:
if fail_over not in ['Planned', 'Forced', 'planned', 'forced']:
raise InvalidArgumentValueError("Allowed failover parameters are 'Planned' and 'Forced'.")
if fail_over.lower() == 'planned':
fail_over = 'plannedFailover'
elif fail_over.lower() == 'forced':
fail_over = 'forcedFailover'
parameters = postgresql_flexibleservers.models.RestartParameter(restart_with_failover=True,
failover_mode=fail_over)
else:
parameters = postgresql_flexibleservers.models.RestartParameter(restart_with_failover=False)
return resolve_poller(
client.begin_restart(resource_group_name, server_name, parameters), cmd.cli_ctx, 'PostgreSQL Server Restart')
| def flexible_server_restart(cmd, client, resource_group_name, server_name, fail_over=None):
instance = client.get(resource_group_name, server_name)
if fail_over is not None and instance.high_availability.mode != "ZoneRedundant":
raise ArgumentUsageError("Failing over can only be triggered for zone redundant servers.")
if fail_over is not None:
if fail_over.lower() not in ['planned', 'forced']:
raise InvalidArgumentValueError("Allowed failover parameters are 'Planned' and 'Forced'.")
if fail_over.lower() == 'planned':
fail_over = 'plannedFailover'
elif fail_over.lower() == 'forced':
fail_over = 'forcedFailover'
parameters = postgresql_flexibleservers.models.RestartParameter(restart_with_failover=True,
failover_mode=fail_over)
else:
parameters = postgresql_flexibleservers.models.RestartParameter(restart_with_failover=False)
return resolve_poller(
client.begin_restart(resource_group_name, server_name, parameters), cmd.cli_ctx, 'PostgreSQL Server Restart')
|
52,496 | def handle_hard_bounce(
message_id: str, bounceSubType: str, recipient_emails: List[str]
) -> None:
"""Handle a hard bounce notification received from SNS
:param message_id: The unique message id assigned by Amazon SES
:param bounceSubType: The subtype of the bounce, as determined
by Amazon SES
:param recipient_emails: a list of email addresses one per recipient
to whom the bounce notification pertains
:return: None
"""
unexpected_events = ["Suppressed", "OnAccountSuppressionList"]
for email in recipient_emails:
if bounceSubType in unexpected_events:
# Handle unexpected bounceSubType events, log a warning
logging.warning(
f"Unexpected {bounceSubType} hard bounce for {email}"
)
# After log the event ban the email address
# Only ban email address if it hasn't been previously banned
EmailFlag.objects.get_or_create(
email_address=email,
flag_type="ban",
defaults={"reason": bounceSubType},
)
else:
# Any other hard bounce even only ban the email address
# Only ban email address if it hasn't been previously banned
EmailFlag.objects.get_or_create(
email_address=email,
flag_type="ban",
defaults={"reason": bounceSubType},
)
| def handle_hard_bounce(
message_id: str, bounceSubType: str, recipient_emails: List[str]
) -> None:
"""Ban any email address that receives a hard bounce.
If the bounce is of an unexpected type, also log a warning.
:param message_id: The unique message id assigned by Amazon SES
:param bounceSubType: The subtype of the bounce, as determined
by Amazon SES
:param recipient_emails: a list of email addresses one per recipient
to whom the bounce notification pertains
:return: None
"""
unexpected_events = ["Suppressed", "OnAccountSuppressionList"]
for email in recipient_emails:
if bounceSubType in unexpected_events:
# Handle unexpected bounceSubType events, log a warning
logging.warning(
f"Unexpected {bounceSubType} hard bounce for {email}"
)
# After log the event ban the email address
# Only ban email address if it hasn't been previously banned
EmailFlag.objects.get_or_create(
email_address=email,
flag_type="ban",
defaults={"reason": bounceSubType},
)
else:
# Any other hard bounce even only ban the email address
# Only ban email address if it hasn't been previously banned
EmailFlag.objects.get_or_create(
email_address=email,
flag_type="ban",
defaults={"reason": bounceSubType},
)
|
26,117 | def test_set_H_greater_then_last_ppseqno(looper,
txnPoolNodeSet,
sdk_pool_handle,
sdk_wallet_steward,
tdir,
tconf,
allPluginsPath):
# send LOG_SIZE requests and check, that all watermarks on all replicas is not changed
# and now is (0, LOG_SIZE)
"""Send random requests for moving watermarks"""
sdk_send_random_and_check(looper, txnPoolNodeSet, sdk_pool_handle, sdk_wallet_steward, LOG_SIZE)
# check, that all of node set up watermark greater, then default and
# ppSeqNo with number LOG_SIZE + 1 will be out from default watermark
assert txnPoolNodeSet[0].replicas[1].last_ordered_3pc[1] == LOG_SIZE
for n in txnPoolNodeSet:
for r in n.replicas._replicas.values():
assert r.h >= LOG_SIZE
assert r.H >= LOG_SIZE + LOG_SIZE
"""Adding new node, for scheduling propagate primary procedure"""
new_node = add_new_node(looper, txnPoolNodeSet, sdk_pool_handle,
sdk_wallet_steward, tdir, tconf, allPluginsPath)
ensure_all_nodes_have_same_data(looper, txnPoolNodeSet,
exclude_from_check=['check_last_ordered_3pc_backup'])
"""Check, that backup replicas set watermark as (0, maxInt)"""
# Check, replica.h is set from last_ordered_3PC and replica.H is set to maxsize
for r in new_node.replicas.values():
assert r.h == r.last_ordered_3pc[1]
if r.isMaster:
assert r.H == r.last_ordered_3pc[1] + LOG_SIZE
else:
assert r.H == sys.maxsize
"""Send requests and check. that backup replicas does not stashing it by outside watermarks reason"""
sdk_send_random_and_check(looper, txnPoolNodeSet, sdk_pool_handle, sdk_wallet_steward, 1)
# check, that there is no any stashed "outside watermark" messages.
for r in new_node.replicas.values():
assert r.stasher.stash_size(STASH_WATERMARKS) == 0
"""Force view change and check, that all backup replicas will not reset watermarks"""
ensure_view_change(looper, txnPoolNodeSet)
ensureElectionsDone(looper, txnPoolNodeSet)
for r in new_node.replicas.values():
if not r.isMaster:
assert r.h == 0
assert r.H == LOG_SIZE
| def test_set_H_greater_then_last_ppseqno(looper,
txnPoolNodeSet,
sdk_pool_handle,
sdk_wallet_steward,
tdir,
tconf,
allPluginsPath):
# send LOG_SIZE requests and check, that all watermarks on all replicas is not changed
# and now is (0, LOG_SIZE)
"""Send random requests for moving watermarks"""
sdk_send_random_and_check(looper, txnPoolNodeSet, sdk_pool_handle, sdk_wallet_steward, LOG_SIZE)
# check, that all of node set up watermark greater, then default and
# ppSeqNo with number LOG_SIZE + 1 will be out from default watermark
assert txnPoolNodeSet[0].replicas[1].last_ordered_3pc[1] == LOG_SIZE
for n in txnPoolNodeSet:
for r in n.replicas._replicas.values():
assert r.h >= LOG_SIZE
assert r.H >= LOG_SIZE + LOG_SIZE
"""Adding new node, for scheduling propagate primary procedure"""
new_node = add_new_node(looper, txnPoolNodeSet, sdk_pool_handle,
sdk_wallet_steward, tdir, tconf, allPluginsPath)
ensure_all_nodes_have_same_data(looper, txnPoolNodeSet,
exclude_from_check=['check_last_ordered_3pc_backup'])
"""Check, that backup replicas set watermark as (0, maxInt)"""
# Check, replica.h is set from last_ordered_3PC and replica.H is set to maxsize
for r in new_node.replicas.values():
assert r.h == r.last_ordered_3pc[1]
if r.isMaster:
assert r.H == r.last_ordered_3pc[1] + LOG_SIZE
else:
assert r.H == sys.maxsize
"""Send requests and check. that backup replicas does not stashing it by outside watermarks reason"""
sdk_send_random_and_check(looper, txnPoolNodeSet, sdk_pool_handle, sdk_wallet_steward, 1)
# check, that there is no any stashed "outside watermark" messages.
for r in new_node.replicas.values():
assert r.stasher.stash_size(STASH_WATERMARKS) == 0
"""Force view change and check, that all backup replicas will reset watermarks"""
ensure_view_change(looper, txnPoolNodeSet)
ensureElectionsDone(looper, txnPoolNodeSet)
for r in new_node.replicas.values():
if not r.isMaster:
assert r.h == 0
assert r.H == LOG_SIZE
|
2,076 | def plot_det_curve(
estimator,
X,
y,
*,
sample_weight=None,
response_method="auto",
name=None,
ax=None,
pos_label=None,
**kwargs
):
"""Plot detection error tradeoff (DET) curve.
Extra keyword arguments will be passed to matplotlib's `plot`.
Read more in the :ref:`User Guide <visualizations>`.
.. versionadded:: 0.24
Parameters
----------
estimator : estimator instance
Fitted classifier or a fitted :class:`~sklearn.pipeline.Pipeline`
in which the last estimator is a classifier.
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Input values.
y : array-like of shape (n_samples,)
Target values.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
response_method : {'predict_proba', 'decision_function', 'auto'} \
default='auto'
Specifies whether to use :term:`predict_proba` or
:term:`decision_function` as the target response. If set to 'auto',
:term:`predict_proba` is tried first and if it does not exist
:term:`decision_function` is tried next.
name : str, default=None
Name of ROC Curve for labeling. If `None`, use the name of the
estimator.
ax : matplotlib axes, default=None
Axes object to plot on. If `None`, a new figure and axes is created.
pos_label : str or int, default=None
The label of the positive class.
When `pos_label=None`, if `y_true` is in {-1, 1} or {0, 1},
`pos_label` is set to 1, otherwise an error will be raised.
Returns
-------
display : :class:`~sklearn.metrics.DetCurveDisplay`
Object that stores computed values.
See Also
--------
roc_auc_score : Compute the area under the ROC curve
roc_curve : Compute Receiver operating characteristic (ROC) curve
Examples
--------
"""
check_matplotlib_support('plot_det_curve')
y_pred, pos_label = _get_response(
X, estimator, response_method, pos_label=pos_label
)
fpr, fnr, _ = det_curve(
y, y_pred, pos_label=pos_label, sample_weight=sample_weight,
)
name = estimator.__class__.__name__ if name is None else name
viz = DetCurveDisplay(
fpr=fpr,
fnr=fnr,
estimator_name=name,
pos_label=pos_label
)
return viz.plot(ax=ax, name=name, **kwargs)
| def plot_det_curve(
estimator,
X,
y,
*,
sample_weight=None,
response_method="auto",
name=None,
ax=None,
pos_label=None,
**kwargs
):
"""Plot detection error tradeoff (DET) curve.
Extra keyword arguments will be passed to matplotlib's `plot`.
Read more in the :ref:`User Guide <visualizations>`.
.. versionadded:: 0.24
Parameters
----------
estimator : estimator instance
Fitted classifier or a fitted :class:`~sklearn.pipeline.Pipeline`
in which the last estimator is a classifier.
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Input values.
y : array-like of shape (n_samples,)
Target values.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
response_method : {'predict_proba', 'decision_function', 'auto'} \
default='auto'
Specifies whether to use :term:`predict_proba` or
:term:`decision_function` as the target response. If set to 'auto',
:term:`predict_proba` is tried first and if it does not exist
:term:`decision_function` is tried next.
name : str, default=None
Name of ROC Curve for labeling. If `None`, use the name of the
estimator.
ax : matplotlib axes, default=None
Axes object to plot on. If `None`, a new figure and axes is created.
pos_label : str or int, default=None
The label of the positive class.
When `pos_label=None`, if `y_true` is in {-1, 1} or {0, 1},
`pos_label` is set to 1, otherwise an error will be raised.
Returns
-------
display : :class:`~sklearn.metrics.DetCurveDisplay`
Object that stores computed values.
See Also
--------
roc_auc_score : Compute the area under the ROC curve
plot_roc_curve : Plot Receiver operating characteristic (ROC) curve
Examples
--------
"""
check_matplotlib_support('plot_det_curve')
y_pred, pos_label = _get_response(
X, estimator, response_method, pos_label=pos_label
)
fpr, fnr, _ = det_curve(
y, y_pred, pos_label=pos_label, sample_weight=sample_weight,
)
name = estimator.__class__.__name__ if name is None else name
viz = DetCurveDisplay(
fpr=fpr,
fnr=fnr,
estimator_name=name,
pos_label=pos_label
)
return viz.plot(ax=ax, name=name, **kwargs)
|
30,921 | def http_cmd(url_suffix, data=None, files=None, parse_json=True):
data = {} if data is None else data
url_params = {} # type:dict
use_public_api = demisto.params().get('public_api_key', False)
api_key = demisto.params().get('api_key', False)
if not api_key and use_public_api:
url_params.setdefault('apikey', demisto.params()['secret_public_api_key'])
elif api_key:
url_params.setdefault('apikey', api_key)
LOG('running request with url=%s\n\tdata=%s\n\tfiles=%s' % (BASE_URL + url_suffix,
data, files,))
res = {} # type:dict
if files:
res = requests.post(BASE_URL + url_suffix, # type:ignore
verify=USE_SSL,
params=url_params,
data=data,
files=files
)
else:
res = requests.get(BASE_URL + url_suffix, # type:ignore
verify=USE_SSL,
params=url_params
)
if res.status_code == 401: # type:ignore
raise Exception('API Key is incorrect')
if res.status_code >= 400: # type:ignore
try:
LOG('result is: %s' % (res.json(),)) # type:ignore
error_msg = res.json()['errors'][0]['msg'] # type:ignore
raise Exception('Your request failed with the following status code (%s) and error: %s.\n%s' % (res.status_code, res.reason, error_msg,)) # type:ignore
except ValueError:
# in case the respons is not parsed as JSON
raise Exception('Your request failed with the following status code (%s) and error: %s.' % (res.status_code, res.reason))
if parse_json:
return res.json() # type:ignore
else:
return res.content # type:ignore
| def http_cmd(url_suffix, data=None, files=None, parse_json=True):
data = {} if data is None else data
url_params = {} # type:dict
use_public_api = demisto.params().get('public_api_key', False)
api_key = demisto.params().get('api_key', False)
if not api_key and use_public_api:
url_params.setdefault('apikey', demisto.params()['secret_public_api_key'])
elif api_key:
url_params.setdefault('apikey', api_key)
LOG('running request with url=%s\n\tdata=%s\n\tfiles=%s' % (BASE_URL + url_suffix,
data, files,))
res = {} # type:dict
if files:
res = requests.post(BASE_URL + url_suffix, # type:ignore
verify=USE_SSL,
params=url_params,
data=data,
files=files
)
else:
res = requests.get(BASE_URL + url_suffix, # type:ignore
verify=USE_SSL,
params=url_params
)
if res.status_code == 401: # type:ignore
raise Exception('API Key is incorrect')
if res.status_code >= 400: # type:ignore
try:
LOG('result is: %s' % (res.json(),)) # type:ignore
error_msg = res.json()['errors'][0]['msg'] # type:ignore
raise Exception('Your request failed with the following status code (%s) and error: %s.\n%s' % (res.status_code, res.reason, error_msg,)) # type:ignore
except ValueError:
# in case the response is not parsed as JSON
raise Exception('Your request failed with the following status code (%s) and error: %s.' % (res.status_code, res.reason))
if parse_json:
return res.json() # type:ignore
else:
return res.content # type:ignore
|
19,846 | def get_task_form_class(task_model, for_edit=False):
"""
Generates a form class for the given task model.
If the form is to edit an existing task, set for_edit to True. This applies
the readonly restrictions on fields defined in admin_form_readonly_on_edit_fields.
"""
fields = task_model.admin_form_fields
form_class = forms.modelform_factory(
task_model,
form=BaseTaskForm,
fields=fields,
widgets=getattr(task_model, 'admin_form_widgets', {})
)
if for_edit:
for field_name in getattr(task_model, 'admin_form_readonly_on_edit_fields', []):
if field_name not in form_class.base_fields:
raise ImproperlyConfigured(
"`%s.admin_form_readonly_on_edit_fields` contains the field "
"'%s' but this field doesn't exist. Have you forgotten to add "
"it to `%s.admin_form_fields`?"
% (task_model.__name__, field_name, task_model.__name__)
)
form_class.base_fields[field_name].disabled = True
return form_class
| def get_task_form_class(task_model, for_edit=False):
"""
Generates a form class for the given task model.
If the form is to edit an existing task, set for_edit to True. This applies
the readonly restrictions on fields defined in admin_form_readonly_on_edit_fields.
"""
fields = task_model.admin_form_fields
form_class = forms.modelform_factory(
task_model,
form=BaseTaskForm,
fields=fields,
widgets=getattr(task_model, 'admin_form_widgets', {})
)
if for_edit:
for field_name in getattr(task_model, 'admin_form_readonly_on_edit_fields', []):
if field_name not in form_class.base_fields:
raise ImproperlyConfigured(
"`%s.admin_form_readonly_on_edit_fields` contains the field "
"'%s' that doesn't exist. Did you forgget to add "
"it to `%s.admin_form_fields`?"
% (task_model.__name__, field_name, task_model.__name__)
)
form_class.base_fields[field_name].disabled = True
return form_class
|
29,763 | def scale_ratings(mbids_and_ratings, params):
""" Scale the ratings so that they fall on a range from 0.0 -> 1.0.
Args:
mbids_and_ratings: list of recommended recording mbids and ratings.
params: RecommendationParams class object.
"""
for row in mbids_and_ratings:
rating = row[1]
scaled_rating = (rating / 2.0) + 0.5
if scaled_rating > 1.0 or scaled_rating < -1.0:
params.ratings_beyond_range.append(rating)
row[1] = round(min(max(scaled_rating, -1.0), 1.0), 3)
| def scale_ratings(mbids_and_ratings, params: RecommendationParams):
""" Scale the ratings so that they fall on a range from 0.0 -> 1.0.
Args:
mbids_and_ratings: list of recommended recording mbids and ratings.
params: RecommendationParams class object.
"""
for row in mbids_and_ratings:
rating = row[1]
scaled_rating = (rating / 2.0) + 0.5
if scaled_rating > 1.0 or scaled_rating < -1.0:
params.ratings_beyond_range.append(rating)
row[1] = round(min(max(scaled_rating, -1.0), 1.0), 3)
|
33,052 | def install_unified_kernel(
args: MkosiArgs,
root: Path,
root_hash: Optional[str],
do_run_build_script: bool,
for_cache: bool,
cached: bool,
mount: Callable[[], ContextManager[None]],
) -> None:
# Iterates through all kernel versions included in the image and generates a combined
# kernel+initrd+cmdline+osrelease EFI file from it and places it in the /EFI/Linux directory of the ESP.
# sd-boot iterates through them and shows them in the menu. These "unified" single-file images have the
# benefit that they can be signed like normal EFI binaries, and can encode everything necessary to boot a
# specific root device, including the root hash.
if not (args.bootable and
args.get_partition(PartitionIdentifier.esp) and
args.with_unified_kernel_images):
return
# Don't run dracut if this is for the cache. The unified kernel
# typically includes the image ID, roothash and other data that
# differs between cached version and final result. Moreover, we
# want that the initrd for the image actually takes the changes we
# make to the image into account (e.g. when we build a systemd
# test image with this we want that the systemd we just built is
# in the initrd, and not one from the cache. Hence even though
# dracut is slow we invoke it only during the last final build,
# never for the cached builds.
if for_cache:
return
# Don't bother running dracut if this is a development build. Strictly speaking it would probably be a
# good idea to run it, so that the development environment differs as little as possible from the final
# build, but then again the initrd should not be relevant for building, and dracut is simply very slow,
# hence let's avoid it invoking it needlessly, given that we never actually invoke the boot loader on the
# development image.
if do_run_build_script:
return
prefix = "boot" if args.get_partition(PartitionIdentifier.xbootldr) else "efi"
with mount(), complete_step("Generating combined kernel + initrd boot file…"):
for kver, kimg in gen_kernel_images(args, root):
if args.image_id:
image_id = args.image_id
if args.image_version:
partlabel = f"{args.image_id}_{args.image_version}"
else:
partlabel = f"{args.image_id}"
else:
image_id = f"mkosi-{args.distribution}"
partlabel = None
# See https://systemd.io/AUTOMATIC_BOOT_ASSESSMENT/
boot_count: str = ""
if root.joinpath("etc/kernel/tries").exists():
boot_count=f'+{root.joinpath("etc/kernel/tries").read_text().strip()}'
if args.image_version:
boot_binary = root / prefix / f"EFI/Linux/{image_id}_{args.image_version}{boot_count}.efi"
elif root_hash:
boot_binary = root / prefix / f"EFI/Linux/{image_id}-{kver}-{root_hash}{boot_count}.efi"
else:
boot_binary = root / prefix / f"EFI/Linux/{image_id}-{kver}{boot_count}.efi"
if root.joinpath("etc/kernel/cmdline").exists():
boot_options = root.joinpath("etc/kernel/cmdline").read_text().strip()
elif root.joinpath("/usr/lib/kernel/cmdline").exists():
boot_options = root.joinpath("usr/lib/kernel/cmdline").read_text().strip()
else:
boot_options = ""
if root_hash:
option = "usrhash" if args.usr_only else "roothash"
boot_options = f"{boot_options} {option}={root_hash}"
elif partlabel:
boot_options = f"{boot_options} root=PARTLABEL={partlabel}"
osrelease = root / "usr/lib/os-release"
cmdline = workspace(root) / "cmdline"
cmdline.write_text(boot_options)
initrd = root / boot_directory(args, kver) / "initrd"
cmd: Sequence[PathString] = [
"objcopy",
"--add-section", f".osrel={osrelease}", "--change-section-vma", ".osrel=0x20000",
"--add-section", f".cmdline={cmdline}", "--change-section-vma", ".cmdline=0x30000",
"--add-section", f".linux={root / kimg}", "--change-section-vma", ".linux=0x2000000",
"--add-section", f".initrd={initrd}", "--change-section-vma", ".initrd=0x3000000",
root / "lib/systemd/boot/efi/linuxx64.efi.stub",
boot_binary,
]
run(cmd)
cmdline.unlink()
| def install_unified_kernel(
args: MkosiArgs,
root: Path,
root_hash: Optional[str],
do_run_build_script: bool,
for_cache: bool,
cached: bool,
mount: Callable[[], ContextManager[None]],
) -> None:
# Iterates through all kernel versions included in the image and generates a combined
# kernel+initrd+cmdline+osrelease EFI file from it and places it in the /EFI/Linux directory of the ESP.
# sd-boot iterates through them and shows them in the menu. These "unified" single-file images have the
# benefit that they can be signed like normal EFI binaries, and can encode everything necessary to boot a
# specific root device, including the root hash.
if not (args.bootable and
args.get_partition(PartitionIdentifier.esp) and
args.with_unified_kernel_images):
return
# Don't run dracut if this is for the cache. The unified kernel
# typically includes the image ID, roothash and other data that
# differs between cached version and final result. Moreover, we
# want that the initrd for the image actually takes the changes we
# make to the image into account (e.g. when we build a systemd
# test image with this we want that the systemd we just built is
# in the initrd, and not one from the cache. Hence even though
# dracut is slow we invoke it only during the last final build,
# never for the cached builds.
if for_cache:
return
# Don't bother running dracut if this is a development build. Strictly speaking it would probably be a
# good idea to run it, so that the development environment differs as little as possible from the final
# build, but then again the initrd should not be relevant for building, and dracut is simply very slow,
# hence let's avoid it invoking it needlessly, given that we never actually invoke the boot loader on the
# development image.
if do_run_build_script:
return
prefix = "boot" if args.get_partition(PartitionIdentifier.xbootldr) else "efi"
with mount(), complete_step("Generating combined kernel + initrd boot file…"):
for kver, kimg in gen_kernel_images(args, root):
if args.image_id:
image_id = args.image_id
if args.image_version:
partlabel = f"{args.image_id}_{args.image_version}"
else:
partlabel = f"{args.image_id}"
else:
image_id = f"mkosi-{args.distribution}"
partlabel = None
# See https://systemd.io/AUTOMATIC_BOOT_ASSESSMENT/
boot_count = ""
if root.joinpath("etc/kernel/tries").exists():
boot_count=f'+{root.joinpath("etc/kernel/tries").read_text().strip()}'
if args.image_version:
boot_binary = root / prefix / f"EFI/Linux/{image_id}_{args.image_version}{boot_count}.efi"
elif root_hash:
boot_binary = root / prefix / f"EFI/Linux/{image_id}-{kver}-{root_hash}{boot_count}.efi"
else:
boot_binary = root / prefix / f"EFI/Linux/{image_id}-{kver}{boot_count}.efi"
if root.joinpath("etc/kernel/cmdline").exists():
boot_options = root.joinpath("etc/kernel/cmdline").read_text().strip()
elif root.joinpath("/usr/lib/kernel/cmdline").exists():
boot_options = root.joinpath("usr/lib/kernel/cmdline").read_text().strip()
else:
boot_options = ""
if root_hash:
option = "usrhash" if args.usr_only else "roothash"
boot_options = f"{boot_options} {option}={root_hash}"
elif partlabel:
boot_options = f"{boot_options} root=PARTLABEL={partlabel}"
osrelease = root / "usr/lib/os-release"
cmdline = workspace(root) / "cmdline"
cmdline.write_text(boot_options)
initrd = root / boot_directory(args, kver) / "initrd"
cmd: Sequence[PathString] = [
"objcopy",
"--add-section", f".osrel={osrelease}", "--change-section-vma", ".osrel=0x20000",
"--add-section", f".cmdline={cmdline}", "--change-section-vma", ".cmdline=0x30000",
"--add-section", f".linux={root / kimg}", "--change-section-vma", ".linux=0x2000000",
"--add-section", f".initrd={initrd}", "--change-section-vma", ".initrd=0x3000000",
root / "lib/systemd/boot/efi/linuxx64.efi.stub",
boot_binary,
]
run(cmd)
cmdline.unlink()
|
3,309 | def convert_search_filter_to_snuba_query(search_filter, key=None, params=None):
name = search_filter.key.name if key is None else key
value = search_filter.value.value
if name in no_conversion:
return
elif name == "id" and search_filter.value.is_wildcard():
raise InvalidSearchQuery("Wildcard conditions are not permitted on `id` field.")
elif name == "environment":
# conditions added to env_conditions are OR'd
env_conditions = []
values = set(value if isinstance(value, (list, tuple)) else [value])
# the "no environment" environment is null in snuba
if "" in values:
values.remove("")
operator = "IS NULL" if search_filter.operator == "=" else "IS NOT NULL"
env_conditions.append(["environment", operator, None])
if len(values) == 1:
operator = "=" if search_filter.operator == "=" else "!="
env_conditions.append(["environment", operator, values.pop()])
elif values:
operator = "IN" if search_filter.operator == "=" else "NOT IN"
env_conditions.append(["environment", operator, values])
return env_conditions
elif name == "message":
if search_filter.value.is_wildcard():
# XXX: We don't want the '^$' values at the beginning and end of
# the regex since we want to find the pattern anywhere in the
# message. Strip off here
value = search_filter.value.value[1:-1]
return [["match", ["message", f"'(?i){value}'"]], search_filter.operator, 1]
elif value == "":
operator = "=" if search_filter.operator == "=" else "!="
return [["equals", ["message", f"{value}"]], operator, 1]
else:
# https://clickhouse.yandex/docs/en/query_language/functions/string_search_functions/#position-haystack-needle
# positionCaseInsensitive returns 0 if not found and an index of 1 or more if found
# so we should flip the operator here
operator = "=" if search_filter.operator == "!=" else "!="
# make message search case insensitive
return [["positionCaseInsensitive", ["message", f"'{value}'"]], operator, 0]
elif (
name.startswith("stack.") or name.startswith("error.")
) and search_filter.value.is_wildcard():
# Escape and convert meta characters for LIKE expressions.
raw_value = search_filter.value.raw_value
like_value = raw_value.replace("%", "\\%").replace("_", "\\_").replace("*", "%")
operator = "LIKE" if search_filter.operator == "=" else "NOT LIKE"
return [name, operator, like_value]
elif name == "transaction.status":
# Handle "has" queries
if search_filter.value.raw_value == "":
return [["isNull", [name]], search_filter.operator, 1]
internal_value = SPAN_STATUS_NAME_TO_CODE.get(search_filter.value.raw_value)
if internal_value is None:
raise InvalidSearchQuery(
"Invalid value for transaction.status condition. Accepted values are {}".format(
", ".join(SPAN_STATUS_NAME_TO_CODE.keys())
)
)
return [name, search_filter.operator, internal_value]
elif name == "issue.id":
# Handle "has" queries
if search_filter.value.raw_value == "":
if search_filter.operator == "=":
# The state os having no issues is represented differently on transactions vs
# other events. On the transactions table, it is represented by 0 whereas it is
# represented by NULL everywhere else. This means we have to check for both 0
# or NULL.
return [
[["isNull", [name]], search_filter.operator, 1],
[name, search_filter.operator, 0],
]
else:
# Based the reasoning above, we should be checking that it is not 0 and not NULL.
# However, because NULL != 0 evaluates to NULL in Clickhouse, we can simplify it
# to check just not 0.
return [name, search_filter.operator, 0]
# Skip isNull check on group_id value as we want to
# allow snuba's prewhere optimizer to find this condition.
return [name, search_filter.operator, value]
elif name == USER_DISPLAY_ALIAS:
user_display_expr = FIELD_ALIASES[USER_DISPLAY_ALIAS].get_expression(params)
# Handle 'has' condition
if search_filter.value.raw_value == "":
return [["isNull", [user_display_expr]], search_filter.operator, 1]
if search_filter.value.is_wildcard():
return [
["match", [user_display_expr, f"'(?i){value}'"]],
search_filter.operator,
1,
]
return [user_display_expr, search_filter.operator, value]
elif name == ERROR_UNHANDLED_ALIAS:
# This field is the inversion of error.handled, otherwise the logic is the same.
if search_filter.value.raw_value == "":
output = 0 if search_filter.operator == "!=" else 1
return [["isHandled", []], "=", output]
if value in ("1", 1):
return [["notHandled", []], "=", 1]
if value in ("0", 0):
return [["isHandled", []], "=", 1]
raise InvalidSearchQuery(
"Invalid value for error.unhandled condition. Accepted values are 1, 0"
)
elif name == "error.handled":
# Treat has filter as equivalent to handled
if search_filter.value.raw_value == "":
output = 1 if search_filter.operator == "!=" else 0
return [["isHandled", []], "=", output]
# Null values and 1 are the same, and both indicate a handled error.
if value in ("1", 1):
return [["isHandled", []], "=", 1]
if value in (
"0",
0,
):
return [["notHandled", []], "=", 1]
raise InvalidSearchQuery(
"Invalid value for error.handled condition. Accepted values are 1, 0"
)
elif name == KEY_TRANSACTION_ALIAS:
key_transaction_expr = FIELD_ALIASES[KEY_TRANSACTION_ALIAS].get_expression(params)
if search_filter.value.raw_value == "":
operator = "!=" if search_filter.operator == "!=" else "="
return [key_transaction_expr, operator, 0]
if value in ("1", 1):
return [key_transaction_expr, "=", 1]
if value in ("0", 0):
return [key_transaction_expr, "=", 0]
raise InvalidSearchQuery(
"Invalid value for key_transaction condition. Accepted values are 1, 0"
)
elif name in ARRAY_FIELDS and search_filter.value.raw_value == "":
return [["notEmpty", [name]], "=", 1 if search_filter.operator == "!=" else 0]
else:
value = (
int(to_timestamp(value)) * 1000
if isinstance(value, datetime) and name != "timestamp"
else value
)
# Tags are never null, but promoted tags are columns and so can be null.
# To handle both cases, use `ifNull` to convert to an empty string and
# compare so we need to check for empty values.
if search_filter.key.is_tag:
name = ["ifNull", [name, "''"]]
# Handle checks for existence
if search_filter.operator in ("=", "!=") and search_filter.value.value == "":
if search_filter.key.is_tag:
return [name, search_filter.operator, value]
else:
# If not a tag, we can just check that the column is null.
return [["isNull", [name]], search_filter.operator, 1]
is_null_condition = None
if search_filter.operator == "!=" and not search_filter.key.is_tag:
# Handle null columns on inequality comparisons. Any comparison
# between a value and a null will result to null, so we need to
# explicitly check for whether the condition is null, and OR it
# together with the inequality check.
# We don't need to apply this for tags, since if they don't exist
# they'll always be an empty string.
is_null_condition = [["isNull", [name]], "=", 1]
if search_filter.value.is_wildcard():
condition = [["match", [name, f"'(?i){value}'"]], search_filter.operator, 1]
else:
condition = [name, search_filter.operator, value]
# We only want to return as a list if we have the check for null
# present. Returning as a list causes these conditions to be ORed
# together. Otherwise just return the raw condition, so that it can be
# used correctly in aggregates.
if is_null_condition:
return [is_null_condition, condition]
else:
return condition
| def convert_search_filter_to_snuba_query(search_filter, key=None, params=None):
name = search_filter.key.name if key is None else key
value = search_filter.value.value
if name in no_conversion:
return
elif name == "id" and search_filter.value.is_wildcard():
raise InvalidSearchQuery("Wildcard conditions are not permitted on `id` field.")
elif name == "environment":
# conditions added to env_conditions are OR'd
env_conditions = []
values = set(value if isinstance(value, (list, tuple)) else [value])
# the "no environment" environment is null in snuba
if "" in values:
values.remove("")
operator = "IS NULL" if search_filter.operator == "=" else "IS NOT NULL"
env_conditions.append(["environment", operator, None])
if len(values) == 1:
operator = "=" if search_filter.operator == "=" else "!="
env_conditions.append(["environment", operator, values.pop()])
elif values:
operator = "IN" if search_filter.operator == "=" else "NOT IN"
env_conditions.append(["environment", operator, values])
return env_conditions
elif name == "message":
if search_filter.value.is_wildcard():
# XXX: We don't want the '^$' values at the beginning and end of
# the regex since we want to find the pattern anywhere in the
# message. Strip off here
value = search_filter.value.value[1:-1]
return [["match", ["message", f"'(?i){value}'"]], search_filter.operator, 1]
elif value == "":
operator = "=" if search_filter.operator == "=" else "!="
return [["equals", ["message", f"{value}"]], operator, 1]
else:
# https://clickhouse.yandex/docs/en/query_language/functions/string_search_functions/#position-haystack-needle
# positionCaseInsensitive returns 0 if not found and an index of 1 or more if found
# so we should flip the operator here
operator = "=" if search_filter.operator == "!=" else "!="
# make message search case insensitive
return [["positionCaseInsensitive", ["message", f"'{value}'"]], operator, 0]
elif (
name.startswith("stack.") or name.startswith("error.")
) and search_filter.value.is_wildcard():
# Escape and convert meta characters for LIKE expressions.
raw_value = search_filter.value.raw_value
like_value = raw_value.replace("%", "\\%").replace("_", "\\_").replace("*", "%")
operator = "LIKE" if search_filter.operator == "=" else "NOT LIKE"
return [name, operator, like_value]
elif name == "transaction.status":
# Handle "has" queries
if search_filter.value.raw_value == "":
return [["isNull", [name]], search_filter.operator, 1]
internal_value = SPAN_STATUS_NAME_TO_CODE.get(search_filter.value.raw_value)
if internal_value is None:
raise InvalidSearchQuery(
"Invalid value for transaction.status condition. Accepted values are {}".format(
", ".join(SPAN_STATUS_NAME_TO_CODE.keys())
)
)
return [name, search_filter.operator, internal_value]
elif name == "issue.id":
# Handle "has" queries
if search_filter.value.raw_value == "":
if search_filter.operator == "=":
# The state of having no issues is represented differently on transactions vs
# other events. On the transactions table, it is represented by 0 whereas it is
# represented by NULL everywhere else. This means we have to check for both 0
# or NULL.
return [
[["isNull", [name]], search_filter.operator, 1],
[name, search_filter.operator, 0],
]
else:
# Based the reasoning above, we should be checking that it is not 0 and not NULL.
# However, because NULL != 0 evaluates to NULL in Clickhouse, we can simplify it
# to check just not 0.
return [name, search_filter.operator, 0]
# Skip isNull check on group_id value as we want to
# allow snuba's prewhere optimizer to find this condition.
return [name, search_filter.operator, value]
elif name == USER_DISPLAY_ALIAS:
user_display_expr = FIELD_ALIASES[USER_DISPLAY_ALIAS].get_expression(params)
# Handle 'has' condition
if search_filter.value.raw_value == "":
return [["isNull", [user_display_expr]], search_filter.operator, 1]
if search_filter.value.is_wildcard():
return [
["match", [user_display_expr, f"'(?i){value}'"]],
search_filter.operator,
1,
]
return [user_display_expr, search_filter.operator, value]
elif name == ERROR_UNHANDLED_ALIAS:
# This field is the inversion of error.handled, otherwise the logic is the same.
if search_filter.value.raw_value == "":
output = 0 if search_filter.operator == "!=" else 1
return [["isHandled", []], "=", output]
if value in ("1", 1):
return [["notHandled", []], "=", 1]
if value in ("0", 0):
return [["isHandled", []], "=", 1]
raise InvalidSearchQuery(
"Invalid value for error.unhandled condition. Accepted values are 1, 0"
)
elif name == "error.handled":
# Treat has filter as equivalent to handled
if search_filter.value.raw_value == "":
output = 1 if search_filter.operator == "!=" else 0
return [["isHandled", []], "=", output]
# Null values and 1 are the same, and both indicate a handled error.
if value in ("1", 1):
return [["isHandled", []], "=", 1]
if value in (
"0",
0,
):
return [["notHandled", []], "=", 1]
raise InvalidSearchQuery(
"Invalid value for error.handled condition. Accepted values are 1, 0"
)
elif name == KEY_TRANSACTION_ALIAS:
key_transaction_expr = FIELD_ALIASES[KEY_TRANSACTION_ALIAS].get_expression(params)
if search_filter.value.raw_value == "":
operator = "!=" if search_filter.operator == "!=" else "="
return [key_transaction_expr, operator, 0]
if value in ("1", 1):
return [key_transaction_expr, "=", 1]
if value in ("0", 0):
return [key_transaction_expr, "=", 0]
raise InvalidSearchQuery(
"Invalid value for key_transaction condition. Accepted values are 1, 0"
)
elif name in ARRAY_FIELDS and search_filter.value.raw_value == "":
return [["notEmpty", [name]], "=", 1 if search_filter.operator == "!=" else 0]
else:
value = (
int(to_timestamp(value)) * 1000
if isinstance(value, datetime) and name != "timestamp"
else value
)
# Tags are never null, but promoted tags are columns and so can be null.
# To handle both cases, use `ifNull` to convert to an empty string and
# compare so we need to check for empty values.
if search_filter.key.is_tag:
name = ["ifNull", [name, "''"]]
# Handle checks for existence
if search_filter.operator in ("=", "!=") and search_filter.value.value == "":
if search_filter.key.is_tag:
return [name, search_filter.operator, value]
else:
# If not a tag, we can just check that the column is null.
return [["isNull", [name]], search_filter.operator, 1]
is_null_condition = None
if search_filter.operator == "!=" and not search_filter.key.is_tag:
# Handle null columns on inequality comparisons. Any comparison
# between a value and a null will result to null, so we need to
# explicitly check for whether the condition is null, and OR it
# together with the inequality check.
# We don't need to apply this for tags, since if they don't exist
# they'll always be an empty string.
is_null_condition = [["isNull", [name]], "=", 1]
if search_filter.value.is_wildcard():
condition = [["match", [name, f"'(?i){value}'"]], search_filter.operator, 1]
else:
condition = [name, search_filter.operator, value]
# We only want to return as a list if we have the check for null
# present. Returning as a list causes these conditions to be ORed
# together. Otherwise just return the raw condition, so that it can be
# used correctly in aggregates.
if is_null_condition:
return [is_null_condition, condition]
else:
return condition
|
46,253 | def _generate_stubs(output=__file__.replace(".py", ".pyi")):
"""Generat type stubs for view_* functions declared in this file."""
import textwrap
pyi = '# THIS FILE IS AUTOGENERATED. DO NOT EDIT\n'
pyi += '# flake8: noqa\n'
pyi += 'from typing import List, Sequence, Union\n\n'
pyi += 'import napari.viewer\n\n'
for _layer in sorted(list(layers.NAMES) + ['path']):
fname = f'view_{_layer}'
func = globals()[fname]
pyi += f'def {fname}{inspect.signature(func)}:\n'
if func.__doc__:
pyi += textwrap.indent(f'"""{func.__doc__}"""', ' ') + "\n"
else:
pyi += ' ...\n'
try:
import black
pyi = black.format_str(
pyi, mode=black.FileMode(line_length=79, is_pyi=True)
)
except ImportError:
pass
with open(output, 'w') as f:
f.write(pyi.replace("NoneType", "None"))
| def _generate_stubs(output=__file__.replace(".py", ".pyi")):
"""Generate type stubs for view_* functions declared in this file."""
import textwrap
pyi = '# THIS FILE IS AUTOGENERATED. DO NOT EDIT\n'
pyi += '# flake8: noqa\n'
pyi += 'from typing import List, Sequence, Union\n\n'
pyi += 'import napari.viewer\n\n'
for _layer in sorted(list(layers.NAMES) + ['path']):
fname = f'view_{_layer}'
func = globals()[fname]
pyi += f'def {fname}{inspect.signature(func)}:\n'
if func.__doc__:
pyi += textwrap.indent(f'"""{func.__doc__}"""', ' ') + "\n"
else:
pyi += ' ...\n'
try:
import black
pyi = black.format_str(
pyi, mode=black.FileMode(line_length=79, is_pyi=True)
)
except ImportError:
pass
with open(output, 'w') as f:
f.write(pyi.replace("NoneType", "None"))
|
32,719 | def test_callproc_invalid(conn_cnx):
"""Test invalid callproc"""
with conn_cnx() as cnx:
with cnx.cursor() as cur:
cur.execute("drop procedure if exists output_message(varchar)")
# stored procedure does not exist
with pytest.raises(ProgrammingError) as pe:
cur.callproc("output_message")
assert pe.errno == 1044
cur.execute(
"""
create or replace procedure output_message(message varchar)
returns varchar not null
language sql
as
begin
return message;
end;
"""
)
# parameters do not match the signature
with pytest.raises(ProgrammingError) as pe:
cur.callproc("output_message")
assert pe.errno == 1044
with pytest.raises(TypeError):
cur.callproc("output_message", "test varchar")
ret = cur.callproc("output_message", ("test varchar",))
assert ret == ("test varchar",)
res = cur.fetchall()
assert len(res) == 1
assert len(res[0]) == 1
assert res[0][0] == "test varchar"
| def test_callproc_invalid(conn_cnx):
"""Test invalid callproc"""
with conn_cnx() as cnx:
with cnx.cursor() as cur:
cur.execute("drop procedure if exists output_message(varchar)")
# stored procedure does not exist
with pytest.raises(ProgrammingError) as pe:
cur.callproc("output_message")
assert pe.errno == 1044
cur.execute(
"""
create or replace procedure output_message(message varchar)
returns varchar not null
language sql
as
begin
return message;
end;
"""
)
# parameters do not match the signature
with pytest.raises(ProgrammingError) as pe:
cur.callproc("output_message")
assert pe.errno == 1044
with pytest.raises(TypeError):
cur.callproc("output_message", "test varchar")
ret = cur.callproc("output_message", ("test varchar",))
assert ret == ("test varchar",)
res = cur.fetchall()
assert len(res) == 1
assert len(res[0]) == 1
assert res[0][0] == "test varchar"
|
43,484 | def layer2_subcircuit(params):
# all preceding gates
# non-parametrized gates
qml.RY(np.pi/4, wires=0)
qml.RY(np.pi/3, wires=1)
qml.RY(np.pi/7, wires=2)
# Parametrized layer 1
qml.RZ(params[0], wires=0)
qml.RZ(params[1], wires=1)
# non-parametrized gates
qml.CNOT(wires=[0, 1])
qml.CNOT(wires=[1, 2])
| def layer2_subcircuit(params):
# all preceding gates
# non-parametrized gates
qml.RY(np.pi/4, wires=0)
qml.RY(np.pi/3, wires=1)
qml.RY(np.pi/7, wires=2)
# Parametrized layer 1
qml.RZ(params[0], wires=0)
qml.RZ(params[1], wires=1)
# non-parametrized gates
qml.CNOT(wires=[0, 1])
qml.CNOT(wires=[1, 2])
|
39,436 | def get_point_along_spline(distance):
"""Return the closest point on the spline a given a length along a spline."""
idx = np.argmin(np.abs(spline.point_data['arc_length'] - distance))
return spline.points[idx]
| def get_point_along_spline(distance):
"""Return the closest point on the spline given a length along a spline."""
idx = np.argmin(np.abs(spline.point_data['arc_length'] - distance))
return spline.points[idx]
|
31,080 | def verify_server_paid_packs_by_index(server_paid_packs, index_data_packs):
"""Compare two pack dictionaries and assert id's and prices are identical.
Raise AssertionError if the lists differ.
Args:
server_paid_packs: Dictionary of packs to check.
index_data_packs: Dictionary of packs to compare to.
"""
# Sorting both lists by id
sorted_server_packs = sorted(server_paid_packs, key=lambda i: i['id'])
sorted_index_packs = sorted(index_data_packs, key=lambda i: i['id'])
# Checking lists are the same.
for (server_pack, index_pack) in zip(sorted_server_packs, sorted_index_packs):
assert server_pack["id"] == index_pack["id"]
assert server_pack["price"] == index_pack["price"]
logging.success(f'Pack: {server_pack["id"]} is valid.')
| def verify_server_paid_packs_by_index(server_paid_packs, index_data_packs):
"""Compare two pack dictionaries and assert id's and prices are identical.
Raise AssertionError if the lists differ.
Args:
server_paid_packs: Dictionary of packs to check.
index_data_packs: Dictionary of packs to compare to.
"""
# Sorting both lists by id
sorted_server_packs = sorted(server_paid_packs, key=lambda pack: pack['id'])
sorted_index_packs = sorted(index_data_packs, key=lambda i: i['id'])
# Checking lists are the same.
for (server_pack, index_pack) in zip(sorted_server_packs, sorted_index_packs):
assert server_pack["id"] == index_pack["id"]
assert server_pack["price"] == index_pack["price"]
logging.success(f'Pack: {server_pack["id"]} is valid.')
|
43,634 | def observable(me_table, init_term=0, mapping="jordan_wigner"):
r"""Builds the many-body observable whose expectation value can be
measured in PennyLane.
This function can be used to build second-quantized operators in the basis
of single-particle states (e.g., HF states) and to transform them into
PennyLane observables. In general, single- and two-particle operators can be
expanded in a truncated set of orbitals that define an active space,
.. math::
&&\hat A = \sum_{\alpha \leq 2N_\mathrm{docc}} \langle \alpha \vert \hat{\mathcal{A}}
\vert \alpha \rangle ~ \hat{n}_\alpha +
\sum_{\alpha, \beta ~ \in ~ \mathrm{active~space}} \langle \alpha \vert \hat{\mathcal{A}}
\vert \beta \rangle ~ \hat{c}_\alpha^\dagger\hat{c}_\beta \\
&&\hat B = \frac{1}{2} \left\{ \sum_{\alpha, \beta \leq 2N_\mathrm{docc}}
\langle \alpha, \beta \vert \hat{\mathcal{B}} \vert \beta, \alpha \rangle
~ \hat{n}_\alpha \hat{n}_\beta + \sum_{\alpha, \beta, \gamma, \delta ~
\in ~ \mathrm{active~space}} \langle \alpha, \beta \vert \hat{\mathcal{B}}
\vert \gamma, \delta \rangle ~ \hat{c}_{\alpha}^\dagger \hat{c}_{\beta}^\dagger
\hat{c}_{\gamma} \hat{c}_{\delta} \right\}.
In the latter equations :math:`N_\mathrm{docc}` denotes the doubly-occupied orbitals,
if any, not included in the active space and
:math:`\langle \alpha \vert \hat{\mathcal{A}} \vert \beta \rangle` and
:math:`\langle \alpha, \beta \vert\hat{\mathcal{B}} \vert \gamma, \delta \rangle`
are the matrix elements of the one- and two-particle operators
:math:`\hat{\mathcal{A}}` and :math:`\hat{\mathcal{B}}`, respectively.
The function utilizes tools of `OpenFermion <https://github.com/quantumlib/OpenFermion>`_
to build the second-quantized operator and map it to basis of Pauli matrices via the
Jordan-Wigner or Bravyi-Kitaev transformation. Finally, the qubit operator is
converted to a a PennyLane observable by the function :func:`~.convert_observable`.
Args:
me_table (array[float]): Numpy array with the table of matrix elements.
For a single-particle operator this array will have shape
``(me_table.shape[0], 3)`` with each row containing the indices
:math:`\alpha`, :math:`\beta` and the matrix element :math:`\langle \alpha \vert
\hat{\mathcal{A}}\vert \beta \rangle`. For a two-particle operator this
array will have shape ``(me_table.shape[0], 5)`` with each row containing
the indices :math:`\alpha`, :math:`\beta`, :math:`\gamma`, :math:`\delta` and
the matrix elements :math:`\langle \alpha, \beta \vert \hat{\mathcal{B}}
\vert \gamma, \delta \rangle`.
init_term: the contribution of doubly-occupied orbitals, if any, or other quantity
required to initialize the many-body observable.
mapping (str): specifies the fermion-to-qubit mapping. Input values can
be ``'jordan_wigner'`` or ``'bravyi_kitaev'``.
Returns:
pennylane.Hamiltonian: the fermionic-to-qubit transformed observable
**Example**
>>> s2_matrix_elements, init_term = get_spin2_matrix_elements('h2', './pyscf/sto-3g')
>>> s2_obs = observable(s2_matrix_elements, init_term=init_term)
>>> print(type(s2_obs))
<class 'pennylane.vqe.vqe.Hamiltonian'>
>>> print(s2_obs)
(0.75) [I0]
+ (0.375) [Z1]
+ (-0.375) [Z0 Z1]
+ (0.125) [Z0 Z2]
+ (0.375) [Z0]
+ (-0.125) [Z0 Z3]
+ (-0.125) [Z1 Z2]
+ (0.125) [Z1 Z3]
+ (0.375) [Z2]
+ (0.375) [Z3]
+ (-0.375) [Z2 Z3]
+ (0.125) [Y0 X1 Y2 X3]
+ (0.125) [Y0 Y1 X2 X3]
+ (0.125) [Y0 Y1 Y2 Y3]
+ (-0.125) [Y0 X1 X2 Y3]
+ (-0.125) [X0 Y1 Y2 X3]
+ (0.125) [X0 X1 X2 X3]
+ (0.125) [X0 X1 Y2 Y3]
+ (0.125) [X0 Y1 X2 Y3]
"""
if mapping.strip().lower() not in ("jordan_wigner", "bravyi_kitaev"):
raise TypeError(
"The '{}' transformation is not available. \n "
"Please set 'mapping' to 'jordan_wigner' or 'bravyi_kitaev'.".format(mapping)
)
sp_op_shape = (3,)
tp_op_shape = (5,)
for i_table in me_table:
if np.array(i_table).shape not in (sp_op_shape, tp_op_shape):
raise ValueError(
"expected entries of 'me_table' to be of shape (3,) or (5,) ; got {}".format(
np.array(i_table).shape
)
)
# Initialize the FermionOperator
mb_obs = FermionOperator() + FermionOperator("") * init_term
for i in me_table:
if i.shape == (5,):
# two-particle operator
mb_obs += FermionOperator(
((int(i[0]), 1), (int(i[1]), 1), (int(i[2]), 0), (int(i[3]), 0)), i[4]
)
elif i.shape == (3,):
# single-particle operator
mb_obs += FermionOperator(((int(i[0]), 1), (int(i[1]), 0)), i[2])
# Map the fermionic to a qubit operator measurable in PennyLane
if mapping.strip().lower() == "bravyi_kitaev":
return structure.convert_observable(bravyi_kitaev(mb_obs))
return structure.convert_observable(jordan_wigner(mb_obs))
| def observable(me_table, init_term=0, mapping="jordan_wigner"):
r"""Builds the many-body observable whose expectation value can be
measured in PennyLane.
This function can be used to build second-quantized operators in the basis
of single-particle states (e.g., HF states) and to transform them into
PennyLane observables. In general, single- and two-particle operators can be
expanded in a truncated set of orbitals that define an active space,
.. math::
&&\hat A = \sum_{\alpha \leq 2N_\mathrm{docc}} \langle \alpha \vert \hat{\mathcal{A}}
\vert \alpha \rangle ~ \hat{n}_\alpha +
\sum_{\alpha, \beta ~ \in ~ \mathrm{active~space}} \langle \alpha \vert \hat{\mathcal{A}}
\vert \beta \rangle ~ \hat{c}_\alpha^\dagger\hat{c}_\beta \\
&&\hat B = \frac{1}{2} \left\{ \sum_{\alpha, \beta \leq 2N_\mathrm{docc}}
\langle \alpha, \beta \vert \hat{\mathcal{B}} \vert \beta, \alpha \rangle
~ \hat{n}_\alpha \hat{n}_\beta + \sum_{\alpha, \beta, \gamma, \delta ~
\in ~ \mathrm{active~space}} \langle \alpha, \beta \vert \hat{\mathcal{B}}
\vert \gamma, \delta \rangle ~ \hat{c}_{\alpha}^\dagger \hat{c}_{\beta}^\dagger
\hat{c}_{\gamma} \hat{c}_{\delta} \right\}.
In the latter equations :math:`N_\mathrm{docc}` denotes the doubly-occupied orbitals,
if any, not included in the active space and
:math:`\langle \alpha \vert \hat{\mathcal{A}} \vert \beta \rangle` and
:math:`\langle \alpha, \beta \vert\hat{\mathcal{B}} \vert \gamma, \delta \rangle`
are the matrix elements of the one- and two-particle operators
:math:`\hat{\mathcal{A}}` and :math:`\hat{\mathcal{B}}`, respectively.
The function utilizes tools of `OpenFermion <https://github.com/quantumlib/OpenFermion>`_
to build the second-quantized operator and map it to basis of Pauli matrices via the
Jordan-Wigner or Bravyi-Kitaev transformation. Finally, the qubit operator is
converted to a a PennyLane observable by the function :func:`~.convert_observable`.
Args:
me_table (array[float]): Numpy array with the table of matrix elements.
For a single-particle operator this array will have shape
``(me_table.shape[0], 3)`` with each row containing the indices
:math:`\alpha`, :math:`\beta` and the matrix element :math:`\langle \alpha \vert
\hat{\mathcal{A}}\vert \beta \rangle`. For a two-particle operator this
array will have shape ``(me_table.shape[0], 5)`` with each row containing
the indices :math:`\alpha`, :math:`\beta`, :math:`\gamma`, :math:`\delta` and
the matrix elements :math:`\langle \alpha, \beta \vert \hat{\mathcal{B}}
\vert \gamma, \delta \rangle`.
init_term: the contribution of doubly-occupied orbitals, if any, or other quantity
required to initialize the many-body observable.
mapping (str): specifies the fermion-to-qubit mapping. Input values can
be ``'jordan_wigner'`` or ``'bravyi_kitaev'``.
Returns:
pennylane.Hamiltonian: the fermionic-to-qubit transformed observable
**Example**
>>> s2_matrix_elements, init_term = get_spin2_matrix_elements('h2', './pyscf/sto-3g')
>>> s2_obs = observable(s2_matrix_elements, init_term=init_term)
>>> print(type(s2_obs))
<class 'pennylane.vqe.vqe.Hamiltonian'>
>>> print(s2_obs)
(0.75) [I0]
+ (0.375) [Z1]
+ (-0.375) [Z0 Z1]
+ (0.125) [Z0 Z2]
+ (0.375) [Z0]
+ (-0.125) [Z0 Z3]
+ (-0.125) [Z1 Z2]
+ (0.125) [Z1 Z3]
+ (0.375) [Z2]
+ (0.375) [Z3]
+ (-0.375) [Z2 Z3]
+ (0.125) [Y0 X1 Y2 X3]
+ (0.125) [Y0 Y1 X2 X3]
+ (0.125) [Y0 Y1 Y2 Y3]
+ (-0.125) [Y0 X1 X2 Y3]
+ (-0.125) [X0 Y1 Y2 X3]
+ (0.125) [X0 X1 X2 X3]
+ (0.125) [X0 X1 Y2 Y3]
+ (0.125) [X0 Y1 X2 Y3]
"""
if mapping.strip().lower() not in ("jordan_wigner", "bravyi_kitaev"):
raise TypeError(
"The '{}' transformation is not available. \n "
"Please set 'mapping' to 'jordan_wigner' or 'bravyi_kitaev'.".format(mapping)
)
sp_op_shape = (3,)
tp_op_shape = (5,)
for i_table in me_table:
if np.array(i_table).shape not in (sp_op_shape, tp_op_shape):
raise ValueError(
"expected entries of 'me_table' to be of shape (3,) or (5,) ; got {}".format(
np.array(i_table).shape
)
)
# Initialize the FermionOperator
mb_obs = FermionOperator() + FermionOperator("") * init_term
for i in me_table:
if i.shape == (5,):
# two-particle operator
mb_obs += FermionOperator(
((int(i[0]), 1), (int(i[1]), 1), (int(i[2]), 0), (int(i[3]), 0)), i[4]
)
elif i.shape == (3,):
# single-particle operator
mb_obs += FermionOperator(((int(i[0]), 1), (int(i[1]), 0)), i[2])
# Map the fermionic operator to a qubit operator
if mapping.strip().lower() == "bravyi_kitaev":
return structure.convert_observable(bravyi_kitaev(mb_obs))
return structure.convert_observable(jordan_wigner(mb_obs))
|
30,353 | def get_relevant_prs(the_time: datetime, label: str, query: str) -> list:
reg = re.compile("\.\d{6}$")
timestamp, _ = reg.subn('', the_time.isoformat())
query = query.replace('{USER}', USER).replace('{REPOSITORY}', REPOSITORY).replace('{timestamp}', timestamp)
# if label was passed then use it in the query otherwise remove that part of the query
if label:
query = query.replace('{label}', label)
elif ' label:{label}' in query:
query = query.replace(' label:{label}', '')
elif ' -label:{label}' in query:
query = query.replace(' -label:{label}', '')
matching_issues = search_issue(query).get('items', [])
relevant_prs = [get_pull_request(issue.get('number')) for issue in matching_issues]
return relevant_prs
| def get_relevant_prs(the_time: datetime, label: str, query: str) -> list:
reg = re.compile('\.\d{6}$')
timestamp, _ = reg.subn('', the_time.isoformat())
query = query.replace('{USER}', USER).replace('{REPOSITORY}', REPOSITORY).replace('{timestamp}', timestamp)
# if label was passed then use it in the query otherwise remove that part of the query
if label:
query = query.replace('{label}', label)
elif ' label:{label}' in query:
query = query.replace(' label:{label}', '')
elif ' -label:{label}' in query:
query = query.replace(' -label:{label}', '')
matching_issues = search_issue(query).get('items', [])
relevant_prs = [get_pull_request(issue.get('number')) for issue in matching_issues]
return relevant_prs
|
41,920 | def _filter_out_observation_pairs(
values: List[Optional[float]],
scores: List[Tuple[float, float]],
distribution: BaseDistribution,
) -> Tuple[List[Optional[float]], List[Tuple[float, float]]]:
ret_values = []
ret_scores = []
for value, score in zip(values, scores):
if value is None or distribution._contains(value):
ret_values.append(value)
ret_scores.append(score)
return ret_values, ret_scores
| def _filter_out_observation_pairs(
values: List[Optional[float]],
scores: List[Tuple[float, float]],
distribution: BaseDistribution,
) -> Tuple[List[Optional[float]], List[Tuple[float, float]]]:
ret_values = []
ret_scores = []
for value, score in zip(values, scores):
if value is not None or distribution._contains(value):
ret_values.append(value)
ret_scores.append(score)
return ret_values, ret_scores
|
13,439 | def registers_from_candidates(candidate_registers_values, args):
kamstrupRegisterCopier = KamstrupRegisterCopier(
args.host, args.port, int(args.communication_address)
)
found_registers = {}
not_found_counts = 0
scanned = 0
dumpfile = "kamstrup_dump_{0}.json".format(
calendar.timegm(datetime.utcnow().utctimetuple())
)
for x in candidate_registers_values:
result = kamstrupRegisterCopier.get_register(x)
if len(result) > 12:
units = result[5]
length = result[6]
unknown = result[7]
register_value = 0
for p in range(length):
register_value += result[8 + p] << (8 * ((length - p) - 1))
found_registers[x] = {
"timestamp": datetime.utcnow(),
"units": units,
"value": register_value,
"value_length": length,
"unknown": unknown,
}
print("Found register value at {0}:{1}".format(hex(x), register_value))
with open(dumpfile, "w") as json_file:
json_file.write(json.dumps(found_registers, indent=4, default=json_default))
else:
not_found_counts += 1
if not_found_counts % 10 == 0:
print(
"Hang on, still scanning, so far scanned {0} and found {1} registers".format(
scanned, len(found_registers)
)
)
scanned += 1
return found_registers
| def registers_from_candidates(candidate_registers_values, args):
kamstrupRegisterCopier = KamstrupRegisterCopier(
args.host, args.port, int(args.communication_address)
)
found_registers = {}
not_found_counts = 0
scanned = 0
dumpfile = "kamstrup_dump_{0}.json".format(
calendar.timegm(datetime.utcnow().utctimetuple())
)
for register_id in candidate_registers_values:
result = kamstrupRegisterCopier.get_register(x)
if len(result) > 12:
units = result[5]
length = result[6]
unknown = result[7]
register_value = 0
for p in range(length):
register_value += result[8 + p] << (8 * ((length - p) - 1))
found_registers[x] = {
"timestamp": datetime.utcnow(),
"units": units,
"value": register_value,
"value_length": length,
"unknown": unknown,
}
print("Found register value at {0}:{1}".format(hex(x), register_value))
with open(dumpfile, "w") as json_file:
json_file.write(json.dumps(found_registers, indent=4, default=json_default))
else:
not_found_counts += 1
if not_found_counts % 10 == 0:
print(
"Hang on, still scanning, so far scanned {0} and found {1} registers".format(
scanned, len(found_registers)
)
)
scanned += 1
return found_registers
|
857 | def primes(n):
""" Generate a list of first n prime numbers.
This function is equivalent to primerange(2, prime(n) + 1)
If the range exists in the default sieve, the values will
be returned from there; otherwise values will be returned
but will not modify the sieve.
Examples
========
>>> from sympy import primes, Symbol
>>> print([i for i in primes(10)])
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
See Also
========
nextprime : Return the ith prime greater than n
prevprime : Return the largest prime smaller than n
randprime : Returns a random prime in a given range
primorial : Returns the product of primes based on condition
primerange : Generate a list of all prime numbers in the range [a, b)
References
==========
.. [1] https://en.wikipedia.org/wiki/Prime_number
"""
if ask(Q.nonpositive(n)):
return primerange(0, 0)
return primerange(2, prime(n) + 1)
| def primes(n):
""" Generate a list of first n prime numbers.
This function is equivalent to primerange(2, prime(n) + 1)
If the range exists in the default sieve, the values will
be returned from there; otherwise values will be returned
but will not modify the sieve.
Examples
========
>>> from sympy import primes
>>> print([i for i in primes(10)])
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
See Also
========
nextprime : Return the ith prime greater than n
prevprime : Return the largest prime smaller than n
randprime : Returns a random prime in a given range
primorial : Returns the product of primes based on condition
primerange : Generate a list of all prime numbers in the range [a, b)
References
==========
.. [1] https://en.wikipedia.org/wiki/Prime_number
"""
if ask(Q.nonpositive(n)):
return primerange(0, 0)
return primerange(2, prime(n) + 1)
|
51,523 | def migrate():
dest_dir = settings.BANNER_PATH
src_dir = os.path.join(settings.DATA_DIR, "banners")
# init_lutris() creates the new banners directrory
if os.path.isdir(src_dir) and os.path.isdir(dest_dir):
for filename in os.listdir(src_dir):
try:
src_file = os.path.join(src_dir, filename)
dest_file = os.path.join(dest_dir, filename)
if not os.path.exists(dest_file):
os.rename(src_file, dest_file)
except OSError:
pass # Skip what we can't migrate
| def migrate():
dest_dir = settings.BANNER_PATH
src_dir = os.path.join(settings.DATA_DIR, "banners")
os.makedirs(dest_dir, exist_ok=True)
# Check if dirs_exist_ok is defined ( Python >= 3.8)
sig = inspect.signature(shutil.copytree)
if "dirs_exist_ok" in sig.parameters:
shutil.copytree(source, destination, symlinks=False, ignore_dangling_symlinks=True, dirs_exist_ok=True)
else:
shutil.copytree(source, destination, symlinks=False, ignore_dangling_symlinks=True)
shutil.rmtree(src_dir, ignore_errors=True)
|
49,651 | def unwrap_all(obj: Any, *, stop: Callable = None) -> Any:
"""
Get an original object from wrapped object (unwrapping partials, wrapped
functions, and other decorators).
"""
while True:
if stop and stop(obj):
return obj
elif ispartial(obj):
obj = obj.func
elif inspect.isroutine(obj) and hasattr(obj, '__wrapped__'):
if hasattr(obj.__wrapped__, '__call__'):
return obj # Don't unwrap wrapped callable types
obj = obj.__wrapped__ # type: ignore
elif isclassmethod(obj):
obj = obj.__func__
elif isstaticmethod(obj):
obj = obj.__func__
else:
return obj
| def unwrap_all(obj: Any, *, stop: Callable = None) -> Any:
"""
Get an original object from wrapped object (unwrapping partials, wrapped
functions, and other decorators).
"""
while True:
if stop and stop(obj):
return obj
elif ispartial(obj):
obj = obj.func
elif inspect.isroutine(obj) and hasattr(obj, '__wrapped__'):
# Don't unwrap wrapped callable types:
if hasattr(obj.__wrapped__, '__call__'):
# From Python 3.10, staticmethod and classmethod have
# "__wrapped__" and "__wrapped__.__call__" attributes.
# Check that we are in Python 3.9 or earlier, or that the object
# isn't a staticmethod or classmethod before returning.
# xref: https://docs.python.org/3.10/whatsnew/3.10.html#other-language-changes
if sys.version_info[:2] < (3, 10):
return obj
if not (isstaticmethod(obj) or isclassmethod(obj)):
return obj
obj = obj.__wrapped__ # type: ignore
elif isclassmethod(obj):
obj = obj.__func__
elif isstaticmethod(obj):
obj = obj.__func__
else:
return obj
|
47,053 | def main():
# See all possible arguments in src/transformers/training_args.py
# or by passing the --help flag to this script.
# We now keep distinct sets of args, for a cleaner separation of concerns.
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
# If we pass only one argument to the script and it's the path to a json file,
# let's parse it to get our arguments.
model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
else:
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
if (
os.path.exists(training_args.output_dir)
and os.listdir(training_args.output_dir)
and training_args.do_train
and not training_args.overwrite_output_dir
):
raise ValueError(
f"Output directory ({training_args.output_dir}) already exists and is not empty."
"Use --overwrite_output_dir to overcome."
)
# Setup logging
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
)
logger.setLevel(logging.INFO if is_main_process(training_args.local_rank) else logging.WARN)
# Log on each process the small summary:
logger.warning(
f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}"
+ f"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}"
)
# Set the verbosity to info of the Transformers logger (on main process only):
if is_main_process(training_args.local_rank):
transformers.utils.logging.set_verbosity_info()
logger.info("Training/evaluation parameters %s", training_args)
# Set seed before initializing model.
set_seed(training_args.seed)
# Get the datasets: you can either provide your own CSV/JSON/TXT training and evaluation files (see below)
# or just provide the name of one of the public datasets available on the hub at https://huggingface.co/datasets/
# (the dataset will be downloaded automatically from the datasets Hub).
#
# For CSV/JSON files, this script will use the column called 'text' or the first column if no column called
# 'text' is found. You can easily tweak this behavior (see below).
#
# In distributed training, the load_dataset function guarantee that only one local process can concurrently
# download the dataset.
if data_args.dataset_name is not None:
# Downloading and loading a dataset from the hub.
datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)
else:
data_files = {}
if data_args.train_file is not None:
data_files["train"] = data_args.train_file
if data_args.validation_file is not None:
data_files["validation"] = data_args.validation_file
extension = data_args.train_file.split(".")[-1]
datasets = load_dataset(extension, data_files=data_files, field="data")
# See more about loading any type of standard or custom dataset (from files, python dict, pandas DataFrame, etc) at
# https://huggingface.co/docs/datasets/loading_datasets.html.
# Load pretrained model and tokenizer
#
# Distributed training:
# The .from_pretrained methods guarantee that only one local process can concurrently
# download model & vocab.
config = XLNetConfig.from_pretrained(
model_args.config_name if model_args.config_name else model_args.model_name_or_path,
cache_dir=model_args.cache_dir,
revision=model_args.model_revision,
use_auth_token=True if model_args.use_auth_token else None,
)
tokenizer = XLNetTokenizerFast.from_pretrained(
model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
cache_dir=model_args.cache_dir,
revision=model_args.model_revision,
use_auth_token=True if model_args.use_auth_token else None,
)
model = XLNetForQuestionAnswering.from_pretrained(
model_args.model_name_or_path,
from_tf=bool(".ckpt" in model_args.model_name_or_path),
config=config,
cache_dir=model_args.cache_dir,
revision=model_args.model_revision,
use_auth_token=True if model_args.use_auth_token else None,
)
# Preprocessing the datasets.
# Preprocessing is slighlty different for training and evaluation.
if training_args.do_train:
column_names = datasets["train"].column_names
else:
column_names = datasets["validation"].column_names
question_column_name = "question" if "question" in column_names else column_names[0]
context_column_name = "context" if "context" in column_names else column_names[1]
answer_column_name = "answers" if "answers" in column_names else column_names[2]
# Padding side determines if we do (question|context) or (context|question).
pad_on_right = tokenizer.padding_side == "right"
# Training preprocessing
def prepare_train_features(examples):
# Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results
# in one example possible giving several features when a context is long, each of those features having a
# context that overlaps a bit the context of the previous feature.
tokenized_examples = tokenizer(
examples[question_column_name if pad_on_right else context_column_name],
examples[context_column_name if pad_on_right else question_column_name],
truncation="only_second" if pad_on_right else "only_first",
max_length=data_args.max_seq_length,
stride=data_args.doc_stride,
return_overflowing_tokens=True,
return_offsets_mapping=True,
return_special_tokens_mask=True,
return_token_type_ids=True,
padding="max_length",
)
# Since one example might give us several features if it has a long context, we need a map from a feature to
# its corresponding example. This key gives us just that.
sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
# The offset mappings will give us a map from token to character position in the original context. This will
# help us compute the start_positions and end_positions.
offset_mapping = tokenized_examples.pop("offset_mapping")
# The special tokens will help us build the p_mask (which indicates the tokens that can't be in answers).
special_tokens = tokenized_examples.pop("special_tokens_mask")
# Let's label those examples!
tokenized_examples["start_positions"] = []
tokenized_examples["end_positions"] = []
tokenized_examples["is_impossible"] = []
tokenized_examples["cls_index"] = []
tokenized_examples["p_mask"] = []
for i, offsets in enumerate(offset_mapping):
# We will label impossible answers with the index of the CLS token.
input_ids = tokenized_examples["input_ids"][i]
cls_index = input_ids.index(tokenizer.cls_token_id)
tokenized_examples["cls_index"].append(cls_index)
# Grab the sequence corresponding to that example (to know what is the context and what is the question).
sequence_ids = tokenized_examples["token_type_ids"][i]
for k, s in enumerate(special_tokens[i]):
if s:
sequence_ids[k] = 3
context_idx = 1 if pad_on_right else 0
# Build the p_mask: non special tokens and context gets 0.0, the others get 1.0.
# The cls token gets 1.0 too (for predictions of empty answers).
tokenized_examples["p_mask"].append(
[
0.0 if (not special_tokens[i][k] and s == context_idx) or k == cls_index else 1.0
for k, s in enumerate(sequence_ids)
]
)
# One example can give several spans, this is the index of the example containing this span of text.
sample_index = sample_mapping[i]
answers = examples[answer_column_name][sample_index]
# If no answers are given, set the cls_index as answer.
if len(answers["answer_start"]) == 0:
tokenized_examples["start_positions"].append(cls_index)
tokenized_examples["end_positions"].append(cls_index)
tokenized_examples["is_impossible"].append(1.0)
else:
# Start/end character index of the answer in the text.
start_char = answers["answer_start"][0]
end_char = start_char + len(answers["text"][0])
# Start token index of the current span in the text.
token_start_index = 0
while sequence_ids[token_start_index] != context_idx:
token_start_index += 1
# End token index of the current span in the text.
token_end_index = len(input_ids) - 1
while sequence_ids[token_end_index] != context_idx:
token_end_index -= 1
# Detect if the answer is out of the span (in which case this feature is labeled with the CLS index).
if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char):
tokenized_examples["start_positions"].append(cls_index)
tokenized_examples["end_positions"].append(cls_index)
tokenized_examples["is_impossible"].append(1.0)
else:
# Otherwise move the token_start_index and token_end_index to the two ends of the answer.
# Note: we could go after the last offset if the answer is the last word (edge case).
while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char:
token_start_index += 1
tokenized_examples["start_positions"].append(token_start_index - 1)
while offsets[token_end_index][1] >= end_char:
token_end_index -= 1
tokenized_examples["end_positions"].append(token_end_index + 1)
tokenized_examples["is_impossible"].append(0.0)
return tokenized_examples
if training_args.do_train:
train_dataset = datasets["train"].map(
prepare_train_features,
batched=True,
num_proc=data_args.preprocessing_num_workers,
remove_columns=column_names,
load_from_cache_file=not data_args.overwrite_cache,
)
# Validation preprocessing
def prepare_validation_features(examples):
# Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results
# in one example possible giving several features when a context is long, each of those features having a
# context that overlaps a bit the context of the previous feature.
tokenized_examples = tokenizer(
examples[question_column_name if pad_on_right else context_column_name],
examples[context_column_name if pad_on_right else question_column_name],
truncation="only_second" if pad_on_right else "only_first",
max_length=data_args.max_seq_length,
stride=data_args.doc_stride,
return_overflowing_tokens=True,
return_offsets_mapping=True,
return_special_tokens_mask=True,
return_token_type_ids=True,
padding="max_length",
)
# Since one example might give us several features if it has a long context, we need a map from a feature to
# its corresponding example. This key gives us just that.
sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
# The special tokens will help us build the p_mask (which indicates the tokens that can't be in answers).
special_tokens = tokenized_examples.pop("special_tokens_mask")
# For evaluation, we will need to convert our predictions to substrings of the context, so we keep the
# corresponding example_id and we will store the offset mappings.
tokenized_examples["example_id"] = []
# We still provide the index of the CLS token and the p_mask to the model, but not the is_impossible label.
tokenized_examples["cls_index"] = []
tokenized_examples["p_mask"] = []
for i, input_ids in enumerate(tokenized_examples["input_ids"]):
# Find the CLS token in the input ids.
cls_index = input_ids.index(tokenizer.cls_token_id)
tokenized_examples["cls_index"].append(cls_index)
# Grab the sequence corresponding to that example (to know what is the context and what is the question).
sequence_ids = tokenized_examples["token_type_ids"][i]
for k, s in enumerate(special_tokens[i]):
if s:
sequence_ids[k] = 3
context_idx = 1 if pad_on_right else 0
# Build the p_mask: non special tokens and context gets 0.0, the others 1.0.
tokenized_examples["p_mask"].append(
[
0.0 if (not special_tokens[i][k] and s == context_idx) or k == cls_index else 1.0
for k, s in enumerate(sequence_ids)
]
)
# One example can give several spans, this is the index of the example containing this span of text.
sample_index = sample_mapping[i]
tokenized_examples["example_id"].append(examples["id"][sample_index])
# Set to None the offset_mapping that are not part of the context so it's easy to determine if a token
# position is part of the context or not.
tokenized_examples["offset_mapping"][i] = [
(o if sequence_ids[k] == context_idx else None)
for k, o in enumerate(tokenized_examples["offset_mapping"][i])
]
return tokenized_examples
if training_args.do_eval:
validation_dataset = datasets["validation"].map(
prepare_validation_features,
batched=True,
num_proc=data_args.preprocessing_num_workers,
remove_columns=column_names,
load_from_cache_file=not data_args.overwrite_cache,
)
# Data collator
# We have already padded to max length if the corresponding flag is True, otherwise we need to pad in the data
# collator.
data_collator = (
default_data_collator
if data_args.pad_to_max_length
else DataCollatorWithPadding(
tokenizer,
pad_to_multiple_of=8 if training_args.fp16 else None,
)
)
# Post-processing:
def post_processing_function(examples, features, predictions):
# Post-processing: we match the start logits and end logits to answers in the original context.
predictions, scores_diff_json = postprocess_qa_predictions_with_beam_search(
examples=examples,
features=features,
predictions=predictions,
version_2_with_negative=data_args.version_2_with_negative,
n_best_size=data_args.n_best_size,
max_answer_length=data_args.max_answer_length,
start_n_top=model.config.start_n_top,
end_n_top=model.config.end_n_top,
output_dir=training_args.output_dir,
is_world_process_zero=trainer.is_world_process_zero(),
)
# Format the result to the format the metric expects.
if data_args.version_2_with_negative:
formatted_predictions = [
{"id": k, "prediction_text": v, "no_answer_probability": scores_diff_json[k]}
for k, v in predictions.items()
]
else:
formatted_predictions = [{"id": k, "prediction_text": v} for k, v in predictions.items()]
references = [{"id": ex["id"], "answers": ex[answer_column_name]} for ex in datasets["validation"]]
return EvalPrediction(predictions=formatted_predictions, label_ids=references)
metric = load_metric("squad_v2" if data_args.version_2_with_negative else "squad")
def compute_metrics(p: EvalPrediction):
return metric.compute(predictions=p.predictions, references=p.label_ids)
# Initialize our Trainer
trainer = QuestionAnsweringTrainer(
model=model,
args=training_args,
train_dataset=train_dataset if training_args.do_train else None,
eval_dataset=validation_dataset if training_args.do_eval else None,
eval_examples=datasets["validation"] if training_args.do_eval else None,
tokenizer=tokenizer,
data_collator=data_collator,
post_process_function=post_processing_function,
compute_metrics=compute_metrics,
)
# Training
if training_args.do_train:
train_result = trainer.train(
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
)
trainer.save_model() # Saves the tokenizer too for easy upload
output_train_file = os.path.join(training_args.output_dir, "train_results.txt")
if trainer.is_world_process_zero():
with open(output_train_file, "w") as writer:
logger.info("***** Train results *****")
for key, value in sorted(train_result.metrics.items()):
logger.info(f" {key} = {value}")
writer.write(f"{key} = {value}\n")
# Need to save the state, since Trainer.save_model saves only the tokenizer with the model
trainer.state.save_to_json(os.path.join(training_args.output_dir, "trainer_state.json"))
# Evaluation
results = {}
if training_args.do_eval:
logger.info("*** Evaluate ***")
results = trainer.evaluate()
output_eval_file = os.path.join(training_args.output_dir, "eval_results.txt")
if trainer.is_world_process_zero():
with open(output_eval_file, "w") as writer:
logger.info("***** Eval results *****")
for key, value in sorted(results.items()):
logger.info(f" {key} = {value}")
writer.write(f"{key} = {value}\n")
return results
| def main():
# See all possible arguments in src/transformers/training_args.py
# or by passing the --help flag to this script.
# We now keep distinct sets of args, for a cleaner separation of concerns.
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
# If we pass only one argument to the script and it's the path to a json file,
# let's parse it to get our arguments.
model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
else:
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
if (
os.path.exists(training_args.output_dir)
and os.listdir(training_args.output_dir)
and training_args.do_train
and not training_args.overwrite_output_dir
):
raise ValueError(
f"Output directory ({training_args.output_dir}) already exists and is not empty."
"Use --overwrite_output_dir to overcome."
)
# Setup logging
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
)
logger.setLevel(logging.INFO if is_main_process(training_args.local_rank) else logging.WARN)
# Log on each process the small summary:
logger.warning(
f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}"
+ f"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}"
)
# Set the verbosity to info of the Transformers logger (on main process only):
if is_main_process(training_args.local_rank):
transformers.utils.logging.set_verbosity_info()
logger.info("Training/evaluation parameters %s", training_args)
# Set seed before initializing model.
set_seed(training_args.seed)
# Get the datasets: you can either provide your own CSV/JSON/TXT training and evaluation files (see below)
# or just provide the name of one of the public datasets available on the hub at https://huggingface.co/datasets/
# (the dataset will be downloaded automatically from the datasets Hub).
#
# For CSV/JSON files, this script will use the column called 'text' or the first column if no column called
# 'text' is found. You can easily tweak this behavior (see below).
#
# In distributed training, the load_dataset function guarantee that only one local process can concurrently
# download the dataset.
if data_args.dataset_name is not None:
# Downloading and loading a dataset from the hub.
datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)
else:
data_files = {}
if data_args.train_file is not None:
data_files["train"] = data_args.train_file
if data_args.validation_file is not None:
data_files["validation"] = data_args.validation_file
extension = data_args.train_file.split(".")[-1]
datasets = load_dataset(extension, data_files=data_files, field="data")
# See more about loading any type of standard or custom dataset (from files, python dict, pandas DataFrame, etc) at
# https://huggingface.co/docs/datasets/loading_datasets.html.
# Load pretrained model and tokenizer
#
# Distributed training:
# The .from_pretrained methods guarantee that only one local process can concurrently
# download model & vocab.
config = XLNetConfig.from_pretrained(
model_args.config_name if model_args.config_name else model_args.model_name_or_path,
cache_dir=model_args.cache_dir,
revision=model_args.model_revision,
use_auth_token=True if model_args.use_auth_token else None,
)
tokenizer = XLNetTokenizerFast.from_pretrained(
model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
cache_dir=model_args.cache_dir,
revision=model_args.model_revision,
use_auth_token=True if model_args.use_auth_token else None,
)
model = XLNetForQuestionAnswering.from_pretrained(
model_args.model_name_or_path,
from_tf=bool(".ckpt" in model_args.model_name_or_path),
config=config,
cache_dir=model_args.cache_dir,
revision=model_args.model_revision,
use_auth_token=True if model_args.use_auth_token else None,
)
# Preprocessing the datasets.
# Preprocessing is slighlty different for training and evaluation.
if training_args.do_train:
column_names = datasets["train"].column_names
else:
column_names = datasets["validation"].column_names
question_column_name = "question" if "question" in column_names else column_names[0]
context_column_name = "context" if "context" in column_names else column_names[1]
answer_column_name = "answers" if "answers" in column_names else column_names[2]
# Padding side determines if we do (question|context) or (context|question).
pad_on_right = tokenizer.padding_side == "right"
# Training preprocessing
def prepare_train_features(examples):
# Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results
# in one example possible giving several features when a context is long, each of those features having a
# context that overlaps a bit the context of the previous feature.
tokenized_examples = tokenizer(
examples[question_column_name if pad_on_right else context_column_name],
examples[context_column_name if pad_on_right else question_column_name],
truncation="only_second" if pad_on_right else "only_first",
max_length=data_args.max_seq_length,
stride=data_args.doc_stride,
return_overflowing_tokens=True,
return_offsets_mapping=True,
return_special_tokens_mask=True,
return_token_type_ids=True,
padding="max_length",
)
# Since one example might give us several features if it has a long context, we need a map from a feature to
# its corresponding example. This key gives us just that.
sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
# The offset mappings will give us a map from token to character position in the original context. This will
# help us compute the start_positions and end_positions.
offset_mapping = tokenized_examples.pop("offset_mapping")
# The special tokens will help us build the p_mask (which indicates the tokens that can't be in answers).
special_tokens = tokenized_examples.pop("special_tokens_mask")
# Let's label those examples!
tokenized_examples["start_positions"] = []
tokenized_examples["end_positions"] = []
tokenized_examples["is_impossible"] = []
tokenized_examples["cls_index"] = []
tokenized_examples["p_mask"] = []
for i, offsets in enumerate(offset_mapping):
# We will label impossible answers with the index of the CLS token.
input_ids = tokenized_examples["input_ids"][i]
cls_index = input_ids.index(tokenizer.cls_token_id)
tokenized_examples["cls_index"].append(cls_index)
# Grab the sequence corresponding to that example (to know what is the context and what is the question).
sequence_ids = tokenized_examples["token_type_ids"][i]
for k, s in enumerate(special_tokens[i]):
if s:
sequence_ids[k] = 3
context_idx = 1 if pad_on_right else 0
# Build the p_mask: non special tokens and context gets 0.0, the others get 1.0.
# The cls token gets 1.0 too (for predictions of empty answers).
tokenized_examples["p_mask"].append(
[
0.0 if (not special_tokens[i][k] and s == context_idx) or k == cls_index else 1.0
for k, s in enumerate(sequence_ids)
]
)
# One example can give several spans, this is the index of the example containing this span of text.
sample_index = sample_mapping[i]
answers = examples[answer_column_name][sample_index]
# If no answers are given, set the cls_index as answer.
if len(answers["answer_start"]) == 0:
tokenized_examples["start_positions"].append(cls_index)
tokenized_examples["end_positions"].append(cls_index)
tokenized_examples["is_impossible"].append(1.0)
else:
# Start/end character index of the answer in the text.
start_char = answers["answer_start"][0]
end_char = start_char + len(answers["text"][0])
# Start token index of the current span in the text.
token_start_index = 0
while sequence_ids[token_start_index] != context_idx:
token_start_index += 1
# End token index of the current span in the text.
token_end_index = len(input_ids) - 1
while sequence_ids[token_end_index] != context_idx:
token_end_index -= 1
# Detect if the answer is out of the span (in which case this feature is labeled with the CLS index).
if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char):
tokenized_examples["start_positions"].append(cls_index)
tokenized_examples["end_positions"].append(cls_index)
tokenized_examples["is_impossible"].append(1.0)
else:
# Otherwise move the token_start_index and token_end_index to the two ends of the answer.
# Note: we could go after the last offset if the answer is the last word (edge case).
while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char:
token_start_index += 1
tokenized_examples["start_positions"].append(token_start_index - 1)
while offsets[token_end_index][1] >= end_char:
token_end_index -= 1
tokenized_examples["end_positions"].append(token_end_index + 1)
tokenized_examples["is_impossible"].append(0.0)
return tokenized_examples
if training_args.do_train:
train_dataset = datasets["train"].map(
prepare_train_features,
batched=True,
num_proc=data_args.preprocessing_num_workers,
remove_columns=column_names,
load_from_cache_file=not data_args.overwrite_cache,
)
# Validation preprocessing
def prepare_validation_features(examples):
# Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results
# in one example possible giving several features when a context is long, each of those features having a
# context that overlaps a bit the context of the previous feature.
tokenized_examples = tokenizer(
examples[question_column_name if pad_on_right else context_column_name],
examples[context_column_name if pad_on_right else question_column_name],
truncation="only_second" if pad_on_right else "only_first",
max_length=data_args.max_seq_length,
stride=data_args.doc_stride,
return_overflowing_tokens=True,
return_offsets_mapping=True,
return_special_tokens_mask=True,
return_token_type_ids=True,
padding="max_length",
)
# Since one example might give us several features if it has a long context, we need a map from a feature to
# its corresponding example. This key gives us just that.
sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
# The special tokens will help us build the p_mask (which indicates the tokens that can't be in answers).
special_tokens = tokenized_examples.pop("special_tokens_mask")
# For evaluation, we will need to convert our predictions to substrings of the context, so we keep the
# corresponding example_id and we will store the offset mappings.
tokenized_examples["example_id"] = []
# We still provide the index of the CLS token and the p_mask to the model, but not the is_impossible label.
tokenized_examples["cls_index"] = []
tokenized_examples["p_mask"] = []
for i, input_ids in enumerate(tokenized_examples["input_ids"]):
# Find the CLS token in the input ids.
cls_index = input_ids.index(tokenizer.cls_token_id)
tokenized_examples["cls_index"].append(cls_index)
# Grab the sequence corresponding to that example (to know what is the context and what is the question).
sequence_ids = tokenized_examples["token_type_ids"][i]
for k, s in enumerate(special_tokens[i]):
if s:
sequence_ids[k] = 3
context_idx = 1 if pad_on_right else 0
# Build the p_mask: non special tokens and context gets 0.0, the others 1.0.
tokenized_examples["p_mask"].append(
[
0.0 if (not special_tokens[i][k] and s == context_idx) or k == cls_index else 1.0
for k, s in enumerate(sequence_ids)
]
)
# One example can give several spans, this is the index of the example containing this span of text.
sample_index = sample_mapping[i]
tokenized_examples["example_id"].append(examples["id"][sample_index])
# Set to None the offset_mapping that are not part of the context so it's easy to determine if a token
# position is part of the context or not.
tokenized_examples["offset_mapping"][i] = [
(o if sequence_ids[k] == context_idx else None)
for k, o in enumerate(tokenized_examples["offset_mapping"][i])
]
return tokenized_examples
if training_args.do_eval:
validation_dataset = datasets["validation"].map(
prepare_validation_features,
batched=True,
num_proc=data_args.preprocessing_num_workers,
remove_columns=column_names,
load_from_cache_file=not data_args.overwrite_cache,
)
# Data collator
# We have already padded to max length if the corresponding flag is True, otherwise we need to pad in the data
# collator.
data_collator = (
default_data_collator
if data_args.pad_to_max_length
else DataCollatorWithPadding(
tokenizer, pad_to_multiple_of=8 if training_args.fp16 else None,
)
)
# Post-processing:
def post_processing_function(examples, features, predictions):
# Post-processing: we match the start logits and end logits to answers in the original context.
predictions, scores_diff_json = postprocess_qa_predictions_with_beam_search(
examples=examples,
features=features,
predictions=predictions,
version_2_with_negative=data_args.version_2_with_negative,
n_best_size=data_args.n_best_size,
max_answer_length=data_args.max_answer_length,
start_n_top=model.config.start_n_top,
end_n_top=model.config.end_n_top,
output_dir=training_args.output_dir,
is_world_process_zero=trainer.is_world_process_zero(),
)
# Format the result to the format the metric expects.
if data_args.version_2_with_negative:
formatted_predictions = [
{"id": k, "prediction_text": v, "no_answer_probability": scores_diff_json[k]}
for k, v in predictions.items()
]
else:
formatted_predictions = [{"id": k, "prediction_text": v} for k, v in predictions.items()]
references = [{"id": ex["id"], "answers": ex[answer_column_name]} for ex in datasets["validation"]]
return EvalPrediction(predictions=formatted_predictions, label_ids=references)
metric = load_metric("squad_v2" if data_args.version_2_with_negative else "squad")
def compute_metrics(p: EvalPrediction):
return metric.compute(predictions=p.predictions, references=p.label_ids)
# Initialize our Trainer
trainer = QuestionAnsweringTrainer(
model=model,
args=training_args,
train_dataset=train_dataset if training_args.do_train else None,
eval_dataset=validation_dataset if training_args.do_eval else None,
eval_examples=datasets["validation"] if training_args.do_eval else None,
tokenizer=tokenizer,
data_collator=data_collator,
post_process_function=post_processing_function,
compute_metrics=compute_metrics,
)
# Training
if training_args.do_train:
train_result = trainer.train(
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
)
trainer.save_model() # Saves the tokenizer too for easy upload
output_train_file = os.path.join(training_args.output_dir, "train_results.txt")
if trainer.is_world_process_zero():
with open(output_train_file, "w") as writer:
logger.info("***** Train results *****")
for key, value in sorted(train_result.metrics.items()):
logger.info(f" {key} = {value}")
writer.write(f"{key} = {value}\n")
# Need to save the state, since Trainer.save_model saves only the tokenizer with the model
trainer.state.save_to_json(os.path.join(training_args.output_dir, "trainer_state.json"))
# Evaluation
results = {}
if training_args.do_eval:
logger.info("*** Evaluate ***")
results = trainer.evaluate()
output_eval_file = os.path.join(training_args.output_dir, "eval_results.txt")
if trainer.is_world_process_zero():
with open(output_eval_file, "w") as writer:
logger.info("***** Eval results *****")
for key, value in sorted(results.items()):
logger.info(f" {key} = {value}")
writer.write(f"{key} = {value}\n")
return results
|
26,433 | def load_font(prefix, ttf_filename, charmap_filename, directory=None):
"""
Loads a font file and the associated charmap.
If ``directory`` the files will be looked for in the qtawesome ``fonts``
directory.
Parameters
----------
prefix: str
Prefix string to be used when accessing a given font set
ttf_filename: str
Ttf font filename
charmap_filename: str
Character map filename
directory: str or None, optional
Directory path for font and charmap files
Example
-------
If you want to load a font ``myicon.tff`` with a ``myicon-charmap.json``
charmap added to the qtawesome ``fonts`` directory (usually located at
``</path/to/lib/python>/site-packages/qtawesome/fonts/``) you can use::
qta.load_font(
'myicon',
'myicon.ttf',
'myicon-charmap.json'
)
However, if you want to load a font ``myicon.tff`` with a
``myicon-charmap.json`` charmap located in a specific path outside the
qtawesome ``font`` directory like for example ``/path/to/myproject/fonts``
you can use::
qta.load_font(
'myicon',
'myicon.ttf',
'myicon-charmap.json',
directory='/path/to/myproject/fonts'
)
"""
return _instance().load_font(prefix, ttf_filename, charmap_filename, directory)
| def load_font(prefix, ttf_filename, charmap_filename, directory=None):
"""
Loads a font file and the associated charmap.
If ``directory`` the files will be looked for in the qtawesome ``fonts``
directory.
Parameters
----------
prefix: str
Prefix string to be used when accessing a given font set
ttf_filename: str
Ttf font filename
charmap_filename: str
Character map filename
directory: str or None, optional
Directory path for font and charmap files
Example
-------
If you want to load a font ``myicon.tff`` with a ``myicon-charmap.json``
charmap added to the qtawesome ``fonts`` directory (usually located at
``</path/to/lib/python>/site-packages/qtawesome/fonts/``) you can use::
qta.load_font(
'myicon',
'myicon.ttf',
'myicon-charmap.json'
)
However, if you want to load a font ``myicon.tff`` with a
``myicon-charmap.json`` charmap located in a specific path outside the
qtawesome ``font`` directory like for example ``/path/to/myproject/fonts``
you can use::
qta.load_font(
'myicon',
'myicon.ttf',
'myicon-charmap.json',
directory='/path/to/myproject/fonts'
)
"""
return _instance().load_font(prefix, ttf_filename, charmap_filename, directory)
|
45,105 | def with_logger(func: Callable) -> Callable:
"""
Decorator, that allows function to use Prefect logger instance.
Enrich function signature with keyword-only argument "logger".
Use it to write logs from Prefect task.
Args:
- func (Callable): Function that takes `logger` argument.
Returns:
- Callable: a wrapper with Prefect logger passed.
Example:
>>> @task
... def some_task():
... return 123
...
>>> @task
... @with_logger
... def task_with_logger(arg, logger):
... logger.warning('arg value: %s', arg)
...
>>> with Flow('some-flow') as flow:
... res = some_task()
... task_with_logger(res)
...
"""
def _wrapper(*args, **kwargs) -> Any:
import prefect
return partial(func, logger=prefect.context.get("logger"))(*args, **kwargs)
return with_updated_signature(func, _wrapper, remove_func_args={"logger"})
| def with_logger(func: Callable) -> Callable:
"""
Decorator that allows function to use Prefect logger instance.
Enrich function signature with keyword-only argument "logger".
Use it to write logs from Prefect task.
Args:
- func (Callable): Function that takes `logger` argument.
Returns:
- Callable: a wrapper with Prefect logger passed.
Example:
>>> @task
... def some_task():
... return 123
...
>>> @task
... @with_logger
... def task_with_logger(arg, logger):
... logger.warning('arg value: %s', arg)
...
>>> with Flow('some-flow') as flow:
... res = some_task()
... task_with_logger(res)
...
"""
def _wrapper(*args, **kwargs) -> Any:
import prefect
return partial(func, logger=prefect.context.get("logger"))(*args, **kwargs)
return with_updated_signature(func, _wrapper, remove_func_args={"logger"})
|
7,323 | def set_color(image, coords, color, alpha=1):
"""Set pixel color in the image at the given coordinates.
Note that this function modifies the color of the image in-place.
Coordinates that exceed the shape of the image will be ignored.
Parameters
----------
image : (M, N, D) ndarray
Image
coords : tuple of ((P,) ndarray, (P,) ndarray)
Row and column coordinates of pixels to be colored.
color : (D,) ndarray
Color to be assigned to coordinates in the image.
alpha : scalar or (N,) ndarray
Alpha values used to blend color with image. 0 is transparent,
1 is opaque.
Examples
--------
>>> from skimage.draw import line, set_color
>>> img = np.zeros((10, 10), dtype=np.uint8)
>>> rr, cc = line(1, 1, 20, 20)
>>> set_color(img, (rr, cc), 1)
>>> img
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1]], dtype=uint8)
"""
rr, cc = coords
if image.ndim == 2:
image = image[..., np.newaxis]
color = np.array(color, ndmin=1, copy=False)
if image.shape[-1] != color.shape[-1]:
raise ValueError(f'Color shape ({color.shape[0]}) must watch last '
'image dimension ({image.shape[-1]}).')
if np.isscalar(alpha):
# Can be replaced by ``full_like`` when numpy 1.8 becomes
# minimum dependency
alpha = np.ones_like(rr) * alpha
rr, cc, alpha = _coords_inside_image(rr, cc, image.shape, val=alpha)
alpha = alpha[..., np.newaxis]
color = color * alpha
vals = image[rr, cc] * (1 - alpha)
image[rr, cc] = vals + color
| def set_color(image, coords, color, alpha=1):
"""Set pixel color in the image at the given coordinates.
Note that this function modifies the color of the image in-place.
Coordinates that exceed the shape of the image will be ignored.
Parameters
----------
image : (M, N, D) ndarray
Image
coords : tuple of ((P,) ndarray, (P,) ndarray)
Row and column coordinates of pixels to be colored.
color : (D,) ndarray
Color to be assigned to coordinates in the image.
alpha : scalar or (N,) ndarray
Alpha values used to blend color with image. 0 is transparent,
1 is opaque.
Examples
--------
>>> from skimage.draw import line, set_color
>>> img = np.zeros((10, 10), dtype=np.uint8)
>>> rr, cc = line(1, 1, 20, 20)
>>> set_color(img, (rr, cc), 1)
>>> img
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1]], dtype=uint8)
"""
rr, cc = coords
if image.ndim == 2:
image = image[..., np.newaxis]
color = np.array(color, ndmin=1, copy=False)
if image.shape[-1] != color.shape[-1]:
raise ValueError(f'Color shape ({color.shape[0]}) must match last '
'image dimension ({image.shape[-1]}).')
if np.isscalar(alpha):
# Can be replaced by ``full_like`` when numpy 1.8 becomes
# minimum dependency
alpha = np.ones_like(rr) * alpha
rr, cc, alpha = _coords_inside_image(rr, cc, image.shape, val=alpha)
alpha = alpha[..., np.newaxis]
color = color * alpha
vals = image[rr, cc] * (1 - alpha)
image[rr, cc] = vals + color
|
27,710 | def test_popen_default_stdin_stderr_and_stdin_None(testdir) -> None:
# stdout, stderr default to pipes,
# stdin can be None to not close the pipe, avoiding
# "ValueError: flush of closed file" with `communicate()`.
#
# Wraps the test to not make it hang when run with "-s".
p1 = testdir.makepyfile(
'''
import sys
def test_inner(testdir):
p1 = testdir.makepyfile(
"""
import sys
print(sys.stdin.read()) # empty
print('stdout')
sys.stderr.write('stderr')
"""
)
proc = testdir.popen([sys.executable, str(p1)], stdin=None)
stdout, stderr = proc.communicate(b"ignored")
assert stdout.splitlines() == [b"", b"stdout"]
assert stderr.splitlines() == [b"stderr"]
assert proc.returncode == 0
'''
)
result = testdir.runpytest("-p", "pytester", str(p1))
assert result.ret == 0
| def test_popen_default_stdin_stderr_and_stdin_None(testdir) -> None:
# stdout, stderr default to pipes,
# stdin can be None to not close the pipe, avoiding
# "ValueError: flush of closed file" with `communicate()`.
#
# Wraps the test to make it not hang when run with "-s".
p1 = testdir.makepyfile(
'''
import sys
def test_inner(testdir):
p1 = testdir.makepyfile(
"""
import sys
print(sys.stdin.read()) # empty
print('stdout')
sys.stderr.write('stderr')
"""
)
proc = testdir.popen([sys.executable, str(p1)], stdin=None)
stdout, stderr = proc.communicate(b"ignored")
assert stdout.splitlines() == [b"", b"stdout"]
assert stderr.splitlines() == [b"stderr"]
assert proc.returncode == 0
'''
)
result = testdir.runpytest("-p", "pytester", str(p1))
assert result.ret == 0
|
8,300 | def restore_relations(context=None, all_relations=None):
"""Restore relations from a annotation on the portal.
"""
portal = getSite()
if all_relations is None:
all_relations = IAnnotations(portal)[RELATIONS_KEY]
logger.info(f'Loaded {len(all_relations)} relations to restore')
update_linkintegrity = set()
modified_items = set()
modified_relation_lists = defaultdict(list)
# remove duplicates but keep original order
unique_relations = []
seen = set()
seen_add = seen.add
for rel in all_relations:
hashable = tuple(rel.items())
if hashable not in seen:
unique_relations.append(rel)
seen_add(hashable)
else:
logger.info(f'Dropping duplicate: {hashable}')
if len(unique_relations) < len(all_relations):
logger.info(f'Dropping {len(all_relations) - len(unique_relations)} duplicates')
all_relations = unique_relations
intids = getUtility(IIntIds)
for index, item in enumerate(all_relations, start=1):
if not index % 500:
logger.info(f'Restored {index} of {len(all_relations)} relations...')
try:
source_obj = uuidToObject(item['from_uuid'])
except KeyError:
# brain exists but no object
source_obj = None
try:
target_obj = uuidToObject(item['to_uuid'])
except KeyError:
# brain exists but no object
target_obj = None
if not source_obj:
logger.info(f'{item["from_uuid"]} is missing')
continue
if not target_obj:
logger.info(f'{item["to_uuid"]} is missing')
continue
if not IDexterityContent.providedBy(source_obj):
logger.info(f'{source_obj} is no dexterity content')
continue
if not IDexterityContent.providedBy(target_obj):
logger.info(f'{target_obj} is no dexterity content')
continue
from_attribute = item['from_attribute']
try:
to_id = intids.getId(target_obj)
except KeyError as e:
logger.info(f'No intid for {target_obj}')
continue
if from_attribute == referencedRelationship:
# Ignore linkintegrity for now. We'll rebuilt it at the end!
update_linkintegrity.add(item['from_uuid'])
continue
if HAS_ITERATE and from_attribute == ITERATE_RELATION_NAME:
# Iterate relations are not set as values of fields
relation = StagingRelationValue(to_id)
event._setRelation(source_obj, ITERATE_RELATION_NAME, relation)
continue
field_and_schema = get_field_and_schema_for_fieldname(from_attribute, source_obj.portal_type)
if field_and_schema is None:
# the from_attribute is no field
logger.info(f'No field. Setting relation: {item}')
event._setRelation(source_obj, from_attribute, RelationValue(to_id))
continue
field, schema = field_and_schema
relation = RelationValue(to_id)
if isinstance(field, RelationList):
logger.info(f'Add relation to relationslist {from_attribute} from {source_obj.absolute_url()} to {target_obj.absolute_url()}')
if item['from_uuid'] in modified_relation_lists.get(from_attribute, []):
# Do not purge relations
existing_relations = getattr(source_obj, from_attribute, [])
else:
# First touch. Make sure we purge!
existing_relations = []
existing_relations.append(relation)
setattr(source_obj, from_attribute, existing_relations)
modified_items.add(item['from_uuid'])
modified_relation_lists[from_attribute].append(item['from_uuid'])
continue
elif isinstance(field, (Relation, RelationChoice)):
logger.info(f'Add relation {from_attribute} from {source_obj.absolute_url()} to {target_obj.absolute_url()}')
setattr(source_obj, from_attribute, relation)
modified_items.add(item['from_uuid'])
continue
else:
# we should never end up here!
logger.warn(f'Unexpected relation {from_attribute} from {source_obj.absolute_url()} to {target_obj.absolute_url()}')
update_linkintegrity = set(update_linkintegrity)
logger.info(f'Updating linkintegrity for {len(update_linkintegrity)} items')
for uuid in sorted(update_linkintegrity):
modifiedContent(uuidToObject(uuid), None)
logger.info(f'Updating relations for {len(modified_items)} items')
for uuid in sorted(modified_items):
obj = uuidToObject(uuid)
# updateRelations from z3c.relationfield does not properly update relations in behaviors
# that are registered with a marker-interface.
# update_behavior_relations (from plone.app.relationfield) does that but does not update
# those in the main schema. Duh!
updateRelations(obj, None)
update_behavior_relations(obj, None)
# purge annotation from portal if they exist
if RELATIONS_KEY in IAnnotations(portal):
del IAnnotations(portal)[RELATIONS_KEY]
logger.info('Done!')
| def restore_relations(context=None, all_relations=None):
"""Restore relations from a annotation on the portal.
"""
portal = getSite()
if all_relations is None:
all_relations = IAnnotations(portal)[RELATIONS_KEY]
logger.info(f'Loaded {len(all_relations)} relations to restore')
update_linkintegrity = set()
modified_items = set()
modified_relation_lists = defaultdict(list)
# remove duplicates but keep original order
unique_relations = []
seen = set()
seen_add = seen.add
for rel in all_relations:
hashable = tuple(rel.items())
if hashable not in seen:
unique_relations.append(rel)
seen_add(hashable)
else:
logger.info(f'Dropping duplicate: {hashable}')
if len(unique_relations) < len(all_relations):
logger.info(f'Dropping {len(all_relations) - len(unique_relations)} duplicates')
all_relations = unique_relations
intids = getUtility(IIntIds)
for index, item in enumerate(all_relations, start=1):
if not index % 500:
logger.info(f'Restored {index} of {len(all_relations)} relations...')
try:
source_obj = uuidToObject(item['from_uuid'])
except KeyError:
# brain exists but no object
source_obj = None
try:
target_obj = uuidToObject(item['to_uuid'])
except KeyError:
# brain exists but no object
target_obj = None
if not source_obj:
logger.info(f'{item["from_uuid"]} is missing')
continue
if not target_obj:
logger.info(f'{item["to_uuid"]} is missing')
continue
if not IDexterityContent.providedBy(source_obj):
logger.info(f'{source_obj} is no dexterity content')
continue
if not IDexterityContent.providedBy(target_obj):
logger.info(f'{target_obj} is no dexterity content')
continue
from_attribute = item['from_attribute']
try:
to_id = intids.getId(target_obj)
except KeyError as e:
logger.warning(f'No intid for {target_obj}')
continue
if from_attribute == referencedRelationship:
# Ignore linkintegrity for now. We'll rebuilt it at the end!
update_linkintegrity.add(item['from_uuid'])
continue
if HAS_ITERATE and from_attribute == ITERATE_RELATION_NAME:
# Iterate relations are not set as values of fields
relation = StagingRelationValue(to_id)
event._setRelation(source_obj, ITERATE_RELATION_NAME, relation)
continue
field_and_schema = get_field_and_schema_for_fieldname(from_attribute, source_obj.portal_type)
if field_and_schema is None:
# the from_attribute is no field
logger.info(f'No field. Setting relation: {item}')
event._setRelation(source_obj, from_attribute, RelationValue(to_id))
continue
field, schema = field_and_schema
relation = RelationValue(to_id)
if isinstance(field, RelationList):
logger.info(f'Add relation to relationslist {from_attribute} from {source_obj.absolute_url()} to {target_obj.absolute_url()}')
if item['from_uuid'] in modified_relation_lists.get(from_attribute, []):
# Do not purge relations
existing_relations = getattr(source_obj, from_attribute, [])
else:
# First touch. Make sure we purge!
existing_relations = []
existing_relations.append(relation)
setattr(source_obj, from_attribute, existing_relations)
modified_items.add(item['from_uuid'])
modified_relation_lists[from_attribute].append(item['from_uuid'])
continue
elif isinstance(field, (Relation, RelationChoice)):
logger.info(f'Add relation {from_attribute} from {source_obj.absolute_url()} to {target_obj.absolute_url()}')
setattr(source_obj, from_attribute, relation)
modified_items.add(item['from_uuid'])
continue
else:
# we should never end up here!
logger.warn(f'Unexpected relation {from_attribute} from {source_obj.absolute_url()} to {target_obj.absolute_url()}')
update_linkintegrity = set(update_linkintegrity)
logger.info(f'Updating linkintegrity for {len(update_linkintegrity)} items')
for uuid in sorted(update_linkintegrity):
modifiedContent(uuidToObject(uuid), None)
logger.info(f'Updating relations for {len(modified_items)} items')
for uuid in sorted(modified_items):
obj = uuidToObject(uuid)
# updateRelations from z3c.relationfield does not properly update relations in behaviors
# that are registered with a marker-interface.
# update_behavior_relations (from plone.app.relationfield) does that but does not update
# those in the main schema. Duh!
updateRelations(obj, None)
update_behavior_relations(obj, None)
# purge annotation from portal if they exist
if RELATIONS_KEY in IAnnotations(portal):
del IAnnotations(portal)[RELATIONS_KEY]
logger.info('Done!')
|
59,723 | def _blockm(block_method, table, outfile, x, y, z, **kwargs):
r"""
Block average (x,y,z) data tables by mean, median, or mode estimation.
Reads arbitrarily located (x,y,z) triples [or optionally weighted
quadruples (x,y,z,w)] from a table and writes to the output a mean,
median, or mode (depending on ``block_method``) position and value for every
non-empty block in a grid region defined by the ``region`` and ``spacing``
parameters.
Parameters
----------
block_method : str
Name of the GMT module to call. Must be "blockmean" "blockmedian" or "blockmode".
Returns
-------
output : pandas.DataFrame or None
Return type depends on whether the ``outfile`` parameter is set:
- :class:`pandas.DataFrame` table with (x, y, z) columns if ``outfile``
is not set
- None if ``outfile`` is set (filtered output will be stored in file
set by ``outfile``)
"""
with GMTTempFile(suffix=".csv") as tmpfile:
with Session() as lib:
# Choose how data will be passed into the module
table_context = lib.virtualfile_from_data(
check_kind="vector", data=table, x=x, y=y, z=z
)
# Run blockm* on data table
with table_context as infile:
if outfile is None:
outfile = tmpfile.name
arg_str = " ".join([infile, build_arg_string(kwargs), "->" + outfile])
lib.call_module(module=block_method, args=arg_str)
# Read temporary csv output to a pandas table
if outfile == tmpfile.name: # if user did not set outfile, return pd.DataFrame
try:
column_names = table.columns.to_list()
result = pd.read_csv(tmpfile.name, sep="\t", names=column_names)
except AttributeError: # 'str' object has no attribute 'columns'
result = pd.read_csv(tmpfile.name, sep="\t", header=None, comment=">")
elif outfile != tmpfile.name: # return None if outfile set, output in outfile
result = None
return result
| def _blockm(block_method, table, outfile, x, y, z, **kwargs):
r"""
Block average (x,y,z) data tables by mean, median, or mode estimation.
Reads arbitrarily located (x,y,z) triples [or optionally weighted
quadruples (x,y,z,w)] from a table and writes to the output a mean,
median, or mode (depending on ``block_method``) position and value for every
non-empty block in a grid region defined by the ``region`` and ``spacing``
parameters.
Parameters
----------
block_method : str
Name of the GMT module to call. Must be "blockmean" "blockmedian" or
"blockmode".
Returns
-------
output : pandas.DataFrame or None
Return type depends on whether the ``outfile`` parameter is set:
- :class:`pandas.DataFrame` table with (x, y, z) columns if ``outfile``
is not set
- None if ``outfile`` is set (filtered output will be stored in file
set by ``outfile``)
"""
with GMTTempFile(suffix=".csv") as tmpfile:
with Session() as lib:
# Choose how data will be passed into the module
table_context = lib.virtualfile_from_data(
check_kind="vector", data=table, x=x, y=y, z=z
)
# Run blockm* on data table
with table_context as infile:
if outfile is None:
outfile = tmpfile.name
arg_str = " ".join([infile, build_arg_string(kwargs), "->" + outfile])
lib.call_module(module=block_method, args=arg_str)
# Read temporary csv output to a pandas table
if outfile == tmpfile.name: # if user did not set outfile, return pd.DataFrame
try:
column_names = table.columns.to_list()
result = pd.read_csv(tmpfile.name, sep="\t", names=column_names)
except AttributeError: # 'str' object has no attribute 'columns'
result = pd.read_csv(tmpfile.name, sep="\t", header=None, comment=">")
elif outfile != tmpfile.name: # return None if outfile set, output in outfile
result = None
return result
|
46,588 | def test_set_with_invalid_key() -> None:
cfg = OmegaConf.create()
with pytest.raises(KeyValidationError):
cfg[object()] # type: ignore
| def test_set_with_invalid_key() -> None:
cfg = OmegaConf.create()
with pytest.raises(KeyValidationError):
cfg[object()] = "a" # type: ignore
|
14,304 | def merge_charstrings(glyphOrder, num_masters, top_dicts, masterModel):
vsindex_dict = {}
vsindex_by_key = {}
varDataList = []
masterSupports = []
default_charstrings = top_dicts[0].CharStrings
for gid, gname in enumerate(glyphOrder):
all_cs = [
_get_cs(td.CharStrings, gname)
for td in top_dicts]
if len([gs for gs in all_cs if gs is not None]) == 1:
continue
model, model_cs = masterModel.getSubModel(all_cs)
# create the first pass CFF2 charstring, from
# the default charstring.
default_charstring = model_cs[0]
var_pen = CFF2CharStringMergePen([], gname, num_masters, 0)
# We need to override outlineExtractor because these
# charstrings do have widths in the 'program'; we need to drop these
# values rather than post assertion error for them.
default_charstring.outlineExtractor = MergeOutlineExtractor
default_charstring.draw(var_pen)
# Add the coordinates from all the other regions to the
# blend lists in the CFF2 charstring.
region_cs = model_cs[1:]
for region_idx, region_charstring in enumerate(region_cs, start=1):
var_pen.restart(region_idx)
region_charstring.outlineExtractor = MergeOutlineExtractor
region_charstring.draw(var_pen)
# Collapse each coordinate list to a blend operator and its args.
new_cs = var_pen.getCharString(
private=default_charstring.private,
globalSubrs=default_charstring.globalSubrs,
var_model=model, optimize=True)
default_charstrings[gname] = new_cs
if (not var_pen.seen_moveto) or ('blend' not in new_cs.program):
# If this is not a marking glyph, or if there are no blend
# arguments, then we can use vsindex 0. No need to
# check if we need a new vsindex.
continue
# If the charstring required a new model, create
# a VarData table to go with, and set vsindex.
key = tuple(v is not None for v in all_cs)
try:
vsindex = vsindex_by_key[key]
except KeyError:
vsindex = _add_new_vsindex(model, key,masterSupports, vsindex_dict,
vsindex_by_key, varDataList)
# We do not need to check for an existing new_cs.private.vsindex,
# as we know it doesn't exist yet.
if vsindex != 0:
new_cs.program[:0] = [vsindex, 'vsindex']
# If there is no variation in any of the charstrings, then vsindex_dict
# never gets built. This is could still be needed if there is variation
# in the PrivatDict, so we will build the default data for vsindex = 0.
if not vsindex_dict:
key = (True)*num_masters
_add_new_vsindex(model, key, masterSupports, vsindex_dict,
vsindex_by_key, varDataList)
cvData = CVarData(varDataList=varDataList, masterSupports=masterSupports,
vsindex_dict=vsindex_dict)
# XXX To do: optimize use of vsindex between the PrivateDicts and
# charstrings
return cvData
| def merge_charstrings(glyphOrder, num_masters, top_dicts, masterModel):
vsindex_dict = {}
vsindex_by_key = {}
varDataList = []
masterSupports = []
default_charstrings = top_dicts[0].CharStrings
for gid, gname in enumerate(glyphOrder):
all_cs = [
_get_cs(td.CharStrings, gname)
for td in top_dicts]
if len([gs for gs in all_cs if gs is not None]) == 1:
continue
model, model_cs = masterModel.getSubModel(all_cs)
# create the first pass CFF2 charstring, from
# the default charstring.
default_charstring = model_cs[0]
var_pen = CFF2CharStringMergePen([], gname, num_masters, 0)
# We need to override outlineExtractor because these
# charstrings do have widths in the 'program'; we need to drop these
# values rather than post assertion error for them.
default_charstring.outlineExtractor = MergeOutlineExtractor
default_charstring.draw(var_pen)
# Add the coordinates from all the other regions to the
# blend lists in the CFF2 charstring.
region_cs = model_cs[1:]
for region_idx, region_charstring in enumerate(region_cs, start=1):
var_pen.restart(region_idx)
region_charstring.outlineExtractor = MergeOutlineExtractor
region_charstring.draw(var_pen)
# Collapse each coordinate list to a blend operator and its args.
new_cs = var_pen.getCharString(
private=default_charstring.private,
globalSubrs=default_charstring.globalSubrs,
var_model=model, optimize=True)
default_charstrings[gname] = new_cs
if (not var_pen.seen_moveto) or ('blend' not in new_cs.program):
# If this is not a marking glyph, or if there are no blend
# arguments, then we can use vsindex 0. No need to
# check if we need a new vsindex.
continue
# If the charstring required a new model, create
# a VarData table to go with, and set vsindex.
key = tuple(v is not None for v in all_cs)
try:
vsindex = vsindex_by_key[key]
except KeyError:
vsindex = _add_new_vsindex(model, key, masterSupports, vsindex_dict,
vsindex_by_key, varDataList)
# We do not need to check for an existing new_cs.private.vsindex,
# as we know it doesn't exist yet.
if vsindex != 0:
new_cs.program[:0] = [vsindex, 'vsindex']
# If there is no variation in any of the charstrings, then vsindex_dict
# never gets built. This is could still be needed if there is variation
# in the PrivatDict, so we will build the default data for vsindex = 0.
if not vsindex_dict:
key = (True)*num_masters
_add_new_vsindex(model, key, masterSupports, vsindex_dict,
vsindex_by_key, varDataList)
cvData = CVarData(varDataList=varDataList, masterSupports=masterSupports,
vsindex_dict=vsindex_dict)
# XXX To do: optimize use of vsindex between the PrivateDicts and
# charstrings
return cvData
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.