content (stringlengths 35–762k) | sha1 (stringlengths 40–40) | id (int64 0–3.66M) |
---|---|---|
def custom_leastsq(obj_fn, jac_fn, x0, f_norm2_tol=1e-6, jac_norm_tol=1e-6,
rel_ftol=1e-6, rel_xtol=1e-6, max_iter=100, num_fd_iters=0,
max_dx_scale=1.0, damping_mode="identity", damping_basis="diagonal_values",
damping_clip=None, use_acceleration=False, uphill_step_threshold=0.0,
init_munu="auto", oob_check_interval=0, oob_action="reject", oob_check_mode=0,
resource_alloc=None, arrays_interface=None, serial_solve_proc_threshold=100,
x_limits=None, verbosity=0, profiler=None):
"""
An implementation of the Levenberg-Marquardt least-squares optimization algorithm customized for use within pyGSTi.
This general purpose routine mimics, to a large extent, the interface used by
`scipy.optimize.leastsq`, though it implements a newer (and more robust) version
of the algorithm.
Parameters
----------
obj_fn : function
The objective function. Must accept and return 1D numpy ndarrays of
length N and M respectively. Same form as scipy.optimize.leastsq.
jac_fn : function
The jacobian function (not optional!). Accepts a 1D array of length N
and returns an array of shape (M,N).
x0 : numpy.ndarray
Initial evaluation point.
f_norm2_tol : float, optional
Tolerance for `F^2` where `F = norm(sum(obj_fn(x)**2))` is the
least-squares residual. If `F**2 < f_norm2_tol`, then mark converged.
jac_norm_tol : float, optional
Tolerance for jacobian norm, namely if `infn(dot(J.T,f)) < jac_norm_tol`
then mark converged, where `infn` is the infinity-norm and
`f = obj_fn(x)`.
rel_ftol : float, optional
Tolerance on the relative reduction in `F^2`, that is, if
`d(F^2)/F^2 < rel_ftol` then mark converged.
rel_xtol : float, optional
Tolerance on the relative value of `|x|`, so that if
`d(|x|)/|x| < rel_xtol` then mark converged.
max_iter : int, optional
The maximum number of (outer) iterations.
num_fd_iters : int, optional
Internally compute the Jacobian using a finite-difference method
for the first `num_fd_iters` iterations. This is useful when `x0`
lies at a special or singular point where the analytic Jacobian is
misleading.
max_dx_scale : float, optional
If not None, impose a limit on the magnitude of the step, so that
`|dx|^2 < max_dx_scale^2 * len(dx)` (so elements of `dx` should be,
roughly, less than `max_dx_scale`).
damping_mode : {'identity', 'JTJ', 'invJTJ', 'adaptive'}
How damping is applied. `'identity'` means that the damping parameter mu
multiplies the identity matrix. `'JTJ'` means that mu multiplies the
diagonal or singular values (depending on `damping_basis`) of the JTJ
(Fisher information and approximate Hessian) matrix, whereas `'invJTJ'`
means mu multiplies the reciprocals of these values instead. The
`'adaptive'` mode adaptively chooses a damping strategy.
damping_basis : {'diagonal_values', 'singular_values'}
Whether the diagonal or singular values of the JTJ matrix are used
during damping. If `'singular_values'` is selected, then a SVD of the
Jacobian (J) matrix is performed and damping is performed in the basis
of (right) singular vectors. If `'diagonal_values'` is selected, the
diagonal values of relevant matrices are used as a proxy for the
singular values (saving the cost of performing an SVD).
damping_clip : tuple, optional
A 2-tuple giving upper and lower bounds for the values that mu multiplies.
If `damping_mode == "identity"` then this argument is ignored, as mu always
multiplies a 1.0 on the diagonal of the identity matrix. If None, then no
clipping is applied.
use_acceleration : bool, optional
Whether to include a geodesic acceleration term as suggested in
arXiv:1201.5885. This is supposed to increase the rate of
convergence with very little overhead. In practice we've seen
mixed results.
uphill_step_threshold : float, optional
Allows uphill steps when taking two consecutive steps in nearly
the same direction. The condition for accepting an uphill step
is that `(uphill_step_threshold-beta)*new_objective < old_objective`,
where `beta` is the cosine of the angle between successive steps.
If `uphill_step_threshold == 0` then no uphill steps are allowed,
otherwise it should take a value between 1.0 and 2.0, with 1.0 being
the most permissive to uphill steps.
init_munu : tuple, optional
If not None, a (mu, nu) tuple of 2 floats giving the initial values
for mu and nu.
oob_check_interval : int, optional
Every `oob_check_interval` outer iterations, the objective function
(`obj_fn`) is called with a second argument 'oob_check', set to True.
In this case, `obj_fn` can raise a ValueError exception to indicate
that it is Out Of Bounds. If `oob_check_interval` is 0 then this
check is never performed; if 1 then it is always performed.
oob_action : {"reject","stop"}
What to do when the objective function indicates that it is out of bounds (by raising a ValueError
as described above). `"reject"` means the step is rejected but the
optimization proceeds; `"stop"` means the optimization stops and returns
as converged at the last known-in-bounds point.
oob_check_mode : int, optional
An advanced option, expert use only. If 0 then the optimization is
halted as soon as an *attempt* is made to evaluate the function out of bounds.
If 1 then the optimization is halted only when a would-be *accepted* step
is out of bounds.
resource_alloc : ResourceAllocation, optional
When not None, a resource allocation object used for distributing the computation
across multiple processors.
arrays_interface : ArraysInterface
An object that provides an interface for creating and manipulating data arrays.
serial_solve_proc_threshold : int, optional
When there are fewer than this many processors, the optimizer will solve linear
systems serially, using SciPy on a single processor, rather than using a parallelized
Gaussian Elimination (with partial pivoting) algorithm coded in Python. Since SciPy's
implementation is more efficient, it's not worth using the parallel version until there
are many processors to spread the work among.
x_limits : numpy.ndarray, optional
A (num_params, 2)-shaped array, holding on each row the (min, max) values for the corresponding
parameter (element of the "x" vector). If `None`, then no limits are imposed.
verbosity : int, optional
Amount of detail to print to stdout.
profiler : Profiler, optional
A profiler object used to track timing and memory usage.
Returns
-------
x : numpy.ndarray
The optimal solution.
converged : bool
Whether the solution converged.
msg : str
A message indicating why the solution converged (or didn't).
"""
resource_alloc = _ResourceAllocation.cast(resource_alloc)
comm = resource_alloc.comm
printer = _VerbosityPrinter.create_printer(verbosity, comm)
ari = arrays_interface # shorthand
# MEM from ..baseobjs.profiler import Profiler
# MEM debug_prof = Profiler(comm, True)
# MEM profiler = debug_prof
msg = ""
converged = False
global_x = x0.copy()
f = obj_fn(global_x) # 'E'-type array
norm_f = ari.norm2_f(f) # _np.linalg.norm(f)**2
half_max_nu = 2**62 # what should this be??
tau = 1e-3
alpha = 0.5 # for acceleration
nu = 2
mu = 1 # just a guess - initialized on 1st iter and only used if rejected
#Allocate potentially shared memory used in loop
JTJ = ari.allocate_jtj()
JTf = ari.allocate_jtf()
x = ari.allocate_jtf()
#x_for_jac = ari.allocate_x_for_jac()
if num_fd_iters > 0:
fdJac = ari.allocate_jac()
ari.allscatter_x(global_x, x)
if x_limits is not None:
x_lower_limits = ari.allocate_jtf()
x_upper_limits = ari.allocate_jtf()
ari.allscatter_x(x_limits[:, 0], x_lower_limits)
ari.allscatter_x(x_limits[:, 1], x_upper_limits)
if damping_basis == "singular_values":
Jac_V = ari.allocate_jtj()
if damping_mode == 'adaptive':
dx_lst = [ari.allocate_jtf(), ari.allocate_jtf(), ari.allocate_jtf()]
new_x_lst = [ari.allocate_jtf(), ari.allocate_jtf(), ari.allocate_jtf()]
global_new_x_lst = [global_x.copy() for i in range(3)]
else:
dx = ari.allocate_jtf()
new_x = ari.allocate_jtf()
global_new_x = global_x.copy()
if use_acceleration:
dx1 = ari.allocate_jtf()
dx2 = ari.allocate_jtf()
df2_x = ari.allocate_jtf()
JTdf2 = ari.allocate_jtf()
global_accel_x = global_x.copy()
# don't let any component change by more than ~max_dx_scale
if max_dx_scale:
max_norm_dx = (max_dx_scale**2) * len(global_x)
else: max_norm_dx = None
if not _np.isfinite(norm_f):
msg = "Infinite norm of objective function at initial point!"
if len(global_x) == 0: # a model with 0 parameters - nothing to optimize
msg = "No parameters to optimize"; converged = True
# DB: from ..tools import matrixtools as _mt
# DB: print("DB F0 (%s)=" % str(f.shape)); _mt.print_mx(f,prec=0,width=4)
#num_fd_iters = 1000000 # DEBUG: use finite difference iterations instead
# print("DEBUG: setting num_fd_iters == 0!"); num_fd_iters = 0 # DEBUG
last_accepted_dx = None
min_norm_f = 1e100 # sentinel
best_x = ari.allocate_jtf()
best_x[:] = x[:] # like x.copy() -the x-value corresponding to min_norm_f ('P'-type)
spow = 0.0 # for damping_mode == 'adaptive'
if damping_clip is not None:
def dclip(ar): return _np.clip(ar, damping_clip[0], damping_clip[1])
else:
def dclip(ar): return ar
if init_munu != "auto":
mu, nu = init_munu
best_x_state = (mu, nu, norm_f, f.copy(), spow, None) # need f.copy() b/c f is objfn mem
rawJTJ_scratch = None
jtj_buf = ari.allocate_jtj_shared_mem_buf()
try:
for k in range(max_iter): # outer loop
# assume global_x, x, f, fnorm hold valid values
if len(msg) > 0:
break # exit outer loop if an exit-message has been set
if norm_f < f_norm2_tol:
if oob_check_interval <= 1:
msg = "Sum of squares is at most %g" % f_norm2_tol
converged = True; break
else:
printer.log(("** Converged with out-of-bounds with check interval=%d, reverting to last "
"know in-bounds point and setting interval=1 **") % oob_check_interval, 2)
oob_check_interval = 1
x[:] = best_x[:]
mu, nu, norm_f, f[:], spow, _ = best_x_state
continue # can't make use of saved JTJ yet - recompute on next iter
#printer.log("--- Outer Iter %d: norm_f = %g, mu=%g" % (k,norm_f,mu))
if profiler: profiler.memory_check("custom_leastsq: begin outer iter *before de-alloc*")
Jac = None
if profiler: profiler.memory_check("custom_leastsq: begin outer iter")
# unnecessary b/c global_x is already valid: ari.allgather_x(x, global_x)
if k >= num_fd_iters:
Jac = jac_fn(global_x) # 'EP'-type, but doesn't actually allocate any more mem (!)
else:
# Note: x holds only number of "fine"-division params - need to use global_x, and
# Jac only holds a subset of the derivative and element columns and rows, respectively.
f_fixed = f.copy() # a static part of the distributed `f` returned by obj_fn - MUST copy this.
pslice = ari.jac_param_slice(only_if_leader=True)
eps = 1e-7
#Don't do this: for ii, i in enumerate(range(pslice.start, pslice.stop)): (must keep procs in sync)
for i in range(len(global_x)):
x_plus_dx = global_x.copy()
x_plus_dx[i] += eps
fd = (obj_fn(x_plus_dx) - f_fixed) / eps
if pslice.start <= i < pslice.stop:
fdJac[:, i - pslice.start] = fd
#if comm is not None: comm.barrier() # overkill for shared memory leader host barrier
Jac = fdJac
#DEBUG: compare with analytic jacobian (need to uncomment num_fd_iters DEBUG line above too)
#Jac_analytic = jac_fn(x)
#if _np.linalg.norm(Jac_analytic-Jac) > 1e-6:
# print("JACDIFF = ",_np.linalg.norm(Jac_analytic-Jac)," per el=",
# _np.linalg.norm(Jac_analytic-Jac)/Jac.size," sz=",Jac.size)
# DB: from ..tools import matrixtools as _mt
# DB: print("DB JAC (%s)=" % str(Jac.shape)); _mt.print_mx(Jac,prec=0,width=4); assert(False)
if profiler: profiler.memory_check("custom_leastsq: after jacobian:"
+ "shape=%s, GB=%.2f" % (str(Jac.shape),
Jac.nbytes / (1024.0**3)))
Jnorm = _np.sqrt(ari.norm2_jac(Jac))
xnorm = _np.sqrt(ari.norm2_x(x))
printer.log("--- Outer Iter %d: norm_f = %g, mu=%g, |x|=%g, |J|=%g" % (k, norm_f, mu, xnorm, Jnorm))
#assert(_np.isfinite(Jac).all()), "Non-finite Jacobian!" # NaNs tracking
#assert(_np.isfinite(_np.linalg.norm(Jac))), "Finite Jacobian has inf norm!" # NaNs tracking
tm = _time.time()
#OLD MPI-enabled JTJ computation
##if my_mpidot_qtys is None:
## my_mpidot_qtys = _mpit.distribute_for_dot(Jac.T.shape, Jac.shape, resource_alloc)
#JTJ, JTJ_shm = _mpit.mpidot(Jac.T, Jac, my_mpidot_qtys[0], my_mpidot_qtys[1],
# my_mpidot_qtys[2], resource_alloc, JTJ, JTJ_shm) # _np.dot(Jac.T,Jac) 'PP'
ari.fill_jtj(Jac, JTJ, jtj_buf)
ari.fill_jtf(Jac, f, JTf) # 'P'-type
if profiler: profiler.add_time("custom_leastsq: dotprods", tm)
#assert(not _np.isnan(JTJ).any()), "NaN in JTJ!" # NaNs tracking
#assert(not _np.isinf(JTJ).any()), "inf in JTJ! norm Jac = %g" % _np.linalg.norm(Jac) # NaNs tracking
#assert(_np.isfinite(JTJ).all()), "Non-finite JTJ!" # NaNs tracking
#assert(_np.isfinite(JTf).all()), "Non-finite JTf!" # NaNs tracking
idiag = ari.jtj_diag_indices(JTJ)
norm_JTf = ari.infnorm_x(JTf)
norm_x = ari.norm2_x(x) # _np.linalg.norm(x)**2
undamped_JTJ_diag = JTJ[idiag].copy() # 'P'-type
#max_JTJ_diag = JTJ.diagonal().copy()
JTf *= -1.0; minus_JTf = JTf # use the same memory for -JTf below (shouldn't use JTf anymore)
#Maybe just have a minus_JTf variable?
# FUTURE TODO: keep tallying allocated memory, i.e. array_types (stopped here)
if damping_basis == "singular_values":
# Jac = U * s * Vh; J.T * J = conj(V) * s * U.T * U * s * Vh = conj(V) * s^2 * Vh
# Jac_U, Jac_s, Jac_Vh = _np.linalg.svd(Jac, full_matrices=False)
# Jac_V = _np.conjugate(Jac_Vh.T)
global_JTJ = ari.gather_jtj(JTJ)
if comm is None or comm.rank == 0:
global_Jac_s2, global_Jac_V = _np.linalg.eigh(global_JTJ)
ari.scatter_jtj(global_Jac_V, Jac_V)
comm.bcast(global_Jac_s2, root=0)
else:
ari.scatter_jtj(None, Jac_V)
global_Jac_s2 = comm.bcast(None, root=0)
#print("Rank %d: min s2 = %g" % (comm.rank, min(global_Jac_s2)))
#if min(global_Jac_s2) < -1e-4 and (comm is None or comm.rank == 0):
# print("WARNING: min Jac s^2 = %g (max = %g)" % (min(global_Jac_s2), max(global_Jac_s2)))
assert(min(global_Jac_s2) / abs(max(global_Jac_s2)) > -1e-6), "JTJ should be positive!"
global_Jac_s = _np.sqrt(_np.clip(global_Jac_s2, 1e-12, None)) # eigvals of JTJ must be >= 0
global_Jac_VT_mJTf = ari.global_svd_dot(Jac_V, minus_JTf) # = dot(Jac_V.T, minus_JTf)
#DEBUG
#num_large_svals = _np.count_nonzero(Jac_s > _np.max(Jac_s) / 1e2)
#Jac_Uproj = Jac_U[:,0:num_large_svals]
#JTJ_evals, JTJ_U = _np.linalg.eig(JTJ)
#printer.log("JTJ (dim=%d) eval min/max=%g, %g; %d large svals (of %d)" % (
# JTJ.shape[0], _np.min(_np.abs(JTJ_evals)), _np.max(_np.abs(JTJ_evals)),
# num_large_svals, len(Jac_s)))
if norm_JTf < jac_norm_tol:
if oob_check_interval <= 1:
msg = "norm(jacobian) is at most %g" % jac_norm_tol
converged = True; break
else:
printer.log(("** Converged with out-of-bounds with check interval=%d, reverting to last "
"know in-bounds point and setting interval=1 **") % oob_check_interval, 2)
oob_check_interval = 1
x[:] = best_x[:]
mu, nu, norm_f, f[:], spow, _ = best_x_state
continue # can't make use of saved JTJ yet - recompute on next iter
if k == 0:
if init_munu == "auto":
if damping_mode == 'identity':
mu = tau * ari.max_x(undamped_JTJ_diag) # initial damping element
#mu = min(mu, MU_TOL1)
else:
# initial multiplicative damping element
#mu = tau # initial damping element - but this seems too low, at least for termgap...
mu = min(1.0e5, ari.max_x(undamped_JTJ_diag) / norm_JTf) # Erik's heuristic
#tries to avoid making mu so large that dx is tiny and we declare victory prematurely
else:
mu, nu = init_munu
rawJTJ_scratch = JTJ.copy() # allocates the memory for a copy of JTJ so only update mem elsewhere
best_x_state = mu, nu, norm_f, f.copy(), spow, rawJTJ_scratch # update mu,nu,JTJ of initial best state
else:
#on all other iterations, update JTJ of best_x_state if best_x == x, i.e. if we've just evaluated
# a previously accepted step that was deemed the best we've seen so far
if _np.allclose(x, best_x):
rawJTJ_scratch[:, :] = JTJ[:, :] # use pre-allocated memory
rawJTJ_scratch[idiag] = undamped_JTJ_diag # no damping; the "raw" JTJ
best_x_state = best_x_state[0:5] + (rawJTJ_scratch,) # update mu,nu,JTJ of initial "best state"
#determining increment using adaptive damping
while True: # inner loop
if profiler: profiler.memory_check("custom_leastsq: begin inner iter")
#print("DB: Pre-damping JTJ diag = [",_np.min(_np.abs(JTJ[idiag])),_np.max(_np.abs(JTJ[idiag])),"]")
if damping_mode == 'identity':
assert(damping_clip is None), "damping_clip cannot be used with damping_mode == 'identity'"
if damping_basis == "singular_values":
reg_Jac_s = global_Jac_s + mu
#Notes:
#Previously we computed inv_JTJ here and below computed dx:
#inv_JTJ = _np.dot(Jac_V, _np.dot(_np.diag(1 / reg_Jac_s**2), Jac_V.T))
# dx = _np.dot(Jac_V, _np.diag(1 / reg_Jac_s**2), global_Jac_VT_mJTf
#But now we just compute reg_Jac_s here, and so the rest below.
else:
# ok if assume fine-param-proc.size == 1 (otherwise need to sync setting local JTJ)
JTJ[idiag] = undamped_JTJ_diag + mu # augment normal equations
elif damping_mode == 'JTJ':
if damping_basis == "singular_values":
reg_Jac_s = global_Jac_s + mu * dclip(global_Jac_s)
else:
add_to_diag = mu * dclip(undamped_JTJ_diag)
JTJ[idiag] = undamped_JTJ_diag + add_to_diag # ok if assume fine-param-proc.size == 1
elif damping_mode == 'invJTJ':
if damping_basis == "singular_values":
reg_Jac_s = global_Jac_s + mu * dclip(1.0 / global_Jac_s)
else:
add_to_diag = mu * dclip(1.0 / undamped_JTJ_diag)
JTJ[idiag] = undamped_JTJ_diag + add_to_diag # ok if assume fine-param-proc.size == 1
elif damping_mode == 'adaptive':
if damping_basis == "singular_values":
reg_Jac_s_lst = [global_Jac_s + mu * dclip(global_Jac_s**(spow + 0.1)),
global_Jac_s + mu * dclip(global_Jac_s**spow),
global_Jac_s + mu * dclip(global_Jac_s**(spow - 0.1))]
else:
add_to_diag_lst = [mu * dclip(undamped_JTJ_diag**(spow + 0.1)),
mu * dclip(undamped_JTJ_diag**spow),
mu * dclip(undamped_JTJ_diag**(spow - 0.1))]
else:
raise ValueError("Invalid damping mode: %s" % damping_mode)
#assert(_np.isfinite(JTJ).all()), "Non-finite JTJ (inner)!" # NaNs tracking
#assert(_np.isfinite(JTf).all()), "Non-finite JTf (inner)!" # NaNs tracking
try:
if profiler: profiler.memory_check("custom_leastsq: before linsolve")
tm = _time.time()
success = True
if damping_basis == 'diagonal_values':
if damping_mode == 'adaptive':
for ii, add_to_diag in enumerate(add_to_diag_lst):
JTJ[idiag] = undamped_JTJ_diag + add_to_diag # ok if assume fine-param-proc.size == 1
#dx_lst.append(_scipy.linalg.solve(JTJ, -JTf, sym_pos=True))
#dx_lst.append(custom_solve(JTJ, -JTf, resource_alloc))
_custom_solve(JTJ, minus_JTf, dx_lst[ii], ari, resource_alloc,
serial_solve_proc_threshold)
else:
#dx = _scipy.linalg.solve(JTJ, -JTf, sym_pos=True)
_custom_solve(JTJ, minus_JTf, dx, ari, resource_alloc, serial_solve_proc_threshold)
elif damping_basis == 'singular_values':
#Note: above solves JTJ*x = -JTf => x = inv_JTJ * (-JTf)
# but: J = U*s*Vh => JTJ = (VhT*s*UT)(U*s*Vh) = VhT*s^2*Vh, and inv_Vh = V b/c V is unitary
# so inv_JTJ = inv_Vh * 1/s^2 * inv_VhT = V * 1/s^2 * VT = (N,K)*(K,K)*(K,N) if use pseudoinv
if damping_mode == 'adaptive':
#dx_lst = [_np.dot(ijtj, minus_JTf) for ijtj in inv_JTJ_lst] # special case
for ii, s in enumerate(reg_Jac_s_lst):
ari.fill_dx_svd(Jac_V, (1 / s**2) * global_Jac_VT_mJTf, dx_lst[ii])
else:
# dx = _np.dot(inv_JTJ, minus_JTf)
ari.fill_dx_svd(Jac_V, (1 / reg_Jac_s**2) * global_Jac_VT_mJTf, dx)
else:
raise ValueError("Invalid damping_basis = '%s'" % damping_basis)
if profiler: profiler.add_time("custom_leastsq: linsolve", tm)
#except _np.linalg.LinAlgError:
except _scipy.linalg.LinAlgError: # DIST TODO - a different kind of exception caught?
success = False
if success and use_acceleration: # Find acceleration term:
assert(damping_mode != 'adaptive'), "Cannot use acceleration in adaptive mode (yet)"
assert(damping_basis != 'singular_values'), "Cannot use acceleration w/singular-value basis (yet)"
df2_eps = 1.0
try:
#df2 = (obj_fn(x + df2_dx) + obj_fn(x - df2_dx) - 2 * f) / \
# df2_eps**2 # 2nd deriv of f along dx direction
# Above line expanded to reuse shared memory
df2 = -2 * f
df2_x[:] = x + df2_eps * dx
ari.allgather_x(df2_x, global_accel_x)
df2 += obj_fn(global_accel_x)
df2_x[:] = x - df2_eps * dx
ari.allgather_x(df2_x, global_accel_x)
df2 += obj_fn(global_accel_x)
df2 /= df2_eps**2
f[:] = df2; df2 = f # use `f` as an appropriate shared-mem object for fill_jtf below
ari.fill_jtf(Jac, df2, JTdf2)
JTdf2 *= -0.5 # keep using JTdf2 memory in solve call below
#dx2 = _scipy.linalg.solve(JTJ, -0.5 * JTdf2, sym_pos=True) # Note: JTJ not init w/'adaptive'
_custom_solve(JTJ, JTdf2, dx2, ari, resource_alloc, serial_solve_proc_threshold)
dx1[:] = dx[:]
dx += dx2 # add acceleration term to dx
except _scipy.linalg.LinAlgError:
print("WARNING - linear solve failed for acceleration term!")
# but ok to continue - just stick with first order term
except ValueError:
print("WARNING - value error during computation of acceleration term!")
reject_msg = ""
if profiler: profiler.memory_check("custom_leastsq: after linsolve")
if success: # linear solve succeeded
#dx = _hack_dx(obj_fn, x, dx, Jac, JTJ, JTf, f, norm_f)
if damping_mode != 'adaptive':
new_x[:] = x + dx
norm_dx = ari.norm2_x(dx) # _np.linalg.norm(dx)**2
#ensure dx isn't too large - don't let any component change by more than ~max_dx_scale
if max_norm_dx and norm_dx > max_norm_dx:
dx *= _np.sqrt(max_norm_dx / norm_dx)
new_x[:] = x + dx
norm_dx = ari.norm2_x(dx) # _np.linalg.norm(dx)**2
#apply x limits (bounds)
if x_limits is not None:
# Approach 1: project x into valid space by simply clipping out-of-bounds values
for i, (x_el, lower, upper) in enumerate(zip(x, x_lower_limits, x_upper_limits)):
if new_x[i] < lower:
new_x[i] = lower
dx[i] = lower - x_el
elif new_x[i] > upper:
new_x[i] = upper
dx[i] = upper - x_el
norm_dx = ari.norm2_x(dx) # _np.linalg.norm(dx)**2
# Approach 2: by scaling back dx (seems less good, but here in case we want it later)
# # minimally reduce dx s.t. new_x = x + dx so that x_lower_limits <= x+dx <= x_upper_limits
# # x_lower_limits - x <= dx <= x_upper_limits - x. Note: use potentially updated dx from
# # max_norm_dx block above. For 0 <= scale <= 1,
# # 1) require x + scale*dx - x_upper_limits <= 0 => scale <= (x_upper_limits - x) / dx
# # [Note: above assumes dx > 0 b/c if not it moves x away from bound and scale < 0]
# # so if scale >= 0, then scale = min((x_upper_limits - x) / dx, 1.0)
# scale = None
# new_x[:] = (x_upper_limits - x) / dx
# new_x_min = ari.min_x(new_x)
# if 0 <= new_x_min < 1.0:
# scale = new_x_min
#
# # 2) require x + scale*dx - x_lower_limits <= 0 => scale <= (x - x_lower_limits) / (-dx)
# new_x[:] = (x_lower_limits - x) / dx
# new_x_min = ari.min_x(new_x)
# if 0 <= new_x_min < 1.0:
# scale = new_x_min if (scale is None) else min(new_x_min, scale)
#
# if scale is not None:
# dx *= scale
# new_x[:] = x + dx
# norm_dx = ari.norm2_x(dx) # _np.linalg.norm(dx)**2
else:
for dx, new_x in zip(dx_lst, new_x_lst):
new_x[:] = x + dx
norm_dx_lst = [ari.norm2_x(dx) for dx in dx_lst]
#ensure dx isn't too large - don't let any component change by more than ~max_dx_scale
if max_norm_dx:
for i, norm_dx in enumerate(norm_dx_lst):
if norm_dx > max_norm_dx:
dx_lst[i] *= _np.sqrt(max_norm_dx / norm_dx)
new_x_lst[i][:] = x + dx_lst[i]
norm_dx_lst[i] = ari.norm2_x(dx_lst[i])
#apply x limits (bounds)
if x_limits is not None:
for i, (dx, new_x) in enumerate(zip(dx_lst, new_x_lst)):
# Do same thing as above for each possible dx in dx_lst
# Approach 1:
for ii, (x_el, lower, upper) in enumerate(zip(x, x_lower_limits, x_upper_limits)):
if new_x[ii] < lower:
new_x[ii] = lower
dx[ii] = lower - x_el
elif new_x[ii] > upper:
new_x[ii] = upper
dx[ii] = upper - x_el
norm_dx_lst[i] = ari.norm2_x(dx) # _np.linalg.norm(dx)**2
# Approach 2:
# scale = None
# new_x[:] = (x_upper_limits - x) / dx
# new_x_min = ari.min_x(new_x)
# if 0 <= new_x_min < 1.0:
# scale = new_x_min
#
# new_x[:] = (x_lower_limits - x) / dx
# new_x_min = ari.min_x(new_x)
# if 0 <= new_x_min < 1.0:
# scale = new_x_min if (scale is None) else min(new_x_min, scale)
#
# if scale is not None:
# dx *= scale
# new_x[:] = x + dx
# norm_dx_lst[i] = ari.norm2_x(dx)
norm_dx = norm_dx_lst[1] # just use center value for printing & checks below
printer.log(" - Inner Loop: mu=%g, norm_dx=%g" % (mu, norm_dx), 2)
#MEM if profiler: profiler.memory_check("custom_leastsq: mid inner loop")
#print("DB: new_x = ", new_x)
if norm_dx < (rel_xtol**2) * norm_x: # and mu < MU_TOL2:
if oob_check_interval <= 1:
msg = "Relative change, |dx|/|x|, is at most %g" % rel_xtol
converged = True; break
else:
printer.log(("** Converged with out-of-bounds with check interval=%d, reverting to last "
"know in-bounds point and setting interval=1 **") % oob_check_interval, 2)
oob_check_interval = 1
x[:] = best_x[:]
mu, nu, norm_f, f[:], spow, _ = best_x_state
break
if norm_dx > (norm_x + rel_xtol) / (_MACH_PRECISION**2):
msg = "(near-)singular linear system"; break
if oob_check_interval > 0 and oob_check_mode == 0:
if k % oob_check_interval == 0:
#Check to see if objective function is out of bounds
in_bounds = []
if damping_mode == 'adaptive':
new_f_lst = []
for new_x, global_new_x in zip(new_x_lst, global_new_x_lst):
ari.allgather_x(new_x, global_new_x)
try:
new_f = obj_fn(global_new_x, oob_check=True)
except ValueError: # Use this to mean - "not allowed, but don't stop"
in_bounds.append(False)
new_f_lst.append(None) # marks OOB attempts that shouldn't be considered
else: # no exception raised
in_bounds.append(True)
new_f_lst.append(new_f.copy())
else:
#print("DB: Trying |x| = ", _np.linalg.norm(new_x), " |x|^2=", _np.dot(new_x,new_x))
# MEM if profiler: profiler.memory_check("custom_leastsq: before oob_check obj_fn")
ari.allgather_x(new_x, global_new_x)
try:
new_f = obj_fn(global_new_x, oob_check=True)
except ValueError: # Use this to mean - "not allowed, but don't stop"
in_bounds.append(False)
else:
in_bounds.append(True)
if any(in_bounds): # In adaptive mode, proceed if *any* cases are in-bounds
new_x_is_allowed = True
new_x_is_known_inbounds = True
else:
MIN_STOP_ITER = 1 # the minimum iteration where an OOB objective stops the optimization
if oob_action == "reject" or k < MIN_STOP_ITER:
new_x_is_allowed = False # (and also not in bounds)
elif oob_action == "stop":
if oob_check_interval == 1:
msg = "Objective function out-of-bounds! STOP"
converged = True; break
else: # reset to last known in-bounds point and now do the oob check every step
printer.log(
("** Hit out-of-bounds with check interval=%d, reverting to last "
"know in-bounds point and setting interval=1 **") % oob_check_interval, 2)
oob_check_interval = 1
x[:] = best_x[:]
mu, nu, norm_f, f[:], spow, _ = best_x_state # can't make use of saved JTJ yet
break # restart next outer loop
else:
raise ValueError("Invalid `oob_action`: '%s'" % oob_action)
else: # don't check this time
if damping_mode == 'adaptive':
new_f_lst = []
for new_x, global_new_x in zip(new_x_lst, global_new_x_lst):
ari.allgather_x(new_x, global_new_x)
new_f_lst.append(obj_fn(global_new_x).copy())
else:
ari.allgather_x(new_x, global_new_x)
new_f = obj_fn(global_new_x, oob_check=False)
new_x_is_allowed = True
new_x_is_known_inbounds = False
else:
#Just evaluate objective function normally; never check for in-bounds condition
if damping_mode == 'adaptive':
new_f_lst = []
for new_x, global_new_x in zip(new_x_lst, global_new_x_lst):
ari.allgather_x(new_x, global_new_x)
new_f_lst.append(obj_fn(global_new_x).copy())
else:
ari.allgather_x(new_x, global_new_x)
new_f = obj_fn(global_new_x)
new_x_is_allowed = True
new_x_is_known_inbounds = bool(oob_check_interval == 0) # consider "in bounds" if not checking
if new_x_is_allowed:
# MEM if profiler: profiler.memory_check("custom_leastsq: after obj_fn")
if damping_mode == 'adaptive':
norm_new_f_lst = [ari.norm2_f(new_f) if (new_f is not None) else 1e100
for new_f in new_f_lst] # 1e100 so we don't choose OOB adaptive cases
if any([not _np.isfinite(norm_new_f) for norm_new_f in norm_new_f_lst]): # avoid inf loop
msg = "Infinite norm of objective function!"; break
#iMin = _np.argmin(norm_new_f_lst) # pick lowest (best) objective
gain_ratio_lst = [(norm_f - nnf) / ari.dot_x(dx, mu * dx + minus_JTf)
for (nnf, dx) in zip(norm_new_f_lst, dx_lst)]
iMin = _np.argmax(gain_ratio_lst) # pick highest (best) gain ratio
# but expected decrease is |f|^2 = grad(fTf) * dx = (grad(fT)*f + fT*grad(f)) * dx
# = (JT*f + fT*J) * dx
# <<more explanation>>
norm_new_f = norm_new_f_lst[iMin]
new_f = new_f_lst[iMin]
new_x = new_x_lst[iMin]
global_new_x = global_new_x_lst[iMin]
dx = dx_lst[iMin]
if iMin == 0: spow = min(1.0, spow + 0.1)
elif iMin == 2: spow = max(-1.0, spow - 0.1)
printer.log("ADAPTIVE damping => i=%d b/c fs=[%s] gains=[%s] => spow=%g" % (
iMin, ", ".join(["%.3g" % v for v in norm_new_f_lst]),
", ".join(["%.3g" % v for v in gain_ratio_lst]), spow))
else:
norm_new_f = ari.norm2_f(new_f) # _np.linalg.norm(new_f)**2
if not _np.isfinite(norm_new_f): # avoid infinite loop...
msg = "Infinite norm of objective function!"; break
# dL = expected decrease in ||F||^2 from linear model
dL = ari.dot_x(dx, mu * dx + minus_JTf)
dF = norm_f - norm_new_f # actual decrease in ||F||^2
#DEBUG - see if cos_phi < 0.001, say, might work as a convergence criterion
#if damping_basis == 'singular_values':
# # projection of new_f onto solution tangent plane
# new_f_proj = _np.dot(Jac_Uproj, _np.dot(Jac_Uproj.T, new_f))
# # angle between residual vec and tangent plane
# cos_phi = _np.sqrt(_np.dot(new_f_proj, new_f_proj) / norm_new_f)
# #grad_f_norm = _np.linalg.norm(mu * dx - JTf)
#else:
# cos_phi = 0
if dF <= 0 and uphill_step_threshold > 0:
beta = 0 if last_accepted_dx is None else \
(ari.dot_x(dx, last_accepted_dx)
/ _np.sqrt(ari.norm2_x(dx) * ari.norm2_x(last_accepted_dx)))
uphill_ok = (uphill_step_threshold - beta) * norm_new_f < min(min_norm_f, norm_f)
else:
uphill_ok = False
if use_acceleration:
accel_ratio = 2 * _np.sqrt(ari.norm2_x(dx2) / ari.norm2_x(dx1))
printer.log(" (cont): norm_new_f=%g, dL=%g, dF=%g, reldL=%g, reldF=%g aC=%g" %
(norm_new_f, dL, dF, dL / norm_f, dF / norm_f, accel_ratio), 2)
else:
printer.log(" (cont): norm_new_f=%g, dL=%g, dF=%g, reldL=%g, reldF=%g" %
(norm_new_f, dL, dF, dL / norm_f, dF / norm_f), 2)
accel_ratio = 0.0
if dL / norm_f < rel_ftol and dF >= 0 and dF / norm_f < rel_ftol \
and dF / dL < 2.0 and accel_ratio <= alpha:
if oob_check_interval <= 1: # (if 0 then no oob checking is done)
msg = "Both actual and predicted relative reductions in the" + \
" sum of squares are at most %g" % rel_ftol
converged = True; break
else:
printer.log(("** Converged with out-of-bounds with check interval=%d, "
"reverting to last know in-bounds point and setting "
"interval=1 **") % oob_check_interval, 2)
oob_check_interval = 1
x[:] = best_x[:]
mu, nu, norm_f, f[:], spow, _ = best_x_state # can't make use of saved JTJ yet
break
# MEM if profiler: profiler.memory_check("custom_leastsq: before success")
if (dL > 0 and dF > 0 and accel_ratio <= alpha) or uphill_ok:
#Check whether an otherwise acceptable solution is in-bounds
if oob_check_mode == 1 and oob_check_interval > 0 and k % oob_check_interval == 0:
#Check to see if objective function is out of bounds
try:
#print("DB: Trying |x| = ", _np.linalg.norm(new_x), " |x|^2=", _np.dot(new_x,new_x))
# MEM if profiler:
# MEM profiler.memory_check("custom_leastsq: before oob_check obj_fn mode 1")
obj_fn(global_new_x, oob_check=True) # don't actually need return val (== new_f)
new_f_is_allowed = True
new_x_is_known_inbounds = True
except ValueError: # Use this to mean - "not allowed, but don't stop"
MIN_STOP_ITER = 1 # the minimum iteration where an OOB objective can stop the opt.
if oob_action == "reject" or k < MIN_STOP_ITER:
new_f_is_allowed = False # (and also not in bounds)
elif oob_action == "stop":
if oob_check_interval == 1:
msg = "Objective function out-of-bounds! STOP"
converged = True; break
else: # reset to last known in-bounds point and now do the oob check every step
printer.log(
("** Hit out-of-bounds with check interval=%d, reverting to last "
"know in-bounds point and setting interval=1 **") % oob_check_interval,
2)
oob_check_interval = 1
x[:] = best_x[:]
mu, nu, norm_f, f[:], spow, _ = best_x_state # can't make use of saved JTJ yet
break # restart next outer loop
else:
raise ValueError("Invalid `oob_action`: '%s'" % oob_action)
else:
new_f_is_allowed = True
if new_f_is_allowed:
# reduction in error: increment accepted!
t = 1.0 - (2 * dF / dL - 1.0)**3 # dF/dL == gain ratio
# always reduce mu for accepted step when |dx| is small
mu_factor = max(t, 1.0 / 3.0) if norm_dx > 1e-8 else 0.3
mu *= mu_factor
nu = 2
x[:] = new_x[:]; f[:] = new_f[:]; norm_f = norm_new_f
global_x[:] = global_new_x[:]
printer.log(" Accepted%s! gain ratio=%g mu * %g => %g"
% (" UPHILL" if uphill_ok else "", dF / dL, mu_factor, mu), 2)
last_accepted_dx = dx.copy()
if new_x_is_known_inbounds and norm_f < min_norm_f:
min_norm_f = norm_f
best_x[:] = x[:]
best_x_state = (mu, nu, norm_f, f.copy(), spow, None)
#Note: we use rawJTJ=None above because the current `JTJ` was evaluated
# at the *last* x-value -- we need to wait for the next outer loop
# to compute the JTJ for this best_x_state
#assert(_np.isfinite(x).all()), "Non-finite x!" # NaNs tracking
#assert(_np.isfinite(f).all()), "Non-finite f!" # NaNs tracking
##Check to see if we *would* switch to Q-N method in a hybrid algorithm
#new_Jac = jac_fn(new_x)
#new_JTf = _np.dot(new_Jac.T,new_f)
#print(" CHECK: %g < %g ?" % (_np.linalg.norm(new_JTf,
# ord=_np.inf),0.02 * _np.linalg.norm(new_f)))
break # exit inner loop normally
else:
reject_msg = " (out-of-bounds)"
else:
reject_msg = " (out-of-bounds)"
else:
reject_msg = " (LinSolve Failure)"
# if this point is reached, either the linear solve failed
# or the error did not reduce. In either case, reject increment.
#Increase damping (mu), then increase damping factor to
# accelerate further damping increases.
mu *= nu
if nu > half_max_nu: # watch for nu getting too large (&overflow)
msg = "Stopping after nu overflow!"; break
nu = 2 * nu
printer.log(" Rejected%s! mu => mu*nu = %g, nu => 2*nu = %g"
% (reject_msg, mu, nu), 2)
#end of inner loop
#end of outer loop
else:
#if no break stmt hit, then we've exceeded max_iter
msg = "Maximum iterations (%d) exceeded" % max_iter
converged = True # call result "converged" even in this case, but issue warning:
printer.warning("Treating result as *converged* after maximum iterations (%d) were exceeded." % max_iter)
except KeyboardInterrupt:
if comm is not None:
# ensure all procs agree on what best_x is (in case the interrupt occurred around x being updated)
comm.Bcast(best_x, root=0)
printer.log("Rank %d caught keyboard interrupt! Returning the current solution as being *converged*."
% comm.Get_rank())
else:
printer.log("Caught keyboard interrupt! Returning the current solution as being *converged*.")
msg = "Keyboard interrupt!"
converged = True
if comm is not None:
comm.barrier() # Just to be safe, so procs stay synchronized and we don't free anything too soon
ari.deallocate_jtj(JTJ)
ari.deallocate_jtf(JTf)
ari.deallocate_jtf(x)
ari.deallocate_jtj_shared_mem_buf(jtj_buf)
#ari.deallocate_x_for_jac(x_for_jac)
if x_limits is not None:
ari.deallocate_jtf(x_lower_limits)
ari.deallocate_jtf(x_upper_limits)
if damping_basis == "singular_values":
ari.deallocate_jtj(Jac_V)
if damping_mode == 'adaptive':
for xx in dx_lst: ari.deallocate_jtf(xx)
for xx in new_x_lst: ari.deallocate_jtf(xx)
else:
ari.deallocate_jtf(dx)
ari.deallocate_jtf(new_x)
if use_acceleration:
ari.deallocate_jtf(dx1)
ari.deallocate_jtf(dx2)
ari.deallocate_jtf(df2_x)
ari.deallocate_jtf(JTdf2)
if num_fd_iters > 0:
ari.deallocate_jac(fdJac)
ari.allgather_x(best_x, global_x)
ari.deallocate_jtf(best_x)
#JTJ[idiag] = undamped_JTJ_diag #restore diagonal
mu, nu, norm_f, f[:], spow, rawJTJ = best_x_state
global_f = _np.empty(ari.global_num_elements(), 'd')
ari.allgather_f(f, global_f)
return global_x, converged, msg, mu, nu, norm_f, global_f, rawJTJ
#solution = _optResult()
#solution.x = x; solution.fun = f
#solution.success = converged
#solution.message = msg
#return solution | 339c892524442a69b7f84f0b05a1931f5ebddf8a | 3,657,300 |
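# The routine above is a Levenberg-Marquardt solver: mu is the damping parameter, and the
# gain ratio dF/dL drives the usual update (shrink mu by max(1/3, 1 - (2*dF/dL - 1)^3) on an
# accepted step, grow mu by nu and double nu on a rejected one). A minimal single-process
# sketch of that core loop, assuming plain NumPy and none of pyGSTi's resource-allocation
# machinery (illustrative only, not the pyGSTi implementation):
import numpy as np

def _toy_leastsq(obj_fn, jac_fn, x0, max_iter=100, grad_tol=1e-10):
    x = x0.copy()
    f = obj_fn(x)
    mu, nu = None, 2.0
    for _ in range(max_iter):
        J = jac_fn(x)                                   # shape (M, N)
        JTJ, JTf = J.T @ J, J.T @ f
        if np.linalg.norm(JTf, np.inf) < grad_tol:
            break                                       # gradient small enough -> converged
        if mu is None:
            mu = 1e-3 * np.max(np.diag(JTJ))            # initial damping, as with init_munu="auto"
        while True:                                     # inner loop: adjust damping until a step is accepted
            dx = np.linalg.solve(JTJ + mu * np.eye(len(x)), -JTf)  # damping_mode="identity"
            new_f = obj_fn(x + dx)
            dF = f @ f - new_f @ new_f                  # actual decrease in ||f||^2
            dL = dx @ (mu * dx - JTf)                   # decrease predicted by the linearized model
            if dL > 0 and dF > 0:                       # accept: shrink mu based on the gain ratio
                mu *= max(1.0 / 3.0, 1.0 - (2.0 * dF / dL - 1.0) ** 3)
                nu = 2.0
                x, f = x + dx, new_f
                break
            mu *= nu                                    # reject: increase damping and retry
            nu *= 2.0
    return x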
import math
def degrees(x):
"""Converts angle x from radians to degrees.
:type x: numbers.Real
:rtype: float
"""
return math.degrees(x) | 87fe22113f8286db6c516e711b9cf0d4efe7e11d | 3,657,301
def account_credit(account=None,
asset=None,
date=None,
tp=None,
order_by=['tp', 'account', 'asset'],
hide_empty=False):
"""
Get credit operations for the account
Args:
account: filter by account code
asset: filter by asset code
date: get balance for specified date/time
tp: filter by account type
order_by: field or list of sorting fields
hide_empty: don't return zero balances
Returns:
generator object
"""
return _account_summary('credit',
account=account,
asset=asset,
date=date,
tp=tp,
order_by=order_by,
hide_empty=hide_empty) | a690d1352344c6f8e3d8172848255adc1fa9e331 | 3,657,302 |
import logging
from logging import handlers
def get_logger(log_file=None):
"""
Initialize logger configuration.
Returns:
logger.
"""
formatter = logging.Formatter(
'%(asctime)s %(name)s.%(funcName)s +%(lineno)s: '
'%(levelname)-8s [%(process)d] %(message)s'
)
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
if log_file:
file_handler = handlers.RotatingFileHandler(log_file, backupCount=10)
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)
console_handler = logging.StreamHandler()
console_handler.setFormatter(formatter)
logger.addHandler(console_handler)
return logger | 3dad72ee83c25d2c49d6cc357bf89048f7018cb5 | 3,657,303 |
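# A brief usage sketch for the function above (the log path is illustrative only):
# logger = get_logger("/tmp/example.log")
# logger.info("writes to both the console and a rotating file")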
import torch
from torch.backends import cudnn
def verify(model):
"""
Evaluate the model on the test data.
:param model: the network model with its parameters
:return res: a list of per-sample result dicts
"""
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = model.to(device)
if device == 'cuda':
model = torch.nn.DataParallel(model)
cudnn.benchmark = True
res = []
for idx, data in enumerate(test_loader):
img, label = data
img, label = img.to(device), label.to(device)
label2 = label.cpu().numpy()[0] # move to CPU before converting to numpy
img = img.view(img.size(0), -1)
out = model(img)
all_output = []
for i in out.data:
all_output.append(i.cpu().numpy()) # move to CPU before converting to numpy
all_output = all_output[0]
if max(all_output) == all_output[label2]:
correct = True
else:
correct = False
all_output = sorted(all_output, reverse=True)
bvsb = all_output[0] - all_output[1]
obj = {
"label": int(label2),
"correct": correct,
"bvsb": float(bvsb)
}
res.append(obj)
if idx >= test_num - 1:
break
return res | 9ad2fd6280018aacbb2501f6b5eb862924b361a1 | 3,657,304 |
import csv
def parse_solution_file(solution_file):
"""Parse a solution file."""
ids = []
classes = []
with open(solution_file) as file_handle:
solution_reader = csv.reader(file_handle)
header = next(solution_reader, None)
if header != HEADER:
raise ValueError(
'Incorrect header found: {}, should be: {}'.format(
header, HEADER))
solution = sorted(list(solution_reader), key=lambda x: x[0])
for row in solution:
if len(row) < 2:
raise ValueError(
'Bad row length: {}, '
'should be at least {} for row {}'.format(
len(row), len(HEADER), row))
row_classes = row[1:]
if any(class_ not in POSSIBLE_CLASSES for class_ in row_classes):
raise ValueError(
'Unknown class found among: {}'.format(row_classes))
ids.append(row[0])
classes.append(row_classes)
return ids, classes | 19a553bd9979ca1d85d223b3109f3567a3a84100 | 3,657,305 |
def fibonacci_modulo(number, modulo):
"""
Calculating (n-th Fibonacci number) mod m
Args:
number: fibonacci number
modulo: modulo
Returns:
(n-th Fibonacci number) mod m
Examples:
>>> fibonacci_modulo(11527523930876953, 26673)
10552
"""
period = _pisano_period_len(modulo)
answer = _fib(number - number // period * period) % modulo
return answer | 5a7692597c17263ba86e81104762e4c7c8c95083 | 3,657,306 |
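# fibonacci_modulo relies on two helpers that are not part of this snippet. A plausible
# sketch of what they compute, assuming straightforward iterative implementations
# (the Pisano period length of `modulo`, and the plain n-th Fibonacci number):
def _pisano_period_len(modulo):
    # the sequence F(n) mod m is periodic and each period starts with the pair (0, 1);
    # assumes modulo >= 2
    previous, current = 0, 1
    for i in range(modulo * modulo):
        previous, current = current, (previous + current) % modulo
        if previous == 0 and current == 1:
            return i + 1

def _fib(number):
    previous, current = 0, 1
    for _ in range(number):
        previous, current = current, previous + current
    return previous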
from typing import Union
import numpy as np
import pandas as pd
def _str_unusual_grades(df: pd.DataFrame) -> Union[str, None]:
"""Print the number of unusual grades."""
grades = np.arange(0, 10.5, 0.5).astype(float)
catch_grades = []
for item in df["grade"]:
try:
if float(item) not in grades:
catch_grades.append(item)
except ValueError:
catch_grades.append(item)
if catch_grades == []:
return None
else:
return (
f"– Over all grades, {len(catch_grades)} of {len(df)} cards do not receive"
f" standard grades. These grades are in {set(catch_grades)}"
) | 0998b112438685523cadc60eb438bee94f3ad8fd | 3,657,307 |
from typing import Any
from typing import Dict
def _adjust_estimator_options(estimator: Any, est_options: Dict[str, Any], **kwargs) -> Dict[str, Any]:
"""
Adds specific required classifier options to the `clf_options` dictionary.
Parameters
----------
estimator : Any
The estimator class for which the options have to be added
est_options : Dict[str, Any]
Dictionary to which the additional estimator options should be added
kwargs :
Additional classifier options as keyword arguments
Returns
-------
Dict[str, Any]
The input `est_options` dictionary containing the additional estimator options
"""
if estimator.__name__ == 'XGBClassifier':
est_options['num_class'] = kwargs['n_categories']
elif estimator.__name__ == 'DNNClassifier':
est_options['n_classes'] = kwargs['n_categories']
est_options['n_features'] = kwargs['n_features']
est_options['random_state'] = kwargs['random_seed']
return est_options | 4ff98d8a3b3e647e129fb0ffbc9bc549caa60440 | 3,657,308 |
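# A brief usage sketch of the function above (assumes xgboost's XGBClassifier is importable):
# opts = _adjust_estimator_options(XGBClassifier, {'max_depth': 3}, n_categories=4)
# # -> {'max_depth': 3, 'num_class': 4}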
def _prettify(elem,indent_level=0):
"""Return a pretty-printed XML string for the Element.
"""
indent = " "
res = indent_level*indent + '<'+elem.tag.encode('utf-8')
for k in elem.keys():
res += " "+k.encode('utf-8')+'="'+_escape_nl(elem.get(k)).encode('utf-8')+'"'
children = elem.getchildren()
if len(children)==0 and not elem.text:
res += ' />'
return res
res += '>'
if elem.text:
res += _escape_nl(elem.text).encode('utf-8')
for c in children:
res += '\n'+_prettify(c,indent_level+1)
if len(children)>0:
res += '\n'+indent_level*indent
res += '</'+elem.tag.encode('utf-8')+'>'
return res | 8f46637cdbb8daf488fd668a197aee5495b8128a | 3,657,309 |
def predict(text):
"""
Predict the language of a text.
Parameters
----------
text : str
Returns
-------
language_code : str
"""
if language_models is None:
init_language_models(comp_metric, unicode_cutoff=10**6)
x_distribution = get_distribution(text, language_models_chars)
return predict_param(language_models,
comp_metric,
x_distribution,
best_only=True) | 713d8cd8df040703ee7f138314f5c14f5a89ef26 | 3,657,310 |
def kgup(baslangic_tarihi=__dt.datetime.today().strftime("%Y-%m-%d"),
bitis_tarihi=__dt.datetime.today().strftime("%Y-%m-%d"), organizasyon_eic="", uevcb_eic=""):
"""
Returns the source-based finalized daily production plan (KGÜP) for the given date range.
Note: If "organizasyon_eic" is given and "uevcb_eic" is not, the KGÜP is returned as the total over all
UEVCBs belonging to the organization. If both values are given, the KGÜP is returned for the specified
UEVCB of the specified organization.
Parameters
------------
baslangic_tarihi : start date in %YYYY-%MM-%DD format (default: today)
bitis_tarihi : end date in %YYYY-%MM-%DD format (default: today)
organizasyon_eic : organization EIC code as a string (default: "")
uevcb_eic : UEVCB EIC code as a string (default: "")
Returns
-----------------
KGUP (Tarih, Saat, Doğalgaz, Barajlı, Linyit, Akarsu, İthal Kömür, Rüzgar, Fuel Oil, Jeo Termal, Taş Kömür, Biyokütle
, Nafta, Diğer, Toplam)
"""
if __dogrulama.__baslangic_bitis_tarih_dogrulama(baslangic_tarihi, bitis_tarihi):
try:
particular_url = __first_part_url + "dpp" + "?startDate=" + baslangic_tarihi + "&endDate=" + bitis_tarihi \
+ "&organizationEIC=" + organizasyon_eic + "&uevcbEIC=" + uevcb_eic
json = __make_requests(particular_url)
df = __pd.DataFrame(json["body"]["dppList"])
df["Saat"] = df["tarih"].apply(lambda h: int(h[11:13]))
df["Tarih"] = __pd.to_datetime(df["tarih"].apply(lambda d: d[:10]))
df.rename(index=str,
columns={"akarsu": "Akarsu", "barajli": "Barajlı", "biokutle": "Biyokütle", "diger": "Diğer",
"dogalgaz": "Doğalgaz", "fuelOil": "Fuel Oil", "ithalKomur": "İthal Kömür",
"jeotermal": "Jeo Termal", "linyit": "Linyit", "nafta": "Nafta",
"ruzgar": "Rüzgar", "tasKomur": "Taş Kömür", "toplam": "Toplam"}, inplace=True)
df = df[["Tarih", "Saat", "Doğalgaz", "Barajlı", "Linyit", "Akarsu", "İthal Kömür", "Rüzgar",
"Fuel Oil", "Jeo Termal", "Taş Kömür", "Biyokütle", "Nafta", "Diğer", "Toplam"]]
except (KeyError, TypeError):
return __pd.DataFrame()
else:
return df | a1b19ea295bf58db114391e11333e37a9fd6d47d | 3,657,311 |
def distance_loop(x1, x2):
""" Returns the Euclidean distance between the 1-d numpy arrays x1 and x2"""
# accumulate squared element-wise differences, then take the square root
return sum((a - b) ** 2 for a, b in zip(x1, x2)) ** 0.5 | abd35a27cbeb5f5c9fe49a2a076d18f16e2849d9 | 3,657,312
def get_ps_calls_and_summary(filtered_guide_counts_matrix, f_map):
"""Calculates protospacer calls per cell and summarizes them
Args:
filtered_guide_counts_matrix: CountMatrix - obtained by selecting features by CRISPR library type on the feature counts matrix
f_map: dict - map of feature ID:feature sequence pairs
Returns:
First 3 outputs as specified in docstring for get_perturbation_calls
ps_calls_summary is a Pandas dataframe summarizing descriptive statistics for each perturbation_call (unique combination of protospacers) found in
the dataset, along with some overall summary statistics about the multiplicity of infection
"""
if feature_utils.check_if_none_or_empty(filtered_guide_counts_matrix):
return (None, None, None, None, None)
(ps_calls_table, presence_calls, cells_with_ps, umi_thresholds) = get_perturbation_calls(filtered_guide_counts_matrix,
f_map,)
ps_calls_table.sort_values(by=['feature_call'], inplace=True, kind='mergesort')
ps_calls_summary = get_ps_calls_summary(ps_calls_table, filtered_guide_counts_matrix)
return (ps_calls_table, presence_calls, cells_with_ps, ps_calls_summary, umi_thresholds) | 18aecb335655fb62459350761aeffd4ddbe231ae | 3,657,313 |
def symbol_by_name(name, aliases={}, imp=None, package=None,
sep='.', default=None, **kwargs):
"""Get symbol by qualified name.
The name should be the full dot-separated path to the class::
modulename.ClassName
Example::
celery.concurrency.processes.TaskPool
^- class name
or using ':' to separate module and symbol::
celery.concurrency.processes:TaskPool
If `aliases` is provided, a dict containing short name/long name
mappings, the name is looked up in the aliases first.
Examples:
>>> symbol_by_name("celery.concurrency.processes.TaskPool")
<class 'celery.concurrency.processes.TaskPool'>
>>> symbol_by_name("default", {
... "default": "celery.concurrency.processes.TaskPool"})
<class 'celery.concurrency.processes.TaskPool'>
# Does not try to look up non-string names.
>>> from celery.concurrency.processes import TaskPool
>>> symbol_by_name(TaskPool) is TaskPool
True
"""
if imp is None:
imp = importlib.import_module
if not isinstance(name, basestring):
return name # already a class
name = aliases.get(name) or name
sep = ':' if ':' in name else sep
module_name, _, cls_name = name.rpartition(sep)
if not module_name:
cls_name, module_name = None, package if package else cls_name
try:
try:
module = imp(module_name, package=package, **kwargs)
except ValueError, exc:
raise ValueError, ValueError(
"Couldn't import %r: %s" % (name, exc)), sys.exc_info()[2]
return getattr(module, cls_name) if cls_name else module
except (ImportError, AttributeError):
if default is None:
raise
return default | 10921d715abc9c83891b26b884f3c88e86c4a900 | 3,657,314 |
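# The snippet above is Python 2 syntax (`basestring`, comma-style `except`, three-argument
# `raise`). A minimal Python 3 sketch of the same dotted-path lookup, assuming the same
# "module.Class" / "module:Class" conventions (illustrative, not celery's current code):
import importlib

def symbol_by_name_py3(name, sep='.', default=None, package=None):
    if not isinstance(name, str):
        return name  # already a class or other object
    sep = ':' if ':' in name else sep
    module_name, _, cls_name = name.rpartition(sep)
    if not module_name:
        cls_name, module_name = None, package if package else cls_name
    try:
        module = importlib.import_module(module_name, package=package)
        return getattr(module, cls_name) if cls_name else module
    except (ImportError, AttributeError):
        if default is None:
            raise
        return default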
from typing import Tuple
import numpy as np
def coefficients_of_line_from_points(
point_a: Tuple[float, float], point_b: Tuple[float, float]
) -> Tuple[float, float]:
"""Computes the m and c coefficients of the equation (y=mx+c) for
a straight line from two points.
Args:
point_a: point 1 coordinates
point_b: point 2 coordinates
Returns:
m coefficient and c coefficient
"""
points = [point_a, point_b]
x_coords, y_coords = zip(*points)
coord_array = np.vstack([x_coords, np.ones(len(x_coords))]).T
m, c = np.linalg.lstsq(coord_array, y_coords, rcond=None)[0]
return m, c | b4d89f2bb3db48723f321e01658e795f431427e1 | 3,657,315 |
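# A quick usage check for the function above: the line through (0, 1) and (2, 5) is y = 2x + 1,
# so the call below returns m ~= 2.0 and c ~= 1.0 (up to least-squares rounding):
# m, c = coefficients_of_line_from_points((0.0, 1.0), (2.0, 5.0))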
import os
import subprocess
from csv import DictReader
from datetime import date, datetime
from shutil import rmtree
def logDirManager():
""" Directory manager for TensorFlow logging """
print('Cleaning and initialising logging directory... \n')
# Ensure function is starting from project root..
if os.getcwd() != "/Users/Oliver/AnacondaProjects/SNSS_TF":
os.chdir("/Users/Oliver/AnacondaProjects/SNSS_TF")
os.chdir("tb_log") # Go to logging folder..
stdout = subprocess.check_output(["ls", "-a"]) # Send ls command to terminal
# Decode output from terminal
folders = DictReader(stdout.decode('ascii').splitlines(),
delimiter=' ', skipinitialspace=True,
fieldnames=['name'])
# For every folder in ordered dict...
for f in folders:
path = f.get('name') # Get path
if (path != ('.')) & (path != ('..')) & (path != '.DS_Store'): # Ignore parent dirs
cDate = datetime.fromtimestamp(os.stat(os.getcwd() + '/' + f.get('name')).st_ctime)
delta = datetime.today() - cDate # Get age of folder.
if delta.days > 6: # If older than 1 week...
rmtree(path) # Delete folder.
print('Removed old folder: "' + path + '" \n') # Log deletion to console.
# print('Name: ' + str + ' Created on: ' + cDate.isoformat()) # Debugging
logDir = "log_dir/" + date.today().isoformat() + "/" +\
datetime.now().time().isoformat(timespec='minutes').replace(':', '')
# Create todays folder for logging
print('Tensorflow logging to : ~/' + logDir + '\n')
os.chdir('..')
return logDir | 664f5554c5c937ff3a95e68c59fbc41ff4276f85 | 3,657,316 |
import tifffile
def read_tiff(fname, slc=None):
"""
Read data from tiff file.
Parameters
----------
fname : str
String defining the path of file or file name.
slc : sequence of tuples, optional
Range of values for slicing data in each axis.
((start_1, end_1, step_1), ... , (start_N, end_N, step_N))
defines slicing parameters for each axis of the data matrix.
Returns
-------
ndarray
Output 2D image.
"""
fname = _check_read(fname)
try:
arr = tifffile.imread(fname, memmap=True)
except IOError:
logger.error('No such file or directory: %s', fname)
return False
arr = _slice_array(arr, slc)
_log_imported_data(fname, arr)
return arr | 39b48229719dd8210059a3a9ed7972e8398728ab | 3,657,317 |
def sorted_non_max_suppression_padded(scores,
boxes,
max_output_size,
iou_threshold):
"""A wrapper that handles non-maximum suppression.
Assumption:
* The boxes are sorted by scores unless the box is a dot (all coordinates
are zero).
* Boxes with higher scores can be used to suppress boxes with lower scores.
The overall design of the algorithm is to handle boxes tile-by-tile:
boxes = boxes.pad_to_multiply_of(tile_size)
num_tiles = len(boxes) // tile_size
output_boxes = []
for i in range(num_tiles):
box_tile = boxes[i*tile_size : (i+1)*tile_size]
for j in range(i - 1):
suppressing_tile = boxes[j*tile_size : (j+1)*tile_size]
iou = bbox_overlap(box_tile, suppressing_tile)
# if the box is suppressed in iou, clear it to a dot
box_tile *= _update_boxes(iou)
# Iteratively handle the diagonal tile.
iou = _box_overlap(box_tile, box_tile)
iou_changed = True
while iou_changed:
# boxes that are not suppressed by anything else
suppressing_boxes = _get_suppressing_boxes(iou)
# boxes that are suppressed by suppressing_boxes
suppressed_boxes = _get_suppressed_boxes(iou, suppressing_boxes)
# clear iou to 0 for boxes that are suppressed, as they cannot be used
# to suppress other boxes any more
new_iou = _clear_iou(iou, suppressed_boxes)
iou_changed = (new_iou != iou)
iou = new_iou
# remaining boxes that can still suppress others, are selected boxes.
output_boxes.append(_get_suppressing_boxes(iou))
if len(output_boxes) >= max_output_size:
break
Args:
scores: a tensor with a shape of [batch_size, anchors].
boxes: a tensor with a shape of [batch_size, anchors, 4].
max_output_size: a scalar integer `Tensor` representing the maximum number
of boxes to be selected by non max suppression.
iou_threshold: a float representing the threshold for deciding whether boxes
overlap too much with respect to IOU.
Returns:
nms_scores: a tensor with a shape of [batch_size, anchors]. It has same
dtype as input scores.
nms_proposals: a tensor with a shape of [batch_size, anchors, 4]. It has
same dtype as input boxes.
"""
batch_size = tf.shape(boxes)[0]
num_boxes = tf.shape(boxes)[1]
pad = tf.cast(
tf.math.ceil(tf.cast(num_boxes, tf.float32) / NMS_TILE_SIZE),
tf.int32) * NMS_TILE_SIZE - num_boxes
boxes = tf.pad(tf.cast(boxes, tf.float32), [[0, 0], [0, pad], [0, 0]])
scores = tf.pad(
tf.cast(scores, tf.float32), [[0, 0], [0, pad]], constant_values=-1)
num_boxes += pad
def _loop_cond(unused_boxes, unused_threshold, output_size, idx):
return tf.logical_and(
tf.reduce_min(output_size) < max_output_size,
idx < num_boxes // NMS_TILE_SIZE)
selected_boxes, _, output_size, _ = tf.while_loop(
_loop_cond, _suppression_loop_body, [
boxes, iou_threshold,
tf.zeros([batch_size], tf.int32),
tf.constant(0)
])
idx = num_boxes - tf.cast(
tf.nn.top_k(
tf.cast(tf.reduce_any(selected_boxes > 0, [2]), tf.int32) *
tf.expand_dims(tf.range(num_boxes, 0, -1), 0), max_output_size)[0],
tf.int32)
idx = tf.minimum(idx, num_boxes - 1)
idx = tf.reshape(
idx + tf.reshape(tf.range(batch_size) * num_boxes, [-1, 1]), [-1])
boxes = tf.reshape(
tf.gather(tf.reshape(boxes, [-1, 4]), idx),
[batch_size, max_output_size, 4])
boxes = boxes * tf.cast(
tf.reshape(tf.range(max_output_size), [1, -1, 1]) < tf.reshape(
output_size, [-1, 1, 1]), boxes.dtype)
scores = tf.reshape(
tf.gather(tf.reshape(scores, [-1, 1]), idx),
[batch_size, max_output_size])
scores = scores * tf.cast(
tf.reshape(tf.range(max_output_size), [1, -1]) < tf.reshape(
output_size, [-1, 1]), scores.dtype)
return scores, boxes | 5d882acb6b9559eb541d49f6784798e5d342c673 | 3,657,318 |
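# The tiled, padded implementation above is a TPU-friendly formulation of greedy NMS on
# score-sorted boxes. For reference, a short NumPy sketch of that plain baseline, assuming
# boxes are already sorted by descending score and stored as [y1, x1, y2, x2] rows
# (illustrative only, not the TF code path):
import numpy as np

def greedy_nms(boxes, iou_threshold, max_output_size):
    keep = []
    suppressed = np.zeros(len(boxes), dtype=bool)
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    for i in range(len(boxes)):
        if suppressed[i]:
            continue
        keep.append(i)
        if len(keep) >= max_output_size:
            break
        # IoU of box i against all lower-scored boxes
        top_left = np.maximum(boxes[i, :2], boxes[i + 1:, :2])
        bottom_right = np.minimum(boxes[i, 2:], boxes[i + 1:, 2:])
        inter = np.prod(np.clip(bottom_right - top_left, 0.0, None), axis=1)
        iou = inter / (areas[i] + areas[i + 1:] - inter)
        suppressed[i + 1:] |= iou > iou_threshold
    return keep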
def create_session() -> Session:
"""
Creates a new session using the aforementioned engine
:return: session
"""
return Session(bind=engine) | 8b480ee216c30b2c6b8652a6b6239ab6b83df4d9 | 3,657,319 |
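# create_session assumes a module-level SQLAlchemy `engine`; a plausible setup sketch
# (the in-memory SQLite URL is illustrative only):
# from sqlalchemy import create_engine
# from sqlalchemy.orm import Session
# engine = create_engine("sqlite:///:memory:")
# session = create_session()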
import torch
def fft_to_complex_matrix(x):
""" Create matrix with [a -b; b a] entries for complex numbers. """
x_stacked = torch.stack((x, torch.flip(x, (4,))), dim=5).permute(2, 3, 0, 4, 1, 5)
x_stacked[:, :, :, 0, :, 1] *= -1
return x_stacked.reshape(-1, 2 * x.shape[0], 2 * x.shape[1]) | 9fb38004041280da0d6d53830761501aebf7969a | 3,657,320 |
import copy
def mcais(A, X, verbose=False):
"""
Returns the maximal constraint-admissible (positive) invariant set O_inf for the system x(t+1) = A x(t) subject to the constraint x in X.
O_inf is also known as maximum output admissible set.
It holds that x(0) in O_inf <=> x(t) in X for all t >= 0.
(Implementation of Algorithm 3.2 from: Gilbert, Tan - Linear Systems with State and Control Constraints, The Theory and Application of Maximal Output Admissible Sets.)
Sufficient conditions for this set to be finitely determined (i.e. defined by a finite number of facets) are: A stable, X bounded and containing the origin.
Math
----------
At each time step t, we want to verify if at the next time step t+1 the system will go outside X.
Let's consider X := {x | D_i x <= e_i, i = 1,...,n} and t = 0.
In order to ensure that x(1) = A x(0) is inside X, we need to consider one by one all the constraints and for each of them, the worst-case x(0).
We can do this by solving an LP
V(t=0, i) = max_{x in X} D_i A x - e_i for i = 1,...,n
if all these LPs have V < 0, there is no x(0) such that x(1) is outside X.
The previous implies that all the time-evolution x(t) will lie in X (see Gilbert and Tan).
In case one of the LPs gives a V > 0, we iterate and consider
V(t=1, i) = max_{x in X, x in A X} D_i A^2 x - e_i for i = 1,...,n
where A X := {x | D A x <= e}.
If now all V < 0, then O_inf = X U AX, otherwise we iterate until convergence
V(t, i) = max_{x in X, x in A X, ..., x in A^t X} D_i A^(t+1) x - e_i for i = 1,...,n
Once at convergence O_Inf = X U A X U ... U A^t X.
Arguments
----------
A : numpy.ndarray
State transition matrix.
X : instance of Polyhedron
State-space domain of the dynamical system.
verbose : bool
If True prints at each iteration the convergence parameters.
Returns:
----------
O_inf : instance of Polyhedron
Maximal constraint-admissible (positive) invariant set.
t : int
Determinedness index.
"""
# ensure convergence of the algorithm
eig_max = np.max(np.absolute(np.linalg.eig(A)[0]))
if eig_max > 1.:
raise ValueError('unstable system, cannot derive maximal constraint-admissible set.')
[nc, nx] = X.A.shape
if not X.contains(np.zeros((nx, 1))):
raise ValueError('the origin is not contained in the constraint set, cannot derive maximal constraint-admissible set.')
if not X.bounded:
raise ValueError('unbounded constraint set, cannot derive maximal constraint-admissible set.')
# initialize mcais
O_inf = copy(X)
# loop over time
t = 1
convergence = False
while not convergence:
# solve one LP per facet
J = X.A.dot(np.linalg.matrix_power(A,t))
residuals = []
for i in range(X.A.shape[0]):
sol = linear_program(- J[i,:], O_inf.A, O_inf.b)
residuals.append(- sol['min'] - X.b[i,0])
# print status of the algorithm
if verbose:
print('Time horizon: ' + str(t) + '.'),
print('Convergence index: ' + str(max(residuals)) + '.'),
print('Number of facets: ' + str(O_inf.A.shape[0]) + '. \r'),
# convergence check
new_facets = [i for i, r in enumerate(residuals) if r > 0.]
if len(new_facets) == 0:
convergence = True
else:
# add (only non-redundant!) facets
O_inf.add_inequality(J[new_facets,:], X.b[new_facets,:])
t += 1
# remove redundant facets
if verbose:
print('\nMaximal constraint-admissible invariant set found.')
        print('Removing redundant facets ...')
O_inf.remove_redundant_inequalities()
if verbose:
print('minimal facets are ' + str(O_inf.A.shape[0]) + '.')
return O_inf | e162a1aed724166f373f8afbd6541622254e8b42 | 3,657,321 |
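# Illustrative usage sketch (added for clarity, not part of the original source):
# assumes a pympc-style `Polyhedron` class (with a `from_bounds` constructor) and
# the `linear_program` helper used above are importable from the same package.
import numpy as np
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])                                     # stable: eigenvalues 0.9 and 0.8
X = Polyhedron.from_bounds(-np.ones((2, 1)), np.ones((2, 1)))  # box |x_i| <= 1, contains the origin
O_inf = mcais(A, X, verbose=True)
print(O_inf.A.shape[0], "facets in the maximal constraint-admissible invariant set")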
def evaluation_seasonal_srmse(model_name, variable_name='mean', background='all'):
"""
Evaluate the model in different seasons using the standardized RMSE.
:type model_name: str
:param model_name: The name of the model.
:type variable_name: str
    :param variable_name: The name of the variable which shall be evaluated
        against the ONI prediction.
    :type background: str
    :param background: The background state to evaluate on: 'all',
        'el-nino-like' (years 1982-2001) or 'la-nina-like' (all other years).
    :returns: The SRMSE for different seasons and the
        0, 3, 6, 9, 12 and 15-month lead times. The returned array has the shape
        (lead time, season). The season corresponding to the array entry [:, 0]
        is DJF and to [:, 1] is JFM (and so on).
"""
reader = data_reader(startdate='1963-01', enddate='2017-12')
# seasonal scores
seas_srmse = np.zeros((n_lead, 12))
# ONI observation
oni = reader.read_csv('oni')
if background=="el-nino-like":
obs = oni[(oni.index.year>=1982)&(oni.index.year<=2001)]
elif background=="la-nina-like":
obs = oni[(oni.index.year<1982)|(oni.index.year>2001)]
elif background=="all":
obs = oni
obs_time = obs.index
for i in range(n_lead):
pred_all = reader.read_forecasts(model_name, lead_times[i]).loc[{'target_season':obs_time}]
pred = pred_all[variable_name]
seas_srmse[i, :] = seasonal_srmse(obs, pred, obs_time - pd.tseries.offsets.MonthBegin(1))
return seas_srmse | 39fb7ae64ab32fc5092e46c77c8593f1aeaf4c92 | 3,657,322 |
from uuid import uuid4
def generate_accession_id() -> str:
"""Generate Stable ID."""
accessionID = uuid4()
urn = accessionID.urn
LOG.debug(f"generated accession id as: {urn}")
return urn | d55f63aa0b48a06aaa98f978b6f92a219c0b1457 | 3,657,323 |
def _async_device_ha_info(
hass: HomeAssistant, lg_device_id: str
) -> dict | None:
"""Gather information how this ThinQ device is represented in Home Assistant."""
device_registry = dr.async_get(hass)
entity_registry = er.async_get(hass)
hass_device = device_registry.async_get_device(
identifiers={(DOMAIN, lg_device_id)}
)
if not hass_device:
return None
data = {
"name": hass_device.name,
"name_by_user": hass_device.name_by_user,
"model": hass_device.model,
"manufacturer": hass_device.manufacturer,
"sw_version": hass_device.sw_version,
"disabled": hass_device.disabled,
"disabled_by": hass_device.disabled_by,
"entities": {},
}
hass_entities = er.async_entries_for_device(
entity_registry,
device_id=hass_device.id,
include_disabled_entities=True,
)
for entity_entry in hass_entities:
if entity_entry.platform != DOMAIN:
continue
state = hass.states.get(entity_entry.entity_id)
state_dict = None
if state:
state_dict = dict(state.as_dict())
# The entity_id is already provided at root level.
state_dict.pop("entity_id", None)
# The context doesn't provide useful information in this case.
state_dict.pop("context", None)
data["entities"][entity_entry.entity_id] = {
"name": entity_entry.name,
"original_name": entity_entry.original_name,
"disabled": entity_entry.disabled,
"disabled_by": entity_entry.disabled_by,
"entity_category": entity_entry.entity_category,
"device_class": entity_entry.device_class,
"original_device_class": entity_entry.original_device_class,
"icon": entity_entry.icon,
"original_icon": entity_entry.original_icon,
"unit_of_measurement": entity_entry.unit_of_measurement,
"state": state_dict,
}
return data | 47af173daba91aa70ea167baf58c05e9f6f595f6 | 3,657,324 |
from typing import Optional
def get_travis_pr_num() -> Optional[int]:
"""Return the PR number if the job is a pull request, None otherwise
Returns:
        Optional[int]: the pull-request number, or None if the build is not a PR
See also:
- <https://docs.travis-ci.com/user/environment-variables/#default-environment-variables>
""" # noqa E501
try:
travis_pull_request = get_travis_env_or_fail('TRAVIS_PULL_REQUEST')
if falsy(travis_pull_request):
return None
else:
try:
return int(travis_pull_request)
except ValueError:
return None
except UnexpectedTravisEnvironmentError:
return None | 86ef6ce3f9bf3c3e056b11e575b1b13381e490fe | 3,657,325 |
from typing import List
import json
def get_updated_records(table_name: str, existing_items: List) -> List:
"""
Determine the list of record updates, to be sent to a DDB stream after a PartiQL update operation.
Note: This is currently a fairly expensive operation, as we need to retrieve the list of all items
    from the table, and compare the items to those previously available. This is a limitation as
we're currently using the DynamoDB Local backend as a blackbox. In future, we should consider hooking
into the PartiQL query execution inside DynamoDB Local and directly extract the list of updated items.
"""
result = []
stream_spec = dynamodb_get_table_stream_specification(table_name=table_name)
key_schema = SchemaExtractor.get_key_schema(table_name)
before = ItemSet(existing_items, key_schema=key_schema)
after = ItemSet(ItemFinder.get_all_table_items(table_name), key_schema=key_schema)
def _add_record(item, comparison_set: ItemSet):
matching_item = comparison_set.find_item(item)
if matching_item == item:
return
# determine event type
if comparison_set == after:
if matching_item:
return
event_name = "REMOVE"
else:
event_name = "INSERT" if not matching_item else "MODIFY"
old_image = item if event_name == "REMOVE" else matching_item
new_image = matching_item if event_name == "REMOVE" else item
# prepare record
keys = SchemaExtractor.extract_keys_for_schema(item=item, key_schema=key_schema)
record = {
"eventName": event_name,
"eventID": short_uid(),
"dynamodb": {
"Keys": keys,
"NewImage": new_image,
"SizeBytes": len(json.dumps(item)),
},
}
if stream_spec:
record["dynamodb"]["StreamViewType"] = stream_spec["StreamViewType"]
if old_image:
record["dynamodb"]["OldImage"] = old_image
result.append(record)
# loop over items in new item list (find INSERT/MODIFY events)
for item in after.items_list:
_add_record(item, before)
# loop over items in old item list (find REMOVE events)
for item in before.items_list:
_add_record(item, after)
return result | 631c21836614731e5b53ed752036f1216d555196 | 3,657,326 |
def normalize_record(input_object, parent_name="root_entity"):
"""
This function orchestrates the main normalization.
It will go through the json document and recursively work with the data to:
- unnest (flatten/normalize) keys in objects with the standard <parentkey>_<itemkey> convention
- identify arrays, which will be pulled out and normalized
- create an array of entities, ready for streaming or export
    For each item in the object:
        - if the item is a plain (non-object, non-list) value:
            append it to the flattened_dict object
        - if the item is a dictionary:
            call the flatten_object function, which iterates through its items and
            returns {"dictionary": <dict_data>, "array": <arrays>};
            merge the returned ["dictionary"] data into flattened_dict and
            append the returned ["array"] entries to the arrays list
    Arrays are handled separately: because we expect multiple entries, we loop over
    each array, create a new dict object
    dict_object = {"name": <dict name>, "data": [dict array entries data]},
    trigger normalize_record on each entry with the array name as parent name, and
    append the resulting `dicts_array`["data"] to the dict_object["data"] array.
"""
arrays = []
dicts = []
output_dictionary = {}
parent_keys = extract_parent_keys(dictionary_name=parent_name, dictionary_object=input_object)
if isinstance(input_object, (dict)):
for key, value in input_object.items():
if not isinstance(value, (dict,list) ):
# if the item is a non object or non list item:
output_dictionary[key] = value
elif isinstance(value, dict):
# if the item is a dictionary:
# trigger the flatten dict function
dict_contents = flatten_object(key,value) # will return {"dictionary": <dict_data>, "array": <arrays>}
instance_dictionary = dict_contents["dictionary"]
instance_array = dict_contents["array"]
if len(instance_array) >0:
arrays.extend(instance_array)
output_dictionary = merge_two_dicts(output_dictionary,instance_dictionary) #join the dict
elif isinstance(value, list):
arrays.append({"name":key, "data":value, "parent_keys": parent_keys})
elif isinstance(input_object, (list)):
arrays.append({"name":parent_name,"data":input_object })
##############################
### Now process the arrays ###
##############################
for each_array in arrays:
for each_entry in each_array["data"]:
            try:
                if each_array["parent_keys"]:
                    each_entry = merge_two_dicts(each_entry, each_array["parent_keys"])
            except Exception:
                # top-level arrays are appended without "parent_keys", so skip them
                pass
normalized_array = (normalize_record(input_object = each_entry, parent_name = each_array["name"]) )
#expect list here
#let the normalizer recursively work through and pull the data out. Once it's out, we can append the data to the dicts array :)
#may return 1 or more dictionaries
for each_normalized_array_entry in normalized_array:
# iterate through each output in the normalized array
#check if there is an instance of this data already
matches = False
for each_dictionary_entity in dicts:
if each_normalized_array_entry["name"] == each_dictionary_entity["name"]:
#check if there is data in place already for this. If so, we add an entry to it
each_dictionary_entity["data"].extend(each_normalized_array_entry["data"])
matches = True
if matches == False:
dicts.append({"name": each_normalized_array_entry["name"] , "data": each_normalized_array_entry["data"] })
dicts.append({"name":parent_name, "data": [output_dictionary]})
return(dicts) | 73647b04ba943e18a38ebf2f1d03cca46b533935 | 3,657,327 |
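# Illustrative usage sketch (added for clarity, not part of the original source):
# assumes the extract_parent_keys, flatten_object and merge_two_dicts helpers
# from the same module are available.
record = {"id": 1, "profile": {"name": "Ada"}, "orders": [{"sku": "A1"}, {"sku": "B2"}]}
entities = normalize_record(record, parent_name="customer")
# `entities` is a list of {"name": ..., "data": [...]} dicts: one entry named
# "customer" holding the flattened scalar/dict fields, plus one entry per nested
# array (here "orders"), whose rows carry the parent keys copied down.
for entity in entities:
    print(entity["name"], len(entity["data"]))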
import numpy as np
def rmse(f, p, xdata, ydata):
"""Root-mean-square error."""
results = np.asarray([f(p, x) for x in xdata])
sqerr = (results - ydata)**2
return np.sqrt(sqerr.mean()) | 2b8afdb1742aad5e5c48fbe4407ab0989dbaf762 | 3,657,328 |
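# Tiny worked example (added for clarity): a line model p[0]*x + p[1] that fits
# the data exactly gives a root-mean-square error of zero.
import numpy as np
f_line = lambda p, x: p[0] * x + p[1]
print(rmse(f_line, [2.0, 1.0], [0.0, 1.0, 2.0], np.array([1.0, 3.0, 5.0])))   # 0.0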
import logging
def get_logger(name=None, propagate=True):
"""Get logger object"""
logger = logging.getLogger(name)
logger.propagate = propagate
loggers.append(logger)
return logger | 3ad4dbc39f9bf934b02e2dc6e713a4793a28298b | 3,657,329 |
import os
def load_movielens1m(infile=None, event_dtype=event_dtype_timestamp):
""" load the MovieLens 1m data set
Original file ``ml-1m.zip`` is distributed by the Grouplens Research
Project at the site:
`MovieLens Data Sets <http://www.grouplens.org/node/73>`_.
Parameters
----------
infile : optional, file or str
input file if specified; otherwise, read from default sample directory.
event_dtype : np.dtype
dtype of extra event features. as default, it consists of only a
``timestamp`` feature.
Returns
-------
data : :class:`kamrecsys.data.EventWithScoreData`
sample data
Notes
-----
Format of events:
* each event consists of a vector whose format is [user, item].
* 1,000,209 events in total
* 6,040 users rate 3,706 items (=movies)
* dtype=int
Format of scores:
* one score is given to each event
* domain of score is [1.0, 2.0, 3.0, 4.0, 5.0]
* dtype=float
Default format of event_features ( `data.event_feature` ):
timestamp : int
represented in seconds since the epoch as returned by time(2)
Format of user's feature ( `data.feature[0]` ):
gender : int
gender of the user, {0:male, 1:female}
age : int, {0, 1,..., 6}
age of the user, where
1:"Under 18", 18:"18-24", 25:"25-34", 35:"35-44", 45:"45-49",
50:"50-55", 56:"56+"
occupation : int, {0,1,...,20}
the number indicates the occupation of the user as follows:
0:"other" or not specified, 1:"academic/educator",
2:"artist", 3:"clerical/admin", 4:"college/grad student"
5:"customer service", 6:"doctor/health care", 7:"executive/managerial"
8:"farmer", 9:"homemaker", 10:"K-12 student", 11:"lawyer",
12:"programmer", 13:"retired", 14:"sales/marketing", 15:"scientist",
16:"self-employed", 17:"technician/engineer", 18:"tradesman/craftsman",
19:"unemployed", 20:"writer"
zip : str, length=5
zip code of 5 digits, which represents the residential area of the user
Format of item's feature ( `data.feature[1]` ):
name : str, length=[8, 82]
title of the movie with release year
year : int
released year
genre : binary_int * 18
18 binary numbers represents a genre of the movie. 1 if the movie
belongs to the genre; 0 other wise. All 0 implies unknown. Each column
corresponds to the following genres:
0:Action, 1:Adventure, 2:Animation, 3:Children's, 4:Comedy, 5:Crime,
6:Documentary, 7:Drama, 8:Fantasy, 9:Film-Noir, 10:Horror, 11:Musical,
12:Mystery, 13:Romance, 14:Sci-Fi, 15:Thriller, 16:War, 17:Western
"""
# load event file
if infile is None:
infile = os.path.join(SAMPLE_PATH, 'movielens1m.event')
data = load_event_with_score(
infile, n_otypes=2, event_otypes=(0, 1),
score_domain=(1., 5., 1.), event_dtype=event_dtype)
# load user's feature file
infile = os.path.join(SAMPLE_PATH, 'movielens1m.user')
fdtype = np.dtype([('gender', int), ('age', int),
('occupation', int), ('zip', 'U5')])
dtype = np.dtype([('eid', int), ('feature', fdtype)])
x = np.genfromtxt(fname=infile, delimiter='\t', dtype=dtype)
data.set_feature(0, x['eid'], x['feature'])
# load item's feature file
infile = os.path.join(SAMPLE_PATH, 'movielens1m.item')
fdtype = np.dtype([('name', 'U82'),
('year', int),
('genre', 'i1', 18)])
dtype = np.dtype([('eid', int), ('feature', fdtype)])
x = np.genfromtxt(fname=infile, delimiter='\t', dtype=dtype,
converters={1: np.char.decode})
data.set_feature(1, x['eid'], x['feature'])
return data | 594e222f1b2fce4fcb97e5ffe7082ccc42172681 | 3,657,330 |
import os
def load_coco_data(split):
"""load the `split` data containing image and label
Args:
split (str): the split of the dataset (train, val, test)
Returns:
tf.data.Dataset: the dataset contains image and label
image (tf.tensor), shape (224, 224, 3)
label (tf.tensor), shape (1000, )
"""
dataset = tfds.load(name="coco_captions", split=split)
write_captions_of_iamges_to_file(dataset, split)
img_cap_dict = get_captions_of_images(dataset, split)
attr_list = get_attributes_list(
os.path.join(WORKING_PATH, "finetune", "attribute_list.pickle")
)
attr2idx = {word: idx for idx, word in enumerate(attr_list)}
attr_dict = get_attributes_dict(dataset, split, attr_list, img_cap_dict)
attr_onehot = get_onehot_attributes(attr_dict, attr2idx, split)
attr_onehot_labels = [attr_onehot[idx] for idx in attr_onehot.keys()]
attr_onehot_labels = tf.data.Dataset.from_tensor_slices(
tf.cast(attr_onehot_labels, tf.int32)
)
def process(image):
image = tf.image.resize(image, (224, 224))
image = tf.cast(image, tf.float32)
image = image / 255
return image
def parse_fn(feature):
image = feature["image"]
return process(image)
img_dataset = dataset.map(parse_fn)
ds = tf.data.Dataset.zip((img_dataset, attr_onehot_labels))
return ds | 56d42130dca83ab883a02a1dce48c3374dc7398f | 3,657,331 |
import argparse
import os
def get_args():
"""
Get User defined arguments, or assign defaults
:rtype: argparse.ArgumentParser()
:return: User defined or default arguments
"""
parser = argparse.ArgumentParser()
# Positional arguments
parser.add_argument("main_args", type=str, nargs="*",
help="task for Seisflows to perform")
# Optional parameters
parser.add_argument("-w", "--workdir", nargs="?", default=os.getcwd())
parser.add_argument("-p", "--parameter_file", nargs="?",
default="parameters.yaml")
return parser.parse_args() | 2f31a2142034127d3de7f4212841c3432b451fc4 | 3,657,332 |
import requests
from typing import Match
def getMatches(tournamentName=None, matchDate=None, matchPatch=None, matchTeam=None):
"""
Params:
tournamentName: str/List[str]/Tuple(str) : filter by tournament names (e.g. LCK 2020 Spring)
matchDate: str/List[str]/Tuple(str) : date in the format of yyyy-mm-dd
matchPatch: str/List[str]/Tuple(str) : game patch the match is played on (e.g. 10.15)
matchTeam: str/List[str]/Tuple(str)
Returns:
List[Match]
"""
argsString = " AND ".join(filter(None, [
_formatArgs(tournamentName, "SG.Tournament"),
_formatDateTimeArgs(matchDate, "SG.DateTime_UTC"),
_formatArgs(matchPatch, "SG.Patch")
]))
url = MATCHES_URL.format(argsString)
matchesJson = requests.get(url).json()["cargoquery"]
matches = []
uniqueMatchMap = {}
for i in range(len(matchesJson)):
matchJson = matchesJson[i]["title"]
# apply team filter
if isinstance(matchTeam, str):
matchTeam = [matchTeam]
if isinstance(matchTeam, list):
if matchJson["Team1"] not in matchTeam and matchJson["Team2"] not in matchTeam:
continue
elif isinstance(matchTeam, tuple):
if not set(matchTeam).issubset(set([matchJson["Team1"], matchJson["Team2"]])):
continue
uniqueMatch = matchJson["UniqueGame"][:-2]
if uniqueMatch not in uniqueMatchMap:
match = Match(uniqueMatch)
match._uniqueGames.append(matchJson["UniqueGame"])
match.dateTime = matchJson["DateTime UTC"]
match.patch = matchJson["Patch"]
match.teams = (matchJson["Team1"], matchJson["Team2"])
match.scores = (int(matchJson["Team1Score"]), int(matchJson["Team2Score"]))
matches.append(match)
uniqueMatchMap[uniqueMatch] = match
else:
match = uniqueMatchMap[uniqueMatch]
match._uniqueGames.append(matchJson["UniqueGame"])
match.dateTime = matchJson["DateTime UTC"]
return matches | 89525caa9da0a3b546e0b8982e96469f32f8c5bc | 3,657,333 |
from typing import Optional
import os
def parse_e_elect(path: str,
zpe_scale_factor: float = 1.,
) -> Optional[float]:
"""
Parse the electronic energy from an sp job output file.
Args:
path (str): The ESS log file to parse from.
zpe_scale_factor (float): The ZPE scaling factor, used only for composite methods in Gaussian via Arkane.
Returns: Optional[float]
The electronic energy in kJ/mol.
"""
if not os.path.isfile(path):
raise InputError(f'Could not find file {path}')
log = ess_factory(fullpath=path)
try:
e_elect = log.load_energy(zpe_scale_factor) * 0.001 # convert to kJ/mol
except (LogError, NotImplementedError):
logger.warning(f'Could not read e_elect from {path}')
e_elect = None
return e_elect | d76eae166309ebe765afcfdcf5fcf26bd19a9826 | 3,657,334 |
from my_app.admin import admin
from my_app.main import main
import os
def create_app():
"""Creates the instance of an app."""
configuration_file=os.getcwd()+'/./configuration.cfg'
app=Flask(__name__)
app.config.from_pyfile(configuration_file)
bootstrap.init_app(app)
mail.init_app(app)
app.register_blueprint(admin)
app.register_blueprint(main)
return app | 54cd56142dadc8c27fa385b3eb12a3b4726c291c | 3,657,335 |
def get_fields(fields):
"""
From the last column of a GTF, return a dictionary mapping each value.
Parameters:
fields (str): The last column of a GTF
Returns:
attributes (dict): Dictionary created from fields.
"""
attributes = {}
description = fields.strip()
description = [x.strip() for x in description.split(";")]
for pair in description:
if pair == "": continue
pair = pair.replace('"', '')
key, val = pair.split()
attributes[key] = val
# put in placeholders for important attributes (such as gene_id) if they
# are absent
if 'gene_id' not in attributes:
attributes['gene_id'] = 'NULL'
return attributes | 30777838934b18a0046017f3da6b3a111a911a9c | 3,657,336 |
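# Example (added for clarity): parsing the attribute column of one GTF line.
fields = 'gene_id "ENSG00000001"; transcript_id "ENST00000002";'
print(get_fields(fields))
# {'gene_id': 'ENSG00000001', 'transcript_id': 'ENST00000002'}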
def add_log_group_name_params(log_group_name, configs):
"""Add a "log_group_name": log_group_name to every config."""
for config in configs:
config.update({"log_group_name": log_group_name})
return configs | a5fce8143c3404257789c1720bbfefc49c8ea3f5 | 3,657,337 |
from typing import Union
def on_update_user_info(data: dict, activity: Activity) -> (int, Union[str, None]):
"""
broadcast a user info update to a room, or all rooms the user is in if no target.id specified
:param data: activity streams format, must include object.attachments (user info)
:param activity: the parsed activity, supplied by @pre_process decorator, NOT by calling endpoint
:return: {'status_code': ECodes.OK, 'data': '<same AS as client sent, plus timestamp>'}
"""
activity.actor.display_name = utils.b64e(environ.env.session.get(SessionKeys.user_name.value))
data['actor']['displayName'] = activity.actor.display_name
environ.env.observer.emit('on_update_user_info', (data, activity))
return ECodes.OK, data | 735486cad96545885a76a5a18418db549869304d | 3,657,338 |
def discover(isamAppliance, check_mode=False, force=False):
"""
Discover available updates
"""
return isamAppliance.invoke_get("Discover available updates",
"/updates/available/discover") | 04c68b0ce57d27bc4032cf9b1607f2f1f371e384 | 3,657,339 |
import numpy as np
def ltistep(U, A, B, C):
""" LTI( A B C ): U -> y linear
straight up
"""
U, A, B, C = map(np.asarray, (U, A, B, C))
xk = np.zeros(A.shape[1])
x = [xk]
for u in U[:-1]:
xk = A.dot(xk) + B.dot(u)
x.append(xk.copy())
return np.dot(x, C) | 5d7c7550a9a6407a8f1a68ee32e158f25a7d50bf | 3,657,340 |
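# Example (added for clarity): scalar system x_{k+1} = 0.5 x_k + u_k with y_k = x_k.
import numpy as np
A = np.array([[0.5]])
B = np.array([[1.0]])
C = np.array([[1.0]])
U = np.array([[1.0], [0.0], [0.0]])
print(ltistep(U, A, B, C))   # states [[0.], [1.], [0.5]]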
def _registry():
"""Registry to download images from."""
return _registry_config()["host"] | ee7c724f3b9381c4106a4e19d0434b9b4f0125fc | 3,657,341 |
def load_structure(query, reduce=True, strip='solvent&~@/pseudoBonds'):
"""
Load a structure in Chimera. It can be anything accepted by `open` command.
Parameters
==========
query : str
Path to molecular file, or special query for Chimera's open (e.g. pdb:3pk2).
reduce : bool
Add hydrogens to structure. Defaults to True.
strip : str
Chimera selection spec that will be removed. Defaults to solvent&~@/pseudoBonds
(solvent that is not attached to a metal ion).
"""
print('Opening', query)
chimera.runCommand('open ' + query)
m = chimera.openModels.list()[0]
m.setAllPDBHeaders({})
if strip:
print(' Removing {}...'.format(strip))
chimera.runCommand('del ' + strip)
if reduce:
print(' Adding hydrogens...')
chimera.runCommand('addh')
return m | d91ceeba36eb04e33c238ab2ecb88ba2cc1928c7 | 3,657,342 |
from re import T
def is_into_keyword(token):
"""
INTO判定
"""
return token.match(T.Keyword, "INTO") | 337fb0062dc4288aad8ac715efcca564ddfad113 | 3,657,343 |
from typing import Union
def exp(
value: Union[Tensor, MPCTensor, int, float], iterations: int = 8
) -> Union[MPCTensor, float, Tensor]:
"""Approximates the exponential function using a limit approximation.
exp(x) = lim_{n -> infty} (1 + x / n) ^ n
Here we compute exp by choosing n = 2 ** d for some large d equal to
iterations. We then compute (1 + x / n) once and square `d` times.
Args:
value: tensor whose exp is to be calculated
iterations (int): number of iterations for limit approximation
Ref: https://github.com/LaRiffle/approximate-models
Returns:
MPCTensor: the calculated exponential of the given tensor
"""
result = (value / 2**iterations) + 1
for _ in range(iterations):
result = result * result
return result | 9cfbb63d39d41e92b506366244ec6e77d52162b2 | 3,657,344 |
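# Quick numeric check (added for clarity): with plain floats the limit
# approximation approaches math.exp; MPCTensor / Tensor inputs use the same call.
import math
print(exp(1.0))                  # ~2.7130 with the default 8 squarings
print(exp(1.0, iterations=16))   # ~2.7183
print(math.exp(1.0))             # 2.718281828...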
from typing import Any
def train(estimator: Estimator, data_root_dir: str, max_steps: int) -> Any:
"""Train a Tensorflow estimator"""
train_spec = tf.estimator.TrainSpec(
input_fn=_build_input_fn(data_root_dir, ModeKeys.TRAIN),
max_steps=max_steps,
)
if max_steps > Training.LONG_TRAINING_STEPS:
throttle_secs = Training.LONG_DELAY
else:
throttle_secs = Training.SHORT_DELAY
eval_spec = tf.estimator.EvalSpec(
input_fn=_build_input_fn(data_root_dir, ModeKeys.EVAL),
start_delay_secs=Training.SHORT_DELAY,
throttle_secs=throttle_secs,
)
LOGGER.debug('Train the model')
results = tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
training_metrics = results[0]
return training_metrics | bcf81a0b46f3c0eea8f2c26929d3b6440df5e2cb | 3,657,345 |
def isDllInCorrectPath():
"""
Returns True if the BUFFY DLL is present and in the correct location (...\<BTS>\Mods\<BUFFY>\Assets\).
"""
return IS_DLL_IN_CORRECT_PATH | ea31391d41ba04b27df70124a65fdb48791cce57 | 3,657,346 |
import time
def time_remaining(event_time):
"""
Args:
event_time (time.struct_time): Time of the event.
Returns:
float: Time remaining between now and the event, in
seconds since epoch.
"""
now = time.localtime()
time_remaining = time.mktime(event_time) - time.mktime(now)
return time_remaining | cb3dfcf916cffc3b45f215f7642aeac8a1d6fef7 | 3,657,347 |
import numpy as np
def _repeat(values, count):
"""Produces a list of lists suitable for testing interleave.
Args:
values: for each element `x` the result contains `[x] * x`
count: determines how many times to repeat `[x] * x` in the result
Returns:
A list of lists of values suitable for testing interleave.
"""
return [[value] * value for value in np.tile(values, count)] | 46aa7899e7ed536525b7a94675edf89958f6f37f | 3,657,348 |
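# Example (added for clarity): each value x expands to [x] * x and the whole
# pattern is tiled `count` times.
print(_repeat([1, 2], 2))   # produces [[1], [2, 2], [1], [2, 2]]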
from functools import reduce
def P2D_l_TAN(df, cond, attr): # P(attr | 'target', cond)
"""Calcule la probabilité d'un attribut sachant la classe et un autre attribut.
Parameters
----------
df : pandas.DataFrame
La base d'examples.
cond : str
Le nom de l'attribut conditionnant.
attr : str
Le nom de l'attribut conditionné.
Returns
-------
dict of (int, number): (dict of number: float)
Un dictionnaire associant au couple (`t`, `c`), de classe `t` et de valeur
d'attribut conditionnant `c`, un dictionnaire qui associe à
la valeur `a` de l'attribut conditionné la probabilité
.. math:: P(attr=a|target=t,cond=c).
"""
joint_target_cond_attr = getJoint(df, ['target', cond, attr])
joint_target_cond = getJoint(df, ['target', cond])
raw_dico = dict(divide(joint_target_cond_attr, joint_target_cond))
dicos = [{(k_t, k_c): {k_a: proba}}
for (k_t, k_c, k_a), proba in raw_dico.items()]
res = {}
reduce(reduce_update, [res] + dicos)
return res | 88affcaea0368c400ccd25356d97a25c9a88a15e | 3,657,349 |
def has_no_jump(bigram, peaks_groundtruth):
"""
    Tell whether the two components of the bigram are identical or successive in the sequence of valid peaks.
    For example, if groundtruth = [1,2,3], then [1,1] or [2,3] have no jump but [1,3] has a jump.
bigram : the bigram to judge
peaks_groundtruth : the list of valid peaks
Return boolean
"""
assert len(bigram) == 2
if len(set(bigram)) == 1:
return True
sorted_groundtruth = sorted(peaks_groundtruth)
sorted_peaks = sorted(list(bigram))
begin = sorted_groundtruth.index(sorted_peaks[0])
end = begin+len(sorted_peaks)
return sorted_peaks == sorted_groundtruth[begin:end] | e334c389436d5cda2642f8ac7629b64074dcd0e0 | 3,657,350 |
import base64
def Base64WSDecode(s):
"""
Return decoded version of given Base64 string. Ignore whitespace.
Uses URL-safe alphabet: - replaces +, _ replaces /. Will convert s of type
unicode to string type first.
@param s: Base64 string to decode
@type s: string
@return: original string that was encoded as Base64
@rtype: bytes
@raise Base64DecodingError: If length of string (ignoring whitespace) is one
more than a multiple of four.
"""
s = RawString(s) # Base64W decode can only work with strings
s = ''.join(s.splitlines())
s = str(s.replace(" ", "")) # kill whitespace, make string (not unicode)
d = len(s) % 4
if d == 1:
raise kzr_errors.Base64DecodingError()
elif d == 2:
s += "=="
elif d == 3:
s += "="
s = RawBytes(s)
try:
return base64.urlsafe_b64decode(s)
except TypeError:
# Decoding raises TypeError if s contains invalid characters.
raise kzr_errors.Base64DecodingError() | 67db2d3f298e0220411f224299dcb20feeba5b3e | 3,657,351 |
def make_window():
"""create the window"""
window = Tk()
window.title("Pac-Man")
window.geometry("%dx%d+%d+%d" % (
WINDOW_WIDTH,
WINDOW_HEIGHT,
X_WIN_POS,
Y_WIN_POS
)
)
return window | 1e9ecb5acf91e75797520c54be1087d24392f190 | 3,657,352 |
from typing import Union
import re
def construct_scrape_regex_patterns(scrape_info: dict[str, Union[ParseResult, str]]) -> dict[str, Union[ParseResult, str]]:
""" Construct regex patterns for seasons/episodes """
logger.debug("Constructing scrape regexes")
for info in scrape_info:
if info == 'url':
continue
if info == 'seasons':
if scrape_info[info] is not None:
if re.search(r'/season-\d{1,6}', scrape_info['url'].geturl()):
logger.warning("Season already specified in url")
raise exceptions.InvalidInput("Season already specified in url")
scrape_info['seasons'] = parse_scrape_info(scrape_info[info])
else:
s = re.search(r'/season-(\d{1,6})', scrape_info['url'].geturl())
if s:
scrape_info['seasons'] = s.group(1)
else:
scrape_info['seasons'] = r'\d{1,6}'
if info == 'episodes':
if scrape_info[info] is not None:
if re.search(r'/episode-\d{1,6}', scrape_info['url'].geturl()):
logger.warning("Episode already specified in url")
raise exceptions.InvalidInput("Episode already specified in url")
scrape_info['episodes'] = parse_scrape_info(scrape_info[info])
else:
e = re.search(r'/episode-(\d{1,6})', scrape_info['url'].geturl())
if e:
scrape_info['episodes'] = e.group(1)
else:
scrape_info['episodes'] = r'\d{1,6}'
return scrape_info | 8d731dee1dc1ce493a4a49140a2fbd11223018fd | 3,657,353 |
def hasf(e):
"""
Returns a function which if applied with `x` tests whether `x` has `e`.
Examples
--------
    >>> list(filter(hasf("."), ['statement', 'A sentence.']))
    ['A sentence.']
"""
return lambda x: e in x | ac9ce7cf2ed2ee8a050acf24a8d0a3b95b7f2d50 | 3,657,354 |
def borehole_model(x, theta):
"""Given x and theta, return matrix of [row x] times [row theta] of values."""
return f | 9ccfd530ff162d5f2ec786757ec03917f3367635 | 3,657,355 |
def findNodesOnHostname(hostname):
"""Return the list of nodes name of a (non-dmgr) node on the given hostname, or None
Function parameters:
hostname - the hostname to check, with or without the domain suffix
"""
m = "findNodesOnHostname:"
nodes = []
for nodename in listNodes():
if hostname.lower() == getNodeHostname(nodename).lower():
sop(m, "Found node %s which is on %s" % (nodename, hostname))
nodes.append(nodename)
#endif
#endfor
# Not found - try matching without domain - z/OS systems might not have domain configured
shorthostname = hostname.split(".")[0].lower()
for nodename in listNodes():
shortnodehostname = getNodeHostname(nodename).split(".")[0].lower()
if shortnodehostname == shorthostname:
if nodename in nodes :
sop(m, "Node name %s was already found with the domain attached" % nodename)
else :
nodes.append(nodename)
sop(m, "Found node %s which is on %s" % (nodename, hostname))
#endif
#endif
#endfor
if len(nodes) == 0 :
sop(m,"WARNING: Unable to find any node with the hostname %s (not case-sensitive)" % hostname)
sop(m,"HERE are the hostnames that your nodes think they're on:")
for nodename in listNodes():
sop(m,"\tNode %s: hostname %s" % (nodename, getNodeHostname(nodename)))
#endfor
return None
else :
return nodes
#endif | 3a4f28d5fa8c72388cb81d40913e517d343834f0 | 3,657,356 |
def MakeControlClass( controlClass, name = None ):
"""Given a CoClass in a generated .py file, this function will return a Class
object which can be used as an OCX control.
This function is used when you do not want to handle any events from the OCX
control. If you need events, then you should derive a class from both the
activex.Control class and the CoClass
"""
if name is None:
name = controlClass.__name__
return new_type("OCX" + name, (Control, controlClass), {}) | 634544543027b1870bb72544517511d4f7b08e39 | 3,657,357 |
def obtenTipoNom(linea):
""" Obtiene por ahora la primera palabra del título, tendría que regresar de que se trata"""
res = linea.split('\t')
return res[6].partition(' ')[0] | 73edc42c5203b7ebd0086876096cdd3b7c65a54c | 3,657,358 |
import numpy as np
def histogramfrom2Darray(array, nbins):
"""
Creates histogram of elements from 2 dimensional array
:param array: input 2 dimensional array
:param nbins: number of bins so that bin size = (maximum value in array - minimum value in array) / nbins
the motivation for returning this array is for the purpose of easily plotting with matplotlib
:return: list of three elements:
list[0] = length nbins list of integers, a histogram of the array elements
list[1] = length nbins list of values of array element types, values of the lower end of the bins
list[2] = [minimum in list, maximum in list]
this is just good to know sometimes.
"""
#find minimum
minimum = np.min(array)
    # find maximum
maximum = np.max(array)
#compute bin size
binsize = (maximum - minimum) / nbins
#create bin array
bins = [minimum + binsize * i for i in range(nbins)]
histo = [0 for b in range(nbins)]
for x in array:
for y in x:
            # find the bin this element falls into (clamp the maximum into the last bin)
            idx = min(int((y - minimum) / binsize), nbins - 1)
            histo[idx] += 1
return [histo, bins, [minimum, maximum]] | 2c377b926b4708b6a6b29d400ae82b8d2931b938 | 3,657,359 |
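# Example (added for clarity): six values spanning [0, 3] into 3 bins of width 1.
h, bin_edges, (lo, hi) = histogramfrom2Darray([[0, 1, 2], [2, 3, 3]], 3)
print(h)          # [1, 1, 4]
print(bin_edges)  # lower bin edges: [0.0, 1.0, 2.0]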
def build_pert_reg(unsupervised_regularizer, cut_backg_noise=1.0,
cut_prob=1.0, box_reg_scale_mode='fixed',
box_reg_scale=0.25, box_reg_random_aspect_ratio=False,
cow_sigma_range=(4.0, 8.0), cow_prop_range=(0.0, 1.0),):
"""Build perturbation regularizer."""
if unsupervised_regularizer == 'none':
unsup_reg = None
augment_twice = False
elif unsupervised_regularizer == 'mt':
unsup_reg = regularizers.IdentityRegularizer()
augment_twice = False
elif unsupervised_regularizer == 'aug':
unsup_reg = regularizers.IdentityRegularizer()
augment_twice = True
elif unsupervised_regularizer == 'cutout':
unsup_reg = regularizers.BoxMaskRegularizer(
cut_backg_noise, cut_prob, box_reg_scale_mode, box_reg_scale,
box_reg_random_aspect_ratio)
augment_twice = False
elif unsupervised_regularizer == 'aug_cutout':
unsup_reg = regularizers.BoxMaskRegularizer(
cut_backg_noise, cut_prob, box_reg_scale_mode, box_reg_scale,
box_reg_random_aspect_ratio)
augment_twice = True
elif unsupervised_regularizer == 'cowout':
unsup_reg = regularizers.CowMaskRegularizer(
cut_backg_noise, cut_prob, cow_sigma_range, cow_prop_range)
augment_twice = False
elif unsupervised_regularizer == 'aug_cowout':
unsup_reg = regularizers.CowMaskRegularizer(
cut_backg_noise, cut_prob, cow_sigma_range, cow_prop_range)
augment_twice = True
else:
raise ValueError('Unknown supervised_regularizer \'{}\''.format(
unsupervised_regularizer))
return unsup_reg, augment_twice | 37d60049146c876d423fea6615cf43975f1ae389 | 3,657,360 |
def part_5b_avg_std_dev_of_replicates_analysis_completed(*jobs):
"""Check that the initial job data is written to the json files."""
file_written_bool_list = []
all_file_written_bool_pass = False
for job in jobs:
data_written_bool = False
if job.isfile(
f"../../src/engines/gomc/averagesWithinReplicatez.txt"
) and job.isfile(f"../../src/engines/gomc/setAverages.txt"):
data_written_bool = True
file_written_bool_list.append(data_written_bool)
if False not in file_written_bool_list:
all_file_written_bool_pass = True
return all_file_written_bool_pass | f238382e18de32b86598d5daa13f92af01311d3d | 3,657,361 |
def exportFlatClusterData(filename, root_dir, dataset_name, new_row_header,new_column_header,xt,ind1,ind2,display):
""" Export the clustered results as a text file, only indicating the flat-clusters rather than the tree """
filename = string.replace(filename,'.pdf','.txt')
export_text = export.ExportFile(filename)
column_header = string.join(['UID','row_clusters-flat']+new_column_header,'\t')+'\n' ### format column-names for export
export_text.write(column_header)
column_clusters = string.join(['column_clusters-flat','']+ map(str, ind2),'\t')+'\n' ### format column-flat-clusters for export
export_text.write(column_clusters)
### The clusters, dendrogram and flat clusters are drawn bottom-up, so we need to reverse the order to match
#new_row_header = new_row_header[::-1]
#xt = xt[::-1]
try: elite_dir = getGOEliteExportDir(root_dir,dataset_name)
except Exception: elite_dir = None
elite_columns = string.join(['InputID','SystemCode'])
try: sy = systemCodeCheck(new_row_header)
except Exception: sy = None
### Export each row in the clustered data matrix xt
i=0
cluster_db={}
export_lines = []
for row in xt:
id = new_row_header[i]
if sy == '$En:Sy':
cluster = 'cluster-'+string.split(id,':')[0]
elif sy == 'S' and ':' in id:
cluster = 'cluster-'+string.split(id,':')[0]
elif sy == 'Sy' and ':' in id:
cluster = 'cluster-'+string.split(id,':')[0]
else:
cluster = 'c'+str(ind1[i])
try: cluster_db[cluster].append(new_row_header[i])
except Exception: cluster_db[cluster] = [new_row_header[i]]
export_lines.append(string.join([new_row_header[i],str(ind1[i])]+map(str, row),'\t')+'\n')
i+=1
### Reverse the order of the file
export_lines.reverse()
for line in export_lines:
export_text.write(line)
export_text.close()
### Export GO-Elite input files
allGenes={}
for cluster in cluster_db:
export_elite = export.ExportFile(elite_dir+'/'+cluster+'.txt')
if sy==None:
export_elite.write('ID\n')
else:
export_elite.write('ID\tSystemCode\n')
for id in cluster_db[cluster]:
if sy == '$En:Sy':
id = string.split(id,':')[1]
ids = string.split(id,' ')
if 'ENS' in ids[0]: id = ids[0]
else: id = ids[-1]
sc = 'En'
elif sy == 'Sy' and ':' in id:
id = string.split(id,':')[1]
ids = string.split(id,' ')
sc = 'Sy'
elif sy == 'En:Sy':
id = string.split(id,' ')[0]
sc = 'En'
elif sy == 'Ae':
l = string.split(id,':')
if len(l)==2:
id = string.split(id,':')[0] ### Use the Ensembl
if len(l) == 3:
id = string.split(id,':')[1] ### Use the Ensembl
sc = 'En'
else:
sc = sy
if sy == 'S':
if ':' in id:
id = string.split(id,':')[-1]
sc = 'Ae'
try: export_elite.write(id+'\t'+sc+'\n')
except Exception: export_elite.write(id+'\n') ### if no System Code known
allGenes[id]=[]
export_elite.close()
try:
if storeGeneSetName != None:
if len(storeGeneSetName)>0 and 'driver' not in justShowTheseIDs:
exportCustomGeneSet(storeGeneSetName,species,allGenes)
print 'Exported geneset to "StoredGeneSets"'
except Exception: pass
### Export as CDT file
filename = string.replace(filename,'.txt','.cdt')
if display:
try: exportJTV(filename, new_column_header, new_row_header)
except Exception: pass
export_cdt = export.ExportFile(filename)
column_header = string.join(['UNIQID','NAME','GWEIGHT']+new_column_header,'\t')+'\n' ### format column-names for export
export_cdt.write(column_header)
eweight = string.join(['EWEIGHT','','']+ ['1']*len(new_column_header),'\t')+'\n' ### format column-flat-clusters for export
export_cdt.write(eweight)
### Export each row in the clustered data matrix xt
i=0; cdt_lines=[]
for row in xt:
cdt_lines.append(string.join([new_row_header[i]]*2+['1']+map(str, row),'\t')+'\n')
i+=1
### Reverse the order of the file
cdt_lines.reverse()
for line in cdt_lines:
export_cdt.write(line)
export_cdt.close()
return elite_dir, filename | f9ade521b67c87518741fb56fb1c80df0961065a | 3,657,362 |
def indent_multiline(s: str, indentation: str = " ", add_newlines: bool = True) -> str:
"""Indent the given string if it contains more than one line.
Args:
s: String to indent
indentation: Indentation to prepend to each line.
add_newlines: Whether to add newlines surrounding the result
if indentation was added.
"""
lines = s.splitlines()
if len(lines) <= 1:
return s
lines_str = "\n".join(f"{indentation}{line}" for line in lines)
if add_newlines:
return f"\n{lines_str}\n"
else:
return lines_str | 62eb2fc7c3f3b493a6edc009692f472e50e960f7 | 3,657,363 |
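# Example (added for clarity): single-line strings pass through untouched,
# multi-line strings are indented and wrapped in newlines.
print(repr(indent_multiline("one line")))   # 'one line'
print(repr(indent_multiline("a\nb")))       # '\n  a\n  b\n'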
from typing import Optional
def _get_property(self, key: str, *, offset: int = 0) -> Optional[int]:
"""Get a property from the location details.
:param key: The key for the property
:param offset: Any offset to apply to the value (if found)
:returns: The property as an int value if found, None otherwise
"""
value = self.location_details.get(key)
if value is None:
return None
return int(value[0]) + offset | 8d2c35a88810db5255cfb0ca9d7bfa6345ff3276 | 3,657,364 |
def pca_normalization(points):
"""Projects points onto the directions of maximum variance."""
points = np.transpose(points)
pca = PCA(n_components=len(np.transpose(points)))
points = pca.fit_transform(points)
return np.transpose(points) | 753bea2546341fc0be3e7cf4fd444b3ee93378f9 | 3,657,365 |
def _reformTrend(percs, inits):
"""
Helper function to recreate original trend based on percent change data.
"""
trend = []
trend.append(percs[0])
for i in range(1, len(percs)):
newLine = []
newLine.append(percs[i][0]) #append the date
for j in range(1, len(percs[i])): #for each term on date
level = float(trend[i-1][j]) * percs[i][j].numerator / percs[i][j].denominator #level is the prev level * %change
newLine.append(level)
trend.append(newLine)
return trend | 1f6c8bbb4786b53ea2c06643108ff50691b6f89c | 3,657,366 |
def PET_initialize_compression_structure(N_axial,N_azimuthal,N_u,N_v):
"""Obtain 'offsets' and 'locations' arrays for fully sampled PET compressed projection data. """
descriptor = [{'name':'N_axial','type':'uint','value':N_axial},
{'name':'N_azimuthal','type':'uint','value':N_azimuthal},
{'name':'N_u','type':'uint','value':N_u},
{'name':'N_v','type':'uint','value':N_v},
{
'name':'offsets','type':'array','value':None,
'dtype':np.int32,'size':(N_azimuthal,N_axial),
'order':'F'
},
{
'name':'locations','type':'array','value':None,
'dtype':np.uint16,
'size':(3,N_u * N_v * N_axial * N_azimuthal),'order':'F'
},
]
r = call_c_function(niftyrec_c.PET_initialize_compression_structure,
descriptor)
if not r.status == status_success():
raise ErrorInCFunction(
"The execution of 'PET_initialize_compression_structure' was unsuccessful.",
r.status,
'niftyrec_c.PET_initialize_compression_structure')
return [r.dictionary['offsets'],r.dictionary['locations']] | 1f879517182462d8b66886aa43a4103a05a5b6f9 | 3,657,367 |
def get_client_from_user_settings(settings_obj):
"""Same as get client, except its argument is a DropboxUserSettingsObject."""
return get_client(settings_obj.owner) | 4b2c2e87310464807bf6f73d1ff8d7b7c21731ff | 3,657,368 |
def train_student(
model,
dataset,
test_data,
test_labels,
nb_labels,
nb_teachers,
stdnt_share,
lap_scale,
):
"""This function trains a student using predictions made by an ensemble of
teachers. The student and teacher models are trained using the same neural
network architecture.
:param dataset: string corresponding to mnist, cifar10, or svhn
:param nb_teachers: number of teachers (in the ensemble) to learn from
:return: True if student training went well
"""
# Call helper function to prepare student data using teacher predictions
stdnt_dataset = prepare_student_data(
model,
dataset,
test_data,
test_labels,
nb_labels,
nb_teachers,
stdnt_share,
lap_scale,
)
# Unpack the student dataset
stdnt_data, stdnt_labels, stdnt_test_data, stdnt_test_labels = stdnt_dataset
# Prepare checkpoint filename and path
filename = str(dataset) + "_" + str(nb_teachers) + "_student.ckpt"
stdnt_prep = PrepareData(stdnt_data, stdnt_labels)
stdnt_loader = DataLoader(stdnt_prep, batch_size=64, shuffle=False)
stdnt_test_prep = PrepareData(stdnt_test_data, stdnt_test_labels)
stdnt_test_loader = DataLoader(stdnt_test_prep, batch_size=64, shuffle=False)
# Start student training
train(model, stdnt_loader, stdnt_test_loader, ckpt_path, filename)
# Compute final checkpoint name for student
student_preds = softmax_preds(
model, nb_labels, stdnt_test_loader, ckpt_path + filename
)
# Compute teacher accuracy
precision = accuracy(student_preds, stdnt_test_labels)
print("\nPrecision of student after training: " + str(precision))
return True | de8db38bde151f5dd65b93a0c8a44c2289351f81 | 3,657,369 |
import numpy
def create_transition_matrix_numeric(mu, d, v):
"""
Use numerical integration.
This is not so compatible with algopy because it goes through fortran.
Note that d = 2*h - 1 following Kimura 1957.
The rate mu is a catch-all scaling factor.
The finite distribution v is assumed to be a stochastic vector.
@param mu: scales the rate matrix
@param d: dominance (as opposed to recessiveness) of preferred states.
@param v: numpy array defining a distribution over states
@return: transition matrix
"""
# Construct the numpy matrix whose entries
# are differences of log equilibrium probabilities.
# Everything in this code block is pure numpy.
F = numpy.log(v)
e = numpy.ones_like(F)
S = numpy.outer(e, F) - numpy.outer(F, e)
# Create the rate matrix Q and return its matrix exponential.
# Things in this code block may use algopy if mu and d
# are bundled with truncated Taylor information.
D = d * numpy.sign(S)
pre_Q = numpy.vectorize(numeric_fixation)(0.5*S, D)
pre_Q = mu * pre_Q
Q = pre_Q - algopy.diag(algopy.sum(pre_Q, axis=1))
P = algopy.expm(Q)
return P | a60a3da34089fffe2a48cc282ea4cbb528454fd6 | 3,657,370 |
import os
def parse_integrate(filename='INTEGRATE.LP'):
"""
Harvest data from INTEGRATE
"""
if not os.path.exists(filename):
return {'failure': 'Integration step failed'}
info = parser.parse(filename, 'integrate')
for batch, frames in zip(info.get('batches',[]), info.pop('batch_frames', [])):
batch.update(frames)
return info | cc37e4a8f4ed35f0827e93f93e8da301d0b49c8e | 3,657,371 |
def channelmap(stream: Stream, *args, **kwargs) -> FilterableStream:
"""https://ffmpeg.org/ffmpeg-filters.html#channelmap"""
return filter(stream, channelmap.__name__, *args, **kwargs) | 8293e9004fd4dfb7ff830e477dcee4de5d163a5d | 3,657,372 |
def test_token(current_user: DBUser = Depends(get_current_user)):
"""
Test access-token
"""
return current_user | 1ceb90c1321e358124520ab5b1b1ecb07de4619d | 3,657,373 |
import mls
import os
def locate_data(name, check_exists=True):
"""Locate the named data file.
Data files under mls/data/ are copied when this package is installed.
This function locates these files relative to the install directory.
Parameters
----------
name : str
Path of data file relative to mls/data.
check_exists : bool
Raise a RuntimeError if the named file does not exist when this is True.
Returns
-------
str
Path of data file within installation directory.
"""
pkg_path = mls.__path__[0]
path = os.path.join(pkg_path, 'data', name)
if check_exists and not os.path.exists(path):
raise RuntimeError('No such data file: {}'.format(path))
return path | 86ed5d2403a8d97aabcd4b65361ffa6f82095fff | 3,657,374 |
def process_label_imA(im):
"""Crop a label image so that the result contains
all labels, then return separate images, one for
each label.
Returns a dictionary of images and corresponding
labels (for choosing colours), also a scene bounding
box. Need to run shape statistics to determine
the number of labels and the IDs
"""
# stuff to figure out which way we slice, etc
isoidx = check_isotropy(im)
otheridx = [0, 1, 2]
otheridx.remove(isoidx)
direction = get_direction(im, isoidx)
sp = im.GetSpacing()
sp = str2ds(sp)
spacing = [sp[i] for i in otheridx]
slthickness = sp[isoidx]
labstats = sitk.LabelShapeStatisticsImageFilter()
labstats.Execute(im)
labels = labstats.GetLabels()
boxes = [labstats.GetBoundingBox(i) for i in labels]
# Need to compute bounding box for all labels, as
# this will set the row/colums
# boxes are corner and size - this code assumes 3D
corners = [(x[0], x[1], x[2]) for x in boxes]
othercorner = [(x[0] + x[3] - 1,
x[1] + x[4] - 1,
x[2] + x[5] - 1) for x in boxes]
sizes = [(x[3], x[4], x[5]) for x in boxes]
all_low_x = [C[0] for C in corners]
all_low_y = [C[1] for C in corners]
all_low_z = [C[2] for C in corners]
low_x = min(all_low_x)
low_y = min(all_low_y)
low_z = min(all_low_z)
lowcorner = (low_x, low_y, low_z)
all_high_x = [C[0] for C in othercorner]
all_high_y = [C[1] for C in othercorner]
all_high_z = [C[2] for C in othercorner]
high_x = max(all_high_x)
high_y = max(all_high_y)
high_z = max(all_high_z)
highcorner = (high_x, high_y, high_z)
allsize = (highcorner[0] - lowcorner[0] + 1,
highcorner[1] - lowcorner[1] + 1,
highcorner[2] - lowcorner[2] + 1)
# corners [otheridx] and size[otheridx] should be all the same
newcorners = [list(x) for x in corners]
newsizes = [list(x) for x in sizes]
a = otheridx[0]
b = otheridx[1]
for f in range(len(newcorners)):
newcorners[f][a] = lowcorner[a]
newcorners[f][b] = lowcorner[b]
newsizes[f][a] = allsize[a]
newsizes[f][b] = allsize[b]
ims = [sitk.RegionOfInterest(im, allsize,
lowcorner) == labels[i]
for i in range(len(labels))]
imcrop = sitk.RegionOfInterest(im, allsize, lowcorner)
return({'rois': ims, 'labels': labels,
'original': im, 'cropped': imcrop}) | 66e89e84d773d102c8fe7a6d10dd0604b52d9862 | 3,657,375 |
def render_graphs(csv_data, append_titles=""):
"""
Convenience function. Gets the aggregated `monthlies` data from
`aggregate_monthly_data(csv_data)` and returns a dict of graph
titles mapped to rendered SVGs from `monthly_total_precip_line()`
and `monthly_avg_min_max_temp_line()` using the `monthlies` data.
"""
monthlies = aggregate_monthly_data(csv_data)
return {
graph.config.title: graph.render()
for graph in [
monthly_total_precip_line(monthlies, append_titles),
monthly_avg_min_max_temp_line(monthlies, append_titles),
monthly_max_temps_box(monthlies, append_titles),
]
} | c2258faf759c2fd91e55fea06384d5f7ec030154 | 3,657,376 |
import traceback
def _get_location():
"""Return the location as a string, accounting for this function and the parent in the stack."""
return "".join(traceback.format_stack(limit=STACK_LIMIT + 2)[:-2]) | f36037a440d2e8f3613beed217a758bc0cfa752d | 3,657,377 |
from apyfal.client.syscall import SysCallClient
from apyfal import Accelerator
import apyfal.configuration as cfg
import apyfal.exceptions as exc
def test_syscall_client_init():
"""Tests SysCallClient.__init__"""
# Test: accelerator_executable_available, checks return type
assert type(cfg.accelerator_executable_available()) is bool
# Mocks some functions
accelerator_available = True
class DummySysCallClient(SysCallClient):
"""Dummy SysCallClient"""
@staticmethod
def _stop(*_, **__):
"""Do Nothing to skip object deletion"""
def dummy_accelerator_executable_available():
"""Return fake result"""
return accelerator_available
cfg_accelerator_executable_available = cfg.accelerator_executable_available
cfg.accelerator_executable_available = (
dummy_accelerator_executable_available)
# Tests
try:
# Accelerator not available
DummySysCallClient()
# Default for Accelerator if no host specified
config = cfg.Configuration()
try:
del config._sections['host']
except KeyError:
pass
client = Accelerator(config=config).client
client._stop = DummySysCallClient._stop # Disable __del__
assert isinstance(client, SysCallClient)
# Accelerator not available
accelerator_available = False
with pytest.raises(exc.HostConfigurationException):
SysCallClient()
# Restores functions
finally:
cfg.accelerator_executable_available = (
cfg_accelerator_executable_available) | cdbe5bbcd9aa2b5e655f5c693316f32ee6b9d073 | 3,657,378 |
def start_session():
"""do nothing here
"""
return Response.failed_response('Error') | b8c58ec837c5a77c35cb6682c6c405489cf512c0 | 3,657,379 |
def _combine_keras_model_with_trill(embedding_tfhub_handle, aggregating_model):
"""Combines keras model with TRILL model."""
trill_layer = hub.KerasLayer(
handle=embedding_tfhub_handle,
trainable=False,
arguments={'sample_rate': 16000},
output_key='embedding',
output_shape=[None, 2048]
)
input1 = tf.keras.Input([None])
trill_output = trill_layer(input1)
final_out = aggregating_model(trill_output)
final_model = tf.keras.Model(
inputs=input1,
outputs=final_out)
return final_model | 97bf695e6b083dfefcad1d2c8ac24b54687047fd | 3,657,380 |
def phases(times, names=[]):
""" Creates named phases from a set of times defining the edges of hte intervals """
if not names: names = range(len(times)-1)
return {names[i]:[times[i], times[i+1]] for (i, _) in enumerate(times) if i < len(times)-1} | 0e56dcf57a736e4555cae02b8f79b827c17e1d38 | 3,657,381 |
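# Example (added for clarity): three edge times define two named intervals.
print(phases([0, 10, 20], names=["warmup", "run"]))
# {'warmup': [0, 10], 'run': [10, 20]}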
def smesolve(H, rho0, times, c_ops=[], sc_ops=[], e_ops=[],
_safe_mode=True, args={}, **kwargs):
"""
Solve stochastic master equation. Dispatch to specific solvers
depending on the value of the `solver` keyword argument.
Parameters
----------
H : :class:`qutip.Qobj`, or time dependent system.
System Hamiltonian.
Can depend on time, see StochasticSolverOptions help for format.
rho0 : :class:`qutip.Qobj`
Initial density matrix or state vector (ket).
times : *list* / *array*
List of times for :math:`t`. Must be uniformly spaced.
c_ops : list of :class:`qutip.Qobj`, or time dependent Qobjs.
Deterministic collapse operator which will contribute with a standard
Lindblad type of dissipation.
Can depend on time, see StochasticSolverOptions help for format.
sc_ops : list of :class:`qutip.Qobj`, or time dependent Qobjs.
List of stochastic collapse operators. Each stochastic collapse
operator will give a deterministic and stochastic contribution
        to the equation of motion according to how the d1 and d2 functions
are defined.
Can depend on time, see StochasticSolverOptions help for format.
e_ops : list of :class:`qutip.Qobj`
single operator or list of operators for which to evaluate
expectation values.
kwargs : *dictionary*
Optional keyword arguments. See
:class:`qutip.stochastic.StochasticSolverOptions`.
Returns
-------
output: :class:`qutip.solver.Result`
An instance of the class :class:`qutip.solver.Result`.
"""
if "method" in kwargs and kwargs["method"] == "photocurrent":
print("stochastic solver with photocurrent method has been moved to "
"it's own function: photocurrent_mesolve")
return photocurrent_mesolve(H, rho0, times, c_ops=c_ops, sc_ops=sc_ops,
e_ops=e_ops, _safe_mode=_safe_mode,
args=args, **kwargs)
if isket(rho0):
rho0 = ket2dm(rho0)
if isinstance(e_ops, dict):
e_ops_dict = e_ops
e_ops = [e for e in e_ops.values()]
else:
e_ops_dict = None
sso = StochasticSolverOptions(True, H=H, state0=rho0, times=times,
c_ops=c_ops, sc_ops=sc_ops, e_ops=e_ops,
args=args, **kwargs)
if _safe_mode:
_safety_checks(sso)
if sso.solver_code == 120:
return _positive_map(sso, e_ops_dict)
sso.LH = liouvillian(sso.H, c_ops=sso.sc_ops + sso.c_ops) * sso.dt
if sso.method == 'homodyne' or sso.method is None:
if sso.m_ops is None:
sso.m_ops = [op + op.dag() for op in sso.sc_ops]
sso.sops = [spre(op) + spost(op.dag()) for op in sso.sc_ops]
if not isinstance(sso.dW_factors, list):
sso.dW_factors = [1] * len(sso.m_ops)
elif len(sso.dW_factors) != len(sso.m_ops):
raise Exception("The len of dW_factors is not the same as m_ops")
elif sso.method == 'heterodyne':
if sso.m_ops is None:
m_ops = []
sso.sops = []
for c in sso.sc_ops:
if sso.m_ops is None:
m_ops += [c + c.dag(), -1j * c - c.dag()]
sso.sops += [(spre(c) + spost(c.dag())) / np.sqrt(2),
(spre(c) - spost(c.dag())) * -1j / np.sqrt(2)]
sso.m_ops = m_ops
if not isinstance(sso.dW_factors, list):
sso.dW_factors = [np.sqrt(2)] * len(sso.sops)
elif len(sso.dW_factors) == len(sso.m_ops):
pass
elif len(sso.dW_factors) == len(sso.sc_ops):
dW_factors = []
for fact in sso.dW_factors:
dW_factors += [np.sqrt(2) * fact, np.sqrt(2) * fact]
sso.dW_factors = dW_factors
elif len(sso.dW_factors) != len(sso.m_ops):
raise Exception("The len of dW_factors is not the same as sc_ops")
elif sso.method == "photocurrent":
raise NotImplementedError("Moved to 'photocurrent_mesolve'")
else:
raise Exception("The method must be one of None, homodyne, heterodyne")
sso.ce_ops = [QobjEvo(spre(op)) for op in sso.e_ops]
sso.cm_ops = [QobjEvo(spre(op)) for op in sso.m_ops]
sso.LH.compile()
[op.compile() for op in sso.sops]
[op.compile() for op in sso.cm_ops]
[op.compile() for op in sso.ce_ops]
if sso.solver_code in [103, 153]:
sso.imp = 1 - sso.LH * 0.5
sso.imp.compile()
sso.solver_obj = SMESolver
sso.solver_name = "smesolve_" + sso.solver
res = _sesolve_generic(sso, sso.options, sso.progress_bar)
if e_ops_dict:
res.expect = {e: res.expect[n]
for n, e in enumerate(e_ops_dict.keys())}
return res | 4a27d54d2ca390bb3e4ac88ec2119633481df529 | 3,657,382 |
import numpy as np
def harmonic_vector(n):
"""
create a vector in the form [1,1/2,1/3,...1/n]
"""
return np.array([[1.0 / i] for i in range(1, n + 1)], dtype='double') | 6f2a94e0a54566db614bb3c4916e1a8538783862 | 3,657,383 |
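# Example (added for clarity): column vector of the first three harmonic terms.
print(harmonic_vector(3))
# [[1.        ]
#  [0.5       ]
#  [0.33333333]]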
import copy
def get_install_task_flavor(job_config):
"""
Pokes through the install task's configuration (including its overrides) to
figure out which flavor it will want to install.
Only looks at the first instance of the install task in job_config.
"""
project, = job_config.get('project', 'ceph'),
tasks = job_config.get('tasks', dict())
overrides = job_config.get('overrides', dict())
install_overrides = overrides.get('install', dict())
project_overrides = install_overrides.get(project, dict())
first_install_config = dict()
for task in tasks:
if task.keys()[0] == 'install':
first_install_config = task.values()[0] or dict()
break
first_install_config = copy.deepcopy(first_install_config)
deep_merge(first_install_config, install_overrides)
deep_merge(first_install_config, project_overrides)
return get_flavor(first_install_config) | 11fcefe3df17acfbce395949aa615d8292585fb6 | 3,657,384 |
def equalize_hist(image, nbins=256):
"""Return image after histogram equalization.
Parameters
----------
image : array
Image array.
nbins : int
Number of bins for image histogram.
Returns
-------
out : float array
Image array after histogram equalization.
Notes
-----
This function is adapted from [1]_ with the author's permission.
References
----------
.. [1] http://www.janeriksolem.net/2009/06/histogram-equalization-with-python-and.html
.. [2] http://en.wikipedia.org/wiki/Histogram_equalization
"""
image = img_as_float(image)
cdf, bin_centers = cumulative_distribution(image, nbins)
out = np.interp(image.flat, bin_centers, cdf)
return out.reshape(image.shape) | ea990cee9bef0e2edc41e2c5279f52b98d2a4d89 | 3,657,385 |
def add9336(rh):
"""
Adds a 9336 (FBA) disk to virtual machine's directory entry.
Input:
Request Handle with the following properties:
function - 'CHANGEVM'
subfunction - 'ADD9336'
userid - userid of the virtual machine
parms['diskPool'] - Disk pool
parms['diskSize'] - size of the disk in blocks or bytes.
parms['fileSystem'] - Linux filesystem to install on the disk.
parms['mode'] - Disk access mode
parms['multiPW'] - Multi-write password
parms['readPW'] - Read password
parms['vaddr'] - Virtual address
parms['writePW'] - Write password
Output:
Request Handle updated with the results.
Return code - 0: ok, non-zero: error
"""
rh.printSysLog("Enter changeVM.add9336")
results, blocks = generalUtils.cvtToBlocks(rh, rh.parms['diskSize'])
if results['overallRC'] != 0:
# message already sent. Only need to update the final results.
rh.updateResults(results)
if results['overallRC'] == 0:
parms = [
"-T", rh.userid,
"-v", rh.parms['vaddr'],
"-t", "9336",
"-a", "AUTOG",
"-r", rh.parms['diskPool'],
"-u", "1",
"-z", blocks,
"-f", "1"]
hideList = []
if 'mode' in rh.parms:
parms.extend(["-m", rh.parms['mode']])
else:
parms.extend(["-m", 'W'])
if 'readPW' in rh.parms:
parms.extend(["-R", rh.parms['readPW']])
hideList.append(len(parms) - 1)
if 'writePW' in rh.parms:
parms.extend(["-W", rh.parms['writePW']])
hideList.append(len(parms) - 1)
if 'multiPW' in rh.parms:
parms.extend(["-M", rh.parms['multiPW']])
hideList.append(len(parms) - 1)
results = invokeSMCLI(rh,
"Image_Disk_Create_DM",
parms,
hideInLog=hideList)
if results['overallRC'] != 0:
# SMAPI API failed.
rh.printLn("ES", results['response'])
rh.updateResults(results) # Use results from invokeSMCLI
if (results['overallRC'] == 0 and 'fileSystem' in rh.parms):
# Install the file system
results = installFS(
rh,
rh.parms['vaddr'],
rh.parms['mode'],
rh.parms['fileSystem'],
"9336")
if results['overallRC'] == 0:
results = isLoggedOn(rh, rh.userid)
if (results['overallRC'] == 0 and results['rs'] == 0):
# Add the disk to the active configuration.
parms = [
"-T", rh.userid,
"-v", rh.parms['vaddr'],
"-m", rh.parms['mode']]
results = invokeSMCLI(rh, "Image_Disk_Create", parms)
if results['overallRC'] == 0:
rh.printLn("N", "Added dasd " + rh.parms['vaddr'] +
" to the active configuration.")
else:
# SMAPI API failed.
rh.printLn("ES", results['response'])
rh.updateResults(results) # Use results from invokeSMCLI
rh.printSysLog("Exit changeVM.add9336, rc: " +
str(rh.results['overallRC']))
return rh.results['overallRC'] | bb7168d5b0ee084b15e8ef91633d5554669cf83f | 3,657,386 |
def window_data(s1,s2,s5,s6,s7,s8, sat,ele,azi,seconds,edot,f,az1,az2,e1,e2,satNu,pfitV,pele):
"""
author kristine m. larson
also calculates the scale factor for the various GNSS frequencies. currently
returns meanTime in UTC hours, the mean azimuth in degrees, and
cf, which is the wavelength/2
currently works for GPS, GLONASS, GALILEO, and Beidou
new: pele are the elevation angle limits for the polynomial fit. these are applied
before you start windowing the data
"""
cunit = 1
dat = []; x=[]; y=[]
# get scale factor
# added glonass, 101 and 102
if (f == 1) or (f==101) or (f==201):
dat = s1
if (f == 2) or (f == 20) or (f == 102) or (f==302):
dat = s2
if (f == 5) or (f==205):
dat = s5
# these are galileo frequencies (via RINEX definition)
if (f == 206) or (f == 306):
dat = s6
if (f == 207) or (f == 307):
dat = s7
if (f == 208):
dat = s8
# get the scaling factor for this frequency and satellite number
# print(f,satNu)
cf = arc_scaleF(f,satNu)
# if not, frequency does not exist, will be tripped by Nv
# remove the direct signal component
if (cf > 0):
x,y,sat,azi,seconds,edot = removeDC(dat, satNu, sat,ele, pele, azi,az1,az2,edot,seconds)
#
Nv = len(y); Nvv = 0 ;
# some defaults in case there are no data in this region
meanTime = 0.0; avgAzim = 0.0; avgEdot = 1; Nvv = 0
avgEdot_fit =1; delT = 0.0
# no longer have to look for specific satellites. some minimum number of points required
if Nv > 30:
model = np.polyfit(x,y,pfitV)
fit = np.polyval(model,x)
# redefine x and y as old variables
ele = x
dat = y - fit
# ok - now figure out what is within the more restricted elevation angles
x = ele[(ele > e1) & (ele < e2) & (azi > az1) & (azi < az2)]
y = dat[(ele > e1) & (ele < e2) & (azi > az1) & (azi < az2)]
ed = edot[(ele > e1) & (ele < e2) & (azi > az1) & (azi < az2)]
a = azi[(ele > e1) & (ele < e2) & (azi > az1) & (azi < az2)]
t = seconds[(ele > e1) & (ele < e2) & (azi > az1) & (azi < az2)]
sumval = np.sum(y)
if sumval == 0:
x = []; y=[] ; Nv = 0 ; Nvv = 0
# since units were changed to volts/volts, the zeros got changed to 1 values
if sumval == Nv:
x = []; y=[] ; Nv = 0 ; Nvv = 0
Nvv = len(y)
# calculate average time in UTC (actually it is GPS time) in hours and average azimuth
# this is fairly arbitrary, but can't be so small that you can't fit a polynomial to it
if (Nvv > 10):
dd = np.diff(t)
# edot, in radians/sec
model = np.polyfit(t,x*np.pi/180,1)
# edot in radians/second
avgEdot_fit = model[0]
avgAzim = np.mean(a)
meanTime = np.mean(t)/3600
avgEdot = np.mean(ed)
# delta Time in minutes
delT = (np.max(t) - np.min(t))/60
# average tan(elev)
cunit =np.mean(np.tan(np.pi*x/180))
# return tan(e)/edot, in units of radians/hour now. used for RHdot correction
if avgEdot == 0:
outFact1 = 0
else:
outFact1 = cunit/(avgEdot*3600)
outFact2 = cunit/(avgEdot_fit*3600)
# was debugging
#if (satNu == 25) and (f == 20):
# print('Edot,cunit,az', avgEdot_fit, cunit, avgAzim, outFact2)
return x,y,Nvv,cf,meanTime,avgAzim,outFact1, outFact2, delT | f3bd4e96059882c518bd1e8eb2a966eee9c9968a | 3,657,387 |
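# Illustrative sketch only (not part of the original snippet): the windowing in
# window_data is a set of parallel boolean masks over elevation and azimuth.
# With synthetic arrays (numpy assumed imported as np) the idiom looks like this:
ele_demo = np.array([3.0, 7.0, 12.0, 20.0])
azi_demo = np.array([80.0, 95.0, 110.0, 200.0])
dat_demo = np.array([0.1, 0.4, -0.2, 0.3])
e1_demo, e2_demo, az1_demo, az2_demo = 5.0, 15.0, 90.0, 180.0
keep = (ele_demo > e1_demo) & (ele_demo < e2_demo) & (azi_demo > az1_demo) & (azi_demo < az2_demo)
x_demo, y_demo = ele_demo[keep], dat_demo[keep]  # keeps only the 7- and 12-degree samples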
def get_related(user, kwargs):
"""
Get related model from user's input.
"""
for item in user.access_extra:
if item[1] in kwargs:
related_model = apps.get_model(item[0], item[1])
kwargs[item[1]] = related_model.objects.get(pk=get_id(kwargs[item[1]]))
return kwargs | 6b2ce081d1f61da734d26ef6f3c25e4da871b9ee | 3,657,388 |
def make_logical(n_tiles=1):
"""
Make a toy dataset with three labels that represent the logical functions: OR, XOR, AND
(functions of the 2D input).
"""
pat = np.array([
# X X Y Y Y
[0, 0, 0, 0, 0],
[0, 1, 1, 1, 0],
[1, 0, 1, 1, 0],
[1, 1, 1, 0, 1]
], dtype=int)
N, E = pat.shape
D = 2
L = E - D
pat2 = np.zeros((N, E))
pat2[:, 0:L] = pat[:, D:E]
pat2[:, L:E] = pat[:, 0:D]
pat2 = np.tile(pat2, (n_tiles, 1))
np.random.shuffle(pat2)
Y = np.array(pat2[:, 0:L], dtype=float)
X = np.array(pat2[:, L:E], dtype=float)
return X, Y | e2d936db7ae0d9ea8b0f1654e89a32b5b8c247cc | 3,657,389 |
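# Illustrative usage sketch for make_logical above (not part of the original
# snippet); numpy is assumed to be imported as np, as the function requires.
X, Y = make_logical(n_tiles=2)
print(X.shape, Y.shape)  # (8, 2) (8, 3): 2D inputs and OR / XOR / AND labels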
def get_idmap_etl(
p_idmap: object,
p_etl_id: str,
p_source_table: object =None
):
"""
Generates the ETL script for the Idmap table
:param p_idmap: an object of the Idmap class
:param p_etl_id: id of the ETL process
:param p_source_table: the source table that should be loaded into the idmap
    (if not specified, a list with the ETL for all source tables is returned)
"""
l_source_table_id=None
if p_source_table:
l_source_table_id=p_source_table.id
l_etl=[]
l_idmap_nk_column=None
l_idmap_rk_column=None
l_etl_column=None
for i_attribute in _get_table_attribute_property(p_table=p_idmap):
if i_attribute.attribute_type==C_RK:
l_idmap_rk_column=i_attribute.id
if i_attribute.attribute_type==C_NK:
l_idmap_nk_column=i_attribute.id
if i_attribute.attribute_type==C_ETL_ATTR:
l_etl_column=i_attribute.id
for i_source_table in p_idmap.entity.source_table:
if l_source_table_id and l_source_table_id!=i_source_table.id: # skip this source table if it is not the one requested
continue
l_column_nk_sql=""
# build the expression that concatenates the natural keys
# sort the list of natural keys by name
l_source_attribute_nk=sorted(p_idmap.source_attribute_nk, key=lambda nk: nk.name)
for i_column_nk in l_source_attribute_nk:
if i_source_table.id==i_column_nk.source_table.id:
l_column_nk_sql=l_column_nk_sql+"CAST("+'"'+str(i_column_nk.id)+'"'+" AS VARCHAR(4000))\n\t\t||'@@'||\n\t\t"
l_column_nk_sql=l_column_nk_sql[:-14]
l_source_id=i_source_table.source.source_id
# generate the ETL for each source table
l_etl.append(
Connection().dbms.get_idmap_etl(
p_idmap_id=p_idmap.id,
p_idmap_rk_id=l_idmap_rk_column,
p_idmap_nk_id=l_idmap_nk_column,
p_etl_id=l_etl_column,
p_etl_value=p_etl_id,
p_source_table_id=i_source_table.id,
p_attribute_nk=l_column_nk_sql,
p_source_id=l_source_id,
p_max_rk=str(p_idmap.max_rk)
)
)
return l_etl | 0e24b4cbb5ea935c871cae3338094292c9ebfd02 | 3,657,390 |
def gs_tie(men, women, preftie):
"""
Gale-shapley algorithm, modified to exclude unacceptable matches
Inputs: men (list of men's names)
women (list of women's names)
preftie (dictionary of preferences mapping names to a list of groups of equally preferred names in sorted order)
Output: dictionary of stable matches
"""
rank = {}
for w in women:
rank[w] = {}
i = 1
for m in preftie[w]:
rank[w][tuple(m)] = i
i += 1
#print(rank)
prefpointer = {}
for m in men:
prefpointer[m] = 0
freemen = set(men)
S = {}
while freemen:
    m = freemen.pop()
    if prefpointer[m] >= len(preftie[m]):
        # m has proposed to every acceptable group; he stays unmatched
        continue
    w = tuple(preftie[m][prefpointer[m]])
    #print(m + ' ' + str(w))
    prefpointer[m] += 1
#print(m + ' ' + str(prefpointer[m]))
for i in range(len(w)):
if w[i] not in S:
S[w[i]] = m
#print(w[i])
else:
mprime = S[w[i]]
if m in rank[w[i]] and rank[w[i]][m] < rank[w[i]][mprime]:
S[w[i]] = m
freemen.add(mprime)
else:
freemen.add(m)
#print(S)
return S | b5dbe7047e3e6be7f0d288e49f8dae25a94db318 | 3,657,391 |
def is_iterable(value):
"""Return True if the object is an iterable type."""
return hasattr(value, '__iter__') | 55e1ecc9b264d39aaf5cfcbe89fdc01264191d95 | 3,657,392 |
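# Quick illustrative checks for is_iterable above (not part of the original snippet).
assert is_iterable([1, 2, 3]) and is_iterable("abc")
assert not is_iterable(42)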
def get_search_app_by_model(model):
"""
:returns: a single search app (by django model)
:param model: django model for the search app
:raises LookupError: if it can't find the search app
"""
for search_app in get_search_apps():
if search_app.queryset.model is model:
return search_app
raise LookupError(f'search app for {model} not found.') | 0670fe754df65b02d5dfc502ba3bd0a3a802370c | 3,657,393 |
def prct_overlap(adata, key_1, key_2, norm=False, ax_norm="row", sort_index=False):
"""
% or cell count corresponding to the overlap of different cell types
between 2 set of annotations/clusters.
Parameters
----------
adata: AnnData object
key_1: observational key corresponding to one cell division / one set of clusters
key_2: observational key corresponding to one cell division / one set of clusters
norm: normalise the counts by the total number of cells per
    cluster in key_1
Return
------
Table containing the ratio of cells within a cluster
"""
data_1 = adata.obs[key_1].tolist()
data_2 = adata.obs[key_2].tolist()
count = {k:[] for k in list(set(data_1))}
#count = {k:[] for k in sorted(list(set(data_1)))}
i = 0
for index in data_1:
count[index].append(data_2[i])
i += 1
total_matrix = []
for key, value in count.items():
value = sorted(value)
curr_key_list = []
for element in sorted(list(set(data_2))):
curr_count = 0
for v in value:
if element == v:
curr_count += 1
curr_key_list.append(curr_count)
curr_sum = sum(curr_key_list)
#total_matrix.append([x/curr_sum for x in curr_key_list])
total_matrix.append(curr_key_list)
if norm and ax_norm == "row":
total_matrix = []
for key, value in count.items():
value = sorted(value)
curr_key_list = []
for element in sorted(list(set(data_2))):
curr_count = 0
for v in value:
if element == v:
curr_count += 1
curr_key_list.append(curr_count)
curr_sum = sum(curr_key_list)
total_matrix.append([x/curr_sum for x in curr_key_list])
elif norm:
print("""error in the argument ax_norm or it is col and
I haven't figure out how to make it for mow.
, here is the heatmap with no normalisation""")
if sort_index:
data_heatmap = pd.DataFrame(data=np.matrix(total_matrix),
index=list(set(data_1)),
columns=sorted(list(set(data_2)))).sort_index()
else:
data_heatmap = pd.DataFrame(data=np.matrix(total_matrix),
index=list(set(data_1)),
columns=sorted(list(set(data_2))))
return(data_heatmap) | 77a8382af77e8842a99211af58d6a6f85de6a50e | 3,657,394 |
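# Illustrative usage sketch for prct_overlap above (not part of the original
# snippet); it assumes numpy as np and pandas as pd are already imported and
# that the anndata package is installed.
import anndata as ad
adata = ad.AnnData(np.zeros((6, 3)))
adata.obs["cluster"] = ["0", "0", "1", "1", "2", "2"]
adata.obs["celltype"] = ["T", "B", "T", "T", "B", "B"]
counts = prct_overlap(adata, "cluster", "celltype")             # raw cell counts
ratios = prct_overlap(adata, "cluster", "celltype", norm=True)  # row-normalised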
def keep_category(df, colname, pct=0.05, n=5):
""" Keep a pct or number of every levels of a categorical variable
Parameters
----------
pct : float
Keep at least pct of the nb of observations having a specific category
n : int
Keep at least n of the variables having a specific category
Returns
--------
Returns an index of rows to keep
"""
tokeep = []
nmin = df.groupby(colname).apply(lambda x: x.sample(
max(1, min(x.shape[0], n, int(x.shape[0] * pct)))).index)
for index in nmin:
tokeep += index.tolist()
return pd.Index(tokeep) | 3db00aa6bdea797827a693c8e12bbf942a55ec35 | 3,657,395 |
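# Illustrative usage sketch for keep_category above (not part of the original
# snippet); pandas is assumed to be imported as pd, as the function requires.
df = pd.DataFrame({"grp": ["a"] * 100 + ["b"] * 10, "val": range(110)})
idx = keep_category(df, "grp", pct=0.05, n=5)
print(df.loc[idx, "grp"].value_counts())  # 5 rows kept for "a", 1 row for "b"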
def remove_scope_from_name(name, scope):
"""
Args:
name (str): full name of the tf variable with all the scopes
Returns:
(str): full name of the variable with the scope removed
"""
result = name.split(scope)[1]
result = result[1:] if result[0] == '/' else result
return result.split(":")[0] | aa70042a2f57185a0f5e401d182a02e5654eb2b0 | 3,657,396 |
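# Quick illustrative check for remove_scope_from_name above (not part of the
# original snippet); the variable name below is made up.
full_name = "policy/hidden_layer/kernel:0"
print(remove_scope_from_name(full_name, "policy"))  # -> hidden_layer/kernel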
async def get_timers_matching(ctx, name_str, channel_only=True, info=False):
"""
Interactively get a guild timer matching the given string.
Parameters
----------
name_str: str
Name or partial name of a group timer in the current guild or channel.
channel_only: bool
Whether to match against the groups in the current channel or those in the whole guild.
info: bool
Whether to display some summary info about the timer in the selector.
Returns: Timer
Raises
------
cmdClient.lib.UserCancelled:
Raised if the user manually cancels the selection.
cmdClient.lib.ResponseTimedOut:
Raised if the user fails to respond to the selector within `120` seconds.
"""
# Get the full timer list
if channel_only:
timers = ctx.client.interface.get_channel_timers(ctx.ch.id)
else:
timers = ctx.client.interface.get_guild_timers(ctx.guild.id)
# If there are no timers, quit early
if not timers:
return None
# Build a list of matching timers
name_str = name_str.strip()
timers = [timer for timer in timers if name_str.lower() in timer.name.lower()]
if len(timers) == 0:
return None
elif len(timers) == 1:
return timers[0]
else:
if info:
select_from = [timer.oneline_summary() for timer in timers]
else:
select_from = [timer.name for timer in timers]
try:
selected = await ctx.selector("Multiple matching groups found, please select one.", select_from)
except ResponseTimedOut:
raise ResponseTimedOut("Group selection timed out.") from None
except UserCancelled:
raise UserCancelled("User cancelled group selection.") from None
return timers[selected] | 48e94d2930f48b47b033ec024246065206a2bebb | 3,657,397 |
from random import random
def comprehension_array(size=1000000):
    """Fill a list of pseudo-random values, handled by pure Python, via a list comprehension."""
return [random() * i for i in range(size)] | e3ccdc992e5b741cf6f164c93d36f2e45d59a590 | 3,657,398 |
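# Illustrative usage sketch for comprehension_array above (not part of the
# original snippet).
values = comprehension_array(10)
print(len(values), values[0])  # 10 floats; the first is always 0.0 because i == 0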
def alignment(alpha, p, treatment):
"""Alignment confounding function.
Reference: Blackwell, Matthew. "A selection bias approach to sensitivity analysis
for causal effects." Political Analysis 22.2 (2014): 169-182.
https://www.mattblackwell.org/files/papers/causalsens.pdf
Args:
alpha (np.array): a confounding values vector
p (np.array): a propensity score vector between 0 and 1
treatment (np.array): a treatment vector (1 if treated, otherwise 0)
"""
assert p.shape[0] == treatment.shape[0]
adj = alpha * (1 - p) * treatment + alpha * p * (1 - treatment)
return adj | 8097dbcd62ba934b31b1f8a9e72fd906109b5181 | 3,657,399 |
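# Illustrative usage sketch for alignment above (not part of the original
# snippet); numpy is assumed to be imported as np, as the function requires.
p = np.array([0.2, 0.5, 0.8])
treatment = np.array([1, 0, 1])
print(alignment(alpha=0.1, p=p, treatment=treatment))  # [0.08 0.05 0.02]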