<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def expectation(T, a, mu=None):
r"""Equilibrium expectation value of a given observable. Parameters T : (M, M) ndarray or scipy.sparse matrix Transition matrix a : (M,) ndarray Observable vector mu : (M,) ndarray (optional) The stationary distribution of T. If given, the stationary distribution will not be recalculated (saving lots of time) Returns ------- val: float Equilibrium expectation value fo the given observable Notes ----- The equilibrium expectation value of an observable a is defined as follows .. math:: \mathbb{E}_{\mu}[a] = \sum_i \mu_i a_i :math:`\mu=(\mu_i)` is the stationary vector of the transition matrix :math:`T`. Examples -------- """
|
# check if square matrix and remember size
T = _types.ensure_ndarray_or_sparse(T, ndim=2, uniform=True, kind='numeric')
n = T.shape[0]
a = _types.ensure_ndarray(a, ndim=1, size=n, kind='numeric')
mu = _types.ensure_ndarray_or_None(mu, ndim=1, size=n, kind='numeric')
# go
if mu is None:
mu = stationary_distribution(T)
return _np.dot(mu, a)
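
# Example (a minimal, self-contained sketch): equilibrium expectation for a
# small row-stochastic matrix. Here the stationary vector is obtained directly
# from NumPy's eigensolver instead of stationary_distribution().
import numpy as np
T_ex = np.array([[0.9, 0.1],
                 [0.2, 0.8]])
a_ex = np.array([0.0, 1.0])
w, v = np.linalg.eig(T_ex.T)                  # left eigenvectors of T_ex
mu_ex = np.real(v[:, np.argmax(np.real(w))])  # eigenvector for eigenvalue 1
mu_ex /= mu_ex.sum()                          # normalize to a probability vector
print(np.dot(mu_ex, a_ex))                    # 0.333... (= mu_ex[1])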
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _pcca_object(T, m):
""" Constructs the pcca object from dense or sparse Parameters T : (n, n) ndarray or scipy.sparse matrix Transition matrix m : int Number of metastable sets Returns ------- pcca : PCCA PCCA object """
|
if _issparse(T):
_showSparseConversionWarning()
T = T.toarray()
T = _types.ensure_ndarray(T, ndim=2, uniform=True, kind='numeric')
return dense.pcca.PCCA(T, m)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def eigenvalue_sensitivity(T, k):
r"""Sensitivity matrix of a specified eigenvalue. Parameters T : (M, M) ndarray Transition matrix k : int Compute sensitivity matrix for k-th eigenvalue Returns ------- S : (M, M) ndarray Sensitivity matrix for k-th eigenvalue. """
|
T = _types.ensure_ndarray_or_sparse(T, ndim=2, uniform=True, kind='numeric')
if _issparse(T):
_showSparseConversionWarning()
return eigenvalue_sensitivity(T.todense(), k)
else:
return dense.sensitivity.eigenvalue_sensitivity(T, k)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def eigenvector_sensitivity(T, k, j, right=True):
r"""Sensitivity matrix of a selected eigenvector element. Parameters T : (M, M) ndarray Transition matrix (stochastic matrix). k : int Eigenvector index j : int Element index right : bool If True compute for right eigenvector, otherwise compute for left eigenvector. Returns ------- S : (M, M) ndarray Sensitivity matrix for the j-th element of the k-th eigenvector. """
|
T = _types.ensure_ndarray_or_sparse(T, ndim=2, uniform=True, kind='numeric')
if _issparse(T):
_showSparseConversionWarning()
return eigenvector_sensitivity(T.todense(), k, j, right=right)
else:
return dense.sensitivity.eigenvector_sensitivity(T, k, j, right=right)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def stationary_distribution_sensitivity(T, j):
r"""Sensitivity matrix of a stationary distribution element. Parameters T : (M, M) ndarray Transition matrix (stochastic matrix). j : int Index of stationary distribution element for which sensitivity matrix is computed. Returns ------- S : (M, M) ndarray Sensitivity matrix for the specified element of the stationary distribution. """
|
T = _types.ensure_ndarray_or_sparse(T, ndim=2, uniform=True, kind='numeric')
if _issparse(T):
_showSparseConversionWarning()
return stationary_distribution_sensitivity(T.todense(), j)
else:
return dense.sensitivity.stationary_distribution_sensitivity(T, j)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def mfpt_sensitivity(T, target, i):
r"""Sensitivity matrix of the mean first-passage time from specified state. Parameters T : (M, M) ndarray Transition matrix target : int or list Target state or set for mfpt computation i : int Compute the sensitivity for state `i` Returns ------- S : (M, M) ndarray Sensitivity matrix for specified state """
|
# check input
T = _types.ensure_ndarray_or_sparse(T, ndim=2, uniform=True, kind='numeric')
target = _types.ensure_int_vector(target)
# go
if _issparse(T):
_showSparseConversionWarning()
return mfpt_sensitivity(T.todense(), target, i)
else:
return dense.sensitivity.mfpt_sensitivity(T, target, i)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def committor_sensitivity(T, A, B, i, forward=True):
r"""Sensitivity matrix of a specified committor entry. Parameters T : (M, M) ndarray Transition matrix A : array_like List of integer state labels for set A B : array_like List of integer state labels for set B i : int Compute the sensitivity for committor entry `i` forward : bool (optional) Compute the forward committor. If forward is False compute the backward committor. Returns ------- S : (M, M) ndarray Sensitivity matrix of the specified committor entry. """
|
# check inputs
T = _types.ensure_ndarray_or_sparse(T, ndim=2, uniform=True, kind='numeric')
A = _types.ensure_int_vector(A)
B = _types.ensure_int_vector(B)
if _issparse(T):
_showSparseConversionWarning()
return committor_sensitivity(T.todense(), A, B, i, forward)
else:
if forward:
return dense.sensitivity.forward_committor_sensitivity(T, A, B, i)
else:
return dense.sensitivity.backward_committor_sensitivity(T, A, B, i)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def tmatrix_cov(C, row=None):
r"""Covariance tensor for the non-reversible transition matrix ensemble Normally the covariance tensor cov(p_ij, p_kl) would carry four indices (i,j,k,l). In the non-reversible case rows are independent so that cov(p_ij, p_kl)=0 for i not equal to k. Therefore the function will only return cov(p_ij, p_ik). Parameters C : (M, M) ndarray Count matrix row : int (optional) If row is given return covariance matrix for specified row only Returns ------- cov : (M, M, M) ndarray Covariance tensor """
|
if row is None:
alpha = C + 1.0 # Dirichlet parameters
alpha0 = alpha.sum(axis=1) # Sum of parameters (per row)
norm = alpha0 ** 2 * (alpha0 + 1.0)
"""Non-normalized covariance tensor"""
Z = -alpha[:, :, np.newaxis] * alpha[:, np.newaxis, :]
"""Correct-diagonal"""
ind = np.diag_indices(C.shape[0])
Z[:, ind[0], ind[1]] += alpha0[:, np.newaxis] * alpha
"""Covariance matrix"""
cov = Z / norm[:, np.newaxis, np.newaxis]
return cov
else:
alpha = C[row, :] + 1.0
return dirichlet_covariance(alpha)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dirichlet_covariance(alpha):
r"""Covariance matrix for Dirichlet distribution. Parameters alpha : (M, ) ndarray Parameters of Dirichlet distribution Returns ------- cov : (M, M) ndarray Covariance matrix """
|
alpha0 = alpha.sum()
norm = alpha0 ** 2 * (alpha0 + 1.0)
"""Non normalized covariance"""
Z = -alpha[:, np.newaxis] * alpha[np.newaxis, :]
"""Correct diagonal"""
ind = np.diag_indices(Z.shape[0])
Z[ind] += alpha0 * alpha
"""Covariance matrix"""
cov = Z / norm
return cov
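
# Consistency check (sketch, assuming the dirichlet_covariance above is in
# scope): compare against the textbook Dirichlet formulas
#   Var(p_i)      = a_i (a0 - a_i) / (a0^2 (a0 + 1))
#   Cov(p_i, p_j) = -a_i a_j / (a0^2 (a0 + 1))   for i != j
import numpy as np
alpha_ex = np.array([2.0, 3.0, 5.0])
cov_ex = dirichlet_covariance(alpha_ex)
a0 = alpha_ex.sum()
denom = a0 ** 2 * (a0 + 1.0)
assert np.allclose(np.diag(cov_ex), alpha_ex * (a0 - alpha_ex) / denom)
assert np.isclose(cov_ex[0, 1], -alpha_ex[0] * alpha_ex[1] / denom)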
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def mfpt_between_sets(T, target, origin, mu=None):
"""Compute mean-first-passage time between subsets of state space. Parameters T : scipy.sparse matrix Transition matrix. target : int or list of int Set of target states. origin : int or list of int Set of starting states. mu : (M,) ndarray (optional) The stationary distribution of the transition matrix T. Returns ------- tXY : float Mean first passage time between set X and Y. Notes ----- The mean first passage time :math:`\mathbf{E}_X[T_Y]` is the expected hitting time of one state :math:`y` in :math:`Y` when starting in a state :math:`x` in :math:`X`: .. math :: \mathbb{E}_X[T_Y] = \sum_{x \in X} \frac{\mu_x \mathbb{E}_x[T_Y]}{\sum_{z \in X} \mu_z} """
|
if mu is None:
mu = stationary_distribution(T)
"""Stationary distribution restriced on starting set X"""
nuX = mu[origin]
muX = nuX / np.sum(nuX)
"""Mean first-passage time to Y (for all possible starting states)"""
tY = mfpt(T, target)
"""Mean first-passage time from X to Y"""
tXY = np.dot(muX, tY[origin])
return tXY
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def mydot(A, B):
r"""Dot-product that can handle dense and sparse arrays Parameters A : numpy ndarray or scipy sparse matrix The first factor B : numpy ndarray or scipy sparse matrix The second factor Returns C : numpy ndarray or scipy sparse matrix The dot-product of A and B """
|
if issparse(A):
return A.dot(B)
elif issparse(B):
return (B.T.dot(A.T)).T
else:
return np.dot(A, B)
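
# Usage sketch (assuming the mydot above and scipy are available): exercise
# the three branches, sparse @ dense, dense @ sparse, and dense @ dense.
import numpy as np
from scipy.sparse import csr_matrix
A_ex = np.arange(4.0).reshape(2, 2)
S_ex = csr_matrix(A_ex)
x_ex = np.array([1.0, 2.0])
assert np.allclose(mydot(S_ex, x_ex), A_ex @ x_ex)  # sparse left factor
assert np.allclose(mydot(x_ex, S_ex), x_ex @ A_ex)  # sparse right factor
assert np.allclose(mydot(A_ex, A_ex), A_ex @ A_ex)  # both dense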
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def expected_counts(p0, T, N):
r"""Compute expected transition counts for Markov chain after N steps. Expected counts are computed according to ..math:: E[C_{ij}^{(n)}]=\sum_{k=0}^{N-1} (p_0^T T^{k})_{i} p_{ij} Parameters p0 : (M,) ndarray Starting (probability) vector of the chain. T : (M, M) sparse matrix Transition matrix of the chain. N : int Number of steps to take from initial state. Returns -------- EC : (M, M) sparse matrix Expected value for transition counts after N steps. """
|
if (N <= 0):
EC = coo_matrix(T.shape, dtype=float)
return EC
else:
"""Probability vector after (k=0) propagations"""
p_k = 1.0 * p0
"""Sum of vectors after (k=0) propagations"""
p_sum = 1.0 * p_k
"""Transpose T to use sparse dot product"""
Tt = T.transpose()
for k in np.arange(N - 1):
"""Propagate one step p_{k} -> p_{k+1}"""
p_k = Tt.dot(p_k)
"""Update sum"""
p_sum += p_k
D_psum = diags(p_sum, 0)
EC = D_psum.dot(T)
return EC
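
# Sketch of the same computation on a toy chain (numpy/scipy only): accumulate
# p0^T T^k for k = 0 .. N-1 and scale the rows of T, mirroring the recursion above.
import numpy as np
from scipy.sparse import csr_matrix, diags
T_ex = csr_matrix(np.array([[0.9, 0.1],
                            [0.2, 0.8]]))
p0_ex = np.array([1.0, 0.0])
N_ex = 5
p_sum_ex = np.zeros(2)
p_k_ex = p0_ex.copy()
for _ in range(N_ex):
    p_sum_ex += p_k_ex           # add p0^T T^k
    p_k_ex = T_ex.T.dot(p_k_ex)  # propagate one step
print(diags(p_sum_ex, 0).dot(T_ex).toarray())  # expected counts after N steps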
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fingerprint(P, obs1, obs2=None, p0=None, tau=1, k=None, ncv=None):
r"""Dynamical fingerprint for equilibrium or relaxation experiment The dynamical fingerprint is given by the implied time-scale spectrum together with the corresponding amplitudes. Parameters P : (M, M) scipy.sparse matrix Transition matrix obs1 : (M,) ndarray Observable, represented as vector on state space obs2 : (M,) ndarray (optional) Second observable, for cross-correlations p0 : (M,) ndarray (optional) Initial distribution for a relaxation experiment tau : int (optional) Lag time of given transition matrix, for correct time-scales k : int (optional) Number of time-scales and amplitudes to compute ncv : int (optional) The number of Lanczos vectors generated, `ncv` must be greater than k; it is recommended that ncv > 2*k Returns ------- timescales : (N,) ndarray Time-scales of the transition matrix amplitudes : (N,) ndarray Amplitudes for the given observable(s) """
|
if obs2 is None:
obs2 = obs1
R, D, L = rdl_decomposition(P, k=k, ncv=ncv)
"""Stationary vector"""
mu = L[0, :]
"""Extract diagonal"""
w = np.diagonal(D)
"""Compute time-scales"""
timescales = timescales_from_eigenvalues(w, tau)
if p0 is None:
"""Use stationary distribution - we can not use only left
eigenvectors since the system might be non-reversible"""
amplitudes = np.dot(mu * obs1, R) * np.dot(L, obs2)
else:
"""Use initial distribution"""
amplitudes = np.dot(p0 * obs1, R) * np.dot(L, obs2)
return timescales, amplitudes
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def correlation_matvec(P, obs1, obs2=None, times=[1]):
r"""Time-correlation for equilibrium experiment - via matrix vector products. Parameters P : (M, M) ndarray Transition matrix obs1 : (M,) ndarray Observable, represented as vector on state space obs2 : (M,) ndarray (optional) Second observable, for cross-correlations times : list of int (optional) List of times (in tau) at which to compute correlation Returns ------- correlations : ndarray Correlation values at given times """
|
if obs2 is None:
obs2 = obs1
"""Compute stationary vector"""
mu = statdist(P)
obs1mu = mu * obs1
times = np.asarray(times)
"""Sort in increasing order"""
ind = np.argsort(times)
times = times[ind]
if times[0] < 0:
raise ValueError("Times can not be negative")
dt = times[1:] - times[0:-1]
nt = len(times)
correlations = np.zeros(nt)
"""Propagate obs2 to initial time"""
obs2_t = 1.0 * obs2
obs2_t = propagate(P, obs2_t, times[0])
correlations[0] = np.dot(obs1mu, obs2_t)
for i in range(nt - 1):
obs2_t = propagate(P, obs2_t, dt[i])
correlations[i + 1] = np.dot(obs1mu, obs2_t)
"""Cast back to original order of time points"""
correlations = correlations[np.argsort(ind)]
return correlations
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def propagate(A, x, N):
r"""Use matrix A to propagate vector x. Parameters A : (M, M) scipy.sparse matrix Matrix of propagator x : (M, ) ndarray or scipy.sparse matrix Vector to propagate N : int Number of steps to propagate Returns ------- y : (M, ) ndarray or scipy.sparse matrix Propagated vector """
|
y = 1.0 * x
for i in range(N):
y = A.dot(y)
return y
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generate_traj(P, N, start=None, stop=None, dt=1):
""" Generates a realization of the Markov chain with transition matrix P. Parameters P : (n, n) ndarray transition matrix N : int trajectory length start : int, optional, default = None starting state. If not given, will sample from the stationary distribution of P stop : int or int-array-like, optional, default = None stopping set. If given, the trajectory will be stopped before N steps once a state of the stop set is reached dt : int trajectory will be saved every dt time steps. Internally, the dt'th power of P is taken to ensure a more efficient simulation. Returns ------- traj_sliced : (N/dt, ) ndarray A discrete trajectory with length N/dt """
|
sampler = MarkovChainSampler(P, dt=dt)
return sampler.trajectory(N, start=start, stop=stop)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generate_trajs(P, M, N, start=None, stop=None, dt=1):
""" Generates multiple realizations of the Markov chain with transition matrix P. Parameters P : (n, n) ndarray transition matrix M : int number of trajectories N : int trajectory length start : int, optional, default = None starting state. If not given, will sample from the stationary distribution of P stop : int or int-array-like, optional, default = None stopping set. If given, the trajectory will be stopped before N steps once a state of the stop set is reached dt : int trajectory will be saved every dt time steps. Internally, the dt'th power of P is taken to ensure a more efficient simulation. Returns ------- traj_sliced : (N/dt, ) ndarray A discrete trajectory with length N/dt """
|
sampler = MarkovChainSampler(P, dt=dt)
return sampler.trajectories(M, N, start=start, stop=stop)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def transition_matrix_metropolis_1d(E, d=1.0):
r"""Transition matrix describing the Metropolis chain jumping between neighbors in a discrete 1D energy landscape. Parameters E : (M,) ndarray Energies in units of kT d : float (optional) Diffusivity of the chain, d in (0, 1] Returns ------- P : (M, M) ndarray Transition matrix of the Markov chain Notes ----- Transition probabilities are computed as .. math:: p_{i,i-1} &=& 0.5 d \min \left{ 1.0, \mathrm{e}^{-(E_{i-1} - E_i)} \right}, \\ p_{i,i+1} &=& 0.5 d \min \left{ 1.0, \mathrm{e}^{-(E_{i+1} - E_i)} \right}, \\ p_{i,i} &=& 1.0 - p_{i,i-1} - p_{i,i+1}. """
|
# check input
if (d <= 0 or d > 1):
raise ValueError('Diffusivity must be in (0,1]. Trying to set the invalid value %s' % d)
# init
n = len(E)
P = np.zeros((n, n))
# set offdiagonals
P[0, 1] = 0.5 * d * min(1.0, math.exp(-(E[1] - E[0])))
for i in range(1, n - 1):
P[i, i - 1] = 0.5 * d * min(1.0, math.exp(-(E[i - 1] - E[i])))
P[i, i + 1] = 0.5 * d * min(1.0, math.exp(-(E[i + 1] - E[i])))
P[n - 1, n - 2] = 0.5 * d * min(1.0, math.exp(-(E[n - 2] - E[n - 1])))
# normalize
P += np.diag(1.0 - np.sum(P, axis=1))
# done
return P
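
# Quick sanity check (sketch, assuming the function above is in scope): a toy
# double-well profile yields a row-stochastic matrix with nonnegative entries.
import numpy as np
E_ex = np.array([0.0, 2.0, 0.5, 2.0, 0.0])  # two wells separated by barriers
P_ex = transition_matrix_metropolis_1d(E_ex, d=1.0)
assert np.allclose(P_ex.sum(axis=1), 1.0)
assert np.all(P_ex >= 0.0)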
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def trajectory(self, N, start=None, stop=None):
""" Generates a trajectory realization of length N, starting from state s Parameters N : int trajectory length start : int, optional, default = None starting state. If not given, will sample from the stationary distribution of P stop : int or int-array-like, optional, default = None stopping set. If given, the trajectory will be stopped before N steps once a state of the stop set is reached """
|
# check input
stop = types.ensure_int_vector_or_None(stop, require_order=False)
if start is None:
if self.mudist is None:
# compute mu, the stationary distribution of P
import msmtools.analysis as msmana
from scipy.stats import rv_discrete
mu = msmana.stationary_distribution(self.P)
self.mudist = rv_discrete(values=(np.arange(self.n), mu))
# sample starting point from mu
start = self.mudist.rvs()
# evaluate stopping set
stopat = np.zeros(self.n, dtype=bool)
if (stop is not None):
for s in np.array(stop):
stopat[s] = True
# result
traj = np.zeros(N, dtype=int)
traj[0] = start
# already at stopping state?
if stopat[traj[0]]:
return traj[:1]
# else run until end or stopping state
for t in range(1, N):
traj[t] = self.rgs[traj[t - 1]].rvs()
if stopat[traj[t]]:
return traj[:t+1]
# return
return traj
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def trajectories(self, M, N, start=None, stop=None):
""" Generates M trajectories, each of length N, starting from state s Parameters M : int number of trajectories N : int trajectory length start : int, optional, default = None starting state. If not given, will sample from the stationary distribution of P stop : int or int-array-like, optional, default = None stopping set. If given, the trajectory will be stopped before N steps once a state of the stop set is reached """
|
trajs = [self.trajectory(N, start=start, stop=stop) for _ in range(M)]
return trajs
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _split_sequences_singletraj(dtraj, nstates, lag):
""" splits the discrete trajectory into conditional sequences by starting state Parameters dtraj : int-iterable discrete trajectory nstates : int total number of discrete states lag : int lag time """
|
sall = [[] for _ in range(nstates)]
res_states = []
res_seqs = []
for t in range(len(dtraj)-lag):
sall[dtraj[t]].append(dtraj[t+lag])
for i in range(nstates):
if len(sall[i]) > 0:
res_states.append(i)
res_seqs.append(np.array(sall[i]))
return res_states, res_seqs
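
# Worked micro-example (assuming the function above is in scope): for
# dtraj = [0, 0, 1, 0, 1] at lag 1, state 0 is followed by [0, 1, 1] and
# state 1 by [0].
states_ex, seqs_ex = _split_sequences_singletraj([0, 0, 1, 0, 1], 2, 1)
print(states_ex)                      # [0, 1]
print([s.tolist() for s in seqs_ex])  # [[0, 1, 1], [0]]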
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _split_sequences_multitraj(dtrajs, lag):
""" splits the discrete trajectories into conditional sequences by starting state Parameters dtrajs : list of int-iterables discrete trajectories nstates : int total number of discrete states lag : int lag time """
|
n = number_of_states(dtrajs)
res = []
for i in range(n):
res.append([])
for dtraj in dtrajs:
states, seqs = _split_sequences_singletraj(dtraj, n, lag)
for i in range(len(states)):
res[states[i]].append(seqs[i])
return res
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _indicator_multitraj(ss, i, j):
""" Returns conditional sequence for transition i -> j given all conditional sequences """
|
iseqs = ss[i]
res = []
for iseq in iseqs:
x = np.zeros(len(iseq))
I = np.where(iseq == j)
x[I] = 1.0
res.append(x)
return res
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def statistical_inefficiencies(dtrajs, lag, C=None, truncate_acf=True, mact=2.0, n_jobs=1, callback=None):
r""" Computes statistical inefficiencies of sliding-window transition counts at given lag we collect the target sequence .. mathh: Y^(i) = {x_{t+\tau} | x_{t}=i} which contains the time-ordered target states at times :math:`t+\tau` whenever we started in state :math:`i` at time :math:`t`. Then we define the indicator sequence: .. math: a^{(i,j)}_t (\tau) = 1(Y^(i)_t = j) The statistical inefficiency for transition counts :math:`c_{ij}(tau)` is computed as the statistical inefficiency of the sequence :math:`a^{(i,j)}_t (\tau)`. Parameters dtrajs : list of int-iterables discrete trajectories lag : int lag time C : scipy sparse matrix (n, n) or None sliding window count matrix, if already available truncate_acf : bool, optional, default=True When the normalized autocorrelation function passes through 0, it is truncated in order to avoid integrating random noise n_jobs: int, default=1 If greater one, the function will be evaluated with multiple processes. callback: callable, default=None will be called for every statistical inefficiency computed (number of nonzero elements in count matrix). If n_jobs is greater one, the callback will be invoked per finished batch. Returns ------- I : scipy sparse matrix (n, n) Statistical inefficiency matrix with a sparsity pattern identical to the sliding-window count matrix at the same lag time. Will contain a statistical inefficiency :math:`I_{ij} \in (0,1]` whenever there is a count :math:`c_{ij} > 0`. When there is no transition count (:math:`c_{ij} = 0`), the statistical inefficiency is 0. See also -------- msmtools.util.statistics.statistical_inefficiency used to compute the statistical inefficiency for conditional trajectories """
|
# count matrix
if C is None:
C = count_matrix_coo2_mult(dtrajs, lag, sliding=True, sparse=True)
if callback is not None:
if not callable(callback):
raise ValueError('Provided callback is not callable')
# split sequences
splitseq = _split_sequences_multitraj(dtrajs, lag)
# compute inefficiencies
I, J = C.nonzero()
if n_jobs > 1:
from multiprocessing.pool import Pool
from contextlib import closing
import tempfile
# to avoid pickling partial results, we store these in a numpy.memmap
ntf = tempfile.NamedTemporaryFile(delete=False)
arr = np.memmap(ntf.name, dtype=np.float64, mode='w+', shape=C.nnz)
#arr[:] = np.nan
gen = _arguments_generator(I, J, splitseq, truncate_acf=truncate_acf, mact=mact,
array=ntf.name, njobs=n_jobs)
if callback:
x = gen.n_blocks()
_callback = lambda _: callback(x)
else:
_callback = callback
with closing(Pool(n_jobs)) as pool:
result_async = [pool.apply_async(_wrapper, (args,), callback=_callback)
for args in gen]
[t.get() for t in result_async]
data = np.array(arr[:])
#assert np.all(np.isfinite(data))
import os
os.unlink(ntf.name)
else:
data = np.empty(C.nnz)
for index, (i, j) in enumerate(zip(I, J)):
data[index] = statistical_inefficiency(_indicator_multitraj(splitseq, i, j),
truncate_acf=truncate_acf, mact=mact)
if callback is not None:
callback(1)
res = csr_matrix((data, (I, J)), shape=C.shape)
return res
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def transition_matrix_non_reversible(C):
r""" Estimates a non-reversible transition matrix from count matrix C T_ij = c_ij / c_i where c_i = sum_j c_ij Parameters C: ndarray, shape (n,n) count matrix Returns ------- T: Estimated transition matrix """
|
# multiply by 1.0 to make sure we're not doing integer division
rowsums = 1.0 * np.sum(C, axis=1)
if np.min(rowsums) <= 0:
raise ValueError(
"Transition matrix has row sum of " + str(np.min(rowsums)) + ". Must have strictly positive row sums.")
return np.divide(C, rowsums[:, np.newaxis])
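
# Minimal usage sketch (assuming the function above is in scope): row-normalize
# a small count matrix.
import numpy as np
C_ex = np.array([[5, 1],
                 [2, 2]])
print(transition_matrix_non_reversible(C_ex))
# [[0.83333333 0.16666667]
#  [0.5        0.5       ]]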
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def time_correlation_direct_by_mtx_vec_prod(P, mu, obs1, obs2=None, time=1, start_values=None, return_P_k_obs=False):
r"""Compute time-correlation of obs1, or time-cross-correlation with obs2. The time-correlation at time=k is computed by the matrix-vector expression: cor(k) = obs1' diag(pi) P^k obs2 Parameters P : ndarray, shape=(n, n) or scipy.sparse matrix Transition matrix obs1 : ndarray, shape=(n) Vector representing observable 1 on discrete states obs2 : ndarray, shape=(n) Vector representing observable 2 on discrete states. If not given, the autocorrelation of obs1 will be computed mu : ndarray, shape=(n) stationary distribution vector. time : int time point at which the (auto)correlation will be evaluated. start_values : (time, ndarray <P, <P, obs2>>_t) start iteration of calculation of matrix power product, with this values. only useful when calling this function out of a loop over times. return_P_k_obs : bool if True, the dot product <P^time, obs2> will be returned for further calculations. Returns ------- cor(k) : float correlation between observations """
|
# input checks
if not isinstance(time, (int, np.integer)):
raise TypeError("given time (%s) is not an integer, but has type: %s"
% (str(time), type(time)))
if obs1.shape[0] != P.shape[0]:
raise ValueError("observable shape not compatible with given matrix")
if obs2 is None:
obs2 = obs1
# multiply element-wise obs1 and pi. this is obs1' diag(pi)
l = np.multiply(obs1, mu)
# raise transition matrix to power of time by substituting dot product
# <Pk, obs2> with something like <P, <P, obs2>>.
# This saves a lot of matrix matrix multiplications.
if start_values: # begin with a previous calculated val
P_i_obs = start_values[1]
# calculate difference properly!
time_prev = start_values[0]
t_diff = time - time_prev
r = range(t_diff)
else:
if time >= 2:
P_i_obs = np.dot(P, np.dot(P, obs2)) # vector <P, <P, obs2> := P^2 * obs
r = range(time - 2)
elif time == 1:
P_i_obs = np.dot(P, obs2) # P^1 = P*obs
r = range(0)
elif time == 0:  # P^0 = I => I*obs2 = obs2
P_i_obs = obs2
r = range(0)
else:
raise ValueError("time must be >= 0, but was %s" % time)
for k in r: # since we already substituted started with 0
P_i_obs = np.dot(P, P_i_obs)
corr = np.dot(l, P_i_obs)
if return_P_k_obs:
return corr, (time, P_i_obs)
else:
return corr
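
# Usage sketch for the start_values caching (assuming the function above is in
# scope): evaluate the autocorrelation at increasing times, reusing <P^k, obs2>
# from the previous call instead of recomputing the matrix power from scratch.
import numpy as np
P_ex = np.array([[0.9, 0.1],
                 [0.2, 0.8]])
mu_ex = np.array([2.0 / 3.0, 1.0 / 3.0])  # stationary vector of P_ex
obs_ex = np.array([1.0, 0.0])
cache = None
for t in [1, 2, 5]:
    corr, cache = time_correlation_direct_by_mtx_vec_prod(
        P_ex, mu_ex, obs_ex, time=t, start_values=cache, return_P_k_obs=True)
    print(t, corr)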
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def time_correlations_direct(P, pi, obs1, obs2=None, times=[1]):
r"""Compute time-correlations of obs1, or time-cross-correlation with obs2. The time-correlation at time=k is computed by the matrix-vector expression: cor(k) = obs1' diag(pi) P^k obs2 Parameters P : ndarray, shape=(n, n) or scipy.sparse matrix Transition matrix obs1 : ndarray, shape=(n) Vector representing observable 1 on discrete states obs2 : ndarray, shape=(n) Vector representing observable 2 on discrete states. If not given, the autocorrelation of obs1 will be computed pi : ndarray, shape=(n) stationary distribution vector. Will be computed if not given times : array-like, shape(n_t) Vector of time points at which the (auto)correlation will be evaluated Returns ------- """
|
n_t = len(times)
times = np.sort(times) # sort it to use caching of previously computed correlations
f = np.zeros(n_t)
use_diagonalization = False
# maximum time > number of rows?
if times[-1] > P.shape[0]:
use_diagonalization = True
R, D, L = rdl_decomposition(P)
# discard imaginary part if all imaginary elements are 0
if not np.any(np.iscomplex(R)):
R = np.real(R)
if not np.any(np.iscomplex(D)):
D = np.real(D)
if not np.any(np.iscomplex(L)):
L = np.real(L)
rdl = (R, D, L)
if use_diagonalization:
for i in range(n_t):
f[i] = time_correlation_by_diagonalization(P, pi, obs1, obs2, times[i], rdl)
else:
start_values = None
for i in range(n_t):
f[i], start_values = \
time_correlation_direct_by_mtx_vec_prod(P, pi, obs1, obs2,
times[i], start_values, True)
return f
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def factor_aug(z, DPhival, G, A):
r"""Set up augmented system and return. Parameters z : (N+P+M+M,) ndarray Current iterate, z = (x, nu, l, s) DPhival : LinearOperator Jacobian of the variational inequality mapping G : (M, N) ndarray or sparse matrix Inequality constraints A : (P, N) ndarray or sparse matrix Equality constraints Returns ------- J : LinearOperator Augmented system """
|
M, N = G.shape
P, N = A.shape
"""Multiplier for inequality constraints"""
l = z[N+P:N+P+M]
"""Slacks"""
s = z[N+P+M:]
"""Sigma matrix"""
SIG = diags(l/s, 0)
# SIG = diags(l*s, 0)
"""Convert A"""
if not issparse(A):
A = csr_matrix(A)
"""Convert G"""
if not issparse(G):
G = csr_matrix(G)
"""Since we expect symmetric DPhival, we need to change A"""
sign = np.zeros(N)
sign[0:N//2] = 1.0
sign[N//2:] = -1.0
T = diags(sign, 0)
A_new = A.dot(T)
W = AugmentedSystem(DPhival, G, SIG, A_new)
return W
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def I(self):
r"""Returns the set of intermediate states """
|
return list(set(range(self.nstates)) - set(self._A) - set(self._B))
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _pathways_to_flux(self, paths, pathfluxes, n=None):
r"""Sums up the flux from the pathways given Parameters paths : list of int-arrays list of pathways pathfluxes : double-array array with path fluxes n : int number of states. If not set, will be automatically determined. Returns ------- flux : (n,n) ndarray of float the flux containing the summed path fluxes """
|
if (n is None):
n = 0
for p in paths:
n = max(n, np.max(p))
n += 1
# initialize flux
F = np.zeros((n, n))
for i in range(len(paths)):
p = paths[i]
for t in range(len(p) - 1):
F[p[t], p[t + 1]] += pathfluxes[i]
return F
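
# Worked example as a standalone sketch of the summation above: pathways
# 0->1->3 (flux 0.7) and 0->2->3 (flux 0.3) accumulate into a 4x4 flux matrix.
import numpy as np
def pathways_to_flux_standalone(paths, pathfluxes, n=None):
    if n is None:
        n = max(int(np.max(p)) for p in paths) + 1
    F = np.zeros((n, n))
    for p, f in zip(paths, pathfluxes):
        for t in range(len(p) - 1):
            F[p[t], p[t + 1]] += f
    return F
F_ex = pathways_to_flux_standalone([np.array([0, 1, 3]), np.array([0, 2, 3])],
                                   np.array([0.7, 0.3]))
print(F_ex[0, 1], F_ex[1, 3], F_ex[0, 2], F_ex[2, 3])  # 0.7 0.7 0.3 0.3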
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def major_flux(self, fraction=0.9):
r"""Returns the main pathway part of the net flux comprising at most the requested fraction of the full flux. """
|
(paths, pathfluxes) = self.pathways(fraction=fraction)
return self._pathways_to_flux(paths, pathfluxes, n=self.nstates)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _compute_coarse_sets(self, user_sets):
r"""Computes the sets to coarse-grain the tpt flux to. Parameters (tpt_sets, A, B) with tpt_sets : list of int-iterables sets of states that shall be distinguished in the coarse-grained flux. A : int-iterable set indexes in A B : int-iterable set indexes in B Returns ------- sets : list of int-iterables sets to compute tpt on. These sets still respect the boundary between A, B and the intermediate tpt states. Notes ----- Given the sets that the user wants to distinguish, the algorithm will create additional sets if necessary * If states are missing in user_sets, they will be put into a separate set * If sets in user_sets are crossing the boundary between A, B and the intermediates, they will be split at these boundaries. Thus each set in user_sets can remain intact or be split into two or three subsets """
|
# set-ify everything
setA = set(self.A)
setB = set(self.B)
setI = set(self.I)
raw_sets = [set(user_set) for user_set in user_sets]
# anything missing? Compute all listed states
set_all = set(range(self.nstates))
set_all_user = []
for user_set in raw_sets:
set_all_user += user_set
set_all_user = set(set_all_user)
# ... and add all the unlisted states in a separate set
set_rest = set_all - set_all_user
if len(set_rest) > 0:
raw_sets.append(set_rest)
# split sets
Asets = []
Isets = []
Bsets = []
for raw_set in raw_sets:
s = raw_set.intersection(setA)
if len(s) > 0:
Asets.append(s)
s = raw_set.intersection(setI)
if len(s) > 0:
Isets.append(s)
s = raw_set.intersection(setB)
if len(s) > 0:
Bsets.append(s)
tpt_sets = Asets + Isets + Bsets
Aindexes = list(range(0, len(Asets)))
Bindexes = list(range(len(Asets) + len(Isets), len(tpt_sets)))
return (tpt_sets, Aindexes, Bindexes)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def coarse_grain(self, user_sets):
r"""Coarse-grains the flux onto user-defined sets. Parameters user_sets : list of int-iterables sets of states that shall be distinguished in the coarse-grained flux. Returns ------- (sets, tpt) : (list of int-iterables, tpt-object) sets contains the sets tpt is computed on. The tpt states of the new tpt object correspond to these sets of states in this order. Sets might be identical, if the user has already provided a complete partition that respects the boundary between A, B and the intermediates. If not, Sets will have more members than provided by the user, containing the "remainder" states and reflecting the splitting at the A and B boundaries. tpt contains a new tpt object for the coarse-grained flux. All its quantities (gross_flux, net_flux, A, B, committor, backward_committor) are coarse-grained to sets. Notes ----- All user-specified sets will be split (if necessary) to preserve the boundary between A, B and the intermediate states. """
|
# coarse-grain sets
(tpt_sets, Aindexes, Bindexes) = self._compute_coarse_sets(user_sets)
nnew = len(tpt_sets)
# coarse-grain flux. Here we should branch between sparse and dense implementations, but currently there is only a dense one.
F_coarse = tptapi.coarsegrain(self._gross_flux, tpt_sets)
Fnet_coarse = tptapi.to_netflux(F_coarse)
# coarse-grain stationary probability and committors - this can be done all dense
pstat_coarse = np.zeros((nnew))
forward_committor_coarse = np.zeros((nnew))
backward_committor_coarse = np.zeros((nnew))
for i in range(0, nnew):
I = list(tpt_sets[i])
muI = self._mu[I]
pstat_coarse[i] = np.sum(muI)
partialI = muI / pstat_coarse[i] # normalized stationary probability over I
forward_committor_coarse[i] = np.dot(partialI, self._qplus[I])
backward_committor_coarse[i] = np.dot(partialI, self._qminus[I])
res = ReactiveFlux(Aindexes, Bindexes, Fnet_coarse, mu=pstat_coarse,
qminus=backward_committor_coarse, qplus=forward_committor_coarse, gross_flux=F_coarse)
return (tpt_sets, res)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def committor_forward(self, a, b):
r"""Forward committor for birth-and-death-chain. The forward committor is the probability to hit state b before hitting state a starting in state x, u_x=P_x(T_b<T_a) T_i is the first arrival time of the chain to state i, T_i = inf( t>0 | X_t=i ) Parameters a : int State index b : int State index Returns ------- u : (M,) ndarray Vector of committor probabilities. """
|
u = np.zeros(self.dim)
g = np.zeros(self.dim - 1)
g[0] = 1.0
g[1:] = np.cumprod(self.q[1:-1] / self.p[1:-1])
"""If a and b are equal the event T_b<T_a is impossible
for any starting state x so that the committor is
zero everywhere"""
if a == b:
return u
elif a < b:
"""Birth-death chain has to hit a before it can hit b"""
u[0:a + 1] = 0.0
"""Birth-death chain has to hit b before it can hit a"""
u[b:] = 1.0
"""Intermediate states are given in terms of sums of g"""
u[a + 1:b] = np.cumsum(g[a:b])[0:-1] / np.sum(g[a:b])
return u
else:
u[0:b + 1] = 1.0
u[a:] = 0.0
u[b + 1:a] = (np.cumsum(g[b:a])[0:-1] / np.sum(g[b:a]))[::-1]
return u
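
# Sanity sketch for the special case p = q (so g is a vector of ones): the
# committor is then linear between a and b. This reproduces the a < b branch
# above for 5 states with a=0, b=4.
import numpy as np
dim_ex, a_ex, b_ex = 5, 0, 4
g_ex = np.ones(dim_ex - 1)
u_ex = np.zeros(dim_ex)
u_ex[b_ex:] = 1.0
u_ex[a_ex + 1:b_ex] = np.cumsum(g_ex[a_ex:b_ex])[0:-1] / np.sum(g_ex[a_ex:b_ex])
print(u_ex)  # [0.   0.25 0.5  0.75 1.  ]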
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def transition_matrix_non_reversible(C):
"""implementation of transition_matrix"""
|
if not scipy.sparse.issparse(C):
C = scipy.sparse.csr_matrix(C)
rowsum = C.tocsr().sum(axis=1)
# catch div by zero
if np.min(rowsum) == 0.0:
raise ValueError("matrix C contains rows with sum zero.")
rowsum = np.array(1. / rowsum).flatten()
norm = scipy.sparse.diags(rowsum, 0)
return norm * C
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def correct_transition_matrix(T, reversible=None):
r"""Normalize transition matrix Fixes a the row normalization of a transition matrix. To be used with the reversible estimators to fix an almost coverged transition matrix. Parameters T : (M, M) ndarray matrix to correct reversible : boolean for future use Returns ------- (M, M) ndarray corrected transition matrix """
|
row_sums = T.sum(axis=1).A1
max_sum = np.max(row_sums)
if max_sum == 0.0:
max_sum = 1.0
return (T + scipy.sparse.diags(-row_sums+max_sum, 0)) / max_sum
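
# Step-by-step sketch of the correction (numpy/scipy only): the deficit of each
# row relative to the maximum row sum is added on the diagonal, then everything
# is rescaled so that every row sums to one.
import numpy as np
import scipy.sparse
T_ex = scipy.sparse.csr_matrix(np.array([[0.50, 0.49],
                                         [0.30, 0.70]]))
row_sums_ex = T_ex.sum(axis=1).A1  # [0.99, 1.0]
max_sum_ex = row_sums_ex.max()
T_fixed = (T_ex + scipy.sparse.diags(-row_sums_ex + max_sum_ex, 0)) / max_sum_ex
print(T_fixed.toarray().sum(axis=1))  # [1. 1.]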
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def expectation(P, obs):
r"""Equilibrium expectation of given observable. Parameters P : (M, M) ndarray Transition matrix obs : (M,) ndarray Observable, represented as vector on state space Returns ------- x : float Expectation value """
|
pi = statdist(P)
return np.dot(pi, obs)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def correlation_decomp(P, obs1, obs2=None, times=[1], k=None):
r"""Time-correlation for equilibrium experiment - via decomposition. Parameters P : (M, M) ndarray Transition matrix obs1 : (M,) ndarray Observable, represented as vector on state space obs2 : (M,) ndarray (optional) Second observable, for cross-correlations times : list of int (optional) List of times (in tau) at which to compute correlation k : int (optional) Number of eigenvalues and eigenvectors to use for computation Returns ------- correlations : ndarray Correlation values at given times """
|
if obs2 is None:
obs2 = obs1
R, D, L = rdl_decomposition(P, k=k)
"""Stationary vector"""
mu = L[0, :]
"""Extract eigenvalues"""
ev = np.diagonal(D)
"""Amplitudes"""
amplitudes = np.dot(mu * obs1, R) * np.dot(L, obs2)
"""Propgate eigenvalues"""
times = np.asarray(times)
ev_t = ev[np.newaxis, :] ** times[:, np.newaxis]
"""Compute result"""
res = np.dot(ev_t, amplitudes)
"""Truncate imaginary part - should be zero anyways"""
res = res.real
return res
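
# Isolated sketch of the broadcasting step used above: raise every eigenvalue
# to every requested time in a single vectorized expression.
import numpy as np
ev_ex = np.array([1.0, 0.9, 0.5])
times_ex = np.array([1, 2, 4])
ev_t_ex = ev_ex[np.newaxis, :] ** times_ex[:, np.newaxis]
print(ev_t_ex)  # row i contains ev_ex ** times_ex[i]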
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def log_likelihood(C, T):
""" implementation of likelihood of C given T """
|
C = C.tocsr()
T = T.tocsr()
ind = C.nonzero()
relT = np.array(T[ind])[0, :]
relT = np.log(relT)
relC = np.array(C[ind])[0, :]
return relT.dot(relC)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def upload_file(token, channel_name, file_name):
""" upload file to a channel """
|
slack = Slacker(token)
slack.files.upload(file_name, channels=channel_name)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def run(self):
"""Run the minimization. Returns ------- K : (N,N) ndarray the optimal rate matrix """
|
if self.verbose:
self.selftest()
self.count = 0
if self.verbose:
logging.info('initial value of the objective function is %f'
% self.function(self.initial))
theta0 = self.initial
theta, f, d = fmin_l_bfgs_b(self.function_and_gradient, theta0, fprime=None, args=(),
approx_grad=False, bounds=self.bounds, factr=self.tol,
pgtol=1.0E-11, disp=0, maxiter=self.maxiter, maxfun=self.maxiter, maxls=100)
if self.verbose:
logging.info('l_bfgs_b says: '+str(d))
logging.info('objective function value reached: %f' % f)
if d['warnflag'] != 0:
raise_or_warn(str(d), on_error=self.on_error, warning=NotConvergedWarning, exception=NotConvergedError)
K = np.zeros((self.N, self.N))
K[self.I, self.J] = theta / self.pi[self.I]
K[self.J, self.I] = theta / self.pi[self.J]
np.fill_diagonal(K, -np.sum(K, axis=1))
self.K = K
return K
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def unify_quotes(token_string, preferred_quote):
"""Return string with quotes changed to preferred_quote if possible."""
|
bad_quote = {'"': "'",
"'": '"'}[preferred_quote]
allowed_starts = {
'': bad_quote,
'f': 'f' + bad_quote,
'b': 'b' + bad_quote
}
if not any(token_string.startswith(start)
for start in allowed_starts.values()):
return token_string
if token_string.count(bad_quote) != 2:
return token_string
if preferred_quote in token_string:
return token_string
assert token_string.endswith(bad_quote)
assert len(token_string) >= 2
for prefix, start in allowed_starts.items():
if token_string.startswith(start):
chars_to_strip_from_front = len(start)
return '{prefix}{preferred_quote}{token}{preferred_quote}'.format(
prefix=prefix,
preferred_quote=preferred_quote,
token=token_string[chars_to_strip_from_front:-1]
)
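
# Behavior sketch (assuming the function above is in scope):
print(unify_quotes('"hello"', "'"))  # 'hello'  -> quotes unified
print(unify_quotes('f"x"', "'"))     # f'x'     -> string prefix preserved
print(unify_quotes('"it\'s"', "'"))  # "it's"   -> unchanged, would break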
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def detect_encoding(filename):
"""Return file encoding."""
|
try:
with open(filename, 'rb') as input_file:
from lib2to3.pgen2 import tokenize as lib2to3_tokenize
encoding = lib2to3_tokenize.detect_encoding(input_file.readline)[0]
# Check for correctness of encoding.
with open_with_encoding(filename, encoding) as input_file:
input_file.read()
return encoding
except (SyntaxError, LookupError, UnicodeDecodeError):
return 'latin-1'
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _main(argv, standard_out, standard_error):
"""Run quotes unifying on files. Returns `1` if any quoting changes are still needed, otherwise `None`. """
|
import argparse
parser = argparse.ArgumentParser(description=__doc__, prog='unify')
parser.add_argument('-i', '--in-place', action='store_true',
help='make changes to files instead of printing diffs')
parser.add_argument('-c', '--check-only', action='store_true',
help='exit with a status code of 1 if any changes are'
' still needed')
parser.add_argument('-r', '--recursive', action='store_true',
help='drill down directories recursively')
parser.add_argument('--quote', help='preferred quote', choices=["'", '"'],
default="'")
parser.add_argument('--version', action='version',
version='%(prog)s ' + __version__)
parser.add_argument('files', nargs='+',
help='files to format')
args = parser.parse_args(argv[1:])
filenames = list(set(args.files))
changes_needed = False
failure = False
while filenames:
name = filenames.pop(0)
if args.recursive and os.path.isdir(name):
for root, directories, children in os.walk(str(name)):
filenames += [os.path.join(root, f) for f in children
if f.endswith('.py') and
not f.startswith('.')]
directories[:] = [d for d in directories
if not d.startswith('.')]
else:
try:
if format_file(name, args=args, standard_out=standard_out):
changes_needed = True
except IOError as exception:
print(str(exception), file=standard_error)
failure = True
if failure or (args.check_only and changes_needed):
return 1
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def count_matrix(dtraj, lag, sliding=True, sparse_return=True, nstates=None):
r"""Generate a count matrix from given microstate trajectory. Parameters dtraj : array_like or list of array_like Discretized trajectory or list of discretized trajectories lag : int Lagtime in trajectory steps sliding : bool, optional If true the sliding window approach is used for transition counting. sparse_return : bool (optional) Whether to return a dense or a sparse matrix. nstates : int, optional Enforce a count-matrix with shape=(nstates, nstates) Returns ------- C : scipy.sparse.coo_matrix The count matrix at given lag in coordinate list format. Notes ----- Transition counts can be obtained from microstate trajectory using two methods. Couning at lag and slidingwindow counting. **Lag** This approach will skip all points in the trajectory that are seperated form the last point by less than the given lagtime :math:`\tau`. Transition counts :math:`c_{ij}(\tau)` are generated according to .. math:: c_{ij}(\tau) = \sum_{k=0}^{\left \lfloor \frac{N}{\tau} \right \rfloor -2} \chi_{i}(X_{k\tau})\chi_{j}(X_{(k+1)\tau}). :math:`\chi_{i}(x)` is the indicator function of :math:`i`, i.e :math:`\chi_{i}(x)=1` for :math:`x=i` and :math:`\chi_{i}(x)=0` for :math:`x \neq i`. **Sliding** The sliding approach slides along the trajectory and counts all transitions sperated by the lagtime :math:`\tau`. Transition counts :math:`c_{ij}(\tau)` are generated according to .. math:: c_{ij}(\tau)=\sum_{k=0}^{N-\tau-1} \chi_{i}(X_{k}) \chi_{j}(X_{k+\tau}). References .. [1] Prinz, J H, H Wu, M Sarich, B Keller, M Senne, M Held, J D Chodera, C Schuette and F Noe. 2011. Markov models of molecular kinetics: Generation and validation. J Chem Phys 134: 174105 Examples -------- Use the sliding approach first The generated matrix is a sparse matrix in CSR-format. For convenient printing we convert it to a dense ndarray. array([[ 1., 2.], [ 1., 1.]]) Let us compare to the count-matrix we obtain using the lag approach array([[ 0., 1.], [ 1., 1.]]) """
|
# convert dtraj input, if it contains out of nested python lists to
# a list of int ndarrays.
dtraj = _ensure_dtraj_list(dtraj)
return sparse.count_matrix.count_matrix_coo2_mult(dtraj, lag, sliding=sliding,
sparse=sparse_return, nstates=nstates)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def bootstrap_counts(dtrajs, lagtime, corrlength=None):
r"""Generates a randomly resampled count matrix given the input coordinates. Parameters dtrajs : array-like or array-like of array-like single or multiple discrete trajectories. Every trajectory is assumed to be a statistically independent realization. Note that this is often not true and is a weakness with the present bootstrapping approach. lagtime : int the lag time at which the count matrix will be evaluated corrlength : int, optional, default=None the correlation length of the discrete trajectory. N / corrlength counts will be generated, where N is the total number of frames. If set to None (default), corrlength = lagtime will be used. Notes ----- This function can be called multiple times in order to generate randomly resampled realizations of count matrices. For each of these realizations you can estimate a transition matrix, and from each of them computing the observables of your interest. The standard deviation of such a sample of the observable is a model for the standard error. The bootstrap will be generated by sampling N/corrlength counts at time tuples (t, t+lagtime), where t is uniformly sampled over all trajectory time frames in [0,n_i-lagtime]. Here, n_i is the length of trajectory i and N = sum_i n_i is the total number of frames. See also -------- bootstrap_trajectories """
|
dtrajs = _ensure_dtraj_list(dtrajs)
return dense.bootstrapping.bootstrap_counts(dtrajs, lagtime, corrlength=corrlength)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def connected_sets(C, directed=True):
r"""Compute connected sets of microstates. Connected components for a directed graph with edge-weights given by the count matrix. Parameters C : scipy.sparse matrix Count matrix specifying edge weights. directed : bool, optional Whether to compute connected components for a directed or undirected graph. Default is True. Returns ------- cc : list of arrays of integers Each entry is an array containing all vertices (states) in the corresponding connected component. The list is sorted according to the size of the individual components. The largest connected set is the first entry in the list, lcc=cc[0]. Notes ----- Viewing the count matrix as the adjacency matrix of a (directed) graph the connected components are given by the connected components of that graph. Connected components of a graph can be efficiently computed using Tarjan's algorithm. References .. [1] Tarjan, R E. 1972. Depth-first search and linear graph algorithms. SIAM Journal on Computing 1 (2):
146-160. Examples -------- [array([0, 1]), array([2])] [array([0, 1, 2])] """
|
if isdense(C):
return sparse.connectivity.connected_sets(csr_matrix(C), directed=directed)
else:
return sparse.connectivity.connected_sets(C, directed=directed)
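
# Cross-check sketch using SciPy's public graph routines (an alternative to the
# internal sparse.connectivity module assumed above): strongly connected
# components of the count-matrix graph via scipy.sparse.csgraph.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
C_ex = csr_matrix(np.array([[10, 1, 0],
                            [2, 0, 0],
                            [0, 0, 4]]))
n_comp, labels = connected_components(C_ex, directed=True, connection='strong')
print(n_comp, labels)  # 2 components: states {0, 1} and state {2}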
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def largest_connected_set(C, directed=True):
r"""Largest connected component for a directed graph with edge-weights given by the count matrix. Parameters C : scipy.sparse matrix Count matrix specifying edge weights. directed : bool, optional Whether to compute connected components for a directed or undirected graph. Default is True. Returns ------- lcc : array of integers The largest connected component of the directed graph. See also -------- connected_sets Notes ----- Viewing the count matrix as the adjacency matrix of a (directed) graph the largest connected set is the largest connected set of nodes of the corresponding graph. The largest connected set of a graph can be efficiently computed using Tarjan's algorithm. References .. [1] Tarjan, R E. 1972. Depth-first search and linear graph algorithms. SIAM Journal on Computing 1 (2):
146-160. Examples -------- array([0, 1]) array([0, 1, 2]) """
|
if isdense(C):
return sparse.connectivity.largest_connected_set(csr_matrix(C), directed=directed)
else:
return sparse.connectivity.largest_connected_set(C, directed=directed)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def largest_connected_submatrix(C, directed=True, lcc=None):
r"""Compute the count matrix on the largest connected set. Parameters C : scipy.sparse matrix Count matrix specifying edge weights. directed : bool, optional Whether to compute connected components for a directed or undirected graph. Default is True lcc : (M,) ndarray, optional The largest connected set Returns ------- C_cc : scipy.sparse matrix Count matrix of largest completely connected set of vertices (states) See also -------- largest_connected_set Notes ----- Viewing the count matrix as the adjacency matrix of a (directed) graph the larest connected submatrix is the adjacency matrix of the largest connected set of the corresponding graph. The largest connected submatrix can be efficiently computed using Tarjan's algorithm. References .. [1] Tarjan, R E. 1972. Depth-first search and linear graph algorithms. SIAM Journal on Computing 1 (2):
146-160. Examples -------- array([[10, 1], array([[10, 1, 0], [ 2, 0, 3], """
|
if isdense(C):
return sparse.connectivity.largest_connected_submatrix(csr_matrix(C), directed=directed, lcc=lcc).toarray()
else:
return sparse.connectivity.largest_connected_submatrix(C, directed=directed, lcc=lcc)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def is_connected(C, directed=True):
"""Check connectivity of the given matrix. Parameters C : scipy.sparse matrix Count matrix specifying edge weights. directed : bool, optional Whether to compute connected components for a directed or undirected graph. Default is True. Returns ------- is_connected: bool True if C is connected, False otherwise. See also -------- largest_connected_submatrix Notes ----- A count matrix is connected if the graph having the count matrix as adjacency matrix has a single connected component. Connectivity of a graph can be efficiently checked using Tarjan's algorithm. References .. [1] Tarjan, R E. 1972. Depth-first search and linear graph algorithms. SIAM Journal on Computing 1 (2):
146-160. Examples -------- False True """
|
if isdense(C):
return sparse.connectivity.is_connected(csr_matrix(C), directed=directed)
else:
return sparse.connectivity.is_connected(C, directed=directed)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def prior_neighbor(C, alpha=0.001):
r"""Neighbor prior for the given count matrix. Parameters C : (M, M) ndarray or scipy.sparse matrix Count matrix alpha : float (optional) Value of prior counts Returns ------- B : (M, M) ndarray or scipy.sparse matrix Prior count matrix Notes ------ The neighbor prior :math:`b_{ij}` is defined as .. math:: b_{ij}=\left \{ \begin{array}{rl} \alpha & c_{ij}+c_{ji}>0 \\ 0 & \text{else} \end{array} \right . Examples -------- array([[ 0.001, 0.001, 0. ], [ 0.001, 0. , 0.001], [ 0. , 0.001, 0.001]]) """
|
if isdense(C):
B = sparse.prior.prior_neighbor(csr_matrix(C), alpha=alpha)
return B.toarray()
else:
return sparse.prior.prior_neighbor(C, alpha=alpha)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def prior_const(C, alpha=0.001):
r"""Constant prior for given count matrix. Parameters C : (M, M) ndarray or scipy.sparse matrix Count matrix alpha : float (optional) Value of prior counts Returns ------- B : (M, M) ndarray Prior count matrix Notes ----- The prior is defined as .. math:: \begin{array}{rl} b_{ij}= \alpha & \forall i, j \end{array} Examples -------- array([[ 0.001, 0.001, 0.001], [ 0.001, 0.001, 0.001], [ 0.001, 0.001, 0.001]]) """
|
if isdense(C):
return sparse.prior.prior_const(C, alpha=alpha)
else:
warnings.warn("Prior will be a dense matrix for sparse input")
return sparse.prior.prior_const(C, alpha=alpha)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def transition_matrix(C, reversible=False, mu=None, method='auto', **kwargs):
r"""Estimate the transition matrix from the given countmatrix. Parameters C : numpy ndarray or scipy.sparse matrix Count matrix reversible : bool (optional) If True restrict the ensemble of transition matrices to those having a detailed balance symmetry otherwise the likelihood optimization is carried out over the whole space of stochastic matrices. mu : array_like The stationary distribution of the MLE transition matrix. method : str Select which implementation to use for the estimation. One of 'auto', 'dense' and 'sparse', optional, default='auto'. 'dense' always selects the dense implementation, 'sparse' always selects the sparse one. 'auto' selects the most efficient implementation according to the sparsity structure of the matrix: if the occupation of the C matrix is less then one third, select sparse. Else select dense. The type of the T matrix returned always matches the type of the C matrix, irrespective of the method that was used to compute it. **kwargs: Optional algorithm-specific parameters. See below for special cases Xinit : (M, M) ndarray Optional parameter with reversible = True. initial value for the matrix of absolute transition probabilities. Unless set otherwise, will use X = diag(pi) t, where T is a nonreversible transition matrix estimated from C, i.e. T_ij = c_ij / sum_k c_ik, and pi is its stationary distribution. maxiter : 1000000 : int Optional parameter with reversible = True. maximum number of iterations before the method exits maxerr : 1e-8 : float Optional parameter with reversible = True. convergence tolerance for transition matrix estimation. This specifies the maximum change of the Euclidean norm of relative stationary probabilities (:math:`x_i = \sum_k x_{ik}`). The relative stationary probability changes :math:`e_i = (x_i^{(1)} - x_i^{(2)})/(x_i^{(1)} + x_i^{(2)})` are used in order to track changes in small probabilities. The Euclidean norm of the change vector, :math:`|e_i|_2`, is compared to maxerr. rev_pisym : bool, default=False Fast computation of reversible transition matrix by normalizing :math:`x_{ij} = \pi_i p_{ij} + \pi_j p_{ji}`. :math:`p_{ij}` is the direct (nonreversible) estimate and :math:`pi_i` is its stationary distribution. This estimator is asympotically unbiased but not maximum likelihood. return_statdist : bool, default=False Optional parameter with reversible = True. If set to true, the stationary distribution is also returned return_conv : bool, default=False Optional parameter with reversible = True. If set to true, the likelihood history and the pi_change history is returned. warn_not_converged : bool, default=True Prints a warning if not converged. sparse_newton : bool, default=False If True, use the experimental primal-dual interior-point solver for sparse input/computation method. Returns ------- P : (M, M) ndarray or scipy.sparse matrix The MLE transition matrix. P has the same data type (dense or sparse) as the input matrix C. The reversible estimator returns by default only P, but may also return (P,pi) or (P,lhist,pi_changes) or (P,pi,lhist,pi_changes) depending on the return settings P : ndarray (n,n) transition matrix. This is the only return for return_statdist = False, return_conv = False (pi) : ndarray (n) stationary distribution. Only returned if return_statdist = True (lhist) : ndarray (k) likelihood history. Has the length of the number of iterations needed. Only returned if return_conv = True (pi_changes) : ndarray (k) history of likelihood history. Has the length of the number of iterations needed. 
Only returned if return_conv = True Notes ----- The transition matrix is a maximum likelihood estimate (MLE) of the probability distribution of transition matrices with parameters given by the count matrix. References .. [1] Prinz, J H, H Wu, M Sarich, B Keller, M Senne, M Held, J D Chodera, C Schuette and F Noe. 2011. Markov models of molecular kinetics: Generation and validation. J Chem Phys 134: 174105 .. [2] Bowman, G R, K A Beauchamp, G Boxer and V S Pande. 2009. Progress and challenges in the automated construction of Markov state models for full protein systems. J. Chem. Phys. 131: 124101 .. [3] Trendelkamp-Schroer, B, H Wu, F Paul and F. Noe. 2015 Estimation and uncertainty of reversible Markov models. J. Chem. Phys. 143: 174101 Examples -------- For the count matrix C = [[10, 1, 1], [2, 0, 3], [0, 1, 4]]: Non-reversible estimate array([[ 0.83333333, 0.08333333, 0.08333333], [ 0.4 , 0. , 0.6 ], [ 0. , 0.2 , 0.8 ]]) Reversible estimate array([[ 0.83333333, 0.10385551, 0.06281115], [ 0.35074677, 0. , 0.64925323], [ 0.04925323, 0.15074677, 0.8 ]]) Reversible estimate with given stationary vector array([[ 0.94771371, 0.00612645, 0.04615984], [ 0.42885157, 0. , 0.57114843], [ 0.11142031, 0.01969477, 0.86888491]]) """
|
if issparse(C):
sparse_input_type = True
elif isdense(C):
sparse_input_type = False
else:
raise NotImplementedError('C has an unknown type.')
if method == 'dense':
sparse_computation = False
elif method == 'sparse':
sparse_computation = True
elif method == 'auto':
        # heuristically determine whether it's more efficient to do a dense or sparse computation
if sparse_input_type:
dof = C.getnnz()
else:
dof = np.count_nonzero(C)
dimension = C.shape[0]
if dimension*dimension < 3*dof:
sparse_computation = False
else:
sparse_computation = True
else:
raise ValueError(('method="%s" is no valid choice. It should be one of'
'"dense", "sparse" or "auto".') % method)
# convert input type
if sparse_computation and not sparse_input_type:
C = coo_matrix(C)
if not sparse_computation and sparse_input_type:
C = C.toarray()
return_statdist = 'return_statdist' in kwargs
if not return_statdist:
kwargs['return_statdist'] = False
sparse_newton = kwargs.pop('sparse_newton', False)
if reversible:
rev_pisym = kwargs.pop('rev_pisym', False)
if mu is None:
if sparse_computation:
if rev_pisym:
result = sparse.transition_matrix.transition_matrix_reversible_pisym(C, **kwargs)
elif sparse_newton:
from msmtools.estimation.sparse.newton.mle_rev import solve_mle_rev
result = solve_mle_rev(C, **kwargs)
else:
result = sparse.mle_trev.mle_trev(C, **kwargs)
else:
if rev_pisym:
result = dense.transition_matrix.transition_matrix_reversible_pisym(C, **kwargs)
else:
result = dense.mle_trev.mle_trev(C, **kwargs)
else:
kwargs.pop('return_statdist') # pi given, keyword unknown by estimators.
if sparse_computation:
# Sparse, reversible, fixed pi (currently using dense with sparse conversion)
result = sparse.mle_trev_given_pi.mle_trev_given_pi(C, mu, **kwargs)
else:
result = dense.mle_trev_given_pi.mle_trev_given_pi(C, mu, **kwargs)
else: # nonreversible estimation
if mu is None:
if sparse_computation:
# Sparse, nonreversible
result = sparse.transition_matrix.transition_matrix_non_reversible(C)
else:
# Dense, nonreversible
result = dense.transition_matrix.transition_matrix_non_reversible(C)
# Both methods currently do not have an iterate of pi, so we compute it here for consistency.
if return_statdist:
from msmtools.analysis import stationary_distribution
mu = stationary_distribution(result)
else:
raise NotImplementedError('nonreversible mle with fixed stationary distribution not implemented.')
if return_statdist and isinstance(result, tuple):
T, mu = result
else:
T = result
# convert return type
if sparse_computation and not sparse_input_type:
T = T.toarray()
elif not sparse_computation and sparse_input_type:
T = csr_matrix(T)
if return_statdist:
return T, mu
return T
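A minimal usage sketch (assuming the import path msmtools.estimation.transition_matrix, which matches the module structure used above); the count matrix is the one implied by the docstring's example outputs:

import numpy as np
from msmtools.estimation import transition_matrix  # assumed import path

C = np.array([[10, 1, 1], [2, 0, 3], [0, 1, 4]])  # counts behind the docstring outputs

T_nrev = transition_matrix(C)                  # plain row normalization
T_rev = transition_matrix(C, reversible=True)  # detailed-balance constrained MLE
T_rev, pi = transition_matrix(C, reversible=True, return_statdist=True)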
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def log_likelihood(C, T):
r"""Log-likelihood of the count matrix given a transition matrix. Parameters C : (M, M) ndarray or scipy.sparse matrix Count matrix T : (M, M) ndarray orscipy.sparse matrix Transition matrix Returns ------- logL : float Log-likelihood of the count matrix Notes ----- The likelihood of a set of observed transition counts :math:`C=(c_{ij})` for a given matrix of transition counts :math:`T=(t_{ij})` is given by .. math:: L(C|P)=\prod_{i=1}^{M} \left( \prod_{j=1}^{M} p_{ij}^{c_{ij}} \right) The log-likelihood is given by .. math:: l(C|P)=\sum_{i,j=1}^{M}c_{ij} \log p_{ij}. The likelihood describes the probability of making an observation :math:`C` for a given model :math:`P`. Examples -------- References .. [1] Prinz, J H, H Wu, M Sarich, B Keller, M Senne, M Held, J D Chodera, C Schuette and F Noe. 2011. Markov models of molecular kinetics: Generation and validation. J Chem Phys 134: 174105 """
|
if issparse(C) and issparse(T):
return sparse.likelihood.log_likelihood(C, T)
else:
# use the dense likelihood calculator for all other cases
# if a mix of dense/sparse C/T matrices is used, then both
# will be converted to ndarrays.
if not isinstance(C, np.ndarray):
C = np.array(C)
if not isinstance(T, np.ndarray):
T = np.array(T)
# computation is still efficient, because we only use terms
# for nonzero elements of T
nz = np.nonzero(T)
return np.dot(C[nz], np.log(T[nz]))
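A small worked example of the log-likelihood formula above; the computation mirrors the dense branch of the function and can be checked by hand:

import numpy as np

C = np.array([[9, 1], [5, 5]])          # observed transition counts
T = np.array([[0.9, 0.1], [0.5, 0.5]])  # candidate transition matrix

# l(C|T) = sum_ij c_ij * log(t_ij), restricted to nonzero entries of T
nz = np.nonzero(T)
logL = np.dot(C[nz], np.log(T[nz]))
# 9*log(0.9) + 1*log(0.1) + 5*log(0.5) + 5*log(0.5) ~= -10.182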
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def tmatrix_cov(C, k=None):
r"""Covariance tensor for non-reversible transition matrix posterior. Parameters C : (M, M) ndarray or scipy.sparse matrix Count matrix k : int (optional) Return only covariance matrix for entires in the k-th row of the transition matrix Returns ------- cov : (M, M, M) ndarray Covariance tensor for transition matrix posterior Notes ----- The posterior of non-reversible transition matrices is .. math:: \mathbb{P}(T|C) \propto \prod_{i=1}^{M} \left( \prod_{j=1}^{M} p_{ij}^{c_{ij}} \right) Each row in the transition matrix is distributed according to a Dirichlet distribution with parameters given by the observed transition counts :math:`c_{ij}`. The covariance tensor :math:`\text{cov}[p_{ij},p_{kl}]=\Sigma_{i,j,k,l}` is zero whenever :math:`i \neq k` so that only :math:`\Sigma_{i,j,i,l}` is returned. """
|
if issparse(C):
warnings.warn("Covariance matrix will be dense for sparse input")
C = C.toarray()
return dense.covariance.tmatrix_cov(C, row=k)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sample_tmatrix(C, nsample=1, nsteps=None, reversible=False, mu=None, T0=None, return_statdist=False):
r"""samples transition matrices from the posterior distribution Parameters C : (M, M) ndarray or scipy.sparse matrix Count matrix nsample : int number of samples to be drawn nstep : int, default=None number of full Gibbs sampling sweeps internally done for each sample returned. This option is meant to ensure approximately uncorrelated samples for every call to sample(). If None, the number of steps will be automatically determined based on the other options and the matrix size. nstep>1 will only be used for reversible sampling, because nonreversible sampling generates statistically independent transition matrices every step. reversible : bool If true sample from the ensemble of transition matrices restricted to those obeying a detailed balance condition, else draw from the whole ensemble of stochastic matrices. mu : array_like A fixed stationary distribution. Transition matrices with that stationary distribution will be sampled T0 : ndarray, shape=(n, n) or scipy.sparse matrix Starting point of the MC chain of the sampling algorithm. Has to obey the required constraints. return_statdist : bool, optional, default = False if true, will also return the stationary distribution. Returns ------- P : ndarray(n,n) or array of ndarray(n,n) sampled transition matrix (or multiple matrices if nsample > 1) Notes ----- The transition matrix sampler generates transition matrices from the posterior distribution. The posterior distribution is given as a product of Dirichlet distributions .. math:: \mathbb{P}(T|C) \propto \prod_{i=1}^{M} \left( \prod_{j=1}^{M} p_{ij}^{c_{ij}} \right) See also -------- tmatrix_sampler """
|
if issparse(C):
_showSparseConversionWarning()
C = C.toarray()
sampler = tmatrix_sampler(C, reversible=reversible, mu=mu, T0=T0, nsteps=nsteps)
return sampler.sample(nsamples=nsample, return_statdist=return_statdist)
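A hedged usage sketch, assuming the function is exposed as msmtools.estimation.sample_tmatrix:

import numpy as np
from msmtools.estimation import sample_tmatrix  # assumed import path

C = np.array([[10, 2], [3, 8]])
samples = sample_tmatrix(C, nsample=100, reversible=True)  # 100 posterior samples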
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def tmatrix_sampler(C, reversible=False, mu=None, T0=None, nsteps=None, prior='sparse'):
r"""Generate transition matrix sampler object. Parameters C : (M, M) ndarray or scipy.sparse matrix Count matrix reversible : bool If true sample from the ensemble of transition matrices restricted to those obeying a detailed balance condition, else draw from the whole ensemble of stochastic matrices. mu : array_like A fixed stationary distribution. Transition matrices with that stationary distribution will be sampled T0 : ndarray, shape=(n, n) or scipy.sparse matrix Starting point of the MC chain of the sampling algorithm. Has to obey the required constraints. nstep : int, default=None number of full Gibbs sampling sweeps per sample. This option is meant to ensure approximately uncorrelated samples for every call to sample(). If None, the number of steps will be automatically determined based on the other options and the matrix size. nstep>1 will only be used for reversible sampling, because nonreversible sampling generates statistically independent transition matrices every step. Returns ------- sampler : A :py:class:dense.tmatrix_sampler.TransitionMatrixSampler object that can be used to generate samples. Notes ----- The transition matrix sampler generates transition matrices from the posterior distribution. The posterior distribution is given as a product of Dirichlet distributions .. math:: \mathbb{P}(T|C) \propto \prod_{i=1}^{M} \left( \prod_{j=1}^{M} p_{ij}^{c_{ij}} \right) The method can generate samples from the posterior under the following constraints **Reversible sampling** Using a MCMC sampler outlined in .. [1] it is ensured that samples from the posterior are reversible, i.e. there is a probability vector :math:`(\mu_i)` such that :math:`\mu_i t_{ij} = \mu_j t_{ji}` holds for all :math:`i,j`. **Reversible sampling with fixed stationary vector** Using a MCMC sampler outlined in .. [2] it is ensured that samples from the posterior fulfill detailed balance with respect to a given probability vector :math:`(\mu_i)`. References .. [1] Noe, F. Probability distributions of molecular observables computed from Markov state models. J Chem Phys 128: 244103 (2008) .. [2] Trendelkamp-Schroer, B., H. Wu, F. Paul and F. Noe: Estimation and uncertainty of reversible Markov models. J. Chem. Phys. (submitted) """
|
if issparse(C):
_showSparseConversionWarning()
C = C.toarray()
from .dense.tmatrix_sampler import TransitionMatrixSampler
sampler = TransitionMatrixSampler(C, reversible=reversible, mu=mu, P0=T0,
nsteps=nsteps, prior=prior)
return sampler
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def remove_negative_entries(A):
r"""Remove all negative entries from sparse matrix. Aplus=max(0, A) Parameters A : (M, M) scipy.sparse matrix Input matrix Returns ------- Aplus : (M, M) scipy.sparse matrix Input matrix with negative entries set to zero. """
|
A = A.tocoo()
data = A.data
row = A.row
col = A.col
"""Positive entries"""
pos = data > 0.0
datap = data[pos]
rowp = row[pos]
colp = col[pos]
Aplus = coo_matrix((datap, (rowp, colp)), shape=A.shape)
return Aplus
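For example, applied to a small sparse matrix with mixed signs:

from scipy.sparse import coo_matrix

A = coo_matrix([[1.0, -2.0], [-3.0, 4.0]])
Aplus = remove_negative_entries(A)
print(Aplus.toarray())  # [[1. 0.]
                        #  [0. 4.]]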
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def flux_matrix(T, pi, qminus, qplus, netflux=True):
r"""Compute the flux. Parameters T : (M, M) scipy.sparse matrix Transition matrix pi : (M,) ndarray Stationary distribution corresponding to T qminus : (M,) ndarray Backward comittor qplus : (M,) ndarray Forward committor netflux : boolean True: net flux matrix will be computed False: gross flux matrix will be computed Returns ------- flux : (M, M) scipy.sparse matrix Matrix of flux values between pairs of states. """
|
D1 = diags((pi * qminus,), (0,))
D2 = diags((qplus,), (0,))
flux = D1.dot(T.dot(D2))
"""Remove self-fluxes"""
flux = flux - diags(flux.diagonal(), 0)
"""Return net or gross flux"""
if netflux:
return to_netflux(flux)
else:
return flux
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_netflux(flux):
r"""Compute the netflux. f_ij^{+}=max{0, f_ij-f_ji} for all pairs i,j Parameters flux : (M, M) scipy.sparse matrix Matrix of flux values between pairs of states. Returns ------- netflux : (M, M) scipy.sparse matrix Matrix of netflux values between pairs of states. """
|
netflux = flux - flux.T
"""Set negative entries to zero"""
netflux = remove_negative_entries(netflux)
return netflux
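For example, for a two-state gross flux the net flux keeps only the positive part of f - f^T:

from scipy.sparse import csr_matrix

f = csr_matrix([[0.0, 0.3], [0.1, 0.0]])
print(to_netflux(f).toarray())  # [[0.  0.2]
                                #  [0.  0. ]]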
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def total_flux(flux, A):
r"""Compute the total flux between reactant and product. Parameters flux : (M, M) scipy.sparse matrix Matrix of flux values between pairs of states. A : array_like List of integer state labels for set A (reactant) Returns ------- F : float The total flux between reactant and product """
|
X = set(np.arange(flux.shape[0])) # total state space
A = set(A)
notA = X.difference(A)
"""Extract rows corresponding to A"""
W = flux.tocsr()
W = W[list(A), :]
"""Extract columns corresonding to X\A"""
W = W.tocsc()
W = W[:, list(notA)]
F = W.sum()
return F
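For example, with reactant set A = {0}, the total flux is the sum of all flux leaving A:

import numpy as np
from scipy.sparse import csr_matrix

flux = csr_matrix(np.array([[0.0, 0.2, 0.1],
                            [0.0, 0.0, 0.3],
                            [0.0, 0.0, 0.0]]))
print(total_flux(flux, A=[0]))  # 0.2 + 0.1 = 0.3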
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def stationary_distribution_sensitivity(T, j):
r"""Calculate the sensitivity matrix for entry j the stationary distribution vector given transition matrix T. Parameters T : numpy.ndarray shape = (n, n) Transition matrix j : int entry of stationary distribution for which the sensitivity is to be computed Returns ------- x : ndarray, shape=(n, n) Sensitivity matrix for entry index around transition matrix T. Reversibility is not assumed. Remark ------ Note, that this function uses a different normalization convention for the sensitivity compared to eigenvector_sensitivity. See there for further information. """
|
n = len(T)
lEV = numpy.ones(n)
rEV = stationary_distribution(T)
eVal = 1.0
T = numpy.transpose(T)
vecA = numpy.zeros(n)
vecA[j] = 1.0
matA = T - eVal * numpy.identity(n)
# normalize s.t. sum is one using rEV which is constant
matA = numpy.concatenate((matA, [lEV]))
phi = numpy.linalg.lstsq(numpy.transpose(matA), vecA, rcond=-1)
phi = numpy.delete(phi[0], -1)
sensitivity = -numpy.outer(rEV, phi) + numpy.dot(phi, rEV) * numpy.outer(rEV, lEV)
return sensitivity
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def geometric_series(q, n):
""" Compute finite geometric series. \frac{1-q^{n+1}}{1-q} q \neq 1 \sum_{k=0}^{n} q^{k}= n+1 q = 1 Parameters q : array-like The common ratio of the geometric series. n : int The number of terms in the finite series. Returns ------- s : float or ndarray The value of the finite series. """
|
q = np.asarray(q)
if n < 0:
raise ValueError('Finite geometric series is only defined for n>=0.')
else:
"""q is scalar"""
if q.ndim == 0:
if q == 1:
s = (n + 1) * 1.0
return s
else:
s = (1.0 - q ** (n + 1)) / (1.0 - q)
return s
"""q is ndarray"""
s = np.zeros(np.shape(q), dtype=q.dtype)
"""All elements with value q=1"""
ind = (q == 1.0)
"""For q=1 the sum has the value s=n+1"""
s[ind] = (n + 1) * 1.0
"""All elements with value q\neq 1"""
not_ind = np.logical_not(ind)
s[not_ind] = (1.0 - q[not_ind] ** (n + 1)) / (1.0 - q[not_ind])
return s
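Both branches can be exercised directly:

import numpy as np

print(geometric_series(0.5, 3))                   # 1 + 0.5 + 0.25 + 0.125 = 1.875
print(geometric_series(np.array([1.0, 2.0]), 2))  # [3. 7.]  (q=1 and q!=1 branches)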
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def solve_mle_rev(C, tol=1e-10, maxiter=100, show_progress=False, full_output=False, return_statdist=True, **kwargs):
"""Number of states"""
|
M = C.shape[0]
"""Initial guess for primal-point"""
z0 = np.zeros(2*M)
z0[0:M] = 1.0
"""Inequality constraints"""
# G = np.zeros((M, 2*M))
# G[np.arange(M), np.arange(M)] = -1.0
G = -1.0*scipy.sparse.eye(M, n=2*M, k=0)
h = np.zeros(M)
"""Equality constraints"""
A = np.zeros((1, 2*M))
A[0, M] = 1.0
b = np.array([0.0])
"""Scaling"""
c0 = C.max()
C = C/c0
"""Symmetric part"""
Cs = C + C.T
"""Column sum"""
c = C.sum(axis=0)
if scipy.sparse.issparse(C):
Cs = Cs.tocsr()
c = c.A1
A = scipy.sparse.csr_matrix(A)
F = objective_sparse.F
DF = objective_sparse.DFsym
convert_solution = objective_sparse.convert_solution
else:
F = objective_dense.F
DF = objective_dense.DF
convert_solution = objective_dense.convert_solution
"""PDIP iteration"""
res = primal_dual_solve(F, z0, DF, A, b, G, h,
args=(Cs, c),
maxiter=maxiter, tol=tol,
show_progress=show_progress,
full_output=full_output)
if full_output:
z, info = res
else:
z = res
pi, P = convert_solution(z, Cs)
result = [P]
if return_statdist:
result.append(pi)
if full_output:
result.append(info)
return tuple(result) if len(result) > 1 else result[0]
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
|
def home(request):
"Simple homepage view."
context = {}
if request.user.is_authenticated():
try:
access = request.user.accountaccess_set.all()[0]
except IndexError:
access = None
else:
client = access.api_client
context['info'] = client.get_profile_info(raw_token=access.access_token)
return render(request, 'home.html', context)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
|
def get_client(provider, token=''):
"Return the API client for the given provider."
cls = OAuth2Client
if provider.request_token_url:
cls = OAuthClient
return cls(provider, token)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
|
def check_application_state(self, request, callback):
"Check optional state parameter."
stored = request.session.get(self.session_key, None)
returned = request.GET.get('state', None)
check = False
if stored is not None:
if returned is not None:
check = constant_time_compare(stored, returned)
else:
logger.error('No state parameter returned by the provider.')
else:
            logger.error('No state stored in the session.')
return check
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _stripped_codes(codes):
"""Return a tuple of stripped codes split by ','."""
|
return tuple([
code.strip() for code in codes.split(',')
if code.strip()
])
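For example, trailing whitespace and empty entries are dropped:

print(_stripped_codes('E501, W503, '))  # ('E501', 'W503')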
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def regex(self):
"""Return compiled regex."""
|
if not self._compiled_regex:
self._compiled_regex = re.compile(self.raw)
return self._compiled_regex
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def marker(self):
"""Return environment marker."""
|
if not self._marker:
assert markers, 'Package packaging is needed for environment markers'
self._marker = markers.Marker(self.raw)
return self._marker
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def regex_match_any(self, line, codes=None):
"""Match any regex."""
|
for selector in self.regex_selectors:
for match in selector.regex.finditer(line):
if codes and match.lastindex:
# Currently the group name must be 'codes'
try:
disabled_codes = match.group('codes')
except IndexError:
return True
disabled_codes = _stripped_codes(disabled_codes)
current_code = codes[-1]
if current_code in disabled_codes:
return True
else:
return True
return False
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def match(self, filename, line, codes):
"""Match rule and set attribute codes."""
|
if self.regex_match_any(line, codes):
if self._vary_codes:
self.codes = tuple([codes[-1]])
return True
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def file_match_any(self, filename):
"""Match any filename."""
|
if filename.startswith('.' + os.sep):
filename = filename[len(os.sep) + 1:]
if os.sep != '/':
filename = filename.replace(os.sep, '/')
for selector in self.file_selectors:
if (selector.pattern.endswith('/') and
filename.startswith(selector.pattern)):
return True
if fnmatch.fnmatch(filename, selector.pattern):
return True
return False
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def codes_match_any(self, codes):
"""Match any code."""
|
for selector in self.code_selectors:
if selector.code in codes:
return True
return False
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def match(self, filename, line, codes):
"""Match rule."""
|
if ((not self.file_selectors or self.file_match_any(filename)) and
(not self.environment_marker_selector or
self.environment_marker_evaluate()) and
(not self.code_selectors or self.codes_match_any(codes))):
if self.regex_selectors:
return super(Rule, self).match(filename, line, codes)
else:
return True
return False
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
|
def authenticate(self, provider=None, identifier=None):
"Fetch user for a given provider by id."
provider_q = Q(provider__name=provider)
if isinstance(provider, Provider):
provider_q = Q(provider=provider)
try:
access = AccountAccess.objects.filter(
provider_q, identifier=identifier
).select_related('user')[0]
except IndexError:
return None
else:
return access.user
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def __extract_modules(self, loader, name, is_pkg):
""" if module found load module and save all attributes in the module found """
|
mod = loader.find_module(name).load_module(name)
""" find the attribute method on each module """
if hasattr(mod, '__method__'):
""" register to the blueprint if method attribute found """
module_router = ModuleRouter(mod,
ignore_names=self.__serialize_module_paths()
).register_route(app=self.application, name=name)
self.__routers.extend(module_router.routers)
self.__modules.append(mod)
else:
""" prompt not found notification """
# print('{} has no module attribute method'.format(mod))
pass
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
|
def get_or_create_user(self, provider, access, info):
"Create a shell auth.User."
digest = hashlib.sha1(smart_bytes(access)).digest()
# Base 64 encode to get below 30 characters
# Removed padding characters
username = force_text(base64.urlsafe_b64encode(digest)).replace('=', '')
User = get_user_model()
kwargs = {
User.USERNAME_FIELD: username,
'email': '',
'password': None
}
return User.objects.create_user(**kwargs)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
|
def get_user_id(self, provider, info):
"Return unique identifier from the profile info."
id_key = self.provider_id or 'id'
result = info
try:
for key in id_key.split('.'):
result = result[key]
return result
except KeyError:
return None
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
|
def handle_existing_user(self, provider, user, access, info):
"Login user and redirect."
login(self.request, user)
return redirect(self.get_login_redirect(provider, user, access))
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
|
def handle_new_user(self, provider, access, info):
"Create a shell auth.User and redirect."
user = self.get_or_create_user(provider, access, info)
access.user = user
AccountAccess.objects.filter(pk=access.pk).update(user=user)
user = authenticate(provider=access.provider, identifier=access.identifier)
login(self.request, user)
return redirect(self.get_login_redirect(provider, user, access, True))
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def discover_nupnp(websession):
"""Discover bridges via NUPNP."""
|
async with websession.get(URL_NUPNP) as res:
return [Bridge(item['internalipaddress'], websession=websession)
for item in (await res.json())]
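A hedged usage sketch, assuming an aiohttp session and that the Bridge object exposes its IP as .host:

import asyncio
import aiohttp

async def main():
    async with aiohttp.ClientSession() as session:
        bridges = await discover_nupnp(session)
        for bridge in bridges:
            print(bridge.host)  # assumed attribute name

asyncio.run(main())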
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_months_of_year(year):
""" Returns the number of months that have already passed in the given year. This is useful for calculating averages on the year view. For past years, we should divide by 12, but for the current year, we should divide by the current month. """
|
current_year = now().year
if year == current_year:
return now().month
if year > current_year:
return 1
if year < current_year:
return 12
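This makes a year-view average well defined for any year, for example:

from django.utils.timezone import now  # the same now() helper used in the module

total = 1200  # hypothetical yearly total
average = total / get_months_of_year(now().year - 1)  # past year: divide by 12
average = total / get_months_of_year(now().year)      # current year: divide by current month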
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def colorgamut(self):
"""The color gamut information of the light."""
|
try:
light_spec = self.controlcapabilities
gtup = tuple([XYPoint(*x) for x in light_spec['colorgamut']])
color_gamut = GamutType(*gtup)
except KeyError:
color_gamut = None
return color_gamut
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_totals_by_payee(self, account, start_date=None, end_date=None):
""" Returns transaction totals grouped by Payee. """
|
qs = Transaction.objects.filter(account=account, parent__isnull=True)
qs = qs.values('payee').annotate(models.Sum('value_gross'))
qs = qs.order_by('payee__name')
return qs
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_without_invoice(self):
""" Returns transactions that don't have an invoice. We filter out transactions that have children, because those transactions never have invoices - their children are the ones that would each have one invoice. """
|
qs = Transaction.objects.filter(
children__isnull=True, invoice__isnull=True)
return qs
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_enabled():
"""Wrapped function for filtering enabled providers."""
|
providers = Provider.objects.all()
return [p for p in providers if p.enabled()]
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
|
def available_providers(request):
"Adds the list of enabled providers to the context."
if APPENGINE:
# Note: AppEngine inequality queries are limited to one property.
# See https://developers.google.com/appengine/docs/python/datastore/queries#Python_Restrictions_on_queries
# Users have also noted that the exclusion queries don't work
# See https://github.com/mlavin/django-all-access/pull/46
# So this is lazily-filtered in Python
qs = SimpleLazyObject(lambda: _get_enabled())
else:
qs = Provider.objects.filter(consumer_secret__isnull=False, consumer_key__isnull=False)
return {'allaccess_providers': qs}
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def run(command, **kw):
"""Run `command`, catch any exception, and return lines of output."""
|
# Windows low-level subprocess API wants str for current working
# directory.
if sys.platform == 'win32':
_cwd = kw.get('cwd', None)
if _cwd is not None:
kw['cwd'] = _cwd.decode()
try:
# In Python 3, iterating over bytes yield integers, so we call
# `splitlines()` to force Python 3 to give us lines instead.
return check_output(command, **kw).splitlines()
except CalledProcessError:
return ()
except FileNotFoundError:
print("The {} binary was not found. Skipping directory {}.\n"
.format(command[0], kw['cwd'].decode("UTF-8")))
return ()
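Note that callers pass `cwd` as bytes (it is decoded on Windows and in the error message), for example:

# List the working-tree status of the current directory; `run` yields bytes lines.
for line in run(['git', 'status', '-s'], cwd=b'.'):
    print(line.decode('utf-8', 'replace'))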
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def status_mercurial(path, ignore_set, options):
"""Run hg status. Returns a 2-element tuple: * Text lines describing the status of the repository. * Empty sequence of subrepos, since hg does not support them. """
|
lines = run(['hg', '--config', 'extensions.color=!', 'st'], cwd=path)
subrepos = ()
return [b' ' + l for l in lines if not l.startswith(b'?')], subrepos
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def status_git(path, ignore_set, options):
"""Run git status. Returns a 2-element tuple: * Text lines describing the status of the repository. * List of subrepository paths, relative to the repository itself. """
|
# Check whether current branch is dirty:
lines = [l for l in run(('git', 'status', '-s', '-b'), cwd=path)
if (options.untracked or not l.startswith(b'?'))
and not l.startswith(b'##')]
# Check all branches for unpushed commits:
lines += [l for l in run(('git', 'branch', '-v'), cwd=path)
if (b' [ahead ' in l)]
# Check for non-tracking branches:
if options.non_tracking:
lines += [l for l in run(('git', 'for-each-ref',
'--format=[%(refname:short)]%(upstream)',
'refs/heads'), cwd=path)
if l.endswith(b']')]
if options.stash:
lines += [l for l in run(('git', 'stash', 'list'), cwd=path)]
discovered_submodules = []
for l in run(('git', 'submodule', 'status'), cwd=path):
match = git_submodule.search(l)
if match:
discovered_submodules.append(match.group(1))
return lines, discovered_submodules
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def status_subversion(path, ignore_set, options):
"""Run svn status. Returns a 2-element tuple: * Text lines describing the status of the repository. * Empty sequence of subrepos, since svn does not support them. """
|
subrepos = ()
if path in ignore_set:
return None, subrepos
keepers = []
for line in run(['svn', 'st', '-v'], cwd=path):
if not line.strip():
continue
if line.startswith(b'Performing') or line[0] in b'X?':
continue
status = line[:8]
ignored_states = options.ignore_svn_states
if ignored_states and status.strip() in ignored_states:
continue
filename = line[8:].split(None, 3)[-1]
ignore_set.add(os.path.join(path, filename))
if status.strip():
keepers.append(b' ' + status + filename)
return keepers, subrepos
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_reporter_state():
"""Get pep8 reporter state from stack."""
|
# Stack
# 1. get_reporter_state (i.e. this function)
# 2. putty_ignore_code
# 3. QueueReport.error or pep8.StandardReport.error for flake8 -j 1
# 4. pep8.Checker.check_ast or check_physical or check_logical
# locals contains `tree` (ast) for check_ast
frame = sys._getframe(3)
reporter = frame.f_locals['self']
line_number = frame.f_locals['line_number']
offset = frame.f_locals['offset']
text = frame.f_locals['text']
check = frame.f_locals['check']
return reporter, line_number, offset, text, check
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def putty_ignore_code(options, code):
"""Implement pep8 'ignore_code' hook."""
|
reporter, line_number, offset, text, check = get_reporter_state()
try:
line = reporter.lines[line_number - 1]
except IndexError:
line = ''
options.ignore = options._orig_ignore
options.select = options._orig_select
for rule in options.putty_ignore:
if rule.match(reporter.filename, line, list(reporter.counters) + [code]):
if rule._append_codes:
options.ignore = options.ignore + rule.codes
else:
options.ignore = rule.codes
for rule in options.putty_select:
if rule.match(reporter.filename, line, list(reporter.counters) + [code]):
if rule._append_codes:
options.select = options.select + rule.codes
else:
options.select = rule.codes
return ignore_code(options, code)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_options(cls, parser):
"""Add options for command line and config file."""
|
parser.add_option(
'--putty-select', metavar='errors', default='',
help='putty select list',
)
parser.add_option(
'--putty-ignore', metavar='errors', default='',
help='putty ignore list',
)
parser.add_option(
'--putty-no-auto-ignore', action='store_false',
dest='putty_auto_ignore', default=False,
help=(' (default) do not auto ignore lines matching '
'# flake8: disable=<code>,<code>'),
)
parser.add_option(
'--putty-auto-ignore', action='store_true',
dest='putty_auto_ignore', default=False,
help=('auto ignore lines matching '
'# flake8: disable=<code>,<code>'),
)
parser.config_options.append('putty-select')
parser.config_options.append('putty-ignore')
parser.config_options.append('putty-auto-ignore')
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse_options(cls, options):
"""Parse options and activate `ignore_code` handler."""
|
if (not options.putty_select and not options.putty_ignore and
not options.putty_auto_ignore):
return
options._orig_select = options.select
options._orig_ignore = options.ignore
options.putty_select = Parser(options.putty_select)._rules
options.putty_ignore = Parser(options.putty_ignore)._rules
if options.putty_auto_ignore:
options.putty_ignore.append(AutoLineDisableRule())
options.ignore_code = functools.partial(
putty_ignore_code,
options,
)
options.report._ignore_code = options.ignore_code
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _raise_on_error(data):
"""Check response for error message."""
|
if isinstance(data, list):
data = data[0]
if isinstance(data, dict) and 'error' in data:
raise_error(data['error'])
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def set_config(self, on=None, long=None, lat=None, sunriseoffset=None, sunsetoffset=None):
"""Change config of a Daylight sensor."""
|
data = {
key: value for key, value in {
'on': on,
'long': long,
'lat': lat,
'sunriseoffset': sunriseoffset,
'sunsetoffset': sunsetoffset,
}.items() if value is not None
}
await self._request('put', 'sensors/{}/config'.format(self.id),
json=data)
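A hedged usage sketch; `sensor` is assumed to be a Daylight sensor obtained from an initialized bridge:

async def update_daylight(sensor):
    # Only non-None keyword arguments end up in the PUT body,
    # so this call changes just `on` and `sunriseoffset`.
    await sensor.set_config(on=True, sunriseoffset=30)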
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def set_config(self, on=None, tholddark=None, tholdoffset=None):
"""Change config of a CLIP LightLevel sensor."""
|
data = {
key: value for key, value in {
'on': on,
'tholddark': tholddark,
'tholdoffset': tholdoffset,
}.items() if value is not None
}
await self._request('put', 'sensors/{}/config'.format(self.id),
json=data)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_unpaid_invoices_with_transactions(branch=None):
""" Returns all invoices that are unpaid on freckle but have transactions. This means, that the invoice is either partially paid and can be left as unpaid in freckle, or the invoice has been fully paid and should be set to paid in freckle as well. """
|
if not client: # pragma: nocover
return None
result = {}
try:
unpaid_invoices = client.fetch_json(
'invoices', query_params={'state': 'unpaid'})
except (ConnectionError, HTTPError): # pragma: nocover
result.update({'error': _('Wasn\'t able to connect to Freckle.')})
else:
invoices = []
for invoice in unpaid_invoices:
invoice_with_transactions = models.Invoice.objects.filter(
invoice_number=invoice['reference'],
transactions__isnull=False)
if branch:
invoice_with_transactions = invoice_with_transactions.filter(
branch=branch)
if invoice_with_transactions:
invoices.append(invoice)
result.update({'invoices': invoices})
return result
|