repo: stringlengths (7..55)
path: stringlengths (4..223)
url: stringlengths (87..315)
code: stringlengths (75..104k)
code_tokens: list
docstring: stringlengths (1..46.9k)
docstring_tokens: list
language: stringclasses (1 value)
partition: stringclasses (3 values)
avg_line_len: float64 (7.91..980)
markovmodel/PyEMMA
pyemma/coordinates/api.py
https://github.com/markovmodel/PyEMMA/blob/5c3124398217de05ba5ce9c8fb01519222481ab8/pyemma/coordinates/api.py#L1028-L1267
def tica(data=None, lag=10, dim=-1, var_cutoff=0.95, kinetic_map=True, commute_map=False, weights='empirical',
         stride=1, remove_mean=True, skip=0, reversible=True, ncov_max=float('inf'), chunksize=None, **kwargs):
    r""" Time-lagged independent component analysis (TICA).

    TICA is a linear transformation method. In contrast to PCA, which finds coordinates of maximal variance, TICA finds coordinates of maximal autocorrelation at the given lag time. Therefore, TICA is useful in order to find the *slow* components in a dataset and thus an excellent choice to transform molecular dynamics data before clustering data for the construction of a Markov model. When the input data is the result of a Markov process (such as thermostatted molecular dynamics), TICA finds in fact an approximation to the eigenfunctions and eigenvalues of the underlying Markov operator [1]_.

    It estimates a TICA transformation from *data*. When input data is given as an argument, the estimation will be carried out straight away, and the resulting object can be used to obtain eigenvalues, eigenvectors or project input data onto the slowest TICA components. If no data is given, this object is an empty estimator and can be put into a :func:`pipeline` in order to use TICA in streaming mode.

    Parameters
    ----------
    data : ndarray (T, d) or list of ndarray (T_i, d) or a reader created by source function
        array with the data, if available. When given, the TICA transformation is immediately computed and can be used to transform data.
    lag : int, optional, default = 10
        the lag time, in multiples of the input time step
    dim : int, optional, default -1
        the number of dimensions (independent components) to project onto. A call to the :func:`map <pyemma.coordinates.transform.TICA.map>` function reduces the d-dimensional input to only dim dimensions such that the data preserves the maximum possible autocorrelation amongst dim-dimensional linear projections. -1 means all numerically available dimensions will be used unless reduced by var_cutoff. Setting dim to a positive value is exclusive with var_cutoff.
    var_cutoff : float in the range [0,1], optional, default 0.95
        Determines the number of output dimensions by including dimensions until their cumulative kinetic variance exceeds the fraction var_cutoff. var_cutoff=1.0 means all numerically available dimensions (see epsilon) will be used, unless set by dim. Setting var_cutoff smaller than 1.0 is exclusive with dim.
    kinetic_map : bool, optional, default True
        Eigenvectors will be scaled by eigenvalues. As a result, Euclidean distances in the transformed data approximate kinetic distances [4]_. This is a good choice when the data is further processed by clustering.
    commute_map : bool, optional, default False
        Eigenvector_i will be scaled by sqrt(timescale_i / 2). As a result, Euclidean distances in the transformed data will approximate commute distances [5]_.
    stride : int, optional, default = 1
        If set to 1, all input data will be used for estimation. Note that this could cause this calculation to be very slow for large data sets. Since molecular dynamics data is usually correlated at short timescales, it is often sufficient to estimate transformations at a longer stride. Note that the stride option in the get_output() function of the returned object is independent, so you can parametrize at a long stride, and still map all frames through the transformer.
    weights : optional, default="empirical"
        Re-weighting strategy to be used in order to compute equilibrium covariances from non-equilibrium data.

        * "empirical": no re-weighting
        * "koopman": use re-weighting procedure from [6]_
        * weights: An object that allows to compute re-weighting factors. It must possess a method weights(X) that accepts a trajectory X (np.ndarray(T, n)) and returns a vector of re-weighting factors (np.ndarray(T,)).
    remove_mean : bool, optional, default True
        remove mean during covariance estimation. Should not be turned off.
    skip : int, default=0
        skip the first initial n frames per trajectory.
    reversible : bool, default=True
        symmetrize correlation matrices C_0, C_{\tau}.
    ncov_max : int, default=infinity
        limit the memory usage of the algorithm from [7]_ to an amount that corresponds to ncov_max additional copies of each correlation matrix
    chunksize : int, default=None
        Number of data frames to process at once. Choose a higher value here, to optimize thread usage and gain processing speed. If None is passed, use the default value of the underlying reader/data source. Choose zero to disable chunking at all.

    Returns
    -------
    tica : a :class:`TICA <pyemma.coordinates.transform.TICA>` transformation object
        Object for time-lagged independent component (TICA) analysis. It contains TICA eigenvalues and eigenvectors, and the projection of input data onto the dominant TICA components.

    Notes
    -----
    Given a sequence of multivariate data :math:`X_t`, it computes the mean-free covariance and time-lagged covariance matrix:

    .. math::

        C_0      &= (X_t - \mu)^T \mathrm{diag}(w) (X_t - \mu) \\
        C_{\tau} &= (X_t - \mu)^T \mathrm{diag}(w) (X_{t + \tau} - \mu)

    where w is a vector of weights for each time step. By default, these weights are all equal to one, but different weights are possible, like the re-weighting to equilibrium described in [6]_. Subsequently, the eigenvalue problem

    .. math:: C_{\tau} r_i = C_0 \lambda_i r_i,

    is solved, where :math:`r_i` are the independent components and :math:`\lambda_i` are their respective normalized time-autocorrelations. The eigenvalues are related to the relaxation timescale by

    .. math::

        t_i = -\frac{\tau}{\ln |\lambda_i|}.

    When used as a dimension reduction method, the input data is projected onto the dominant independent components.

    TICA was originally introduced for signal processing in [2]_. It was introduced to molecular dynamics and as a method for the construction of Markov models in [1]_ and [3]_. It was shown in [1]_ that when applied to molecular dynamics data, TICA is an approximation to the eigenvalues and eigenvectors of the true underlying dynamics.

    Examples
    --------
    Invoke TICA transformation with a given lag time and output dimension:

    >>> import numpy as np
    >>> from pyemma.coordinates import tica
    >>> data = np.random.random((100,3))
    >>> projected_data = tica(data, lag=2, dim=1).get_output()[0]

    For a brief explanation why TICA outperforms PCA to extract a good reaction coordinate have a look `here <http://docs.markovmodel.org/lecture_tica.html#Example:-TICA-versus-PCA-in-a-stretched-double-well-potential>`_.

    See also
    --------
    :class:`TICA <pyemma.coordinates.transform.TICA>` : tica object
    :func:`pca <pyemma.coordinates.pca>` : for principal component analysis

    .. autoclass:: pyemma.coordinates.transform.tica.TICA
        :members:
        :undoc-members:

        .. rubric:: Methods

        .. autoautosummary:: pyemma.coordinates.transform.tica.TICA
            :methods:

        .. rubric:: Attributes

        .. autoautosummary:: pyemma.coordinates.transform.tica.TICA
            :attributes:

    References
    ----------
    .. [1] Perez-Hernandez G, F Paul, T Giorgino, G De Fabritiis and F Noe. 2013. Identification of slow molecular order parameters for Markov model construction. J. Chem. Phys. 139, 015102. doi:10.1063/1.4811489
    .. [2] L. Molgedey and H. G. Schuster. 1994. Separation of a mixture of independent signals using time delayed correlations. Phys. Rev. Lett. 72, 3634.
    .. [3] Schwantes C, V S Pande. 2013. Improvements in Markov State Model Construction Reveal Many Non-Native Interactions in the Folding of NTL9. J. Chem. Theory. Comput. 9, 2000-2009. doi:10.1021/ct300878a
    .. [4] Noe, F. and Clementi, C. 2015. Kinetic distance and kinetic maps from molecular dynamics simulation. J. Chem. Theory. Comput. doi:10.1021/acs.jctc.5b00553
    .. [5] Noe, F., Banisch, R., Clementi, C. 2016. Commute maps: separating slowly-mixing molecular configurations for kinetic modeling. J. Chem. Theory. Comput. doi:10.1021/acs.jctc.6b00762
    .. [6] Wu, H., Nueske, F., Paul, F., Klus, S., Koltai, P., and Noe, F. 2016. Bias reduced variational approximation of molecular kinetics from short off-equilibrium simulations. J. Chem. Phys. (submitted), https://arxiv.org/abs/1610.06773.
    .. [7] Chan, T. F., Golub G. H., LeVeque R. J. 1979. Updating formulae and pairwise algorithms for computing sample variances. Technical Report STAN-CS-79-773, Department of Computer Science, Stanford University.

    """
    from pyemma.coordinates.transform.tica import TICA
    from pyemma.coordinates.estimation.koopman import _KoopmanEstimator
    import types
    from pyemma.util.reflection import get_default_args
    cs = _check_old_chunksize_arg(chunksize, get_default_args(tica)['chunksize'], **kwargs)

    if isinstance(weights, _string_types):
        if weights == "koopman":
            if data is None:
                raise ValueError("Data must be supplied for reweighting='koopman'")
            if not reversible:
                raise ValueError("Koopman re-weighting is designed for reversible processes, set reversible=True")
            koop = _KoopmanEstimator(lag=lag, stride=stride, skip=skip, ncov_max=ncov_max)
            koop.estimate(data, chunksize=cs)
            weights = koop.weights
        elif weights == "empirical":
            weights = None
        else:
            raise ValueError("reweighting must be either 'empirical', 'koopman' "
                             "or an object with a weights(data) method.")
    elif hasattr(weights, 'weights') and type(getattr(weights, 'weights')) == types.MethodType:
        weights = weights
    elif isinstance(weights, (list, tuple)) and all(isinstance(w, _np.ndarray) for w in weights):
        if data is not None and len(data) != len(weights):
            raise ValueError("len of weights({}) must match len of data({}).".format(len(weights), len(data)))
    else:
        raise ValueError("reweighting must be either 'empirical', 'koopman' or an object with a weights(data) method.")

    if not remove_mean:
        import warnings
        user_msg = 'remove_mean option is deprecated. The mean is removed from the data by default, otherwise it ' \
                   'cannot be guaranteed that all eigenvalues will be smaller than one. Some functionalities might ' \
                   'become useless in this case (e.g. commute_maps). Also, not removing the mean will not result in ' \
                   'a significant speed up of calculations.'
        warnings.warn(user_msg, category=_PyEMMA_DeprecationWarning)

    res = TICA(lag, dim=dim, var_cutoff=var_cutoff, kinetic_map=kinetic_map, commute_map=commute_map,
               skip=skip, stride=stride, weights=weights, reversible=reversible, ncov_max=ncov_max)
    if data is not None:
        res.estimate(data, chunksize=cs)
    else:
        res.chunksize = cs
    return res
[ "def", "tica", "(", "data", "=", "None", ",", "lag", "=", "10", ",", "dim", "=", "-", "1", ",", "var_cutoff", "=", "0.95", ",", "kinetic_map", "=", "True", ",", "commute_map", "=", "False", ",", "weights", "=", "'empirical'", ",", "stride", "=", "1", ",", "remove_mean", "=", "True", ",", "skip", "=", "0", ",", "reversible", "=", "True", ",", "ncov_max", "=", "float", "(", "'inf'", ")", ",", "chunksize", "=", "None", ",", "*", "*", "kwargs", ")", ":", "from", "pyemma", ".", "coordinates", ".", "transform", ".", "tica", "import", "TICA", "from", "pyemma", ".", "coordinates", ".", "estimation", ".", "koopman", "import", "_KoopmanEstimator", "import", "types", "from", "pyemma", ".", "util", ".", "reflection", "import", "get_default_args", "cs", "=", "_check_old_chunksize_arg", "(", "chunksize", ",", "get_default_args", "(", "tica", ")", "[", "'chunksize'", "]", ",", "*", "*", "kwargs", ")", "if", "isinstance", "(", "weights", ",", "_string_types", ")", ":", "if", "weights", "==", "\"koopman\"", ":", "if", "data", "is", "None", ":", "raise", "ValueError", "(", "\"Data must be supplied for reweighting='koopman'\"", ")", "if", "not", "reversible", ":", "raise", "ValueError", "(", "\"Koopman re-weighting is designed for reversible processes, set reversible=True\"", ")", "koop", "=", "_KoopmanEstimator", "(", "lag", "=", "lag", ",", "stride", "=", "stride", ",", "skip", "=", "skip", ",", "ncov_max", "=", "ncov_max", ")", "koop", ".", "estimate", "(", "data", ",", "chunksize", "=", "cs", ")", "weights", "=", "koop", ".", "weights", "elif", "weights", "==", "\"empirical\"", ":", "weights", "=", "None", "else", ":", "raise", "ValueError", "(", "\"reweighting must be either 'empirical', 'koopman' \"", "\"or an object with a weights(data) method.\"", ")", "elif", "hasattr", "(", "weights", ",", "'weights'", ")", "and", "type", "(", "getattr", "(", "weights", ",", "'weights'", ")", ")", "==", "types", ".", "MethodType", ":", "weights", "=", "weights", "elif", "isinstance", "(", "weights", ",", "(", "list", ",", "tuple", ")", ")", "and", "all", "(", "isinstance", "(", "w", ",", "_np", ".", "ndarray", ")", "for", "w", "in", "weights", ")", ":", "if", "data", "is", "not", "None", "and", "len", "(", "data", ")", "!=", "len", "(", "weights", ")", ":", "raise", "ValueError", "(", "\"len of weights({}) must match len of data({}).\"", ".", "format", "(", "len", "(", "weights", ")", ",", "len", "(", "data", ")", ")", ")", "else", ":", "raise", "ValueError", "(", "\"reweighting must be either 'empirical', 'koopman' or an object with a weights(data) method.\"", ")", "if", "not", "remove_mean", ":", "import", "warnings", "user_msg", "=", "'remove_mean option is deprecated. The mean is removed from the data by default, otherwise it'", "'cannot be guaranteed that all eigenvalues will be smaller than one. Some functionalities might'", "'become useless in this case (e.g. commute_maps). 
Also, not removing the mean will not result in'", "'a significant speed up of calculations.'", "warnings", ".", "warn", "(", "user_msg", ",", "category", "=", "_PyEMMA_DeprecationWarning", ")", "res", "=", "TICA", "(", "lag", ",", "dim", "=", "dim", ",", "var_cutoff", "=", "var_cutoff", ",", "kinetic_map", "=", "kinetic_map", ",", "commute_map", "=", "commute_map", ",", "skip", "=", "skip", ",", "stride", "=", "stride", ",", "weights", "=", "weights", ",", "reversible", "=", "reversible", ",", "ncov_max", "=", "ncov_max", ")", "if", "data", "is", "not", "None", ":", "res", ".", "estimate", "(", "data", ",", "chunksize", "=", "cs", ")", "else", ":", "res", ".", "chunksize", "=", "cs", "return", "res" ]
r""" Time-lagged independent component analysis (TICA). TICA is a linear transformation method. In contrast to PCA, which finds coordinates of maximal variance, TICA finds coordinates of maximal autocorrelation at the given lag time. Therefore, TICA is useful in order to find the *slow* components in a dataset and thus an excellent choice to transform molecular dynamics data before clustering data for the construction of a Markov model. When the input data is the result of a Markov process (such as thermostatted molecular dynamics), TICA finds in fact an approximation to the eigenfunctions and eigenvalues of the underlying Markov operator [1]_. It estimates a TICA transformation from *data*. When input data is given as an argument, the estimation will be carried out straight away, and the resulting object can be used to obtain eigenvalues, eigenvectors or project input data onto the slowest TICA components. If no data is given, this object is an empty estimator and can be put into a :func:`pipeline` in order to use TICA in the streaming mode. Parameters ---------- data : ndarray (T, d) or list of ndarray (T_i, d) or a reader created by source function array with the data, if available. When given, the TICA transformation is immediately computed and can be used to transform data. lag : int, optional, default = 10 the lag time, in multiples of the input time step dim : int, optional, default -1 the number of dimensions (independent components) to project onto. A call to the :func:`map <pyemma.coordinates.transform.TICA.map>` function reduces the d-dimensional input to only dim dimensions such that the data preserves the maximum possible autocorrelation amongst dim-dimensional linear projections. -1 means all numerically available dimensions will be used unless reduced by var_cutoff. Setting dim to a positive value is exclusive with var_cutoff. var_cutoff : float in the range [0,1], optional, default 0.95 Determines the number of output dimensions by including dimensions until their cumulative kinetic variance exceeds the fraction subspace_variance. var_cutoff=1.0 means all numerically available dimensions (see epsilon) will be used, unless set by dim. Setting var_cutoff smaller than 1.0 is exclusive with dim kinetic_map : bool, optional, default True Eigenvectors will be scaled by eigenvalues. As a result, Euclidean distances in the transformed data approximate kinetic distances [4]_. This is a good choice when the data is further processed by clustering. commute_map : bool, optional, default False Eigenvector_i will be scaled by sqrt(timescale_i / 2). As a result, Euclidean distances in the transformed data will approximate commute distances [5]_. stride : int, optional, default = 1 If set to 1, all input data will be used for estimation. Note that this could cause this calculation to be very slow for large data sets. Since molecular dynamics data is usually correlated at short timescales, it is often sufficient to estimate transformations at a longer stride. Note that the stride option in the get_output() function of the returned object is independent, so you can parametrize at a long stride, and still map all frames through the transformer. weights : optional, default="empirical" Re-weighting strategy to be used in order to compute equilibrium covariances from non-equilibrium data. * "empirical": no re-weighting * "koopman": use re-weighting procedure from [6]_ * weights: An object that allows to compute re-weighting factors. 
It must possess a method weights(X) that accepts a trajectory X (np.ndarray(T, n)) and returns a vector of re-weighting factors (np.ndarray(T,)). remove_mean: bool, optional, default True remove mean during covariance estimation. Should not be turned off. skip : int, default=0 skip the first initial n frames per trajectory. reversible: bool, default=True symmetrize correlation matrices C_0, C_{\tau}. ncov_max : int, default=infinity limit the memory usage of the algorithm from [7]_ to an amount that corresponds to ncov_max additional copies of each correlation matrix chunksize: int, default=None Number of data frames to process at once. Choose a higher value here, to optimize thread usage and gain processing speed. If None is passed, use the default value of the underlying reader/data source. Choose zero to disable chunking at all. Returns ------- tica : a :class:`TICA <pyemma.coordinates.transform.TICA>` transformation object Object for time-lagged independent component (TICA) analysis. it contains TICA eigenvalues and eigenvectors, and the projection of input data to the dominant TICA Notes ----- Given a sequence of multivariate data :math:`X_t`, it computes the mean-free covariance and time-lagged covariance matrix: .. math:: C_0 &= (X_t - \mu)^T \mathrm{diag}(w) (X_t - \mu) \\ C_{\tau} &= (X_t - \mu)^T \mathrm{diag}(w) (X_t + \tau - \mu) where w is a vector of weights for each time step. By default, these weights are all equal to one, but different weights are possible, like the re-weighting to equilibrium described in [6]_. Subsequently, the eigenvalue problem .. math:: C_{\tau} r_i = C_0 \lambda_i r_i, is solved,where :math:`r_i` are the independent components and :math:`\lambda_i` are their respective normalized time-autocorrelations. The eigenvalues are related to the relaxation timescale by .. math:: t_i = -\frac{\tau}{\ln |\lambda_i|}. When used as a dimension reduction method, the input data is projected onto the dominant independent components. TICA was originally introduced for signal processing in [2]_. It was introduced to molecular dynamics and as a method for the construction of Markov models in [1]_ and [3]_. It was shown in [1]_ that when applied to molecular dynamics data, TICA is an approximation to the eigenvalues and eigenvectors of the true underlying dynamics. Examples -------- Invoke TICA transformation with a given lag time and output dimension: >>> import numpy as np >>> from pyemma.coordinates import tica >>> data = np.random.random((100,3)) >>> projected_data = tica(data, lag=2, dim=1).get_output()[0] For a brief explaination why TICA outperforms PCA to extract a good reaction coordinate have a look `here <http://docs.markovmodel.org/lecture_tica.html#Example:-TICA-versus-PCA-in-a-stretched-double-well-potential>`_. See also -------- :class:`TICA <pyemma.coordinates.transform.TICA>` : tica object :func:`pca <pyemma.coordinates.pca>` : for principal component analysis .. autoclass:: pyemma.coordinates.transform.tica.TICA :members: :undoc-members: .. rubric:: Methods .. autoautosummary:: pyemma.coordinates.transform.tica.TICA :methods: .. rubric:: Attributes .. autoautosummary:: pyemma.coordinates.transform.tica.TICA :attributes: References ---------- .. [1] Perez-Hernandez G, F Paul, T Giorgino, G De Fabritiis and F Noe. 2013. Identification of slow molecular order parameters for Markov model construction J. Chem. Phys. 139, 015102. doi:10.1063/1.4811489 .. [2] L. Molgedey and H. G. Schuster. 1994. 
Separation of a mixture of independent signals using time delayed correlations Phys. Rev. Lett. 72, 3634. .. [3] Schwantes C, V S Pande. 2013. Improvements in Markov State Model Construction Reveal Many Non-Native Interactions in the Folding of NTL9 J. Chem. Theory. Comput. 9, 2000-2009. doi:10.1021/ct300878a .. [4] Noe, F. and Clementi, C. 2015. Kinetic distance and kinetic maps from molecular dynamics simulation. J. Chem. Theory. Comput. doi:10.1021/acs.jctc.5b00553 .. [5] Noe, F., Banisch, R., Clementi, C. 2016. Commute maps: separating slowly-mixing molecular configurations for kinetic modeling. J. Chem. Theory. Comput. doi:10.1021/acs.jctc.6b00762 .. [6] Wu, H., Nueske, F., Paul, F., Klus, S., Koltai, P., and Noe, F. 2016. Bias reduced variational approximation of molecular kinetics from short off-equilibrium simulations. J. Chem. Phys. (submitted), https://arxiv.org/abs/1610.06773. .. [7] Chan, T. F., Golub G. H., LeVeque R. J. 1979. Updating formulae and pairwiese algorithms for computing sample variances. Technical Report STAN-CS-79-773, Department of Computer Science, Stanford University.
[ "r", "Time", "-", "lagged", "independent", "component", "analysis", "(", "TICA", ")", "." ]
python
train
47.958333
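A minimal usage sketch for the API above; the call signatures come straight from the docstring, the random trajectories are placeholder data, and the only attribute accessed on the returned estimator is the get_output() the docstring itself promises.

import numpy as np
from pyemma.coordinates import tica

# Three mock trajectories standing in for real molecular dynamics features.
trajs = [np.random.random((1000, 5)) for _ in range(3)]

# Data given: estimation runs immediately; get_output() yields one (T_i, dim)
# projection per input trajectory.
t = tica(trajs, lag=5, dim=2)
projections = t.get_output()

# Koopman re-weighting requires data and reversible=True, per the checks above.
t_koop = tica(trajs, lag=5, var_cutoff=0.9, weights='koopman')

# No data: an empty estimator, ready to be put into a pipeline (streaming mode).
t_stream = tica(lag=5)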
kibitzr/kibitzr
kibitzr/fetcher/browser/fetcher.py
https://github.com/kibitzr/kibitzr/blob/749da312488f1dda1ed1093cf4c95aaac0a604f7/kibitzr/fetcher/browser/fetcher.py#L102-L110
def _run_automation(self, conf):
    """
    1. Fill form.
    2. Run scenario.
    3. Delay.
    """
    self._fill_form(self._find_form(conf))
    self._run_scenario(conf)
    self._delay(conf)
[ "def", "_run_automation", "(", "self", ",", "conf", ")", ":", "self", ".", "_fill_form", "(", "self", ".", "_find_form", "(", "conf", ")", ")", "self", ".", "_run_scenario", "(", "conf", ")", "self", ".", "_delay", "(", "conf", ")" ]
1. Fill form. 2. Run scenario. 3. Delay.
[ "1", ".", "Fill", "form", ".", "2", ".", "Run", "scenario", ".", "3", ".", "Delay", "." ]
python
train
24.333333
michael-lazar/rtv
rtv/packages/praw/__init__.py
https://github.com/michael-lazar/rtv/blob/ccef2af042566ad384977028cf0bde01bc524dda/rtv/packages/praw/__init__.py#L1192-L1201
def get_traffic(self, subreddit):
    """Return the json dictionary containing traffic stats for a subreddit.

    :param subreddit: The subreddit whose /about/traffic page we will
        collect.

    """
    url = self.config['subreddit_traffic'].format(
        subreddit=six.text_type(subreddit))
    return self.request_json(url)
[ "def", "get_traffic", "(", "self", ",", "subreddit", ")", ":", "url", "=", "self", ".", "config", "[", "'subreddit_traffic'", "]", ".", "format", "(", "subreddit", "=", "six", ".", "text_type", "(", "subreddit", ")", ")", "return", "self", ".", "request_json", "(", "url", ")" ]
Return the json dictionary containing traffic stats for a subreddit. :param subreddit: The subreddit whose /about/traffic page we will collect.
[ "Return", "the", "json", "dictionary", "containing", "traffic", "stats", "for", "a", "subreddit", "." ]
python
train
35.4
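A hedged call-site sketch: build_reddit_session() is a hypothetical placeholder for however this praw fork constructs an authenticated client, and the 'hour'/'day'/'month' keys are an assumption about reddit's /about/traffic payload, not something the snippet guarantees.

r = build_reddit_session()         # hypothetical constructor, not shown in the row
traffic = r.get_traffic('python')  # dict parsed from /r/python/about/traffic
for granularity in ('hour', 'day', 'month'):  # assumed payload keys
    print(granularity, traffic.get(granularity, [])[:2])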
Loudr/asana-hub
asana_hub/action.py
https://github.com/Loudr/asana-hub/blob/af996ce890ed23d8ede5bf68dcd318e3438829cb/asana_hub/action.py#L5-L10
def get_subclasses(c):
    """Gets the subclasses of a class."""
    subclasses = c.__subclasses__()
    for d in list(subclasses):
        subclasses.extend(get_subclasses(d))
    return subclasses
[ "def", "get_subclasses", "(", "c", ")", ":", "subclasses", "=", "c", ".", "__subclasses__", "(", ")", "for", "d", "in", "list", "(", "subclasses", ")", ":", "subclasses", ".", "extend", "(", "get_subclasses", "(", "d", ")", ")", "return", "subclasses" ]
Gets the subclasses of a class.
[ "Gets", "the", "subclasses", "of", "a", "class", "." ]
python
test
32.166667
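A self-contained demo of the recursion above: __subclasses__() only returns direct children, so the loop iterates over a snapshot of those children while extending the result with each child's own subtree, yielding the full inheritance tree.

def get_subclasses(c):
    """Gets the subclasses of a class."""
    subclasses = c.__subclasses__()           # direct children only
    for d in list(subclasses):                # snapshot; result grows during the loop
        subclasses.extend(get_subclasses(d))  # pull in grandchildren recursively
    return subclasses

class Base: pass
class A(Base): pass
class B(Base): pass
class C(A): pass

print([k.__name__ for k in get_subclasses(Base)])  # ['A', 'B', 'C']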
mardix/Juice
juice/core.py
https://github.com/mardix/Juice/blob/7afa8d4238868235dfcdae82272bd77958dd416a/juice/core.py#L600-L624
def meta_tags(cls, **kwargs):
    """
    Meta allows you to add meta data to site

    :params **kwargs:

    meta keys we're expecting:
        title (str)
        description (str)
        url (str) (Will pick it up by itself if not set)
        image (str)
        site_name (str) (but can pick it up from config file)
        object_type (str)
        keywords (list)
        locale (str)
        card (str)

        **Boolean By default these keys are True
        use_opengraph
        use_twitter
        use_googleplus
    """
    page_meta = cls._global.get("__META__", {})
    page_meta.update(**kwargs)
    cls.g(__META__=page_meta)
[ "def", "meta_tags", "(", "cls", ",", "*", "*", "kwargs", ")", ":", "page_meta", "=", "cls", ".", "_global", ".", "get", "(", "\"__META__\"", ",", "{", "}", ")", "page_meta", ".", "update", "(", "*", "*", "kwargs", ")", "cls", ".", "g", "(", "__META__", "=", "page_meta", ")" ]
Meta allows you to add meta data to site

:params **kwargs:

meta keys we're expecting:
    title (str)
    description (str)
    url (str) (Will pick it up by itself if not set)
    image (str)
    site_name (str) (but can pick it up from config file)
    object_type (str)
    keywords (list)
    locale (str)
    card (str)

    **Boolean By default these keys are True
    use_opengraph
    use_twitter
    use_googleplus
[ "Meta", "allows", "you", "to", "add", "meta", "data", "to", "site", ":", "params", "**", "kwargs", ":" ]
python
train
28.16
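A hypothetical call site, since the row only shows the classmethod itself: Page stands in for whatever Juice view class exposes meta_tags, and the keyword names mirror the docstring's key list.

class Page(View):  # `View` is a placeholder for the real Juice base class
    def get(self):
        # Accumulates into the shared __META__ dict via cls.g(), as above.
        self.meta_tags(
            title="Home",
            description="Landing page",
            keywords=["juice", "flask"],
            use_twitter=False,  # the boolean switches default to True
        )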
saltstack/salt
salt/modules/publish.py
https://github.com/saltstack/salt/blob/e8541fd6e744ab0df786c0f76102e41631f45d46/salt/modules/publish.py#L27-L38
def _parse_args(arg):
    '''
    yamlify `arg` and ensure its outermost datatype is a list
    '''
    yaml_args = salt.utils.args.yamlify_arg(arg)

    if yaml_args is None:
        return []
    elif not isinstance(yaml_args, list):
        return [yaml_args]
    else:
        return yaml_args
[ "def", "_parse_args", "(", "arg", ")", ":", "yaml_args", "=", "salt", ".", "utils", ".", "args", ".", "yamlify_arg", "(", "arg", ")", "if", "yaml_args", "is", "None", ":", "return", "[", "]", "elif", "not", "isinstance", "(", "yaml_args", ",", "list", ")", ":", "return", "[", "yaml_args", "]", "else", ":", "return", "yaml_args" ]
yamlify `arg` and ensure its outermost datatype is a list
[ "yamlify", "arg", "and", "ensure", "it", "s", "outermost", "datatype", "is", "a", "list" ]
python
train
23.916667
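A stand-in demonstration of the list-normalisation contract. yaml.safe_load plays the role of salt.utils.args.yamlify_arg here, which is an approximation: the real helper performs extra Salt-specific coercion.

import yaml  # PyYAML

def parse_args_demo(arg):
    yaml_args = yaml.safe_load(arg) if arg else None
    if yaml_args is None:
        return []
    elif not isinstance(yaml_args, list):
        return [yaml_args]
    else:
        return yaml_args

print(parse_args_demo(''))           # []
print(parse_args_demo('foo'))        # ['foo']
print(parse_args_demo('[1, 2, 3]'))  # [1, 2, 3]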
csparpa/pyowm
pyowm/uvindexapi30/parsers.py
https://github.com/csparpa/pyowm/blob/cdd59eb72f32f7238624ceef9b2e2329a5ebd472/pyowm/uvindexapi30/parsers.py#L77-L96
def parse_JSON(self, JSON_string):
    """
    Parses a list of *UVIndex* instances out of raw JSON data. Only
    certain properties of the data are used: if these properties are not
    found or cannot be parsed, an error is issued.

    :param JSON_string: a raw JSON string
    :type JSON_string: str
    :returns: a list of *UVIndex* instances or an empty list if no data is
        available
    :raises: *ParseResponseError* if it is impossible to find or parse the
        data needed to build the result, *APIResponseError* if the JSON
        string embeds an HTTP status error

    """
    if JSON_string is None:
        raise parse_response_error.ParseResponseError('JSON data is None')
    d = json.loads(JSON_string)
    uvindex_parser = UVIndexParser()
    return [uvindex_parser.parse_JSON(json.dumps(item)) for item in d]
[ "def", "parse_JSON", "(", "self", ",", "JSON_string", ")", ":", "if", "JSON_string", "is", "None", ":", "raise", "parse_response_error", ".", "ParseResponseError", "(", "'JSON data is None'", ")", "d", "=", "json", ".", "loads", "(", "JSON_string", ")", "uvindex_parser", "=", "UVIndexParser", "(", ")", "return", "[", "uvindex_parser", ".", "parse_JSON", "(", "json", ".", "dumps", "(", "item", ")", ")", "for", "item", "in", "d", "]" ]
Parses a list of *UVIndex* instances out of raw JSON data. Only certain properties of the data are used: if these properties are not found or cannot be parsed, an error is issued.

:param JSON_string: a raw JSON string
:type JSON_string: str
:returns: a list of *UVIndex* instances or an empty list if no data is available
:raises: *ParseResponseError* if it is impossible to find or parse the data needed to build the result, *APIResponseError* if the JSON string embeds an HTTP status error
[ "Parses", "a", "list", "of", "*", "UVIndex", "*", "instances", "out", "of", "raw", "JSON", "data", ".", "Only", "certain", "properties", "of", "the", "data", "are", "used", ":", "if", "these", "properties", "are", "not", "found", "or", "cannot", "be", "parsed", "an", "error", "is", "issued", "." ]
python
train
44.4
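A hedged usage sketch. The enclosing class isn't named in the row, so UVIndexListParser is assumed from the module context, and the payload fields (lat, lon, date, value) are a guess at OWM's UV endpoint shape rather than a documented contract.

import json

payload = json.dumps([
    {"lat": 43.75, "lon": 8.25, "date": 1474977600, "value": 4.58},  # assumed fields
])
uv_items = UVIndexListParser().parse_JSON(payload)  # -> list of UVIndex instances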
cimm-kzn/CGRtools
CGRtools/algorithms/compose.py
https://github.com/cimm-kzn/CGRtools/blob/15a19b04f6e4e1d0dab8e0d32a0877c7f7d70f34/CGRtools/algorithms/compose.py#L182-L201
def decompose(self):
    """ decompose CGR to a pair of Molecules which represent the reactant
    and product states of the reaction

    :return: tuple of two molecules
    """
    mc = self._get_subclass('MoleculeContainer')
    reactants = mc()
    products = mc()

    for n, atom in self.atoms():
        reactants.add_atom(atom._reactant, n)
        products.add_atom(atom._product, n)

    for n, m, bond in self.bonds():
        if bond._reactant is not None:
            reactants.add_bond(n, m, bond._reactant)
        if bond._product is not None:
            products.add_bond(n, m, bond._product)
    return reactants, products
[ "def", "decompose", "(", "self", ")", ":", "mc", "=", "self", ".", "_get_subclass", "(", "'MoleculeContainer'", ")", "reactants", "=", "mc", "(", ")", "products", "=", "mc", "(", ")", "for", "n", ",", "atom", "in", "self", ".", "atoms", "(", ")", ":", "reactants", ".", "add_atom", "(", "atom", ".", "_reactant", ",", "n", ")", "products", ".", "add_atom", "(", "atom", ".", "_product", ",", "n", ")", "for", "n", ",", "m", ",", "bond", "in", "self", ".", "bonds", "(", ")", ":", "if", "bond", ".", "_reactant", "is", "not", "None", ":", "reactants", ".", "add_bond", "(", "n", ",", "m", ",", "bond", ".", "_reactant", ")", "if", "bond", ".", "_product", "is", "not", "None", ":", "products", ".", "add_bond", "(", "n", ",", "m", ",", "bond", ".", "_product", ")", "return", "reactants", ",", "products" ]
decompose CGR to a pair of Molecules which represent the reactant and product states of the reaction

:return: tuple of two molecules
[ "decompose", "CGR", "to", "pair", "of", "Molecules", "which", "represents", "reactants", "and", "products", "state", "of", "reaction" ]
python
train
33.95
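A short hedged sketch of the round trip: cgr is assumed to be a CGRContainer produced elsewhere (for example by composing the two sides of a reaction); decompose() then recovers the two molecule states.

reactants, products = cgr.decompose()  # `cgr` built elsewhere; not shown in this row
# Both results are MoleculeContainer instances sharing the CGR's atom numbering,
# since add_atom/add_bond above reuse the same n (and n, m) indices.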
rodluger/everest
everest/basecamp.py
https://github.com/rodluger/everest/blob/6779591f9f8b3556847e2fbf761bdfac7520eaea/everest/basecamp.py#L587-L600
def apply_mask(self, x=None):
    '''
    Returns the outlier mask, an array of indices corresponding to the
    non-outliers.

    :param numpy.ndarray x: If specified, returns the masked version of
        :py:obj:`x` instead. Default :py:obj:`None`

    '''
    if x is None:
        return np.delete(np.arange(len(self.time)), self.mask)
    else:
        return np.delete(x, self.mask, axis=0)
[ "def", "apply_mask", "(", "self", ",", "x", "=", "None", ")", ":", "if", "x", "is", "None", ":", "return", "np", ".", "delete", "(", "np", ".", "arange", "(", "len", "(", "self", ".", "time", ")", ")", ",", "self", ".", "mask", ")", "else", ":", "return", "np", ".", "delete", "(", "x", ",", "self", ".", "mask", ",", "axis", "=", "0", ")" ]
Returns the outlier mask, an array of indices corresponding to the non-outliers.

:param numpy.ndarray x: If specified, returns the masked version of :py:obj:`x` instead. Default :py:obj:`None`
[ "Returns", "the", "outlier", "mask", "an", "array", "of", "indices", "corresponding", "to", "the", "non", "-", "outliers", "." ]
python
train
30.785714
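The masking logic is plain numpy, so it can be illustrated standalone: mask holds the outlier indices, and the two branches return either the surviving indices or x with those rows dropped.

import numpy as np

time = np.arange(6)                 # stand-in for self.time
mask = np.array([1, 4])             # stand-in for self.mask (outlier indices)
flux = np.arange(12).reshape(6, 2)  # some data aligned with `time`

print(np.delete(np.arange(len(time)), mask))  # [0 2 3 5] -> non-outlier indices
print(np.delete(flux, mask, axis=0))          # flux with rows 1 and 4 removed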
quantopian/pyfolio
pyfolio/perf_attrib.py
https://github.com/quantopian/pyfolio/blob/712716ab0cdebbec9fabb25eea3bf40e4354749d/pyfolio/perf_attrib.py#L471-L501
def plot_risk_exposures(exposures, ax=None,
                        title='Daily risk factor exposures'):
    """
    Parameters
    ----------
    exposures : pd.DataFrame
        df indexed by datetime, with factors as columns
        - Example:
                        momentum   reversal
            dt
            2017-01-01 -0.238655   0.077123
            2017-01-02  0.821872   1.520515

    ax : matplotlib.axes.Axes
        axes on which plots are made. if None, current axes will be used

    Returns
    -------
    ax : matplotlib.axes.Axes
    """
    if ax is None:
        ax = plt.gca()

    for col in exposures:
        ax.plot(exposures[col])

    configure_legend(ax, change_colors=True)
    ax.set_ylabel('Factor exposures')
    ax.set_title(title)

    return ax
[ "def", "plot_risk_exposures", "(", "exposures", ",", "ax", "=", "None", ",", "title", "=", "'Daily risk factor exposures'", ")", ":", "if", "ax", "is", "None", ":", "ax", "=", "plt", ".", "gca", "(", ")", "for", "col", "in", "exposures", ":", "ax", ".", "plot", "(", "exposures", "[", "col", "]", ")", "configure_legend", "(", "ax", ",", "change_colors", "=", "True", ")", "ax", ".", "set_ylabel", "(", "'Factor exposures'", ")", "ax", ".", "set_title", "(", "title", ")", "return", "ax" ]
Parameters
----------
exposures : pd.DataFrame
    df indexed by datetime, with factors as columns
    - Example:
                    momentum   reversal
        dt
        2017-01-01 -0.238655   0.077123
        2017-01-02  0.821872   1.520515

ax : matplotlib.axes.Axes
    axes on which plots are made. if None, current axes will be used

Returns
-------
ax : matplotlib.axes.Axes
[ "Parameters", "----------", "exposures", ":", "pd", ".", "DataFrame", "df", "indexed", "by", "datetime", "with", "factors", "as", "columns", "-", "Example", ":", "momentum", "reversal", "dt", "2017", "-", "01", "-", "01", "-", "0", ".", "238655", "0", ".", "077123", "2017", "-", "01", "-", "02", "0", ".", "821872", "1", ".", "520515" ]
python
valid
24.322581
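A usage sketch that rebuilds the docstring's example frame and passes it in; the import path follows the repo path in this row, and plt.show() assumes an interactive matplotlib backend.

import pandas as pd
import matplotlib.pyplot as plt
from pyfolio.perf_attrib import plot_risk_exposures

exposures = pd.DataFrame(
    {"momentum": [-0.238655, 0.821872], "reversal": [0.077123, 1.520515]},
    index=pd.to_datetime(["2017-01-01", "2017-01-02"]),
)
ax = plot_risk_exposures(exposures)  # one line per factor column
plt.show()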
wbond/oscrypto
oscrypto/_openssl/asymmetric.py
https://github.com/wbond/oscrypto/blob/af778bf1c88bf6c4a7342f5353b130686a5bbe1c/oscrypto/_openssl/asymmetric.py#L552-L588
def load_certificate(source):
    """
    Loads an x509 certificate into a Certificate object

    :param source:
        A byte string of file contents, a unicode string filename or an
        asn1crypto.x509.Certificate object

    :raises:
        ValueError - when any of the parameters contain an invalid value
        TypeError - when any of the parameters are of the wrong type
        OSError - when an error is returned by the OS crypto library

    :return:
        A Certificate object
    """
    if isinstance(source, asn1x509.Certificate):
        certificate = source

    elif isinstance(source, byte_cls):
        certificate = parse_certificate(source)

    elif isinstance(source, str_cls):
        with open(source, 'rb') as f:
            certificate = parse_certificate(f.read())

    else:
        raise TypeError(pretty_message(
            '''
            source must be a byte string, unicode string or
            asn1crypto.x509.Certificate object, not %s
            ''',
            type_name(source)
        ))

    return _load_x509(certificate)
[ "def", "load_certificate", "(", "source", ")", ":", "if", "isinstance", "(", "source", ",", "asn1x509", ".", "Certificate", ")", ":", "certificate", "=", "source", "elif", "isinstance", "(", "source", ",", "byte_cls", ")", ":", "certificate", "=", "parse_certificate", "(", "source", ")", "elif", "isinstance", "(", "source", ",", "str_cls", ")", ":", "with", "open", "(", "source", ",", "'rb'", ")", "as", "f", ":", "certificate", "=", "parse_certificate", "(", "f", ".", "read", "(", ")", ")", "else", ":", "raise", "TypeError", "(", "pretty_message", "(", "'''\n source must be a byte string, unicode string or\n asn1crypto.x509.Certificate object, not %s\n '''", ",", "type_name", "(", "source", ")", ")", ")", "return", "_load_x509", "(", "certificate", ")" ]
Loads an x509 certificate into a Certificate object

:param source:
    A byte string of file contents, a unicode string filename or an asn1crypto.x509.Certificate object

:raises:
    ValueError - when any of the parameters contain an invalid value
    TypeError - when any of the parameters are of the wrong type
    OSError - when an error is returned by the OS crypto library

:return:
    A Certificate object
[ "Loads", "an", "x509", "certificate", "into", "a", "Certificate", "object" ]
python
valid
28.162162
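A sketch of the three accepted source types. Importing from oscrypto.asymmetric assumes this private _openssl module is re-exported through the public package, and the file paths are placeholders.

from oscrypto import asymmetric

cert = asymmetric.load_certificate('/path/to/server.crt')  # unicode filename
with open('/path/to/server.crt', 'rb') as f:
    cert = asymmetric.load_certificate(f.read())           # byte string contents
# An asn1crypto.x509.Certificate instance would be passed through unchanged.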
rfverbruggen/rachiopy
rachiopy/zone.py
https://github.com/rfverbruggen/rachiopy/blob/c91abc9984f0f453e60fa905285c1b640c3390ae/rachiopy/zone.py#L17-L21
def startMultiple(self, zones):
    """Start multiple zones."""
    path = 'zone/start_multiple'
    payload = {'zones': zones}
    return self.rachio.put(path, payload)
[ "def", "startMultiple", "(", "self", ",", "zones", ")", ":", "path", "=", "'zone/start_multiple'", "payload", "=", "{", "'zones'", ":", "zones", "}", "return", "self", ".", "rachio", ".", "put", "(", "path", ",", "payload", ")" ]
Start multiple zones.
[ "Start", "multiple", "zones", "." ]
python
train
36.2
ModisWorks/modis
modis/discord_modis/modules/manager/api_manager.py
https://github.com/ModisWorks/modis/blob/1f1225c9841835ec1d1831fc196306527567db8b/modis/discord_modis/modules/manager/api_manager.py#L14-L71
async def activate_module(channel, module_name, activate):
    """
    Changes a module's activated/deactivated state for a server

    Args:
        channel: The channel to send the message to
        module_name: The name of the module to change state for
        activate: The activated/deactivated state of the module
    """
    data = datatools.get_data()
    server_id = channel.server.id

    _dir = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
    _dir_modules = "{}/../".format(_dir)
    if not os.path.isfile("{}/{}/_data.py".format(_dir_modules, module_name)):
        await client.send_typing(channel)
        embed = ui_embed.error(channel, "Error", "No module found named '{}'".format(module_name))
        await embed.send()
        return

    try:
        import_name = ".discord_modis.modules.{}.{}".format(module_name, "_data")
        module_data = importlib.import_module(import_name, "modis")

        # Don't try and deactivate this module (not that it would do anything)
        if module_data.modulename == _data.modulename:
            await client.send_typing(channel)
            embed = ui_embed.error(channel, "Error", "I'm sorry, Dave. I'm afraid I can't do that.")
            await embed.send()
            return

        # This /should/ never happen if everything goes well
        if module_data.modulename not in data["discord"]["servers"][server_id]:
            await client.send_typing(channel)
            embed = ui_embed.error(channel, "Error", "No data found for module '{}'".format(module_data.modulename))
            await embed.send()
            return

        # Modify the module
        if "activated" in data["discord"]["servers"][server_id][module_data.modulename]:
            data["discord"]["servers"][server_id][module_data.modulename]["activated"] = activate

            # Write the data
            datatools.write_data(data)

            await client.send_typing(channel)
            embed = ui_embed.modify_module(channel, module_data.modulename, activate)
            await embed.send()
            return
        else:
            await client.send_typing(channel)
            embed = ui_embed.error(channel, "Error", "Can't deactivate module '{}'".format(module_data.modulename))
            await embed.send()
            return
    except Exception as e:
        logger.error("Could not modify module {}".format(module_name))
        logger.exception(e)
[ "async", "def", "activate_module", "(", "channel", ",", "module_name", ",", "activate", ")", ":", "data", "=", "datatools", ".", "get_data", "(", ")", "server_id", "=", "channel", ".", "server", ".", "id", "_dir", "=", "os", ".", "path", ".", "realpath", "(", "os", ".", "path", ".", "join", "(", "os", ".", "getcwd", "(", ")", ",", "os", ".", "path", ".", "dirname", "(", "__file__", ")", ")", ")", "_dir_modules", "=", "\"{}/../\"", ".", "format", "(", "_dir", ")", "if", "not", "os", ".", "path", ".", "isfile", "(", "\"{}/{}/_data.py\"", ".", "format", "(", "_dir_modules", ",", "module_name", ")", ")", ":", "await", "client", ".", "send_typing", "(", "channel", ")", "embed", "=", "ui_embed", ".", "error", "(", "channel", ",", "\"Error\"", ",", "\"No module found named '{}'\"", ".", "format", "(", "module_name", ")", ")", "await", "embed", ".", "send", "(", ")", "return", "try", ":", "import_name", "=", "\".discord_modis.modules.{}.{}\"", ".", "format", "(", "module_name", ",", "\"_data\"", ")", "module_data", "=", "importlib", ".", "import_module", "(", "import_name", ",", "\"modis\"", ")", "# Don't try and deactivate this module (not that it would do anything)", "if", "module_data", ".", "modulename", "==", "_data", ".", "modulename", ":", "await", "client", ".", "send_typing", "(", "channel", ")", "embed", "=", "ui_embed", ".", "error", "(", "channel", ",", "\"Error\"", ",", "\"I'm sorry, Dave. I'm afraid I can't do that.\"", ")", "await", "embed", ".", "send", "(", ")", "return", "# This /should/ never happen if everything goes well", "if", "module_data", ".", "modulename", "not", "in", "data", "[", "\"discord\"", "]", "[", "\"servers\"", "]", "[", "server_id", "]", ":", "await", "client", ".", "send_typing", "(", "channel", ")", "embed", "=", "ui_embed", ".", "error", "(", "channel", ",", "\"Error\"", ",", "\"No data found for module '{}'\"", ".", "format", "(", "module_data", ".", "modulename", ")", ")", "await", "embed", ".", "send", "(", ")", "return", "# Modify the module", "if", "\"activated\"", "in", "data", "[", "\"discord\"", "]", "[", "\"servers\"", "]", "[", "server_id", "]", "[", "module_data", ".", "modulename", "]", ":", "data", "[", "\"discord\"", "]", "[", "\"servers\"", "]", "[", "server_id", "]", "[", "module_data", ".", "modulename", "]", "[", "\"activated\"", "]", "=", "activate", "# Write the data", "datatools", ".", "write_data", "(", "data", ")", "await", "client", ".", "send_typing", "(", "channel", ")", "embed", "=", "ui_embed", ".", "modify_module", "(", "channel", ",", "module_data", ".", "modulename", ",", "activate", ")", "await", "embed", ".", "send", "(", ")", "return", "else", ":", "await", "client", ".", "send_typing", "(", "channel", ")", "embed", "=", "ui_embed", ".", "error", "(", "channel", ",", "\"Error\"", ",", "\"Can't deactivate module '{}'\"", ".", "format", "(", "module_data", ".", "modulename", ")", ")", "await", "embed", ".", "send", "(", ")", "return", "except", "Exception", "as", "e", ":", "logger", ".", "error", "(", "\"Could not modify module {}\"", ".", "format", "(", "module_name", ")", ")", "logger", ".", "exception", "(", "e", ")" ]
Changes a module's activated/deactivated state for a server

Args:
    channel: The channel to send the message to
    module_name: The name of the module to change state for
    activate: The activated/deactivated state of the module
[ "Changes", "a", "modules", "activated", "/", "deactivated", "state", "for", "a", "server" ]
python
train
41.810345
Azure/azure-uamqp-python
uamqp/authentication/cbs_auth_async.py
https://github.com/Azure/azure-uamqp-python/blob/b67e4fcaf2e8a337636947523570239c10a58ae2/uamqp/authentication/cbs_auth_async.py#L56-L64
async def close_authenticator_async(self):
    """Close the CBS auth channel and session asynchronously."""
    _logger.info("Shutting down CBS session on connection: %r.", self._connection.container_id)
    try:
        self._cbs_auth.destroy()
        _logger.info("Auth closed, destroying session on connection: %r.", self._connection.container_id)
        await self._session.destroy_async()
    finally:
        _logger.info("Finished shutting down CBS session on connection: %r.", self._connection.container_id)
[ "async", "def", "close_authenticator_async", "(", "self", ")", ":", "_logger", ".", "info", "(", "\"Shutting down CBS session on connection: %r.\"", ",", "self", ".", "_connection", ".", "container_id", ")", "try", ":", "self", ".", "_cbs_auth", ".", "destroy", "(", ")", "_logger", ".", "info", "(", "\"Auth closed, destroying session on connection: %r.\"", ",", "self", ".", "_connection", ".", "container_id", ")", "await", "self", ".", "_session", ".", "destroy_async", "(", ")", "finally", ":", "_logger", ".", "info", "(", "\"Finished shutting down CBS session on connection: %r.\"", ",", "self", ".", "_connection", ".", "container_id", ")" ]
Close the CBS auth channel and session asynchronously.
[ "Close", "the", "CBS", "auth", "channel", "and", "session", "asynchronously", "." ]
python
train
60.111111
saltstack/salt
salt/modules/redismod.py
https://github.com/saltstack/salt/blob/e8541fd6e744ab0df786c0f76102e41631f45d46/salt/modules/redismod.py#L194-L205
def expireat(key, timestamp, host=None, port=None, db=None, password=None):
    '''
    Set a key to expire at a given UNIX time

    CLI Example:

    .. code-block:: bash

        salt '*' redis.expireat foo 1400000000
    '''
    server = _connect(host, port, db, password)
    return server.expireat(key, timestamp)
[ "def", "expireat", "(", "key", ",", "timestamp", ",", "host", "=", "None", ",", "port", "=", "None", ",", "db", "=", "None", ",", "password", "=", "None", ")", ":", "server", "=", "_connect", "(", "host", ",", "port", ",", "db", ",", "password", ")", "return", "server", ".", "expireat", "(", "key", ",", "timestamp", ")" ]
Set a key to expire at a given UNIX time

CLI Example:

.. code-block:: bash

    salt '*' redis.expireat foo 1400000000
[ "Set", "a", "keys", "expire", "at", "given", "UNIX", "time" ]
python
train
25.333333
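The docstring gives the CLI form; a rough Python equivalent via Salt's LocalClient is sketched below. LocalClient and cmd() are standard Salt client APIs, though actually running this requires a master and matching minions.

import salt.client

local = salt.client.LocalClient()
# Same call as: salt '*' redis.expireat foo 1400000000
result = local.cmd('*', 'redis.expireat', ['foo', 1400000000])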
peterdemin/pip-compile-multi
pipcompilemulti/dependency.py
https://github.com/peterdemin/pip-compile-multi/blob/7bd1968c424dd7ce3236885b4b3e4e28523e6915/pipcompilemulti/dependency.py#L63-L88
def serialize(self):
    """
    Render dependency back in string using:
        ~= if package is internal
        == otherwise
    """
    if self.is_vcs:
        return self.without_editable(self.line).strip()
    equal = '~=' if self.is_compatible else '=='
    package_version = '{package}{equal}{version} '.format(
        package=self.without_editable(self.package),
        version=self.version,
        equal=equal,
    )
    if self.hashes:
        hashes = self.hashes.split()
        lines = [package_version.strip()]
        lines.extend(hashes)
        if self.comment:
            lines.append(self.comment)
        return ' \\\n '.join(lines)
    else:
        return '{0}{1}'.format(
            package_version.ljust(self.COMMENT_JUSTIFICATION),
            self.comment,
        ).rstrip()
[ "def", "serialize", "(", "self", ")", ":", "if", "self", ".", "is_vcs", ":", "return", "self", ".", "without_editable", "(", "self", ".", "line", ")", ".", "strip", "(", ")", "equal", "=", "'~='", "if", "self", ".", "is_compatible", "else", "'=='", "package_version", "=", "'{package}{equal}{version} '", ".", "format", "(", "package", "=", "self", ".", "without_editable", "(", "self", ".", "package", ")", ",", "version", "=", "self", ".", "version", ",", "equal", "=", "equal", ",", ")", "if", "self", ".", "hashes", ":", "hashes", "=", "self", ".", "hashes", ".", "split", "(", ")", "lines", "=", "[", "package_version", ".", "strip", "(", ")", "]", "lines", ".", "extend", "(", "hashes", ")", "if", "self", ".", "comment", ":", "lines", ".", "append", "(", "self", ".", "comment", ")", "return", "' \\\\\\n '", ".", "join", "(", "lines", ")", "else", ":", "return", "'{0}{1}'", ".", "format", "(", "package_version", ".", "ljust", "(", "self", ".", "COMMENT_JUSTIFICATION", ")", ",", "self", ".", "comment", ",", ")", ".", "rstrip", "(", ")" ]
Render dependency back in string using:
    ~= if package is internal
    == otherwise
[ "Render", "dependency", "back", "in", "string", "using", ":", "~", "=", "if", "package", "is", "internal", "==", "otherwise" ]
python
train
34.076923
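A standalone illustration of the hashed branch, using the same join string as serialize(); the package name, hashes, and comment are made up for the demo, and the whitespace inside the join literal follows the flattened row, so the repo's actual indent may differ.

package_version = 'six==1.11.0'
hashes = ['--hash=sha256:aaaa', '--hash=sha256:bbbb']
comment = '# via requests'

lines = [package_version] + hashes + [comment]
print(' \\\n '.join(lines))
# six==1.11.0 \
#  --hash=sha256:aaaa \
#  --hash=sha256:bbbb \
#  # via requests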
mitsei/dlkit
dlkit/json_/relationship/sessions.py
https://github.com/mitsei/dlkit/blob/445f968a175d61c8d92c0f617a3c17dc1dc7c584/dlkit/json_/relationship/sessions.py#L2493-L2511
def add_child_family(self, family_id, child_id):
    """Adds a child to a family.

    arg:    family_id (osid.id.Id): the ``Id`` of a family
    arg:    child_id (osid.id.Id): the ``Id`` of the new child
    raise:  AlreadyExists - ``family_id`` is already a parent of ``child_id``
    raise:  NotFound - ``family_id`` or ``child_id`` not found
    raise:  NullArgument - ``family_id`` or ``child_id`` is ``null``
    raise:  OperationFailed - unable to complete request
    raise:  PermissionDenied - authorization failure
    *compliance: mandatory -- This method must be implemented.*

    """
    # Implemented from template for
    # osid.resource.BinHierarchyDesignSession.add_child_bin_template
    if self._catalog_session is not None:
        return self._catalog_session.add_child_catalog(catalog_id=family_id, child_id=child_id)
    return self._hierarchy_session.add_child(id_=family_id, child_id=child_id)
[ "def", "add_child_family", "(", "self", ",", "family_id", ",", "child_id", ")", ":", "# Implemented from template for", "# osid.resource.BinHierarchyDesignSession.add_child_bin_template", "if", "self", ".", "_catalog_session", "is", "not", "None", ":", "return", "self", ".", "_catalog_session", ".", "add_child_catalog", "(", "catalog_id", "=", "family_id", ",", "child_id", "=", "child_id", ")", "return", "self", ".", "_hierarchy_session", ".", "add_child", "(", "id_", "=", "family_id", ",", "child_id", "=", "child_id", ")" ]
Adds a child to a family.

arg:    family_id (osid.id.Id): the ``Id`` of a family
arg:    child_id (osid.id.Id): the ``Id`` of the new child
raise:  AlreadyExists - ``family_id`` is already a parent of ``child_id``
raise:  NotFound - ``family_id`` or ``child_id`` not found
raise:  NullArgument - ``family_id`` or ``child_id`` is ``null``
raise:  OperationFailed - unable to complete request
raise:  PermissionDenied - authorization failure

*compliance: mandatory -- This method must be implemented.*
[ "Adds", "a", "child", "to", "a", "family", "." ]
python
train
51.421053
tensorpack/tensorpack
examples/FasterRCNN/config.py
https://github.com/tensorpack/tensorpack/blob/d7a13cb74c9066bc791d7aafc3b744b60ee79a9f/examples/FasterRCNN/config.py#L214-L282
def finalize_configs(is_training):
    """
    Run some sanity checks, and populate some configs from others
    """
    _C.freeze(False)  # populate new keys now
    _C.DATA.NUM_CLASS = _C.DATA.NUM_CATEGORY + 1  # +1 background
    _C.DATA.BASEDIR = os.path.expanduser(_C.DATA.BASEDIR)
    if isinstance(_C.DATA.VAL, six.string_types):  # support single string (the typical case) as well
        _C.DATA.VAL = (_C.DATA.VAL, )

    assert _C.BACKBONE.NORM in ['FreezeBN', 'SyncBN', 'GN', 'None'], _C.BACKBONE.NORM
    if _C.BACKBONE.NORM != 'FreezeBN':
        assert not _C.BACKBONE.FREEZE_AFFINE
    assert _C.BACKBONE.FREEZE_AT in [0, 1, 2]

    _C.RPN.NUM_ANCHOR = len(_C.RPN.ANCHOR_SIZES) * len(_C.RPN.ANCHOR_RATIOS)
    assert len(_C.FPN.ANCHOR_STRIDES) == len(_C.RPN.ANCHOR_SIZES)
    # image size into the backbone has to be multiple of this number
    _C.FPN.RESOLUTION_REQUIREMENT = _C.FPN.ANCHOR_STRIDES[3]  # [3] because we build FPN with features r2,r3,r4,r5

    if _C.MODE_FPN:
        size_mult = _C.FPN.RESOLUTION_REQUIREMENT * 1.
        _C.PREPROC.MAX_SIZE = np.ceil(_C.PREPROC.MAX_SIZE / size_mult) * size_mult
        assert _C.FPN.PROPOSAL_MODE in ['Level', 'Joint']
        assert _C.FPN.FRCNN_HEAD_FUNC.endswith('_head')
        assert _C.FPN.MRCNN_HEAD_FUNC.endswith('_head')
        assert _C.FPN.NORM in ['None', 'GN']

        if _C.FPN.CASCADE:
            # the first threshold is the proposal sampling threshold
            assert _C.CASCADE.IOUS[0] == _C.FRCNN.FG_THRESH
            assert len(_C.CASCADE.BBOX_REG_WEIGHTS) == len(_C.CASCADE.IOUS)

    if is_training:
        train_scales = _C.PREPROC.TRAIN_SHORT_EDGE_SIZE
        if isinstance(train_scales, (list, tuple)) and train_scales[1] - train_scales[0] > 100:
            # don't autotune if augmentation is on
            os.environ['TF_CUDNN_USE_AUTOTUNE'] = '0'
        os.environ['TF_AUTOTUNE_THRESHOLD'] = '1'
        assert _C.TRAINER in ['horovod', 'replicated'], _C.TRAINER

        # setup NUM_GPUS
        if _C.TRAINER == 'horovod':
            import horovod.tensorflow as hvd
            ngpu = hvd.size()

            if ngpu == hvd.local_size():
                logger.warn("It's not recommended to use horovod for single-machine training. "
                            "Replicated trainer is more stable and has the same efficiency.")
        else:
            assert 'OMPI_COMM_WORLD_SIZE' not in os.environ
            ngpu = get_num_gpu()
        assert ngpu > 0, "Has to train with GPU!"
        assert ngpu % 8 == 0 or 8 % ngpu == 0, "Can only train with 1,2,4 or >=8 GPUs, but found {} GPUs".format(ngpu)
    else:
        # autotune is too slow for inference
        os.environ['TF_CUDNN_USE_AUTOTUNE'] = '0'
        ngpu = get_num_gpu()

    if _C.TRAIN.NUM_GPUS is None:
        _C.TRAIN.NUM_GPUS = ngpu
    else:
        if _C.TRAINER == 'horovod':
            assert _C.TRAIN.NUM_GPUS == ngpu
        else:
            assert _C.TRAIN.NUM_GPUS <= ngpu

    _C.freeze()
    logger.info("Config: ------------------------------------------\n" + str(_C))
[ "def", "finalize_configs", "(", "is_training", ")", ":", "_C", ".", "freeze", "(", "False", ")", "# populate new keys now", "_C", ".", "DATA", ".", "NUM_CLASS", "=", "_C", ".", "DATA", ".", "NUM_CATEGORY", "+", "1", "# +1 background", "_C", ".", "DATA", ".", "BASEDIR", "=", "os", ".", "path", ".", "expanduser", "(", "_C", ".", "DATA", ".", "BASEDIR", ")", "if", "isinstance", "(", "_C", ".", "DATA", ".", "VAL", ",", "six", ".", "string_types", ")", ":", "# support single string (the typical case) as well", "_C", ".", "DATA", ".", "VAL", "=", "(", "_C", ".", "DATA", ".", "VAL", ",", ")", "assert", "_C", ".", "BACKBONE", ".", "NORM", "in", "[", "'FreezeBN'", ",", "'SyncBN'", ",", "'GN'", ",", "'None'", "]", ",", "_C", ".", "BACKBONE", ".", "NORM", "if", "_C", ".", "BACKBONE", ".", "NORM", "!=", "'FreezeBN'", ":", "assert", "not", "_C", ".", "BACKBONE", ".", "FREEZE_AFFINE", "assert", "_C", ".", "BACKBONE", ".", "FREEZE_AT", "in", "[", "0", ",", "1", ",", "2", "]", "_C", ".", "RPN", ".", "NUM_ANCHOR", "=", "len", "(", "_C", ".", "RPN", ".", "ANCHOR_SIZES", ")", "*", "len", "(", "_C", ".", "RPN", ".", "ANCHOR_RATIOS", ")", "assert", "len", "(", "_C", ".", "FPN", ".", "ANCHOR_STRIDES", ")", "==", "len", "(", "_C", ".", "RPN", ".", "ANCHOR_SIZES", ")", "# image size into the backbone has to be multiple of this number", "_C", ".", "FPN", ".", "RESOLUTION_REQUIREMENT", "=", "_C", ".", "FPN", ".", "ANCHOR_STRIDES", "[", "3", "]", "# [3] because we build FPN with features r2,r3,r4,r5", "if", "_C", ".", "MODE_FPN", ":", "size_mult", "=", "_C", ".", "FPN", ".", "RESOLUTION_REQUIREMENT", "*", "1.", "_C", ".", "PREPROC", ".", "MAX_SIZE", "=", "np", ".", "ceil", "(", "_C", ".", "PREPROC", ".", "MAX_SIZE", "/", "size_mult", ")", "*", "size_mult", "assert", "_C", ".", "FPN", ".", "PROPOSAL_MODE", "in", "[", "'Level'", ",", "'Joint'", "]", "assert", "_C", ".", "FPN", ".", "FRCNN_HEAD_FUNC", ".", "endswith", "(", "'_head'", ")", "assert", "_C", ".", "FPN", ".", "MRCNN_HEAD_FUNC", ".", "endswith", "(", "'_head'", ")", "assert", "_C", ".", "FPN", ".", "NORM", "in", "[", "'None'", ",", "'GN'", "]", "if", "_C", ".", "FPN", ".", "CASCADE", ":", "# the first threshold is the proposal sampling threshold", "assert", "_C", ".", "CASCADE", ".", "IOUS", "[", "0", "]", "==", "_C", ".", "FRCNN", ".", "FG_THRESH", "assert", "len", "(", "_C", ".", "CASCADE", ".", "BBOX_REG_WEIGHTS", ")", "==", "len", "(", "_C", ".", "CASCADE", ".", "IOUS", ")", "if", "is_training", ":", "train_scales", "=", "_C", ".", "PREPROC", ".", "TRAIN_SHORT_EDGE_SIZE", "if", "isinstance", "(", "train_scales", ",", "(", "list", ",", "tuple", ")", ")", "and", "train_scales", "[", "1", "]", "-", "train_scales", "[", "0", "]", ">", "100", ":", "# don't autotune if augmentation is on", "os", ".", "environ", "[", "'TF_CUDNN_USE_AUTOTUNE'", "]", "=", "'0'", "os", ".", "environ", "[", "'TF_AUTOTUNE_THRESHOLD'", "]", "=", "'1'", "assert", "_C", ".", "TRAINER", "in", "[", "'horovod'", ",", "'replicated'", "]", ",", "_C", ".", "TRAINER", "# setup NUM_GPUS", "if", "_C", ".", "TRAINER", "==", "'horovod'", ":", "import", "horovod", ".", "tensorflow", "as", "hvd", "ngpu", "=", "hvd", ".", "size", "(", ")", "if", "ngpu", "==", "hvd", ".", "local_size", "(", ")", ":", "logger", ".", "warn", "(", "\"It's not recommended to use horovod for single-machine training. 
\"", "\"Replicated trainer is more stable and has the same efficiency.\"", ")", "else", ":", "assert", "'OMPI_COMM_WORLD_SIZE'", "not", "in", "os", ".", "environ", "ngpu", "=", "get_num_gpu", "(", ")", "assert", "ngpu", ">", "0", ",", "\"Has to train with GPU!\"", "assert", "ngpu", "%", "8", "==", "0", "or", "8", "%", "ngpu", "==", "0", ",", "\"Can only train with 1,2,4 or >=8 GPUs, but found {} GPUs\"", ".", "format", "(", "ngpu", ")", "else", ":", "# autotune is too slow for inference", "os", ".", "environ", "[", "'TF_CUDNN_USE_AUTOTUNE'", "]", "=", "'0'", "ngpu", "=", "get_num_gpu", "(", ")", "if", "_C", ".", "TRAIN", ".", "NUM_GPUS", "is", "None", ":", "_C", ".", "TRAIN", ".", "NUM_GPUS", "=", "ngpu", "else", ":", "if", "_C", ".", "TRAINER", "==", "'horovod'", ":", "assert", "_C", ".", "TRAIN", ".", "NUM_GPUS", "==", "ngpu", "else", ":", "assert", "_C", ".", "TRAIN", ".", "NUM_GPUS", "<=", "ngpu", "_C", ".", "freeze", "(", ")", "logger", ".", "info", "(", "\"Config: ------------------------------------------\\n\"", "+", "str", "(", "_C", ")", ")" ]
Run some sanity checks, and populate some configs from others
[ "Run", "some", "sanity", "checks", "and", "populate", "some", "configs", "from", "others" ]
python
train
43.492754
saltstack/salt
salt/returners/rawfile_json.py
https://github.com/saltstack/salt/blob/e8541fd6e744ab0df786c0f76102e41631f45d46/salt/returners/rawfile_json.py#L68-L84
def event_return(events): ''' Write event data (return data and non-return data) to file on the master. ''' if not events: # events is an empty list. # Don't open the logfile in vain. return opts = _get_options({}) # Pass in empty ret, since this is a list of events try: with salt.utils.files.flopen(opts['filename'], 'a') as logfile: for event in events: salt.utils.json.dump(event, logfile) logfile.write(str('\n')) # future lint: disable=blacklisted-function except Exception: log.error('Could not write to rawdata_json file %s', opts['filename']) raise
[ "def", "event_return", "(", "events", ")", ":", "if", "not", "events", ":", "# events is an empty list.", "# Don't open the logfile in vain.", "return", "opts", "=", "_get_options", "(", "{", "}", ")", "# Pass in empty ret, since this is a list of events", "try", ":", "with", "salt", ".", "utils", ".", "files", ".", "flopen", "(", "opts", "[", "'filename'", "]", ",", "'a'", ")", "as", "logfile", ":", "for", "event", "in", "events", ":", "salt", ".", "utils", ".", "json", ".", "dump", "(", "event", ",", "logfile", ")", "logfile", ".", "write", "(", "str", "(", "'\\n'", ")", ")", "# future lint: disable=blacklisted-function", "except", "Exception", ":", "log", ".", "error", "(", "'Could not write to rawdata_json file %s'", ",", "opts", "[", "'filename'", "]", ")", "raise" ]
Write event data (return data and non-return data) to file on the master.
[ "Write", "event", "data", "(", "return", "data", "and", "non", "-", "return", "data", ")", "to", "file", "on", "the", "master", "." ]
python
train
39
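The record above implements the JSON-lines pattern: append each event as one JSON document per line, and skip opening the file when there is nothing to write. A minimal standalone sketch of the same pattern using only the standard library (plain `open` stands in for Salt's locking `flopen` helper, and the path is made up):

```python
import json

def append_events(path, events):
    """Append each event as one JSON object per line (JSON-lines)."""
    if not events:
        # events is an empty list; don't open the logfile in vain
        return
    with open(path, 'a') as logfile:
        for event in events:
            json.dump(event, logfile)
            logfile.write('\n')

append_events('/tmp/events.jsonl', [{'tag': 'test', 'data': {'ok': True}}])
```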
ethereum/py-evm
scripts/benchmark/_utils/tx.py
https://github.com/ethereum/py-evm/blob/58346848f076116381d3274bbcea96b9e2cfcbdf/scripts/benchmark/_utils/tx.py#L17-L41
def new_transaction( vm: VM, from_: Address, to: Address, amount: int=0, private_key: PrivateKey=None, gas_price: int=10, gas: int=100000, data: bytes=b'') -> BaseTransaction: """ Create and return a transaction sending amount from <from_> to <to>. The transaction will be signed with the given private key. """ nonce = vm.state.get_nonce(from_) tx = vm.create_unsigned_transaction( nonce=nonce, gas_price=gas_price, gas=gas, to=to, value=amount, data=data, ) return tx.as_signed_transaction(private_key)
[ "def", "new_transaction", "(", "vm", ":", "VM", ",", "from_", ":", "Address", ",", "to", ":", "Address", ",", "amount", ":", "int", "=", "0", ",", "private_key", ":", "PrivateKey", "=", "None", ",", "gas_price", ":", "int", "=", "10", ",", "gas", ":", "int", "=", "100000", ",", "data", ":", "bytes", "=", "b''", ")", "->", "BaseTransaction", ":", "nonce", "=", "vm", ".", "state", ".", "get_nonce", "(", "from_", ")", "tx", "=", "vm", ".", "create_unsigned_transaction", "(", "nonce", "=", "nonce", ",", "gas_price", "=", "gas_price", ",", "gas", "=", "gas", ",", "to", "=", "to", ",", "value", "=", "amount", ",", "data", "=", "data", ",", ")", "return", "tx", ".", "as_signed_transaction", "(", "private_key", ")" ]
Create and return a transaction sending amount from <from_> to <to>. The transaction will be signed with the given private key.
[ "Create", "and", "return", "a", "transaction", "sending", "amount", "from", "<from_", ">", "to", "<to", ">", "." ]
python
train
25.04
fhcrc/seqmagick
seqmagick/subcommands/primer_trim.py
https://github.com/fhcrc/seqmagick/blob/1642bb87ba5c171fbd307f9da0f8a0ee1d69d5ed/seqmagick/subcommands/primer_trim.py#L287-L331
def action(arguments): """ Trim the alignment as specified """ # Determine file format for input and output source_format = (arguments.source_format or fileformat.from_handle(arguments.source_file)) output_format = (arguments.output_format or fileformat.from_handle(arguments.output_file)) # Load the alignment with arguments.source_file: sequences = SeqIO.parse( arguments.source_file, source_format, alphabet=Alphabet.Gapped(Alphabet.single_letter_alphabet)) # Locate primers (forward_start, forward_end), (reverse_start, reverse_end) = locate_primers( sequences, arguments.forward_primer, arguments.reverse_primer, arguments.reverse_complement, arguments.max_hamming_distance) # Generate slice indexes if arguments.include_primers: start = forward_start end = reverse_end + 1 else: start = forward_end + 1 end = reverse_start # Rewind the input file arguments.source_file.seek(0) sequences = SeqIO.parse( arguments.source_file, source_format, alphabet=Alphabet.Gapped(Alphabet.single_letter_alphabet)) # Apply the transformation prune_action = _ACTIONS[arguments.prune_action] transformed_sequences = prune_action(sequences, start, end) with arguments.output_file: SeqIO.write(transformed_sequences, arguments.output_file, output_format)
[ "def", "action", "(", "arguments", ")", ":", "# Determine file format for input and output", "source_format", "=", "(", "arguments", ".", "source_format", "or", "fileformat", ".", "from_handle", "(", "arguments", ".", "source_file", ")", ")", "output_format", "=", "(", "arguments", ".", "output_format", "or", "fileformat", ".", "from_handle", "(", "arguments", ".", "output_file", ")", ")", "# Load the alignment", "with", "arguments", ".", "source_file", ":", "sequences", "=", "SeqIO", ".", "parse", "(", "arguments", ".", "source_file", ",", "source_format", ",", "alphabet", "=", "Alphabet", ".", "Gapped", "(", "Alphabet", ".", "single_letter_alphabet", ")", ")", "# Locate primers", "(", "forward_start", ",", "forward_end", ")", ",", "(", "reverse_start", ",", "reverse_end", ")", "=", "locate_primers", "(", "sequences", ",", "arguments", ".", "forward_primer", ",", "arguments", ".", "reverse_primer", ",", "arguments", ".", "reverse_complement", ",", "arguments", ".", "max_hamming_distance", ")", "# Generate slice indexes", "if", "arguments", ".", "include_primers", ":", "start", "=", "forward_start", "end", "=", "reverse_end", "+", "1", "else", ":", "start", "=", "forward_end", "+", "1", "end", "=", "reverse_start", "# Rewind the input file", "arguments", ".", "source_file", ".", "seek", "(", "0", ")", "sequences", "=", "SeqIO", ".", "parse", "(", "arguments", ".", "source_file", ",", "source_format", ",", "alphabet", "=", "Alphabet", ".", "Gapped", "(", "Alphabet", ".", "single_letter_alphabet", ")", ")", "# Apply the transformation", "prune_action", "=", "_ACTIONS", "[", "arguments", ".", "prune_action", "]", "transformed_sequences", "=", "prune_action", "(", "sequences", ",", "start", ",", "end", ")", "with", "arguments", ".", "output_file", ":", "SeqIO", ".", "write", "(", "transformed_sequences", ",", "arguments", ".", "output_file", ",", "output_format", ")" ]
Trim the alignment as specified
[ "Trim", "the", "alignment", "as", "specified" ]
python
train
34.933333
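The slice arithmetic in the record above decides whether the primers themselves are kept: Python slices are half-open, so the inclusive end positions returned by `locate_primers` need `+ 1` to become exclusive bounds. A small check of that logic on a plain string, with made-up primer coordinates and no Biopython dependency:

```python
def slice_bounds(forward_start, forward_end, reverse_start, reverse_end,
                 include_primers):
    # Inclusive primer coordinates -> half-open slice bounds.
    if include_primers:
        return forward_start, reverse_end + 1
    return forward_end + 1, reverse_start

seq = "AAACCCGGGTTT"
# Pretend the forward primer spans [0, 2] and the reverse primer spans [9, 11].
start, end = slice_bounds(0, 2, 9, 11, include_primers=True)
print(seq[start:end])  # AAACCCGGGTTT
start, end = slice_bounds(0, 2, 9, 11, include_primers=False)
print(seq[start:end])  # CCCGGG
```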
DataONEorg/d1_python
lib_client/src/d1_client/cnclient.py
https://github.com/DataONEorg/d1_python/blob/3ac4d4f3ca052d3e8641a6a329cab526c8ddcb0d/lib_client/src/d1_client/cnclient.py#L868-L890
def setReplicationStatusResponse( self, pid, nodeRef, status, dataoneError=None, vendorSpecific=None ): """CNReplication.setReplicationStatus(session, pid, nodeRef, status, failure) → boolean https://releases.dataone.org/online/api-documentation-v2.0.1/apis/CN_APIs.html#CNReplication.setReplicationStatus. Args: pid: nodeRef: status: dataoneError: vendorSpecific: Returns: """ mmp_dict = {'nodeRef': nodeRef, 'status': status} # .toxml('utf-8'), if dataoneError is not None: mmp_dict['failure'] = ('failure.xml', dataoneError.serialize_to_transport()) return self.PUT( ['replicaNotifications', pid], fields=mmp_dict, headers=vendorSpecific )
[ "def", "setReplicationStatusResponse", "(", "self", ",", "pid", ",", "nodeRef", ",", "status", ",", "dataoneError", "=", "None", ",", "vendorSpecific", "=", "None", ")", ":", "mmp_dict", "=", "{", "'nodeRef'", ":", "nodeRef", ",", "'status'", ":", "status", "}", "# .toxml('utf-8'),", "if", "dataoneError", "is", "not", "None", ":", "mmp_dict", "[", "'failure'", "]", "=", "(", "'failure.xml'", ",", "dataoneError", ".", "serialize_to_transport", "(", ")", ")", "return", "self", ".", "PUT", "(", "[", "'replicaNotifications'", ",", "pid", "]", ",", "fields", "=", "mmp_dict", ",", "headers", "=", "vendorSpecific", ")" ]
CNReplication.setReplicationStatus(session, pid, nodeRef, status, failure) → boolean https://releases.dataone.org/online/api-documentation-v2.0.1/apis/CN_APIs.html#CNReplication.setReplicationStatus. Args: pid: nodeRef: status: dataoneError: vendorSpecific: Returns:
[ "CNReplication", ".", "setReplicationStatus", "(", "session", "pid", "nodeRef", "status", "failure", ")", "→", "boolean", "https", ":", "//", "releases", ".", "dataone", ".", "org", "/", "online", "/", "api", "-", "documentatio", "n", "-", "v2", ".", "0", ".", "1", "/", "apis", "/", "CN_APIs", ".", "html#CNReplication", ".", "setReplicationStatus", "." ]
python
train
34.391304
AkihikoITOH/capybara
capybara/virtualenv/lib/python2.7/site-packages/pip/req/req_uninstall.py
https://github.com/AkihikoITOH/capybara/blob/e86c2173ea386654f4ae061148e8fbe3f25e715c/capybara/virtualenv/lib/python2.7/site-packages/pip/req/req_uninstall.py#L134-L148
def rollback(self): """Rollback the changes previously made by remove().""" if self.save_dir is None: logger.error( "Can't roll back %s; was not uninstalled", self.dist.project_name, ) return False logger.info('Rolling back uninstall of %s', self.dist.project_name) for path in self._moved_paths: tmp_path = self._stash(path) logger.debug('Replacing %s', path) renames(tmp_path, path) for pth in self.pth.values(): pth.rollback()
[ "def", "rollback", "(", "self", ")", ":", "if", "self", ".", "save_dir", "is", "None", ":", "logger", ".", "error", "(", "\"Can't roll back %s; was not uninstalled\"", ",", "self", ".", "dist", ".", "project_name", ",", ")", "return", "False", "logger", ".", "info", "(", "'Rolling back uninstall of %s'", ",", "self", ".", "dist", ".", "project_name", ")", "for", "path", "in", "self", ".", "_moved_paths", ":", "tmp_path", "=", "self", ".", "_stash", "(", "path", ")", "logger", ".", "debug", "(", "'Replacing %s'", ",", "path", ")", "renames", "(", "tmp_path", ",", "path", ")", "for", "pth", "in", "self", ".", "pth", ".", "values", "(", ")", ":", "pth", ".", "rollback", "(", ")" ]
Rollback the changes previously made by remove().
[ "Rollback", "the", "changes", "previously", "made", "by", "remove", "()", "." ]
python
test
38.066667
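The rollback above works because `remove()` first stashed every file it touched. A toy version of that stash-and-restore idea, assuming an existing stash directory and unique basenames (pip's real implementation also handles `.pth` entries and name collisions):

```python
import os
import shutil

class Stash:
    """Move files aside so a later rollback can restore them."""

    def __init__(self, stash_dir):
        self.stash_dir = stash_dir
        self._moved = {}  # original path -> stashed path

    def remove(self, path):
        target = os.path.join(self.stash_dir, os.path.basename(path))
        shutil.move(path, target)  # stash instead of deleting outright
        self._moved[path] = target

    def rollback(self):
        for path, stashed in self._moved.items():
            shutil.move(stashed, path)  # put every file back where it was
        self._moved.clear()
```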
ewels/MultiQC
multiqc/modules/hicpro/hicpro.py
https://github.com/ewels/MultiQC/blob/2037d6322b2554146a74efbf869156ad20d4c4ec/multiqc/modules/hicpro/hicpro.py#L399-L423
def hicpro_capture_chart (self): """ Generate Capture Hi-C plot""" keys = OrderedDict() keys['valid_pairs_on_target_cap_cap'] = { 'color': '#0039e6', 'name': 'Capture-Capture interactions' } keys['valid_pairs_on_target_cap_rep'] = { 'color': '#809fff', 'name': 'Capture-Reporter interactions' } keys['valid_pairs_off_target'] = { 'color': '#cccccc', 'name': 'Off-target valid pairs' } # Check capture info are available num_samples = 0 for s_name in self.hicpro_data: for k in keys: num_samples += sum([1 if k in self.hicpro_data[s_name] else 0]) if num_samples == 0: return False # Config for the plot config = { 'id': 'hicpro_cap_plot', 'title': 'HiC-Pro: Capture Statistics', 'ylab': '# Pairs', 'cpswitch_counts_label': 'Number of Pairs' } return bargraph.plot(self.hicpro_data, keys, config)
[ "def", "hicpro_capture_chart", "(", "self", ")", ":", "keys", "=", "OrderedDict", "(", ")", "keys", "[", "'valid_pairs_on_target_cap_cap'", "]", "=", "{", "'color'", ":", "'#0039e6'", ",", "'name'", ":", "'Capture-Capture interactions'", "}", "keys", "[", "'valid_pairs_on_target_cap_rep'", "]", "=", "{", "'color'", ":", "'#809fff'", ",", "'name'", ":", "'Capture-Reporter interactions'", "}", "keys", "[", "'valid_pairs_off_target'", "]", "=", "{", "'color'", ":", "'#cccccc'", ",", "'name'", ":", "'Off-target valid pairs'", "}", "# Check capture info are available", "num_samples", "=", "0", "for", "s_name", "in", "self", ".", "hicpro_data", ":", "for", "k", "in", "keys", ":", "num_samples", "+=", "sum", "(", "[", "1", "if", "k", "in", "self", ".", "hicpro_data", "[", "s_name", "]", "else", "0", "]", ")", "if", "num_samples", "==", "0", ":", "return", "False", "# Config for the plot", "config", "=", "{", "'id'", ":", "'hicpro_cap_plot'", ",", "'title'", ":", "'HiC-Pro: Capture Statistics'", ",", "'ylab'", ":", "'# Pairs'", ",", "'cpswitch_counts_label'", ":", "'Number of Pairs'", "}", "return", "bargraph", ".", "plot", "(", "self", ".", "hicpro_data", ",", "keys", ",", "config", ")" ]
Generate Capture Hi-C plot
[ "Generate", "Capture", "Hi", "-", "C", "plot" ]
python
train
38.8
kushaldas/retask
retask/queue.py
https://github.com/kushaldas/retask/blob/5c955b8386653d3f0591ca2f4b1a213ff4b5a018/retask/queue.py#L299-L322
def wait(self, wait_time=0): """ Blocking call to check if the worker returns the result. One can use job.result after this call returns ``True``. :arg wait_time: Time in seconds to wait, default is infinite. :return: `True` or `False`. .. note:: This is a blocking call, you can specify the wait_time argument for a timeout. """ if self.__result: return True data = self.rdb.brpop(self.urn, wait_time) if data: self.rdb.delete(self.urn) data = json.loads(data[1]) self.__result = data return True else: return False
[ "def", "wait", "(", "self", ",", "wait_time", "=", "0", ")", ":", "if", "self", ".", "__result", ":", "return", "True", "data", "=", "self", ".", "rdb", ".", "brpop", "(", "self", ".", "urn", ",", "wait_time", ")", "if", "data", ":", "self", ".", "rdb", ".", "delete", "(", "self", ".", "urn", ")", "data", "=", "json", ".", "loads", "(", "data", "[", "1", "]", ")", "self", ".", "__result", "=", "data", "return", "True", "else", ":", "return", "False" ]
Blocking call to check if the worker returns the result. One can use job.result after this call returns ``True``. :arg wait_time: Time in seconds to wait, default is infinite. :return: `True` or `False`. .. note:: This is a blocking call, you can specify the wait_time argument for a timeout.
[ "Blocking", "call", "to", "check", "if", "the", "worker", "returns", "the", "result", ".", "One", "can", "use", "job", ".", "result", "after", "this", "call", "returns", "True", "." ]
python
train
27.666667
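`wait()` above is a thin wrapper over Redis BRPOP, which blocks until a value arrives on the list or the timeout expires; `wait_time=0` means block forever. A minimal sketch of that call with the redis-py client, using a hypothetical queue name (the real class additionally caches the result and deletes the key):

```python
import json

import redis  # assumes the redis-py package is installed

def wait_for_result(rdb, queue_name, wait_time=0):
    """Block on BRPOP until a worker pushes a result, or time out.

    Returns the decoded payload, or None if the timeout expired.
    """
    data = rdb.brpop(queue_name, wait_time)  # (key, value) or None
    if data is None:
        return None
    _key, payload = data
    return json.loads(payload)

# result = wait_for_result(redis.Redis(), 'retask:result:some-urn', wait_time=5)
```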
spyder-ide/spyder
spyder/widgets/comboboxes.py
https://github.com/spyder-ide/spyder/blob/f76836ce1b924bcc4efd3f74f2960d26a4e528e0/spyder/widgets/comboboxes.py#L336-L340
def is_module_or_package(path): """Return True if path is a Python module/package""" is_module = osp.isfile(path) and osp.splitext(path)[1] in ('.py', '.pyw') is_package = osp.isdir(path) and osp.isfile(osp.join(path, '__init__.py')) return is_module or is_package
[ "def", "is_module_or_package", "(", "path", ")", ":", "is_module", "=", "osp", ".", "isfile", "(", "path", ")", "and", "osp", ".", "splitext", "(", "path", ")", "[", "1", "]", "in", "(", "'.py'", ",", "'.pyw'", ")", "is_package", "=", "osp", ".", "isdir", "(", "path", ")", "and", "osp", ".", "isfile", "(", "osp", ".", "join", "(", "path", ",", "'__init__.py'", ")", ")", "return", "is_module", "or", "is_package" ]
Return True if path is a Python module/package
[ "Return", "True", "if", "path", "is", "a", "Python", "module", "/", "package" ]
python
train
56
Sean1708/HipPy
hippy/parser.py
https://github.com/Sean1708/HipPy/blob/d0ea8fb1e417f1fedaa8e215e3d420b90c4de691/hippy/parser.py#L88-L91
def _skip_newlines(self): """Increment over newlines.""" while self._cur_token['type'] is TT.lbreak and not self._finished: self._increment()
[ "def", "_skip_newlines", "(", "self", ")", ":", "while", "self", ".", "_cur_token", "[", "'type'", "]", "is", "TT", ".", "lbreak", "and", "not", "self", ".", "_finished", ":", "self", ".", "_increment", "(", ")" ]
Increment over newlines.
[ "Increment", "over", "newlines", "." ]
python
train
41.5
agile-geoscience/striplog
striplog/lexicon.py
https://github.com/agile-geoscience/striplog/blob/8033b673a151f96c29802b43763e863519a3124c/striplog/lexicon.py#L136-L168
def find_synonym(self, word): """ Given a string and a dict of synonyms, returns the 'preferred' word. Case insensitive. Args: word (str): A word. Returns: str: The preferred word, or the input word if not found. Example: >>> syn = {'snake': ['python', 'adder']} >>> find_synonym('adder', syn) 'snake' >>> find_synonym('rattler', syn) 'rattler' TODO: Make it handle case, returning the same case it received. """ if word and self.synonyms: # Make the reverse look-up table. reverse_lookup = {} for k, v in self.synonyms.items(): for i in v: reverse_lookup[i.lower()] = k.lower() # Now check words against this table. if word.lower() in reverse_lookup: return reverse_lookup[word.lower()] return word
[ "def", "find_synonym", "(", "self", ",", "word", ")", ":", "if", "word", "and", "self", ".", "synonyms", ":", "# Make the reverse look-up table.", "reverse_lookup", "=", "{", "}", "for", "k", ",", "v", "in", "self", ".", "synonyms", ".", "items", "(", ")", ":", "for", "i", "in", "v", ":", "reverse_lookup", "[", "i", ".", "lower", "(", ")", "]", "=", "k", ".", "lower", "(", ")", "# Now check words against this table.", "if", "word", ".", "lower", "(", ")", "in", "reverse_lookup", ":", "return", "reverse_lookup", "[", "word", ".", "lower", "(", ")", "]", "return", "word" ]
Given a string and a dict of synonyms, returns the 'preferred' word. Case insensitive. Args: word (str): A word. Returns: str: The preferred word, or the input word if not found. Example: >>> syn = {'snake': ['python', 'adder']} >>> find_synonym('adder', syn) 'snake' >>> find_synonym('rattler', syn) 'rattler' TODO: Make it handle case, returning the same case it received.
[ "Given", "a", "string", "and", "a", "dict", "of", "synonyms", "returns", "the", "preferred", "word", ".", "Case", "insensitive", "." ]
python
test
29.121212
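The reverse look-up table is the core of the record above: invert the `{preferred: [synonyms]}` dict once, then every query is a single lowercase dict lookup. The same idea as a standalone function rather than a method:

```python
def find_synonym(word, synonyms):
    """Map a word to its preferred form via an inverted synonym table."""
    reverse_lookup = {s.lower(): preferred.lower()
                      for preferred, syns in synonyms.items()
                      for s in syns}
    return reverse_lookup.get(word.lower(), word)

syn = {'snake': ['python', 'adder']}
print(find_synonym('Adder', syn))    # snake
print(find_synonym('rattler', syn))  # rattler (unknown words pass through)
```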
saltstack/salt
salt/states/service.py
https://github.com/saltstack/salt/blob/e8541fd6e744ab0df786c0f76102e41631f45d46/salt/states/service.py#L323-L335
def _available(name, ret): ''' Check if the service is available ''' avail = False if 'service.available' in __salt__: avail = __salt__['service.available'](name) elif 'service.get_all' in __salt__: avail = name in __salt__['service.get_all']() if not avail: ret['result'] = False ret['comment'] = 'The named service {0} is not available'.format(name) return avail
[ "def", "_available", "(", "name", ",", "ret", ")", ":", "avail", "=", "False", "if", "'service.available'", "in", "__salt__", ":", "avail", "=", "__salt__", "[", "'service.available'", "]", "(", "name", ")", "elif", "'service.get_all'", "in", "__salt__", ":", "avail", "=", "name", "in", "__salt__", "[", "'service.get_all'", "]", "(", ")", "if", "not", "avail", ":", "ret", "[", "'result'", "]", "=", "False", "ret", "[", "'comment'", "]", "=", "'The named service {0} is not available'", ".", "format", "(", "name", ")", "return", "avail" ]
Check if the service is available
[ "Check", "if", "the", "service", "is", "available" ]
python
train
32
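The helper above is a capability probe: prefer the most specific loader function and fall back to a coarser one. The same pattern with a plain dict standing in for Salt's `__salt__` function registry:

```python
def available(name, funcs):
    """Check service availability, preferring the most specific probe."""
    if 'service.available' in funcs:
        return funcs['service.available'](name)
    if 'service.get_all' in funcs:
        return name in funcs['service.get_all']()
    return False

funcs = {'service.get_all': lambda: ['sshd', 'cron']}
print(available('sshd', funcs))   # True
print(available('nginx', funcs))  # False
```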
nephila/djangocms-page-sitemap
djangocms_page_sitemap/utils.py
https://github.com/nephila/djangocms-page-sitemap/blob/0d89365e5513471b603c99c60dba6d1101f19d53/djangocms_page_sitemap/utils.py#L7-L15
def get_cache_key(page): """ Create the cache key for the current page and language """ try: site_id = page.node.site_id except AttributeError: site_id = page.site_id return _get_cache_key('page_sitemap', page, 'default', site_id)
[ "def", "get_cache_key", "(", "page", ")", ":", "try", ":", "site_id", "=", "page", ".", "node", ".", "site_id", "except", "AttributeError", ":", "site_id", "=", "page", ".", "site_id", "return", "_get_cache_key", "(", "'page_sitemap'", ",", "page", ",", "'default'", ",", "site_id", ")" ]
Create the cache key for the current page and language
[ "Create", "the", "cache", "key", "for", "the", "current", "page", "and", "language" ]
python
train
29.111111
dev-pipeline/dev-pipeline-core
lib/devpipeline_core/paths.py
https://github.com/dev-pipeline/dev-pipeline-core/blob/fa40c050a56202485070b0300bb8695e9388c34f/lib/devpipeline_core/paths.py#L8-L22
def make_path(config, *endings): """ Create a path based on component configuration. All paths are relative to the component's configuration directory; usually this will be the same for an entire session, but this function supports component-specific configuration directories. Arguments: config - the configuration object for a component endings - a list of file paths to append to the component's configuration directory """ config_dir = config.get("dp.config_dir") return os.path.join(config_dir, *endings)
[ "def", "make_path", "(", "config", ",", "*", "endings", ")", ":", "config_dir", "=", "config", ".", "get", "(", "\"dp.config_dir\"", ")", "return", "os", ".", "path", ".", "join", "(", "config_dir", ",", "*", "endings", ")" ]
Create a path based on component configuration. All paths are relative to the component's configuration directory; usually this will be the same for an entire session, but this function supports component-specific configuration directories. Arguments: config - the configuration object for a component endings - a list of file paths to append to the component's configuration directory
[ "Create", "a", "path", "based", "on", "component", "configuration", "." ]
python
train
37.066667
Kane610/deconz
pydeconz/utils.py
https://github.com/Kane610/deconz/blob/8a9498dbbc8c168d4a081173ad6c3b1e17fffdf6/pydeconz/utils.py#L40-L49
async def async_delete_all_keys(session, host, port, api_key, api_keys=[]): """Delete all API keys except for the ones provided to the method.""" url = 'http://{}:{}/api/{}/config'.format(host, str(port), api_key) response = await async_request(session.get, url) api_keys.append(api_key) for key in response['whitelist'].keys(): if key not in api_keys: await async_delete_api_key(session, host, port, key)
[ "async", "def", "async_delete_all_keys", "(", "session", ",", "host", ",", "port", ",", "api_key", ",", "api_keys", "=", "[", "]", ")", ":", "url", "=", "'http://{}:{}/api/{}/config'", ".", "format", "(", "host", ",", "str", "(", "port", ")", ",", "api_key", ")", "response", "=", "await", "async_request", "(", "session", ".", "get", ",", "url", ")", "api_keys", ".", "append", "(", "api_key", ")", "for", "key", "in", "response", "[", "'whitelist'", "]", ".", "keys", "(", ")", ":", "if", "key", "not", "in", "api_keys", ":", "await", "async_delete_api_key", "(", "session", ",", "host", ",", "port", ",", "key", ")" ]
Delete all API keys except for the ones provided to the method.
[ "Delete", "all", "API", "keys", "except", "for", "the", "ones", "provided", "to", "the", "method", "." ]
python
train
43.8
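Note that `api_keys=[]` in the record above is a mutable default argument: the same list object is shared across calls, so keys appended during one invocation leak into the next. The standard fix is a `None` sentinel, sketched here synchronously on a toy function (not the deCONZ API):

```python
def collect_keys_to_keep(current_key, api_keys=None):
    # None sentinel: a fresh list is created on every call.
    keep = list(api_keys) if api_keys is not None else []
    keep.append(current_key)
    return keep

print(collect_keys_to_keep('abc'))  # ['abc']
print(collect_keys_to_keep('def'))  # ['def'] -- no leftover 'abc' from call one
```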
crossbario/txaio-etcd
txaioetcd/_client_tx.py
https://github.com/crossbario/txaio-etcd/blob/c9aebff7f288a0b219bffc9d2579d22cf543baa5/txaioetcd/_client_tx.py#L408-L446
def watch(self, keys, on_watch, filters=None, start_revision=None, return_previous=None): """ Watch one or more keys or key sets and invoke a callback. Watch watches for events happening or that have happened. The entire event history can be watched starting from the last compaction revision. :param keys: Watch these keys / key sets. :type keys: list of bytes or list of instance of :class:`txaioetcd.KeySet` :param on_watch: The callback to invoke upon receiving a watch event. :type on_watch: callable :param filters: Any filters to apply. :param start_revision: start_revision is an optional revision to watch from (inclusive). No start_revision is "now". :type start_revision: int :param return_previous: Flag to request returning previous values. :returns: A deferred that just fires when watching has started successfully, or which fires with an error in case the watching could not be started. :rtype: twisted.internet.Deferred """ d = self._start_watching(keys, on_watch, filters, start_revision, return_previous) # # ODD: Trying to use a parameter instead of *args errors out as soon as the # parameter is accessed. # def on_err(*args): if args[0].type not in [CancelledError, ResponseFailed]: self.log.warn('etcd watch terminated with "{error}"', error=args[0].type) return args[0] d.addErrback(on_err) return d
[ "def", "watch", "(", "self", ",", "keys", ",", "on_watch", ",", "filters", "=", "None", ",", "start_revision", "=", "None", ",", "return_previous", "=", "None", ")", ":", "d", "=", "self", ".", "_start_watching", "(", "keys", ",", "on_watch", ",", "filters", ",", "start_revision", ",", "return_previous", ")", "#", "# ODD: Trying to use a parameter instead of *args errors out as soon as the", "# parameter is accessed.", "#", "def", "on_err", "(", "*", "args", ")", ":", "if", "args", "[", "0", "]", ".", "type", "not", "in", "[", "CancelledError", ",", "ResponseFailed", "]", ":", "self", ".", "log", ".", "warn", "(", "'etcd watch terminated with \"{error}\"'", ",", "error", "=", "args", "[", "0", "]", ".", "type", ")", "return", "args", "[", "0", "]", "d", ".", "addErrback", "(", "on_err", ")", "return", "d" ]
Watch one or more keys or key sets and invoke a callback. Watch watches for events happening or that have happened. The entire event history can be watched starting from the last compaction revision. :param keys: Watch these keys / key sets. :type keys: list of bytes or list of instance of :class:`txaioetcd.KeySet` :param on_watch: The callback to invoke upon receiving a watch event. :type on_watch: callable :param filters: Any filters to apply. :param start_revision: start_revision is an optional revision to watch from (inclusive). No start_revision is "now". :type start_revision: int :param return_previous: Flag to request returning previous values. :returns: A deferred that just fires when watching has started successfully, or which fires with an error in case the watching could not be started. :rtype: twisted.internet.Deferred
[ "Watch", "one", "or", "more", "keys", "or", "key", "sets", "and", "invoke", "a", "callback", "." ]
python
train
40.179487
nuagenetworks/monolithe
monolithe/lib/sdkutils.py
https://github.com/nuagenetworks/monolithe/blob/626011af3ff43f73b7bd8aa5e1f93fb5f1f0e181/monolithe/lib/sdkutils.py#L44-L75
def massage_type_name(cls, type_name): """ Returns a readable type according to a java type """ if type_name.lower() in ("enum", "enumeration"): return "enum" if type_name.lower() in ("str", "string"): return "string" if type_name.lower() in ("boolean", "bool"): return "boolean" if type_name.lower() in ("int", "integer"): return "integer" if type_name.lower() in ("date", "datetime", "time"): return "time" if type_name.lower() in ("double", "float", "long"): return "float" if type_name.lower() in ("list", "array"): return "list" if type_name.lower() in ("object", "dict"): return "object" if "array" in type_name.lower(): return "list" return "string"
[ "def", "massage_type_name", "(", "cls", ",", "type_name", ")", ":", "if", "type_name", ".", "lower", "(", ")", "in", "(", "\"enum\"", ",", "\"enumeration\"", ")", ":", "return", "\"enum\"", "if", "type_name", ".", "lower", "(", ")", "in", "(", "\"str\"", ",", "\"string\"", ")", ":", "return", "\"string\"", "if", "type_name", ".", "lower", "(", ")", "in", "(", "\"boolean\"", ",", "\"bool\"", ")", ":", "return", "\"boolean\"", "if", "type_name", ".", "lower", "(", ")", "in", "(", "\"int\"", ",", "\"integer\"", ")", ":", "return", "\"integer\"", "if", "type_name", ".", "lower", "(", ")", "in", "(", "\"date\"", ",", "\"datetime\"", ",", "\"time\"", ")", ":", "return", "\"time\"", "if", "type_name", ".", "lower", "(", ")", "in", "(", "\"double\"", ",", "\"float\"", ",", "\"long\"", ")", ":", "return", "\"float\"", "if", "type_name", ".", "lower", "(", ")", "in", "(", "\"list\"", ",", "\"array\"", ")", ":", "return", "\"list\"", "if", "type_name", ".", "lower", "(", ")", "in", "(", "\"object\"", ",", "\"dict\"", ")", ":", "return", "\"object\"", "if", "\"array\"", "in", "type_name", ".", "lower", "(", ")", ":", "return", "\"list\"", "return", "\"string\"" ]
Returns a readable type according to a java type
[ "Returns", "a", "readable", "type", "according", "to", "a", "java", "type" ]
python
train
26.21875
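The if-chain above is a pure lookup, so it can equally be written table-driven, which keeps the alias groups in one place. A sketch of an equivalent version (same inputs, same outputs, including the substring fallback for things like `ArrayList`):

```python
_TYPE_MAP = {
    ('enum', 'enumeration'): 'enum',
    ('str', 'string'): 'string',
    ('boolean', 'bool'): 'boolean',
    ('int', 'integer'): 'integer',
    ('date', 'datetime', 'time'): 'time',
    ('double', 'float', 'long'): 'float',
    ('list', 'array'): 'list',
    ('object', 'dict'): 'object',
}

def massage_type_name(type_name):
    lowered = type_name.lower()
    for aliases, canonical in _TYPE_MAP.items():
        if lowered in aliases:
            return canonical
    if 'array' in lowered:  # e.g. 'ArrayList' still maps to list
        return 'list'
    return 'string'         # default, as in the original

assert massage_type_name('Enumeration') == 'enum'
assert massage_type_name('ArrayList') == 'list'
assert massage_type_name('whatever') == 'string'
```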
limodou/uliweb
uliweb/lib/werkzeug/wrappers.py
https://github.com/limodou/uliweb/blob/34472f25e4bc0b954a35346672f94e84ef18b076/uliweb/lib/werkzeug/wrappers.py#L855-L870
def set_data(self, value): """Sets a new string as response. The value set must either be a unicode or bytestring. If a unicode string is set it's encoded automatically to the charset of the response (utf-8 by default). .. versionadded:: 0.9 """ # if an unicode string is set, it's encoded directly so that we # can set the content length if isinstance(value, text_type): value = value.encode(self.charset) else: value = bytes(value) self.response = [value] if self.automatically_set_content_length: self.headers['Content-Length'] = str(len(value))
[ "def", "set_data", "(", "self", ",", "value", ")", ":", "# if an unicode string is set, it's encoded directly so that we", "# can set the content length", "if", "isinstance", "(", "value", ",", "text_type", ")", ":", "value", "=", "value", ".", "encode", "(", "self", ".", "charset", ")", "else", ":", "value", "=", "bytes", "(", "value", ")", "self", ".", "response", "=", "[", "value", "]", "if", "self", ".", "automatically_set_content_length", ":", "self", ".", "headers", "[", "'Content-Length'", "]", "=", "str", "(", "len", "(", "value", ")", ")" ]
Sets a new string as response. The value set must either be a unicode or bytestring. If a unicode string is set it's encoded automatically to the charset of the response (utf-8 by default). .. versionadded:: 0.9
[ "Sets", "a", "new", "string", "as", "response", ".", "The", "value", "set", "must", "either", "by", "a", "unicode", "or", "bytestring", ".", "If", "a", "unicode", "string", "is", "set", "it", "s", "encoded", "automatically", "to", "the", "charset", "of", "the", "response", "(", "utf", "-", "8", "by", "default", ")", "." ]
python
train
41.25
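The key point in `set_data` above is that Content-Length must count bytes after encoding, not characters. A Python 3-only sketch of that normalization (the original also supports Python 2 via `text_type`):

```python
def encode_body(value, charset='utf-8'):
    """Normalize a response body to bytes and compute its Content-Length."""
    if isinstance(value, str):
        value = value.encode(charset)  # text -> bytes in the response charset
    else:
        value = bytes(value)
    return value, str(len(value))

body, length = encode_body('héllo')
print(body, length)  # b'h\xc3\xa9llo' 6  (five characters, six bytes)
```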
mohamedattahri/PyXMLi
pyxmli/__init__.py
https://github.com/mohamedattahri/PyXMLi/blob/a81a245be822d62f1a20c734ca14b42c786ae81e/pyxmli/__init__.py#L1042-L1048
def duplicate(self): ''' Returns a copy of the current group, including its lines. @returns: Group ''' return self.__class__(amount=self.amount, date=self.date, method=self.method, ref=self.ref)
[ "def", "duplicate", "(", "self", ")", ":", "return", "self", ".", "__class__", "(", "amount", "=", "self", ".", "amount", ",", "date", "=", "self", ".", "date", ",", "method", "=", "self", ".", "method", ",", "ref", "=", "self", ".", "ref", ")" ]
Returns a copy of the current group, including its lines. @returns: Group
[ "Returns", "a", "copy", "of", "the", "current", "group", "including", "its", "lines", "." ]
python
train
36.857143
arne-cl/discoursegraphs
src/discoursegraphs/readwrite/exmaralda.py
https://github.com/arne-cl/discoursegraphs/blob/842f0068a3190be2c75905754521b176b25a54fb/src/discoursegraphs/readwrite/exmaralda.py#L322-L335
def __add_tier(self, tier, token_tier_name): """ adds a tier to the document graph (either as additional attributes to the token nodes or as a span node with outgoing edges to the token nodes it represents) """ if tier.attrib['category'] == token_tier_name: self.__add_tokens(tier) else: if self.is_token_annotation_tier(tier): self.__add_token_annotation_tier(tier) else: self.__add_span_tier(tier)
[ "def", "__add_tier", "(", "self", ",", "tier", ",", "token_tier_name", ")", ":", "if", "tier", ".", "attrib", "[", "'category'", "]", "==", "token_tier_name", ":", "self", ".", "__add_tokens", "(", "tier", ")", "else", ":", "if", "self", ".", "is_token_annotation_tier", "(", "tier", ")", ":", "self", ".", "__add_token_annotation_tier", "(", "tier", ")", "else", ":", "self", ".", "__add_span_tier", "(", "tier", ")" ]
adds a tier to the document graph (either as additional attributes to the token nodes or as a span node with outgoing edges to the token nodes it represents)
[ "adds", "a", "tier", "to", "the", "document", "graph", "(", "either", "as", "additional", "attributes", "to", "the", "token", "nodes", "or", "as", "a", "span", "node", "with", "outgoing", "edges", "to", "the", "token", "nodes", "it", "represents", ")" ]
python
train
36.5
AguaClara/aguaclara
aguaclara/core/physchem.py
https://github.com/AguaClara/aguaclara/blob/8dd4e734768b166a7fc2b60388a24df2f93783fc/aguaclara/core/physchem.py#L134-L139
def re_general(Vel, Area, PerimWetted, Nu): """Return the Reynolds Number for a general cross section.""" #Checking input validity - inputs not checked here are checked by #functions this function calls. ut.check_range([Vel, ">=0", "Velocity"], [Nu, ">0", "Nu"]) return 4 * radius_hydraulic_general(Area, PerimWetted).magnitude * Vel / Nu
[ "def", "re_general", "(", "Vel", ",", "Area", ",", "PerimWetted", ",", "Nu", ")", ":", "#Checking input validity - inputs not checked here are checked by", "#functions this function calls.", "ut", ".", "check_range", "(", "[", "Vel", ",", "\">=0\"", ",", "\"Velocity\"", "]", ",", "[", "Nu", ",", "\">0\"", ",", "\"Nu\"", "]", ")", "return", "4", "*", "radius_hydraulic_general", "(", "Area", ",", "PerimWetted", ")", ".", "magnitude", "*", "Vel", "/", "Nu" ]
Return the Reynolds Number for a general cross section.
[ "Return", "the", "Reynolds", "Number", "for", "a", "general", "cross", "section", "." ]
python
train
58.833333
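The record above computes the Reynolds number as Re = 4·R_h·V/ν, with the hydraulic radius R_h = A/P_wetted. A unitless sketch of the same arithmetic (the original attaches pint units and runs range checks):

```python
def radius_hydraulic_general(area, perim_wetted):
    return area / perim_wetted

def re_general(vel, area, perim_wetted, nu):
    # Re = 4 * R_h * V / nu, with R_h = A / P_wetted
    return 4 * radius_hydraulic_general(area, perim_wetted) * vel / nu

# Water at ~20 C (nu ~ 1e-6 m^2/s) in a fully wetted 0.5 m x 0.25 m duct
# flowing at 0.3 m/s:
print(re_general(0.3, 0.5 * 0.25, 2 * (0.5 + 0.25), 1e-6))  # 100000.0
```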
iotile/coretools
iotilegateway/iotilegateway/supervisor/client.py
https://github.com/iotile/coretools/blob/2d794f5f1346b841b0dcd16c9d284e9bf2f3c6ec/iotilegateway/iotilegateway/supervisor/client.py#L646-L660
def service_info(self, name): """Pull descriptive info of a service by name. Information returned includes the service's user friendly name and whether it was preregistered or added dynamically. Returns: dict: A dictionary of service information with the following keys set: long_name (string): The user friendly name of the service preregistered (bool): Whether the service was explicitly called out as a preregistered service. """ return self._loop.run_coroutine(self._client.service_info(name))
[ "def", "service_info", "(", "self", ",", "name", ")", ":", "return", "self", ".", "_loop", ".", "run_coroutine", "(", "self", ".", "_client", ".", "service_info", "(", "name", ")", ")" ]
Pull descriptive info of a service by name. Information returned includes the service's user friendly name and whether it was preregistered or added dynamically. Returns: dict: A dictionary of service information with the following keys set: long_name (string): The user friendly name of the service preregistered (bool): Whether the service was explicitly called out as a preregistered service.
[ "Pull", "descriptive", "info", "of", "a", "service", "by", "name", "." ]
python
train
40.933333
novopl/peltak
src/peltak/extra/gitflow/logic/common.py
https://github.com/novopl/peltak/blob/b627acc019e3665875fe76cdca0a14773b69beaa/src/peltak/extra/gitflow/logic/common.py#L61-L77
def assert_on_branch(branch_name): # type: (str) -> None """ Print error and exit if *branch_name* is not the current branch. Args: branch_name (str): The supposed name of the current branch. """ branch = git.current_branch(refresh=True) if branch.name != branch_name: if context.get('pretend', False): log.info("Would assert that you're on a <33>{}<32> branch", branch_name) else: log.err("You're not on a <33>{}<31> branch!", branch_name) sys.exit(1)
[ "def", "assert_on_branch", "(", "branch_name", ")", ":", "# type: (str) -> None", "branch", "=", "git", ".", "current_branch", "(", "refresh", "=", "True", ")", "if", "branch", ".", "name", "!=", "branch_name", ":", "if", "context", ".", "get", "(", "'pretend'", ",", "False", ")", ":", "log", ".", "info", "(", "\"Would assert that you're on a <33>{}<32> branch\"", ",", "branch_name", ")", "else", ":", "log", ".", "err", "(", "\"You're not on a <33>{}<31> branch!\"", ",", "branch_name", ")", "sys", ".", "exit", "(", "1", ")" ]
Print error and exit if *branch_name* is not the current branch. Args: branch_name (str): The supposed name of the current branch.
[ "Print", "error", "and", "exit", "if", "*", "branch_name", "*", "is", "not", "the", "current", "branch", "." ]
python
train
32.647059
ska-sa/spead2
spead2/__init__.py
https://github.com/ska-sa/spead2/blob/cac95fd01d8debaa302d2691bd26da64b7828bc6/spead2/__init__.py#L284-L292
def compatible_shape(self, shape): """Determine whether `shape` is compatible with the (possibly variable-sized) shape for this descriptor""" if len(shape) != len(self.shape): return False for x, y in zip(self.shape, shape): if x is not None and x != y: return False return True
[ "def", "compatible_shape", "(", "self", ",", "shape", ")", ":", "if", "len", "(", "shape", ")", "!=", "len", "(", "self", ".", "shape", ")", ":", "return", "False", "for", "x", ",", "y", "in", "zip", "(", "self", ".", "shape", ",", "shape", ")", ":", "if", "x", "is", "not", "None", "and", "x", "!=", "y", ":", "return", "False", "return", "True" ]
Determine whether `shape` is compatible with the (possibly variable-sized) shape for this descriptor
[ "Determine", "whether", "shape", "is", "compatible", "with", "the", "(", "possibly", "variable", "-", "sized", ")", "shape", "for", "this", "descriptor" ]
python
train
38.888889
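`compatible_shape` above treats `None` dimensions as wildcards for variable-sized axes. The same check can be stated as a rank test plus `all()` over paired axes, shown standalone:

```python
def compatible_shape(descriptor_shape, shape):
    """True if `shape` matches, with None in the descriptor as a wildcard."""
    if len(shape) != len(descriptor_shape):
        return False
    return all(x is None or x == y for x, y in zip(descriptor_shape, shape))

print(compatible_shape((None, 3), (7, 3)))     # True: first axis is variable
print(compatible_shape((None, 3), (7, 4)))     # False: fixed axis differs
print(compatible_shape((None, 3), (7, 3, 1)))  # False: rank differs
```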
urinieto/msaf
msaf/pymf/chnmf.py
https://github.com/urinieto/msaf/blob/9dbb57d77a1310465a65cc40f1641d083ca74385/msaf/pymf/chnmf.py#L193-L218
def factorize(self, show_progress=False, compute_w=True, compute_h=True, compute_err=True, niter=1): """ Factorize s.t. WH = data Parameters ---------- show_progress : bool print some extra information to stdout. compute_h : bool iteratively update values for H. compute_w : bool iteratively update values for W. compute_err : bool compute Frobenius norm |data-WH| after each update and store it to .ferr[k]. Updated Values -------------- .W : updated values for W. .H : updated values for H. .ferr : Frobenius norm |data-WH|. """ AA.factorize(self, niter=1, show_progress=show_progress, compute_w=compute_w, compute_h=compute_h, compute_err=compute_err)
[ "def", "factorize", "(", "self", ",", "show_progress", "=", "False", ",", "compute_w", "=", "True", ",", "compute_h", "=", "True", ",", "compute_err", "=", "True", ",", "niter", "=", "1", ")", ":", "AA", ".", "factorize", "(", "self", ",", "niter", "=", "1", ",", "show_progress", "=", "show_progress", ",", "compute_w", "=", "compute_w", ",", "compute_h", "=", "compute_h", ",", "compute_err", "=", "compute_err", ")" ]
Factorize s.t. WH = data Parameters ---------- show_progress : bool print some extra information to stdout. compute_h : bool iteratively update values for H. compute_w : bool iteratively update values for W. compute_err : bool compute Frobenius norm |data-WH| after each update and store it to .ferr[k]. Updated Values -------------- .W : updated values for W. .H : updated values for H. .ferr : Frobenius norm |data-WH|.
[ "Factorize", "s", ".", "t", ".", "WH", "=", "data" ]
python
test
36.230769
ellmetha/django-machina
machina/apps/forum/abstract_models.py
https://github.com/ellmetha/django-machina/blob/89ac083c1eaf1cfdeae6686ee094cc86362e8c69/machina/apps/forum/abstract_models.py#L134-L145
def clean(self): """ Validates the forum instance. """ super().clean() if self.parent and self.parent.is_link: raise ValidationError(_('A forum can not have a link forum as parent')) if self.is_category and self.parent and self.parent.is_category: raise ValidationError(_('A category can not have another category as parent')) if self.is_link and not self.link: raise ValidationError(_('A link forum must have a link associated with it'))
[ "def", "clean", "(", "self", ")", ":", "super", "(", ")", ".", "clean", "(", ")", "if", "self", ".", "parent", "and", "self", ".", "parent", ".", "is_link", ":", "raise", "ValidationError", "(", "_", "(", "'A forum can not have a link forum as parent'", ")", ")", "if", "self", ".", "is_category", "and", "self", ".", "parent", "and", "self", ".", "parent", ".", "is_category", ":", "raise", "ValidationError", "(", "_", "(", "'A category can not have another category as parent'", ")", ")", "if", "self", ".", "is_link", "and", "not", "self", ".", "link", ":", "raise", "ValidationError", "(", "_", "(", "'A link forum must have a link associated with it'", ")", ")" ]
Validates the forum instance.
[ "Validates", "the", "forum", "instance", "." ]
python
train
42.166667
cdgriffith/Reusables
reusables/tasker.py
https://github.com/cdgriffith/Reusables/blob/bc32f72e4baee7d76a6d58b88fcb23dd635155cd/reusables/tasker.py#L271-L277
def run(self): """Start the main loop as a background process. *nix only""" if win_based: raise NotImplementedError("Please run main_loop, " "backgrounding not supported on Windows") self.background_process = mp.Process(target=self.main_loop) self.background_process.start()
[ "def", "run", "(", "self", ")", ":", "if", "win_based", ":", "raise", "NotImplementedError", "(", "\"Please run main_loop, \"", "\"backgrounding not supported on Windows\"", ")", "self", ".", "background_process", "=", "mp", ".", "Process", "(", "target", "=", "self", ".", "main_loop", ")", "self", ".", "background_process", ".", "start", "(", ")" ]
Start the main loop as a background process. *nix only
[ "Start", "the", "main", "loop", "as", "a", "background", "process", ".", "*", "nix", "only" ]
python
train
50
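The record above backgrounds the main loop with `multiprocessing.Process` and refuses to do so on Windows, where the spawned, never-joined process is problematic. A self-contained sketch of the same `mp.Process` usage, joining here so the example terminates cleanly:

```python
import multiprocessing as mp
import time

def main_loop():
    for i in range(3):
        print('tick', i)
        time.sleep(0.1)

if __name__ == '__main__':  # required under the spawn start method
    proc = mp.Process(target=main_loop)
    proc.start()  # main_loop now runs in a separate OS process
    proc.join()   # the record above instead leaves it running in the background
```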
rosenbrockc/fortpy
fortpy/parsers/variable.py
https://github.com/rosenbrockc/fortpy/blob/1ed0757c52d549e41d9d44bdea68cb89529293a5/fortpy/parsers/variable.py#L80-L123
def _separate_multiple_def(self, defstring, parent, refstring, refline): """Separates the text after '::' in a variable definition to extract all the variables, their dimensions and default values. """ import pyparsing nester = pyparsing.nestedExpr('(', ')') try: parsed = nester.parseString("(" + re.sub("=(>?)", " =\\1 ", defstring) + ")").asList()[0] except pyparsing.ParseException as err: from fortpy import msg repl = (parent.name, refline[0], refstring, defstring, ''.join(['-']*(err.loc-1))+'^', err.msg) msg.err("parsing variable from '{}:{} >> {}': \n'{}'\n{} {}.".format(*repl)) raise i = 0 clean = [] while i < len(parsed): if (isinstance(parsed[i], str) and not re.match("=>?", parsed[i]) and i+1 < len(parsed) and isinstance(parsed[i+1], list)): clean.append((parsed[i], parsed[i+1])) i += 2 elif isinstance(parsed[i], str) and parsed[i] == ",": i += 1 else: clean.append(parsed[i]) i += 1 #Now pass through again to handle the default values. i = 0 ready = [] while i < len(clean): if isinstance(clean[i], str) and re.match("=>?", clean[i]): ready.pop() if ">" in clean[i]: ready.append([clean[i-1], ("> " + clean[i+1][0], clean[i+1][1])]) else: ready.append([clean[i-1], clean[i+1]]) i += 2 else: ready.append(clean[i]) i += 1 return ready
[ "def", "_separate_multiple_def", "(", "self", ",", "defstring", ",", "parent", ",", "refstring", ",", "refline", ")", ":", "import", "pyparsing", "nester", "=", "pyparsing", ".", "nestedExpr", "(", "'('", ",", "')'", ")", "try", ":", "parsed", "=", "nester", ".", "parseString", "(", "\"(\"", "+", "re", ".", "sub", "(", "\"=(>?)\"", ",", "\" =\\\\1 \"", ",", "defstring", ")", "+", "\")\"", ")", ".", "asList", "(", ")", "[", "0", "]", "except", "pyparsing", ".", "ParseException", "as", "err", ":", "from", "fortpy", "import", "msg", "repl", "=", "(", "parent", ".", "name", ",", "refline", "[", "0", "]", ",", "refstring", ",", "defstring", ",", "''", ".", "join", "(", "[", "'-'", "]", "*", "(", "err", ".", "loc", "-", "1", ")", ")", "+", "'^'", ",", "err", ".", "msg", ")", "msg", ".", "err", "(", "\"parsing variable from '{}:{} >> {}': \\n'{}'\\n{} {}.\"", ".", "format", "(", "*", "repl", ")", ")", "raise", "i", "=", "0", "clean", "=", "[", "]", "while", "i", "<", "len", "(", "parsed", ")", ":", "if", "(", "isinstance", "(", "parsed", "[", "i", "]", ",", "str", ")", "and", "not", "re", ".", "match", "(", "\"=>?\"", ",", "parsed", "[", "i", "]", ")", "and", "i", "+", "1", "<", "len", "(", "parsed", ")", "and", "isinstance", "(", "parsed", "[", "i", "+", "1", "]", ",", "list", ")", ")", ":", "clean", ".", "append", "(", "(", "parsed", "[", "i", "]", ",", "parsed", "[", "i", "+", "1", "]", ")", ")", "i", "+=", "2", "elif", "isinstance", "(", "parsed", "[", "i", "]", ",", "str", ")", "and", "parsed", "[", "i", "]", "==", "\",\"", ":", "i", "+=", "1", "else", ":", "clean", ".", "append", "(", "parsed", "[", "i", "]", ")", "i", "+=", "1", "#Now pass through again to handle the default values.", "i", "=", "0", "ready", "=", "[", "]", "while", "i", "<", "len", "(", "clean", ")", ":", "if", "isinstance", "(", "clean", "[", "i", "]", ",", "str", ")", "and", "re", ".", "match", "(", "\"=>?\"", ",", "clean", "[", "i", "]", ")", ":", "ready", ".", "pop", "(", ")", "if", "\">\"", "in", "clean", "[", "i", "]", ":", "ready", ".", "append", "(", "[", "clean", "[", "i", "-", "1", "]", ",", "(", "\"> \"", "+", "clean", "[", "i", "+", "1", "]", "[", "0", "]", ",", "clean", "[", "i", "+", "1", "]", "[", "1", "]", ")", "]", ")", "else", ":", "ready", ".", "append", "(", "[", "clean", "[", "i", "-", "1", "]", ",", "clean", "[", "i", "+", "1", "]", "]", ")", "i", "+=", "2", "else", ":", "ready", ".", "append", "(", "clean", "[", "i", "]", ")", "i", "+=", "1", "return", "ready" ]
Separates the text after '::' in a variable definition to extract all the variables, their dimensions and default values.
[ "Separates", "the", "text", "after", "::", "in", "a", "variable", "definition", "to", "extract", "all", "the", "variables", "their", "dimensions", "and", "default", "values", "." ]
python
train
39
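The heavy lifting in the record above is done by `pyparsing.nestedExpr`, which turns balanced parentheses into nested Python lists, splitting the remaining content on whitespace. The core call in isolation:

```python
import pyparsing

nester = pyparsing.nestedExpr('(', ')')
print(nester.parseString("(a b (c d))").asList()[0])  # ['a', 'b', ['c', 'd']]
```

This is also why the record first wraps `defstring` in an extra pair of parentheses and pads `=`/`=>` with spaces: it forces names, dimension lists and assignment markers to come out as separate tokens.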
brainiak/brainiak
brainiak/factoranalysis/tfa.py
https://github.com/brainiak/brainiak/blob/408f12dec2ff56559a26873a848a09e4c8facfeb/brainiak/factoranalysis/tfa.py#L415-L428
def set_centers_mean_cov(self, estimation, centers_mean_cov): """Set estimation on centers Parameters ---------- estimation : 1D array Either prior or posterior estimation centers_mean_cov : 2D array, in shape [K, n_dim] Estimation on centers """ estimation[self.map_offset[2]:self.map_offset[3]] =\ centers_mean_cov.ravel()
[ "def", "set_centers_mean_cov", "(", "self", ",", "estimation", ",", "centers_mean_cov", ")", ":", "estimation", "[", "self", ".", "map_offset", "[", "2", "]", ":", "self", ".", "map_offset", "[", "3", "]", "]", "=", "centers_mean_cov", ".", "ravel", "(", ")" ]
Set estimation on centers Parameters ---------- estimation : 1D array Either prior or posterior estimation centers_mean_cov : 2D array, in shape [K, n_dim] Estimation on centers
[ "Set", "estimation", "on", "centers" ]
python
train
28.428571
mishan/twemredis-py
twemredis.py
https://github.com/mishan/twemredis-py/blob/cfc787d90482eb6a2037cfbf4863bd144582662d/twemredis.py#L132-L143
def get_key(self, key_type, key_id): """ get_key constructs a key given a key type and a key id. Keyword arguments: key_type -- the type of key (e.g.: 'friend_request') key_id -- the key id (e.g.: '12345') returns a string representing the key (e.g.: 'friend_request:{12345}') """ return "{0}:{1}{2}{3}".format(key_type, self._hash_start, key_id, self._hash_stop)
[ "def", "get_key", "(", "self", ",", "key_type", ",", "key_id", ")", ":", "return", "\"{0}:{1}{2}{3}\"", ".", "format", "(", "key_type", ",", "self", ".", "_hash_start", ",", "key_id", ",", "self", ".", "_hash_stop", ")" ]
get_key constructs a key given a key type and a key id. Keyword arguments: key_type -- the type of key (e.g.: 'friend_request') key_id -- the key id (e.g.: '12345') returns a string representing the key (e.g.: 'friend_request:{12345}')
[ "get_key", "constructs", "a", "key", "given", "a", "key", "type", "and", "a", "key", "id", ".", "Keyword", "arguments", ":", "key_type", "--", "the", "type", "of", "key", "(", "e", ".", "g", ".", ":", "friend_request", ")", "key_id", "--", "the", "key", "id", "(", "e", ".", "g", ".", ":", "12345", ")" ]
python
train
38.583333
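The braces inserted by `get_key` above are twemproxy-style hash tags: only the `{...}` portion of the key is hashed when choosing a shard, so related keys can be pinned together. A standalone sketch with the delimiters as plain defaults:

```python
def get_key(key_type, key_id, hash_start='{', hash_stop='}'):
    # Only the {key_id} part is hashed by twemproxy for shard selection.
    return "{0}:{1}{2}{3}".format(key_type, hash_start, key_id, hash_stop)

print(get_key('friend_request', '12345'))  # friend_request:{12345}
```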
mitsei/dlkit
dlkit/json_/authentication/sessions.py
https://github.com/mitsei/dlkit/blob/445f968a175d61c8d92c0f617a3c17dc1dc7c584/dlkit/json_/authentication/sessions.py#L255-L280
def get_agents_by_genus_type(self, agent_genus_type): """Gets an ``AgentList`` corresponding to the given agent genus ``Type`` which does not include agents of genus types derived from the specified ``Type``. In plenary mode, the returned list contains all known agents or an error results. Otherwise, the returned list may contain only those agents that are accessible through this session. arg: agent_genus_type (osid.type.Type): an agent genus type return: (osid.authentication.AgentList) - the returned ``Agent`` list raise: NullArgument - ``agent_genus_type`` is ``null`` raise: OperationFailed - unable to complete request raise: PermissionDenied - authorization failure *compliance: mandatory -- This method must be implemented.* """ # Implemented from template for # osid.resource.ResourceLookupSession.get_resources_by_genus_type # NOTE: This implementation currently ignores plenary view collection = JSONClientValidated('authentication', collection='Agent', runtime=self._runtime) result = collection.find( dict({'genusTypeId': str(agent_genus_type)}, **self._view_filter())).sort('_id', DESCENDING) return objects.AgentList(result, runtime=self._runtime, proxy=self._proxy)
[ "def", "get_agents_by_genus_type", "(", "self", ",", "agent_genus_type", ")", ":", "# Implemented from template for", "# osid.resource.ResourceLookupSession.get_resources_by_genus_type", "# NOTE: This implementation currently ignores plenary view", "collection", "=", "JSONClientValidated", "(", "'authentication'", ",", "collection", "=", "'Agent'", ",", "runtime", "=", "self", ".", "_runtime", ")", "result", "=", "collection", ".", "find", "(", "dict", "(", "{", "'genusTypeId'", ":", "str", "(", "agent_genus_type", ")", "}", ",", "*", "*", "self", ".", "_view_filter", "(", ")", ")", ")", ".", "sort", "(", "'_id'", ",", "DESCENDING", ")", "return", "objects", ".", "AgentList", "(", "result", ",", "runtime", "=", "self", ".", "_runtime", ",", "proxy", "=", "self", ".", "_proxy", ")" ]
Gets an ``AgentList`` corresponding to the given agent genus ``Type`` which does not include agents of genus types derived from the specified ``Type``. In plenary mode, the returned list contains all known agents or an error results. Otherwise, the returned list may contain only those agents that are accessible through this session. arg: agent_genus_type (osid.type.Type): an agent genus type return: (osid.authentication.AgentList) - the returned ``Agent`` list raise: NullArgument - ``agent_genus_type`` is ``null`` raise: OperationFailed - unable to complete request raise: PermissionDenied - authorization failure *compliance: mandatory -- This method must be implemented.*
[ "Gets", "an", "AgentList", "corresponding", "to", "the", "given", "agent", "genus", "Type", "which", "does", "not", "include", "agents", "of", "genus", "types", "derived", "from", "the", "specified", "Type", "." ]
python
train
55.076923
cameronbwhite/Flask-CAS
flask_cas/routing.py
https://github.com/cameronbwhite/Flask-CAS/blob/f85173938654cb9b9316a5c869000b74b008422e/flask_cas/routing.py#L90-L141
def validate(ticket): """ Will attempt to validate the ticket. If validation fails, then False is returned. If validation is successful, then True is returned and the validated username is saved in the session under the key `CAS_USERNAME_SESSION_KEY` while the validated attributes dictionary is saved under the key 'CAS_ATTRIBUTES_SESSION_KEY'. """ cas_username_session_key = current_app.config['CAS_USERNAME_SESSION_KEY'] cas_attributes_session_key = current_app.config['CAS_ATTRIBUTES_SESSION_KEY'] current_app.logger.debug("validating token {0}".format(ticket)) cas_validate_url = create_cas_validate_url( current_app.config['CAS_SERVER'], current_app.config['CAS_VALIDATE_ROUTE'], flask.url_for('.login', origin=flask.session.get('CAS_AFTER_LOGIN_SESSION_URL'), _external=True), ticket) current_app.logger.debug("Making GET request to {0}".format( cas_validate_url)) xml_from_dict = {} isValid = False try: xmldump = urlopen(cas_validate_url).read().strip().decode('utf8', 'ignore') xml_from_dict = parse(xmldump) isValid = True if "cas:authenticationSuccess" in xml_from_dict["cas:serviceResponse"] else False except ValueError: current_app.logger.error("CAS returned unexpected result") if isValid: current_app.logger.debug("valid") xml_from_dict = xml_from_dict["cas:serviceResponse"]["cas:authenticationSuccess"] username = xml_from_dict["cas:user"] flask.session[cas_username_session_key] = username if "cas:attributes" in xml_from_dict: attributes = xml_from_dict["cas:attributes"] if "cas:memberOf" in attributes: attributes["cas:memberOf"] = attributes["cas:memberOf"].lstrip('[').rstrip(']').split(',') for group_number in range(0, len(attributes['cas:memberOf'])): attributes['cas:memberOf'][group_number] = attributes['cas:memberOf'][group_number].lstrip(' ').rstrip(' ') flask.session[cas_attributes_session_key] = attributes else: current_app.logger.debug("invalid") return isValid
[ "def", "validate", "(", "ticket", ")", ":", "cas_username_session_key", "=", "current_app", ".", "config", "[", "'CAS_USERNAME_SESSION_KEY'", "]", "cas_attributes_session_key", "=", "current_app", ".", "config", "[", "'CAS_ATTRIBUTES_SESSION_KEY'", "]", "current_app", ".", "logger", ".", "debug", "(", "\"validating token {0}\"", ".", "format", "(", "ticket", ")", ")", "cas_validate_url", "=", "create_cas_validate_url", "(", "current_app", ".", "config", "[", "'CAS_SERVER'", "]", ",", "current_app", ".", "config", "[", "'CAS_VALIDATE_ROUTE'", "]", ",", "flask", ".", "url_for", "(", "'.login'", ",", "origin", "=", "flask", ".", "session", ".", "get", "(", "'CAS_AFTER_LOGIN_SESSION_URL'", ")", ",", "_external", "=", "True", ")", ",", "ticket", ")", "current_app", ".", "logger", ".", "debug", "(", "\"Making GET request to {0}\"", ".", "format", "(", "cas_validate_url", ")", ")", "xml_from_dict", "=", "{", "}", "isValid", "=", "False", "try", ":", "xmldump", "=", "urlopen", "(", "cas_validate_url", ")", ".", "read", "(", ")", ".", "strip", "(", ")", ".", "decode", "(", "'utf8'", ",", "'ignore'", ")", "xml_from_dict", "=", "parse", "(", "xmldump", ")", "isValid", "=", "True", "if", "\"cas:authenticationSuccess\"", "in", "xml_from_dict", "[", "\"cas:serviceResponse\"", "]", "else", "False", "except", "ValueError", ":", "current_app", ".", "logger", ".", "error", "(", "\"CAS returned unexpected result\"", ")", "if", "isValid", ":", "current_app", ".", "logger", ".", "debug", "(", "\"valid\"", ")", "xml_from_dict", "=", "xml_from_dict", "[", "\"cas:serviceResponse\"", "]", "[", "\"cas:authenticationSuccess\"", "]", "username", "=", "xml_from_dict", "[", "\"cas:user\"", "]", "flask", ".", "session", "[", "cas_username_session_key", "]", "=", "username", "if", "\"cas:attributes\"", "in", "xml_from_dict", ":", "attributes", "=", "xml_from_dict", "[", "\"cas:attributes\"", "]", "if", "\"cas:memberOf\"", "in", "attributes", ":", "attributes", "[", "\"cas:memberOf\"", "]", "=", "attributes", "[", "\"cas:memberOf\"", "]", ".", "lstrip", "(", "'['", ")", ".", "rstrip", "(", "']'", ")", ".", "split", "(", "','", ")", "for", "group_number", "in", "range", "(", "0", ",", "len", "(", "attributes", "[", "'cas:memberOf'", "]", ")", ")", ":", "attributes", "[", "'cas:memberOf'", "]", "[", "group_number", "]", "=", "attributes", "[", "'cas:memberOf'", "]", "[", "group_number", "]", ".", "lstrip", "(", "' '", ")", ".", "rstrip", "(", "' '", ")", "flask", ".", "session", "[", "cas_attributes_session_key", "]", "=", "attributes", "else", ":", "current_app", ".", "logger", ".", "debug", "(", "\"invalid\"", ")", "return", "isValid" ]
Will attempt to validate the ticket. If validation fails, then False is returned. If validation is successful, then True is returned and the validated username is saved in the session under the key `CAS_USERNAME_SESSION_KEY` while the validated attributes dictionary is saved under the key 'CAS_ATTRIBUTES_SESSION_KEY'.
[ "Will", "attempt", "to", "validate", "the", "ticket", ".", "If", "validation", "fails", "then", "False", "is", "returned", ".", "If", "validation", "is", "successful", "then", "True", "is", "returned", "and", "the", "validated", "username", "is", "saved", "in", "the", "session", "under", "the", "key", "CAS_USERNAME_SESSION_KEY", "while", "tha", "validated", "attributes", "dictionary", "is", "saved", "under", "the", "key", "CAS_ATTRIBUTES_SESSION_KEY", "." ]
python
train
41.211538
textbook/aslack
aslack/core.py
https://github.com/textbook/aslack/blob/9ac6a44e4464180109fa4be130ad7a980a9d1acc/aslack/core.py#L37-L62
def url_builder(self, endpoint, *, root=None, params=None, url_params=None): """Create a URL for the specified endpoint. Arguments: endpoint (:py:class:`str`): The API endpoint to access. root: (:py:class:`str`, optional): The root URL for the service API. params: (:py:class:`dict`, optional): The values to format into the created URL (defaults to ``None``). url_params: (:py:class:`dict`, optional): Parameters to add to the end of the URL (defaults to ``None``). Returns: :py:class:`str`: The resulting URL. """ if root is None: root = self.ROOT scheme, netloc, path, _, _ = urlsplit(root) return urlunsplit(( scheme, netloc, urljoin(path, endpoint), urlencode(url_params or {}), '', )).format(**params or {})
[ "def", "url_builder", "(", "self", ",", "endpoint", ",", "*", ",", "root", "=", "None", ",", "params", "=", "None", ",", "url_params", "=", "None", ")", ":", "if", "root", "is", "None", ":", "root", "=", "self", ".", "ROOT", "scheme", ",", "netloc", ",", "path", ",", "_", ",", "_", "=", "urlsplit", "(", "root", ")", "return", "urlunsplit", "(", "(", "scheme", ",", "netloc", ",", "urljoin", "(", "path", ",", "endpoint", ")", ",", "urlencode", "(", "url_params", "or", "{", "}", ")", ",", "''", ",", ")", ")", ".", "format", "(", "*", "*", "params", "or", "{", "}", ")" ]
Create a URL for the specified endpoint. Arguments: endpoint (:py:class:`str`): The API endpoint to access. root: (:py:class:`str`, optional): The root URL for the service API. params: (:py:class:`dict`, optional): The values to format into the created URL (defaults to ``None``). url_params: (:py:class:`dict`, optional): Parameters to add to the end of the URL (defaults to ``None``). Returns: :py:class:`str`: The resulting URL.
[ "Create", "a", "URL", "for", "the", "specified", "endpoint", "." ]
python
valid
35.153846
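The same composition as url_builder above, reproduced with only the standard library so it can be run directly; the root URL, endpoint template, and query values here are invented for illustration.

from urllib.parse import urlencode, urljoin, urlsplit, urlunsplit

root = "https://slack.com/api/"   # stand-in for self.ROOT
endpoint = "{method}"             # endpoints may carry str.format fields
scheme, netloc, path, _, _ = urlsplit(root)
url = urlunsplit(
    (scheme, netloc, urljoin(path, endpoint), urlencode({"limit": 10}), "")
).format(method="channels.list")
print(url)  # https://slack.com/api/channels.list?limit=10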
grundic/yagocd
yagocd/client.py
https://github.com/grundic/yagocd/blob/4c75336ae6f107c8723d37b15e52169151822127/yagocd/client.py#L255-L263
def packages(self): """ Property for accessing :class:`PackageManager` instance, which is used to manage packages. :rtype: yagocd.resources.package.PackageManager """ if self._package_manager is None: self._package_manager = PackageManager(session=self._session) return self._package_manager
[ "def", "packages", "(", "self", ")", ":", "if", "self", ".", "_package_manager", "is", "None", ":", "self", ".", "_package_manager", "=", "PackageManager", "(", "session", "=", "self", ".", "_session", ")", "return", "self", ".", "_package_manager" ]
Property for accessing :class:`PackageManager` instance, which is used to manage packages. :rtype: yagocd.resources.package.PackageManager
[ "Property", "for", "accessing", ":", "class", ":", "PackageManager", "instance", "which", "is", "used", "to", "manage", "packages", "." ]
python
train
38.222222
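The `packages` property above is an instance of the lazily-initialised accessor pattern; a self-contained sketch with stand-in classes (the PackageManager here is a dummy, not yagocd's).

class PackageManager(object):
    def __init__(self, session):
        self.session = session

class Client(object):
    def __init__(self, session):
        self._session = session
        self._package_manager = None

    @property
    def packages(self):
        # Build the manager on first access, then hand back the cached one.
        if self._package_manager is None:
            self._package_manager = PackageManager(session=self._session)
        return self._package_manager

client = Client(session=object())
assert client.packages is client.packages  # same cached instance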
tcalmant/ipopo
pelix/shell/completion/ipopo.py
https://github.com/tcalmant/ipopo/blob/2f9ae0c44cd9c34ef1a9d50837b3254e75678eb1/pelix/shell/completion/ipopo.py#L112-L136
def complete(
        self, config, prompt, session, context, current_arguments, current
    ):
        # type: (CompletionInfo, str, ShellSession, BundleContext, List[str], str) -> List[str]
        """
        Returns the list of service IDs matching the current state

        :param config: Configuration of the current completion
        :param prompt: Shell prompt (for re-display)
        :param session: Shell session (to display in shell)
        :param context: Bundle context of the Shell bundle
        :param current_arguments: Current arguments (without the command itself)
        :param current: Current word
        :return: A list of matches
        """
        # Register a method to display helpful completion
        self.set_display_hook(self.display_hook, prompt, session, context)

        # Return a list of component factories
        with use_ipopo(context) as ipopo:
            return [
                "{} ".format(factory)
                for factory in ipopo.get_factories()
                if factory.startswith(current)
            ]
[ "def", "complete", "(", "self", ",", "config", ",", "prompt", ",", "session", ",", "context", ",", "current_arguments", ",", "current", ")", ":", "# type: (CompletionInfo, str, ShellSession, BundleContext, List[str], str) -> List[str]", "# Register a method to display helpful completion", "self", ".", "set_display_hook", "(", "self", ".", "display_hook", ",", "prompt", ",", "session", ",", "context", ")", "# Return a list of component factories", "with", "use_ipopo", "(", "context", ")", "as", "ipopo", ":", "return", "[", "\"{} \"", ".", "format", "(", "factory", ")", "for", "factory", "in", "ipopo", ".", "get_factories", "(", ")", "if", "factory", ".", "startswith", "(", "current", ")", "]" ]
Returns the list of service IDs matching the current state :param config: Configuration of the current completion :param prompt: Shell prompt (for re-display) :param session: Shell session (to display in shell) :param context: Bundle context of the Shell bundle :param current_arguments: Current arguments (without the command itself) :param current: Current word :return: A list of matches
[ "Returns", "the", "list", "of", "services", "IDs", "matching", "the", "current", "state" ]
python
train
41.76
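Stripped of the shell plumbing, the completion above is a prefix filter; each match carries a trailing space, presumably so the shell can move straight on to the next argument. A standalone sketch with invented factory names:

factories = ["demo.factory.led", "demo.factory.lcd", "other.factory"]
current = "demo"
matches = ["{} ".format(f) for f in factories if f.startswith(current)]
print(matches)  # ['demo.factory.led ', 'demo.factory.lcd ']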
novopl/peltak
src/peltak/core/versioning.py
https://github.com/novopl/peltak/blob/b627acc019e3665875fe76cdca0a14773b69beaa/src/peltak/core/versioning.py#L253-L266
def read(self): # type: () -> Optional[str] """ Read the project version from .py file. This will regex search in the file for a ``__version__ = VERSION_STRING`` and read the version string. """ with open(self.version_file) as fp: version = fp.read().strip() if is_valid(version): return version return None
[ "def", "read", "(", "self", ")", ":", "# type: () -> Optional[str]", "with", "open", "(", "self", ".", "version_file", ")", "as", "fp", ":", "version", "=", "fp", ".", "read", "(", ")", ".", "strip", "(", ")", "if", "is_valid", "(", "version", ")", ":", "return", "version", "return", "None" ]
Read the project version from .py file. This will regex search in the file for a ``__version__ = VERSION_STRING`` and read the version string.
[ "Read", "the", "project", "version", "from", ".", "py", "file", "." ]
python
train
28.357143
grycap/cpyutils
config.py
https://github.com/grycap/cpyutils/blob/fa966fc6d2ae1e1e799e19941561aa79b617f1b1/config.py#L234-L257
def existing_config_files(): """ Method that calculates all the configuration files that are valid, according to the 'set_paths' and other methods for this module. """ global _ETC_PATHS global _MAIN_CONFIG_FILE global _CONFIG_VAR_INCLUDE global _CONFIG_FILTER config_files = [] for possible in _ETC_PATHS: config_files = config_files + glob.glob("%s%s" % (possible, _MAIN_CONFIG_FILE)) if _CONFIG_VAR_INCLUDE != "": main_config = Configuration("general", { _CONFIG_VAR_INCLUDE:"" }, _MAIN_CONFIG_FILE) if main_config.CONFIG_DIR != "": for possible in _ETC_PATHS: config_files = config_files + glob.glob("%s%s/%s" % (possible, main_config.CONFIG_DIR, _CONFIG_FILTER)) return config_files
[ "def", "existing_config_files", "(", ")", ":", "global", "_ETC_PATHS", "global", "_MAIN_CONFIG_FILE", "global", "_CONFIG_VAR_INCLUDE", "global", "_CONFIG_FILTER", "config_files", "=", "[", "]", "for", "possible", "in", "_ETC_PATHS", ":", "config_files", "=", "config_files", "+", "glob", ".", "glob", "(", "\"%s%s\"", "%", "(", "possible", ",", "_MAIN_CONFIG_FILE", ")", ")", "if", "_CONFIG_VAR_INCLUDE", "!=", "\"\"", ":", "main_config", "=", "Configuration", "(", "\"general\"", ",", "{", "_CONFIG_VAR_INCLUDE", ":", "\"\"", "}", ",", "_MAIN_CONFIG_FILE", ")", "if", "main_config", ".", "CONFIG_DIR", "!=", "\"\"", ":", "for", "possible", "in", "_ETC_PATHS", ":", "config_files", "=", "config_files", "+", "glob", ".", "glob", "(", "\"%s%s/%s\"", "%", "(", "possible", ",", "main_config", ".", "CONFIG_DIR", ",", "_CONFIG_FILTER", ")", ")", "return", "config_files" ]
Method that calculates all the configuration files that are valid, according to the 'set_paths' and other methods for this module.
[ "Method", "that", "calculates", "all", "the", "configuration", "files", "that", "are", "valid", "according", "to", "the", "set_paths", "and", "other", "methods", "for", "this", "module", "." ]
python
train
33.791667
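The lookup in existing_config_files boils down to globbing each candidate etc-path for the main config name and then, optionally, a filtered config directory; a runnable sketch with made-up paths standing in for the module globals.

import glob

etc_paths = ["/etc/myapp/", "./etc/"]  # stand-in for _ETC_PATHS
main_config_file = "myapp.cfg"         # stand-in for _MAIN_CONFIG_FILE
config_filter = "*.cfg"                # stand-in for _CONFIG_FILTER
config_dir = "conf.d"                  # would come from the main config

config_files = []
for possible in etc_paths:
    config_files += glob.glob("%s%s" % (possible, main_config_file))
    config_files += glob.glob("%s%s/%s" % (possible, config_dir, config_filter))
print(config_files)  # whatever actually exists on this machine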
ktdreyer/txkoji
txkoji/cache.py
https://github.com/ktdreyer/txkoji/blob/a7de380f29f745bf11730b27217208f6d4da7733/txkoji/cache.py#L29-L43
def get_name(self, type_, id_): """ Read a cached name if available. :param type_: str, "owner" or "tag" :param id_: int, eg. 123456 :returns: str, or None """ cachefile = self.filename(type_, id_) try: with open(cachefile, 'r') as f: return f.read() except (OSError, IOError) as e: if e.errno != errno.ENOENT: raise
[ "def", "get_name", "(", "self", ",", "type_", ",", "id_", ")", ":", "cachefile", "=", "self", ".", "filename", "(", "type_", ",", "id_", ")", "try", ":", "with", "open", "(", "cachefile", ",", "'r'", ")", "as", "f", ":", "return", "f", ".", "read", "(", ")", "except", "(", "OSError", ",", "IOError", ")", "as", "e", ":", "if", "e", ".", "errno", "!=", "errno", ".", "ENOENT", ":", "raise" ]
Read a cached name if available. :param type_: str, "owner" or "tag" :param id_: int, eg. 123456 :returns: str, or None
[ "Read", "a", "cached", "name", "if", "available", "." ]
python
train
28.733333
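The error handling in get_name treats a missing cache file as a cache miss and re-raises everything else; the same pattern standalone (the path is arbitrary).

import errno

def read_cached(cachefile):
    try:
        with open(cachefile, "r") as f:
            return f.read()
    except (OSError, IOError) as e:
        if e.errno != errno.ENOENT:  # only "no such file" means a miss
            raise
        return None

print(read_cached("/tmp/definitely-missing-cache-entry"))  # None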
bkabrda/flask-whooshee
flask_whooshee.py
https://github.com/bkabrda/flask-whooshee/blob/773fc51ed53043bd5e92c65eadef5663845ae8c4/flask_whooshee.py#L402-L407
def camel_to_snake(self, s): """Constructs nice dir name from class name, e.g. FooBar => foo_bar. :param s: The string which should be converted to snake_case. """ return self._underscore_re2.sub(r'\1_\2', self._underscore_re1.sub(r'\1_\2', s)).lower()
[ "def", "camel_to_snake", "(", "self", ",", "s", ")", ":", "return", "self", ".", "_underscore_re2", ".", "sub", "(", "r'\\1_\\2'", ",", "self", ".", "_underscore_re1", ".", "sub", "(", "r'\\1_\\2'", ",", "s", ")", ")", ".", "lower", "(", ")" ]
Constructs nice dir name from class name, e.g. FooBar => foo_bar. :param s: The string which should be converted to snake_case.
[ "Constructs", "nice", "dir", "name", "from", "class", "name", "e", ".", "g", ".", "FooBar", "=", ">", "foo_bar", "." ]
python
train
46.666667
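The two regexes are not shown in the record above, so the sketch below uses the patterns commonly paired with this two-pass substitution (an assumption, not flask-whooshee's exact attributes); they reproduce the documented FooBar => foo_bar behaviour.

import re

_re1 = re.compile(r"(.)([A-Z][a-z]+)")   # assumed _underscore_re1
_re2 = re.compile(r"([a-z0-9])([A-Z])")  # assumed _underscore_re2

def camel_to_snake(s):
    return _re2.sub(r"\1_\2", _re1.sub(r"\1_\2", s)).lower()

print(camel_to_snake("FooBar"))      # foo_bar
print(camel_to_snake("HTTPServer"))  # http_server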
Esri/ArcREST
src/arcrest/common/general.py
https://github.com/Esri/ArcREST/blob/ab240fde2b0200f61d4a5f6df033516e53f2f416/src/arcrest/common/general.py#L66-L93
def online_time_to_string(value, timeFormat, utcOffset=0): """Converts AGOL timestamp to formatted string. Args: value (float): A UTC timestamp as reported by AGOL (time in ms since Unix epoch * 1000) timeFormat (str): Date/Time format string as parsed by :py:func:`datetime.strftime`. utcOffset (int): Hours difference from UTC and desired output. Default is 0 (remain in UTC). Returns: str: A string representation of the timestamp. Examples: >>> rcrest.general.online_time_to_string(1457167261000.0, "%Y-%m-%d %H:%M:%S") '2016-03-05 00:41:01' >>> rcrest.general.online_time_to_string(731392515000.0, '%m/%d/%Y %H:%M:%S', -8) # PST is UTC-8:00 '03/05/1993 12:35:15' See Also: :py:func:`local_time_to_online` for converting a :py:class:`datetime.datetime` object to AGOL timestamp """ try: return datetime.datetime.fromtimestamp(value/1000 + utcOffset*3600).strftime(timeFormat) except: return "" finally: pass
[ "def", "online_time_to_string", "(", "value", ",", "timeFormat", ",", "utcOffset", "=", "0", ")", ":", "try", ":", "return", "datetime", ".", "datetime", ".", "fromtimestamp", "(", "value", "/", "1000", "+", "utcOffset", "*", "3600", ")", ".", "strftime", "(", "timeFormat", ")", "except", ":", "return", "\"\"", "finally", ":", "pass" ]
Converts AGOL timestamp to formatted string. Args: value (float): A UTC timestamp as reported by AGOL (time in ms since Unix epoch * 1000) timeFormat (str): Date/Time format string as parsed by :py:func:`datetime.strftime`. utcOffset (int): Hours difference from UTC and desired output. Default is 0 (remain in UTC). Returns: str: A string representation of the timestamp. Examples: >>> rcrest.general.online_time_to_string(1457167261000.0, "%Y-%m-%d %H:%M:%S") '2016-03-05 00:41:01' >>> rcrest.general.online_time_to_string(731392515000.0, '%m/%d/%Y %H:%M:%S', -8) # PST is UTC-8:00 '03/05/1993 12:35:15' See Also: :py:func:`local_time_to_online` for converting a :py:class:`datetime.datetime` object to AGOL timestamp
[ "Converts", "AGOL", "timestamp", "to", "formatted", "string", "." ]
python
train
36.464286
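The conversion above is plain epoch arithmetic: AGOL reports milliseconds since the Unix epoch, so divide by 1000 and shift by the offset in hours. Note that fromtimestamp() works in local time, so the docstring's output is reproduced only on a machine running in UTC.

import datetime

value = 1457167261000.0  # ms since epoch, from the docstring example
utc_offset = 0
ts = datetime.datetime.fromtimestamp(value / 1000 + utc_offset * 3600)
print(ts.strftime("%Y-%m-%d %H:%M:%S"))  # 2016-03-05 00:41:01 on a UTC host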
erdewit/ib_insync
ib_insync/util.py
https://github.com/erdewit/ib_insync/blob/d0646a482590f5cb7bfddbd1f0870f8c4bc1df80/ib_insync/util.py#L23-L49
def df(objs, labels=None): """ Create pandas DataFrame from the sequence of same-type objects. When a list of labels is given then only retain those labels and drop the rest. """ import pandas as pd from .objects import Object, DynamicObject if objs: objs = list(objs) obj = objs[0] if isinstance(obj, Object): df = pd.DataFrame.from_records(o.tuple() for o in objs) df.columns = obj.__class__.defaults elif isinstance(obj, DynamicObject): df = pd.DataFrame.from_records(o.__dict__ for o in objs) else: df = pd.DataFrame.from_records(objs) if isinstance(obj, tuple) and hasattr(obj, '_fields'): # assume it's a namedtuple df.columns = obj.__class__._fields else: df = None if labels: exclude = [label for label in df if label not in labels] df = df.drop(exclude, axis=1) return df
[ "def", "df", "(", "objs", ",", "labels", "=", "None", ")", ":", "import", "pandas", "as", "pd", "from", ".", "objects", "import", "Object", ",", "DynamicObject", "if", "objs", ":", "objs", "=", "list", "(", "objs", ")", "obj", "=", "objs", "[", "0", "]", "if", "isinstance", "(", "obj", ",", "Object", ")", ":", "df", "=", "pd", ".", "DataFrame", ".", "from_records", "(", "o", ".", "tuple", "(", ")", "for", "o", "in", "objs", ")", "df", ".", "columns", "=", "obj", ".", "__class__", ".", "defaults", "elif", "isinstance", "(", "obj", ",", "DynamicObject", ")", ":", "df", "=", "pd", ".", "DataFrame", ".", "from_records", "(", "o", ".", "__dict__", "for", "o", "in", "objs", ")", "else", ":", "df", "=", "pd", ".", "DataFrame", ".", "from_records", "(", "objs", ")", "if", "isinstance", "(", "obj", ",", "tuple", ")", "and", "hasattr", "(", "obj", ",", "'_fields'", ")", ":", "# assume it's a namedtuple", "df", ".", "columns", "=", "obj", ".", "__class__", ".", "_fields", "else", ":", "df", "=", "None", "if", "labels", ":", "exclude", "=", "[", "label", "for", "label", "in", "df", "if", "label", "not", "in", "labels", "]", "df", "=", "df", ".", "drop", "(", "exclude", ",", "axis", "=", "1", ")", "return", "df" ]
Create pandas DataFrame from the sequence of same-type objects. When a list of labels is given then only retain those labels and drop the rest.
[ "Create", "pandas", "DataFrame", "from", "the", "sequence", "of", "same", "-", "type", "objects", ".", "When", "a", "list", "of", "labels", "is", "given", "then", "only", "retain", "those", "labels", "and", "drop", "the", "rest", "." ]
python
train
34.888889
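The namedtuple branch of df() above, exercised standalone; the Trade tuple and its values are invented, and pandas must be installed.

from collections import namedtuple
import pandas as pd

Trade = namedtuple("Trade", ["symbol", "qty", "price"])
objs = [Trade("AAPL", 10, 150.0), Trade("MSFT", 5, 250.0)]

frame = pd.DataFrame.from_records(objs)
frame.columns = objs[0].__class__._fields         # same relabelling as df()
labels = ("symbol", "price")
frame = frame.drop([c for c in frame if c not in labels], axis=1)
print(frame)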
DomainTools/python_api
domaintools/api.py
https://github.com/DomainTools/python_api/blob/17be85fd4913fbe14d7660a4f4829242f1663e60/domaintools/api.py#L129-L133
def registrant_monitor(self, query, exclude=[], days_back=0, limit=None, **kwargs): """One or more terms as a Python list or separated by the pipe character ( | ).""" return self._results('registrant-alert', '/v1/registrant-alert', query=delimited(query), exclude=delimited(exclude), days_back=days_back, limit=limit, items_path=('alerts', ), **kwargs)
[ "def", "registrant_monitor", "(", "self", ",", "query", ",", "exclude", "=", "[", "]", ",", "days_back", "=", "0", ",", "limit", "=", "None", ",", "*", "*", "kwargs", ")", ":", "return", "self", ".", "_results", "(", "'registrant-alert'", ",", "'/v1/registrant-alert'", ",", "query", "=", "delimited", "(", "query", ")", ",", "exclude", "=", "delimited", "(", "exclude", ")", ",", "days_back", "=", "days_back", ",", "limit", "=", "limit", ",", "items_path", "=", "(", "'alerts'", ",", ")", ",", "*", "*", "kwargs", ")" ]
One or more terms as a Python list or separated by the pipe character ( | ).
[ "One", "or", "more", "terms", "as", "a", "Python", "list", "or", "separated", "by", "the", "pipe", "character", "(", "|", ")", "." ]
python
train
84
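The record does not show the `delimited` helper, but the docstring says terms may be a Python list or a pipe-separated string; below is a plausible sketch of that normalisation (a guess at the behaviour, not DomainTools' code).

def delimited(value, sep="|"):
    # Join list-like input with the separator; pass strings through.
    if isinstance(value, (list, tuple)):
        return sep.join(str(v) for v in value)
    return value

print(delimited(["acme", "example"]))  # acme|example
print(delimited("acme|example"))       # acme|example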
keflavich/plfit
plfit/plfit.py
https://github.com/keflavich/plfit/blob/7dafa6302b427ba8c89651148e3e9d29add436c3/plfit/plfit.py#L1048-L1069
def discrete_ksD(data, xmin, alpha): """ given a sorted data set, a minimum, and an alpha, returns the power law ks-test D value w/data The returned value is the "D" parameter in the ks test (this is implemented differently from the continuous version because there are potentially multiple identical points that need comparison to the power law) """ zz = np.sort(data[data>=xmin]) nn = float(len(zz)) if nn < 2: return np.inf #cx = np.arange(nn,dtype='float')/float(nn) #cf = 1.0-(zz/xmin)**(1.0-alpha) model_cdf = 1.0-(zz.astype('float')/float(xmin))**(1.0-alpha) data_cdf = np.searchsorted(zz,zz,side='left')/(float(nn)) ks = max(abs(model_cdf-data_cdf)) return ks
[ "def", "discrete_ksD", "(", "data", ",", "xmin", ",", "alpha", ")", ":", "zz", "=", "np", ".", "sort", "(", "data", "[", "data", ">=", "xmin", "]", ")", "nn", "=", "float", "(", "len", "(", "zz", ")", ")", "if", "nn", "<", "2", ":", "return", "np", ".", "inf", "#cx = np.arange(nn,dtype='float')/float(nn)", "#cf = 1.0-(zz/xmin)**(1.0-alpha)", "model_cdf", "=", "1.0", "-", "(", "zz", ".", "astype", "(", "'float'", ")", "/", "float", "(", "xmin", ")", ")", "**", "(", "1.0", "-", "alpha", ")", "data_cdf", "=", "np", ".", "searchsorted", "(", "zz", ",", "zz", ",", "side", "=", "'left'", ")", "/", "(", "float", "(", "nn", ")", ")", "ks", "=", "max", "(", "abs", "(", "model_cdf", "-", "data_cdf", ")", ")", "return", "ks" ]
given a sorted data set, a minimum, and an alpha, returns the power law ks-test D value w/data The returned value is the "D" parameter in the ks test (this is implemented differently from the continuous version because there are potentially multiple identical points that need comparison to the power law)
[ "given", "a", "sorted", "data", "set", "a", "minimum", "and", "an", "alpha", "returns", "the", "power", "law", "ks", "-", "test", "D", "value", "w", "/", "data" ]
python
test
32.954545
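Evaluating the statistic above on a small synthetic sample (numpy required; the data, xmin, and alpha are arbitrary).

import numpy as np

data = np.array([1, 1, 2, 2, 2, 3, 5, 8, 13, 21], dtype=float)
xmin, alpha = 1.0, 2.0

zz = np.sort(data[data >= xmin])
nn = float(len(zz))
model_cdf = 1.0 - (zz / xmin) ** (1.0 - alpha)        # power-law CDF
data_cdf = np.searchsorted(zz, zz, side="left") / nn  # empirical CDF
print(max(abs(model_cdf - data_cdf)))                 # the KS D value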
crackinglandia/pype32
pype32/utils.py
https://github.com/crackinglandia/pype32/blob/192fd14dfc0dd36d953739a81c17fbaf5e3d6076/pype32/utils.py#L231-L240
def readByte(self): """ Reads a byte value from the L{ReadData} stream object. @rtype: int @return: The byte value read from the L{ReadData} stream. """ byte = unpack('B' if not self.signed else 'b', self.readAt(self.offset, 1))[0] self.offset += 1 return byte
[ "def", "readByte", "(", "self", ")", ":", "byte", "=", "unpack", "(", "'B'", "if", "not", "self", ".", "signed", "else", "'b'", ",", "self", ".", "readAt", "(", "self", ".", "offset", ",", "1", ")", ")", "[", "0", "]", "self", ".", "offset", "+=", "1", "return", "byte" ]
Reads a byte value from the L{ReadData} stream object. @rtype: int @return: The byte value read from the L{ReadData} stream.
[ "Reads", "a", "byte", "value", "from", "the", "L", "{", "ReadData", "}", "stream", "object", "." ]
python
train
32.4
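The 'B'/'b' distinction in readByte is struct's unsigned/signed single-byte format; a two-line illustration.

from struct import unpack

raw = b"\xff"
print(unpack("B", raw)[0])  # 255 (unsigned)
print(unpack("b", raw)[0])  # -1  (signed)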
imjoey/pyhaproxy
pyhaproxy/parse.py
https://github.com/imjoey/pyhaproxy/blob/4f0904acfc6bdb29ba6104ce2f6724c0330441d3/pyhaproxy/parse.py#L170-L206
def build_frontend(self, frontend_node):
        """parse `frontend` sections, and return a config.Frontend

        Args:
            frontend_node (TreeNode): Description

        Raises:
            Exception: Description

        Returns:
            config.Frontend: an object
        """
        proxy_name = frontend_node.frontend_header.proxy_name.text
        service_address_node = frontend_node.frontend_header.service_address

        # parse the config block
        config_block_lines = self.__build_config_block(
            frontend_node.config_block)

        # parse host and port
        host, port = '', ''
        if isinstance(service_address_node, pegnode.ServiceAddress):
            host = service_address_node.host.text
            port = service_address_node.port.text
        else:
            # use `bind` in config lines to fill in host and port
            # just use the first
            for line in config_block_lines:
                if isinstance(line, config.Bind):
                    host, port = line.host, line.port
                    break
            else:
                raise Exception(
                    'Host and port not specified in `frontend` definition')

        return config.Frontend(
            name=proxy_name, host=host, port=port,
            config_block=config_block_lines)
[ "def", "build_frontend", "(", "self", ",", "frontend_node", ")", ":", "proxy_name", "=", "frontend_node", ".", "frontend_header", ".", "proxy_name", ".", "text", "service_address_node", "=", "frontend_node", ".", "frontend_header", ".", "service_address", "# parse the config block", "config_block_lines", "=", "self", ".", "__build_config_block", "(", "frontend_node", ".", "config_block", ")", "# parse host and port", "host", ",", "port", "=", "''", ",", "''", "if", "isinstance", "(", "service_address_node", ",", "pegnode", ".", "ServiceAddress", ")", ":", "host", "=", "service_address_node", ".", "host", ".", "text", "port", "=", "service_address_node", ".", "port", ".", "text", "else", ":", "# use `bind` in config lines to fill in host and port", "# just use the first", "for", "line", "in", "config_block_lines", ":", "if", "isinstance", "(", "line", ",", "config", ".", "Bind", ")", ":", "host", ",", "port", "=", "line", ".", "host", ",", "line", ".", "port", "break", "else", ":", "raise", "Exception", "(", "'Not specify host and port in `frontend` definition'", ")", "return", "config", ".", "Frontend", "(", "name", "=", "proxy_name", ",", "host", "=", "host", ",", "port", "=", "port", ",", "config_block", "=", "config_block_lines", ")" ]
parse `frontend` sections, and return a config.Frontend Args: frontend_node (TreeNode): Description Raises: Exception: Description Returns: config.Frontend: an object
[ "parse", "frontend", "sections", "and", "return", "a", "config", ".", "Frontend" ]
python
train
35.108108
Esri/ArcREST
src/arcrest/enrichment/_geoenrichment.py
https://github.com/Esri/ArcREST/blob/ab240fde2b0200f61d4a5f6df033516e53f2f416/src/arcrest/enrichment/_geoenrichment.py#L293-L380
def createReport(self, out_file_path, studyAreas, report=None, format="PDF",
                     reportFields=None, studyAreasOptions=None, useData=None, inSR=4326,
                     ):
        """
        The GeoEnrichment Create Report method uses the concept of a study area to define the location of the point
        or area that you want to enrich with generated reports.
        This method allows you to create many types of high-quality reports for a variety of use cases describing
        the input area. If a point is used as a study area, the service will create a 1-mile ring buffer around the
        point to collect and append enrichment data. Optionally, you can create a buffer ring or drive-time service
        area around the points to prepare PDF or Excel reports for the study areas.

        Note:
        For full examples for each input, please review the following:
        http://resources.arcgis.com/en/help/arcgis-rest-api/#/Create_report/02r30000022q000000/

        Inputs:
        out_file_path - save location of the report
        studyAreas - Required parameter to specify a list of input features to be enriched. The input can be a
         Point, Polygon, Address, or named administrative boundary. The locations can be passed in as a single
         object or as a list of objects.
        report - Default report to generate.
        format - specify the generated report. Options are: XLSX or PDF
        reportFields - Optional parameter specifies additional choices to customize reports. See the URL above to
         see all the options.
        studyAreasOptions - Optional parameter to specify enrichment behavior. For points described as map
         coordinates, a 1-mile ring area centered on each site will be used by default. You can use this parameter
         to change these default settings. With this parameter, the caller can override the default behavior
         describing how the enrichment attributes are appended to the input features described in studyAreas. For
         example, you can change the output ring buffer to 5 miles, change the number of output buffers created
         around each point, and also change the output buffer type to a drive-time service area rather than a
         simple ring buffer.
        useData - By default, the service will automatically determine the country or dataset that is associated
         with each location or area submitted in the studyAreas parameter; however, there is an associated
         computational cost which may lengthen the time it takes to return a response. To skip this intermediate
         step and potentially improve the speed and performance of the service, the caller can specify the country
         or dataset information up front through this parameter.
        inSR - parameter to define the input geometries in the studyAreas parameter in a specified spatial
         reference system.
        """
        url = self._base_url + self._url_create_report
        if isinstance(studyAreas, list) == False:
            studyAreas = [studyAreas]
        studyAreas = self.__geometryToDict(studyAreas)
        params = {
            "f" : "bin",
            "studyAreas" : studyAreas,
            "inSR" : inSR,
        }
        if not report is None:
            params['report'] = report
        if format is None:
            format = "pdf"
        elif format.lower() in ['pdf', 'xlsx']:
            params['format'] = format.lower()
        else:
            raise AttributeError("Invalid format value.")
        if not reportFields is None:
            params['reportFields'] = reportFields
        if not studyAreasOptions is None:
            params['studyAreasOptions'] = studyAreasOptions
        if not useData is None:
            params['useData'] = useData
        result = self._get(url=url,
                           param_dict=params,
                           securityHandler=self._securityHandler,
                           proxy_url=self._proxy_url,
                           proxy_port=self._proxy_port,
                           out_folder=os.path.dirname(out_file_path))
        return result
[ "def", "createReport", "(", "self", ",", "out_file_path", ",", "studyAreas", ",", "report", "=", "None", ",", "format", "=", "\"PDF\"", ",", "reportFields", "=", "None", ",", "studyAreasOptions", "=", "None", ",", "useData", "=", "None", ",", "inSR", "=", "4326", ",", ")", ":", "url", "=", "self", ".", "_base_url", "+", "self", ".", "_url_create_report", "if", "isinstance", "(", "studyAreas", ",", "list", ")", "==", "False", ":", "studyAreas", "=", "[", "studyAreas", "]", "studyAreas", "=", "self", ".", "__geometryToDict", "(", "studyAreas", ")", "params", "=", "{", "\"f\"", ":", "\"bin\"", ",", "\"studyAreas\"", ":", "studyAreas", ",", "\"inSR\"", ":", "inSR", ",", "}", "if", "not", "report", "is", "None", ":", "params", "[", "'report'", "]", "=", "report", "if", "format", "is", "None", ":", "format", "=", "\"pdf\"", "elif", "format", ".", "lower", "(", ")", "in", "[", "'pdf'", ",", "'xlsx'", "]", ":", "params", "[", "'format'", "]", "=", "format", ".", "lower", "(", ")", "else", ":", "raise", "AttributeError", "(", "\"Invalid format value.\"", ")", "if", "not", "reportFields", "is", "None", ":", "params", "[", "'reportFields'", "]", "=", "reportFields", "if", "not", "studyAreasOptions", "is", "None", ":", "params", "[", "'studyAreasOptions'", "]", "=", "studyAreasOptions", "if", "not", "useData", "is", "None", ":", "params", "[", "'useData'", "]", "=", "useData", "result", "=", "self", ".", "_get", "(", "url", "=", "url", ",", "param_dict", "=", "params", ",", "securityHandler", "=", "self", ".", "_securityHandler", ",", "proxy_url", "=", "self", ".", "_proxy_url", ",", "proxy_port", "=", "self", ".", "_proxy_port", ",", "out_folder", "=", "os", ".", "path", ".", "dirname", "(", "out_file_path", ")", ")", "return", "result" ]
The GeoEnrichment Create Report method uses the concept of a study area to define the location of the point or area that you want to enrich with generated reports. This method allows you to create many types of high-quality reports for a variety of use cases describing the input area. If a point is used as a study area, the service will create a 1-mile ring buffer around the point to collect and append enrichment data. Optionally, you can create a buffer ring or drive-time service area around the points to prepare PDF or Excel reports for the study areas. Note: For full examples for each input, please review the following: http://resources.arcgis.com/en/help/arcgis-rest-api/#/Create_report/02r30000022q000000/ Inputs: out_file_path - save location of the report studyAreas - Required parameter to specify a list of input features to be enriched. The input can be a Point, Polygon, Address, or named administrative boundary. The locations can be passed in as a single object or as a list of objects. report - Default report to generate. format - specify the generated report. Options are: XLSX or PDF reportFields - Optional parameter specifies additional choices to customize reports. See the URL above to see all the options. studyAreasOptions - Optional parameter to specify enrichment behavior. For points described as map coordinates, a 1-mile ring area centered on each site will be used by default. You can use this parameter to change these default settings. With this parameter, the caller can override the default behavior describing how the enrichment attributes are appended to the input features described in studyAreas. For example, you can change the output ring buffer to 5 miles, change the number of output buffers created around each point, and also change the output buffer type to a drive-time service area rather than a simple ring buffer. useData - By default, the service will automatically determine the country or dataset that is associated with each location or area submitted in the studyAreas parameter; however, there is an associated computational cost which may lengthen the time it takes to return a response. To skip this intermediate step and potentially improve the speed and performance of the service, the caller can specify the country or dataset information up front through this parameter. inSR - parameter to define the input geometries in the studyAreas parameter in a specified spatial reference system.
[ "The", "GeoEnrichment", "Create", "Report", "method", "uses", "the", "concept", "of", "a", "study", "area", "to", "define", "the", "location", "of", "the", "point", "or", "area", "that", "you", "want", "to", "enrich", "with", "generated", "reports", ".", "This", "method", "allows", "you", "to", "create", "many", "types", "of", "high", "-", "quality", "reports", "for", "a", "variety", "of", "use", "cases", "describing", "the", "input", "area", ".", "If", "a", "point", "is", "used", "as", "a", "study", "area", "the", "service", "will", "create", "a", "1", "-", "mile", "ring", "buffer", "around", "the", "point", "to", "collect", "and", "append", "enrichment", "data", ".", "Optionally", "you", "can", "create", "a", "buffer", "ring", "or", "drive", "-", "time", "service", "area", "around", "the", "points", "to", "prepare", "PDF", "or", "Excel", "reports", "for", "the", "study", "areas", "." ]
python
train
50.034091
rfk/tnetstring
tnetstring/__init__.py
https://github.com/rfk/tnetstring/blob/146381498a07d6053e044375562be08ef16017c2/tnetstring/__init__.py#L61-L74
def dumps(value,encoding=None): """dumps(object,encoding=None) -> string This function dumps a python object as a tnetstring. """ # This uses a deque to collect output fragments in reverse order, # then joins them together at the end. It's measurably faster # than creating all the intermediate strings. # If you're reading this to get a handle on the tnetstring format, # consider the _gdumps() function instead; it's a standard top-down # generator that's simpler to understand but much less efficient. q = deque() _rdumpq(q,0,value,encoding) return "".join(q)
[ "def", "dumps", "(", "value", ",", "encoding", "=", "None", ")", ":", "# This uses a deque to collect output fragments in reverse order,", "# then joins them together at the end. It's measurably faster", "# than creating all the intermediate strings.", "# If you're reading this to get a handle on the tnetstring format,", "# consider the _gdumps() function instead; it's a standard top-down", "# generator that's simpler to understand but much less efficient.", "q", "=", "deque", "(", ")", "_rdumpq", "(", "q", ",", "0", ",", "value", ",", "encoding", ")", "return", "\"\"", ".", "join", "(", "q", ")" ]
dumps(object,encoding=None) -> string This function dumps a python object as a tnetstring.
[ "dumps", "(", "object", "encoding", "=", "None", ")", "-", ">", "string" ]
python
train
43.214286
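What dumps() actually emits follows the tnetstring rule length ':' payload type-tag; below is a minimal encoder covering only strings and integers, enough to see the wire format (the real function handles all types and, as its comments note, builds output back-to-front for speed).

def tnet_dump_simple(value):
    # bool is a subclass of int, and the real format tags bools differently,
    # so keep this sketch honest and reject them.
    if isinstance(value, bool):
        raise TypeError("sketch handles only str and int")
    if isinstance(value, int):
        payload, tag = str(value), "#"
    elif isinstance(value, str):
        payload, tag = value, ","
    else:
        raise TypeError("sketch handles only str and int")
    return "%d:%s%s" % (len(payload), payload, tag)

print(tnet_dump_simple("hello"))  # 5:hello,
print(tnet_dump_simple(12345))    # 5:12345#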
fprimex/zdesk
zdesk/zdesk_api.py
https://github.com/fprimex/zdesk/blob/851611c13b4d530e9df31390b3ec709baf0a0188/zdesk/zdesk_api.py#L290-L293
def apps_notify_create(self, data, **kwargs): "https://developer.zendesk.com/rest_api/docs/core/apps#send-notification-to-app" api_path = "/api/v2/apps/notify.json" return self.call(api_path, method="POST", data=data, **kwargs)
[ "def", "apps_notify_create", "(", "self", ",", "data", ",", "*", "*", "kwargs", ")", ":", "api_path", "=", "\"/api/v2/apps/notify.json\"", "return", "self", ".", "call", "(", "api_path", ",", "method", "=", "\"POST\"", ",", "data", "=", "data", ",", "*", "*", "kwargs", ")" ]
https://developer.zendesk.com/rest_api/docs/core/apps#send-notification-to-app
[ "https", ":", "//", "developer", ".", "zendesk", ".", "com", "/", "rest_api", "/", "docs", "/", "core", "/", "apps#send", "-", "notification", "-", "to", "-", "app" ]
python
train
62
diux-dev/ncluster
ncluster/local_backend.py
https://github.com/diux-dev/ncluster/blob/2fd359621896717197b479c7174d06d80df1529b/ncluster/local_backend.py#L428-L431
def run_with_output(self, *args, **kwargs):
    """Runs the command on every job in the run, capturing stdout."""
    for job in self.jobs:
      job.run_with_output(*args, **kwargs)
[ "def", "run_with_output", "(", "self", ",", "*", "args", ",", "*", "*", "kwargs", ")", ":", "for", "job", "in", "self", ".", "jobs", ":", "job", ".", "run_with_output", "(", "*", "args", ",", "*", "*", "kwargs", ")" ]
Runs the command on every job in the run, capturing stdout.
[ "Runs", "command", "on", "every", "first", "job", "in", "the", "run", "returns", "stdout", "." ]
python
train
44.75
briancappello/flask-unchained
flask_unchained/bundles/security/services/security_service.py
https://github.com/briancappello/flask-unchained/blob/4d536cb90e2cc4829c1c05f2c74d3e22901a1399/flask_unchained/bundles/security/services/security_service.py#L122-L163
def register_user(self, user, allow_login=None, send_email=None, _force_login_without_confirmation=False): """ Service method to register a user. Sends signal `user_registered`. Returns True if the user has been logged in, False otherwise. """ should_login_user = (not self.security.confirmable or self.security.login_without_confirmation or _force_login_without_confirmation) should_login_user = (should_login_user if allow_login is None else allow_login and should_login_user) if should_login_user: user.active = True # confirmation token depends on having user.id set, which requires # the user be committed to the database self.user_manager.save(user, commit=True) confirmation_link, token = None, None if self.security.confirmable and not _force_login_without_confirmation: token = self.security_utils_service.generate_confirmation_token(user) confirmation_link = url_for('security_controller.confirm_email', token=token, _external=True) user_registered.send(app._get_current_object(), user=user, confirm_token=token) if (send_email or ( send_email is None and app.config.SECURITY_SEND_REGISTER_EMAIL)): self.send_mail(_('flask_unchained.bundles.security:email_subject.register'), to=user.email, template='security/email/welcome.html', user=user, confirmation_link=confirmation_link) if should_login_user: return self.login_user(user, force=_force_login_without_confirmation) return False
[ "def", "register_user", "(", "self", ",", "user", ",", "allow_login", "=", "None", ",", "send_email", "=", "None", ",", "_force_login_without_confirmation", "=", "False", ")", ":", "should_login_user", "=", "(", "not", "self", ".", "security", ".", "confirmable", "or", "self", ".", "security", ".", "login_without_confirmation", "or", "_force_login_without_confirmation", ")", "should_login_user", "=", "(", "should_login_user", "if", "allow_login", "is", "None", "else", "allow_login", "and", "should_login_user", ")", "if", "should_login_user", ":", "user", ".", "active", "=", "True", "# confirmation token depends on having user.id set, which requires", "# the user be committed to the database", "self", ".", "user_manager", ".", "save", "(", "user", ",", "commit", "=", "True", ")", "confirmation_link", ",", "token", "=", "None", ",", "None", "if", "self", ".", "security", ".", "confirmable", "and", "not", "_force_login_without_confirmation", ":", "token", "=", "self", ".", "security_utils_service", ".", "generate_confirmation_token", "(", "user", ")", "confirmation_link", "=", "url_for", "(", "'security_controller.confirm_email'", ",", "token", "=", "token", ",", "_external", "=", "True", ")", "user_registered", ".", "send", "(", "app", ".", "_get_current_object", "(", ")", ",", "user", "=", "user", ",", "confirm_token", "=", "token", ")", "if", "(", "send_email", "or", "(", "send_email", "is", "None", "and", "app", ".", "config", ".", "SECURITY_SEND_REGISTER_EMAIL", ")", ")", ":", "self", ".", "send_mail", "(", "_", "(", "'flask_unchained.bundles.security:email_subject.register'", ")", ",", "to", "=", "user", ".", "email", ",", "template", "=", "'security/email/welcome.html'", ",", "user", "=", "user", ",", "confirmation_link", "=", "confirmation_link", ")", "if", "should_login_user", ":", "return", "self", ".", "login_user", "(", "user", ",", "force", "=", "_force_login_without_confirmation", ")", "return", "False" ]
Service method to register a user. Sends signal `user_registered`. Returns True if the user has been logged in, False otherwise.
[ "Service", "method", "to", "register", "a", "user", "." ]
python
train
44.595238
happyleavesaoc/python-firetv
firetv/__init__.py
https://github.com/happyleavesaoc/python-firetv/blob/3dd953376c0d5af502e775ae14ed0afe03224781/firetv/__init__.py#L472-L507
def available(self): """Check whether the ADB connection is intact.""" if not self.adb_server_ip: # python-adb return bool(self._adb) # pure-python-adb try: # make sure the server is available adb_devices = self._adb_client.devices() # make sure the device is available try: # case 1: the device is currently available if any([self.host in dev.get_serial_no() for dev in adb_devices]): if not self._available: self._available = True return True # case 2: the device is not currently available if self._available: logging.error('ADB server is not connected to the device.') self._available = False return False except RuntimeError: if self._available: logging.error('ADB device is unavailable; encountered an error when searching for device.') self._available = False return False except RuntimeError: if self._available: logging.error('ADB server is unavailable.') self._available = False return False
[ "def", "available", "(", "self", ")", ":", "if", "not", "self", ".", "adb_server_ip", ":", "# python-adb", "return", "bool", "(", "self", ".", "_adb", ")", "# pure-python-adb", "try", ":", "# make sure the server is available", "adb_devices", "=", "self", ".", "_adb_client", ".", "devices", "(", ")", "# make sure the device is available", "try", ":", "# case 1: the device is currently available", "if", "any", "(", "[", "self", ".", "host", "in", "dev", ".", "get_serial_no", "(", ")", "for", "dev", "in", "adb_devices", "]", ")", ":", "if", "not", "self", ".", "_available", ":", "self", ".", "_available", "=", "True", "return", "True", "# case 2: the device is not currently available", "if", "self", ".", "_available", ":", "logging", ".", "error", "(", "'ADB server is not connected to the device.'", ")", "self", ".", "_available", "=", "False", "return", "False", "except", "RuntimeError", ":", "if", "self", ".", "_available", ":", "logging", ".", "error", "(", "'ADB device is unavailable; encountered an error when searching for device.'", ")", "self", ".", "_available", "=", "False", "return", "False", "except", "RuntimeError", ":", "if", "self", ".", "_available", ":", "logging", ".", "error", "(", "'ADB server is unavailable.'", ")", "self", ".", "_available", "=", "False", "return", "False" ]
Check whether the ADB connection is intact.
[ "Check", "whether", "the", "ADB", "connection", "is", "intact", "." ]
python
train
36.305556
HewlettPackard/python-hpOneView
hpOneView/resources/activity/alerts.py
https://github.com/HewlettPackard/python-hpOneView/blob/3c6219723ef25e6e0c83d44a89007f89bc325b89/hpOneView/resources/activity/alerts.py#L135-L152
def update(self, resource, id_or_uri=None, timeout=-1): """ Updates the specified alert resource. Args: resource (dict): Object to update. timeout: Timeout in seconds. Wait for task completion by default. The timeout does not abort the operation in OneView; it just stops waiting for its completion. Returns: dict: Updated alert. """ uri = resource.pop('uri', None) if not uri: if not id_or_uri: raise ValueError("URI was not provided") uri = self._client.build_uri(id_or_uri) return self._client.update(resource=resource, uri=uri, timeout=timeout)
[ "def", "update", "(", "self", ",", "resource", ",", "id_or_uri", "=", "None", ",", "timeout", "=", "-", "1", ")", ":", "uri", "=", "resource", ".", "pop", "(", "'uri'", ",", "None", ")", "if", "not", "uri", ":", "if", "not", "id_or_uri", ":", "raise", "ValueError", "(", "\"URI was not provided\"", ")", "uri", "=", "self", ".", "_client", ".", "build_uri", "(", "id_or_uri", ")", "return", "self", ".", "_client", ".", "update", "(", "resource", "=", "resource", ",", "uri", "=", "uri", ",", "timeout", "=", "timeout", ")" ]
Updates the specified alert resource. Args: resource (dict): Object to update. timeout: Timeout in seconds. Wait for task completion by default. The timeout does not abort the operation in OneView; it just stops waiting for its completion. Returns: dict: Updated alert.
[ "Updates", "the", "specified", "alert", "resource", "." ]
python
train
38.277778
bwhite/hadoopy
hadoopy/_hdfs.py
https://github.com/bwhite/hadoopy/blob/ff39b4e6d4e6efaf1f571cf0f2c0e0d7ab28c2d6/hadoopy/_hdfs.py#L113-L135
def abspath(path):
    """Return the absolute path to a file and canonicalize it

    Path is returned without a trailing slash and without redundant slashes.
    Caches the user's home directory.

    :param path: A string for the path.  This should not have any wildcards.
    :returns: Absolute path to the file
    :raises IOError: If unsuccessful
    """
    global _USER_HOME_DIR
    # FIXME(brandyn): User's home directory must exist
    # FIXME(brandyn): Requires something to be in home dir
    if path[0] == '/':
        return os.path.abspath(path)
    if _USER_HOME_DIR is None:
        try:
            _USER_HOME_DIR = _get_home_dir()
        except IOError as e:
            if not exists('.'):
                raise IOError("Home directory doesn't exist")
            raise e
    return os.path.abspath(os.path.join(_USER_HOME_DIR, path))
[ "def", "abspath", "(", "path", ")", ":", "global", "_USER_HOME_DIR", "# FIXME(brandyn): User's home directory must exist", "# FIXME(brandyn): Requires something to be in home dir", "if", "path", "[", "0", "]", "==", "'/'", ":", "return", "os", ".", "path", ".", "abspath", "(", "path", ")", "if", "_USER_HOME_DIR", "is", "None", ":", "try", ":", "_USER_HOME_DIR", "=", "_get_home_dir", "(", ")", "except", "IOError", ",", "e", ":", "if", "not", "exists", "(", "'.'", ")", ":", "raise", "IOError", "(", "\"Home directory doesn't exist\"", ")", "raise", "e", "return", "os", ".", "path", ".", "abspath", "(", "os", ".", "path", ".", "join", "(", "_USER_HOME_DIR", ",", "path", ")", ")" ]
Return the absolute path to a file and canonicalize it Path is returned without a trailing slash and without redundant slashes. Caches the user's home directory. :param path: A string for the path. This should not have any wildcards. :returns: Absolute path to the file :raises IOError: If unsuccessful
[ "Return", "the", "absolute", "path", "to", "a", "file", "and", "canonicalize", "it" ]
python
train
36.086957
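The module-level cache in abspath avoids repeating the slow home-directory lookup; the same shape standalone, with the HDFS lookup replaced by a stub (path and home dir are invented).

import os

_USER_HOME_DIR = None

def _get_home_dir():
    return "/user/someone"  # stub for hadoopy's real HDFS lookup

def hdfs_abspath(path):
    global _USER_HOME_DIR
    if path.startswith("/"):
        return os.path.abspath(path)
    if _USER_HOME_DIR is None:  # look up once, reuse on later calls
        _USER_HOME_DIR = _get_home_dir()
    return os.path.abspath(os.path.join(_USER_HOME_DIR, path))

print(hdfs_abspath("data/part-00000"))  # /user/someone/data/part-00000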
gem/oq-engine
openquake/hazardlib/gsim/boore_2014.py
https://github.com/gem/oq-engine/blob/8294553a0b8aba33fd96437a35065d03547d0040/openquake/hazardlib/gsim/boore_2014.py#L152-L160
def _get_site_scaling(self, C, pga_rock, sites, period, rjb): """ Returns the site-scaling term (equation 5), broken down into a linear scaling, a nonlinear scaling and a basin scaling term """ flin = self._get_linear_site_term(C, sites.vs30) fnl = self._get_nonlinear_site_term(C, sites.vs30, pga_rock) fbd = self._get_basin_depth_term(C, sites, period) return flin + fnl + fbd
[ "def", "_get_site_scaling", "(", "self", ",", "C", ",", "pga_rock", ",", "sites", ",", "period", ",", "rjb", ")", ":", "flin", "=", "self", ".", "_get_linear_site_term", "(", "C", ",", "sites", ".", "vs30", ")", "fnl", "=", "self", ".", "_get_nonlinear_site_term", "(", "C", ",", "sites", ".", "vs30", ",", "pga_rock", ")", "fbd", "=", "self", ".", "_get_basin_depth_term", "(", "C", ",", "sites", ",", "period", ")", "return", "flin", "+", "fnl", "+", "fbd" ]
Returns the site-scaling term (equation 5), broken down into a linear scaling, a nonlinear scaling and a basin scaling term
[ "Returns", "the", "site", "-", "scaling", "term", "(", "equation", "5", ")", "broken", "down", "into", "a", "linear", "scaling", "a", "nonlinear", "scaling", "and", "a", "basin", "scaling", "term" ]
python
train
48.222222
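As the docstring says, the site term is a plain sum of the three components computed above; in the notation of the underlying model (Boore et al., 2014, their equation 5) this reads, roughly,

F_S = F_{\mathrm{lin}} + F_{\mathrm{nl}} + F_{\delta z_1}

where the linear term depends on vs30, the nonlinear term on vs30 and the rock PGA, and the basin term on basin depth, matching the three helper calls.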
LonamiWebs/Telethon
telethon_examples/interactive_telegram_client.py
https://github.com/LonamiWebs/Telethon/blob/1ead9757d366b58c1e0567cddb0196e20f1a445f/telethon_examples/interactive_telegram_client.py#L46-L53
async def async_input(prompt): """ Python's ``input()`` is blocking, which means the event loop we set above can't be running while we're blocking there. This method will let the loop run while we wait for input. """ print(prompt, end='', flush=True) return (await loop.run_in_executor(None, sys.stdin.readline)).rstrip()
[ "async", "def", "async_input", "(", "prompt", ")", ":", "print", "(", "prompt", ",", "end", "=", "''", ",", "flush", "=", "True", ")", "return", "(", "await", "loop", ".", "run_in_executor", "(", "None", ",", "sys", ".", "stdin", ".", "readline", ")", ")", ".", "rstrip", "(", ")" ]
Python's ``input()`` is blocking, which means the event loop we set above can't be running while we're blocking there. This method will let the loop run while we wait for input.
[ "Python", "s", "input", "()", "is", "blocking", "which", "means", "the", "event", "loop", "we", "set", "above", "can", "t", "be", "running", "while", "we", "re", "blocking", "there", ".", "This", "method", "will", "let", "the", "loop", "run", "while", "we", "wait", "for", "input", "." ]
python
train
42.75
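A self-contained variant of the pattern above: run_in_executor pushes the blocking readline onto a worker thread so the event loop stays responsive (Python 3.7+ for get_running_loop; uncomment the last line to try it interactively).

import asyncio
import sys

async def async_input(prompt):
    print(prompt, end="", flush=True)
    loop = asyncio.get_running_loop()
    return (await loop.run_in_executor(None, sys.stdin.readline)).rstrip()

async def main():
    name = await async_input("name? ")
    print("hello,", name)

# asyncio.run(main())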
mila/pyoo
pyoo.py
https://github.com/mila/pyoo/blob/1e024999f608c87ea72cd443e39c89eb0ba3cc62/pyoo.py#L916-L952
def get_target(self, row, col, row_count, col_count):
        """
        Moves cursor to the specified position and returns it.
        """
        # This method is called for almost any operation so it should
        # be maximally optimized.
        #
        # Any comparison here is negligible compared to UNO call. So we do all
        # possible checks which can prevent an unnecessary cursor movement.
        #
        # Generally we need to expand or collapse selection to the desired
        # size and move it to the desired position. But both of these actions
        # can fail if there is not enough space. For this reason we must
        # determine which of the actions has to be done first. In some cases
        # we must even move the cursor twice (cursor movement is faster than
        # selection change).
        #
        target = self._target
        # If we cannot resize selection now then we must move cursor first.
        if self.row + row_count > self.max_row_count or self.col + col_count > self.max_col_count:
            # Move cursor to the desired position if possible.
            row_delta = row - self.row if row + self.row_count <= self.max_row_count else 0
            col_delta = col - self.col if col + self.col_count <= self.max_col_count else 0
            target.gotoOffset(col_delta, row_delta)
            self.row += row_delta
            self.col += col_delta
        # Resize selection
        if (row_count, col_count) != (self.row_count, self.col_count):
            target.collapseToSize(col_count, row_count)
            self.row_count = row_count
            self.col_count = col_count
        # Move cursor to the desired position
        if (row, col) != (self.row, self.col):
            target.gotoOffset(col - self.col, row - self.row)
            self.row = row
            self.col = col
        return target
[ "def", "get_target", "(", "self", ",", "row", ",", "col", ",", "row_count", ",", "col_count", ")", ":", "# This method is called for almost any operation so it should", "# be maximally optimized.", "#", "# Any comparison here is negligible compared to UNO call. So we do all", "# possible checks which can prevent an unnecessary cursor movement.", "#", "# Generally we need to expand or collapse selection to the desired", "# size and move it to the desired position. But both of these actions", "# can fail if there is not enough space. For this reason we must", "# determine which of the actions has to be done first. In some cases", "# we must even move the cursor twice (cursor movement is faster than", "# selection change).", "#", "target", "=", "self", ".", "_target", "# If we cannot resize selection now then we must move cursor first.", "if", "self", ".", "row", "+", "row_count", ">", "self", ".", "max_row_count", "or", "self", ".", "col", "+", "col_count", ">", "self", ".", "max_col_count", ":", "# Move cursor to the desired position if possible.", "row_delta", "=", "row", "-", "self", ".", "row", "if", "row", "+", "self", ".", "row_count", "<=", "self", ".", "max_row_count", "else", "0", "col_delta", "=", "col", "-", "self", ".", "col", "if", "col", "+", "self", ".", "col_count", "<=", "self", ".", "max_col_count", "else", "0", "target", ".", "gotoOffset", "(", "col_delta", ",", "row_delta", ")", "self", ".", "row", "+=", "row_delta", "self", ".", "col", "+=", "col_delta", "# Resize selection", "if", "(", "row_count", ",", "col_count", ")", "!=", "(", "self", ".", "row_count", ",", "self", ".", "col_count", ")", ":", "target", ".", "collapseToSize", "(", "col_count", ",", "row_count", ")", "self", ".", "row_count", "=", "row_count", "self", ".", "col_count", "=", "col_count", "# Move cursor to the desired position", "if", "(", "row", ",", "col", ")", "!=", "(", "self", ".", "row", ",", "self", ".", "col", ")", ":", "target", ".", "gotoOffset", "(", "col", "-", "self", ".", "col", ",", "row", "-", "self", ".", "row", ")", "self", ".", "row", "=", "row", "self", ".", "col", "=", "col", "return", "target" ]
Moves cursor to the specified position and returns it.
[ "Moves", "cursor", "to", "the", "specified", "position", "and", "returns", "in", "." ]
python
train
49.648649
etcher-be/elib_miz
elib_miz/mission.py
https://github.com/etcher-be/elib_miz/blob/f28db58fadb2cd9341e0ae4d65101c0cc7d8f3d7/elib_miz/mission.py#L1298-L1309
def groups(self) -> typing.Iterator['Group']: """ Returns: generator of all groups in this country """ for group_category in Mission.valid_group_categories: if group_category in self._section_this_country.keys(): for group_index in self._section_this_country[group_category]['group']: if group_index not in self.__groups[group_category]: self.__groups[group_category][group_index] = Group(self.d, self.l10n, self.coa_color, self.country_index, group_category, group_index) yield self.__groups[group_category][group_index]
[ "def", "groups", "(", "self", ")", "->", "typing", ".", "Iterator", "[", "'Group'", "]", ":", "for", "group_category", "in", "Mission", ".", "valid_group_categories", ":", "if", "group_category", "in", "self", ".", "_section_this_country", ".", "keys", "(", ")", ":", "for", "group_index", "in", "self", ".", "_section_this_country", "[", "group_category", "]", "[", "'group'", "]", ":", "if", "group_index", "not", "in", "self", ".", "__groups", "[", "group_category", "]", ":", "self", ".", "__groups", "[", "group_category", "]", "[", "group_index", "]", "=", "Group", "(", "self", ".", "d", ",", "self", ".", "l10n", ",", "self", ".", "coa_color", ",", "self", ".", "country_index", ",", "group_category", ",", "group_index", ")", "yield", "self", ".", "__groups", "[", "group_category", "]", "[", "group_index", "]" ]
Returns: generator of all groups in this country
[ "Returns", ":", "generator", "of", "all", "groups", "in", "this", "country" ]
python
train
65.333333
ff0000/scarlet
scarlet/accounts/forms.py
https://github.com/ff0000/scarlet/blob/6c37befd810916a2d7ffff2cdb2dab57bcb6d12e/scarlet/accounts/forms.py#L99-L113
def save(self): """ Creates a new user and account. Returns the newly created user. """ username, email, password, first_name, last_name = (self.cleaned_data['username'], self.cleaned_data['email'], self.cleaned_data['password1'], self.cleaned_data['first_name'], self.cleaned_data['last_name'],) new_user = get_user_model()(username=username, email=email, first_name=first_name, last_name=last_name) new_user.set_password(password) new_user.save() return new_user
[ "def", "save", "(", "self", ")", ":", "username", ",", "email", ",", "password", ",", "first_name", ",", "last_name", "=", "(", "self", ".", "cleaned_data", "[", "'username'", "]", ",", "self", ".", "cleaned_data", "[", "'email'", "]", ",", "self", ".", "cleaned_data", "[", "'password1'", "]", ",", "self", ".", "cleaned_data", "[", "'first_name'", "]", ",", "self", ".", "cleaned_data", "[", "'last_name'", "]", ",", ")", "new_user", "=", "get_user_model", "(", ")", "(", "username", "=", "username", ",", "email", "=", "email", ",", "first_name", "=", "first_name", ",", "last_name", "=", "last_name", ")", "new_user", ".", "set_password", "(", "password", ")", "new_user", ".", "save", "(", ")", "return", "new_user" ]
Creates a new user and account. Returns the newly created user.
[ "Creates", "a", "new", "user", "and", "account", ".", "Returns", "the", "newly", "created", "user", "." ]
python
train
49.733333
yougov/mongo-connector
mongo_connector/oplog_manager.py
https://github.com/yougov/mongo-connector/blob/557cafd4b54c848cd54ef28a258391a154650cb4/mongo_connector/oplog_manager.py#L858-L883
def read_last_checkpoint(self): """Read the last checkpoint from the oplog progress dictionary. """ # In versions of mongo-connector 2.3 and before, # we used the repr of the # oplog collection as keys in the oplog_progress dictionary. # In versions thereafter, we use the replica set name. For backwards # compatibility, we check for both. oplog_str = str(self.oplog) ret_val = None with self.oplog_progress as oplog_prog: oplog_dict = oplog_prog.get_dict() try: # New format. ret_val = oplog_dict[self.replset_name] except KeyError: try: # Old format. ret_val = oplog_dict[oplog_str] except KeyError: pass LOG.debug("OplogThread: reading last checkpoint as %s " % str(ret_val)) self.checkpoint = ret_val return ret_val
[ "def", "read_last_checkpoint", "(", "self", ")", ":", "# In versions of mongo-connector 2.3 and before,", "# we used the repr of the", "# oplog collection as keys in the oplog_progress dictionary.", "# In versions thereafter, we use the replica set name. For backwards", "# compatibility, we check for both.", "oplog_str", "=", "str", "(", "self", ".", "oplog", ")", "ret_val", "=", "None", "with", "self", ".", "oplog_progress", "as", "oplog_prog", ":", "oplog_dict", "=", "oplog_prog", ".", "get_dict", "(", ")", "try", ":", "# New format.", "ret_val", "=", "oplog_dict", "[", "self", ".", "replset_name", "]", "except", "KeyError", ":", "try", ":", "# Old format.", "ret_val", "=", "oplog_dict", "[", "oplog_str", "]", "except", "KeyError", ":", "pass", "LOG", ".", "debug", "(", "\"OplogThread: reading last checkpoint as %s \"", "%", "str", "(", "ret_val", ")", ")", "self", ".", "checkpoint", "=", "ret_val", "return", "ret_val" ]
Read the last checkpoint from the oplog progress dictionary.
[ "Read", "the", "last", "checkpoint", "from", "the", "oplog", "progress", "dictionary", "." ]
python
train
36.961538
tensorflow/tensor2tensor
tensor2tensor/data_generators/cleaner_en_xx.py
https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/data_generators/cleaner_en_xx.py#L66-L86
def paracrawl_v3_pairs(paracrawl_file): """Generates raw (English, other) pairs from a ParaCrawl V3.0 data file. Args: paracrawl_file: A ParaCrawl V3.0 en-.. data file. Yields: Pairs of (sentence_en, sentence_xx), as Unicode strings. Raises: StopIteration: If the file ends while this method is in the middle of creating a translation pair. """ raw_sentences = _raw_sentences(paracrawl_file) for s_en in raw_sentences: try: s_xx = next(raw_sentences) if s_en and s_xx: # Prevent empty string examples. yield s_en, s_xx except StopIteration: tf.logging.error( 'Unmatched final sentence while reading in sentence pairs: [%s]', s_en)
[ "def", "paracrawl_v3_pairs", "(", "paracrawl_file", ")", ":", "raw_sentences", "=", "_raw_sentences", "(", "paracrawl_file", ")", "for", "s_en", "in", "raw_sentences", ":", "try", ":", "s_xx", "=", "next", "(", "raw_sentences", ")", "if", "s_en", "and", "s_xx", ":", "# Prevent empty string examples.", "yield", "s_en", ",", "s_xx", "except", "StopIteration", ":", "tf", ".", "logging", ".", "error", "(", "'Unmatched final sentence while reading in sentence pairs: [%s]'", ",", "s_en", ")" ]
Generates raw (English, other) pairs from a ParaCrawl V3.0 data file. Args: paracrawl_file: A ParaCrawl V3.0 en-.. data file. Yields: Pairs of (sentence_en, sentence_xx), as Unicode strings. Raises: StopIteration: If the file ends while this method is in the middle of creating a translation pair.
[ "Generates", "raw", "(", "English", "other", ")", "pairs", "from", "a", "ParaCrawl", "V3", ".", "0", "data", "file", "." ]
python
train
33.333333
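The pairing trick above — drawing two items per loop iteration from a single generator — shown on plain data, including the unmatched-final-sentence case (note the StopIteration is caught, which is exactly why the original logs instead of letting it escape the generator).

def pairs(lines):
    it = iter(lines)
    for first in it:
        try:
            second = next(it)
        except StopIteration:
            print("unmatched final sentence: [%s]" % first)
            return
        if first and second:  # prevent empty-string examples
            yield first, second

print(list(pairs(["Hello.", "Hallo.", "Bye.", "Tschuess.", "dangling"])))
# [('Hello.', 'Hallo.'), ('Bye.', 'Tschuess.')]  plus the warning line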
SKA-ScienceDataProcessor/integration-prototype
sip/tango_control/tango_master/app/sdp_master_device.py
https://github.com/SKA-ScienceDataProcessor/integration-prototype/blob/8c8006de6ad71dcd44114b0338780738079c87d4/sip/tango_control/tango_master/app/sdp_master_device.py#L47-L57
def always_executed_hook(self): """Run for each command.""" _logT = self._devProxy.get_logging_target() if 'device::sip_sdp_logger' not in _logT: try: self._devProxy.add_logging_target('device::sip_sdp/elt/logger') self.info_stream("Test of Tango logging from " "'tc_tango_master'") except Exception as e: LOG.debug('Failed to setup Tango logging %s', e )
[ "def", "always_executed_hook", "(", "self", ")", ":", "_logT", "=", "self", ".", "_devProxy", ".", "get_logging_target", "(", ")", "if", "'device::sip_sdp_logger'", "not", "in", "_logT", ":", "try", ":", "self", ".", "_devProxy", ".", "add_logging_target", "(", "'device::sip_sdp/elt/logger'", ")", "self", ".", "info_stream", "(", "\"Test of Tango logging from \"", "\"'tc_tango_master'\"", ")", "except", "Exception", "as", "e", ":", "LOG", ".", "debug", "(", "'Failed to setup Tango logging %s'", ",", "e", ")" ]
Run for each command.
[ "Run", "for", "each", "command", "." ]
python
train
43.181818
ambitioninc/python-logentries-api
logentries_api/resources.py
https://github.com/ambitioninc/python-logentries-api/blob/77ff1a7a2995d7ea2725b74e34c0f880f4ee23bc/logentries_api/resources.py#L275-L296
def get(self, label_sn): """ Get tags by a label's sn key :param label_sn: A corresponding label's ``sn`` key. :type label_sn: str or int :return: A list of matching tags. An empty list is returned if there are not any matches :rtype: list of dict :raises: This will raise a :class:`ServerException<logentries_api.exceptions.ServerException>` if there is an error from Logentries """ tags = self.list() return [ tag for tag in tags if str(label_sn) in tag.get('args', {}).values() ]
[ "def", "get", "(", "self", ",", "label_sn", ")", ":", "tags", "=", "self", ".", "list", "(", ")", "return", "[", "tag", "for", "tag", "in", "tags", "if", "str", "(", "label_sn", ")", "in", "tag", ".", "get", "(", "'args'", ",", "{", "}", ")", ".", "values", "(", ")", "]" ]
Get tags by a label's sn key :param label_sn: A corresponding label's ``sn`` key. :type label_sn: str or int :return: A list of matching tags. An empty list is returned if there are not any matches :rtype: list of dict :raises: This will raise a :class:`ServerException<logentries_api.exceptions.ServerException>` if there is an error from Logentries
[ "Get", "tags", "by", "a", "label", "s", "sn", "key" ]
python
test
28.909091
hvac/hvac
hvac/api/secrets_engines/azure.py
https://github.com/hvac/hvac/blob/cce5b86889193f622c2a72a4a1b7e1c9c8aff1ce/hvac/api/secrets_engines/azure.py#L19-L62
def configure(self, subscription_id, tenant_id, client_id="", client_secret="",
              environment='AzurePublicCloud', mount_point=DEFAULT_MOUNT_POINT):
    """Configure the credentials required for the plugin to perform API calls to Azure.

    These credentials will be used to query roles and create/delete service principals.
    Environment variables will override any parameters set in the config.

    Supported methods:
        POST: /{mount_point}/config. Produces: 204 (empty body)

    :param subscription_id: The subscription id for the Azure Active Directory
    :type subscription_id: str | unicode
    :param tenant_id: The tenant id for the Azure Active Directory.
    :type tenant_id: str | unicode
    :param client_id: The OAuth2 client id to connect to Azure.
    :type client_id: str | unicode
    :param client_secret: The OAuth2 client secret to connect to Azure.
    :type client_secret: str | unicode
    :param environment: The Azure environment. If not specified, Vault will use Azure Public Cloud.
    :type environment: str | unicode
    :param mount_point: The "path" the Azure secrets engine was mounted on.
    :type mount_point: str | unicode
    :return: The response of the request.
    :rtype: requests.Response
    """
    if environment not in VALID_ENVIRONMENTS:
        error_msg = 'invalid environment argument provided "{arg}", supported environments: "{environments}"'
        raise exceptions.ParamValidationError(error_msg.format(
            arg=environment,
            environments=','.join(VALID_ENVIRONMENTS),
        ))
    params = {
        'subscription_id': subscription_id,
        'tenant_id': tenant_id,
        'client_id': client_id,
        'client_secret': client_secret,
        'environment': environment,
    }
    api_path = '/v1/{mount_point}/config'.format(mount_point=mount_point)
    return self._adapter.post(
        url=api_path,
        json=params,
    )
[ "def", "configure", "(", "self", ",", "subscription_id", ",", "tenant_id", ",", "client_id", "=", "\"\"", ",", "client_secret", "=", "\"\"", ",", "environment", "=", "'AzurePublicCloud'", ",", "mount_point", "=", "DEFAULT_MOUNT_POINT", ")", ":", "if", "environment", "not", "in", "VALID_ENVIRONMENTS", ":", "error_msg", "=", "'invalid environment argument provided \"{arg}\", supported environments: \"{environments}\"'", "raise", "exceptions", ".", "ParamValidationError", "(", "error_msg", ".", "format", "(", "arg", "=", "environment", ",", "environments", "=", "','", ".", "join", "(", "VALID_ENVIRONMENTS", ")", ",", ")", ")", "params", "=", "{", "'subscription_id'", ":", "subscription_id", ",", "'tenant_id'", ":", "tenant_id", ",", "'client_id'", ":", "client_id", ",", "'client_secret'", ":", "client_secret", ",", "'environment'", ":", "environment", ",", "}", "api_path", "=", "'/v1/{mount_point}/config'", ".", "format", "(", "mount_point", "=", "mount_point", ")", "return", "self", ".", "_adapter", ".", "post", "(", "url", "=", "api_path", ",", "json", "=", "params", ",", ")" ]
Configure the credentials required for the plugin to perform API calls to Azure. These credentials will be used to query roles and create/delete service principals. Environment variables will override any parameters set in the config. Supported methods: POST: /{mount_point}/config. Produces: 204 (empty body) :param subscription_id: The subscription id for the Azure Active Directory :type subscription_id: str | unicode :param tenant_id: The tenant id for the Azure Active Directory. :type tenant_id: str | unicode :param client_id: The OAuth2 client id to connect to Azure. :type client_id: str | unicode :param client_secret: The OAuth2 client secret to connect to Azure. :type client_secret: str | unicode :param environment: The Azure environment. If not specified, Vault will use Azure Public Cloud. :type environment: str | unicode :param mount_point: The "path" the Azure secrets engine was mounted on. :type mount_point: str | unicode :return: The response of the request. :rtype: requests.Response
[ "Configure", "the", "credentials", "required", "for", "the", "plugin", "to", "perform", "API", "calls", "to", "Azure", "." ]
python
train
46.454545
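A usage sketch for configure() above; the Vault address, token, and Azure ids are placeholders:

import hvac

client = hvac.Client(url='https://vault.example.com:8200', token='hvs.placeholder')  # placeholder token
client.secrets.azure.configure(
    subscription_id='00000000-0000-0000-0000-000000000000',
    tenant_id='11111111-1111-1111-1111-111111111111',
    client_id='placeholder-client-id',
    client_secret='placeholder-client-secret',
    environment='AzurePublicCloud',  # must be one of VALID_ENVIRONMENTS
)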
vinci1it2000/schedula
schedula/utils/base.py
https://github.com/vinci1it2000/schedula/blob/addb9fd685be81544b796c51383ac00a31543ce9/schedula/utils/base.py#L111-L265
def plot(self, workflow=None, view=True, depth=-1, name=NONE, comment=NONE, format=NONE, engine=NONE, encoding=NONE, graph_attr=NONE, node_attr=NONE, edge_attr=NONE, body=NONE, node_styles=NONE, node_data=NONE, node_function=NONE, edge_data=NONE, max_lines=NONE, max_width=NONE, directory=None, sites=None, index=False): """ Plots the Dispatcher with a graph in the DOT language with Graphviz. :param workflow: If True the latest solution will be plotted, otherwise the dmap. :type workflow: bool, optional :param view: Open the rendered directed graph in the DOT language with the sys default opener. :type view: bool, optional :param edge_data: Edge attributes to view. :type edge_data: tuple[str], optional :param node_data: Data node attributes to view. :type node_data: tuple[str], optional :param node_function: Function node attributes to view. :type node_function: tuple[str], optional :param node_styles: Default node styles according to graphviz node attributes. :type node_styles: dict[str|Token, dict[str, str]] :param depth: Depth of sub-dispatch plots. If negative all levels are plotted. :type depth: int, optional :param name: Graph name used in the source code. :type name: str :param comment: Comment added to the first line of the source. :type comment: str :param directory: (Sub)directory for source saving and rendering. :type directory: str, optional :param format: Rendering output format ('pdf', 'png', ...). :type format: str, optional :param engine: Layout command used ('dot', 'neato', ...). :type engine: str, optional :param encoding: Encoding for saving the source. :type encoding: str, optional :param graph_attr: Dict of (attribute, value) pairs for the graph. :type graph_attr: dict, optional :param node_attr: Dict of (attribute, value) pairs set for all nodes. :type node_attr: dict, optional :param edge_attr: Dict of (attribute, value) pairs set for all edges. :type edge_attr: dict, optional :param body: Dict of (attribute, value) pairs to add to the graph body. :type body: dict, optional :param directory: Where is the generated Flask app root located? :type directory: str, optional :param sites: A set of :class:`~schedula.utils.drw.Site` to maintain alive the backend server. :type sites: set[~schedula.utils.drw.Site], optional :param index: Add the site index as first page? :type index: bool, optional :param max_lines: Maximum number of lines for rendering node attributes. :type max_lines: int, optional :param max_width: Maximum number of characters in a line to render node attributes. :type max_width: int, optional :param view: Open the main page of the site? :type view: bool, optional :return: A SiteMap. :rtype: schedula.utils.drw.SiteMap Example: .. dispatcher:: dsp :opt: graph_attr={'ratio': '1'} :code: >>> from schedula import Dispatcher >>> dsp = Dispatcher(name='Dispatcher') >>> def fun(a): ... 
return a + 1, a - 1 >>> dsp.add_function('fun', fun, ['a'], ['b', 'c']) 'fun' >>> dsp.plot(view=False, graph_attr={'ratio': '1'}) SiteMap([(Dispatcher, SiteMap())]) """ d = { 'name': name, 'comment': comment, 'format': format, 'engine': engine, 'encoding': encoding, 'graph_attr': graph_attr, 'node_attr': node_attr, 'edge_attr': edge_attr, 'body': body, } options = { 'digraph': {k: v for k, v in d.items() if v is not NONE} or NONE, 'node_styles': node_styles, 'node_data': node_data, 'node_function': node_function, 'edge_data': edge_data, 'max_lines': max_lines, # 5 'max_width': max_width, # 200 } options = {k: v for k, v in options.items() if v is not NONE} from .drw import SiteMap from .sol import Solution if workflow is None and isinstance(self, Solution): workflow = True else: workflow = workflow or False sitemap = SiteMap() sitemap.add_items(self, workflow=workflow, depth=depth, **options) if view: import tempfile directory = directory or tempfile.mkdtemp() if sites is None: sitemap.render(directory=directory, view=True, index=index) else: sites.add(sitemap.site(directory, view=True, index=index)) return sitemap
[ "def", "plot", "(", "self", ",", "workflow", "=", "None", ",", "view", "=", "True", ",", "depth", "=", "-", "1", ",", "name", "=", "NONE", ",", "comment", "=", "NONE", ",", "format", "=", "NONE", ",", "engine", "=", "NONE", ",", "encoding", "=", "NONE", ",", "graph_attr", "=", "NONE", ",", "node_attr", "=", "NONE", ",", "edge_attr", "=", "NONE", ",", "body", "=", "NONE", ",", "node_styles", "=", "NONE", ",", "node_data", "=", "NONE", ",", "node_function", "=", "NONE", ",", "edge_data", "=", "NONE", ",", "max_lines", "=", "NONE", ",", "max_width", "=", "NONE", ",", "directory", "=", "None", ",", "sites", "=", "None", ",", "index", "=", "False", ")", ":", "d", "=", "{", "'name'", ":", "name", ",", "'comment'", ":", "comment", ",", "'format'", ":", "format", ",", "'engine'", ":", "engine", ",", "'encoding'", ":", "encoding", ",", "'graph_attr'", ":", "graph_attr", ",", "'node_attr'", ":", "node_attr", ",", "'edge_attr'", ":", "edge_attr", ",", "'body'", ":", "body", ",", "}", "options", "=", "{", "'digraph'", ":", "{", "k", ":", "v", "for", "k", ",", "v", "in", "d", ".", "items", "(", ")", "if", "v", "is", "not", "NONE", "}", "or", "NONE", ",", "'node_styles'", ":", "node_styles", ",", "'node_data'", ":", "node_data", ",", "'node_function'", ":", "node_function", ",", "'edge_data'", ":", "edge_data", ",", "'max_lines'", ":", "max_lines", ",", "# 5", "'max_width'", ":", "max_width", ",", "# 200", "}", "options", "=", "{", "k", ":", "v", "for", "k", ",", "v", "in", "options", ".", "items", "(", ")", "if", "v", "is", "not", "NONE", "}", "from", ".", "drw", "import", "SiteMap", "from", ".", "sol", "import", "Solution", "if", "workflow", "is", "None", "and", "isinstance", "(", "self", ",", "Solution", ")", ":", "workflow", "=", "True", "else", ":", "workflow", "=", "workflow", "or", "False", "sitemap", "=", "SiteMap", "(", ")", "sitemap", ".", "add_items", "(", "self", ",", "workflow", "=", "workflow", ",", "depth", "=", "depth", ",", "*", "*", "options", ")", "if", "view", ":", "import", "tempfile", "directory", "=", "directory", "or", "tempfile", ".", "mkdtemp", "(", ")", "if", "sites", "is", "None", ":", "sitemap", ".", "render", "(", "directory", "=", "directory", ",", "view", "=", "True", ",", "index", "=", "index", ")", "else", ":", "sites", ".", "add", "(", "sitemap", ".", "site", "(", "directory", ",", "view", "=", "True", ",", "index", "=", "index", ")", ")", "return", "sitemap" ]
Plots the Dispatcher with a graph in the DOT language with Graphviz. :param workflow: If True the latest solution will be plotted, otherwise the dmap. :type workflow: bool, optional :param view: Open the rendered directed graph in the DOT language with the sys default opener. :type view: bool, optional :param edge_data: Edge attributes to view. :type edge_data: tuple[str], optional :param node_data: Data node attributes to view. :type node_data: tuple[str], optional :param node_function: Function node attributes to view. :type node_function: tuple[str], optional :param node_styles: Default node styles according to graphviz node attributes. :type node_styles: dict[str|Token, dict[str, str]] :param depth: Depth of sub-dispatch plots. If negative all levels are plotted. :type depth: int, optional :param name: Graph name used in the source code. :type name: str :param comment: Comment added to the first line of the source. :type comment: str :param directory: (Sub)directory for source saving and rendering. :type directory: str, optional :param format: Rendering output format ('pdf', 'png', ...). :type format: str, optional :param engine: Layout command used ('dot', 'neato', ...). :type engine: str, optional :param encoding: Encoding for saving the source. :type encoding: str, optional :param graph_attr: Dict of (attribute, value) pairs for the graph. :type graph_attr: dict, optional :param node_attr: Dict of (attribute, value) pairs set for all nodes. :type node_attr: dict, optional :param edge_attr: Dict of (attribute, value) pairs set for all edges. :type edge_attr: dict, optional :param body: Dict of (attribute, value) pairs to add to the graph body. :type body: dict, optional :param directory: Where is the generated Flask app root located? :type directory: str, optional :param sites: A set of :class:`~schedula.utils.drw.Site` to maintain alive the backend server. :type sites: set[~schedula.utils.drw.Site], optional :param index: Add the site index as first page? :type index: bool, optional :param max_lines: Maximum number of lines for rendering node attributes. :type max_lines: int, optional :param max_width: Maximum number of characters in a line to render node attributes. :type max_width: int, optional :param view: Open the main page of the site? :type view: bool, optional :return: A SiteMap. :rtype: schedula.utils.drw.SiteMap Example: .. dispatcher:: dsp :opt: graph_attr={'ratio': '1'} :code: >>> from schedula import Dispatcher >>> dsp = Dispatcher(name='Dispatcher') >>> def fun(a): ... return a + 1, a - 1 >>> dsp.add_function('fun', fun, ['a'], ['b', 'c']) 'fun' >>> dsp.plot(view=False, graph_attr={'ratio': '1'}) SiteMap([(Dispatcher, SiteMap())])
[ "Plots", "the", "Dispatcher", "with", "a", "graph", "in", "the", "DOT", "language", "with", "Graphviz", "." ]
python
train
32.948387
openid/python-openid
openid/consumer/consumer.py
https://github.com/openid/python-openid/blob/f7e13536f0d1828d3cef5ae7a7b55cabadff37fc/openid/consumer/consumer.py#L701-L744
def _doIdRes(self, message, endpoint, return_to):
    """Handle id_res responses that are not cancellations of
    immediate mode requests.

    @param message: the response parameters.
    @param endpoint: the discovered endpoint object. May be None.

    @raises ProtocolError: If the message contents are not
        well-formed according to the OpenID specification. This
        includes missing fields or not signing fields that should
        be signed.

    @raises DiscoveryFailure: If the subject of the id_res message
        does not match the supplied endpoint, and discovery on the
        identifier in the message fails (this should only happen
        when using OpenID 2)

    @returntype: L{Response}
    """
    # Checks for presence of appropriate fields (and checks
    # signed list fields)
    self._idResCheckForFields(message)

    if not self._checkReturnTo(message, return_to):
        raise ProtocolError(
            "return_to does not match return URL. Expected %r, got %r"
            % (return_to, message.getArg(OPENID_NS, 'return_to')))

    # Verify discovery information:
    endpoint = self._verifyDiscoveryResults(message, endpoint)
    logging.info("Received id_res response from %s using association %s" %
                 (endpoint.server_url,
                  message.getArg(OPENID_NS, 'assoc_handle')))

    self._idResCheckSignature(message, endpoint.server_url)

    # Will raise a ProtocolError if the nonce is bad
    self._idResCheckNonce(message, endpoint)

    signed_list_str = message.getArg(OPENID_NS, 'signed', no_default)
    signed_list = signed_list_str.split(',')
    signed_fields = ["openid." + s for s in signed_list]

    return SuccessResponse(endpoint, message, signed_fields)
[ "def", "_doIdRes", "(", "self", ",", "message", ",", "endpoint", ",", "return_to", ")", ":", "# Checks for presence of appropriate fields (and checks", "# signed list fields)", "self", ".", "_idResCheckForFields", "(", "message", ")", "if", "not", "self", ".", "_checkReturnTo", "(", "message", ",", "return_to", ")", ":", "raise", "ProtocolError", "(", "\"return_to does not match return URL. Expected %r, got %r\"", "%", "(", "return_to", ",", "message", ".", "getArg", "(", "OPENID_NS", ",", "'return_to'", ")", ")", ")", "# Verify discovery information:", "endpoint", "=", "self", ".", "_verifyDiscoveryResults", "(", "message", ",", "endpoint", ")", "logging", ".", "info", "(", "\"Received id_res response from %s using association %s\"", "%", "(", "endpoint", ".", "server_url", ",", "message", ".", "getArg", "(", "OPENID_NS", ",", "'assoc_handle'", ")", ")", ")", "self", ".", "_idResCheckSignature", "(", "message", ",", "endpoint", ".", "server_url", ")", "# Will raise a ProtocolError if the nonce is bad", "self", ".", "_idResCheckNonce", "(", "message", ",", "endpoint", ")", "signed_list_str", "=", "message", ".", "getArg", "(", "OPENID_NS", ",", "'signed'", ",", "no_default", ")", "signed_list", "=", "signed_list_str", ".", "split", "(", "','", ")", "signed_fields", "=", "[", "\"openid.\"", "+", "s", "for", "s", "in", "signed_list", "]", "return", "SuccessResponse", "(", "endpoint", ",", "message", ",", "signed_fields", ")" ]
Handle id_res responses that are not cancellations of immediate mode requests. @param message: the response parameters. @param endpoint: the discovered endpoint object. May be None. @raises ProtocolError: If the message contents are not well-formed according to the OpenID specification. This includes missing fields or not signing fields that should be signed. @raises DiscoveryFailure: If the subject of the id_res message does not match the supplied endpoint, and discovery on the identifier in the message fails (this should only happen when using OpenID 2) @returntype: L{Response}
[ "Handle", "id_res", "responses", "that", "are", "not", "cancellations", "of", "immediate", "mode", "requests", "." ]
python
train
41.590909
ttroy50/pyephember
pyephember/pyephember.py
https://github.com/ttroy50/pyephember/blob/3ee159ee82b926b957dae8dcbc7a4bfb6807a9b4/pyephember/pyephember.py#L243-L272
def set_target_temperature_by_id(self, zone_id, target_temperature): """ Set the target temperature for a zone by id """ if not self._do_auth(): raise RuntimeError("Unable to login") data = { "ZoneId": zone_id, "TargetTemperature": target_temperature } headers = { "Accept": "application/json", "Content-Type": "application/json", 'Authorization': 'Bearer ' + self.login_data['token']['accessToken'] } url = self.api_base_url + "Home/ZoneTargetTemperature" response = requests.post(url, data=json.dumps( data), headers=headers, timeout=10) if response.status_code != 200: return False zone_change_data = response.json() return zone_change_data.get("isSuccess", False)
[ "def", "set_target_temperature_by_id", "(", "self", ",", "zone_id", ",", "target_temperature", ")", ":", "if", "not", "self", ".", "_do_auth", "(", ")", ":", "raise", "RuntimeError", "(", "\"Unable to login\"", ")", "data", "=", "{", "\"ZoneId\"", ":", "zone_id", ",", "\"TargetTemperature\"", ":", "target_temperature", "}", "headers", "=", "{", "\"Accept\"", ":", "\"application/json\"", ",", "\"Content-Type\"", ":", "\"application/json\"", ",", "'Authorization'", ":", "'Bearer '", "+", "self", ".", "login_data", "[", "'token'", "]", "[", "'accessToken'", "]", "}", "url", "=", "self", ".", "api_base_url", "+", "\"Home/ZoneTargetTemperature\"", "response", "=", "requests", ".", "post", "(", "url", ",", "data", "=", "json", ".", "dumps", "(", "data", ")", ",", "headers", "=", "headers", ",", "timeout", "=", "10", ")", "if", "response", ".", "status_code", "!=", "200", ":", "return", "False", "zone_change_data", "=", "response", ".", "json", "(", ")", "return", "zone_change_data", ".", "get", "(", "\"isSuccess\"", ",", "False", ")" ]
Set the target temperature for a zone by id
[ "Set", "the", "target", "temperature", "for", "a", "zone", "by", "id" ]
python
train
28.7
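A usage sketch for the method above; the credentials and zone id are placeholders, and the EphEmber constructor signature is an assumption based on the library's README:

from pyephember.pyephember import EphEmber

ember = EphEmber('user@example.com', 'secret-password')  # placeholder credentials
if not ember.set_target_temperature_by_id('zone-id-123', 21.5):  # placeholder zone id
    print('The API rejected the temperature change')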
AguaClara/aguaclara
aguaclara/design/lfom.py
https://github.com/AguaClara/aguaclara/blob/8dd4e734768b166a7fc2b60388a24df2f93783fc/aguaclara/design/lfom.py#L67-L70
def area_pipe_min(self):
    """The minimum cross-sectional area of the LFOM pipe that ensures
    a safety factor."""
    return (self.safety_factor * self.q / self.vel_critical).to(u.cm**2)
[ "def", "area_pipe_min", "(", "self", ")", ":", "return", "(", "self", ".", "safety_factor", "*", "self", ".", "q", "/", "self", ".", "vel_critical", ")", ".", "to", "(", "u", ".", "cm", "**", "2", ")" ]
The minimum cross-sectional area of the LFOM pipe that ensures a safety factor.
[ "The", "minimum", "cross", "-", "sectional", "area", "of", "the", "LFOM", "pipe", "that", "assures", "a", "safety", "factor", "." ]
python
train
50
hendrix/hendrix
hendrix/contrib/cache/__init__.py
https://github.com/hendrix/hendrix/blob/175af011a7e5822b772bfec0e11a46466bb8688d/hendrix/contrib/cache/__init__.py#L75-L80
def getDate(self): "returns the GMT response datetime or None" date = self.headers.get('date') if date: date = self.convertTimeString(date) return date
[ "def", "getDate", "(", "self", ")", ":", "date", "=", "self", ".", "headers", ".", "get", "(", "'date'", ")", "if", "date", ":", "date", "=", "self", ".", "convertTimeString", "(", "date", ")", "return", "date" ]
returns the GMT response datetime or None
[ "returns", "the", "GMT", "response", "datetime", "or", "None" ]
python
train
31.666667
crocs-muni/roca
roca/detect.py
https://github.com/crocs-muni/roca/blob/74ad6ce63c428d83dcffce9c5e26ef7b9e30faa5/roca/detect.py#L1753-L1779
def process_js_mod(self, data, name, idx, sub_idx):
    """
    Processes one modulus from JSON
    :param data:
    :param name:
    :param idx:
    :param sub_idx:
    :return:
    """
    if isinstance(data, (int, long)):
        js = collections.OrderedDict()
        js['type'] = 'js-mod-num'
        js['fname'] = name
        js['idx'] = idx
        js['sub_idx'] = sub_idx
        js['n'] = '0x%x' % data

        if self.has_fingerprint(data):
            logger.warning('Fingerprint found in json int modulus %s idx %s %s' % (name, idx, sub_idx))
            self.mark_and_add_effort(data, js)

            if self.do_print:
                print(json.dumps(js))

        return TestResult(js)

    self.process_mod_line(data, name, idx, aux={'stype': 'json', 'sub_idx': sub_idx})
[ "def", "process_js_mod", "(", "self", ",", "data", ",", "name", ",", "idx", ",", "sub_idx", ")", ":", "if", "isinstance", "(", "data", ",", "(", "int", ",", "long", ")", ")", ":", "js", "=", "collections", ".", "OrderedDict", "(", ")", "js", "[", "'type'", "]", "=", "'js-mod-num'", "js", "[", "'fname'", "]", "=", "name", "js", "[", "'idx'", "]", "=", "idx", "js", "[", "'sub_idx'", "]", "=", "sub_idx", "js", "[", "'n'", "]", "=", "'0x%x'", "%", "data", "if", "self", ".", "has_fingerprint", "(", "data", ")", ":", "logger", ".", "warning", "(", "'Fingerprint found in json int modulus %s idx %s %s'", "%", "(", "name", ",", "idx", ",", "sub_idx", ")", ")", "self", ".", "mark_and_add_effort", "(", "data", ",", "js", ")", "if", "self", ".", "do_print", ":", "print", "(", "json", ".", "dumps", "(", "js", ")", ")", "return", "TestResult", "(", "js", ")", "self", ".", "process_mod_line", "(", "data", ",", "name", ",", "idx", ",", "aux", "=", "{", "'stype'", ":", "'json'", ",", "'sub_idx'", ":", "sub_idx", "}", ")" ]
Processes one modulus from JSON :param data: :param name: :param idx: :param sub_idx: :return:
[ "Processes", "one", "moduli", "from", "JSON", ":", "param", "data", ":", ":", "param", "name", ":", ":", "param", "idx", ":", ":", "param", "sub_idx", ":", ":", "return", ":" ]
python
train
31.518519
knipknap/exscript
Exscript/emulators/command.py
https://github.com/knipknap/exscript/blob/72718eee3e87b345d5a5255be9824e867e42927b/Exscript/emulators/command.py#L73-L96
def add_from_file(self, filename, handler_decorator=None): """ Wrapper around add() that reads the handlers from the file with the given name. The file is a Python script containing a list named 'commands' of tuples that map command names to handlers. :type filename: str :param filename: The name of the file containing the tuples. :type handler_decorator: function :param handler_decorator: A function that is used to decorate each of the handlers in the file. """ args = {} execfile(filename, args) commands = args.get('commands') if commands is None: raise Exception(filename + ' has no variable named "commands"') elif not hasattr(commands, '__iter__'): raise Exception(filename + ': "commands" is not iterable') for key, handler in commands: if handler_decorator: handler = handler_decorator(handler) self.add(key, handler)
[ "def", "add_from_file", "(", "self", ",", "filename", ",", "handler_decorator", "=", "None", ")", ":", "args", "=", "{", "}", "execfile", "(", "filename", ",", "args", ")", "commands", "=", "args", ".", "get", "(", "'commands'", ")", "if", "commands", "is", "None", ":", "raise", "Exception", "(", "filename", "+", "' has no variable named \"commands\"'", ")", "elif", "not", "hasattr", "(", "commands", ",", "'__iter__'", ")", ":", "raise", "Exception", "(", "filename", "+", "': \"commands\" is not iterable'", ")", "for", "key", ",", "handler", "in", "commands", ":", "if", "handler_decorator", ":", "handler", "=", "handler_decorator", "(", "handler", ")", "self", ".", "add", "(", "key", ",", "handler", ")" ]
Wrapper around add() that reads the handlers from the file with the given name. The file is a Python script containing a list named 'commands' of tuples that map command names to handlers. :type filename: str :param filename: The name of the file containing the tuples. :type handler_decorator: function :param handler_decorator: A function that is used to decorate each of the handlers in the file.
[ "Wrapper", "around", "add", "()", "that", "reads", "the", "handlers", "from", "the", "file", "with", "the", "given", "name", ".", "The", "file", "is", "a", "Python", "script", "containing", "a", "list", "named", "commands", "of", "tuples", "that", "map", "command", "names", "to", "handlers", "." ]
python
train
42.333333
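A sketch of the file add_from_file() expects: a Python script defining a list named 'commands' of (name, handler) tuples. The handler signature below is a placeholder, not Exscript's documented one:

# commands.py -- loaded via add_from_file('commands.py')
def show_version(*args):
    # Placeholder handler body: return the emulated device's response.
    return 'IOS Version 12.4\n'

commands = [
    (r'show\s+version', show_version),
]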
titilambert/pyfido
pyfido/client.py
https://github.com/titilambert/pyfido/blob/8302b76f4d9d7b05b97926c003ca02409aa23281/pyfido/client.py#L239-L287
def _get_usage(self, account_number, number): """Get Fido usage. Get the following data - talk - text - data Roaming data is not supported yet """ # Prepare data data = {"ctn": number, "language": "en-US", "accountNumber": account_number} # Http request try: raw_res = yield from self._session.post(USAGE_URL, data=data, headers=self._headers, timeout=self._timeout) except OSError: raise PyFidoError("Can not get usage") # Load answer as json try: output = yield from raw_res.json() except (OSError, ValueError): raise PyFidoError("Can not get usage as json") # Format data ret_data = {} for data_name, keys in DATA_MAP.items(): key, subkey = keys for data in output.get(key)[0].get('wirelessUsageSummaryInfoList'): if data.get('usageSummaryType') == subkey: # Prepare keys: used_key = "{}_used".format(data_name) remaining_key = "{}_remaining".format(data_name) limit_key = "{}_limit".format(data_name) # Get values ret_data[used_key] = data.get('used', 0.0) if data.get('remaining') >= 0: ret_data[remaining_key] = data.get('remaining') else: ret_data[remaining_key] = None if data.get('total') >= 0: ret_data[limit_key] = data.get('total') else: ret_data[limit_key] = None return ret_data
[ "def", "_get_usage", "(", "self", ",", "account_number", ",", "number", ")", ":", "# Prepare data", "data", "=", "{", "\"ctn\"", ":", "number", ",", "\"language\"", ":", "\"en-US\"", ",", "\"accountNumber\"", ":", "account_number", "}", "# Http request", "try", ":", "raw_res", "=", "yield", "from", "self", ".", "_session", ".", "post", "(", "USAGE_URL", ",", "data", "=", "data", ",", "headers", "=", "self", ".", "_headers", ",", "timeout", "=", "self", ".", "_timeout", ")", "except", "OSError", ":", "raise", "PyFidoError", "(", "\"Can not get usage\"", ")", "# Load answer as json", "try", ":", "output", "=", "yield", "from", "raw_res", ".", "json", "(", ")", "except", "(", "OSError", ",", "ValueError", ")", ":", "raise", "PyFidoError", "(", "\"Can not get usage as json\"", ")", "# Format data", "ret_data", "=", "{", "}", "for", "data_name", ",", "keys", "in", "DATA_MAP", ".", "items", "(", ")", ":", "key", ",", "subkey", "=", "keys", "for", "data", "in", "output", ".", "get", "(", "key", ")", "[", "0", "]", ".", "get", "(", "'wirelessUsageSummaryInfoList'", ")", ":", "if", "data", ".", "get", "(", "'usageSummaryType'", ")", "==", "subkey", ":", "# Prepare keys:", "used_key", "=", "\"{}_used\"", ".", "format", "(", "data_name", ")", "remaining_key", "=", "\"{}_remaining\"", ".", "format", "(", "data_name", ")", "limit_key", "=", "\"{}_limit\"", ".", "format", "(", "data_name", ")", "# Get values", "ret_data", "[", "used_key", "]", "=", "data", ".", "get", "(", "'used'", ",", "0.0", ")", "if", "data", ".", "get", "(", "'remaining'", ")", ">=", "0", ":", "ret_data", "[", "remaining_key", "]", "=", "data", ".", "get", "(", "'remaining'", ")", "else", ":", "ret_data", "[", "remaining_key", "]", "=", "None", "if", "data", ".", "get", "(", "'total'", ")", ">=", "0", ":", "ret_data", "[", "limit_key", "]", "=", "data", ".", "get", "(", "'total'", ")", "else", ":", "ret_data", "[", "limit_key", "]", "=", "None", "return", "ret_data" ]
Get Fido usage. Get the following data - talk - text - data Roaming data is not supported yet
[ "Get", "Fido", "usage", "." ]
python
train
38.183673
DLR-RM/RAFCON
source/rafcon/gui/controllers/state_machines_editor.py
https://github.com/DLR-RM/RAFCON/blob/24942ef1a904531f49ab8830a1dbb604441be498/source/rafcon/gui/controllers/state_machines_editor.py#L154-L162
def close_state_machine(self, widget, page_number, event=None): """Triggered when the close button in the tab is clicked """ page = widget.get_nth_page(page_number) for tab_info in self.tabs.values(): if tab_info['page'] is page: state_machine_m = tab_info['state_machine_m'] self.on_close_clicked(event, state_machine_m, None, force=False) return
[ "def", "close_state_machine", "(", "self", ",", "widget", ",", "page_number", ",", "event", "=", "None", ")", ":", "page", "=", "widget", ".", "get_nth_page", "(", "page_number", ")", "for", "tab_info", "in", "self", ".", "tabs", ".", "values", "(", ")", ":", "if", "tab_info", "[", "'page'", "]", "is", "page", ":", "state_machine_m", "=", "tab_info", "[", "'state_machine_m'", "]", "self", ".", "on_close_clicked", "(", "event", ",", "state_machine_m", ",", "None", ",", "force", "=", "False", ")", "return" ]
Triggered when the close button in the tab is clicked
[ "Triggered", "when", "the", "close", "button", "in", "the", "tab", "is", "clicked" ]
python
train
47.888889
nvbn/thefuck
thefuck/entrypoints/not_configured.py
https://github.com/nvbn/thefuck/blob/40ab4eb62db57627bff10cf029d29c94704086a2/thefuck/entrypoints/not_configured.py#L75-L79
def _is_already_configured(configuration_details):
    """Returns `True` when the alias is already in the shell config."""
    path = Path(configuration_details.path).expanduser()
    with path.open('r') as shell_config:
        return configuration_details.content in shell_config.read()
[ "def", "_is_already_configured", "(", "configuration_details", ")", ":", "path", "=", "Path", "(", "configuration_details", ".", "path", ")", ".", "expanduser", "(", ")", "with", "path", ".", "open", "(", "'r'", ")", "as", "shell_config", ":", "return", "configuration_details", ".", "content", "in", "shell_config", ".", "read", "(", ")" ]
Returns `True` when the alias is already in the shell config.
[ "Returns", "True", "when", "alias", "already", "in", "shell", "config", "." ]
python
train
54.6
diffeo/py-nilsimsa
nilsimsa/deprecated/_deprecated_nilsimsa.py
https://github.com/diffeo/py-nilsimsa/blob/c652f4bbfd836f7aebf292dcea676cc925ec315a/nilsimsa/deprecated/_deprecated_nilsimsa.py#L117-L119
def tran3(self, a, b, c, n): """Get accumulator for a transition n between chars a, b, c.""" return (((TRAN[(a+n)&255]^TRAN[b]*(n+n+1))+TRAN[(c)^TRAN[n]])&255)
[ "def", "tran3", "(", "self", ",", "a", ",", "b", ",", "c", ",", "n", ")", ":", "return", "(", "(", "(", "TRAN", "[", "(", "a", "+", "n", ")", "&", "255", "]", "^", "TRAN", "[", "b", "]", "*", "(", "n", "+", "n", "+", "1", ")", ")", "+", "TRAN", "[", "(", "c", ")", "^", "TRAN", "[", "n", "]", "]", ")", "&", "255", ")" ]
Get accumulator for a transition n between chars a, b, c.
[ "Get", "accumulator", "for", "a", "transition", "n", "between", "chars", "a", "b", "c", "." ]
python
train
57.666667
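A self-contained illustration of the accumulator above, substituting a stand-in permutation for the library's 256-byte TRAN table:

# Stand-in permutation table; the real TRAN constant differs.
TRAN = [(i * 53 + 1) % 256 for i in range(256)]

def tran3(a, b, c, n):
    # Mix three byte values and a window offset n into one bucket index 0..255.
    return (((TRAN[(a + n) & 255] ^ TRAN[b] * (n + n + 1)) + TRAN[c ^ TRAN[n]]) & 255)

acc = [0] * 256
text = b'nilsimsa digest'
for a, b, c in zip(text, text[1:], text[2:]):
    acc[tran3(a, b, c, 0)] += 1  # histogram over trigram buckets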
mongolab/mongoctl
mongoctl/commands/server/start.py
https://github.com/mongolab/mongoctl/blob/fab15216127ad4bf8ea9aa8a95d75504c0ef01a2/mongoctl/commands/server/start.py#L525-L545
def generate_start_command(server, options_override=None, standalone=False): """ Check if we need to use numactl if we are running on a NUMA box. 10gen recommends using numactl on NUMA. For more info, see http://www.mongodb.org/display/DOCS/NUMA """ command = [] if mongod_needs_numactl(): log_info("Running on a NUMA machine...") command = apply_numactl(command) # append the mongod executable command.append(get_server_executable(server)) # create the command args cmd_options = server.export_cmd_options(options_override=options_override, standalone=standalone) command.extend(options_to_command_args(cmd_options)) return command
[ "def", "generate_start_command", "(", "server", ",", "options_override", "=", "None", ",", "standalone", "=", "False", ")", ":", "command", "=", "[", "]", "if", "mongod_needs_numactl", "(", ")", ":", "log_info", "(", "\"Running on a NUMA machine...\"", ")", "command", "=", "apply_numactl", "(", "command", ")", "# append the mongod executable", "command", ".", "append", "(", "get_server_executable", "(", "server", ")", ")", "# create the command args", "cmd_options", "=", "server", ".", "export_cmd_options", "(", "options_override", "=", "options_override", ",", "standalone", "=", "standalone", ")", "command", ".", "extend", "(", "options_to_command_args", "(", "cmd_options", ")", ")", "return", "command" ]
Check if we need to use numactl if we are running on a NUMA box. 10gen recommends using numactl on NUMA. For more info, see http://www.mongodb.org/display/DOCS/NUMA
[ "Check", "if", "we", "need", "to", "use", "numactl", "if", "we", "are", "running", "on", "a", "NUMA", "box", ".", "10gen", "recommends", "using", "numactl", "on", "NUMA", ".", "For", "more", "info", "see", "http", ":", "//", "www", ".", "mongodb", ".", "org", "/", "display", "/", "DOCS", "/", "NUMA" ]
python
train
35.428571
pydata/xarray
xarray/core/missing.py
https://github.com/pydata/xarray/blob/6d93a95d05bdbfc33fff24064f67d29dd891ab58/xarray/core/missing.py#L259-L272
def ffill(arr, dim=None, limit=None): '''forward fill missing values''' import bottleneck as bn axis = arr.get_axis_num(dim) # work around for bottleneck 178 _limit = limit if limit is not None else arr.shape[axis] return apply_ufunc(bn.push, arr, dask='parallelized', keep_attrs=True, output_dtypes=[arr.dtype], kwargs=dict(n=_limit, axis=axis)).transpose(*arr.dims)
[ "def", "ffill", "(", "arr", ",", "dim", "=", "None", ",", "limit", "=", "None", ")", ":", "import", "bottleneck", "as", "bn", "axis", "=", "arr", ".", "get_axis_num", "(", "dim", ")", "# work around for bottleneck 178", "_limit", "=", "limit", "if", "limit", "is", "not", "None", "else", "arr", ".", "shape", "[", "axis", "]", "return", "apply_ufunc", "(", "bn", ".", "push", ",", "arr", ",", "dask", "=", "'parallelized'", ",", "keep_attrs", "=", "True", ",", "output_dtypes", "=", "[", "arr", ".", "dtype", "]", ",", "kwargs", "=", "dict", "(", "n", "=", "_limit", ",", "axis", "=", "axis", ")", ")", ".", "transpose", "(", "*", "arr", ".", "dims", ")" ]
forward fill missing values
[ "forward", "fill", "missing", "values" ]
python
train
33.785714
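The public counterpart of the helper above is DataArray.ffill; a minimal usage sketch (requires xarray with bottleneck installed):

import numpy as np
import xarray as xr

da = xr.DataArray([1.0, np.nan, np.nan, 4.0], dims='x')
# limit=1 fills at most one consecutive NaN per gap.
print(da.ffill(dim='x', limit=1).values)  # [ 1.  1. nan  4.]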
saltstack/salt
salt/modules/yumpkg.py
https://github.com/saltstack/salt/blob/e8541fd6e744ab0df786c0f76102e41631f45d46/salt/modules/yumpkg.py#L211-L234
def _check_versionlock(): ''' Ensure that the appropriate versionlock plugin is present ''' if _yum() == 'dnf': if int(__grains__.get('osmajorrelease')) >= 26: if six.PY3: vl_plugin = 'python3-dnf-plugin-versionlock' else: vl_plugin = 'python2-dnf-plugin-versionlock' else: if six.PY3: vl_plugin = 'python3-dnf-plugins-extras-versionlock' else: vl_plugin = 'python-dnf-plugins-extras-versionlock' else: vl_plugin = 'yum-versionlock' \ if __grains__.get('osmajorrelease') == '5' \ else 'yum-plugin-versionlock' if vl_plugin not in list_pkgs(): raise SaltInvocationError( 'Cannot proceed, {0} is not installed.'.format(vl_plugin) )
[ "def", "_check_versionlock", "(", ")", ":", "if", "_yum", "(", ")", "==", "'dnf'", ":", "if", "int", "(", "__grains__", ".", "get", "(", "'osmajorrelease'", ")", ")", ">=", "26", ":", "if", "six", ".", "PY3", ":", "vl_plugin", "=", "'python3-dnf-plugin-versionlock'", "else", ":", "vl_plugin", "=", "'python2-dnf-plugin-versionlock'", "else", ":", "if", "six", ".", "PY3", ":", "vl_plugin", "=", "'python3-dnf-plugins-extras-versionlock'", "else", ":", "vl_plugin", "=", "'python-dnf-plugins-extras-versionlock'", "else", ":", "vl_plugin", "=", "'yum-versionlock'", "if", "__grains__", ".", "get", "(", "'osmajorrelease'", ")", "==", "'5'", "else", "'yum-plugin-versionlock'", "if", "vl_plugin", "not", "in", "list_pkgs", "(", ")", ":", "raise", "SaltInvocationError", "(", "'Cannot proceed, {0} is not installed.'", ".", "format", "(", "vl_plugin", ")", ")" ]
Ensure that the appropriate versionlock plugin is present
[ "Ensure", "that", "the", "appropriate", "versionlock", "plugin", "is", "present" ]
python
train
34.125
jsommers/switchyard
switchyard/lib/packet/util.py
https://github.com/jsommers/switchyard/blob/fdcb3869c937dcedbd6ea7a7822ebd412bf1e2b0/switchyard/lib/packet/util.py#L3-L12
def create_ip_arp_reply(srchw, dsthw, srcip, targetip): ''' Create an ARP reply (just change what needs to be changed from a request) ''' pkt = create_ip_arp_request(srchw, srcip, targetip) pkt[0].dst = dsthw pkt[1].operation = ArpOperation.Reply pkt[1].targethwaddr = dsthw return pkt
[ "def", "create_ip_arp_reply", "(", "srchw", ",", "dsthw", ",", "srcip", ",", "targetip", ")", ":", "pkt", "=", "create_ip_arp_request", "(", "srchw", ",", "srcip", ",", "targetip", ")", "pkt", "[", "0", "]", ".", "dst", "=", "dsthw", "pkt", "[", "1", "]", ".", "operation", "=", "ArpOperation", ".", "Reply", "pkt", "[", "1", "]", ".", "targethwaddr", "=", "dsthw", "return", "pkt" ]
Create an ARP reply (just change what needs to be changed from a request)
[ "Create", "an", "ARP", "reply", "(", "just", "change", "what", "needs", "to", "be", "changed", "from", "a", "request", ")" ]
python
train
31.2
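A usage sketch for the helper above; the MAC and IP addresses are placeholders, and the import path follows the file shown:

from switchyard.lib.packet.util import create_ip_arp_reply

reply = create_ip_arp_reply(
    '00:00:00:00:00:01',  # srchw: MAC of the replying interface (placeholder)
    '00:00:00:00:00:02',  # dsthw: MAC of the original requester (placeholder)
    '192.168.1.1',        # srcip: the IP address being resolved
    '192.168.1.100',      # targetip: IP of the original requester
)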
dfunckt/django-connections
connections/models.py
https://github.com/dfunckt/django-connections/blob/15f40d187df673da6e6245ccfeca3cf13355f0ab/connections/models.py#L131-L140
def get_connection(self, from_obj, to_obj): """ Returns a ``Connection`` instance for the given objects or ``None`` if there's no connection. """ self._validate_ctypes(from_obj, to_obj) try: return self.connections.get(from_pk=from_obj.pk, to_pk=to_obj.pk) except Connection.DoesNotExist: return None
[ "def", "get_connection", "(", "self", ",", "from_obj", ",", "to_obj", ")", ":", "self", ".", "_validate_ctypes", "(", "from_obj", ",", "to_obj", ")", "try", ":", "return", "self", ".", "connections", ".", "get", "(", "from_pk", "=", "from_obj", ".", "pk", ",", "to_pk", "=", "to_obj", ".", "pk", ")", "except", "Connection", ".", "DoesNotExist", ":", "return", "None" ]
Returns a ``Connection`` instance for the given objects or ``None`` if there's no connection.
[ "Returns", "a", "Connection", "instance", "for", "the", "given", "objects", "or", "None", "if", "there", "s", "no", "connection", "." ]
python
train
37.1
Stufinite/djangoApiDec
djangoApiDec/djangoApiDec.py
https://github.com/Stufinite/djangoApiDec/blob/8b2d5776b3413b1b850df12a92f30526c05c0a46/djangoApiDec/djangoApiDec.py#L8-L36
def date_proc(func):
    """ A decorator checking whether the date parameter is passed in or not.
        If not, the default date value covers all PTT data.
        Else, PTT data for the given date is returned.

        Args:
            func: function you want to decorate.
            request: WSGI request parameter taken from Django.

        Returns:
            date: a datetime variable; you can give year, year + month, or
            year + month + day, three forms. The missing parts are assigned
            the default value 1 (Jan for the month, 1 for the day).
    """
    @wraps(func)
    def wrapped(request, *args, **kwargs):
        if 'date' in request.GET and request.GET['date'] == '':
            raise Http404("api does not exist")
        elif 'date' not in request.GET:
            date = datetime.today()
            return func(request, date)
        else:
            date = tuple(int(intValue) for intValue in request.GET['date'].split('-'))
            if len(date) == 3:
                date = datetime(*date)
            elif len(date) == 2:
                date = datetime(*date, day = 1)
            else:
                date = datetime(*date, month = 1, day = 1)
            return func(request, date)
    return wrapped
[ "def", "date_proc", "(", "func", ")", ":", "@", "wraps", "(", "func", ")", "def", "wrapped", "(", "request", ",", "*", "args", ",", "*", "*", "kwargs", ")", ":", "if", "'date'", "in", "request", ".", "GET", "and", "request", ".", "GET", "[", "'date'", "]", "==", "''", ":", "raise", "Http404", "(", "\"api does not exist\"", ")", "elif", "'date'", "not", "in", "request", ".", "GET", ":", "date", "=", "datetime", ".", "today", "(", ")", "return", "func", "(", "request", ",", "date", ")", "else", ":", "date", "=", "tuple", "(", "int", "(", "intValue", ")", "for", "intValue", "in", "request", ".", "GET", "[", "'date'", "]", ".", "split", "(", "'-'", ")", ")", "if", "len", "(", "date", ")", "==", "3", ":", "date", "=", "datetime", "(", "*", "date", ")", "elif", "len", "(", "date", ")", "==", "2", ":", "date", "=", "datetime", "(", "*", "date", ",", "day", "=", "1", ")", "else", ":", "date", "=", "datetime", "(", "*", "date", ",", "month", "=", "1", ",", "day", "=", "1", ")", "return", "func", "(", "request", ",", "date", ")", "return", "wrapped" ]
A decorator checking whether the date parameter is passed in or not. If not, the default date value covers all PTT data. Else, PTT data for the given date is returned. Args: func: function you want to decorate. request: WSGI request parameter taken from Django. Returns: date: a datetime variable; you can give year, year + month, or year + month + day, three forms. The missing parts are assigned the default value 1 (Jan for the month, 1 for the day).
[ "An", "decorator", "checking", "whether", "date", "parameter", "is", "passing", "in", "or", "not", ".", "If", "not", "default", "date", "value", "is", "all", "PTT", "data", ".", "Else", "return", "PTT", "data", "with", "right", "date", ".", "Args", ":", "func", ":", "function", "you", "want", "to", "decorate", ".", "request", ":", "WSGI", "request", "parameter", "getten", "from", "django", "." ]
python
valid
34.344828
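A usage sketch for the decorator above on a Django view; the view body is a placeholder:

from django.http import JsonResponse
from djangoApiDec.djangoApiDec import date_proc

@date_proc
def article_list(request, date):
    # 'date' was parsed from ?date=YYYY[-MM[-DD]] or defaulted to today.
    return JsonResponse({'date': date.isoformat()})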
gccxml/pygccxml
pygccxml/utils/utils.py
https://github.com/gccxml/pygccxml/blob/2b1efbb9e37ceb2ae925c7f3ce1570f476db9e1e/pygccxml/utils/utils.py#L35-L58
def find_xml_generator(name="castxml"):
    """
    Try to find a c++ parser (xml generator)

    Args:
        name (str): name of the c++ parser (e.g. castxml)

    Returns:
        path (str), name (str): path to the xml generator and its name

    If no c++ parser is found the function raises an exception.

    pygccxml currently only supports castxml as c++ parser.
    """
    if sys.version_info[:2] >= (3, 3):
        path = _find_xml_generator_for_python_greater_equals_33(name)
    else:
        path = _find_xml_generator_for_legacy_python(name)
    if path == "" or path is None:
        raise Exception("No c++ parser found. Please install castxml.")
    return path.rstrip(), name
[ "def", "find_xml_generator", "(", "name", "=", "\"castxml\"", ")", ":", "if", "sys", ".", "version_info", "[", ":", "2", "]", ">=", "(", "3", ",", "3", ")", ":", "path", "=", "_find_xml_generator_for_python_greater_equals_33", "(", "name", ")", "else", ":", "path", "=", "_find_xml_generator_for_legacy_python", "(", "name", ")", "if", "path", "==", "\"\"", "or", "path", "is", "None", ":", "raise", "Exception", "(", "\"No c++ parser found. Please install castxml.\"", ")", "return", "path", ".", "rstrip", "(", ")", ",", "name" ]
Try to find a c++ parser (xml generator) Args: name (str): name of the c++ parser (e.g. castxml) Returns: path (str), name (str): path to the xml generator and its name If no c++ parser is found the function raises an exception. pygccxml currently only supports castxml as c++ parser.
[ "Try", "to", "find", "a", "c", "++", "parser", "(", "xml", "generator", ")" ]
python
train
28.375
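A usage sketch feeding the result into pygccxml's parser configuration; the header name is a placeholder:

from pygccxml import parser, utils

generator_path, generator_name = utils.find_xml_generator()
config = parser.xml_generator_configuration_t(
    xml_generator_path=generator_path,
    xml_generator=generator_name,
)
decls = parser.parse(['my_header.hpp'], config)  # placeholder header file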
ming060/robotframework-uiautomatorlibrary
uiautomatorlibrary/Mobile.py
https://github.com/ming060/robotframework-uiautomatorlibrary/blob/b70202b6a8aa68b4efd9d029c2845407fb33451a/uiautomatorlibrary/Mobile.py#L354-L360
def wait_for_exists(self, timeout=0, *args, **selectors):
    """
    Wait for the object which has *selectors* within the given timeout.

    Return true if the object *appears* within the given timeout. Else return false.
    """
    return self.device(**selectors).wait.exists(timeout=timeout)
[ "def", "wait_for_exists", "(", "self", ",", "timeout", "=", "0", ",", "*", "args", ",", "*", "*", "selectors", ")", ":", "return", "self", ".", "device", "(", "*", "*", "selectors", ")", ".", "wait", ".", "exists", "(", "timeout", "=", "timeout", ")" ]
Wait for the object which has *selectors* within the given timeout. Return true if the object *appears* within the given timeout. Else return false.
[ "Wait", "for", "the", "object", "which", "has", "*", "selectors", "*", "within", "the", "given", "timeout", "." ]
python
train
43.571429
lukasgeiter/mkdocs-awesome-pages-plugin
mkdocs_awesome_pages_plugin/utils.py
https://github.com/lukasgeiter/mkdocs-awesome-pages-plugin/blob/f5693418b71a0849c5fee3b3307e117983c4e2d8/mkdocs_awesome_pages_plugin/utils.py#L37-L40
def join_paths(path1: Optional[str], path2: Optional[str]) -> Optional[str]: """ Joins two paths if neither of them is None """ if path1 is not None and path2 is not None: return os.path.join(path1, path2)
[ "def", "join_paths", "(", "path1", ":", "Optional", "[", "str", "]", ",", "path2", ":", "Optional", "[", "str", "]", ")", "->", "Optional", "[", "str", "]", ":", "if", "path1", "is", "not", "None", "and", "path2", "is", "not", "None", ":", "return", "os", ".", "path", ".", "join", "(", "path1", ",", "path2", ")" ]
Joins two paths if neither of them is None
[ "Joins", "two", "paths", "if", "neither", "of", "them", "is", "None" ]
python
train
54.5
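Note the implicit None return in join_paths() above when either argument is None; a quick check of both branches:

print(join_paths('docs', 'index.md'))  # 'docs/index.md' on POSIX
print(join_paths('docs', None))        # None -- control falls off the end of the function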