Columns: docstring (string, lengths 52 to 499) | function (string, lengths 67 to 35.2k) | __index_level_0__ (int64, 52.6k to 1.16M)
Set x limits for plot. This will set the limits for the x axis for the specific plot. Args: xlims (len-2 list of floats): The limits for the axis. dx (float): Amount to increment by between the limits. xscale (str): Scale of the axis. Either `log` or `lin`. reverse (bool, optional): If True, reverse the axis tick marks. Default is False.
def set_xlim(self, xlims, dx, xscale, reverse=False):
    self._set_axis_limits('x', xlims, dx, xscale, reverse)
    return
1,002,960
Set y limits for plot. This will set the limits for the y axis for the specific plot. Args: ylims (len-2 list of floats): The limits for the axis. dy (float): Amount to increment by between the limits. yscale (str): Scale of the axis. Either `log` or `lin`. reverse (bool, optional): If True, reverse the axis tick marks. Default is False.
def set_ylim(self, ylims, dy, yscale, reverse=False):
    # parameter names fixed from the x-axis copy-paste (xlims/dx/xscale)
    self._set_axis_limits('y', ylims, dy, yscale, reverse)
    return
1,002,961
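A minimal usage sketch for the two limit setters above; `plot` stands in for any object exposing them, and the limits, increments, and scales are illustrative values only.

# hypothetical usage of the setters defined above
plot.set_xlim([1.0, 1e4], dx=1.0, xscale='log')
plot.set_ylim([0.0, 10.0], dy=2.0, yscale='lin', reverse=True)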
Set the figure size in inches. Sets the figure size with a call to fig.set_size_inches. Default in code is 8 inches for each. Args: width (float): Dimensions for figure width in inches. height (float, optional): Dimensions for figure height in inches. Default is None.
def set_fig_size(self, width, height=None):
    self.figure.figure_width = width
    self.figure.figure_height = height
    return
1,002,967
Set the figure spacing. Sets whether in general there is space between subplots. If all axes are shared, this can be `tight`. Default in code is `wide`. The main difference is the tick labels extend to the ends if space==`wide`. If space==`tight`, the edge tick labels are cut off for clarity. Args: space (str): Sets spacing for subplots. Either `wide` or `tight`.
def set_spacing(self, space):
    self.figure.spacing = space
    if 'subplots_adjust_kwargs' not in self.figure.__dict__:
        self.figure.subplots_adjust_kwargs = {}
    if space == 'wide':
        self.figure.subplots_adjust_kwargs['hspace'] = 0.3
        self.figure.subplots_adjust_kwargs['wspace'] = 0.3
    else:
        self.figure.subplots_adjust_kwargs['hspace'] = 0.0
        self.figure.subplots_adjust_kwargs['wspace'] = 0.0
    return
1,002,968
Indicate general x,y column labels. This sets the general x and y column labels into data files for all plots. It can be overridden for specific plots. Args: xlabel/ylabel (str, optional): String indicating column label for x,y values into the data files. Default is None. Raises: UserWarning: If neither xlabel nor ylabel is specified, the user will be alerted, but the code will not stop.
def set_all_file_column_labels(self, xlabel=None, ylabel=None):
    if xlabel is not None:
        self.general.x_column_label = xlabel
    if ylabel is not None:
        self.general.y_column_label = ylabel
    if xlabel is None and ylabel is None:
        warnings.warn("Neither x nor y labels were specified even "
                      "though the column labels function was called.",
                      UserWarning)
    return
1,002,975
Reverse an axis in all figure plots. This will reverse the tick marks on an axis for each plot in the figure. It can be overridden in SinglePlot class. Args: axis_to_reverse (str): Axis to reverse. Supports `x` and `y`. Raises: ValueError: The string representing the axis to reverse is not `x` or `y`.
def reverse_axis(self, axis_to_reverse):
    if axis_to_reverse.lower() == 'x':
        self.general.reverse_x_axis = True
    elif axis_to_reverse.lower() == 'y':
        self.general.reverse_y_axis = True
    else:
        # the original `!= 'x' or != 'y'` condition was always true,
        # so it raised even for valid input; an else branch fixes that
        raise ValueError('Axis for reversing needs to be either x or y.')
    return
1,002,979
Prepare the parallel calculations. Prepares the arguments to be run in parallel. It will divide up arrays according to num_splits. Args: binary_args (list): List of binary arguments for input into the SNR function. other_args (tuple of obj): Tuple of other args for input into the parallel snr function.
def prep_parallel(self, binary_args, other_args):
    if self.length < 100:
        raise Exception("Run this across 1 processor by setting num_processors kwarg to None.")
    if self.num_processors == -1:
        self.num_processors = mp.cpu_count()
    split_val = int(np.ceil(self.length / self.num_splits))
    split_inds = [self.num_splits * i for i in np.arange(1, split_val)]
    inds_split_all = np.split(np.arange(self.length), split_inds)
    self.args = []
    for i, ind_split in enumerate(inds_split_all):
        trans_args = []
        for arg in binary_args:
            try:
                trans_args.append(arg[ind_split])
            except TypeError:
                # scalar arguments are passed whole to every split
                trans_args.append(arg)
        self.args.append((i, tuple(trans_args)) + other_args)
    return
1,003,080
Run parallel calculation. This will run the parallel calculation on self.num_processors. Args: para_func (obj): Function object to be used in parallel. Returns: (dict): Dictionary with parallel results.
def run_parallel(self, para_func):
    if self.timer:
        start_timer = time.time()
    # for testing:
    # check = parallel_snr_func(*self.args[10])
    with mp.Pool(self.num_processors) as pool:
        print('start pool with {} processors: {} total processes.\n'.format(
            self.num_processors, len(self.args)))
        results = [pool.apply_async(para_func, arg) for arg in self.args]
        out = [r.get() for r in results]
    out = {key: np.concatenate([out_i[key] for out_i in out])
           for key in out[0].keys()}
    if self.timer:
        print("SNR calculation time:", time.time() - start_timer)
    return out
1,003,081
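A quick illustration of the splitting logic inside prep_parallel, assuming only numpy; the lengths are toy values.

import numpy as np

length, num_splits = 10, 4
split_inds = [num_splits * i
              for i in np.arange(1, int(np.ceil(length / num_splits)))]
print(np.split(np.arange(length), split_inds))
# [array([0, 1, 2, 3]), array([4, 5, 6, 7]), array([8, 9])]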
Only `key` is required. Arguments: operator (str) -- "?" optional, "!" for complete arrays; defaults to None (i.e. required) required (boolean) -- whether the key is required in the output (defaults to True) scope (`Selector`) -- restrict extraction to elements matching this selector iterate (boolean) -- whether multiple objects will be extracted (defaults to False)
def __init__(self, key, operator=None, required=True, scope=None, iterate=False):
    self.key = key
    self.operator = operator
    self.required = required
    self.scope = scope
    self.iterate = iterate
1,003,394
Build part of the abstract Parsley extraction tree Arguments: parselet_node (dict) -- part of the Parsley tree to compile (can be the root dict/node) level (int) -- current recursion depth (used for debug)
def _compile(self, parselet_node, level=0):
    if self.DEBUG:
        debug_offset = "".join([" " for x in range(level)])
    if self.DEBUG:
        print(debug_offset, "%s::compile(%s)" % (
            self.__class__.__name__, parselet_node))
    if isinstance(parselet_node, dict):
        parselet_tree = ParsleyNode()
        for k, v in list(parselet_node.items()):
            # we parse the key raw elements but without much
            # interpretation (which is done by the SelectorHandler)
            try:
                m = self.REGEX_PARSELET_KEY.match(k)
                if not m:
                    if self.DEBUG:
                        print(debug_offset, "could not parse key", k)
                    raise InvalidKeySyntax(k)
            except:
                raise InvalidKeySyntax("Key %s is not valid" % k)

            key = m.group('key')
            # by default, fields are required
            key_required = True
            operator = m.group('operator')
            if operator == '?':
                key_required = False
            # FIXME: "!" operator not supported (complete array)
            scope = m.group('scope')

            # example: get list of H3 tags
            # { "titles": ["h3"] }
            # FIXME: should we support multiple selectors in list?
            #        e.g. { "titles": ["h1", "h2", "h3", "h4"] }
            if isinstance(v, (list, tuple)):
                v = v[0]
                iterate = True
            else:
                iterate = False

            # keys in the abstract Parsley trees are of type `ParsleyContext`
            try:
                parsley_context = ParsleyContext(
                    key, operator=operator, required=key_required,
                    scope=self.selector_handler.make(scope) if scope else None,
                    iterate=iterate)
            except SyntaxError:
                if self.DEBUG:
                    print("Invalid scope:", k, scope)
                raise

            if self.DEBUG:
                print(debug_offset, "current context:", parsley_context)

            # go deeper in the Parsley tree...
            try:
                child_tree = self._compile(v, level=level + 1)
            except SyntaxError:
                if self.DEBUG:
                    print("Invalid value: ", v)
                raise
            except:
                raise
            if self.DEBUG:
                print(debug_offset, "child tree:", child_tree)
            parselet_tree[parsley_context] = child_tree
        return parselet_tree
    elif isstr(parselet_node):
        # a string leaf should match some kind of selector,
        # let the selector handler deal with it
        return self.selector_handler.make(parselet_node)
    else:
        raise ValueError(
            "Unsupported type(%s) for Parselet node <%s>" % (
                type(parselet_node), parselet_node))
1,003,405
Main function for this program. This will read in sensitivity_curves and binary parameters; calculate snrs with a matched filtering approach; and then read the contour data out to a file. Args: pid (obj or dict): GenInput class or dictionary containing all of the input information for the generation. See BOWIE documentation and example notebooks for usage of this class.
def generate_contour_data(pid):
    # check if pid is a dictionary or a GenInput class;
    # if GenInput, change to dictionary
    if isinstance(pid, GenInput):
        pid = pid.return_dict()

    begin_time = time.time()
    WORKING_DIRECTORY = '.'
    if 'WORKING_DIRECTORY' not in pid['general'].keys():
        pid['general']['WORKING_DIRECTORY'] = WORKING_DIRECTORY

    # Generate the contour data.
    running_process = GenProcess(**{**pid, **pid['generate_info']})
    running_process.set_parameters()
    running_process.run_snr()

    # Read out
    file_out = FileReadOut(running_process.xvals, running_process.yvals,
                           running_process.final_dict,
                           **{**pid['general'], **pid['generate_info'],
                              **pid['output_info']})
    print('outputting file:', pid['general']['WORKING_DIRECTORY']
          + '/' + pid['output_info']['output_file_name'])
    getattr(file_out, file_out.output_file_type + '_read_out')()
    print(time.time() - begin_time)
    return
1,003,503
Set the grid values for y. Create information for the grid of y values. Args: num_y (int): Number of points on axis. y_low/y_high (float): Lowest/highest value for the axis. yscale (str): Scale of the axis. Choices are 'log' or 'lin'. yval_name (str): Name representing the axis. See GenerateContainer documentation for options for the name.
def set_y_grid_info(self, y_low, y_high, num_y, yscale, yval_name):
    self._set_grid_info('y', y_low, y_high, num_y, yscale, yval_name)
    return
1,003,723
Set the grid values for x. Create information for the grid of x values. Args: num_x (int): Number of points on axis. x_low/x_high (float): Lowest/highest value for the axis. xscale (str): Scale of the axis. Choices are 'log' or 'lin'. xval_name (str): Name representing the axis. See GenerateContainer documentation for options for the name.
def set_x_grid_info(self, x_low, x_high, num_x, xscale, xval_name):
    self._set_grid_info('x', x_low, x_high, num_x, xscale, xval_name)
    return
1,003,724
Set the signal type of interest. Sets the signal type for which the SNR is calculated. This means inspiral, merger, and/or ringdown. Args: sig_type (str or list of str): Signal type desired by user. Choices are `ins`, `mrg`, `rd`, `all` for circular waveforms created with PhenomD. If eccentric waveforms are used, must be `all`.
def set_signal_type(self, sig_type):
    if isinstance(sig_type, str):
        sig_type = [sig_type]
    self.snr_input.signal_type = sig_type
    return
1,003,728
Raise an appropriate error for a given response. Arguments: response (:py:class:`aiohttp.ClientResponse`): The API response. Raises: :py:class:`aiohttp.web_exceptions.HTTPException`: The appropriate error for the response's status.
def raise_for_status(response):
    for err_name in web_exceptions.__all__:
        err = getattr(web_exceptions, err_name)
        if err.status_code == response.status:
            payload = dict(
                headers=response.headers,
                reason=response.reason,
            )
            if issubclass(err, web_exceptions._HTTPMove):  # pylint: disable=protected-access
                raise err(response.headers['Location'], **payload)
            raise err(**payload)
1,004,071
Truncate the supplied text for display. Arguments: text (:py:class:`str`): The text to truncate. max_len (:py:class:`int`, optional): The maximum length of the text before truncation (defaults to 350 characters). end (:py:class:`str`, optional): The ending to use to show that the text was truncated (defaults to ``'...'``). Returns: :py:class:`str`: The truncated text.
def truncate(text, max_len=350, end='...'):
    if len(text) <= max_len:
        return text
    # cut at max_len, then drop the trailing partial word before appending the ending
    return text[:max_len].rsplit(' ', maxsplit=1)[0] + end
1,004,072
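A brief usage sketch of truncate, assuming only the definition above.

print(truncate('The quick brown fox jumps over the lazy dog', max_len=20))
# 'The quick brown fox...'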
Input binary parameters and calculate the SNR. Binary parameters are read in and adjusted based on shapes. They are then fed into ``run`` for calculation of the snr. Args: *args: Arguments for binary parameters (see :meth:`gwsnrcalc.utils.pyphenomd.__call__`) Returns: (dict): Dictionary with the SNR output from the calculation.
def __call__(self, *binary_args):
    # if self.num_processors is None, run on a single processor
    if self.num_processors is None:
        return self.snr_function(0, binary_args, self.wavegen,
                                 self.signal_type, self.noise_interpolants,
                                 self.prefactor, self.verbose)
    other_args = (self.wavegen, self.signal_type, self.noise_interpolants,
                  self.prefactor, self.verbose)
    self.prep_parallel(binary_args, other_args)
    return self.run_parallel(self.snr_function)
1,004,195
Main function for creating these plots. Reads in plot info dict from json file or dictionary in script. Args: pid (obj or dict): PlotInput class or dictionary containing all of the input information for the plots. return_fig_ax (bool, optional): Return figure and axes objects. Returns: 2-element tuple containing - **fig** (*obj*): Figure object for customization outside of those in this program. - **ax** (*obj*): Axes object for customization outside of those in this program.
def plot_main(pid, return_fig_ax=False):
    global WORKING_DIRECTORY, SNR_CUT
    if isinstance(pid, PlotInput):
        pid = pid.return_dict()

    WORKING_DIRECTORY = '.'
    if 'WORKING_DIRECTORY' not in pid['general'].keys():
        pid['general']['WORKING_DIRECTORY'] = '.'

    SNR_CUT = 5.0
    if 'SNR_CUT' not in pid['general'].keys():
        pid['general']['SNR_CUT'] = SNR_CUT

    if "switch_backend" in pid['general'].keys():
        plt.switch_backend(pid['general']['switch_backend'])

    running_process = MakePlotProcess(
        **{**pid, **pid['general'], **pid['plot_info'], **pid['figure']})
    running_process.input_data()
    running_process.setup_figure()
    running_process.create_plots()

    # save or show figure
    if 'save_figure' in pid['figure'].keys():
        if pid['figure']['save_figure'] is True:
            running_process.fig.savefig(
                pid['general']['WORKING_DIRECTORY'] + '/'
                + pid['figure']['output_path'],
                **pid['figure']['savefig_kwargs'])
    if 'show_figure' in pid['figure'].keys():
        if pid['figure']['show_figure'] is True:
            plt.show()
    if return_fig_ax is True:
        return running_process.fig, running_process.ax
    return
1,004,249
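A minimal calling sketch, assuming pid is a plot-info dictionary shaped as the docstring describes (general/plot_info/figure sections); the suptitle call just shows customization via the returned objects.

fig, ax = plot_main(pid, return_fig_ax=True)
fig.suptitle('SNR contours')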
Initialize an `ExpCM` object. Args: `prefs` (list) List of dicts giving amino-acid preferences for each site. Each dict keyed by amino acid letter codes, value is pref > 0 and < 1. Must sum to 1 at each site. `kappa`, `omega`, `beta`, `mu`, `phi` Model params described in main class doc string. `freeparams` (list of strings) Specifies free parameters.
def __init__(self, prefs, kappa=2.0, omega=0.5, beta=1.0, mu=1.0,
             phi=scipy.ones(N_NT) / N_NT,
             freeparams=['kappa', 'omega', 'beta', 'mu', 'eta']):
    self._nsites = len(prefs)
    assert self.nsites > 0, "No preferences specified"
    assert all(map(lambda x: x in self.ALLOWEDPARAMS, freeparams)),\
        "Invalid entry in freeparams\nGot: {0}\nAllowed: {1}".format(
            ', '.join(freeparams), ', '.join(self.ALLOWEDPARAMS))
    self._freeparams = list(freeparams)  # underscore as `freeparams` is property

    # put prefs in pi
    self.pi = scipy.ndarray((self.nsites, N_AA), dtype='float')
    assert (isinstance(prefs, list) and
            all([isinstance(x, dict) for x in prefs])),\
        "prefs is not a list of dicts"
    for r in range(self.nsites):
        assert set(prefs[r].keys()) == set(AA_TO_INDEX.keys()),\
            "prefs not keyed by amino acids for site {0}".format(r)
        assert abs(1 - sum(prefs[r].values())) <= ALMOST_ZERO,\
            "prefs don't sum to one for site {0}".format(r)
        for (a, aa) in INDEX_TO_AA.items():
            _checkParam('pi', prefs[r][aa], self.PARAMLIMITS, self.PARAMTYPES)
            self.pi[r][a] = prefs[r][aa]
        self.pi[r] /= self.pi[r].sum()  # renormalize to sum to one

    # set up attributes defined solely in terms of preferences
    self.pi_codon = scipy.full((self.nsites, N_CODON), -1, dtype='float')
    self.ln_pi_codon = scipy.full((self.nsites, N_CODON), -1, dtype='float')
    self.piAx_piAy = scipy.full((self.nsites, N_CODON, N_CODON), -1, dtype='float')

    # construct eta from phi
    _checkParam('phi', phi, self.PARAMLIMITS, self.PARAMTYPES)
    assert abs(1 - phi.sum()) <= ALMOST_ZERO, "phi doesn't sum to 1"
    self.phi = phi.copy()
    self.phi /= self.phi.sum()
    self._eta_from_phi()

    # set attributes to calling params
    self._mu = mu  # underscore as `mu` is property
    self.kappa = kappa
    self.omega = omega
    self.beta = beta
    for (name, value) in [('kappa', self.kappa), ('omega', self.omega),
                          ('beta', self.beta), ('eta', self.eta),
                          ('mu', self.mu)]:
        _checkParam(name, value, self.PARAMLIMITS, self.PARAMTYPES)

    # define other params, initialized appropriately
    self.piAx_piAy_beta = scipy.zeros((self.nsites, N_CODON, N_CODON), dtype='float')
    self.ln_piAx_piAy_beta = scipy.zeros((self.nsites, N_CODON, N_CODON), dtype='float')
    self.Prxy = scipy.zeros((self.nsites, N_CODON, N_CODON), dtype='float')
    self.prx = scipy.zeros((self.nsites, N_CODON), dtype='float')
    self.Qxy = scipy.zeros((N_CODON, N_CODON), dtype='float')
    self.Frxy = scipy.ones((self.nsites, N_CODON, N_CODON), dtype='float')
    self.Frxy_no_omega = scipy.ones((self.nsites, N_CODON, N_CODON), dtype='float')
    self.D = scipy.zeros((self.nsites, N_CODON), dtype='float')
    self.A = scipy.zeros((self.nsites, N_CODON, N_CODON), dtype='float')
    self.Ainv = scipy.zeros((self.nsites, N_CODON, N_CODON), dtype='float')
    self.dPrxy = {}
    self.B = {}
    self.dprx = {}
    for param in self.freeparams:
        if param == 'mu':
            self.dprx['mu'] = 0.0
        elif self.PARAMTYPES[param] == float:
            self.dPrxy[param] = scipy.zeros((self.nsites, N_CODON, N_CODON), dtype='float')
            self.B[param] = scipy.zeros((self.nsites, N_CODON, N_CODON), dtype='float')
            self.dprx[param] = scipy.zeros((self.nsites, N_CODON), dtype='float')
        else:
            assert self.PARAMTYPES[param][0] == scipy.ndarray
            paramshape = self.PARAMTYPES[param][1]
            assert len(paramshape) == 1, "Can't handle multi-dimensional ndarray"
            paramlen = paramshape[0]
            self.dPrxy[param] = scipy.zeros((paramlen, self.nsites, N_CODON, N_CODON), dtype='float')
            self.B[param] = scipy.zeros((paramlen, self.nsites, N_CODON, N_CODON), dtype='float')
            self.dprx[param] = scipy.zeros((paramlen, self.nsites, N_CODON), dtype='float')

    # indexes diagonals in square matrices
    self._diag_indices = scipy.diag_indices(N_CODON)
    self.updateParams({}, update_all=True)
1,004,599
Initialize an `ExpCM_empirical_phi` object. Args: `prefs`, `kappa`, `omega`, `beta`, `mu`, `freeparams` Same meaning as for an `ExpCM` `g` Has the meaning described in the main class doc string.
def __init__(self, prefs, g, kappa=2.0, omega=0.5, beta=1.0, mu=1.0,
             freeparams=['kappa', 'omega', 'beta', 'mu']):
    _checkParam('g', g, self.PARAMLIMITS, self.PARAMTYPES)
    assert abs(1 - g.sum()) <= ALMOST_ZERO, "g doesn't sum to 1"
    self.g = g.copy()
    self.g /= self.g.sum()
    super(ExpCM_empirical_phi, self).__init__(prefs, kappa=kappa,
                                              omega=omega, beta=beta, mu=mu,
                                              freeparams=freeparams)
1,004,625
Initialize an `ExpCM_empirical_phi_divpressure` object. Args: `prefs`, `kappa`, `omega`, `beta`, `mu`, `g`, `freeparams` Same meaning as for an `ExpCM_empirical_phi` `divPressureValues`, `omega2` Meaning described in the main class doc string.
def __init__(self, prefs, g, divPressureValues, kappa=2.0, omega=0.5,
             beta=1.0, mu=1.0, omega2=0.0,
             freeparams=['kappa', 'omega', 'beta', 'mu', 'omega2']):
    _checkParam('omega2', omega2, self.PARAMLIMITS, self.PARAMTYPES)
    self.omega2 = omega2
    self.deltar = scipy.array(divPressureValues.copy())
    assert (max(scipy.absolute(self.deltar))) <= 1, (
        "A scaled deltar value is > 1 or < -1.")
    super(ExpCM_empirical_phi_divpressure, self).__init__(
        prefs, g, kappa=kappa, omega=omega, beta=beta, mu=mu,
        freeparams=freeparams)
1,004,630
Initialize a `YNGKP_M0` object. Args: `kappa`, `omega`, `mu` Model params described in main class doc string. `freeparams` (list of strings) Specifies free parameters. `e_pw`, `nsites` Meaning described in the main class doc string.
def __init__(self, e_pw, nsites, kappa=2.0, omega=0.5, mu=1.0,
             freeparams=['kappa', 'omega', 'mu']):
    _checkParam('e_pw', e_pw, self.PARAMLIMITS, self.PARAMTYPES)
    self.e_pw = e_pw.copy()
    self.phi = self._calculate_correctedF3X4()
    assert scipy.allclose(self.phi.sum(axis=1),
                          scipy.ones(3, dtype='float'),
                          atol=1e-4, rtol=5e-3),\
        "The `phi` values do not sum to 1 for all `p`"
    self.Phi_x = scipy.ones(N_CODON, dtype='float')
    self._calculate_Phi_x()

    self._nsites = nsites
    assert self._nsites > 0, "There must be more than 1 site in the gene"

    # check allowed params
    assert all(map(lambda x: x in self.ALLOWEDPARAMS, freeparams)),\
        "Invalid entry in freeparams\nGot: {0}\nAllowed: {1}".format(
            ', '.join(freeparams), ', '.join(self.ALLOWEDPARAMS))
    self._freeparams = list(freeparams)  # underscore as `freeparams` is property

    # set attributes to calling params
    self._mu = mu  # underscore as `mu` is property
    self.kappa = kappa
    self.omega = omega
    for (name, value) in [('kappa', self.kappa), ('omega', self.omega),
                          ('mu', self.mu)]:
        _checkParam(name, value, self.PARAMLIMITS, self.PARAMTYPES)

    # define other params, initialized appropriately;
    # single site dimension to be carried through the calcs added here
    self.Pxy = scipy.zeros((1, N_CODON, N_CODON), dtype='float')
    self.Pxy_no_omega = scipy.zeros((1, N_CODON, N_CODON), dtype='float')
    self.D = scipy.zeros((1, N_CODON), dtype='float')
    self.A = scipy.zeros((1, N_CODON, N_CODON), dtype='float')
    self.Ainv = scipy.zeros((1, N_CODON, N_CODON), dtype='float')
    self.dPxy = {}
    self.B = {}
    for param in self.freeparams:
        if param in self.ALLOWEDPARAMS:
            self.dPxy[param] = scipy.zeros((1, N_CODON, N_CODON), dtype='float')
            self.B[param] = scipy.zeros((1, N_CODON, N_CODON), dtype='float')
        else:
            raise ValueError("Unrecognized param {0}".format(param))

    # indexes diagonals in square matrices
    self._diag_indices = scipy.diag_indices(N_CODON)
    self.updateParams({}, update_all=True)
1,004,633
Initialize a `GammaDistributedModel` object. The `lambda_param` is set to "omega". Args: `model` `ncats`, `alpha_lambda`, `beta_lambda`, `freeparams` Meaning described in main class doc string for `GammaDistributedModel`.
def __init__(self, model, ncats, alpha_lambda=1.0, beta_lambda=2.0,
             freeparams=['alpha_lambda', 'beta_lambda']):
    # pass the caller's values through; the original re-hardcoded the
    # defaults in the super() call, silently ignoring the arguments
    super(GammaDistributedOmegaModel, self).__init__(
        model, "omega", ncats, alpha_lambda=alpha_lambda,
        beta_lambda=beta_lambda, freeparams=freeparams)
1,004,657
Initialize a `GammaDistributedModel` object. The `lambda_param` is set to "beta". Args: `model` `ncats`, `alpha_lambda`, `beta_lambda`, `freeparams` Meaning described in main class doc string for `GammaDistributedModel`.
def __init__(self, model, ncats, alpha_lambda=1.0, beta_lambda=2.0,
             freeparams=['alpha_lambda', 'beta_lambda']):
    # set new limits so the maximum value of `beta` is equal to or
    # greater than the maximum `beta` inferred from the gamma distribution
    # with the constrained `alpha_beta` and `beta_beta` parameters
    new_max_beta = DiscreteGamma(self.PARAMLIMITS["alpha_lambda"][1],
                                 self.PARAMLIMITS["beta_lambda"][0],
                                 ncats)[-1]
    new_limits = model.PARAMLIMITS
    new_limits["beta"] = (new_limits["beta"][0], new_max_beta)
    model.PARAMLIMITS = new_limits
    # pass the caller's values through; the original re-hardcoded the
    # defaults in the super() call, silently ignoring the arguments
    super(GammaDistributedBetaModel, self).__init__(
        model, "beta", ncats, alpha_lambda=alpha_lambda,
        beta_lambda=beta_lambda, freeparams=freeparams)
    assert all([scipy.allclose(new_max_beta, m.PARAMLIMITS["beta"][1])
                for m in self._models]), ("{0}\n{1}".format(
        new_max_beta,
        # str() added so the failure message can join the float limits
        '\n'.join([str(m.PARAMLIMITS["beta"][1]) for m in self._models])))
1,004,658
Setup colorbars for each type of plot. Takes the colorbar options gathered during the ``__init__`` method and makes the colorbar. Args: plot_call_sign (obj): Plot instance of ax.contourf with colormapping to add as a colorbar.
def setup_colorbars(self, plot_call_sign):
    self.fig.colorbar(plot_call_sign, cax=self.cbar_ax,
                      ticks=self.cbar_ticks,
                      orientation=self.cbar_orientation)
    # setup colorbar ticks and label
    (getattr(self.cbar_ax, 'set_' + self.cbar_var + 'ticklabels')
        (self.cbar_tick_labels, fontsize=self.cbar_ticks_fontsize))
    (getattr(self.cbar_ax, 'set_' + self.cbar_var + 'label')
        (self.cbar_label, fontsize=self.cbar_label_fontsize,
         labelpad=self.cbar_label_pad))
    return
1,004,716
Return a specific record. Args: session (requests.sessions.Session): Authenticated session. record_id (int): The ID of the record to get. endpoint_override (str, optional): Override the default endpoint using this. Returns: helpscout.BaseModel: A record singleton, if existing. Otherwise ``None``.
def get(cls, session, record_id, endpoint_override=None):
    cls._check_implements('get')
    try:
        return cls(
            endpoint_override or '/%s/%d.json' % (
                cls.__endpoint__, record_id,
            ),
            singleton=True,
            session=session,
        )
    except HelpScoutRemoteException as e:
        if e.status_code == 404:
            return None
        else:
            raise
1,005,162
Return records in a mailbox. Args: session (requests.sessions.Session): Authenticated session. endpoint_override (str, optional): Override the default endpoint using this. data (dict, optional): Data to provide as request parameters. Returns: RequestPaginator(output_type=helpscout.BaseModel): Results iterator.
def list(cls, session, endpoint_override=None, data=None):
    cls._check_implements('list')
    return cls(
        endpoint_override or '/%s.json' % cls.__endpoint__,
        data=data,
        session=session,
    )
1,005,163
Update a record. Args: session (requests.sessions.Session): Authenticated session. record (helpscout.BaseModel): The record to be updated. Returns: helpscout.BaseModel: Freshly updated record.
def update(cls, session, record):
    cls._check_implements('update')
    data = record.to_api()
    del data['id']
    data['reload'] = True
    return cls(
        '/%s/%s.json' % (cls.__endpoint__, record.id),
        data=data,
        request_type=RequestPaginator.PUT,
        singleton=True,
        session=session,
    )
1,005,165
Initialize a new HelpScout client. Args: api_key (str): The API key to use for this session.
def __init__(self, api_key):
    self.session = Session()
    self.session.auth = HTTPBasicAuth(api_key, 'NoPassBecauseKey!')
    self._load_apis()
1,005,282
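A hedged sketch of constructing the client; the HelpScout class name holding this __init__ is assumed from context, and the key is a placeholder.

client = HelpScout(api_key='ab12cd34ef56')  # hypothetical key
# client.session is now a requests.Session with HTTP Basic auth;
# the API expects the key as the username and a dummy password.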
Get the EPSG code associated with a geometry attribute. Arguments: geom_attr the key of the geometry property as defined in the SQLAlchemy mapper. If you use ``declarative_base`` this is the name of the geometry attribute as defined in the mapped class.
def _get_col_epsg(mapped_class, geom_attr):
    col = class_mapper(mapped_class).get_property(geom_attr).columns[0]
    return col.type.srid
1,005,452
Create an ``and_`` SQLAlchemy filter (a ClauseList object) based on the request params (``queryable``, ``eq``, ``ne``, ...). Arguments: request the request. mapped_class the SQLAlchemy mapped class.
def create_attr_filter(request, mapped_class):
    mapping = {
        'eq': '__eq__',
        'ne': '__ne__',
        'lt': '__lt__',
        'lte': '__le__',
        'gt': '__gt__',
        'gte': '__ge__',
        'like': 'like',
        'ilike': 'ilike'
    }
    filters = []
    if 'queryable' in request.params:
        queryable = request.params['queryable'].split(',')
        for k in request.params:
            if len(request.params[k]) <= 0 or '__' not in k:
                continue
            col, op = k.split("__")
            if col not in queryable or op not in mapping:
                continue
            column = getattr(mapped_class, col)
            f = getattr(column, mapping[op])(request.params[k])
            filters.append(f)
    return and_(*filters) if len(filters) > 0 else None
1,005,454
Create MapFish default filter based on the request params. Arguments: request the request. mapped_class the SQLAlchemy mapped class. geom_attr the key of the geometry property as defined in the SQLAlchemy mapper. If you use ``declarative_base`` this is the name of the geometry attribute as defined in the mapped class. \\**kwargs additional arguments passed to ``create_geom_filter()``.
def create_filter(request, mapped_class, geom_attr, **kwargs):
    attr_filter = create_attr_filter(request, mapped_class)
    geom_filter = create_geom_filter(request, mapped_class, geom_attr, **kwargs)
    if geom_filter is None and attr_filter is None:
        return None
    if geom_filter is None:
        return attr_filter
    if attr_filter is None:
        return geom_filter
    return and_(geom_filter, attr_filter)
1,005,455
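A hedged sketch of how these filter factories combine in a view; Spot is a hypothetical SQLAlchemy mapped class with a 'geom' attribute, and the query string follows the queryable protocol described above.

# e.g. GET /spots?queryable=name,elevation&name__ilike=%25peak%25&elevation__gte=1000
filter_ = create_filter(request, Spot, 'geom')
if filter_ is not None:
    results = session.query(Spot).filter(filter_).all()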
Return a specific team. Args: session (requests.sessions.Session): Authenticated session. team_id (int): The ID of the team to get. Returns: helpscout.models.Person: A person singleton representing the team, if existing. Otherwise ``None``.
def get(cls, session, team_id):
    return cls(
        '/teams/%d.json' % team_id,
        singleton=True,
        session=session,
    )
1,005,550
List the members for the team. Args: session (requests.sessions.Session): Authenticated session. team_or_id (helpscout.models.Person or int): Team or the ID of the team to get the members for. Returns: RequestPaginator(output_type=helpscout.models.User): Users iterator.
def get_members(cls, session, team_or_id):
    if isinstance(team_or_id, Person):
        team_or_id = team_or_id.id
    return cls(
        '/teams/%d/members.json' % team_or_id,
        session=session,
        out_type=User,
    )
1,005,551
Parse a property received from the API into an internal object. Args: name (str): Name of the property on the object. value (mixed): The unparsed API value. Raises: HelpScoutValidationException: In the event that the property name is not found. Returns: mixed: A value compatible with the internal models.
def _parse_property(cls, name, value):
    prop = cls._props.get(name)
    return_value = value
    if not prop:
        logger.debug(
            '"%s" with value "%s" is not a valid property for "%s".' % (
                name, value, cls,
            ),
        )
        return_value = None
    elif isinstance(prop, properties.Instance):
        return_value = prop.instance_class.from_api(**value)
    elif isinstance(prop, properties.List):
        return_value = cls._parse_property_list(prop, value)
    elif isinstance(prop, properties.Color):
        return_value = cls._parse_property_color(value)
    return return_value
1,005,883
Return a snake cased version of the input string. Args: string (str): A camel cased string. Returns: str: A snake cased string.
def _to_snake_case(string):
    sub_string = r'\1_\2'
    string = REGEX_CAMEL_FIRST.sub(sub_string, string)
    return REGEX_CAMEL_SECOND.sub(sub_string, string).lower()
1,005,885
Return a camel cased version of the input string. Args: string (str): A snake cased string. Returns: str: A camel cased string.
def _to_camel_case(string):
    components = string.split('_')
    return '%s%s' % (
        components[0],
        ''.join(c.title() for c in components[1:]),
    )
1,005,886
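A small usage sketch for the two case converters above; the module's REGEX_CAMEL_FIRST and REGEX_CAMEL_SECOND constants are not shown in this row, so the two-pass patterns in the comments are assumptions.

# assuming the usual two-pass camel-case regexes:
# REGEX_CAMEL_FIRST = re.compile(r'(.)([A-Z][a-z]+)')
# REGEX_CAMEL_SECOND = re.compile(r'([a-z0-9])([A-Z])')
_to_snake_case('firstName')   # 'first_name'
_to_camel_case('first_name')  # 'firstName'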
Formats a given value. Args: value: value to format Returns: str: formatted value
def __call__(self, value):
    fmt = self.fmt(value)
    if len(fmt) > self.col_width:
        # truncate with an ellipsis when the formatted value overflows the column
        fmt = fmt[:self.col_width - 3] + '...'
    fmt = self.just(fmt, self.col_width)
    return fmt
1,006,036
A helper method that adds routes to view callables that, together, implement the MapFish HTTP interface. Example:: import papyrus config.include(papyrus) config.add_papyrus_routes('spots', '/spots') config.scan() Arguments: ``route_name_prefix`` The prefix used for the route names passed to ``config.add_route``. ``base_url`` The web service's base URL, e.g. ``/spots``. No trailing slash!
def add_papyrus_routes(self, route_name_prefix, base_url):
    route_name = route_name_prefix + '_read_many'
    self.add_route(route_name, base_url, request_method='GET')
    route_name = route_name_prefix + '_read_one'
    self.add_route(route_name, base_url + '/{id}', request_method='GET')
    route_name = route_name_prefix + '_count'
    self.add_route(route_name, base_url + '/count', request_method='GET')
    route_name = route_name_prefix + '_create'
    self.add_route(route_name, base_url, request_method='POST')
    route_name = route_name_prefix + '_update'
    self.add_route(route_name, base_url + '/{id}', request_method='PUT')
    route_name = route_name_prefix + '_delete'
    self.add_route(route_name, base_url + '/{id}', request_method='DELETE')
1,006,062
Send a DELETE request and return the JSON decoded result. Args: json (dict, optional): Object to encode and send in request. Returns: mixed: JSON decoded response data.
def delete(self, json=None):
    return self._call('delete', url=self.endpoint, json=json)
1,006,216
Send a GET request and return the JSON decoded result. Args: params (dict, optional): Mapping of parameters to send in request. Returns: mixed: JSON decoded response data.
def get(self, params=None):
    return self._call('get', url=self.endpoint, params=params)
1,006,217
Send a POST request and return the JSON decoded result. Args: json (dict, optional): Object to encode and send in request. Returns: mixed: JSON decoded response data.
def post(self, json=None):
    return self._call('post', url=self.endpoint, json=json)
1,006,218
Send a PUT request and return the JSON decoded result. Args: json (dict, optional): Object to encode and send in request. Returns: mixed: JSON decoded response data.
def put(self, json=None):
    return self._call('put', url=self.endpoint, json=json)
1,006,219
Instantiate an API Authentication Proxy. Args: session (requests.Session): Authenticated requests Session. proxy_class (type): A class implementing the ``BaseApi`` interface.
def __init__(self, session, proxy_class):
    assert isinstance(proxy_class, type)
    self.session = session
    self.proxy_class = proxy_class
1,006,231
Override attribute getter to act as a proxy for ``proxy_class``. If ``item`` is contained in ``METHOD_NO_PROXY``, it will not be proxied to the ``proxy_class`` and will instead return the attribute on this object. Args: item (str): Name of attribute to get.
def __getattr__(self, item):
    if item in self.METHOD_NO_PROXY:
        return super(AuthProxy, self).__getattr__(item)
    attr = getattr(self.proxy_class, item)
    if callable(attr):
        return self.auth_proxy(attr)
    # non-callables fall through unproxied (the original implicitly returned None)
    return attr
1,006,232
Authentication proxy for API requests. This is required because the API objects are naive of ``HelpScout``, so they would otherwise be unauthenticated. Args: method (callable): A method call that should be authenticated. It should accept a ``requests.Session`` as its first parameter, which should be used for the actual API call. Returns: mixed: The results of the authenticated callable.
def auth_proxy(self, method):
    def _proxy(*args, **kwargs):
        return method(self.session, *args, **kwargs)
    return _proxy
1,006,233
Get the users that are associated to a Mailbox. Args: session (requests.sessions.Session): Authenticated session. mailbox_or_id (MailboxRef or int): Mailbox or the ID of the mailbox to get the users for. Returns: RequestPaginator(output_type=helpscout.models.User): Users iterator.
def find_in_mailbox(cls, session, mailbox_or_id):
    if hasattr(mailbox_or_id, 'id'):
        mailbox_or_id = mailbox_or_id.id
    return cls(
        '/mailboxes/%d/users.json' % mailbox_or_id,
        session=session,
    )
1,006,234
Delete an attachment. Args: session (requests.sessions.Session): Authenticated session. attachment (helpscout.models.Attachment): The attachment to be deleted. Returns: NoneType: Nothing.
def delete_attachment(cls, session, attachment):
    return super(Conversations, cls).delete(
        session,
        attachment,
        endpoint_override='/attachments/%s.json' % attachment.id,
        out_type=Attachment,
    )
1,006,283
Return conversations for a specific customer in a mailbox. Args: session (requests.sessions.Session): Authenticated session. mailbox (helpscout.models.Mailbox): Mailbox to search. customer (helpscout.models.Customer): Customer to search for. Returns: RequestPaginator(output_type=helpscout.models.Conversation): Conversations iterator.
def find_customer(cls, session, mailbox, customer):
    return cls(
        '/mailboxes/%d/customers/%s/conversations.json' % (
            mailbox.id, customer.id,
        ),
        session=session,
    )
1,006,284
Return conversations for a specific user in a mailbox. Args: session (requests.sessions.Session): Authenticated session. mailbox (helpscout.models.Mailbox): Mailbox to search. user (helpscout.models.User): User to search for. Returns: RequestPaginator(output_type=helpscout.models.Conversation): Conversations iterator.
def find_user(cls, session, mailbox, user):
    return cls(
        '/mailboxes/%d/users/%s/conversations.json' % (
            mailbox.id, user.id,
        ),
        session=session,
    )
1,006,285
Return a specific attachment's data. Args: session (requests.sessions.Session): Authenticated session. attachment_id (int): The ID of the attachment from which to get data. Returns: helpscout.models.AttachmentData: An attachment data singleton, if existing. Otherwise ``None``.
def get_attachment_data(cls, session, attachment_id):
    return cls(
        '/attachments/%d/data.json' % attachment_id,
        singleton=True,
        session=session,
        out_type=AttachmentData,
    )
1,006,286
Return conversations in a mailbox. Args: session (requests.sessions.Session): Authenticated session. mailbox (helpscout.models.Mailbox): Mailbox to list. Returns: RequestPaginator(output_type=helpscout.models.Conversation): Conversations iterator.
def list(cls, session, mailbox):
    endpoint = '/mailboxes/%d/conversations.json' % mailbox.id
    return super(Conversations, cls).list(session, endpoint)
1,006,287
Return conversations in a specific folder of a mailbox. Args: session (requests.sessions.Session): Authenticated session. mailbox (helpscout.models.Mailbox): Mailbox that folder is in. folder (helpscout.models.Folder): Folder to list. Returns: RequestPaginator(output_type=helpscout.models.Conversation): Conversations iterator.
def list_folder(cls, session, mailbox, folder):
    return cls(
        '/mailboxes/%d/folders/%s/conversations.json' % (
            mailbox.id, folder.id,
        ),
        session=session,
    )
1,006,288
Update a thread. Args: session (requests.sessions.Session): Authenticated session. conversation (helpscout.models.Conversation): The conversation that the thread belongs to. thread (helpscout.models.Thread): The thread to be updated. Returns: helpscout.models.Conversation: Conversation including freshly updated thread.
def update_thread(cls, session, conversation, thread):
    data = thread.to_api()
    data['reload'] = True
    return cls(
        '/conversations/%s/threads/%d.json' % (
            conversation.id, thread.id,
        ),
        data=data,
        request_type=RequestPaginator.PUT,
        singleton=True,
        session=session,
    )
1,006,290
Called by the protocol on object creation. Arguments: * ``feature`` The GeoJSON feature as received from the client.
def __init__(self, feature=None):
    if feature:
        for p in class_mapper(self.__class__).iterate_properties:
            if not isinstance(p, ColumnProperty):
                continue
            if p.columns[0].primary_key:
                primary_key = p.key
        if hasattr(feature, 'id') and feature.id is not None:
            setattr(self, primary_key, feature.id)
        self.__update__(feature)
1,006,369
Called by the protocol on object update. Arguments: * ``feature`` The GeoJSON feature as received from the client.
def __update__(self, feature):
    for p in class_mapper(self.__class__).iterate_properties:
        if not isinstance(p, ColumnProperty):
            continue
        col = p.columns[0]
        if isinstance(col.type, Geometry):
            geom = feature.geometry
            if geom and not isinstance(geom, geojson.geometry.Default):
                srid = col.type.srid
                shape = asShape(geom)
                setattr(self, p.key, from_shape(shape, srid=srid))
                self._shape = shape
        elif not col.primary_key:
            if p.key in feature.properties:
                setattr(self, p.key, feature.properties[p.key])
    if self.__add_properties__:
        for k in self.__add_properties__:
            setattr(self, k, feature.properties.get(k))
1,006,370
Prints a formatted row. Args: args: row cells
def __call__(self, *args):
    if len(self.formatters) == 0:
        self.setup(*args)
    row_cells = []
    if self.rownum:
        row_cells.append(0)
    if self.timestamp:
        row_cells.append(datetime.datetime.now())
    if self.time_diff:
        row_cells.append(0)
    row_cells.extend(args)
    if len(row_cells) != len(self.formatters):
        raise ValueError('Expected number of columns is {}. Got {}.'.format(
            len(self.formatters), len(row_cells)))
    line = self.format_row(*row_cells)
    self.print_line(line)
1,006,444
Setup formatters by observing the first row. Args: *args: row cells
def setup_formatters(self, *args):
    formatters = []
    col_offset = 0
    # initialize formatters for row-id, timestamp and time-diff columns
    if self.rownum:
        formatters.append(fmt.RowNumberFormatter.setup(0))
        col_offset += 1
    if self.timestamp:
        formatters.append(fmt.DatetimeFormatter.setup(
            datetime.datetime.now(),
            fmt='{:%Y-%m-%d %H:%M:%S.%f}'.format,
            col_width=26))
        col_offset += 1
    if self.time_diff:
        formatters.append(fmt.TimeDeltaFormatter.setup(0))
        col_offset += 1
    # initialize formatters for user-defined columns
    for coli, value in enumerate(args):
        fmt_class = type2fmt.get(type(value), fmt.GenericFormatter)
        kwargs = {}
        # set column width
        if self.default_colwidth is not None:
            kwargs['col_width'] = self.default_colwidth
        if coli in self.column_widths:
            kwargs['col_width'] = self.column_widths[coli]
        elif self.columns and self.columns[coli + col_offset] in self.column_widths:
            kwargs['col_width'] = self.column_widths[self.columns[coli + col_offset]]
        # set formatter function
        if fmt_class == fmt.FloatFormatter and self.float_format is not None:
            kwargs['fmt'] = self.float_format
        if coli in self.column_formatters:
            kwargs['fmt'] = self.column_formatters[coli]
        elif self.columns and self.columns[coli + col_offset] in self.column_formatters:
            kwargs['fmt'] = self.column_formatters[self.columns[coli + col_offset]]
        formatter = fmt_class.setup(value, **kwargs)
        formatters.append(formatter)
    self.formatters = formatters
1,006,446
Do preparations before printing the first row. Args: *args: first row cells
def setup(self, *args):
    self.setup_formatters(*args)
    if self.columns:
        self.print_header()
    elif self.border and not self.csv:
        self.print_line(self.make_horizontal_border())
1,006,447
Converts row values into a csv line. Args: row: a list of row cells as unicode Returns: csv_line (unicode)
def csv_format(self, row):
    if PY2:
        buf = io.BytesIO()
        csvwriter = csv.writer(buf)
        csvwriter.writerow([c.strip().encode(self.encoding) for c in row])
        csv_line = buf.getvalue().decode(self.encoding).rstrip()
    else:
        buf = io.StringIO()
        csvwriter = csv.writer(buf)
        csvwriter.writerow([c.strip() for c in row])
        csv_line = buf.getvalue().rstrip()
    return csv_line
1,006,452
Join a new query to existing queries on the stack. Args: query (tuple or list or DomainCondition): The condition for the query. If a ``DomainCondition`` object is not provided, the input should conform to the interface defined in :func:`~.domain.DomainCondition.from_tuple`. join_with (str): The join string to apply, if other queries are already on the stack.
def add_query(self, query, join_with=AND):
    if not isinstance(query, DomainCondition):
        query = DomainCondition.from_tuple(query)
    if len(self.query):
        self.query.append(join_with)
    self.query.append(query)
1,006,743
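A hedged usage sketch for add_query; the Domain container name and the OR join constant are assumptions, while the ('field', 'value') tuple form follows the from_tuple interface referenced above.

domain = Domain()  # hypothetical container exposing add_query
domain.add_query(('first_name', 'Jane'))
domain.add_query(('last_name', 'Doe'), join_with=OR)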
Initialize a new generic query condition. Args: field (str): Field name to search on. This should be the Pythonified name as in the internal models, not the name as provided in the API e.g. ``first_name`` for the Customer's first name instead of ``firstName``. value (mixed): The value of the field.
def __init__(self, field, value, **kwargs):
    super(DomainCondition, self).__init__(
        field=field, value=value, **kwargs
    )
1,006,745
List the folders for the mailbox. Args: session (requests.sessions.Session): Authenticated session. mailbox_or_id (helpscout.models.Mailbox or int): Mailbox or the ID of the mailbox to get the folders for. Returns: RequestPaginator(output_type=helpscout.models.Folder): Folders iterator.
def get_folders(cls, session, mailbox_or_id):
    if isinstance(mailbox_or_id, Mailbox):
        mailbox_or_id = mailbox_or_id.id
    return cls(
        '/mailboxes/%d/folders.json' % mailbox_or_id,
        session=session,
        out_type=Folder,
    )
1,006,812
Parse raw record data if required. Args: record (dict or BaseModel): The record data that was received for the request. If it is a ``dict``, the data will be parsed using the proper model's ``from_api`` method.
def __init__(self, *args, **kwargs):
    if isinstance(kwargs.get('record'), dict):
        prefix, _ = kwargs['event_type'].split('.', 1)
        model = self.EVENT_PREFIX_TO_MODEL[prefix]
        kwargs['record'] = model.from_api(**kwargs['record'])
    super(WebHookEvent, self).__init__(*args, **kwargs)
1,007,234
Defines a flag of type 'string'. Args: flag_name: The name of the flag as a string. default_value: The default value the flag should take as a string. docstring: A helpful message explaining the use of the flag.
def DEFINE_string(flag_name, default_value, docstring, required=False):  # pylint: disable=invalid-name
    _define_helper(flag_name, default_value, docstring, str, required)
1,007,327
Defines a flag of type 'int'. Args: flag_name: The name of the flag as a string. default_value: The default value the flag should take as an int. docstring: A helpful message explaining the use of the flag.
def DEFINE_integer(flag_name, default_value, docstring, required=False):  # pylint: disable=invalid-name
    _define_helper(flag_name, default_value, docstring, int, required)
1,007,328
Defines a flag of type 'boolean'. Args: flag_name: The name of the flag as a string. default_value: The default value the flag should take as a boolean. docstring: A helpful message explaining the use of the flag.
def DEFINE_boolean(flag_name, default_value, docstring):  # pylint: disable=invalid-name
    # Register a custom function for 'bool' so --flag=True works.
    def str2bool(bool_str):
        return bool_str.lower() in ('true', 't', '1')
    get_context_parser().add_argument(
        '--' + flag_name, nargs='?', const=True, help=docstring,
        default=default_value, type=str2bool)
    # Add negated version, stay consistent with argparse with regard to
    # dashes in flag names.
    get_context_parser().add_argument(
        '--no' + flag_name, action='store_false',
        dest=flag_name.replace('-', '_'))
1,007,329
Defines a flag of type 'float'. Args: flag_name: The name of the flag as a string. default_value: The default value the flag should take as a float. docstring: A helpful message explaining the use of the flag.
def DEFINE_float(flag_name, default_value, docstring, required=False):  # pylint: disable=invalid-name
    _define_helper(flag_name, default_value, docstring, float, required)
1,007,330
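A short usage sketch for the four flag helpers above; _define_helper and get_context_parser are internals of this module, so the example sticks to the public DEFINE_* calls with illustrative flag names.

DEFINE_string('data_dir', '/tmp/data', 'Directory holding input data.')
DEFINE_integer('batch_size', 32, 'Number of examples per batch.')
DEFINE_boolean('shuffle', True, 'Whether to shuffle the input.')
DEFINE_float('learning_rate', 0.01, 'Optimizer learning rate.')
# on the command line, e.g.: --batch_size=64 --noshuffle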
Return a value associated with a key from the session dictionary. Args: key (str): The dictionary key. Returns: str: The value associated with that key, or None if the key is not in the dictionary.
def __getitem__(self, key):
    # touching the session refreshes its expiry
    self.rdb.expire(self.session_hash, self.ttl)
    encoded_result = self.rdb.hget(self.session_hash, key)
    if encoded_result is None:
        return None
    else:
        return encoded_result.decode('utf-8')
1,007,396
Set an existing or new key, value association. Args: key (str): The dictionary key. value (str): The dictionary value.
def __setitem__(self, key, value):
    self.rdb.hset(self.session_hash, key, value)
    self.rdb.expire(self.session_hash, self.ttl)
1,007,397
Get a value from the dictionary. Args: key (str): The dictionary key. default (any): The default to return if the key is not in the dictionary. Defaults to None. Returns: str or any: The dictionary value or the default if the key is not in the dictionary.
def get(self, key, default=None):
    retval = self.__getitem__(key)
    if not retval:
        retval = default
    return retval
1,007,398
Compute a hash value using a simple weighted-sum method. Parameters: ----------- value: string the value to hash Returns: -------- result the hash code for value
def hash(self, value):
    result = 0
    for i in range(len(value)):
        result += self.seed * result + ord(value[i])
    # mask the weighted sum into table range; the original's
    # `(self.capacity - 1) % result` looks like a typo for `&`,
    # the usual form when capacity is a power of two
    return (self.capacity - 1) & result
1,007,476
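A quick sketch of this weighted-sum hash in use; the SimpleHash holder class, its constructor, and the seed/capacity values are assumptions, with the method body repeated from above.

class SimpleHash(object):
    def __init__(self, capacity, seed):
        self.capacity = capacity  # assumed to be a power of two
        self.seed = seed

    def hash(self, value):
        result = 0
        for i in range(len(value)):
            result += self.seed * result + ord(value[i])
        return (self.capacity - 1) & result

h = SimpleHash(capacity=1 << 10, seed=31)
print(h.hash('hello'))  # an index in [0, 1024), e.g. a Bloom-filter bit position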
Tokenizes documents, using a lemmatizer. Args: | docs (list of str) -- the text documents to process. Returns: | list -- the list of token lists, one per document.
def tokenize(self, docs):
    if self.n_jobs == 1:
        return [self._tokenize(doc) for doc in docs]
    else:
        return parallel(self._tokenize, docs, self.n_jobs)
1,007,936
Encrypt a 16-byte block of data. NOTE: This function was formerly called `encrypt`, but was changed when support for encrypting arbitrary-length strings was added. Args: plainText (str): 16-byte data. Returns: 16-byte str. Raises: TypeError if CamCrypt object has not been initialized. ValueError if `plainText` is not BLOCK_SIZE (i.e. 16) bytes.
def encrypt_block(self, plainText):
    if not self.initialized:
        raise TypeError("CamCrypt object has not been initialized")
    if len(plainText) != BLOCK_SIZE:
        raise ValueError("plainText must be %d bytes long (received %d bytes)" %
                         (BLOCK_SIZE, len(plainText)))
    cipher = ctypes.create_string_buffer(BLOCK_SIZE)
    self.encblock(self.bitlen, plainText, self.keytable, cipher)
    return cipher.raw
1,008,001
Decrypt a 16-byte block of data. NOTE: This function was formerly called `decrypt`, but was changed when support for decrypting arbitrary-length strings was added. Args: cipherText (str): 16-byte data. Returns: 16-byte str. Raises: TypeError if CamCrypt object has not been initialized. ValueError if `cipherText` is not BLOCK_SIZE (i.e. 16) bytes.
def decrypt_block(self, cipherText):
    if not self.initialized:
        raise TypeError("CamCrypt object has not been initialized")
    if len(cipherText) != BLOCK_SIZE:
        raise ValueError("cipherText must be %d bytes long (received %d bytes)" %
                         (BLOCK_SIZE, len(cipherText)))
    plain = ctypes.create_string_buffer(BLOCK_SIZE)
    self.decblock(self.bitlen, cipherText, self.keytable, plain)
    return plain.raw
1,008,002
Returns the feature vectors for a set of docs. If the model is not already trained, then self.train() is called. Args: docs (dict or list of tuples): asset_id, body_text of documents you wish to featurize.
def vectorize(self, docs):
    if type(docs) == dict:
        docs = docs.items()
    if self.model is None:  # `== None` replaced with the idiomatic identity check
        self.train(docs)
    asset_id2vector = {}
    unfound = []
    # iterate through the items in docs and check if any are already in the model
    for item in docs:
        asset_id, _ = item
        label = 'DOC_' + str(asset_id)
        if label in self.model:
            asset_id2vector.update({asset_id: self.model[label]})
        else:
            unfound.append(item)
    # for all assets not in the model, update the model and then get their sentence vectors
    if len(unfound) > 0:
        sentences = [self._gen_sentence(item) for item in unfound]
        self.update_model(sentences, train=self.stream_train)
        asset_id2vector.update({item[0]: self.model['DOC_' + str(item[0])]
                                for item in unfound})
    return asset_id2vector
1,008,197
Train Doc2Vec on a series of docs. Train from scratch or update. Args: docs: list of tuples (assetid, body_text) or dictionary {assetid: body_text} retrain: boolean, retrain from scratch or update the model. Saves the model in the class to self.model. Returns: 0 if successful
def train(self, docs, retrain=False):
    if type(docs) == dict:
        docs = docs.items()
    train_sentences = [self._gen_sentence(item) for item in docs]
    if self.is_trained and (retrain == False):
        # online training
        self.update_model(train_sentences, update_labels_bool=True)
    else:
        # train from scratch
        self.model = Doc2Vec(train_sentences, size=self.size,
                             window=self.window, min_count=self.min_count,
                             workers=self.workers)
        self.is_trained = True
    return 0
1,008,198
Takes in html-mixed body text as a string and returns a list of strings, lower case and with punctuation given spacing. Called by self._gen_sentence() Args: input (string): body text
def _process(self, input):
    # strip HTML tags, then pad punctuation with spaces
    input = re.sub("<[^>]*>", " ", input)
    punct = list(string.punctuation)
    for symbol in punct:
        input = input.replace(symbol, " %s " % symbol)
    # a list comprehension keeps this a list on Python 3,
    # where the original filter() would return an iterator
    input = [x for x in input.lower().split(' ') if x != u'']
    return input
1,008,200
Takes an assetid_body_tuple and returns a Doc2Vec LabeledSentence Args: assetid_body_tuple (tuple): (assetid, bodytext) pair
def _gen_sentence(self, assetid_body_tuple):
    asset_id, body = assetid_body_tuple
    text = self._process(body)
    sentence = LabeledSentence(text, labels=['DOC_%s' % str(asset_id)])
    return sentence
1,008,201
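A short end-to-end sketch of this featurizer's flow; the `featurizer` wrapper object and the sample documents are assumptions, and gensim's older LabeledSentence API is implied by the code above.

docs = {101: '<p>Payment failed for order #7</p>',
        102: '<p>Refund issued.</p>'}
featurizer.train(docs)                 # trains Doc2Vec from scratch
vectors = featurizer.vectorize(docs)   # {101: array([...]), 102: array([...])}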
Set the resource attributes from the kwargs. Only sets items in the `self.Meta.attributes` white list. Subclass this method to customise attributes. Args: kwargs: Keyword arguments passed into the init of this class
def set_attributes(self, **kwargs):
    if self._subresource_map:
        self.set_subresources(**kwargs)
        for key in self._subresource_map.keys():
            # Don't let these attributes be overridden later
            kwargs.pop(key, None)
    for field, value in kwargs.items():
        if field in self.Meta.attributes:
            setattr(self, field, value)
1,008,631
Construct the URL for talking to this resource. i.e.: http://myapi.com/api/resource Note that this is NOT the method for calling individual instances i.e. http://myapi.com/api/resource/1 Args: resource: The resource class instance base_url: The Base URL of this API service. returns: resource_url: The URL for this resource
def get_resource_url(cls, resource, base_url):
    if resource.Meta.resource_name:
        url = '{}/{}'.format(base_url, resource.Meta.resource_name)
    else:
        # fall back to the pluralized, lower-cased resource name
        p = inflect.engine()
        plural_name = p.plural(resource.Meta.name.lower())
        url = '{}/{}'.format(base_url, plural_name)
    return cls._parse_url_and_validate(url)
1,008,632
Construct the URL for talking to an individual resource. http://myapi.com/api/resource/1 Args: url: The url for this resource uid: The unique identifier for an individual resource kwargs: Additional keyword arguments returns: final_url: The URL for this individual resource
def get_url(cls, url, uid, **kwargs):
    if uid:
        url = '{}/{}'.format(url, uid)
    # the original's `else: url = url` was a no-op and is dropped
    return cls._parse_url_and_validate(url)
1,008,633
Receives a URL string and validates it using urlparse. Args: url: A URL string Returns: parsed_url: A validated URL Raises: BadURLException
def _parse_url_and_validate(cls, url):
    parsed_url = urlparse(url)
    if parsed_url.scheme and parsed_url.netloc:
        final_url = parsed_url.geturl()
    else:
        raise BadURLException
    return final_url
1,008,635
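A brief illustration of the validation behavior, assuming the classmethod is exposed on a Resource class as above.

Resource._parse_url_and_validate('http://myapi.com/api/resource')  # returns the URL unchanged
Resource._parse_url_and_validate('myapi.com/api/resource')         # raises BadURLException (no scheme)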
For the list of valid URLs, try and match them up to resources in the related_resources attribute. Args: url_values: A dictionary of keys and URL strings that could be related resources. Returns: valid_values: The values that are valid
def match_urls_to_resources(self, url_values):
    valid_values = {}
    for resource in self.Meta.related_resources:
        for k, v in url_values.items():
            resource_url = resource.get_resource_url(
                resource, resource.Meta.base_url)
            if isinstance(v, list):
                if all([resource_url in i for i in v]):
                    self.set_related_method(resource, v)
                    valid_values[k] = v
            elif resource_url in v:
                self.set_related_method(resource, v)
                valid_values[k] = v
    return valid_values
1,008,638
Set the resource attributes from the kwargs. Only sets items in the `self.Meta.attributes` white list. Args: kwargs: Keyword arguments passed into the init of this class
def set_attributes(self, **kwargs):
    for field, value in kwargs.items():
        if field in self.Meta.attributes:
            setattr(self, field, value)
1,008,641
Read data from file(s) or STDIN. Args: filenames (list): List of files to read to get data. If empty or None, read from STDIN.
def _get_data(filenames):
    if filenames:
        data = ""
        for filename in filenames:
            with open(filename, "rb") as f:
                data += f.read()
    else:
        data = sys.stdin.read()
    return data
1,009,304
Print data to a file or STDOUT. Args: filename (str or None): If None, print to STDOUT; otherwise, print to the file with this name. data (str): Data to print.
def _print_results(filename, data):
    if filename:
        with open(filename, 'wb') as f:
            f.write(data)
    else:
        print data  # Python 2 print statement; this module predates Python 3
1,009,305
Prepares the HTTP request and returns it. Args: method_type: The HTTP method type params: Additional parameters for the HTTP request. kwargs: Any extra keyword arguments passed into a client method. returns: prepared_request: An HTTP request object.
def prepare_http_request(self, method_type, params, **kwargs):
    prepared_request = self.session.prepare_request(
        requests.Request(method=method_type, **params)
    )
    return prepared_request
1,009,452
Handles Response objects. Args: response: An HTTP response object valid_status_codes: A tuple list of valid status codes resource: The resource class to build from this response returns: resources: A list of Resource instances
def _handle_response(self, response, valid_status_codes, resource):
    if response.status_code not in valid_status_codes:
        raise InvalidStatusCodeError(
            status_code=response.status_code,
            expected_status_codes=valid_status_codes
        )
    if response.content:
        data = response.json()
        if isinstance(data, list):
            # A list of results is always rendered
            return [resource(**x) for x in data]
        else:
            # Try and find the paginated resources
            key = getattr(resource.Meta, 'pagination_key', None)
            if isinstance(data.get(key), list):
                # Only return the paginated responses
                return [resource(**x) for x in data.get(key)]
            else:
                # Attempt to render this whole response as a resource
                return [resource(**data)]
    return []
1,009,454
Given a resource_class and its Meta.methods tuple, assign methods for communicating with that resource. Args: resource_class: A single resource class
def assign_methods(self, resource_class):
    assert all([x.upper() in VALID_METHODS
                for x in resource_class.Meta.methods])
    for method in resource_class.Meta.methods:
        self._assign_method(
            resource_class,
            method.upper()
        )
1,009,458
Using reflection, assigns a new method to this class. Args: resource_class: A resource class method_type: The HTTP method type
def _assign_method(self, resource_class, method_type):
    method_name = resource_class.get_method_name(
        resource_class, method_type)
    valid_status_codes = getattr(
        resource_class.Meta,
        'valid_status_codes',
        DEFAULT_VALID_STATUS_CODES
    )

    # I know what you're going to say, and I'd love help making this nicer;
    # reflection assigns the same memory addr to each method otherwise.
    def get(self, method_type=method_type, method_name=method_name,
            valid_status_codes=valid_status_codes, resource=resource_class,
            data=None, uid=None, **kwargs):
        return self.call_api(
            method_type, method_name,
            valid_status_codes, resource, data, uid=uid, **kwargs)

    def put(self, method_type=method_type, method_name=method_name,
            valid_status_codes=valid_status_codes, resource=resource_class,
            data=None, uid=None, **kwargs):
        return self.call_api(
            method_type, method_name,
            valid_status_codes, resource, data, uid=uid, **kwargs)

    def post(self, method_type=method_type, method_name=method_name,
             valid_status_codes=valid_status_codes, resource=resource_class,
             data=None, uid=None, **kwargs):
        return self.call_api(
            method_type, method_name,
            valid_status_codes, resource, data, uid=uid, **kwargs)

    def patch(self, method_type=method_type, method_name=method_name,
              valid_status_codes=valid_status_codes, resource=resource_class,
              data=None, uid=None, **kwargs):
        return self.call_api(
            method_type, method_name,
            valid_status_codes, resource, data, uid=uid, **kwargs)

    def delete(self, method_type=method_type, method_name=method_name,
               valid_status_codes=valid_status_codes, resource=resource_class,
               data=None, uid=None, **kwargs):
        return self.call_api(
            method_type, method_name,
            valid_status_codes, resource, data, uid=uid, **kwargs)

    method_map = {
        'GET': get,
        'PUT': put,
        'POST': post,
        'PATCH': patch,
        'DELETE': delete
    }
    setattr(
        self, method_name,
        types.MethodType(method_map[method_type], self)
    )
1,009,459
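The method_type=method_type default arguments in _assign_method guard against Python's late-binding closures, which is why each inner function freezes its values up front; a standalone illustration of the pitfall (not library code):

    # Late binding: every closure reads the loop variable's final value.
    broken = [lambda: verb for verb in ('GET', 'POST')]
    print([f() for f in broken])      # ['POST', 'POST']

    # A default argument captures the value at definition time instead.
    fixed = [lambda verb=verb: verb for verb in ('GET', 'POST')]
    print([f() for f in fixed])       # ['GET', 'POST']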
Ensures that the data within cdata has double sphere symmetry. Example:: >>> spherepy.doublesphere(cdata, 1) Args: cdata (numpy.array): 2-D complex array of data sampled on the sphere. sym (int): 1 for scalar data and -1 for vector data. Returns: numpy.array([*,*], dtype=np.complex128) containing an array with double sphere symmetry.
def double_sphere(cdata, sym):
    nrows = cdata.shape[0]
    ncols = cdata.shape[1]
    ddata = np.zeros([nrows, ncols], dtype=np.complex128)
    for n in range(0, nrows):
        for m in range(0, ncols):
            # Sample at the antipodal point, sign-flipped for vector data.
            s = sym * cdata[np.mod(nrows - n, nrows),
                            np.mod(ncols // 2 + m, ncols)]
            t = cdata[n, m]
            if s * t == 0:
                # If either sample is zero, keep the non-zero one as-is.
                ddata[n, m] = s + t
            else:
                # Otherwise average the sample with its antipodal counterpart.
                ddata[n, m] = (s + t) / 2
    return ddata
1,009,487
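As a sanity check of the symmetry: for scalar data (sym=1) every output sample should equal the sample at its antipode, i.e. ddata[n, m] == ddata[(N - n) % N, (M // 2 + m) % M]. A quick numerical spot check with random data (shapes chosen arbitrarily):

    import numpy as np

    rng = np.random.default_rng(0)
    cdata = rng.standard_normal((6, 8)) + 1j * rng.standard_normal((6, 8))
    dd = double_sphere(cdata, 1)
    n, m = 2, 5
    # Compare one sample against its antipodal counterpart.
    print(np.isclose(dd[n, m], dd[(6 - n) % 6, (4 + m) % 8]))  # True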
Calculates a virtual barcode for an IBAN account number and an ISO 11649 creditor reference. Arguments: iban {string} -- IBAN-formatted account number reference {string} -- ISO 11649 creditor reference amount {decimal.Decimal} -- Amount in euros, 0.01 - 999999.99 due {datetime.date} -- due date
def barcode(iban, reference, amount, due=None):
    iban = iban.replace(' ', '')
    reference = reference.replace(' ', '')
    if reference.startswith('RF'):
        version = 5
    else:
        version = 4
    if version == 5:
        reference = reference[2:]
        # Zero-pad the reference to 23 characters, keeping the two RF check digits first.
        if len(reference) < 23:
            reference = reference[:2] + ("0" * (23 - len(reference))) + reference[2:]
    elif version == 4:
        reference = reference.zfill(20)
    if not iban.startswith('FI'):
        raise BarcodeException('Barcodes can be printed only for IBANs starting with FI')
    iban = iban[2:]
    amount = "%08d" % (amount.quantize(Decimal('.01')).shift(2).to_integral_value())
    if len(amount) != 8:
        raise BarcodeException("Barcode payment amount must be less than 1000000.00")
    if due:
        due = due.strftime("%y%m%d")
    else:
        due = "000000"
    if version == 4:
        barcode = "%s%s%s000%s%s" % (version, iban, amount, reference, due)
    elif version == 5:
        barcode = "%s%s%s%s%s" % (version, iban, amount, reference, due)
    return barcode
1,009,666
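A sketch of a call, using the commonly published example Finnish IBAN; the RF reference and amount are likewise illustrative, not real payment details:

    import datetime
    from decimal import Decimal

    code = barcode('FI21 1234 5600 0007 85', 'RF92 1234 2345',
                   Decimal('50.00'), datetime.date(2024, 1, 31))
    # Version-5 barcodes are 54 digits: '5' + 16-digit BBAN + 8-digit amount
    # + 23-digit reference + 6-digit due date.
    print(len(code), code[0])  # 54 5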
This endpoint doesn't return a JSON object; instead it returns a series of rows, each its own object. Given this setup, it makes sense to treat it the way we handle our Bulk Export requests. Arguments: path: the directory on your computer you wish the file to be downloaded into. return_response_object: recommended to be set to 'False'. If set to 'True', will just return the response object as defined by the 'python-requests' module.
def get_experiment_metrics(self, path, return_response_object=False,
                           experiment_id=None, campaign_id=None,
                           start_date_time=None, end_date_time=None):
    call = "/api/experiments/metrics"
    if isinstance(return_response_object, bool) is False:
        raise ValueError("'return_response_object' parameter must be a boolean")
    payload = {}
    if experiment_id is not None:
        payload["experimentId"] = experiment_id
    if campaign_id is not None:
        payload["campaignId"] = campaign_id
    if start_date_time is not None:
        payload["startDateTime"] = start_date_time
    if end_date_time is not None:
        payload["endDateTime"] = end_date_time
    return self.export_data_api(call=call, path=path, params=payload)
1,009,747
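A hypothetical call, assuming client is an initialized API client; the path, ID, and timestamp values are placeholders:

    client.get_experiment_metrics(
        path='/tmp/metrics',            # directory the export is written into
        return_response_object=False,
        experiment_id='exp-123',
        start_date_time='2024-01-01T00:00:00Z',
        end_date_time='2024-01-31T23:59:59Z',
    )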
Groups together Params for adding under the 'What' section. Args: params(list of :func:`Param`): Parameter elements to go in this group. name(str): Group name. NB ``None`` is valid, since the group may be best identified by its type. type(str): Type of group, e.g. 'complex' (for real and imaginary).
def Group(params, name=None, type=None): atts = {} if name: atts['name'] = name if type: atts['type'] = type g = objectify.Element('Group', attrib=atts) for p in params: g.append(p) return g
1,009,974
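A sketch of building a Group; the Param children are constructed directly with objectify here for illustration (in the library they would normally come from its own Param helper):

    from lxml import etree, objectify

    p1 = objectify.Element('Param', attrib={'name': 'real', 'value': '1.0'})
    p2 = objectify.Element('Param', attrib={'name': 'imag', 'value': '0.5'})
    g = Group([p1, p2], name='flux', type='complex')
    # Serializes as <Group name="flux" type="complex"> with two Param children.
    print(etree.tostring(g, pretty_print=True).decode())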
Represents external information, typically original obs data and metadata. Args: uri(str): Uniform resource identifier for external data, e.g. FITS file. meaning(str): The nature of the document referenced, e.g. what instrument and filter was used to create the data?
def Reference(uri, meaning=None): attrib = {'uri': uri} if meaning is not None: attrib['meaning'] = meaning return objectify.Element('Reference', attrib)
1,009,975
Represents a probable cause / relation between this event and some prior event. Args: probability(float): Value 0.0 to 1.0. relation(str): e.g. 'associated' or 'identified' (see VOEvent spec). name(str): e.g. name of identified progenitor. concept(str): One of a 'formal UCD-like vocabulary of astronomical concepts', e.g. http://ivoat.ivoa.net/stars.supernova.Ia - see VOEvent spec.
def Inference(probability=None, relation=None, name=None, concept=None): atts = {} if probability is not None: atts['probability'] = str(probability) if relation is not None: atts['relation'] = relation inf = objectify.Element('Inference', attrib=atts) if name is not None: inf.Name = name if concept is not None: inf.Concept = concept return inf
1,009,976
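A sketch with illustrative values (the progenitor name is a placeholder; the concept URI is the one quoted in the docstring):

    from lxml import etree, objectify

    inf = Inference(probability=0.9, relation='associated',
                    name='SN 2011fe',
                    concept='http://ivoat.ivoa.net/stars.supernova.Ia')
    # Strip objectify's py:pytype annotations before serializing.
    objectify.deannotate(inf, cleanup_namespaces=True)
    print(etree.tostring(inf, pretty_print=True).decode())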
Used to cite earlier VOEvents. Use in conjunction with :func:`.add_citations`. Args: ivorn(str): It is assumed this will be copied verbatim from elsewhere, and so these should have any prefix (e.g. 'ivo://', 'http://') already in place - the function will not alter the value. cite_type (:class:`.definitions.cite_types`): String conforming to one of the standard citation types.
def EventIvorn(ivorn, cite_type): # This is an ugly hack around the limitations of the lxml.objectify API: c = objectify.StringElement(cite=cite_type) c._setText(ivorn) c.tag = "EventIVORN" return c
1,009,977
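A sketch, using a placeholder ivorn and one of the standard citation types:

    from lxml import etree

    ev = EventIvorn('ivo://example.org/event#1', cite_type='followup')
    # Yields an <EventIVORN cite="followup"> element wrapping the ivorn text.
    print(etree.tostring(ev).decode())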
Initialize ndrive instance. Using the given user information, log in to the ndrive server and create a session. Args: NID_AUT: Naver account authentication info NID_SES: Naver account session info
def __init__(self, NID_AUT=None, NID_SES=None):
    self.session.headers["User-Agent"] = \
        "Mozilla/5.0 (Windows NT 6.2; WOW64) Chrome/32.0.1700.76 Safari/537.36"
    self.session.cookies.set('NID_AUT', NID_AUT)
    self.session.cookies.set('NID_SES', NID_SES)
1,010,110
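A hypothetical session setup; the cookie values and user id are placeholders for real Naver credentials:

    nd = ndrive(NID_AUT='AUTH_COOKIE_VALUE', NID_SES='SESSION_COOKIE_VALUE')
    nd.user_id = 'example_user'   # getRegisterUserInfo below reads self.user_id
    if nd.getRegisterUserInfo():
        print(nd.useridx)         # user index stored on success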
Get registerUserInfo. On success, stores the returned useridx on the session. Args: svctype: Platform information auth: ??? Returns: True: Success False: Failed
def getRegisterUserInfo(self, svctype="Android NDrive App ver", auth=0):
    data = {'userid': self.user_id, 'svctype': svctype, 'auth': auth}
    r = self.session.get(nurls['getRegisterUserInfo'], params=data)
    j = json.loads(r.text)
    if j['message'] != 'success':
        print("[*] Error getRegisterUserInfo: " + j['message'])
        return False
    else:
        self.useridx = j['resultvalue']['useridx']
        return True
1,010,112