def shuffle(self, overwrite=False): if overwrite: shuffled = self.path else: shuffled = FileAPI.add_ext_name(self.path, "_shuffled") lines = open(self.path).readlines() random.shuffle(lines) open(shuffled, "w").writelines(lines) self.path = shuffled
This method creates a new shuffled copy of the file (or shuffles it in place if overwrite is True) and points self.path at the shuffled file.
def multiple_files_count_reads_in_windows(bed_files, args): # type: (Iterable[str], Namespace) -> OrderedDict[str, List[pd.DataFrame]] bed_windows = OrderedDict() # type: OrderedDict[str, List[pd.DataFrame]] for bed_file in bed_files: logging.info("Binning " + bed_file) if ".bedpe" in bed_file: chromosome_dfs = count_reads_in_windows_paired_end(bed_file, args) else: chromosome_dfs = count_reads_in_windows(bed_file, args) bed_windows[bed_file] = chromosome_dfs return bed_windows
Use count_reads_in_windows on multiple files and store the results in a dict keyed by file. Untested, since it does the same thing as count_reads_in_windows.
def _merge_files(windows, nb_cpu): # type: (Iterable[pd.DataFrame], int) -> pd.DataFrame # windows is a list of chromosome dfs per file windows = iter(windows) # can iterate over because it is odict_values merged = next(windows) # if there is only one file, the merging is skipped since the windows is used up for chromosome_dfs in windows: # merge_same_files merges the chromosome files in parallel merged = merge_same_files(merged, chromosome_dfs, nb_cpu) return merged
Merge lists of chromosome bin df chromosome-wise. windows is an OrderedDict where the keys are files, the values are lists of dfs, one per chromosome. Returns a list of dataframes, one per chromosome, with the collective count per bin for all files. TODO: is it faster to merge all in one command?
def generate_cumulative_dist(island_expectations_d, total_length): # type: (Dict[int, float], int) -> float cumulative = [0.0] * (total_length + 1) partial_sum = 0.0 island_expectations = [] for i in range(len(cumulative)): if i in island_expectations_d: island_expectations.append(island_expectations_d[i]) else: island_expectations.append(0) for index in range(1, len(island_expectations) + 1): complimentary = len(island_expectations) - index partial_sum += island_expectations[complimentary] cumulative[complimentary] = partial_sum # move to function call for index in range(len(cumulative)): if cumulative[index] <= E_VALUE: score_threshold = index * BIN_SIZE break return score_threshold
Generate the cumulative distribution of island expectations and return the score threshold: the first index at which the cumulative expectation drops to E_VALUE or below, multiplied by BIN_SIZE.
def py2round(value): if value > 0: return float(floor(float(value)+0.5)) else: return float(ceil(float(value)-0.5))
Round values as in Python 2, for Python 3 compatibility. All x.5 values are rounded away from zero. In Python 3, this has changed to avoid bias: when x is even, rounding is towards zero, when x is odd, rounding is away from zero. Thus, in Python 3, round(2.5) results in 2, round(3.5) is 4. Python 3 also returns an int; Python 2 returns a float.
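A minimal check of the rounding behaviour described above, using the py2round definition shown here alongside Python 3's built-in round for comparison:

from math import floor, ceil

def py2round(value):
    if value > 0:
        return float(floor(float(value) + 0.5))
    else:
        return float(ceil(float(value) - 0.5))

# Half values are rounded away from zero, as in Python 2.
print(py2round(2.5), round(2.5))    # 3.0  2
print(py2round(3.5), round(3.5))    # 4.0  4
print(py2round(-2.5), round(-2.5))  # -3.0  -2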
def canonicalize(interval, lower_inc=True, upper_inc=False): if not interval.discrete: raise TypeError('Only discrete ranges can be canonicalized') if interval.empty: return interval lower, lower_inc = canonicalize_lower(interval, lower_inc) upper, upper_inc = canonicalize_upper(interval, upper_inc) return interval.__class__( [lower, upper], lower_inc=lower_inc, upper_inc=upper_inc, )
Convert a discrete interval to an equivalent canonical representation (by default with an inclusive lower bound and an exclusive upper bound).
def glb(self, other): return self.__class__( [ min(self.lower, other.lower), min(self.upper, other.upper) ], lower_inc=self.lower_inc if self < other else other.lower_inc, upper_inc=self.upper_inc if self > other else other.upper_inc, )
Return the greatest lower bound for given intervals. :param other: AbstractInterval instance
def lub(self, other): return self.__class__( [ max(self.lower, other.lower), max(self.upper, other.upper), ], lower_inc=self.lower_inc if self < other else other.lower_inc, upper_inc=self.upper_inc if self > other else other.upper_inc, )
Return the least upper bound for given intervals. :param other: AbstractInterval instance
def is_connected(self, other): return self.upper > other.lower and other.upper > self.lower or ( self.upper == other.lower and (self.upper_inc or other.lower_inc) ) or ( self.lower == other.upper and (self.lower_inc or other.upper_inc) )
Returns ``True`` if there exists a (possibly empty) range which is enclosed by both this range and other. Examples: * [1, 3] and [5, 7] are not connected * [5, 7] and [1, 3] are not connected * [2, 4) and [3, 5) are connected, because both enclose [3, 4) * [1, 3) and [3, 5) are connected, because both enclose the empty range [3, 3) * [1, 3) and (3, 5) are not connected
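A self-contained sketch of the connectedness rule above, using a hypothetical namedtuple stand-in for the interval class so the documented examples can be checked directly:

from collections import namedtuple

# Hypothetical stand-in for AbstractInterval; only the fields used by is_connected.
Interval = namedtuple('Interval', 'lower upper lower_inc upper_inc')

def is_connected(a, b):
    return (a.upper > b.lower and b.upper > a.lower) or (
        a.upper == b.lower and (a.upper_inc or b.lower_inc)
    ) or (
        a.lower == b.upper and (a.lower_inc or b.upper_inc)
    )

print(is_connected(Interval(1, 3, True, True), Interval(5, 7, True, True)))     # False: [1, 3] and [5, 7]
print(is_connected(Interval(2, 4, True, False), Interval(3, 5, True, False)))   # True: [2, 4) and [3, 5)
print(is_connected(Interval(1, 3, True, False), Interval(3, 5, True, False)))   # True: [1, 3) and [3, 5)
print(is_connected(Interval(1, 3, True, False), Interval(3, 5, False, False)))  # False: [1, 3) and (3, 5)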
def compute_enriched_threshold(average_window_readcount): # type: (float) -> int current_threshold, survival_function = 0, 1 for current_threshold in count(start=0, step=1): survival_function -= poisson.pmf(current_threshold, average_window_readcount) if survival_function <= WINDOW_P_VALUE: break island_enriched_threshold = current_threshold + 1 return island_enriched_threshold
Computes the minimum number of tags required in window for an island to be enriched.
def _factln(num): # type: (int) -> float if num < 20: log_factorial = log(factorial(num)) else: log_factorial = num * log(num) - num + log(num * (1 + 4 * num * ( 1 + 2 * num))) / 6.0 + log(pi) / 2 return log_factorial
Computes the log-factorial exactly for tractable numbers and uses Ramanujan's approximation otherwise.
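A small sketch comparing the exact log-factorial with the Ramanujan-style approximation used in _factln (self-contained, standard library only):

from math import log, factorial, pi

def factln_approx(num):
    # Ramanujan: ln(n!) ~= n*ln(n) - n + ln(n*(1 + 4n*(1 + 2n)))/6 + ln(pi)/2
    return num * log(num) - num + log(num * (1 + 4 * num * (1 + 2 * num))) / 6.0 + log(pi) / 2

for n in (20, 50, 100):
    exact = log(factorial(n))
    print(n, round(exact, 6), round(factln_approx(n), 6))
# The two values agree to several decimal places already at n = 20.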
def add_new_enriched_bins_matrixes(region_files, dfs, bin_size): dfs = _remove_epic_enriched(dfs) names = ["Enriched_" + os.path.basename(r) for r in region_files] regions = region_files_to_bins(region_files, names, bin_size) new_dfs = OrderedDict() assert len(regions.columns) == len(dfs) for region, (n, df) in zip(regions, dfs.items()): region_col = regions[region] df = df.join(region_col, how="outer").fillna(0) new_dfs[n] = df return new_dfs
Add enriched bins based on bed files. There is no way to find the correspondence between region file and matrix file, but it does not matter.
def merge_chromosome_dfs(df_tuple): # type: (Tuple[pd.DataFrame, pd.DataFrame]) -> pd.DataFrame plus_df, minus_df = df_tuple index_cols = "Chromosome Bin".split() count_column = plus_df.columns[0] if plus_df.empty: return return_other(minus_df, count_column, index_cols) if minus_df.empty: return return_other(plus_df, count_column, index_cols) # sum duplicate bins # TODO: why are there duplicate bins here in the first place? plus_df = plus_df.groupby(index_cols).sum() minus_df = minus_df.groupby(index_cols).sum() # first sum the two bins from each strand df = pd.concat([plus_df, minus_df], axis=1).fillna(0).sum(axis=1) df = df.reset_index().sort_values(by="Bin") df.columns = ["Chromosome", "Bin", count_column] df = df.sort_values(["Chromosome", "Bin"]) df[["Bin", count_column]] = df[["Bin", count_column]].astype(int32) df = df[[count_column, "Chromosome", "Bin"]] return df.reset_index(drop=True)
Merges data from the two strands into strand-agnostic counts.
def remove_out_of_bounds_bins(df, chromosome_size): # type: (pd.DataFrame, int) -> pd.DataFrame # The dataframe is empty and contains no bins out of bounds if "Bin" not in df: return df df = df.drop(df[df.Bin > chromosome_size].index) return df.drop(df[df.Bin < 0].index)
Remove all reads that were shifted outside of the genome endpoints.
def remove_bins_with_ends_out_of_bounds(df, chromosome_size, window_size): # type: (pd.DataFrame, int, int) -> pd.DataFrame # The dataframe is empty and contains no bins out of bounds # print(df.head(2)) # print(chromosome_size) # print(window_size) out_of_bounds = df[df.index.get_level_values("Bin") + window_size > chromosome_size].index # print(len(out_of_bounds)) df = df.drop(out_of_bounds) return df
Remove all reads that were shifted outside of the genome endpoints.
def create_log2fc_bigwigs(matrix, outdir, args): # type: (pd.DataFrame, str, Namespace) -> None call("mkdir -p {}".format(outdir), shell=True) genome_size_dict = args.chromosome_sizes outpaths = [] for bed_file in matrix[args.treatment]: outpath = join(outdir, splitext(basename(bed_file))[0] + "_log2fc.bw") outpaths.append(outpath) data = create_log2fc_data(matrix, args) Parallel(n_jobs=args.number_cores)(delayed(_create_bigwig)(bed_column, outpath, genome_size_dict) for outpath, bed_column in zip(outpaths, data))
Create bigwigs from matrix.
def add_to_island_expectations_dict(average_window_readcount, current_max_scaled_score, island_eligibility_threshold, island_expectations, gap_contribution): # type: ( float, int, float, Dict[int, float], float) -> Dict[int, float] scaled_score = current_max_scaled_score + E_VALUE for index in range(current_max_scaled_score + 1, scaled_score + 1): island_expectation = 0.0 i = island_eligibility_threshold #i is the number of tags in the added window current_island = int(round(index - compute_window_score( i, average_window_readcount) / BIN_SIZE)) while (current_island >= 0): if current_island in island_expectations: island_expectation += _poisson( i, average_window_readcount) * island_expectations[ current_island] i += 1 current_island = int(round(index - compute_window_score( i, average_window_readcount) / BIN_SIZE)) island_expectation *= gap_contribution if island_expectation: island_expectations[index] = island_expectation return island_expectations
Can probably be heavily optimized. Time required to run can be seen from logging info.
def get_island_bins(df, window_size, genome, args): # type: (pd.DataFrame, int, str, Namespace) -> Dict[str, Set[int]] # need these chromos because the df might not have islands in all chromos chromosomes = natsorted(list(args.chromosome_sizes)) chromosome_island_bins = {} # type: Dict[str, Set[int]] df_copy = df.reset_index(drop=False) for chromosome in chromosomes: cdf = df_copy.loc[df_copy.Chromosome == chromosome] if cdf.empty: chromosome_island_bins[chromosome] = set() else: island_starts_ends = zip(cdf.Start.values.tolist(), cdf.End.values.tolist()) island_bins = chain(*[range( int(start), int(end), window_size) for start, end in island_starts_ends]) chromosome_island_bins[chromosome] = set(island_bins) return chromosome_island_bins
Finds the enriched bins in a df.
def create_genome_size_dict(genome): # type: (str) -> Dict[str,int] size_file = get_genome_size_file(genome) size_lines = open(size_file).readlines() size_dict = {} for line in size_lines: genome, length = line.split() size_dict[genome] = int(length) return size_dict
Creates a genome size dict (chromosome name -> length) from the size file of the given genome.
def find_readlength(args): # type: (Namespace) -> int try: bed_file = args.treatment[0] except AttributeError: bed_file = args.infiles[0] filereader = "cat " if bed_file.endswith(".gz") and search("linux", platform, IGNORECASE): filereader = "zcat " elif bed_file.endswith(".gz") and search("darwin", platform, IGNORECASE): filereader = "gzcat " elif bed_file.endswith(".bz2"): filereader = "bzgrep " command = filereader + "{} | head -10000".format(bed_file) output = check_output(command, shell=True) df = pd.read_table( BytesIO(output), header=None, usecols=[1, 2], sep="\t", names=["Start", "End"]) readlengths = df.End - df.Start mean_readlength = readlengths.mean() median_readlength = readlengths.median() max_readlength = readlengths.max() min_readlength = readlengths.min() logging.info(( "Used first 10000 reads of {} to estimate a median read length of {}\n" "Mean readlength: {}, max readlength: {}, min readlength: {}.").format( bed_file, median_readlength, mean_readlength, max_readlength, min_readlength)) return median_readlength
Estimate the read length based on the first 10000 reads.
def get_closest_readlength(estimated_readlength): # type: (int) -> int readlengths = [36, 50, 75, 100] differences = [abs(r - estimated_readlength) for r in readlengths] min_difference = min(differences) index_of_min_difference = [i for i, d in enumerate(differences) if d == min_difference][0] return readlengths[index_of_min_difference]
Find the predefined readlength closest to the estimated readlength. In the case of a tie, choose the shortest readlength.
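A short usage sketch of the tie-breaking behaviour described above (function copied from the snippet so it runs standalone):

def get_closest_readlength(estimated_readlength):
    # type: (int) -> int
    readlengths = [36, 50, 75, 100]
    differences = [abs(r - estimated_readlength) for r in readlengths]
    min_difference = min(differences)
    index_of_min_difference = [i for i, d in enumerate(differences)
                               if d == min_difference][0]
    return readlengths[index_of_min_difference]

print(get_closest_readlength(43))  # 36: equidistant from 36 and 50, the shorter wins
print(get_closest_readlength(90))  # 100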
def parse_hyphen_range(self, value): values = value.strip().split('-') values = list(map(strip, values)) if len(values) == 1: lower = upper = value.strip() elif len(values) == 2: lower, upper = values if lower == '': # Parse range such as '-3' upper = '-' + upper lower = upper else: if len(values) > 4: raise IntervalException( 'Unknown interval format given.' ) values_copy = [] for key, value in enumerate(values): if value != '': try: if values[key - 1] == '': value = '-' + value except IndexError: pass values_copy.append(value) lower, upper = values_copy return [lower, upper], True, True
Parse hyphen ranges such as: 2 - 5, -2 - -1, -3 - 5
def parse_version(output): for x in output.splitlines(): match = VERSION_PATTERN.match(x) if match: return match.group('version').strip() return None
Parses the supplied output and returns the version string. :param output: A string containing the output of running snort. :returns: Version string for the version of snort run. None if not found.
def parse_alert(output): for x in output.splitlines(): match = ALERT_PATTERN.match(x) if match: rec = {'timestamp': datetime.strptime(match.group('timestamp'), '%m/%d/%y-%H:%M:%S.%f'), 'sid': int(match.group('sid')), 'revision': int(match.group('revision')), 'priority': int(match.group('priority')), 'message': match.group('message'), 'source': match.group('src'), 'destination': match.group('dest'), 'protocol': match.group('protocol'), } if match.group('classtype'): rec['classtype'] = match.group('classtype') yield rec
Parses the supplied output and yields any alerts. Example alert format: 01/28/14-22:26:04.885446 [**] [1:1917:11] INDICATOR-SCAN UPnP service discover attempt [**] [Classification: Detection of a Network Scan] [Priority: 3] {UDP} 10.1.1.132:58650 -> 239.255.255.250:1900 :param output: A string containing the output of running snort :returns: Generator of snort alert dicts
def _snort_cmd(self, pcap): cmdline = "'{0}' -A console -N -y -c '{1}' {2} -r '{3}'" \ .format(self.conf['path'], self.conf['config'], self.conf['extra_args'] or '', pcap) # can't seem to capture stderr from snort on windows # unless launched via cmd shell if 'nt' in os.name: cmdline = "cmd.exe /c " + cmdline return shlex.split(cmdline)
Given a pcap filename, get the commandline to run. :param pcap: Pcap filename to scan :returns: list of snort command args to scan supplied pcap file
def run(self, pcap): proc = Popen(self._snort_cmd(pcap), stdout=PIPE, stderr=PIPE, universal_newlines=True) stdout, stderr = proc.communicate() if proc.returncode != 0: raise Exception("\n".join(["Execution failed return code: {0}" \ .format(proc.returncode), stderr or ""])) return (parse_version(stderr), [ x for x in parse_alert(stdout) ])
Runs snort against the supplied pcap. :param pcap: Filepath to pcap file to scan :returns: tuple of version, list of alerts
def _suri_cmd(self, pcap, logs): cmdline = "'{0}' -c '{1}' -l '{2}' {3} -r '{4}'" \ .format(self.conf['path'], self.conf['config'], logs, self.conf['extra_args'] or '', pcap) # can't seem to capture stderr on windows # unless launched via cmd shell if 'nt' in os.name: cmdline = "cmd.exe /c " + cmdline return shlex.split(cmdline)
Given a pcap filename, get the commandline to run. :param pcap: Pcap filename to scan :param logs: Output directory for logs :returns: list of command args to scan supplied pcap file
def run(self, pcap): tmpdir = None try: tmpdir = tempfile.mkdtemp(prefix='tmpsuri') proc = Popen(self._suri_cmd(pcap, tmpdir), stdout=PIPE, stderr=PIPE, universal_newlines=True) stdout, stderr = proc.communicate() if proc.returncode != 0: raise Exception("\n".join(["Execution failed return code: {0}" \ .format(proc.returncode), stderr or ""])) with open(os.path.join(tmpdir, 'fast.log')) as tmp: return (parse_version(stdout), [ x for x in parse_alert(tmp.read()) ]) finally: if tmpdir: shutil.rmtree(tmpdir)
Runs suricata against the supplied pcap. :param pcap: Filepath to pcap file to scan :returns: tuple of version, list of alerts
def analyse_pcap(infile, filename): tmp = tempfile.NamedTemporaryFile(suffix=".pcap", delete=False) m = hashlib.md5() results = {'filename': filename, 'status': 'Failed', 'apiversion': __version__, } try: size = 0 while True: buf = infile.read(16384) if not buf: break tmp.write(buf) size += len(buf) m.update(buf) tmp.close() results['md5'] = m.hexdigest() results['filesize'] = size results.update(runner.run(tmp.name)) except OSError as ex: results['stderr'] = str(ex) finally: os.remove(tmp.name) return results
Run IDS across the supplied file. :param infile: File like object containing pcap data. :param filename: Filename of the submitted file. :returns: Dictionary of analysis results.
def submit_and_render(): data = request.files.file template = env.get_template("results.html") if not data: pass results = analyse_pcap(data.file, data.filename) results.update(base) return template.render(results)
Blocking POST handler for file submission. Runs snort on supplied file and returns results as rendered html.
def api_submit(): data = request.files.file response.content_type = 'application/json' if not data or not hasattr(data, 'file'): return json.dumps({"status": "Failed", "stderr": "Missing form params"}) return json.dumps(analyse_pcap(data.file, data.filename), default=jsondate, indent=4)
Blocking POST handler for file submission. Runs snort on supplied file and returns results as json text.
def main(): parser = argparse.ArgumentParser() parser.add_argument("-H", "--host", help="Web server Host address to bind to", default="0.0.0.0", action="store", required=False) parser.add_argument("-p", "--port", help="Web server Port to bind to", default=8080, action="store", required=False) args = parser.parse_args() logging.basicConfig() run(host=args.host, port=args.port, reloader=True, server=SERVER)
Main entrypoint for command-line webserver.
def duration(start, end=None): if not end: end = datetime.now() td = end - start return (td.microseconds + (td.seconds + td.days * 24 * 3600) * 1000000) \ / 1000000.0
Returns duration in seconds since supplied time. Note: time_delta.total_seconds() only available in python 2.7+ :param start: datetime object :param end: Optional end datetime, None = now :returns: Seconds as decimal since start
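A quick check that the manual microsecond arithmetic above matches timedelta.total_seconds() (available in Python 2.7+); the dates here are arbitrary examples:

from datetime import datetime

start = datetime(2020, 1, 1, 12, 0, 0)
end = datetime(2020, 1, 2, 12, 0, 30, 500000)
td = end - start

manual = (td.microseconds + (td.seconds + td.days * 24 * 3600) * 1000000) / 1000000.0
print(manual)              # 86430.5
print(td.total_seconds())  # 86430.5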
def is_pcap(pcap): with open(pcap, 'rb') as tmp: header = tmp.read(4) # check for both big/little endian if header == b"\xa1\xb2\xc3\xd4" or \ header == b"\xd4\xc3\xb2\xa1": return True return False
Simple test for pcap magic bytes in supplied file. :param pcap: File path to Pcap file to check :returns: True if content is pcap (magic bytes present), otherwise False.
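A minimal, self-contained sketch of the magic-byte check: the classic pcap global header begins with 0xa1b2c3d4 written in the capturing host's byte order, which is why both orderings are accepted. The temporary file here is purely illustrative:

import os
import struct
import tempfile

def is_pcap(pcap):
    with open(pcap, 'rb') as tmp:
        header = tmp.read(4)
    return header in (b"\xa1\xb2\xc3\xd4", b"\xd4\xc3\xb2\xa1")

with tempfile.NamedTemporaryFile(suffix=".pcap", delete=False) as f:
    f.write(struct.pack("<I", 0xa1b2c3d4))  # little-endian magic -> d4 c3 b2 a1
    path = f.name

print(is_pcap(path))  # True
os.remove(path)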
def _run_ids(runner, pcap): run = {'name': runner.conf.get('name'), 'module': runner.conf.get('module'), 'ruleset': runner.conf.get('ruleset', 'default'), 'status': STATUS_FAILED, } try: run_start = datetime.now() version, alerts = runner.run(pcap) run['version'] = version or 'Unknown' run['status'] = STATUS_SUCCESS run['alerts'] = alerts except Exception as ex: run['error'] = str(ex) finally: run['duration'] = duration(run_start) return run
Runs the specified IDS runner. :param runner: Runner instance to use :param pcap: File path to pcap for analysis :returns: dict of run metadata/alerts
def run(pcap): start = datetime.now() errors = [] status = STATUS_FAILED analyses = [] pool = ThreadPool(MAX_THREADS) try: if not is_pcap(pcap): raise Exception("Not a valid pcap file") runners = [] for conf in Config().modules.values(): runner = registry.get(conf['module']) if not runner: raise Exception("No module named: '{0}' found registered" .format(conf['module'])) runners.append(runner(conf)) # launch via worker pool analyses = [ pool.apply_async(_run_ids, (runner, pcap)) for runner in runners ] analyses = [ x.get() for x in analyses ] # were all runs successful? if all([ x['status'] == STATUS_SUCCESS for x in analyses ]): status = STATUS_SUCCESS # propagate any errors to the main list for run in [ x for x in analyses if x['status'] != STATUS_SUCCESS ]: errors.append("Failed to run {0}: {1}".format(run['name'], run['error'])) except Exception as ex: errors.append(str(ex)) return {'start': start, 'duration': duration(start), 'status': status, 'analyses': analyses, 'errors': errors, }
Runs all configured IDS instances against the supplied pcap. :param pcap: File path to pcap file to analyse :returns: Dict with details and results of run/s
def _set_up_pool_config(self): ''' Helper to configure pool options during DatabaseWrapper initialization. ''' self._max_conns = self.settings_dict['OPTIONS'].get('MAX_CONNS', pool_config_defaults['MAX_CONNS']) self._min_conns = self.settings_dict['OPTIONS'].get('MIN_CONNS', self._max_conns) self._test_on_borrow = self.settings_dict["OPTIONS"].get('TEST_ON_BORROW', pool_config_defaults['TEST_ON_BORROW']) if self._test_on_borrow: self._test_on_borrow_query = self.settings_dict["OPTIONS"].get('TEST_ON_BORROW_QUERY', pool_config_defaults['TEST_ON_BORROW_QUERY']) else: self._test_on_borrow_query = None
Helper to configure pool options during DatabaseWrapper initialization.
def _create_connection_pool(self, conn_params): ''' Helper to initialize the connection pool. ''' connection_pools_lock.acquire() try: # One more read to prevent a read/write race condition (We do this # here to avoid the overhead of locking each time we get a connection.) if (self.alias not in connection_pools or connection_pools[self.alias]['settings'] != self.settings_dict): logger.info("Creating connection pool for db alias %s" % self.alias) logger.info(" using MIN_CONNS = %s, MAX_CONNS = %s, TEST_ON_BORROW = %s" % (self._min_conns, self._max_conns, self._test_on_borrow)) from psycopg2 import pool connection_pools[self.alias] = { 'pool': pool.ThreadedConnectionPool(self._min_conns, self._max_conns, **conn_params), 'settings': dict(self.settings_dict), } finally: connection_pools_lock.release()
Helper to initialize the connection pool.
def close(self): ''' Override to return the connection to the pool rather than closing it. ''' if self._wrapped_connection and self._pool: logger.debug("Returning connection %s to pool %s" % (self._wrapped_connection, self._pool)) self._pool.putconn(self._wrapped_connection) self._wrapped_connection = None
Override to return the connection to the pool rather than closing it.
def b58encode_int(i, default_one=True): '''Encode an integer using Base58''' if not i and default_one: return alphabet[0] string = "" while i: i, idx = divmod(i, 58) string = alphabet[idx] + string return string
Encode an integer using Base58
def b58encode(v): '''Encode a string using Base58''' if not isinstance(v, bytes): raise TypeError("a bytes-like object is required, not '%s'" % type(v).__name__) origlen = len(v) v = v.lstrip(b'\0') newlen = len(v) p, acc = 1, 0 for c in iseq(reversed(v)): acc += p * c p = p << 8 result = b58encode_int(acc, default_one=False) return (alphabet[0] * (origlen - newlen) + result)
Encode a string using Base58
def b58decode_int(v): '''Decode a Base58 encoded string as an integer''' if not isinstance(v, str): v = v.decode('ascii') decimal = 0 for char in v: decimal = decimal * 58 + alphabet.index(char) return decimal
Decode a Base58 encoded string as an integer
def b58decode(v): '''Decode a Base58 encoded string''' if not isinstance(v, str): v = v.decode('ascii') origlen = len(v) v = v.lstrip(alphabet[0]) newlen = len(v) acc = b58decode_int(v) result = [] while acc > 0: acc, mod = divmod(acc, 256) result.append(mod) return (b'\0' * (origlen - newlen) + bseq(reversed(result)))
Decode a Base58 encoded string
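A round-trip sketch of the integer helpers above, assuming the standard Bitcoin Base58 alphabet (the module-level alphabet is not shown in the snippet, so it is spelled out here):

alphabet = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'

def b58encode_int(i, default_one=True):
    if not i and default_one:
        return alphabet[0]
    string = ""
    while i:
        i, idx = divmod(i, 58)
        string = alphabet[idx] + string
    return string

def b58decode_int(v):
    decimal = 0
    for char in v:
        decimal = decimal * 58 + alphabet.index(char)
    return decimal

n = 2 ** 32 + 12345
encoded = b58encode_int(n)
print(encoded)
assert b58decode_int(encoded) == n  # encoding and decoding are inverses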
def breadcrumb(context, label, viewname, *args, **kwargs): append_breadcrumb(context, _(escape(label)), viewname, args, kwargs) return ''
Add link to list of breadcrumbs, usage: {% load bubbles_breadcrumbs %} {% breadcrumb "Home" "index" %} Remember to use it inside {% block %} with {{ block.super }} to get all parent breadcrumbs. :param label: Breadcrumb link label. :param viewname: Name of the view to link this breadcrumb to, or Model instance with implemented get_absolute_url(). :param args: Any arguments to view function.
def breadcrumb_safe(context, label, viewname, *args, **kwargs): append_breadcrumb(context, _(label), viewname, args, kwargs) return ''
Same as breadcrumb but label is not escaped.
def breadcrumb_raw(context, label, viewname, *args, **kwargs): append_breadcrumb(context, escape(label), viewname, args, kwargs) return ''
Same as breadcrumb but label is not translated.
def breadcrumb_raw_safe(context, label, viewname, *args, **kwargs): append_breadcrumb(context, label, viewname, args, kwargs) return ''
Same as breadcrumb but label is not escaped and translated.
def render_breadcrumbs(context, *args): try: template_path = args[0] except IndexError: template_path = getattr(settings, 'BREADCRUMBS_TEMPLATE', 'django_bootstrap_breadcrumbs/bootstrap2.html') links = [] for (label, viewname, view_args, view_kwargs) in context[ 'request'].META.get(CONTEXT_KEY, []): if isinstance(viewname, Model) and hasattr( viewname, 'get_absolute_url') and ismethod( viewname.get_absolute_url): url = viewname.get_absolute_url(*view_args, **view_kwargs) else: try: try: # 'resolver_match' introduced in Django 1.5 current_app = context['request'].resolver_match.namespace except AttributeError: try: resolver_match = resolve(context['request'].path) current_app = resolver_match.namespace except Resolver404: current_app = None url = reverse(viewname=viewname, args=view_args, kwargs=view_kwargs, current_app=current_app) except NoReverseMatch: url = viewname links.append((url, smart_text(label) if label else label)) if not links: return '' if VERSION > (1, 8): # pragma: nocover # RequestContext is deprecated in recent django # https://docs.djangoproject.com/en/1.10/ref/templates/upgrading/ context = context.flatten() context['breadcrumbs'] = links context['breadcrumbs_total'] = len(links) return mark_safe(template.loader.render_to_string(template_path, context))
Render breadcrumbs html using bootstrap css classes.
def split(examples, ratio=0.8): split = int(ratio * len(examples)) return examples[:split], examples[split:]
Utility function that can be used within the parse() implementation of subclasses to split a list of examples into two lists for training and testing.
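A tiny usage example of the split helper (standalone copy of the one-liner above):

def split(examples, ratio=0.8):
    split = int(ratio * len(examples))
    return examples[:split], examples[split:]

train, test = split(list(range(10)), ratio=0.8)
print(train)  # [0, 1, 2, 3, 4, 5, 6, 7]
print(test)   # [8, 9]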
def _find_symbol(self, module, name, fallback=None): if not hasattr(module, name) and fallback: return self._find_symbol(module, fallback, None) return getattr(module, name)
Find the symbol of the specified name inside the module or raise an exception.
def start(self, work): assert threading.current_thread() == threading.main_thread() assert not self.state.running self.state.running = True self.thread = threading.Thread(target=work, args=(self.state,)) self.thread.start() while self.state.running: try: before = time.time() self.update() duration = time.time() - before plt.pause(max(0.001, self.refresh - duration)) except KeyboardInterrupt: self.state.running = False self.thread.join() return
Hand the main thread to the window and continue work in the provided function. A state is passed as the first argument that contains a `running` flag. The function is expected to exit if the flag becomes false. The flag can also be set to false to stop the window event loop and continue in the main thread after the `start()` call.
def stop(self): assert threading.current_thread() == self.thread assert self.state.running self.state.running = False
Close the window and stop the worker thread. The main thread will resume with the next command after the `start()` call.
def update(self): assert threading.current_thread() == threading.main_thread() for axis, line, interface in self.interfaces: line.set_xdata(interface.xdata) line.set_ydata(interface.ydata) axis.set_xlim(0, interface.width or 1, emit=False) axis.set_ylim(0, interface.height or 1, emit=False) self.figure.canvas.draw()
Redraw the figure to show changed data. This is automatically called after `start()` was run.
def apply(self, incoming): assert len(incoming) == self.size self.incoming = incoming outgoing = self.activation(self.incoming) assert len(outgoing) == self.size self.outgoing = outgoing
Store the incoming activation, apply the activation function and store the result as outgoing activation.
def delta(self, above): return self.activation.delta(self.incoming, self.outgoing, above)
The derivative of the activation function at the current state.
def feed(self, weights, data): assert len(data) == self.layers[0].size self.layers[0].apply(data) # Propagate trough the remaining layers. connections = zip(self.layers[:-1], weights, self.layers[1:]) for previous, weight, current in connections: incoming = self.forward(weight, previous.outgoing) current.apply(incoming) # Return the activations of the output layer. return self.layers[-1].outgoing
Evaluate the network with alternative weights on the input data and return the output activation.
def _init_network(self): self.network = Network(self.problem.layers) self.weights = Matrices(self.network.shapes) if self.load: loaded = np.load(self.load) assert loaded.shape == self.weights.shape, ( 'weights to load must match problem definition') self.weights.flat = loaded else: self.weights.flat = np.random.normal( self.problem.weight_mean, self.problem.weight_scale, len(self.weights.flat))
Define model and initialize weights.
def _init_training(self): # pylint: disable=redefined-variable-type if self.check: self.backprop = CheckedBackprop(self.network, self.problem.cost) else: self.backprop = BatchBackprop(self.network, self.problem.cost) self.momentum = Momentum() self.decent = GradientDecent() self.decay = WeightDecay() self.tying = WeightTying(*self.problem.weight_tying) self.weights = self.tying(self.weights)
Classes needed during training.
def _every(times, step_size, index): current = index * step_size step = current // times * times reached = current >= step overshot = current >= step + step_size return current and reached and not overshot
Given a loop over batches of an iterable and an operation that should be performed every few elements, determine whether the operation should be called for the current index.
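A small illustration of _every: with a batch size (step_size) of 10 and an interval (times) of 50, the operation fires on the batches whose cumulative element count crosses a multiple of 50. The function is copied from the snippet so it runs standalone:

def _every(times, step_size, index):
    current = index * step_size
    step = current // times * times
    reached = current >= step
    overshot = current >= step + step_size
    return current and reached and not overshot

for index in range(12):
    if _every(times=50, step_size=10, index=index):
        print('run the operation at batch index', index)
# Prints for index 5 (element 50) and index 10 (element 100).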
def parse_tax_lvl(entry, tax_lvl_depth=[]): # How deep in the hierarchy are we currently? Each two spaces of # indentation is one level deeper. Also parse the scientific name at this # level. depth_and_name = re.match('^( *)(.*)', entry['sci_name']) depth = len(depth_and_name.group(1))//2 name = depth_and_name.group(2) # Remove the previous levels so we're one higher than the level of the new # taxon. (This also works if we're just starting out or are going deeper.) del tax_lvl_depth[depth:] # Append the new taxon. tax_lvl_depth.append((entry['rank'], name)) # Create a tax_lvl dict for the named ranks. tax_lvl = {x[0]: x[1] for x in tax_lvl_depth if x[0] in ranks} return(tax_lvl)
Parse a single kraken-report entry and return a dictionary of taxa for its named ranks. :type entry: dict :param entry: attributes of a single kraken-report row. :type tax_lvl_depth: list :param tax_lvl_depth: running record of taxon levels encountered in previous calls.
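A hedged sketch of just the indentation logic used above (the ranks table and the full entry format come from elsewhere in the module): every two leading spaces in sci_name mark one additional level of depth.

import re

def depth_and_name(sci_name):
    match = re.match('^( *)(.*)', sci_name)
    return len(match.group(1)) // 2, match.group(2)

print(depth_and_name('Bacteria'))                 # (0, 'Bacteria')
print(depth_and_name('  Proteobacteria'))         # (1, 'Proteobacteria')
print(depth_and_name('    Gammaproteobacteria'))  # (2, 'Gammaproteobacteria')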
def parse_kraken_report(kdata, max_rank, min_rank): # map between NCBI taxonomy IDs and the string rep. of the hierarchy taxa = OrderedDict() # the master collection of read counts (keyed on NCBI ID) counts = OrderedDict() # current rank r = 0 max_rank_idx = ranks.index(max_rank) min_rank_idx = ranks.index(min_rank) for entry in kdata: erank = entry['rank'].strip() # print("erank: "+erank) if erank in ranks: r = ranks.index(erank) # update running tally of ranks tax_lvl = parse_tax_lvl(entry) # record the reads assigned to this taxon level, and record the taxonomy string with the NCBI ID if erank in ranks and min_rank_idx >= ranks.index(entry['rank']) >= max_rank_idx: taxon_reads = int(entry["taxon_reads"]) clade_reads = int(entry["clade_reads"]) if taxon_reads > 0 or (clade_reads > 0 and entry['rank'] == min_rank): taxa[entry['ncbi_tax']] = tax_fmt(tax_lvl, r) if entry['rank'] == min_rank: counts[entry['ncbi_tax']] = clade_reads else: counts[entry['ncbi_tax']] = taxon_reads # print(" Counting {} reads at {}".format(counts[entry['ncbi_tax']], '; '.join(taxa[entry['ncbi_tax']]))) #TODO: handle subspecies #if erank == '-' and min_rank == "SS" and last_entry_indent < curr_indent: # pass return counts, taxa
Parse a single output file from the kraken-report tool. Return a list of counts at each of the acceptable taxonomic levels, and a list of NCBI IDs and a formatted string representing their taxonomic hierarchies. :type kdata: str :param kdata: Contents of the kraken report file.
def process_samples(kraken_reports_fp, max_rank, min_rank): taxa = OrderedDict() sample_counts = OrderedDict() for krep_fp in kraken_reports_fp: if not osp.isfile(krep_fp): raise RuntimeError("ERROR: File '{}' not found.".format(krep_fp)) # use the kraken report filename as the sample ID sample_id = osp.splitext(osp.split(krep_fp)[1])[0] with open(krep_fp, "rt") as kf: try: kdr = csv.DictReader(kf, fieldnames=field_names, delimiter="\t") kdata = [entry for entry in kdr][1:] except OSError as oe: raise RuntimeError("ERROR: {}".format(oe)) scounts, staxa = parse_kraken_report(kdata, max_rank=max_rank, min_rank=min_rank) # update master records taxa.update(staxa) sample_counts[sample_id] = scounts return sample_counts, taxa
Parse all kraken-report data files into sample counts dict and store global taxon id -> taxonomy data
def create_biom_table(sample_counts, taxa): data = [[0 if taxid not in sample_counts[sid] else sample_counts[sid][taxid] for sid in sample_counts] for taxid in taxa] data = np.array(data, dtype=int) tax_meta = [{'taxonomy': taxa[taxid]} for taxid in taxa] gen_str = "kraken-biom v{} ({})".format(__version__, __url__) return Table(data, list(taxa), list(sample_counts), tax_meta, type="OTU table", create_date=str(dt.now().isoformat()), generated_by=gen_str, input_is_dense=True)
Create a BIOM table from sample counts and taxonomy metadata. :type sample_counts: dict :param sample_counts: A dictionary of dictionaries with the first level keyed on sample ID, and the second level keyed on taxon ID with counts as values. :type taxa: dict :param taxa: A mapping between the taxon IDs from sample_counts to the full representation of the taxonomy string. The values in this dict will be used as metadata in the BIOM table. :rtype: biom.Table :return: A BIOM table containing the per-sample taxon counts and full taxonomy identifiers as metadata for each taxon.
def write_biom(biomT, output_fp, fmt="hdf5", gzip=False): opener = open mode = 'w' if gzip and fmt != "hdf5": if not output_fp.endswith(".gz"): output_fp += ".gz" opener = gzip_open mode = 'wt' # HDF5 BIOM files are gzipped by default if fmt == "hdf5": opener = h5py.File with opener(output_fp, mode) as biom_f: if fmt == "json": biomT.to_json(biomT.generated_by, direct_io=biom_f) elif fmt == "tsv": biom_f.write(biomT.to_tsv()) else: biomT.to_hdf5(biom_f, biomT.generated_by) return output_fp
Write the BIOM table to a file. :type biomT: biom.table.Table :param biomT: A BIOM table containing the per-sample OTU counts and metadata to be written out to file. :type output_fp str :param output_fp: Path to the BIOM-format file that will be written. :type fmt: str :param fmt: One of: hdf5, json, tsv. The BIOM version the table will be output (2.x, 1.0, 'classic').
def write_otu_file(otu_ids, fp): fpdir = osp.split(fp)[0] if not fpdir == "" and not osp.isdir(fpdir): raise RuntimeError("Specified path does not exist: {}".format(fpdir)) with open(fp, 'wt') as outf: outf.write('\n'.join(otu_ids))
Write out a file containing only the list of OTU IDs from the kraken data. One line per ID. :type otu_ids: list or iterable :param otu_ids: The OTU identifiers that will be written to file. :type fp: str :param fp: The path to the output file.
def transform(self, X): if self.fill_missing: X = self.filler.complete(X) return {'X': X}
Args: X: DataFrame with NaN's Returns: Dictionary with one key - 'X' corresponding to given DataFrame but without nan's
def fit(self, X): self.categorical_encoder = self.encoder_class(cols=list(X)) self.categorical_encoder.fit(X) return self
Args: X: DataFrame of categorical features to encode
def transform(self, numerical_feature_list, categorical_feature_list): features = numerical_feature_list + categorical_feature_list for feature in features: feature = self._format_target(feature) feature.set_index(self.id_column, drop=True, inplace=True) features = pd.concat(features, axis=1).astype(np.float32).reset_index() outputs = dict() outputs['features'] = features outputs['feature_names'] = list(features.columns) outputs['categorical_features'] = self._get_feature_names(categorical_feature_list) return outputs
Args: numerical_feature_list: list of numerical features categorical_feature_list: list of categorical features Returns: Dictionary with following keys: features: DataFrame with concatenated features feature_names: list of features names categorical_features: list of categorical feature names
def transform(self, X): assert np.shape(X)[0] == len(self._weights), ( 'BlendingOptimizer: Number of models to blend its predictions and weights does not match: ' 'n_models={}, weights_len={}'.format(np.shape(X)[0], len(self._weights))) blended_predictions = np.average(np.power(X, self._power), weights=self._weights, axis=0) ** (1.0 / self._power) return {'y_pred': blended_predictions}
Performs predictions blending using the trained weights. Args: X (array-like): Predictions of different models. Returns: dict with blended predictions (key is 'y_pred').
def fit_transform(self, X, y, step_size=0.1, init_weights=None, warm_start=False): self.fit(X=X, y=y, step_size=step_size, init_weights=init_weights, warm_start=warm_start) return self.transform(X=X)
Fit optimizer to X, then transforms X. See `fit` and `transform` for further explanation.
def escape_tags(value, valid_tags): # 1. escape everything value = conditional_escape(value) # 2. Reenable certain tags if valid_tags: # TODO: precompile somewhere once? tag_re = re.compile(r'&lt;(\s*/?\s*(%s))(.*?\s*)&gt;' % '|'.join(re.escape(tag) for tag in valid_tags)) value = tag_re.sub(_replace_quot, value) # Allow comments to be hidden value = value.replace("&lt;!--", "<!--").replace("--&gt;", "-->") return mark_safe(value)
Strips text from the given html string, leaving only tags. This functionality requires BeautifulSoup, nothing will be done otherwise. This isn't perfect. Someone could put javascript in here: <a onClick="alert('hi');">test</a> So if you use valid_tags, you still need to trust your data entry. Or we could try: - only escape the non matching bits - use BeautifulSoup to understand the elements, escape everything else and remove potentially harmful attributes (onClick). - Remove this feature entirely. Half-escaping things securely is very difficult, developers should not be lured into a false sense of security.
def _get_seo_content_types(seo_models): try: return [ContentType.objects.get_for_model(m).id for m in seo_models] except Exception: # previously caught DatabaseError # Return an empty list if this is called too early return []
Returns a list of content types from the models defined in settings.
def register_seo_admin(admin_site, metadata_class): if metadata_class._meta.use_sites: path_admin = SitePathMetadataAdmin model_instance_admin = SiteModelInstanceMetadataAdmin model_admin = SiteModelMetadataAdmin view_admin = SiteViewMetadataAdmin else: path_admin = PathMetadataAdmin model_instance_admin = ModelInstanceMetadataAdmin model_admin = ModelMetadataAdmin view_admin = ViewMetadataAdmin def get_list_display(): return tuple( name for name, obj in metadata_class._meta.elements.items() if obj.editable) backends = metadata_class._meta.backends if 'model' in backends: class ModelAdmin(model_admin): form = get_model_form(metadata_class) list_display = model_admin.list_display + get_list_display() _register_admin(admin_site, metadata_class._meta.get_model('model'), ModelAdmin) if 'view' in backends: class ViewAdmin(view_admin): form = get_view_form(metadata_class) list_display = view_admin.list_display + get_list_display() _register_admin(admin_site, metadata_class._meta.get_model('view'), ViewAdmin) if 'path' in backends: class PathAdmin(path_admin): form = get_path_form(metadata_class) list_display = path_admin.list_display + get_list_display() _register_admin(admin_site, metadata_class._meta.get_model('path'), PathAdmin) if 'modelinstance' in backends: class ModelInstanceAdmin(model_instance_admin): form = get_modelinstance_form(metadata_class) list_display = (model_instance_admin.list_display + get_list_display()) _register_admin(admin_site, metadata_class._meta.get_model('modelinstance'), ModelInstanceAdmin)
Register the backends specified in Meta.backends with the admin.
def _construct_form(self, i, **kwargs): form = super(MetadataFormset, self)._construct_form(i, **kwargs) # Monkey patch the form to always force a save. # It's unfortunate, but necessary because we always want an instance # Affect on performance shouldn't be too great, because ther is only # ever one metadata attached form.empty_permitted = False form.has_changed = lambda: True # Set a marker on this object to prevent automatic metadata creation # This is seen by the post_save handler, which then skips this # instance. if self.instance: self.instance.__seo_metadata_handled = True return form
Override the method to change the form attribute empty_permitted.
def do_get_metadata(parser, token): bits = list(token.split_contents()) tag_name = bits[0] bits = bits[1:] metadata_name = None args = {'as': None, 'for': None, 'in': None, 'on': None} # If there are an even number of bits, # a metadata name has been provided. if len(bits) % 2: metadata_name = bits[0] bits = bits[1:] # Each bits are in the form "key value key value ..." # Valid keys are given in the 'args' dict above. while len(bits): if len(bits) < 2 or bits[0] not in args: raise template.TemplateSyntaxError( "expected format is '%r [as <variable_name>]'" % tag_name) key, value, bits = bits[0], bits[1], bits[2:] args[key] = value return MetadataNode( metadata_name, variable_name=args['as'], target=args['for'], site=args['on'], language=args['in'], )
Retrieve an object which can produce (and format) metadata. {% get_metadata [for my_path] [in my_language] [on my_site] [as my_variable] %} or if you have multiple metadata classes: {% get_metadata MyClass [for my_path] [in my_language] [on my_site] [as my_variable] %}
def _handle_exception(self, exception): try: return super(WebSocketRpcRequest, self)._handle_exception(exception) except Exception: if not isinstance(exception, (odoo.exceptions.Warning, odoo.http.SessionExpiredException, odoo.exceptions.except_orm)): _logger.exception("Exception during JSON request handling.") error = { 'code': 200, 'message': "Odoo Server Error", 'data': odoo.http.serialize_exception(exception) } if isinstance(exception, odoo.http.AuthenticationError): error['code'] = 100 error['message'] = "Odoo Session Invalid" if isinstance(exception, odoo.http.SessionExpiredException): error['code'] = 100 error['message'] = "Odoo Session Expired" return self._json_response(error=error)
Called within an except block to allow converting exceptions to arbitrary responses. Anything returned (except None) will be used as response.
def _get_metadata_model(name=None): if name is not None: try: return registry[name] except KeyError: if len(registry) == 1: valid_names = 'Try using the name "%s" or simply leaving it '\ 'out altogether.' % list(registry)[0] else: valid_names = "Valid names are " + ", ".join( '"%s"' % k for k in list(registry)) raise Exception( "Metadata definition with name \"%s\" does not exist.\n%s" % ( name, valid_names)) else: assert len(registry) == 1, \ "You must have exactly one Metadata class, if using " \ "get_metadata() without a 'name' parameter." return list(registry.values())[0]
Find registered Metadata object.
def populate_metadata(model, MetadataClass): for instance in model.objects.all(): create_metadata_instance(MetadataClass, instance)
For a given model and metadata class, ensure there is metadata for every instance.
def _set_seo_models(self, value): seo_models = [] for model_name in value: if "." in model_name: app_label, model_name = model_name.split(".", 1) model = apps.get_model(app_label, model_name) if model: seo_models.append(model) else: app = apps.get_app_config(model_name) if app: seo_models.extend(app.get_models()) self.seo_models = seo_models
Gets the actual models to be used.
def _resolve_value(self, name): name = str(name) if name in self._metadata._meta.elements: element = self._metadata._meta.elements[name] # Look in instances for an explicit value if element.editable: value = getattr(self, name) if value: return value # Otherwise, return an appropriate default value (populate_from) populate_from = element.populate_from if isinstance(populate_from, collections.Callable): return populate_from(self, **self._populate_from_kwargs()) elif isinstance(populate_from, Literal): return populate_from.value elif populate_from is not NotSet: return self._resolve_value(populate_from) # If this is not an element, look for an attribute on metadata try: value = getattr(self._metadata, name) except AttributeError: pass else: if isinstance(value, collections.Callable): if getattr(value, '__self__', None): return value(self) else: return value(self._metadata, obj=self) return value
Returns an appropriate value for the given name.
def _resolve_template(value, model_instance=None, context=None): if isinstance(value, string_types) and "{" in value: if context is None: context = Context() if model_instance is not None: context[model_instance._meta.model_name] = model_instance value = Template(value).render(context) return value
Resolves any template references in the given value.
def _urls_for_js(urls=None): if urls is None: # prevent circular import from .urls import urlpatterns urls = [url.name for url in urlpatterns if getattr(url, 'name', None)] urls = dict(zip(urls, [get_uri_template(url) for url in urls])) urls.update(getattr(settings, 'LEAFLET_STORAGE_EXTRA_URLS', {})) return urls
Return templated URLs prepared for javascript.
def decorated_patterns(func, *urls): def decorate(urls, func): for url in urls: if isinstance(url, RegexURLPattern): url.__class__ = DecoratedURLPattern if not hasattr(url, "_decorate_with"): setattr(url, "_decorate_with", []) url._decorate_with.append(func) elif isinstance(url, RegexURLResolver): for pp in url.url_patterns: if isinstance(pp, RegexURLPattern): pp.__class__ = DecoratedURLPattern if not hasattr(pp, "_decorate_with"): setattr(pp, "_decorate_with", []) pp._decorate_with.append(func) if func: if not isinstance(func, (list, tuple)): func = [func] for f in func: decorate(urls, f) return urls
Utility function to decorate a group of url in urls.py Taken from http://djangosnippets.org/snippets/532/ + comments See also http://friendpaste.com/6afByRiBB9CMwPft3a6lym Example: urlpatterns = [ url(r'^language/(?P<lang_code>[a-z]+)$', views.MyView, name='name'), ] + decorated_patterns(login_required, url(r'^', include('cms.urls')),
def can_edit(self, user=None, request=None): can = False if request and not self.owner: if (getattr(settings, "LEAFLET_STORAGE_ALLOW_ANONYMOUS", False) and self.is_anonymous_owner(request)): can = True if user and user.is_authenticated(): # if user is authenticated, attach as owner self.owner = user self.save() msg = _("Your anonymous map has been attached to your account %s" % user) messages.info(request, msg) if self.edit_status == self.ANONYMOUS: can = True elif not user.is_authenticated(): pass elif user == self.owner: can = True elif self.edit_status == self.EDITORS and user in self.editors.all(): can = True return can
Define if a user can edit or not the instance, according to his account or the request.
def get_custom_fields(self): return CustomField.objects.filter( content_type=ContentType.objects.get_for_model(self))
Return a list of custom fields for this model
def get_model_custom_fields(self): return CustomField.objects.filter( content_type=ContentType.objects.get_for_model(self))
Return a list of custom fields for this model, directly callable without an instance. Use like Foo.get_model_custom_fields(Foo)
def get_custom_field(self, field_name): content_type = ContentType.objects.get_for_model(self) return CustomField.objects.get( content_type=content_type, name=field_name)
Get a custom field object for this model field_name - Name of the custom field you want.
def get_custom_value(self, field_name): custom_field = self.get_custom_field(field_name) return CustomFieldValue.objects.get_or_create( field=custom_field, object_id=self.id)[0].value
Get a value for a specified custom field field_name - Name of the custom field you want.
def set_custom_value(self, field_name, value): custom_field = self.get_custom_field(field_name) custom_value = CustomFieldValue.objects.get_or_create( field=custom_field, object_id=self.id)[0] custom_value.value = value custom_value.save()
Set a value for a specified custom field field_name - Name of the custom field you want. value - Value to set it to
def is_outdated(self, infile, outfile): # Preliminary check for simply missing file or modified entry-point file. if super(BrowserifyCompiler, self).is_outdated(infile, outfile): return True # Otherwise we need to see what dependencies there are now, and if they're modified. tool, args, env = self._get_cmd_parts() cmd = [tool] + args + ['--list', infile] if self.verbose: print("is_outdated command:", cmd, env) dep_list = self.simple_execute_command(cmd, env=env) if self.verbose: print("dep_list is:", dep_list) for dep_file in dep_list.strip().split('\n'): if super(BrowserifyCompiler, self).is_outdated(dep_file, outfile): if self.verbose: print("Found dep_file \"%s\" updated." % dep_file) return True return False
Check if the input file is outdated. The difficulty with the default implementation is that any file that is `require`d from the entry-point file will not trigger a recompile if it is modified. This overloaded version of the method corrects this by generating a list of all required files that are also a part of the storage manifest and checking if they've been modified since the last compile. The command used to generate the list of dependencies is the same as the compile command but uses the `--list` option instead of `--outfile`. WARNING: It seems to me that just generating the dependencies may take just as long as actually compiling, which would mean we would be better off just forcing a compile every time.
def getclusters(self, count): # only proceed if we got sensible input if count <= 1: raise ClusteringError("When clustering, you need to ask for at " "least two clusters! " "You asked for %d" % count) # return the data straight away if there is nothing to cluster if (self.__data == [] or len(self.__data) == 1 or count == self.__initial_length): return self.__data # It makes no sense to ask for more clusters than data-items available if count > self.__initial_length: raise ClusteringError( "Unable to generate more clusters than " "items available. You supplied %d items, and asked for " "%d clusters." % (self.__initial_length, count)) self.initialise_clusters(self.__data, count) items_moved = True # tells us if any item moved between the clusters, # as we initialised the clusters, we assume that # is the case while items_moved is True: items_moved = False for cluster in self.__clusters: for item in cluster: res = self.assign_item(item, cluster) if items_moved is False: items_moved = res return self.__clusters
Generates *count* clusters. :param count: The amount of clusters that should be generated. count must be greater than ``1``. :raises ClusteringError: if *count* is out of bounds.
def assign_item(self, item, origin): closest_cluster = origin for cluster in self.__clusters: if self.distance(item, centroid(cluster)) < self.distance( item, centroid(closest_cluster)): closest_cluster = cluster if id(closest_cluster) != id(origin): self.move_item(item, origin, closest_cluster) return True else: return False
Assigns an item from a given cluster to the closest located cluster. :param item: the item to be moved. :param origin: the originating cluster.
def move_item(self, item, origin, destination): if self.equality: item_index = 0 for i, element in enumerate(origin): if self.equality(element, item): item_index = i break else: item_index = origin.index(item) destination.append(origin.pop(item_index))
Moves an item from one cluster to another cluster. :param item: the item to be moved. :param origin: the originating cluster. :param destination: the target cluster.
def initialise_clusters(self, input_, clustercount): # initialise the clusters with empty lists self.__clusters = [] for _ in range(clustercount): self.__clusters.append([]) # distribute the items into the clusters count = 0 for item in input_: self.__clusters[count % clustercount].append(item) count += 1
Initialises the clusters by distributing the items from the data evenly across n clusters. :param input_: the data set (a list of tuples). :param clustercount: the amount of clusters (n).
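The round-robin distribution can be seen with a standalone variant of the method above (rewritten as a free function purely for illustration):

def initialise_clusters(input_, clustercount):
    clusters = [[] for _ in range(clustercount)]
    for count, item in enumerate(input_):
        clusters[count % clustercount].append(item)
    return clusters

print(initialise_clusters([1, 2, 3, 4, 5, 6, 7], 3))
# [[1, 4, 7], [2, 5], [3, 6]]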
def publish_progress(self, total, current): if self.progress_callback: self.progress_callback(total, current)
If a progress function was supplied, this will call that function with the total number of elements and the current number of elements processed. :param total: The total number of elements. :param current: The current number of elements processed.
def set_linkage_method(self, method): if method == 'single': self.linkage = single elif method == 'complete': self.linkage = complete elif method == 'average': self.linkage = average elif method == 'uclus': self.linkage = uclus elif hasattr(method, '__call__'): self.linkage = method else: raise ValueError('distance method must be one of single, ' 'complete, average or uclus')
Sets the method to determine the distance between two clusters. :param method: The method to use. It can be one of ``'single'``, ``'complete'``, ``'average'`` or ``'uclus'``, or a callable. The callable should take two collections as parameters and return a distance value between both collections.
def getlevel(self, threshold): # if it's not worth clustering, just return the data if len(self._input) <= 1: return self._input # initialize the cluster if not yet done if not self.__cluster_created: self.cluster() return self._data[0].getlevel(threshold)
Returns all clusters with a maximum distance of *threshold* in between each other :param threshold: the maximum distance between clusters. See :py:meth:`~cluster.cluster.Cluster.getlevel`
def flatten(L): if not isinstance(L, list): return [L] if L == []: return L return flatten(L[0]) + flatten(L[1:])
Flattens a list. Example: >>> flatten([a,b,[c,d,[e,f]]]) [a,b,c,d,e,f]
def fullyflatten(container): flattened_items = [] for item in container: if hasattr(item, 'items'): flattened_items = flattened_items + fullyflatten(item.items) else: flattened_items.append(item) return flattened_items
Completely flattens out a cluster and returns a one-dimensional set containing the cluster's items. This is useful in cases where some items of the cluster are clusters in their own right and you only want the items. :param container: the container to flatten.
def median(numbers): # Sort the list and take the middle element. n = len(numbers) copy = sorted(numbers) if n & 1: # There is an odd number of elements return copy[n // 2] else: return (copy[n // 2 - 1] + copy[n // 2]) / 2.0
Return the median of the list of numbers. see: http://mail.python.org/pipermail/python-list/2004-December/294990.html
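A quick check of both branches (odd and even length), using the median function exactly as defined above:

def median(numbers):
    n = len(numbers)
    copy = sorted(numbers)
    if n & 1:  # odd number of elements
        return copy[n // 2]
    else:
        return (copy[n // 2 - 1] + copy[n // 2]) / 2.0

print(median([7, 1, 3]))      # 3
print(median([7, 1, 3, 10]))  # 5.0, the mean of the two middle values 3 and 7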