def _execute(self, logger: Logger, options: Dict[str, Dict[str, Any]]) -> T: if isinstance(self.parser, _BaseParser): if (not self.is_singlefile) and self.parser.supports_multifile(): return self.parser._parse_multifile(self.obj_type, self.obj_on_fs_to_parse, self._get_children_parsing_plan(), logger, options) elif self.is_singlefile and self.parser.supports_singlefile(): return self.parser._parse_singlefile(self.obj_type, self.get_singlefile_path(), self.get_singlefile_encoding(), logger, options) else: raise _InvalidParserException.create(self.parser, self.obj_on_fs_to_parse) else: raise TypeError('Parser attached to this _BaseParsingPlan is not a ' + str(_BaseParser))
Implementation of the parent class method. Checks that self.parser is a _BaseParser, and calls the appropriate parsing method. :param logger: :param options: :return:
def create_parsing_plan(self, desired_type: Type[T], filesystem_object: PersistedObject, logger: Logger, _main_call: bool = True): in_root_call = False # -- log msg only for the root call, not for the children that will be created by the code below if _main_call and (not hasattr(AnyParser.thrd_locals, 'flag_init') or AnyParser.thrd_locals.flag_init == 0): # print('Building a parsing plan to parse ' + str(filesystem_object) + ' into a ' + # get_pretty_type_str(desired_type)) logger.debug('Building a parsing plan to parse [{location}] into a {type}' ''.format(location=filesystem_object.get_pretty_location(append_file_ext=False), type=get_pretty_type_str(desired_type))) AnyParser.thrd_locals.flag_init = 1 in_root_call = True # -- create the parsing plan try: pp = self._create_parsing_plan(desired_type, filesystem_object, logger, log_only_last=(not _main_call)) finally: # remove threadlocal flag if needed if in_root_call: AnyParser.thrd_locals.flag_init = 0 # -- log success only if in root call if in_root_call: # print('Parsing Plan created successfully') logger.debug('Parsing Plan created successfully') # -- finally return return pp
Implements the abstract parent method by using the recursive parsing plan impl. Subclasses wishing to produce their own parsing plans should rather override _create_parsing_plan in order to benefit from this same log msg. :param desired_type: :param filesystem_object: :param logger: :param _main_call: internal parameter for recursive calls. Should not be changed by the user. :return:
def _create_parsing_plan(self, desired_type: Type[T], filesystem_object: PersistedObject, logger: Logger, log_only_last: bool = False): logger.debug('(B) ' + get_parsing_plan_log_str(filesystem_object, desired_type, log_only_last=log_only_last, parser=self)) return AnyParser._RecursiveParsingPlan(desired_type, filesystem_object, self, logger)
Adds a log message and creates a recursive parsing plan. :param desired_type: :param filesystem_object: :param logger: :param log_only_last: a flag to only log the last part of the file path (default False) :return:
def _get_parsing_plan_for_multifile_children(self, obj_on_fs: PersistedObject, desired_type: Type[T], logger: Logger) -> Dict[str, ParsingPlan[T]]: pass
This method is called by the _RecursiveParsingPlan when created. Implementing classes should return a dictionary containing a ParsingPlan for each child they plan to parse using this framework. Note that for the files that will be parsed using a parsing library it is not necessary to return a ParsingPlan. In other words, implementing classes should return here everything they need for their implementation of _parse_multifile to succeed. Indeed during parsing execution, the framework will call their _parse_multifile method with that same dictionary as an argument (argument name is 'parsing_plan_for_children', see _BaseParser). :param obj_on_fs: :param desired_type: :param logger: :return:
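To make the contract concrete, here is a toy, self-contained sketch (the stub classes below are illustrative only, not the library's real API): whatever dictionary of child plans the implementing parser returns from this method is later handed back, unchanged, to its _parse_multifile as 'parsing_plan_for_children'.

class StubPlan:
    def __init__(self, raw):
        self.raw = raw

    def execute(self):
        # a real ParsingPlan would parse the child file here
        return int(self.raw)


class StubMultiFileParser:
    def _get_parsing_plan_for_multifile_children(self, children):
        # return one plan per child we intend to parse with the framework
        return {name: StubPlan(raw) for name, raw in children.items()}

    def _parse_multifile(self, parsing_plan_for_children):
        # the framework passes the very same dictionary back at execution time
        return {name: plan.execute() for name, plan in parsing_plan_for_children.items()}


parser = StubMultiFileParser()
plans = parser._get_parsing_plan_for_multifile_children({'a': '1', 'b': '2'})
print(parser._parse_multifile(plans))  # {'a': 1, 'b': 2}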
def _parse_singlefile(self, desired_type: Type[T], file_path: str, encoding: str, logger: Logger, options: Dict[str, Dict[str, Any]]) -> T: raise Exception('Not implemented since this is a MultiFileParser')
Implementation of the parent method: not implemented, since this is a multifile parser. :param desired_type: :param file_path: :param encoding: :param logger: :param options: :return:
def create(parser_func: Union[ParsingMethodForStream, ParsingMethodForFile], caught: Exception): msg = 'Caught TypeError while calling parsing function \'' + str(parser_func.__name__) + '\'. ' \ 'Note that the parsing function signature should be ' + parsing_method_stream_example_signature_str \ + ' (streaming=True) or ' + parsing_method_file_example_signature_str + ' (streaming=False).' \ 'Caught error message is : ' + caught.__class__.__name__ + ' : ' + str(caught) return CaughtTypeError(msg).with_traceback(caught.__traceback__)
Helper method provided because we can't put this logic in the constructor: doing so triggers a bug in Nose tests (https://github.com/nose-devs/nose/issues/725). :param parser_func: :param caught: :return:
def _parse_singlefile(self, desired_type: Type[T], file_path: str, encoding: str, logger: Logger, options: Dict[str, Dict[str, Any]]) -> T: opts = get_options_for_id(options, self.get_id_for_options()) if self._streaming_mode: # We open the stream, and let the function parse from it file_stream = None try: # Open the file with the appropriate encoding file_stream = open(file_path, 'r', encoding=encoding) # Apply the parsing function if self.function_args is None: return self._parser_func(desired_type, file_stream, logger, **opts) else: return self._parser_func(desired_type, file_stream, logger, **self.function_args, **opts) except TypeError as e: raise CaughtTypeError.create(self._parser_func, e) finally: if file_stream is not None: # Close the File in any case file_stream.close() else: # the parsing function will open the file itself if self.function_args is None: return self._parser_func(desired_type, file_path, encoding, logger, **opts) else: return self._parser_func(desired_type, file_path, encoding, logger, **self.function_args, **opts)
Relies on the inner parsing function to parse the file. If _streaming_mode is True, the file will be opened and closed by this method. Otherwise the parsing function is responsible for opening and closing the file itself. :param desired_type: :param file_path: :param encoding: :param logger: :param options: :return:
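For reference, the two parsing-function shapes this wrapper supports could look like the following minimal sketch. The parameter names are assumptions inferred from the call sites above, not documented signatures:

from io import TextIOBase
from logging import Logger
from typing import Type

def read_str_from_stream(desired_type: Type[str], file_stream: TextIOBase,
                         logger: Logger, **kwargs) -> str:
    # streaming mode (_streaming_mode=True): the wrapper opens/closes the stream
    return file_stream.read()

def read_str_from_path(desired_type: Type[str], file_path: str, encoding: str,
                       logger: Logger, **kwargs) -> str:
    # non-streaming mode: the function receives a path and opens the file itself
    with open(file_path, 'r', encoding=encoding) as f:
        return f.read()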
def queryByPortSensor(portiaConfig, edgeId, port, sensor, strategy=SummaryStrategies.PER_HOUR, interval=1, params={ 'from': None, 'to': None, 'order': None, 'precision': 'ms', 'fill':'none', 'min': True, 'max': True, 'sum': True, 'avg': True, 'median': False, 'mode': False, 'stddev': False, 'spread': False }): header = {'Accept': 'text/csv'} endpoint = '/summary/device/{0}/port/{1}/sensor/{2}/{3}/{4}{5}'.format( edgeId, port, sensor, resolveStrategy(strategy), interval, utils.buildGetParams(params) ) response = utils.httpGetRequest(portiaConfig, endpoint, header) if response.status_code == 200: try: dimensionSeries = pandas.read_csv( StringIO(response.text), sep=';' ) if portiaConfig['debug']: print( '[portia-debug]: {0} rows'.format( len(dimensionSeries.index) ) ) return dimensionSeries except: raise Exception('couldn\'t create pandas data frame') else: raise Exception('couldn\'t retrieve data')
Returns a pandas data frame with the portia select resultset
def _process_counter_example(self, mma, w_string): diff = len(w_string) same = 0 membership_answer = self._membership_query(w_string) while True: i = (same + diff) / 2 access_string = self._run_in_hypothesis(mma, w_string, i) if membership_answer != self._membership_query(access_string + w_string[i:]): diff = i else: same = i if diff - same == 1: break exp = w_string[diff:] self.observation_table.em_vector.append(exp) for row in self.observation_table.sm_vector + self.observation_table.smi_vector: self._fill_table_entry(row, exp) return 0
Process a counterexample in the Rivest-Schapire way. Args: mma (DFA): The hypothesis automaton w_string (str): The examined string to be consumed Returns: None
def get_dfa_conjecture(self): dfa = DFA(self.alphabet) for s in self.observation_table.sm_vector: for i in self.alphabet: dst = self.observation_table.equiv_classes[s + i] # If dst == None then the table is not closed. if dst == None: logging.debug('Conjecture attempt on non closed table.') return None obsrv = self.observation_table[s, i] src_id = self.observation_table.sm_vector.index(s) dst_id = self.observation_table.sm_vector.index(dst) dfa.add_arc(src_id, dst_id, i, obsrv) # Mark the final states in the hypothesis automaton. i = 0 for s in self.observation_table.sm_vector: dfa[i].final = self.observation_table[s, self.epsilon] i += 1 return dfa
Utilize the observation table to construct a DFA conjecture (hypothesis automaton). Args: None Returns: DFA: A DFA built from a closed and consistent observation table.
def _init_table(self): self.observation_table.sm_vector.append(self.epsilon) self.observation_table.smi_vector = list(self.alphabet) self.observation_table.em_vector.append(self.epsilon) self._fill_table_entry(self.epsilon, self.epsilon) for s in self.observation_table.smi_vector: self._fill_table_entry(s, self.epsilon)
Initialize the observation table.
def learn_dfa(self, mma=None): logging.info('Initializing learning procedure.') if mma: self._init_table_from_dfa(mma) else: self._init_table() logging.info('Generating a closed and consistent observation table.') while True: closed = False # Make sure that the table is closed while not closed: logging.debug('Checking if table is closed.') closed, string = self.observation_table.is_closed() if not closed: logging.debug('Closing table.') self._ot_make_closed(string) else: logging.debug('Table closed.') # Create conjecture dfa = self.get_dfa_conjecture() logging.info('Generated conjecture machine with %d states.',len(list(dfa.states))) # _check correctness logging.debug('Running equivalence query.') found, counter_example = self._equivalence_query(dfa) # Are we done? if found: logging.info('No counterexample found. Hypothesis is correct!') break # Add the new experiments into the table to reiterate the # learning loop logging.info('Processing counterexample %s with length %d.', counter_example, len(counter_example)) self._process_counter_example(dfa, counter_example) logging.info('Learning complete.') logging.info('Learned em_vector table is the following:') logging.info(self.observation_table.em_vector) return '', dfa
Implements the high-level loop of the algorithm for learning a DFA. Args: mma (DFA): The input automaton Returns: tuple: An empty string and the learned DFA hypothesis.
def print_error_to_io_stream(err: Exception, io: TextIOBase, print_big_traceback : bool = True): if print_big_traceback: traceback.print_tb(err.__traceback__, file=io, limit=-GLOBAL_CONFIG.multiple_errors_tb_limit) else: traceback.print_tb(err.__traceback__, file=io, limit=-1) io.writelines(' ' + str(err.__class__.__name__) + ' : ' + str(err))
Utility method to print an exception's content to a stream :param err: :param io: :param print_big_traceback: :return:
def should_hide_traceback(e): if type(e) in {WrongTypeCreatedError, CascadeError, TypeInformationRequiredError}: return True elif type(e).__name__ in {'InvalidAttributeNameForConstructorError', 'MissingMandatoryAttributeFiles'}: return True else: return False
Returns True if we can hide the error traceback in the warnings messages
def _get_parsing_plan_for_multifile_children(self, obj_on_fs: PersistedObject, desired_type: Type[Any], logger: Logger) -> Dict[str, Any]: raise Exception('This should never happen, since this parser relies on underlying parsers')
Implementation of AnyParser API
def _parse_multifile(self, desired_type: Type[T], obj: PersistedObject, parsing_plan_for_children: Dict[str, AnyParser._RecursiveParsingPlan], logger: Logger, options: Dict[str, Dict[str, Any]]) -> T: raise Exception('This should never happen, since this parser relies on underlying parsers')
Implementation of AnyParser API
def _create_parsing_plan(self, desired_type: Type[T], filesystem_object: PersistedObject, logger: Logger, log_only_last: bool = False) -> ParsingPlan[T]: # build the parsing plan logger.debug('(B) ' + get_parsing_plan_log_str(filesystem_object, desired_type, log_only_last=log_only_last, parser=self)) return CascadingParser.CascadingParsingPlan(desired_type, filesystem_object, self, self._parsers_list, logger=logger)
Creates a parsing plan to parse the given filesystem object into the given desired_type. This overrides the method in AnyParser, in order to provide a 'cascading' parsing plan :param desired_type: :param filesystem_object: :param logger: :param log_only_last: a flag to only log the last part of the file path (default False) :return:
def _parse_singlefile(self, desired_type: Type[T], file_path: str, encoding: str, logger: Logger, options: Dict[str, Dict[str, Any]]) -> T: # first use the base parser to parse something compliant with the conversion chain first = self._base_parser._parse_singlefile(self._converter.from_type, file_path, encoding, logger, options) # then apply the conversion chain return self._converter.convert(desired_type, first, logger, options)
Implementation of AnyParser API
def _get_parsing_plan_for_multifile_children(self, obj_on_fs: PersistedObject, desired_type: Type[Any], logger: Logger) -> Dict[str, Any]: return self._base_parser._get_parsing_plan_for_multifile_children(obj_on_fs, self._converter.from_type, logger)
Implementation of AnyParser API
def _parse_multifile(self, desired_type: Type[T], obj: PersistedObject, parsing_plan_for_children: Dict[str, ParsingPlan], logger: Logger, options: Dict[str, Dict[str, Any]]) -> T: # first use the base parser # first = self._base_parser._parse_multifile(desired_type, obj, parsing_plan_for_children, logger, options) first = self._base_parser._parse_multifile(self._converter.from_type, obj, parsing_plan_for_children, logger, options) # then apply the conversion chain return self._converter.convert(desired_type, first, logger, options)
Implementation of AnyParser API
def are_worth_chaining(base_parser: Parser, to_type: Type[S], converter: Converter[S,T]) -> bool: if isinstance(converter, ConversionChain): for conv in converter._converters_list: if not Parser.are_worth_chaining(base_parser, to_type, conv): return False # all good return True else: return Parser.are_worth_chaining(base_parser, to_type, converter)
Utility method to check if it makes sense to chain this parser configured with the given to_type, with this converter. It is an extension of ConverterChain.are_worth_chaining :param base_parser: :param to_type: :param converter: :return:
def set_mode(self, mode): _LOGGER.debug('State change called from alarm device') if not mode: _LOGGER.info('No mode supplied') elif mode not in CONST.ALL_MODES: _LOGGER.warning('Invalid mode') response_object = self._lupusec.set_mode(CONST.MODE_TRANSLATION[mode]) if response_object['result'] != 1: _LOGGER.warning('Mode setting unsuccessful') self._json_state['mode'] = mode _LOGGER.info('Mode set to: %s', mode) return True
Set Lupusec alarm mode.
def parse_amendements_summary(url, json_response): amendements = [] fields = [convert_camelcase_to_underscore(field) for field in json_response['infoGenerales']['description_schema'].split('|')] for row in json_response['data_table']: values = row.split('|') amd = AmendementSummary(**dict(zip(fields, values))) amd.legislature = re.search(r'www.assemblee-nationale.fr/(\d+)/', amd.url_amend).groups()[0] amendements.append(amd) return AmendementSearchResult(**{ 'url': url, 'total_count': json_response['infoGenerales']['nb_resultats'], 'start': json_response['infoGenerales']['debut'], 'size': json_response['infoGenerales']['nb_docs'], 'results': amendements })
json schema : { infoGenerales: { nb_resultats, debut, nb_docs }, data_table: 'id|numInit|titreDossierLegislatif|urlDossierLegislatif|' 'instance|numAmend|urlAmend|designationArticle|' 'designationAlinea|dateDepot|signataires|sort' } NB : the json response does not contain the dispositif and expose, that's why we call it "amendement's summary"
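A minimal illustration of how the pipe-separated schema and rows map onto fields; the values below are made up for demonstration, and the camel-case helper is a simplified stand-in for the one used in the code above:

import re

def convert_camelcase_to_underscore(name):
    # simplified stand-in for the helper used above
    return re.sub(r'(?<!^)(?=[A-Z])', '_', name).lower()

schema = 'id|numInit|urlAmend'
fields = [convert_camelcase_to_underscore(f) for f in schema.split('|')]
row = '42|7|http://www.assemblee-nationale.fr/14/amendements/example.asp'
print(dict(zip(fields, row.split('|'))))
# {'id': '42', 'num_init': '7', 'url_amend': 'http://www.assemblee-nationale.fr/14/amendements/example.asp'}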
def get(self, **kwargs): params = self.default_params.copy() params.update(kwargs) start = time.time() response = requests.get(self.base_url, params=params) end = time.time() LOGGER.debug( 'fetched amendements with search params: %s in %0.2f s', params, end - start ) return parse_amendements_summary(response.url, response.json())
:param texteRecherche: :param numAmend: :param idArticle: :param idAuteur: :param idDossierLegislatif: :param idExamen: :param idExamens: :param periodeParlementaire: :param dateDebut: :param dateFin: :param rows: :param start: :param sort:
def to_utf8(value): if isinstance(value, unicode): return value.encode('utf-8') assert isinstance(value, str) return value
Returns a string encoded using UTF-8. This function comes from `Tornado`_. :param value: A unicode or string to be encoded. :returns: The encoded string.
def to_unicode(value): if isinstance(value, str): return value.decode('utf-8') assert isinstance(value, unicode) return value
Returns a unicode string from a string, using UTF-8 to decode if needed. This function comes from `Tornado`_. :param value: A unicode or string to be decoded. :returns: The decoded string.
def find_all_commands(management_dir): try: #Find all commands in the directory that are not __init__.py and end in .py. Then, remove the trailing .py return [f[:-3] for f in os.listdir(management_dir) if f.endswith('.py') and not f.startswith("__")] except OSError: #If nothing is found, return empty return []
Find all valid commands in a directory management_dir : directory path return - List of commands
def find_commands_module(app_name): parts = app_name.split('.') parts.append('commands') parts.reverse() part = parts.pop() path = None #Load the module if needed try: f, path, descr = imp.find_module(part, path) except ImportError as e: if os.path.basename(os.getcwd()) != part: raise e else: try: if f: f.close() except UnboundLocalError: log.error("Could not import module {0} at path {1}. Sys.path is {2}".format(part, path, sys.path)) #Go down level by and level and try to load the module at each level while parts: part = parts.pop() f, path, descr = imp.find_module(part, [path] if path else None) if f: f.close() return path
Find the commands module in each app (if it exists) and return the path app_name : The name of an app in the INSTALLED_APPS setting return - path to the app
def get_commands(): commands = {} #Try to load the settings file (settings can be specified on the command line) and get the INSTALLED_APPS try: from percept.conf.base import settings apps = settings.INSTALLED_APPS except KeyError: apps = [] #For each app, try to find the command module (command folder in the app) #Then, try to load all commands in the directory for app_name in apps: try: path = find_commands_module(app_name) commands.update(dict([(name, app_name) for name in find_all_commands(path)])) except ImportError as e: pass return commands
Get all valid commands return - all valid commands in dictionary form
def execute(self): #Initialize the option parser parser = LaxOptionParser( usage="%prog subcommand [options] [args]", option_list=BaseCommand.option_list #This will define what is allowed input to the parser (ie --settings=) ) #Parse the options options, args = parser.parse_args(self.argv) #Handle --settings and --pythonpath properly options = handle_default_options(options) try: #Get the name of the subcommand subcommand = self.argv[1] except IndexError: #If the subcommand name cannot be found, set it to help subcommand = 'help' #If the subcommand is help, print the usage of the parser, and available command names if subcommand == 'help': if len(args) <= 2: parser.print_help() sys.stdout.write(self.help_text + '\n') else: #Otherwise, run the given command self.fetch_command(subcommand).run_from_argv(self.argv)
Run the command with the command line arguments
def help_text(self): help_text = '\n'.join(sorted(get_commands().keys())) help_text = "\nCommands:\n" + help_text return help_text
Formats and returns the help text built from the command list
def missing(self, field, last=True): ''' Numeric fields support specific handling for missing fields in a doc. The missing value can be _last, _first, or a custom value (that will be used for missing docs as the sort value). missing('price') > {"price" : {"missing": "_last" } } missing('price',False) > {"price" : {"missing": "_first"} } ''' if last: self.append({field: {'missing': '_last'}}) else: self.append({field: {'missing': '_first'}}) return self
Numeric fields support specific handling for missing fields in a doc. The missing value can be _last, _first, or a custom value (that will be used for missing docs as the sort value). missing('price') > {"price" : {"missing": "_last" } } missing('price',False) > {"price" : {"missing": "_first"} }
def ensure_table(self, cls): cur = self._conn().cursor() table_name = cls.get_table_name() index_names = cls.index_names() or [] cols = ['id text primary key', 'value text'] for name in index_names: cols.append(name + ' text') cur.execute('create table if not exists %s (%s)' % ( table_name, ','.join(cols) )) for name in index_names: cur.execute('create index if not exists %s on %s(%s)' % ( table_name + '_' + name + '_idx', table_name, name )) self._conn().commit() cur.close()
Ensure table's existence - as per the gludb spec.
def find_by_index(self, cls, index_name, value): cur = self._conn().cursor() query = 'select id,value from %s where %s = ?' % ( cls.get_table_name(), index_name ) found = [] for row in cur.execute(query, (value,)): id, data = row[0], row[1] obj = cls.from_data(data) assert id == obj.id found.append(obj) cur.close() return found
Find all rows matching index query - as per the gludb spec.
def delete(self, obj): del_id = obj.get_id() if not del_id: return cur = self._conn().cursor() tabname = obj.__class__.get_table_name() query = 'delete from %s where id = ?' % tabname cur.execute(query, (del_id,)) self._conn().commit() cur.close()
Required functionality.
def register(self): register_url = self.base_url + "api/0.1.0/register" register_headers = { "apikey": str(self.owner_api_key), "resourceID": str(self.entity_id), "serviceType": "publish,subscribe,historicData" } with self.no_ssl_verification(): r = requests.get(register_url, {}, headers=register_headers) response = r.content.decode("utf-8") if "APIKey" in str(r.content.decode("utf-8")): response = json.loads(response[:-331] + "}") # Temporary fix to a middleware bug, should be removed in future response["Registration"] = "success" else: response = json.loads(response) response["Registration"] = "failure" return response
Registers a new device with the name entity_id. This device has permissions for services like subscribe, publish and access historical data.
def no_ssl_verification(self): try: from functools import partialmethod except ImportError: # Python 2 fallback: https://gist.github.com/carymrobbins/8940382 from functools import partial class partialmethod(partial): def __get__(self, instance, owner): if instance is None: return self return partial(self.func, instance, *(self.args or ()), **(self.keywords or {})) old_request = requests.Session.request requests.Session.request = partialmethod(old_request, verify=False) warnings.filterwarnings('ignore', 'Unverified HTTPS request') yield warnings.resetwarnings() requests.Session.request = old_request
Disables SSL verification because the requests module fails with Let's Encrypt SSL certificates. Will be fixed in a future release.
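A standalone sketch of the same monkey-patching pattern, assuming the real method is decorated with contextlib.contextmanager (the decorator is not visible in the snippet above):

import contextlib
import warnings
import requests

@contextlib.contextmanager
def no_ssl_verification():
    old_request = requests.Session.request

    def unverified_request(self, *args, **kwargs):
        kwargs.setdefault('verify', False)  # skip certificate validation
        return old_request(self, *args, **kwargs)

    requests.Session.request = unverified_request
    warnings.filterwarnings('ignore', 'Unverified HTTPS request')
    try:
        yield
    finally:
        warnings.resetwarnings()
        requests.Session.request = old_request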
def publish(self, data): if self.entity_api_key == "": return {'status': 'failure', 'response': 'No API key found in request'} publish_url = self.base_url + "api/0.1.0/publish" publish_headers = {"apikey": self.entity_api_key} publish_data = { "exchange": "amq.topic", "key": str(self.entity_id), "body": str(data) } with self.no_ssl_verification(): r = requests.post(publish_url, json.dumps(publish_data), headers=publish_headers) response = dict() if "No API key" in str(r.content.decode("utf-8")): response["status"] = "failure" r = json.loads(r.content.decode("utf-8"))['message'] elif 'publish message ok' in str(r.content.decode("utf-8")): response["status"] = "success" r = r.content.decode("utf-8") else: response["status"] = "failure" r = r.content.decode("utf-8") response["response"] = str(r) return response
This function allows an entity to publish data to the middleware. Args: data (string): contents to be published by this entity.
def db(self, entity, query_filters="size=10"): if self.entity_api_key == "": return {'status': 'failure', 'response': 'No API key found in request'} historic_url = self.base_url + "api/0.1.0/historicData?" + query_filters historic_headers = { "apikey": self.entity_api_key, "Content-Type": "application/json" } historic_query_data = json.dumps({ "query": { "match": { "key": entity } } }) with self.no_ssl_verification(): r = requests.get(historic_url, data=historic_query_data, headers=historic_headers) response = dict() if "No API key" in str(r.content.decode("utf-8")): response["status"] = "failure" else: r = r.content.decode("utf-8") response = r return response
This function allows an entity to access the historic data. Args: entity (string): Name of the device to listen to query_filters (string): Elasticsearch response format string, for example "pretty=true&size=10"
def bind(self, devices_to_bind): if self.entity_api_key == "": return {'status': 'failure', 'response': 'No API key found in request'} url = self.base_url + "api/0.1.0/subscribe/bind" headers = {"apikey": self.entity_api_key} data = { "exchange": "amq.topic", "keys": devices_to_bind, "queue": self.entity_id } with self.no_ssl_verification(): r = requests.post(url, json=data, headers=headers) response = dict() if "No API key" in str(r.content.decode("utf-8")): response["status"] = "failure" r = json.loads(r.content.decode("utf-8"))['message'] elif 'bind queue ok' in str(r.content.decode("utf-8")): response["status"] = "success" r = r.content.decode("utf-8") else: response["status"] = "failure" r = r.content.decode("utf-8") response["response"] = str(r) return response
This function allows an entity to list the devices to subscribe for data. This function must be called at least once, before doing a subscribe. Subscribe function will listen to devices that are bound here. Args: devices_to_bind (list): an array of devices to listen to. Example bind(["test100","testDemo"])
def unbind(self, devices_to_unbind): if self.entity_api_key == "": return {'status': 'failure', 'response': 'No API key found in request'} url = self.base_url + "api/0.1.0/subscribe/unbind" headers = {"apikey": self.entity_api_key} data = { "exchange": "amq.topic", "keys": devices_to_unbind, "queue": self.entity_id } with self.no_ssl_verification(): r = requests.delete(url, json=data, headers=headers) print(r) response = dict() if "No API key" in str(r.content.decode("utf-8")): response["status"] = "failure" r = json.loads(r.content.decode("utf-8"))['message'] elif 'unbind' in str(r.content.decode("utf-8")): response["status"] = "success" r = r.content.decode("utf-8") else: response["status"] = "failure" r = r.content.decode("utf-8") response["response"] = str(r) return response
This function allows an entity to unbind devices that are already bound. Args: devices_to_unbind (list): an array of devices that are to be unbound (stop listening). Example: unbind(["test10","testDemo105"])
def subscribe(self, devices_to_bind=[]): if self.entity_api_key == "": return {'status': 'failure', 'response': 'No API key found in request'} self.bind(devices_to_bind) loop = asyncio.new_event_loop() t1 = threading.Thread(target=self.start_subscribe_worker, args=(loop,)) t1.daemon = True t1.start()
This function allows an entity to subscribe for data from the devices specified in the bind operation. It creates a thread with an event loop to manage the tasks created in start_subscribe_worker. Args: devices_to_bind (list): an array of devices to listen to
def start_subscribe_worker(self, loop): url = self.base_url + "api/0.1.0/subscribe" task = loop.create_task(self.asynchronously_get_data(url + "?name={0}".format(self.entity_id))) asyncio.set_event_loop(loop) loop.run_until_complete(task) self.event_loop = loop
Switch to new event loop as a thread and run until complete.
async def asynchronously_get_data(self, url): headers = {"apikey": self.entity_api_key} try: async with aiohttp.ClientSession(connector=aiohttp.TCPConnector(verify_ssl=False)) as session: async with session.get(url, headers=headers, timeout=3000) as response: while True: # loop over for each chunk of data chunk = await response.content.readchunk() if not chunk: break if platform == "linux" or platform == "linux2": # In linux systems, readchunk() returns a tuple chunk = chunk[0] resp = dict() resp["data"] = chunk.decode() current_milli_time = lambda: int(round(time() * 1000)) resp["timestamp"] = str(current_milli_time()) self.subscribe_data = resp except Exception as e: print("\n********* Oops: " + url + " " + str(type(e)) + str(e) + " *********\n") print('\n********* Closing TCP: {} *********\n'.format(url))
Asynchronously get data from the chunked transfer encoding of https://smartcity.rbccps.org/api/0.1.0/subscribe. (Only this function requires Python 3; the rest of the functions can be run in Python 2.) Args: url (string): url to subscribe
def stop_subscribe(self): asyncio.gather(*asyncio.Task.all_tasks()).cancel() self.event_loop.stop() self.event_loop.close()
This function is used to stop the event loop created when subscribe is called. However, it doesn't stop the thread and should be avoided until it's completely developed.
def timeuntil(d, now=None): if not now: if getattr(d, 'tzinfo', None): now = datetime.datetime.now(LocalTimezone(d)) else: now = datetime.datetime.now() return timesince(now, d)
Like timesince, but returns a string measuring the time until the given time.
def updateCache(self, service, url, new_data, new_data_dt): key = self._get_key(service, url) # clear existing data try: value = self.client.get(key) if value: data = pickle.loads(value, encoding="utf8") if "time_stamp" in data: cached_data_dt = parse(data["time_stamp"]) if new_data_dt > cached_data_dt: self.client.delete(key) # may raise MemcachedException logger.info( "IN cache (key: {}), older DELETE".format(key)) else: logger.info( "IN cache (key: {}), newer KEEP".format(key)) return else: logger.info("NOT IN cache (key: {})".format(key)) except MemcachedException as ex: logger.error( "Clear existing data (key: {}) ==> {}".format(key, str(ex))) return # store new value in cache cdata, time_to_store = self._make_cache_data( service, url, new_data, {}, 200, new_data_dt) self.client.set(key, cdata, time=time_to_store) # may raise MemcachedException logger.info( "MemCached SET (key {}) for {:d} seconds".format( key, time_to_store))
:param new_data: a string representation of the data :param new_data_dt: a timezone aware datetime object giving the timestamp of the new_data :raise MemcachedException: if update failed
def delete_all_eggs(self): path_to_delete = os.path.join(self.egg_directory, "lib", "python") if os.path.exists(path_to_delete): shutil.rmtree(path_to_delete)
delete all the eggs in the directory specified
def install_egg(self, egg_name): if not os.path.exists(self.egg_directory): os.makedirs(self.egg_directory) self.requirement_set.add_requirement( InstallRequirement.from_line(egg_name, None)) try: self.requirement_set.prepare_files(self.finder) self.requirement_set.install(['--prefix=' + self.egg_directory], []) except DistributionNotFound: self.requirement_set.requirements._keys.remove(egg_name) raise PipException()
Install an egg into the egg directory
def call(cls, iterable, *a, **kw): return cls(x(*a, **kw) for x in iterable)
Calls every item in *iterable* with the specified arguments.
def map(cls, iterable, func, *a, **kw): return cls(func(x, *a, **kw) for x in iterable)
Iterable-first replacement of Python's built-in `map()` function.
def filter(cls, iterable, cond, *a, **kw): return cls(x for x in iterable if cond(x, *a, **kw))
Iterable-first replacement of Python's built-in `filter()` function.
def unique(cls, iterable, key=None): if key is None: key = lambda x: x def generator(): seen = set() seen_add = seen.add for item in iterable: key_val = key(item) if key_val not in seen: seen_add(key_val) yield item return cls(generator())
Yields unique items from *iterable* whilst preserving the original order.
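The same order-preserving de-duplication pattern, as a standalone generator with a quick check:

def unique(iterable, key=None):
    key = key or (lambda x: x)
    seen = set()
    for item in iterable:
        k = key(item)
        if k not in seen:
            seen.add(k)
            yield item

print(list(unique([3, 1, 3, 2, 1])))                 # [3, 1, 2]
print(list(unique(['a', 'B', 'A'], key=str.lower)))  # ['a', 'B']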
def chunks(cls, iterable, n, fill=None): return cls(itertools.zip_longest(*[iter(iterable)] * n, fillvalue=fill))
Collects elements in fixed-length chunks.
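How the fixed-length chunking behaves, including the fill value padding the last incomplete chunk (standalone sketch using Python 3's itertools.zip_longest):

import itertools

def chunks(iterable, n, fill=None):
    return itertools.zip_longest(*[iter(iterable)] * n, fillvalue=fill)

print(list(chunks(range(7), 3)))           # [(0, 1, 2), (3, 4, 5), (6, None, None)]
print(list(chunks('abcde', 2, fill='-')))  # [('a', 'b'), ('c', 'd'), ('e', '-')]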
def concat(cls, iterables): def generator(): for it in iterables: for element in it: yield element return cls(generator())
Similar to #itertools.chain.from_iterable().
def chain(cls, *iterables): def generator(): for it in iterables: for element in it: yield element return cls(generator())
Similar to #itertools.chain().
def attr(cls, iterable, attr_name): return cls(getattr(x, attr_name) for x in iterable)
Applies #getattr() on all elements of *iterable*.
def of_type(cls, iterable, types): return cls(x for x in iterable if isinstance(x, types))
Filters using #isinstance().
def partition(cls, iterable, pred): t1, t2 = itertools.tee(iterable) return cls(itertools.filterfalse(pred, t1), filter(pred, t2))
Use a predicate to partition items into false and true entries.
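Usage sketch of the tee/filterfalse split above, written as a standalone function:

import itertools

def partition(iterable, pred):
    t1, t2 = itertools.tee(iterable)
    # first element: items where pred is false; second: items where pred is true
    return itertools.filterfalse(pred, t1), filter(pred, t2)

evens, odds = partition(range(6), lambda x: x % 2)
print(list(evens), list(odds))  # [0, 2, 4] [1, 3, 5]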
def count(cls, iterable): iterable = iter(iterable) count = 0 while True: try: next(iterable) except StopIteration: break count += 1 return count
Returns the number of items in an iterable.
def column_names(self, table): table_info = self.execute( u'PRAGMA table_info(%s)' % quote(table)) return (column['name'] for column in table_info)
An iterable of column names, for a particular table or view.
def execute(self, sql, *args, **kwargs): ''' Run raw SQL on the database, and receive relaxing output. This is sort of the foundational method that most of the others build on. ''' try: self.cursor.execute(sql, *args) except self.sqlite3.InterfaceError, msg: raise self.sqlite3.InterfaceError(unicode(msg) + '\nTry converting types or pickling.') rows = self.cursor.fetchall() self.__commit_if_necessary(kwargs) if None == self.cursor.description: return None else: colnames = [d[0].decode('utf-8') for d in self.cursor.description] rawdata = [OrderedDict(zip(colnames,row)) for row in rows] return rawdata
Run raw SQL on the database, and receive relaxing output. This is sort of the foundational method that most of the others build on.
def create_index(self, columns, table_name, if_not_exists = True, unique = False, **kwargs): 'Create a unique index on the column(s) passed.' index_name = simplify(table_name) + u'_' + u'_'.join(map(simplify, columns)) if unique: sql = u'CREATE UNIQUE INDEX %s ON %s (%s)' else: sql = u'CREATE INDEX %s ON %s (%s)' first_param = u'IF NOT EXISTS ' + index_name if if_not_exists else index_name params = (first_param, quote(table_name), ','.join(map(quote, columns))) self.execute(sql % params, **kwargs)
Create a unique index on the column(s) passed.
def get_var(self, key): 'Retrieve one saved variable from the database.' vt = quote(self.__vars_table) data = self.execute(u'SELECT * FROM %s WHERE `key` = ?' % vt, [key], commit = False) if data == []: raise NameError(u'The DumpTruck variables table doesn\'t have a value for %s.' % key) else: tmp = quote(self.__vars_table_tmp) row = data[0] self.execute(u'DROP TABLE IF EXISTS %s' % tmp, commit = False) # This is vulnerable to injection self.execute(u'CREATE TEMPORARY TABLE %s (`value` %s)' % (tmp, row['type']), commit = False) # This is ugly self.execute(u'INSERT INTO %s (`value`) VALUES (?)' % tmp, [row['value']], commit = False) value = self.dump(tmp)[0]['value'] self.execute(u'DROP TABLE %s' % tmp, commit = False) return value
Retrieve one saved variable from the database.
def tablesAndViews(self): result = self.execute( u'SELECT name,type FROM sqlite_master WHERE type in ("table", "view")', commit=False) return ((row['name'],row['type']) for row in result)
Return a sequence of (name,type) pairs where type is either "table" or "view".
def drop(self, table_name = 'dumptruck', if_exists = False, **kwargs): 'Drop a table.' return self.execute(u'DROP TABLE %s %s;' % ('IF EXISTS' if if_exists else '', quote(table_name)), **kwargs)
Drop a table.
def __install_perforce(self, config): if not system.is_64_bit(): self.logger.warn("Perforce formula is only designed for 64 bit systems! Not install executables...") return False version = config.get('version', 'r13.2') key = 'osx' if system.is_osx() else 'linux' perforce_packages = package_dict[version][key] d = self.directory.install_directory(self.feature_name) if not os.path.exists(d): os.makedirs(d) self.logger.info("Downloading p4 executable...") with open(os.path.join(d, "p4"), 'wb+') as fh: fh.write(lib.cleaned_request('get', url_prefix + perforce_packages['p4']).content) self.directory.symlink_to_bin("p4", os.path.join(d, "p4")) self.p4_command = os.path.join(d, "p4") self.logger.info("Installing p4v...") if system.is_osx(): return self._install_p4v_osx(url_prefix + perforce_packages['p4v']) else: return self._install_p4v_linux(url_prefix + perforce_packages['p4v'])
install perforce binary
def _install_p4v_osx(self, url, overwrite=False): package_exists = False root_dir = os.path.expanduser(os.path.join("~", "Applications")) package_exists = len([x for x in P4V_APPLICATIONS if os.path.exists(os.path.join(root_dir, x))]) if not package_exists or overwrite: lib.extract_dmg(url, root_dir) else: self.logger.warn("P4V exists already in %s! Not overwriting..." % root_dir) return True
Install perforce applications and binaries for mac
def _install_p4v_linux(self, url): lib.extract_targz(url, self.directory.install_directory(self.feature_name), remove_common_prefix=True) bin_path = os.path.join(self.directory.install_directory(self.feature_name), 'bin') if os.path.exists(bin_path): for f in os.listdir(bin_path): self.directory.symlink_to_bin(f, os.path.join(bin_path, f)) return True
Install perforce applications and binaries for linux
def __write_p4settings(self, config): self.logger.info("Writing p4settings...") root_dir = os.path.expanduser(config.get('root_path')) p4settings_path = os.path.join(root_dir, ".p4settings") if os.path.exists(p4settings_path): if self.target.get('overwrite_p4settings', False): self.logger.info("Overwriting existing p4settings...") os.remove(p4settings_path) else: return with open(p4settings_path, "w+") as p4settings_file: p4settings_file.write(p4settings_template % config.to_dict()) if config.get('write_password_p4settings', 'no'): p4settings_file.write("\nP4PASSWD=%s" % config['password'])
write perforce settings
def __configure_client(self, config): self.logger.info("Configuring p4 client...") client_dict = config.to_dict() client_dict['root_path'] = os.path.expanduser(config.get('root_path')) os.chdir(client_dict['root_path']) client_dict['hostname'] = system.NODE client_dict['p4view'] = config['p4view'] % self.environment.target.get_context_dict() client = re.sub('//depot', ' //depot', p4client_template % client_dict) self.logger.info(lib.call("%s client -i" % self.p4_command, stdin=client, env=self.p4environ, cwd=client_dict['root_path']))
write the perforce client
def __install_eggs(self, config): egg_carton = (self.directory.install_directory(self.feature_name), 'requirements.txt') eggs = self.__gather_eggs(config) self.logger.debug("Installing eggs %s..." % eggs) self.__load_carton(egg_carton, eggs) self.__prepare_eggs(egg_carton, config)
Install eggs for a particular configuration
def __add_paths(self, config): bin_path = os.path.join(self.directory.install_directory(self.feature_name), 'bin') whitelist_executables = self._get_whitelisted_executables(config) for f in os.listdir(bin_path): for pattern in BLACKLISTED_EXECUTABLES: if re.match(pattern, f): continue if whitelist_executables and f not in whitelist_executables: continue self.directory.symlink_to_bin(f, os.path.join(bin_path, f))
add the proper resources into the environment
def analyse_body_paragraph(body_paragraph, labels=None): # try to find a conforming leading label first: for label, dummy in labels or (): if body_paragraph.startswith('* ' + label): return (label, body_paragraph[len(label) + 3:].replace('\n ', ' ')) # no conforming leading label found; do we have a leading asterisk? if body_paragraph.startswith('* '): return (None, body_paragraph[2:].replace('\n ', ' ')) # no leading asterisk found; ignore this paragraph silently: return (None, None)
Analyse commit body paragraph and return (label, message). >>> analyse_body_paragraph('* BETTER Foo and bar.', ... [('BETTER', 'Improvements')]) ('BETTER', 'Foo and bar.') >>> analyse_body_paragraph('* Foo and bar.') (None, 'Foo and bar.') >>> analyse_body_paragraph('Foo and bar.') (None, None)
def remove_ticket_directives(message): if message: message = re.sub(r'closes #', '#', message) message = re.sub(r'addresses #', '#', message) message = re.sub(r'references #', '#', message) return message
Remove ticket directives like "(closes #123)". >>> remove_ticket_directives('(closes #123)') '(#123)' >>> remove_ticket_directives('(foo #123)') '(foo #123)'
def amended_commits(commits): # which SHA1 are declared as amended later? amended_sha1s = [] for message in commits.values(): amended_sha1s.extend(re.findall(r'AMENDS\s([0-f]+)', message)) return amended_sha1s
Return those git commit sha1s that have been amended later.
def enrich_git_log_dict(messages, labels): for commit_sha1, message in messages.items(): # detect module and ticket numbers for each commit: component = None title = message.split('\n')[0] try: component, title = title.split(":", 1) component = component.strip() except ValueError: pass # noqa paragraphs = [analyse_body_paragraph(p, labels) for p in message.split('\n\n')] yield { 'sha1': commit_sha1, 'component': component, 'title': title.strip(), 'tickets': re.findall(r'\s(#\d+)', message), 'paragraphs': [ (label, remove_ticket_directives(message)) for label, message in paragraphs ], }
Enrich git log with related information on tickets.
def release(obj, commit='HEAD', components=False): options = obj.options repository = obj.repository try: sha = 'oid' commits = _pygit2_commits(commit, repository) except ImportError: try: sha = 'hexsha' commits = _git_commits(commit, repository) except ImportError: click.echo('To use this feature, please install pygit2. ' 'GitPython will also work but is not recommended ' '(python <= 2.7 only).', file=sys.stderr) return 2 messages = OrderedDict([(getattr(c, sha), c.message) for c in commits]) for commit_sha1 in amended_commits(messages): if commit_sha1 in messages: del messages[commit_sha1] full_messages = list( enrich_git_log_dict(messages, options.get('commit_msg_labels')) ) indent = ' ' if components else '' wrapper = textwrap.TextWrapper( width=70, initial_indent=indent + '- ', subsequent_indent=indent + ' ', ) for label, section in options.get('commit_msg_labels'): if section is None: continue bullets = [] for commit in full_messages: bullets += [ {'text': bullet, 'component': commit['component']} for lbl, bullet in commit['paragraphs'] if lbl == label and bullet is not None ] if len(bullets) > 0: click.echo(section) click.echo('~' * len(section)) click.echo() if components: def key(cmt): return cmt['component'] for component, bullets in itertools.groupby( sorted(bullets, key=key), key): bullets = list(bullets) if len(bullets) > 0: click.echo('+ {}'.format(component)) click.echo() for bullet in bullets: click.echo(wrapper.fill(bullet['text'])) click.echo() else: for bullet in bullets: click.echo(wrapper.fill(bullet['text'])) click.echo() return 0
Generate release notes.
def incr(**vars): for k, v in vars.iteritems(): current_context.vars.setdefault(k, 0) current_context[k] += v
Increments context variables
def set_default_var(**vars): for k, v in vars.iteritems(): current_context.vars.setdefault(k, v)
Sets context variables using the key/value provided in the options
def incr_obj(obj, **attrs): for name, value in attrs.iteritems(): v = getattr(obj, name, None) if not hasattr(obj, name) or v is None: v = 0 setattr(obj, name, v + value)
Increments numeric attributes on the given object
def redirect(view=None, url=None, **kwargs): if view: if url: kwargs["url"] = url url = flask.url_for(view, **kwargs) current_context.exit(flask.redirect(url))
Redirects to the specified view or url
def lines(query): filename = support.get_file_name(query) if(os.path.isfile(filename)): with open(filename) as openfile: print len(openfile.readlines()) else: print 'File not found : ' + filename
lines(query) -- print the number of lines in a given file
def words(query): filename = support.get_file_name(query) if(os.path.isfile(filename)): with open(filename) as openfile: print len(openfile.read().split()) else: print 'File not found : ' + filename
words(query) -- print the number of words in a given file
def file_info(query): filename = support.get_file_name(query) if(os.path.isfile(filename)): stat_info = os.stat(filename) owner_name = pwd.getpwuid(stat_info.st_uid).pw_name print 'owner : ' + owner_name file_size = support.get_readable_filesize(stat_info.st_size) print 'size : ' + file_size print 'created : ' + time.ctime(stat_info.st_ctime) print 'last modified : ' + time.ctime(stat_info.st_mtime) else: print 'file not found'
file_info(query) -- print some human readable information of a given file
def make_executable(query): filename = support.get_file_name(query) if(os.path.isfile(filename)): os.system('chmod +x '+filename) else: print 'file not found'
make_executable(query) -- give executable permissions to a given file
def add_to_path(query): new_entry = support.get_path(query) if(new_entry): print 'Adding '+new_entry+' to PATH variable.' print '''1 : confirm 2 : cancel ''' choice = int(raw_input('>> ')) if(choice == 1): home_dir = os.path.expanduser('~') bashrc = open(os.path.join(home_dir, ".bashrc"), "a") bashrc.write('\n\nexport PATH=\"'+new_entry+':$PATH\"\n') bashrc.close() os.system('source '+os.path.join(os.path.expanduser('~'),'.bashrc')) print 'Success!!' print os.system('echo $PATH') else: print 'We were unable to extract the \'path\' from your query.'
add_to_path(query) -- add user given path to environment PATH variable.
def system_info(query): proc = subprocess.Popen(["uname -o"], stdout=subprocess.PIPE, shell=True) (out, err) = proc.communicate() print "operating system : "+str(out), proc = subprocess.Popen(["uname"], stdout=subprocess.PIPE, shell=True) (out, err) = proc.communicate() print "kernel : "+str(out), proc = subprocess.Popen(["uname -r"], stdout=subprocess.PIPE, shell=True) (out, err) = proc.communicate() print "kernel release : "+str(out), proc = subprocess.Popen(["uname -m"], stdout=subprocess.PIPE, shell=True) (out, err) = proc.communicate() print "architecture : "+str(out), proc = subprocess.Popen(["uname -n"], stdout=subprocess.PIPE, shell=True) (out, err) = proc.communicate() print "network node name : "+str(out),
system_info(query) -- print system specific information like OS, kernel, architecture etc.
def statsd_metric(name, count, elapsed): with statsd.pipeline() as pipe: pipe.incr(name, count) pipe.timing(name, int(round(1000 * elapsed)))
Metric that records to statsd & graphite
def setup(template_paths={}, autoescape=False, cache_size=100, auto_reload=True, bytecode_cache=True): global _jinja_env, _jinja_loaders if not _jinja_env: _jinja_env = JinjaEnviroment( autoescape=autoescape, cache_size=cache_size, auto_reload=auto_reload, bytecode_cache=None) # @TODO alter so Marshall is not used # if bytecode_cache and GAE_CACHE: # _jinja_env.bytecode_cache = GAEMemcacheBytecodeCache() if type(template_paths) == type(''): template_paths = {'site': template_paths} if len(template_paths) < 1: logging.exception('Sketch: jinja.setup: no template sets configured') return False if len(template_paths) == 1: template_set_name = template_paths.keys()[0] tp = template_paths[template_set_name] if tp in _jinja_loaders: _jinja_env.loader = _jinja_loaders[tp] else: _jinja_env.loader = _jinja_loaders[tp] = jinja2.FileSystemLoader(tp) return True if len(template_paths) > 1: loaders = {} for dirn, path in template_paths.items(): loaders[dirn] = jinja2.FileSystemLoader(path) _jinja_env.loader = SubdirLoader(loaders) return True logging.error('Sketch: jinja.setup: no template sets configured (fallthrough)') logging.error(_jinja_loaders)
Set up the Jinja environment, e.g. sketch.jinja.setup({ 'app': self.config.paths['app_template_basedir'], 'sketch': self.config.paths['sketch_template_dir'], }) :param template_paths: Dictionary of paths to templates (template_name => template_path) :param autoescape: Autoescape :param cache_size: :param auto_reload: :param bytecode_cache:
def render(template_name, template_vars={}, template_set='site', template_theme=None, template_extension='html', template_content=None): global _jinja_env if not _jinja_env: raise RuntimeError('Jinja env not setup') try: _jinja_env.filters['timesince'] = timesince _jinja_env.filters['timeuntil'] = timeuntil _jinja_env.filters['date'] = date_format _jinja_env.filters['time'] = time_format _jinja_env.filters['shortdate'] = short_date _jinja_env.filters['isodate'] = iso_date _jinja_env.filters['rfcdate'] = rfc2822_date _jinja_env.filters['tformat'] = datetimeformat _jinja_env.filters['timestamp'] = timestamp except NameError as errstr: logging.info('Helper import error: %s' % errstr) _template_name = "%s.%s" % (template_name, template_extension) template = _jinja_env.get_template(_template_name, parent=template_theme) return template.render(template_vars)
Given a template name and template variables, returns rendered content using the jinja2 library. :param template_name: Name of the template :param template_vars: (Optional) Template variables :param template_set: (Optional) Name of the template set :param template_theme: (Optional) Parent theme to render against
def compile_file(env, src_path, dst_path, encoding='utf-8', base_dir=''): src_file = file(src_path, 'r') source = src_file.read().decode(encoding) name = src_path.replace(base_dir, '') raw = env.compile(source, name=name, filename=name, raw=True) src_file.close() dst_file = open(dst_path, 'w') dst_file.write(raw) dst_file.close()
Compiles a Jinja2 template to python code. :param env: a Jinja2 Environment instance. :param src_path: path to the source file. :param dst_path: path to the destination file. :param encoding: template encoding. :param base_dir: the base path to be removed from the compiled template filename.
def compile_dir(env, src_path, dst_path, pattern=r'^.*\.html$', encoding='utf-8', base_dir=None): from os import path, listdir, mkdir file_re = re.compile(pattern) if base_dir is None: base_dir = src_path for filename in listdir(src_path): src_name = path.join(src_path, filename) dst_name = path.join(dst_path, filename) if path.isdir(src_name): mkdir(dst_name) compile_dir(env, src_name, dst_name, encoding=encoding, base_dir=base_dir) elif path.isfile(src_name) and file_re.match(filename): compile_file(env, src_name, dst_name, encoding=encoding, base_dir=base_dir)
Compiles a directory of Jinja2 templates to python code. :param env: a Jinja2 Environment instance. :param src_path: path to the source directory. :param dst_path: path to the destination directory. :param encoding: template encoding. :param base_dir: the base path to be removed from the compiled template filename.
def _pre_dump(cls): shutil.rmtree(cls.outdir, ignore_errors=True) os.makedirs(cls.outdir) super(PlotMetric, cls)._pre_dump()
Output all recorded stats
def _histogram(self, which, mu, sigma, data): weights = np.ones_like(data)/len(data) # make bar heights sum to 100% n, bins, patches = plt.hist(data, bins=25, weights=weights, facecolor='blue', alpha=0.5) plt.title(r'%s %s: $\mu=%.2f$, $\sigma=%.2f$' % (self.name, which.capitalize(), mu, sigma)) plt.xlabel('Items' if which == 'count' else 'Seconds') plt.ylabel('Frequency') plt.gca().yaxis.set_major_formatter(FuncFormatter(lambda y, position: "{:.1f}%".format(y*100)))
plot a histogram. For internal use only
def _scatter(self): plt.scatter(self.count_arr, self.elapsed_arr) plt.title('{}: Count vs. Elapsed'.format(self.name)) plt.xlabel('Items') plt.ylabel('Seconds')
plot a scatter plot of count vs. elapsed. For internal use only
def bind(self, __fun, *args, **kwargs): with self._lock: if self._running or self._completed or self._cancelled: raise RuntimeError('Future object can not be reused') if self._worker: raise RuntimeError('Future object is already bound') self._worker = functools.partial(__fun, *args, **kwargs) return self
Bind a worker function to the future. This worker function will be executed when the future is executed.
def add_done_callback(self, fun): with self._lock: if self._completed: fun() else: self._done_callbacks.append(fun)
Adds the callback *fun* to the future so that it be invoked when the future completed. The future completes either when it has been completed after being started with the :meth:`start` method (independent of whether an error occurs or not) or when either :meth:`set_result` or :meth:`set_exception` is called. If the future is already complete, *fun* will be invoked directly. The function *fun* must accept the future as its sole argument.
def enqueue(self): with self._lock: if self._enqueued: raise RuntimeError('Future object is already enqueued') if self._running: raise RuntimeError('Future object is already running') if self._completed: raise RuntimeError('Future object can not be restarted') if not self._worker: raise RuntimeError('Future object is not bound') self._enqueued = True
Mark the future as being enqueued in some kind of executor for futures. Calling :meth:`start()` with the *as_thread* parameter as :const:`True` will raise a :class:`RuntimeError` after this method has been called. This method will also validate the state of the future.
def start(self, as_thread=True): with self._lock: if as_thread: self.enqueue() # Validate future state if self._cancelled: return self._running = True if as_thread: self._thread = threading.Thread(target=self._run) self._thread.start() return self self._run()
Execute the future in a new thread or in the current thread as specified by the *as_thread* parameter. :param as_thread: Execute the future in a new, separate thread. If this is set to :const:`False`, the future will be executed in the calling thread.
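A brief usage sketch of the Future methods documented above; this assumes the class is importable as Future and that its constructor takes no arguments (both assumptions, since the constructor is not shown here):

# Hypothetical usage; the Future class itself comes from the surrounding module.
f = Future()
f.bind(lambda a, b: a + b, 20, 22)                     # attach the worker and its arguments
f.add_done_callback(lambda *fut: print('completed'))   # runs once the future completes
f.start(as_thread=False)                               # execute the worker in the calling thread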