Please provide a description of the function:def enable_streaming(self):
if not self.connected:
raise HardwareError("Cannot enable streaming if we are not in a connected state")
if self._reports is not None:
_clear_queue(self._reports)
return self._reports
self._reports = queue.Queue()
self._loop.run_coroutine(self.adapter.open_interface(0, 'streaming'))
return self._reports | [
"Open the streaming interface and accumute reports in a queue.\n\n This method is safe to call multiple times in a single device\n connection. There is no way to check if the streaming interface is\n opened or to close it once it is opened (apart from disconnecting from\n the device).\n\n The first time this method is called, it will open the streaming\n interface and return a queue that will be filled asynchronously with\n reports as they are received. Subsequent calls will just empty the\n queue and return the same queue without interacting with the device at\n all.\n\n Returns:\n queue.Queue: A queue that will be filled with reports from the device.\n "
] |
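A minimal usage sketch (the connected stream object `hw` and the timeout value are assumptions, not part of the source):

import queue

reports = hw.enable_streaming()          # returns the shared queue.Queue
try:
    report = reports.get(timeout=10.0)   # block until a report arrives
    print(report)
except queue.Empty:
    pass                                 # no report within the timeout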
Please provide a description of the function:def enable_tracing(self):
if not self.connected:
raise HardwareError("Cannot enable tracing if we are not in a connected state")
if self._traces is not None:
_clear_queue(self._traces)
return self._traces
self._traces = queue.Queue()
self._loop.run_coroutine(self.adapter.open_interface(0, 'tracing'))
return self._traces | [
"Open the tracing interface and accumulate traces in a queue.\n\n This method is safe to call multiple times in a single device\n connection. There is no way to check if the tracing interface is\n opened or to close it once it is opened (apart from disconnecting from\n the device).\n\n The first time this method is called, it will open the tracing\n interface and return a queue that will be filled asynchronously with\n reports as they are received. Subsequent calls will just empty the\n queue and return the same queue without interacting with the device at\n all.\n\n Returns:\n queue.Queue: A queue that will be filled with trace data from the device.\n\n The trace data will be in disjoint bytes objects in the queue\n "
] |
Please provide a description of the function:def enable_broadcasting(self):
if self._broadcast_reports is not None:
_clear_queue(self._broadcast_reports)
return self._broadcast_reports
self._broadcast_reports = queue.Queue()
return self._broadcast_reports | [
"Begin accumulating broadcast reports received from all devices.\n\n This method will allocate a queue to receive broadcast reports that\n will be filled asynchronously as broadcast reports are received.\n\n Returns:\n queue.Queue: A queue that will be filled with braodcast reports.\n "
] |
Please provide a description of the function:def enable_debug(self):
if not self.connected:
raise HardwareError("Cannot enable debug if we are not in a connected state")
self._loop.run_coroutine(self.adapter.open_interface(0, 'debug')) | [
"Open the debug interface on the connected device."
] |
Please provide a description of the function:def debug_command(self, cmd, args=None, progress_callback=None):
if args is None:
args = {}
try:
self._on_progress = progress_callback
return self._loop.run_coroutine(self.adapter.debug(0, cmd, args))
finally:
self._on_progress = None | [
"Send a debug command to the connected device.\n\n This generic method will send a named debug command with the given\n arguments to the connected device. Debug commands are typically used\n for things like forcible reflashing of firmware or other, debug-style,\n operations. Not all transport protocols support debug commands and\n the supported operations vary depeneding on the transport protocol.\n\n Args:\n cmd (str): The name of the debug command to send.\n args (dict): Any arguments required by the given debug command\n progress_callback (callable): A function that will be called periodically to\n report progress. The signature must be callback(done_count, total_count)\n where done_count and total_count will be passed as integers.\n\n Returns:\n object: The return value of the debug command, if there is one.\n "
] |
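A hedged sketch of issuing a debug command with progress reporting; the 'flash' command name and its arguments are hypothetical and depend on the transport protocol in use:

def on_progress(done_count, total_count):
    print("%d/%d complete" % (done_count, total_count))

result = hw.debug_command('flash', args={'firmware': 'image.bin'},
                          progress_callback=on_progress)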
Please provide a description of the function:def close(self):
try:
self._loop.run_coroutine(self.adapter.stop())
finally:
self._save_recording() | [
"Close this adapter stream.\n\n This method may only be called once in the lifetime of an\n AdapterStream and it will shutdown the underlying device adapter,\n disconnect all devices and stop all background activity.\n\n If this stream is configured to save a record of all RPCs, the RPCs\n will be logged to a file at this point.\n "
] |
Please provide a description of the function:def _on_scan(self, info):
device_id = info['uuid']
expiration_time = info.get('validity_period', 60)
infocopy = deepcopy(info)
infocopy['expiration_time'] = monotonic() + expiration_time
with self._scan_lock:
self._scanned_devices[device_id] = infocopy | [
"Callback called when a new device is discovered on this CMDStream\n\n Args:\n info (dict): Information about the scanned device\n "
] |
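Based on the keys the callback reads, a scan event looks roughly like the sketch below (field values are illustrative):

info = {
    'uuid': 0x1234,          # device id, used as the key in _scanned_devices
    'validity_period': 60,   # seconds until this advertisement expires
}
# _on_scan stores a deep copy with an absolute 'expiration_time' field
# computed as monotonic() + validity_period.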
Please provide a description of the function:def _on_disconnect(self):
self._logger.info("Connection to device %s was interrupted", self.connection_string)
self.connection_interrupted = True | [
"Callback when a device is disconnected unexpectedly.\n\n Args:\n adapter_id (int): An ID for the adapter that was connected to the device\n connection_id (int): An ID for the connection that has become disconnected\n "
] |
Please provide a description of the function:def midl_emitter(target, source, env):
base, _ = SCons.Util.splitext(str(target[0]))
tlb = target[0]
incl = base + '.h'
interface = base + '_i.c'
targets = [tlb, incl, interface]
midlcom = env['MIDLCOM']
if midlcom.find('/proxy') != -1:
proxy = base + '_p.c'
targets.append(proxy)
if midlcom.find('/dlldata') != -1:
dlldata = base + '_data.c'
targets.append(dlldata)
return (targets, source) | [
"Produces a list of outputs from the MIDL compiler"
] |
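For example, a target of foo.tlb yields the following outputs (the proxy and dlldata entries appear only when the corresponding flags are present in $MIDLCOM):

# targets = ['foo.tlb', 'foo.h', 'foo_i.c']   # always emitted
# targets += ['foo_p.c']                      # if '/proxy' in $MIDLCOM
# targets += ['foo_data.c']                   # if '/dlldata' in $MIDLCOM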
Please provide a description of the function:def generate(env):
env['MIDL'] = 'MIDL.EXE'
env['MIDLFLAGS'] = SCons.Util.CLVar('/nologo')
env['MIDLCOM'] = '$MIDL $MIDLFLAGS /tlb ${TARGETS[0]} /h ${TARGETS[1]} /iid ${TARGETS[2]} /proxy ${TARGETS[3]} /dlldata ${TARGETS[4]} $SOURCE 2> NUL'
env['BUILDERS']['TypeLibrary'] = midl_builder | [
"Add Builders and construction variables for midl to an Environment."
] |
Please provide a description of the function:def generate(env):
SCons.Tool.bcc32.findIt('tlib', env)
SCons.Tool.createStaticLibBuilder(env)
env['AR'] = 'tlib'
env['ARFLAGS'] = SCons.Util.CLVar('')
env['ARCOM'] = '$AR $TARGET $ARFLAGS /a $SOURCES'
env['LIBPREFIX'] = ''
env['LIBSUFFIX'] = '.lib' | [
"Add Builders and construction variables for ar to an Environment."
] |
Please provide a description of the function:def File(name, dbm_module=None):
global ForDirectory, DB_Name, DB_Module
if name is None:
ForDirectory = DirFile
DB_Module = None
else:
ForDirectory = DB
DB_Name = name
if dbm_module is not None:
DB_Module = dbm_module | [
"\n Arrange for all signatures to be stored in a global .sconsign.db*\n file.\n "
] |
Please provide a description of the function:def set_entry(self, filename, obj):
self.entries[filename] = obj
self.dirty = True | [
"\n Set the entry.\n "
] |
Please provide a description of the function:def write(self, sync=1):
if not self.dirty:
return
self.merge()
temp = os.path.join(self.dir.get_internal_path(), '.scons%d' % os.getpid())
try:
file = open(temp, 'wb')
fname = temp
except IOError:
try:
file = open(self.sconsign, 'wb')
fname = self.sconsign
except IOError:
return
for key, entry in self.entries.items():
entry.convert_to_sconsign()
pickle.dump(self.entries, file, PICKLE_PROTOCOL)
file.close()
if fname != self.sconsign:
try:
mode = os.stat(self.sconsign)[0]
os.chmod(self.sconsign, 0o666)
os.unlink(self.sconsign)
except (IOError, OSError):
# Try to carry on in the face of either OSError
# (things like permission issues) or IOError (disk
# or network issues). If there's a really dangerous
# issue, it should get re-raised by the calls below.
pass
try:
os.rename(fname, self.sconsign)
except OSError:
# An OSError failure to rename may indicate something
# like the directory has no write permission, but
# the .sconsign file itself might still be writable,
# so try writing on top of it directly. An IOError
# here, or in any of the following calls, would get
# raised, indicating something like a potentially
# serious disk or network issue.
open(self.sconsign, 'wb').write(open(fname, 'rb').read())
os.chmod(self.sconsign, mode)
try:
os.unlink(temp)
except (IOError, OSError):
pass | [
"\n Write the .sconsign file to disk.\n\n Try to write to a temporary file first, and rename it if we\n succeed. If we can't write to the temporary file, it's\n probably because the directory isn't writable (and if so,\n how did we build anything in this directory, anyway?), so\n try to write directly to the .sconsign file as a backup.\n If we can't rename, try to copy the temporary contents back\n to the .sconsign file. Either way, always try to remove\n the temporary file at the end.\n "
] |
Please provide a description of the function:def generate(env):
link.generate(env)
env['LINK'] = env.Detect(linkers) or 'cc'
env['SHLINKFLAGS'] = SCons.Util.CLVar('$LINKFLAGS -shared')
# __RPATH is set to $_RPATH in the platform specification if that
# platform supports it.
env['RPATHPREFIX'] = '-rpath '
env['RPATHSUFFIX'] = ''
env['_RPATH'] = '${_concat(RPATHPREFIX, RPATH, RPATHSUFFIX, __env__)}' | [
"Add Builders and construction variables for MIPSPro to an Environment."
] |
Please provide a description of the function:def Dump(title=None):
if title:
print(title)
for counter in sorted(CounterList):
CounterList[counter].display() | [
" Dump the hit/miss count for all the counters\n collected so far.\n "
] |
Please provide a description of the function:def CountMethodCall(fn):
if use_memoizer:
def wrapper(self, *args, **kwargs):
global CounterList
key = self.__class__.__name__+'.'+fn.__name__
if key not in CounterList:
CounterList[key] = CountValue(self.__class__.__name__, fn.__name__)
CounterList[key].count(self, *args, **kwargs)
return fn(self, *args, **kwargs)
wrapper.__name__= fn.__name__
return wrapper
else:
return fn | [
" Decorator for counting memoizer hits/misses while retrieving\n a simple value in a class method. It wraps the given method\n fn and uses a CountValue object to keep track of the\n caching statistics.\n Wrapping gets enabled by calling EnableMemoization().\n "
] |
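A sketch of the caching convention the counter inspects: the wrapped method stores its result in self._memo under its own name, so a hit is simply the presence of that key (compute() is a hypothetical expensive call):

class Cached:
    def __init__(self):
        self._memo = {}

    @CountMethodCall
    def get_value(self):
        try:
            return self._memo['get_value']    # counted as a hit
        except KeyError:
            result = compute()                # counted as a miss
            self._memo['get_value'] = result
            return result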
Please provide a description of the function:def CountDictCall(keyfunc):
def decorator(fn):
if use_memoizer:
def wrapper(self, *args, **kwargs):
global CounterList
key = self.__class__.__name__+'.'+fn.__name__
if key not in CounterList:
CounterList[key] = CountDict(self.__class__.__name__, fn.__name__, keyfunc)
CounterList[key].count(self, *args, **kwargs)
return fn(self, *args, **kwargs)
wrapper.__name__= fn.__name__
return wrapper
else:
return fn
return decorator | [
" Decorator for counting memoizer hits/misses while accessing\n dictionary values with a key-generating function. Like\n CountMethodCall above, it wraps the given method\n fn and uses a CountDict object to keep track of the\n caching statistics. The dict-key function keyfunc has to\n get passed in the decorator call and gets stored in the\n CountDict instance.\n Wrapping gets enabled by calling EnableMemoization().\n "
] |
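For the dictionary variant, the key function receives the same arguments as the wrapped method and produces the per-call memo key; a sketch under those assumptions (compute() is again hypothetical):

def _name_key(self, name):
    return name

class Cached:
    def __init__(self):
        self._memo = {}

    @CountDictCall(_name_key)
    def lookup(self, name):
        memo = self._memo.setdefault('lookup', {})
        if name not in memo:
            memo[name] = compute(name)
        return memo[name]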
Please provide a description of the function:def count(self, *args, **kw):
obj = args[0]
if self.method_name in obj._memo:
self.hit = self.hit + 1
else:
self.miss = self.miss + 1 | [
" Counts whether the memoized value has already been\n set (a hit) or not (a miss).\n "
] |
Please provide a description of the function:def count(self, *args, **kw):
obj = args[0]
try:
memo_dict = obj._memo[self.method_name]
except KeyError:
self.miss = self.miss + 1
else:
key = self.keymaker(*args, **kw)
if key in memo_dict:
self.hit = self.hit + 1
else:
self.miss = self.miss + 1 | [
" Counts whether the computed key value is already present\n in the memoization dictionary (a hit) or not (a miss).\n "
] |
Please provide a description of the function:async def start(self):
await self.server.start()
self.port = self.server.port | [
"Start the supervisor server."
] |
Please provide a description of the function:async def prepare_conn(self, conn):
client_id = str(uuid.uuid4())
monitor = functools.partial(self.send_event, client_id)
self._logger.info("New client connection: %s", client_id)
self.service_manager.add_monitor(monitor)
self.clients[client_id] = dict(connection=conn, monitor=monitor)
return client_id | [
"Setup a new connection from a client."
] |
Please provide a description of the function:async def teardown_conn(self, context):
client_id = context.user_data
self._logger.info("Tearing down client connection: %s", client_id)
if client_id not in self.clients:
self._logger.warning("client_id %s did not exist in teardown_conn", client_id)
else:
del self.clients[client_id] | [
"Teardown a connection from a client."
] |
Please provide a description of the function:async def send_event(self, client_id, service_name, event_name, event_info, directed_client=None):
if directed_client is not None and directed_client != client_id:
return
client_info = self.clients.get(client_id)
if client_info is None:
self._logger.warning("Attempted to send event to invalid client id: %s", client_id)
return
conn = client_info['connection']
event = dict(service=service_name)
if event_info is not None:
event['payload'] = event_info
self._logger.debug("Sending event: %s", event)
await self.server.send_event(conn, event_name, event) | [
"Send an event to a client."
] |
Please provide a description of the function:async def send_rpc(self, msg, _context):
service = msg.get('name')
rpc_id = msg.get('rpc_id')
payload = msg.get('payload')
timeout = msg.get('timeout')
response_id = await self.service_manager.send_rpc_command(service, rpc_id, payload,
timeout)
try:
result = await self.service_manager.rpc_results.get(response_id, timeout=timeout)
except asyncio.TimeoutError:
self._logger.warning("RPC 0x%04X on service %s timed out after %f seconds",
rpc_id, service, timeout)
result = dict(result='timeout', response=b'')
return result | [
"Send an RPC to a service on behalf of a client."
] |
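From the keys read above, a client RPC request message has roughly this shape (values are illustrative):

msg = {
    'name': 'service_1',   # target service name
    'rpc_id': 0x8000,      # RPC identifier
    'payload': b'',        # raw argument payload
    'timeout': 1.0,        # seconds to wait for a response
}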
Please provide a description of the function:async def respond_rpc(self, msg, _context):
rpc_id = msg.get('response_uuid')
result = msg.get('result')
payload = msg.get('response')
self.service_manager.send_rpc_response(rpc_id, result, payload) | [
"Respond to an RPC previously sent to a service."
] |
Please provide a description of the function:async def set_agent(self, msg, context):
service = msg.get('name')
client = context.user_data
self.service_manager.set_agent(service, client) | [
"Mark a client as the RPC agent for a service."
] |
Please provide a description of the function:async def post_heartbeat(self, msg, _context):
name = msg.get('name')
await self.service_manager.send_heartbeat(name) | [
"Update the status of a service."
] |
Please provide a description of the function:async def update_state(self, msg, _context):
name = msg.get('name')
status = msg.get('new_status')
await self.service_manager.update_state(name, status) | [
"Update the status of a service."
] |
Please provide a description of the function:async def service_messages(self, msg, _context):
msgs = self.service_manager.service_messages(msg.get('name'))
return [x.to_dict() for x in msgs] | [
"Get all messages for a service."
] |
Please provide a description of the function:async def service_headline(self, msg, _context):
headline = self.service_manager.service_headline(msg.get('name'))
if headline is not None:
headline = headline.to_dict()
return headline | [
"Get the headline for a service."
] |
Please provide a description of the function:def generate(env):
static_obj, shared_obj = SCons.Tool.createObjBuilders(env)
for suffix in ASSuffixes:
static_obj.add_action(suffix, SCons.Defaults.ASAction)
static_obj.add_emitter(suffix, SCons.Defaults.StaticObjectEmitter)
for suffix in ASPPSuffixes:
static_obj.add_action(suffix, SCons.Defaults.ASPPAction)
static_obj.add_emitter(suffix, SCons.Defaults.StaticObjectEmitter)
env['AS'] = 'nasm'
env['ASFLAGS'] = SCons.Util.CLVar('')
env['ASPPFLAGS'] = '$ASFLAGS'
env['ASCOM'] = '$AS $ASFLAGS -o $TARGET $SOURCES'
env['ASPPCOM'] = '$CC $ASPPFLAGS $CPPFLAGS $_CPPDEFFLAGS $_CPPINCFLAGS -c -o $TARGET $SOURCES' | [
"Add Builders and construction variables for nasm to an Environment."
] |
Please provide a description of the function:def generate(env):
link.generate(env)
env['SHLINKFLAGS'] = SCons.Util.CLVar('$LINKFLAGS -G')
env['RPATHPREFIX'] = '-R'
env['RPATHSUFFIX'] = ''
env['_RPATH'] = '${_concat(RPATHPREFIX, RPATH, RPATHSUFFIX, __env__)}'
# Support for versioned libraries
link._setup_versioned_lib_variables(env, tool = 'sunlink', use_soname = True)
env['LINKCALLBACKS'] = link._versioned_lib_callbacks() | [
"Add Builders and construction variables for Forte to an Environment."
] |
Please provide a description of the function:def _get_short_description(self):
if self.description is None:
return None
lines = self.description.split('\n')
if len(lines) == 1:
return lines[0]
elif len(lines) >= 3 and lines[1] == '':
return lines[0]
return None | [
"Return the first line of a multiline description\n\n Returns:\n string: The short description, otherwise None\n "
] |
Please provide a description of the function:def _get_long_description(self):
if self.description is None:
return None
lines = self.description.split('\n')
if len(lines) == 1:
return None
elif len(lines) >= 3 and lines[1] == '':
return '\n'.join(lines[2:])
return self.description | [
"Return the subsequent lines of a multiline description\n\n Returns:\n string: The long description, otherwise None\n "
] |
Please provide a description of the function:def wrap_lines(self, text, indent_level, indent_size=4):
indent = ' '*indent_size*indent_level
lines = text.split('\n')
wrapped_lines = []
for line in lines:
if line == '':
wrapped_lines.append(line)
else:
wrapped_lines.append(indent + line)
return '\n'.join(wrapped_lines) | [
"Indent a multiline string\n\n Args:\n text (string): The string to indent\n indent_level (int): The number of indent_size spaces to prepend\n to each line\n indent_size (int): The number of spaces to prepend for each indent\n level\n\n Returns:\n string: The indented block of text\n "
] |
Please provide a description of the function:def format_name(self, name, indent_size=4):
name_block = ''
if self.short_desc is None:
name_block += name + '\n'
else:
name_block += name + ': ' + self.short_desc + '\n'
if self.long_desc is not None:
name_block += self.wrap_lines(self.long_desc, 1, indent_size=indent_size)
name_block += '\n'
return name_block | [
"Format the name of this verifier\n\n The name will be formatted as:\n <name>: <short description>\n long description if one is given followed by \\n\n otherwise no long description\n\n Args:\n name (string): A name for this validator\n indent_size (int): The number of spaces to indent the\n description\n Returns:\n string: The formatted name block with a short and or long\n description appended.\n "
] |
Please provide a description of the function:def trim_whitespace(self, text):
lines = text.split('\n')
new_lines = [x.lstrip() for x in lines]
return '\n'.join(new_lines) | [
"Remove leading whitespace from each line of a multiline string\n\n Args:\n text (string): The text to be unindented\n\n Returns:\n string: The unindented block of text\n "
] |
Please provide a description of the function:def FromBinary(cls, record_data, record_count=1):
_cmd, address, _resp_length, payload = cls._parse_rpc_info(record_data)
try:
online, = struct.unpack("<H", payload)
online = bool(online)
except ValueError:
raise ArgumentError("Could not decode payload for set_online record", payload=payload)
return SetGraphOnlineRecord(online, address) | [
"Create an UpdateRecord subclass from binary record data.\n\n This should be called with a binary record blob (NOT including the\n record type header) and it will decode it into a SetGraphOnlineRecord.\n\n Args:\n record_data (bytearray): The raw record data that we wish to parse\n into an UpdateRecord subclass NOT including its 8 byte record header.\n record_count (int): The number of records included in record_data.\n\n Raises:\n ArgumentError: If the record_data is malformed and cannot be parsed.\n\n Returns:\n SetGraphOnlineRecord: The decoded reflash tile record.\n "
] |
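The payload decoded above is a single little-endian uint16 interpreted as a boolean, so a matching payload can be built with struct.pack:

import struct

payload_on = struct.pack("<H", 1)    # online = True
payload_off = struct.pack("<H", 0)   # online = False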
Please provide a description of the function:def __extend_targets_sources(target, source):
if not SCons.Util.is_List(target):
target = [target]
if not source:
source = target[:]
elif not SCons.Util.is_List(source):
source = [source]
if len(target) < len(source):
target.extend(source[len(target):])
return target, source | [
" Prepare the lists of target and source files. "
] |
Please provide a description of the function:def __select_builder(lxml_builder, libxml2_builder, cmdline_builder):
if prefer_xsltproc:
return cmdline_builder
if not has_libxml2:
# At the moment we prefer libxml2 over lxml, the latter can lead
# to conflicts when installed together with libxml2.
if has_lxml:
return lxml_builder
else:
return cmdline_builder
return libxml2_builder | [
" Selects a builder, based on which Python modules are present. "
] |
Please provide a description of the function:def __ensure_suffix(t, suffix):
tpath = str(t)
if not tpath.endswith(suffix):
return tpath+suffix
return t | [
" Ensure that the target t has the given suffix. "
] |
Please provide a description of the function:def __ensure_suffix_stem(t, suffix):
tpath = str(t)
if not tpath.endswith(suffix):
stem = tpath
tpath += suffix
return tpath, stem
else:
stem, ext = os.path.splitext(tpath)
return t, stem | [
" Ensure that the target t has the given suffix, and return the file's stem. "
] |
Please provide a description of the function:def __get_xml_text(root):
txt = ""
for e in root.childNodes:
if (e.nodeType == e.TEXT_NODE):
txt += e.data
return txt | [
" Return the text for the given root node (xml.dom.minidom). "
] |
Please provide a description of the function:def __create_output_dir(base_dir):
root, tail = os.path.split(base_dir)
dir = None
if tail:
if base_dir.endswith('/'):
dir = base_dir
else:
dir = root
else:
if base_dir.endswith('/'):
dir = base_dir
if dir and not os.path.isdir(dir):
os.makedirs(dir) | [
" Ensure that the output directory base_dir exists. "
] |
Please provide a description of the function:def __detect_cl_tool(env, chainkey, cdict, cpriority=None):
if env.get(chainkey,'') == '':
clpath = ''
if cpriority is None:
cpriority = cdict.keys()
for cltool in cpriority:
if __debug_tool_location:
print("DocBook: Looking for %s"%cltool)
clpath = env.WhereIs(cltool)
if clpath:
if __debug_tool_location:
print("DocBook: Found:%s"%cltool)
env[chainkey] = clpath
if not env[chainkey + 'COM']:
env[chainkey + 'COM'] = cdict[cltool]
break | [
"\n Helper function, picks a command line tool from the list\n and initializes its environment variables.\n "
] |
Please provide a description of the function:def _detect(env):
global prefer_xsltproc
if env.get('DOCBOOK_PREFER_XSLTPROC',''):
prefer_xsltproc = True
if ((not has_libxml2 and not has_lxml) or (prefer_xsltproc)):
# Try to find the XSLT processors
__detect_cl_tool(env, 'DOCBOOK_XSLTPROC', xsltproc_com, xsltproc_com_priority)
__detect_cl_tool(env, 'DOCBOOK_XMLLINT', xmllint_com)
__detect_cl_tool(env, 'DOCBOOK_FOP', fop_com, ['fop','xep','jw']) | [
"\n Detect all the command line tools that we might need for creating\n the requested output formats.\n "
] |
Please provide a description of the function:def __xml_scan(node, env, path, arg):
# Does the node exist yet?
if not os.path.isfile(str(node)):
return []
if env.get('DOCBOOK_SCANENT',''):
# Use simple pattern matching for system entities..., no support
# for recursion yet.
contents = node.get_text_contents()
return sentity_re.findall(contents)
xsl_file = os.path.join(scriptpath,'utils','xmldepend.xsl')
if not has_libxml2 or prefer_xsltproc:
if has_lxml and not prefer_xsltproc:
from lxml import etree
xsl_tree = etree.parse(xsl_file)
doc = etree.parse(str(node))
result = doc.xslt(xsl_tree)
depfiles = [x.strip() for x in str(result).splitlines() if x.strip() != "" and not x.startswith("<?xml ")]
return depfiles
else:
# Try to call xsltproc
xsltproc = env.subst("$DOCBOOK_XSLTPROC")
if xsltproc and xsltproc.endswith('xsltproc'):
result = env.backtick(' '.join([xsltproc, xsl_file, str(node)]))
depfiles = [x.strip() for x in str(result).splitlines() if x.strip() != "" and not x.startswith("<?xml ")]
return depfiles
else:
# Use simple pattern matching, there is currently no support
# for xi:includes...
contents = node.get_text_contents()
return include_re.findall(contents)
styledoc = libxml2.parseFile(xsl_file)
style = libxslt.parseStylesheetDoc(styledoc)
doc = libxml2.readFile(str(node), None, libxml2.XML_PARSE_NOENT)
result = style.applyStylesheet(doc, None)
depfiles = []
for x in str(result).splitlines():
if x.strip() != "" and not x.startswith("<?xml "):
depfiles.extend(x.strip().split())
style.freeStylesheet()
doc.freeDoc()
result.freeDoc()
return depfiles | [
" Simple XML file scanner, detecting local images and XIncludes as implicit dependencies. "
] |
Please provide a description of the function:def __build_libxml2(target, source, env):
xsl_style = env.subst('$DOCBOOK_XSL')
styledoc = libxml2.parseFile(xsl_style)
style = libxslt.parseStylesheetDoc(styledoc)
doc = libxml2.readFile(str(source[0]),None,libxml2.XML_PARSE_NOENT)
# Support for additional parameters
parampass = {}
if parampass:
result = style.applyStylesheet(doc, parampass)
else:
result = style.applyStylesheet(doc, None)
style.saveResultToFilename(str(target[0]), result, 0)
style.freeStylesheet()
doc.freeDoc()
result.freeDoc()
return None | [
"\n General XSLT builder (HTML/FO), using the libxml2 module.\n "
] |
Please provide a description of the function:def __build_lxml(target, source, env):
from lxml import etree
xslt_ac = etree.XSLTAccessControl(read_file=True,
write_file=True,
create_dir=True,
read_network=False,
write_network=False)
xsl_style = env.subst('$DOCBOOK_XSL')
xsl_tree = etree.parse(xsl_style)
transform = etree.XSLT(xsl_tree, access_control=xslt_ac)
doc = etree.parse(str(source[0]))
# Support for additional parameters
parampass = {}
if parampass:
result = transform(doc, **parampass)
else:
result = transform(doc)
try:
of = open(str(target[0]), "wb")
of.write(etree.tostring(result, pretty_print=True))
of.close()
except:
pass
return None | [
"\n General XSLT builder (HTML/FO), using the lxml module.\n "
] |
Please provide a description of the function:def __xinclude_libxml2(target, source, env):
doc = libxml2.readFile(str(source[0]), None, libxml2.XML_PARSE_NOENT)
doc.xincludeProcessFlags(libxml2.XML_PARSE_NOENT)
doc.saveFile(str(target[0]))
doc.freeDoc()
return None | [
"\n Resolving XIncludes, using the libxml2 module.\n "
] |
Please provide a description of the function:def __xinclude_lxml(target, source, env):
from lxml import etree
doc = etree.parse(str(source[0]))
doc.xinclude()
try:
doc.write(str(target[0]), xml_declaration=True,
encoding="UTF-8", pretty_print=True)
except:
pass
return None | [
"\n Resolving XIncludes, using the lxml module.\n "
] |
Please provide a description of the function:def DocbookEpub(env, target, source=None, *args, **kw):
import zipfile
import shutil
def build_open_container(target, source, env):
zf = zipfile.ZipFile(str(target[0]), 'w')
mime_file = open('mimetype', 'w')
mime_file.write('application/epub+zip')
mime_file.close()
zf.write(mime_file.name, compress_type = zipfile.ZIP_STORED)
for s in source:
if os.path.isfile(str(s)):
head, tail = os.path.split(str(s))
if not head:
continue
s = head
for dirpath, dirnames, filenames in os.walk(str(s)):
for fname in filenames:
path = os.path.join(dirpath, fname)
if os.path.isfile(path):
zf.write(path, os.path.relpath(path, str(env.get('ZIPROOT', ''))),
zipfile.ZIP_DEFLATED)
zf.close()
def add_resources(target, source, env):
hrefs = []
content_file = os.path.join(source[0].get_abspath(), 'content.opf')
if not os.path.isfile(content_file):
return
if has_libxml2:
nsmap = {'opf' : 'http://www.idpf.org/2007/opf'}
# Read file and resolve entities
doc = libxml2.readFile(content_file, None, 0)
opf = doc.getRootElement()
# Create xpath context
xpath_context = doc.xpathNewContext()
# Register namespaces
for key, val in nsmap.items():
xpath_context.xpathRegisterNs(key, val)
if hasattr(opf, 'xpathEval') and xpath_context:
# Use the xpath context
xpath_context.setContextNode(opf)
items = xpath_context.xpathEval(".//opf:item")
else:
items = opf.findall(".//{'http://www.idpf.org/2007/opf'}item")
for item in items:
if hasattr(item, 'prop'):
hrefs.append(item.prop('href'))
else:
hrefs.append(item.attrib['href'])
doc.freeDoc()
xpath_context.xpathFreeContext()
elif has_lxml:
from lxml import etree
opf = etree.parse(content_file)
# All the opf:item elements are resources
for item in opf.xpath('//opf:item',
namespaces= { 'opf': 'http://www.idpf.org/2007/opf' }):
hrefs.append(item.attrib['href'])
for href in hrefs:
# If the resource was not already created by DocBook XSL itself,
# copy it into the OEBPS folder
referenced_file = os.path.join(source[0].get_abspath(), href)
if not os.path.exists(referenced_file):
shutil.copy(href, os.path.join(source[0].get_abspath(), href))
# Init list of targets/sources
target, source = __extend_targets_sources(target, source)
# Init XSL stylesheet
__init_xsl_stylesheet(kw, env, '$DOCBOOK_DEFAULT_XSL_EPUB', ['epub','docbook.xsl'])
# Setup builder
__builder = __select_builder(__lxml_builder, __libxml2_builder, __xsltproc_builder)
# Create targets
result = []
if not env.GetOption('clean'):
# Ensure that the folders OEBPS and META-INF exist
__create_output_dir('OEBPS/')
__create_output_dir('META-INF/')
dirs = env.Dir(['OEBPS', 'META-INF'])
# Set the fixed base_dir
kw['base_dir'] = 'OEBPS/'
tocncx = __builder.__call__(env, 'toc.ncx', source[0], **kw)
cxml = env.File('META-INF/container.xml')
env.SideEffect(cxml, tocncx)
env.Depends(tocncx, kw['DOCBOOK_XSL'])
result.extend(tocncx+[cxml])
container = env.Command(__ensure_suffix(str(target[0]), '.epub'),
tocncx+[cxml], [add_resources, build_open_container])
mimetype = env.File('mimetype')
env.SideEffect(mimetype, container)
result.extend(container)
# Add supporting files for cleanup
env.Clean(tocncx, dirs)
return result | [
"\n A pseudo-Builder, providing a Docbook toolchain for ePub output.\n ",
"Generate the *.epub file from intermediate outputs\n\n Constructs the epub file according to the Open Container Format. This \n function could be replaced by a call to the SCons Zip builder if support\n was added for different compression formats for separate source nodes.\n ",
"Add missing resources to the OEBPS directory\n\n Ensure all the resources in the manifest are present in the OEBPS directory.\n "
] |
Please provide a description of the function:def DocbookHtml(env, target, source=None, *args, **kw):
# Init list of targets/sources
target, source = __extend_targets_sources(target, source)
# Init XSL stylesheet
__init_xsl_stylesheet(kw, env, '$DOCBOOK_DEFAULT_XSL_HTML', ['html','docbook.xsl'])
# Setup builder
__builder = __select_builder(__lxml_builder, __libxml2_builder, __xsltproc_builder)
# Create targets
result = []
for t,s in zip(target,source):
r = __builder.__call__(env, __ensure_suffix(t,'.html'), s, **kw)
env.Depends(r, kw['DOCBOOK_XSL'])
result.extend(r)
return result | [
"\n A pseudo-Builder, providing a Docbook toolchain for HTML output.\n "
] |
Please provide a description of the function:def DocbookMan(env, target, source=None, *args, **kw):
# Init list of targets/sources
target, source = __extend_targets_sources(target, source)
# Init XSL stylesheet
__init_xsl_stylesheet(kw, env, '$DOCBOOK_DEFAULT_XSL_MAN', ['manpages','docbook.xsl'])
# Setup builder
__builder = __select_builder(__lxml_builder, __libxml2_builder, __xsltproc_builder)
# Create targets
result = []
for t,s in zip(target,source):
volnum = "1"
outfiles = []
srcfile = __ensure_suffix(str(s),'.xml')
if os.path.isfile(srcfile):
try:
import xml.dom.minidom
dom = xml.dom.minidom.parse(__ensure_suffix(str(s),'.xml'))
# Extract volume number, default is 1
for node in dom.getElementsByTagName('refmeta'):
for vol in node.getElementsByTagName('manvolnum'):
volnum = __get_xml_text(vol)
# Extract output filenames
for node in dom.getElementsByTagName('refnamediv'):
for ref in node.getElementsByTagName('refname'):
outfiles.append(__get_xml_text(ref)+'.'+volnum)
except:
# Use simple regex parsing
f = open(__ensure_suffix(str(s),'.xml'), 'r')
content = f.read()
f.close()
for m in re_manvolnum.finditer(content):
volnum = m.group(1)
for m in re_refname.finditer(content):
outfiles.append(m.group(1)+'.'+volnum)
if not outfiles:
# Use stem of the source file
spath = str(s)
if not spath.endswith('.xml'):
outfiles.append(spath+'.'+volnum)
else:
stem, ext = os.path.splitext(spath)
outfiles.append(stem+'.'+volnum)
else:
# We have to completely rely on the given target name
outfiles.append(t)
__builder.__call__(env, outfiles[0], s, **kw)
env.Depends(outfiles[0], kw['DOCBOOK_XSL'])
result.append(outfiles[0])
if len(outfiles) > 1:
env.Clean(outfiles[0], outfiles[1:])
return result | [
"\n A pseudo-Builder, providing a Docbook toolchain for Man page output.\n "
] |
Please provide a description of the function:def DocbookSlidesPdf(env, target, source=None, *args, **kw):
# Init list of targets/sources
target, source = __extend_targets_sources(target, source)
# Init XSL stylesheet
__init_xsl_stylesheet(kw, env, '$DOCBOOK_DEFAULT_XSL_SLIDESPDF', ['slides','fo','plain.xsl'])
# Setup builder
__builder = __select_builder(__lxml_builder, __libxml2_builder, __xsltproc_builder)
# Create targets
result = []
for t,s in zip(target,source):
t, stem = __ensure_suffix_stem(t, '.pdf')
xsl = __builder.__call__(env, stem+'.fo', s, **kw)
env.Depends(xsl, kw['DOCBOOK_XSL'])
result.extend(xsl)
result.extend(__fop_builder.__call__(env, t, xsl, **kw))
return result | [
"\n A pseudo-Builder, providing a Docbook toolchain for PDF slides output.\n "
] |
Please provide a description of the function:def DocbookSlidesHtml(env, target, source=None, *args, **kw):
# Init list of targets/sources
if not SCons.Util.is_List(target):
target = [target]
if not source:
source = target
target = ['index.html']
elif not SCons.Util.is_List(source):
source = [source]
# Init XSL stylesheet
__init_xsl_stylesheet(kw, env, '$DOCBOOK_DEFAULT_XSL_SLIDESHTML', ['slides','html','plain.xsl'])
# Setup builder
__builder = __select_builder(__lxml_builder, __libxml2_builder, __xsltproc_builder)
# Detect base dir
base_dir = kw.get('base_dir', '')
if base_dir:
__create_output_dir(base_dir)
# Create targets
result = []
r = __builder.__call__(env, __ensure_suffix(str(target[0]), '.html'), source[0], **kw)
env.Depends(r, kw['DOCBOOK_XSL'])
result.extend(r)
# Add supporting files for cleanup
env.Clean(r, [os.path.join(base_dir, 'toc.html')] +
glob.glob(os.path.join(base_dir, 'foil*.html')))
return result | [
"\n A pseudo-Builder, providing a Docbook toolchain for HTML slides output.\n "
] |
Please provide a description of the function:def DocbookXInclude(env, target, source, *args, **kw):
# Init list of targets/sources
target, source = __extend_targets_sources(target, source)
# Setup builder
__builder = __select_builder(__xinclude_lxml_builder,__xinclude_libxml2_builder,__xmllint_builder)
# Create targets
result = []
for t,s in zip(target,source):
result.extend(__builder.__call__(env, t, s, **kw))
return result | [
"\n A pseudo-Builder, for resolving XIncludes in a separate processing step.\n "
] |
Please provide a description of the function:def DocbookXslt(env, target, source=None, *args, **kw):
# Init list of targets/sources
target, source = __extend_targets_sources(target, source)
# Init XSL stylesheet
kw['DOCBOOK_XSL'] = kw.get('xsl', 'transform.xsl')
# Setup builder
__builder = __select_builder(__lxml_builder, __libxml2_builder, __xsltproc_builder)
# Create targets
result = []
for t,s in zip(target,source):
r = __builder.__call__(env, t, s, **kw)
env.Depends(r, kw['DOCBOOK_XSL'])
result.extend(r)
return result | [
"\n A pseudo-Builder, applying a simple XSL transformation to the input file.\n "
] |
Please provide a description of the function:def generate(env):
env.SetDefault(
# Default names for customized XSL stylesheets
DOCBOOK_DEFAULT_XSL_EPUB = '',
DOCBOOK_DEFAULT_XSL_HTML = '',
DOCBOOK_DEFAULT_XSL_HTMLCHUNKED = '',
DOCBOOK_DEFAULT_XSL_HTMLHELP = '',
DOCBOOK_DEFAULT_XSL_PDF = '',
DOCBOOK_DEFAULT_XSL_MAN = '',
DOCBOOK_DEFAULT_XSL_SLIDESPDF = '',
DOCBOOK_DEFAULT_XSL_SLIDESHTML = '',
# Paths to the detected executables
DOCBOOK_XSLTPROC = '',
DOCBOOK_XMLLINT = '',
DOCBOOK_FOP = '',
# Additional flags for the text processors
DOCBOOK_XSLTPROCFLAGS = SCons.Util.CLVar(''),
DOCBOOK_XMLLINTFLAGS = SCons.Util.CLVar(''),
DOCBOOK_FOPFLAGS = SCons.Util.CLVar(''),
DOCBOOK_XSLTPROCPARAMS = SCons.Util.CLVar(''),
# Default command lines for the detected executables
DOCBOOK_XSLTPROCCOM = xsltproc_com['xsltproc'],
DOCBOOK_XMLLINTCOM = xmllint_com['xmllint'],
DOCBOOK_FOPCOM = fop_com['fop'],
# Screen output for the text processors
DOCBOOK_XSLTPROCCOMSTR = None,
DOCBOOK_XMLLINTCOMSTR = None,
DOCBOOK_FOPCOMSTR = None,
)
_detect(env)
env.AddMethod(DocbookEpub, "DocbookEpub")
env.AddMethod(DocbookHtml, "DocbookHtml")
env.AddMethod(DocbookHtmlChunked, "DocbookHtmlChunked")
env.AddMethod(DocbookHtmlhelp, "DocbookHtmlhelp")
env.AddMethod(DocbookPdf, "DocbookPdf")
env.AddMethod(DocbookMan, "DocbookMan")
env.AddMethod(DocbookSlidesPdf, "DocbookSlidesPdf")
env.AddMethod(DocbookSlidesHtml, "DocbookSlidesHtml")
env.AddMethod(DocbookXInclude, "DocbookXInclude")
env.AddMethod(DocbookXslt, "DocbookXslt") | [
"Add Builders and construction variables for docbook to an Environment."
] |
Please provide a description of the function:def execute(self, sensor_graph, scope_stack):
parent = scope_stack[-1]
alloc = parent.allocator
trigger_stream, trigger_cond = parent.trigger_chain()
op = 'copy_latest_a'
if self.all:
op = 'copy_all_a'
elif self.average:
op = 'average_a'
elif self.count:
op = 'copy_count_a'
if self.explicit_input:
# If root node is an input, create an intermediate node with an unbuffered node
if self.explicit_input.input:
unbuffered_stream = alloc.allocate_stream(DataStream.UnbufferedType, attach=True)
sensor_graph.add_node(u"({} always) => {} using {}".format(self.explicit_input, unbuffered_stream, 'copy_latest_a'))
sensor_graph.add_node(u"({} always && {} {}) => {} using {}".format(unbuffered_stream, trigger_stream, trigger_cond, self.output, op))
else:
sensor_graph.add_node(u"({} always && {} {}) => {} using {}".format(self.explicit_input, trigger_stream, trigger_cond, self.output, op))
elif self.constant_input is not None:
const_stream = alloc.allocate_stream(DataStream.ConstantType, attach=True)
sensor_graph.add_node(u"({} always && {} {}) => {} using {}".format(const_stream, trigger_stream, trigger_cond, self.output, op))
sensor_graph.add_constant(const_stream, self.constant_input)
else:
sensor_graph.add_node(u"({} {}) => {} using {}".format(trigger_stream, trigger_cond, self.output, op)) | [
"Execute this statement on the sensor_graph given the current scope tree.\n\n This adds a single node to the sensor graph with either the\n copy_latest_a, copy_all_a or average_a function as is processing function.\n\n If there is an explicit stream passed, that is used as input a with the\n current scope's trigger as input b, otherwise the current scope's trigger\n is used as input a.\n\n Args:\n sensor_graph (SensorGraph): The sensor graph that we are building or\n modifying\n scope_stack (list(Scope)): A stack of nested scopes that may influence\n how this statement allocates clocks or other stream resources.\n "
] |
Please provide a description of the function:def verify(self, obj):
if self.encoding == 'none' and not isinstance(obj, (bytes, bytearray)):
raise ValidationError('Byte object was not either bytes or a bytearray', type=obj.__class__.__name__)
elif self.encoding == 'base64':
try:
data = base64.b64decode(obj)
return data
except TypeError:
raise ValidationError("Could not decode base64 encoded bytes", obj=obj)
elif self.encoding == 'hex':
try:
data = binascii.unhexlify(obj)
return data
except TypeError:
raise ValidationError("Could not decode hex encoded bytes", obj=obj)
return obj | [
"Verify that the object conforms to this verifier's schema\n\n Args:\n obj (object): A python object to verify\n\n Returns:\n bytes or byterray: The decoded byte buffer\n\n Raises:\n ValidationError: If there is a problem verifying the object, a\n ValidationError is thrown with at least the reason key set indicating\n the reason for the lack of validation.\n "
] |
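The two non-trivial decoding branches use the standard library directly; for reference:

import base64
import binascii

base64.b64decode("aGVsbG8=")         # -> b'hello'
binascii.unhexlify("68656c6c6f")     # -> b'hello'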
Please provide a description of the function:def format(self, indent_level, indent_size=4):
name = self.format_name('Bytes', indent_size)
return self.wrap_lines(name, indent_level, indent_size) | [
"Format this verifier\n\n Returns:\n string: A formatted string\n "
] |
Please provide a description of the function:def save(self):
try:
with open(self.path, "w") as f:
f.writelines(self.contents)
except IOError as e:
raise InternalError("Could not write RCFile contents", name=self.name, path=self.path, error_message=str(e)) | [
"Update the configuration file on disk with the current contents of self.contents.\n Previous contents are overwritten.\n "
] |
Please provide a description of the function:async def probe_message(self, _message, context):
client_id = context.user_data
await self.probe(client_id) | [
"Handle a probe message.\n\n See :meth:`AbstractDeviceAdapter.probe`.\n "
] |
Please provide a description of the function:async def connect_message(self, message, context):
conn_string = message.get('connection_string')
client_id = context.user_data
await self.connect(client_id, conn_string) | [
"Handle a connect message.\n\n See :meth:`AbstractDeviceAdapter.connect`.\n "
] |
Please provide a description of the function:async def disconnect_message(self, message, context):
conn_string = message.get('connection_string')
client_id = context.user_data
await self.disconnect(client_id, conn_string) | [
"Handle a disconnect message.\n\n See :meth:`AbstractDeviceAdapter.disconnect`.\n "
] |
Please provide a description of the function:async def open_interface_message(self, message, context):
conn_string = message.get('connection_string')
interface = message.get('interface')
client_id = context.user_data
await self.open_interface(client_id, conn_string, interface) | [
"Handle an open_interface message.\n\n See :meth:`AbstractDeviceAdapter.open_interface`.\n "
] |
Please provide a description of the function:async def close_interface_message(self, message, context):
conn_string = message.get('connection_string')
interface = message.get('interface')
client_id = context.user_data
await self.close_interface(client_id, conn_string, interface) | [
"Handle a close_interface message.\n\n See :meth:`AbstractDeviceAdapter.close_interface`.\n "
] |
Please provide a description of the function:async def send_rpc_message(self, message, context):
conn_string = message.get('connection_string')
rpc_id = message.get('rpc_id')
address = message.get('address')
timeout = message.get('timeout')
payload = message.get('payload')
client_id = context.user_data
self._logger.debug("Calling RPC %d:0x%04X with payload %s on %s",
address, rpc_id, payload, conn_string)
response = bytes()
err = None
try:
response = await self.send_rpc(client_id, conn_string, address, rpc_id, payload, timeout=timeout)
except VALID_RPC_EXCEPTIONS as internal_err:
err = internal_err
except (DeviceAdapterError, DeviceServerError):
raise
except Exception as internal_err:
self._logger.warning("Unexpected exception calling RPC %d:0x%04x", address, rpc_id, exc_info=True)
raise ServerCommandError('send_rpc', str(internal_err)) from internal_err
status, response = pack_rpc_response(response, err)
return {
'status': status,
'payload': base64.b64encode(response)
} | [
"Handle a send_rpc message.\n\n See :meth:`AbstractDeviceAdapter.send_rpc`.\n "
] |
Please provide a description of the function:async def send_script_message(self, message, context):
script = message.get('script')
conn_string = message.get('connection_string')
client_id = context.user_data
if message.get('fragment_count') != 1:
raise DeviceServerError(client_id, conn_string, 'send_script', 'fragmented scripts are not yet supported')
await self.send_script(client_id, conn_string, script) | [
"Handle a send_script message.\n\n See :meth:`AbstractDeviceAdapter.send_script`.\n "
] |
Please provide a description of the function:async def debug_command_message(self, message, context):
conn_string = message.get('connection_string')
command = message.get('command')
args = message.get('args')
client_id = context.user_data
result = await self.debug(client_id, conn_string, command, args)
return result | [
"Handle a debug message.\n\n See :meth:`AbstractDeviceAdapter.debug`.\n "
] |
Please provide a description of the function:async def client_event_handler(self, client_id, event_tuple, user_data):
#TODO: Support sending disconnection events
conn_string, event_name, event = event_tuple
if event_name == 'report':
report = event.serialize()
report['encoded_report'] = base64.b64encode(report['encoded_report'])
msg_payload = dict(connection_string=conn_string, serialized_report=report)
msg_name = OPERATIONS.NOTIFY_REPORT
elif event_name == 'trace':
encoded_payload = base64.b64encode(event)
msg_payload = dict(connection_string=conn_string, payload=encoded_payload)
msg_name = OPERATIONS.NOTIFY_TRACE
elif event_name == 'progress':
msg_payload = dict(connection_string=conn_string, operation=event.get('operation'),
done_count=event.get('finished'), total_count=event.get('total'))
msg_name = OPERATIONS.NOTIFY_PROGRESS
elif event_name == 'device_seen':
msg_payload = event
msg_name = OPERATIONS.NOTIFY_DEVICE_FOUND
elif event_name == 'broadcast':
report = event.serialize()
report['encoded_report'] = base64.b64encode(report['encoded_report'])
msg_payload = dict(connection_string=conn_string, serialized_report=report)
msg_name = OPERATIONS.NOTIFY_BROADCAST
else:
self._logger.debug("Not forwarding unknown event over websockets: %s", event_tuple)
return
try:
self._logger.debug("Sending event %s: %s", msg_name, msg_payload)
await self.server.send_event(user_data, msg_name, msg_payload)
except websockets.exceptions.ConnectionClosed:
self._logger.debug("Could not send notification because connection was closed for client %s", client_id) | [
"Forward an event on behalf of a client.\n\n This method is called by StandardDeviceServer when it has an event that\n should be sent to a client.\n\n Args:\n client_id (str): The client that we should send this event to\n event_tuple (tuple): The conn_string, event_name and event\n object passed from the call to notify_event.\n user_data (object): The user data passed in the call to\n :meth:`setup_client`.\n "
] |
Please provide a description of the function:def generate(env):
add_all_to_env(env)
fcomp = env.Detect(compilers) or 'f90'
env['FORTRAN'] = fcomp
env['F90'] = fcomp
env['SHFORTRAN'] = '$FORTRAN'
env['SHF90'] = '$F90'
env['SHFORTRANFLAGS'] = SCons.Util.CLVar('$FORTRANFLAGS -KPIC')
env['SHF90FLAGS'] = SCons.Util.CLVar('$F90FLAGS -KPIC') | [
"Add Builders and construction variables for sun f90 compiler to an\n Environment."
] |
Please provide a description of the function:def Builder(**kw):
composite = None
if 'generator' in kw:
if 'action' in kw:
raise UserError("You must not specify both an action and a generator.")
kw['action'] = SCons.Action.CommandGeneratorAction(kw['generator'], {})
del kw['generator']
elif 'action' in kw:
source_ext_match = kw.get('source_ext_match', 1)
if 'source_ext_match' in kw:
del kw['source_ext_match']
if SCons.Util.is_Dict(kw['action']):
composite = DictCmdGenerator(kw['action'], source_ext_match)
kw['action'] = SCons.Action.CommandGeneratorAction(composite, {})
kw['src_suffix'] = composite.src_suffixes()
else:
kw['action'] = SCons.Action.Action(kw['action'])
if 'emitter' in kw:
emitter = kw['emitter']
if SCons.Util.is_String(emitter):
# This allows users to pass in an Environment
# variable reference (like "$FOO") as an emitter.
# We will look in that Environment variable for
# a callable to use as the actual emitter.
var = SCons.Util.get_environment_var(emitter)
if not var:
raise UserError("Supplied emitter '%s' does not appear to refer to an Environment variable" % emitter)
kw['emitter'] = EmitterProxy(var)
elif SCons.Util.is_Dict(emitter):
kw['emitter'] = DictEmitter(emitter)
elif SCons.Util.is_List(emitter):
kw['emitter'] = ListEmitter(emitter)
result = BuilderBase(**kw)
if not composite is None:
result = CompositeBuilder(result, composite)
return result | [
"A factory for builder objects."
] |
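A typical use of the factory from an SConstruct, registering a simple command builder (the copy command itself is illustrative):

from SCons.Script import Builder, Environment

env = Environment()
copy = Builder(action='cp $SOURCE $TARGET',
               suffix='.out', src_suffix='.in')
env.Append(BUILDERS={'Copy': copy})
# Now buildable as: env.Copy('file.out', 'file.in')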
Please provide a description of the function:def _node_errors(builder, env, tlist, slist):
# First, figure out if there are any errors in the way the targets
# were specified.
for t in tlist:
if t.side_effect:
raise UserError("Multiple ways to build the same target were specified for: %s" % t)
if t.has_explicit_builder():
# Check for errors when the environments are different
# No error if environments are the same Environment instance
if (not t.env is None and not t.env is env and
# Check OverrideEnvironment case - no error if wrapped Environments
# are the same instance, and overrides lists match
not (getattr(t.env, '__subject', 0) is getattr(env, '__subject', 1) and
getattr(t.env, 'overrides', 0) == getattr(env, 'overrides', 1) and
not builder.multi)):
action = t.builder.action
t_contents = t.builder.action.get_contents(tlist, slist, t.env)
contents = builder.action.get_contents(tlist, slist, env)
if t_contents == contents:
msg = "Two different environments were specified for target %s,\n\tbut they appear to have the same action: %s" % (t, action.genstring(tlist, slist, t.env))
SCons.Warnings.warn(SCons.Warnings.DuplicateEnvironmentWarning, msg)
else:
try:
msg = "Two environments with different actions were specified for the same target: %s\n(action 1: %s)\n(action 2: %s)" % (t,t_contents.decode('utf-8'),contents.decode('utf-8'))
except UnicodeDecodeError as e:
msg = "Two environments with different actions were specified for the same target: %s"%t
raise UserError(msg)
if builder.multi:
if t.builder != builder:
msg = "Two different builders (%s and %s) were specified for the same target: %s" % (t.builder.get_name(env), builder.get_name(env), t)
raise UserError(msg)
# TODO(batch): list constructed each time!
if t.get_executor().get_all_targets() != tlist:
msg = "Two different target lists have a target in common: %s (from %s and from %s)" % (t, list(map(str, t.get_executor().get_all_targets())), list(map(str, tlist)))
raise UserError(msg)
elif t.sources != slist:
msg = "Multiple ways to build the same target were specified for: %s (from %s and from %s)" % (t, list(map(str, t.sources)), list(map(str, slist)))
raise UserError(msg)
if builder.single_source:
if len(slist) > 1:
raise UserError("More than one source given for single-source builder: targets=%s sources=%s" % (list(map(str,tlist)), list(map(str,slist)))) | [
"Validate that the lists of target and source nodes are\n legal for this builder and environment. Raise errors or\n issue warnings as appropriate.\n "
] |
Please provide a description of the function:def is_a_Builder(obj):
return (isinstance(obj, BuilderBase)
or isinstance(obj, CompositeBuilder)
or callable(obj)) | [
"\"Returns True if the specified obj is one of our Builder classes.\n\n The test is complicated a bit by the fact that CompositeBuilder\n is a proxy, not a subclass of BuilderBase.\n "
] |
Please provide a description of the function:def get_name(self, env):
try:
index = list(env['BUILDERS'].values()).index(self)
return list(env['BUILDERS'].keys())[index]
except (AttributeError, KeyError, TypeError, ValueError):
try:
return self.name
except AttributeError:
return str(self.__class__) | [
"Attempts to get the name of the Builder.\n\n Look at the BUILDERS variable of env, expecting it to be a\n dictionary containing this Builder, and return the key of the\n dictionary. If there's no key, then return a directly-configured\n name (if there is one) or the name of the class (by default)."
] |
Please provide a description of the function:def _create_nodes(self, env, target = None, source = None):
src_suf = self.get_src_suffix(env)
target_factory = env.get_factory(self.target_factory)
source_factory = env.get_factory(self.source_factory)
source = self._adjustixes(source, None, src_suf)
slist = env.arg2nodes(source, source_factory)
pre = self.get_prefix(env, slist)
suf = self.get_suffix(env, slist)
if target is None:
try:
t_from_s = slist[0].target_from_source
except AttributeError:
raise UserError("Do not know how to create a target from source `%s'" % slist[0])
except IndexError:
tlist = []
else:
splitext = lambda S: self.splitext(S,env)
tlist = [ t_from_s(pre, suf, splitext) ]
else:
target = self._adjustixes(target, pre, suf, self.ensure_suffix)
tlist = env.arg2nodes(target, target_factory, target=target, source=source)
if self.emitter:
# The emitter is going to do str(node), but because we're
# being called *from* a builder invocation, the new targets
# don't yet have a builder set on them and will look like
# source files. Fool the emitter's str() calls by setting
# up a temporary builder on the new targets.
new_targets = []
for t in tlist:
if not t.is_derived():
t.builder_set(self)
new_targets.append(t)
orig_tlist = tlist[:]
orig_slist = slist[:]
target, source = self.emitter(target=tlist, source=slist, env=env)
# Now delete the temporary builders that we attached to any
# new targets, so that _node_errors() doesn't do weird stuff
# to them because it thinks they already have builders.
for t in new_targets:
if t.builder is self:
# Only delete the temporary builder if the emitter
# didn't change it on us.
t.builder_set(None)
# Have to call arg2nodes yet again, since it is legal for
# emitters to spit out strings as well as Node instances.
tlist = env.arg2nodes(target, target_factory,
target=orig_tlist, source=orig_slist)
slist = env.arg2nodes(source, source_factory,
target=orig_tlist, source=orig_slist)
return tlist, slist | [
"Create and return lists of target and source nodes.\n "
] |
Please provide a description of the function:def _get_sdict(self, env):
sdict = {}
for bld in self.get_src_builders(env):
for suf in bld.src_suffixes(env):
sdict[suf] = bld
return sdict | [
"\n Returns a dictionary mapping all of the source suffixes of all\n src_builders of this Builder to the underlying Builder that\n should be called first.\n\n This dictionary is used for each target specified, so we save a\n lot of extra computation by memoizing it for each construction\n environment.\n\n Note that this is re-computed each time, not cached, because there\n might be changes to one of our source Builders (or one of their\n source Builders, and so on, and so on...) that we can't \"see.\"\n\n The underlying methods we call cache their computed values,\n though, so we hope repeatedly aggregating them into a dictionary\n like this won't be too big a hit. We may need to look for a\n better way to do this if performance data show this has turned\n into a significant bottleneck.\n "
] |
Please provide a description of the function:def get_src_builders(self, env):
memo_key = id(env)
try:
memo_dict = self._memo['get_src_builders']
except KeyError:
memo_dict = {}
self._memo['get_src_builders'] = memo_dict
else:
try:
return memo_dict[memo_key]
except KeyError:
pass
builders = []
for bld in self.src_builder:
if SCons.Util.is_String(bld):
try:
bld = env['BUILDERS'][bld]
except KeyError:
continue
builders.append(bld)
memo_dict[memo_key] = builders
return builders | [
"\n Returns the list of source Builders for this Builder.\n\n This exists mainly to look up Builders referenced as\n strings in the 'BUILDER' variable of the construction\n environment and cache the result.\n "
] |
Please provide a description of the function:def subst_src_suffixes(self, env):
memo_key = id(env)
try:
memo_dict = self._memo['subst_src_suffixes']
except KeyError:
memo_dict = {}
self._memo['subst_src_suffixes'] = memo_dict
else:
try:
return memo_dict[memo_key]
except KeyError:
pass
suffixes = [env.subst(x) for x in self.src_suffix]
memo_dict[memo_key] = suffixes
return suffixes | [
"\n The suffix list may contain construction variable expansions,\n so we have to evaluate the individual strings. To avoid doing\n this over and over, we memoize the results for each construction\n environment.\n "
] |
Please provide a description of the function:def src_suffixes(self, env):
sdict = {}
suffixes = self.subst_src_suffixes(env)
for s in suffixes:
sdict[s] = 1
for builder in self.get_src_builders(env):
for s in builder.src_suffixes(env):
if s not in sdict:
sdict[s] = 1
suffixes.append(s)
return suffixes | [
"\n Returns the list of source suffixes for all src_builders of this\n Builder.\n\n This is essentially a recursive descent of the src_builder \"tree.\"\n (This value isn't cached because there may be changes in a\n src_builder many levels deep that we can't see.)\n "
] |
Please provide a description of the function:def generate(env):
link.generate(env)
env['SMARTLINKFLAGS'] = smart_linkflags
env['LINKFLAGS'] = SCons.Util.CLVar('$SMARTLINKFLAGS')
env['SHLINKFLAGS'] = SCons.Util.CLVar('$LINKFLAGS -qmkshrobj -qsuppress=1501-218')
env['SHLIBSUFFIX'] = '.a' | [
"\n Add Builders and construction variables for Visual Age linker to\n an Environment.\n "
] |
Please provide a description of the function:def retrieve(self, node):
if not self.is_enabled():
return False
env = node.get_build_env()
if cache_show:
if CacheRetrieveSilent(node, [], env, execute=1) == 0:
node.build(presub=0, execute=0)
return True
else:
if CacheRetrieve(node, [], env, execute=1) == 0:
return True
return False | [
"\n This method is called from multiple threads in a parallel build,\n so only do thread safe stuff here. Do thread unsafe stuff in\n built().\n\n Note that there's a special trick here with the execute flag\n (one that's not normally done for other actions). Basically\n if the user requested a no_exec (-n) build, then\n SCons.Action.execute_actions is set to 0 and when any action\n is called, it does its showing but then just returns zero\n instead of actually calling the action execution operation.\n The problem for caching is that if the file does NOT exist in\n cache then the CacheRetrieveString won't return anything to\n show for the task, but the Action.__call__ won't call\n CacheRetrieveFunc; instead it just returns zero, which makes\n the code below think that the file *was* successfully\n retrieved from the cache, therefore it doesn't do any\n subsequent building. However, the CacheRetrieveString didn't\n print anything because it didn't actually exist in the cache,\n and no more build actions will be performed, so the user just\n sees nothing. The fix is to tell Action.__call__ to always\n execute the CacheRetrieveFunc and then have the latter\n explicitly check SCons.Action.execute_actions itself.\n "
] |
Please provide a description of the function:def generate(env):
as_module.generate(env)
env['AS'] = '386asm'
env['ASFLAGS'] = SCons.Util.CLVar('')
env['ASPPFLAGS'] = '$ASFLAGS'
env['ASCOM'] = '$AS $ASFLAGS $SOURCES -o $TARGET'
env['ASPPCOM'] = '$CC $ASPPFLAGS $CPPFLAGS $_CPPDEFFLAGS $_CPPINCFLAGS $SOURCES -o $TARGET'
addPharLapPaths(env) | [
"Add Builders and construction variables for ar to an Environment."
] |
Please provide a description of the function:def _parse_target(target):
if len(target) != 8:
raise ArgumentError("Invalid targeting data length", expected=8, length=len(target))
slot, match_op = struct.unpack("<B6xB", target)
if match_op == _MATCH_CONTROLLER:
return {'controller': True, 'slot': 0}
elif match_op == _MATCH_SLOT:
return {'controller': False, 'slot': slot}
raise ArgumentError("Unsupported complex targeting specified", match_op=match_op) | [
"Parse a binary targeting information structure.\n\n This function only supports extracting the slot number or controller from\n the target and will raise an ArgumentError if more complicated targeting\n is desired.\n\n Args:\n target (bytes): The binary targeting data blob.\n\n Returns:\n dict: The parsed targeting data\n "
] |
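
As a quick illustration of the 8-byte wire format this parser expects, the following sketch builds a slot-targeted blob and parses it back; it assumes _MATCH_SLOT is the module-level constant referenced above (its numeric value is not shown here, so the name is reused rather than guessed):

import struct

blob = struct.pack("<B6xB", 3, _MATCH_SLOT)   # slot byte, 6 pad bytes, match op
assert len(blob) == 8
print(_parse_target(blob))                    # -> {'controller': False, 'slot': 3}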
Please provide a description of the function:def encode_contents(self):
header = struct.pack("<LL8sBxxx", self.offset, len(self.raw_data),
_create_target(slot=self.slot), self.hardware_type)
return bytearray(header) + self.raw_data | [
"Encode the contents of this update record without including a record header.\n\n Returns:\n bytearary: The encoded contents.\n "
] |
Please provide a description of the function:def FromBinary(cls, record_data, record_count=1):
if len(record_data) < ReflashTileRecord.RecordHeaderLength:
raise ArgumentError("Record was too short to contain a full reflash record header",
length=len(record_data), header_length=ReflashTileRecord.RecordHeaderLength)
offset, data_length, raw_target, hardware_type = struct.unpack_from("<LL8sB3x", record_data)
bindata = record_data[ReflashTileRecord.RecordHeaderLength:]
if len(bindata) != data_length:
raise ArgumentError("Embedded firmware length did not agree with actual length of embeded data",
length=len(bindata), embedded_length=data_length)
target = _parse_target(raw_target)
if target['controller']:
raise ArgumentError("Invalid targetting information, you "
"cannot reflash a controller with a ReflashTileRecord", target=target)
return ReflashTileRecord(target['slot'], bindata, offset, hardware_type) | [
"Create an UpdateRecord subclass from binary record data.\n\n This should be called with a binary record blob (NOT including the\n record type header) and it will decode it into a ReflashTileRecord.\n\n Args:\n record_data (bytearray): The raw record data that we wish to parse\n into an UpdateRecord subclass NOT including its 8 byte record header.\n record_count (int): The number of records included in record_data.\n\n Raises:\n ArgumentError: If the record_data is malformed and cannot be parsed.\n\n Returns:\n ReflashTileRecord: The decoded reflash tile record.\n "
] |
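
Tying encode_contents and FromBinary together, a hedged round-trip sketch; the positional constructor signature (slot, raw_data, offset, hardware_type) is inferred from the return statement above, and RecordHeaderLength is assumed to be the 20-byte header that encode_contents packs:

firmware = bytes(range(16))
record = ReflashTileRecord(2, firmware, 0x1000, 1)   # slot, raw_data, offset, hw type

blob = record.encode_contents()   # 20-byte header followed by the raw firmware
assert len(blob) == ReflashTileRecord.RecordHeaderLength + len(firmware)

decoded = ReflashTileRecord.FromBinary(blob)
assert decoded.slot == 2 and decoded.offset == 0x1000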
Please provide a description of the function:def put_task(self, func, args, response):
self._rpc_queue.put_nowait((func, args, response)) | [
"Place a task onto the RPC queue.\n\n This temporary functionality will go away but it lets you run a\n task synchronously with RPC dispatch by placing it onto the\n RCP queue.\n\n Args:\n func (callable): The function to execute\n args (iterable): The function arguments\n response (GenericResponse): The response object to signal the\n result on.\n "
] |
Please provide a description of the function:def put_rpc(self, address, rpc_id, arg_payload, response):
self._rpc_queue.put_nowait((address, rpc_id, arg_payload, response)) | [
"Place an RPC onto the RPC queue.\n\n The rpc will be dispatched asynchronously by the background dispatch\n task. This method must be called from the event loop. This method\n does not block.\n\n Args:\n address (int): The address of the tile with the RPC\n rpc_id (int): The id of the rpc you want to call\n arg_payload (bytes): The RPC payload\n respones (GenericResponse): The object to use to signal the result.\n "
] |
Please provide a description of the function:def finish_async_rpc(self, address, rpc_id, response):
pending = self._pending_rpcs.get(address)
if pending is None:
raise ArgumentError("No asynchronously RPC currently in progress on tile %d" % address)
responder = pending.get(rpc_id)
if responder is None:
raise ArgumentError("RPC %04X is not running asynchronous on tile %d" % (rpc_id, address))
del pending[rpc_id]
responder.set_result(response)
self._rpc_queue.task_done() | [
"Finish a previous asynchronous RPC.\n\n This method should be called by a peripheral tile that previously\n had an RPC called on it and chose to response asynchronously by\n raising ``AsynchronousRPCResponse`` in the RPC handler itself.\n\n The response passed to this function will be returned to the caller\n as if the RPC had returned it immediately.\n\n This method must only ever be called from a coroutine inside the\n emulation loop that is handling background work on behalf of a tile.\n\n Args:\n address (int): The tile address the RPC was called on.\n rpc_id (int): The ID of the RPC that was called.\n response (bytes): The bytes that should be returned to\n the caller of the RPC.\n "
] |
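
A hedged sketch of the pattern this enables: a tile defers its answer by raising AsynchronousRPCResponse (a class from the surrounding package, not shown here) and later completes the call from a coroutine on the emulation loop. Here rpc_queue is an instance of this class; the address, rpc_id, and payload are illustrative only:

import asyncio
import struct

async def finish_later(rpc_queue):
    # The RPC at (address=11, rpc_id=0x8000) previously raised
    # AsynchronousRPCResponse, leaving a responder in _pending_rpcs.
    await asyncio.sleep(0.1)                  # stand-in for slow background work
    payload = struct.pack("<L", 42)           # bytes handed back to the RPC caller
    rpc_queue.finish_async_rpc(11, 0x8000, payload)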
Please provide a description of the function:async def stop(self):
if self._rpc_task is not None:
self._rpc_task.cancel()
try:
await self._rpc_task
except asyncio.CancelledError:
pass
self._rpc_task = None | [
"Stop the rpc queue from inside the event loop."
] |
Please provide a description of the function:def add_segment(self, address, data, overwrite=False):
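# NOTE: the overwrite flag is not yet honored below; _classify_segment()
# raises ArgumentError on any overlap regardless of its value.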
seg_type = self._classify_segment(address, len(data))
if not isinstance(seg_type, DisjointSegment):
raise ArgumentError("Unsupported segment type")
segment = MemorySegment(address, address+len(data)-1, len(data), bytearray(data))
self._segments.append(segment) | [
"Add a contiguous segment of data to this memory map\n\n If the segment overlaps with a segment already added , an\n ArgumentError is raised unless the overwrite flag is True.\n\n Params:\n address (int): The starting address for this segment\n data (bytearray): The data to add\n overwrite (bool): Overwrite data if this segment overlaps\n with one previously added.\n "
] |
Please provide a description of the function:def _create_slice(self, key):
if isinstance(key, slice):
step = key.step
if step is None:
step = 1
if step != 1:
raise ArgumentError("You cannot slice with a step that is not equal to 1", step=key.step)
start_address = key.start
end_address = key.stop - 1
start_i, start_seg = self._find_address(start_address)
end_i, _end_seg = self._find_address(end_address)
if start_seg is None or start_i != end_i:
raise ArgumentError("Slice would span invalid data in memory",
start_address=start_address, end_address=end_address)
block_offset = start_address - start_seg.start_address
block_length = end_address - start_address + 1
return start_seg, block_offset, block_offset + block_length
elif isinstance(key, int):
start_i, start_seg = self._find_address(key)
if start_seg is None:
raise ArgumentError("Requested invalid address", address=key)
return start_seg, key - start_seg.start_address, None
else:
raise ArgumentError("Unknown type of address key", address=key) | [
"Create a slice in a memory segment corresponding to a key."
] |
Please provide a description of the function:def _classify_segment(self, address, length):
end_address = address + length - 1
_, start_seg = self._find_address(address)
_, end_seg = self._find_address(end_address)
if start_seg is not None or end_seg is not None:
raise ArgumentError("Overlapping segments are not yet supported", address=address, length=length)
return DisjointSegment() | [
"Determine how a new data segment fits into our existing world\n\n Params:\n address (int): The address we wish to classify\n length (int): The length of the segment\n\n Returns:\n int: One of SparseMemoryMap.prepended\n "
] |
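
A hedged usage sketch of the sparse map these methods implement; the class name SparseMemoryMap comes from the docstring above, the no-argument constructor is assumed, and __getitem__ is assumed to delegate to _create_slice() in the obvious way:

smap = SparseMemoryMap()                       # constructor signature assumed
smap.add_segment(0x1000, b"\xde\xad\xbe\xef")

print(smap[0x1000])                            # single address inside the segment
print(smap[0x1001:0x1003])                     # slice confined to one segment
smap[0x2000]                                   # unmapped address -> ArgumentError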
Please provide a description of the function:def generate(env):
SCons.Tool.createSharedLibBuilder(env)
SCons.Tool.createProgBuilder(env)
env['SUBST_CMD_FILE'] = LinklocGenerator
env['SHLINK'] = '$LINK'
env['SHLINKFLAGS'] = SCons.Util.CLVar('$LINKFLAGS')
env['SHLINKCOM'] = '${SUBST_CMD_FILE("$SHLINK $SHLINKFLAGS $_LIBDIRFLAGS $_LIBFLAGS -dll $TARGET $SOURCES")}'
env['SHLIBEMITTER'] = None
env['LDMODULEEMITTER'] = None
env['LINK'] = "linkloc"
env['LINKFLAGS'] = SCons.Util.CLVar('')
env['LINKCOM'] = '${SUBST_CMD_FILE("$LINK $LINKFLAGS $_LIBDIRFLAGS $_LIBFLAGS -exe $TARGET $SOURCES")}'
env['LIBDIRPREFIX'] = '-libpath '
env['LIBDIRSUFFIX'] = ''
env['LIBLINKPREFIX'] = '-lib '
env['LIBLINKSUFFIX'] = '$LIBSUFFIX'
# Set-up ms tools paths for default version
merge_default_version(env)
addPharLapPaths(env) | [
"Add Builders and construction variables for ar to an Environment."
] |
Please provide a description of the function:def generate(env):
# ifort supports Fortran 90 and Fortran 95
# Additionally, ifort recognizes more file extensions.
fscan = FortranScan("FORTRANPATH")
SCons.Tool.SourceFileScanner.add_scanner('.i', fscan)
SCons.Tool.SourceFileScanner.add_scanner('.i90', fscan)
if 'FORTRANFILESUFFIXES' not in env:
env['FORTRANFILESUFFIXES'] = ['.i']
else:
env['FORTRANFILESUFFIXES'].append('.i')
if 'F90FILESUFFIXES' not in env:
env['F90FILESUFFIXES'] = ['.i90']
else:
env['F90FILESUFFIXES'].append('.i90')
add_all_to_env(env)
fc = 'ifort'
for dialect in ['F77', 'F90', 'FORTRAN', 'F95']:
env['%s' % dialect] = fc
env['SH%s' % dialect] = '$%s' % dialect
if env['PLATFORM'] == 'posix':
env['SH%sFLAGS' % dialect] = SCons.Util.CLVar('$%sFLAGS -fPIC' % dialect)
if env['PLATFORM'] == 'win32':
# On Windows, the ifort compiler specifies the object on the
# command line with -object:, not -o. Massage the necessary
# command-line construction variables.
for dialect in ['F77', 'F90', 'FORTRAN', 'F95']:
for var in ['%sCOM' % dialect, '%sPPCOM' % dialect,
'SH%sCOM' % dialect, 'SH%sPPCOM' % dialect]:
env[var] = env[var].replace('-o $TARGET', '-object:$TARGET')
env['FORTRANMODDIRPREFIX'] = "/module:"
else:
env['FORTRANMODDIRPREFIX'] = "-module " | [
"Add Builders and construction variables for ifort to an Environment."
] |
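
For context, a minimal SConstruct sketch showing how a tool module like this is typically activated; it assumes an Intel Fortran compiler is installed and that SCons finds this module under the tool name 'ifort':

# SConstruct (sketch only; Environment is injected into SConstruct files)
env = Environment(tools=['default', 'ifort'])
# generate() above points $F77/$F90/$FORTRAN/$F95 at 'ifort' and, on POSIX,
# adds -fPIC to the shared-object dialect flags.
env.Program('hello', ['hello.f90'])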
Please provide a description of the function:def run(self, postfunc=lambda: None):
self._setup_sig_handler()
try:
self.job.start()
finally:
postfunc()
self._reset_sig_handler() | [
"Run the jobs.\n\n postfunc() will be invoked after the jobs has run. It will be\n invoked even if the jobs are interrupted by a keyboard\n interrupt (well, in fact by a signal such as either SIGINT,\n SIGTERM or SIGHUP). The execution of postfunc() is protected\n against keyboard interrupts and is guaranteed to run to\n completion."
] |
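
A small hedged example of the postfunc guarantee described above; jobs is assumed to be an SCons.Job.Jobs instance created elsewhere:

def cleanup():
    print("build finished or interrupted; releasing resources")

jobs.run(postfunc=cleanup)   # runs even after SIGINT/SIGTERM/SIGHUP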
Please provide a description of the function:def _setup_sig_handler(self):
def handler(signum, stack, self=self, parentpid=os.getpid()):
if os.getpid() == parentpid:
self.job.taskmaster.stop()
self.job.interrupted.set()
else:
os._exit(2)
self.old_sigint = signal.signal(signal.SIGINT, handler)
self.old_sigterm = signal.signal(signal.SIGTERM, handler)
try:
self.old_sighup = signal.signal(signal.SIGHUP, handler)
except AttributeError:
pass | [
"Setup an interrupt handler so that SCons can shutdown cleanly in\n various conditions:\n\n a) SIGINT: Keyboard interrupt\n b) SIGTERM: kill or system shutdown\n c) SIGHUP: Controlling shell exiting\n\n We handle all of these cases by stopping the taskmaster. It\n turns out that it's very difficult to stop the build process\n by throwing asynchronously an exception such as\n KeyboardInterrupt. For example, the python Condition\n variables (threading.Condition) and queues do not seem to be\n asynchronous-exception-safe. It would require adding a whole\n bunch of try/finally block and except KeyboardInterrupt all\n over the place.\n\n Note also that we have to be careful to handle the case when\n SCons forks before executing another process. In that case, we\n want the child to exit immediately.\n "
] |