text_prompt | code_prompt
---|---|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_registration_dt(self, use_cached=True):
"""Get the datetime of when this device was added to Device Cloud""" |
device_json = self.get_device_json(use_cached)
start_date_iso8601 = device_json.get("devRecordStartDate")
if start_date_iso8601:
return iso8601_to_dt(start_date_iso8601)
else:
return None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_latlon(self, use_cached=True):
"""Get the latitude and longitude of this device as a (lat, lon) tuple of floats, with None for values that are not set""" |
device_json = self.get_device_json(use_cached)
lat = device_json.get("dpMapLat")
lon = device_json.get("dpMapLong")
return (float(lat) if lat else None,
float(lon) if lon else None, ) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_to_group(self, group_path):
"""Add a device to a group, if the group doesn't exist it is created :param group_path: Path or "name" of the group """ |
if self.get_group_path() != group_path:
post_data = ADD_GROUP_TEMPLATE.format(connectware_id=self.get_connectware_id(),
group_path=group_path)
self._conn.put('/ws/DeviceCore', post_data)
# Invalidate cache
self._device_json = None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_tag(self, new_tags):
"""Add a tag to existing device tags. This method will not add a duplicate, if already in the list. :param new_tags: the tag(s) to be added. new_tags can be a comma-separated string or list """ |
tags = self.get_tags()
orig_tag_cnt = len(tags)
# print("self.get_tags() {}".format(tags))
if isinstance(new_tags, six.string_types):
new_tags = new_tags.split(',')
# print("spliting tags :: {}".format(new_tags))
for tag in new_tags:
if not tag in tags:
tags.append(tag.strip())
if len(tags) > orig_tag_cnt:
xml_tags = escape(",".join(tags))
post_data = TAGS_TEMPLATE.format(connectware_id=self.get_connectware_id(),
tags=xml_tags)
self._conn.put('/ws/DeviceCore', post_data)
# Invalidate cache
self._device_json = None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def remove_tag(self, tag):
"""Remove tag from existing device tags :param tag: the tag to be removed from the list :raises ValueError: If tag does not exist in list """ |
tags = self.get_tags()
tags.remove(tag)
post_data = TAGS_TEMPLATE.format(connectware_id=self.get_connectware_id(),
tags=escape(",".join(tags)))
self._conn.put('/ws/DeviceCore', post_data)
# Invalidate cache
self._device_json = None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def hostname(self):
"""Get the hostname that this connection is associated with""" |
from six.moves.urllib.parse import urlparse
return urlparse(self._base_url).netloc.split(':', 1)[0] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def iter_json_pages(self, path, page_size=1000, **params):
"""Return an iterator over JSON items from a paginated resource Legacy resources (prior to V1) implemented a common paging interfaces for several different resources. This method handles the details of iterating over the paged result set, yielding only the JSON data for each item within the aggregate resource. :param str path: The base path to the resource being requested (e.g. /ws/Group) :param int page_size: The number of items that should be requested for each page. A larger page_size may mean fewer HTTP requests but could also increase the time to get a first result back from Device Cloud. :param params: These are additional query parameters that should be sent with each request to Device Cloud. """ |
path = validate_type(path, *six.string_types)
page_size = validate_type(page_size, *six.integer_types)
offset = 0
remaining_size = 1 # just needs to be non-zero
while remaining_size > 0:
reqparams = {"start": offset, "size": page_size}
reqparams.update(params)
response = self.get_json(path, params=reqparams)
offset += page_size
remaining_size = int(response.get("remainingSize", "0"))
for item_json in response.get("items", []):
yield item_json |
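A minimal usage sketch of the pager above. It assumes ``dc`` is an authenticated ``DeviceCloud`` instance, that the underlying connection is reachable via ``dc.get_connection()``, and that ``grpName`` is a field of the /ws/Group payload; all three are assumptions, not taken from the text above.

conn = dc.get_connection()
for group_json in conn.iter_json_pages("/ws/Group", page_size=500):
    # each iteration yields one item's JSON, regardless of page boundaries
    print(group_json.get("grpName"))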
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get(self, path, **kwargs):
"""Perform an HTTP GET request of the specified path in Device Cloud Make an HTTP GET request against Device Cloud with this accounts credentials and base url. This method uses the `requests <http://docs.python-requests.org/en/latest/>`_ library `request method <http://docs.python-requests.org/en/latest/api/#requests.request>`_ and all keyword arguments will be passed on to that method. :param str path: Device Cloud path to GET :param int retries: The number of times the request should be retried if an unsuccessful response is received. Most likely, you should leave this at 0. :raises DeviceCloudHttpException: if a non-success response to the request is received from Device Cloud :returns: A requests ``Response`` object """ |
url = self._make_url(path)
return self._make_request("GET", url, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_json(self, path, **kwargs):
"""Perform an HTTP GET request with JSON headers of the specified path against Device Cloud Make an HTTP GET request against Device Cloud with this accounts credentials and base url. This method uses the `requests <http://docs.python-requests.org/en/latest/>`_ library `request method <http://docs.python-requests.org/en/latest/api/#requests.request>`_ and all keyword arguments will be passed on to that method. This method will automatically add the ``Accept: application/json`` and parse the JSON response from Device Cloud. :param str path: Device Cloud path to GET :param int retries: The number of times the request should be retried if an unsuccessful response is received. Most likely, you should leave this at 0. :raises DeviceCloudHttpException: if a non-success response to the request is received from Device Cloud :returns: A python data structure containing the results of calling ``json.loads`` on the body of the response from Device Cloud. """ |
url = self._make_url(path)
headers = kwargs.setdefault('headers', {})
headers.update({'Accept': 'application/json'})
response = self._make_request("GET", url, **kwargs)
return json.loads(response.text) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def post(self, path, data, **kwargs):
"""Perform an HTTP POST request of the specified path in Device Cloud Make an HTTP POST request against Device Cloud with this accounts credentials and base url. This method uses the `requests <http://docs.python-requests.org/en/latest/>`_ library `request method <http://docs.python-requests.org/en/latest/api/#requests.request>`_ and all keyword arguments will be passed on to that method. :param str path: Device Cloud path to POST :param int retries: The number of times the request should be retried if an unsuccessful response is received. Most likely, you should leave this at 0. :param data: The data to be posted in the body of the POST request (see docs for ``requests.post`` :raises DeviceCloudHttpException: if a non-success response to the request is received from Device Cloud :returns: A requests ``Response`` object """ |
url = self._make_url(path)
return self._make_request("POST", url, data=data, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def put(self, path, data, **kwargs):
"""Perform an HTTP PUT request of the specified path in Device Cloud Make an HTTP PUT request against Device Cloud with this accounts credentials and base url. This method uses the `requests <http://docs.python-requests.org/en/latest/>`_ library `request method <http://docs.python-requests.org/en/latest/api/#requests.request>`_ and all keyword arguments will be passed on to that method. :param str path: Device Cloud path to PUT :param int retries: The number of times the request should be retried if an unsuccessful response is received. Most likely, you should leave this at 0. :param data: The data to be posted in the body of the POST request (see docs for ``requests.post`` :raises DeviceCloudHttpException: if a non-success response to the request is received from Device Cloud :returns: A requests ``Response`` object """ |
url = self._make_url(path)
return self._make_request("PUT", url, data=data, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete(self, path, retries=DEFAULT_THROTTLE_RETRIES, **kwargs):
"""Perform an HTTP DELETE request of the specified path in Device Cloud Make an HTTP DELETE request against Device Cloud with this accounts credentials and base url. This method uses the `requests <http://docs.python-requests.org/en/latest/>`_ library `request method <http://docs.python-requests.org/en/latest/api/#requests.request>`_ and all keyword arguments will be passed on to that method. :param str path: Device Cloud path to DELETE :param int retries: The number of times the request should be retried if an unsuccessful response is received. Most likely, you should leave this at 0. :raises DeviceCloudHttpException: if a non-success response to the request is received from Device Cloud :returns: A requests ``Response`` object """ |
url = self._make_url(path)
return self._make_request("DELETE", url, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_async_job(self, job_id):
"""Query an asynchronous SCI job by ID This is useful if the job was not created with send_sci_async(). :param int job_id: The job ID to query :returns: The SCI response from GETting the job information """ |
uri = "/ws/sci/{0}".format(job_id)
# TODO: do parsing here?
return self._conn.get(uri) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def send_sci_async(self, operation, target, payload, **sci_options):
"""Send an asynchronous SCI request, and wraps the job in an object to manage it :param str operation: The operation is one of {send_message, update_firmware, disconnect, query_firmware_targets, file_system, data_service, and reboot} :param target: The device(s) to be targeted with this request :type target: :class:`~.TargetABC` or list of :class:`~.TargetABC` instances TODO: document other params """ |
sci_options['synchronous'] = False
resp = self.send_sci(operation, target, payload, **sci_options)
dom = ET.fromstring(resp.content)
job_element = dom.find('.//jobId')
if job_element is None:
return
job_id = int(job_element.text)
return AsyncRequestProxy(job_id, self._conn) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def send_sci(self, operation, target, payload, reply=None, synchronous=None, sync_timeout=None, cache=None, allow_offline=None, wait_for_reconnect=None):
"""Send SCI request to 1 or more targets :param str operation: The operation is one of {send_message, update_firmware, disconnect, query_firmware_targets, file_system, data_service, and reboot} :param target: The device(s) to be targeted with this request :type target: :class:`~.TargetABC` or list of :class:`~.TargetABC` instances TODO: document other params """ |
if not isinstance(payload, six.string_types) and not isinstance(payload, six.binary_type):
raise TypeError("payload is required to be a string or bytes")
# validate targets and build targets xml section
try:
iter(target)
targets = target
except TypeError:
targets = [target, ]
if not all(isinstance(t, TargetABC) for t in targets):
raise TypeError("Target(s) must each be instances of TargetABC")
targets_xml = "".join(t.to_xml() for t in targets)
# reply argument
if not isinstance(reply, (type(None), six.string_types)):
raise TypeError("reply must be either None or a string")
if reply is not None:
reply_xml = ' reply="{}"'.format(reply)
else:
reply_xml = ''
# synchronous argument
if not isinstance(synchronous, (type(None), bool)):
raise TypeError("synchronous expected to be either None or a boolean")
if synchronous is not None:
synchronous_xml = ' synchronous="{}"'.format('true' if synchronous else 'false')
else:
synchronous_xml = ''
# sync_timeout argument
# TODO: What units is syncTimeout in? seconds?
if sync_timeout is not None and not isinstance(sync_timeout, six.integer_types):
raise TypeError("sync_timeout expected to either be None or a number")
if sync_timeout is not None:
sync_timeout_xml = ' syncTimeout="{}"'.format(sync_timeout)
else:
sync_timeout_xml = ''
# cache argument
if not isinstance(cache, (type(None), bool)):
raise TypeError("cache expected to either be None or a boolean")
if cache is not None:
cache_xml = ' cache="{}"'.format('true' if cache else 'false')
else:
cache_xml = ''
# allow_offline argument
if not isinstance(allow_offline, (type(None), bool)):
raise TypeError("allow_offline is expected to be either None or a boolean")
if allow_offline is not None:
allow_offline_xml = ' allowOffline="{}"'.format('true' if allow_offline else 'false')
else:
allow_offline_xml = ''
# wait_for_reconnect argument
if not isinstance(wait_for_reconnect, (type(None), bool)):
raise TypeError("wait_for_reconnect expected to be either None or a boolean")
if wait_for_reconnect is not None:
wait_for_reconnect_xml = ' waitForReconnect="{}"'.format('true' if wait_for_reconnect else 'false')
else:
wait_for_reconnect_xml = ''
full_request = SCI_TEMPLATE.format(
operation=operation,
targets=targets_xml,
reply=reply_xml,
synchronous=synchronous_xml,
sync_timeout=sync_timeout_xml,
cache=cache_xml,
allow_offline=allow_offline_xml,
wait_for_reconnect=wait_for_reconnect_xml,
payload=payload
)
# TODO: do parsing here?
return self._conn.post("/ws/sci", full_request) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def conditional_write(strm, fmt, value, *args, **kwargs):
"""Write to stream using fmt and value if value is not None""" |
if value is not None:
strm.write(fmt.format(value, *args, **kwargs)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def iso8601_to_dt(iso8601):
"""Given an ISO8601 string as returned by Device Cloud, convert to a datetime object""" |
# We could just use arrow.get() but that is more permissive than we actually want.
# Internal (but still public) to arrow is the actual parser where we can be
# a bit more specific
parser = DateTimeParser()
try:
arrow_dt = arrow.Arrow.fromdatetime(parser.parse_iso(iso8601))
return arrow_dt.to('utc').datetime
except ParserError as pe:
raise ValueError("Provided was not a valid ISO8601 string: %r" % pe) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_none_or_dt(input):
"""Convert ``input`` to either None or a datetime object If the input is None, None will be returned. If the input is a datetime object, it will be converted to a datetime object with UTC timezone info. If the datetime object is naive, then this method will assume the object is specified according to UTC and not local or some other timezone. If the input to the function is a string, this method will attempt to parse the input as an ISO-8601 formatted string. :param input: Input data (expected to be either str, None, or datetime object) :return: datetime object from input or None if already None :rtype: datetime or None """ |
if input is None:
return input
elif isinstance(input, datetime.datetime):
arrow_dt = arrow.Arrow.fromdatetime(input, input.tzinfo or 'utc')
return arrow_dt.to('utc').datetime
if isinstance(input, six.string_types):
# try to convert from ISO8601
return iso8601_to_dt(input)
else:
raise TypeError("Not a string, NoneType, or datetime object") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def isoformat(dt):
"""Return an ISO-8601 formatted string from the provided datetime object""" |
if not isinstance(dt, datetime.datetime):
raise TypeError("Must provide datetime.datetime object to isoformat")
if dt.tzinfo is None:
raise ValueError("naive datetime objects are not allowed beyond the library boundaries")
return dt.isoformat().replace("+00:00", "Z") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_filedata(self, condition=None, page_size=1000):
"""Return a generator over all results matching the provided condition :param condition: An :class:`.Expression` which defines the condition which must be matched on the filedata that will be retrieved from file data store. If a condition is unspecified, the following condition will be used ``fd_path == '~/'``. This condition will match all file data in this accounts "home" directory (a sensible root). :type condition: :class:`.Expression` or None :param int page_size: The number of results to fetch in a single page. Regardless of the size specified, :meth:`.get_filedata` will continue to fetch pages and yield results until all items have been fetched. :return: Generator yielding :class:`.FileDataObject` instances matching the provided conditions. """ |
condition = validate_type(condition, type(None), Expression, *six.string_types)
page_size = validate_type(page_size, *six.integer_types)
if condition is None:
condition = (fd_path == "~/") # home directory
params = {"embed": "true", "condition": condition.compile()}
for fd_json in self._conn.iter_json_pages("/ws/FileData", page_size=page_size, **params):
yield FileDataObject.from_json(self, fd_json) |
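A minimal usage sketch (assuming ``dc`` is an authenticated ``DeviceCloud`` instance, that the filedata API hangs off ``dc.filedata``, and that ``fd_path`` is importable as shown): list everything under a hypothetical ``~/demo/`` directory.

from devicecloud.filedata import fd_path  # assumed import path

for fd_object in dc.filedata.get_filedata(fd_path == "~/demo/"):
    print(fd_object.get_full_path(), fd_object.get_type())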
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def write_file(self, path, name, data, content_type=None, archive=False, raw=False):
"""Write a file to the file data store at the given path :param str path: The path (directory) into which the file should be written. :param str name: The name of the file to be written. :param data: The binary data that should be written into the file. :type data: str (Python2) or bytes (Python3) :param content_type: The content type for the data being written to the file. May be left unspecified. :type content_type: str or None :param bool archive: If true, history will be retained for various revisions of this file. If this is not required, leave as false. :param bool raw: If true, skip the FileData XML headers (necessary for binary files) """ |
path = validate_type(path, *six.string_types)
name = validate_type(name, *six.string_types)
data = validate_type(data, six.binary_type)
content_type = validate_type(content_type, type(None), *six.string_types)
archive_str = "true" if validate_type(archive, bool) else "false"
if not path.startswith("/"):
path = "/" + path
if not path.endswith("/"):
path += "/"
name = name.lstrip("/")
sio = six.moves.StringIO()
if not raw:
if six.PY3:
base64_encoded_data = base64.encodebytes(data).decode('utf-8')
else:
base64_encoded_data = base64.encodestring(data)
sio.write("<FileData>")
if content_type is not None:
sio.write("<fdContentType>{}</fdContentType>".format(content_type))
sio.write("<fdType>file</fdType>")
sio.write("<fdData>{}</fdData>".format(base64_encoded_data))
sio.write("<fdArchive>{}</fdArchive>".format(archive_str))
sio.write("</FileData>")
else:
sio.write(data)
params = {
"type": "file",
"archive": archive_str
}
self._conn.put(
"/ws/FileData{path}{name}".format(path=path, name=name),
sio.getvalue(),
params=params) |
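A minimal usage sketch of the method above (``dc.filedata`` and the target path are assumptions): write a small text file and keep revision history for it.

dc.filedata.write_file(
    path="~/demo/",            # leading/trailing slashes are normalized by the method
    name="hello.txt",
    data=b"Hello, Device Cloud!",
    content_type="text/plain",
    archive=True,              # retain history for revisions of this file
)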
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete_file(self, path):
"""Delete a file or directory from the filedata store This method removes a file or directory (recursively) from the filedata store. :param path: The path of the file or directory to remove from the file data store. """ |
path = validate_type(path, *six.string_types)
if not path.startswith("/"):
path = "/" + path
self._conn.delete("/ws/FileData{path}".format(path=path)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def walk(self, root="~/"):
"""Emulation of os.walk behavior against Device Cloud filedata store This method will yield tuples in the form ``(dirpath, FileDataDirectory's, FileData's)`` recursively in pre-order (depth first from top down). :param str root: The root path from which the search should commence. By default, this is the root directory for this device cloud account (~). :return: Generator yielding 3-tuples of dirpath, directories, and files :rtype: 3-tuple in form (dirpath, list of :class:`FileDataDirectory`, list of :class:`FileDataFile`) """ |
root = validate_type(root, *six.string_types)
directories = []
files = []
# fd_path is real picky
query_fd_path = root
if not query_fd_path.endswith("/"):
query_fd_path += "/"
for fd_object in self.get_filedata(fd_path == query_fd_path):
if fd_object.get_type() == "directory":
directories.append(fd_object)
else:
files.append(fd_object)
# Yield the walk results for this level of the tree
yield (root, directories, files)
# recurse on each directory and yield results up the chain
for directory in directories:
for dirpath, directories, files in self.walk(directory.get_full_path()):
yield (dirpath, directories, files) |
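A minimal usage sketch mirroring ``os.walk`` (again assuming ``dc.filedata`` and a hypothetical ``~/demo`` directory): print the full path of every file below the root.

for dirpath, directories, files in dc.filedata.walk("~/demo"):
    for fd_file in files:
        print(fd_file.get_full_path())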
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_data(self):
"""Get the data associated with this filedata object :returns: Data associated with this object or None if none exists :rtype: str (Python2)/bytes (Python3) or None """ |
# NOTE: we assume that the "embed" option is used
base64_data = self._json_data.get("fdData")
if base64_data is None:
return None
else:
# need to convert to bytes() with python 3
return base64.decodestring(six.b(base64_data)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def write_file(self, *args, **kwargs):
"""Write a file into this directory This method takes the same arguments as :meth:`.FileDataAPI.write_file` with the exception of the ``path`` argument which is not needed here. """ |
return self._fdapi.write_file(self.get_path(), *args, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_monitors(self, condition=None, page_size=1000):
"""Return an iterator over all monitors matching the provided condition Get all inactive monitors and print id:: for mon in dc.monitor.get_monitors(MON_STATUS_ATTR == "DISABLED"):
print(mon.get_id()) Get all the HTTP monitors and print id:: for mon in dc.monitor.get_monitors(MON_TRANSPORT_TYPE_ATTR == "http"):
print(mon.get_id()) Many other possibilities exist. See the :mod:`devicecloud.condition` documentation for additional details on building compound expressions. :param condition: An :class:`.Expression` which defines the condition which must be matched on the monitor that will be retrieved from Device Cloud. If a condition is unspecified, an iterator over all monitors for this account will be returned. :type condition: :class:`.Expression` or None :param int page_size: The number of results to fetch in a single page. :return: Generator yielding :class:`.DeviceCloudMonitor` instances matching the provided conditions. """ |
req_kwargs = {}
if condition:
req_kwargs['condition'] = condition.compile()
for monitor_data in self._conn.iter_json_pages("/ws/Monitor", **req_kwargs):
yield DeviceCloudMonitor.from_json(self._conn, monitor_data, self._tcp_client_manager) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_monitor(self, topics):
"""Attempts to find a Monitor in device cloud that matches the provided topics :param topics: a string list of topics (e.g. ``['DeviceCore[U]', 'FileDataCore'])``) Returns a :class:`DeviceCloudMonitor` if found, otherwise None. """ |
for monitor in self.get_monitors(MON_TOPIC_ATTR == ",".join(topics)):
return monitor # return the first one, even if there are multiple
return None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_encoder_method(stream_type):
"""A function to get the python type to device cloud type converter function. :param stream_type: The streams data type :return: A function that when called with the python object will return the serializable type for sending to the cloud. If there is no function for the given type, or the `stream_type` is `None` the returned function will simply return the object unchanged. """ |
if stream_type is not None:
return DSTREAM_TYPE_MAP.get(stream_type.upper(), (lambda x: x, lambda x: x))[1]
else:
return lambda x: x |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_decoder_method(stream_type):
""" A function to get Device Cloud type to python type converter function. :param stream_type: The streams data type :return: A function that when called with Device Cloud object will return the python native type. If there is no function for the given type, or the `stream_type` is `None` the returned function will simply return the object unchanged. """ |
if stream_type is not None:
return DSTREAM_TYPE_MAP.get(stream_type.upper(), (lambda x: x, lambda x: x))[0]
else:
return lambda x: x |
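A small self-contained sketch of the layout these two helpers assume for ``DSTREAM_TYPE_MAP``: each stream type maps to a ``(decoder, encoder)`` pair, index 0 converting cloud values to python and index 1 converting python values to a serializable form. The table below is illustrative only, not the library's actual mapping.

EXAMPLE_TYPE_MAP = {
    "INTEGER": (int, str),   # decode "42" -> 42, encode 42 -> "42"
    "FLOAT": (float, str),
}

identity = (lambda x: x, lambda x: x)
decoder = EXAMPLE_TYPE_MAP.get("INTEGER", identity)[0]
encoder = EXAMPLE_TYPE_MAP.get("INTEGER", identity)[1]
print(decoder("42"), encoder(42))  # 42 '42'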
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_streams(self, uri_suffix=None):
"""Clear and update internal cache of stream objects""" |
# TODO: handle paging, perhaps change this to be a generator
if uri_suffix is not None and not uri_suffix.startswith('/'):
uri_suffix = '/' + uri_suffix
elif uri_suffix is None:
uri_suffix = ""
streams = {}
response = self._conn.get_json("/ws/DataStream{}".format(uri_suffix))
for stream_data in response["items"]:
stream_id = stream_data["streamId"]
stream = DataStream(self._conn, stream_id, stream_data)
streams[stream_id] = stream
return streams |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_stream(self, stream_id, data_type, description=None, data_ttl=None, rollup_ttl=None, units=None):
"""Create a new data stream on Device Cloud This method will attempt to create a new data stream on Device Cloud. This method will only succeed if the stream does not already exist. :param str stream_id: The path/id of the stream being created on Device Cloud. :param str data_type: The type of this stream. This must be in the set `{ INTEGER, LONG, FLOAT, DOUBLE, STRING, BINARY, UNKNOWN }`. These values are available in constants like :attr:`~STREAM_TYPE_INTEGER`. :param str description: An optional description of this stream. See :meth:`~DataStream.get_description`. :param int data_ttl: The TTL for data points in this stream. See :meth:`~DataStream.get_data_ttl`. :param int rollup_ttl: The TTL for performing rollups on data. See :meth:~DataStream.get_rollup_ttl`. :param str units: Units for data in this stream. See :meth:`~DataStream.get_units` """ |
stream_id = validate_type(stream_id, *six.string_types)
data_type = validate_type(data_type, type(None), *six.string_types)
if isinstance(data_type, six.string_types):
data_type = str(data_type).upper()
if data_type not in ({None} | set(DSTREAM_TYPE_MAP.keys())):
raise ValueError("data_type %r is not valid" % data_type)
description = validate_type(description, type(None), *six.string_types)
data_ttl = validate_type(data_ttl, type(None), *six.integer_types)
rollup_ttl = validate_type(rollup_ttl, type(None), *six.integer_types)
units = validate_type(units, type(None), *six.string_types)
sio = StringIO()
sio.write("<DataStream>")
conditional_write(sio, "<streamId>{}</streamId>", stream_id)
conditional_write(sio, "<dataType>{}</dataType>", data_type)
conditional_write(sio, "<description>{}</description>", description)
conditional_write(sio, "<dataTtl>{}</dataTtl>", data_ttl)
conditional_write(sio, "<rollupTtl>{}</rollupTtl>", rollup_ttl)
conditional_write(sio, "<units>{}</units>", units)
sio.write("</DataStream>")
self._conn.post("/ws/DataStream", sio.getvalue())
logger.info("Data stream (%s) created successfully", stream_id)
stream = DataStream(self._conn, stream_id)
return stream |
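A minimal usage sketch (``dc.streams`` and the ``devicecloud.streams`` import path are assumptions; ``STREAM_TYPE_FLOAT`` is one of the constants referenced later in this file): create a float stream for temperature readings.

from devicecloud.streams import STREAM_TYPE_FLOAT  # assumed import path

temperature = dc.streams.create_stream(
    stream_id="environment/temperature",
    data_type=STREAM_TYPE_FLOAT,
    description="Ambient temperature",
    units="degrees C",
)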
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_stream_if_exists(self, stream_id):
"""Return a reference to a stream with the given ``stream_id`` if it exists This works similar to :py:meth:`get_stream` but will return None if the stream is not already created. :param stream_id: The path of the stream on Device Cloud :raises TypeError: if the stream_id provided is the wrong type :raises ValueError: if the stream_id is not properly formed :return: :class:`.DataStream` instance with the provided stream_id :rtype: :class:`~DataStream` """ |
stream = self.get_stream(stream_id)
try:
stream.get_data_type(use_cached=True)
except NoSuchStreamException:
return None
else:
return stream |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def from_json(cls, stream, json_data):
"""Create a new DataPoint object from device cloud JSON data :param DataStream stream: The :class:`~DataStream` out of which this data is coming :param dict json_data: Deserialized JSON data from Device Cloud about this device :raises ValueError: if the data is malformed :return: (:class:`~DataPoint`) newly created :class:`~DataPoint` """ |
type_converter = _get_decoder_method(stream.get_data_type())
data = type_converter(json_data.get("data"))
return cls(
# these are actually properties of the stream, not the data point
stream_id=stream.get_stream_id(),
data_type=stream.get_data_type(),
units=stream.get_units(),
# and these are part of the data point itself
data=data,
description=json_data.get("description"),
timestamp=json_data.get("timestampISO"),
server_timestamp=json_data.get("serverTimestampISO"),
quality=json_data.get("quality"),
location=json_data.get("location"),
dp_id=json_data.get("id"),
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def from_rollup_json(cls, stream, json_data):
"""Rollup json data from the server looks slightly different :param DataStream stream: The :class:`~DataStream` out of which this data is coming :param dict json_data: Deserialized JSON data from Device Cloud about this device :raises ValueError: if the data is malformed :return: (:class:`~DataPoint`) newly created :class:`~DataPoint` """ |
dp = cls.from_json(stream, json_data)
# Special handling for timestamp
timestamp = isoformat(dc_utc_timestamp_to_dt(int(json_data.get("timestamp"))))
# Special handling for data, all rollup data is float type
type_converter = _get_decoder_method(stream.get_data_type())
data = type_converter(float(json_data.get("data")))
# Update the special fields
dp.set_timestamp(timestamp)
dp.set_data(data)
return dp |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_stream_id(self, stream_id):
"""Set the stream id associated with this data point""" |
stream_id = validate_type(stream_id, type(None), *six.string_types)
if stream_id is not None:
stream_id = stream_id.lstrip('/')
self._stream_id = stream_id |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_description(self, description):
"""Set the description for this data point""" |
self._description = validate_type(description, type(None), *six.string_types) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_quality(self, quality):
"""Set the quality for this sample Quality is stored on Device Cloud as a 32-bit integer, so the input to this function should be either None, an integer, or a string that can be converted to an integer. """ |
if isinstance(quality, *six.string_types):
quality = int(quality)
elif isinstance(quality, float):
quality = int(quality)
self._quality = validate_type(quality, type(None), *six.integer_types) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_location(self, location):
"""Set the location for this data point The location must be either None (if no location data is known) or a 3-tuple of floating point values in the form (latitude-degrees, longitude-degrees, altitude-meters). """ |
if location is None:
self._location = location
elif isinstance(location, *six.string_types): # from device cloud, convert from csv
parts = str(location).split(",")
if len(parts) == 3:
self._location = tuple(map(float, parts))
return
else:
raise ValueError("Location string %r has unexpected format" % location)
# TODO: could maybe try to allow any iterable but this covers the most common cases
elif (isinstance(location, (tuple, list))
and len(location) == 3
and all([isinstance(x, (float, six.integer_types)) for x in location])):
self._location = tuple(map(float, location)) # coerce ints to float
else:
raise TypeError("Location must be None or 3-tuple of floats")
self._location = location |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_data_type(self, data_type):
"""Set the data type for ths data point The data type is actually associated with the stream itself and should not (generally) vary on a point-per-point basis. That being said, if creating a new stream by writing a datapoint, it may be beneficial to include this information. The data type provided should be in the set of available data types of { INTEGER, LONG, FLOAT, DOUBLE, STRING, BINARY, UNKNOWN }. """ |
validate_type(data_type, type(None), *six.string_types)
if isinstance(data_type, *six.string_types):
data_type = str(data_type).upper()
if not data_type in ({None} | set(DSTREAM_TYPE_MAP.keys())):
raise ValueError("Provided data type not in available set of types")
self._data_type = data_type |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_units(self, unit):
"""Set the unit for this data point Unit, as with data_type, are actually associated with the stream and not the individual data point. As such, changing this within a stream is not encouraged. Setting the unit on the data point is useful when the stream might be created with the write of a data point. """ |
self._units = validate_type(unit, type(None), *six.string_types) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_xml(self):
"""Convert this datapoint into a form suitable for pushing to device cloud An XML string will be returned that will contain all pieces of information set on this datapoint. Values not set (e.g. quality) will be ommitted. """ |
type_converter = _get_encoder_method(self._data_type)
# Convert from python native to device cloud
encoded_data = type_converter(self._data)
out = StringIO()
out.write("<DataPoint>")
out.write("<streamId>{}</streamId>".format(self.get_stream_id()))
out.write("<data>{}</data>".format(encoded_data))
conditional_write(out, "<description>{}</description>", self.get_description())
if self.get_timestamp() is not None:
out.write("<timestamp>{}</timestamp>".format(isoformat(self.get_timestamp())))
conditional_write(out, "<quality>{}</quality>", self.get_quality())
if self.get_location() is not None:
out.write("<location>%s</location>" % ",".join(map(str, self.get_location())))
conditional_write(out, "<streamType>{}</streamType>", self.get_data_type())
conditional_write(out, "<streamUnits>{}</streamUnits>", self.get_units())
out.write("</DataPoint>")
return out.getvalue() |
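A sketch of the XML this method produces for a simple point. The keyword names are taken from ``from_json`` above; the output shown is approximate and assumes the FLOAT encoder simply stringifies the value.

dp = DataPoint(data=21.5, stream_id="environment/temperature",
               data_type="FLOAT", units="degrees C")
print(dp.to_xml())
# Output will resemble:
# <DataPoint><streamId>environment/temperature</streamId><data>21.5</data>
# <streamType>FLOAT</streamType><streamUnits>degrees C</streamUnits></DataPoint>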
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_stream_metadata(self, use_cached):
"""Retrieve metadata about this stream from Device Cloud""" |
if self._cached_data is None or not use_cached:
try:
self._cached_data = self._conn.get_json("/ws/DataStream/%s" % self._stream_id)["items"][0]
except DeviceCloudHttpException as http_exception:
if http_exception.response.status_code == 404:
raise NoSuchStreamException("Stream with id %r has not been created" % self._stream_id)
raise http_exception
return self._cached_data |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_data_type(self, use_cached=True):
"""Get the data type of this stream if it exists The data type is the type of data stored in this data stream. Valid types include: * INTEGER - data can be represented with a network (= big-endian) 32-bit two's-complement integer. Data with this type maps to a python int. * LONG - data can be represented with a network (= big-endian) 64-bit two's complement integer. Data with this type maps to a python int. * FLOAT - data can be represented with a network (= big-endian) 32-bit IEEE754 floating point. Data with this type maps to a python float. * DOUBLE - data can be represented with a network (= big-endian) 64-bit IEEE754 floating point. Data with this type maps to a python float. * STRING - UTF-8. Data with this type map to a python string * BINARY - Data with this type map to a python string. * UNKNOWN - Data with this type map to a python string. :param bool use_cached: If False, the function will always request the latest from Device Cloud. If True, the device will not make a request if it already has cached data. :return: The data type of this stream as a string :rtype: str """ |
dtype = self._get_stream_metadata(use_cached).get("dataType")
if dtype is not None:
dtype = dtype.upper()
return dtype |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_data_ttl(self, use_cached=True):
"""Retrieve the dataTTL for this stream The dataTtl is the time to live (TTL) in seconds for data points stored in the data stream. A data point expires after the configured amount of time and is automatically deleted. :param bool use_cached: If False, the function will always request the latest from Device Cloud. If True, the device will not make a request if it already has cached data. :raises devicecloud.DeviceCloudHttpException: in the case of an unexpected http error :raises devicecloud.streams.NoSuchStreamException: if this stream has not yet been created :return: The dataTtl associated with this stream in seconds :rtype: int or None """ |
data_ttl_text = self._get_stream_metadata(use_cached).get("dataTtl")
return int(data_ttl_text) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_rollup_ttl(self, use_cached=True):
"""Retrieve the rollupTtl for this stream The rollupTtl is the time to live (TTL) in seconds for the aggregate roll-ups of data points stored in the stream. A roll-up expires after the configured amount of time and is automatically deleted. :param bool use_cached: If False, the function will always request the latest from Device Cloud. If True, the device will not make a request if it already has cached data. :raises devicecloud.DeviceCloudHttpException: in the case of an unexpected http error :raises devicecloud.streams.NoSuchStreamException: if this stream has not yet been created :return: The rollupTtl associated with this stream in seconds :rtype: int or None """ |
rollup_ttl_text = self._get_stream_metadata(use_cached).get("rollupTtl")
return int(rollup_ttl_text) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_current_value(self, use_cached=False):
"""Return the most recent DataPoint value written to a stream The current value is the last recorded data point for this stream. :param bool use_cached: If False, the function will always request the latest from Device Cloud. If True, the device will not make a request if it already has cached data. :raises devicecloud.DeviceCloudHttpException: in the case of an unexpected http error :raises devicecloud.streams.NoSuchStreamException: if this stream has not yet been created :return: The most recent value written to this stream (or None if nothing has been written) :rtype: :class:`~DataPoint` or None """ |
current_value = self._get_stream_metadata(use_cached).get("currentValue")
if current_value:
return DataPoint.from_json(self, current_value)
else:
return None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete(self):
"""Delete this stream from Device Cloud along with its history This call will return None on success and raise an exception in the event of an error performing the deletion. :raises devicecloud.DeviceCloudHttpException: in the case of an unexpected http error :raises devicecloud.streams.NoSuchStreamException: if this stream has already been deleted """ |
try:
self._conn.delete("/ws/DataStream/{}".format(self.get_stream_id()))
except DeviceCloudHttpException as http_exception:
if http_exception.response.status_code == 404:
raise NoSuchStreamException() # this branch is present, but the DC appears to just return 200 again
else:
raise http_exception |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete_datapoint(self, datapoint):
"""Delete the provided datapoint from this stream :raises devicecloud.DeviceCloudHttpException: in the case of an unexpected http error """ |
datapoint = validate_type(datapoint, DataPoint)
self._conn.delete("/ws/DataPoint/{stream_id}/{datapoint_id}".format(
stream_id=self.get_stream_id(),
datapoint_id=datapoint.get_id(),
)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete_datapoints_in_time_range(self, start_dt=None, end_dt=None):
"""Delete datapoints from this stream between the provided start and end times If neither a start or end time is specified, all data points in the stream will be deleted. :param start_dt: The datetime after which data points should be deleted or None if all data points from the beginning of time should be deleted. :param end_dt: The datetime before which data points should be deleted or None if all data points until the current time should be deleted. :raises devicecloud.DeviceCloudHttpException: in the case of an unexpected http error """ |
start_dt = to_none_or_dt(validate_type(start_dt, datetime.datetime, type(None)))
end_dt = to_none_or_dt(validate_type(end_dt, datetime.datetime, type(None)))
params = {}
if start_dt is not None:
params['startTime'] = isoformat(start_dt)
if end_dt is not None:
params['endTime'] = isoformat(end_dt)
self._conn.delete("/ws/DataPoint/{stream_id}{querystring}".format(
stream_id=self.get_stream_id(),
querystring="?" + urllib.parse.urlencode(params) if params else "",
)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def write(self, datapoint):
"""Write some raw data to a stream using the DataPoint API This method will mutate the datapoint provided to populate it with information available from the stream as it is available (but without making any new HTTP requests). For instance, we will add in information about the stream data type if it is available so that proper type conversion happens. Values already set on the datapoint will not be overridden (except for path) :param DataPoint datapoint: The :class:`.DataPoint` that should be written to Device Cloud """ |
if not isinstance(datapoint, DataPoint):
raise TypeError("First argument must be a DataPoint object")
datapoint._stream_id = self.get_stream_id()
if self._cached_data is not None and datapoint.get_data_type() is None:
datapoint._data_type = self.get_data_type()
self._conn.post("/ws/DataPoint/{}".format(self.get_stream_id()), datapoint.to_xml()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read(self, start_time=None, end_time=None, use_client_timeline=True, newest_first=True, rollup_interval=None, rollup_method=None, timezone=None, page_size=1000):
"""Read one or more DataPoints from a stream .. warning:: The data points from Device Cloud is a paged data set. When iterating over the result set there could be delays when we hit the end of a page. If this is undesirable, the caller should collect all results into a data structure first before iterating over the result set. :param start_time: The start time for the window of data points to read. None means that we should start with the oldest data available. :type start_time: :class:`datetime.datetime` or None :param end_time: The end time for the window of data points to read. None means that we should include all points received until this point in time. :type end_time: :class:`datetime.datetime` or None :param bool use_client_timeline: If True, the times used will be those provided by clients writing data points into the cloud (which also default to server time if the a timestamp was not included by the client). This is usually what you want. If False, the server timestamp will be used which records when the data point was received. :param bool newest_first: If True, results will be ordered from newest to oldest (descending order). If False, results will be returned oldest to newest. :param rollup_interval: the roll-up interval that should be used if one is desired at all. Rollups will not be performed if None is specified for the interval. Valid roll-up interval values are None, "half", "hourly", "day", "week", and "month". See `DataPoints documentation <http://ftp1.digi.com/support/documentation/html/90002008/90002008_P/Default.htm#ProgrammingTopics/DataStreams.htm#DataPoints>`_ for additional details on these values. :type rollup_interval: str or None :param rollup_method: The aggregation applied to values in the points within the specified rollup_interval. Available methods are None, "sum", "average", "min", "max", "count", and "standarddev". See `DataPoint documentation <http://ftp1.digi.com/support/documentation/html/90002008/90002008_P/Default.htm#ProgrammingTopics/DataStreams.htm#DataPoints>`_ for additional details on these values. :type rollup_method: str or None :param timezone: timezone for calculating roll-ups. This determines roll-up interval boundaries and only applies to roll-ups of a day or larger (for example, day, week, or month). Note that it does not apply to the startTime and endTime parameters. See the `Timestamps <http://ftp1.digi.com/support/documentation/html/90002008/90002008_P/Default.htm#ProgrammingTopics/DataStreams.htm#timestamp>`_ and `Supported Time Zones <http://ftp1.digi.com/support/documentation/html/90002008/90002008_P/Default.htm#ProgrammingTopics/DataStreams.htm#TimeZones>`_ sections for more information. :type timezone: str or None :param int page_size: The number of results that we should attempt to retrieve from the device cloud in each page. Generally, this can be left at its default value unless you have a good reason to change the parameter for performance reasons. :returns: A generator object which one can iterate over the DataPoints read. """ |
is_rollup = False
if (rollup_interval is not None) or (rollup_method is not None):
is_rollup = True
numeric_types = [
STREAM_TYPE_INTEGER,
STREAM_TYPE_LONG,
STREAM_TYPE_FLOAT,
STREAM_TYPE_DOUBLE,
STREAM_TYPE_STRING,
STREAM_TYPE_BINARY,
STREAM_TYPE_UNKNOWN,
]
if self.get_data_type(use_cached=True) not in numeric_types:
raise InvalidRollupDatatype('Rollups only support numerical DataPoints')
# Validate function inputs
start_time = to_none_or_dt(validate_type(start_time, datetime.datetime, type(None)))
end_time = to_none_or_dt(validate_type(end_time, datetime.datetime, type(None)))
use_client_timeline = validate_type(use_client_timeline, bool)
newest_first = validate_type(newest_first, bool)
rollup_interval = validate_type(rollup_interval, type(None), *six.string_types)
if not rollup_interval in {None,
ROLLUP_INTERVAL_HALF,
ROLLUP_INTERVAL_HOUR,
ROLLUP_INTERVAL_DAY,
ROLLUP_INTERVAL_WEEK,
ROLLUP_INTERVAL_MONTH, }:
raise ValueError("Invalid rollup_interval %r provided" % (rollup_interval, ))
rollup_method = validate_type(rollup_method, type(None), *six.string_types)
if not rollup_method in {None,
ROLLUP_METHOD_SUM,
ROLLUP_METHOD_AVERAGE,
ROLLUP_METHOD_MIN,
ROLLUP_METHOD_MAX,
ROLLUP_METHOD_COUNT,
ROLLUP_METHOD_STDDEV}:
raise ValueError("Invalid rollup_method %r provided" % (rollup_method, ))
timezone = validate_type(timezone, type(None), *six.string_types)
page_size = validate_type(page_size, *six.integer_types)
# Remember that there could be multiple pages of data and we want to provide
# in iterator over the result set. To start the process out, we need to make
# an initial request without a page cursor. We should get one in response to
# our first request which we will use to page through the result set
query_parameters = {
'timeline': 'client' if use_client_timeline else 'server',
'order': 'descending' if newest_first else 'ascending',
'size': page_size
}
if start_time is not None:
query_parameters["startTime"] = isoformat(start_time)
if end_time is not None:
query_parameters["endTime"] = isoformat(end_time)
if rollup_interval is not None:
query_parameters["rollupInterval"] = rollup_interval
if rollup_method is not None:
query_parameters["rollupMethod"] = rollup_method
if timezone is not None:
query_parameters["timezone"] = timezone
result_size = page_size
while result_size == page_size:
# request the next page of data or first if pageCursor is not set as query param
try:
result = self._conn.get_json("/ws/DataPoint/{stream_id}?{query_params}".format(
stream_id=self.get_stream_id(),
query_params=urllib.parse.urlencode(query_parameters)
))
except DeviceCloudHttpException as http_exception:
if http_exception.response.status_code == 404:
raise NoSuchStreamException()
raise http_exception
result_size = int(result["resultSize"]) # how many are actually included here?
query_parameters["pageCursor"] = result.get("pageCursor") # will not be present if result set is empty
for item_info in result.get("items", []):
if is_rollup:
data_point = DataPoint.from_rollup_json(self, item_info)
else:
data_point = DataPoint.from_json(self, item_info)
yield data_point |
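A minimal usage sketch: read hourly averages for the last day, oldest first. The rollup constants are the ones validated above; ``dc.streams``, the import path, and the ``get_timestamp``/``get_data`` accessor names (mirroring the setters shown earlier) are assumptions.

import datetime

from devicecloud.streams import ROLLUP_INTERVAL_HOUR, ROLLUP_METHOD_AVERAGE  # assumed import path

stream = dc.streams.get_stream("environment/temperature")
start = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=1)
for dp in stream.read(start_time=start,
                      rollup_interval=ROLLUP_INTERVAL_HOUR,
                      rollup_method=ROLLUP_METHOD_AVERAGE,
                      newest_first=False):
    print(dp.get_timestamp(), dp.get_data())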
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _read_msg_header(session):
""" Perform a read on input socket to consume headers and then return a tuple of message type, message length. :param session: Push Session to read data for. Returns response type (i.e. PUBLISH_MESSAGE) if header was completely read, otherwise None if header was not completely read. """ |
try:
data = session.socket.recv(6 - len(session.data))
if len(data) == 0: # No Data on Socket. Likely closed.
return NO_DATA
session.data += data
# Data still not completely read.
if len(session.data) < 6:
return INCOMPLETE
except ssl.SSLError:
# This can happen when select gets triggered
# for an SSL socket and data has not yet been
# read.
return INCOMPLETE
session.message_length = struct.unpack('!i', session.data[2:6])[0]
response_type = struct.unpack('!H', session.data[0:2])[0]
# Clear out session data as header is consumed.
session.data = six.b("")
return response_type |
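A self-contained sketch of the 6-byte push header this parser consumes: a big-endian 2-byte message type followed by a big-endian 4-byte payload length. The constant value used here is illustrative only.

import struct

PUBLISH_MESSAGE = 0x03  # assumed value, for illustration only
payload = b'{"Document": {}}'
header = struct.pack('!Hi', PUBLISH_MESSAGE, len(payload))

response_type = struct.unpack('!H', header[0:2])[0]
message_length = struct.unpack('!i', header[2:6])[0]
assert (response_type, message_length) == (PUBLISH_MESSAGE, len(payload))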
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _read_msg(session):
""" Perform a read on input socket to consume message and then return the payload and block_id in a tuple. :param session: Push Session to read data for. """ |
if len(session.data) == session.message_length:
# Data Already completely read. Return
return True
try:
data = session.socket.recv(session.message_length - len(session.data))
if len(data) == 0:
raise PushException("No Data on Socket!")
session.data += data
except ssl.SSLError:
# This can happen when select gets triggered
# for an SSL socket and data has not yet been
# read. Wait for it to get triggered again.
return False
# Whether or not all data was read.
return len(session.data) == session.message_length |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def send_connection_request(self):
""" Sends a ConnectionRequest to the iDigi server using the credentials established with the id of the monitor as defined in the monitor member. """ |
try:
self.log.info("Sending ConnectionRequest for Monitor %s."
% self.monitor_id)
# Send connection request and perform a receive to ensure
# request is authenticated.
# Protocol Version = 1.
payload = struct.pack('!H', 0x01)
# Username Length.
payload += struct.pack('!H', len(self.client.username))
# Username.
payload += six.b(self.client.username)
# Password Length.
payload += struct.pack('!H', len(self.client.password))
# Password.
payload += six.b(self.client.password)
# Monitor ID.
payload += struct.pack('!L', int(self.monitor_id))
# Header 6 Bytes : Type [2 bytes] & Length [4 Bytes]
# ConnectionRequest is Type 0x01.
data = struct.pack("!HL", CONNECTION_REQUEST, len(payload))
# The full payload.
data += payload
# Send Connection Request.
self.socket.send(data)
# Set a 60 second blocking on recv, if we don't get any data
# within 60 seconds, timeout which will throw an exception.
self.socket.settimeout(60)
# Should receive 10 bytes with ConnectionResponse.
response = self.socket.recv(10)
# Return the socket to non-blocking mode.
self.socket.settimeout(0)
if len(response) != 10:
raise PushException("Length of Connection Request Response "
"(%d) is not 10." % len(response))
# Type
response_type = int(struct.unpack("!H", response[0:2])[0])
if response_type != CONNECTION_RESPONSE:
raise PushException(
"Connection Response Type (%d) is not "
"ConnectionResponse Type (%d)." % (response_type, CONNECTION_RESPONSE))
status_code = struct.unpack("!H", response[6:8])[0]
self.log.info("Got ConnectionResponse for Monitor %s. Status %s."
% (self.monitor_id, status_code))
if status_code != STATUS_OK:
raise PushException("Connection Response Status Code (%d) is "
"not STATUS_OK (%d)." % (status_code, STATUS_OK))
except Exception as exception:
# TODO(posborne): This is bad! It isn't necessarily a socket exception!
# Likely a socket exception, close it and raise an exception.
self.socket.close()
self.socket = None
raise exception |
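A minimal sketch of the 10-byte ConnectionResponse that the code above expects back, using assumed constants (CONNECTION_RESPONSE = 0x02 and STATUS_OK = 200 are not confirmed by this excerpt, and the meaning of the final two bytes is left unspecified here):
import struct
CONNECTION_RESPONSE = 0x02  # assumed value
STATUS_OK = 200             # assumed value
# Type (2) + length (4) + status code (2) + 2 trailing bytes = 10 bytes total.
response = struct.pack('!HLHH', CONNECTION_RESPONSE, 4, STATUS_OK, 0)
response_type = int(struct.unpack('!H', response[0:2])[0])
status_code = struct.unpack('!H', response[6:8])[0]
assert response_type == CONNECTION_RESPONSE and status_code == STATUS_OK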
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def start(self):
"""Creates a TCP connection to Device Cloud and sends a ConnectionRequest message""" |
self.log.info("Starting Insecure Session for Monitor %s" % self.monitor_id)
if self.socket is not None:
raise Exception("Socket already established for %s." % self)
try:
self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.socket.connect((self.client.hostname, PUSH_OPEN_PORT))
self.socket.setblocking(0)
except socket.error as exception:
self.socket.close()
self.socket = None
raise
self.send_connection_request() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def start(self):
""" Creates a SSL connection to the iDigi Server and sends a ConnectionRequest message. """ |
self.log.info("Starting SSL Session for Monitor %s."
% self.monitor_id)
if self.socket is not None:
raise Exception("Socket already established for %s." % self)
try:
# Create socket, wrap in SSL and connect.
self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Validate that certificate server uses matches what we expect.
if self.ca_certs is not None:
self.socket = ssl.wrap_socket(self.socket,
cert_reqs=ssl.CERT_REQUIRED,
ca_certs=self.ca_certs)
else:
self.socket = ssl.wrap_socket(self.socket)
self.socket.connect((self.client.hostname, PUSH_SECURE_PORT))
self.socket.setblocking(0)
except Exception as exception:
self.socket.close()
self.socket = None
raise exception
self.send_connection_request() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _consume_queue(self):
""" Continually blocks until data is on the internal queue, then calls the session's registered callback and sends a PublishMessageReceived if callback returned True. """ |
while True:
session, block_id, raw_data = self._queue.get()
data = json.loads(raw_data.decode('utf-8')) # decode as JSON
try:
result = session.callback(data)
if result is None:
self.log.warn("Callback %r returned None, expected boolean. Messages "
"are not marked as received unless True is returned", session.callback)
elif result:
# Send a Successful PublishMessageReceived with the
# block id sent in request
if self._write_queue is not None:
response_message = struct.pack('!HHH',
PUBLISH_MESSAGE_RECEIVED,
block_id, 200)
self._write_queue.put((session.socket, response_message))
except Exception as exception:
self.log.exception(exception)
self._queue.task_done() |
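A hedged sketch of the callback contract enforced above: the callback receives the JSON-decoded payload and must return True for the client to send a PublishMessageReceived acknowledgement.
def my_callback(data):
    # 'data' is the JSON-decoded payload pushed for the monitored topic.
    print("Received push payload:", data)
    # Return True to acknowledge; False or None leaves the message unacknowledged
    # (and triggers the warning logged in _consume_queue).
    return True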
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def queue_callback(self, session, block_id, data):
""" Queues up a callback event to occur for a session with the given payload data. Will block if the queue is full. :param session: the session with a defined callback function to call. :param block_id: the block_id of the message received. :param data: the data payload of the message received. """ |
self._queue.put((session, block_id, data)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _restart_session(self, session):
"""Restarts and re-establishes session :param session: The session to restart """ |
# remove old session key, if socket is None, that means the
# session was closed by user and there is no need to restart.
if session.socket is not None:
self.log.info("Attempting restart session for Monitor Id %s."
% session.monitor_id)
del self.sessions[session.socket.fileno()]
session.stop()
session.start()
self.sessions[session.socket.fileno()] = session |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _writer(self):
""" Indefinitely checks the writer queue for data to write to socket. """ |
while not self.closed:
try:
sock, data = self._write_queue.get(timeout=0.1)
self._write_queue.task_done()
sock.send(data)
except Empty:
pass # nothing to write after timeout
except socket.error as err:
if err.errno == errno.EBADF:
self._clean_dead_sessions() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _select(self):
""" While the client is not marked as closed, performs a socket select on all PushSession sockets. If any data is received, parses and forwards it on to the callback function. If the callback is successful, a PublishMessageReceived message is sent. """ |
try:
while not self.closed:
try:
inputready = select.select(self.sessions.keys(), [], [], 0.1)[0]
for sock in inputready:
session = self.sessions[sock]
sck = session.socket
if sck is None:
# Socket has since been deleted, continue
continue
# If no defined message length, nothing has been
# consumed yet, parse the header.
if session.message_length == 0:
# Read header information before receiving rest of
# message.
response_type = _read_msg_header(session)
if response_type == NO_DATA:
# No data could be read, assume socket closed.
if session.socket is not None:
self.log.error("Socket closed for Monitor %s." % session.monitor_id)
self._restart_session(session)
continue
elif response_type == INCOMPLETE:
# More Data to be read. Continue.
continue
elif response_type != PUBLISH_MESSAGE:
self.log.warn("Response Type (%x) does not match PublishMessage (%x)"
% (response_type, PUBLISH_MESSAGE))
continue
try:
if not _read_msg(session):
# Data not completely read, continue.
continue
except PushException as err:
# If Socket is None, it was closed,
# otherwise it was closed when it shouldn't
# have been restart it.
session.data = six.b("")
session.message_length = 0
if session.socket is None:
del self.sessions[sck]
else:
self.log.exception(err)
self._restart_session(session)
continue
# We received full payload,
# clear session data and parse it.
data = session.data
session.data = six.b("")
session.message_length = 0
block_id = struct.unpack('!H', data[0:2])[0]
compression = struct.unpack('!B', data[4:5])[0]
payload = data[10:]
if compression == 0x01:
# Data is compressed, uncompress it.
payload = zlib.decompress(payload)
# Enqueue payload into a callback queue to be
# invoked
self._callback_pool.queue_callback(session, block_id, payload)
except select.error as err:
# Evaluate sessions if we get a bad file descriptor, if
# socket is gone, delete the session.
if err.args[0] == errno.EBADF:
self._clean_dead_sessions()
except Exception as err:
self.log.exception(err)
finally:
for session in self.sessions.values():
if session is not None:
session.stop() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _init_threads(self):
"""Initializes the IO and Writer threads""" |
if self._io_thread is None:
self._io_thread = Thread(target=self._select)
self._io_thread.start()
if self._writer_thread is None:
self._writer_thread = Thread(target=self._writer)
self._writer_thread.start() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_session(self, callback, monitor_id):
""" Creates and Returns a PushSession instance based on the input monitor and callback. When data is received, callback will be invoked. If neither monitor or monitor_id are specified, throws an Exception. :param callback: Callback function to call when PublishMessage messages are received. Expects 1 argument which will contain the payload of the pushed message. Additionally, expects function to return True if callback was able to process the message, False or None otherwise. :param monitor_id: The id of the Monitor, will be queried to understand parameters of the monitor. """ |
self.log.info("Creating Session for Monitor %s." % monitor_id)
session = SecurePushSession(callback, monitor_id, self, self._ca_certs) \
if self._secure else PushSession(callback, monitor_id, self)
session.start()
self.sessions[session.socket.fileno()] = session
self._init_threads()
return session |
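A minimal usage sketch, assuming an already-configured push client instance named client (its construction is not shown in this excerpt) and a monitor that has been created separately:
def on_push(data):
    print(data)
    return True  # acknowledge receipt so Device Cloud does not resend
session = client.create_session(on_push, monitor_id="12345")
# ... later, when shutting down:
client.stop()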
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def stop(self):
"""Stops all session activity. Blocks until io and writer thread dies """ |
if self._io_thread is not None:
self.log.info("Waiting for I/O thread to stop...")
self.closed = True
self._io_thread.join()
if self._writer_thread is not None:
self.log.info("Waiting for Writer Thread to stop...")
self.closed = True
self._writer_thread.join()
self.log.info("All worker threads stopped.") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def plotF0(fromTuple, toTuple, mergeTupleList, fnFullPath):
'''
Plots the original data in a graph above the plot of the dtw'ed data
'''
_matplotlibCheck()
plt.hold(True)
fig, (ax0) = plt.subplots(nrows=1)
# Old data
plot1 = ax0.plot(fromTuple[0], fromTuple[1], color='red',
linewidth=2, label="From")
plot2 = ax0.plot(toTuple[0], toTuple[1], color='blue',
linewidth=2, label="To")
ax0.set_title("Plot of F0 Morph")
plt.ylabel('Pitch (hz)')
plt.xlabel('Time (s)')
# Merge data
colorValue = 0
colorStep = 255.0 / len(mergeTupleList)
for timeList, valueList in mergeTupleList:
colorValue += colorStep
hexValue = "#%02x0000" % int(255 - colorValue)
if int(colorValue) == 255:
ax0.plot(timeList, valueList, color=hexValue, linewidth=1,
label="Merged line, final iteration")
else:
ax0.plot(timeList, valueList, color=hexValue, linewidth=1)
plt.legend(loc=1, borderaxespad=0.)
# plt.legend([plot1, plot2, plot3], ["From", "To", "Merged line"])
plt.savefig(fnFullPath, dpi=300, bbox_inches='tight')
plt.close(fig) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def getPitchForIntervals(data, tgFN, tierName):
'''
Preps data for use in f0Morph
'''
tg = tgio.openTextgrid(tgFN)
data = tg.tierDict[tierName].getValuesInIntervals(data)
data = [dataList for _, dataList in data]
return data |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def f0Morph(fromWavFN, pitchPath, stepList,
outputName, doPlotPitchSteps, fromPitchData, toPitchData,
outputMinPitch, outputMaxPitch, praatEXE, keepPitchRange=False,
keepAveragePitch=False, sourcePitchDataList=None,
minIntervalLength=0.3):
'''
Resynthesizes the pitch track from a source to a target wav file
fromPitchData and toPitchData should be segmented according to the
portions that you want to morph. The two lists must have the same
number of sublists.
Occurs over a three-step process.
This function can act as a template for how to use the function
morph_sequence.morphChunkedDataLists to morph pitch contours or
other data.
By default, everything is morphed, but it is possible to maintain elements
of the original speaker's pitch (average pitch and pitch range) by setting
the appropriate flag)
sourcePitchDataList: if passed in, any regions unspecified by
fromPitchData will be sampled from this list. In
essence, this allows one to leave segments of
the original pitch contour untouched by the
morph process.
'''
fromDuration = audio_scripts.getSoundFileDuration(fromWavFN)
# Find source pitch samples that will be mixed in with the target
# pitch samples later
nonMorphPitchData = []
if sourcePitchDataList is not None:
timeList = sorted(fromPitchData)
timeList = [(row[0][0], row[-1][0]) for row in timeList]
endTime = sourcePitchDataList[-1][0]
invertedTimeList = praatio_utils.invertIntervalList(timeList, endTime)
invertedTimeList = [(start, stop) for start, stop in invertedTimeList
if stop - start > minIntervalLength]
for start, stop in invertedTimeList:
pitchList = praatio_utils.getValuesInInterval(sourcePitchDataList,
start,
stop)
nonMorphPitchData.extend(pitchList)
# Iterative pitch tier data path
pitchTierPath = join(pitchPath, "pitchTiers")
resynthesizedPath = join(pitchPath, "f0_resynthesized_wavs")
for tmpPath in [pitchTierPath, resynthesizedPath]:
utils.makeDir(tmpPath)
# 1. Prepare the data for morphing - acquire the segments to merge
# (Done elsewhere, with the input fed into this function)
# 2. Morph the fromData to the toData
try:
finalOutputList = morph_sequence.morphChunkedDataLists(fromPitchData,
toPitchData,
stepList)
except IndexError:
raise MissingPitchDataException()
fromPitchData = [row for subList in fromPitchData for row in subList]
toPitchData = [row for subList in toPitchData for row in subList]
# 3. Save the pitch data and resynthesize the pitch
mergedDataList = []
for i in range(0, len(finalOutputList)):
outputDataList = finalOutputList[i]
if keepPitchRange is True:
outputDataList = morph_sequence.morphRange(outputDataList,
fromPitchData)
if keepAveragePitch is True:
outputDataList = morph_sequence.morphAveragePitch(outputDataList,
fromPitchData)
if sourcePitchDataList is not None:
outputDataList.extend(nonMorphPitchData)
outputDataList.sort()
stepOutputName = "%s_%0.3g" % (outputName, stepList[i])
pitchFNFullPath = join(pitchTierPath, "%s.PitchTier" % stepOutputName)
outputFN = join(resynthesizedPath, "%s.wav" % stepOutputName)
pointObj = dataio.PointObject2D(outputDataList, dataio.PITCH,
0, fromDuration)
pointObj.save(pitchFNFullPath)
outputTime, outputVals = zip(*outputDataList)
mergedDataList.append((outputTime, outputVals))
praat_scripts.resynthesizePitch(praatEXE, fromWavFN, pitchFNFullPath,
outputFN, outputMinPitch,
outputMaxPitch)
# 4. (Optional) Plot the generated contours
if doPlotPitchSteps:
fromTime, fromVals = zip(*fromPitchData)
toTime, toVals = zip(*toPitchData)
plot_morphed_data.plotF0((fromTime, fromVals),
(toTime, toVals),
mergedDataList,
join(pitchTierPath,
"%s.png" % outputName)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def adjustPeakHeight(self, heightAmount):
'''
Adjust peak height
The foot of the accent is left unchanged and intermediate
values are linearly scaled
'''
if heightAmount == 0:
return
pitchList = [f0V for _, f0V in self.pointList]
minV = min(pitchList)
maxV = max(pitchList)
scale = lambda x, y: x + y * (x - minV) / float(maxV - minV)
self.pointList = [(timeV, scale(f0V, heightAmount))
for timeV, f0V in self.pointList] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def addPlateau(self, plateauAmount, pitchSampFreq=None):
'''
Add a plateau
A negative plateauAmount will move the peak backwards.
A positive plateauAmount will move the peak forwards.
All points on the side of the peak growth will also get moved.
i.e. the slope of the peak does not change. The accent gets
wider instead.
If pitchSampFreq=None, the plateau will only be specified by
the start and end points of the plateau
'''
if plateauAmount == 0:
return
maxPoint = self.pointList[self.peakI]
# Define the plateau
if pitchSampFreq is not None:
numSteps = abs(int(plateauAmount / pitchSampFreq))
timeChangeList = [stepV * pitchSampFreq
for stepV in
range(0, numSteps + 1)]
else:
timeChangeList = [plateauAmount, ]
# Shift the side being pushed by the plateau
if plateauAmount < 0: # Plateau moves left of the peak
leftSide = self.pointList[:self.peakI]
rightSide = self.pointList[self.peakI:]
plateauPoints = [(maxPoint[0] + timeChange, maxPoint[1])
for timeChange in timeChangeList]
leftSide = [(timeV + plateauAmount, f0V)
for timeV, f0V in leftSide]
self.netLeftShift += plateauAmount
elif plateauAmount > 0: # Plateau moves right of the peak
leftSide = self.pointList[:self.peakI + 1]
rightSide = self.pointList[self.peakI + 1:]
plateauPoints = [(maxPoint[0] + timeChange, maxPoint[1])
for timeChange in timeChangeList]
rightSide = [(timeV + plateauAmount, f0V)
for timeV, f0V in rightSide]
self.netRightShift += plateauAmount
self.pointList = leftSide + plateauPoints + rightSide |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def shiftAccent(self, shiftAmount):
'''
Move the whole accent earlier or later
'''
if shiftAmount == 0:
return
self.pointList = [(time + shiftAmount, pitch)
for time, pitch in self.pointList]
# Update shift amounts
if shiftAmount < 0:
self.netLeftShift += shiftAmount
elif shiftAmount >= 0:
self.netRightShift += shiftAmount |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def deleteOverlapping(self, targetList):
'''
Erase points from another list that overlap with points in this list
'''
start = self.pointList[0][0]
stop = self.pointList[-1][0]
if self.netLeftShift < 0:
start += self.netLeftShift
if self.netRightShift > 0:
stop += self.netRightShift
targetList = _deletePoints(targetList, start, stop)
return targetList |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def reintegrate(self, fullPointList):
'''
Integrates the pitch values of the accent into a larger pitch contour
'''
# Erase the original region of the accent
fullPointList = _deletePoints(fullPointList, self.minT, self.maxT)
# Erase the new region of the accent
fullPointList = self.deleteOverlapping(fullPointList)
# Add the accent into the full pitch list
outputPointList = fullPointList + self.pointList
outputPointList.sort()
return outputPointList |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def detect(filename, include_confidence=False):
""" Detect the encoding of a file. Returns only the predicted current encoding as a string. If `include_confidence` is True, Returns tuple containing: (str encoding, float confidence) """ |
f = open(filename, 'rb')
detection = chardet.detect(f.read())
f.close()
encoding = detection.get('encoding')
confidence = detection.get('confidence')
if include_confidence:
return (encoding, confidence)
return encoding |
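Usage is straightforward; the file name below is hypothetical:
encoding = detect('some_file.txt')
encoding, confidence = detect('some_file.txt', include_confidence=True)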
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def download(url, localFileName=None, localDirName=None):
""" Utility function for downloading files from the web and retaining the same filename. """ |
localName = url2name(url)
req = Request(url)
r = urlopen(req)
if 'Content-Disposition' in r.info():
# If the response has Content-Disposition, we take file name from it
localName = r.info()['Content-Disposition'].split('filename=')
if len(localName) > 1:
localName = localName[1]
if localName[0] == '"' or localName[0] == "'":
localName = localName[1:-1]
else:
localName = url2name(r.url)
elif r.url != url:
# if we were redirected, the real file name we take from the final URL
localName = url2name(r.url)
if localFileName:
# we can force to save the file as specified name
localName = localFileName
if localDirName:
# we can also put it in some custom directory
if not os.path.exists(localDirName):
os.makedirs(localDirName)
localName = os.path.join(localDirName, localName)
f = open(localName, 'wb')
f.write(r.read())
f.close() |
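Hedged usage examples (the URL and local names below are hypothetical):
# Keep the remote file name, saving into the current directory.
download('http://example.com/data/archive.zip')
# Force a custom file name inside a custom directory (created if missing).
download('http://example.com/data/archive.zip',
         localFileName='mydata.zip', localDirName='downloads')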
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _t(unistr, charset_from, charset_to):
""" This is a unexposed function, is responsibility for translation internal. """ |
# if type(unistr) is str:
# try:
# unistr = unistr.decode('utf-8')
# # Python 3 returns AttributeError when .decode() is called on a str
# # This means it is already unicode.
# except AttributeError:
# pass
# try:
# if type(unistr) is not unicode:
# return unistr
# # Python 3 returns NameError because unicode is not a type.
# except NameError:
# pass
chars = []
for c in unistr:
idx = charset_from.find(c)
chars.append(charset_to[idx] if idx!=-1 else c)
return u''.join(chars) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def identify(text):
"""Identify whether a string is simplified or traditional Chinese. Returns: None: if there are no recognizd Chinese characters. EITHER: if the test is inconclusive. TRAD: if the text is traditional. SIMP: if the text is simplified. BOTH: the text has characters recognized as being solely traditional and other characters recognized as being solely simplified. """ |
filtered_text = set(list(text)).intersection(ALL_CHARS)
if len(filtered_text) == 0:
return None
if filtered_text.issubset(SHARED_CHARS):
return EITHER
if filtered_text.issubset(TRAD_CHARS):
return TRAD
if filtered_text.issubset(SIMP_CHARS):
return SIMP
if filtered_text.difference(TRAD_CHARS).issubset(SIMP_CHARS):
return BOTH |
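Illustrative calls with their expected results; the exact classification depends on the character sets (TRAD_CHARS, SIMP_CHARS, SHARED_CHARS) bundled with the library, so these outcomes are assumptions:
identify(u'这是简体字')   # SIMP - contains simplified-only characters
identify(u'這是繁體字')   # TRAD - contains traditional-only characters
identify(u'hello world')  # None - no recognized Chinese characters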
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def makeSequenceRelative(absVSequence):
'''
Puts every value in a list on a continuum between 0 and 1
Also returns the min and max values (to reverse the process)
'''
if len(absVSequence) < 2 or len(set(absVSequence)) == 1:
raise RelativizeSequenceException(absVSequence)
minV = min(absVSequence)
maxV = max(absVSequence)
relativeSeq = [(value - minV) / (maxV - minV) for value in absVSequence]
return relativeSeq, minV, maxV |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def makeSequenceAbsolute(relVSequence, minV, maxV):
'''
Makes every value in a sequence absolute
'''
return [(value * (maxV - minV)) + minV for value in relVSequence] |
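A small worked example of the two helpers above, round-tripping a sequence through relative and absolute form:
relativeSeq, minV, maxV = makeSequenceRelative([2.0, 4.0, 6.0])
# relativeSeq == [0.0, 0.5, 1.0], minV == 2.0, maxV == 6.0
original = makeSequenceAbsolute(relativeSeq, minV, maxV)
# original == [2.0, 4.0, 6.0]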
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _makeTimingRelative(absoluteDataList):
'''
Given normal pitch tier data, puts the times on a scale from 0 to 1
Input is a list of tuples of the form
([(time1, pitch1), (time2, pitch2),...]
Also returns the start and end time so that the process can be reversed
'''
timingSeq = [row[0] for row in absoluteDataList]
valueSeq = [list(row[1:]) for row in absoluteDataList]
relTimingSeq, startTime, endTime = makeSequenceRelative(timingSeq)
relDataList = [tuple([time, ] + row) for time, row
in zip(relTimingSeq, valueSeq)]
return relDataList, startTime, endTime |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _makeTimingAbsolute(relativeDataList, startTime, endTime):
'''
Maps values from 0 to 1 to the provided start and end time
Input is a list of tuples of the form
([(time1, pitch1), (time2, pitch2),...]
'''
timingSeq = [row[0] for row in relativeDataList]
valueSeq = [list(row[1:]) for row in relativeDataList]
absTimingSeq = makeSequenceAbsolute(timingSeq, startTime, endTime)
absDataList = [tuple([time, ] + row) for time, row
in zip(absTimingSeq, valueSeq)]
return absDataList |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _getSmallestDifference(inputList, targetVal):
'''
Returns the value in inputList that is closest to targetVal
Iteratively splits the dataset in two, so it should be pretty fast
'''
targetList = inputList[:]
retVal = None
while True:
# If we're down to one value, stop iterating
if len(targetList) == 1:
retVal = targetList[0]
break
halfPoint = int(len(targetList) / 2.0) - 1
a = targetList[halfPoint]
b = targetList[halfPoint + 1]
leftDiff = abs(targetVal - a)
rightDiff = abs(targetVal - b)
# If the distance is 0, stop iterating, the targetVal is present
# in the inputList
if leftDiff == 0 or rightDiff == 0:
retVal = targetVal
break
# Look at left half or right half
if leftDiff < rightDiff:
targetList = targetList[:halfPoint + 1]
else:
targetList = targetList[halfPoint + 1:]
return retVal |
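A worked example of the closest-value search above (the input list is expected to be sorted, as it is where this helper is called):
closest = _getSmallestDifference([0.0, 0.1, 0.5, 0.9], 0.42)
# closest == 0.5, since 0.5 is nearer to 0.42 than 0.1 is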
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _getNearestMappingIndexList(fromValList, toValList):
'''
Finds the indices for data points that are closest to each other.
The inputs should be in relative time, scaled from 0 to 1
e.g. if you have [0, .1, .5., .9] and [0, .1, .2, 1]
will output [0, 1, 1, 2]
'''
indexList = []
for fromTimestamp in fromValList:
smallestDiff = _getSmallestDifference(toValList, fromTimestamp)
i = toValList.index(smallestDiff)
indexList.append(i)
return indexList |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def morphChunkedDataLists(fromDataList, toDataList, stepList):
'''
Morph one set of data into another, in a stepwise fashion
A convenience function. Given a set of paired data lists,
this will morph each one individually.
Returns a single list with all data combined together.
'''
assert(len(fromDataList) == len(toDataList))
# Morph the fromDataList into the toDataList
outputList = []
for x, y in zip(fromDataList, toDataList):
# We cannot morph a region if there is no data or only
# a single data point for either side
if (len(x) < 2) or (len(y) < 2):
continue
tmpList = [outputPitchList for _, outputPitchList
in morphDataLists(x, y, stepList)]
outputList.append(tmpList)
# Transpose list
finalOutputList = outputList.pop(0)
for subList in outputList:
for i, subsubList in enumerate(subList):
finalOutputList[i].extend(subsubList)
return finalOutputList |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def morphAveragePitch(fromDataList, toDataList):
'''
Adjusts the values in fromPitchList to have the same average as toPitchList
Because other manipulations can alter the average pitch, morphing the pitch
is the last pitch manipulation that should be done
After the morphing, the code removes any values below zero, thus the
final average might not match the target average.
'''
timeList, fromPitchList = zip(*fromDataList)
toPitchList = [pitchVal for _, pitchVal in toDataList]
# Zero pitch values aren't meaningful, so filter them out if they are
# in the dataset
fromListNoZeroes = [val for val in fromPitchList if val > 0]
fromAverage = sum(fromListNoZeroes) / float(len(fromListNoZeroes))
toListNoZeroes = [val for val in toPitchList if val > 0]
toAverage = sum(toListNoZeroes) / float(len(toListNoZeroes))
newPitchList = [val - fromAverage + toAverage for val in fromPitchList]
# finalAverage = sum(newPitchList) / float(len(newPitchList))
# Removing zeroes and negative pitch values
retDataList = [(time, pitchVal) for time, pitchVal
in zip(timeList, newPitchList)
if pitchVal > 0]
return retDataList |
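A small worked example of the mean shift performed above:
fromData = [(0.0, 100.0), (0.1, 120.0), (0.2, 140.0)]  # mean 120 Hz
toData = [(0.0, 200.0), (0.1, 220.0), (0.2, 240.0)]    # mean 220 Hz
morphAveragePitch(fromData, toData)
# [(0.0, 200.0), (0.1, 220.0), (0.2, 240.0)] -- every value shifted up by 100 Hz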
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def morphRange(fromDataList, toDataList):
'''
Changes the scale of values in one distribution to that of another
ie The maximum value in fromDataList will be set to the maximum value in
toDataList. The 75% largest value in fromDataList will be set to the
75% largest value in toDataList, etc.
Small sample sizes will yield results that are not very meaningful
'''
# Isolate and sort pitch values
fromPitchList = [dataTuple[1] for dataTuple in fromDataList]
toPitchList = [dataTuple[1] for dataTuple in toDataList]
fromPitchListSorted = sorted(fromPitchList)
toPitchListSorted = sorted(toPitchList)
# Bin pitch values between 0 and 1
fromListRel = makeSequenceRelative(fromPitchListSorted)[0]
toListRel = makeSequenceRelative(toPitchListSorted)[0]
# Find each values closest equivalent in the other list
indexList = _getNearestMappingIndexList(fromListRel, toListRel)
# Map the source pitch to the target pitch value
# Pitch value -> get sorted position -> get corresponding position in
# target list -> get corresponding pitch value = the new pitch value
retList = []
for time, pitch in fromDataList:
fromI = fromPitchListSorted.index(pitch)
toI = indexList[fromI]
newPitch = toPitchListSorted[toI]
retList.append((time, newPitch))
return retList |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def getIntervals(fn, tierName, filterFunc=None,
includeUnlabeledRegions=False):
'''
Get information about the 'extract' tier, used by several merge scripts
'''
tg = tgio.openTextgrid(fn)
tier = tg.tierDict[tierName]
if includeUnlabeledRegions is True:
tier = tgio._fillInBlanks(tier)
entryList = tier.entryList
if filterFunc is not None:
entryList = [entry for entry in entryList if filterFunc(entry)]
return entryList |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def changeDuration(fromWavFN, durationParameters, stepList, outputName,
outputMinPitch, outputMaxPitch, praatEXE):
'''
Uses praat to morph duration in one file to duration in another
Praat uses the PSOLA algorithm
'''
rootPath = os.path.split(fromWavFN)[0]
# Prep output directories
outputPath = join(rootPath, "duration_resynthesized_wavs")
utils.makeDir(outputPath)
durationTierPath = join(rootPath, "duration_tiers")
utils.makeDir(durationTierPath)
fromWavDuration = audio_scripts.getSoundFileDuration(fromWavFN)
durationParameters = copy.deepcopy(durationParameters)
# Pad any gaps with values of 1 (no change in duration)
# No need to stretch out any pauses at the beginning
if durationParameters[0][0] != 0:
tmpVar = (0, durationParameters[0][0] - PRAAT_TIME_DIFF, 1)
durationParameters.insert(0, tmpVar)
# Or the end
if durationParameters[-1][1] < fromWavDuration:
durationParameters.append((durationParameters[-1][1] + PRAAT_TIME_DIFF,
fromWavDuration, 1))
# Create the praat script for doing duration manipulation
for stepAmount in stepList:
durationPointList = []
for start, end, ratio in durationParameters:
percentChange = 1 + (ratio - 1) * stepAmount
durationPointList.append((start, percentChange))
durationPointList.append((end, percentChange))
outputPrefix = "%s_%0.3g" % (outputName, stepAmount)
durationTierFN = join(durationTierPath,
"%s.DurationTier" % outputPrefix)
outputWavFN = join(outputPath, "%s.wav" % outputPrefix)
durationTier = dataio.PointObject2D(durationPointList, dataio.DURATION,
0, fromWavDuration)
durationTier.save(durationTierFN)
praat_scripts.resynthesizeDuration(praatEXE,
fromWavFN,
durationTierFN,
outputWavFN,
outputMinPitch, outputMaxPitch) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def textgridMorphDuration(fromTGFN, toTGFN):
'''
A convenience function. Morphs interval durations of one tg to another.
This assumes the two textgrids have the same number of segments.
'''
fromTG = tgio.openTextgrid(fromTGFN)
toTG = tgio.openTextgrid(toTGFN)
adjustedTG = tgio.Textgrid()
for tierName in fromTG.tierNameList:
fromTier = fromTG.tierDict[tierName]
toTier = toTG.tierDict[tierName]
adjustedTier = fromTier.morph(toTier)
adjustedTG.addTier(adjustedTier)
return adjustedTG |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def split_text(text, include_part_of_speech=False, strip_english=False, strip_numbers=False):
u""" Split Chinese text at word boundaries. include_pos: also returns the Part Of Speech for each of the words. Some of the different parts of speech are: r: pronoun v: verb ns: proper noun This all gets returned as a tuple: index 0: the split word index 1: the word's part of speech strip_english: remove all entries that have English or numbers in them (useful sometimes) """ |
if not include_part_of_speech:
seg_list = pseg.cut(text)
if strip_english:
seg_list = filter(lambda x: not contains_english(x), seg_list)
if strip_numbers:
seg_list = filter(lambda x: not _is_number(x), seg_list)
return list(map(lambda i: i.word, seg_list))
else:
seg_list = pseg.cut(text)
objs = map(lambda w: (w.word, w.flag), seg_list)
if strip_english:
objs = filter(lambda x: not contains_english(x[0]), objs)
if strip_numbers:
objs = filter(lambda x: not _is_number(x[0]), objs)
return objs
# if was_traditional:
# seg_list = map(tradify, seg_list)
return list(seg_list) |
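A hedged usage example; the segmentation shown is typical jieba output, but the exact tokens depend on the dictionary in use:
split_text(u"我来到北京清华大学")
# e.g. [u'我', u'来到', u'北京', u'清华大学']
list(split_text(u"我来到北京", include_part_of_speech=True))
# e.g. [(u'我', 'r'), (u'来到', 'v'), (u'北京', 'ns')]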
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def is_special_atom(cron_atom, span):
""" Returns a boolean indicating whether or not the string can be parsed by parse_atom to produce a static set. In the process of examining the string, the syntax of any special character uses is also checked. """ |
for special_char in ('%', '#', 'L', 'W'):
if special_char not in cron_atom:
continue
if special_char == '#':
if span != DAYS_OF_WEEK:
raise ValueError("\"#\" invalid where used.")
elif not VALIDATE_POUND.match(cron_atom):
raise ValueError("\"#\" syntax incorrect.")
elif special_char == "W":
if span != DAYS_OF_MONTH:
raise ValueError("\"W\" syntax incorrect.")
elif not(VALIDATE_W.match(cron_atom) and int(cron_atom[:-1]) > 0):
raise ValueError("Invalid use of \"W\".")
elif special_char == "L":
if span not in L_FIELDS:
raise ValueError("\"L\" invalid where used.")
elif span == DAYS_OF_MONTH:
if cron_atom != "L":
raise ValueError("\"L\" must be alone in days of month.")
elif span == DAYS_OF_WEEK:
if not VALIDATE_L_IN_DOW.match(cron_atom):
raise ValueError("\"L\" syntax incorrect.")
elif special_char == "%":
if not(cron_atom[1:].isdigit() and int(cron_atom[1:]) > 1):
raise ValueError("\"%\" syntax incorrect.")
return True
else:
return False |
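A few hedged usage examples, relying on the module-level span constants (DAYS_OF_WEEK, DAYS_OF_MONTH) and validation regexes that this excerpt references but does not define:
# "5#3" means the third occurrence of weekday 5; only valid in the day-of-week field.
is_special_atom("5#3", DAYS_OF_WEEK)    # True
# "15W" means the weekday nearest the 15th; only valid in the day-of-month field.
is_special_atom("15W", DAYS_OF_MONTH)   # True
# A plain numeric atom contains no special characters, so it is not special.
is_special_atom("12", DAYS_OF_MONTH)    # False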
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse_atom(parse, minmax):
""" Returns a set containing valid values for a given cron-style range of numbers. The 'minmax' arguments is a two element iterable containing the inclusive upper and lower limits of the expression. Examples: set([1, 2, 3, 4, 5]) set([0, 6, 12, 18]) set([18, 22, 0, 4]) set([0, 9, 18]) """ |
parse = parse.strip()
increment = 1
if parse == '*':
return set(xrange(minmax[0], minmax[1] + 1))
elif parse.isdigit():
# A single number still needs to be returned as a set
value = int(parse)
if value >= minmax[0] and value <= minmax[1]:
return set((value,))
else:
raise ValueError("\"%s\" is not within valid range." % parse)
elif '-' in parse or '/' in parse:
divide = parse.split('/')
subrange = divide[0]
if len(divide) == 2:
# Example: 1-3/5 or */7 increment should be 5 and 7 respectively
increment = int(divide[1])
if '-' in subrange:
# Example: a-b
prefix, suffix = [int(n) for n in subrange.split('-')]
if prefix < minmax[0] or suffix > minmax[1]:
raise ValueError("\"%s\" is not within valid range." % parse)
elif subrange.isdigit():
# Handle offset increments e.g. 5/15 to run at :05, :20, :35, and :50
return set(xrange(int(subrange), minmax[1] + 1, increment))
elif subrange == '*':
# Include all values with the given range
prefix, suffix = minmax
else:
raise ValueError("Unrecognized symbol \"%s\"" % subrange)
if prefix < suffix:
# Example: 7-10
return set(xrange(prefix, suffix + 1, increment))
else:
# Example: 12-4/2; (12, 12 + n, ..., 12 + m*n) U (n_0, ..., 4)
noskips = list(xrange(prefix, minmax[1] + 1))
noskips += list(xrange(minmax[0], suffix + 1))
return set(noskips[::increment])
else:
raise ValueError("Atom \"%s\" not in a recognized format." % parse) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def compute_numtab(self):
""" Recomputes the sets for the static ranges of the trigger time. This method should only be called by the user if the string_tab member is modified. """ |
self.numerical_tab = []
for field_str, span in zip(self.string_tab, FIELD_RANGES):
split_field_str = field_str.split(',')
if len(split_field_str) > 1 and "*" in split_field_str:
raise ValueError("\"*\" must be alone in a field.")
unified = set()
for cron_atom in split_field_str:
# parse_atom only handles static cases
if not(is_special_atom(cron_atom, span)):
unified.update(parse_atom(cron_atom, span))
self.numerical_tab.append(unified)
if self.string_tab[2] == "*" and self.string_tab[4] != "*":
self.numerical_tab[2] = set()
elif self.string_tab[4] == "*" and self.string_tab[2] != "*":
self.numerical_tab[4] = set() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_trigger(self, date_tuple, utc_offset=0):
""" Returns boolean indicating if the trigger is active at the given time. The date tuple should be in the local time. Unless periodicities are used, utc_offset does not need to be specified. If periodicities are used, specifically in the hour and minutes fields, it is crucial that the utc_offset is specified. """ |
year, month, day, hour, mins = date_tuple
given_date = datetime.date(year, month, day)
zeroday = datetime.date(*self.epoch[:3])
last_dom = calendar.monthrange(year, month)[-1]
dom_matched = True
# In calendar and datetime.date.weekday, Monday = 0
given_dow = (datetime.date.weekday(given_date) + 1) % 7
first_dow = (given_dow + 1 - day) % 7
# Figure out how much time has passed from the epoch to the given date
utc_diff = utc_offset - self.epoch[5]
mod_delta_yrs = year - self.epoch[0]
mod_delta_mon = month - self.epoch[1] + mod_delta_yrs * 12
mod_delta_day = (given_date - zeroday).days
mod_delta_hrs = hour - self.epoch[3] + mod_delta_day * 24 + utc_diff
mod_delta_min = mins - self.epoch[4] + mod_delta_hrs * 60
# Makes iterating through like components easier.
quintuple = zip(
(mins, hour, day, month, given_dow),
self.numerical_tab,
self.string_tab,
(mod_delta_min, mod_delta_hrs, mod_delta_day, mod_delta_mon,
mod_delta_day),
FIELD_RANGES)
for value, valid_values, field_str, delta_t, field_type in quintuple:
# All valid, static values for the fields are stored in sets
if value in valid_values:
continue
# The following for loop implements the logic for context
# sensitive and epoch sensitive constraints. break statements,
# which are executed when a match is found, lead to a continue
# in the outer loop. If there are no matches found, the given date
# does not match expression constraints, so the function returns
# False as seen at the end of this for...else... construct.
for cron_atom in field_str.split(','):
if cron_atom[0] == '%':
if not(delta_t % int(cron_atom[1:])):
break
elif '#' in cron_atom:
D, N = int(cron_atom[0]), int(cron_atom[2])
# Computes Nth occurence of D day of the week
if (((D - first_dow) % 7) + 1 + 7 * (N - 1)) == day:
break
elif cron_atom[-1] == 'W':
target = min(int(cron_atom[:-1]), last_dom)
lands_on = (first_dow + target - 1) % 7
if lands_on == 0:
# Shift from Sun. to Mon. unless Mon. is next month
if target < last_dom:
target += 1
else:
target -= 2
elif lands_on == 6:
# Shift from Sat. to Fri. unless Fri. in prior month
if target > 1:
target -= 1
else:
target += 2
# Break if the day is correct, and target is a weekday
if target == day and (first_dow + target) % 7 > 1:
break
elif cron_atom[-1] == 'L':
# In dom field, L means the last day of the month
target = last_dom
if field_type == DAYS_OF_WEEK:
# Calculates the last occurence of given day of week
desired_dow = int(cron_atom[:-1])
target = (((desired_dow - first_dow) % 7) + 29)
if target > last_dom:
target -= 7
if target == day:
break
else:
# See 2010.11.15 of CHANGELOG
if field_type == DAYS_OF_MONTH and self.string_tab[4] != '*':
dom_matched = False
continue
elif field_type == DAYS_OF_WEEK and self.string_tab[2] != '*':
# If we got here, then days of months validated so it does
# not matter that days of the week failed.
return dom_matched
# None of the expressions matched which means this field fails
return False
# Arriving at this point means the date landed within the constraints
# of all fields; the associated trigger should be fired.
return True |
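A hedged usage sketch; the CronExpression class name and its constructor are assumptions based on how this method is typically wrapped, and are not confirmed by this excerpt:
job = CronExpression("0 0 * * *")          # fire at midnight every day
job.check_trigger((2020, 11, 16, 0, 0))    # True  -- midnight matches
job.check_trigger((2020, 11, 16, 12, 30))  # False -- 12:30 does not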
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def show(self):
"""Show the structure of self.rules_list, only for debug.""" |
for rule in self.rules_list:
result = ", ".join([str(check) for check, deny in rule])
print(result) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def run(self):
"""Run self.rules_list. Return True if one rule channel has been passed. Otherwise return False and the deny() method of the last failed rule. """ |
failed_result = None
for rule in self.rules_list:
for check, deny in rule:
if not check():
failed_result = (False, deny)
break
else:
return (True, None)
return failed_result |
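A hedged sketch of the rules_list structure that run() walks: each rule is a sequence of (check, deny) pairs, a rule passes only if every check() in it succeeds, and the first fully-passing rule short-circuits the result to (True, None). The function names below are illustrative only.
def is_logged_in(): return True
def deny_login(): return "Please log in first."
def is_admin(): return False
def deny_admin(): return "Admins only."
rules_list = [
    [(is_logged_in, deny_login), (is_admin, deny_admin)],  # fails at is_admin
    [(is_logged_in, deny_login)],                          # passes, so run() returns (True, None)
]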
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_fraction(self, value):
"""Set the meter indicator. Value should be between 0 and 1.""" |
if value < 0:
value *= -1
value = min(value, 1)
if self.horizontal:
width = int(self.width * value)
height = self.height
else:
width = self.width
height = int(self.height * value)
self.canvas.coords(self.meter, self.xpos, self.ypos,
self.xpos + width, self.ypos + height) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def update_status(self):
"""Update status informations in tkinter window.""" |
try:
# all this may fail if the connection to the fritzbox is down
self.update_connection_status()
self.max_stream_rate.set(self.get_stream_rate_str())
self.ip.set(self.status.external_ip)
self.uptime.set(self.status.str_uptime)
upstream, downstream = self.status.transmission_rate
except IOError:
# here we inform the user about being unable to
# update the status informations
pass
else:
# max_downstream and max_upstream may be zero if the
# fritzbox is configured as ip-client.
if self.max_downstream > 0:
self.in_meter.set_fraction(
1.0 * downstream / self.max_downstream)
if self.max_upstream > 0:
self.out_meter.set_fraction(1.0 * upstream / self.max_upstream)
self.update_traffic_info()
self.after(1000, self.update_status) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def format_num(num, unit='bytes'):
""" Returns a human readable string of a byte-value. If 'num' is bits, set unit='bits'. """ |
if unit == 'bytes':
extension = 'B'
else:
# if it's not bytes, it's bits
extension = 'Bit'
for dimension in (unit, 'K', 'M', 'G', 'T'):
if num < 1024:
if dimension == unit:
return '%3.1f %s' % (num, dimension)
return '%3.1f %s%s' % (num, dimension, extension)
num /= 1024
return '%3.1f P%s' % (num, extension) |
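Worked examples of the formatter above:
format_num(950)                          # '950.0 bytes'
format_num(8 * 1024)                     # '8.0 KB'
format_num(3 * 1024 ** 2, unit='bits')   # '3.0 MBit'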
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse_headers(content_disposition, location=None, relaxed=False):
"""Build a ContentDisposition from header values. """ |
LOGGER.debug(
'Content-Disposition %r, Location %r', content_disposition, location)
if content_disposition is None:
return ContentDisposition(location=location)
# Both alternatives seem valid.
if False:
# Require content_disposition to be ascii bytes (0-127),
# or characters in the ascii range
content_disposition = ensure_charset(content_disposition, 'ascii')
else:
# We allow non-ascii here (it will only be parsed inside of
# qdtext, and rejected by the grammar if it appears in
# other places), although parsing it can be ambiguous.
# Parsing it ensures that a non-ambiguous filename* value
# won't get dismissed because of an unrelated ambiguity
# in the filename parameter. But it does mean we occasionally
# give less-than-certain values for some legacy senders.
content_disposition = ensure_charset(content_disposition, 'iso-8859-1')
# Check the caller already did LWS-folding (normally done
# when separating header names and values; RFC 2616 section 2.2
# says it should be done before interpretation at any rate).
# Hopefully space still means what it should in iso-8859-1.
# This check is a bit stronger that LWS folding, it will
# remove CR and LF even if they aren't part of a CRLF.
# However http doesn't allow isolated CR and LF in headers outside
# of LWS.
if relaxed:
# Relaxed has two effects (so far):
# the grammar allows a final ';' in the header;
# we do LWS-folding, and possibly normalise other broken
# whitespace, instead of rejecting non-lws-safe text.
# XXX Would prefer to accept only the quoted whitespace
# case, rather than normalising everything.
content_disposition = normalize_ws(content_disposition)
parser = content_disposition_value_relaxed
else:
# Turns out this is occasionally broken: two spaces inside
# a quoted_string's qdtext. Firefox and Chrome save the two spaces.
if not is_lws_safe(content_disposition):
raise ValueError(
content_disposition, 'Contains nonstandard whitespace')
parser = content_disposition_value
try:
parsed = parser.parse(content_disposition)
except FullFirstMatchException:
return ContentDisposition(location=location)
return ContentDisposition(
disposition=parsed[0], assocs=parsed[1:], location=location) |
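A hedged usage sketch; the attribute read below mirrors the constructor call shown above, while any further accessors belong to the surrounding library and are not shown in this excerpt:
cd = parse_headers('attachment; filename="report.pdf"')
# cd.disposition == 'attachment'; the filename parameter is carried in the
# parsed associations passed to the ContentDisposition constructor.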
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def build_header( filename, disposition='attachment', filename_compat=None ):
"""Generate a Content-Disposition header for a given filename. For legacy clients that don't understand the filename* parameter, a filename_compat value may be given. It should either be ascii-only (recommended) or iso-8859-1 only. In the later case it should be a character string (unicode in Python 2). Options for generating filename_compat (only useful for legacy clients):
- ignore (will only send filename*); - strip accents using unicode's decomposing normalisations, which can be done from unicode data (stdlib), and keep only ascii; - use the ascii transliteration tables from Unidecode (PyPI); - use iso-8859-1 Ignore is the safest, and can be used to trigger a fallback to the document location (which can be percent-encoded utf-8 if you control the URLs). See https://tools.ietf.org/html/rfc6266#appendix-D """ |
# While this method exists, it could also sanitize the filename
# by rejecting slashes or other weirdness that might upset a receiver.
if disposition != 'attachment':
assert is_token(disposition)
rv = disposition
if is_token(filename):
rv += '; filename=%s' % (filename, )
return rv
elif is_ascii(filename) and is_lws_safe(filename):
qd_filename = qd_quote(filename)
rv += '; filename="%s"' % (qd_filename, )
if qd_filename == filename:
# RFC 6266 claims some implementations are iffy on qdtext's
# backslash-escaping, we'll include filename* in that case.
return rv
elif filename_compat:
if is_token(filename_compat):
rv += '; filename=%s' % (filename_compat, )
else:
assert is_lws_safe(filename_compat)
rv += '; filename="%s"' % (qd_quote(filename_compat), )
# alnum are already considered always-safe, but the rest isn't.
# Python encodes ~ when it shouldn't, for example.
rv += "; filename*=utf-8''%s" % (percent_encode(
filename, safe=attr_chars_nonalnum, encoding='utf-8'), )
# This will only encode filename_compat, if it used non-ascii iso-8859-1.
return rv.encode('iso-8859-1') |
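Example results from the header builder above; the second value is approximate, since the exact percent-encoding depends on the attr_chars_nonalnum set defined elsewhere in the module:
build_header('report.pdf')
# 'attachment; filename=report.pdf'
build_header(u'résumé.pdf')
# roughly b"attachment; filename*=utf-8''r%C3%A9sum%C3%A9.pdf"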