text_prompt | code_prompt |
---|---|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def prune_dupes(self):
"""Remove all but the last entry for a given resource URI. Returns the number of entries removed. Also removes all entries for a given URI where the first entry is a create and the last entry is a delete. """ |
n = 0
pruned1 = []
seen = set()
deletes = {}
for r in reversed(self.resources):
if (r.uri in seen):
n += 1
if (r.uri in deletes):
deletes[r.uri] = r.change
else:
pruned1.append(r)
seen.add(r.uri)
if (r.change == 'deleted'):
deletes[r.uri] = r.change
# go through all deletes and prune if first was create
pruned2 = []
for r in reversed(pruned1):
if (r.uri in deletes and deletes[r.uri] == 'created'):
n += 1
else:
pruned2.append(r)
self.resources = pruned2
return(n) |
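As a minimal standalone sketch of the same two-pass pruning idea, the helper below works on plain (uri, change) tuples instead of the resource objects assumed above; the function name and data are illustrative only.

def prune_dupes_sketch(entries):
    """entries is a list of (uri, change) tuples in arrival order."""
    removed = 0
    kept_rev = []         # latest entry per URI, collected in reverse order
    seen = set()
    first_change = {}     # URI -> change of its earliest entry (tracked once a delete is seen)
    for uri, change in reversed(entries):
        if uri in seen:
            removed += 1
            if uri in first_change:
                first_change[uri] = change
        else:
            kept_rev.append((uri, change))
            seen.add(uri)
            if change == 'deleted':
                first_change[uri] = change
    # drop URIs whose history starts with a create and ends with a delete
    kept = []
    for uri, change in reversed(kept_rev):
        if uri in first_change and first_change[uri] == 'created':
            removed += 1
        else:
            kept.append((uri, change))
    return kept, removed

# A resource that was created and later deleted disappears entirely:
# prune_dupes_sketch([('a', 'created'), ('a', 'updated'), ('a', 'deleted')]) -> ([], 3)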
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _str_datetime_now(self, x=None):
"""Return datetime string for use with time attributes. Handling depends on input: 'now' - returns datetime for now number - assume datetime values, generate string other - no change, return same value """ |
if (x == 'now'):
# Now, this is what datetime_to_str() with no arg gives
return(datetime_to_str())
try:
# Test for number
junk = x + 0.0
return datetime_to_str(x)
except TypeError:
# Didn't look like a number, treat as string
return x |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def map(self, width, height):
""" Creates and returns a new randomly generated map """ |
template = ti.load(os.path.join(script_dir, 'assets', 'template.tmx'))['map0']
#template.set_view(0, 0, template.px_width, template.px_height)
template.set_view(0, 0, width*template.tw, height*template.th)
# TODO: Save the generated map.
#epoch = int(time.time())
#filename = 'map_' + str(epoch) + '.tmx'
# Draw borders
border_x = template.cells[width]
for y in xrange(0,height+1):
border_x[y].tile = template.cells[0][0].tile
for x in xrange(0,width):
template.cells[x][height].tile = template.cells[0][0].tile
# Start within borders
#self.recursive_division(template.cells, 3, (template.px_width/template.tw)-1, (template.px_height/template.th)-1, 0, 0)
self.recursive_division(template.cells, 3, width, height, 0, 0)
return template |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def make_node_dict(outer_list, sort="zone"):
"""Convert node data from nested-list to sorted dict.""" |
raw_dict = {}
x = 1
for inner_list in outer_list:
for node in inner_list:
raw_dict[x] = node
x += 1
if sort == "name": # sort by provider - name
srt_dict = OrderedDict(sorted(raw_dict.items(), key=lambda k:
(k[1].cloud, k[1].name.lower())))
else: # sort by provider - zone - name
srt_dict = OrderedDict(sorted(raw_dict.items(), key=lambda k:
(k[1].cloud, k[1].zone, k[1].name.lower())))
x = 1
node_dict = {}
for i, v in srt_dict.items():
node_dict[x] = v
x += 1
return node_dict |
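A rough usage sketch, assuming make_node_dict is importable and that node objects expose cloud, zone and name attributes (which is all the function relies on); a namedtuple stands in for a real provider node here.

from collections import namedtuple

Node = namedtuple('Node', 'cloud zone name')
outer_list = [
    [Node('aws', 'us-east-1', 'alpha'), Node('aws', 'eu-west-1', 'zeta')],
    [Node('gcp', 'us-central1', 'worker')],
]

by_zone = make_node_dict(outer_list)                # keys 1..3, ordered provider -> zone -> name
by_name = make_node_dict(outer_list, sort="name")   # ignores zone when ordering
print(by_zone[1].name, by_name[1].name)             # 'zeta alpha'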
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def config_read():
"""Read config info from config file.""" |
config_file = (u"{0}config.ini".format(CONFIG_DIR))
if not os.path.isfile(config_file):
config_make(config_file)
config = configparser.ConfigParser(allow_no_value=True)
try:
config.read(config_file, encoding='utf-8')
except IOError:
print("Error reading config file: {}".format(config_file))
sys.exit()
# De-duplicate provider-list
providers = config_prov(config)
# Read credentials for listed providers
(cred, to_remove) = config_cred(config, providers)
# remove unsupported and credential-less providers
for item in to_remove:
providers.remove(item)
return cred, providers |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def config_prov(config):
"""Read providers from configfile and de-duplicate it.""" |
try:
providers = [e.strip() for e in (config['info']
['providers']).split(',')]
except KeyError as e:
print("Error reading config item: {}".format(e))
sys.exit()
providers = list(OrderedDict.fromkeys(providers))
return providers |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def config_cred(config, providers):
"""Read credentials from configfile.""" |
expected = ['aws', 'azure', 'gcp', 'alicloud']
cred = {}
to_remove = []
for item in providers:
if any(item.startswith(itemb) for itemb in expected):
try:
cred[item] = dict(list(config[item].items()))
except KeyError as e:
print("No credentials section in config file for {} -"
" provider will be skipped.".format(e))
to_remove.append(item)
else:
print("Unsupported provider: '{}' listed in config - ignoring"
.format(item))
to_remove.append(item)
return cred, to_remove |
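For reference, a config built along these lines satisfies both config_prov and config_cred: a comma-separated providers entry under [info], plus one credentials section per provider whose name starts with a supported prefix. The section keys below are placeholders, not a statement of what each cloud driver actually requires.

import configparser

sample = """
[info]
providers = aws-main, gcp-lab, unknowncloud

[aws-main]
aws_access_key_id = PLACEHOLDER
aws_secret_access_key = PLACEHOLDER
region = us-east-1

[gcp-lab]
service_account = PLACEHOLDER
project = my-project
"""

config = configparser.ConfigParser(allow_no_value=True)
config.read_string(sample)

providers = config_prov(config)              # ['aws-main', 'gcp-lab', 'unknowncloud']
cred, to_remove = config_cred(config, providers)
# 'unknowncloud' is reported as unsupported and returned in to_remove;
# cred maps each remaining provider to the key/value pairs of its section.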
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def config_make(config_file):
"""Create config.ini on first use, make dir and copy sample.""" |
from pkg_resources import resource_filename
import shutil
if not os.path.exists(CONFIG_DIR):
os.makedirs(CONFIG_DIR)
filename = resource_filename("mcc", "config.ini")
try:
shutil.copyfile(filename, config_file)
except IOError:
print("Error copying sample config file: {}".format(config_file))
sys.exit()
print("Please add credential information to {}".format(config_file))
sys.exit() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def hash(self):
"""Provide access to the complete hash string. The hash string may have zero or more hash values with appropriate prefixes. All hash values are assumed to be strings """ |
hashvals = []
if (self.md5 is not None):
hashvals.append('md5:' + self.md5)
if (self.sha1 is not None):
hashvals.append('sha-1:' + self.sha1)
if (self.sha256 is not None):
hashvals.append('sha-256:' + self.sha256)
if (len(hashvals) > 0):
return(' '.join(hashvals))
return(None) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def hash(self, hash):
"""Parse space separated set of values. See specification at: http://tools.ietf.org/html/draft-snell-atompub-link-extensions-09 which defines many types. We implement md5, sha-1, sha-256 """ |
self.md5 = None
self.sha1 = None
self.sha256 = None
if (hash is None):
return
hash_seen = set()
errors = []
for entry in hash.split():
(hash_type, value) = entry.split(':', 1)
if (hash_type in hash_seen):
errors.append("Ignored duplicate hash type %s" % (hash_type))
else:
hash_seen.add(hash_type)
if (hash_type == 'md5'):
self.md5 = value
elif (hash_type == 'sha-1'):
self.sha1 = value
elif (hash_type == 'sha-256'):
self.sha256 = value
else:
errors.append("Ignored unsupported hash type (%s)" %
(hash_type))
if (len(errors) > 0):
raise ValueError(". ".join(errors)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def link_add(self, rel, href, **atts):
"""Create an link with specified rel. Will add a link even if one with that rel already exists. """ |
self.link_set(rel, href, allow_duplicates=True, **atts) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def equal(self, other, delta=0.0):
"""Equality or near equality test for resources. Equality means: 1. same uri, AND 2. same timestamp WITHIN delta if specified for either, AND 3. same md5 if specified for both, AND 4. same length if specified for both """ |
if (other is None):
return False
if (self.uri != other.uri):
return(False)
if (self.timestamp is not None or other.timestamp is not None):
# not equal if only one timestamp specified
if (self.timestamp is None or
other.timestamp is None or
abs(self.timestamp - other.timestamp) > delta):
return(False)
if ((self.md5 is not None and other.md5 is not None) and
self.md5 != other.md5):
return(False)
if ((self.length is not None and other.length is not None) and
self.length != other.length):
return(False)
return(True) |
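A brief illustration of the delta handling, assuming a Resource constructor that accepts uri, timestamp and length keyword arguments (as the attribute names above suggest):

a = Resource(uri='http://example.org/x', timestamp=1000.0, length=5)
b = Resource(uri='http://example.org/x', timestamp=1000.4, length=5)
a.equal(b)             # False: timestamps differ by 0.4s and no tolerance is allowed
a.equal(b, delta=0.5)  # True: within 0.5s, same URI and length, hashes not set on both
a.equal(None)          # False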
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def ChiSquared(target_frequency):
"""Score a text by comparing its frequency distribution against another. Note: It is easy to be penalised without knowing it when using this scorer. English frequency ngrams are capital letters, meaning when using it any text you score against must be all capitals for it to give correct results. I am aware of the issue and will work on a fix. Todo: Maybe include paramter for ngram size. Havent had a use case for this yet. Once there is evidence it is needed, I will add it. Example: -32.2 Args: target_frequency (dict):
symbol to frequency mapping of the distribution to compare with """ |
def inner(text):
text = ''.join(text)
return -chi_squared(frequency_analyze(text), target_frequency)
return inner |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def my_resource_list():
"""Simulate the generator used by simulator""" |
rl = ResourceList( resources=iter(my_resources), count=len(my_resources) )
rl.max_sitemap_entries = max_sitemap_entries
return(rl) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add(self, resource, replace=False):
"""Add a single resource, check for dupes.""" |
uri = resource.uri
for i, r in enumerate(self):
if (uri == r.uri):
if (replace):
# replace the stored entry in place rather than rebinding the loop variable
self[i] = resource
return
else:
raise ResourceListDupeError(
"Attempt to add resource already in resource_list")
# didn't find it in list, add to end
self.append(resource) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add(self, resource, replace=False):
"""Add a resource or an iterable collection of resources. Will throw a ValueError if the resource (ie. same uri) already exists in the ResourceList, unless replace=True. """ |
if isinstance(resource, collections.Iterable):
for r in resource:
self.resources.add(r, replace)
else:
self.resources.add(resource, replace) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def compare(self, src):
"""Compare this ResourceList object with that specified as src. The parameter src must also be a ResourceList object, it is assumed to be the source, and the current object is the destination. This written to work for any objects in self and sc, provided that the == operator can be used to compare them. The functioning of this method depends on the iterators for self and src providing access to the resource objects in URI order. """ |
dst_iter = iter(self.resources)
src_iter = iter(src.resources)
same = ResourceList()
updated = ResourceList()
deleted = ResourceList()
created = ResourceList()
dst_cur = next(dst_iter, None)
src_cur = next(src_iter, None)
while ((dst_cur is not None) and (src_cur is not None)):
# print 'dst='+dst_cur+' src='+src_cur
if (dst_cur.uri == src_cur.uri):
if (dst_cur == src_cur):
same.add(dst_cur)
else:
updated.add(src_cur)
dst_cur = next(dst_iter, None)
src_cur = next(src_iter, None)
elif (not src_cur or dst_cur.uri < src_cur.uri):
deleted.add(dst_cur)
dst_cur = next(dst_iter, None)
elif (not dst_cur or dst_cur.uri > src_cur.uri):
created.add(src_cur)
src_cur = next(src_iter, None)
else:
raise Exception("this should not be possible")
# what do we have leftover in src or dst lists?
while (dst_cur is not None):
deleted.add(dst_cur)
dst_cur = next(dst_iter, None)
while (src_cur is not None):
created.add(src_cur)
src_cur = next(src_iter, None)
# have now gone through both lists
return(same, updated, deleted, created) |
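The method relies on both iterators yielding resources in URI order; the core is a classic sorted-merge. Below is a standalone sketch of that merge on plain (uri, value) pairs, independent of the ResourceList machinery, to make the control flow easier to follow.

def merge_compare(dst, src):
    """dst and src are lists of (uri, value) tuples sorted by uri."""
    same, updated, deleted, created = [], [], [], []
    i = j = 0
    while i < len(dst) and j < len(src):
        d_uri, d_val = dst[i]
        s_uri, s_val = src[j]
        if d_uri == s_uri:
            (same if d_val == s_val else updated).append(s_uri)
            i += 1
            j += 1
        elif d_uri < s_uri:          # only in destination -> deleted at source
            deleted.append(d_uri)
            i += 1
        else:                        # only in source -> newly created
            created.append(s_uri)
            j += 1
    deleted.extend(u for u, _ in dst[i:])
    created.extend(u for u, _ in src[j:])
    return same, updated, deleted, created

# merge_compare([('a', 1), ('b', 2)], [('b', 3), ('c', 4)]) -> ([], ['b'], ['a'], ['c'])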
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def hashes(self):
"""Return set of hashes uses in this resource_list.""" |
hashes = set()
if (self.resources is not None):
for resource in self:
if (resource.md5 is not None):
hashes.add('md5')
if (resource.sha1 is not None):
hashes.add('sha-1')
if (resource.sha256 is not None):
hashes.add('sha-256')
return(hashes) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _build_sliced_filepath(filename, slice_count):
""" append slice_count to the end of a filename """ |
root = os.path.splitext(filename)[0]
ext = os.path.splitext(filename)[1]
new_filepath = ''.join((root, str(slice_count), ext))
return _build_filepath_for_phantomcss(new_filepath) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _build_filepath_for_phantomcss(filepath):
""" Prepare screenshot filename for use with phantomcss. ie, append 'diff' to the end of the file if a baseline exists """ |
try:
if os.path.exists(filepath):
new_root = '.'.join((os.path.splitext(filepath)[0], 'diff'))
ext = os.path.splitext(filepath)[1]
diff_filepath = ''.join((new_root, ext))
if os.path.exists(diff_filepath):
print 'removing stale diff: {0}'.format(diff_filepath)
os.remove(diff_filepath)
return diff_filepath
else:
return filepath
except OSError, e:
print e |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _build_filename_from_browserstack_json(j):
""" Build a useful filename for an image from the screenshot json metadata """ |
filename = ''
device = j['device'] if j['device'] else 'Desktop'
if j['state'] == 'done' and j['image_url']:
detail = [device, j['os'], j['os_version'],
j['browser'], j['browser_version'], '.jpg']
filename = '_'.join(item.replace(" ", "_") for item in detail if item)
else:
print 'screenshot timed out, ignoring this result'
return filename |
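A quick illustration with a dict shaped like the BrowserStack screenshot metadata this function reads; the values are made up.

j = {'state': 'done', 'device': '', 'os': 'Windows', 'os_version': '10',
     'browser': 'chrome', 'browser_version': '70.0',
     'image_url': 'https://example.invalid/shot.jpg'}
_build_filename_from_browserstack_json(j)   # -> 'Desktop_Windows_10_chrome_70.0_.jpg'
# an empty device falls back to 'Desktop'; a non-'done' state returns ''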
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _long_image_slice(in_filepath, out_filepath, slice_size):
""" Slice an image into parts slice_size tall. """ |
print 'slicing image: {0}'.format(in_filepath)
img = Image.open(in_filepath)
width, height = img.size
upper = 0
left = 0
slices = int(math.ceil(height / float(slice_size)))
count = 1
for slice in range(slices):
# if we are at the end, set the lower bound to be the bottom of the image
if count == slices:
lower = height
else:
lower = int(count * slice_size)
# set the bounding box! The important bit
bbox = (left, upper, width, lower)
working_slice = img.crop(bbox)
upper += slice_size
# save the slice
new_filepath = _build_sliced_filepath(out_filepath, count)
working_slice.save(new_filepath)
count += 1 |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _purge(dir, pattern, reason=''):
""" delete files in dir that match pattern """ |
for f in os.listdir(dir):
if re.search(pattern, f):
print "Purging file {0}. {1}".format(f, reason)
os.remove(os.path.join(dir, f)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def crack(ciphertext, *fitness_functions, ntrials=30, nswaps=3000):
"""Break ``ciphertext`` using hill climbing. Note: Currently ntrails and nswaps default to magic numbers. Generally the trend is, the longer the text, the lower the number of trials you need to run, because the hill climbing will lead to the best answer faster. Because randomness is involved, there is the possibility of the correct decryption not being found. In this circumstance you just need to run the code again. Example: HELLO Args: ciphertext (str):
The text to decrypt *fitness_functions (variable length argument list):
Functions to score decryption with Keyword Args: ntrials (int):
The number of times to run the hill climbing algorithm nswaps (int):
The number of rounds to find a local maximum Returns: Sorted list of decryptions Raises: ValueError: If nswaps or ntrials are not positive integers ValueError: If no fitness_functions are given """ |
if ntrials <= 0 or nswaps <= 0:
raise ValueError("ntrials and nswaps must be positive integers")
# Find a local maximum by swapping two letters and scoring the decryption
def next_node_inner_climb(node):
# Swap 2 characters in the key
a, b = random.sample(range(len(node)), 2)
node[a], node[b] = node[b], node[a]
plaintext = decrypt(node, ciphertext)
node_score = score(plaintext, *fitness_functions)
return node, node_score, Decryption(plaintext, ''.join(node), node_score)
# Outer climb reruns hill climb ntrials number of times, each time at a different start location
def next_node_outer_climb(node):
random.shuffle(node)
key, best_score, outputs = hill_climb(nswaps, node[:], next_node_inner_climb)
return key, best_score, outputs[-1] # The last item in this list is the item with the highest score
_, _, decryptions = hill_climb(ntrials, list(string.ascii_uppercase), next_node_outer_climb)
return sorted(decryptions, reverse=True) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def decrypt(key, ciphertext):
"""Decrypt Simple Substitution enciphered ``ciphertext`` using ``key``. Example: HELLO Args: key (iterable):
The key to use ciphertext (str):
The text to decrypt Returns: Decrypted ciphertext """ |
# TODO: Is it worth keeping this here or should I only accept strings?
key = ''.join(key)
alphabet = string.ascii_letters
cipher_alphabet = key.lower() + key.upper()
return ciphertext.translate(str.maketrans(cipher_alphabet, alphabet)) |
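Since the key is just the cipher alphabet aligned against string.ascii_letters, a Caesar-shifted alphabet gives a compact sanity check that exercises only the function above.

import string

key = string.ascii_uppercase[3:] + string.ascii_uppercase[:3]  # 'DEFG...ABC', i.e. a shift of 3
print(decrypt(key, 'KHOOR ZRUOG'))  # -> 'HELLO WORLD' (characters outside the key pass through)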
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_heading(self, value):
"""value can be between 0 and 359""" |
return self.write(request.SetHeading(self.seq, value)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_back_led_output(self, value):
"""value can be between 0x00 and 0xFF""" |
return self.write(request.SetBackLEDOutput(self.seq, value)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def roll(self, speed, heading, state=1):
""" speed can have value between 0x00 and 0xFF heading can have value between 0 and 359 """ |
return self.write(request.Roll(self.seq, speed, heading, state )) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def ISBNValidator(raw_isbn):
""" Check string is a valid ISBN number""" |
if not isinstance(raw_isbn, string_types):
raise ValidationError(_(u'Invalid ISBN: Not a string'))
isbn_to_check = raw_isbn.replace('-', '').replace(' ', '')
if len(isbn_to_check) != 10 and len(isbn_to_check) != 13:
raise ValidationError(_(u'Invalid ISBN: Wrong length'))
if not isbn.is_valid(isbn_to_check):
raise ValidationError(_(u'Invalid ISBN: Failed checksum'))
if isbn_to_check != isbn_to_check.upper():
raise ValidationError(_(u'Invalid ISBN: Only upper case allowed'))
return True |
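A small usage sketch, assuming ISBNValidator and Django's ValidationError are importable from this module. The hyphenated number below is the well-known ISBN-13 example; it validates after the separators are stripped, while a corrupted check digit raises the ValidationError used above.

ISBNValidator(u'978-0-306-40615-7')      # -> True
try:
    ISBNValidator(u'978-0-306-40615-6')  # bad check digit
except ValidationError as e:
    print(e)                             # Invalid ISBN: Failed checksum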
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def reward_battery(self):
""" Add a battery level reward """ |
if not 'battery' in self.mode:
return
mode = self.mode['battery']
if mode and self.__test_cond(mode):
self.logger.debug('Battery out')
self.player.stats['reward'] += mode['reward']
self.player.game_over = self.player.game_over or mode['terminal'] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def reward_item(self, item_type):
""" Add a food collision reward """ |
assert isinstance(item_type, str)
if not 'items' in self.mode:
return
mode = self.mode['items']
if mode and mode[item_type] and self.__test_cond(mode[item_type]):
self.logger.debug("{item_type} consumed".format(item_type=item_type))
self.player.stats['reward'] += mode[item_type]['reward']
self.player.stats['score'] += mode[item_type]['reward']
self.player.game_over = self.player.game_over or mode[item_type]['terminal'] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def reward_wall(self):
""" Add a wall collision reward """ |
if not 'wall' in self.mode:
return
mode = self.mode['wall']
if mode and self.__test_cond(mode):
self.logger.debug("Wall {x}/{y}".format(x=self.bumped_x, y=self.bumped_y))
self.player.stats['reward'] += mode['reward']
self.player.game_over = self.player.game_over or mode['terminal'] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def reward_explore(self):
""" Add an exploration reward """ |
if not 'explore' in self.mode:
return
mode = self.mode['explore']
if mode and mode['reward'] and self.__test_cond(mode):
self.player.stats['reward'] += mode['reward']
self.player.stats['score'] += mode['reward']
self.player.game_over = self.player.game_over or mode['terminal'] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def reward_goal(self):
""" Add an end goal reward """ |
if not 'goal' in self.mode:
return
mode = self.mode['goal']
if mode and mode['reward'] and self.__test_cond(mode):
if mode['reward'] > 0:
self.logger.info("Escaped!!")
self.player.stats['reward'] += mode['reward']
self.player.stats['score'] += mode['reward']
self.player.game_over = self.player.game_over or mode['terminal'] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def reward_proximity(self):
""" Add a wall proximity reward """ |
if not 'proximity' in self.mode:
return
mode = self.mode['proximity']
# Calculate proximity reward
reward = 0
for sensor in self.player.sensors:
if sensor.sensed_type == 'wall':
reward += sensor.proximity_norm()
else:
reward += 1
reward /= len(self.player.sensors)
#reward = min(1.0, reward * 2)
reward = min(1.0, reward * reward)
# TODO: Configurable bonus reward threshold. Pass extra args to `__test_cond`?
#if mode and mode and reward > 0.75 and self.__test_cond(mode):
if mode and self.__test_cond(mode):
# Apply bonus
reward *= mode['reward']
self.player.stats['reward'] += reward |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sitemap(self):
"""Return the sitemap URI based on maps or explicit settings.""" |
if (self.sitemap_name is not None):
return(self.sitemap_name)
return(self.sitemap_uri(self.resource_list_name)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read_resource_list(self, uri):
"""Read resource list from specified URI else raise exception.""" |
self.logger.info("Reading resource list %s" % (uri))
try:
resource_list = ResourceList(allow_multifile=self.allow_multifile,
mapper=self.mapper)
resource_list.read(uri=uri)
except Exception as e:
raise ClientError("Can't read source resource list from %s (%s)" %
(uri, str(e)))
self.logger.debug("Finished reading resource list")
return(resource_list) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_resource_list_from_source_description(self, uri):
"""Read source description to find resource list. Raises a ClientError in cases where the client might look for a source description in another location, but a ClientFatalError if a source description is found but there is some problem using it. """ |
self.logger.info("Reading source description %s" % (uri))
try:
sd = SourceDescription()
sd.read(uri=uri)
except Exception as e:
raise ClientError(
"Can't read source description from %s (%s)" %
(uri, str(e)))
if (len(sd) == 0):
raise ClientFatalError(
"Source description %s has no sources" % (uri))
elif (len(sd) > 1):
raise ClientFatalError(
"Source description %s has multiple sources, specify one "
"with --capabilitylist" % (uri))
self.logger.info("Finished reading source description")
cluri = sd.resources.first().uri
uri = urljoin(uri, cluri) # FIXME - Should relative URI handling be elsewhere?
return(self.find_resource_list_from_capability_list(uri)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_resource_list_from_capability_list(self, uri):
"""Read capability list to find resource list. Raises a ClientError in cases where the client might look for a capability list in another location, but a ClientFatalError if a capability list is found but there is some problem using it. """ |
self.logger.info("Reading capability list %s" % (uri))
try:
cl = CapabilityList()
cl.read(uri=uri)
except Exception as e:
raise ClientError(
"Can't read capability list from %s (%s)" %
(uri, str(e)))
if (not cl.has_capability('resourcelist')):
raise ClientFatalError(
"Capability list %s does not describe a resource list" % (uri))
rluri = cl.capability_info('resourcelist').uri
return(urljoin(uri, rluri)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_resource_list(self):
"""Finf resource list by hueristics, returns ResourceList object. 1. Use explicitly specified self.sitemap_name (and fail if that doesn't work) 2. Use explicitly specified self.capability_list_uri (and fail is that doesn't work) 3. Look for base_url/.well-known/resourcesync (then look for capability, look for resourcelist) 4. Look for host/.well-known/resourcesync (then look for capability, look for resourcelist) 5. Look for base_url/resourcelist.xml 6. Look for base_url/sitemap.xml 7. Look for host/sitemap.xml """ |
# 1 & 2
if (self.sitemap_name is not None):
return(self.read_resource_list(self.sitemap_name))
if (self.capability_list_uri is not None):
rluri = self.find_resource_list_from_capability_list(self.capability_list_uri)
return(self.read_resource_list(rluri))
# 3 & 4
parts = urlsplit(self.sitemap)
uri_host = urlunsplit([parts[0], parts[1], '', '', ''])
errors = []
for uri in [urljoin(self.sitemap, '.well-known/resourcesync'),
urljoin(uri_host, '.well-known/resourcesync')]:
uri = uri.lstrip('file:///') # urljoin adds this for local files
try:
rluri = self.find_resource_list_from_source_description(uri)
return(self.read_resource_list(rluri))
except ClientError as e:
errors.append(str(e))
# 5, 6 & 7
for uri in [urljoin(self.sitemap, 'resourcelist.xml'),
urljoin(self.sitemap, 'sitemap.xml'),
urljoin(uri_host, 'sitemap.xml')]:
uri = uri.lstrip('file:///') # urljoin adds this for local files
try:
return(self.read_resource_list(uri))
except ClientError as e:
errors.append(str(e))
raise ClientFatalError(
"Failed to find source resource list from common patterns (%s)" %
". ".join(errors)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def build_resource_list(self, paths=None, set_path=False):
"""Return a resource list for files on local disk. The set of files is taken by disk scan from the paths specified or else defaults to the paths specified in the current mappings paths - override paths from mappings if specified set_path - set true to set the path information for each resource included. This is used to build a resource list as the basis for creating a dump. Return ResourceList. Uses existing self.mapper settings. """ |
# 0. Sanity checks, parse paths if specified
if (len(self.mapper) < 1):
raise ClientFatalError(
"No source to destination mapping specified")
if (paths is not None):
# Expect comma separated list of paths
paths = paths.split(',')
# 1. Build from disk
rlb = ResourceListBuilder(set_hashes=self.hashes, mapper=self.mapper)
rlb.set_path = set_path
try:
rlb.add_exclude_files(self.exclude_patterns)
rl = rlb.from_disk(paths=paths)
except ValueError as e:
raise ClientFatalError(str(e))
# 2. Set defaults and overrides
rl.allow_multifile = self.allow_multifile
rl.pretty_xml = self.pretty_xml
rl.mapper = self.mapper
if (self.max_sitemap_entries is not None):
rl.max_sitemap_entries = self.max_sitemap_entries
return(rl) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def update_resource(self, resource, filename, change=None):
"""Update resource from uri to filename on local system. Update means three things: 1. GET resources 2. set mtime in local time to be equal to timestamp in UTC (should perhaps or at least warn if different from LastModified from the GET response instead but maybe warn if different (or just earlier than) the lastmod we expected from the resource list 3. check that resource matches expected information Also update self.last_timestamp if the timestamp (in source frame) of this resource is later and the current value. Returns the number of resources updated/created (0 or 1) """ |
path = os.path.dirname(filename)
distutils.dir_util.mkpath(path)
num_updated = 0
if (self.dryrun):
self.logger.info(
"dryrun: would GET %s --> %s" %
(resource.uri, filename))
else:
# 1. GET
for try_i in range(1, self.tries + 1):
try:
r = requests.get(resource.uri, timeout=self.timeout, stream=True)
# Fail on 4xx or 5xx
r.raise_for_status()
with open(filename, 'wb') as fd:
for chunk in r.iter_content(chunk_size=1024):
fd.write(chunk)
num_updated += 1
break
except requests.Timeout as e:
if try_i < self.tries:
msg = 'Download timed out, retrying...'
self.logger.info(msg)
# Continue loop
else:
# No more tries left, so fail
msg = "Failed to GET %s after %s tries -- %s" % (resource.uri, self.tries, str(e))
if (self.ignore_failures):
self.logger.warning(msg)
return(num_updated)
else:
raise ClientFatalError(msg)
except (requests.RequestException, IOError) as e:
msg = "Failed to GET %s -- %s" % (resource.uri, str(e))
if (self.ignore_failures):
self.logger.warning(msg)
return(num_updated)
else:
raise ClientFatalError(msg)
# 2. set timestamp if we have one
if (resource.timestamp is not None):
unixtime = int(resource.timestamp) # no fractional
os.utime(filename, (unixtime, unixtime))
if (resource.timestamp > self.last_timestamp):
self.last_timestamp = resource.timestamp
self.log_event(Resource(resource=resource, change=change))
# 3. sanity check
length = os.stat(filename).st_size
if (resource.length is not None and resource.length != length):
self.logger.info(
"Downloaded size for %s of %d bytes does not match expected %d bytes" %
(resource.uri, length, resource.length))
if (len(self.hashes) > 0):
self.check_hashes(filename, resource)
return(num_updated) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_hashes(self, filename, resource):
"""Check all hashes present in self.hashes _and_ resource object. Simply shows warning for mismatch, does not raise exception or otherwise stop process. """ |
# which hashes to calculate?
hashes = []
if ('md5' in self.hashes and resource.md5 is not None):
hashes.append('md5')
if ('sha-1' in self.hashes and resource.sha1 is not None):
hashes.append('sha-1')
if ('sha-256' in self.hashes and resource.sha256 is not None):
hashes.append('sha-256')
# calculate
hasher = Hashes(hashes, filename)
# check and report
if ('md5' in hashes and resource.md5 != hasher.md5):
self.logger.info(
"MD5 mismatch for %s, got %s but expected %s" %
(resource.uri, hasher.md5, resource.md5))
if ('sha-1' in hashes and resource.sha1 != hasher.sha1):
self.logger.info(
"SHA-1 mismatch for %s, got %s but expected %s" %
(resource.uri, hasher.sha1, resource.sha1))
if ('sha-256' in hashes and resource.sha256 != hasher.sha256):
self.logger.info(
"SHA-256 mismatch for %s, got %s but expected %s" %
(resource.uri, hasher.sha256, resource.sha256)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete_resource(self, resource, filename, allow_deletion=False):
"""Delete copy of resource in filename on local system. Will only actually do the deletion if allow_deletion is True. Regardless of whether the deletion occurs, self.last_timestamp will be updated if the resource.timestamp is later than the current value. Returns the number of files actually deleted (0 or 1). """ |
num_deleted = 0
uri = resource.uri
if (resource.timestamp is not None and
resource.timestamp > self.last_timestamp):
self.last_timestamp = resource.timestamp
if (allow_deletion):
if (self.dryrun):
self.logger.info(
"dryrun: would delete %s -> %s" %
(uri, filename))
else:
try:
os.unlink(filename)
num_deleted += 1
self.logger.info("deleted: %s -> %s" % (uri, filename))
self.log_event(
Resource(
resource=resource,
change="deleted"))
except OSError as e:
msg = "Failed to DELETE %s -> %s : %s" % (
uri, filename, str(e))
# if (self.ignore_failures):
self.logger.warning(msg)
# return
# else:
# raise ClientFatalError(msg)
else:
self.logger.info(
"nodelete: would delete %s (--delete to enable)" %
uri)
return(num_deleted) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse_document(self):
"""Parse any ResourceSync document and show information. Will use sitemap URI taken either from explicit self.sitemap_name or derived from the mappings supplied. """ |
s = Sitemap()
self.logger.info("Reading sitemap(s) from %s ..." % (self.sitemap))
try:
list = s.parse_xml(url_or_file_open(self.sitemap))
except IOError as e:
raise ClientFatalError("Cannot read document (%s)" % str(e))
num_entries = len(list.resources)
capability = '(unknown capability)'
if ('capability' in list.md):
capability = list.md['capability']
print("Parsed %s document with %d entries" % (capability, num_entries))
if (self.verbose):
to_show = 100
override_str = ' (override with --max-sitemap-entries)'
if (self.max_sitemap_entries):
to_show = self.max_sitemap_entries
override_str = ''
if (num_entries > to_show):
print(
"Showing first %d entries sorted by URI%s..." %
(to_show, override_str))
n = 0
for resource in list:
print('[%d] %s' % (n, str(resource)))
n += 1
if (n >= to_show):
break |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def write_resource_list( self, paths=None, outfile=None, links=None, dump=None):
"""Write a Resource List or a Resource Dump for files on local disk. Set of resources included is based on paths setting or else the mappings. Optionally links can be added. Output will be to stdout unless outfile is specified. If dump is true then a Resource Dump is written instead of a Resource List. If outfile is not set then self.default_resource_dump will be used. """ |
rl = self.build_resource_list(paths=paths, set_path=dump)
if (links is not None):
rl.ln = links
if (dump):
if (outfile is None):
outfile = self.default_resource_dump
self.logger.info("Writing resource dump to %s..." % (dump))
d = Dump(resources=rl, format=self.dump_format)
d.write(basename=outfile)
else:
if (outfile is None):
try:
print(rl.as_xml())
except ListBaseIndexError as e:
raise ClientFatalError(
"%s. Use --output option to specify base name for output files." %
str(e))
else:
rl.write(basename=outfile) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def write_change_list(self, paths=None, outfile=None, ref_sitemap=None, newref_sitemap=None, empty=None, links=None, dump=None):
"""Write a change list. Unless the both ref_sitemap and newref_sitemap are specified then the Change List is calculated between the reference an the current state of files on disk. The files on disk are scanned based either on the paths setting or else on the mappings. """ |
cl = ChangeList(ln=links)
if (not empty):
# 1. Get and parse reference sitemap
old_rl = self.read_reference_resource_list(ref_sitemap)
# 2. Depending on whether a newref_sitemap was specified, either read that
# or build resource list from files on disk
if (newref_sitemap is None):
# Get resource list from disk
new_rl = self.build_resource_list(paths=paths, set_path=dump)
else:
new_rl = self.read_reference_resource_list(
newref_sitemap, name='new reference')
# 3. Calculate change list
(same, updated, deleted, created) = old_rl.compare(new_rl)
cl.add_changed_resources(updated, change='updated')
cl.add_changed_resources(deleted, change='deleted')
cl.add_changed_resources(created, change='created')
# 4. Write out change list
cl.mapper = self.mapper
cl.pretty_xml = self.pretty_xml
if (self.max_sitemap_entries is not None):
cl.max_sitemap_entries = self.max_sitemap_entries
if (outfile is None):
print(cl.as_xml())
else:
cl.write(basename=outfile)
self.write_dump_if_requested(cl, dump) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def write_capability_list(self, capabilities=None, outfile=None, links=None):
"""Write a Capability List to outfile or STDOUT.""" |
capl = CapabilityList(ln=links)
capl.pretty_xml = self.pretty_xml
if (capabilities is not None):
for name in capabilities.keys():
capl.add_capability(name=name, uri=capabilities[name])
if (outfile is None):
print(capl.as_xml())
else:
capl.write(basename=outfile) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def write_source_description( self, capability_lists=None, outfile=None, links=None):
"""Write a ResourceSync Description document to outfile or STDOUT.""" |
rsd = SourceDescription(ln=links)
rsd.pretty_xml = self.pretty_xml
if (capability_lists is not None):
for uri in capability_lists:
rsd.add_capability_list(uri)
if (outfile is None):
print(rsd.as_xml())
else:
rsd.write(basename=outfile) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read_reference_resource_list(self, ref_sitemap, name='reference'):
"""Read reference resource list and return the ResourceList object. The name parameter is used just in output messages to say what type of resource list is being read. """ |
rl = ResourceList()
self.logger.info(
"Reading %s resource list from %s ..." %
(name, ref_sitemap))
rl.mapper = self.mapper
rl.read(uri=ref_sitemap, index_only=(not self.allow_multifile))
num_entries = len(rl.resources)
self.logger.info(
"Read %s resource list with %d entries in %d sitemaps" %
(name, num_entries, rl.num_files))
if (self.verbose):
to_show = 100
override_str = ' (override with --max-sitemap-entries)'
if (self.max_sitemap_entries):
to_show = self.max_sitemap_entries
override_str = ''
if (num_entries > to_show):
print(
"Showing first %d entries sorted by URI%s..." %
(to_show, override_str))
n = 0
for r in rl.resources:
print(r)
n += 1
if (n >= to_show):
break
return(rl) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def prune_hashes(self, hashes, list_type):
"""Prune any hashes not in source resource or change list.""" |
discarded = []
for hash in hashes:
if (hash in self.hashes):
self.hashes.discard(hash)
discarded.append(hash)
self.logger.info("Not calculating %s hash(es) on destination as not present "
"in source %s list" % (', '.join(sorted(discarded)), list_type)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def log_status(self, in_sync=True, incremental=False, audit=False, same=None, created=0, updated=0, deleted=0, to_delete=0):
"""Write log message regarding status in standard form. Split this off so all messages from baseline/audit/incremental are written in a consistent form. """ |
if (audit):
words = {'created': 'to create',
'updated': 'to update',
'deleted': 'to delete'}
else:
words = {'created': 'created',
'updated': 'updated',
'deleted': 'deleted'}
if in_sync:
# status rather than action
status = "NO CHANGES" if incremental else "IN SYNC"
else:
if audit:
status = "NOT IN SYNC"
elif (to_delete > deleted):
# will need --delete
status = "PART APPLIED" if incremental else"PART SYNCED"
words['deleted'] = 'to delete (--delete)'
deleted = to_delete
else:
status = "CHANGES APPLIED" if incremental else "SYNCED"
same = "" if (same is None) else ("same=%d, " % same)
self.logger.warning("Status: %15s (%s%s=%d, %s=%d, %s=%d)" %
(status, same, words['created'], created,
words['updated'], updated, words['deleted'], deleted)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def render_latex_template(path_templates, template_filename,
template_vars=None):
'''Render a latex template, filling in its template variables
:param path_templates: the path to the template directory
:param template_filename: the name, rooted at the path_template_directory,
of the desired template for rendering
:param template_vars: dictionary of key:val for jinja2 variables
defaults to None for case when no values need to be passed
'''
var_dict = template_vars if template_vars else {}
var_dict_escape = recursive_apply(var_dict, escape_latex_str_if_str)
j2_env = jinja2.Environment(
loader=jinja2.FileSystemLoader(path_templates), **J2_ARGS
)
template = j2_env.get_template(template_filename)
return template.render(**var_dict_escape) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _normalize_day(year, month, day):
""" Coerce the day of the month to an internal value that may or may not match the "public" value. With the exception of the last three days of every month, all days are stored as-is. The last three days are instead stored as -1 (the last), -2 (first from last) and -3 (second from last). Therefore, for a 28-day month, the last week is as follows: Day | 22 23 24 25 26 27 28 Value | 22 23 24 25 -3 -2 -1 For a 29-day month, the last week is as follows: Day | 23 24 25 26 27 28 29 Value | 23 24 25 26 -3 -2 -1 For a 30-day month, the last week is as follows: Day | 24 25 26 27 28 29 30 Value | 24 25 26 27 -3 -2 -1 For a 31-day month, the last week is as follows: Day | 25 26 27 28 29 30 31 Value | 25 26 27 28 -3 -2 -1 This slightly unintuitive system makes some temporal arithmetic produce a more desirable outcome. :param year: :param month: :param day: :return: """ |
if year < MIN_YEAR or year > MAX_YEAR:
raise ValueError("Year out of range (%d..%d)" % (MIN_YEAR, MAX_YEAR))
if month < 1 or month > 12:
raise ValueError("Month out of range (1..12)")
days_in_month = DAYS_IN_MONTH[(year, month)]
if day in (days_in_month, -1):
return year, month, -1
if day in (days_in_month - 1, -2):
return year, month, -2
if day in (days_in_month - 2, -3):
return year, month, -3
if 1 <= day <= days_in_month - 3:
return year, month, int(day)
# TODO improve this error message
raise ValueError("Day %d out of range (1..%d, -1, -2 ,-3)" % (day, days_in_month)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def from_clock_time(cls, clock_time, epoch):
""" Convert from a ClockTime relative to a given epoch. """ |
try:
clock_time = ClockTime(*clock_time)
except (TypeError, ValueError):
raise ValueError("Clock time must be a 2-tuple of (s, ns)")
else:
ordinal = clock_time.seconds // 86400
return Date.from_ordinal(ordinal + epoch.date().to_ordinal()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def from_native(cls, t):
""" Convert from a native Python `datetime.time` value. """ |
second = (1000000 * t.second + t.microsecond) / 1000000
return Time(t.hour, t.minute, second, t.tzinfo) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def from_clock_time(cls, clock_time, epoch):
""" Convert from a `.ClockTime` relative to a given epoch. """ |
clock_time = ClockTime(*clock_time)
ts = clock_time.seconds % 86400
nanoseconds = int(1000000000 * ts + clock_time.nanoseconds)
return Time.from_ticks(epoch.time().ticks + nanoseconds / 1000000000) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_native(self):
""" Convert to a native Python `datetime.time` value. """ |
h, m, s = self.hour_minute_second
s, ns = nano_divmod(s, 1)
ms = int(nano_mul(ns, 1000000))
return time(h, m, s, ms) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def process_user(ctx, param, value):
"""Return a user if exists.""" |
if value:
if value.isdigit():
user = User.query.get(str(value))
else:
user = User.query.filter(User.email == value).one_or_none()
return user |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def tokens_create(name, user, scopes, internal):
"""Create a personal OAuth token.""" |
token = Token.create_personal(
name, user.id, scopes=scopes, is_internal=internal)
db.session.commit()
click.secho(token.access_token, fg='blue') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def tokens_delete(name=None, user=None, read_access_token=None, force=False):
"""Delete a personal OAuth token.""" |
if not (name or user) and not read_access_token:
click.get_current_context().fail(
'You have to pass either a "name" and "user" or the "token"')
if name and user:
client = Client.query.filter(
Client.user_id == user.id,
Client.name == name,
Client.is_internal.is_(True)).one()
token = Token.query.filter(
Token.user_id == user.id,
Token.is_personal.is_(True),
Token.client_id == client.client_id).one()
elif read_access_token:
access_token = click.prompt('Token', hide_input=True)
token = Token.query.filter(Token.access_token == access_token).one()
else:
click.get_current_context().fail('No token was found with the provided arguments.')
if force or click.confirm('Are you sure you want to delete the token?'):
db.session.delete(token)
db.session.commit()
click.secho(
'Token "{}" deleted.'.format(token.access_token), fg='yellow') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def normalize_residuals(self, points):
"""Normalize residuals by the level of the variable.""" |
residuals = self.evaluate_residual(points)
solutions = self.evaluate_solution(points)
return [resid / soln for resid, soln in zip(residuals, solutions)] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def apply(self, query, value, alias):
"""Convert UUID. :param query: SQLAlchemy query object. :param value: UUID value. :param alias: Alias of the column. :returns: Filtered query matching the UUID value. """ |
try:
value = uuid.UUID(value)
return query.filter(self.column == value)
except ValueError:
return query |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def conv_uuid(self, column, name, **kwargs):
"""Convert UUID filter.""" |
return [f(column, name, **kwargs) for f in self.uuid_filters] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def conv_variant(self, column, name, **kwargs):
"""Convert variants.""" |
return self.convert(str(column.type), column, name, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def verify(self, otp, timestamp=False, sl=None, timeout=None, return_response=False):
""" Verify a provided OTP. :param otp: OTP to verify. :type otp: ``str`` :param timestamp: True to include request timestamp and session counter in the response. Defaults to False. :type timestamp: ``bool`` :param sl: A value indicating percentage of syncing required by client. :type sl: ``int`` or ``str`` :param timeout: Number of seconds to wait for sync responses. :type timeout: ``int`` :param return_response: True to return a response object instead of the status code. Defaults to False. :type return_response: ``bool`` :return: True is the provided OTP is valid, False if the REPLAYED_OTP status value is returned or the response message signature verification failed and None for the rest of the status values. """ |
ca_bundle_path = self._get_ca_bundle_path()
otp = OTP(otp, self.translate_otp)
rand_str = b(os.urandom(30))
nonce = base64.b64encode(rand_str, b('xz'))[:25].decode('utf-8')
query_string = self.generate_query_string(otp.otp, nonce, timestamp,
sl, timeout)
threads = []
timeout = timeout or DEFAULT_TIMEOUT
for url in self.api_urls:
thread = URLThread('%s?%s' % (url, query_string), timeout,
self.verify_cert, ca_bundle_path)
thread.start()
threads.append(thread)
# Wait for a first positive or negative response
start_time = time.time()
while threads and (start_time + timeout) > time.time():
for thread in threads:
if not thread.is_alive():
if thread.exception:
raise thread.exception
elif thread.response:
status = self.verify_response(thread.response,
otp.otp, nonce,
return_response)
if status:
if return_response:
return status
else:
return True
threads.remove(thread)
time.sleep(0.1)
# Timeout or no valid response received
raise Exception('NO_VALID_ANSWERS') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def verify_multi(self, otp_list, max_time_window=DEFAULT_MAX_TIME_WINDOW, sl=None, timeout=None):
""" Verify a provided list of OTPs. :param max_time_window: Maximum number of seconds which can pass between the first and last OTP generation for the OTP to still be considered valid. :type max_time_window: ``int`` """ |
# Create the OTP objects
otps = []
for otp in otp_list:
otps.append(OTP(otp, self.translate_otp))
if len(otp_list) < 2:
raise ValueError('otp_list needs to contain at least two OTPs')
device_ids = set()
for otp in otps:
device_ids.add(otp.device_id)
# Check that all the OTPs contain same device id
if len(device_ids) != 1:
raise Exception('OTPs contain different device ids')
# Now we verify the OTPs and save the server response for each OTP.
# We need the server response, to retrieve the timestamp.
# It's possible to retrieve this value locally, without querying the
# server but in this case, user would need to provide his AES key.
for otp in otps:
response = self.verify(otp.otp, True, sl, timeout,
return_response=True)
if not response:
return False
otp.timestamp = int(response['timestamp'])
count = len(otps)
delta = otps[count - 1].timestamp - otps[0].timestamp
# OTPs have an 8Hz timestamp counter so we need to divide it to get
# seconds
delta = delta / 8
if delta < 0:
raise Exception('delta is smaller than zero. First OTP appears to '
'be older than the last one')
if delta > max_time_window:
raise Exception(('More than %s seconds have passed between '
'generating the first and the last OTP.') %
(max_time_window))
return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generate_query_string(self, otp, nonce, timestamp=False, sl=None, timeout=None):
""" Returns a query string which is sent to the validation servers. """ |
data = [('id', self.client_id),
('otp', otp),
('nonce', nonce)]
if timestamp:
data.append(('timestamp', '1'))
if sl is not None:
if sl not in range(0, 101) and sl not in ['fast', 'secure']:
raise Exception('sl parameter value must be between 0 and '
'100 or string "fast" or "secure"')
data.append(('sl', sl))
if timeout:
data.append(('timeout', timeout))
query_string = urlencode(data)
if self.key:
hmac_signature = self.generate_message_signature(query_string)
query_string += '&h=%s' % (hmac_signature.replace('+', '%2B'))
return query_string |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse_parameters_from_response(self, response):
""" Returns a response signature and query string generated from the server response. 'h' aka signature argument is stripped from the returned query string. """ |
lines = response.splitlines()
pairs = [line.strip().split('=', 1) for line in lines if '=' in line]
pairs = sorted(pairs)
signature = ([unquote(v) for k, v in pairs if k == 'h'] or [None])[0]
# already quoted
query_string = '&' . join([k + '=' + v for k, v in pairs if k != 'h'])
return (signature, query_string) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_parameters_as_dictionary(self, query_string):
""" Returns query string parameters as a dictionary. """ |
pairs = (x.split('=', 1) for x in query_string.split('&'))
return dict((k, unquote(v)) for k, v in pairs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _init_request_urls(self, api_urls):
""" Returns a list of the API URLs. """ |
if not isinstance(api_urls, (str, list, tuple)):
raise TypeError('api_urls needs to be string or iterable!')
if isinstance(api_urls, str):
api_urls = (api_urls,)
api_urls = list(api_urls)
for url in api_urls:
if not url.startswith('http://') and \
not url.startswith('https://'):
raise ValueError(('URL "%s" contains an invalid or missing'
' scheme' % (url)))
return list(api_urls) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_ca_bundle_path(self):
""" Return a path to the CA bundle which is used for verifying the hosts SSL certificate. """ |
if self.ca_certs_bundle_path:
# User provided a custom path
return self.ca_certs_bundle_path
# Return first bundle which is available
for file_path in COMMON_CA_LOCATIONS:
if self._is_valid_ca_bundle_file(file_path=file_path):
return file_path
return None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def random_str_uuid(string_length):
"""Returns a random string of length string_length""" |
if not isinstance(string_length, int) or not 1 <= string_length <= 32:
msg = "string_length must be type int where 1 <= string_length <= 32"
raise ValueError(msg)
random = str(uuid.uuid4()).upper().replace('-', '')
return random[0:string_length] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def random_name_filepath(path_full, length_random=5):
    '''Take a filepath, add random characters to its basename,
    and return the new filepath
    :param path_full: either a filename or filepath
    :param length_random: length of random string to be generated
'''
path_full_pre_extension, extension = os.path.splitext(path_full)
random_str = random_str_uuid(length_random)
return "{}{}{}".format(path_full_pre_extension, random_str, extension) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def list_filepathes_with_predicate(path_dir, predicate):
'''List all filepathes in a directory that begin with predicate
:param path_dir: the directory whose top-level contents you wish to list
:param predicate: the predicate you want to test the directory's
found files against
'''
if not os.path.isdir(path_dir):
raise ValueError("{} is not a directory".format(path_dir))
contents = (os.path.join(path_dir, a) for a in os.listdir(path_dir))
files = (c for c in contents if os.path.isfile(c))
return [f for f in files if f.startswith(predicate)] |
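To see how this helper supports the cleanup step in run_latex further below, here is a self-contained sketch using throwaway files in a temporary directory:

import os
import tempfile

def list_filepathes_with_predicate(path_dir, predicate):
    contents = (os.path.join(path_dir, a) for a in os.listdir(path_dir))
    files = (c for c in contents if os.path.isfile(c))
    return [f for f in files if f.startswith(predicate)]

tmpdir = tempfile.mkdtemp()
for name in ("reportAB.aux", "reportAB.log", "notes.txt"):
    open(os.path.join(tmpdir, name), "w").close()

prefix = os.path.join(tmpdir, "reportAB")
print(sorted(list_filepathes_with_predicate(tmpdir, prefix)))
# ['<tmpdir>/reportAB.aux', '<tmpdir>/reportAB.log'] -- notes.txt is excluded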
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def recursive_apply(inval, func):
'''Recursively apply a function to all levels of nested iterables
:param inval: the object to run the function on
:param func: the function that will be run on the inval
'''
if isinstance(inval, dict):
return {k: recursive_apply(v, func) for k, v in inval.items()}
elif isinstance(inval, list):
return [recursive_apply(v, func) for v in inval]
else:
return func(inval) |
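A short usage example of the recursion over mixed nesting (the sample data is made up):

def recursive_apply(inval, func):
    if isinstance(inval, dict):
        return {k: recursive_apply(v, func) for k, v in inval.items()}
    elif isinstance(inval, list):
        return [recursive_apply(v, func) for v in inval]
    else:
        return func(inval)

data = {"title": "report", "sections": ["intro", "results"], "meta": {"year": 2020}}
print(recursive_apply(data, lambda v: str(v).upper()))
# {'title': 'REPORT', 'sections': ['INTRO', 'RESULTS'], 'meta': {'year': '2020'}}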
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def write(self, basename=None, write_separate_manifests=True):
"""Write one or more dump files to complete this dump. Returns the number of dump/archive files written. """ |
self.check_files()
n = 0
for manifest in self.partition_dumps():
dumpbase = "%s%05d" % (basename, n)
dumpfile = "%s.%s" % (dumpbase, self.format)
if (write_separate_manifests):
manifest.write(basename=dumpbase + '.xml')
if (self.format == 'zip'):
self.write_zip(manifest.resources, dumpfile)
elif (self.format == 'warc'):
self.write_warc(manifest.resources, dumpfile)
else:
raise DumpError(
"Unknown dump format requested (%s)" %
(self.format))
n += 1
self.logger.info("Wrote %d dump files" % (n))
return(n) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def write_zip(self, resources=None, dumpfile=None):
"""Write a ZIP format dump file. Writes a ZIP file containing the resources in the iterable resources along with a manifest file manifest.xml (written first). No checks on the size of files or total size are performed, this is expected to have been done beforehand. """ |
compression = (ZIP_DEFLATED if self.compress else ZIP_STORED)
zf = ZipFile(
dumpfile,
mode="w",
compression=compression,
allowZip64=True)
        # Map resource paths to archive paths, then write the manifest first
rdm = ResourceDumpManifest(resources=resources)
real_path = {}
for resource in resources:
archive_path = self.archive_path(resource.path)
real_path[archive_path] = resource.path
resource.path = archive_path
zf.writestr('manifest.xml', rdm.as_xml())
# Add all files in the resources
for resource in resources:
zf.write(real_path[resource.path], arcname=resource.path)
zf.close()
zipsize = os.path.getsize(dumpfile)
self.logger.info(
"Wrote ZIP file dump %s with size %d bytes" %
(dumpfile, zipsize)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def write_warc(self, resources=None, dumpfile=None):
"""Write a WARC dump file. WARC support is not part of ResourceSync v1.0 (Z39.99 2014) but is left in this library for experimentation. """ |
# Load library late as we want to be able to run rest of code
# without this installed
try:
from warc import WARCFile, WARCHeader, WARCRecord
        except ImportError:
raise DumpError("Failed to load WARC library")
wf = WARCFile(dumpfile, mode="w", compress=self.compress)
# Add all files in the resources
for resource in resources:
wh = WARCHeader({})
wh.url = resource.uri
wh.ip_address = None
wh.date = resource.lastmod
wh.content_type = 'text/plain'
wh.result_code = 200
wh.checksum = 'aabbcc'
wh.location = self.archive_path(resource.path)
wf.write_record(WARCRecord(header=wh, payload=resource.path))
wf.close()
warcsize = os.path.getsize(dumpfile)
        self.logger.info(
"Wrote WARC file dump %s with size %d bytes" %
(dumpfile, warcsize)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_files(self, set_length=True, check_length=True):
"""Check all files in self.resources, find longest common prefix. Go though and check all files in self.resources, add up size, and find longest common path that can be used when writing the dump file. Saved in self.path_prefix. Parameters set_length and check_length control control whether then set_length attribute should be set from the file size if not specified, and whether any length specified should be checked. By default both are True. In any event, the total size calculated is the size of files on disk. """ |
total_size = 0 # total size of all files in bytes
path_prefix = None
for resource in self.resources:
if (resource.path is None):
# explicit test because exception raised by getsize otherwise
# confusing
raise DumpError(
"No file path defined for resource %s" %
resource.uri)
if (path_prefix is None):
path_prefix = os.path.dirname(resource.path)
else:
path_prefix = os.path.commonprefix(
[path_prefix, os.path.dirname(resource.path)])
size = os.path.getsize(resource.path)
if (resource.length is not None):
if (check_length and resource.length != size):
raise DumpError("Size of resource %s is %d on disk, not %d as specified" %
(resource.uri, size, resource.length))
elif (set_length):
resource.length = size
if (size > self.max_size):
raise DumpError(
"Size of file (%s, %d) exceeds maximum (%d) dump size" %
(resource.path, size, self.max_size))
total_size += size
self.path_prefix = path_prefix
self.total_size = total_size
self.logger.info(
"Total size of files to include in dump %d bytes" %
(total_size))
return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def partition_dumps(self):
"""Yeild a set of manifest object that parition the dumps. Simply adds resources/files to a manifest until their are either the the correct number of files or the size limit is exceeded, then yields that manifest. """ |
manifest = self.manifest_class()
manifest_size = 0
manifest_files = 0
for resource in self.resources:
manifest.add(resource)
manifest_size += resource.length
manifest_files += 1
if (manifest_size >= self.max_size or
manifest_files >= self.max_files):
yield(manifest)
# Need to start a new manifest
manifest = self.manifest_class()
manifest_size = 0
manifest_files = 0
if (manifest_files > 0):
yield(manifest) |
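The batching rule in isolation, as a minimal standalone sketch with plain integers standing in for resource lengths (the names here are illustrative, not the library's API):

def partition(lengths, max_size=100, max_files=3):
    batch, batch_size = [], 0
    for length in lengths:
        batch.append(length)
        batch_size += length
        if batch_size >= max_size or len(batch) >= max_files:
            yield batch
            batch, batch_size = [], 0
    if batch:
        yield batch  # final, partially filled batch

print(list(partition([40, 70, 10, 10, 10, 95])))
# [[40, 70], [10, 10, 10], [95]]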
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def archive_path(self, real_path):
"""Return the archive path for file with real_path. Mapping is based on removal of self.path_prefix which is determined by self.check_files(). """ |
if (not self.path_prefix):
return(real_path)
else:
return(os.path.relpath(real_path, self.path_prefix)) |
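The interplay between check_files() and archive_path() boils down to os.path.commonprefix plus os.path.relpath; a tiny sketch with hypothetical POSIX paths:

import os

paths = ["/data/site/res/a/file1.txt", "/data/site/res/b/file2.txt"]
path_prefix = os.path.commonprefix([os.path.dirname(p) for p in paths])
print(path_prefix)                             # /data/site/res/
print(os.path.relpath(paths[0], path_prefix))  # a/file1.txt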
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def reset(self):
""" Attach a new engine to director """ |
self.scene = cocos.scene.Scene()
self.z = 0
palette = config.settings['view']['palette']
#Player.palette = palette
r, g, b = palette['bg']
self.scene.add(cocos.layer.ColorLayer(r, g, b, 255), z=self.z)
self.z += 1
message_layer = MessageLayer()
self.scene.add(message_layer, z=self.z)
self.z += 1
self.world_layer = WorldLayer(self.mode_id, fn_show_message=message_layer.show_message)
self.scene.add(self.world_layer, z=self.z)
self.z += 1
self.director._set_scene(self.scene)
# Step once to refresh before `act`
self.step()
# TODO: Reset to `ones`?
return self.world_layer.get_state() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def act(self, action):
""" Take one action for one step """ |
# FIXME: Hack to change in return type
action = int(action)
assert isinstance(action, int)
assert action < self.actions_num, "%r (%s) invalid"%(action, type(action))
# Reset buttons
for k in self.world_layer.buttons:
self.world_layer.buttons[k] = 0
# Apply each button defined in action config
for key in self.world_layer.player.controls[action]:
if key in self.world_layer.buttons:
self.world_layer.buttons[key] = 1
# Act in the environment
self.step()
observation = self.world_layer.get_state()
reward = self.world_layer.player.get_reward()
terminal = self.world_layer.player.game_over
info = {}
return observation, reward, terminal, info |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def step(self):
""" Step the engine one tick """ |
self.director.window.switch_to()
self.director.window.dispatch_events()
self.director.window.dispatch_event('on_draw')
self.director.window.flip()
# Ticking before events caused glitches.
pyglet.clock.tick() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def run_latex(self, cmd_wo_infile, path_outfile):
'''Main runner for latex build
Should compile the object's Latex template using a list of latex
shell commands, along with an output file location. Runs the latex
shell command until the process's .aux file remains unchanged.
:return: STR template text that is ultimately rendered
:param cmd_wo_infile: a list of subprocess commands omitting the
input file (example: ['pdflatex'])
:param path_outfile: the full path to the desired final output file
Must contain the same file extension as files generated by
cmd_wo_infile, otherwise the process will fail
'''
# Generate path variables
text_template = self.get_text_template()
path_template_random = random_name_filepath(self.path_template)
path_template_dir = os.path.dirname(path_template_random)
path_template_random_no_ext = os.path.splitext(path_template_random)[0]
path_template_random_aux = path_template_random_no_ext + ".aux"
ext_outfile = os.path.splitext(path_outfile)[-1]
path_outfile_initial = "{}{}".format(
path_template_random_no_ext, ext_outfile)
# Handle special case of MS Word
if cmd_wo_infile[0] == 'latex2rtf' and len(cmd_wo_infile) == 1:
cmd_docx = cmd_wo_infile + ['-o', path_outfile_initial]
        # Need to run pdflatex to generate the aux file
cmd_wo_infile = ['pdflatex']
else:
cmd_docx = None
try:
# Write template variable to a temporary file
with open(path_template_random, 'w') as temp_file:
temp_file.write(text_template)
cmd = cmd_wo_infile + [path_template_random]
old_aux, new_aux = random_str_uuid(1), random_str_uuid(2)
while old_aux != new_aux:
# Run the relevant Latex command until old aux != new aux
stdout = check_output_cwd(cmd, path_template_dir)
LOGGER.debug('\n'.join(stdout))
old_aux, new_aux = new_aux, read_file(path_template_random_aux)
# Handle special case of MS Word
if cmd_docx:
cmd_word = cmd_docx + [path_template_random]
stdout = check_output_cwd(cmd_word, path_template_dir)
LOGGER.debug('\n'.join(stdout))
shutil.move(path_outfile_initial, path_outfile)
LOGGER.info("Built {} from {}".format(
path_outfile, self.path_template))
except Exception:
LOGGER.exception("Failed during latex build")
finally:
# Clean up all temporary files associated with the
# random file identifier for the process files
path_gen = list_filepathes_with_predicate(
path_template_dir, path_template_random_no_ext)
for path_gen_file in path_gen:
os.remove(path_gen_file)
return text_template |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def update_center(self, cshape_center):
"""cshape_center must be eu.Vector2""" |
assert isinstance(cshape_center, eu.Vector2)
self.position = world_to_view(cshape_center)
self.cshape.center = cshape_center |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def has(cls):
"""Class decorator that declares dependencies""" |
deps = {}
for i in dir(cls):
if i.startswith('__') and i.endswith('__'):
continue
val = getattr(cls, i, None)
if isinstance(val, Dependency):
deps[i] = val
if val.name is None:
val.name = i
cls.__injections__ = deps
return cls |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def inject(self, inst, **renames):
"""Injects dependencies and propagates dependency injector""" |
if renames:
di = self.clone(**renames)
else:
di = self
pro = di._provides
inst.__injections_source__ = di
deps = getattr(inst, '__injections__', None)
if deps:
for attr, dep in deps.items():
val = pro.get(dep.name)
if val is None:
raise MissingDependencyError(dep.name)
if not isinstance(val, dep.type):
raise TypeError("Wrong provider for {!r}".format(val))
setattr(inst, attr, val)
meth = getattr(inst, '__injected__', None)
if meth is not None:
meth()
return inst |
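A minimal standalone sketch of the declare-and-inject pattern these two helpers implement; the class names and the toy injector below are simplified stand-ins, not the library's actual API:

class Dependency:
    def __init__(self, type_, name=None):
        self.type = type_
        self.name = name

def has(cls):
    # Collect Dependency declarations from the class body, as in the decorator above
    deps = {}
    for attr, val in list(vars(cls).items()):
        if isinstance(val, Dependency):
            deps[attr] = val
            if val.name is None:
                val.name = attr
    cls.__injections__ = deps
    return cls

class Database:
    pass

@has
class UserService:
    db = Dependency(Database, 'database')

providers = {'database': Database()}
svc = UserService()
for attr, dep in UserService.__injections__.items():
    setattr(svc, attr, providers[dep.name])  # look up each provider by its declared name
print(isinstance(svc.db, Database))           # True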
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def interconnect_all(self):
"""Propagate dependencies for provided instances""" |
for dep in topologically_sorted(self._provides):
if hasattr(dep, '__injections__') and not hasattr(dep, '__injections_source__'):
self.inject(dep) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _basis_polynomial_factory(cls, kind):
"""Return a polynomial given some coefficients.""" |
valid_kind = cls._validate(kind)
basis_polynomial = getattr(np.polynomial, valid_kind)
return basis_polynomial |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _validate(cls, kind):
"""Validate the kind argument.""" |
if kind not in cls._valid_kinds:
mesg = "'kind' must be one of {}, {}, {}, or {}."
raise ValueError(mesg.format(*cls._valid_kinds))
else:
return kind |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def derivatives_factory(cls, coef, domain, kind, **kwargs):
""" Given some coefficients, return a the derivative of a certain kind of orthogonal polynomial defined over a specific domain. """ |
basis_polynomial = cls._basis_polynomial_factory(kind)
return basis_polynomial(coef, domain).deriv() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def functions_factory(cls, coef, domain, kind, **kwargs):
""" Given some coefficients, return a certain kind of orthogonal polynomial defined over a specific domain. """ |
basis_polynomial = cls._basis_polynomial_factory(kind)
return basis_polynomial(coef, domain) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def roots(cls, degree, domain, kind):
"""Return optimal collocation nodes for some orthogonal polynomial.""" |
basis_coefs = cls._basis_monomial_coefs(degree)
basis_poly = cls.functions_factory(basis_coefs, domain, kind)
return basis_poly.roots() |
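As a sanity check of what these collocation nodes look like, here is a sketch for a degree-3 Chebyshev basis; the coefficient vector is built directly rather than via the class's _basis_monomial_coefs helper, which is assumed to produce something equivalent:

import numpy as np

degree, domain = 3, [-1, 1]
coefs = np.zeros(degree + 1)
coefs[-1] = 1                      # select the degree-3 basis polynomial T_3
basis_poly = np.polynomial.Chebyshev(coefs, domain)
print(np.round(basis_poly.roots(), 3))
# approximately [-0.866, 0.0, 0.866] -- the roots of T_3(x) = 4x^3 - 3x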
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_capability_list(self, capability_list=None):
"""Add a capability list. Adds either a CapabiltyList object specified in capability_list or else creates a Resource with the URI given in capability_list and adds that to the Source Description """ |
if (hasattr(capability_list, 'uri')):
r = Resource(uri=capability_list.uri,
capability=capability_list.capability_name)
if (capability_list.describedby is not None):
r.link_set(rel='describedby', href=capability_list.describedby)
else:
r = Resource(uri=capability_list,
capability='capabilitylist')
self.add(r) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def verify_oauth_token_and_set_current_user():
"""Verify OAuth token and set current user on request stack. This function should be used **only** on REST application. .. code-block:: python app.before_request(verify_oauth_token_and_set_current_user) """ |
for func in oauth2._before_request_funcs:
func()
if not hasattr(request, 'oauth') or not request.oauth:
scopes = []
try:
valid, req = oauth2.verify_request(scopes)
except ValueError:
abort(400, 'Error trying to decode a non urlencoded string.')
for func in oauth2._after_request_funcs:
valid, req = func(valid, req)
if valid:
request.oauth = req |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def scope_choices(self, exclude_internal=True):
"""Return list of scope choices. :param exclude_internal: Exclude internal scopes or not. (Default: ``True``) :returns: A list of tuples (id, scope). """ |
return [
(k, scope) for k, scope in sorted(self.scopes.items())
if not exclude_internal or not scope.is_internal
] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def register_scope(self, scope):
"""Register a scope. :param scope: A :class:`invenio_oauth2server.models.Scope` instance. """ |
if not isinstance(scope, Scope):
raise TypeError("Invalid scope type.")
assert scope.id not in self.scopes
self.scopes[scope.id] = scope |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def monkeypatch_oauthlib_urlencode_chars(chars):
"""Monkeypatch OAuthlib set of "URL encoded"-safe characters. .. note:: OAuthlib keeps a set of characters that it considers as valid inside an URL-encoded query-string during parsing of requests. The issue is that this set of characters wasn't designed to be configurable since it should technically follow various RFC specifications about URIs, like for example `RFC3986 <https://www.ietf.org/rfc/rfc3986.txt>`_. Many online services and frameworks though have designed their APIs in ways that aim at keeping things practical and readable to the API consumer, making use of special characters to mark or seperate query-string arguments. Such an example is the usage of embedded JSON strings inside query-string arguments, which of course have to contain the "colon" character (:) for key/value pair definitions. Users of the OAuthlib library, in order to integrate with these services and frameworks, end up either circumventing these "static" restrictions of OAuthlib by pre-processing query-strings, or -in search of a more permanent solution- directly make Pull Requests to OAuthlib to include additional characters in the set, and explain the logic behind their decision (one can witness these efforts inside the git history of the source file that includes this set of characters `here <https://github.com/idan/oauthlib/commits/master/oauthlib/common.py>`_). This kind of tactic leads easily to misconceptions about the ability one has over the usage of specific features of services and frameworks. In order to tackle this issue in Invenio-OAuth2Server, we are monkey-patching this set of characters using a configuration variable, so that usage of any special characters is a conscious decision of the package user. """ |
modified_chars = set(chars)
always_safe = set(oauthlib_commmon.always_safe)
original_special_chars = oauthlib_commmon.urlencoded - always_safe
if modified_chars != original_special_chars:
warnings.warn(
'You are overriding the default OAuthlib "URL encoded" set of '
'valid characters. Make sure that the characters defined in '
            'oauthlib.common.urlencoded are indeed limiting your needs.',
RuntimeWarning
)
oauthlib_commmon.urlencoded = always_safe | modified_chars |
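For illustration only, an application that needs colons inside query-string values (for example embedded JSON) might extend the set like this; the call assumes the function above is importable from your application code, and ':' may already be present depending on the installed oauthlib version:

from oauthlib import common as oauthlib_common

# Current special characters (beyond the always-safe alphanumerics) plus ':'
wanted = (oauthlib_common.urlencoded - set(oauthlib_common.always_safe)) | {':'}
monkeypatch_oauthlib_urlencode_chars(wanted)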