<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def average_cq(seq, efficiency=1.0):
"""Given a set of Cq values, return the Cq value that represents the average expression level of the input. The intent is to average the expression levels of the samples, since the average of Cq values is not biologically meaningful. :param iterable seq: A sequence (e.g. list, array, or Series) of Cq values. :param float efficiency: The fractional efficiency of the PCR reaction; i.e. 1.0 is 100% efficiency, producing 2 copies per amplicon per cycle. :return: Cq value representing average expression level :rtype: float """
|
denominator = sum( [pow(2.0*efficiency, -Ci) for Ci in seq] )
return log(len(seq)/denominator)/log(2.0*efficiency)
|
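A minimal, self-contained sketch of how this averaging behaves; it repeats the body above together with the math import so it runs on its own, and the sample Cq values are invented:
from math import log

def average_cq(seq, efficiency=1.0):
    # Average in linear expression space, then convert back to a Cq value.
    denominator = sum(pow(2.0 * efficiency, -Ci) for Ci in seq)
    return log(len(seq) / denominator) / log(2.0 * efficiency)

print(average_cq([24.0, 25.0, 26.0]))
# ~24.78, below the arithmetic mean of 25.0, because the lowest Cq
# (highest expression) dominates the linear-space average.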
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def validate_sample_frame(sample_frame):
"""Makes sure that `sample_frame` has the columns we expect. :param DataFrame sample_frame: A sample data frame. :return: True (or raises an exception) :raises TypeError: if sample_frame is not a pandas DataFrame :raises ValueError: if columns are missing or the wrong type """
|
if not isinstance(sample_frame, pd.core.frame.DataFrame):
raise TypeError("Expected a pandas DataFrame, received {}".format(type(sample_frame)))
for col in ['Sample', 'Target', 'Cq']:
if col not in sample_frame:
raise ValueError("Missing column {} in sample frame".format(col))
if sample_frame['Cq'].dtype.kind != 'f':
raise ValueError("Expected Cq column to have float type; has type {} instead".format(str(sample_frame['Cq'].dtype)))
return True
|
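As a hedged illustration of the expected layout, a frame with the three required columns and a float Cq column passes this check (all values here are invented):
import pandas as pd

df = pd.DataFrame({
    'Sample': ['s1', 's1', 'NTC'],
    'Target': ['GAPDH', 'ACTB', 'GAPDH'],
    'Cq':     [22.1, 24.3, 38.0],   # float dtype, as required
})
# validate_sample_frame(df) -> True
# Dropping 'Cq' would raise ValueError; passing a plain dict would raise TypeError.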
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def censor_background(sample_frame, ntc_samples=['NTC'], margin=log2(10)):
"""Selects rows from the sample data frame that fall `margin` or greater cycles earlier than the NTC for that target. NTC wells are recognized by string matching against the Sample column. :param DataFrame sample_frame: A sample data frame. :param iterable ntc_samples: A sequence of strings giving the sample names of your NTC wells, i.e. ['NTC'] :param float margin: The number of cycles earlier than the NTC for a "good" sample, i.e. log2(10) :return: a view of the sample data frame containing only non-background rows :rtype: DataFrame """
|
ntcs = sample_frame.loc[ sample_frame['Sample'].apply(lambda x: x in ntc_samples), ]
if ntcs.empty:
return sample_frame
g = ntcs.groupby('Target')
min_ntcs = g['Cq'].min()
# if a target has no NTC, min_ntcs.loc[sample] is NaN
# we should retain all values from targets with no NTC
# all comparisons with NaN are false
# so we test for the "wrong" condition and invert the result
censored = sample_frame.loc[ ~(sample_frame['Cq'] > (min_ntcs.loc[sample_frame['Target']] - margin)) ]
return censored
|
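A small standalone sketch of the margin test on a toy frame (pandas and numpy assumed; .values keeps the comparison positional, and the numbers are made up):
import pandas as pd
from numpy import log2

df = pd.DataFrame({
    'Sample': ['s1', 's2', 'NTC'],
    'Target': ['geneA', 'geneA', 'geneA'],
    'Cq': [24.0, 35.5, 36.0],
})
ntc_min = df.loc[df['Sample'] == 'NTC'].groupby('Target')['Cq'].min()
margin = log2(10)   # ~3.32 cycles, i.e. a 10-fold gap above background
keep = ~(df['Cq'] > (ntc_min.loc[df['Target']].values - margin))
print(df.loc[keep])   # only s1 survives; s2 sits within ~0.5 cycles of the NTC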
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def expression_nf(sample_frame, nf_n, ref_sample):
"""Calculates expression of samples in a sample data frame relative to pre-computed normalization factors. ref_sample should be defined for all targets or the result will contain many NaNs. :param DataFrame sample_frame: A sample data frame. :param Series nf_n: A Series of normalization factors indexed by sample. You probably got this from `compute_nf`. :param string ref_sample: The name of the sample to normalize against, which should match a value in the sample_frame Sample column. :return: a Series of expression values for each row in the sample data frame. :rtype: Series """
|
ref_sample_df = sample_frame.ix[sample_frame['Sample'] == ref_sample, ['Target', 'Cq']]
ref_sample_cq = ref_sample_df.groupby('Target')['Cq'].aggregate(average_cq)
delta = -sample_frame['Cq'] + asarray(ref_sample_cq.ix[sample_frame['Target']])
rel = power(2, delta) / asarray(nf_n.ix[sample_frame['Sample']])
return rel
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def collect_expression(sample_frame, ref_targets, ref_sample):
"""Calculates the expression of all rows in the sample_frame relative to each of the ref_targets. Used in rank_targets. :param DataFrame sample_frame: A sample data frame. :param iterable ref_targets: A sequence of targets from the Target column of the sample frame. :param string ref_sample: The name of the sample to which expression should be referenced. :return: a DataFrame of relative expression; rows represent rows of the sample_frame and columns represent each of the ref_targets. :rtype: DataFrame """
|
by_gene = {'Sample': sample_frame['Sample'], 'Target': sample_frame['Target']}
for target in ref_targets:
by_gene[target] = expression_ddcq(sample_frame, target, ref_sample)
return pd.DataFrame(by_gene)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rank_targets(sample_frame, ref_targets, ref_sample):
"""Uses the geNorm algorithm to determine the most stably expressed genes from amongst ref_targets in your sample. See Vandesompele et al.'s 2002 Genome Biology paper for information about the algorithm: http://dx.doi.org/10.1186/gb-2002-3-7-research0034 :param DataFrame sample_frame: A sample data frame. :param iterable ref_targets: A sequence of targets from the Target column of sample_frame to consider for ranking. :param string ref_sample: The name of a sample from the Sample column of sample_frame. It doesn't really matter what it is but it should exist for every target. :return: a sorted DataFrame with two columns, 'Target' and 'M' (the relative stability; lower means more stable). :rtype: DataFrame """
|
table = collect_expression(sample_frame, ref_targets, ref_sample)
all_samples = sample_frame['Sample'].unique()
t = table.groupby(['Sample', 'Target']).mean()
logt = log2(t)
ref_targets = set(ref_targets)
worst = []
worst_m = []
while len(ref_targets) - len(worst) > 1:
M = []
for test_target in ref_targets:
if test_target in worst: continue
Vs = []
for ref_target in ref_targets:
if ref_target == test_target or ref_target in worst: continue
A = logt.ix[zip(all_samples, repeat(test_target)), ref_target]
Vs.append(A.std())
M.append( (sum(Vs)/(len(ref_targets)-len(worst)-1), test_target) )
worst.append(max(M)[1])
worst_m.append(max(M)[0])
best = ref_targets - set(worst)
worst.reverse()
worst_m.reverse()
worst_m = [worst_m[0]] + worst_m
return pd.DataFrame({'Target': list(best) + worst, 'M': worst_m}, columns=['Target', 'M'])
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def calculate_nf(sample_frame, ref_targets, ref_sample):
"""Calculates a normalization factor from the geometric mean of the expression of all ref_targets, normalized to a reference sample. :param DataFrame sample_frame: A sample data frame. :param iterable ref_targets: A list or Series of target names. :param string ref_sample: The name of the sample to normalize against. :return: a Series indexed by sample name containing normalization factors for each sample. """
|
grouped = sample_frame.groupby(['Target', 'Sample'])['Cq'].aggregate(average_cq)
samples = sample_frame['Sample'].unique()
nfs = gmean([pow(2, -grouped.ix[zip(repeat(ref_gene), samples)] + grouped.ix[ref_gene, ref_sample]) for ref_gene in ref_targets])
return pd.Series(nfs, index=samples)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_ec2_meta_data():
""" Get meta data about ourselves, if we are on an EC2 instance. In particular, this returns the VPC ID and region of this instance. If we are not on an EC2 instance it returns an empty dict. """
|
# The timeout is just for the connection attempt, but between retries there
# is an exponential back off in seconds. So, too many retries and it can
# possibly block for a very long time here. Main contributor to waiting
# time here is the number of retries, rather than the timeout time.
try:
md = boto.utils.get_instance_metadata(timeout=2, num_retries=2)
vpc_id = md['network']['interfaces']['macs'].values()[0]['vpc-id']
region = md['placement']['availability-zone'][:-1]
return {"vpc_id" : vpc_id, "region_name" : region}
except:
# Any problem while getting the meta data? Assume we are not on an EC2
# instance.
return {}
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def connect_to_region(region_name):
""" Establish connection to AWS API. """
|
logging.debug("Connecting to AWS region '%s'" % region_name)
con = boto.vpc.connect_to_region(region_name)
if not con:
raise VpcRouteSetError("Could not establish connection to "
"region '%s'." % region_name)
return con
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _make_ip_subnet_lookup(vpc_info):
""" Updates the vpc-info object with a lookup for IP -> subnet. """
|
# We create a reverse lookup from the instances private IP addresses to the
# subnets they are associated with. This is used later on in order to
# determine whether routes should be set in an RT: Is the RT's subnet
# associated with ANY of the IP addresses in the route spec? To make this
# easy, we collect the IPs and subnets of all EC2 instances in the VPC.
# Once we get a route spec, we create a list of subnets for only the
# cluster nodes. The assumption is that not all EC2 instances in the VPC
# necessarily belong to the cluster. We really want to narrow it down to
# the cluster nodes only. See make_cluster_node_subnet_list().
vpc_info['ip_subnet_lookup'] = {}
for instance in vpc_info['instances']:
for interface in instance.interfaces:
subnet_id = interface.subnet_id
for priv_addr in interface.private_ip_addresses:
vpc_info['ip_subnet_lookup'][priv_addr.private_ip_address] = \
subnet_id
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_vpc_overview(con, vpc_id, region_name):
""" Retrieve information for the specified VPC. If no VPC ID was specified then just pick the first VPC we find. Returns a dict with the VPC's zones, subnets, route tables and instances. """
|
logging.debug("Retrieving information for VPC '%s'" % vpc_id)
d = {}
d['zones'] = con.get_all_zones()
# Find the specified VPC, or just use the first one
all_vpcs = con.get_all_vpcs()
if not all_vpcs:
raise VpcRouteSetError("Cannot find any VPCs.")
if not vpc_id:
# Just grab the first available VPC and use it, if no VPC specified
vpc = all_vpcs[0]
vpc_id = vpc.id
else:
# Search through the list of VPCs for the one with the specified ID
vpc = None
for v in all_vpcs:
if v.id == vpc_id:
vpc = v
break
if not vpc:
raise VpcRouteSetError("Cannot find specified VPC '%s' "
"in region '%s'." % (vpc_id, region_name))
d['vpc'] = vpc
vpc_filter = {"vpc-id" : vpc_id} # Will use this filter expression a lot
# Now find the subnets, route tables and instances within this VPC
d['subnets'] = con.get_all_subnets(filters=vpc_filter)
d['route_tables'] = con.get_all_route_tables(filters=vpc_filter)
# Route tables are associated with subnets. Maintain a lookup table from
# RT ID to (list of) subnets. This is necessary later on, because
# we only want to set a route in an RT if at least one of the ENIs we are
# dealing with is associated with the RT. That way, we don't have to set
# the route in all tables all the time.
d['rt_subnet_lookup'] = {}
for rt in d['route_tables']:
for assoc in rt.associations:
if hasattr(assoc, 'subnet_id'):
subnet_id = assoc.subnet_id
if subnet_id:
d['rt_subnet_lookup'].setdefault(rt.id, []). \
append(subnet_id)
# Get all the cluster instances (all EC2 instances associated with this
# VPC).
reservations = con.get_all_reservations(filters=vpc_filter)
d['instances'] = []
for r in reservations: # a reservation may have multiple instances
d['instances'].extend(r.instances)
# Add reverse lookup from the instances private IP addresses to the
# subnets they are associated with. More info in the doc for that function.
_make_ip_subnet_lookup(d)
# Maintain a quick instance lookup for convenience
d['instance_by_id'] = {}
for i in d['instances']:
d['instance_by_id'][i.id] = i
return d
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_instance_and_eni_by_ip(vpc_info, ip):
""" Given a specific IP address, find the EC2 instance and ENI. We need this information for setting the route. Returns instance and emi in a tuple. """
|
for instance in vpc_info['instances']:
for eni in instance.interfaces:
for pa in eni.private_ip_addresses:
if pa.private_ip_address == ip:
return instance, eni
raise VpcRouteSetError("Could not find instance/eni for '%s' "
"in VPC '%s'." % (ip, vpc_info['vpc'].id))
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_instance_private_ip_from_route(instance, route):
""" Find the private IP and ENI of an instance that's pointed to in a route. Returns (ipaddr, eni) tuple. """
|
ipaddr = None
eni = None
for eni in instance.interfaces:
if eni.id == route.interface_id:
ipaddr = eni.private_ip_address
break
return ipaddr, eni if ipaddr else None
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _choose_different_host(old_ip, ip_list, failed_ips, questionable_ips):
""" Randomly choose a different host from a list of hosts. Pick from fully healthy IPs first (neither failed nor questionable). If we don't have any of those, pick from questionable ones next. If no suitable hosts can be found in the list (if it's empty or all hosts are in the failed_ips list) it will return None. The old IP (if any) is passed in. We will try to avoid returning this same old IP under the right circumstances. If no old IP is known, None can be passed in for it instead. """
|
if not ip_list:
# We don't have any hosts to choose from.
return None
ip_set = set(ip_list)
failed_set = set(failed_ips)
# Consider only those questionable IPs that aren't also failed and make
# sure all of the ones in the questionable list are at least also present
# in the overall IP list.
questionable_set = set(questionable_ips).intersection(ip_set). \
difference(failed_set)
# Get all healthy IPs that are neither failed, nor questionable
healthy_ips = list(ip_set.difference(failed_set, questionable_set))
if healthy_ips:
# Return one of the completely healthy IPs
return random.choice(healthy_ips)
if questionable_set:
# Don't have any completely healthy ones, so return one of the
# questionable ones. Not perfect, but at least may still provide
# routing functionality for some time.
if old_ip not in questionable_set:
# We may be here because the original address was questionable. If
# only other questionable ones are available then there's no point
# changing the address. We only change if the old address wasn't
# one of the questionable ones already.
return random.choice(list(questionable_set))
# We got nothing...
return None
|
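A self-contained sketch of the selection preference with hypothetical IPs; only the set arithmetic from the function above is reproduced:
import random

ip_list          = ['10.0.0.1', '10.0.0.2', '10.0.0.3']
failed_ips       = ['10.0.0.1']
questionable_ips = ['10.0.0.2']

healthy = set(ip_list) - set(failed_ips) - set(questionable_ips)
print(healthy)                          # {'10.0.0.3'}: fully healthy hosts win
print(random.choice(sorted(healthy)))   # '10.0.0.3'
# Only when no healthy host exists would a questionable one be considered,
# and only if the old IP was not itself one of the questionable hosts.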
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _rt_state_update(route_table_id, dcidr, router_ip="(none)", instance_id="(none)", eni_id="(none)", old_router_ip="(none)", msg="(none)"):
""" Store a message about a VPC route in the current state. """
|
buf = "inst: %s, eni: %s, r_ip: %-15s, o_r_ip: %-15s, msg: %s" % \
(instance_id, eni_id, router_ip, old_router_ip, msg)
CURRENT_STATE.vpc_state.setdefault('route_tables', {}). \
setdefault(route_table_id, {})[dcidr] = buf
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _update_route(dcidr, router_ip, old_router_ip, vpc_info, con, route_table_id, update_reason):
""" Update an existing route entry in the route table. """
|
instance = eni = None
try:
instance, eni = find_instance_and_eni_by_ip(vpc_info, router_ip)
logging.info("--- updating existing route in RT '%s' "
"%s -> %s (%s, %s) (old IP: %s, reason: %s)" %
(route_table_id, dcidr, router_ip,
instance.id, eni.id, old_router_ip, update_reason))
try:
con.replace_route(
route_table_id = route_table_id,
destination_cidr_block = dcidr,
instance_id = instance.id,
interface_id = eni.id)
except Exception as e:
raise Exception("replace_route failed: %s" % str(e))
CURRENT_STATE.routes[dcidr] = \
(router_ip, str(instance.id), str(eni.id))
except Exception as e:
msg = "*** failed to update route in RT '%s' %s -> %s (%s)" % \
(route_table_id, dcidr, old_router_ip, e.message)
update_reason += " [ERROR update route: %s]" % e.message
logging.error(msg)
_rt_state_update(route_table_id, dcidr, router_ip,
instance.id if instance else "(none)",
eni.id if eni else "(none)",
old_router_ip, update_reason)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _add_new_route(dcidr, router_ip, vpc_info, con, route_table_id):
""" Add a new route to the route table. """
|
try:
instance, eni = find_instance_and_eni_by_ip(vpc_info, router_ip)
# Only set the route if the RT is associated with any of the subnets
# used for the cluster.
rt_subnets = \
set(vpc_info['rt_subnet_lookup'].get(route_table_id, []))
cluster_node_subnets = \
set(vpc_info['cluster_node_subnets'])
if not rt_subnets or not rt_subnets.intersection(cluster_node_subnets):
logging.debug("--- skipping adding route in RT '%s' "
"%s -> %s (%s, %s) since RT's subnets (%s) are not "
"part of the cluster (%s)." %
(route_table_id, dcidr, router_ip, instance.id,
eni.id,
", ".join(rt_subnets) if rt_subnets else "none",
", ".join(cluster_node_subnets)))
return
logging.info("--- adding route in RT '%s' "
"%s -> %s (%s, %s)" %
(route_table_id, dcidr, router_ip, instance.id, eni.id))
con.create_route(route_table_id = route_table_id,
destination_cidr_block = dcidr,
instance_id = instance.id,
interface_id = eni.id)
CURRENT_STATE.routes[dcidr] = \
(router_ip, str(instance.id), str(eni.id))
_rt_state_update(route_table_id, dcidr, router_ip, instance.id, eni.id,
msg="Added route")
except Exception as e:
logging.error("*** failed to add route in RT '%s' "
"%s -> %s (%s)" %
(route_table_id, dcidr, router_ip, e.message))
_rt_state_update(route_table_id, dcidr,
msg="[ERROR add route: %s]" % e.message)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_real_instance_if_mismatch(vpc_info, ipaddr, instance, eni):
""" Return the real instance for the given IP address, if that instance is different than the passed in instance or has a different eni. If the ipaddr belongs to the same instance and eni that was passed in then this returns None. """
|
# Careful! A route may be a black-hole route, which still has instance and
# ENI information for an instance that doesn't exist anymore. If a host was
# terminated and a new host got the same IP then this route won't be
# updated and will keep pointing to a non-existing node. So we find the
# instance by IP and check that the route really points to this instance.
inst_id = instance.id if instance else ""
eni_id = eni.id if eni else ""
if ipaddr:
real_instance, real_eni = \
find_instance_and_eni_by_ip(vpc_info, ipaddr)
if real_instance.id != inst_id or real_eni.id != eni_id:
return real_instance
return None
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_host_for_route(vpc_info, route, route_table, dcidr):
""" Given a specific route, return information about the instance to which it points. Returns 3-tuple: instance-id, ipaddr, eni-id Need to take care of scenarios where the instance isn't set anymore in the route (the instance may have disappeared). """
|
class _CouldNotIdentifyHost(Exception):
# If we can't find both the instance as well as an eni for the route,
# we will raise this exception. In that case, we'll return None/unknown
# values, which indicate to the calling code that a new instance should
# be found.
pass
try:
# The instance_id in the route may be None. We can get this in case of
# a black-hole route.
if route.instance_id:
instance = vpc_info['instance_by_id'].get(route.instance_id)
if not instance:
logging.info("--- instance in route in RT '%s' can't "
"be found: %s -> ... (instance '%s')" %
(route_table.id, dcidr, route.instance_id))
raise _CouldNotIdentifyHost()
inst_id = instance.id
ipaddr, eni = get_instance_private_ip_from_route(instance, route)
if not eni:
logging.info("--- eni in route in RT '%s' can't "
"be found: %s -> %s (instance '%s')" %
(route_table.id, dcidr,
ipaddr if ipaddr else "(none)",
inst_id))
raise _CouldNotIdentifyHost()
eni_id = eni.id
# If route points to outdated instance, set ipaddr and eni to
# None to signal that route needs to be updated
real_instance = _get_real_instance_if_mismatch(
vpc_info, ipaddr, instance, eni)
if real_instance:
logging.info("--- obsoleted route in RT '%s' "
"%s -> %s (%s, %s) (new instance with same "
"IP address should be used: %s)" %
(route_table.id, dcidr, ipaddr, inst_id, eni_id,
real_instance.id))
# Setting the ipaddr and eni to None signals code further
# down that the route must be updated.
raise _CouldNotIdentifyHost()
else:
# This route didn't point to an instance anymore, probably
# a black hole route
logging.info("--- obsoleted route in RT '%s' "
"%s -> ... (doesn't point to instance anymore)" %
(route_table.id, dcidr))
raise _CouldNotIdentifyHost()
except _CouldNotIdentifyHost:
inst_id = eni_id = "(unknown)"
ipaddr = None
return inst_id, ipaddr, eni_id
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _is_cidr_in_ignore_routes(cidr):
""" Checks the CIDR to see if it falls into any CIDRs specified via the ignore_routes parameter. This is used mostly to protect special routes to specific instances (for example proxies, etc.), so that the vpc-router does not clean those up. Only used to govern cleanup of routes, is not consulted when creating routes. """
|
for ignore_cidr in CURRENT_STATE.ignore_routes:
if is_cidr_in_cidr(cidr, ignore_cidr):
return True
return False
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _add_missing_routes(route_spec, failed_ips, questionable_ips, chosen_routers, vpc_info, con, routes_in_rts):
""" Iterate over route spec and add all the routes we haven't set yet. This relies on being told what routes we HAVE already. This is passed in via the routes_in_rts dict. Furthermore, some routes may be set in some RTs, but not in others. In that case, we may already have seen which router was chosen for a certain route. This information is passed in via the chosen_routers dict. We should choose routers that were used before. """
|
for dcidr, hosts in route_spec.items():
new_router_ip = chosen_routers.get(dcidr)
# Look at the routes we have seen in each of the route tables.
for rt_id, dcidr_list in routes_in_rts.items():
if dcidr not in dcidr_list:
if not new_router_ip:
# We haven't chosen a target host for this CIDR.
new_router_ip = _choose_different_host(None, hosts,
failed_ips,
questionable_ips)
if not new_router_ip:
logging.warning("--- cannot find available target "
"for route addition %s! "
"Nothing I can do..." % (dcidr))
# Skipping the check on any further RT, breaking out to
# outer most loop over route spec
break
_add_new_route(dcidr, new_router_ip, vpc_info, con, rt_id)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def process_route_spec_config(con, vpc_info, route_spec, failed_ips, questionable_ips):
""" Look through the route spec and update routes accordingly. Idea: Make sure we have a route for each CIDR. If we have a route to any of the IP addresses for a given CIDR then we are good. Otherwise, pick one (usually the first) IP and create a route to that IP. If a route points at a failed or questionable IP then a new candidate is chosen, if possible. """
|
if CURRENT_STATE._stop_all:
logging.debug("Routespec processing. Stop requested, abort operation")
return
if failed_ips:
logging.debug("Route spec processing. Failed IPs: %s" %
",".join(failed_ips))
else:
logging.debug("Route spec processing. No failed IPs.")
# Iterate over all the routes in the VPC, check they are contained in
# the spec, update the routes as needed.
# Need to remember the routes we saw in different RTs, so that we can later
# add them, if needed.
routes_in_rts = {}
CURRENT_STATE.vpc_state.setdefault("time",
datetime.datetime.now().isoformat())
# Passed through the functions and filled in, state accumulates information
# about all the routes we encountered in the VPC and what we are doing with
# them. This is then available in the CURRENT_STATE
chosen_routers = _update_existing_routes(route_spec,
failed_ips, questionable_ips,
vpc_info, con, routes_in_rts)
# Now go over all the routes in the spec and add those that aren't in VPC,
# yet.
_add_missing_routes(route_spec, failed_ips, questionable_ips,
chosen_routers,
vpc_info, con, routes_in_rts)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def handle_spec(region_name, vpc_id, route_spec, failed_ips, questionable_ips):
""" Connect to region and update routes according to route spec. """
|
if CURRENT_STATE._stop_all:
logging.debug("handle_spec: Stop requested, abort operation")
return
if not route_spec:
logging.debug("handle_spec: No route spec provided")
return
logging.debug("Handle route spec")
try:
con = connect_to_region(region_name)
vpc_info = get_vpc_overview(con, vpc_id, region_name)
vpc_info['cluster_node_subnets'] = \
make_cluster_node_subnet_list(vpc_info, route_spec)
process_route_spec_config(con, vpc_info, route_spec,
failed_ips, questionable_ips)
con.close()
except boto.exception.StandardError as e:
logging.warning("vpc-router could not set route: %s - %s" %
(e.message, e.args))
raise
except boto.exception.NoAuthHandlerFound:
logging.error("vpc-router could not authenticate")
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
|
def signature(self, block_size=None):
"Calculates signature for local file."
kwargs = {}
if block_size:
kwargs['block_size'] = block_size
return librsync.signature(open(self.path, 'rb'), **kwargs)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
|
def patch(self, delta):
"Applies remote delta to local file."
# Create a temp file in which to store our synced copy. We will handle
# deleting it manually, since we may move it instead.
with (tempfile.NamedTemporaryFile(prefix='.sync',
suffix=os.path.basename(self.path),
dir=os.path.dirname(self.path), delete=False)) as output:
try:
# Open the local file, data may be read from it.
with open(self.path, 'rb') as reference:
# Patch the local file into our temporary file.
r = librsync.patch(reference, delta, output)
os.rename(output.name, self.path)
return r
finally:
try:
os.remove(output.name)
except OSError as e:
if e.errno != errno.ENOENT:
raise
|
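The temp-file-then-rename pattern above can be sketched without librsync. This standalone demo (POSIX rename semantics assumed; plain bytes stand in for the patched content) writes the new data to a temporary file beside the target and replaces the original in one step:
import errno
import os
import tempfile

workdir = tempfile.mkdtemp()
path = os.path.join(workdir, 'data.txt')
with open(path, 'wb') as f:
    f.write(b'old contents')

output = tempfile.NamedTemporaryFile(prefix='.sync', dir=workdir, delete=False)
try:
    output.write(b'new contents')   # stands in for librsync.patch(...)
    output.close()
    os.rename(output.name, path)    # replace the original in one step
finally:
    try:
        os.remove(output.name)      # no-op after a successful rename
    except OSError as e:
        if e.errno != errno.ENOENT:
            raise

with open(path, 'rb') as f:
    print(f.read())                 # b'new contents'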
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
|
def signature(self, block_size=None):
"Requests a signature for remote file via API."
kwargs = {}
if block_size:
kwargs['block_size'] = block_size
return self.api.get('path/sync/signature', self.path, **kwargs)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
|
def delta(self, signature):
"Generates delta for remote file via API using local file's signature."
return self.api.post('path/sync/delta', self.path, signature=signature)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
|
def patch(self, delta):
"Applies delta for local file to remote file via API."
return self.api.post('path/sync/patch', self.path, delta=delta)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def upload(self, local, remote):
""" Performs synchronization from a local file to a remote file. The local path is the source and remote path is the destination. """
|
self.sync(LocalFile(local), RemoteFile(remote, self.api))
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def download(self, local, remote):
""" Performs synchronization from a remote file to a local file. The remote path is the source and the local path is the destination. """
|
self.sync(RemoteFile(remote, self.api), LocalFile(local))
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_python(self, value):
""" Validates that the input can be converted to a date. Returns a Python datetime.date object. """
|
if value in validators.EMPTY_VALUES:
return None
if isinstance(value, datetime.datetime):
return value.date()
if isinstance(value, datetime.date):
return value
if isinstance(value, list):
# Input comes from a 2 SplitDateWidgets, for example. So, it's two
# components: start date and end date.
if len(value) != 2:
raise ValidationError(self.error_messages['invalid'])
if value[0] in validators.EMPTY_VALUES and value[1] in \
validators.EMPTY_VALUES:
return None
start_value = value[0]
end_value = value[1]
start_date = None
end_date = None
for format in self.input_formats or \
formats.get_format('DATE_INPUT_FORMATS'):
try:
start_date = datetime.datetime(*time.strptime(start_value, \
format)[:6]).date()
except ValueError:
continue
for format in self.input_formats or formats.get_format(\
'DATE_INPUT_FORMATS'):
try:
end_date = datetime.datetime(
*time.strptime(end_value, format)[:6]
).date()
except ValueError:
continue
return (start_date, end_date)
|
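The per-component parsing loop can be exercised on its own; this sketch uses two assumed input formats and invented values, with no Django required:
import datetime
import time

DATE_INPUT_FORMATS = ('%Y-%m-%d', '%m/%d/%Y')   # assumed formats
value = ['2023-01-01', '01/31/2023']            # start and end components

parsed = []
for raw in value:
    result = None
    for fmt in DATE_INPUT_FORMATS:
        try:
            result = datetime.datetime(*time.strptime(raw, fmt)[:6]).date()
            break
        except ValueError:
            continue
    parsed.append(result)

print(tuple(parsed))   # (datetime.date(2023, 1, 1), datetime.date(2023, 1, 31))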
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_python(self, value):
""" Validates that the input can be converted to a datetime. Returns a Python datetime.datetime object. """
|
if value in validators.EMPTY_VALUES:
return None
if isinstance(value, datetime.datetime):
return value
if isinstance(value, datetime.date):
return datetime.datetime(value.year, value.month, value.day)
if isinstance(value, list):
# Input comes from a 2 SplitDateTimeWidgets, for example. So,
# it's four components: start date and time, and end date and time.
if len(value) != 4:
raise ValidationError(self.error_messages['invalid'])
if value[0] in validators.EMPTY_VALUES and value[1] in \
validators.EMPTY_VALUES and value[2] in \
validators.EMPTY_VALUES and value[3] in \
validators.EMPTY_VALUES:
return None
start_value = '%s %s' % tuple(value[:2])
end_value = '%s %s' % tuple(value[2:])
start_datetime = None
end_datetime = None
for format in self.input_formats or formats.get_format(\
'DATETIME_INPUT_FORMATS'):
try:
start_datetime = datetime.datetime(*time.strptime(\
start_value, format)[:6])
except ValueError:
continue
for format in self.input_formats or formats.get_format(\
'DATETIME_INPUT_FORMATS'):
try:
end_datetime = datetime.datetime(*time.strptime(\
end_value, format)[:6])
except ValueError:
continue
return (start_datetime, end_datetime)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_python(self, value):
""" Validates that the input can be converted to a time. Returns a Python datetime.time object. """
|
if value in validators.EMPTY_VALUES:
return None
if isinstance(value, datetime.datetime):
return value.time()
if isinstance(value, datetime.time):
return value
if isinstance(value, list):
# Input comes from a 2 SplitTimeWidgets, for example. So, it's two
# components: start time and end time.
if len(value) != 2:
raise ValidationError(self.error_messages['invalid'])
if value[0] in validators.EMPTY_VALUES and value[1] in \
validators.EMPTY_VALUES:
return None
start_value = value[0]
end_value = value[1]
start_time = None
end_time = None
for format in self.input_formats or formats.get_format(\
'TIME_INPUT_FORMATS'):
try:
start_time = datetime.datetime(
*time.strptime(start_value, format)[:6]
).time()
except ValueError:
if start_time:
continue
else:
raise ValidationError(self.error_messages['invalid'])
for format in self.input_formats or formats.get_format(\
'TIME_INPUT_FORMATS'):
try:
end_time = datetime.datetime(
*time.strptime(end_value, format)[:6]
).time()
except ValueError:
if end_time:
continue
else:
raise ValidationError(self.error_messages['invalid'])
return (start_time, end_time)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def docs_client(self):
""" A DocsClient singleton, used to look up spreadsheets by name. """
|
if not hasattr(self, '_docs_client'):
client = DocsClient()
client.ClientLogin(self.google_user, self.google_password,
SOURCE_NAME)
self._docs_client = client
return self._docs_client
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sheets_service(self):
""" A SpreadsheetsService singleton, used to perform operations on the actual spreadsheet. """
|
if not hasattr(self, '_sheets_service'):
service = SpreadsheetsService()
service.email = self.google_user
service.password = self.google_password
service.source = SOURCE_NAME
service.ProgrammaticLogin()
self._sheets_service = service
return self._sheets_service
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def recid_fetcher(record_uuid, data):
"""Fetch a record's identifiers. :param record_uuid: The record UUID. :param data: The record metadata. :returns: A :data:`invenio_pidstore.fetchers.FetchedPID` instance. """
|
pid_field = current_app.config['PIDSTORE_RECID_FIELD']
return FetchedPID(
provider=RecordIdProvider,
pid_type=RecordIdProvider.pid_type,
pid_value=str(data[pid_field]),
)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _add_column(self, label, field):
""" Add a new column to the table. It will have the header text ``label``, but for data inserts and queries, the ``field`` name must be used. If necessary, this will expand the size of the sheet itself to allow for the new column. """
|
# Don't call this directly.
assert self.headers is not None
cols = 0
if len(self._headers) > 0:
cols = max([int(c.cell.col) for c in self._headers])
new_col = cols + 1
if int(self._ws.col_count.text) < new_col:
self._ws.col_count.text = str(new_col)
self._update_metadata()
cell = self._service.UpdateCell(1, new_col, label,
self._ss.id, self.id)
self._headers.append(cell)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def headers(self):
""" Return the name of all headers currently defined for the table. """
|
if self._headers is None:
query = CellQuery()
query.max_row = '1'
feed = self._service.GetCellsFeed(self._ss.id, self.id,
query=query)
self._headers = feed.entry
return [normalize_header(h.cell.text) for h in self._headers]
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def insert(self, row):
""" Insert a new row. The row will be added to the end of the spreadsheet. Before inserting, the field names in the given row will be normalized and values with empty field names removed. """
|
data = self._convert_value(row)
self._service.InsertRow(data, self._ss.id, self.id)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def remove(self, _query=None, **kwargs):
""" Remove all rows matching the current query. If no query is given, this will truncate the entire table. """
|
for entry in self._find_entries(_query=_query, **kwargs):
self._service.DeleteRow(entry)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def normalize_header(name, existing=[]):
""" Try to emulate the way in which Google does normalization on the column names to transform them into headers. """
|
name = re.sub(r'\W+', '', name, flags=re.UNICODE).lower()
# TODO handle multiple columns with the same name.
return name
|
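A runnable copy of the normalization with a sample column name; the duplicate-name handling mentioned in the TODO is still omitted:
import re

def normalize_header(name):
    return re.sub(r'\W+', '', name, flags=re.UNICODE).lower()

print(normalize_header('Total Cost (USD)'))   # 'totalcostusd'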
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getAllElementsOfHirarchy(self):
""" returns ALL elements of the complete hirarchy as a flat list """
|
allElements=[]
for element in self.getAllElements():
allElements.append(element)
if isinstance(element, BaseElement):
allElements.extend(element.getAllElementsOfHirarchy())
return allElements
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getElementByID(self, id):
""" returns an element with the specific id and the position of that element within the svg elements array """
|
pos=0
for element in self._subElements:
if element.get_id()==id:
return (element,pos)
pos+=1
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getElementsByType(self, type):
""" retrieves all Elements that are of type type @type type: class @param type: type of the element """
|
foundElements=[]
for element in self.getAllElementsOfHirarchy():
if isinstance(element, type):
foundElements.append(element)
return foundElements
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getXML(self):
""" Return a XML representation of the current element. This function can be used for debugging purposes. It is also used by getXML in SVG @return: the representation of the current element as an xml string """
|
xml='<'+self._elementName+' '
for key,value in list(self._attributes.items()):
if value != None:
xml+=key+'="'+self.quote_attrib(str(value))+'" '
if len(self._subElements)==0: #self._textContent==None and
xml+=' />\n'
else:
xml+=' >\n'
#if self._textContent==None:
for subelement in self._subElements:
s = subelement.getXML()
if type(s) != str:
s = str(s)
xml+=s
# xml+=str(subelement.getXML())
#else:
#if self._textContent!=None:
# xml+=self._textContent
xml+='</'+self._elementName+'>\n'
#print xml
return xml
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def quote_attrib(self, inStr):
""" Transforms characters between xml notation and python notation. """
|
s1 = (isinstance(inStr, str) and inStr or
'%s' % inStr)
s1 = s1.replace('&', '&amp;')
s1 = s1.replace('<', '&lt;')
s1 = s1.replace('>', '&gt;')
if '"' in s1:
# if "'" in s1:
s1 = '%s' % s1.replace('"', "&quot;")
# else:
# s1 = "'%s'" % s1
#else:
# s1 = '"%s"' % s1
return s1
|
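For comparison, the standard library provides equivalent escaping; this is an alternative, not what the class above uses:
from xml.sax.saxutils import escape

print(escape('a < b & "c"', {'"': '&quot;'}))
# a &lt; b &amp; &quot;c&quot;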
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def process_status(ctx, param, value):
"""Return status value."""
|
from .models import PIDStatus
# Allow empty status
if value is None:
return None
if not hasattr(PIDStatus, value):
raise click.BadParameter('Status needs to be one of {0}.'.format(
', '.join([s.name for s in PIDStatus])
))
return getattr(PIDStatus, value)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create(pid_type, pid_value, status, object_type, object_uuid):
"""Create new persistent identifier."""
|
from .models import PersistentIdentifier
if bool(object_type) ^ bool(object_uuid):
raise click.BadParameter('Specify both --type and --uuid, or neither.')
new_pid = PersistentIdentifier.create(
pid_type,
pid_value,
status=status,
object_type=object_type,
object_uuid=object_uuid,
)
db.session.commit()
click.echo(
'{0.pid_type} {0.pid_value} {0.pid_provider}'.format(new_pid)
)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def assign(pid_type, pid_value, status, object_type, object_uuid, overwrite):
"""Assign persistent identifier."""
|
from .models import PersistentIdentifier
obj = PersistentIdentifier.get(pid_type, pid_value)
if status is not None:
obj.status = status
obj.assign(object_type, object_uuid, overwrite=overwrite)
db.session.commit()
click.echo(obj.status)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def unassign(pid_type, pid_value):
"""Unassign persistent identifier."""
|
from .models import PersistentIdentifier
obj = PersistentIdentifier.get(pid_type, pid_value)
obj.unassign()
db.session.commit()
click.echo(obj.status)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_object(pid_type, pid_value):
"""Get an object behind persistent identifier."""
|
from .models import PersistentIdentifier
obj = PersistentIdentifier.get(pid_type, pid_value)
if obj.has_object():
click.echo('{0.object_type} {0.object_uuid} {0.status}'.format(obj))
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def Deserializer(stream_or_string, **options):
""" Deserialize a stream or string of CSV data. """
|
def process_item(item):
m = _LIST_RE.match(item)
if m:
contents = m.group(1)
if not contents:
item = []
else:
item = process_m2m(contents)
else:
if item == 'TRUE':
item = True
elif item == 'FALSE':
item = False
elif item == 'NULL':
item = None
elif (item in _QUOTED_BOOL_NULL or
_QUOTED_LIST_RE.match(item)):
item = item.strip('\'"')
return item
def process_m2m(contents):
li = []
if _NK_LIST_RE.match(contents):
for item in _NK_SPLIT_RE.split(contents):
li.append(process_item(item))
else:
li = _SPLIT_RE.split(contents)
return li
if isinstance(stream_or_string, six.string_types):
stream = StringIO(stream_or_string)
else:
stream = stream_or_string
reader = UnicodeReader(stream)
header = next(reader) # first line must be a header
data = []
for row in reader:
# Need to account for the presence of multiple headers in
# the stream since serialized data can contain them.
if row[:2] == ['pk', 'model']:
# Not the best check. Perhaps csv.Sniffer.has_header
# would be better?
header = row
continue
d = dict(zip(header[:2], row[:2]))
d['fields'] = dict(zip(header[2:], map(process_item, row[2:])))
data.append(d)
for obj in PythonDeserializer(data, **options):
yield obj
|
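The row-to-fixture mapping can be seen in isolation with an invented header and row; the module-level regexes and the process_item conversions (e.g. 'TRUE' -> True) are left out here:
header = ['pk', 'model', 'name', 'active']
row    = ['1', 'app.widget', 'gadget', 'TRUE']

d = dict(zip(header[:2], row[:2]))
d['fields'] = dict(zip(header[2:], row[2:]))
print(d)
# {'pk': '1', 'model': 'app.widget',
#  'fields': {'name': 'gadget', 'active': 'TRUE'}}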
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def object_formatter(v, c, m, p):
"""Format object view link."""
|
endpoint = current_app.config['PIDSTORE_OBJECT_ENDPOINTS'].get(
m.object_type)
if endpoint and m.object_uuid:
return Markup('<a href="{0}">{1}</a>'.format(
url_for(endpoint, id=m.object_uuid),
_('View')))
return ''
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def decode(self, tx):
""" Decodes the given transaction. Args: tx: hex of transaction Returns: decoded transaction .. note:: Only supported for blockr.io at the moment. """
|
if not isinstance(self._service, BitcoinBlockrService):
raise NotImplementedError('Currently only supported for "blockr.io"')
return self._service.decode(tx)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def register(self, url, doc):
"""Register a DOI via the DataCite API. :param url: Specify the URL for the API. :param doc: Set metadata for DOI. :returns: `True` if is registered successfully. """
|
try:
self.pid.register()
# Set metadata for DOI
self.api.metadata_post(doc)
# Mint DOI
self.api.doi_post(self.pid.pid_value, url)
except (DataCiteError, HttpError):
logger.exception("Failed to register in DataCite",
extra=dict(pid=self.pid))
raise
logger.info("Successfully registered in DataCite",
extra=dict(pid=self.pid))
return True
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def update(self, url, doc):
"""Update metadata associated with a DOI. This can be called before/after a DOI is registered. :param doc: Set metadata for DOI. :returns: `True` if is updated successfully. """
|
if self.pid.is_deleted():
logger.info("Reactivate in DataCite",
extra=dict(pid=self.pid))
try:
# Set metadata
self.api.metadata_post(doc)
self.api.doi_post(self.pid.pid_value, url)
except (DataCiteError, HttpError):
logger.exception("Failed to update in DataCite",
extra=dict(pid=self.pid))
raise
if self.pid.is_deleted():
self.pid.sync_status(PIDStatus.REGISTERED)
logger.info("Successfully updated in DataCite",
extra=dict(pid=self.pid))
return True
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete(self):
"""Delete a registered DOI. If the PID is new then it's deleted only locally. Otherwise, also it's deleted also remotely. :returns: `True` if is deleted successfully. """
|
try:
if self.pid.is_new():
self.pid.delete()
else:
self.pid.delete()
self.api.metadata_delete(self.pid.pid_value)
except (DataCiteError, HttpError):
logger.exception("Failed to delete in DataCite",
extra=dict(pid=self.pid))
raise
logger.info("Successfully deleted in DataCite",
extra=dict(pid=self.pid))
return True
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sync_status(self):
"""Synchronize DOI status DataCite MDS. :returns: `True` if is sync successfully. """
|
status = None
try:
try:
self.api.doi_get(self.pid.pid_value)
status = PIDStatus.REGISTERED
except DataCiteGoneError:
status = PIDStatus.DELETED
except DataCiteNoContentError:
status = PIDStatus.REGISTERED
except DataCiteNotFoundError:
pass
if status is None:
try:
self.api.metadata_get(self.pid.pid_value)
status = PIDStatus.RESERVED
except DataCiteGoneError:
status = PIDStatus.DELETED
except DataCiteNoContentError:
status = PIDStatus.REGISTERED
except DataCiteNotFoundError:
pass
except (DataCiteError, HttpError):
logger.exception("Failed to sync status from DataCite",
extra=dict(pid=self.pid))
raise
if status is None:
status = PIDStatus.NEW
self.pid.sync_status(status)
logger.info("Successfully synced status from DataCite",
extra=dict(pid=self.pid))
return True
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create(cls, pid_type=None, pid_value=None, object_type=None, object_uuid=None, status=None, **kwargs):
"""Create a new instance for the given type and pid. :param pid_type: Persistent identifier type. (Default: None). :param pid_value: Persistent identifier value. (Default: None). :param status: Current PID status. (Default: :attr:`invenio_pidstore.models.PIDStatus.NEW`) :param object_type: The object type is a string that identify its type. (Default: None). :param object_uuid: The object UUID. (Default: None). :returns: A :class:`invenio_pidstore.providers.base.BaseProvider` instance. """
|
assert pid_value
assert pid_type or cls.pid_type
pid = PersistentIdentifier.create(
pid_type or cls.pid_type,
pid_value,
pid_provider=cls.pid_provider,
object_type=object_type,
object_uuid=object_uuid,
status=status or cls.default_status,
)
return cls(pid, **kwargs)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get(cls, pid_value, pid_type=None, **kwargs):
"""Get a persistent identifier for this provider. :param pid_type: Persistent identifier type. (Default: configured :attr:`invenio_pidstore.providers.base.BaseProvider.pid_type`) :param pid_value: Persistent identifier value. :param kwargs: See :meth:`invenio_pidstore.providers.base.BaseProvider` required initialization properties. :returns: A :class:`invenio_pidstore.providers.base.BaseProvider` instance. """
|
return cls(
PersistentIdentifier.get(pid_type or cls.pid_type, pid_value,
pid_provider=cls.pid_provider),
**kwargs)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def init_app(self, app, config_group="flask_keystone"):
""" Iniitialize the Flask_Keystone module in an application factory. :param app: `flask.Flask` application to which to connect. :type app: `flask.Flask` :param str config_group: :class:`oslo_config.cfg.OptGroup` to which to attach. When initialized, the extension will apply the :mod:`keystonemiddleware` WSGI middleware to the flask Application, attach it's own error handler, and generate a User model based on its :mod:`oslo_config` configuration. """
|
cfg.CONF.register_opts(RAX_OPTS, group=config_group)
self.logger = logging.getLogger(__name__)
try:
logging.register_options(cfg.CONF)
except cfg.ArgsAlreadyParsedError: # pragma: no cover
pass
logging.setup(cfg.CONF, "flask_keystone")
self.config = cfg.CONF[config_group]
self.roles = self._parse_roles()
self.User = self._make_user_model()
self.Anonymous = self._make_anonymous_model()
self.logger.debug("Initialized keystone with roles: %s and "
"allow_anonymous: %s" % (
self.roles,
self.config.allow_anonymous_access
))
app.wsgi_app = auth_token.AuthProtocol(app.wsgi_app, {})
self.logger.debug("Adding before_request request handler.")
app.before_request(self._make_before_request())
self.logger.debug("Registering Custom Error Handler.")
app.register_error_handler(FlaskKeystoneException, handle_exception)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _parse_roles(self):
""" Generate a dictionary for configured roles from oslo_config. Due to limitations in ini format, it's necessary to specify roles in a flatter format than a standard dictionary. This function serves to transform these roles into a standard python dictionary. """
|
roles = {}
for keystone_role, flask_role in self.config.roles.items():
roles.setdefault(flask_role, set()).add(keystone_role)
return roles
|
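A standalone sketch of the role inversion with a hypothetical flattened config: keystone role -> flask role in, flask role -> set of keystone roles out (set ordering may vary when printed):
config_roles = {
    'identity:user-admin': 'admin',
    'observer': 'read_only',
    'creator': 'read_only',
}

roles = {}
for keystone_role, flask_role in config_roles.items():
    roles.setdefault(flask_role, set()).add(keystone_role)

print(roles)
# {'admin': {'identity:user-admin'}, 'read_only': {'observer', 'creator'}}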
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _make_before_request(self):
""" Generate the before_request function to be added to the app. Currently this function is static, however it is very likely we will need to programmatically generate this function in the future. """
|
def before_request():
"""
Process invalid identity statuses and attach user to request.
:raises: :exception:`exceptions.FlaskKeystoneUnauthorized`
This function guarantees that a bad token will return a 401
when :mod:`keystonemiddleware` is configured to
defer_auth_decision. Once this is done, it instantiates a user
from the generated User model and attaches it to the request
context for later access.
"""
identity_status = request.headers.get(
"X-Identity-Status", "Invalid"
)
if identity_status != "Confirmed":
msg = ("Couldn't authenticate user '%s' with "
"X-Identity-Status '%s'")
self.logger.info(msg % (
request.headers.get("X-User-Id", "None"),
request.headers.get("X-Identity-Status", "None")
))
if not self.config.allow_anonymous_access:
msg = "Anonymous Access disabled, rejecting %s"
self.logger.debug(
msg % request.headers.get("X-User-Id", "None")
)
raise FlaskKeystoneUnauthorized()
else:
self.logger.debug("Setting Anonymous user.")
self._set_anonymous_user()
return
self._set_user(request)
return before_request
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _make_user_model(self):
""" Dynamically generate a User class for use with FlaskKeystone. :returns: a generated User class, inherited from :class:`flask_keystone.UserBase`. :rtype: class This User model is intended to work somewhat similarly to the User class that is created for Flask-Login, however it is Dynamically generated based on configuration values in `oslo.config.cfg`, and is populated automatically from the request headers added by :mod:`keystonemiddleware`. This User class has the concept of "roles", which are defined in oslo.config, and generates helper functions to quickly Determine whether these roles apply to a particular instance. """
|
class User(UserBase):
"""
A User as defined by the response from Keystone.
Note: This class is dynamically generated by :class:`FlaskKeystone`
from the :class:`flask_keystone.UserBase` class.
:param request: The incoming `flask.Request` object, after being
handled by the :mod:`keystonemiddleware`
:returns: :class:`flask_keystone.UserBase`
"""
pass
User.generate_has_role_function(self.roles)
User.generate_is_role_functions(self.roles)
return User
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def login_required(self, f):
""" Require a user to be validated by Identity to access an endpoint. :raises: FlaskKeystoneUnauthorized This method will gate a particular endpoint to only be accessed by :class:`FlaskKeystone.User`'s. This means that a valid token will need to be passed to grant access. If a User is not authenticated, a FlaskKeystoneUnauthorized will be thrown, resulting in a 401 response to the client. """
|
@wraps(f)
def wrapped_f(*args, **kwargs):
if current_user.anonymous:
msg = ("Rejected User '%s access to '%s' as user"
" could not be authenticated.")
self.logger.warn(msg % (
current_user.user_id,
request.path
))
raise FlaskKeystoneUnauthorized()
return f(*args, **kwargs)
return wrapped_f
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def pid_exists(value, pidtype=None):
"""Check if a persistent identifier exists. :param value: The PID value. :param pidtype: The pid value (Default: None). :returns: `True` if the PID exists. """
|
try:
PersistentIdentifier.get(pidtype, value)
return True
except PIDDoesNotExistError:
return False
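A short usage sketch, assuming an Invenio application context and a previously minted record identifier of type 'recid':

if pid_exists("12345", pidtype="recid"):
    print("PID already registered, skipping mint")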
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def register_minter(self, name, minter):
"""Register a minter. :param name: Minter name. :param minter: The new minter. """
|
assert name not in self.minters
self.minters[name] = minter
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def register_fetcher(self, name, fetcher):
"""Register a fetcher. :param name: Fetcher name. :param fetcher: The new fetcher. """
|
assert name not in self.fetchers
self.fetchers[name] = fetcher
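A minimal sketch of registering a minter/fetcher pair; `ext` is the object exposing these methods, and the function names and bodies are illustrative placeholders rather than part of this source.

def my_minter(record_uuid, data):
    ...  # create and return a PersistentIdentifier for the record

def my_fetcher(record_uuid, data):
    ...  # return the previously minted identifier for the record

ext.register_minter("my_pid", my_minter)
ext.register_fetcher("my_pid", my_fetcher)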
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_minters_entry_point_group(self, entry_point_group):
"""Load minters from an entry point group. :param entry_point_group: The entrypoint group. """
|
for ep in pkg_resources.iter_entry_points(group=entry_point_group):
self.register_minter(ep.name, ep.load())
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_fetchers_entry_point_group(self, entry_point_group):
"""Load fetchers from an entry point group. :param entry_point_group: The entrypoint group. """
|
for ep in pkg_resources.iter_entry_points(group=entry_point_group):
self.register_fetcher(ep.name, ep.load())
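For illustration, a setup() entry_points fragment that these loaders could consume; the group names follow the usual invenio-pidstore convention and are treated here as an assumption.

entry_points={
    "invenio_pidstore.minters": [
        "recid = mypackage.minters:recid_minter",
    ],
    "invenio_pidstore.fetchers": [
        "recid = mypackage.fetchers:recid_fetcher",
    ],
}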
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def import_address(self, address, account="*", rescan=False):
""" param address = address to import param label= account name to use """
|
response = self.make_request("importaddress", [address, account, rescan])
error = response.get('error')
if error is not None:
raise Exception(error)
return response
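A quick usage sketch, assuming the surrounding class wraps a Bitcoin-style JSON-RPC node; `WalletRPC` is a hypothetical name for that wrapper.

rpc = WalletRPC("http://user:pass@localhost:8332")   # hypothetical wrapper class
rpc.import_address("1BoatSLRHtKNngkdXEeobR76b53LETtpyT", account="watch-only", rescan=False)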
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getBottomRight(self):
""" Retrieves a tuple with the x,y coordinates of the lower right point of the rect. Requires the coordinates, width, height to be numbers """
|
return (float(self.get_x()) + float(self.get_width()), float(self.get_y()))
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getTopLeft(self):
""" Retrieves a tuple with the x,y coordinates of the upper left point of the rect. Requires the coordinates, width, height to be numbers """
|
return (float(self.get_x()), float(self.get_y())+ float(self.get_height()))
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getTopRight(self):
""" Retrieves a tuple with the x,y coordinates of the upper right point of the rect. Requires the coordinates, width, height to be numbers """
|
return (float(self.get_x()) + float(self.get_width()), float(self.get_y()) + float(self.get_height()))
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def moveToPoint(self, point):
""" Moves the rect by the offset given in point (x, y). """
|
(x, y) = point
self.set_x(float(self.get_x()) + float(x))
self.set_y(float(self.get_y()) + float(y))
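A small sketch using the accessors above, assuming the Rect constructor takes (x, y, width, height) as in createRect further below:

r = Rect(10, 20, 100, 50)
r.moveToPoint((5, 5))      # translates the rect by (5, 5)
print(r.getTopRight())     # -> (115.0, 75.0)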
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getBottomLeft(self):
""" Retrieves a tuple with the x,y coordinates of the lower left point of the circle. Requires the radius and the coordinates to be numbers """
|
return (float(self.get_cx()) - float(self.get_r()), float(self.get_cy()) - float(self.get_r()))
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getBottomRight(self):
""" Retrieves a tuple with the x,y coordinates of the lower right point of the circle. Requires the radius and the coordinates to be numbers """
|
return (float(self.get_cx()) + float(self.get_r()), float(self.get_cy()) - float(self.get_r()))
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getTopLeft(self):
""" Retrieves a tuple with the x,y coordinates of the upper left point of the circle. Requires the radius and the coordinates to be numbers """
|
return (float(self.get_cx()) - float(self.get_r()), float(self.get_cy()) + float(self.get_r()))
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getTopRight(self):
""" Retrieves a tuple with the x,y coordinates of the upper right point of the circle. Requires the radius and the coordinates to be numbers """
|
return (float(self.get_cx()) + float(self.get_r()), float(self.get_cy()) + float(self.get_r()))
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def moveToPoint(self, point):
""" Moves the circle by the offset given in point (x, y). """
|
(x, y) = point
self.set_cx(float(self.get_cx()) + float(x))
self.set_cy(float(self.get_cy()) + float(y))
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getBottomLeft(self):
""" Retrieves a tuple with the x,y coordinates of the lower left point of the ellipse. Requires the radius and the coordinates to be numbers """
|
return (float(self.get_cx()) - float(self.get_rx()), float(self.get_cy()) - float(self.get_ry()))
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getBottomRight(self):
""" Retrieves a tuple with the x,y coordinates of the lower right point of the ellipse. Requires the radius and the coordinates to be numbers """
|
return (float(self.get_cx()) + float(self.get_rx()), float(self.get_cy()) - float(self.get_ry()))
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getTopLeft(self):
""" Retrieves a tuple with the x,y coordinates of the upper left point of the ellipse. Requires the radius and the coordinates to be numbers """
|
return (float(self.get_cx()) - float(self.get_rx()), float(self.get_cy()) + float(self.get_ry()))
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getTopRight(self):
""" Retrieves a tuple with the x,y coordinates of the upper right point of the ellipse. Requires the radius and the coordinates to be numbers """
|
return (float(self.get_cx()) + float(self.get_rx()), float(self.get_cy()) + float(self.get_ry()))
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getBottomLeft(self):
""" Retrieves the the bottom left coordinate of the line as tuple. Coordinates must be numbers. """
|
x1 = float(self.get_x1())
x2 = float(self.get_x2())
y1 = float(self.get_y1())
y2 = float(self.get_y2())
if x1 < x2:
if y1 < y2:
return (x1, y1)
else:
return (x1, y2)
else:
if y1 < y2:
return (x2, y1)
else:
return (x2, y2)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def moveToPoint(self, point):
""" Moves the line by the offset given in point (x, y). """
|
(x, y) = point
self.set_x1(float(self.get_x1()) + float(x))
self.set_x2(float(self.get_x2()) + float(x))
self.set_y1(float(self.get_y1()) + float(y))
self.set_y2(float(self.get_y2()) + float(y))
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def createCircle(self, cx, cy, r, strokewidth=1, stroke='black', fill='none'):
""" Creates a circle @type cx: string or int @param cx: starting x-coordinate @type cy: string or int @param cy: starting y-coordinate @type r: string or int @param r: radius @type strokewidth: string or int @param strokewidth: width of the pen used to draw @type stroke: string (either css constants like "black" or numerical values like "#FFFFFF") @param stroke: color with which to draw the outer limits @type fill: string (either css constants like "black" or numerical values like "#FFFFFF") @param fill: color with which to fill the element (default: no filling) @return: a circle object """
|
style_dict = {'fill':fill, 'stroke-width':strokewidth, 'stroke':stroke}
myStyle = StyleBuilder(style_dict)
c = Circle(cx, cy, r)
c.set_style(myStyle.getStyle())
return c
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def createEllipse(self, cx, cy, rx, ry, strokewidth=1, stroke='black', fill='none'):
""" Creates an ellipse @type cx: string or int @param cx: starting x-coordinate @type cy: string or int @param cy: starting y-coordinate @type rx: string or int @param rx: radius in x direction @type ry: string or int @param ry: radius in y direction @type strokewidth: string or int @param strokewidth: width of the pen used to draw @type stroke: string (either css constants like "black" or numerical values like "#FFFFFF") @param stroke: color with which to draw the outer limits @type fill: string (either css constants like "black" or numerical values like "#FFFFFF") @param fill: color with which to fill the element (default: no filling) @return: an ellipse object """
|
style_dict = {'fill':fill, 'stroke-width':strokewidth, 'stroke':stroke}
myStyle = StyleBuilder(style_dict)
e = Ellipse(cx, cy, rx, ry)
e.set_style(myStyle.getStyle())
return e
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def createRect(self, x, y, width, height, rx=None, ry=None, strokewidth=1, stroke='black', fill='none'):
""" Creates a Rectangle @type x: string or int @param x: starting x-coordinate @type y: string or int @param y: starting y-coordinate @type width: string or int @param width: width of the rectangle @type height: string or int @param height: height of the rectangle @type rx: string or int @param rx: For rounded rectangles, the x-axis radius of the ellipse used to round off the corners of the rectangle. @type ry: string or int @param ry: For rounded rectangles, the y-axis radius of the ellipse used to round off the corners of the rectangle. @type strokewidth: string or int @param strokewidth: width of the pen used to draw @type stroke: string (either css constants like "black" or numerical values like "#FFFFFF") @param stroke: color with which to draw the outer limits @type fill: string (either css constants like "black" or numerical values like "#FFFFFF") @param fill: color with which to fill the element (default: no filling) @return: a rect object """
|
style_dict = {'fill':fill, 'stroke-width':strokewidth, 'stroke':stroke}
myStyle = StyleBuilder(style_dict)
r = Rect(x, y, width, height, rx, ry)
r.set_style(myStyle.getStyle())
return r
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def createPolygon(self, points, strokewidth=1, stroke='black', fill='none'):
""" Creates a Polygon @type points: string in the form "x1,y1 x2,y2 x3,y3" @param points: all points relevant to the polygon @type strokewidth: string or int @param strokewidth: width of the pen used to draw @type stroke: string (either css constants like "black" or numerical values like "#FFFFFF") @param stroke: color with which to draw the outer limits @type fill: string (either css constants like "black" or numerical values like "#FFFFFF") @param fill: color with which to fill the element (default: no filling) @return: a polygon object """
|
style_dict = {'fill':fill, 'stroke-width':strokewidth, 'stroke':stroke}
myStyle = StyleBuilder(style_dict)
p = Polygon(points=points)
p.set_style(myStyle.getStyle())
return p
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def createPolyline(self, points, strokewidth=1, stroke='black'):
""" Creates a Polyline @type points: string in the form "x1,y1 x2,y2 x3,y3" @param points: all points relevant to the polygon @type strokewidth: string or int @param strokewidth: width of the pen used to draw @type stroke: string (either css constants like "black" or numerical values like "#FFFFFF") @param stroke: color with which to draw the outer limits @return: a polyline object """
|
style_dict = {'fill':'none', 'stroke-width':strokewidth, 'stroke':stroke}
myStyle = StyleBuilder(style_dict)
p = Polyline(points=points)
p.set_style(myStyle.getStyle())
return p
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def createLine(self, x1, y1, x2, y2, strokewidth=1, stroke="black"):
""" Creates a line @type x1: string or int @param x1: starting x-coordinate @type y1: string or int @param y1: starting y-coordinate @type x2: string or int @param x2: ending x-coordinate @type y2: string or int @param y2: ending y-coordinate @type strokewidth: string or int @param strokewidth: width of the pen used to draw @type stroke: string (either css constants like "black" or numerical values like "#FFFFFF") @param stroke: color with which to draw the outer limits @return: a line object """
|
style_dict = {'stroke-width':strokewidth, 'stroke':stroke}
myStyle = StyleBuilder(style_dict)
l = Line(x1, y1, x2, y2)
l.set_style(myStyle.getStyle())
return l
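A hedged sketch of using these factory methods together; the builder class name and the svg container API are assumptions based on pysvg conventions, not taken from this source.

sb = ShapeBuilder()                                   # assumed name of the surrounding class
svg_doc.addElement(sb.createRect(10, 10, 180, 100, strokewidth=2, stroke="navy", fill="lightyellow"))
svg_doc.addElement(sb.createCircle(100, 150, 30, fill="red"))
svg_doc.addElement(sb.createLine(0, 0, 200, 200, strokewidth=2))
svg_doc.save("example.svg")                           # svg_doc: an existing pysvg svg container (assumed)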
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def encode_scaled(data, size, version=0, level=QR_ECLEVEL_L, hint=QR_MODE_8, case_sensitive=True):
"""Creates a QR-code from string data, resized to the specified dimensions. Args: data: string: The data to encode in a QR-code. If a unicode string is supplied, it will be encoded in UTF-8. size: int: Output size. If this is not an exact multiple of the QR-code's dimensions, padding will be added. If this is smaller than the QR-code's dimensions, it is ignored. version: int: The minimum version to use. If set to 0, the library picks the smallest version that the data fits in. level: int: Error correction level. Defaults to 'L'. hint: int: The type of data to encode. Either QR_MODE_8 or QR_MODE_KANJI. case_sensitive: bool: Should string data be encoded case-preserving? Returns: A (version, size, image) tuple, where image is a size*size PIL image of the QR-code. """
|
version, src_size, im = encode(data, version, level, hint, case_sensitive)
if size < src_size:
size = src_size
qr_size = (size // src_size) * src_size  # integer division so PIL resize() gets an int size
im = im.resize((qr_size, qr_size), Image.NEAREST)
pad = (size - qr_size) // 2  # paste() needs integer offsets
ret = Image.new("L", (size, size), 255)
ret.paste(im, (pad, pad))
return (version, size, ret)
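A quick usage sketch; the module is assumed to be the qrencode bindings, and the returned image is a PIL Image per the docstring above.

version, size, image = encode_scaled("https://example.com", 300)
image.save("qr.png")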
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def moveTo(self, vector):
""" Moves the turtle to the new position. Orientation is kept as it is. If the pen is lowered it will also add to the currently drawn polyline. """
|
self._position = vector
if self.isPenDown():
self._pointsOfPolyline.append(self._position)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def penUp(self):
""" Raises the pen. Any movement will not draw lines till pen is lowered again. """
|
if self._penDown:
self._penDown = False
self._addPolylineToElements()
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _move(self, distance):
""" Moves the turtle by distance in the direction it is facing. If the pen is lowered it will also add to the currently drawn polyline. """
|
self._position = self._position + self._orient * distance
if self.isPenDown():
x = round(self._position.x, 2)
y = round(self._position.y, 2)
self._pointsOfPolyline.append(Vector(x, y))
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getXML(self):
"""Retrieves the pysvg elements that make up the turtles path and returns them as String in an xml representation. """
|
s = ''
for element in self._svgElements:
s += element.getXML()
return s
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def addTurtlePathToSVG(self, svgContainer):
"""Adds the paths of the turtle to an existing svg container. """
|
for element in self.getSVGElements():
svgContainer.addElement(element)
return svgContainer
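A hedged sketch of driving the turtle; penDown, forward, and right are assumed to exist on the same class alongside the methods shown here, the constructor defaults are assumed, and svg_doc is an existing pysvg container.

t = Turtle()
t.penDown()
for _ in range(4):       # trace a square
    t.forward(100)
    t.right(90)
t.penUp()                # flushes the current polyline into the SVG elements
svg_doc = t.addTurtlePathToSVG(svg_doc)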
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_content(self, context):
""" Add the csrf_token_value because the mixin use render_to_string and not render. """
|
self._valid_template()
context.update({
"csrf_token_value": get_token(self.request)
})
return render_to_string(self.get_template_names(), context)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def normpath(path, keep_trailing=False):
"""Really normalize the path by adding a missing leading slash."""
|
new_path = k_paths.normpath(path)
if keep_trailing and path.endswith("/") and not new_path.endswith("/"):
new_path = new_path + "/"
if not new_path.startswith('/'):
return '/' + new_path
return new_path
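A couple of illustrative calls, assuming k_paths.normpath behaves like posixpath.normpath:

normpath("foo//bar/")                      # -> '/foo/bar'
normpath("foo//bar/", keep_trailing=True)  # -> '/foo/bar/'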
|