text_prompt | code_prompt |
---|---|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_activities(self, activity_ids=None, max_records=50):
""" Get all activities for this group. """ |
return self.connection.get_all_activities(self, activity_ids,
max_records) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_response(self, action, params, path='/', parent=None, verb='GET', list_marker='Set'):
""" Utility method to handle calls to IAM and parsing of responses. """ |
if not parent:
parent = self
response = self.make_request(action, params, path, verb)
body = response.read()
boto.log.debug(body)
if response.status == 200:
e = boto.jsonresponse.Element(list_marker=list_marker,
pythonize_name=True)
h = boto.jsonresponse.XmlHandler(e, parent)
h.parse(body)
return e
else:
boto.log.error('%s %s' % (response.status, response.reason))
boto.log.error('%s' % body)
raise self.ResponseError(response.status, response.reason, body) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_all_groups(self, path_prefix='/', marker=None, max_items=None):
""" List the groups that have the specified path prefix. :type path_prefix: string :param path_prefix: If provided, only groups whose paths match the provided prefix will be returned. :type marker: string :param marker: Use this only when paginating results and only in follow-up request after you've received a response where the results are truncated. Set this to the value of the Marker element in the response you just received. :type max_items: int :param max_items: Use this only when paginating results to indicate the maximum number of groups you want in the response. """ |
params = {}
if path_prefix:
params['PathPrefix'] = path_prefix
if marker:
params['Marker'] = marker
if max_items:
params['MaxItems'] = max_items
return self.get_response('ListGroups', params,
list_marker='Groups') |
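The IAM list calls above are paginated through the `Marker`/`MaxItems` parameters. A minimal pagination sketch for `get_all_groups`, assuming `iam` is a connection object exposing the method above and that the parsed response follows boto's usual pythonized key layout (the exact key names are an assumption, not shown in the entry itself):

```python
def iter_all_groups(iam, path_prefix='/', page_size=50):
    """Yield every group, following the Marker until results stop truncating."""
    marker = None
    while True:
        rs = iam.get_all_groups(path_prefix=path_prefix,
                                marker=marker, max_items=page_size)
        # Key names below are assumed from boto's pythonize_name convention.
        result = rs['list_groups_response']['list_groups_result']
        for group in result['groups']:
            yield group
        if result.get('is_truncated') != 'true':
            break
        marker = result['marker']
```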
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_group(self, group_name, marker=None, max_items=None):
""" Return a list of users that are in the specified group. :type group_name: string :param group_name: The name of the group whose information should be returned. :type marker: string :param marker: Use this only when paginating results and only in follow-up request after you've received a response where the results are truncated. Set this to the value of the Marker element in the response you just received. :type max_items: int :param max_items: Use this only when paginating results to indicate the maximum number of users you want in the response. """ |
params = {'GroupName' : group_name}
if marker:
params['Marker'] = marker
if max_items:
params['MaxItems'] = max_items
return self.get_response('GetGroup', params, list_marker='Users') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_all_group_policies(self, group_name, marker=None, max_items=None):
""" List the names of the policies associated with the specified group. :type group_name: string :param group_name: The name of the group the policy is associated with. :type marker: string :param marker: Use this only when paginating results and only in follow-up request after you've received a response where the results are truncated. Set this to the value of the Marker element in the response you just received. :type max_items: int :param max_items: Use this only when paginating results to indicate the maximum number of policy names you want in the response. """ |
params = {'GroupName' : group_name}
if marker:
params['Marker'] = marker
if max_items:
params['MaxItems'] = max_items
return self.get_response('ListGroupPolicies', params,
list_marker='PolicyNames') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete_group_policy(self, group_name, policy_name):
""" Deletes the specified policy document for the specified group. :type group_name: string :param group_name: The name of the group the policy is associated with. :type policy_name: string :param policy_name: The policy document to delete. """ |
params = {'GroupName' : group_name,
'PolicyName' : policy_name}
return self.get_response('DeleteGroupPolicy', params, verb='POST') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_all_users(self, path_prefix='/', marker=None, max_items=None):
""" List the users that have the specified path prefix. :type path_prefix: string :param path_prefix: If provided, only users whose paths match the provided prefix will be returned. :type marker: string :param marker: Use this only when paginating results and only in follow-up request after you've received a response where the results are truncated. Set this to the value of the Marker element in the response you just received. :type max_items: int :param max_items: Use this only when paginating results to indicate the maximum number of users you want in the response. """ |
params = {'PathPrefix' : path_prefix}
if marker:
params['Marker'] = marker
if max_items:
params['MaxItems'] = max_items
return self.get_response('ListUsers', params, list_marker='Users') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_user(self, user_name=None):
""" Retrieve information about the specified user. If the user_name is not specified, the user_name is determined implicitly based on the AWS Access Key ID used to sign the request. :type user_name: string :param user_name: The name of the user to retrieve. If not specified, defaults to the user making the request. """ |
params = {}
if user_name:
params['UserName'] = user_name
return self.get_response('GetUser', params) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_all_user_policies(self, user_name, marker=None, max_items=None):
""" List the names of the policies associated with the specified user. :type user_name: string :param user_name: The name of the user the policy is associated with. :type marker: string :param marker: Use this only when paginating results and only in follow-up request after you've received a response where the results are truncated. Set this to the value of the Marker element in the response you just received. :type max_items: int :param max_items: Use this only when paginating results to indicate the maximum number of policies you want in the response. """ |
params = {'UserName' : user_name}
if marker:
params['Marker'] = marker
if max_items:
params['MaxItems'] = max_items
return self.get_response('ListUserPolicies', params,
list_marker='PolicyNames') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete_user_policy(self, user_name, policy_name):
""" Deletes the specified policy document for the specified user. :type user_name: string :param user_name: The name of the user the policy is associated with. :type policy_name: string :param policy_name: The policy document to delete. """ |
params = {'UserName' : user_name,
'PolicyName' : policy_name}
return self.get_response('DeleteUserPolicy', params, verb='POST') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_groups_for_user(self, user_name, marker=None, max_items=None):
""" List the groups that a specified user belongs to. :type user_name: string :param user_name: The name of the user to list groups for. :type marker: string :param marker: Use this only when paginating results and only in follow-up request after you've received a response where the results are truncated. Set this to the value of the Marker element in the response you just received. :type max_items: int :param max_items: Use this only when paginating results to indicate the maximum number of groups you want in the response. """ |
params = {'UserName' : user_name}
if marker:
params['Marker'] = marker
if max_items:
params['MaxItems'] = max_items
return self.get_response('ListGroupsForUser', params,
list_marker='Groups') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_all_access_keys(self, user_name, marker=None, max_items=None):
""" Get all access keys associated with a user. :type user_name: string :param user_name: The username of the user :type marker: string :param marker: Use this only when paginating results and only in follow-up request after you've received a response where the results are truncated. Set this to the value of the Marker element in the response you just received. :type max_items: int :param max_items: Use this only when paginating results to indicate the maximum number of access keys you want in the response. """ |
params = {'UserName' : user_name}
if marker:
params['Marker'] = marker
if max_items:
params['MaxItems'] = max_items
return self.get_response('ListAccessKeys', params,
list_marker='AccessKeyMetadata') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def update_access_key(self, access_key_id, status, user_name=None):
""" Changes the status of the specified access key from Active to Inactive or vice versa. This action can be used to disable a user's key as part of a key rotation workflow. If the user_name is not specified, the user_name is determined implicitly based on the AWS Access Key ID used to sign the request. :type access_key_id: string :param access_key_id: The ID of the access key. :type status: string :param status: Either Active or Inactive. :type user_name: string :param user_name: The username of user (optional). """ |
params = {'AccessKeyId' : access_key_id,
'Status' : status}
if user_name:
params['UserName'] = user_name
return self.get_response('UpdateAccessKey', params) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete_access_key(self, access_key_id, user_name=None):
""" Delete an access key associated with a user. If the user_name is not specified, it is determined implicitly based on the AWS Access Key ID used to sign the request. :type access_key_id: string :param access_key_id: The ID of the access key to be deleted. :type user_name: string :param user_name: The username of the user """ |
params = {'AccessKeyId' : access_key_id}
if user_name:
params['UserName'] = user_name
return self.get_response('DeleteAccessKey', params) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_all_signing_certs(self, marker=None, max_items=None, user_name=None):
""" Get all signing certificates associated with an account. If the user_name is not specified, it is determined implicitly based on the AWS Access Key ID used to sign the request. :type marker: string :param marker: Use this only when paginating results and only in follow-up request after you've received a response where the results are truncated. Set this to the value of the Marker element in the response you just received. :type max_items: int :param max_items: Use this only when paginating results to indicate the maximum number of certificates you want in the response. :type user_name: string :param user_name: The username of the user """ |
params = {}
if marker:
params['Marker'] = marker
if max_items:
params['MaxItems'] = max_items
if user_name:
params['UserName'] = user_name
return self.get_response('ListSigningCertificates',
params, list_marker='Certificates') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def update_signing_cert(self, cert_id, status, user_name=None):
""" Change the status of the specified signing certificate from Active to Inactive or vice versa. If the user_name is not specified, it is determined implicitly based on the AWS Access Key ID used to sign the request. :type cert_id: string :param cert_id: The ID of the signing certificate :type status: string :param status: Either Active or Inactive. :type user_name: string :param user_name: The username of the user """ |
params = {'CertificateId' : cert_id,
'Status' : status}
if user_name:
params['UserName'] = user_name
return self.get_response('UpdateSigningCertificate', params) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def upload_signing_cert(self, cert_body, user_name=None):
""" Uploads an X.509 signing certificate and associates it with the specified user. If the user_name is not specified, it is determined implicitly based on the AWS Access Key ID used to sign the request. :type cert_body: string :param cert_body: The body of the signing certificate. :type user_name: string :param user_name: The username of the user """ |
params = {'CertificateBody' : cert_body}
if user_name:
params['UserName'] = user_name
return self.get_response('UploadSigningCertificate', params,
verb='POST') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete_signing_cert(self, cert_id, user_name=None):
""" Delete a signing certificate associated with a user. If the user_name is not specified, it is determined implicitly based on the AWS Access Key ID used to sign the request. :type user_name: string :param user_name: The username of the user :type cert_id: string :param cert_id: The ID of the certificate. """ |
params = {'CertificateId' : cert_id}
if user_name:
params['UserName'] = user_name
return self.get_response('DeleteSigningCertificate', params) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_all_server_certs(self, path_prefix='/', marker=None, max_items=None):
""" Lists the server certificates that have the specified path prefix. If none exist, the action returns an empty list. :type path_prefix: string :param path_prefix: If provided, only certificates whose paths match the provided prefix will be returned. :type marker: string :param marker: Use this only when paginating results and only in follow-up request after you've received a response where the results are truncated. Set this to the value of the Marker element in the response you just received. :type max_items: int :param max_items: Use this only when paginating results to indicate the maximum number of certificates you want in the response. """ |
params = {}
if path_prefix:
params['PathPrefix'] = path_prefix
if marker:
params['Marker'] = marker
if max_items:
params['MaxItems'] = max_items
return self.get_response('ListServerCertificates',
params,
list_marker='ServerCertificateMetadataList') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def upload_server_cert(self, cert_name, cert_body, private_key, cert_chain=None, path=None):
""" Uploads a server certificate entity for the AWS Account. The server certificate entity includes a public key certificate, a private key, and an optional certificate chain, which should all be PEM-encoded. :type cert_name: string :param cert_name: The name for the server certificate. Do not include the path in this value. :type cert_body: string :param cert_body: The contents of the public key certificate in PEM-encoded format. :type private_key: string :param private_key: The contents of the private key in PEM-encoded format. :type cert_chain: string :param cert_chain: The contents of the certificate chain. This is typically a concatenation of the PEM-encoded public key certificates of the chain. :type path: string :param path: The path for the server certificate. """ |
params = {'ServerCertificateName' : cert_name,
'CertificateBody' : cert_body,
'PrivateKey' : private_key}
if cert_chain:
params['CertificateChain'] = cert_chain
if path:
params['Path'] = path
return self.get_response('UploadServerCertificate', params,
verb='POST') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_all_mfa_devices(self, user_name, marker=None, max_items=None):
""" Get all MFA devices associated with a user. :type user_name: string :param user_name: The username of the user :type marker: string :param marker: Use this only when paginating results and only in follow-up request after you've received a response where the results are truncated. Set this to the value of the Marker element in the response you just received. :type max_items: int :param max_items: Use this only when paginating results to indicate the maximum number of MFA devices you want in the response. """ |
params = {'UserName' : user_name}
if marker:
params['Marker'] = marker
if max_items:
params['MaxItems'] = max_items
return self.get_response('ListMFADevices',
params, list_marker='MFADevices') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def enable_mfa_device(self, user_name, serial_number, auth_code_1, auth_code_2):
""" Enables the specified MFA device and associates it with the specified user. :type user_name: string :param user_name: The username of the user :type serial_number: string :param serial_number: The serial number which uniquely identifies the MFA device. :type auth_code_1: string :param auth_code_1: An authentication code emitted by the device. :type auth_code_2: string :param auth_code_2: A subsequent authentication code emitted by the device. """ |
params = {'UserName' : user_name,
'SerialNumber' : serial_number,
'AuthenticationCode1' : auth_code_1,
'AuthenticationCode2' : auth_code_2}
return self.get_response('EnableMFADevice', params) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def deactivate_mfa_device(self, user_name, serial_number):
""" Deactivates the specified MFA device and removes it from association with the user. :type user_name: string :param user_name: The username of the user :type serial_number: string :param serial_number: The serial number which uniquely identifies the MFA device. """ |
params = {'UserName' : user_name,
'SerialNumber' : serial_number}
return self.get_response('DeactivateMFADevice', params) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def resync_mfa_device(self, user_name, serial_number, auth_code_1, auth_code_2):
""" Synchronizes the specified MFA device with the AWS servers. :type user_name: string :param user_name: The username of the user :type serial_number: string :param serial_number: The serial number which uniquely identifies the MFA device. :type auth_code_1: string :param auth_code_1: An authentication code emitted by the device. :type auth_code_2: string :param auth_code_2: A subsequent authentication code emitted by the device. """ |
params = {'UserName' : user_name,
'SerialNumber' : serial_number,
'AuthenticationCode1' : auth_code_1,
'AuthenticationCode2' : auth_code_2}
return self.get_response('ResyncMFADevice', params) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def update_login_profile(self, user_name, password):
""" Resets the password associated with the user's login profile. :type user_name: string :param user_name: The name of the user :type password: string :param password: The new password for the user """ |
params = {'UserName' : user_name,
'Password' : password}
return self.get_response('UpdateLoginProfile', params) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_signin_url(self, service='ec2'):
""" Get the URL where IAM users can use their login profile to sign in to this account's console. :type service: string :param service: Default service to go to in the console. """ |
alias = self.get_account_alias()
if not alias:
raise Exception('No alias associated with this account. Please use iam.create_account_alias() first.')
return "https://%s.signin.aws.amazon.com/console/%s" % (alias, service) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def next(self):
"""Special paging functionality""" |
if self.iter == None:
self.iter = iter(self.objs)
try:
return self.iter.next()
except StopIteration:
self.iter = None
self.objs = []
if int(self.page) < int(self.total_pages):
self.page += 1
self._connection.get_response(self.action, self.params, self.page, self)
return self.next()
else:
raise |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def xread(file, length):
"Read exactly length bytes from file; raise EOFError if file ends sooner."
data = file.read(length)
if len(data) != length:
raise EOFError
return data |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def opened(filename, mode):
"Open filename, or do nothing if filename is already an open file object"
if isinstance(filename, str):
file = open(filename, mode)
try:
yield file
finally:
if not file.closed:
file.close()
else:
yield filename |
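The two helpers above pair naturally: `xread` enforces an exact-length read and `opened` accepts either a path or an already-open file. A small usage sketch, assuming `opened` is wrapped with `contextlib.contextmanager` in the original module (the decorator is not shown in the snippet); the file name is a placeholder:

```python
# 'payload.bin' is a placeholder path used only for illustration.
with opened('payload.bin', 'rb') as f:
    magic = xread(f, 4)            # read exactly 4 bytes or raise EOFError

already_open = open('payload.bin', 'rb')
with opened(already_open, 'rb') as f:
    magic = xread(f, 4)            # an existing file object is passed through unchanged
already_open.close()               # opened() does not close files it did not open
```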
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def replace_chunk(filename, offset, length, chunk, in_place=True, max_mem=5):
""" Replace length bytes of data with chunk, starting at offset. Any KeyboardInterrupts arriving while replace_chunk is running are deferred until the operation is complete. If in_place is true, the operation works directly on the original file; this is fast and works on files that are already open, but an error or interrupt may lead to corrupt file contents. If in_place is false, the function prepares a copy first, then renames it back over the original file. This method is slower, but it prevents corruption on systems with atomic renames (UNIX), and reduces the window of vulnerability elsewhere (Windows). If there is no need to move data that is not being replaced, then we use the direct method irrespective of in_place. (In this case an interrupt may only corrupt the chunk being replaced.) """ |
with suppress_interrupt():
_replace_chunk(filename, offset, length, chunk, in_place, max_mem) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _copy_chunk(src, dst, length):
"Copy length bytes from file src to file dst."
BUFSIZE = 128 * 1024
while length > 0:
l = min(BUFSIZE, length)
buf = src.read(l)
assert len(buf) == l
dst.write(buf)
length -= l |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def symbolize(self, dsym_path, image_vmaddr, image_addr, instruction_addr, cpu_name, symbolize_inlined=False):
"""Symbolizes a single frame based on the information provided. If the symbolication fails a `SymbolicationError` is raised. `dsym_path` is the path to the dsym file on the file system. `image_vmaddr` is the slide of the image. For most situations this can just be set to `0`. If it's zero or unset we will attempt to find the slide from the dsym file. `image_addr` is the canonical image address as loaded. `instruction_addr` is the address where the error happened. `cpu_name` is the CPU name. It follows general apple conventions and is used to special case certain behavior and look up the right symbols. Common names are `armv7` and `arm64`. Additionally if `symbolize_inlined` is set to `True` then a list of frames is returned instead which might contain inlined frames. In that case the return value might be an empty list instead. """ |
if self._closed:
raise RuntimeError('Symbolizer is closed')
dsym_path = normalize_dsym_path(dsym_path)
image_vmaddr = parse_addr(image_vmaddr)
if not image_vmaddr:
di = self._symbolizer.get_debug_info(dsym_path)
if di is not None:
variant = di.get_variant(cpu_name)
if variant is not None:
image_vmaddr = variant.vmaddr
image_addr = parse_addr(image_addr)
instruction_addr = parse_addr(instruction_addr)
if not is_valid_cpu_name(cpu_name):
raise SymbolicationError('"%s" is not a valid cpu name' % cpu_name)
addr = image_vmaddr + instruction_addr - image_addr
with self._lock:
with timedsection('symbolize'):
if symbolize_inlined:
return self._symbolizer.symbolize_inlined(
dsym_path, addr, cpu_name)
return self._symbolizer.symbolize(
dsym_path, addr, cpu_name) |
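The only arithmetic in `symbolize` is the rebasing step `addr = image_vmaddr + instruction_addr - image_addr`. A worked example with made-up addresses shows what actually gets looked up in the dSYM:

```python
image_vmaddr = 0x100000000       # preferred load address recorded in the dSYM
image_addr = 0x10009c000         # address the image was actually loaded at (slide applied)
instruction_addr = 0x1000a1f34   # crashing instruction inside the loaded image

addr = image_vmaddr + instruction_addr - image_addr
assert addr == 0x100005f34       # file-relative address handed to the symbolizer
```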
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def list_tables(self, limit=None, start_table=None):
""" Return a list of table names associated with the current account and endpoint. :type limit: int :param limit: The maximum number of tables to return. :type start_table: str :param start_table: The name of the table that starts the list. If you ran a previous list_tables and not all results were returned, the response dict would include a LastEvaluatedTableName attribute. Use that value here to continue the listing. """ |
data = {}
if limit:
data['Limit'] = limit
if start_table:
data['ExclusiveStartTableName'] = start_table
json_input = json.dumps(data)
return self.make_request('ListTables', json_input) |
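`list_tables` returns one page at a time; the docstring points at `LastEvaluatedTableName` for continuation. A hypothetical paging helper, assuming `conn` is the layer-1 connection defined here and that `make_request` returns the decoded response dict (an assumption, since its return value is not shown):

```python
def iter_table_names(conn, page_size=20):
    """Yield every table name, feeding LastEvaluatedTableName back as start_table."""
    start_table = None
    while True:
        result = conn.list_tables(limit=page_size, start_table=start_table)
        for name in result.get('TableNames', []):
            yield name
        start_table = result.get('LastEvaluatedTableName')
        if not start_table:
            break
```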
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def describe_table(self, table_name):
""" Returns information about the table including current state of the table, primary key schema and when the table was created. :type table_name: str :param table_name: The name of the table to describe. """ |
data = {'TableName' : table_name}
json_input = json.dumps(data)
return self.make_request('DescribeTable', json_input) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_table(self, table_name, schema, provisioned_throughput):
""" Add a new table to your account. The table name must be unique among those associated with the account issuing the request. This request triggers an asynchronous workflow to begin creating the table. When the workflow is complete, the state of the table will be ACTIVE. :type table_name: str :param table_name: The name of the table to create. :type schema: dict :param schema: A Python version of the KeySchema data structure as defined by DynamoDB :type provisioned_throughput: dict :param provisioned_throughput: A Python version of the ProvisionedThroughput data structure defined by DynamoDB. """ |
data = {'TableName' : table_name,
'KeySchema' : schema,
'ProvisionedThroughput': provisioned_throughput}
json_input = json.dumps(data)
response_dict = self.make_request('CreateTable', json_input)
return response_dict |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete_table(self, table_name):
""" Deletes the table and all of its data. After this request the table will be in the DELETING state until DynamoDB completes the delete operation. :type table_name: str :param table_name: The name of the table to delete. """ |
data = {'TableName': table_name}
json_input = json.dumps(data)
return self.make_request('DeleteTable', json_input) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_item(self, table_name, key, attributes_to_get=None, consistent_read=False, object_hook=None):
""" Return a set of attributes for an item that matches the supplied key. :type table_name: str :param table_name: The name of the table containing the item. :type key: dict :param key: A Python version of the Key data structure defined by DynamoDB. :type attributes_to_get: list :param attributes_to_get: A list of attribute names. If supplied, only the specified attribute names will be returned. Otherwise, all attributes will be returned. :type consistent_read: bool :param consistent_read: If True, a consistent read request is issued. Otherwise, an eventually consistent request is issued. """ |
data = {'TableName': table_name,
'Key': key}
if attributes_to_get:
data['AttributesToGet'] = attributes_to_get
if consistent_read:
data['ConsistentRead'] = True
json_input = json.dumps(data)
response = self.make_request('GetItem', json_input,
object_hook=object_hook)
if not response.has_key('Item'):
raise dynamodb_exceptions.DynamoDBKeyNotFoundError(
"Key does not exist."
)
return response |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete_item(self, table_name, key, expected=None, return_values=None, object_hook=None):
""" Delete an item and all of its attributes by primary key. You can perform a conditional delete by specifying an expected rule. :type table_name: str :param table_name: The name of the table containing the item. :type key: dict :param key: A Python version of the Key data structure defined by DynamoDB. :type expected: dict :param expected: A Python version of the Expected data structure defined by DynamoDB. :type return_values: str :param return_values: Controls the return of attribute name-value pairs before they were changed. Possible values are: None or 'ALL_OLD'. If 'ALL_OLD' is specified and the item is overwritten, the content of the old item is returned. """ |
data = {'TableName' : table_name,
'Key' : key}
if expected:
data['Expected'] = expected
if return_values:
data['ReturnValues'] = return_values
json_input = json.dumps(data)
return self.make_request('DeleteItem', json_input,
object_hook=object_hook) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def query(self, table_name, hash_key_value, range_key_conditions=None, attributes_to_get=None, limit=None, consistent_read=False, scan_index_forward=True, exclusive_start_key=None, object_hook=None):
""" Perform a query of DynamoDB. This version is currently punting and expecting you to provide a full and correct JSON body which is passed as is to DynamoDB. :type table_name: str :param table_name: The name of the table to query. :type hash_key_value: dict :param hash_key_value: A DynamoDB-style HashKeyValue. :type range_key_conditions: dict :param range_key_conditions: A Python version of the RangeKeyConditions data structure. :type attributes_to_get: list :param attributes_to_get: A list of attribute names. If supplied, only the specified attribute names will be returned. Otherwise, all attributes will be returned. :type limit: int :param limit: The maximum number of items to return. :type consistent_read: bool :param consistent_read: If True, a consistent read request is issued. Otherwise, an eventually consistent request is issued. :type scan_index_forward: bool :param scan_index_forward: Specifies forward or backward traversal of the index. Default is forward (True). :type exclusive_start_key: list or tuple :param exclusive_start_key: Primary key of the item from which to continue an earlier query. This would be provided as the LastEvaluatedKey in that query. """ |
data = {'TableName': table_name,
'HashKeyValue': hash_key_value}
if range_key_conditions:
data['RangeKeyCondition'] = range_key_conditions
if attributes_to_get:
data['AttributesToGet'] = attributes_to_get
if limit:
data['Limit'] = limit
if consistent_read:
data['ConsistentRead'] = True
if scan_index_forward:
data['ScanIndexForward'] = True
else:
data['ScanIndexForward'] = False
if exclusive_start_key:
data['ExclusiveStartKey'] = exclusive_start_key
json_input = json.dumps(data)
return self.make_request('Query', json_input,
object_hook=object_hook) |
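Query results are likewise paged through `ExclusiveStartKey`/`LastEvaluatedKey`. A sketch that drains all pages, under the same assumption that the connection's responses come back as plain dicts:

```python
def query_all(conn, table_name, hash_key_value, **kwargs):
    """Collect every item for a hash key, following LastEvaluatedKey between pages."""
    items, start_key = [], None
    while True:
        result = conn.query(table_name, hash_key_value,
                            exclusive_start_key=start_key, **kwargs)
        items.extend(result.get('Items', []))
        start_key = result.get('LastEvaluatedKey')
        if not start_key:
            return items
```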
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def scan(self, table_name, scan_filter=None, attributes_to_get=None, limit=None, count=False, exclusive_start_key=None, object_hook=None):
""" Perform a scan of DynamoDB. This version is currently punting and expecting you to provide a full and correct JSON body which is passed as is to DynamoDB. :type table_name: str :param table_name: The name of the table to scan. :type scan_filter: dict :param scan_filter: A Python version of the ScanFilter data structure. :type attributes_to_get: list :param attributes_to_get: A list of attribute names. If supplied, only the specified attribute names will be returned. Otherwise, all attributes will be returned. :type limit: int :param limit: The maximum number of items to return. :type count: bool :param count: If True, Amazon DynamoDB returns a total number of items for the Scan operation, even if the operation has no matching items for the assigned filter. :type exclusive_start_key: list or tuple :param exclusive_start_key: Primary key of the item from which to continue an earlier query. This would be provided as the LastEvaluatedKey in that query. """ |
data = {'TableName': table_name}
if scan_filter:
data['ScanFilter'] = scan_filter
if attributes_to_get:
data['AttributesToGet'] = attributes_to_get
if limit:
data['Limit'] = limit
if count:
data['Count'] = True
if exclusive_start_key:
data['ExclusiveStartKey'] = exclusive_start_key
json_input = json.dumps(data)
return self.make_request('Scan', json_input, object_hook=object_hook) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def remove_bg(img, th=(240, 255)):
""" Removes similar colored background in the given image. :param img: Input image :param th: Tuple(2) Background color threshold (lower-limit, upper-limit) :return: Background removed image as result """ |
if img.size == 0:
return img
img = gray3(img)
# delete rows with complete background color
h, w = img.shape[:2]
i = 0
while i < h:
mask = np.logical_or(img[i, :, :] < th[0], img[i, :, :] > th[1])
if not mask.any():
img = np.delete(img, i, axis=0)
i -= 1
h -= 1
i += 1
# if image is complete background only
if img.size == 0:
return img
# delete columns with complete background color
h, w = img.shape[:2]
i = 0
while i < w:
mask = np.logical_or(img[:, i, :] < th[0], img[:, i, :] > th[1])
if not mask.any():
img = np.delete(img, i, axis=1)
i -= 1
w -= 1
i += 1
return img |
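A tiny synthetic check makes the trimming behaviour of `remove_bg` concrete; it assumes `numpy` is imported as `np` and that the module's `gray3` helper (not shown) passes 3-channel images through unchanged:

```python
import numpy as np

img = np.full((6, 6, 3), 250, dtype=np.uint8)   # everything inside the 240-255 background range
img[2:4, 2:4] = 30                              # a 2x2 foreground block
trimmed = remove_bg(img)
print(trimmed.shape)                            # expected: (2, 2, 3) once background rows/cols are removed
```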
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_bg(img, padding, color=COL_WHITE):
""" Adds a padding to the given image as background of specified color. :param img: Input image. :param padding: constant padding around the image. :param color: background color that needs to be filled for the newly padded region. :return: New image with background. """ |
img = gray3(img)
h, w, d = img.shape
new_img = np.ones((h + 2*padding, w + 2*padding, d)) * color[:d]
new_img = new_img.astype(np.uint8)
set_img_box(new_img, (padding, padding, w, h), img)
return new_img |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def img_box(img, box):
""" Selects the sub-image inside the given box :param img: Image to crop from :param box: Box to crop from. Box can be either Box object or array of [x, y, width, height] :return: Cropped sub-image from the main image """ |
if isinstance(box, tuple):
box = Box.from_tup(box)
if len(img.shape) == 3:
return img[box.y:box.y + box.height, box.x:box.x + box.width, :]
else:
return img[box.y:box.y + box.height, box.x:box.x + box.width] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_text_img(img, text, pos, box=None, color=None, thickness=1, scale=1, vertical=False):
""" Adds the given text in the image. :param img: Input image :param text: String text :param pos: (x, y) in the image or relative to the given Box object :param box: Box object. If not None, the text is placed inside the box. :param color: Color of the text. :param thickness: Thickness of the font. :param scale: Font size scale. :param vertical: If true, the text is displayed vertically. (slow) :return: """ |
if color is None:
color = COL_WHITE
text = str(text)
top_left = pos
if box is not None:
top_left = box.move(pos).to_int().top_left()
if top_left[0] > img.shape[1]:
return
if vertical:
if box is not None:
h, w, d = box.height, box.width, 3
else:
h, w, d = img.shape
txt_img = np.zeros((w, h, d), dtype=np.uint8)
# 90 deg rotation
top_left = h - pos[1], pos[0]
cv.putText(txt_img, text, top_left, cv.FONT_HERSHEY_PLAIN, scale, color, thickness)
txt_img = ndimage.rotate(txt_img, 90)
mask = txt_img > 0
if box is not None:
im_box = img_box(img, box)
im_box[mask] = txt_img[mask]
else:
img[mask] = txt_img[mask]
else:
cv.putText(img, text, top_left, cv.FONT_HERSHEY_PLAIN, scale, color, thickness) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_rect(img, box, color=None, thickness=1):
""" Draws a bounding box inside the image. :param img: Input image :param box: Box object that defines the bounding box. :param color: Color of the box :param thickness: Thickness of line :return: Rectangle added image """ |
if color is None:
color = COL_GRAY
box = box.to_int()
cv.rectangle(img, box.top_left(), box.bottom_right(), color, thickness) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def collage(imgs, size, padding=10, bg=COL_BLACK):
""" Constructs a collage of same-sized images with specified padding. :param imgs: Array of images. Either 1d-array or 2d-array. :param size: (no. of rows, no. of cols) :param padding: Padding space between each image :param bg: Background color for the collage. Default: Black :return: New collage """ |
# make 2d array
if not isinstance(imgs[0], list):
imgs = [imgs]
h, w = imgs[0][0].shape[:2]
nrows, ncols = size
nr, nc = nrows * h + (nrows-1) * padding, ncols * w + (ncols-1) * padding
res = np.ones((nr, nc, 3), dtype=np.uint8) * np.array(bg, dtype=np.uint8)
for r in range(nrows):
for c in range(ncols):
img = imgs[r][c]
if is_gray(img):
img = gray3ch(img)
rs = r * (h + padding)
re = rs + h
cs = c * (w + padding)
ce = cs + w
res[rs:re, cs:ce, :] = img
return res |
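A short usage sketch for `collage`, assuming the module-level helpers it relies on (`is_gray`, `gray3ch`, `COL_BLACK`) behave as their names suggest; the tile contents are arbitrary:

```python
import numpy as np

tiles = [np.full((50, 50, 3), shade, dtype=np.uint8) for shade in (40, 90, 140, 190)]
grid = collage([tiles[:2], tiles[2:]], size=(2, 2), padding=5)
print(grid.shape)   # (2*50 + 5, 2*50 + 5, 3) -> (105, 105, 3)
```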
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def each_img(img_dir):
""" Reads and iterates through each image file in the given directory """ |
for fname in utils.each_img(img_dir):
fname = os.path.join(img_dir, fname)
yield cv.imread(fname), fname |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_state(self, value, reason, data=None):
""" Temporarily sets the state of an alarm. :type value: str :param value: OK | ALARM | INSUFFICIENT_DATA :type reason: str :param reason: Reason alarm set (human readable). :type data: str :param data: Reason data (will be jsonified). """ |
return self.connection.set_alarm_state(self.name, reason, value, data) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_alarm_action(self, action_arn=None):
""" Adds an alarm action, represented as an SNS topic, to this alarm. What to do when the alarm is triggered. :type action_arn: str :param action_arn: SNS topics to which notification should be sent if the alarm goes to state ALARM. """ |
if not action_arn:
return # Raise exception instead?
self.actions_enabled = 'true'
self.alarm_actions.append(action_arn) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_insufficient_data_action(self, action_arn=None):
""" Adds an insufficient_data action, represented as an SNS topic, to this alarm. What to do when the insufficient_data state is reached. :type action_arn: str :param action_arn: SNS topics to which notification should be sent if the alarm goes to state INSUFFICIENT_DATA. """ |
if not action_arn:
return
self.actions_enabled = 'true'
self.insufficient_data_actions.append(action_arn) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_ok_action(self, action_arn=None):
""" Adds an ok action, represented as an SNS topic, to this alarm. What to do when the ok state is reached. :type action_arn: str :param action_arn: SNS topics to which notification should be sent if the alarm goes to state OK. """ |
if not action_arn:
return
self.actions_enabled = 'true'
self.ok_actions.append(action_arn) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_session_token(self, duration=None, force_new=False):
""" Return a valid session token. Because retrieving new tokens from the Secure Token Service is a fairly heavyweight operation this module caches previously retrieved tokens and returns them when appropriate. Each token is cached with a key consisting of the region name of the STS endpoint concatenated with the requesting user's access id. If there is a token in the cache matching this key, the session expiration is checked to make sure it is still valid and if so, the cached token is returned. Otherwise, a new session token is requested from STS and it is placed into the cache and returned. :type duration: int :param duration: The number of seconds the credentials should remain valid. :type force_new: bool :param force_new: If this parameter is True, a new session token will be retrieved from the Secure Token Service regardless of whether there is a valid cached token or not. """ |
token_key = '%s:%s' % (self.region.name, self.provider.access_key)
token = self._check_token_cache(token_key, duration)
if force_new or not token:
boto.log.debug('fetching a new token for %s' % token_key)
self._mutex.acquire()
token = self._get_session_token(duration)
_session_token_cache[token_key] = token
self._mutex.release()
return token |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_topic_attributes(self, topic, attr_name, attr_value):
""" Set attributes of a Topic :type topic: string :param topic: The ARN of the topic. :type attr_name: string :param attr_name: The name of the attribute you want to set. Only a subset of the topic's attributes are mutable. Valid values: Policy | DisplayName :type attr_value: string :param attr_value: The new value for the attribute. """ |
params = {'ContentType' : 'JSON',
'TopicArn' : topic,
'AttributeName' : attr_name,
'AttributeValue' : attr_value}
response = self.make_request('SetTopicAttributes', params, '/', 'GET')
body = response.read()
if response.status == 200:
return json.loads(body)
else:
boto.log.error('%s %s' % (response.status, response.reason))
boto.log.error('%s' % body)
raise self.ResponseError(response.status, response.reason, body) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_permission(self, topic, label, account_ids, actions):
""" Adds a statement to a topic's access control policy, granting access for the specified AWS accounts to the specified actions. :type topic: string :param topic: The ARN of the topic. :type label: string :param label: A unique identifier for the new policy statement. :type account_ids: list of strings :param account_ids: The AWS account ids of the users who will be given access to the specified actions. :type actions: list of strings :param actions: The actions you want to allow for each of the specified principal(s). """ |
params = {'ContentType' : 'JSON',
'TopicArn' : topic,
'Label' : label}
self.build_list_params(params, account_ids, 'AWSAccountId')
self.build_list_params(params, actions, 'ActionName')
response = self.make_request('AddPermission', params, '/', 'GET')
body = response.read()
if response.status == 200:
return json.loads(body)
else:
boto.log.error('%s %s' % (response.status, response.reason))
boto.log.error('%s' % body)
raise self.ResponseError(response.status, response.reason, body) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def subscribe_sqs_queue(self, topic, queue):
""" Subscribe an SQS queue to a topic. This is a convenience method that handles most of the complexity involved in using an SQS queue as an endpoint for an SNS topic. To achieve this the following operations are performed: * The correct ARN is constructed for the SQS queue and that ARN is then subscribed to the topic. * A JSON policy document is constructed that grants permission to the SNS topic to send messages to the SQS queue. * This JSON policy is then associated with the SQS queue using the queue's set_attribute method. If the queue already has a policy associated with it, this process will add a Statement to that policy. If no policy exists, a new policy will be created. :type topic: string :param topic: The name of the new topic. :type queue: A boto Queue object :param queue: The queue you wish to subscribe to the SNS Topic. """ |
t = queue.id.split('/')
q_arn = 'arn:aws:sqs:%s:%s:%s' % (queue.connection.region.name,
t[1], t[2])
resp = self.subscribe(topic, 'sqs', q_arn)
policy = queue.get_attributes('Policy')
if 'Version' not in policy:
policy['Version'] = '2008-10-17'
if 'Statement' not in policy:
policy['Statement'] = []
statement = {'Action' : 'SQS:SendMessage',
'Effect' : 'Allow',
'Principal' : {'AWS' : '*'},
'Resource' : q_arn,
'Sid' : str(uuid.uuid4()),
'Condition' : {'StringLike' : {'aws:SourceArn' : topic}}}
policy['Statement'].append(statement)
queue.set_attribute('Policy', json.dumps(policy))
return resp |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def unsubscribe(self, subscription):
""" Allows endpoint owner to delete subscription. Confirmation message will be delivered. :type subscription: string :param subscription: The ARN of the subscription to be deleted. """ |
params = {'ContentType' : 'JSON',
'SubscriptionArn' : subscription}
response = self.make_request('Unsubscribe', params, '/', 'GET')
body = response.read()
if response.status == 200:
return json.loads(body)
else:
boto.log.error('%s %s' % (response.status, response.reason))
boto.log.error('%s' % body)
raise self.ResponseError(response.status, response.reason, body) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_all_subscriptions(self, next_token=None):
""" Get list of all subscriptions. :type next_token: string :param next_token: Token returned by the previous call to this method. """ |
params = {'ContentType' : 'JSON'}
if next_token:
params['NextToken'] = next_token
response = self.make_request('ListSubscriptions', params, '/', 'GET')
body = response.read()
if response.status == 200:
return json.loads(body)
else:
boto.log.error('%s %s' % (response.status, response.reason))
boto.log.error('%s' % body)
raise self.ResponseError(response.status, response.reason, body) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def install_caller_instruction(self, token_type="Unrestricted", transaction_id=None):
""" Set us up as a caller. This will install a new caller_token into the FPS section. This should really only be called to regenerate the caller token. """ |
response = self.install_payment_instruction("MyRole=='Caller';",
token_type=token_type,
transaction_id=transaction_id)
body = response.read()
if(response.status == 200):
rs = ResultSet()
h = handler.XmlHandler(rs, self)
xml.sax.parseString(body, h)
caller_token = rs.TokenId
try:
boto.config.save_system_option("FPS", "caller_token",
caller_token)
except(IOError):
boto.config.save_user_option("FPS", "caller_token",
caller_token)
return caller_token
else:
raise FPSResponseError(response.status, response.reason, body) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def make_marketplace_registration_url(self, returnURL, pipelineName, maxFixedFee=0.0, maxVariableFee=0.0, recipientPaysFee=True, **params):
""" Generate the URL with the signature required for signing up a recipient """ |
# use the sandbox authorization endpoint if we're using the
# sandbox for API calls.
endpoint_host = 'authorize.payments.amazon.com'
if 'sandbox' in self.host:
endpoint_host = 'authorize.payments-sandbox.amazon.com'
base = "/cobranded-ui/actions/start"
params['callerKey'] = str(self.aws_access_key_id)
params['returnURL'] = str(returnURL)
params['pipelineName'] = str(pipelineName)
params['maxFixedFee'] = str(maxFixedFee)
params['maxVariableFee'] = str(maxVariableFee)
params['recipientPaysFee'] = str(recipientPaysFee)
params["signatureMethod"] = 'HmacSHA256'
params["signatureVersion"] = '2'
if(not params.has_key('callerReference')):
params['callerReference'] = str(uuid.uuid4())
parts = ''
for k in sorted(params.keys()):
parts += "&%s=%s" % (k, urllib.quote(params[k], '~'))
canonical = '\n'.join(['GET',
str(endpoint_host).lower(),
base,
parts[1:]])
signature = self._auth_handler.sign_string(canonical)
params["signature"] = signature
urlsuffix = ''
for k in sorted(params.keys()):
urlsuffix += "&%s=%s" % (k, urllib.quote(params[k], '~'))
urlsuffix = urlsuffix[1:] # strip the first &
fmt = "https://%(endpoint_host)s%(base)s?%(urlsuffix)s"
final = fmt % vars()
return final |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def make_url(self, returnURL, paymentReason, pipelineName, transactionAmount, **params):
""" Generate the URL with the signature required for a transaction """ |
# use the sandbox authorization endpoint if we're using the
# sandbox for API calls.
endpoint_host = 'authorize.payments.amazon.com'
if 'sandbox' in self.host:
endpoint_host = 'authorize.payments-sandbox.amazon.com'
base = "/cobranded-ui/actions/start"
params['callerKey'] = str(self.aws_access_key_id)
params['returnURL'] = str(returnURL)
params['paymentReason'] = str(paymentReason)
params['pipelineName'] = pipelineName
params['transactionAmount'] = transactionAmount
params["signatureMethod"] = 'HmacSHA256'
params["signatureVersion"] = '2'
if(not params.has_key('callerReference')):
params['callerReference'] = str(uuid.uuid4())
parts = ''
for k in sorted(params.keys()):
parts += "&%s=%s" % (k, urllib.quote(params[k], '~'))
canonical = '\n'.join(['GET',
str(endpoint_host).lower(),
base,
parts[1:]])
signature = self._auth_handler.sign_string(canonical)
params["signature"] = signature
urlsuffix = ''
for k in sorted(params.keys()):
urlsuffix += "&%s=%s" % (k, urllib.quote(params[k], '~'))
urlsuffix = urlsuffix[1:] # strip the first &
fmt = "https://%(endpoint_host)s%(base)s?%(urlsuffix)s"
final = fmt % vars()
return final |
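Both URL builders above sign the same canonical form: the verb, the lowercased endpoint host, the pipeline path, and the `&`-joined, sorted, `~`-safe-quoted parameters. A small illustration of that canonical string (Python 2 style `urllib.quote`, matching the code above; the parameter values are placeholders):

```python
import urllib

params = {'callerKey': 'AKIDEXAMPLE',
          'returnURL': 'https://example.com/fps/return',
          'signatureMethod': 'HmacSHA256',
          'signatureVersion': '2'}
parts = ''.join('&%s=%s' % (k, urllib.quote(params[k], '~')) for k in sorted(params))
canonical = '\n'.join(['GET',
                       'authorize.payments.amazon.com',
                       '/cobranded-ui/actions/start',
                       parts[1:]])
print(canonical)   # this is the string handed to sign_string() before appending &signature=...
```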
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def pay(self, transactionAmount, senderTokenId, recipientTokenId=None, callerTokenId=None, chargeFeeTo="Recipient", callerReference=None, senderReference=None, recipientReference=None, senderDescription=None, recipientDescription=None, callerDescription=None, metadata=None, transactionDate=None, reserve=False):
""" Make a payment transaction. You must specify the amount. This can also perform a Reserve request if 'reserve' is set to True. """ |
params = {}
params['SenderTokenId'] = senderTokenId
# this is for 2008-09-17 specification
params['TransactionAmount.Amount'] = str(transactionAmount)
params['TransactionAmount.CurrencyCode'] = "USD"
#params['TransactionAmount'] = str(transactionAmount)
params['ChargeFeeTo'] = chargeFeeTo
params['RecipientTokenId'] = (
recipientTokenId if recipientTokenId is not None
else boto.config.get("FPS", "recipient_token")
)
params['CallerTokenId'] = (
callerTokenId if callerTokenId is not None
else boto.config.get("FPS", "caller_token")
)
if(transactionDate != None):
params['TransactionDate'] = transactionDate
if(senderReference != None):
params['SenderReference'] = senderReference
if(recipientReference != None):
params['RecipientReference'] = recipientReference
if(senderDescription != None):
params['SenderDescription'] = senderDescription
if(recipientDescription != None):
params['RecipientDescription'] = recipientDescription
if(callerDescription != None):
params['CallerDescription'] = callerDescription
if(metadata != None):
params['MetaData'] = metadata
if(callerReference == None):
callerReference = uuid.uuid4()
params['CallerReference'] = callerReference
if reserve:
response = self.make_request("Reserve", params)
else:
response = self.make_request("Pay", params)
body = response.read()
if(response.status == 200):
rs = ResultSet()
h = handler.XmlHandler(rs, self)
xml.sax.parseString(body, h)
return rs
else:
raise FPSResponseError(response.status, response.reason, body) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_transaction_status(self, transactionId):
""" Returns the status of a given transaction. """ |
params = {}
params['TransactionId'] = transactionId
response = self.make_request("GetTransactionStatus", params)
body = response.read()
if(response.status == 200):
rs = ResultSet()
h = handler.XmlHandler(rs, self)
xml.sax.parseString(body, h)
return rs
else:
raise FPSResponseError(response.status, response.reason, body) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def settle(self, reserveTransactionId, transactionAmount=None):
""" Charges for a reserved payment. """ |
params = {}
params['ReserveTransactionId'] = reserveTransactionId
if(transactionAmount != None):
params['TransactionAmount'] = transactionAmount
response = self.make_request("Settle", params)
body = response.read()
if(response.status == 200):
rs = ResultSet()
h = handler.XmlHandler(rs, self)
xml.sax.parseString(body, h)
return rs
else:
raise FPSResponseError(response.status, response.reason, body) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def refund(self, callerReference, transactionId, refundAmount=None, callerDescription=None):
""" Refund a transaction. This refunds the full amount by default unless 'refundAmount' is specified. """ |
params = {}
params['CallerReference'] = callerReference
params['TransactionId'] = transactionId
if(refundAmount != None):
params['RefundAmount'] = refundAmount
if(callerDescription != None):
params['CallerDescription'] = callerDescription
response = self.make_request("Refund", params)
body = response.read()
if(response.status == 200):
rs = ResultSet()
h = handler.XmlHandler(rs, self)
xml.sax.parseString(body, h)
return rs
else:
raise FPSResponseError(response.status, response.reason, body) |
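A hedged sketch of the Reserve -> Settle -> Refund flow, reusing the connection from the pay() sketch above; the TransactionId attribute on the parsed ResultSet is an assumption about the response shape.

# Sketch only; reserve_rs.TransactionId is an assumed response field.
reserve_rs = fps.pay(transactionAmount='25.00',
                     senderTokenId='SENDER-TOKEN-PLACEHOLDER',
                     reserve=True)
txn_id = reserve_rs.TransactionId
fps.settle(txn_id)                               # charge the reserved amount
fps.refund(callerReference='refund-001',
           transactionId=txn_id,
           callerDescription='customer cancelled')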
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def build_pdf(path_jinja2, template_name, path_outfile, template_kwargs=None):
'''Helper function for building a pdf from a latex jinja2 template
:param path_jinja2: the root directory for latex jinja2 templates
:param template_name: the path, relative to path_jinja2, to the desired
jinja2 Latex template
:param path_outfile: the full path to the desired final output file
Must contain the same file extension as files generated by
cmd_wo_infile, otherwise the process will fail
:param template_kwargs: a dictionary of key/values for jinja2 variables
'''
latex_template_object = LatexBuild(
path_jinja2,
template_name,
template_kwargs,
)
return latex_template_object.build_pdf(path_outfile) |
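A hedged usage sketch of this helper; the template directory, template name and variables are made up. build_html() and build_docx() below take the same arguments and differ only in the output format.

# Sketch only; paths and template variables are placeholders.
path_jinja2 = '/home/user/latex_templates'
template_name = 'report/main.tex'                # relative to path_jinja2
build_pdf(path_jinja2, template_name, '/tmp/report.pdf',
          template_kwargs={'title': 'Quarterly Report', 'author': 'A. Writer'})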
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def build_html(path_jinja2, template_name, path_outfile, template_kwargs=None):
'''Helper function for building an html from a latex jinja2 template
:param path_jinja2: the root directory for latex jinja2 templates
:param template_name: the path, relative to path_jinja2, to the desired
jinja2 Latex template
:param path_outfile: the full path to the desired final output file
Must contain the same file extension as files generated by
cmd_wo_infile, otherwise the process will fail
:param template_kwargs: a dictionary of key/values for jinja2 variables
'''
latex_template_object = LatexBuild(
path_jinja2,
template_name,
template_kwargs,
)
return latex_template_object.build_html(path_outfile) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def build_docx(path_jinja2, template_name, path_outfile, template_kwargs=None):
'''Helper function for building a docx file from a latex jinja2 template
:param path_jinja2: the root directory for latex jinja2 templates
:param template_name: the path, relative to path_jinja2, to the desired
jinja2 Latex template
:param path_outfile: the full path to the desired final output file
Must contain the same file extension as files generated by
cmd_wo_infile, otherwise the process will fail
:param template_kwargs: a dictionary of key/values for jinja2 variables
'''
latex_template_object = LatexBuild(
path_jinja2,
template_name,
template_kwargs,
)
return latex_template_object.build_docx(path_outfile) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read(self, uri=None, resources=None, index_only=False):
"""Read sitemap from a URI including handling sitemapindexes. If index_only is True then individual sitemaps references in a sitemapindex will not be read. This will result in no resources being returned and is useful only to read the metadata and links listed in the sitemapindex. Includes the subtlety that if the input URI is a local file and is a sitemapindex which contains URIs for the individual sitemaps, then these are mapped to the filesystem also. """ |
try:
fh = URLopener().open(uri)
self.num_files += 1
except IOError as e:
raise IOError(
"Failed to load sitemap/sitemapindex from %s (%s)" %
(uri, str(e)))
# Get the Content-Length if we can (works fine for local files)
try:
self.content_length = int(fh.info()['Content-Length'])
self.bytes_read += self.content_length
self.logger.debug(
"Read %d bytes from %s" %
(self.content_length, uri))
except KeyError:
# If we don't get a length then c'est la vie
self.logger.debug("Read ????? bytes from %s" % (uri))
pass
self.logger.info("Read sitemap/sitemapindex from %s" % (uri))
s = self.new_sitemap()
s.parse_xml(fh=fh, resources=self, capability=self.capability_name)
# what did we read? sitemap or sitemapindex?
if (s.parsed_index):
# sitemapindex
if (not self.allow_multifile):
raise ListBaseIndexError(
"Got sitemapindex from %s but support for sitemapindex disabled" %
(uri))
self.logger.info(
"Parsed as sitemapindex, %d sitemaps" %
(len(
self.resources)))
sitemapindex_is_file = self.is_file_uri(uri)
if (index_only):
# don't read the component sitemaps
self.sitemapindex = True
return
# now loop over all entries to read each sitemap and add to
# resources
sitemaps = self.resources
self.resources = self.resources_class()
self.logger.info("Now reading %d sitemaps" % len(sitemaps.uris()))
for sitemap_uri in sorted(sitemaps.uris()):
self.read_component_sitemap(
uri, sitemap_uri, s, sitemapindex_is_file)
else:
# sitemap
self.logger.info("Parsed as sitemap, %d resources" %
(len(self.resources))) |
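A hedged sketch of reading a sitemapindex with index_only, assuming a concrete ListBase subclass such as resync's ResourceList inherits this read() method; the URI is a placeholder.

# Sketch only.
rl = ResourceList()
rl.read(uri='http://example.com/resourcelist-index.xml', index_only=True)
# Only the index itself was parsed: rl.sitemapindex is True and no component
# sitemaps were fetched. Calling read() without index_only would also fetch
# each component sitemap and populate rl.resources.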
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read_component_sitemap( self, sitemapindex_uri, sitemap_uri, sitemap, sitemapindex_is_file):
"""Read a component sitemap of a Resource List with index. Each component must be a sitemap with the """ |
if (sitemapindex_is_file):
if (not self.is_file_uri(sitemap_uri)):
# Attempt to map URI to local file
remote_uri = sitemap_uri
sitemap_uri = self.mapper.src_to_dst(remote_uri)
self.logger.info(
"Mapped %s to local file %s" %
(remote_uri, sitemap_uri))
else:
# The individual sitemaps should be at a URL (scheme/server/path)
# that the sitemapindex URL can speak authoritatively about
if (self.check_url_authority and
not UrlAuthority(sitemapindex_uri).has_authority_over(sitemap_uri)):
raise ListBaseIndexError(
"The sitemapindex (%s) refers to sitemap at a location it does not have authority over (%s)" %
(sitemapindex_uri, sitemap_uri))
try:
fh = URLopener().open(sitemap_uri)
self.num_files += 1
except IOError as e:
raise ListBaseIndexError(
"Failed to load sitemap from %s listed in sitemap index %s (%s)" %
(sitemap_uri, sitemapindex_uri, str(e)))
# Get the Content-Length if we can (works fine for local files)
try:
self.content_length = int(fh.info()['Content-Length'])
self.bytes_read += self.content_length
except KeyError:
# If we don't get a length then c'est la vie
pass
self.logger.info(
"Reading sitemap from %s (%d bytes)" %
(sitemap_uri, self.content_length))
component = sitemap.parse_xml(fh=fh, sitemapindex=False)
# Copy resources into self, check any metadata
for r in component:
self.resources.add(r) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def requires_multifile(self):
"""Return False or the number of component sitemaps required. In the case that no len() is available for self.resources then then self.count must be set beforehand to avoid an exception. """ |
if (self.max_sitemap_entries is None or
len(self) <= self.max_sitemap_entries):
return(False)
return(int(math.ceil(len(self) / float(self.max_sitemap_entries)))) |
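A worked example of the ceiling computation, with illustrative numbers.

# With max_sitemap_entries = 50000 and 120001 resources:
import math
int(math.ceil(120001 / float(50000)))   # ceil(2.40002) -> 3 component sitemaps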
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def as_xml_index(self, basename="/tmp/sitemap.xml"):
"""Return a string of the index for a large list that is split. All we need to do is determine the number of component sitemaps will be is and generate their URIs based on a pattern. Q - should there be a flag to select generation of each component sitemap in order to calculate the md5sum? Q - what timestamp should be used? """ |
num_parts = self.requires_multifile()
if (not num_parts):
raise ListBaseIndexError(
"Request for sitemapindex for list with only %d entries when max_sitemap_entries is set to %s" %
(len(self), str(
self.max_sitemap_entries)))
index = ListBase()
index.sitemapindex = True
index.capability_name = self.capability_name
index.default_capability()
for n in range(num_parts):
r = Resource(uri=self.part_name(basename, n))
index.add(r)
return(index.as_xml()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def as_xml_part(self, basename="/tmp/sitemap.xml", part_number=0):
"""Return a string of component sitemap number part_number. Used in the case of a large list that is split into component sitemaps. basename is used to create "index" links to the sitemapindex Q - what timestamp should be used? """ |
if (not self.requires_multifile()):
raise ListBaseIndexError(
"Request for component sitemap for list with only %d entries when max_sitemap_entries is set to %s" %
(len(self), str(
self.max_sitemap_entries)))
start = part_number * self.max_sitemap_entries
if (start > len(self)):
raise ListBaseIndexError(
"Request for component sitemap with part_number too high, would start at entry %d yet the list has only %d entries" %
(start, len(self)))
stop = start + self.max_sitemap_entries
if (stop > len(self)):
stop = len(self)
part = ListBase(itertools.islice(self.resources, start, stop))
part.capability_name = self.capability_name
part.default_capability()
part.index = basename
s = self.new_sitemap()
return(s.resources_as_xml(part)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def write(self, basename='/tmp/sitemap.xml'):
"""Write one or a set of sitemap files to disk. resources is a ResourceContainer that may be an ResourceList or a ChangeList. This may be a generator so data is read as needed and length is determined at the end. basename is used as the name of the single sitemap file or the sitemapindex for a set of sitemap files. Uses self.max_sitemap_entries to determine whether the resource_list can be written as one sitemap. If there are more entries and self.allow_multifile is set True then a set of sitemap files, with an sitemapindex, will be written. """ |
# Access resources through iterator only
resources_iter = iter(self.resources)
(chunk, nxt) = self.get_resources_chunk(resources_iter)
s = self.new_sitemap()
if (nxt is not None):
# Have more than self.max_sitemap_entries => sitemapindex
if (not self.allow_multifile):
raise ListBaseIndexError(
"Too many entries for a single sitemap but multifile disabled")
# Work out URI of sitemapindex so that we can link up to
# it from the individual sitemap files
try:
index_uri = self.mapper.dst_to_src(basename)
except MapperError as e:
raise ListBaseIndexError(
"Cannot map sitemapindex filename to URI (%s)" %
str(e))
# Use iterator over all resources and count off sets of
# max_sitemap_entries to go into each sitemap, store the
# names of the sitemaps as we go. Copy md from self into
# the index and use this for all chunks also
index = ListBase(md=self.md.copy(), ln=list(self.ln))
index.capability_name = self.capability_name
index.default_capability()
while (len(chunk) > 0):
file = self.part_name(basename, len(index))
# Check that we can map the filename of this sitemap into
# URI space for the sitemapindex
try:
uri = self.mapper.dst_to_src(file)
except MapperError as e:
raise ListBaseIndexError(
"Cannot map sitemap filename to URI (%s)" % str(e))
self.logger.info("Writing sitemap %s..." % (file))
f = open(file, 'w')
chunk.index = index_uri
chunk.md = index.md
s.resources_as_xml(chunk, fh=f)
f.close()
# Record information about this sitemap for index
r = Resource(uri=uri,
timestamp=os.stat(file).st_mtime,
md5=Hashes(['md5'], file).md5)
index.add(r)
# Get next chunk
(chunk, nxt) = self.get_resources_chunk(resources_iter, nxt)
self.logger.info("Wrote %d sitemaps" % (len(index)))
f = open(basename, 'w')
self.logger.info("Writing sitemapindex %s..." % (basename))
s.resources_as_xml(index, sitemapindex=True, fh=f)
f.close()
self.logger.info("Wrote sitemapindex %s" % (basename))
elif self.sitemapindex:
f = open(basename, 'w')
self.logger.info("Writing sitemapindex %s..." % (basename))
s.resources_as_xml(chunk, sitemapindex=True, fh=f)
f.close()
self.logger.info("Wrote sitemapindex %s" % (basename))
else:
f = open(basename, 'w')
self.logger.info("Writing sitemap %s..." % (basename))
s.resources_as_xml(chunk, fh=f)
f.close()
self.logger.info("Wrote sitemap %s" % (basename)) |
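A hedged sketch of driving write() for a large list; Mapper is assumed to be the package's URI-to-filesystem mapper and the mapping string and paths are made up.

# Sketch only.
rl = ResourceList()
rl.mapper = Mapper(['http://example.com/=/var/www/'])
rl.max_sitemap_entries = 50000
# ... add resources to rl ...
rl.write(basename='/var/www/resourcelist.xml')
# With more than 50000 resources, resourcelist.xml becomes a sitemapindex and
# numbered component sitemaps are written alongside it.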
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def index_as_xml(self):
"""XML serialization of this list taken to be sitemapindex entries.""" |
self.default_capability()
s = self.new_sitemap()
return s.resources_as_xml(self, sitemapindex=True) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_resources_chunk(self, resource_iter, first=None):
"""Return next chunk of resources from resource_iter, and next item. If first parameter is specified then this will be prepended to the list. The chunk will contain self.max_sitemap_entries if the iterator returns that many. next will have the value of the next value from the iterator, providing indication of whether more is available. Use this as first when asking for the following chunk. """ |
chunk = ListBase(md=self.md.copy(), ln=list(self.ln))
chunk.capability_name = self.capability_name
chunk.default_capability()
if (first is not None):
chunk.add(first)
for r in resource_iter:
chunk.add(r)
if (len(chunk) >= self.max_sitemap_entries):
break
# Get next to see whether there are more resources
try:
nxt = next(resource_iter)
except StopIteration:
nxt = None
return(chunk, nxt) |
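A minimal sketch of the chunk/next-item handoff as consumed by write() above; rl is a populated list object and handle_chunk is a hypothetical per-chunk callback.

# Sketch only.
resources_iter = iter(rl.resources)
(chunk, nxt) = rl.get_resources_chunk(resources_iter)
while len(chunk) > 0:
    handle_chunk(chunk)
    (chunk, nxt) = rl.get_resources_chunk(resources_iter, nxt)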
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _basis_spline_factory(coef, degree, knots, der, ext):
"""Return a B-Spline given some coefficients.""" |
return functools.partial(interpolate.splev, tck=(knots, coef, degree), der=der, ext=ext) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def derivatives_factory(cls, coef, degree, knots, ext, **kwargs):
""" Given some coefficients, return a the derivative of a B-spline. """ |
return cls._basis_spline_factory(coef, degree, knots, 1, ext) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def functions_factory(cls, coef, degree, knots, ext, **kwargs):
""" Given some coefficients, return a B-spline. """ |
return cls._basis_spline_factory(coef, degree, knots, 0, ext) |
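A hedged sketch of the partial-application pattern behind these factories: fit a cubic B-spline with scipy's splrep, then build callables for the function and its first derivative from the same (knots, coefficients, degree) triple. The sample data is made up.

# Sketch only.
import functools
import numpy as np
from scipy import interpolate

x = np.linspace(0, 2 * np.pi, 50)
knots, coef, degree = interpolate.splrep(x, np.sin(x), k=3)
spline = functools.partial(interpolate.splev, tck=(knots, coef, degree), der=0, ext=2)
deriv = functools.partial(interpolate.splev, tck=(knots, coef, degree), der=1, ext=2)
spline(np.pi / 2)   # ~= 1.0
deriv(np.pi / 2)    # ~= 0.0, i.e. cos(pi/2)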
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_master(self, url):
"""Set the master url that this object works with.""" |
m = urlparse(url)
self.master_scheme = m.scheme
self.master_netloc = m.netloc
self.master_path = os.path.dirname(m.path) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def has_authority_over(self, url):
"""Return True of the current master has authority over url. In strict mode checks scheme, server and path. Otherwise checks just that the server names match or the query url is a sub-domain of the master. """ |
s = urlparse(url)
if (s.scheme != self.master_scheme):
return(False)
if (s.netloc != self.master_netloc):
if (not s.netloc.endswith('.' + self.master_netloc)):
return(False)
# Maybe should allow parallel for 3+ components, eg. a.example.org,
# b.example.org
path = os.path.dirname(s.path)
if (self.strict and
path != self.master_path and
not path.startswith(self.master_path)):
return(False)
return(True) |
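Illustrative behaviour of the authority check, assuming the surrounding class is UrlAuthority and that its constructor accepts the master URL (otherwise call set_master() first); shown in non-strict mode.

# Non-strict mode: scheme must match, server must match or be a sub-domain.
master = UrlAuthority('http://example.org/dir/sitemap.xml')
master.has_authority_over('http://example.org/dir/data/res1')    # True
master.has_authority_over('http://sub.example.org/other/res2')   # True (sub-domain)
master.has_authority_over('https://example.org/dir/res3')        # False (scheme differs)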
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def admin_permission_factory(admin_view):
"""Default factory for creating a permission for an admin. It tries to load a :class:`invenio_access.permissions.Permission` instance if `invenio_access` is installed. Otherwise, it loads a :class:`flask_principal.Permission` instance. :param admin_view: Instance of administration view which is currently being protected. :returns: Permission instance. """ |
try:
pkg_resources.get_distribution('invenio-access')
from invenio_access import Permission
except pkg_resources.DistributionNotFound:
from flask_principal import Permission
return Permission(action_admin_access) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_conns(cred, providers):
"""Collect node data asynchronously using gevent lib.""" |
cld_svc_map = {"aws": conn_aws,
"azure": conn_az,
"gcp": conn_gcp,
"alicloud": conn_ali}
sys.stdout.write("\rEstablishing Connections: ")
sys.stdout.flush()
busy_obj = busy_disp_on()
conn_fn = [[cld_svc_map[x.rstrip('1234567890')], cred[x], x]
for x in providers]
cgroup = Group()
conn_res = []
conn_res = cgroup.map(get_conn, conn_fn)
cgroup.join()
conn_objs = {}
for item in conn_res:
conn_objs.update(item)
busy_disp_off(dobj=busy_obj)
sys.stdout.write("\r \r")
sys.stdout.write("\033[?25h") # cursor back on
sys.stdout.flush()
return conn_objs |
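A minimal standalone sketch of the gevent fan-out pattern used here: map a worker over argument bundles in a Group, join, then merge the per-worker dicts. The worker and its inputs are made up.

# Sketch only; fetch() stands in for the provider-specific conn_* helpers.
import gevent
from gevent.pool import Group

def fetch(bundle):
    provider, region = bundle
    gevent.sleep(0.1)            # stand-in for a network call
    return {provider: region}

group = Group()
results = group.map(fetch, [("aws", "us-east-1"), ("gcp", "us-central1")])
group.join()
merged = {}
for item in results:
    merged.update(item)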
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_data(conn_objs, providers):
"""Refresh node data using existing connection-objects.""" |
cld_svc_map = {"aws": nodes_aws,
"azure": nodes_az,
"gcp": nodes_gcp,
"alicloud": nodes_ali}
sys.stdout.write("\rCollecting Info: ")
sys.stdout.flush()
busy_obj = busy_disp_on()
collec_fn = [[cld_svc_map[x.rstrip('1234567890')], conn_objs[x]]
for x in providers]
ngroup = Group()
node_list = []
node_list = ngroup.map(get_nodes, collec_fn)
ngroup.join()
busy_disp_off(dobj=busy_obj)
sys.stdout.write("\r \r")
sys.stdout.write("\033[?25h") # cursor back on
sys.stdout.flush()
return node_list |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def busy_disp_off(dobj):
"""Turn OFF busy_display to indicate completion.""" |
dobj.kill(block=False)
sys.stdout.write("\033[D \033[D")
sys.stdout.flush() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def busy_display():
"""Display animation to show activity.""" |
sys.stdout.write("\033[?25l") # cursor off
sys.stdout.flush()
for x in range(1800):
symb = ['\\', '|', '/', '-']
sys.stdout.write("\033[D{}".format(symb[x % 4]))
sys.stdout.flush()
gevent.sleep(0.1) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def conn_aws(cred, crid):
"""Establish connection to AWS service.""" |
driver = get_driver(Provider.EC2)
try:
aws_obj = driver(cred['aws_access_key_id'],
cred['aws_secret_access_key'],
region=cred['aws_default_region'])
except SSLError as e:
abort_err("\r SSL Error with AWS: {}".format(e))
except InvalidCredsError as e:
abort_err("\r Error with AWS Credentials: {}".format(e))
return {crid: aws_obj} |
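A hedged example of the credential dict this helper expects; all values are placeholders.

# Sketch only; never commit real keys.
cred = {'aws_access_key_id': 'AKIA-PLACEHOLDER',
        'aws_secret_access_key': 'SECRET-PLACEHOLDER',
        'aws_default_region': 'us-east-1'}
conns = conn_aws(cred, 'aws')    # -> {'aws': <libcloud EC2 driver object>}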
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def nodes_aws(c_obj):
"""Get node objects from AWS.""" |
aws_nodes = []
try:
aws_nodes = c_obj.list_nodes()
except BaseHTTPError as e:
abort_err("\r HTTP Error with AWS: {}".format(e))
aws_nodes = adj_nodes_aws(aws_nodes)
return aws_nodes |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def adj_nodes_aws(aws_nodes):
"""Adjust details specific to AWS.""" |
for node in aws_nodes:
node.cloud = "aws"
node.cloud_disp = "AWS"
node.private_ips = ip_to_str(node.private_ips)
node.public_ips = ip_to_str(node.public_ips)
node.zone = node.extra['availability']
node.size = node.extra['instance_type']
node.type = node.extra['instance_lifecycle']
return aws_nodes |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def conn_az(cred, crid):
"""Establish connection to Azure service.""" |
driver = get_driver(Provider.AZURE_ARM)
try:
az_obj = driver(tenant_id=cred['az_tenant_id'],
subscription_id=cred['az_sub_id'],
key=cred['az_app_id'],
secret=cred['az_app_sec'])
except SSLError as e:
abort_err("\r SSL Error with Azure: {}".format(e))
except InvalidCredsError as e:
abort_err("\r Error with Azure Credentials: {}".format(e))
return {crid: az_obj} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def nodes_az(c_obj):
"""Get node objects from Azure.""" |
az_nodes = []
try:
az_nodes = c_obj.list_nodes()
except BaseHTTPError as e:
abort_err("\r HTTP Error with Azure: {}".format(e))
az_nodes = adj_nodes_az(az_nodes)
return az_nodes |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def adj_nodes_az(az_nodes):
"""Adjust details specific to Azure.""" |
for node in az_nodes:
node.cloud = "azure"
node.cloud_disp = "Azure"
node.private_ips = ip_to_str(node.private_ips)
node.public_ips = ip_to_str(node.public_ips)
node.zone = node.extra['location']
node.size = node.extra['properties']['hardwareProfile']['vmSize']
group_raw = node.id
unnsc, group_end = group_raw.split("resourceGroups/", 1)
group, unnsc = group_end.split("/", 1)
node.group = group
return az_nodes |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def conn_gcp(cred, crid):
"""Establish connection to GCP.""" |
gcp_auth_type = cred.get('gcp_auth_type', "S")
if gcp_auth_type == "A": # Application Auth
gcp_crd_ia = CONFIG_DIR + ".gcp_libcloud_a_auth." + cred['gcp_proj_id']
gcp_crd = {'user_id': cred['gcp_client_id'],
'key': cred['gcp_client_sec'],
'project': cred['gcp_proj_id'],
'auth_type': "IA",
'credential_file': gcp_crd_ia}
else: # Service Account Auth
gcp_pem = CONFIG_DIR + cred['gcp_pem_file']
gcp_crd_sa = CONFIG_DIR + ".gcp_libcloud_s_auth." + cred['gcp_proj_id']
gcp_crd = {'user_id': cred['gcp_svc_acct_email'],
'key': gcp_pem,
'project': cred['gcp_proj_id'],
'credential_file': gcp_crd_sa}
driver = get_driver(Provider.GCE)
try:
gcp_obj = driver(**gcp_crd)
except SSLError as e:
abort_err("\r SSL Error with GCP: {}".format(e))
except (InvalidCredsError, ValueError) as e:
abort_err("\r Error with GCP Credentials: {}".format(e))
return {crid: gcp_obj} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def nodes_gcp(c_obj):
"""Get node objects from GCP.""" |
gcp_nodes = []
try:
gcp_nodes = c_obj.list_nodes(ex_use_disk_cache=True)
except BaseHTTPError as e:
abort_err("\r HTTP Error with GCP: {}".format(e))
gcp_nodes = adj_nodes_gcp(gcp_nodes)
return gcp_nodes |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def adj_nodes_gcp(gcp_nodes):
"""Adjust details specific to GCP.""" |
for node in gcp_nodes:
node.cloud = "gcp"
node.cloud_disp = "GCP"
node.private_ips = ip_to_str(node.private_ips)
node.public_ips = ip_to_str(node.public_ips)
node.zone = node.extra['zone'].name
return gcp_nodes |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def conn_ali(cred, crid):
"""Establish connection to AliCloud service.""" |
driver = get_driver(Provider.ALIYUN_ECS)
try:
ali_obj = driver(cred['ali_access_key_id'],
cred['ali_access_key_secret'],
region=cred['ali_region'])
except SSLError as e:
abort_err("\r SSL Error with AliCloud: {}".format(e))
except InvalidCredsError as e:
abort_err("\r Error with AliCloud Credentials: {}".format(e))
return {crid: ali_obj} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def nodes_ali(c_obj):
"""Get node objects from AliCloud.""" |
ali_nodes = []
try:
ali_nodes = c_obj.list_nodes()
except BaseHTTPError as e:
abort_err("\r HTTP Error with AliCloud: {}".format(e))
ali_nodes = adj_nodes_ali(ali_nodes)
return ali_nodes |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def adj_nodes_ali(ali_nodes):
"""Adjust details specific to AliCloud.""" |
for node in ali_nodes:
node.cloud = "alicloud"
node.cloud_disp = "AliCloud"
node.private_ips = ip_to_str(node.extra['vpc_attributes']['private_ip_address'])
node.public_ips = ip_to_str(node.public_ips)
node.zone = node.extra['zone_id']
node.size = node.extra['instance_type']
if node.size.startswith('ecs.'):
node.size = node.size[len('ecs.'):]
return ali_nodes |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def extendMarkdown(self, md, md_globals):
""" Add an instance of FigcaptionProcessor to BlockParser. """ |
# def_list = 'def_list' in md.registeredExtensions
md.parser.blockprocessors.add(
'figcaption', FigcaptionProcessor(md.parser), '<ulist') |
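A hedged usage sketch, assuming the class containing this method is an Extension subclass (here called FigcaptionExtension) under the Python-Markdown 2.x API that the extendMarkdown(md, md_globals) signature belongs to.

# Sketch only; FigcaptionExtension is the assumed wrapper class name.
import markdown
md = markdown.Markdown(extensions=[FigcaptionExtension()])
source_text = "Some markdown containing a figure block."   # placeholder input
html = md.convert(source_text)   # figcaption blocks are handled before lists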
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse(self, uri=None, fh=None, str_data=None, **kwargs):
"""Parse a single XML document for this list. Accepts either a uri (uri or default if parameter not specified), or a filehandle (fh) or a string (str_data). Note that this method does not handle the case of a sitemapindex+sitemaps. LEGACY SUPPORT - the parameter str may be used in place of str_data but is deprecated and will be removed in a later version. """ |
if (uri is not None):
try:
fh = URLopener().open(uri)
except IOError as e:
raise Exception(
"Failed to load sitemap/sitemapindex from %s (%s)" %
(uri, str(e)))
elif (str_data is not None):
fh = io.StringIO(str_data)
elif ('str' in kwargs):
# Legacy support for str argument, see
# https://github.com/resync/resync/pull/21
# One test for this in tests/test_list_base.py
self.logger.warn(
"Legacy parse(str=...), use parse(str_data=...) instead")
fh = io.StringIO(kwargs['str'])
if (fh is None):
raise Exception("Nothing to parse")
s = self.new_sitemap()
s.parse_xml(
fh=fh,
resources=self,
capability=self.capability_name,
sitemapindex=False)
self.parsed_index = s.parsed_index |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def write(self, basename="/tmp/resynclist.xml"):
"""Write a single sitemap or sitemapindex XML document. Must be overridden to support multi-file lists. """ |
self.default_capability()
fh = open(basename, 'w')
s = self.new_sitemap()
s.resources_as_xml(self, fh=fh, sitemapindex=self.sitemapindex)
fh.close() |