<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def detach(self, force=False):
""" Detach this EBS volume from an EC2 instance. :type force: bool :param force: Forces detachment if the previous detachment attempt did not occur cleanly. This option can lead to data loss or a corrupted file system. Use this option only as a last resort to detach a volume from a failed instance. The instance will not have an opportunity to flush file system caches nor file system meta data. If you use this option, you must perform file system check and repair procedures. :rtype: bool :return: True if successful """ |
instance_id = None
if self.attach_data:
instance_id = self.attach_data.instance_id
device = None
if self.attach_data:
device = self.attach_data.device
return self.connection.detach_volume(self.id, instance_id, device, force) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def attachment_state(self):
""" Get the attachment state. """ |
state = None
if self.attach_data:
state = self.attach_data.status
return state |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def snapshots(self, owner=None, restorable_by=None):
""" Get all snapshots related to this volume. Note that this requires that all available snapshots for the account be retrieved from EC2 first and then the list is filtered client-side to contain only those for this volume. :type owner: str :param owner: If present, only the snapshots owned by the specified user will be returned. Valid values are: self | amazon | AWS Account ID :type restorable_by: str :param restorable_by: If present, only the snapshots that are restorable by the specified account id will be returned. :rtype: list of L{boto.ec2.snapshot.Snapshot} :return: The requested Snapshot objects """ |
rs = self.connection.get_all_snapshots(owner=owner,
restorable_by=restorable_by)
mine = []
for snap in rs:
if snap.volume_id == self.id:
mine.append(snap)
return mine |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def encrypt(text):
'Encrypt a string using an encryption key based on the django SECRET_KEY'
crypt = EncryptionAlgorithm.new(_get_encryption_key())
return crypt.encrypt(to_blocksize(text)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def decrypt(text):
'Decrypt a string using an encryption key based on the django SECRET_KEY'
crypt = EncryptionAlgorithm.new(_get_encryption_key())
return crypt.decrypt(text).rstrip(ENCRYPT_PAD_CHARACTER) |
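The `encrypt`/`decrypt` pair depends on two helpers not shown here: `to_blocksize`, which pads the plaintext up to the cipher's block size, and the `rstrip(ENCRYPT_PAD_CHARACTER)` call, which strips that padding off after decryption. A minimal sketch of that padding contract (the 16-byte block size and NUL pad character are assumptions; the real values depend on the configured `EncryptionAlgorithm`):

```python
# Assumed values; the real ones come from the cipher configuration.
BLOCK_SIZE = 16
ENCRYPT_PAD_CHARACTER = '\0'

def to_blocksize(text, block_size=BLOCK_SIZE, pad=ENCRYPT_PAD_CHARACTER):
    # Pad text up to the next multiple of block_size, as block ciphers
    # require input lengths that are an exact multiple of the block size.
    remainder = len(text) % block_size
    if remainder:
        text += pad * (block_size - remainder)
    return text

padded = to_blocksize('secret')
```

Note that rstrip-based unpadding silently truncates plaintexts that legitimately end in the pad character, so this scheme is only safe for data known not to end in it.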
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def item_object_hook(dct):
""" A custom object hook for use when decoding JSON item bodys. This hook will transform Amazon DynamoDB JSON responses to something that maps directly to native Python types. """ |
if len(dct.keys()) > 1:
return dct
if 'S' in dct:
return dct['S']
if 'N' in dct:
return convert_num(dct['N'])
if 'SS' in dct:
return set(dct['SS'])
if 'NS' in dct:
return set(map(convert_num, dct['NS']))
return dct |
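`json.loads` applies its `object_hook` bottom-up, so each single-key type wrapper such as `{"S": ...}` is collapsed before the enclosing item dict (which has more than one key) is passed through unchanged. A self-contained sketch, with a simplified `convert_num` standing in for boto's:

```python
import json

def convert_num(s):
    # Simplified stand-in: DynamoDB numbers arrive as strings;
    # coerce to int when possible, else float.
    return float(s) if '.' in s else int(s)

def item_object_hook(dct):
    # Collapse single-key DynamoDB type wrappers into native Python values.
    if len(dct) > 1:
        return dct
    if 'S' in dct:
        return dct['S']
    if 'N' in dct:
        return convert_num(dct['N'])
    if 'SS' in dct:
        return set(dct['SS'])
    if 'NS' in dct:
        return set(map(convert_num, dct['NS']))
    return dct

raw = '{"name": {"S": "widget"}, "price": {"N": "9.5"}, "tags": {"SS": ["a", "b"]}}'
item = json.loads(raw, object_hook=item_object_hook)
# item maps attribute names directly to str, float, and set values.
```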
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dynamize_attribute_updates(self, pending_updates):
""" Convert a set of pending item updates into the structure required by Layer1. """ |
d = {}
for attr_name in pending_updates:
action, value = pending_updates[attr_name]
if value is None:
# DELETE without an attribute value
d[attr_name] = {"Action": action}
else:
d[attr_name] = {"Action": action,
"Value": self.dynamize_value(value)}
return d |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dynamize_range_key_condition(self, range_key_condition):
""" Convert a layer2 range_key_condition parameter into the structure required by Layer1. """ |
d = None
if range_key_condition:
d = {}
for range_value in range_key_condition:
range_condition = range_key_condition[range_value]
if range_condition == 'BETWEEN':
if isinstance(range_value, tuple):
avl = [self.dynamize_value(v) for v in range_value]
else:
msg = 'BETWEEN condition requires a tuple value'
raise TypeError(msg)
elif isinstance(range_value, tuple):
msg = 'Tuple can only be supplied with BETWEEN condition'
raise TypeError(msg)
else:
avl = [self.dynamize_value(range_value)]
d = {'AttributeValueList': avl,
'ComparisonOperator': range_condition}
return d |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dynamize_scan_filter(self, scan_filter):
""" Convert a layer2 scan_filter parameter into the structure required by Layer1. """ |
d = None
if scan_filter:
d = {}
for attr_name, op, value in scan_filter:
if op == 'BETWEEN':
if isinstance(value, tuple):
avl = [self.dynamize_value(v) for v in value]
else:
msg = 'BETWEEN condition requires a tuple value'
raise TypeError(msg)
elif op == 'NULL' or op == 'NOT_NULL':
avl = None
elif isinstance(value, tuple):
msg = 'Tuple can only be supplied with BETWEEN condition'
raise TypeError(msg)
else:
avl = [self.dynamize_value(value)]
dd = {'ComparisonOperator': op}
if avl:
dd['AttributeValueList'] = avl
d[attr_name] = dd
return d |
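The structure this produces per attribute is a `ComparisonOperator` plus, for operators that take values, an `AttributeValueList`. A runnable sketch of the same logic with a simplified string-only `dynamize_value` stand-in:

```python
def dynamize_value(val):
    # Simplified stand-in for self.dynamize_value: strings only.
    return {'S': val}

def dynamize_scan_filter(scan_filter):
    d = None
    if scan_filter:
        d = {}
        for attr_name, op, value in scan_filter:
            if op == 'BETWEEN':
                if not isinstance(value, tuple):
                    raise TypeError('BETWEEN condition requires a tuple value')
                avl = [dynamize_value(v) for v in value]
            elif op in ('NULL', 'NOT_NULL'):
                # Existence checks carry no attribute values.
                avl = None
            elif isinstance(value, tuple):
                raise TypeError('Tuple can only be supplied with BETWEEN condition')
            else:
                avl = [dynamize_value(value)]
            dd = {'ComparisonOperator': op}
            if avl:
                dd['AttributeValueList'] = avl
            d[attr_name] = dd
    return d

sf = dynamize_scan_filter([('status', 'EQ', 'active'),
                           ('deleted_at', 'NULL', None)])
```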
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dynamize_expected_value(self, expected_value):
""" Convert an expected_value parameter into the data structure required for Layer1. """ |
d = None
if expected_value:
d = {}
for attr_name in expected_value:
attr_value = expected_value[attr_name]
if attr_value is True:
attr_value = {'Exists': True}
elif attr_value is False:
attr_value = {'Exists': False}
else:
val = self.dynamize_value(expected_value[attr_name])
attr_value = {'Value': val}
d[attr_name] = attr_value
return d |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dynamize_last_evaluated_key(self, last_evaluated_key):
""" Convert a last_evaluated_key parameter into the data structure required for Layer1. """ |
d = None
if last_evaluated_key:
hash_key = last_evaluated_key['HashKeyElement']
d = {'HashKeyElement': self.dynamize_value(hash_key)}
if 'RangeKeyElement' in last_evaluated_key:
range_key = last_evaluated_key['RangeKeyElement']
d['RangeKeyElement'] = self.dynamize_value(range_key)
return d |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dynamize_request_items(self, batch_list):
""" Convert a request_items parameter into the data structure required for Layer1. """ |
d = None
if batch_list:
d = {}
for batch in batch_list:
batch_dict = {}
key_list = []
for key in batch.keys:
if isinstance(key, tuple):
hash_key, range_key = key
else:
hash_key = key
range_key = None
k = self.build_key_from_values(batch.table.schema,
hash_key, range_key)
key_list.append(k)
batch_dict['Keys'] = key_list
if batch.attributes_to_get:
batch_dict['AttributesToGet'] = batch.attributes_to_get
d[batch.table.name] = batch_dict
return d |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_dynamodb_type(self, val):
""" Take a scalar Python value and return a string representing the corresponding Amazon DynamoDB type. If the value passed in is not a supported type, raise a TypeError. """ |
dynamodb_type = None
if is_num(val):
dynamodb_type = 'N'
elif is_str(val):
dynamodb_type = 'S'
elif isinstance(val, (set, frozenset)):
if False not in map(is_num, val):
dynamodb_type = 'NS'
elif False not in map(is_str, val):
dynamodb_type = 'SS'
if dynamodb_type is None:
raise TypeError('Unsupported type "%s" for value "%s"' % (type(val), val))
return dynamodb_type |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dynamize_value(self, val):
""" Take a scalar Python value and return a dict consisting of the Amazon DynamoDB type specification and the value that needs to be sent to Amazon DynamoDB. If the type of the value is not supported, raise a TypeError """ |
def _str(val):
"""
DynamoDB stores booleans as numbers. True is 1, False is 0.
This function converts Python booleans into DynamoDB friendly
representation.
"""
if isinstance(val, bool):
return str(int(val))
return str(val)
dynamodb_type = self.get_dynamodb_type(val)
if dynamodb_type == 'N':
    val = {dynamodb_type: _str(val)}
elif dynamodb_type == 'S':
    val = {dynamodb_type: val}
elif dynamodb_type == 'NS':
    val = {dynamodb_type: [str(n) for n in val]}
elif dynamodb_type == 'SS':
    val = {dynamodb_type: list(val)}
return val |
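For scalars, the result is a one-key dict mapping the DynamoDB type code to a string representation of the value (booleans ride along as the numbers 1 and 0). A minimal re-implementation covering only the scalar cases, with the set types elided for brevity:

```python
def dynamize_value(val):
    # Scalar-only sketch of the dynamize_value logic above.
    if isinstance(val, bool):
        # Check bool before int: bool is a subclass of int in Python.
        return {'N': str(int(val))}
    if isinstance(val, (int, float)):
        return {'N': str(val)}
    if isinstance(val, str):
        return {'S': val}
    raise TypeError('Unsupported type "%s" for value "%s"' % (type(val), val))
```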
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def build_key_from_values(self, schema, hash_key, range_key=None):
""" Build a Key structure to be used for accessing items in Amazon DynamoDB. This method takes the supplied hash_key and optional range_key and validates them against the schema. If there is a mismatch, a TypeError is raised. Otherwise, a Python dict version of a Amazon DynamoDB Key data structure is returned. :type hash_key: int, float, str, or unicode :param hash_key: The hash key of the item you are looking for. The type of the hash key should match the type defined in the schema. :type range_key: int, float, str or unicode :param range_key: The range key of the item your are looking for. This should be supplied only if the schema requires a range key. The type of the range key should match the type defined in the schema. """ |
dynamodb_key = {}
dynamodb_value = self.dynamize_value(hash_key)
if list(dynamodb_value)[0] != schema.hash_key_type:
msg = 'Hashkey must be of type: %s' % schema.hash_key_type
raise TypeError(msg)
dynamodb_key['HashKeyElement'] = dynamodb_value
if range_key is not None:
dynamodb_value = self.dynamize_value(range_key)
if list(dynamodb_value)[0] != schema.range_key_type:
msg = 'RangeKey must be of type: %s' % schema.range_key_type
raise TypeError(msg)
dynamodb_key['RangeKeyElement'] = dynamodb_value
return dynamodb_key |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_table(self, name):
""" Retrieve the Table object for an existing table. :type name: str :param name: The name of the desired table. :rtype: :class:`boto.dynamodb.table.Table` :return: A Table object representing the table. """ |
response = self.layer1.describe_table(name)
return Table(self, response) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete_table(self, table):
""" Delete this table and all items in it. After calling this the Table objects status attribute will be set to 'DELETING'. :type table: :class:`boto.dynamodb.table.Table` :param table: The Table object that is being deleted. """ |
response = self.layer1.delete_table(table.name)
table.update_from_response(response) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_schema(self, hash_key_name, hash_key_proto_value, range_key_name=None, range_key_proto_value=None):
""" Create a Schema object used when creating a Table. :type hash_key_name: str :param hash_key_name: The name of the HashKey for the schema. :type hash_key_proto_value: int|long|float|str|unicode :param hash_key_proto_value: A sample or prototype of the type of value you want to use for the HashKey. :type range_key_name: str :param range_key_name: The name of the RangeKey for the schema. This parameter is optional. :type range_key_proto_value: int|long|float|str|unicode :param range_key_proto_value: A sample or prototype of the type of value you want to use for the RangeKey. This parameter is optional. """ |
schema = {}
hash_key = {}
hash_key['AttributeName'] = hash_key_name
hash_key_type = self.get_dynamodb_type(hash_key_proto_value)
hash_key['AttributeType'] = hash_key_type
schema['HashKeyElement'] = hash_key
if range_key_name and range_key_proto_value is not None:
range_key = {}
range_key['AttributeName'] = range_key_name
range_key_type = self.get_dynamodb_type(range_key_proto_value)
range_key['AttributeType'] = range_key_type
schema['RangeKeyElement'] = range_key
return Schema(schema) |
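The resulting schema is a plain dict keyed by `HashKeyElement` and, optionally, `RangeKeyElement`, each holding an attribute name and type code. A sketch that builds the same mapping directly from type codes rather than prototype values (`create_schema_dict` and the `forum`/`posted_at` names are illustrative, not part of the API):

```python
def create_schema_dict(hash_key_name, hash_key_type,
                       range_key_name=None, range_key_type=None):
    # Build the raw schema mapping; the real method derives the type codes
    # from prototype values via get_dynamodb_type and wraps the result
    # in a Schema object.
    schema = {'HashKeyElement': {'AttributeName': hash_key_name,
                                 'AttributeType': hash_key_type}}
    if range_key_name and range_key_type is not None:
        schema['RangeKeyElement'] = {'AttributeName': range_key_name,
                                     'AttributeType': range_key_type}
    return schema

schema = create_schema_dict('forum', 'S', 'posted_at', 'N')
```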
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def update_item(self, item, expected_value=None, return_values=None):
""" Commit pending item updates to Amazon DynamoDB. :type item: :class:`boto.dynamodb.item.Item` :param item: The Item to update in Amazon DynamoDB. It is expected that you would have called the add_attribute, put_attribute and/or delete_attribute methods on this Item prior to calling this method. Those queued changes are what will be updated. :type expected_value: dict :param expected_value: A dictionary of name/value pairs that you expect. This dictionary should have name/value pairs where the name is the name of the attribute and the value is either the value you are expecting or False if you expect the attribute not to exist. :type return_values: str :param return_values: Controls the return of attribute name/value pairs before they were updated. Possible values are: None, 'ALL_OLD', 'UPDATED_OLD', 'ALL_NEW' or 'UPDATED_NEW'. If 'ALL_OLD' is specified and the item is overwritten, the content of the old item is returned. If 'ALL_NEW' is specified, then all the attributes of the new version of the item are returned. If 'UPDATED_NEW' is specified, the new versions of only the updated attributes are returned. """ |
expected_value = self.dynamize_expected_value(expected_value)
key = self.build_key_from_values(item.table.schema,
item.hash_key, item.range_key)
attr_updates = self.dynamize_attribute_updates(item._updates)
response = self.layer1.update_item(item.table.name, key,
attr_updates,
expected_value, return_values,
object_hook=item_object_hook)
item._updates.clear()
if 'ConsumedCapacityUnits' in response:
item.consumed_units = response['ConsumedCapacityUnits']
return response |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete_item(self, item, expected_value=None, return_values=None):
""" Delete the item from Amazon DynamoDB. :type item: :class:`boto.dynamodb.item.Item` :param item: The Item to delete from Amazon DynamoDB. :type expected_value: dict :param expected_value: A dictionary of name/value pairs that you expect. This dictionary should have name/value pairs where the name is the name of the attribute and the value is either the value you are expecting or False if you expect the attribute not to exist. :type return_values: str :param return_values: Controls the return of attribute name-value pairs before then were changed. Possible values are: None or 'ALL_OLD'. If 'ALL_OLD' is specified and the item is overwritten, the content of the old item is returned. """ |
expected_value = self.dynamize_expected_value(expected_value)
key = self.build_key_from_values(item.table.schema,
item.hash_key, item.range_key)
return self.layer1.delete_item(item.table.name, key,
expected=expected_value,
return_values=return_values,
object_hook=item_object_hook) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def scan(self, table, scan_filter=None, attributes_to_get=None, request_limit=None, max_results=None, count=False, exclusive_start_key=None, item_class=Item):
""" Perform a scan of DynamoDB. :type table: :class:`boto.dynamodb.table.Table` :param table: The Table object that is being scanned. :type scan_filter: A list of tuples :param scan_filter: A list of tuples where each tuple consists of an attribute name, a comparison operator, and either a scalar or tuple consisting of the values to compare the attribute to. Valid comparison operators are shown below along with the expected number of values that should be supplied. * EQ - equal (1) * NE - not equal (1) * LE - less than or equal (1) * LT - less than (1) * GE - greater than or equal (1) * GT - greater than (1) * NOT_NULL - attribute exists (0, use None) * NULL - attribute does not exist (0, use None) * CONTAINS - substring or value in list (1) * NOT_CONTAINS - absence of substring or value in list (1) * BEGINS_WITH - substring prefix (1) * IN - exact match in list (N) * BETWEEN - >= first value, <= second value (2) :type attributes_to_get: list :param attributes_to_get: A list of attribute names. If supplied, only the specified attribute names will be returned. Otherwise, all attributes will be returned. :type request_limit: int :param request_limit: The maximum number of items to retrieve from Amazon DynamoDB on each request. You may want to set a specific request_limit based on the provisioned throughput of your table. The default behavior is to retrieve as many results as possible per request. :type max_results: int :param max_results: The maximum number of results that will be retrieved from Amazon DynamoDB in total. For example, if you only wanted to see the first 100 results from the query, regardless of how many were actually available, you could set max_results to 100 and the generator returned from the query method will only yeild 100 results max. :type count: bool :param count: If True, Amazon DynamoDB returns a total number of items for the Scan operation, even if the operation has no matching items for the assigned filter. 
:type exclusive_start_key: list or tuple :param exclusive_start_key: Primary key of the item from which to continue an earlier query. This would be provided as the LastEvaluatedKey in that query. :type item_class: Class :param item_class: Allows you to override the class used to generate the items. This should be a subclass of :class:`boto.dynamodb.item.Item` :rtype: generator """ |
sf = self.dynamize_scan_filter(scan_filter)
response = True
n = 0
while response:
if response is True:
pass
elif 'LastEvaluatedKey' in response:
exclusive_start_key = response['LastEvaluatedKey']
else:
break
response = self.layer1.scan(table.name, sf,
attributes_to_get, request_limit,
count, exclusive_start_key,
object_hook=item_object_hook)
if response:
for item in response['Items']:
if max_results and n == max_results:
break
yield item_class(table, attrs=item)
n += 1 |
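The loop above implements LastEvaluatedKey pagination: keep re-issuing the scan with the previous response's `LastEvaluatedKey` as the new `exclusive_start_key` until none is returned. A generic sketch of that contract against a fake two-page backend; unlike the loop above, this version stops fetching further pages once `max_results` is reached rather than only breaking out of the inner loop:

```python
def scan_all(scan_page, max_results=None):
    # scan_page(start_key) is any callable returning a dict with 'Items'
    # and, when more pages remain, a 'LastEvaluatedKey' to resume from.
    start_key = None
    n = 0
    while True:
        page = scan_page(start_key)
        for item in page['Items']:
            if max_results is not None and n >= max_results:
                return
            yield item
            n += 1
        start_key = page.get('LastEvaluatedKey')
        if start_key is None:
            return

# Fake two-page backend for illustration:
pages = {None: {'Items': [1, 2], 'LastEvaluatedKey': 'k'},
         'k': {'Items': [3, 4]}}
```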
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def photos_search(user_id='', auth=False, tags='', tag_mode='', text='', min_upload_date='', max_upload_date='', min_taken_date='', max_taken_date='', license='', per_page='', page='', sort=''):
"""Returns a list of Photo objects. If auth=True then will auth the user. Can see private etc """ |
method = 'flickr.photos.search'
data = _doget(method, auth=auth, user_id=user_id, tags=tags, text=text,\
min_upload_date=min_upload_date,\
max_upload_date=max_upload_date, \
min_taken_date=min_taken_date, \
max_taken_date=max_taken_date, \
license=license, per_page=per_page,\
page=page, sort=sort)
photos = []
if isinstance(data.rsp.photos.photo, list):
for photo in data.rsp.photos.photo:
photos.append(_parse_photo(photo))
else:
photos = [_parse_photo(data.rsp.photos.photo)]
return photos |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def favorites_add(photo_id):
"""Add a photo to the user's favorites.""" |
method = 'flickr.favorites.add'
_dopost(method, auth=True, photo_id=photo_id)
return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def favorites_remove(photo_id):
"""Remove a photo from the user's favorites.""" |
method = 'flickr.favorites.remove'
_dopost(method, auth=True, photo_id=photo_id)
return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def groups_pools_getGroups():
"""Get a list of groups the auth'd user can post photos to.""" |
method = 'flickr.groups.pools.getGroups'
data = _doget(method, auth=True)
groups = []
if isinstance(data.rsp.groups.group, list):
for group in data.rsp.groups.group:
groups.append(Group(group.id, name=group.name, \
privacy=group.privacy))
else:
group = data.rsp.groups.group
groups = [Group(group.id, name=group.name, privacy=group.privacy)]
return groups |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def tags_getListUserPopular(user_id='', count=''):
"""Gets the popular tags for a user in dictionary form tag=>count""" |
method = 'flickr.tags.getListUserPopular'
auth = user_id == ''
data = _doget(method, auth=auth, user_id=user_id)
result = {}
if isinstance(data.rsp.tags.tag, list):
for tag in data.rsp.tags.tag:
result[tag.text] = tag.count
else:
result[data.rsp.tags.tag.text] = data.rsp.tags.tag.count
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def tags_getrelated(tag):
"""Gets the related tags for given tag.""" |
method = 'flickr.tags.getRelated'
data = _doget(method, auth=False, tag=tag)
if isinstance(data.rsp.tags.tag, list):
return [tag.text for tag in data.rsp.tags.tag]
else:
return [data.rsp.tags.tag.text] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _parse_photo(photo):
"""Create a Photo object from photo data.""" |
owner = User(photo.owner)
title = photo.title
ispublic = photo.ispublic
isfriend = photo.isfriend
isfamily = photo.isfamily
secret = photo.secret
server = photo.server
p = Photo(photo.id, owner=owner, title=title, ispublic=ispublic,\
isfriend=isfriend, isfamily=isfamily, secret=secret, \
server=server)
return p |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getLocation(self):
""" Return the latitude+longitutde of the picture. Returns None if no location given for this pic. """ |
method = 'flickr.photos.geo.getLocation'
try:
data = _doget(method, photo_id=self.id)
except FlickrError:  # some other error might have occurred too
return None
loc = data.rsp.photo.location
return [loc.latitude, loc.longitude] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getPhotos(self):
"""Returns list of Photos.""" |
method = 'flickr.photosets.getPhotos'
data = _doget(method, photoset_id=self.id)
photos = data.rsp.photoset.photo
p = []
for photo in photos:
p.append(Photo(photo.id, title=photo.title, secret=photo.secret, \
server=photo.server))
return p |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def editPhotos(self, photos, primary=None):
"""Edit the photos in this set. photos - photos for set primary - primary photo (if None will used current) """ |
method = 'flickr.photosets.editPhotos'
if primary is None:
primary = self.primary
ids = [photo.id for photo in photos]
if primary.id not in ids:
ids.append(primary.id)
_dopost(method, auth=True, photoset_id=self.id,\
primary_photo_id=primary.id,
photo_ids=ids)
self.__count = len(ids)
return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def addPhoto(self, photo):
"""Add a photo to this set. photo - the photo """ |
method = 'flickr.photosets.addPhoto'
_dopost(method, auth=True, photoset_id=self.id, photo_id=photo.id)
self.__count += 1
return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete(self):
"""Deletes the photoset. """ |
method = 'flickr.photosets.delete'
_dopost(method, auth=True, photoset_id=self.id)
return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create(cls, photo, title, description=''):
"""Create a new photoset. photo - primary photo """ |
if not isinstance(photo, Photo):
raise TypeError("Photo expected")
method = 'flickr.photosets.create'
data = _dopost(method, auth=True, title=title,\
description=description,\
primary_photo_id=photo.id)
pset = Photoset(data.rsp.photoset.id, title, Photo(photo.id),
                photos=1, description=description)
return pset |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _general_getattr(self, var):
"""Generic get attribute function.""" |
if getattr(self, "_%s__%s" % (self.__class__.__name__, var)) is None \
and not self.__loaded:
self._load_properties()
return getattr(self, "_%s__%s" % (self.__class__.__name__, var)) |
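`_general_getattr` implements lazy loading: every private attribute starts as `None`, and the first access to any of them triggers one `_load_properties` call that fills them all (note the manual name mangling via `"_%s__%s"`). A hypothetical `LazyUser` class illustrating the same pattern without the Flickr round trip:

```python
class LazyUser:
    # Minimal sketch of the lazy-load pattern; the real code backs
    # _load_properties with a flickr.people.getInfo API call.
    def __init__(self, user_id):
        self.__id = user_id
        self.__username = None
        self.__loaded = False

    def _load_properties(self):
        # Stand-in for the remote API round trip.
        self.__loaded = True
        self.__username = 'user-%s' % self.__id

    def _general_getattr(self, var):
        # Reconstruct the name-mangled private attribute and load on demand.
        name = "_%s__%s" % (self.__class__.__name__, var)
        if getattr(self, name) is None and not self.__loaded:
            self._load_properties()
        return getattr(self, name)

    @property
    def username(self):
        return self._general_getattr('username')

u = LazyUser('42')
```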
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _load_properties(self):
"""Load User properties from Flickr.""" |
method = 'flickr.people.getInfo'
data = _doget(method, user_id=self.__id)
self.__loaded = True
person = data.rsp.person
self.__isadmin = person.isadmin
self.__ispro = person.ispro
self.__icon_server = person.iconserver
if int(person.iconserver) > 0:
self.__icon_url = 'http://photos%s.flickr.com/buddyicons/%s.jpg' \
% (person.iconserver, self.__id)
else:
self.__icon_url = 'http://www.flickr.com/images/buddyicon.jpg'
self.__username = person.username.text
self.__realname = person.realname.text
self.__location = person.location.text
self.__photos_firstdate = person.photos.firstdate.text
self.__photos_firstdatetaken = person.photos.firstdatetaken.text
self.__photos_count = person.photos.count.text |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getPhotosets(self):
"""Returns a list of Photosets.""" |
method = 'flickr.photosets.getList'
data = _doget(method, user_id=self.id)
sets = []
if isinstance(data.rsp.photosets.photoset, list):
for photoset in data.rsp.photosets.photoset:
sets.append(Photoset(photoset.id, photoset.title.text,\
Photo(photoset.primary),\
secret=photoset.secret, \
server=photoset.server, \
description=photoset.description.text,
photos=photoset.photos))
else:
photoset = data.rsp.photosets.photoset
sets.append(Photoset(photoset.id, photoset.title.text,\
Photo(photoset.primary),\
secret=photoset.secret, \
server=photoset.server, \
description=photoset.description.text,
photos=photoset.photos))
return sets |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getPhotos(self, tags='', per_page='', page=''):
"""Get a list of photo objects for this group""" |
method = 'flickr.groups.pools.getPhotos'
data = _doget(method, group_id=self.id, tags=tags,\
per_page=per_page, page=page)
photos = []
for photo in data.rsp.photos.photo:
photos.append(_parse_photo(photo))
return photos |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add(self, photo):
"""Adds a Photo to the group""" |
method = 'flickr.groups.pools.add'
_dopost(method, auth=True, photo_id=photo.id, group_id=self.id)
return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_all_keys(self, headers=None, **params):
""" This method returns the single key around which this anonymous Bucket was instantiated. :rtype: SimpleResultSet :return: The result from file system listing the keys requested """ |
key = Key(self.name, self.contained_key)
return SimpleResultSet([key]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def new_key(self, key_name=None, key_type=Key.KEY_REGULAR_FILE):
""" Creates a new key :type key_name: string :param key_name: The name of the key to create :rtype: :class:`boto.file.key.Key` :returns: An instance of the newly created key object """ |
if key_name == '-':
return Key(self.name, '-', key_type=Key.KEY_STREAM_WRITABLE)
else:
dir_name = os.path.dirname(key_name)
if dir_name and not os.path.exists(dir_name):
os.makedirs(dir_name)
fp = open(key_name, 'wb')
return Key(self.name, key_name, fp) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete_queue(self, queue, force_deletion=False, callback=None):
""" Delete an SQS Queue. :type queue: A Queue object :param queue: The SQS queue to be deleted :type force_deletion: Boolean :param force_deletion: Normally, SQS will not delete a queue that contains messages. However, if the force_deletion argument is True, the queue will be deleted regardless of whether there are messages in the queue or not. USE WITH CAUTION. This will delete all messages in the queue as well. :rtype: bool :return: True if the command succeeded, False otherwise """ |
return self.get_status('DeleteQueue', None, queue.id, callback=callback) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def receive_message(self, queue, number_messages=1, visibility_timeout=None, attributes=None, callback=None):
""" Read messages from an SQS Queue. :type queue: A Queue object :param queue: The Queue from which messages are read. :type number_messages: int :param number_messages: The maximum number of messages to read (default=1) :type visibility_timeout: int :param visibility_timeout: The number of seconds the message should remain invisible to other queue readers (default=None which uses the Queues default) :type attributes: str :param attributes: The name of additional attribute to return with response or All if you want all attributes. The default is to return no additional attributes. Valid values: All|SenderId|SentTimestamp| ApproximateReceiveCount| ApproximateFirstReceiveTimestamp :rtype: list :return: A list of :class:`boto.sqs.message.Message` objects. """ |
params = {'MaxNumberOfMessages' : number_messages}
if visibility_timeout:
params['VisibilityTimeout'] = visibility_timeout
if attributes:
self.build_list_params(params, attributes, 'AttributeName')
return self.get_list('ReceiveMessage', params,
[('Message', queue.message_class)],
queue.id, queue, callback=callback) |
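The parameter dict that `receive_message` builds can be sketched standalone. This is a hedged illustration, not boto itself: `receive_message_params` is a hypothetical helper, and the `AttributeName.N` flattening is an assumption about what boto's `build_list_params` produces for the `'AttributeName'` label.

```python
def receive_message_params(number_messages=1, visibility_timeout=None,
                           attributes=None):
    """Mirror the query parameters built by receive_message above."""
    params = {'MaxNumberOfMessages': number_messages}
    if visibility_timeout:
        params['VisibilityTimeout'] = visibility_timeout
    # Assumed flattening: each attribute name becomes a numbered
    # AttributeName.N query parameter (1-based), as build_list_params does.
    for i, attr in enumerate(attributes or []):
        params['AttributeName.%d' % (i + 1)] = attr
    return params
```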
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def change_message_visibility(self, queue, receipt_handle, visibility_timeout, callback=None):
""" Extends the read lock timeout for the specified message from the specified queue to the specified value. :type queue: A :class:`boto.sqs.queue.Queue` object :param queue: The Queue from which messages are read. :type receipt_handle: str :param queue: The receipt handle associated with the message whose visibility timeout will be changed. :type visibility_timeout: int :param visibility_timeout: The new value of the message's visibility timeout in seconds. """ |
params = {'ReceiptHandle' : receipt_handle,
'VisibilityTimeout' : visibility_timeout}
return self.get_status('ChangeMessageVisibility', params, queue.id, callback=callback) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def regions():
""" Get all available regions for the RDS service. :rtype: list :return: A list of :class:`boto.rds.regioninfo.RDSRegionInfo` """ |
return [RDSRegionInfo(name='us-east-1',
endpoint='rds.us-east-1.amazonaws.com'),
RDSRegionInfo(name='eu-west-1',
endpoint='rds.eu-west-1.amazonaws.com'),
RDSRegionInfo(name='us-west-1',
endpoint='rds.us-west-1.amazonaws.com'),
RDSRegionInfo(name='us-west-2',
endpoint='rds.us-west-2.amazonaws.com'),
RDSRegionInfo(name='sa-east-1',
endpoint='rds.sa-east-1.amazonaws.com'),
RDSRegionInfo(name='ap-northeast-1',
endpoint='rds.ap-northeast-1.amazonaws.com'),
RDSRegionInfo(name='ap-southeast-1',
endpoint='rds.ap-southeast-1.amazonaws.com')
] |
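A caller typically picks one region from this list by name. The sketch below assumes nothing beyond the name/endpoint attributes shown above; `RegionInfo` is a minimal stand-in for boto's `RDSRegionInfo`, which is not imported here.

```python
# Minimal stand-in for boto's RDSRegionInfo, for illustration only.
class RegionInfo:
    def __init__(self, name, endpoint):
        self.name = name
        self.endpoint = endpoint

REGIONS = [
    RegionInfo('us-east-1', 'rds.us-east-1.amazonaws.com'),
    RegionInfo('eu-west-1', 'rds.eu-west-1.amazonaws.com'),
]

def get_region(name, regions):
    """Return the first region whose name matches, or None."""
    for region in regions:
        if region.name == name:
            return region
    return None
```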
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_all_dbinstances(self, instance_id=None, max_records=None, marker=None):
""" Retrieve all the DBInstances in your account. :type instance_id: str :param instance_id: DB Instance identifier. If supplied, only information this instance will be returned. Otherwise, info about all DB Instances will be returned. :type max_records: int :param max_records: The maximum number of records to be returned. If more results are available, a MoreToken will be returned in the response that can be used to retrieve additional records. Default is 100. :type marker: str :param marker: The marker provided by a previous request. :rtype: list :return: A list of :class:`boto.rds.dbinstance.DBInstance` """ |
params = {}
if instance_id:
params['DBInstanceIdentifier'] = instance_id
if max_records:
params['MaxRecords'] = max_records
if marker:
params['Marker'] = marker
return self.get_list('DescribeDBInstances', params,
[('DBInstance', DBInstance)]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_dbinstance(self, id, allocated_storage, instance_class, master_username, master_password, port=3306, engine='MySQL5.1', db_name=None, param_group=None, security_groups=None, availability_zone=None, preferred_maintenance_window=None, backup_retention_period=None, preferred_backup_window=None, multi_az=False, engine_version=None, auto_minor_version_upgrade=True):
""" Create a new DBInstance. :type id: str :param id: Unique identifier for the new instance. Must contain 1-63 alphanumeric characters. First character must be a letter. May not end with a hyphen or contain two consecutive hyphens :type allocated_storage: int :param allocated_storage: Initially allocated storage size, in GBs. Valid values are [5-1024] :type instance_class: str :param instance_class: The compute and memory capacity of the DBInstance. Valid values are: * db.m1.small * db.m1.large * db.m1.xlarge * db.m2.xlarge * db.m2.2xlarge * db.m2.4xlarge :type engine: str :param engine: Name of database engine. Must be MySQL5.1 for now. :type master_username: str :param master_username: Name of master user for the DBInstance. Must be 1-15 alphanumeric characters, first must be a letter. :type master_password: str :param master_password: Password of master user for the DBInstance. Must be 4-16 alphanumeric characters. :type port: int :param port: Port number on which database accepts connections. Valid values [1115-65535]. Defaults to 3306. :type db_name: str :param db_name: Name of a database to create when the DBInstance is created. Default is to create no databases. :type param_group: str :param param_group: Name of DBParameterGroup to associate with this DBInstance. If no groups are specified no parameter groups will be used. :type security_groups: list of str or list of DBSecurityGroup objects :param security_groups: List of names of DBSecurityGroup to authorize on this DBInstance. :type availability_zone: str :param availability_zone: Name of the availability zone to place DBInstance into. :type preferred_maintenance_window: str :param preferred_maintenance_window: The weekly time range (in UTC) during which maintenance can occur. Default is Sun:05:00-Sun:09:00 :type backup_retention_period: int :param backup_retention_period: The number of days for which automated backups are retained. Setting this to zero disables automated backups. 
:type preferred_backup_window: str :param preferred_backup_window: The daily time range during which automated backups are created (if enabled). Must be in hh24:mi-hh24:mi format (UTC). :type multi_az: bool :param multi_az: If True, specifies the DB Instance will be deployed in multiple availability zones. :type engine_version: str :param engine_version: Version number of the database engine to use. :type auto_minor_version_upgrade: bool :param auto_minor_version_upgrade: Indicates that minor engine upgrades will be applied automatically to the DB Instance during the maintenance window. Default is True. :rtype: :class:`boto.rds.dbinstance.DBInstance` :return: The new db instance. """ |
:type preferred_backup_window: str :param preferred_backup_window: The daily time range during which automated backups are created (if enabled). Must be in hh24:mi-hh24:mi format (UTC). :type multi_az: bool :param multi_az: If True, specifies the DB Instance will be deployed in multiple availability zones. :type engine_version: str :param engine_version: Version number of the database engine to use. :type auto_minor_version_upgrade: bool :param auto_minor_version_upgrade: Indicates that minor engine upgrades will be applied automatically to the DB Instance during the maintenance window. Default is True. :rtype: :class:`boto.rds.dbinstance.DBInstance` :return: The new db instance. """ |
params = {'DBInstanceIdentifier': id,
'AllocatedStorage': allocated_storage,
'DBInstanceClass': instance_class,
'Engine': engine,
'MasterUsername': master_username,
'MasterUserPassword': master_password,
'Port': port,
'MultiAZ': str(multi_az).lower(),
'AutoMinorVersionUpgrade':
str(auto_minor_version_upgrade).lower()}
if db_name:
params['DBName'] = db_name
if param_group:
params['DBParameterGroupName'] = param_group
if security_groups:
l = []
for group in security_groups:
if isinstance(group, DBSecurityGroup):
l.append(group.name)
else:
l.append(group)
self.build_list_params(params, l, 'DBSecurityGroups.member')
if availability_zone:
params['AvailabilityZone'] = availability_zone
if preferred_maintenance_window:
params['PreferredMaintenanceWindow'] = preferred_maintenance_window
if backup_retention_period is not None:
params['BackupRetentionPeriod'] = backup_retention_period
if preferred_backup_window:
params['PreferredBackupWindow'] = preferred_backup_window
if engine_version:
params['EngineVersion'] = engine_version
return self.get_object('CreateDBInstance', params, DBInstance) |
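The required portion of the request dict built by `create_dbinstance` can be exercised on its own. `build_dbinstance_params` is a hypothetical helper that mirrors the dict construction above (including the `'true'`/`'false'` string serialization of Query-API booleans); it does not talk to AWS.

```python
def build_dbinstance_params(id, allocated_storage, instance_class,
                            master_username, master_password,
                            port=3306, engine='MySQL5.1',
                            multi_az=False, db_name=None):
    """Mirror the request-parameter dict built by create_dbinstance."""
    params = {'DBInstanceIdentifier': id,
              'AllocatedStorage': allocated_storage,
              'DBInstanceClass': instance_class,
              'Engine': engine,
              'MasterUsername': master_username,
              'MasterUserPassword': master_password,
              'Port': port,
              # Query-API booleans are serialized as 'true'/'false' strings.
              'MultiAZ': str(multi_az).lower()}
    if db_name:
        params['DBName'] = db_name
    return params
```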
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_dbinstance_read_replica(self, id, source_id, instance_class=None, port=3306, availability_zone=None, auto_minor_version_upgrade=None):
""" Create a new DBInstance Read Replica. :type id: str :param id: Unique identifier for the new instance. Must contain 1-63 alphanumeric characters. First character must be a letter. May not end with a hyphen or contain two consecutive hyphens :type source_id: str :param source_id: Unique identifier for the DB Instance for which this DB Instance will act as a Read Replica. :type instance_class: str :param instance_class: The compute and memory capacity of the DBInstance. Default is to inherit from the source DB Instance. Valid values are: * db.m1.small * db.m1.large * db.m1.xlarge * db.m2.xlarge * db.m2.2xlarge * db.m2.4xlarge :type port: int :param port: Port number on which database accepts connections. Default is to inherit from source DB Instance. Valid values [1115-65535]. Defaults to 3306. :type availability_zone: str :param availability_zone: Name of the availability zone to place DBInstance into. :type auto_minor_version_upgrade: bool :param auto_minor_version_upgrade: Indicates that minor engine upgrades will be applied automatically to the Read Replica during the maintenance window. Default is to inherit this value from the source DB Instance. :rtype: :class:`boto.rds.dbinstance.DBInstance` :return: The new db instance. """ |
params = {'DBInstanceIdentifier' : id,
'SourceDBInstanceIdentifier' : source_id}
if instance_class:
params['DBInstanceClass'] = instance_class
if port:
params['Port'] = port
if availability_zone:
params['AvailabilityZone'] = availability_zone
if auto_minor_version_upgrade is not None:
if auto_minor_version_upgrade is True:
params['AutoMinorVersionUpgrade'] = 'true'
else:
params['AutoMinorVersionUpgrade'] = 'false'
return self.get_object('CreateDBInstanceReadReplica',
params, DBInstance) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def modify_dbinstance(self, id, param_group=None, security_groups=None, preferred_maintenance_window=None, master_password=None, allocated_storage=None, instance_class=None, backup_retention_period=None, preferred_backup_window=None, multi_az=False, apply_immediately=False):
""" Modify an existing DBInstance. :type id: str :param id: Unique identifier for the new instance. :type security_groups: list of str or list of DBSecurityGroup objects :param security_groups: List of names of DBSecurityGroup to authorize on this DBInstance. :type preferred_maintenance_window: str :param preferred_maintenance_window: The weekly time range (in UTC) during which maintenance can occur. Default is Sun:05:00-Sun:09:00 :type master_password: str :param master_password: Password of master user for the DBInstance. Must be 4-15 alphanumeric characters. :type allocated_storage: int :param allocated_storage: The new allocated storage size, in GBs. Valid values are [5-1024] :type instance_class: str :param instance_class: The compute and memory capacity of the DBInstance. Changes will be applied at next maintenance window unless apply_immediately is True. Valid values are: * db.m1.small * db.m1.large * db.m1.xlarge * db.m2.xlarge * db.m2.2xlarge * db.m2.4xlarge :type apply_immediately: bool :param apply_immediately: If true, the modifications will be applied as soon as possible rather than waiting for the next preferred maintenance window. :type backup_retention_period: int :param backup_retention_period: The number of days for which automated backups are retained. Setting this to zero disables automated backups. :type preferred_backup_window: str :param preferred_backup_window: The daily time range during which automated backups are created (if enabled). Must be in h24:mi-hh24:mi format (UTC). :type multi_az: bool :param multi_az: If True, specifies the DB Instance will be deployed in multiple availability zones. :rtype: :class:`boto.rds.dbinstance.DBInstance` :return: The modified db instance. """ |
params = {'DBInstanceIdentifier' : id}
if param_group:
params['DBParameterGroupName'] = param_group
if security_groups:
l = []
for group in security_groups:
if isinstance(group, DBSecurityGroup):
l.append(group.name)
else:
l.append(group)
self.build_list_params(params, l, 'DBSecurityGroups.member')
if preferred_maintenance_window:
params['PreferredMaintenanceWindow'] = preferred_maintenance_window
if master_password:
params['MasterUserPassword'] = master_password
if allocated_storage:
params['AllocatedStorage'] = allocated_storage
if instance_class:
params['DBInstanceClass'] = instance_class
if backup_retention_period is not None:
params['BackupRetentionPeriod'] = backup_retention_period
if preferred_backup_window:
params['PreferredBackupWindow'] = preferred_backup_window
if multi_az:
params['MultiAZ'] = 'true'
if apply_immediately:
params['ApplyImmediately'] = 'true'
return self.get_object('ModifyDBInstance', params, DBInstance) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete_dbinstance(self, id, skip_final_snapshot=False, final_snapshot_id=''):
""" Delete an existing DBInstance. :type id: str :param id: Unique identifier for the new instance. :type skip_final_snapshot: bool :param skip_final_snapshot: This parameter determines whether a final db snapshot is created before the instance is deleted. If True, no snapshot is created. If False, a snapshot is created before deleting the instance. :type final_snapshot_id: str :param final_snapshot_id: If a final snapshot is requested, this is the identifier used for that snapshot. :rtype: :class:`boto.rds.dbinstance.DBInstance` :return: The deleted db instance. """ |
params = {'DBInstanceIdentifier' : id}
if skip_final_snapshot:
params['SkipFinalSnapshot'] = 'true'
else:
params['SkipFinalSnapshot'] = 'false'
params['FinalDBSnapshotIdentifier'] = final_snapshot_id
return self.get_object('DeleteDBInstance', params, DBInstance) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_all_dbparameter_groups(self, groupname=None, max_records=None, marker=None):
""" Get all parameter groups associated with your account in a region. :type groupname: str :param groupname: The name of the DBParameter group to retrieve. If not provided, all DBParameter groups will be returned. :type max_records: int :param max_records: The maximum number of records to be returned. If more results are available, a MoreToken will be returned in the response that can be used to retrieve additional records. Default is 100. :type marker: str :param marker: The marker provided by a previous request. :rtype: list :return: A list of :class:`boto.ec2.parametergroup.ParameterGroup` """ |
params = {}
if groupname:
params['DBParameterGroupName'] = groupname
if max_records:
params['MaxRecords'] = max_records
if marker:
params['Marker'] = marker
return self.get_list('DescribeDBParameterGroups', params,
[('DBParameterGroup', ParameterGroup)]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_all_dbparameters(self, groupname, source=None, max_records=None, marker=None):
""" Get all parameters associated with a ParameterGroup :type groupname: str :param groupname: The name of the DBParameter group to retrieve. :type source: str :param source: Specifies which parameters to return. If not specified, all parameters will be returned. Valid values are: user|system|engine-default :type max_records: int :param max_records: The maximum number of records to be returned. If more results are available, a MoreToken will be returned in the response that can be used to retrieve additional records. Default is 100. :type marker: str :param marker: The marker provided by a previous request. :rtype: :class:`boto.ec2.parametergroup.ParameterGroup` :return: The ParameterGroup """ |
params = {'DBParameterGroupName' : groupname}
if source:
params['Source'] = source
if max_records:
params['MaxRecords'] = max_records
if marker:
params['Marker'] = marker
pg = self.get_object('DescribeDBParameters', params, ParameterGroup)
pg.name = groupname
return pg |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_parameter_group(self, name, engine='MySQL5.1', description=''):
""" Create a new dbparameter group for your account. :type name: string :param name: The name of the new dbparameter group :type engine: str :param engine: Name of database engine. :type description: string :param description: The description of the new security group :rtype: :class:`boto.rds.dbsecuritygroup.DBSecurityGroup` :return: The newly created DBSecurityGroup """ |
params = {'DBParameterGroupName': name,
'DBParameterGroupFamily': engine,
'Description' : description}
return self.get_object('CreateDBParameterGroup', params, ParameterGroup) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def modify_parameter_group(self, name, parameters=None):
""" Modify a parameter group for your account. :type name: string :param name: The name of the new parameter group :type parameters: list of :class:`boto.rds.parametergroup.Parameter` :param parameters: The new parameters :rtype: :class:`boto.rds.parametergroup.ParameterGroup` :return: The newly created ParameterGroup """ |
params = {'DBParameterGroupName': name}
if parameters:
    for i, parameter in enumerate(parameters):
        parameter.merge(params, i + 1)
return self.get_list('ModifyDBParameterGroup', params,
                     ParameterGroup, verb='POST')
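The `parameter.merge(params, i + 1)` calls above flatten each parameter into numbered query keys. The sketch below is an assumption about that flattening (key names like `Parameters.member.N.ParameterName` follow the `DBSecurityGroups.member` pattern seen elsewhere in this file); `merge_parameter` is a hypothetical stand-in for `Parameter.merge`.

```python
def merge_parameter(params, index, name, value, apply_method='immediate'):
    # Assumed flattening: one numbered group of keys per parameter,
    # matching the 1-based index passed as merge(params, i + 1) above.
    prefix = 'Parameters.member.%d' % index
    params['%s.ParameterName' % prefix] = name
    params['%s.ParameterValue' % prefix] = value
    params['%s.ApplyMethod' % prefix] = apply_method

params = {'DBParameterGroupName': 'mygroup'}
for i, (name, value) in enumerate([('max_connections', '250')]):
    merge_parameter(params, i + 1, name, value)
```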
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def reset_parameter_group(self, name, reset_all_params=False, parameters=None):
""" Resets some or all of the parameters of a ParameterGroup to the default value :type key_name: string :param key_name: The name of the ParameterGroup to reset :type parameters: list of :class:`boto.rds.parametergroup.Parameter` :param parameters: The parameters to reset. If not supplied, all parameters will be reset. """ |
params = {'DBParameterGroupName': name}
if reset_all_params:
    params['ResetAllParameters'] = 'true'
else:
    params['ResetAllParameters'] = 'false'
if parameters:
    for i, parameter in enumerate(parameters):
        parameter.merge(params, i + 1)
return self.get_status('ResetDBParameterGroup', params)
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def authorize_dbsecurity_group(self, group_name, cidr_ip=None, ec2_security_group_name=None, ec2_security_group_owner_id=None):
""" Add a new rule to an existing security group. You need to pass in either src_security_group_name and src_security_group_owner_id OR a CIDR block but not both. :type group_name: string :param group_name: The name of the security group you are adding the rule to. :type ec2_security_group_name: string :param ec2_security_group_name: The name of the EC2 security group you are granting access to. :type ec2_security_group_owner_id: string :param ec2_security_group_owner_id: The ID of the owner of the EC2 security group you are granting access to. :type cidr_ip: string :param cidr_ip: The CIDR block you are providing access to. See http://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing :rtype: bool :return: True if successful. """ |
params = {'DBSecurityGroupName':group_name}
if ec2_security_group_name:
params['EC2SecurityGroupName'] = ec2_security_group_name
if ec2_security_group_owner_id:
params['EC2SecurityGroupOwnerId'] = ec2_security_group_owner_id
if cidr_ip:
params['CIDRIP'] = urllib.quote(cidr_ip)
return self.get_object('AuthorizeDBSecurityGroupIngress', params,
DBSecurityGroup) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def revoke_dbsecurity_group(self, group_name, ec2_security_group_name=None, ec2_security_group_owner_id=None, cidr_ip=None):
""" Remove an existing rule from an existing security group. You need to pass in either ec2_security_group_name and ec2_security_group_owner_id OR a CIDR block. :type group_name: string :param group_name: The name of the security group you are removing the rule from. :type ec2_security_group_name: string :param ec2_security_group_name: The name of the EC2 security group from which you are removing access. :type ec2_security_group_owner_id: string :param ec2_security_group_owner_id: The ID of the owner of the EC2 security from which you are removing access. :type cidr_ip: string :param cidr_ip: The CIDR block from which you are removing access. See http://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing :rtype: bool :return: True if successful. """ |
params = {'DBSecurityGroupName':group_name}
if ec2_security_group_name:
params['EC2SecurityGroupName'] = ec2_security_group_name
if ec2_security_group_owner_id:
params['EC2SecurityGroupOwnerId'] = ec2_security_group_owner_id
if cidr_ip:
params['CIDRIP'] = cidr_ip
return self.get_object('RevokeDBSecurityGroupIngress', params,
DBSecurityGroup) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_all_dbsnapshots(self, snapshot_id=None, instance_id=None, max_records=None, marker=None):
""" Get information about DB Snapshots. :type snapshot_id: str :param snapshot_id: The unique identifier of an RDS snapshot. If not provided, all RDS snapshots will be returned. :type instance_id: str :param instance_id: The identifier of a DBInstance. If provided, only the DBSnapshots related to that instance will be returned. If not provided, all RDS snapshots will be returned. :type max_records: int :param max_records: The maximum number of records to be returned. If more results are available, a MoreToken will be returned in the response that can be used to retrieve additional records. Default is 100. :type marker: str :param marker: The marker provided by a previous request. :rtype: list :return: A list of :class:`boto.rds.dbsnapshot.DBSnapshot` """ |
params = {}
if snapshot_id:
params['DBSnapshotIdentifier'] = snapshot_id
if instance_id:
params['DBInstanceIdentifier'] = instance_id
if max_records:
params['MaxRecords'] = max_records
if marker:
params['Marker'] = marker
return self.get_list('DescribeDBSnapshots', params,
[('DBSnapshot', DBSnapshot)]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_dbsnapshot(self, snapshot_id, dbinstance_id):
""" Create a new DB snapshot. :type snapshot_id: string :param snapshot_id: The identifier for the DBSnapshot :type dbinstance_id: string :param dbinstance_id: The source identifier for the RDS instance from which the snapshot is created. :rtype: :class:`boto.rds.dbsnapshot.DBSnapshot` :return: The newly created DBSnapshot """ |
params = {'DBSnapshotIdentifier' : snapshot_id,
'DBInstanceIdentifier' : dbinstance_id}
return self.get_object('CreateDBSnapshot', params, DBSnapshot) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def restore_dbinstance_from_dbsnapshot(self, identifier, instance_id, instance_class, port=None, availability_zone=None):
""" Create a new DBInstance from a DB snapshot. :type identifier: string :param identifier: The identifier for the DBSnapshot :type instance_id: string :param instance_id: The source identifier for the RDS instance from which the snapshot is created. :type instance_class: str :param instance_class: The compute and memory capacity of the DBInstance. Valid values are: db.m1.small | db.m1.large | db.m1.xlarge | db.m2.2xlarge | db.m2.4xlarge :type port: int :param port: Port number on which database accepts connections. Valid values [1115-65535]. Defaults to 3306. :type availability_zone: str :param availability_zone: Name of the availability zone to place DBInstance into. :rtype: :class:`boto.rds.dbinstance.DBInstance` :return: The newly created DBInstance """ |
params = {'DBSnapshotIdentifier' : identifier,
'DBInstanceIdentifier' : instance_id,
'DBInstanceClass' : instance_class}
if port:
params['Port'] = port
if availability_zone:
params['AvailabilityZone'] = availability_zone
return self.get_object('RestoreDBInstanceFromDBSnapshot',
params, DBInstance) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def restore_dbinstance_from_point_in_time(self, source_instance_id, target_instance_id, use_latest=False, restore_time=None, dbinstance_class=None, port=None, availability_zone=None):
""" Create a new DBInstance from a point in time. :type source_instance_id: string :param source_instance_id: The identifier for the source DBInstance. :type target_instance_id: string :param target_instance_id: The identifier of the new DBInstance. :type use_latest: bool :param use_latest: If True, the latest snapshot availabile will be used. :type restore_time: datetime :param restore_time: The date and time to restore from. Only used if use_latest is False. :type instance_class: str :param instance_class: The compute and memory capacity of the DBInstance. Valid values are: db.m1.small | db.m1.large | db.m1.xlarge | db.m2.2xlarge | db.m2.4xlarge :type port: int :param port: Port number on which database accepts connections. Valid values [1115-65535]. Defaults to 3306. :type availability_zone: str :param availability_zone: Name of the availability zone to place DBInstance into. :rtype: :class:`boto.rds.dbinstance.DBInstance` :return: The newly created DBInstance """ |
params = {'SourceDBInstanceIdentifier' : source_instance_id,
'TargetDBInstanceIdentifier' : target_instance_id}
if use_latest:
params['UseLatestRestorableTime'] = 'true'
elif restore_time:
params['RestoreTime'] = restore_time.isoformat()
if dbinstance_class:
params['DBInstanceClass'] = dbinstance_class
if port:
params['Port'] = port
if availability_zone:
params['AvailabilityZone'] = availability_zone
return self.get_object('RestoreDBInstanceToPointInTime',
params, DBInstance) |
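The mutually exclusive `use_latest` / `restore_time` handling above can be isolated into a pure function. `restore_time_params` is a hypothetical helper mirroring just that branch; nothing here calls AWS.

```python
from datetime import datetime

def restore_time_params(use_latest=False, restore_time=None):
    """Mirror the time-selection parameters built by
    restore_dbinstance_from_point_in_time."""
    params = {}
    if use_latest:
        params['UseLatestRestorableTime'] = 'true'
    elif restore_time:
        params['RestoreTime'] = restore_time.isoformat()
    return params
```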
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_all_events(self, source_identifier=None, source_type=None, start_time=None, end_time=None, max_records=None, marker=None):
""" Get information about events related to your DBInstances, DBSecurityGroups and DBParameterGroups. :type source_identifier: str :param source_identifier: If supplied, the events returned will be limited to those that apply to the identified source. The value of this parameter depends on the value of source_type. If neither parameter is specified, all events in the time span will be returned. :type source_type: str :param source_type: Specifies how the source_identifier should be interpreted. Valid values are: b-instance | db-security-group | db-parameter-group | db-snapshot :type start_time: datetime :param start_time: The beginning of the time interval for events. If not supplied, all available events will be returned. :type end_time: datetime :param end_time: The ending of the time interval for events. If not supplied, all available events will be returned. :type max_records: int :param max_records: The maximum number of records to be returned. If more results are available, a MoreToken will be returned in the response that can be used to retrieve additional records. Default is 100. :type marker: str :param marker: The marker provided by a previous request. :rtype: list :return: A list of class:`boto.rds.event.Event` """ |
params = {}
if source_identifier and source_type:
params['SourceIdentifier'] = source_identifier
params['SourceType'] = source_type
if start_time:
params['StartTime'] = start_time.isoformat()
if end_time:
params['EndTime'] = end_time.isoformat()
if max_records:
params['MaxRecords'] = max_records
if marker:
params['Marker'] = marker
return self.get_list('DescribeEvents', params, [('Event', Event)]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read(cls, filename, offset=None, encoding="iso-8859-1"):
"""Read an ID3v1 tag from a file.""" |
with fileutil.opened(filename, "rb") as file:
if offset is None:
file.seek(-128, 2)
else:
file.seek(offset)
data = file.read(128)
return cls.decode(data, encoding=encoding) |
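The `file.seek(-128, 2)` call above seeks 128 bytes back from the end of the file, where an ID3v1 tag lives. A self-contained sketch of that access pattern, using an in-memory stream instead of a real MP3 (the `read_trailing_tag` helper and the fake data are illustrative, not part of the library):

```python
import io
import os

def read_trailing_tag(stream):
    """Read the last 128 bytes of a seekable stream (where an ID3v1
    tag would sit) and return them, or None if the stream is too short."""
    stream.seek(0, os.SEEK_END)       # find the stream length
    if stream.tell() < 128:
        return None
    stream.seek(-128, os.SEEK_END)    # same as file.seek(-128, 2) above
    return stream.read(128)

# Fake file: 1000 bytes of audio followed by a minimal 'TAG' block.
data = b'\x00' * 1000 + b'TAG' + b'\x00' * 125
tag = read_trailing_tag(io.BytesIO(data))
```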
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_acl(self, acl_or_str, key_name='', headers=None, version_id=None):
"""sets or changes a bucket's acl. We include a version_id argument to support a polymorphic interface for callers, however, version_id is not relevant for Google Cloud Storage buckets and is therefore ignored here.""" |
if isinstance(acl_or_str, Policy):
raise InvalidAclError('Attempt to set S3 Policy on GS ACL')
elif isinstance(acl_or_str, ACL):
self.set_xml_acl(acl_or_str.to_xml(), key_name, headers=headers)
else:
self.set_canned_acl(acl_or_str, key_name, headers=headers) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_def_acl(self, acl_or_str, key_name='', headers=None):
"""sets or changes a bucket's default object acl""" |
if isinstance(acl_or_str, Policy):
raise InvalidAclError('Attempt to set S3 Policy on GS ACL')
elif isinstance(acl_or_str, ACL):
self.set_def_xml_acl(acl_or_str.to_xml(), key_name, headers=headers)
else:
self.set_def_canned_acl(acl_or_str, key_name, headers=headers) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_acl(self, key_name='', headers=None, version_id=None):
"""returns a bucket's acl. We include a version_id argument to support a polymorphic interface for callers, however, version_id is not relevant for Google Cloud Storage buckets and is therefore ignored here.""" |
return self.get_acl_helper(key_name, headers, STANDARD_ACL) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_def_xml_acl(self, acl_str, key_name='', headers=None):
"""sets or changes a bucket's default object""" |
return self.set_xml_acl(acl_str, key_name, headers,
query_args=DEF_OBJ_ACL) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_snapshots(self):
""" Returns a list of all completed snapshots for this volume ID. """ |
ec2 = self.get_ec2_connection()
rs = ec2.get_all_snapshots()
all_vols = [self.volume_id] + self.past_volume_ids
snaps = []
for snapshot in rs:
if snapshot.volume_id in all_vols:
if snapshot.progress == '100%':
snapshot.date = boto.utils.parse_ts(snapshot.start_time)
snapshot.keep = True
snaps.append(snapshot)
        snaps.sort(key=lambda x: x.date)
return snaps |
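The filter-and-sort core of `get_snapshots` can be tested without EC2. `Snap` and `completed_snapshots` below are stand-ins written for this sketch; the sort uses `key=` (equivalent to the `cmp=` form, and also valid on Python 3).

```python
from datetime import datetime

class Snap:
    """Tiny stand-in for boto's Snapshot, for illustration only."""
    def __init__(self, volume_id, progress, date):
        self.volume_id = volume_id
        self.progress = progress
        self.date = date

def completed_snapshots(snapshots, volume_ids):
    """Keep only finished snapshots of the given volumes, oldest first."""
    snaps = [s for s in snapshots
             if s.volume_id in volume_ids and s.progress == '100%']
    snaps.sort(key=lambda s: s.date)
    return snaps
```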
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def trim_snapshots(self, delete=False):
""" Trim the number of snapshots for this volume. This method always keeps the oldest snapshot. It then uses the parameters passed in to determine how many others should be kept. The algorithm is to keep all snapshots from the current day. Then it will keep the first snapshot of the day for the previous seven days. Then, it will keep the first snapshot of the week for the previous four weeks. After than, it will keep the first snapshot of the month for as many months as there are. """ |
snaps = self.get_snapshots()
# Always keep the oldest and the newest
if len(snaps) <= 2:
return snaps
snaps = snaps[1:-1]
now = datetime.datetime.now(snaps[0].date.tzinfo)
midnight = datetime.datetime(year=now.year, month=now.month,
day=now.day, tzinfo=now.tzinfo)
# Keep the first snapshot from each day of the previous week
one_week = datetime.timedelta(days=7, seconds=60*60)
previous_week = self.get_snapshot_range(snaps, midnight-one_week, midnight)
if not previous_week:
return snaps
current_day = None
for snap in previous_week:
if current_day and current_day == snap.date.day:
snap.keep = False
else:
current_day = snap.date.day
# Get ourselves onto the next full week boundary
if previous_week:
week_boundary = previous_week[0].date
if week_boundary.weekday() != 0:
delta = datetime.timedelta(days=week_boundary.weekday())
week_boundary = week_boundary - delta
# Keep one within this partial week
partial_week = self.get_snapshot_range(snaps, week_boundary, previous_week[0].date)
if len(partial_week) > 1:
for snap in partial_week[1:]:
snap.keep = False
# Keep the first snapshot of each week for the previous 4 weeks
for i in range(0,4):
weeks_worth = self.get_snapshot_range(snaps, week_boundary-one_week, week_boundary)
if len(weeks_worth) > 1:
for snap in weeks_worth[1:]:
snap.keep = False
week_boundary = week_boundary - one_week
# Now look through all remaining snaps and keep one per month
remainder = self.get_snapshot_range(snaps, end_date=week_boundary)
current_month = None
for snap in remainder:
if current_month and current_month == snap.date.month:
snap.keep = False
else:
current_month = snap.date.month
if delete:
for snap in snaps:
if not snap.keep:
boto.log.info('Deleting %s(%s) for %s' % (snap, snap.date, self.name))
snap.delete()
return snaps |
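The retention policy above (keep everything from today, then the first snapshot per day for a week, per week for about a month, per month beyond that) can be sketched as a standalone helper. `pick_retained` and its 7/35-day thresholds are hypothetical simplifications — the real code walks explicit week boundaries — but the keep-first-per-bucket idea is the same:

```python
from datetime import datetime, timedelta

def pick_retained(dates, now):
    # Given snapshot datetimes sorted oldest-first, keep everything taken
    # today, then the first snapshot per day (past week), per ISO week
    # (roughly the past month), and per month beyond that.
    midnight = datetime(now.year, now.month, now.day)
    keep = []
    seen = {'day': set(), 'week': set(), 'month': set()}
    for d in dates:
        age = midnight - d
        if age <= timedelta(0):                # taken today: always keep
            keep.append(d)
            continue
        if age <= timedelta(days=7):
            bucket, key = 'day', (d.year, d.month, d.day)
        elif age <= timedelta(days=35):
            iso = d.isocalendar()
            bucket, key = 'week', (iso[0], iso[1])
        else:
            bucket, key = 'month', (d.year, d.month)
        if key not in seen[bucket]:            # first snapshot in bucket wins
            seen[bucket].add(key)
            keep.append(d)
    return keep
```

Because the input is sorted oldest-first, "first in bucket" means the earliest snapshot of each day, week, or month — matching the "first snapshot of the day/week/month" rule in the docstring.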
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generate_headers(self, token):
"""Generate auth headers""" |
headers = {}
token = self.encode_token(token)
if self.config["header"]:
headers[self.config["header"]] = token
if self.config["cookie"]:
headers["Set-Cookie"] = dump_cookie(
self.config["cookie"], token, httponly=True,
max_age=self.config["expiration"]
)
return headers |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def calculate_expiration(self, token):
""" Calculate token expiration return expiration if the token need to set expiration or refresh, otherwise return None. Args: token (dict):
a decoded token """ |
if not token:
return None
now = datetime.utcnow()
time_to_live = self.config["expiration"]
if "exp" not in token:
return now + timedelta(seconds=time_to_live)
elif self.config["refresh"]:
exp = datetime.utcfromtimestamp(token["exp"])
            # 0.5: reduce refresh frequency
if exp - now < timedelta(seconds=0.5 * time_to_live):
return now + timedelta(seconds=time_to_live)
return None |
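The half-TTL refresh heuristic above is easy to isolate; this hypothetical helper captures just the comparison, with the 0.5 factor keeping refreshes (and the resulting header/cookie churn) infrequent:

```python
from datetime import datetime, timedelta

def needs_refresh(exp, now, time_to_live):
    # Refresh once less than half of the TTL remains before `exp`.
    return exp - now < timedelta(seconds=0.5 * time_to_live)
```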
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def decode_token(self, token):
"""Decode Authorization token, return None if token invalid""" |
key = current_app.secret_key
if key is None:
if current_app.debug:
current_app.logger.debug("app.secret_key not set")
return None
try:
return jwt.decode(
token, key,
algorithms=[self.config["algorithm"]],
options={'require_exp': True}
)
except jwt.InvalidTokenError:
return None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def encode_token(self, token):
"""Encode Authorization token, return bytes token""" |
key = current_app.secret_key
if key is None:
raise RuntimeError(
"please set app.secret_key before generate token")
return jwt.encode(token, key, algorithm=self.config["algorithm"]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_for_credential_file(self):
""" Checks for the existance of an AWS credential file. If the environment variable AWS_CREDENTIAL_FILE is set and points to a file, that file will be read and will be searched credentials. Note that if credentials have been explicitelypassed into the class constructor, those values always take precedence. """ |
if 'AWS_CREDENTIAL_FILE' in os.environ:
path = os.environ['AWS_CREDENTIAL_FILE']
path = os.path.expanduser(path)
path = os.path.expandvars(path)
if os.path.isfile(path):
fp = open(path)
lines = fp.readlines()
fp.close()
for line in lines:
if line[0] != '#':
if '=' in line:
name, value = line.split('=', 1)
if name.strip() == 'AWSAccessKeyId':
if 'aws_access_key_id' not in self.args:
value = value.strip()
self.args['aws_access_key_id'] = value
elif name.strip() == 'AWSSecretKey':
if 'aws_secret_access_key' not in self.args:
value = value.strip()
self.args['aws_secret_access_key'] = value
else:
                print('Warning: unable to read AWS_CREDENTIAL_FILE') |
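The `Name=Value` file format the loop above consumes can be parsed with a small stdlib-only helper (hypothetical, not part of boto): skip comment lines, split on the first `=`, and strip whitespace from both halves.

```python
def parse_credential_lines(lines):
    # Mirror the loop above: ignore comments and lines without '=',
    # then collect stripped name/value pairs.
    creds = {}
    for line in lines:
        if line.startswith('#') or '=' not in line:
            continue
        name, value = line.split('=', 1)
        creds[name.strip()] = value.strip()
    return creds
```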
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_for_env_url(self):
""" First checks to see if a url argument was explicitly passed in. If so, that will be used. If not, it checks for the existence of the environment variable specified in ENV_URL. If this is set, it should contain a fully qualified URL to the service you want to use. Note that any values passed explicitly to the class constructor will take precedence. """ |
url = self.args.get('url', None)
if url:
del self.args['url']
if not url and self.EnvURL in os.environ:
url = os.environ[self.EnvURL]
if url:
rslt = urlparse.urlparse(url)
if 'is_secure' not in self.args:
if rslt.scheme == 'https':
self.args['is_secure'] = True
else:
self.args['is_secure'] = False
host = rslt.netloc
port = None
l = host.split(':')
if len(l) > 1:
host = l[0]
port = int(l[1])
if 'host' not in self.args:
self.args['host'] = host
if port and 'port' not in self.args:
self.args['port'] = port
if rslt.path and 'path' not in self.args:
self.args['path'] = rslt.path |
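The scheme/host/port/path extraction above can be sketched with `urllib.parse` (the Python 3 successor to the `urlparse` module used here); the helper name is hypothetical, and it mirrors the manual `netloc` split in the source rather than using `urlparse`'s own `port` attribute:

```python
from urllib.parse import urlparse

def split_service_url(url):
    # Extract the same pieces check_for_env_url pulls out of the URL:
    # a security flag from the scheme, host, optional port, and path.
    rslt = urlparse(url)
    host, port = rslt.netloc, None
    if ':' in host:
        host, port_text = host.split(':', 1)
        port = int(port_text)
    return {'is_secure': rslt.scheme == 'https',
            'host': host, 'port': port, 'path': rslt.path}
```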
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_objects_with_cmodel(self, cmodel_uri, type=None):
""" Find objects in Fedora with the specified content model. :param cmodel_uri: content model URI (should be full URI in info:fedora/pid:### format) :param type: type of object to return (e.g., class:`DigitalObject`) :rtype: list of objects """ |
uris = self.risearch.get_subjects(modelns.hasModel, cmodel_uri)
return [self.get_object(uri, type) for uri in uris] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_object(self, pid=None, type=None, create=None):
""" Initialize a single object from Fedora, or create a new one, with the same Fedora configuration and credentials. :param pid: pid of the object to request, or a function that can be called to get one. if not specified, :meth:`get_next_pid` will be called if a pid is needed :param type: type of object to return; defaults to :class:`DigitalObject` :rtype: single object of the type specified :create: boolean: create a new object? (if not specified, defaults to False when pid is specified, and True when it is not) """ |
objtype = type or self.default_object_type
if pid is None:
if create is None:
create = True
else:
if create is None:
create = False
return objtype(self.api, pid, create,
default_pidspace=self.default_pidspace) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create(self):
"""Create the write buffer and cache directory.""" |
if not self._sync and not hasattr(self, '_buffer'):
self._buffer = {}
if not os.path.exists(self.cache_dir):
os.makedirs(self.cache_dir) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete(self):
"""Delete the write buffer and cache directory.""" |
if not self._sync:
del self._buffer
shutil.rmtree(self.cache_dir) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def close(self):
"""Sync the write buffer, then close the cache. If a closed :class:`FileCache` object's methods are called, a :exc:`ValueError` will be raised. """ |
self.sync()
self.sync = self.create = self.delete = self._closed
self._write_to_file = self._read_to_file = self._closed
self._key_to_filename = self._filename_to_key = self._closed
self.__getitem__ = self.__setitem__ = self.__delitem__ = self._closed
self.__iter__ = self.__len__ = self.__contains__ = self._closed |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sync(self):
"""Sync the write buffer with the cache files and clear the buffer. If the :class:`FileCache` object was opened with the optional ``'s'`` *flag* argument, then calling :meth:`sync` will do nothing. """ |
if self._sync:
return # opened in sync mode, so skip the manual sync
self._sync = True
for ekey in self._buffer:
filename = self._key_to_filename(ekey)
self._write_to_file(filename, self._buffer[ekey])
self._buffer.clear()
self._sync = False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _decode_key(self, key):
"""Decode key using hex_codec to retrieve the original key. Keys are returned as :class:`str` if serialization is enabled. Keys are returned as :class:`bytes` if serialization is disabled. """ |
bkey = codecs.decode(key.encode(self._keyencoding), 'hex_codec')
return bkey.decode(self._keyencoding) if self._serialize else bkey |
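The hex_codec round trip that `_decode_key` undoes can be demonstrated in isolation. Encoded keys contain only `[0-9a-f]` characters, which is why the cache can use them directly as filesystem-safe filenames (helper names here are illustrative, not the class's actual API):

```python
import codecs

def encode_key(key, encoding='utf-8'):
    # Hex-encode a key string to produce a filesystem-safe cache filename.
    return codecs.encode(key.encode(encoding), 'hex_codec').decode(encoding)

def decode_key(ekey, encoding='utf-8'):
    # Invert encode_key to recover the original key string.
    return codecs.decode(ekey.encode(encoding), 'hex_codec').decode(encoding)
```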
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _all_filenames(self):
"""Return a list of absolute cache filenames""" |
try:
return [os.path.join(self.cache_dir, filename) for filename in
os.listdir(self.cache_dir)]
        except OSError:  # FileNotFoundError is a subclass of OSError
return [] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _all_keys(self):
"""Return a list of all encoded key names.""" |
file_keys = [self._filename_to_key(fn) for fn in self._all_filenames()]
if self._sync:
return set(file_keys)
else:
return set(file_keys + list(self._buffer)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _write_to_file(self, filename, bytesvalue):
"""Write bytesvalue to filename.""" |
fh, tmp = tempfile.mkstemp()
with os.fdopen(fh, self._flag) as f:
f.write(self._dumps(bytesvalue))
rename(tmp, filename)
os.chmod(filename, self._mode) |
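The write-to-temp-then-rename pattern above gives crash-safe cache writes: readers never observe a partially written file. A minimal stdlib sketch of the same idea, using `os.replace` (atomic on POSIX, and it also overwrites an existing target on Windows) and staging the temp file in the destination directory so the rename stays on one filesystem:

```python
import os
import tempfile

def atomic_write(filename, data):
    # Stage the bytes in a temp file next to the target, then rename
    # it into place; on failure, clean up the temp file.
    fh, tmp = tempfile.mkstemp(dir=os.path.dirname(filename) or '.')
    try:
        with os.fdopen(fh, 'wb') as f:
            f.write(data)
        os.replace(tmp, filename)
    except BaseException:
        os.unlink(tmp)
        raise
```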
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _read_from_file(self, filename):
"""Read data from filename.""" |
try:
with open(filename, 'rb') as f:
return self._loads(f.read())
except (IOError, OSError):
logger.warning('Error opening file: {}'.format(filename))
return None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getDatastreamDissemination(self, pid, dsID, asOfDateTime=None, stream=False, head=False, rqst_headers=None):
"""Get a single datastream on a Fedora object; optionally, get the version as of a particular date time. :param pid: object pid :param dsID: datastream id :param asOfDateTime: optional datetime; ``must`` be a non-naive datetime so it can be converted to a date-time format Fedora can understand :param stream: return a streaming response (default: False); use is recommended for large datastreams :param head: return a HEAD request instead of GET (default: False) :param rqst_headers: request headers to be passed through to Fedora, such as http range requests :rtype: :class:`requests.models.Response` """ |
# /objects/{pid}/datastreams/{dsID}/content ? [asOfDateTime] [download]
http_args = {}
if rqst_headers is None:
rqst_headers = {}
if asOfDateTime:
http_args['asOfDateTime'] = datetime_to_fedoratime(asOfDateTime)
url = 'objects/%(pid)s/datastreams/%(dsid)s/content' % \
{'pid': pid, 'dsid': dsID}
if head:
reqmethod = self.head
else:
reqmethod = self.get
return reqmethod(url, params=http_args, stream=stream, headers=rqst_headers) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def getDissemination(self, pid, sdefPid, method, method_params=None):
'''Get a service dissemination.
.. NOTE:
This method not available in REST API until Fedora 3.3
:param pid: object pid
:param sDefPid: service definition pid
:param method: service method name
:param method_params: method parameters
:rtype: :class:`requests.models.Response`
'''
# /objects/{pid}/methods/{sdefPid}/{method} ? [method parameters]
if method_params is None:
method_params = {}
uri = 'objects/%(pid)s/methods/%(sdefpid)s/%(method)s' % \
{'pid': pid, 'sdefpid': sdefPid, 'method': method}
return self.get(uri, params=method_params) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getObjectProfile(self, pid, asOfDateTime=None):
"""Get top-level information aboug a single Fedora object; optionally, retrieve information as of a particular date-time. :param pid: object pid :param asOfDateTime: optional datetime; ``must`` be a non-naive datetime so it can be converted to a date-time format Fedora can understand :rtype: :class:`requests.models.Response` """ |
# /objects/{pid} ? [format] [asOfDateTime]
http_args = {}
if asOfDateTime:
http_args['asOfDateTime'] = datetime_to_fedoratime(asOfDateTime)
http_args.update(self.format_xml)
url = 'objects/%(pid)s' % {'pid': pid}
return self.get(url, params=http_args) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def listMethods(self, pid, sdefpid=None):
'''List available service methods.
:param pid: object pid
:param sDefPid: service definition pid
:rtype: :class:`requests.models.Response`
'''
# /objects/{pid}/methods ? [format, datetime]
# /objects/{pid}/methods/{sdefpid} ? [format, datetime]
## NOTE: getting an error when sdefpid is specified; fedora issue?
uri = 'objects/%(pid)s/methods' % {'pid': pid}
if sdefpid:
uri += '/' + sdefpid
return self.get(uri, params=self.format_xml) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def addDatastream(self, pid, dsID, dsLabel=None, mimeType=None, logMessage=None,
controlGroup=None, dsLocation=None, altIDs=None, versionable=None,
dsState=None, formatURI=None, checksumType=None, checksum=None, content=None):
'''Add a new datastream to an existing object. On success,
the return response should have a status of 201 Created;
if there is an error, the response body includes the error message.
:param pid: object pid
:param dsID: id for the new datastream
    :param dsLabel: label for the new datastream (optional)
:param mimeType: mimetype for the new datastream (optional)
:param logMessage: log message for the object history (optional)
:param controlGroup: control group for the new datastream (optional)
:param dsLocation: URL where the content should be ingested from
:param altIDs: alternate ids (optional)
:param versionable: configure datastream versioning (optional)
:param dsState: datastream state (optional)
:param formatURI: datastream format (optional)
:param checksumType: checksum type (optional)
:param checksum: checksum (optional)
:param content: datastream content, as a file-like object or
        character data (optional)
:rtype: :class:`requests.models.Response`
'''
# objects/{pid}/datastreams/NEWDS? [opts]
# content via multipart file in request content, or dsLocation=URI
# one of dsLocation or filename must be specified
# if checksum is sent without checksum type, Fedora seems to
# ignore it (does not error on invalid checksum with no checksum type)
if checksum is not None and checksumType is None:
warnings.warn('Fedora will ignore the checksum (%s) because no checksum type is specified' \
% checksum)
http_args = {}
if dsLabel:
http_args['dsLabel'] = dsLabel
if mimeType:
http_args['mimeType'] = mimeType
if logMessage:
http_args['logMessage'] = logMessage
if controlGroup:
http_args['controlGroup'] = controlGroup
if dsLocation:
http_args['dsLocation'] = dsLocation
if altIDs:
http_args['altIDs'] = altIDs
if versionable is not None:
http_args['versionable'] = versionable
if dsState:
http_args['dsState'] = dsState
if formatURI:
http_args['formatURI'] = formatURI
if checksumType:
http_args['checksumType'] = checksumType
if checksum:
http_args['checksum'] = checksum
# Added code to match how content is now handled, see modifyDatastream.
extra_args = {}
# could be a string or a file-like object
if content:
if hasattr(content, 'read'): # if content is a file-like object, warn if no checksum
if not checksum:
logger.warning("File was ingested into fedora without a passed checksum for validation, pid was: %s and dsID was: %s.",
pid, dsID)
extra_args['files'] = {'file': content}
else:
# fedora wants a multipart file upload;
# this seems to work better for handling unicode than
# simply sending content via requests data parameter
extra_args['files'] = {'file': ('filename', content)}
# set content-type header ?
url = 'objects/%s/datastreams/%s' % (pid, dsID)
return self.post(url, params=http_args, **extra_args) |
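The checksum guard at the top of `addDatastream` exists because Fedora silently ignores a checksum sent without a checksum type. That check can be factored into a hypothetical standalone helper to show the behavior:

```python
import warnings

def warn_if_orphan_checksum(checksum, checksum_type):
    # Fedora ignores a checksum that arrives without a checksum type,
    # so surface that condition as a warning instead of failing silently.
    if checksum is not None and checksum_type is None:
        warnings.warn('Fedora will ignore the checksum (%s) because no '
                      'checksum type is specified' % checksum)
        return True
    return False
```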
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def export(self, pid, context=None, format=None, encoding=None,
stream=False):
'''Export an object to be migrated or archived.
:param pid: object pid
:param context: export context, one of: public, migrate, archive
(default: public)
:param format: export format (Fedora default is foxml 1.1)
:param encoding: encoding (Fedora default is UTF-8)
:param stream: if True, request a streaming response to be
read in chunks
:rtype: :class:`requests.models.Response`
'''
http_args = {}
if context:
http_args['context'] = context
if format:
http_args['format'] = format
if encoding:
http_args['encoding'] = encoding
uri = 'objects/%s/export' % pid
return self.get(uri, params=http_args, stream=stream) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getDatastream(self, pid, dsID, asOfDateTime=None, validateChecksum=False):
"""Get information about a single datastream on a Fedora object; optionally, get information for the version of the datastream as of a particular date time. :param pid: object pid :param dsID: datastream id :param asOfDateTime: optional datetime; ``must`` be a non-naive datetime so it can be converted to a date-time format Fedora can understand :param validateChecksum: boolean; if True, request Fedora to recalculate and verify the stored checksum against actual data :rtype: :class:`requests.models.Response` """ |
# /objects/{pid}/datastreams/{dsID} ? [asOfDateTime] [format] [validateChecksum]
http_args = {}
if validateChecksum:
# fedora only responds to lower-case validateChecksum option
http_args['validateChecksum'] = str(validateChecksum).lower()
if asOfDateTime:
http_args['asOfDateTime'] = datetime_to_fedoratime(asOfDateTime)
http_args.update(self.format_xml)
uri = 'objects/%(pid)s/datastreams/%(dsid)s' % {'pid': pid, 'dsid': dsID}
return self.get(uri, params=http_args) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def getDatastreamHistory(self, pid, dsid, format=None):
'''Get history information for a datastream.
:param pid: object pid
:param dsid: datastream id
:param format: format
:rtype: :class:`requests.models.Response`
'''
http_args = {}
if format is not None:
http_args['format'] = format
# Fedora docs say the url should be:
# /objects/{pid}/datastreams/{dsid}/versions
# In Fedora 3.4.3, that 404s but /history does not
uri = 'objects/%(pid)s/datastreams/%(dsid)s/history' % \
{'pid': pid, 'dsid': dsid}
return self.get(uri, params=http_args) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def getRelationships(self, pid, subject=None, predicate=None, format=None):
'''Get information about relationships on an object.
Wrapper function for
`Fedora REST API getRelationships <https://wiki.duraspace.org/display/FEDORA34/REST+API#RESTAPI-getRelationships>`_
:param pid: object pid
:param subject: subject (optional)
:param predicate: predicate (optional)
:param format: format
:rtype: :class:`requests.models.Response`
'''
http_args = {}
if subject is not None:
http_args['subject'] = subject
if predicate is not None:
http_args['predicate'] = predicate
if format is not None:
http_args['format'] = format
url = 'objects/%(pid)s/relationships' % {'pid': pid}
return self.get(url, params=http_args) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def ingest(self, text, logMessage=None):
"""Ingest a new object into Fedora. Returns the pid of the new object on success. Return response should have a status of 201 Created on success, and the content of the response will be the newly created pid. Wrapper function for `Fedora REST API ingest <http://fedora-commons.org/confluence/display/FCR30/REST+API#RESTAPI-ingest>`_ :param text: full text content of the object to be ingested :param logMessage: optional log message :rtype: :class:`requests.models.Response` """ |
# NOTE: ingest method supports additional options for
# label/format/namespace/ownerId, etc - but we generally set
# those in the foxml that is passed in
http_args = {}
if logMessage:
http_args['logMessage'] = logMessage
headers = {'Content-Type': 'text/xml'}
url = 'objects/new'
# if text is unicode, it needs to be encoded so we can send the
# data as bytes; otherwise, we get ascii encode errors in httplib/ssl
if isinstance(text, six.text_type):
text = bytes(text.encode('utf-8'))
return self.post(url, data=text, params=http_args, headers=headers) |
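The str-to-bytes step in `ingest` can be shown on its own. On Python 3, `six.text_type` is simply `str`; this hypothetical helper applies the same rule before handing a payload to the HTTP layer, avoiding implicit (and lossy) ascii encoding deeper in the stack:

```python
def to_request_bytes(text):
    # Encode str payloads as UTF-8 bytes; bytes pass through untouched.
    if isinstance(text, str):
        return text.encode('utf-8')
    return text
```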
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def modifyObject(self, pid, label, ownerId, state, logMessage=None):
'''Modify object properties. Returned response should have
    a status of 200 on success.
:param pid: object pid
:param label: object label
:param ownerId: object owner
:param state: object state
:param logMessage: optional log message
:rtype: :class:`requests.models.Response`
'''
# /objects/{pid} ? [label] [ownerId] [state] [logMessage]
http_args = {'label': label,
'ownerId': ownerId,
'state': state}
if logMessage is not None:
http_args['logMessage'] = logMessage
url = 'objects/%(pid)s' % {'pid': pid}
return self.put(url, params=http_args) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def purgeDatastream(self, pid, dsID, startDT=None, endDT=None, logMessage=None, force=False):
"""Purge a datastream, or specific versions of a dastream, from a Fedora object. On success, response content will include a list of timestamps for the purged datastream versions; on failure, response content may contain an error message. :param pid: object pid :param dsID: datastream ID :param startDT: optional start datetime (when purging specific versions) :param endDT: optional end datetime (when purging specific versions) :param logMessage: optional log message :rtype: :class:`requests.models.Response` """ |
# /objects/{pid}/datastreams/{dsID} ? [startDT] [endDT] [logMessage] [force]
http_args = {}
if logMessage:
http_args['logMessage'] = logMessage
if startDT:
http_args['startDT'] = startDT
if endDT:
http_args['endDT'] = endDT
if force:
http_args['force'] = force
url = 'objects/%(pid)s/datastreams/%(dsid)s' % {'pid': pid, 'dsid': dsID}
return self.delete(url, params=http_args) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def purgeObject(self, pid, logMessage=None):
"""Purge an object from Fedora. Returned response shoudl have a status of 200 on success; response content is a timestamp. Wrapper function for `REST API purgeObject <http://fedora-commons.org/confluence/display/FCR30/REST+API#RESTAPI-purgeObject>`_ :param pid: pid of the object to be purged :param logMessage: optional log message :rtype: :class:`requests.models.Response` """ |
http_args = {}
if logMessage:
http_args['logMessage'] = logMessage
url = 'objects/%(pid)s' % {'pid': pid}
return self.delete(url, params=http_args) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def purgeRelationship(self, pid, subject, predicate, object, isLiteral=False,
datatype=None):
'''Remove a relationship from an object.
Wrapper function for
`Fedora REST API purgeRelationship <https://wiki.duraspace.org/display/FEDORA34/REST+API#RESTAPI-purgeRelationship>`_
:param pid: object pid
:param subject: relationship subject
:param predicate: relationship predicate
:param object: relationship object
:param isLiteral: boolean (default: false)
:param datatype: optional datatype
:returns: boolean; indicates whether or not a relationship was
removed
'''
http_args = {'subject': subject, 'predicate': predicate,
'object': object, 'isLiteral': isLiteral}
if datatype is not None:
http_args['datatype'] = datatype
url = 'objects/%(pid)s/relationships' % {'pid': pid}
response = self.delete(url, params=http_args)
# should have a status code of 200;
# response body text indicates if a relationship was purged or not
return response.status_code == requests.codes.ok and response.content == b'true' |