<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def reload(self):
""" Reloads the space. """ |
result = self._client._get(
self.__class__.base_url(
self.sys['id']
)
)
self._update_from_resource(result)
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete(self):
""" Deletes the space """ |
return self._client._delete(
self.__class__.base_url(
self.sys['id']
)
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_json(self):
""" Returns the JSON representation of the space. """ |
result = super(Space, self).to_json()
result.update({'name': self.name})
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def all(self, query=None, **kwargs):
""" Gets all spaces. """ |
return super(SpacesProxy, self).all(query=query) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find(self, space_id, query=None, **kwargs):
""" Gets a space by ID. """ |
try:
self.space_id = space_id
return super(SpacesProxy, self).find(space_id, query=query)
finally:
self.space_id = None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create(self, attributes=None, **kwargs):
""" Creates a space with given attributes. """ |
if attributes is None:
attributes = {}
if 'default_locale' not in attributes:
attributes['default_locale'] = self.client.default_locale
return super(SpacesProxy, self).create(resource_id=None, attributes=attributes) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete(self, space_id):
""" Deletes a space by ID. """ |
try:
self.space_id = space_id
return super(SpacesProxy, self).delete(space_id)
finally:
self.space_id = None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def editor_interfaces(self, space_id, environment_id, content_type_id):
""" Provides access to editor interfaces management methods. API reference: https://www.contentful.com/developers/docs/references/content-management-api/#/reference/editor-interface :return: :class:`EditorInterfacesProxy <contentful_management.editor_interfaces_proxy.EditorInterfacesProxy>` object. :rtype: contentful.editor_interfaces_proxy.EditorInterfacesProxy Usage: <EditorInterfacesProxy space_id="cfexampleapi" environment_id="master" content_type_id="cat"> """ |
return EditorInterfacesProxy(self, space_id, environment_id, content_type_id) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def snapshots(self, space_id, environment_id, resource_id, resource_kind='entries'):
""" Provides access to snapshot management methods. API reference: https://www.contentful.com/developers/docs/references/content-management-api/#/reference/snapshots :return: :class:`SnapshotsProxy <contentful_management.snapshots_proxy.SnapshotsProxy>` object. :rtype: contentful.snapshots_proxy.SnapshotsProxy Usage: <SnapshotsProxy[entries] space_id="cfexampleapi" environment_id="master" parent_resource_id="nyancat"> <SnapshotsProxy[content_types] space_id="cfexampleapi" environment_id="master" parent_resource_id="cat"> """ |
return SnapshotsProxy(self, space_id, environment_id, resource_id, resource_kind) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def entry_snapshots(self, space_id, environment_id, entry_id):
""" Provides access to entry snapshot management methods. API reference: https://www.contentful.com/developers/docs/references/content-management-api/#/reference/snapshots :return: :class:`SnapshotsProxy <contentful_management.snapshots_proxy.SnapshotsProxy>` object. :rtype: contentful.snapshots_proxy.SnapshotsProxy Usage: <SnapshotsProxy[entries] space_id="cfexampleapi" environment_id="master" parent_resource_id="nyancat"> """ |
return SnapshotsProxy(self, space_id, environment_id, entry_id, 'entries') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def content_type_snapshots(self, space_id, environment_id, content_type_id):
""" Provides access to content type snapshot management methods. API reference: https://www.contentful.com/developers/docs/references/content-management-api/#/reference/snapshots :return: :class:`SnapshotsProxy <contentful_management.snapshots_proxy.SnapshotsProxy>` object. :rtype: contentful.snapshots_proxy.SnapshotsProxy Usage: <SnapshotsProxy[content_types] space_id="cfexampleapi" environment_id="master" parent_resource_id="cat"> """ |
return SnapshotsProxy(self, space_id, environment_id, content_type_id, 'content_types') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _validate_configuration(self):
""" Validates that required parameters are present. """ |
if not self.access_token:
raise ConfigurationException(
'You will need to initialize a client with an Access Token'
)
if not self.api_url:
raise ConfigurationException(
'The client configuration needs to contain an API URL'
)
if not self.default_locale:
raise ConfigurationException(
'The client configuration needs to contain a Default Locale'
)
if not self.api_version or self.api_version < 1:
raise ConfigurationException(
'The API Version must be a positive number'
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _contentful_user_agent(self):
""" Sets the X-Contentful-User-Agent header. """ |
header = {}
from . import __version__
header['sdk'] = {
'name': 'contentful-management.py',
'version': __version__
}
header['app'] = {
'name': self.application_name,
'version': self.application_version
}
header['integration'] = {
'name': self.integration_name,
'version': self.integration_version
}
header['platform'] = {
'name': 'python',
'version': platform.python_version()
}
os_name = platform.system()
if os_name == 'Darwin':
os_name = 'macOS'
elif not os_name or os_name == 'Java':
os_name = None
elif os_name and os_name not in ['macOS', 'Windows']:
os_name = 'Linux'
header['os'] = {
'name': os_name,
'version': platform.release()
}
def format_header(key, values):
header = "{0} {1}".format(key, values['name'])
if values['version'] is not None:
header = "{0}/{1}".format(header, values['version'])
return "{0};".format(header)
result = []
for k, values in header.items():
if not values['name']:
continue
result.append(format_header(k, values))
return ' '.join(result) |
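A minimal, self-contained sketch of the header-formatting step above; the `parts` dict and its values are illustrative, not real client state:

```python
def format_header(key, values):
    # Build a "key name/version;" segment; the version part is optional.
    header = "{0} {1}".format(key, values['name'])
    if values['version'] is not None:
        header = "{0}/{1}".format(header, values['version'])
    return "{0};".format(header)

parts = {
    'sdk': {'name': 'contentful-management.py', 'version': '2.0.0'},
    'app': {'name': None, 'version': None},  # skipped below: no name set
}
result = ' '.join(format_header(k, v) for k, v in parts.items() if v['name'])
# → "sdk contentful-management.py/2.0.0;"
```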
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _url(self, url, file_upload=False):
""" Creates the request URL. """ |
host = self.api_url
if file_upload:
host = self.uploads_api_url
protocol = 'https' if self.https else 'http'
if url.endswith('/'):
url = url[:-1]
return '{0}://{1}/{2}'.format(
protocol,
host,
url
) |
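The same URL assembly can be exercised in isolation; `build_url` is a hypothetical free-function version of `_url` above:

```python
def build_url(host, url, https=True):
    # Choose the protocol and strip one trailing slash from the path.
    protocol = 'https' if https else 'http'
    if url.endswith('/'):
        url = url[:-1]
    return '{0}://{1}/{2}'.format(protocol, host, url)

build_url('api.contentful.com', 'spaces/cfexampleapi/')
# → 'https://api.contentful.com/spaces/cfexampleapi'
```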
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _http_request(self, method, url, request_kwargs=None):
""" Performs the requested HTTP request. """ |
kwargs = request_kwargs if request_kwargs is not None else {}
headers = self._request_headers()
headers.update(self.additional_headers)
if 'headers' in kwargs:
headers.update(kwargs['headers'])
kwargs['headers'] = headers
if self._has_proxy():
kwargs['proxies'] = self._proxy_parameters()
request_url = self._url(
url,
file_upload=kwargs.pop('file_upload', False)
)
request_method = getattr(requests, method)
response = request_method(request_url, **kwargs)
if response.status_code == 429:
raise RateLimitExceededError(response)
return response |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _http_get(self, url, query, **kwargs):
""" Performs the HTTP GET request. """ |
self._normalize_query(query)
kwargs.update({'params': query})
return self._http_request('get', url, kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _http_post(self, url, data, **kwargs):
""" Performs the HTTP POST request. """ |
if not kwargs.get('file_upload', False):
data = json.dumps(data)
kwargs.update({'data': data})
return self._http_request('post', url, kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _http_put(self, url, data, **kwargs):
""" Performs the HTTP PUT request. """ |
kwargs.update({'data': json.dumps(data)})
return self._http_request('put', url, kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _request(self, method, url, query_or_data=None, **kwargs):
""" Wrapper for the HTTP requests, rate limit backoff is handled here, responses are processed with ResourceBuilder. """ |
if query_or_data is None:
query_or_data = {}
request_method = getattr(self, '_http_{0}'.format(method))
response = retry_request(self)(request_method)(url, query_or_data, **kwargs)
if self.raw_mode:
return response
if response.status_code >= 300:
error = get_error(response)
if self.raise_errors:
raise error
return error
# Return response object on NoContent
if response.status_code == 204 or not response.text:
return response
return ResourceBuilder(
self,
self.default_locale,
response.json()
).build() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get(self, url, query=None, **kwargs):
""" Wrapper for the HTTP GET request. """ |
return self._request('get', url, query, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _post(self, url, attributes=None, **kwargs):
""" Wrapper for the HTTP POST request. """ |
return self._request('post', url, attributes, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _put(self, url, attributes=None, **kwargs):
""" Wrapper for the HTTP PUT request. """ |
return self._request('put', url, attributes, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _delete(self, url, **kwargs):
""" Wrapper for the HTTP DELETE request. """ |
response = retry_request(self)(self._http_delete)(url, **kwargs)
if self.raw_mode:
return response
if response.status_code >= 300:
error = get_error(response)
if self.raise_errors:
raise error
return error
return response |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_json(self):
""" Returns the JSON representation of the locale. """ |
result = super(Locale, self).to_json()
result.update({
'code': self.code,
'name': self.name,
'fallbackCode': self.fallback_code,
'optional': self.optional,
'contentDeliveryApi': self.content_delivery_api,
'contentManagementApi': self.content_management_api
})
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def all(self, query=None, **kwargs):
""" Gets all assets of a space. """ |
if query is None:
query = {}
normalize_select(query)
return super(AssetsProxy, self).all(query, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find(self, asset_id, query=None, **kwargs):
""" Gets a single asset by ID. """ |
if query is None:
query = {}
normalize_select(query)
return super(AssetsProxy, self).find(asset_id, query=query, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def snapshots(self):
""" Provides access to snapshot management methods for the given entry. API reference: https://www.contentful.com/developers/docs/references/content-management-api/#/reference/snapshots :return: :class:`EntrySnapshotsProxy <contentful_management.entry_snapshots_proxy.EntrySnapshotsProxy>` object. :rtype: contentful.entry_snapshots_proxy.EntrySnapshotsProxy Usage: <EntrySnapshotsProxy space_id="cfexampleapi" environment_id="master" entry_id="nyancat"> """ |
return EntrySnapshotsProxy(self._client, self.sys['space'].id, self._environment_id, self.sys['id']) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def update(self, attributes=None):
""" Updates the entry with attributes. """ |
if attributes is None:
attributes = {}
attributes['content_type_id'] = self.sys['content_type'].id
return super(Entry, self).update(attributes) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def all(self, query=None):
""" Gets resource collection for _resource_class. """ |
if query is None:
query = {}
return self.client._get(
self._url(),
query
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find(self, resource_id, query=None, **kwargs):
"""Gets a single resource.""" |
if query is None:
query = {}
return self.client._get(
self._url(resource_id),
query,
**kwargs
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete(self, resource_id, **kwargs):
""" Deletes a resource by ID. """ |
return self.client._delete(self._url(resource_id), **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_json(self):
""" Returns the JSON representation of the role. """ |
result = super(Role, self).to_json()
result.update({
'name': self.name,
'description': self.description,
'permissions': self.permissions,
'policies': self.policies
})
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_json(self):
""" Returns the JSON representation of the space membership. """ |
result = super(SpaceMembership, self).to_json()
result.update({
'admin': self.admin,
'roles': self.roles
})
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create(self, file_or_path, **kwargs):
""" Creates an upload for the given file or path. """ |
opened = False
if isinstance(file_or_path, str_type()):
file_or_path = open(file_or_path, 'rb')
opened = True
elif not getattr(file_or_path, 'read', False):
raise Exception("A file or path to a file is required for this operation.")
try:
return self.client._post(
self._url(),
file_or_path,
headers=self._resource_class.create_headers({}),
file_upload=True
)
finally:
if opened:
file_or_path.close() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find(self, upload_id, **kwargs):
""" Finds an upload by ID. """ |
return super(UploadsProxy, self).find(upload_id, file_upload=True) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete(self, upload_id):
""" Deletes an upload by ID. """ |
return super(UploadsProxy, self).delete(upload_id, file_upload=True) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_json(self):
""" Returns the JSON Representation of the content type field. """ |
result = {
'name': self.name,
'id': self._real_id(),
'type': self.type,
'localized': self.localized,
'omitted': self.omitted,
'required': self.required,
'disabled': self.disabled,
'validations': [v.to_json() for v in self.validations]
}
if self.type == 'Array':
result['items'] = self.items
if self.type == 'Link':
result['linkType'] = self.link_type
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def coerce(self, value):
""" Coerces value to location hash. """ |
return {
'lat': float(value.get('lat', value.get('latitude'))),
'lon': float(value.get('lon', value.get('longitude')))
} |
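A standalone version of this coercion, showing that both the short and long key names are accepted; the name `coerce_location` is hypothetical:

```python
def coerce_location(value):
    # Prefer the short keys, fall back to the long ones, and force floats.
    return {
        'lat': float(value.get('lat', value.get('latitude'))),
        'lon': float(value.get('lon', value.get('longitude'))),
    }

coerce_location({'latitude': '52.5', 'longitude': '13.4'})
# → {'lat': 52.5, 'lon': 13.4}
```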
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def base_url(klass, space_id, parent_resource_id, resource_url='entries', resource_id=None, environment_id=None):
""" Returns the URI for the snapshot. """ |
return "spaces/{0}{1}/{2}/{3}/snapshots/{4}".format(
space_id,
'/environments/{0}'.format(environment_id) if environment_id is not None else '',
resource_url,
parent_resource_id,
resource_id if resource_id is not None else ''
) |
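A module-level sketch of the same URI template; the example IDs come from the usage strings earlier in this file:

```python
def snapshot_url(space_id, parent_resource_id, resource_url='entries',
                 resource_id=None, environment_id=None):
    # The environment segment is optional; omitting resource_id targets
    # the snapshots collection itself (note the trailing slash).
    return "spaces/{0}{1}/{2}/{3}/snapshots/{4}".format(
        space_id,
        '/environments/{0}'.format(environment_id) if environment_id is not None else '',
        resource_url,
        parent_resource_id,
        resource_id if resource_id is not None else ''
    )

snapshot_url('cfexampleapi', 'nyancat', environment_id='master')
# → 'spaces/cfexampleapi/environments/master/entries/nyancat/snapshots/'
```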
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_json(self):
""" Returns the JSON representation of the snapshot. """ |
result = super(Snapshot, self).to_json()
result.update({
'snapshot': self.snapshot.to_json(),
})
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def all(self, *args, **kwargs):
""" Gets all usage periods. """ |
return self.client._get(
self._url(),
{},
headers={
'x-contentful-enable-alpha-feature': 'usage-insights'
}
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_attributes(klass, attributes, previous_object=None):
""" Attributes for webhook creation. """ |
result = super(Webhook, klass).create_attributes(attributes, previous_object)
if 'topics' not in result:
raise Exception("Topics ('topics') must be provided for this operation.")
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_json(self):
""" Returns the JSON representation of the webhook. """ |
result = super(Webhook, self).to_json()
result.update({
'name': self.name,
'url': self.url,
'topics': self.topics,
'httpBasicUsername': self.http_basic_username,
'headers': self.headers
})
if self.filters:
result.update({'filters': self.filters})
if self.transformation:
result.update({'transformation': self.transformation})
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def base_url(self, space_id, content_type_id, environment_id=None, **kwargs):
""" Returns the URI for the editor interface. """ |
return "spaces/{0}{1}/content_types/{2}/editor_interface".format(
space_id,
'/environments/{0}'.format(environment_id) if environment_id is not None else '',
content_type_id
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_json(self):
""" Returns the JSON representation of the editor interface. """ |
result = super(EditorInterface, self).to_json()
result.update({'controls': self.controls})
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_json(self):
""" Returns the JSON Representation of the content type field validation. """ |
result = {}
for k, v in self._data.items():
result[camel_case(k)] = v
return result |
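The `camel_case` helper is not shown in this file; a plausible minimal implementation, and the key translation it produces on validation data:

```python
def camel_case(snake_str):
    # 'fallback_code' → 'fallbackCode': keep the first word, title-case the rest.
    first, *rest = snake_str.split('_')
    return first + ''.join(word.title() for word in rest)

data = {'fallback_code': 'en-US', 'content_delivery_api': True}
{camel_case(k): v for k, v in data.items()}
# → {'fallbackCode': 'en-US', 'contentDeliveryApi': True}
```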
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def build(self):
""" Creates the objects from the JSON response. """ |
if self.json['sys']['type'] == 'Array':
return self._build_array()
return self._build_item(self.json) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find(self, resource_id, query=None):
""" Finds a single resource by ID related to the current space. """ |
return self.proxy.find(resource_id, query=query) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def get_ngroups(self, field=None):
'''
Returns the ngroups count if it was specified in the query, otherwise raises ValueError.
If grouping on more than one field, provide the field argument to specify which count you are looking for.
'''
field = field if field else self._determine_group_field(field)
if 'ngroups' in self.data['grouped'][field]:
return self.data['grouped'][field]['ngroups']
raise ValueError("ngroups not found in response. specify group.ngroups in the query.") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def get_groups_count(self, field=None):
'''
Returns 'matches' from group response.
If grouping on more than one field, provide the field argument to specify which count you are looking for.
'''
field = field if field else self._determine_group_field(field)
if 'matches' in self.data['grouped'][field]:
return self.data['grouped'][field]['matches']
raise ValueError("group matches not found in response") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def get_flat_groups(self, field=None):
'''
Flattens the group response and just returns a list of documents.
'''
field = field if field else self._determine_group_field(field)
temp_groups = self.data['grouped'][field]['groups']
return [y for x in temp_groups for y in x['doclist']['docs']] |
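The nested comprehension above flattens one level of Solr's grouped response; with hypothetical data:

```python
grouped = {
    'groups': [
        {'doclist': {'docs': [{'id': 1}, {'id': 2}]}},
        {'doclist': {'docs': [{'id': 3}]}},
    ]
}
# One document list across all groups, in group order.
docs = [doc for group in grouped['groups'] for doc in group['doclist']['docs']]
# → [{'id': 1}, {'id': 2}, {'id': 3}]
```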
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _gen_file_name(self):
'''
Generates a random file name based on self._output_filename_pattern for the output to do file.
'''
date = datetime.datetime.now()
dt = "{}-{}-{}-{}-{}-{}-{}".format(str(date.year),str(date.month),str(date.day),str(date.hour),str(date.minute),str(date.second),str(random.randint(0,10000)))
return self._output_filename_pattern.format(dt) |
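A runnable sketch of the same name generation, assuming a `'todo_{}.json'` pattern (the real pattern lives in `self._output_filename_pattern`):

```python
import datetime
import random

def gen_file_name(pattern='todo_{}.json'):
    # A timestamp plus a random suffix keeps concurrent writers from colliding.
    date = datetime.datetime.now()
    dt = "{}-{}-{}-{}-{}-{}-{}".format(
        date.year, date.month, date.day,
        date.hour, date.minute, date.second,
        random.randint(0, 10000))
    return pattern.format(dt)

gen_file_name()  # e.g. 'todo_2024-5-17-14-3-52-8713.json'
```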
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def add(self, item=None, finalize=False, callback=None):
'''
Takes a string, dictionary or list of items for adding to the queue. To help troubleshoot, it will output the updated buffer size; however, when the content gets written it will output the file path of the new file. Generally this can be safely discarded.
:param <dict,list> item: Item to add to the queue. If dict will be converted directly to a list and then to json. List must be a list of dictionaries. If a string is submitted, it will be written out as-is immediately and not buffered.
:param bool finalize: If items are buffered internally, it will flush them to disk and return the file name.
:param callback: A callback function that will be called when the item gets written to disk. It will be passed one position argument, the file path of the file written. Note that errors from the callback method will not be re-raised here.
'''
if item:
if type(item) is list:
check = list(set([type(d) for d in item]))
if len(check) > 1 or dict not in check:
raise ValueError("More than one data type detected in item (list). Make sure they are all dicts of data going to Solr")
elif type(item) is dict:
item = [item]
elif type(item) is str:
return self._write_file(item)
else:
raise ValueError("Not the right data submitted. Make sure you are sending a dict or list of dicts")
with self._rlock:
res = self._preprocess(item, finalize, callback)
return res |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _lock(self):
'''
Locks, or returns False if already locked
'''
if not self._is_locked():
with open(self._lck,'w') as fh:
if self._devel: self.logger.debug("Locking")
fh.write(str(os.getpid()))
return True
else:
return False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _is_locked(self):
'''
Checks to see if we are already pulling items from the queue
'''
if os.path.isfile(self._lck):
try:
import psutil
except ImportError:
return True #Lock file exists and no psutil
#If psutil is imported
with open(self._lck) as f:
pid = f.read()
return psutil.pid_exists(int(pid))
else:
return False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _unlock(self):
'''
Unlocks the index
'''
if self._devel: self.logger.debug("Unlocking Index")
if self._is_locked():
os.remove(self._lck)
return True
else:
return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def get_all_as_list(self, dir='_todo_dir'):
'''
Returns a list of the full paths to all items currently in the todo directory. The items will be listed in ascending order based on filesystem time.
This will re-scan the directory on each execution.
Do not use this to process items; this method should only be used for troubleshooting or something ancillary. To process items, use the get_todo_items() iterator.
'''
path = getattr(self, dir)
files = [x for x in os.listdir(path) if x.endswith('.json') or x.endswith('.json.gz')]
full = [os.path.join(path, x) for x in files]
full.sort(key=lambda x: os.path.getmtime(x))
return full |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def get_todo_items(self, **kwargs):
'''
Returns an iterator that will provide each item in the todo queue. Note that to complete each item you have to call the complete() method with the output of this iterator.
That will move the item to the done directory and prevent it from being retrieved in the future.
'''
def inner(self):
for item in self.get_all_as_list():
yield item
self._unlock()
if not self._is_locked():
if self._lock():
return inner(self)
raise RuntimeError("RuntimeError: Index Already Locked") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def complete(self, filepath):
'''
Marks the item as complete by moving it to the done directory and optionally gzipping it.
'''
if not os.path.exists(filepath):
raise FileNotFoundError("Can't Complete {}, it doesn't exist".format(filepath))
if self._devel: self.logger.debug("Completing - {} ".format(filepath))
if self.rotate_complete:
try:
complete_dir = str(self.rotate_complete())
except Exception as e:
self.logger.error("rotate_complete function failed with the following exception.")
self.logger.exception(e)
raise
newdir = os.path.join(self._done_dir, complete_dir)
newpath = os.path.join(newdir, os.path.split(filepath)[-1] )
if not os.path.isdir(newdir):
self.logger.debug("Making new directory: {}".format(newdir))
os.makedirs(newdir)
else:
newpath = os.path.join(self._done_dir, os.path.split(filepath)[-1] )
try:
if self._compress_complete:
if not filepath.endswith('.gz'):
# Compressing complete, but existing file not compressed
# Compress and move it and kick out
newpath += '.gz'
self._compress_and_move(filepath, newpath)
return newpath
# else the file is already compressed and can just be moved
#if not compressing completed file, just move it
shutil.move(filepath, newpath)
self.logger.info(" Completed - {}".format(filepath))
except Exception as e:
self.logger.error("Couldn't Complete {}".format(filepath))
self.logger.exception(e)
raise
return newpath |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def get_all_json_from_indexq(self):
'''
Gets all data from the todo files in indexq and returns one huge list of all data.
'''
files = self.get_all_as_list()
out = []
for efile in files:
out.extend(self._open_file(efile))
return out |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _retry(function):
""" Internal mechanism to try to send data to multiple Solr Hosts if the query fails on the first one. """ |
def inner(self, **kwargs):
last_exception = None
#for host in self.router.get_hosts(**kwargs):
for host in self.host:
try:
return function(self, host, **kwargs)
except SolrError as e:
self.logger.exception(e)
raise
except ConnectionError as e:
self.logger.exception("Tried connecting to Solr, but couldn't because of the following exception.")
if '401' in e.__str__():
raise
last_exception = e
# raise the last exception after contacting all hosts instead of returning None
if last_exception is not None:
raise last_exception
return inner |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def check_zk(self):
'''
Will attempt to telnet to each zookeeper that is used by SolrClient and issue 'mntr' command. Response is parsed to check to see if the
zookeeper node is a leader or a follower and returned as a dict.
If the telnet connection fails or the proper response is not parsed, the zk node will be listed as 'down' in the dict. Desired values are
either follower or leader.
'''
import telnetlib
temp = self.zk_hosts.split('/')
zks = temp[0].split(',')
status = {}
for zk in zks:
self.logger.debug("Checking {}".format(zk))
host, port = zk.split(':')
try:
t = telnetlib.Telnet(host, port=int(port))
t.write('mntr'.encode('ascii'))
r = t.read_all()
for out in r.decode('utf-8').split('\n'):
if out:
param, val = out.split('\t')
if param == 'zk_server_state':
status[zk] = val
except Exception as e:
self.logger.error("Unable to reach ZK: {}".format(zk))
self.logger.exception(e)
status[zk] = 'down'
#assert len(zks) == len(status)
return status |
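The response parsing inside that loop can be isolated into a small helper (`parse_mntr` is a hypothetical name); the `mntr` output is newline-separated, tab-delimited key/value pairs:

```python
def parse_mntr(raw):
    # Pull zk_server_state out of a raw 'mntr' response
    # (tab-separated key/value pairs, one per line).
    state = None
    for out in raw.split('\n'):
        if out:
            param, val = out.split('\t')
            if param == 'zk_server_state':
                state = val
    return state

sample = "zk_version\t3.4.6\nzk_server_state\tleader\nzk_znode_count\t5\n"
state = parse_mntr(sample)
```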
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def copy_config(self, original, new):
'''
Copies collection configs into a new folder. Can be used to create new collections based on existing configs.
Basically, copies all nodes under /configs/original to /configs/new.
:param original str: ZK name of original config
:param new str: New name of the ZK config.
'''
if not self.kz.exists('/configs/{}'.format(original)):
raise ZookeeperError("Collection doesn't exist in Zookeeper. Current Collections are: {}".format(self.kz.get_children('/configs')))
base = '/configs/{}'.format(original)
nbase = '/configs/{}'.format(new)
self._copy_dir(base, nbase) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def download_collection_configs(self, collection, fs_path):
'''
Downloads ZK Directory to the FileSystem.
:param collection str: Name of the collection (zk config name)
:param fs_path str: Destination filesystem path.
'''
if not self.kz.exists('/configs/{}'.format(collection)):
raise ZookeeperError("Collection doesn't exist in Zookeeper. Current Collections are: {} ".format(self.kz.get_children('/configs')))
self._download_dir('/configs/{}'.format(collection), fs_path + os.sep + collection) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def upload_collection_configs(self, collection, fs_path):
'''
Uploads collection configurations from a specified directory to zookeeper.
'''
coll_path = fs_path
if not os.path.isdir(coll_path):
raise ValueError("{} Doesn't Exist".format(coll_path))
self._upload_dir(coll_path, '/configs/{}'.format(collection)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def delete_field(self,collection,field_name):
'''
Deletes a field from the Solr Collection. Will raise ValueError if the field doesn't exist.
:param string collection: Name of the collection for the action
:param string field_name: String name of the field.
'''
if not self.does_field_exist(collection,field_name):
raise ValueError("Field {} Doesn't Exist in Solr Collection {}".format(field_name,collection))
else:
temp = {"delete-field" : { "name":field_name }}
res, con_info = self.solr.transport.send_request(method='POST',endpoint=self.schema_endpoint,collection=collection, data=json.dumps(temp))
return res |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def create_copy_field(self,collection,copy_dict):
'''
Creates a copy field.
copy_dict should look like ::
{'source':'source_field_name','dest':'destination_field_name'}
:param string collection: Name of the collection for the action
:param dict copy_field: Dictionary of field info
Reference: https://cwiki.apache.org/confluence/display/solr/Schema+API#SchemaAPI-AddaNewCopyFieldRule
'''
temp = {"add-copy-field":dict(copy_dict)}
res, con_info = self.solr.transport.send_request(method='POST',endpoint=self.schema_endpoint,collection=collection, data=json.dumps(temp))
return res |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def delete_copy_field(self, collection, copy_dict):
'''
Deletes a copy field.
copy_dict should look like ::
{'source':'source_field_name','dest':'destination_field_name'}
:param string collection: Name of the collection for the action
:param dict copy_field: Dictionary of field info
'''
#Fix this later to check for field before sending a delete
if self.devel:
self.logger.debug("Deleting {}".format(str(copy_dict)))
copyfields = self.get_schema_copyfields(collection)
if copy_dict not in copyfields:
self.logger.info("Fieldset not in Solr Copy Fields: {}".format(str(copy_dict)))
temp = {"delete-copy-field": dict(copy_dict)}
res, con_info = self.solr.transport.send_request(method='POST',endpoint=self.schema_endpoint,collection=collection, data=json.dumps(temp))
return res |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def start_virtual_display(self, width=1440, height=900, colordepth=24, **kwargs):
"""Starts virtual display which will be destroyed after test execution will be end *Arguments:* - width: a width to be set in pixels - height: a height to be set in pixels - color_depth: a color depth to be used - kwargs: extra parameters *Example:* | Start Virtual Display | | Start Virtual Display | 1920 | 1080 | | Start Virtual Display | ${1920} | ${1080} | ${16} | """ |
if self._display is None:
logger.info("Using virtual display: '{0}x{1}x{2}'".format(
width, height, colordepth))
self._display = Xvfb(int(width), int(height),
int(colordepth), **kwargs)
self._display.start()
atexit.register(self._display.stop) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def clusterstatus(self):
""" Returns a slightly slimmed down version of the clusterstatus api command. It also gets count of documents in each shard on each replica and returns it as doc_count key for each replica. """ |
res = self.cluster_status_raw()
cluster = res['cluster']['collections']
out = {}
try:
for collection in cluster:
out[collection] = {}
for shard in cluster[collection]['shards']:
out[collection][shard] = {}
for replica in cluster[collection]['shards'][shard]['replicas']:
out[collection][shard][replica] = cluster[collection]['shards'][shard]['replicas'][replica]
if out[collection][shard][replica]['state'] != 'active':
out[collection][shard][replica]['doc_count'] = False
else:
out[collection][shard][replica]['doc_count'] = self._get_collection_counts(
out[collection][shard][replica])
except Exception as e:
self.logger.error("Couldn't parse response from clusterstatus API call")
self.logger.exception(e)
return out |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create(self, name, numShards, params=None):
""" Create a new collection. """ |
if params is None:
params = {}
params.update(
name=name,
numShards=numShards
)
return self.api('CREATE', params) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_collection_counts(self, core_data):
""" Queries each core to get individual counts for each core for each shard. """ |
if core_data['base_url'] not in self.solr_clients:
from SolrClient import SolrClient
self.solr_clients[core_data['base_url']] = SolrClient(core_data['base_url'], log=self.logger)
try:
return self.solr_clients[core_data['base_url']].query(core_data['core'],
{'q': '*:*',
'rows': 0,
'distrib': 'false',
}).get_num_found()
except Exception as e:
self.logger.error("Couldn't get Counts for {}/{}".format(core_data['base_url'], core_data['core']))
self.logger.exception(e)
return False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _from_solr(self, fq=[], report_frequency = 25):
'''
Method for retrieving batch data from Solr.
'''
cursor = '*'
stime = datetime.now()
query_count = 0
while True:
#Get data with starting cursorMark
query = self._get_query(cursor)
#Add FQ to the query. This is used by resume to filter on date fields and when specifying document subset.
#Not included in _get_query for more flexibiilty.
if fq:
if 'fq' in query:
query['fq'].extend(fq)
else:
query['fq'] = fq
results = self._source.query(self._source_coll, query)
query_count += 1
if query_count % report_frequency == 0:
self.log.info("Processed {} Items in {} Seconds. Apprximately {} items/minute".format(
self._items_processed, int((datetime.now()-stime).seconds),
str(int(self._items_processed / ((datetime.now()-stime).seconds/60)))
))
if results.get_results_count():
#If we got items back, get the new cursor and yield the docs
self._items_processed += results.get_results_count()
cursor = results.get_cursor()
#Remove ignore fields
docs = self._trim_fields(results.docs)
yield docs
if results.get_results_count() < self._rows:
#Less results than asked, probably done
break
else:
#No Results, probably done :)
self.log.debug("Got zero Results with cursor: {}".format(cursor))
break |
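Stripped of the Solr specifics, the cursorMark loop above is plain deep paging: feed the cursor from the previous page back in, and stop on an empty or short page. A minimal sketch against a fake query function:

```python
def paginate(query_fn, rows):
    # query_fn(cursor, rows) -> (docs, next_cursor); a stand-in for
    # the real Solr query call in the method above.
    cursor = '*'
    while True:
        docs, next_cursor = query_fn(cursor, rows)
        if not docs:
            break  # no results, done
        yield docs
        cursor = next_cursor
        if len(docs) < rows:
            break  # short page, done

pages = {'*': ([1, 2], 'AAA'), 'AAA': ([3], 'BBB')}

def fake_query(cursor, rows):
    return pages.get(cursor, ([], cursor))

batches = list(paginate(fake_query, 2))  # two pages, then a short page stops it
```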
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _trim_fields(self, docs):
'''
Removes ignore fields from the data that we got from Solr.
'''
for doc in docs:
for field in self._ignore_fields:
if field in doc:
del(doc[field])
return docs |
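In isolation the field trimming is just a nested delete; note that it mutates the docs in place as well as returning them:

```python
def trim_fields(docs, ignore_fields):
    # Drop ignored keys from each doc in place and return the list.
    for doc in docs:
        for field in ignore_fields:
            if field in doc:
                del doc[field]
    return docs

docs = [{'id': 1, '_version_': 1530000000}, {'id': 2}]
trim_fields(docs, ['_version_'])
```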
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _get_query(self, cursor):
'''
Query template for source Solr; sorts by id by default.
'''
query = {'q':'*:*',
'sort':'id desc',
'rows':self._rows,
'cursorMark':cursor}
if self._date_field:
query['sort'] = "{} asc, id desc".format(self._date_field)
if self._per_shard:
query['distrib'] = 'false'
return query |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _to_solr(self, data):
'''
Sends data to a Solr instance.
'''
return self._dest.index_json(self._dest_coll, json.dumps(data,sort_keys=True)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _get_edge_date(self, date_field, sort):
'''
This method is used to get start and end dates for the collection.
'''
return self._source.query(self._source_coll, {
'q':'*:*',
'rows':1,
'fq':'+{}:*'.format(date_field),
'sort':'{} {}'.format(date_field, sort)}).docs[0][date_field] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _get_date_facet_counts(self, timespan, date_field, start_date=None, end_date=None):
'''
Returns Range Facet counts for the given timespan and date field from both the source and destination collections.
'''
if 'DAY' not in timespan:
raise ValueError("At this time, only DAY date range increment is supported. Aborting..... ")
#Need to do this a bit better later. Don't like the string and date concatenations.
if not start_date:
start_date = self._get_edge_date(date_field, 'asc')
start_date = datetime.strptime(start_date,'%Y-%m-%dT%H:%M:%S.%fZ').date().isoformat()+'T00:00:00.000Z'
else:
start_date = start_date+'T00:00:00.000Z'
if not end_date:
end_date = self._get_edge_date(date_field, 'desc')
end_date = datetime.strptime(end_date,'%Y-%m-%dT%H:%M:%S.%fZ').date()
end_date += timedelta(days=1)
end_date = end_date.isoformat()+'T00:00:00.000Z'
else:
end_date = end_date+'T00:00:00.000Z'
self.log.info("Processing Items from {} to {}".format(start_date, end_date))
#Get facet counts for source and destination collections
source_facet = self._source.query(self._source_coll,
self._get_date_range_query(timespan=timespan, start_date=start_date, end_date=end_date)
).get_facets_ranges()[date_field]
dest_facet = self._dest.query(
self._dest_coll, self._get_date_range_query(
timespan=timespan, start_date=start_date, end_date=end_date
)).get_facets_ranges()[date_field]
return source_facet, dest_facet |
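The edge-date normalisation above — clamp the start to midnight, push the end a day forward so the final day is fully covered by the DAY range facet — can be checked on its own; the timestamp format matches Solr's:

```python
from datetime import datetime, timedelta

FMT = '%Y-%m-%dT%H:%M:%S.%fZ'

def day_bounds(start_raw, end_raw):
    # Clamp start to 00:00 of its day; extend end to 00:00 of the
    # day *after* its day, so the range covers the whole last day.
    start = datetime.strptime(start_raw, FMT).date().isoformat() + 'T00:00:00.000Z'
    end_date = datetime.strptime(end_raw, FMT).date() + timedelta(days=1)
    end = end_date.isoformat() + 'T00:00:00.000Z'
    return start, end

start, end = day_bounds('2016-01-01T13:45:00.000Z', '2016-01-03T02:10:00.000Z')
```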
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def renegotiate_keys(self):
""" Force this session to switch to new keys. Normally this is done automatically after the session hits a certain number of packets or bytes sent or received, but this method gives you the option of forcing new keys whenever you want. Negotiating new keys causes a pause in traffic both ways as the two sides swap keys and do computations. This method returns when the session has switched to new keys. @raise SSHException: if the key renegotiation failed (which causes the session to end) """ |
self.completion_event = threading.Event()
self._send_kex_init()
while True:
self.completion_event.wait(0.1)
if not self.active:
e = self.get_exception()
if e is not None:
raise e
raise SSHException('Negotiation failed.')
if self.completion_event.isSet():
break
return |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _send_user_message(self, data):
""" send a message, but block if we're in key negotiation. this is used for user-initiated requests. """ |
start = time.time()
while True:
self.clear_to_send.wait(0.1)
if not self.active:
self._log(DEBUG, 'Dropping user packet because connection is dead.')
return
self.clear_to_send_lock.acquire()
if self.clear_to_send.isSet():
break
self.clear_to_send_lock.release()
if time.time() > start + self.clear_to_send_timeout:
raise SSHException('Key-exchange timed out waiting for key negotiation')
try:
self._send_message(data)
finally:
self.clear_to_send_lock.release() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _generate_prime(bits, rng):
"primtive attempt at prime generation"
hbyte_mask = pow(2, bits % 8) - 1
while True:
# loop catches the case where we increment n into a higher bit-range
x = rng.read((bits+7) // 8)
if hbyte_mask > 0:
x = chr(ord(x[0]) & hbyte_mask) + x[1:]
n = util.inflate_long(x, 1)
n |= 1
n |= (1 << (bits - 1))
while not number.isPrime(n):
n += 2
if util.bit_length(n) == bits:
break
return n |
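The mask arithmetic in `_generate_prime` deserves a note: for a bit count that is not a multiple of 8, the top byte of the random read must be masked down to `bits % 8` bits, and `(bits + 7) // 8` rounds the byte count up. Pulled out as standalone helpers:

```python
def high_byte_mask(bits):
    # Keep only the low (bits % 8) bits of the most significant byte;
    # 0 means the bit count is byte-aligned and no masking is needed.
    return pow(2, bits % 8) - 1

def byte_count(bits):
    # Number of bytes needed to hold `bits` bits, rounded up.
    return (bits + 7) // 8

m12 = high_byte_mask(12)   # 12 % 8 = 4 -> 0b1111
m16 = high_byte_mask(16)   # byte-aligned -> 0
n12 = byte_count(12)       # 2 bytes
```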
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_string(self, s):
""" Add a string to the stream. @param s: string to add @type s: str """ |
self.add_int(len(s))
self.packet.write(s)
return self |
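The `add_string` encoding is the SSH wire format from RFC 4251: a 4-byte big-endian length prefix followed by the raw bytes. A standalone sketch of the same packing:

```python
import struct

def pack_ssh_string(s):
    # RFC 4251 'string': uint32 length (big-endian) + raw bytes.
    return struct.pack('>I', len(s)) + s

packet = pack_ssh_string(b'ssh-rsa')
```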
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def agent_auth(transport, username):
""" Attempt to authenticate to the given transport using any of the private keys available from an SSH agent. """ |
agent = ssh.Agent()
agent_keys = agent.get_keys()
if len(agent_keys) == 0:
return
for key in agent_keys:
print 'Trying ssh-agent key %s' % hexlify(key.get_fingerprint()),
try:
transport.auth_publickey(username, key)
print '... success!'
return
except ssh.SSHException:
print '... nope.' |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _read_prefetch(self, size):
""" read data out of the prefetch buffer, if possible. if the data isn't in the buffer, return None. otherwise, behaves like a normal read. """ |
# while not closed, and haven't fetched past the current position, and haven't reached EOF...
while True:
offset = self._data_in_prefetch_buffers(self._realpos)
if offset is not None:
break
if self._prefetch_done or self._closed:
break
self.sftp._read_response()
self._check_exception()
if offset is None:
self._prefetching = False
return None
prefetch = self._prefetch_data[offset]
del self._prefetch_data[offset]
buf_offset = self._realpos - offset
if buf_offset > 0:
self._prefetch_data[offset] = prefetch[:buf_offset]
prefetch = prefetch[buf_offset:]
if size < len(prefetch):
self._prefetch_data[self._realpos + size] = prefetch[size:]
prefetch = prefetch[:size]
return prefetch |
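The slicing at the end of `_read_prefetch` — trim bytes before the read position, cap the result at `size`, and re-shelve any surplus under a new offset — can be exercised in isolation (hypothetical helper, not paramiko API):

```python
def split_prefetch(offset, chunk, realpos, size):
    # Returns (bytes to serve, {offset: leftover} to put back in the
    # prefetch buffer). Mirrors the buffer bookkeeping above.
    leftover = {}
    buf_offset = realpos - offset
    if buf_offset > 0:
        leftover[offset] = chunk[:buf_offset]
        chunk = chunk[buf_offset:]
    if size < len(chunk):
        leftover[realpos + size] = chunk[size:]
        chunk = chunk[:size]
    return chunk, leftover

# chunk covers file positions 100..107; read 3 bytes from position 102
served, leftover = split_prefetch(100, b'abcdefgh', 102, 3)
```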
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _set_mode(self, mode='r', bufsize=-1):
""" Subclasses call this method to initialize the BufferedFile. """ |
# set bufsize in any event, because it's used for readline().
self._bufsize = self._DEFAULT_BUFSIZE
if bufsize < 0:
# do no buffering by default, because otherwise writes will get
# buffered in a way that will probably confuse people.
bufsize = 0
if bufsize == 1:
# apparently, line buffering only affects writes. reads are only
# buffered if you call readline (directly or indirectly: iterating
# over a file will indirectly call readline).
self._flags |= self.FLAG_BUFFERED | self.FLAG_LINE_BUFFERED
elif bufsize > 1:
self._bufsize = bufsize
self._flags |= self.FLAG_BUFFERED
self._flags &= ~self.FLAG_LINE_BUFFERED
elif bufsize == 0:
# unbuffered
self._flags &= ~(self.FLAG_BUFFERED | self.FLAG_LINE_BUFFERED)
if ('r' in mode) or ('+' in mode):
self._flags |= self.FLAG_READ
if ('w' in mode) or ('+' in mode):
self._flags |= self.FLAG_WRITE
if ('a' in mode):
self._flags |= self.FLAG_WRITE | self.FLAG_APPEND
self._size = self._get_size()
self._pos = self._realpos = self._size
if ('b' in mode):
self._flags |= self.FLAG_BINARY
if ('U' in mode):
self._flags |= self.FLAG_UNIVERSAL_NEWLINE
# built-in file objects have this attribute to store which kinds of
# line terminations they've seen:
# <http://www.python.org/doc/current/lib/built-in-funcs.html>
self.newlines = None |
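The mode parsing reduces to OR-ing bit flags; a toy version of the read/write/append part (the flag values here are illustrative, not paramiko's actual constants):

```python
FLAG_READ = 0x1
FLAG_WRITE = 0x2
FLAG_APPEND = 0x4
FLAG_BINARY = 0x10

def mode_flags(mode):
    # '+' implies both read and write; 'a' implies write + append.
    flags = 0
    if 'r' in mode or '+' in mode:
        flags |= FLAG_READ
    if 'w' in mode or '+' in mode:
        flags |= FLAG_WRITE
    if 'a' in mode:
        flags |= FLAG_WRITE | FLAG_APPEND
    if 'b' in mode:
        flags |= FLAG_BINARY
    return flags

rw = mode_flags('r+')   # read | write
ab = mode_flags('ab')   # write | append | binary
```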
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _request(self, method, path, data=None, reestablish_session=True):
"""Perform HTTP request for REST API.""" |
if path.startswith("http"):
url = path # For cases where URL of different form is needed.
else:
url = self._format_path(path)
headers = {"Content-Type": "application/json"}
if self._user_agent:
headers['User-Agent'] = self._user_agent
body = json.dumps(data).encode("utf-8")
try:
response = requests.request(method, url, data=body, headers=headers,
cookies=self._cookies, **self._request_kwargs)
except requests.exceptions.RequestException as err:
# error outside scope of HTTP status codes
# e.g. unable to resolve domain name
raise PureError(str(err))
if response.status_code == 200:
if "application/json" in response.headers.get("Content-Type", ""):
if response.cookies:
self._cookies.update(response.cookies)
else:
self._cookies.clear()
content = response.json()
if isinstance(content, list):
content = ResponseList(content)
elif isinstance(content, dict):
content = ResponseDict(content)
content.headers = response.headers
return content
raise PureError("Response not in JSON: " + response.text)
elif response.status_code == 401 and reestablish_session:
self._start_session()
return self._request(method, path, data, False)
elif response.status_code == 450 and self._renegotiate_rest_version:
# Purity REST API version is incompatible.
old_version = self._rest_version
self._rest_version = self._choose_rest_version()
if old_version == self._rest_version:
# Got 450 error, but the rest version was supported
# Something really unexpected happened.
raise PureHTTPError(self._target, str(self._rest_version), response)
return self._request(method, path, data, reestablish_session)
else:
raise PureHTTPError(self._target, str(self._rest_version), response) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _check_rest_version(self, version):
"""Validate a REST API version is supported by the library and target array.""" |
version = str(version)
if version not in self.supported_rest_versions:
msg = "Library is incompatible with REST API version {0}"
raise ValueError(msg.format(version))
array_rest_versions = self._list_available_rest_versions()
if version not in array_rest_versions:
msg = "Array is incompatible with REST API version {0}"
raise ValueError(msg.format(version))
return LooseVersion(version) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _choose_rest_version(self):
"""Return the newest REST API version supported by target array.""" |
versions = self._list_available_rest_versions()
versions = [LooseVersion(x) for x in versions if x in self.supported_rest_versions]
if versions:
return max(versions)
else:
raise PureError(
"Library is incompatible with all REST API versions supported"
"by the target array.") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _list_available_rest_versions(self):
"""Return a list of the REST API versions supported by the array""" |
url = "https://{0}/api/api_version".format(self._target)
data = self._request("GET", url, reestablish_session=False)
return data["version"] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _obtain_api_token(self, username, password):
"""Use username and password to obtain and return an API token.""" |
data = self._request("POST", "auth/apitoken",
{"username": username, "password": password},
reestablish_session=False)
return data["api_token"] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_snapshots(self, volumes, **kwargs):
"""Create snapshots of the listed volumes. :param volumes: List of names of the volumes to snapshot. :type volumes: list of str :param \*\*kwargs: See the REST API Guide on your array for the documentation on the request: **POST volume** :type \*\*kwargs: optional :returns: A list of dictionaries describing the new snapshots. :rtype: ResponseDict """ |
data = {"source": volumes, "snap": True}
data.update(kwargs)
return self._request("POST", "volume", data) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_volume(self, volume, size, **kwargs):
"""Create a volume and return a dictionary describing it. :param volume: Name of the volume to be created. :type volume: str :param size: Size in bytes, or string representing the size of the volume to be created. :type size: int or str :param \*\*kwargs: See the REST API Guide on your array for the documentation on the request: **POST volume/:volume** :type \*\*kwargs: optional :returns: A dictionary describing the created volume. :rtype: ResponseDict .. note:: The maximum volume size supported is 4 petabytes (4 * 2^50). .. note:: If size is an int, it must be a multiple of 512. .. note:: If size is a string, it must consist of an integer followed by a valid suffix. Accepted Suffixes ====== ======== ====== Suffix Size Bytes ====== ======== ====== S Sector (2^9) K Kilobyte (2^10) M Megabyte (2^20) G Gigabyte (2^30) T Terabyte (2^40) P Petabyte (2^50) ====== ======== ====== """ |
data = {"size": size}
data.update(kwargs)
return self._request("POST", "volume/{0}".format(volume), data) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def extend_volume(self, volume, size):
"""Extend a volume to a new, larger size. :param volume: Name of the volume to be extended. :type volume: str :type size: int or str :param size: Size in bytes, or string representing the size of the volume to be created. :returns: A dictionary mapping "name" to volume and "size" to the volume's new size in bytes. :rtype: ResponseDict .. note:: The new size must be larger than the volume's old size. .. note:: The maximum volume size supported is 4 petabytes (4 * 2^50). .. note:: If size is an int, it must be a multiple of 512. .. note:: If size is a string, it must consist of an integer followed by a valid suffix. Accepted Suffixes ====== ======== ====== Suffix Size Bytes ====== ======== ====== S Sector (2^9) K Kilobyte (2^10) M Megabyte (2^20) G Gigabyte (2^30) T Terabyte (2^40) P Petabyte (2^50) ====== ======== ====== """ |
return self.set_volume(volume, size=size, truncate=False) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def truncate_volume(self, volume, size):
"""Truncate a volume to a new, smaller size. :param volume: Name of the volume to truncate. :type volume: str :param size: Size in bytes, or string representing the size of the volume to be created. :type size: int or str :returns: A dictionary mapping "name" to volume and "size" to the volume's new size in bytes. :rtype: ResponseDict .. warnings also:: Data may be irretrievably lost in this operation. .. note:: A snapshot of the volume in its previous state is taken and immediately destroyed, but it is available for recovery for the 24 hours following the truncation. """ |
return self.set_volume(volume, size=size, truncate=True) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def connect_host(self, host, volume, **kwargs):
"""Create a connection between a host and a volume. :param host: Name of host to connect to volume. :type host: str :param volume: Name of volume to connect to host. :type volume: str :param \*\*kwargs: See the REST API Guide on your array for the documentation on the request: **POST host/:host/volume/:volume** :type \*\*kwargs: optional :returns: A dictionary describing the connection between the host and volume. :rtype: ResponseDict """ |
return self._request(
"POST", "host/{0}/volume/{1}".format(host, volume), kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def connect_hgroup(self, hgroup, volume, **kwargs):
"""Create a shared connection between a host group and a volume. :param hgroup: Name of hgroup to connect to volume. :type hgroup: str :param volume: Name of volume to connect to hgroup. :type volume: str :param \*\*kwargs: See the REST API Guide on your array for the documentation on the request: **POST hgroup/:hgroup/volume/:volume** :type \*\*kwargs: optional :returns: A dictionary describing the connection between the hgroup and volume. :rtype: ResponseDict """ |
return self._request(
"POST", "hgroup/{0}/volume/{1}".format(hgroup, volume), kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_offload(self, name, **kwargs):
"""Return a dictionary describing the connected offload target. :param offload: Name of offload target to get information about. :type offload: str :param \*\*kwargs: See the REST API Guide on your array for the documentation on the request: **GET offload/::offload** :type \*\*kwargs: optional :returns: A dictionary describing the offload connection. :rtype: ResponseDict """ |
# Unbox if a list to accommodate a bug in REST 1.14
result = self._request("GET", "offload/{0}".format(name), kwargs)
if isinstance(result, list):
headers = result.headers
result = ResponseDict(result[0])
result.headers = headers
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_subnet(self, subnet, prefix, **kwargs):
"""Create a subnet. :param subnet: Name of subnet to be created. :type subnet: str :param prefix: Routing prefix of subnet to be created. :type prefix: str :param \*\*kwargs: See the REST API Guide on your array for the documentation on the request: **POST subnet/:subnet** :type \*\*kwargs: optional :returns: A dictionary describing the created subnet. :rtype: ResponseDict .. note:: prefix should be specified as an IPv4 CIDR address. ("xxx.xxx.xxx.xxx/nn", representing prefix and prefix length) .. note:: Requires use of REST API 1.5 or later. """ |
data = {"prefix": prefix}
data.update(kwargs)
return self._request("POST", "subnet/{0}".format(subnet), data) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_vlan_interface(self, interface, subnet, **kwargs):
"""Create a vlan interface :param interface: Name of interface to be created. :type interface: str :param subnet: Subnet associated with interface to be created :type subnet: str :param \*\*kwargs: See the REST API Guide on your array for the documentation on the request: **POST network/vif/:vlan_interface** :type \*\*kwargs: optional :returns: A dictionary describing the created interface :rtype: ResponseDict .. note:: Requires use of REST API 1.5 or later. """ |
data = {"subnet": subnet}
data.update(kwargs)
return self._request("POST", "network/vif/{0}".format(interface), data) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_password(self, admin, new_password, old_password):
"""Set an admin's password. :param admin: Name of admin whose password is to be set. :type admin: str :param new_password: New password for admin. :type new_password: str :param old_password: Current password of admin. :type old_password: str :returns: A dictionary mapping "name" to admin. :rtype: ResponseDict """ |
return self.set_admin(admin, password=new_password,
old_password=old_password) |