<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_cache_policy(self, func):
"""Set the context cache policy function. Args: func: A function that accepts a Key instance as argument and returns a bool indicating if it should be cached. May be None. """ |
if func is None:
func = self.default_cache_policy
elif isinstance(func, bool):
func = lambda unused_key, flag=func: flag
self._cache_policy = func |
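The coercion above turns None into the default policy and a bare bool into a constant policy via a default-argument lambda. A standalone sketch of that pattern (`normalize_policy` is a hypothetical helper name, not part of the ndb API):

```python
# Standalone sketch of the policy-coercion pattern in set_cache_policy.
# `normalize_policy` is a hypothetical helper, not part of ndb.
def normalize_policy(func, default_policy=None):
    """Coerce None or a bare bool into a callable policy."""
    if func is None:
        func = default_policy
    elif isinstance(func, bool):
        # Capture the bool via a default argument so the lambda ignores
        # the key and always returns the same flag.
        func = lambda unused_key, flag=func: flag
    return func

always = normalize_policy(True)
never = normalize_policy(False)
print(always('any-key'), never('any-key'))  # True False
```

Binding `flag=func` at definition time matters: a plain `lambda k: func` would capture the name, not the bool, and break once `func` is rebound.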
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _use_cache(self, key, options=None):
"""Return whether to use the context cache for this key. Args: key: Key instance. options: ContextOptions instance, or None. Returns: True if the key should be cached, False otherwise. """ |
flag = ContextOptions.use_cache(options)
if flag is None:
flag = self._cache_policy(key)
if flag is None:
flag = ContextOptions.use_cache(self._conn.config)
if flag is None:
flag = True
return flag |
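The method implements a three-level fallback: per-call options, then the context policy, then the connection config, defaulting to True. The same None-cascading lookup appears in _use_memcache and _use_datastore; a standalone sketch (`resolve_flag` is an illustrative name):

```python
# Standalone sketch of the None-cascading lookup used by _use_cache,
# _use_memcache and _use_datastore. `resolve_flag` is illustrative.
def resolve_flag(*sources, default=True):
    """Return the first non-None answer among the sources, else the default."""
    for source in sources:
        flag = source() if callable(source) else source
        if flag is not None:
            return flag
    return default

# Options are silent (None), the policy answers False, config is never asked.
print(resolve_flag(None, lambda: False, lambda: True))  # False
print(resolve_flag(None, lambda: None))                 # True (default)
```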
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_memcache_policy(self, func):
"""Set the memcache policy function. Args: func: A function that accepts a Key instance as argument and returns a bool indicating if it should be cached. May be None. """ |
if func is None:
func = self.default_memcache_policy
elif isinstance(func, bool):
func = lambda unused_key, flag=func: flag
self._memcache_policy = func |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _use_memcache(self, key, options=None):
"""Return whether to use memcache for this key. Args: key: Key instance. options: ContextOptions instance, or None. Returns: True if the key should be cached in memcache, False otherwise. """ |
flag = ContextOptions.use_memcache(options)
if flag is None:
flag = self._memcache_policy(key)
if flag is None:
flag = ContextOptions.use_memcache(self._conn.config)
if flag is None:
flag = True
return flag |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def default_datastore_policy(key):
"""Default datastore policy. This defers to _use_datastore on the Model class. Args: key: Key instance. Returns: A bool or None. """ |
flag = None
if key is not None:
modelclass = model.Model._kind_map.get(key.kind())
if modelclass is not None:
policy = getattr(modelclass, '_use_datastore', None)
if policy is not None:
if isinstance(policy, bool):
flag = policy
else:
flag = policy(key)
return flag |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_datastore_policy(self, func):
"""Set the context datastore policy function. Args: func: A function that accepts a Key instance as argument and returns a bool indicating if it should use the datastore. May be None. """ |
if func is None:
func = self.default_datastore_policy
elif isinstance(func, bool):
func = lambda unused_key, flag=func: flag
self._datastore_policy = func |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _use_datastore(self, key, options=None):
"""Return whether to use the datastore for this key. Args: key: Key instance. options: ContextOptions instance, or None. Returns: True if the datastore should be used, False otherwise. """ |
flag = ContextOptions.use_datastore(options)
if flag is None:
flag = self._datastore_policy(key)
if flag is None:
flag = ContextOptions.use_datastore(self._conn.config)
if flag is None:
flag = True
return flag |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def default_memcache_timeout_policy(key):
"""Default memcache timeout policy. This defers to _memcache_timeout on the Model class. Args: key: Key instance. Returns: Memcache timeout to use (integer), or None. """ |
timeout = None
if key is not None and isinstance(key, model.Key):
modelclass = model.Model._kind_map.get(key.kind())
if modelclass is not None:
policy = getattr(modelclass, '_memcache_timeout', None)
if policy is not None:
if isinstance(policy, (int, long)):
timeout = policy
else:
timeout = policy(key)
return timeout |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _load_from_cache_if_available(self, key):
"""Returns a cached Model instance given the entity key if available. Args: key: Key instance. Returns: A Model instance if the key exists in the cache. """ |
if key in self._cache:
entity = self._cache[key] # May be None, meaning "doesn't exist".
if entity is None or entity._key == key:
# If entity's key didn't change later, it is ok.
# See issue 13. http://goo.gl/jxjOP
raise tasklets.Return(entity) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get(self, key, **ctx_options):
"""Return a Model instance given the entity key. It will use the context cache if the cache policy for the given key is enabled. Args: key: Key instance. **ctx_options: Context options. Returns: A Model instance if the key exists in the datastore; None otherwise. """ |
options = _make_ctx_options(ctx_options)
use_cache = self._use_cache(key, options)
if use_cache:
self._load_from_cache_if_available(key)
use_datastore = self._use_datastore(key, options)
if (use_datastore and
isinstance(self._conn, datastore_rpc.TransactionalConnection)):
use_memcache = False
else:
use_memcache = self._use_memcache(key, options)
ns = key.namespace()
memcache_deadline = None # Avoid worries about uninitialized variable.
if use_memcache:
mkey = self._memcache_prefix + key.urlsafe()
memcache_deadline = self._get_memcache_deadline(options)
mvalue = yield self.memcache_get(mkey, for_cas=use_datastore,
namespace=ns, use_cache=True,
deadline=memcache_deadline)
# A value may have appeared while yielding.
if use_cache:
self._load_from_cache_if_available(key)
if mvalue not in (_LOCKED, None):
cls = model.Model._lookup_model(key.kind(),
self._conn.adapter.default_model)
pb = entity_pb.EntityProto()
try:
pb.MergePartialFromString(mvalue)
except ProtocolBuffer.ProtocolBufferDecodeError:
logging.warning('Corrupt memcache entry found '
'with key %s and namespace %s' % (mkey, ns))
mvalue = None
else:
entity = cls._from_pb(pb)
# Store the key on the entity since it wasn't written to memcache.
entity._key = key
if use_cache:
# Update in-memory cache.
self._cache[key] = entity
raise tasklets.Return(entity)
if mvalue is None and use_datastore:
yield self.memcache_set(mkey, _LOCKED, time=_LOCK_TIME, namespace=ns,
use_cache=True, deadline=memcache_deadline)
yield self.memcache_gets(mkey, namespace=ns, use_cache=True,
deadline=memcache_deadline)
if not use_datastore:
# NOTE: Do not cache this miss. In some scenarios this would
# prevent an app from working properly.
raise tasklets.Return(None)
if use_cache:
entity = yield self._get_batcher.add_once(key, options)
else:
entity = yield self._get_batcher.add(key, options)
if entity is not None:
if use_memcache and mvalue != _LOCKED:
# Don't serialize the key since it's already the memcache key.
pbs = entity._to_pb(set_key=False).SerializePartialToString()
# Don't attempt to write to memcache if too big. Note that we
# use LBYL ("look before you leap") because a multi-value
# memcache operation would fail for all entities rather than
# for just the one that's too big. (Also, the AutoBatcher
# class doesn't pass back exceptions very well.)
if len(pbs) <= memcache.MAX_VALUE_SIZE:
timeout = self._get_memcache_timeout(key, options)
# Don't use fire-and-forget -- for users who forget
# @ndb.toplevel, it's too painful to diagnose why their simple
# code using a single synchronous call doesn't seem to use
# memcache. See issue 105. http://goo.gl/JQZxp
yield self.memcache_cas(mkey, pbs, time=timeout, namespace=ns,
deadline=memcache_deadline)
if use_cache:
# Cache hit or miss. NOTE: In this case it is okay to cache a
# miss; the datastore is the ultimate authority.
self._cache[key] = entity
raise tasklets.Return(entity) |
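The memcache interplay in get() is a lock-then-CAS protocol: on a miss the slot is set to a _LOCKED sentinel and read back with gets(), and the later cas() write succeeds only if nothing replaced the sentinel in the meantime. A heavily simplified single-process sketch, with a versioned dict standing in for real memcache (all names below are illustrative):

```python
# Toy compare-and-swap cache illustrating the locking protocol in get().
# A dict of [value, version] slots stands in for real memcache.
LOCKED = object()

class ToyCache:
    def __init__(self):
        self._slots = {}  # key -> [value, version]

    def gets(self, key):
        """Read a value and remember its version for a later cas()."""
        slot = self._slots.get(key)
        return (None, None) if slot is None else (slot[0], slot[1])

    def set(self, key, value):
        slot = self._slots.setdefault(key, [None, 0])
        slot[0] = value
        slot[1] += 1

    def cas(self, key, value, version):
        """Write only if no one has written since the observed version."""
        slot = self._slots.get(key)
        if slot is None or slot[1] != version:
            return False
        slot[0] = value
        slot[1] += 1
        return True

cache = ToyCache()
cache.set('k', LOCKED)              # reserve the slot before the datastore read
_, version = cache.gets('k')
cache.set('k', 'concurrent write')  # another request sneaks in
print(cache.cas('k', 'stale entity', version))  # False: the CAS is rejected
```

This is why the entity read from the datastore is written back with memcache_cas rather than memcache_set: a plain set could clobber a fresher value written by a concurrent request.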
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def call_on_commit(self, callback):
"""Call a callback upon successful commit of a transaction. If not in a transaction, the callback is called immediately. In a transaction, multiple callbacks may be registered and will be called once the transaction commits, in the order in which they were registered. If the transaction fails, the callbacks will not be called. If the callback raises an exception, it bubbles up normally. This means: If the callback is called immediately, any exception it raises will bubble up immediately. If the call is postponed until commit, remaining callbacks will be skipped and the exception will bubble up through the transaction() call. (However, the transaction is already committed at that point.) """ |
if not self.in_transaction():
callback()
else:
self._on_commit_queue.append(callback) |
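The immediate-vs-queued behavior can be sketched without ndb; TxnContext below is a toy stand-in for the real Context class:

```python
# Standalone sketch of call_on_commit semantics; TxnContext is a toy
# stand-in for the ndb Context, not the real class.
class TxnContext:
    def __init__(self):
        self._in_transaction = False
        self._on_commit_queue = []

    def call_on_commit(self, callback):
        if not self._in_transaction:
            callback()          # outside a transaction: fire immediately
        else:
            self._on_commit_queue.append(callback)

    def commit(self):
        # Called only on success; a failed transaction never runs the queue.
        for callback in self._on_commit_queue:
            callback()
        self._on_commit_queue = []

log = []
ctx = TxnContext()
ctx.call_on_commit(lambda: log.append('immediate'))
ctx._in_transaction = True
ctx.call_on_commit(lambda: log.append('on-commit'))
ctx.commit()
print(log)  # ['immediate', 'on-commit']
```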
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_nickname(userid):
"""Return a Future for a nickname from an account.""" |
account = yield get_account(userid)
if not account:
nickname = 'Unregistered'
else:
nickname = account.nickname or account.email
raise ndb.Return(nickname) |
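The function above is an ndb tasklet: it yields futures and delivers its result by raising ndb.Return. A stripped-down driver shows the mechanics; this is a toy, not the real ndb event loop, and it resolves every "future" synchronously:

```python
# Toy illustration of the tasklet protocol: a generator yields "futures"
# and raises Return to deliver its result. Not the real ndb event loop.
class Return(Exception):
    pass

def run(tasklet):
    value = None
    try:
        while True:
            # Feed each yielded value straight back, as if the future
            # had already resolved to itself.
            value = tasklet.send(value)
    except Return as e:
        return e.args[0] if e.args else None
    except StopIteration:
        return None

def get_nickname(account):
    yield 'pretend-rpc'  # stands in for `yield get_account(userid)`
    if not account:
        raise Return('Unregistered')
    raise Return(account.get('nickname') or account.get('email'))

print(run(get_nickname(None)))                        # Unregistered
print(run(get_nickname({'email': 'a@example.com'})))  # a@example.com
```

Raising an exception to return a value is a Python 2 workaround: generators there could not use `return value`, so ndb smuggles the result out through `ndb.Return`.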
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def mark_done(task_id):
"""Marks a task as done. Args: task_id: The integer id of the task to update. Raises: ValueError: if the requested task doesn't exist. """ |
task = Task.get_by_id(task_id)
if task is None:
raise ValueError('Task with id %d does not exist' % task_id)
task.done = True
task.put() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def format_tasks(tasks):
"""Converts a list of tasks to a list of string representations. Args: tasks: A list of the tasks to convert. Returns: A list of string formatted tasks. """ |
return ['%d : %s (%s)' % (task.key.id(),
task.description,
('done' if task.done
else 'created %s' % task.created))
for task in tasks] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def handle_command(command):
"""Accepts a string command and performs an action. Args: command: the command to run as a string. """ |
try:
cmds = command.split(None, 1)
cmd = cmds[0]
if cmd == 'new':
add_task(get_arg(cmds))
elif cmd == 'done':
mark_done(int(get_arg(cmds)))
elif cmd == 'list':
for task in format_tasks(list_tasks()):
print task
elif cmd == 'delete':
delete_task(int(get_arg(cmds)))
else:
print_usage()
except Exception, e: # pylint: disable=broad-except
print e
print_usage() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_upload_url(success_path, max_bytes_per_blob=None, max_bytes_total=None, **options):
"""Create upload URL for POST form. Args: success_path: Path within application to call when POST is successful and upload is complete. max_bytes_per_blob: The maximum size in bytes that any one blob in the upload can be or None for no maximum size. max_bytes_total: The maximum size in bytes that the aggregate sizes of all of the blobs in the upload can be or None for no maximum size. **options: Options for create_rpc(). Returns: The upload URL. Raises: TypeError: If max_bytes_per_blob or max_bytes_total are not integral types. ValueError: If max_bytes_per_blob or max_bytes_total are not positive values. """ |
fut = create_upload_url_async(success_path,
max_bytes_per_blob=max_bytes_per_blob,
max_bytes_total=max_bytes_total,
**options)
return fut.get_result() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse_blob_info(field_storage):
"""Parse a BlobInfo record from file upload field_storage. Args: field_storage: cgi.FieldStorage that represents uploaded blob. Returns: BlobInfo record as parsed from the field-storage instance. None if there was no field_storage. Raises: BlobInfoParseError when provided field_storage does not contain enough information to construct a BlobInfo object. """ |
if field_storage is None:
return None
field_name = field_storage.name
def get_value(dct, name):
value = dct.get(name, None)
if value is None:
raise BlobInfoParseError(
'Field %s has no %s.' % (field_name, name))
return value
filename = get_value(field_storage.disposition_options, 'filename')
blob_key_str = get_value(field_storage.type_options, 'blob-key')
blob_key = BlobKey(blob_key_str)
upload_content = email.message_from_file(field_storage.file)
content_type = get_value(upload_content, 'content-type')
size = get_value(upload_content, 'content-length')
creation_string = get_value(upload_content, UPLOAD_INFO_CREATION_HEADER)
md5_hash_encoded = get_value(upload_content, 'content-md5')
md5_hash = base64.urlsafe_b64decode(md5_hash_encoded)
try:
size = int(size)
except (TypeError, ValueError):
raise BlobInfoParseError(
'%s is not a valid value for %s size.' % (size, field_name))
try:
creation = blobstore._parse_creation(creation_string, field_name)
except blobstore._CreationFormatError, err:
raise BlobInfoParseError(str(err))
return BlobInfo(id=blob_key_str,
content_type=content_type,
creation=creation,
filename=filename,
size=size,
md5_hash=md5_hash,
) |
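The upload metadata arrives as MIME headers inside the field's file body, which is why the code above parses it with email.message_from_file. A standalone sketch of the same header extraction, using made-up header values:

```python
# Standalone sketch of the header extraction in parse_blob_info, using
# the stdlib email module. The header values below are made up.
import base64
import email
import io

raw = (
    'content-type: image/png\r\n'
    'content-length: 42\r\n'
    'content-md5: aGVsbG8=\r\n'
    '\r\n'
)
upload_content = email.message_from_file(io.StringIO(raw))

content_type = upload_content['content-type']
size = int(upload_content['content-length'])
md5_hash = base64.urlsafe_b64decode(upload_content['content-md5'])
print(content_type, size, md5_hash)  # image/png 42 b'hello'
```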
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fetch_data(blob, start_index, end_index, **options):
"""Fetch data for blob. Fetches a fragment of a blob up to MAX_BLOB_FETCH_SIZE in length. Attempting to fetch a fragment that extends beyond the boundaries of the blob will return the amount of data from start_index until the end of the blob, which will be a smaller size than requested. Requesting a fragment which is entirely outside the boundaries of the blob will return empty string. Attempting to fetch a negative index will raise an exception. Args: blob: BlobInfo, BlobKey, str or unicode representation of BlobKey of blob to fetch data from. start_index: Start index of blob data to fetch. May not be negative. end_index: End index (inclusive) of blob data to fetch. Must be >= start_index. **options: Options for create_rpc(). Returns: str containing partial data of blob. If the indexes are legal but outside the boundaries of the blob, will return empty string. Raises: TypeError if start_index or end_index are not indexes. Also when blob is not a string, BlobKey or BlobInfo. DataIndexOutOfRangeError when start_index < 0 or end_index < start_index. BlobFetchSizeTooLargeError when request blob fragment is larger than MAX_BLOB_FETCH_SIZE. BlobNotFoundError when blob does not exist. """ |
fut = fetch_data_async(blob, start_index, end_index, **options)
return fut.get_result() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get(cls, blob_key, **ctx_options):
"""Retrieve a BlobInfo by key. Args: blob_key: A blob key. This may be a str, unicode or BlobKey instance. **ctx_options: Context options for Model().get_by_id(). Returns: A BlobInfo entity associated with the provided key, If there was no such entity, returns None. """ |
fut = cls.get_async(blob_key, **ctx_options)
return fut.get_result() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete(self, **options):
"""Permanently delete this blob from Blobstore. Args: **options: Options for create_rpc(). """ |
fut = delete_async(self.key(), **options)
fut.get_result() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def __fill_buffer(self, size=0):
"""Fills the internal buffer. Args: size: Number of bytes to read. Will be clamped to [self.__buffer_size, MAX_BLOB_FETCH_SIZE]. """ |
read_size = min(max(size, self.__buffer_size), MAX_BLOB_FETCH_SIZE)
self.__buffer = fetch_data(self.__blob_key, self.__position,
self.__position + read_size - 1)
self.__buffer_position = 0
self.__eof = len(self.__buffer) < read_size |
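The clamping and EOF logic can be demonstrated standalone; `fill` is a hypothetical stand-in, a bytes object plays the blob, and the constants are illustrative rather than the real App Engine limits:

```python
# Standalone sketch of the buffered-read logic in __fill_buffer: the
# request size is clamped to [buffer_size, MAX_FETCH], and EOF is
# inferred when the backend returns fewer bytes than asked for.
# MAX_FETCH and buffer_size are illustrative values.
MAX_FETCH = 1 << 20

def fill(data, position, size, buffer_size=8192):
    read_size = min(max(size, buffer_size), MAX_FETCH)
    chunk = data[position:position + read_size]  # stands in for fetch_data()
    eof = len(chunk) < read_size
    return chunk, eof

blob = b'x' * 10000
chunk, eof = fill(blob, 0, 0)
print(len(chunk), eof)   # 8192 False
chunk, eof = fill(blob, 8192, 0)
print(len(chunk), eof)   # 1808 True
```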
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def blob_info(self):
"""Returns the BlobInfo for this file.""" |
if not self.__blob_info:
self.__blob_info = BlobInfo.get(self.__blob_key)
return self.__blob_info |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def make_connection(config=None, default_model=None, _api_version=datastore_rpc._DATASTORE_V3, _id_resolver=None):
"""Create a new Connection object with the right adapter. Optionally you can pass in a datastore_rpc.Configuration object. """ |
return datastore_rpc.Connection(
adapter=ModelAdapter(default_model, id_resolver=_id_resolver),
config=config,
_api_version=_api_version) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _unpack_user(v):
"""Internal helper to unpack a User value from a protocol buffer.""" |
uv = v.uservalue()
email = unicode(uv.email().decode('utf-8'))
auth_domain = unicode(uv.auth_domain().decode('utf-8'))
obfuscated_gaiaid = uv.obfuscated_gaiaid().decode('utf-8')
obfuscated_gaiaid = unicode(obfuscated_gaiaid)
federated_identity = None
if uv.has_federated_identity():
federated_identity = unicode(
uv.federated_identity().decode('utf-8'))
value = users.User(email=email,
_auth_domain=auth_domain,
_user_id=obfuscated_gaiaid,
federated_identity=federated_identity)
return value |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _date_to_datetime(value):
"""Convert a date to a datetime for Cloud Datastore storage. Args: value: A datetime.date object. Returns: A datetime object with time set to 0:00. """ |
if not isinstance(value, datetime.date):
raise TypeError('Cannot convert to datetime expected date value; '
'received %s' % value)
return datetime.datetime(value.year, value.month, value.day) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _time_to_datetime(value):
"""Convert a time to a datetime for Cloud Datastore storage. Args: value: A datetime.time object. Returns: A datetime object with date set to 1970-01-01. """ |
if not isinstance(value, datetime.time):
raise TypeError('Cannot convert to datetime expected time value; '
'received %s' % value)
return datetime.datetime(1970, 1, 1,
value.hour, value.minute, value.second,
value.microsecond) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def transactional(func, args, kwds, **options):
"""Decorator to make a function automatically run in a transaction.

Args:
  **ctx_options: Transaction options (see transaction(), but propagation
    default to TransactionOptions.ALLOWED).

This supports two forms:

(1) Vanilla:
  @transactional
  def callback(arg):

(2) With options:
  @transactional(retries=1)
  def callback(arg):
""" |
return transactional_async.wrapped_decorator(
func, args, kwds, **options).get_result() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def transactional_async(func, args, kwds, **options):
"""The async version of @ndb.transaction.""" |
options.setdefault('propagation', datastore_rpc.TransactionOptions.ALLOWED)
if args or kwds:
return transaction_async(lambda: func(*args, **kwds), **options)
return transaction_async(func, **options) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def non_transactional(func, args, kwds, allow_existing=True):
"""A decorator that ensures a function is run outside a transaction. If there is an existing transaction (and allow_existing=True), the existing transaction is paused while the function is executed. Args: allow_existing: If false, throw an exception if called from within a transaction. If true, temporarily re-establish the previous non-transactional context. Defaults to True. This supports two forms, similar to transactional(). Returns: A wrapper for the decorated function that ensures it runs outside a transaction. """ |
from . import tasklets
ctx = tasklets.get_context()
if not ctx.in_transaction():
return func(*args, **kwds)
if not allow_existing:
raise datastore_errors.BadRequestError(
'%s cannot be called within a transaction.' % func.__name__)
save_ctx = ctx
while ctx.in_transaction():
ctx = ctx._parent_context
if ctx is None:
raise datastore_errors.BadRequestError(
'Context without non-transactional ancestor')
save_ds_conn = datastore._GetConnection()
try:
if hasattr(save_ctx, '_old_ds_conn'):
datastore._SetConnection(save_ctx._old_ds_conn)
tasklets.set_context(ctx)
return func(*args, **kwds)
finally:
tasklets.set_context(save_ctx)
datastore._SetConnection(save_ds_conn) |
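The core of the function is the walk up _parent_context links to find the nearest non-transactional ancestor. A standalone sketch with a toy Ctx class (not the real ndb Context):

```python
# Standalone sketch of the ancestor walk in non_transactional: follow
# _parent_context links until a non-transactional context is found.
# Ctx is a toy stand-in, not the real ndb Context.
class Ctx:
    def __init__(self, parent=None, txn=False):
        self._parent_context = parent
        self._txn = txn

    def in_transaction(self):
        return self._txn

def nearest_non_txn(ctx):
    while ctx.in_transaction():
        ctx = ctx._parent_context
        if ctx is None:
            raise RuntimeError('Context without non-transactional ancestor')
    return ctx

root = Ctx()                                          # non-transactional
inner = Ctx(parent=Ctx(parent=root, txn=True), txn=True)  # nested txns
print(nearest_non_txn(inner) is root)  # True
```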
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _set(self, value):
"""Updates all descendants to a specified value.""" |
if self.__is_parent_node():
for child in self.__sub_counters.itervalues():
child._set(value)
else:
self.__counter = value |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _comparison(self, op, value):
"""Internal helper for comparison operators. Args: op: The operator ('=', '<' etc.). Returns: A FilterNode instance representing the requested comparison. """ |
# NOTE: This is also used by query.gql().
if not self._indexed:
raise datastore_errors.BadFilterError(
'Cannot query for unindexed property %s' % self._name)
from .query import FilterNode # Import late to avoid circular imports.
if value is not None:
value = self._do_validate(value)
value = self._call_to_base_type(value)
value = self._datastore_type(value)
return FilterNode(self._name, op, value) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _IN(self, value):
"""Comparison operator for the 'in' comparison operator. The Python 'in' operator cannot be overloaded in the way we want to, so we define a method. For example:: Employee.query(Employee.rank.IN([4, 5, 6])) Note that the method is called ._IN() but may normally be invoked as .IN(); ._IN() is provided for the case you have a StructuredProperty with a model that has a Property named IN. """ |
if not self._indexed:
raise datastore_errors.BadFilterError(
'Cannot query for unindexed property %s' % self._name)
from .query import FilterNode # Import late to avoid circular imports.
if not isinstance(value, (list, tuple, set, frozenset)):
raise datastore_errors.BadArgumentError(
'Expected list, tuple or set, got %r' % (value,))
values = []
for val in value:
if val is not None:
val = self._do_validate(val)
val = self._call_to_base_type(val)
val = self._datastore_type(val)
values.append(val)
return FilterNode(self._name, 'in', values) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _do_validate(self, value):
"""Call all validations on the value. This calls the most derived _validate() method(s), then the custom validator function, and then checks the choices. It returns the value, possibly modified in an idempotent way, or raises an exception. Note that this does not call all composable _validate() methods. It only calls _validate() methods up to but not including the first _to_base_type() method, when the MRO is traversed looking for _validate() and _to_base_type() methods. (IOW if a class defines both _validate() and _to_base_type(), its _validate() is called and then the search is aborted.) Note that for a repeated Property this function should be called for each item in the list, not for the list as a whole. """ |
if isinstance(value, _BaseValue):
return value
value = self._call_shallow_validation(value)
if self._validator is not None:
newvalue = self._validator(self, value)
if newvalue is not None:
value = newvalue
if self._choices is not None:
if value not in self._choices:
raise datastore_errors.BadValueError(
'Value %r for property %s is not an allowed choice' %
(value, self._name))
return value |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _fix_up(self, cls, code_name):
"""Internal helper called to tell the property its name. This is called by _fix_up_properties() which is called by MetaModel when finishing the construction of a Model subclass. The name passed in is the name of the class attribute to which the Property is assigned (a.k.a. the code name). Note that this means that each Property instance must be assigned to (at most) one class attribute. E.g. to declare three strings, you must call StringProperty() three times, you cannot write foo = bar = baz = StringProperty() """ |
self._code_name = code_name
if self._name is None:
self._name = code_name |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _set_value(self, entity, value):
"""Internal helper to set a value in an entity for a Property. This performs validation first. For a repeated Property the value should be a list. """ |
if entity._projection:
raise ReadonlyPropertyError(
'You cannot set property values of a projection entity')
if self._repeated:
if not isinstance(value, (list, tuple, set, frozenset)):
raise datastore_errors.BadValueError('Expected list or tuple, got %r' %
(value,))
value = [self._do_validate(v) for v in value]
else:
if value is not None:
value = self._do_validate(value)
self._store_value(entity, value) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _retrieve_value(self, entity, default=None):
"""Internal helper to retrieve the value for this Property from an entity. This returns None if no value is set, or the default argument if given. For a repeated Property this returns a list if a value is set, otherwise None. No additional transformations are applied. """ |
return entity._values.get(self._name, default) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _find_methods(cls, *names, **kwds):
"""Compute a list of composable methods. Because this is a common operation and the class hierarchy is static, the outcome is cached (assuming that for a particular list of names the reversed flag is either always on, or always off). Args: *names: One or more method names. reverse: Optional flag, default False; if True, the list is reversed. Returns: A list of callable class method objects. """ |
reverse = kwds.pop('reverse', False)
assert not kwds, repr(kwds)
cache = cls.__dict__.get('_find_methods_cache')
if cache:
hit = cache.get(names)
if hit is not None:
return hit
else:
cls._find_methods_cache = cache = {}
methods = []
for c in cls.__mro__:
for name in names:
method = c.__dict__.get(name)
if method is not None:
methods.append(method)
if reverse:
methods.reverse()
cache[names] = methods
return methods |
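A cache-free sketch of the MRO scan above (the real method also memoizes results per class): each class's own definition of the named methods is collected, most derived first, base-most last, or the reverse when requested.

```python
# Standalone sketch of the MRO scan in _find_methods, without the cache.
def find_methods(cls, *names, reverse=False):
    methods = []
    for c in cls.__mro__:
        for name in names:
            # c.__dict__ sees only methods defined on c itself,
            # not inherited ones, so each definition appears once.
            method = c.__dict__.get(name)
            if method is not None:
                methods.append(method)
    if reverse:
        methods.reverse()
    return methods

class A:
    def _validate(self): return 'A'

class B(A):
    def _validate(self): return 'B'

print([m(None) for m in find_methods(B, '_validate')])  # ['B', 'A']
```

Using `c.__dict__.get(name)` rather than `getattr` is the key trick: `getattr` would return the inherited method for every class in the MRO and collect duplicates.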
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _apply_list(self, methods):
"""Return a single callable that applies a list of methods to a value. If a method returns None, the last value is kept; if it returns some other value, that replaces the last value. Exceptions are not caught. """ |
def call(value):
for method in methods:
newvalue = method(self, value)
if newvalue is not None:
value = newvalue
return value
return call |
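The same fold can be shown standalone; this sketch drops the `self` argument that the real property methods receive:

```python
# Standalone sketch of _apply_list's folding behavior: each method may
# replace the value or, by returning None, leave it unchanged.
def apply_list(methods):
    def call(value):
        for method in methods:
            newvalue = method(value)
            if newvalue is not None:
                value = newvalue
        return value
    return call

# The middle step returns None, so the stripped value passes through.
pipeline = apply_list([str.strip, lambda s: None, str.upper])
print(pipeline('  hi  '))  # HI
```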
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_value(self, entity):
"""Internal helper to get the value for this Property from an entity. For a repeated Property this initializes the value to an empty list if it is not set. """ |
if entity._projection:
if self._name not in entity._projection:
raise UnprojectedPropertyError(
'Property %s is not in the projection' % (self._name,))
return self._get_user_value(entity) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _delete_value(self, entity):
"""Internal helper to delete the value for this Property from an entity. Note that if no value exists this is a no-op; deleted values will not be serialized but requesting their value will return None (or an empty list in the case of a repeated Property). """ |
if self._name in entity._values:
del entity._values[self._name] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _is_initialized(self, entity):
"""Internal helper to ask if the entity has a value for this Property. This returns False if a value is stored but it is None. """ |
return (not self._required or
((self._has_value(entity) or self._default is not None) and
self._get_value(entity) is not None)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _serialize(self, entity, pb, prefix='', parent_repeated=False, projection=None):
"""Internal helper to serialize this property to a protocol buffer. Subclasses may override this method. Args: entity: The entity, a Model (subclass) instance. pb: The protocol buffer, an EntityProto instance. prefix: Optional name prefix used for StructuredProperty (if present, must end in '.'). parent_repeated: True if the parent (or an earlier ancestor) is a repeated Property. projection: A list or tuple of strings representing the projection for the model instance, or None if the instance is not a projection. """ |
values = self._get_base_value_unwrapped_as_list(entity)
name = prefix + self._name
if projection and name not in projection:
return
if self._indexed:
create_prop = lambda: pb.add_property()
else:
create_prop = lambda: pb.add_raw_property()
if self._repeated and not values and self._write_empty_list:
# We want to write the empty list
p = create_prop()
p.set_name(name)
p.set_multiple(False)
p.set_meaning(entity_pb.Property.EMPTY_LIST)
p.mutable_value()
else:
# We write a list, or a single property
for val in values:
p = create_prop()
p.set_name(name)
p.set_multiple(self._repeated or parent_repeated)
v = p.mutable_value()
if val is not None:
self._db_set_value(v, p, val)
if projection:
# Projected properties have the INDEX_VALUE meaning and only contain
# the original property's name and value.
new_p = entity_pb.Property()
new_p.set_name(p.name())
new_p.set_meaning(entity_pb.Property.INDEX_VALUE)
new_p.set_multiple(False)
new_p.mutable_value().CopyFrom(v)
p.CopyFrom(new_p) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _deserialize(self, entity, p, unused_depth=1):
"""Internal helper to deserialize this property from a protocol buffer. Subclasses may override this method. Args: entity: The entity, a Model (subclass) instance. p: A Property Message object (a protocol buffer). depth: Optional nesting depth, default 1 (unused here, but used by some subclasses that override this method). """ |
if p.meaning() == entity_pb.Property.EMPTY_LIST:
self._store_value(entity, [])
return
val = self._db_get_value(p.value(), p)
if val is not None:
val = _BaseValue(val)
# TODO: replace the remainder of the function with the following commented
# out code once its feasible to make breaking changes such as not calling
# _store_value().
# if self._repeated:
# entity._values.setdefault(self._name, []).append(val)
# else:
# entity._values[self._name] = val
if self._repeated:
if self._has_value(entity):
value = self._retrieve_value(entity)
assert isinstance(value, list), repr(value)
value.append(val)
else:
# We promote single values to lists if we are a list property
value = [val]
else:
value = val
self._store_value(entity, value) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _check_property(self, rest=None, require_indexed=True):
"""Internal helper to check this property for specific requirements. Called by Model._check_properties(). Args: Raises: InvalidPropertyError if this property does not meet the given requirements or if a subproperty is specified. (StructuredProperty overrides this method to handle subproperties.) """ |
if require_indexed and not self._indexed:
raise InvalidPropertyError('Property %s is unindexed' % self._name)
if rest:
raise InvalidPropertyError('Referencing subproperty %s.%s '
'but %s is not a structured property' %
(self._name, rest, self._name)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _set_value(self, entity, value):
"""Setter for key attribute.""" |
if value is not None:
value = _validate_key(value, entity=entity)
value = entity._validate_key(value)
entity._entity_key = value |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def __get_arg(cls, kwds, kwd):
"""Internal helper method to parse keywords that may be property names.""" |
alt_kwd = '_' + kwd
if alt_kwd in kwds:
return kwds.pop(alt_kwd)
if kwd in kwds:
obj = getattr(cls, kwd, None)
if not isinstance(obj, Property) or isinstance(obj, ModelKey):
return kwds.pop(kwd)
return None |
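The precedence rule above (a leading-underscore keyword always wins, and the plain keyword is only consumed when it is not shadowed by a property) can be sketched standalone. This is a Python 3 sketch under stated assumptions: `reserved_names` is a hypothetical stand-in for "names bound to a Property on the class", which the real code checks via `getattr`:

```python
def get_arg(kwds, kwd, reserved_names=()):
    """Pop '_kwd' in preference to 'kwd'; consume plain 'kwd' only when
    it is not shadowed by a property name (modelled as reserved_names)."""
    alt_kwd = '_' + kwd
    if alt_kwd in kwds:
        return kwds.pop(alt_kwd)
    if kwd in kwds and kwd not in reserved_names:
        return kwds.pop(kwd)
    return None
```

With both spellings present, only the underscored one is popped; the plain name survives for a property of the same name.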
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _set_attributes(self, kwds):
"""Internal helper to set attributes from keyword arguments. Expando overrides this. """ |
cls = self.__class__
for name, value in kwds.iteritems():
prop = getattr(cls, name) # Raises AttributeError for unknown properties.
if not isinstance(prop, Property):
raise TypeError('Cannot set non-property %s' % name)
prop._set_value(self, value) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _find_uninitialized(self):
"""Internal helper to find uninitialized properties. Returns: A set of property names. """ |
return set(name
for name, prop in self._properties.iteritems()
if not prop._is_initialized(self)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _check_initialized(self):
"""Internal helper to check for uninitialized properties. Raises: BadValueError if it finds any. """ |
baddies = self._find_uninitialized()
if baddies:
raise datastore_errors.BadValueError(
'Entity has uninitialized properties: %s' % ', '.join(baddies)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _reset_kind_map(cls):
"""Clear the kind map. Useful for testing.""" |
# Preserve "system" kinds, like __namespace__
keep = {}
for name, value in cls._kind_map.iteritems():
if name.startswith('__') and name.endswith('__'):
keep[name] = value
cls._kind_map.clear()
cls._kind_map.update(keep) |
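The preserve-then-clear pattern above can be demonstrated with a plain dict; a minimal Python 3 sketch, assuming any `__dunder__`-style key counts as a "system" kind:

```python
def reset_kind_map(kind_map):
    """Clear a kind registry in place, preserving '__system__'-style entries."""
    keep = {name: value for name, value in kind_map.items()
            if name.startswith('__') and name.endswith('__')}
    kind_map.clear()
    kind_map.update(keep)
```

Clearing in place (rather than rebinding) matters because other modules may hold a reference to the same dict.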
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _lookup_model(cls, kind, default_model=None):
"""Get the model class for the kind. Args: kind: A string representing the name of the kind to lookup. default_model: The model class to use if the kind can't be found. Returns: The model class for the requested kind. Raises: KindError: The kind was not found and no default_model was provided. """ |
modelclass = cls._kind_map.get(kind, default_model)
if modelclass is None:
raise KindError(
"No model class found for kind '%s'. Did you forget to import it?" %
kind)
return modelclass |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _equivalent(self, other):
"""Compare two entities of the same class, excluding keys.""" |
if other.__class__ is not self.__class__: # TODO: What about subclasses?
raise NotImplementedError('Cannot compare different model classes. '
'%s is not %s' % (self.__class__.__name__,
other.__class__.__name__))
if set(self._projection) != set(other._projection):
return False
# It's all about determining inequality early.
if len(self._properties) != len(other._properties):
return False # Can only happen for Expandos.
my_prop_names = set(self._properties.iterkeys())
their_prop_names = set(other._properties.iterkeys())
if my_prop_names != their_prop_names:
return False # Again, only possible for Expandos.
if self._projection:
my_prop_names = set(self._projection)
for name in my_prop_names:
if '.' in name:
name, _ = name.split('.', 1)
my_value = self._properties[name]._get_value(self)
their_value = other._properties[name]._get_value(other)
if my_value != their_value:
return False
return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _to_pb(self, pb=None, allow_partial=False, set_key=True):
"""Internal helper to turn an entity into an EntityProto protobuf.""" |
if not allow_partial:
self._check_initialized()
if pb is None:
pb = entity_pb.EntityProto()
if set_key:
# TODO: Move the key stuff into ModelAdapter.entity_to_pb()?
self._key_to_pb(pb)
for unused_name, prop in sorted(self._properties.iteritems()):
prop._serialize(self, pb, projection=self._projection)
return pb |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _key_to_pb(self, pb):
"""Internal helper to copy the key into a protobuf.""" |
key = self._key
if key is None:
pairs = [(self._get_kind(), None)]
ref = key_module._ReferenceFromPairs(pairs, reference=pb.mutable_key())
else:
ref = key.reference()
pb.mutable_key().CopyFrom(ref)
group = pb.mutable_entity_group() # Must initialize this.
# To work around an SDK issue, only set the entity group if the
# full key is complete. TODO: Remove the top test once fixed.
if key is not None and key.id():
elem = ref.path().element(0)
if elem.id() or elem.name():
group.add_element().CopyFrom(elem) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _from_pb(cls, pb, set_key=True, ent=None, key=None):
"""Internal helper to create an entity from an EntityProto protobuf.""" |
if not isinstance(pb, entity_pb.EntityProto):
raise TypeError('pb must be a EntityProto; received %r' % pb)
if ent is None:
ent = cls()
# A key passed in overrides a key in the pb.
if key is None and pb.key().path().element_size():
key = Key(reference=pb.key())
# If set_key is not set, skip a trivial incomplete key.
if key is not None and (set_key or key.id() or key.parent()):
ent._key = key
# NOTE(darke): Keep a map from (indexed, property name) to the property.
# This allows us to skip the (relatively) expensive call to
# _get_property_for for repeated fields.
_property_map = {}
projection = []
for indexed, plist in ((True, pb.property_list()),
(False, pb.raw_property_list())):
for p in plist:
if p.meaning() == entity_pb.Property.INDEX_VALUE:
projection.append(p.name())
property_map_key = (p.name(), indexed)
if property_map_key not in _property_map:
_property_map[property_map_key] = ent._get_property_for(p, indexed)
_property_map[property_map_key]._deserialize(ent, p)
ent._set_projection(projection)
return ent |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_property_for(self, p, indexed=True, depth=0):
"""Internal helper to get the Property for a protobuf-level property.""" |
parts = p.name().split('.')
if len(parts) <= depth:
# Apparently there's an unstructured value here.
# Assume it is a None written for a missing value.
# (It could also be that a schema change turned an unstructured
# value into a structured one. In that case, too, it seems
# better to return None than to return an unstructured value,
# since the latter doesn't match the current schema.)
return None
next = parts[depth]
prop = self._properties.get(next)
if prop is None:
prop = self._fake_property(p, next, indexed)
return prop |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _clone_properties(self):
"""Internal helper to clone self._properties if necessary.""" |
cls = self.__class__
if self._properties is cls._properties:
self._properties = dict(cls._properties) |
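This is a copy-on-write pattern: instances alias the class-level `_properties` dict until the first per-instance mutation forces a private copy. A minimal Python 3 sketch of the same idiom with a hypothetical `Base` class:

```python
class Base:
    # Shared, class-level property map; instances alias it until first write.
    _properties = {'name': 'name-prop'}

    def _clone_properties(self):
        cls = type(self)
        # Identity check, not equality: only copy when still sharing.
        if self._properties is cls._properties:
            self._properties = dict(cls._properties)
```

After cloning, mutations on the instance copy never leak back to the class, and repeated calls are no-ops.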
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _fake_property(self, p, next, indexed=True):
"""Internal helper to create a fake Property.""" |
self._clone_properties()
if p.name() != next and not p.name().endswith('.' + next):
prop = StructuredProperty(Expando, next)
prop._store_value(self, _BaseValue(Expando()))
else:
compressed = p.meaning_uri() == _MEANING_URI_COMPRESSED
prop = GenericProperty(next,
repeated=p.multiple(),
indexed=indexed,
compressed=compressed)
prop._code_name = next
self._properties[prop._name] = prop
return prop |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _to_dict(self, include=None, exclude=None):
"""Return a dict containing the entity's property values. Args: include: Optional set of property names to include, default all. exclude: Optional set of property names to skip, default none. A name contained in both include and exclude is excluded. """ |
if (include is not None and
not isinstance(include, (list, tuple, set, frozenset))):
raise TypeError('include should be a list, tuple or set')
if (exclude is not None and
not isinstance(exclude, (list, tuple, set, frozenset))):
raise TypeError('exclude should be a list, tuple or set')
values = {}
for prop in self._properties.itervalues():
name = prop._code_name
if include is not None and name not in include:
continue
if exclude is not None and name in exclude:
continue
try:
values[name] = prop._get_for_dict(self)
except UnprojectedPropertyError:
pass # Ignore unprojected properties rather than failing.
return values |
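The include/exclude semantics above (a name in both sets is excluded) can be sketched over a plain mapping; a Python 3 sketch, with property access replaced by dict values:

```python
def filter_values(values, include=None, exclude=None):
    """Filter a name->value mapping the way _to_dict does:
    include restricts membership, and exclude wins any conflict."""
    for arg, label in ((include, 'include'), (exclude, 'exclude')):
        if arg is not None and not isinstance(arg, (list, tuple, set, frozenset)):
            raise TypeError('%s should be a list, tuple or set' % label)
    out = {}
    for name, value in values.items():
        if include is not None and name not in include:
            continue
        if exclude is not None and name in exclude:
            continue
        out[name] = value
    return out
```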
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _check_properties(cls, property_names, require_indexed=True):
"""Internal helper to check the given properties exist and meet specified requirements. Called from query.py. Args: property_names: List or tuple of property names -- each being a string, possibly containing dots (to address subproperties of structured properties). Raises: InvalidPropertyError if one of the properties is invalid. AssertionError if the argument is not a list or tuple of strings. """ |
assert isinstance(property_names, (list, tuple)), repr(property_names)
for name in property_names:
assert isinstance(name, basestring), repr(name)
if '.' in name:
name, rest = name.split('.', 1)
else:
rest = None
prop = cls._properties.get(name)
if prop is None:
cls._unknown_property(name)
else:
prop._check_property(rest, require_indexed=require_indexed) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _query(cls, *args, **kwds):
"""Create a Query object for this class. Args: distinct: Optional bool, short hand for group_by = projection. *args: Used to apply an initial filter **kwds: are passed to the Query() constructor. Returns: A Query object. """ |
# Validating distinct.
if 'distinct' in kwds:
if 'group_by' in kwds:
raise TypeError(
'cannot use distinct= and group_by= at the same time')
projection = kwds.get('projection')
if not projection:
raise TypeError(
'cannot use distinct= without projection=')
if kwds.pop('distinct'):
kwds['group_by'] = projection
# TODO: Disallow non-empty args and filter=.
from .query import Query # Import late to avoid circular imports.
qry = Query(kind=cls._get_kind(), **kwds)
qry = qry.filter(*cls._default_filters())
qry = qry.filter(*args)
return qry |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _gql(cls, query_string, *args, **kwds):
"""Run a GQL query.""" |
from .query import gql # Import late to avoid circular imports.
return gql('SELECT * FROM %s %s' % (cls._class_name(), query_string),
*args, **kwds) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _put_async(self, **ctx_options):
"""Write this entity to Cloud Datastore. This is the asynchronous version of Model._put(). """ |
if self._projection:
raise datastore_errors.BadRequestError('Cannot put a partial entity')
from . import tasklets
ctx = tasklets.get_context()
self._prepare_for_put()
if self._key is None:
self._key = Key(self._get_kind(), None)
self._pre_put_hook()
fut = ctx.put(self, **ctx_options)
post_hook = self._post_put_hook
if not self._is_default_hook(Model._default_post_put_hook, post_hook):
fut.add_immediate_callback(post_hook, fut)
return fut |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_by_id(cls, id, parent=None, **ctx_options):
"""Returns an instance of Model class by ID. Args: id: A string or integer key ID. parent: Optional parent key of the model to get. namespace: Optional namespace. app: Optional app ID. **ctx_options: Context options. Returns: A model instance or None if not found. """ |
return cls._get_by_id_async(id, parent=parent, **ctx_options).get_result() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _is_default_hook(default_hook, hook):
"""Checks whether a specific hook is in its default state. Args: cls: A ndb.model.Model class. default_hook: Callable specified by ndb internally (do not override). hook: The hook defined by a model class using _post_*_hook. Raises: TypeError if either the default hook or the tested hook are not callable. """ |
if not hasattr(default_hook, '__call__'):
raise TypeError('Default hooks for ndb.model.Model must be callable')
if not hasattr(hook, '__call__'):
raise TypeError('Hooks must be callable')
return default_hook.im_func is hook.im_func |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _ReferenceFromSerialized(serialized):
"""Construct a Reference from a serialized Reference.""" |
if not isinstance(serialized, basestring):
raise TypeError('serialized must be a string; received %r' % serialized)
elif isinstance(serialized, unicode):
serialized = serialized.encode('utf8')
return entity_pb.Reference(serialized) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _DecodeUrlSafe(urlsafe):
"""Decode a url-safe base64-encoded string. This returns the decoded string. """ |
if not isinstance(urlsafe, basestring):
raise TypeError('urlsafe must be a string; received %r' % urlsafe)
if isinstance(urlsafe, unicode):
urlsafe = urlsafe.encode('utf8')
mod = len(urlsafe) % 4
if mod:
urlsafe += '=' * (4 - mod)
# This is 3-4x faster than urlsafe_b64decode()
return base64.b64decode(urlsafe.replace('-', '+').replace('_', '/')) |
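The re-padding and alphabet swap above work the same in Python 3 with bytes; a standalone sketch that round-trips against the stdlib's `urlsafe_b64encode`:

```python
import base64

def decode_urlsafe(urlsafe):
    """Re-pad a padding-stripped url-safe base64 string and decode it."""
    if isinstance(urlsafe, str):
        urlsafe = urlsafe.encode('utf8')
    mod = len(urlsafe) % 4
    if mod:
        urlsafe += b'=' * (4 - mod)  # base64 input length must be a multiple of 4
    # Map the url-safe alphabet (-, _) back to the standard one (+, /).
    return base64.b64decode(urlsafe.replace(b'-', b'+').replace(b'_', b'/'))
```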
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def flat(self):
"""Return a tuple of alternating kind and id values.""" |
flat = []
for kind, id in self.__pairs:
flat.append(kind)
flat.append(id)
return tuple(flat) |
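The flattening of `(kind, id)` pairs into an alternating tuple can be sketched as a free function; a minimal Python 3 version:

```python
def flat(pairs):
    """Flatten [(kind, id), ...] into (kind, id, kind, id, ...)."""
    out = []
    for kind, ident in pairs:
        out.append(kind)
        out.append(ident)
    return tuple(out)
```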
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def reference(self):
"""Return the Reference object for this Key. This is a entity_pb.Reference instance -- a protocol buffer class used by the lower-level API to the datastore. NOTE: The caller should not mutate the return value. """ |
if self.__reference is None:
self.__reference = _ConstructReference(self.__class__,
pairs=self.__pairs,
app=self.__app,
namespace=self.__namespace)
return self.__reference |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def urlsafe(self):
"""Return a url-safe string encoding this Key's Reference. This string is compatible with other APIs and languages and with the strings used to represent Keys in GQL and in the App Engine Admin Console. """ |
# This is 3-4x faster than urlsafe_b64encode()
urlsafe = base64.b64encode(self.reference().Encode())
return urlsafe.rstrip('=').replace('+', '-').replace('/', '_') |
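The encode side is the mirror image: standard base64, then strip padding and swap to the url-safe alphabet. A Python 3 sketch over bytes, checked against the stdlib's `urlsafe_b64encode`:

```python
import base64

def encode_urlsafe(raw):
    """Standard base64, then strip '=' padding and swap to the
    url-safe alphabet ('+' -> '-', '/' -> '_')."""
    out = base64.b64encode(raw)
    return out.rstrip(b'=').replace(b'+', b'-').replace(b'/', b'_')
```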
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_async(self, **ctx_options):
"""Return a Future whose result is the entity for this Key. If no such entity exists, a Future is still returned, and the Future's eventual return result be None. """ |
from . import model, tasklets
ctx = tasklets.get_context()
cls = model.Model._kind_map.get(self.kind())
if cls:
cls._pre_get_hook(self)
fut = ctx.get(self, **ctx_options)
if cls:
post_hook = cls._post_get_hook
if not cls._is_default_hook(model.Model._default_post_get_hook,
post_hook):
fut.add_immediate_callback(post_hook, self, fut)
return fut |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete_async(self, **ctx_options):
"""Schedule deletion of the entity for this Key. This returns a Future, whose result becomes available once the deletion is complete. If no such entity exists, a Future is still returned. In all cases the Future's result is None (i.e. there is no way to tell whether the entity existed or not). """ |
from . import tasklets, model
ctx = tasklets.get_context()
cls = model.Model._kind_map.get(self.kind())
if cls:
cls._pre_delete_hook(self)
fut = ctx.delete(self, **ctx_options)
if cls:
post_hook = cls._post_delete_hook
if not cls._is_default_hook(model.Model._default_post_delete_hook,
post_hook):
fut.add_immediate_callback(post_hook, self, fut)
return fut |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_flow_exception(exc):
"""Add an exception that should not be logged. The argument must be a subclass of Exception. """ |
global _flow_exceptions
if not isinstance(exc, type) or not issubclass(exc, Exception):
raise TypeError('Expected an Exception subclass, got %r' % (exc,))
as_set = set(_flow_exceptions)
as_set.add(exc)
_flow_exceptions = tuple(as_set) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _init_flow_exceptions():
"""Internal helper to initialize _flow_exceptions. This automatically adds webob.exc.HTTPException, if it can be imported. """ |
global _flow_exceptions
_flow_exceptions = ()
add_flow_exception(datastore_errors.Rollback)
try:
from webob import exc
except ImportError:
pass
else:
add_flow_exception(exc.HTTPException) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sleep(dt):
"""Public function to sleep some time. Example: yield tasklets.sleep(0.5) # Sleep for half a sec. """ |
fut = Future('sleep(%.3f)' % dt)
eventloop.queue_call(dt, fut.set_result, None)
return fut |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _transfer_result(fut1, fut2):
"""Helper to transfer result or errors from one Future to another.""" |
exc = fut1.get_exception()
if exc is not None:
tb = fut1.get_traceback()
fut2.set_exception(exc, tb)
else:
val = fut1.get_result()
fut2.set_result(val) |
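The same transfer pattern can be sketched with the stdlib's `concurrent.futures.Future` (which carries the traceback inside the exception object, so no separate `tb` argument is needed). A sketch assuming `fut1` is already done:

```python
from concurrent.futures import Future

def transfer_result(fut1, fut2):
    """Copy fut1's outcome (result or exception) to fut2."""
    exc = fut1.exception()
    if exc is not None:
        fut2.set_exception(exc)
    else:
        fut2.set_result(fut1.result())
```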
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def synctasklet(func):
"""Decorator to run a function as a tasklet when called. Use this to wrap a request handler function that will be called by some web application framework (e.g. a Django view function or a webapp.RequestHandler.get method). """ |
taskletfunc = tasklet(func) # wrap at declaration time.
@utils.wrapping(func)
def synctasklet_wrapper(*args, **kwds):
# pylint: disable=invalid-name
__ndb_debug__ = utils.func_info(func)
return taskletfunc(*args, **kwds).get_result()
return synctasklet_wrapper |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def toplevel(func):
"""A sync tasklet that sets a fresh default Context. Use this for toplevel view functions such as webapp.RequestHandler.get() or Django view functions. """ |
synctaskletfunc = synctasklet(func) # wrap at declaration time.
@utils.wrapping(func)
def add_context_wrapper(*args, **kwds):
# pylint: disable=invalid-name
__ndb_debug__ = utils.func_info(func)
_state.clear_all_pending()
# Create and install a new context.
ctx = make_default_context()
try:
set_context(ctx)
return synctaskletfunc(*args, **kwds)
finally:
set_context(None)
ctx.flush().check_success()
eventloop.run() # Ensure writes are flushed, etc.
return add_context_wrapper |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _make_cloud_datastore_context(app_id, external_app_ids=()):
"""Creates a new context to connect to a remote Cloud Datastore instance. This should only be used outside of Google App Engine. Args: app_id: The application id to connect to. This differs from the project id as it may have an additional prefix, e.g. "s~" or "e~". external_app_ids: A list of apps that may be referenced by data in your application. For example, if you are connected to s~my-app and store keys for s~my-other-app, you should include s~my-other-app in the external_apps list. Returns: An ndb.Context that can connect to a Remote Cloud Datastore. You can use this context by passing it to ndb.set_context. """ |
from . import model # Late import to deal with circular imports.
# Late import since it might not exist.
if not datastore_pbs._CLOUD_DATASTORE_ENABLED:
raise datastore_errors.BadArgumentError(
datastore_pbs.MISSING_CLOUD_DATASTORE_MESSAGE)
import googledatastore
try:
from google.appengine.datastore import cloud_datastore_v1_remote_stub
except ImportError:
from google3.apphosting.datastore import cloud_datastore_v1_remote_stub
current_app_id = os.environ.get('APPLICATION_ID', None)
if current_app_id and current_app_id != app_id:
# TODO(pcostello): We should support this so users can connect to different
# applications.
raise ValueError('Cannot create a Cloud Datastore context that connects '
'to an application (%s) that differs from the application '
'already connected to (%s).' % (app_id, current_app_id))
os.environ['APPLICATION_ID'] = app_id
id_resolver = datastore_pbs.IdResolver((app_id,) + tuple(external_app_ids))
project_id = id_resolver.resolve_project_id(app_id)
endpoint = googledatastore.helper.get_project_endpoint_from_env(project_id)
datastore = googledatastore.Datastore(
project_endpoint=endpoint,
credentials=googledatastore.helper.get_credentials_from_env())
conn = model.make_connection(_api_version=datastore_rpc._CLOUD_DATASTORE_V1,
_id_resolver=id_resolver)
# If necessary, install the stubs
try:
stub = cloud_datastore_v1_remote_stub.CloudDatastoreV1RemoteStub(datastore)
apiproxy_stub_map.apiproxy.RegisterStub(datastore_rpc._CLOUD_DATASTORE_V1,
stub)
except:
pass # The stub is already installed.
# TODO(pcostello): Ensure the current stub is connected to the right project.
# Install a memcache and taskqueue stub which throws on everything.
try:
apiproxy_stub_map.apiproxy.RegisterStub('memcache', _ThrowingStub())
except:
pass # The stub is already installed.
try:
apiproxy_stub_map.apiproxy.RegisterStub('taskqueue', _ThrowingStub())
except:
pass # The stub is already installed.
return make_context(conn=conn) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _analyze_indexed_fields(indexed_fields):
"""Internal helper to check a list of indexed fields. Args: indexed_fields: A list of names, possibly dotted names. (A dotted name is a string containing names separated by dots, e.g. 'foo.bar.baz'. An undotted name is a string containing no dots, e.g. 'foo'.) Returns: A dict whose keys are undotted names. For each undotted name in the argument, the dict contains that undotted name as a key with None as a value. For each dotted name in the argument, the dict contains the first component as a key with a list of remainders as values. Example: If the argument is ['foo.bar.baz', 'bar', 'foo.bletch'], the return value is {'foo': ['bar.baz', 'bletch'], 'bar': None}. Raises: TypeError if an argument is not a string. ValueError for duplicate arguments and for conflicting arguments (when an undotted name also appears as the first component of a dotted name). """ |
result = {}
for field_name in indexed_fields:
if not isinstance(field_name, basestring):
raise TypeError('Field names must be strings; got %r' % (field_name,))
if '.' not in field_name:
if field_name in result:
raise ValueError('Duplicate field name %s' % field_name)
result[field_name] = None
else:
head, tail = field_name.split('.', 1)
if head not in result:
result[head] = [tail]
elif result[head] is None:
raise ValueError('Field name %s conflicts with ancestor %s' %
(field_name, head))
else:
result[head].append(tail)
return result |
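Because this helper is a pure function over strings, it ports directly to Python 3 (`basestring` becomes `str`). A standalone sketch, verified against the example in the docstring:

```python
def analyze_indexed_fields(indexed_fields):
    """Group dotted field names by their first component."""
    result = {}
    for field_name in indexed_fields:
        if not isinstance(field_name, str):
            raise TypeError('Field names must be strings; got %r' % (field_name,))
        if '.' not in field_name:
            if field_name in result:
                raise ValueError('Duplicate field name %s' % field_name)
            result[field_name] = None
        else:
            head, tail = field_name.split('.', 1)
            if head not in result:
                result[head] = [tail]
            elif result[head] is None:
                # 'foo' was already claimed as an unstructured field.
                raise ValueError('Field name %s conflicts with ancestor %s' %
                                 (field_name, head))
            else:
                result[head].append(tail)
    return result
```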
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _make_model_class(message_type, indexed_fields, **props):
"""Construct a Model subclass corresponding to a Message subclass. Args: message_type: A Message subclass. indexed_fields: A list of dotted and undotted field names. **props: Additional properties with which to seed the class. Returns: A Model subclass whose properties correspond to those fields of message_type whose field name is listed in indexed_fields, plus the properties specified by the **props arguments. For dotted field names, a StructuredProperty is generated using a Model subclass created by a recursive call. Raises: Whatever _analyze_indexed_fields() raises. ValueError if a field name conflicts with a name in **props. ValueError if a field name is not valid field of message_type. ValueError if an undotted field name designates a MessageField. """ |
analyzed = _analyze_indexed_fields(indexed_fields)
for field_name, sub_fields in analyzed.iteritems():
if field_name in props:
raise ValueError('field name %s is reserved' % field_name)
try:
field = message_type.field_by_name(field_name)
except KeyError:
raise ValueError('Message type %s has no field named %s' %
(message_type.__name__, field_name))
if isinstance(field, messages.MessageField):
if not sub_fields:
raise ValueError(
'MessageField %s cannot be indexed, only sub-fields' % field_name)
sub_model_class = _make_model_class(field.type, sub_fields)
prop = model.StructuredProperty(sub_model_class, field_name,
repeated=field.repeated)
else:
if sub_fields is not None:
raise ValueError(
'Unstructured field %s cannot have indexed sub-fields' % field_name)
if isinstance(field, messages.EnumField):
prop = EnumProperty(field.type, field_name, repeated=field.repeated)
elif isinstance(field, messages.BytesField):
prop = model.BlobProperty(field_name,
repeated=field.repeated, indexed=True)
else:
# IntegerField, FloatField, BooleanField, StringField.
prop = model.GenericProperty(field_name, repeated=field.repeated)
props[field_name] = prop
return model.MetaModel('_%s__Model' % message_type.__name__,
(model.Model,), props) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_value(self, entity):
"""Compute and store a default value if necessary.""" |
value = super(_ClassKeyProperty, self)._get_value(entity)
if not value:
value = entity._class_key()
self._store_value(entity, value)
return value |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_hierarchy(cls):
"""Internal helper to return the list of polymorphic base classes. This returns a list of class objects, e.g. [Animal, Feline, Cat]. """ |
bases = []
for base in cls.mro(): # pragma: no branch
if hasattr(base, '_get_hierarchy'):
bases.append(base)
del bases[-1] # Delete PolyModel itself
bases.reverse()
return bases |
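The MRO walk above can be reproduced with plain classes; here a stand-in base plays the role of PolyModel (class names are illustrative):

```python
class PolyBase(object):
    @classmethod
    def _get_hierarchy(cls):
        bases = []
        for base in cls.mro():
            if hasattr(base, '_get_hierarchy'):
                bases.append(base)
        del bases[-1]    # drop PolyBase itself, like PolyModel above
        bases.reverse()  # root class first
        return bases


class Animal(PolyBase):
    pass


class Feline(Animal):
    pass


class Cat(Feline):
    pass


assert Cat._get_hierarchy() == [Animal, Feline, Cat]
assert Animal._get_hierarchy() == [Animal]
```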
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_env_paths_in_basedirs(base_dirs):
"""Returns all potential envs in a basedir""" |
# get potential env path in the base_dirs
env_path = []
for base_dir in base_dirs:
env_path.extend(glob.glob(os.path.join(
os.path.expanduser(base_dir), '*', '')))
# self.log.info("Found the following kernels from config: %s", ", ".join(venvs))
return env_path |
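The trailing empty component in `os.path.join` makes the glob pattern end with a separator, so only directories match — a quick check in a throwaway directory:

```python
import glob
import os
import tempfile

base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, 'env1'))
os.makedirs(os.path.join(base, 'env2'))
with open(os.path.join(base, 'not_an_env.txt'), 'w') as f:
    f.write('x')

# same pattern as find_env_paths_in_basedirs: '<base>/*/' matches dirs only
paths = glob.glob(os.path.join(base, '*', ''))
names = sorted(os.path.basename(os.path.dirname(p)) for p in paths)
assert names == ['env1', 'env2']  # the plain file is filtered out
```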
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def convert_to_env_data(mgr, env_paths, validator_func, activate_func, name_template, display_name_template, name_prefix):
"""Converts a list of paths to environments to env_data. env_data is a structure {name -> (ressourcedir, kernel spec)} """ |
env_data = {}
for venv_dir in env_paths:
venv_name = os.path.split(os.path.abspath(venv_dir))[1]
kernel_name = name_template.format(name_prefix + venv_name)
kernel_name = kernel_name.lower()
if kernel_name in env_data:
mgr.log.debug(
"Found duplicate env kernel: %s, which would again point to %s. Using the first!",
kernel_name, venv_dir)
continue
argv, language, resource_dir = validator_func(venv_dir)
if not argv:
# probably does not contain the kernel type (e.g. not R or python or does not contain
# the kernel code itself)
continue
display_name = display_name_template.format(kernel_name)
kspec_dict = {"argv": argv, "language": language,
"display_name": display_name,
"resource_dir": resource_dir
}
# default arguments freeze the current loop values into the closure (avoids late binding)
def loader(env_dir=venv_dir, activate_func=activate_func, mgr=mgr):
mgr.log.debug("Loading env data for %s" % env_dir)
res = activate_func(mgr, env_dir)
# mgr.log.info("PATH: %s" % res['PATH'])
return res
kspec = EnvironmentLoadingKernelSpec(loader, **kspec_dict)
env_data.update({kernel_name: (resource_dir, kspec)})
return env_data |
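The loader's default arguments are not cosmetic: without them every closure created in the loop would see the *last* `venv_dir`, because Python closures bind names late. A minimal demonstration:

```python
# Late binding: every closure sees the final value of d.
late = [lambda: d for d in ('env1', 'env2', 'env3')]
assert [f() for f in late] == ['env3', 'env3', 'env3']

# Default-argument trick (as in the loader above): value frozen per iteration.
frozen = [lambda d=d: d for d in ('env1', 'env2', 'env3')]
assert [f() for f in frozen] == ['env1', 'env2', 'env3']
```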
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def validate_IPykernel(venv_dir):
"""Validates that this env contains an IPython kernel and returns info to start it Returns: tuple (ARGV, language, resource_dir) """ |
python_exe_name = find_exe(venv_dir, "python")
if python_exe_name is None:
python_exe_name = find_exe(venv_dir, "python2")
if python_exe_name is None:
python_exe_name = find_exe(venv_dir, "python3")
if python_exe_name is None:
return [], None, None
# Make some checks for ipython first, because calling the import is expensive
if find_exe(venv_dir, "ipython") is None:
if find_exe(venv_dir, "ipython2") is None:
if find_exe(venv_dir, "ipython3") is None:
return [], None, None
# check if this is really an ipython **kernel**
import subprocess
try:
subprocess.check_call([python_exe_name, '-c', 'import ipykernel'])
except Exception:
# not installed? -> not usable in any case...
return [], None, None
argv = [python_exe_name, "-m", "ipykernel", "-f", "{connection_file}"]
resources_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), "logos", "python")
return argv, "python", resources_dir |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def validate_IRkernel(venv_dir):
"""Validates that this env contains an IRkernel kernel and returns info to start it Returns: tuple (ARGV, language, resource_dir) """ |
r_exe_name = find_exe(venv_dir, "R")
if r_exe_name is None:
return [], None, None
# check if this is really an IRkernel **kernel**
import subprocess
resources_dir = None
try:
print_resources = 'cat(as.character(system.file("kernelspec", package = "IRkernel")))'
resources_dir_bytes = subprocess.check_output([r_exe_name, '--slave', '-e', print_resources])
resources_dir = resources_dir_bytes.decode(errors='ignore').strip()
except Exception:
# not installed? -> not usable in any case...
return [], None, None
argv = [r_exe_name, "--slave", "-e", "IRkernel::main()", "--args", "{connection_file}"]
if not resources_dir or not os.path.exists(resources_dir):
# Fall back to our own logos, but don't get the nice js goodies...
resources_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), "logos", "r")
return argv, "r", resources_dir |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_exe(env_dir, name):
"""Finds a exe with that name in the environment path""" |
if platform.system() == "Windows":
name = name + ".exe"
# find the binary
exe_name = os.path.join(env_dir, name)
if not os.path.exists(exe_name):
exe_name = os.path.join(env_dir, "bin", name)
if not os.path.exists(exe_name):
exe_name = os.path.join(env_dir, "Scripts", name)
if not os.path.exists(exe_name):
return None
return exe_name |
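The lookup order (env root, then `bin/`, then `Scripts/`) can be self-checked in a throwaway directory; this re-states `find_exe` with a local copy so the snippet stands alone:

```python
import os
import platform
import tempfile

def find_exe(env_dir, name):  # local sketch of the function above
    if platform.system() == "Windows":
        name = name + ".exe"
    for sub in ('', 'bin', 'Scripts'):
        exe_name = os.path.join(env_dir, sub, name)
        if os.path.exists(exe_name):
            return exe_name
    return None

env = tempfile.mkdtemp()
os.makedirs(os.path.join(env, 'bin'))
target = 'python.exe' if platform.system() == 'Windows' else 'python'
with open(os.path.join(env, 'bin', target), 'w') as f:
    f.write('')

found = find_exe(env, 'python')
assert found == os.path.join(env, 'bin', target)
assert find_exe(env, 'R') is None  # nothing named R anywhere in the env
```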
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_virtualenv_env_data(mgr):
"""Finds kernel specs from virtualenv environments env_data is a structure {name -> (resourcedir, kernel spec)} """ |
if not mgr.find_virtualenv_envs:
return {}
mgr.log.debug("Looking for virtualenv environments in %s...", mgr.virtualenv_env_dirs)
# find all potential env paths
env_paths = find_env_paths_in_basedirs(mgr.virtualenv_env_dirs)
mgr.log.debug("Scanning virtualenv environments for python kernels...")
env_data = convert_to_env_data(mgr=mgr,
env_paths=env_paths,
validator_func=validate_IPykernel,
activate_func=_get_env_vars_for_virtualenv_env,
name_template=mgr.virtualenv_prefix_template,
display_name_template=mgr.display_name_template,
# virtualenv has only python, so no need for a prefix
name_prefix="")
return env_data |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def source_bash(args, stdin=None):
"""Simply bash-specific wrapper around source-foreign Returns a dict to be used as a new environment""" |
args = list(args)
new_args = ['bash', '--sourcer=source']
new_args.extend(args)
return source_foreign(new_args, stdin=stdin) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def source_zsh(args, stdin=None):
"""Simply zsh-specific wrapper around source-foreign Returns a dict to be used as a new environment""" |
args = list(args)
new_args = ['zsh', '--sourcer=source']
new_args.extend(args)
return source_foreign(new_args, stdin=stdin) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def source_cmd(args, stdin=None):
"""Simple cmd.exe-specific wrapper around source-foreign. returns a dict to be used as a new environment """ |
args = list(args)
fpath = locate_binary(args[0])
args[0] = fpath if fpath else args[0]
if not os.path.isfile(args[0]):
raise RuntimeError("Command not found: %s" % args[0])
prevcmd = 'call '
prevcmd += ' '.join([argvquote(arg, force=True) for arg in args])
prevcmd = escape_windows_cmd_string(prevcmd)
args.append('--prevcmd={}'.format(prevcmd))
args.insert(0, 'cmd')
args.append('--interactive=0')
args.append('--sourcer=call')
args.append('--envcmd=set')
args.append('--seterrpostcmd=if errorlevel 1 exit 1')
args.append('--use-tmpfile=1')
return source_foreign(args, stdin=stdin) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def source_foreign(args, stdin=None):
"""Sources a file written in a foreign shell language.""" |
parser = _ensure_source_foreign_parser()
ns = parser.parse_args(args)
if ns.prevcmd is not None:
pass # don't change prevcmd if given explicitly
elif os.path.isfile(ns.files_or_code[0]):
# we have filename to source
ns.prevcmd = '{} "{}"'.format(ns.sourcer, '" "'.join(ns.files_or_code))
elif ns.prevcmd is None:
ns.prevcmd = ' '.join(ns.files_or_code) # code to run, no files
fsenv = foreign_shell_data(shell=ns.shell, login=ns.login,
interactive=ns.interactive,
envcmd=ns.envcmd,
aliascmd=ns.aliascmd,
extra_args=ns.extra_args,
safe=ns.safe, prevcmd=ns.prevcmd,
postcmd=ns.postcmd,
funcscmd=ns.funcscmd,
sourcer=ns.sourcer,
use_tmpfile=ns.use_tmpfile,
seterrprevcmd=ns.seterrprevcmd,
seterrpostcmd=ns.seterrpostcmd)
if fsenv is None:
raise RuntimeError("Source failed: {}\n".format(ns.prevcmd), 1)
# apply results
env = os.environ.copy()
for k, v in fsenv.items():
if k in env and v == env[k]:
continue # no change from original
env[k] = v
# Remove any env-vars that were unset by the script.
for k in os.environ: # use os.environ again to prevent errors about changed size
if k not in fsenv:
env.pop(k, None)
return env |
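The merge at the end — adopt changed variables, drop ones the sourced script unset — is easiest to see on plain dicts:

```python
current = {'PATH': '/usr/bin', 'HOME': '/home/me', 'OLD_FLAG': '1'}
fsenv = {'PATH': '/venv/bin:/usr/bin', 'HOME': '/home/me', 'NEW_FLAG': '1'}

env = dict(current)
for k, v in fsenv.items():
    if k in env and v == env[k]:
        continue  # unchanged, keep the original entry
    env[k] = v
for k in current:        # iterate the snapshot, not env, while deleting
    if k not in fsenv:
        env.pop(k, None)  # OLD_FLAG was unset by the sourced script

assert env == {'PATH': '/venv/bin:/usr/bin', 'HOME': '/home/me', 'NEW_FLAG': '1'}
```

Iterating the original snapshot (here `current`, in the function `os.environ`) rather than the dict being mutated is what avoids the "dictionary changed size during iteration" error the comment alludes to.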
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_bool(x):
""""Converts to a boolean in a semantically meaningful way.""" |
if isinstance(x, bool):
return x
elif isinstance(x, str):
return x.lower() not in _FALSES
else:
return bool(x) |
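`_FALSES` is defined elsewhere in the module; with a plausible set of "falsy" strings (the exact contents here are an assumption), the behavior looks like this:

```python
_FALSES = frozenset(['', '0', 'n', 'f', 'no', 'none', 'false'])  # assumed contents

def to_bool(x):
    if isinstance(x, bool):
        return x
    elif isinstance(x, str):
        return x.lower() not in _FALSES
    else:
        return bool(x)

assert to_bool(True) is True
assert to_bool('False') is False   # case-insensitive string check
assert to_bool('yes') is True      # anything not in _FALSES is truthy
assert to_bool(0) is False         # non-strings fall through to bool()
assert to_bool([1]) is True
```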
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse_env(s):
"""Parses the environment portion of string into a dict.""" |
m = ENV_RE.search(s)
if m is None:
return {}
g1 = m.group(1)
env = dict(ENV_SPLIT_RE.findall(g1))
return env |
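`ENV_RE` and `ENV_SPLIT_RE` are module-level regexes not shown here; the shape of the parse — find the env block, then split it into key/value pairs — can be mimicked with stand-in patterns. These patterns are assumptions for illustration, not the module's real ones:

```python
import re

# Stand-in patterns: a block wrapped in markers, lines of KEY=value inside.
ENV_RE = re.compile(r'__BEGIN_ENV__\n(.*?)__END_ENV__', re.DOTALL)
ENV_SPLIT_RE = re.compile(r'^(\w+)=(.*)$', re.MULTILINE)

def parse_env(s):
    m = ENV_RE.search(s)
    if m is None:
        return {}
    return dict(ENV_SPLIT_RE.findall(m.group(1)))

out = "noise\n__BEGIN_ENV__\nPATH=/usr/bin\nLANG=C\n__END_ENV__\n"
assert parse_env(out) == {'PATH': '/usr/bin', 'LANG': 'C'}
assert parse_env('no markers here') == {}
```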
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_conda_env_data(mgr):
"""Finds kernel specs from conda environments env_data is a structure {name -> (resourcedir, kernel spec)} """ |
if not mgr.find_conda_envs:
return {}
mgr.log.debug("Looking for conda environments in %s...", mgr.conda_env_dirs)
# find all potential env paths
env_paths = find_env_paths_in_basedirs(mgr.conda_env_dirs)
env_paths.extend(_find_conda_env_paths_from_conda(mgr))
env_paths = list(set(env_paths)) # remove duplicates
mgr.log.debug("Scanning conda environments for python kernels...")
env_data = convert_to_env_data(mgr=mgr,
env_paths=env_paths,
validator_func=validate_IPykernel,
activate_func=_get_env_vars_for_conda_env,
name_template=mgr.conda_prefix_template,
display_name_template=mgr.display_name_template,
name_prefix="") # lets keep the py kernels without a prefix...
if mgr.find_r_envs:
mgr.log.debug("Scanning conda environments for R kernels...")
env_data.update(convert_to_env_data(mgr=mgr,
env_paths=env_paths,
validator_func=validate_IRkernel,
activate_func=_get_env_vars_for_conda_env,
name_template=mgr.conda_prefix_template,
display_name_template=mgr.display_name_template,
name_prefix="r_"))
return env_data |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def validate_env(self, envname):
""" Check the name of the environment against the black list and the whitelist. If a whitelist is specified only it is checked. """ |
if self.whitelist_envs and envname in self.whitelist_envs:
return True
elif self.whitelist_envs:
return False
if self.blacklist_envs and envname not in self.blacklist_envs:
return True
elif self.blacklist_envs:
# If there is just a True, all envs are blacklisted
return False
else:
return True |
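The precedence (whitelist wins outright; otherwise the blacklist filters; otherwise everything is allowed) can be checked with a tiny stand-in object:

```python
class Mgr(object):
    def __init__(self, whitelist_envs=None, blacklist_envs=None):
        self.whitelist_envs = whitelist_envs
        self.blacklist_envs = blacklist_envs

    def validate_env(self, envname):  # same logic as above
        if self.whitelist_envs and envname in self.whitelist_envs:
            return True
        elif self.whitelist_envs:
            return False
        if self.blacklist_envs and envname not in self.blacklist_envs:
            return True
        elif self.blacklist_envs:
            return False
        else:
            return True


assert Mgr(whitelist_envs=['good']).validate_env('good') is True
assert Mgr(whitelist_envs=['good']).validate_env('bad') is False   # not whitelisted
assert Mgr(blacklist_envs=['bad']).validate_env('good') is True
assert Mgr(blacklist_envs=['bad']).validate_env('bad') is False    # blacklisted
assert Mgr().validate_env('anything') is True                      # no lists: allow
```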
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_env_data(self, reload=False):
"""Get the data about the available environments. env_data is a structure {name -> (resourcedir, kernel spec)} """ |
# This is called much too often and the finding process is really expensive :-(
cache = getattr(self, "_env_data_cache", {})
if not reload and cache:
return cache
env_data = {}
for supplyer in ENV_SUPPLYER:
env_data.update(supplyer(self))
env_data = {name: env_data[name] for name in env_data if self.validate_env(name)}
new_kernels = [env for env in env_data if env not in cache]
if new_kernels:
self.log.info("Found new kernels in environments: %s", ", ".join(new_kernels))
self._env_data_cache = env_data
return env_data
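The caching idiom — return the cached dict unless a reload is forced or the cache is empty, using `getattr` so the first call works before the attribute exists — in isolation:

```python
class Scanner(object):
    scans = 0

    def _find(self):
        Scanner.scans += 1  # stands in for the expensive environment scan
        return {'env_a': 1, 'env_b': 2}

    def get_data(self, reload=False):
        cache = getattr(self, '_cache', {})  # safe before first scan
        if not reload and cache:
            return cache
        self._cache = self._find()
        return self._cache


s = Scanner()
assert s.get_data() == {'env_a': 1, 'env_b': 2}
assert s.get_data() == {'env_a': 1, 'env_b': 2}
assert Scanner.scans == 1          # cached: the expensive scan ran once
s.get_data(reload=True)
assert Scanner.scans == 2          # reload forces a rescan
```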
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_all_kernel_specs_for_envs(self):
"""Returns the dict of name -> kernel_spec for all environments""" |
data = self._get_env_data()
return {name: data[name][1] for name in data} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_all_specs(self):
"""Returns a dict mapping kernel names and resource directories. """ |
# This is new in 4.1 -> https://github.com/jupyter/jupyter_client/pull/93
specs = self.get_all_kernel_specs_for_envs()
specs.update(super(EnvironmentKernelSpecManager, self).get_all_specs())
return specs |