<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def unpack(self, message):
"""Called to extract a STOMP message into this instance. message: This is a text string representing a valid STOMP (v1.0) message. information, before it is assigned internally. retuned: """ |
if not message:
raise FrameError("Unpack error! The given message isn't valid '%s'!" % message)
msg = unpack_frame(message)
self.cmd = msg['cmd']
self.headers = msg['headers']
# Assign directly as the message will have the null
# character in the message already.
self.body = msg['body']
return msg |
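For illustration, here is a minimal sketch of the kind of dict `unpack_frame` is expected to produce; the `parse_frame` helper below is hypothetical, not the stomper implementation:

```python
NULL = '\x00'

def parse_frame(message):
    """Minimal sketch of a STOMP v1.0 frame parser (hypothetical helper).

    Splits a raw frame into the command, a headers dict and the body,
    mirroring the {'cmd', 'headers', 'body'} dict used above.
    """
    headers_part, _, body = message.partition('\n\n')
    lines = headers_part.split('\n')
    cmd = lines.pop(0)
    headers = {}
    for line in lines:
        # Split on the first colon only; header values may contain colons.
        key, _, value = line.partition(':')
        headers[key.strip()] = value.strip()
    return {'cmd': cmd, 'headers': headers, 'body': body.replace(NULL, '')}
```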
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def react(self, msg):
"""Called to provide a response to a message if needed. msg: or it can be a straight STOMP message. This function will attempt to determine which an deal with it. returned: A message to return or an empty string. """ |
returned = ""
# If it's not a string, assume it's a dict.
mtype = type(msg)
if mtype in stringTypes:
msg = unpack_frame(msg)
elif mtype == dict:
pass
else:
raise FrameError("Unknown message type '%s', I don't know what to do with this!" % mtype)
if msg['cmd'] in self.states:
# print("reacting to message - %s" % msg['cmd'])
returned = self.states[msg['cmd']](msg)
return returned |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def error(self, msg):
"""Called to handle an error message received from the server. This method just logs the error message returned: NO_RESPONSE_NEEDED """ |
body = msg['body'].replace(NULL, '')
brief_msg = ""
if 'message' in msg['headers']:
brief_msg = msg['headers']['message']
self.log.error("Received server error - message%s\n\n%s" % (brief_msg, body))
returned = NO_RESPONSE_NEEDED
if self.testing:
returned = 'error'
return returned |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def receipt(self, msg):
"""Called to handle a receipt message received from the server. This method just logs the receipt message returned: NO_RESPONSE_NEEDED """ |
body = msg['body'].replace(NULL, '')
brief_msg = ""
if 'receipt-id' in msg['headers']:
brief_msg = msg['headers']['receipt-id']
self.log.info("Received server receipt message - receipt-id:%s\n\n%s" % (brief_msg, body))
returned = NO_RESPONSE_NEEDED
if self.testing:
returned = 'receipt'
return returned |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def log_init(level):
"""Set up a logger that catches all channels and logs it to stdout. This is used to set up logging when testing. """ |
log = logging.getLogger()
hdlr = logging.StreamHandler()
formatter = logging.Formatter('%(asctime)s %(name)s %(levelname)s %(message)s')
hdlr.setFormatter(formatter)
log.addHandler(hdlr)
log.setLevel(level) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def ack(self, msg):
"""Override this and do some customer message handler. """ |
print("Got a message:\n%s\n" % msg['body'])
# do something with the message...
# Generate the ack or not if you subscribed with ack='auto'
return super(Pong, self).ack(msg) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def transaction_atomic_with_retry(num_retries=5, backoff=0.1):
""" This is a decorator that will wrap the decorated method in an atomic transaction and retry the transaction a given number of times :param num_retries: How many times should we retry before we give up :param backoff: How long should we wait after each try """ |
# Create the decorator
@wrapt.decorator
def wrapper(wrapped, instance, args, kwargs):
# Keep track of how many times we have tried
num_tries = 0
exception = None
# Call the main sync entities method and catch any exceptions
while num_tries <= num_retries:
# Try running the transaction
try:
with transaction.atomic():
return wrapped(*args, **kwargs)
# Catch any operation errors
except db.utils.OperationalError as e:
num_tries += 1
exception = e
sleep(backoff * num_tries)
# If we have an exception raise it
raise exception
# Return the decorator
return wrapper |
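Stripped of Django and wrapt, the same retry-with-linear-backoff pattern can be sketched as follows; a plain `RuntimeError` stands in for `db.utils.OperationalError`, and the transaction context is omitted:

```python
import functools
import time

def retry_with_backoff(num_retries=5, backoff=0.001):
    """Sketch of the retry pattern above, minus the transaction handling."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            num_tries = 0
            exception = None
            while num_tries <= num_retries:
                try:
                    return fn(*args, **kwargs)
                except RuntimeError as e:
                    num_tries += 1
                    exception = e
                    # Linear backoff: wait a little longer after each failure
                    time.sleep(backoff * num_tries)
            raise exception
        return wrapper
    return decorator

calls = []

@retry_with_backoff(num_retries=5, backoff=0.001)
def flaky():
    """Fails twice (simulating deadlocks), then succeeds."""
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError('deadlock')
    return 'ok'
```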
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def defer_entity_syncing(wrapped, instance, args, kwargs):
""" A decorator that can be used to defer the syncing of entities until after the method has been run This is being introduced to help avoid deadlocks in the meantime as we attempt to better understand why they are happening """ |
# Defer entity syncing while we run our method
sync_entities.defer = True
# Run the method
try:
return wrapped(*args, **kwargs)
# After we run the method disable the deferred syncing
# and sync all the entities that have been buffered to be synced
finally:
# Enable entity syncing again
sync_entities.defer = False
# Get the models that need to be synced
model_objs = list(sync_entities.buffer.values())
# If None is in the buffer, a full sync of all entities was requested
if None in sync_entities.buffer:
model_objs = list()
# Sync the entities that were deferred if any
if len(sync_entities.buffer):
sync_entities(*model_objs)
# Clear the buffer
sync_entities.buffer = {} |
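The defer-and-flush behaviour can be sketched with plain function attributes; `sync` and `with_deferred_syncing` below are illustrative stand-ins, not the django-entity API:

```python
def sync(*objs):
    """Sketch of a sync entry point that buffers calls while deferred."""
    if sync.defer:
        if objs:
            for obj in objs:
                sync.buffer[id(obj)] = obj
        else:
            sync.buffer[None] = None  # a no-arg call means "sync everything"
        return
    if not objs:
        sync.synced.append('ALL')
    else:
        sync.synced.extend(objs)

sync.defer = False
sync.buffer = {}
sync.synced = []

def with_deferred_syncing(fn):
    """Defer syncing while fn runs, then flush the buffer once at the end."""
    def wrapper(*args, **kwargs):
        sync.defer = True
        try:
            return fn(*args, **kwargs)
        finally:
            sync.defer = False
            model_objs = list(sync.buffer.values())
            if None in sync.buffer:
                model_objs = []  # a full sync was requested
            if sync.buffer:
                sync(*model_objs)
            sync.buffer = {}
    return wrapper

@with_deferred_syncing
def do_work():
    # Both calls are buffered; one sync happens when do_work returns
    sync('a')
    sync('b')
```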
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_super_entities_by_ctype(model_objs_by_ctype, model_ids_to_sync, sync_all):
""" Given model objects organized by content type and a dictionary of all model IDs that need to be synced, organize all super entity relationships that need to be synced. Ensure that the model_ids_to_sync dict is updated with any new super entities that need to be part of the overall entity sync """ |
super_entities_by_ctype = defaultdict(lambda: defaultdict(list)) # pragma: no cover
for ctype, model_objs_for_ctype in model_objs_by_ctype.items():
entity_config = entity_registry.entity_registry.get(ctype.model_class())
super_entities = entity_config.get_super_entities(model_objs_for_ctype, sync_all)
super_entities_by_ctype[ctype] = {
ContentType.objects.get_for_model(model_class, for_concrete_model=False): relationships
for model_class, relationships in super_entities.items()
}
# Continue adding to the set of entities that need to be synced
for super_entity_ctype, relationships in super_entities_by_ctype[ctype].items():
for sub_entity_id, super_entity_id in relationships:
model_ids_to_sync[ctype].add(sub_entity_id)
model_ids_to_sync[super_entity_ctype].add(super_entity_id)
return super_entities_by_ctype |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_model_objs_to_sync(model_ids_to_sync, model_objs_map, sync_all):
""" Given the model IDs to sync, fetch all model objects to sync """ |
model_objs_to_sync = {}
for ctype, model_ids_to_sync_for_ctype in model_ids_to_sync.items():
model_qset = entity_registry.entity_registry.get(ctype.model_class()).queryset
if not sync_all:
model_objs_to_sync[ctype] = model_qset.filter(id__in=model_ids_to_sync_for_ctype)
else:
model_objs_to_sync[ctype] = [
model_objs_map[ctype, model_id] for model_id in model_ids_to_sync_for_ctype
]
return model_objs_to_sync |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sync_entities_watching(instance):
""" Syncs entities watching changes of a model instance. """ |
for entity_model, entity_model_getter in entity_registry.entity_watching[instance.__class__]:
model_objs = list(entity_model_getter(instance))
if model_objs:
sync_entities(*model_objs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def upsert_entity_kinds(self, entity_kinds):
""" Given a list of entity kinds ensure they are synced properly to the database. This will ensure that only unchanged entity kinds are synced and will still return all updated entity kinds :param entity_kinds: The list of entity kinds to sync """ |
# Filter out unchanged entity kinds
unchanged_entity_kinds = {}
if entity_kinds:
unchanged_entity_kinds = {
(entity_kind.name, entity_kind.display_name): entity_kind
for entity_kind in EntityKind.all_objects.extra(
where=['(name, display_name) IN %s'],
params=[tuple(
(entity_kind.name, entity_kind.display_name)
for entity_kind in entity_kinds
)]
)
}
# Filter out the unchanged entity kinds
changed_entity_kinds = [
entity_kind
for entity_kind in entity_kinds
if (entity_kind.name, entity_kind.display_name) not in unchanged_entity_kinds
]
# If any of our kinds have changed upsert them
upserted_entity_kinds = []
if changed_entity_kinds:
# Select all our existing entity kinds for update so we can do proper locking
# We have to select all here for some odd reason, if we only select the ones
# we are syncing we still run into deadlock issues
list(EntityKind.all_objects.all().select_for_update().values_list('id', flat=True))
# Upsert the entity kinds
upserted_entity_kinds = manager_utils.bulk_upsert(
queryset=EntityKind.all_objects.filter(
name__in=[entity_kind.name for entity_kind in changed_entity_kinds]
),
model_objs=changed_entity_kinds,
unique_fields=['name'],
update_fields=['display_name'],
return_upserts=True
)
# Return all the entity kinds
return upserted_entity_kinds + list(unchanged_entity_kinds.values()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_entity_kind(self, model_obj):
""" Returns a tuple for a kind name and kind display name of an entity. By default, uses the app_label and model of the model object's content type as the kind. """ |
model_obj_ctype = ContentType.objects.get_for_model(self.queryset.model)
return (u'{0}.{1}'.format(model_obj_ctype.app_label, model_obj_ctype.model), u'{0}'.format(model_obj_ctype)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def register_entity(self, entity_config):
""" Registers an entity config """ |
if not issubclass(entity_config, EntityConfig):
raise ValueError('Must register entity config class of subclass EntityConfig')
if entity_config.queryset is None:
raise ValueError('Entity config must define queryset')
model = entity_config.queryset.model
self._entity_registry[model] = entity_config()
# Add watchers to the global look up table
for watching_model, entity_model_getter in entity_config.watching:
self._entity_watching[watching_model].append((model, entity_model_getter)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def start(host='localhost', port=61613, username='', password=''):
""" |
StompClientFactory.username = username
StompClientFactory.password = password
reactor.connectTCP(host, port, StompClientFactory())
reactor.run() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def send(self):
"""Send out a hello message periodically. """ |
self.log.info("Saying hello (%d)." % self.counter)
f = stomper.Frame()
f.unpack(stomper.send(DESTINATION, 'hello there (%d)' % self.counter))
self.counter += 1
# ActiveMQ specific headers:
#
#f.headers['persistent'] = 'true'
self.transport.write(f.pack()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def connectionMade(self):
"""Register with stomp server. """ |
cmd = stomper.connect(self.username, self.password)
self.transport.write(cmd) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dataReceived(self, data):
"""Use stompbuffer to determine when a complete message has been received. """ |
self.stompBuffer.appendData(data)
while True:
msg = self.stompBuffer.getOneMessage()
if msg is None:
break
returned = self.react(msg)
if returned:
self.transport.write(returned) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def ack(self, msg):
"""Process the message and determine what to do with it. """ |
self.log.info("receiverId <%s> Received: <%s> " % (self.receiverId, msg['body']))
#return super(MyStomp, self).ack(msg)
return stomper.NO_REPONSE_NEEDED |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def connectionMade(self):
"""Register with the stomp server. """ |
cmd = self.sm.connect()
self.transport.write(cmd) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dataReceived(self, data):
"""Data received, react to it and respond if needed. """ |
# print "receiver dataReceived: <%s>" % data
msg = stomper.unpack_frame(data)
returned = self.sm.react(msg)
# print "receiver returned <%s>" % returned
if returned:
self.transport.write(returned) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_id_in_folder(self, name, parent_folder_id=0):
"""Find a folder or a file ID from its name, inside a given folder. Args: name (str):
Name of the folder or the file to find. parent_folder_id (int):
ID of the folder where to search. Returns: int. ID of the file or folder found. None if not found. Raises: BoxError: An error response is returned from Box (status_code >= 400). BoxHttpResponseError: Response from Box is malformed. requests.exceptions.*: Any connection related problem. """ |
if name is None or len(name) == 0:
return parent_folder_id
offset = 0
resp = self.get_folder_items(parent_folder_id,
limit=1000, offset=offset,
fields_list=['name'])
total = int(resp['total_count'])
while offset < total:
found = self.__find_name(resp, name)
if found is not None:
return found
offset += int(len(resp['entries']))
resp = self.get_folder_items(parent_folder_id,
limit=1000, offset=offset,
fields_list=['name'])
return None |
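The paging loop can be exercised against a stubbed listing; `fake_get_folder_items` below is purely illustrative, not the Box API:

```python
ITEMS = [{'name': 'file%d' % i} for i in range(25)]

def fake_get_folder_items(limit=10, offset=0):
    """Stand-in for get_folder_items: returns one page plus the total count."""
    return {
        'total_count': len(ITEMS),
        'entries': ITEMS[offset:offset + limit],
    }

def find_name(name, limit=10):
    """Page through the listing until the name is found, as in find_id_in_folder."""
    offset = 0
    resp = fake_get_folder_items(limit=limit, offset=offset)
    total = int(resp['total_count'])
    while offset < total:
        for entry in resp['entries']:
            if entry['name'] == name:
                return entry
        # Advance by however many entries this page actually held
        offset += len(resp['entries'])
        resp = fake_get_folder_items(limit=limit, offset=offset)
    return None
```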
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_folder(self, name, parent_folder_id=0):
"""Create a folder If the folder exists, a BoxError will be raised. Args: folder_id (int):
Name of the folder. parent_folder_id (int):
ID of the folder where to create the new one. Returns: dict. Response from Box. Raises: BoxError: An error response is returned from Box (status_code >= 400). BoxHttpResponseError: Response from Box is malformed. requests.exceptions.*: Any connection related problem. """ |
return self.__request("POST", "folders",
data={ "name": name,
"parent": {"id": unicode(parent_folder_id)} }) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete_folder(self, folder_id, recursive=True):
"""Delete an existing folder Args: folder_id (int):
ID of the folder to delete. recursive (bool):
Delete all subfolder if True. Returns: dict. Response from Box. Raises: BoxError: An error response is returned from Box (status_code >= 400). BoxHttpResponseError: Response from Box is malformed. requests.exceptions.*: Any connection related problem. """ |
return self.__request("DELETE", "folders/%s" % (folder_id, ),
querystring={'recursive': unicode(recursive).lower()}) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_folder_items(self, folder_id, limit=100, offset=0, fields_list=None):
"""Get files and folders inside a given folder Args: folder_id (int):
Where to get files and folders info. limit (int):
The number of items to return. offset (int):
The item at which to begin the response. fields_list (list):
List of attributes to get. All attributes if None. Returns: dict. Response from Box. Raises: BoxError: An error response is returned from Box (status_code >= 400). BoxHttpResponseError: Response from Box is malformed. requests.exceptions.*: Any connection related problem. """ |
qs = { "limit": limit,
"offset": offset }
if fields_list:
qs['fields'] = ','.join(fields_list)
return self.__request("GET", "folders/%s/items" % (folder_id, ),
querystring=qs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def upload_file(self, name, folder_id, file_path):
"""Upload a file into a folder. Use function for small file otherwise there is the chunk_upload_file() function Args:: name (str):
Name of the file on your Box storage. folder_id (int):
ID of the folder where to upload the file. file_path (str):
Local path of the file to upload. Returns: dict. Response from Box. Raises: BoxError: An error response is returned from Box (status_code >= 400). BoxHttpResponseError: Response from Box is malformed. requests.exceptions.*: Any connection related problem. """ |
try:
return self.__do_upload_file(name, folder_id, file_path)
except BoxError as ex:
if ex.status != 401:
raise
# tokens have been refreshed, so we retry the upload
return self.__do_upload_file(name, folder_id, file_path) |
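The try-once-more-on-401 pattern (the first failed attempt refreshes the tokens) can be sketched independently of the Box client; `AuthError` is a hypothetical stand-in for `BoxError`:

```python
class AuthError(Exception):
    """Hypothetical stand-in for BoxError, carrying an HTTP status code."""
    def __init__(self, status):
        super(AuthError, self).__init__('HTTP %d' % status)
        self.status = status

attempts = []

def do_upload():
    """Fails with a 401 on the first call (simulating an expired token)."""
    attempts.append(1)
    if len(attempts) == 1:
        raise AuthError(401)
    return 'uploaded'

def upload():
    try:
        return do_upload()
    except AuthError as ex:
        if ex.status != 401:
            raise  # any other error propagates
        # tokens have been refreshed, so retry the upload exactly once
        return do_upload()
```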
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def upload_new_file_version(self, name, folder_id, file_id, file_path):
"""Upload a new version of a file into a folder. Use function for small file otherwise there is the chunk_upload_file() function. Args:: name (str):
Name of the file on your Box storage. folder_id (int):
ID of the folder where to upload the file. file_id (int):
ID of the file to update. file_path (str):
Local path of the file to upload. Returns: dict. Response from Box. Raises: BoxError: An error response is returned from Box (status_code >= 400). BoxHttpResponseError: Response from Box is malformed. requests.exceptions.*: Any connection related problem. """ |
try:
return self.__do_upload_file(name, folder_id, file_path, file_id)
except BoxError as ex:
if ex.status != 401:
raise
# tokens have been refreshed, so we retry the upload
return self.__do_upload_file(name, folder_id, file_path, file_id) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def chunk_upload_file(self, name, folder_id, file_path, progress_callback=None, chunk_size=1024*1024*1):
"""Upload a file chunk by chunk. The whole file is never loaded in memory. Use this function for big file. The callback(transferred, total) to let you know the upload progress. Upload can be cancelled if the callback raise an Exception. Args: name (str):
Name of the file on your Box storage. folder_id (int):
ID of the folder where to upload the file. file_path (str):
Local path of the file to upload. progress_callback (func):
Function called each time a chunk is uploaded. chunk_size (int):
Size of chunks. Returns: dict. Response from Box. Raises: BoxError: An error response is returned from Box (status_code >= 400). BoxHttpResponseError: Response from Box is malformed. requests.exceptions.*: Any connection related problem. """ |
try:
return self.__do_chunk_upload_file(name, folder_id, file_path,
progress_callback,
chunk_size)
except BoxError as ex:
if ex.status != 401:
raise
# tokens have been refreshed, so we retry the upload
return self.__do_chunk_upload_file(name, folder_id, file_path,
progress_callback,
chunk_size) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def copy_file(self, file_id, dest_folder_id):
"""Copy file to new destination Args: file_id (int):
ID of the file to copy. dest_folder_id (int):
ID of the parent folder you are copying to. Returns: dict. Response from Box. Raises: BoxError: An error response is returned from Box (status_code >= 400). BoxError: 409 - Item with the same name already exists. In this case you will need to download the file and upload a new version to your destination. (Box currently doesn't have a method to copy a new version.) BoxHttpResponseError: Response from Box is malformed. requests.exceptions.*: Any connection related problem. """ |
return self.__request("POST", "/files/" + unicode(file_id) + "/copy",
data={ "parent": {"id": unicode(dest_folder_id)} }) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def ack(self, msg):
"""Processes the received message. I don't need to generate an ack message. """ |
self.log.info("senderID:%s Received: %s " % (self.senderID, msg['body']))
return stomper.NO_REPONSE_NEEDED |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _clear(self, pipe=None):
"""Helper for clear operations. :param pipe: Redis pipe in case update is performed as a part of transaction. :type pipe: :class:`redis.client.StrictPipeline` or :class:`redis.client.StrictRedis` """ |
redis = self.redis if pipe is None else pipe
redis.delete(self.key) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _normalize_index(self, index, pipe=None):
"""Convert negative indexes into their positive equivalents.""" |
pipe = self.redis if pipe is None else pipe
len_self = self.__len__(pipe)
positive_index = index if index >= 0 else len_self + index
return len_self, positive_index |
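The normalization rule itself (a negative index counts back from the end) is a pure function, shown here without the Redis length lookup:

```python
def normalize_index(index, length):
    """Convert a possibly-negative sequence index into its positive
    equivalent, mirroring _normalize_index above."""
    return index if index >= 0 else length + index
```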
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _transaction(self, fn, *extra_keys):
"""Helper simplifying code within watched transaction. Takes *fn*, function treated as a transaction. Returns whatever *fn* returns. ``self.key`` is watched. *fn* takes *pipe* as the only argument. :param fn: Closure treated as a transaction. :type fn: function *fn(pipe)* :param extra_keys: Optional list of additional keys to watch. :type extra_keys: list :rtype: whatever *fn* returns """ |
results = []
def trans(pipe):
results.append(fn(pipe))
self.redis.transaction(trans, self.key, *extra_keys)
return results[0] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def recursive_path(pack, path):
"""Find paths recursively""" |
matches = []
for root, _, filenames in os.walk(os.path.join(pack, path)):
for filename in filenames:
matches.append(os.path.join(root, filename)[len(pack) + 1:])
return matches |
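A standalone copy of the same walk, run over a throwaway directory tree, shows the package-relative paths it produces:

```python
import os
import tempfile

def recursive_path(pack, path):
    """Collect file paths under pack/path, relative to pack (same logic as above)."""
    matches = []
    for root, _, filenames in os.walk(os.path.join(pack, path)):
        for filename in filenames:
            # Drop the leading "<pack>/" prefix to get a package-relative path
            matches.append(os.path.join(root, filename)[len(pack) + 1:])
    return matches

# Build a throwaway tree: <base>/data/sub/f.txt
base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, 'data', 'sub'))
open(os.path.join(base, 'data', 'sub', 'f.txt'), 'w').close()
found = recursive_path(base, 'data')
```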
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def nack(messageid, subscriptionid, transactionid=None):
"""STOMP negative acknowledge command. NACK is the opposite of ACK. It is used to tell the server that the client did not consume the message. The server can then either send the message to a different client, discard it, or put it in a dead letter queue. The exact behavior is server specific. messageid: This is the id of the message we are acknowledging, what else could it be? ;) subscriptionid: This is the id of the subscription that applies to the message. transactionid: This is the id that all actions in this transaction will have. If this is not given then a random UUID will be generated for this. """ |
header = 'subscription:%s\nmessage-id:%s' % (subscriptionid, messageid)
if transactionid:
header += '\ntransaction:%s' % transactionid
return "NACK\n%s\n\n\x00\n" % header |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def connect(username, password, host, heartbeats=(0,0)):
"""STOMP connect command. username, password: These are the needed auth details to connect to the message server. After sending this we will receive a CONNECTED message which will contain our session id. """ |
if len(heartbeats) != 2:
raise ValueError('Invalid heartbeat %r' % heartbeats)
cx, cy = heartbeats
return "CONNECT\naccept-version:1.1\nhost:%s\nheart-beat:%i,%i\nlogin:%s\npasscode:%s\n\n\x00\n" % (host, cx, cy, username, password) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def ack(self, msg):
"""Called when a MESSAGE has been received. Override this method to handle received messages. This function will generate an acknowledge message for the given message and transaction (if present). """ |
message_id = msg['headers']['message-id']
subscription = msg['headers']['subscription']
transaction_id = None
if 'transaction-id' in msg['headers']:
transaction_id = msg['headers']['transaction-id']
# print "acknowledging message id <%s>." % message_id
return ack(message_id, subscription, transaction_id) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getOneMessage ( self ):
""" I pull one complete message off the buffer and return it decoded as a dict. If there is no complete message in the buffer, I return None. Note that the buffer can contain more than once message. You should therefore call me in a loop until I return None. """ |
( mbytes, hbytes ) = self._findMessageBytes ( self.buffer )
if not mbytes:
return None
msgdata = self.buffer[:mbytes]
self.buffer = self.buffer[mbytes:]
hdata = msgdata[:hbytes]
elems = hdata.split ( '\n' )
cmd = elems.pop ( 0 )
headers = {}
# We can't use a simple split because the value can legally contain
# colon characters (for example, the session returned by ActiveMQ).
for e in elems:
try:
i = e.index ( ':' )
except ValueError:
continue
k = e[:i].strip()
v = e[i+1:].strip()
headers [ k ] = v
# hbytes points to the start of the '\n\n' at the end of the header,
# so 2 bytes beyond this is the start of the body. The body EXCLUDES
# the final two bytes, which are '\x00\n'. Note that these 2 bytes
# are UNRELATED to the 2-byte '\n\n' that Frame.pack() used to insert
# into the data stream.
body = msgdata[hbytes+2:-2]
msg = { 'cmd' : cmd,
'headers' : headers,
'body' : body,
}
return msg |
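Splitting on the first colon only is the key detail, since values like an ActiveMQ session id contain colons themselves. A standalone version of the header loop:

```python
def parse_headers(hdata):
    """Parse STOMP header lines, splitting each on the first colon only,
    mirroring the loop in getOneMessage above."""
    headers = {}
    for line in hdata.split('\n'):
        try:
            i = line.index(':')
        except ValueError:
            continue  # not a header line (e.g. an empty line)
        headers[line[:i].strip()] = line[i + 1:].strip()
    return headers
```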
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete_entity_signal_handler(sender, instance, **kwargs):
""" Defines a signal handler for syncing an individual entity. Called when an entity is saved or deleted. """ |
if instance.__class__ in entity_registry.entity_registry:
Entity.all_objects.delete_for_obj(instance) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def save_entity_signal_handler(sender, instance, **kwargs):
""" Defines a signal handler for saving an entity. Syncs the entity to the entity mirror table. """ |
if instance.__class__ in entity_registry.entity_registry:
sync_entities(instance)
if instance.__class__ in entity_registry.entity_watching:
sync_entities_watching(instance) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def turn_on_syncing(for_post_save=True, for_post_delete=True, for_m2m_changed=True, for_post_bulk_operation=False):
""" Enables all of the signals for syncing entities. Everything is True by default, except for the post_bulk_operation signal. The reason for this is because when any bulk operation occurs on any mirrored entity model, it will result in every single entity being synced again. This is not a desired behavior by the majority of users, and should only be turned on explicitly. """ |
if for_post_save:
post_save.connect(save_entity_signal_handler, dispatch_uid='save_entity_signal_handler')
if for_post_delete:
post_delete.connect(delete_entity_signal_handler, dispatch_uid='delete_entity_signal_handler')
if for_m2m_changed:
m2m_changed.connect(m2m_changed_entity_signal_handler, dispatch_uid='m2m_changed_entity_signal_handler')
if for_post_bulk_operation:
post_bulk_operation.connect(bulk_operation_signal_handler, dispatch_uid='bulk_operation_signal_handler') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def scan_elements(self):
""" Yield each of the elements from the collection, without pulling them all into memory. .. warning:: This method is not available on the set collections provided by Python. This method may return the element multiple times. See the `Redis SCAN documentation <http://redis.io/commands/scan#scan-guarantees>`_ for details. """ |
for x in self.redis.sscan_iter(self.key):
yield self._unpickle(x) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def places_within_radius( self, place=None, latitude=None, longitude=None, radius=0, **kwargs ):
""" Return descriptions of the places stored in the collection that are within the circle specified by the given location and radius. A list of dicts will be returned. The center of the circle can be specified by the identifier of another place in the collection with the *place* keyword argument. Or, it can be specified by using both the *latitude* and *longitude* keyword arguments. By default the *radius* is given in kilometers, but you may also set the *unit* keyword argument to ``'m'``, ``'mi'``, or ``'ft'``. Limit the number of results returned with the *count* keyword argument. Change the sorted order by setting the *sort* keyword argument to ``b'DESC'``. """ |
kwargs['withdist'] = True
kwargs['withcoord'] = True
kwargs['withhash'] = False
kwargs.setdefault('sort', 'ASC')
unit = kwargs.setdefault('unit', 'km')
# Make the query
if place is not None:
response = self.redis.georadiusbymember(
self.key, self._pickle(place), radius, **kwargs
)
elif (latitude is not None) and (longitude is not None):
response = self.redis.georadius(
self.key, longitude, latitude, radius, **kwargs
)
else:
raise ValueError(
'Must specify place, or both latitude and longitude'
)
# Assemble the result
ret = []
for item in response:
ret.append(
{
'place': self._unpickle(item[0]),
'distance': item[1],
'unit': unit,
'latitude': item[2][1],
'longitude': item[2][0],
}
)
return ret |
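The distances Redis returns for `withdist` are great-circle distances; as an illustration independent of the library above, a minimal haversine sketch reproduces the kind of kilometre figure GEORADIUS would report.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance in kilometres between two (lat, lon) points
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# London to Paris is roughly 340 km
london_paris = haversine_km(51.5074, -0.1278, 48.8566, 2.3522)
```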
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rotate(self, n=1):
""" Rotate the deque n steps to the right. If n is negative, rotate to the left. """ |
# No work to do for a 0-step rotate
if n == 0:
return
def rotate_trans(pipe):
# Synchronize the cache before rotating
if self.writeback:
self._sync_helper(pipe)
# Rotating len(self) times has no effect.
len_self = self.__len__(pipe)
steps = abs_n % len_self
# When n is positive we can use the built-in Redis command
if forward:
pipe.multi()
for __ in range(steps):
pipe.rpoplpush(self.key, self.key)
# When n is negative we must use Python
else:
for __ in range(steps):
pickled_value = pipe.lpop(self.key)
pipe.rpush(self.key, pickled_value)
forward = n >= 0
abs_n = abs(n)
self._transaction(rotate_trans) |
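The `abs(n) % len` reduction above mirrors the semantics of Python's built-in `collections.deque.rotate`, which can be used to sanity-check the expected results.

```python
from collections import deque

def effective_steps(n, length):
    # Only abs(n) % length single-step rotations are actually needed
    return abs(n) % length

d = deque([1, 2, 3, 4, 5])
d.rotate(7)            # same as rotating 7 % 5 == 2 steps to the right
right = list(d)

d = deque([1, 2, 3, 4, 5])
d.rotate(-1)           # negative n rotates to the left
left = list(d)
```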
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def is_sub_to_all(self, *super_entities):
""" Given a list of super entities, return the entities that have those as a subset of their super entities. """ |
if super_entities:
if len(super_entities) == 1:
# Optimize for the case of just one super entity since this is a much less intensive query
has_subset = EntityRelationship.objects.filter(
super_entity=super_entities[0]).values_list('sub_entity', flat=True)
else:
# Get a list of entities that have super entities with all types
has_subset = EntityRelationship.objects.filter(
super_entity__in=super_entities).values('sub_entity').annotate(Count('super_entity')).filter(
super_entity__count=len(set(super_entities))).values_list('sub_entity', flat=True)
return self.filter(id__in=has_subset)
else:
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def is_sub_to_any(self, *super_entities):
""" Given a list of super entities, return the entities that have super entities that interset with those provided. """ |
if super_entities:
return self.filter(id__in=EntityRelationship.objects.filter(
super_entity__in=super_entities).values_list('sub_entity', flat=True))
else:
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def is_sub_to_any_kind(self, *super_entity_kinds):
""" Find all entities that have super_entities of any of the specified kinds """ |
if super_entity_kinds:
# get the pks of the desired subs from the relationships table
if len(super_entity_kinds) == 1:
entity_pks = EntityRelationship.objects.filter(
super_entity__entity_kind=super_entity_kinds[0]
).select_related('entity_kind', 'sub_entity').values_list('sub_entity', flat=True)
else:
entity_pks = EntityRelationship.objects.filter(
super_entity__entity_kind__in=super_entity_kinds
).select_related('entity_kind', 'sub_entity').values_list('sub_entity', flat=True)
# return a queryset limited to only those pks
return self.filter(pk__in=entity_pks)
else:
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_for_obj(self, entity_model_obj):
""" Given a saved entity model object, return the associated entity. """ |
return self.get(entity_type=ContentType.objects.get_for_model(
entity_model_obj, for_concrete_model=False), entity_id=entity_model_obj.id) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete_for_obj(self, entity_model_obj):
""" Delete the entities associated with a model object. """ |
return self.filter(
entity_type=ContentType.objects.get_for_model(
entity_model_obj, for_concrete_model=False), entity_id=entity_model_obj.id).delete(
force=True) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def all_entities(self, is_active=True):
""" Return all the entities in the group. Because groups can contain both individual entities, as well as whole groups of entities, this method acts as a convenient way to get a queryset of all the entities in the group. """ |
return self.get_all_entities(return_models=True, is_active=is_active) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_entity(self, entity, sub_entity_kind=None):
""" Add an entity, or sub-entity group to this EntityGroup. :type entity: Entity :param entity: The entity to add. :type sub_entity_kind: Optional EntityKind :param sub_entity_kind: If a sub_entity_kind is given, all sub_entities of the entity will be added to this EntityGroup. """ |
membership = EntityGroupMembership.objects.create(
entity_group=self,
entity=entity,
sub_entity_kind=sub_entity_kind,
)
return membership |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def bulk_add_entities(self, entities_and_kinds):
""" Add many entities and sub-entity groups to this EntityGroup. :type entities_and_kinds: List of (Entity, EntityKind) pairs. :param entities_and_kinds: A list of entity, entity-kind pairs to add to the group. In the pairs the entity-kind can be ``None``, to add a single entity, or some entity kind to add all sub-entities of that kind. """ |
memberships = [EntityGroupMembership(
entity_group=self,
entity=entity,
sub_entity_kind=sub_entity_kind,
) for entity, sub_entity_kind in entities_and_kinds]
created = EntityGroupMembership.objects.bulk_create(memberships)
return created |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def remove_entity(self, entity, sub_entity_kind=None):
""" Remove an entity, or sub-entity group to this EntityGroup. :type entity: Entity :param entity: The entity to remove. :type sub_entity_kind: Optional EntityKind :param sub_entity_kind: If a sub_entity_kind is given, all sub_entities of the entity will be removed from this EntityGroup. """ |
EntityGroupMembership.objects.get(
entity_group=self,
entity=entity,
sub_entity_kind=sub_entity_kind,
).delete() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def bulk_remove_entities(self, entities_and_kinds):
""" Remove many entities and sub-entity groups to this EntityGroup. :type entities_and_kinds: List of (Entity, EntityKind) pairs. :param entities_and_kinds: A list of entity, entity-kind pairs to remove from the group. In the pairs, the entity-kind can be ``None``, to add a single entity, or some entity kind to add all sub-entities of that kind. """ |
criteria = [
Q(entity=entity, sub_entity_kind=entity_kind)
for entity, entity_kind in entities_and_kinds
]
criteria = reduce(lambda q1, q2: q1 | q2, criteria, Q())
EntityGroupMembership.objects.filter(
criteria, entity_group=self).delete() |
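The `reduce(lambda q1, q2: q1 | q2, criteria, Q())` idiom OR-combines Django `Q` objects; the same fold works on any type that supports `|`, shown here with plain sets so no Django setup is required.

```python
from functools import reduce

# OR-combine a list of criteria, starting from an empty identity element,
# exactly as the method above does with Q() objects
criteria = [{1, 2}, {3}, {2, 4}]
combined = reduce(lambda a, b: a | b, criteria, set())
```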
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def bulk_overwrite(self, entities_and_kinds):
""" Update the group to the given entities and sub-entity groups. After this operation, the only members of this EntityGroup will be the given entities, and sub-entity groups. :type entities_and_kinds: List of (Entity, EntityKind) pairs. :param entities_and_kinds: A list of entity, entity-kind pairs to set to the EntityGroup. In the pairs the entity-kind can be ``None``, to add a single entity, or some entity kind to add all sub-entities of that kind. """ |
EntityGroupMembership.objects.filter(entity_group=self).delete()
return self.bulk_add_entities(entities_and_kinds) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_slug(apps, schema_editor, class_name):
""" Create a slug for each Work already in the DB. """ |
Cls = apps.get_model('spectator_events', class_name)
for obj in Cls.objects.all():
obj.slug = generate_slug(obj.pk)
obj.save(update_fields=['slug']) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def convert_descriptor_and_rows(self, descriptor, rows):
"""Convert descriptor and rows to Pandas """ |
# Prepare
primary_key = None
schema = tableschema.Schema(descriptor)
if len(schema.primary_key) == 1:
primary_key = schema.primary_key[0]
elif len(schema.primary_key) > 1:
message = 'Multi-column primary keys are not supported'
raise tableschema.exceptions.StorageError(message)
# Get data/index
data_rows = []
index_rows = []
jtstypes_map = {}
for row in rows:
values = []
index = None
for field, value in zip(schema.fields, row):
try:
if isinstance(value, float) and np.isnan(value):
value = None
if value and field.type == 'integer':
value = int(value)
value = field.cast_value(value)
except tableschema.exceptions.CastError:
value = json.loads(value)
# http://pandas.pydata.org/pandas-docs/stable/gotchas.html#support-for-integer-na
if value is None and field.type in ('number', 'integer'):
jtstypes_map[field.name] = 'number'
value = np.NaN
if field.name == primary_key:
index = value
else:
values.append(value)
data_rows.append(tuple(values))
index_rows.append(index)
# Get dtypes
dtypes = []
for field in schema.fields:
if field.name != primary_key:
field_name = field.name
if six.PY2:
field_name = field.name.encode('utf-8')
dtype = self.convert_type(jtstypes_map.get(field.name, field.type))
dtypes.append((field_name, dtype))
# Create dataframe
index = None
columns = schema.headers
array = np.array(data_rows, dtype=dtypes)
if primary_key:
index_field = schema.get_field(primary_key)
index_dtype = self.convert_type(index_field.type)
index_class = pd.Index
if index_field.type in ['datetime', 'date']:
index_class = pd.DatetimeIndex
index = index_class(index_rows, name=primary_key, dtype=index_dtype)
columns = filter(lambda column: column != primary_key, schema.headers)
dataframe = pd.DataFrame(array, index=index, columns=columns)
return dataframe |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def convert_type(self, type):
"""Convert type to Pandas """ |
# Mapping
mapping = {
'any': np.dtype('O'),
'array': np.dtype(list),
'boolean': np.dtype(bool),
'date': np.dtype('O'),
'datetime': np.dtype('datetime64[ns]'),
'duration': np.dtype('O'),
'geojson': np.dtype('O'),
'geopoint': np.dtype('O'),
'integer': np.dtype(int),
'number': np.dtype(float),
'object': np.dtype(dict),
'string': np.dtype('O'),
'time': np.dtype('O'),
'year': np.dtype(int),
'yearmonth': np.dtype('O'),
}
# Get type
if type not in mapping:
message = 'Type "%s" is not supported' % type
raise tableschema.exceptions.StorageError(message)
return mapping[type] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def restore_descriptor(self, dataframe):
"""Restore descriptor from Pandas """ |
# Prepare
fields = []
primary_key = None
# Primary key
if dataframe.index.name:
field_type = self.restore_type(dataframe.index.dtype)
field = {
'name': dataframe.index.name,
'type': field_type,
'constraints': {'required': True},
}
fields.append(field)
primary_key = dataframe.index.name
# Fields
for column, dtype in dataframe.dtypes.iteritems():
sample = dataframe[column].iloc[0] if len(dataframe) else None
field_type = self.restore_type(dtype, sample=sample)
field = {'name': column, 'type': field_type}
# TODO: provide better required indication
# if dataframe[column].isnull().sum() == 0:
# field['constraints'] = {'required': True}
fields.append(field)
# Descriptor
descriptor = {}
descriptor['fields'] = fields
if primary_key:
descriptor['primaryKey'] = primary_key
return descriptor |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def restore_row(self, row, schema, pk):
"""Restore row from Pandas """ |
result = []
for field in schema.fields:
if schema.primary_key and schema.primary_key[0] == field.name:
if field.type == 'number' and np.isnan(pk):
pk = None
if pk and field.type == 'integer':
pk = int(pk)
result.append(field.cast_value(pk))
else:
value = row[field.name]
if field.type == 'number' and np.isnan(value):
value = None
if value and field.type == 'integer':
value = int(value)
elif field.type == 'datetime':
value = value.to_pydatetime()
result.append(field.cast_value(value))
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def restore_type(self, dtype, sample=None):
"""Restore type from Pandas """ |
# Pandas types
if pdc.is_bool_dtype(dtype):
return 'boolean'
elif pdc.is_datetime64_any_dtype(dtype):
return 'datetime'
elif pdc.is_integer_dtype(dtype):
return 'integer'
elif pdc.is_numeric_dtype(dtype):
return 'number'
# Python types
if sample is not None:
if isinstance(sample, (list, tuple)):
return 'array'
elif isinstance(sample, datetime.date):
return 'date'
elif isinstance(sample, isodate.Duration):
return 'duration'
elif isinstance(sample, dict):
return 'object'
elif isinstance(sample, six.string_types):
return 'string'
elif isinstance(sample, datetime.time):
return 'time'
return 'string' |
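The Python-sample fallback is an `isinstance` chain; a simplified re-implementation (dropping the pandas dtype branch and the `isodate`/`six` cases, which are assumptions of the original) behaves like this.

```python
import datetime

def restore_type_from_sample(sample):
    # Infer a Table Schema type from a sample Python value (sketch only)
    if isinstance(sample, (list, tuple)):
        return 'array'
    elif isinstance(sample, datetime.date):
        return 'date'
    elif isinstance(sample, dict):
        return 'object'
    elif isinstance(sample, str):
        return 'string'
    elif isinstance(sample, datetime.time):
        return 'time'
    return 'string'  # default, as in the method above

inferred = [
    restore_type_from_sample([1, 2]),
    restore_type_from_sample(datetime.date(2020, 1, 1)),
    restore_type_from_sample({'a': 1}),
    restore_type_from_sample('x'),
    restore_type_from_sample(42),
]
```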
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def domain_urlize(value):
""" Returns an HTML link to the supplied URL, but only using the domain as the text. Strips 'www.' from the start of the domain, if present. e.g. if `my_url` is 'http://www.example.org/foo/' then: {{ my_url|domain_urlize }} returns: <a href="http://www.example.org/foo/" rel="nofollow">example.org</a> """ |
parsed_uri = urlparse(value)
domain = '{uri.netloc}'.format(uri=parsed_uri)
if domain.startswith('www.'):
domain = domain[4:]
return format_html('<a href="{}" rel="nofollow">{}</a>',
value,
domain
) |
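The domain extraction can be exercised with the standard library alone; `format_html` is just Django's autoescaping wrapper around `str.format`, so this sketch covers the interesting part.

```python
from urllib.parse import urlparse

def extract_domain(value):
    # Pull the network location out of the URL and strip a leading 'www.'
    domain = urlparse(value).netloc
    if domain.startswith('www.'):
        domain = domain[4:]
    return domain

d = extract_domain('http://www.example.org/foo/')
```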
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def current_url_name(context):
""" Returns the name of the current URL, namespaced, or False. Example usage: {% current_url_name as url_name %} <a href="#"{% if url_name == 'myapp:home' %} class="active"{% endif %}">Home</a> """ |
url_name = False
if context.request.resolver_match:
url_name = "{}:{}".format(
context.request.resolver_match.namespace,
context.request.resolver_match.url_name
)
return url_name |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def most_read_creators_card(num=10):
""" Displays a card showing the Creators who have the most Readings associated with their Publications. In spectator_core tags, rather than spectator_reading so it can still be used on core pages, even if spectator_reading isn't installed. """ |
if spectator_apps.is_enabled('reading'):
object_list = most_read_creators(num=num)
object_list = chartify(object_list, 'num_readings', cutoff=1)
return {
'card_title': 'Most read authors',
'score_attr': 'num_readings',
'object_list': object_list,
} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def most_visited_venues_card(num=10):
""" Displays a card showing the Venues that have the most Events. In spectator_core tags, rather than spectator_events so it can still be used on core pages, even if spectator_events isn't installed. """ |
if spectator_apps.is_enabled('events'):
object_list = most_visited_venues(num=num)
object_list = chartify(object_list, 'num_visits', cutoff=1)
return {
'card_title': 'Most visited venues',
'score_attr': 'num_visits',
'object_list': object_list,
} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def has_urls(self):
"Handy for templates."
if self.isbn_uk or self.isbn_us or self.official_url or self.notes_url:
return True
else:
return False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def get_queryset(self):
"Reduce the number of queries and speed things up."
qs = super().get_queryset()
qs = qs.select_related('publication__series') \
.prefetch_related('publication__roles__creator')
return qs |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_slug(apps, schema_editor):
""" Create a slug for each Creator already in the DB. """ |
Creator = apps.get_model('spectator_core', 'Creator')
for c in Creator.objects.all():
c.slug = generate_slug(c.pk)
c.save(update_fields=['slug']) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def forwards(apps, schema_editor):
""" Copy the ClassicalWork and DancePiece data to use the new through models. """ |
Event = apps.get_model('spectator_events', 'Event')
ClassicalWorkSelection = apps.get_model(
'spectator_events', 'ClassicalWorkSelection')
DancePieceSelection = apps.get_model(
'spectator_events', 'DancePieceSelection')
for event in Event.objects.all():
for work in event.classicalworks.all():
selection = ClassicalWorkSelection(
classical_work=work,
event=event)
selection.save()
for piece in event.dancepieces.all():
selection = DancePieceSelection(
dance_piece=piece,
event=event)
selection.save() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def forwards(apps, schema_editor):
""" Set the venue_name field of all Events that have a Venue. """ |
Event = apps.get_model('spectator_events', 'Event')
for event in Event.objects.all():
if event.venue is not None:
event.venue_name = event.venue.name
event.save() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def forwards(apps, schema_editor):
""" Migrate all 'exhibition' Events to the new 'museum' Event kind. """ |
Event = apps.get_model('spectator_events', 'Event')
for ev in Event.objects.filter(kind='exhibition'):
ev.kind = 'museum'
ev.save() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def chartify(qs, score_field, cutoff=0, ensure_chartiness=True):
""" Given a QuerySet it will go through and add a `chart_position` property to each object returning a list of the objects. If adjacent objects have the same 'score' (based on `score_field`) then they will have the same `chart_position`. This can then be used in templates for the `value` of <li> elements in an <ol>. By default any objects with a score of 0 or less will be removed. By default, if all the items in the chart have the same position, no items will be returned (it's not much of a chart). Keyword arguments: qs -- The QuerySet score_field -- The name of the numeric field that each object in the QuerySet has, that will be used to compare their positions. cutoff -- Any objects with a score of this value or below will be removed from the list. Set to None to disable this. ensure_chartiness -- If True, then if all items in the list have the same score, an empty list will be returned. """ |
chart = []
position = 0
prev_obj = None
for counter, obj in enumerate(qs):
score = getattr(obj, score_field)
if score != getattr(prev_obj, score_field, None):
position = counter + 1
if cutoff is None or score > cutoff:
obj.chart_position = position
chart.append(obj)
prev_obj = obj
if ensure_chartiness and len(chart) > 0:
if getattr(chart[0], score_field) == getattr(chart[-1], score_field):
chart = []
return chart |
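The tie-handling and cutoff behaviour can be exercised with plain objects; the function is reproduced verbatim here so the example is self-contained, with `SimpleNamespace` standing in for model instances.

```python
from types import SimpleNamespace

def chartify(qs, score_field, cutoff=0, ensure_chartiness=True):
    chart, position, prev_obj = [], 0, None
    for counter, obj in enumerate(qs):
        score = getattr(obj, score_field)
        if score != getattr(prev_obj, score_field, None):
            position = counter + 1
        if cutoff is None or score > cutoff:
            obj.chart_position = position
            chart.append(obj)
        prev_obj = obj
    if ensure_chartiness and len(chart) > 0:
        if getattr(chart[0], score_field) == getattr(chart[-1], score_field):
            chart = []
    return chart

# Two tied leaders share position 1; the next item takes position 3;
# the zero-score item is dropped by the default cutoff
items = [SimpleNamespace(score=s) for s in (10, 10, 7, 0)]
chart = chartify(items, 'score')
positions = [o.chart_position for o in chart]
```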
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def by_visits(self, event_kind=None):
""" Gets Venues in order of how many Events have been held there. Adds a `num_visits` field to each one. event_kind filters by kind of Event, e.g. 'theatre', 'cinema', etc. """ |
qs = self.get_queryset()
if event_kind is not None:
qs = qs.filter(event__kind=event_kind)
qs = qs.annotate(num_visits=Count('event')) \
.order_by('-num_visits', 'name_sort')
return qs |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def by_views(self, kind=None):
""" Gets Works in order of how many times they've been attached to Events. kind is the kind of Work, e.g. 'play', 'movie', etc. """ |
qs = self.get_queryset()
if kind is not None:
qs = qs.filter(kind=kind)
qs = qs.annotate(num_views=Count('event')) \
.order_by('-num_views', 'title_sort')
return qs |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def naturalize_person(self, string):
""" Attempt to make a version of the string that has the surname, if any, at the start. 'John, Brown' to 'Brown, John' 'Sir John Brown Jr' to 'Brown, Sir John Jr' 'Prince' to 'Prince' string -- The string to change. """ |
suffixes = [
'Jr', 'Jr.', 'Sr', 'Sr.',
'I', 'II', 'III', 'IV', 'V',
]
# Add lowercase versions:
suffixes = suffixes + [s.lower() for s in suffixes]
# If a name has a capitalised particle in we use that to sort.
# So 'Le Carre, John' but 'Carre, John le'.
particles = [
'Le', 'La',
'Von', 'Van',
'Du', 'De',
]
surname = '' # Smith
names = '' # Fred James
suffix = '' # Jr
sort_string = string
parts = string.split(' ')
if parts[-1] in suffixes:
# Remove suffixes entirely, as we'll add them back on the end.
suffix = parts[-1]
parts = parts[0:-1] # Remove suffix from parts
sort_string = ' '.join(parts)
if len(parts) > 1:
if parts[-2] in particles:
# From ['Alan', 'Barry', 'Le', 'Carré']
# to ['Alan', 'Barry', 'Le Carré']:
parts = parts[0:-2] + [ ' '.join(parts[-2:]) ]
# From 'David Foster Wallace' to 'Wallace, David Foster':
sort_string = '{}, {}'.format(parts[-1], ' '.join(parts[:-1]))
if suffix:
# Add it back on.
sort_string = '{} {}'.format(sort_string, suffix)
# In case this name has any numbers in it.
sort_string = self._naturalize_numbers(sort_string)
return sort_string |
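The core transform can be sketched without the suffix and particle handling; this simplified version only moves the last word to the front, which is what the full method does in the common case.

```python
def naturalize_simple(name):
    # Simplified sketch: surname-first ordering. The full method above also
    # handles suffixes ('Jr', 'III', ...) and particles ('Le', 'Von', ...).
    parts = name.split(' ')
    if len(parts) > 1:
        return '{}, {}'.format(parts[-1], ' '.join(parts[:-1]))
    return name

wallace = naturalize_simple('David Foster Wallace')
prince = naturalize_simple('Prince')
```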
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def forward(apps, schema_editor):
""" Copying data from the old `Event.movie` and `Event.play` ForeignKey fields into the new `Event.movies` and `Event.plays` ManyToManyFields. """ |
Event = apps.get_model('spectator_events', 'Event')
MovieSelection = apps.get_model('spectator_events', 'MovieSelection')
PlaySelection = apps.get_model('spectator_events', 'PlaySelection')
for event in Event.objects.all():
if event.movie is not None:
selection = MovieSelection(event=event, movie=event.movie)
selection.save()
if event.play is not None:
selection = PlaySelection(event=event, play=event.play)
selection.save() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_slug(apps, schema_editor):
""" Create a slug for each Event already in the DB. """ |
Event = apps.get_model('spectator_events', 'Event')
for e in Event.objects.all():
e.slug = generate_slug(e.pk)
e.save(update_fields=['slug']) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def page(self, number, *args, **kwargs):
"""Return a standard ``Page`` instance with custom, digg-specific page ranges attached. """ |
page = super().page(number, *args, **kwargs)
number = int(number) # we know this will work
# easier access
num_pages, body, tail, padding, margin = \
self.num_pages, self.body, self.tail, self.padding, self.margin
# put active page in middle of main range
main_range = list(map(int, [
math.floor(number-body/2.0)+1, # +1 = shift odd body to right
math.floor(number+body/2.0)]))
# adjust bounds
if main_range[0] < 1:
main_range = list(map(abs(main_range[0]-1).__add__, main_range))
if main_range[1] > num_pages:
main_range = list(map((num_pages-main_range[1]).__add__, main_range))
# Determine leading and trailing ranges; if possible and appropriate,
# combine them with the main range, in which case the resulting main
# block might end up considerable larger than requested. While we
# can't guarantee the exact size in those cases, we can at least try
# to come as close as possible: we can reduce the other boundary to
# max padding, instead of using half the body size, which would
# otherwise be the case. If the padding is large enough, this will
# of course have no effect.
# Example:
# total pages=100, page=4, body=5, (default padding=2)
# 1 2 3 [4] 5 6 ... 99 100
# total pages=100, page=4, body=5, padding=1
# 1 2 3 [4] 5 ... 99 100
# If it were not for this adjustment, both cases would result in the
# first output, regardless of the padding value.
if main_range[0] <= tail+margin:
leading = []
main_range = [1, max(body, min(number+padding, main_range[1]))]
main_range[0] = 1
else:
leading = list(range(1, tail+1))
# basically same for trailing range, but not in ``left_align`` mode
if self.align_left:
trailing = []
else:
if main_range[1] >= num_pages-(tail+margin)+1:
trailing = []
if not leading:
# ... but handle the special case of neither leading nor
# trailing ranges; otherwise, we would now modify the
# main range low bound, which we just set in the previous
# section, again.
main_range = [1, num_pages]
else:
main_range = [min(num_pages-body+1, max(number-padding, main_range[0])), num_pages]
else:
trailing = list(range(num_pages-tail+1, num_pages+1))
# finally, normalize values that are out of bound; this basically
# fixes all the things the above code screwed up in the simple case
# of few enough pages where one range would suffice.
main_range = [max(main_range[0], 1), min(main_range[1], num_pages)]
# make the result of our calculations available as custom ranges
# on the ``Page`` instance.
page.main_range = list(range(main_range[0], main_range[1]+1))
page.leading_range = leading
page.trailing_range = trailing
page.page_range = reduce(lambda x, y: x+((x and y) and [False])+y,
[page.leading_range, page.main_range, page.trailing_range])
page.__class__ = DiggPage
return page |
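The main-range centring uses floor arithmetic so that an odd body shifts one step to the right of the active page; isolating just that calculation shows the raw range before the bounds adjustments kick in.

```python
import math

def raw_main_range(number, body):
    # +1 shifts an odd body to the right, keeping the active page centred
    return [math.floor(number - body / 2.0) + 1,
            math.floor(number + body / 2.0)]

centred = raw_main_range(4, 5)   # well inside the page range
clipped = raw_main_range(1, 5)   # low bound falls below 1, later adjusted
```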
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def version():
"""Get the version number without importing the mrcfile package.""" |
namespace = {}
with open(os.path.join('mrcfile', 'version.py')) as f:
exec(f.read(), namespace)
return namespace['__version__'] |
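The exec-a-file pattern avoids importing the package (and hence its dependencies) at setup time; a self-contained check using a temporary stand-in `version.py` demonstrates it.

```python
import os
import tempfile

# Read __version__ from a file by exec-ing its source, without importing
# the package it belongs to
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, 'version.py')
    with open(path, 'w') as f:
        f.write("__version__ = '1.2.3'\n")
    namespace = {}
    with open(path) as f:
        exec(f.read(), namespace)
    version = namespace['__version__']
```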
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_event_kind(self):
""" Unless we're on the front page we'll have a kind_slug like 'movies'. We need to translate that into an event `kind` like 'movie'. """ |
slug = self.kwargs.get('kind_slug', None)
if slug is None:
return None # Front page; showing all Event kinds.
else:
slugs_to_kinds = {v:k for k,v in Event.KIND_SLUGS.items()}
return slugs_to_kinds.get(slug, None) |
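The slug-to-kind translation is a dict inversion followed by a defaulted lookup; a quick sketch with a hypothetical `KIND_SLUGS` mapping (the real one lives on the `Event` model) shows the idiom.

```python
KIND_SLUGS = {'movie': 'movies', 'gig': 'gigs'}   # hypothetical mapping

# Invert kind -> slug into slug -> kind, as get_event_kind does
slugs_to_kinds = {v: k for k, v in KIND_SLUGS.items()}
kind = slugs_to_kinds.get('movies')
missing = slugs_to_kinds.get('unknown')   # unknown slugs yield None
```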
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def get_queryset(self):
"Restrict to a single kind of event, if any, and include Venue data."
qs = super().get_queryset()
kind = self.get_event_kind()
if kind is not None:
qs = qs.filter(kind=kind)
qs = qs.select_related('venue')
return qs |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_work_kind(self):
""" We'll have a kind_slug like 'movies'. We need to translate that into a work `kind` like 'movie'. """ |
slugs_to_kinds = {v:k for k,v in Work.KIND_SLUGS.items()}
return slugs_to_kinds.get(self.kind_slug, None) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_countries(self):
""" Returns a list of dicts, one per country that has at least one Venue in it. Each dict has 'code' and 'name' elements. The list is sorted by the country 'name's. """ |
qs = Venue.objects.values('country') \
.exclude(country='') \
.distinct() \
.order_by('country')
countries = []
for c in qs:
countries.append({
'code': c['country'],
'name': Venue.get_country_name(c['country'])
})
return sorted(countries, key=lambda k: k['name']) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def forwards(apps, schema_editor):
""" Re-save all the Works because something earlier didn't create their slugs. """ |
Work = apps.get_model('spectator_events', 'Work')
for work in Work.objects.all():
if not work.slug:
work.slug = generate_slug(work.pk)
work.save() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def annual_event_counts_card(kind='all', current_year=None):
""" Displays years and the number of events per year. kind is an Event kind (like 'cinema', 'gig', etc.) or 'all' (default). current_year is an optional date object representing the year we're already showing information about. """ |
if kind == 'all':
card_title = 'Events per year'
else:
card_title = '{} per year'.format(Event.get_kind_name_plural(kind))
return {
'card_title': card_title,
'kind': kind,
'years': annual_event_counts(kind=kind),
'current_year': current_year
} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def event_list_tabs(counts, current_kind, page_number=1):
""" Displays the tabs to different event_list pages. `counts` is a dict of number of events for each kind, like: {'all': 30, 'gig': 12, 'movie': 18,} `current_kind` is the event kind that's active, if any. e.g. 'gig', 'movie', etc. `page_number` is the current page of this kind of events we're on. """ |
return {
'counts': counts,
'current_kind': current_kind,
'page_number': page_number,
# A list of all the kinds we might show tabs for, like
# ['gig', 'movie', 'play', ...]
'event_kinds': Event.get_kinds(),
# A dict of data about each kind, keyed by kind ('gig') including
# data about 'name', 'name_plural' and 'slug':
'event_kinds_data': Event.get_kinds_data(),
} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def day_events_card(date):
""" Displays Events that happened on the supplied date. `date` is a date object. """ |
d = date.strftime(app_settings.DATE_FORMAT)
card_title = 'Events on {}'.format(d)
return {
'card_title': card_title,
'event_list': day_events(date=date),
} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def most_seen_creators_card(event_kind=None, num=10):
""" Displays a card showing the Creators that are associated with the most Events. """ |
object_list = most_seen_creators(event_kind=event_kind, num=num)
object_list = chartify(object_list, 'num_events', cutoff=1)
return {
'card_title': 'Most seen people/groups',
'score_attr': 'num_events',
'object_list': object_list,
} |
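The `chartify()` helper used above isn't shown in this file. A minimal standalone sketch of what such a helper typically does — assign chart positions with standard competition ranking (tied scores share a position) and drop items at or below a cutoff — assuming that behaviour, which is not confirmed by the source:

```python
from operator import attrgetter

def chartify(object_list, score_attr, cutoff=None):
    """Hypothetical sketch: annotate each object with a `chart_position`
    attribute, sharing positions on tied scores (1, 1, 3, ...), and
    drop objects whose score is <= cutoff."""
    get_score = attrgetter(score_attr)
    if cutoff is not None:
        object_list = [o for o in object_list if get_score(o) > cutoff]
    position = 0
    previous_score = None
    for i, obj in enumerate(object_list, start=1):
        score = get_score(obj)
        if score != previous_score:
            # New score, so this object takes the current 1-based index.
            position = i
        obj.chart_position = position
        previous_score = score
    return object_list
```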
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def most_seen_creators_by_works(work_kind=None, role_name=None, num=10):
""" Returns a QuerySet of the Creators that are associated with the most Works. """ |
return Creator.objects.by_works(kind=work_kind, role_name=role_name)[:num] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def most_seen_creators_by_works_card(work_kind=None, role_name=None, num=10):
""" Displays a card showing the Creators that are associated with the most Works. e.g.: {% most_seen_creators_by_works_card work_kind='movie' role_name='Director' num=5 %} """ |
object_list = most_seen_creators_by_works(
work_kind=work_kind, role_name=role_name, num=num)
object_list = chartify(object_list, 'num_works', cutoff=1)
# Attempt to create a sensible card title...
if role_name:
# Yes, this pluralization is going to break at some point:
creators_name = '{}s'.format(role_name.capitalize())
else:
creators_name = 'People/groups'
if work_kind:
works_name = Work.get_kind_name_plural(work_kind).lower()
else:
works_name = 'works'
card_title = '{} with most {}'.format(creators_name, works_name)
return {
'card_title': card_title,
'score_attr': 'num_works',
'object_list': object_list,
} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def most_seen_works_card(kind=None, num=10):
""" Displays a card showing the Works that are associated with the most Events. """ |
object_list = most_seen_works(kind=kind, num=num)
object_list = chartify(object_list, 'num_views', cutoff=1)
if kind:
card_title = 'Most seen {}'.format(
Work.get_kind_name_plural(kind).lower())
else:
card_title = 'Most seen works'
return {
'card_title': card_title,
'score_attr': 'num_views',
'object_list': object_list,
'name_attr': 'title',
'use_cite': True,
} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def forwards(apps, schema_editor):
""" Change all Movie objects into Work objects, and their associated data into WorkRole and WorkSelection models, then delete the Movie. """ |
Movie = apps.get_model('spectator_events', 'Movie')
Work = apps.get_model('spectator_events', 'Work')
WorkRole = apps.get_model('spectator_events', 'WorkRole')
WorkSelection = apps.get_model('spectator_events', 'WorkSelection')
for m in Movie.objects.all():
work = Work.objects.create(
kind='movie',
title=m.title,
title_sort=m.title_sort,
year=m.year,
imdb_id=m.imdb_id
)
for role in m.roles.all():
WorkRole.objects.create(
creator=role.creator,
work=work,
role_name=role.role_name,
role_order=role.role_order
)
for selection in m.events.all():
WorkSelection.objects.create(
event=selection.event,
work=work,
order=selection.order
)
m.delete() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def paginate_queryset(self, queryset, page_size):
""" Paginate the queryset, if needed. This is EXACTLY the same as the standard ListView.paginate_queryset() except for this line: page = paginator.page(page_number, softlimit=True) Because we want to use the DiggPaginator's softlimit option. So that if you're viewing a page of, say, Flickr photos, and you switch from viewing by Uploaded Time to viewing by Taken Time, the new ordering might have fewer pages. In that case we want to see the final page, not a 404. The softlimit does that, but I can't see how to use """ |
paginator = self.get_paginator(
queryset,
page_size,
orphans = self.get_paginate_orphans(),
allow_empty_first_page = self.get_allow_empty(),
body = self.paginator_body,
margin = self.paginator_margin,
padding = self.paginator_padding,
tail = self.paginator_tail,
)
page_kwarg = self.page_kwarg
page = self.kwargs.get(page_kwarg) or self.request.GET.get(page_kwarg) or 1
try:
page_number = int(page)
except ValueError:
if page == 'last':
page_number = paginator.num_pages
else:
raise Http404(_("Page is not 'last', nor can it be converted to an int."))
try:
page = paginator.page(page_number, softlimit=True)
return (paginator, page, page.object_list, page.has_other_pages())
except InvalidPage as e:
raise Http404(_('Invalid page (%(page_number)s): %(message)s') % {
'page_number': page_number,
'message': str(e)
}) |
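The page-number resolution in the middle of that method (int-like string, the literal `'last'`, or a 404) can be isolated as a plain function — a sketch of the same parsing logic with the Django specifics (`Http404`, request kwargs) stripped out:

```python
def resolve_page_number(page, num_pages):
    """Mirror of the view's page parsing: None falls back to 1, an
    int-like string is converted, 'last' maps to the final page, and
    anything else raises ValueError (the view turns this into a 404)."""
    if page is None:
        return 1
    try:
        return int(page)
    except ValueError:
        if page == 'last':
            return num_pages
        raise ValueError(
            "Page is not 'last', nor can it be converted to an int.")
```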
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def day_publications(date):
""" Returns a QuerySet of Publications that were being read on `date`. `date` is a date tobject. """ |
readings = Reading.objects \
.filter(start_date__lte=date) \
.filter(
Q(end_date__gte=date)
|
Q(end_date__isnull=True)
)
if readings:
return Publication.objects.filter(reading__in=readings) \
.select_related('series') \
.prefetch_related('roles__creator') \
.distinct()
else:
return Publication.objects.none() |
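The `Q` filter above encodes a date-overlap test: a reading covers `date` if it started on or before it and either hasn't ended or ended on or after it. The same test as a plain predicate, for clarity:

```python
from datetime import date

def is_being_read_on(day, start_date, end_date):
    """True if a reading with these dates was in progress on `day`.
    Matches the queryset: start_date__lte=day AND
    (end_date__gte=day OR end_date IS NULL)."""
    if start_date is None or start_date > day:
        # filter(start_date__lte=day) also excludes NULL start dates.
        return False
    return end_date is None or end_date >= day
```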
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def day_publications_card(date):
""" Displays Publications that were being read on `date`. `date` is a date tobject. """ |
d = date.strftime(app_settings.DATE_FORMAT)
card_title = 'Reading on {}'.format(d)
return {
'card_title': card_title,
'publication_list': day_publications(date=date),
} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def forwards(apps, schema_editor):
""" Change Events with kind 'movie' to 'cinema' and Events with kind 'play' to 'theatre'. Purely for more consistency. """ |
Event = apps.get_model('spectator_events', 'Event')
for ev in Event.objects.filter(kind='movie'):
ev.kind = 'cinema'
ev.save()
for ev in Event.objects.filter(kind='play'):
ev.kind = 'theatre'
ev.save() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_env_variable(var_name, default=None):
"""Get the environment variable or return exception.""" |
try:
return os.environ[var_name]
except KeyError:
if default is None:
error_msg = "Set the %s environment variable" % var_name
raise ImproperlyConfigured(error_msg)
else:
return default |
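A self-contained version of the same function, with a stand-in for Django's `ImproperlyConfigured` so it runs outside a Django project:

```python
import os

class ImproperlyConfigured(Exception):
    """Stand-in for django.core.exceptions.ImproperlyConfigured."""

def get_env_variable(var_name, default=None):
    """Return the environment variable, the default, or raise."""
    try:
        return os.environ[var_name]
    except KeyError:
        if default is None:
            raise ImproperlyConfigured(
                "Set the %s environment variable" % var_name)
        return default
```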
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def forwards(apps, schema_editor):
""" Having added the new 'exhibition' Work type, we're going to assume that every Event of type 'museum' should actually have one Exhibition attached. So, we'll add one, with the same title as the Event. And we'll move all Creators from the Event to the Exhibition. """ |
Event = apps.get_model('spectator_events', 'Event')
Work = apps.get_model('spectator_events', 'Work')
WorkRole = apps.get_model('spectator_events', 'WorkRole')
WorkSelection = apps.get_model('spectator_events', 'WorkSelection')
for event in Event.objects.filter(kind='museum'):
# Create a new Work based on this Event's details.
work = Work.objects.create(
kind='exhibition',
title=event.title,
title_sort=event.title_sort
)
# This doesn't generate the slug field automatically because Django.
# So we'll have to do it manually. Graarhhh.
work.slug = generate_slug(work.pk)
work.save()
# Associate the new Work with the Event.
WorkSelection.objects.create(
event=event,
work=work
)
# Associate any Creators on the Event with the new Work.
for role in event.roles.all():
WorkRole.objects.create(
creator=role.creator,
work=work,
role_name=role.role_name,
role_order=role.role_order
)
# Remove Creators from the Event.
role.delete() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def by_readings(self, role_names=['', 'Author']):
""" The Creators who have been most-read, ordered by number of readings. By default it will only include Creators whose role was left empty, or is 'Author'. Each Creator will have a `num_readings` attribute. """ |
if not spectator_apps.is_enabled('reading'):
raise ImproperlyConfigured("To use the CreatorManager.by_readings() method, 'spectator.reading' must be in INSTALLED_APPS.")
qs = self.get_queryset()
qs = qs.filter(publication_roles__role_name__in=role_names) \
.exclude(publications__reading__isnull=True) \
.annotate(num_readings=Count('publications__reading')) \
.order_by('-num_readings', 'name_sort')
return qs |
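The `annotate(Count(...))` + `order_by('-num_readings', 'name_sort')` pattern — count related rows per object, then sort by count descending with a name tie-break — has a simple plain-Python analogue, shown here with hypothetical sample data:

```python
from collections import Counter

# (author, publication) pairs standing in for the Reading rows.
readings = [('Anne', 'Book A'), ('Anne', 'Book B'), ('Bob', 'Book C')]

# Count readings per author, mirroring annotate(num_readings=Count(...)).
counts = Counter(author for author, _ in readings)

# Sort by count descending, then name ascending, mirroring
# order_by('-num_readings', 'name_sort').
ranked = sorted(counts.items(), key=lambda kv: (-kv[1], kv[0]))
```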
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def by_events(self, kind=None):
""" Get the Creators involved in the most Events. This only counts Creators directly involved in an Event. i.e. if a Creator is the director of a movie Work, and an Event was a viewing of that movie, that Event wouldn't count. Unless they were also directly involved in the Event (e.g. speaking after the movie). kind - If supplied, only Events with that `kind` value will be counted. """ |
if not spectator_apps.is_enabled('events'):
raise ImproperlyConfigured("To use the CreatorManager.by_events() method, 'spectator.events' must be in INSTALLED_APPS.")
qs = self.get_queryset()
if kind is not None:
qs = qs.filter(events__kind=kind)
qs = qs.annotate(num_events=Count('events', distinct=True)) \
.order_by('-num_events', 'name_sort')
return qs |