| text_prompt (string, 157–13.1k chars) | code_prompt (string, 7–19.8k chars, nullable) |
|---|---|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def hoist_event(self, e):
""" Hoist an xcb_generic_event_t to the right xcffib structure. """ |
if e.response_type == 0:
return self._process_error(ffi.cast("xcb_generic_error_t *", e))
# We mask off the high bit here because events sent with SendEvent have
# this bit set. We don't actually care where the event came from, so we
# just throw this away. Maybe we could expose this, if anyone actually
# cares about it.
event = self._event_offsets[e.response_type & 0x7f]
buf = CffiUnpacker(e)
return event(buf) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def serialize(self, value, greedy=True):
""" Greedy serialization requires the value to either be a column or convertible to a column, whereas non-greedy serialization will pass through any string as-is and will only serialize Column objects. Non-greedy serialization is useful when preparing queries with custom filters or segments. """ |
if greedy and not isinstance(value, Column):
value = self.normalize(value)
if isinstance(value, Column):
return value.id
else:
return value |
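A minimal runnable sketch of the greedy/non-greedy split, with stub `Column` and `normalize` implementations standing in for the real library objects:

```python
# Stubs (assumptions) to make the behavior observable outside the library.
class Column:
    def __init__(self, id):
        self.id = id

KNOWN = {'pageviews': Column('ga:pageviews')}

def normalize(value):
    # Stub: look up a column by name, or return the value unchanged.
    return KNOWN.get(value, value)

def serialize(value, greedy=True):
    if greedy and not isinstance(value, Column):
        value = normalize(value)
    if isinstance(value, Column):
        return value.id
    else:
        return value

print(serialize('pageviews'))                   # 'ga:pageviews' (greedy lookup)
print(serialize('custom==expr', greedy=False))  # passed through as-is
```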
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def describe(profile, description):
""" Generate a query by describing it as a series of actions and parameters to those actions. These map directly to Query methods and arguments to those methods. This is an alternative to the chaining interface. Mostly useful if you'd like to put your queries in a file, rather than in Python code. """ |
api_type = description.pop('type', 'core')
api = getattr(profile, api_type)
return refine(api.query, description) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def refine(query, description):
""" Refine a query from a dictionary of parameters that describes it. See `describe` for more information. """ |
for name, arguments in description.items():
    if hasattr(query, name):
        attribute = getattr(query, name)
    else:
        raise ValueError("Unknown query method: " + name)
# query descriptions are often automatically generated, and
# may include empty calls, which we skip
if utils.isempty(arguments):
continue
if callable(attribute):
method = attribute
if isinstance(arguments, dict):
query = method(**arguments)
elif isinstance(arguments, list):
query = method(*arguments)
else:
query = method(arguments)
    else:
        setattr(query, name, arguments)
return query |
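For illustration, here is a runnable sketch of how `refine` walks a description dict. `DummyQuery` is made up for the demo, and the sketch skips the `hasattr` and empty-argument guards of the real function:

```python
# A dummy chainable query object; each method returns a new instance.
class DummyQuery:
    def __init__(self, calls=()):
        self.calls = list(calls)
    def metrics(self, *names):
        return DummyQuery(self.calls + [('metrics', names)])
    def range(self, start, days=1):
        return DummyQuery(self.calls + [('range', (start, days))])

def refine(query, description):
    for name, arguments in description.items():
        method = getattr(query, name)
        if isinstance(arguments, dict):
            query = method(**arguments)   # keyword arguments
        elif isinstance(arguments, list):
            query = method(*arguments)    # positional arguments
        else:
            query = method(arguments)     # single argument
    return query

q = refine(DummyQuery(), {
    'metrics': ['pageviews', 'sessions'],
    'range': {'start': '2014-01-01', 'days': 7},
})
print(q.calls)  # [('metrics', ('pageviews', 'sessions')), ('range', ('2014-01-01', 7))]
```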
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set(self, key=None, value=None, **kwargs):
""" `set` is a way to add raw properties to the request, for features that this module does not support or supports incompletely. For convenience's sake, it will serialize Column objects but will leave any other kind of value alone. """ |
serialize = partial(self.api.columns.serialize, greedy=False)
if key and value:
self.raw[key] = serialize(value)
elif key or kwargs:
properties = key or kwargs
for key, value in properties.items():
self.raw[key] = serialize(value)
else:
raise ValueError(
"Query#set requires a key and value, a properties dictionary or keyword arguments.")
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def description(self):
""" A list of the metrics this query will ask for. """ |
if 'metrics' in self.raw:
metrics = self.raw['metrics']
head = metrics[0:-1] or metrics[0:1]
text = ", ".join(head)
if len(metrics) > 1:
tail = metrics[-1]
text = text + " and " + tail
else:
text = 'n/a'
return text |
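The head/tail joining logic, extracted as a standalone sketch: everything but the last metric is comma-separated, and the final one is attached with "and".

```python
def join_metrics(metrics):
    # For a single metric, the slice [0:-1] is empty, so fall back to [0:1].
    head = metrics[0:-1] or metrics[0:1]
    text = ", ".join(head)
    if len(metrics) > 1:
        text = text + " and " + metrics[-1]
    return text

print(join_metrics(['pageviews']))                       # pageviews
print(join_metrics(['pageviews', 'sessions']))           # pageviews and sessions
print(join_metrics(['pageviews', 'sessions', 'users']))  # pageviews, sessions and users
```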
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sort(self, *columns, **options):
""" Return a new query which will produce results sorted by one or more metrics or dimensions. You may use plain strings for the columns, or actual `Column`, `Metric` and `Dimension` objects. Add a minus in front of the metric (either the string or the object) to sort in descending order. ```python # sort using strings query.sort('pageviews', '-device type') # alternatively, ask for a descending sort in a keyword argument query.sort('pageviews', descending=True) # sort using metric, dimension or column objects pageviews = profile.core.metrics['pageviews'] query.sort(-pageviews) ``` """ |
sorts = self.meta.setdefault('sort', [])
for column in columns:
    if isinstance(column, Column):
        identifier = column.id
        descending = options.get('descending', False)
elif isinstance(column, utils.basestring):
descending = column.startswith('-') or options.get('descending', False)
identifier = self.api.columns[column.lstrip('-')].id
else:
raise ValueError("Can only sort on columns or column strings. Received: {}".format(column))
if descending:
sign = '-'
else:
sign = ''
sorts.append(sign + identifier)
self.raw['sort'] = ",".join(sorts)
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def filter(self, value=None, exclude=False, **selection):
""" Most of the actual functionality lives on the Column object and the `all` and `any` functions. """ |
filters = self.meta.setdefault('filters', [])
if value and len(selection):
raise ValueError("Cannot specify a filter string and a filter keyword selection at the same time.")
elif value:
value = [value]
elif len(selection):
value = select(self.api.columns, selection, invert=exclude)
filters.append(value)
self.raw['filters'] = utils.paste(filters, ',', ';')
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def range(self, start=None, stop=None, months=0, days=0):
""" Return a new query that fetches metrics within a certain date range. ```python query.range('2014-01-01', '2014-06-30') ``` If you don't specify a `stop` argument, the date range will end today. If instead you meant to fetch just a single day's results, try: ```python query.range('2014-01-01', days=1) ``` More generally, you can specify that you'd like a certain number of days, starting from a certain date: ```python query.range('2014-01-01', months=3) query.range('2014-01-01', days=28) ``` Note that if you don't specify a granularity (either through the `interval` method or through the `hourly`, `daily`, `weekly`, `monthly` or `yearly` shortcut methods) you will get only a single result, encompassing the entire date range, per metric. **Note:** it is currently not possible to easily specify that you'd like to query the last last full week(s), month(s) et cetera. This will be added sometime in the future. """ |
start, stop = utils.date.range(start, stop, months, days)
self.raw.update({
'start_date': start,
'end_date': stop,
})
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def segment(self, value=None, scope=None, metric_scope=None, **selection):
""" Return a new query, limited to a segment of all users or sessions. Accepts segment objects, filtered segment objects and segment names: ```python query.segment(account.segments['browser']) query.segment('browser') query.segment(account.segments['browser'].any('Chrome', 'Firefox')) ``` Segment can also accept a segment expression when you pass in a `type` argument. The type argument can be either `users` or `sessions`. This is pretty close to the metal. ```python # will be translated into `users::condition::perUser::ga:sessions>10` query.segment('condition::perUser::ga:sessions>10', type='users') ``` See the [Google Analytics dynamic segments documentation][segments] You can also use the `any`, `all`, `followed_by` and `immediately_followed_by` functions in this module to chain together segments. Everything about how segments get handled is still in flux. Feel free to propose ideas for a nicer interface on the [GitHub issues page][issues] [segments]: https://developers.google.com/analytics/devguides/reporting/core/v3/segments#reference [issues]: https://github.com/debrouwere/google-analytics/issues """ |
"""
Technical note to self about segments:
* users or sessions
* sequence or condition
* scope (perHit, perSession, perUser -- gte primary scope)
Multiple conditions can be ANDed or ORed together; these two are equivalent
users::condition::ga:revenue>10;ga:sessionDuration>60
users::condition::ga:revenue>10;users::condition::ga:sessionDuration>60
For sequences, prepending ^ means the first part of the sequence has to match
the first session/hit/...
* users and sessions conditions can be combined (but only with AND)
* sequences and conditions can also be combined (but only with AND)
sessions::sequence::ga:browser==Chrome;
condition::perHit::ga:timeOnPage>5
->>
ga:deviceCategory==mobile;ga:revenue>10;
users::sequence::ga:deviceCategory==desktop
->>
ga:deviceCategory=mobile;
ga:revenue>100;
condition::ga:browser==Chrome
Problem: keyword arguments are passed as a dictionary, not an ordered dictionary!
So e.g. this is risky
query.sessions(time_on_page__gt=5, device_category='mobile', followed_by=True)
"""
SCOPES = {
'hits': 'perHit',
'sessions': 'perSession',
'users': 'perUser',
}
segments = self.meta.setdefault('segments', [])
if value and len(selection):
raise ValueError("Cannot specify a filter string and a filter keyword selection at the same time.")
elif value:
value = [self.api.segments.serialize(value)]
elif len(selection):
if not scope:
raise ValueError("Scope is required. Choose from: users, sessions.")
if metric_scope:
metric_scope = SCOPES[metric_scope]
value = select(self.api.columns, selection)
value = [[scope, 'condition', metric_scope, condition] for condition in value]
value = ['::'.join(filter(None, condition)) for condition in value]
segments.append(value)
self.raw['segment'] = utils.paste(segments, ',', ';')
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def next(self):
""" Return a new query with a modified `start_index`. Mainly used internally to paginate through results. """ |
step = self.raw.get('max_results', 1000)
start = self.raw.get('start_index', 1) + step
self.raw['start_index'] = start
return self |
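The pagination arithmetic, standalone (1000 is the assumed default page size of the Google Analytics API):

```python
# Each page advances start_index by max_results.
raw = {'max_results': 1000}
start = raw.get('start_index', 1)
for page in range(3):
    print('page', page, 'starts at row', start)
    start += raw.get('max_results', 1000)
# page 0 starts at row 1; page 1 at 1001; page 2 at 2001
```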
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get(self):
""" Run the query and return a `Report`. This method transparently handles paginated results, so even for results that are larger than the maximum amount of rows the Google Analytics API will return in a single request, or larger than the amount of rows as specified through `CoreQuery#step`, `get` will leaf through all pages, concatenate the results and produce a single Report instance. """ |
cursor = self
report = None
is_complete = False
is_enough = False
while not (is_enough or is_complete):
chunk = cursor.execute()
if report:
report.append(chunk.raw[0], cursor)
else:
report = chunk
is_enough = len(report.rows) >= self.meta.get('limit', float('inf'))
is_complete = chunk.is_complete
cursor = cursor.next()
return report |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def valid(self):
""" Valid credentials are not necessarily correct, but they contain all necessary information for an authentication attempt. """ |
two_legged = self.client_email and self.private_key
three_legged = self.client_id and self.client_secret
return two_legged or three_legged or False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def complete(self):
""" Complete credentials are valid and are either two-legged or include a token. """ |
return self.valid and (self.access_token or self.refresh_token or self.type == 2) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def revoke(client_id, client_secret, client_email=None, private_key=None, access_token=None, refresh_token=None, identity=None, prefix=None, suffix=None):
""" Given a client id, client secret and either an access token or a refresh token, revoke OAuth access to the Google Analytics data and remove any stored credentials that use these tokens. """ |
if client_email and private_key:
raise ValueError('Two-legged OAuth does not use revokable tokens.')
credentials = oauth.Credentials.find(
complete=True,
interactive=False,
identity=identity,
client_id=client_id,
client_secret=client_secret,
access_token=access_token,
refresh_token=refresh_token,
prefix=prefix,
suffix=suffix,
)
retval = credentials.revoke()
keyring.delete(credentials.identity)
return retval |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def vectorize(fn):
""" Allows a method to accept one or more values, but internally deal only with a single item, and returning a list or a single item depending on what is desired. """ |
@functools.wraps(fn)
def vectorized_method(self, values, *vargs, **kwargs):
wrap = not isinstance(values, (list, tuple))
should_unwrap = not kwargs.setdefault('wrap', False)
unwrap = wrap and should_unwrap
del kwargs['wrap']
if wrap:
values = [values]
results = [fn(self, value, *vargs, **kwargs) for value in values]
if unwrap:
results = results[0]
return results
return vectorized_method |
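A self-contained demo of the decorator: `Doubler` is a made-up example class. The wrapped method always sees one value, while callers may pass a scalar or a list, and `wrap=True` forces list output.

```python
import functools

def vectorize(fn):
    @functools.wraps(fn)
    def vectorized_method(self, values, *vargs, **kwargs):
        wrap = not isinstance(values, (list, tuple))
        should_unwrap = not kwargs.setdefault('wrap', False)
        unwrap = wrap and should_unwrap
        del kwargs['wrap']
        if wrap:
            values = [values]
        results = [fn(self, value, *vargs, **kwargs) for value in values]
        if unwrap:
            results = results[0]
        return results
    return vectorized_method

class Doubler:
    @vectorize
    def double(self, value):
        # Only ever sees a single value.
        return value * 2

d = Doubler()
print(d.double(3))             # 6 (scalar in, scalar out)
print(d.double([1, 2, 3]))     # [2, 4, 6]
print(d.double(3, wrap=True))  # [6] (force list output)
```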
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def webproperties(self):
""" A list of all web properties on this account. You may select a specific web property using its name, its id or an index. ```python account.webproperties[0] account.webproperties['UA-9234823-5'] account.webproperties['debrouwere.org'] ``` """ |
raw_properties = self.service.management().webproperties().list(
accountId=self.id).execute()['items']
_webproperties = [WebProperty(raw, self) for raw in raw_properties]
return addressable.List(_webproperties, indices=['id', 'name'], insensitive=True) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def profiles(self):
""" A list of all profiles on this web property. You may select a specific profile using its name, its id or an index. ```python property.profiles[0] property.profiles['9234823'] property.profiles['marketing profile'] ``` """ |
raw_profiles = self.account.service.management().profiles().list(
accountId=self.account.id,
webPropertyId=self.id).execute()['items']
profiles = [Profile(raw, self) for raw in raw_profiles]
return addressable.List(profiles, indices=['id', 'name'], insensitive=True) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_output_input(*popenargs, **kwargs):
"""Run command with arguments and return its output as a byte string. If the exit code was non-zero it raises a CalledProcessError. The CalledProcessError object will have the return code in the returncode attribute and output in the output attribute. The arguments are the same as for the Popen constructor. Example: 'crw-rw-rw- 1 root root 1, 3 Oct 18 2007 /dev/null\n' The stdout argument is not allowed as it is used internally. To capture standard error in the result, use stderr=STDOUT. 'ls: non_existent_file: No such file or directory\n' There is an additional optional argument, "input", allowing you to pass a string to the subprocess's stdin. If you use this argument you may not also use the Popen constructor's "stdin" argument, as it too will be used internally. Example: b'when in the course of barman events\n' If universal_newlines=True is passed, the return value will be a string rather than bytes. """ |
if 'stdout' in kwargs:
raise ValueError('stdout argument not allowed, it will be overridden.')
if 'input' in kwargs:
if 'stdin' in kwargs:
raise ValueError('stdin and input arguments may not both be used.')
inputdata = kwargs['input']
del kwargs['input']
kwargs['stdin'] = PIPE
else:
inputdata = None
process = Popen(*popenargs, stdout=PIPE, **kwargs)
try:
output, unused_err = process.communicate(inputdata)
except:
process.kill()
process.wait()
raise
retcode = process.poll()
if retcode:
cmd = kwargs.get("args")
if cmd is None:
cmd = popenargs[0]
raise CalledProcessError(retcode, cmd, output=output)
return output |
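Example usage, assuming `check_output_input` as defined above is in scope and a POSIX `tr` binary is available:

```python
# Imports the function body relies on, from the standard library.
from subprocess import Popen, PIPE, CalledProcessError

# Feed data to a subprocess's stdin and capture its stdout.
out = check_output_input(['tr', 'a-z', 'A-Z'], input=b'hello\n')
print(out)  # b'HELLO\n'
```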
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_token_indices(self, tokens):
"""If `apply_encoding_options` is inadequate, one can retrieve tokens from `self.token_counts`, filter with a desired strategy and regenerate `token_index` using this method. The token index is subsequently used when `encode_texts` or `decode_texts` methods are called. """ |
start_index = len(self.special_token)
indices = list(range(len(tokens) + start_index))
# prepend because the special tokens come in the beginning
tokens_with_special = self.special_token + list(tokens)
self._token2idx = dict(list(zip(tokens_with_special, indices)))
self._idx2token = dict(list(zip(indices, tokens_with_special))) |
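The index construction, standalone, with assumed special tokens: special tokens occupy the first indices, then the supplied tokens follow.

```python
special_token = ['<PAD>', '<UNK>']   # assumed special tokens
tokens = ['the', 'cat', 'sat']

start_index = len(special_token)
indices = list(range(len(tokens) + start_index))
tokens_with_special = special_token + list(tokens)
token2idx = dict(zip(tokens_with_special, indices))
idx2token = dict(zip(indices, tokens_with_special))
print(token2idx)  # {'<PAD>': 0, '<UNK>': 1, 'the': 2, 'cat': 3, 'sat': 4}
```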
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def apply_encoding_options(self, min_token_count=1, limit_top_tokens=None):
"""Applies the given settings for subsequent calls to `encode_texts` and `decode_texts`. This allows you to play with different settings without having to re-run tokenization on the entire corpus. Args: min_token_count: The minimum token count (frequency) in order to include during encoding. All tokens below this frequency will be encoded to `0` which corresponds to unknown token. (Default value = 1) limit_top_tokens: The maximum number of tokens to keep, based their frequency. Only the most common `limit_top_tokens` tokens will be kept. Set to None to keep everything. (Default value: None) """ |
if not self.has_vocab:
raise ValueError("You need to build the vocabulary using `build_vocab` "
"before using `apply_encoding_options`")
if min_token_count < 1:
    raise ValueError("`min_token_count` should be at least 1")
# Remove tokens with freq < min_token_count
token_counts = list(self._token_counts.items())
token_counts = [x for x in token_counts if x[1] >= min_token_count]
# Clip to max_tokens.
if limit_top_tokens is not None:
token_counts.sort(key=lambda x: x[1], reverse=True)
filtered_tokens = list(zip(*token_counts))[0]
filtered_tokens = filtered_tokens[:limit_top_tokens]
else:
    filtered_tokens = list(zip(*token_counts))[0]
# Generate indices based on filtered tokens.
self.create_token_indices(filtered_tokens) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def encode_texts(self, texts, unknown_token="<UNK>", verbose=1, **kwargs):
"""Encodes the given texts using internal vocabulary with optionally applied encoding options. See ``apply_encoding_options` to set various options. Args: texts: The list of text items to encode. unknown_token: The token to replace words that out of vocabulary. If none, those words are omitted. verbose: The verbosity level for progress. Can be 0, 1, 2. (Default value = 1) **kwargs: The kwargs for `token_generator`. Returns: The encoded texts. """ |
if not self.has_vocab:
raise ValueError(
"You need to build the vocabulary using `build_vocab` before using `encode_texts`")
if unknown_token and unknown_token not in self.special_token:
raise ValueError(
"Your special token (" + unknown_token + ") to replace unknown words is not in the list of special token: " + self.special_token)
progbar = Progbar(len(texts), verbose=verbose, interval=0.25)
encoded_texts = []
for token_data in self.token_generator(texts, **kwargs):
indices, token = token_data[:-1], token_data[-1]
token_idx = self._token2idx.get(token)
if token_idx is None and unknown_token:
token_idx = self.special_token.index(unknown_token)
if token_idx is not None:
utils._append(encoded_texts, indices, token_idx)
# Update progressbar per document level.
progbar.update(indices[0])
# All done. Finalize progressbar.
progbar.update(len(texts))
return encoded_texts |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def decode_texts(self, encoded_texts, unknown_token="<UNK>", inplace=True):
"""Decodes the texts using internal vocabulary. The list structure is maintained. Args: encoded_texts: The list of texts to decode. unknown_token: The placeholder value for unknown token. (Default value: "<UNK>") inplace: True to make changes inplace. (Default value: True) Returns: The decoded texts. """ |
if len(self._token2idx) == 0:
raise ValueError(
"You need to build vocabulary using `build_vocab` before using `decode_texts`")
if not isinstance(encoded_texts, list):
# assume it's a numpy array
encoded_texts = encoded_texts.tolist()
if not inplace:
encoded_texts = deepcopy(encoded_texts)
utils._recursive_apply(encoded_texts,
lambda token_id: self._idx2token.get(token_id) or unknown_token)
return encoded_texts |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def build_vocab(self, texts, verbose=1, **kwargs):
"""Builds the internal vocabulary and computes various statistics. Args: texts: The list of text items to encode. verbose: The verbosity level for progress. Can be 0, 1, 2. (Default value = 1) **kwargs: The kwargs for `token_generator`. """ |
if self.has_vocab:
    logger.warning(
"Tokenizer already has existing vocabulary. Overriding and building new vocabulary.")
progbar = Progbar(len(texts), verbose=verbose, interval=0.25)
count_tracker = utils._CountTracker()
self._token_counts.clear()
self._num_texts = len(texts)
for token_data in self.token_generator(texts, **kwargs):
indices, token = token_data[:-1], token_data[-1]
count_tracker.update(indices)
self._token_counts[token] += 1
# Update progressbar per document level.
progbar.update(indices[0])
# Generate token2idx and idx2token.
self.create_token_indices(self._token_counts.keys())
# All done. Finalize progressbar update and count tracker.
count_tracker.finalize()
self._counts = count_tracker.counts
progbar.update(len(texts)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def build_embedding_weights(word_index, embeddings_index):
"""Builds an embedding matrix for all words in vocab using embeddings_index """ |
logger.info('Loading embeddings for all words in the corpus')
embedding_dim = list(embeddings_index.values())[0].shape[-1]
# Rows default to zero: special tokens such as UNK and PAD, and any word
# missing from embeddings_index, keep an all-zero vector.
embedding_weights = np.zeros((len(word_index), embedding_dim))
for word, i in word_index.items():
word_vector = embeddings_index.get(word)
if word_vector is not None:
embedding_weights[i] = word_vector
return embedding_weights |
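A runnable toy version of the weight-matrix construction above; the vocabulary and vectors are made up.

```python
import numpy as np

word_index = {'<PAD>': 0, '<UNK>': 1, 'cat': 2, 'dog': 3}
embeddings_index = {'cat': np.array([0.1, 0.2]), 'dog': np.array([0.3, 0.4])}

# Infer the dimensionality from any stored vector.
embedding_dim = list(embeddings_index.values())[0].shape[-1]
weights = np.zeros((len(word_index), embedding_dim))
for word, i in word_index.items():
    vec = embeddings_index.get(word)
    if vec is not None:
        weights[i] = vec
print(weights)  # rows 0-1 stay zero; rows 2-3 hold the toy vectors
```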
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_embeddings_index(embedding_type='glove.42B.300d', embedding_dims=None, embedding_path=None, cache=True):
"""Retrieves embeddings index from embedding name or path. Will automatically download and cache as needed. Args: embedding_type: The embedding type to load. embedding_path: Path to a local embedding to use instead of the embedding type. Ignores `embedding_type` if specified. Returns: The embeddings indexed by word. """ |
if embedding_path is not None:
embedding_type = embedding_path # identify embedding by path
embeddings_index = _EMBEDDINGS_CACHE.get(embedding_type)
if embeddings_index is not None:
return embeddings_index
if embedding_path is None:
embedding_type_obj = get_embedding_type(embedding_type)
# some very rough wrangling of zip files with the keras util `get_file`
# a special problem: when multiple files are in one zip file
extract = embedding_type_obj.get('extract', True)
file_path = get_file(
embedding_type_obj['file'], origin=embedding_type_obj['url'], extract=extract, cache_subdir='embeddings', file_hash=embedding_type_obj.get('file_hash',))
if 'file_in_zip' in embedding_type_obj:
zip_folder = file_path.split('.zip')[0]
with ZipFile(file_path, 'r') as zf:
zf.extractall(zip_folder)
file_path = os.path.join(
zip_folder, embedding_type_obj['file_in_zip'])
else:
if extract:
if file_path.endswith('.zip'):
file_path = file_path.split('.zip')[0]
# if file_path.endswith('.gz'):
# file_path = file_path.split('.gz')[0]
else:
file_path = embedding_path
embeddings_index = _build_embeddings_index(file_path, embedding_dims)
if cache:
_EMBEDDINGS_CACHE[embedding_type] = embeddings_index
return embeddings_index |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def equal_distribution_folds(y, folds=2):
"""Creates `folds` number of indices that has roughly balanced multi-label distribution. Args: y: The multi-label outputs. folds: The number of folds to create. Returns: `folds` number of indices that have roughly equal multi-label distributions. """ |
n, classes = y.shape
# Compute sample distribution over classes
dist = y.sum(axis=0).astype('float')
dist /= dist.sum()
index_list = []
fold_dist = np.zeros((folds, classes), dtype='float')
for _ in range(folds):
index_list.append([])
for i in range(n):
if i < folds:
target_fold = i
else:
normed_folds = fold_dist.T / fold_dist.sum(axis=1)
how_off = normed_folds.T - dist
target_fold = np.argmin(
np.dot((y[i] - .5).reshape(1, -1), how_off.T))
fold_dist[target_fold] += y[i]
index_list[target_fold].append(i)
logger.debug("Fold distributions:")
logger.debug(fold_dist)
return index_list |
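A usage sketch, assuming `equal_distribution_folds` above and its module-level `logger` are importable; the data is random, so exact counts vary.

```python
import numpy as np

rng = np.random.RandomState(0)
y = (rng.rand(300, 4) > 0.7).astype(int)   # 300 samples, 4 labels
folds = equal_distribution_folds(y, folds=3)
for idx in folds:
    # Each fold should show a similar per-label count profile.
    print(len(idx), y[idx].sum(axis=0))
```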
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def build_model(self, token_encoder_model, sentence_encoder_model, trainable_embeddings=True, output_activation='softmax'):
"""Builds a model that first encodes all words within sentences using `token_encoder_model`, followed by `sentence_encoder_model`. Args: token_encoder_model: An instance of `SequenceEncoderBase` for encoding tokens within sentences. This model will be applied across all sentences to create a sentence encoding. sentence_encoder_model: An instance of `SequenceEncoderBase` operating on sentence encoding generated by `token_encoder_model`. This encoding is then fed into a final `Dense` layer for classification. trainable_embeddings: Whether or not to fine tune embeddings. output_activation: The output activation to use. (Default value: 'softmax') Use: - `softmax` for binary or multi-class. - `sigmoid` for multi-label classification. - `linear` for regression output. Returns: The model output tensor. """ |
if not isinstance(token_encoder_model, SequenceEncoderBase):
raise ValueError("`token_encoder_model` should be an instance of `{}`".format(
SequenceEncoderBase))
if not isinstance(sentence_encoder_model, SequenceEncoderBase):
raise ValueError("`sentence_encoder_model` should be an instance of `{}`".format(
SequenceEncoderBase))
if not sentence_encoder_model.allows_dynamic_length() and self.max_sents is None:
raise ValueError("Sentence encoder model '{}' requires padding. "
"You need to provide `max_sents`")
if self.embeddings_index is None:
# The +1 is for unknown token index 0.
embedding_layer = Embedding(len(self.token_index),
self.embedding_dims,
input_length=self.max_tokens,
mask_zero=token_encoder_model.allows_dynamic_length(),
trainable=trainable_embeddings)
else:
embedding_layer = Embedding(len(self.token_index),
self.embedding_dims,
weights=[build_embedding_weights(
self.token_index, self.embeddings_index)],
input_length=self.max_tokens,
mask_zero=token_encoder_model.allows_dynamic_length(),
trainable=trainable_embeddings)
word_input = Input(shape=(self.max_tokens,), dtype='int32')
x = embedding_layer(word_input)
word_encoding = token_encoder_model(x)
token_encoder_model = Model(
word_input, word_encoding, name='word_encoder')
doc_input = Input(
shape=(self.max_sents, self.max_tokens), dtype='int32')
sent_encoding = TimeDistributed(token_encoder_model)(doc_input)
x = sentence_encoder_model(sent_encoding)
x = Dense(self.num_classes, activation=output_activation)(x)
return Model(doc_input, x) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def process_save(X, y, tokenizer, proc_data_path, max_len=400, train=False, ngrams=None, limit_top_tokens=None):
"""Process text and save as Dataset """ |
if train and limit_top_tokens is not None:
tokenizer.apply_encoding_options(limit_top_tokens=limit_top_tokens)
X_encoded = tokenizer.encode_texts(X)
if ngrams is not None:
X_encoded = tokenizer.add_ngrams(X_encoded, n=ngrams, train=train)
X_padded = tokenizer.pad_sequences(
X_encoded, fixed_token_seq_length=max_len)
if train:
ds = Dataset(X_padded,
y, tokenizer=tokenizer)
else:
ds = Dataset(X_padded, y)
ds.save(proc_data_path) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def split_data(X, y, ratio=(0.8, 0.1, 0.1)):
"""Splits data into a training, validation, and test set. Args: X: text data y: data labels ratio: the ratio for splitting. Default: (0.8, 0.1, 0.1) Returns: split data: X_train, X_val, X_test, y_train, y_val, y_test """ |
assert(sum(ratio) == 1 and len(ratio) == 3)
X_train, X_rest, y_train, y_rest = train_test_split(
X, y, train_size=ratio[0])
# ratio[1] is a fraction of the full dataset; renormalize against the
# remainder so the validation/test sizes come out as specified.
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, train_size=ratio[1] / (ratio[1] + ratio[2]))
return X_train, X_val, X_test, y_train, y_val, y_test |
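The second `train_test_split` call operates on the remainder, so its `train_size` must be renormalized. Worked numbers: with ratio (0.8, 0.1, 0.1), the remainder is 20% of the data and must be cut in half to yield 10%/10% overall.

```python
ratio = (0.8, 0.1, 0.1)
n = 1000
n_train = int(n * ratio[0])                           # 800
n_rest = n - n_train                                  # 200
second_train_size = ratio[1] / (ratio[1] + ratio[2])  # 0.5, not 0.1
n_val = int(n_rest * second_train_size)               # 100
n_test = n_rest - n_val                               # 100
print(n_train, n_val, n_test)                         # 800 100 100
```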
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def setup_data_split(X, y, tokenizer, proc_data_dir, **kwargs):
"""Setup data while splitting into a training, validation, and test set. Args: X: text data, y: data labels, tokenizer: A Tokenizer instance proc_data_dir: Directory for the split and processed data """ |
X_train, X_val, X_test, y_train, y_val, y_test = split_data(X, y)
# only build vocabulary on training data
tokenizer.build_vocab(X_train)
process_save(X_train, y_train, tokenizer, path.join(
proc_data_dir, 'train.bin'), train=True, **kwargs)
process_save(X_val, y_val, tokenizer, path.join(
proc_data_dir, 'val.bin'), **kwargs)
process_save(X_test, y_test, tokenizer, path.join(
proc_data_dir, 'test.bin'), **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_data_split(proc_data_dir):
"""Loads a split dataset Args: proc_data_dir: Directory with the split and processed data Returns: (Training Data, Validation Data, Test Data) """ |
ds_train = Dataset.load(path.join(proc_data_dir, 'train.bin'))
ds_val = Dataset.load(path.join(proc_data_dir, 'val.bin'))
ds_test = Dataset.load(path.join(proc_data_dir, 'test.bin'))
return ds_train, ds_val, ds_test |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def build_model(self, token_encoder_model, trainable_embeddings=True, output_activation='softmax'):
"""Builds a model using the given `text_model` Args: token_encoder_model: An instance of `SequenceEncoderBase` for encoding all the tokens within a document. This encoding is then fed into a final `Dense` layer for classification. trainable_embeddings: Whether or not to fine tune embeddings. output_activation: The output activation to use. (Default value: 'softmax') Use: - `softmax` for binary or multi-class. - `sigmoid` for multi-label classification. - `linear` for regression output. Returns: The model output tensor. """ |
if not isinstance(token_encoder_model, SequenceEncoderBase):
raise ValueError("`token_encoder_model` should be an instance of `{}`".format(
SequenceEncoderBase))
if not token_encoder_model.allows_dynamic_length() and self.max_tokens is None:
raise ValueError("The provided `token_encoder_model` does not allow variable length mini-batches. "
"You need to provide `max_tokens`")
if self.embeddings_index is None:
# The +1 is for unknown token index 0.
embedding_layer = Embedding(len(self.token_index),
self.embedding_dims,
input_length=self.max_tokens,
mask_zero=token_encoder_model.allows_dynamic_length(),
trainable=trainable_embeddings)
else:
embedding_layer = Embedding(len(self.token_index),
self.embedding_dims,
weights=[build_embedding_weights(
self.token_index, self.embeddings_index)],
input_length=self.max_tokens,
mask_zero=token_encoder_model.allows_dynamic_length(),
trainable=trainable_embeddings)
sequence_input = Input(shape=(self.max_tokens,), dtype='int32')
x = embedding_layer(sequence_input)
x = token_encoder_model(x)
x = Dense(self.num_classes, activation=output_activation)(x)
return Model(sequence_input, x) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _softmax(x, dim):
"""Computes softmax along a specified dim. Keras currently lacks this feature. """ |
if K.backend() == 'tensorflow':
import tensorflow as tf
return tf.nn.softmax(x, dim)
elif K.backend() == 'cntk':
import cntk
return cntk.softmax(x, dim)
elif K.backend() == 'theano':
# Theano cannot softmax along an arbitrary dim.
# So, we will shuffle `dim` to -1 and un-shuffle after softmax.
perm = np.arange(K.ndim(x))
perm[dim], perm[-1] = perm[-1], perm[dim]
x_perm = K.permute_dimensions(x, perm)
output = K.softmax(x_perm)
# Permute back
perm[dim], perm[-1] = perm[-1], perm[dim]
    output = K.permute_dimensions(output, perm)
return output
else:
raise ValueError("Backend '{}' not supported".format(K.backend())) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _apply_options(self, token):
"""Applies various filtering and processing options on token. Returns: The processed token. None if filtered. """ |
# Apply word token filtering.
if token.is_punct and self.remove_punct:
return None
if token.is_stop and self.remove_stop_words:
return None
if token.is_digit and self.remove_digits:
return None
if token.is_oov and self.exclude_oov:
return None
if token.pos_ in self.exclude_pos_tags:
return None
if token.ent_type_ in self.exclude_entities:
return None
# Lemmatized ones are already lowered.
if self.lemmatize:
return token.lemma_
if self.lower:
return token.lower_
return token.orth_ |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _append(lst, indices, value):
"""Adds `value` to `lst` list indexed by `indices`. Will create sub lists as required. """ |
for i, idx in enumerate(indices):
# We need to loop because sometimes indices can increment by more than 1 due to missing tokens.
# Example: Sentence with no words after filtering words.
while len(lst) <= idx:
# Update max counts whenever a new sublist is created.
# There is no need to worry about indices beyond `i` since they will end up creating new lists as well.
lst.append([])
lst = lst[idx]
# Add token and update token max count.
lst.append(value) |
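A self-contained demo of `_append` (the body is a trimmed copy of the helper above): indices such as (document, sentence) address a nested list that is grown on demand.

```python
def _append(lst, indices, value):
    for idx in indices:
        # Grow intermediate lists until idx is addressable.
        while len(lst) <= idx:
            lst.append([])
        lst = lst[idx]
    lst.append(value)

encoded = []
_append(encoded, (0, 0), 11)   # doc 0, sentence 0
_append(encoded, (0, 1), 22)   # doc 0, sentence 1
_append(encoded, (2, 0), 33)   # doc 2 (doc 1 stays empty)
print(encoded)  # [[[11], [22]], [], [[33]]]
```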
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def update(self, indices):
"""Updates counts based on indices. The algorithm tracks the index change at i and update global counts for all indices beyond i with local counts tracked so far. """ |
# Initialize various lists for the first time based on length of indices.
if self._prev_indices is None:
self._prev_indices = indices
# +1 to track token counts in the last index.
self._local_counts = np.full(len(indices) + 1, 1)
self._local_counts[-1] = 0
self.counts = [[] for _ in range(len(self._local_counts))]
has_reset = False
for i in range(len(indices)):
# index value changed. Push all local values beyond i to count and reset those local_counts.
# For example, if document index changed, push counts on sentences and tokens and reset their local_counts
# to indicate that we are tracking those for new document. We need to do this at all document hierarchies.
if indices[i] > self._prev_indices[i]:
self._local_counts[i] += 1
has_reset = True
for j in range(i + 1, len(self.counts)):
self.counts[j].append(self._local_counts[j])
self._local_counts[j] = 1
# If none of the aux indices changed, update token count.
if not has_reset:
self._local_counts[-1] += 1
self._prev_indices = indices[:] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read_folder(directory):
"""read text files in directory and returns them as array Args: directory: where the text files are Returns: Array of text """ |
res = []
for filename in os.listdir(directory):
with io.open(os.path.join(directory, filename), encoding="utf-8") as f:
content = f.read()
res.append(content)
return res |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read_pos_neg_data(path, folder, limit):
"""returns array with positive and negative examples""" |
training_pos_path = os.path.join(path, folder, 'pos')
training_neg_path = os.path.join(path, folder, 'neg')
X_pos = read_folder(training_pos_path)
X_neg = read_folder(training_neg_path)
if limit is None:
X = X_pos + X_neg
else:
X = X_pos[:limit] + X_neg[:limit]
y = [1] * int(len(X) / 2) + [0] * int(len(X) / 2)
return X, y |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_value(self, number: (float, int)):
""" Sets the value of the graphic :param number: the number (must be between 0 and \ 'max_range' or the scale will peg the limits :return: None """ |
self.canvas.delete('all')
self.canvas.create_image(0, 0, image=self.image, anchor='nw')
number = number if number <= self.max_value else self.max_value
number = 0.0 if number < 0.0 else number
radius = 0.9 * self.size/2.0
angle_in_radians = (2.0 * cmath.pi / 3.0) \
+ number / self.max_value * (5.0 * cmath.pi / 3.0)
center = cmath.rect(0, 0)
outer = cmath.rect(radius, angle_in_radians)
if self.needle_thickness == 0:
line_width = int(5 * self.size / 200)
line_width = 1 if line_width < 1 else line_width
else:
line_width = self.needle_thickness
self.canvas.create_line(
*self.to_absolute(center.real, center.imag),
*self.to_absolute(outer.real, outer.imag),
width=line_width,
fill=self.needle_color
)
self.readout['text'] = '{}{}'.format(number, self.unit) |
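The needle geometry, standalone: the dial sweeps 5π/3 radians (300 degrees) starting at 2π/3, and `cmath.rect` converts (radius, angle) into the needle's endpoint relative to the center. The size and scale values are made up for the demo.

```python
import cmath

size, max_value = 200, 100.0
number = 50.0                      # half scale
radius = 0.9 * size / 2.0
angle = (2.0 * cmath.pi / 3.0) + number / max_value * (5.0 * cmath.pi / 3.0)
outer = cmath.rect(radius, angle)  # complex number: x + yj
print(round(outer.real, 1), round(outer.imag, 1))  # ~0.0 -90.0 at half scale
```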
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _draw_background(self, divisions=10):
""" Draws the background of the dial :param divisions: the number of divisions between 'ticks' shown on the dial :return: None """ |
self.canvas.create_arc(2, 2, self.size-2, self.size-2,
style=tk.PIESLICE, start=-60, extent=30,
fill='red')
self.canvas.create_arc(2, 2, self.size-2, self.size-2,
style=tk.PIESLICE, start=-30, extent=60,
fill='yellow')
self.canvas.create_arc(2, 2, self.size-2, self.size-2,
style=tk.PIESLICE, start=30, extent=210,
fill='green')
# find the distance between the center and the inner tick radius
inner_tick_radius = int(self.size * 0.4)
outer_tick_radius = int(self.size * 0.5)
for tick in range(divisions):
angle_in_radians = (2.0 * cmath.pi / 3.0) \
+ tick/divisions * (5.0 * cmath.pi / 3.0)
inner_point = cmath.rect(inner_tick_radius, angle_in_radians)
outer_point = cmath.rect(outer_tick_radius, angle_in_radians)
self.canvas.create_line(
*self.to_absolute(inner_point.real, inner_point.imag),
*self.to_absolute(outer_point.real, outer_point.imag),
width=1
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def draw_axes(self):
""" Removes all existing series and re-draws the axes. :return: None """ |
self.canvas.delete('all')
rect = 50, 50, self.w - 50, self.h - 50
self.canvas.create_rectangle(rect, outline="black")
for x in self.frange(0, self.x_max - self.x_min + 1, self.x_tick):
value = Decimal(self.x_min + x)
if self.x_min <= value <= self.x_max:
x_step = (self.px_x * x) / self.x_tick
coord = 50 + x_step, self.h - 50, 50 + x_step, self.h - 45
self.canvas.create_line(coord, fill="black")
coord = 50 + x_step, self.h - 40
label = round(Decimal(self.x_min + x), 1)
self.canvas.create_text(coord, fill="black", text=label)
for y in self.frange(0, self.y_max - self.y_min + 1, self.y_tick):
value = Decimal(self.y_max - y)
if self.y_min <= value <= self.y_max:
y_step = (self.px_y * y) / self.y_tick
coord = 45, 50 + y_step, 50, 50 + y_step
self.canvas.create_line(coord, fill="black")
coord = 35, 50 + y_step
label = round(value, 1)
self.canvas.create_text(coord, fill="black", text=label) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def plot_point(self, x, y, visible=True, color='black', size=5):
""" Places a single point on the grid :param x: the x coordinate :param y: the y coordinate :param visible: True if the individual point should be visible :param color: the color of the point :param size: the point size in pixels :return: The absolute coordinates as a tuple """ |
xp = (self.px_x * (x - self.x_min)) / self.x_tick
yp = (self.px_y * (self.y_max - y)) / self.y_tick
coord = 50 + xp, 50 + yp
if visible:
# divide down to an appropriate size
size = int(size/2) if int(size/2) > 1 else 1
x, y = coord
self.canvas.create_oval(
x-size, y-size,
x+size, y+size,
fill=color
)
return coord |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def plot_line(self, points: list, color='black', point_visibility=False):
""" Plot a line of points :param points: a list of tuples, each tuple containing an (x, y) point :param color: the color of the line :param point_visibility: True if the points \ should be individually visible :return: None """ |
last_point = ()
for point in points:
this_point = self.plot_point(point[0], point[1],
color=color, visible=point_visibility)
if last_point:
self.canvas.create_line(last_point + this_point, fill=color)
last_point = this_point |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def frange(start, stop, step, digits_to_round=3):
""" Works like range for doubles :param start: starting value :param stop: ending value :param step: the increment_value :param digits_to_round: the digits to which to round \ (makes floating-point numbers much easier to work with) :return: generator """ |
while start < stop:
yield round(start, digits_to_round)
start += step |
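`frange` in action; the rounding keeps accumulated floating-point error out of the yielded values.

```python
def frange(start, stop, step, digits_to_round=3):
    while start < stop:
        yield round(start, digits_to_round)
        start += step

print(list(frange(0, 0.5, 0.1)))  # [0, 0.1, 0.2, 0.3, 0.4]
```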
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _load_new(self, img_data: str):
""" Load a new image. :param img_data: the image data as a base64 string :return: None """ |
self._image = tk.PhotoImage(data=img_data)
self._image = self._image.subsample(int(200 / self._size),
int(200 / self._size))
self._canvas.delete('all')
self._canvas.create_image(0, 0, image=self._image, anchor='nw')
if self._user_click_callback is not None:
self._user_click_callback(self._on) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_grey(self, on: bool=False):
""" Change the LED to grey. :param on: Unused, here for API consistency with the other states :return: None """ |
self._on = False
self._load_new(led_grey) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _redraw(self):
""" Forgets the current layout and redraws with the most recent information :return: None """ |
for row in self._rows:
for widget in row:
widget.grid_forget()
offset = 0 if not self.headers else 1
for i, row in enumerate(self._rows):
for j, widget in enumerate(row):
widget.grid(row=i+offset, column=j) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def remove_row(self, row_number: int=-1):
""" Removes a specified row of data :param row_number: the row to remove (defaults to the last row) :return: None """ |
if len(self._rows) == 0:
return
row = self._rows.pop(row_number)
for widget in row:
widget.destroy() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_row(self, data: list):
""" Add a row of data to the current widget :param data: a row of data :return: None """ |
# validation
if self.headers:
if len(self.headers) != len(data):
raise ValueError
if len(data) != self.num_of_columns:
raise ValueError
offset = 0 if not self.headers else 1
row = list()
for i, element in enumerate(data):
label = ttk.Label(self, text=str(element), relief=tk.GROOVE,
padding=self.padding)
label.grid(row=len(self._rows) + offset, column=i, sticky='E,W')
row.append(label)
self._rows.append(row) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _read_as_dict(self):
""" Read the data contained in all entries as a list of dictionaries with the headers as the dictionary keys :return: list of dicts containing all tabular data """ |
data = list()
for row in self._rows:
row_data = OrderedDict()
for i, header in enumerate(self.headers):
row_data[header.cget('text')] = row[i].get()
data.append(row_data)
return data |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _read_as_table(self):
""" Read the data contained in all entries as a list of lists containing all of the data :return: list of dicts containing all tabular data """ |
rows = list()
for row in self._rows:
rows.append([row[i].get() for i in range(self.num_of_columns)])
return rows |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_row(self, key: str, default: str=None, unit_label: str=None, enable: bool=None):
""" Add a single row and re-draw as necessary :param key: the name and dict accessor :param default: the default value :param unit_label: the label that should be \ applied at the right of the entry :param enable: the 'enabled' state (defaults to True) :return: """ |
self.keys.append(ttk.Label(self, text=key))
self.defaults.append(default)
self.unit_labels.append(
ttk.Label(self, text=unit_label if unit_label else '')
)
self.enables.append(enable)
self.values.append(ttk.Entry(self))
row_offset = 1 if self.title is not None else 0
for i in range(len(self.keys)):
self.keys[i].grid_forget()
self.keys[i].grid(row=row_offset, column=0, sticky='e')
self.values[i].grid(row=row_offset, column=1)
if self.unit_labels[i]:
self.unit_labels[i].grid(row=row_offset, column=3, sticky='w')
if self.defaults[i]:
self.values[i].config(state=tk.NORMAL)
self.values[i].delete(0, tk.END)
self.values[i].insert(0, self.defaults[i])
if self.enables[i] in [True, None]:
self.values[i].config(state=tk.NORMAL)
elif self.enables[i] is False:
self.values[i].config(state=tk.DISABLED)
row_offset += 1
# strip <Return> and <Tab> bindings, add callbacks to all entries
self.values[i].unbind('<Return>')
self.values[i].unbind('<Tab>')
if self.callback is not None:
def callback(event):
self.callback()
self.values[i].bind('<Return>', callback)
self.values[i].bind('<Tab>', callback) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def reset(self):
""" Clears all entries. :return: None """ |
for i in range(len(self.values)):
self.values[i].delete(0, tk.END)
if self.defaults[i] is not None:
self.values[i].insert(0, self.defaults[i]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get(self):
""" Retrieve the GUI elements for program use. :return: a dictionary containing all \ of the data from the key/value entries """ |
data = dict()
for label, entry in zip(self.keys, self.values):
data[label.cget('text')] = entry.get()
return data |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add(self, string: (str, list)):
""" Clear the contents of the entry field and insert the contents of string. :param string: an str containing the text to display :return: """ |
if len(self._entries) == 1:
self._entries[0].delete(0, 'end')
self._entries[0].insert(0, string)
else:
if len(string) != len(self._entries):
raise ValueError('the "string" list must be '
'equal to the number of entries')
for i, e in enumerate(self._entries):
self._entries[i].delete(0, 'end')
self._entries[i].insert(0, string[i]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_callback(self, callback: callable):
""" Add a callback on change :param callback: callable function :return: None """ |
def internal_callback(*args):
try:
callback()
except TypeError:
callback(self.get())
self._var.trace('w', internal_callback) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set(self, value: int):
""" Set the current value :param value: :return: None """ |
max_value = (1 << self._bit_width) - 1  # all bits set, e.g. 255 for an 8-bit width
if value > max_value:
raise ValueError('the value {} is larger than '
'the maximum value {}'.format(value, max_value))
self._value = value
self._text_update() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_bit(self, position: int):
""" Returns the bit value at position :param position: integer between 0 and <width>, inclusive :return: the value at position as a integer """ |
if position > (self._bit_width - 1):
raise ValueError('position greater than the bit width')
if self._value & (1 << position):
return 1
else:
return 0 |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def toggle_bit(self, position: int):
""" Toggles the value at position :param position: integer between 0 and 7, inclusive :return: None """ |
if position > (self._bit_width - 1):
raise ValueError('position greater than the bit width')
self._value ^= (1 << position)
self._text_update() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_bit(self, position: int):
""" Sets the value at position :param position: integer between 0 and 7, inclusive :return: None """ |
if position > (self._bit_width - 1):
raise ValueError('position greater than the bit width')
self._value |= (1 << position)
self._text_update() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def clear_bit(self, position: int):
""" Clears the value at position :param position: integer between 0 and 7, inclusive :return: None """ |
if position > (self._bit_width - 1):
raise ValueError('position greater than the bit width')
self._value &= ~(1 << position)
self._text_update() |
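The four bit operations above reduce to standard integer bitwise arithmetic. A standalone sketch, with an assumed 8-bit width:

```python
bit_width = 8
value = 0

value |= (1 << 3)        # set bit 3        -> 0b00001000
value |= (1 << 0)        # set bit 0        -> 0b00001001
value ^= (1 << 3)        # toggle bit 3 off -> 0b00000001
value &= ~(1 << 0)       # clear bit 0      -> 0b00000000
print(bin(value))        # 0b0
print((value >> 5) & 1)  # read bit 5       -> 0
max_value = (1 << bit_width) - 1
print(max_value)         # 255, the largest representable value
```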
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _register_admin(admin_site, model, admin_class):
""" Register model in the admin, ignoring any previously registered models. Alternatively it could be used in the future to replace a previously registered model. """ |
try:
admin_site.register(model, admin_class)
except admin.sites.AlreadyRegistered:
pass |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _monkey_inline(model, admin_class_instance, metadata_class, inline_class, admin_site):
""" Monkey patch the inline onto the given admin_class instance. """ |
if model in metadata_class._meta.seo_models:
# *Not* adding to the class attribute "inlines", as this will affect
# all instances from this class. Explicitly adding to instance attribute.
admin_class_instance.__dict__['inlines'] = admin_class_instance.inlines + [inline_class]
# Because we've missed the registration, we need to perform actions
# that were done then (on admin class instantiation)
inline_instance = inline_class(admin_class_instance.model, admin_site)
admin_class_instance.inline_instances.append(inline_instance) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _with_inline(func, admin_site, metadata_class, inline_class):
""" Decorator for register function that adds an appropriate inline.""" |
def register(model_or_iterable, admin_class=None, **options):
# Call the (bound) function we were given.
# We have to assume it will be bound to admin_site
func(model_or_iterable, admin_class, **options)
_monkey_inline(model_or_iterable, admin_site._registry[model_or_iterable], metadata_class, inline_class, admin_site)
return register |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def auto_register_inlines(admin_site, metadata_class):
""" This is a questionable function that automatically adds our metadata inline to all relevant models in the site. """ |
inline_class = get_inline(metadata_class)
for model, admin_class_instance in admin_site._registry.items():
_monkey_inline(model, admin_class_instance, metadata_class, inline_class, admin_site)
# Monkey patch the register method to automatically add an inline for this site.
# _with_inline() is a decorator that wraps the register function with the same injection code
# used above (_monkey_inline).
admin_site.register = _with_inline(admin_site.register, admin_site, metadata_class, inline_class) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_linked_metadata(obj, name=None, context=None, site=None, language=None):
""" Gets metadata linked from the given object. """ |
# XXX Check that 'modelinstance' and 'model' metadata are installed in backends
# I believe that get_model() would return None if not
Metadata = _get_metadata_model(name)
InstanceMetadata = Metadata._meta.get_model('modelinstance')
ModelMetadata = Metadata._meta.get_model('model')
content_type = ContentType.objects.get_for_model(obj)
instances = []
if InstanceMetadata is not None:
try:
instance_md = InstanceMetadata.objects.get(_content_type=content_type, _object_id=obj.pk)
except InstanceMetadata.DoesNotExist:
instance_md = InstanceMetadata(_content_object=obj)
instances.append(instance_md)
if ModelMetadata is not None:
try:
model_md = ModelMetadata.objects.get(_content_type=content_type)
except ModelMetadata.DoesNotExist:
model_md = ModelMetadata(_content_type=content_type)
instances.append(model_md)
return FormattedMetadata(Metadata, instances, '', site, language) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def __instances(self):
""" Cache instances, allowing generators to be used and reused. This fills a cache as the generator gets emptied, eventually reading exclusively from the cache. """ |
for instance in self.__instances_cache:
yield instance
for instance in self.__instances_original:
self.__instances_cache.append(instance)
yield instance |
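The same fill-as-you-go caching idea in a self-contained form (a sketch, independent of the class above):

def cached_iter(iterable):
    """Wrap an iterable so it can be iterated repeatedly: the first pass
    fills a cache as the source is consumed; later passes replay the cache
    before draining whatever remains of the source."""
    cache = []
    iterator = iter(iterable)

    def generator():
        for item in cache:       # replay what has already been seen
            yield item
        for item in iterator:    # then keep consuming the source
            cache.append(item)
            yield item
    return generator

nums = cached_iter(n * n for n in range(4))
print(list(nums()))  # [0, 1, 4, 9] -- source exhausted, cache filled
print(list(nums()))  # [0, 1, 4, 9] -- served entirely from the cache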
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _resolve_value(self, name):
""" Returns an appropriate value for the given name. This simply asks each of the instances for a value. """ |
for instance in self.__instances():
value = instance._resolve_value(name)
if value:
return value
# Otherwise, return an appropriate default value (populate_from)
# TODO: This is duplicated in meta_models. Move this to a common home.
if name in self.__metadata._meta.elements:
populate_from = self.__metadata._meta.elements[name].populate_from
if callable(populate_from):
return populate_from(None)
elif isinstance(populate_from, Literal):
return populate_from.value
elif populate_from is not NotSet:
return self._resolve_value(populate_from) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_formatted_data(cls, path, context=None, site=None, language=None):
""" Return an object to conveniently access the appropriate values. """ |
return FormattedMetadata(cls(), cls._get_instances(path, context, site, language), path, site, language) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def validate(options):
""" Validates the application of this backend to a given metadata """ |
try:
if options.backends.index('modelinstance') > options.backends.index('model'):
raise Exception("Metadata backend 'modelinstance' must come before 'model' backend")
except ValueError:
raise Exception("Metadata backend 'modelinstance' must be installed in order to use 'model' backend") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _register_elements(self, elements):
""" Takes elements from the metadata class and creates a base model for all backend models . """ |
self.elements = elements
for key, obj in elements.items():
obj.contribute_to_class(self.metadata, key)
# Create the common Django fields
fields = {}
for key, obj in elements.items():
if obj.editable:
field = obj.get_field()
if not field.help_text:
if key in self.bulk_help_text:
field.help_text = self.bulk_help_text[key]
fields[key] = field
# 0. Abstract base model with common fields
base_meta = type('Meta', (), self.original_meta)
class BaseMeta(base_meta):
abstract = True
app_label = 'seo'
fields['Meta'] = BaseMeta
# Do we need this?
fields['__module__'] = __name__ #attrs['__module__']
self.MetadataBaseModel = type('%sBase' % self.name, (models.Model,), fields) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _add_backend(self, backend):
""" Builds a subclass model for the given backend """ |
md_type = backend.verbose_name
base = backend().get_model(self)
# TODO: Rename this field
new_md_attrs = {'_metadata': self.metadata, '__module__': __name__ }
new_md_meta = {}
new_md_meta['verbose_name'] = '%s (%s)' % (self.verbose_name, md_type)
new_md_meta['verbose_name_plural'] = '%s (%s)' % (self.verbose_name_plural, md_type)
new_md_meta['unique_together'] = base._meta.unique_together
new_md_attrs['Meta'] = type("Meta", (), new_md_meta)
new_md_attrs['_metadata_type'] = backend.name
model = type("%s%s"%(self.name,"".join(md_type.split())), (base, self.MetadataBaseModel), new_md_attrs.copy())
self.models[backend.name] = model
# This is a little dangerous, but because we set __module__ to __name__, the model needs to be accessible here
globals()[model.__name__] = model |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def validate(self):
""" Discover certain illegal configurations """ |
if not self.editable:
assert self.populate_from is not NotSet, u"If field (%s) is not editable, you must set populate_from" % self.name |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def populate_all_metadata():
""" Create metadata instances for all models in seo_models if empty. Once you have created a single metadata instance, this will not run. This is because it is a potentially slow operation that need only be done once. If you want to ensure that everything is populated, run the populate_metadata management command. """ |
for Metadata in registry.values():
InstanceMetadata = Metadata._meta.get_model('modelinstance')
if InstanceMetadata is not None:
for model in Metadata._meta.seo_models:
populate_metadata(model, InstanceMetadata) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def populate(self):
""" Populate this list with all views that take no arguments. """ |
from django.conf import settings
from django.core import urlresolvers
self.append(("", ""))
urlconf = settings.ROOT_URLCONF
resolver = urlresolvers.RegexURLResolver(r'^/', urlconf)
# Collect base level views
for key, value in resolver.reverse_dict.items():
if isinstance(key, basestring):
args = value[0][0][1]
url = "/" + value[0][0][0]
self.append((key, " ".join(key.split("_"))))
# Collect namespaces (TODO: merge these two sections into one)
for namespace, url in resolver.namespace_dict.items():
for key, value in url[1].reverse_dict.items():
if isinstance(key, basestring):
args = value[0][0][1]
full_key = '%s:%s' % (namespace, key)
self.append((full_key, "%s: %s" % (namespace, " ".join(key.split("_")))))
self.sort() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def block_splitter(data, block_size):
""" Creates a generator by slicing ``data`` into chunks of ``block_size``. [[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]] If ``data`` cannot be evenly divided by ``block_size``, the last block will simply be the remainder of the data. Example: [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]] If the ``block_size`` is greater than the total length of ``data``, a single block will be generated: [[0, 1, 2]] :param data: Any iterable. If ``data`` is a generator, it will be exhausted, obviously. :param int block_site: Desired (maximum) block size. """ |
buf = []
for datum in data:
buf.append(datum)
if len(buf) == block_size:
yield buf
buf = []
# If there's anything leftover (a partial block),
# yield it as well.
if buf:
yield buf |
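For instance, matching the cases described in the docstring:

print(list(block_splitter(range(10), 2)))
# [[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]
print(list(block_splitter(range(10), 3)))
# [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
print(list(block_splitter([0, 1, 2], 50)))
# [[0, 1, 2]]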
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def round_geom(geom, precision=None):
"""Round coordinates of a geometric object to given precision.""" |
if geom['type'] == 'Point':
x, y = geom['coordinates']
xp, yp = [x], [y]
if precision is not None:
xp = [round(v, precision) for v in xp]
yp = [round(v, precision) for v in yp]
new_coords = tuple(zip(xp, yp))[0]
elif geom['type'] in ['LineString', 'MultiPoint']:
xp, yp = zip(*geom['coordinates'])
if precision is not None:
xp = [round(v, precision) for v in xp]
yp = [round(v, precision) for v in yp]
new_coords = tuple(zip(xp, yp))
elif geom['type'] in ['Polygon', 'MultiLineString']:
new_coords = []
for piece in geom['coordinates']:
xp, yp = zip(*piece)
if precision is not None:
xp = [round(v, precision) for v in xp]
yp = [round(v, precision) for v in yp]
new_coords.append(tuple(zip(xp, yp)))
elif geom['type'] == 'MultiPolygon':
parts = geom['coordinates']
new_coords = []
for part in parts:
inner_coords = []
for ring in part:
xp, yp = zip(*ring)
if precision is not None:
xp = [round(v, precision) for v in xp]
yp = [round(v, precision) for v in yp]
inner_coords.append(tuple(zip(xp, yp)))
new_coords.append(inner_coords)
return {'type': geom['type'], 'coordinates': new_coords} |
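A quick check of the rounding behaviour:

geom = {'type': 'LineString',
        'coordinates': [(0.123456, 1.987654), (2.5, 3.5)]}
print(round_geom(geom, precision=2))
# {'type': 'LineString', 'coordinates': ((0.12, 1.99), (2.5, 3.5))}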
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cli(input, verbose, quiet, output_format, precision, indent):
"""Convert text read from the first positional argument, stdin, or a file to GeoJSON and write to stdout.""" |
verbosity = verbose - quiet
configure_logging(verbosity)
logger = logging.getLogger('geomet')
# Handle the case of file, stream, or string input.
try:
src = click.open_file(input).readlines()
except IOError:
src = [input]
stdout = click.get_text_stream('stdout')
# Read-write loop.
try:
for line in src:
text = line.strip()
logger.debug("Input: %r", text)
output = translate(
text,
output_format=output_format,
indent=indent,
precision=precision
)
logger.debug("Output: %r", output)
stdout.write(output)
stdout.write('\n')
sys.exit(0)
except Exception:
logger.exception("Failed. Exception caught")
sys.exit(1) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_geom_type(type_bytes):
"""Get the GeoJSON geometry type label from a WKB type byte string. :param type_bytes: 4 byte string in big endian byte order containing a WKB type number. It may also contain a "has SRID" flag in the high byte (the first type, since this is big endian byte order), indicated as 0x20. If the SRID flag is not set, the high byte will always be null (0x00). :returns: 3-tuple ofGeoJSON geometry type label, the bytes resprenting the geometry type, and a separate "has SRID" flag. If the input `type_bytes` contains an SRID flag, it will be removed. True True """ |
# slice off the high byte, which may contain the SRID flag
high_byte = type_bytes[0]
if six.PY3:
high_byte = bytes([high_byte])
has_srid = high_byte == b'\x20'
if has_srid:
# replace the high byte with a null byte
type_bytes = as_bin_str(b'\x00' + type_bytes[1:])
else:
type_bytes = as_bin_str(type_bytes)
# look up the geometry type
geom_type = _BINARY_TO_GEOM_TYPE.get(type_bytes)
return geom_type, type_bytes, has_srid |
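Independent of the lookup table above, the byte layout can be illustrated with plain `struct` (a sketch; 0x20 in the high byte is the EWKB "has SRID" flag, and the low bytes carry the geometry type number, e.g. 1 for Point):

import struct

plain_point = struct.pack('>l', 1)        # b'\x00\x00\x00\x01'
srid_point = b'\x20' + plain_point[1:]    # same type number, SRID flag set

print(plain_point[:1] == b'\x20')  # False: no SRID payload follows
print(srid_point[:1] == b'\x20')   # True: a 4-byte SRID follows the header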
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dumps(obj, big_endian=True):
""" Dump a GeoJSON-like `dict` to a WKB string. .. note:: The dimensions of the generated WKB will be inferred from the first vertex in the GeoJSON `coordinates`. It will be assumed that all vertices are uniform. There are 4 types: - 2D (X, Y):
2-dimensional geometry - Z (X, Y, Z):
3-dimensional geometry - M (X, Y, M):
2-dimensional geometry with a "Measure" - ZM (X, Y, Z, M):
3-dimensional geometry with a "Measure" If the first vertex contains 2 values, we assume a 2D geometry. If the first vertex contains 3 values, this is slightly ambiguous and so the most common case is chosen: Z. If the first vertex contains 4 values, we assume a ZM geometry. The WKT/WKB standards provide a way of differentiating normal (2D), Z, M, and ZM geometries (http://en.wikipedia.org/wiki/Well-known_text), but the GeoJSON spec does not. Therefore, for the sake of interface simplicity, we assume that geometry that looks 3D contains XYZ components, instead of XYM. If the coordinates list has no coordinate values (this includes nested lists, for example, `[[[[],[]], []]]`, the geometry is considered to be empty. Geometries, with the exception of points, have a reasonable "empty" representation in WKB; however, without knowing the number of coordinate values per vertex, the type is ambigious, and thus we don't know if the geometry type is 2D, Z, M, or ZM. Therefore in this case we expect a `ValueError` to be raised. :param dict obj: GeoJson-like `dict` object. :param bool big_endian: Defaults to `True`. If `True`, data values in the generated WKB will be represented using big endian byte order. Else, little endian. TODO: remove this :param str dims: Indicates to WKB representation desired from converting the given GeoJSON `dict` ``obj``. The accepted values are: * '2D': 2-dimensional geometry (X, Y) * 'Z': 3-dimensional geometry (X, Y, Z) * 'M': 3-dimensional geometry (X, Y, M) * 'ZM': 4-dimensional geometry (X, Y, Z, M) :returns: A WKB binary string representing of the ``obj``. """ |
geom_type = obj['type']
meta = obj.get('meta', {})
exporter = _dumps_registry.get(geom_type)
if exporter is None:
_unsupported_geom_type(geom_type)
# Check for empty geometries. GeometryCollections have a slightly different
# JSON/dict structure, but that's handled.
coords_or_geoms = obj.get('coordinates', obj.get('geometries'))
if len(list(flatten_multi_dim(coords_or_geoms))) == 0:
raise ValueError(
'Empty geometries cannot be represented in WKB. Reason: The '
'dimensionality of the WKB would be ambiguous.'
)
return exporter(obj, big_endian, meta) |
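Typical use, assuming this module is importable as `geomet.wkb` (as in the released package):

from geomet import wkb

point = {'type': 'Point', 'coordinates': [1.0, 2.0]}
data = wkb.dumps(point)   # big endian by default
print(data[:1])           # b'\x00' -- the big endian marker byte
print(len(data))          # 21: 1 order byte + 4 type bytes + 2 * 8 coords

# An empty geometry is rejected because its dimensionality is ambiguous:
# wkb.dumps({'type': 'LineString', 'coordinates': []})  -> ValueError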
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _dump_point(obj, big_endian, meta):
""" Dump a GeoJSON-like `dict` to a point WKB string. :param dict obj: GeoJson-like `dict` object. :param bool big_endian: If `True`, data values in the generated WKB will be represented using big endian byte order. Else, little endian. :param dict meta: Metadata associated with the GeoJSON object. Currently supported metadata: - srid: Used to support EWKT/EWKB. For example, ``meta`` equal to ``{'srid': '4326'}`` indicates that the geometry is defined using Extended WKT/WKB and that it bears a Spatial Reference System Identifier of 4326. This ID will be encoded into the resulting binary. Any other meta data objects will simply be ignored by this function. :returns: A WKB binary string representing of the Point ``obj``. """ |
coords = obj['coordinates']
num_dims = len(coords)
wkb_string, byte_fmt, _ = _header_bytefmt_byteorder(
'Point', num_dims, big_endian, meta
)
wkb_string += struct.pack(byte_fmt, *coords)
return wkb_string |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _dump_linestring(obj, big_endian, meta):
""" Dump a GeoJSON-like `dict` to a linestring WKB string. Input parameters and output are similar to :func:`_dump_point`. """ |
coords = obj['coordinates']
vertex = coords[0]
# Infer the number of dimensions from the first vertex
num_dims = len(vertex)
wkb_string, byte_fmt, byte_order = _header_bytefmt_byteorder(
'LineString', num_dims, big_endian, meta
)
# append number of vertices in linestring
wkb_string += struct.pack('%sl' % byte_order, len(coords))
for vertex in coords:
wkb_string += struct.pack(byte_fmt, *vertex)
return wkb_string |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _dump_multipoint(obj, big_endian, meta):
""" Dump a GeoJSON-like `dict` to a multipoint WKB string. Input parameters and output are similar to :funct:`_dump_point`. """ |
coords = obj['coordinates']
vertex = coords[0]
num_dims = len(vertex)
wkb_string, byte_fmt, byte_order = _header_bytefmt_byteorder(
'MultiPoint', num_dims, big_endian, meta
)
point_type = _WKB[_INT_TO_DIM_LABEL.get(num_dims)]['Point']
if big_endian:
point_type = BIG_ENDIAN + point_type
else:
point_type = LITTLE_ENDIAN + point_type[::-1]
wkb_string += struct.pack('%sl' % byte_order, len(coords))
for vertex in coords:
# POINT type strings
wkb_string += point_type
wkb_string += struct.pack(byte_fmt, *vertex)
return wkb_string |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _dump_multilinestring(obj, big_endian, meta):
""" Dump a GeoJSON-like `dict` to a multilinestring WKB string. Input parameters and output are similar to :funct:`_dump_point`. """ |
coords = obj['coordinates']
vertex = coords[0][0]
num_dims = len(vertex)
wkb_string, byte_fmt, byte_order = _header_bytefmt_byteorder(
'MultiLineString', num_dims, big_endian, meta
)
ls_type = _WKB[_INT_TO_DIM_LABEL.get(num_dims)]['LineString']
if big_endian:
ls_type = BIG_ENDIAN + ls_type
else:
ls_type = LITTLE_ENDIAN + ls_type[::-1]
# append the number of linestrings
wkb_string += struct.pack('%sl' % byte_order, len(coords))
for linestring in coords:
wkb_string += ls_type
# append the number of vertices in each linestring
wkb_string += struct.pack('%sl' % byte_order, len(linestring))
for vertex in linestring:
wkb_string += struct.pack(byte_fmt, *vertex)
return wkb_string |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _dump_multipolygon(obj, big_endian, meta):
""" Dump a GeoJSON-like `dict` to a multipolygon WKB string. Input parameters and output are similar to :funct:`_dump_point`. """ |
coords = obj['coordinates']
vertex = coords[0][0][0]
num_dims = len(vertex)
wkb_string, byte_fmt, byte_order = _header_bytefmt_byteorder(
'MultiPolygon', num_dims, big_endian, meta
)
poly_type = _WKB[_INT_TO_DIM_LABEL.get(num_dims)]['Polygon']
if big_endian:
poly_type = BIG_ENDIAN + poly_type
else:
poly_type = LITTLE_ENDIAN + poly_type[::-1]
# append the number of polygons
wkb_string += struct.pack('%sl' % byte_order, len(coords))
for polygon in coords:
# append polygon header
wkb_string += poly_type
# append the number of rings in this polygon
wkb_string += struct.pack('%sl' % byte_order, len(polygon))
for ring in polygon:
# append the number of vertices in this ring
wkb_string += struct.pack('%sl' % byte_order, len(ring))
for vertex in ring:
wkb_string += struct.pack(byte_fmt, *vertex)
return wkb_string |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _load_point(big_endian, type_bytes, data_bytes):
""" Convert byte data for a Point to a GeoJSON `dict`. :param bool big_endian: If `True`, interpret the ``data_bytes`` in big endian order, else little endian. :param str type_bytes: 4-byte integer (as a binary string) indicating the geometry type (Point) and the dimensions (2D, Z, M or ZM). For consistency, these bytes are expected to always be in big endian order, regardless of the value of ``big_endian``. :param str data_bytes: Coordinate data in a binary string. :returns: GeoJSON `dict` representing the Point geometry. """ |
endian_token = '>' if big_endian else '<'
if type_bytes == WKB_2D['Point']:
coords = struct.unpack('%sdd' % endian_token,
as_bin_str(take(16, data_bytes)))
elif type_bytes == WKB_Z['Point']:
coords = struct.unpack('%sddd' % endian_token,
as_bin_str(take(24, data_bytes)))
elif type_bytes == WKB_M['Point']:
# NOTE: The use of XYM type geometries is quite rare. In the interest
# of removing ambiguity, we will treat all XYM geometries as XYZM when
# generating the GeoJSON. A default Z value of `0.0` will be given in
# this case.
coords = list(struct.unpack('%sddd' % endian_token,
as_bin_str(take(24, data_bytes))))
coords.insert(2, 0.0)
elif type_bytes == WKB_ZM['Point']:
coords = struct.unpack('%sdddd' % endian_token,
as_bin_str(take(32, data_bytes)))
return dict(type='Point', coordinates=list(coords)) |
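The loading direction can be exercised through the public loader (a sketch, again assuming the packaged `geomet.wkb`):

import struct
from geomet import wkb

# 2D big-endian point: order byte, type number 1, then X and Y as doubles
raw = b'\x00' + struct.pack('>l', 1) + struct.pack('>dd', 1.0, 2.0)
print(wkb.loads(raw))  # {'type': 'Point', 'coordinates': [1.0, 2.0]}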
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dumps(obj, decimals=16):
""" Dump a GeoJSON-like `dict` to a WKT string. """ |
try:
geom_type = obj['type']
exporter = _dumps_registry.get(geom_type)
if exporter is None:
_unsupported_geom_type(geom_type)
# Check for empty cases
if geom_type == 'GeometryCollection':
if len(obj['geometries']) == 0:
return 'GEOMETRYCOLLECTION EMPTY'
else:
# Geom has no coordinate values at all, and must be empty.
if len(list(util.flatten_multi_dim(obj['coordinates']))) == 0:
return '%s EMPTY' % geom_type.upper()
except KeyError:
raise geomet.InvalidGeoJSONException('Invalid GeoJSON: %s' % obj)
result = exporter(obj, decimals)
# Try to get the SRID from `meta.srid`
meta_srid = obj.get('meta', {}).get('srid')
# Also try to get it from `crs.properties.name`:
crs_srid = obj.get('crs', {}).get('properties', {}).get('name')
if crs_srid is not None:
# Shave off the EPSG prefix to give us the SRID:
crs_srid = crs_srid.replace('EPSG', '')
if (meta_srid is not None and
crs_srid is not None and
str(meta_srid) != str(crs_srid)):
raise ValueError(
'Ambiguous CRS/SRID values: %s and %s' % (meta_srid, crs_srid)
)
srid = meta_srid or crs_srid
# TODO: add tests for CRS input
if srid is not None:
# Prepend the SRID
result = 'SRID=%s;%s' % (srid, result)
return result |
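For example, with the packaged `geomet.wkt`:

from geomet import wkt

point = {'type': 'Point', 'coordinates': [0.5, 1.25]}
print(wkt.dumps(point, decimals=2))   # POINT (0.50 1.25)

# meta.srid produces the EWKT prefix described above:
point['meta'] = {'srid': 4326}
print(wkt.dumps(point, decimals=2))   # SRID=4326;POINT (0.50 1.25)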
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _tokenize_wkt(tokens):
""" Since the tokenizer treats "-" and numeric strings as separate values, combine them and yield them as a single token. This utility encapsulates parsing of negative numeric values from WKT can be used generically in all parsers. """ |
negative = False
for t in tokens:
if t == '-':
negative = True
continue
else:
if negative:
yield '-%s' % t
else:
yield t
negative = False |
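For example, the tokenizer emits '-' as its own token, and this generator glues it back onto the number that follows:

print(list(_tokenize_wkt(['(', '-', '1.5', '2.0', ')'])))
# ['(', '-1.5', '2.0', ')']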
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _round_and_pad(value, decimals):
""" Round the input value to `decimals` places, and pad with 0's if the resulting value is less than `decimals`. :param value: The value to round :param decimals: Number of decimals places which should be displayed after the rounding. :return: str of the rounded value """ |
if isinstance(value, int) and decimals != 0:
# if we get an int coordinate and we have a non-zero value for
# `decimals`, we want to create a float to pad out.
value = float(value)
elif decimals == 0:
# if we get a `decimals` value of 0, we want to return an int.
return repr(int(round(value, decimals)))
rounded = repr(round(value, decimals))
rounded += '0' * (decimals - len(rounded.split('.')[1]))
return rounded |
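A few illustrative cases:

print(_round_and_pad(1.5, 3))      # '1.500' -- padded with trailing zeros
print(_round_and_pad(2, 2))        # '2.00'  -- int promoted to float first
print(_round_and_pad(3.14159, 0))  # '3'     -- decimals=0 yields an int repr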
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _dump_point(obj, decimals):
""" Dump a GeoJSON-like Point object to WKT. :param dict obj: A GeoJSON-like `dict` representing a Point. :param int decimals: int which indicates the number of digits to display after the decimal point when formatting coordinates. :returns: WKT representation of the input GeoJSON Point ``obj``. """ |
coords = obj['coordinates']
pt = 'POINT (%s)' % ' '.join(_round_and_pad(c, decimals)
for c in coords)
return pt |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _dump_linestring(obj, decimals):
""" Dump a GeoJSON-like LineString object to WKT. Input parameters and return value are the LINESTRING equivalent to :func:`_dump_point`. """ |
coords = obj['coordinates']
ls = 'LINESTRING (%s)'
ls %= ', '.join(' '.join(_round_and_pad(c, decimals)
for c in pt) for pt in coords)
return ls |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _dump_polygon(obj, decimals):
""" Dump a GeoJSON-like Polygon object to WKT. Input parameters and return value are the POLYGON equivalent to :func:`_dump_point`. """ |
coords = obj['coordinates']
poly = 'POLYGON (%s)'
rings = (', '.join(' '.join(_round_and_pad(c, decimals)
for c in pt) for pt in ring)
for ring in coords)
rings = ('(%s)' % r for r in rings)
poly %= ', '.join(rings)
return poly |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _dump_multipoint(obj, decimals):
""" Dump a GeoJSON-like MultiPoint object to WKT. Input parameters and return value are the MULTIPOINT equivalent to :func:`_dump_point`. """ |
coords = obj['coordinates']
mp = 'MULTIPOINT (%s)'
points = (' '.join(_round_and_pad(c, decimals)
for c in pt) for pt in coords)
# Add parens around each point.
points = ('(%s)' % pt for pt in points)
mp %= ', '.join(points)
return mp |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _dump_multilinestring(obj, decimals):
""" Dump a GeoJSON-like MultiLineString object to WKT. Input parameters and return value are the MULTILINESTRING equivalent to :func:`_dump_point`. """ |
coords = obj['coordinates']
mlls = 'MULTILINESTRING (%s)'
linestrs = ('(%s)' % ', '.join(' '.join(_round_and_pad(c, decimals)
for c in pt) for pt in linestr) for linestr in coords)
mlls %= ', '.join(ls for ls in linestrs)
return mlls |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _dump_multipolygon(obj, decimals):
""" Dump a GeoJSON-like MultiPolygon object to WKT. Input parameters and return value are the MULTIPOLYGON equivalent to :func:`_dump_point`. """ |
coords = obj['coordinates']
mp = 'MULTIPOLYGON (%s)'
polys = (
# join the polygons in the multipolygon
', '.join(
# join the rings in a polygon,
# and wrap in parens
'(%s)' % ', '.join(
# join the points in a ring,
# and wrap in parens
'(%s)' % ', '.join(
# join coordinate values of a vertex
' '.join(_round_and_pad(c, decimals) for c in pt)
for pt in ring)
for ring in poly)
for poly in coords)
)
mp %= polys
return mp |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _dump_geometrycollection(obj, decimals):
""" Dump a GeoJSON-like GeometryCollection object to WKT. Input parameters and return value are the GEOMETRYCOLLECTION equivalent to :func:`_dump_point`. The WKT conversions for each geometry in the collection are delegated to their respective functions. """ |
gc = 'GEOMETRYCOLLECTION (%s)'
geoms = obj['geometries']
geoms_wkt = []
for geom in geoms:
geom_type = geom['type']
geoms_wkt.append(_dumps_registry.get(geom_type)(geom, decimals))
gc %= ','.join(geoms_wkt)
return gc |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_request_params(self, **kwargs):
"""Merge shared params and new params.""" |
request_params = copy.deepcopy(self._shared_request_params)
for key, value in iteritems(kwargs):
if isinstance(value, dict) and key in request_params:
# ensure we don't lose dict values like headers or cookies
request_params[key].update(value)
else:
request_params[key] = value
return request_params |
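The merge semantics in isolation (a sketch with illustrative data; `iteritems` above is `six.iteritems` in the original):

import copy

shared = {'headers': {'Accept': 'application/json'}, 'timeout': 5}

def merge_params(shared_params, **kwargs):
    merged = copy.deepcopy(shared_params)
    for key, value in kwargs.items():
        if isinstance(value, dict) and key in merged:
            merged[key].update(value)  # merge nested dicts such as headers
        else:
            merged[key] = value        # scalars simply override
    return merged

print(merge_params(shared, headers={'X-Token': 'abc'}, timeout=10))
# {'headers': {'Accept': 'application/json', 'X-Token': 'abc'}, 'timeout': 10}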
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _sanitize_request_params(self, request_params):
"""Remove keyword arguments not used by `requests`""" |
if 'verify_ssl' in request_params:
request_params['verify'] = request_params.pop('verify_ssl')
return dict((key, val) for key, val in request_params.items()
if key in self._VALID_REQUEST_ARGS) |
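The rename and whitelist in isolation (a sketch with an illustrative `VALID` set; the real class keeps this in `_VALID_REQUEST_ARGS`):

VALID = {'url', 'headers', 'timeout', 'verify'}

def sanitize(params):
    # translate the friendlier 'verify_ssl' into the name requests expects
    if 'verify_ssl' in params:
        params['verify'] = params.pop('verify_ssl')
    return {k: v for k, v in params.items() if k in VALID}

print(sanitize({'url': 'https://example.com',
                'verify_ssl': False,
                'max_retries': 3}))
# {'url': 'https://example.com', 'verify': False}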
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def pre_send(self, request_params):
"""Override this method to modify sent request parameters""" |
for adapter in itervalues(self.adapters):
adapter.max_retries = request_params.get('max_retries', 0)
return request_params |