<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def image(img, cmap='gray', bar=False, nans=True, clim=None, size=7, ax=None):
""" Streamlined display of images using matplotlib. Parameters img : ndarray, 2D or 3D The image to display cmap : str or Colormap, optional, default = 'gray' A colormap to use, for non RGB images bar : boolean, optional, default = False Whether to append a colorbar nans : boolean, optional, deafult = True Whether to replace NaNs, if True, will replace with 0s clim : tuple, optional, default = None Limits for scaling image size : scalar, optional, deafult = 9 Size of the figure ax : matplotlib axis, optional, default = None An existing axis to plot into """
|
from matplotlib.pyplot import axis, colorbar, figure, gca
img = asarray(img)
if (nans is True) and (img.dtype != bool):
img = nan_to_num(img)
if ax is None:
f = figure(figsize=(size, size))
ax = gca()
if img.ndim == 3:
if bar:
raise ValueError("Cannot show meaningful colorbar for RGB images")
if img.shape[2] != 3:
raise ValueError("Size of third dimension must be 3 for RGB images, got %g" % img.shape[2])
mn = img.min()
mx = img.max()
if mn < 0.0 or mx > 1.0:
raise ValueError("Values must be between 0.0 and 1.0 for RGB images, got range (%g, %g)" % (mn, mx))
im = ax.imshow(img, interpolation='nearest', clim=clim)
else:
im = ax.imshow(img, cmap=cmap, interpolation='nearest', clim=clim)
if bar is True:
cb = colorbar(im, fraction=0.046, pad=0.04)
rng = abs(cb.vmax - cb.vmin) * 0.05
cb.set_ticks([around(cb.vmin + rng, 1), around(cb.vmax - rng, 1)])
cb.outline.set_visible(False)
axis('off')
return im
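# Example usage, as a hedged sketch: it assumes asarray, nan_to_num and
# around were imported from numpy at module level, which the body above
# already requires.
#
#   from numpy import random
#   data = random.rand(64, 64)              # synthetic 2D image
#   im = image(data, cmap='viridis', bar=True, size=5)
#   # 'im' is the AxesImage returned by imshow; call
#   # matplotlib.pyplot.show() to display it.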
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def all(cls):
""" Returns a list of all configured endpoints the server is listening on. For each endpoint, the list of allowed databases is returned too if set. The result is a JSON hash which has the endpoints as keys, and the list of mapped database names as values for each endpoint. If a list of mapped databases is empty, it means that all databases can be accessed via the endpoint. If a list of mapped databases contains more than one database name, this means that any of the databases might be accessed via the endpoint, and the first database in the list will be treated as the default database for the endpoint. The default database will be used when an incoming request does not specify a database name in the request explicitly. *Note*: retrieving the list of all endpoints is allowed in the system database only. Calling this action in any other database will make the server return an error. """
|
api = Client.instance().api
endpoint_list = api.endpoint.get()
return endpoint_list
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create(cls, url, databases):
""" If databases is an empty list, all databases present in the server will become accessible via the endpoint, with the _system database being the default database. If databases is non-empty, only the specified databases will become available via the endpoint. The first database name in the databases list will also become the default database for the endpoint. The default database will always be used if a request coming in on the endpoint does not specify the database name explicitly. *Note*: adding or reconfiguring endpoints is allowed in the system database only. Calling this action in any other database will make the server return an error. Adding SSL endpoints at runtime is only supported if the server was started with SSL properly configured (e.g. --server.keyfile must have been set). :param url the endpoint specification, e.g. tcp://127.0.0.1:8530 :param databases a list of database names the endpoint is responsible for. """
|
api = Client.instance().api
result = api.endpoint.post(data={
'endpoint': url,
'databases': databases,
})
return result
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def destroy(cls, url):
""" This operation deletes an existing endpoint from the list of all endpoints, and makes the server stop listening on the endpoint. *Note*: deleting and disconnecting an endpoint is allowed in the system database only. Calling this action in any other database will make the server return an error. Futhermore, the last remaining endpoint cannot be deleted as this would make the server kaput. :param url The endpoint to delete, e.g. tcp://127.0.0.1:8529. """
|
api = Client.instance().api
api.endpoint(url).delete()
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def wait(self):
""" Return a deferred that will be fired when the event is fired. """
|
d = defer.Deferred()
if self._result is None:
self._waiters.append(d)
else:
self._fire_deferred(d)
return d
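# Usage sketch (hedged): the surrounding class is not shown here, but the
# _result/_waiters attributes suggest an event object with a fire()-style
# method that resolves pending waiters. A hypothetical caller:
#
#   d = event.wait()                        # Deferred for the next firing
#   d.addCallback(lambda result: log.msg("event fired: %r" % (result,)))
#   event.fire("done")                      # hypothetical firing API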
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def close(self, reason):
"""Explicitly close a channel"""
|
self._closing = True
self.do_close(reason)
self._closing = False
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _raise_closed(reason):
"""Raise the appropriate Closed-based error for the given reason."""
|
if isinstance(reason, Message):
if reason.method.klass.name == "channel":
raise ChannelClosed(reason)
elif reason.method.klass.name == "connection":
raise ConnectionClosed(reason)
raise Closed(reason)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def close(self, reason=None, within=0):
"""Explicitely close the connection. @param reason: Optional closing reason. If not given, ConnectionDone will be used. @param within: Shutdown the client within this amount of seconds. If zero (the default), all channels and queues will be closed immediately. If greater than 0, try to close the AMQP connection cleanly, by sending a "close" method and waiting for "close-ok". If no reply is received within the given amount of seconds, the transport will be forcely shutdown. """
|
if self.closed:
return
if reason is None:
reason = ConnectionDone()
if within > 0:
channel0 = yield self.channel(0)
deferred = channel0.connection_close()
call = self.clock.callLater(within, deferred.cancel)
try:
yield deferred
except defer.CancelledError:
pass
else:
call.cancel()
self.do_close(reason)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
|
def copy(self):
'''Returns a copy of this namespace.
Note: we truly create a copy of the dictionary but keep
_macros and _blocks.
'''
return Namespace(self.dictionary.copy(), self._macros, self._blocks)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _do_tcp_check(self, ip, results):
""" Attempt to establish a TCP connection. If not successful, record the IP in the results dict. Always closes the connection at the end. """
|
try:
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(1)
sock.connect((ip, self.conf['tcp_check_port']))
except:
# Any problem during the connection attempt? We won't diagnose it,
# we just indicate failure by adding the IP to the list
results.append(ip)
finally:
sock.close()
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def do_health_checks(self, list_of_ips):
""" Perform a health check on a list of IP addresses. Each check (we use a TCP connection attempt) is run in its own thread. Gather up the results and return the list of those addresses that failed the test and the list of questionable IPs. TODO: Currently, this starts a thread for every single address we want to check. That's probably not a good idea if we have thousands of addresses. Therefore, we should implement some batching for large sets. """
|
threads = []
results = []
# Start the thread for each IP we wish to check.
for count, ip in enumerate(list_of_ips):
thread = threading.Thread(
target = self._do_tcp_check,
name = "%s:%s" % (self.thread_name, ip),
args = (ip, results))
thread.start()
threads.append(thread)
# ... make sure all threads are done...
for thread in threads:
thread.join()
# ... and send back all the failed IPs.
return results, []
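# A minimal sketch for the docstring's TODO (not part of the original
# plugin): run the checks in fixed-size batches, so that at most
# 'batch_size' threads exist at any one time.
def _do_health_checks_batched(self, list_of_ips, batch_size=50):
    results = []
    for start in range(0, len(list_of_ips), batch_size):
        threads = []
        for ip in list_of_ips[start:start + batch_size]:
            thread = threading.Thread(
                target=self._do_tcp_check,
                name="%s:%s" % (self.thread_name, ip),
                args=(ip, results))
            thread.start()
            threads.append(thread)
        # Wait for this batch to finish before starting the next one.
        for thread in threads:
            thread.join()
    return results, []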
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def start(self):
""" Start the monitoring thread of the plugin. """
|
logging.info("TCP health monitor plugin: Starting to watch "
"instances.")
self.monitor_thread = threading.Thread(target = self.start_monitoring,
name = self.thread_name)
self.monitor_thread.daemon = True
self.monitor_thread.start()
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_arguments(cls, parser, sys_arg_list=None):
""" Arguments for the TCP health monitor plugin. """
|
parser.add_argument('--tcp_check_interval',
dest='tcp_check_interval',
required=False, default=2, type=float,
help="TCP health-test interval in seconds, "
"default 2 "
"(only for 'tcp' health monitor plugin)")
parser.add_argument('--tcp_check_port',
dest='tcp_check_port',
required=False, default=22, type=int,
help="Port for TCP health-test, default 22 "
"(only for 'tcp' health monitor plugin)")
return ["tcp_check_interval", "tcp_check_port"]
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
|
def _check_negatives(numbers):
"Raise warning for negative numbers."
# Use a list (not a lazy filter iterator) so the values survive the check
# below; skip None entries before comparing against 0.
negatives = [x for x in numbers if x is not None and x < 0]
if negatives:
neg_values = ', '.join(map(str, negatives))
msg = 'Found negative value(s): {0!s}. '.format(neg_values)
msg += 'While not forbidden, the output will look unexpected.'
warnings.warn(msg)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
|
def _check_emphasis(numbers, emph):
"Find index postions in list of numbers to be emphasized according to emph."
pat = '(\w+)\:(eq|gt|ge|lt|le)\:(.+)'
# find values to be highlighted
emphasized = {} # index: color
for (i, n) in enumerate(numbers):
if n is None:
continue
for em in emph:
color, op, value = re.match(pat, em).groups()
value = float(value)
if op == 'eq' and n == value:
emphasized[i] = color
elif op == 'gt' and n > value:
emphasized[i] = color
elif op == 'ge' and n >= value:
emphasized[i] = color
elif op == 'lt' and n < value:
emphasized[i] = color
elif op == 'le' and n <= value:
emphasized[i] = color
return emphasized
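# Worked example: each emph entry is '<color>:<op>:<value>'. Mark values
# greater than 4 in red and values equal to 1 in green:
assert _check_emphasis([3, 1, 4, 1, 5], ['red:gt:4', 'green:eq:1']) == \
    {1: 'green', 3: 'green', 4: 'red'}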
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
|
def scale_values(numbers, num_lines=1, minimum=None, maximum=None):
"Scale input numbers to appropriate range."
# find min/max values, ignoring Nones
filtered = [n for n in numbers if n is not None]
min_ = min(filtered) if minimum is None else minimum
max_ = max(filtered) if maximum is None else maximum
dv = max_ - min_
# clamp
numbers = [max(min(n, max_), min_) if n is not None else None for n in numbers]
if dv == 0:
values = [4 * num_lines if x is not None else None for x in numbers]
elif dv > 0:
num_blocks = len(blocks) - 1
min_index = 1.
max_index = num_lines * num_blocks
values = [
((max_index - min_index) * (x - min_)) / dv + min_index
if not x is None else None for x in numbers
]
values = [round(v) or 1 if not v is None else None for v in values]
return values
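# Worked examples (assuming the module-level 'blocks' string has nine
# entries, a space plus the eight block characters, so num_blocks == 8):
#
#   scale_values([1, 9])       -> [1, 8]      # linear map onto indices 1..8
#   scale_values([5, 5])       -> [4, 4]      # dv == 0: mid index 4*num_lines
#   scale_values([1, None, 9]) -> [1, None, 8]    # Nones pass through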
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sparklines(numbers=[], num_lines=1, emph=None, verbose=False, minimum=None, maximum=None, wrap=None):
""" Return a list of 'sparkline' strings for a given list of input numbers. The list of input numbers may contain None values, too, for which the resulting sparkline will contain a blank character (a space). Examples: sparklines([3, 1, 4, 1, 5, 9, 2, 6]) -> ['ββββββββ
'] sparklines([3, 1, 4, 1, 5, 9, 2, 6], num_lines=2) -> [ ' β β', 'β
βββββββ' ] """
|
assert num_lines > 0
if len(numbers) == 0:
return ['']
# raise warning for negative numbers
_check_negatives(numbers)
values = scale_values(numbers, num_lines=num_lines, minimum=minimum, maximum=maximum)
# find values to be highlighted
emphasized = _check_emphasis(numbers, emph) if emph else {}
point_index = 0
subgraphs = []
for subgraph_values in batch(wrap, values):
multi_values = []
for i in range(num_lines):
multi_values.append([
min(v, 8) if not v is None else None
for v in subgraph_values
])
subgraph_values = [max(0, v-8) if not v is None else None for v in subgraph_values]
multi_values.reverse()
lines = []
for subgraph_values in multi_values:
if HAVE_TERMCOLOR and emphasized:
tc = termcolor.colored
res = [tc(blocks[int(v)], emphasized.get(point_index + i, 'white')) if not v is None else ' ' for (i, v) in enumerate(subgraph_values)]
else:
res = [blocks[int(v)] if not v is None else ' ' for v in subgraph_values]
lines.append(''.join(res))
subgraphs.append(lines)
point_index += len(subgraph_values)
return list_join('', subgraphs)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
|
def batch(batch_size, items):
"Batch items into groups of batch_size"
items = list(items)
if batch_size is None:
return [items]
MISSING = object()
padded_items = items + [MISSING] * (batch_size - 1)
groups = zip(*[padded_items[i::batch_size] for i in range(batch_size)])
# An identity check is the correct test for the MISSING sentinel.
return [[item for item in group if item is not MISSING] for group in groups]
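# Worked examples: the zip-of-strided-slices trick pads the input so the
# last (possibly short) group survives, then the padding is filtered out.
assert batch(3, [1, 2, 3, 4, 5, 6, 7, 8]) == [[1, 2, 3], [4, 5, 6], [7, 8]]
assert batch(None, [1, 2, 3]) == [[1, 2, 3]]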
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
|
def demo(nums=[]):
"Print a few usage examples on stdout."
nums = nums or [3, 1, 4, 1, 5, 9, 2, 6]
fmt = lambda num: '{0:g}'.format(num) if isinstance(num, (float, int)) else 'None'
nums1 = list(map(fmt, nums))
if __name__ == '__main__':
prog = sys.argv[0]
else:
prog = 'sparklines'
result = []
result.append('Usage examples (command-line and programmatic use):')
result.append('')
result.append('- Standard one-line sparkline')
result.append('{0!s} {1!s}'.format(prog, ' '.join(nums1)))
result.append('>>> print(sparklines([{0!s}])[0])'.format(', '.join(nums1)))
result.append(sparklines(nums)[0])
result.append('')
result.append('- Multi-line sparkline (n=2)')
result.append('{0!s} -n 2 {1!s}'.format(prog, ' '.join(nums1)))
result.append('>>> for line in sparklines([{0!s}], num_lines=2): print(line)'.format(', '.join(nums1)))
for line in sparklines(nums, num_lines=2):
result.append(line)
result.append('')
result.append('- Multi-line sparkline (n=3)')
result.append('{0!s} -n 3 {1!s}'.format(prog, ' '.join(nums1)))
result.append('>>> for line in sparklines([{0!s}], num_lines=3): print(line)'.format(', '.join(nums1)))
for line in sparklines(nums, num_lines=3):
result.append(line)
result.append('')
nums = nums + [None] + list(reversed(nums[:]))
result.append('- Standard one-line sparkline with gap')
result.append('{0!s} {1!s}'.format(prog, ' '.join(map(str, nums))))
result.append('>>> print(sparklines([{0!s}])[0])'.format(', '.join(map(str, nums))))
result.append(sparklines(nums)[0])
return '\n'.join(result) + '\n'
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _expire_data(self):
""" Remove all expired entries. """
|
expire_time_stamp = time.time() - self.expire_time
self.timed_data = {d: t for d, t in self.timed_data.items()
if t > expire_time_stamp}
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def update(self, data_set):
""" Refresh the time of all specified elements in the supplied data set. """
|
now = time.time()
for d in data_set:
self.timed_data[d] = now
self._expire_data()
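# Usage sketch (hedged): the class name and constructor are not shown in
# this snippet; 'get()' is the accessor used elsewhere in this code base
# (e.g. ip_accumulator.get() in the multi health-monitor plugin).
#
#   acc = ExpiringDataSet(expire_time=60)    # hypothetical constructor
#   acc.update({"10.1.1.1", "10.1.1.2"})     # refresh/insert entries
#   acc.get()                                # entries seen in the last 60s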
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _accumulate_ips_from_plugins(self, ip_type_name, plugin_queue_lookup, ip_accumulator):
""" Retrieve all IPs of a given type from all sub-plugins. ip_type_name: A name of the type of IP we are working with. Used for nice log messages. Example 'failed', 'questionable'. plugin_queue_lookup: Dictionary to lookup the queues (of a given type) for a plugins, by plugin name. ip_accumulator: An expiring data set for this type of IP address. Returns either a set of addresses to send out on our own reporting queues, or None. """
|
all_reported_ips = set()
for pname, q in plugin_queue_lookup.items():
# Get all the IPs of the specified type from all the plugins.
ips = utils.read_last_msg_from_queue(q)
if ips:
logging.debug("Sub-plugin '%s' reported %d "
"%s IPs: %s" %
(pname, len(ips), ip_type_name,
",".join(ips)))
all_reported_ips.update(ips) # merge all the lists
else:
logging.debug("Sub-plugin '%s' reported no "
"%s IPs." % (pname, ip_type_name))
# Send out the combined list of reported IPs. The receiver of this
# message expects this list to always be the full list of IPs. So, IF
# they get a message, it needs to be complete, since otherwise any IP
# not mentioned in this update is considered healthy.
#
# Since different sub-plugins may report different IPs at different
# times (and not always at the same time), we need to accumulate those
# IPs that are recorded by different sub-plugins over time.
#
# We use an 'expiring data set' to store those: If any plugin refreshes
# an IP as failed then the entry remains, otherwise, it will expire
# after some time. The expiring data set therefore, is an accumulation
# of recently reported IPs. We always report this set, whenever we send
# out an update of IPs.
#
# Each type of IP (for example, 'failed' or 'questionable') has its own
# accumulator, which was passed in to this function.
if all_reported_ips:
ip_accumulator.update(all_reported_ips)
current_ips = ip_accumulator.get()
logging.info("Multi-plugin health monitor: "
"Reporting combined list of %s "
"IPs: %s" %
(ip_type_name,
",".join(current_ips)))
return current_ips
else:
logging.debug("No failed IPs to report.")
return None
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def start_monitoring(self):
""" Pass IP lists to monitor sub-plugins and get results from them. Override the common definition of this function, since in the multi plugin it's a little different: Instead of monitoring ourselves, we just use a number of other plugins to gather results. The multi plugin just serves as a proxy and (de)multiplexer for those other plugins. Note that we don't have to push any updates about failed IPs if nothing new was detected. Therefore, our own updates can be entirely driven by updates from the sub-plugin, which keeps our architecture simple. """
|
logging.info("Multi-plugin health monitor: Started in thread.")
try:
while True:
# Get new IP addresses and pass them on to the sub-plugins
new_ips = self.get_new_working_set()
if new_ips:
logging.debug("Sending list of %d IPs to %d plugins." %
(len(new_ips), len(self.plugins)))
for q in self.monitor_ip_queues.values():
q.put(new_ips)
# Get any notifications about failed or questionable IPs from
# the plugins.
all_failed_ips = self._accumulate_ips_from_plugins(
"failed",
self.failed_queue_lookup,
self.report_failed_acc)
if all_failed_ips:
self.q_failed_ips.put(all_failed_ips)
all_questionable_ips = self._accumulate_ips_from_plugins(
"questionable",
self.questionable_queue_lookup,
self.report_questionable_acc)
if all_questionable_ips:
self.q_questionable_ips.put(all_questionable_ips)
time.sleep(self.get_monitor_interval())
except common.StopReceived:
# Received the stop signal, just exiting the thread function
return
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_sub_plugins_from_str(cls, plugins_str):
""" Load plugin classes based on column separated list of plugin names. Returns dict with plugin name as key and class as value. """
|
plugin_classes = {}
if plugins_str:
for plugin_name in plugins_str.split(":"):
pc = load_plugin(plugin_name, MONITOR_DEFAULT_PLUGIN_MODULE)
plugin_classes[plugin_name] = pc
return plugin_classes
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_arguments(cls, parser, sys_arg_list=None):
""" Arguments for the Multi health monitor plugin. """
|
parser.add_argument('--multi_plugins',
dest='multi_plugins', required=True,
help="Column seperated list of health monitor "
"plugins (only for 'multi' health monitor "
"plugin)")
arglist = ["multi_plugins"]
# Read the list of the specified sub-plugins ahead of time, so we can
# get their classes and add their parameters.
sub_plugin_names_str = \
utils.param_extract(sys_arg_list, None, "--multi_plugins")
sub_plugin_classes = \
cls.load_sub_plugins_from_str(sub_plugin_names_str).values()
# Store the list of the sub-plugins in the class, so we can iterate
# over those during parameter evaluation later on.
cls.multi_plugin_classes = sub_plugin_classes
# Now also add the parameters for the sub-plugins
for pc in sub_plugin_classes:
arglist.extend(pc.add_arguments(parser, sys_arg_list))
return arglist
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read_route_spec_config(fname):
""" Read, parse and sanity check the route spec config file. The config file needs to be in this format: { "<CIDR-1>" : [ "host-1-ip", "host-2-ip", "host-3-ip" ], "<CIDR-2>" : [ "host-4-ip", "host-5-ip" ], "<CIDR-3>" : [ "host-6-ip", "host-7-ip", "host-8-ip", "host-9-ip" ] } Returns the validated route config. """
|
try:
try:
f = open(fname, "r")
except IOError as e:
# Cannot open file? Doesn't exist?
raise ValueError("Cannot open file: " + str(e))
data = json.loads(f.read())
f.close()
# Sanity checking on the data object
data = common.parse_route_spec_config(data)
except ValueError as e:
logging.error("Config ignored: %s" % str(e))
data = None
return data
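# Example (hedged, using the file format from the docstring):
#
#   with open("route_spec.json", "w") as f:
#       f.write('{"10.55.0.0/16": ["10.55.1.10", "10.55.1.11"]}')
#   read_route_spec_config("route_spec.json")
#   # -> {"10.55.0.0/16": ["10.55.1.10", "10.55.1.11"]}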
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def start(self):
""" Start the configfile change monitoring thread. """
|
fname = self.conf['file']
logging.info("Configfile watcher plugin: Starting to watch route spec "
"file '%s' for changes..." % fname)
# Initial content of file needs to be processed at least once, before
# we start watching for any changes to it. Therefore, we will write it
# out on the queue right away.
route_spec = {}
try:
route_spec = read_route_spec_config(fname)
if route_spec:
self.last_route_spec_update = datetime.datetime.now()
self.q_route_spec.put(route_spec)
except ValueError as e:
logging.warning("Cannot parse route spec: %s" % str(e))
# Now prepare to watch for any changes in that file. Find the parent
# directory of the config file, since this is where we will attach a
# watcher to.
abspath = os.path.abspath(fname)
parent_dir = os.path.dirname(abspath)
# Create the file watcher and run in endless loop
handler = RouteSpecChangeEventHandler(
route_spec_fname = fname,
route_spec_abspath = abspath,
q_route_spec = self.q_route_spec,
plugin = self)
self.observer_thread = watchdog.observers.Observer()
self.observer_thread.name = "ConfMon"
self.observer_thread.schedule(handler, parent_dir)
self.observer_thread.start()
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def stop(self):
""" Stop the config change monitoring thread. """
|
self.observer_thread.stop()
self.observer_thread.join()
logging.info("Configfile watcher plugin: Stopped")
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_arguments(cls, parser, sys_arg_list=None):
""" Arguments for the configfile mode. """
|
parser.add_argument('-f', '--file', dest='file', required=True,
help="config file for routing groups "
"(only in configfile mode)")
return ["file"]
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_arguments(cls, conf):
""" Sanity checks for options needed for configfile mode. """
|
try:
# Check we have access to the config file
f = open(conf['file'], "r")
f.close()
except IOError as e:
raise ArgsError("Cannot open config file '%s': %s" %
(conf['file'], e))
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_plugin(plugin_name, default_plugin_module):
""" Load a plugin plugin. Supports loading of plugins that are part of the vpcrouter, as well as external plugins: If the plugin name has a dotted notation then it assumes it's an external plugin and the dotted notation is the complete import path. If it's just a single word then it looks for the plugin in the specified default module. Return the plugin class. """
|
try:
if "." in plugin_name:
# Assume external plugin, full path
plugin_mod_name = plugin_name
plugin_class_name = plugin_name.split(".")[-1].capitalize()
else:
# One of the built-in plugins
plugin_mod_name = "%s.%s" % (default_plugin_module, plugin_name)
plugin_class_name = plugin_name.capitalize()
plugin_mod = importlib.import_module(plugin_mod_name)
plugin_class = getattr(plugin_mod, plugin_class_name)
return plugin_class
except ImportError as e:
raise PluginError("Cannot load '%s'" % plugin_mod_name)
except AttributeError:
raise PluginError("Cannot find plugin class '%s' in "
"plugin '%s'" %
(plugin_class_name, plugin_mod_name))
except Exception as e:
raise PluginError("Error while loading plugin '%s': %s" %
(plugin_mod_name, str(e)))
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def remove_signal_receiver(self, signal):
""" Remove an installed signal receiver by signal name. See also :py:meth:`add_signal_receiver` :py:exc:`exceptions.ConnSignalNameNotRecognisedException` :param str signal: Signal name to uninstall e.g., :py:attr:`SIGNAL_PROPERTY_CHANGED` :return: :raises ConnSignalNameNotRecognisedException: if the signal name is not registered """
|
if (signal in self._signal_names):
s = self._signals.get(signal)
if (s):
self._bus.remove_signal_receiver(s.signal_handler,
signal,
dbus_interface=self._dbus_addr) # noqa
self._signals.pop(signal)
else:
raise ConnSignalNameNotRecognisedException
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
|
def _do_request(self, request, url, **kwargs):
"Actually makes the HTTP request."
try:
response = request(url, stream=True, **kwargs)
except RequestException as e:
raise RequestError(e)
else:
if response.status_code >= 400:
raise ResponseError(response)
# Try to return the response in the most useful fashion given its type.
if response.headers.get('content-type') == 'application/json':
try:
# Try to decode as JSON
return response.json()
except (TypeError, ValueError):
# If that fails, return the text.
return response.text
else:
# This might be a file, so return it.
if kwargs.get('params', {}).get('raw', True):
return response.raw
else:
return response
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
|
def _request(self, method, endpoint, id=None, **kwargs):
"Handles retrying failed requests and error handling."
request = getattr(requests, method, None)
if not callable(request):
raise RequestError('Invalid method %s' % method)
# Find files, separate them out to correct kwarg for requests.
data = kwargs.get('data')
if data:
files = {}
for name, value in list(data.items()):
# Value might be a file-like object (with a read method), or it
# might be a (filename, file-like) tuple.
if hasattr(value, 'read') or isinstance(value, tuple):
files[name] = data.pop(name)
if files:
kwargs.setdefault('files', {}).update(files)
path = ['api', self.version, endpoint]
# If we received an ID, append it to the path.
if id:
path.append(str(id))
# Join fragments into a URL
path = '/'.join(path)
if not path.endswith('/'):
path += '/'
while '//' in path:
path = path.replace('//', '/')
url = self.url + path
# Add our user agent.
kwargs.setdefault('headers', {}).setdefault('User-Agent',
HTTP_USER_AGENT)
# Now try the request, if we get throttled, sleep and try again.
trys, retrys = 0, 3
while True:
if trys == retrys:
raise RequestError('Could not complete request after %s tries.'
% trys)
trys += 1
try:
return self._do_request(request, url, **kwargs)
except ResponseError as e:
if self.throttle_wait and e.status_code == 503:
m = THROTTLE_PATTERN.match(
e.response.headers.get('x-throttle', ''))
if m:
time.sleep(float(m.group(1)))
continue
# Failed for a reason other than throttling.
raise
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def download(self, file_to_be_downloaded, perform_download=True, download_to_path=None):
""" file_to_be_downloaded is a file-like object that has already been uploaded, you cannot download folders """
|
response = self.get(
'/path/data/', file_to_be_downloaded, raw=False)
if not perform_download:
# The caller can decide how to process the download of the data
return response
if not download_to_path:
download_to_path = file_to_be_downloaded.split("/")[-1]
# download uses shutil.copyfileobj to download, which copies
# the data in chunks
# Use a context manager so the local file handle is closed after the copy.
with open(download_to_path, 'wb') as o:
    return shutil.copyfileobj(response.raw, o)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_notebook(self, data):
"""Create notebook under notebook directory."""
|
r = requests.post('http://{0}/api/notebook'.format(self.zeppelin_url),
json=data)
self.notebook_id = r.json()['body']
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def wait_for_notebook_to_execute(self):
"""Wait for notebook to finish executing before continuing."""
|
while True:
r = requests.get('http://{0}/api/notebook/job/{1}'.format(
self.zeppelin_url, self.notebook_id))
if r.status_code == 200:
try:
data = r.json()['body']
if all(paragraph['status'] in ['FINISHED', 'ERROR'] for paragraph in data):
break
time.sleep(5)
continue
except KeyError as e:
print(e)
print(r.json())
elif r.status_code == 500:
print('Notebook is still busy executing. Checking again in 60 seconds...')
time.sleep(60)
continue
else:
print('ERROR: Unexpected return code: {}'.format(r.status_code))
sys.exit(1)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_executed_notebook(self):
"""Return the executed notebook."""
|
r = requests.get('http://{0}/api/notebook/{1}'.format(
self.zeppelin_url, self.notebook_id))
if r.status_code == 200:
return r.json()['body']
else:
print('ERROR: Could not get executed notebook.', file=sys.stderr)
sys.exit(1)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def save_notebook(self, body):
"""Save notebook depending on user provided output path."""
|
directory = os.path.dirname(self.output_path)
full_path = os.path.join(directory, self.notebook_name)
try:
with open(full_path, 'w') as fh:
fh.write(json.dumps(body, indent=2))
# open() raises IOError/OSError for an invalid path, not ValueError.
except (IOError, OSError):
print('ERROR: Could not save executed notebook to path: ' +
self.output_path +
' -- Please provide a valid absolute path.')
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def execute_notebook(self, data):
"""Execute input notebook and save it to file. If no output path given, the output will be printed to stdout. If any errors occur from executing the notebook's paragraphs, they will be displayed in stderr. """
|
self.create_notebook(data)
self.run_notebook()
self.wait_for_notebook_to_execute()
body = self.get_executed_notebook()
err = False
output = []
for paragraph in body['paragraphs']:
if 'results' in paragraph and paragraph['results']['code'] == 'ERROR':
output.append(paragraph['results']['msg'][0]['data'])
err = True
elif 'result' in paragraph and paragraph['result']['code'] == 'ERROR':
output.append(paragraph['result']['msg'])
err = True
for e in output:
    if e:
        print(e.strip() + '\n', file=sys.stderr)
if err:
sys.exit(1)
if not self.output_path:
print(json.dumps(body, indent=2))
else:
self.save_notebook(body)
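# Usage sketch (hedged): the class name and constructor are assumptions;
# only zeppelin_url, notebook_id, output_path and the methods above are
# visible in this snippet, and run_notebook() is referenced but not shown.
#
#   executor = ZeppelinExecutor(...)          # hypothetical constructor
#   executor.zeppelin_url = "localhost:8080"
#   executor.output_path = "/tmp/notebook-out.json"
#   with open("notebook.json") as f:
#       executor.execute_notebook(json.load(f))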
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _setup_arg_parser(args_list, watcher_plugin_class, health_plugin_class):
""" Configure and return the argument parser for the command line options. If a watcher and/or health-monitor plugin_class is provided then call the add_arguments() callback of the plugin class(es), in order to add plugin specific options. Some parameters are required (vpc and region, for example), but we may be able to discover them automatically, later on. Therefore, we allow them to remain unset on the command line. We will have to complain about those parameters missing later on, if the auto discovery fails. The args_list (from sys.argv) is passed in, since some plugins have to do their own ad-hoc extraction of certain parameters in order to add things to the official parameter list. Return parser and the conf-name of all the arguments that have been added. """
|
parser = argparse.ArgumentParser(
description="VPC router: Manage routes in VPC route table")
# General arguments
parser.add_argument('--verbose', dest="verbose", action='store_true',
help="produces more output")
parser.add_argument('-l', '--logfile', dest='logfile',
default='-',
help="full path name for the logfile, "
"or '-' for logging to stdout, "
"default: '-' (logging to stdout)"),
parser.add_argument('-r', '--region', dest="region_name",
required=False, default=None,
help="the AWS region of the VPC")
parser.add_argument('-v', '--vpc', dest="vpc_id",
required=False, default=None,
help="the ID of the VPC in which to operate")
parser.add_argument('--ignore_routes', dest="ignore_routes",
required=False, default=None,
help="Comma separated list of CIDRs or IPs for "
"routes which vpc-router should ignore.")
parser.add_argument('--route_recheck_interval',
dest="route_recheck_interval",
required=False, default="30", type=int,
help="time between regular checks of VPC route "
"tables, default: 30")
parser.add_argument('-a', '--address', dest="addr",
default="localhost",
help="address to listen on for HTTP requests, "
"default: localhost")
parser.add_argument('-p', '--port', dest="port",
default="33289", type=int,
help="port to listen on for HTTP requests, "
"default: 33289")
parser.add_argument('-m', '--mode', dest='mode', required=True,
help="name of the watcher plugin")
parser.add_argument('-H', '--health', dest='health', required=False,
default=monitor.MONITOR_DEFAULT_PLUGIN,
help="name of the health-check plugin, "
"default: %s" % monitor.MONITOR_DEFAULT_PLUGIN)
arglist = ["logfile", "region_name", "vpc_id", "route_recheck_interval",
"verbose", "addr", "port", "mode", "health", "ignore_routes"]
# Inform the CurrentState object of the main config parameter names, which
# should be rendered in an overview.
CURRENT_STATE.main_param_names = list(arglist)
# Let each watcher and health-monitor plugin add its own arguments.
for plugin_class in [watcher_plugin_class, health_plugin_class]:
if plugin_class:
arglist.extend(plugin_class.add_arguments(parser, args_list))
return parser, arglist
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _parse_args(args_list, watcher_plugin_class, health_plugin_class):
""" Parse command line arguments and return relevant values in a dict. Also perform basic sanity checking on some arguments. If plugin classes have been provided then a callback into those classes is used to extend the arguments with plugin-specific options. Likewise, the sanity checking will then also invoke a callback into the plugins, in order to perform a sanity check on the plugin options. """
|
conf = {}
# Setting up the command line argument parser. Note that we pass the
# complete list of all plugins, so that their parameter can be added to the
# official parameter handling, the help screen, etc. Some plugins may even
# add further plugins themselves, but will handle this themselves.
parser, arglist = _setup_arg_parser(args_list, watcher_plugin_class,
health_plugin_class)
args = parser.parse_args(args_list)
# Transcribe argument values into our own dict
for argname in arglist:
conf[argname] = getattr(args, argname)
# Sanity checking of arguments. Let the watcher and health-monitor plugin
# class check their own arguments.
for plugin_class in [watcher_plugin_class, health_plugin_class]:
if plugin_class:
try:
plugin_class.check_arguments(conf)
except ArgsError as e:
parser.print_help()
raise e
# Sanity checking of other args
if conf['route_recheck_interval'] < 5 and \
conf['route_recheck_interval'] != 0:
raise ArgsError("route_recheck_interval argument must be either 0 "
"or at least 5")
if not 0 < conf['port'] < 65535:
raise ArgsError("Invalid listen port '%d' for built-in http server." %
conf['port'])
if not conf['addr'] == "localhost":
# Check if a proper address was specified (already raises a suitable
# ArgsError if not)
utils.ip_check(conf['addr'])
if conf['ignore_routes']:
# Parse the list of addresses and CIDRs
for a in conf['ignore_routes'].split(","):
a = a.strip()
a = utils.check_valid_ip_or_cidr(a, return_as_cidr=True)
CURRENT_STATE.ignore_routes.append(a)
# Store a reference to the config dict in the current state
CURRENT_STATE.conf = conf
return conf
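# Usage sketch (hedged): the plugin classes would normally come from
# load_plugin(); the argument values below are illustrative only.
#
#   conf = _parse_args(
#       ["-m", "configfile", "-f", "route_spec.json",
#        "-r", "us-east-1", "-v", "vpc-350d6a51"],
#       watcher_plugin_class, health_plugin_class)
#   conf["region_name"]   # -> "us-east-1"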
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def main():
""" Starting point of the executable. """
|
try:
# A bit of a hack: We want to load the plugins (specified via the mode
# and health parameter) in order to add their arguments to the argument
# parser. But this means we first need to look into the CLI arguments
# to find them ... before looking at the arguments. So we first perform
# a manual search through the argument list for this purpose only.
args = sys.argv[1:]
# Loading the watcher plugin
mode_name = utils.param_extract(args, "-m", "--mode", default=None)
if mode_name:
watcher_plugin_class = \
load_plugin(mode_name, watcher.WATCHER_DEFAULT_PLUGIN_MODULE)
else:
watcher_plugin_class = None
# Loading the health monitor plugin
health_check_name = \
utils.param_extract(args, "-H", "--health",
default=monitor.MONITOR_DEFAULT_PLUGIN)
if health_check_name:
health_plugin_class = \
load_plugin(health_check_name,
monitor.MONITOR_DEFAULT_PLUGIN_MODULE)
else:
health_plugin_class = None
# Provide complete arg parsing for vpcrouter and all plugin classes.
conf = _parse_args(sys.argv[1:],
watcher_plugin_class, health_plugin_class)
if not health_plugin_class or not watcher_plugin_class:
logging.error("Watcher plugin or health monitor plugin class "
"are missing.")
sys.exit(1)
_setup_logging(conf)
# If we are on an EC2 instance then some data is already available to
# us. The return data items in the meta data match some of the command
# line arguments, so we can pass this through to the parser function to
# provide defaults for those parameters. Specifically: VPC-ID and
# region name.
if not conf['vpc_id'] or not conf['region_name']:
meta_data = get_ec2_meta_data()
if 'vpc_id' not in meta_data or 'region_name' not in meta_data:
logging.error("VPC and region were not explicitly specified "
"and can't be auto-discovered.")
sys.exit(1)
else:
conf.update(meta_data)
try:
info_str = "vpc-router (%s): mode: %s (%s), " \
"health-check: %s (%s)" % \
(vpcrouter.__version__,
conf['mode'], watcher_plugin_class.get_version(),
health_check_name, health_plugin_class.get_version())
logging.info("*** Starting %s ***" % info_str)
CURRENT_STATE.versions = info_str
http_srv = http_server.VpcRouterHttpServer(conf)
CURRENT_STATE._vpc_router_http = http_srv
watcher.start_watcher(conf,
watcher_plugin_class, health_plugin_class)
http_srv.stop()
logging.info("*** Stopping vpc-router ***")
except Exception as e:
import traceback
traceback.print_exc()
logging.error(e.message)
logging.error("*** Exiting")
except Exception as e:
print "\n*** Error: %s\n" % e.message
sys.exit(1)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_plugins_info(self):
""" Collect the current live info from all the registered plugins. Return a dictionary, keyed on the plugin name. """
|
d = {}
for p in self.plugins:
d.update(p.get_info())
return d
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_state_repr(self, path):
""" Returns the current state, or sub-state, depending on the path. """
|
if path == "ips":
return {
"failed_ips" : self.failed_ips,
"questionable_ips" : self.questionable_ips,
"working_set" : self.working_set,
}
if path == "route_info":
return {
"route_spec" : self.route_spec,
"routes" : self.routes,
"ignore_routes" : self.ignore_routes
}
if path == "plugins":
return self.get_plugins_info()
if path == "vpc":
return self.vpc_state
if path == "":
return {
"SERVER" : {
"version" : self.versions,
"start_time" : self.starttime.isoformat(),
"current_time" : datetime.datetime.now().isoformat()
},
"params" : self.render_main_params(),
"plugins" : {"_href" : "/plugins"},
"ips" : {"_href" : "/ips"},
"route_info" : {"_href" : "/route_info"},
"vpc" : {"_href" : "/vpc"}
}
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def as_json(self, path="", with_indent=False):
""" Return a rendering of the current state in JSON. """
|
if path not in self.top_level_links:
raise StateError("Unknown path")
return json.dumps(self.get_state_repr(path),
indent=4 if with_indent else None)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def as_html(self, path=""):
""" Return a rendering of the current state in HTML. """
|
if path not in self.top_level_links:
raise StateError("Unknown path")
header = """
<html>
<head>
<title>VPC-router state</title>
</head>
<body>
<h3>VPC-router state</h3>
<hr>
<font face="courier">
"""
footer = """
</font>
</body>
</html>
"""
rep = self.get_state_repr(path)
def make_links(rep):
# Recursively create clickable links for _href elements
for e, v in rep.items():
if e == "_href":
v = '<a href=%s>%s</a>' % (v, v)
rep[e] = v
else:
if type(v) == dict:
make_links(v)
make_links(rep)
rep_str_lines = json.dumps(rep, indent=4).split("\n")
buf = []
for l in rep_str_lines:
# Replace leading spaces with '&nbsp;' so the indentation survives in HTML
num_spaces = len(l) - len(l.lstrip())
l = "&nbsp;" * num_spaces + l[num_spaces:]
buf.append(l)
return "%s%s%s" % (header, "<br>\n".join(buf), footer)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def start(self):
""" Start the config watch thread or process. """
|
# Normally, we should start a thread or process here, pass the message
# queue self.q_route_spec to that thread and let it send route
# configurations through that queue. But since we're just sending a
# single, fixed configuration, we can just do that right here.
# Note that the q_route_spec queue was created by the __init__()
# function of the WatcherPlugin base class.
logging.info("Fixedconf watcher plugin: Started")
# The configuration provided on the command line is available to every
# plugin. Here we are reading our own parameters.
cidr = self.conf['fixed_cidr']
hosts = self.conf['fixed_hosts'].split(":")
route_spec = {cidr : hosts}
try:
# Probably don't really have to parse the route spec (sanity check)
# one more time, since we already sanity checked the command line
# options.
common.parse_route_spec_config(route_spec)
self.q_route_spec.put(route_spec)
except Exception as e:
logging.warning("Fixedconf watcher plugin: "
"Invalid route spec: %s" % str(e))
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_arguments(cls, parser, sys_arg_list=None):
""" Callback to add command line options for this plugin to the argparse parser. """
|
parser.add_argument('--fixed_cidr', dest="fixed_cidr", required=True,
help="specify the route CIDR "
"(only in fixedconf mode)")
parser.add_argument('--fixed_hosts', dest="fixed_hosts", required=True,
help="list of host IPs, separated by ':' "
"(only in fixedconf mode)")
return ["fixed_cidr", "fixed_hosts"]
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_arguments(cls, conf):
""" Callback to perform sanity checking for the plugin's specific parameters. """
|
# Perform sanity checking on CIDR
utils.ip_check(conf['fixed_cidr'], netmask_expected=True)
# Perform sanity checking on host list
for host in conf['fixed_hosts'].split(":"):
utils.ip_check(host)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_new_working_set(self):
""" Get a new list of IPs to work with from the queue. This returns None if there is no update. Read all the messages from the queue on which we get the IP addresses that we have to monitor. We will ignore all of them, except the last one, since maybe we received two updates in a row, but each update is a full state, so only the last one matters. Raises the StopReceived exception if the stop signal ("None") was received on the notification queue. """
|
new_list_of_ips = None
while True:
try:
new_list_of_ips = self.q_monitor_ips.get_nowait()
self.q_monitor_ips.task_done()
if type(new_list_of_ips) is MonitorPluginStopSignal:
raise StopReceived()
except Queue.Empty:
# No more messages, all done reading monitor list for now
break
if new_list_of_ips is not None:
CURRENT_STATE.working_set = new_list_of_ips
return new_list_of_ips
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def start_monitoring(self):
""" Monitor IP addresses and send notifications if one of them has failed. This function will continuously monitor q_monitor_ips for new lists of IP addresses to monitor. Each message received there is the full state (the complete lists of addresses to monitor). Push out (return) any failed IPs on q_failed_ips. This is also a list of IPs, which may be empty if all instances work correctly. If q_monitor_ips receives a 'None' instead of list then this is intepreted as a stop signal and the function exits. """
|
time.sleep(1)
# This is our working set. This list may be updated occasionally when
# we receive messages on the q_monitor_ips queue. But irrespective of
# any received updates, the list of IPs in here is regularly checked.
list_of_ips = []
currently_failed_ips = set()
currently_questionable_ips = set()
# Accumulating failed IPs for 10 intervals before rechecking them to
# see if they are alive again
recheck_failed_interval = 10
try:
interval_count = 0
while not CURRENT_STATE._stop_all:
start_time = time.time()
# See if we should update our working set
new_ips = self.get_new_working_set()
if new_ips:
list_of_ips = new_ips
# Update the currently-failed-IP list to only include IPs
# that are still in the spec. The list update may have
# removed some of the historical, failed IPs altogether.
currently_failed_ips = \
set([ip for ip in currently_failed_ips
if ip in list_of_ips])
# Same for the questionable IPs
currently_questionable_ips = \
set([ip for ip in currently_questionable_ips
if ip in list_of_ips])
# Don't check failed IPs for liveness on every interval. We
# keep a list of currently-failed IPs for that purpose.
# But we will check questionable IPs, so we don't exclude
# those.
live_ips_to_check = [ip for ip in list_of_ips if
ip not in currently_failed_ips]
logging.debug("Checking live IPs: %s" %
(",".join(live_ips_to_check)
if live_ips_to_check else "(none alive)"))
# Independent of any updates: Perform health check on all IPs
# in the working set and send messages out about any failed
# ones as necessary.
if live_ips_to_check:
failed_ips, questionable_ips = \
self.do_health_checks(live_ips_to_check)
if failed_ips:
# Update list of currently failed IPs with any new ones
currently_failed_ips.update(failed_ips)
logging.info('Currently failed IPs: %s' %
",".join(currently_failed_ips))
# Let the main loop know the full set of failed IPs
self.q_failed_ips.put(list(currently_failed_ips))
if questionable_ips:
# Update list of currently questionable IPs with any
# new ones
currently_questionable_ips.update(questionable_ips)
logging.info('Currently questionable IPs: %s' %
",".join(currently_questionable_ips))
# Let the main loop know the full set of questionable
# IPs
self.q_questionable_ips.put(
list(currently_questionable_ips))
if interval_count == recheck_failed_interval:
# Every now and then clean out our currently-failed IP cache
# so that we can recheck them to see if they are still
# failed. We also clear out the questionable IPs, so that
# they don't forever accumulate.
interval_count = 0
currently_failed_ips = set()
currently_questionable_ips = set()
# Wait until next monitoring interval: We deduct the time we
# spent in this loop.
end_time = time.time()
time.sleep(self.get_monitor_interval() -
(end_time - start_time))
interval_count += 1
logging.debug("Monitoring loop ended: Global stop")
except StopReceived:
# Received the stop signal, just exiting the thread function
return
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse_route_spec_config(data):
""" Parse and sanity check the route spec config. The config data is a blob of JSON that needs to be in this format: { "<CIDR-1>" : [ "host-1-ip", "host-2-ip", "host-3-ip" ], "<CIDR-2>" : [ "host-4-ip", "host-5-ip" ], "<CIDR-3>" : [ "host-6-ip", "host-7-ip", "host-8-ip", "host-9-ip" ] } Returns the validated route config. This validation is performed on any route-spec pushed out by the config watcher plugin. Duplicate hosts in the host lists are removed. Raises ValueError exception in case of problems. """
|
# Sanity checking on the data object
if type(data) is not dict:
raise ValueError("Expected dictionary at top level")
try:
for k, v in data.items():
utils.ip_check(k, netmask_expected=True)
if type(v) is not list:
raise ValueError("Expect list of IPs as values in dict")
hosts = set(v) # remove duplicates
for ip in hosts:
utils.ip_check(ip)
clean_host_list = sorted(list(hosts))
data[k] = clean_host_list
except ArgsError as e:
raise ValueError(e.message)
return data
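# Worked example: duplicate hosts are removed and the host lists come back
# sorted.
assert parse_route_spec_config(
    {"10.0.0.0/16": ["10.2.2.2", "10.1.1.1", "10.2.2.2"]}) == \
    {"10.0.0.0/16": ["10.1.1.1", "10.2.2.2"]}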
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _update_health_monitor_with_new_ips(route_spec, all_ips, q_monitor_ips):
""" Take the current route spec and compare to the current list of known IP addresses. If the route spec mentiones a different set of IPs, update the monitoring thread with that new list. Return the current set of IPs mentioned in the route spec. """
|
# Extract all the IP addresses from the route spec, unique and sorted.
new_all_ips = \
sorted(set(itertools.chain.from_iterable(route_spec.values())))
if new_all_ips != all_ips:
logging.debug("New route spec detected. Updating "
"health-monitor with: %s" %
",".join(new_all_ips))
# Looks like we have a new list of IPs
all_ips = new_all_ips
q_monitor_ips.put(all_ips)
else:
logging.debug("New route spec detected. No changes in "
"IP address list, not sending update to "
"health-monitor.")
return all_ips
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _event_monitor_loop(region_name, vpc_id, watcher_plugin, health_plugin, iterations, sleep_time, route_check_time_interval=30):
""" Monitor queues to receive updates about new route specs or any detected failed IPs. If any of those have updates, notify the health-monitor thread with a message on a special queue and also re-process the entire routing table. The 'iterations' argument allows us to limit the running time of the watch loop for test purposes. Not used during normal operation. Also, for faster tests, sleep_time can be set to values less than 1. The 'route_check_time_interval' arguments specifies the number of seconds we allow to elapse before forcing a re-check of the VPC routes. This is so that accidentally deleted routes or manually broken route tables can be fixed back up again on their own. """
|
q_route_spec = watcher_plugin.get_route_spec_queue()
q_monitor_ips, q_failed_ips, q_questionable_ips = \
health_plugin.get_queues()
time.sleep(sleep_time) # Wait to allow monitor to report results
current_route_spec = {} # The last route spec we have seen
all_ips = [] # Cache of IP addresses we currently know about
# Occasionally we want to recheck VPC routes even without other updates.
# That way, if a route is manually deleted by someone, it will be
# re-created on its own.
last_route_check_time = time.time()
while not CURRENT_STATE._stop_all:
try:
# Get the latest messages from the route-spec monitor and the
# health-check monitor. At system start the route-spec queue should
# immediately have been initialized with a first message.
failed_ips = utils.read_last_msg_from_queue(q_failed_ips)
questnbl_ips = utils.read_last_msg_from_queue(q_questionable_ips)
new_route_spec = utils.read_last_msg_from_queue(q_route_spec)
if failed_ips:
# Store the failed IPs in the shared state
CURRENT_STATE.failed_ips = failed_ips
if questnbl_ips:
# Store the questionable IPs in the shared state
CURRENT_STATE.questionable_ips = questnbl_ips
if new_route_spec:
# Store the new route spec in the shared state
CURRENT_STATE.route_spec = new_route_spec
current_route_spec = new_route_spec
# Need to communicate a new set of IPs to the health
# monitoring thread, in case the list changed. The list of
# addresses is extracted from the route spec. Pass in the old
# version of the address list, so that this function can
# compare to see if there are any changes to the host list.
all_ips = _update_health_monitor_with_new_ips(new_route_spec,
all_ips,
q_monitor_ips)
# Spec or list of failed or questionable IPs changed? Update
# routes...
# We pass in the last route spec we have seen, since we are also
# here in case we only have failed/questionable IPs, but no new
# route spec. This is also called occasionally on its own, so that
# we can repair any damaged route tables in VPC.
now = time.time()
time_for_regular_recheck = \
(now - last_route_check_time) > route_check_time_interval
if new_route_spec or failed_ips or questnbl_ips or \
time_for_regular_recheck:
if not new_route_spec and not (failed_ips or questnbl_ips):
# Only reason we are here is due to expired timer.
logging.debug("Time for regular route check")
last_route_check_time = now
vpc.handle_spec(region_name, vpc_id, current_route_spec,
failed_ips if failed_ips else [],
questnbl_ips if questnbl_ips else [])
# If iterations are provided, count down and exit
if iterations is not None:
iterations -= 1
if iterations == 0:
break
time.sleep(sleep_time)
except KeyboardInterrupt:
# Allow exit via keyboard interrupt, useful during development
return
except Exception as e:
# Of course we should never get here, but if we do, better to log
# it and keep operating best we can...
import traceback
traceback.print_exc()
logging.error("*** Uncaught exception 1: %s" % str(e))
return
logging.debug("event_monitor_loop ended: Global stop")
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def stop_plugins(watcher_plugin, health_plugin):
""" Stops all plugins. """
|
logging.debug("Stopping health-check monitor...")
health_plugin.stop()
logging.debug("Stopping config change observer...")
watcher_plugin.stop()
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def start_watcher(conf, watcher_plugin_class, health_plugin_class, iterations=None, sleep_time=1):
""" Start watcher loop, listening for config changes or failed hosts. Also starts the various service threads. VPC router watches for any changes in the config and updates/adds/deletes routes as necessary. If failed hosts are reported, routes are also updated as needed. This function starts a few working threads: - The watcher plugin to monitor for updated route specs. - A health monitor plugin for instances mentioned in the route spec. It then drops into a loop to receive messages from the health monitoring thread and watcher plugin and re-process the config if any failed IPs are reported. The loop itself is in its own function to facilitate easier testing. """
|
if CURRENT_STATE._stop_all:
logging.debug("Not starting plugins: Global stop")
return
# Start the working threads (health monitor, config event monitor, etc.)
# and return the thread handles and message queues in a thread-info dict.
watcher_plugin, health_plugin = \
start_plugins(conf, watcher_plugin_class, health_plugin_class,
sleep_time)
CURRENT_STATE.add_plugin(watcher_plugin)
CURRENT_STATE.add_plugin(health_plugin)
# Start the loop to process messages from the monitoring
# threads about any failed IP addresses or updated route specs.
_event_monitor_loop(conf['region_name'], conf['vpc_id'],
watcher_plugin, health_plugin,
iterations, sleep_time, conf['route_recheck_interval'])
# Stopping plugins and collecting all worker threads when we are done
stop_plugins(watcher_plugin, health_plugin)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def build_header(self, title):
"""Generate the header for the Markdown file."""
|
header = ['---',
'title: ' + title,
'author(s): ' + self.user,
'tags: ',
'created_at: ' + str(self.date_created),
'updated_at: ' + str(self.date_updated),
'tldr: ',
'thumbnail: ',
'---']
self.out = header + self.out
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def build_code(self, lang, body):
"""Wrap text with markdown specific flavour."""
|
self.out.append("```" + lang)
self.build_markdown(lang, body)
self.out.append("```")
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def process_input(self, paragraph):
"""Parse paragraph for the language of the code and the code itself."""
|
try:
lang, body = paragraph.split(None, 1)
except ValueError:
lang, body = paragraph, None
if not lang.strip().startswith('%'):
lang = 'scala'
body = paragraph.strip()
else:
lang = lang.strip()[1:]
if lang == 'md':
self.build_markdown(lang, body)
else:
self.build_code(lang, body)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_md_row(self, row, header=False):
"""Translate row into markdown format."""
|
if not row:
return
cols = row.split('\t')
if len(cols) == 1:
self.out.append(cols[0])
else:
col_md = '|'
underline_md = '|'
if cols:
for col in cols:
col_md += col + '|'
underline_md += '-|'
if header:
self.out.append(col_md + '\n' + underline_md)
else:
self.out.append(col_md)
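A standalone re-statement of the same transformation, for a quick sanity check (a hypothetical helper, not part of the converter class):

def md_row(row, header=False):
    # Mirrors create_md_row: tab-separated cells become a Markdown table row.
    cols = row.split('\t')
    if len(cols) == 1:
        return cols[0]
    line = '|' + '|'.join(cols) + '|'
    if header:
        return line + '\n|' + '-|' * len(cols)
    return line

assert md_row('name\tcount', header=True) == '|name|count|\n|-|-|'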
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def build_output(self, fout):
"""Squash self.out into string. Join every line in self.out with a new line and write the result to the output file. """
|
        fout.write('\n'.join(self.out))
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def convert(self, json, fout):
"""Convert json to markdown. Takes in a .json file as input and convert it to Markdown format, saving the generated .png images into ./images. """
|
self.build_markdown_body(json) # create the body
self.build_header(json['name']) # create the md header
self.build_output(fout)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def build_markdown_body(self, text):
"""Generate the body for the Markdown file. - processes each json block one by one - for each block, process: - the creator of the notebook (user) - the date the notebook was created - the date the notebook was last updated - the input by detecting the editor language - the output by detecting the output format """
|
key_options = {
'dateCreated': self.process_date_created,
'dateUpdated': self.process_date_updated,
'title': self.process_title,
'text': self.process_input
}
for paragraph in text['paragraphs']:
if 'user' in paragraph:
self.user = paragraph['user']
for key, handler in key_options.items():
if key in paragraph:
handler(paragraph[key])
if self._RESULT_KEY in paragraph:
self.process_results(paragraph)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def build_image(self, msg):
"""Convert base64 encoding to png. Strips msg of the base64 image encoding and outputs the images to the specified directory. """
|
result = self.find_message(msg)
if result is None:
return
self.index += 1
images_path = 'images'
if self.directory:
images_path = os.path.join(self.directory, images_path)
if not os.path.isdir(images_path):
os.makedirs(images_path)
with open('{0}/output_{1}.png'.format(images_path, self.index), 'wb') as fh:
self.write_image_to_disk(msg, result, fh)
        self.out.append(
            '\n\n'.format(images_path, self.index))
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def process_results(self, paragraph):
"""Route Zeppelin output types to corresponding handlers."""
|
if 'result' in paragraph and paragraph['result']['msg']:
msg = paragraph['result']['msg']
self.output_options[paragraph['result']['type']](msg)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def process_results(self, paragraph):
"""Routes Zeppelin output types to corresponding handlers."""
|
if 'editorMode' in paragraph['config']:
mode = paragraph['config']['editorMode'].split('/')[-1]
if 'results' in paragraph and paragraph['results']['msg']:
msg = paragraph['results']['msg'][0]
if mode not in ('text', 'markdown'):
self.output_options[msg['type']](msg['data'])
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def handle_route_spec_request():
""" Process request for route spec. Either a new one is posted or the current one is to be retrieved. """
|
try:
if bottle.request.method == 'GET':
            # Just return what we currently have cached as the route spec
data = CURRENT_STATE.route_spec
if not data:
bottle.response.status = 404
msg = "Route spec not found!"
else:
bottle.response.status = 200
msg = json.dumps(data)
else:
# A new route spec is posted
raw_data = bottle.request.body.read()
new_route_spec = json.loads(raw_data)
logging.info("New route spec posted")
common.parse_route_spec_config(new_route_spec)
_Q_ROUTE_SPEC.put(new_route_spec)
bottle.response.status = 200
msg = "Ok"
except ValueError as e:
logging.error("Config ignored: %s" % str(e))
bottle.response.status = 400
msg = "Config ignored: %s" % str(e)
except Exception as e:
logging.error("Exception while processing HTTP request: %s" % str(e))
bottle.response.status = 500
msg = "Internal server error"
bottle.response.content_type = 'application/json'
return msg
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def start(self):
""" Start the HTTP change monitoring thread. """
|
# Store reference to message queue in module global variable, so that
# our Bottle app handler functions have easy access to it.
global _Q_ROUTE_SPEC
_Q_ROUTE_SPEC = self.q_route_spec
logging.info("Http watcher plugin: "
"Starting to watch for route spec on "
"'%s:%s/route_spec'..." %
(self.conf['addr'], self.conf['port']))
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cache_results(function):
"""Return decorated function that caches the results."""
|
def save_to_permacache():
"""Save the in-memory cache data to the permacache.
There is a race condition here between two processes updating at the
same time. It's perfectly acceptable to lose and/or corrupt the
permacache information as each process's in-memory cache will remain
        intact.
"""
update_from_permacache()
try:
with open(filename, 'wb') as fp:
pickle.dump(cache, fp, pickle.HIGHEST_PROTOCOL)
except IOError:
pass # Ignore permacache saving exceptions
def update_from_permacache():
"""Attempt to update newer items from the permacache."""
try:
with open(filename, 'rb') as fp:
permacache = pickle.load(fp)
except Exception: # TODO: Handle specific exceptions
return # It's okay if it cannot load
for key, value in permacache.items():
if key not in cache or value[0] > cache[key][0]:
cache[key] = value
cache = {}
cache_expire_time = 3600
try:
filename = os.path.join(gettempdir(), 'update_checker_cache.pkl')
update_from_permacache()
except NotImplementedError:
filename = None
@wraps(function)
def wrapped(obj, package_name, package_version, **extra_data):
"""Return cached results if available."""
now = time.time()
key = (package_name, package_version)
if not obj.bypass_cache and key in cache: # Check the in-memory cache
cache_time, retval = cache[key]
if now - cache_time < cache_expire_time:
return retval
retval = function(obj, package_name, package_version, **extra_data)
cache[key] = now, retval
if filename:
save_to_permacache()
return retval
return wrapped
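A hedged usage sketch: the decorated callable's first argument must expose a 'bypass_cache' attribute, as the wrapper above expects.

class _FakeChecker(object):
    bypass_cache = False

@cache_results
def lookup(obj, package_name, package_version, **extra_data):
    return '%s-%s' % (package_name, package_version)  # stand-in for a slow call

checker = _FakeChecker()
lookup(checker, 'requests', '2.0.0')  # computed, then cached (and permacached)
lookup(checker, 'requests', '2.0.0')  # served from the in-memory cache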
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def pretty_date(the_datetime):
"""Attempt to return a human-readable time delta string."""
|
# Source modified from
# http://stackoverflow.com/a/5164027/176978
diff = datetime.utcnow() - the_datetime
if diff.days > 7 or diff.days < 0:
return the_datetime.strftime('%A %B %d, %Y')
elif diff.days == 1:
return '1 day ago'
elif diff.days > 1:
return '{0} days ago'.format(diff.days)
elif diff.seconds <= 1:
return 'just now'
elif diff.seconds < 60:
return '{0} seconds ago'.format(diff.seconds)
elif diff.seconds < 120:
return '1 minute ago'
elif diff.seconds < 3600:
return '{0} minutes ago'.format(int(round(diff.seconds / 60)))
elif diff.seconds < 7200:
return '1 hour ago'
else:
return '{0} hours ago'.format(int(round(diff.seconds / 3600)))
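Illustrative calls (outputs shown for a hypothetical current time):

from datetime import datetime, timedelta

print(pretty_date(datetime.utcnow() - timedelta(seconds=30)))  # '30 seconds ago'
print(pretty_date(datetime.utcnow() - timedelta(days=2)))      # '2 days ago'
print(pretty_date(datetime.utcnow() - timedelta(days=30)))     # e.g. 'Friday March 07, 2025'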
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def update_check(package_name, package_version, bypass_cache=False, url=None, **extra_data):
"""Convenience method that outputs to stdout if an update is available."""
|
checker = UpdateChecker(url)
checker.bypass_cache = bypass_cache
result = checker.check(package_name, package_version, **extra_data)
if result:
print(result)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check(self, package_name, package_version, **extra_data):
"""Return a UpdateResult object if there is a newer version."""
|
data = extra_data
data['package_name'] = package_name
data['package_version'] = package_version
data['python_version'] = sys.version.split()[0]
data['platform'] = platform.platform(True) or 'Unspecified'
try:
headers = {'connection': 'close',
'content-type': 'application/json'}
response = requests.put(self.url, json.dumps(data), timeout=1,
headers=headers)
if response.status_code == codes.UNPROCESSABLE_ENTITY:
return 'update_checker does not support {!r}'.format(
package_name)
data = response.json()
except (requests.exceptions.RequestException, ValueError):
return None
if not data or not data.get('success') \
or (parse_version(package_version) >=
parse_version(data['data']['version'])):
return None
return UpdateResult(package_name, running=package_version,
available=data['data']['version'],
release_date=data['data']['upload_time'])
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def addMonitor(self, monitorFriendlyName, monitorURL):
""" Returns True if Monitor was added, otherwise False. """
|
url = self.baseUrl
url += "newMonitor?apiKey=%s" % self.apiKey
url += "&monitorFriendlyName=%s" % monitorFriendlyName
url += "&monitorURL=%s&monitorType=1" % monitorURL
url += "&monitorAlertContacts=%s" % monitorAlertContacts
url += "&noJsonCallback=1&format=json"
success, response = self.requestApi(url)
if success:
return True
else:
return False
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getMonitors(self, response_times=0, logs=0, uptime_ratio=''):
""" Returns status and response payload for all known monitors. """
|
url = self.baseUrl
url += "getMonitors?apiKey=%s" % (self.apiKey)
url += "&noJsonCallback=1&format=json"
# responseTimes - optional (defines if the response time data of each
# monitor will be returned. Should be set to 1 for getting them. Default
# is 0)
if response_times:
url += "&responseTimes=1"
# logs - optional (defines if the logs of each monitor will be returned.
# Should be set to 1 for getting the logs. Default is 0)
if logs:
url += '&logs=1'
# customUptimeRatio - optional (defines the number of days to calculate
# the uptime ratio(s) for. Ex: customUptimeRatio=7-30-45 to get the
# uptime ratios for those periods)
if uptime_ratio:
url += '&customUptimeRatio=%s' % uptime_ratio
return self.requestApi(url)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getMonitorById(self, monitorId):
""" Returns monitor status and alltimeuptimeratio for a MonitorId. """
|
url = self.baseUrl
url += "getMonitors?apiKey=%s&monitors=%s" % (self.apiKey, monitorId)
url += "&noJsonCallback=1&format=json"
success, response = self.requestApi(url)
if success:
status = response.get('monitors').get('monitor')[0].get('status')
alltimeuptimeratio = response.get('monitors').get('monitor')[0].get('alltimeuptimeratio')
return status, alltimeuptimeratio
return None, None
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getMonitorByName(self, monitorFriendlyName):
""" Returns monitor status and alltimeuptimeratio for a MonitorFriendlyName. """
|
url = self.baseUrl
url += "getMonitors?apiKey=%s" % self.apiKey
url += "&noJsonCallback=1&format=json"
success, response = self.requestApi(url)
if success:
monitors = response.get('monitors').get('monitor')
            for monitor in monitors:
if monitor.get('friendlyname') == monitorFriendlyName:
status = monitor.get('status')
alltimeuptimeratio = monitor.get('alltimeuptimeratio')
return status, alltimeuptimeratio
return None, None
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def editMonitor(self, monitorID, monitorStatus=None, monitorFriendlyName=None, monitorURL=None, monitorType=None, monitorSubType=None, monitorPort=None, monitorKeywordType=None, monitorKeywordValue=None, monitorHTTPUsername=None, monitorHTTPPassword=None, monitorAlertContacts=None):
""" monitorID is the only required object. All others are optional and must be quoted. Returns Response object from api. """
|
url = self.baseUrl
url += "editMonitor?apiKey=%s" % self.apiKey
url += "&monitorID=%s" % monitorID
if monitorStatus:
            # Pause or start the monitor
url += "&monitorStatus=%s" % monitorStatus
if monitorFriendlyName:
# Update their FriendlyName
url += "&monitorFriendlyName=%s" % monitorFriendlyName
if monitorURL:
            # Edit the monitor URL
url += "&monitorURL=%s" % monitorURL
if monitorType:
            # Edit the type of monitor
url += "&monitorType=%s" % monitorType
if monitorSubType:
# Edit the SubType
url += "&monitorSubType=%s" % monitorSubType
if monitorPort:
# Edit the Port
url += "&monitorPort=%s" % monitorPort
if monitorKeywordType:
# Edit the Keyword Type
url += "&monitorKeywordType=%s" % monitorKeywordType
if monitorKeywordValue:
# Edit the Keyword Match
url += "&monitorKeywordValue=%s" % monitorKeywordValue
if monitorHTTPUsername:
# Edit the HTTP Username
url += "&monitorHTTPUsername=%s" % monitorHTTPUsername
if monitorHTTPPassword:
# Edit the HTTP Password
url += "&monitorHTTPPassword=%s" % monitorHTTPPassword
if monitorAlertContacts:
# Edit the contacts
url += "&monitorAlertContacts=%s" % monitorAlertContacts
url += "&noJsonCallback=1&format=json"
        response = self.requestApi(url)
        return response
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def deleteMonitorById(self, monitorID):
""" Returns True or False if monitor is deleted """
|
url = self.baseUrl
url += "deleteMonitor?apiKey=%s" % self.apiKey
url += "&monitorID=%s" % monitorID
url += "&noJsonCallback=1&format=json"
success, response = self.requestApi(url)
if success:
return True
else:
return False
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getAlertContacts(self, alertContacts=None, offset=None, limit=None):
""" Get Alert Contacts """
|
url = self.baseUrl
url += "getAlertContacts?apiKey=%s" % self.apiKey
if alertContacts:
url += "&alertContacts=%s" % alertContacts
if offset:
url += "&offset=%s" % offset
if limit:
url += "&limit=%s" % limit
url += "&format=json"
return self.requestApi(url)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fromfile(file_, threadpool_size=None, ignore_lock=False):
""" Instantiate BlockStorageRAM device from a file saved in block storage format. The file_ argument can be a file object or a string that represents a filename. If called with a file object, it should be opened in binary mode, and the caller is responsible for closing the file. This method returns a BlockStorageRAM instance. """
|
close_file = False
if not hasattr(file_, 'read'):
file_ = open(file_, 'rb')
close_file = True
try:
header_data = file_.read(BlockStorageRAM._index_offset)
block_size, block_count, user_header_size, locked = \
struct.unpack(
BlockStorageRAM._index_struct_string,
header_data)
if locked and (not ignore_lock):
raise IOError(
"Can not open block storage device because it is "
"locked by another process. To ignore this check, "
"call this method with the keyword 'ignore_lock' "
"set to True.")
header_offset = len(header_data) + \
user_header_size
f = bytearray(header_offset + \
(block_size * block_count))
f[:header_offset] = header_data + file_.read(user_header_size)
f[header_offset:] = file_.read(block_size * block_count)
finally:
if close_file:
file_.close()
return BlockStorageRAM(f,
threadpool_size=threadpool_size,
ignore_lock=ignore_lock)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def tofile(self, file_):
""" Dump all storage data to a file. The file_ argument can be a file object or a string that represents a filename. If called with a file object, it should be opened in binary mode, and the caller is responsible for closing the file. The method should only be called after the storage device has been closed to ensure that the locked flag has been set to False. """
|
close_file = False
if not hasattr(file_, 'write'):
file_ = open(file_, 'wb')
close_file = True
file_.write(self._f)
if close_file:
file_.close()
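A hedged round-trip sketch: serialize a closed device to an in-memory buffer and restore it (assumes 'device' is a BlockStorageRAM instance and that close() clears the locked flag, as the docstrings above describe).

import io

buf = io.BytesIO()
device.close()            # must be closed first so the lock flag is cleared
device.tofile(buf)
buf.seek(0)
restored = BlockStorageRAM.fromfile(buf)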
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def log_to_logger(fn):
""" Wrap a Bottle request so that a log line is emitted after it's handled. """
|
@wraps(fn)
def _log_to_logger(*args, **kwargs):
actual_response = fn(*args, **kwargs)
# modify this to log exactly what you need:
logger.info('%s %s %s %s' % (bottle.request.remote_addr,
bottle.request.method,
bottle.request.url,
bottle.response.status))
return actual_response
return _log_to_logger
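A hedged usage sketch: any callable that wraps a route callback can serve as a Bottle plugin, so the decorator can be installed application-wide. Assumes a module-level 'logger' is configured, as the wrapper expects.

import logging
import bottle

logger = logging.getLogger("http-access")

app = bottle.Bottle()
app.install(log_to_logger)

@app.route('/ping')
def ping():
    return "pong"  # handling this route now emits one access-log line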
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def handle_request(path):
""" Return the current status. """
|
accept = bottle.request.get_header("accept", default="text/plain")
bottle.response.status = 200
try:
if "text/html" in accept:
ret = CURRENT_STATE.as_html(path=path)
bottle.response.content_type = "text/html"
elif "application/json" in accept:
ret = CURRENT_STATE.as_json(path=path)
bottle.response.content_type = "application/json"
elif "text/" in accept or "*/*" in accept:
ret = CURRENT_STATE.as_json(path=path, with_indent=True)
bottle.response.content_type = "text/plain"
else:
            bottle.response.status = 406  # Not Acceptable
ret = "Cannot render data in acceptable content type"
except StateError:
bottle.response.status = 404
ret = "Requested state component not found"
return ret
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def start(self):
""" Start the HTTP server thread. """
|
logging.info("HTTP server: "
"Starting to listen for requests on '%s:%s'..." %
(self.conf['addr'], self.conf['port']))
self.my_server = MyWSGIRefServer(host=self.conf['addr'],
port=self.conf['port'],
romana_http=self)
self.http_thread = threading.Thread(
target = APP.run,
name = "HTTP",
kwargs = {"quiet" : True, "server" : self.my_server})
self.http_thread.daemon = True
self.http_thread.start()
time.sleep(1)
if not self.wsgi_server_started:
# Set the global flag indicating that everything should stop
CURRENT_STATE._stop_all = True
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def stop(self):
""" Stop the HTTP server thread. """
|
self.my_server.stop()
self.http_thread.join()
logging.info("HTTP server: Stopped")
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def update_stats(self, responses, no_responses):
""" Maintain some stats about our requests. """
|
slowest_rtt = 0.0
slowest_ip = None
fastest_rtt = 9999999.9
fastest_ip = None
rtt_total = 0.0
for ip, rtt in responses.items():
rtt_total += rtt
if rtt > slowest_rtt:
slowest_rtt = rtt
slowest_ip = ip
            # Independent 'if' (not 'elif'): an IP that just updated the
            # slowest value must still be considered for fastest.
            if rtt < fastest_rtt:
fastest_rtt = rtt
fastest_ip = ip
sorted_rtts = sorted(responses.values())
l = len(sorted_rtts)
if l == 0:
median_rtt = 0.0
elif l % 2 == 1:
# Odd number: Median is the middle element
median_rtt = sorted_rtts[int(l / 2)]
else:
# Even number (average between two middle elements)
median_rtt = (sorted_rtts[int(l / 2) - 1] +
sorted_rtts[int(l / 2)]) / 2.0
now = datetime.datetime.now().isoformat()
m = {
"time" : now,
"num_responses" : len(responses),
"num_no_responses" : len(no_responses),
"slowest" : {
"ip" : slowest_ip,
"rtt" : slowest_rtt
},
"fastest" : {
"ip" : fastest_ip,
"rtt" : fastest_rtt
},
"average_rtt" : rtt_total / len(responses),
"median_rtt" : median_rtt
}
self.measurements.insert(0, m)
self.measurements = self.measurements[:self.max_num_measurements]
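The median rule above, extracted into a hypothetical helper for a quick sanity check:

def _median(values):
    s = sorted(values)
    l = len(s)
    if l == 0:
        return 0.0
    if l % 2 == 1:
        return s[int(l / 2)]                              # odd: middle element
    return (s[int(l / 2) - 1] + s[int(l / 2)]) / 2.0      # even: mean of middle two

assert _median([0.01, 0.02, 0.03]) == 0.02
assert _median([0.01, 0.02, 0.03, 0.04]) == 0.025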
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def do_health_checks(self, list_of_ips):
""" Perform a health check on a list of IP addresses, using ICMPecho. Return tuple with list of failed IPs and questionable IPs. """
|
# Calculate a decent overall timeout time for a ping attempt: 3/4th of
# the monitoring interval. That way, we know we're done with this ping
# attempt before the next monitoring attempt is started.
ping_timeout = self.get_monitor_interval() * 0.75
# Calculate a decent number of retries. For very short intervals we
# shouldn't have any retries, for very long ones, we should have
# several ones. Converting the timeout to an integer gives us what we
# want: For timeouts less than 1 we have no retry at all.
num_retries = int(ping_timeout)
try:
self.ping_count += len(list_of_ips)
responses, no_responses = multiping.multi_ping(
list_of_ips, ping_timeout, num_retries)
self.update_stats(responses, no_responses)
except Exception as e:
logging.error("Exception while trying to monitor servers: %s" %
str(e))
# Need to assume all IPs failed
no_responses = list_of_ips
return no_responses, []
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_arguments(cls, parser, sys_arg_list=None):
""" Arguments for the ICMPecho health monitor plugin. """
|
parser.add_argument('--icmp_check_interval',
dest='icmp_check_interval',
required=False, default=2, type=float,
help="ICMPecho interval in seconds, default 2 "
"(only for 'icmpecho' health monitor plugin)")
return ["icmp_check_interval"]
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def ip_check(ip, netmask_expected=False):
""" Sanity check that the specified string is indeed an IP address or mask. """
|
try:
if netmask_expected:
if "/" not in ip:
raise netaddr.core.AddrFormatError()
netaddr.IPNetwork(ip)
else:
netaddr.IPAddress(ip)
except netaddr.core.AddrFormatError:
if netmask_expected:
raise ArgsError("Not a valid CIDR (%s)" % ip)
else:
raise ArgsError("Not a valid IP address (%s)" % ip)
except Exception as e:
raise ArgsError("Invalid format: %s" % str(e))
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_valid_ip_or_cidr(val, return_as_cidr=False):
""" Checks that the value is a valid IP address or a valid CIDR. Returns the specified value. If 'return_as_cidr' is set then the return value will always be in the form of a CIDR, even if a plain IP address was specified. """
|
is_ip = True
if "/" in val:
ip_check(val, netmask_expected=True)
is_ip = False
else:
ip_check(val, netmask_expected=False)
if return_as_cidr and is_ip:
# Convert a plain IP to a CIDR
if val == "0.0.0.0":
# Special case for the default route
val = "0.0.0.0/0"
else:
val = "%s/32" % val
try:
ipaddress.IPv4Network(unicode(val))
except Exception as e:
raise ArgsError("Not a valid network: %s" % str(e))
return val
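Illustrative calls:

assert check_valid_ip_or_cidr("10.0.0.1") == "10.0.0.1"
assert check_valid_ip_or_cidr("10.0.0.1", return_as_cidr=True) == "10.0.0.1/32"
assert check_valid_ip_or_cidr("0.0.0.0", return_as_cidr=True) == "0.0.0.0/0"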
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def is_cidr_in_cidr(small_cidr, big_cidr):
""" Return True if the small CIDR is contained in the big CIDR. """
|
# The default route (0.0.0.0/0) is handled differently, since every route
# would always be contained in there. Instead, only a small CIDR of
# "0.0.0.0/0" can match against it. Other small CIDRs will always result in
# 'False' (not contained).
if small_cidr == "0.0.0.0/0":
return big_cidr == "0.0.0.0/0"
else:
if big_cidr == "0.0.0.0/0":
return False
s = ipaddress.IPv4Network(unicode(small_cidr))
b = ipaddress.IPv4Network(unicode(big_cidr))
return s.subnet_of(b)
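Illustrative calls:

assert is_cidr_in_cidr("10.1.2.0/24", "10.1.0.0/16")
assert not is_cidr_in_cidr("10.1.0.0/16", "10.1.2.0/24")
# The default route only matches itself as the smaller CIDR:
assert is_cidr_in_cidr("0.0.0.0/0", "0.0.0.0/0")
assert not is_cidr_in_cidr("10.1.2.0/24", "0.0.0.0/0")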
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read_last_msg_from_queue(q):
""" Read all messages from a queue and return the last one. This is useful in many cases where all messages are always the complete state of things. Therefore, intermittent messages can be ignored. Doesn't block, returns None if there is no message waiting in the queue. """
|
msg = None
while True:
try:
# The list of IPs is always a full list.
msg = q.get_nowait()
q.task_done()
except Queue.Empty:
# No more messages, all done for now
return msg
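Illustrative behaviour:

import Queue

q = Queue.Queue()
for spec in ("old-1", "old-2", "current"):
    q.put(spec)
print(read_last_msg_from_queue(q))  # "current" (older messages are drained)
print(read_last_msg_from_queue(q))  # None (queue is now empty)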
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def param_extract(args, short_form, long_form, default=None):
""" Quick extraction of a parameter from the command line argument list. In some cases we need to parse a few arguments before the official arg-parser starts. Returns parameter value, or None if not present. """
|
val = default
for i, a in enumerate(args):
# Long form may use "--xyz=foo", so need to split on '=', but it
# doesn't necessarily do that, can also be "--xyz foo".
elems = a.split("=", 1)
if elems[0] in [short_form, long_form]:
            # At least make sure that an actual value was specified
if len(elems) == 1:
if i + 1 < len(args) and not args[i + 1].startswith("-"):
val = args[i + 1]
else:
val = "" # Invalid value was specified
else:
val = elems[1]
break
return val
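Illustrative calls:

args = ["--mode", "fast", "--logfile=/tmp/x.log", "-v"]
print(param_extract(args, "-m", "--mode"))     # "fast"
print(param_extract(args, "-l", "--logfile"))  # "/tmp/x.log"
print(param_extract(args, "-v", "--verbose"))  # "" (present, but no value)
print(param_extract(args, "-q", "--quiet"))    # None (not present)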
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def base10_integer_to_basek_string(k, x):
"""Convert an integer into a base k string."""
|
if not (2 <= k <= max_k_labeled):
raise ValueError("k must be in range [2, %d]: %s"
% (max_k_labeled, k))
return ((x == 0) and numerals[0]) or \
(base10_integer_to_basek_string(k, x // k).\
lstrip(numerals[0]) + numerals[x % k])
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def basek_string_to_base10_integer(k, x):
"""Convert a base k string into an integer."""
|
assert 1 < k <= max_k_labeled
return sum(numeral_index[c]*(k**i)
for i, c in enumerate(reversed(x)))
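A hedged round-trip check; the module-level alphabet below is an assumption and the real module may define numerals differently:

import string

numerals = string.digits + string.ascii_lowercase  # assumed base-36 alphabet
numeral_index = dict((c, i) for i, c in enumerate(numerals))
max_k_labeled = len(numerals)

assert base10_integer_to_basek_string(16, 255) == 'ff'
assert basek_string_to_base10_integer(16, 'ff') == 255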
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def calculate_bucket_level(k, b):
""" Calculate the level in which a 0-based bucket lives inside of a k-ary heap. """
|
assert k >= 2
if k == 2:
return log2floor(b+1)
v = (k - 1) * (b + 1) + 1
h = 0
while k**(h+1) < v:
h += 1
return h
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def calculate_last_common_level(k, b1, b2):
""" Calculate the highest level after which the paths from the root to these buckets diverge. """
|
l1 = calculate_bucket_level(k, b1)
l2 = calculate_bucket_level(k, b2)
while l1 > l2:
b1 = (b1-1)//k
l1 -= 1
while l2 > l1:
b2 = (b2-1)//k
l2 -= 1
while b1 != b2:
b1 = (b1-1)//k
b2 = (b2-1)//k
l1 -= 1
return l1
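A worked example in a binary heap (k=2, 0-based buckets: root=0, children of 0 are 1 and 2, children of 1 are 3 and 4); this relies on the module's log2floor helper via calculate_bucket_level:

assert calculate_last_common_level(2, 3, 4) == 1  # paths diverge below bucket 1
assert calculate_last_common_level(2, 3, 5) == 0  # only the root is shared
assert calculate_last_common_level(2, 3, 3) == 2  # identical paths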
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
|
def write_as_dot(self, f, data=None, max_levels=None):
"Write the tree in the dot language format to f."
assert (max_levels is None) or (max_levels >= 0)
def visit_node(n, levels):
lbl = "{"
if data is None:
if self.k <= max_k_labeled:
lbl = repr(n.label()).\
replace("{","\{").\
replace("}","\}").\
replace("|","\|").\
replace("<","\<").\
replace(">","\>")
else:
lbl = str(n)
else:
s = self.bucket_to_block(n.bucket)
for i in xrange(self.blocks_per_bucket):
lbl += "{%s}" % (data[s+i])
if i + 1 != self.blocks_per_bucket:
lbl += "|"
lbl += "}"
f.write(" %s [penwidth=%s,label=\"%s\"];\n"
% (n.bucket, 1, lbl))
levels += 1
if (max_levels is None) or (levels <= max_levels):
for i in xrange(self.k):
cn = n.child_node(i)
if not self.is_nil_node(cn):
visit_node(cn, levels)
f.write(" %s -> %s ;\n" % (n.bucket, cn.bucket))
f.write("// Created by SizedVirtualHeap.write_as_dot(...)\n")
f.write("digraph heaptree {\n")
f.write("node [shape=record]\n")
if (max_levels is None) or (max_levels > 0):
visit_node(self.root_node(), 1)
f.write("}\n")
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
|
def save_image_as_pdf(self, filename, data=None, max_levels=None):
"Write the heap as PDF file."
assert (max_levels is None) or (max_levels >= 0)
import os
if not filename.endswith('.pdf'):
filename = filename+'.pdf'
tmpfd, tmpname = tempfile.mkstemp(suffix='dot')
with open(tmpname, 'w') as f:
self.write_as_dot(f, data=data, max_levels=max_levels)
os.close(tmpfd)
try:
            subprocess.call(['dot',
                             tmpname,
                             '-Tpdf',
                             '-o',
                             filename])
except OSError:
sys.stderr.write(
"DOT -> PDF conversion failed. See DOT file: %s\n"
% (tmpname))
return False
os.remove(tmpname)
return True
|