<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def best_actions(self): """A tuple containing the actions whose action sets have the best prediction."""
if self._best_actions is None:
    best_prediction = self.best_prediction
    self._best_actions = tuple(
        action
        for action, action_set in self._action_sets.items()
        if action_set.prediction == best_prediction
    )
return self._best_actions
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def select_action(self): """Select an action according to the action selection strategy of the associated algorithm. If an action has already been selected, raise a ValueError instead. Usage: if match_set.selected_action is None: match_set.select_action() Arguments: None Return: The action that was selected by the action selection strategy. """
if self._selected_action is not None:
    raise ValueError("The action has already been selected.")
strategy = self._algorithm.action_selection_strategy
self._selected_action = strategy(self)
return self._selected_action
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _set_selected_action(self, action): """Setter method for the selected_action property."""
assert action in self._action_sets
if self._selected_action is not None:
    raise ValueError("The action has already been selected.")
self._selected_action = action
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _set_payoff(self, payoff): """Setter method for the payoff property."""
if self._selected_action is None:
    raise ValueError("The action has not been selected yet.")
if self._closed:
    raise ValueError("The payoff for this match set has already "
                     "been applied.")
self._payoff = float(payoff)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def pay(self, predecessor): """If the predecessor is not None, gives the appropriate amount of payoff to the predecessor in payment for its contribution to this match set's expected future payoff. The predecessor argument should be either None or a MatchSet instance whose selected action led directly to this match set's situation. Usage: match_set = model.match(situation) match_set.pay(previous_match_set) Arguments: predecessor: The MatchSet instance which was produced by the same classifier set in response to the immediately preceding situation, or None if this is the first situation in the scenario. Return: None """
assert predecessor is None or isinstance(predecessor, MatchSet)
if predecessor is not None:
    expectation = self._algorithm.get_future_expectation(self)
    predecessor.payoff += expectation
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def add(self, rule): """Add a new classifier rule to the classifier set. Return a list containing zero or more rules that were deleted from the classifier by the algorithm in order to make room for the new rule. The rule argument should be a ClassifierRule instance. The behavior of this method depends on whether the rule already exists in the classifier set. When a rule is already present, the rule's numerosity is added to that of the version of the rule already present in the population. Otherwise, the new rule is captured. Note that this means that for rules already present in the classifier set, the metadata of the existing rule is not overwritten by that of the one passed in as an argument. Usage: displaced_rules = model.add(rule) Arguments: rule: A ClassifierRule instance which is to be added to this classifier set. Return: A possibly empty list of ClassifierRule instances which were removed altogether from the classifier set (as opposed to simply having their numerosities decremented) in order to make room for the newly added rule. """
assert isinstance(rule, ClassifierRule)

condition = rule.condition
action = rule.action

# If the rule already exists in the population, then we virtually
# add the rule by incrementing the existing rule's numerosity. This
# prevents redundancy in the rule set. Otherwise we capture the
# new rule.
if condition not in self._population:
    self._population[condition] = {}

if action in self._population[condition]:
    existing_rule = self._population[condition][action]
    existing_rule.numerosity += rule.numerosity
else:
    self._population[condition][action] = rule

# Any time we add a rule, we need to call this to keep the
# population size under control.
return self._algorithm.prune(self)
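A minimal standalone sketch of the numerosity-merge behavior above; `SimpleRule` is hypothetical (the real ClassifierRule carries more metadata), and the population shape mirrors the two-level condition -> action -> rule mapping used by add():

    class SimpleRule:
        def __init__(self, condition, action, numerosity=1):
            self.condition, self.action, self.numerosity = condition, action, numerosity

    population = {}

    def add(rule):
        by_action = population.setdefault(rule.condition, {})
        if rule.action in by_action:
            by_action[rule.action].numerosity += rule.numerosity  # merge duplicate
        else:
            by_action[rule.action] = rule                         # capture new rule

    add(SimpleRule('01#1', 'left'))
    add(SimpleRule('01#1', 'left'))
    print(population['01#1']['left'].numerosity)  # 2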
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get(self, rule, default=None): """Return the existing version of the given rule. If the rule is not present in the classifier set, return the default. If no default was given, use None. This is useful for eliminating duplicate copies of rules. Usage: unique_rule = model.get(possible_duplicate, possible_duplicate) Arguments: rule: The ClassifierRule instance which may be a duplicate of another already contained in the classifier set. default: The value returned if the rule is not a duplicate of another already contained in the classifier set. Return: If the rule is a duplicate of another already contained in the classifier set, the existing one is returned. Otherwise, the value of default is returned. """
assert isinstance(rule, ClassifierRule)
if (rule.condition not in self._population or
        rule.action not in self._population[rule.condition]):
    return default
return self._population[rule.condition][rule.action]
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def ping(config_file, profile, solver_def, json_output, request_timeout, polling_timeout): """Ping the QPU by submitting a single-qubit problem."""
now = utcnow()
info = dict(datetime=now.isoformat(),
            timestamp=datetime_to_timestamp(now),
            code=0)

def output(fmt, **kwargs):
    info.update(kwargs)
    if not json_output:
        click.echo(fmt.format(**kwargs))

def flush():
    if json_output:
        click.echo(json.dumps(info))

try:
    _ping(config_file, profile, solver_def,
          request_timeout, polling_timeout, output)
except CLIError as error:
    output("Error: {error} (code: {code})", error=str(error), code=error.code)
    sys.exit(error.code)
except Exception as error:
    output("Unhandled error: {error}", error=str(error))
    sys.exit(127)
finally:
    flush()
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def solvers(config_file, profile, solver_def, list_solvers): """Get solver details. Unless solver name/id specified, fetch and display details for all online solvers available on the configured endpoint. """
with Client.from_config(
        config_file=config_file, profile=profile, solver=solver_def) as client:

    try:
        solvers = client.get_solvers(**client.default_solver)
    except SolverNotFoundError:
        click.echo("Solver(s) {} not found.".format(solver_def))
        return 1

    if list_solvers:
        for solver in solvers:
            click.echo(solver.id)
        return

    # ~YAML output
    for solver in solvers:
        click.echo("Solver: {}".format(solver.id))
        click.echo(" Parameters:")
        for name, val in sorted(solver.parameters.items()):
            click.echo(" {}: {}".format(name, strtrunc(val) if val else '?'))
        solver.properties.pop('parameters', None)
        click.echo(" Properties:")
        for name, val in sorted(solver.properties.items()):
            click.echo(" {}: {}".format(name, strtrunc(val)))
        click.echo(" Derived properties:")
        for name in sorted(solver.derived_properties):
            click.echo(" {}: {}".format(name, strtrunc(getattr(solver, name))))
        click.echo()
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def sample(config_file, profile, solver_def, biases, couplings, random_problem, num_reads, verbose): """Submit Ising-formulated problem and return samples."""
# TODO: de-dup wrt ping

def echo(s, maxlen=100):
    click.echo(s if verbose else strtrunc(s, maxlen))

try:
    client = Client.from_config(
        config_file=config_file, profile=profile, solver=solver_def)
except Exception as e:
    click.echo("Invalid configuration: {}".format(e))
    return 1
if config_file:
    echo("Using configuration file: {}".format(config_file))
if profile:
    echo("Using profile: {}".format(profile))
echo("Using endpoint: {}".format(client.endpoint))

try:
    solver = client.get_solver()
except SolverAuthenticationError:
    click.echo("Authentication error. Check credentials in your configuration file.")
    return 1
except (InvalidAPIResponseError, UnsupportedSolverError):
    click.echo("Invalid or unexpected API response.")
    return 2
except SolverNotFoundError:
    click.echo("Solver with the specified features does not exist.")
    return 3

echo("Using solver: {}".format(solver.id))

if random_problem:
    linear, quadratic = generate_random_ising_problem(solver)
else:
    try:
        linear = ast.literal_eval(biases) if biases else []
    except Exception as e:
        click.echo("Invalid biases: {}".format(e))
    try:
        quadratic = ast.literal_eval(couplings) if couplings else {}
    except Exception as e:
        click.echo("Invalid couplings: {}".format(e))

echo("Using qubit biases: {!r}".format(linear))
echo("Using qubit couplings: {!r}".format(quadratic))
echo("Number of samples: {}".format(num_reads))

try:
    result = solver.sample_ising(linear, quadratic, num_reads=num_reads).result()
except Exception as e:
    click.echo(e)
    return 4

if verbose:
    click.echo("Result: {!r}".format(result))

echo("Samples: {!r}".format(result['samples']))
echo("Occurrences: {!r}".format(result['occurrences']))
echo("Energies: {!r}".format(result['energies']))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_input_callback(samplerate, params, num_samples=256): """Return a function that produces samples of a sine. Parameters samplerate : float The sample rate. params : dict Parameters for FM generation. num_samples : int, optional Number of samples to be generated on each call. """
amplitude = params['mod_amplitude']
frequency = params['mod_frequency']

def producer():
    """Generate samples.

    Yields
    ------
    samples : ndarray
        A number of samples (`num_samples`) of the sine.
    """
    start_time = 0
    while True:
        time = start_time + np.arange(num_samples) / samplerate
        start_time += num_samples / samplerate
        output = amplitude * np.cos(2 * np.pi * frequency * time)
        yield output

return lambda p=producer(): next(p)
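A minimal usage sketch for the factory above; the parameter values are illustrative and assume only the two keys actually read by get_input_callback:

    params = {'mod_amplitude': 1.0, 'mod_frequency': 5.0}  # illustrative values
    read_block = get_input_callback(samplerate=48000, params=params)

    block = read_block()         # 256 samples of the modulator sine (default)
    assert block.shape == (256,)
    next_block = read_block()    # the generator keeps its phase across calls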
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_playback_callback(resampler, samplerate, params): """Return a sound playback callback. Parameters resampler The resampler from which samples are read. samplerate : float The sample rate. params : dict Parameters for FM generation. """
def callback(outdata, frames, time, _):
    """Playback callback.

    Read samples from the resampler and modulate them onto a carrier
    frequency.
    """
    last_fmphase = getattr(callback, 'last_fmphase', 0)
    df = params['fm_gain'] * resampler.read(frames)
    df = np.pad(df, (0, frames - len(df)), mode='constant')
    t = time.outputBufferDacTime + np.arange(frames) / samplerate
    phase = 2 * np.pi * params['carrier_frequency'] * t
    fmphase = last_fmphase + 2 * np.pi * np.cumsum(df) / samplerate
    outdata[:, 0] = params['output_volume'] * np.cos(phase + fmphase)
    callback.last_fmphase = fmphase[-1]

return callback
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def main(source_samplerate, target_samplerate, params, converter_type): """Setup the resampling and audio output callbacks and start playback."""
from time import sleep

ratio = target_samplerate / source_samplerate

with sr.CallbackResampler(get_input_callback(source_samplerate, params),
                          ratio, converter_type) as resampler, \
     sd.OutputStream(channels=1, samplerate=target_samplerate,
                     callback=get_playback_callback(
                         resampler, target_samplerate, params)):
    print("Playing back... Ctrl+C to stop.")
    try:
        while True:
            sleep(1)
    except KeyboardInterrupt:
        print("Aborting.")
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def max_num_reads(self, **params): """Returns the maximum number of reads for the given solver parameters. Args: **params: Parameters for the sampling method. Relevant to num_reads: - annealing_time - readout_thermalization - num_reads - programming_thermalization Returns: int: The maximum number of reads. """
# dev note: in the future it would be good to have a way of doing this
# server-side, as we are duplicating logic here.

properties = self.properties

if self.software or not params:
    # software solvers don't use any of the above parameters
    return properties['num_reads_range'][1]

# qpu
_, duration = properties['problem_run_duration_range']

annealing_time = params.get('annealing_time',
                            properties['default_annealing_time'])

readout_thermalization = params.get('readout_thermalization',
                                    properties['default_readout_thermalization'])

programming_thermalization = params.get(
    'programming_thermalization',
    properties['default_programming_thermalization'])

return min(properties['num_reads_range'][1],
           int((duration - programming_thermalization)
               / (annealing_time + readout_thermalization)))
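To make the arithmetic concrete, here is the same bound computed with made-up property values (none of these numbers come from a real solver; real values live in solver.properties):

    properties = {
        'num_reads_range': [1, 10000],
        'problem_run_duration_range': [0, 1000000],  # microseconds, illustrative
        'default_annealing_time': 20,
        'default_readout_thermalization': 0,
        'default_programming_thermalization': 1000,
    }

    _, duration = properties['problem_run_duration_range']
    bound = int((duration - properties['default_programming_thermalization'])
                / (properties['default_annealing_time']
                   + properties['default_readout_thermalization']))
    print(min(properties['num_reads_range'][1], bound))  # min(10000, 49950) == 10000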
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _sample(self, type_, linear, quadratic, params): """Internal method for both sample_ising and sample_qubo. Args: linear (list/dict): Linear terms of the model. quadratic (dict of (int, int): float): Quadratic terms of the model. **params: Parameters for the sampling method, specified per solver. Returns: :obj: `Future` """
# Check the problem
if not self.check_problem(linear, quadratic):
    raise ValueError("Problem graph incompatible with solver.")

# Mix the new parameters with the default parameters
combined_params = dict(self._params)
combined_params.update(params)

# Check the parameters before submitting
for key in combined_params:
    if key not in self.parameters and not key.startswith('x_'):
        raise KeyError("{} is not a parameter of this solver.".format(key))

# transform some of the parameters in-place
self._format_params(type_, combined_params)

body = json.dumps({
    'solver': self.id,
    'data': encode_bqm_as_qp(self, linear, quadratic),
    'type': type_,
    'params': combined_params
})
_LOGGER.trace("Encoded sample request: %s", body)

future = Future(solver=self, id_=None, return_matrix=self.return_matrix,
                submission_data=(type_, linear, quadratic, params))

_LOGGER.debug("Submitting new problem to: %s", self.id)
self.client._submit(body, future)

return future
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _format_params(self, type_, params): """Reformat some of the parameters for sapi."""
if 'initial_state' in params:
    # NB: at this moment the error raised when initial_state does not match
    # lin/quad (in active qubits) is not very informative, but there is also
    # no clean way to check here that they match because lin can be either
    # a list or a dict. In the future it would be good to check.
    initial_state = params['initial_state']
    if isinstance(initial_state, Mapping):
        initial_state_list = [3] * self.properties['num_qubits']

        low = -1 if type_ == 'ising' else 0

        for v, val in initial_state.items():
            if val == 3:
                continue
            if val <= 0:
                initial_state_list[v] = low
            else:
                initial_state_list[v] = 1

        params['initial_state'] = initial_state_list
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def check_problem(self, linear, quadratic): """Test if an Ising model matches the graph provided by the solver. Args: linear (list/dict): Linear terms of the model (h). quadratic (dict of (int, int): float): Quadratic terms of the model (J). Returns: boolean Examples: This example creates a client using the local system's default D-Wave Cloud Client configuration file, which is configured to access a D-Wave 2000Q QPU, and tests a simple :term:`Ising` model for two target embeddings (that is, representations of the model's graph by coupled qubits on the QPU's sparsely connected graph), where only the second is valid. """
for key, value in uniform_iterator(linear):
    if value != 0 and key not in self.nodes:
        return False
for key, value in uniform_iterator(quadratic):
    if value != 0 and tuple(key) not in self.edges:
        return False
return True
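The same membership test, sketched standalone with plain sets standing in for the solver's qubit/coupler graph (the graph and problems below are hypothetical):

    def check_problem(nodes, edges, linear, quadratic):
        # Nonzero linear terms must land on available qubits, and nonzero
        # quadratic terms on available couplers.
        if any(v != 0 and k not in nodes for k, v in linear.items()):
            return False
        if any(v != 0 and tuple(k) not in edges for k, v in quadratic.items()):
            return False
        return True

    nodes, edges = {0, 1, 2}, {(0, 1), (1, 2)}
    print(check_problem(nodes, edges, {0: 1.0}, {(0, 1): -1.0}))  # True
    print(check_problem(nodes, edges, {5: 1.0}, {}))              # False: no qubit 5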
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _retrieve_problem(self, id_): """Resume polling for a problem previously submitted. Args: id_: Identification of the query. Returns: :obj: `Future` """
future = Future(self, id_, self.return_matrix, None)
self.client._poll(future)
return future
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _get_converter_type(identifier): """Return the converter type for `identifier`."""
if isinstance(identifier, str):
    return ConverterType[identifier]
if isinstance(identifier, ConverterType):
    return identifier
return ConverterType(identifier)
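Assuming ConverterType is an enum mirroring libsamplerate's five converters (the stand-in below mimics the samplerate package's values, but treat it as an assumption), all three identifier forms resolve to the same member:

    from enum import Enum

    class ConverterType(Enum):  # stand-in; values assumed to mirror libsamplerate
        sinc_best = 0
        sinc_medium = 1
        sinc_fastest = 2
        zero_order_hold = 3
        linear = 4

    print(_get_converter_type('sinc_fastest'))              # lookup by name
    print(_get_converter_type(ConverterType.sinc_fastest))  # already a member
    print(_get_converter_type(2))                           # lookup by value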
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def resample(input_data, ratio, converter_type='sinc_best', verbose=False): """Resample the signal in `input_data` at once. Parameters input_data : ndarray Input data. A single channel is provided as a 1D array of `num_frames` length. Input data with several channels is represented as a 2D array of shape (`num_frames`, `num_channels`). For use with `libsamplerate`, `input_data` is converted to 32-bit float and C (row-major) memory order. ratio : float Conversion ratio = output sample rate / input sample rate. converter_type : ConverterType, str, or int Sample rate converter. verbose : bool If `True`, print additional information about the conversion. Returns ------- output_data : ndarray Resampled input data. Note ---- If samples are to be processed in chunks, `Resampler` and `CallbackResampler` will provide better results and allow for variable conversion ratios. """
from samplerate.lowlevel import src_simple
from samplerate.exceptions import ResamplingError

input_data = np.require(input_data, requirements='C', dtype=np.float32)
if input_data.ndim == 2:
    num_frames, channels = input_data.shape
    output_shape = (int(num_frames * ratio), channels)
elif input_data.ndim == 1:
    num_frames, channels = input_data.size, 1
    output_shape = (int(num_frames * ratio), )
else:
    raise ValueError('rank > 2 not supported')

output_data = np.empty(output_shape, dtype=np.float32)
converter_type = _get_converter_type(converter_type)

(error, input_frames_used, output_frames_gen) \
    = src_simple(input_data, output_data, ratio,
                 converter_type.value, channels)

if error != 0:
    raise ResamplingError(error)

if verbose:
    info = ('samplerate info:\n'
            '{} input frames used\n'
            '{} output frames generated\n'
            .format(input_frames_used, output_frames_gen))
    print(info)

return (output_data[:output_frames_gen, :]
        if channels > 1 else output_data[:output_frames_gen])
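A quick usage sketch, downsampling one second of a 440 Hz sine from 48 kHz to 16 kHz with the function above:

    import numpy as np

    fs_in, fs_out = 48000, 16000
    t = np.arange(fs_in) / fs_in
    tone = np.sin(2 * np.pi * 440 * t)

    out = resample(tone, ratio=fs_out / fs_in, converter_type='sinc_best')
    print(len(tone), len(out))  # 48000 and roughly 16000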
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def set_ratio(self, new_ratio): """Set a new conversion ratio immediately."""
from samplerate.lowlevel import src_set_ratio
return src_set_ratio(self._state, new_ratio)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def process(self, input_data, ratio, end_of_input=False, verbose=False): """Resample the signal in `input_data`. Parameters input_data : ndarray Input data. A single channel is provided as a 1D array of `num_frames` length. Input data with several channels is represented as a 2D array of shape (`num_frames`, `num_channels`). For use with `libsamplerate`, `input_data` is converted to 32-bit float and C (row-major) memory order. ratio : float Conversion ratio = output sample rate / input sample rate. end_of_input : int Set to `True` if no more data is available, or to `False` otherwise. verbose : bool If `True`, print additional information about the conversion. Returns ------- output_data : ndarray Resampled input data. """
from samplerate.lowlevel import src_process
from samplerate.exceptions import ResamplingError

input_data = np.require(input_data, requirements='C', dtype=np.float32)
if input_data.ndim == 2:
    num_frames, channels = input_data.shape
    output_shape = (int(num_frames * ratio), channels)
elif input_data.ndim == 1:
    num_frames, channels = input_data.size, 1
    output_shape = (int(num_frames * ratio), )
else:
    raise ValueError('rank > 2 not supported')

if channels != self._channels:
    raise ValueError('Invalid number of channels in input data.')

output_data = np.empty(output_shape, dtype=np.float32)

(error, input_frames_used, output_frames_gen) = src_process(
    self._state, input_data, output_data, ratio, end_of_input)

if error != 0:
    raise ResamplingError(error)

if verbose:
    info = ('samplerate info:\n'
            '{} input frames used\n'
            '{} output frames generated\n'
            .format(input_frames_used, output_frames_gen))
    print(info)

return (output_data[:output_frames_gen, :]
        if channels > 1 else output_data[:output_frames_gen])
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _create(self): """Create new callback resampler."""
from samplerate.lowlevel import ffi, src_callback_new, src_delete
from samplerate.exceptions import ResamplingError

state, handle, error = src_callback_new(
    self._callback, self._converter_type.value, self._channels)
if error != 0:
    raise ResamplingError(error)
self._state = ffi.gc(state, src_delete)
self._handle = handle
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def set_starting_ratio(self, ratio): """ Set the starting conversion ratio for the next `read` call. """
from samplerate.lowlevel import src_set_ratio

if self._state is None:
    self._create()
src_set_ratio(self._state, ratio)
self.ratio = ratio
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def reset(self): """Reset state."""
from samplerate.lowlevel import src_reset

if self._state is None:
    self._create()
src_reset(self._state)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def read(self, num_frames): """Read a number of frames from the resampler. Parameters num_frames : int Number of frames to read. Returns ------- output_data : ndarray Resampled frames as a (`num_output_frames`, `num_channels`) or (`num_output_frames`,) array. Note that this may return fewer frames than requested, for example when no more input is available. """
from samplerate.lowlevel import src_callback_read, src_error
from samplerate.exceptions import ResamplingError

if self._state is None:
    self._create()

if self._channels > 1:
    output_shape = (num_frames, self._channels)
elif self._channels == 1:
    output_shape = (num_frames, )

output_data = np.empty(output_shape, dtype=np.float32)

ret = src_callback_read(self._state, self._ratio, num_frames, output_data)

if ret == 0:
    error = src_error(self._state)
    if error:
        raise ResamplingError(error)

return (output_data[:ret, :]
        if self._channels > 1 else output_data[:ret])
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_variance(seq): """ Batch variance calculation. """
m = get_mean(seq)
return sum((v - m) ** 2 for v in seq) / float(len(seq))
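Note this divides by len(seq), so it is the population variance rather than the sample variance. A worked example, assuming get_mean is the ordinary arithmetic mean:

    # seq = [1, 2, 3, 4]; mean = 2.5
    # squared deviations: 2.25 + 0.25 + 0.25 + 2.25 = 5.0
    # population variance: 5.0 / 4 = 1.25
    print(get_variance([1, 2, 3, 4]))  # 1.25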
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def mean_absolute_error(seq, correct): """ Batch mean absolute error calculation. """
assert len(seq) == len(correct)
diffs = [abs(a - b) for a, b in zip(seq, correct)]
return sum(diffs) / float(len(diffs))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def normalize(seq): """ Scales each number in the sequence so that the sum of all numbers equals 1. """
s = float(sum(seq))
return [v / s for v in seq]
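For example (the sum must be nonzero, or the division fails):

    print(normalize([1, 2, 5]))  # [0.125, 0.25, 0.625] -- the values sum to 1.0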
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def entropy_variance(data, class_attr=None, method=DEFAULT_CONTINUOUS_METRIC): """ Calculates the variance of a continuous class attribute, to be used as an entropy metric. """
assert method in CONTINUOUS_METRICS, \
    "Unknown entropy variance metric: %s" % (method,)
assert (class_attr is None and isinstance(data, dict)) \
    or (class_attr is not None and isinstance(data, list))
if isinstance(data, dict):
    lst = data
else:
    lst = [record.get(class_attr) for record in data]
return get_variance(lst)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def majority_value(data, class_attr): """ Creates a list of all values in the target attribute for each record in the data list object, and returns the value that appears in this list the most frequently. """
if is_continuous(data[0][class_attr]):
    return CDist(seq=[record[class_attr] for record in data])
else:
    return most_frequent([record[class_attr] for record in data])
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def most_frequent(lst): """ Returns the item that appears most frequently in the given list. """
lst = lst[:]
highest_freq = 0
most_freq = None
for val in unique(lst):
    if lst.count(val) > highest_freq:
        most_freq = val
        highest_freq = lst.count(val)
return most_freq
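For example (note that each lst.count() call rescans the whole list, so this is quadratic in the input size):

    print(most_frequent(['a', 'b', 'a', 'c', 'a']))  # 'a' -- appears three times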
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def unique(lst): """ Returns a list made up of the unique values found in lst. i.e., it removes the redundant values in lst. """
lst = lst[:]
unique_lst = []

# Cycle through the list and add each value to the unique list only once.
for item in lst:
    if unique_lst.count(item) <= 0:
        unique_lst.append(item)

# Return the list with all redundant values removed.
return unique_lst
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def create_decision_tree(data, attributes, class_attr, fitness_func, wrapper, **kwargs): """ Returns a new decision tree based on the examples given. """
split_attr = kwargs.get('split_attr', None)
split_val = kwargs.get('split_val', None)

assert class_attr not in attributes
node = None
data = list(data) if isinstance(data, Data) else data
if wrapper.is_continuous_class:
    stop_value = CDist(seq=[r[class_attr] for r in data])
    # For a continuous class case, stop if all the remaining records have
    # a variance below the given threshold.
    stop = wrapper.leaf_threshold is not None \
        and stop_value.variance <= wrapper.leaf_threshold
else:
    stop_value = DDist(seq=[r[class_attr] for r in data])
    # For a discrete class, stop if all remaining records have the same
    # classification.
    stop = len(stop_value.counts) <= 1

if not data or len(attributes) <= 0:
    # If the dataset is empty or the attributes list is empty, return the
    # default value. The target attribute is not in the attributes list, so
    # we need not subtract 1 to account for the target attribute.
    if wrapper:
        wrapper.leaf_count += 1
    return stop_value
elif stop:
    # If all the records in the dataset have the same classification,
    # return that classification.
    if wrapper:
        wrapper.leaf_count += 1
    return stop_value
else:
    # Choose the next best attribute to best classify our data
    best = choose_attribute(
        data,
        attributes,
        class_attr,
        fitness_func,
        method=wrapper.metric)

    # Create a new decision tree/node with the best attribute and an empty
    # dictionary object--we'll fill that up next.
    # tree = {best:{}}
    node = Node(tree=wrapper, attr_name=best)
    node.n += len(data)

    # Create a new decision tree/sub-node for each of the values in the
    # best attribute field
    for val in get_values(data, best):
        # Create a subtree for the current value under the "best" field
        subtree = create_decision_tree(
            [r for r in data if r[best] == val],
            [attr for attr in attributes if attr != best],
            class_attr,
            fitness_func,
            split_attr=best,
            split_val=val,
            wrapper=wrapper)

        # Add the new subtree to the empty dictionary object in our new
        # tree/node we just created.
        if isinstance(subtree, Node):
            node._branches[val] = subtree
        elif isinstance(subtree, (CDist, DDist)):
            node.set_leaf_dist(attr_value=val, dist=subtree)
        else:
            raise Exception("Unknown subtree type: %s" % (type(subtree),))

return node
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def add(self, k, count=1): """ Increments the count for the given element. """
self.counts[k] += count
self.total += count
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def best(self): """ Returns the element with the highest probability. """
b = (-1e999999, None)
for k, c in iteritems(self.counts):
    b = max(b, (c, k))
return b[1]
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def update(self, dist): """ Adds the given distribution's counts to the current distribution. """
assert isinstance(dist, DDist)
for k, c in iteritems(dist.counts):
    self.counts[k] += c
self.total += dist.total
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def probability_lt(self, x): """ Returns the probability of a random variable being less than the given value. """
if self.mean is None:
    return
return normdist(x=x, mu=self.mean, sigma=self.standard_deviation)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def probability_in(self, a, b): """ Returns the probability of a random variable falling between the given values. """
if self.mean is None:
    return
p1 = normdist(x=a, mu=self.mean, sigma=self.standard_deviation)
p2 = normdist(x=b, mu=self.mean, sigma=self.standard_deviation)
return abs(p1 - p2)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def probability_gt(self, x): """ Returns the probability of a random variable being greater than the given value. """
if self.mean is None:
    return
p = normdist(x=x, mu=self.mean, sigma=self.standard_deviation)
return 1 - p
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def copy_no_data(self): """ Returns a copy of the object without any data. """
return type(self)(
    [],
    order=list(self.header_modes),
    types=self.header_types.copy(),
    modes=self.header_modes.copy())
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def is_valid(self, name, value): """ Returns true if the given value matches the type for the given name according to the schema. Returns false otherwise. """
if name not in self.header_types:
    return False
t = self.header_types[name]
if t == ATTR_TYPE_DISCRETE:
    return isinstance(value, int)
elif t == ATTR_TYPE_CONTINUOUS:
    return isinstance(value, (float, Decimal))
return True
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _read_header(self): """ When a CSV file is given, extracts header information from the file. Otherwise, this header data must be explicitly given when the object is instantiated. """
if not self.filename or self.header_types:
    return

rows = csv.reader(open(self.filename))
#header = rows.next()
header = next(rows)

self.header_types = {}  # {attr_name: type}
self._class_attr_name = None
self.header_order = []  # [attr_name, ...]

for el in header:
    matches = ATTR_HEADER_PATTERN.findall(el)
    assert matches, "Invalid header element: %s" % (el,)
    el_name, el_type, el_mode = matches[0]
    el_name = el_name.strip()
    self.header_order.append(el_name)
    self.header_types[el_name] = el_type
    if el_mode == ATTR_MODE_CLASS:
        assert self._class_attr_name is None, \
            "Multiple class attributes are not supported."
        self._class_attr_name = el_name
    else:
        assert self.header_types[el_name] != ATTR_TYPE_CONTINUOUS, \
            "Non-class continuous attributes are not supported."

assert self._class_attr_name, "A class attribute must be specified."
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def validate_row(self, row): """ Ensure each element in the row matches the schema. """
clean_row = {}
if isinstance(row, (tuple, list)):
    assert self.header_order, "No attribute order specified."
    assert len(row) == len(self.header_order), \
        "Row length does not match header length."
    itr = zip(self.header_order, row)
else:
    assert isinstance(row, dict)
    itr = iteritems(row)
for el_name, el_value in itr:
    if self.header_types[el_name] == ATTR_TYPE_DISCRETE:
        clean_row[el_name] = int(el_value)
    elif self.header_types[el_name] == ATTR_TYPE_CONTINUOUS:
        clean_row[el_name] = float(el_value)
    else:
        clean_row[el_name] = el_value
return clean_row
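A standalone sketch of the coercion rules with a hypothetical schema (the attribute names and type strings are illustrative stand-ins for the ATTR_TYPE_* constants):

    header_types = {'age': 'discrete', 'weight': 'continuous', 'label': 'nominal'}

    def coerce(name, value):
        t = header_types[name]
        if t == 'discrete':
            return int(value)
        if t == 'continuous':
            return float(value)
        return value  # nominal values pass through unchanged

    row = {'age': '42', 'weight': '70.5', 'label': 'cat'}
    print({k: coerce(k, v) for k, v in row.items()})
    # {'age': 42, 'weight': 70.5, 'label': 'cat'}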
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def split(self, ratio=0.5, leave_one_out=False): """ Returns two Data instances, containing the data randomly split between the two according to the given ratio. The first instance will contain the ratio of data specified. The second instance will contain the remaining ratio of data. If leave_one_out is True, the ratio will be ignored and the first instance will contain exactly one record for each class label, and the second instance will contain all remaining data. """
a_labels = set()
a = self.copy_no_data()
b = self.copy_no_data()
for row in self:
    if leave_one_out and not self.is_continuous_class:
        label = row[self.class_attribute_name]
        if label not in a_labels:
            a_labels.add(label)
            a.data.append(row)
        else:
            b.data.append(row)
    elif not a:
        a.data.append(row)
    elif not b:
        b.data.append(row)
    elif random.random() <= ratio:
        a.data.append(row)
    else:
        b.data.append(row)
return a, b
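Hypothetical usage, assuming `data` is a populated Data instance:

    train, test = data.split(ratio=0.8)          # roughly 80% / 20%, chosen at random
    seed, rest = data.split(leave_one_out=True)  # one record per class label in `seed`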
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _get_attribute_value_for_node(self, record): """ Gets the closest value for the current node's attribute matching the given record. """
# Abort if this node has not yet split on an attribute.
if self.attr_name is None:
    return

# Otherwise, lookup the attribute value for this node in the
# given record.
attr = self.attr_name
attr_value = record[attr]
attr_values = self.get_values(attr)
if attr_value in attr_values:
    return attr_value
else:
    # The value of the attribute in the given record does not directly
    # map to any previously known values, so apply a missing value
    # policy.
    policy = self.tree.missing_value_policy.get(attr)
    assert policy, \
        ("No missing value policy specified for attribute %s.") \
        % (attr,)
    if policy == USE_NEAREST:
        # Use the value that the tree has seen that also has the
        # smallest Euclidean distance to the actual value.
        assert self.tree.data.header_types[attr] \
            in (ATTR_TYPE_DISCRETE, ATTR_TYPE_CONTINUOUS), \
            "The use-nearest policy is invalid for nominal types."
        nearest = (1e999999, None)
        for _value in attr_values:
            nearest = min(
                nearest, (abs(_value - attr_value), _value))
        _, nearest_value = nearest
        return nearest_value
    else:
        raise Exception("Unknown missing value policy: %s" % (policy,))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_values(self, attr_name): """ Retrieves the unique set of values seen for the given attribute at this node. """
ret = list(self._attr_value_cdist[attr_name].keys()) \
    + list(self._attr_value_counts[attr_name].keys()) \
    + list(self._branches.keys())
ret = set(ret)
return ret
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_best_splitting_attr(self): """ Returns the name of the attribute with the highest gain. """
best = (-1e999999, None)
for attr in self.attributes:
    best = max(best, (self.get_gain(attr), attr))
best_gain, best_attr = best
return best_attr
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_gain(self, attr_name): """ Calculates the information gain from splitting on the given attribute. """
subset_entropy = 0.0
for value in iterkeys(self._attr_value_counts[attr_name]):
    value_prob = self.get_value_prob(attr_name, value)
    e = self.get_entropy(attr_name, value)
    subset_entropy += value_prob * e
return (self.main_entropy - subset_entropy)
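This is the standard ID3 information gain, Gain(S, A) = H(S) - sum over values v of P(v) * H(S_v). A tiny standalone check of the arithmetic, assuming two equally likely attribute values that each isolate one class:

    import math

    def entropy(probs):
        return -sum(p * math.log(p, 2) for p in probs if p)

    main_entropy = entropy([0.5, 0.5])  # 1.0 bit for a 50/50 class mix
    subset_entropy = 0.5 * entropy([1.0]) + 0.5 * entropy([1.0])  # both branches pure
    print(main_entropy - subset_entropy)  # 1.0 -- a perfect split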
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_value_ddist(self, attr_name, attr_value): """ Returns the class value probability distribution of the given attribute value. """
assert not self.tree.data.is_continuous_class, \
    "Discrete distributions are only maintained for " + \
    "discrete class types."
ddist = DDist()
cls_counts = self._attr_class_value_counts[attr_name][attr_value]
for cls_value, cls_count in iteritems(cls_counts):
    ddist.add(cls_value, count=cls_count)
return ddist
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_value_prob(self, attr_name, value): """ Returns the value probability of the given attribute at this node. """
if attr_name not in self._attr_value_count_totals:
    return
n = self._attr_value_counts[attr_name][value]
d = self._attr_value_count_totals[attr_name]
return n / float(d)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def predict(self, record, depth=0): """ Returns the estimated value of the class attribute for the given record. """
# Check if we're ready to predict.
if not self.ready_to_predict:
    raise NodeNotReadyToPredict

# Lookup attribute value.
attr_value = self._get_attribute_value_for_node(record)

# Propagate decision to leaf node.
if self.attr_name:
    if attr_value in self._branches:
        try:
            return self._branches[attr_value].predict(record, depth=depth + 1)
        except NodeNotReadyToPredict:
            #TODO:allow re-raise if user doesn't want an intermediate prediction?
            pass

# Otherwise make decision at current node.
if self.attr_name:
    if self._tree.data.is_continuous_class:
        return self._attr_value_cdist[self.attr_name][attr_value].copy()
    else:
        # return self._class_ddist.copy()
        return self.get_value_ddist(self.attr_name, attr_value)
elif self._tree.data.is_continuous_class:
    # Make decision at current node, which may be a true leaf node
    # or an incomplete branch in a tree currently being built.
    assert self._class_cdist is not None
    return self._class_cdist.copy()
else:
    return self._class_ddist.copy()
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def ready_to_split(self): """ Returns true if this node is ready to branch off additional nodes. Returns false otherwise. """
# Never split if we're a leaf that predicts adequately.
threshold = self._tree.leaf_threshold
if self._tree.data.is_continuous_class:
    var = self._class_cdist.variance
    if var is not None and threshold is not None \
            and var <= threshold:
        return False
else:
    best_prob = self._class_ddist.best_prob
    if best_prob is not None and threshold is not None \
            and best_prob >= threshold:
        return False

return self._tree.auto_grow \
    and not self.attr_name \
    and self.n >= self._tree.splitting_n
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def set_leaf_dist(self, attr_value, dist): """ Sets the probability distribution at a leaf node. """
assert self.attr_name
assert self.tree.data.is_valid(self.attr_name, attr_value), \
    "Value %s is invalid for attribute %s." \
    % (attr_value, self.attr_name)
if self.is_continuous_class:
    assert isinstance(dist, CDist)
    assert self.attr_name
    self._attr_value_cdist[self.attr_name][attr_value] = dist.copy()
    # self.n += dist.count
else:
    assert isinstance(dist, DDist)
    # {attr_name: {attr_value: count}}
    self._attr_value_counts[self.attr_name][attr_value] += 1
    # {attr_name: total}
    self._attr_value_count_totals[self.attr_name] += 1
    # {attr_name: {attr_value: {class_value: count}}}
    for cls_value, cls_count in iteritems(dist.counts):
        self._attr_class_value_counts[self.attr_name][attr_value] \
            [cls_value] += cls_count
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def train(self, record): """ Incrementally update the statistics at this node. """
self.n += 1
class_attr = self.tree.data.class_attribute_name
class_value = record[class_attr]

# Update class statistics.
is_con = self.tree.data.is_continuous_class
if is_con:
    # For a continuous class.
    self._class_cdist += class_value
else:
    # For a discrete class.
    self._class_ddist.add(class_value)

# Update attribute statistics.
for an, av in iteritems(record):
    if an == class_attr:
        continue
    self._attr_value_counts[an][av] += 1
    self._attr_value_count_totals[an] += 1
    if is_con:
        self._attr_value_cdist[an][av] += class_value
    else:
        self._attr_class_value_counts[an][av][class_value] += 1

# Decide if branch should split on an attribute.
if self.ready_to_split:
    self.attr_name = self.get_best_splitting_attr()
    self.tree.leaf_count -= 1
    for av in self._attr_value_counts[self.attr_name]:
        self._branches[av] = Node(tree=self.tree)
        self.tree.leaf_count += 1

# If we've split, then propagate the update to appropriate sub-branch.
if self.attr_name:
    key = record[self.attr_name]
    del record[self.attr_name]
    self._branches[key].train(record)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def build(cls, data, *args, **kwargs): """ Constructs a classification or regression tree in a single batch by analyzing the given data. """
assert isinstance(data, Data)
if data.is_continuous_class:
    fitness_func = gain_variance
else:
    fitness_func = get_gain

t = cls(data=data, *args, **kwargs)
t._data = data
t.sample_count = len(data)
t._tree = create_decision_tree(
    data=data,
    attributes=data.attribute_names,
    class_attr=data.class_attribute_name,
    fitness_func=fitness_func,
    wrapper=t,
)
return t
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def out_of_bag_mae(self): """ Returns the mean absolute error for predictions on the out-of-bag samples. """
if not self._out_of_bag_mae_clean:
    try:
        self._out_of_bag_mae = self.test(self.out_of_bag_samples)
        self._out_of_bag_mae_clean = True
    except NodeNotReadyToPredict:
        return
return self._out_of_bag_mae.copy()
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def out_of_bag_samples(self): """ Returns the out-of-bag samples list, inside a wrapper to keep track of modifications. """
#TODO:replace with a more generic pass-through wrapper?
class O(object):

    def __init__(self, tree):
        self.tree = tree

    def __len__(self):
        return len(self.tree._out_of_bag_samples)

    def append(self, v):
        self.tree._out_of_bag_mae_clean = False
        return self.tree._out_of_bag_samples.append(v)

    def pop(self, v):
        self.tree._out_of_bag_mae_clean = False
        return self.tree._out_of_bag_samples.pop(v)

    def __iter__(self):
        for _ in self.tree._out_of_bag_samples:
            yield _

return O(self)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def set_missing_value_policy(self, policy, target_attr_name=None): """ Sets the behavior for one or all attributes to use when traversing the tree using a query vector and it encounters a branch that does not exist. """
assert policy in MISSING_VALUE_POLICIES, \
    "Unknown policy: %s" % (policy,)
for attr_name in self.data.attribute_names:
    if target_attr_name is not None and target_attr_name != attr_name:
        continue
    self.missing_value_policy[attr_name] = policy
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def train(self, record): """ Incrementally updates the tree with the given sample record. """
assert self.data.class_attribute_name in record, \
    "The class attribute must be present in the record."
record = record.copy()
self.sample_count += 1
self.tree.train(record)
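Hypothetical streaming usage; the constructor arguments are assumptions, and records are dicts keyed by attribute name:

    tree = Tree(data=schema_data, auto_grow=True)  # assumed constructor signature
    for record in records:
        tree.train(record)  # safe to reuse `record`: train() copies before mutating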
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _fell_trees(self): """ Removes trees from the forest according to the specified fell method. """
if callable(self.fell_method):
    for tree in self.fell_method(list(self.trees)):
        self.trees.remove(tree)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _get_best_prediction(self, record, train=True): """ Gets the prediction from the tree with the lowest mean absolute error. """
if not self.trees:
    return
best = (+1e999999, None)
for tree in self.trees:
    best = min(best, (tree.mae.mean, tree))
_, best_tree = best
prediction, tree_mae = best_tree.predict(record, train=train)
return prediction.mean
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def best_oob_mae_weight(trees): """ Returns weights so that the tree with the smallest out-of-bag mean absolute error receives all of the weight. """
best = (+1e999999, None)
for tree in trees:
    oob_mae = tree.out_of_bag_mae
    if oob_mae is None or oob_mae.mean is None:
        continue
    best = min(best, (oob_mae.mean, tree))
best_mae, best_tree = best
if best_tree is None:
    return
return [(1.0, best_tree)]
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def mean_oob_mae_weight(trees): """ Returns weights proportional to the out-of-bag mean absolute error for each tree. """
weights = []
active_trees = []
for tree in trees:
    oob_mae = tree.out_of_bag_mae
    if oob_mae is None or oob_mae.mean is None:
        continue
    weights.append(oob_mae.mean)
    active_trees.append(tree)
if not active_trees:
    return
weights = normalize(weights)
return zip(weights, active_trees)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _grow_trees(self): """ Adds new trees to the forest according to the specified growth method. """
if self.grow_method == GROW_AUTO_INCREMENTAL:
    self.tree_kwargs['auto_grow'] = True

while len(self.trees) < self.size:
    self.trees.append(Tree(data=self.data, **self.tree_kwargs))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def train(self, record): """ Updates the trees with the given training record. """
self._fell_trees()
self._grow_trees()
for tree in self.trees:
    if random.random() < self.sample_ratio:
        tree.train(record)
    else:
        tree.out_of_bag_samples.append(record)
        while len(tree.out_of_bag_samples) > self.max_out_of_bag_samples:
            tree.out_of_bag_samples.pop(0)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_configfile_paths(system=True, user=True, local=True, only_existing=True): """Return a list of local configuration file paths. Search paths for configuration files on the local system are based on homebase_ and depend on operating system; for example, for Linux systems these might include ``dwave.conf`` in the current working directory (CWD), user-local ``.config/dwave/``, and system-wide ``/etc/dwave/``. .. _homebase: https://github.com/dwavesystems/homebase Args: system (boolean, default=True): Search for system-wide configuration files. user (boolean, default=True): Search for user-local configuration files. local (boolean, default=True): Search for local configuration files (in CWD). only_existing (boolean, default=True): Return only paths for files that exist on the local system. Returns: list[str]: List of configuration file paths. Examples: This example displays all paths to configuration files on a Windows system running Python 2.7 and then finds the single existing configuration file. [u'C:\\ProgramData\\dwavesystem\\dwave\\dwave.conf', u'C:\\Users\\jane\\AppData\\Local\\dwavesystem\\dwave\\dwave.conf', '.\\dwave.conf'] [u'C:\\Users\\jane\\AppData\\Local\\dwavesystem\\dwave\\dwave.conf'] """
candidates = []

# system-wide has the lowest priority, `/etc/dwave/dwave.conf`
if system:
    candidates.extend(homebase.site_config_dir_list(
        app_author=CONF_AUTHOR, app_name=CONF_APP,
        use_virtualenv=False, create=False))

# user-local will override it, `~/.config/dwave/dwave.conf`
if user:
    candidates.append(homebase.user_config_dir(
        app_author=CONF_AUTHOR, app_name=CONF_APP, roaming=False,
        use_virtualenv=False, create=False))

# highest priority (overrides all): `./dwave.conf`
if local:
    candidates.append(".")

paths = [os.path.join(base, CONF_FILENAME) for base in candidates]

if only_existing:
    paths = list(filter(os.path.exists, paths))

return paths
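A brief usage sketch of the function above:

    # All candidate paths, whether or not the files exist yet.
    for path in get_configfile_paths(only_existing=False):
        print(path)

    # Only the configuration files actually present on this system.
    existing = get_configfile_paths()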
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_default_configfile_path(): """Return the default configuration-file path. Typically returns a user-local configuration file; e.g: ``~/.config/dwave/dwave.conf``. Returns: str: Configuration file path. Examples: This example displays the default configuration file on an Ubuntu Unix system running IPython 2.7. ['/etc/xdg/xdg-ubuntu/dwave/dwave.conf', '/usr/share/upstart/xdg/dwave/dwave.conf', '/etc/xdg/dwave/dwave.conf', '/home/mary/.config/dwave/dwave.conf', './dwave.conf'] '/home/mary/.config/dwave/dwave.conf' """
base = homebase.user_config_dir(
    app_author=CONF_AUTHOR, app_name=CONF_APP, roaming=False,
    use_virtualenv=False, create=False)
path = os.path.join(base, CONF_FILENAME)
return path
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def load_config_from_files(filenames=None): """Load D-Wave Cloud Client configuration from a list of files. .. note:: This method is not standardly used to set up D-Wave Cloud Client configuration. It is recommended you use :meth:`.Client.from_config` or :meth:`.config.load_config` instead. Configuration files comply with standard Windows INI-like format, parsable with Python's :mod:`configparser`. A section called ``defaults`` contains default values inherited by other sections. Each filename in the list (each configuration file loaded) progressively upgrades the final configuration, on a key by key basis, per each section. Args: filenames (list[str], default=None): D-Wave Cloud Client configuration files (paths and names). If ``None``, searches for a configuration file named ``dwave.conf`` in all system-wide configuration directories, in the user-local configuration directory, and in the current working directory, following the user/system configuration paths of :func:`get_configfile_paths`. Returns: :obj:`~configparser.ConfigParser`: :class:`dict`-like mapping of configuration sections (profiles) to mapping of per-profile keys holding values. Raises: :exc:`~dwave.cloud.exceptions.ConfigFileReadError`: Config file specified or detected could not be opened or read. :exc:`~dwave.cloud.exceptions.ConfigFileParseError`: Config file parse failed. Examples: This example loads configurations from two files. One contains a default section with key/values that are overwritten by any profile section that contains that key/value; for example, profile dw2000b in file dwave_b.conf overwrites the default URL and client type, which profile dw2000a inherits from the defaults section, while profile dw2000a overwrites the API token that profile dw2000b inherits. The files, which are located in the current working directory, are (1) dwave_a.conf:: [defaults] endpoint = https://url.of.some.dwavesystem.com/sapi client = qpu token = ABC-123456789123456789123456789 [dw2000a] solver = EXAMPLE_2000Q_SYSTEM token = DEF-987654321987654321987654321 and (2) dwave_b.conf:: [dw2000b] endpoint = https://url.of.some.other.dwavesystem.com/sapi client = sw solver = EXAMPLE_2000Q_SYSTEM The following example code loads configuration from both these files, with the defined overrides and inheritance. .. code:: python [defaults] endpoint = https://url.of.some.dwavesystem.com/sapi client = qpu token = ABC-123456789123456789123456789 [dw2000a] solver = EXAMPLE_2000Q_SYSTEM token = DEF-987654321987654321987654321 [dw2000b] endpoint = https://url.of.some.other.dwavesystem.com/sapi client = sw solver = EXAMPLE_2000Q_SYSTEM """
if filenames is None:
    filenames = get_configfile_paths()

config = configparser.ConfigParser(default_section="defaults")
for filename in filenames:
    try:
        with open(filename, 'r') as f:
            config.read_file(f, filename)
    except (IOError, OSError):
        raise ConfigFileReadError("Failed to read {!r}".format(filename))
    except configparser.Error:
        raise ConfigFileParseError("Failed to parse {!r}".format(filename))
return config
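A usage sketch with two hypothetical files; later files override earlier ones key by key, per section, as described in the docstring:

    config = load_config_from_files(['dwave_a.conf', 'dwave_b.conf'])
    for section in config.sections():
        print(section, dict(config[section]))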
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def load_profile_from_files(filenames=None, profile=None): """Load a profile from a list of D-Wave Cloud Client configuration files. .. note:: This method is not standardly used to set up D-Wave Cloud Client configuration. It is recommended you use :meth:`.Client.from_config` or :meth:`.config.load_config` instead. Configuration files comply with standard Windows INI-like format, parsable with Python's :mod:`configparser`. Each file in the list is progressively searched until the first profile is found. This function does not input profile information from environment variables. Args: filenames (list[str], default=None): D-Wave cloud client configuration files (path and name). If ``None``, searches for existing configuration files in the standard directories of :func:`get_configfile_paths`. profile (str, default=None): Name of profile to return from reading the configuration from the specified configuration file(s). If ``None``, progressively falls back in the following order: (1) ``profile`` key following ``[defaults]`` section. (2) First non-``[defaults]`` section. (3) ``[defaults]`` section. Returns: dict: Mapping of configuration keys to values. If no valid config/profile is found, returns an empty dict. Raises: :exc:`~dwave.cloud.exceptions.ConfigFileReadError`: Config file specified or detected could not be opened or read. :exc:`~dwave.cloud.exceptions.ConfigFileParseError`: Config file parse failed. :exc:`ValueError`: Profile name not found. Examples: This example loads a profile based on configurations from two files. It finds the first profile, dw2000a, in the first file, dwave_a.conf, and adds to the values of the defaults section, overwriting the existing client value, while ignoring the profile in the second file, dwave_b.conf. The files, which are located in the current working directory, are (1) dwave_a.conf:: [defaults] endpoint = https://url.of.some.dwavesystem.com/sapi client = qpu token = ABC-123456789123456789123456789 [dw2000a] client = sw solver = EXAMPLE_2000Q_SYSTEM_A token = DEF-987654321987654321987654321 and (2) dwave_b.conf:: [dw2000b] endpoint = https://url.of.some.other.dwavesystem.com/sapi client = qpu solver = EXAMPLE_2000Q_SYSTEM_B The following example code loads profile values from parsing both these files, by default loading the first profile encountered or an explicitly specified profile. {'client': u'sw', 'endpoint': u'https://url.of.some.dwavesystem.com/sapi', 'solver': u'EXAMPLE_2000Q_SYSTEM_A', 'token': u'DEF-987654321987654321987654321'} {'client': u'qpu', 'endpoint': u'https://url.of.some.other.dwavesystem.com/sapi', 'solver': u'EXAMPLE_2000Q_SYSTEM_B', 'token': u'ABC-123456789123456789123456789'} """
# progressively build config from a file, or a list of auto-detected files # raises ConfigFileReadError/ConfigFileParseError on error config = load_config_from_files(filenames) # determine profile name fallback: # (1) profile key under [defaults], # (2) first non-[defaults] section # (3) [defaults] section first_section = next(iter(config.sections() + [None])) config_defaults = config.defaults() if not profile: profile = config_defaults.get('profile', first_section) if profile: try: section = dict(config[profile]) except KeyError: raise ValueError("Config profile {!r} not found".format(profile)) else: # as the very last resort (unspecified profile name and # no profiles defined in config), try to use [defaults] if config_defaults: section = config_defaults else: section = {} return section
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def load_config(config_file=None, profile=None, client=None, endpoint=None, token=None, solver=None, proxy=None): """Load D-Wave Cloud Client configuration based on a configuration file. Configuration values can be specified in multiple ways, ranked in the following order (with 1 the highest ranked): 1. Values specified as keyword arguments in :func:`load_config()`. These values replace values read from a configuration file, and therefore must be **strings**, including float values for timeouts, boolean flags (tested for "truthiness"), and solver feature constraints (a dictionary encoded as JSON). 2. Values specified as environment variables. 3. Values specified in the configuration file. Configuration-file format is described in :mod:`dwave.cloud.config`. If the location of the configuration file is not specified, auto-detection searches for existing configuration files in the standard directories of :func:`get_configfile_paths`. If a configuration file explicitly specified, via an argument or environment variable, does not exist or is unreadable, loading fails with :exc:`~dwave.cloud.exceptions.ConfigFileReadError`. Loading fails with :exc:`~dwave.cloud.exceptions.ConfigFileParseError` if the file is readable but invalid as a configuration file. Similarly, if a profile explicitly specified, via an argument or environment variable, is not present in the loaded configuration, loading fails with :exc:`ValueError`. Explicit profile selection also fails if the configuration file is not explicitly specified, detected on the system, or defined via an environment variable. Environment variables: ``DWAVE_CONFIG_FILE``, ``DWAVE_PROFILE``, ``DWAVE_API_CLIENT``, ``DWAVE_API_ENDPOINT``, ``DWAVE_API_TOKEN``, ``DWAVE_API_SOLVER``, ``DWAVE_API_PROXY``. Environment variables are described in :mod:`dwave.cloud.config`. Args: config_file (str/[str]/None/False/True, default=None): Path to configuration file(s). If `None`, the value is taken from `DWAVE_CONFIG_FILE` environment variable if defined. If the environment variable is undefined or empty, auto-detection searches for existing configuration files in the standard directories of :func:`get_configfile_paths`. If `False`, loading from file(s) is skipped; if `True`, forces auto-detection (regardless of the `DWAVE_CONFIG_FILE` environment variable). profile (str, default=None): Profile name (name of the profile section in the configuration file). If undefined, inferred from `DWAVE_PROFILE` environment variable if defined. If the environment variable is undefined or empty, a profile is selected in the following order: 1. From the default section if it includes a profile key. 2. The first section (after the default section). 3. If no other section is defined besides `[defaults]`, the defaults section is promoted and selected. client (str, default=None): Client type used for accessing the API. Supported values are `qpu` for :class:`dwave.cloud.qpu.Client` and `sw` for :class:`dwave.cloud.sw.Client`. endpoint (str, default=None): API endpoint URL. token (str, default=None): API authorization token. solver (str, default=None): :term:`solver` features, as a JSON-encoded dictionary of feature constraints, the client should use. See :meth:`~dwave.cloud.client.Client.get_solvers` for semantics of supported feature constraints. 
If undefined, the client uses a solver definition from environment variables, a configuration file, or falls back to the first available online solver. For backward compatibility, solver name in string format is accepted and converted to ``{"name": <solver name>}``. proxy (str, default=None): URL for proxy to use in connections to D-Wave API. Can include username/password, port, scheme, etc. If undefined, client uses the system-level proxy, if defined, or connects directly to the API. Returns: dict: Mapping of configuration keys to values for the profile (section), as read from the configuration file and optionally overridden by environment values and specified keyword arguments. Always contains the `client`, `endpoint`, `token`, `solver`, and `proxy` keys. Raises: :exc:`ValueError`: Invalid (non-existing) profile name. :exc:`~dwave.cloud.exceptions.ConfigFileReadError`: Config file specified or detected could not be opened or read. :exc:`~dwave.cloud.exceptions.ConfigFileParseError`: Config file parse failed. Examples: This example loads the configuration from an auto-detected configuration file in the home directory of a Windows system user. >>> import dwave.cloud as dc >>> dc.config.load_config() # doctest: +SKIP {'client': u'qpu', 'endpoint': u'https://url.of.some.dwavesystem.com/sapi', 'proxy': None, 'solver': u'EXAMPLE_2000Q_SYSTEM_A', 'token': u'DEF-987654321987654321987654321'} The configuration file auto-detected and used above can be confirmed with: >>> dc.config.get_configfile_paths() # doctest: +SKIP [u'C:\\Users\\jane\\AppData\\Local\\dwavesystem\\dwave\\dwave.conf'] Additional examples are given in :mod:`dwave.cloud.config`. """
if profile is None: profile = os.getenv("DWAVE_PROFILE") if config_file == False: # skip loading from file altogether section = {} elif config_file == True: # force auto-detection, disregarding DWAVE_CONFIG_FILE section = load_profile_from_files(None, profile) else: # auto-detect if not specified with arg or env if config_file is None: # note: both empty and undefined DWAVE_CONFIG_FILE treated as None config_file = os.getenv("DWAVE_CONFIG_FILE") # handle ''/None/str/[str] for `config_file` (after env) filenames = None if config_file: if isinstance(config_file, six.string_types): filenames = [config_file] else: filenames = config_file section = load_profile_from_files(filenames, profile) # override a selected subset of values via env or kwargs, # pass-through the rest unmodified section['client'] = client or os.getenv("DWAVE_API_CLIENT", section.get('client')) section['endpoint'] = endpoint or os.getenv("DWAVE_API_ENDPOINT", section.get('endpoint')) section['token'] = token or os.getenv("DWAVE_API_TOKEN", section.get('token')) section['solver'] = solver or os.getenv("DWAVE_API_SOLVER", section.get('solver')) section['proxy'] = proxy or os.getenv("DWAVE_API_PROXY", section.get('proxy')) return section
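A short sketch of the precedence rules (kwarg over environment over file); the token values are illustrative and `config_file=False` skips file loading entirely:

import os

os.environ['DWAVE_API_TOKEN'] = 'ENV-TOKEN'

config = load_config(config_file=False)
assert config['token'] == 'ENV-TOKEN'            # environment beats (absent) file value

config = load_config(config_file=False, token='KWARG-TOKEN')
assert config['token'] == 'KWARG-TOKEN'          # explicit kwarg beats environment

del os.environ['DWAVE_API_TOKEN']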
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def legacy_load_config(profile=None, endpoint=None, token=None, solver=None, proxy=None, **kwargs): """Load configured URLs and token for the SAPI server. .. warning:: Included only for backward compatibility. Please use :func:`load_config` or the client factory :meth:`~dwave.cloud.client.Client.from_config` instead. This method tries to load a legacy configuration file from ``~/.dwrc``, select a specified `profile` (or, if not specified, the first profile), and override individual keys with values read from environment variables or specified explicitly as key values in the function. Configuration values can be specified in multiple ways, ranked in the following order (with 1 the highest ranked): 1. Values specified as keyword arguments in :func:`legacy_load_config()` 2. Values specified as environment variables 3. Values specified in the legacy ``~/.dwrc`` configuration file Environment variables searched for are: - ``DW_INTERNAL__HTTPLINK`` - ``DW_INTERNAL__TOKEN`` - ``DW_INTERNAL__HTTPPROXY`` - ``DW_INTERNAL__SOLVER`` Legacy configuration file format is a modified CSV where the first comma is replaced with a bar character (``|``). Each line encodes a single profile. Its columns are:: profile_name|endpoint_url,authentication_token,proxy_url,default_solver_name All its fields after ``authentication_token`` are optional. When there are multiple connections in a file, the first one is the default. Any commas in the URLs must be percent-encoded. Args: profile (str): Profile name in the legacy configuration file. endpoint (str, default=None): API endpoint URL. token (str, default=None): API authorization token. solver (str, default=None): Default solver to use in :meth:`~dwave.cloud.client.Client.get_solver`. If undefined, all calls to :meth:`~dwave.cloud.client.Client.get_solver` must explicitly specify the solver name/id. proxy (str, default=None): URL for proxy to use in connections to D-Wave API. Can include username/password, port, scheme, etc. If undefined, client uses a system-level proxy, if defined, or connects directly to the API. Returns: Dictionary with keys: endpoint, token, solver, and proxy. Examples: This example creates a client using the :meth:`~dwave.cloud.client.Client.from_config` method, which falls back on the legacy file by default when it fails to find a D-Wave Cloud Client configuration file (setting its `legacy_config_fallback` parameter to False precludes this fall-back operation). For this example, no D-Wave Cloud Client configuration file is present on the local system; instead the following ``.dwrc`` legacy configuration file is present in the user's home directory:: profile-a|https://one.com,token-one profile-b|https://two.com,token-two The following example code creates a client without explicitly specifying key values, therefore auto-detection searches for existing (non-legacy) configuration files in the standard directories of :func:`get_configfile_paths` and, failing to find one, falls back on the existing legacy configuration file above. >>> client = dwave.cloud.Client.from_config() # doctest: +SKIP >>> client.endpoint # doctest: +SKIP 'https://one.com' >>> client.token # doctest: +SKIP 'token-one' A profile and/or token can likewise be specified explicitly as arguments to select or override the legacy values. """
def _parse_config(fp, filename): fields = ('endpoint', 'token', 'proxy', 'solver') config = OrderedDict() for line in fp: # strip whitespace, skip blank and comment lines line = line.strip() if not line or line.startswith('#'): continue # parse each record, store in dict with label as key try: label, data = line.split('|', 1) values = [v.strip() or None for v in data.split(',')] config[label] = dict(zip(fields, values)) except Exception: raise ConfigFileParseError( "Failed to parse {!r}, line {!r}".format(filename, line)) return config def _read_config(filename): try: with open(filename, 'r') as f: return _parse_config(f, filename) except (IOError, OSError): raise ConfigFileReadError("Failed to read {!r}".format(filename)) config = {} filename = os.path.expanduser('~/.dwrc') if os.path.exists(filename): config = _read_config(filename) # load profile if specified, or first one in file if profile: try: section = config[profile] except KeyError: raise ValueError("Config profile {!r} not found".format(profile)) else: try: _, section = next(iter(config.items())) except StopIteration: section = {} # override config variables (if any) with environment and then with arguments section['endpoint'] = endpoint or os.getenv("DW_INTERNAL__HTTPLINK", section.get('endpoint')) section['token'] = token or os.getenv("DW_INTERNAL__TOKEN", section.get('token')) section['proxy'] = proxy or os.getenv("DW_INTERNAL__HTTPPROXY", section.get('proxy')) section['solver'] = solver or os.getenv("DW_INTERNAL__SOLVER", section.get('solver')) section.update(kwargs) return section
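A standalone sketch of the legacy record format handled by the nested `_parse_config` above; the sample line is illustrative and all fields after the token are optional:

line = "prod|https://one.com,token-one,,EXAMPLE_SOLVER"
fields = ('endpoint', 'token', 'proxy', 'solver')

label, data = line.split('|', 1)
record = dict(zip(fields, (v.strip() or None for v in data.split(','))))

assert label == 'prod'
assert record == {'endpoint': 'https://one.com', 'token': 'token-one',
                  'proxy': None, 'solver': 'EXAMPLE_SOLVER'}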
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def src_simple(input_data, output_data, ratio, converter_type, channels): """Perform a single conversion from an input buffer to an output buffer. Simple interface for performing a single conversion from input buffer to output buffer at a fixed conversion ratio. Simple interface does not require initialisation as it can only operate on a single buffer worth of audio. """
input_frames, _ = _check_data(input_data) output_frames, _ = _check_data(output_data) data = ffi.new('SRC_DATA*') data.input_frames = input_frames data.output_frames = output_frames data.src_ratio = ratio data.data_in = ffi.cast('float*', ffi.from_buffer(input_data)) data.data_out = ffi.cast('float*', ffi.from_buffer(output_data)) error = _lib.src_simple(data, converter_type, channels) return error, data.input_frames_used, data.output_frames_gen
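A usage sketch under the assumption that the buffers are C-contiguous float32 NumPy arrays of shape (frames, channels), and that converter type 0 corresponds to libsamplerate's SRC_SINC_BEST_QUALITY:

import numpy as np

channels = 1
ratio = 48000.0 / 44100.0   # upsample 44.1 kHz -> 48 kHz

t = np.arange(44100) / 44100.0
input_data = np.sin(2 * np.pi * 440.0 * t).astype(np.float32).reshape(-1, channels)
output_data = np.zeros((int(len(input_data) * ratio) + 1, channels), dtype=np.float32)

error, frames_used, frames_gen = src_simple(input_data, output_data, ratio, 0, channels)
assert error == 0
resampled = output_data[:frames_gen]   # only the generated frames are meaningful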
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def src_new(converter_type, channels): """Initialise a new sample rate converter. Parameters converter_type : int Converter to be used. channels : int Number of channels. Returns ------- state An anonymous pointer to the internal state of the converter. error : int Error code. """
error = ffi.new('int*') state = _lib.src_new(converter_type, channels, error) return state, error[0]
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def src_process(state, input_data, output_data, ratio, end_of_input=0): """Standard processing function. Returns non zero on error. """
input_frames, _ = _check_data(input_data) output_frames, _ = _check_data(output_data) data = ffi.new('SRC_DATA*') data.input_frames = input_frames data.output_frames = output_frames data.src_ratio = ratio data.data_in = ffi.cast('float*', ffi.from_buffer(input_data)) data.data_out = ffi.cast('float*', ffi.from_buffer(output_data)) data.end_of_input = end_of_input error = _lib.src_process(state, data) return error, data.input_frames_used, data.output_frames_gen
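A streaming sketch combining `src_new` and `src_process`: the converter state is created once and then fed fixed-size blocks; type 2 is assumed to be libsamplerate's SRC_SINC_FASTEST, and freeing the state (libsamplerate's `src_delete`) is omitted here:

import numpy as np

channels, ratio = 1, 0.5   # downsample by half
state, error = src_new(2, channels)
assert error == 0

blocks = [np.random.rand(4096, channels).astype(np.float32) for _ in range(4)]
out = np.zeros((4096, channels), dtype=np.float32)

for i, block in enumerate(blocks):
    last = i == len(blocks) - 1   # flush internal buffers on the final block
    error, frames_used, frames_gen = src_process(state, block, out, ratio,
                                                 end_of_input=int(last))
    assert error == 0
    # out[:frames_gen] holds the converted frames for this block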
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _src_input_callback(cb_data, data): """Internal callback function to be used with the callback API. Pulls the Python callback function from the handle contained in `cb_data` and calls it to fetch frames. Frames are converted to the format required by the API (float, interleaved channels). A reference to these data is kept internally. Returns ------- frames : int The number of frames supplied. """
cb_data = ffi.from_handle(cb_data) ret = cb_data['callback']() if ret is None: cb_data['last_input'] = None return 0 # No frames supplied input_data = _np.require(ret, requirements='C', dtype=_np.float32) input_frames, channels = _check_data(input_data) # Check whether the correct number of channels is supplied by user. if cb_data['channels'] != channels: raise ValueError('Invalid number of channels in callback.') # Store a reference of the input data to ensure it is still alive when # accessed by libsamplerate. cb_data['last_input'] = input_data data[0] = ffi.cast('float*', ffi.from_buffer(input_data)) return input_frames
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def src_callback_new(callback, converter_type, channels): """Initialisation for the callback based API. Parameters callback : function Called whenever new frames are to be read. Must return a NumPy array of shape (num_frames, channels). converter_type : int Converter to be used. channels : int Number of channels. Returns ------- state An anonymous pointer to the internal state of the converter. handle A CFFI handle to the callback data. error : int Error code. """
cb_data = {'callback': callback, 'channels': channels} handle = ffi.new_handle(cb_data) error = ffi.new('int*') state = _lib.src_callback_new(_src_input_callback, converter_type, channels, error, handle) if state == ffi.NULL: return None, handle, error[0] return state, handle, error[0]
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def src_callback_read(state, ratio, frames, data): """Read up to `frames` worth of data using the callback API. Returns ------- frames : int Number of frames read or -1 on error. """
data_ptr = ffi.cast('float*', ffi.from_buffer(data)) return _lib.src_callback_read(state, ratio, frames, data_ptr)
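A callback-API sketch tying together `src_callback_new` and `src_callback_read`; the producer returns float32 blocks of shape (frames, channels) and `None` at end of input, and type 4 is assumed to be libsamplerate's SRC_LINEAR:

import numpy as np

channels, ratio = 1, 2.0
source = (np.random.rand(4096, channels).astype(np.float32) for _ in range(3))

def producer():
    return next(source, None)   # None signals end of input to the wrapper

state, handle, error = src_callback_new(producer, 4, channels)
assert state is not None and error == 0

out = np.zeros((8192, channels), dtype=np.float32)
frames = src_callback_read(state, ratio, len(out), out)
assert frames >= 0   # -1 would indicate an error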
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def from_config(cls, config_file=None, profile=None, client=None, endpoint=None, token=None, solver=None, proxy=None, legacy_config_fallback=False, **kwargs): """Client factory method to instantiate a client instance from configuration. Configuration values can be specified in multiple ways, ranked in the following order (with 1 the highest ranked): 1. Values specified as keyword arguments in :func:`from_config()` 2. Values specified as environment variables 3. Values specified in the configuration file Configuration-file format is described in :mod:`dwave.cloud.config`. If the location of the configuration file is not specified, auto-detection searches for existing configuration files in the standard directories of :func:`get_configfile_paths`. If a configuration file explicitly specified, via an argument or environment variable, does not exist or is unreadable, loading fails with :exc:`~dwave.cloud.exceptions.ConfigFileReadError`. Loading fails with :exc:`~dwave.cloud.exceptions.ConfigFileParseError` if the file is readable but invalid as a configuration file. Similarly, if a profile explicitly specified, via an argument or environment variable, is not present in the loaded configuration, loading fails with :exc:`ValueError`. Explicit profile selection also fails if the configuration file is not explicitly specified, detected on the system, or defined via an environment variable. Environment variables: ``DWAVE_CONFIG_FILE``, ``DWAVE_PROFILE``, ``DWAVE_API_CLIENT``, ``DWAVE_API_ENDPOINT``, ``DWAVE_API_TOKEN``, ``DWAVE_API_SOLVER``, ``DWAVE_API_PROXY``. Environment variables are described in :mod:`dwave.cloud.config`. Args: config_file (str/[str]/None/False/True, default=None): Path to configuration file. If ``None``, the value is taken from ``DWAVE_CONFIG_FILE`` environment variable if defined. If the environment variable is undefined or empty, auto-detection searches for existing configuration files in the standard directories of :func:`get_configfile_paths`. If ``False``, loading from file is skipped; if ``True``, forces auto-detection (regardless of the ``DWAVE_CONFIG_FILE`` environment variable). profile (str, default=None): Profile name (name of the profile section in the configuration file). If undefined, inferred from ``DWAVE_PROFILE`` environment variable if defined. If the environment variable is undefined or empty, a profile is selected in the following order: 1. From the default section if it includes a profile key. 2. The first section (after the default section). 3. If no other section is defined besides ``[defaults]``, the defaults section is promoted and selected. client (str, default=None): Client type used for accessing the API. Supported values are ``qpu`` for :class:`dwave.cloud.qpu.Client` and ``sw`` for :class:`dwave.cloud.sw.Client`. endpoint (str, default=None): API endpoint URL. token (str, default=None): API authorization token. solver (dict/str, default=None): Default :term:`solver` features to use in :meth:`~dwave.cloud.client.Client.get_solver`. Defined via dictionary of solver feature constraints (see :meth:`~dwave.cloud.client.Client.get_solvers`). For backward compatibility, a solver name, as a string, is also accepted and converted to ``{"name": <solver name>}``. 
If undefined, :meth:`~dwave.cloud.client.Client.get_solver` uses a solver definition from environment variables, a configuration file, or falls back to the first available online solver. proxy (str, default=None): URL for proxy to use in connections to D-Wave API. Can include username/password, port, scheme, etc. If undefined, client uses the system-level proxy, if defined, or connects directly to the API. legacy_config_fallback (bool, default=False): If True and loading from a standard D-Wave Cloud Client configuration file (``dwave.conf``) fails, tries loading a legacy configuration file (``~/.dwrc``). Other Parameters: Unrecognized keys (str): All unrecognized keys are passed through to the appropriate client class constructor as string keyword arguments. An explicit key value overrides an identical user-defined key value loaded from a configuration file. Returns: :class:`~dwave.cloud.client.Client` (:class:`dwave.cloud.qpu.Client` or :class:`dwave.cloud.sw.Client`, default=:class:`dwave.cloud.qpu.Client`): Appropriate instance of a QPU or software client. Raises: :exc:`~dwave.cloud.exceptions.ConfigFileReadError`: Config file specified or detected could not be opened or read. :exc:`~dwave.cloud.exceptions.ConfigFileParseError`: Config file parse failed. Examples: A variety of examples are given in :mod:`dwave.cloud.config`. This example initializes :class:`~dwave.cloud.client.Client` from an explicitly specified configuration file, "~/jane/my_path_to_config/my_cloud_conf.conf":: >>> from dwave.cloud import Client >>> client = Client.from_config(config_file='~/jane/my_path_to_config/my_cloud_conf.conf') # doctest: +SKIP >>> # code that uses client >>> client.close() # doctest: +SKIP """
# try loading configuration from a preferred new config subsystem # (`./dwave.conf`, `~/.config/dwave/dwave.conf`, etc) config = load_config( config_file=config_file, profile=profile, client=client, endpoint=endpoint, token=token, solver=solver, proxy=proxy) _LOGGER.debug("Config loaded: %r", config) # fallback to legacy `.dwrc` if key variables missing if legacy_config_fallback: warnings.warn("'legacy_config_fallback' is deprecated, please convert " "your legacy .dwrc file to the new config format.", DeprecationWarning) if not config.get('token'): config = legacy_load_config( profile=profile, client=client, endpoint=endpoint, token=token, solver=solver, proxy=proxy) _LOGGER.debug("Legacy config loaded: %r", config) # manual override of other (client-custom) arguments config.update(kwargs) from dwave.cloud import qpu, sw _clients = {'qpu': qpu.Client, 'sw': sw.Client, 'base': cls} _client = config.pop('client', None) or 'base' _LOGGER.debug("Final config used for %s.Client(): %r", _client, config) return _clients[_client](**config)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def close(self): """Perform a clean shutdown. Waits for all the currently scheduled work to finish, kills the workers, and closes the connection pool. .. note:: Ensure your code does not submit new work while the connection is closing. Alternatively, use a context manager (a ``with Client.from_config(...) as ...:`` construct) to ensure your code properly closes all resources. Examples: This example creates a client (based on an auto-detected configuration file), executes some code (represented by a placeholder comment), and then closes the client. """
# Finish all the work that requires the connection _LOGGER.debug("Joining submission queue") self._submission_queue.join() _LOGGER.debug("Joining cancel queue") self._cancel_queue.join() _LOGGER.debug("Joining poll queue") self._poll_queue.join() _LOGGER.debug("Joining load queue") self._load_queue.join() # Send kill-task to all worker threads # Note: threads can't be 'killed' in Python, they have to die by # natural causes for _ in self._submission_workers: self._submission_queue.put(None) for _ in self._cancel_workers: self._cancel_queue.put(None) for _ in self._poll_workers: self._poll_queue.put((-1, None)) for _ in self._load_workers: self._load_queue.put(None) # Wait for threads to die for worker in chain(self._submission_workers, self._cancel_workers, self._poll_workers, self._load_workers): worker.join() # Close the requests session self.session.close()
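A shutdown sketch; it assumes configuration auto-detection succeeds, and that `Client` supports the context-manager protocol as suggested by the note above:

client = Client.from_config()
try:
    pass   # submit problems, poll, read results ...
finally:
    client.close()

# Equivalent, with close() called automatically:
with Client.from_config() as client:
    pass   # submit problems, poll, read results ...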
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_solver(self, name=None, refresh=False, **filters): """Load the configuration for a single solver. Makes a blocking web call to `{endpoint}/solvers/remote/{solver_name}/`, where `{endpoint}` is a URL configured for the client, and returns a :class:`.Solver` instance that can be used to submit sampling problems to the D-Wave API and retrieve results. Args: name (str): ID of the requested solver. ``None`` returns the default solver. If default solver is not configured, ``None`` returns the first available solver in ``Client``'s class (QPU/software/base). refresh (bool): Return solver from cache (if cached with ``get_solvers()``), unless set to ``True``. **filters (keyword arguments, optional): Dictionary of filters over features this solver has to have. For a list of feature names and values, see: :meth:`~dwave.cloud.client.Client.get_solvers`. A special filter, order_by (callable/str, default='id'), sets the solver sorting key function (or :class:`Solver` attribute name); by default, solvers are sorted by ID/name. Returns: :class:`.Solver` Examples: This example creates two solvers for a client instantiated from a local system's auto-detected default configuration file, which configures a connection to a D-Wave resource that provides two solvers. The first uses the default solver, the second explicitly selects another solver. >>> from dwave.cloud import Client >>> client = Client.from_config() # doctest: +SKIP >>> client.get_solvers() # doctest: +SKIP [Solver(id='2000Q_ONLINE_SOLVER1'), Solver(id='2000Q_ONLINE_SOLVER2')] >>> solver1 = client.get_solver() # doctest: +SKIP >>> solver2 = client.get_solver(name='2000Q_ONLINE_SOLVER2') # doctest: +SKIP >>> solver1.id # doctest: +SKIP '2000Q_ONLINE_SOLVER1' >>> solver2.id # doctest: +SKIP '2000Q_ONLINE_SOLVER2' """
_LOGGER.debug("Requested a solver that best matches feature filters=%r", filters) # backward compatibility: name as the first feature if name is not None: filters.setdefault('name', name) # in absence of other filters, config/env solver filters/name are used if not filters and self.default_solver: filters = self.default_solver # get the first solver that satisfies all filters try: _LOGGER.debug("Fetching solvers according to filters=%r", filters) return self.get_solvers(refresh=refresh, **filters)[0] except IndexError: raise SolverNotFoundError("Solver with the requested features not available")
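A feature-filter sketch; the filter names and operators below (`qpu`, `num_active_qubits` with the `__gt` operator, and `order_by`) follow the `get_solvers()` conventions and are assumptions about the deployed solvers:

# First solver that is a QPU with more than 2000 active qubits:
solver = client.get_solver(qpu=True, num_active_qubits__gt=2000)

# Prefer the largest matching solver instead of the lexicographically first:
solver = client.get_solver(qpu=True, order_by='-num_active_qubits')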
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _submit(self, body, future): """Enqueue a problem for submission to the server. This method is thread safe. """
self._submission_queue.put(self._submit.Message(body, future))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _do_submit_problems(self): """Pull problems from the submission queue and submit them. Note: This method is always run inside of a daemon thread. """
try: while True: # Pull as many problems as we can, block on the first one, # but once we have one problem, switch to non-blocking then # submit without blocking again. # `None` task is used to signal thread termination item = self._submission_queue.get() if item is None: break ready_problems = [item] while len(ready_problems) < self._SUBMIT_BATCH_SIZE: try: ready_problems.append(self._submission_queue.get_nowait()) except queue.Empty: break # Submit the problems _LOGGER.debug("Submitting %d problems", len(ready_problems)) body = '[' + ','.join(mess.body for mess in ready_problems) + ']' try: try: response = self.session.post(posixpath.join(self.endpoint, 'problems/'), body) localtime_of_response = epochnow() except requests.exceptions.Timeout: raise RequestTimeout if response.status_code == 401: raise SolverAuthenticationError() response.raise_for_status() message = response.json() _LOGGER.debug("Finished submitting %d problems", len(ready_problems)) except BaseException as exception: _LOGGER.debug("Submit failed for %d problems", len(ready_problems)) if not isinstance(exception, SolverAuthenticationError): exception = IOError(exception) for mess in ready_problems: mess.future._set_error(exception, sys.exc_info()) self._submission_queue.task_done() continue # Pass on the information for submission, res in zip(ready_problems, message): submission.future._set_clock_diff(response, localtime_of_response) self._handle_problem_status(res, submission.future) self._submission_queue.task_done() # this is equivalent to a yield to scheduler in other threading libraries time.sleep(0) except BaseException as err: _LOGGER.exception(err)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _handle_problem_status(self, message, future): """Handle the results of a problem submission or results request. This method checks the status of the problem and puts it in the correct queue. Args: message (dict): Update message from the SAPI server wrt. this problem. future `Future`: future corresponding to the problem Note: This method is always run inside of a daemon thread. """
try: _LOGGER.trace("Handling response: %r", message) _LOGGER.debug("Handling response for %s with status %s", message.get('id'), message.get('status')) # Handle errors in batch mode if 'error_code' in message and 'error_msg' in message: raise SolverFailureError(message['error_msg']) if 'status' not in message: raise InvalidAPIResponseError("'status' missing in problem description response") if 'id' not in message: raise InvalidAPIResponseError("'id' missing in problem description response") future.id = message['id'] future.remote_status = status = message['status'] # The future may not have the ID set yet with future._single_cancel_lock: # This handles the case where cancel has been called on a future # before that future received the problem id if future._cancel_requested: if not future._cancel_sent and status == self.STATUS_PENDING: # The problem has been canceled but the status says its still in queue # try to cancel it self._cancel(message['id'], future) # If a cancel request could meaningfully be sent it has been now future._cancel_sent = True if not future.time_received and message.get('submitted_on'): future.time_received = parse_datetime(message['submitted_on']) if not future.time_solved and message.get('solved_on'): future.time_solved = parse_datetime(message['solved_on']) if not future.eta_min and message.get('earliest_estimated_completion'): future.eta_min = parse_datetime(message['earliest_estimated_completion']) if not future.eta_max and message.get('latest_estimated_completion'): future.eta_max = parse_datetime(message['latest_estimated_completion']) if status == self.STATUS_COMPLETE: # TODO: find a better way to differentiate between # `completed-on-submit` and `completed-on-poll`. # Loading should happen only once, not every time when response # doesn't contain 'answer'. # If the message is complete, forward it to the future object if 'answer' in message: future._set_message(message) # If the problem is complete, but we don't have the result data # put the problem in the queue for loading results. else: self._load(future) elif status in self.ANY_STATUS_ONGOING: # If the response is pending add it to the queue. self._poll(future) elif status == self.STATUS_CANCELLED: # If canceled return error raise CanceledFutureError() else: # Return an error to the future object errmsg = message.get('error_message', 'An unknown error has occurred.') if 'solver is offline' in errmsg.lower(): raise SolverOfflineError(errmsg) else: raise SolverFailureError(errmsg) except Exception as error: # If there were any unhandled errors we need to release the # lock in the future, otherwise deadlock occurs. future._set_error(error, sys.exc_info())
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _do_cancel_problems(self): """Pull ids from the cancel queue and submit them. Note: This method is always run inside of a daemon thread. """
try: while True: # Pull as many problems as we can, block when none are available. # `None` task is used to signal thread termination item = self._cancel_queue.get() if item is None: break item_list = [item] while True: try: item_list.append(self._cancel_queue.get_nowait()) except queue.Empty: break # Submit the problems, attach the ids as a json list in the # body of the delete query. try: body = [item[0] for item in item_list] try: self.session.delete(posixpath.join(self.endpoint, 'problems/'), json=body) except requests.exceptions.Timeout: raise RequestTimeout except Exception as err: for _, future in item_list: if future is not None: future._set_error(err, sys.exc_info()) # Mark all the ids as processed regardless of success or failure. [self._cancel_queue.task_done() for _ in item_list] # this is equivalent to a yield to scheduler in other threading libraries time.sleep(0) except Exception as err: _LOGGER.exception(err)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _poll(self, future): """Enqueue a problem to poll the server for status."""
if future._poll_backoff is None: # on first poll, start with minimal back-off future._poll_backoff = self._POLL_BACKOFF_MIN # if we have ETA of results, schedule the first poll for then if future.eta_min and self._is_clock_diff_acceptable(future): at = datetime_to_timestamp(future.eta_min) _LOGGER.debug("Response ETA indicated and local clock reliable. " "Scheduling first polling at +%.2f sec", at - epochnow()) else: at = time.time() + future._poll_backoff _LOGGER.debug("Response ETA not indicated, or local clock unreliable. " "Scheduling first polling at +%.2f sec", at - epochnow()) else: # update exponential poll back-off, clipped to a range future._poll_backoff = \ max(self._POLL_BACKOFF_MIN, min(future._poll_backoff * 2, self._POLL_BACKOFF_MAX)) # for poll priority we use timestamp of next scheduled poll at = time.time() + future._poll_backoff now = utcnow() future_age = (now - future.time_created).total_seconds() _LOGGER.debug("Polling scheduled at %.2f with %.2f sec new back-off for: %s (future's age: %.2f sec)", at, future._poll_backoff, future.id, future_age) # don't enqueue for next poll if polling_timeout is exceeded by then future_age_on_next_poll = future_age + (at - datetime_to_timestamp(now)) if self.polling_timeout is not None and future_age_on_next_poll > self.polling_timeout: _LOGGER.debug("Polling timeout exceeded before next poll: %.2f sec > %.2f sec, aborting polling!", future_age_on_next_poll, self.polling_timeout) raise PollingTimeout self._poll_queue.put((at, future))
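A small illustration of the clipped exponential back-off computed above; 0.05 and 60.0 stand in for the client's `_POLL_BACKOFF_MIN` and `_POLL_BACKOFF_MAX` constants, whose actual values are not shown here:

POLL_BACKOFF_MIN, POLL_BACKOFF_MAX = 0.05, 60.0

backoff, schedule = None, []
for _ in range(15):
    if backoff is None:
        backoff = POLL_BACKOFF_MIN                          # first poll
    else:
        backoff = max(POLL_BACKOFF_MIN,
                      min(backoff * 2, POLL_BACKOFF_MAX))   # doubled, then clipped
    schedule.append(backoff)

# schedule: 0.05, 0.1, 0.2, 0.4, ... then saturates at 60.0
assert schedule[-1] == POLL_BACKOFF_MAX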
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _do_poll_problems(self): """Poll the server for the status of a set of problems. Note: This method is always run inside of a daemon thread. """
try: # grouped futures (all scheduled within _POLL_GROUP_TIMEFRAME) frame_futures = {} def task_done(): self._poll_queue.task_done() def add(future): # add future to query frame_futures # returns: worker lives on? # `None` task signifies thread termination if future is None: task_done() return False if future.id not in frame_futures and not future.done(): frame_futures[future.id] = future else: task_done() return True while True: frame_futures.clear() # blocking add first scheduled frame_earliest, future = self._poll_queue.get() if not add(future): return # try grouping if scheduled within grouping timeframe while len(frame_futures) < self._STATUS_QUERY_SIZE: try: task = self._poll_queue.get_nowait() except queue.Empty: break at, future = task if at - frame_earliest <= self._POLL_GROUP_TIMEFRAME: if not add(future): return else: task_done() self._poll_queue.put(task) break # build a query string with ids of all futures in this frame ids = [future.id for future in frame_futures.values()] _LOGGER.debug("Polling for status of futures: %s", ids) query_string = 'problems/?id=' + ','.join(ids) # if futures were cancelled while `add`ing, skip empty frame if not ids: continue # wait until `frame_earliest` before polling delay = frame_earliest - time.time() if delay > 0: _LOGGER.debug("Pausing polling %.2f sec for futures: %s", delay, ids) time.sleep(delay) else: _LOGGER.trace("Skipping non-positive delay of %.2f sec", delay) try: _LOGGER.trace("Executing poll API request") try: response = self.session.get(posixpath.join(self.endpoint, query_string)) except requests.exceptions.Timeout: raise RequestTimeout if response.status_code == 401: raise SolverAuthenticationError() response.raise_for_status() statuses = response.json() for status in statuses: self._handle_problem_status(status, frame_futures[status['id']]) except BaseException as exception: if not isinstance(exception, SolverAuthenticationError): exception = IOError(exception) for id_ in frame_futures.keys(): frame_futures[id_]._set_error(exception, sys.exc_info()) for id_ in frame_futures.keys(): task_done() time.sleep(0) except Exception as err: _LOGGER.exception(err)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _do_load_results(self): """Submit a query asking for the results for a particular problem. To request the results of a problem: ``GET /problems/{problem_id}/`` Note: This method is always run inside of a daemon thread. """
try: while True: # Select a problem future = self._load_queue.get() # `None` task signifies thread termination if future is None: break _LOGGER.debug("Loading results of: %s", future.id) # Submit the query query_string = 'problems/{}/'.format(future.id) try: try: response = self.session.get(posixpath.join(self.endpoint, query_string)) except requests.exceptions.Timeout: raise RequestTimeout if response.status_code == 401: raise SolverAuthenticationError() response.raise_for_status() message = response.json() except BaseException as exception: if not isinstance(exception, SolverAuthenticationError): exception = IOError(exception) future._set_error(exception, sys.exc_info()) self._load_queue.task_done() continue # Dispatch the results, mark the task complete self._handle_problem_status(message, future) self._load_queue.task_done() # this is equivalent to a yield to scheduler in other threading libraries time.sleep(0) except Exception as err: _LOGGER.exception("Load result error: %s", err)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def encode_bqm_as_qp(solver, linear, quadratic): """Encode the binary quadratic problem for submission to a given solver, using the `qp` format for data. Args: solver (:class:`dwave.cloud.solver.Solver`): The solver used. linear (dict[variable, bias]/list[variable, bias]): Linear terms of the model. quadratic (dict[(variable, variable), bias]): Quadratic terms of the model. Returns: encoded submission dictionary """
active = active_qubits(linear, quadratic) # Encode linear terms. The coefficients of the linear terms of the objective # are encoded as an array of little endian 64 bit doubles. # This array is then base64 encoded into a string safe for json. # The order of the terms is determined by the _encoding_qubits property # specified by the server. # Note: only active qubits are coded with double, inactive with NaN nan = float('nan') lin = [uniform_get(linear, qubit, 0 if qubit in active else nan) for qubit in solver._encoding_qubits] lin = base64.b64encode(struct.pack('<' + ('d' * len(lin)), *lin)) # Encode the coefficients of the quadratic terms of the objective # in the same manner as the linear terms, in the order given by the # _encoding_couplers property, discarding couplers that touch inactive qubits quad = [quadratic.get((q1,q2), 0) + quadratic.get((q2,q1), 0) for (q1,q2) in solver._encoding_couplers if q1 in active and q2 in active] quad = base64.b64encode(struct.pack('<' + ('d' * len(quad)), *quad)) # The name for this encoding is 'qp' and is explicitly included in the # message for easier extension in the future. return { 'format': 'qp', 'lin': lin.decode('utf-8'), 'quad': quad.decode('utf-8') }
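A round-trip sketch on a stub solver; `_encoding_qubits` and `_encoding_couplers` normally come from the server and are faked here for illustration:

import base64

class _StubSolver(object):
    _encoding_qubits = [0, 1, 2]
    _encoding_couplers = [(0, 1), (1, 2)]

linear = {0: -1.0, 1: 0.5}     # qubit 2 has no bias/coupling -> inactive -> NaN
quadratic = {(0, 1): -0.25}    # (1, 2) touches inactive qubit 2 -> dropped

msg = encode_bqm_as_qp(_StubSolver(), linear, quadratic)
assert msg['format'] == 'qp'
assert len(base64.b64decode(msg['lin'])) == 3 * 8    # three little-endian doubles
assert len(base64.b64decode(msg['quad'])) == 1 * 8   # single surviving coupler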
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def decode_qp(msg): """Decode SAPI response that uses `qp` format, without numpy. The 'qp' format is the current encoding used for problems and samples. In this encoding the reply is generally json, but the samples, energy, and histogram data (the occurrence count of each solution), are all base64 encoded arrays. """
# Decode the simple buffers result = msg['answer'] result['active_variables'] = _decode_ints(result['active_variables']) active_variables = result['active_variables'] if 'num_occurrences' in result: result['num_occurrences'] = _decode_ints(result['num_occurrences']) result['energies'] = _decode_doubles(result['energies']) # Measure out the size of the binary solution data num_solutions = len(result['energies']) num_variables = len(result['active_variables']) solution_bytes = -(-num_variables // 8) # equivalent to int(math.ceil(num_variables / 8.)) total_variables = result['num_variables'] # Figure out the null value for output default = 3 if msg['type'] == 'qubo' else 0 # Decode the solutions, which will be byte aligned in binary format binary = base64.b64decode(result['solutions']) solutions = [] for solution_index in range(num_solutions): # Grab the section of the buffer related to the current solution buffer_index = solution_index * solution_bytes solution_buffer = binary[buffer_index:buffer_index + solution_bytes] bytes = struct.unpack('B' * solution_bytes, solution_buffer) # Initialize all variables to the null value solution = [default] * total_variables index = 0 for byte in bytes: # Parse each byte and read however many bits can be used values = _decode_byte(byte) for _ in range(min(8, len(active_variables) - index)): i = active_variables[index] index += 1 solution[i] = values.pop() # Switch to the right variable space if msg['type'] == 'ising': values = {0: -1, 1: 1} solution = [values.get(v, default) for v in solution] solutions.append(solution) result['solutions'] = solutions return result
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _decode_byte(byte): """Helper for decode_qp, turns a single byte into a list of bits. Args: byte: byte to be decoded Returns: list of bits corresponding to byte """
bits = [] for _ in range(8): bits.append(byte & 1) byte >>= 1 return bits
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _decode_ints(message): """Helper for decode_qp, decodes an int array. The int array is stored as little endian 32 bit integers. The array has then been base64 encoded. Since we are decoding we do these steps in reverse. """
binary = base64.b64decode(message) return struct.unpack('<' + ('i' * (len(binary) // 4)), binary)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _decode_doubles(message): """Helper for decode_qp, decodes a double array. The double array is stored as little endian 64 bit doubles. The array has then been base64 encoded. Since we are decoding we do these steps in reverse. Args: message: the double array Returns: decoded double array """
binary = base64.b64decode(message) return struct.unpack('<' + ('d' * (len(binary) // 8)), binary)
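A round-trip sketch for the two buffer decoders above; encoding simply mirrors the decoding steps (pack little-endian, then base64):

import base64
import struct

ints = (1, 2, 3)
packed = base64.b64encode(struct.pack('<' + 'i' * len(ints), *ints))
assert _decode_ints(packed) == ints

doubles = (0.5, -1.25)
packed = base64.b64encode(struct.pack('<' + 'd' * len(doubles), *doubles))
assert _decode_doubles(packed) == doubles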
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def decode_qp_numpy(msg, return_matrix=True): """Decode SAPI response with results in the `qp` format, explicitly using numpy. If numpy is not installed, the method will fail. To use numpy for decoding but return the results as lists (instead of numpy matrices), set `return_matrix=False`. """
import numpy as np result = msg['answer'] # Build some little endian type encodings double_type = np.dtype(np.double) double_type = double_type.newbyteorder('<') int_type = np.dtype(np.int32) int_type = int_type.newbyteorder('<') # Decode the simple buffers result['energies'] = np.frombuffer(base64.b64decode(result['energies']), dtype=double_type) if 'num_occurrences' in result: result['num_occurrences'] = \ np.frombuffer(base64.b64decode(result['num_occurrences']), dtype=int_type) result['active_variables'] = \ np.frombuffer(base64.b64decode(result['active_variables']), dtype=int_type) # Measure out the binary data size num_solutions = len(result['energies']) active_variables = result['active_variables'] num_variables = len(active_variables) total_variables = result['num_variables'] # Decode the solutions, which will be a continuous run of bits byte_type = np.dtype(np.uint8) byte_type = byte_type.newbyteorder('<') bits = np.unpackbits(np.frombuffer(base64.b64decode(result['solutions']), dtype=byte_type)) # Clip off the extra bits from encoding if num_solutions: bits = np.reshape(bits, (num_solutions, bits.size // num_solutions)) bits = np.delete(bits, range(num_variables, bits.shape[1]), 1) # Switch from bits to spins default = 3 if msg['type'] == 'ising': bits = bits.astype(np.int8) bits *= 2 bits -= 1 default = 0 # Fill in the missing variables solutions = np.full((num_solutions, total_variables), default, dtype=np.int8) solutions[:, active_variables] = bits result['solutions'] = solutions # If the final result shouldn't be numpy formats switch back to python objects if not return_matrix: result['energies'] = result['energies'].tolist() if 'num_occurrences' in result: result['num_occurrences'] = result['num_occurrences'].tolist() result['active_variables'] = result['active_variables'].tolist() result['solutions'] = result['solutions'].tolist() return result
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def evaluate_ising(linear, quad, state): """Calculate the energy of a state given the Hamiltonian. Args: linear: Linear Hamiltonian terms. quad: Quadratic Hamiltonian terms. state: Vector of spins describing the system state. Returns: Energy of the state evaluated by the given energy function. """
# If we were given a numpy array cast to list if _numpy and isinstance(state, np.ndarray): return evaluate_ising(linear, quad, state.tolist()) # Accumulate the linear and quadratic values energy = 0.0 for index, value in uniform_iterator(linear): energy += state[index] * value for (index_a, index_b), value in six.iteritems(quad): energy += value * state[index_a] * state[index_b] return energy
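A worked example of the energy function, H(s) = sum_i h_i s_i + sum_(i,j) J_ij s_i s_j, for a two-spin state:

linear = {0: 1.0, 1: -2.0}
quad = {(0, 1): 0.5}

# H = (1.0)(+1) + (-2.0)(-1) + (0.5)(+1)(-1) = 1 + 2 - 0.5 = 2.5
assert evaluate_ising(linear, quad, [1, -1]) == 2.5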
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def active_qubits(linear, quadratic): """Calculate a set of all active qubits. A qubit is "active" if it has a bias or coupling attached. Args: linear (dict[variable, bias]/list[variable, bias]): Linear terms of the model. quadratic (dict[(variable, variable), bias]): Quadratic terms of the model. Returns: set: Active qubits' indices. """
active = {idx for idx,bias in uniform_iterator(linear)} for edge, _ in six.iteritems(quadratic): active.update(edge) return active
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def strip_head(sequence, values): """Strips elements of `values` from the beginning of `sequence`."""
values = set(values) return list(itertools.dropwhile(lambda x: x in values, sequence))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def strip_tail(sequence, values): """Strip `values` from the end of `sequence`."""
return list(reversed(list(strip_head(reversed(sequence), values))))
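Usage sketch for both helpers; interior occurrences of the stripped values are preserved:

assert strip_head([0, 0, 1, 0, 2], [0]) == [1, 0, 2]   # leading zeros removed
assert strip_tail([1, 0, 2, 0, 0], [0]) == [1, 0, 2]   # trailing zeros removed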
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def datetime_to_timestamp(dt): """Convert timezone-aware `datetime` to POSIX timestamp and return seconds since UNIX epoch. Note: similar to `datetime.timestamp()` in Python 3.3+. """
epoch = datetime.utcfromtimestamp(0).replace(tzinfo=UTC) return (dt - epoch).total_seconds()
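A quick sanity check, assuming `datetime` and `UTC` are available in the module's scope as the function body itself uses them; a naive datetime would raise a TypeError when subtracted from the timezone-aware epoch:

epoch = datetime.utcfromtimestamp(0).replace(tzinfo=UTC)
assert datetime_to_timestamp(epoch) == 0.0
assert datetime_to_timestamp(epoch.replace(year=1971)) == 365 * 24 * 3600   # 1970 is not a leap year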
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def argshash(self, args, kwargs): """Hash mutable arguments' containers with immutable keys and values. Returns a string key built from the repr of ``args`` and the repr of the sorted ``kwargs`` items. """
a = repr(args) b = repr(sorted((repr(k), repr(v)) for k, v in kwargs.items())) return a + b
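A usage sketch; `cache` stands in for an instance of the (unshown) class that defines `argshash`. Equal arguments, including mutable containers, yield equal keys, and keyword order does not matter:

key1 = cache.argshash(([1, 2], {'x': 0}), {'a': {'b': 3}})
key2 = cache.argshash(([1, 2], {'x': 0}), {'a': {'b': 3}})
assert key1 == key2   # suitable as a memoization-cache key

assert cache.argshash((), {'a': 1, 'b': 2}) == cache.argshash((), {'b': 2, 'a': 1})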