INSTRUCTION (string, 1 to 46.3k characters) · RESPONSE (string, 75 to 80.2k characters)
Based on the current utterance, return a dictionary mapping each string in the database to the list of token indices it is linked to.
def get_strings_from_utterance(tokenized_utterance: List[Token]) -> Dict[str, List[int]]: """ Based on the current utterance, return a dictionary where the keys are the strings in the database that map to lists of the token indices that they are linked to. """ string_linking_scores: Dict[str, List[int]] = defaultdict(list) for index, token in enumerate(tokenized_utterance): for string in ATIS_TRIGGER_DICT.get(token.text.lower(), []): string_linking_scores[string].append(index) token_bigrams = bigrams([token.text for token in tokenized_utterance]) for index, token_bigram in enumerate(token_bigrams): for string in ATIS_TRIGGER_DICT.get(' '.join(token_bigram).lower(), []): string_linking_scores[string].extend([index, index + 1]) trigrams = ngrams([token.text for token in tokenized_utterance], 3) for index, trigram in enumerate(trigrams): if trigram[0] == 'st': natural_language_key = f'st. {trigram[2]}'.lower() else: natural_language_key = ' '.join(trigram).lower() for string in ATIS_TRIGGER_DICT.get(natural_language_key, []): string_linking_scores[string].extend([index, index + 1, index + 2]) return string_linking_scores
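For intuition, here is a minimal, self-contained sketch of the same trigger-dictionary lookup over unigrams and bigrams, using plain strings instead of AllenNLP ``Token`` objects and a made-up ``TRIGGER_DICT`` (the real ``ATIS_TRIGGER_DICT`` is much larger):

from collections import defaultdict
from typing import Dict, List

# Hypothetical trigger dictionary: lowercased phrases -> database strings they trigger.
TRIGGER_DICT = {"boston": ["BOSTON"], "new york": ["NEW YORK"]}

def link_strings(tokens: List[str]) -> Dict[str, List[int]]:
    linking: Dict[str, List[int]] = defaultdict(list)
    # Unigram matches link a database string to a single token index.
    for index, token in enumerate(tokens):
        for string in TRIGGER_DICT.get(token.lower(), []):
            linking[string].append(index)
    # Bigram matches link a database string to both token positions it spans.
    for index in range(len(tokens) - 1):
        bigram = " ".join(tokens[index:index + 2]).lower()
        for string in TRIGGER_DICT.get(bigram, []):
            linking[string].extend([index, index + 1])
    return dict(linking)

print(link_strings("show flights from new york to boston".split()))
# {'BOSTON': [6], 'NEW YORK': [3, 4]}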
Gets the maximum padding lengths from all ``Instances`` in this batch. Each ``Instance`` has multiple ``Fields``, and each ``Field`` could have multiple things that need padding. We look at all fields in all instances, and find the max values for each (field_name, padding_key) pair, returning them in a dictionary. This can then be used to convert this batch into arrays of consistent length, or to set model parameters, etc.
def get_padding_lengths(self) -> Dict[str, Dict[str, int]]: """ Gets the maximum padding lengths from all ``Instances`` in this batch. Each ``Instance`` has multiple ``Fields``, and each ``Field`` could have multiple things that need padding. We look at all fields in all instances, and find the max values for each (field_name, padding_key) pair, returning them in a dictionary. This can then be used to convert this batch into arrays of consistent length, or to set model parameters, etc. """ padding_lengths: Dict[str, Dict[str, int]] = defaultdict(dict) all_instance_lengths: List[Dict[str, Dict[str, int]]] = [instance.get_padding_lengths() for instance in self.instances] if not all_instance_lengths: return {**padding_lengths} all_field_lengths: Dict[str, List[Dict[str, int]]] = defaultdict(list) for instance_lengths in all_instance_lengths: for field_name, instance_field_lengths in instance_lengths.items(): all_field_lengths[field_name].append(instance_field_lengths) for field_name, field_lengths in all_field_lengths.items(): for padding_key in field_lengths[0].keys(): max_value = max(x[padding_key] if padding_key in x else 0 for x in field_lengths) padding_lengths[field_name][padding_key] = max_value return {**padding_lengths}
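The core of the merge is just an elementwise max over the per-instance padding dictionaries; a rough, self-contained illustration (not the AllenNLP API itself):

from collections import defaultdict
from typing import Dict, List

def merge_padding_lengths(per_instance: List[Dict[str, Dict[str, int]]]) -> Dict[str, Dict[str, int]]:
    merged: Dict[str, Dict[str, int]] = defaultdict(dict)
    for instance_lengths in per_instance:
        for field_name, lengths in instance_lengths.items():
            for padding_key, value in lengths.items():
                # Keep the maximum value seen for each (field_name, padding_key) pair.
                merged[field_name][padding_key] = max(merged[field_name].get(padding_key, 0), value)
    return dict(merged)

print(merge_padding_lengths([{"tokens": {"num_tokens": 5}},
                             {"tokens": {"num_tokens": 8}}]))
# {'tokens': {'num_tokens': 8}}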
We create a new ``Grammar`` object from the one in ``AtisSqlTableContext``, that also has the new entities that are extracted from the utterance. Stitching together the expressions to form the grammar is a little tedious here, but it is worth it because we don't have to create a new grammar from scratch. Creating a new grammar is expensive because we have many production rules that have all database values in the column on the right hand side. We update the expressions bottom up, since the higher level expressions may refer to the lower level ones. For example, the ternary expression will refer to the start and end times.
def _update_grammar(self): """ We create a new ``Grammar`` object from the one in ``AtisSqlTableContext``, that also has the new entities that are extracted from the utterance. Stitching together the expressions to form the grammar is a little tedious here, but it is worth it because we don't have to create a new grammar from scratch. Creating a new grammar is expensive because we have many production rules that have all database values in the column on the right hand side. We update the expressions bottom up, since the higher level expressions may refer to the lower level ones. For example, the ternary expression will refer to the start and end times. """ # This will give us a shallow copy. We have to be careful here because the ``Grammar`` object # contains ``Expression`` objects that have tuples containing the members of that expression. # We have to create new sub-expression objects so that original grammar is not mutated. new_grammar = copy(AtisWorld.sql_table_context.grammar) for numeric_nonterminal in NUMERIC_NONTERMINALS: self._add_numeric_nonterminal_to_grammar(numeric_nonterminal, new_grammar) self._update_expression_reference(new_grammar, 'pos_value', 'number') ternary_expressions = [self._get_sequence_with_spacing(new_grammar, [new_grammar['col_ref'], Literal('BETWEEN'), new_grammar['time_range_start'], Literal(f'AND'), new_grammar['time_range_end']]), self._get_sequence_with_spacing(new_grammar, [new_grammar['col_ref'], Literal('NOT'), Literal('BETWEEN'), new_grammar['time_range_start'], Literal(f'AND'), new_grammar['time_range_end']]), self._get_sequence_with_spacing(new_grammar, [new_grammar['col_ref'], Literal('not'), Literal('BETWEEN'), new_grammar['time_range_start'], Literal(f'AND'), new_grammar['time_range_end']])] new_grammar['ternaryexpr'] = OneOf(*ternary_expressions, name='ternaryexpr') self._update_expression_reference(new_grammar, 'condition', 'ternaryexpr') new_binary_expressions = [] fare_round_trip_cost_expression = \ self._get_sequence_with_spacing(new_grammar, [Literal('fare'), Literal('.'), Literal('round_trip_cost'), new_grammar['binaryop'], new_grammar['fare_round_trip_cost']]) new_binary_expressions.append(fare_round_trip_cost_expression) fare_one_direction_cost_expression = \ self._get_sequence_with_spacing(new_grammar, [Literal('fare'), Literal('.'), Literal('one_direction_cost'), new_grammar['binaryop'], new_grammar['fare_one_direction_cost']]) new_binary_expressions.append(fare_one_direction_cost_expression) flight_number_expression = \ self._get_sequence_with_spacing(new_grammar, [Literal('flight'), Literal('.'), Literal('flight_number'), new_grammar['binaryop'], new_grammar['flight_number']]) new_binary_expressions.append(flight_number_expression) if self.dates: year_binary_expression = self._get_sequence_with_spacing(new_grammar, [Literal('date_day'), Literal('.'), Literal('year'), new_grammar['binaryop'], new_grammar['year_number']]) month_binary_expression = self._get_sequence_with_spacing(new_grammar, [Literal('date_day'), Literal('.'), Literal('month_number'), new_grammar['binaryop'], new_grammar['month_number']]) day_binary_expression = self._get_sequence_with_spacing(new_grammar, [Literal('date_day'), Literal('.'), Literal('day_number'), new_grammar['binaryop'], new_grammar['day_number']]) new_binary_expressions.extend([year_binary_expression, month_binary_expression, day_binary_expression]) new_binary_expressions = new_binary_expressions + list(new_grammar['biexpr'].members) new_grammar['biexpr'] = OneOf(*new_binary_expressions, 
name='biexpr') self._update_expression_reference(new_grammar, 'condition', 'biexpr') return new_grammar
This is a helper method for generating sequences, since we often want a list of expressions with whitespaces between them.
def _get_sequence_with_spacing(self, # pylint: disable=no-self-use new_grammar, expressions: List[Expression], name: str = '') -> Sequence: """ This is a helper method for generating sequences, since we often want a list of expressions with whitespaces between them. """ expressions = [subexpression for expression in expressions for subexpression in (expression, new_grammar['ws'])] return Sequence(*expressions, name=name)
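The interleaving idiom used here is easy to see with plain strings; a tiny sketch without the parsimonious ``Expression`` objects:

expressions = ["SELECT", "col_ref", "FROM", "table_name"]
# Interleave a whitespace expression after every element, exactly as in the method above.
with_spacing = [part for expression in expressions for part in (expression, "ws")]
print(with_spacing)
# ['SELECT', 'ws', 'col_ref', 'ws', 'FROM', 'ws', 'table_name', 'ws']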
When we add a new expression, there may be other expressions that refer to it, and we need to update those to point to the new expression.
def _update_expression_reference(self, # pylint: disable=no-self-use grammar: Grammar, parent_expression_nonterminal: str, child_expression_nonterminal: str) -> None: """ When we add a new expression, there may be other expressions that refer to it, and we need to update those to point to the new expression. """ grammar[parent_expression_nonterminal].members = \ [member if member.name != child_expression_nonterminal else grammar[child_expression_nonterminal] for member in grammar[parent_expression_nonterminal].members]
This is a helper method for adding different types of numbers (e.g. starting time ranges) as entities. We first go through all utterances in the interaction and find the numbers of a certain type and add them to the set ``all_numbers``, which is initialized with default values. We want to add all numbers that occur in the interaction, and not just the current turn, because the query could contain numbers that were triggered before the current turn. For each entity, we then check if it is triggered by tokens in the current utterance and construct the linking score.
def add_to_number_linking_scores(self, all_numbers: Set[str], number_linking_scores: Dict[str, Tuple[str, str, List[int]]], get_number_linking_dict: Callable[[str, List[Token]], Dict[str, List[int]]], current_tokenized_utterance: List[Token], nonterminal: str) -> None: """ This is a helper method for adding different types of numbers (eg. starting time ranges) as entities. We first go through all utterances in the interaction and find the numbers of a certain type and add them to the set ``all_numbers``, which is initialized with default values. We want to add all numbers that occur in the interaction, and not just the current turn because the query could contain numbers that were triggered before the current turn. For each entity, we then check if it is triggered by tokens in the current utterance and construct the linking score. """ number_linking_dict: Dict[str, List[int]] = {} for utterance, tokenized_utterance in zip(self.utterances, self.tokenized_utterances): number_linking_dict = get_number_linking_dict(utterance, tokenized_utterance) all_numbers.update(number_linking_dict.keys()) all_numbers_list: List[str] = sorted(all_numbers, reverse=True) for number in all_numbers_list: entity_linking = [0 for token in current_tokenized_utterance] # ``number_linking_dict`` is for the last utterance here. If the number was triggered # before the last utterance, then it will have linking scores of 0's. for token_index in number_linking_dict.get(number, []): if token_index < len(entity_linking): entity_linking[token_index] = 1 action = format_action(nonterminal, number, is_number=True, keywords_to_uppercase=KEYWORDS) number_linking_scores[action] = (nonterminal, number, entity_linking)
This method gets entities from the current utterance and finds which tokens they are linked to. The entities are divided into two main groups, ``numbers`` and ``strings``. We rely on these entities later for updating the valid actions and the grammar.
def _get_linked_entities(self) -> Dict[str, Dict[str, Tuple[str, str, List[int]]]]: """ This method gets entities from the current utterance finds which tokens they are linked to. The entities are divided into two main groups, ``numbers`` and ``strings``. We rely on these entities later for updating the valid actions and the grammar. """ current_tokenized_utterance = [] if not self.tokenized_utterances \ else self.tokenized_utterances[-1] # We generate a dictionary where the key is the type eg. ``number`` or ``string``. # The value is another dictionary where the key is the action and the value is a tuple # of the nonterminal, the string value and the linking score. entity_linking_scores: Dict[str, Dict[str, Tuple[str, str, List[int]]]] = {} number_linking_scores: Dict[str, Tuple[str, str, List[int]]] = {} string_linking_scores: Dict[str, Tuple[str, str, List[int]]] = {} # Get time range start self.add_to_number_linking_scores({'0'}, number_linking_scores, get_time_range_start_from_utterance, current_tokenized_utterance, 'time_range_start') self.add_to_number_linking_scores({'1200'}, number_linking_scores, get_time_range_end_from_utterance, current_tokenized_utterance, 'time_range_end') self.add_to_number_linking_scores({'0', '1', '60', '41'}, number_linking_scores, get_numbers_from_utterance, current_tokenized_utterance, 'number') self.add_to_number_linking_scores({'0'}, number_linking_scores, get_costs_from_utterance, current_tokenized_utterance, 'fare_round_trip_cost') self.add_to_number_linking_scores({'0'}, number_linking_scores, get_costs_from_utterance, current_tokenized_utterance, 'fare_one_direction_cost') self.add_to_number_linking_scores({'0'}, number_linking_scores, get_flight_numbers_from_utterance, current_tokenized_utterance, 'flight_number') self.add_dates_to_number_linking_scores(number_linking_scores, current_tokenized_utterance) # Add string linking dict. string_linking_dict: Dict[str, List[int]] = {} for tokenized_utterance in self.tokenized_utterances: string_linking_dict = get_strings_from_utterance(tokenized_utterance) strings_list = AtisWorld.sql_table_context.strings_list strings_list.append(('flight_airline_code_string -> ["\'EA\'"]', 'EA')) strings_list.append(('airline_airline_name_string-> ["\'EA\'"]', 'EA')) # We construct the linking scores for strings from the ``string_linking_dict`` here. for string in strings_list: entity_linking = [0 for token in current_tokenized_utterance] # string_linking_dict has the strings and linking scores from the last utterance. # If the string is not in the last utterance, then the linking scores will be all 0. for token_index in string_linking_dict.get(string[1], []): entity_linking[token_index] = 1 action = string[0] string_linking_scores[action] = (action.split(' -> ')[0], string[1], entity_linking) entity_linking_scores['number'] = number_linking_scores entity_linking_scores['string'] = string_linking_scores return entity_linking_scores
Return a sorted list of strings representing all possible actions of the form: nonterminal -> [right_hand_side]
def all_possible_actions(self) -> List[str]: """ Return a sorted list of strings representing all possible actions of the form: nonterminal -> [right_hand_side] """ all_actions = set() for _, action_list in self.valid_actions.items(): for action in action_list: all_actions.add(action) return sorted(all_actions)
When we first get the entities and the linking scores in ``_get_linked_entities``, we represent them as dictionaries for easier updates to the grammar and valid actions. In this method, we flatten them for the model so that the entities are represented as a list, and the linking scores are a 2D numpy array of shape (num_entities, num_utterance_tokens).
def _flatten_entities(self) -> Tuple[List[str], numpy.ndarray]: """ When we first get the entities and the linking scores in ``_get_linked_entities`` we represent as dictionaries for easier updates to the grammar and valid actions. In this method, we flatten them for the model so that the entities are represented as a list, and the linking scores are a 2D numpy array of shape (num_entities, num_utterance_tokens). """ entities = [] linking_scores = [] for entity in sorted(self.linked_entities['number']): entities.append(entity) linking_scores.append(self.linked_entities['number'][entity][2]) for entity in sorted(self.linked_entities['string']): entities.append(entity) linking_scores.append(self.linked_entities['string'][entity][2]) return entities, numpy.array(linking_scores)
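Schematically, the flattening turns the nested dictionaries into a parallel entity list and a score matrix. A toy illustration with made-up entities and a three-token utterance:

import numpy

linked_entities = {
    "number": {'time_range_start -> ["0"]': ("time_range_start", "0", [0, 1, 0])},
    "string": {'city_city_name_string -> ["BOSTON"]': ("city_city_name_string", "BOSTON", [0, 0, 1])},
}

entities, linking_scores = [], []
for entity_type in ("number", "string"):          # numbers first, then strings, each sorted
    for entity in sorted(linked_entities[entity_type]):
        entities.append(entity)
        linking_scores.append(linked_entities[entity_type][entity][2])

print(entities)                                   # the two actions, numbers before strings
print(numpy.array(linking_scores).shape)          # (2, 3) == (num_entities, num_utterance_tokens)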
Creates a Flask app that serves up a simple configuration wizard.
def make_app(include_packages: Sequence[str] = ()) -> Flask: """ Creates a Flask app that serves up a simple configuration wizard. """ # Load modules for package_name in include_packages: import_submodules(package_name) app = Flask(__name__) # pylint: disable=invalid-name @app.errorhandler(ServerError) def handle_invalid_usage(error: ServerError) -> Response: # pylint: disable=unused-variable response = jsonify(error.to_dict()) response.status_code = error.status_code return response @app.route('/') def index() -> Response: # pylint: disable=unused-variable return send_file('config_explorer.html') @app.route('/api/config/') def api_config() -> Response: # pylint: disable=unused-variable """ There are basically two things that can happen here. If this method is called with a ``Registrable`` class (e.g. ``Model``), it should return the list of possible ``Model`` subclasses. If it is called with an instantiable subclass (e.g. ``CrfTagger``), is should return the config for that subclass. This is complicated by the fact that some Registrable base classes (e.g. Vocabulary, Trainer) are _themselves_ instantiable. We handle this in two ways: first, we insist that the first case include an extra ``get_choices`` parameter. That is, if you call this method for ``Trainer`` with get_choices=true, you get the list of Trainer subclasses. If you call it without that extra flag, you get the config for the class itself. There are basically two UX situations in which this API is called. The first is when you have a dropdown list of choices (e.g. Model types) and you select one. Such an API request is made *without* the get_choices flag, which means that the config is returned *even if the class in question is a Registrable class that has subclass choices*. The second is when you click a "Configure" button, which configures a class that may (e.g. ``Model``) or may not (e.g. ``FeedForward``) have registrable subclasses. In this case the API request is made with the "get_choices" flag, but will return the corresponding config object if no choices are available (e.g. in the ``FeedForward``) case. This is not elegant, but it works. """ class_name = request.args.get('class', '') get_choices = request.args.get('get_choices', None) # Get the configuration for this class name config = configure(class_name) try: # May not have choices choice5 = choices(class_name) except ValueError: choice5 = [] if get_choices and choice5: return jsonify({ "className": class_name, "choices": choice5 }) else: return jsonify({ "className": class_name, "config": config.to_json() }) return app
Just converts from an ``argparse.Namespace`` object to string paths.
def train_model_from_args(args: argparse.Namespace): """ Just converts from an ``argparse.Namespace`` object to string paths. """ train_model_from_file(args.param_path, args.serialization_dir, args.overrides, args.file_friendly_logging, args.recover, args.force, args.cache_directory, args.cache_prefix)
A wrapper around :func:`train_model` which loads the params from a file. Parameters ---------- parameter_filename : ``str`` A JSON parameter file specifying an AllenNLP experiment. serialization_dir : ``str`` The directory in which to save results and logs. We just pass this along to :func:`train_model`. overrides : ``str`` A JSON string that we will use to override values in the input parameter file. file_friendly_logging : ``bool``, optional (default=False) If ``True``, we make our output more friendly to saved model files. We just pass this along to :func:`train_model`. recover : ``bool``, optional (default=False) If ``True``, we will try to recover a training run from an existing serialization directory. This is only intended for use when something actually crashed during the middle of a run. For continuing training a model on new data, see the ``fine-tune`` command. force : ``bool``, optional (default=False) If ``True``, we will overwrite the serialization directory if it already exists. cache_directory : ``str``, optional For caching data pre-processing. See :func:`allennlp.training.util.datasets_from_params`. cache_prefix : ``str``, optional For caching data pre-processing. See :func:`allennlp.training.util.datasets_from_params`.
def train_model_from_file(parameter_filename: str, serialization_dir: str, overrides: str = "", file_friendly_logging: bool = False, recover: bool = False, force: bool = False, cache_directory: str = None, cache_prefix: str = None) -> Model: """ A wrapper around :func:`train_model` which loads the params from a file. Parameters ---------- parameter_filename : ``str`` A json parameter file specifying an AllenNLP experiment. serialization_dir : ``str`` The directory in which to save results and logs. We just pass this along to :func:`train_model`. overrides : ``str`` A JSON string that we will use to override values in the input parameter file. file_friendly_logging : ``bool``, optional (default=False) If ``True``, we make our output more friendly to saved model files. We just pass this along to :func:`train_model`. recover : ``bool`, optional (default=False) If ``True``, we will try to recover a training run from an existing serialization directory. This is only intended for use when something actually crashed during the middle of a run. For continuing training a model on new data, see the ``fine-tune`` command. force : ``bool``, optional (default=False) If ``True``, we will overwrite the serialization directory if it already exists. cache_directory : ``str``, optional For caching data pre-processing. See :func:`allennlp.training.util.datasets_from_params`. cache_prefix : ``str``, optional For caching data pre-processing. See :func:`allennlp.training.util.datasets_from_params`. """ # Load the experiment config from a file and pass it to ``train_model``. params = Params.from_file(parameter_filename, overrides) return train_model(params, serialization_dir, file_friendly_logging, recover, force, cache_directory, cache_prefix)
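Typical programmatic usage is a single call; the paths and the override string below are hypothetical:

model = train_model_from_file(parameter_filename="experiments/my_experiment.jsonnet",
                              serialization_dir="output/my_run",
                              overrides='{"trainer": {"num_epochs": 5}}',
                              force=True)  # overwrite output/my_run if it already exists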
Trains the model specified in the given :class:`Params` object, using the data and training parameters also specified in that object, and saves the results in ``serialization_dir``. Parameters ---------- params : ``Params`` A parameter object specifying an AllenNLP Experiment. serialization_dir : ``str`` The directory in which to save results and logs. file_friendly_logging : ``bool``, optional (default=False) If ``True``, we add newlines to tqdm output, even on an interactive terminal, and we slow down tqdm's output to only once every 10 seconds. recover : ``bool``, optional (default=False) If ``True``, we will try to recover a training run from an existing serialization directory. This is only intended for use when something actually crashed during the middle of a run. For continuing training a model on new data, see the ``fine-tune`` command. force : ``bool``, optional (default=False) If ``True``, we will overwrite the serialization directory if it already exists. cache_directory : ``str``, optional For caching data pre-processing. See :func:`allennlp.training.util.datasets_from_params`. cache_prefix : ``str``, optional For caching data pre-processing. See :func:`allennlp.training.util.datasets_from_params`. Returns ------- best_model: ``Model`` The model with the best epoch weights.
def train_model(params: Params, serialization_dir: str, file_friendly_logging: bool = False, recover: bool = False, force: bool = False, cache_directory: str = None, cache_prefix: str = None) -> Model: """ Trains the model specified in the given :class:`Params` object, using the data and training parameters also specified in that object, and saves the results in ``serialization_dir``. Parameters ---------- params : ``Params`` A parameter object specifying an AllenNLP Experiment. serialization_dir : ``str`` The directory in which to save results and logs. file_friendly_logging : ``bool``, optional (default=False) If ``True``, we add newlines to tqdm output, even on an interactive terminal, and we slow down tqdm's output to only once every 10 seconds. recover : ``bool``, optional (default=False) If ``True``, we will try to recover a training run from an existing serialization directory. This is only intended for use when something actually crashed during the middle of a run. For continuing training a model on new data, see the ``fine-tune`` command. force : ``bool``, optional (default=False) If ``True``, we will overwrite the serialization directory if it already exists. cache_directory : ``str``, optional For caching data pre-processing. See :func:`allennlp.training.util.datasets_from_params`. cache_prefix : ``str``, optional For caching data pre-processing. See :func:`allennlp.training.util.datasets_from_params`. Returns ------- best_model: ``Model`` The model with the best epoch weights. """ prepare_environment(params) create_serialization_dir(params, serialization_dir, recover, force) stdout_handler = prepare_global_logging(serialization_dir, file_friendly_logging) cuda_device = params.params.get('trainer').get('cuda_device', -1) check_for_gpu(cuda_device) params.to_file(os.path.join(serialization_dir, CONFIG_NAME)) evaluate_on_test = params.pop_bool("evaluate_on_test", False) trainer_type = params.get("trainer", {}).get("type", "default") if trainer_type == "default": # Special logic to instantiate backward-compatible trainer. pieces = TrainerPieces.from_params(params, # pylint: disable=no-member serialization_dir, recover, cache_directory, cache_prefix) trainer = Trainer.from_params( model=pieces.model, serialization_dir=serialization_dir, iterator=pieces.iterator, train_data=pieces.train_dataset, validation_data=pieces.validation_dataset, params=pieces.params, validation_iterator=pieces.validation_iterator) evaluation_iterator = pieces.validation_iterator or pieces.iterator evaluation_dataset = pieces.test_dataset else: trainer = TrainerBase.from_params(params, serialization_dir, recover) # TODO(joelgrus): handle evaluation in the general case evaluation_iterator = evaluation_dataset = None params.assert_empty('base train command') try: metrics = trainer.train() except KeyboardInterrupt: # if we have completed an epoch, try to create a model archive. if os.path.exists(os.path.join(serialization_dir, _DEFAULT_WEIGHTS)): logging.info("Training interrupted by the user. Attempting to create " "a model archive using the current best epoch weights.") archive_model(serialization_dir, files_to_archive=params.files_to_archive) raise # Evaluate if evaluation_dataset and evaluate_on_test: logger.info("The model will be evaluated using the best epoch weights.") test_metrics = evaluate(trainer.model, evaluation_dataset, evaluation_iterator, cuda_device=trainer._cuda_devices[0], # pylint: disable=protected-access, # TODO(brendanr): Pass in an arg following Joel's trainer refactor. 
batch_weight_key="") for key, value in test_metrics.items(): metrics["test_" + key] = value elif evaluation_dataset: logger.info("To evaluate on the test set after training, pass the " "'evaluate_on_test' flag, or use the 'allennlp evaluate' command.") cleanup_global_logging(stdout_handler) # Now tar up results archive_model(serialization_dir, files_to_archive=params.files_to_archive) dump_metrics(os.path.join(serialization_dir, "metrics.json"), metrics, log=True) # We count on the trainer to have the model with best weights return trainer.model
Performs division and handles divide-by-zero. On zero-division, sets the corresponding result elements to zero.
def _prf_divide(numerator, denominator): """Performs division and handles divide-by-zero. On zero-division, sets the corresponding result elements to zero. """ result = numerator / denominator mask = denominator == 0.0 if not mask.any(): return result # remove nan result[mask] = 0.0 return result
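A quick usage sketch, assuming NumPy float arrays (e.g. per-class true-positive and predicted-positive counts) and the ``_prf_divide`` helper above; the ``errstate`` guard just silences NumPy's divide-by-zero warning before the mask zeroes those entries out:

import numpy as np

true_positives = np.array([3.0, 0.0, 5.0])
predicted_positives = np.array([4.0, 0.0, 5.0])

with np.errstate(divide="ignore", invalid="ignore"):
    precision = _prf_divide(true_positives, predicted_positives)

print(precision)  # [0.75 0.   1.  ]  -- the 0/0 entry is set to 0 rather than NaN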
One sentence per line, formatted like The###DET dog###NN ate###V the###DET apple###NN Returns a list of pairs (tokenized_sentence, tags)
def load_data(file_path: str) -> Tuple[List[str], List[str]]: """ One sentence per line, formatted like The###DET dog###NN ate###V the###DET apple###NN Returns a list of pairs (tokenized_sentence, tags) """ data = [] with open(file_path) as f: for line in f: pairs = line.strip().split() sentence, tags = zip(*(pair.split("###") for pair in pairs)) data.append((sentence, tags)) return data
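A quick usage sketch with a hypothetical ``train.txt`` in the ``token###TAG`` format shown above; note that ``zip(*...)`` yields tuples, so each entry of the returned list is a (tokens, tags) pair of tuples:

# train.txt (one sentence per line):
#   The###DET dog###NN ate###V the###DET apple###NN
data = load_data("train.txt")
sentence, tags = data[0]
print(sentence)  # ('The', 'dog', 'ate', 'the', 'apple')
print(tags)      # ('DET', 'NN', 'V', 'DET', 'NN')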
max_vocab_size limits the size of the vocabulary, not including the @@UNKNOWN@@ token. max_vocab_size is allowed to be either an int or a Dict[str, int] (or nothing). But it could also be a string representing an int (in the case of environment variable substitution). So we need some complex logic to handle it.
def pop_max_vocab_size(params: Params) -> Union[int, Dict[str, int]]: """ max_vocab_size limits the size of the vocabulary, not including the @@UNKNOWN@@ token. max_vocab_size is allowed to be either an int or a Dict[str, int] (or nothing). But it could also be a string representing an int (in the case of environment variable substitution). So we need some complex logic to handle it. """ size = params.pop("max_vocab_size", None) if isinstance(size, Params): # This is the Dict[str, int] case. return size.as_dict() elif size is not None: # This is the int / str case. return int(size) else: return None
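A sketch of the three input shapes this handles, assuming AllenNLP's ``Params`` (which wraps nested dictionaries in ``Params`` objects when they are popped, hence the ``isinstance`` check above):

from allennlp.common import Params

print(pop_max_vocab_size(Params({"max_vocab_size": 10000})))              # 10000
print(pop_max_vocab_size(Params({"max_vocab_size": "10000"})))            # 10000 (string from env-var substitution)
print(pop_max_vocab_size(Params({"max_vocab_size": {"tokens": 10000}})))  # {'tokens': 10000}
print(pop_max_vocab_size(Params({})))                                     # None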
Persist this Vocabulary to files so it can be reloaded later. Each namespace corresponds to one file. Parameters ---------- directory : ``str`` The directory where we save the serialized vocabulary.
def save_to_files(self, directory: str) -> None: """ Persist this Vocabulary to files so it can be reloaded later. Each namespace corresponds to one file. Parameters ---------- directory : ``str`` The directory where we save the serialized vocabulary. """ os.makedirs(directory, exist_ok=True) if os.listdir(directory): logging.warning("vocabulary serialization directory %s is not empty", directory) with codecs.open(os.path.join(directory, NAMESPACE_PADDING_FILE), 'w', 'utf-8') as namespace_file: for namespace_str in self._non_padded_namespaces: print(namespace_str, file=namespace_file) for namespace, mapping in self._index_to_token.items(): # Each namespace gets written to its own file, in index order. with codecs.open(os.path.join(directory, namespace + '.txt'), 'w', 'utf-8') as token_file: num_tokens = len(mapping) start_index = 1 if mapping[0] == self._padding_token else 0 for i in range(start_index, num_tokens): print(mapping[i].replace('\n', '@@NEWLINE@@'), file=token_file)
Loads a ``Vocabulary`` that was serialized using ``save_to_files``. Parameters ---------- directory : ``str`` The directory containing the serialized vocabulary.
def from_files(cls, directory: str) -> 'Vocabulary': """ Loads a ``Vocabulary`` that was serialized using ``save_to_files``. Parameters ---------- directory : ``str`` The directory containing the serialized vocabulary. """ logger.info("Loading token dictionary from %s.", directory) with codecs.open(os.path.join(directory, NAMESPACE_PADDING_FILE), 'r', 'utf-8') as namespace_file: non_padded_namespaces = [namespace_str.strip() for namespace_str in namespace_file] vocab = cls(non_padded_namespaces=non_padded_namespaces) # Check every file in the directory. for namespace_filename in os.listdir(directory): if namespace_filename == NAMESPACE_PADDING_FILE: continue if namespace_filename.startswith("."): continue namespace = namespace_filename.replace('.txt', '') if any(namespace_match(pattern, namespace) for pattern in non_padded_namespaces): is_padded = False else: is_padded = True filename = os.path.join(directory, namespace_filename) vocab.set_from_file(filename, is_padded, namespace=namespace) return vocab
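A round-trip usage sketch with ``save_to_files`` and ``from_files`` (the directory and tokens are illustrative):

vocab = Vocabulary()
vocab.add_token_to_namespace("cat", namespace="tokens")
vocab.add_token_to_namespace("dog", namespace="tokens")

vocab.save_to_files("/tmp/my_vocabulary")        # writes one .txt file per namespace
restored = Vocabulary.from_files("/tmp/my_vocabulary")
assert restored.get_token_index("dog", namespace="tokens") == \
       vocab.get_token_index("dog", namespace="tokens")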
If you already have a vocabulary file for a trained model somewhere, and you really want to use that vocabulary file instead of just setting the vocabulary from a dataset, for whatever reason, you can do that with this method. You must specify the namespace to use, and we assume that you want to use padding and OOV tokens for this. Parameters ---------- filename : ``str`` The file containing the vocabulary to load. It should be formatted as one token per line, with nothing else in the line. The index we assign to the token is the line number in the file (1-indexed if ``is_padded``, 0-indexed otherwise). Note that this file should contain the OOV token string! is_padded : ``bool``, optional (default=True) Is this vocabulary padded? For token / word / character vocabularies, this should be ``True``; while for tag or label vocabularies, this should typically be ``False``. If ``True``, we add a padding token with index 0, and we enforce that the ``oov_token`` is present in the file. oov_token : ``str``, optional (default=DEFAULT_OOV_TOKEN) What token does this vocabulary use to represent out-of-vocabulary characters? This must show up as a line in the vocabulary file. When we find it, we replace ``oov_token`` with ``self._oov_token``, because we only use one OOV token across namespaces. namespace : ``str``, optional (default="tokens") What namespace should we overwrite with this vocab file?
def set_from_file(self, filename: str, is_padded: bool = True, oov_token: str = DEFAULT_OOV_TOKEN, namespace: str = "tokens"): """ If you already have a vocabulary file for a trained model somewhere, and you really want to use that vocabulary file instead of just setting the vocabulary from a dataset, for whatever reason, you can do that with this method. You must specify the namespace to use, and we assume that you want to use padding and OOV tokens for this. Parameters ---------- filename : ``str`` The file containing the vocabulary to load. It should be formatted as one token per line, with nothing else in the line. The index we assign to the token is the line number in the file (1-indexed if ``is_padded``, 0-indexed otherwise). Note that this file should contain the OOV token string! is_padded : ``bool``, optional (default=True) Is this vocabulary padded? For token / word / character vocabularies, this should be ``True``; while for tag or label vocabularies, this should typically be ``False``. If ``True``, we add a padding token with index 0, and we enforce that the ``oov_token`` is present in the file. oov_token : ``str``, optional (default=DEFAULT_OOV_TOKEN) What token does this vocabulary use to represent out-of-vocabulary characters? This must show up as a line in the vocabulary file. When we find it, we replace ``oov_token`` with ``self._oov_token``, because we only use one OOV token across namespaces. namespace : ``str``, optional (default="tokens") What namespace should we overwrite with this vocab file? """ if is_padded: self._token_to_index[namespace] = {self._padding_token: 0} self._index_to_token[namespace] = {0: self._padding_token} else: self._token_to_index[namespace] = {} self._index_to_token[namespace] = {} with codecs.open(filename, 'r', 'utf-8') as input_file: lines = input_file.read().split('\n') # Be flexible about having final newline or not if lines and lines[-1] == '': lines = lines[:-1] for i, line in enumerate(lines): index = i + 1 if is_padded else i token = line.replace('@@NEWLINE@@', '\n') if token == oov_token: token = self._oov_token self._token_to_index[namespace][token] = index self._index_to_token[namespace][index] = token if is_padded: assert self._oov_token in self._token_to_index[namespace], "OOV token not found!"
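A sketch of the expected file layout and the resulting indices, with a hypothetical ``tokens.txt`` (one token per line, which must include the OOV token):

# tokens.txt:
#   @@UNKNOWN@@
#   the
#   dog
vocab = Vocabulary()
vocab.set_from_file("tokens.txt", is_padded=True, namespace="tokens")
print(vocab.get_token_index("the", namespace="tokens"))  # 2 -- index 0 is padding, 1 is the OOV token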
Constructs a vocabulary given a collection of `Instances` and some parameters. We count all of the vocabulary items in the instances, then pass those counts and the other parameters to :func:`__init__`. See that method for a description of what the other parameters do.
def from_instances(cls, instances: Iterable['adi.Instance'], min_count: Dict[str, int] = None, max_vocab_size: Union[int, Dict[str, int]] = None, non_padded_namespaces: Iterable[str] = DEFAULT_NON_PADDED_NAMESPACES, pretrained_files: Optional[Dict[str, str]] = None, only_include_pretrained_words: bool = False, tokens_to_add: Dict[str, List[str]] = None, min_pretrained_embeddings: Dict[str, int] = None) -> 'Vocabulary': """ Constructs a vocabulary given a collection of `Instances` and some parameters. We count all of the vocabulary items in the instances, then pass those counts and the other parameters, to :func:`__init__`. See that method for a description of what the other parameters do. """ logger.info("Fitting token dictionary from dataset.") namespace_token_counts: Dict[str, Dict[str, int]] = defaultdict(lambda: defaultdict(int)) for instance in Tqdm.tqdm(instances): instance.count_vocab_items(namespace_token_counts) return cls(counter=namespace_token_counts, min_count=min_count, max_vocab_size=max_vocab_size, non_padded_namespaces=non_padded_namespaces, pretrained_files=pretrained_files, only_include_pretrained_words=only_include_pretrained_words, tokens_to_add=tokens_to_add, min_pretrained_embeddings=min_pretrained_embeddings)
There are two possible ways to build a vocabulary: from a collection of instances, using :func:`Vocabulary.from_instances`, or from a pre-saved vocabulary, using :func:`Vocabulary.from_files`. You can also extend a pre-saved vocabulary with a collection of instances using this method. This method wraps these options, allowing their specification from a ``Params`` object, generated from a JSON configuration file. Parameters ---------- params: Params, required. instances: Iterable['adi.Instance'], optional If ``params`` doesn't contain a ``directory_path`` key, the ``Vocabulary`` can be built directly from a collection of instances (i.e. a dataset). If the ``extend`` key is set to False, dataset instances will be ignored and the final vocabulary will be the one loaded from ``directory_path``. If the ``extend`` key is set to True, dataset instances will be used to extend the vocabulary loaded from ``directory_path``, and that will be the final vocabulary used. Returns ------- A ``Vocabulary``.
def from_params(cls, params: Params, instances: Iterable['adi.Instance'] = None): # type: ignore """ There are two possible ways to build a vocabulary; from a collection of instances, using :func:`Vocabulary.from_instances`, or from a pre-saved vocabulary, using :func:`Vocabulary.from_files`. You can also extend pre-saved vocabulary with collection of instances using this method. This method wraps these options, allowing their specification from a ``Params`` object, generated from a JSON configuration file. Parameters ---------- params: Params, required. instances: Iterable['adi.Instance'], optional If ``params`` doesn't contain a ``directory_path`` key, the ``Vocabulary`` can be built directly from a collection of instances (i.e. a dataset). If ``extend`` key is set False, dataset instances will be ignored and final vocabulary will be one loaded from ``directory_path``. If ``extend`` key is set True, dataset instances will be used to extend the vocabulary loaded from ``directory_path`` and that will be final vocabulary used. Returns ------- A ``Vocabulary``. """ # pylint: disable=arguments-differ # Vocabulary is ``Registrable`` so that you can configure a custom subclass, # but (unlike most of our registrables) almost everyone will want to use the # base implementation. So instead of having an abstract ``VocabularyBase`` or # such, we just add the logic for instantiating a registered subclass here, # so that most users can continue doing what they were doing. vocab_type = params.pop("type", None) if vocab_type is not None: return cls.by_name(vocab_type).from_params(params=params, instances=instances) extend = params.pop("extend", False) vocabulary_directory = params.pop("directory_path", None) if not vocabulary_directory and not instances: raise ConfigurationError("You must provide either a Params object containing a " "vocab_directory key or a Dataset to build a vocabulary from.") if extend and not instances: raise ConfigurationError("'extend' is true but there are not instances passed to extend.") if extend and not vocabulary_directory: raise ConfigurationError("'extend' is true but there is not 'directory_path' to extend from.") if vocabulary_directory and instances: if extend: logger.info("Loading Vocab from files and extending it with dataset.") else: logger.info("Loading Vocab from files instead of dataset.") if vocabulary_directory: vocab = cls.from_files(vocabulary_directory) if not extend: params.assert_empty("Vocabulary - from files") return vocab if extend: vocab.extend_from_instances(params, instances=instances) return vocab min_count = params.pop("min_count", None) max_vocab_size = pop_max_vocab_size(params) non_padded_namespaces = params.pop("non_padded_namespaces", DEFAULT_NON_PADDED_NAMESPACES) pretrained_files = params.pop("pretrained_files", {}) min_pretrained_embeddings = params.pop("min_pretrained_embeddings", None) only_include_pretrained_words = params.pop_bool("only_include_pretrained_words", False) tokens_to_add = params.pop("tokens_to_add", None) params.assert_empty("Vocabulary - from dataset") return cls.from_instances(instances=instances, min_count=min_count, max_vocab_size=max_vocab_size, non_padded_namespaces=non_padded_namespaces, pretrained_files=pretrained_files, only_include_pretrained_words=only_include_pretrained_words, tokens_to_add=tokens_to_add, min_pretrained_embeddings=min_pretrained_embeddings)
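The three configurations described above look roughly like this when built programmatically; the directory path is hypothetical and ``instances`` is assumed to be an iterable of ``Instance`` objects:

from allennlp.common import Params

# 1) Build from the dataset instances only (no "directory_path" key).
vocab = Vocabulary.from_params(Params({"min_count": {"tokens": 3}}), instances)

# 2) Load a pre-saved vocabulary and ignore the instances.
vocab = Vocabulary.from_params(Params({"directory_path": "/path/to/vocabulary"}), instances)

# 3) Load a pre-saved vocabulary, then extend it with the instances.
vocab = Vocabulary.from_params(Params({"directory_path": "/path/to/vocabulary",
                                       "extend": True}), instances)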
This method can be used for extending an already generated vocabulary. It takes the same parameters as the Vocabulary initializer. The token-to-index and index-to-token mappings of the calling vocabulary will be retained. It is an in-place operation, so None will be returned.
def _extend(self, counter: Dict[str, Dict[str, int]] = None, min_count: Dict[str, int] = None, max_vocab_size: Union[int, Dict[str, int]] = None, non_padded_namespaces: Iterable[str] = DEFAULT_NON_PADDED_NAMESPACES, pretrained_files: Optional[Dict[str, str]] = None, only_include_pretrained_words: bool = False, tokens_to_add: Dict[str, List[str]] = None, min_pretrained_embeddings: Dict[str, int] = None) -> None: """ This method can be used for extending already generated vocabulary. It takes same parameters as Vocabulary initializer. The token2index and indextotoken mappings of calling vocabulary will be retained. It is an inplace operation so None will be returned. """ if not isinstance(max_vocab_size, dict): int_max_vocab_size = max_vocab_size max_vocab_size = defaultdict(lambda: int_max_vocab_size) # type: ignore min_count = min_count or {} pretrained_files = pretrained_files or {} min_pretrained_embeddings = min_pretrained_embeddings or {} non_padded_namespaces = set(non_padded_namespaces) counter = counter or {} tokens_to_add = tokens_to_add or {} self._retained_counter = counter # Make sure vocabulary extension is safe. current_namespaces = {*self._token_to_index} extension_namespaces = {*counter, *tokens_to_add} for namespace in current_namespaces & extension_namespaces: # if new namespace was already present # Either both should be padded or none should be. original_padded = not any(namespace_match(pattern, namespace) for pattern in self._non_padded_namespaces) extension_padded = not any(namespace_match(pattern, namespace) for pattern in non_padded_namespaces) if original_padded != extension_padded: raise ConfigurationError("Common namespace {} has conflicting ".format(namespace)+ "setting of padded = True/False. "+ "Hence extension cannot be done.") # Add new non-padded namespaces for extension self._token_to_index.add_non_padded_namespaces(non_padded_namespaces) self._index_to_token.add_non_padded_namespaces(non_padded_namespaces) self._non_padded_namespaces.update(non_padded_namespaces) for namespace in counter: if namespace in pretrained_files: pretrained_list = _read_pretrained_tokens(pretrained_files[namespace]) min_embeddings = min_pretrained_embeddings.get(namespace, 0) if min_embeddings > 0: tokens_old = tokens_to_add.get(namespace, []) tokens_new = pretrained_list[:min_embeddings] tokens_to_add[namespace] = tokens_old + tokens_new pretrained_set = set(pretrained_list) else: pretrained_set = None token_counts = list(counter[namespace].items()) token_counts.sort(key=lambda x: x[1], reverse=True) try: max_vocab = max_vocab_size[namespace] except KeyError: max_vocab = None if max_vocab: token_counts = token_counts[:max_vocab] for token, count in token_counts: if pretrained_set is not None: if only_include_pretrained_words: if token in pretrained_set and count >= min_count.get(namespace, 1): self.add_token_to_namespace(token, namespace) elif token in pretrained_set or count >= min_count.get(namespace, 1): self.add_token_to_namespace(token, namespace) elif count >= min_count.get(namespace, 1): self.add_token_to_namespace(token, namespace) for namespace, tokens in tokens_to_add.items(): for token in tokens: self.add_token_to_namespace(token, namespace)
Extends an already generated vocabulary using a collection of instances.
def extend_from_instances(self, params: Params, instances: Iterable['adi.Instance'] = ()) -> None: """ Extends an already generated vocabulary using a collection of instances. """ min_count = params.pop("min_count", None) max_vocab_size = pop_max_vocab_size(params) non_padded_namespaces = params.pop("non_padded_namespaces", DEFAULT_NON_PADDED_NAMESPACES) pretrained_files = params.pop("pretrained_files", {}) min_pretrained_embeddings = params.pop("min_pretrained_embeddings", None) only_include_pretrained_words = params.pop_bool("only_include_pretrained_words", False) tokens_to_add = params.pop("tokens_to_add", None) params.assert_empty("Vocabulary - from dataset") logger.info("Fitting token dictionary from dataset.") namespace_token_counts: Dict[str, Dict[str, int]] = defaultdict(lambda: defaultdict(int)) for instance in Tqdm.tqdm(instances): instance.count_vocab_items(namespace_token_counts) self._extend(counter=namespace_token_counts, min_count=min_count, max_vocab_size=max_vocab_size, non_padded_namespaces=non_padded_namespaces, pretrained_files=pretrained_files, only_include_pretrained_words=only_include_pretrained_words, tokens_to_add=tokens_to_add, min_pretrained_embeddings=min_pretrained_embeddings)
Returns whether or not there are padding and OOV tokens added to the given namespace.
def is_padded(self, namespace: str) -> bool: """ Returns whether or not there are padding and OOV tokens added to the given namespace. """ return self._index_to_token[namespace][0] == self._padding_token
Adds ``token`` to the index, if it is not already present. Either way, we return the index of the token.
def add_token_to_namespace(self, token: str, namespace: str = 'tokens') -> int: """ Adds ``token`` to the index, if it is not already present. Either way, we return the index of the token. """ if not isinstance(token, str): raise ValueError("Vocabulary tokens must be strings, or saving and loading will break." " Got %s (with type %s)" % (repr(token), type(token))) if token not in self._token_to_index[namespace]: index = len(self._token_to_index[namespace]) self._token_to_index[namespace][token] = index self._index_to_token[namespace][index] = token return index else: return self._token_to_index[namespace][token]
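A tiny usage sketch showing the "either way" behaviour: adding a token that is already present is a no-op that just returns its existing index:

vocab = Vocabulary()
first = vocab.add_token_to_namespace("cat", namespace="tokens")
second = vocab.add_token_to_namespace("cat", namespace="tokens")
assert first == second  # the token is only added once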
Computes the regularization penalty for the model. Returns 0 if the model was not configured to use regularization.
def get_regularization_penalty(self) -> Union[float, torch.Tensor]: """ Computes the regularization penalty for the model. Returns 0 if the model was not configured to use regularization. """ if self._regularizer is None: return 0.0 else: return self._regularizer(self)
Takes an :class:`~allennlp.data.instance.Instance`, which typically has raw text in it, converts that text into arrays using this model's :class:`Vocabulary`, passes those arrays through :func:`self.forward()` and :func:`self.decode()` (which by default does nothing) and returns the result. Before returning the result, we convert any ``torch.Tensors`` into numpy arrays and remove the batch dimension.
def forward_on_instance(self, instance: Instance) -> Dict[str, numpy.ndarray]: """ Takes an :class:`~allennlp.data.instance.Instance`, which typically has raw text in it, converts that text into arrays using this model's :class:`Vocabulary`, passes those arrays through :func:`self.forward()` and :func:`self.decode()` (which by default does nothing) and returns the result. Before returning the result, we convert any ``torch.Tensors`` into numpy arrays and remove the batch dimension. """ return self.forward_on_instances([instance])[0]
Takes a list of :class:`~allennlp.data.instance.Instance`s, converts that text into arrays using this model's :class:`Vocabulary`, passes those arrays through :func:`self.forward()` and :func:`self.decode()` (which by default does nothing) and returns the result. Before returning the result, we convert any ``torch.Tensors`` into numpy arrays and separate the batched output into a list of individual dicts per instance. Note that typically this will be faster on a GPU (and conditionally, on a CPU) than repeated calls to :func:`forward_on_instance`. Parameters ---------- instances : List[Instance], required The instances to run the model on. Returns ------- A list of the model's output for each instance.
def forward_on_instances(self, instances: List[Instance]) -> List[Dict[str, numpy.ndarray]]: """ Takes a list of :class:`~allennlp.data.instance.Instance`s, converts that text into arrays using this model's :class:`Vocabulary`, passes those arrays through :func:`self.forward()` and :func:`self.decode()` (which by default does nothing) and returns the result. Before returning the result, we convert any ``torch.Tensors`` into numpy arrays and separate the batched output into a list of individual dicts per instance. Note that typically this will be faster on a GPU (and conditionally, on a CPU) than repeated calls to :func:`forward_on_instance`. Parameters ---------- instances : List[Instance], required The instances to run the model on. Returns ------- A list of the models output for each instance. """ batch_size = len(instances) with torch.no_grad(): cuda_device = self._get_prediction_device() dataset = Batch(instances) dataset.index_instances(self.vocab) model_input = util.move_to_device(dataset.as_tensor_dict(), cuda_device) outputs = self.decode(self(**model_input)) instance_separated_output: List[Dict[str, numpy.ndarray]] = [{} for _ in dataset.instances] for name, output in list(outputs.items()): if isinstance(output, torch.Tensor): # NOTE(markn): This is a hack because 0-dim pytorch tensors are not iterable. # This occurs with batch size 1, because we still want to include the loss in that case. if output.dim() == 0: output = output.unsqueeze(0) if output.size(0) != batch_size: self._maybe_warn_for_unseparable_batches(name) continue output = output.detach().cpu().numpy() elif len(output) != batch_size: self._maybe_warn_for_unseparable_batches(name) continue for instance_output, batch_element in zip(instance_separated_output, output): instance_output[name] = batch_element return instance_separated_output
Takes the result of :func:`forward` and runs inference / decoding / whatever post-processing you need to do your model. The intent is that ``model.forward()`` should produce potentials or probabilities, and then ``model.decode()`` can take those results and run some kind of beam search or constrained inference or whatever is necessary. This does not handle all possible decoding use cases, but it at least handles simple kinds of decoding. This method `modifies` the input dictionary, and also `returns` the same dictionary. By default in the base class we do nothing. If your model has some special decoding step, override this method.
def decode(self, output_dict: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]: """ Takes the result of :func:`forward` and runs inference / decoding / whatever post-processing you need to do your model. The intent is that ``model.forward()`` should produce potentials or probabilities, and then ``model.decode()`` can take those results and run some kind of beam search or constrained inference or whatever is necessary. This does not handle all possible decoding use cases, but it at least handles simple kinds of decoding. This method `modifies` the input dictionary, and also `returns` the same dictionary. By default in the base class we do nothing. If your model has some special decoding step, override this method. """ # pylint: disable=no-self-use return output_dict
This method checks the device of the model parameters to determine the cuda_device this model should be run on for predictions. If there are no parameters, it returns -1. Returns ------- The cuda device this model should run on for predictions.
def _get_prediction_device(self) -> int: """ This method checks the device of the model parameters to determine the cuda_device this model should be run on for predictions. If there are no parameters, it returns -1. Returns ------- The cuda device this model should run on for predictions. """ devices = {util.get_device_of(param) for param in self.parameters()} if len(devices) > 1: devices_string = ", ".join(str(x) for x in devices) raise ConfigurationError(f"Parameters have mismatching cuda_devices: {devices_string}") elif len(devices) == 1: return devices.pop() else: return -1
This method warns once if a user implements a model which returns a dictionary with values which we are unable to split back up into elements of the batch. This is controlled by a class attribute ``_warn_for_unseparable_batches`` because it would be extremely verbose otherwise.
def _maybe_warn_for_unseparable_batches(self, output_key: str): """ This method warns once if a user implements a model which returns a dictionary with values which we are unable to split back up into elements of the batch. This is controlled by a class attribute ``_warn_for_unseperable_batches`` because it would be extremely verbose otherwise. """ if output_key not in self._warn_for_unseparable_batches: logger.warning(f"Encountered the {output_key} key in the model's return dictionary which " "couldn't be split by the batch size. Key will be ignored.") # We only want to warn once for this key, # so we set this to false so we don't warn again. self._warn_for_unseparable_batches.add(output_key)
Instantiates an already-trained model, based on the experiment configuration and some optional overrides.
def _load(cls, config: Params, serialization_dir: str, weights_file: str = None, cuda_device: int = -1) -> 'Model': """ Instantiates an already-trained model, based on the experiment configuration and some optional overrides. """ weights_file = weights_file or os.path.join(serialization_dir, _DEFAULT_WEIGHTS) # Load vocabulary from file vocab_dir = os.path.join(serialization_dir, 'vocabulary') # If the config specifies a vocabulary subclass, we need to use it. vocab_params = config.get("vocabulary", Params({})) vocab_choice = vocab_params.pop_choice("type", Vocabulary.list_available(), True) vocab = Vocabulary.by_name(vocab_choice).from_files(vocab_dir) model_params = config.get('model') # The experiment config tells us how to _train_ a model, including where to get pre-trained # embeddings from. We're now _loading_ the model, so those embeddings will already be # stored in our weights. We don't need any pretrained weight file anymore, and we don't # want the code to look for it, so we remove it from the parameters here. remove_pretrained_embedding_params(model_params) model = Model.from_params(vocab=vocab, params=model_params) # If vocab+embedding extension was done, the model initialized from from_params # and one defined by state dict in weights_file might not have same embedding shapes. # Eg. when model embedder module was transferred along with vocab extension, the # initialized embedding weight shape would be smaller than one in the state_dict. # So calling model embedding extension is required before load_state_dict. # If vocab and model embeddings are in sync, following would be just a no-op. model.extend_embedder_vocab() model_state = torch.load(weights_file, map_location=util.device_mapping(cuda_device)) model.load_state_dict(model_state) # Force model to cpu or gpu, as appropriate, to make sure that the embeddings are # in sync with the weights if cuda_device >= 0: model.cuda(cuda_device) else: model.cpu() return model
Instantiates an already-trained model, based on the experiment configuration and some optional overrides. Parameters ---------- config: Params The configuration that was used to train the model. It should definitely have a `model` section, and should probably have a `trainer` section as well. serialization_dir: str = None The directory containing the serialized weights, parameters, and vocabulary of the model. weights_file: str = None By default we load the weights from `best.th` in the serialization directory, but you can override that value here. cuda_device: int = -1 By default we load the model on the CPU, but if you want to load it for GPU usage you can specify the id of your GPU here Returns ------- model: Model The model specified in the configuration, loaded with the serialized vocabulary and the trained weights.
def load(cls, config: Params, serialization_dir: str, weights_file: str = None, cuda_device: int = -1) -> 'Model': """ Instantiates an already-trained model, based on the experiment configuration and some optional overrides. Parameters ---------- config: Params The configuration that was used to train the model. It should definitely have a `model` section, and should probably have a `trainer` section as well. serialization_dir: str = None The directory containing the serialized weights, parameters, and vocabulary of the model. weights_file: str = None By default we load the weights from `best.th` in the serialization directory, but you can override that value here. cuda_device: int = -1 By default we load the model on the CPU, but if you want to load it for GPU usage you can specify the id of your GPU here Returns ------- model: Model The model specified in the configuration, loaded with the serialized vocabulary and the trained weights. """ # Peak at the class of the model. model_type = config["model"]["type"] # Load using an overridable _load method. # This allows subclasses of Model to override _load. # pylint: disable=protected-access return cls.by_name(model_type)._load(config, serialization_dir, weights_file, cuda_device)
Iterates through all embedding modules in the model and ensures each one can embed with the extended vocabulary. This is required in fine-tuning or transfer-learning scenarios where the model was trained with the original vocabulary but now has to work with an extended vocabulary (original + new-data vocabulary).

Parameters
----------
embedding_sources_mapping : Dict[str, str], (optional, default=None)
    Mapping from model_path to the pretrained-file path of the embedding modules. If the
    pretrained file used at the time of embedding initialization isn't available now, the user
    should pass this mapping. A model path is the path traversing the model attributes up to
    this embedding module, e.g. "_text_field_embedder.token_embedder_tokens".
def extend_embedder_vocab(self, embedding_sources_mapping: Dict[str, str] = None) -> None:
    """
    Iterates through all embedding modules in the model and ensures each one can embed with
    the extended vocabulary. This is required in fine-tuning or transfer-learning scenarios
    where the model was trained with the original vocabulary but now has to work with an
    extended vocabulary (original + new-data vocabulary).

    Parameters
    ----------
    embedding_sources_mapping : Dict[str, str], (optional, default=None)
        Mapping from model_path to the pretrained-file path of the embedding modules. If the
        pretrained file used at the time of embedding initialization isn't available now, the
        user should pass this mapping. A model path is the path traversing the model attributes
        up to this embedding module, e.g. "_text_field_embedder.token_embedder_tokens".
    """
    # self.named_modules() gives all sub-modules (including nested children)
    # The path nesting is already separated by ".": eg. parent_module_name.child_module_name
    embedding_sources_mapping = embedding_sources_mapping or {}
    for model_path, module in self.named_modules():
        if hasattr(module, 'extend_vocab'):
            pretrained_file = embedding_sources_mapping.get(model_path, None)
            module.extend_vocab(self.vocab,
                                extension_pretrained_file=pretrained_file,
                                model_path=model_path)
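As a rough usage sketch (names and the embedding file path are assumptions, not taken from the source): after the model's vocabulary has been extended with new data, a caller can point a specific embedder at a pretrained file so the newly grown rows get sensible initial values.

# `model` is an already-loaded Model whose self.vocab has been extended elsewhere;
# the mapping key follows the "_text_field_embedder.token_embedder_tokens" convention
# mentioned in the docstring, and the file path is a placeholder.
model.extend_embedder_vocab({
    "_text_field_embedder.token_embedder_tokens": "/path/to/pretrained_embeddings.txt.gz",
})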
Takes a logical form, and the list of target values as strings from the original lisp string, and returns True iff the logical form executes to the target list, using the official WikiTableQuestions evaluation script.
def evaluate_logical_form(self, logical_form: str, target_list: List[str]) -> bool: """ Takes a logical form, and the list of target values as strings from the original lisp string, and returns True iff the logical form executes to the target list, using the official WikiTableQuestions evaluation script. """ normalized_target_list = [TableQuestionContext.normalize_string(value) for value in target_list] target_value_list = evaluator.to_value_list(normalized_target_list) try: denotation = self.execute(logical_form) except ExecutionError: logger.warning(f'Failed to execute: {logical_form}') return False if isinstance(denotation, list): denotation_list = [str(denotation_item) for denotation_item in denotation] else: denotation_list = [str(denotation)] denotation_value_list = evaluator.to_value_list(denotation_list) return evaluator.check_denotation(target_value_list, denotation_value_list)
Returns an agenda that can be used to guide search.

Parameters
----------
conservative : ``bool``
    Setting this flag will return a subset of the agenda items that correspond to high-confidence
    lexical matches. You'll need this if you are going to use this agenda to penalize a model for
    producing logical forms that do not contain some items in it. In that case, you'll want this
    agenda to have close to perfect precision, at the cost of a lower recall. You may not want to
    set this flag if you are sorting the output from a search procedure based on how much of this
    agenda is satisfied.
def get_agenda(self, conservative: bool = False): """ Returns an agenda that can be used guide search. Parameters ---------- conservative : ``bool`` Setting this flag will return a subset of the agenda items that correspond to high confidence lexical matches. You'll need this if you are going to use this agenda to penalize a model for producing logical forms that do not contain some items in it. In that case, you'll want this agenda to have close to perfect precision, at the cost of a lower recall. You may not want to set this flag if you are sorting the output from a search procedure based on how much of this agenda is satisfied. """ agenda_items = [] question_tokens = [token.text for token in self.table_context.question_tokens] question = " ".join(question_tokens) added_number_filters = False if self._table_has_number_columns: if "at least" in question: agenda_items.append("filter_number_greater_equals") if "at most" in question: agenda_items.append("filter_number_lesser_equals") comparison_triggers = ["greater", "larger", "more"] if any(f"no {word} than" in question for word in comparison_triggers): agenda_items.append("filter_number_lesser_equals") elif any(f"{word} than" in question for word in comparison_triggers): agenda_items.append("filter_number_greater") # We want to keep track of this because we do not want to add both number and date # filters to the agenda if we want to be conservative. if agenda_items: added_number_filters = True for token in question_tokens: if token in ["next", "below"] or (token == "after" and not conservative): agenda_items.append("next") if token in ["previous", "above"] or (token == "before" and not conservative): agenda_items.append("previous") if token in ["first", "top"]: agenda_items.append("first") if token in ["last", "bottom"]: agenda_items.append("last") if token == "same": agenda_items.append("same_as") if self._table_has_number_columns: # "total" does not always map to an actual summing operation. if token == "total" and not conservative: agenda_items.append("sum") if token == "difference" or "how many more" in question or "how much more" in question: agenda_items.append("diff") if token == "average": agenda_items.append("average") if token in ["least", "smallest", "shortest", "lowest"] and "at least" not in question: # This condition is too brittle. But for most logical forms with "min", there are # semantically equivalent ones with "argmin". The exceptions are rare. if "what is the least" not in question: agenda_items.append("argmin") if token in ["most", "largest", "highest", "longest", "greatest"] and "at most" not in question: # This condition is too brittle. But for most logical forms with "max", there are # semantically equivalent ones with "argmax". The exceptions are rare. if "what is the most" not in question: agenda_items.append("argmax") if self._table_has_date_columns: if token in MONTH_NUMBERS or (token.isdigit() and len(token) == 4 and int(token) < 2100 and int(token) > 1100): # Token is either a month or an year. We'll add date functions. 
if not added_number_filters or not conservative: if "after" in question_tokens: agenda_items.append("filter_date_greater") elif "before" in question_tokens: agenda_items.append("filter_date_lesser") elif "not" in question_tokens: agenda_items.append("filter_date_not_equals") else: agenda_items.append("filter_date_equals") if "what is the least" in question and self._table_has_number_columns: agenda_items.append("min_number") if "what is the most" in question and self._table_has_number_columns: agenda_items.append("max_number") if "when" in question_tokens and self._table_has_date_columns: if "last" in question_tokens: agenda_items.append("max_date") elif "first" in question_tokens: agenda_items.append("min_date") else: agenda_items.append("select_date") if "how many" in question: if "sum" not in agenda_items and "average" not in agenda_items: # The question probably just requires counting the rows. But this is not very # accurate. The question could also be asking for a value that is in the table. agenda_items.append("count") agenda = [] # Adding productions from the global set. for agenda_item in set(agenda_items): # Some agenda items may not be present in the terminal productions because some of these # terminals are table-content specific. For example, if the question triggered "sum", # and the table does not have number columns, we should not add "<r,<f,n>> -> sum" to # the agenda. if agenda_item in self.terminal_productions: agenda.append(self.terminal_productions[agenda_item]) if conservative: # Some of the columns in the table have multiple types, and thus occur in the KG as # different columns. We do not want to add them all to the agenda if their names, # because it is unlikely that logical forms use them all. In fact, to be conservative, # we won't add any of them. So we'll first identify such column names. refined_column_productions: Dict[str, str] = {} for column_name, signature in self._column_productions_for_agenda.items(): column_type, name = column_name.split(":") if column_type == "string_column": if f"number_column:{name}" not in self._column_productions_for_agenda and \ f"date_column:{name}" not in self._column_productions_for_agenda: refined_column_productions[column_name] = signature elif column_type == "number_column": if f"string_column:{name}" not in self._column_productions_for_agenda and \ f"date_column:{name}" not in self._column_productions_for_agenda: refined_column_productions[column_name] = signature else: if f"string_column:{name}" not in self._column_productions_for_agenda and \ f"number_column:{name}" not in self._column_productions_for_agenda: refined_column_productions[column_name] = signature # Similarly, we do not want the same spans in the question to be added to the agenda as # both string and number productions. refined_entities: List[str] = [] refined_numbers: List[str] = [] for entity in self._question_entities: if entity.replace("string:", "") not in self._question_numbers: refined_entities.append(entity) for number in self._question_numbers: if f"string:{number}" not in self._question_entities: refined_numbers.append(number) else: refined_column_productions = dict(self._column_productions_for_agenda) refined_entities = list(self._question_entities) refined_numbers = list(self._question_numbers) # Adding column names that occur in question. question_with_underscores = "_".join(question_tokens) normalized_question = re.sub("[^a-z0-9_]", "", question_with_underscores) # We keep track of tokens that are in column names being added to the agenda. 
        # We will not add string productions to the agenda if those tokens were already
        # captured as column names.
        # Note: If the same string occurs multiple times, this may cause string productions
        # being omitted from the agenda unnecessarily. That is fine, as we want to err on the
        # side of adding fewer rules to the agenda.
        tokens_in_column_names: Set[str] = set()
        for column_name_with_type, signature in refined_column_productions.items():
            column_name = column_name_with_type.split(":")[1]
            # Underscores ensure that the match is of whole words.
            if f"_{column_name}_" in normalized_question:
                agenda.append(signature)
                for token in column_name.split("_"):
                    tokens_in_column_names.add(token)
        # Adding all productions that lead to entities and numbers extracted from the question.
        for entity in refined_entities:
            if entity.replace("string:", "") not in tokens_in_column_names:
                agenda.append(f"str -> {entity}")
        for number in refined_numbers:
            # The reason we check for the presence of the number in the question again is
            # because some of these numbers are extracted from number words like month names
            # and ordinals like "first". On looking at some agenda outputs, I found that they
            # hurt more than help in the agenda.
            if f"_{number}_" in normalized_question:
                agenda.append(f"Number -> {number}")
        return agenda
Select function takes a list of rows and a column name and returns a list of strings (the values in the cells under that column).
def select_string(self, rows: List[Row], column: StringColumn) -> List[str]:
    """
    Select function takes a list of rows and a column name and returns a list of strings
    (the values in the cells under that column).
    """
    return [str(row.values[column.name]) for row in rows if row.values[column.name] is not None]
Select function takes a row (as a list) and a column name and returns the number in that column. If multiple rows are given, will return the first number that is not None.
def select_number(self, rows: List[Row], column: NumberColumn) -> Number: """ Select function takes a row (as a list) and a column name and returns the number in that column. If multiple rows are given, will return the first number that is not None. """ numbers: List[float] = [] for row in rows: cell_value = row.values[column.name] if isinstance(cell_value, float): numbers.append(cell_value) return numbers[0] if numbers else -1
Select function takes a row as a list and a column name and returns the date in that column.
def select_date(self, rows: List[Row], column: DateColumn) -> Date: """ Select function takes a row as a list and a column name and returns the date in that column. """ dates: List[Date] = [] for row in rows: cell_value = row.values[column.name] if isinstance(cell_value, Date): dates.append(cell_value) return dates[0] if dates else Date(-1, -1, -1)
Takes a row and a column and returns a list of rows from the full set of rows that contain the same value under the given column as the given row.
def same_as(self, rows: List[Row], column: Column) -> List[Row]: """ Takes a row and a column and returns a list of rows from the full set of rows that contain the same value under the given column as the given row. """ cell_value = rows[0].values[column.name] return_list = [] for table_row in self.table_data: if table_row.values[column.name] == cell_value: return_list.append(table_row) return return_list
Takes three numbers and returns a ``Date`` object whose year, month, and day are the three numbers in that order.
def date(self, year: Number, month: Number, day: Number) -> Date: """ Takes three numbers and returns a ``Date`` object whose year, month, and day are the three numbers in that order. """ return Date(year, month, day)
Takes an expression that evaluates to a list of rows, and returns the first one in that list.
def first(self, rows: List[Row]) -> List[Row]: """ Takes an expression that evaluates to a list of rows, and returns the first one in that list. """ if not rows: logger.warning("Trying to get first row from an empty list") return [] return [rows[0]]
Takes an expression that evaluates to a list of rows, and returns the last one in that list.
def last(self, rows: List[Row]) -> List[Row]: """ Takes an expression that evaluates to a list of rows, and returns the last one in that list. """ if not rows: logger.warning("Trying to get last row from an empty list") return [] return [rows[-1]]
Takes an expression that evaluates to a single row, and returns the row that occurs before the input row in the original set of rows. If the input row happens to be the top row, we will return an empty list.
def previous(self, rows: List[Row]) -> List[Row]: """ Takes an expression that evaluates to a single row, and returns the row that occurs before the input row in the original set of rows. If the input row happens to be the top row, we will return an empty list. """ if not rows: return [] input_row_index = self._get_row_index(rows[0]) if input_row_index > 0: return [self.table_data[input_row_index - 1]] return []
Takes an expression that evaluates to a single row, and returns the row that occurs after the input row in the original set of rows. If the input row happens to be the last row, we will return an empty list.
def next(self, rows: List[Row]) -> List[Row]: """ Takes an expression that evaluates to a single row, and returns the row that occurs after the input row in the original set of rows. If the input row happens to be the last row, we will return an empty list. """ if not rows: return [] input_row_index = self._get_row_index(rows[0]) if input_row_index < len(self.table_data) - 1 and input_row_index != -1: return [self.table_data[input_row_index + 1]] return []
Takes a list of rows and a column and returns the most frequent values (one or more) under that column in those rows.
def mode_string(self, rows: List[Row], column: StringColumn) -> List[str]: """ Takes a list of rows and a column and returns the most frequent values (one or more) under that column in those rows. """ most_frequent_list = self._get_most_frequent_values(rows, column) if not most_frequent_list: return [] if not all([isinstance(value, str) for value in most_frequent_list]): raise ExecutionError(f"Invalid values for mode_string: {most_frequent_list}") return most_frequent_list
Takes a list of rows and a column and returns the most frequent value under that column in those rows.
def mode_number(self, rows: List[Row], column: NumberColumn) -> Number:
    """
    Takes a list of rows and a column and returns the most frequent value under
    that column in those rows.
    """
    most_frequent_list = self._get_most_frequent_values(rows, column)
    if not most_frequent_list:
        return 0.0  # type: ignore
    most_frequent_value = most_frequent_list[0]
    if not isinstance(most_frequent_value, Number):
        raise ExecutionError(f"Invalid values for mode_number: {most_frequent_value}")
    return most_frequent_value
Takes a list of rows and a column and returns the most frequent value under that column in those rows.
def mode_date(self, rows: List[Row], column: DateColumn) -> Date:
    """
    Takes a list of rows and a column and returns the most frequent value under
    that column in those rows.
    """
    most_frequent_list = self._get_most_frequent_values(rows, column)
    if not most_frequent_list:
        return Date(-1, -1, -1)
    most_frequent_value = most_frequent_list[0]
    if not isinstance(most_frequent_value, Date):
        raise ExecutionError(f"Invalid values for mode_date: {most_frequent_value}")
    return most_frequent_value
Takes a list of rows and a column name and returns a list containing a single row (dict from columns to cells) that has the maximum numerical value in the given column. We return a list instead of a single dict to be consistent with the return type of ``select`` and ``all_rows``.
def argmax(self, rows: List[Row], column: ComparableColumn) -> List[Row]: """ Takes a list of rows and a column name and returns a list containing a single row (dict from columns to cells) that has the maximum numerical value in the given column. We return a list instead of a single dict to be consistent with the return type of ``select`` and ``all_rows``. """ if not rows: return [] value_row_pairs = [(row.values[column.name], row) for row in rows] if not value_row_pairs: return [] # Returns a list containing the row with the max cell value. return [sorted(value_row_pairs, key=lambda x: x[0], reverse=True)[0][1]]
Takes a list of rows and a column and returns a list containing a single row (dict from columns to cells) that has the minimum numerical value in the given column. We return a list instead of a single dict to be consistent with the return type of ``select`` and ``all_rows``.
def argmin(self, rows: List[Row], column: ComparableColumn) -> List[Row]:
    """
    Takes a list of rows and a column and returns a list containing a single row (dict from
    columns to cells) that has the minimum numerical value in the given column. We return a
    list instead of a single dict to be consistent with the return type of ``select`` and
    ``all_rows``.
    """
    if not rows:
        return []
    value_row_pairs = [(row.values[column.name], row) for row in rows]
    if not value_row_pairs:
        return []
    # Returns a list containing the row with the min cell value.
    return [sorted(value_row_pairs, key=lambda x: x[0])[0][1]]
Takes a list of rows and a column and returns the max of the values under that column in those rows.
def max_date(self, rows: List[Row], column: DateColumn) -> Date: """ Takes a list of rows and a column and returns the max of the values under that column in those rows. """ cell_values = [row.values[column.name] for row in rows] if not cell_values: return Date(-1, -1, -1) if not all([isinstance(value, Date) for value in cell_values]): raise ExecutionError(f"Invalid values for date selection function: {cell_values}") return max(cell_values)
Takes a list of rows and a column and returns the max of the values under that column in those rows.
def max_number(self, rows: List[Row], column: NumberColumn) -> Number: """ Takes a list of rows and a column and returns the max of the values under that column in those rows. """ cell_values = [row.values[column.name] for row in rows] if not cell_values: return 0.0 # type: ignore if not all([isinstance(value, Number) for value in cell_values]): raise ExecutionError(f"Invalid values for number selection function: {cell_values}") return max(cell_values)
Takes a list of rows and a column and returns the mean of the values under that column in those rows.
def average(self, rows: List[Row], column: NumberColumn) -> Number: """ Takes a list of rows and a column and returns the mean of the values under that column in those rows. """ cell_values = [row.values[column.name] for row in rows] if not cell_values: return 0.0 # type: ignore return sum(cell_values) / len(cell_values)
Takes two rows and a number column and returns the difference between the values under that column in those two rows.
def diff(self, first_row: List[Row], second_row: List[Row], column: NumberColumn) -> Number:
    """
    Takes two rows and a number column and returns the difference between the values under
    that column in those two rows.
    """
    if not first_row or not second_row:
        return 0.0  # type: ignore
    first_value = first_row[0].values[column.name]
    second_value = second_row[0].values[column.name]
    if isinstance(first_value, float) and isinstance(second_value, float):
        return first_value - second_value  # type: ignore
    else:
        raise ExecutionError(f"Invalid column for diff: {column.name}")
Takes a row and returns its index in the full list of rows. If the row does not occur in the table (which should never happen because this function will only be called with a row that is the result of applying one or more functions on the table rows), the method returns -1.
def _get_row_index(self, row: Row) -> int: """ Takes a row and returns its index in the full list of rows. If the row does not occur in the table (which should never happen because this function will only be called with a row that is the result of applying one or more functions on the table rows), the method returns -1. """ row_index = -1 for index, table_row in enumerate(self.table_data): if table_row.values == row.values: row_index = index break return row_index
This function will be called on nodes of a logical form tree, which are either non-terminal symbols that can be expanded or terminal symbols that must be leaf nodes. Returns ``True`` if the given symbol is a terminal symbol.
def is_terminal(self, symbol: str) -> bool: """ This function will be called on nodes of a logical form tree, which are either non-terminal symbols that can be expanded or terminal symbols that must be leaf nodes. Returns ``True`` if the given symbol is a terminal symbol. """ # We special-case 'lambda' here because it behaves weirdly in action sequences. return (symbol in self.global_name_mapping or symbol in self.local_name_mapping or 'lambda' in symbol)
For a given action, returns at most ``max_num_paths`` paths to the root (production with ``START_SYMBOL``) that are not longer than ``max_path_length``.
def get_paths_to_root(self, action: str, max_path_length: int = 20, beam_size: int = 30, max_num_paths: int = 10) -> List[List[str]]: """ For a given action, returns at most ``max_num_paths`` paths to the root (production with ``START_SYMBOL``) that are not longer than ``max_path_length``. """ action_left_side, _ = action.split(' -> ') right_side_indexed_actions = self._get_right_side_indexed_actions() lists_to_expand: List[Tuple[str, List[str]]] = [(action_left_side, [action])] completed_paths = [] while lists_to_expand: need_to_expand = False for left_side, path in lists_to_expand: if left_side == types.START_SYMBOL: completed_paths.append(path) else: need_to_expand = True if not need_to_expand or len(completed_paths) >= max_num_paths: break # We keep track of finished and unfinished lists separately because we truncate the beam # later, and we want the finished lists to be at the top of the beam. finished_new_lists = [] unfinished_new_lists = [] for left_side, actions in lists_to_expand: for next_left_side, next_action in right_side_indexed_actions[left_side]: if next_action in actions: # Ignoring paths with loops (of size 1) continue new_actions = list(actions) new_actions.append(next_action) # Ignoring lists that are too long, and have too many repetitions. path_length = len(new_actions) if path_length <= max_path_length or next_left_side == types.START_SYMBOL: if next_left_side == types.START_SYMBOL: finished_new_lists.append((next_left_side, new_actions)) else: unfinished_new_lists.append((next_left_side, new_actions)) new_lists = finished_new_lists + unfinished_new_lists lists_to_expand = new_lists[:beam_size] return completed_paths[:max_num_paths]
Returns a mapping from each `MultiMatchNamedBasicType` to all the `NamedBasicTypes` that it matches.
def get_multi_match_mapping(self) -> Dict[Type, List[Type]]:
    """
    Returns a mapping from each `MultiMatchNamedBasicType` to all the `NamedBasicTypes` that
    it matches.
    """
    if self._multi_match_mapping is None:
        self._multi_match_mapping = {}
        basic_types = self.get_basic_types()
        for basic_type in basic_types:
            if isinstance(basic_type, types.MultiMatchNamedBasicType):
                matched_types: List[str] = []
                # We need to check if each type in the `types_to_match` field for the given
                # MultiMatchNamedBasic type is itself in the set of basic types allowed in this
                # world, and add it to the mapping only if it is. Some basic types that the
                # multi match type can match with may be disallowed in the world due to the
                # instance-specific context.
                for type_ in basic_type.types_to_match:
                    if type_ in basic_types:
                        matched_types.append(type_)
                self._multi_match_mapping[basic_type] = matched_types
    return self._multi_match_mapping
Takes a logical form as a string, maps its tokens using the mapping and returns a parsed expression. Parameters ---------- logical_form : ``str`` Logical form to parse remove_var_function : ``bool`` (optional) ``var`` is a special function that some languages use within lambda functions to indicate the usage of a variable. If your language uses it, and you do not want to include it in the parsed expression, set this flag. You may want to do this if you are generating an action sequence from this parsed expression, because it is easier to let the decoder not produce this function due to the way constrained decoding is currently implemented.
def parse_logical_form(self, logical_form: str, remove_var_function: bool = True) -> Expression: """ Takes a logical form as a string, maps its tokens using the mapping and returns a parsed expression. Parameters ---------- logical_form : ``str`` Logical form to parse remove_var_function : ``bool`` (optional) ``var`` is a special function that some languages use within lambda functions to indicate the usage of a variable. If your language uses it, and you do not want to include it in the parsed expression, set this flag. You may want to do this if you are generating an action sequence from this parsed expression, because it is easier to let the decoder not produce this function due to the way constrained decoding is currently implemented. """ if not logical_form.startswith("("): logical_form = f"({logical_form})" if remove_var_function: # Replace "(x)" with "x" logical_form = re.sub(r'\(([x-z])\)', r'\1', logical_form) # Replace "(var x)" with "(x)" logical_form = re.sub(r'\(var ([x-z])\)', r'(\1)', logical_form) parsed_lisp = semparse_util.lisp_to_nested_expression(logical_form) translated_string = self._process_nested_expression(parsed_lisp) type_signature = self.local_type_signatures.copy() type_signature.update(self.global_type_signatures) return self._logic_parser.parse(translated_string, signature=type_signature)
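The two ``re.sub`` calls above are easy to misread, so here is a small self-contained demonstration of what they do to a lambda expression when ``remove_var_function`` is true (the input logical form is invented for illustration):

import re

logical_form = "(lambda x (fb:row.row.year (var x)))"
# Replace "(x)" with "x": there is no standalone "(x)" here, so this step is a no-op for this input.
step_one = re.sub(r'\(([x-z])\)', r'\1', logical_form)
# Replace "(var x)" with "(x)".
step_two = re.sub(r'\(var ([x-z])\)', r'(\1)', step_one)
assert step_two == "(lambda x (fb:row.row.year (x)))"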
Returns the sequence of actions (as strings) that resulted in the given expression.
def get_action_sequence(self, expression: Expression) -> List[str]: """ Returns the sequence of actions (as strings) that resulted in the given expression. """ # Starting with the type of the whole expression return self._get_transitions(expression, [f"{types.START_TYPE} -> {expression.type}"])
Takes an action sequence and constructs a logical form from it. This is useful if you want to get a logical form from a decoded sequence of actions generated by a transition based semantic parser. Parameters ---------- action_sequence : ``List[str]`` The sequence of actions as strings (eg.: ``['{START_SYMBOL} -> t', 't -> <e,t>', ...]``). add_var_function : ``bool`` (optional) ``var`` is a special function that some languages use within lambda functions to indicate the use of a variable (eg.: ``(lambda x (fb:row.row.year (var x)))``). Due to the way constrained decoding is currently implemented, it is easier for the decoder to not produce these functions. In that case, setting this flag adds the function in the logical form even though it is not present in the action sequence.
def get_logical_form(self, action_sequence: List[str], add_var_function: bool = True) -> str: """ Takes an action sequence and constructs a logical form from it. This is useful if you want to get a logical form from a decoded sequence of actions generated by a transition based semantic parser. Parameters ---------- action_sequence : ``List[str]`` The sequence of actions as strings (eg.: ``['{START_SYMBOL} -> t', 't -> <e,t>', ...]``). add_var_function : ``bool`` (optional) ``var`` is a special function that some languages use within lambda functions to indicate the use of a variable (eg.: ``(lambda x (fb:row.row.year (var x)))``). Due to the way constrained decoding is currently implemented, it is easier for the decoder to not produce these functions. In that case, setting this flag adds the function in the logical form even though it is not present in the action sequence. """ # Basic outline: we assume that the bracketing that we get in the RHS of each action is the # correct bracketing for reconstructing the logical form. This is true when there is no # currying in the action sequence. Given this assumption, we just need to construct a tree # from the action sequence, then output all of the leaves in the tree, with brackets around # the children of all non-terminal nodes. remaining_actions = [action.split(" -> ") for action in action_sequence] tree = Tree(remaining_actions[0][1], []) try: remaining_actions = self._construct_node_from_actions(tree, remaining_actions[1:], add_var_function) except ParsingError: logger.error("Error parsing action sequence: %s", action_sequence) raise if remaining_actions: logger.error("Error parsing action sequence: %s", action_sequence) logger.error("Remaining actions were: %s", remaining_actions) raise ParsingError("Extra actions in action sequence") return nltk_tree_to_logical_form(tree)
Given a current node in the logical form tree, and a list of actions in an action sequence, this method fills in the children of the current node from the action sequence, then returns whatever actions are left. For example, we could get a node with type ``c``, and an action sequence that begins with ``c -> [<r,c>, r]``. This method will add two children to the input node, consuming actions from the action sequence for nodes of type ``<r,c>`` (and all of its children, recursively) and ``r`` (and all of its children, recursively). This method assumes that action sequences are produced `depth-first`, so all actions for the subtree under ``<r,c>`` appear before actions for the subtree under ``r``. If there are any actions in the action sequence after the ``<r,c>`` and ``r`` subtrees have terminated in leaf nodes, they will be returned.
def _construct_node_from_actions(self, current_node: Tree, remaining_actions: List[List[str]], add_var_function: bool) -> List[List[str]]: """ Given a current node in the logical form tree, and a list of actions in an action sequence, this method fills in the children of the current node from the action sequence, then returns whatever actions are left. For example, we could get a node with type ``c``, and an action sequence that begins with ``c -> [<r,c>, r]``. This method will add two children to the input node, consuming actions from the action sequence for nodes of type ``<r,c>`` (and all of its children, recursively) and ``r`` (and all of its children, recursively). This method assumes that action sequences are produced `depth-first`, so all actions for the subtree under ``<r,c>`` appear before actions for the subtree under ``r``. If there are any actions in the action sequence after the ``<r,c>`` and ``r`` subtrees have terminated in leaf nodes, they will be returned. """ if not remaining_actions: logger.error("No actions left to construct current node: %s", current_node) raise ParsingError("Incomplete action sequence") left_side, right_side = remaining_actions.pop(0) if left_side != current_node.label(): mismatch = True multi_match_mapping = {str(key): [str(value) for value in values] for key, values in self.get_multi_match_mapping().items()} current_label = current_node.label() if current_label in multi_match_mapping and left_side in multi_match_mapping[current_label]: mismatch = False if mismatch: logger.error("Current node: %s", current_node) logger.error("Next action: %s -> %s", left_side, right_side) logger.error("Remaining actions were: %s", remaining_actions) raise ParsingError("Current node does not match next action") if right_side[0] == '[': # This is a non-terminal expansion, with more than one child node. for child_type in right_side[1:-1].split(', '): if child_type.startswith("'lambda"): # We need to special-case the handling of lambda here, because it's handled a # bit weirdly in the action sequence. This is stripping off the single quotes # around something like `'lambda x'`. child_type = child_type[1:-1] child_node = Tree(child_type, []) current_node.append(child_node) # you add a child to an nltk.Tree with `append` if not self.is_terminal(child_type): remaining_actions = self._construct_node_from_actions(child_node, remaining_actions, add_var_function) elif self.is_terminal(right_side): # The current node is a pre-terminal; we'll add a single terminal child. We need to # check first for whether we need to add a (var _) around the terminal node, though. if add_var_function and right_side in self._lambda_variables: right_side = f"(var {right_side})" if add_var_function and right_side == 'var': raise ParsingError('add_var_function was true, but action sequence already had var') current_node.append(Tree(right_side, [])) # you add a child to an nltk.Tree with `append` else: # The only way this can happen is if you have a unary non-terminal production rule. # That is almost certainly not what you want with this kind of grammar, so we'll crash. # If you really do want this, open a PR with a valid use case. raise ParsingError(f"Found a unary production rule: {left_side} -> {right_side}. " "Are you sure you want a unary production rule in your grammar?") return remaining_actions
Takes a type signature and infers the number of arguments the corresponding function takes. Examples: e -> 0 <r,e> -> 1 <e,<e,t>> -> 2 <b,<<b,#1>,<#1,b>>> -> 3
def _infer_num_arguments(cls, type_signature: str) -> int: """ Takes a type signature and infers the number of arguments the corresponding function takes. Examples: e -> 0 <r,e> -> 1 <e,<e,t>> -> 2 <b,<<b,#1>,<#1,b>>> -> 3 """ if not "<" in type_signature: return 0 # We need to find the return type from the signature. We do that by removing the outer most # angular brackets and traversing the remaining substring till the angular brackets (if any) # balance. Once we hit a comma after the angular brackets are balanced, whatever is left # after it is the return type. type_signature = type_signature[1:-1] num_brackets = 0 char_index = 0 for char in type_signature: if char == '<': num_brackets += 1 elif char == '>': num_brackets -= 1 elif char == ',': if num_brackets == 0: break char_index += 1 return_type = type_signature[char_index+1:] return 1 + cls._infer_num_arguments(return_type)
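The docstring examples above can be checked directly. This sketch assumes the enclosing class is importable as ``World`` (the name is an assumption based on context; since the method is a classmethod, it can be called on the class itself):

# `World` is the assumed name of the class that defines the classmethod above.
assert World._infer_num_arguments('e') == 0
assert World._infer_num_arguments('<r,e>') == 1
assert World._infer_num_arguments('<e,<e,t>>') == 2
assert World._infer_num_arguments('<b,<<b,#1>,<#1,b>>>') == 3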
``nested_expression`` is the result of parsing a logical form in Lisp format. We process it recursively and return a string in the format that NLTK's ``LogicParser`` would understand.
def _process_nested_expression(self, nested_expression) -> str:
    """
    ``nested_expression`` is the result of parsing a logical form in Lisp format. We process
    it recursively and return a string in the format that NLTK's ``LogicParser`` would
    understand.
    """
    expression_is_list = isinstance(nested_expression, list)
    expression_size = len(nested_expression)
    if expression_is_list and expression_size == 1 and isinstance(nested_expression[0], list):
        return self._process_nested_expression(nested_expression[0])
    elements_are_leaves = [isinstance(element, str) for element in nested_expression]
    if all(elements_are_leaves):
        mapped_names = [self._map_name(name) for name in nested_expression]
    else:
        mapped_names = []
        for element, is_leaf in zip(nested_expression, elements_are_leaves):
            if is_leaf:
                mapped_names.append(self._map_name(element))
            else:
                mapped_names.append(self._process_nested_expression(element))
    if mapped_names[0] == "\\":
        # This means the predicate is lambda. NLTK wants the variable name to not be within
        # parentheses. Adding parentheses after the variable.
        arguments = [mapped_names[1]] + [f"({name})" for name in mapped_names[2:]]
    else:
        arguments = [f"({name})" for name in mapped_names[1:]]
    return f'({mapped_names[0]} {" ".join(arguments)})'
Utility method to add a name and its translation to the local name mapping, and the corresponding signature, if available, to the local type signatures. This method also updates the reverse name mapping.
def _add_name_mapping(self, name: str, translated_name: str, name_type: Type = None):
    """
    Utility method to add a name and its translation to the local name mapping, and the
    corresponding signature, if available, to the local type signatures. This method also
    updates the reverse name mapping.
    """
    self.local_name_mapping[name] = translated_name
    self.reverse_name_mapping[translated_name] = name
    if name_type:
        self.local_type_signatures[translated_name] = name_type
Parameters ---------- inputs : PackedSequence, required. A tensor of shape (batch_size, num_timesteps, input_size) to apply the LSTM over. initial_state : Tuple[torch.Tensor, torch.Tensor], optional, (default = None) A tuple (state, memory) representing the initial hidden state and memory of the LSTM. Each tensor has shape (1, batch_size, output_dimension). Returns ------- A PackedSequence containing a torch.FloatTensor of shape (batch_size, num_timesteps, output_dimension) representing the outputs of the LSTM per timestep and a tuple containing the LSTM state, with shape (1, batch_size, hidden_size) to match the Pytorch API.
def forward(self, # pylint: disable=arguments-differ inputs: PackedSequence, initial_state: Optional[Tuple[torch.Tensor, torch.Tensor]] = None): """ Parameters ---------- inputs : PackedSequence, required. A tensor of shape (batch_size, num_timesteps, input_size) to apply the LSTM over. initial_state : Tuple[torch.Tensor, torch.Tensor], optional, (default = None) A tuple (state, memory) representing the initial hidden state and memory of the LSTM. Each tensor has shape (1, batch_size, output_dimension). Returns ------- A PackedSequence containing a torch.FloatTensor of shape (batch_size, num_timesteps, output_dimension) representing the outputs of the LSTM per timestep and a tuple containing the LSTM state, with shape (1, batch_size, hidden_size) to match the Pytorch API. """ if not isinstance(inputs, PackedSequence): raise ConfigurationError('inputs must be PackedSequence but got %s' % (type(inputs))) sequence_tensor, batch_lengths = pad_packed_sequence(inputs, batch_first=True) batch_size = sequence_tensor.size()[0] total_timesteps = sequence_tensor.size()[1] output_accumulator = sequence_tensor.new_zeros(batch_size, total_timesteps, self.hidden_size) if initial_state is None: full_batch_previous_memory = sequence_tensor.new_zeros(batch_size, self.hidden_size) full_batch_previous_state = sequence_tensor.new_zeros(batch_size, self.hidden_size) else: full_batch_previous_state = initial_state[0].squeeze(0) full_batch_previous_memory = initial_state[1].squeeze(0) current_length_index = batch_size - 1 if self.go_forward else 0 if self.recurrent_dropout_probability > 0.0: dropout_mask = get_dropout_mask(self.recurrent_dropout_probability, full_batch_previous_memory) else: dropout_mask = None for timestep in range(total_timesteps): # The index depends on which end we start. index = timestep if self.go_forward else total_timesteps - timestep - 1 # What we are doing here is finding the index into the batch dimension # which we need to use for this timestep, because the sequences have # variable length, so once the index is greater than the length of this # particular batch sequence, we no longer need to do the computation for # this sequence. The key thing to recognise here is that the batch inputs # must be _ordered_ by length from longest (first in batch) to shortest # (last) so initially, we are going forwards with every sequence and as we # pass the index at which the shortest elements of the batch finish, # we stop picking them up for the computation. if self.go_forward: while batch_lengths[current_length_index] <= index: current_length_index -= 1 # If we're going backwards, we are _picking up_ more indices. else: # First conditional: Are we already at the maximum number of elements in the batch? # Second conditional: Does the next shortest sequence beyond the current batch # index require computation use this timestep? while current_length_index < (len(batch_lengths) - 1) and \ batch_lengths[current_length_index + 1] > index: current_length_index += 1 # Actually get the slices of the batch which we need for the computation at this timestep. previous_memory = full_batch_previous_memory[0: current_length_index + 1].clone() previous_state = full_batch_previous_state[0: current_length_index + 1].clone() # Only do recurrent dropout if the dropout prob is > 0.0 and we are in training mode. 
if dropout_mask is not None and self.training: previous_state = previous_state * dropout_mask[0: current_length_index + 1] timestep_input = sequence_tensor[0: current_length_index + 1, index] # Do the projections for all the gates all at once. projected_input = self.input_linearity(timestep_input) projected_state = self.state_linearity(previous_state) # Main LSTM equations using relevant chunks of the big linear # projections of the hidden state and inputs. input_gate = torch.sigmoid(projected_input[:, 0 * self.hidden_size:1 * self.hidden_size] + projected_state[:, 0 * self.hidden_size:1 * self.hidden_size]) forget_gate = torch.sigmoid(projected_input[:, 1 * self.hidden_size:2 * self.hidden_size] + projected_state[:, 1 * self.hidden_size:2 * self.hidden_size]) memory_init = torch.tanh(projected_input[:, 2 * self.hidden_size:3 * self.hidden_size] + projected_state[:, 2 * self.hidden_size:3 * self.hidden_size]) output_gate = torch.sigmoid(projected_input[:, 3 * self.hidden_size:4 * self.hidden_size] + projected_state[:, 3 * self.hidden_size:4 * self.hidden_size]) memory = input_gate * memory_init + forget_gate * previous_memory timestep_output = output_gate * torch.tanh(memory) if self.use_highway: highway_gate = torch.sigmoid(projected_input[:, 4 * self.hidden_size:5 * self.hidden_size] + projected_state[:, 4 * self.hidden_size:5 * self.hidden_size]) highway_input_projection = projected_input[:, 5 * self.hidden_size:6 * self.hidden_size] timestep_output = highway_gate * timestep_output + (1 - highway_gate) * highway_input_projection # We've been doing computation with less than the full batch, so here we create a new # variable for the the whole batch at this timestep and insert the result for the # relevant elements of the batch into it. full_batch_previous_memory = full_batch_previous_memory.clone() full_batch_previous_state = full_batch_previous_state.clone() full_batch_previous_memory[0:current_length_index + 1] = memory full_batch_previous_state[0:current_length_index + 1] = timestep_output output_accumulator[0:current_length_index + 1, index] = timestep_output output_accumulator = pack_padded_sequence(output_accumulator, batch_lengths, batch_first=True) # Mimic the pytorch API by returning state in the following shape: # (num_layers * num_directions, batch_size, hidden_size). As this # LSTM cannot be stacked, the first dimension here is just 1. final_state = (full_batch_previous_state.unsqueeze(0), full_batch_previous_memory.unsqueeze(0)) return output_accumulator, final_state
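The loop above relies on the incoming ``PackedSequence`` being ordered from the longest sequence to the shortest, which is what the ``current_length_index`` bookkeeping assumes. A minimal sketch of building such an input with plain PyTorch (tensor shapes are illustrative; ``encoder`` stands for an instance of the module whose forward pass is shown above):

import torch
from torch.nn.utils.rnn import pack_padded_sequence

inputs = torch.randn(3, 5, 10)        # (batch_size, num_timesteps, input_size)
lengths = torch.tensor([5, 3, 2])     # already sorted longest-to-shortest
packed_inputs = pack_padded_sequence(inputs, lengths, batch_first=True)
# outputs, (final_state, final_memory) = encoder(packed_inputs)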
Creates a server running SEMPRE that we can send logical forms to for evaluation. This uses inter-process communication, because SEMPRE is java code. We also need to be careful to clean up the process when our program exits.
def _create_sempre_executor(self) -> None: """ Creates a server running SEMPRE that we can send logical forms to for evaluation. This uses inter-process communication, because SEMPRE is java code. We also need to be careful to clean up the process when our program exits. """ if self._executor_process: return # It'd be much nicer to just use `cached_path` for these files. However, the SEMPRE jar # that we're using expects to find these files in a particular location, so we need to make # sure we put the files in that location. os.makedirs(SEMPRE_DIR, exist_ok=True) abbreviations_path = os.path.join(SEMPRE_DIR, 'abbreviations.tsv') if not os.path.exists(abbreviations_path): result = requests.get(ABBREVIATIONS_FILE) with open(abbreviations_path, 'wb') as downloaded_file: downloaded_file.write(result.content) grammar_path = os.path.join(SEMPRE_DIR, 'grow.grammar') if not os.path.exists(grammar_path): result = requests.get(GROW_FILE) with open(grammar_path, 'wb') as downloaded_file: downloaded_file.write(result.content) if not check_for_java(): raise RuntimeError('Java is not installed properly.') args = ['java', '-jar', cached_path(SEMPRE_EXECUTOR_JAR), 'serve', self._table_directory] self._executor_process = subprocess.Popen(args, stdin=subprocess.PIPE, stdout=subprocess.PIPE, bufsize=1) lines = [] for _ in range(6): # SEMPRE outputs six lines of stuff when it loads that I can't disable. So, we clear # that here. lines.append(str(self._executor_process.stdout.readline())) assert 'Parser' in lines[-1], "SEMPRE server output unexpected; the server may have changed" logger.info("Started SEMPRE server for evaluating logical forms") # This is supposed to ensure that the subprocess gets killed when python exits. atexit.register(self._stop_sempre_executor)
Averaged per-mention precision and recall. <https://pdfs.semanticscholar.org/cfe3/c24695f1c14b78a5b8e95bcbd1c666140fd1.pdf>
def b_cubed(clusters, mention_to_gold): """ Averaged per-mention precision and recall. <https://pdfs.semanticscholar.org/cfe3/c24695f1c14b78a5b8e95bcbd1c666140fd1.pdf> """ numerator, denominator = 0, 0 for cluster in clusters: if len(cluster) == 1: continue gold_counts = Counter() correct = 0 for mention in cluster: if mention in mention_to_gold: gold_counts[tuple(mention_to_gold[mention])] += 1 for cluster2, count in gold_counts.items(): if len(cluster2) != 1: correct += count * count numerator += correct / float(len(cluster)) denominator += len(cluster) return numerator, denominator
Counts the mentions in each predicted cluster which need to be re-allocated in order for each predicted cluster to be contained by the respective gold cluster. <http://aclweb.org/anthology/M/M95/M95-1005.pdf>
def muc(clusters, mention_to_gold): """ Counts the mentions in each predicted cluster which need to be re-allocated in order for each predicted cluster to be contained by the respective gold cluster. <http://aclweb.org/anthology/M/M95/M95-1005.pdf> """ true_p, all_p = 0, 0 for cluster in clusters: all_p += len(cluster) - 1 true_p += len(cluster) linked = set() for mention in cluster: if mention in mention_to_gold: linked.add(mention_to_gold[mention]) else: true_p -= 1 true_p -= len(linked) return true_p, all_p
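For intuition, here is a toy sketch of how these pair-counting functions are typically combined into precision and recall. The cluster data is invented, and the orientation follows the usual convention of scoring predicted clusters against a mention-to-gold map for precision and gold clusters against a mention-to-predicted map for recall:

# Mentions are (start, end) spans; clusters are tuples of mentions.
predicted_clusters = [((0, 1), (2, 3)), ((4, 5), (6, 7))]
gold_clusters = [((0, 1), (2, 3), (4, 5))]
mention_to_gold = {mention: cluster for cluster in gold_clusters for mention in cluster}
mention_to_predicted = {mention: cluster for cluster in predicted_clusters for mention in cluster}

p_num, p_den = muc(predicted_clusters, mention_to_gold)
r_num, r_den = muc(gold_clusters, mention_to_predicted)
precision = p_num / p_den if p_den else 0.0
recall = r_num / r_den if r_den else 0.0
# For this toy data both precision and recall come out to 0.5.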
Subroutine for ceafe. Computes the mention F measure between gold and predicted mentions in a cluster.
def phi4(gold_clustering, predicted_clustering): """ Subroutine for ceafe. Computes the mention F measure between gold and predicted mentions in a cluster. """ return 2 * len([mention for mention in gold_clustering if mention in predicted_clustering]) \ / float(len(gold_clustering) + len(predicted_clustering))
Computes the Constrained EntityAlignment F-Measure (CEAF) for evaluating coreference. Gold and predicted mentions are aligned into clusterings which maximise a metric - in this case, the F measure between gold and predicted clusters. <https://www.semanticscholar.org/paper/On-Coreference-Resolution-Performance-Metrics-Luo/de133c1f22d0dfe12539e25dda70f28672459b99>
def ceafe(clusters, gold_clusters): """ Computes the Constrained EntityAlignment F-Measure (CEAF) for evaluating coreference. Gold and predicted mentions are aligned into clusterings which maximise a metric - in this case, the F measure between gold and predicted clusters. <https://www.semanticscholar.org/paper/On-Coreference-Resolution-Performance-Metrics-Luo/de133c1f22d0dfe12539e25dda70f28672459b99> """ clusters = [cluster for cluster in clusters if len(cluster) != 1] scores = np.zeros((len(gold_clusters), len(clusters))) for i, gold_cluster in enumerate(gold_clusters): for j, cluster in enumerate(clusters): scores[i, j] = Scorer.phi4(gold_cluster, cluster) matching = linear_assignment(-scores) similarity = sum(scores[matching[:, 0], matching[:, 1]]) return similarity, len(clusters), similarity, len(gold_clusters)
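If ``linear_assignment`` here refers to the old ``sklearn.utils.linear_assignment_`` helper (an assumption; the import is not shown), note that it has since been removed from scikit-learn. An equivalent matching can be computed with SciPy, as in this sketch:

import numpy as np
from scipy.optimize import linear_sum_assignment

def linear_assignment_compat(cost_matrix: np.ndarray) -> np.ndarray:
    # Drop-in stand-in for sklearn's removed linear_assignment: minimises the total cost
    # and returns index pairs as an (n, 2) array, so the existing call
    # `matching = linear_assignment(-scores)` and the `matching[:, 0], matching[:, 1]`
    # indexing keep working unchanged.
    row_indices, col_indices = linear_sum_assignment(cost_matrix)
    return np.stack([row_indices, col_indices], axis=1)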
Takes an action in the current grammar state, returning a new grammar state with whatever updates are necessary. The production rule is assumed to be formatted as "LHS -> RHS".

This will update the non-terminal stack. Updating the non-terminal stack involves popping the non-terminal that was expanded off of the stack, then pushing on any non-terminals in the production rule back on the stack.

For example, if our current ``nonterminal_stack`` is ``["r", "<e,r>", "d"]``, and ``action`` is ``d -> [<e,d>, e]``, the resulting stack will be ``["r", "<e,r>", "e", "<e,d>"]``.

If ``self._reverse_productions`` is set to ``False`` then we push the non-terminals on in their given order, which means that the first non-terminal in the production rule gets popped off the stack `last`.
def take_action(self, production_rule: str) -> 'GrammarStatelet':
    """
    Takes an action in the current grammar state, returning a new grammar state with whatever
    updates are necessary. The production rule is assumed to be formatted as "LHS -> RHS".

    This will update the non-terminal stack. Updating the non-terminal stack involves popping
    the non-terminal that was expanded off of the stack, then pushing on any non-terminals in
    the production rule back on the stack.

    For example, if our current ``nonterminal_stack`` is ``["r", "<e,r>", "d"]``, and
    ``action`` is ``d -> [<e,d>, e]``, the resulting stack will be
    ``["r", "<e,r>", "e", "<e,d>"]``.

    If ``self._reverse_productions`` is set to ``False`` then we push the non-terminals on in
    their given order, which means that the first non-terminal in the production rule gets
    popped off the stack `last`.
    """
    left_side, right_side = production_rule.split(' -> ')
    assert self._nonterminal_stack[-1] == left_side, (f"Tried to expand {self._nonterminal_stack[-1]} "
                                                      f"but got rule {left_side} -> {right_side}")
    new_stack = self._nonterminal_stack[:-1]

    productions = self._get_productions_from_string(right_side)
    if self._reverse_productions:
        productions = list(reversed(productions))
    for production in productions:
        if self._is_nonterminal(production):
            new_stack.append(production)

    return GrammarStatelet(nonterminal_stack=new_stack,
                           valid_actions=self._valid_actions,
                           is_nonterminal=self._is_nonterminal,
                           reverse_productions=self._reverse_productions)
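A small illustrative walk-through of the stack update described in the docstring. The values are made up for exposition, and the actions dictionary is left empty because ``take_action`` does not consult it:

state = GrammarStatelet(nonterminal_stack=["r", "<e,r>", "d"],
                        valid_actions={},
                        is_nonterminal=lambda symbol: symbol in {"r", "<e,r>", "d", "e", "<e,d>"},
                        reverse_productions=True)
new_state = state.take_action("d -> [<e,d>, e]")
# Peeking at the private stack purely for illustration:
# new_state._nonterminal_stack == ["r", "<e,r>", "e", "<e,d>"]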
Clips gradient norm of an iterable of parameters. The norm is computed over all gradients together, as if they were concatenated into a single vector. Gradients are modified in-place. Supports sparse gradients. Parameters ---------- parameters : ``(Iterable[torch.Tensor])`` An iterable of Tensors that will have gradients normalized. max_norm : ``float`` The max norm of the gradients. norm_type : ``float`` The type of the used p-norm. Can be ``'inf'`` for infinity norm. Returns ------- Total norm of the parameters (viewed as a single vector).
def sparse_clip_norm(parameters, max_norm, norm_type=2) -> float: """Clips gradient norm of an iterable of parameters. The norm is computed over all gradients together, as if they were concatenated into a single vector. Gradients are modified in-place. Supports sparse gradients. Parameters ---------- parameters : ``(Iterable[torch.Tensor])`` An iterable of Tensors that will have gradients normalized. max_norm : ``float`` The max norm of the gradients. norm_type : ``float`` The type of the used p-norm. Can be ``'inf'`` for infinity norm. Returns ------- Total norm of the parameters (viewed as a single vector). """ # pylint: disable=invalid-name,protected-access parameters = list(filter(lambda p: p.grad is not None, parameters)) max_norm = float(max_norm) norm_type = float(norm_type) if norm_type == float('inf'): total_norm = max(p.grad.data.abs().max() for p in parameters) else: total_norm = 0 for p in parameters: if p.grad.is_sparse: # need to coalesce the repeated indices before finding norm grad = p.grad.data.coalesce() param_norm = grad._values().norm(norm_type) else: param_norm = p.grad.data.norm(norm_type) total_norm += param_norm ** norm_type total_norm = total_norm ** (1. / norm_type) clip_coef = max_norm / (total_norm + 1e-6) if clip_coef < 1: for p in parameters: if p.grad.is_sparse: p.grad.data._values().mul_(clip_coef) else: p.grad.data.mul_(clip_coef) return total_norm
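A quick usage sketch, mirroring how a trainer would typically call this between the backward pass and the optimizer step (the tiny model here is purely illustrative):

import torch

model = torch.nn.Linear(10, 1)
loss = model(torch.randn(4, 10)).sum()
loss.backward()
# Rescales all gradients in place if their combined L2 norm exceeds 5.0.
total_norm = sparse_clip_norm(model.parameters(), max_norm=5.0)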
Move the optimizer state to GPU, if necessary. After calling, any parameter specific state in the optimizer will be located on the same device as the parameter.
def move_optimizer_to_cuda(optimizer): """ Move the optimizer state to GPU, if necessary. After calling, any parameter specific state in the optimizer will be located on the same device as the parameter. """ for param_group in optimizer.param_groups: for param in param_group['params']: if param.is_cuda: param_state = optimizer.state[param] for k in param_state.keys(): if isinstance(param_state[k], torch.Tensor): param_state[k] = param_state[k].cuda(device=param.get_device())
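Typical usage, sketched: move the model to the GPU first, then bring any optimizer state (for example Adam's moment buffers) along with it so subsequent steps don't mix devices. Guarded here so it stays a no-op on CPU-only machines:

import torch

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.Adam(model.parameters())
# ... a few CPU training steps would populate optimizer.state ...
if torch.cuda.is_available():
    model.cuda()
    move_optimizer_to_cuda(optimizer)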
Returns the size of the batch dimension. Assumes a well-formed batch, returns 0 otherwise.
def get_batch_size(batch: Union[Dict, torch.Tensor]) -> int: """ Returns the size of the batch dimension. Assumes a well-formed batch, returns 0 otherwise. """ if isinstance(batch, torch.Tensor): return batch.size(0) # type: ignore elif isinstance(batch, Dict): return get_batch_size(next(iter(batch.values()))) else: return 0
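A couple of quick checks of the behaviour described above (the tensors are dummies):

import torch

assert get_batch_size(torch.zeros(8, 5)) == 8
assert get_batch_size({'tokens': torch.zeros(32, 20, dtype=torch.long)}) == 32
assert get_batch_size("not a batch") == 0   # falls through to the final branch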
Convert seconds past Epoch to human readable string.
def time_to_str(timestamp: int) -> str: """ Convert seconds past Epoch to human readable string. """ datetimestamp = datetime.datetime.fromtimestamp(timestamp) return '{:04d}-{:02d}-{:02d}-{:02d}-{:02d}-{:02d}'.format( datetimestamp.year, datetimestamp.month, datetimestamp.day, datetimestamp.hour, datetimestamp.minute, datetimestamp.second )
Convert human readable string to datetime.datetime.
def str_to_time(time_str: str) -> datetime.datetime: """ Convert human readable string to datetime.datetime. """ pieces: Any = [int(piece) for piece in time_str.split('-')] return datetime.datetime(*pieces)
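These two helpers are inverses of each other; a small round-trip check (the date is arbitrary):

import datetime

moment = datetime.datetime(2019, 5, 14, 12, 30, 0)
as_string = time_to_str(int(moment.timestamp()))   # e.g. '2019-05-14-12-30-00'
assert str_to_time(as_string) == moment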
Load all the datasets specified by the config. Parameters ---------- params : ``Params`` cache_directory : ``str``, optional If given, we will instruct the ``DatasetReaders`` that we construct to cache their instances in this location (or read their instances from caches in this location, if a suitable cache already exists). This is essentially a `base` directory for the cache, as we will additionally add the ``cache_prefix`` to this directory, giving an actual cache location of ``cache_directory + cache_prefix``. cache_prefix : ``str``, optional This works in conjunction with the ``cache_directory``. The idea is that the ``cache_directory`` contains caches for all different parameter settings, while the ``cache_prefix`` captures a specific set of parameters that led to a particular cache file. That is, if you change the tokenization settings inside your ``DatasetReader``, you don't want to read cached data that used the old settings. In order to avoid this, we compute a hash of the parameters used to construct each ``DatasetReader`` and use that as a "prefix" to the cache files inside the base ``cache_directory``. So, a given ``input_file`` would be cached essentially as ``cache_directory + cache_prefix + input_file``, where you specify a ``cache_directory``, the ``cache_prefix`` is based on the dataset reader parameters, and the ``input_file`` is whatever path you provided to ``DatasetReader.read()``. In order to allow you to give recognizable names to these prefixes if you want them, you can manually specify the ``cache_prefix``. Note that in some rare cases this can be dangerous, as we'll use the `same` prefix for both train and validation dataset readers.
def datasets_from_params(params: Params, cache_directory: str = None, cache_prefix: str = None) -> Dict[str, Iterable[Instance]]: """ Load all the datasets specified by the config. Parameters ---------- params : ``Params`` cache_directory : ``str``, optional If given, we will instruct the ``DatasetReaders`` that we construct to cache their instances in this location (or read their instances from caches in this location, if a suitable cache already exists). This is essentially a `base` directory for the cache, as we will additionally add the ``cache_prefix`` to this directory, giving an actual cache location of ``cache_directory + cache_prefix``. cache_prefix : ``str``, optional This works in conjunction with the ``cache_directory``. The idea is that the ``cache_directory`` contains caches for all different parameter settings, while the ``cache_prefix`` captures a specific set of parameters that led to a particular cache file. That is, if you change the tokenization settings inside your ``DatasetReader``, you don't want to read cached data that used the old settings. In order to avoid this, we compute a hash of the parameters used to construct each ``DatasetReader`` and use that as a "prefix" to the cache files inside the base ``cache_directory``. So, a given ``input_file`` would be cached essentially as ``cache_directory + cache_prefix + input_file``, where you specify a ``cache_directory``, the ``cache_prefix`` is based on the dataset reader parameters, and the ``input_file`` is whatever path you provided to ``DatasetReader.read()``. In order to allow you to give recognizable names to these prefixes if you want them, you can manually specify the ``cache_prefix``. Note that in some rare cases this can be dangerous, as we'll use the `same` prefix for both train and validation dataset readers. """ dataset_reader_params = params.pop('dataset_reader') validation_dataset_reader_params = params.pop('validation_dataset_reader', None) train_cache_dir, validation_cache_dir = _set_up_cache_files(dataset_reader_params, validation_dataset_reader_params, cache_directory, cache_prefix) dataset_reader = DatasetReader.from_params(dataset_reader_params) validation_and_test_dataset_reader: DatasetReader = dataset_reader if validation_dataset_reader_params is not None: logger.info("Using a separate dataset reader to load validation and test data.") validation_and_test_dataset_reader = DatasetReader.from_params(validation_dataset_reader_params) if train_cache_dir: dataset_reader.cache_data(train_cache_dir) validation_and_test_dataset_reader.cache_data(validation_cache_dir) train_data_path = params.pop('train_data_path') logger.info("Reading training data from %s", train_data_path) train_data = dataset_reader.read(train_data_path) datasets: Dict[str, Iterable[Instance]] = {"train": train_data} validation_data_path = params.pop('validation_data_path', None) if validation_data_path is not None: logger.info("Reading validation data from %s", validation_data_path) validation_data = validation_and_test_dataset_reader.read(validation_data_path) datasets["validation"] = validation_data test_data_path = params.pop("test_data_path", None) if test_data_path is not None: logger.info("Reading test data from %s", test_data_path) test_data = validation_and_test_dataset_reader.read(test_data_path) datasets["test"] = test_data return datasets
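A hypothetical call sketch; the reader type and file paths below are placeholders for the example, not values taken from this document.
# Assumed minimal config: a registered dataset reader plus train/validation paths.
params = Params({
    "dataset_reader": {"type": "sequence_tagging"},
    "train_data_path": "/path/to/train.tsv",
    "validation_data_path": "/path/to/dev.tsv",
})
datasets = datasets_from_params(params)
train_instances = datasets["train"]            # Iterable[Instance]
validation_instances = datasets["validation"]  # only present because a path was given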
This function creates the serialization directory if it doesn't exist. If it already exists and is non-empty, then it verifies that we're recovering from a training with an identical configuration. Parameters ---------- params: ``Params`` A parameter object specifying an AllenNLP Experiment. serialization_dir: ``str`` The directory in which to save results and logs. recover: ``bool`` If ``True``, we will try to recover from an existing serialization directory, and crash if the directory doesn't exist, or doesn't match the configuration we're given. force: ``bool`` If ``True``, we will overwrite the serialization directory if it already exists.
def create_serialization_dir( params: Params, serialization_dir: str, recover: bool, force: bool) -> None: """ This function creates the serialization directory if it doesn't exist. If it already exists and is non-empty, then it verifies that we're recovering from a training with an identical configuration. Parameters ---------- params: ``Params`` A parameter object specifying an AllenNLP Experiment. serialization_dir: ``str`` The directory in which to save results and logs. recover: ``bool`` If ``True``, we will try to recover from an existing serialization directory, and crash if the directory doesn't exist, or doesn't match the configuration we're given. force: ``bool`` If ``True``, we will overwrite the serialization directory if it already exists. """ if recover and force: raise ConfigurationError("Illegal arguments: both force and recover are true.") if os.path.exists(serialization_dir) and force: shutil.rmtree(serialization_dir) if os.path.exists(serialization_dir) and os.listdir(serialization_dir): if not recover: raise ConfigurationError(f"Serialization directory ({serialization_dir}) already exists and is " f"not empty. Specify --recover to recover training from existing output.") logger.info(f"Recovering from prior training at {serialization_dir}.") recovered_config_file = os.path.join(serialization_dir, CONFIG_NAME) if not os.path.exists(recovered_config_file): raise ConfigurationError("The serialization directory already exists but doesn't " "contain a config.json. You probably gave the wrong directory.") else: loaded_params = Params.from_file(recovered_config_file) # Check whether any of the training configuration differs from the configuration we are # resuming. If so, warn the user that training may fail. fail = False flat_params = params.as_flat_dict() flat_loaded = loaded_params.as_flat_dict() for key in flat_params.keys() - flat_loaded.keys(): logger.error(f"Key '{key}' found in training configuration but not in the serialization " f"directory we're recovering from.") fail = True for key in flat_loaded.keys() - flat_params.keys(): logger.error(f"Key '{key}' found in the serialization directory we're recovering from " f"but not in the training config.") fail = True for key in flat_params.keys(): if flat_params.get(key, None) != flat_loaded.get(key, None): logger.error(f"Value for '{key}' in training configuration does not match that the value in " f"the serialization directory we're recovering from: " f"{flat_params[key]} != {flat_loaded[key]}") fail = True if fail: raise ConfigurationError("Training configuration does not match the configuration we're " "recovering from.") else: if recover: raise ConfigurationError(f"--recover specified but serialization_dir ({serialization_dir}) " "does not exist. There is nothing to recover from.") os.makedirs(serialization_dir, exist_ok=True)
Performs a forward pass using multiple GPUs. This is a simplification of torch.nn.parallel.data_parallel to support the allennlp model interface.
def data_parallel(batch_group: List[TensorDict], model: Model, cuda_devices: List) -> Dict[str, torch.Tensor]: """ Performs a forward pass using multiple GPUs. This is a simplification of torch.nn.parallel.data_parallel to support the allennlp model interface. """ assert len(batch_group) <= len(cuda_devices) moved = [nn_util.move_to_device(batch, device) for batch, device in zip(batch_group, cuda_devices)] used_device_ids = cuda_devices[:len(moved)] # Counterintuitively, it appears replicate expects the source device id to be the first element # in the device id list. See torch.cuda.comm.broadcast_coalesced, which is called indirectly. replicas = replicate(model, used_device_ids) # We pass all our arguments as kwargs. Create a list of empty tuples of the # correct shape to serve as (non-existent) positional arguments. inputs = [()] * len(batch_group) outputs = parallel_apply(replicas, inputs, moved, used_device_ids) # Only the 'loss' is needed. # a (num_gpu, ) tensor with loss on each GPU losses = gather([output['loss'].unsqueeze(0) for output in outputs], used_device_ids[0], 0) return {'loss': losses.mean()}
Performs gradient rescaling. Is a no-op if gradient rescaling is not enabled.
def rescale_gradients(model: Model, grad_norm: Optional[float] = None) -> Optional[float]: """ Performs gradient rescaling. Is a no-op if gradient rescaling is not enabled. """ if grad_norm: parameters_to_clip = [p for p in model.parameters() if p.grad is not None] return sparse_clip_norm(parameters_to_clip, grad_norm) return None
Gets the metrics but sets ``"loss"`` to the total loss divided by the ``num_batches`` so that the ``"loss"`` metric is "average loss per batch".
def get_metrics(model: Model, total_loss: float, num_batches: int, reset: bool = False) -> Dict[str, float]: """ Gets the metrics but sets ``"loss"`` to the total loss divided by the ``num_batches`` so that the ``"loss"`` metric is "average loss per batch". """ metrics = model.get_metrics(reset=reset) metrics["loss"] = float(total_loss / num_batches) if num_batches > 0 else 0.0 return metrics
Parse all dependencies out of the requirements.txt file.
def parse_requirements() -> Tuple[PackagesType, PackagesType, Set[str]]: """Parse all dependencies out of the requirements.txt file.""" essential_packages: PackagesType = {} other_packages: PackagesType = {} duplicates: Set[str] = set() with open("requirements.txt", "r") as req_file: section: str = "" for line in req_file: line = line.strip() if line.startswith("####"): # Line is a section name. section = parse_section_name(line) continue if not line or line.startswith("#"): # Line is empty or just regular comment. continue module, version = parse_package(line) if module in essential_packages or module in other_packages: duplicates.add(module) if section.startswith("ESSENTIAL"): essential_packages[module] = version else: other_packages[module] = version return essential_packages, other_packages, duplicates
Parse all dependencies out of the setup.py script.
def parse_setup() -> Tuple[PackagesType, PackagesType, Set[str], Set[str]]: """Parse all dependencies out of the setup.py script.""" essential_packages: PackagesType = {} test_packages: PackagesType = {} essential_duplicates: Set[str] = set() test_duplicates: Set[str] = set() with open('setup.py') as setup_file: contents = setup_file.read() # Parse out essential packages. package_string = re.search(r"""install_requires=\[[\s\n]*['"](.*?)['"],?[\s\n]*\]""", contents, re.DOTALL).groups()[0].strip() for package in re.split(r"""['"],[\s\n]+['"]""", package_string): module, version = parse_package(package) if module in essential_packages: essential_duplicates.add(module) else: essential_packages[module] = version # Parse packages only needed for testing. package_string = re.search(r"""tests_require=\[[\s\n]*['"](.*?)['"],?[\s\n]*\]""", contents, re.DOTALL).groups()[0].strip() for package in re.split(r"""['"],[\s\n]+['"]""", package_string): module, version = parse_package(package) if module in test_packages: test_duplicates.add(module) else: test_packages[module] = version return essential_packages, test_packages, essential_duplicates, test_duplicates
Given a sentence, return all token spans within the sentence. Spans are `inclusive`. Additionally, you can provide a maximum and minimum span width, which will be used to exclude spans outside of this range. Finally, you can provide a function mapping ``List[T] -> bool``, which will be applied to every span to decide whether that span should be included. This allows filtering by length, regex matches, pos tags or any Spacy ``Token`` attributes, for example. Parameters ---------- sentence : ``List[T]``, required. The sentence to generate spans for. The type is generic, as this function can be used with strings, or Spacy ``Tokens`` or other sequences. offset : ``int``, optional (default = 0) A numeric offset to add to all span start and end indices. This is helpful if the sentence is part of a larger structure, such as a document, which the indices need to respect. max_span_width : ``int``, optional (default = None) The maximum length of spans which should be included. Defaults to len(sentence). min_span_width : ``int``, optional (default = 1) The minimum length of spans which should be included. Defaults to 1. filter_function : ``Callable[[List[T]], bool]``, optional (default = None) A function mapping sequences of the passed type T to a boolean value. If ``True``, the span is included in the returned spans from the sentence, otherwise it is excluded.
def enumerate_spans(sentence: List[T], offset: int = 0, max_span_width: int = None, min_span_width: int = 1, filter_function: Callable[[List[T]], bool] = None) -> List[Tuple[int, int]]: """ Given a sentence, return all token spans within the sentence. Spans are `inclusive`. Additionally, you can provide a maximum and minimum span width, which will be used to exclude spans outside of this range. Finally, you can provide a function mapping ``List[T] -> bool``, which will be applied to every span to decide whether that span should be included. This allows filtering by length, regex matches, pos tags or any Spacy ``Token`` attributes, for example. Parameters ---------- sentence : ``List[T]``, required. The sentence to generate spans for. The type is generic, as this function can be used with strings, or Spacy ``Tokens`` or other sequences. offset : ``int``, optional (default = 0) A numeric offset to add to all span start and end indices. This is helpful if the sentence is part of a larger structure, such as a document, which the indices need to respect. max_span_width : ``int``, optional (default = None) The maximum length of spans which should be included. Defaults to len(sentence). min_span_width : ``int``, optional (default = 1) The minimum length of spans which should be included. Defaults to 1. filter_function : ``Callable[[List[T]], bool]``, optional (default = None) A function mapping sequences of the passed type T to a boolean value. If ``True``, the span is included in the returned spans from the sentence, otherwise it is excluded.. """ max_span_width = max_span_width or len(sentence) filter_function = filter_function or (lambda x: True) spans: List[Tuple[int, int]] = [] for start_index in range(len(sentence)): last_end_index = min(start_index + max_span_width, len(sentence)) first_end_index = min(start_index + min_span_width - 1, len(sentence)) for end_index in range(first_end_index, last_end_index): start = offset + start_index end = offset + end_index # add 1 to end index because span indices are inclusive. if filter_function(sentence[slice(start_index, end_index + 1)]): spans.append((start, end)) return spans
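A worked example on a toy token list (plain strings work because the function is generic over the sequence element type):
tokens = ["The", "quick", "brown", "fox"]

# All spans of width at most 2; both endpoints are inclusive.
enumerate_spans(tokens, max_span_width=2)
# [(0, 0), (0, 1), (1, 1), (1, 2), (2, 2), (2, 3), (3, 3)]

# Keep only spans whose first token is capitalized.
enumerate_spans(tokens, max_span_width=2, filter_function=lambda span: span[0].istitle())
# [(0, 0), (0, 1)]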
Given a sequence corresponding to BIO tags, extracts spans. Spans are inclusive and can be of zero length, representing a single word span. Ill-formed spans are also included (i.e. those which do not start with a "B-LABEL"), as otherwise it is possible to get a perfect precision score whilst still predicting ill-formed spans in addition to the correct spans. This function works properly when the spans are unlabeled (i.e., your labels are simply "B", "I", and "O"). Parameters ---------- tag_sequence : List[str], required. The string class labels for a sequence. classes_to_ignore : List[str], optional (default = None). A list of string class labels `excluding` the bio tag which should be ignored when extracting spans. Returns ------- spans : List[TypedStringSpan] The typed, extracted spans from the sequence, in the format (label, (span_start, span_end)). Note that the label `does not` contain any BIO tag prefixes.
def bio_tags_to_spans(tag_sequence: List[str], classes_to_ignore: List[str] = None) -> List[TypedStringSpan]: """ Given a sequence corresponding to BIO tags, extracts spans. Spans are inclusive and can be of zero length, representing a single word span. Ill-formed spans are also included (i.e those which do not start with a "B-LABEL"), as otherwise it is possible to get a perfect precision score whilst still predicting ill-formed spans in addition to the correct spans. This function works properly when the spans are unlabeled (i.e., your labels are simply "B", "I", and "O"). Parameters ---------- tag_sequence : List[str], required. The integer class labels for a sequence. classes_to_ignore : List[str], optional (default = None). A list of string class labels `excluding` the bio tag which should be ignored when extracting spans. Returns ------- spans : List[TypedStringSpan] The typed, extracted spans from the sequence, in the format (label, (span_start, span_end)). Note that the label `does not` contain any BIO tag prefixes. """ classes_to_ignore = classes_to_ignore or [] spans: Set[Tuple[str, Tuple[int, int]]] = set() span_start = 0 span_end = 0 active_conll_tag = None for index, string_tag in enumerate(tag_sequence): # Actual BIO tag. bio_tag = string_tag[0] if bio_tag not in ["B", "I", "O"]: raise InvalidTagSequence(tag_sequence) conll_tag = string_tag[2:] if bio_tag == "O" or conll_tag in classes_to_ignore: # The span has ended. if active_conll_tag is not None: spans.add((active_conll_tag, (span_start, span_end))) active_conll_tag = None # We don't care about tags we are # told to ignore, so we do nothing. continue elif bio_tag == "B": # We are entering a new span; reset indices # and active tag to new span. if active_conll_tag is not None: spans.add((active_conll_tag, (span_start, span_end))) active_conll_tag = conll_tag span_start = index span_end = index elif bio_tag == "I" and conll_tag == active_conll_tag: # We're inside a span. span_end += 1 else: # This is the case the bio label is an "I", but either: # 1) the span hasn't started - i.e. an ill formed span. # 2) The span is an I tag for a different conll annotation. # We'll process the previous span if it exists, but also # include this span. This is important, because otherwise, # a model may get a perfect F1 score whilst still including # false positive ill-formed spans. if active_conll_tag is not None: spans.add((active_conll_tag, (span_start, span_end))) active_conll_tag = conll_tag span_start = index span_end = index # Last token might have been a part of a valid span. if active_conll_tag is not None: spans.add((active_conll_tag, (span_start, span_end))) return list(spans)
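For example (the function accumulates spans in a set, so the order of the returned list is not guaranteed):
bio_tags_to_spans(["B-PER", "I-PER", "O", "B-LOC", "O"])
# contains ('PER', (0, 1)) and ('LOC', (3, 3)), in some order

# An ill-formed I- tag with no preceding B- still yields a span:
bio_tags_to_spans(["O", "I-ORG", "I-ORG"])
# contains ('ORG', (1, 2))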
Given a sequence corresponding to IOB1 tags, extracts spans. Spans are inclusive and can be of zero length, representing a single word span. Ill-formed spans are also included (i.e., those where "B-LABEL" is not preceded by "I-LABEL" or "B-LABEL"). Parameters ---------- tag_sequence : List[str], required. The string class labels for a sequence. classes_to_ignore : List[str], optional (default = None). A list of string class labels `excluding` the bio tag which should be ignored when extracting spans. Returns ------- spans : List[TypedStringSpan] The typed, extracted spans from the sequence, in the format (label, (span_start, span_end)). Note that the label `does not` contain any BIO tag prefixes.
def iob1_tags_to_spans(tag_sequence: List[str], classes_to_ignore: List[str] = None) -> List[TypedStringSpan]: """ Given a sequence corresponding to IOB1 tags, extracts spans. Spans are inclusive and can be of zero length, representing a single word span. Ill-formed spans are also included (i.e., those where "B-LABEL" is not preceded by "I-LABEL" or "B-LABEL"). Parameters ---------- tag_sequence : List[str], required. The integer class labels for a sequence. classes_to_ignore : List[str], optional (default = None). A list of string class labels `excluding` the bio tag which should be ignored when extracting spans. Returns ------- spans : List[TypedStringSpan] The typed, extracted spans from the sequence, in the format (label, (span_start, span_end)). Note that the label `does not` contain any BIO tag prefixes. """ classes_to_ignore = classes_to_ignore or [] spans: Set[Tuple[str, Tuple[int, int]]] = set() span_start = 0 span_end = 0 active_conll_tag = None prev_bio_tag = None prev_conll_tag = None for index, string_tag in enumerate(tag_sequence): curr_bio_tag = string_tag[0] curr_conll_tag = string_tag[2:] if curr_bio_tag not in ["B", "I", "O"]: raise InvalidTagSequence(tag_sequence) if curr_bio_tag == "O" or curr_conll_tag in classes_to_ignore: # The span has ended. if active_conll_tag is not None: spans.add((active_conll_tag, (span_start, span_end))) active_conll_tag = None elif _iob1_start_of_chunk(prev_bio_tag, prev_conll_tag, curr_bio_tag, curr_conll_tag): # We are entering a new span; reset indices # and active tag to new span. if active_conll_tag is not None: spans.add((active_conll_tag, (span_start, span_end))) active_conll_tag = curr_conll_tag span_start = index span_end = index else: # bio_tag == "I" and curr_conll_tag == active_conll_tag # We're continuing a span. span_end += 1 prev_bio_tag = string_tag[0] prev_conll_tag = string_tag[2:] # Last token might have been a part of a valid span. if active_conll_tag is not None: spans.add((active_conll_tag, (span_start, span_end))) return list(spans)
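A small example of the IOB1 convention, where B- only marks the boundary between two adjacent spans of the same type (this relies on the ``_iob1_start_of_chunk`` helper defined alongside this function in the library):
iob1_tags_to_spans(["I-PER", "I-PER", "B-PER", "O"])
# contains ('PER', (0, 1)) and ('PER', (2, 2)), in some order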
Given a sequence corresponding to BIOUL tags, extracts spans. Spans are inclusive and can be of zero length, representing a single word span. Ill-formed spans are not allowed and will raise ``InvalidTagSequence``. This function works properly when the spans are unlabeled (i.e., your labels are simply "B", "I", "O", "U", and "L"). Parameters ---------- tag_sequence : ``List[str]``, required. The tag sequence encoded in BIOUL, e.g. ["B-PER", "L-PER", "O"]. classes_to_ignore : ``List[str]``, optional (default = None). A list of string class labels `excluding` the bio tag which should be ignored when extracting spans. Returns ------- spans : ``List[TypedStringSpan]`` The typed, extracted spans from the sequence, in the format (label, (span_start, span_end)).
def bioul_tags_to_spans(tag_sequence: List[str], classes_to_ignore: List[str] = None) -> List[TypedStringSpan]: """ Given a sequence corresponding to BIOUL tags, extracts spans. Spans are inclusive and can be of zero length, representing a single word span. Ill-formed spans are not allowed and will raise ``InvalidTagSequence``. This function works properly when the spans are unlabeled (i.e., your labels are simply "B", "I", "O", "U", and "L"). Parameters ---------- tag_sequence : ``List[str]``, required. The tag sequence encoded in BIOUL, e.g. ["B-PER", "L-PER", "O"]. classes_to_ignore : ``List[str]``, optional (default = None). A list of string class labels `excluding` the bio tag which should be ignored when extracting spans. Returns ------- spans : ``List[TypedStringSpan]`` The typed, extracted spans from the sequence, in the format (label, (span_start, span_end)). """ spans = [] classes_to_ignore = classes_to_ignore or [] index = 0 while index < len(tag_sequence): label = tag_sequence[index] if label[0] == 'U': spans.append((label.partition('-')[2], (index, index))) elif label[0] == 'B': start = index while label[0] != 'L': index += 1 if index >= len(tag_sequence): raise InvalidTagSequence(tag_sequence) label = tag_sequence[index] if not (label[0] == 'I' or label[0] == 'L'): raise InvalidTagSequence(tag_sequence) spans.append((label.partition('-')[2], (start, index))) else: if label != 'O': raise InvalidTagSequence(tag_sequence) index += 1 return [span for span in spans if span[0] not in classes_to_ignore]
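A well-formed BIOUL example; unlike the BIO variant above, the spans come back in left-to-right order because they are accumulated in a plain list:
bioul_tags_to_spans(["B-PER", "L-PER", "U-LOC", "O"])
# [('PER', (0, 1)), ('LOC', (2, 2))]

bioul_tags_to_spans(["B-PER", "O"])   # raises InvalidTagSequence: the span is never closed by an L- tag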
Given a tag sequence encoded with IOB1 labels, recode to BIOUL. In the IOB1 scheme, I is a token inside a span, O is a token outside a span and B is the beginning of a span immediately following another span of the same type. In the BIO scheme, I is a token inside a span, O is a token outside a span and B is the beginning of a span. Parameters ---------- tag_sequence : ``List[str]``, required. The tag sequence encoded in IOB1, e.g. ["I-PER", "I-PER", "O"]. encoding : `str`, optional, (default = ``IOB1``). The encoding type to convert from. Must be either "IOB1" or "BIO". Returns ------- bioul_sequence: ``List[str]`` The tag sequence encoded in BIOUL, e.g. ["B-PER", "L-PER", "O"].
def to_bioul(tag_sequence: List[str], encoding: str = "IOB1") -> List[str]: """ Given a tag sequence encoded with IOB1 labels, recode to BIOUL. In the IOB1 scheme, I is a token inside a span, O is a token outside a span and B is the beginning of span immediately following another span of the same type. In the BIO scheme, I is a token inside a span, O is a token outside a span and B is the beginning of a span. Parameters ---------- tag_sequence : ``List[str]``, required. The tag sequence encoded in IOB1, e.g. ["I-PER", "I-PER", "O"]. encoding : `str`, optional, (default = ``IOB1``). The encoding type to convert from. Must be either "IOB1" or "BIO". Returns ------- bioul_sequence: ``List[str]`` The tag sequence encoded in IOB1, e.g. ["B-PER", "L-PER", "O"]. """ if not encoding in {"IOB1", "BIO"}: raise ConfigurationError(f"Invalid encoding {encoding} passed to 'to_bioul'.") # pylint: disable=len-as-condition def replace_label(full_label, new_label): # example: full_label = 'I-PER', new_label = 'U', returns 'U-PER' parts = list(full_label.partition('-')) parts[0] = new_label return ''.join(parts) def pop_replace_append(in_stack, out_stack, new_label): # pop the last element from in_stack, replace the label, append # to out_stack tag = in_stack.pop() new_tag = replace_label(tag, new_label) out_stack.append(new_tag) def process_stack(stack, out_stack): # process a stack of labels, add them to out_stack if len(stack) == 1: # just a U token pop_replace_append(stack, out_stack, 'U') else: # need to code as BIL recoded_stack = [] pop_replace_append(stack, recoded_stack, 'L') while len(stack) >= 2: pop_replace_append(stack, recoded_stack, 'I') pop_replace_append(stack, recoded_stack, 'B') recoded_stack.reverse() out_stack.extend(recoded_stack) # Process the tag_sequence one tag at a time, adding spans to a stack, # then recode them. bioul_sequence = [] stack: List[str] = [] for label in tag_sequence: # need to make a dict like # token = {'token': 'Matt', "labels": {'conll2003': "B-PER"} # 'gold': 'I-PER'} # where 'gold' is the raw value from the CoNLL data set if label == 'O' and len(stack) == 0: bioul_sequence.append(label) elif label == 'O' and len(stack) > 0: # need to process the entries on the stack plus this one process_stack(stack, bioul_sequence) bioul_sequence.append(label) elif label[0] == 'I': # check if the previous type is the same as this one # if it is then append to stack # otherwise this start a new entity if the type # is different if len(stack) == 0: if encoding == "BIO": raise InvalidTagSequence(tag_sequence) stack.append(label) else: # check if the previous type is the same as this one this_type = label.partition('-')[2] prev_type = stack[-1].partition('-')[2] if this_type == prev_type: stack.append(label) else: if encoding == "BIO": raise InvalidTagSequence(tag_sequence) # a new entity process_stack(stack, bioul_sequence) stack.append(label) elif label[0] == 'B': if len(stack) > 0: process_stack(stack, bioul_sequence) stack.append(label) else: raise InvalidTagSequence(tag_sequence) # process the stack if len(stack) > 0: process_stack(stack, bioul_sequence) return bioul_sequence
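A quick conversion example in each supported encoding (in BIO input, B- starts every span, so the same surface spans are written differently):
to_bioul(["I-PER", "I-PER", "B-PER", "O"], encoding="IOB1")
# ['B-PER', 'L-PER', 'U-PER', 'O']

to_bioul(["B-PER", "I-PER", "O", "B-LOC"], encoding="BIO")
# ['B-PER', 'L-PER', 'O', 'U-LOC']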
Given a sequence corresponding to BMES tags, extracts spans. Spans are inclusive and can be of zero length, representing a single word span. Ill-formed spans are also included (i.e. those which do not start with a "B-LABEL"), as otherwise it is possible to get a perfect precision score whilst still predicting ill-formed spans in addition to the correct spans. This function works properly when the spans are unlabeled (i.e., your labels are simply "B", "M", "E" and "S"). Parameters ---------- tag_sequence : List[str], required. The string class labels for a sequence. classes_to_ignore : List[str], optional (default = None). A list of string class labels `excluding` the bio tag which should be ignored when extracting spans. Returns ------- spans : List[TypedStringSpan] The typed, extracted spans from the sequence, in the format (label, (span_start, span_end)). Note that the label `does not` contain any BIO tag prefixes.
def bmes_tags_to_spans(tag_sequence: List[str], classes_to_ignore: List[str] = None) -> List[TypedStringSpan]: """ Given a sequence corresponding to BMES tags, extracts spans. Spans are inclusive and can be of zero length, representing a single word span. Ill-formed spans are also included (i.e those which do not start with a "B-LABEL"), as otherwise it is possible to get a perfect precision score whilst still predicting ill-formed spans in addition to the correct spans. This function works properly when the spans are unlabeled (i.e., your labels are simply "B", "M", "E" and "S"). Parameters ---------- tag_sequence : List[str], required. The integer class labels for a sequence. classes_to_ignore : List[str], optional (default = None). A list of string class labels `excluding` the bio tag which should be ignored when extracting spans. Returns ------- spans : List[TypedStringSpan] The typed, extracted spans from the sequence, in the format (label, (span_start, span_end)). Note that the label `does not` contain any BIO tag prefixes. """ def extract_bmes_tag_label(text): bmes_tag = text[0] label = text[2:] return bmes_tag, label spans: List[Tuple[str, List[int]]] = [] prev_bmes_tag: Optional[str] = None for index, tag in enumerate(tag_sequence): bmes_tag, label = extract_bmes_tag_label(tag) if bmes_tag in ('B', 'S'): # Regardless of tag, we start a new span when reaching B & S. spans.append( (label, [index, index]) ) elif bmes_tag in ('M', 'E') and prev_bmes_tag in ('B', 'M') and spans[-1][0] == label: # Only expand the span if # 1. Valid transition: B/M -> M/E. # 2. Matched label. spans[-1][1][1] = index else: # Best effort split for invalid span. spans.append( (label, [index, index]) ) # update previous BMES tag. prev_bmes_tag = bmes_tag classes_to_ignore = classes_to_ignore or [] return [ # to tuple. (span[0], (span[1][0], span[1][1])) for span in spans if span[0] not in classes_to_ignore ]
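For example, with one multi-token span and one single-token span:
bmes_tags_to_spans(["B-PER", "E-PER", "S-LOC"])
# [('PER', (0, 1)), ('LOC', (2, 2))]

# An M- tag with no open span starts a best-effort span of its own,
# which the following E- then extends:
bmes_tags_to_spans(["M-PER", "E-PER"])
# [('PER', (0, 1))]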
Just converts from an ``argparse.Namespace`` object to params.
def dry_run_from_args(args: argparse.Namespace): """ Just converts from an ``argparse.Namespace`` object to params. """ parameter_path = args.param_path serialization_dir = args.serialization_dir overrides = args.overrides params = Params.from_file(parameter_path, overrides) dry_run_from_params(params, serialization_dir)
Parameters ---------- initial_state : ``State`` The starting state of our search. This is assumed to be `batched`, and our beam search is batch-aware - we'll keep ``beam_size`` states around for each instance in the batch. transition_function : ``TransitionFunction`` The ``TransitionFunction`` object that defines and scores transitions from one state to the next. Returns ------- best_states : ``Dict[int, List[State]]`` This is a mapping from batch index to the top states for that instance.
def search(self, initial_state: State, transition_function: TransitionFunction) -> Dict[int, List[State]]: """ Parameters ---------- initial_state : ``State`` The starting state of our search. This is assumed to be `batched`, and our beam search is batch-aware - we'll keep ``beam_size`` states around for each instance in the batch. transition_function : ``TransitionFunction`` The ``TransitionFunction`` object that defines and scores transitions from one state to the next. Returns ------- best_states : ``Dict[int, List[State]]`` This is a mapping from batch index to the top states for that instance. """ finished_states: Dict[int, List[State]] = defaultdict(list) states = [initial_state] step_num = 0 while states: step_num += 1 next_states: Dict[int, List[State]] = defaultdict(list) grouped_state = states[0].combine_states(states) allowed_actions = [] for batch_index, action_history in zip(grouped_state.batch_indices, grouped_state.action_history): allowed_actions.append(self._allowed_transitions[batch_index][tuple(action_history)]) for next_state in transition_function.take_step(grouped_state, max_actions=self._per_node_beam_size, allowed_actions=allowed_actions): # NOTE: we're doing state.batch_indices[0] here (and similar things below), # hard-coding a group size of 1. But, our use of `next_state.is_finished()` # already checks for that, as it crashes if the group size is not 1. batch_index = next_state.batch_indices[0] if next_state.is_finished(): finished_states[batch_index].append(next_state) else: next_states[batch_index].append(next_state) states = [] for batch_index, batch_states in next_states.items(): # The states from the generator are already sorted, so we can just take the first # ones here, without an additional sort. if self._beam_size: batch_states = batch_states[:self._beam_size] states.extend(batch_states) best_states: Dict[int, List[State]] = {} for batch_index, batch_states in finished_states.items(): # The time this sort takes is pretty negligible, no particular need to optimize this # yet. Maybe with a larger beam size... finished_to_sort = [(-state.score[0].item(), state) for state in batch_states] finished_to_sort.sort(key=lambda x: x[0]) best_states[batch_index] = [state[1] for state in finished_to_sort[:self._beam_size]] return best_states
Check if a URL is reachable.
def url_ok(match_tuple: MatchTuple) -> bool: """Check if a URL is reachable.""" try: result = requests.get(match_tuple.link, timeout=5) return result.ok except (requests.ConnectionError, requests.Timeout): return False
Check if a file in this repository exists.
def path_ok(match_tuple: MatchTuple) -> bool: """Check if a file in this repository exists.""" relative_path = match_tuple.link.split("#")[0] full_path = os.path.join(os.path.dirname(str(match_tuple.source)), relative_path) return os.path.exists(full_path)
In some cases we'll be feeding params dicts to functions we don't own; for example, PyTorch optimizers. In that case we can't use ``pop_int`` or similar to force casts (which means you can't specify ``int`` parameters using environment variables). This function takes something that looks JSON-like and recursively casts things that look like (bool, int, float) to (bool, int, float).
def infer_and_cast(value: Any): """ In some cases we'll be feeding params dicts to functions we don't own; for example, PyTorch optimizers. In that case we can't use ``pop_int`` or similar to force casts (which means you can't specify ``int`` parameters using environment variables). This function takes something that looks JSON-like and recursively casts things that look like (bool, int, float) to (bool, int, float). """ # pylint: disable=too-many-return-statements if isinstance(value, (int, float, bool)): # Already one of our desired types, so leave as is. return value elif isinstance(value, list): # Recursively call on each list element. return [infer_and_cast(item) for item in value] elif isinstance(value, dict): # Recursively call on each dict value. return {key: infer_and_cast(item) for key, item in value.items()} elif isinstance(value, str): # If it looks like a bool, make it a bool. if value.lower() == "true": return True elif value.lower() == "false": return False else: # See if it could be an int. try: return int(value) except ValueError: pass # See if it could be a float. try: return float(value) except ValueError: # Just return it as a string. return value else: raise ValueError(f"cannot infer type of {value}")
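A small example of the casting behaviour on a flat dict and a list (the keys and values are arbitrary):
infer_and_cast({"lr": "0.001", "epochs": "10", "shuffle": "true", "name": "run1"})
# {'lr': 0.001, 'epochs': 10, 'shuffle': True, 'name': 'run1'}

infer_and_cast(["1", "2.5", "hello", False])
# [1, 2.5, 'hello', False]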
Wraps `os.environ` to filter out non-encodable values.
def _environment_variables() -> Dict[str, str]: """ Wraps `os.environ` to filter out non-encodable values. """ return {key: value for key, value in os.environ.items() if _is_encodable(value)}
Given a "flattened" dict with compound keys, e.g. {"a.b": 0} unflatten it: {"a": {"b": 0}}
def unflatten(flat_dict: Dict[str, Any]) -> Dict[str, Any]: """ Given a "flattened" dict with compound keys, e.g. {"a.b": 0} unflatten it: {"a": {"b": 0}} """ unflat: Dict[str, Any] = {} for compound_key, value in flat_dict.items(): curr_dict = unflat parts = compound_key.split(".") for key in parts[:-1]: curr_value = curr_dict.get(key) if key not in curr_dict: curr_dict[key] = {} curr_dict = curr_dict[key] elif isinstance(curr_value, dict): curr_dict = curr_value else: raise ConfigurationError("flattened dictionary is invalid") if not isinstance(curr_dict, dict) or parts[-1] in curr_dict: raise ConfigurationError("flattened dictionary is invalid") else: curr_dict[parts[-1]] = value return unflat
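For example, compound keys are split on dots and rebuilt as nested dicts:
unflatten({"model.hidden_size": 300, "model.dropout": 0.5, "seed": 13})
# {'model': {'hidden_size': 300, 'dropout': 0.5}, 'seed': 13}

# Conflicting keys such as {"a": 1, "a.b": 2} raise a ConfigurationError.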
Deep merge two dicts, preferring values from `preferred`.
def with_fallback(preferred: Dict[str, Any], fallback: Dict[str, Any]) -> Dict[str, Any]: """ Deep merge two dicts, preferring values from `preferred`. """ def merge(preferred_value: Any, fallback_value: Any) -> Any: if isinstance(preferred_value, dict) and isinstance(fallback_value, dict): return with_fallback(preferred_value, fallback_value) elif isinstance(preferred_value, dict) and isinstance(fallback_value, list): # treat preferred_value as a sparse list, where each key is an index to be overridden merged_list = fallback_value for elem_key, preferred_element in preferred_value.items(): try: index = int(elem_key) merged_list[index] = merge(preferred_element, fallback_value[index]) except ValueError: raise ConfigurationError("could not merge dicts - the preferred dict contains " f"invalid keys (key {elem_key} is not a valid list index)") except IndexError: raise ConfigurationError("could not merge dicts - the preferred dict contains " f"invalid keys (key {index} is out of bounds)") return merged_list else: return copy.deepcopy(preferred_value) preferred_keys = set(preferred.keys()) fallback_keys = set(fallback.keys()) common_keys = preferred_keys & fallback_keys merged: Dict[str, Any] = {} for key in preferred_keys - fallback_keys: merged[key] = copy.deepcopy(preferred[key]) for key in fallback_keys - preferred_keys: merged[key] = copy.deepcopy(fallback[key]) for key in common_keys: preferred_value = preferred[key] fallback_value = fallback[key] merged[key] = merge(preferred_value, fallback_value) return merged
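A typical use is overriding parts of a loaded config: dictionaries merge recursively, while any other value from ``preferred`` simply wins (key order in the result is not significant):
preferred = {"trainer": {"num_epochs": 20}, "seed": 1}
fallback = {"trainer": {"num_epochs": 10, "cuda_device": -1}}
with_fallback(preferred, fallback)
# {'seed': 1, 'trainer': {'num_epochs': 20, 'cuda_device': -1}}  (key order may differ)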
Performs the same function as :func:`Params.pop_choice`, but is required in order to deal with places that the Params object is not welcome, such as inside Keras layers. See the docstring of that method for more detail on how this function works. This method adds a ``history`` parameter, in the off-chance that you know it, so that we can reproduce :func:`Params.pop_choice` exactly. We default to using "?." if you don't know the history, so you'll have to fix that in the log if you want to actually recover the logged parameters.
def pop_choice(params: Dict[str, Any], key: str, choices: List[Any], default_to_first_choice: bool = False, history: str = "?.") -> Any: """ Performs the same function as :func:`Params.pop_choice`, but is required in order to deal with places that the Params object is not welcome, such as inside Keras layers. See the docstring of that method for more detail on how this function works. This method adds a ``history`` parameter, in the off-chance that you know it, so that we can reproduce :func:`Params.pop_choice` exactly. We default to using "?." if you don't know the history, so you'll have to fix that in the log if you want to actually recover the logged parameters. """ value = Params(params, history).pop_choice(key, choices, default_to_first_choice) return value
Any class in its ``from_params`` method can request that some of its input files be added to the archive by calling this method. For example, if some class ``A`` had an ``input_file`` parameter, it could call ``` params.add_file_to_archive("input_file") ``` which would store the supplied value for ``input_file`` at the key ``previous.history.and.then.input_file``. The ``files_to_archive`` dict is shared with child instances via the ``_check_is_dict`` method, so that the final mapping can be retrieved from the top-level ``Params`` object. NOTE: You must call ``add_file_to_archive`` before you ``pop()`` the parameter, because the ``Params`` instance looks up the value of the filename inside itself. If the ``loading_from_archive`` flag is True, this will be a no-op.
def add_file_to_archive(self, name: str) -> None: """ Any class in its ``from_params`` method can request that some of its input files be added to the archive by calling this method. For example, if some class ``A`` had an ``input_file`` parameter, it could call ``` params.add_file_to_archive("input_file") ``` which would store the supplied value for ``input_file`` at the key ``previous.history.and.then.input_file``. The ``files_to_archive`` dict is shared with child instances via the ``_check_is_dict`` method, so that the final mapping can be retrieved from the top-level ``Params`` object. NOTE: You must call ``add_file_to_archive`` before you ``pop()`` the parameter, because the ``Params`` instance looks up the value of the filename inside itself. If the ``loading_from_archive`` flag is True, this will be a no-op. """ if not self.loading_from_archive: self.files_to_archive[f"{self.history}{name}"] = cached_path(self.get(name))