INSTRUCTION | RESPONSE
---|---|
Performs the functionality associated with dict.pop(key), along with checking for
returned dictionaries, replacing them with Param objects with an updated history.
If ``key`` is not present in the dictionary, and no default was specified, we raise a
``ConfigurationError``, instead of the typical ``KeyError``. | def pop(self, key: str, default: Any = DEFAULT) -> Any:
"""
Performs the functionality associated with dict.pop(key), along with checking for
returned dictionaries, replacing them with Param objects with an updated history.
If ``key`` is not present in the dictionary, and no default was specified, we raise a
``ConfigurationError``, instead of the typical ``KeyError``.
"""
if default is self.DEFAULT:
try:
value = self.params.pop(key)
except KeyError:
raise ConfigurationError("key \"{}\" is required at location \"{}\"".format(key, self.history))
else:
value = self.params.pop(key, default)
if not isinstance(value, dict):
logger.info(self.history + key + " = " + str(value)) # type: ignore
return self._check_is_dict(key, value) |
Performs a pop and coerces to an int. | def pop_int(self, key: str, default: Any = DEFAULT) -> int:
"""
Performs a pop and coerces to an int.
"""
value = self.pop(key, default)
if value is None:
return None
else:
return int(value) |
Performs a pop and coerces to a float. | def pop_float(self, key: str, default: Any = DEFAULT) -> float:
"""
Performs a pop and coerces to a float.
"""
value = self.pop(key, default)
if value is None:
return None
else:
return float(value) |
Performs a pop and coerces to a bool. | def pop_bool(self, key: str, default: Any = DEFAULT) -> bool:
"""
Performs a pop and coerces to a bool.
"""
value = self.pop(key, default)
if value is None:
return None
elif isinstance(value, bool):
return value
elif value == "true":
return True
elif value == "false":
return False
else:
raise ValueError("Cannot convert variable to bool: " + value) |
Performs the functionality associated with dict.get(key) but also checks for returned
dicts and returns a Params object in their place with an updated history. | def get(self, key: str, default: Any = DEFAULT):
"""
Performs the functionality associated with dict.get(key) but also checks for returned
dicts and returns a Params object in their place with an updated history.
"""
if default is self.DEFAULT:
try:
value = self.params.get(key)
except KeyError:
raise ConfigurationError("key \"{}\" is required at location \"{}\"".format(key, self.history))
else:
value = self.params.get(key, default)
return self._check_is_dict(key, value) |
Gets the value of ``key`` in the ``params`` dictionary, ensuring that the value is one of
the given choices. Note that this `pops` the key from params, modifying the dictionary,
consistent with how parameters are processed in this codebase.
Parameters
----------
key: str
Key to get the value from in the param dictionary
choices: List[Any]
A list of valid options for values corresponding to ``key``. For example, if you're
specifying the type of encoder to use for some part of your model, the choices might be
the list of encoder classes we know about and can instantiate. If the value we find in
the param dictionary is not in ``choices``, we raise a ``ConfigurationError``, because
the user specified an invalid value in their parameter file.
default_to_first_choice: bool, optional (default=False)
If this is ``True``, we allow the ``key`` to not be present in the parameter
dictionary. If the key is not present, we will use the first choice in the ``choices``
list as the return value. If this is ``False``, we raise a
``ConfigurationError``, because specifying the ``key`` is required (e.g., you `have` to
specify your model class when running an experiment, but you can feel free to use
default settings for encoders if you want). | def pop_choice(self, key: str, choices: List[Any], default_to_first_choice: bool = False) -> Any:
"""
Gets the value of ``key`` in the ``params`` dictionary, ensuring that the value is one of
the given choices. Note that this `pops` the key from params, modifying the dictionary,
consistent with how parameters are processed in this codebase.
Parameters
----------
key: str
Key to get the value from in the param dictionary
choices: List[Any]
A list of valid options for values corresponding to ``key``. For example, if you're
specifying the type of encoder to use for some part of your model, the choices might be
the list of encoder classes we know about and can instantiate. If the value we find in
the param dictionary is not in ``choices``, we raise a ``ConfigurationError``, because
the user specified an invalid value in their parameter file.
default_to_first_choice: bool, optional (default=False)
If this is ``True``, we allow the ``key`` to not be present in the parameter
dictionary. If the key is not present, we will use the first choice in the ``choices``
list as the return value. If this is ``False``, we raise a
``ConfigurationError``, because specifying the ``key`` is required (e.g., you `have` to
specify your model class when running an experiment, but you can feel free to use
default settings for encoders if you want).
"""
default = choices[0] if default_to_first_choice else self.DEFAULT
value = self.pop(key, default)
if value not in choices:
key_str = self.history + key
message = '%s not in acceptable choices for %s: %s' % (value, key_str, str(choices))
raise ConfigurationError(message)
return value |
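A sketch of ``pop_choice`` on the same kind of ``Params`` object; the keys and choices are illustrative:

params = Params({"encoder_type": "lstm"})
encoder = params.pop_choice("encoder_type", choices=["lstm", "gru", "transformer"])  # "lstm"
# A missing key falls back to the first choice only when explicitly allowed:
activation = params.pop_choice("activation", ["relu", "tanh"], default_to_first_choice=True)  # "relu"
# Any value not contained in ``choices`` raises a ConfigurationError.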
Sometimes we need to just represent the parameters as a dict, for instance when we pass
them to PyTorch code.
Parameters
----------
quiet: bool, optional (default = False)
Whether to log the parameters before returning them as a dict.
infer_type_and_cast : bool, optional (default = False)
If True, we infer types and cast (e.g. things that look like floats to floats). | def as_dict(self, quiet: bool = False, infer_type_and_cast: bool = False):
"""
Sometimes we need to just represent the parameters as a dict, for instance when we pass
them to PyTorch code.
Parameters
----------
quiet: bool, optional (default = False)
Whether to log the parameters before returning them as a dict.
infer_type_and_cast : bool, optional (default = False)
If True, we infer types and cast (e.g. things that look like floats to floats).
"""
if infer_type_and_cast:
params_as_dict = infer_and_cast(self.params)
else:
params_as_dict = self.params
if quiet:
return params_as_dict
def log_recursively(parameters, history):
for key, value in parameters.items():
if isinstance(value, dict):
new_local_history = history + key + "."
log_recursively(value, new_local_history)
else:
logger.info(history + key + " = " + str(value))
logger.info("Converting Params object to dict; logging of default "
"values will not occur when dictionary parameters are "
"used subsequently.")
logger.info("CURRENTLY DEFINED PARAMETERS: ")
log_recursively(self.params, self.history)
return params_as_dict |
Returns the parameters as a flat dictionary from keys to values.
Nested structure is collapsed with periods. | def as_flat_dict(self):
"""
Returns the parameters as a flat dictionary from keys to values.
Nested structure is collapsed with periods.
"""
flat_params = {}
def recurse(parameters, path):
for key, value in parameters.items():
newpath = path + [key]
if isinstance(value, dict):
recurse(value, newpath)
else:
flat_params['.'.join(newpath)] = value
recurse(self.params, [])
return flat_params |
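For example, with made-up nested parameters:

params = Params({"model": {"encoder": {"hidden_size": 200}}, "trainer": {"num_epochs": 10}})
params.as_flat_dict()
# {"model.encoder.hidden_size": 200, "trainer.num_epochs": 10}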
Raises a ``ConfigurationError`` if ``self.params`` is not empty. We take ``class_name`` as
an argument so that the error message gives some idea of where an error happened, if there
was one. ``class_name`` should be the name of the `calling` class, the one that got extra
parameters (if there are any). | def assert_empty(self, class_name: str):
"""
Raises a ``ConfigurationError`` if ``self.params`` is not empty. We take ``class_name`` as
an argument so that the error message gives some idea of where an error happened, if there
was one. ``class_name`` should be the name of the `calling` class, the one that got extra
parameters (if there are any).
"""
if self.params:
raise ConfigurationError("Extra parameters passed to {}: {}".format(class_name, self.params)) |
Load a `Params` object from a configuration file.
Parameters
----------
params_file : ``str``
The path to the configuration file to load.
params_overrides : ``str``, optional
A dict of overrides that can be applied to final object.
e.g. {"model.embedding_dim": 10}
ext_vars : ``dict``, optional
Our config files are Jsonnet, which allows specifying external variables
for later substitution. Typically we substitute these using environment
variables; however, you can also specify them here, in which case they
take priority over environment variables.
e.g. {"HOME_DIR": "/Users/allennlp/home"} | def from_file(params_file: str, params_overrides: str = "", ext_vars: dict = None) -> 'Params':
"""
Load a `Params` object from a configuration file.
Parameters
----------
params_file : ``str``
The path to the configuration file to load.
params_overrides : ``str``, optional
A dict of overrides that can be applied to final object.
e.g. {"model.embedding_dim": 10}
ext_vars : ``dict``, optional
Our config files are Jsonnet, which allows specifying external variables
for later substitution. Typically we substitute these using environment
variables; however, you can also specify them here, in which case they
take priority over environment variables.
e.g. {"HOME_DIR": "/Users/allennlp/home"}
"""
if ext_vars is None:
ext_vars = {}
# redirect to cache, if necessary
params_file = cached_path(params_file)
ext_vars = {**_environment_variables(), **ext_vars}
file_dict = json.loads(evaluate_file(params_file, ext_vars=ext_vars))
overrides_dict = parse_overrides(params_overrides)
param_dict = with_fallback(preferred=overrides_dict, fallback=file_dict)
return Params(param_dict) |
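Typical usage, sketched under the assumption that an ``experiment.jsonnet`` config exists on disk and that ``params_overrides`` is passed as a JSON/Jsonnet string (which is what ``parse_overrides`` consumes):

params = Params.from_file(
    "experiment.jsonnet",
    params_overrides='{"trainer.num_epochs": 2}',
    ext_vars={"DATA_DIR": "/tmp/data"},  # available to the config via std.extVar("DATA_DIR")
)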
Returns an ``OrderedDict`` of ``Params`` from a list of partial-order preferences.
Parameters
----------
preference_orders: List[List[str]], optional
``preference_orders`` is a list of partial preference orders. ["A", "B", "C"] means
"A" > "B" > "C". When multiple preference orders are given, the earlier ones take
precedence. Keys not found in any preference order are sorted last, alphabetically.
Default preferences:
``[["dataset_reader", "iterator", "model", "train_data_path", "validation_data_path",
"test_data_path", "trainer", "vocabulary"], ["type"]]`` | def as_ordered_dict(self, preference_orders: List[List[str]] = None) -> OrderedDict:
"""
Returns an ``OrderedDict`` of ``Params`` from a list of partial-order preferences.
Parameters
----------
preference_orders: List[List[str]], optional
``preference_orders`` is a list of partial preference orders. ["A", "B", "C"] means
"A" > "B" > "C". When multiple preference orders are given, the earlier ones take
precedence. Keys not found in any preference order are sorted last, alphabetically.
Default preferences:
``[["dataset_reader", "iterator", "model", "train_data_path", "validation_data_path",
"test_data_path", "trainer", "vocabulary"], ["type"]]``
"""
params_dict = self.as_dict(quiet=True)
if not preference_orders:
preference_orders = []
preference_orders.append(["dataset_reader", "iterator", "model",
"train_data_path", "validation_data_path", "test_data_path",
"trainer", "vocabulary"])
preference_orders.append(["type"])
def order_func(key):
# Makes a tuple to use for ordering. The tuple is an index into each of the `preference_orders`,
# followed by the key itself. This gives us integer sorting if you have a key in one of the
# `preference_orders`, followed by alphabetical ordering if not.
order_tuple = [order.index(key) if key in order else len(order) for order in preference_orders]
return order_tuple + [key]
def order_dict(dictionary, order_func):
# Recursively orders dictionary according to scoring order_func
result = OrderedDict()
for key, val in sorted(dictionary.items(), key=lambda item: order_func(item[0])):
result[key] = order_dict(val, order_func) if isinstance(val, dict) else val
return result
return order_dict(params_dict, order_func) |
Returns a hash code representing the current state of this ``Params`` object. We don't
want to implement ``__hash__`` because that has deeper python implications (and this is a
mutable object), but this will give you a representation of the current state. | def get_hash(self) -> str:
"""
Returns a hash code representing the current state of this ``Params`` object. We don't
want to implement ``__hash__`` because that has deeper python implications (and this is a
mutable object), but this will give you a representation of the current state.
"""
return str(hash(json.dumps(self.params, sort_keys=True))) |
Clears out the tracked metrics, but keeps the patience and should_decrease settings. | def clear(self) -> None:
"""
Clears out the tracked metrics, but keeps the patience and should_decrease settings.
"""
self._best_so_far = None
self._epochs_with_no_improvement = 0
self._is_best_so_far = True
self._epoch_number = 0
self.best_epoch = None |
A ``Trainer`` can use this to serialize the state of the metric tracker. | def state_dict(self) -> Dict[str, Any]:
"""
A ``Trainer`` can use this to serialize the state of the metric tracker.
"""
return {
"best_so_far": self._best_so_far,
"patience": self._patience,
"epochs_with_no_improvement": self._epochs_with_no_improvement,
"is_best_so_far": self._is_best_so_far,
"should_decrease": self._should_decrease,
"best_epoch_metrics": self.best_epoch_metrics,
"epoch_number": self._epoch_number,
"best_epoch": self.best_epoch
} |
Record a new value of the metric and update the various things that depend on it. | def add_metric(self, metric: float) -> None:
"""
Record a new value of the metric and update the various things that depend on it.
"""
new_best = ((self._best_so_far is None) or
(self._should_decrease and metric < self._best_so_far) or
(not self._should_decrease and metric > self._best_so_far))
if new_best:
self.best_epoch = self._epoch_number
self._is_best_so_far = True
self._best_so_far = metric
self._epochs_with_no_improvement = 0
else:
self._is_best_so_far = False
self._epochs_with_no_improvement += 1
self._epoch_number += 1 |
Helper to add multiple metrics at once. | def add_metrics(self, metrics: Iterable[float]) -> None:
"""
Helper to add multiple metrics at once.
"""
for metric in metrics:
self.add_metric(metric) |
Returns true if improvement has stopped for long enough. | def should_stop_early(self) -> bool:
"""
Returns true if improvement has stopped for long enough.
"""
if self._patience is None:
return False
else:
return self._epochs_with_no_improvement >= self._patience |
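A sketch of the tracker inside a training loop. The constructor shown here (a ``patience`` plus a signed ``metric_name``) follows the AllenNLP ``MetricTracker`` API and is an assumption, since the constructor is not reproduced above:

tracker = MetricTracker(patience=3, metric_name="+accuracy")  # "+" means larger is better
for accuracy in [0.60, 0.72, 0.71, 0.70, 0.69]:
    tracker.add_metric(accuracy)
    if tracker.should_stop_early():
        break  # fires once three consecutive epochs show no improvement over 0.72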
Archive the model weights, its training configuration, and its
vocabulary to `model.tar.gz`. Include the additional ``files_to_archive``
if provided.
Parameters
----------
serialization_dir: ``str``
The directory where the weights and vocabulary are written out.
weights: ``str``, optional (default=_DEFAULT_WEIGHTS)
Which weights file to include in the archive. The default is ``best.th``.
files_to_archive: ``Dict[str, str]``, optional (default=None)
A mapping {flattened_key -> filename} of supplementary files to include
in the archive. That is, if you wanted to include ``params['model']['weights']``
then you would specify the key as `"model.weights"`.
archive_path : ``str``, optional, (default = None)
A full path to serialize the model to. The default is "model.tar.gz" inside the
serialization_dir. If you pass a directory here, we'll serialize the model
to "model.tar.gz" inside the directory. | def archive_model(serialization_dir: str,
weights: str = _DEFAULT_WEIGHTS,
files_to_archive: Dict[str, str] = None,
archive_path: str = None) -> None:
"""
Archive the model weights, its training configuration, and its
vocabulary to `model.tar.gz`. Include the additional ``files_to_archive``
if provided.
Parameters
----------
serialization_dir: ``str``
The directory where the weights and vocabulary are written out.
weights: ``str``, optional (default=_DEFAULT_WEIGHTS)
Which weights file to include in the archive. The default is ``best.th``.
files_to_archive: ``Dict[str, str]``, optional (default=None)
A mapping {flattened_key -> filename} of supplementary files to include
in the archive. That is, if you wanted to include ``params['model']['weights']``
then you would specify the key as `"model.weights"`.
archive_path : ``str``, optional, (default = None)
A full path to serialize the model to. The default is "model.tar.gz" inside the
serialization_dir. If you pass a directory here, we'll serialize the model
to "model.tar.gz" inside the directory.
"""
weights_file = os.path.join(serialization_dir, weights)
if not os.path.exists(weights_file):
logger.error("weights file %s does not exist, unable to archive model", weights_file)
return
config_file = os.path.join(serialization_dir, CONFIG_NAME)
if not os.path.exists(config_file):
logger.error("config file %s does not exist, unable to archive model", config_file)
# If there are files we want to archive, write out the mapping
# so that we can use it during de-archiving.
if files_to_archive:
fta_filename = os.path.join(serialization_dir, _FTA_NAME)
with open(fta_filename, 'w') as fta_file:
fta_file.write(json.dumps(files_to_archive))
if archive_path is not None:
archive_file = archive_path
if os.path.isdir(archive_file):
archive_file = os.path.join(archive_file, "model.tar.gz")
else:
archive_file = os.path.join(serialization_dir, "model.tar.gz")
logger.info("archiving weights and vocabulary to %s", archive_file)
with tarfile.open(archive_file, 'w:gz') as archive:
archive.add(config_file, arcname=CONFIG_NAME)
archive.add(weights_file, arcname=_WEIGHTS_NAME)
archive.add(os.path.join(serialization_dir, "vocabulary"),
arcname="vocabulary")
# If there are supplemental files to archive:
if files_to_archive:
# Archive the { flattened_key -> original_filename } mapping.
archive.add(fta_filename, arcname=_FTA_NAME)
# And add each requested file to the archive.
for key, filename in files_to_archive.items():
archive.add(filename, arcname=f"fta/{key}") |
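A usage sketch, assuming training already wrote ``best.th``, the config file, and a ``vocabulary/`` directory into the serialization directory; the flattened key and file path below are hypothetical:

archive_model(serialization_dir="/tmp/my_run",
              files_to_archive={"model.text_field_embedder.tokens.pretrained_file": "/data/glove.txt"})
# Writes /tmp/my_run/model.tar.gz containing the config, weights, vocabulary,
# the files-to-archive mapping, and the extra file under fta/.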
Instantiates an Archive from an archived `tar.gz` file.
Parameters
----------
archive_file: ``str``
The archive file to load the model from.
weights_file: ``str``, optional (default = None)
The weights file to use. If unspecified, weights.th in the archive_file will be used.
cuda_device: ``int``, optional (default = -1)
If `cuda_device` is >= 0, the model will be loaded onto the
corresponding GPU. Otherwise it will be loaded onto the CPU.
overrides: ``str``, optional (default = "")
JSON overrides to apply to the unarchived ``Params`` object. | def load_archive(archive_file: str,
cuda_device: int = -1,
overrides: str = "",
weights_file: str = None) -> Archive:
"""
Instantiates an Archive from an archived `tar.gz` file.
Parameters
----------
archive_file: ``str``
The archive file to load the model from.
weights_file: ``str``, optional (default = None)
The weights file to use. If unspecified, weights.th in the archive_file will be used.
cuda_device: ``int``, optional (default = -1)
If `cuda_device` is >= 0, the model will be loaded onto the
corresponding GPU. Otherwise it will be loaded onto the CPU.
overrides: ``str``, optional (default = "")
JSON overrides to apply to the unarchived ``Params`` object.
"""
# redirect to the cache, if necessary
resolved_archive_file = cached_path(archive_file)
if resolved_archive_file == archive_file:
logger.info(f"loading archive file {archive_file}")
else:
logger.info(f"loading archive file {archive_file} from cache at {resolved_archive_file}")
if os.path.isdir(resolved_archive_file):
serialization_dir = resolved_archive_file
else:
# Extract archive to temp dir
tempdir = tempfile.mkdtemp()
logger.info(f"extracting archive file {resolved_archive_file} to temp dir {tempdir}")
with tarfile.open(resolved_archive_file, 'r:gz') as archive:
archive.extractall(tempdir)
# Postpone cleanup until exit in case the unarchived contents are needed outside
# this function.
atexit.register(_cleanup_archive_dir, tempdir)
serialization_dir = tempdir
# Check for supplemental files in archive
fta_filename = os.path.join(serialization_dir, _FTA_NAME)
if os.path.exists(fta_filename):
with open(fta_filename, 'r') as fta_file:
files_to_archive = json.loads(fta_file.read())
# Add these replacements to overrides
replacements_dict: Dict[str, Any] = {}
for key, original_filename in files_to_archive.items():
replacement_filename = os.path.join(serialization_dir, f"fta/{key}")
if os.path.exists(replacement_filename):
replacements_dict[key] = replacement_filename
else:
logger.warning(f"Archived file {replacement_filename} not found! At train time "
f"this file was located at {original_filename}. This may be "
"because you are loading a serialization directory. Attempting to "
"load the file from its train-time location.")
overrides_dict = parse_overrides(overrides)
combined_dict = with_fallback(preferred=overrides_dict, fallback=unflatten(replacements_dict))
overrides = json.dumps(combined_dict)
# Load config
config = Params.from_file(os.path.join(serialization_dir, CONFIG_NAME), overrides)
config.loading_from_archive = True
if weights_file:
weights_path = weights_file
else:
weights_path = os.path.join(serialization_dir, _WEIGHTS_NAME)
# Fallback for serialization directories.
if not os.path.exists(weights_path):
weights_path = os.path.join(serialization_dir, _DEFAULT_WEIGHTS)
# Instantiate model. Use a duplicate of the config, as it will get consumed.
model = Model.load(config.duplicate(),
weights_file=weights_path,
serialization_dir=serialization_dir,
cuda_device=cuda_device)
return Archive(model=model, config=config) |
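A loading sketch with a hypothetical archive path; overrides are applied to the archived config before the model is instantiated:

archive = load_archive("/tmp/my_run/model.tar.gz",
                       cuda_device=-1,
                       overrides='{"model.dropout": 0.0}')
model = archive.model    # the trained Model, loaded onto the CPU here
config = archive.config  # the (possibly overridden) Params the model was trained with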
This method can be used to load a module from the pretrained model archive.
It is also used implicitly in FromParams based construction. So instead of using standard
params to construct a module, you can instead load a pretrained module from the model
archive directly. For eg, instead of using params like {"type": "module_type", ...}, you
can use the following template::
{
"_pretrained": {
"archive_file": "../path/to/model.tar.gz",
"path": "path.to.module.in.model",
"freeze": False
}
}
If you use this feature with FromParams, be aware of the following caveat: the call to
initializer(self) at the end of the model initializer can potentially wipe the transferred
parameters by reinitializing them. This can happen if you have set up an initializer regex
that also matches parameters of the transferred module. To safeguard against this, you can
either update your initializer regex to prevent a conflicting match or add an extra
initializer::
[
[".*transferred_module_name.*", "prevent"]
]
Parameters
----------
path : ``str``, required
Path of target module to be loaded from the model.
Eg. "_textfield_embedder.token_embedder_tokens"
freeze : ``bool``, optional (default=True)
Whether to freeze the module parameters or not. | def extract_module(self, path: str, freeze: bool = True) -> Module:
"""
This method can be used to load a module from the pretrained model archive.
It is also used implicitly in FromParams based construction. So instead of using standard
params to construct a module, you can instead load a pretrained module from the model
archive directly. For eg, instead of using params like {"type": "module_type", ...}, you
can use the following template::
{
"_pretrained": {
"archive_file": "../path/to/model.tar.gz",
"path": "path.to.module.in.model",
"freeze": False
}
}
If you use this feature with FromParams, be aware of the following caveat: the call to
initializer(self) at the end of the model initializer can potentially wipe the transferred
parameters by reinitializing them. This can happen if you have set up an initializer regex
that also matches parameters of the transferred module. To safeguard against this, you can
either update your initializer regex to prevent a conflicting match or add an extra
initializer::
[
[".*transferred_module_name.*", "prevent"]
]
Parameters
----------
path : ``str``, required
Path of target module to be loaded from the model.
Eg. "_textfield_embedder.token_embedder_tokens"
freeze : ``bool``, optional (default=True)
Whether to freeze the module parameters or not.
"""
modules_dict = {path: module for path, module in self.model.named_modules()}
module = modules_dict.get(path, None)
if not module:
raise ConfigurationError(f"You asked to transfer module at path {path} from "
f"the model {type(self.model)}. But it's not present.")
if not isinstance(module, Module):
raise ConfigurationError(f"The transferred object from model {type(self.model)} at path "
f"{path} is not a PyTorch Module.")
for parameter in module.parameters(): # type: ignore
parameter.requires_grad_(not freeze)
return module |
Takes a list of possible actions and indices of decoded actions into those possible actions
for a batch and returns sequences of action strings. We assume ``action_indices`` is a dict
mapping batch indices to k-best decoded sequence lists. | def _get_action_strings(cls,
possible_actions: List[List[ProductionRule]],
action_indices: Dict[int, List[List[int]]]) -> List[List[List[str]]]:
"""
Takes a list of possible actions and indices of decoded actions into those possible actions
for a batch and returns sequences of action strings. We assume ``action_indices`` is a dict
mapping batch indices to k-best decoded sequence lists.
"""
all_action_strings: List[List[List[str]]] = []
batch_size = len(possible_actions)
for i in range(batch_size):
batch_actions = possible_actions[i]
batch_best_sequences = action_indices[i] if i in action_indices else []
# This will append an empty list to ``all_action_strings`` if ``batch_best_sequences``
# is empty.
action_strings = [[batch_actions[rule_id][0] for rule_id in sequence]
for sequence in batch_best_sequences]
all_action_strings.append(action_strings)
return all_action_strings |
This method overrides ``Model.decode``, which gets called after ``Model.forward``, at test
time, to finalize predictions. We only transform the action string sequences into logical
forms here. | def decode(self, output_dict: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:
"""
This method overrides ``Model.decode``, which gets called after ``Model.forward``, at test
time, to finalize predictions. We only transform the action string sequences into logical
forms here.
"""
best_action_strings = output_dict["best_action_strings"]
# Instantiating an empty world for getting logical forms.
world = NlvrLanguage(set())
logical_forms = []
for instance_action_sequences in best_action_strings:
instance_logical_forms = []
for action_strings in instance_action_sequences:
if action_strings:
instance_logical_forms.append(world.action_sequence_to_logical_form(action_strings))
else:
instance_logical_forms.append('')
logical_forms.append(instance_logical_forms)
action_mapping = output_dict['action_mapping']
best_actions = output_dict['best_action_strings']
debug_infos = output_dict['debug_info']
batch_action_info = []
for batch_index, (predicted_actions, debug_info) in enumerate(zip(best_actions, debug_infos)):
instance_action_info = []
for predicted_action, action_debug_info in zip(predicted_actions[0], debug_info):
action_info = {}
action_info['predicted_action'] = predicted_action
considered_actions = action_debug_info['considered_actions']
probabilities = action_debug_info['probabilities']
actions = []
for action, probability in zip(considered_actions, probabilities):
if action != -1:
actions.append((action_mapping[(batch_index, action)], probability))
actions.sort()
considered_actions, probabilities = zip(*actions)
action_info['considered_actions'] = considered_actions
action_info['action_probabilities'] = probabilities
action_info['question_attention'] = action_debug_info.get('question_attention', [])
instance_action_info.append(action_info)
batch_action_info.append(instance_action_info)
output_dict["predicted_actions"] = batch_action_info
output_dict["logical_form"] = logical_forms
return output_dict |
Returns whether action history in the state evaluates to the correct denotations over all
worlds. Only defined when the state is finished. | def _check_state_denotations(self, state: GrammarBasedState, worlds: List[NlvrLanguage]) -> List[bool]:
"""
Returns whether action history in the state evaluates to the correct denotations over all
worlds. Only defined when the state is finished.
"""
assert state.is_finished(), "Cannot compute denotations for unfinished states!"
# Since this is a finished state, its group size must be 1.
batch_index = state.batch_indices[0]
instance_label_strings = state.extras[batch_index]
history = state.action_history[0]
all_actions = state.possible_actions[0]
action_sequence = [all_actions[action][0] for action in history]
return self._check_denotation(action_sequence, instance_label_strings, worlds) |
Starts the learning rate finder for the given args. | def find_learning_rate_from_args(args: argparse.Namespace) -> None:
"""
Starts the learning rate finder for the given args.
"""
params = Params.from_file(args.param_path, args.overrides)
find_learning_rate_model(params, args.serialization_dir,
start_lr=args.start_lr,
end_lr=args.end_lr,
num_batches=args.num_batches,
linear_steps=args.linear,
stopping_factor=args.stopping_factor,
force=args.force) |
Runs learning rate search for given `num_batches` and saves the results in ``serialization_dir``
Parameters
----------
params : ``Params``
A parameter object specifying an AllenNLP Experiment.
serialization_dir : ``str``
The directory in which to save results.
start_lr: ``float``
Learning rate to start the search.
end_lr: ``float``
Learning rate up to which the search is done.
num_batches: ``int``
Number of mini-batches to run the learning rate finder for.
linear_steps: ``bool``
If ``True``, increase the learning rate linearly; otherwise, increase it exponentially.
stopping_factor: ``float``
Stop the search when the current loss exceeds the best loss recorded so far multiplied
by ``stopping_factor``. If ``None``, the search proceeds until ``end_lr``.
force: ``bool``
If True and the serialization directory already exists, everything in it will
be removed prior to finding the learning rate. | def find_learning_rate_model(params: Params, serialization_dir: str,
start_lr: float = 1e-5,
end_lr: float = 10,
num_batches: int = 100,
linear_steps: bool = False,
stopping_factor: float = None,
force: bool = False) -> None:
"""
Runs learning rate search for given `num_batches` and saves the results in ``serialization_dir``
Parameters
----------
params : ``Params``
A parameter object specifying an AllenNLP Experiment.
serialization_dir : ``str``
The directory in which to save results.
start_lr: ``float``
Learning rate to start the search.
end_lr: ``float``
Learning rate up to which the search is done.
num_batches: ``int``
Number of mini-batches to run the learning rate finder for.
linear_steps: ``bool``
If ``True``, increase the learning rate linearly; otherwise, increase it exponentially.
stopping_factor: ``float``
Stop the search when the current loss exceeds the best loss recorded so far multiplied
by ``stopping_factor``. If ``None``, the search proceeds until ``end_lr``.
force: ``bool``
If True and the serialization directory already exists, everything in it will
be removed prior to finding the learning rate.
"""
if os.path.exists(serialization_dir) and force:
shutil.rmtree(serialization_dir)
if os.path.exists(serialization_dir) and os.listdir(serialization_dir):
raise ConfigurationError(f'Serialization directory {serialization_dir} already exists and is '
f'not empty.')
else:
os.makedirs(serialization_dir, exist_ok=True)
prepare_environment(params)
cuda_device = params.params.get('trainer').get('cuda_device', -1)
check_for_gpu(cuda_device)
all_datasets = datasets_from_params(params)
datasets_for_vocab_creation = set(params.pop("datasets_for_vocab_creation", all_datasets))
for dataset in datasets_for_vocab_creation:
if dataset not in all_datasets:
raise ConfigurationError(f"invalid 'dataset_for_vocab_creation' {dataset}")
logger.info("From dataset instances, %s will be considered for vocabulary creation.",
", ".join(datasets_for_vocab_creation))
vocab = Vocabulary.from_params(
params.pop("vocabulary", {}),
(instance for key, dataset in all_datasets.items()
for instance in dataset
if key in datasets_for_vocab_creation)
)
model = Model.from_params(vocab=vocab, params=params.pop('model'))
iterator = DataIterator.from_params(params.pop("iterator"))
iterator.index_with(vocab)
train_data = all_datasets['train']
trainer_params = params.pop("trainer")
no_grad_regexes = trainer_params.pop("no_grad", ())
for name, parameter in model.named_parameters():
if any(re.search(regex, name) for regex in no_grad_regexes):
parameter.requires_grad_(False)
trainer_choice = trainer_params.pop("type", "default")
if trainer_choice != "default":
raise ConfigurationError("currently find-learning-rate only works with the default Trainer")
trainer = Trainer.from_params(model=model,
serialization_dir=serialization_dir,
iterator=iterator,
train_data=train_data,
validation_data=None,
params=trainer_params,
validation_iterator=None)
logger.info(f'Starting learning rate search from {start_lr} to {end_lr} in {num_batches} iterations.')
learning_rates, losses = search_learning_rate(trainer,
start_lr=start_lr,
end_lr=end_lr,
num_batches=num_batches,
linear_steps=linear_steps,
stopping_factor=stopping_factor)
logger.info(f'Finished learning rate search.')
losses = _smooth(losses, 0.98)
_save_plot(learning_rates, losses, os.path.join(serialization_dir, 'lr-losses.png')) |
Runs a training loop on the model using :class:`~allennlp.training.trainer.Trainer`,
increasing the learning rate from ``start_lr`` to ``end_lr`` and recording the losses.
Parameters
----------
trainer: :class:`~allennlp.training.trainer.Trainer`
start_lr: ``float``
The learning rate to start the search.
end_lr: ``float``
The learning rate up to which the search is done.
num_batches: ``int``
Number of batches to run the learning rate finder for.
linear_steps: ``bool``
If ``True``, increase the learning rate linearly; otherwise, increase it exponentially.
stopping_factor: ``float``
Stop the search when the current loss exceeds the best loss recorded so far multiplied
by ``stopping_factor``. If ``None``, the search proceeds until ``end_lr``.
Returns
-------
(learning_rates, losses): ``Tuple[List[float], List[float]]``
Returns list of learning rates and corresponding losses.
Note: The losses are recorded before applying the corresponding learning rate | def search_learning_rate(trainer: Trainer,
start_lr: float = 1e-5,
end_lr: float = 10,
num_batches: int = 100,
linear_steps: bool = False,
stopping_factor: float = None) -> Tuple[List[float], List[float]]:
"""
Runs a training loop on the model using :class:`~allennlp.training.trainer.Trainer`,
increasing the learning rate from ``start_lr`` to ``end_lr`` and recording the losses.
Parameters
----------
trainer: :class:`~allennlp.training.trainer.Trainer`
start_lr: ``float``
The learning rate to start the search.
end_lr: ``float``
The learning rate up to which the search is done.
num_batches: ``int``
Number of batches to run the learning rate finder for.
linear_steps: ``bool``
If ``True``, increase the learning rate linearly; otherwise, increase it exponentially.
stopping_factor: ``float``
Stop the search when the current loss exceeds the best loss recorded so far multiplied
by ``stopping_factor``. If ``None``, the search proceeds until ``end_lr``.
Returns
-------
(learning_rates, losses): ``Tuple[List[float], List[float]]``
Returns list of learning rates and corresponding losses.
Note: The losses are recorded before applying the corresponding learning rate
"""
if num_batches <= 10:
raise ConfigurationError('The number of iterations for learning rate finder should be greater than 10.')
trainer.model.train()
num_gpus = len(trainer._cuda_devices) # pylint: disable=protected-access
raw_train_generator = trainer.iterator(trainer.train_data,
shuffle=trainer.shuffle)
train_generator = lazy_groups_of(raw_train_generator, num_gpus)
train_generator_tqdm = Tqdm.tqdm(train_generator,
total=num_batches)
learning_rates = []
losses = []
best = 1e9
if linear_steps:
lr_update_factor = (end_lr - start_lr) / num_batches
else:
lr_update_factor = (end_lr / start_lr) ** (1.0 / num_batches)
for i, batch_group in enumerate(train_generator_tqdm):
if linear_steps:
current_lr = start_lr + (lr_update_factor * i)
else:
current_lr = start_lr * (lr_update_factor ** i)
for param_group in trainer.optimizer.param_groups:
param_group['lr'] = current_lr
trainer.optimizer.zero_grad()
loss = trainer.batch_loss(batch_group, for_training=True)
loss.backward()
loss = loss.detach().cpu().item()
if stopping_factor is not None and (math.isnan(loss) or loss > stopping_factor * best):
logger.info(f'Loss ({loss}) exceeds stopping_factor * lowest recorded loss.')
break
trainer.rescale_gradients()
trainer.optimizer.step()
learning_rates.append(current_lr)
losses.append(loss)
if loss < best and i > 10:
best = loss
if i == num_batches:
break
return learning_rates, losses |
Exponential smoothing of values | def _smooth(values: List[float], beta: float) -> List[float]:
""" Exponential smoothing of values """
avg_value = 0.
smoothed = []
for i, value in enumerate(values):
avg_value = beta * avg_value + (1 - beta) * value
smoothed.append(avg_value / (1 - beta ** (i + 1)))
return smoothed |
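This is a bias-corrected exponential moving average: each raw running average is divided by ``1 - beta ** (i + 1)`` so that early values are not biased toward zero. For instance (values rounded):

_smooth([1.0, 2.0, 3.0], beta=0.9)  # -> [1.0, 1.526..., 2.070...]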
Compute a weighted average of the ``tensors``. The input tensors can be any shape
with at least two dimensions, but must all be the same shape.
When ``do_layer_norm=True``, the ``mask`` is a required input. If the ``tensors`` are
dimensioned ``(dim_0, ..., dim_{n-1}, dim_n)``, then the ``mask`` is dimensioned
``(dim_0, ..., dim_{n-1})``, as in the typical case with ``tensors`` of shape
``(batch_size, timesteps, dim)`` and ``mask`` of shape ``(batch_size, timesteps)``.
When ``do_layer_norm=False`` the ``mask`` is ignored. | def forward(self, tensors: List[torch.Tensor], # pylint: disable=arguments-differ
mask: torch.Tensor = None) -> torch.Tensor:
"""
Compute a weighted average of the ``tensors``. The input tensors can be any shape
with at least two dimensions, but must all be the same shape.
When ``do_layer_norm=True``, the ``mask`` is a required input. If the ``tensors`` are
dimensioned ``(dim_0, ..., dim_{n-1}, dim_n)``, then the ``mask`` is dimensioned
``(dim_0, ..., dim_{n-1})``, as in the typical case with ``tensors`` of shape
``(batch_size, timesteps, dim)`` and ``mask`` of shape ``(batch_size, timesteps)``.
When ``do_layer_norm=False`` the ``mask`` is ignored.
"""
if len(tensors) != self.mixture_size:
raise ConfigurationError("{} tensors were passed, but the module was initialized to "
"mix {} tensors.".format(len(tensors), self.mixture_size))
def _do_layer_norm(tensor, broadcast_mask, num_elements_not_masked):
tensor_masked = tensor * broadcast_mask
mean = torch.sum(tensor_masked) / num_elements_not_masked
variance = torch.sum(((tensor_masked - mean) * broadcast_mask)**2) / num_elements_not_masked
return (tensor - mean) / torch.sqrt(variance + 1E-12)
normed_weights = torch.nn.functional.softmax(torch.cat([parameter for parameter
in self.scalar_parameters]), dim=0)
normed_weights = torch.split(normed_weights, split_size_or_sections=1)
if not self.do_layer_norm:
pieces = []
for weight, tensor in zip(normed_weights, tensors):
pieces.append(weight * tensor)
return self.gamma * sum(pieces)
else:
mask_float = mask.float()
broadcast_mask = mask_float.unsqueeze(-1)
input_dim = tensors[0].size(-1)
num_elements_not_masked = torch.sum(mask_float) * input_dim
pieces = []
for weight, tensor in zip(normed_weights, tensors):
pieces.append(weight * _do_layer_norm(tensor,
broadcast_mask, num_elements_not_masked))
return self.gamma * sum(pieces) |
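A forward-pass sketch. The ``ScalarMix`` constructor is not reproduced above, so the ``mixture_size=3`` call below follows the AllenNLP API and should be treated as an assumption:

import torch
mix = ScalarMix(mixture_size=3)                      # learns 3 scalar weights plus a gamma
tensors = [torch.randn(2, 5, 10) for _ in range(3)]  # e.g. three layers of (batch, timesteps, dim)
output = mix(tensors)                                # shape (2, 5, 10)
# output == gamma * sum_k softmax(w)[k] * tensors[k]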
Like :func:`predicate`, but used when some of the arguments to the function are meant to be
provided by the decoder or other state, instead of from the language. For example, you might
want to have a function use the decoder's attention over some input text when a terminal was
predicted. That attention won't show up in the language productions. Use this decorator, and
pass in the required state to :func:`DomainLanguage.execute_action_sequence`, if you need to
ignore some arguments when doing grammar induction.
In order for this to work out, the side arguments `must` be after any non-side arguments. This
is because we use ``*args`` to pass the non-side arguments, and ``**kwargs`` to pass the side
arguments, and python requires that ``*args`` be before ``**kwargs``. | def predicate_with_side_args(side_arguments: List[str]) -> Callable: # pylint: disable=invalid-name
"""
Like :func:`predicate`, but used when some of the arguments to the function are meant to be
provided by the decoder or other state, instead of from the language. For example, you might
want to have a function use the decoder's attention over some input text when a terminal was
predicted. That attention won't show up in the language productions. Use this decorator, and
pass in the required state to :func:`DomainLanguage.execute_action_sequence`, if you need to
ignore some arguments when doing grammar induction.
In order for this to work out, the side arguments `must` be after any non-side arguments. This
is because we use ``*args`` to pass the non-side arguments, and ``**kwargs`` to pass the side
arguments, and python requires that ``*args`` be before ``**kwargs``.
"""
def decorator(function: Callable) -> Callable:
setattr(function, '_side_arguments', side_arguments)
return predicate(function)
return decorator |
Given an ``nltk.Tree`` representing the syntax tree that generates a logical form, this method
produces the actual (lisp-like) logical form, with all of the non-terminal symbols converted
into the correct number of parentheses.
This is used in the logic that converts action sequences back into logical forms. It's very
unlikely that you will need this anywhere else. | def nltk_tree_to_logical_form(tree: Tree) -> str:
"""
Given an ``nltk.Tree`` representing the syntax tree that generates a logical form, this method
produces the actual (lisp-like) logical form, with all of the non-terminal symbols converted
into the correct number of parentheses.
This is used in the logic that converts action sequences back into logical forms. It's very
unlikely that you will need this anywhere else.
"""
# nltk.Tree actually inherits from `list`, so you use `len()` to get the number of children.
# We're going to be explicit about checking length, instead of using `if tree:`, just to avoid
# any funny business nltk might have done (e.g., it's really odd if `if tree:` evaluates to
# `False` if there's a single leaf node with no children).
if len(tree) == 0: # pylint: disable=len-as-condition
return tree.label()
if len(tree) == 1:
return tree[0].label()
return '(' + ' '.join(nltk_tree_to_logical_form(child) for child in tree) + ')' |
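For example, building the syntax tree by hand with ``nltk.Tree``:

from nltk import Tree
tree = Tree("int", [Tree("<int,int:int>", [Tree("add", [])]),
                    Tree("2", []),
                    Tree("3", [])])
nltk_tree_to_logical_form(tree)  # -> "(add 2 3)"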
Converts a python ``Type`` (as you might get from a type annotation) into a
``PredicateType``. If the ``Type`` is callable, this will return a ``FunctionType``;
otherwise, it will return a ``BasicType``.
``BasicTypes`` have a single ``name`` parameter - we typically get this from
``type_.__name__``. This doesn't work for generic types (like ``List[str]``), so we handle
those specially, so that the ``name`` for the ``BasicType`` remains ``List[str]``, as you
would expect. | def get_type(type_: Type) -> 'PredicateType':
"""
Converts a python ``Type`` (as you might get from a type annotation) into a
``PredicateType``. If the ``Type`` is callable, this will return a ``FunctionType``;
otherwise, it will return a ``BasicType``.
``BasicTypes`` have a single ``name`` parameter - we typically get this from
``type_.__name__``. This doesn't work for generic types (like ``List[str]``), so we handle
those specially, so that the ``name`` for the ``BasicType`` remains ``List[str]``, as you
would expect.
"""
if is_callable(type_):
callable_args = type_.__args__
argument_types = [PredicateType.get_type(t) for t in callable_args[:-1]]
return_type = PredicateType.get_type(callable_args[-1])
return FunctionType(argument_types, return_type)
elif is_generic(type_):
# This is something like List[int]. type_.__name__ doesn't do the right thing (and
# crashes in python 3.7), so we need to do some magic here.
name = get_generic_name(type_)
else:
name = type_.__name__
return BasicType(name) |
Executes a logical form, using whatever predicates you have defined. | def execute(self, logical_form: str):
"""Executes a logical form, using whatever predicates you have defined."""
if not hasattr(self, '_functions'):
raise RuntimeError("You must call super().__init__() in your Language constructor")
logical_form = logical_form.replace(",", " ")
expression = util.lisp_to_nested_expression(logical_form)
return self._execute_expression(expression) |
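A minimal end-to-end sketch of a language built on this class. The ``DomainLanguage`` constructor arguments (``start_types``, ``allowed_constants``) follow the AllenNLP API and are assumptions here, since that constructor is not reproduced above:

class Arithmetic(DomainLanguage):
    def __init__(self):
        super().__init__(start_types={int}, allowed_constants={"2": 2, "3": 3})

    @predicate
    def add(self, num1: int, num2: int) -> int:
        return num1 + num2

language = Arithmetic()
language.execute("(add 2 3)")  # -> 5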
Executes the program defined by an action sequence directly, without needing the overhead
of translating to a logical form first. For any given program, :func:`execute` and this
function are equivalent, they just take different representations of the program, so you
can use whichever is more efficient.
Also, if you have state or side arguments associated with particular production rules
(e.g., the decoder's attention on an input utterance when a predicate was predicted), you
`must` use this function to execute the logical form, instead of :func:`execute`, so that
we can match the side arguments with the right functions. | def execute_action_sequence(self, action_sequence: List[str], side_arguments: List[Dict] = None):
"""
Executes the program defined by an action sequence directly, without needing the overhead
of translating to a logical form first. For any given program, :func:`execute` and this
function are equivalent, they just take different representations of the program, so you
can use whichever is more efficient.
Also, if you have state or side arguments associated with particular production rules
(e.g., the decoder's attention on an input utterance when a predicate was predicted), you
`must` use this function to execute the logical form, instead of :func:`execute`, so that
we can match the side arguments with the right functions.
"""
# We'll strip off the first action, because it doesn't matter for execution.
first_action = action_sequence[0]
left_side = first_action.split(' -> ')[0]
if left_side != '@start@':
raise ExecutionError('invalid action sequence')
remaining_side_args = side_arguments[1:] if side_arguments else None
return self._execute_sequence(action_sequence[1:], remaining_side_args)[0] |
Induces a grammar from the defined collection of predicates in this language and returns
all productions in that grammar, keyed by the non-terminal they are expanding.
This includes terminal productions implied by each predicate as well as productions for the
`return type` of each defined predicate. For example, defining a "multiply" predicate adds
a "<int,int:int> -> multiply" terminal production to the grammar, and `also` a "int ->
[<int,int:int>, int, int]" non-terminal production, because I can use the "multiply"
predicate to produce an int. | def get_nonterminal_productions(self) -> Dict[str, List[str]]:
"""
Induces a grammar from the defined collection of predicates in this language and returns
all productions in that grammar, keyed by the non-terminal they are expanding.
This includes terminal productions implied by each predicate as well as productions for the
`return type` of each defined predicate. For example, defining a "multiply" predicate adds
a "<int,int:int> -> multiply" terminal production to the grammar, and `also` a "int ->
[<int,int:int>, int, int]" non-terminal production, because I can use the "multiply"
predicate to produce an int.
"""
if not self._nonterminal_productions:
actions: Dict[str, Set[str]] = defaultdict(set)
# If you didn't give us a set of valid start types, we'll assume all types we know
# about (including functional types) are valid start types.
if self._start_types:
start_types = self._start_types
else:
start_types = set()
for type_list in self._function_types.values():
start_types.update(type_list)
for start_type in start_types:
actions[START_SYMBOL].add(f"{START_SYMBOL} -> {start_type}")
for name, function_type_list in self._function_types.items():
for function_type in function_type_list:
actions[str(function_type)].add(f"{function_type} -> {name}")
if isinstance(function_type, FunctionType):
return_type = function_type.return_type
arg_types = function_type.argument_types
right_side = f"[{function_type}, {', '.join(str(arg_type) for arg_type in arg_types)}]"
actions[str(return_type)].add(f"{return_type} -> {right_side}")
self._nonterminal_productions = {key: sorted(value) for key, value in actions.items()}
return self._nonterminal_productions |
Returns a sorted list of all production rules in the grammar induced by
:func:`get_nonterminal_productions`. | def all_possible_productions(self) -> List[str]:
"""
Returns a sorted list of all production rules in the grammar induced by
:func:`get_nonterminal_productions`.
"""
all_actions = set()
for action_set in self.get_nonterminal_productions().values():
all_actions.update(action_set)
return sorted(all_actions) |
Converts a logical form into a linearization of the production rules from its abstract
syntax tree. The linearization is top-down, depth-first.
Each production rule is formatted as "LHS -> RHS", where "LHS" is a single non-terminal
type, and RHS is either a terminal or a list of non-terminals (other possible values for
RHS in a more general context-free grammar are not produced by our grammar induction
logic).
Non-terminals are `types` in the grammar, either basic types (like ``int``, ``str``, or
some class that you define), or functional types, represented with angle brackets with a
colon separating arguments from the return type. Multi-argument functions have commas
separating their argument types. For example, ``<int:int>`` is a function that takes an
integer and returns an integer, and ``<int,int:int>`` is a function that takes two integer
arguments and returns an integer.
As an example translation from logical form to complete action sequence, the logical form
``(add 2 3)`` would be translated to ``['@start@ -> int', 'int -> [<int,int:int>, int, int]',
'<int,int:int> -> add', 'int -> 2', 'int -> 3']``. | def logical_form_to_action_sequence(self, logical_form: str) -> List[str]:
"""
Converts a logical form into a linearization of the production rules from its abstract
syntax tree. The linearization is top-down, depth-first.
Each production rule is formatted as "LHS -> RHS", where "LHS" is a single non-terminal
type, and RHS is either a terminal or a list of non-terminals (other possible values for
RHS in a more general context-free grammar are not produced by our grammar induction
logic).
Non-terminals are `types` in the grammar, either basic types (like ``int``, ``str``, or
some class that you define), or functional types, represented with angle brackets with a
colon separating arguments from the return type. Multi-argument functions have commas
separating their argument types. For example, ``<int:int>`` is a function that takes an
integer and returns an integer, and ``<int,int:int>`` is a function that takes two integer
arguments and returns an integer.
As an example translation from logical form to complete action sequence, the logical form
``(add 2 3)`` would be translated to ``['@start@ -> int', 'int -> [<int,int:int>, int, int]',
'<int,int:int> -> add', 'int -> 2', 'int -> 3']``.
"""
expression = util.lisp_to_nested_expression(logical_form)
try:
transitions, start_type = self._get_transitions(expression, expected_type=None)
if self._start_types and start_type not in self._start_types:
raise ParsingError(f"Expression had unallowed start type of {start_type}: {expression}")
except ParsingError:
logger.error(f'Error parsing logical form: {logical_form}')
raise
transitions.insert(0, f'@start@ -> {start_type}')
return transitions |
Takes an action sequence as produced by :func:`logical_form_to_action_sequence`, which is a
linearization of an abstract syntax tree, and reconstructs the logical form defined by that
abstract syntax tree. | def action_sequence_to_logical_form(self, action_sequence: List[str]) -> str:
"""
Takes an action sequence as produced by :func:`logical_form_to_action_sequence`, which is a
linearization of an abstract syntax tree, and reconstructs the logical form defined by that
abstract syntax tree.
"""
# Basic outline: we assume that the bracketing that we get in the RHS of each action is the
# correct bracketing for reconstructing the logical form. This is true when there is no
# currying in the action sequence. Given this assumption, we just need to construct a tree
# from the action sequence, then output all of the leaves in the tree, with brackets around
# the children of all non-terminal nodes.
remaining_actions = [action.split(" -> ") for action in action_sequence]
tree = Tree(remaining_actions[0][1], [])
try:
remaining_actions = self._construct_node_from_actions(tree, remaining_actions[1:])
except ParsingError:
logger.error("Error parsing action sequence: %s", action_sequence)
raise
if remaining_actions:
logger.error("Error parsing action sequence: %s", action_sequence)
logger.error("Remaining actions were: %s", remaining_actions)
raise ParsingError("Extra actions in action sequence")
return nltk_tree_to_logical_form(tree) |
Adds a predicate to this domain language. Typically you do this with the ``@predicate``
decorator on the methods in your class. But, if you need to for whatever reason, you can
also call this function yourself with a (type-annotated) function to add it to your
language.
Parameters
----------
name : ``str``
The name that we will use in the induced language for this function.
function : ``Callable``
The function that gets called when executing a predicate with the given name.
side_arguments : ``List[str]``, optional
If given, we will ignore these arguments for the purposes of grammar induction. This
is to allow passing extra arguments from the decoder state that are not explicitly part
of the language the decoder produces, such as the decoder's attention over the question
when a terminal was predicted. If you use this functionality, you also `must` use
``language.execute_action_sequence()`` instead of ``language.execute()``, and you must
pass the additional side arguments needed to that function. See
:func:`execute_action_sequence` for more information. | def add_predicate(self, name: str, function: Callable, side_arguments: List[str] = None):
"""
Adds a predicate to this domain language. Typically you do this with the ``@predicate``
decorator on the methods in your class. But, if you need to for whatever reason, you can
also call this function yourself with a (type-annotated) function to add it to your
language.
Parameters
----------
name : ``str``
The name that we will use in the induced language for this function.
function : ``Callable``
The function that gets called when executing a predicate with the given name.
side_arguments : ``List[str]``, optional
If given, we will ignore these arguments for the purposes of grammar induction. This
is to allow passing extra arguments from the decoder state that are not explicitly part
of the language the decoder produces, such as the decoder's attention over the question
when a terminal was predicted. If you use this functionality, you also `must` use
``language.execute_action_sequence()`` instead of ``language.execute()``, and you must
pass the additional side arguments needed to that function. See
:func:`execute_action_sequence` for more information.
"""
side_arguments = side_arguments or []
signature = inspect.signature(function)
argument_types = [param.annotation for name, param in signature.parameters.items()
if name not in side_arguments]
return_type = signature.return_annotation
argument_nltk_types: List[PredicateType] = [PredicateType.get_type(arg_type)
for arg_type in argument_types]
return_nltk_type = PredicateType.get_type(return_type)
function_nltk_type = PredicateType.get_function_type(argument_nltk_types, return_nltk_type)
self._functions[name] = function
self._function_types[name].append(function_nltk_type) |
Adds a constant to this domain language. You would typically just pass in a list of
constants to the ``super().__init__()`` call in your constructor, but you can also call
this method to add constants if it is more convenient.
Because we construct a grammar over this language for you, in order for the grammar to be
finite we cannot allow arbitrary constants. Having a finite grammar is important when
you're doing semantic parsing - we need to be able to search over this space, and compute
normalized probability distributions. | def add_constant(self, name: str, value: Any, type_: Type = None):
"""
Adds a constant to this domain language. You would typically just pass in a list of
constants to the ``super().__init__()`` call in your constructor, but you can also call
this method to add constants if it is more convenient.
Because we construct a grammar over this language for you, in order for the grammar to be
finite we cannot allow arbitrary constants. Having a finite grammar is important when
you're doing semantic parsing - we need to be able to search over this space, and compute
normalized probability distributions.
"""
value_type = type_ if type_ else type(value)
constant_type = PredicateType.get_type(value_type)
self._functions[name] = lambda: value
self._function_types[name].append(constant_type) |
Determines whether an input symbol is a valid non-terminal in the grammar. | def is_nonterminal(self, symbol: str) -> bool:
"""
Determines whether an input symbol is a valid non-terminal in the grammar.
"""
nonterminal_productions = self.get_nonterminal_productions()
return symbol in nonterminal_productions |
This does the bulk of the work of executing a logical form, recursively executing a single
expression. Basically, if the expression is a function we know about, we evaluate its
arguments then call the function. If it's a list, we evaluate all elements of the list.
If it's a constant (or a zero-argument function), we evaluate the constant. | def _execute_expression(self, expression: Any):
"""
This does the bulk of the work of executing a logical form, recursively executing a single
expression. Basically, if the expression is a function we know about, we evaluate its
arguments then call the function. If it's a list, we evaluate all elements of the list.
If it's a constant (or a zero-argument function), we evaluate the constant.
"""
# pylint: disable=too-many-return-statements
if isinstance(expression, list):
if isinstance(expression[0], list):
function = self._execute_expression(expression[0])
elif expression[0] in self._functions:
function = self._functions[expression[0]]
else:
if isinstance(expression[0], str):
raise ExecutionError(f"Unrecognized function: {expression[0]}")
else:
raise ExecutionError(f"Unsupported expression type: {expression}")
arguments = [self._execute_expression(arg) for arg in expression[1:]]
try:
return function(*arguments)
except (TypeError, ValueError):
traceback.print_exc()
raise ExecutionError(f"Error executing expression {expression} (see stderr for stack trace)")
elif isinstance(expression, str):
if expression not in self._functions:
raise ExecutionError(f"Unrecognized constant: {expression}")
# This is a bit of a quirk in how we represent constants and zero-argument functions.
# For consistency, constants are wrapped in a zero-argument lambda. So both constants
# and zero-argument functions are callable in `self._functions`, and are `BasicTypes`
# in `self._function_types`. For these, we want to return
# `self._functions[expression]()` _calling_ the zero-argument function. If we get a
# `FunctionType` in here, that means we're referring to the function as a first-class
# object, instead of calling it (maybe as an argument to a higher-order function). In
# that case, we return the function _without_ calling it.
# Also, we just check the first function type here, because we assume you haven't
# registered the same function with both a constant type and a `FunctionType`.
if isinstance(self._function_types[expression][0], FunctionType):
return self._functions[expression]
else:
return self._functions[expression]()
else:
raise ExecutionError("Not sure how you got here. Please open a github issue with details.") |
This does the bulk of the work of :func:`execute_action_sequence`, recursively executing
the functions it finds and trimming actions off of the action sequence. The return value
is a tuple of (execution, remaining_actions, remaining_side_args), where the trailing values are needed to handle
the recursion. | def _execute_sequence(self,
action_sequence: List[str],
side_arguments: List[Dict]) -> Tuple[Any, List[str], List[Dict]]:
"""
This does the bulk of the work of :func:`execute_action_sequence`, recursively executing
the functions it finds and trimming actions off of the action sequence. The return value
is a tuple of (execution, remaining_actions, remaining_side_args), where the trailing values are needed to handle
the recursion.
"""
first_action = action_sequence[0]
remaining_actions = action_sequence[1:]
remaining_side_args = side_arguments[1:] if side_arguments else None
right_side = first_action.split(' -> ')[1]
if right_side in self._functions:
function = self._functions[right_side]
# mypy doesn't like this check, saying that Callable isn't a reasonable thing to pass
# here. But it works just fine; I'm not sure why mypy complains about it.
if isinstance(function, Callable): # type: ignore
function_arguments = inspect.signature(function).parameters
if not function_arguments:
# This was a zero-argument function / constant that was registered as a lambda
# function, for consistency of execution in `execute()`.
execution_value = function()
elif side_arguments:
kwargs = {}
non_kwargs = []
for argument_name in function_arguments:
if argument_name in side_arguments[0]:
kwargs[argument_name] = side_arguments[0][argument_name]
else:
non_kwargs.append(argument_name)
if kwargs and non_kwargs:
# This is a function that has both side arguments and logical form
# arguments - we curry the function so only the logical form arguments are
# left.
def curried_function(*args):
return function(*args, **kwargs)
execution_value = curried_function
elif kwargs:
# This is a function that _only_ has side arguments - we just call the
# function and return a value.
execution_value = function(**kwargs)
else:
# This is a function that has logical form arguments, but no side arguments
# that match what we were given - just return the function itself.
execution_value = function
else:
execution_value = function
return execution_value, remaining_actions, remaining_side_args
else:
# This is a non-terminal expansion, like 'int -> [<int:int>, int, int]'. We need to
# get the function and its arguments, then call the function with its arguments.
# Because we linearize the abstract syntax tree depth first, left-to-right, we can just
# recursively call `_execute_sequence` for the function and all of its arguments, and
# things will just work.
right_side_parts = right_side.split(', ')
# We don't really need to know what the types are, just how many of them there are, so
# we recurse the right number of times.
function, remaining_actions, remaining_side_args = self._execute_sequence(remaining_actions,
remaining_side_args)
arguments = []
for _ in right_side_parts[1:]:
argument, remaining_actions, remaining_side_args = self._execute_sequence(remaining_actions,
remaining_side_args)
arguments.append(argument)
return function(*arguments), remaining_actions, remaining_side_args |
This is used when converting a logical form into an action sequence. This piece
recursively translates a lisp expression into an action sequence, making sure we match the
expected type (or using the expected type to get the right type for constant expressions). | def _get_transitions(self, expression: Any, expected_type: PredicateType) -> Tuple[List[str], PredicateType]:
"""
This is used when converting a logical form into an action sequence. This piece
recursively translates a lisp expression into an action sequence, making sure we match the
expected type (or using the expected type to get the right type for constant expressions).
"""
if isinstance(expression, (list, tuple)):
function_transitions, return_type, argument_types = self._get_function_transitions(expression[0],
expected_type)
if len(argument_types) != len(expression[1:]):
raise ParsingError(f'Wrong number of arguments for function in {expression}')
argument_transitions = []
for argument_type, subexpression in zip(argument_types, expression[1:]):
argument_transitions.extend(self._get_transitions(subexpression, argument_type)[0])
return function_transitions + argument_transitions, return_type
elif isinstance(expression, str):
if expression not in self._functions:
raise ParsingError(f"Unrecognized constant: {expression}")
constant_types = self._function_types[expression]
if len(constant_types) == 1:
constant_type = constant_types[0]
# This constant had only one type; that's the easy case.
if expected_type and expected_type != constant_type:
raise ParsingError(f'{expression} did not have expected type {expected_type} '
f'(found {constant_type})')
return [f'{constant_type} -> {expression}'], constant_type
else:
if not expected_type:
raise ParsingError('With no expected type and multiple types to pick from '
f"I don't know what type to use (constant was {expression})")
if expected_type not in constant_types:
raise ParsingError(f'{expression} did not have expected type {expected_type} '
f'(found these options: {constant_types}; none matched)')
return [f'{expected_type} -> {expression}'], expected_type
else:
raise ParsingError('Not sure how you got here. Please open an issue on github with details.') |
A helper method for ``_get_transitions``. This gets the transitions for the predicate
itself in a function call. If we only had simple functions (e.g., "(add 2 3)"), this would
be pretty straightforward and we wouldn't need a separate method to handle it. We split it
out into its own method because handling higher-order functions is complicated (e.g.,
something like "((negate add) 2 3)"). | def _get_function_transitions(self,
expression: Union[str, List],
expected_type: PredicateType) -> Tuple[List[str],
PredicateType,
List[PredicateType]]:
"""
A helper method for ``_get_transitions``. This gets the transitions for the predicate
itself in a function call. If we only had simple functions (e.g., "(add 2 3)"), this would
be pretty straightforward and we wouldn't need a separate method to handle it. We split it
out into its own method because handling higher-order functions is complicated (e.g.,
something like "((negate add) 2 3)").
"""
# This first block handles getting the transitions and function type (and some error
# checking) _just for the function itself_. If this is a simple function, this is easy; if
# it's a higher-order function, it involves some recursion.
if isinstance(expression, list):
# This is a higher-order function. TODO(mattg): we'll just ignore type checking on
# higher-order functions, for now.
transitions, function_type = self._get_transitions(expression, None)
elif expression in self._functions:
name = expression
function_types = self._function_types[expression]
if len(function_types) != 1:
raise ParsingError(f"{expression} had multiple types; this is not yet supported for functions")
function_type = function_types[0]
transitions = [f'{function_type} -> {name}']
else:
if isinstance(expression, str):
raise ParsingError(f"Unrecognized function: {expression[0]}")
else:
raise ParsingError(f"Unsupported expression type: {expression}")
if not isinstance(function_type, FunctionType):
raise ParsingError(f'Zero-arg function or constant called with arguments: {expression}')
# Now that we have the transitions for the function itself, and the function's type, we can
# get argument types and do the rest of the transitions.
argument_types = function_type.argument_types
return_type = function_type.return_type
right_side = f'[{function_type}, {", ".join(str(arg) for arg in argument_types)}]'
first_transition = f'{return_type} -> {right_side}'
transitions.insert(0, first_transition)
if expected_type and expected_type != return_type:
raise ParsingError(f'{expression} did not have expected type {expected_type} '
f'(found {return_type})')
return transitions, return_type, argument_types |
Given a current node in the logical form tree, and a list of actions in an action sequence,
this method fills in the children of the current node from the action sequence, then
returns whatever actions are left.
For example, we could get a node with type ``c``, and an action sequence that begins with
``c -> [<r,c>, r]``. This method will add two children to the input node, consuming
actions from the action sequence for nodes of type ``<r,c>`` (and all of its children,
recursively) and ``r`` (and all of its children, recursively). This method assumes that
action sequences are produced `depth-first`, so all actions for the subtree under ``<r,c>``
appear before actions for the subtree under ``r``. If there are any actions in the action
sequence after the ``<r,c>`` and ``r`` subtrees have terminated in leaf nodes, they will be
returned. | def _construct_node_from_actions(self,
current_node: Tree,
remaining_actions: List[List[str]]) -> List[List[str]]:
"""
Given a current node in the logical form tree, and a list of actions in an action sequence,
this method fills in the children of the current node from the action sequence, then
returns whatever actions are left.
For example, we could get a node with type ``c``, and an action sequence that begins with
``c -> [<r,c>, r]``. This method will add two children to the input node, consuming
actions from the action sequence for nodes of type ``<r,c>`` (and all of its children,
recursively) and ``r`` (and all of its children, recursively). This method assumes that
action sequences are produced `depth-first`, so all actions for the subtree under ``<r,c>``
appear before actions for the subtree under ``r``. If there are any actions in the action
sequence after the ``<r,c>`` and ``r`` subtrees have terminated in leaf nodes, they will be
returned.
"""
if not remaining_actions:
logger.error("No actions left to construct current node: %s", current_node)
raise ParsingError("Incomplete action sequence")
left_side, right_side = remaining_actions.pop(0)
if left_side != current_node.label():
logger.error("Current node: %s", current_node)
logger.error("Next action: %s -> %s", left_side, right_side)
logger.error("Remaining actions were: %s", remaining_actions)
raise ParsingError("Current node does not match next action")
if right_side[0] == '[':
# This is a non-terminal expansion, with more than one child node.
for child_type in right_side[1:-1].split(', '):
child_node = Tree(child_type, [])
current_node.append(child_node) # you add a child to an nltk.Tree with `append`
# For now, we assume that all children in a list like this are non-terminals, so we
# recurse on them. I'm pretty sure that will always be true for the way our
# grammar induction works. We can revisit this later if we need to.
remaining_actions = self._construct_node_from_actions(child_node, remaining_actions)
else:
# The current node is a pre-terminal; we'll add a single terminal child. By
# construction, the right-hand side of our production rules are only ever terminal
# productions or lists of non-terminals.
current_node.append(Tree(right_side, [])) # you add a child to an nltk.Tree with `append`
return remaining_actions |
Chooses ``num_samples`` samples without replacement from [0, ..., num_words).
Returns a tuple (samples, num_tries). | def _choice(num_words: int, num_samples: int) -> Tuple[np.ndarray, int]:
"""
Chooses ``num_samples`` samples without replacement from [0, ..., num_words).
Returns a tuple (samples, num_tries).
"""
num_tries = 0
num_chosen = 0
def get_buffer() -> np.ndarray:
log_samples = np.random.rand(num_samples) * np.log(num_words + 1)
samples = np.exp(log_samples).astype('int64') - 1
return np.clip(samples, a_min=0, a_max=num_words - 1)
sample_buffer = get_buffer()
buffer_index = 0
samples: Set[int] = set()
while num_chosen < num_samples:
num_tries += 1
# choose sample
sample_id = sample_buffer[buffer_index]
if sample_id not in samples:
samples.add(sample_id)
num_chosen += 1
buffer_index += 1
if buffer_index == num_samples:
# Reset the buffer
sample_buffer = get_buffer()
buffer_index = 0
return np.array(list(samples)), num_tries |
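The exp(rand * log(num_words + 1)) - 1 trick above draws ids from a log-uniform (Zipfian) distribution, which is what sampled-softmax losses typically assume for a frequency-sorted vocabulary. A quick standalone sanity check (illustrative only, not part of the library):
import numpy as np
num_words = 1000
u = np.random.rand(100000)
ids = np.clip(np.exp(u * np.log(num_words + 1)).astype('int64') - 1, 0, num_words - 1)
# Small ids should be sampled far more often than large ones,
# roughly following P(k) proportional to log((k + 2) / (k + 1)).
print((ids == 0).mean(), (ids == 500).mean())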
Takes a list of tokens and converts them to one or more sets of indices.
This could be just an ID for each token from the vocabulary.
Or it could split each token into characters and return one ID per character.
Or (for instance, in the case of byte-pair encoding) there might not be a clean
mapping from individual tokens to indices. | def tokens_to_indices(self,
tokens: List[Token],
vocabulary: Vocabulary,
index_name: str) -> Dict[str, List[TokenType]]:
"""
Takes a list of tokens and converts them to one or more sets of indices.
This could be just an ID for each token from the vocabulary.
Or it could split each token into characters and return one ID per character.
Or (for instance, in the case of byte-pair encoding) there might not be a clean
mapping from individual tokens to indices.
"""
raise NotImplementedError |
This method pads a list of tokens to ``desired_num_tokens`` and returns a padded copy of the
input tokens. If the input token list is longer than ``desired_num_tokens`` then it will be
truncated.
``padding_lengths`` is used to provide supplemental padding parameters which are needed
in some cases. For example, it contains the widths to pad characters to when doing
character-level padding. | def pad_token_sequence(self,
tokens: Dict[str, List[TokenType]],
desired_num_tokens: Dict[str, int],
padding_lengths: Dict[str, int]) -> Dict[str, List[TokenType]]:
"""
This method pads a list of tokens to ``desired_num_tokens`` and returns a padded copy of the
input tokens. If the input token list is longer than ``desired_num_tokens`` then it will be
truncated.
``padding_lengths`` is used to provide supplemental padding parameters which are needed
in some cases. For example, it contains the widths to pad characters to when doing
character-level padding.
"""
raise NotImplementedError |
The CONLL 2012 data includes 2 annotated spans which are identical,
but have different ids. This checks all clusters for spans which are
identical, and if it finds any, merges the clusters containing the
identical spans. | def canonicalize_clusters(clusters: DefaultDict[int, List[Tuple[int, int]]]) -> List[List[Tuple[int, int]]]:
"""
The CONLL 2012 data includes 2 annotated spans which are identical,
but have different ids. This checks all clusters for spans which are
identical, and if it finds any, merges the clusters containing the
identical spans.
"""
merged_clusters: List[Set[Tuple[int, int]]] = []
for cluster in clusters.values():
cluster_with_overlapping_mention = None
for mention in cluster:
# Look at clusters we have already processed to
# see if they contain a mention in the current
# cluster for comparison.
for cluster2 in merged_clusters:
if mention in cluster2:
# first cluster in merged clusters
# which contains this mention.
cluster_with_overlapping_mention = cluster2
break
# Already encountered overlap - no need to keep looking.
if cluster_with_overlapping_mention is not None:
break
if cluster_with_overlapping_mention is not None:
# Merge cluster we are currently processing into
# the cluster in the processed list.
cluster_with_overlapping_mention.update(cluster)
else:
merged_clusters.append(set(cluster))
return [list(c) for c in merged_clusters] |
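A toy usage sketch (the cluster ids and spans below are made up): two clusters that share the identical span (4, 5) get merged into one.
from collections import defaultdict
clusters = defaultdict(list)
clusters[0] = [(0, 1), (4, 5)]
clusters[1] = [(4, 5), (8, 9)]
print(canonicalize_clusters(clusters))
# Expected: a single merged cluster containing (0, 1), (4, 5) and (8, 9) (in some order),
# rather than two clusters that disagree about the same mention.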
Join multi-word predicates to a single
predicate ('V') token. | def join_mwp(tags: List[str]) -> List[str]:
"""
Join multi-word predicates to a single
predicate ('V') token.
"""
ret = []
verb_flag = False
for tag in tags:
if "V" in tag:
# Create a continuous 'V' BIO span
prefix, _ = tag.split("-")
if verb_flag:
# Continue a verb label across the different predicate parts
prefix = 'I'
ret.append(f"{prefix}-V")
verb_flag = True
else:
ret.append(tag)
verb_flag = False
return ret |
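For example (the tag sequence is made up), a multi-word predicate like "decided to run" that the model tagged as three separate B-V fragments becomes one continuous verb span:
tags = ['B-ARG0', 'B-V', 'B-V', 'B-V', 'B-ARG1']
print(join_mwp(tags))  # ['B-ARG0', 'B-V', 'I-V', 'I-V', 'B-ARG1']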
Converts a single model output (i.e., a list of tokens and their BIO tags, one
tag per word) into an inline bracket representation of
the prediction. | def make_oie_string(tokens: List[Token], tags: List[str]) -> str:
"""
Converts a single model output (i.e., a list of tokens and their BIO tags, one
tag per word) into an inline bracket representation of
the prediction.
"""
frame = []
chunk = []
words = [token.text for token in tokens]
for (token, tag) in zip(words, tags):
if tag.startswith("I-"):
chunk.append(token)
else:
if chunk:
frame.append("[" + " ".join(chunk) + "]")
chunk = []
if tag.startswith("B-"):
chunk.append(tag[2:] + ": " + token)
elif tag == "O":
frame.append(token)
if chunk:
frame.append("[" + " ".join(chunk) + "]")
return " ".join(frame) |
Return the word indices of a predicate in BIO tags. | def get_predicate_indices(tags: List[str]) -> List[int]:
"""
Return the word indices of a predicate in BIO tags.
"""
return [ind for ind, tag in enumerate(tags) if 'V' in tag] |
Get the predicate in this prediction. | def get_predicate_text(sent_tokens: List[Token], tags: List[str]) -> str:
"""
Get the predicate in this prediction.
"""
return " ".join([sent_tokens[pred_id].text
for pred_id in get_predicate_indices(tags)]) |
Tests whether the predicates in BIO tags1 overlap
with those of tags2. | def predicates_overlap(tags1: List[str], tags2: List[str]) -> bool:
"""
Tests whether the predicates in BIO tags1 overlap
with those of tags2.
"""
# Get predicate word indices from both predictions
pred_ind1 = get_predicate_indices(tags1)
pred_ind2 = get_predicate_indices(tags2)
# Return whether pred_ind1 and pred_ind2 overlap. We use ``bool`` rather than
# ``any`` here because ``any`` would return False if the only shared index were 0.
return bool(set(pred_ind1) & set(pred_ind2))
Generate a coherent tag, given previous tag and current label. | def get_coherent_next_tag(prev_label: str, cur_label: str) -> str:
"""
Generate a coherent tag, given previous tag and current label.
"""
if cur_label == "O":
# Don't need to add prefix to an "O" label
return "O"
if prev_label == cur_label:
return f"I-{cur_label}"
else:
return f"B-{cur_label}" |
Merge two predictions into one. Assumes the predicate in tags1 overlap with
the predicate of tags2. | def merge_overlapping_predictions(tags1: List[str], tags2: List[str]) -> List[str]:
"""
Merge two predictions into one. Assumes the predicate in tags1 overlap with
the predicate of tags2.
"""
ret_sequence = []
prev_label = "O"
# Build a coherent sequence out of two
# spans whose predicates overlap
for tag1, tag2 in zip(tags1, tags2):
label1 = tag1.split("-")[-1]
label2 = tag2.split("-")[-1]
if (label1 == "V") or (label2 == "V"):
# Construct maximal predicate length -
# add the predicate tag if either sequence predicts it
cur_label = "V"
# Else - prefer an argument over 'O' label
elif label1 != "O":
cur_label = label1
else:
cur_label = label2
# Append cur tag to the returned sequence
cur_tag = get_coherent_next_tag(prev_label, cur_label)
prev_label = cur_label
ret_sequence.append(cur_tag)
return ret_sequence |
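An illustrative merge (the tag sequences are made up): the merged predicate span is the union of the two predicates, and arguments are preferred over 'O':
tags1 = ['B-ARG0', 'B-V', 'O', 'O']
tags2 = ['O', 'B-V', 'I-V', 'B-ARG1']
print(merge_overlapping_predictions(tags1, tags2))  # ['B-ARG0', 'B-V', 'I-V', 'B-ARG1']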
Identify cases where certain predicates are part of a multiword predicate
(e.g., "decided to run"), in which case we don't need to return
the embedded predicate ("run"). | def consolidate_predictions(outputs: List[List[str]], sent_tokens: List[Token]) -> Dict[str, List[str]]:
"""
Identify cases where certain predicates are part of a multiword predicate
(e.g., "decided to run"), in which case we don't need to return
the embedded predicate ("run").
"""
pred_dict: Dict[str, List[str]] = {}
merged_outputs = [join_mwp(output) for output in outputs]
predicate_texts = [get_predicate_text(sent_tokens, tags)
for tags in merged_outputs]
for pred1_text, tags1 in zip(predicate_texts, merged_outputs):
# A flag indicating whether to add tags1 to predictions
add_to_prediction = True
# Check if this predicate overlaps another predicate
for pred2_text, tags2 in pred_dict.items():
if predicates_overlap(tags1, tags2):
# tags1 overlaps tags2
pred_dict[pred2_text] = merge_overlapping_predictions(tags1, tags2)
add_to_prediction = False
# This predicate doesn't overlap - add as a new predicate
if add_to_prediction:
pred_dict[pred1_text] = tags1
return pred_dict |
Sanitize a BIO label - this deals with OIE
labels sometimes having some noise, such as parentheses. | def sanitize_label(label: str) -> str:
"""
Sanitize a BIO label - this deals with OIE
labels sometimes having some noise, such as parentheses.
"""
if "-" in label:
prefix, suffix = label.split("-")
suffix = suffix.split("(")[-1]
return f"{prefix}-{suffix}"
else:
return label |
Converts a batch of tokenized sentences to a tensor representing the sentences with encoded characters
(len(batch), max sentence length, max word length).
Parameters
----------
batch : ``List[List[str]]``, required
A list of tokenized sentences.
Returns
-------
A tensor of padded character ids. | def batch_to_ids(batch: List[List[str]]) -> torch.Tensor:
"""
Converts a batch of tokenized sentences to a tensor representing the sentences with encoded characters
(len(batch), max sentence length, max word length).
Parameters
----------
batch : ``List[List[str]]``, required
A list of tokenized sentences.
Returns
-------
A tensor of padded character ids.
"""
instances = []
indexer = ELMoTokenCharactersIndexer()
for sentence in batch:
tokens = [Token(token) for token in sentence]
field = TextField(tokens,
{'character_ids': indexer})
instance = Instance({"elmo": field})
instances.append(instance)
dataset = Batch(instances)
vocab = Vocabulary()
dataset.index_instances(vocab)
return dataset.as_tensor_dict()['elmo']['character_ids'] |
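Typical usage pairs this with the Elmo module. A hedged sketch follows; the options/weights URLs are assumptions based on the publicly released small ELMo files and may need updating for your setup.
from allennlp.modules.elmo import Elmo, batch_to_ids
options_file = 'https://allennlp.s3.amazonaws.com/models/elmo/2x1024_128_2048cnn_1xhighway/elmo_2x1024_128_2048cnn_1xhighway_options.json'
weight_file = 'https://allennlp.s3.amazonaws.com/models/elmo/2x1024_128_2048cnn_1xhighway/elmo_2x1024_128_2048cnn_1xhighway_weights.hdf5'
elmo = Elmo(options_file, weight_file, num_output_representations=1, dropout=0.0)
character_ids = batch_to_ids([['First', 'sentence', '.'], ['Another', 'one']])
output = elmo(character_ids)
# output['elmo_representations'] is a list with one (2, 3, embedding_dim) tensor,
# and output['mask'] has shape (2, 3).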
Parameters
----------
inputs: ``torch.Tensor``, required.
Shape ``(batch_size, timesteps, 50)`` of character ids representing the current batch.
word_inputs : ``torch.Tensor``, optional.
If you passed a cached vocab, you can in addition pass a tensor of shape
``(batch_size, timesteps)``, which represents word ids which have been pre-cached.
Returns
-------
Dict with keys:
``'elmo_representations'``: ``List[torch.Tensor]``
A ``num_output_representations`` list of ELMo representations for the input sequence.
Each representation is shape ``(batch_size, timesteps, embedding_dim)``
``'mask'``: ``torch.Tensor``
Shape ``(batch_size, timesteps)`` long tensor with sequence mask. | def forward(self, # pylint: disable=arguments-differ
inputs: torch.Tensor,
word_inputs: torch.Tensor = None) -> Dict[str, Union[torch.Tensor, List[torch.Tensor]]]:
"""
Parameters
----------
inputs: ``torch.Tensor``, required.
Shape ``(batch_size, timesteps, 50)`` of character ids representing the current batch.
word_inputs : ``torch.Tensor``, optional.
If you passed a cached vocab, you can in addition pass a tensor of shape
``(batch_size, timesteps)``, which represents word ids which have been pre-cached.
Returns
-------
Dict with keys:
``'elmo_representations'``: ``List[torch.Tensor]``
A ``num_output_representations`` list of ELMo representations for the input sequence.
Each representation is shape ``(batch_size, timesteps, embedding_dim)``
``'mask'``: ``torch.Tensor``
Shape ``(batch_size, timesteps)`` long tensor with sequence mask.
"""
# reshape the input if needed
original_shape = inputs.size()
if len(original_shape) > 3:
timesteps, num_characters = original_shape[-2:]
reshaped_inputs = inputs.view(-1, timesteps, num_characters)
else:
reshaped_inputs = inputs
if word_inputs is not None:
original_word_size = word_inputs.size()
if self._has_cached_vocab and len(original_word_size) > 2:
reshaped_word_inputs = word_inputs.view(-1, original_word_size[-1])
elif not self._has_cached_vocab:
logger.warning("Word inputs were passed to ELMo but it does not have a cached vocab.")
reshaped_word_inputs = None
else:
reshaped_word_inputs = word_inputs
else:
reshaped_word_inputs = word_inputs
# run the biLM
bilm_output = self._elmo_lstm(reshaped_inputs, reshaped_word_inputs)
layer_activations = bilm_output['activations']
mask_with_bos_eos = bilm_output['mask']
# compute the elmo representations
representations = []
for i in range(len(self._scalar_mixes)):
scalar_mix = getattr(self, 'scalar_mix_{}'.format(i))
representation_with_bos_eos = scalar_mix(layer_activations, mask_with_bos_eos)
if self._keep_sentence_boundaries:
processed_representation = representation_with_bos_eos
processed_mask = mask_with_bos_eos
else:
representation_without_bos_eos, mask_without_bos_eos = remove_sentence_boundaries(
representation_with_bos_eos, mask_with_bos_eos)
processed_representation = representation_without_bos_eos
processed_mask = mask_without_bos_eos
representations.append(self._dropout(processed_representation))
# reshape if necessary
if word_inputs is not None and len(original_word_size) > 2:
mask = processed_mask.view(original_word_size)
elmo_representations = [representation.view(original_word_size + (-1, ))
for representation in representations]
elif len(original_shape) > 3:
mask = processed_mask.view(original_shape[:-1])
elmo_representations = [representation.view(original_shape[:-1] + (-1, ))
for representation in representations]
else:
mask = processed_mask
elmo_representations = representations
return {'elmo_representations': elmo_representations, 'mask': mask} |
Compute context insensitive token embeddings for ELMo representations.
Parameters
----------
inputs: ``torch.Tensor``
Shape ``(batch_size, sequence_length, 50)`` of character ids representing the
current batch.
Returns
-------
Dict with keys:
``'token_embedding'``: ``torch.Tensor``
Shape ``(batch_size, sequence_length + 2, embedding_dim)`` tensor with context
insensitive token representations.
``'mask'``: ``torch.Tensor``
Shape ``(batch_size, sequence_length + 2)`` long tensor with sequence mask. | def forward(self, inputs: torch.Tensor) -> Dict[str, torch.Tensor]: # pylint: disable=arguments-differ
"""
Compute context insensitive token embeddings for ELMo representations.
Parameters
----------
inputs: ``torch.Tensor``
Shape ``(batch_size, sequence_length, 50)`` of character ids representing the
current batch.
Returns
-------
Dict with keys:
``'token_embedding'``: ``torch.Tensor``
Shape ``(batch_size, sequence_length + 2, embedding_dim)`` tensor with context
insensitive token representations.
``'mask'``: ``torch.Tensor``
Shape ``(batch_size, sequence_length + 2)`` long tensor with sequence mask.
"""
# Add BOS/EOS
mask = ((inputs > 0).long().sum(dim=-1) > 0).long()
character_ids_with_bos_eos, mask_with_bos_eos = add_sentence_boundary_token_ids(
inputs,
mask,
self._beginning_of_sentence_characters,
self._end_of_sentence_characters
)
# the character id embedding
max_chars_per_token = self._options['char_cnn']['max_characters_per_token']
# (batch_size * sequence_length, max_chars_per_token, embed_dim)
character_embedding = torch.nn.functional.embedding(
character_ids_with_bos_eos.view(-1, max_chars_per_token),
self._char_embedding_weights
)
# run convolutions
cnn_options = self._options['char_cnn']
if cnn_options['activation'] == 'tanh':
activation = torch.tanh
elif cnn_options['activation'] == 'relu':
activation = torch.nn.functional.relu
else:
raise ConfigurationError("Unknown activation")
# (batch_size * sequence_length, embed_dim, max_chars_per_token)
character_embedding = torch.transpose(character_embedding, 1, 2)
convs = []
for i in range(len(self._convolutions)):
conv = getattr(self, 'char_conv_{}'.format(i))
convolved = conv(character_embedding)
# (batch_size * sequence_length, n_filters for this width)
convolved, _ = torch.max(convolved, dim=-1)
convolved = activation(convolved)
convs.append(convolved)
# (batch_size * sequence_length, n_filters)
token_embedding = torch.cat(convs, dim=-1)
# apply the highway layers (batch_size * sequence_length, n_filters)
token_embedding = self._highways(token_embedding)
# final projection (batch_size * sequence_length, embedding_dim)
token_embedding = self._projection(token_embedding)
# reshape to (batch_size, sequence_length, embedding_dim)
batch_size, sequence_length, _ = character_ids_with_bos_eos.size()
return {
'mask': mask_with_bos_eos,
'token_embedding': token_embedding.view(batch_size, sequence_length, -1)
} |
Parameters
----------
inputs: ``torch.Tensor``, required.
Shape ``(batch_size, timesteps, 50)`` of character ids representing the current batch.
word_inputs : ``torch.Tensor``, optional.
If you passed a cached vocab, you can in addition pass a tensor of shape ``(batch_size, timesteps)``,
which represents word ids which have been pre-cached.
Returns
-------
Dict with keys:
``'activations'``: ``List[torch.Tensor]``
A list of activations at each layer of the network, each of shape
``(batch_size, timesteps + 2, embedding_dim)``
``'mask'``: ``torch.Tensor``
Shape ``(batch_size, timesteps + 2)`` long tensor with sequence mask.
Note that the output tensors all include additional special begin and end of sequence
markers. | def forward(self, # pylint: disable=arguments-differ
inputs: torch.Tensor,
word_inputs: torch.Tensor = None) -> Dict[str, Union[torch.Tensor, List[torch.Tensor]]]:
"""
Parameters
----------
inputs: ``torch.Tensor``, required.
Shape ``(batch_size, timesteps, 50)`` of character ids representing the current batch.
word_inputs : ``torch.Tensor``, optional.
If you passed a cached vocab, you can in addition pass a tensor of shape ``(batch_size, timesteps)``,
which represents word ids which have been pre-cached.
Returns
-------
Dict with keys:
``'activations'``: ``List[torch.Tensor]``
A list of activations at each layer of the network, each of shape
``(batch_size, timesteps + 2, embedding_dim)``
``'mask'``: ``torch.Tensor``
Shape ``(batch_size, timesteps + 2)`` long tensor with sequence mask.
Note that the output tensors all include additional special begin and end of sequence
markers.
"""
if self._word_embedding is not None and word_inputs is not None:
try:
mask_without_bos_eos = (word_inputs > 0).long()
# The character cnn part is cached - just look it up.
embedded_inputs = self._word_embedding(word_inputs) # type: ignore
# shape (batch_size, timesteps + 2, embedding_dim)
type_representation, mask = add_sentence_boundary_token_ids(
embedded_inputs,
mask_without_bos_eos,
self._bos_embedding,
self._eos_embedding
)
except RuntimeError:
# Back off to running the character convolutions,
# as we might not have the words in the cache.
token_embedding = self._token_embedder(inputs)
mask = token_embedding['mask']
type_representation = token_embedding['token_embedding']
else:
token_embedding = self._token_embedder(inputs)
mask = token_embedding['mask']
type_representation = token_embedding['token_embedding']
lstm_outputs = self._elmo_lstm(type_representation, mask)
# Prepare the output. The first layer is duplicated.
# Because of minor differences in how masking is applied depending
# on whether the char cnn layers are cached, we'll be defensive and
# multiply by the mask here. It's not strictly necessary, as the
# mask passed on is correct, but the values in the padded areas
# of the char cnn representations can change.
output_tensors = [
torch.cat([type_representation, type_representation], dim=-1) * mask.float().unsqueeze(-1)
]
for layer_activations in torch.chunk(lstm_outputs, lstm_outputs.size(0), dim=0):
output_tensors.append(layer_activations.squeeze(0))
return {
'activations': output_tensors,
'mask': mask,
} |
Given a list of tokens, this method precomputes word representations
by running just the character convolutions and highway layers of elmo,
essentially creating uncontextual word vectors. On subsequent forward passes,
the word ids are looked up from an embedding, rather than being computed on
the fly via the CNN encoder.
This function sets 3 attributes:
_word_embedding : ``torch.Tensor``
The word embedding for each word in the tokens passed to this method.
_bos_embedding : ``torch.Tensor``
The embedding for the BOS token.
_eos_embedding : ``torch.Tensor``
The embedding for the EOS token.
Parameters
----------
tokens : ``List[str]``, required.
A list of tokens to precompute character convolutions for. | def create_cached_cnn_embeddings(self, tokens: List[str]) -> None:
"""
Given a list of tokens, this method precomputes word representations
by running just the character convolutions and highway layers of elmo,
essentially creating uncontextual word vectors. On subsequent forward passes,
the word ids are looked up from an embedding, rather than being computed on
the fly via the CNN encoder.
This function sets 3 attributes:
_word_embedding : ``torch.Tensor``
The word embedding for each word in the tokens passed to this method.
_bos_embedding : ``torch.Tensor``
The embedding for the BOS token.
_eos_embedding : ``torch.Tensor``
The embedding for the EOS token.
Parameters
----------
tokens : ``List[str]``, required.
A list of tokens to precompute character convolutions for.
"""
tokens = [ELMoCharacterMapper.bos_token, ELMoCharacterMapper.eos_token] + tokens
timesteps = 32
batch_size = 32
chunked_tokens = lazy_groups_of(iter(tokens), timesteps)
all_embeddings = []
device = get_device_of(next(self.parameters()))
for batch in lazy_groups_of(chunked_tokens, batch_size):
# Shape (batch_size, timesteps, 50)
batched_tensor = batch_to_ids(batch)
# NOTE: This device check is for when a user calls this method having
# already placed the model on a device. If this is called in the
# constructor, it will probably happen on the CPU. This isn't too bad,
# because it's only a few convolutions and will likely be very fast.
if device >= 0:
batched_tensor = batched_tensor.cuda(device)
output = self._token_embedder(batched_tensor)
token_embedding = output["token_embedding"]
mask = output["mask"]
token_embedding, _ = remove_sentence_boundaries(token_embedding, mask)
all_embeddings.append(token_embedding.view(-1, token_embedding.size(-1)))
full_embedding = torch.cat(all_embeddings, 0)
# We might have some trailing embeddings from padding in the batch, so
# we clip the embedding and lookup to the right size.
full_embedding = full_embedding[:len(tokens), :]
embedding = full_embedding[2:len(tokens), :]
vocab_size, embedding_dim = list(embedding.size())
from allennlp.modules.token_embedders import Embedding # type: ignore
self._bos_embedding = full_embedding[0, :]
self._eos_embedding = full_embedding[1, :]
self._word_embedding = Embedding(vocab_size, # type: ignore
embedding_dim,
weight=embedding.data,
trainable=self._requires_grad,
padding_index=0) |
Performs a normalization that is very similar to that done by the normalization functions in
SQuAD and TriviaQA.
This involves splitting and rejoining the text, and could be a somewhat expensive operation. | def normalize_text(text: str) -> str:
"""
Performs a normalization that is very similar to that done by the normalization functions in
SQuAD and TriviaQA.
This involves splitting and rejoining the text, and could be a somewhat expensive operation.
"""
return ' '.join([token
for token in text.lower().strip(STRIPPED_CHARACTERS).split()
if token not in IGNORED_TOKENS]) |
Converts a character span from a passage into the corresponding token span in the tokenized
version of the passage. If you pass in a character span that does not correspond to complete
tokens in the tokenized version, we'll do our best, but the behavior is officially undefined.
We return an error flag in this case, and have some debug logging so you can figure out the
cause of this issue (in SQuAD, these are mostly either tokenization problems or annotation
problems; there's a fair amount of both).
The basic outline of this method is to find the token span that has the same offsets as the
input character span. If the tokenizer tokenized the passage correctly and has matching
offsets, this is easy. We try to be a little smart about cases where they don't match exactly,
but mostly just find the closest thing we can.
The returned ``(begin, end)`` indices are `inclusive` for both ``begin`` and ``end``.
So, for example, ``(2, 2)`` is the one word span beginning at token index 2, ``(3, 4)`` is the
two-word span beginning at token index 3, and so on.
Returns
-------
token_span : ``Tuple[int, int]``
`Inclusive` span start and end token indices that match as closely as possible to the input
character spans.
error : ``bool``
Whether the token spans match the input character spans exactly. If this is ``False``, it
means there was an error in either the tokenization or the annotated character span. | def char_span_to_token_span(token_offsets: List[Tuple[int, int]],
character_span: Tuple[int, int]) -> Tuple[Tuple[int, int], bool]:
"""
Converts a character span from a passage into the corresponding token span in the tokenized
version of the passage. If you pass in a character span that does not correspond to complete
tokens in the tokenized version, we'll do our best, but the behavior is officially undefined.
We return an error flag in this case, and have some debug logging so you can figure out the
cause of this issue (in SQuAD, these are mostly either tokenization problems or annotation
problems; there's a fair amount of both).
The basic outline of this method is to find the token span that has the same offsets as the
input character span. If the tokenizer tokenized the passage correctly and has matching
offsets, this is easy. We try to be a little smart about cases where they don't match exactly,
but mostly just find the closest thing we can.
The returned ``(begin, end)`` indices are `inclusive` for both ``begin`` and ``end``.
So, for example, ``(2, 2)`` is the one word span beginning at token index 2, ``(3, 4)`` is the
two-word span beginning at token index 3, and so on.
Returns
-------
token_span : ``Tuple[int, int]``
`Inclusive` span start and end token indices that match as closely as possible to the input
character spans.
error : ``bool``
Whether the token spans match the input character spans exactly. If this is ``False``, it
means there was an error in either the tokenization or the annotated character span.
"""
# We have token offsets into the passage from the tokenizer; we _should_ be able to just find
# the tokens that have the same offsets as our span.
error = False
start_index = 0
while start_index < len(token_offsets) and token_offsets[start_index][0] < character_span[0]:
start_index += 1
# start_index should now be pointing at the span start index.
if token_offsets[start_index][0] > character_span[0]:
# In this case, a tokenization or labeling issue made us go too far - the character span
# we're looking for actually starts in the previous token. We'll back up one.
logger.debug("Bad labelling or tokenization - start offset doesn't match")
start_index -= 1
if token_offsets[start_index][0] != character_span[0]:
error = True
end_index = start_index
while end_index < len(token_offsets) and token_offsets[end_index][1] < character_span[1]:
end_index += 1
if end_index == start_index and token_offsets[end_index][1] > character_span[1]:
# Looks like there was a token that should have been split, like "1854-1855", where the
# answer is "1854". We can't do much in this case, except keep the answer as the whole
# token.
logger.debug("Bad tokenization - end offset doesn't match")
elif token_offsets[end_index][1] > character_span[1]:
# This is a case where the given answer span is more than one token, and the last token is
# cut off for some reason, like "split with Luckett and Rober", when the original passage
# said "split with Luckett and Roberson". In this case, we'll just keep the end index
# where it is, and assume the intent was to mark the whole token.
logger.debug("Bad labelling or tokenization - end offset doesn't match")
if token_offsets[end_index][1] != character_span[1]:
error = True
return (start_index, end_index), error |
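For example, with hypothetical offsets for the tokens ['The', 'dog', 'barked'] in the passage 'The dog barked':
offsets = [(0, 3), (4, 7), (8, 14)]
token_span, error = char_span_to_token_span(offsets, (4, 14))  # characters spanning 'dog barked'
print(token_span, error)  # (1, 2) False - inclusive token indices, no mismatch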
Finds a list of token spans in ``passage_tokens`` that match the given ``answer_texts``. This
tries to find all spans that would evaluate to correct given the SQuAD and TriviaQA official
evaluation scripts, which do some normalization of the input text.
Note that this could return duplicate spans! The caller is expected to be able to handle
possible duplicates (as already happens in the SQuAD dev set, for instance). | def find_valid_answer_spans(passage_tokens: List[Token],
answer_texts: List[str]) -> List[Tuple[int, int]]:
"""
Finds a list of token spans in ``passage_tokens`` that match the given ``answer_texts``. This
tries to find all spans that would evaluate to correct given the SQuAD and TriviaQA official
evaluation scripts, which do some normalization of the input text.
Note that this could return duplicate spans! The caller is expected to be able to handle
possible duplicates (as already happens in the SQuAD dev set, for instance).
"""
normalized_tokens = [token.text.lower().strip(STRIPPED_CHARACTERS) for token in passage_tokens]
# Because there could be many `answer_texts`, we'll do the most expensive pre-processing
# step once. This gives us a map from tokens to the position in the passage they appear.
word_positions: Dict[str, List[int]] = defaultdict(list)
for i, token in enumerate(normalized_tokens):
word_positions[token].append(i)
spans = []
for answer_text in answer_texts:
# For each answer, we'll first find all valid start positions in the passage. Then
# we'll grow each span to the same length as the number of answer tokens, and see if we
# have a match. We're a little tricky as we grow the span, skipping words that are
# already pruned from the normalized answer text, and stopping early if we don't match.
answer_tokens = answer_text.lower().strip(STRIPPED_CHARACTERS).split()
num_answer_tokens = len(answer_tokens)
for span_start in word_positions[answer_tokens[0]]:
span_end = span_start # span_end is _inclusive_
answer_index = 1
while answer_index < num_answer_tokens and span_end + 1 < len(normalized_tokens):
token = normalized_tokens[span_end + 1]
if answer_tokens[answer_index] == token:
answer_index += 1
span_end += 1
elif token in IGNORED_TOKENS:
span_end += 1
else:
break
if num_answer_tokens == answer_index:
spans.append((span_start, span_end))
return spans |
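A toy example (the passage and answer are made up), showing the inclusive spans returned; as above, this assumes the usual allennlp Token class.
from allennlp.data.tokenizers import Token
passage = [Token(t) for t in ['Deep', 'learning', 'is', 'a', 'subfield', 'of', 'machine', 'learning', '.']]
print(find_valid_answer_spans(passage, ['machine learning']))  # [(6, 7)]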
Converts a question, a passage, and an optional answer (or answers) to an ``Instance`` for use
in a reading comprehension model.
Creates an ``Instance`` with at least these fields: ``question`` and ``passage``, both
``TextFields``; and ``metadata``, a ``MetadataField``. Additionally, if both ``answer_texts``
and ``char_span_starts`` are given, the ``Instance`` has ``span_start`` and ``span_end``
fields, which are both ``IndexFields``.
Parameters
----------
question_tokens : ``List[Token]``
An already-tokenized question.
passage_tokens : ``List[Token]``
An already-tokenized passage that contains the answer to the given question.
token_indexers : ``Dict[str, TokenIndexer]``
Determines how the question and passage ``TextFields`` will be converted into tensors that
get input to a model. See :class:`TokenIndexer`.
passage_text : ``str``
The original passage text. We need this so that we can recover the actual span from the
original passage that the model predicts as the answer to the question. This is used in
official evaluation scripts.
token_spans : ``List[Tuple[int, int]]``, optional
Indices into ``passage_tokens`` to use as the answer to the question for training. This is
a list because there might be several possible correct answer spans in the passage.
Currently, we just select the most frequent span in this list (i.e., SQuAD has multiple
annotations on the dev set; this will select the span that the most annotators gave as
correct).
answer_texts : ``List[str]``, optional
All valid answer strings for the given question. In SQuAD, e.g., the training set has
exactly one answer per question, but the dev and test sets have several. TriviaQA has many
possible answers, which are the aliases for the known correct entity. This is put into the
metadata for use with official evaluation scripts, but not used anywhere else.
additional_metadata : ``Dict[str, Any]``, optional
The constructed ``metadata`` field will by default contain ``original_passage``,
``token_offsets``, ``question_tokens``, ``passage_tokens``, and ``answer_texts`` keys. If
you want any other metadata to be associated with each instance, you can pass that in here.
This dictionary will get added to the ``metadata`` dictionary we already construct. | def make_reading_comprehension_instance(question_tokens: List[Token],
passage_tokens: List[Token],
token_indexers: Dict[str, TokenIndexer],
passage_text: str,
token_spans: List[Tuple[int, int]] = None,
answer_texts: List[str] = None,
additional_metadata: Dict[str, Any] = None) -> Instance:
"""
Converts a question, a passage, and an optional answer (or answers) to an ``Instance`` for use
in a reading comprehension model.
Creates an ``Instance`` with at least these fields: ``question`` and ``passage``, both
``TextFields``; and ``metadata``, a ``MetadataField``. Additionally, if both ``answer_texts``
and ``char_span_starts`` are given, the ``Instance`` has ``span_start`` and ``span_end``
fields, which are both ``IndexFields``.
Parameters
----------
question_tokens : ``List[Token]``
An already-tokenized question.
passage_tokens : ``List[Token]``
An already-tokenized passage that contains the answer to the given question.
token_indexers : ``Dict[str, TokenIndexer]``
Determines how the question and passage ``TextFields`` will be converted into tensors that
get input to a model. See :class:`TokenIndexer`.
passage_text : ``str``
The original passage text. We need this so that we can recover the actual span from the
original passage that the model predicts as the answer to the question. This is used in
official evaluation scripts.
token_spans : ``List[Tuple[int, int]]``, optional
Indices into ``passage_tokens`` to use as the answer to the question for training. This is
a list because there might be several possible correct answer spans in the passage.
Currently, we just select the most frequent span in this list (i.e., SQuAD has multiple
annotations on the dev set; this will select the span that the most annotators gave as
correct).
answer_texts : ``List[str]``, optional
All valid answer strings for the given question. In SQuAD, e.g., the training set has
exactly one answer per question, but the dev and test sets have several. TriviaQA has many
possible answers, which are the aliases for the known correct entity. This is put into the
metadata for use with official evaluation scripts, but not used anywhere else.
additional_metadata : ``Dict[str, Any]``, optional
The constructed ``metadata`` field will by default contain ``original_passage``,
``token_offsets``, ``question_tokens``, ``passage_tokens``, and ``answer_texts`` keys. If
you want any other metadata to be associated with each instance, you can pass that in here.
This dictionary will get added to the ``metadata`` dictionary we already construct.
"""
additional_metadata = additional_metadata or {}
fields: Dict[str, Field] = {}
passage_offsets = [(token.idx, token.idx + len(token.text)) for token in passage_tokens]
# This is separate so we can reference it later with a known type.
passage_field = TextField(passage_tokens, token_indexers)
fields['passage'] = passage_field
fields['question'] = TextField(question_tokens, token_indexers)
metadata = {'original_passage': passage_text, 'token_offsets': passage_offsets,
'question_tokens': [token.text for token in question_tokens],
'passage_tokens': [token.text for token in passage_tokens], }
if answer_texts:
metadata['answer_texts'] = answer_texts
if token_spans:
# There may be multiple answer annotations, so we pick the one that occurs the most. This
# only matters on the SQuAD dev set, and it means our computed metrics ("start_acc",
# "end_acc", and "span_acc") aren't quite the same as the official metrics, which look at
# all of the annotations. This is why we have a separate official SQuAD metric calculation
# (the "em" and "f1" metrics use the official script).
candidate_answers: Counter = Counter()
for span_start, span_end in token_spans:
candidate_answers[(span_start, span_end)] += 1
span_start, span_end = candidate_answers.most_common(1)[0][0]
fields['span_start'] = IndexField(span_start, passage_field)
fields['span_end'] = IndexField(span_end, passage_field)
metadata.update(additional_metadata)
fields['metadata'] = MetadataField(metadata)
return Instance(fields) |
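A hedged usage sketch: the passage, question, and token span below are made up, and the span (6, 6) assumes the tokenizer keeps 'PyTorch' as token index 6; the tokenizer and indexer choices are only for illustration.
from allennlp.data.token_indexers import SingleIdTokenIndexer
from allennlp.data.tokenizers import WordTokenizer
tokenizer = WordTokenizer()
passage_text = 'AllenNLP is a library built on PyTorch .'
passage_tokens = tokenizer.tokenize(passage_text)
question_tokens = tokenizer.tokenize('What is AllenNLP built on ?')
instance = make_reading_comprehension_instance(question_tokens,
                                               passage_tokens,
                                               {'tokens': SingleIdTokenIndexer()},
                                               passage_text,
                                               token_spans=[(6, 6)],
                                               answer_texts=['PyTorch'])
print(sorted(instance.fields.keys()))  # ['metadata', 'passage', 'question', 'span_end', 'span_start']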
Converts a question, a passage, and an optional answer (or answers) to an ``Instance`` for use
in a reading comprehension model.
Creates an ``Instance`` with at least these fields: ``question`` and ``passage``, both
``TextFields``; and ``metadata``, a ``MetadataField``. Additionally, if both ``answer_texts``
and ``char_span_starts`` are given, the ``Instance`` has ``span_start`` and ``span_end``
fields, which are both ``IndexFields``.
Parameters
----------
question_list_tokens : ``List[List[Token]]``
An already-tokenized list of questions. Each dialog has multiple questions.
passage_tokens : ``List[Token]``
An already-tokenized passage that contains the answer to the given question.
token_indexers : ``Dict[str, TokenIndexer]``
Determines how the question and passage ``TextFields`` will be converted into tensors that
get input to a model. See :class:`TokenIndexer`.
passage_text : ``str``
The original passage text. We need this so that we can recover the actual span from the
original passage that the model predicts as the answer to the question. This is used in
official evaluation scripts.
token_span_lists : ``List[List[Tuple[int, int]]]``, optional
Indices into ``passage_tokens`` to use as the answer to the question for training. This is
a list of lists, first because there are multiple questions per dialog, and second
because there might be several possible correct answer spans in the passage.
Currently, we just select the last span in this list (i.e., QuAC has multiple
annotations on the dev set; this will select the last span, which was given by the original annotator).
yesno_list : ``List[int]``
List of the affirmation bit for each question answer pairs.
followup_list : ``List[int]``
List of the continuation bit for each question answer pairs.
num_context_answers : ``int``, optional
How many answers to encode into the passage.
additional_metadata : ``Dict[str, Any]``, optional
The constructed ``metadata`` field will by default contain ``original_passage``,
``token_offsets``, ``question_tokens``, ``passage_tokens``, and ``answer_texts`` keys. If
you want any other metadata to be associated with each instance, you can pass that in here.
This dictionary will get added to the ``metadata`` dictionary we already construct. | def make_reading_comprehension_instance_quac(question_list_tokens: List[List[Token]],
passage_tokens: List[Token],
token_indexers: Dict[str, TokenIndexer],
passage_text: str,
token_span_lists: List[List[Tuple[int, int]]] = None,
yesno_list: List[int] = None,
followup_list: List[int] = None,
additional_metadata: Dict[str, Any] = None,
num_context_answers: int = 0) -> Instance:
"""
Converts a question, a passage, and an optional answer (or answers) to an ``Instance`` for use
in a reading comprehension model.
Creates an ``Instance`` with at least these fields: ``question`` and ``passage``, both
``TextFields``; and ``metadata``, a ``MetadataField``. Additionally, if both ``answer_texts``
and ``char_span_starts`` are given, the ``Instance`` has ``span_start`` and ``span_end``
fields, which are both ``IndexFields``.
Parameters
----------
question_list_tokens : ``List[List[Token]]``
An already-tokenized list of questions. Each dialog has multiple questions.
passage_tokens : ``List[Token]``
An already-tokenized passage that contains the answer to the given question.
token_indexers : ``Dict[str, TokenIndexer]``
Determines how the question and passage ``TextFields`` will be converted into tensors that
get input to a model. See :class:`TokenIndexer`.
passage_text : ``str``
The original passage text. We need this so that we can recover the actual span from the
original passage that the model predicts as the answer to the question. This is used in
official evaluation scripts.
token_span_lists : ``List[List[Tuple[int, int]]]``, optional
Indices into ``passage_tokens`` to use as the answer to the question for training. This is
a list of lists, first because there are multiple questions per dialog, and second
because there might be several possible correct answer spans in the passage.
Currently, we just select the last span in this list (i.e., QuAC has multiple
annotations on the dev set; this will select the last span, which was given by the original annotator).
yesno_list : ``List[int]``
List of the affirmation bit for each question answer pairs.
followup_list : ``List[int]``
List of the continuation bit for each question answer pairs.
num_context_answers : ``int``, optional
How many answers to encode into the passage.
additional_metadata : ``Dict[str, Any]``, optional
The constructed ``metadata`` field will by default contain ``original_passage``,
``token_offsets``, ``question_tokens``, ``passage_tokens``, and ``answer_texts`` keys. If
you want any other metadata to be associated with each instance, you can pass that in here.
This dictionary will get added to the ``metadata`` dictionary we already construct.
"""
additional_metadata = additional_metadata or {}
fields: Dict[str, Field] = {}
passage_offsets = [(token.idx, token.idx + len(token.text)) for token in passage_tokens]
# This is separate so we can reference it later with a known type.
passage_field = TextField(passage_tokens, token_indexers)
fields['passage'] = passage_field
fields['question'] = ListField([TextField(q_tokens, token_indexers) for q_tokens in question_list_tokens])
metadata = {'original_passage': passage_text,
'token_offsets': passage_offsets,
'question_tokens': [[token.text for token in question_tokens] \
for question_tokens in question_list_tokens],
'passage_tokens': [token.text for token in passage_tokens], }
p1_answer_marker_list: List[Field] = []
p2_answer_marker_list: List[Field] = []
p3_answer_marker_list: List[Field] = []
def get_tag(i, i_name):
# Generate a tag to mark previous answer span in the passage.
return "<{0:d}_{1:s}>".format(i, i_name)
def mark_tag(span_start, span_end, passage_tags, prev_answer_distance):
try:
assert span_start >= 0
assert span_end >= 0
        except AssertionError:
raise ValueError("Previous {0:d}th answer span should have been updated!".format(prev_answer_distance))
# Modify "tags" to mark previous answer span.
if span_start == span_end:
passage_tags[prev_answer_distance][span_start] = get_tag(prev_answer_distance, "")
else:
passage_tags[prev_answer_distance][span_start] = get_tag(prev_answer_distance, "start")
passage_tags[prev_answer_distance][span_end] = get_tag(prev_answer_distance, "end")
for passage_index in range(span_start + 1, span_end):
passage_tags[prev_answer_distance][passage_index] = get_tag(prev_answer_distance, "in")
if token_span_lists:
span_start_list: List[Field] = []
span_end_list: List[Field] = []
p1_span_start, p1_span_end, p2_span_start = -1, -1, -1
p2_span_end, p3_span_start, p3_span_end = -1, -1, -1
        # Loop over the answer spans for each question in the dialog.
for question_index, answer_span_lists in enumerate(token_span_lists):
span_start, span_end = answer_span_lists[-1] # Last one is the original answer
span_start_list.append(IndexField(span_start, passage_field))
span_end_list.append(IndexField(span_end, passage_field))
prev_answer_marker_lists = [["O"] * len(passage_tokens), ["O"] * len(passage_tokens),
["O"] * len(passage_tokens), ["O"] * len(passage_tokens)]
if question_index > 0 and num_context_answers > 0:
mark_tag(p1_span_start, p1_span_end, prev_answer_marker_lists, 1)
if question_index > 1 and num_context_answers > 1:
mark_tag(p2_span_start, p2_span_end, prev_answer_marker_lists, 2)
if question_index > 2 and num_context_answers > 2:
mark_tag(p3_span_start, p3_span_end, prev_answer_marker_lists, 3)
p3_span_start = p2_span_start
p3_span_end = p2_span_end
p2_span_start = p1_span_start
p2_span_end = p1_span_end
p1_span_start = span_start
p1_span_end = span_end
if num_context_answers > 2:
p3_answer_marker_list.append(SequenceLabelField(prev_answer_marker_lists[3],
passage_field,
label_namespace="answer_tags"))
if num_context_answers > 1:
p2_answer_marker_list.append(SequenceLabelField(prev_answer_marker_lists[2],
passage_field,
label_namespace="answer_tags"))
if num_context_answers > 0:
p1_answer_marker_list.append(SequenceLabelField(prev_answer_marker_lists[1],
passage_field,
label_namespace="answer_tags"))
fields['span_start'] = ListField(span_start_list)
fields['span_end'] = ListField(span_end_list)
if num_context_answers > 0:
fields['p1_answer_marker'] = ListField(p1_answer_marker_list)
if num_context_answers > 1:
fields['p2_answer_marker'] = ListField(p2_answer_marker_list)
if num_context_answers > 2:
fields['p3_answer_marker'] = ListField(p3_answer_marker_list)
fields['yesno_list'] = ListField( \
[LabelField(yesno, label_namespace="yesno_labels") for yesno in yesno_list])
fields['followup_list'] = ListField([LabelField(followup, label_namespace="followup_labels") \
for followup in followup_list])
metadata.update(additional_metadata)
fields['metadata'] = MetadataField(metadata)
return Instance(fields) |
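A minimal usage sketch follows. The import paths and the string labels for the yes/no and followup bits ("x", "m") are assumptions based on how AllenNLP's QuAC reader typically calls this function, and may differ between versions; token ``idx`` values must be set so that ``token_offsets`` can be computed.

from allennlp.data.tokenizers import Token
from allennlp.data.token_indexers import SingleIdTokenIndexer

passage_text = "Harry Potter was written by J. K. Rowling ."
# Build passage tokens with character offsets, since the function reads ``token.idx``.
passage_tokens = []
offset = 0
for word in passage_text.split():
    passage_tokens.append(Token(word, idx=offset))
    offset += len(word) + 1
question_list_tokens = [[Token(word) for word in "Who wrote it ?".split()]]
instance = make_reading_comprehension_instance_quac(
        question_list_tokens,
        passage_tokens,
        {"tokens": SingleIdTokenIndexer()},
        passage_text,
        token_span_lists=[[(5, 7)]],   # token indices of "J. K. Rowling"
        yesno_list=["x"],
        followup_list=["m"],
        num_context_answers=0)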
Process a list of reference answers.
If half or more of the reference answers are "CANNOTANSWER", take that as the gold answer.
Otherwise, return answers that are not "CANNOTANSWER". | def handle_cannot(reference_answers: List[str]):
"""
Process a list of reference answers.
    If half or more of the reference answers are "CANNOTANSWER", take that as the gold answer.
Otherwise, return answers that are not "CANNOTANSWER".
"""
num_cannot = 0
num_spans = 0
for ref in reference_answers:
if ref == 'CANNOTANSWER':
num_cannot += 1
else:
num_spans += 1
if num_cannot >= num_spans:
reference_answers = ['CANNOTANSWER']
else:
reference_answers = [x for x in reference_answers if x != 'CANNOTANSWER']
return reference_answers |
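For example (made-up reference strings, just to illustrate the majority rule):

# Two of three references are CANNOTANSWER, so the gold answer collapses to it.
handle_cannot(['CANNOTANSWER', 'CANNOTANSWER', 'in 1997'])   # ['CANNOTANSWER']
# A single CANNOTANSWER among two span answers is dropped.
handle_cannot(['CANNOTANSWER', 'in 1997', 'in 1997'])        # ['in 1997', 'in 1997']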
This acts the same as the static method ``BidirectionalAttentionFlow.get_best_span()``
in ``allennlp/models/reading_comprehension/bidaf.py``. We keep it here so that users can
directly import this function without the class.
We call the inputs "logits" - they could either be unnormalized logits or normalized log
probabilities. A log_softmax operation is a constant shifting of the entire logit
vector, so taking an argmax over either one gives the same result. | def get_best_span(span_start_logits: torch.Tensor, span_end_logits: torch.Tensor) -> torch.Tensor:
"""
This acts the same as the static method ``BidirectionalAttentionFlow.get_best_span()``
in ``allennlp/models/reading_comprehension/bidaf.py``. We keep it here so that users can
directly import this function without the class.
We call the inputs "logits" - they could either be unnormalized logits or normalized log
probabilities. A log_softmax operation is a constant shifting of the entire logit
vector, so taking an argmax over either one gives the same result.
"""
if span_start_logits.dim() != 2 or span_end_logits.dim() != 2:
raise ValueError("Input shapes must be (batch_size, passage_length)")
batch_size, passage_length = span_start_logits.size()
device = span_start_logits.device
# (batch_size, passage_length, passage_length)
span_log_probs = span_start_logits.unsqueeze(2) + span_end_logits.unsqueeze(1)
# Only the upper triangle of the span matrix is valid; the lower triangle has entries where
# the span ends before it starts.
span_log_mask = torch.triu(torch.ones((passage_length, passage_length),
device=device)).log()
valid_span_log_probs = span_log_probs + span_log_mask
# Here we take the span matrix and flatten it, then find the best span using argmax. We
# can recover the start and end indices from this flattened list using simple modular
# arithmetic.
# (batch_size, passage_length * passage_length)
best_spans = valid_span_log_probs.view(batch_size, -1).argmax(-1)
span_start_indices = best_spans // passage_length
span_end_indices = best_spans % passage_length
return torch.stack([span_start_indices, span_end_indices], dim=-1) |
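A small self-contained check of the argmax behaviour, with toy logits for a passage of length 4:

import torch

start_logits = torch.tensor([[0.1, 2.0, 0.3, 0.2]])   # start distribution peaks at index 1
end_logits = torch.tensor([[0.2, 0.1, 3.0, 0.4]])     # end distribution peaks at index 2
get_best_span(start_logits, end_logits)               # tensor([[1, 2]])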
Spacy needs to do batch processing, or it can be really slow. This method lets you take
advantage of that if you want. The default implementation is to just iterate over the sentences
and call ``split_words``, but the ``SpacyWordSplitter`` will actually do batched
processing. | def batch_split_words(self, sentences: List[str]) -> List[List[Token]]:
"""
Spacy needs to do batch processing, or it can be really slow. This method lets you take
        advantage of that if you want. The default implementation is to just iterate over the sentences
and call ``split_words``, but the ``SpacyWordSplitter`` will actually do batched
processing.
"""
return [self.split_words(sentence) for sentence in sentences] |
Return a new BeamSearch instance that's like this one but with the specified constraint. | def constrained_to(self, initial_sequence: torch.Tensor, keep_beam_details: bool = True) -> 'BeamSearch':
"""
Return a new BeamSearch instance that's like this one but with the specified constraint.
"""
return BeamSearch(self._beam_size, self._per_node_beam_size, initial_sequence, keep_beam_details) |
Parameters
----------
num_steps : ``int``
How many steps should we take in our search? This is an upper bound, as it's possible
for the search to run out of valid actions before hitting this number, or for all
states on the beam to finish.
initial_state : ``StateType``
The starting state of our search. This is assumed to be `batched`, and our beam search
is batch-aware - we'll keep ``beam_size`` states around for each instance in the batch.
transition_function : ``TransitionFunction``
The ``TransitionFunction`` object that defines and scores transitions from one state to the
next.
keep_final_unfinished_states : ``bool``, optional (default=True)
If we run out of steps before a state is "finished", should we return that state in our
search results?
Returns
-------
best_states : ``Dict[int, List[StateType]]``
This is a mapping from batch index to the top states for that instance. | def search(self,
num_steps: int,
initial_state: StateType,
transition_function: TransitionFunction,
keep_final_unfinished_states: bool = True) -> Dict[int, List[StateType]]:
"""
Parameters
----------
num_steps : ``int``
How many steps should we take in our search? This is an upper bound, as it's possible
for the search to run out of valid actions before hitting this number, or for all
states on the beam to finish.
initial_state : ``StateType``
The starting state of our search. This is assumed to be `batched`, and our beam search
is batch-aware - we'll keep ``beam_size`` states around for each instance in the batch.
transition_function : ``TransitionFunction``
The ``TransitionFunction`` object that defines and scores transitions from one state to the
next.
keep_final_unfinished_states : ``bool``, optional (default=True)
If we run out of steps before a state is "finished", should we return that state in our
search results?
Returns
-------
best_states : ``Dict[int, List[StateType]]``
This is a mapping from batch index to the top states for that instance.
"""
finished_states: Dict[int, List[StateType]] = defaultdict(list)
states = [initial_state]
step_num = 1
# Erase stored beams, if we're tracking them.
if self.beam_snapshots is not None:
self.beam_snapshots = defaultdict(list)
while states and step_num <= num_steps:
next_states: Dict[int, List[StateType]] = defaultdict(list)
grouped_state = states[0].combine_states(states)
if self._allowed_transitions:
# We were provided an initial sequence, so we need to check
# if the current sequence is still constrained.
key = tuple(grouped_state.action_history[0])
if key in self._allowed_transitions:
# We're still in the initial_sequence, so our hand is forced.
allowed_actions = [self._allowed_transitions[key]]
else:
# We've gone past the end of the initial sequence, so no constraint.
allowed_actions = None
else:
# No initial sequence was provided, so all actions are allowed.
allowed_actions = None
for next_state in transition_function.take_step(grouped_state,
max_actions=self._per_node_beam_size,
allowed_actions=allowed_actions):
# NOTE: we're doing state.batch_indices[0] here (and similar things below),
# hard-coding a group size of 1. But, our use of `next_state.is_finished()`
# already checks for that, as it crashes if the group size is not 1.
batch_index = next_state.batch_indices[0]
if next_state.is_finished():
finished_states[batch_index].append(next_state)
else:
if step_num == num_steps and keep_final_unfinished_states:
finished_states[batch_index].append(next_state)
next_states[batch_index].append(next_state)
states = []
for batch_index, batch_states in next_states.items():
# The states from the generator are already sorted, so we can just take the first
# ones here, without an additional sort.
states.extend(batch_states[:self._beam_size])
if self.beam_snapshots is not None:
# Add to beams
self.beam_snapshots[batch_index].append(
[(state.score[0].item(), state.action_history[0])
for state in batch_states]
)
step_num += 1
# Add finished states to the stored beams as well.
if self.beam_snapshots is not None:
for batch_index, states in finished_states.items():
for state in states:
score = state.score[0].item()
action_history = state.action_history[0]
while len(self.beam_snapshots[batch_index]) < len(action_history):
self.beam_snapshots[batch_index].append([])
self.beam_snapshots[batch_index][len(action_history) - 1].append((score, action_history))
best_states: Dict[int, List[StateType]] = {}
for batch_index, batch_states in finished_states.items():
# The time this sort takes is pretty negligible, no particular need to optimize this
# yet. Maybe with a larger beam size...
finished_to_sort = [(-state.score[0].item(), state) for state in batch_states]
finished_to_sort.sort(key=lambda x: x[0])
best_states[batch_index] = [state[1] for state in finished_to_sort[:self._beam_size]]
return best_states |
Lower text and remove punctuation, articles and extra whitespace. | def _normalize_answer(text: str) -> str:
"""Lower text and remove punctuation, articles and extra whitespace."""
parts = [_white_space_fix(_remove_articles(_normalize_number(_remove_punc(_lower(token)))))
for token in _tokenize(text)]
parts = [part for part in parts if part.strip()]
normalized = ' '.join(parts).strip()
return normalized |
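For example, assuming the module-private helpers (``_tokenize``, ``_lower``, ``_remove_punc``, and so on) behave as in the official DROP evaluation script:

_normalize_answer("The Quick, Brown Fox!")   # 'quick brown fox'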
Takes gold and predicted answer sets, finds a greedy 1-1 alignment
between them, and returns the maximum metric value over all the answers.
"""
    Takes gold and predicted answer sets, finds a greedy 1-1 alignment
    between them, and returns the maximum metric value over all the answers.
"""
f1_scores = []
for gold_index, gold_item in enumerate(gold):
max_f1 = 0.0
max_index = None
best_alignment: Tuple[Set[str], Set[str]] = (set(), set())
if predicted:
for pred_index, pred_item in enumerate(predicted):
current_f1 = _compute_f1(pred_item, gold_item)
if current_f1 >= max_f1:
best_alignment = (gold_item, pred_item)
max_f1 = current_f1
max_index = pred_index
match_flag = _match_numbers_if_present(*best_alignment)
gold[gold_index] = set()
predicted[max_index] = set()
else:
match_flag = False
if match_flag:
f1_scores.append(max_f1)
else:
f1_scores.append(0.0)
return f1_scores |
Takes a predicted answer and a gold answer (that are both either a string or a list of
strings), and returns exact match and the DROP F1 metric for the prediction. If you are
writing a script for evaluating objects in memory (say, the output of predictions during
validation, or while training), this is the function you want to call, after using
:func:`answer_json_to_strings` when reading the gold answer from the released data file. | def get_metrics(predicted: Union[str, List[str], Tuple[str, ...]],
gold: Union[str, List[str], Tuple[str, ...]]) -> Tuple[float, float]:
"""
Takes a predicted answer and a gold answer (that are both either a string or a list of
strings), and returns exact match and the DROP F1 metric for the prediction. If you are
writing a script for evaluating objects in memory (say, the output of predictions during
validation, or while training), this is the function you want to call, after using
:func:`answer_json_to_strings` when reading the gold answer from the released data file.
"""
predicted_bags = _answer_to_bags(predicted)
gold_bags = _answer_to_bags(gold)
exact_match = 1.0 if predicted_bags[0] == gold_bags[0] else 0
f1_per_bag = _align_bags(predicted_bags[1], gold_bags[1])
f1 = np.mean(f1_per_bag)
f1 = round(f1, 2)
return exact_match, f1 |
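For example (made-up answers; the F1 value follows from the token-bag overlap computed by the helpers above):

get_metrics("J. K. Rowling", ["J. K. Rowling"])   # (1.0, 1.0)
get_metrics("Rowling", ["J. K. Rowling"])         # (0, 0.5): one of three gold tokens matched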
Takes an answer JSON blob from the DROP data release and converts it into strings used for
evaluation. | def answer_json_to_strings(answer: Dict[str, Any]) -> Tuple[Tuple[str, ...], str]:
"""
Takes an answer JSON blob from the DROP data release and converts it into strings used for
evaluation.
"""
if "number" in answer and answer["number"]:
return tuple([str(answer["number"])]), "number"
elif "spans" in answer and answer["spans"]:
return tuple(answer["spans"]), "span" if len(answer["spans"]) == 1 else "spans"
elif "date" in answer:
return tuple(["{0} {1} {2}".format(answer["date"]["day"],
answer["date"]["month"],
answer["date"]["year"])]), "date"
else:
raise ValueError(f"Answer type not found, should be one of number, spans or date at: {json.dumps(answer)}") |
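For example, on the three answer shapes that appear in the DROP data:

answer_json_to_strings({"number": "3", "spans": [], "date": {}})
# (('3',), 'number')
answer_json_to_strings({"number": "", "spans": ["Henry V"], "date": {}})
# (('Henry V',), 'span')
answer_json_to_strings({"number": "", "spans": [], "date": {"day": "5", "month": "May", "year": "1921"}})
# (('5 May 1921',), 'date')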
Takes a prediction file and a gold file and evaluates the predictions for each question in the
gold file. Both files must be json formatted and must have query_id keys, which are used to
match predictions to gold annotations. The gold file is assumed to have the format of the dev
set in the DROP data release. The prediction file must be a JSON dictionary keyed by query id,
where the value is either a JSON dictionary with an "answer" key, or just a string (or list of
strings) that is the answer. | def evaluate_prediction_file(prediction_path: str, gold_path: str) -> Tuple[float, float]:
"""
Takes a prediction file and a gold file and evaluates the predictions for each question in the
gold file. Both files must be json formatted and must have query_id keys, which are used to
match predictions to gold annotations. The gold file is assumed to have the format of the dev
set in the DROP data release. The prediction file must be a JSON dictionary keyed by query id,
where the value is either a JSON dictionary with an "answer" key, or just a string (or list of
strings) that is the answer.
"""
    with open(prediction_path, encoding='utf-8') as prediction_file:
        predicted_answers = json.load(prediction_file)
    with open(gold_path, encoding='utf-8') as gold_file:
        annotations = json.load(gold_file)
return evaluate_json(annotations, predicted_answers) |
When you call this method, we will use this directory to store a cache of already-processed
``Instances`` in every file passed to :func:`read`, serialized as one string-formatted
``Instance`` per line. If the cache file for a given ``file_path`` exists, we read the
``Instances`` from the cache instead of re-processing the data (using
:func:`deserialize_instance`). If the cache file does `not` exist, we will `create` it on
our first pass through the data (using :func:`serialize_instance`).
IMPORTANT CAVEAT: It is the `caller's` responsibility to make sure that this directory is
unique for any combination of code and parameters that you use. That is, if you call this
method, we will use any existing cache files in that directory `regardless of the
parameters you set for this DatasetReader!` If you use our commands, the ``Train`` command
is responsible for calling this method and ensuring that unique parameters correspond to
unique cache directories. If you don't use our commands, that is your responsibility. | def cache_data(self, cache_directory: str) -> None:
"""
When you call this method, we will use this directory to store a cache of already-processed
``Instances`` in every file passed to :func:`read`, serialized as one string-formatted
``Instance`` per line. If the cache file for a given ``file_path`` exists, we read the
``Instances`` from the cache instead of re-processing the data (using
:func:`deserialize_instance`). If the cache file does `not` exist, we will `create` it on
our first pass through the data (using :func:`serialize_instance`).
IMPORTANT CAVEAT: It is the `caller's` responsibility to make sure that this directory is
unique for any combination of code and parameters that you use. That is, if you call this
method, we will use any existing cache files in that directory `regardless of the
parameters you set for this DatasetReader!` If you use our commands, the ``Train`` command
is responsible for calling this method and ensuring that unique parameters correspond to
unique cache directories. If you don't use our commands, that is your responsibility.
"""
self._cache_directory = pathlib.Path(cache_directory)
os.makedirs(self._cache_directory, exist_ok=True) |
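A usage sketch with a hypothetical reader; any ``DatasetReader`` subclass behaves the same way:

reader = MySquadLikeReader(lazy=False)          # hypothetical DatasetReader subclass
reader.cache_data("/tmp/my_reader_cache")       # the directory is created if it does not exist
instances = reader.read("/path/to/train.json")  # the first call writes the cache; later calls reuse it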
Returns an ``Iterable`` containing all the instances
in the specified dataset.
If ``self.lazy`` is False, this calls ``self._read()``,
ensures that the result is a list, then returns the resulting list.
If ``self.lazy`` is True, this returns an object whose
``__iter__`` method calls ``self._read()`` each iteration.
In this case your implementation of ``_read()`` must also be lazy
(that is, not load all instances into memory at once), otherwise
you will get a ``ConfigurationError``.
In either case, the returned ``Iterable`` can be iterated
over multiple times. It's unlikely you want to override this function,
but if you do your result should likewise be repeatedly iterable. | def read(self, file_path: str) -> Iterable[Instance]:
"""
Returns an ``Iterable`` containing all the instances
in the specified dataset.
If ``self.lazy`` is False, this calls ``self._read()``,
ensures that the result is a list, then returns the resulting list.
If ``self.lazy`` is True, this returns an object whose
``__iter__`` method calls ``self._read()`` each iteration.
In this case your implementation of ``_read()`` must also be lazy
(that is, not load all instances into memory at once), otherwise
you will get a ``ConfigurationError``.
In either case, the returned ``Iterable`` can be iterated
over multiple times. It's unlikely you want to override this function,
but if you do your result should likewise be repeatedly iterable.
"""
lazy = getattr(self, 'lazy', None)
if lazy is None:
logger.warning("DatasetReader.lazy is not set, "
"did you forget to call the superclass constructor?")
if self._cache_directory:
cache_file = self._get_cache_location_for_file_path(file_path)
else:
cache_file = None
if lazy:
return _LazyInstances(lambda: self._read(file_path),
cache_file,
self.deserialize_instance,
self.serialize_instance)
else:
# First we read the instances, either from a cache or from the original file.
if cache_file and os.path.exists(cache_file):
instances = self._instances_from_cache_file(cache_file)
else:
instances = self._read(file_path)
# Then some validation.
if not isinstance(instances, list):
instances = [instance for instance in Tqdm.tqdm(instances)]
if not instances:
raise ConfigurationError("No instances were read from the given filepath {}. "
"Is the path correct?".format(file_path))
# And finally we write to the cache if we need to.
if cache_file and not os.path.exists(cache_file):
logger.info(f"Caching instances to {cache_file}")
with open(cache_file, 'w') as cache:
for instance in Tqdm.tqdm(instances):
cache.write(self.serialize_instance(instance) + '\n')
return instances |
Restores a model from a serialization_dir to the last saved checkpoint.
This includes a training state (typically consisting of an epoch count and optimizer state),
which is serialized separately from model parameters. This function should only be used to
continue training - if you wish to load a model for inference/load parts of a model into a new
computation graph, you should use the native Pytorch functions:
`` model.load_state_dict(torch.load("/path/to/model/weights.th"))``
If ``self._serialization_dir`` does not exist or does not contain any checkpointed weights,
this function will do nothing and return empty dicts.
Returns
-------
states: Tuple[Dict[str, Any], Dict[str, Any]]
The model state and the training state. | def restore_checkpoint(self) -> Tuple[Dict[str, Any], Dict[str, Any]]:
"""
Restores a model from a serialization_dir to the last saved checkpoint.
This includes a training state (typically consisting of an epoch count and optimizer state),
which is serialized separately from model parameters. This function should only be used to
continue training - if you wish to load a model for inference/load parts of a model into a new
computation graph, you should use the native Pytorch functions:
`` model.load_state_dict(torch.load("/path/to/model/weights.th"))``
If ``self._serialization_dir`` does not exist or does not contain any checkpointed weights,
this function will do nothing and return empty dicts.
Returns
-------
states: Tuple[Dict[str, Any], Dict[str, Any]]
The model state and the training state.
"""
latest_checkpoint = self.find_latest_checkpoint()
if latest_checkpoint is None:
# No checkpoint to restore, start at 0
return {}, {}
model_path, training_state_path = latest_checkpoint
# Load the parameters onto CPU, then transfer to GPU.
# This avoids potential OOM on GPU for large models that
# load parameters onto GPU then make a new GPU copy into the parameter
# buffer. The GPU transfer happens implicitly in load_state_dict.
model_state = torch.load(model_path, map_location=nn_util.device_mapping(-1))
training_state = torch.load(training_state_path, map_location=nn_util.device_mapping(-1))
return model_state, training_state |
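A sketch of how resuming training might look; ``checkpointer`` and ``model`` are assumed to already exist:

model_state, training_state = checkpointer.restore_checkpoint()
if model_state:                          # empty dicts mean there was nothing to restore
    model.load_state_dict(model_state)   # resume weights from the latest checkpoint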
Prints predicate argument predictions and gold labels for a single verbal
predicate in a sentence to two provided file references.
Parameters
----------
prediction_file : TextIO, required.
A file reference to print predictions to.
gold_file : TextIO, required.
A file reference to print gold labels to.
verb_index : Optional[int], required.
The index of the verbal predicate in the sentence which
the gold labels are the arguments for, or None if the sentence
contains no verbal predicate.
sentence : List[str], required.
The word tokens.
prediction : List[str], required.
The predicted BIO labels.
gold_labels : List[str], required.
The gold BIO labels. | def write_to_conll_eval_file(prediction_file: TextIO,
gold_file: TextIO,
verb_index: Optional[int],
sentence: List[str],
prediction: List[str],
gold_labels: List[str]):
"""
Prints predicate argument predictions and gold labels for a single verbal
predicate in a sentence to two provided file references.
Parameters
----------
prediction_file : TextIO, required.
A file reference to print predictions to.
gold_file : TextIO, required.
A file reference to print gold labels to.
verb_index : Optional[int], required.
The index of the verbal predicate in the sentence which
the gold labels are the arguments for, or None if the sentence
contains no verbal predicate.
sentence : List[str], required.
The word tokens.
prediction : List[str], required.
The predicted BIO labels.
gold_labels : List[str], required.
The gold BIO labels.
"""
verb_only_sentence = ["-"] * len(sentence)
    if verb_index is not None:
verb_only_sentence[verb_index] = sentence[verb_index]
conll_format_predictions = convert_bio_tags_to_conll_format(prediction)
conll_format_gold_labels = convert_bio_tags_to_conll_format(gold_labels)
for word, predicted, gold in zip(verb_only_sentence,
conll_format_predictions,
conll_format_gold_labels):
prediction_file.write(word.ljust(15))
prediction_file.write(predicted.rjust(15) + "\n")
gold_file.write(word.ljust(15))
gold_file.write(gold.rjust(15) + "\n")
prediction_file.write("\n")
gold_file.write("\n") |
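A quick sketch with in-memory buffers (the sentence and labels are made up):

import io

prediction_file, gold_file = io.StringIO(), io.StringIO()
write_to_conll_eval_file(prediction_file,
                         gold_file,
                         verb_index=1,
                         sentence=["John", "ate", "cake"],
                         prediction=["B-ARG0", "B-V", "B-ARG1"],
                         gold_labels=["B-ARG0", "B-V", "B-ARG1"])
print(prediction_file.getvalue())
# The word column shows only the predicate ("-", "ate", "-"); the tag column shows
# the CONLL spans "(ARG0*)", "(V*)", "(ARG1*)".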
Converts BIO formatted SRL tags to the format required for evaluation with the
official CONLL 2005 perl script. Spans are represented by bracketed labels,
with the labels of words inside spans being the same as those outside spans.
Beginning spans always have an opening bracket and a closing asterisk (e.g. "(ARG-1*")
and closing spans always have a closing bracket (e.g. "*)" ). This applies even for
length 1 spans, (e.g "(ARG-0*)").
A full example of the conversion performed:
[B-ARG-1, I-ARG-1, I-ARG-1, I-ARG-1, I-ARG-1, O]
[ "(ARG-1*", "*", "*", "*", "*)", "*"]
Parameters
----------
labels : List[str], required.
A list of BIO tags to convert to the CONLL span based format.
Returns
-------
A list of labels in the CONLL span based format. | def convert_bio_tags_to_conll_format(labels: List[str]):
"""
Converts BIO formatted SRL tags to the format required for evaluation with the
official CONLL 2005 perl script. Spans are represented by bracketed labels,
with the labels of words inside spans being the same as those outside spans.
    Beginning spans always have an opening bracket and a closing asterisk (e.g. "(ARG-1*")
and closing spans always have a closing bracket (e.g. "*)" ). This applies even for
length 1 spans, (e.g "(ARG-0*)").
A full example of the conversion performed:
[B-ARG-1, I-ARG-1, I-ARG-1, I-ARG-1, I-ARG-1, O]
[ "(ARG-1*", "*", "*", "*", "*)", "*"]
Parameters
----------
labels : List[str], required.
A list of BIO tags to convert to the CONLL span based format.
Returns
-------
A list of labels in the CONLL span based format.
"""
sentence_length = len(labels)
conll_labels = []
for i, label in enumerate(labels):
if label == "O":
conll_labels.append("*")
continue
new_label = "*"
# Are we at the beginning of a new span, at the first word in the sentence,
# or is the label different from the previous one? If so, we are seeing a new label.
if label[0] == "B" or i == 0 or label[1:] != labels[i - 1][1:]:
new_label = "(" + label[2:] + new_label
# Are we at the end of the sentence, is the next word a new span, or is the next
# word not in a span? If so, we need to close the label span.
if i == sentence_length - 1 or labels[i + 1][0] == "B" or label[1:] != labels[i + 1][1:]:
new_label = new_label + ")"
conll_labels.append(new_label)
return conll_labels |
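Running the docstring example directly:

convert_bio_tags_to_conll_format(
        ["B-ARG-1", "I-ARG-1", "I-ARG-1", "I-ARG-1", "I-ARG-1", "O"])
# ['(ARG-1*', '*', '*', '*', '*)', '*']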
Given a ``sentence``, returns a list of actions the sentence triggers as an ``agenda``. The
``agenda`` can be used by a parser to guide the decoder. This is a simplistic mapping at this
point, and can be expanded.
Parameters
----------
sentence : ``str``
The sentence for which an agenda will be produced. | def get_agenda_for_sentence(self, sentence: str) -> List[str]:
"""
        Given a ``sentence``, returns a list of actions the sentence triggers as an ``agenda``. The
        ``agenda`` can be used by a parser to guide the decoder. This is a simplistic mapping at
        this point, and can be expanded.
Parameters
----------
sentence : ``str``
The sentence for which an agenda will be produced.
"""
agenda = []
sentence = sentence.lower()
if sentence.startswith("there is a box") or sentence.startswith("there is a tower "):
agenda.append(self.terminal_productions["box_exists"])
elif sentence.startswith("there is a "):
agenda.append(self.terminal_productions["object_exists"])
if "<Set[Box]:bool> -> box_exists" not in agenda:
# These are object filters and do not apply if we have a box_exists at the top.
if "touch" in sentence:
if "top" in sentence:
agenda.append(self.terminal_productions["touch_top"])
elif "bottom" in sentence or "base" in sentence:
agenda.append(self.terminal_productions["touch_bottom"])
elif "corner" in sentence:
agenda.append(self.terminal_productions["touch_corner"])
elif "right" in sentence:
agenda.append(self.terminal_productions["touch_right"])
elif "left" in sentence:
agenda.append(self.terminal_productions["touch_left"])
elif "wall" in sentence or "edge" in sentence:
agenda.append(self.terminal_productions["touch_wall"])
else:
agenda.append(self.terminal_productions["touch_object"])
else:
# The words "top" and "bottom" may be referring to top and bottom blocks in a tower.
if "top" in sentence:
agenda.append(self.terminal_productions["top"])
elif "bottom" in sentence or "base" in sentence:
agenda.append(self.terminal_productions["bottom"])
if " not " in sentence:
agenda.append(self.terminal_productions["negate_filter"])
if " contains " in sentence or " has " in sentence:
agenda.append(self.terminal_productions["all_boxes"])
# This takes care of shapes, colors, top, bottom, big, small etc.
for constant, production in self.terminal_productions.items():
# TODO(pradeep): Deal with constant names with underscores.
if "top" in constant or "bottom" in constant:
# We already dealt with top, bottom, touch_top and touch_bottom above.
continue
if constant in sentence:
if "<Set[Object]:Set[Object]> ->" in production and "<Set[Box]:bool> -> box_exists" in agenda:
if constant in ["square", "circle", "triangle"]:
agenda.append(self.terminal_productions[f"shape_{constant}"])
elif constant in ["yellow", "blue", "black"]:
agenda.append(self.terminal_productions[f"color_{constant}"])
else:
continue
else:
agenda.append(production)
# TODO (pradeep): Rules for "member_*" productions ("tower" or "box" followed by a color,
# shape or number...)
number_productions = self._get_number_productions(sentence)
for production in number_productions:
agenda.append(production)
if not agenda:
# None of the rules above was triggered!
if "box" in sentence:
agenda.append(self.terminal_productions["all_boxes"])
else:
agenda.append(self.terminal_productions["all_objects"])
return agenda |
Gathers all the numbers in the sentence, and returns productions that lead to them. | def _get_number_productions(sentence: str) -> List[str]:
"""
Gathers all the numbers in the sentence, and returns productions that lead to them.
"""
# The mapping here is very simple and limited, which also shouldn't be a problem
# because numbers seem to be represented fairly regularly.
number_strings = {"one": "1", "two": "2", "three": "3", "four": "4", "five": "5", "six":
"6", "seven": "7", "eight": "8", "nine": "9", "ten": "10"}
number_productions = []
tokens = sentence.split()
numbers = number_strings.values()
for token in tokens:
if token in numbers:
number_productions.append(f"int -> {token}")
elif token in number_strings:
number_productions.append(f"int -> {number_strings[token]}")
return number_productions |
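For example (called directly as written above; inside the class it is a static helper):

_get_number_productions("there are two boxes with 3 yellow squares")
# ['int -> 2', 'int -> 3']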
Filters the set of objects, and returns those objects whose color is the most frequent
color in the initial set of objects, if the highest frequency is greater than 1, or an
empty set otherwise.
This is an unusual name for what the method does, but just as ``blue`` filters objects to
those that are blue, this filters objects to those that are of the same color. | def same_color(self, objects: Set[Object]) -> Set[Object]:
"""
Filters the set of objects, and returns those objects whose color is the most frequent
color in the initial set of objects, if the highest frequency is greater than 1, or an
empty set otherwise.
This is an unusual name for what the method does, but just as ``blue`` filters objects to
those that are blue, this filters objects to those that are of the same color.
"""
return self._get_objects_with_same_attribute(objects, lambda x: x.color) |
Filters the set of objects, and returns those objects whose shape is the most frequent
shape in the initial set of objects, if the highest frequency is greater than 1, or an
empty set otherwise.
This is an unusual name for what the method does, but just as ``triangle`` filters objects
to those that are triangles, this filters objects to those that are of the same shape. | def same_shape(self, objects: Set[Object]) -> Set[Object]:
"""
        Filters the set of objects, and returns those objects whose shape is the most frequent
        shape in the initial set of objects, if the highest frequency is greater than 1, or an
empty set otherwise.
This is an unusual name for what the method does, but just as ``triangle`` filters objects
to those that are triangles, this filters objects to those that are of the same shape.
"""
return self._get_objects_with_same_attribute(objects, lambda x: x.shape) |
Returns all objects that touch the given set of objects. | def touch_object(self, objects: Set[Object]) -> Set[Object]:
"""
Returns all objects that touch the given set of objects.
"""
objects_per_box = self._separate_objects_by_boxes(objects)
return_set = set()
for box, box_objects in objects_per_box.items():
candidate_objects = box.objects
for object_ in box_objects:
for candidate_object in candidate_objects:
if self._objects_touch_each_other(object_, candidate_object):
return_set.add(candidate_object)
return return_set |
Return the topmost objects (i.e. minimum y_loc). The comparison is done separately for each
box. | def top(self, objects: Set[Object]) -> Set[Object]:
"""
Return the topmost objects (i.e. minimum y_loc). The comparison is done separately for each
box.
"""
objects_per_box = self._separate_objects_by_boxes(objects)
return_set: Set[Object] = set()
for _, box_objects in objects_per_box.items():
min_y_loc = min([obj.y_loc for obj in box_objects])
return_set.update(set([obj for obj in box_objects if obj.y_loc == min_y_loc]))
return return_set |
Return the bottom-most objects (i.e. maximum y_loc). The comparison is done separately for
each box. | def bottom(self, objects: Set[Object]) -> Set[Object]:
"""
        Return the bottom-most objects (i.e. maximum y_loc). The comparison is done separately for
each box.
"""
objects_per_box = self._separate_objects_by_boxes(objects)
return_set: Set[Object] = set()
for _, box_objects in objects_per_box.items():
max_y_loc = max([obj.y_loc for obj in box_objects])
return_set.update(set([obj for obj in box_objects if obj.y_loc == max_y_loc]))
return return_set |
Returns the set of objects in the same boxes that are above the given objects. That is, if
the input is a set of two objects, one in each box, we will return a union of the objects
above the first object in the first box, and those above the second object in the second box. | def above(self, objects: Set[Object]) -> Set[Object]:
"""
Returns the set of objects in the same boxes that are above the given objects. That is, if
the input is a set of two objects, one in each box, we will return a union of the objects
above the first object in the first box, and those above the second object in the second box.
"""
objects_per_box = self._separate_objects_by_boxes(objects)
return_set = set()
for box in objects_per_box:
# min_y_loc corresponds to the top-most object.
min_y_loc = min([obj.y_loc for obj in objects_per_box[box]])
for candidate_obj in box.objects:
if candidate_obj.y_loc < min_y_loc:
return_set.add(candidate_obj)
return return_set |
Returns the set of objects in the same boxes that are below the given objects. That is, if
the input is a set of two objects, one in each box, we will return a union of the objects
below the first object in the first box, and those below the second object in the second box. | def below(self, objects: Set[Object]) -> Set[Object]:
"""
Returns the set of objects in the same boxes that are below the given objects. That is, if
the input is a set of two objects, one in each box, we will return a union of the objects
below the first object in the first box, and those below the second object in the second box.
"""
objects_per_box = self._separate_objects_by_boxes(objects)
return_set = set()
for box in objects_per_box:
# max_y_loc corresponds to the bottom-most object.
max_y_loc = max([obj.y_loc for obj in objects_per_box[box]])
for candidate_obj in box.objects:
if candidate_obj.y_loc > max_y_loc:
return_set.add(candidate_obj)
return return_set |
Returns true iff the objects touch each other. | def _objects_touch_each_other(self, object1: Object, object2: Object) -> bool:
"""
Returns true iff the objects touch each other.
"""
in_vertical_range = object1.y_loc <= object2.y_loc + object2.size and \
object1.y_loc + object1.size >= object2.y_loc
        in_horizontal_range = object1.x_loc <= object2.x_loc + object2.size and \
object1.x_loc + object1.size >= object2.x_loc
touch_side = object1.x_loc + object1.size == object2.x_loc or \
object2.x_loc + object2.size == object1.x_loc
touch_top_or_bottom = object1.y_loc + object1.size == object2.y_loc or \
object2.y_loc + object2.size == object1.y_loc
        return (in_vertical_range and touch_side) or (in_horizontal_range and touch_top_or_bottom)
Given a set of objects, separate them by the boxes they belong to and return a dict. | def _separate_objects_by_boxes(self, objects: Set[Object]) -> Dict[Box, List[Object]]:
"""
Given a set of objects, separate them by the boxes they belong to and return a dict.
"""
objects_per_box: Dict[Box, List[Object]] = defaultdict(list)
for box in self.boxes:
for object_ in objects:
if object_ in box.objects:
objects_per_box[box].append(object_)
return objects_per_box |
Returns the set of objects for which the attribute function returns an attribute value that
is most frequent in the initial set, if the frequency is greater than 1. If not, all
objects have different attribute values, and this method returns an empty set. | def _get_objects_with_same_attribute(self,
objects: Set[Object],
attribute_function: Callable[[Object], str]) -> Set[Object]:
"""
Returns the set of objects for which the attribute function returns an attribute value that
is most frequent in the initial set, if the frequency is greater than 1. If not, all
objects have different attribute values, and this method returns an empty set.
"""
objects_of_attribute: Dict[str, Set[Object]] = defaultdict(set)
for entity in objects:
objects_of_attribute[attribute_function(entity)].add(entity)
if not objects_of_attribute:
return set()
most_frequent_attribute = max(objects_of_attribute, key=lambda x: len(objects_of_attribute[x]))
if len(objects_of_attribute[most_frequent_attribute]) <= 1:
return set()
return objects_of_attribute[most_frequent_attribute] |
Given a structure (possibly) containing Tensors on the CPU,
move all the Tensors to the specified GPU (or do nothing, if they should be on the CPU). | def move_to_device(obj, cuda_device: int):
"""
Given a structure (possibly) containing Tensors on the CPU,
move all the Tensors to the specified GPU (or do nothing, if they should be on the CPU).
"""
if cuda_device < 0 or not has_tensor(obj):
return obj
elif isinstance(obj, torch.Tensor):
return obj.cuda(cuda_device)
elif isinstance(obj, dict):
return {key: move_to_device(value, cuda_device) for key, value in obj.items()}
elif isinstance(obj, list):
return [move_to_device(item, cuda_device) for item in obj]
elif isinstance(obj, tuple):
return tuple([move_to_device(item, cuda_device) for item in obj])
else:
return obj |
Given a possibly complex data structure,
check if it has any torch.Tensors in it. | def has_tensor(obj) -> bool:
"""
Given a possibly complex data structure,
check if it has any torch.Tensors in it.
"""
if isinstance(obj, torch.Tensor):
return True
elif isinstance(obj, dict):
return any(has_tensor(value) for value in obj.values())
elif isinstance(obj, (list, tuple)):
return any(has_tensor(item) for item in obj)
else:
return False |
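A quick illustration of ``has_tensor`` together with ``move_to_device`` on a nested structure:

import torch

batch = {"tokens": {"tokens": torch.zeros(2, 5, dtype=torch.long)},
         "metadata": [{"id": "a"}, {"id": "b"}]}
has_tensor(batch)                # True
has_tensor(batch["metadata"])    # False
move_to_device(batch, -1)        # no-op here; pass a GPU id (e.g. 0) to move the tensors there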
Supports sparse and dense tensors.
Returns a tensor with values clamped between the provided minimum and maximum,
without modifying the original tensor. | def clamp_tensor(tensor, minimum, maximum):
"""
Supports sparse and dense tensors.
Returns a tensor with values clamped between the provided minimum and maximum,
without modifying the original tensor.
"""
if tensor.is_sparse:
coalesced_tensor = tensor.coalesce()
# pylint: disable=protected-access
coalesced_tensor._values().clamp_(minimum, maximum)
return coalesced_tensor
else:
return tensor.clamp(minimum, maximum) |
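For example, with a dense tensor (a sparse tensor is coalesced first and its values clamped on the copy):

import torch

dense = torch.tensor([-3.0, 0.5, 7.0])
clamp_tensor(dense, minimum=-1.0, maximum=1.0)   # tensor([-1.0000,  0.5000,  1.0000])
dense                                            # the original tensor is unchanged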
Takes a list of tensor dictionaries, where each dictionary is assumed to have matching keys,
and returns a single dictionary with all tensors with the same key batched together.
Parameters
----------
tensor_dicts : ``List[Dict[str, torch.Tensor]]``
The list of tensor dictionaries to batch.
remove_trailing_dimension : ``bool``
If ``True``, we will check for a trailing dimension of size 1 on the tensors that are being
batched, and remove it if we find it. | def batch_tensor_dicts(tensor_dicts: List[Dict[str, torch.Tensor]],
remove_trailing_dimension: bool = False) -> Dict[str, torch.Tensor]:
"""
Takes a list of tensor dictionaries, where each dictionary is assumed to have matching keys,
and returns a single dictionary with all tensors with the same key batched together.
Parameters
----------
tensor_dicts : ``List[Dict[str, torch.Tensor]]``
The list of tensor dictionaries to batch.
remove_trailing_dimension : ``bool``
If ``True``, we will check for a trailing dimension of size 1 on the tensors that are being
batched, and remove it if we find it.
"""
key_to_tensors: Dict[str, List[torch.Tensor]] = defaultdict(list)
for tensor_dict in tensor_dicts:
for key, tensor in tensor_dict.items():
key_to_tensors[key].append(tensor)
batched_tensors = {}
for key, tensor_list in key_to_tensors.items():
batched_tensor = torch.stack(tensor_list)
if remove_trailing_dimension and all(tensor.size(-1) == 1 for tensor in tensor_list):
batched_tensor = batched_tensor.squeeze(-1)
batched_tensors[key] = batched_tensor
return batched_tensors |
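For example, batching two single-instance tensor dictionaries (shapes are illustrative):

import torch

instance1 = {"tokens": torch.tensor([1, 2, 3]), "label": torch.tensor([1])}
instance2 = {"tokens": torch.tensor([4, 5, 6]), "label": torch.tensor([0])}
batched = batch_tensor_dicts([instance1, instance2], remove_trailing_dimension=True)
batched["tokens"].shape   # torch.Size([2, 3])
batched["label"]          # tensor([1, 0]) -- the trailing size-1 dimension was removed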
Given a variable of shape ``(batch_size,)`` that represents the sequence lengths of each batch
element, this function returns a ``(batch_size, max_length)`` mask variable. For example, if
our input was ``[2, 2, 3]``, with a ``max_length`` of 4, we'd return
``[[1, 1, 0, 0], [1, 1, 0, 0], [1, 1, 1, 0]]``.
We require ``max_length`` here instead of just computing it from the input ``sequence_lengths``
because it lets us avoid finding the max, then copying that value from the GPU to the CPU so
that we can use it to construct a new tensor. | def get_mask_from_sequence_lengths(sequence_lengths: torch.Tensor, max_length: int) -> torch.Tensor:
"""
Given a variable of shape ``(batch_size,)`` that represents the sequence lengths of each batch
element, this function returns a ``(batch_size, max_length)`` mask variable. For example, if
our input was ``[2, 2, 3]``, with a ``max_length`` of 4, we'd return
``[[1, 1, 0, 0], [1, 1, 0, 0], [1, 1, 1, 0]]``.
We require ``max_length`` here instead of just computing it from the input ``sequence_lengths``
because it lets us avoid finding the max, then copying that value from the GPU to the CPU so
that we can use it to construct a new tensor.
"""
# (batch_size, max_length)
ones = sequence_lengths.new_ones(sequence_lengths.size(0), max_length)
range_tensor = ones.cumsum(dim=1)
return (sequence_lengths.unsqueeze(1) >= range_tensor).long() |
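Running the docstring example:

import torch

lengths = torch.tensor([2, 2, 3])
get_mask_from_sequence_lengths(lengths, max_length=4)
# tensor([[1, 1, 0, 0],
#         [1, 1, 0, 0],
#         [1, 1, 1, 0]])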