Please provide a description of the function:
def rescale_gradients(model: Model, grad_norm: Optional[float] = None) -> Optional[float]:
    if grad_norm:
        parameters_to_clip = [p for p in model.parameters()
                              if p.grad is not None]
        return sparse_clip_norm(parameters_to_clip, grad_norm)
    return None
[ "\n Performs gradient rescaling. Is a no-op if gradient rescaling is not enabled.\n " ]
Please provide a description of the function:
def get_metrics(model: Model, total_loss: float, num_batches: int, reset: bool = False) -> Dict[str, float]:
    metrics = model.get_metrics(reset=reset)
    metrics["loss"] = float(total_loss / num_batches) if num_batches > 0 else 0.0
    return metrics
[ "\n Gets the metrics but sets ``\"loss\"`` to\n the total loss divided by the ``num_batches`` so that\n the ``\"loss\"`` metric is \"average loss per batch\".\n " ]
Please provide a description of the function:
def parse_requirements() -> Tuple[PackagesType, PackagesType, Set[str]]:
    essential_packages: PackagesType = {}
    other_packages: PackagesType = {}
    duplicates: Set[str] = set()
    with open("requirements.txt", "r") as req_file:
        section: str = ""
        for line in req_file:
            line = line.strip()
            if line.startswith("####"):
                # Line is a section name.
                section = parse_section_name(line)
                continue
            if not line or line.startswith("#"):
                # Line is empty or just regular comment.
                continue
            module, version = parse_package(line)
            if module in essential_packages or module in other_packages:
                duplicates.add(module)
            if section.startswith("ESSENTIAL"):
                essential_packages[module] = version
            else:
                other_packages[module] = version
    return essential_packages, other_packages, duplicates
[ "Parse all dependencies out of the requirements.txt file." ]
Please provide a description of the function:
def parse_setup() -> Tuple[PackagesType, PackagesType, Set[str], Set[str]]:
    essential_packages: PackagesType = {}
    test_packages: PackagesType = {}
    essential_duplicates: Set[str] = set()
    test_duplicates: Set[str] = set()
    with open('setup.py') as setup_file:
        contents = setup_file.read()
    # Parse out essential packages.
    package_string = re.search(r"""install_requires=\[[\s\n]*['"](.*?)['"],?[\s\n]*\]""",
                               contents, re.DOTALL).groups()[0].strip()
    for package in re.split(r"""['"],[\s\n]+['"]""", package_string):
        module, version = parse_package(package)
        if module in test_packages:
            pass
        if module in essential_packages:
            essential_duplicates.add(module)
        else:
            essential_packages[module] = version
    # Parse packages only needed for testing.
    package_string = re.search(r"""tests_require=\[[\s\n]*['"](.*?)['"],?[\s\n]*\]""",
                               contents, re.DOTALL).groups()[0].strip()
    for package in re.split(r"""['"],[\s\n]+['"]""", package_string):
        module, version = parse_package(package)
        if module in test_packages:
            test_duplicates.add(module)
        else:
            test_packages[module] = version
    return essential_packages, test_packages, essential_duplicates, test_duplicates
[ "Parse all dependencies out of the setup.py script.", "install_requires=\\[[\\s\\n]*['\"](.*?)['\"],?[\\s\\n]*\\]", "['\"],[\\s\\n]+['\"]", "tests_require=\\[[\\s\\n]*['\"](.*?)['\"],?[\\s\\n]*\\]", "['\"],[\\s\\n]+['\"]" ]
Please provide a description of the function:
def enumerate_spans(sentence: List[T],
                    offset: int = 0,
                    max_span_width: int = None,
                    min_span_width: int = 1,
                    filter_function: Callable[[List[T]], bool] = None) -> List[Tuple[int, int]]:
    max_span_width = max_span_width or len(sentence)
    filter_function = filter_function or (lambda x: True)
    spans: List[Tuple[int, int]] = []
    for start_index in range(len(sentence)):
        last_end_index = min(start_index + max_span_width, len(sentence))
        first_end_index = min(start_index + min_span_width - 1, len(sentence))
        for end_index in range(first_end_index, last_end_index):
            start = offset + start_index
            end = offset + end_index
            # add 1 to end index because span indices are inclusive.
            if filter_function(sentence[slice(start_index, end_index + 1)]):
                spans.append((start, end))
    return spans
[ "\n Given a sentence, return all token spans within the sentence. Spans are `inclusive`.\n Additionally, you can provide a maximum and minimum span width, which will be used\n to exclude spans outside of this range.\n\n Finally, you can provide a function mapping ``List[T] -> bool``, which will\n be applied to every span to decide whether that span should be included. This\n allows filtering by length, regex matches, pos tags or any Spacy ``Token``\n attributes, for example.\n\n Parameters\n ----------\n sentence : ``List[T]``, required.\n The sentence to generate spans for. The type is generic, as this function\n can be used with strings, or Spacy ``Tokens`` or other sequences.\n offset : ``int``, optional (default = 0)\n A numeric offset to add to all span start and end indices. This is helpful\n if the sentence is part of a larger structure, such as a document, which\n the indices need to respect.\n max_span_width : ``int``, optional (default = None)\n The maximum length of spans which should be included. Defaults to len(sentence).\n min_span_width : ``int``, optional (default = 1)\n The minimum length of spans which should be included. Defaults to 1.\n filter_function : ``Callable[[List[T]], bool]``, optional (default = None)\n A function mapping sequences of the passed type T to a boolean value.\n If ``True``, the span is included in the returned spans from the\n sentence, otherwise it is excluded..\n " ]
Please provide a description of the function:def bio_tags_to_spans(tag_sequence: List[str], classes_to_ignore: List[str] = None) -> List[TypedStringSpan]: classes_to_ignore = classes_to_ignore or [] spans: Set[Tuple[str, Tuple[int, int]]] = set() span_start = 0 span_end = 0 active_conll_tag = None for index, string_tag in enumerate(tag_sequence): # Actual BIO tag. bio_tag = string_tag[0] if bio_tag not in ["B", "I", "O"]: raise InvalidTagSequence(tag_sequence) conll_tag = string_tag[2:] if bio_tag == "O" or conll_tag in classes_to_ignore: # The span has ended. if active_conll_tag is not None: spans.add((active_conll_tag, (span_start, span_end))) active_conll_tag = None # We don't care about tags we are # told to ignore, so we do nothing. continue elif bio_tag == "B": # We are entering a new span; reset indices # and active tag to new span. if active_conll_tag is not None: spans.add((active_conll_tag, (span_start, span_end))) active_conll_tag = conll_tag span_start = index span_end = index elif bio_tag == "I" and conll_tag == active_conll_tag: # We're inside a span. span_end += 1 else: # This is the case the bio label is an "I", but either: # 1) the span hasn't started - i.e. an ill formed span. # 2) The span is an I tag for a different conll annotation. # We'll process the previous span if it exists, but also # include this span. This is important, because otherwise, # a model may get a perfect F1 score whilst still including # false positive ill-formed spans. if active_conll_tag is not None: spans.add((active_conll_tag, (span_start, span_end))) active_conll_tag = conll_tag span_start = index span_end = index # Last token might have been a part of a valid span. if active_conll_tag is not None: spans.add((active_conll_tag, (span_start, span_end))) return list(spans)
[ "\n Given a sequence corresponding to BIO tags, extracts spans.\n Spans are inclusive and can be of zero length, representing a single word span.\n Ill-formed spans are also included (i.e those which do not start with a \"B-LABEL\"),\n as otherwise it is possible to get a perfect precision score whilst still predicting\n ill-formed spans in addition to the correct spans. This function works properly when\n the spans are unlabeled (i.e., your labels are simply \"B\", \"I\", and \"O\").\n\n Parameters\n ----------\n tag_sequence : List[str], required.\n The integer class labels for a sequence.\n classes_to_ignore : List[str], optional (default = None).\n A list of string class labels `excluding` the bio tag\n which should be ignored when extracting spans.\n\n Returns\n -------\n spans : List[TypedStringSpan]\n The typed, extracted spans from the sequence, in the format (label, (span_start, span_end)).\n Note that the label `does not` contain any BIO tag prefixes.\n " ]
Please provide a description of the function:def iob1_tags_to_spans(tag_sequence: List[str], classes_to_ignore: List[str] = None) -> List[TypedStringSpan]: classes_to_ignore = classes_to_ignore or [] spans: Set[Tuple[str, Tuple[int, int]]] = set() span_start = 0 span_end = 0 active_conll_tag = None prev_bio_tag = None prev_conll_tag = None for index, string_tag in enumerate(tag_sequence): curr_bio_tag = string_tag[0] curr_conll_tag = string_tag[2:] if curr_bio_tag not in ["B", "I", "O"]: raise InvalidTagSequence(tag_sequence) if curr_bio_tag == "O" or curr_conll_tag in classes_to_ignore: # The span has ended. if active_conll_tag is not None: spans.add((active_conll_tag, (span_start, span_end))) active_conll_tag = None elif _iob1_start_of_chunk(prev_bio_tag, prev_conll_tag, curr_bio_tag, curr_conll_tag): # We are entering a new span; reset indices # and active tag to new span. if active_conll_tag is not None: spans.add((active_conll_tag, (span_start, span_end))) active_conll_tag = curr_conll_tag span_start = index span_end = index else: # bio_tag == "I" and curr_conll_tag == active_conll_tag # We're continuing a span. span_end += 1 prev_bio_tag = string_tag[0] prev_conll_tag = string_tag[2:] # Last token might have been a part of a valid span. if active_conll_tag is not None: spans.add((active_conll_tag, (span_start, span_end))) return list(spans)
[ "\n Given a sequence corresponding to IOB1 tags, extracts spans.\n Spans are inclusive and can be of zero length, representing a single word span.\n Ill-formed spans are also included (i.e., those where \"B-LABEL\" is not preceded\n by \"I-LABEL\" or \"B-LABEL\").\n\n Parameters\n ----------\n tag_sequence : List[str], required.\n The integer class labels for a sequence.\n classes_to_ignore : List[str], optional (default = None).\n A list of string class labels `excluding` the bio tag\n which should be ignored when extracting spans.\n\n Returns\n -------\n spans : List[TypedStringSpan]\n The typed, extracted spans from the sequence, in the format (label, (span_start, span_end)).\n Note that the label `does not` contain any BIO tag prefixes.\n " ]
Please provide a description of the function:
def bioul_tags_to_spans(tag_sequence: List[str],
                        classes_to_ignore: List[str] = None) -> List[TypedStringSpan]:
    spans = []
    classes_to_ignore = classes_to_ignore or []
    index = 0
    while index < len(tag_sequence):
        label = tag_sequence[index]
        if label[0] == 'U':
            spans.append((label.partition('-')[2], (index, index)))
        elif label[0] == 'B':
            start = index
            while label[0] != 'L':
                index += 1
                if index >= len(tag_sequence):
                    raise InvalidTagSequence(tag_sequence)
                label = tag_sequence[index]
                if not (label[0] == 'I' or label[0] == 'L'):
                    raise InvalidTagSequence(tag_sequence)
            spans.append((label.partition('-')[2], (start, index)))
        else:
            if label != 'O':
                raise InvalidTagSequence(tag_sequence)
        index += 1
    return [span for span in spans if span[0] not in classes_to_ignore]
[ "\n Given a sequence corresponding to BIOUL tags, extracts spans.\n Spans are inclusive and can be of zero length, representing a single word span.\n Ill-formed spans are not allowed and will raise ``InvalidTagSequence``.\n This function works properly when the spans are unlabeled (i.e., your labels are\n simply \"B\", \"I\", \"O\", \"U\", and \"L\").\n\n Parameters\n ----------\n tag_sequence : ``List[str]``, required.\n The tag sequence encoded in BIOUL, e.g. [\"B-PER\", \"L-PER\", \"O\"].\n classes_to_ignore : ``List[str]``, optional (default = None).\n A list of string class labels `excluding` the bio tag\n which should be ignored when extracting spans.\n\n Returns\n -------\n spans : ``List[TypedStringSpan]``\n The typed, extracted spans from the sequence, in the format (label, (span_start, span_end)).\n " ]
Please provide a description of the function:def to_bioul(tag_sequence: List[str], encoding: str = "IOB1") -> List[str]: if not encoding in {"IOB1", "BIO"}: raise ConfigurationError(f"Invalid encoding {encoding} passed to 'to_bioul'.") # pylint: disable=len-as-condition def replace_label(full_label, new_label): # example: full_label = 'I-PER', new_label = 'U', returns 'U-PER' parts = list(full_label.partition('-')) parts[0] = new_label return ''.join(parts) def pop_replace_append(in_stack, out_stack, new_label): # pop the last element from in_stack, replace the label, append # to out_stack tag = in_stack.pop() new_tag = replace_label(tag, new_label) out_stack.append(new_tag) def process_stack(stack, out_stack): # process a stack of labels, add them to out_stack if len(stack) == 1: # just a U token pop_replace_append(stack, out_stack, 'U') else: # need to code as BIL recoded_stack = [] pop_replace_append(stack, recoded_stack, 'L') while len(stack) >= 2: pop_replace_append(stack, recoded_stack, 'I') pop_replace_append(stack, recoded_stack, 'B') recoded_stack.reverse() out_stack.extend(recoded_stack) # Process the tag_sequence one tag at a time, adding spans to a stack, # then recode them. bioul_sequence = [] stack: List[str] = [] for label in tag_sequence: # need to make a dict like # token = {'token': 'Matt', "labels": {'conll2003': "B-PER"} # 'gold': 'I-PER'} # where 'gold' is the raw value from the CoNLL data set if label == 'O' and len(stack) == 0: bioul_sequence.append(label) elif label == 'O' and len(stack) > 0: # need to process the entries on the stack plus this one process_stack(stack, bioul_sequence) bioul_sequence.append(label) elif label[0] == 'I': # check if the previous type is the same as this one # if it is then append to stack # otherwise this start a new entity if the type # is different if len(stack) == 0: if encoding == "BIO": raise InvalidTagSequence(tag_sequence) stack.append(label) else: # check if the previous type is the same as this one this_type = label.partition('-')[2] prev_type = stack[-1].partition('-')[2] if this_type == prev_type: stack.append(label) else: if encoding == "BIO": raise InvalidTagSequence(tag_sequence) # a new entity process_stack(stack, bioul_sequence) stack.append(label) elif label[0] == 'B': if len(stack) > 0: process_stack(stack, bioul_sequence) stack.append(label) else: raise InvalidTagSequence(tag_sequence) # process the stack if len(stack) > 0: process_stack(stack, bioul_sequence) return bioul_sequence
[ "\n Given a tag sequence encoded with IOB1 labels, recode to BIOUL.\n\n In the IOB1 scheme, I is a token inside a span, O is a token outside\n a span and B is the beginning of span immediately following another\n span of the same type.\n\n In the BIO scheme, I is a token inside a span, O is a token outside\n a span and B is the beginning of a span.\n\n Parameters\n ----------\n tag_sequence : ``List[str]``, required.\n The tag sequence encoded in IOB1, e.g. [\"I-PER\", \"I-PER\", \"O\"].\n encoding : `str`, optional, (default = ``IOB1``).\n The encoding type to convert from. Must be either \"IOB1\" or \"BIO\".\n\n Returns\n -------\n bioul_sequence: ``List[str]``\n The tag sequence encoded in IOB1, e.g. [\"B-PER\", \"L-PER\", \"O\"].\n " ]
Please provide a description of the function:def bmes_tags_to_spans(tag_sequence: List[str], classes_to_ignore: List[str] = None) -> List[TypedStringSpan]: def extract_bmes_tag_label(text): bmes_tag = text[0] label = text[2:] return bmes_tag, label spans: List[Tuple[str, List[int]]] = [] prev_bmes_tag: Optional[str] = None for index, tag in enumerate(tag_sequence): bmes_tag, label = extract_bmes_tag_label(tag) if bmes_tag in ('B', 'S'): # Regardless of tag, we start a new span when reaching B & S. spans.append( (label, [index, index]) ) elif bmes_tag in ('M', 'E') and prev_bmes_tag in ('B', 'M') and spans[-1][0] == label: # Only expand the span if # 1. Valid transition: B/M -> M/E. # 2. Matched label. spans[-1][1][1] = index else: # Best effort split for invalid span. spans.append( (label, [index, index]) ) # update previous BMES tag. prev_bmes_tag = bmes_tag classes_to_ignore = classes_to_ignore or [] return [ # to tuple. (span[0], (span[1][0], span[1][1])) for span in spans if span[0] not in classes_to_ignore ]
[ "\n Given a sequence corresponding to BMES tags, extracts spans.\n Spans are inclusive and can be of zero length, representing a single word span.\n Ill-formed spans are also included (i.e those which do not start with a \"B-LABEL\"),\n as otherwise it is possible to get a perfect precision score whilst still predicting\n ill-formed spans in addition to the correct spans.\n This function works properly when the spans are unlabeled (i.e., your labels are\n simply \"B\", \"M\", \"E\" and \"S\").\n\n Parameters\n ----------\n tag_sequence : List[str], required.\n The integer class labels for a sequence.\n classes_to_ignore : List[str], optional (default = None).\n A list of string class labels `excluding` the bio tag\n which should be ignored when extracting spans.\n\n Returns\n -------\n spans : List[TypedStringSpan]\n The typed, extracted spans from the sequence, in the format (label, (span_start, span_end)).\n Note that the label `does not` contain any BIO tag prefixes.\n " ]
Please provide a description of the function:
def dry_run_from_args(args: argparse.Namespace):
    parameter_path = args.param_path
    serialization_dir = args.serialization_dir
    overrides = args.overrides

    params = Params.from_file(parameter_path, overrides)

    dry_run_from_params(params, serialization_dir)
[ "\n Just converts from an ``argparse.Namespace`` object to params.\n " ]
Please provide a description of the function:def search(self, initial_state: State, transition_function: TransitionFunction) -> Dict[int, List[State]]: finished_states: Dict[int, List[State]] = defaultdict(list) states = [initial_state] step_num = 0 while states: step_num += 1 next_states: Dict[int, List[State]] = defaultdict(list) grouped_state = states[0].combine_states(states) allowed_actions = [] for batch_index, action_history in zip(grouped_state.batch_indices, grouped_state.action_history): allowed_actions.append(self._allowed_transitions[batch_index][tuple(action_history)]) for next_state in transition_function.take_step(grouped_state, max_actions=self._per_node_beam_size, allowed_actions=allowed_actions): # NOTE: we're doing state.batch_indices[0] here (and similar things below), # hard-coding a group size of 1. But, our use of `next_state.is_finished()` # already checks for that, as it crashes if the group size is not 1. batch_index = next_state.batch_indices[0] if next_state.is_finished(): finished_states[batch_index].append(next_state) else: next_states[batch_index].append(next_state) states = [] for batch_index, batch_states in next_states.items(): # The states from the generator are already sorted, so we can just take the first # ones here, without an additional sort. if self._beam_size: batch_states = batch_states[:self._beam_size] states.extend(batch_states) best_states: Dict[int, List[State]] = {} for batch_index, batch_states in finished_states.items(): # The time this sort takes is pretty negligible, no particular need to optimize this # yet. Maybe with a larger beam size... finished_to_sort = [(-state.score[0].item(), state) for state in batch_states] finished_to_sort.sort(key=lambda x: x[0]) best_states[batch_index] = [state[1] for state in finished_to_sort[:self._beam_size]] return best_states
[ "\n Parameters\n ----------\n initial_state : ``State``\n The starting state of our search. This is assumed to be `batched`, and our beam search\n is batch-aware - we'll keep ``beam_size`` states around for each instance in the batch.\n transition_function : ``TransitionFunction``\n The ``TransitionFunction`` object that defines and scores transitions from one state to the\n next.\n\n Returns\n -------\n best_states : ``Dict[int, List[State]]``\n This is a mapping from batch index to the top states for that instance.\n " ]
Please provide a description of the function:
def url_ok(match_tuple: MatchTuple) -> bool:
    try:
        result = requests.get(match_tuple.link, timeout=5)
        return result.ok
    except (requests.ConnectionError, requests.Timeout):
        return False
[ "Check if a URL is reachable." ]
Please provide a description of the function:
def path_ok(match_tuple: MatchTuple) -> bool:
    relative_path = match_tuple.link.split("#")[0]
    full_path = os.path.join(os.path.dirname(str(match_tuple.source)), relative_path)
    return os.path.exists(full_path)
[ "Check if a file in this repository exists." ]
Please provide a description of the function:
def infer_and_cast(value: Any):
    # pylint: disable=too-many-return-statements
    if isinstance(value, (int, float, bool)):
        # Already one of our desired types, so leave as is.
        return value
    elif isinstance(value, list):
        # Recursively call on each list element.
        return [infer_and_cast(item) for item in value]
    elif isinstance(value, dict):
        # Recursively call on each dict value.
        return {key: infer_and_cast(item) for key, item in value.items()}
    elif isinstance(value, str):
        # If it looks like a bool, make it a bool.
        if value.lower() == "true":
            return True
        elif value.lower() == "false":
            return False
        else:
            # See if it could be an int.
            try:
                return int(value)
            except ValueError:
                pass
            # See if it could be a float.
            try:
                return float(value)
            except ValueError:
                # Just return it as a string.
                return value
    else:
        raise ValueError(f"cannot infer type of {value}")
[ "\n In some cases we'll be feeding params dicts to functions we don't own;\n for example, PyTorch optimizers. In that case we can't use ``pop_int``\n or similar to force casts (which means you can't specify ``int`` parameters\n using environment variables). This function takes something that looks JSON-like\n and recursively casts things that look like (bool, int, float) to (bool, int, float).\n " ]
Please provide a description of the function:
def _environment_variables() -> Dict[str, str]:
    return {key: value
            for key, value in os.environ.items()
            if _is_encodable(value)}
[ "\n Wraps `os.environ` to filter out non-encodable values.\n " ]
Please provide a description of the function:
def unflatten(flat_dict: Dict[str, Any]) -> Dict[str, Any]:
    unflat: Dict[str, Any] = {}

    for compound_key, value in flat_dict.items():
        curr_dict = unflat
        parts = compound_key.split(".")
        for key in parts[:-1]:
            curr_value = curr_dict.get(key)
            if key not in curr_dict:
                curr_dict[key] = {}
                curr_dict = curr_dict[key]
            elif isinstance(curr_value, dict):
                curr_dict = curr_value
            else:
                raise ConfigurationError("flattened dictionary is invalid")
        if not isinstance(curr_dict, dict) or parts[-1] in curr_dict:
            raise ConfigurationError("flattened dictionary is invalid")
        else:
            curr_dict[parts[-1]] = value

    return unflat
[ "\n Given a \"flattened\" dict with compound keys, e.g.\n {\"a.b\": 0}\n unflatten it:\n {\"a\": {\"b\": 0}}\n " ]
Please provide a description of the function:def with_fallback(preferred: Dict[str, Any], fallback: Dict[str, Any]) -> Dict[str, Any]: def merge(preferred_value: Any, fallback_value: Any) -> Any: if isinstance(preferred_value, dict) and isinstance(fallback_value, dict): return with_fallback(preferred_value, fallback_value) elif isinstance(preferred_value, dict) and isinstance(fallback_value, list): # treat preferred_value as a sparse list, where each key is an index to be overridden merged_list = fallback_value for elem_key, preferred_element in preferred_value.items(): try: index = int(elem_key) merged_list[index] = merge(preferred_element, fallback_value[index]) except ValueError: raise ConfigurationError("could not merge dicts - the preferred dict contains " f"invalid keys (key {elem_key} is not a valid list index)") except IndexError: raise ConfigurationError("could not merge dicts - the preferred dict contains " f"invalid keys (key {index} is out of bounds)") return merged_list else: return copy.deepcopy(preferred_value) preferred_keys = set(preferred.keys()) fallback_keys = set(fallback.keys()) common_keys = preferred_keys & fallback_keys merged: Dict[str, Any] = {} for key in preferred_keys - fallback_keys: merged[key] = copy.deepcopy(preferred[key]) for key in fallback_keys - preferred_keys: merged[key] = copy.deepcopy(fallback[key]) for key in common_keys: preferred_value = preferred[key] fallback_value = fallback[key] merged[key] = merge(preferred_value, fallback_value) return merged
[ "\n Deep merge two dicts, preferring values from `preferred`.\n " ]
Please provide a description of the function:
def pop_choice(params: Dict[str, Any],
               key: str,
               choices: List[Any],
               default_to_first_choice: bool = False,
               history: str = "?.") -> Any:
    value = Params(params, history).pop_choice(key, choices, default_to_first_choice)
    return value
[ "\n Performs the same function as :func:`Params.pop_choice`, but is required in order to deal with\n places that the Params object is not welcome, such as inside Keras layers. See the docstring\n of that method for more detail on how this function works.\n\n This method adds a ``history`` parameter, in the off-chance that you know it, so that we can\n reproduce :func:`Params.pop_choice` exactly. We default to using \"?.\" if you don't know the\n history, so you'll have to fix that in the log if you want to actually recover the logged\n parameters.\n " ]
Please provide a description of the function:
def add_file_to_archive(self, name: str) -> None:
    if not self.loading_from_archive:
        self.files_to_archive[f"{self.history}{name}"] = cached_path(self.get(name))
[ "\n Any class in its ``from_params`` method can request that some of its\n input files be added to the archive by calling this method.\n\n For example, if some class ``A`` had an ``input_file`` parameter, it could call\n\n ```\n params.add_file_to_archive(\"input_file\")\n ```\n\n which would store the supplied value for ``input_file`` at the key\n ``previous.history.and.then.input_file``. The ``files_to_archive`` dict\n is shared with child instances via the ``_check_is_dict`` method, so that\n the final mapping can be retrieved from the top-level ``Params`` object.\n\n NOTE: You must call ``add_file_to_archive`` before you ``pop()``\n the parameter, because the ``Params`` instance looks up the value\n of the filename inside itself.\n\n If the ``loading_from_archive`` flag is True, this will be a no-op.\n " ]
Please provide a description of the function:def pop(self, key: str, default: Any = DEFAULT) -> Any: if default is self.DEFAULT: try: value = self.params.pop(key) except KeyError: raise ConfigurationError("key \"{}\" is required at location \"{}\"".format(key, self.history)) else: value = self.params.pop(key, default) if not isinstance(value, dict): logger.info(self.history + key + " = " + str(value)) # type: ignore return self._check_is_dict(key, value)
[ "\n Performs the functionality associated with dict.pop(key), along with checking for\n returned dictionaries, replacing them with Param objects with an updated history.\n\n If ``key`` is not present in the dictionary, and no default was specified, we raise a\n ``ConfigurationError``, instead of the typical ``KeyError``.\n " ]
Please provide a description of the function:
def pop_int(self, key: str, default: Any = DEFAULT) -> int:
    value = self.pop(key, default)
    if value is None:
        return None
    else:
        return int(value)
[ "\n Performs a pop and coerces to an int.\n " ]
Please provide a description of the function:
def pop_float(self, key: str, default: Any = DEFAULT) -> float:
    value = self.pop(key, default)
    if value is None:
        return None
    else:
        return float(value)
[ "\n Performs a pop and coerces to a float.\n " ]
Please provide a description of the function:
def pop_bool(self, key: str, default: Any = DEFAULT) -> bool:
    value = self.pop(key, default)
    if value is None:
        return None
    elif isinstance(value, bool):
        return value
    elif value == "true":
        return True
    elif value == "false":
        return False
    else:
        raise ValueError("Cannot convert variable to bool: " + value)
[ "\n Performs a pop and coerces to a bool.\n " ]
Please provide a description of the function:def get(self, key: str, default: Any = DEFAULT): if default is self.DEFAULT: try: value = self.params.get(key) except KeyError: raise ConfigurationError("key \"{}\" is required at location \"{}\"".format(key, self.history)) else: value = self.params.get(key, default) return self._check_is_dict(key, value)
[ "\n Performs the functionality associated with dict.get(key) but also checks for returned\n dicts and returns a Params object in their place with an updated history.\n " ]
Please provide a description of the function:
def pop_choice(self, key: str, choices: List[Any], default_to_first_choice: bool = False) -> Any:
    default = choices[0] if default_to_first_choice else self.DEFAULT
    value = self.pop(key, default)
    if value not in choices:
        key_str = self.history + key
        message = '%s not in acceptable choices for %s: %s' % (value, key_str, str(choices))
        raise ConfigurationError(message)
    return value
[ "\n Gets the value of ``key`` in the ``params`` dictionary, ensuring that the value is one of\n the given choices. Note that this `pops` the key from params, modifying the dictionary,\n consistent with how parameters are processed in this codebase.\n\n Parameters\n ----------\n key: str\n Key to get the value from in the param dictionary\n choices: List[Any]\n A list of valid options for values corresponding to ``key``. For example, if you're\n specifying the type of encoder to use for some part of your model, the choices might be\n the list of encoder classes we know about and can instantiate. If the value we find in\n the param dictionary is not in ``choices``, we raise a ``ConfigurationError``, because\n the user specified an invalid value in their parameter file.\n default_to_first_choice: bool, optional (default=False)\n If this is ``True``, we allow the ``key`` to not be present in the parameter\n dictionary. If the key is not present, we will use the return as the value the first\n choice in the ``choices`` list. If this is ``False``, we raise a\n ``ConfigurationError``, because specifying the ``key`` is required (e.g., you `have` to\n specify your model class when running an experiment, but you can feel free to use\n default settings for encoders if you want).\n " ]
Please provide a description of the function:def as_dict(self, quiet: bool = False, infer_type_and_cast: bool = False): if infer_type_and_cast: params_as_dict = infer_and_cast(self.params) else: params_as_dict = self.params if quiet: return params_as_dict def log_recursively(parameters, history): for key, value in parameters.items(): if isinstance(value, dict): new_local_history = history + key + "." log_recursively(value, new_local_history) else: logger.info(history + key + " = " + str(value)) logger.info("Converting Params object to dict; logging of default " "values will not occur when dictionary parameters are " "used subsequently.") logger.info("CURRENTLY DEFINED PARAMETERS: ") log_recursively(self.params, self.history) return params_as_dict
[ "\n Sometimes we need to just represent the parameters as a dict, for instance when we pass\n them to PyTorch code.\n\n Parameters\n ----------\n quiet: bool, optional (default = False)\n Whether to log the parameters before returning them as a dict.\n infer_type_and_cast : bool, optional (default = False)\n If True, we infer types and cast (e.g. things that look like floats to floats).\n " ]
Please provide a description of the function:def as_flat_dict(self): flat_params = {} def recurse(parameters, path): for key, value in parameters.items(): newpath = path + [key] if isinstance(value, dict): recurse(value, newpath) else: flat_params['.'.join(newpath)] = value recurse(self.params, []) return flat_params
[ "\n Returns the parameters of a flat dictionary from keys to values.\n Nested structure is collapsed with periods.\n " ]
Please provide a description of the function:
def assert_empty(self, class_name: str):
    if self.params:
        raise ConfigurationError("Extra parameters passed to {}: {}".format(class_name, self.params))
[ "\n Raises a ``ConfigurationError`` if ``self.params`` is not empty. We take ``class_name`` as\n an argument so that the error message gives some idea of where an error happened, if there\n was one. ``class_name`` should be the name of the `calling` class, the one that got extra\n parameters (if there are any).\n " ]
Please provide a description of the function:def from_file(params_file: str, params_overrides: str = "", ext_vars: dict = None) -> 'Params': if ext_vars is None: ext_vars = {} # redirect to cache, if necessary params_file = cached_path(params_file) ext_vars = {**_environment_variables(), **ext_vars} file_dict = json.loads(evaluate_file(params_file, ext_vars=ext_vars)) overrides_dict = parse_overrides(params_overrides) param_dict = with_fallback(preferred=overrides_dict, fallback=file_dict) return Params(param_dict)
[ "\n Load a `Params` object from a configuration file.\n\n Parameters\n ----------\n params_file : ``str``\n The path to the configuration file to load.\n params_overrides : ``str``, optional\n A dict of overrides that can be applied to final object.\n e.g. {\"model.embedding_dim\": 10}\n ext_vars : ``dict``, optional\n Our config files are Jsonnet, which allows specifying external variables\n for later substitution. Typically we substitute these using environment\n variables; however, you can also specify them here, in which case they\n take priority over environment variables.\n e.g. {\"HOME_DIR\": \"/Users/allennlp/home\"}\n " ]
Please provide a description of the function:def as_ordered_dict(self, preference_orders: List[List[str]] = None) -> OrderedDict: params_dict = self.as_dict(quiet=True) if not preference_orders: preference_orders = [] preference_orders.append(["dataset_reader", "iterator", "model", "train_data_path", "validation_data_path", "test_data_path", "trainer", "vocabulary"]) preference_orders.append(["type"]) def order_func(key): # Makes a tuple to use for ordering. The tuple is an index into each of the `preference_orders`, # followed by the key itself. This gives us integer sorting if you have a key in one of the # `preference_orders`, followed by alphabetical ordering if not. order_tuple = [order.index(key) if key in order else len(order) for order in preference_orders] return order_tuple + [key] def order_dict(dictionary, order_func): # Recursively orders dictionary according to scoring order_func result = OrderedDict() for key, val in sorted(dictionary.items(), key=lambda item: order_func(item[0])): result[key] = order_dict(val, order_func) if isinstance(val, dict) else val return result return order_dict(params_dict, order_func)
[ "\n Returns Ordered Dict of Params from list of partial order preferences.\n\n Parameters\n ----------\n preference_orders: List[List[str]], optional\n ``preference_orders`` is list of partial preference orders. [\"A\", \"B\", \"C\"] means\n \"A\" > \"B\" > \"C\". For multiple preference_orders first will be considered first.\n Keys not found, will have last but alphabetical preference. Default Preferences:\n ``[[\"dataset_reader\", \"iterator\", \"model\", \"train_data_path\", \"validation_data_path\",\n \"test_data_path\", \"trainer\", \"vocabulary\"], [\"type\"]]``\n " ]
Please provide a description of the function:
def get_hash(self) -> str:
    return str(hash(json.dumps(self.params, sort_keys=True)))
[ "\n Returns a hash code representing the current state of this ``Params`` object. We don't\n want to implement ``__hash__`` because that has deeper python implications (and this is a\n mutable object), but this will give you a representation of the current state.\n " ]
Please provide a description of the function:
def clear(self) -> None:
    self._best_so_far = None
    self._epochs_with_no_improvement = 0
    self._is_best_so_far = True
    self._epoch_number = 0
    self.best_epoch = None
[ "\n Clears out the tracked metrics, but keeps the patience and should_decrease settings.\n " ]
Please provide a description of the function:def state_dict(self) -> Dict[str, Any]: return { "best_so_far": self._best_so_far, "patience": self._patience, "epochs_with_no_improvement": self._epochs_with_no_improvement, "is_best_so_far": self._is_best_so_far, "should_decrease": self._should_decrease, "best_epoch_metrics": self.best_epoch_metrics, "epoch_number": self._epoch_number, "best_epoch": self.best_epoch }
[ "\n A ``Trainer`` can use this to serialize the state of the metric tracker.\n " ]
Please provide a description of the function:
def add_metric(self, metric: float) -> None:
    new_best = ((self._best_so_far is None) or
                (self._should_decrease and metric < self._best_so_far) or
                (not self._should_decrease and metric > self._best_so_far))

    if new_best:
        self.best_epoch = self._epoch_number
        self._is_best_so_far = True
        self._best_so_far = metric
        self._epochs_with_no_improvement = 0
    else:
        self._is_best_so_far = False
        self._epochs_with_no_improvement += 1
    self._epoch_number += 1
[ "\n Record a new value of the metric and update the various things that depend on it.\n " ]
Please provide a description of the function:
def add_metrics(self, metrics: Iterable[float]) -> None:
    for metric in metrics:
        self.add_metric(metric)
[ "\n Helper to add multiple metrics at once.\n " ]
Please provide a description of the function:
def should_stop_early(self) -> bool:
    if self._patience is None:
        return False
    else:
        return self._epochs_with_no_improvement >= self._patience
[ "\n Returns true if improvement has stopped for long enough.\n " ]
Please provide a description of the function:def archive_model(serialization_dir: str, weights: str = _DEFAULT_WEIGHTS, files_to_archive: Dict[str, str] = None, archive_path: str = None) -> None: weights_file = os.path.join(serialization_dir, weights) if not os.path.exists(weights_file): logger.error("weights file %s does not exist, unable to archive model", weights_file) return config_file = os.path.join(serialization_dir, CONFIG_NAME) if not os.path.exists(config_file): logger.error("config file %s does not exist, unable to archive model", config_file) # If there are files we want to archive, write out the mapping # so that we can use it during de-archiving. if files_to_archive: fta_filename = os.path.join(serialization_dir, _FTA_NAME) with open(fta_filename, 'w') as fta_file: fta_file.write(json.dumps(files_to_archive)) if archive_path is not None: archive_file = archive_path if os.path.isdir(archive_file): archive_file = os.path.join(archive_file, "model.tar.gz") else: archive_file = os.path.join(serialization_dir, "model.tar.gz") logger.info("archiving weights and vocabulary to %s", archive_file) with tarfile.open(archive_file, 'w:gz') as archive: archive.add(config_file, arcname=CONFIG_NAME) archive.add(weights_file, arcname=_WEIGHTS_NAME) archive.add(os.path.join(serialization_dir, "vocabulary"), arcname="vocabulary") # If there are supplemental files to archive: if files_to_archive: # Archive the { flattened_key -> original_filename } mapping. archive.add(fta_filename, arcname=_FTA_NAME) # And add each requested file to the archive. for key, filename in files_to_archive.items(): archive.add(filename, arcname=f"fta/{key}")
[ "\n Archive the model weights, its training configuration, and its\n vocabulary to `model.tar.gz`. Include the additional ``files_to_archive``\n if provided.\n\n Parameters\n ----------\n serialization_dir: ``str``\n The directory where the weights and vocabulary are written out.\n weights: ``str``, optional (default=_DEFAULT_WEIGHTS)\n Which weights file to include in the archive. The default is ``best.th``.\n files_to_archive: ``Dict[str, str]``, optional (default=None)\n A mapping {flattened_key -> filename} of supplementary files to include\n in the archive. That is, if you wanted to include ``params['model']['weights']``\n then you would specify the key as `\"model.weights\"`.\n archive_path : ``str``, optional, (default = None)\n A full path to serialize the model to. The default is \"model.tar.gz\" inside the\n serialization_dir. If you pass a directory here, we'll serialize the model\n to \"model.tar.gz\" inside the directory.\n " ]
Please provide a description of the function:def load_archive(archive_file: str, cuda_device: int = -1, overrides: str = "", weights_file: str = None) -> Archive: # redirect to the cache, if necessary resolved_archive_file = cached_path(archive_file) if resolved_archive_file == archive_file: logger.info(f"loading archive file {archive_file}") else: logger.info(f"loading archive file {archive_file} from cache at {resolved_archive_file}") if os.path.isdir(resolved_archive_file): serialization_dir = resolved_archive_file else: # Extract archive to temp dir tempdir = tempfile.mkdtemp() logger.info(f"extracting archive file {resolved_archive_file} to temp dir {tempdir}") with tarfile.open(resolved_archive_file, 'r:gz') as archive: archive.extractall(tempdir) # Postpone cleanup until exit in case the unarchived contents are needed outside # this function. atexit.register(_cleanup_archive_dir, tempdir) serialization_dir = tempdir # Check for supplemental files in archive fta_filename = os.path.join(serialization_dir, _FTA_NAME) if os.path.exists(fta_filename): with open(fta_filename, 'r') as fta_file: files_to_archive = json.loads(fta_file.read()) # Add these replacements to overrides replacements_dict: Dict[str, Any] = {} for key, original_filename in files_to_archive.items(): replacement_filename = os.path.join(serialization_dir, f"fta/{key}") if os.path.exists(replacement_filename): replacements_dict[key] = replacement_filename else: logger.warning(f"Archived file {replacement_filename} not found! At train time " f"this file was located at {original_filename}. This may be " "because you are loading a serialization directory. Attempting to " "load the file from its train-time location.") overrides_dict = parse_overrides(overrides) combined_dict = with_fallback(preferred=overrides_dict, fallback=unflatten(replacements_dict)) overrides = json.dumps(combined_dict) # Load config config = Params.from_file(os.path.join(serialization_dir, CONFIG_NAME), overrides) config.loading_from_archive = True if weights_file: weights_path = weights_file else: weights_path = os.path.join(serialization_dir, _WEIGHTS_NAME) # Fallback for serialization directories. if not os.path.exists(weights_path): weights_path = os.path.join(serialization_dir, _DEFAULT_WEIGHTS) # Instantiate model. Use a duplicate of the config, as it will get consumed. model = Model.load(config.duplicate(), weights_file=weights_path, serialization_dir=serialization_dir, cuda_device=cuda_device) return Archive(model=model, config=config)
[ "\n Instantiates an Archive from an archived `tar.gz` file.\n\n Parameters\n ----------\n archive_file: ``str``\n The archive file to load the model from.\n weights_file: ``str``, optional (default = None)\n The weights file to use. If unspecified, weights.th in the archive_file will be used.\n cuda_device: ``int``, optional (default = -1)\n If `cuda_device` is >= 0, the model will be loaded onto the\n corresponding GPU. Otherwise it will be loaded onto the CPU.\n overrides: ``str``, optional (default = \"\")\n JSON overrides to apply to the unarchived ``Params`` object.\n " ]
Please provide a description of the function:def extract_module(self, path: str, freeze: bool = True) -> Module: modules_dict = {path: module for path, module in self.model.named_modules()} module = modules_dict.get(path, None) if not module: raise ConfigurationError(f"You asked to transfer module at path {path} from " f"the model {type(self.model)}. But it's not present.") if not isinstance(module, Module): raise ConfigurationError(f"The transferred object from model {type(self.model)} at path " f"{path} is not a PyTorch Module.") for parameter in module.parameters(): # type: ignore parameter.requires_grad_(not freeze) return module
[ "\n This method can be used to load a module from the pretrained model archive.\n\n It is also used implicitly in FromParams based construction. So instead of using standard\n params to construct a module, you can instead load a pretrained module from the model\n archive directly. For eg, instead of using params like {\"type\": \"module_type\", ...}, you\n can use the following template::\n\n {\n \"_pretrained\": {\n \"archive_file\": \"../path/to/model.tar.gz\",\n \"path\": \"path.to.module.in.model\",\n \"freeze\": False\n }\n }\n\n If you use this feature with FromParams, take care of the following caveat: Call to\n initializer(self) at end of model initializer can potentially wipe the transferred parameters\n by reinitializing them. This can happen if you have setup initializer regex that also\n matches parameters of the transferred module. To safe-guard against this, you can either\n update your initializer regex to prevent conflicting match or add extra initializer::\n\n [\n [\".*transferred_module_name.*\", \"prevent\"]]\n ]\n\n Parameters\n ----------\n path : ``str``, required\n Path of target module to be loaded from the model.\n Eg. \"_textfield_embedder.token_embedder_tokens\"\n freeze : ``bool``, optional (default=True)\n Whether to freeze the module parameters or not.\n\n " ]
Please provide a description of the function:def _get_action_strings(cls, possible_actions: List[List[ProductionRule]], action_indices: Dict[int, List[List[int]]]) -> List[List[List[str]]]: all_action_strings: List[List[List[str]]] = [] batch_size = len(possible_actions) for i in range(batch_size): batch_actions = possible_actions[i] batch_best_sequences = action_indices[i] if i in action_indices else [] # This will append an empty list to ``all_action_strings`` if ``batch_best_sequences`` # is empty. action_strings = [[batch_actions[rule_id][0] for rule_id in sequence] for sequence in batch_best_sequences] all_action_strings.append(action_strings) return all_action_strings
[ "\n Takes a list of possible actions and indices of decoded actions into those possible actions\n for a batch and returns sequences of action strings. We assume ``action_indices`` is a dict\n mapping batch indices to k-best decoded sequence lists.\n " ]
Please provide a description of the function:def decode(self, output_dict: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]: best_action_strings = output_dict["best_action_strings"] # Instantiating an empty world for getting logical forms. world = NlvrLanguage(set()) logical_forms = [] for instance_action_sequences in best_action_strings: instance_logical_forms = [] for action_strings in instance_action_sequences: if action_strings: instance_logical_forms.append(world.action_sequence_to_logical_form(action_strings)) else: instance_logical_forms.append('') logical_forms.append(instance_logical_forms) action_mapping = output_dict['action_mapping'] best_actions = output_dict['best_action_strings'] debug_infos = output_dict['debug_info'] batch_action_info = [] for batch_index, (predicted_actions, debug_info) in enumerate(zip(best_actions, debug_infos)): instance_action_info = [] for predicted_action, action_debug_info in zip(predicted_actions[0], debug_info): action_info = {} action_info['predicted_action'] = predicted_action considered_actions = action_debug_info['considered_actions'] probabilities = action_debug_info['probabilities'] actions = [] for action, probability in zip(considered_actions, probabilities): if action != -1: actions.append((action_mapping[(batch_index, action)], probability)) actions.sort() considered_actions, probabilities = zip(*actions) action_info['considered_actions'] = considered_actions action_info['action_probabilities'] = probabilities action_info['question_attention'] = action_debug_info.get('question_attention', []) instance_action_info.append(action_info) batch_action_info.append(instance_action_info) output_dict["predicted_actions"] = batch_action_info output_dict["logical_form"] = logical_forms return output_dict
[ "\n This method overrides ``Model.decode``, which gets called after ``Model.forward``, at test\n time, to finalize predictions. We only transform the action string sequences into logical\n forms here.\n " ]
Please provide a description of the function:def _check_state_denotations(self, state: GrammarBasedState, worlds: List[NlvrLanguage]) -> List[bool]: assert state.is_finished(), "Cannot compute denotations for unfinished states!" # Since this is a finished state, its group size must be 1. batch_index = state.batch_indices[0] instance_label_strings = state.extras[batch_index] history = state.action_history[0] all_actions = state.possible_actions[0] action_sequence = [all_actions[action][0] for action in history] return self._check_denotation(action_sequence, instance_label_strings, worlds)
[ "\n Returns whether action history in the state evaluates to the correct denotations over all\n worlds. Only defined when the state is finished.\n " ]
Please provide a description of the function:
def find_learning_rate_from_args(args: argparse.Namespace) -> None:
    params = Params.from_file(args.param_path, args.overrides)
    find_learning_rate_model(params, args.serialization_dir,
                             start_lr=args.start_lr,
                             end_lr=args.end_lr,
                             num_batches=args.num_batches,
                             linear_steps=args.linear,
                             stopping_factor=args.stopping_factor,
                             force=args.force)
[ "\n Start learning rate finder for given args\n " ]
Please provide a description of the function:def find_learning_rate_model(params: Params, serialization_dir: str, start_lr: float = 1e-5, end_lr: float = 10, num_batches: int = 100, linear_steps: bool = False, stopping_factor: float = None, force: bool = False) -> None: if os.path.exists(serialization_dir) and force: shutil.rmtree(serialization_dir) if os.path.exists(serialization_dir) and os.listdir(serialization_dir): raise ConfigurationError(f'Serialization directory {serialization_dir} already exists and is ' f'not empty.') else: os.makedirs(serialization_dir, exist_ok=True) prepare_environment(params) cuda_device = params.params.get('trainer').get('cuda_device', -1) check_for_gpu(cuda_device) all_datasets = datasets_from_params(params) datasets_for_vocab_creation = set(params.pop("datasets_for_vocab_creation", all_datasets)) for dataset in datasets_for_vocab_creation: if dataset not in all_datasets: raise ConfigurationError(f"invalid 'dataset_for_vocab_creation' {dataset}") logger.info("From dataset instances, %s will be considered for vocabulary creation.", ", ".join(datasets_for_vocab_creation)) vocab = Vocabulary.from_params( params.pop("vocabulary", {}), (instance for key, dataset in all_datasets.items() for instance in dataset if key in datasets_for_vocab_creation) ) model = Model.from_params(vocab=vocab, params=params.pop('model')) iterator = DataIterator.from_params(params.pop("iterator")) iterator.index_with(vocab) train_data = all_datasets['train'] trainer_params = params.pop("trainer") no_grad_regexes = trainer_params.pop("no_grad", ()) for name, parameter in model.named_parameters(): if any(re.search(regex, name) for regex in no_grad_regexes): parameter.requires_grad_(False) trainer_choice = trainer_params.pop("type", "default") if trainer_choice != "default": raise ConfigurationError("currently find-learning-rate only works with the default Trainer") trainer = Trainer.from_params(model=model, serialization_dir=serialization_dir, iterator=iterator, train_data=train_data, validation_data=None, params=trainer_params, validation_iterator=None) logger.info(f'Starting learning rate search from {start_lr} to {end_lr} in {num_batches} iterations.') learning_rates, losses = search_learning_rate(trainer, start_lr=start_lr, end_lr=end_lr, num_batches=num_batches, linear_steps=linear_steps, stopping_factor=stopping_factor) logger.info(f'Finished learning rate search.') losses = _smooth(losses, 0.98) _save_plot(learning_rates, losses, os.path.join(serialization_dir, 'lr-losses.png'))
[ "\n Runs learning rate search for given `num_batches` and saves the results in ``serialization_dir``\n\n Parameters\n ----------\n params : ``Params``\n A parameter object specifying an AllenNLP Experiment.\n serialization_dir : ``str``\n The directory in which to save results.\n start_lr: ``float``\n Learning rate to start the search.\n end_lr: ``float``\n Learning rate upto which search is done.\n num_batches: ``int``\n Number of mini-batches to run Learning rate finder.\n linear_steps: ``bool``\n Increase learning rate linearly if False exponentially.\n stopping_factor: ``float``\n Stop the search when the current loss exceeds the best loss recorded by\n multiple of stopping factor. If ``None`` search proceeds till the ``end_lr``\n force: ``bool``\n If True and the serialization directory already exists, everything in it will\n be removed prior to finding the learning rate.\n " ]
Please provide a description of the function:def search_learning_rate(trainer: Trainer, start_lr: float = 1e-5, end_lr: float = 10, num_batches: int = 100, linear_steps: bool = False, stopping_factor: float = None) -> Tuple[List[float], List[float]]: if num_batches <= 10: raise ConfigurationError('The number of iterations for learning rate finder should be greater than 10.') trainer.model.train() num_gpus = len(trainer._cuda_devices) # pylint: disable=protected-access raw_train_generator = trainer.iterator(trainer.train_data, shuffle=trainer.shuffle) train_generator = lazy_groups_of(raw_train_generator, num_gpus) train_generator_tqdm = Tqdm.tqdm(train_generator, total=num_batches) learning_rates = [] losses = [] best = 1e9 if linear_steps: lr_update_factor = (end_lr - start_lr) / num_batches else: lr_update_factor = (end_lr / start_lr) ** (1.0 / num_batches) for i, batch_group in enumerate(train_generator_tqdm): if linear_steps: current_lr = start_lr + (lr_update_factor * i) else: current_lr = start_lr * (lr_update_factor ** i) for param_group in trainer.optimizer.param_groups: param_group['lr'] = current_lr trainer.optimizer.zero_grad() loss = trainer.batch_loss(batch_group, for_training=True) loss.backward() loss = loss.detach().cpu().item() if stopping_factor is not None and (math.isnan(loss) or loss > stopping_factor * best): logger.info(f'Loss ({loss}) exceeds stopping_factor * lowest recorded loss.') break trainer.rescale_gradients() trainer.optimizer.step() learning_rates.append(current_lr) losses.append(loss) if loss < best and i > 10: best = loss if i == num_batches: break return learning_rates, losses
[ "\n Runs training loop on the model using :class:`~allennlp.training.trainer.Trainer`\n increasing learning rate from ``start_lr`` to ``end_lr`` recording the losses.\n Parameters\n ----------\n trainer: :class:`~allennlp.training.trainer.Trainer`\n start_lr: ``float``\n The learning rate to start the search.\n end_lr: ``float``\n The learning rate upto which search is done.\n num_batches: ``int``\n Number of batches to run the learning rate finder.\n linear_steps: ``bool``\n Increase learning rate linearly if False exponentially.\n stopping_factor: ``float``\n Stop the search when the current loss exceeds the best loss recorded by\n multiple of stopping factor. If ``None`` search proceeds till the ``end_lr``\n Returns\n -------\n (learning_rates, losses): ``Tuple[List[float], List[float]]``\n Returns list of learning rates and corresponding losses.\n Note: The losses are recorded before applying the corresponding learning rate\n " ]
Please provide a description of the function:
def _smooth(values: List[float], beta: float) -> List[float]:
    avg_value = 0.
    smoothed = []
    for i, value in enumerate(values):
        avg_value = beta * avg_value + (1 - beta) * value
        smoothed.append(avg_value / (1 - beta ** (i + 1)))
    return smoothed
[ " Exponential smoothing of values " ]
Please provide a description of the function:def forward(self, tensors: List[torch.Tensor], # pylint: disable=arguments-differ mask: torch.Tensor = None) -> torch.Tensor: if len(tensors) != self.mixture_size: raise ConfigurationError("{} tensors were passed, but the module was initialized to " "mix {} tensors.".format(len(tensors), self.mixture_size)) def _do_layer_norm(tensor, broadcast_mask, num_elements_not_masked): tensor_masked = tensor * broadcast_mask mean = torch.sum(tensor_masked) / num_elements_not_masked variance = torch.sum(((tensor_masked - mean) * broadcast_mask)**2) / num_elements_not_masked return (tensor - mean) / torch.sqrt(variance + 1E-12) normed_weights = torch.nn.functional.softmax(torch.cat([parameter for parameter in self.scalar_parameters]), dim=0) normed_weights = torch.split(normed_weights, split_size_or_sections=1) if not self.do_layer_norm: pieces = [] for weight, tensor in zip(normed_weights, tensors): pieces.append(weight * tensor) return self.gamma * sum(pieces) else: mask_float = mask.float() broadcast_mask = mask_float.unsqueeze(-1) input_dim = tensors[0].size(-1) num_elements_not_masked = torch.sum(mask_float) * input_dim pieces = [] for weight, tensor in zip(normed_weights, tensors): pieces.append(weight * _do_layer_norm(tensor, broadcast_mask, num_elements_not_masked)) return self.gamma * sum(pieces)
[ "\n Compute a weighted average of the ``tensors``. The input tensors an be any shape\n with at least two dimensions, but must all be the same shape.\n\n When ``do_layer_norm=True``, the ``mask`` is required input. If the ``tensors`` are\n dimensioned ``(dim_0, ..., dim_{n-1}, dim_n)``, then the ``mask`` is dimensioned\n ``(dim_0, ..., dim_{n-1})``, as in the typical case with ``tensors`` of shape\n ``(batch_size, timesteps, dim)`` and ``mask`` of shape ``(batch_size, timesteps)``.\n\n When ``do_layer_norm=False`` the ``mask`` is ignored.\n " ]
Please provide a description of the function:def predicate_with_side_args(side_arguments: List[str]) -> Callable: # pylint: disable=invalid-name def decorator(function: Callable) -> Callable: setattr(function, '_side_arguments', side_arguments) return predicate(function) return decorator
[ "\n Like :func:`predicate`, but used when some of the arguments to the function are meant to be\n provided by the decoder or other state, instead of from the language. For example, you might\n want to have a function use the decoder's attention over some input text when a terminal was\n predicted. That attention won't show up in the language productions. Use this decorator, and\n pass in the required state to :func:`DomainLanguage.execute_action_sequence`, if you need to\n ignore some arguments when doing grammar induction.\n\n In order for this to work out, the side arguments `must` be after any non-side arguments. This\n is because we use ``*args`` to pass the non-side arguments, and ``**kwargs`` to pass the side\n arguments, and python requires that ``*args`` be before ``**kwargs``.\n " ]
Please provide a description of the function:def nltk_tree_to_logical_form(tree: Tree) -> str: # nltk.Tree actually inherits from `list`, so you use `len()` to get the number of children. # We're going to be explicit about checking length, instead of using `if tree:`, just to avoid # any funny business nltk might have done (e.g., it's really odd if `if tree:` evaluates to # `False` if there's a single leaf node with no children). if len(tree) == 0: # pylint: disable=len-as-condition return tree.label() if len(tree) == 1: return tree[0].label() return '(' + ' '.join(nltk_tree_to_logical_form(child) for child in tree) + ')'
[ "\n Given an ``nltk.Tree`` representing the syntax tree that generates a logical form, this method\n produces the actual (lisp-like) logical form, with all of the non-terminal symbols converted\n into the correct number of parentheses.\n\n This is used in the logic that converts action sequences back into logical forms. It's very\n unlikely that you will need this anywhere else.\n " ]
Please provide a description of the function:def get_type(type_: Type) -> 'PredicateType': if is_callable(type_): callable_args = type_.__args__ argument_types = [PredicateType.get_type(t) for t in callable_args[:-1]] return_type = PredicateType.get_type(callable_args[-1]) return FunctionType(argument_types, return_type) elif is_generic(type_): # This is something like List[int]. type_.__name__ doesn't do the right thing (and # crashes in python 3.7), so we need to do some magic here. name = get_generic_name(type_) else: name = type_.__name__ return BasicType(name)
[ "\n Converts a python ``Type`` (as you might get from a type annotation) into a\n ``PredicateType``. If the ``Type`` is callable, this will return a ``FunctionType``;\n otherwise, it will return a ``BasicType``.\n\n ``BasicTypes`` have a single ``name`` parameter - we typically get this from\n ``type_.__name__``. This doesn't work for generic types (like ``List[str]``), so we handle\n those specially, so that the ``name`` for the ``BasicType`` remains ``List[str]``, as you\n would expect.\n " ]
Please provide a description of the function:def execute(self, logical_form: str): if not hasattr(self, '_functions'): raise RuntimeError("You must call super().__init__() in your Language constructor") logical_form = logical_form.replace(",", " ") expression = util.lisp_to_nested_expression(logical_form) return self._execute_expression(expression)
[ "Executes a logical form, using whatever predicates you have defined." ]
Please provide a description of the function:def execute_action_sequence(self, action_sequence: List[str], side_arguments: List[Dict] = None): # We'll strip off the first action, because it doesn't matter for execution. first_action = action_sequence[0] left_side = first_action.split(' -> ')[0] if left_side != '@start@': raise ExecutionError('invalid action sequence') remaining_side_args = side_arguments[1:] if side_arguments else None return self._execute_sequence(action_sequence[1:], remaining_side_args)[0]
[ "\n Executes the program defined by an action sequence directly, without needing the overhead\n of translating to a logical form first. For any given program, :func:`execute` and this\n function are equivalent, they just take different representations of the program, so you\n can use whichever is more efficient.\n\n Also, if you have state or side arguments associated with particular production rules\n (e.g., the decoder's attention on an input utterance when a predicate was predicted), you\n `must` use this function to execute the logical form, instead of :func:`execute`, so that\n we can match the side arguments with the right functions.\n " ]
Please provide a description of the function:def get_nonterminal_productions(self) -> Dict[str, List[str]]: if not self._nonterminal_productions: actions: Dict[str, Set[str]] = defaultdict(set) # If you didn't give us a set of valid start types, we'll assume all types we know # about (including functional types) are valid start types. if self._start_types: start_types = self._start_types else: start_types = set() for type_list in self._function_types.values(): start_types.update(type_list) for start_type in start_types: actions[START_SYMBOL].add(f"{START_SYMBOL} -> {start_type}") for name, function_type_list in self._function_types.items(): for function_type in function_type_list: actions[str(function_type)].add(f"{function_type} -> {name}") if isinstance(function_type, FunctionType): return_type = function_type.return_type arg_types = function_type.argument_types right_side = f"[{function_type}, {', '.join(str(arg_type) for arg_type in arg_types)}]" actions[str(return_type)].add(f"{return_type} -> {right_side}") self._nonterminal_productions = {key: sorted(value) for key, value in actions.items()} return self._nonterminal_productions
[ "\n Induces a grammar from the defined collection of predicates in this language and returns\n all productions in that grammar, keyed by the non-terminal they are expanding.\n\n This includes terminal productions implied by each predicate as well as productions for the\n `return type` of each defined predicate. For example, defining a \"multiply\" predicate adds\n a \"<int,int:int> -> multiply\" terminal production to the grammar, and `also` a \"int ->\n [<int,int:int>, int, int]\" non-terminal production, because I can use the \"multiply\"\n predicate to produce an int.\n " ]
Please provide a description of the function:def all_possible_productions(self) -> List[str]: all_actions = set() for action_set in self.get_nonterminal_productions().values(): all_actions.update(action_set) return sorted(all_actions)
[ "\n Returns a sorted list of all production rules in the grammar induced by\n :func:`get_nonterminal_productions`.\n " ]
Please provide a description of the function:def logical_form_to_action_sequence(self, logical_form: str) -> List[str]: expression = util.lisp_to_nested_expression(logical_form) try: transitions, start_type = self._get_transitions(expression, expected_type=None) if self._start_types and start_type not in self._start_types: raise ParsingError(f"Expression had unallowed start type of {start_type}: {expression}") except ParsingError: logger.error(f'Error parsing logical form: {logical_form}') raise transitions.insert(0, f'@start@ -> {start_type}') return transitions
[ "\n Converts a logical form into a linearization of the production rules from its abstract\n syntax tree. The linearization is top-down, depth-first.\n\n Each production rule is formatted as \"LHS -> RHS\", where \"LHS\" is a single non-terminal\n type, and RHS is either a terminal or a list of non-terminals (other possible values for\n RHS in a more general context-free grammar are not produced by our grammar induction\n logic).\n\n Non-terminals are `types` in the grammar, either basic types (like ``int``, ``str``, or\n some class that you define), or functional types, represented with angle brackets with a\n colon separating arguments from the return type. Multi-argument functions have commas\n separating their argument types. For example, ``<int:int>`` is a function that takes an\n integer and returns an integer, and ``<int,int:int>`` is a function that takes two integer\n arguments and returns an integer.\n\n As an example translation from logical form to complete action sequence, the logical form\n ``(add 2 3)`` would be translated to ``['@start@ -> int', 'int -> [<int,int:int>, int, int]',\n '<int,int:int> -> add', 'int -> 2', 'int -> 3']``.\n " ]
Please provide a description of the function:def action_sequence_to_logical_form(self, action_sequence: List[str]) -> str: # Basic outline: we assume that the bracketing that we get in the RHS of each action is the # correct bracketing for reconstructing the logical form. This is true when there is no # currying in the action sequence. Given this assumption, we just need to construct a tree # from the action sequence, then output all of the leaves in the tree, with brackets around # the children of all non-terminal nodes. remaining_actions = [action.split(" -> ") for action in action_sequence] tree = Tree(remaining_actions[0][1], []) try: remaining_actions = self._construct_node_from_actions(tree, remaining_actions[1:]) except ParsingError: logger.error("Error parsing action sequence: %s", action_sequence) raise if remaining_actions: logger.error("Error parsing action sequence: %s", action_sequence) logger.error("Remaining actions were: %s", remaining_actions) raise ParsingError("Extra actions in action sequence") return nltk_tree_to_logical_form(tree)
[ "\n Takes an action sequence as produced by :func:`logical_form_to_action_sequence`, which is a\n linearization of an abstract syntax tree, and reconstructs the logical form defined by that\n abstract syntax tree.\n " ]
Please provide a description of the function:def add_predicate(self, name: str, function: Callable, side_arguments: List[str] = None): side_arguments = side_arguments or [] signature = inspect.signature(function) argument_types = [param.annotation for name, param in signature.parameters.items() if name not in side_arguments] return_type = signature.return_annotation argument_nltk_types: List[PredicateType] = [PredicateType.get_type(arg_type) for arg_type in argument_types] return_nltk_type = PredicateType.get_type(return_type) function_nltk_type = PredicateType.get_function_type(argument_nltk_types, return_nltk_type) self._functions[name] = function self._function_types[name].append(function_nltk_type)
[ "\n Adds a predicate to this domain language. Typically you do this with the ``@predicate``\n decorator on the methods in your class. But, if you need to for whatever reason, you can\n also call this function yourself with a (type-annotated) function to add it to your\n language.\n\n Parameters\n ----------\n name : ``str``\n The name that we will use in the induced language for this function.\n function : ``Callable``\n The function that gets called when executing a predicate with the given name.\n side_arguments : ``List[str]``, optional\n If given, we will ignore these arguments for the purposes of grammar induction. This\n is to allow passing extra arguments from the decoder state that are not explicitly part\n of the language the decoder produces, such as the decoder's attention over the question\n when a terminal was predicted. If you use this functionality, you also `must` use\n ``language.execute_action_sequence()`` instead of ``language.execute()``, and you must\n pass the additional side arguments needed to that function. See\n :func:`execute_action_sequence` for more information.\n " ]
Please provide a description of the function:def add_constant(self, name: str, value: Any, type_: Type = None): value_type = type_ if type_ else type(value) constant_type = PredicateType.get_type(value_type) self._functions[name] = lambda: value self._function_types[name].append(constant_type)
[ "\n Adds a constant to this domain language. You would typically just pass in a list of\n constants to the ``super().__init__()`` call in your constructor, but you can also call\n this method to add constants if it is more convenient.\n\n Because we construct a grammar over this language for you, in order for the grammar to be\n finite we cannot allow arbitrary constants. Having a finite grammar is important when\n you're doing semantic parsing - we need to be able to search over this space, and compute\n normalized probability distributions.\n " ]
Please provide a description of the function:def is_nonterminal(self, symbol: str) -> bool: nonterminal_productions = self.get_nonterminal_productions() return symbol in nonterminal_productions
[ "\n Determines whether an input symbol is a valid non-terminal in the grammar.\n " ]
Please provide a description of the function:def _execute_expression(self, expression: Any): # pylint: disable=too-many-return-statements if isinstance(expression, list): if isinstance(expression[0], list): function = self._execute_expression(expression[0]) elif expression[0] in self._functions: function = self._functions[expression[0]] else: if isinstance(expression[0], str): raise ExecutionError(f"Unrecognized function: {expression[0]}") else: raise ExecutionError(f"Unsupported expression type: {expression}") arguments = [self._execute_expression(arg) for arg in expression[1:]] try: return function(*arguments) except (TypeError, ValueError): traceback.print_exc() raise ExecutionError(f"Error executing expression {expression} (see stderr for stack trace)") elif isinstance(expression, str): if expression not in self._functions: raise ExecutionError(f"Unrecognized constant: {expression}") # This is a bit of a quirk in how we represent constants and zero-argument functions. # For consistency, constants are wrapped in a zero-argument lambda. So both constants # and zero-argument functions are callable in `self._functions`, and are `BasicTypes` # in `self._function_types`. For these, we want to return # `self._functions[expression]()` _calling_ the zero-argument function. If we get a # `FunctionType` in here, that means we're referring to the function as a first-class # object, instead of calling it (maybe as an argument to a higher-order function). In # that case, we return the function _without_ calling it. # Also, we just check the first function type here, because we assume you haven't # registered the same function with both a constant type and a `FunctionType`. if isinstance(self._function_types[expression][0], FunctionType): return self._functions[expression] else: return self._functions[expression]() else: raise ExecutionError("Not sure how you got here. Please open a github issue with details.")
[ "\n This does the bulk of the work of executing a logical form, recursively executing a single\n expression. Basically, if the expression is a function we know about, we evaluate its\n arguments then call the function. If it's a list, we evaluate all elements of the list.\n If it's a constant (or a zero-argument function), we evaluate the constant.\n " ]
Please provide a description of the function:def _execute_sequence(self, action_sequence: List[str], side_arguments: List[Dict]) -> Tuple[Any, List[str], List[Dict]]: first_action = action_sequence[0] remaining_actions = action_sequence[1:] remaining_side_args = side_arguments[1:] if side_arguments else None right_side = first_action.split(' -> ')[1] if right_side in self._functions: function = self._functions[right_side] # mypy doesn't like this check, saying that Callable isn't a reasonable thing to pass # here. But it works just fine; I'm not sure why mypy complains about it. if isinstance(function, Callable): # type: ignore function_arguments = inspect.signature(function).parameters if not function_arguments: # This was a zero-argument function / constant that was registered as a lambda # function, for consistency of execution in `execute()`. execution_value = function() elif side_arguments: kwargs = {} non_kwargs = [] for argument_name in function_arguments: if argument_name in side_arguments[0]: kwargs[argument_name] = side_arguments[0][argument_name] else: non_kwargs.append(argument_name) if kwargs and non_kwargs: # This is a function that has both side arguments and logical form # arguments - we curry the function so only the logical form arguments are # left. def curried_function(*args): return function(*args, **kwargs) execution_value = curried_function elif kwargs: # This is a function that _only_ has side arguments - we just call the # function and return a value. execution_value = function(**kwargs) else: # This is a function that has logical form arguments, but no side arguments # that match what we were given - just return the function itself. execution_value = function else: execution_value = function return execution_value, remaining_actions, remaining_side_args else: # This is a non-terminal expansion, like 'int -> [<int:int>, int, int]'. We need to # get the function and its arguments, then call the function with its arguments. # Because we linearize the abstract syntax tree depth first, left-to-right, we can just # recursively call `_execute_sequence` for the function and all of its arguments, and # things will just work. right_side_parts = right_side.split(', ') # We don't really need to know what the types are, just how many of them there are, so # we recurse the right number of times. function, remaining_actions, remaining_side_args = self._execute_sequence(remaining_actions, remaining_side_args) arguments = [] for _ in right_side_parts[1:]: argument, remaining_actions, remaining_side_args = self._execute_sequence(remaining_actions, remaining_side_args) arguments.append(argument) return function(*arguments), remaining_actions, remaining_side_args
[ "\n This does the bulk of the work of :func:`execute_action_sequence`, recursively executing\n the functions it finds and trimming actions off of the action sequence. The return value\n is a tuple of (execution, remaining_actions), where the second value is necessary to handle\n the recursion.\n " ]
Please provide a description of the function:def _get_transitions(self, expression: Any, expected_type: PredicateType) -> Tuple[List[str], PredicateType]: if isinstance(expression, (list, tuple)): function_transitions, return_type, argument_types = self._get_function_transitions(expression[0], expected_type) if len(argument_types) != len(expression[1:]): raise ParsingError(f'Wrong number of arguments for function in {expression}') argument_transitions = [] for argument_type, subexpression in zip(argument_types, expression[1:]): argument_transitions.extend(self._get_transitions(subexpression, argument_type)[0]) return function_transitions + argument_transitions, return_type elif isinstance(expression, str): if expression not in self._functions: raise ParsingError(f"Unrecognized constant: {expression}") constant_types = self._function_types[expression] if len(constant_types) == 1: constant_type = constant_types[0] # This constant had only one type; that's the easy case. if expected_type and expected_type != constant_type: raise ParsingError(f'{expression} did not have expected type {expected_type} ' f'(found {constant_type})') return [f'{constant_type} -> {expression}'], constant_type else: if not expected_type: raise ParsingError('With no expected type and multiple types to pick from ' f"I don't know what type to use (constant was {expression})") if expected_type not in constant_types: raise ParsingError(f'{expression} did not have expected type {expected_type} ' f'(found these options: {constant_types}; none matched)') return [f'{expected_type} -> {expression}'], expected_type else: raise ParsingError('Not sure how you got here. Please open an issue on github with details.')
[ "\n This is used when converting a logical form into an action sequence. This piece\n recursively translates a lisp expression into an action sequence, making sure we match the\n expected type (or using the expected type to get the right type for constant expressions).\n " ]
Please provide a description of the function:def _get_function_transitions(self, expression: Union[str, List], expected_type: PredicateType) -> Tuple[List[str], PredicateType, List[PredicateType]]: # This first block handles getting the transitions and function type (and some error # checking) _just for the function itself_. If this is a simple function, this is easy; if # it's a higher-order function, it involves some recursion. if isinstance(expression, list): # This is a higher-order function. TODO(mattg): we'll just ignore type checking on # higher-order functions, for now. transitions, function_type = self._get_transitions(expression, None) elif expression in self._functions: name = expression function_types = self._function_types[expression] if len(function_types) != 1: raise ParsingError(f"{expression} had multiple types; this is not yet supported for functions") function_type = function_types[0] transitions = [f'{function_type} -> {name}'] else: if isinstance(expression, str): raise ParsingError(f"Unrecognized function: {expression[0]}") else: raise ParsingError(f"Unsupported expression type: {expression}") if not isinstance(function_type, FunctionType): raise ParsingError(f'Zero-arg function or constant called with arguments: {name}') # Now that we have the transitions for the function itself, and the function's type, we can # get argument types and do the rest of the transitions. argument_types = function_type.argument_types return_type = function_type.return_type right_side = f'[{function_type}, {", ".join(str(arg) for arg in argument_types)}]' first_transition = f'{return_type} -> {right_side}' transitions.insert(0, first_transition) if expected_type and expected_type != return_type: raise ParsingError(f'{expression} did not have expected type {expected_type} ' f'(found {return_type})') return transitions, return_type, argument_types
[ "\n A helper method for ``_get_transitions``. This gets the transitions for the predicate\n itself in a function call. If we only had simple functions (e.g., \"(add 2 3)\"), this would\n be pretty straightforward and we wouldn't need a separate method to handle it. We split it\n out into its own method because handling higher-order functions is complicated (e.g.,\n something like \"((negate add) 2 3)\").\n " ]
Please provide a description of the function:def _construct_node_from_actions(self, current_node: Tree, remaining_actions: List[List[str]]) -> List[List[str]]: if not remaining_actions: logger.error("No actions left to construct current node: %s", current_node) raise ParsingError("Incomplete action sequence") left_side, right_side = remaining_actions.pop(0) if left_side != current_node.label(): logger.error("Current node: %s", current_node) logger.error("Next action: %s -> %s", left_side, right_side) logger.error("Remaining actions were: %s", remaining_actions) raise ParsingError("Current node does not match next action") if right_side[0] == '[': # This is a non-terminal expansion, with more than one child node. for child_type in right_side[1:-1].split(', '): child_node = Tree(child_type, []) current_node.append(child_node) # you add a child to an nltk.Tree with `append` # For now, we assume that all children in a list like this are non-terminals, so we # recurse on them. I'm pretty sure that will always be true for the way our # grammar induction works. We can revisit this later if we need to. remaining_actions = self._construct_node_from_actions(child_node, remaining_actions) else: # The current node is a pre-terminal; we'll add a single terminal child. By # construction, the right-hand side of our production rules are only ever terminal # productions or lists of non-terminals. current_node.append(Tree(right_side, [])) # you add a child to an nltk.Tree with `append` return remaining_actions
[ "\n Given a current node in the logical form tree, and a list of actions in an action sequence,\n this method fills in the children of the current node from the action sequence, then\n returns whatever actions are left.\n\n For example, we could get a node with type ``c``, and an action sequence that begins with\n ``c -> [<r,c>, r]``. This method will add two children to the input node, consuming\n actions from the action sequence for nodes of type ``<r,c>`` (and all of its children,\n recursively) and ``r`` (and all of its children, recursively). This method assumes that\n action sequences are produced `depth-first`, so all actions for the subtree under ``<r,c>``\n appear before actions for the subtree under ``r``. If there are any actions in the action\n sequence after the ``<r,c>`` and ``r`` subtrees have terminated in leaf nodes, they will be\n returned.\n " ]
Please provide a description of the function:def _choice(num_words: int, num_samples: int) -> Tuple[np.ndarray, int]: num_tries = 0 num_chosen = 0 def get_buffer() -> np.ndarray: log_samples = np.random.rand(num_samples) * np.log(num_words + 1) samples = np.exp(log_samples).astype('int64') - 1 return np.clip(samples, a_min=0, a_max=num_words - 1) sample_buffer = get_buffer() buffer_index = 0 samples: Set[int] = set() while num_chosen < num_samples: num_tries += 1 # choose sample sample_id = sample_buffer[buffer_index] if sample_id not in samples: samples.add(sample_id) num_chosen += 1 buffer_index += 1 if buffer_index == num_samples: # Reset the buffer sample_buffer = get_buffer() buffer_index = 0 return np.array(list(samples)), num_tries
[ "\n Chooses ``num_samples`` samples without replacement from [0, ..., num_words).\n Returns a tuple (samples, num_tries).\n " ]
Please provide a description of the function:def tokens_to_indices(self, tokens: List[Token], vocabulary: Vocabulary, index_name: str) -> Dict[str, List[TokenType]]: raise NotImplementedError
[ "\n Takes a list of tokens and converts them to one or more sets of indices.\n This could be just an ID for each token from the vocabulary.\n Or it could split each token into characters and return one ID per character.\n Or (for instance, in the case of byte-pair encoding) there might not be a clean\n mapping from individual tokens to indices.\n " ]
Please provide a description of the function:def pad_token_sequence(self, tokens: Dict[str, List[TokenType]], desired_num_tokens: Dict[str, int], padding_lengths: Dict[str, int]) -> Dict[str, List[TokenType]]: raise NotImplementedError
[ "\n This method pads a list of tokens to ``desired_num_tokens`` and returns a padded copy of the\n input tokens. If the input token list is longer than ``desired_num_tokens`` then it will be\n truncated.\n\n ``padding_lengths`` is used to provide supplemental padding parameters which are needed\n in some cases. For example, it contains the widths to pad characters to when doing\n character-level padding.\n " ]
Please provide a description of the function:def canonicalize_clusters(clusters: DefaultDict[int, List[Tuple[int, int]]]) -> List[List[Tuple[int, int]]]: merged_clusters: List[Set[Tuple[int, int]]] = [] for cluster in clusters.values(): cluster_with_overlapping_mention = None for mention in cluster: # Look at clusters we have already processed to # see if they contain a mention in the current # cluster for comparison. for cluster2 in merged_clusters: if mention in cluster2: # first cluster in merged clusters # which contains this mention. cluster_with_overlapping_mention = cluster2 break # Already encountered overlap - no need to keep looking. if cluster_with_overlapping_mention is not None: break if cluster_with_overlapping_mention is not None: # Merge cluster we are currently processing into # the cluster in the processed list. cluster_with_overlapping_mention.update(cluster) else: merged_clusters.append(set(cluster)) return [list(c) for c in merged_clusters]
[ "\n The CONLL 2012 data includes 2 annotated spans which are identical,\n but have different ids. This checks all clusters for spans which are\n identical, and if it finds any, merges the clusters containing the\n identical spans.\n " ]
Please provide a description of the function:def join_mwp(tags: List[str]) -> List[str]: ret = [] verb_flag = False for tag in tags: if "V" in tag: # Create a continuous 'V' BIO span prefix, _ = tag.split("-") if verb_flag: # Continue a verb label across the different predicate parts prefix = 'I' ret.append(f"{prefix}-V") verb_flag = True else: ret.append(tag) verb_flag = False return ret
[ "\n Join multi-word predicates to a single\n predicate ('V') token.\n " ]
Please provide a description of the function:def make_oie_string(tokens: List[Token], tags: List[str]) -> str: frame = [] chunk = [] words = [token.text for token in tokens] for (token, tag) in zip(words, tags): if tag.startswith("I-"): chunk.append(token) else: if chunk: frame.append("[" + " ".join(chunk) + "]") chunk = [] if tag.startswith("B-"): chunk.append(tag[2:] + ": " + token) elif tag == "O": frame.append(token) if chunk: frame.append("[" + " ".join(chunk) + "]") return " ".join(frame)
[ "\n Converts a list of model outputs (i.e., a list of lists of bio tags, each\n pertaining to a single word), returns an inline bracket representation of\n the prediction.\n " ]
Please provide a description of the function:def get_predicate_indices(tags: List[str]) -> List[int]: return [ind for ind, tag in enumerate(tags) if 'V' in tag]
[ "\n Return the word indices of a predicate in BIO tags.\n " ]
Please provide a description of the function:def get_predicate_text(sent_tokens: List[Token], tags: List[str]) -> str: return " ".join([sent_tokens[pred_id].text for pred_id in get_predicate_indices(tags)])
[ "\n Get the predicate in this prediction.\n " ]
Please provide a description of the function:def predicates_overlap(tags1: List[str], tags2: List[str]) -> bool: # Get predicate word indices from both predictions pred_ind1 = get_predicate_indices(tags1) pred_ind2 = get_predicate_indices(tags2) # Return whether pred_ind1 and pred_ind2 overlap return any(set.intersection(set(pred_ind1), set(pred_ind2)))
[ "\n Tests whether the predicate in BIO tags1 overlap\n with those of tags2.\n " ]
Please provide a description of the function:def get_coherent_next_tag(prev_label: str, cur_label: str) -> str: if cur_label == "O": # Don't need to add prefix to an "O" label return "O" if prev_label == cur_label: return f"I-{cur_label}" else: return f"B-{cur_label}"
[ "\n Generate a coherent tag, given previous tag and current label.\n " ]
Please provide a description of the function:def merge_overlapping_predictions(tags1: List[str], tags2: List[str]) -> List[str]: ret_sequence = [] prev_label = "O" # Build a coherent sequence out of two # spans which predicates' overlap for tag1, tag2 in zip(tags1, tags2): label1 = tag1.split("-")[-1] label2 = tag2.split("-")[-1] if (label1 == "V") or (label2 == "V"): # Construct maximal predicate length - # add predicate tag if any of the sequence predict it cur_label = "V" # Else - prefer an argument over 'O' label elif label1 != "O": cur_label = label1 else: cur_label = label2 # Append cur tag to the returned sequence cur_tag = get_coherent_next_tag(prev_label, cur_label) prev_label = cur_label ret_sequence.append(cur_tag) return ret_sequence
[ "\n Merge two predictions into one. Assumes the predicate in tags1 overlap with\n the predicate of tags2.\n " ]
Please provide a description of the function:def consolidate_predictions(outputs: List[List[str]], sent_tokens: List[Token]) -> Dict[str, List[str]]: pred_dict: Dict[str, List[str]] = {} merged_outputs = [join_mwp(output) for output in outputs] predicate_texts = [get_predicate_text(sent_tokens, tags) for tags in merged_outputs] for pred1_text, tags1 in zip(predicate_texts, merged_outputs): # A flag indicating whether to add tags1 to predictions add_to_prediction = True # Check if this predicate overlaps another predicate for pred2_text, tags2 in pred_dict.items(): if predicates_overlap(tags1, tags2): # tags1 overlaps tags2 pred_dict[pred2_text] = merge_overlapping_predictions(tags1, tags2) add_to_prediction = False # This predicate doesn't overlap - add as a new predicate if add_to_prediction: pred_dict[pred1_text] = tags1 return pred_dict
[ "\n Identify that certain predicates are part of a multiword predicate\n (e.g., \"decided to run\") in which case, we don't need to return\n the embedded predicate (\"run\").\n " ]
Please provide a description of the function:def sanitize_label(label: str) -> str: if "-" in label: prefix, suffix = label.split("-") suffix = suffix.split("(")[-1] return f"{prefix}-{suffix}" else: return label
[ "\n Sanitize a BIO label - this deals with OIE\n labels sometimes having some noise, as parentheses.\n " ]
Please provide a description of the function:def batch_to_ids(batch: List[List[str]]) -> torch.Tensor: instances = [] indexer = ELMoTokenCharactersIndexer() for sentence in batch: tokens = [Token(token) for token in sentence] field = TextField(tokens, {'character_ids': indexer}) instance = Instance({"elmo": field}) instances.append(instance) dataset = Batch(instances) vocab = Vocabulary() dataset.index_instances(vocab) return dataset.as_tensor_dict()['elmo']['character_ids']
[ "\n Converts a batch of tokenized sentences to a tensor representing the sentences with encoded characters\n (len(batch), max sentence length, max word length).\n\n Parameters\n ----------\n batch : ``List[List[str]]``, required\n A list of tokenized sentences.\n\n Returns\n -------\n A tensor of padded character ids.\n " ]
Please provide a description of the function:def forward(self, # pylint: disable=arguments-differ inputs: torch.Tensor, word_inputs: torch.Tensor = None) -> Dict[str, Union[torch.Tensor, List[torch.Tensor]]]: # reshape the input if needed original_shape = inputs.size() if len(original_shape) > 3: timesteps, num_characters = original_shape[-2:] reshaped_inputs = inputs.view(-1, timesteps, num_characters) else: reshaped_inputs = inputs if word_inputs is not None: original_word_size = word_inputs.size() if self._has_cached_vocab and len(original_word_size) > 2: reshaped_word_inputs = word_inputs.view(-1, original_word_size[-1]) elif not self._has_cached_vocab: logger.warning("Word inputs were passed to ELMo but it does not have a cached vocab.") reshaped_word_inputs = None else: reshaped_word_inputs = word_inputs else: reshaped_word_inputs = word_inputs # run the biLM bilm_output = self._elmo_lstm(reshaped_inputs, reshaped_word_inputs) layer_activations = bilm_output['activations'] mask_with_bos_eos = bilm_output['mask'] # compute the elmo representations representations = [] for i in range(len(self._scalar_mixes)): scalar_mix = getattr(self, 'scalar_mix_{}'.format(i)) representation_with_bos_eos = scalar_mix(layer_activations, mask_with_bos_eos) if self._keep_sentence_boundaries: processed_representation = representation_with_bos_eos processed_mask = mask_with_bos_eos else: representation_without_bos_eos, mask_without_bos_eos = remove_sentence_boundaries( representation_with_bos_eos, mask_with_bos_eos) processed_representation = representation_without_bos_eos processed_mask = mask_without_bos_eos representations.append(self._dropout(processed_representation)) # reshape if necessary if word_inputs is not None and len(original_word_size) > 2: mask = processed_mask.view(original_word_size) elmo_representations = [representation.view(original_word_size + (-1, )) for representation in representations] elif len(original_shape) > 3: mask = processed_mask.view(original_shape[:-1]) elmo_representations = [representation.view(original_shape[:-1] + (-1, )) for representation in representations] else: mask = processed_mask elmo_representations = representations return {'elmo_representations': elmo_representations, 'mask': mask}
[ "\n Parameters\n ----------\n inputs: ``torch.Tensor``, required.\n Shape ``(batch_size, timesteps, 50)`` of character ids representing the current batch.\n word_inputs : ``torch.Tensor``, required.\n If you passed a cached vocab, you can in addition pass a tensor of shape\n ``(batch_size, timesteps)``, which represent word ids which have been pre-cached.\n\n Returns\n -------\n Dict with keys:\n ``'elmo_representations'``: ``List[torch.Tensor]``\n A ``num_output_representations`` list of ELMo representations for the input sequence.\n Each representation is shape ``(batch_size, timesteps, embedding_dim)``\n ``'mask'``: ``torch.Tensor``\n Shape ``(batch_size, timesteps)`` long tensor with sequence mask.\n " ]
Please provide a description of the function:def forward(self, inputs: torch.Tensor) -> Dict[str, torch.Tensor]: # pylint: disable=arguments-differ # Add BOS/EOS mask = ((inputs > 0).long().sum(dim=-1) > 0).long() character_ids_with_bos_eos, mask_with_bos_eos = add_sentence_boundary_token_ids( inputs, mask, self._beginning_of_sentence_characters, self._end_of_sentence_characters ) # the character id embedding max_chars_per_token = self._options['char_cnn']['max_characters_per_token'] # (batch_size * sequence_length, max_chars_per_token, embed_dim) character_embedding = torch.nn.functional.embedding( character_ids_with_bos_eos.view(-1, max_chars_per_token), self._char_embedding_weights ) # run convolutions cnn_options = self._options['char_cnn'] if cnn_options['activation'] == 'tanh': activation = torch.tanh elif cnn_options['activation'] == 'relu': activation = torch.nn.functional.relu else: raise ConfigurationError("Unknown activation") # (batch_size * sequence_length, embed_dim, max_chars_per_token) character_embedding = torch.transpose(character_embedding, 1, 2) convs = [] for i in range(len(self._convolutions)): conv = getattr(self, 'char_conv_{}'.format(i)) convolved = conv(character_embedding) # (batch_size * sequence_length, n_filters for this width) convolved, _ = torch.max(convolved, dim=-1) convolved = activation(convolved) convs.append(convolved) # (batch_size * sequence_length, n_filters) token_embedding = torch.cat(convs, dim=-1) # apply the highway layers (batch_size * sequence_length, n_filters) token_embedding = self._highways(token_embedding) # final projection (batch_size * sequence_length, embedding_dim) token_embedding = self._projection(token_embedding) # reshape to (batch_size, sequence_length, embedding_dim) batch_size, sequence_length, _ = character_ids_with_bos_eos.size() return { 'mask': mask_with_bos_eos, 'token_embedding': token_embedding.view(batch_size, sequence_length, -1) }
[ "\n Compute context insensitive token embeddings for ELMo representations.\n\n Parameters\n ----------\n inputs: ``torch.Tensor``\n Shape ``(batch_size, sequence_length, 50)`` of character ids representing the\n current batch.\n\n Returns\n -------\n Dict with keys:\n ``'token_embedding'``: ``torch.Tensor``\n Shape ``(batch_size, sequence_length + 2, embedding_dim)`` tensor with context\n insensitive token representations.\n ``'mask'``: ``torch.Tensor``\n Shape ``(batch_size, sequence_length + 2)`` long tensor with sequence mask.\n " ]
Please provide a description of the function:def forward(self, # pylint: disable=arguments-differ inputs: torch.Tensor, word_inputs: torch.Tensor = None) -> Dict[str, Union[torch.Tensor, List[torch.Tensor]]]: if self._word_embedding is not None and word_inputs is not None: try: mask_without_bos_eos = (word_inputs > 0).long() # The character cnn part is cached - just look it up. embedded_inputs = self._word_embedding(word_inputs) # type: ignore # shape (batch_size, timesteps + 2, embedding_dim) type_representation, mask = add_sentence_boundary_token_ids( embedded_inputs, mask_without_bos_eos, self._bos_embedding, self._eos_embedding ) except RuntimeError: # Back off to running the character convolutions, # as we might not have the words in the cache. token_embedding = self._token_embedder(inputs) mask = token_embedding['mask'] type_representation = token_embedding['token_embedding'] else: token_embedding = self._token_embedder(inputs) mask = token_embedding['mask'] type_representation = token_embedding['token_embedding'] lstm_outputs = self._elmo_lstm(type_representation, mask) # Prepare the output. The first layer is duplicated. # Because of minor differences in how masking is applied depending # on whether the char cnn layers are cached, we'll be defensive and # multiply by the mask here. It's not strictly necessary, as the # mask passed on is correct, but the values in the padded areas # of the char cnn representations can change. output_tensors = [ torch.cat([type_representation, type_representation], dim=-1) * mask.float().unsqueeze(-1) ] for layer_activations in torch.chunk(lstm_outputs, lstm_outputs.size(0), dim=0): output_tensors.append(layer_activations.squeeze(0)) return { 'activations': output_tensors, 'mask': mask, }
[ "\n Parameters\n ----------\n inputs: ``torch.Tensor``, required.\n Shape ``(batch_size, timesteps, 50)`` of character ids representing the current batch.\n word_inputs : ``torch.Tensor``, required.\n If you passed a cached vocab, you can in addition pass a tensor of shape ``(batch_size, timesteps)``,\n which represent word ids which have been pre-cached.\n\n Returns\n -------\n Dict with keys:\n\n ``'activations'``: ``List[torch.Tensor]``\n A list of activations at each layer of the network, each of shape\n ``(batch_size, timesteps + 2, embedding_dim)``\n ``'mask'``: ``torch.Tensor``\n Shape ``(batch_size, timesteps + 2)`` long tensor with sequence mask.\n\n Note that the output tensors all include additional special begin and end of sequence\n markers.\n " ]
Please provide a description of the function:def create_cached_cnn_embeddings(self, tokens: List[str]) -> None: tokens = [ELMoCharacterMapper.bos_token, ELMoCharacterMapper.eos_token] + tokens timesteps = 32 batch_size = 32 chunked_tokens = lazy_groups_of(iter(tokens), timesteps) all_embeddings = [] device = get_device_of(next(self.parameters())) for batch in lazy_groups_of(chunked_tokens, batch_size): # Shape (batch_size, timesteps, 50) batched_tensor = batch_to_ids(batch) # NOTE: This device check is for when a user calls this method having # already placed the model on a device. If this is called in the # constructor, it will probably happen on the CPU. This isn't too bad, # because it's only a few convolutions and will likely be very fast. if device >= 0: batched_tensor = batched_tensor.cuda(device) output = self._token_embedder(batched_tensor) token_embedding = output["token_embedding"] mask = output["mask"] token_embedding, _ = remove_sentence_boundaries(token_embedding, mask) all_embeddings.append(token_embedding.view(-1, token_embedding.size(-1))) full_embedding = torch.cat(all_embeddings, 0) # We might have some trailing embeddings from padding in the batch, so # we clip the embedding and lookup to the right size. full_embedding = full_embedding[:len(tokens), :] embedding = full_embedding[2:len(tokens), :] vocab_size, embedding_dim = list(embedding.size()) from allennlp.modules.token_embedders import Embedding # type: ignore self._bos_embedding = full_embedding[0, :] self._eos_embedding = full_embedding[1, :] self._word_embedding = Embedding(vocab_size, # type: ignore embedding_dim, weight=embedding.data, trainable=self._requires_grad, padding_index=0)
[ "\n Given a list of tokens, this method precomputes word representations\n by running just the character convolutions and highway layers of elmo,\n essentially creating uncontextual word vectors. On subsequent forward passes,\n the word ids are looked up from an embedding, rather than being computed on\n the fly via the CNN encoder.\n\n This function sets 3 attributes:\n\n _word_embedding : ``torch.Tensor``\n The word embedding for each word in the tokens passed to this method.\n _bos_embedding : ``torch.Tensor``\n The embedding for the BOS token.\n _eos_embedding : ``torch.Tensor``\n The embedding for the EOS token.\n\n Parameters\n ----------\n tokens : ``List[str]``, required.\n A list of tokens to precompute character convolutions for.\n " ]
Please provide a description of the function:def normalize_text(text: str) -> str: return ' '.join([token for token in text.lower().strip(STRIPPED_CHARACTERS).split() if token not in IGNORED_TOKENS])
[ "\n Performs a normalization that is very similar to that done by the normalization functions in\n SQuAD and TriviaQA.\n\n This involves splitting and rejoining the text, and could be a somewhat expensive operation.\n " ]
Please provide a description of the function:def char_span_to_token_span(token_offsets: List[Tuple[int, int]], character_span: Tuple[int, int]) -> Tuple[Tuple[int, int], bool]: # We have token offsets into the passage from the tokenizer; we _should_ be able to just find # the tokens that have the same offsets as our span. error = False start_index = 0 while start_index < len(token_offsets) and token_offsets[start_index][0] < character_span[0]: start_index += 1 # start_index should now be pointing at the span start index. if token_offsets[start_index][0] > character_span[0]: # In this case, a tokenization or labeling issue made us go too far - the character span # we're looking for actually starts in the previous token. We'll back up one. logger.debug("Bad labelling or tokenization - start offset doesn't match") start_index -= 1 if token_offsets[start_index][0] != character_span[0]: error = True end_index = start_index while end_index < len(token_offsets) and token_offsets[end_index][1] < character_span[1]: end_index += 1 if end_index == start_index and token_offsets[end_index][1] > character_span[1]: # Looks like there was a token that should have been split, like "1854-1855", where the # answer is "1854". We can't do much in this case, except keep the answer as the whole # token. logger.debug("Bad tokenization - end offset doesn't match") elif token_offsets[end_index][1] > character_span[1]: # This is a case where the given answer span is more than one token, and the last token is # cut off for some reason, like "split with Luckett and Rober", when the original passage # said "split with Luckett and Roberson". In this case, we'll just keep the end index # where it is, and assume the intent was to mark the whole token. logger.debug("Bad labelling or tokenization - end offset doesn't match") if token_offsets[end_index][1] != character_span[1]: error = True return (start_index, end_index), error
[ "\n Converts a character span from a passage into the corresponding token span in the tokenized\n version of the passage. If you pass in a character span that does not correspond to complete\n tokens in the tokenized version, we'll do our best, but the behavior is officially undefined.\n We return an error flag in this case, and have some debug logging so you can figure out the\n cause of this issue (in SQuAD, these are mostly either tokenization problems or annotation\n problems; there's a fair amount of both).\n\n The basic outline of this method is to find the token span that has the same offsets as the\n input character span. If the tokenizer tokenized the passage correctly and has matching\n offsets, this is easy. We try to be a little smart about cases where they don't match exactly,\n but mostly just find the closest thing we can.\n\n The returned ``(begin, end)`` indices are `inclusive` for both ``begin`` and ``end``.\n So, for example, ``(2, 2)`` is the one word span beginning at token index 2, ``(3, 4)`` is the\n two-word span beginning at token index 3, and so on.\n\n Returns\n -------\n token_span : ``Tuple[int, int]``\n `Inclusive` span start and end token indices that match as closely as possible to the input\n character spans.\n error : ``bool``\n Whether the token spans match the input character spans exactly. If this is ``False``, it\n means there was an error in either the tokenization or the annotated character span.\n " ]
Please provide a description of the function:def find_valid_answer_spans(passage_tokens: List[Token], answer_texts: List[str]) -> List[Tuple[int, int]]: normalized_tokens = [token.text.lower().strip(STRIPPED_CHARACTERS) for token in passage_tokens] # Because there could be many `answer_texts`, we'll do the most expensive pre-processing # step once. This gives us a map from tokens to the position in the passage they appear. word_positions: Dict[str, List[int]] = defaultdict(list) for i, token in enumerate(normalized_tokens): word_positions[token].append(i) spans = [] for answer_text in answer_texts: # For each answer, we'll first find all valid start positions in the passage. Then # we'll grow each span to the same length as the number of answer tokens, and see if we # have a match. We're a little tricky as we grow the span, skipping words that are # already pruned from the normalized answer text, and stopping early if we don't match. answer_tokens = answer_text.lower().strip(STRIPPED_CHARACTERS).split() num_answer_tokens = len(answer_tokens) for span_start in word_positions[answer_tokens[0]]: span_end = span_start # span_end is _inclusive_ answer_index = 1 while answer_index < num_answer_tokens and span_end + 1 < len(normalized_tokens): token = normalized_tokens[span_end + 1] if answer_tokens[answer_index] == token: answer_index += 1 span_end += 1 elif token in IGNORED_TOKENS: span_end += 1 else: break if num_answer_tokens == answer_index: spans.append((span_start, span_end)) return spans
[ "\n Finds a list of token spans in ``passage_tokens`` that match the given ``answer_texts``. This\n tries to find all spans that would evaluate to correct given the SQuAD and TriviaQA official\n evaluation scripts, which do some normalization of the input text.\n\n Note that this could return duplicate spans! The caller is expected to be able to handle\n possible duplicates (as already happens in the SQuAD dev set, for instance).\n " ]
Please provide a description of the function:def make_reading_comprehension_instance(question_tokens: List[Token], passage_tokens: List[Token], token_indexers: Dict[str, TokenIndexer], passage_text: str, token_spans: List[Tuple[int, int]] = None, answer_texts: List[str] = None, additional_metadata: Dict[str, Any] = None) -> Instance: additional_metadata = additional_metadata or {} fields: Dict[str, Field] = {} passage_offsets = [(token.idx, token.idx + len(token.text)) for token in passage_tokens] # This is separate so we can reference it later with a known type. passage_field = TextField(passage_tokens, token_indexers) fields['passage'] = passage_field fields['question'] = TextField(question_tokens, token_indexers) metadata = {'original_passage': passage_text, 'token_offsets': passage_offsets, 'question_tokens': [token.text for token in question_tokens], 'passage_tokens': [token.text for token in passage_tokens], } if answer_texts: metadata['answer_texts'] = answer_texts if token_spans: # There may be multiple answer annotations, so we pick the one that occurs the most. This # only matters on the SQuAD dev set, and it means our computed metrics ("start_acc", # "end_acc", and "span_acc") aren't quite the same as the official metrics, which look at # all of the annotations. This is why we have a separate official SQuAD metric calculation # (the "em" and "f1" metrics use the official script). candidate_answers: Counter = Counter() for span_start, span_end in token_spans: candidate_answers[(span_start, span_end)] += 1 span_start, span_end = candidate_answers.most_common(1)[0][0] fields['span_start'] = IndexField(span_start, passage_field) fields['span_end'] = IndexField(span_end, passage_field) metadata.update(additional_metadata) fields['metadata'] = MetadataField(metadata) return Instance(fields)
[ "\n Converts a question, a passage, and an optional answer (or answers) to an ``Instance`` for use\n in a reading comprehension model.\n\n Creates an ``Instance`` with at least these fields: ``question`` and ``passage``, both\n ``TextFields``; and ``metadata``, a ``MetadataField``. Additionally, if both ``answer_texts``\n and ``char_span_starts`` are given, the ``Instance`` has ``span_start`` and ``span_end``\n fields, which are both ``IndexFields``.\n\n Parameters\n ----------\n question_tokens : ``List[Token]``\n An already-tokenized question.\n passage_tokens : ``List[Token]``\n An already-tokenized passage that contains the answer to the given question.\n token_indexers : ``Dict[str, TokenIndexer]``\n Determines how the question and passage ``TextFields`` will be converted into tensors that\n get input to a model. See :class:`TokenIndexer`.\n passage_text : ``str``\n The original passage text. We need this so that we can recover the actual span from the\n original passage that the model predicts as the answer to the question. This is used in\n official evaluation scripts.\n token_spans : ``List[Tuple[int, int]]``, optional\n Indices into ``passage_tokens`` to use as the answer to the question for training. This is\n a list because there might be several possible correct answer spans in the passage.\n Currently, we just select the most frequent span in this list (i.e., SQuAD has multiple\n annotations on the dev set; this will select the span that the most annotators gave as\n correct).\n answer_texts : ``List[str]``, optional\n All valid answer strings for the given question. In SQuAD, e.g., the training set has\n exactly one answer per question, but the dev and test sets have several. TriviaQA has many\n possible answers, which are the aliases for the known correct entity. This is put into the\n metadata for use with official evaluation scripts, but not used anywhere else.\n additional_metadata : ``Dict[str, Any]``, optional\n The constructed ``metadata`` field will by default contain ``original_passage``,\n ``token_offsets``, ``question_tokens``, ``passage_tokens``, and ``answer_texts`` keys. If\n you want any other metadata to be associated with each instance, you can pass that in here.\n This dictionary will get added to the ``metadata`` dictionary we already construct.\n " ]
Please provide a description of the function:def make_reading_comprehension_instance_quac(question_list_tokens: List[List[Token]], passage_tokens: List[Token], token_indexers: Dict[str, TokenIndexer], passage_text: str, token_span_lists: List[List[Tuple[int, int]]] = None, yesno_list: List[int] = None, followup_list: List[int] = None, additional_metadata: Dict[str, Any] = None, num_context_answers: int = 0) -> Instance: additional_metadata = additional_metadata or {} fields: Dict[str, Field] = {} passage_offsets = [(token.idx, token.idx + len(token.text)) for token in passage_tokens] # This is separate so we can reference it later with a known type. passage_field = TextField(passage_tokens, token_indexers) fields['passage'] = passage_field fields['question'] = ListField([TextField(q_tokens, token_indexers) for q_tokens in question_list_tokens]) metadata = {'original_passage': passage_text, 'token_offsets': passage_offsets, 'question_tokens': [[token.text for token in question_tokens] \ for question_tokens in question_list_tokens], 'passage_tokens': [token.text for token in passage_tokens], } p1_answer_marker_list: List[Field] = [] p2_answer_marker_list: List[Field] = [] p3_answer_marker_list: List[Field] = [] def get_tag(i, i_name): # Generate a tag to mark previous answer span in the passage. return "<{0:d}_{1:s}>".format(i, i_name) def mark_tag(span_start, span_end, passage_tags, prev_answer_distance): try: assert span_start >= 0 assert span_end >= 0 except: raise ValueError("Previous {0:d}th answer span should have been updated!".format(prev_answer_distance)) # Modify "tags" to mark previous answer span. if span_start == span_end: passage_tags[prev_answer_distance][span_start] = get_tag(prev_answer_distance, "") else: passage_tags[prev_answer_distance][span_start] = get_tag(prev_answer_distance, "start") passage_tags[prev_answer_distance][span_end] = get_tag(prev_answer_distance, "end") for passage_index in range(span_start + 1, span_end): passage_tags[prev_answer_distance][passage_index] = get_tag(prev_answer_distance, "in") if token_span_lists: span_start_list: List[Field] = [] span_end_list: List[Field] = [] p1_span_start, p1_span_end, p2_span_start = -1, -1, -1 p2_span_end, p3_span_start, p3_span_end = -1, -1, -1 # Looping each <<answers>>.
for question_index, answer_span_lists in enumerate(token_span_lists): span_start, span_end = answer_span_lists[-1] # Last one is the original answer span_start_list.append(IndexField(span_start, passage_field)) span_end_list.append(IndexField(span_end, passage_field)) prev_answer_marker_lists = [["O"] * len(passage_tokens), ["O"] * len(passage_tokens), ["O"] * len(passage_tokens), ["O"] * len(passage_tokens)] if question_index > 0 and num_context_answers > 0: mark_tag(p1_span_start, p1_span_end, prev_answer_marker_lists, 1) if question_index > 1 and num_context_answers > 1: mark_tag(p2_span_start, p2_span_end, prev_answer_marker_lists, 2) if question_index > 2 and num_context_answers > 2: mark_tag(p3_span_start, p3_span_end, prev_answer_marker_lists, 3) p3_span_start = p2_span_start p3_span_end = p2_span_end p2_span_start = p1_span_start p2_span_end = p1_span_end p1_span_start = span_start p1_span_end = span_end if num_context_answers > 2: p3_answer_marker_list.append(SequenceLabelField(prev_answer_marker_lists[3], passage_field, label_namespace="answer_tags")) if num_context_answers > 1: p2_answer_marker_list.append(SequenceLabelField(prev_answer_marker_lists[2], passage_field, label_namespace="answer_tags")) if num_context_answers > 0: p1_answer_marker_list.append(SequenceLabelField(prev_answer_marker_lists[1], passage_field, label_namespace="answer_tags")) fields['span_start'] = ListField(span_start_list) fields['span_end'] = ListField(span_end_list) if num_context_answers > 0: fields['p1_answer_marker'] = ListField(p1_answer_marker_list) if num_context_answers > 1: fields['p2_answer_marker'] = ListField(p2_answer_marker_list) if num_context_answers > 2: fields['p3_answer_marker'] = ListField(p3_answer_marker_list) fields['yesno_list'] = ListField( \ [LabelField(yesno, label_namespace="yesno_labels") for yesno in yesno_list]) fields['followup_list'] = ListField([LabelField(followup, label_namespace="followup_labels") \ for followup in followup_list]) metadata.update(additional_metadata) fields['metadata'] = MetadataField(metadata) return Instance(fields)
[ "\n Converts a question, a passage, and an optional answer (or answers) to an ``Instance`` for use\n in a reading comprehension model.\n\n Creates an ``Instance`` with at least these fields: ``question`` and ``passage``, both\n ``TextFields``; and ``metadata``, a ``MetadataField``. Additionally, if both ``answer_texts``\n and ``char_span_starts`` are given, the ``Instance`` has ``span_start`` and ``span_end``\n fields, which are both ``IndexFields``.\n\n Parameters\n ----------\n question_list_tokens : ``List[List[Token]]``\n An already-tokenized list of questions. Each dialog have multiple questions.\n passage_tokens : ``List[Token]``\n An already-tokenized passage that contains the answer to the given question.\n token_indexers : ``Dict[str, TokenIndexer]``\n Determines how the question and passage ``TextFields`` will be converted into tensors that\n get input to a model. See :class:`TokenIndexer`.\n passage_text : ``str``\n The original passage text. We need this so that we can recover the actual span from the\n original passage that the model predicts as the answer to the question. This is used in\n official evaluation scripts.\n token_span_lists : ``List[List[Tuple[int, int]]]``, optional\n Indices into ``passage_tokens`` to use as the answer to the question for training. This is\n a list of list, first because there is multiple questions per dialog, and\n because there might be several possible correct answer spans in the passage.\n Currently, we just select the last span in this list (i.e., QuAC has multiple\n annotations on the dev set; this will select the last span, which was given by the original annotator).\n yesno_list : ``List[int]``\n List of the affirmation bit for each question answer pairs.\n followup_list : ``List[int]``\n List of the continuation bit for each question answer pairs.\n num_context_answers : ``int``, optional\n How many answers to encode into the passage.\n additional_metadata : ``Dict[str, Any]``, optional\n The constructed ``metadata`` field will by default contain ``original_passage``,\n ``token_offsets``, ``question_tokens``, ``passage_tokens``, and ``answer_texts`` keys. If\n you want any other metadata to be associated with each instance, you can pass that in here.\n This dictionary will get added to the ``metadata`` dictionary we already construct.\n " ]
Please provide a description of the function:def handle_cannot(reference_answers: List[str]): num_cannot = 0 num_spans = 0 for ref in reference_answers: if ref == 'CANNOTANSWER': num_cannot += 1 else: num_spans += 1 if num_cannot >= num_spans: reference_answers = ['CANNOTANSWER'] else: reference_answers = [x for x in reference_answers if x != 'CANNOTANSWER'] return reference_answers
[ "\n Process a list of reference answers.\n If equal or more than half of the reference answers are \"CANNOTANSWER\", take it as gold.\n Otherwise, return answers that are not \"CANNOTANSWER\".\n " ]
Please provide a description of the function:def get_best_span(span_start_logits: torch.Tensor, span_end_logits: torch.Tensor) -> torch.Tensor: if span_start_logits.dim() != 2 or span_end_logits.dim() != 2: raise ValueError("Input shapes must be (batch_size, passage_length)") batch_size, passage_length = span_start_logits.size() device = span_start_logits.device # (batch_size, passage_length, passage_length) span_log_probs = span_start_logits.unsqueeze(2) + span_end_logits.unsqueeze(1) # Only the upper triangle of the span matrix is valid; the lower triangle has entries where # the span ends before it starts. span_log_mask = torch.triu(torch.ones((passage_length, passage_length), device=device)).log() valid_span_log_probs = span_log_probs + span_log_mask # Here we take the span matrix and flatten it, then find the best span using argmax. We # can recover the start and end indices from this flattened list using simple modular # arithmetic. # (batch_size, passage_length * passage_length) best_spans = valid_span_log_probs.view(batch_size, -1).argmax(-1) span_start_indices = best_spans // passage_length span_end_indices = best_spans % passage_length return torch.stack([span_start_indices, span_end_indices], dim=-1)
[ "\n This acts the same as the static method ``BidirectionalAttentionFlow.get_best_span()``\n in ``allennlp/models/reading_comprehension/bidaf.py``. We keep it here so that users can\n directly import this function without the class.\n\n We call the inputs \"logits\" - they could either be unnormalized logits or normalized log\n probabilities. A log_softmax operation is a constant shifting of the entire logit\n vector, so taking an argmax over either one gives the same result.\n " ]
Please provide a description of the function:def batch_split_words(self, sentences: List[str]) -> List[List[Token]]: return [self.split_words(sentence) for sentence in sentences]
[ "\n Spacy needs to do batch processing, or it can be really slow. This method lets you take\n advantage of that if you want. Default implementation is to just iterate of the sentences\n and call ``split_words``, but the ``SpacyWordSplitter`` will actually do batched\n processing.\n " ]
Please provide a description of the function:def constrained_to(self, initial_sequence: torch.Tensor, keep_beam_details: bool = True) -> 'BeamSearch': return BeamSearch(self._beam_size, self._per_node_beam_size, initial_sequence, keep_beam_details)
[ "\n Return a new BeamSearch instance that's like this one but with the specified constraint.\n " ]
Please provide a description of the function:def search(self, num_steps: int, initial_state: StateType, transition_function: TransitionFunction, keep_final_unfinished_states: bool = True) -> Dict[int, List[StateType]]: finished_states: Dict[int, List[StateType]] = defaultdict(list) states = [initial_state] step_num = 1 # Erase stored beams, if we're tracking them. if self.beam_snapshots is not None: self.beam_snapshots = defaultdict(list) while states and step_num <= num_steps: next_states: Dict[int, List[StateType]] = defaultdict(list) grouped_state = states[0].combine_states(states) if self._allowed_transitions: # We were provided an initial sequence, so we need to check # if the current sequence is still constrained. key = tuple(grouped_state.action_history[0]) if key in self._allowed_transitions: # We're still in the initial_sequence, so our hand is forced. allowed_actions = [self._allowed_transitions[key]] else: # We've gone past the end of the initial sequence, so no constraint. allowed_actions = None else: # No initial sequence was provided, so all actions are allowed. allowed_actions = None for next_state in transition_function.take_step(grouped_state, max_actions=self._per_node_beam_size, allowed_actions=allowed_actions): # NOTE: we're doing state.batch_indices[0] here (and similar things below), # hard-coding a group size of 1. But, our use of `next_state.is_finished()` # already checks for that, as it crashes if the group size is not 1. batch_index = next_state.batch_indices[0] if next_state.is_finished(): finished_states[batch_index].append(next_state) else: if step_num == num_steps and keep_final_unfinished_states: finished_states[batch_index].append(next_state) next_states[batch_index].append(next_state) states = [] for batch_index, batch_states in next_states.items(): # The states from the generator are already sorted, so we can just take the first # ones here, without an additional sort. states.extend(batch_states[:self._beam_size]) if self.beam_snapshots is not None: # Add to beams self.beam_snapshots[batch_index].append( [(state.score[0].item(), state.action_history[0]) for state in batch_states] ) step_num += 1 # Add finished states to the stored beams as well. if self.beam_snapshots is not None: for batch_index, states in finished_states.items(): for state in states: score = state.score[0].item() action_history = state.action_history[0] while len(self.beam_snapshots[batch_index]) < len(action_history): self.beam_snapshots[batch_index].append([]) self.beam_snapshots[batch_index][len(action_history) - 1].append((score, action_history)) best_states: Dict[int, List[StateType]] = {} for batch_index, batch_states in finished_states.items(): # The time this sort takes is pretty negligible, no particular need to optimize this # yet. Maybe with a larger beam size... finished_to_sort = [(-state.score[0].item(), state) for state in batch_states] finished_to_sort.sort(key=lambda x: x[0]) best_states[batch_index] = [state[1] for state in finished_to_sort[:self._beam_size]] return best_states
[ "\n Parameters\n ----------\n num_steps : ``int``\n How many steps should we take in our search? This is an upper bound, as it's possible\n for the search to run out of valid actions before hitting this number, or for all\n states on the beam to finish.\n initial_state : ``StateType``\n The starting state of our search. This is assumed to be `batched`, and our beam search\n is batch-aware - we'll keep ``beam_size`` states around for each instance in the batch.\n transition_function : ``TransitionFunction``\n The ``TransitionFunction`` object that defines and scores transitions from one state to the\n next.\n keep_final_unfinished_states : ``bool``, optional (default=True)\n If we run out of steps before a state is \"finished\", should we return that state in our\n search results?\n\n Returns\n -------\n best_states : ``Dict[int, List[StateType]]``\n This is a mapping from batch index to the top states for that instance.\n " ]
Please provide a description of the function:def _normalize_answer(text: str) -> str: parts = [_white_space_fix(_remove_articles(_normalize_number(_remove_punc(_lower(token))))) for token in _tokenize(text)] parts = [part for part in parts if part.strip()] normalized = ' '.join(parts).strip() return normalized
[ "Lower text and remove punctuation, articles and extra whitespace." ]
Please provide a description of the function:def _align_bags(predicted: List[Set[str]], gold: List[Set[str]]) -> List[float]: f1_scores = [] for gold_index, gold_item in enumerate(gold): max_f1 = 0.0 max_index = None best_alignment: Tuple[Set[str], Set[str]] = (set(), set()) if predicted: for pred_index, pred_item in enumerate(predicted): current_f1 = _compute_f1(pred_item, gold_item) if current_f1 >= max_f1: best_alignment = (gold_item, pred_item) max_f1 = current_f1 max_index = pred_index match_flag = _match_numbers_if_present(*best_alignment) gold[gold_index] = set() predicted[max_index] = set() else: match_flag = False if match_flag: f1_scores.append(max_f1) else: f1_scores.append(0.0) return f1_scores
[ "\n Takes gold and predicted answer sets and first finds a greedy 1-1 alignment\n between them and gets maximum metric values over all the answers\n " ]
Please provide a description of the function:def get_metrics(predicted: Union[str, List[str], Tuple[str, ...]], gold: Union[str, List[str], Tuple[str, ...]]) -> Tuple[float, float]: predicted_bags = _answer_to_bags(predicted) gold_bags = _answer_to_bags(gold) exact_match = 1.0 if predicted_bags[0] == gold_bags[0] else 0 f1_per_bag = _align_bags(predicted_bags[1], gold_bags[1]) f1 = np.mean(f1_per_bag) f1 = round(f1, 2) return exact_match, f1
[ "\n Takes a predicted answer and a gold answer (that are both either a string or a list of\n strings), and returns exact match and the DROP F1 metric for the prediction. If you are\n writing a script for evaluating objects in memory (say, the output of predictions during\n validation, or while training), this is the function you want to call, after using\n :func:`answer_json_to_strings` when reading the gold answer from the released data file.\n " ]
Please provide a description of the function:def answer_json_to_strings(answer: Dict[str, Any]) -> Tuple[Tuple[str, ...], str]: if "number" in answer and answer["number"]: return tuple([str(answer["number"])]), "number" elif "spans" in answer and answer["spans"]: return tuple(answer["spans"]), "span" if len(answer["spans"]) == 1 else "spans" elif "date" in answer: return tuple(["{0} {1} {2}".format(answer["date"]["day"], answer["date"]["month"], answer["date"]["year"])]), "date" else: raise ValueError(f"Answer type not found, should be one of number, spans or date at: {json.dumps(answer)}")
[ "\n Takes an answer JSON blob from the DROP data release and converts it into strings used for\n evaluation.\n " ]
Please provide a description of the function:def evaluate_json(annotations: Dict[str, Any], predicted_answers: Dict[str, Any]) -> Tuple[float, float]: instance_exact_match = [] instance_f1 = [] # for each type as well type_to_em: Dict[str, List[float]] = defaultdict(list) type_to_f1: Dict[str, List[float]] = defaultdict(list) for _, annotation in annotations.items(): for qa_pair in annotation["qa_pairs"]: query_id = qa_pair["query_id"] max_em_score = 0.0 max_f1_score = 0.0 max_type = None if query_id in predicted_answers: predicted = predicted_answers[query_id] candidate_answers = [qa_pair["answer"]] if "validated_answers" in qa_pair and qa_pair["validated_answers"]: candidate_answers += qa_pair["validated_answers"] for answer in candidate_answers: gold_answer, gold_type = answer_json_to_strings(answer) em_score, f1_score = get_metrics(predicted, gold_answer) if gold_answer[0].strip() != "": max_em_score = max(max_em_score, em_score) max_f1_score = max(max_f1_score, f1_score) if max_em_score == em_score or max_f1_score == f1_score: max_type = gold_type else: print("Missing prediction for question: {}".format(query_id)) if qa_pair and qa_pair["answer"]: max_type = answer_json_to_strings(qa_pair["answer"])[1] else: max_type = "number" max_em_score = 0.0 max_f1_score = 0.0 instance_exact_match.append(max_em_score) instance_f1.append(max_f1_score) type_to_em[max_type].append(max_em_score) type_to_f1[max_type].append(max_f1_score) global_em = np.mean(instance_exact_match) global_f1 = np.mean(instance_f1) print("Exact-match accuracy {0:.2f}".format(global_em * 100)) print("F1 score {0:.2f}".format(global_f1 * 100)) print("{0:.2f} & {1:.2f}".format(global_em * 100, global_f1 * 100)) print("----") total = np.sum([len(v) for v in type_to_em.values()]) for typ in sorted(type_to_em.keys()): print("{0}: {1} ({2:.2f}%)".format(typ, len(type_to_em[typ]), 100. * len(type_to_em[typ])/total)) print(" Exact-match accuracy {0:.3f}".format(100. * np.mean(type_to_em[typ]))) print(" F1 score {0:.3f}".format(100. * np.mean(type_to_f1[typ]))) return global_em, global_f1
[ "\n Takes gold annotations and predicted answers and evaluates the predictions for each question\n in the gold annotations. Both JSON dictionaries must have query_id keys, which are used to\n match predictions to gold annotations (note that these are somewhat deep in the JSON for the\n gold annotations, but must be top-level keys in the predicted answers).\n\n The ``annotations`` are assumed to have the format of the dev set in the DROP data release.\n The ``predicted_answers`` JSON must be a dictionary keyed by query id, where the value is a string\n (or list of strings) that is the answer.\n " ]
Please provide a description of the function:def evaluate_prediction_file(prediction_path: str, gold_path: str) -> Tuple[float, float]: predicted_answers = json.load(open(prediction_path, encoding='utf-8')) annotations = json.load(open(gold_path, encoding='utf-8')) return evaluate_json(annotations, predicted_answers)
[ "\n Takes a prediction file and a gold file and evaluates the predictions for each question in the\n gold file. Both files must be json formatted and must have query_id keys, which are used to\n match predictions to gold annotations. The gold file is assumed to have the format of the dev\n set in the DROP data release. The prediction file must be a JSON dictionary keyed by query id,\n where the value is either a JSON dictionary with an \"answer\" key, or just a string (or list of\n strings) that is the answer.\n " ]
Please provide a description of the function:def cache_data(self, cache_directory: str) -> None: self._cache_directory = pathlib.Path(cache_directory) os.makedirs(self._cache_directory, exist_ok=True)
[ "\n When you call this method, we will use this directory to store a cache of already-processed\n ``Instances`` in every file passed to :func:`read`, serialized as one string-formatted\n ``Instance`` per line. If the cache file for a given ``file_path`` exists, we read the\n ``Instances`` from the cache instead of re-processing the data (using\n :func:`deserialize_instance`). If the cache file does `not` exist, we will `create` it on\n our first pass through the data (using :func:`serialize_instance`).\n\n IMPORTANT CAVEAT: It is the `caller's` responsibility to make sure that this directory is\n unique for any combination of code and parameters that you use. That is, if you call this\n method, we will use any existing cache files in that directory `regardless of the\n parameters you set for this DatasetReader!` If you use our commands, the ``Train`` command\n is responsible for calling this method and ensuring that unique parameters correspond to\n unique cache directories. If you don't use our commands, that is your responsibility.\n " ]