INSTRUCTION (string, lengths 1 to 46.3k) | RESPONSE (string, lengths 75 to 80.2k)
Sort a batch first tensor by some specified lengths. Parameters ---------- tensor : torch.FloatTensor, required. A batch first Pytorch tensor. sequence_lengths : torch.LongTensor, required. A tensor representing the lengths of some dimension of the tensor which we want to sort by. Returns ------- sorted_tensor : torch.FloatTensor The original tensor sorted along the batch dimension with respect to sequence_lengths. sorted_sequence_lengths : torch.LongTensor The original sequence_lengths sorted by decreasing size. restoration_indices : torch.LongTensor Indices into the sorted_tensor such that ``sorted_tensor.index_select(0, restoration_indices) == original_tensor`` permutation_index : torch.LongTensor The indices used to sort the tensor. This is useful if you want to sort many tensors using the same ordering.
def sort_batch_by_length(tensor: torch.Tensor, sequence_lengths: torch.Tensor): """ Sort a batch first tensor by some specified lengths. Parameters ---------- tensor : torch.FloatTensor, required. A batch first Pytorch tensor. sequence_lengths : torch.LongTensor, required. A tensor representing the lengths of some dimension of the tensor which we want to sort by. Returns ------- sorted_tensor : torch.FloatTensor The original tensor sorted along the batch dimension with respect to sequence_lengths. sorted_sequence_lengths : torch.LongTensor The original sequence_lengths sorted by decreasing size. restoration_indices : torch.LongTensor Indices into the sorted_tensor such that ``sorted_tensor.index_select(0, restoration_indices) == original_tensor`` permutation_index : torch.LongTensor The indices used to sort the tensor. This is useful if you want to sort many tensors using the same ordering. """ if not isinstance(tensor, torch.Tensor) or not isinstance(sequence_lengths, torch.Tensor): raise ConfigurationError("Both the tensor and sequence lengths must be torch.Tensors.") sorted_sequence_lengths, permutation_index = sequence_lengths.sort(0, descending=True) sorted_tensor = tensor.index_select(0, permutation_index) index_range = torch.arange(0, len(sequence_lengths), device=sequence_lengths.device) # This is the equivalent of zipping with index, sorting by the original # sequence lengths and returning the now sorted indices. _, reverse_mapping = permutation_index.sort(0, descending=False) restoration_indices = index_range.index_select(0, reverse_mapping) return sorted_tensor, sorted_sequence_lengths, restoration_indices, permutation_index
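A minimal usage sketch for the function above (shapes and lengths are illustrative; the function is assumed to be in scope):

import torch

tensor = torch.rand(3, 5, 4)                # (batch_size, timesteps, dim), batch first
sequence_lengths = torch.tensor([2, 5, 3])  # unsorted lengths
sorted_tensor, sorted_lengths, restoration_indices, permutation_index = sort_batch_by_length(tensor, sequence_lengths)
# sorted_lengths is [5, 3, 2]; the restoration indices undo the sort:
assert torch.equal(sorted_tensor.index_select(0, restoration_indices), tensor)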
Given the output from a ``Seq2SeqEncoder``, with shape ``(batch_size, sequence_length, encoding_dim)``, this method returns the final hidden state for each element of the batch, giving a tensor of shape ``(batch_size, encoding_dim)``. This is not as simple as ``encoder_outputs[:, -1]``, because the sequences could have different lengths. We use the mask (which has shape ``(batch_size, sequence_length)``) to find the final state for each batch instance. Additionally, if ``bidirectional`` is ``True``, we will split the final dimension of the ``encoder_outputs`` into two and assume that the first half is for the forward direction of the encoder and the second half is for the backward direction. We will concatenate the last state for each encoder dimension, giving ``encoder_outputs[:, -1, :encoding_dim/2]`` concatenated with ``encoder_outputs[:, 0, encoding_dim/2:]``.
def get_final_encoder_states(encoder_outputs: torch.Tensor, mask: torch.Tensor, bidirectional: bool = False) -> torch.Tensor: """ Given the output from a ``Seq2SeqEncoder``, with shape ``(batch_size, sequence_length, encoding_dim)``, this method returns the final hidden state for each element of the batch, giving a tensor of shape ``(batch_size, encoding_dim)``. This is not as simple as ``encoder_outputs[:, -1]``, because the sequences could have different lengths. We use the mask (which has shape ``(batch_size, sequence_length)``) to find the final state for each batch instance. Additionally, if ``bidirectional`` is ``True``, we will split the final dimension of the ``encoder_outputs`` into two and assume that the first half is for the forward direction of the encoder and the second half is for the backward direction. We will concatenate the last state for each encoder dimension, giving ``encoder_outputs[:, -1, :encoding_dim/2]`` concatenated with ``encoder_outputs[:, 0, encoding_dim/2:]``. """ # These are the indices of the last words in the sequences (i.e. length sans padding - 1). We # are assuming sequences are right padded. # Shape: (batch_size,) last_word_indices = mask.sum(1).long() - 1 batch_size, _, encoder_output_dim = encoder_outputs.size() expanded_indices = last_word_indices.view(-1, 1, 1).expand(batch_size, 1, encoder_output_dim) # Shape: (batch_size, 1, encoder_output_dim) final_encoder_output = encoder_outputs.gather(1, expanded_indices) final_encoder_output = final_encoder_output.squeeze(1) # (batch_size, encoder_output_dim) if bidirectional: final_forward_output = final_encoder_output[:, :(encoder_output_dim // 2)] final_backward_output = encoder_outputs[:, 0, (encoder_output_dim // 2):] final_encoder_output = torch.cat([final_forward_output, final_backward_output], dim=-1) return final_encoder_output
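A small sketch of the bidirectional case, using made-up encoder outputs (the function above is assumed to be in scope):

import torch

encoder_outputs = torch.rand(2, 4, 6)   # (batch_size, sequence_length, encoding_dim)
mask = torch.tensor([[1, 1, 1, 0],      # first sequence has length 3
                     [1, 1, 1, 1]])     # second sequence has length 4
final_states = get_final_encoder_states(encoder_outputs, mask, bidirectional=True)
# Shape (2, 6): the forward half comes from the last unmasked timestep,
# the backward half from timestep 0.
assert torch.equal(final_states[0, :3], encoder_outputs[0, 2, :3])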
Computes and returns an element-wise dropout mask for a given tensor, where each element in the mask is dropped out with probability dropout_probability. Note that the mask is NOT applied to the tensor - the tensor is passed to retain the correct CUDA tensor type for the mask. Parameters ---------- dropout_probability : float, required. Probability of dropping a dimension of the input. tensor_for_masking : torch.Tensor, required. Returns ------- A torch.FloatTensor consisting of the binary mask scaled by 1/ (1 - dropout_probability). This scaling ensures expected values and variances of the output of applying this mask and the original tensor are the same.
def get_dropout_mask(dropout_probability: float, tensor_for_masking: torch.Tensor): """ Computes and returns an element-wise dropout mask for a given tensor, where each element in the mask is dropped out with probability dropout_probability. Note that the mask is NOT applied to the tensor - the tensor is passed to retain the correct CUDA tensor type for the mask. Parameters ---------- dropout_probability : float, required. Probability of dropping a dimension of the input. tensor_for_masking : torch.Tensor, required. Returns ------- A torch.FloatTensor consisting of the binary mask scaled by 1/ (1 - dropout_probability). This scaling ensures expected values and variances of the output of applying this mask and the original tensor are the same. """ binary_mask = (torch.rand(tensor_for_masking.size()) > dropout_probability).to(tensor_for_masking.device) # Scale mask by 1/keep_prob to preserve output statistics. dropout_mask = binary_mask.float().div(1.0 - dropout_probability) return dropout_mask
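Since the mask is not applied for you, a typical use looks like this sketch (illustrative shapes):

import torch

hidden_state = torch.rand(2, 5)
dropout_mask = get_dropout_mask(0.3, hidden_state)  # binary mask scaled by 1 / (1 - 0.3)
dropped = hidden_state * dropout_mask               # apply the mask yourself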
``torch.nn.functional.softmax(vector)`` does not work if some elements of ``vector`` should be masked. This performs a softmax on just the non-masked portions of ``vector``. Passing ``None`` in for the mask is also acceptable; you'll just get a regular softmax. ``vector`` can have an arbitrary number of dimensions; the only requirement is that ``mask`` is broadcastable to ``vector's`` shape. If ``mask`` has fewer dimensions than ``vector``, we will unsqueeze on dimension 1 until they match. If you need a different unsqueezing of your mask, do it yourself before passing the mask into this function. If ``memory_efficient`` is set to true, we will simply use a very large negative number for those masked positions so that the probabilities of those positions would be approximately 0. This is not accurate in math, but works for most cases and consumes less memory. In the case that the input vector is completely masked and ``memory_efficient`` is false, this function returns an array of ``0.0``. This behavior may cause ``NaN`` if this is used as the last layer of a model that uses categorical cross-entropy loss. Instead, if ``memory_efficient`` is true, this function will treat every element as equal, and do softmax over equal numbers.
def masked_softmax(vector: torch.Tensor, mask: torch.Tensor, dim: int = -1, memory_efficient: bool = False, mask_fill_value: float = -1e32) -> torch.Tensor: """ ``torch.nn.functional.softmax(vector)`` does not work if some elements of ``vector`` should be masked. This performs a softmax on just the non-masked portions of ``vector``. Passing ``None`` in for the mask is also acceptable; you'll just get a regular softmax. ``vector`` can have an arbitrary number of dimensions; the only requirement is that ``mask`` is broadcastable to ``vector's`` shape. If ``mask`` has fewer dimensions than ``vector``, we will unsqueeze on dimension 1 until they match. If you need a different unsqueezing of your mask, do it yourself before passing the mask into this function. If ``memory_efficient`` is set to true, we will simply use a very large negative number for those masked positions so that the probabilities of those positions would be approximately 0. This is not accurate in math, but works for most cases and consumes less memory. In the case that the input vector is completely masked and ``memory_efficient`` is false, this function returns an array of ``0.0``. This behavior may cause ``NaN`` if this is used as the last layer of a model that uses categorical cross-entropy loss. Instead, if ``memory_efficient`` is true, this function will treat every element as equal, and do softmax over equal numbers. """ if mask is None: result = torch.nn.functional.softmax(vector, dim=dim) else: mask = mask.float() while mask.dim() < vector.dim(): mask = mask.unsqueeze(1) if not memory_efficient: # To limit numerical errors from large vector elements outside the mask, we zero these out. result = torch.nn.functional.softmax(vector * mask, dim=dim) result = result * mask result = result / (result.sum(dim=dim, keepdim=True) + 1e-13) else: masked_vector = vector.masked_fill((1 - mask).byte(), mask_fill_value) result = torch.nn.functional.softmax(masked_vector, dim=dim) return result
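A quick sketch comparing the default and memory-efficient paths (toy values):

import torch

vector = torch.tensor([[1.0, 2.0, 3.0]])
mask = torch.tensor([[1, 1, 0]])
probs = masked_softmax(vector, mask)
probs_mem = masked_softmax(vector, mask, memory_efficient=True)
# Both put (essentially) all probability mass on the first two positions,
# and each row sums to approximately 1.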
``torch.nn.functional.log_softmax(vector)`` does not work if some elements of ``vector`` should be masked. This performs a log_softmax on just the non-masked portions of ``vector``. Passing ``None`` in for the mask is also acceptable; you'll just get a regular log_softmax. ``vector`` can have an arbitrary number of dimensions; the only requirement is that ``mask`` is broadcastable to ``vector's`` shape. If ``mask`` has fewer dimensions than ``vector``, we will unsqueeze on dimension 1 until they match. If you need a different unsqueezing of your mask, do it yourself before passing the mask into this function. In the case that the input vector is completely masked, the return value of this function is arbitrary, but not ``nan``. You should be masking the result of whatever computation comes out of this in that case, anyway, so the specific values returned shouldn't matter. Also, the way that we deal with this case relies on having single-precision floats; mixing half-precision floats with fully-masked vectors will likely give you ``nans``. If your logits are all extremely negative (i.e., the max value in your logit vector is -50 or lower), the way we handle masking here could mess you up. But if you've got logit values that extreme, you've got bigger problems than this.
def masked_log_softmax(vector: torch.Tensor, mask: torch.Tensor, dim: int = -1) -> torch.Tensor: """ ``torch.nn.functional.log_softmax(vector)`` does not work if some elements of ``vector`` should be masked. This performs a log_softmax on just the non-masked portions of ``vector``. Passing ``None`` in for the mask is also acceptable; you'll just get a regular log_softmax. ``vector`` can have an arbitrary number of dimensions; the only requirement is that ``mask`` is broadcastable to ``vector's`` shape. If ``mask`` has fewer dimensions than ``vector``, we will unsqueeze on dimension 1 until they match. If you need a different unsqueezing of your mask, do it yourself before passing the mask into this function. In the case that the input vector is completely masked, the return value of this function is arbitrary, but not ``nan``. You should be masking the result of whatever computation comes out of this in that case, anyway, so the specific values returned shouldn't matter. Also, the way that we deal with this case relies on having single-precision floats; mixing half-precision floats with fully-masked vectors will likely give you ``nans``. If your logits are all extremely negative (i.e., the max value in your logit vector is -50 or lower), the way we handle masking here could mess you up. But if you've got logit values that extreme, you've got bigger problems than this. """ if mask is not None: mask = mask.float() while mask.dim() < vector.dim(): mask = mask.unsqueeze(1) # vector + mask.log() is an easy way to zero out masked elements in logspace, but it # results in nans when the whole vector is masked. We need a very small value instead of a # zero in the mask for these cases. log(1 + 1e-45) is still basically 0, so we can safely # just add 1e-45 before calling mask.log(). We use 1e-45 because 1e-46 is so small it # becomes 0 - this is just the smallest value we can actually use. vector = vector + (mask + 1e-45).log() return torch.nn.functional.log_softmax(vector, dim=dim)
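A short sketch with toy logits; the resulting log-probabilities would normally feed into a loss that is itself masked:

import torch

logits = torch.tensor([[0.5, 1.5, -2.0]])
mask = torch.tensor([[1, 1, 0]])
log_probs = masked_log_softmax(logits, mask)
# exp(log_probs) sums to ~1 over the unmasked positions; the masked entry is very negative.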
Calculates the max along a given dimension, ignoring masked values. Parameters ---------- vector : ``torch.Tensor`` The vector to take the max over. mask : ``torch.Tensor`` The mask of the vector. It must be broadcastable with vector. dim : ``int`` The dimension to calculate the max over. keepdim : ``bool`` Whether to keep the reduced dimension. min_val : ``float`` The value used to fill padded positions before taking the max. Returns ------- A ``torch.Tensor`` containing the maximum values.
def masked_max(vector: torch.Tensor, mask: torch.Tensor, dim: int, keepdim: bool = False, min_val: float = -1e7) -> torch.Tensor: """ To calculate max along certain dimensions on masked values Parameters ---------- vector : ``torch.Tensor`` The vector to calculate max, assume unmasked parts are already zeros mask : ``torch.Tensor`` The mask of the vector. It must be broadcastable with vector. dim : ``int`` The dimension to calculate max keepdim : ``bool`` Whether to keep dimension min_val : ``float`` The minimal value for paddings Returns ------- A ``torch.Tensor`` of including the maximum values. """ one_minus_mask = (1.0 - mask).byte() replaced_vector = vector.masked_fill(one_minus_mask, min_val) max_value, _ = replaced_vector.max(dim=dim, keepdim=keepdim) return max_value
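A toy example (values chosen to show that masked entries cannot win the max):

import torch

vector = torch.tensor([[1.0, 5.0, 3.0],
                       [2.0, 0.5, 4.0]])
mask = torch.tensor([[1.0, 0.0, 1.0],
                     [1.0, 1.0, 0.0]])
masked_max(vector, mask, dim=-1)   # tensor([3.0, 2.0]); the masked 5.0 and 4.0 are ignored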
Flips a padded tensor along the time dimension without affecting masked entries. Parameters ---------- padded_sequence : ``torch.Tensor`` The tensor to flip along the time dimension. Assumed to be of dimensions (batch size, num timesteps, ...) sequence_lengths : ``List[int]`` A list containing the lengths of each unpadded sequence in the batch. Returns ------- A ``torch.Tensor`` of the same shape as padded_sequence.
def masked_flip(padded_sequence: torch.Tensor, sequence_lengths: List[int]) -> torch.Tensor: """ Flips a padded tensor along the time dimension without affecting masked entries. Parameters ---------- padded_sequence : ``torch.Tensor`` The tensor to flip along the time dimension. Assumed to be of dimensions (batch size, num timesteps, ...) sequence_lengths : ``torch.Tensor`` A list containing the lengths of each unpadded sequence in the batch. Returns ------- A ``torch.Tensor`` of the same shape as padded_sequence. """ assert padded_sequence.size(0) == len(sequence_lengths), \ f'sequence_lengths length ${len(sequence_lengths)} does not match batch size ${padded_sequence.size(0)}' num_timesteps = padded_sequence.size(1) flipped_padded_sequence = torch.flip(padded_sequence, [1]) sequences = [flipped_padded_sequence[i, num_timesteps - length:] for i, length in enumerate(sequence_lengths)] return torch.nn.utils.rnn.pad_sequence(sequences, batch_first=True)
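A sketch with a toy batch of two sequences (lengths 4 and 2):

import torch

padded = torch.tensor([[1., 2., 3., 4.],
                       [5., 6., 0., 0.]]).unsqueeze(-1)   # (batch, timesteps, 1)
flipped = masked_flip(padded, [4, 2])
# flipped[0] holds [4, 3, 2, 1] and flipped[1] starts with [6, 5]; padding stays at the tail.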
Calculates the mean along a given dimension, ignoring masked values. Parameters ---------- vector : ``torch.Tensor`` The vector to take the mean over. mask : ``torch.Tensor`` The mask of the vector. It must be broadcastable with vector. dim : ``int`` The dimension to calculate the mean over. keepdim : ``bool`` Whether to keep the reduced dimension. eps : ``float`` A small value to avoid division by zero. Returns ------- A ``torch.Tensor`` containing the mean values.
def masked_mean(vector: torch.Tensor, mask: torch.Tensor, dim: int, keepdim: bool = False, eps: float = 1e-8) -> torch.Tensor: """ To calculate mean along certain dimensions on masked values Parameters ---------- vector : ``torch.Tensor`` The vector to calculate mean. mask : ``torch.Tensor`` The mask of the vector. It must be broadcastable with vector. dim : ``int`` The dimension to calculate mean keepdim : ``bool`` Whether to keep dimension eps : ``float`` A small value to avoid zero division problem. Returns ------- A ``torch.Tensor`` of including the mean values. """ one_minus_mask = (1.0 - mask).byte() replaced_vector = vector.masked_fill(one_minus_mask, 0.0) value_sum = torch.sum(replaced_vector, dim=dim, keepdim=keepdim) value_count = torch.sum(mask.float(), dim=dim, keepdim=keepdim) return value_sum / value_count.clamp(min=eps)
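A toy example (the large value sits behind the mask and does not affect the result):

import torch

vector = torch.tensor([[1.0, 2.0, 100.0],
                       [4.0, 6.0, 8.0]])
mask = torch.tensor([[1.0, 1.0, 0.0],
                     [1.0, 1.0, 1.0]])
masked_mean(vector, mask, dim=-1)   # tensor([1.5, 6.0]); the masked 100.0 does not leak in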
Perform Viterbi decoding in log space over a sequence given a transition matrix specifying pairwise (transition) potentials between tags and a matrix of shape (sequence_length, num_tags) specifying unary potentials for possible tags per timestep. Parameters ---------- tag_sequence : torch.Tensor, required. A tensor of shape (sequence_length, num_tags) representing scores for a set of tags over a given sequence. transition_matrix : torch.Tensor, required. A tensor of shape (num_tags, num_tags) representing the binary potentials for transitioning between a given pair of tags. tag_observations : Optional[List[int]], optional, (default = None) A list of length ``sequence_length`` containing the class ids of observed elements in the sequence, with unobserved elements being set to -1. Note that it is possible to provide evidence which results in degenerate labelings if the sequences of tags you provide as evidence cannot transition between each other, or those transitions are extremely unlikely. In this situation we log a warning, but the responsibility for providing self-consistent evidence ultimately lies with the user. Returns ------- viterbi_path : List[int] The tag indices of the maximum likelihood tag sequence. viterbi_score : torch.Tensor The score of the viterbi path.
def viterbi_decode(tag_sequence: torch.Tensor, transition_matrix: torch.Tensor, tag_observations: Optional[List[int]] = None): """ Perform Viterbi decoding in log space over a sequence given a transition matrix specifying pairwise (transition) potentials between tags and a matrix of shape (sequence_length, num_tags) specifying unary potentials for possible tags per timestep. Parameters ---------- tag_sequence : torch.Tensor, required. A tensor of shape (sequence_length, num_tags) representing scores for a set of tags over a given sequence. transition_matrix : torch.Tensor, required. A tensor of shape (num_tags, num_tags) representing the binary potentials for transitioning between a given pair of tags. tag_observations : Optional[List[int]], optional, (default = None) A list of length ``sequence_length`` containing the class ids of observed elements in the sequence, with unobserved elements being set to -1. Note that it is possible to provide evidence which results in degenerate labelings if the sequences of tags you provide as evidence cannot transition between each other, or those transitions are extremely unlikely. In this situation we log a warning, but the responsibility for providing self-consistent evidence ultimately lies with the user. Returns ------- viterbi_path : List[int] The tag indices of the maximum likelihood tag sequence. viterbi_score : torch.Tensor The score of the viterbi path. """ sequence_length, num_tags = list(tag_sequence.size()) if tag_observations: if len(tag_observations) != sequence_length: raise ConfigurationError("Observations were provided, but they were not the same length " "as the sequence. Found sequence of length: {} and evidence: {}" .format(sequence_length, tag_observations)) else: tag_observations = [-1 for _ in range(sequence_length)] path_scores = [] path_indices = [] if tag_observations[0] != -1: one_hot = torch.zeros(num_tags) one_hot[tag_observations[0]] = 100000. path_scores.append(one_hot) else: path_scores.append(tag_sequence[0, :]) # Evaluate the scores for all possible paths. for timestep in range(1, sequence_length): # Add pairwise potentials to current scores. summed_potentials = path_scores[timestep - 1].unsqueeze(-1) + transition_matrix scores, paths = torch.max(summed_potentials, 0) # If we have an observation for this timestep, use it # instead of the distribution over tags. observation = tag_observations[timestep] # Warn the user if they have passed # invalid/extremely unlikely evidence. if tag_observations[timestep - 1] != -1: if transition_matrix[tag_observations[timestep - 1], observation] < -10000: logger.warning("The pairwise potential between tags you have passed as " "observations is extremely unlikely. Double check your evidence " "or transition potentials!") if observation != -1: one_hot = torch.zeros(num_tags) one_hot[observation] = 100000. path_scores.append(one_hot) else: path_scores.append(tag_sequence[timestep, :] + scores.squeeze()) path_indices.append(paths.squeeze()) # Construct the most likely sequence backwards. viterbi_score, best_path = torch.max(path_scores[-1], 0) viterbi_path = [int(best_path.numpy())] for backward_timestep in reversed(path_indices): viterbi_path.append(int(backward_timestep[viterbi_path[-1]])) # Reverse the backward path. viterbi_path.reverse() return viterbi_path, viterbi_score
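A tiny worked example with two tags, three timesteps, and uniform transition potentials (all values are illustrative):

import torch

tag_sequence = torch.tensor([[2.0, 0.0],
                             [0.0, 2.0],
                             [2.0, 0.0]])   # (sequence_length, num_tags) unary scores
transition_matrix = torch.zeros(2, 2)       # uniform transition potentials
viterbi_path, viterbi_score = viterbi_decode(tag_sequence, transition_matrix)
# viterbi_path is [0, 1, 0]: with uniform transitions, the best path simply follows
# the highest-scoring tag at each timestep; viterbi_score is 6.0.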
Takes the dictionary of tensors produced by a ``TextField`` and returns a mask with 0 where the tokens are padding, and 1 otherwise. We also handle ``TextFields`` wrapped by an arbitrary number of ``ListFields``, where the number of wrapping ``ListFields`` is given by ``num_wrapping_dims``. If ``num_wrapping_dims == 0``, the returned mask has shape ``(batch_size, num_tokens)``. If ``num_wrapping_dims > 0`` then the returned mask has ``num_wrapping_dims`` extra dimensions, so the shape will be ``(batch_size, ..., num_tokens)``. There could be several entries in the tensor dictionary with different shapes (e.g., one for word ids, one for character ids). In order to get a token mask, we use the tensor in the dictionary with the lowest number of dimensions. After subtracting ``num_wrapping_dims``, if this tensor has two dimensions we assume it has shape ``(batch_size, ..., num_tokens)``, and use it for the mask. If instead it has three dimensions, we assume it has shape ``(batch_size, ..., num_tokens, num_features)``, and sum over the last dimension to produce the mask. Most frequently this will be a character id tensor, but it could also be a featurized representation of each token, etc. If the input ``text_field_tensors`` contains the "mask" key, this is returned instead of inferring the mask. TODO(joelgrus): can we change this? NOTE: Our functions for generating masks create torch.LongTensors, because using torch.ByteTensors makes it easy to run into overflow errors when doing mask manipulation, such as summing to get the lengths of sequences - see below. >>> mask = torch.ones([260]).byte() >>> mask.sum() # equals 260. >>> var_mask = torch.autograd.V(mask) >>> var_mask.sum() # equals 4, due to 8 bit precision - the sum overflows.
def get_text_field_mask(text_field_tensors: Dict[str, torch.Tensor], num_wrapping_dims: int = 0) -> torch.LongTensor: """ Takes the dictionary of tensors produced by a ``TextField`` and returns a mask with 0 where the tokens are padding, and 1 otherwise. We also handle ``TextFields`` wrapped by an arbitrary number of ``ListFields``, where the number of wrapping ``ListFields`` is given by ``num_wrapping_dims``. If ``num_wrapping_dims == 0``, the returned mask has shape ``(batch_size, num_tokens)``. If ``num_wrapping_dims > 0`` then the returned mask has ``num_wrapping_dims`` extra dimensions, so the shape will be ``(batch_size, ..., num_tokens)``. There could be several entries in the tensor dictionary with different shapes (e.g., one for word ids, one for character ids). In order to get a token mask, we use the tensor in the dictionary with the lowest number of dimensions. After subtracting ``num_wrapping_dims``, if this tensor has two dimensions we assume it has shape ``(batch_size, ..., num_tokens)``, and use it for the mask. If instead it has three dimensions, we assume it has shape ``(batch_size, ..., num_tokens, num_features)``, and sum over the last dimension to produce the mask. Most frequently this will be a character id tensor, but it could also be a featurized representation of each token, etc. If the input ``text_field_tensors`` contains the "mask" key, this is returned instead of inferring the mask. TODO(joelgrus): can we change this? NOTE: Our functions for generating masks create torch.LongTensors, because using torch.ByteTensors makes it easy to run into overflow errors when doing mask manipulation, such as summing to get the lengths of sequences - see below. >>> mask = torch.ones([260]).byte() >>> mask.sum() # equals 260. >>> var_mask = torch.autograd.V(mask) >>> var_mask.sum() # equals 4, due to 8 bit precision - the sum overflows. """ if "mask" in text_field_tensors: return text_field_tensors["mask"] tensor_dims = [(tensor.dim(), tensor) for tensor in text_field_tensors.values()] tensor_dims.sort(key=lambda x: x[0]) smallest_dim = tensor_dims[0][0] - num_wrapping_dims if smallest_dim == 2: token_tensor = tensor_dims[0][1] return (token_tensor != 0).long() elif smallest_dim == 3: character_tensor = tensor_dims[0][1] return ((character_tensor > 0).long().sum(dim=-1) > 0).long() else: raise ValueError("Expected a tensor with dimension 2 or 3, found {}".format(smallest_dim))
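A sketch with a hypothetical tensor dictionary containing only word ids (key name and padding id are illustrative):

import torch

text_field_tensors = {"tokens": torch.tensor([[2, 5, 7, 0],
                                              [3, 4, 0, 0]])}   # 0 is the padding id
mask = get_text_field_mask(text_field_tensors)
# mask is [[1, 1, 1, 0], [1, 1, 0, 0]]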
Takes a matrix of vectors and a set of weights over the rows in the matrix (which we call an "attention" vector), and returns a weighted sum of the rows in the matrix. This is the typical computation performed after an attention mechanism. Note that while we call this a "matrix" of vectors and an attention "vector", we also handle higher-order tensors. We always sum over the second-to-last dimension of the "matrix", and we assume that all dimensions in the "matrix" prior to the last dimension are matched in the "vector". Non-matched dimensions in the "vector" must be `directly after the batch dimension`. For example, say I have a "matrix" with dimensions ``(batch_size, num_queries, num_words, embedding_dim)``. The attention "vector" then must have at least those dimensions, and could have more. Both: - ``(batch_size, num_queries, num_words)`` (distribution over words for each query) - ``(batch_size, num_documents, num_queries, num_words)`` (distribution over words in a query for each document) are valid input "vectors", producing tensors of shape: ``(batch_size, num_queries, embedding_dim)`` and ``(batch_size, num_documents, num_queries, embedding_dim)`` respectively.
def weighted_sum(matrix: torch.Tensor, attention: torch.Tensor) -> torch.Tensor: """ Takes a matrix of vectors and a set of weights over the rows in the matrix (which we call an "attention" vector), and returns a weighted sum of the rows in the matrix. This is the typical computation performed after an attention mechanism. Note that while we call this a "matrix" of vectors and an attention "vector", we also handle higher-order tensors. We always sum over the second-to-last dimension of the "matrix", and we assume that all dimensions in the "matrix" prior to the last dimension are matched in the "vector". Non-matched dimensions in the "vector" must be `directly after the batch dimension`. For example, say I have a "matrix" with dimensions ``(batch_size, num_queries, num_words, embedding_dim)``. The attention "vector" then must have at least those dimensions, and could have more. Both: - ``(batch_size, num_queries, num_words)`` (distribution over words for each query) - ``(batch_size, num_documents, num_queries, num_words)`` (distribution over words in a query for each document) are valid input "vectors", producing tensors of shape: ``(batch_size, num_queries, embedding_dim)`` and ``(batch_size, num_documents, num_queries, embedding_dim)`` respectively. """ # We'll special-case a few settings here, where there are efficient (but poorly-named) # operations in pytorch that already do the computation we need. if attention.dim() == 2 and matrix.dim() == 3: return attention.unsqueeze(1).bmm(matrix).squeeze(1) if attention.dim() == 3 and matrix.dim() == 3: return attention.bmm(matrix) if matrix.dim() - 1 < attention.dim(): expanded_size = list(matrix.size()) for i in range(attention.dim() - matrix.dim() + 1): matrix = matrix.unsqueeze(1) expanded_size.insert(i + 1, attention.size(i + 1)) matrix = matrix.expand(*expanded_size) intermediate = attention.unsqueeze(-1).expand_as(matrix) * matrix return intermediate.sum(dim=-2)
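The most common case is a single attention distribution per batch element; a minimal sketch with random inputs:

import torch

matrix = torch.rand(2, 4, 8)                            # (batch_size, num_words, embedding_dim)
attention = torch.softmax(torch.rand(2, 4), dim=-1)     # (batch_size, num_words)
context = weighted_sum(matrix, attention)               # (batch_size, embedding_dim)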
Computes the cross entropy loss of a sequence, weighted with respect to some user provided weights. Note that the weighting here is not the same as in the :func:`torch.nn.CrossEntropyLoss()` criterion, which is weighting classes; here we are weighting the loss contribution from particular elements in the sequence. This allows loss computations for models which use padding. Parameters ---------- logits : ``torch.FloatTensor``, required. A ``torch.FloatTensor`` of size (batch_size, sequence_length, num_classes) which contains the unnormalized probability for each class. targets : ``torch.LongTensor``, required. A ``torch.LongTensor`` of size (batch, sequence_length) which contains the index of the true class for each corresponding step. weights : ``torch.FloatTensor``, required. A ``torch.FloatTensor`` of size (batch, sequence_length) average: str, optional (default = "batch") If "batch", average the loss across the batches. If "token", average the loss across each item in the input. If ``None``, return a vector of losses per batch element. label_smoothing : ``float``, optional (default = None) Whether or not to apply label smoothing to the cross-entropy loss. For example, with a label smoothing value of 0.2, a 4 class classification target would look like ``[0.05, 0.05, 0.85, 0.05]`` if the 3rd class was the correct label. Returns ------- A torch.FloatTensor representing the cross entropy loss. If ``average=="batch"`` or ``average=="token"``, the returned loss is a scalar. If ``average is None``, the returned loss is a vector of shape (batch_size,).
def sequence_cross_entropy_with_logits(logits: torch.FloatTensor, targets: torch.LongTensor, weights: torch.FloatTensor, average: str = "batch", label_smoothing: float = None) -> torch.FloatTensor: """ Computes the cross entropy loss of a sequence, weighted with respect to some user provided weights. Note that the weighting here is not the same as in the :func:`torch.nn.CrossEntropyLoss()` criterion, which is weighting classes; here we are weighting the loss contribution from particular elements in the sequence. This allows loss computations for models which use padding. Parameters ---------- logits : ``torch.FloatTensor``, required. A ``torch.FloatTensor`` of size (batch_size, sequence_length, num_classes) which contains the unnormalized probability for each class. targets : ``torch.LongTensor``, required. A ``torch.LongTensor`` of size (batch, sequence_length) which contains the index of the true class for each corresponding step. weights : ``torch.FloatTensor``, required. A ``torch.FloatTensor`` of size (batch, sequence_length) average: str, optional (default = "batch") If "batch", average the loss across the batches. If "token", average the loss across each item in the input. If ``None``, return a vector of losses per batch element. label_smoothing : ``float``, optional (default = None) Whether or not to apply label smoothing to the cross-entropy loss. For example, with a label smoothing value of 0.2, a 4 class classification target would look like ``[0.05, 0.05, 0.85, 0.05]`` if the 3rd class was the correct label. Returns ------- A torch.FloatTensor representing the cross entropy loss. If ``average=="batch"`` or ``average=="token"``, the returned loss is a scalar. If ``average is None``, the returned loss is a vector of shape (batch_size,). """ if average not in {None, "token", "batch"}: raise ValueError("Got average f{average}, expected one of " "None, 'token', or 'batch'") # shape : (batch * sequence_length, num_classes) logits_flat = logits.view(-1, logits.size(-1)) # shape : (batch * sequence_length, num_classes) log_probs_flat = torch.nn.functional.log_softmax(logits_flat, dim=-1) # shape : (batch * max_len, 1) targets_flat = targets.view(-1, 1).long() if label_smoothing is not None and label_smoothing > 0.0: num_classes = logits.size(-1) smoothing_value = label_smoothing / num_classes # Fill all the correct indices with 1 - smoothing value. one_hot_targets = torch.zeros_like(log_probs_flat).scatter_(-1, targets_flat, 1.0 - label_smoothing) smoothed_targets = one_hot_targets + smoothing_value negative_log_likelihood_flat = - log_probs_flat * smoothed_targets negative_log_likelihood_flat = negative_log_likelihood_flat.sum(-1, keepdim=True) else: # Contribution to the negative log likelihood only comes from the exact indices # of the targets, as the target distributions are one-hot. Here we use torch.gather # to extract the indices of the num_classes dimension which contribute to the loss. 
# shape : (batch * sequence_length, 1) negative_log_likelihood_flat = - torch.gather(log_probs_flat, dim=1, index=targets_flat) # shape : (batch, sequence_length) negative_log_likelihood = negative_log_likelihood_flat.view(*targets.size()) # shape : (batch, sequence_length) negative_log_likelihood = negative_log_likelihood * weights.float() if average == "batch": # shape : (batch_size,) per_batch_loss = negative_log_likelihood.sum(1) / (weights.sum(1).float() + 1e-13) num_non_empty_sequences = ((weights.sum(1) > 0).float().sum() + 1e-13) return per_batch_loss.sum() / num_non_empty_sequences elif average == "token": return negative_log_likelihood.sum() / (weights.sum().float() + 1e-13) else: # shape : (batch_size,) per_batch_loss = negative_log_likelihood.sum(1) / (weights.sum(1).float() + 1e-13) return per_batch_loss
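A sketch of calling the loss with a padding mask as the weights (random toy inputs):

import torch

logits = torch.rand(2, 4, 5)                      # (batch_size, sequence_length, num_classes)
targets = torch.randint(0, 5, (2, 4))
weights = torch.tensor([[1.0, 1.0, 1.0, 0.0],
                        [1.0, 1.0, 0.0, 0.0]])    # zero out the padded timesteps
loss = sequence_cross_entropy_with_logits(logits, targets, weights, average="token")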
Replaces all masked values in ``tensor`` with ``replace_with``. ``mask`` must be broadcastable to the same shape as ``tensor``. We require that ``tensor.dim() == mask.dim()``, as otherwise we won't know which dimensions of the mask to unsqueeze. This just does ``tensor.masked_fill()``, except the pytorch method fills in things with a mask value of 1, where we want the opposite. You can do this in your own code with ``tensor.masked_fill((1 - mask).byte(), replace_with)``.
def replace_masked_values(tensor: torch.Tensor, mask: torch.Tensor, replace_with: float) -> torch.Tensor: """ Replaces all masked values in ``tensor`` with ``replace_with``. ``mask`` must be broadcastable to the same shape as ``tensor``. We require that ``tensor.dim() == mask.dim()``, as otherwise we won't know which dimensions of the mask to unsqueeze. This just does ``tensor.masked_fill()``, except the pytorch method fills in things with a mask value of 1, where we want the opposite. You can do this in your own code with ``tensor.masked_fill((1 - mask).byte(), replace_with)``. """ if tensor.dim() != mask.dim(): raise ConfigurationError("tensor.dim() (%d) != mask.dim() (%d)" % (tensor.dim(), mask.dim())) return tensor.masked_fill((1 - mask).byte(), replace_with)
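A one-line toy example, e.g. as a preparation step before taking a max:

import torch

tensor = torch.tensor([[0.2, 0.9, 0.4]])
mask = torch.tensor([[1.0, 0.0, 1.0]])
replace_masked_values(tensor, mask, -1e7)   # tensor([[0.2, -1e7, 0.4]])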
A check for tensor equality (by value). We make sure that the tensors have the same shape, then check all of the entries in the tensor for equality. We additionally allow the input tensors to be lists or dictionaries, where we then do the above check on every position in the list / item in the dictionary. If we find objects that aren't tensors as we're doing that, we just defer to their equality check. This is kind of a catch-all method that's designed to make implementing ``__eq__`` methods easier, in a way that's really only intended to be useful for tests.
def tensors_equal(tensor1: torch.Tensor, tensor2: torch.Tensor, tolerance: float = 1e-12) -> bool: """ A check for tensor equality (by value). We make sure that the tensors have the same shape, then check all of the entries in the tensor for equality. We additionally allow the input tensors to be lists or dictionaries, where we then do the above check on every position in the list / item in the dictionary. If we find objects that aren't tensors as we're doing that, we just defer to their equality check. This is kind of a catch-all method that's designed to make implementing ``__eq__`` methods easier, in a way that's really only intended to be useful for tests. """ # pylint: disable=too-many-return-statements if isinstance(tensor1, (list, tuple)): if not isinstance(tensor2, (list, tuple)) or len(tensor1) != len(tensor2): return False return all([tensors_equal(t1, t2, tolerance) for t1, t2 in zip(tensor1, tensor2)]) elif isinstance(tensor1, dict): if not isinstance(tensor2, dict): return False if tensor1.keys() != tensor2.keys(): return False return all([tensors_equal(tensor1[key], tensor2[key], tolerance) for key in tensor1]) elif isinstance(tensor1, torch.Tensor): if not isinstance(tensor2, torch.Tensor): return False if tensor1.size() != tensor2.size(): return False return ((tensor1 - tensor2).abs().float() < tolerance).all() else: try: return tensor1 == tensor2 except RuntimeError: print(type(tensor1), type(tensor2)) raise
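Two small illustrative checks (intended for tests, as the docstring notes):

import torch

tensors_equal({"a": torch.tensor([1.0, 2.0])}, {"a": torch.tensor([1.0, 2.0])})   # truthy
tensors_equal([torch.ones(2)], [torch.zeros(2)])                                  # falsy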
In order to `torch.load()` a GPU-trained model onto a CPU (or specific GPU), you have to supply a `map_location` function. Call this with the desired `cuda_device` to get the function that `torch.load()` needs.
def device_mapping(cuda_device: int): """ In order to `torch.load()` a GPU-trained model onto a CPU (or specific GPU), you have to supply a `map_location` function. Call this with the desired `cuda_device` to get the function that `torch.load()` needs. """ def inner_device_mapping(storage: torch.Storage, location) -> torch.Storage: # pylint: disable=unused-argument if cuda_device >= 0: return storage.cuda(cuda_device) else: return storage return inner_device_mapping
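A sketch of the intended call pattern; the checkpoint path is hypothetical:

import torch

map_location = device_mapping(-1)   # -1 maps storages to the CPU
# state_dict = torch.load("/path/to/model_state.th", map_location=map_location)  # hypothetical path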
Combines a list of tensors using element-wise operations and concatenation, specified by a ``combination`` string. The string refers to (1-indexed) positions in the input tensor list, and looks like ``"1,2,1+2,3-1"``. We allow the following kinds of combinations: ``x``, ``x*y``, ``x+y``, ``x-y``, and ``x/y``, where ``x`` and ``y`` are positive integers less than or equal to ``len(tensors)``. Each of the binary operations is performed elementwise. You can give as many combinations as you want in the ``combination`` string. For example, for the input string ``"1,2,1*2"``, the result would be ``[1;2;1*2]``, as you would expect, where ``[;]`` is concatenation along the last dimension. If you have a fixed, known way to combine tensors that you use in a model, you should probably just use something like ``torch.cat([x_tensor, y_tensor, x_tensor * y_tensor])``. This function adds some complexity that is only necessary if you want the specific combination used to be `configurable`. If you want to do any element-wise operations, the tensors involved in each element-wise operation must have the same shape. This function also accepts ``x`` and ``y`` in place of ``1`` and ``2`` in the combination string.
def combine_tensors(combination: str, tensors: List[torch.Tensor]) -> torch.Tensor: """ Combines a list of tensors using element-wise operations and concatenation, specified by a ``combination`` string. The string refers to (1-indexed) positions in the input tensor list, and looks like ``"1,2,1+2,3-1"``. We allow the following kinds of combinations: ``x``, ``x*y``, ``x+y``, ``x-y``, and ``x/y``, where ``x`` and ``y`` are positive integers less than or equal to ``len(tensors)``. Each of the binary operations is performed elementwise. You can give as many combinations as you want in the ``combination`` string. For example, for the input string ``"1,2,1*2"``, the result would be ``[1;2;1*2]``, as you would expect, where ``[;]`` is concatenation along the last dimension. If you have a fixed, known way to combine tensors that you use in a model, you should probably just use something like ``torch.cat([x_tensor, y_tensor, x_tensor * y_tensor])``. This function adds some complexity that is only necessary if you want the specific combination used to be `configurable`. If you want to do any element-wise operations, the tensors involved in each element-wise operation must have the same shape. This function also accepts ``x`` and ``y`` in place of ``1`` and ``2`` in the combination string. """ if len(tensors) > 9: raise ConfigurationError("Double-digit tensor lists not currently supported") combination = combination.replace('x', '1').replace('y', '2') to_concatenate = [_get_combination(piece, tensors) for piece in combination.split(',')] return torch.cat(to_concatenate, dim=-1)
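A minimal sketch of the "1,2,1*2" combination described above (random toy tensors):

import torch

x = torch.rand(2, 3, 4)
y = torch.rand(2, 3, 4)
combined = combine_tensors("1,2,1*2", [x, y])   # concatenates x, y and x * y: shape (2, 3, 12)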
Returns the zero-based index of the last item in the sequence whose value is equal to obj. Raises a ValueError if there is no such item. Parameters ---------- sequence : ``Sequence[T]`` obj : ``T`` Returns ------- The zero-based index of the last item equal to obj.
def _rindex(sequence: Sequence[T], obj: T) -> int: """ Return zero-based index in the sequence of the last item whose value is equal to obj. Raises a ValueError if there is no such item. Parameters ---------- sequence : ``Sequence[T]`` obj : ``T`` Returns ------- zero-based index associated to the position of the last item equal to obj """ for i in range(len(sequence) - 1, -1, -1): if sequence[i] == obj: return i raise ValueError(f"Unable to find {obj} in sequence {sequence}.")
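A one-line example:

_rindex([3, 7, 3, 5], 3)   # returns 2, the index of the last occurrence of 3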
Like :func:`combine_tensors`, but does a weighted (linear) multiplication while combining. This is a separate function from ``combine_tensors`` because we try to avoid instantiating large intermediate tensors during the combination, which is possible because we know that we're going to be multiplying by a weight vector in the end. Parameters ---------- combination : ``str`` Same as in :func:`combine_tensors` tensors : ``List[torch.Tensor]`` A list of tensors to combine, where the integers in the ``combination`` are (1-indexed) positions in this list of tensors. These tensors are all expected to have either three or four dimensions, with the final dimension being an embedding. If there are four dimensions, one of them must have length 1. weights : ``torch.nn.Parameter`` A vector of weights to use for the combinations. This should have shape (combined_dim,), as calculated by :func:`get_combined_dim`.
def combine_tensors_and_multiply(combination: str, tensors: List[torch.Tensor], weights: torch.nn.Parameter) -> torch.Tensor: """ Like :func:`combine_tensors`, but does a weighted (linear) multiplication while combining. This is a separate function from ``combine_tensors`` because we try to avoid instantiating large intermediate tensors during the combination, which is possible because we know that we're going to be multiplying by a weight vector in the end. Parameters ---------- combination : ``str`` Same as in :func:`combine_tensors` tensors : ``List[torch.Tensor]`` A list of tensors to combine, where the integers in the ``combination`` are (1-indexed) positions in this list of tensors. These tensors are all expected to have either three or four dimensions, with the final dimension being an embedding. If there are four dimensions, one of them must have length 1. weights : ``torch.nn.Parameter`` A vector of weights to use for the combinations. This should have shape (combined_dim,), as calculated by :func:`get_combined_dim`. """ if len(tensors) > 9: raise ConfigurationError("Double-digit tensor lists not currently supported") combination = combination.replace('x', '1').replace('y', '2') pieces = combination.split(',') tensor_dims = [tensor.size(-1) for tensor in tensors] combination_dims = [_get_combination_dim(piece, tensor_dims) for piece in pieces] dims_so_far = 0 to_sum = [] for piece, combination_dim in zip(pieces, combination_dims): weight = weights[dims_so_far:(dims_so_far + combination_dim)] dims_so_far += combination_dim to_sum.append(_get_combination_and_multiply(piece, tensors, weight)) result = to_sum[0] for result_piece in to_sum[1:]: result = result + result_piece return result
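A sketch that sizes the weight vector with get_combined_dim (defined below) and applies the fused combine-and-multiply (toy tensors):

import torch

x = torch.rand(2, 3, 4)
y = torch.rand(2, 3, 4)
weights = torch.nn.Parameter(torch.rand(get_combined_dim("x,y,x*y", [4, 4])))   # shape (12,)
scores = combine_tensors_and_multiply("x,y,x*y", [x, y], weights)               # shape (2, 3)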
For use with :func:`combine_tensors`. This function computes the resultant dimension when calling ``combine_tensors(combination, tensors)``, when the tensor dimension is known. This is necessary for knowing the sizes of weight matrices when building models that use ``combine_tensors``. Parameters ---------- combination : ``str`` A comma-separated list of combination pieces, like ``"1,2,1*2"``, specified identically to ``combination`` in :func:`combine_tensors`. tensor_dims : ``List[int]`` A list of tensor dimensions, where each dimension is from the `last axis` of the tensors that will be input to :func:`combine_tensors`.
def get_combined_dim(combination: str, tensor_dims: List[int]) -> int: """ For use with :func:`combine_tensors`. This function computes the resultant dimension when calling ``combine_tensors(combination, tensors)``, when the tensor dimension is known. This is necessary for knowing the sizes of weight matrices when building models that use ``combine_tensors``. Parameters ---------- combination : ``str`` A comma-separated list of combination pieces, like ``"1,2,1*2"``, specified identically to ``combination`` in :func:`combine_tensors`. tensor_dims : ``List[int]`` A list of tensor dimensions, where each dimension is from the `last axis` of the tensors that will be input to :func:`combine_tensors`. """ if len(tensor_dims) > 9: raise ConfigurationError("Double-digit tensor lists not currently supported") combination = combination.replace('x', '1').replace('y', '2') return sum([_get_combination_dim(piece, tensor_dims) for piece in combination.split(',')])
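For example, two 300-dimensional inputs combined as "1,2,1*2" give a 900-dimensional result:

get_combined_dim("1,2,1*2", [300, 300])   # 900: each of the three pieces is 300-dimensional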
A numerically stable computation of logsumexp. This is mathematically equivalent to `tensor.exp().sum(dim, keepdim=keepdim).log()`. This function is typically used for summing log probabilities. Parameters ---------- tensor : torch.FloatTensor, required. A tensor of arbitrary size. dim : int, optional (default = -1) The dimension of the tensor to apply the logsumexp to. keepdim: bool, optional (default = False) Whether to retain a dimension of size one at the dimension we reduce over.
def logsumexp(tensor: torch.Tensor, dim: int = -1, keepdim: bool = False) -> torch.Tensor: """ A numerically stable computation of logsumexp. This is mathematically equivalent to `tensor.exp().sum(dim, keep=keepdim).log()`. This function is typically used for summing log probabilities. Parameters ---------- tensor : torch.FloatTensor, required. A tensor of arbitrary size. dim : int, optional (default = -1) The dimension of the tensor to apply the logsumexp to. keepdim: bool, optional (default = False) Whether to retain a dimension of size one at the dimension we reduce over. """ max_score, _ = tensor.max(dim, keepdim=keepdim) if keepdim: stable_vec = tensor - max_score else: stable_vec = tensor - max_score.unsqueeze(dim) return max_score + (stable_vec.exp().sum(dim, keepdim=keepdim)).log()
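A quick sanity-check sketch: summing log probabilities that add to one gives ~0 in log space:

import torch

log_probs = torch.log(torch.tensor([0.1, 0.2, 0.7]))
logsumexp(log_probs)   # ~0.0, i.e. log(0.1 + 0.2 + 0.7), computed without exponent overflow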
This is a subroutine for :func:`~batched_index_select`. The given ``indices`` of size ``(batch_size, d_1, ..., d_n)`` indexes into dimension 2 of a target tensor, which has size ``(batch_size, sequence_length, embedding_size)``. This function returns a vector that correctly indexes into the flattened target. The sequence length of the target must be provided to compute the appropriate offsets. .. code-block:: python indices = torch.ones([2,3], dtype=torch.long) # Sequence length of the target tensor. sequence_length = 10 shifted_indices = flatten_and_batch_shift_indices(indices, sequence_length) # Indices into the second element in the batch are correctly shifted # to take into account that the target tensor will be flattened before # the indices are applied. assert shifted_indices == [1, 1, 1, 11, 11, 11] Parameters ---------- indices : ``torch.LongTensor``, required. sequence_length : ``int``, required. The length of the sequence the indices index into. This must be the second dimension of the tensor. Returns ------- offset_indices : ``torch.LongTensor``
def flatten_and_batch_shift_indices(indices: torch.Tensor, sequence_length: int) -> torch.Tensor: """ This is a subroutine for :func:`~batched_index_select`. The given ``indices`` of size ``(batch_size, d_1, ..., d_n)`` indexes into dimension 2 of a target tensor, which has size ``(batch_size, sequence_length, embedding_size)``. This function returns a vector that correctly indexes into the flattened target. The sequence length of the target must be provided to compute the appropriate offsets. .. code-block:: python indices = torch.ones([2,3], dtype=torch.long) # Sequence length of the target tensor. sequence_length = 10 shifted_indices = flatten_and_batch_shift_indices(indices, sequence_length) # Indices into the second element in the batch are correctly shifted # to take into account that the target tensor will be flattened before # the indices are applied. assert shifted_indices == [1, 1, 1, 11, 11, 11] Parameters ---------- indices : ``torch.LongTensor``, required. sequence_length : ``int``, required. The length of the sequence the indices index into. This must be the second dimension of the tensor. Returns ------- offset_indices : ``torch.LongTensor`` """ # Shape: (batch_size) offsets = get_range_vector(indices.size(0), get_device_of(indices)) * sequence_length for _ in range(len(indices.size()) - 1): offsets = offsets.unsqueeze(1) # Shape: (batch_size, d_1, ..., d_n) offset_indices = indices + offsets # Shape: (batch_size * d_1 * ... * d_n) offset_indices = offset_indices.view(-1) return offset_indices
The given ``indices`` of size ``(batch_size, d_1, ..., d_n)`` indexes into the sequence dimension (dimension 2) of the target, which has size ``(batch_size, sequence_length, embedding_size)``. This function returns selected values in the target with respect to the provided indices, which have size ``(batch_size, d_1, ..., d_n, embedding_size)``. This can use the optionally precomputed :func:`~flattened_indices` with size ``(batch_size * d_1 * ... * d_n)`` if given. An example use case of this function is looking up the start and end indices of spans in a sequence tensor. This is used in the :class:`~allennlp.models.coreference_resolution.CoreferenceResolver`. Model to select contextual word representations corresponding to the start and end indices of mentions. The key reason this can't be done with basic torch functions is that we want to be able to use look-up tensors with an arbitrary number of dimensions (for example, in the coref model, we don't know a-priori how many spans we are looking up). Parameters ---------- target : ``torch.Tensor``, required. A 3 dimensional tensor of shape (batch_size, sequence_length, embedding_size). This is the tensor to be indexed. indices : ``torch.LongTensor`` A tensor of shape (batch_size, ...), where each element is an index into the ``sequence_length`` dimension of the ``target`` tensor. flattened_indices : Optional[torch.Tensor], optional (default = None) An optional tensor representing the result of calling :func:~`flatten_and_batch_shift_indices` on ``indices``. This is helpful in the case that the indices can be flattened once and cached for many batch lookups. Returns ------- selected_targets : ``torch.Tensor`` A tensor with shape [indices.size(), target.size(-1)] representing the embedded indices extracted from the batch flattened target tensor.
def batched_index_select(target: torch.Tensor, indices: torch.LongTensor, flattened_indices: Optional[torch.LongTensor] = None) -> torch.Tensor: """ The given ``indices`` of size ``(batch_size, d_1, ..., d_n)`` indexes into the sequence dimension (dimension 2) of the target, which has size ``(batch_size, sequence_length, embedding_size)``. This function returns selected values in the target with respect to the provided indices, which have size ``(batch_size, d_1, ..., d_n, embedding_size)``. This can use the optionally precomputed :func:`~flattened_indices` with size ``(batch_size * d_1 * ... * d_n)`` if given. An example use case of this function is looking up the start and end indices of spans in a sequence tensor. This is used in the :class:`~allennlp.models.coreference_resolution.CoreferenceResolver`. Model to select contextual word representations corresponding to the start and end indices of mentions. The key reason this can't be done with basic torch functions is that we want to be able to use look-up tensors with an arbitrary number of dimensions (for example, in the coref model, we don't know a-priori how many spans we are looking up). Parameters ---------- target : ``torch.Tensor``, required. A 3 dimensional tensor of shape (batch_size, sequence_length, embedding_size). This is the tensor to be indexed. indices : ``torch.LongTensor`` A tensor of shape (batch_size, ...), where each element is an index into the ``sequence_length`` dimension of the ``target`` tensor. flattened_indices : Optional[torch.Tensor], optional (default = None) An optional tensor representing the result of calling :func:~`flatten_and_batch_shift_indices` on ``indices``. This is helpful in the case that the indices can be flattened once and cached for many batch lookups. Returns ------- selected_targets : ``torch.Tensor`` A tensor with shape [indices.size(), target.size(-1)] representing the embedded indices extracted from the batch flattened target tensor. """ if flattened_indices is None: # Shape: (batch_size * d_1 * ... * d_n) flattened_indices = flatten_and_batch_shift_indices(indices, target.size(1)) # Shape: (batch_size * sequence_length, embedding_size) flattened_target = target.view(-1, target.size(-1)) # Shape: (batch_size * d_1 * ... * d_n, embedding_size) flattened_selected = flattened_target.index_select(0, flattened_indices) selected_shape = list(indices.size()) + [target.size(-1)] # Shape: (batch_size, d_1, ..., d_n, embedding_size) selected_targets = flattened_selected.view(*selected_shape) return selected_targets
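A sketch of the span-endpoint use case mentioned above (random embeddings, illustrative indices):

import torch

target = torch.rand(2, 10, 16)                      # (batch_size, sequence_length, embedding_size)
indices = torch.tensor([[0, 3], [2, 7]])            # e.g. one (start, end) index pair per instance
selected = batched_index_select(target, indices)    # shape (2, 2, 16)
assert torch.equal(selected[1, 1], target[1, 7])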
The given ``indices`` of size ``(set_size, subset_size)`` specifies subsets of the ``target`` that each of the set_size rows should select. The `target` has size ``(batch_size, sequence_length, embedding_size)``, and the resulting selected tensor has size ``(batch_size, set_size, subset_size, embedding_size)``. Parameters ---------- target : ``torch.Tensor``, required. A Tensor of shape (batch_size, sequence_length, embedding_size). indices : ``torch.LongTensor``, required. A LongTensor of shape (set_size, subset_size). All indices must be < sequence_length as this tensor is an index into the sequence_length dimension of the target. Returns ------- selected : ``torch.Tensor``, required. A Tensor of shape (batch_size, set_size, subset_size, embedding_size).
def flattened_index_select(target: torch.Tensor, indices: torch.LongTensor) -> torch.Tensor: """ The given ``indices`` of size ``(set_size, subset_size)`` specifies subsets of the ``target`` that each of the set_size rows should select. The `target` has size ``(batch_size, sequence_length, embedding_size)``, and the resulting selected tensor has size ``(batch_size, set_size, subset_size, embedding_size)``. Parameters ---------- target : ``torch.Tensor``, required. A Tensor of shape (batch_size, sequence_length, embedding_size). indices : ``torch.LongTensor``, required. A LongTensor of shape (set_size, subset_size). All indices must be < sequence_length as this tensor is an index into the sequence_length dimension of the target. Returns ------- selected : ``torch.Tensor``, required. A Tensor of shape (batch_size, set_size, subset_size, embedding_size). """ if indices.dim() != 2: raise ConfigurationError("Indices passed to flattened_index_select had shape {} but " "only 2 dimensional inputs are supported.".format(indices.size())) # Shape: (batch_size, set_size * subset_size, embedding_size) flattened_selected = target.index_select(1, indices.view(-1)) # Shape: (batch_size, set_size, subset_size, embedding_size) selected = flattened_selected.view(target.size(0), indices.size(0), indices.size(1), -1) return selected
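A minimal sketch: every batch element selects the same set of subsets (toy shapes):

import torch

target = torch.rand(3, 6, 8)                        # (batch_size, sequence_length, embedding_size)
indices = torch.tensor([[0, 1], [2, 3]])            # (set_size, subset_size)
selected = flattened_index_select(target, indices)  # shape (3, 2, 2, 8)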
Returns a range vector with the desired size, starting at 0. The CUDA implementation is meant to avoid copying data from CPU to GPU.
def get_range_vector(size: int, device: int) -> torch.Tensor:
    """
    Returns a range vector with the desired size, starting at 0. The CUDA implementation
    is meant to avoid copying data from the CPU to the GPU.
    """
    if device > -1:
        return torch.cuda.LongTensor(size, device=device).fill_(1).cumsum(0) - 1
    else:
        return torch.arange(0, size, dtype=torch.long)
Places the given values (designed for distances) into ``num_total_buckets`` semi-logscale
buckets, with ``num_identity_buckets`` of these capturing single values.

The default settings will bucket values into the following buckets:
[0, 1, 2, 3, 4, 5-7, 8-15, 16-31, 32-63, 64+].

Parameters
----------
distances : ``torch.Tensor``, required.
    A Tensor of any size, to be bucketed.
num_identity_buckets : int, optional (default = 4).
    The number of identity buckets (those only holding a single value).
num_total_buckets : int, optional (default = 10)
    The total number of buckets to bucket values into.

Returns
-------
A tensor of the same shape as the input, containing the indices of the buckets
the values were placed in.
def bucket_values(distances: torch.Tensor,
                  num_identity_buckets: int = 4,
                  num_total_buckets: int = 10) -> torch.Tensor:
    """
    Places the given values (designed for distances) into ``num_total_buckets`` semi-logscale
    buckets, with ``num_identity_buckets`` of these capturing single values.

    The default settings will bucket values into the following buckets:
    [0, 1, 2, 3, 4, 5-7, 8-15, 16-31, 32-63, 64+].

    Parameters
    ----------
    distances : ``torch.Tensor``, required.
        A Tensor of any size, to be bucketed.
    num_identity_buckets : int, optional (default = 4).
        The number of identity buckets (those only holding a single value).
    num_total_buckets : int, optional (default = 10)
        The total number of buckets to bucket values into.

    Returns
    -------
    A tensor of the same shape as the input, containing the indices of the buckets
    the values were placed in.
    """
    # Chunk the values into semi-logscale buckets using .floor().
    # This is a semi-logscale bucketing because we divide by log(2) after taking the log.
    # We do this to make the buckets more granular in the initial range, where we expect
    # most values to fall. We then add (num_identity_buckets - 1) because we want these indices
    # to start _after_ the fixed number of buckets which we specified would only hold single values.
    logspace_index = (distances.float().log() / math.log(2)).floor().long() + (num_identity_buckets - 1)
    # create a mask for values which will go into single number buckets (i.e not a range).
    use_identity_mask = (distances <= num_identity_buckets).long()
    use_buckets_mask = 1 + (-1 * use_identity_mask)
    # Use the original values if they are less than num_identity_buckets, otherwise
    # use the logspace indices.
    combined_index = use_identity_mask * distances + use_buckets_mask * logspace_index
    # Clamp to put anything > num_total_buckets into the final bucket.
    return combined_index.clamp(0, num_total_buckets - 1)
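A quick sketch of the default behaviour (the distances here are made up): values up to ``num_identity_buckets`` map to their own bucket, and larger values fall into the semi-logscale ranges listed above.

import torch
from allennlp.nn.util import bucket_values

distances = torch.tensor([0, 1, 4, 5, 7, 8, 15, 16, 100])
print(bucket_values(distances))
# tensor([0, 1, 4, 5, 5, 6, 6, 7, 9]) under the default settings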
Add begin/end of sentence tokens to the batch of sentences. Given a batch of sentences with size ``(batch_size, timesteps)`` or ``(batch_size, timesteps, dim)`` this returns a tensor of shape ``(batch_size, timesteps + 2)`` or ``(batch_size, timesteps + 2, dim)`` respectively. Returns both the new tensor and updated mask. Parameters ---------- tensor : ``torch.Tensor`` A tensor of shape ``(batch_size, timesteps)`` or ``(batch_size, timesteps, dim)`` mask : ``torch.Tensor`` A tensor of shape ``(batch_size, timesteps)`` sentence_begin_token: Any (anything that can be broadcast in torch for assignment) For 2D input, a scalar with the <S> id. For 3D input, a tensor with length dim. sentence_end_token: Any (anything that can be broadcast in torch for assignment) For 2D input, a scalar with the </S> id. For 3D input, a tensor with length dim. Returns ------- tensor_with_boundary_tokens : ``torch.Tensor`` The tensor with the appended and prepended boundary tokens. If the input was 2D, it has shape (batch_size, timesteps + 2) and if the input was 3D, it has shape (batch_size, timesteps + 2, dim). new_mask : ``torch.Tensor`` The new mask for the tensor, taking into account the appended tokens marking the beginning and end of the sentence.
def add_sentence_boundary_token_ids(tensor: torch.Tensor, mask: torch.Tensor, sentence_begin_token: Any, sentence_end_token: Any) -> Tuple[torch.Tensor, torch.Tensor]: """ Add begin/end of sentence tokens to the batch of sentences. Given a batch of sentences with size ``(batch_size, timesteps)`` or ``(batch_size, timesteps, dim)`` this returns a tensor of shape ``(batch_size, timesteps + 2)`` or ``(batch_size, timesteps + 2, dim)`` respectively. Returns both the new tensor and updated mask. Parameters ---------- tensor : ``torch.Tensor`` A tensor of shape ``(batch_size, timesteps)`` or ``(batch_size, timesteps, dim)`` mask : ``torch.Tensor`` A tensor of shape ``(batch_size, timesteps)`` sentence_begin_token: Any (anything that can be broadcast in torch for assignment) For 2D input, a scalar with the <S> id. For 3D input, a tensor with length dim. sentence_end_token: Any (anything that can be broadcast in torch for assignment) For 2D input, a scalar with the </S> id. For 3D input, a tensor with length dim. Returns ------- tensor_with_boundary_tokens : ``torch.Tensor`` The tensor with the appended and prepended boundary tokens. If the input was 2D, it has shape (batch_size, timesteps + 2) and if the input was 3D, it has shape (batch_size, timesteps + 2, dim). new_mask : ``torch.Tensor`` The new mask for the tensor, taking into account the appended tokens marking the beginning and end of the sentence. """ # TODO: matthewp, profile this transfer sequence_lengths = mask.sum(dim=1).detach().cpu().numpy() tensor_shape = list(tensor.data.shape) new_shape = list(tensor_shape) new_shape[1] = tensor_shape[1] + 2 tensor_with_boundary_tokens = tensor.new_zeros(*new_shape) if len(tensor_shape) == 2: tensor_with_boundary_tokens[:, 1:-1] = tensor tensor_with_boundary_tokens[:, 0] = sentence_begin_token for i, j in enumerate(sequence_lengths): tensor_with_boundary_tokens[i, j + 1] = sentence_end_token new_mask = (tensor_with_boundary_tokens != 0).long() elif len(tensor_shape) == 3: tensor_with_boundary_tokens[:, 1:-1, :] = tensor for i, j in enumerate(sequence_lengths): tensor_with_boundary_tokens[i, 0, :] = sentence_begin_token tensor_with_boundary_tokens[i, j + 1, :] = sentence_end_token new_mask = ((tensor_with_boundary_tokens > 0).long().sum(dim=-1) > 0).long() else: raise ValueError("add_sentence_boundary_token_ids only accepts 2D and 3D input") return tensor_with_boundary_tokens, new_mask
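A sketch of the 2D (token id) case; the token ids and the 101/102 boundary ids below are invented for illustration.

import torch
from allennlp.nn.util import add_sentence_boundary_token_ids

tensor = torch.tensor([[7, 8, 9], [4, 5, 0]])   # two right-padded sentences (0 is padding)
mask = torch.tensor([[1, 1, 1], [1, 1, 0]])
new_tensor, new_mask = add_sentence_boundary_token_ids(tensor, mask, 101, 102)
print(new_tensor)
# tensor([[101,   7,   8,   9, 102],
#         [101,   4,   5, 102,   0]])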
Remove begin/end of sentence embeddings from the batch of sentences. Given a batch of sentences with size ``(batch_size, timesteps, dim)`` this returns a tensor of shape ``(batch_size, timesteps - 2, dim)`` after removing the beginning and end sentence markers. The sentences are assumed to be padded on the right, with the beginning of each sentence assumed to occur at index 0 (i.e., ``mask[:, 0]`` is assumed to be 1). Returns both the new tensor and updated mask. This function is the inverse of ``add_sentence_boundary_token_ids``. Parameters ---------- tensor : ``torch.Tensor`` A tensor of shape ``(batch_size, timesteps, dim)`` mask : ``torch.Tensor`` A tensor of shape ``(batch_size, timesteps)`` Returns ------- tensor_without_boundary_tokens : ``torch.Tensor`` The tensor after removing the boundary tokens of shape ``(batch_size, timesteps - 2, dim)`` new_mask : ``torch.Tensor`` The new mask for the tensor of shape ``(batch_size, timesteps - 2)``.
def remove_sentence_boundaries(tensor: torch.Tensor, mask: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]: """ Remove begin/end of sentence embeddings from the batch of sentences. Given a batch of sentences with size ``(batch_size, timesteps, dim)`` this returns a tensor of shape ``(batch_size, timesteps - 2, dim)`` after removing the beginning and end sentence markers. The sentences are assumed to be padded on the right, with the beginning of each sentence assumed to occur at index 0 (i.e., ``mask[:, 0]`` is assumed to be 1). Returns both the new tensor and updated mask. This function is the inverse of ``add_sentence_boundary_token_ids``. Parameters ---------- tensor : ``torch.Tensor`` A tensor of shape ``(batch_size, timesteps, dim)`` mask : ``torch.Tensor`` A tensor of shape ``(batch_size, timesteps)`` Returns ------- tensor_without_boundary_tokens : ``torch.Tensor`` The tensor after removing the boundary tokens of shape ``(batch_size, timesteps - 2, dim)`` new_mask : ``torch.Tensor`` The new mask for the tensor of shape ``(batch_size, timesteps - 2)``. """ # TODO: matthewp, profile this transfer sequence_lengths = mask.sum(dim=1).detach().cpu().numpy() tensor_shape = list(tensor.data.shape) new_shape = list(tensor_shape) new_shape[1] = tensor_shape[1] - 2 tensor_without_boundary_tokens = tensor.new_zeros(*new_shape) new_mask = tensor.new_zeros((new_shape[0], new_shape[1]), dtype=torch.long) for i, j in enumerate(sequence_lengths): if j > 2: tensor_without_boundary_tokens[i, :(j - 2), :] = tensor[i, 1:(j - 1), :] new_mask[i, :(j - 2)] = 1 return tensor_without_boundary_tokens, new_mask
Implements the frequency-based positional encoding described in `Attention is all you Need <https://www.semanticscholar.org/paper/Attention-Is-All-You-Need-Vaswani-Shazeer/0737da0767d77606169cbf4187b83e1ab62f6077>`_ . Adds sinusoids of different frequencies to a ``Tensor``. A sinusoid of a different frequency and phase is added to each dimension of the input ``Tensor``. This allows the attention heads to use absolute and relative positions. The number of timescales is equal to hidden_dim / 2 within the range (min_timescale, max_timescale). For each timescale, the two sinusoidal signals sin(timestep / timescale) and cos(timestep / timescale) are generated and concatenated along the hidden_dim dimension. Parameters ---------- tensor : ``torch.Tensor`` a Tensor with shape (batch_size, timesteps, hidden_dim). min_timescale : ``float``, optional (default = 1.0) The smallest timescale to use. max_timescale : ``float``, optional (default = 1.0e4) The largest timescale to use. Returns ------- The input tensor augmented with the sinusoidal frequencies.
def add_positional_features(tensor: torch.Tensor, min_timescale: float = 1.0, max_timescale: float = 1.0e4): # pylint: disable=line-too-long """ Implements the frequency-based positional encoding described in `Attention is all you Need <https://www.semanticscholar.org/paper/Attention-Is-All-You-Need-Vaswani-Shazeer/0737da0767d77606169cbf4187b83e1ab62f6077>`_ . Adds sinusoids of different frequencies to a ``Tensor``. A sinusoid of a different frequency and phase is added to each dimension of the input ``Tensor``. This allows the attention heads to use absolute and relative positions. The number of timescales is equal to hidden_dim / 2 within the range (min_timescale, max_timescale). For each timescale, the two sinusoidal signals sin(timestep / timescale) and cos(timestep / timescale) are generated and concatenated along the hidden_dim dimension. Parameters ---------- tensor : ``torch.Tensor`` a Tensor with shape (batch_size, timesteps, hidden_dim). min_timescale : ``float``, optional (default = 1.0) The smallest timescale to use. max_timescale : ``float``, optional (default = 1.0e4) The largest timescale to use. Returns ------- The input tensor augmented with the sinusoidal frequencies. """ _, timesteps, hidden_dim = tensor.size() timestep_range = get_range_vector(timesteps, get_device_of(tensor)).data.float() # We're generating both cos and sin frequencies, # so half for each. num_timescales = hidden_dim // 2 timescale_range = get_range_vector(num_timescales, get_device_of(tensor)).data.float() log_timescale_increments = math.log(float(max_timescale) / float(min_timescale)) / float(num_timescales - 1) inverse_timescales = min_timescale * torch.exp(timescale_range * -log_timescale_increments) # Broadcasted multiplication - shape (timesteps, num_timescales) scaled_time = timestep_range.unsqueeze(1) * inverse_timescales.unsqueeze(0) # shape (timesteps, 2 * num_timescales) sinusoids = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 1) if hidden_dim % 2 != 0: # if the number of dimensions is odd, the cos and sin # timescales had size (hidden_dim - 1) / 2, so we need # to add a row of zeros to make up the difference. sinusoids = torch.cat([sinusoids, sinusoids.new_zeros(timesteps, 1)], 1) return tensor + sinusoids.unsqueeze(0)
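A quick sanity check (shapes are illustrative): the sinusoids are added to the input rather than concatenated, so the output shape matches the input shape.

import torch
from allennlp.nn.util import add_positional_features

x = torch.zeros(1, 5, 8)            # (batch_size, timesteps, hidden_dim)
out = add_positional_features(x)
print(out.shape)                    # torch.Size([1, 5, 8])
print(out[0, 0])                    # timestep 0: the sin terms are 0 and the cos terms are 1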
Produce N identical layers.
def clone(module: torch.nn.Module, num_copies: int) -> torch.nn.ModuleList: """Produce N identical layers.""" return torch.nn.ModuleList([copy.deepcopy(module) for _ in range(num_copies)])
Given a (possibly higher order) tensor of ids with shape (d1, ..., dn, sequence_length),
returns a view that's (d1 * ... * dn, sequence_length).
If the original tensor is 1-d or 2-d, return it as is.
def combine_initial_dims(tensor: torch.Tensor) -> torch.Tensor:
    """
    Given a (possibly higher order) tensor of ids with shape (d1, ..., dn, sequence_length),
    returns a view that's (d1 * ... * dn, sequence_length).
    If the original tensor is 1-d or 2-d, return it as is.
    """
    if tensor.dim() <= 2:
        return tensor
    else:
        return tensor.view(-1, tensor.size(-1))
Given a tensor of embeddings with shape (d1 * ... * dn, sequence_length, embedding_dim)
and the original shape (d1, ..., dn, sequence_length), return the reshaped tensor of
embeddings with shape (d1, ..., dn, sequence_length, embedding_dim).
If the original shape is 1-d or 2-d, return the tensor as is.
def uncombine_initial_dims(tensor: torch.Tensor, original_size: torch.Size) -> torch.Tensor:
    """
    Given a tensor of embeddings with shape (d1 * ... * dn, sequence_length, embedding_dim)
    and the original shape (d1, ..., dn, sequence_length), return the reshaped tensor of
    embeddings with shape (d1, ..., dn, sequence_length, embedding_dim).
    If the original shape is 1-d or 2-d, return the tensor as is.
    """
    if len(original_size) <= 2:
        return tensor
    else:
        view_args = list(original_size) + [tensor.size(-1)]
        return tensor.view(*view_args)
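A round-trip sketch for these two helpers (the shapes are made up): flatten the leading dimensions before a wrapped embedder call, then restore them afterwards.

import torch
from allennlp.nn.util import combine_initial_dims, uncombine_initial_dims

ids = torch.randint(0, 100, (2, 3, 7))          # (d1, d2, sequence_length)
flat_ids = combine_initial_dims(ids)            # shape (6, 7)
embedded = torch.randn(6, 7, 16)                # stand-in for an embedder's output on flat_ids
restored = uncombine_initial_dims(embedded, ids.size())
print(restored.shape)                           # torch.Size([2, 3, 7, 16])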
Checks if the string occurs in the table, and if it does, returns the names of the columns under which it occurs. If it does not, returns an empty list.
def _string_in_table(self, candidate: str) -> List[str]:
    """
    Checks if the string occurs in the table, and if it does, returns the names of the columns
    under which it occurs. If it does not, returns an empty list.
    """
    candidate_column_names: List[str] = []
    # First check if the entire candidate occurs as a cell.
    if candidate in self._string_column_mapping:
        candidate_column_names = self._string_column_mapping[candidate]
    # If not, check if it is a substring of any cell value.
    if not candidate_column_names:
        for cell_value, column_names in self._string_column_mapping.items():
            if candidate in cell_value:
                candidate_column_names.extend(column_names)
    candidate_column_names = list(set(candidate_column_names))
    return candidate_column_names
These are the transformation rules used to normalize cell and column names in Sempre. See
``edu.stanford.nlp.sempre.tables.StringNormalizationUtils.characterNormalize`` and
``edu.stanford.nlp.sempre.tables.TableTypeSystem.canonicalizeName``. We reproduce those
rules here to normalize and canonicalize cells and columns in the same way so that we can
match them against constants in logical forms appropriately.
def normalize_string(string: str) -> str:
    """
    These are the transformation rules used to normalize cell and column names in Sempre. See
    ``edu.stanford.nlp.sempre.tables.StringNormalizationUtils.characterNormalize`` and
    ``edu.stanford.nlp.sempre.tables.TableTypeSystem.canonicalizeName``. We reproduce those
    rules here to normalize and canonicalize cells and columns in the same way so that we can
    match them against constants in logical forms appropriately.
    """
    # Normalization rules from Sempre
    # \u201A -> ,
    string = re.sub("‚", ",", string)
    string = re.sub("„", ",,", string)
    string = re.sub("[·・]", ".", string)
    string = re.sub("…", "...", string)
    string = re.sub("ˆ", "^", string)
    string = re.sub("˜", "~", string)
    string = re.sub("‹", "<", string)
    string = re.sub("›", ">", string)
    string = re.sub("[‘’´`]", "'", string)
    string = re.sub("[“”«»]", "\"", string)
    string = re.sub("[•†‡²³]", "", string)
    string = re.sub("[‐‑–—−]", "-", string)
    # Oddly, some unicode characters get converted to _ instead of being stripped. Not really
    # sure how sempre decides what to do with these... TODO(mattg): can we just get rid of the
    # need for this function somehow? It's causing a whole lot of headaches.
    string = re.sub("[ðø′″€⁄ªΣ]", "_", string)
    # This is such a mess. There isn't just a block of unicode that we can strip out, because
    # sometimes sempre just strips diacritics... We'll try stripping out a few separate
    # blocks, skipping the ones that sempre skips...
    string = re.sub("[\\u0180-\\u0210]", "", string).strip()
    string = re.sub("[\\u0220-\\uFFFF]", "", string).strip()
    string = string.replace("\\n", "_")
    string = re.sub("\\s+", " ", string)
    # Canonicalization rules from Sempre.
    string = re.sub("[^\\w]", "_", string)
    string = re.sub("_+", "_", string)
    string = re.sub("_$", "", string)
    return unidecode(string.lower())
Takes a logical form as a lisp string and returns a nested list representation of the lisp. For example, "(count (division first))" would get mapped to ['count', ['division', 'first']].
def lisp_to_nested_expression(lisp_string: str) -> List: """ Takes a logical form as a lisp string and returns a nested list representation of the lisp. For example, "(count (division first))" would get mapped to ['count', ['division', 'first']]. """ stack: List = [] current_expression: List = [] tokens = lisp_string.split() for token in tokens: while token[0] == '(': nested_expression: List = [] current_expression.append(nested_expression) stack.append(current_expression) current_expression = nested_expression token = token[1:] current_expression.append(token.replace(')', '')) while token[-1] == ')': current_expression = stack.pop() token = token[:-1] return current_expression[0]
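Calling the function above directly mirrors the docstring's example; nested parentheses become nested Python lists.

print(lisp_to_nested_expression("(count (division first))"))
# ['count', ['division', 'first']]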
Parameters ---------- batch : ``List[List[str]]``, required A list of tokenized sentences. Returns ------- A tuple of tensors, the first representing activations (batch_size, 3, num_timesteps, 1024) and the second a mask (batch_size, num_timesteps).
def batch_to_embeddings(self, batch: List[List[str]]) -> Tuple[torch.Tensor, torch.Tensor]:
    """
    Parameters
    ----------
    batch : ``List[List[str]]``, required
        A list of tokenized sentences.

    Returns
    -------
        A tuple of tensors, the first representing activations (batch_size, 3, num_timesteps, 1024) and
        the second a mask (batch_size, num_timesteps).
    """
    character_ids = batch_to_ids(batch)
    if self.cuda_device >= 0:
        character_ids = character_ids.cuda(device=self.cuda_device)

    bilm_output = self.elmo_bilm(character_ids)
    layer_activations = bilm_output['activations']
    mask_with_bos_eos = bilm_output['mask']

    # without_bos_eos is a 3 element list of (activation, mask) tensor pairs,
    # each with size (batch_size, num_timesteps, dim) and (batch_size, num_timesteps)
    # respectively.
    without_bos_eos = [remove_sentence_boundaries(layer, mask_with_bos_eos)
                       for layer in layer_activations]
    # Converts a list of pairs (activation, mask) tensors to a single tensor of activations.
    activations = torch.cat([ele[0].unsqueeze(1) for ele in without_bos_eos], dim=1)
    # The mask is the same for each ELMo vector, so just take the first.
    mask = without_bos_eos[0][1]

    return activations, mask
Computes the ELMo embeddings for a single tokenized sentence. Please note that ELMo has internal state and will give different results for the same input. See the comment under the class definition. Parameters ---------- sentence : ``List[str]``, required A tokenized sentence. Returns ------- A tensor containing the ELMo vectors.
def embed_sentence(self, sentence: List[str]) -> numpy.ndarray: """ Computes the ELMo embeddings for a single tokenized sentence. Please note that ELMo has internal state and will give different results for the same input. See the comment under the class definition. Parameters ---------- sentence : ``List[str]``, required A tokenized sentence. Returns ------- A tensor containing the ELMo vectors. """ return self.embed_batch([sentence])[0]
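A minimal sketch of using these methods through an ``ElmoEmbedder`` (the default pretrained weights are downloaded on first use, and the sentence here is just an example):

from allennlp.commands.elmo import ElmoEmbedder

embedder = ElmoEmbedder()
vectors = embedder.embed_sentence(["I", "ate", "an", "apple", "."])
print(vectors.shape)   # (3, 5, 1024): three ELMo layers, five tokens, 1024 dimensions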
Computes the ELMo embeddings for a batch of tokenized sentences. Please note that ELMo has internal state and will give different results for the same input. See the comment under the class definition. Parameters ---------- batch : ``List[List[str]]``, required A list of tokenized sentences. Returns ------- A list of tensors, each representing the ELMo vectors for the input sentence at the same index.
def embed_batch(self, batch: List[List[str]]) -> List[numpy.ndarray]: """ Computes the ELMo embeddings for a batch of tokenized sentences. Please note that ELMo has internal state and will give different results for the same input. See the comment under the class definition. Parameters ---------- batch : ``List[List[str]]``, required A list of tokenized sentences. Returns ------- A list of tensors, each representing the ELMo vectors for the input sentence at the same index. """ elmo_embeddings = [] # Batches with only an empty sentence will throw an exception inside AllenNLP, so we handle this case # and return an empty embedding instead. if batch == [[]]: elmo_embeddings.append(empty_embedding()) else: embeddings, mask = self.batch_to_embeddings(batch) for i in range(len(batch)): length = int(mask[i, :].sum()) # Slicing the embedding :0 throws an exception so we need to special case for empty sentences. if length == 0: elmo_embeddings.append(empty_embedding()) else: elmo_embeddings.append(embeddings[i, :, :length, :].detach().cpu().numpy()) return elmo_embeddings
Computes the ELMo embeddings for an iterable of sentences.

Please note that ELMo has internal state and will give different results for the same input.
See the comment under the class definition.

Parameters
----------
sentences : ``Iterable[List[str]]``, required
    An iterable of tokenized sentences.
batch_size : ``int``, optional (default = 64)
    The number of sentences ELMo should process at once.

Returns
-------
    A list of tensors, each representing the ELMo vectors for the input sentence at the same index.
def embed_sentences(self,
                    sentences: Iterable[List[str]],
                    batch_size: int = DEFAULT_BATCH_SIZE) -> Iterable[numpy.ndarray]:
    """
    Computes the ELMo embeddings for an iterable of sentences.

    Please note that ELMo has internal state and will give different results for the same input.
    See the comment under the class definition.

    Parameters
    ----------
    sentences : ``Iterable[List[str]]``, required
        An iterable of tokenized sentences.
    batch_size : ``int``, optional (default = 64)
        The number of sentences ELMo should process at once.

    Returns
    -------
        A list of tensors, each representing the ELMo vectors for the input sentence at the same index.
    """
    for batch in lazy_groups_of(iter(sentences), batch_size):
        yield from self.embed_batch(batch)
Computes ELMo embeddings from an input_file where each line contains a sentence tokenized by whitespace. The ELMo embeddings are written out in HDF5 format, where each sentence embedding is saved in a dataset with the line number in the original file as the key. Parameters ---------- input_file : ``IO``, required A file with one tokenized sentence per line. output_file_path : ``str``, required A path to the output hdf5 file. output_format : ``str``, optional, (default = "all") The embeddings to output. Must be one of "all", "top", or "average". batch_size : ``int``, optional, (default = 64) The number of sentences to process in ELMo at one time. forget_sentences : ``bool``, optional, (default = False). If use_sentence_keys is False, whether or not to include a string serialized JSON dictionary that associates sentences with their line number (its HDF5 key). The mapping is placed in the "sentence_to_index" HDF5 key. This is useful if you want to use the embeddings without keeping the original file of sentences around. use_sentence_keys : ``bool``, optional, (default = False). Whether or not to use full sentences as keys. By default, the line numbers of the input file are used as ids, which is more robust.
def embed_file(self, input_file: IO, output_file_path: str, output_format: str = "all", batch_size: int = DEFAULT_BATCH_SIZE, forget_sentences: bool = False, use_sentence_keys: bool = False) -> None: """ Computes ELMo embeddings from an input_file where each line contains a sentence tokenized by whitespace. The ELMo embeddings are written out in HDF5 format, where each sentence embedding is saved in a dataset with the line number in the original file as the key. Parameters ---------- input_file : ``IO``, required A file with one tokenized sentence per line. output_file_path : ``str``, required A path to the output hdf5 file. output_format : ``str``, optional, (default = "all") The embeddings to output. Must be one of "all", "top", or "average". batch_size : ``int``, optional, (default = 64) The number of sentences to process in ELMo at one time. forget_sentences : ``bool``, optional, (default = False). If use_sentence_keys is False, whether or not to include a string serialized JSON dictionary that associates sentences with their line number (its HDF5 key). The mapping is placed in the "sentence_to_index" HDF5 key. This is useful if you want to use the embeddings without keeping the original file of sentences around. use_sentence_keys : ``bool``, optional, (default = False). Whether or not to use full sentences as keys. By default, the line numbers of the input file are used as ids, which is more robust. """ assert output_format in ["all", "top", "average"] # Tokenizes the sentences. sentences = [line.strip() for line in input_file] blank_lines = [i for (i, line) in enumerate(sentences) if line == ""] if blank_lines: raise ConfigurationError(f"Your input file contains empty lines at indexes " f"{blank_lines}. Please remove them.") split_sentences = [sentence.split() for sentence in sentences] # Uses the sentence index as the key. if use_sentence_keys: logger.warning("Using sentences as keys can fail if sentences " "contain forward slashes or colons. Use with caution.") embedded_sentences = zip(sentences, self.embed_sentences(split_sentences, batch_size)) else: embedded_sentences = ((str(i), x) for i, x in enumerate(self.embed_sentences(split_sentences, batch_size))) sentence_to_index = {} logger.info("Processing sentences.") with h5py.File(output_file_path, 'w') as fout: for key, embeddings in Tqdm.tqdm(embedded_sentences): if use_sentence_keys and key in fout.keys(): raise ConfigurationError(f"Key already exists in {output_file_path}. " f"To encode duplicate sentences, do not pass " f"the --use-sentence-keys flag.") if not forget_sentences and not use_sentence_keys: sentence = sentences[int(key)] sentence_to_index[sentence] = key if output_format == "all": output = embeddings elif output_format == "top": output = embeddings[-1] elif output_format == "average": output = numpy.average(embeddings, axis=0) fout.create_dataset( str(key), output.shape, dtype='float32', data=output ) if not forget_sentences and not use_sentence_keys: sentence_index_dataset = fout.create_dataset( "sentence_to_index", (1,), dtype=h5py.special_dtype(vlen=str)) sentence_index_dataset[0] = json.dumps(sentence_to_index) input_file.close()
Add the field to the existing fields mapping. If we have already indexed the Instance, then we also index `field`, so it is necessary to supply the vocab.
def add_field(self, field_name: str, field: Field, vocab: Vocabulary = None) -> None: """ Add the field to the existing fields mapping. If we have already indexed the Instance, then we also index `field`, so it is necessary to supply the vocab. """ self.fields[field_name] = field if self.indexed: field.index(vocab)
Increments counts in the given ``counter`` for all of the vocabulary items in all of the ``Fields`` in this ``Instance``.
def count_vocab_items(self, counter: Dict[str, Dict[str, int]]): """ Increments counts in the given ``counter`` for all of the vocabulary items in all of the ``Fields`` in this ``Instance``. """ for field in self.fields.values(): field.count_vocab_items(counter)
Indexes all fields in this ``Instance`` using the provided ``Vocabulary``. This `mutates` the current object, it does not return a new ``Instance``. A ``DataIterator`` will call this on each pass through a dataset; we use the ``indexed`` flag to make sure that indexing only happens once. This means that if for some reason you modify your vocabulary after you've indexed your instances, you might get unexpected behavior.
def index_fields(self, vocab: Vocabulary) -> None: """ Indexes all fields in this ``Instance`` using the provided ``Vocabulary``. This `mutates` the current object, it does not return a new ``Instance``. A ``DataIterator`` will call this on each pass through a dataset; we use the ``indexed`` flag to make sure that indexing only happens once. This means that if for some reason you modify your vocabulary after you've indexed your instances, you might get unexpected behavior. """ if not self.indexed: self.indexed = True for field in self.fields.values(): field.index(vocab)
Returns a dictionary of padding lengths, keyed by field name. Each ``Field`` returns a mapping from padding keys to actual lengths, and we just key that dictionary by field name.
def get_padding_lengths(self) -> Dict[str, Dict[str, int]]: """ Returns a dictionary of padding lengths, keyed by field name. Each ``Field`` returns a mapping from padding keys to actual lengths, and we just key that dictionary by field name. """ lengths = {} for field_name, field in self.fields.items(): lengths[field_name] = field.get_padding_lengths() return lengths
Pads each ``Field`` in this instance to the lengths given in ``padding_lengths`` (which is
keyed by field name, then by padding key, the same as the return value in
:func:`get_padding_lengths`), returning a dictionary of torch tensors keyed by field name.
If ``padding_lengths`` is omitted, we will call ``self.get_padding_lengths()`` to get the
sizes of the tensors to create.
def as_tensor_dict(self,
                   padding_lengths: Dict[str, Dict[str, int]] = None) -> Dict[str, DataArray]:
    """
    Pads each ``Field`` in this instance to the lengths given in ``padding_lengths`` (which is
    keyed by field name, then by padding key, the same as the return value in
    :func:`get_padding_lengths`), returning a dictionary of torch tensors keyed by field name.
    If ``padding_lengths`` is omitted, we will call ``self.get_padding_lengths()`` to get the
    sizes of the tensors to create.
    """
    padding_lengths = padding_lengths or self.get_padding_lengths()
    tensors = {}
    for field_name, field in self.fields.items():
        tensors[field_name] = field.as_tensor(padding_lengths[field_name])
    return tensors
Return the full name (including module) of the given class.
def full_name(cla55: Optional[type]) -> str: """ Return the full name (including module) of the given class. """ # Special case to handle None: if cla55 is None: return "?" if issubclass(cla55, Initializer) and cla55 not in [Initializer, PretrainedModelInitializer]: init_fn = cla55()._init_function return f"{init_fn.__module__}.{init_fn.__name__}" origin = getattr(cla55, '__origin__', None) args = getattr(cla55, '__args__', ()) # Special handling for compound types if origin in (Dict, dict): key_type, value_type = args return f"""Dict[{full_name(key_type)}, {full_name(value_type)}]""" elif origin in (Tuple, tuple, List, list, Sequence, collections.abc.Sequence): return f"""{_remove_prefix(str(origin))}[{", ".join(full_name(arg) for arg in args)}]""" elif origin == Union: # Special special case to handle optional types: if len(args) == 2 and args[-1] == type(None): return f"""Optional[{full_name(args[0])}]""" else: return f"""Union[{", ".join(full_name(arg) for arg in args)}]""" else: return _remove_prefix(f"{cla55.__module__}.{cla55.__name__}")
Find the name (if any) that a subclass was registered under. We do this simply by iterating through the registry until we find it.
def _get_config_type(cla55: type) -> Optional[str]: """ Find the name (if any) that a subclass was registered under. We do this simply by iterating through the registry until we find it. """ # Special handling for pytorch RNN types: if cla55 == torch.nn.RNN: return "rnn" elif cla55 == torch.nn.LSTM: return "lstm" elif cla55 == torch.nn.GRU: return "gru" for subclass_dict in Registrable._registry.values(): for name, subclass in subclass_dict.items(): if subclass == cla55: return name # Special handling for initializer functions if hasattr(subclass, '_initializer_wrapper'): sif = subclass()._init_function if sif == cla55: return sif.__name__.rstrip("_") return None
Inspect the docstring and get the comments for each parameter.
def _docspec_comments(obj) -> Dict[str, str]: """ Inspect the docstring and get the comments for each parameter. """ # Sometimes our docstring is on the class, and sometimes it's on the initializer, # so we've got to check both. class_docstring = getattr(obj, '__doc__', None) init_docstring = getattr(obj.__init__, '__doc__', None) if hasattr(obj, '__init__') else None docstring = class_docstring or init_docstring or '' doc = NumpyDocString(docstring) params = doc["Parameters"] comments: Dict[str, str] = {} for line in params: # It looks like when there's not a space after the parameter name, # numpydocstring parses it incorrectly. name_bad = line[0] name = name_bad.split(":")[0] # Sometimes the line has 3 fields, sometimes it has 4 fields. comment = "\n".join(line[-1]) comments[name] = comment return comments
Create the ``Config`` for a class by reflecting on its ``__init__`` method and applying a few hacks.
def _auto_config(cla55: Type[T]) -> Config[T]: """ Create the ``Config`` for a class by reflecting on its ``__init__`` method and applying a few hacks. """ typ3 = _get_config_type(cla55) # Don't include self, or vocab names_to_ignore = {"self", "vocab"} # Hack for RNNs if cla55 in [torch.nn.RNN, torch.nn.LSTM, torch.nn.GRU]: cla55 = torch.nn.RNNBase names_to_ignore.add("mode") if isinstance(cla55, type): # It's a class, so inspect its constructor function_to_inspect = cla55.__init__ else: # It's a function, so inspect it, and ignore tensor function_to_inspect = cla55 names_to_ignore.add("tensor") argspec = inspect.getfullargspec(function_to_inspect) comments = _docspec_comments(cla55) items: List[ConfigItem] = [] num_args = len(argspec.args) defaults = list(argspec.defaults or []) num_default_args = len(defaults) num_non_default_args = num_args - num_default_args # Required args all come first, default args at the end. defaults = [_NO_DEFAULT for _ in range(num_non_default_args)] + defaults for name, default in zip(argspec.args, defaults): if name in names_to_ignore: continue annotation = argspec.annotations.get(name) comment = comments.get(name) # Don't include Model, the only place you'd specify that is top-level. if annotation == Model: continue # Don't include DataIterator, the only place you'd specify that is top-level. if annotation == DataIterator: continue # Don't include params for an Optimizer if torch.optim.Optimizer in getattr(cla55, '__bases__', ()) and name == "params": continue # Don't include datasets in the trainer if cla55 == Trainer and name.endswith("_dataset"): continue # Hack in our Optimizer class to the trainer if cla55 == Trainer and annotation == torch.optim.Optimizer: annotation = AllenNLPOptimizer # Hack in embedding num_embeddings as optional (it can be inferred from the pretrained file) if cla55 == Embedding and name == "num_embeddings": default = None items.append(ConfigItem(name, annotation, default, comment)) # More hacks, Embedding if cla55 == Embedding: items.insert(1, ConfigItem("pretrained_file", str, None)) return Config(items, typ3=typ3)
Pretty-print a config in sort-of-JSON+comments.
def render_config(config: Config, indent: str = "") -> str: """ Pretty-print a config in sort-of-JSON+comments. """ # Add four spaces to the indent. new_indent = indent + " " return "".join([ # opening brace + newline "{\n", # "type": "...", (if present) f'{new_indent}"type": "{config.typ3}",\n' if config.typ3 else '', # render each item "".join(_render(item, new_indent) for item in config.items), # indent and close the brace indent, "}\n" ])
Render a single config item, with the provided indent
def _render(item: ConfigItem, indent: str = "") -> str: """ Render a single config item, with the provided indent """ optional = item.default_value != _NO_DEFAULT if is_configurable(item.annotation): rendered_annotation = f"{item.annotation} (configurable)" else: rendered_annotation = str(item.annotation) rendered_item = "".join([ # rendered_comment, indent, "// " if optional else "", f'"{item.name}": ', rendered_annotation, f" (default: {item.default_value} )" if optional else "", f" // {item.comment}" if item.comment else "", "\n" ]) return rendered_item
Return a mapping {registered_name -> subclass_name} for the registered subclasses of `cla55`.
def _valid_choices(cla55: type) -> Dict[str, str]: """ Return a mapping {registered_name -> subclass_name} for the registered subclasses of `cla55`. """ valid_choices: Dict[str, str] = {} if cla55 not in Registrable._registry: raise ValueError(f"{cla55} is not a known Registrable class") for name, subclass in Registrable._registry[cla55].items(): # These wrapper classes need special treatment if isinstance(subclass, (_Seq2SeqWrapper, _Seq2VecWrapper)): subclass = subclass._module_class valid_choices[name] = full_name(subclass) return valid_choices
Convert `url` into a hashed filename in a repeatable way. If `etag` is specified, append its hash to the url's, delimited by a period.
def url_to_filename(url: str, etag: str = None) -> str: """ Convert `url` into a hashed filename in a repeatable way. If `etag` is specified, append its hash to the url's, delimited by a period. """ url_bytes = url.encode('utf-8') url_hash = sha256(url_bytes) filename = url_hash.hexdigest() if etag: etag_bytes = etag.encode('utf-8') etag_hash = sha256(etag_bytes) filename += '.' + etag_hash.hexdigest() return filename
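A short sketch calling the function above (the URL and etag are made up): the filename is the sha256 hex digest of the URL, optionally followed by a period and the hex digest of the etag.

name = url_to_filename("https://example.com/model.tar.gz", etag='"abc123"')
print(name)   # "<64 hex chars>.<64 hex chars>"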
Return the url and etag (which may be ``None``) stored for `filename`. Raise ``FileNotFoundError`` if `filename` or its stored metadata do not exist.
def filename_to_url(filename: str, cache_dir: str = None) -> Tuple[str, str]: """ Return the url and etag (which may be ``None``) stored for `filename`. Raise ``FileNotFoundError`` if `filename` or its stored metadata do not exist. """ if cache_dir is None: cache_dir = CACHE_DIRECTORY cache_path = os.path.join(cache_dir, filename) if not os.path.exists(cache_path): raise FileNotFoundError("file {} not found".format(cache_path)) meta_path = cache_path + '.json' if not os.path.exists(meta_path): raise FileNotFoundError("file {} not found".format(meta_path)) with open(meta_path) as meta_file: metadata = json.load(meta_file) url = metadata['url'] etag = metadata['etag'] return url, etag
Given something that might be a URL (or might be a local path), determine which. If it's a URL, download the file and cache it, and return the path to the cached file. If it's already a local path, make sure the file exists and then return the path.
def cached_path(url_or_filename: Union[str, Path], cache_dir: str = None) -> str: """ Given something that might be a URL (or might be a local path), determine which. If it's a URL, download the file and cache it, and return the path to the cached file. If it's already a local path, make sure the file exists and then return the path. """ if cache_dir is None: cache_dir = CACHE_DIRECTORY if isinstance(url_or_filename, Path): url_or_filename = str(url_or_filename) url_or_filename = os.path.expanduser(url_or_filename) parsed = urlparse(url_or_filename) if parsed.scheme in ('http', 'https', 's3'): # URL, so get it from the cache (downloading if necessary) return get_from_cache(url_or_filename, cache_dir) elif os.path.exists(url_or_filename): # File, and it exists. return url_or_filename elif parsed.scheme == '': # File, but it doesn't exist. raise FileNotFoundError("file {} not found".format(url_or_filename)) else: # Something unknown raise ValueError("unable to parse {} as a URL or as a local path".format(url_or_filename))
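Typical usage (the paths here are hypothetical): the same call handles remote URLs, which are downloaded and cached, and local files, which are returned as-is.

from allennlp.common.file_utils import cached_path

archive = cached_path("https://example.com/model.tar.gz")   # downloaded to the cache directory
local = cached_path("/tmp/some-local-file.txt")             # returned unchanged if the file exists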
Given something that might be a URL (or might be a local path),
determine whether it is a URL or an existing file path.
def is_url_or_existing_file(url_or_filename: Union[str, Path, None]) -> bool:
    """
    Given something that might be a URL (or might be a local path),
    determine whether it is a URL or an existing file path.
    """
    if url_or_filename is None:
        return False
    url_or_filename = os.path.expanduser(str(url_or_filename))
    parsed = urlparse(url_or_filename)
    return parsed.scheme in ('http', 'https', 's3') or os.path.exists(url_or_filename)
Split a full s3 path into the bucket name and path.
def split_s3_path(url: str) -> Tuple[str, str]: """Split a full s3 path into the bucket name and path.""" parsed = urlparse(url) if not parsed.netloc or not parsed.path: raise ValueError("bad s3 path {}".format(url)) bucket_name = parsed.netloc s3_path = parsed.path # Remove '/' at beginning of path. if s3_path.startswith("/"): s3_path = s3_path[1:] return bucket_name, s3_path
Wrapper function for s3 requests in order to create more helpful error messages.
def s3_request(func: Callable): """ Wrapper function for s3 requests in order to create more helpful error messages. """ @wraps(func) def wrapper(url: str, *args, **kwargs): try: return func(url, *args, **kwargs) except ClientError as exc: if int(exc.response["Error"]["Code"]) == 404: raise FileNotFoundError("file {} not found".format(url)) else: raise return wrapper
Check ETag on S3 object.
def s3_etag(url: str) -> Optional[str]: """Check ETag on S3 object.""" s3_resource = boto3.resource("s3") bucket_name, s3_path = split_s3_path(url) s3_object = s3_resource.Object(bucket_name, s3_path) return s3_object.e_tag
Pull a file directly from S3.
def s3_get(url: str, temp_file: IO) -> None: """Pull a file directly from S3.""" s3_resource = boto3.resource("s3") bucket_name, s3_path = split_s3_path(url) s3_resource.Bucket(bucket_name).download_fileobj(s3_path, temp_file)
Given a URL, look for the corresponding dataset in the local cache. If it's not there, download it. Then return the path to the cached file.
def get_from_cache(url: str, cache_dir: str = None) -> str: """ Given a URL, look for the corresponding dataset in the local cache. If it's not there, download it. Then return the path to the cached file. """ if cache_dir is None: cache_dir = CACHE_DIRECTORY os.makedirs(cache_dir, exist_ok=True) # Get eTag to add to filename, if it exists. if url.startswith("s3://"): etag = s3_etag(url) else: response = requests.head(url, allow_redirects=True) if response.status_code != 200: raise IOError("HEAD request failed for url {} with status code {}" .format(url, response.status_code)) etag = response.headers.get("ETag") filename = url_to_filename(url, etag) # get cache path to put the file cache_path = os.path.join(cache_dir, filename) if not os.path.exists(cache_path): # Download to temporary file, then copy to cache dir once finished. # Otherwise you get corrupt cache entries if the download gets interrupted. with tempfile.NamedTemporaryFile() as temp_file: logger.info("%s not found in cache, downloading to %s", url, temp_file.name) # GET file object if url.startswith("s3://"): s3_get(url, temp_file) else: http_get(url, temp_file) # we are copying the file before closing it, so flush to avoid truncation temp_file.flush() # shutil.copyfileobj() starts at the current position, so go to the start temp_file.seek(0) logger.info("copying %s to cache at %s", temp_file.name, cache_path) with open(cache_path, 'wb') as cache_file: shutil.copyfileobj(temp_file, cache_file) logger.info("creating metadata file for %s", cache_path) meta = {'url': url, 'etag': etag} meta_path = cache_path + '.json' with open(meta_path, 'w') as meta_file: json.dump(meta, meta_file) logger.info("removing temp file %s", temp_file.name) return cache_path
Extract a de-duped collection (set) of text from a file. Expected file format is one item per line.
def read_set_from_file(filename: str) -> Set[str]: """ Extract a de-duped collection (set) of text from a file. Expected file format is one item per line. """ collection = set() with open(filename, 'r') as file_: for line in file_: collection.add(line.rstrip()) return collection
Processes the text2sql data into the following directory structure:

``dataset/{query_split, question_split}/{train,dev,test}.json``

for datasets which have train, dev and test splits, or:

``dataset/{query_split, question_split}/{split_{split_id}}.json``

for datasets which use cross validation.

The JSON format is identical to the original datasets, apart from the fact that they
are split into separate files with respect to the split_type. This means that
for the question split, all of the sql data is duplicated for each sentence
which is bucketed together as having the same semantics.

As an example, the following blob would be put "as-is" into the query split
dataset, and split into two datasets with identical blobs for the question split,
differing only in the "sentence" key, where blob1 would end up in the train split
and blob2 would be in the dev split, with the rest of the json duplicated in each.

{
    "comments": [],
    "old-name": "",
    "query-split": "train",
    "sentences": [{blob1, "question-split": "train"}, {blob2, "question-split": "dev"}],
    "sql": [],
    "variables": []
},

Parameters
----------
output_directory : str, required.
    The output directory.
data : str, required.
    The path to the data directory of https://github.com/jkkummerfeld/text2sql-data.
def main(output_directory: str, data: str) -> None:
    """
    Processes the text2sql data into the following directory structure:

    ``dataset/{query_split, question_split}/{train,dev,test}.json``

    for datasets which have train, dev and test splits, or:

    ``dataset/{query_split, question_split}/{split_{split_id}}.json``

    for datasets which use cross validation.

    The JSON format is identical to the original datasets, apart from the fact that they
    are split into separate files with respect to the split_type. This means that
    for the question split, all of the sql data is duplicated for each sentence
    which is bucketed together as having the same semantics.

    As an example, the following blob would be put "as-is" into the query split
    dataset, and split into two datasets with identical blobs for the question split,
    differing only in the "sentence" key, where blob1 would end up in the train split
    and blob2 would be in the dev split, with the rest of the json duplicated in each.

    {
        "comments": [],
        "old-name": "",
        "query-split": "train",
        "sentences": [{blob1, "question-split": "train"}, {blob2, "question-split": "dev"}],
        "sql": [],
        "variables": []
    },

    Parameters
    ----------
    output_directory : str, required.
        The output directory.
    data : str, required.
        The path to the data directory of https://github.com/jkkummerfeld/text2sql-data.
    """
    json_files = glob.glob(os.path.join(data, "*.json"))

    for dataset in json_files:
        dataset_name = os.path.basename(dataset)[:-5]
        print(f"Processing dataset: {dataset} into query and question "
              f"splits at output path: {output_directory + '/' + dataset_name}")
        full_dataset = json.load(open(dataset))
        if not isinstance(full_dataset, list):
            full_dataset = [full_dataset]

        for split_type in ["query_split", "question_split"]:
            dataset_out = os.path.join(output_directory, dataset_name, split_type)
            for split, split_dataset in process_dataset(full_dataset, split_type):
                os.makedirs(dataset_out, exist_ok=True)
                json.dump(split_dataset, open(os.path.join(dataset_out, split), "w"), indent=4)
Apply dropout to this layer, for this whole mini-batch.

dropout_prob = layer_index / total_layers * undecayed_dropout_prob if layer_index and
total_layers are specified, otherwise the undecayed_dropout_prob is used directly.

Parameters
----------
layer_input : ``torch.FloatTensor``, required
    The input tensor of this layer.
layer_output : ``torch.FloatTensor``, required
    The output tensor of this layer, with the same shape as the layer_input.
layer_index : ``int``
    The layer index, starting from 1. This is used to calculate the dropout prob
    together with the `total_layers` parameter.
total_layers : ``int``
    The total number of layers.

Returns
-------
output : ``torch.FloatTensor``
    A tensor with the same shape as `layer_input` and `layer_output`.
def forward(self,
            layer_input: torch.Tensor,
            layer_output: torch.Tensor,
            layer_index: int = None,
            total_layers: int = None) -> torch.Tensor:  # pylint: disable=arguments-differ
    """
    Apply dropout to this layer, for this whole mini-batch.

    dropout_prob = layer_index / total_layers * undecayed_dropout_prob if layer_index and
    total_layers are specified, otherwise the undecayed_dropout_prob is used directly.

    Parameters
    ----------
    layer_input : ``torch.FloatTensor``, required
        The input tensor of this layer.
    layer_output : ``torch.FloatTensor``, required
        The output tensor of this layer, with the same shape as the layer_input.
    layer_index : ``int``
        The layer index, starting from 1. This is used to calculate the dropout prob
        together with the `total_layers` parameter.
    total_layers : ``int``
        The total number of layers.

    Returns
    -------
    output : ``torch.FloatTensor``
        A tensor with the same shape as `layer_input` and `layer_output`.
    """
    if layer_index is not None and total_layers is not None:
        dropout_prob = 1.0 * self.undecayed_dropout_prob * layer_index / total_layers
    else:
        dropout_prob = 1.0 * self.undecayed_dropout_prob
    if self.training:
        if torch.rand(1) < dropout_prob:
            return layer_input
        else:
            return layer_output + layer_input
    else:
        return (1 - dropout_prob) * layer_output + layer_input
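A worked sketch of just the decay rule above (the module's class name is not shown here, so the arithmetic is spelled out directly): with an undecayed probability of 0.5 and four layers, the per-layer dropout probabilities grow linearly with depth.

undecayed_dropout_prob = 0.5
total_layers = 4
for layer_index in range(1, total_layers + 1):
    print(layer_index, undecayed_dropout_prob * layer_index / total_layers)
# 1 0.125
# 2 0.25
# 3 0.375
# 4 0.5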
See ``PlaceholderType.resolve``
def resolve(self, other: Type) -> Optional[Type]: """See ``PlaceholderType.resolve``""" if not isinstance(other, NltkComplexType): return None expected_second = ComplexType(NUMBER_TYPE, ComplexType(ANY_TYPE, ComplexType(ComplexType(ANY_TYPE, ANY_TYPE), ANY_TYPE))) resolved_second = other.second.resolve(expected_second) if resolved_second is None: return None # The lambda function that we use inside the argmax must take either a number or a date as # an argument. lambda_arg_type = other.second.second.second.first.first if lambda_arg_type.resolve(NUMBER_TYPE) is None and lambda_arg_type.resolve(DATE_TYPE) is None: return None try: # This is the first #1 in the type signature above. selector_function_type = resolved_second.second.first # This is the second #1 in the type signature above. quant_function_argument_type = resolved_second.second.second.first.second # This is the third #1 in the type signature above. return_type = resolved_second.second.second.second # All three placeholder (ph) types above should resolve against each other. resolved_first_ph = selector_function_type.resolve(quant_function_argument_type) resolved_first_ph.resolve(return_type) resolved_second_ph = quant_function_argument_type.resolve(resolved_first_ph) resolved_second_ph.resolve(return_type) resolved_third_ph = return_type.resolve(resolved_first_ph) resolved_third_ph = return_type.resolve(resolved_second_ph) if not resolved_first_ph or not resolved_second_ph or not resolved_third_ph: return None return ArgExtremeType(resolved_first_ph, lambda_arg_type) except AttributeError: return None
See ``PlaceholderType.resolve``
def resolve(self, other: Type) -> Type: """See ``PlaceholderType.resolve``""" if not isinstance(other, NltkComplexType): return None resolved_second = NUMBER_TYPE.resolve(other.second) if not resolved_second: return None return CountType(other.first)
Reads an NLVR dataset and returns a JSON representation containing sentences, labels, correct and incorrect logical forms. The output will contain at most `max_num_logical_forms` logical forms each in both correct and incorrect lists. The output format is: ``[{"id": str, "label": str, "sentence": str, "correct": List[str], "incorrect": List[str]}]``
def process_data(input_file: str, output_file: str, max_path_length: int, max_num_logical_forms: int, ignore_agenda: bool, write_sequences: bool) -> None: """ Reads an NLVR dataset and returns a JSON representation containing sentences, labels, correct and incorrect logical forms. The output will contain at most `max_num_logical_forms` logical forms each in both correct and incorrect lists. The output format is: ``[{"id": str, "label": str, "sentence": str, "correct": List[str], "incorrect": List[str]}]`` """ processed_data: JsonDict = [] # We can instantiate the ``ActionSpaceWalker`` with any world because the action space is the # same for all the ``NlvrWorlds``. It is just the execution that differs. serialized_walker_path = f"serialized_action_space_walker_pl={max_path_length}.pkl" if os.path.isfile(serialized_walker_path): print("Reading walker from serialized file", file=sys.stderr) walker = pickle.load(open(serialized_walker_path, "rb")) else: walker = ActionSpaceWalker(NlvrWorld({}), max_path_length=max_path_length) pickle.dump(walker, open(serialized_walker_path, "wb")) for line in open(input_file): instance_id, sentence, structured_reps, label_strings = read_json_line(line) worlds = [NlvrWorld(structured_rep) for structured_rep in structured_reps] labels = [label_string == "true" for label_string in label_strings] correct_logical_forms = [] incorrect_logical_forms = [] if ignore_agenda: # Get 1000 shortest logical forms. logical_forms = walker.get_all_logical_forms(max_num_logical_forms=1000) else: # TODO (pradeep): Assuming all worlds give the same agenda. sentence_agenda = worlds[0].get_agenda_for_sentence(sentence, add_paths_to_agenda=False) logical_forms = walker.get_logical_forms_with_agenda(sentence_agenda, max_num_logical_forms * 10) for logical_form in logical_forms: if all([world.execute(logical_form) == label for world, label in zip(worlds, labels)]): if len(correct_logical_forms) <= max_num_logical_forms: correct_logical_forms.append(logical_form) else: if len(incorrect_logical_forms) <= max_num_logical_forms: incorrect_logical_forms.append(logical_form) if len(correct_logical_forms) >= max_num_logical_forms \ and len(incorrect_logical_forms) >= max_num_logical_forms: break if write_sequences: parsed_correct_forms = [worlds[0].parse_logical_form(logical_form) for logical_form in correct_logical_forms] correct_sequences = [worlds[0].get_action_sequence(parsed_form) for parsed_form in parsed_correct_forms] parsed_incorrect_forms = [worlds[0].parse_logical_form(logical_form) for logical_form in incorrect_logical_forms] incorrect_sequences = [worlds[0].get_action_sequence(parsed_form) for parsed_form in parsed_incorrect_forms] processed_data.append({"id": instance_id, "sentence": sentence, "correct_sequences": correct_sequences, "incorrect_sequences": incorrect_sequences, "worlds": structured_reps, "labels": label_strings}) else: processed_data.append({"id": instance_id, "sentence": sentence, "correct_logical_forms": correct_logical_forms, "incorrect_logical_forms": incorrect_logical_forms, "worlds": structured_reps, "labels": label_strings}) with open(output_file, "w") as outfile: for instance_processed_data in processed_data: json.dump(instance_processed_data, outfile) outfile.write('\n') outfile.close()
This method lets you take advantage of spacy's batch processing. Default implementation is to just iterate over the texts and call ``split_sentences``.
def batch_split_sentences(self, texts: List[str]) -> List[List[str]]: """ This method lets you take advantage of spacy's batch processing. Default implementation is to just iterate over the texts and call ``split_sentences``. """ return [self.split_sentences(text) for text in texts]
An iterator over the entire dataset, yielding all sentences processed.
def dataset_iterator(self, file_path: str) -> Iterator[OntonotesSentence]: """ An iterator over the entire dataset, yielding all sentences processed. """ for conll_file in self.dataset_path_iterator(file_path): yield from self.sentence_iterator(conll_file)
An iterator returning file_paths in a directory containing CONLL-formatted files.
def dataset_path_iterator(file_path: str) -> Iterator[str]: """ An iterator returning file_paths in a directory containing CONLL-formatted files. """ logger.info("Reading CONLL sentences from dataset files at: %s", file_path) for root, _, files in list(os.walk(file_path)): for data_file in files: # These are a relic of the dataset pre-processing. Every # file will be duplicated - one file called filename.gold_skel # and one generated from the preprocessing called filename.gold_conll. if not data_file.endswith("gold_conll"): continue yield os.path.join(root, data_file)
An iterator over CONLL formatted files which yields documents, regardless of the number of document annotations in a particular file. This is useful for conll data which has been preprocessed, such as the preprocessing which takes place for the 2012 CONLL Coreference Resolution task.
def dataset_document_iterator(self, file_path: str) -> Iterator[List[OntonotesSentence]]: """ An iterator over CONLL formatted files which yields documents, regardless of the number of document annotations in a particular file. This is useful for conll data which has been preprocessed, such as the preprocessing which takes place for the 2012 CONLL Coreference Resolution task. """ with codecs.open(file_path, 'r', encoding='utf8') as open_file: conll_rows = [] document: List[OntonotesSentence] = [] for line in open_file: line = line.strip() if line != '' and not line.startswith('#'): # Non-empty line. Collect the annotation. conll_rows.append(line) else: if conll_rows: document.append(self._conll_rows_to_sentence(conll_rows)) conll_rows = [] if line.startswith("#end document"): yield document document = [] if document: # Collect any stragglers or files which might not # have the '#end document' format for the end of the file. yield document
An iterator over the sentences in an individual CONLL formatted file.
def sentence_iterator(self, file_path: str) -> Iterator[OntonotesSentence]: """ An iterator over the sentences in an individual CONLL formatted file. """ for document in self.dataset_document_iterator(file_path): for sentence in document: yield sentence
For a given coref label, either start a new span, complete one or more currently open spans, or ignore the label if the word lies outside all spans. This method mutates the clusters and coref_stacks dictionaries.

Parameters
----------
label : ``str``
    The coref label for this word.
word_index : ``int``
    The word index into the sentence.
clusters : ``DefaultDict[int, List[Tuple[int, int]]]``
    A dictionary mapping cluster ids to lists of inclusive spans into the sentence.
coref_stacks : ``DefaultDict[int, List[int]]``
    Stacks for each cluster id to hold the start indices of active spans (spans which we
    are inside of when processing a given word). Spans with the same id can be nested,
    which is why we collect these opening spans on a stack, e.g:
    [Greg, the baker who referred to [himself]_ID1 as 'the bread man']_ID1
def _process_coref_span_annotations_for_word(label: str, word_index: int, clusters: DefaultDict[int, List[Tuple[int, int]]], coref_stacks: DefaultDict[int, List[int]]) -> None: """ For a given coref label, add it to a currently open span(s), complete a span(s) or ignore it, if it is outside of all spans. This method mutates the clusters and coref_stacks dictionaries. Parameters ---------- label : ``str`` The coref label for this word. word_index : ``int`` The word index into the sentence. clusters : ``DefaultDict[int, List[Tuple[int, int]]]`` A dictionary mapping cluster ids to lists of inclusive spans into the sentence. coref_stacks: ``DefaultDict[int, List[int]]`` Stacks for each cluster id to hold the start indices of active spans (spans which we are inside of when processing a given word). Spans with the same id can be nested, which is why we collect these opening spans on a stack, e.g: [Greg, the baker who referred to [himself]_ID1 as 'the bread man']_ID1 """ if label != "-": for segment in label.split("|"): # The conll representation of coref spans allows spans to # overlap. If spans end or begin at the same word, they are # separated by a "|". if segment[0] == "(": # The span begins at this word. if segment[-1] == ")": # The span begins and ends at this word (single word span). cluster_id = int(segment[1:-1]) clusters[cluster_id].append((word_index, word_index)) else: # The span is starting, so we record the index of the word. cluster_id = int(segment[1:]) coref_stacks[cluster_id].append(word_index) else: # The span for this id is ending, but didn't start at this word. # Retrieve the start index from the document state and # add the span to the clusters for this id. cluster_id = int(segment[:-1]) start = coref_stacks[cluster_id].pop() clusters[cluster_id].append((start, word_index))
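A small worked example may make the bookkeeping clearer. The snippet below is an illustrative sketch only: it assumes the static method above is in scope (in this codebase it lives on the Ontonotes reader class, so in practice you would call it through that class). It walks a toy label column for a five-word sentence in which cluster 0 covers words 0-2 and word 4.

from collections import defaultdict
from typing import DefaultDict, List, Tuple

# Toy label column for "The tall man saw himself":
# cluster 0 covers words 0-2 ("The tall man") and word 4 ("himself").
labels = ["(0", "-", "0)", "-", "(0)"]
clusters: DefaultDict[int, List[Tuple[int, int]]] = defaultdict(list)
coref_stacks: DefaultDict[int, List[int]] = defaultdict(list)
for word_index, label in enumerate(labels):
    _process_coref_span_annotations_for_word(label, word_index, clusters, coref_stacks)
print(dict(clusters))  # {0: [(0, 2), (4, 4)]}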
Given a sequence of different label types for a single word and the current span label we are inside, compute the BIO tag for each label and append to a list. Parameters ---------- annotations: ``List[str]`` A list of labels to compute BIO tags for. span_labels : ``List[List[str]]`` A list of lists, one for each annotation, to incrementally collect the BIO tags for a sequence. current_span_labels : ``List[Optional[str]]`` The currently open span per annotation type, or ``None`` if there is no open span.
def _process_span_annotations_for_word(annotations: List[str], span_labels: List[List[str]], current_span_labels: List[Optional[str]]) -> None: """ Given a sequence of different label types for a single word and the current span label we are inside, compute the BIO tag for each label and append to a list. Parameters ---------- annotations: ``List[str]`` A list of labels to compute BIO tags for. span_labels : ``List[List[str]]`` A list of lists, one for each annotation, to incrementally collect the BIO tags for a sequence. current_span_labels : ``List[Optional[str]]`` The currently open span per annotation type, or ``None`` if there is no open span. """ for annotation_index, annotation in enumerate(annotations): # strip all bracketing information to # get the actual propbank label. label = annotation.strip("()*") if "(" in annotation: # Entering into a span for a particular semantic role label. # We append the label and set the current span for this annotation. bio_label = "B-" + label span_labels[annotation_index].append(bio_label) current_span_labels[annotation_index] = label elif current_span_labels[annotation_index] is not None: # If there's no '(' token, but the current_span_label is not None, # then we are inside a span. bio_label = "I-" + current_span_labels[annotation_index] span_labels[annotation_index].append(bio_label) else: # We're outside a span. span_labels[annotation_index].append("O") # Exiting a span, so we reset the current span label for this annotation. if ")" in annotation: current_span_labels[annotation_index] = None
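To illustrate, the sketch below feeds a single SRL annotation column through this helper one word at a time and collects the BIO tags. It assumes the static method above is in scope; the annotation strings follow the CoNLL-2012 bracket convention used by this reader.

# One predicate's annotation column for "The dog barked loudly",
# processed word by word (annotations are per-word lists, one entry per predicate).
rows = [["(ARG0*"], ["*)"], ["(V*)"], ["(ARGM-MNR*)"]]
span_labels = [[]]            # one output list per annotation column
current_span_labels = [None]  # one open-span marker per annotation column
for row in rows:
    _process_span_annotations_for_word(row, span_labels, current_span_labels)
print(span_labels[0])  # ['B-ARG0', 'I-ARG0', 'B-V', 'B-ARGM-MNR']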
Apply dropout to input tensor. Parameters ---------- input_tensor: ``torch.FloatTensor`` A tensor of shape ``(batch_size, num_timesteps, embedding_dim)`` Returns ------- output: ``torch.FloatTensor`` A tensor of shape ``(batch_size, num_timesteps, embedding_dim)`` with dropout applied.
def forward(self, input_tensor): # pylint: disable=arguments-differ """ Apply dropout to input tensor. Parameters ---------- input_tensor: ``torch.FloatTensor`` A tensor of shape ``(batch_size, num_timesteps, embedding_dim)`` Returns ------- output: ``torch.FloatTensor`` A tensor of shape ``(batch_size, num_timesteps, embedding_dim)`` with dropout applied. """ ones = input_tensor.data.new_ones(input_tensor.shape[0], input_tensor.shape[-1]) dropout_mask = torch.nn.functional.dropout(ones, self.p, self.training, inplace=False) if self.inplace: input_tensor *= dropout_mask.unsqueeze(1) return None else: return dropout_mask.unsqueeze(1) * input_tensor
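The important property of this ``forward`` is that a single dropout mask of shape ``(batch_size, embedding_dim)`` is sampled and then broadcast across all timesteps, rather than sampling a fresh mask per position. The self-contained sketch below wraps the non-inplace branch in a stand-in ``torch.nn.Dropout`` subclass (the real class in this codebase may have a different name) just to demonstrate that every timestep shares the same mask.

import torch

class TimestepSharedDropout(torch.nn.Dropout):
    # Stand-in with the same (non-inplace) forward as above, for illustration only.
    def forward(self, input_tensor):
        ones = input_tensor.data.new_ones(input_tensor.shape[0], input_tensor.shape[-1])
        dropout_mask = torch.nn.functional.dropout(ones, self.p, self.training, inplace=False)
        return dropout_mask.unsqueeze(1) * input_tensor

module = TimestepSharedDropout(p=0.5)
module.train()
output = module(torch.ones(2, 5, 3))
# The mask is identical at every timestep of a given (batch, dimension) position.
assert torch.equal(output[:, 0], output[:, 1])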
Compute and return the metric. Optionally also call :func:`self.reset`.
def get_metric(self, reset: bool) -> Union[float, Tuple[float, ...], Dict[str, float], Dict[str, List[float]]]: """ Compute and return the metric. Optionally also call :func:`self.reset`. """ raise NotImplementedError
If you actually passed gradient-tracking Tensors to a Metric, there will be a huge memory leak, because it will prevent garbage collection for the computation graph. This method ensures that you're using tensors directly and that they are on the CPU.
def unwrap_to_tensors(*tensors: torch.Tensor): """ If you actually passed gradient-tracking Tensors to a Metric, there will be a huge memory leak, because it will prevent garbage collection for the computation graph. This method ensures that you're using tensors directly and that they are on the CPU. """ return (x.detach().cpu() if isinstance(x, torch.Tensor) else x for x in tensors)
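For example, a metric implementation would call this on its inputs before doing any CPU-side bookkeeping; the returned copies no longer track gradients and live on the CPU. This sketch assumes the helper above is in scope (in practice it is a static method on ``Metric``).

import torch

predictions = torch.tensor([1.0, 2.0], requires_grad=True)
gold = torch.tensor([1.0, 2.0])
detached_predictions, detached_gold = unwrap_to_tensors(predictions, gold)
assert not detached_predictions.requires_grad and not detached_predictions.is_cuda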
Replaces abstract variables in text with their concrete counterparts.
def replace_variables(sentence: List[str], sentence_variables: Dict[str, str]) -> Tuple[List[str], List[str]]: """ Replaces abstract variables in text with their concrete counterparts. """ tokens = [] tags = [] for token in sentence: if token not in sentence_variables: tokens.append(token) tags.append("O") else: for word in sentence_variables[token].split(): tokens.append(word) tags.append(token) return tokens, tags
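For instance, given an ATIS-style question whose variables have already been extracted, the function expands each variable into its concrete words and tags those words with the variable name (everything else gets an ``O`` tag). A minimal sketch, assuming ``replace_variables`` is importable from this module:

sentence = ["what", "flights", "go", "to", "city_name0", "?"]
variables = {"city_name0": "new york"}
tokens, tags = replace_variables(sentence, variables)
print(tokens)  # ['what', 'flights', 'go', 'to', 'new', 'york', '?']
print(tags)    # ['O', 'O', 'O', 'O', 'city_name0', 'city_name0', 'O']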
Cleans up and unifies a SQL query. This involves unifying quoted strings and splitting brackets which aren't formatted consistently in the data.
def clean_and_split_sql(sql: str) -> List[str]: """ Cleans up and unifies a SQL query. This involves unifying quoted strings and splitting brackets which aren't formatted consistently in the data. """ sql_tokens: List[str] = [] for token in sql.strip().split(): token = token.replace('"', "'").replace("%", "") if token.endswith("(") and len(token) > 1: sql_tokens.extend(split_table_and_column_names(token[:-1])) sql_tokens.extend(split_table_and_column_names(token[-1])) else: sql_tokens.extend(split_table_and_column_names(token)) return sql_tokens
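A quick example of the normalisation, assuming the function (and its ``split_table_and_column_names`` helper, defined elsewhere in this module) is importable: double quotes become single quotes, ``%`` signs are dropped, and an opening bracket glued to a keyword is split off as its own token. Exactly how ``TABLE.COLUMN`` references are tokenised depends on the helper, so they are omitted here.

sql = 'SELECT COUNT( * ) FROM FLIGHT WHERE CITY = "BOSTON" ;'
print(clean_and_split_sql(sql))
# ['SELECT', 'COUNT', '(', '*', ')', 'FROM', 'FLIGHT', 'WHERE', 'CITY', '=', "'BOSTON'", ';']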
Some examples in the text2sql datasets use ID as a column reference to the column of a table which has a primary key. This causes problems if you are trying to constrain a grammar to only produce the column names directly, because you don't know what ID refers to. So instead of dealing with that, we just replace it.
def resolve_primary_keys_in_schema(sql_tokens: List[str], schema: Dict[str, List[TableColumn]]) -> List[str]: """ Some examples in the text2sql datasets use ID as a column reference to the column of a table which has a primary key. This causes problems if you are trying to constrain a grammar to only produce the column names directly, because you don't know what ID refers to. So instead of dealing with that, we just replace it. """ primary_keys_for_tables = {name: max(columns, key=lambda x: x.is_primary_key).name for name, columns in schema.items()} resolved_tokens = [] for i, token in enumerate(sql_tokens): if i > 2: table_name = sql_tokens[i - 2] if token == "ID" and table_name in primary_keys_for_tables.keys(): token = primary_keys_for_tables[table_name] resolved_tokens.append(token) return resolved_tokens
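As an illustration, assume a toy schema in which the ``PAPER`` table's primary key is ``PAPERID``; an ``ID`` token following a ``PAPER . ID`` style reference is then rewritten. ``TableColumn`` is constructed positionally here, following its usage elsewhere in this module (name, type, is-primary-key).

schema = {"PAPER": [TableColumn("PAPERID", "int", True),
                    TableColumn("TITLE", "varchar", False)]}
sql_tokens = ["SELECT", "PAPER", ".", "ID", "FROM", "PAPER"]
print(resolve_primary_keys_in_schema(sql_tokens, schema))
# ['SELECT', 'PAPER', '.', 'PAPERID', 'FROM', 'PAPER']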
Reads a schema from the text2sql data, returning a dictionary mapping table names to their columns and respective types. This handles columns in an arbitrary order and also allows either ``{Table, Field}`` or ``{Table, Field} Name`` as headers, because both appear in the data. It also uppercases table and column names if they are not already uppercase. Parameters ---------- schema_path : ``str``, required. The path to the csv schema. Returns ------- A dictionary mapping table names to typed columns.
def read_dataset_schema(schema_path: str) -> Dict[str, List[TableColumn]]: """ Reads a schema from the text2sql data, returning a dictionary mapping table names to their columns and respective types. This handles columns in an arbitrary order and also allows either ``{Table, Field}`` or ``{Table, Field} Name`` as headers, because both appear in the data. It also uppercases table and column names if they are not already uppercase. Parameters ---------- schema_path : ``str``, required. The path to the csv schema. Returns ------- A dictionary mapping table names to typed columns. """ schema: Dict[str, List[TableColumn]] = defaultdict(list) for i, line in enumerate(open(schema_path, "r")): if i == 0: header = [x.strip() for x in line.split(",")] elif line[0] == "-": continue else: data = {key: value for key, value in zip(header, [x.strip() for x in line.split(",")])} table = data.get("Table Name", None) or data.get("Table") column = data.get("Field Name", None) or data.get("Field") is_primary_key = data.get("Primary Key") == "y" schema[table.upper()].append(TableColumn(column.upper(), data["Type"], is_primary_key)) return {**schema}
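For example, given a small csv using the ``Table Name`` / ``Field Name`` header variant, the reader uppercases names and marks the primary key. The file name and contents below are purely illustrative.

with open("toy_schema.csv", "w") as schema_file:
    schema_file.write("Table Name, Field Name, Type, Primary Key\n"
                      "PAPER, Id, int, y\n"
                      "PAPER, Title, varchar, -\n")
schema = read_dataset_schema("toy_schema.csv")
print(schema["PAPER"])
# Under the 'PAPER' key: columns ('ID', 'int', True) and ('TITLE', 'varchar', False).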
A utility function for reading in text2sql data. The blob is the result of loading the json from a file produced by the script ``scripts/reformat_text2sql_data.py``. Parameters ---------- data : ``JsonDict`` use_all_sql : ``bool``, optional (default = False) Whether to use all of the sql queries which have identical semantics, or whether to just use the first one. use_all_queries : ``bool``, (default = False) Whether or not to enforce query sentence uniqueness. If false, duplicated queries will occur in the dataset as separate instances, as for a given SQL query, not only are there multiple queries with the same template, but there are also duplicate queries. remove_unneeded_aliases : ``bool``, (default = False) The text2sql data by default creates alias names for `all` tables, regardless of whether the table is derived or if it is identical to the original (e.g SELECT TABLEalias0.COLUMN FROM TABLE AS TABLEalias0). This is not necessary and makes the action sequence and grammar manipulation much harder in a grammar based decoder. Note that this does not remove aliases which are legitimately required, such as when a new table is formed by performing operations on the original table. schema : ``Dict[str, List[TableColumn]]``, optional, (default = None) A schema to resolve primary keys against. Converts 'ID' column names to their actual name with respect to the Primary Key for the table in the schema.
def process_sql_data(data: List[JsonDict], use_all_sql: bool = False, use_all_queries: bool = False, remove_unneeded_aliases: bool = False, schema: Dict[str, List[TableColumn]] = None) -> Iterable[SqlData]: """ A utility function for reading in text2sql data. The blob is the result of loading the json from a file produced by the script ``scripts/reformat_text2sql_data.py``. Parameters ---------- data : ``JsonDict`` use_all_sql : ``bool``, optional (default = False) Whether to use all of the sql queries which have identical semantics, or whether to just use the first one. use_all_queries : ``bool``, (default = False) Whether or not to enforce query sentence uniqueness. If false, duplicated queries will occur in the dataset as separate instances, as for a given SQL query, not only are there multiple queries with the same template, but there are also duplicate queries. remove_unneeded_aliases : ``bool``, (default = False) The text2sql data by default creates alias names for `all` tables, regardless of whether the table is derived or if it is identical to the original (e.g SELECT TABLEalias0.COLUMN FROM TABLE AS TABLEalias0). This is not necessary and makes the action sequence and grammar manipulation much harder in a grammar based decoder. Note that this does not remove aliases which are legitimately required, such as when a new table is formed by performing operations on the original table. schema : ``Dict[str, List[TableColumn]]``, optional, (default = None) A schema to resolve primary keys against. Converts 'ID' column names to their actual name with respect to the Primary Key for the table in the schema. """ for example in data: seen_sentences: Set[str] = set() for sent_info in example['sentences']: # Loop over the different sql statements with "equivalent" semantics for sql in example["sql"]: text_with_variables = sent_info['text'].strip().split() text_vars = sent_info['variables'] query_tokens, tags = replace_variables(text_with_variables, text_vars) if not use_all_queries: key = " ".join(query_tokens) if key in seen_sentences: continue else: seen_sentences.add(key) sql_tokens = clean_and_split_sql(sql) if remove_unneeded_aliases: sql_tokens = clean_unneeded_aliases(sql_tokens) if schema is not None: sql_tokens = resolve_primary_keys_in_schema(sql_tokens, schema) sql_variables = {} for variable in example['variables']: sql_variables[variable['name']] = {'text': variable['example'], 'type': variable['type']} sql_data = SqlData(text=query_tokens, text_with_variables=text_with_variables, variable_tags=tags, sql=sql_tokens, text_variables=text_vars, sql_variables=sql_variables) yield sql_data # Some questions might have multiple equivalent SQL statements. # By default, we just use the first one. TODO(Mark): Use the shortest? if not use_all_sql: break
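Putting the pieces together, here is a minimal, hand-made blob in the reformatted text2sql layout (field names follow the usage in the function above; this is not a real dataset example) being turned into ``SqlData`` instances:

data = [{
    "sentences": [{"text": "flights to city_name0",
                   "variables": {"city_name0": "boston"}}],
    "sql": ["SELECT * FROM FLIGHT WHERE CITY = 'city_name0' ;"],
    "variables": [{"name": "city_name0", "example": "boston", "type": "city"}],
}]
for instance in process_sql_data(data):
    print(instance.text)           # ['flights', 'to', 'boston']
    print(instance.variable_tags)  # ['O', 'O', 'city_name0']
    print(instance.sql[:4])        # ['SELECT', '*', 'FROM', 'FLIGHT']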
This function exists because Pytorch RNNs require that their inputs be sorted before being passed as input. As all of our Seq2xxxEncoders use this functionality, it is provided in a base class. This method can be called on any module which takes as input a ``PackedSequence`` and some ``hidden_state``, which can either be a tuple of tensors or a tensor. As all of our Seq2xxxEncoders have different return types, we return `sorted` outputs from the module, which is called directly. Additionally, we return the indices into the batch dimension required to restore the tensor to its correct, unsorted order and the number of valid batch elements (i.e. the number of elements in the batch which are not completely masked). This un-sorting and re-padding of the module outputs is left to the subclasses because their outputs have different types and handling them smoothly here is difficult.

Parameters
----------
module : ``Callable[[PackedSequence, Optional[RnnState]], Tuple[Union[PackedSequence, torch.Tensor], RnnState]]``, required.
    A function to run on the inputs. In most cases, this is a ``torch.nn.Module``.
inputs : ``torch.Tensor``, required.
    A tensor of shape ``(batch_size, sequence_length, embedding_size)`` representing the inputs to the Encoder.
mask : ``torch.Tensor``, required.
    A tensor of shape ``(batch_size, sequence_length)``, representing masked and non-masked elements of the sequence for each element in the batch.
hidden_state : ``Optional[RnnState]``, (default = None).
    A single tensor of shape (num_layers, batch_size, hidden_size) representing the state of an RNN, or a tuple of tensors of shapes (num_layers, batch_size, hidden_size) and (num_layers, batch_size, memory_size), representing the hidden state and memory state of an LSTM-like RNN.

Returns
-------
module_output : ``Union[torch.Tensor, PackedSequence]``.
    A Tensor or PackedSequence representing the output of the Pytorch Module. The batch size dimension will be equal to ``num_valid``, as sequences of zero length are clipped off before the module is called, as Pytorch cannot handle zero length sequences.
final_states : ``Optional[RnnState]``
    A Tensor representing the hidden state of the Pytorch Module. This can either be a single tensor of shape (num_layers, num_valid, hidden_size), for instance in the case of a GRU, or a tuple of tensors, such as those required for an LSTM.
restoration_indices : ``torch.LongTensor``
    A tensor of shape ``(batch_size,)``, describing the re-indexing required to transform the outputs back to their original batch order.
def sort_and_run_forward(self, module: Callable[[PackedSequence, Optional[RnnState]], Tuple[Union[PackedSequence, torch.Tensor], RnnState]], inputs: torch.Tensor, mask: torch.Tensor, hidden_state: Optional[RnnState] = None): """ This function exists because Pytorch RNNs require that their inputs be sorted before being passed as input. As all of our Seq2xxxEncoders use this functionality, it is provided in a base class. This method can be called on any module which takes as input a ``PackedSequence`` and some ``hidden_state``, which can either be a tuple of tensors or a tensor. As all of our Seq2xxxEncoders have different return types, we return `sorted` outputs from the module, which is called directly. Additionally, we return the indices into the batch dimension required to restore the tensor to it's correct, unsorted order and the number of valid batch elements (i.e the number of elements in the batch which are not completely masked). This un-sorting and re-padding of the module outputs is left to the subclasses because their outputs have different types and handling them smoothly here is difficult. Parameters ---------- module : ``Callable[[PackedSequence, Optional[RnnState]], Tuple[Union[PackedSequence, torch.Tensor], RnnState]]``, required. A function to run on the inputs. In most cases, this is a ``torch.nn.Module``. inputs : ``torch.Tensor``, required. A tensor of shape ``(batch_size, sequence_length, embedding_size)`` representing the inputs to the Encoder. mask : ``torch.Tensor``, required. A tensor of shape ``(batch_size, sequence_length)``, representing masked and non-masked elements of the sequence for each element in the batch. hidden_state : ``Optional[RnnState]``, (default = None). A single tensor of shape (num_layers, batch_size, hidden_size) representing the state of an RNN with or a tuple of tensors of shapes (num_layers, batch_size, hidden_size) and (num_layers, batch_size, memory_size), representing the hidden state and memory state of an LSTM-like RNN. Returns ------- module_output : ``Union[torch.Tensor, PackedSequence]``. A Tensor or PackedSequence representing the output of the Pytorch Module. The batch size dimension will be equal to ``num_valid``, as sequences of zero length are clipped off before the module is called, as Pytorch cannot handle zero length sequences. final_states : ``Optional[RnnState]`` A Tensor representing the hidden state of the Pytorch Module. This can either be a single tensor of shape (num_layers, num_valid, hidden_size), for instance in the case of a GRU, or a tuple of tensors, such as those required for an LSTM. restoration_indices : ``torch.LongTensor`` A tensor of shape ``(batch_size,)``, describing the re-indexing required to transform the outputs back to their original batch order. """ # In some circumstances you may have sequences of zero length. ``pack_padded_sequence`` # requires all sequence lengths to be > 0, so remove sequences of zero length before # calling self._module, then fill with zeros. # First count how many sequences are empty. batch_size = mask.size(0) num_valid = torch.sum(mask[:, 0]).int().item() sequence_lengths = get_lengths_from_binary_sequence_mask(mask) sorted_inputs, sorted_sequence_lengths, restoration_indices, sorting_indices =\ sort_batch_by_length(inputs, sequence_lengths) # Now create a PackedSequence with only the non-empty, sorted sequences. 
packed_sequence_input = pack_padded_sequence(sorted_inputs[:num_valid, :, :], sorted_sequence_lengths[:num_valid].data.tolist(), batch_first=True) # Prepare the initial states. if not self.stateful: if hidden_state is None: initial_states = hidden_state elif isinstance(hidden_state, tuple): initial_states = [state.index_select(1, sorting_indices)[:, :num_valid, :].contiguous() for state in hidden_state] else: initial_states = hidden_state.index_select(1, sorting_indices)[:, :num_valid, :].contiguous() else: initial_states = self._get_initial_states(batch_size, num_valid, sorting_indices) # Actually call the module on the sorted PackedSequence. module_output, final_states = module(packed_sequence_input, initial_states) return module_output, final_states, restoration_indices
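For readers unfamiliar with the underlying PyTorch machinery, the following self-contained sketch shows the sort -> pack -> run -> unsort cycle that this method automates (without the zero-length and stateful handling), using a plain ``torch.nn.LSTM``:

import torch
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

lstm = torch.nn.LSTM(input_size=3, hidden_size=4, batch_first=True)
inputs = torch.randn(2, 5, 3)
mask = torch.tensor([[1, 1, 1, 0, 0],
                     [1, 1, 1, 1, 1]])
lengths = mask.sum(dim=1)
sorted_lengths, permutation = lengths.sort(descending=True)
packed = pack_padded_sequence(inputs[permutation], sorted_lengths.tolist(), batch_first=True)
packed_output, _ = lstm(packed)
output, _ = pad_packed_sequence(packed_output, batch_first=True)
_, restoration = permutation.sort()
output = output[restoration]  # back in the original batch order
print(output.shape)           # torch.Size([2, 5, 4])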
Returns an initial state for use in an RNN. Additionally, this method handles the batch size changing across calls by mutating the state to append initial states for new elements in the batch. Finally, it also handles sorting the states with respect to the sequence lengths of elements in the batch and removing rows which are completely padded. Importantly, this `mutates` the state if the current batch size is larger than when it was previously called. Parameters ---------- batch_size : ``int``, required. The batch size can change size across calls to stateful RNNs, so we need to know if we need to expand or shrink the states before returning them. Expanded states will be set to zero. num_valid : ``int``, required. The batch may contain completely padded sequences which get removed before the sequence is passed through the encoder. We also need to clip these off of the state too. sorting_indices ``torch.LongTensor``, required. Pytorch RNNs take sequences sorted by length. When we return the states to be used for a given call to ``module.forward``, we need the states to match up to the sorted sequences, so before returning them, we sort the states using the same indices used to sort the sequences. Returns ------- This method has a complex return type because it has to deal with the first time it is called, when it has no state, and the fact that types of RNN have heterogeneous states. If it is the first time the module has been called, it returns ``None``, regardless of the type of the ``Module``. Otherwise, for LSTMs, it returns a tuple of ``torch.Tensors`` with shape ``(num_layers, num_valid, state_size)`` and ``(num_layers, num_valid, memory_size)`` respectively, or for GRUs, it returns a single ``torch.Tensor`` of shape ``(num_layers, num_valid, state_size)``.
def _get_initial_states(self, batch_size: int, num_valid: int, sorting_indices: torch.LongTensor) -> Optional[RnnState]: """ Returns an initial state for use in an RNN. Additionally, this method handles the batch size changing across calls by mutating the state to append initial states for new elements in the batch. Finally, it also handles sorting the states with respect to the sequence lengths of elements in the batch and removing rows which are completely padded. Importantly, this `mutates` the state if the current batch size is larger than when it was previously called. Parameters ---------- batch_size : ``int``, required. The batch size can change size across calls to stateful RNNs, so we need to know if we need to expand or shrink the states before returning them. Expanded states will be set to zero. num_valid : ``int``, required. The batch may contain completely padded sequences which get removed before the sequence is passed through the encoder. We also need to clip these off of the state too. sorting_indices ``torch.LongTensor``, required. Pytorch RNNs take sequences sorted by length. When we return the states to be used for a given call to ``module.forward``, we need the states to match up to the sorted sequences, so before returning them, we sort the states using the same indices used to sort the sequences. Returns ------- This method has a complex return type because it has to deal with the first time it is called, when it has no state, and the fact that types of RNN have heterogeneous states. If it is the first time the module has been called, it returns ``None``, regardless of the type of the ``Module``. Otherwise, for LSTMs, it returns a tuple of ``torch.Tensors`` with shape ``(num_layers, num_valid, state_size)`` and ``(num_layers, num_valid, memory_size)`` respectively, or for GRUs, it returns a single ``torch.Tensor`` of shape ``(num_layers, num_valid, state_size)``. """ # We don't know the state sizes the first time calling forward, # so we let the module define what it's initial hidden state looks like. if self._states is None: return None # Otherwise, we have some previous states. if batch_size > self._states[0].size(1): # This batch is larger than the all previous states. # If so, resize the states. num_states_to_concat = batch_size - self._states[0].size(1) resized_states = [] # state has shape (num_layers, batch_size, hidden_size) for state in self._states: # This _must_ be inside the loop because some # RNNs have states with different last dimension sizes. zeros = state.new_zeros(state.size(0), num_states_to_concat, state.size(2)) resized_states.append(torch.cat([state, zeros], 1)) self._states = tuple(resized_states) correctly_shaped_states = self._states elif batch_size < self._states[0].size(1): # This batch is smaller than the previous one. correctly_shaped_states = tuple(state[:, :batch_size, :] for state in self._states) else: correctly_shaped_states = self._states # At this point, our states are of shape (num_layers, batch_size, hidden_size). # However, the encoder uses sorted sequences and additionally removes elements # of the batch which are fully padded. We need the states to match up to these # sorted and filtered sequences, so we do that in the next two blocks before # returning the state/s. if len(self._states) == 1: # GRUs only have a single state. This `unpacks` it from the # tuple and returns the tensor directly. 
correctly_shaped_state = correctly_shaped_states[0] sorted_state = correctly_shaped_state.index_select(1, sorting_indices) return sorted_state[:, :num_valid, :].contiguous() else: # LSTMs have a state tuple of (state, memory). sorted_states = [state.index_select(1, sorting_indices) for state in correctly_shaped_states] return tuple(state[:, :num_valid, :].contiguous() for state in sorted_states)
After the RNN has run forward, the states need to be updated. This method just sets the state to the updated new state, performing several pieces of book-keeping along the way - namely, unsorting the states and ensuring that the states of completely padded sequences are not updated. Finally, it also detaches the state variable from the computational graph, such that the graph can be garbage collected after each batch iteration. Parameters ---------- final_states : ``RnnStateStorage``, required. The hidden states returned as output from the RNN. restoration_indices : ``torch.LongTensor``, required. The indices that invert the sorting used in ``sort_and_run_forward`` to order the states with respect to the lengths of the sequences in the batch.
def _update_states(self, final_states: RnnStateStorage, restoration_indices: torch.LongTensor) -> None: """ After the RNN has run forward, the states need to be updated. This method just sets the state to the updated new state, performing several pieces of book-keeping along the way - namely, unsorting the states and ensuring that the states of completely padded sequences are not updated. Finally, it also detaches the state variable from the computational graph, such that the graph can be garbage collected after each batch iteration. Parameters ---------- final_states : ``RnnStateStorage``, required. The hidden states returned as output from the RNN. restoration_indices : ``torch.LongTensor``, required. The indices that invert the sorting used in ``sort_and_run_forward`` to order the states with respect to the lengths of the sequences in the batch. """ # TODO(Mark): seems weird to sort here, but append zeros in the subclasses. # which way around is best? new_unsorted_states = [state.index_select(1, restoration_indices) for state in final_states] if self._states is None: # We don't already have states, so just set the # ones we receive to be the current state. self._states = tuple(state.data for state in new_unsorted_states) else: # Now we've sorted the states back so that they correspond to the original # indices, we need to figure out what states we need to update, because if we # didn't use a state for a particular row, we want to preserve its state. # Thankfully, the rows which are all zero in the state correspond exactly # to those which aren't used, so we create masks of shape (new_batch_size,), # denoting which states were used in the RNN computation. current_state_batch_size = self._states[0].size(1) new_state_batch_size = final_states[0].size(1) # Masks for the unused states of shape (1, new_batch_size, 1) used_new_rows_mask = [(state[0, :, :].sum(-1) != 0.0).float().view(1, new_state_batch_size, 1) for state in new_unsorted_states] new_states = [] if current_state_batch_size > new_state_batch_size: # The new state is smaller than the old one, # so just update the indices which we used. for old_state, new_state, used_mask in zip(self._states, new_unsorted_states, used_new_rows_mask): # zero out all rows in the previous state # which _were_ used in the current state. masked_old_state = old_state[:, :new_state_batch_size, :] * (1 - used_mask) # The old state is larger, so update the relevant parts of it. old_state[:, :new_state_batch_size, :] = new_state + masked_old_state new_states.append(old_state.detach()) else: # The states are the same size, so we just have to # deal with the possibility that some rows weren't used. new_states = [] for old_state, new_state, used_mask in zip(self._states, new_unsorted_states, used_new_rows_mask): # zero out all rows which _were_ used in the current state. masked_old_state = old_state * (1 - used_mask) # The old state is larger, so update the relevant parts of it. new_state += masked_old_state new_states.append(new_state.detach()) # It looks like there should be another case handled here - when # the current_state_batch_size < new_state_batch_size. However, # this never happens, because the states themeselves are mutated # by appending zeros when calling _get_inital_states, meaning that # the new states are either of equal size, or smaller, in the case # that there are some unused elements (zero-length) for the RNN computation. self._states = tuple(new_states)
Takes a list of valid target action sequences and creates a mapping from all possible (valid) action prefixes to allowed actions given that prefix. While the method is called ``construct_prefix_tree``, we're actually returning a map that has as keys the paths to `all internal nodes of the trie`, and as values all of the outgoing edges from that node. ``targets`` is assumed to be a tensor of shape ``(batch_size, num_valid_sequences, sequence_length)``. If the mask is not ``None``, it is assumed to have the same shape, and we will ignore any value in ``targets`` that has a value of ``0`` in the corresponding position in the mask. We assume that the mask has the format 1*0* for each item in ``targets`` - that is, once we see our first zero, we stop processing that target. For example, if ``targets`` is the following tensor: ``[[1, 2, 3], [1, 4, 5]]``, the return value will be: ``{(): set([1]), (1,): set([2, 4]), (1, 2): set([3]), (1, 4): set([5])}``. This could be used, e.g., to do an efficient constrained beam search, or to efficiently evaluate the probability of all of the target sequences.
def construct_prefix_tree(targets: Union[torch.Tensor, List[List[List[int]]]], target_mask: Optional[torch.Tensor] = None) -> List[Dict[Tuple[int, ...], Set[int]]]: """ Takes a list of valid target action sequences and creates a mapping from all possible (valid) action prefixes to allowed actions given that prefix. While the method is called ``construct_prefix_tree``, we're actually returning a map that has as keys the paths to `all internal nodes of the trie`, and as values all of the outgoing edges from that node. ``targets`` is assumed to be a tensor of shape ``(batch_size, num_valid_sequences, sequence_length)``. If the mask is not ``None``, it is assumed to have the same shape, and we will ignore any value in ``targets`` that has a value of ``0`` in the corresponding position in the mask. We assume that the mask has the format 1*0* for each item in ``targets`` - that is, once we see our first zero, we stop processing that target. For example, if ``targets`` is the following tensor: ``[[1, 2, 3], [1, 4, 5]]``, the return value will be: ``{(): set([1]), (1,): set([2, 4]), (1, 2): set([3]), (1, 4): set([5])}``. This could be used, e.g., to do an efficient constrained beam search, or to efficiently evaluate the probability of all of the target sequences. """ batched_allowed_transitions: List[Dict[Tuple[int, ...], Set[int]]] = [] if not isinstance(targets, list): assert targets.dim() == 3, "targets tensor needs to be batched!" targets = targets.detach().cpu().numpy().tolist() if target_mask is not None: target_mask = target_mask.detach().cpu().numpy().tolist() else: target_mask = [None for _ in targets] for instance_targets, instance_mask in zip(targets, target_mask): allowed_transitions: Dict[Tuple[int, ...], Set[int]] = defaultdict(set) for i, target_sequence in enumerate(instance_targets): history: Tuple[int, ...] = () for j, action in enumerate(target_sequence): if instance_mask and instance_mask[i][j] == 0: break allowed_transitions[history].add(action) history = history + (action,) batched_allowed_transitions.append(allowed_transitions) return batched_allowed_transitions
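The docstring's example can be run directly; note that the targets must be batched (an outer list over batch elements), and the result is one allowed-transition map per batch element:

targets = [[[1, 2, 3], [1, 4, 5]]]   # batch of one instance with two gold sequences
allowed_transitions = construct_prefix_tree(targets)
print(dict(allowed_transitions[0]))
# {(): {1}, (1,): {2, 4}, (1, 2): {3}, (1, 4): {5}}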
Convert the string to Value object. Args: original_string (basestring): Original string corenlp_value (basestring): Optional value returned from CoreNLP Returns: Value
def to_value(original_string, corenlp_value=None): """Convert the string to Value object. Args: original_string (basestring): Original string corenlp_value (basestring): Optional value returned from CoreNLP Returns: Value """ if isinstance(original_string, Value): # Already a Value return original_string if not corenlp_value: corenlp_value = original_string # Number? amount = NumberValue.parse(corenlp_value) if amount is not None: return NumberValue(amount, original_string) # Date? ymd = DateValue.parse(corenlp_value) if ymd is not None: if ymd[1] == ymd[2] == -1: return NumberValue(ymd[0], original_string) else: return DateValue(ymd[0], ymd[1], ymd[2], original_string) # String. return StringValue(original_string)
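Used on its own, the dispatcher picks the most specific value type it can parse; the checks below only rely on the class of the returned object (``NumberValue``, ``DateValue`` and ``StringValue`` are the value classes from this same evaluator module):

assert isinstance(to_value("42"), NumberValue)
assert isinstance(to_value("2014-07-29"), DateValue)
assert isinstance(to_value("2014-xx-xx"), NumberValue)  # year-only dates become numbers
assert isinstance(to_value("mount everest"), StringValue)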
Convert a list of strings to a list of Values Args: original_strings (list[basestring]) corenlp_values (list[basestring or None]) Returns: list[Value]
def to_value_list(original_strings, corenlp_values=None): """Convert a list of strings to a list of Values Args: original_strings (list[basestring]) corenlp_values (list[basestring or None]) Returns: list[Value] """ assert isinstance(original_strings, (list, tuple, set)) if corenlp_values is not None: assert isinstance(corenlp_values, (list, tuple, set)) assert len(original_strings) == len(corenlp_values) return list(set(to_value(x, y) for (x, y) in zip(original_strings, corenlp_values))) else: return list(set(to_value(x) for x in original_strings))
Return True if the predicted denotation is correct. Args: target_values (list[Value]) predicted_values (list[Value]) Returns: bool
def check_denotation(target_values, predicted_values): """Return True if the predicted denotation is correct. Args: target_values (list[Value]) predicted_values (list[Value]) Returns: bool """ # Check size if len(target_values) != len(predicted_values): return False # Check items for target in target_values: if not any(target.match(pred) for pred in predicted_values): return False return True
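A typical evaluation call converts both the gold and predicted denotations with ``to_value_list`` and then compares them as sets, so ordering does not matter but the number of distinct values does:

target = to_value_list(["new york", "2014"])
print(check_denotation(target, to_value_list(["2014", "new york"])))  # True
print(check_denotation(target, to_value_list(["2014"])))              # False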
Try to parse into a number. Return: the number (int or float) if successful; otherwise None.
def parse(text): """Try to parse into a number. Return: the number (int or float) if successful; otherwise None. """ try: return int(text) except ValueError: try: amount = float(text) assert not isnan(amount) and not isinf(amount) return amount except (ValueError, AssertionError): return None
Try to parse into a date. Return: tuple (year, month, date) if successful; otherwise None.
def parse(text): """Try to parse into a date. Return: tuple (year, month, date) if successful; otherwise None. """ try: ymd = text.lower().split('-') assert len(ymd) == 3 year = -1 if ymd[0] in ('xx', 'xxxx') else int(ymd[0]) month = -1 if ymd[1] == 'xx' else int(ymd[1]) day = -1 if ymd[2] == 'xx' else int(ymd[2]) assert not year == month == day == -1 assert month == -1 or 1 <= month <= 12 assert day == -1 or 1 <= day <= 31 return (year, month, day) except (ValueError, AssertionError): return None
Given a sequence tensor, extract spans and return representations of them. Span representation can be computed in many different ways, such as concatenation of the start and end spans, attention over the vectors contained inside the span, etc. Parameters ---------- sequence_tensor : ``torch.FloatTensor``, required. A tensor of shape (batch_size, sequence_length, embedding_size) representing an embedded sequence of words. span_indices : ``torch.LongTensor``, required. A tensor of shape ``(batch_size, num_spans, 2)``, where the last dimension represents the inclusive start and end indices of the span to be extracted from the ``sequence_tensor``. sequence_mask : ``torch.LongTensor``, optional (default = ``None``). A tensor of shape (batch_size, sequence_length) representing padded elements of the sequence. span_indices_mask : ``torch.LongTensor``, optional (default = ``None``). A tensor of shape (batch_size, num_spans) representing the valid spans in the ``indices`` tensor. This mask is optional because sometimes it's easier to worry about masking after calling this function, rather than passing a mask directly. Returns ------- A tensor of shape ``(batch_size, num_spans, embedded_span_size)``, where ``embedded_span_size`` depends on the way spans are represented.
def forward(self, # pylint: disable=arguments-differ sequence_tensor: torch.FloatTensor, span_indices: torch.LongTensor, sequence_mask: torch.LongTensor = None, span_indices_mask: torch.LongTensor = None): """ Given a sequence tensor, extract spans and return representations of them. Span representation can be computed in many different ways, such as concatenation of the start and end spans, attention over the vectors contained inside the span, etc. Parameters ---------- sequence_tensor : ``torch.FloatTensor``, required. A tensor of shape (batch_size, sequence_length, embedding_size) representing an embedded sequence of words. span_indices : ``torch.LongTensor``, required. A tensor of shape ``(batch_size, num_spans, 2)``, where the last dimension represents the inclusive start and end indices of the span to be extracted from the ``sequence_tensor``. sequence_mask : ``torch.LongTensor``, optional (default = ``None``). A tensor of shape (batch_size, sequence_length) representing padded elements of the sequence. span_indices_mask : ``torch.LongTensor``, optional (default = ``None``). A tensor of shape (batch_size, num_spans) representing the valid spans in the ``indices`` tensor. This mask is optional because sometimes it's easier to worry about masking after calling this function, rather than passing a mask directly. Returns ------- A tensor of shape ``(batch_size, num_spans, embedded_span_size)``, where ``embedded_span_size`` depends on the way spans are represented. """ raise NotImplementedError
serialization_directory : str, required. The directory containing the serialized weights. device: int, default = -1 The device to run the evaluation on. data: str, default = None The data to evaluate on. By default, we use the validation data from the original experiment. prefix: str, default="" The prefix to prepend to the generated gold and prediction files, to distinguish different models/data. domain: str, optional (default = None) If passed, filters the ontonotes evaluation/test dataset to only contain the specified domain. This overwrites the domain in the config file from the model, to allow evaluation on domains other than the one the model was trained on.
def main(serialization_directory: int, device: int, data: str, prefix: str, domain: str = None): """ serialization_directory : str, required. The directory containing the serialized weights. device: int, default = -1 The device to run the evaluation on. data: str, default = None The data to evaluate on. By default, we use the validation data from the original experiment. prefix: str, default="" The prefix to prepend to the generated gold and prediction files, to distinguish different models/data. domain: str, optional (default = None) If passed, filters the ontonotes evaluation/test dataset to only contain the specified domain. This overwrites the domain in the config file from the model, to allow evaluation on domains other than the one the model was trained on. """ config = Params.from_file(os.path.join(serialization_directory, "config.json")) if domain is not None: # Hack to allow evaluation on different domains than the # model was trained on. config["dataset_reader"]["domain_identifier"] = domain prefix = f"{domain}_{prefix}" else: config["dataset_reader"].pop("domain_identifier", None) dataset_reader = DatasetReader.from_params(config['dataset_reader']) evaluation_data_path = data if data else config['validation_data_path'] archive = load_archive(os.path.join(serialization_directory, "model.tar.gz"), cuda_device=device) model = archive.model model.eval() prediction_file_path = os.path.join(serialization_directory, prefix + "_predictions.txt") gold_file_path = os.path.join(serialization_directory, prefix + "_gold.txt") prediction_file = open(prediction_file_path, "w+") gold_file = open(gold_file_path, "w+") # Load the evaluation data and index it. print("reading evaluation data from {}".format(evaluation_data_path)) instances = dataset_reader.read(evaluation_data_path) with torch.autograd.no_grad(): iterator = BasicIterator(batch_size=32) iterator.index_with(model.vocab) model_predictions = [] batches = iterator(instances, num_epochs=1, shuffle=False, cuda_device=device) for batch in Tqdm.tqdm(batches): result = model(**batch) predictions = model.decode(result) model_predictions.extend(predictions["tags"]) for instance, prediction in zip(instances, model_predictions): fields = instance.fields try: # Most sentences have a verbal predicate, but not all. verb_index = fields["verb_indicator"].labels.index(1) except ValueError: verb_index = None gold_tags = fields["tags"].labels sentence = [x.text for x in fields["tokens"].tokens] write_to_conll_eval_file(prediction_file, gold_file, verb_index, sentence, prediction, gold_tags) prediction_file.close() gold_file.close()
Takes an initial state object, a means of transitioning from state to state, and a supervision signal, and uses the supervision to train the transition function to pick "good" states. This function should typically return a ``loss`` key during training, which the ``Model`` will use as its loss. Parameters ---------- initial_state : ``State`` This is the initial state for decoding, typically initialized after running some kind of encoder on some inputs. transition_function : ``TransitionFunction`` This is the transition function that scores all possible actions that can be taken in a given state, and returns a ranked list of next states at each step of decoding. supervision : ``SupervisionType`` This is the supervision that is used to train the ``transition_function`` function to pick "good" states. You can use whatever kind of supervision you want (e.g., a single "gold" action sequence, a set of possible "gold" action sequences, a reward function, etc.). We use ``typing.Generics`` to make sure that our static type checker is happy with how you've matched the supervision that you provide in the model to the ``DecoderTrainer`` that you want to use.
def decode(self, initial_state: State, transition_function: TransitionFunction, supervision: SupervisionType) -> Dict[str, torch.Tensor]: """ Takes an initial state object, a means of transitioning from state to state, and a supervision signal, and uses the supervision to train the transition function to pick "good" states. This function should typically return a ``loss`` key during training, which the ``Model`` will use as its loss. Parameters ---------- initial_state : ``State`` This is the initial state for decoding, typically initialized after running some kind of encoder on some inputs. transition_function : ``TransitionFunction`` This is the transition function that scores all possible actions that can be taken in a given state, and returns a ranked list of next states at each step of decoding. supervision : ``SupervisionType`` This is the supervision that is used to train the ``transition_function`` function to pick "good" states. You can use whatever kind of supervision you want (e.g., a single "gold" action sequence, a set of possible "gold" action sequences, a reward function, etc.). We use ``typing.Generics`` to make sure that our static type checker is happy with how you've matched the supervision that you provide in the model to the ``DecoderTrainer`` that you want to use. """ raise NotImplementedError
Returns the state of the scheduler as a ``dict``.
def state_dict(self) -> Dict[str, Any]: """ Returns the state of the scheduler as a ``dict``. """ return {key: value for key, value in self.__dict__.items() if key != 'optimizer'}
Load the schedulers state. Parameters ---------- state_dict : ``Dict[str, Any]`` Scheduler state. Should be an object returned from a call to ``state_dict``.
def load_state_dict(self, state_dict: Dict[str, Any]) -> None: """ Load the schedulers state. Parameters ---------- state_dict : ``Dict[str, Any]`` Scheduler state. Should be an object returned from a call to ``state_dict``. """ self.__dict__.update(state_dict)
Parameters ---------- text_field_input : ``Dict[str, torch.Tensor]`` A dictionary that was the output of a call to ``TextField.as_tensor``. Each tensor in here is assumed to have a shape roughly similar to ``(batch_size, sequence_length)`` (perhaps with an extra trailing dimension for the characters in each token). num_wrapping_dims : ``int``, optional (default=0) If you have a ``ListField[TextField]`` that created the ``text_field_input``, you'll end up with tensors of shape ``(batch_size, wrapping_dim1, wrapping_dim2, ..., sequence_length)``. This parameter tells us how many wrapping dimensions there are, so that we can correctly ``TimeDistribute`` the embedding of each named representation.
def forward(self, # pylint: disable=arguments-differ text_field_input: Dict[str, torch.Tensor], num_wrapping_dims: int = 0) -> torch.Tensor: """ Parameters ---------- text_field_input : ``Dict[str, torch.Tensor]`` A dictionary that was the output of a call to ``TextField.as_tensor``. Each tensor in here is assumed to have a shape roughly similar to ``(batch_size, sequence_length)`` (perhaps with an extra trailing dimension for the characters in each token). num_wrapping_dims : ``int``, optional (default=0) If you have a ``ListField[TextField]`` that created the ``text_field_input``, you'll end up with tensors of shape ``(batch_size, wrapping_dim1, wrapping_dim2, ..., sequence_length)``. This parameter tells us how many wrapping dimensions there are, so that we can correctly ``TimeDistribute`` the embedding of each named representation. """ raise NotImplementedError
Identifies the best prediction given the results from the submodels. Parameters ---------- subresults : List[Dict[str, torch.Tensor]] Results of each submodel. Returns ------- The index of the best submodel.
def ensemble(subresults: List[Dict[str, torch.Tensor]]) -> torch.Tensor: """ Identifies the best prediction given the results from the submodels. Parameters ---------- subresults : List[Dict[str, torch.Tensor]] Results of each submodel. Returns ------- The index of the best submodel. """ # Choose the highest average confidence span. span_start_probs = sum(subresult['span_start_probs'] for subresult in subresults) / len(subresults) span_end_probs = sum(subresult['span_end_probs'] for subresult in subresults) / len(subresults) return get_best_span(span_start_probs.log(), span_end_probs.log())
Parameters ---------- inputs : ``torch.Tensor``, required. A Tensor of shape ``(batch_size, sequence_length, hidden_size)``. mask : ``torch.LongTensor``, required. A binary mask of shape ``(batch_size, sequence_length)`` representing the non-padded elements in each sequence in the batch. Returns ------- A ``torch.Tensor`` of shape (num_layers, batch_size, sequence_length, hidden_size), where the num_layers dimension represents the LSTM output from that layer.
def forward(self, # pylint: disable=arguments-differ inputs: torch.Tensor, mask: torch.LongTensor) -> torch.Tensor: """ Parameters ---------- inputs : ``torch.Tensor``, required. A Tensor of shape ``(batch_size, sequence_length, hidden_size)``. mask : ``torch.LongTensor``, required. A binary mask of shape ``(batch_size, sequence_length)`` representing the non-padded elements in each sequence in the batch. Returns ------- A ``torch.Tensor`` of shape (num_layers, batch_size, sequence_length, hidden_size), where the num_layers dimension represents the LSTM output from that layer. """ batch_size, total_sequence_length = mask.size() stacked_sequence_output, final_states, restoration_indices = \ self.sort_and_run_forward(self._lstm_forward, inputs, mask) num_layers, num_valid, returned_timesteps, encoder_dim = stacked_sequence_output.size() # Add back invalid rows which were removed in the call to sort_and_run_forward. if num_valid < batch_size: zeros = stacked_sequence_output.new_zeros(num_layers, batch_size - num_valid, returned_timesteps, encoder_dim) stacked_sequence_output = torch.cat([stacked_sequence_output, zeros], 1) # The states also need to have invalid rows added back. new_states = [] for state in final_states: state_dim = state.size(-1) zeros = state.new_zeros(num_layers, batch_size - num_valid, state_dim) new_states.append(torch.cat([state, zeros], 1)) final_states = new_states # It's possible to need to pass sequences which are padded to longer than the # max length of the sequence to a Seq2StackEncoder. However, packing and unpacking # the sequences mean that the returned tensor won't include these dimensions, because # the RNN did not need to process them. We add them back on in the form of zeros here. sequence_length_difference = total_sequence_length - returned_timesteps if sequence_length_difference > 0: zeros = stacked_sequence_output.new_zeros(num_layers, batch_size, sequence_length_difference, stacked_sequence_output[0].size(-1)) stacked_sequence_output = torch.cat([stacked_sequence_output, zeros], 2) self._update_states(final_states, restoration_indices) # Restore the original indices and return the sequence. # Has shape (num_layers, batch_size, sequence_length, hidden_size) return stacked_sequence_output.index_select(1, restoration_indices)
Parameters ---------- inputs : ``PackedSequence``, required. A batch first ``PackedSequence`` to run the stacked LSTM over. initial_state : ``Tuple[torch.Tensor, torch.Tensor]``, optional, (default = None) A tuple (state, memory) representing the initial hidden state and memory of the LSTM, with shape (num_layers, batch_size, 2 * hidden_size) and (num_layers, batch_size, 2 * cell_size) respectively. Returns ------- output_sequence : ``torch.FloatTensor`` The encoded sequence of shape (num_layers, batch_size, sequence_length, hidden_size) final_states: ``Tuple[torch.FloatTensor, torch.FloatTensor]`` The per-layer final (state, memory) states of the LSTM, with shape (num_layers, batch_size, 2 * hidden_size) and (num_layers, batch_size, 2 * cell_size) respectively. The last dimension is duplicated because it contains the state/memory for both the forward and backward layers.
def _lstm_forward(self, inputs: PackedSequence, initial_state: Optional[Tuple[torch.Tensor, torch.Tensor]] = None) -> \ Tuple[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]]: """ Parameters ---------- inputs : ``PackedSequence``, required. A batch first ``PackedSequence`` to run the stacked LSTM over. initial_state : ``Tuple[torch.Tensor, torch.Tensor]``, optional, (default = None) A tuple (state, memory) representing the initial hidden state and memory of the LSTM, with shape (num_layers, batch_size, 2 * hidden_size) and (num_layers, batch_size, 2 * cell_size) respectively. Returns ------- output_sequence : ``torch.FloatTensor`` The encoded sequence of shape (num_layers, batch_size, sequence_length, hidden_size) final_states: ``Tuple[torch.FloatTensor, torch.FloatTensor]`` The per-layer final (state, memory) states of the LSTM, with shape (num_layers, batch_size, 2 * hidden_size) and (num_layers, batch_size, 2 * cell_size) respectively. The last dimension is duplicated because it contains the state/memory for both the forward and backward layers. """ if initial_state is None: hidden_states: List[Optional[Tuple[torch.Tensor, torch.Tensor]]] = [None] * len(self.forward_layers) elif initial_state[0].size()[0] != len(self.forward_layers): raise ConfigurationError("Initial states were passed to forward() but the number of " "initial states does not match the number of layers.") else: hidden_states = list(zip(initial_state[0].split(1, 0), initial_state[1].split(1, 0))) inputs, batch_lengths = pad_packed_sequence(inputs, batch_first=True) forward_output_sequence = inputs backward_output_sequence = inputs final_states = [] sequence_outputs = [] for layer_index, state in enumerate(hidden_states): forward_layer = getattr(self, 'forward_layer_{}'.format(layer_index)) backward_layer = getattr(self, 'backward_layer_{}'.format(layer_index)) forward_cache = forward_output_sequence backward_cache = backward_output_sequence if state is not None: forward_hidden_state, backward_hidden_state = state[0].split(self.hidden_size, 2) forward_memory_state, backward_memory_state = state[1].split(self.cell_size, 2) forward_state = (forward_hidden_state, forward_memory_state) backward_state = (backward_hidden_state, backward_memory_state) else: forward_state = None backward_state = None forward_output_sequence, forward_state = forward_layer(forward_output_sequence, batch_lengths, forward_state) backward_output_sequence, backward_state = backward_layer(backward_output_sequence, batch_lengths, backward_state) # Skip connections, just adding the input to the output. if layer_index != 0: forward_output_sequence += forward_cache backward_output_sequence += backward_cache sequence_outputs.append(torch.cat([forward_output_sequence, backward_output_sequence], -1)) # Append the state tuples in a list, so that we can return # the final states for all the layers. final_states.append((torch.cat([forward_state[0], backward_state[0]], -1), torch.cat([forward_state[1], backward_state[1]], -1))) stacked_sequence_outputs: torch.FloatTensor = torch.stack(sequence_outputs) # Stack the hidden state and memory for each layer into 2 tensors of shape # (num_layers, batch_size, hidden_size) and (num_layers, batch_size, cell_size) # respectively. final_hidden_states, final_memory_states = zip(*final_states) final_state_tuple: Tuple[torch.FloatTensor, torch.FloatTensor] = (torch.cat(final_hidden_states, 0), torch.cat(final_memory_states, 0)) return stacked_sequence_outputs, final_state_tuple
Load the pre-trained weights from the file.
def load_weights(self, weight_file: str) -> None: """ Load the pre-trained weights from the file. """ requires_grad = self.requires_grad with h5py.File(cached_path(weight_file), 'r') as fin: for i_layer, lstms in enumerate( zip(self.forward_layers, self.backward_layers) ): for j_direction, lstm in enumerate(lstms): # lstm is an instance of LSTMCellWithProjection cell_size = lstm.cell_size dataset = fin['RNN_%s' % j_direction]['RNN']['MultiRNNCell']['Cell%s' % i_layer ]['LSTMCell'] # tensorflow packs together both W and U matrices into one matrix, # but pytorch maintains individual matrices. In addition, tensorflow # packs the gates as input, memory, forget, output but pytorch # uses input, forget, memory, output. So we need to modify the weights. tf_weights = numpy.transpose(dataset['W_0'][...]) torch_weights = tf_weights.copy() # split the W from U matrices input_size = lstm.input_size input_weights = torch_weights[:, :input_size] recurrent_weights = torch_weights[:, input_size:] tf_input_weights = tf_weights[:, :input_size] tf_recurrent_weights = tf_weights[:, input_size:] # handle the different gate order convention for torch_w, tf_w in [[input_weights, tf_input_weights], [recurrent_weights, tf_recurrent_weights]]: torch_w[(1 * cell_size):(2 * cell_size), :] = tf_w[(2 * cell_size):(3 * cell_size), :] torch_w[(2 * cell_size):(3 * cell_size), :] = tf_w[(1 * cell_size):(2 * cell_size), :] lstm.input_linearity.weight.data.copy_(torch.FloatTensor(input_weights)) lstm.state_linearity.weight.data.copy_(torch.FloatTensor(recurrent_weights)) lstm.input_linearity.weight.requires_grad = requires_grad lstm.state_linearity.weight.requires_grad = requires_grad # the bias weights tf_bias = dataset['B'][...] # tensorflow adds 1.0 to forget gate bias instead of modifying the # parameters... tf_bias[(2 * cell_size):(3 * cell_size)] += 1 torch_bias = tf_bias.copy() torch_bias[(1 * cell_size):(2 * cell_size) ] = tf_bias[(2 * cell_size):(3 * cell_size)] torch_bias[(2 * cell_size):(3 * cell_size) ] = tf_bias[(1 * cell_size):(2 * cell_size)] lstm.state_linearity.bias.data.copy_(torch.FloatTensor(torch_bias)) lstm.state_linearity.bias.requires_grad = requires_grad # the projection weights proj_weights = numpy.transpose(dataset['W_P_0'][...]) lstm.state_projection.weight.data.copy_(torch.FloatTensor(proj_weights)) lstm.state_projection.weight.requires_grad = requires_grad
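The fiddliest part of this conversion is the gate reordering: TensorFlow stores the gate blocks as (input, memory, forget, output) while the PyTorch cells here expect (input, forget, memory, output). A tiny numpy sketch of that block swap on a toy bias vector:

import numpy

cell_size = 2
# Toy TF-style bias whose four blocks are filled with 0, 1, 2, 3,
# standing in for the (input, memory, forget, output) gates.
tf_bias = numpy.concatenate([numpy.full(cell_size, value, dtype=float)
                             for value in (0.0, 1.0, 2.0, 3.0)])
torch_bias = tf_bias.copy()
torch_bias[1 * cell_size:2 * cell_size] = tf_bias[2 * cell_size:3 * cell_size]
torch_bias[2 * cell_size:3 * cell_size] = tf_bias[1 * cell_size:2 * cell_size]
print(torch_bias)  # [0. 0. 2. 2. 1. 1. 3. 3.] -> (input, forget, memory, output)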