INSTRUCTION | RESPONSE
---|---
Pass the input (and mask) through each layer in turn. | def forward(self, x, mask):
"""Pass the input (and mask) through each layer in turn."""
all_layers = []
for layer in self.layers:
x = layer(x, mask)
if self.return_all_layers:
all_layers.append(x)
if self.return_all_layers:
all_layers[-1] = self.norm(all_layers[-1])
return all_layers
return self.norm(x) |
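A hedged usage sketch for the stacked-encoder `forward` above. Only `forward` is shown, so the class name `Encoder`, its construction, and the tensor shapes are assumptions, following the usual annotated-Transformer conventions.

```python
import torch

# Hypothetical wiring; building `encoder` from an EncoderLayer is assumed.
x = torch.randn(8, 20, 512)                     # (batch, seq_len, d_model)
mask = torch.ones(8, 1, 20, dtype=torch.bool)   # (batch, 1, seq_len)
out = encoder(x, mask)                          # final norm applied to the last layer's output
```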
Apply residual connection to any sublayer with the same size. | def forward(self, x: torch.Tensor, sublayer: Callable[[torch.Tensor], torch.Tensor]) -> torch.Tensor:
"""Apply residual connection to any sublayer with the same size."""
return x + self.dropout(sublayer(self.norm(x))) |
Follow Figure 1 (left) for connections. | def forward(self, x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
"""Follow Figure 1 (left) for connections."""
x = self.sublayer[0](x, lambda x: self.self_attn(x, x, x, mask))
return self.sublayer[1](x, self.feed_forward) |
An initialiser which preserves output variance for approximately Gaussian
distributed inputs. This boils down to initialising layers using a uniform
distribution in the range ``(-sqrt(3/dim[0]) * scale, sqrt(3 / dim[0]) * scale)``, where
``dim[0]`` is equal to the input dimension of the parameter and the ``scale``
is a constant scaling factor which depends on the non-linearity used.
See `Random Walk Initialisation for Training Very Deep Feedforward Networks
<https://www.semanticscholar.org/paper/Random-Walk-Initialization-for-Training-Very-Deep-Sussillo-Abbott/be9728a0728b6acf7a485225b1e41592176eda0b>`_
for more information.
Parameters
----------
tensor : ``torch.Tensor``, required.
The tensor to initialise.
nonlinearity : ``str``, optional (default = "linear")
The non-linearity which is performed after the projection that this
tensor is involved in. This must be the name of a function contained
in the ``torch.nn.functional`` package.
Returns
-------
The initialised tensor. | def uniform_unit_scaling(tensor: torch.Tensor, nonlinearity: str = "linear"):
"""
An initialiser which preserves output variance for approximately Gaussian
distributed inputs. This boils down to initialising layers using a uniform
distribution in the range ``(-sqrt(3/dim[0]) * scale, sqrt(3 / dim[0]) * scale)``, where
``dim[0]`` is equal to the input dimension of the parameter and the ``scale``
is a constant scaling factor which depends on the non-linearity used.
See `Random Walk Initialisation for Training Very Deep Feedforward Networks
<https://www.semanticscholar.org/paper/Random-Walk-Initialization-for-Training-Very-Deep-Sussillo-Abbott/be9728a0728b6acf7a485225b1e41592176eda0b>`_
for more information.
Parameters
----------
tensor : ``torch.Tensor``, required.
The tensor to initialise.
nonlinearity : ``str``, optional (default = "linear")
The non-linearity which is performed after the projection that this
tensor is involved in. This must be the name of a function contained
in the ``torch.nn.functional`` package.
Returns
-------
The initialised tensor.
"""
size = 1.
# Estimate the input size. This won't work perfectly,
# but it covers almost all use cases where this initialiser
# would be expected to be useful, i.e in large linear and
# convolutional layers, as the last dimension will almost
# always be the output size.
for dimension in list(tensor.size())[:-1]:
size *= dimension
activation_scaling = torch.nn.init.calculate_gain(nonlinearity, tensor)
max_value = math.sqrt(3 / size) * activation_scaling
return tensor.data.uniform_(-max_value, max_value) |
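A minimal usage sketch for `uniform_unit_scaling`; the shape and non-linearity are illustrative.

```python
import torch

# All dimensions except the last count as the input size, so for a
# (256, 128) projection the fan-in used by the formula is 256.
weight = torch.empty(256, 128)
uniform_unit_scaling(weight, nonlinearity="tanh")
# Entries are now uniform in (-sqrt(3/256) * gain, sqrt(3/256) * gain),
# with gain = torch.nn.init.calculate_gain("tanh").
```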
An initializer which allows initializing model parameters in "blocks". This is helpful
in the case of recurrent models which use multiple gates applied to linear projections,
which can be computed efficiently if they are concatenated together. However, they are
separate parameters which should be initialized independently.
Parameters
----------
tensor : ``torch.Tensor``, required.
A tensor to initialize.
split_sizes : List[int], required.
A list of length ``tensor.ndim()`` specifying the size of the
blocks along that particular dimension. E.g. ``[10, 20]`` would
result in the tensor being split into chunks of size 10 along the
first dimension and 20 along the second.
gain : float, optional (default = 1.0)
The gain (scaling) applied to the orthogonal initialization. | def block_orthogonal(tensor: torch.Tensor,
split_sizes: List[int],
gain: float = 1.0) -> None:
"""
An initializer which allows initializing model parameters in "blocks". This is helpful
in the case of recurrent models which use multiple gates applied to linear projections,
which can be computed efficiently if they are concatenated together. However, they are
separate parameters which should be initialized independently.
Parameters
----------
tensor : ``torch.Tensor``, required.
A tensor to initialize.
split_sizes : List[int], required.
A list of length ``tensor.ndim()`` specifying the size of the
blocks along that particular dimension. E.g. ``[10, 20]`` would
result in the tensor being split into chunks of size 10 along the
first dimension and 20 along the second.
gain : float, optional (default = 1.0)
The gain (scaling) applied to the orthogonal initialization.
"""
data = tensor.data
sizes = list(tensor.size())
if any([a % b != 0 for a, b in zip(sizes, split_sizes)]):
raise ConfigurationError("tensor dimensions must be divisible by their respective "
"split_sizes. Found size: {} and split_sizes: {}".format(sizes, split_sizes))
indexes = [list(range(0, max_size, split))
for max_size, split in zip(sizes, split_sizes)]
# Iterate over all possible blocks within the tensor.
for block_start_indices in itertools.product(*indexes):
# A list of tuples containing the index to start at for this block
# and the appropriate step size (i.e split_size[i] for dimension i).
index_and_step_tuples = zip(block_start_indices, split_sizes)
# This is a tuple of slices corresponding to:
# tensor[index: index + step_size, ...]. This is
# required because we could have an arbitrary number
# of dimensions. The actual slices we need are the
# start_index: start_index + step for each dimension in the tensor.
block_slice = tuple([slice(start_index, start_index + step)
for start_index, step in index_and_step_tuples])
data[block_slice] = torch.nn.init.orthogonal_(tensor[block_slice].contiguous(), gain=gain) |
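An illustrative call, sized like the concatenated gate projections the docstring mentions; the dimensions are made up.

```python
import torch

hidden_size, input_size = 100, 50
weight = torch.empty(4 * hidden_size, input_size)
# Four independent (hidden_size x input_size) gate blocks, each orthogonal.
block_orthogonal(weight, split_sizes=[hidden_size, input_size])
```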
Initialize the biases of the forget gate to 1, and all other gates to 0,
following Jozefowicz et al., An Empirical Exploration of Recurrent Network Architectures | def lstm_hidden_bias(tensor: torch.Tensor) -> None:
"""
Initialize the biases of the forget gate to 1, and all other gates to 0,
following Jozefowicz et al., An Empirical Exploration of Recurrent Network Architectures
"""
# gates are (b_hi|b_hf|b_hg|b_ho) of shape (4*hidden_size)
tensor.data.zero_()
hidden_size = tensor.shape[0] // 4
tensor.data[hidden_size:(2 * hidden_size)] = 1.0 |
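A quick sanity check of the gate layout described in the comment above, with a made-up hidden size.

```python
import torch

bias = torch.empty(12)          # 4 * hidden_size with hidden_size = 3
lstm_hidden_bias(bias)
assert bias[3:6].eq(1.0).all()  # the forget-gate slice (b_hf) is set to 1
assert bias[:3].eq(0.0).all()   # all other gates stay 0
```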
Converts a Params object into an InitializerApplicator. The json should
be formatted as follows::
    [
        ["parameter_regex_match1",
            {
                "type": "normal",
                "mean": 0.01,
                "std": 0.1
            }
        ],
        ["parameter_regex_match2", "uniform"],
        ["prevent_init_regex", "prevent"]
    ]
where the first item in each tuple is the regex that matches to parameters, and the second
item is a set of parameters that will be passed to ``Initializer.from_params()``. These
values can either be strings, in which case they correspond to the names of initializers,
or dictionaries, in which case they must contain the "type" key, corresponding to the name
of an initializer. In addition, they may contain auxiliary named parameters which will be
fed to the initializer itself. To determine valid auxiliary parameters, please refer to the
torch.nn.init documentation. "prevent" is a special type which does not have a corresponding
initializer; any parameter matching its regex will be excluded from initialization.
Returns
-------
An InitializerApplicator containing the specified initializers. | def from_params(cls, params: List[Tuple[str, Params]] = None) -> "InitializerApplicator":
"""
Converts a Params object into an InitializerApplicator. The json should
be formatted as follows::
    [
        ["parameter_regex_match1",
            {
                "type": "normal",
                "mean": 0.01,
                "std": 0.1
            }
        ],
        ["parameter_regex_match2", "uniform"],
        ["prevent_init_regex", "prevent"]
    ]
where the first item in each tuple is the regex that matches to parameters, and the second
item is a set of parameters that will be passed to ``Initializer.from_params()``. These
values can either be strings, in which case they correspond to the names of initializers,
or dictionaries, in which case they must contain the "type" key, corresponding to the name
of an initializer. In addition, they may contain auxiliary named parameters which will be
fed to the initializer itself. To determine valid auxiliary parameters, please refer to the
torch.nn.init documentation. "prevent" is a special type which does not have a corresponding
initializer; any parameter matching its regex will be excluded from initialization.
Returns
-------
An InitializerApplicator containing the specified initializers.
"""
# pylint: disable=arguments-differ
params = params or []
is_prevent = lambda item: item == "prevent" or item == {"type": "prevent"}
prevent_regexes = [param[0] for param in params if is_prevent(param[1])]
params = [param for param in params if param[1] and not is_prevent(param[1])]
initializers = [(name, Initializer.from_params(init_params)) for name, init_params in params]
return InitializerApplicator(initializers, prevent_regexes) |
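A sketch of the expected input, assuming AllenNLP's ``Params`` wrapper; "xavier_uniform" and "zero" are examples of registered initializer names.

```python
applicator = InitializerApplicator.from_params([
    ("weight", Params({"type": "xavier_uniform"})),
    ("bias", "zero"),
    ("embedding", "prevent"),    # matching parameters are left untouched
])
applicator(model)  # applies each initializer to parameters whose names match
```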
We read tables formatted as TSV files here. We assume the first line in the file is a tab
separated list of column headers, and all subsequent lines are content rows. For example if
the TSV file is:
Nation Olympics Medals
USA 1896 8
China 1932 9
we read "Nation", "Olympics" and "Medals" as column headers, "USA" and "China" as cells
under the "Nation" column and so on. | def read_from_file(cls, filename: str, question: List[Token]) -> 'TableQuestionKnowledgeGraph':
"""
We read tables formatted as TSV files here. We assume the first line in the file is a tab
separated list of column headers, and all subsequent lines are content rows. For example if
the TSV file is:
Nation Olympics Medals
USA 1896 8
China 1932 9
we read "Nation", "Olympics" and "Medals" as column headers, "USA" and "China" as cells
under the "Nation" column and so on.
"""
with open(filename) as table_file:
    return cls.read_from_lines(table_file.readlines(), question) |
We read tables formatted as JSON objects (dicts) here. This is useful when you are reading
data from a demo. The expected format is::
{"question": [token1, token2, ...],
"columns": [column1, column2, ...],
"cells": [[row1_cell1, row1_cell2, ...],
[row2_cell1, row2_cell2, ...],
... ]} | def read_from_json(cls, json_object: Dict[str, Any]) -> 'TableQuestionKnowledgeGraph':
"""
We read tables formatted as JSON objects (dicts) here. This is useful when you are reading
data from a demo. The expected format is::
{"question": [token1, token2, ...],
"columns": [column1, column2, ...],
"cells": [[row1_cell1, row1_cell2, ...],
[row2_cell1, row2_cell2, ...],
... ]}
"""
entity_text: Dict[str, str] = {}
neighbors: DefaultDict[str, List[str]] = defaultdict(list)
# Getting number entities first. Number entities don't have any neighbors, and their
# "entity text" is the text from the question that evoked the number.
question_tokens = json_object['question']
for number, number_text in cls._get_numbers_from_tokens(question_tokens):
entity_text[number] = number_text
neighbors[number] = []
for default_number in DEFAULT_NUMBERS:
if default_number not in neighbors:
neighbors[default_number] = []
entity_text[default_number] = default_number
# Following Sempre's convention for naming columns. Sempre gives columns unique names when
# columns normalize to a collision, so we keep track of these. We do not give cell text
# unique names, however, as `fb:cell.x` is actually a function that returns all cells that
# have text that normalizes to "x".
column_ids = []
columns: Dict[str, int] = {}
for column_string in json_object['columns']:
column_string = column_string.replace('\\n', '\n')
normalized_string = f'fb:row.row.{cls._normalize_string(column_string)}'
if normalized_string in columns:
columns[normalized_string] += 1
normalized_string = f'{normalized_string}_{columns[normalized_string]}'
columns[normalized_string] = 1
column_ids.append(normalized_string)
entity_text[normalized_string] = column_string
# Stores cell text to cell name, making sure that unique text maps to a unique name.
cell_id_mapping: Dict[str, str] = {}
column_cells: List[List[str]] = [[] for _ in columns]
for row_index, row_cells in enumerate(json_object['cells']):
assert len(columns) == len(row_cells), ("Invalid format. Row %d has %d cells, but header has %d"
" columns" % (row_index, len(row_cells), len(columns)))
# Following Sempre's convention for naming cells.
row_cell_ids = []
for column_index, cell_string in enumerate(row_cells):
cell_string = cell_string.replace('\\n', '\n')
column_cells[column_index].append(cell_string)
if cell_string in cell_id_mapping:
normalized_string = cell_id_mapping[cell_string]
else:
base_normalized_string = f'fb:cell.{cls._normalize_string(cell_string)}'
normalized_string = base_normalized_string
attempt_number = 1
while normalized_string in cell_id_mapping.values():
attempt_number += 1
normalized_string = f"{base_normalized_string}_{attempt_number}"
cell_id_mapping[cell_string] = normalized_string
row_cell_ids.append(normalized_string)
entity_text[normalized_string] = cell_string
for column_id, cell_id in zip(column_ids, row_cell_ids):
neighbors[column_id].append(cell_id)
neighbors[cell_id].append(column_id)
for column in column_cells:
if cls._should_split_column_cells(column):
for cell_string in column:
for part_entity, part_string in cls._get_cell_parts(cell_string):
neighbors[part_entity] = []
entity_text[part_entity] = part_string
return cls(set(neighbors.keys()), dict(neighbors), entity_text, question_tokens) |
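An illustrative call, reusing the Olympics table from the TSV example earlier; ``Token`` stands in for the tokenizer's token class and the question is a placeholder.

```python
graph = TableQuestionKnowledgeGraph.read_from_json({
    "question": [Token("How"), Token("many"), Token("medals"), Token("?")],
    "columns": ["Nation", "Olympics", "Medals"],
    "cells": [["USA", "1896", "8"],
              ["China", "1932", "9"]],
})
# The graph links "fb:row.row.nation" to "fb:cell.usa" and "fb:cell.china"
# (and vice versa) via the neighbors dictionary built above.
```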
Finds numbers in the input tokens and returns them as strings. We do some simple heuristic
number recognition, finding ordinals and cardinals expressed as text ("one", "first",
etc.), as well as numerals ("7th", "3rd"), months (mapping "july" to 7), and units
("1ghz").
We also handle year ranges expressed as decade or centuries ("1800s" or "1950s"), adding
the endpoints of the range as possible numbers to generate.
We return a list of tuples, where each tuple is the (number_string, token_text) for a
number found in the input tokens. | def _get_numbers_from_tokens(tokens: List[Token]) -> List[Tuple[str, str]]:
"""
Finds numbers in the input tokens and returns them as strings. We do some simple heuristic
number recognition, finding ordinals and cardinals expressed as text ("one", "first",
etc.), as well as numerals ("7th", "3rd"), months (mapping "july" to 7), and units
("1ghz").
We also handle year ranges expressed as decade or centuries ("1800s" or "1950s"), adding
the endpoints of the range as possible numbers to generate.
We return a list of tuples, where each tuple is the (number_string, token_text) for a
number found in the input tokens.
"""
numbers = []
for i, token in enumerate(tokens):
number: Union[int, float] = None
token_text = token.text
text = token.text.replace(',', '').lower()
if text in NUMBER_WORDS:
number = NUMBER_WORDS[text]
magnitude = 1
if i < len(tokens) - 1:
next_token = tokens[i + 1].text.lower()
if next_token in ORDER_OF_MAGNITUDE_WORDS:
magnitude = ORDER_OF_MAGNITUDE_WORDS[next_token]
token_text += ' ' + tokens[i + 1].text
is_range = False
if len(text) > 1 and text[-1] == 's' and text[-2] == '0':
is_range = True
text = text[:-1]
# We strip out any non-digit characters, to capture things like '7th', or '1ghz'. The
# way we're doing this could lead to false positives for something like '1e2', but
# we'll take that risk. It shouldn't be a big deal.
text = ''.join(text[i] for i, char in enumerate(text) if char in NUMBER_CHARACTERS)
try:
# We'll use a check for float(text) to find numbers, because text.isdigit() doesn't
# catch things like "-3" or "0.07".
number = float(text)
except ValueError:
pass
if number is not None:
number = number * magnitude
if '.' in text:
number_string = '%.3f' % number
else:
number_string = '%d' % number
numbers.append((number_string, token_text))
if is_range:
# TODO(mattg): both numbers in the range will have the same text, and so the
# linking score won't have any way to differentiate them... We should figure
# out a better way to handle this.
num_zeros = 1
while text[-(num_zeros + 1)] == '0':
num_zeros += 1
numbers.append((str(int(number + 10 ** num_zeros)), token_text))
return numbers |
Splits a cell into parts and returns the parts of the cell. We return a list of
``(entity_name, entity_text)``, where ``entity_name`` is ``fb:part.[something]``, and
``entity_text`` is the text of the cell corresponding to that part. For many cells, there
is only one "part", and we return a list of length one.
Note that you shouldn't call this on every cell in the table; SEMPRE decides to make these
splits only when at least one of the cells in a column looks "splittable". Only if you're
splitting the cells in a column should you use this function. | def _get_cell_parts(cls, cell_text: str) -> List[Tuple[str, str]]:
"""
Splits a cell into parts and returns the parts of the cell. We return a list of
``(entity_name, entity_text)``, where ``entity_name`` is ``fb:part.[something]``, and
``entity_text`` is the text of the cell corresponding to that part. For many cells, there
is only one "part", and we return a list of length one.
Note that you shouldn't call this on every cell in the table; SEMPRE decides to make these
splits only when at least one of the cells in a column looks "splittable". Only if you're
splitting the cells in a column should you use this function.
"""
parts = []
for part_text in cls.cell_part_regex.split(cell_text):
part_text = part_text.strip()
part_entity = f'fb:part.{cls._normalize_string(part_text)}'
parts.append((part_entity, part_text))
return parts |
Returns true if there is any cell in this column that can be split. | def _should_split_column_cells(cls, column_cells: List[str]) -> bool:
"""
Returns true if there is any cell in this column that can be split.
"""
return any(cls._should_split_cell(cell_text) for cell_text in column_cells) |
Checks whether the cell should be split. We're just doing the same thing that SEMPRE did
here. | def _should_split_cell(cls, cell_text: str) -> bool:
"""
Checks whether the cell should be split. We're just doing the same thing that SEMPRE did
here.
"""
if ', ' in cell_text or '\n' in cell_text or '/' in cell_text:
return True
return False |
Returns entities that can be linked to spans in the question, that should be in the agenda,
for training a coverage based semantic parser. This method essentially does a heuristic
entity linking, to provide weak supervision for a learning to search parser. | def get_linked_agenda_items(self) -> List[str]:
"""
Returns entities that can be linked to spans in the question, that should be in the agenda,
for training a coverage based semantic parser. This method essentially does a heuristic
entity linking, to provide weak supervision for a learning to search parser.
"""
agenda_items: List[str] = []
for entity in self._get_longest_span_matching_entities():
agenda_items.append(entity)
# If the entity is a cell, we need to add the column to the agenda as well,
# because the answer most likely involves getting the row with the cell.
if 'fb:cell' in entity:
agenda_items.append(self.neighbors[entity][0])
return agenda_items |
inp_fn: str, required.
Path to file from which to read Open IE extractions in Open IE4's format.
domain: str, required.
Domain to be used when writing CoNLL format.
out_fn: str, required.
Path to file to which to write the CoNLL format Open IE extractions. | def main(inp_fn: str,
domain: str,
out_fn: str) -> None:
"""
inp_fn: str, required.
Path to file from which to read Open IE extractions in Open IE4's format.
domain: str, required.
Domain to be used when writing CoNLL format.
out_fn: str, required.
Path to file to which to write the CoNLL format Open IE extractions.
"""
with open(out_fn, 'w') as fout:
for sent_ls in read(inp_fn):
fout.write("{}\n\n".format('\n'.join(['\t'.join(map(str,
pad_line_to_ontonotes(line,
domain)))
for line
in convert_sent_to_conll(sent_ls)]))) |
Return an Element from span (list of spacy toks) | def element_from_span(span: List[int],
span_type: str) -> Element:
"""
Return an Element from span (list of spacy toks)
"""
return Element(span_type,
[span[0].idx,
span[-1].idx + len(span[-1])],
' '.join(map(str, span))) |
Ensure single word predicate
by adding "before-predicate" and "after-predicate"
arguments. | def split_predicate(ex: Extraction) -> Extraction:
"""
Ensure single word predicate
by adding "before-predicate" and "after-predicate"
arguments.
"""
rel_toks = ex.toks[char_to_word_index(ex.rel.span[0], ex.sent) \
: char_to_word_index(ex.rel.span[1], ex.sent) + 1]
if not rel_toks:
return ex
verb_inds = [tok_ind for (tok_ind, tok)
in enumerate(rel_toks)
if tok.tag_.startswith('VB')]
last_verb_ind = verb_inds[-1] if verb_inds \
else (len(rel_toks) - 1)
rel_parts = [element_from_span([rel_toks[last_verb_ind]],
'V')]
before_verb = rel_toks[ : last_verb_ind]
after_verb = rel_toks[last_verb_ind + 1 : ]
if before_verb:
rel_parts.append(element_from_span(before_verb, "BV"))
if after_verb:
rel_parts.append(element_from_span(after_verb, "AV"))
return Extraction(ex.sent, ex.toks, ex.arg1, rel_parts, ex.args2, ex.confidence) |
Return a conll representation of a given input Extraction. | def extraction_to_conll(ex: Extraction) -> List[str]:
"""
Return a conll representation of a given input Extraction.
"""
ex = split_predicate(ex)
toks = ex.sent.split(' ')
ret = ['*'] * len(toks)
args = [ex.arg1] + ex.args2
rels_and_args = [("ARG{}".format(arg_ind), arg)
for arg_ind, arg in enumerate(args)] + \
[(rel_part.elem_type, rel_part)
for rel_part
in ex.rel]
for rel, arg in rels_and_args:
# Add brackets
cur_start_ind = char_to_word_index(arg.span[0],
ex.sent)
cur_end_ind = char_to_word_index(arg.span[1],
ex.sent)
ret[cur_start_ind] = "({}{}".format(rel, ret[cur_start_ind])
ret[cur_end_ind] += ')'
return ret |
Return an integer tuple from
textual representation of closed / open spans. | def interpret_span(text_spans: str) -> List[int]:
"""
Return an integer tuple from
textual representation of closed / open spans.
"""
m = regex.match(r"^(?:(?:([\(\[]\d+, \d+[\)\]])|({\d+}))[,]?\s*)+$",
text_spans)
spans = m.captures(1) + m.captures(2)
int_spans = []
for span in spans:
ints = list(map(int,
span[1: -1].split(',')))
if span[0] == '(':
ints[0] += 1
if span[-1] == ']':
ints[1] += 1
if span.startswith('{'):
assert(len(ints) == 1)
ints.append(ints[0] + 1)
assert(len(ints) == 2)
int_spans.append(ints)
# Merge consecutive spans
ret = []
cur_span = int_spans[0]
for (start, end) in int_spans[1:]:
if start - 1 == cur_span[-1]:
cur_span = (cur_span[0],
end)
else:
ret.append(cur_span)
cur_span = (start, end)
if (not ret) or (cur_span != ret[-1]):
ret.append(cur_span)
return ret[0] |
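Worked examples of the span notation the regex accepts (inputs are hypothetical):

```python
interpret_span("[0, 4)")          # closed-open        -> [0, 4]
interpret_span("(0, 4]")          # open-closed        -> [1, 5] (both ends shifted)
interpret_span("{3}")             # single-token span  -> [3, 4]
interpret_span("[0, 2), [3, 5)")  # consecutive spans merge into (0, 5)
```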
Construct an Element instance from regexp
groups. | def interpret_element(element_type: str, text: str, span: str) -> Element:
"""
Construct an Element instance from regexp
groups.
"""
return Element(element_type,
interpret_span(span),
text) |
Parse a raw element into text and indices (integers). | def parse_element(raw_element: str) -> List[Element]:
"""
Parse a raw element into text and indices (integers).
"""
elements = [regex.match(r"^(([a-zA-Z]+)\(([^;]+),List\(([^;]*)\)\))$",
elem.lstrip().rstrip())
for elem
in raw_element.split(';')]
return [interpret_element(*elem.groups()[1:])
for elem in elements
if elem] |
Given a list of extractions for a single sentence -
convert it to conll representation. | def convert_sent_to_conll(sent_ls: List[Extraction]):
"""
Given a list of extractions for a single sentence -
convert it to conll representation.
"""
# Sanity check - make sure all extractions are on the same sentence
assert(len(set([ex.sent for ex in sent_ls])) == 1)
toks = sent_ls[0].sent.split(' ')
return safe_zip(*[range(len(toks)),
toks] + \
[extraction_to_conll(ex)
for ex in sent_ls]) |
Pad line to conform to ontonotes representation. | def pad_line_to_ontonotes(line, domain) -> List[str]:
"""
Pad line to conform to ontonotes representation.
"""
word_ind, word = line[ : 2]
pos = 'XX'
oie_tags = line[2 : ]
line_num = 0
parse = "-"
lemma = "-"
return [domain, line_num, word_ind, word, pos, parse, lemma, '-',\
'-', '-', '*'] + list(oie_tags) + ['-', ] |
Given a dictionary from sentence -> extractions,
return a corresponding CoNLL representation. | def convert_sent_dict_to_conll(sent_dic, domain) -> str:
"""
Given a dictionary from sentence -> extractions,
return a corresponding CoNLL representation.
"""
return '\n\n'.join(['\n'.join(['\t'.join(map(str, pad_line_to_ontonotes(line, domain)))
for line in convert_sent_to_conll(sent_ls)])
for sent_ls
in sent_dic.values()]) |
Given a Kinesis record data that is decoded, deaggregate if it was packed using the
Kinesis Producer Library into individual records. This method will be a no-op for any
records that are not aggregated (but will still return them).
decoded_data - the base64 decoded data that comprises either the KPL aggregated data, or the Kinesis payload directly.
return value - A list of deaggregated Kinesis record payloads (if the data is not aggregated, we just return a list with the payload alone) | def deaggregate_record(decoded_data):
'''Given a Kinesis record data that is decoded, deaggregate if it was packed using the
Kinesis Producer Library into individual records. This method will be a no-op for any
records that are not aggregated (but will still return them).
decoded_data - the base64 decoded data that comprises either the KPL aggregated data, or the Kinesis payload directly.
return value - A list of deaggregated Kinesis record payloads (if the data is not aggregated, we just return a list with the payload alone)
'''
is_aggregated = True
#Verify the magic header
data_magic = None
if(len(decoded_data) >= len(aws_kinesis_agg.MAGIC)):
data_magic = decoded_data[:len(aws_kinesis_agg.MAGIC)]
else:
print("Not aggregated")
is_aggregated = False
decoded_data_no_magic = decoded_data[len(aws_kinesis_agg.MAGIC):]
if aws_kinesis_agg.MAGIC != data_magic or len(decoded_data_no_magic) <= aws_kinesis_agg.DIGEST_SIZE:
is_aggregated = False
if is_aggregated:
#verify the MD5 digest
message_digest = decoded_data_no_magic[-aws_kinesis_agg.DIGEST_SIZE:]
message_data = decoded_data_no_magic[:-aws_kinesis_agg.DIGEST_SIZE]
md5_calc = hashlib.md5()  # hashlib replaces the Python 2-only md5 module (assumes "import hashlib")
md5_calc.update(message_data)
calculated_digest = md5_calc.digest()
if message_digest != calculated_digest:
return [decoded_data]
else:
#Extract the protobuf message
try:
ar = kpl_pb2.AggregatedRecord()
ar.ParseFromString(message_data)
return [mr.data for mr in ar.records]
except BaseException as e:
raise e
else:
return [decoded_data] |
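A hedged sketch of typical use inside a Kinesis-triggered Lambda handler; ``handle_payload`` is a placeholder for the consumer's own processing.

```python
import base64

def lambda_handler(event, context):
    for record in event['Records']:
        decoded = base64.b64decode(record['kinesis']['data'])
        for payload in deaggregate_record(decoded):
            handle_payload(payload)
```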
Parses a S3 Uri into a dictionary of the Bucket, Key, and VersionId
:return: a BodyS3Location dict or None if not an S3 Uri
:rtype: dict | def parse_s3_uri(uri):
"""Parses a S3 Uri into a dictionary of the Bucket, Key, and VersionId
:return: a BodyS3Location dict or None if not an S3 Uri
:rtype: dict
"""
if not isinstance(uri, string_types):
return None
url = urlparse(uri)
query = parse_qs(url.query)
if url.scheme == 's3' and url.netloc and url.path:
s3_pointer = {
'Bucket': url.netloc,
'Key': url.path.lstrip('/')
}
if 'versionId' in query and len(query['versionId']) == 1:
s3_pointer['Version'] = query['versionId'][0]
return s3_pointer
else:
return None |
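Illustrative inputs tracing both branches:

```python
parse_s3_uri("s3://my-bucket/path/to/code.zip?versionId=abc123")
# -> {'Bucket': 'my-bucket', 'Key': 'path/to/code.zip', 'Version': 'abc123'}

parse_s3_uri("https://example.com/not-s3")
# -> None (only the s3:// scheme is recognised)
```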
Constructs a S3 URI string from given code dictionary
:param dict code_dict: Dictionary containing Lambda function Code S3 location of the form
{S3Bucket, S3Key, S3ObjectVersion}
:return: S3 URI of form s3://bucket/key?versionId=version
:rtype: string | def to_s3_uri(code_dict):
"""Constructs a S3 URI string from given code dictionary
:param dict code_dict: Dictionary containing Lambda function Code S3 location of the form
{S3Bucket, S3Key, S3ObjectVersion}
:return: S3 URI of form s3://bucket/key?versionId=version
:rtype: string
"""
try:
uri = "s3://{bucket}/{key}".format(bucket=code_dict["S3Bucket"], key=code_dict["S3Key"])
version = code_dict.get("S3ObjectVersion", None)
except (TypeError, AttributeError):
raise TypeError("Code location should be a dictionary")
if version:
uri += "?versionId=" + version
return uri |
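The inverse of ``parse_s3_uri`` above, with illustrative values:

```python
to_s3_uri({"S3Bucket": "my-bucket", "S3Key": "code.zip", "S3ObjectVersion": "v1"})
# -> "s3://my-bucket/code.zip?versionId=v1"

to_s3_uri("not-a-dict")
# -> raises TypeError("Code location should be a dictionary")
```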
Constructs a Lambda `Code` or `Content` property, from the SAM `CodeUri` or `ContentUri` property.
This follows the current scheme for Lambda Functions and LayerVersions.
:param dict or string location_uri: s3 location dict or string
:param string logical_id: logical_id of the resource calling this function
:param string property_name: name of the property which is used as an input to this function.
:returns: a Code dict, containing the S3 Bucket, Key, and Version of the Lambda layer code
:rtype: dict | def construct_s3_location_object(location_uri, logical_id, property_name):
"""Constructs a Lambda `Code` or `Content` property, from the SAM `CodeUri` or `ContentUri` property.
This follows the current scheme for Lambda Functions and LayerVersions.
:param dict or string location_uri: s3 location dict or string
:param string logical_id: logical_id of the resource calling this function
:param string property_name: name of the property which is used as an input to this function.
:returns: a Code dict, containing the S3 Bucket, Key, and Version of the Lambda layer code
:rtype: dict
"""
if isinstance(location_uri, dict):
if not location_uri.get("Bucket") or not location_uri.get("Key"):
# location_uri is a dictionary but does not contain Bucket or Key property
raise InvalidResourceException(logical_id,
"'{}' requires Bucket and Key properties to be "
"specified".format(property_name))
s3_pointer = location_uri
else:
# location_uri is NOT a dictionary. Parse it as a string
s3_pointer = parse_s3_uri(location_uri)
if s3_pointer is None:
raise InvalidResourceException(logical_id,
'\'{}\' is not a valid S3 Uri of the form '
'"s3://bucket/key" with optional versionId query '
'parameter.'.format(property_name))
code = {
'S3Bucket': s3_pointer['Bucket'],
'S3Key': s3_pointer['Key']
}
if 'Version' in s3_pointer:
code['S3ObjectVersion'] = s3_pointer['Version']
return code |
Returns a list of policies from the resource properties. This method knows how to interpret and handle
polymorphic nature of the policies property.
Policies can be one of the following:
* Managed policy name: string
* List of managed policy names: list of strings
* IAM Policy document: dict containing Statement key
* List of IAM Policy documents: list of IAM Policy Document
* Policy Template: dict with only one key where key is in list of supported policy template names
* List of Policy Templates: list of Policy Template
:param dict resource_properties: Dictionary of resource properties containing the policies property.
It is assumed that this is already a dictionary and contains policies key.
:return list of PolicyEntry: List of policies, where each item is an instance of named tuple `PolicyEntry` | def _get_policies(self, resource_properties):
"""
Returns a list of policies from the resource properties. This method knows how to interpret and handle
polymorphic nature of the policies property.
Policies can be one of the following:
* Managed policy name: string
* List of managed policy names: list of strings
* IAM Policy document: dict containing Statement key
* List of IAM Policy documents: list of IAM Policy Document
* Policy Template: dict with only one key where key is in list of supported policy template names
* List of Policy Templates: list of Policy Template
:param dict resource_properties: Dictionary of resource properties containing the policies property.
It is assumed that this is already a dictionary and contains policies key.
:return list of PolicyEntry: List of policies, where each item is an instance of named tuple `PolicyEntry`
"""
policies = None
if self._contains_policies(resource_properties):
policies = resource_properties[self.POLICIES_PROPERTY_NAME]
if not policies:
# Policies is None or empty
return []
if not isinstance(policies, list):
# Just a single entry. Make it into a list of convenience
policies = [policies]
result = []
for policy in policies:
policy_type = self._get_type(policy)
entry = PolicyEntry(data=policy, type=policy_type)
result.append(entry)
return result |
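One illustrative ``Policies`` value covering three of the shapes the docstring enumerates; ``SQSPollerPolicy`` is an example of a SAM policy template name.

```python
resource_properties = {
    "Policies": [
        "AmazonDynamoDBReadOnlyAccess",                    # managed policy name
        {"Statement": [{"Effect": "Allow",                 # inline IAM policy
                        "Action": "s3:GetObject",
                        "Resource": "*"}]},
        {"SQSPollerPolicy": {"QueueName": "my-queue"}},    # policy template
    ]
}
# _get_policies(resource_properties) returns one PolicyEntry per item, typed
# MANAGED_POLICY, POLICY_STATEMENT and POLICY_TEMPLATE respectively.
```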
Is there policies data in this resource?
:param dict resource_properties: Properties of the resource
:return: True if we can process this resource. False, otherwise | def _contains_policies(self, resource_properties):
"""
Is there policies data in this resource?
:param dict resource_properties: Properties of the resource
:return: True if we can process this resource. False, otherwise
"""
return resource_properties is not None \
and isinstance(resource_properties, dict) \
and self.POLICIES_PROPERTY_NAME in resource_properties |
Returns the type of the given policy
:param string or dict policy: Policy data
:return PolicyTypes: Type of the given policy. None, if type could not be inferred | def _get_type(self, policy):
"""
Returns the type of the given policy
:param string or dict policy: Policy data
:return PolicyTypes: Type of the given policy. None, if type could not be inferred
"""
# Must handle intrinsic functions. Policy could be a primitive type or an intrinsic function
# Managed policies are either string or an intrinsic function that resolves to a string
if isinstance(policy, string_types) or is_instrinsic(policy):
return PolicyTypes.MANAGED_POLICY
# Policy statement is a dictionary with the key "Statement" in it
if isinstance(policy, dict) and "Statement" in policy:
return PolicyTypes.POLICY_STATEMENT
# This could be a policy template then.
if self._is_policy_template(policy):
return PolicyTypes.POLICY_TEMPLATE
# Nothing matches. Don't take opinions on how to handle it. Instead just set the appropriate type.
return PolicyTypes.UNKNOWN |
Is the given policy data a policy template? Policy templates is a dictionary with one key which is the name
of the template.
:param dict policy: Policy data
:return: True, if this is a policy template. False if it is not | def _is_policy_template(self, policy):
"""
Is the given policy data a policy template? Policy templates is a dictionary with one key which is the name
of the template.
:param dict policy: Policy data
:return: True, if this is a policy template. False if it is not
"""
return self._policy_template_processor is not None and \
isinstance(policy, dict) and \
len(policy) == 1 and \
self._policy_template_processor.has(list(policy.keys())[0]) is True |
r"""
Call shadow lambda to obtain current shadow state.
:Keyword Arguments:
* *thingName* (``string``) --
[REQUIRED]
The name of the thing.
:returns: (``dict``) --
The output from the GetThingShadow operation
* *payload* (``bytes``) --
The state information, in JSON format. | def get_thing_shadow(self, **kwargs):
r"""
Call shadow lambda to obtain current shadow state.
:Keyword Arguments:
* *thingName* (``string``) --
[REQUIRED]
The name of the thing.
:returns: (``dict``) --
The output from the GetThingShadow operation
* *payload* (``bytes``) --
The state information, in JSON format.
"""
thing_name = self._get_required_parameter('thingName', **kwargs)
payload = b''
return self._shadow_op('get', thing_name, payload) |
r"""
Updates the thing shadow for the specified thing.
:Keyword Arguments:
* *thingName* (``string``) --
[REQUIRED]
The name of the thing.
* *payload* (``bytes or seekable file-like object``) --
[REQUIRED]
The state information, in JSON format.
:returns: (``dict``) --
The output from the UpdateThingShadow operation
* *payload* (``bytes``) --
The state information, in JSON format. | def update_thing_shadow(self, **kwargs):
r"""
Updates the thing shadow for the specified thing.
:Keyword Arguments:
* *thingName* (``string``) --
[REQUIRED]
The name of the thing.
* *payload* (``bytes or seekable file-like object``) --
[REQUIRED]
The state information, in JSON format.
:returns: (``dict``) --
The output from the UpdateThingShadow operation
* *payload* (``bytes``) --
The state information, in JSON format.
"""
thing_name = self._get_required_parameter('thingName', **kwargs)
payload = self._get_required_parameter('payload', **kwargs)
return self._shadow_op('update', thing_name, payload) |
r"""
Deletes the thing shadow for the specified thing.
:Keyword Arguments:
* *thingName* (``string``) --
[REQUIRED]
The name of the thing.
:returns: (``dict``) --
The output from the DeleteThingShadow operation
* *payload* (``bytes``) --
The state information, in JSON format. | def delete_thing_shadow(self, **kwargs):
r"""
Deletes the thing shadow for the specified thing.
:Keyword Arguments:
* *thingName* (``string``) --
[REQUIRED]
The name of the thing.
:returns: (``dict``) --
The output from the DeleteThingShadow operation
* *payload* (``bytes``) --
The state information, in JSON format.
"""
thing_name = self._get_required_parameter('thingName', **kwargs)
payload = b''
return self._shadow_op('delete', thing_name, payload) |
r"""
Publishes state information.
:Keyword Arguments:
* *topic* (``string``) --
[REQUIRED]
The name of the MQTT topic.
* *payload* (``bytes or seekable file-like object``) --
The state information, in JSON format.
:returns: None | def publish(self, **kwargs):
r"""
Publishes state information.
:Keyword Arguments:
* *topic* (``string``) --
[REQUIRED]
The name of the MQTT topic.
* *payload* (``bytes or seekable file-like object``) --
The state information, in JSON format.
:returns: None
"""
topic = self._get_required_parameter('topic', **kwargs)
# payload is an optional parameter
payload = kwargs.get('payload', b'')
function_arn = ROUTER_FUNCTION_ARN
client_context = {
'custom': {
'source': MY_FUNCTION_ARN,
'subject': topic
}
}
customer_logger.info('Publishing message on topic "{}" with Payload "{}"'.format(topic, payload))
self.lambda_client._invoke_internal(
function_arn,
payload,
base64.b64encode(json.dumps(client_context).encode())
) |
Adds global properties to the resource, if necessary. This method is a no-op if there are no global properties
for this resource type
:param string resource_type: Type of the resource (Ex: AWS::Serverless::Function)
:param dict resource_properties: Properties of the resource that need to be merged
:return dict: Merged properties of the resource | def merge(self, resource_type, resource_properties):
"""
Adds global properties to the resource, if necessary. This method is a no-op if there are no global properties
for this resource type
:param string resource_type: Type of the resource (Ex: AWS::Serverless::Function)
:param dict resource_properties: Properties of the resource that need to be merged
:return dict: Merged properties of the resource
"""
if resource_type not in self.template_globals:
# Nothing to do. Return the template unmodified
return resource_properties
global_props = self.template_globals[resource_type]
return global_props.merge(resource_properties) |
Takes a SAM template as input and parses the Globals section
:param globals_dict: Dictionary representation of the Globals section
:return: Processed globals dictionary which can be used to quickly identify properties to merge
:raises: InvalidResourceException if the input contains properties that we don't support | def _parse(self, globals_dict):
"""
Takes a SAM template as input and parses the Globals section
:param globals_dict: Dictionary representation of the Globals section
:return: Processed globals dictionary which can be used to quickly identify properties to merge
:raises: InvalidResourceException if the input contains properties that we don't support
"""
globals = {}
if not isinstance(globals_dict, dict):
raise InvalidGlobalsSectionException(self._KEYWORD,
                                     "It must be a non-empty dictionary")
for section_name, properties in globals_dict.items():
resource_type = self._make_resource_type(section_name)
if resource_type not in self.supported_properties:
raise InvalidGlobalsSectionException(self._KEYWORD,
"'{section}' is not supported. "
"Must be one of the following values - {supported}"
.format(section=section_name,
supported=self.supported_resource_section_names))
if not isinstance(properties, dict):
raise InvalidGlobalsSectionException(self._KEYWORD,
                                     "Value of '{section}' must be a dictionary".format(section=section_name))
for key, value in properties.items():
supported = self.supported_properties[resource_type]
if key not in supported:
raise InvalidGlobalsSectionException(self._KEYWORD,
"'{key}' is not a supported property of '{section}'. "
"Must be one of the following values - {supported}"
.format(key=key, section=section_name, supported=supported))
# Store all Global properties in a map with key being the AWS::Serverless::* resource type
globals[resource_type] = GlobalProperties(properties)
return globals |
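An illustrative Globals section as ``_parse`` would receive it; ``Runtime`` and ``Timeout`` are examples of supported Function globals.

```python
globals_dict = {
    "Function": {
        "Runtime": "python3.9",
        "Timeout": 30,
    }
}
# _parse(globals_dict) returns a dict keyed by the full resource type, e.g.
# {"AWS::Serverless::Function": GlobalProperties({"Runtime": ..., "Timeout": ...})}
```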
Actually perform the merge operation for the given inputs. This method is used as part of the recursion.
Therefore input values can be of any type. So is the output.
:param global_value: Global value to be merged
:param local_value: Local value to be merged
:return: Merged result | def _do_merge(self, global_value, local_value):
"""
Actually perform the merge operation for the given inputs. This method is used as part of the recursion.
Therefore input values can be of any type. So is the output.
:param global_value: Global value to be merged
:param local_value: Local value to be merged
:return: Merged result
"""
token_global = self._token_of(global_value)
token_local = self._token_of(local_value)
# The following statements codify the rules explained in the docstring above
if token_global != token_local:
return self._prefer_local(global_value, local_value)
elif self.TOKEN.PRIMITIVE == token_global == token_local:
return self._prefer_local(global_value, local_value)
elif self.TOKEN.DICT == token_global == token_local:
return self._merge_dict(global_value, local_value)
elif self.TOKEN.LIST == token_global == token_local:
return self._merge_lists(global_value, local_value)
else:
raise TypeError(
"Unsupported type of objects. GlobalType={}, LocalType={}".format(token_global, token_local)) |
Merges the two dictionaries together
:param global_dict: Global dictionary to be merged
:param local_dict: Local dictionary to be merged
:return: New merged dictionary with values shallow copied | def _merge_dict(self, global_dict, local_dict):
"""
Merges the two dictionaries together
:param global_dict: Global dictionary to be merged
:param local_dict: Local dictionary to be merged
:return: New merged dictionary with values shallow copied
"""
# Local has higher priority than global. So iterate over local dict and merge into global if keys are overridden
global_dict = global_dict.copy()
for key in local_dict.keys():
if key in global_dict:
# Both local & global contains the same key. Let's do a merge.
global_dict[key] = self._do_merge(global_dict[key], local_dict[key])
else:
# Key is not in globals, just in local. Copy it over
global_dict[key] = local_dict[key]
return global_dict |
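A worked example of the merge, showing local keys winning and shared dict keys recursing through ``_do_merge``:

```python
# global_dict: {"Timeout": 30, "Environment": {"Variables": {"A": "1"}}}
# local_dict:  {"Environment": {"Variables": {"B": "2"}}}
# result:      {"Timeout": 30, "Environment": {"Variables": {"A": "1", "B": "2"}}}
```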
Returns the token type of the input.
:param input: Input whose type is to be determined
:return TOKENS: Token type of the input | def _token_of(self, input):
"""
Returns the token type of the input.
:param input: Input whose type is to be determined
:return TOKENS: Token type of the input
"""
if isinstance(input, dict):
# Intrinsic functions are always dicts
if is_intrinsics(input):
# Intrinsic functions are handled *exactly* like a primitive type because
# they resolve to a primitive type when creating a stack with CloudFormation
return self.TOKEN.PRIMITIVE
else:
return self.TOKEN.DICT
elif isinstance(input, list):
return self.TOKEN.LIST
else:
return self.TOKEN.PRIMITIVE |
Is this a valid SAM template dictionary
:param dict template_dict: Data to be validated
:param dict schema: Optional, dictionary containing JSON Schema representing SAM template
:return: Empty string if there are no validation errors in template | def validate(template_dict, schema=None):
"""
Is this a valid SAM template dictionary
:param dict template_dict: Data to be validated
:param dict schema: Optional, dictionary containing JSON Schema representing SAM template
:return: Empty string if there are no validation errors in template
"""
if not schema:
schema = SamTemplateValidator._read_schema()
validation_errors = ""
try:
jsonschema.validate(template_dict, schema)
except ValidationError as ex:
# Stringifying the exception gives us a useful error message
validation_errors = str(ex)
# Swallow the expected exception here, as our caller wants the validation
# errors rather than the validation exception itself
return validation_errors |
Generates a number within a reasonable range that might be expected for a car rental.
The price is fixed for a given location and car type. | def generate_car_price(location, days, age, car_type):
"""
Generates a number within a reasonable range that might be expected for a car rental.
The price is fixed for a given location and car type.
"""
car_types = ['economy', 'standard', 'midsize', 'full size', 'minivan', 'luxury']
base_location_cost = 0
for i in range(len(location)):
base_location_cost += ord(location.lower()[i]) - 97
age_multiplier = 1.10 if age < 25 else 1
# Select economy if car_type is not found
if car_type not in car_types:
car_type = car_types[0]
return days * ((100 + base_location_cost) + ((car_types.index(car_type) * 50) * age_multiplier)) |
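A worked example of the toy pricing formula:

```python
# "boston": base_location_cost = sum(ord(c) - 97 for c in "boston") = 79
generate_car_price("boston", days=3, age=30, car_type="standard")
# = 3 * ((100 + 79) + (1 * 50) * 1) = 687   (index("standard") = 1, no age surcharge)
```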
Generates a number within a reasonable range that might be expected for a hotel.
The price is fixed for a pair of location and roomType. | def generate_hotel_price(location, nights, room_type):
"""
Generates a number within a reasonable range that might be expected for a hotel.
The price is fixed for a pair of location and roomType.
"""
room_types = ['queen', 'king', 'deluxe']
cost_of_living = 0
for i in range(len(location)):
cost_of_living += ord(location.lower()[i]) - 97
return nights * (100 + cost_of_living + (100 + room_types.index(room_type.lower()))) |
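The hotel counterpart, reusing cost_of_living("boston") = 79 from the example above:

```python
generate_hotel_price("boston", nights=2, room_type="queen")
# = 2 * (100 + 79 + (100 + 0)) = 558   (index("queen") = 0)
```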
Performs dialog management and fulfillment for booking a hotel.
Beyond fulfillment, the implementation for this intent demonstrates the following:
1) Use of elicitSlot in slot validation and re-prompting
2) Use of sessionAttributes to pass information that can be used to guide conversation | def book_hotel(intent_request):
"""
Performs dialog management and fulfillment for booking a hotel.
Beyond fulfillment, the implementation for this intent demonstrates the following:
1) Use of elicitSlot in slot validation and re-prompting
2) Use of sessionAttributes to pass information that can be used to guide conversation
"""
location = try_ex(lambda: intent_request['currentIntent']['slots']['Location'])
checkin_date = try_ex(lambda: intent_request['currentIntent']['slots']['CheckInDate'])
nights = safe_int(try_ex(lambda: intent_request['currentIntent']['slots']['Nights']))
room_type = try_ex(lambda: intent_request['currentIntent']['slots']['RoomType'])
session_attributes = intent_request['sessionAttributes']
# Load confirmation history and track the current reservation.
reservation = json.dumps({
'ReservationType': 'Hotel',
'Location': location,
'RoomType': room_type,
'CheckInDate': checkin_date,
'Nights': nights
})
session_attributes['currentReservation'] = reservation
if intent_request['invocationSource'] == 'DialogCodeHook':
# Validate any slots which have been specified. If any are invalid, re-elicit for their value
validation_result = validate_hotel(intent_request['currentIntent']['slots'])
if not validation_result['isValid']:
slots = intent_request['currentIntent']['slots']
slots[validation_result['violatedSlot']] = None
return elicit_slot(
session_attributes,
intent_request['currentIntent']['name'],
slots,
validation_result['violatedSlot'],
validation_result['message']
)
# Otherwise, let native DM rules determine how to elicit for slots and prompt for confirmation. Pass price
# back in sessionAttributes once it can be calculated; otherwise clear any setting from sessionAttributes.
if location and checkin_date and nights and room_type:
# The price of the hotel has yet to be confirmed.
price = generate_hotel_price(location, nights, room_type)
session_attributes['currentReservationPrice'] = price
else:
try_ex(lambda: session_attributes.pop('currentReservationPrice'))
session_attributes['currentReservation'] = reservation
return delegate(session_attributes, intent_request['currentIntent']['slots'])
# Booking the hotel. In a real application, this would likely involve a call to a backend service.
logger.debug('bookHotel under={}'.format(reservation))
try_ex(lambda: session_attributes.pop('currentReservationPrice'))
try_ex(lambda: session_attributes.pop('currentReservation'))
session_attributes['lastConfirmedReservation'] = reservation
return close(
session_attributes,
'Fulfilled',
{
'contentType': 'PlainText',
'content': 'Thanks, I have placed your reservation. Please let me know if you would like to book a car '
'rental, or another hotel.'
}
) |
Performs dialog management and fulfillment for booking a car.
Beyond fulfillment, the implementation for this intent demonstrates the following:
1) Use of elicitSlot in slot validation and re-prompting
2) Use of sessionAttributes to pass information that can be used to guide conversation | def book_car(intent_request):
"""
Performs dialog management and fulfillment for booking a car.
Beyond fulfillment, the implementation for this intent demonstrates the following:
1) Use of elicitSlot in slot validation and re-prompting
2) Use of sessionAttributes to pass information that can be used to guide conversation
"""
slots = intent_request['currentIntent']['slots']
pickup_city = slots['PickUpCity']
pickup_date = slots['PickUpDate']
return_date = slots['ReturnDate']
driver_age = slots['DriverAge']
car_type = slots['CarType']
confirmation_status = intent_request['currentIntent']['confirmationStatus']
session_attributes = intent_request['sessionAttributes']
last_confirmed_reservation = try_ex(lambda: session_attributes['lastConfirmedReservation'])
if last_confirmed_reservation:
last_confirmed_reservation = json.loads(last_confirmed_reservation)
confirmation_context = try_ex(lambda: session_attributes['confirmationContext'])
# Load confirmation history and track the current reservation.
reservation = json.dumps({
'ReservationType': 'Car',
'PickUpCity': pickup_city,
'PickUpDate': pickup_date,
'ReturnDate': return_date,
'CarType': car_type
})
session_attributes['currentReservation'] = reservation
if pickup_city and pickup_date and return_date and driver_age and car_type:
# Generate the price of the car in case it is necessary for future steps.
price = generate_car_price(pickup_city, get_day_difference(pickup_date, return_date), driver_age, car_type)
session_attributes['currentReservationPrice'] = price
if intent_request['invocationSource'] == 'DialogCodeHook':
# Validate any slots which have been specified. If any are invalid, re-elicit for their value
validation_result = validate_book_car(intent_request['currentIntent']['slots'])
if not validation_result['isValid']:
slots[validation_result['violatedSlot']] = None
return elicit_slot(
session_attributes,
intent_request['currentIntent']['name'],
slots,
validation_result['violatedSlot'],
validation_result['message']
)
# Determine if the intent (and current slot settings) has been denied. The messaging will be different
# if the user is denying a reservation he initiated or an auto-populated suggestion.
if confirmation_status == 'Denied':
# Clear out auto-population flag for subsequent turns.
try_ex(lambda: session_attributes.pop('confirmationContext'))
try_ex(lambda: session_attributes.pop('currentReservation'))
if confirmation_context == 'AutoPopulate':
return elicit_slot(
session_attributes,
intent_request['currentIntent']['name'],
{
'PickUpCity': None,
'PickUpDate': None,
'ReturnDate': None,
'DriverAge': None,
'CarType': None
},
'PickUpCity',
{
'contentType': 'PlainText',
'content': 'Where would you like to make your car reservation?'
}
)
return delegate(session_attributes, intent_request['currentIntent']['slots'])
if confirmation_status == 'None':
# If we are currently auto-populating but have not gotten confirmation, keep requesting for confirmation.
if (not pickup_city and not pickup_date and not return_date and not driver_age and not car_type)\
or confirmation_context == 'AutoPopulate':
if last_confirmed_reservation and try_ex(lambda: last_confirmed_reservation['ReservationType']) == 'Hotel':
# If the user's previous reservation was a hotel - prompt for a rental with
# auto-populated values to match this reservation.
session_attributes['confirmationContext'] = 'AutoPopulate'
return confirm_intent(
session_attributes,
intent_request['currentIntent']['name'],
{
'PickUpCity': last_confirmed_reservation['Location'],
'PickUpDate': last_confirmed_reservation['CheckInDate'],
'ReturnDate': add_days(
last_confirmed_reservation['CheckInDate'], last_confirmed_reservation['Nights']
),
'CarType': None,
'DriverAge': None
},
{
'contentType': 'PlainText',
'content': 'Is this car rental for your {} night stay in {} on {}?'.format(
last_confirmed_reservation['Nights'],
last_confirmed_reservation['Location'],
last_confirmed_reservation['CheckInDate']
)
}
)
# Otherwise, let native DM rules determine how to elicit for slots and/or drive confirmation.
return delegate(session_attributes, intent_request['currentIntent']['slots'])
# If confirmation has occurred, continue filling any unfilled slot values or pass to fulfillment.
if confirmation_status == 'Confirmed':
# Remove confirmationContext from sessionAttributes so it does not confuse future requests
try_ex(lambda: session_attributes.pop('confirmationContext'))
if confirmation_context == 'AutoPopulate':
if not driver_age:
return elicit_slot(
session_attributes,
intent_request['currentIntent']['name'],
intent_request['currentIntent']['slots'],
'DriverAge',
{
'contentType': 'PlainText',
'content': 'How old is the driver of this car rental?'
}
)
elif not car_type:
return elicit_slot(
session_attributes,
intent_request['currentIntent']['name'],
intent_request['currentIntent']['slots'],
'CarType',
{
'contentType': 'PlainText',
'content': 'What type of car would you like? Popular models are '
'economy, midsize, and luxury.'
}
)
return delegate(session_attributes, intent_request['currentIntent']['slots'])
# Booking the car. In a real application, this would likely involve a call to a backend service.
logger.debug('bookCar at={}'.format(reservation))
del session_attributes['currentReservationPrice']
del session_attributes['currentReservation']
session_attributes['lastConfirmedReservation'] = reservation
return close(
session_attributes,
'Fulfilled',
{
'contentType': 'PlainText',
'content': 'Thanks, I have placed your reservation.'
}
) |
Called when the user specifies an intent for this bot. | def dispatch(intent_request):
"""
Called when the user specifies an intent for this bot.
"""
logger.debug('dispatch userId={}, intentName={}'.format(intent_request['userId'], intent_request['currentIntent']['name']))
intent_name = intent_request['currentIntent']['name']
# Dispatch to your bot's intent handlers
if intent_name == 'BookHotel':
return book_hotel(intent_request)
elif intent_name == 'BookCar':
return book_car(intent_request)
raise Exception('Intent with name ' + intent_name + ' not supported') |
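A minimal Lex event sketch, reduced to the fields ``dispatch`` and the handlers above actually read; all values are placeholders.

```python
event = {
    "userId": "user-123",
    "invocationSource": "DialogCodeHook",
    "sessionAttributes": {},
    "currentIntent": {
        "name": "BookHotel",
        "confirmationStatus": "None",
        "slots": {"Location": None, "CheckInDate": None,
                  "Nights": None, "RoomType": None},
    },
}
response = dispatch(event)   # routes to book_hotel(event)
```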
Returns the Lambda EventSourceMapping to which this pull event corresponds. Adds the appropriate managed
policy to the function's execution role, if such a role is provided.
:param dict kwargs: a dict containing the execution role generated for the function
:returns: a list of vanilla CloudFormation Resources, to which this pull event expands
:rtype: list | def to_cloudformation(self, **kwargs):
"""Returns the Lambda EventSourceMapping to which this pull event corresponds. Adds the appropriate managed
policy to the function's execution role, if such a role is provided.
:param dict kwargs: a dict containing the execution role generated for the function
:returns: a list of vanilla CloudFormation Resources, to which this pull event expands
:rtype: list
"""
function = kwargs.get('function')
if not function:
raise TypeError("Missing required keyword argument: function")
resources = []
lambda_eventsourcemapping = LambdaEventSourceMapping(self.logical_id)
resources.append(lambda_eventsourcemapping)
try:
# Name will not be available for Alias resources
function_name_or_arn = function.get_runtime_attr("name")
except NotImplementedError:
function_name_or_arn = function.get_runtime_attr("arn")
if not self.Stream and not self.Queue:
raise InvalidEventException(
self.relative_id, "No Queue (for SQS) or Stream (for Kinesis or DynamoDB) provided.")
if self.Stream and not self.StartingPosition:
raise InvalidEventException(
self.relative_id, "StartingPosition is required for Kinesis and DynamoDB.")
lambda_eventsourcemapping.FunctionName = function_name_or_arn
lambda_eventsourcemapping.EventSourceArn = self.Stream or self.Queue
lambda_eventsourcemapping.StartingPosition = self.StartingPosition
lambda_eventsourcemapping.BatchSize = self.BatchSize
lambda_eventsourcemapping.Enabled = self.Enabled
if 'Condition' in function.resource_attributes:
lambda_eventsourcemapping.set_resource_attribute('Condition', function.resource_attributes['Condition'])
if 'role' in kwargs:
self._link_policy(kwargs['role'])
return resources |
If this source triggers a Lambda function whose execution role is auto-generated by SAM, add the
appropriate managed policy to this Role.
:param model.iam.IAMRole role: the execution role generated for the function | def _link_policy(self, role):
"""If this source triggers a Lambda function whose execution role is auto-generated by SAM, add the
appropriate managed policy to this Role.
:param model.iam.IAMRole role: the execution role generated for the function
"""
policy_arn = self.get_policy_arn()
if role is not None and policy_arn not in role.ManagedPolicyArns:
role.ManagedPolicyArns.append(policy_arn) |
Method to read default values for template parameters and merge with user supplied values.
Example:
If the template contains the following parameters defined
Parameters:
Param1:
Type: String
Default: default_value
Param2:
Type: String
Default: default_value
And, the user explicitly provided the following parameter values:
{
Param2: "new value"
}
then, this method will grab default value for Param1 and return the following result:
{
Param1: "default_value",
Param2: "new value"
}
:param dict sam_template: SAM template
:param dict parameter_values: Dictionary of parameter values provided by the user
:return dict: Merged parameter values | def add_default_parameter_values(self, sam_template):
"""
Method to read default values for template parameters and merge with user supplied values.
Example:
If the template contains the following parameters defined
Parameters:
Param1:
Type: String
Default: default_value
Param2:
Type: String
Default: default_value
And, the user explicitly provided the following parameter values:
{
Param2: "new value"
}
then, this method will grab default value for Param1 and return the following result:
{
Param1: "default_value",
Param2: "new value"
}
:param dict sam_template: SAM template
:param dict parameter_values: Dictionary of parameter values provided by the user
:return dict: Merged parameter values
"""
parameter_definition = sam_template.get("Parameters", None)
if not parameter_definition or not isinstance(parameter_definition, dict):
return self.parameter_values
for param_name, value in parameter_definition.items():
    if param_name not in self.parameter_values and isinstance(value, dict) and "Default" in value:
        self.parameter_values[param_name] = value["Default"]
# Return the merged map so the result matches the ":return dict:" contract above.
return self.parameter_values
Add pseudo parameter values
:return: parameter values that have pseudo parameter in it | def add_pseudo_parameter_values(self):
"""
Add pseudo parameter values
:return: parameter values that have pseudo parameter in it
"""
if 'AWS::Region' not in self.parameter_values:
self.parameter_values['AWS::Region'] = boto3.session.Session().region_name |
Add this deployment preference to the collection
:raise ValueError if an existing logical id already exists in the _resource_preferences
:param logical_id: logical id of the resource where this deployment preference applies
:param deployment_preference_dict: the input SAM template deployment preference mapping | def add(self, logical_id, deployment_preference_dict):
"""
Add this deployment preference to the collection
:raise ValueError if an existing logical id already exists in the _resource_preferences
:param logical_id: logical id of the resource where this deployment preference applies
:param deployment_preference_dict: the input SAM template deployment preference mapping
"""
if logical_id in self._resource_preferences:
raise ValueError("logical_id {logical_id} previously added to this deployment_preference_collection".format(
logical_id=logical_id))
self._resource_preferences[logical_id] = DeploymentPreference.from_dict(logical_id, deployment_preference_dict) |
:return: only the logical id's for the deployment preferences in this collection which are enabled | def enabled_logical_ids(self):
"""
:return: only the logical id's for the deployment preferences in this collection which are enabled
"""
return [logical_id for logical_id, preference in self._resource_preferences.items() if preference.enabled] |
:param function_logical_id: logical_id of the function this deployment group belongs to
:return: CodeDeployDeploymentGroup resource | def deployment_group(self, function_logical_id):
"""
:param function_logical_id: logical_id of the function this deployment group belongs to
:return: CodeDeployDeploymentGroup resource
"""
deployment_preference = self.get(function_logical_id)
deployment_group = CodeDeployDeploymentGroup(self.deployment_group_logical_id(function_logical_id))
if deployment_preference.alarms is not None:
deployment_group.AlarmConfiguration = {'Enabled': True,
'Alarms': [{'Name': alarm} for alarm in
deployment_preference.alarms]}
deployment_group.ApplicationName = self.codedeploy_application.get_runtime_attr('name')
deployment_group.AutoRollbackConfiguration = {'Enabled': True,
'Events': ['DEPLOYMENT_FAILURE',
'DEPLOYMENT_STOP_ON_ALARM',
'DEPLOYMENT_STOP_ON_REQUEST']}
deployment_group.DeploymentConfigName = fnSub("CodeDeployDefault.Lambda${ConfigName}",
{"ConfigName": deployment_preference.deployment_type})
deployment_group.DeploymentStyle = {'DeploymentType': 'BLUE_GREEN',
'DeploymentOption': 'WITH_TRAFFIC_CONTROL'}
deployment_group.ServiceRoleArn = self.codedeploy_iam_role.get_runtime_attr("arn")
if deployment_preference.role:
deployment_group.ServiceRoleArn = deployment_preference.role
return deployment_group |
If we wanted to initialize the session to have some attributes we could
add those here | def get_welcome_response():
""" If we wanted to initialize the session to have some attributes we could
add those here
"""
session_attributes = {}
card_title = "Welcome"
speech_output = "Welcome to the Alexa Skills Kit sample. " \
"Please tell me your favorite color by saying, " \
"my favorite color is red"
# If the user either does not reply to the welcome message or says something
# that is not understood, they will be prompted again with this text.
reprompt_text = "Please tell me your favorite color by saying, " \
"my favorite color is red."
should_end_session = False
return build_response(session_attributes, build_speechlet_response(
card_title, speech_output, reprompt_text, should_end_session)) |
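The build_speechlet_response and build_response helpers referenced above are assumed to follow the standard Alexa Skills Kit response envelope; a minimal sketch under that assumption:

def build_speechlet_response(title, output, reprompt_text, should_end_session):
    # Plain-text speech, a simple card, and a reprompt for the session.
    return {
        'outputSpeech': {'type': 'PlainText', 'text': output},
        'card': {'type': 'Simple', 'title': title, 'content': output},
        'reprompt': {'outputSpeech': {'type': 'PlainText', 'text': reprompt_text}},
        'shouldEndSession': should_end_session
    }

def build_response(session_attributes, speechlet_response):
    # Wrap the speechlet body in the versioned response envelope.
    return {
        'version': '1.0',
        'sessionAttributes': session_attributes,
        'response': speechlet_response
    }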
Sets the color in the session and prepares the speech to reply to the
user. | def set_color_in_session(intent, session):
""" Sets the color in the session and prepares the speech to reply to the
user.
"""
card_title = intent['name']
session_attributes = {}
should_end_session = False
if 'Color' in intent['slots']:
favorite_color = intent['slots']['Color']['value']
session_attributes = create_favorite_color_attributes(favorite_color)
speech_output = "I now know your favorite color is " + \
favorite_color + \
". You can ask me your favorite color by saying, " \
"what's my favorite color?"
reprompt_text = "You can ask me your favorite color by saying, " \
"what's my favorite color?"
else:
speech_output = "I'm not sure what your favorite color is. " \
"Please try again."
reprompt_text = "I'm not sure what your favorite color is. " \
"You can tell me your favorite color by saying, " \
"my favorite color is red."
return build_response(session_attributes, build_speechlet_response(
card_title, speech_output, reprompt_text, should_end_session)) |
Called when the user specifies an intent for this skill | def on_intent(intent_request, session):
""" Called when the user specifies an intent for this skill """
print("on_intent requestId=" + intent_request['requestId'] +
", sessionId=" + session['sessionId'])
intent = intent_request['intent']
intent_name = intent_request['intent']['name']
# Dispatch to your skill's intent handlers
if intent_name == "MyColorIsIntent":
return set_color_in_session(intent, session)
elif intent_name == "WhatsMyColorIntent":
return get_color_from_session(intent, session)
elif intent_name == "AMAZON.HelpIntent":
return get_welcome_response()
elif intent_name == "AMAZON.CancelIntent" or intent_name == "AMAZON.StopIntent":
return handle_session_end_request()
else:
raise ValueError("Invalid intent") |
Route the incoming request based on type (LaunchRequest, IntentRequest,
etc.) The JSON body of the request is provided in the event parameter. | def lambda_handler(event, context):
""" Route the incoming request based on type (LaunchRequest, IntentRequest,
etc.) The JSON body of the request is provided in the event parameter.
"""
print("event.session.application.applicationId=" +
event['session']['application']['applicationId'])
"""
Uncomment this if statement and populate with your skill's application ID to
prevent someone else from configuring a skill that sends requests to this
function.
"""
# if (event['session']['application']['applicationId'] !=
# "amzn1.echo-sdk-ams.app.[unique-value-here]"):
# raise ValueError("Invalid Application ID")
if event['session']['new']:
on_session_started({'requestId': event['request']['requestId']},
event['session'])
if event['request']['type'] == "LaunchRequest":
return on_launch(event['request'], event['session'])
elif event['request']['type'] == "IntentRequest":
return on_intent(event['request'], event['session'])
elif event['request']['type'] == "SessionEndedRequest":
return on_session_ended(event['request'], event['session']) |
Generate stable LogicalIds based on the prefix and given data. This method ensures that the logicalId is
deterministic and stable based on input prefix & data object. In other words:
logicalId changes *if and only if* either the `prefix` or `data_obj` changes
Internally we simply take a SHA1 of the data and append it to the prefix to create the logicalId.
NOTE: LogicalIDs are how CloudFormation identifies a resource. If this ID changes, CFN will delete and
create a new resource. This can be catastrophic for most resources. So it is important to be *always*
backwards compatible here.
:return: LogicalId that can be used to construct resources
:rtype string | def gen(self):
"""
Generate stable LogicalIds based on the prefix and given data. This method ensures that the logicalId is
deterministic and stable based on input prefix & data object. In other words:
logicalId changes *if and only if* either the `prefix` or `data_obj` changes
Internally we simply take a SHA1 of the data and append it to the prefix to create the logicalId.
NOTE: LogicalIDs are how CloudFormation identifies a resource. If this ID changes, CFN will delete and
create a new resource. This can be catastrophic for most resources. So it is important to be *always*
backwards compatible here.
:return: LogicalId that can be used to construct resources
:rtype string
"""
data_hash = self.get_hash()
return "{prefix}{hash}".format(prefix=self._prefix, hash=data_hash) |
Generate and return a hash of data that can be used as suffix of logicalId
:return: Hash of data if it was present
:rtype string | def get_hash(self, length=HASH_LENGTH):
"""
Generate and return a hash of data that can be used as suffix of logicalId
:return: Hash of data if it was present
:rtype string
"""
data_hash = ""
if not self.data_str:
return data_hash
encoded_data_str = self.data_str
if sys.version_info.major == 2:
# In Py2, only unicode needs to be encoded.
if isinstance(self.data_str, unicode):
encoded_data_str = self.data_str.encode('utf-8')
else:
# data_str should always be unicode on python 3
encoded_data_str = self.data_str.encode('utf-8')
data_hash = hashlib.sha1(encoded_data_str).hexdigest()
return data_hash[:length] |
Stable, platform- and language-independent stringification of data with basic Python types.
We use JSON to dump a string instead of `str()` method in order to be language independent.
:param data: Data to be stringified. If this is one of JSON native types like string, dict, array etc, it will
be properly serialized. Otherwise this method will throw a TypeError for non-JSON serializable
objects
:return: string representation of the dictionary
:rtype string | def _stringify(self, data):
"""
Stable, platform- and language-independent stringification of data with basic Python types.
We use JSON to dump a string instead of `str()` method in order to be language independent.
:param data: Data to be stringified. If this is one of JSON native types like string, dict, array etc, it will
be properly serialized. Otherwise this method will throw a TypeError for non-JSON serializable
objects
:return: string representation of the dictionary
:rtype string
"""
if isinstance(data, string_types):
return data
# Get the most compact dictionary (separators) and sort the keys recursively to get a stable output
return json.dumps(data, separators=(',', ':'), sort_keys=True) |
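Taken together, gen, get_hash, and _stringify boil down to "prefix + SHA1-of-stable-JSON"; a self-contained sketch of the same idea (the 10-character suffix length is an assumption standing in for HASH_LENGTH):

import hashlib
import json

def stable_logical_id(prefix, data, length=10):
    # Compact separators plus sorted keys give a canonical serialization,
    # so equal data always hashes to the same suffix.
    data_str = data if isinstance(data, str) else json.dumps(
        data, separators=(',', ':'), sort_keys=True)
    suffix = hashlib.sha1(data_str.encode('utf-8')).hexdigest()[:length]
    return prefix + suffix

# Key order does not matter; both calls print the same id.
print(stable_logical_id('MyApiDeployment', {'b': 2, 'a': 1}))
print(stable_logical_id('MyApiDeployment', {'a': 1, 'b': 2}))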
Add the information that resource with given `logical_id` supports the given `property`, and that a reference
to `logical_id.property` resolves to the given `value`.
Example:
"MyApi.Deployment" -> "MyApiDeployment1234567890"
:param logical_id: Logical ID of the resource (Ex: MyLambdaFunction)
:param property: Property on the resource that can be referenced (Ex: Alias)
:param value: Value that this reference resolves to.
:return: nothing | def add(self, logical_id, property, value):
"""
Add the information that resource with given `logical_id` supports the given `property`, and that a reference
to `logical_id.property` resolves to the given `value`.
Example:
"MyApi.Deployment" -> "MyApiDeployment1234567890"
:param logical_id: Logical ID of the resource (Ex: MyLambdaFunction)
:param property: Property on the resource that can be referenced (Ex: Alias)
:param value: Value that this reference resolves to.
:return: nothing
"""
if not logical_id or not property:
raise ValueError("LogicalId and property must be a non-empty string")
if not value or not isinstance(value, string_types):
raise ValueError("Property value must be a non-empty string")
if logical_id not in self._refs:
self._refs[logical_id] = {}
if property in self._refs[logical_id]:
raise ValueError("Cannot add second reference value to {}.{} property".format(logical_id, property))
self._refs[logical_id][property] = value |
Returns the value of the reference for given logical_id at given property. Ex: MyFunction.Alias
:param logical_id: Logical Id of the resource
:param property: Property of the resource you want to resolve. None if you want to get value of all properties
:return: Value of this property if present. None otherwise | def get(self, logical_id, property):
"""
Returns the value of the reference for given logical_id at given property. Ex: MyFunction.Alias
:param logical_id: Logical Id of the resource
:param property: Property of the resource you want to resolve. None if you want to get value of all properties
:return: Value of this property if present. None otherwise
"""
# By defaulting to empty dictionary, we can handle the case where logical_id is not in map without if statements
prop_values = self.get_all(logical_id)
if prop_values:
return prop_values.get(property, None)
else:
return None |
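Usage sketch for the add/get pair above, assuming the enclosing class (called ResourceReferences here purely for illustration) initializes self._refs to an empty dict:

refs = ResourceReferences()  # hypothetical class name
refs.add('MyApi', 'Deployment', 'MyApiDeployment1234567890')

refs.get('MyApi', 'Deployment')     # -> 'MyApiDeployment1234567890'
refs.get('MyApi', 'Stage')          # -> None
refs.get('OtherApi', 'Deployment')  # -> None
refs.add('MyApi', 'Deployment', 'Another')  # raises ValueError: second reference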
encrypt leverages KMS encrypt and base64-encodes the encrypted blob
More info on KMS encrypt API:
https://docs.aws.amazon.com/kms/latest/APIReference/API_encrypt.html | def encrypt(key, message):
'''encrypt leverages KMS encrypt and base64-encodes the encrypted blob
More info on KMS encrypt API:
https://docs.aws.amazon.com/kms/latest/APIReference/API_encrypt.html
'''
try:
ret = kms.encrypt(KeyId=key, Plaintext=message)
# b64encode replaces the deprecated encodestring and avoids embedded newlines
encrypted_data = base64.b64encode(ret.get('CiphertextBlob'))
except Exception as e:
# returns http 500 back to user and log error details in Cloudwatch Logs
raise Exception("Unable to encrypt data: ", e)
return encrypted_data.decode() |
Transforms the SAM defined Tags into the form CloudFormation is expecting.
SAM Example:
```
...
Tags:
TagKey: TagValue
```
CloudFormation equivalent:
```
- Key: TagKey
  Value: TagValue
```
:param resource_tag_dict: Customer defined dictionary (SAM Example from above)
:return: List of Tag Dictionaries (CloudFormation Equivalent from above) | def get_tag_list(resource_tag_dict):
"""
Transforms the SAM defined Tags into the form CloudFormation is expecting.
SAM Example:
```
...
Tags:
TagKey: TagValue
```
CloudFormation equivalent:
```
- Key: TagKey
  Value: TagValue
```
:param resource_tag_dict: Customer defined dictionary (SAM Example from above)
:return: List of Tag Dictionaries (CloudFormation Equivalent from above)
"""
tag_list = []
if resource_tag_dict is None:
return tag_list
for tag_key, tag_value in resource_tag_dict.items():
tag = {_KEY: tag_key, _VALUE: tag_value if tag_value else ""}
tag_list.append(tag)
return tag_list |
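For example, assuming _KEY = 'Key' and _VALUE = 'Value' (the usual CloudFormation tag schema):

print(get_tag_list({'project': 'demo', 'stage': None}))
# [{'Key': 'project', 'Value': 'demo'}, {'Key': 'stage', 'Value': ''}]
print(get_tag_list(None))
# []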
Gets the name of the partition given the region name. If region name is not provided, this method will
use Boto3 to get name of the region where this code is running.
This implementation is borrowed from AWS CLI
https://github.com/aws/aws-cli/blob/1.11.139/awscli/customizations/emr/createdefaultroles.py#L59
:param region: Optional name of the region
:return: Partition name | def get_partition_name(cls, region=None):
"""
Gets the name of the partition given the region name. If region name is not provided, this method will
use Boto3 to get name of the region where this code is running.
This implementation is borrowed from AWS CLI
https://github.com/aws/aws-cli/blob/1.11.139/awscli/customizations/emr/createdefaultroles.py#L59
:param region: Optional name of the region
:return: Partition name
"""
if region is None:
# Use Boto3 to get the region where code is running. This uses Boto's regular region resolution
# mechanism, starting from AWS_DEFAULT_REGION environment variable.
region = boto3.session.Session().region_name
region_string = region.lower()
if region_string.startswith("cn-"):
return "aws-cn"
elif region_string.startswith("us-gov"):
return "aws-us-gov"
else:
return "aws" |
Hook method that gets called before the SAM template is processed.
The template has passed the validation and is guaranteed to contain a non-empty "Resources" section.
:param dict template_dict: Dictionary of the SAM template
:return: Nothing | def on_before_transform_template(self, template_dict):
"""
Hook method that gets called before the SAM template is processed.
The template has passed the validation and is guaranteed to contain a non-empty "Resources" section.
:param dict template_dict: Dictionary of the SAM template
:return: Nothing
"""
template = SamTemplate(template_dict)
for logicalId, api in template.iterate(SamResourceType.Api.value):
if api.properties.get('DefinitionBody') or api.properties.get('DefinitionUri'):
continue
api.properties['DefinitionBody'] = SwaggerEditor.gen_skeleton()
api.properties['__MANAGE_SWAGGER'] = True |
Returns True if this Swagger has the given path and optional method
:param string path: Path name
:param string method: HTTP method
:return: True, if this path/method is present in the document | def has_path(self, path, method=None):
"""
Returns True if this Swagger has the given path and optional method
:param string path: Path name
:param string method: HTTP method
:return: True, if this path/method is present in the document
"""
method = self._normalize_method_name(method)
path_dict = self.get_path(path)
path_dict_exists = path_dict is not None
if method:
return path_dict_exists and method in path_dict
return path_dict_exists |
Returns true if the given method contains a valid method definition.
This uses the get_method_contents function to handle conditionals.
:param dict method: method dictionary
:return: true if method has one or multiple integrations | def method_has_integration(self, method):
"""
Returns true if the given method contains a valid method definition.
This uses the get_method_contents function to handle conditionals.
:param dict method: method dictionary
:return: true if method has one or multiple integrations
"""
for method_definition in self.get_method_contents(method):
if self.method_definition_has_integration(method_definition):
return True
return False |
Returns the swagger contents of the given method. This checks to see if a conditional block
has been used inside of the method, and, if so, returns the method contents that are
inside of the conditional.
:param dict method: method dictionary
:return: list of swagger component dictionaries for the method | def get_method_contents(self, method):
"""
Returns the swagger contents of the given method. This checks to see if a conditional block
has been used inside of the method, and, if so, returns the method contents that are
inside of the conditional.
:param dict method: method dictionary
:return: list of swagger component dictionaries for the method
"""
if self._CONDITIONAL_IF in method:
return method[self._CONDITIONAL_IF][1:]
return [method] |
Checks if an API Gateway integration is already present at the given path/method
:param string path: Path name
:param string method: HTTP method
:return: True, if an API Gateway integration is already present | def has_integration(self, path, method):
"""
Checks if an API Gateway integration is already present at the given path/method
:param string path: Path name
:param string method: HTTP method
:return: True, if an API Gateway integration is already present
"""
method = self._normalize_method_name(method)
path_dict = self.get_path(path)
return self.has_path(path, method) and \
isinstance(path_dict[method], dict) and \
self.method_has_integration(path_dict[method]) |
Adds the path/method combination to the Swagger, if not already present
:param string path: Path name
:param string method: HTTP method
:raises ValueError: If the value of `path` in Swagger is not a dictionary | def add_path(self, path, method=None):
"""
Adds the path/method combination to the Swagger, if not already present
:param string path: Path name
:param string method: HTTP method
:raises ValueError: If the value of `path` in Swagger is not a dictionary
"""
method = self._normalize_method_name(method)
path_dict = self.paths.setdefault(path, {})
if not isinstance(path_dict, dict):
# Either the customer provided invalid Swagger, or this class has corrupted it somehow
raise InvalidDocumentException(
[InvalidTemplateException("Value of '{}' path must be a dictionary according to Swagger spec."
.format(path))])
if self._CONDITIONAL_IF in path_dict:
path_dict = path_dict[self._CONDITIONAL_IF][1]
path_dict.setdefault(method, {}) |
Adds aws_proxy APIGW integration to the given path+method.
:param string path: Path name
:param string method: HTTP Method
:param string integration_uri: URI for the integration. | def add_lambda_integration(self, path, method, integration_uri,
method_auth_config=None, api_auth_config=None, condition=None):
"""
Adds aws_proxy APIGW integration to the given path+method.
:param string path: Path name
:param string method: HTTP Method
:param string integration_uri: URI for the integration.
"""
method = self._normalize_method_name(method)
if self.has_integration(path, method):
raise ValueError("Lambda integration already exists on Path={}, Method={}".format(path, method))
self.add_path(path, method)
# Wrap the integration_uri in a Condition if one exists on that function
# This is necessary so CFN doesn't try to resolve the integration reference.
if condition:
integration_uri = make_conditional(condition, integration_uri)
path_dict = self.get_path(path)
path_dict[method][self._X_APIGW_INTEGRATION] = {
'type': 'aws_proxy',
'httpMethod': 'POST',
'uri': integration_uri
}
method_auth_config = method_auth_config or {}
api_auth_config = api_auth_config or {}
if method_auth_config.get('Authorizer') == 'AWS_IAM' \
        or (api_auth_config.get('DefaultAuthorizer') == 'AWS_IAM' and not method_auth_config):
self.paths[path][method][self._X_APIGW_INTEGRATION]['credentials'] = self._generate_integration_credentials(
method_invoke_role=method_auth_config.get('InvokeRole'),
api_invoke_role=api_auth_config.get('InvokeRole')
)
# If 'responses' key is *not* present, add it with an empty dict as value
path_dict[method].setdefault('responses', {})
# If a condition is present, wrap all method contents up into the condition
if condition:
path_dict[method] = make_conditional(condition, path_dict[method]) |
Wrap entire API path definition in a CloudFormation if condition. | def make_path_conditional(self, path, condition):
"""
Wrap entire API path definition in a CloudFormation if condition.
"""
self.paths[path] = make_conditional(condition, self.paths[path]) |
Add CORS configuration to this path. Specifically, we will add a OPTIONS response config to the Swagger that
will return headers required for CORS. Since SAM uses aws_proxy integration, we cannot inject the headers
into the actual response returned from Lambda function. This is something customers have to implement
themselves.
If OPTIONS method is already present for the Path, we will skip adding CORS configuration
Following this guide:
https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-cors.html#enable-cors-for-resource-using-swagger-importer-tool
:param string path: Path to add the CORS configuration to.
:param string/dict allowed_origins: Comma-separated list of allowed origins.
Value can also be an intrinsic function dict.
:param string/dict allowed_headers: Comma separated list of allowed headers.
Value can also be an intrinsic function dict.
:param string/dict allowed_methods: Comma separated list of allowed methods.
Value can also be an intrinsic function dict.
:param integer/dict max_age: Maximum duration to cache the CORS Preflight request. Value is set on
Access-Control-Max-Age header. Value can also be an intrinsic function dict.
:param bool/None allow_credentials: Flags whether request is allowed to contain credentials.
:raises ValueError: When values for one of the allowed_* variables is empty | def add_cors(self, path, allowed_origins, allowed_headers=None, allowed_methods=None, max_age=None,
allow_credentials=None):
"""
Add CORS configuration to this path. Specifically, we will add a OPTIONS response config to the Swagger that
will return headers required for CORS. Since SAM uses aws_proxy integration, we cannot inject the headers
into the actual response returned from Lambda function. This is something customers have to implement
themselves.
If OPTIONS method is already present for the Path, we will skip adding CORS configuration
Following this guide:
https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-cors.html#enable-cors-for-resource-using-swagger-importer-tool
:param string path: Path to add the CORS configuration to.
:param string/dict allowed_origins: Comma-separated list of allowed origins.
Value can also be an intrinsic function dict.
:param string/dict allowed_headers: Comma separated list of allowed headers.
Value can also be an intrinsic function dict.
:param string/dict allowed_methods: Comma separated list of allowed methods.
Value can also be an intrinsic function dict.
:param integer/dict max_age: Maximum duration to cache the CORS Preflight request. Value is set on
Access-Control-Max-Age header. Value can also be an intrinsic function dict.
:param bool/None allow_credentials: Flags whether request is allowed to contain credentials.
:raises ValueError: When values for one of the allowed_* variables is empty
"""
# Skip if Options is already present
if self.has_path(path, self._OPTIONS_METHOD):
return
if not allowed_origins:
raise ValueError("Invalid input. Value for AllowedOrigins is required")
if not allowed_methods:
# AllowMethods is not given. Let's try to generate the list from the given Swagger.
allowed_methods = self._make_cors_allowed_methods_for_path(path)
# APIGW expects the value to be a "string expression". Hence wrap in another quote. Ex: "'GET,POST,DELETE'"
allowed_methods = "'{}'".format(allowed_methods)
if allow_credentials is not True:
allow_credentials = False
# Add the Options method and the CORS response
self.add_path(path, self._OPTIONS_METHOD)
self.get_path(path)[self._OPTIONS_METHOD] = self._options_method_response_for_cors(allowed_origins,
allowed_headers,
allowed_methods,
max_age,
allow_credentials) |
Returns a Swagger snippet containing configuration for OPTIONS HTTP Method to configure CORS.
This snippet is taken from public documentation:
https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-cors.html#enable-cors-for-resource-using-swagger-importer-tool
:param string/dict allowed_origins: Comma-separated list of allowed origins.
Value can also be an intrinsic function dict.
:param string/dict allowed_headers: Comma separated list of allowed headers.
Value can also be an intrinsic function dict.
:param string/dict allowed_methods: Comma separated list of allowed methods.
Value can also be an intrinsic function dict.
:param integer/dict max_age: Maximum duration to cache the CORS Preflight request. Value is set on
Access-Control-Max-Age header. Value can also be an intrinsic function dict.
:param bool allow_credentials: Flags whether request is allowed to contain credentials.
:return dict: Dictionary containing Options method configuration for CORS | def _options_method_response_for_cors(self, allowed_origins, allowed_headers=None, allowed_methods=None,
max_age=None, allow_credentials=None):
"""
Returns a Swagger snippet containing configuration for OPTIONS HTTP Method to configure CORS.
This snippet is taken from public documentation:
https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-cors.html#enable-cors-for-resource-using-swagger-importer-tool
:param string/dict allowed_origins: Comma-separated list of allowed origins.
Value can also be an intrinsic function dict.
:param string/dict allowed_headers: Comma separated list of allowed headers.
Value can also be an intrinsic function dict.
:param string/dict allowed_methods: Comma separated list of allowed methods.
Value can also be an intrinsic function dict.
:param integer/dict max_age: Maximum duration to cache the CORS Preflight request. Value is set on
Access-Control-Max-Age header. Value can also be an intrinsic function dict.
:param bool allow_credentials: Flags whether request is allowed to contain credentials.
:return dict: Dictionary containing Options method configuration for CORS
"""
ALLOW_ORIGIN = "Access-Control-Allow-Origin"
ALLOW_HEADERS = "Access-Control-Allow-Headers"
ALLOW_METHODS = "Access-Control-Allow-Methods"
MAX_AGE = "Access-Control-Max-Age"
ALLOW_CREDENTIALS = "Access-Control-Allow-Credentials"
HEADER_RESPONSE = (lambda x: "method.response.header." + x)
response_parameters = {
# AllowedOrigin is always required
HEADER_RESPONSE(ALLOW_ORIGIN): allowed_origins
}
response_headers = {
# Allow Origin is always required
ALLOW_ORIGIN: {
"type": "string"
}
}
# Optional values. Skip the header if value is empty
#
# The values must not be empty string or null. Also, value of '*' is a very recent addition (2017) and
# not supported in all the browsers. So it is important to skip the header if value is not given
# https://fetch.spec.whatwg.org/#http-new-header-syntax
#
if allowed_headers:
response_parameters[HEADER_RESPONSE(ALLOW_HEADERS)] = allowed_headers
response_headers[ALLOW_HEADERS] = {"type": "string"}
if allowed_methods:
response_parameters[HEADER_RESPONSE(ALLOW_METHODS)] = allowed_methods
response_headers[ALLOW_METHODS] = {"type": "string"}
if max_age is not None:
# MaxAge can be set to 0, which is a valid value. So explicitly check against None
response_parameters[HEADER_RESPONSE(MAX_AGE)] = max_age
response_headers[MAX_AGE] = {"type": "integer"}
if allow_credentials is True:
# Allow-Credentials only has a valid value of true, it should be omitted otherwise.
# https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Credentials
response_parameters[HEADER_RESPONSE(ALLOW_CREDENTIALS)] = "'true'"
response_headers[ALLOW_CREDENTIALS] = {"type": "string"}
return {
"summary": "CORS support",
"consumes": ["application/json"],
"produces": ["application/json"],
self._X_APIGW_INTEGRATION: {
"type": "mock",
"requestTemplates": {
"application/json": "{\n \"statusCode\" : 200\n}\n"
},
"responses": {
"default": {
"statusCode": "200",
"responseParameters": response_parameters,
"responseTemplates": {
"application/json": "{}\n"
}
}
}
},
"responses": {
"200": {
"description": "Default response for CORS method",
"headers": response_headers
}
}
} |
Creates the value for Access-Control-Allow-Methods header for given path. All HTTP methods defined for this
path will be included in the result. If the path contains "ANY" method, then *all available* HTTP methods will
be returned as result.
:param string path: Path to generate AllowMethods value for
:return string: String containing the value of AllowMethods, if the path contains any methods.
Empty string, otherwise | def _make_cors_allowed_methods_for_path(self, path):
"""
Creates the value for Access-Control-Allow-Methods header for given path. All HTTP methods defined for this
path will be included in the result. If the path contains "ANY" method, then *all available* HTTP methods will
be returned as result.
:param string path: Path to generate AllowMethods value for
:return string: String containing the value of AllowMethods, if the path contains any methods.
Empty string, otherwise
"""
# https://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html
all_http_methods = ["OPTIONS", "GET", "HEAD", "POST", "PUT", "DELETE", "PATCH"]
if not self.has_path(path):
return ""
# At this point, value of Swagger path should be a dictionary with method names being the keys
methods = list(self.get_path(path).keys())
if self._X_ANY_METHOD in methods:
# API Gateway's ANY method is not a real HTTP method but a wildcard representing all HTTP methods
allow_methods = all_http_methods
else:
allow_methods = methods
allow_methods.append("options") # Always add Options to the CORS methods response
# Clean up the result:
#
# - HTTP Methods **must** be upper case and they are case sensitive.
# (https://tools.ietf.org/html/rfc7231#section-4.1)
# - Convert to set to remove any duplicates
# - Sort to keep this list stable because it could be constructed from dictionary keys which are *not* ordered.
# Therefore we might get back a different list each time the code runs. To prevent any unnecessary
# regression, we sort the list so the returned value is stable.
allow_methods = list({m.upper() for m in allow_methods})
allow_methods.sort()
# Allow-Methods is comma separated string
return ','.join(allow_methods) |
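A standalone sketch of the clean-up step, showing why the result is stable regardless of how the path's method keys are ordered (hypothetical inline data):

methods = ['post', 'get', 'get']   # hypothetical keys from a Swagger path
methods.append('options')          # Options is always included for CORS
allow_methods = sorted({m.upper() for m in methods})
print(','.join(allow_methods))     # GET,OPTIONS,POST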
Add Authorizer definitions to the securityDefinitions part of Swagger.
:param list authorizers: List of Authorizer configurations which get translated to securityDefinitions. | def add_authorizers(self, authorizers):
"""
Add Authorizer definitions to the securityDefinitions part of Swagger.
:param list authorizers: List of Authorizer configurations which get translated to securityDefinitions.
"""
self.security_definitions = self.security_definitions or {}
for authorizer_name, authorizer in authorizers.items():
self.security_definitions[authorizer_name] = authorizer.generate_swagger() |
Sets the DefaultAuthorizer for each method on this path. The DefaultAuthorizer won't be set if an Authorizer
was defined at the Function/Path/Method level
:param string path: Path name
:param string default_authorizer: Name of the authorizer to use as the default. Must be a key in the
authorizers param.
:param list authorizers: List of Authorizer configurations defined on the related Api. | def set_path_default_authorizer(self, path, default_authorizer, authorizers):
"""
Sets the DefaultAuthorizer for each method on this path. The DefaultAuthorizer won't be set if an Authorizer
was defined at the Function/Path/Method level
:param string path: Path name
:param string default_authorizer: Name of the authorizer to use as the default. Must be a key in the
authorizers param.
:param list authorizers: List of Authorizer configurations defined on the related Api.
"""
for method_name, method in self.get_path(path).items():
self.set_method_authorizer(path, method_name, default_authorizer, authorizers,
default_authorizer=default_authorizer, is_default=True) |
Adds auth settings for this path/method. Auth settings currently consist solely of Authorizers
but this method will eventually include setting other auth settings such as API Key,
Resource Policy, etc.
:param string path: Path name
:param string method_name: Method name
:param dict auth: Auth configuration such as Authorizers, ApiKey, ResourcePolicy (only Authorizers supported
currently)
:param dict api: Reference to the related Api's properties as defined in the template. | def add_auth_to_method(self, path, method_name, auth, api):
"""
Adds auth settings for this path/method. Auth settings currently consist solely of Authorizers
but this method will eventually include setting other auth settings such as API Key,
Resource Policy, etc.
:param string path: Path name
:param string method_name: Method name
:param dict auth: Auth configuration such as Authorizers, ApiKey, ResourcePolicy (only Authorizers supported
currently)
:param dict api: Reference to the related Api's properties as defined in the template.
"""
method_authorizer = auth and auth.get('Authorizer')
if method_authorizer:
api_auth = api.get('Auth')
api_authorizers = api_auth and api_auth.get('Authorizers')
default_authorizer = api_auth and api_auth.get('DefaultAuthorizer')
self.set_method_authorizer(path, method_name, method_authorizer, api_authorizers, default_authorizer) |
Add Gateway Response definitions to Swagger.
:param dict gateway_responses: Dictionary of GatewayResponse configuration which gets translated. | def add_gateway_responses(self, gateway_responses):
"""
Add Gateway Response definitions to Swagger.
:param dict gateway_responses: Dictionary of GatewayResponse configuration which gets translated.
"""
self.gateway_responses = self.gateway_responses or {}
for response_type, response in gateway_responses.items():
self.gateway_responses[response_type] = response.generate_swagger() |
Returns a **copy** of the Swagger document as a dictionary.
:return dict: Dictionary containing the Swagger document | def swagger(self):
"""
Returns a **copy** of the Swagger document as a dictionary.
:return dict: Dictionary containing the Swagger document
"""
# Make sure any changes to the paths are reflected back in output
self._doc["paths"] = self.paths
if self.security_definitions:
self._doc["securityDefinitions"] = self.security_definitions
if self.gateway_responses:
self._doc[self._X_APIGW_GATEWAY_RESPONSES] = self.gateway_responses
return copy.deepcopy(self._doc) |
Checks if the input data is a Swagger document
:param dict data: Data to be validated
:return: True, if data is a Swagger | def is_valid(data):
"""
Checks if the input data is a Swagger document
:param dict data: Data to be validated
:return: True, if data is a Swagger
"""
return bool(data) and \
isinstance(data, dict) and \
bool(data.get("swagger")) and \
isinstance(data.get('paths'), dict) |
Returns a lower-case, normalized version of an HTTP method. It also knows how to handle API Gateway-specific methods
like "ANY"
NOTE: Always normalize before using the `method` value passed in as input
:param string method: Name of the HTTP Method
:return string: Normalized method name | def _normalize_method_name(method):
"""
Returns a lower-case, normalized version of an HTTP method. It also knows how to handle API Gateway-specific methods
like "ANY"
NOTE: Always normalize before using the `method` value passed in as input
:param string method: Name of the HTTP Method
:return string: Normalized method name
"""
if not method or not isinstance(method, string_types):
return method
method = method.lower()
if method == 'any':
return SwaggerEditor._X_ANY_METHOD
else:
return method |
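Behavior sketch for the normalizer, assuming it is defined as a static method on SwaggerEditor (as the reference to SwaggerEditor._X_ANY_METHOD suggests):

SwaggerEditor._normalize_method_name('GET')  # -> 'get'
SwaggerEditor._normalize_method_name('Any')  # -> SwaggerEditor._X_ANY_METHOD
SwaggerEditor._normalize_method_name(None)   # -> None (non-strings pass through)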
Hook method that runs before a template gets transformed. In this method, we parse and process Globals section
from the template (if present).
:param dict template_dict: SAM template as a dictionary | def on_before_transform_template(self, template_dict):
"""
Hook method that runs before a template gets transformed. In this method, we parse and process Globals section
from the template (if present).
:param dict template_dict: SAM template as a dictionary
"""
try:
global_section = Globals(template_dict)
except InvalidGlobalsSectionException as ex:
raise InvalidDocumentException([ex])
# For each resource in template, try and merge with Globals if necessary
template = SamTemplate(template_dict)
for logicalId, resource in template.iterate():
resource.properties = global_section.merge(resource.type, resource.properties)
template.set(logicalId, resource)
# Remove the Globals section from template if necessary
Globals.del_section(template_dict) |
Returns a validator function that succeeds only for inputs of the provided valid_type.
:param type valid_type: the type that should be considered valid for the validator
:returns: a function which returns True if its input is an instance of valid_type, and raises TypeError otherwise
:rtype: callable | def is_type(valid_type):
"""Returns a validator function that succeeds only for inputs of the provided valid_type.
:param type valid_type: the type that should be considered valid for the validator
:returns: a function which returns True if its input is an instance of valid_type, and raises TypeError otherwise
:rtype: callable
"""
def validate(value, should_raise=True):
if not isinstance(value, valid_type):
if should_raise:
raise TypeError("Expected value of type {expected}, actual value was of type {actual}.".format(
expected=valid_type, actual=type(value)))
return False
return True
return validate |
Returns a validator function that succeeds only if the input is a list, and each item in the list passes as input
to the provided validator validate_item.
:param callable validate_item: the validator function for items in the list
:returns: a function which returns True if its input is a list of valid items, and raises TypeError otherwise
:rtype: callable | def list_of(validate_item):
"""Returns a validator function that succeeds only if the input is a list, and each item in the list passes as input
to the provided validator validate_item.
:param callable validate_item: the validator function for items in the list
:returns: a function which returns True if its input is a list of valid items, and raises TypeError otherwise
:rtype: callable
"""
def validate(value, should_raise=True):
validate_type = is_type(list)
if not validate_type(value, should_raise=should_raise):
return False
for item in value:
try:
validate_item(item)
except TypeError as e:
if should_raise:
samtranslator.model.exceptions.prepend(e, "list contained an invalid item")
raise
return False
return True
return validate |
Returns a validator function that succeeds only if the input is a dict, and each key and value in the dict passes
as input to the provided validators validate_key and validate_item, respectively.
:param callable validate_key: the validator function for keys in the dict
:param callable validate_item: the validator function for values in the list
:returns: a function which returns True if its input is a dict of valid items, and raises TypeError otherwise
:rtype: callable | def dict_of(validate_key, validate_item):
"""Returns a validator function that succeeds only if the input is a dict, and each key and value in the dict passes
as input to the provided validators validate_key and validate_item, respectively.
:param callable validate_key: the validator function for keys in the dict
:param callable validate_item: the validator function for values in the list
:returns: a function which returns True if its input is a dict of valid items, and raises TypeError otherwise
:rtype: callable
"""
def validate(value, should_raise=True):
validate_type = is_type(dict)
if not validate_type(value, should_raise=should_raise):
return False
for key, item in value.items():
try:
validate_key(key)
except TypeError as e:
if should_raise:
samtranslator.model.exceptions.prepend(e, "dict contained an invalid key")
raise
return False
try:
validate_item(item)
except TypeError as e:
if should_raise:
samtranslator.model.exceptions.prepend(e, "dict contained an invalid value")
raise
return False
return True
return validate |
Returns a validator function that succeeds only if the input passes at least one of the provided validators.
:param callable validators: the validator functions
:returns: a function which returns True if its input passes at least one of the validators, and raises TypeError
otherwise
:rtype: callable | def one_of(*validators):
"""Returns a validator function that succeeds only if the input passes at least one of the provided validators.
:param callable validators: the validator functions
:returns: a function which returns True if its input passes at least one of the validators, and raises TypeError
otherwise
:rtype: callable
"""
def validate(value, should_raise=True):
if any(validate(value, should_raise=False) for validate in validators):
return True
if should_raise:
raise TypeError("value did not match any allowable type")
return False
return validate |
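These validators compose directly; a short usage sketch exercising the functions defined above:

# Accept either a single string or a list of strings.
string_or_string_list = one_of(is_type(str), list_of(is_type(str)))

string_or_string_list('hello')                 # -> True
string_or_string_list(['a', 'b'])              # -> True
string_or_string_list(42, should_raise=False)  # -> False
string_or_string_list([1, 2])                  # raises TypeError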
With the given values for each parameter, this method will return a policy statement that can be used
directly with IAM.
:param dict parameter_values: Dict containing values for each parameter defined in the template
:return dict: Dictionary containing policy statement
:raises InvalidParameterValues: If parameter values is not a valid dictionary or does not contain values
for all parameters
:raises InsufficientParameterValues: If the parameter values don't have values for all required parameters | def to_statement(self, parameter_values):
"""
With the given values for each parameter, this method will return a policy statement that can be used
directly with IAM.
:param dict parameter_values: Dict containing values for each parameter defined in the template
:return dict: Dictionary containing policy statement
:raises InvalidParameterValues: If parameter values is not a valid dictionary or does not contain values
for all parameters
:raises InsufficientParameterValues: If the parameter values don't have values for all required parameters
"""
missing = self.missing_parameter_values(parameter_values)
if len(missing) > 0:
# str() of elements of list to prevent any `u` prefix from being displayed in user-facing error message
raise InsufficientParameterValues("Following required parameters of template '{}' don't have values: {}"
.format(self.name, [str(m) for m in missing]))
# Select only necessary parameter_values. this is to prevent malicious or accidental
# injection of values for parameters not intended in the template. This is important because "Ref" resolution
# will substitute any references for which a value is provided.
necessary_parameter_values = {name: value for name, value in parameter_values.items()
if name in self.parameters}
# Only "Ref" is supported
supported_intrinsics = {
RefAction.intrinsic_name: RefAction()
}
resolver = IntrinsicsResolver(necessary_parameter_values, supported_intrinsics)
definition_copy = copy.deepcopy(self.definition)
return resolver.resolve_parameter_refs(definition_copy) |
Checks if the given input contains values for all parameters used by this template
:param dict parameter_values: Dictionary of values for each parameter used in the template
:return list: List of names of parameters that are missing.
:raises InvalidParameterValues: When parameter values is not a valid dictionary | def missing_parameter_values(self, parameter_values):
"""
Checks if the given input contains values for all parameters used by this template
:param dict parameter_values: Dictionary of values for each parameter used in the template
:return list: List of names of parameters that are missing.
:raises InvalidParameterValues: When parameter values is not a valid dictionary
"""
if not self._is_valid_parameter_values(parameter_values):
raise InvalidParameterValues("Parameter values are required to process a policy template")
return list(set(self.parameters.keys()) - set(parameter_values.keys())) |
Parses the input and returns an instance of this class.
:param string template_name: Name of the template
:param dict template_values_dict: Dictionary containing the value of the template. This dict must have passed
the JSON Schema validation.
:return Template: Instance of this class containing the values provided in this dictionary | def from_dict(template_name, template_values_dict):
"""
Parses the input and returns an instance of this class.
:param string template_name: Name of the template
:param dict template_values_dict: Dictionary containing the value of the template. This dict must have passed
the JSON Schema validation.
:return Template: Instance of this class containing the values provided in this dictionary
"""
parameters = template_values_dict.get("Parameters", {})
definition = template_values_dict.get("Definition", {})
return Template(template_name, parameters, definition) |
Register a plugin. New plugins are added to the end of the plugins list.
:param samtranslator.plugins.BasePlugin plugin: Instance/subclass of BasePlugin class that implements hooks
:raises ValueError: If plugin is not an instance of samtranslator.plugins.BasePlugin or if it is already
registered
:return: None | def register(self, plugin):
"""
Register a plugin. New plugins are added to the end of the plugins list.
:param samtranslator.plugins.BasePlugin plugin: Instance/subclass of BasePlugin class that implements hooks
:raises ValueError: If plugin is not an instance of samtranslator.plugins.BasePlugin or if it is already
registered
:return: None
"""
if not plugin or not isinstance(plugin, BasePlugin):
raise ValueError("Plugin must be implemented as a subclass of BasePlugin class")
if self.is_registered(plugin.name):
raise ValueError("Plugin with name {} is already registered".format(plugin.name))
self._plugins.append(plugin) |
Retrieves the plugin with given name
:param plugin_name: Name of the plugin to retrieve
:return samtranslator.plugins.BasePlugin: Returns the plugin object if found. None, otherwise | def _get(self, plugin_name):
"""
Retrieves the plugin with given name
:param plugin_name: Name of the plugin to retrieve
:return samtranslator.plugins.BasePlugin: Returns the plugin object if found. None, otherwise
"""
for p in self._plugins:
if p.name == plugin_name:
return p
return None |
Act on the specific life cycle event. The action here is to invoke the hook function on all registered plugins.
*args and **kwargs will be passed directly to the plugin's hook functions
:param samtranslator.plugins.LifeCycleEvents event: Event to act upon
:return: Nothing
:raises ValueError: If event is not a valid life cycle event
:raises NameError: If a plugin does not have the hook method defined
:raises Exception: Any exception that a plugin raises | def act(self, event, *args, **kwargs):
"""
Act on the specific life cycle event. The action here is to invoke the hook function on all registered plugins.
*args and **kwargs will be passed directly to the plugin's hook functions
:param samtranslator.plugins.LifeCycleEvents event: Event to act upon
:return: Nothing
:raises ValueError: If event is not a valid life cycle event
:raises NameError: If a plugin does not have the hook method defined
:raises Exception: Any exception that a plugin raises
"""
if not isinstance(event, LifeCycleEvents):
raise ValueError("'event' must be an instance of LifeCycleEvents class")
method_name = "on_" + event.name
for plugin in self._plugins:
if not hasattr(plugin, method_name):
raise NameError("'{}' method is not found in the plugin with name '{}'"
.format(method_name, plugin.name))
try:
getattr(plugin, method_name)(*args, **kwargs)
except InvalidResourceException as ex:
# Don't need to log these because they don't result in crashes
raise ex
except Exception as ex:
logging.exception("Plugin '%s' raised an exception: %s", plugin.name, ex)
raise ex |
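A sketch of how a plugin plugs into this dispatch, assuming a BasePlugin constructor that takes the plugin name and a LifeCycleEvents member named before_transform_template (the manager class name below is also an assumption for illustration):

class AnnotatingPlugin(BasePlugin):
    def __init__(self):
        super(AnnotatingPlugin, self).__init__('AnnotatingPlugin')

    def on_before_transform_template(self, template_dict):
        # Hook invoked by act() via the "on_" + event.name convention.
        template_dict.setdefault('Metadata', {})['seen_by'] = self.name

manager = PluginsManager()  # hypothetical manager class name
manager.register(AnnotatingPlugin())
manager.act(LifeCycleEvents.before_transform_template, {'Resources': {}})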
:param logical_id: the logical_id of the resource that owns this deployment preference
:param deployment_preference_dict: the dict object taken from the SAM template
:return: | def from_dict(cls, logical_id, deployment_preference_dict):
"""
:param logical_id: the logical_id of the resource that owns this deployment preference
:param deployment_preference_dict: the dict object taken from the SAM template
:return:
"""
enabled = deployment_preference_dict.get('Enabled', True)
if not enabled:
return DeploymentPreference(None, None, None, None, False, None)
if 'Type' not in deployment_preference_dict:
raise InvalidResourceException(logical_id, "'DeploymentPreference' is missing required Property 'Type'")
deployment_type = deployment_preference_dict['Type']
hooks = deployment_preference_dict.get('Hooks', dict())
if not isinstance(hooks, dict):
raise InvalidResourceException(logical_id,
"'Hooks' property of 'DeploymentPreference' must be a dictionary")
pre_traffic_hook = hooks.get('PreTraffic', None)
post_traffic_hook = hooks.get('PostTraffic', None)
alarms = deployment_preference_dict.get('Alarms', None)
role = deployment_preference_dict.get('Role', None)
return DeploymentPreference(deployment_type, pre_traffic_hook, post_traffic_hook, alarms, enabled, role) |
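Input/output sketch for from_dict, using a deployment type name in the CodeDeploy Lambda style (values are illustrative):

preference_dict = {
    'Type': 'Linear10PercentEvery1Minute',
    'Hooks': {'PreTraffic': 'PreTrafficHookFunction'},
    'Alarms': ['AliasErrorAlarm'],
}
pref = DeploymentPreference.from_dict('MyFunction', preference_dict)
# pref carries: deployment_type='Linear10PercentEvery1Minute',
# pre_traffic_hook='PreTrafficHookFunction', alarms=['AliasErrorAlarm'],
# enabled=True, role=None

DeploymentPreference.from_dict('MyFunction', {'Enabled': False})
# -> fully disabled preference; no 'Type' required in this case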
decrypt leverages KMS decrypt after base64-decoding the encrypted blob
More info on KMS decrypt API:
https://docs.aws.amazon.com/kms/latest/APIReference/API_decrypt.html | def decrypt(message):
'''decrypt leverages KMS decrypt after base64-decoding the encrypted blob
More info on KMS decrypt API:
https://docs.aws.amazon.com/kms/latest/APIReference/API_decrypt.html
'''
try:
ret = kms.decrypt(
CiphertextBlob=base64.b64decode(message))  # b64decode accepts str and replaces the deprecated decodestring
decrypted_data = ret.get('Plaintext')
except Exception as e:
# returns http 500 back to user and log error details in Cloudwatch Logs
raise Exception("Unable to decrypt data: ", e)
return decrypted_data |
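Round-trip usage sketch combining this with the encrypt helper shown earlier (assumes kms = boto3.client('kms') at module scope and a key you actually own; the alias below is hypothetical):

import boto3

kms = boto3.client('kms')
key_id = 'alias/my-demo-key'  # hypothetical CMK alias

ciphertext_b64 = encrypt(key_id, 'hello world')  # str, safe to store/transport
plaintext = decrypt(ciphertext_b64)              # bytes returned by KMS
assert plaintext == b'hello world'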
Demonstrates S3 trigger that uses
Rekognition APIs to detect faces, labels and index faces in S3 Object. | def lambda_handler(event, context):
'''Demonstrates S3 trigger that uses
Rekognition APIs to detect faces, labels and index faces in S3 Object.
'''
#print("Received event: " + json.dumps(event, indent=2))
# Get the object from the event
bucket = event['Records'][0]['s3']['bucket']['name']
# Note: urllib.unquote_plus is the Python 2 API; on Python 3 use urllib.parse.unquote_plus on the raw key string.
key = urllib.unquote_plus(event['Records'][0]['s3']['object']['key'].encode('utf8'))
try:
# Calls rekognition DetectFaces API to detect faces in S3 object
response = detect_faces(bucket, key)
# Calls rekognition DetectLabels API to detect labels in S3 object
#response = detect_labels(bucket, key)
# Calls rekognition IndexFaces API to detect faces in S3 object and index faces into specified collection
#response = index_faces(bucket, key)
# Print response to console.
print(response)
return response
except Exception as e:
print(e)
print("Error processing object {} from bucket {}. ".format(key, bucket) +
"Make sure your object and bucket exist and your bucket is in the same region as this function.")
raise e |
Prepends the first argument (i.e., the exception message) of a BaseException with the provided message.
Useful for reraising exceptions with additional information.
:param BaseException exception: the exception to prepend
:param str message: the message to prepend
:param str end: the separator to add to the end of the provided message
:returns: the exception | def prepend(exception, message, end=': '):
"""Prepends the first argument (i.e., the exception message) of the a BaseException with the provided message.
Useful for reraising exceptions with additional information.
:param BaseException exception: the exception to prepend
:param str message: the message to prepend
:param str end: the separator to add to the end of the provided message
:returns: the exception
"""
exception.args = exception.args or ('',)
exception.args = (message + end + exception.args[0], ) + exception.args[1:]
return exception |
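Usage sketch, mirroring how the validators above re-raise with added context:

try:
    raise TypeError('expected string')
except TypeError as e:
    prepend(e, 'dict contained an invalid key')
    # e.args[0] is now 'dict contained an invalid key: expected string'
    raise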
Validate the incoming token and produce the principal user identifier
associated with the token. This can be accomplished in a number of ways:
1. Call out to the OAuth provider
2. Decode a JWT token inline
3. Lookup in a self-managed DB | def lambda_handler(event, context):
# incoming token value
token = event['authorizationToken']
print("Method ARN: " + event['methodArn'])
'''
Validate the incoming token and produce the principal user identifier
associated with the token. This can be accomplished in a number of ways:
1. Call out to the OAuth provider
2. Decode a JWT token inline
3. Lookup in a self-managed DB
'''
principalId = 'user|a1b2c3d4'
'''
You can send a 401 Unauthorized response to the client by failing like so:
raise Exception('Unauthorized')
If the token is valid, a policy must be generated which will allow or deny
access to the client. If access is denied, the client will receive a 403
Access Denied response. If access is allowed, API Gateway will proceed with
the backend integration configured on the method that was called.
This function must generate a policy that is associated with the recognized
principal user identifier. Depending on your use case, you might store
policies in a DB, or generate them on the fly.
Keep in mind, the policy is cached for 5 minutes by default (TTL is
configurable in the authorizer) and will apply to subsequent calls to any
method/resource in the RestApi made with the same token.
The example policy below denies access to all resources in the RestApi.
'''
tmp = event['methodArn'].split(':')
apiGatewayArnTmp = tmp[5].split('/')
awsAccountId = tmp[4]
policy = AuthPolicy(principalId, awsAccountId)
policy.restApiId = apiGatewayArnTmp[0]
policy.region = tmp[3]
policy.stage = apiGatewayArnTmp[1]
policy.denyAllMethods()
#policy.allowMethod(HttpVerb.GET, '/pets/*')
# Finally, build the policy
authResponse = policy.build()
# new! -- add additional key-value pairs associated with the authenticated principal
# these are made available by APIGW like so: $context.authorizer.<key>
# additional context is cached
context = {
'key': 'value', # $context.authorizer.key -> value
'number': 1,
'bool': True
}
# context['arr'] = ['foo'] <- this is invalid, APIGW will not accept it
# context['obj'] = {'foo':'bar'} <- also invalid
authResponse['context'] = context
return authResponse |