# python-zamg
[![GitHub Release][releases-shield]][releases]
[![GitHub Activity][commits-shield]][commits]
[![License][license-shield]](LICENSE)
[![pre-commit][pre-commit-shield]][pre-commit]
[![Black][black-shield]][black]
[![Code Coverage][codecov-shield]][codecov]
[![Project Maintenance][maintenance-shield]][user_profile]
Python library to read 10-minute weather data from ZAMG.
## About
This package allows you to read weather data from the weather stations of the ZAMG weather service.
ZAMG is the Zentralanstalt für Meteorologie und Geodynamik, Austria's national weather service.
## Installation
```bash
pip install zamg
```
## Usage
A simple usage example that fetches specific data from the closest weather station:
```python
"""Asynchronous Python client for ZAMG weather data."""
import asyncio
import src.zamg.zamg
from src.zamg.exceptions import ZamgError
async def main():
"""Sample of getting data"""
try:
async with src.zamg.zamg.ZamgData() as zamg:
# option to disable verify of ssl check
zamg.verify_ssl = False
# trying to read zamg station id of the closest station
data = await zamg.closest_station(46.99, 15.499)
# set closest station as default one to read
zamg.set_default_station(data)
print("closest_station = " + str(zamg.get_station_name) + " / " + str(data))
# print list with all possible parameters
print(f"Possible station parameters: {zamg.get_all_parameters()}")
# set parameters directly
zamg.station_parameters = "TL,SO"
# or set parameters as list
zamg.set_parameters(("TL", "SO"))
# if none of the above parameters are set, all possible parameters are read
# do an update
await zamg.update()
print(f"---------- Weather for station {zamg.get_station_name} ({data})")
for param in zamg.get_parameters():
print(
str(param)
+ " -> "
+ str(zamg.get_data(parameter=param, data_type="name"))
+ " -> "
+ str(zamg.get_data(parameter=param))
+ " "
+ str(zamg.get_data(parameter=param, data_type="unit"))
)
print("last update: %s", zamg.last_update)
except (ZamgError) as exc:
print(exc)
if __name__ == "__main__":
asyncio.run(main())
```
## Contributions are welcome!
If you want to contribute, please read the [Contribution guidelines](https://github.com/killer0071234/python-zamg/blob/master/CONTRIBUTING.md) first.
## Credits
The code template for reading the dataset API was mainly taken from [@LuisTheOne](https://github.com/LuisThe0ne)'s [zamg-api-cli-client][zamg_api_cli_client].
[Dataset API documentation][dataset_api_doc]
---
[black]: https://github.com/psf/black
[black-shield]: https://img.shields.io/badge/code%20style-black-000000.svg?style=for-the-badge
[commits-shield]: https://img.shields.io/github/commit-activity/y/killer0071234/python-zamg.svg?style=for-the-badge
[commits]: https://github.com/killer0071234/python-zamg/commits/main
[codecov-shield]: https://img.shields.io/codecov/c/gh/killer0071234/python-zamg?style=for-the-badge&token=O5YDLF0X9G
[codecov]: https://codecov.io/gh/killer0071234/python-zamg
[license-shield]: https://img.shields.io/github/license/killer0071234/python-zamg.svg?style=for-the-badge
[maintenance-shield]: https://img.shields.io/badge/[email protected]?style=for-the-badge
[pre-commit]: https://github.com/pre-commit/pre-commit
[pre-commit-shield]: https://img.shields.io/badge/pre--commit-enabled-brightgreen?style=for-the-badge
[releases-shield]: https://img.shields.io/github/release/killer0071234/python-zamg.svg?style=for-the-badge
[releases]: https://github.com/killer0071234/python-zamg/releases
[user_profile]: https://github.com/killer0071234
[zamg_api_cli_client]: https://github.com/LuisThe0ne/zamg-api-cli-client
[dataset_api_doc]: https://dataset.api.hub.zamg.ac.at/v1/docs/index.html
| zamg | /zamg-0.2.4.tar.gz/zamg-0.2.4/README.md | README.md |
# Zamia Prolog
Scalable and embeddable compiler/interpreter for Zamia-Prolog, a Prolog dialect. It stores its knowledge base in a
database via SQLAlchemy, hence the scalability: the knowledge base is not limited by the amount of RAM available.
Zamia-Prolog is written in pure Python, so it can easily be embedded into other Python applications. Compiler and runtime
provide interfaces to register custom builtins, which can be evaluated either at compile time (called directives in
Zamia-Prolog) or at runtime.
The Prolog core is based on http://openbookproject.net/py4fun/prolog/prolog3.html by Chris Meyers.
While performance is definitely important, Chris' interpreted approach is currently more than good enough for my needs.
My main focus here is embedding and language features: at the time of this writing I am experimenting with
incorporating some imperative concepts into Zamia-Prolog, such as re-assignable variables and if/then/else constructs.
So please note that this is a Prolog dialect that will probably never be compliant with any Prolog standard. Instead, it will
most likely drift further away from standard Prolog and may evolve into my own logic-based language.
Features
========
* pure Python implementation
* easy to embed in Python applications
* easy to extend with custom builtins for domain specific tasks
* re-assignable variables with full backtracking support
* assertz/retract with full backtracking support (using database overlays)
* imperative language constructs such as if/then/else
* pseudo-variables/-predicates that make DB assertz/retract easier to code
Requirements
============
*Note*: probably incomplete.
* Python 2.7
* py-nltools
* SQLAlchemy
Usage
=====
Compile the `hanoi1.pl` example:
```python
from zamiaprolog.logicdb import LogicDB
from zamiaprolog.parser import PrologParser
db_url = 'sqlite:///foo.db'
db = LogicDB(db_url)
parser = PrologParser()
parser.compile_file('samples/hanoi1.pl', 'unittests', db)
```
now run a sample goal:
```python
from zamiaprolog.runtime import PrologRuntime
clause = parser.parse_line_clause_body('move(3,left,right,center)')
rt = PrologRuntime(db)
solutions = rt.search(clause)
```
output:
```
Move top disk from left to right
Move top disk from left to center
Move top disk from right to center
Move top disk from left to right
Move top disk from center to left
Move top disk from center to right
Move top disk from left to right
```
Accessing Prolog Variables from Python
--------------------------------------
Set variable `X` from Python:
```python
from zamiaprolog.logic import NumberLiteral

clause = parser.parse_line_clause_body('Y is X*X')
solutions = rt.search(clause, {'X': NumberLiteral(3)})
```
check the number of solutions:
```python
print len(solutions)
```
output:
```
1
```
access the Prolog result `Y` from Python (each solution is a dict mapping variable names to literal objects, hence the `.f` accessor for the numeric value):
```python
print solutions[0]['Y'].f
```
output:
```
9
```
Custom Python Builtin Predicates
--------------------------------
To demonstrate how to register custom predicates with the interpreter, we will
introduce a Python builtin that records the moves in our Hanoi example:
```python
recorded_moves = []

def record_move(g, rt):

    pred = g.terms[g.inx]
    args = pred.args

    arg_from = rt.prolog_eval(args[0], g.env, g.location)
    arg_to   = rt.prolog_eval(args[1], g.env, g.location)

    recorded_moves.append((arg_from, arg_to))

    return True

rt.register_builtin('record_move', record_move)
```
now, compile and run the `hanoi2.pl` example:
```python
parser.compile_file('samples/hanoi2.pl', 'unittests', db)
clause = parser.parse_line_clause_body('move(3,left,right,center)')
solutions = rt.search(clause)
```
output:
```
Move top disk from left to right
Move top disk from left to center
Move top disk from right to center
Move top disk from left to right
Move top disk from center to left
Move top disk from center to right
Move top disk from left to right
```
now, check the recorded moves:
```python
print len(recorded_moves)
print repr(recorded_moves)
```
output:
```
7
[(Predicate(left), Predicate(right)), (Predicate(left), Predicate(center)), (Predicate(right), Predicate(center)), (Predicate(left), Predicate(right)), (Predicate(center), Predicate(left)), (Predicate(center), Predicate(right)), (Predicate(left), Predicate(right))]
```
Generate Multiple Bindings from Custom Predicates
-------------------------------------------------
Custom predicates can not only return True/False and manipulate the environment directly to generate a single binding, as
in
```python
def custom_pred1(g, rt):

    rt._trace ('CALLED BUILTIN custom_pred1', g)

    pred = g.terms[g.inx]
    args = pred.args
    if len(args) != 1:
        raise PrologRuntimeError('custom_pred1: 1 arg expected.', g.location)

    arg_var = rt.prolog_get_variable(args[0], g.env, g.location)
    g.env[arg_var] = NumberLiteral(42)

    return True
```
they can also return a list of bindings, each of which then produces one Prolog solution. In this example,
we generate 4 bindings of two variables each:
```python
def multi_binder(g, rt):

    rt._trace ('CALLED BUILTIN multi_binder', g)

    pred = g.terms[g.inx]
    args = pred.args
    if len(args) != 2:
        raise PrologRuntimeError('multi_binder: 2 args expected.', g.location)

    var_x = rt.prolog_get_variable(args[0], g.env, g.location)
    var_y = rt.prolog_get_variable(args[1], g.env, g.location)

    res = []
    for x in range(2):
        lx = NumberLiteral(x)
        for y in range(2):
            ly = NumberLiteral(y)
            res.append({var_x: lx, var_y: ly})

    return res
```
so running
```python
clause = parser.parse_line_clause_body('multi_binder(X,Y)')
solutions = rt.search(clause)
```
will produce 4 solutions:
```
[{u'Y': 0, u'X': 0}, {u'Y': 1, u'X': 0}, {u'Y': 0, u'X': 1}, {u'Y': 1, u'X': 1}]
```
Custom Compiler Directives
--------------------------
Besides custom builtins we can also have custom compiler directives in Zamia-Prolog. Directives are evaluated at compile
time and are not stored in the database.
Here is an example. First, register your custom directive:
```python
def _custom_directive (module_name, clause, user_data):
print "_custom_directive has been called. clause: %s user_data:%s" % (clause, user_data)
parser.register_directive('custom_directive', _custom_directive, None)
```
now, compile a piece of prolog code that uses the directive:
```python
parser.parse_line_clauses('custom_directive(abc, 42, \'foo\').')
```
output:
```
_custom_directive has been called. clause: custom_directive(abc, 42.0, "foo"). user_data:None
[]
```
Re-Assignable Variables
-----------------------
Variables can be re-assigned using the built-in special `set` (`:=`):
```prolog
Z := 23, Z := 42
```
this comes with full backtracking support.
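For example (a minimal sketch, assuming the `or`/`and`/`fail` semantics shown under if/then/else below), a re-assignment made inside a failed branch is undone when the engine backtracks:
```prolog
X := 1, or ( and ( X := 2, fail ), true ), Y is X
```
here the first `or` branch sets `X` to 2 and then fails, so the second branch runs with `X` restored to 1, leaving `Y == 1`.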
Pseudo-Variables/-Predicates
----------------------------
This is an extension to standard Prolog syntax found in Zamia-Prolog that makes "variable" setting and access
easier:
```
C:user -> user (C, X)
C:user:name -> user (C, X), name (X, Y)
self:name -> name (self, X)
self:label|de -> label (self, de, X)
```
this works for evaluation as well as for setting/asserting (i.e. on both the left-hand and right-hand side of expressions).
Example:
```prolog
assertz(foo(bar, 23)), bar:foo := 42, Z := bar:foo
```
will result in `Z == 42` and `foo(bar, 42)` asserted in the database.
if/then/else/endif
------------------
```prolog
if foo(bar) then
do1, do2
else
do2, do3
endif
```
is equivalent to
```prolog
or ( and (foo(bar), do1, do2), and (not(foo(bar)), do2, do3) )
```
License
=======
My own scripts, as well as the data I create, are LGPLv3 licensed unless otherwise noted in the scripts' copyright headers.
Some scripts and files are based on works of others, in those cases it is my
intention to keep the original license intact. Please make sure to check the
copyright headers inside for more information.
Author
======
* Guenter Bartsch <[email protected]>
* Chris Meyers.
* Heiko Schäfer <[email protected]>
| zamia-prolog | /zamia-prolog-0.1.0.tar.gz/zamia-prolog-0.1.0/README.md | README.md |
#
# Copyright 2015, 2016, 2017 Guenter Bartsch
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
#
# classes that represent prolog clauses
#
import logging
import json
from six import python_2_unicode_compatible, text_type, string_types
from zamiaprolog.errors import PrologError
class JSONLogic:
""" just a base class that indicates to_dict() and __init__(json_dict) are supported
for JSON (de)-serialization """
def to_dict(self):
raise PrologError ("to_dict is not implemented, but should be!")
@python_2_unicode_compatible
class SourceLocation(JSONLogic):
def __init__ (self, fn=None, line=None, col=None, json_dict=None):
if json_dict:
self.fn = json_dict['fn']
self.line = json_dict['line']
self.col = json_dict['col']
else:
self.fn = fn
self.line = line
self.col = col
def __str__(self):
return u'%s: line=%s, col=%s' % (self.fn, text_type(self.line), text_type(self.col))
def __repr__(self):
return 'SourceLocation(fn=%s, line=%d, col=%d)' % (self.fn, self.line, self.col)
def to_dict(self):
return {'pt': 'SourceLocation', 'fn': self.fn, 'line': self.line, 'col': self.col}
@python_2_unicode_compatible
class Literal(JSONLogic):
def __str__(self):
return u"<LITERAL>"
@python_2_unicode_compatible
class StringLiteral(Literal):
def __init__(self, s=None, json_dict=None):
if json_dict:
self.s = json_dict['s']
else:
self.s = s
def __eq__(self, b):
return isinstance(b, StringLiteral) and self.s == b.s
def __lt__(self, b):
assert isinstance(b, StringLiteral)
return self.s < b.s
def __le__(self, b):
assert isinstance(b, StringLiteral)
return self.s <= b.s
def __ne__(self, b):
return not isinstance(b, StringLiteral) or self.s != b.s
def __ge__(self, b):
assert isinstance(b, StringLiteral)
return self.s >= b.s
def __gt__(self, b):
assert isinstance(b, StringLiteral)
return self.s > b.s
def get_literal(self):
return self.s
def __str__(self):
return u'"' + text_type(self.s.replace('"', '\\"')) + u'"'
def __repr__(self):
return u'StringLiteral(' + repr(self.s) + ')'
def to_dict(self):
return {'pt': 'StringLiteral', 's': self.s}
def __hash__(self):
return hash(self.s)
@python_2_unicode_compatible
class NumberLiteral(Literal):
def __init__(self, f=None, json_dict=None):
if json_dict:
self.f = json_dict['f']
else:
self.f = f
def __str__(self):
return text_type(self.f)
def __repr__(self):
return repr(self.f)
def __eq__(self, b):
return isinstance(b, NumberLiteral) and self.f == b.f
def __lt__(self, b):
assert isinstance(b, NumberLiteral)
return self.f < b.f
def __le__(self, b):
assert isinstance(b, NumberLiteral)
return self.f <= b.f
def __ne__(self, b):
return not isinstance(b, NumberLiteral) or self.f != b.f
def __ge__(self, b):
assert isinstance(b, NumberLiteral)
return self.f >= b.f
def __gt__(self, b):
assert isinstance(b, NumberLiteral)
return self.f > b.f
def __add__(self, b):
assert isinstance(b, NumberLiteral)
return NumberLiteral(b.f + self.f)
def __div__(self, b):
assert isinstance(b, NumberLiteral)
return NumberLiteral(self.f / b.f)
def get_literal(self):
return self.f
def to_dict(self):
return {'pt': 'NumberLiteral', 'f': self.f}
def __hash__(self):
return hash(self.f)
@python_2_unicode_compatible
class ListLiteral(Literal):
def __init__(self, l=None, json_dict=None):
if json_dict:
self.l = json_dict['l']
else:
self.l = l
def __eq__(self, other):
if not isinstance(other, ListLiteral):
return False
return other.l == self.l
def __ne__(self, other):
if not isinstance(other, ListLiteral):
return True
return other.l != self.l
def get_literal(self):
return self.l
def __str__(self):
return u'[' + u','.join(map(lambda e: text_type(e), self.l)) + u']'
def __repr__(self):
return repr(self.l)
def to_dict(self):
return {'pt': 'ListLiteral', 'l': self.l}
@python_2_unicode_compatible
class DictLiteral(Literal):
def __init__(self, d=None, json_dict=None):
if json_dict:
self.d = json_dict['d']
else:
self.d = d
def __eq__(self, other):
if not isinstance(other, DictLiteral):
return False
return other.d == self.d
def __ne__(self, other):
if not isinstance(other, DictLiteral):
return True
return other.d != self.d
def get_literal(self):
return self.d
def __str__(self):
return text_type(self.d)
def __repr__(self):
return repr(self.d)
def to_dict(self):
return {'pt': 'DictLiteral', 'd': self.d}
@python_2_unicode_compatible
class SetLiteral(Literal):
def __init__(self, s=None, json_dict=None):
if json_dict:
self.s = json_dict['s']
else:
self.s = s
def __eq__(self, other):
if not isinstance(other, SetLiteral):
return False
return other.s == self.s
def __ne__(self, other):
if not isinstance(other, SetLiteral):
return True
return other.s != self.s
def get_literal(self):
return self.s
def __str__(self):
return text_type(self.s)
def __repr__(self):
return repr(self.s)
def to_dict(self):
return {'pt': 'SetLiteral', 's': self.s}
@python_2_unicode_compatible
class Variable(JSONLogic):
def __init__(self, name=None, json_dict=None):
if json_dict:
self.name = json_dict['name']
else:
self.name = name
def __repr__(self):
        return u'Variable(' + text_type(self) + u')'
def __str__(self):
return self.name
def __eq__(self, other):
return isinstance(other, Variable) and other.name == self.name
def __hash__(self):
return hash(self.name)
def to_dict(self):
return {'pt': 'Variable', 'name': self.name}
@python_2_unicode_compatible
class Predicate(JSONLogic):
def __init__(self, name=None, args=None, json_dict=None):
if json_dict:
self.name = json_dict['name']
self.args = json_dict['args']
else:
self.name = name
self.args = args if args else []
def __str__(self):
if not self.args:
return self.name
# if self.name == 'and':
# return u', '.join(map(unicode, self.args))
# if self.name == 'or':
# return u'; '.join(map(unicode, self.args))
# elif self.name == 'and':
# return u', '.join(map(unicode, self.args))
return u'%s(%s)' % (self.name, u', '.join(map(text_type, self.args)))
#return '(' + self.name + ' ' + ' '.join( [str(arg) for arg in self.args]) + ')'
def __repr__(self):
return u'Predicate(' + text_type(self) + ')'
def __eq__(self, other):
return isinstance(other, Predicate) \
and self.name == other.name \
and self.args == other.args
def __ne__(self, other):
if not isinstance(other, Predicate):
return True
if self.name != other.name:
return True
if self.args != other.args:
return True
return False
def to_dict(self):
return {'pt' : 'Predicate',
'name': self.name,
'args': list(map(lambda a: a.to_dict(), self.args))
}
def __hash__(self):
# FIXME hash args?
return hash(self.name + u'/' + text_type(len(self.args)))
# helper function
def build_predicate(name, args):
mapped_args = []
for arg in args:
if not isinstance(arg, string_types):
if isinstance (arg, int):
mapped_args.append(NumberLiteral(arg))
elif isinstance (arg, float):
mapped_args.append(NumberLiteral(arg))
else:
mapped_args.append(arg)
continue
        if arg[0].isupper() or arg[0] == '_':
mapped_args.append(Variable(arg))
else:
mapped_args.append(Predicate(arg))
return Predicate (name, mapped_args)
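# usage sketch for build_predicate (illustrative values):
#   build_predicate('foo', ['X', 'bar', 42])
#     -> Predicate('foo', [Variable('X'), Predicate('bar'), NumberLiteral(42)])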
@python_2_unicode_compatible
class Clause(JSONLogic):
def __init__(self, head=None, body=None, location=None, json_dict=None):
if json_dict:
self.head = json_dict['head']
self.body = json_dict['body']
self.location = json_dict['location']
else:
self.head = head
self.body = body
self.location = location
def __str__(self):
if self.body:
return u'%s :- %s.' % (text_type(self.head), text_type(self.body))
return text_type(self.head) + '.'
def __repr__(self):
return u'Clause(' + text_type(self) + u')'
def __eq__(self, other):
return (isinstance(other, Clause)
and self.head == other.head
and list(self.body) == list(other.body))
def to_dict(self):
return {'pt' : 'Clause',
'head' : self.head.to_dict(),
'body' : self.body.to_dict() if self.body else None,
'location': self.location.to_dict(),
}
@python_2_unicode_compatible
class MacroCall(JSONLogic):
def __init__(self, name=None, pred=None, location=None, json_dict=None):
if json_dict:
self.name = json_dict['name']
self.pred = json_dict['pred']
self.location = json_dict['location']
else:
self.name = name
self.pred = pred
self.location = location
def __str__(self):
return u'@' + text_type(self.name) + u':' + text_type(self.pred)
def __repr__(self):
return 'MacroCall(%s, %s)' % (self.name, self.pred)
def to_dict(self):
return {'pt' : 'MacroCall',
'name' : self.name,
'pred' : self.pred,
'location': self.location.to_dict(),
}
#
# JSON interface
#
class PrologJSONEncoder(json.JSONEncoder):
def default(self, o):
if isinstance (o, JSONLogic):
return o.to_dict()
try:
return json.JSONEncoder.default(self, o)
        except TypeError:
            logging.error('PrologJSONEncoder: cannot encode %s (%s)' % (repr(o), o.__class__))
            raise
_prolog_json_encoder = PrologJSONEncoder()
def prolog_to_json(pl):
return _prolog_json_encoder.encode(pl)
def _prolog_from_json(o):
    if o is None:
return None
    if 'pt' not in o:
# import pdb; pdb.set_trace()
# raise PrologError('cannot convert from json: %s [pt missing] .' % repr(o))
return o
if o['pt'] == 'Clause':
return Clause(json_dict=o)
if o['pt'] == 'Predicate':
return Predicate(json_dict=o)
if o['pt'] == 'StringLiteral':
return StringLiteral (json_dict=o)
if o['pt'] == 'NumberLiteral':
return NumberLiteral (json_dict=o)
if o['pt'] == 'ListLiteral':
return ListLiteral (json_dict=o)
if o['pt'] == 'DictLiteral':
return DictLiteral (json_dict=o)
if o['pt'] == 'SetLiteral':
return SetLiteral (json_dict=o)
if o['pt'] == 'Variable':
return Variable (json_dict=o)
if o['pt'] == 'SourceLocation':
return SourceLocation (json_dict=o)
if o['pt'] == 'MacroCall':
return MacroCall (json_dict=o)
raise PrologError('cannot convert from json: %s .' % repr(o))
def json_to_prolog(jstr):
    return json.JSONDecoder(object_hook = _prolog_from_json).decode(jstr)
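# round-trip sketch (illustrative):
#   jstr = prolog_to_json(build_predicate('foo', ['X', 42]))
#   json_to_prolog(jstr)   # -> Predicate('foo', [Variable('X'), NumberLiteral(42)])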
#
# Copyright 2015, 2016, 2017 Guenter Bartsch
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
#
# Zamia-Prolog engine
#
# based on http://openbookproject.net/py4fun/prolog/prolog3.html by Chris Meyers
#
# A Goal is a rule at a certain point in its computation.
# env contains bindings, inx indexes the current term
# being satisfied, parent is another Goal which spawned this one
# and which we will unify back to when this Goal is complete.
import os
import sys
import logging
import codecs
import re
import copy
import time
from six import string_types
from zamiaprolog.logic import *
from zamiaprolog.builtins import *
from zamiaprolog.errors import *
from nltools.misc import limit_str
SLOW_QUERY_TS = 1.0
def prolog_unary_plus (a) : return NumberLiteral(a)
def prolog_unary_minus (a) : return NumberLiteral(-a)
unary_operators = {'+': prolog_unary_plus,
'-': prolog_unary_minus}
def prolog_binary_add (a,b) : return NumberLiteral(a + b)
def prolog_binary_sub (a,b) : return NumberLiteral(a - b)
def prolog_binary_mul (a,b) : return NumberLiteral(a * b)
def prolog_binary_div (a,b) : return NumberLiteral(a / b)
def prolog_binary_mod (a,b) : return NumberLiteral(a % b)
binary_operators = {'+' : prolog_binary_add,
'-' : prolog_binary_sub,
'*' : prolog_binary_mul,
'/' : prolog_binary_div,
'mod': prolog_binary_mod,
}
builtin_specials = set(['cut', 'fail', 'not', 'or', 'and', 'is', 'set'])
class PrologGoal:
    def __init__ (self, head, terms, parent=None, env=None, negate=False, inx=0, location=None) :
        assert type(terms) is list
        assert location
        self.head = head
        self.terms = terms
        self.parent = parent
        # note: env defaults to None to avoid the shared mutable default argument pitfall
        self.env = env if env is not None else {}
        self.negate = negate
        self.inx = inx
        self.location = location
def __unicode__ (self):
res = u'!goal ' if self.negate else u'goal '
if self.head:
res += unicode(self.head)
else:
res += u'TOP'
res += ' '
for i, t in enumerate(self.terms):
if i == self.inx:
res += u"**"
res += unicode(t) + u' '
res += u'env=%s' % unicode(self.env)
return res
def __str__ (self) :
return unicode(self).encode('utf8')
def __repr__ (self):
return 'PrologGoal(%s)' % str(self)
def get_depth (self):
if not self.parent:
return 0
return self.parent.get_depth() + 1
class PrologRuntime(object):
def register_builtin (self, name, builtin):
self.builtins[name] = builtin
def register_builtin_function (self, name, fn):
self.builtin_functions[name] = fn
def set_trace(self, trace):
self.trace = trace
def __init__(self, db):
self.db = db
self.builtins = {}
self.builtin_functions = {}
self.trace = False
# arithmetic
self.register_builtin('>', builtin_larger)
self.register_builtin('<', builtin_smaller)
self.register_builtin('=<', builtin_smaller_or_equal)
self.register_builtin('>=', builtin_larger_or_equal)
self.register_builtin('\\=', builtin_non_equal)
self.register_builtin('=', builtin_equal)
self.register_builtin('increment', builtin_increment) # increment (?V, +I)
self.register_builtin('decrement', builtin_decrement) # decrement (?V, +D)
self.register_builtin('between', builtin_between) # between (+Low, +High, ?Value)
# strings
self.register_builtin('sub_string', builtin_sub_string)
self.register_builtin('str_append', builtin_str_append) # str_append (?String, +Append)
self.register_builtin('atom_chars', builtin_atom_chars)
# time and date
self.register_builtin('date_time_stamp', builtin_date_time_stamp)
self.register_builtin('stamp_date_time', builtin_stamp_date_time)
self.register_builtin('get_time', builtin_get_time)
self.register_builtin('day_of_the_week', builtin_day_of_the_week) # day_of_the_week (+Date,-DayOfTheWeek)
# I/O
self.register_builtin('write', builtin_write) # write (+Term)
self.register_builtin('nl', builtin_nl) # nl
# debug, tracing, control
self.register_builtin('log', builtin_log) # log (+Level, +Terms...)
self.register_builtin('trace', builtin_trace) # trace (+OnOff)
self.register_builtin('true', builtin_true) # true
self.register_builtin('ignore', builtin_ignore) # ignore (+P)
self.register_builtin('var', builtin_var) # var (+Term)
self.register_builtin('nonvar', builtin_nonvar) # nonvar (+Term)
# these have become specials now
# self.register_builtin('is', builtin_is) # is (?Ques, +Ans)
# self.register_builtin('set', builtin_set) # set (?Var, +Val)
# lists
self.register_builtin('list_contains', builtin_list_contains)
self.register_builtin('list_nth', builtin_list_nth)
self.register_builtin('length', builtin_length) # length (+List, -Len)
self.register_builtin('list_slice', builtin_list_slice) # list_slice (+Idx1, +Idx2, +List, -Slice)
self.register_builtin('list_append', builtin_list_append) # list_append (?List, +Element)
self.register_builtin('list_extend', builtin_list_extend) # list_extend (?List, +Element)
self.register_builtin('list_str_join', builtin_list_str_join) # list_str_join (+Glue, +List, -Str)
self.register_builtin('list_findall', builtin_list_findall) # list_findall (+Template, +Goal, -List)
# dicts
self.register_builtin('dict_put', builtin_dict_put) # dict_put (?Dict, +Key, +Value)
self.register_builtin('dict_get', builtin_dict_get) # dict_get (+Dict, ?Key, -Value)
# sets
self.register_builtin('set_add', builtin_set_add) # set_add (?Set, +Value)
self.register_builtin('set_get', builtin_set_get) # set_get (+Set, -Value)
self.register_builtin('set_findall', builtin_set_findall) # set_findall (+Template, +Goal, -Set)
# assert, rectract...
self.register_builtin('assertz', builtin_assertz) # assertz (+P)
self.register_builtin('retract', builtin_retract) # retract (+P)
self.register_builtin('setz', builtin_setz) # setz (+P, +V)
self.register_builtin('gensym', builtin_gensym) # gensym (+Root, -Unique)
#
# builtin functions
#
self.register_builtin_function ('format_str', builtin_format_str)
# lists
self.register_builtin_function ('list_max', builtin_list_max)
self.register_builtin_function ('list_min', builtin_list_min)
self.register_builtin_function ('list_sum', builtin_list_sum)
self.register_builtin_function ('list_avg', builtin_list_avg)
self.register_builtin_function ('list_len', builtin_list_len)
self.register_builtin_function ('list_slice', builtin_list_slice_fn)
self.register_builtin_function ('list_join', builtin_list_join_fn)
def prolog_eval (self, term, env, location): # eval all variables within a term to constants
#
# implement Pseudo-Variables and -Predicates, e.g. USER:NAME
#
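        # example (sketch): with env = {'C': Predicate('ctx1')} and the facts
        # user(ctx1, u1) and name(u1, "anna") in the db, evaluating the
        # pseudo-variable C:user:name yields StringLiteral("anna")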
if (isinstance (term, Variable) or isinstance (term, Predicate)) and (":" in term.name):
# import pdb; pdb.set_trace()
parts = term.name.split(':')
v = parts[0]
if v[0].isupper():
if not v in env:
raise PrologRuntimeError('is: unbound variable %s.' % v, location)
v = env[v]
for part in parts[1:]:
subparts = part.split('|')
pattern = [v]
wildcard_found = False
for sp in subparts[1:]:
if sp == '_':
wildcard_found = True
pattern.append('_1')
else:
pattern.append(sp)
if not wildcard_found:
pattern.append('_1')
solutions = self.search_predicate (subparts[0], pattern, env=env)
if len(solutions)<1:
return Variable(term.name)
v = solutions[0]['_1']
return v
#
# regular eval semantics
#
if isinstance(term, Predicate):
# unary builtin ?
if len(term.args) == 1:
op = unary_operators.get(term.name)
if op:
a = self.prolog_eval(term.args[0], env, location)
if not isinstance (a, NumberLiteral):
return None
return op(a.f)
# binary builtin ?
op = binary_operators.get(term.name)
if op:
if len(term.args) != 2:
return None
a = self.prolog_eval(term.args[0], env, location)
if not isinstance (a, NumberLiteral):
return None
b = self.prolog_eval(term.args[1], env, location)
if not isinstance (b, NumberLiteral):
return None
return op(a.f, b.f)
have_vars = False
args = []
for arg in term.args :
a = self.prolog_eval(arg, env, location)
if isinstance(a, Variable):
have_vars = True
args.append(a)
# engine-provided builtin function ?
if term.name in self.builtin_functions and not have_vars:
return self.builtin_functions[term.name](args, env, self, location)
return Predicate(term.name, args)
if isinstance (term, ListLiteral):
return ListLiteral (list(map (lambda x: self.prolog_eval(x, env, location), term.l)))
if isinstance (term, Literal):
return term
if isinstance (term, MacroCall):
return term
if isinstance (term, Variable):
ans = env.get(term.name)
if not ans:
return term
else:
return self.prolog_eval(ans, env, location)
raise PrologError('Internal error: prolog_eval on unhandled object: %s (%s)' % (repr(term), term.__class__), location)
# helper functions (used by builtin predicates)
def prolog_get_int(self, term, env, location):
t = self.prolog_eval (term, env, location)
if not isinstance (t, NumberLiteral):
raise PrologRuntimeError('Integer expected, %s found instead.' % term.__class__, location)
return int(t.f)
def prolog_get_float(self, term, env, location):
t = self.prolog_eval (term, env, location)
if not isinstance (t, NumberLiteral):
raise PrologRuntimeError('Float expected, %s found instead.' % term.__class__, location)
return t.f
def prolog_get_string(self, term, env, location):
t = self.prolog_eval (term, env, location)
if not isinstance (t, StringLiteral):
raise PrologRuntimeError('String expected, %s (%s) found instead.' % (unicode(t), t.__class__), location)
return t.s
def prolog_get_literal(self, term, env, location):
t = self.prolog_eval (term, env, location)
if not isinstance (t, Literal):
raise PrologRuntimeError('Literal expected, %s %s found instead.' % (t.__class__, t), location)
return t.get_literal()
def prolog_get_bool(self, term, env, location):
t = self.prolog_eval (term, env, location)
if not isinstance(t, Predicate):
raise PrologRuntimeError('Boolean expected, %s found instead.' % term.__class__, location)
return t.name == 'true'
def prolog_get_list(self, term, env, location):
t = self.prolog_eval (term, env, location)
if not isinstance(t, ListLiteral):
raise PrologRuntimeError('List expected, %s (%s) found instead.' % (unicode(term), term.__class__), location)
return t
def prolog_get_dict(self, term, env, location):
t = self.prolog_eval (term, env, location)
if not isinstance(t, DictLiteral):
raise PrologRuntimeError('Dict expected, %s found instead.' % term.__class__, location)
return t
def prolog_get_set(self, term, env, location):
t = self.prolog_eval (term, env, location)
if not isinstance(t, SetLiteral):
raise PrologRuntimeError('Set expected, %s found instead.' % term.__class__, location)
return t
def prolog_get_variable(self, term, env, location):
if not isinstance(term, Variable):
raise PrologRuntimeError('Variable expected, %s found instead.' % term.__class__, location)
return term.name
def prolog_get_constant(self, term, env, location):
t = self.prolog_eval (term, env, location)
if not isinstance(t, Predicate):
raise PrologRuntimeError('Constant expected, %s found instead.' % term.__class__, location)
if len(t.args) >0:
raise PrologRuntimeError('Constant expected, %s found instead.' % unicode(t), location)
return t.name
def prolog_get_predicate(self, term, env, location):
t = self.prolog_eval (term, env, location)
if t:
if not isinstance(t, Predicate):
raise PrologRuntimeError(u'Predicate expected, %s (%s) found instead.' % (unicode(t), t.__class__), location)
return t
if not isinstance(term, Predicate):
raise PrologRuntimeError(u'Predicate expected, %s (%s) found instead.' % (unicode(term), term.__class__), location)
return term
def _unify (self, src, srcEnv, dest, destEnv, location, overwrite_vars) :
"update dest env from src. return true if unification succeeds"
# logging.debug("Unify %s %s to %s %s" % (src, srcEnv, dest, destEnv))
# import pdb; pdb.set_trace()
if isinstance (src, Variable):
if (src.name == u'_'):
return True
# if ':' in src.name:
# import pdb; pdb.set_trace()
srcVal = self.prolog_eval(src, srcEnv, location)
if isinstance (srcVal, Variable):
return True
else:
return self._unify(srcVal, srcEnv, dest, destEnv, location, overwrite_vars)
if isinstance (dest, Variable):
if (dest.name == u'_'):
return True
destVal = self.prolog_eval(dest, destEnv, location) # evaluate destination
if not isinstance(destVal, Variable) and not overwrite_vars:
return self._unify(src, srcEnv, destVal, destEnv, location, overwrite_vars)
else:
# handle pseudo-vars?
if ':' in dest.name:
# import pdb; pdb.set_trace()
pname, r_pattern, a_pattern = self._compute_retract_assert_patterns (dest, self.prolog_eval(src, srcEnv, location), destEnv)
ovl = destEnv.get(ASSERT_OVERLAY_VAR_NAME)
if ovl is None:
ovl = LogicDBOverlay()
else:
ovl = ovl.clone()
ovl.retract(Predicate ( pname, r_pattern))
ovl.assertz(Clause ( Predicate(pname, a_pattern), location=location))
destEnv[ASSERT_OVERLAY_VAR_NAME] = ovl
else:
destEnv[dest.name] = self.prolog_eval(src, srcEnv, location)
return True # unifies. destination updated
elif isinstance (src, Literal):
srcVal = self.prolog_eval(src, srcEnv, location)
destVal = self.prolog_eval(dest, destEnv, location)
return srcVal == destVal
elif isinstance (dest, Literal):
return False
else:
if not isinstance(src, Predicate) or not isinstance(dest, Predicate):
raise PrologRuntimeError (u'_unify: expected src/dest, got "%s" vs "%s"' % (repr(src), repr(dest)))
if src.name != dest.name:
return False
elif len(src.args) != len(dest.args):
return False
else:
for i in range(len(src.args)):
if not self._unify(src.args[i], srcEnv, dest.args[i], destEnv, location, overwrite_vars):
return False
# always unify implicit overlay variable:
if ASSERT_OVERLAY_VAR_NAME in srcEnv:
destEnv[ASSERT_OVERLAY_VAR_NAME] = srcEnv[ASSERT_OVERLAY_VAR_NAME]
return True
def _trace (self, label, goal):
if not self.trace:
return
# logging.debug ('label: %s, goal: %s' % (label, unicode(goal)))
depth = goal.get_depth()
# ind = depth * ' ' + len(label) * ' '
res = u'!' if goal.negate else u''
if goal.head:
res += limit_str(unicode(goal.head), 60)
else:
res += u'TOP'
res += ' '
for i, t in enumerate(goal.terms):
if i == goal.inx:
res += u" -> " + limit_str(unicode(t), 60)
res += ' [' + unicode(goal.location) + ']'
# indent = depth*' ' + len(label) * ' '
indent = depth*' '
logging.info(u"%s %s: %s" % (indent, label, res))
for k in sorted(goal.env):
if k != ASSERT_OVERLAY_VAR_NAME:
logging.info(u"%s %s=%s" % (indent, k, limit_str(repr(goal.env[k]), 100)))
if ASSERT_OVERLAY_VAR_NAME in goal.env:
goal.env[ASSERT_OVERLAY_VAR_NAME].log_trace(indent)
# res += u'env=%s' % unicode(self.env)
def _trace_fn (self, label, env):
if not self.trace:
return
indent = ' '
logging.info(u"%s %s" % (indent, label))
# import pdb; pdb.set_trace()
for k in sorted(env):
logging.info(u"%s %s=%s" % (indent, k, limit_str(repr(env[k]), 80)))
def _finish_goal (self, g, succeed, stack, solutions):
while True:
succ = not succeed if g.negate else succeed
if succ:
self._trace ('SUCCESS ', g)
if g.parent == None : # Our original goal?
solutions.append(g.env) # Record solution
else:
# stack up shallow copy of parent goal to resume
parent = PrologGoal (head = g.parent.head,
terms = g.parent.terms,
parent = g.parent.parent,
env = copy.copy(g.parent.env),
negate = g.parent.negate,
inx = g.parent.inx,
location = g.parent.location)
self._unify (g.head, g.env,
parent.terms[parent.inx], parent.env, g.location, overwrite_vars = True)
parent.inx = parent.inx+1 # advance to next goal in body
stack.append(parent) # put it on the stack
break
else:
self._trace ('FAIL ', g)
if g.parent == None : # Our original goal?
break
else:
# prepare shallow copy of parent goal to resume
parent = PrologGoal (head = g.parent.head,
terms = g.parent.terms,
parent = g.parent.parent,
env = copy.copy(g.parent.env),
negate = g.parent.negate,
inx = g.parent.inx,
location = g.parent.location)
self._unify (g.head, g.env,
parent.terms[parent.inx], parent.env, g.location, overwrite_vars = True)
g = parent
succeed = False
def apply_overlay (self, module, solution, commit=True):
if not ASSERT_OVERLAY_VAR_NAME in solution:
return
        solution[ASSERT_OVERLAY_VAR_NAME].do_apply(module, self.db, commit=commit)
def _compute_retract_assert_patterns (self, arg_Var, arg_Val, env):
parts = arg_Var.name.split(':')
v = parts[0]
if v[0].isupper():
if not v in env:
                raise PrologRuntimeError('%s: unbound variable %s.' % (arg_Var.name, v))
v = env[v]
else:
v = Predicate(v)
for part in parts[1:len(parts)-1]:
subparts = part.split('|')
pattern = [v]
wildcard_found = False
for sp in subparts[1:]:
if sp == '_':
wildcard_found = True
pattern.append('_1')
else:
pattern.append(Predicate(sp))
if not wildcard_found:
pattern.append('_1')
solutions = self.search_predicate (subparts[0], pattern, env=env)
if len(solutions)<1:
                raise PrologRuntimeError(u'is: failed to match part "%s" of "%s".' % (part, unicode(arg_Var)))
v = solutions[0]['_1']
lastpart = parts[len(parts)-1]
subparts = lastpart.split('|')
r_pattern = [v]
a_pattern = [v]
wildcard_found = False
for sp in subparts[1:]:
if sp == '_':
wildcard_found = True
r_pattern.append(Variable('_'))
a_pattern.append(arg_Val)
else:
r_pattern.append(Predicate(sp))
a_pattern.append(Predicate(sp))
if not wildcard_found:
r_pattern.append(Variable('_'))
a_pattern.append(arg_Val)
return subparts[0], r_pattern, a_pattern
def _special_is(self, g):
self._trace ('CALLED SPECIAL is (?Var, +Val)', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) != 2:
raise PrologRuntimeError('is: 2 args (?Var, +Val) expected.', g.location)
arg_Var = self.prolog_eval(pred.args[0], g.env, g.location)
arg_Val = self.prolog_eval(pred.args[1], g.env, g.location)
# handle pseudo-variable assignment
if (isinstance (arg_Var, Variable) or isinstance (arg_Var, Predicate)) and (":" in arg_Var.name):
pname, r_pattern, a_pattern = self._compute_retract_assert_patterns (arg_Var, arg_Val, g.env)
g.env = do_retract ({}, Predicate ( pname, r_pattern), res=g.env)
g.env = do_assertz ({}, Clause ( Predicate(pname, a_pattern), location=g.location), res=g.env)
return True
# regular is/2 semantics
if isinstance(arg_Var, Variable):
if arg_Var.name != u'_':
g.env[arg_Var.name] = arg_Val # Set variable
return True
if arg_Var != arg_Val:
return False
return True
def _special_set(self, g):
self._trace ('CALLED BUILTIN set (-Var, +Val)', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) != 2:
raise PrologRuntimeError('set: 2 args (-Var, +Val) expected.', g.location)
arg_Var = pred.args[0]
arg_Val = self.prolog_eval(pred.args[1], g.env, g.location)
# handle pseudo-variable assignment
if (isinstance (arg_Var, Variable) or isinstance (arg_Var, Predicate)) and (":" in arg_Var.name):
pname, r_pattern, a_pattern = self._compute_retract_assert_patterns (arg_Var, arg_Val, g.env)
# import pdb; pdb.set_trace()
g.env = do_retract ({}, Predicate ( pname, r_pattern), res=g.env)
g.env = do_assertz ({}, Clause ( Predicate(pname, a_pattern), location=g.location), res=g.env)
return True
# regular set/2 semantics
if not isinstance(arg_Var, Variable):
raise PrologRuntimeError('set: arg 0 Variable expected, %s (%s) found instead.' % (unicode(arg_Var), arg_Var.__class__), g.location)
g.env[arg_Var.name] = arg_Val # Set variable
return True
def search (self, a_clause, env={}):
if a_clause.body is None:
return [{}]
if isinstance (a_clause.body, Predicate):
if a_clause.body.name == 'and':
terms = a_clause.body.args
else:
terms = [ a_clause.body ]
else:
raise PrologRuntimeError (u'search: expected predicate in body, got "%s" !' % unicode(a_clause))
stack = [ PrologGoal (a_clause.head, terms, env=copy.copy(env), location=a_clause.location) ]
solutions = []
ts_start = time.time()
while stack :
g = stack.pop() # Next goal to consider
self._trace ('CONSIDER', g)
if g.inx >= len(g.terms) : # Is this one finished?
self._finish_goal (g, True, stack, solutions)
continue
# No. more to do with this goal.
pred = g.terms[g.inx] # what we want to solve
if not isinstance(pred, Predicate):
raise PrologRuntimeError (u'search: encountered "%s" (%s) when I expected a predicate!' % (unicode(pred), pred.__class__), g.location )
name = pred.name
# FIXME: debug only
# if name == 'ias':
# import pdb; pdb.set_trace()
if name in builtin_specials:
if name == 'cut': # zap the competition for the current goal
# logging.debug ("CUT: stack before %s" % repr(stack))
# import pdb; pdb.set_trace()
while len(stack)>0 and stack[len(stack)-1].head and stack[len(stack)-1].head.name == g.parent.head.name:
stack.pop()
# logging.debug ("CUT: stack after %s" % repr(stack))
elif name == 'fail': # Dont succeed
self._finish_goal (g, False, stack, solutions)
continue
elif name == 'not':
                    # insert negated sub-goal
stack.append(PrologGoal(pred, pred.args, g, env=copy.copy(g.env), negate=True, location=g.location))
continue
elif name == 'or':
# logging.debug (' or clause detected.')
# import pdb; pdb.set_trace()
for subgoal in reversed(pred.args):
or_subg = PrologGoal(pred, [subgoal], g, env=copy.copy(g.env), location=g.location)
self._trace (' OR', or_subg)
# logging.debug (' subgoal: %s' % subgoal)
stack.append(or_subg)
continue
elif name == 'and':
stack.append(PrologGoal(pred, pred.args, g, env=copy.copy(g.env), location=g.location))
continue
elif name == 'is':
if not (self._special_is (g)):
self._finish_goal (g, False, stack, solutions)
continue
elif name == 'set':
if not (self._special_set (g)):
self._finish_goal (g, False, stack, solutions)
continue
g.inx = g.inx + 1 # Succeed. resume self.
stack.append(g)
continue
# builtin predicate ?
if pred.name in self.builtins:
bindings = self.builtins[pred.name](g, self)
if bindings:
self._trace ('SUCCESS FROM BUILTIN ', g)
g.inx = g.inx + 1
if type(bindings) is list:
for b in reversed(bindings):
new_env = copy.copy(g.env)
new_env.update(b)
stack.append(PrologGoal(g.head, g.terms, parent=g.parent, env=new_env, inx=g.inx, location=g.location))
else:
stack.append(g)
else:
self._finish_goal (g, False, stack, solutions)
continue
# Not special. look up in rule database
static_filter = {}
for i, a in enumerate(pred.args):
ca = self.prolog_eval(a, g.env, g.location)
if isinstance(ca, Predicate) and len(ca.args)==0:
static_filter[i] = ca.name
# if len(static_filter)>0:
# if pred.name == 'not_dog':
# import pdb; pdb.set_trace()
clauses = self.db.lookup(pred.name, len(pred.args), overlay=g.env.get(ASSERT_OVERLAY_VAR_NAME), sf=static_filter)
# if len(clauses) == 0:
# # fail
# self._finish_goal (g, False, stack, solutions)
# continue
success = False
for clause in reversed(clauses):
if len(clause.head.args) != len(pred.args):
continue
# logging.debug('clause: %s' % clause)
# stack up child subgoal
if clause.body:
child = PrologGoal(clause.head, [clause.body], g, env={}, location=clause.location)
else:
child = PrologGoal(clause.head, [], g, env={}, location=clause.location)
ans = self._unify (pred, g.env, clause.head, child.env, g.location, overwrite_vars = False)
if ans: # if unifies, stack it up
stack.append(child)
success = True
# logging.debug ("Queue %s" % str(child))
if not success:
# make sure we explicitly fail for proper negation support
self._finish_goal (g, False, stack, solutions)
# profiling
ts_delay = time.time() - ts_start
# logging.debug (u'runtime: search for %s took %fs.' % (unicode(a_clause), ts_delay))
if ts_delay>SLOW_QUERY_TS:
            logging.warning (u'runtime: SLOW search for %s took %fs.' % (unicode(a_clause), ts_delay))
# import pdb; pdb.set_trace()
return solutions
def search_predicate(self, name, args, env={}, location=None):
""" convenience function: build Clause/Predicate structure, translate python strings in args
into Predicates/Variables by Prolog conventions (lowercase: predicate, uppercase: variable) """
if not location:
location = SourceLocation('<input>', 0, 0)
solutions = self.search(Clause(body=build_predicate(name, args), location=location), env=env)
        return solutions
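# usage sketch (mirrors the README examples):
#   rt = PrologRuntime(db)
#   solutions = rt.search_predicate('move', [3, 'left', 'right', 'center'])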
#
# Copyright 2015, 2016, 2017 Guenter Bartsch
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
#
# store and retrieve logic clauses to and from our relational db
#
import os
import sys
import logging
import time
from copy import deepcopy, copy
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from six import python_2_unicode_compatible, text_type
from zamiaprolog import model
from zamiaprolog.logic import *
from nltools.misc import limit_str
class LogicDB(object):
def __init__(self, db_url, echo=False):
self.engine = create_engine(db_url, echo=echo)
self.Session = sessionmaker(bind=self.engine)
self.session = self.Session()
model.Base.metadata.create_all(self.engine)
self.cache = {}
def commit(self):
logging.debug("commit.")
self.session.commit()
def close (self, do_commit=True):
if do_commit:
self.commit()
self.session.close()
def clear_module(self, module, commit=True):
logging.info("Clearing %s ..." % module)
self.session.query(model.ORMClause).filter(model.ORMClause.module==module).delete()
self.session.query(model.ORMPredicateDoc).filter(model.ORMPredicateDoc.module==module).delete()
logging.info("Clearing %s ... done." % module)
if commit:
self.commit()
self.invalidate_cache()
def clear_all_modules(self, commit=True):
logging.info("Clearing all modules ...")
self.session.query(model.ORMClause).delete()
self.session.query(model.ORMPredicateDoc).delete()
logging.info("Clearing all modules ... done.")
if commit:
self.commit()
self.invalidate_cache()
def store (self, module, clause):
ormc = model.ORMClause(module = module,
arity = len(clause.head.args),
head = clause.head.name,
prolog = prolog_to_json(clause))
# print text_type(clause)
self.session.add(ormc)
self.invalidate_cache(clause.head.name)
def invalidate_cache(self, name=None):
if name and name in self.cache:
del self.cache[name]
else:
self.cache = {}
def store_doc (self, module, name, doc):
ormd = model.ORMPredicateDoc(module = module,
name = name,
doc = doc)
self.session.add(ormd)
# use arity=-1 to disable filtering
def lookup (self, name, arity, overlay=None, sf=None):
ts_start = time.time()
# if name == 'lang':
# import pdb; pdb.set_trace()
# DB caching
if name in self.cache:
res = copy(self.cache[name])
else:
res = []
for ormc in self.session.query(model.ORMClause).filter(model.ORMClause.head==name).order_by(model.ORMClause.id).all():
res.append (json_to_prolog(ormc.prolog))
self.cache[name] = copy(res)
if overlay:
res = overlay.do_filter(name, res)
if arity<0:
return res
res2 = []
for clause in res:
if len(clause.head.args) != arity:
continue
match = True
if sf:
for i in sf:
ca = sf[i]
a = clause.head.args[i]
if not isinstance(a, Predicate):
continue
if (a.name != ca) or (len(a.args) !=0):
# logging.info('no match: %s vs %s %s' % (repr(ca), repr(a), text_type(clause)))
match=False
break
if not match:
continue
res2.append(clause)
ts_delay = time.time() - ts_start
# logging.debug (u'db lookup for %s/%d took %fs' % (name, arity, ts_delay))
return res2
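# usage sketch:
#   db = LogicDB('sqlite:///foo.db')
#   clauses = db.lookup('move', 4)     # all move/4 clauses
#   clauses = db.lookup('move', -1)    # arity filtering disabled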
@python_2_unicode_compatible
class LogicDBOverlay(object):
def __init__(self):
self.d_assertz = {}
self.d_retracted = {}
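    # an overlay records assertz/retract operations relative to the persistent
    # db; the runtime keeps one per search branch (in the goal environment) and
    # makes the changes permanent via do_apply() once a solution is accepted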
def clone(self):
clone = LogicDBOverlay()
for name in self.d_retracted:
for c in self.d_retracted[name]:
clone.retract(c)
for name in self.d_assertz:
for c in self.d_assertz[name]:
clone.assertz(c)
return clone
def assertz (self, clause):
name = clause.head.name
if name in self.d_assertz:
self.d_assertz[name].append(clause)
else:
self.d_assertz[name] = [clause]
def _match_p (self, p1, p2):
""" extremely simplified variant of full-blown unification - just enough to get basic retract/1 working """
if isinstance (p1, Variable):
return True
if isinstance (p2, Variable):
return True
elif isinstance (p1, Literal):
return p1 == p2
elif p1.name != p2.name:
return False
elif len(p1.args) != len(p2.args):
return False
else:
for i in range(len(p1.args)):
if not self._match_p(p1.args[i], p2.args[i]):
return False
return True
def retract (self, p):
name = p.name
if name in self.d_assertz:
l = []
for c in self.d_assertz[name]:
if not self._match_p(p, c.head):
l.append(c)
self.d_assertz[name] = l
if name in self.d_retracted:
self.d_retracted[name].append(p)
else:
self.d_retracted[name] = [p]
def do_filter (self, name, res):
if name in self.d_retracted:
res2 = []
for clause in res:
for p in self.d_retracted[name]:
if not self._match_p(clause.head, p):
res2.append(clause)
res = res2
# append overlay clauses
if name in self.d_assertz:
for clause in self.d_assertz[name]:
res.append(clause)
return res
def log_trace (self, indent):
for k in sorted(self.d_assertz):
for clause in self.d_assertz[k]:
logging.info(u"%s [O] %s" % (indent, limit_str(text_type(clause), 100)))
# FIXME: log retracted clauses?
def __str__ (self):
res = u'DBOvl('
for k in sorted(self.d_assertz):
for clause in self.d_assertz[k]:
res += u'+' + limit_str(text_type(clause), 40)
for k in sorted(self.d_retracted):
for p in self.d_retracted[k]:
res += u'-' + limit_str(text_type(p), 40)
res += u')'
return res
def __repr__(self):
return text_type(self).encode('utf8')
def do_apply(self, module, db, commit=True):
to_delete = set()
for name in self.d_retracted:
for ormc in db.session.query(model.ORMClause).filter(model.ORMClause.head==name).all():
clause = json_to_prolog(ormc.prolog)
for p in self.d_retracted[name]:
if self._match_p(clause.head, p):
to_delete.add(ormc.id)
if to_delete:
db.session.query(model.ORMClause).filter(model.ORMClause.id.in_(list(to_delete))).delete(synchronize_session='fetch')
db.invalidate_cache()
for name in self.d_assertz:
for clause in self.d_assertz[name]:
db.store(module, clause)
if commit:
db.commit()
# class LogicMemDB(object):
#
# def __init__(self):
# self.clauses = {}
#
# def store (self, clause):
# if clause.head.name in self.clauses:
# self.clauses[clause.head.name].append (clause)
# else:
# self.clauses[clause.head.name] = [clause]
#
# def lookup (self, name):
# if name in self.clauses:
# return self.clauses[name]
# return []
#
# Copyright 2015, 2016, 2017 Guenter Bartsch
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
#
# basic Zamia-Prolog builtins
#
import sys
import datetime
import dateutil.parser
import time
import logging
import pytz # $ pip install pytz
import copy
from tzlocal import get_localzone # $ pip install tzlocal
from zamiaprolog import model
from zamiaprolog.logic import *
from zamiaprolog.logicdb import LogicDBOverlay
from zamiaprolog.errors import *
PROLOG_LOGGER_NAME = 'prolog'
prolog_logger = logging.getLogger(PROLOG_LOGGER_NAME)
def builtin_cmp_op(g, op, rt):
pred = g.terms[g.inx]
args = pred.args
if len(args) != 2:
raise PrologRuntimeError('cmp_op: 2 args expected.', g.location)
a = rt.prolog_get_literal(args[0], g.env, g.location)
b = rt.prolog_get_literal(args[1], g.env, g.location)
res = op(a,b)
if rt.trace:
logging.info("builtin_cmp_op called, a=%s, b=%s, res=%s" % (a, b, res))
return res
def builtin_larger(g, rt): return builtin_cmp_op(g, lambda a,b: a>b ,rt)
def builtin_smaller(g, rt): return builtin_cmp_op(g, lambda a,b: a<b ,rt)
def builtin_smaller_or_equal(g, rt): return builtin_cmp_op(g, lambda a,b: a<=b ,rt)
def builtin_larger_or_equal(g, rt): return builtin_cmp_op(g, lambda a,b: a>=b ,rt)
def builtin_non_equal(g, rt): return builtin_cmp_op(g, lambda a,b: a!=b ,rt)
def builtin_equal(g, rt): return builtin_cmp_op(g, lambda a,b: a==b ,rt)
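# example: the Prolog goal  3 < 5  maps to builtin_smaller and succeeds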
def builtin_arith_imp(g, op, rt):
pred = g.terms[g.inx]
args = pred.args
if len(args) != 2:
raise PrologRuntimeError('arith_op: 2 args expected.', g.location)
a = rt.prolog_get_variable (args[0], g.env, g.location)
b = rt.prolog_get_float (args[1], g.env, g.location)
af = g.env[a].f if a in g.env else 0.0
res = NumberLiteral(op(af,b))
if rt.trace:
logging.info("builtin_arith_op called, a=%s, b=%s, res=%s" % (a, b, res))
g.env[a] = res
return True
def builtin_increment(g, rt): return builtin_arith_imp(g, lambda a,b: a+b ,rt)
def builtin_decrement(g, rt): return builtin_arith_imp(g, lambda a,b: a-b ,rt)
def builtin_between(g, rt):
rt._trace ('CALLED BUILTIN between (+Low, +High, ?Value)', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) != 3:
raise PrologRuntimeError('between: 3 args (+Low, +High, ?Value) expected.', g.location)
arg_Low = rt.prolog_get_float(args[0], g.env, g.location)
arg_High = rt.prolog_get_float(args[1], g.env, g.location)
arg_Value = rt.prolog_eval(args[2], g.env, g.location)
if isinstance(arg_Value, Variable):
res = []
for i in range(int(arg_Low), int(arg_High)+1):
res.append({arg_Value.name: NumberLiteral(i)})
return res
v = arg_Value.f
return ( arg_Low <= v ) and ( arg_High >= v )
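# example: the goal  between(1, 3, X)  generates the bindings X=1, X=2, X=3;
# with X already bound, between/3 acts as a plain range check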
def builtin_date_time_stamp(g, rt):
# logging.debug( "builtin_date_time_stamp called, g: %s" % g)
rt._trace ('CALLED BUILTIN date_time_stamp', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) != 2:
raise PrologRuntimeError('date_time_stamp: 2 args expected.', g.location)
if not isinstance(args[0], Predicate) or not args[0].name == 'date' or len(args[0].args) != 7:
raise PrologRuntimeError('date_time_stamp: arg0: date structure expected.', g.location)
arg_Y = rt.prolog_get_int(args[0].args[0], g.env, g.location)
arg_M = rt.prolog_get_int(args[0].args[1], g.env, g.location)
arg_D = rt.prolog_get_int(args[0].args[2], g.env, g.location)
arg_H = rt.prolog_get_int(args[0].args[3], g.env, g.location)
arg_Mn = rt.prolog_get_int(args[0].args[4], g.env, g.location)
arg_S = rt.prolog_get_int(args[0].args[5], g.env, g.location)
arg_TZ = rt.prolog_get_string(args[0].args[6], g.env, g.location)
#if pe.trace:
# print "BUILTIN date_time_stamp called, Y=%s M=%s D=%s H=%s Mn=%s S=%s TZ=%s" % ( str(arg_Y), str(arg_M), str(arg_D), str(arg_H), str(arg_Mn), str(arg_S), str(arg_TZ))
tz = get_localzone() if arg_TZ == 'local' else pytz.timezone(arg_TZ)
if not isinstance(args[1], Variable):
raise PrologRuntimeError('date_time_stamp: arg1: variable expected.', g.location)
v = g.env.get(args[1].name)
if v:
raise PrologRuntimeError('date_time_stamp: arg1: variable already bound.', g.location)
    dt = tz.localize(datetime.datetime(arg_Y, arg_M, arg_D, arg_H, arg_Mn, arg_S)) # pytz zones must be attached via localize(), not tzinfo=
g.env[args[1].name] = StringLiteral(dt.isoformat())
return True
def builtin_get_time(g, rt):
rt._trace ('CALLED BUILTIN get_time', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) != 1:
raise PrologRuntimeError('get_time: 1 arg expected.', g.location)
arg_T = rt.prolog_get_variable(args[0], g.env, g.location)
    dt = datetime.datetime.now(pytz.UTC) # now() without a zone yields naive local time; request UTC directly
g.env[arg_T] = StringLiteral(dt.isoformat())
return True
def builtin_day_of_the_week(g, rt):
""" day_of_the_week (+TS,-DayOfTheWeek) """
rt._trace ('CALLED BUILTIN day_of_the_week', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) != 2:
raise PrologRuntimeError('day_of_the_week: 2 args (+TS,-DayOfTheWeek) expected.', g.location)
arg_TS = rt.prolog_get_string(args[0], g.env, g.location)
arg_DayOfTheWeek = rt.prolog_get_variable(args[1], g.env, g.location)
dt = dateutil.parser.parse(arg_TS)
g.env[arg_DayOfTheWeek] = NumberLiteral(dt.weekday()+1)
return True
def builtin_stamp_date_time(g, rt):
rt._trace ('CALLED BUILTIN stamp_date_time', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) != 2:
raise PrologRuntimeError('stamp_date_time: 2 args expected.', g.location)
if not isinstance(args[1], Predicate) or not args[1].name == 'date' or len(args[1].args) != 7:
raise PrologRuntimeError('stamp_date_time: arg1: date structure expected.', g.location)
# try:
arg_Y = rt.prolog_get_variable(args[1].args[0], g.env, g.location)
arg_M = rt.prolog_get_variable(args[1].args[1], g.env, g.location)
arg_D = rt.prolog_get_variable(args[1].args[2], g.env, g.location)
arg_H = rt.prolog_get_variable(args[1].args[3], g.env, g.location)
arg_Mn = rt.prolog_get_variable(args[1].args[4], g.env, g.location)
arg_S = rt.prolog_get_variable(args[1].args[5], g.env, g.location)
arg_TZ = rt.prolog_get_string(args[1].args[6], g.env, g.location)
tz = get_localzone() if arg_TZ == 'local' else pytz.timezone(arg_TZ)
arg_TS = rt.prolog_get_string(args[0], g.env, g.location)
#dt = datetime.datetime.fromtimestamp(arg_TS, tz)
dt = dateutil.parser.parse(arg_TS).astimezone(tz)
g.env[arg_Y] = NumberLiteral(dt.year)
g.env[arg_M] = NumberLiteral(dt.month)
g.env[arg_D] = NumberLiteral(dt.day)
g.env[arg_H] = NumberLiteral(dt.hour)
g.env[arg_Mn] = NumberLiteral(dt.minute)
g.env[arg_S] = NumberLiteral(dt.second)
# except PrologRuntimeError as pre:
# logging.debug(pre)
# import pdb; pdb.set_trace()
# return False
return True
def builtin_sub_string(g, rt):
rt._trace ('CALLED BUILTIN sub_string', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) != 5:
raise PrologRuntimeError('sub_string: 5 args expected.', g.location)
arg_String = rt.prolog_get_string(args[0], g.env, g.location)
arg_Before = rt.prolog_eval(args[1], g.env, g.location)
arg_Length = rt.prolog_eval(args[2], g.env, g.location)
arg_After = rt.prolog_eval(args[3], g.env, g.location)
arg_SubString = rt.prolog_eval(args[4], g.env, g.location)
# FIXME: implement other variants
if arg_Before:
if not isinstance (arg_Before, NumberLiteral):
raise PrologRuntimeError('sub_string: arg_Before: Number expected, %s found instead.' % arg_Before.__class__, g.location)
before = int(arg_Before.f)
if arg_Length:
if not isinstance (arg_Length, NumberLiteral):
raise PrologRuntimeError('sub_string: arg_Length: Number expected, %s found instead.' % arg_Length.__class__, g.location)
length = int(arg_Length.f)
if not isinstance(arg_After, Variable):
raise PrologRuntimeError('sub_string: FIXME: arg_After required to be a variable for now.', g.location)
else:
var_After = arg_After.name
if var_After != '_':
g.env[var_After] = NumberLiteral(len(arg_String) - before - length)
if not isinstance(arg_SubString, Variable):
raise PrologRuntimeError('sub_string: FIXME: arg_SubString required to be a variable for now.', g.location)
else:
var_SubString = arg_SubString.name
if var_SubString != '_':
g.env[var_SubString] = StringLiteral(arg_String[before:before + length])
else:
raise PrologRuntimeError('sub_string: FIXME: arg_Length required to be a literal for now.', g.location)
else:
raise PrologRuntimeError('sub_string: FIXME: arg_Before required to be a literal for now.', g.location)
return True
def builtin_str_append(g, rt):
""" str_append (?String, +Append) """
rt._trace ('CALLED BUILTIN str_append', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) != 2:
raise PrologRuntimeError('str_append: 2 args (?String, +Append) expected.', g.location)
arg_str = rt.prolog_get_variable (args[0], g.env, g.location)
arg_append = rt.prolog_eval (args[1], g.env, g.location)
if not arg_str in g.env:
g.env[arg_str] = StringLiteral(arg_append.s)
else:
s2 = copy.deepcopy(g.env[arg_str].s)
s2 += arg_append.s
g.env[arg_str] = StringLiteral(s2)
return True
def builtin_atom_chars(g, rt):
rt._trace ('CALLED BUILTIN atom_chars', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) != 2:
raise PrologRuntimeError('atom_chars: 2 args expected.', g.location)
arg_atom = rt.prolog_eval(args[0], g.env, g.location)
arg_str = rt.prolog_eval(args[1], g.env, g.location)
if isinstance (arg_atom, Variable):
if isinstance (arg_str, Variable):
raise PrologRuntimeError('atom_chars: exactly one arg has to be bound.', g.location)
g.env[arg_atom.name] = Predicate(arg_str.s)
else:
if not isinstance (arg_str, Variable):
raise PrologRuntimeError('atom_chars: exactly one arg has to be bound.', g.location)
g.env[arg_str.name] = StringLiteral(arg_atom.name)
return True
def builtin_write(g, rt):
""" write (+Term) """
rt._trace ('CALLED BUILTIN write', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) != 1:
raise PrologRuntimeError('write: 1 arg (+Term) expected.', g.location)
t = rt.prolog_eval(args[0], g.env, g.location)
if isinstance (t, StringLiteral):
sys.stdout.write(t.s)
else:
        sys.stdout.write(text_type(t))
return True
def builtin_nl(g, rt):
rt._trace ('CALLED BUILTIN nl', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) != 0:
raise PrologRuntimeError('nl: no args expected.', g.location)
sys.stdout.write('\n')
return True
def builtin_log(g, rt):
""" log (+Level, +Terms...) """
global prolog_logger
rt._trace ('CALLED BUILTIN log', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) < 2:
raise PrologRuntimeError('log: at least 2 args (+Level, +Terms...) expected.', g.location)
l = rt.prolog_get_constant(args[0], g.env, g.location)
s = u''
for a in args[1:]:
t = rt.prolog_eval (a, g.env, g.location)
if len(s)>0:
s += ' '
if isinstance (t, StringLiteral):
s += t.s
else:
            s += text_type(t)
if l == u'debug':
prolog_logger.debug(s)
elif l == u'info':
prolog_logger.info(s)
elif l == u'error':
prolog_logger.error(s)
else:
raise PrologRuntimeError('log: unknown level %s, one of (debug, info, error) expected.' % l, g.location)
return True
def builtin_trace(g, rt):
""" trace (+OnOff) """
rt._trace ('CALLED BUILTIN trace', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) != 1:
raise PrologRuntimeError('trace: 1 arg (+OnOff) expected.', g.location)
onoff = rt.prolog_get_constant(args[0], g.env, g.location)
if onoff == u'on':
rt.set_trace(True)
elif onoff == u'off':
rt.set_trace(False)
else:
raise PrologRuntimeError('trace: unknown onoff value %s, one of (on, off) expected.' % onoff, g.location)
return True
def builtin_true(g, rt):
""" true """
rt._trace ('CALLED BUILTIN true', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) != 0:
raise PrologRuntimeError('true: no args expected.', g.location)
return True
def builtin_ignore(g, rt):
""" ignore (+P) """
rt._trace ('CALLED BUILTIN ignore', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) != 1:
raise PrologRuntimeError('ignore: 1 arg (+P) expected.', g.location)
arg_p = args[0]
if not isinstance (arg_p, Predicate):
raise PrologRuntimeError('ignore: predicate expected, %s found instead.' % repr(arg_p), g.location)
solutions = rt.search_predicate(arg_p.name, arg_p.args, env=g.env, location=g.location)
if len(solutions)>0:
return solutions
return True
def builtin_var(g, rt):
""" var (+Term) """
rt._trace ('CALLED BUILTIN var', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) != 1:
raise PrologRuntimeError('var: 1 arg (+Term) expected.', g.location)
arg_term = rt.prolog_eval(args[0], g.env, g.location)
#import pdb; pdb.set_trace()
return isinstance (arg_term, Variable)
def builtin_nonvar(g, rt):
""" nonvar (+Term) """
rt._trace ('CALLED BUILTIN nonvar', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) != 1:
raise PrologRuntimeError('nonvar: 1 arg (+Term) expected.', g.location)
arg_term = rt.prolog_eval(args[0], g.env, g.location)
#import pdb; pdb.set_trace()
return not isinstance (arg_term, Variable)
def builtin_list_contains(g, rt):
rt._trace ('CALLED BUILTIN list_contains', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) != 2:
raise PrologRuntimeError('list_contains: 2 args expected.', g.location)
arg_list = rt.prolog_get_list (args[0], g.env, g.location)
arg_needle = rt.prolog_eval(args[1], g.env, g.location)
for o in arg_list.l:
if o == arg_needle:
return True
return False
def builtin_list_nth(g, rt):
rt._trace ('CALLED BUILTIN list_nth', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) != 3:
raise PrologRuntimeError('list_nth: 3 args (index, list, elem) expected.', g.location)
arg_idx = rt.prolog_get_int (args[0], g.env, g.location)
arg_list = rt.prolog_get_list (args[1], g.env, g.location)
arg_elem = rt.prolog_eval (args[2], g.env, g.location)
if not arg_elem:
arg_elem = args[2]
if not isinstance(arg_elem, Variable):
raise PrologRuntimeError('list_nth: 3rd arg has to be an unbound variable for now, %s found instead.' % repr(arg_elem), g.location)
g.env[arg_elem.name] = arg_list.l[arg_idx]
return True
def builtin_length(g, rt):
""" length (+List, -Len) """
rt._trace ('CALLED BUILTIN length', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) != 2:
raise PrologRuntimeError('length: 2 args (+List, -Len) expected.', g.location)
arg_list = rt.prolog_get_list (args[0], g.env, g.location)
arg_len = rt.prolog_get_variable (args[1], g.env, g.location)
g.env[arg_len] = NumberLiteral(len(arg_list.l))
return True
def builtin_list_slice(g, rt):
""" list_slice (+Idx1, +Idx2, +List, -Slice) """
rt._trace ('CALLED BUILTIN list_slice', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) != 4:
raise PrologRuntimeError('list_slice: 4 args (+Idx1, +Idx2, +List, -Slice) expected.', g.location)
arg_idx1 = rt.prolog_get_int (args[0], g.env, g.location)
arg_idx2 = rt.prolog_get_int (args[1], g.env, g.location)
arg_list = rt.prolog_get_list (args[2], g.env, g.location)
arg_slice = rt.prolog_eval (args[3], g.env, g.location)
if not arg_slice:
arg_slice = args[3]
if not isinstance(arg_slice, Variable):
raise PrologRuntimeError('list_slice: 4th arg has to be an unbound variable for now, %s found instead.' % repr(arg_slice), g.location)
g.env[arg_slice.name] = ListLiteral(arg_list.l[arg_idx1:arg_idx2])
return True
def builtin_list_append(g, rt):
""" list_append (?List, +Element) """
rt._trace ('CALLED BUILTIN list_append', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) != 2:
raise PrologRuntimeError('list_append: 2 args (?List, +Element) expected.', g.location)
arg_list = rt.prolog_get_variable (args[0], g.env, g.location)
arg_element = rt.prolog_eval (args[1], g.env, g.location)
if not arg_list in g.env:
g.env[arg_list] = ListLiteral([arg_element])
else:
l2 = copy.deepcopy(g.env[arg_list].l)
l2.append(arg_element)
g.env[arg_list] = ListLiteral(l2)
return True
def do_list_extend(env, arg_list, arg_elements):
if not arg_list in env:
env[arg_list] = arg_elements
else:
l2 = copy.deepcopy(env[arg_list].l)
l2.extend(arg_elements.l)
env[arg_list] = ListLiteral(l2)
return True
def builtin_list_extend(g, rt):
""" list_extend (?List, +Elements) """
rt._trace ('CALLED BUILTIN list_extend', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) != 2:
raise PrologRuntimeError('list_extend: 2 args (?List, +Elements) expected.', g.location)
arg_list = rt.prolog_get_variable (args[0], g.env, g.location)
arg_elements = rt.prolog_eval (args[1], g.env, g.location)
if not isinstance(arg_elements, ListLiteral):
raise PrologRuntimeError('list_extend: list type elements expected', g.location)
return do_list_extend(g.env, arg_list, arg_elements)
def builtin_list_str_join(g, rt):
""" list_str_join (+Glue, +List, -Str) """
rt._trace ('CALLED BUILTIN list_str_join', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) != 3:
raise PrologRuntimeError('list_str_join: 3 args (+Glue, +List, -Str) expected.', g.location)
arg_glue = rt.prolog_get_string (args[0], g.env, g.location)
arg_list = rt.prolog_get_list (args[1], g.env, g.location)
arg_str = rt.prolog_eval (args[2], g.env, g.location)
if not arg_str:
arg_str = args[2]
if not isinstance(arg_str, Variable):
        raise PrologRuntimeError('list_str_join: 3rd arg has to be an unbound variable for now, %s found instead.' % repr(arg_str), g.location)
    g.env[arg_str.name] = StringLiteral(arg_glue.join(map(lambda a: a.s if isinstance(a, StringLiteral) else text_type(a), arg_list.l)))
return True
def builtin_list_findall(g, rt):
""" list_findall (+Template, +Goal, -List) """
rt._trace ('CALLED BUILTIN list_findall', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) != 3:
raise PrologRuntimeError('list_findall: 3 args (+Template, +Goal, -List) expected.', g.location)
arg_tmpl = rt.prolog_get_variable (args[0], g.env, g.location)
arg_goal = args[1]
arg_list = rt.prolog_get_variable (args[2], g.env, g.location)
if not isinstance (arg_goal, Predicate):
raise PrologRuntimeError('list_findall: predicate goal expected, %s found instead.' % repr(arg_goal), g.location)
solutions = rt.search_predicate(arg_goal.name, arg_goal.args, env=g.env, location=g.location)
rs = []
for s in solutions:
rs.append(s[arg_tmpl])
g.env[arg_list] = ListLiteral(rs)
return True
def builtin_dict_put(g, rt):
""" dict_put (?Dict, +Key, +Value) """
rt._trace ('CALLED BUILTIN dict_put', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) != 3:
raise PrologRuntimeError('dict_put: 3 args (?Dict, +Key, +Value) expected.', g.location)
arg_dict = rt.prolog_get_variable (args[0], g.env, g.location)
arg_key = rt.prolog_get_constant (args[1], g.env, g.location)
arg_val = rt.prolog_eval (args[2], g.env, g.location)
if not arg_dict in g.env:
g.env[arg_dict] = DictLiteral({arg_key: arg_val})
else:
d2 = copy.deepcopy(g.env[arg_dict].d)
d2[arg_key] = arg_val
g.env[arg_dict] = DictLiteral(d2)
return True
def builtin_dict_get(g, rt):
""" dict_get (+Dict, ?Key, -Value) """
rt._trace ('CALLED BUILTIN dict_get', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) != 3:
raise PrologRuntimeError('dict_get: 3 args (+Dict, ?Key, -Value) expected.', g.location)
arg_dict = rt.prolog_get_dict (args[0], g.env, g.location)
arg_key = rt.prolog_eval (args[1], g.env, g.location)
arg_val = rt.prolog_get_variable (args[2], g.env, g.location)
res = []
if isinstance(arg_key, Variable):
arg_key = arg_key.name
for key in arg_dict.d:
res.append({arg_key: StringLiteral(key), arg_val: arg_dict.d[key]})
else:
arg_key = rt.prolog_get_constant (args[1], g.env, g.location)
res.append({arg_val: arg_dict.d[arg_key]})
return res
def builtin_set_add(g, rt):
""" set_add (?Set, +Value) """
rt._trace ('CALLED BUILTIN set_add', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) != 2:
raise PrologRuntimeError('set_add: 2 args (?Set, +Value) expected.', g.location)
arg_set = rt.prolog_get_variable (args[0], g.env, g.location)
arg_val = rt.prolog_eval (args[1], g.env, g.location)
if not arg_set in g.env:
g.env[arg_set] = SetLiteral(set([arg_val]))
else:
s2 = copy.deepcopy(g.env[arg_set].s)
s2.add(arg_val)
g.env[arg_set] = SetLiteral(s2)
return True
def builtin_set_get(g, rt):
""" set_get (+Set, -Value) """
rt._trace ('CALLED BUILTIN set_get', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) != 2:
raise PrologRuntimeError('set_get: 2 args (+Set, -Value) expected.', g.location)
arg_set = rt.prolog_get_set (args[0], g.env, g.location)
arg_val = rt.prolog_get_variable (args[1], g.env, g.location)
res = []
for v in arg_set.s:
res.append({arg_val: v})
return res
def builtin_set_findall(g, rt):
""" set_findall (+Template, +Goal, -Set) """
rt._trace ('CALLED BUILTIN set_findall', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) != 3:
raise PrologRuntimeError('set_findall: 3 args (+Template, +Goal, -Set) expected.', g.location)
arg_tmpl = rt.prolog_get_variable (args[0], g.env, g.location)
arg_goal = args[1]
arg_set = rt.prolog_get_variable (args[2], g.env, g.location)
if not isinstance (arg_goal, Predicate):
raise PrologRuntimeError('set_findall: predicate goal expected, %s found instead.' % repr(arg_goal), g.location)
solutions = rt.search_predicate(arg_goal.name, arg_goal.args, env=g.env, location=g.location)
rs = set()
for s in solutions:
rs.add(s[arg_tmpl])
g.env[arg_set] = SetLiteral(rs)
return True
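# assertz/retract do not modify the logic database directly: asserted and
# retracted clauses are collected in a LogicDBOverlay that is carried along
# with the solution bindings under the reserved variable name below.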
ASSERT_OVERLAY_VAR_NAME = '__OVERLAYZ__'
def do_assertz(env, clause, res={}):
ovl = res.get(ASSERT_OVERLAY_VAR_NAME)
if ovl is None:
ovl = env.get(ASSERT_OVERLAY_VAR_NAME)
if ovl is None:
ovl = LogicDBOverlay()
else:
ovl = ovl.clone()
ovl.assertz(clause)
# important: do not modify our (default!) argument
res2 = copy.copy(res)
res2[ASSERT_OVERLAY_VAR_NAME] = ovl
return res2
def builtin_assertz(g, rt):
""" assertz (+P) """
rt._trace ('CALLED BUILTIN assertz', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) != 1:
raise PrologRuntimeError('assertz: 1 arg (+P) expected.', g.location)
arg_p = rt.prolog_get_predicate (args[0], g.env, g.location)
# if not arg_p:
# import pdb; pdb.set_trace()
clause = Clause (head=arg_p, location=g.location)
return [do_assertz(g.env, clause, res={})]
def do_assertz_predicate(env, name, args, res={}, location=None):
""" convenience function: build Clause/Predicate structure, translate python string into Predicates/Variables by
Prolog conventions (lowercase: predicate, uppercase: variable) """
if not location:
location = SourceLocation('<input>', 0, 0)
mapped_args = []
for arg in args:
        if not isinstance(arg, string_types):
mapped_args.append(arg)
continue
if arg[0].isupper():
mapped_args.append(Variable(arg))
else:
mapped_args.append(Predicate(arg))
return do_assertz (env, Clause(head=Predicate(name, mapped_args), location=location), res=res)
def do_retract(env, p, res={}):
ovl = res.get(ASSERT_OVERLAY_VAR_NAME)
if ovl is None:
ovl = env.get(ASSERT_OVERLAY_VAR_NAME)
if ovl is None:
ovl = LogicDBOverlay()
else:
ovl = ovl.clone()
ovl.retract(p)
# important: do not modify our (default!) argument
res2 = copy.copy(res)
res2[ASSERT_OVERLAY_VAR_NAME] = ovl
return res2
def builtin_retract(g, rt):
""" retract (+P) """
rt._trace ('CALLED BUILTIN retract', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) != 1:
raise PrologRuntimeError('retract: 1 arg (+P) expected.', g.location)
arg_p = rt.prolog_get_predicate (args[0], g.env, g.location)
return [do_retract(g.env, arg_p, res={})]
def builtin_setz(g, rt):
""" setz (+P, +V) """
rt._trace ('CALLED BUILTIN setz', g)
# import pdb; pdb.set_trace()
pred = g.terms[g.inx]
args = pred.args
if len(args) != 2:
raise PrologRuntimeError('setz: 2 args (+P, +V) expected.', g.location)
arg_p = rt.prolog_get_predicate (args[0], g.env, g.location)
arg_v = rt.prolog_eval (args[1], g.env, g.location)
env = do_retract(g.env, arg_p, res={})
pa = []
for arg in arg_p.args:
if isinstance(arg, Variable):
pa.append(arg_v)
else:
pa.append(arg)
env = do_assertz (env, Clause(head=Predicate(arg_p.name, pa), location=g.location), res={})
return [env]
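# gensym: per-root counters are persisted via the ORMGensymNum model, so
# successive calls with e.g. root 'sym' yield sym1, sym2, ...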
def do_gensym(rt, root):
orm_gn = rt.db.session.query(model.ORMGensymNum).filter(model.ORMGensymNum.root==root).first()
if not orm_gn:
current_num = 1
orm_gn = model.ORMGensymNum(root=root, current_num=1)
rt.db.session.add(orm_gn)
else:
current_num = orm_gn.current_num + 1
orm_gn.current_num = current_num
return root + str(current_num)
def builtin_gensym(g, rt):
""" gensym (+Root, -Unique) """
rt._trace ('CALLED BUILTIN gensym', g)
pred = g.terms[g.inx]
args = pred.args
if len(args) != 2:
raise PrologRuntimeError('gensym: 2 args (+Root, -Unique) expected.', g.location)
arg_root = rt.prolog_eval (args[0], g.env, g.location)
arg_unique = rt.prolog_get_variable (args[1], g.env, g.location)
unique = do_gensym(rt, arg_root.name)
g.env[arg_unique] = Predicate(unique)
return True
#
# functions
#
def builtin_format_str(args, env, rt, location):
rt._trace_fn ('CALLED FUNCTION format_str', env)
arg_F = args[0].s
if len(args)>1:
a = []
for arg in args[1:]:
if not isinstance(arg, Literal):
raise PrologRuntimeError ('format_str: literal expected, %s found instead' % arg, location)
a = map(lambda x: x.get_literal(), args[1:])
f_str = arg_F % tuple(a)
else:
f_str = arg_F
return StringLiteral(f_str)
def _builtin_list_lambda (args, env, rt, l, location):
if len(args) != 1:
raise PrologRuntimeError('list builtin fn: 1 arg expected.', location)
arg_list = args[0]
if not isinstance(arg_list, ListLiteral):
raise PrologRuntimeError('list builtin fn: list expected, %s found instead.' % arg_list, location)
res = reduce(l, arg_list.l)
return res, arg_list.l
# if isinstance(res, (int, float)):
# return NumberLiteral(res)
# else:
# return StringLiteral(unicode(res))
def builtin_list_max(args, env, rt, location):
rt._trace_fn ('CALLED FUNCTION list_max', env)
return _builtin_list_lambda (args, env, rt, lambda x, y: x if x > y else y, location)[0]
def builtin_list_min(args, env, rt, location):
rt._trace_fn ('CALLED FUNCTION list_min', env)
return _builtin_list_lambda (args, env, rt, lambda x, y: x if x < y else y, location)[0]
def builtin_list_sum(args, env, rt, location):
rt._trace_fn ('CALLED FUNCTION list_sum', env)
return _builtin_list_lambda (args, env, rt, lambda x, y: x + y, location)[0]
def builtin_list_avg(args, env, rt, location):
rt._trace_fn ('CALLED FUNCTION list_avg', env)
l_sum, l = _builtin_list_lambda (args, env, rt, lambda x, y: x + y, location)
assert len(l)>0
return l_sum / NumberLiteral(float(len(l)))
def builtin_list_len(args, env, rt, location):
rt._trace_fn ('CALLED FUNCTION list_len', env)
if len(args) != 1:
raise PrologRuntimeError('list builtin fn: 1 arg expected.', location)
arg_list = rt.prolog_get_list (args[0], env, location)
return NumberLiteral(len(arg_list.l))
def builtin_list_slice_fn(args, env, rt, location):
rt._trace_fn ('CALLED FUNCTION list_slice', env)
if len(args) != 3:
raise PrologRuntimeError('list_slice: 3 args (+Idx1, +Idx2, +List) expected.', location)
arg_idx1 = rt.prolog_get_int (args[0], env, location)
arg_idx2 = rt.prolog_get_int (args[1], env, location)
arg_list = rt.prolog_get_list (args[2], env, location)
return ListLiteral(arg_list.l[arg_idx1:arg_idx2])
def builtin_list_join_fn(args, env, rt, location):
rt._trace_fn ('CALLED FUNCTION list_join', env)
if len(args) != 2:
raise PrologRuntimeError('list_join: 2 args (+Glue, +List) expected.', location)
arg_glue = rt.prolog_get_string (args[0], env, location)
arg_list = rt.prolog_get_list (args[1], env, location)
    return StringLiteral(arg_glue.join(map(lambda a: a.s if isinstance(a, StringLiteral) else text_type(a), arg_list.l)))
#
# Copyright 2015, 2016, 2017 Guenter Bartsch
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
#
# Zamia-Prolog
# ------------
#
# Zamia-Prolog grammar
#
# program ::= { clause }
#
# clause ::= relation [ ':-' clause_body ] '.'
#
# relation ::= name [ '(' term { ',' term } ')' ]
#
# clause_body ::= subgoals { ';' subgoals }
#
# subgoals ::= subgoal { ',' subgoal }
#
# subgoal ::= ( term | conditional | inline )
#
# inline ::= 'inline' relation
#
# conditional ::= 'if' term 'then' subgoals [ 'else' subgoals ] 'endif'
#
# term ::= add-term { rel-op add-term }
#
# rel-op ::= '=' | '\=' | '<' | '>' | '=<' | '>=' | 'is' | ':='
#
# add-term ::= mul-term { add-op mul-term }
#
# add-op ::= '+' | '-'
#
# mul-term ::= unary-term { mul-op unary-term }
#
# mul-op ::= '*' | '/' | 'div' | 'mod'
#
# unary-term ::= [ unary-op ] primary-term
#
# unary-op ::= '+' | '-'
#
# primary-term ::= ( variable | number | string | list | relation | '(' term { ',' term } ')' | '!' )
#
# list ::= '[' [ primary-term ] ( { ',' primary-term } | '|' primary-term ) ']'
#
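# as an illustration (not from the original grammar comment), a clause this
# grammar accepts:
#
#   weather(City, nice) :- temperature(City, T), T >= 20.
#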
import os
import sys
import logging
import codecs
import re
from copy import copy
from six import StringIO, text_type
from zamiaprolog.logic import *
from zamiaprolog.errors import *
from zamiaprolog.runtime import PrologRuntime
from nltools.tokenizer import tokenize
# lexer
NAME_CHARS = set([u'a',u'b',u'c',u'd',u'e',u'f',u'g',u'h',u'i',u'j',u'k',u'l',u'm',u'n',u'o',u'p',u'q',u'r',u's',u't',u'u',u'v',u'w',u'x',u'y',u'z',
u'A',u'B',u'C',u'D',u'E',u'F',u'G',u'H',u'I',u'J',u'K',u'L',u'M',u'N',u'O',u'P',u'Q',u'R',u'S',u'T',u'U',u'V',u'W',u'X',u'Y',u'Z',
u'_',u'0',u'1',u'2',u'3',u'4',u'5',u'6',u'7',u'8',u'9'])
NAME_CHARS_EXTENDED = NAME_CHARS | set([':','|'])
SYM_NONE = 0
SYM_EOF = 1
SYM_STRING = 2 # 'abc'
SYM_NAME = 3 # abc aWord =< is div +
SYM_VARIABLE = 4 # X Variable _Variable _
SYM_NUMBER = 5
SYM_IMPL = 10 # :-
SYM_LPAREN = 11 # (
SYM_RPAREN = 12 # )
SYM_COMMA = 13 # ,
SYM_PERIOD = 14 # .
SYM_SEMICOLON = 15 # ;
SYM_COLON = 17 # :
SYM_LBRACKET = 18 # [
SYM_RBRACKET = 19 # ]
SYM_PIPE = 20 # |
SYM_CUT = 21 # !
SYM_EQUAL = 22 # =
SYM_NEQUAL = 23 # \=, !=
SYM_LESS = 24 # <
SYM_GREATER = 25 # >
SYM_LESSEQ = 26 # =<, <=
SYM_GREATEREQ = 27 # >=
SYM_IS = 28 # is
SYM_SET = 29 # set, :=
SYM_PLUS = 30 # +
SYM_MINUS = 31 # -
SYM_ASTERISK = 32 # *
SYM_DIV = 33 # div, /
SYM_MOD = 34 # mod
SYM_IF = 40 # if
SYM_THEN = 41 # then
SYM_ELSE = 42 # else
SYM_ENDIF = 43 # endif
SYM_INLINE = 44 # inline
REL_OPS = set([SYM_EQUAL,
SYM_NEQUAL,
SYM_LESS,
SYM_GREATER,
SYM_LESSEQ,
SYM_GREATEREQ,
SYM_IS,
SYM_SET])
UNARY_OPS = set([SYM_PLUS, SYM_MINUS])
ADD_OPS = set([SYM_PLUS, SYM_MINUS])
MUL_OPS = set([SYM_ASTERISK, SYM_DIV, SYM_MOD])
REL_NAMES = {
SYM_EQUAL : u'=',
SYM_NEQUAL : u'\\=',
SYM_LESS : u'<',
SYM_GREATER : u'>',
SYM_LESSEQ : u'=<',
SYM_GREATEREQ : u'>=',
SYM_IS : u'is',
SYM_SET : u'set',
SYM_PLUS : u'+',
SYM_MINUS : u'-',
SYM_ASTERISK : u'*',
SYM_DIV : u'/',
SYM_MOD : u'mod',
SYM_CUT : u'cut',
}
# structured comments
CSTATE_IDLE = 0
CSTATE_HEADER = 1
CSTATE_BODY = 2
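# a structured comment documenting a predicate looks like (illustrative):
#
#   %! doc my_predicate
#   %  free-text documentation for my_predicate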
class PrologParser(object):
def __init__(self, db):
# compile-time built-in predicates
self.directives = {}
self.db = db
def report_error(self, s):
raise PrologError ("%s: error in line %d col %d: %s" % (self.prolog_fn, self.cur_line, self.cur_col, s))
def get_location(self):
return SourceLocation(self.prolog_fn, self.cur_line, self.cur_col)
def next_c(self):
self.cur_c = text_type(self.prolog_f.read(1))
self.cur_col += 1
if self.cur_c == u'\n':
self.cur_line += 1
self.cur_col = 1
if (self.linecnt > 0) and (self.cur_line % 1000 == 0):
logging.info ("%s: parsing line %6d / %6d (%3d%%)" % (self.prolog_fn,
self.cur_line,
self.linecnt,
self.cur_line * 100 / self.linecnt))
# print '[', self.cur_c, ']',
def peek_c(self):
peek_c = text_type(self.prolog_f.read(1))
self.prolog_f.seek(-1,1)
return peek_c
def is_name_char(self, c):
if c in NAME_CHARS:
return True
if ord(c) >= 128:
return True
return False
def is_name_char_ext(self, c):
if c in NAME_CHARS_EXTENDED:
return True
if ord(c) >= 128:
return True
return False
def next_sym(self):
# whitespace, comments
self.cstate = CSTATE_IDLE
while True:
# skip whitespace
while not (self.cur_c is None) and self.cur_c.isspace():
self.next_c()
if not self.cur_c:
self.cur_sym = SYM_EOF
return
# skip comments
if self.cur_c == u'%':
comment_line = u''
self.next_c()
if self.cur_c == u'!':
self.cstate = CSTATE_HEADER
self.next_c()
while True:
if not self.cur_c:
self.cur_sym = SYM_EOF
return
if self.cur_c == u'\n':
self.next_c()
break
comment_line += self.cur_c
self.next_c()
if self.cstate == CSTATE_HEADER:
m = re.match (r"^\s*doc\s+([a-zA-Z0-9_]+)", comment_line)
if m:
self.comment_pred = m.group(1)
self.comment = ''
self.cstate = CSTATE_BODY
elif self.cstate == CSTATE_BODY:
if len(self.comment)>0:
self.comment += '\n'
self.comment += comment_line.lstrip().rstrip()
else:
break
#if self.comment_pred:
# print "COMMENT FOR %s : %s" % (self.comment_pred, self.comment)
self.cur_str = u''
# import pdb; pdb.set_trace()
if self.cur_c == u'\'' or self.cur_c == u'"':
self.cur_sym = SYM_STRING
startc = self.cur_c
while True:
self.next_c()
if not self.cur_c:
self.report_error ("Unterminated string literal.")
self.cur_sym = SYM_EOF
break
if self.cur_c == u'\\':
self.next_c()
self.cur_str += self.cur_c
self.next_c()
if self.cur_c == startc:
self.next_c()
break
self.cur_str += self.cur_c
elif self.cur_c.isdigit():
self.cur_sym = SYM_NUMBER
while True:
self.cur_str += self.cur_c
self.next_c()
if self.cur_c == '.' and not self.peek_c().isdigit():
break
if not self.cur_c or (not self.cur_c.isdigit() and self.cur_c != '.'):
break
elif self.is_name_char(self.cur_c):
self.cur_sym = SYM_VARIABLE if self.cur_c == u'_' or self.cur_c.isupper() else SYM_NAME
while True:
self.cur_str += self.cur_c
self.next_c()
if not self.cur_c or not self.is_name_char_ext(self.cur_c):
break
# keywords
if self.cur_str == 'if':
self.cur_sym = SYM_IF
elif self.cur_str == 'then':
self.cur_sym = SYM_THEN
elif self.cur_str == 'else':
self.cur_sym = SYM_ELSE
elif self.cur_str == 'endif':
self.cur_sym = SYM_ENDIF
elif self.cur_str == 'is':
self.cur_sym = SYM_IS
elif self.cur_str == 'set':
self.cur_sym = SYM_SET
elif self.cur_str == 'div':
self.cur_sym = SYM_DIV
elif self.cur_str == 'mod':
self.cur_sym = SYM_MOD
elif self.cur_str == 'inline':
self.cur_sym = SYM_INLINE
elif self.cur_c == u':':
self.next_c()
if self.cur_c == u'-':
self.next_c()
self.cur_sym = SYM_IMPL
elif self.cur_c == u'=':
self.next_c()
self.cur_sym = SYM_SET
else:
self.cur_sym = SYM_COLON
elif self.cur_c == u'(':
self.cur_sym = SYM_LPAREN
self.next_c()
elif self.cur_c == u')':
self.cur_sym = SYM_RPAREN
self.next_c()
elif self.cur_c == u',':
self.cur_sym = SYM_COMMA
self.next_c()
elif self.cur_c == u'.':
self.cur_sym = SYM_PERIOD
self.next_c()
elif self.cur_c == u';':
self.cur_sym = SYM_SEMICOLON
self.next_c()
elif self.cur_c == u'[':
self.cur_sym = SYM_LBRACKET
self.next_c()
elif self.cur_c == u']':
self.cur_sym = SYM_RBRACKET
self.next_c()
elif self.cur_c == u'|':
self.cur_sym = SYM_PIPE
self.next_c()
elif self.cur_c == u'+':
self.cur_sym = SYM_PLUS
self.next_c()
elif self.cur_c == u'-':
self.cur_sym = SYM_MINUS
self.next_c()
elif self.cur_c == u'*':
self.cur_sym = SYM_ASTERISK
self.next_c()
elif self.cur_c == u'/':
self.cur_sym = SYM_DIV
self.next_c()
elif self.cur_c == u'!':
self.next_c()
if self.cur_c == u'=':
self.next_c()
self.cur_sym = SYM_NEQUAL
else:
self.cur_sym = SYM_CUT
elif self.cur_c == u'=':
self.next_c()
if self.cur_c == u'<':
self.next_c()
self.cur_sym = SYM_LESSEQ
else:
self.cur_sym = SYM_EQUAL
elif self.cur_c == u'<':
self.next_c()
if self.cur_c == u'=':
self.next_c()
self.cur_sym = SYM_LESSEQ
else:
self.cur_sym = SYM_LESS
elif self.cur_c == u'>':
self.next_c()
if self.cur_c == u'=':
self.next_c()
self.cur_sym = SYM_GREATEREQ
else:
self.cur_sym = SYM_GREATER
elif self.cur_c == u'\\':
self.next_c()
if self.cur_c == u'=':
self.next_c()
self.cur_sym = SYM_NEQUAL
else:
self.report_error ("Lexer error: \\= expected")
else:
self.report_error ("Illegal character: " + repr(self.cur_c))
# logging.info( "[%2d]" % self.cur_sym )
#
# parser starts here
#
def parse_list(self):
res = ListLiteral([])
if self.cur_sym != SYM_RBRACKET:
res.l.append(self.primary_term())
# FIXME: implement proper head/tail mechanics
if self.cur_sym == SYM_PIPE:
self.next_sym()
res.l.append(self.primary_term())
else:
while (self.cur_sym == SYM_COMMA):
self.next_sym()
res.l.append(self.primary_term())
if self.cur_sym != SYM_RBRACKET:
self.report_error ("list: ] expected.")
self.next_sym()
return res
def primary_term(self):
res = None
if self.cur_sym == SYM_VARIABLE:
res = Variable (self.cur_str)
self.next_sym()
elif self.cur_sym == SYM_NUMBER:
res = NumberLiteral (float(self.cur_str))
self.next_sym()
elif self.cur_sym == SYM_STRING:
res = StringLiteral (self.cur_str)
self.next_sym()
elif self.cur_sym == SYM_NAME:
res = self.relation()
elif self.cur_sym in REL_NAMES:
res = self.relation()
elif self.cur_sym == SYM_LPAREN:
self.next_sym()
res = self.term()
while (self.cur_sym == SYM_COMMA):
self.next_sym()
if not isinstance(res, list):
res = [res]
res.append(self.term())
if self.cur_sym != SYM_RPAREN:
self.report_error ("primary term: ) expected.")
self.next_sym()
elif self.cur_sym == SYM_LBRACKET:
self.next_sym()
res = self.parse_list()
else:
self.report_error ("primary term: variable / number / string / name / ( expected, sym #%d found instead." % self.cur_sym)
# logging.debug ('primary_term: %s' % str(res))
return res
def unary_term(self):
o = None
if self.cur_sym in UNARY_OPS:
o = REL_NAMES[self.cur_sym]
self.next_sym()
res = self.primary_term()
if o:
if not isinstance(res, list):
res = [res]
res = Predicate (o, res)
return res
def mul_term(self):
args = []
ops = []
args.append(self.unary_term())
while self.cur_sym in MUL_OPS:
o = REL_NAMES[self.cur_sym]
ops.append(o)
self.next_sym()
args.append(self.unary_term())
res = None
while len(args)>0:
arg = args.pop()
if not res:
res = arg
else:
res = Predicate (o, [arg, res])
if len(ops)>0:
o = ops.pop()
# logging.debug ('mul_term: ' + str(res))
return res
def add_term(self):
args = []
ops = []
args.append(self.mul_term())
while self.cur_sym in ADD_OPS:
o = REL_NAMES[self.cur_sym]
ops.append(o)
self.next_sym()
args.append(self.mul_term())
res = None
while len(args)>0:
arg = args.pop()
if not res:
res = arg
else:
res = Predicate (o, [arg, res])
if len(ops)>0:
o = ops.pop()
# logging.debug ('add_term: ' + str(res))
return res
def term(self):
args = []
ops = []
args.append(self.add_term())
while self.cur_sym in REL_OPS:
ops.append(REL_NAMES[self.cur_sym])
self.next_sym()
args.append(self.add_term())
res = None
while len(args)>0:
arg = args.pop()
if not res:
res = arg
else:
res = Predicate (o, [arg, res])
if len(ops)>0:
o = ops.pop()
# logging.debug ('term: ' + str(res))
return res
def relation(self):
if self.cur_sym in REL_NAMES:
name = REL_NAMES[self.cur_sym]
elif self.cur_sym == SYM_NAME:
name = self.cur_str
else:
self.report_error ("Name expected.")
self.next_sym()
args = None
if self.cur_sym == SYM_LPAREN:
self.next_sym()
args = []
while True:
args.append(self.term())
if self.cur_sym != SYM_COMMA:
break
self.next_sym()
if self.cur_sym != SYM_RPAREN:
self.report_error ("relation: ) expected.")
self.next_sym()
return Predicate (name, args)
def _apply_bindings (self, a, bindings):
""" static application of bindings when inlining predicates """
if isinstance (a, Predicate):
aargs = []
for b in a.args:
aargs.append(self._apply_bindings (b, bindings))
return Predicate (a.name, aargs)
if isinstance (a, Variable):
if not a.name in bindings:
return a
return bindings[a.name]
if isinstance (a, ListLiteral):
rl = []
for i in a.l:
rl.append(self._apply_bindings (i, bindings))
return ListLiteral(rl)
if isinstance (a, Literal):
return a
raise Exception ('_apply_bindings not implemented yet for %s' % a.__class__)
def subgoal(self):
if self.cur_sym == SYM_IF:
self.next_sym()
c = self.term()
if self.cur_sym != SYM_THEN:
self.report_error ("subgoal: then expected.")
self.next_sym()
t = self.subgoals()
t = Predicate ('and', [c, t])
if self.cur_sym == SYM_ELSE:
self.next_sym()
e = self.subgoals()
else:
e = Predicate ('true')
nc = Predicate ('not', [c])
e = Predicate ('and', [nc, e])
if self.cur_sym != SYM_ENDIF:
self.report_error ("subgoal: endif expected.")
self.next_sym()
return [ Predicate ('or', [t, e]) ]
elif self.cur_sym == SYM_INLINE:
self.next_sym()
pred = self.relation()
# if self.cur_line == 38:
# import pdb; pdb.set_trace()
# see if we can find a clause that unifies with the pred to inline
clauses = self.db.lookup(pred.name, arity=-1)
succeeded = None
succ_bind = None
for clause in reversed(clauses):
if len(clause.head.args) != len(pred.args):
continue
bindings = {}
if self.rt._unify (pred, {}, clause.head, bindings, clause.location, overwrite_vars = False):
if succeeded:
self.report_error ("inline: %s: more than one matching pred found." % text_type(pred))
succeeded = clause
succ_bind = bindings
if not succeeded:
self.report_error ("inline: %s: no matching pred found." % text_type(pred))
res = []
if isinstance(succeeded.body, Predicate):
if succeeded.body.name == 'and':
for a in succeeded.body.args:
res.append(self._apply_bindings(a, succ_bind))
else:
res2 = []
for a in succeeded.body.args:
res2.append(self._apply_bindings(a, succ_bind))
res.append(Predicate (succeeded.body.name, res2))
elif isinstance(succeeded.body, StringLiteral):
res.append(self._apply_bindings(succeeded.body, succ_bind))
else:
self.report_error ("inline: inlined predicate has wrong form.")
return res
else:
return [ self.term() ]
def subgoals(self):
res = self.subgoal()
while self.cur_sym == SYM_COMMA:
self.next_sym()
t2 = self.subgoal()
res.extend(t2)
if len(res) == 1:
return res[0]
return Predicate ('and', res)
def clause_body(self):
res = [ self.subgoals() ]
while self.cur_sym == SYM_SEMICOLON:
self.next_sym()
sg2 = self.subgoals()
res.append(sg2)
if len(res) == 1:
return res[0]
return Predicate ('or', res)
def clause(self):
res = []
loc = self.get_location()
head = self.relation()
if self.cur_sym == SYM_IMPL:
self.next_sym()
body = self.clause_body()
c = Clause (head, body, location=loc)
else:
c = Clause (head, location=loc)
if self.cur_sym != SYM_PERIOD:
self.report_error ("clause: . expected.")
self.next_sym()
# compiler directive?
if c.head.name in self.directives:
f, user_data = self.directives[c.head.name]
f(self.db, self.module_name, c, user_data)
else:
res.append(c)
# logging.debug ('clause: ' + str(res))
return res
#
# high-level interface
#
def start (self, prolog_f, prolog_fn, linecnt = 1, module_name = None):
self.cur_c = u' '
self.cur_sym = SYM_NONE
self.cur_str = u''
self.cur_line = 1
self.cur_col = 1
self.prolog_f = prolog_f
self.prolog_fn = prolog_fn
self.linecnt = linecnt
self.module_name = module_name
self.cstate = CSTATE_IDLE
self.comment_pred = None
self.comment = u''
self.next_c()
self.next_sym()
def parse_line_clause_body (self, line):
self.start (StringIO(line), '<str>')
body = self.clause_body()
return Clause (None, body, location=self.get_location())
def parse_line_clauses (self, line):
self.start (StringIO(line), '<str>')
return self.clause()
def register_directive(self, name, f, user_data):
self.directives[name] = (f, user_data)
def clear_module (self, module_name):
self.db.clear_module(module_name)
def compile_file (self, filename, module_name, clear_module=False):
# quick source line count for progress output below
self.linecnt = 1
with codecs.open(filename, encoding='utf-8', errors='ignore', mode='r') as f:
while f.readline():
self.linecnt += 1
logging.info("%s: %d lines." % (filename, self.linecnt))
# remove old predicates of this module from db
if clear_module:
            self.clear_module (module_name)
# actual parsing starts here
with codecs.open(filename, encoding='utf-8', errors='ignore', mode='r') as f:
self.start(f, filename, module_name=module_name, linecnt=self.linecnt)
while self.cur_sym != SYM_EOF:
clauses = self.clause()
for clause in clauses:
logging.debug(u"%7d / %7d (%3d%%) > %s" % (self.cur_line, self.linecnt, self.cur_line * 100 / self.linecnt, text_type(clause)))
self.db.store (module_name, clause)
if self.comment_pred:
self.db.store_doc (module_name, self.comment_pred, self.comment)
self.comment_pred = None
self.comment = ''
self.db.commit()
logging.info("Compilation succeeded.") | zamia-prolog | /zamia-prolog-0.1.0.tar.gz/zamia-prolog-0.1.0/zamiaprolog/parser.py | parser.py |
# ZAMM
ZAMM is an informal automation tool: you show GPT how to do something once, and have it do it for you afterwards. It is good for boring but straightforward tasks that you haven't gotten around to automating with a proper script.
We are entering a time when our target audiences may include machines as well as humans. As such, this tool generates tutorials that you can edit into pleasant reading for humans and LLMs alike.
**This is an experimental tool, and has only been run on WSL Ubuntu so far.** It seems to work ok on the specific examples below. YMMV. Please feel free to add issues or PRs.
## Quickstart
`pipx` is recommended over `pip` for installation because it isolates the tool, allowing it to run with a different version of `langchain` than the one you may already have installed:
```bash
pipx install zamm
```
Teach GPT to do something:
```bash
zamm teach
```
You will be roleplaying the LLM. The results of your interaction will be output as a Markdown tutorial file, which you can then edit to be more human-readable. See [this example](zamm/resources/tutorials/hello.md) of teaching the LLM how to create a "Hello world" script.
Afterwards, you can tell the LLM to do a slightly different task using that same tutorial:
```bash
zamm execute --task 'Write a script goodbye.sh that prints out "Goodbye world". Execute it.' --documentation zamm/resources/tutorials/hello.md
```
This results in [this example transcript](demos/hello-transcript.md) of LLM interactions. **Note that GPT successfully generalizes from the tutorial to code in a completely different language based just on the difference in filenames.** Imagine having to manually add that feature to a script!
### Using internal tutorials
Select any of the [prepackaged tutorials](zamm/resources/tutorials/) as documentation by prefixing its filename with `@internal`. The `.md` extension is optional.
For example:
```bash
zamm execute --task 'Protect the `main` branch' --documentation @internal/branch-protection
```
to protect the `main` branch of the project in the current directory on GitHub. (Note that this tutorial was written with ZAMM-built projects in mind, so YMMV when using it on custom projects.)
### Sessions
Sessions are recorded so that you can recover from a crash or rerun an earlier interaction with changes. On Linux, sessions are saved to `~/.local/share/zamm/sessions/`. To continue from the most recent session, run
```bash
zamm teach --last-session
```
### Free-styling
You can also simply tell the LLM to do something without teaching it to do so beforehand. However, this is a lot more brittle. An example of a free-style command that works:
```bash
zamm execute --task 'Write a script hello.py that prints out "Hello world". Execute it.'
```
The resulting transcript can be found [here](demos/freestyle-hello-transcript.md).
## Prompting
When a step keeps failing and you want to iterate faster by repeatedly testing a single prompt, use the `prompt` command. First, write your prompt out to a file on disk. Then run:
```bash
zamm prompt --stop '\n' --raw <path-to-prompt>
```
| zamm | /zamm-0.0.4.tar.gz/zamm-0.0.4/README.md | README.md |
import json
import requests
# Functions
def req(method, target, token, data=None):
    '''Issue a request against the Zammad REST API
    Args:
        method: Must be one of GET, PUT, POST, DELETE
        target: URL (including port, if any) of the API endpoint
        token: API token
        data: Optional JSON string, sent as the request body for PUT and POST
    Returns:
        Decoded JSON data, or an error description in JSON format
'''
# Check method valid
if method not in ['GET', 'PUT', 'POST', 'DELETE']:
raise AttributeError('Invalid method %s' % method)
# Check that data supplied with PUT and POST
if (method == 'PUT' or method == 'POST') and not data:
raise TypeError('JSON must be supplied when %s is used' % method)
# Build header and data
if data:
headers = {'Authorization': 'Token token=%s' % token, 'Content-Type': 'application/json'}
try:
            json.loads(data) # validate only; the original JSON string is passed through as the request body
except json.decoder.JSONDecodeError:
raise ValueError('Invalid JSON supplied')
else:
headers = {'Authorization': 'Token token=%s' % token}
# Try connecting
try:
response = requests.request(method=method, url=target, headers=headers, data=data)
except Exception as error:
        return {'error_target': target, 'error_message': str(error), 'status_code': 1}
# If not 200 or 201
if not response.status_code == 200 and not response.status_code == 201:
return {'error_target': target, 'error_message': 'Unknown error', 'status_code': response.status_code}
# Try and decode json
try:
return response.json()
# If converting to json fails
except json.decoder.JSONDecodeError as error:
        raise Exception('Error decoding JSON returned from API')
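# Illustrative call (host and token are placeholders):
#   tickets = req('GET', 'https://zammad.example.org/api/v1/tickets', 'some_api_token')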
# Classes
class ZammadApi:
"""Work with the Zammad API"""
api_endpoint = ''
description = ''
def __init__(self, target, api_key, filter_string=None, object_type=None, object_id=None, json_data=None):
self.target = '%s/api/v1/' % target
self.api_key = api_key
        # keep unset values as None so the truthiness checks below behave as intended
        self.json_data = json.dumps(json_data) if json_data is not None else None
        self.object_type = object_type
        self.object_id = str(object_id) if object_id is not None else None
if filter_string:
self.filter_string = str(filter_string)
else:
self.filter_string = None
if self.object_type and self.object_id:
self.api_endpoint = self.target + self.api_endpoint + '?object=' + self.object_type + '&o_id=' + self.object_id
elif self.object_id:
self.api_endpoint = self.target + self.api_endpoint + '/' + str(self.object_id)
else:
self.api_endpoint = self.target + self.api_endpoint
def action(self, method):
"""Call Zammad rest api
Args:
method: What method to use with requests. One of GET, PUT, POST or DELETE
Returns:
            Whatever the req function returns, i.e. data or an error message
"""
if self.json_data:
self.result = req(method, self.api_endpoint, self.api_key, self.json_data)
else:
self.result = req(method, self.api_endpoint, self.api_key)
return self.result
# --clone
def clone(self):
# TODO clone will probably not be handled in the class?
pass
# --list
def list_objects(self):
self.results = self.objs = self.action('GET')
if self.filter_string:
self.filter_hits = []
for self.result in self.results:
for _, self.value in self.result.items():
if self.filter_string.lower() in str(self.value).lower():
self.filter_hits.append(self.result)
break
if len(self.filter_hits) > 0:
return self.filter_hits
else:
                return {'error': 'No %ss containing %s were found' % (self.description, self.filter_string)}
else:
return self.results
# --get
def get(self):
self.objs = self.action('GET')
for self.object in self.objs:
if self.object.get('object_id') == self.object_id:
return self.object
return {'error': '%s with object_id %s not found' % (self.description, self.object_id)}
# --new
def new(self):
return self.action('POST')
# --update
def update(self):
# Ask for confirmation
self.confirmation = input('\nDo you want to continue? [y/N] ')
if self.confirmation.lower() == 'y':
return self.action('PUT')
# --delete
def delete(self):
# Ask for confirmation
self.confirmation = input('\nDo you want to continue? [y/N] ')
if self.confirmation.lower() == 'y':
return self.action('DELETE')
class Tag(ZammadApi):
api_endpoint = 'tag_list'
description = 'tag'
def __init__(self, target, api_key, filter_string=None, object_type=None, object_id=None, json_data=None):
super().__init__(target, api_key, filter_string, object_type, object_id, json_data)
def list_objects(self):
if self.filter_string:
self.api_endpoint = self.api_endpoint.replace('tag_list', 'tag_search') + '?term=' + self.filter_string
self.results = self.action('GET')
if len(self.results) > 0:
return self.results
else:
            return {'error': 'No %ss containing %s were found' % (self.description, self.filter_string)}
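# Illustrative use of a concrete resource class (host, token and search term are placeholders):
#   tags = Tag('https://zammad.example.org', 'some_api_token', filter_string='vip').list_objects()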
class TicketTags(ZammadApi):
api_endpoint = 'tags'
description = 'ticket'
def __init__(self, target, api_key, filter_string=None, object_type='Ticket', object_id=None, json_data=None):
super().__init__(target, api_key, filter_string, object_type, object_id, json_data)
def get(self):
"""Get all tags for a ticket"""
return self.action('GET')
class EmailFilter(ZammadApi):
api_endpoint = 'postmaster_filters'
description = 'email filter_string'
def __init__(self, target, api_key, filter_string=None, object_type=None, object_id=None, json_data=None):
super().__init__(target, api_key, filter_string, object_type, object_id, json_data)
class EmailSignature(ZammadApi):
api_endpoint = 'signatures'
description = 'email signature'
def __init__(self, target, api_key, filter_string=None, object_type=None, object_id=None, json_data=None):
super().__init__(target, api_key, filter_string, object_type, object_id, json_data)
class Group(ZammadApi):
api_endpoint = 'groups'
description = 'group'
def __init__(self, target, api_key, filter_string=None, object_type=None, object_id=None, json_data=None):
super().__init__(target, api_key, filter_string, object_type, object_id, json_data)
class KnowledgeBase(ZammadApi):
api_endpoint = ''
description = 'knowledge base'
def __init__(self, target, api_key, filter_string=None, object_type=None, object_id=None, json_data=None):
super().__init__(target, api_key, filter_string, object_type, object_id, json_data)
class Macro(ZammadApi):
api_endpoint = 'macros'
description = 'macro'
def __init__(self, target, api_key, filter_string=None, object_type=None, object_id=None, json_data=None):
super().__init__(target, api_key, filter_string, object_type, object_id, json_data)
def get(self):
self.api_endpoint = self.api_endpoint + '/#' + self.object_id
return self.action('GET')
class Organization(ZammadApi):
api_endpoint = 'organizations'
description = 'organization'
def __init__(self, target, api_key, filter_string=None, object_type=None, object_id=None, json_data=None):
super().__init__(target, api_key, filter_string, object_type, object_id, json_data)
def list_objects(self):
if self.filter_string:
self.api_endpoint = self.api_endpoint + '/search?query=' + self.filter_string + '&limit=999999999999999'
self.results = self.action('GET')
if len(self.results) > 0:
return self.results
else:
            return {'error': 'No %ss containing %s were found' % (self.description, self.filter_string)}
class Overview(ZammadApi):
api_endpoint = 'overviews'
description = 'overview'
def __init__(self, target, api_key, filter_string=None, object_type=None, object_id=None, json_data=None):
super().__init__(target, api_key, filter_string, object_type, object_id, json_data)
class Role(ZammadApi):
api_endpoint = 'roles'
description = 'role'
def __init__(self, target, api_key, filter_string=None, object_type=None, object_id=None, json_data=None):
super().__init__(target, api_key, filter_string, object_type, object_id, json_data)
class Ticket(ZammadApi):
api_endpoint = 'tickets'
description = 'ticket'
def __init__(self, target, api_key, filter_string=None, object_type=None, object_id=None, json_data=None):
super().__init__(target, api_key, filter_string, object_type, object_id, json_data)
def get(self):
return self.action('GET')
class Trigger(ZammadApi):
api_endpoint = 'triggers'
description = 'trigger'
def __init__(self, target, api_key, filter_string=None, object_type=None, object_id=None, json_data=None):
super().__init__(target, api_key, filter_string, object_type, object_id, json_data)
class User(ZammadApi):
api_endpoint = 'users'
description = 'user'
def __init__(self, target, api_key, filter_string=None, object_type=None, object_id=None, json_data=None):
super().__init__(target, api_key, filter_string, object_type, object_id, json_data)
def get(self):
return self.action('GET')
def list_objects(self):
if self.filter_string:
self.api_endpoint = self.api_endpoint + '/search?query=' + self.filter_string
self.results = self.action('GET')
else:
self.api_endpoint_static = self.api_endpoint
self.pagination = 0
self.res_counter = 1
self.results = []
while self.res_counter > 0:
self.pagination += 1
self.api_endpoint = self.api_endpoint_static + '?page=' + str(self.pagination) + '&per_page=500'
self.temp = self.action('GET')
self.res_counter = len(self.temp)
self.results += self.temp
if len(self.results) > 0:
return self.results
else:
            return {'error': 'No %ss containing %s were found' % (self.description, self.filter_string)}
class Collection(ZammadApi):
api_endpoint = ''
description = 'collection'
def __init__(self, target, api_key, filter_string=None, object_type=None, object_id=None, json_data=None):
super().__init__(target, api_key, filter_string, object_type, object_id, json_data) | zammad-api | /zammad_api-0.1.4.tar.gz/zammad_api-0.1.4/zammad_api/zammad_api.py | zammad_api.py |
=================
Zammad API Client
=================
.. image:: https://img.shields.io/pypi/v/zammad_py.svg
:target: https://pypi.python.org/pypi/zammad_py
.. image:: https://img.shields.io/travis/joeirimpan/zammad_py.svg
:target: https://travis-ci.org/joeirimpan/zammad_py
.. image:: https://readthedocs.org/projects/zammad-py/badge/?version=latest
:target: https://zammad-py.readthedocs.io/en/latest/?badge=latest
:alt: Documentation Status
.. image:: https://pyup.io/repos/github/joeirimpan/zammad_py/shield.svg
:target: https://pyup.io/repos/github/joeirimpan/zammad_py/
:alt: Updates
Python API client for Zammad
* Free software: MIT license
* Documentation: https://zammad-py.readthedocs.io.
Quickstart
----------
.. code-block:: python
from zammad_py import ZammadAPI
# Initialize the client with the URL, username, and password
# Note the Host URL should be in this format: 'https://zammad.example.org/api/v1/'
client = ZammadAPI(url='<HOST>', username='<USERNAME>', password='<PASSWORD>')
# Example: Access all users
this_page = client.user.all()
for user in this_page:
print(user)
# Example: Get information about the current user
print(client.user.me())
# Example: Create a ticket
params = {
"title": "Help me!",
"group": "2nd Level",
"customer": "[email protected]",
"article": {
"subject": "My subject",
"body": "I am a message!",
"type": "note",
"internal": false
}
}
new_ticket = client.ticket.create(params=params)
General Methods
---------------
Most resources support these methods:
* ``.all()``: Returns a paginated response with the current page number and a list of elements.
* ``.next_page()``: Returns the next page of the current pagination object.
* ``.prev_page()``: Returns the previous page of the current pagination object.
* ``.search(params)``: Returns a paginated response based on the search parameters.
* ``.find(id)``: Returns a single object with the specified ID.
* ``.create(params)``: Creates a new object with the specified parameters.
* ``.update(params)``: Updates an existing object with the specified parameters.
* ``.destroy(id)``: Deletes an object with the specified ID.
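
For example, a minimal pagination sketch (this assumes a ``client`` configured as in the Quickstart; ``id`` and ``title`` are standard ticket attributes):

.. code-block:: python

    page = client.ticket.all()
    for ticket in page:
        print(ticket['id'], ticket['title'])
    # fetch the following page, if any
    page = page.next_page()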
Additional Resource Methods
---------------------------
* The User resource also has the ``.me()`` method to get information about the current user.
* The Ticket resource also has the ``.articles()`` method to get the articles associated with a ticket.
* The Link resource has methods to list, add, and delete links between objects.
* The TicketArticleAttachment resource has the ``.download()`` method to download a ticket attachment.
* The Object resource has the ``.execute_migrations()`` method to run migrations on an object.

You can set the ``on_behalf_of`` attribute of the ``ZammadAPI`` instance to perform actions on behalf of another user.
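
A short impersonation sketch (the email address is illustrative, and the action must be permitted by your Zammad instance):

.. code-block:: python

    client.on_behalf_of = '[email protected]'
    # subsequent requests carry the X-On-Behalf-Of header
    print(client.user.me())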
Contributing
------------
The Zammad API Client (zammad_py) welcomes contributions.
You can contribute by reporting bugs, fixing bugs, implementing new features, writing documentation, and submitting feedback.
To get started, see the contributing section in the docs!
Please ensure that your changes include tests and updated documentation if necessary.
Credits
-------
This package was created with Cookiecutter_ and the `audreyr/cookiecutter-pypackage`_ project template.
.. _Cookiecutter: https://github.com/audreyr/cookiecutter
.. _`audreyr/cookiecutter-pypackage`: https://github.com/audreyr/cookiecutter-pypackage
| zammad-py | /zammad_py-3.0.0.tar.gz/zammad_py-3.0.0/README.rst | README.rst |
import atexit
from abc import ABC, abstractmethod
from contextlib import contextmanager
from typing import Any, Generator, List, Optional, Tuple
import requests
from requests.exceptions import HTTPError
from zammad_py.exceptions import ConfigException
__all__ = ["ZammadAPI"]
class ZammadAPI:
def __init__(
self,
url: str,
username: Optional[str] = None,
password: Optional[str] = None,
http_token: Optional[str] = None,
oauth2_token: Optional[str] = None,
on_behalf_of: Optional[str] = None,
additional_headers: Optional[List[Tuple[str, str]]] = None,
) -> None:
self.url = url if url.endswith("/") else f"{url}/"
self._username = username
self._password = password
self._http_token = http_token
self._oauth2_token = oauth2_token
self._on_behalf_of = on_behalf_of
self._additional_headers = additional_headers
self._check_config()
self.session = requests.Session()
atexit.register(self.session.close)
self.session.headers["User-Agent"] = "Zammad API Python"
if self._http_token:
self.session.headers["Authorization"] = "Token token=%s" % self._http_token
        elif self._oauth2_token:
self.session.headers["Authorization"] = "Bearer %s" % self._oauth2_token
elif self._username and self._password: # noqa: SIM106
self.session.auth = (self._username, self._password)
else:
raise ValueError("Invalid Authentication information in config")
if self._on_behalf_of:
self.session.headers["X-On-Behalf-Of"] = self._on_behalf_of
if self._additional_headers:
for additional_header in self._additional_headers:
self.session.headers[additional_header[0]] = additional_header[1]
def _check_config(self) -> None:
"""Check the configuration"""
if not self.url:
raise ConfigException("Missing url in config")
if self._http_token:
return
if self._oauth2_token:
return
if not self._username:
raise ConfigException("Missing username in config")
if not self._password:
raise ConfigException("Missing password in config")
@property
def on_behalf_of(self) -> Optional[str]:
return self._on_behalf_of
@on_behalf_of.setter
def on_behalf_of(self, value: str) -> None:
self._on_behalf_of = value
self.session.headers["X-On-Behalf-Of"] = self._on_behalf_of
@contextmanager
def request_on_behalf_of(
self, on_behalf_of: str
) -> Generator["ZammadAPI", None, None]:
"""
Use X-On-Behalf-Of Header, see https://docs.zammad.org/en/latest/api/intro.html?highlight=on%20behalf#actions-on-behalf-of-other-users
:param on_behalf_of: The value of this header can be one of the following: user ID, login or email
"""
        initial_value = self.session.headers.get("X-On-Behalf-Of")
        self.session.headers["X-On-Behalf-Of"] = on_behalf_of
        try:
            yield self
        finally:
            # Restore the previous value even if the wrapped block raised.
            self.session.headers["X-On-Behalf-Of"] = initial_value
@property
def group(self) -> "Group":
"""Return a `Group` instance"""
return Group(connection=self)
@property
def organization(self) -> "Organization":
"""Return a `Organization` instance"""
return Organization(connection=self)
@property
def role(self) -> "Role":
"""Return a `Role` instance"""
return Role(connection=self)
@property
def ticket(self) -> "Ticket":
"""Return a `Ticket` instance"""
return Ticket(connection=self)
@property
def link(self):
"""Return a `Link` instance"""
return Link(connection=self)
@property
def ticket_article(self) -> "TicketArticle":
"""Return a `TicketArticle` instance"""
return TicketArticle(connection=self)
@property
def ticket_article_attachment(self) -> "TicketArticleAttachment":
"""Return a `TicketArticleAttachment` instance"""
return TicketArticleAttachment(connection=self)
@property
def ticket_article_plain(self) -> "TicketArticlePlain":
"""Return a `TicketArticlePlain` instance"""
return TicketArticlePlain(connection=self)
@property
def ticket_priority(self) -> "TicketPriority":
"""Return a `TicketPriority` instance"""
return TicketPriority(connection=self)
@property
def ticket_state(self) -> "TicketState":
"""Return a `TicketState` instance"""
return TicketState(connection=self)
@property
def user(self) -> "User":
"""Return a `User` instance"""
return User(connection=self)
class Pagination:
def __init__(
self,
items,
resource: "Resource",
function_name: str,
params=None,
page: int = 1,
) -> None:
self._items = items
self._page = page
self._resource = resource
self._params = params
self._function_name = function_name
def __iter__(self):
yield from self._items
def __len__(self) -> int:
return len(self._items)
def __getitem__(self, index: int):
return self._items[index]
def __setitem__(self, index: int, value) -> None:
self._items[index] = value
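    # next_page()/prev_page() re-invoke the originating resource method
    # ("all" or "search") with an adjusted page number, so every call
    # issues a fresh HTTP request.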
def next_page(self) -> "Pagination":
self._page += 1
return getattr(self._resource, self._function_name)(
page=self._page, **self._params
)
def prev_page(self) -> "Pagination":
self._page -= 1
return getattr(self._resource, self._function_name)(
page=self._page, **self._params
)
class Resource(ABC):
def __init__(self, connection: ZammadAPI, per_page: int = 10) -> None:
self._connection = connection
self._per_page = per_page
@property
@abstractmethod
def path_attribute(self) -> str:
...
@property
def url(self) -> str:
"""Returns a the full url concatenated with the resource class name"""
return self._connection.url + self.path_attribute
@property
def per_page(self) -> int:
return self._per_page
@per_page.setter
def per_page(self, value: int) -> None:
self._per_page = value
def _raise_or_return_json(self, response: requests.Response) -> Any:
"""Raise HTTPError before converting response to json
:param response: Request response object
"""
try:
response.raise_for_status()
except HTTPError:
raise HTTPError(response.text)
try:
json_value = response.json()
except ValueError:
return response.content
else:
return json_value
def all(self, page: int = 1, filters=None) -> Pagination:
"""Returns the list of resources
:param page: Page number
:param filters: Filter arguments like page, per_page
"""
params = filters or {}
params.update({"page": page, "per_page": self._per_page, "expand": "true"})
response = self._connection.session.get(self.url, params=params)
data = self._raise_or_return_json(response)
return Pagination(
items=data,
resource=self,
function_name="all",
params={"filters": params},
page=page,
)
def search(self, search_string: str, page: int = 1, filters=None) -> Pagination:
"""Returns the list of resources
        :param search_string: Query string to search for
:param page: Page number
:param filters: Filter arguments like page, per_page
"""
params = filters or {}
params.update({"query": search_string})
params.update({"page": page, "per_page": self._per_page, "expand": "true"})
response = self._connection.session.get(self.url + "/search", params=params)
data = self._raise_or_return_json(response)
return Pagination(
items=data,
resource=self,
function_name="search",
params={"search_string": search_string, "filters": params},
page=page,
)
def find(self, id):
"""Return the resource associated with the id
:param id: Resource id
"""
response = self._connection.session.get(self.url + "/%s" % id)
return self._raise_or_return_json(response)
def create(self, params):
"""Create the requested resource
:param params: Resource data for creating
"""
response = self._connection.session.post(self.url, json=params)
return self._raise_or_return_json(response)
def update(self, id, params):
"""Update the requested resource
:param id: Resource id
:param params: Resource data for updating
"""
response = self._connection.session.put(self.url + "/%s" % id, json=params)
return self._raise_or_return_json(response)
def destroy(self, id):
"""Delete the resource associated with the id
:param id: Resource id
"""
response = self._connection.session.delete(self.url + "/%s" % id)
return self._raise_or_return_json(response)
class Group(Resource):
path_attribute = "groups"
class Role(Resource):
path_attribute = "roles"
class Organization(Resource):
path_attribute = "organizations"
class Ticket(Resource):
path_attribute = "tickets"
def articles(self, id):
"""Returns all the articles associated with the ticket id
:param id: Ticket id
"""
response = self._connection.session.get(
self._connection.url + "ticket_articles/by_ticket/%s?expand=true" % id
)
return self._raise_or_return_json(response)
def merge(self, id, number):
"""Merges two tickets, (undocumented in Zammad Docs)
If the objects are already merged, it will return "Object already exists!"
Attention: Must use password to authenticate to Zammad, otherwise this will not work!
:param id: Ticket id of the child
:param number: Ticket Number of the Parent
"""
response = self._connection.session.put(
self._connection.url + f"ticket_merge/{id}/{number}"
)
return self._raise_or_return_json(response)
class Link(Resource):
path_attribute = "links"
def add(
self,
link_object_target_value,
link_object_source_number,
link_type="normal",
link_object_target="Ticket",
link_object_source="Ticket",
):
"""Create the link
        :param link_type: Link type ('normal', 'parent', 'child')
        :param link_object_target: (for now*: 'Ticket')
        :param link_object_target_value: Ticket ID
        :param link_object_source: (for now*: 'Ticket')
        :param link_object_source_number: Ticket Number (Not the ID!)
*Currently, only Tickets can be linked together.
"""
params = {
"link_type": link_type,
"link_object_target": link_object_target,
"link_object_target_value": link_object_target_value,
"link_object_source": link_object_source,
"link_object_source_number": link_object_source_number,
}
response = self._connection.session.post(self.url + "/add", json=params)
return self._raise_or_return_json(response)
def remove(
self,
link_object_target_value,
link_object_source_number,
link_type="normal",
link_object_target="Ticket",
link_object_source="Ticket",
):
"""Remove the Link
        :param link_type: Link type ('normal', 'parent', 'child')
        :param link_object_target: (for now: 'Ticket')
        :param link_object_target_value: Ticket ID
        :param link_object_source: (for now: 'Ticket')
        :param link_object_source_number: Ticket Number (Not the ID!)
"""
params = {
"link_type": link_type,
"link_object_target": link_object_target,
"link_object_target_value": link_object_target_value,
"link_object_source": link_object_source,
"link_object_source_number": link_object_source_number,
}
response = self._connection.session.delete(self.url + "/remove", json=params)
return self._raise_or_return_json(response)
def get(self, id):
"""Returns all the links associated with the ticket id
:param id: Ticket id
"""
params = {"link_object": "Ticket", "link_object_value": id}
response = self._connection.session.get(
self._connection.url + self.path_attribute, params=params
)
return self._raise_or_return_json(response)
class TicketArticle(Resource):
path_attribute = "ticket_articles"
class TicketArticleAttachment(Resource):
path_attribute = "ticket_attachment"
def download(self, id, article_id, ticket_id):
"""Download the ticket attachment associated with the ticket id
:param id: Ticket attachment id
:param article_id: Ticket article id
:param ticket_id: Ticket id
"""
response = self._connection.session.get(
self.url + f"/{ticket_id}/{article_id}/{id}"
)
return self._raise_or_return_json(response)
class TicketArticlePlain(Resource):
path_attribute = "ticket_article_plain"
class TicketPriority(Resource):
path_attribute = "ticket_priorities"
class TicketState(Resource):
path_attribute = "ticket_states"
class User(Resource):
path_attribute = "users"
def me(self):
"""Returns current user information"""
response = self._connection.session.get(self.url + "/me")
return self._raise_or_return_json(response)
class OnlineNotification(Resource):
path_attribute = "online_notifications"
def mark_all_read(self):
"""Marks all online notification as read"""
response = self._connection.session.post(self.url + "/mark_all_as_read")
return self._raise_or_return_json(response)
class Object(Resource):
path_attribute = "object_manager_attributes"
def execute_migrations(self):
"""Executes all database migrations"""
response = self._connection.session.post(
self._connection.url + "object_manager_attributes_execute_migrations"
)
return self._raise_or_return_json(response)
class TagList(Resource):
path_attribute = "tag_list" | zammad-py | /zammad_py-3.0.0.tar.gz/zammad_py-3.0.0/zammad_py/api.py | api.py |
======
README
======
This package provides management pages for the z3c.authenticator implementation.
The zam.skin is used as basic skin for this test.
First login as manager:
>>> from zope.testbrowser.testing import Browser
>>> mgr = Browser()
>>> mgr.addHeader('Authorization', 'Basic mgr:mgrpw')
And go to the plugins page at the site root:
>>> rootURL = 'http://localhost/++skin++ZAM'
>>> mgr.open(rootURL + '/plugins.html')
>>> mgr.url
'http://localhost/++skin++ZAM/plugins.html'
and install the authenticator plugin:
>>> mgr.getControl(name='zamplugin.authenticator.buttons.install').click()
>>> print mgr.contents
<!DOCTYPE ...
...
<h1>ZAM Plugin Management</h1>
<fieldset id="pluginManagement">
<strong class="installedPlugin">Z3C Authenticator management</strong>
<div class="description">ZAM Authenticator Management.</div>
...
Let's add and set up an Authenticator utility:
>>> from zope.app.security.interfaces import IAuthentication
>>> from z3c.authenticator.authentication import Authenticator
>>> root = getRootFolder()
>>> sm = root.getSiteManager()
>>> auth = Authenticator()
>>> sm['auth'] = auth
>>> sm.registerUtility(auth, IAuthentication)
Now you can see that we can access the authenticator utility at the site root:
>>> mgr.open(rootURL + '/++etc++site/auth/contents.html')
>>> print mgr.contents
<!DOCTYPE ...
...
<ul>
<li class="selected">
<a href="http://localhost/++skin++ZAM/++etc++site/auth/contents.html"><span>Contents</span></a>
...
...
<table>
<thead>
<tr>
<th>X</th>
<th><a href="?contents-sortOn=contents-renameColumn-1&contents-sortOrder=descending" title="Sort">Name</a></th>
<th><a href="?contents-sortOn=contents-createdColumn-2&contents-sortOrder=ascending" title="Sort">Created</a></th>
<th><a href="?contents-sortOn=contents-modifiedColumn-3&contents-sortOrder=ascending" title="Sort">Modified</a></th>
</tr>
</thead>
<tbody>
</tbody>
</table>
...
The management page can be found at the edit.html page:
>>> mgr.handleErrors = False
>>> mgr.open(rootURL + '/++etc++site/auth/edit.html')
>>> print mgr.contents
<!DOCTYPE ...
<div id="content">
<form action="http://localhost/++skin++ZAM/++etc++site/auth/edit.html"
method="post" enctype="multipart/form-data"
class="edit-form" name="form" id="form">
<div class="viewspace">
<h1>Edit Authenticator.</h1>
<div class="required-info">
<span class="required">*</span>– required
</div>
<div>
<div id="form-widgets-includeNextUtilityForAuthenticate-row"
class="row">
<div class="label">
<label for="form-widgets-includeNextUtilityForAuthenticate">
<span>Include next utility for authenticate</span>
<span class="required">*</span>
</label>
</div>
<div class="widget">
<span class="option">
<label for="form-widgets-includeNextUtilityForAuthenticate-0">
<input id="form-widgets-includeNextUtilityForAuthenticate-0"
name="form.widgets.includeNextUtilityForAuthenticate:list"
class="radio-widget required bool-field"
value="true" checked="checked" type="radio" />
<span class="label">yes</span>
</label>
</span>
<span class="option">
<label for="form-widgets-includeNextUtilityForAuthenticate-1">
<input id="form-widgets-includeNextUtilityForAuthenticate-1"
name="form.widgets.includeNextUtilityForAuthenticate:list"
class="radio-widget required bool-field"
value="false" type="radio" />
<span class="label">no</span>
</label>
</span>
<input name="form.widgets.includeNextUtilityForAuthenticate-empty-marker"
type="hidden" value="1" />
</div>
</div>
<div id="form-widgets-credentialsPlugins-row"
class="row">
<div class="label">
<label for="form-widgets-credentialsPlugins">
<span>Credentials Plugins</span>
<span class="required">*</span>
</label>
</div>
<div class="widget">
<script type="text/javascript">
/* <![CDATA[ */
function moveItems(from, to)
{
// shortcuts for selection fields
var src = document.getElementById(from);
var tgt = document.getElementById(to);
if (src.selectedIndex == -1) selectionError();
else
{
// iterate over all selected items
// --> attribute "selectedIndex" doesn't support multiple selection.
// Anyway, it works here, as a moved item isn't selected anymore,
// thus "selectedIndex" indicating the "next" selected item :)
while (src.selectedIndex > -1)
if (src.options[src.selectedIndex].selected)
{
// create a new virtal object with values of item to copy
temp = new Option(src.options[src.selectedIndex].text,
src.options[src.selectedIndex].value);
// append virtual object to targe
tgt.options[tgt.length] = temp;
// want to select newly created item
temp.selected = true;
// delete moved item in source
src.options[src.selectedIndex] = null;
}
}
}
// move item from "from" selection to "to" selection
function from2to(name)
{
moveItems(name+"-from", name+"-to");
copyDataForSubmit(name);
}
// move item from "to" selection back to "from" selection
function to2from(name)
{
moveItems(name+"-to", name+"-from");
copyDataForSubmit(name);
}
function swapFields(a, b)
{
// swap text
var temp = a.text;
a.text = b.text;
b.text = temp;
// swap value
temp = a.value;
a.value = b.value;
b.value = temp;
// swap selection
temp = a.selected;
a.selected = b.selected;
b.selected = temp;
}
// move selected item in "to" selection one up
function moveUp(name)
{
// shortcuts for selection field
var toSel = document.getElementById(name+"-to");
if (toSel.selectedIndex == -1)
selectionError();
else if (toSel.options[0].selected)
alert("Cannot move further up!");
else for (var i = 0; i < toSel.length; i++)
if (toSel.options[i].selected)
{
swapFields(toSel.options[i-1], toSel.options[i]);
copyDataForSubmit(name);
}
}
// move selected item in "to" selection one down
function moveDown(name)
{
// shortcuts for selection field
var toSel = document.getElementById(name+"-to");
if (toSel.selectedIndex == -1) {
selectionError();
} else if (toSel.options[toSel.length-1].selected) {
alert("Cannot move further down!");
} else {
for (var i = toSel.length-1; i >= 0; i--) {
if (toSel.options[i].selected) {
swapFields(toSel.options[i+1], toSel.options[i]);
}
}
copyDataForSubmit(name);
}
}
// copy each item of "toSel" into one hidden input field
function copyDataForSubmit(name)
{
// shortcuts for selection field and hidden data field
var toSel = document.getElementById(name+"-to");
var toDataContainer = document.getElementById(name+"-toDataContainer");
// delete all child nodes (--> complete content) of "toDataContainer" span
while (toDataContainer.hasChildNodes())
toDataContainer.removeChild(toDataContainer.firstChild);
// create new hidden input fields - one for each selection item of
// "to" selection
for (var i = 0; i < toSel.options.length; i++)
{
// create virtual node with suitable attributes
var newNode = document.createElement("input");
var newAttr = document.createAttribute("name");
newAttr.nodeValue = name.replace(/-/g, '.')+':list';
newNode.setAttributeNode(newAttr);
newAttr = document.createAttribute("type");
newAttr.nodeValue = "hidden";
newNode.setAttributeNode(newAttr);
newAttr = document.createAttribute("value");
newAttr.nodeValue = toSel.options[i].value;
newNode.setAttributeNode(newAttr);
// actually append virtual node to DOM tree
toDataContainer.appendChild(newNode);
}
}
// error message for missing selection
function selectionError()
{alert("Must select something!")}
/* ]]> */
</script>
<table border="0" class="ordered-selection-field">
<tr>
<td>
<select id="form-widgets-credentialsPlugins-from"
name="form.widgets.credentialsPlugins.from"
class="required list-field"
multiple="multiple" size="5">
</select>
</td>
<td>
<button onclick="javascript:from2to('form-widgets-credentialsPlugins')"
name="from2toButton" type="button" value="→">→</button>
<br />
<button onclick="javascript:to2from('form-widgets-credentialsPlugins')"
name="to2fromButton" type="button" value="←">←</button>
</td>
<td>
<select id="form-widgets-credentialsPlugins-to"
name="form.widgets.credentialsPlugins.to"
class="required list-field"
multiple="multiple" size="5">
</select>
<input name="form.widgets.credentialsPlugins-empty-marker"
type="hidden" />
<span id="form-widgets-credentialsPlugins-toDataContainer">
<script type="text/javascript">
copyDataForSubmit('form-widgets-credentialsPlugins');</script>
</span>
</td>
<td>
<button onclick="javascript:moveUp('form-widgets-credentialsPlugins')"
name="upButton" type="button" value="↑">↑</button>
<br />
<button onclick="javascript:moveDown('form-widgets-credentialsPlugins')"
name="downButton" type="button" value="↓">↓</button>
</td>
</tr>
</table>
</div>
</div>
<div id="form-widgets-authenticatorPlugins-row"
class="row">
<div class="label">
<label for="form-widgets-authenticatorPlugins">
<span>Authenticator Plugins</span>
<span class="required">*</span>
</label>
</div>
<div class="widget">
<script type="text/javascript">
/* <![CDATA[ */
function moveItems(from, to)
{
// shortcuts for selection fields
var src = document.getElementById(from);
var tgt = document.getElementById(to);
if (src.selectedIndex == -1) selectionError();
else
{
// iterate over all selected items
// --> attribute "selectedIndex" doesn't support multiple selection.
// Anyway, it works here, as a moved item isn't selected anymore,
// thus "selectedIndex" indicating the "next" selected item :)
while (src.selectedIndex > -1)
if (src.options[src.selectedIndex].selected)
{
// create a new virtal object with values of item to copy
temp = new Option(src.options[src.selectedIndex].text,
src.options[src.selectedIndex].value);
// append virtual object to targe
tgt.options[tgt.length] = temp;
// want to select newly created item
temp.selected = true;
// delete moved item in source
src.options[src.selectedIndex] = null;
}
}
}
// move item from "from" selection to "to" selection
function from2to(name)
{
moveItems(name+"-from", name+"-to");
copyDataForSubmit(name);
}
// move item from "to" selection back to "from" selection
function to2from(name)
{
moveItems(name+"-to", name+"-from");
copyDataForSubmit(name);
}
function swapFields(a, b)
{
// swap text
var temp = a.text;
a.text = b.text;
b.text = temp;
// swap value
temp = a.value;
a.value = b.value;
b.value = temp;
// swap selection
temp = a.selected;
a.selected = b.selected;
b.selected = temp;
}
// move selected item in "to" selection one up
function moveUp(name)
{
// shortcuts for selection field
var toSel = document.getElementById(name+"-to");
if (toSel.selectedIndex == -1)
selectionError();
else if (toSel.options[0].selected)
alert("Cannot move further up!");
else for (var i = 0; i < toSel.length; i++)
if (toSel.options[i].selected)
{
swapFields(toSel.options[i-1], toSel.options[i]);
copyDataForSubmit(name);
}
}
// move selected item in "to" selection one down
function moveDown(name)
{
// shortcuts for selection field
var toSel = document.getElementById(name+"-to");
if (toSel.selectedIndex == -1) {
selectionError();
} else if (toSel.options[toSel.length-1].selected) {
alert("Cannot move further down!");
} else {
for (var i = toSel.length-1; i >= 0; i--) {
if (toSel.options[i].selected) {
swapFields(toSel.options[i+1], toSel.options[i]);
}
}
copyDataForSubmit(name);
}
}
// copy each item of "toSel" into one hidden input field
function copyDataForSubmit(name)
{
// shortcuts for selection field and hidden data field
var toSel = document.getElementById(name+"-to");
var toDataContainer = document.getElementById(name+"-toDataContainer");
// delete all child nodes (--> complete content) of "toDataContainer" span
while (toDataContainer.hasChildNodes())
toDataContainer.removeChild(toDataContainer.firstChild);
// create new hidden input fields - one for each selection item of
// "to" selection
for (var i = 0; i < toSel.options.length; i++)
{
// create virtual node with suitable attributes
var newNode = document.createElement("input");
var newAttr = document.createAttribute("name");
newAttr.nodeValue = name.replace(/-/g, '.')+':list';
newNode.setAttributeNode(newAttr);
newAttr = document.createAttribute("type");
newAttr.nodeValue = "hidden";
newNode.setAttributeNode(newAttr);
newAttr = document.createAttribute("value");
newAttr.nodeValue = toSel.options[i].value;
newNode.setAttributeNode(newAttr);
// actually append virtual node to DOM tree
toDataContainer.appendChild(newNode);
}
}
// error message for missing selection
function selectionError()
{alert("Must select something!")}
/* ]]> */
</script>
<table border="0" class="ordered-selection-field">
<tr>
<td>
<select id="form-widgets-authenticatorPlugins-from"
name="form.widgets.authenticatorPlugins.from"
class="required list-field"
multiple="multiple" size="5">
</select>
</td>
<td>
<button onclick="javascript:from2to('form-widgets-authenticatorPlugins')"
name="from2toButton" type="button" value="→">→</button>
<br />
<button onclick="javascript:to2from('form-widgets-authenticatorPlugins')"
name="to2fromButton" type="button" value="←">←</button>
</td>
<td>
<select id="form-widgets-authenticatorPlugins-to"
name="form.widgets.authenticatorPlugins.to"
class="required list-field"
multiple="multiple" size="5">
</select>
<input name="form.widgets.authenticatorPlugins-empty-marker"
type="hidden" />
<span id="form-widgets-authenticatorPlugins-toDataContainer">
<script type="text/javascript">
copyDataForSubmit('form-widgets-authenticatorPlugins');</script>
</span>
</td>
<td>
<button onclick="javascript:moveUp('form-widgets-authenticatorPlugins')"
name="upButton" type="button" value="↑">↑</button>
<br />
<button onclick="javascript:moveDown('form-widgets-authenticatorPlugins')"
name="downButton" type="button" value="↓">↓</button>
</td>
</tr>
</table>
</div>
</div>
</div>
</div>
<div>
<div class="buttons">
<input id="form-buttons-apply" name="form.buttons.apply"
class="submit-widget button-field" value="Apply"
type="submit" />
</div>
</div>
</form>
...
| zamplugin.authenticator | /zamplugin.authenticator-0.6.0.tar.gz/zamplugin.authenticator-0.6.0/src/zamplugin/authenticator/README.txt | README.txt |
======
README
======
This package contains a container management view for the Zope Application
Management.
Login as mgr first:
>>> from zope.testbrowser.testing import Browser
>>> mgr = Browser()
>>> mgr.addHeader('Authorization', 'Basic mgr:mgrpw')
Check if we can access the index.html view which is registered within the ZAM
skin:
>>> mgr = Browser()
>>> mgr.handleErrors = False
>>> mgr.addHeader('Authorization', 'Basic mgr:mgrpw')
>>> rootURL = 'http://localhost/++skin++ZAM'
>>> mgr.open(rootURL + '/index.html')
>>> mgr.url
'http://localhost/++skin++ZAM/index.html'
>>> 'There is no index.html page registered for this object' in mgr.contents
True
As you can see there is no real ``contents.html`` page available, only the
default one from the skin configuration, which shows the following message:
>>> mgr.open(rootURL + '/contents.html')
>>> 'There is no contents.html page registered for this object' in mgr.contents
True
Go to the plugins page at the site root:
>>> mgr.open(rootURL + '/plugins.html')
>>> mgr.url
'http://localhost/++skin++ZAM/plugins.html'
and install the contents plugin:
>>> mgr.getControl(name='zamplugin.contents.buttons.install').click()
>>> print mgr.contents
<!DOCTYPE ...
...
<h1>ZAM Plugin Management</h1>
<fieldset id="pluginManagement">
<strong class="installedPlugin">Container management page</strong>
<div class="description">This container management page is configured for IReadContainer.</div>
...
Now you can see there is a ``contents.html`` page at the site root:
>>> mgr.open(rootURL + '/contents.html')
>>> print mgr.contents
<!DOCTYPE ...
...
<table>
<tr>
<td class="row">
<label for="search-widgets-searchterm">Search</label>
<input id="search-widgets-searchterm"
name="search.widgets.searchterm"
class="text-widget required textline-field"
value="" type="text" />
</td>
<td class="action">
<input id="search-buttons-search"
name="search.buttons.search"
class="submit-widget button-field" value="Search"
type="submit" />
</td>
</tr>
</table>
</fieldset>
<table class="contents">
<thead>
<tr>
<th>X</th>
<th><a href="?contents-sortOn=contents-renameColumn-1&contents-sortOrder=descending" title="Sort">Name</a></th>
<th><a href="?contents-sortOn=contents-createdColumn-2&contents-sortOrder=ascending" title="Sort">Created</a></th>
<th><a href="?contents-sortOn=contents-modifiedColumn-3&contents-sortOrder=ascending" title="Sort">Modified</a></th>
</tr>
</thead>
<tbody>
</tbody>
</table>
</div>
</div>
<div>
<div class="buttons">
</div>
</div>
</form>
</div>
</div>
</div>
...
| zamplugin.contents | /zamplugin.contents-0.6.0.tar.gz/zamplugin.contents-0.6.0/src/zamplugin/contents/README.txt | README.txt |
======
README
======
This package provides the server control management. The zam.skin is used as
basic skin for this test.
First login as manager:
>>> from zope.testbrowser.testing import Browser
>>> mgr = Browser()
>>> mgr.addHeader('Authorization', 'Basic mgr:mgrpw')
Check if we can access the management namespace without the installed plugin:
>>> rootURL = 'http://localhost/++skin++ZAM'
>>> mgr.open(rootURL + '/++etc++ApplicationController')
Traceback (most recent call last):
HTTPError: HTTP Error 404: Not Found
As you can see there is no real page available, only the default one from the
skin configuration, which shows the following message:
>>> 'The page you are trying to access is not available' in mgr.contents
True
Go to the plugins page at the site root:
>>> mgr.open(rootURL + '/plugins.html')
>>> mgr.url
'http://localhost/++skin++ZAM/plugins.html'
and install the control plugin:
>>> mgr.getControl(name='zamplugin.control.buttons.install').click()
>>> print mgr.contents
<!DOCTYPE ...
...
<div id="content">
<form action="./plugins.html" method="post" enctype="multipart/form-data" class="plugin-form">
<h1>ZAM Plugin Management</h1>
<fieldset id="pluginManagement">
<strong class="installedPlugin">Server control plugin</strong>
<div class="description">ZAM Control plugin.</div>
...
Now you can see there is a management namespace at the site root:
>>> mgr.open(rootURL + '/++etc++ApplicationController')
>>> print mgr.contents
<!DOCTYPE ...
...
<div id="content">
<div class="row">
<div class="label">Uptime</div>
...
</div>
<div class="row">
<div class="label">System platform</div>
...
</div>
<div class="row">
<div class="label">Zope version</div>
...
</div>
<div class="row">
<div class="label">Python version</div>
...
</div>
<div class="row">
<div class="label">Command line</div>
...
<div class="row">
<div class="label">Preferred encoding</div>
...
</div>
<div class="row">
<div class="label">FileSystem encoding</div>
...
</div>
<div class="row">
<div class="label">Process id</div>
...
</div>
<div class="row">
<div class="label">Developer mode</div>
<div class="field">On</div>
</div>
<div class="row">
<div class="label">Python path</div>
...
</div>
</div>
</div>
</div>
</div>
</body>
</html>
The ZODB control page allows you to pack the Database and shows the current
database size:
>>> mgr.open(rootURL + '/++etc++ApplicationController/ZODBControl.html')
>>> print mgr.contents
<!DOCTYPE ...
...
<div>
<form action="http://localhost/++skin++ZAM/++etc++ApplicationController/ZODBControl.html"
method="post">
<div class="row">
<table border="1">
<tr>
<th>Pack</th>
<th>Utility Name</th>
<th>Database Name</th>
<th>Size</th>
</tr>
<tr>
<td>
<input type="checkbox" name="dbs:list"
value="unnamed" />
</td>
<td>
unnamed
</td>
<td>
Demo storage 'unnamed'
</td>
<td>
2 KB
</td>
</tr>
</table>
<div class="row">
<span class="label">Keep up to</span>
<span class="field">
<input type="text" size="4" name="days" value="0" />
days
</span>
<div class="controls">
<input type="submit" name="PACK" value="Pack" />
</div>
</div>
</div>
</form>
</div>
...
The generation page shows you pending generations and will list already
processed generation steps:
>>> mgr.open(rootURL + '/++etc++ApplicationController/generations.html')
>>> print mgr.contents
<!DOCTYPE ...
...
<div id="content">
<span>Database generations</span>
<form action="http://localhost/++skin++ZAM/++etc++ApplicationController/generations.html">
<table border="1">
<tr>
<th>Application</th>
<th>Minimum Generation</th>
<th>Maximum Generation</th>
<th>Current Database Generation</th>
<th>Evolve?</th>
</tr>
<tr>
<td>
<a href="generationDetails.html?id=zope.app">zope.app</a>
</td>
<td>1</td>
<td>5</td>
<td>5</td>
<td>
<span>No, up to date</span>
</td>
</tr>
</table>
</form>
...
| zamplugin.control | /zamplugin.control-0.6.1.tar.gz/zamplugin.control-0.6.1/src/zamplugin/control/README.txt | README.txt |
__docformat__ = 'restructuredtext'
import zope.component
from ZODB.interfaces import IDatabase
from ZODB.FileStorage.FileStorage import FileStorageError
from zope.size import byteDisplay
from zope.app.applicationcontrol.i18n import ZopeMessageFactory as _
from z3c.pagelet import browser
from z3c.template.template import getPageTemplate
class ZODBControl(browser.BrowserPagelet):
template = getPageTemplate()
status = None
@property
def databases(self):
res = []
for name, db in zope.component.getUtilitiesFor(
IDatabase):
d = dict(
dbName = db.getName(),
utilName = str(name),
size = self._getSize(db),
)
res.append(d)
return res
def _getSize(self, db):
"""Get the database size in a human readable format."""
size = db.getSize()
if not isinstance(size, (int, long, float)):
return str(size)
return byteDisplay(size)
def update(self):
if self.status is not None:
return self.status
status = []
if 'PACK' in self.request.form:
dbs = self.request.form.get('dbs', [])
try:
days = int(self.request.form.get('days','').strip() or 0)
except ValueError:
status.append(_('Error: Invalid Number'))
self.status = status
return self.status
for dbName in dbs:
db = zope.component.getUtility(IDatabase, name=dbName)
try:
db.pack(days=days)
status.append(_('ZODB "${name}" successfully packed.',
mapping=dict(name=str(dbName))))
except FileStorageError, err:
status.append(_('ERROR packing ZODB "${name}": ${err}',
mapping=dict(name=str(dbName), err=err)))
self.status = status
return self.status | zamplugin.control | /zamplugin.control-0.6.1.tar.gz/zamplugin.control-0.6.1/src/zamplugin/control/browser/zodbcontrol.py | zodbcontrol.py |
__docformat__ = 'restructuredtext'
import transaction
import zope.component
from zope.app.generations.interfaces import ISchemaManager
from zope.app.generations.generations import generations_key
from zope.app.generations.generations import Context
from zope.app.renderer.rest import ReStructuredTextToHTMLRenderer
from z3c.pagelet import browser
from z3c.template.template import getPageTemplate
request_key_format = "evolve-app-%s"
class Generations(browser.BrowserPagelet):
"""GEneration management page."""
template = getPageTemplate()
def _getdb(self):
# TODO: There needs to be a better api for this
return self.request.publication.db
def evolve(self):
"""Perform a requested evolution."""
self.managers = managers = dict(
zope.component.getUtilitiesFor(ISchemaManager))
db = self._getdb()
conn = db.open()
try:
generations = conn.root().get(generations_key, ())
request = self.request
for key in generations:
generation = generations[key]
rkey = request_key_format % key
if rkey in request:
manager = managers[key]
if generation >= manager.generation:
return {'app': key, 'to': 0}
context = Context()
context.connection = conn
generation += 1
manager.evolve(context, generation)
generations[key] = generation
transaction.commit()
return {'app': key, 'to': generation}
return None
finally:
transaction.abort()
conn.close()
def applications(self):
"""Get information about database-generation status."""
result = []
db = self._getdb()
conn = db.open()
try:
managers = self.managers
generations = conn.root().get(generations_key, ())
for key in generations:
generation = generations[key]
manager = managers.get(key)
if manager is None:
continue
result.append({
'id': key,
'min': manager.minimum_generation,
'max': manager.generation,
'generation': generation,
'evolve': (generation < manager.generation
and request_key_format % key
or ''
),
})
return result
finally:
conn.close()
class GenerationDetails(object):
r"""Show Details of a particular Schema Manager's Evolvers
This method needs to use the component architecture, so
we'll set it up:
>>> from zope.app.testing.placelesssetup import setUp, tearDown
>>> setUp()
We need to define some schema managers. We'll define just one:
>>> from zope.app.generations.generations import SchemaManager
>>> from zope.app.testing import ztapi
>>> app1 = SchemaManager(0, 3, 'zope.app.generations.demo')
>>> ztapi.provideUtility(ISchemaManager, app1, 'foo.app1')
Now let's create the view:
>>> from zope.publisher.browser import TestRequest
    >>> details = GenerationDetails()
>>> details.context = None
>>> details.request = TestRequest(environ={'id': 'foo.app1'})
Let's now see that the view gets the ID correctly from the request:
>>> details.id
'foo.app1'
Now check that we get all the info from the evolvers:
>>> info = details.getEvolvers()
>>> for item in info:
... print sorted(item.items())
[('from', 0), ('info', u'<p>Evolver 1</p>\n'), ('to', 1)]
[('from', 1), ('info', u'<p>Evolver 2</p>\n'), ('to', 2)]
[('from', 2), ('info', ''), ('to', 3)]
We'd better clean up:
>>> tearDown()
"""
id = property(lambda self: self.request['id'])
def getEvolvers(self):
id = self.id
manager = zope.component.getUtility(ISchemaManager, id)
evolvers = []
for gen in range(manager.minimum_generation, manager.generation):
info = manager.getInfo(gen+1)
if info is None:
info = ''
else:
# XXX: the renderer *expects* unicode as input encoding (ajung)
renderer = ReStructuredTextToHTMLRenderer(
unicode(info), self.request)
info = renderer.render()
evolvers.append({'from': gen, 'to': gen+1, 'info': info})
return evolvers | zamplugin.control | /zamplugin.control-0.6.1.tar.gz/zamplugin.control-0.6.1/src/zamplugin/control/browser/generation.py | generation.py |
======
README
======
This package provides the error utility pages. The zam.skin is used as basic
skin for this test.
First login as manager:
>>> from zope.testbrowser.testing import Browser
>>> mgr = Browser()
>>> mgr.addHeader('Authorization', 'Basic mgr:mgrpw')
And go to the plugins page at the site root:
>>> rootURL = 'http://localhost/++skin++ZAM'
>>> mgr.open(rootURL + '/plugins.html')
>>> mgr.url
'http://localhost/++skin++ZAM/plugins.html'
and install the error plugin:
>>> mgr.getControl(name='zamplugin.error.buttons.install').click()
>>> print mgr.contents
<!DOCTYPE ...
...
<h1>ZAM Plugin Management</h1>
<fieldset id="pluginManagement">
<strong class="installedPlugin">Error reporting utility</strong>
<div class="description">ZAM Error reporting utility.</div>
...
Now you can see that we can access the error utility at the site root:
>>> mgr.open(rootURL + '/++etc++site/default/RootErrorReportingUtility')
>>> print mgr.contents
<!DOCTYPE ...
...
<div id="content">
<div>
<h3>Exception Log (most recent first)</h3>
<p>This page lists the exceptions that have occurred in this
site recently.</p>
<div>
<em> No exceptions logged. </em>
</div>
<!-- just offer reload button -->
<form action="." method="get">
<div class="row">
<div class="controls">
<input type="submit" name="submit" value="Refresh" />
</div>
</div>
</form>
</div>
...
Let's go to the edit.html page and look at the default configuration:
>>> mgr.open(
... rootURL + '/++etc++site/default/RootErrorReportingUtility/edit.html')
>>> mgr.getControl('Keep entries').value
'20'
>>> mgr.getControl(name='form.widgets.copy_to_zlog:list').value
['true']
>>> mgr.getControl('Ignore exceptions').value
'Unauthorized'
And change the configuration:
>>> mgr.getControl('Keep entries').value = '10'
>>> mgr.getControl(name='form.widgets.copy_to_zlog:list').value = ['false']
>>> mgr.getControl('Ignore exceptions').value = 'UserError'
>>> mgr.getControl('Apply').click()
Now go to the `edit.html` page and check the values again.
>>> mgr.open(
... rootURL + '/++etc++site/default/RootErrorReportingUtility/edit.html')
>>> mgr.getControl('Keep entries').value
'10'
>>> mgr.getControl(name='form.widgets.copy_to_zlog:list').value
['false']
>>> mgr.getControl('Ignore exceptions').value
'UserError'
| zamplugin.error | /zamplugin.error-0.6.0.tar.gz/zamplugin.error-0.6.0/src/zamplugin/error/README.txt | README.txt |
__docformat__ = "reStructuredText"
import zope.interface
import zope.component
import zope.schema
from zope.error.interfaces import IErrorReportingUtility
from zope.error.interfaces import ILocalErrorReportingUtility
from zamplugin.error import interfaces
class ErrorReportingUtilityManager(object):
"""Error reporting utility schema."""
zope.interface.implements(interfaces.IErrorReportingUtilityManager)
zope.component.adapts(ILocalErrorReportingUtility)
def __init__(self, context):
self.context = context
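    # Note: ``@apply`` is a Python 2 idiom -- ``apply`` calls the decorated
    # function immediately, so each attribute below ends up bound to the
    # ``property`` object returned by its factory function.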
@apply
def keep_entries():
def get(self):
data = self.context.getProperties()
return data['keep_entries']
def set(self, value):
data = self.context.getProperties()
keep_entries = value
copy_to_zlog = data['copy_to_zlog']
ignored_exceptions = data['ignored_exceptions']
self.context.setProperties(keep_entries, copy_to_zlog,
ignored_exceptions)
return property(get, set)
@apply
def copy_to_zlog():
def get(self):
data = self.context.getProperties()
return data['copy_to_zlog']
def set(self, value):
data = self.context.getProperties()
keep_entries = data['keep_entries']
copy_to_zlog = value
ignored_exceptions = data['ignored_exceptions']
self.context.setProperties(keep_entries, copy_to_zlog,
ignored_exceptions)
return property(get, set)
@apply
def ignored_exceptions():
def get(self):
data = self.context.getProperties()
return data['ignored_exceptions']
def set(self, value):
data = self.context.getProperties()
keep_entries = data['keep_entries']
copy_to_zlog = data['copy_to_zlog']
ignored_exceptions = value
self.context.setProperties(keep_entries, copy_to_zlog,
ignored_exceptions)
return property(get, set) | zamplugin.error | /zamplugin.error-0.6.0.tar.gz/zamplugin.error-0.6.0/src/zamplugin/error/manager.py | manager.py |
======
README
======
This package contains a navigation for the Zope Application Management.
Login as manager first:
>>> from zope.testbrowser.testing import Browser
>>> manager = Browser()
>>> manager.addHeader('Authorization', 'Basic mgr:mgrpw')
Check if we can access the page.html view which is registered in the
ftesting.zcml file with our skin:
>>> manager = Browser()
>>> manager.handleErrors = False
>>> manager.addHeader('Authorization', 'Basic mgr:mgrpw')
>>> skinURL = 'http://localhost/++skin++ZAM/index.html'
>>> manager.open(skinURL)
>>> manager.url
'http://localhost/++skin++ZAM/index.html'
| zamplugin.navigation | /zamplugin.navigation-0.6.0.tar.gz/zamplugin.navigation-0.6.0/src/zamplugin/navigation/README.txt | README.txt |
__docformat__ = "reStructuredText"
import zope.interface
import zope.component
import zope.schema
from zope.traversing.browser import absoluteURL
from zope.exceptions.interfaces import DuplicationError
from z3c.template.template import getPageTemplate
from z3c.template.template import getLayoutTemplate
from z3c.template.interfaces import ILayoutTemplate
from z3c.pagelet import browser
from z3c.form.interfaces import IWidgets
from z3c.form import field
from z3c.form import button
from z3c.formui import form
from z3c.formui import layout
from z3c.sampledata.interfaces import ISampleManager
from zam.api.i18n import MessageFactory as _
from zamplugin.sampledata import interfaces
class SampleData(browser.BrowserPagelet):
"""Sampledata managers."""
zope.interface.implements(interfaces.ISampleDataPage)
def managers(self):
return [name for name, util in
zope.component.getUtilitiesFor(ISampleManager)]
def update(self):
if 'manager' in self.request:
managerName = self.request['manager']
self.request.response.redirect(
absoluteURL(self.context, self.request)+
'/@@generatesample.html?manager="%s"'%(managerName))
class IGeneratorSchema(zope.interface.Interface):
"""Schema for the minimal generator parameters"""
seed = zope.schema.TextLine(
title = _('Seed'),
description = _('A seed for the random generator'),
default = u'sample',
required=False,
)
class GenerateSample(form.Form):
"""Edit all generator parameters for a given manager"""
zope.interface.implements(interfaces.ISampleDataPage)
subforms = []
workDone = False
@property
def showGenerateButton(self):
if self.request.get('manager', None) is None:
return False
return True
def updateWidgets(self):
self.widgets = zope.component.getMultiAdapter(
(self, self.request, self.getContent()), IWidgets)
self.widgets.ignoreContext = True
self.widgets.update()
def update(self):
managerName = self.request.get('manager', None)
if managerName is not None:
self.subforms = []
manager = zope.component.getUtility(ISampleManager, name=managerName)
plugins = manager.orderedPlugins()
self.fields = field.Fields()
subform = Generator(context=self.context,
request=self.request,
schema=IGeneratorSchema,
prefix='generator')
subform.fields = field.Fields(IGeneratorSchema)
self.subforms.append(subform)
for plugin in plugins:
if plugin.generator.schema is None:
continue
subform = Generator(context=self.context,
request=self.request,
plugin=plugin.generator,
prefix=str(plugin.name))
subform.fields = field.Fields(plugin.generator.schema)
self.subforms.append(subform)
super(GenerateSample, self).update()
@button.buttonAndHandler(u'Generate',
condition=lambda form: form.showGenerateButton)
def handleGenerate(self, action):
managerName = self.request['manager']
manager = zope.component.getUtility(ISampleManager, name=managerName)
generatorData = {}
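        # Collect each subform's extracted widget data, keyed by the subform
        # prefix (the plugin name), so the sample manager can pass every
        # generator its own parameter set.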
for subform in self.subforms:
subform.update()
formData = {}
data, errors = subform.widgets.extract()
generatorData[subform.prefix] = data
gen = generatorData.get('generator', {})
seed = gen.get('seed', None)
try:
self.workedOn = manager.generate(context=self.context,
param=generatorData, seed=seed)
self.workDone = True
except DuplicationError:
            self.status = _('Duplicated item')
def manager(self):
return self.request.get('manager', None)
class Generator(form.Form):
"""An editor for a single generator"""
template = getPageTemplate('subform')
def updateWidgets(self):
self.widgets = zope.component.getMultiAdapter(
(self, self.request, self.getContent()), IWidgets)
self.widgets.ignoreContext = True
self.widgets.update()
def __init__(self, context, request, plugin=None, schema=None, prefix=''):
self.plugin = plugin
self.schema = schema
self.prefix = str(prefix) # must be a string in z3c.form
super(Generator, self).__init__(context, request)
def render(self):
return self.template()
def __call__(self):
self.update()
return self.render() | zamplugin.sampledata | /zamplugin.sampledata-0.6.0.tar.gz/zamplugin.sampledata-0.6.0/src/zamplugin/sampledata/browser.py | browser.py |
======
README
======
This package provides sampledata pages for the z3c.sampledata implementation.
The zam.skin is used as basic skin for this test.
First login as manager:
>>> from zope.testbrowser.testing import Browser
>>> mgr = Browser()
>>> mgr.addHeader('Authorization', 'Basic mgr:mgrpw')
And go to the plugins page at the site root:
>>> rootURL = 'http://localhost/++skin++ZAM'
>>> mgr.open(rootURL + '/plugins.html')
>>> mgr.url
'http://localhost/++skin++ZAM/plugins.html'
and install the sample data plugin:
>>> mgr.getControl(name='zamplugin.sampledata.buttons.install').click()
>>> print mgr.contents
<!DOCTYPE ...
...
<h1>ZAM Plugin Management</h1>
<fieldset id="pluginManagement">
<strong class="installedPlugin">Sample data configuration views</strong>
<div class="description">ZAM sample data configuration views utility.</div>
...
Now you can see that we can access the sample data page at the site root:
>>> mgr.open(rootURL + '/sampledata.html')
>>> print mgr.contents
<!DOCTYPE ...
...
<div id="content">
<h1>Sample Data Generation</h1>
<div class="row">Select the sample manager</div>
</div>
</div>
</div>
...
| zamplugin.sampledata | /zamplugin.sampledata-0.6.0.tar.gz/zamplugin.sampledata-0.6.0/src/zamplugin/sampledata/README.txt | README.txt |
__docformat__ = "reStructuredText"
import warnings
import zope.interface
import zope.component
import zope.schema
from zope.publisher.browser import BrowserPage
from zope.security.proxy import removeSecurityProxy
import zope.component.interfaces
import zope.publisher.interfaces.browser
import zope.app.pagetemplate
from zope.app.component.browser.registration import IRegistrationDisplay
from zope.app.component.browser.registration import ISiteRegistrationDisplay
from z3c.template.template import getPageTemplate
from z3c.formui import form
from z3c.form import field
from z3c.form import button
from z3c.pagelet import browser
from zam.api.i18n import MessageFactory as _
def _registrations(context, comp):
sm = zope.component.getSiteManager(context)
for r in sm.registeredUtilities():
if r.component == comp or comp is None:
yield r
for r in sm.registeredAdapters():
if r.factory == comp or comp is None:
yield r
for r in sm.registeredSubscriptionAdapters():
if r.factory == comp or comp is None:
yield r
for r in sm.registeredHandlers():
if r.factory == comp or comp is None:
yield r
class RegistrationView(browser.BrowserPagelet):
zope.component.adapts(None, zope.publisher.interfaces.browser.IBrowserRequest)
template = getPageTemplate()
def registrations(self):
registrations = [
zope.component.getMultiAdapter((r, self.request), IRegistrationDisplay)
for r in sorted(_registrations(self.context, self.context))
]
return registrations
def update(self):
registrations = dict([(r.id(), r) for r in self.registrations()])
for id in self.request.form.get('ids', ()):
r = registrations.get(id)
if r is not None:
r.unregister()
def render(self):
return self.template()
class UtilityRegistrationDisplay(object):
"""Utility Registration Details"""
zope.component.adapts(zope.component.interfaces.IUtilityRegistration,
zope.publisher.interfaces.browser.IBrowserRequest)
zope.interface.implements(IRegistrationDisplay)
def __init__(self, context, request):
self.context = context
self.request = request
def provided(self):
provided = self.context.provided
return provided.__module__ + '.' + provided.__name__
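    # Build a stable, HTML-safe id: "R" plus a base64 form of
    # "<provided interface> <name>", with '+' mapped to '_' and '='/newline
    # characters stripped so it can be used as a DOM id and checkbox value.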
def id(self):
return 'R' + (("%s %s" % (self.provided(), self.context.name))
.encode('utf8')
.encode('base64')
.replace('+', '_')
.replace('=', '')
.replace('\n', '')
)
def _comment(self):
comment = self.context.info or ''
if comment:
comment = _("comment: ${comment}", mapping={"comment": comment})
return comment
def _provided(self):
name = self.context.name
provided = self.provided()
if name:
info = _("${provided} utility named '${name}'",
mapping={"provided": provided, "name": name})
else:
info = _("${provided} utility",
mapping={"provided": provided})
return info
def render(self):
return {
"info": self._provided(),
"comment": self._comment()
}
def unregister(self):
self.context.registry.unregisterUtility(
self.context.component,
self.context.provided,
self.context.name,
)
class SiteRegistrationView(RegistrationView):
def registrations(self):
registrations = [
zope.component.getMultiAdapter((r, self.request),
ISiteRegistrationDisplay)
for r in sorted(_registrations(self.context, None))
]
return registrations
class UtilitySiteRegistrationDisplay(UtilityRegistrationDisplay):
"""Utility Registration Details"""
zope.interface.implementsOnly(ISiteRegistrationDisplay)
def render(self):
url = zope.component.getMultiAdapter(
(self.context.component, self.request), name='absolute_url')
try:
url = url()
except TypeError:
url = ""
cname = getattr(self.context.component, '__name__', '')
if not cname:
cname = _("(unknown name)")
if url:
url += "/@@SelectedManagementView.html"
return {
"cname": cname,
"url": url,
"info": self._provided(),
"comment": self._comment()
}
class AddUtilityRegistration(form.Form):
"""View for registering utilities
Normally, the provided interface and name are input.
A subclass can provide an empty 'name' attribute if the component should
always be registered without a name.
A subclass can provide a 'provided' attribute if a component
should always be registered with the same interface.
"""
zope.component.adapts(None, zope.publisher.interfaces.browser.IBrowserRequest)
formErrorsMessage = _('There were some errors.')
ignoreContext = True
fields = field.Fields(
zope.schema.Choice(
__name__ = 'provided',
title=_("Provided interface"),
description=_("The interface provided by the utility"),
vocabulary="Utility Component Interfaces",
required=True,
),
zope.schema.TextLine(
__name__ = 'name',
title=_("Register As"),
description=_("The name under which the utility will be known."),
required=False,
default=u'',
missing_value=u''
),
zope.schema.Text(
__name__ = 'comment',
title=_("Comment"),
required=False,
default=u'',
missing_value=u''
),
)
name = provided = None
prefix = 'field' # in hopes of making old tests pass. :)
def __init__(self, context, request):
if self.name is not None:
self.fields = self.fields.omit('name')
if self.provided is not None:
self.fields = self.fields.omit('provided')
super(AddUtilityRegistration, self).__init__(context, request)
@property
def label(self):
return _("Register a $classname",
mapping=dict(classname=self.context.__class__.__name__)
)
@button.buttonAndHandler(_('Register'), name='register')
def handleRegister(self, action):
data, errors = self.extractData()
if errors:
self.status = self.formErrorsMessage
return
sm = zope.component.getSiteManager(self.context)
name = self.name
if name is None:
name = data['name']
provided = self.provided
if provided is None:
provided = data['provided']
# We have to remove the security proxy to save the registration
sm.registerUtility(
removeSecurityProxy(self.context),
provided, name,
data['comment'] or '')
self.request.response.redirect('@@registration.html') | zamplugin.sitemanager | /zamplugin.sitemanager-0.6.0.tar.gz/zamplugin.sitemanager-0.6.0/src/zamplugin/sitemanager/registration.py | registration.py |
======
README
======
This package contains the site manager part for the Zope Application
Management. The zam.skin is used as basic skin for this test.
First login as manager:
>>> from zope.testbrowser.testing import Browser
>>> mgr = Browser()
>>> mgr.addHeader('Authorization', 'Basic mgr:mgrpw')
And go to the plugins page at the site root:
>>> rootURL = 'http://localhost/++skin++ZAM'
>>> mgr.open(rootURL + '/plugins.html')
>>> mgr.url
'http://localhost/++skin++ZAM/plugins.html'
and install the site manager plugin:
>>> mgr.getControl(name='zamplugin.sitemanager.buttons.install').click()
>>> print mgr.contents
<!DOCTYPE ...
...
<h1>ZAM Plugin Management</h1>
<fieldset id="pluginManagement">
<strong class="installedPlugin">Site management</strong>
<div class="description">ZAM Site Manager.</div>
...
Now you can see that we can access the contents.html page for our site
management container at the site root:
>>> mgr.open(rootURL + '/++etc++site/default/@@contents.html')
>>> print mgr.contents
<!DOCTYPE ...
...
<div id="content">
<form action="http://localhost/++skin++ZAM/++etc++site/default/@@contents.html"
method="post" enctype="multipart/form-data"
class="edit-form" name="contents" id="contents">
<div class="viewspace">
<div>
<fieldset>
<legend>Search</legend>
<table>
<tr>
<td class="row">
<label for="search-widgets-searchterm">Search</label>
<input id="search-widgets-searchterm"
name="search.widgets.searchterm"
class="text-widget required textline-field"
value="" type="text" />
</td>
<td class="action">
<input id="search-buttons-search"
name="search.buttons.search"
class="submit-widget button-field" value="Search"
type="submit" />
</td>
</tr>
</table>
</fieldset>
<table class="contents">
<thead>
<tr>
<th>X</th>
<th><a href="?contents-sortOn=contents-renameColumn-1&contents-sortOrder=descending" title="Sort">Name</a></th>
<th><a href="?contents-sortOn=contents-createdColumn-2&contents-sortOrder=ascending" title="Sort">Created</a></th>
<th><a href="?contents-sortOn=contents-modifiedColumn-3&contents-sortOrder=ascending" title="Sort">Modified</a></th>
</tr>
</thead>
<tbody>
<tr class="even">
<td><input type="checkbox" class="checkbox-widget" name="contents-checkBoxColumn-0-selectedItems" value="CookieClientIdManager" /></td>
<td><a href="http://localhost/++skin++ZAM/++etc++site/default/CookieClientIdManager">CookieClientIdManager</a></td>
<td>None</td>
<td>None</td>
</tr>
<tr class="odd">
<td><input type="checkbox" class="checkbox-widget" name="contents-checkBoxColumn-0-selectedItems" value="PersistentSessionDataContainer" /></td>
<td><a href="http://localhost/++skin++ZAM/++etc++site/default/PersistentSessionDataContainer">PersistentSessionDataContainer</a></td>
<td>None</td>
<td>None</td>
</tr>
<tr class="even">
<td><input type="checkbox" class="checkbox-widget" name="contents-checkBoxColumn-0-selectedItems" value="PrincipalAnnotation" /></td>
<td><a href="http://localhost/++skin++ZAM/++etc++site/default/PrincipalAnnotation">PrincipalAnnotation</a></td>
<td>None</td>
<td>None</td>
</tr>
<tr class="odd">
<td><input type="checkbox" class="checkbox-widget" name="contents-checkBoxColumn-0-selectedItems" value="RootErrorReportingUtility" /></td>
<td><a href="http://localhost/++skin++ZAM/++etc++site/default/RootErrorReportingUtility">RootErrorReportingUtility</a></td>
<td>None</td>
<td>None</td>
</tr>
</tbody>
</table>
</div>
</div>
<div>
<div class="buttons">
<input id="contents-buttons-copy"
name="contents.buttons.copy"
class="submit-widget button-field" value="Copy"
type="submit" />
<input id="contents-buttons-cut" name="contents.buttons.cut"
class="submit-widget button-field" value="Cut"
type="submit" />
<input id="contents-buttons-delete"
name="contents.buttons.delete"
class="submit-widget button-field" value="Delete"
type="submit" />
<input id="contents-buttons-rename"
name="contents.buttons.rename"
class="submit-widget button-field" value="Rename"
type="submit" />
</div>
</div>
</form>
...
| zamplugin.sitemanager | /zamplugin.sitemanager-0.6.0.tar.gz/zamplugin.sitemanager-0.6.0/src/zamplugin/sitemanager/README.txt | README.txt |
# zampy
Tool for downloading Land Surface Model input data
[](https://github.com/EcoExtreML/zampy)
[](https://github.com/EcoExtreML/zampy/actions/workflows/build.yml)
[](https://sonarcloud.io/dashboard?id=EcoExtreML_zampy)
## Tool outline:
- Goal is to retrieve data for LSM model input.
1. First **download** the data for the specified location(s) / geographical area.
2. Be able to **load** the variables in a standardized way (standardized names & standardized units).
3. **Output** the data to standard formats:
- ALMA / PLUMBER2's ALMA formatted netCDF.
- *CMOR formatted netCDF*.
- User-interaction should go through recipes. For example, see [springtime](https://github.com/phenology/springtime/blob/main/tests/recipes/daymet.yaml).
- Recipes define:
- data folder (where data should be downloaded to)
- time extent.
- spatial location / bounding box.
- datasets to be used
- variables within datasets
- Load recipes using Pydantic ([for example](https://github.com/phenology/springtime/blob/main/src/springtime/datasets/daymet.py)).
- Support both a CLI & Python API.
Note: items in *italic* are low priority and will not be worked on for now, but we want to allow space for them in the future.
## Instructions for CDS datasets (e.g. ERA5)
To download the following datasets, users need access to CDS via cdsapi:
- ERA5
- ERA5 land
- LAI
First, you need to be a registered user on CDS via the [registration page](https://cds.climate.copernicus.eu/user/register?destination=%2F%23!%2Fhome).
Before submitting any request with `zampy`, please configure your `.cdsapirc` file following the instructions on https://cds.climate.copernicus.eu/api-how-to.
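For reference, a `.cdsapirc` (placed in your home directory) typically looks like the following, with the UID and API key taken from your CDS user profile page:
```
url: https://cds.climate.copernicus.eu/api/v2
key: <UID>:<API-key>
```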
When downloading a dataset for the first time, it is **necessary to agree to the Terms of Use of every dataset that you intend to download**. This can only be done via the CDS website. When you try to download these datasets, you will be prompted to go to the terms of use and accept them. | zampy | /zampy-0.1.0.tar.gz/zampy-0.1.0/README.md | README.md
# Using Zampy
## Installing Zampy
Zampy can be installed from PyPI:
```bash
pip install zampy
```
or directly from GitHub:
```bash
pip install git+https://github.com/EcoExtreML/zampy
```
## Configuration
Zampy needs to be configured with a simple configuration file.
You need to create this file in your home directory under `.config`: `~/.config/zampy/zampy_config.yml`. It should contain the following:
```yaml
working_directory: /path_to_a_working_directory/ #for example: /home/bart/Zampy
```
## Formulating a recipe
A "recipe" is a file with `yml` extension and has the following structure:
```yaml
name: "test_recipe"
download:
years: [2020, 2020]
bbox: [54, 6, 50, 3] # NESW
datasets:
era5:
variables:
- 10m_v_component_of_wind
- surface_pressure
convert:
convention: ALMA
frequency: 1H # outputs at 1 hour frequency. Pandas-like freq-keyword.
resolution: 0.5 # output resolution in degrees.
```
You can specify multiple datasets and multiple variables per dataset.
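For example, a recipe requesting two datasets could look like this (the `eth_canopy_height` key is an assumption based on the dataset class names used elsewhere in this project):

```yaml
name: "two_dataset_recipe"
download:
  years: [2020, 2020]
  bbox: [54, 6, 50, 3]  # NESW
  datasets:
    era5:
      variables:
        - 10m_v_component_of_wind
        - surface_pressure
    eth_canopy_height:
      variables:
        - height_of_vegetation
convert:
  convention: ALMA
  frequency: 1H
  resolution: 0.5
```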
## Running a recipe
Save this recipe to disk and run the following code in your shell:
```bash
zampy --filename /home/username/path_to_file/simple_recipe.yml
```
This will execute the recipe (i.e. download, ingest, convert, resample and save the data).
| zampy | /zampy-0.1.0.tar.gz/zampy-0.1.0/docs/using_zampy.md | using_zampy.md |
### Handle ERA5 dataset with Zampy
Demo notebook for developers.
```python
import numpy as np
from zampy.datasets import ERA5
from zampy.datasets.dataset_protocol import TimeBounds, SpatialBounds
from pathlib import Path
work_dir = Path("/home/yangliu/EcoExtreML/temp")
download_dir = work_dir / "download"
ingest_dir = work_dir / "ingest"
times = TimeBounds(np.datetime64("2010-01-01T00:00:00"), np.datetime64("2010-01-31T23:00:00"))
bbox_demo = SpatialBounds(54, 56, 1, 3)
```
Download dataset.
```python
era5_dataset = ERA5()
era5_dataset.download(
download_dir=download_dir,
time_bounds=times,
spatial_bounds=bbox_demo,
variable_names=["10m_v_component_of_wind"], #"surface_pressure", "mean_total_precipitation_rate"
)
```
Data ingestion to the unified format in `zampy`.
```python
era5_dataset.ingest(download_dir, ingest_dir)
ds = era5_dataset.load(
ingest_dir=ingest_dir,
time_bounds=times,
spatial_bounds=bbox_demo,
variable_names=["10m_v_component_of_wind"],
resolution=1.0,
regrid_method="flox",
)
ds
from zampy.datasets import converter
ds_convert = converter.convert(ds, era5_dataset, "ALMA")
ds_convert
```
| zampy | /zampy-0.1.0.tar.gz/zampy-0.1.0/demo/era5_dataset_demo.ipynb | era5_dataset_demo.ipynb |
### Handle ETH Canopy Height dataset with Zampy
Demo notebook for developers.
Import packages and configure paths.
```python
import numpy as np
from zampy.datasets import EthCanopyHeight
from zampy.datasets.dataset_protocol import TimeBounds, SpatialBounds
from pathlib import Path
work_dir = Path("/home/bart/Zampy")
download_dir = work_dir / "download"
ingest_dir = work_dir / "ingest"
times = TimeBounds(np.datetime64("2020-01-01"), np.datetime64("2020-12-31"))
bbox_demo = SpatialBounds(54, 6, 51, 3)
```
Download dataset.
```python
canopy_height_dataset = EthCanopyHeight()
canopy_height_dataset.download(
download_dir=download_dir,
time_bounds=times,
spatial_bounds=bbox_demo,
variable_names=["height_of_vegetation"],
)
```
Data ingestion to the unified format in `zampy`.
```python
canopy_height_dataset.ingest(download_dir, ingest_dir)
ds = canopy_height_dataset.load(
ingest_dir=ingest_dir,
time_bounds=times,
spatial_bounds=bbox_demo,
variable_names=["height_of_vegetation"],
resolution=0.05,
regrid_method="flox",
)
ds
from zampy.datasets import converter
ds_convert = converter.convert(ds, canopy_height_dataset, "ALMA")
```
For testing purposes only. <br>
Since the canopy height dataset doesn't need to have a unit conversion performed, we just fake a dataset to trigger the conversion step.
```python
# concerning the memory limit, we take a subset for testing
ds_test = ds.sel(latitude=slice(51, 52), longitude=slice(3.0,4.0))
ds_test
ds_test["Latent_heat_flux"] = ds_test["height_of_vegetation"] * 0.5
ds_test["Latent_heat_flux"].attrs["units"] = "watt/decimeter**2"
ds_test
from dask.distributed import Client
client = Client(n_workers=4, threads_per_worker=2)
client
ds_convert = converter.convert(ds_test, canopy_height_dataset, "ALMA")
ds_convert.compute()
# check the conversion
assert np.allclose(
ds_convert["Qle"][0,:20,:20].values / 100,
ds_test["Latent_heat_flux"][0,:20,:20].values,
equal_nan=True,
)
```
| zampy | /zampy-0.1.0.tar.gz/zampy-0.1.0/demo/eth_dataset_demo.ipynb | eth_dataset_demo.ipynb |
Overview
========
``zamqp`` is aimed at broadcasting messages and triggering events between
Python instances via AMQP.
It is based on amqplib and provides consumer and producer implementations as
well as a mechanism to trigger zope events remotely.
Helper Classes
--------------
Create properties for AMQP connection.
::
>>> from zamqp import AMQPProps
>>> props = AMQPProps(host='localhost',
... user='guest',
... password='guest',
... ssl=False,
... exchange='zamqp.broadcast.fanout',
... type='fanout',
... realm='/data')
Create AMQP connection manually.
::
>>> from zamqp import AMQPConnection
>>> connection = AMQPConnection('zamqp_queue', props)
Access connection channel.
::
>>> connection.channel
Consumer and producer
---------------------
Create consumer callback.
::
>>> def callback(message):
... pass # do anything with received message here
Create and start consumer thread.
::
>>> from zamqp import AMQPConsumer
>>> from zamqp import AMQPThread
>>> consumer = AMQPConsumer('zamqp_queue', props, callback)
>>> thread = AMQPThread(consumer)
>>> thread.start()
Create a producer and send messages. Every Python object which is serializable
can be used as a message.
::
>>> from zamqp import AMQPProducer
>>> producer = AMQPProducer('zamqp_queue', props)
>>> message = 'foo'
>>> producer(message)
Trigger events
--------------
Create an event which should be triggered in the remote instance.
::
>>> class MyEvent(object):
... def __init__(self, name):
... self.name = name
Create a listener for ``MyEvent``. This gets called when AMQP events are
received.
::
>>> def my_listener(event):
... if not isinstance(event, MyEvent):
... return
... # do something
>>> import zope.event
>>> zope.event.subscribers.append(my_listener)
The default ``AMQPEventCallback`` just calls ``zope.event.notify`` with the
received payload, which is the serialized event, in this case an instance of
``MyEvent``.
Start our AMQP consumer for events.
::
>>> exchange = 'zamqp.events.fanout'
>>> queue = 'zamqp_events'
>>> from zamqp import AMQPEventCallback
>>> props = AMQPProps(exchange=exchange)
>>> callback = AMQPEventCallback()
>>> consumer = AMQPConsumer(queue, props, callback)
>>> thread = AMQPThread(consumer)
>>> thread.start()
Trigger ``MyEvent`` to the AMQP channel. The previously started event consumer
now receives this event and triggers it locally in its own interpreter.
::
>>> from zamqp import AMQPEvent
>>> event = AMQPEvent(queue, props, MyEvent('myevent'))
>>> zope.event.notify(event)
Credits
=======
- Robert Niederreiter <[email protected]>
Changes
=======
1.0b1
-----
* make it work [rnix] | zamqp | /zamqp-1.0b1.tar.gz/zamqp-1.0b1/README.txt | README.txt |
Zamtel Bulk SMS API example in Python
=====================================
# Installation
```
pip install zamtelsms
```
# Setup
Create a .env file in the root of your project and add the following
```
API_KEY=YOUR_API_KEY_FROM_ZAMTEL
SENDER_ID=YOUR_SENDER_ID_FROM_ZAMTEL
BASE_URL=https://bulksms.zamtel.co.zm/api/v2.1/action/send/
```
* API_KEY is the API_KEY you were given by Zamtel
* SENDER_ID is the SENDER_ID you were given by Zamtel
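The package expects these values in your environment. If you want to load them yourself (for example in a script), a minimal sketch using the `python-dotenv` package looks like this:
```
import os
from dotenv import load_dotenv

load_dotenv()  # reads the .env file from the current directory

API_KEY = os.getenv("API_KEY")
SENDER_ID = os.getenv("SENDER_ID")
BASE_URL = os.getenv("BASE_URL")
```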
# Usage
You can use the function to send a single SMS. Here is an example:
```
from zamtelsms.sms import send_sms
response = send_sms('0975442232', 'Hello there, I am testing the Zamtel Bulk SMS API')
print(response)
```
You can also pass a list of phone numbers to send an SMS to multiple clients.
```
from zamtelsms.sms import send_sms
phone_numbers = ['0976xxxxxx','0976xxxxxx','0976xxxxxx','0976xxxxxx','0976xxxxxx',]
message = 'Hello there, I am testing the Zamtel Bulk SMS API'
response = send_sms(phone_numbers, message)
print(response)
# output
{'success': True, 'responseText': 'SMS(es) have been queued for delivery'}
```
It is as simple as that 😃
Happy coding!!
| zamtelsms | /zamtelsms-0.0.7.tar.gz/zamtelsms-0.0.7/README.md | README.md |
Zamtools-navigation is a simple Django application that can be used to display
navigation locations, as well as track the user's current location.
Locations can be nested under one another to produce a hierarchical layout of
the site map. Additionally, the location structure does not need to follow the
same structure as the urls.py, allowing off-site urls to be easily integrated
and tracked. Certain locations can also be hidden, allowing them to act as
structure in the hierarchy without being tracked by the current location.
Benefits
* Hierarchical representation with methods for navigating children, root
and top_level locations
* Explicit declaration of base and target urls allows for off-site
locations to be integrated into the hierarchy
* Track the {{ top_locations }} and {{ current_location }} locations using
variables supplied by the context processors
* Locations can be hidden, allowing them to act as structure in the
hierarchy, without being tracked by the current location
* Locations can be put in any order. By default they sort by the order
they were added
* Tests included
Installation
Add zamtools-navigation to your project directory or put it somewhere on the
PYTHONPATH.
You can also use easy_install:
> easy_install zamtools-navigation
In your settings.py file, add zamtools-navigation to the INSTALLED_APPS.
INSTALLED_APPS = (
'navigation',
)
Add the top_level and current context processors to your
TEMPLATE_CONTEXT_PROCESSORS.
TEMPLATE_CONTEXT_PROCESSORS = (
'navigation.context_processors.top_level',
'navigation.context_processors.current',
)
Synchronize your database.
> python manage.py syncdb
Usage
Log in to the Admin panel and create some locations.
It is usually a good idea to surround the base_url and target_url with slashes
(eg: /about/) to prevent any ambiguity when urls are similar (eg: /about and
/blog/about-last-night).
Locations that don't specify a parent are considered to be "top level"
locations.
NOTE: It is not recommended that you create a root location, since this will
break top level functionality: that root would be the only top level location.
By default, the order the locations are displayed in are the order they were
added. However, order can be more tightly controlled by using the order field.
Locations are sorted in ascending order, with lower values being listed first.
Hidden locations are ignored by all Location model methods (top_level, root,
children) and context processors (top_level, current). The purpose of hidden
locations is to act as structure in the location hierarchy, but avoid being
tracked as the current location.
Examples of hidden locations are /login/ and /logout/ pages that belong to an
/accounts/ parent. The login and logout locations can be set to hidden, causing
the current location detection to fall back on the accounts parent.
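For example, the accounts hierarchy described above could be created from a
Django shell as follows (a sketch using the Location fields documented here):

from navigation.models import Location

accounts = Location.objects.create(name='Accounts', base_url='/accounts/',
                                   target_url='/accounts/')
Location.objects.create(name='Login', base_url='/accounts/login/',
                        target_url='/accounts/login/', parent=accounts,
                        hidden=True)
Location.objects.create(name='Logout', base_url='/accounts/logout/',
                        target_url='/accounts/logout/', parent=accounts,
                        hidden=True)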
Context Variables
The zamtools-navigation context processors add top_locations and
current_location variables to the context.
top_locations is a list of all locations without parents that aren't hidden.
current_location is the location that most closely matches the current url. If
no match is found, its value is None.
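Conceptually, the two context processors are roughly equivalent to the
following sketch (not the exact implementation):

from navigation.models import Location

def top_level(request):
    return {'top_locations': Location.top_level.all()}

def current(request):
    # the longest base_url contained in the request path wins
    matches = [loc for loc in Location.objects.filter(hidden=False)
               if loc.base_url in request.path]
    current = max(matches, key=lambda loc: len(loc.base_url)) if matches else None
    return {'current_location': current}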
The following is typically how you'd generate a menu with highlighting on the
current location.
<ul class="navigation">
{% for location in top_locations %}
<li id="navigation-{{ location.slug }}" {% ifequal location current_location %}class="navigation-current"{% endifequal %}>
<a href="{{ location.target_url }}" title="{{ location.name }}">{{ location.name }}</a>
</li>
{% endfor %}
</ul>
Testing
Tests can be performed using the standard Django test command.
> python manage.py test navigation | zamtools-navigation | /zamtools-navigation-1.0.zip/zamtools-navigation-1.0/README.txt | README.txt |
from django.db import models
from django.core.urlresolvers import reverse
from fields import AutoSlugField
class TopLevelManager(models.Manager):
def get_query_set(self):
"""
Returns all top level Locations; any Location without a parent. Hidden
locations are ignored.
"""
return super(TopLevelManager, self).get_query_set().filter(parent=None, hidden=False)
class Location(models.Model):
"""
Represents a particular location within a site's map. This map can be
built as a hierarchy using the recursive parent location references.
"""
name = models.CharField(max_length=50, help_text='The text visible in the navigation.')
slug = AutoSlugField(overwrite_on_save=True)
base_url = models.CharField(max_length=500, help_text='The smallest unique portion of the url that determines the location. eg: /about/ ')
target_url = models.CharField(max_length=500, help_text='Go here when clicked. eg: /about/')
parent = models.ForeignKey('Location', null=True, blank=True, help_text='Optionally group this location under another location. Assumed to be a top level location if left blank. Does not affect urls.')
order = models.PositiveSmallIntegerField(null=True, blank=True, help_text='The order the locations appear in when listed, lower numbers come sooner. By default, locations are sorted in the order they were added.')
hidden = models.BooleanField(help_text='Prevents the location from being displayed in lists. Also prevents the location from being a top level location. Good for defining invisible structure.')
objects = models.Manager()
top_level = TopLevelManager()
def children(self):
"""
Returns all children that belong to this location one level deep.
Hidden children are ignored.
"""
return self.location_set.filter(hidden=False)
def root(self):
"""
Returns this location's top-most parent location that isn't hidden.
"""
root_location = self
if self.parent:
current = self
last_visible = current
while current.parent:
current = current.parent
if not current.hidden:
last_visible = current
root_location = last_visible
return root_location
def get_absolute_url(self):
return self.target_url
def __unicode__(self):
return self.name
class Meta:
ordering = ['order', 'id',] | zamtools-navigation | /zamtools-navigation-1.0.zip/zamtools-navigation-1.0/navigation/models.py | models.py |
from django.db.models import fields
from django.db import IntegrityError
from django.template.defaultfilters import slugify
class AutoSlugField(fields.SlugField):
"""
A SlugField that automatically populates itself at save-time from
the value of another field.
Accepts argument populate_from, which should be the name of a single
field which the AutoSlugField will populate from (default = 'name').
By default, also sets unique=True, db_index=True, and
editable=False.
Accepts additional argument, overwrite_on_save. If True, will
re-populate on every save, overwriting any existing value. If
False, will not touch existing value and will only populate if
slug field is empty. Default is False.
"""
def __init__ (self, populate_from='name', overwrite_on_save=False,
*args, **kwargs):
kwargs.setdefault('unique', True)
kwargs.setdefault('db_index', True)
kwargs.setdefault('editable', False)
self._save_populate = populate_from
self._overwrite_on_save = overwrite_on_save
super(AutoSlugField, self).__init__(*args, **kwargs)
def _populate_slug(self, model_instance):
value = getattr(model_instance, self.attname, None)
prepop = getattr(model_instance, self._save_populate, None)
if (prepop is not None) and (not value or self._overwrite_on_save):
value = slugify(prepop)
setattr(model_instance, self.attname, value)
return value
def contribute_to_class (self, cls, name):
# apparently in inheritance cases, contribute_to_class is called more
# than once, so we have to be careful not to overwrite the original
# save method.
if not hasattr(cls, '_orig_save'):
cls._orig_save = cls.save
def _new_save (self_, *args, **kwargs):
counter = 1
orig_slug = self._populate_slug(self_)
slug_len = len(orig_slug)
if slug_len > self.max_length:
orig_slug = orig_slug[:self.max_length]
slug_len = self.max_length
setattr(self_, name, orig_slug)
while True:
try:
self_._orig_save(*args, **kwargs)
break
except IntegrityError, e:
# check to be sure a slug fight caused the IntegrityError
s_e = str(e)
if name in s_e and 'unique' in s_e:
counter += 1
max_len = self.max_length - (len(str(counter)) + 1)
if slug_len > max_len:
orig_slug = orig_slug[:max_len]
setattr(self_, name, "%s-%s" % (orig_slug, counter))
else:
raise
cls.save = _new_save
super(AutoSlugField, self).contribute_to_class(cls, name) | zamtools-navigation | /zamtools-navigation-1.0.zip/zamtools-navigation-1.0/navigation/fields.py | fields.py |
from models import Article
from django.shortcuts import render_to_response
from django.template import RequestContext
from dateutil.relativedelta import relativedelta
from django.core.paginator import Paginator
from datetime import date
import settings
page_size = getattr(settings, 'NEWS_PAGE_SIZE', 10)
def index(request, extra_context={}, template_name='news/article_archive.html'):
"""
Returns a list of the recent public articles.
"""
articles = Article.recent.all()
paginator = Paginator(articles, page_size)
page_num = request.GET.get('page', 1)
page = paginator.page(page_num)
return render_to_response(
template_name,
{
'articles':articles,
'page':page,
},
context_instance=RequestContext(request, extra_context)
)
def year(request, year, extra_context={}, template_name='news/article_archive_year.html'):
"""
Returns a list of public articles that belong to the supplied year.
"""
year = int(year)
articles = Article.public.filter(date_created__year=year)
paginator = Paginator(articles, page_size)
page_num = request.GET.get('page', 1)
page = paginator.page(page_num)
current_date = date(year, 1, 1)
# if no articles exist for previous_date make it None
previous_date = current_date + relativedelta(years=-1)
if not Article.public.filter(date_created__year=previous_date.year):
previous_date = None
# if no articles exist for next_date make it None
next_date = current_date + relativedelta(years=+1)
if not Article.public.filter(date_created__year=next_date.year):
next_date = None
return render_to_response(
template_name,
{
'articles':articles,
'page':page,
'current_date':current_date,
'previous_date':previous_date,
'next_date':next_date,
},
context_instance=RequestContext(request, extra_context)
)
def month(request, year, month, extra_context={}, template_name='news/article_archive_month.html'):
"""
Returns a list of public articles that belong to the supplied year and
month.
"""
year = int(year)
month = int(month)
articles = Article.public.filter(date_created__year=year, date_created__month=month)
paginator = Paginator(articles, page_size)
page_num = request.GET.get('page', 1)
page = paginator.page(page_num)
current_date = date(year, month, 1)
# if no articles exist for previous_date make it None
previous_date = current_date + relativedelta(months=-1)
if not Article.public.filter(date_created__year=previous_date.year, date_created__month=previous_date.month):
previous_date = None
# if no articles exist for next_date make it None
next_date = current_date + relativedelta(months=+1)
if not Article.public.filter(date_created__year=next_date.year, date_created__month=next_date.month):
next_date = None
return render_to_response(
template_name,
{
'articles':articles,
'page':page,
'current_date':current_date,
'previous_date':previous_date,
'next_date':next_date,
},
context_instance=RequestContext(request, extra_context)
)
def day(request, year, month, day, extra_context={}, template_name='news/article_archive_day.html'):
"""
Returns a list of public articles that belong to the supplied year,
month and day.
"""
year = int(year)
month = int(month)
day = int(day)
articles = Article.public.filter(date_created__year=year, date_created__month=month, date_created__day=day)
paginator = Paginator(articles, page_size)
page_num = request.GET.get('page', 1)
page = paginator.page(page_num)
current_date = date(year, month, day)
# if no articles exist for previous_date make it None
previous_date = current_date + relativedelta(days=-1)
if not Article.public.filter(date_created__year=previous_date.year, date_created__month=previous_date.month, date_created__day=previous_date.day):
previous_date = None
# if no articles exist for next_date make it None
next_date = current_date + relativedelta(days=+1)
if not Article.public.filter(date_created__year=next_date.year, date_created__month=next_date.month, date_created__day=next_date.day):
next_date = None
return render_to_response(
template_name,
{
'articles':articles,
'page':page,
'current_date':current_date,
'previous_date':previous_date,
'next_date':next_date,
},
context_instance=RequestContext(request, extra_context)
)
def detail(request, year, month, day, slug, extra_context={}, template_name='news/article_detail.html'):
"""
Returns a single article based on the supplied year, month, day and slug.
"""
year = int(year)
month = int(month)
day = int(day)
articles = Article.public.filter(date_created__year=year, date_created__month=month, date_created__day=day, slug=slug)
paginator = Paginator(articles, page_size)
page_num = request.GET.get('page', 1)
page = paginator.page(page_num)
    current_date = date(year, month, day)
return render_to_response(
template_name,
{
'articles':articles,
'page':page,
'current_date':current_date,
},
context_instance=RequestContext(request, extra_context)
)
def rss_recent(request):
pass | zamtools-news | /zamtools-news-0.9.zip/zamtools-news-0.9/news/views.py | views.py |
from django.db.models import fields
from django.db import IntegrityError
from django.template.defaultfilters import slugify
class AutoSlugField(fields.SlugField):
"""
A SlugField that automatically populates itself at save-time from
the value of another field.
Accepts argument populate_from, which should be the name of a single
field which the AutoSlugField will populate from (default = 'name').
By default, also sets unique=True, db_index=True, and
editable=False.
Accepts additional argument, overwrite_on_save. If True, will
re-populate on every save, overwriting any existing value. If
False, will not touch existing value and will only populate if
slug field is empty. Default is False.
"""
def __init__ (self, populate_from='name', overwrite_on_save=False,
*args, **kwargs):
kwargs.setdefault('unique', True)
kwargs.setdefault('db_index', True)
kwargs.setdefault('editable', False)
self._save_populate = populate_from
self._overwrite_on_save = overwrite_on_save
super(AutoSlugField, self).__init__(*args, **kwargs)
def _populate_slug(self, model_instance):
value = getattr(model_instance, self.attname, None)
prepop = getattr(model_instance, self._save_populate, None)
if (prepop is not None) and (not value or self._overwrite_on_save):
value = slugify(prepop)
setattr(model_instance, self.attname, value)
return value
def contribute_to_class (self, cls, name):
# apparently in inheritance cases, contribute_to_class is called more
# than once, so we have to be careful not to overwrite the original
# save method.
if not hasattr(cls, '_orig_save'):
cls._orig_save = cls.save
def _new_save (self_, *args, **kwargs):
counter = 1
orig_slug = self._populate_slug(self_)
slug_len = len(orig_slug)
if slug_len > self.max_length:
orig_slug = orig_slug[:self.max_length]
slug_len = self.max_length
setattr(self_, name, orig_slug)
while True:
try:
self_._orig_save(*args, **kwargs)
break
except IntegrityError, e:
# check to be sure a slug fight caused the IntegrityError
s_e = str(e)
if name in s_e and 'unique' in s_e:
counter += 1
max_len = self.max_length - (len(str(counter)) + 1)
if slug_len > max_len:
orig_slug = orig_slug[:max_len]
setattr(self_, name, "%s-%s" % (orig_slug, counter))
else:
raise
cls.save = _new_save
super(AutoSlugField, self).contribute_to_class(cls, name) | zamtools-news | /zamtools-news-0.9.zip/zamtools-news-0.9/news/fields.py | fields.py |
Django application for creating member profiles for users and grouping them into teams. The member profiles are intentionally loosely coupled to users; this allows profiles to exist for people who aren't users in the system.
= Benefits =
* Member profiles can be grouped into multiple teams
* Member profiles can be made private
* Resizable and cached profile picture
* Ready-made templates
* Tests included
= Dependencies =
* [http://www.pythonware.com/products/pil/ Python Image Library (PIL)]
* [http://code.google.com/p/django-photologue/downloads/list Photologue 2.2]
* [http://www.djangoproject.com Django 1.0]
= Installation =
*NOTE: These steps assume that PIL is already installed.*
== Installing Photologue ==
Photologue is necessary to do image resizing and caching for the member profile pictures. Download Photologue 2.2 and run the following command in the download directory.
{{{
> python setup.py install
}}}
If you have setuptools installed, you can use easy_install instead.
{{{
> easy_install django-photologue
}}}
In your Django project, add Photologue to the INSTALLED_APPS settings.py file.
{{{
INSTALLED_APPS = (
'photologue',
)
}}}
You can optionally specify where you want Photologue to save uploaded images to, relative to your MEDIA_ROOT, using the PHOTOLOGUE_DIR variable in settings.py. By default, this is set to 'photologue'.
{{{
PHOTOLOGUE_DIR = 'images'
}}}
The preceding would result in files being saved in `/media/images/` assuming `media` is your MEDIA_ROOT.
Synchronize the database to create the Photologue tables.
{{{
> python manage.py syncdb
}}}
Initialize Photologue to create all necessary defaults.
{{{
> python manage.py plinit
}}}
You will then be prompted with a series of questions to fill in the defaults. Use the following settings:
{{{
Photologue requires a specific photo size to display thumbnail previews in the Django admin application.
Would you like to generate this size now? (yes, no): yes
We will now define the "admin_thumbnail" photo size:
Width (in pixels): 200
Height (in pixels): 200
Crop to fit? (yes, no): yes
Pre-cache? (yes, no): yes
Increment count? (yes, no): no
A "admin_thumbnail" photo size has been created.
Would you like to apply a sample enhancement effect to your admin thumbnails? (yes, no): no
Photologue comes with a set of templates for setting up a complete photo gallery. These templates require you to define both a "thumbnail" and "display" size.
Would you like to define them now? (yes, no): yes
We will now define the "thumbnail" photo size:
Width (in pixels): 200
Height (in pixels): 200
Crop to fit? (yes, no): yes
Pre-cache? (yes, no): yes
Increment count? (yes, no): no
A "thumbnail" photo size has been created.
We will now define the "display" photo size:
Width (in pixels): 600
Height (in pixels): 600
Crop to fit? (yes, no): yes
Pre-cache? (yes, no): yes
Increment count? (yes, no): no
A "display" photo size has been created.
Would you like to apply a sample reflection effect to your display images? (yes, no): no
}}}
Create a custom PhotoSize for the member profile picture called "avatar".
{{{
> python manage.py plcreatesize avatar
}}}
You will then be prompted with a series of questions to fill in the defaults. Use the following settings:
{{{
We will now define the "avatar" photo size:
Width (in pixels): 100
Height (in pixels): 150
Crop to fit? (yes, no): yes
Pre-cache? (yes, no): yes
Increment count? (yes, no): no
A "avatar" photo size has been created.
}}}
== Installing zamtools-profiles ==
If you have setuptools installed, you can use easy_install.
{{{
> easy_install zamtools-profiles
}}}
Add zamtools-profiles to the INSTALLED_APPS list in settings.py.
{{{
INSTALLED_APPS = (
'photologue',
'profiles',
)
}}}
Synchronize the database.
{{{
> python manage.py syncdb
}}}
If you want to use the ready-made views and templates include the following url pattern in urls.py.
{{{
urlpatterns = patterns('',
(r'^profiles/', include('profiles.urls')),
)
}}}
= Usage =
Log in to the admin interface, add a Member, and assign it an image.
Create a Team and assign your Member to it.
The Member model has a `public` manager for retrieving only members whose `is_public` field is set to True.
{{{
public_members = Member.public.all()
}}}
The Member model has an `image()` convenience method for retrieving the latest `MemberImage` assigned to it.
{{{
member = Member.public.get(id=1)
member_image = member.image()
member_image.get_avatar_url()
}}}
The image can be retrieved in a template using the following.
{{{
<div class="member">
<h1>{{ member }}</h1>
<img src="{{ member.image.get_avatar_url }}" />
</div>
}}}
The Team model has a `public_members()` convenience method for retrieving only those members on the team whose `is_public` field is set to True. | zamtools-profiles | /zamtools-profiles-0.9.zip/zamtools-profiles-0.9/README.txt | README.txt
from django.db import models
from django.contrib.auth.models import User
from photologue.models import ImageModel
from fields import AutoSlugField
class PublicManager(models.Manager):
"""
Returns only members where is_public is True.
"""
def get_query_set(self):
return super(PublicManager, self).get_query_set().filter(is_public=True)
class Member(models.Model):
first_name = models.CharField(max_length=50)
last_name = models.CharField(max_length=50)
position = models.CharField(max_length=50, null=True, blank=True, help_text='Optionally specify the person\'s position. eg: CEO, Manager, Developer, etc.')
description = models.TextField(null=True, blank=True)
email = models.EmailField(max_length=250, null=True, blank=True)
user = models.ForeignKey(User, null=True, blank=True, help_text='Optionally associate the profile with a user.')
order = models.IntegerField(default=0, null=True, blank=True, help_text='Optionally specify the order profiles should appear. Lower numbers appear sooner. By default, profiles appear in the order they were created.')
is_public = models.BooleanField(default=True, null=True, blank=True, help_text='Profile can be seen by anyone when checked.')
date_created = models.DateTimeField(auto_now_add=True)
objects = models.Manager()
public = PublicManager()
def image(self):
"""
Convenience method for returning the single most recent member image.
"""
member_images = self.memberimage_set.all()
if member_images:
return member_images[0]
def __unicode__(self):
return '%s %s' % (self.first_name, self.last_name)
class Meta:
ordering = ['-order', '-date_created', 'id']
class MemberImage(ImageModel):
member = models.ForeignKey(Member)
class Meta:
ordering = ['-id']
class Team(models.Model):
name = models.CharField(max_length=50)
slug = AutoSlugField(overwrite_on_save=True)
description = models.TextField(null=True, blank=True)
members = models.ManyToManyField(Member, null=True, blank=True)
order = models.IntegerField(default=0, null=True, blank=True, help_text='Optionally specify the order teams should appear. Lower numbers appear sooner. By default, teams appear in the order they were created.')
date_created = models.DateTimeField(auto_now_add=True)
def public_members(self):
"""
Convenience method for returning only members of the team who's
is_public is True.
"""
return self.members.filter(is_public=True)
def __unicode__(self):
return self.name
class Meta:
ordering = ['-order', '-date_created', 'id'] | zamtools-profiles | /zamtools-profiles-0.9.zip/zamtools-profiles-0.9/profiles/models.py | models.py |
from django.db.models import fields
from django.db import IntegrityError
from django.template.defaultfilters import slugify
class AutoSlugField(fields.SlugField):
"""
A SlugField that automatically populates itself at save-time from
the value of another field.
Accepts argument populate_from, which should be the name of a single
field which the AutoSlugField will populate from (default = 'name').
By default, also sets unique=True, db_index=True, and
editable=False.
Accepts additional argument, overwrite_on_save. If True, will
re-populate on every save, overwriting any existing value. If
False, will not touch existing value and will only populate if
slug field is empty. Default is False.
"""
def __init__ (self, populate_from='name', overwrite_on_save=False,
*args, **kwargs):
kwargs.setdefault('unique', True)
kwargs.setdefault('db_index', True)
kwargs.setdefault('editable', False)
self._save_populate = populate_from
self._overwrite_on_save = overwrite_on_save
super(AutoSlugField, self).__init__(*args, **kwargs)
def _populate_slug(self, model_instance):
value = getattr(model_instance, self.attname, None)
prepop = getattr(model_instance, self._save_populate, None)
if (prepop is not None) and (not value or self._overwrite_on_save):
value = slugify(prepop)
setattr(model_instance, self.attname, value)
return value
def contribute_to_class (self, cls, name):
# apparently in inheritance cases, contribute_to_class is called more
# than once, so we have to be careful not to overwrite the original
# save method.
if not hasattr(cls, '_orig_save'):
cls._orig_save = cls.save
def _new_save (self_, *args, **kwargs):
counter = 1
orig_slug = self._populate_slug(self_)
slug_len = len(orig_slug)
if slug_len > self.max_length:
orig_slug = orig_slug[:self.max_length]
slug_len = self.max_length
setattr(self_, name, orig_slug)
while True:
try:
self_._orig_save(*args, **kwargs)
break
except IntegrityError, e:
# check to be sure a slug fight caused the IntegrityError
s_e = str(e)
if name in s_e and 'unique' in s_e:
counter += 1
max_len = self.max_length - (len(str(counter)) + 1)
if slug_len > max_len:
orig_slug = orig_slug[:max_len]
setattr(self_, name, "%s-%s" % (orig_slug, counter))
else:
raise
cls.save = _new_save
super(AutoSlugField, self).contribute_to_class(cls, name) | zamtools-profiles | /zamtools-profiles-0.9.zip/zamtools-profiles-0.9/profiles/fields.py | fields.py |
# Python Zana
[![PyPi version][pypi-image]][pypi-link]
[![Supported Python versions][pyversions-image]][pyversions-link]
[![Build status][ci-image]][ci-link]
[![Coverage status][codecov-image]][codecov-link]
A Python tool kit
## Installation
Install from [PyPi](https://pypi.org/project/zana/)
```
pip install zana
```
## Documentation
Full documentation is available [here][docs-link].
## Production
__This package is still in active development and should not be used in a production environment__
[docs-link]: https://python-zana.github.io/zana/
[pypi-image]: https://img.shields.io/pypi/v/zana.svg?color=%233d85c6
[pypi-link]: https://pypi.python.org/pypi/zana
[pyversions-image]: https://img.shields.io/pypi/pyversions/zana.svg
[pyversions-link]: https://pypi.python.org/pypi/zana
[ci-image]: https://github.com/python-zana/zana/actions/workflows/workflow.yaml/badge.svg?event=push&branch=main
[ci-link]: https://github.com/python-zana/zana/actions?query=workflow%3ACI%2FCD+event%3Apush+branch%3Amain
[codecov-image]: https://codecov.io/gh/python-zana/zana/branch/main/graph/badge.svg
[codecov-link]: https://codecov.io/gh/python-zana/zana
See this release on GitHub: [v0.2.0.a0](https://github.com/python-zana/zana/releases/tag/0.2.0.a0)
| zana | /zana-0.2.0a0.tar.gz/zana-0.2.0a0/README.md | README.md |
import hmac
import hashlib
import json
import os
import re
import fedmsg
import flask
app = flask.Flask(__name__)
app.config.from_envvar('ZANATA2FEDMSG_CONFIG')
## Here's an example payload from zanata itself
#body = {
# "project": "webhooks-dummy",
# "version": "0.1",
# "docId": "foo.txt",
# "locale": "af",
# "milestone": "100% Translated",
# "eventType": "org.zanata.event.DocumentMilestoneEvent",
#}
def camel2dot(camel):
""" Convert CamelCaseText to dot.separated.text. """
regexp = r'([A-Z][a-z0-9]+|[a-z0-9]+|[A-Z0-9]+)'
return '.'.join([s.lower() for s in re.findall(regexp, camel)])
@app.route('/webhook/<provided_secret>', methods=['POST'])
def webhook(provided_secret):
# First, verify that the hashed url sent to us is the one that we provided
# to zanata in the first place.
salt = app.config['WEBHOOK_SALT']
    payload = json.loads(flask.request.data)
name = payload['project']
valid_secret = hmac.new(salt, name, hashlib.sha256).hexdigest()
if provided_secret != valid_secret:
error = "%s is not valid for %s" % (provided_secret, name)
return error, 403
# XXX - Note that the validation mechanism we used above is not really
# secure. An attacker could eavesdrop on the request and get the url and
# then perform a replay attack (since the provided_secret will be the same
# every time for each project). It would be better to use a shared salt
# and then sign each message uniquely, but we'll have to wait until zanata
# supports something like that. This will do in the mean time.
# See https://bugzilla.redhat.com/show_bug.cgi?id=1213630
# Having verified the message, we're all set. Republish it on our bus.
topic = camel2dot(payload['eventType'].split('.')[-1])
fedmsg.publish(
modname='zanata',
topic=topic,
msg=payload,
)
return "Everything is 200 OK"
if __name__ == '__main__':
app.run(host='0.0.0.0', port=8081, debug=True) | zanata2fedmsg | /zanata2fedmsg-0.2.tar.gz/zanata2fedmsg-0.2/zanata2fedmsg.py | zanata2fedmsg.py |
from collections import OrderedDict
from torch import nn
import numpy as np
import torch
class Discriminator():
def __init__(self, drop=0.1, neurons=[64, 32, 16], lr_nn=0.0001,
epochs=20, layers=3, device='cuda:0', batch_size=2**7):
self.drop, self.layers = drop, layers
self.neurons = neurons
self.lr_nn, self.epochs = lr_nn, epochs
self.device = torch.device(device)
self.batch_size = batch_size
return None
class Classifier(nn.Module):
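        """Feed-forward net: `layers` blocks of Linear -> ReLU -> Dropout, then a final Linear + Softmax."""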
def __init__(self, shape, neurons, drop, output, layers=3):
super().__init__()
neurons = [shape] + neurons
sequential = OrderedDict()
i = 0
while i < layers:
sequential[f'linear_{i}'] = nn.Linear(neurons[i], neurons[i+1])
sequential[f'relu_{i}'] = nn.ReLU()
sequential[f'drop_{i}'] = nn.Dropout(drop)
i+=1
sequential['linear_final'] = nn.Linear(neurons[i], output)
sequential['softmax'] = nn.Softmax(dim=1)
self.model = nn.Sequential(sequential)
def forward(self, x):
out = self.model(x)
return out
def fit(self, x, y):
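        # build a DataLoader from the pandas inputs, then train the nested
        # Classifier with Adam and CrossEntropyLoss for self.epochs epochs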
col_count = x.shape[1]
output = len(set(y))
x, y = torch.from_numpy(x.values).to(self.device), torch.from_numpy(y.values).to(self.device)
train_set = [(x[i].to(self.device), y[i].to(self.device)) for i in range(len(y))]
train_loader = torch.utils.data.DataLoader(train_set, batch_size=self.batch_size, shuffle=True)
loss_function = nn.CrossEntropyLoss()
discriminator = self.Classifier(col_count, self.neurons, self.drop, output, self.layers).to(self.device)
optim = torch.optim.Adam(discriminator.parameters(), lr=self.lr_nn)
for epoch in range(self.epochs):
for i, (inputs, targets) in enumerate(train_loader):
discriminator.zero_grad()
yhat = discriminator(inputs.float())
loss = loss_function(yhat, targets.long())
loss.backward()
optim.step()
self.model = discriminator
return None
def predict(self, x):
discriminator = self.model
discriminator.to(self.device).eval()
x = torch.from_numpy(x.values).to(self.device)
preds = np.argmax(discriminator(x.float()).cpu().detach(), axis=1)
return preds
def predict_proba(self, x):
discriminator = self.model
discriminator.to(self.device).eval()
x = torch.from_numpy(x.values).to(self.device)
preds = discriminator(x.float()).cpu().detach()
return preds | zangorth-helpers | /zangorth-helpers-1.3.6.tar.gz/zangorth-helpers-1.3.6/helpers/arbitraryNN.py | arbitraryNN.py |
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.exceptions import ConvergenceWarning
from sklearn.metrics import f1_score
from multiprocessing import Pool
import pandas as pd
import numpy as np
import warnings
warnings.filterwarnings('error', category=ConvergenceWarning)
##################
# Model Function #
##################
def model_fit(x_train, y_train, x_test, y_test, model, pop, i):
if model == 'logit':
solver = 'lbfgs' if 'solver' not in pop else pop['solver'][i]
penalty = 'l2' if 'penalty' not in pop else pop['penalty'][i]
model = LogisticRegression(penalty=penalty, solver=solver)
try:
model.fit(x_train, y_train)
predictions = model.predict(x_test)
return f1_score(y_test, predictions, average='micro')
except ConvergenceWarning:
return 0
elif model == 'rfc':
n_samples = 1000 if 'n_samples' not in pop else pop['n_samples'][i]
max_depth = None if 'max_depth_rfc' not in pop else pop['max_depth_rfc'][i]
        # sklearn's RandomForestClassifier has no `n_samples` argument; the
        # closest match is n_estimators (the number of trees), so it is used here
        model = RandomForestClassifier(n_estimators=n_samples, max_depth=max_depth)
model.fit(x_train, y_train)
predictions = model.predict(x_test)
return f1_score(y_test, predictions, average='micro')
elif model == 'gbc':
learning_rate = 0.1 if 'learning_rate' not in pop else pop['learning_rate'][i]
n_estimators = 100 if 'n_estimators' not in pop else pop['n_estimators'][i]
max_depth = 3 if 'max_depth_gbc' not in pop else pop['max_depth_gbc'][i]
model = GradientBoostingClassifier(n_estimators=n_estimators, learning_rate=learning_rate, max_depth=max_depth)
model.fit(x_train, y_train)
predictions = model.predict(x_test)
return f1_score(y_test, predictions, average='micro')
class GA:
def __init__(self, x, y, pop, cv=[], model='logit', max_features=None):
self.x, self.y = x, y
self.pop, self.cv = pop, cv
self.max_features = max_features
self.model = model
self.max = []
####################
# Fitness Function #
####################
def fit(self, i=0):
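        # evaluate solution i: the 'rows' mask drops observations from training,
        # the 'cols' mask selects features; solutions exceeding max_features
        # score 0, otherwise the score is the mean micro-F1 across CV folds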
max_features = self.max_features if self.max_features != None else self.x.shape[1]
rows = self.x.loc[self.pop['rows'][i] == 1].index
cols = self.pop['cols'][i]
fit_list = []
if sum(cols) > max_features:
fit_list.append(0)
else:
x = self.x.dropna()
y = self.y.loc[x.index]
x = x[[x.columns[j] for j in range(x.shape[1]) if cols[j] == 1]]
for fold in self.cv:
x_test = x.loc[x.index.isin(fold)]
x_train = x.loc[~x.index.isin(fold)]
x_train = x_train.loc[~x_train.index.isin(rows)]
y_test = y.loc[x_test.index]
y_train = y.loc[x_train.index]
fit_list.append(model_fit(x_train, y_train, x_test, y_test, self.model, self.pop, i))
return np.mean(fit_list)
###################
# Mating Function #
###################
def mate(self, fitness, num_parents):
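        # truncation selection: keep the num_parents highest-fitness solutions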
return self.pop.loc[self.pop.index.isin(((-np.array(fitness)).argsort())[0:num_parents])]
#####################
# Children Function #
#####################
def children(self, fitness, num_children, mutation_rate):
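        # roulette-wheel selection: parents are drawn with probability
        # proportional to fitness, then combined field-by-field (crossover)
        # with occasional mutation at mutation_rate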
children = pd.DataFrame(columns=self.pop.columns, index=range(num_children))
for i in range(num_children):
mates = np.random.choice(np.arange(0, len(self.pop)), size=2, p=fitness/sum(fitness))
mom, dad = self.pop.iloc[mates[0]], self.pop.iloc[mates[1]]
for j in range(len(self.pop.columns)):
col = self.pop.columns[j]
if type(mom[col]) == int:
mutate = bool(np.random.binomial(1, mutation_rate))
children.iloc[i, j] = round(np.mean([mom[col], dad[col]])) if mutate else np.random.choice([mom[col], dad[col]])
elif type(mom[col]) == float:
mutate = bool(np.random.binomial(1, mutation_rate))
children.iloc[i, j] = np.mean([mom[col], dad[col]]) if mutate else np.random.choice([mom[col], dad[col]])
elif type(mom[col]) == str:
mutate = bool(np.random.binomial(1, mutation_rate))
children.iloc[i, j] = str(np.random.choice(list(set(self.pop[col]))) if mutate else np.random.choice([mom[col], dad[col]]))
elif type(mom[col]) == np.ndarray:
mutate = np.random.binomial(1, mutation_rate, len(mom[col]))
choice = np.random.binomial(1, 0.5, len(mom[col]))
child = [mom[col][i] if choice[i] == 1 else dad[col][i] for i in range(len(choice))]
children.iloc[i, j] = np.array([1 - child[i] if mutate[i] == 1 else child[i] for i in range(len(mutate))])
else:
print('Invalid Data Type')
print(f'{mom[col]} needs to be data type int, float, str, or ndarray')
print(f'{mom[col]} is currently data type {type(mom[col])}')
print('')
break
return children
#######################
# Parallelize Fitness #
#######################
def parallel(self, solutions, workers=2):
pool = Pool(workers)
out = pool.map(self.fit, range(solutions))
pool.close()
pool.join()
return out | zangorth-helpers | /zangorth-helpers-1.3.6.tar.gz/zangorth-helpers-1.3.6/helpers/SGA.py | SGA.py |
from youtube_transcript_api import YouTubeTranscriptApi
from sqlalchemy import create_engine
from pydub import AudioSegment
from datetime import datetime
from pytube import YouTube
import pyodbc as sql
import pandas as pd
import numpy as np
import warnings
import librosa
import urllib
import pickle
import string
import os
class ParameterError(Exception):
pass
class NoTranscriptFound(Exception):
pass
###############
# Data Scrape #
###############
class Scrape():
def __init__(self, link, audio_location, transcript_location):
self.link = link
self.audio_location, self.transcript_location = audio_location, transcript_location
self.personalities = {'UC7eBNeDW1GQf2NJQ6G6gAxw': 'ramsey',
'UC4HiMKM8WLcNbt9ae_XNRNQ': 'deloney',
'UC0tVfiyBpMOQLA3FAanPGJA': 'coleman',
'UCaW51g-nmLfq703TPZC7Gsg': 'ao',
'UCt59W0ScV709iwy2h-oiulQ': 'cruze',
'UC1CHQyZ5-MTJzuSCvSVw_qg': 'wright',
'UCKFrkFOwmiXMuZtQJXuG5OQ': 'kamel'}
return None
def metadata(self):
connection_string = ('DRIVER={ODBC Driver 17 for SQL Server};' +
'Server=ZANGORTH;DATABASE=HomeBase;' +
'Trusted_Connection=yes;')
con = sql.connect(connection_string)
query = 'SELECT * FROM [ramsey].[metadata]'
collected = pd.read_sql(query, con)
self.columns = pd.read_sql('SELECT TOP 1 * FROM [ramsey].[audio]', con).columns
con.close()
self.yt = YouTube(self.link)
if self.yt.channel_id not in [personality for personality in self.personalities]:
return 'Please only upload videos from the Ramsey Personality Channels'
if self.link in list(collected['link']):
return 'Link already exists in database'
self.personality = self.personalities[self.yt.channel_id]
name = self.yt.title
name = name.translate(str.maketrans('', '', string.punctuation)).lower()
self.name = name
keywords = ('|'.join(self.yt.keywords)).replace("'", "''").lower()
self.publish_date = self.yt.publish_date.strftime("%Y-%m-%d")
self.random_id = int(round(np.random.uniform()*1000000, 0))
if (datetime.now() - self.yt.publish_date).days < 7:
return 'Videos are only recorded after having been published at least 7 days'
out = pd.DataFrame({'channel': self.personality,
'publish_date': [self.publish_date],
'random_id': [self.random_id],
'title': [name], 'link': [self.link],
'keywords': [keywords], 'seconds': [self.yt.length],
'rating': [self.yt.rating], 'view_count': [self.yt.views],
'upload_date': [datetime.now().strftime("%Y-%m-%d")]})
return out
def audio(self):
os.chdir(self.audio_location)
self.yt.streams.filter(only_audio=True).first().download()
current_name = [f for f in os.listdir() if '.mp4' in f][0]
self.file = f'{self.personality} {str(self.publish_date)} {self.random_id}.mp3'
try:
os.rename(current_name, self.file)
except FileExistsError:
os.remove(self.file)
os.rename(current_name, self.file)
return None
def transcript(self):
transcript_link = self.link.split('?v=')[-1]
try:
transcript = YouTubeTranscriptApi.get_transcript(transcript_link)
pickle.dump(transcript, open(f'{self.transcript_location}\\{self.file.replace(".mp3", ".pkl")}', 'wb'))
except NoTranscriptFound:
pass
except Exception:
pass
return None
def iterables(self):
sound = AudioSegment.from_file(self.file)
iterables = [[cut, sound[cut*1000:cut*1000+1000]] for cut in range(int(round(len(sound)/1000, 0)))]
return iterables
def encode_audio(self, sound):
warnings.filterwarnings('ignore')
second = sound[0]
sound = sound[1]
try:
y, rate = librosa.load(sound.export(format='wav'), res_type='kaiser_fast')
mfccs = np.mean(librosa.feature.mfcc(y, rate, n_mfcc=40).T,axis=0)
stft = np.abs(librosa.stft(y))
chroma = np.mean(librosa.feature.chroma_stft(S=stft, sr=rate).T,axis=0)
mel = np.mean(librosa.feature.melspectrogram(y, sr=rate).T,axis=0)
contrast = np.mean(librosa.feature.spectral_contrast(S=stft, sr=rate).T,axis=0)
tonnetz = np.mean(librosa.feature.tonnetz(y=librosa.effects.harmonic(y), sr=rate).T,axis=0)
features = list(mfccs) + list(chroma) + list(mel) + list(contrast) + list(tonnetz)
features = [float(f) for f in features]
features = [self.personality, self.publish_date, self.random_id, second] + features
features = pd.DataFrame([features], columns=self.columns)
except ValueError:
features = pd.DataFrame(columns=self.columns, index=[0])
            features.iloc[0, 0:4] = [self.personality, self.publish_date, self.random_id, second]
except ParameterError:
features = pd.DataFrame(columns=self.columns, index=[0])
            features.iloc[0, 0:4] = [self.personality, self.publish_date, self.random_id, second]
except Exception:
features = pd.DataFrame(columns=self.columns, index=[0])
            features.iloc[0, 0:4] = [self.personality, self.publish_date, self.random_id, second]
return features
###############
# Upload Data #
###############
def upload(dataframe, schema, table, exists='append'):
conn_str = (
r'Driver={ODBC Driver 17 for SQL Server};'
r'Server=ZANGORTH;'
r'Database=HomeBase;'
'Trusted_Connection=yes;'
)
con = urllib.parse.quote_plus(conn_str)
engine = create_engine(f'mssql+pyodbc:///?odbc_connect={con}')
dataframe.to_sql(name=table, con=engine, schema=schema, if_exists=exists, index=False)
return None
##############
# Index Data #
##############
def reindex(schema, table, index, username, password, alter = {'channel': 'VARCHAR(10)'}):
if 'guest' not in username:
connection_string = ('DRIVER={ODBC Driver 17 for SQL Server};' +
'Server=zangorth.database.windows.net;DATABASE=HomeBase;' +
f'UID={username};PWD={password}')
azure = sql.connect(connection_string)
csr = azure.cursor()
for key in alter:
query = f'''
ALTER TABLE {schema}.{table}
ALTER COLUMN {key} {alter[key]}
'''
csr.execute(query)
query = f'''
CREATE CLUSTERED INDEX IX_{table}
ON {schema}.{table}({', '.join(index)})
'''
csr.execute(query)
csr.commit()
azure.close()
return None
##############
# Lags/Leads #
##############
def shift(x, group, lags, leads, exclude = []):
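    # appends group-wise lagged and lead copies of every non-excluded column,
    # suffixed _lag{i} / _lead{i}, merged back onto the original frame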
out = x.copy()
x = out[[col for col in out.columns if col not in exclude]]
for i in range(lags):
lag = x.groupby(group).shift(i)
lag.columns = [f'{col}_lag{i}' for col in lag.columns]
out = out.merge(lag, left_index=True, right_index=True)
for i in range(leads):
lead = x.groupby(group).shift(-i)
lead.columns = [f'{col}_lead{i}' for col in lead.columns]
out = out.merge(lead, left_index=True, right_index=True)
return out | zangorth-ramsey | /zangorth_ramsey-1.1.5-py3-none-any.whl/ramsey/ramsey.py | ramsey.py |
[](https://pypi.org/project/zanj/)
[](https://github.com/mivanit/zanj/actions/workflows/checks.yml)
[](docs/coverage/coverage.txt)


<!-- 
 -->
<!--  -->
# ZANJ
# Overview
The `ZANJ` format is meant to be a way of saving arbitrary objects to disk, in a way that is flexible, allows keeping configuration and data together, and is human readable. It is very loosely inspired by HDF5 and the derived `exdir` format, and the implementation is inspired by `npz` files.
- You can take any `SerializableDataclass` from the [muutils](https://github.com/mivanit/muutils) library and save it to disk -- any large arrays or lists will be stored efficiently as external files in the zip archive, while the basic structure and metadata will be stored in readable JSON files.
- You can also specify a special `ConfiguredModel`, which inherits from a `torch.nn.Module` which will let you save not just your model weights, but all required configuration information, plus any other metadata (like training logs) in a single file.
This library was originally a module in [muutils](https://github.com/mivanit/muutils/)
# Installation
Available on PyPI as [`zanj`](https://pypi.org/project/zanj/)
```
pip install zanj
```
# Usage
You can find a runnable example of this in [`demo.ipynb`](demo.ipynb)
## Saving a basic object
Any `SerializableDataclass` of basic types can be saved as zanj:
```python
import numpy as np
import pandas as pd
from muutils.json_serialize import SerializableDataclass, serializable_dataclass, serializable_field
from zanj import ZANJ
@serializable_dataclass
class BasicZanj(SerializableDataclass):
a: str
q: int = 42
c: list[int] = serializable_field(default_factory=list)
# initialize a zanj reader/writer
zj = ZANJ()
# create an instance
instance: BasicZanj = BasicZanj("hello", 42, [1, 2, 3])
path: str = "tests/junk_data/path_to_save_instance.zanj"
zj.save(instance, path)
recovered: BasicZanj = zj.read(path)
```
ZANJ will intelligently handle nested serializable dataclasses, numpy arrays, pytorch tensors, and pandas dataframes:
```python
import torch
import pandas as pd
@serializable_dataclass
class Complicated(SerializableDataclass):
name: str
arr1: np.ndarray
arr2: np.ndarray
iris_data: pd.DataFrame
brain_data: pd.DataFrame
container: list[BasicZanj]
torch_tensor: torch.Tensor
```
For custom classes, you can specify a `serialization_fn` and `loading_fn` to handle the logic of converting to and from a json-serializable format:
```python
@serializable_dataclass
class Complicated(SerializableDataclass):
name: str
device: torch.device = serializable_field(
serialization_fn=lambda self: str(self.device),
loading_fn=lambda data: torch.device(data["device"]),
)
```
Note that `loading_fn` takes the dictionary of the whole class -- this is in case you've stored data in multiple fields of the dict which are needed to reconstruct the object.
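Round-tripping such an object then works like any other `SerializableDataclass` (a sketch reusing the reader/writer from the earlier examples):

```python
c = Complicated(name="model", device=torch.device("cpu"))
ZANJ().save(c, "tests/junk_data/complicated.zanj")
c2 = ZANJ().read("tests/junk_data/complicated.zanj")
assert c2.device == torch.device("cpu")
```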
## Saving Models
First, define a configuration class for your model. This class will hold the parameters for your model and any associated objects (like losses and optimizers). The configuration class should be a subclass of `SerializableDataclass` and use the `serializable_field` function to define fields that need special serialization.
Here's an example that defines a GPT-like model configuration:
```python
from zanj.torchutil import ConfiguredModel, set_config_class
@serializable_dataclass
class MyNNConfig(SerializableDataclass):
input_dim: int
hidden_dim: int
output_dim: int
# store the activation function by name, reconstruct it by looking it up in torch.nn
act_fn: torch.nn.Module = serializable_field(
serialization_fn=lambda x: x.__name__,
loading_fn=lambda x: getattr(torch.nn, x["act_fn"]),
)
# same for the loss function
loss_kwargs: dict = serializable_field(default_factory=dict)
loss_factory: torch.nn.modules.loss._Loss = serializable_field(
default_factory=lambda: torch.nn.CrossEntropyLoss,
serialization_fn=lambda x: x.__name__,
loading_fn=lambda x: getattr(torch.nn, x["loss_factory"]),
)
loss = property(lambda self: self.loss_factory(**self.loss_kwargs))
```
Then, define your model class. It should be a subclass of `ConfiguredModel`, and use the `set_config_class` decorator to associate it with your configuration class. The `__init__` method should take a single argument, which is an instance of your configuration class. You must also call the superclass `__init__` method with the configuration instance.
```python
@set_config_class(MyNNConfig)
class MyNN(ConfiguredModel[MyNNConfig]):
def __init__(self, config: MyNNConfig):
# call the superclass init!
# this will store the model in the zanj_model_config field
super().__init__(config)
# whatever you want here
self.net = torch.nn.Sequential(
torch.nn.Linear(config.input_dim, config.hidden_dim),
config.act_fn(),
torch.nn.Linear(config.hidden_dim, config.output_dim),
)
def forward(self, x):
return self.net(x)
```
You can now create instances of your model, save them to disk, and load them back into memory:
```python
config = MyNNConfig(
input_dim=10,
hidden_dim=20,
output_dim=2,
act_fn=torch.nn.ReLU,
loss_kwargs=dict(reduction="mean"),
)
# create your model from the config, and save
model = MyNN(config)
fname = "tests/junk_data/path_to_save_model.zanj"
ZANJ().save(model, fname)
# load by calling the class method `read()`
loaded_model = MyNN.read(fname)
# zanj will actually infer the type of the object in the file
# -- and will warn you if you don't have the correct package installed
loaded_another_way = ZANJ().read(fname)
```
## Configuration
When initializing a `ZANJ` object, you can specify some configuration info about saving, such as:
- thresholds for how big an array/table has to be before moving to external file
- compression settings
- error modes
- additional handlers for serialization
```python
# how big an array or list (including pandas DataFrame) can be before moving it from the core JSON file
external_array_threshold: int = ZANJ_GLOBAL_DEFAULTS.external_array_threshold
external_list_threshold: int = ZANJ_GLOBAL_DEFAULTS.external_list_threshold
# compression settings passed to `zipfile` package
compress: bool | int = ZANJ_GLOBAL_DEFAULTS.compress
# for doing very cursed things in your own custom loading or serialization functions
custom_settings: dict[str, Any] | None = ZANJ_GLOBAL_DEFAULTS.custom_settings
# specify additional serialization handlers
handlers_pre: MonoTuple[SerializerHandler] = tuple()
handlers_default: MonoTuple[SerializerHandler] = DEFAULT_SERIALIZER_HANDLERS_ZANJ
```
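For example, a minimal sketch of overriding some of these defaults (assuming the names in the listing above are accepted as keyword arguments by the constructor):
```python
from zanj import ZANJ

# keep arrays/lists of up to 256 elements inline in the core JSON file,
# and enable zip compression (keyword names taken from the listing above)
zj = ZANJ(
    external_array_threshold=256,
    external_list_threshold=256,
    compress=True,
)
```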
# Implementation
The on-disk format is a file `<filename>.zanj`, which is a zip file containing:
- `__zanj_meta__.json`: a file containing zanj-specific metadata including:
- system information
- installed packages
- information about external files
- `__zanj__.json`: a file containing user-specified data
- when an element is too big, it can be moved to an external file
- `.npy` for numpy arrays or torch tensors
- `.jsonl` for pandas dataframes or large sequences
- list of external files stored in `__zanj_meta__.json`
- "$ref" key will have value pointing to external file
- `__format__` key will detail an external format type
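Since a `.zanj` file is just a zip archive, you can peek inside one with nothing but the standard library. A minimal sketch, using the file names described above:
```python
import json
import zipfile

with zipfile.ZipFile("tests/junk_data/path_to_save_instance.zanj") as zf:
    # e.g. ['__zanj_meta__.json', '__zanj__.json', ...external files]
    print(zf.namelist())
    meta = json.loads(zf.read("__zanj_meta__.json"))
    data = json.loads(zf.read("__zanj__.json"))
```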
# Comparison to other formats
| Format | Safe | Zero-copy | Lazy loading | No file size limit | Layout control | Flexibility | Bfloat16 |
| ----------------------- | ---- | --------- | ------------ | ------------------ | -------------- | ----------- | -------- |
| pickle (PyTorch) | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ |
| H5 (Tensorflow) | ✅ | ❌ | ✅ | ✅ | ~ | ~ | ❌ |
| HDF5 | ✅ | ? | ✅ | ✅ | ~ | ✅ | ❌ |
| SavedModel (Tensorflow) | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ |
| MsgPack (flax) | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ |
| Protobuf (ONNX) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ |
| Cap'n'Proto | ✅ | ✅ | ~ | ✅ | ✅ | ~ | ❌ |
| Numpy (npy,npz) | ✅ | ? | ? | ❌ | ✅ | ❌ | ❌ |
| SafeTensors | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ |
| exdir | ✅ | ? | ? | ? | ? | ✅ | ❌ |
| ZANJ | ✅ | ❌ | ❌* | ✅ | ✅ | ✅ | ❌* |
- Safe: Can I use a file randomly downloaded and expect not to run arbitrary code?
- Zero-copy: Does reading the file require more memory than the original file?
- Lazy loading: Can I inspect the file without loading everything? And load only some tensors in it without scanning the whole file (distributed setting)?
- Layout control: Lazy loading is not necessarily enough, since if the information about tensors is spread out in your file, then even if the information is lazily accessible you might have to access most of your file to read the available tensors (incurring many disk-to-RAM copies). Controlling the layout to keep fast access to single tensors is important.
- No file size limit: Is there a limit to the file size?
- Flexibility: Can I save custom code in the format and be able to use it later with zero extra code? (~ means we can store more than pure tensors, but no custom code)
- Bfloat16: Does the format support native bfloat16 (meaning no weird workarounds are necessary)? This is becoming increasingly important in the ML world.
`*` denotes this feature may be coming at a future date :)
(This table was stolen from [safetensors](https://github.com/huggingface/safetensors/blob/main/README.md))
| zanj | /zanj-0.2.0.tar.gz/zanj-0.2.0/README.md | README.md |
=====
zanna
=====
.. image:: https://img.shields.io/pypi/v/zanna.svg
:target: https://pypi.python.org/pypi/zanna
.. image:: https://img.shields.io/travis/MirkoRossini/zanna.svg
:target: https://travis-ci.org/MirkoRossini/zanna
.. image:: https://readthedocs.org/projects/zanna/badge/?version=latest
:target: https://zanna.readthedocs.io/en/latest/?badge=latest
:alt: Documentation Status
.. image:: https://pyup.io/repos/github/mirkorossini/zanna/shield.svg
:target: https://pyup.io/repos/github/mirkorossini/zanna/
:alt: Updates
.. image:: https://pyup.io/repos/github/mirkorossini/zanna/python-3-shield.svg
:target: https://pyup.io/repos/github/mirkorossini/zanna/
:alt: Python 3
Simple Dependency Injection library.
Supports python 3.5+ and makes full use of the typing annotations.
The design is pythonic but inspired by Guice in many aspects.
* Free software: BSD license
* Documentation: https://zanna.readthedocs.io.
Motivation
==========
Zanna is meant to be a modern (3.5+), well-maintained dependency injection library for Python.
Features
========
* Support for typing annotations
* Decorators are not mandatory: all the injection logic can be outside your modules
* Supports injection by name
* Instances can be bound directly, which is useful when testing (e.g. overriding bindings with mocks)
* No autodiscovery, for performance reasons and to avoid running into annoying bugs
Usage
=====
Injecting by variable name
--------------------------
The basic form of injection is performed by variable name.
The injector expects a list of modules (any callable that takes a Binder as argument).
You can get the bound instance by calling get_instance:
.. code-block:: python
from zanna import Injector, Binder
def mymodule(binder: Binder) -> None:
binder.bind_to("value", 3)
injector = Injector(mymodule)
assert injector.get_instance("value") == 3
Zanna will automatically inject the value into arguments with the same name:
.. code-block:: python
from zanna import Injector, Binder
def mymodule(binder: Binder) -> None:
binder.bind_to("value", 3)
class ValueConsumer:
def __init__(self, value):
self.value = value
injector = Injector(mymodule)
assert injector.get_instance(ValueConsumer).value == 3
Injecting by type annotation
----------------------------
Zanna also makes use of python typing annotations to find the right instance to inject.
.. code-block:: python
from zanna import Injector, Binder
class ValueClass:
def __init__(self, the_value: int):
self.the_value = the_value
class ValueConsumer:
def __init__(self, value_class_instance: ValueClass):
self.value_class_instance = value_class_instance
def mymodule(binder: Binder) -> None:
binder.bind_to("the_value", 3)
binder.bind(ValueClass)
injector = Injector(mymodule)
assert injector.get_instance(ValueConsumer).value_class_instance.the_value == 3
Singleton or not singleton?
---------------------------
Instances provided by the injector are always singletons, meaning that the __init__ method of
the class will be called only the first time, and every subsequent call of get_instance will
return the same instance:
.. code-block:: python
from zanna import Injector
class MyClass:
pass
injector = Injector(lambda binder: binder.bind(MyClass))
assert injector.get_instance(MyClass) == injector.get_instance(MyClass)
Use providers for more complex use cases
----------------------------------------
Binder instances can be used to bind providers. A provider is any callable that takes
any number of arguments and returns any type. The injector will try to inject all the necessary
arguments. Providers can be bound explicitly or implicitly (in which case zanna will use the
return annotation to bind by type).
.. code-block:: python
from zanna import Injector, Binder
class AValueConsumer:
def __init__(self, value: int):
self.value = value
def explicit_provider(a_value: int) -> int:
return a_value + 100
def implicit_provider(value_plus_100: int) -> AValueConsumer:
return AValueConsumer(value_plus_100)
def mymodule(binder: Binder) -> None:
binder.bind_to("a_value", 3)
binder.bind_provider("value_plus_100", explicit_provider)
binder.bind_provider(implicit_provider)
injector = Injector(mymodule)
assert injector.get_instance(AValueConsumer).value == 103
Override existing bindings
--------------------------
Bindings can be overridden. Overriding a non-existent binding will result in a ValueError being raised.
Overriding bindings is extremely useful when testing, as any part of your stack can be replaced with a mock.
.. code-block:: python
from zanna import Injector, Binder
from unittest.mock import MagicMock
class ValueClass:
def __init__(self):
pass
def retrieve_something(self):
return ['some', 'thing']
class ValueConsumer:
def __init__(self, value: ValueClass):
self.value = value
def mymodule(binder: Binder) -> None:
binder.bind(ValueClass)
injector = Injector(mymodule)
assert injector.get_instance(ValueConsumer).value.retrieve_something() == ['some', 'thing']
def module_overriding_value_class(binder: Binder) -> None:
mock_value_class = MagicMock(ValueClass)
mock_value_class.retrieve_something.return_value = ['mock']
binder.override_binding(ValueClass, mock_value_class)
injector = Injector(mymodule, module_overriding_value_class)
assert injector.get_instance(ValueConsumer).value.retrieve_something() == ['mock']
Using the decorators
--------------------
One of the advantages of using Zanna over other solutions is that it doesn't force you
to pollute your code by mixing in the injection logic.
If you are working on a small project and would like to handle part (or all) of the
injection logic using decorators instead of modules, Zanna supports that as well.
Internally, Zanna creates a module that sets up the bindings as indicated by the decorators
(in an arbitrary, unspecified order).
All Injectors initialized with use_decorators=True will run that module first on their Binder.
Zanna supports the following decorators:
* decorators.provider, which takes a provider function annotated with an appropriate return type
* decorators.provider_for, which can be given the name or the class of the instance provided
* decorators.inject, to annotate a class to be bound/injected
Here's an example:
.. code-block:: python
from zanna import Injector
from zanna import decorators
class Thing:
pass
@decorators.provider_for("value")
def provide_value():
return 3
@decorators.provider
def provide_thing() -> Thing:
return Thing()
@decorators.inject
class OtherThing:
def __init__(self, value, thing:Thing):
self.value = value
self.thing = thing
inj = Injector(use_decorators=True)
otherthing = inj.get_instance(OtherThing)
assert otherthing.value == 3
assert isinstance(otherthing.thing, Thing)
assert isinstance(otherthing, OtherThing)
Credits
-------
This package was created with Cookiecutter_ and the `audreyr/cookiecutter-pypackage`_ project template.
.. _Cookiecutter: https://github.com/audreyr/cookiecutter
.. _`audreyr/cookiecutter-pypackage`: https://github.com/audreyr/cookiecutter-pypackage
| zanna | /zanna-0.3.1.tar.gz/zanna-0.3.1/README.rst | README.rst |
.. highlight:: shell
============
Contributing
============
Contributions are welcome, and they are greatly appreciated! Every
little bit helps, and credit will always be given.
You can contribute in many ways:
Types of Contributions
----------------------
Report Bugs
~~~~~~~~~~~
Report bugs at https://github.com/MirkoRossini/zanna/issues.
If you are reporting a bug, please include:
* Your operating system name and version.
* Any details about your local setup that might be helpful in troubleshooting.
* Detailed steps to reproduce the bug.
Fix Bugs
~~~~~~~~
Look through the GitHub issues for bugs. Anything tagged with "bug"
and "help wanted" is open to whoever wants to implement it.
Implement Features
~~~~~~~~~~~~~~~~~~
Look through the GitHub issues for features. Anything tagged with "enhancement"
and "help wanted" is open to whoever wants to implement it.
Write Documentation
~~~~~~~~~~~~~~~~~~~
zanna could always use more documentation, whether as part of the
official zanna docs, in docstrings, or even on the web in blog posts,
articles, and such.
Submit Feedback
~~~~~~~~~~~~~~~
The best way to send feedback is to file an issue at https://github.com/MirkoRossini/zanna/issues.
If you are proposing a feature:
* Explain in detail how it would work.
* Keep the scope as narrow as possible, to make it easier to implement.
* Remember that this is a volunteer-driven project, and that contributions
are welcome :)
Get Started!
------------
Ready to contribute? Here's how to set up `zanna` for local development.
1. Fork the `zanna` repo on GitHub.
2. Clone your fork locally::
$ git clone [email protected]:your_name_here/zanna.git
3. Install your local copy into a virtualenv. Assuming you have virtualenvwrapper installed, this is how you set up your fork for local development::
$ mkvirtualenv zanna
$ cd zanna/
$ python setup.py develop
4. Create a branch for local development::
$ git checkout -b name-of-your-bugfix-or-feature
Now you can make your changes locally.
5. When you're done making changes, check that your changes pass flake8 and the tests, including testing other Python versions with tox::
$ flake8 zanna tests
$ python setup.py test or py.test
$ tox
To get flake8 and tox, just pip install them into your virtualenv.
6. Commit your changes and push your branch to GitHub::
$ git add .
$ git commit -m "Your detailed description of your changes."
$ git push origin name-of-your-bugfix-or-feature
7. Submit a pull request through the GitHub website.
Pull Request Guidelines
-----------------------
Before you submit a pull request, check that it meets these guidelines:
1. The pull request should include tests.
2. If the pull request adds functionality, the docs should be updated. Put
your new functionality into a function with a docstring, and add the
feature to the list in README.rst.
3. The pull request should work for Python 3.5+ and for PyPy (the library
   makes full use of typing annotations, so Python 2 is not supported). Check
   https://travis-ci.org/MirkoRossini/zanna/pull_requests
   and make sure that the tests pass for all supported Python versions.
Tips
----
To run a subset of tests::
$ py.test tests.test_zanna
| zanna | /zanna-0.3.1.tar.gz/zanna-0.3.1/CONTRIBUTING.rst | CONTRIBUTING.rst |
.. highlight:: shell
============
Installation
============
Stable release
--------------
To install zanna, run this command in your terminal:
.. code-block:: console
$ pip install zanna
This is the preferred method to install zanna, as it will always install the most recent stable release.
If you don't have `pip`_ installed, this `Python installation guide`_ can guide
you through the process.
.. _pip: https://pip.pypa.io
.. _Python installation guide: http://docs.python-guide.org/en/latest/starting/installation/
From sources
------------
The sources for zanna can be downloaded from the `Github repo`_.
You can either clone the public repository:
.. code-block:: console
$ git clone git://github.com/MirkoRossini/zanna
Or download the `tarball`_:
.. code-block:: console
$ curl -OL https://github.com/MirkoRossini/zanna/tarball/master
Once you have a copy of the source, you can install it with:
.. code-block:: console
$ python setup.py install
.. _Github repo: https://github.com/MirkoRossini/zanna
.. _tarball: https://github.com/MirkoRossini/zanna/tarball/master
| zanna | /zanna-0.3.1.tar.gz/zanna-0.3.1/docs/installation.rst | installation.rst |
# zanon
A module to anonymize data streams with zero-delay called z-anonymity.
When instantiating the *zanon* object, the constructor receives a value in seconds for **Delta_t**, the desired probability of reaching k-anonymity, expressed as a value between 0 and 1 (**pk**), and a value for **k**.
The *anonymize()* method accepts a tuple with 3 arguments **(t,u,a)** , meaning that at time **t** a user **u** exposes an attribute **a**.
Also a tuple of kind **(t,u, latitude, longitude)** is handled.
If the tuple exposes an attribute that has not been exposed by at least **z - 1** other users in the past **Delta_t**, the tuple is simply ignored. Otherwise, the tuple is printed to the file 'output.txt'. The threshold **z** is re-tuned automatically over time so that the output reaches the target **pk**.
The algorithm can handle generalization when providing the attribute with a hierarchy using \* as separator (*max_generalization\*...\*min_generalization\*attribute*).
Whenever releasing the attribute is not possible, the algorithm will look for the most specific generalization exposed by at least **z - 1** other users in the past **Delta_t**. If none is found, nothing is printed.
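For example, a hierarchical attribute built with the \* separator can be passed like this (the timestamp, user and categories are made up):
```python
from zanonymity import zanon

z = zanon.zanon(3600, 0.8, 2)
# max_generalization*...*min_generalization*attribute
z.anonymize(("1590000000", "user_1", "food*italian*pizza*margherita"))
```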
For the geolocation case, the algorithm divides the territory (currently only Italy) into cells of different sizes and outputs the cell at one of at most 5 levels of detail (3km, 5km, 10km, 30km, 500km).
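For example (the coordinates are made up, roughly Turin):
```python
from zanonymity import zanon

z = zanon.zanon(3600, 0.8, 2)
# (t, u, latitude, longitude) -- the output attribute is a cell identifier
z.anonymize(("1590000000", "user_1", 45.07, 7.69))
```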
### Other methods
Run the *study_output.py* after the simulation to plot the distribution of z, pk and traffic during time. (You will find the plot 'z_tuning.pdf').
*endFiles()*: the file output.txt and counters.txt are intended for testing. They need to be closed at the end of the simulation (see example below).
*duration()* prints the range of time covered.
## Install
```
pip install zanon
```
[Link to PyPI](https://pypi.org/project/zanon/)
## Example of usage
```python
from zanonymity import zanon
file_in = "trace.txt"
deltat = 3600 #in seconds
pk = 0.8
k = 2
z = zanon.zanon(deltat, pk, k)
for line in open(file_in, 'r'):
t,u,a = line.split(",")
z.anonymize((t,u,a))
z.endFiles()
z.duration()
```
| zanon | /zanon-0.3.3.tar.gz/zanon-0.3.3/README.md | README.md |
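# study_output.py -- post-simulation analysis: reads trace.txt and output.txt,
# plots z, pk and tuple traffic over time (z_tuning.pdf), and prints final
# k-anonymity and per-detail-level statistics.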
from collections import defaultdict
import numpy as np
import json
import matplotlib.pyplot as plt
from matplotlib.ticker import LinearLocator
MAX_GENERALIZATION = 20
stepsize = np.array([3000, 5000, 10000, 30000, 500000])
numeric_category = False
with open('output.json',"r") as f:
data = json.load(f)
labels = []
labels.append("Anonymized")
details = [x for x in data["all_details"] if not all(v == 0 for v in x)]
if numeric_category:
for i in reversed(stepsize):
labels.append(str(int(i/1000))+" km")
else:
for i in range(len(details)):
labels.append(str(i + 1) + "-detail")
to_plot = np.vstack((data["tot_anon"], details))
color = 'tab:red'
fig, ax_left = plt.subplots(figsize=(20, 10))
ax_right = ax_left.twinx()
ax_third = ax_left.twinx()
ax_third.spines["right"].set_position(("axes", 1.1))
ax_left.plot(data["time"],data["z"], color=color, linewidth="5")
ax_right.plot(data["time"], data["kanon"], color='black', linewidth="5")
ax_third.stackplot(data["time"], to_plot,
labels = labels ,alpha=0.4)
ax_third.legend(loc='upper left', prop={"size":20})
ax_left.set_xlabel('time', fontsize=20)
ax_left.set_ylabel('z', color=color, fontsize=20)
ax_left.autoscale()
ax_third.autoscale()
ax_third.set_ylabel('Tuple traffic', color = "blue", fontsize=20)
ax_third.tick_params(axis='y', labelcolor="blue", labelsize=20.0)
ax_right.set_ylim(bottom = 0.0, top = 1.0)
ax_left.tick_params(axis='y', labelcolor=color, labelsize=20.0)
ax_right.set_ylabel('pkanon', color='black', fontsize= 20)
ax_right.tick_params(axis='y', labelcolor='black', labelsize = 20.0)
ax_left.get_xaxis().set_major_locator(LinearLocator(numticks=20))
ax_left.tick_params(labelsize=20)
fig.autofmt_xdate(rotation = 45)
fig.tight_layout()
fig.savefig('z_tuning.pdf')
with open('trace.txt') as f:
rows = sum(1 for _ in f)
final_dataset = defaultdict(set)
file = open('output.txt','r')
gen = [0]*MAX_GENERALIZATION
tot = 0
for line in file:
tot += 1
t,u,a = line.split("\t")
t = float(t)
    a = a.strip()  # strip() returns a new string; reassign or the trailing newline is kept
final_dataset[u].add(a)
cat = a.split("*")
gen[len(cat)] += 1
final_dataset_inv = defaultdict(list)
for k,v in final_dataset.items():
final_dataset_inv[str(v)].append(k)
ks = np.array([len(v) for v in final_dataset_inv.values()])
for k in range(2,5):
print("Final " + str(k) + "-anonymization: " + str(sum(ks[ks >= k])/sum(ks)))
for index,i in enumerate(gen):
if(i == 0 and index == 0):
continue
elif(i == 0):
break
print("Tuple passed with " + str(index ) + "-details level: " + str(i))
print("Tuple anonymized: " + str(rows - tot)) | zanon | /zanon-0.3.3.tar.gz/zanon-0.3.3/zanonymity/study_output.py | study_output.py |
from .utils import *
from .evaluate_category import *
from .evaluate_output import *
import matplotlib.pyplot as plt
import collections
import json
class zanon(object):
def __init__(self, deltat, pk, k):
super(zanon, self).__init__()
self.deltat = deltat
self.rate = 1800
self.z = 10
self.H = {}
self.c = {}
self.pk = pk
self.k = k
self.t_start = 0
self.t_stop = 0
self.last_update = 0
self.test = []
self.time = []
self.kanon = []
self.queue = collections.deque()
self.out_tuple = []
self.all_tuple = collections.deque()
self.f_out = open('output.txt', 'w+')
self.f_count = open('counters.txt', 'w+')
self.tot_data = []
self.details = [0]*20
self.all_details = [[] for i in range(20)]
self.anonymized = collections.deque()
self.tot_anon = []
self.stepsize = np.array([3000, 5000, 10000, 30000, 500000])
self.numeric_category = False
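        # NOTE: get_cell() in utils.py expects self.from_ll_to_mt, a
        # lat/lon -> metres coordinate transformer, which was never
        # initialized here. A minimal sketch of the missing setup, assuming
        # pyproj is available and web-mercator (EPSG:3857) as the metric CRS:
        import pyproj
        self.from_ll_to_mt = pyproj.Transformer.from_crs("EPSG:4326", "EPSG:3857")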
def anonymize(self, tupla):
t = float(tupla[0])
u = tupla[1]
if(len(tupla) == 4):
lat = tupla[2]
lon = tupla[3]
a = "*".join(str(x) for x in reversed(get_cell(self,lat,lon)))
self.numeric_category = True
elif(len(tupla)==3):
a = tupla[2].strip()
else: raise ValueError("Arguments can be either 3 or 4, not " + str(len(tupla)))
if self.t_start == 0:
self.t_start = t
output = {}
output['all_details']=[[] for i in range(20)]
output['tot_anon']=[]
output['time'] = []
output['z']=[]
output['kanon']=[]
with open("output.json","w+") as f:
json.dump(output,f)
sep = '*'
cat = a.split(sep)
for level in range(len(cat)):
att = '*'.join(cat[:level + 1])
if att in self.H:
evict(self, t, att)
clean_queue(self,t)
z_change(self, t)
self.t_stop = t
manage_data_structure(self, t, u, a)
check_and_output(self, t, u, a)
def duration(self):
print('End of simulation (simulated time: {})'.format(str(timedelta(seconds = int(self.t_stop - self.t_start)))))
def evaluate_output(self):
evaluate_output()
def evaluate_category(self,z):
evaluate_cat(z)
def final_kanon(self):
final_kanon()
def plot_z(self):
plot_z(self)
def endFiles(self):
self.f_out.close()
self.f_count.close() | zanon | /zanon-0.3.3.tar.gz/zanon-0.3.3/zanonymity/zanon.py | zanon.py |
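# utils.py -- core helpers: geographic cell lookup, the binary search that
# tunes z to hit the target pk, Delta_t-based queue cleanup, the per-attribute
# hash/counter structures, and the output logic.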
import numpy as np
from collections import defaultdict
import subprocess
import sys
import collections
from datetime import datetime, timedelta
import json
def get_cell(self, lat, lon):
nw = self.from_ll_to_mt.transform(47.5, 5.5)
se = self.from_ll_to_mt.transform(35, 20)
point = self.from_ll_to_mt.transform(lat, lon)
distx = point[0] - nw[0]
disty = nw[1] - point[1]
cellx = (distx / self.stepsize).astype(int)
celly = (disty / self.stepsize).astype(int)
maxx = (np.ceil((se[0] - nw[0]) / self.stepsize)).astype(int)
cell = celly * maxx + cellx
return cell.tolist()
def kanon_for_binary(self, z):
final_dataset = defaultdict(set)
    for i in self.all_tuple:  # queue of tuples (t, u, a, counters of a per generalization level)
u = i[1]
a = i[2]
counters = i[3]
cat = a.split("*")
att = None
for i,c in enumerate(counters):
if int(c) >= z:
att = '*'.join(cat[:i + 1])
if att != None:
final_dataset[u].add(att)
if len(final_dataset) != 0:
final_dataset_inv = defaultdict(list)
for k,v in final_dataset.items():
final_dataset_inv[str(v)].append(k)
ks = np.array([len(v) for v in final_dataset_inv.values()])
#print(sum(ks[ks >= 2])/sum(ks))
return sum(ks[ks >= self.k])/sum(ks)
else: return 2
def binary_search(self, pk, z_start, z_end):
if z_start > z_end:
return -1
z_mid = (z_start + z_end) // 2
r = kanon_for_binary(self, int(z_mid))
if r >= pk and r <= pk + 0.01:
return z_mid
if r > pk:
return binary_search(self, pk, z_start, z_mid-1)
else:
return binary_search(self, pk, z_mid+1, z_end)
def z_change(self, t):
if(t - self.t_start >= self.deltat and t-self.last_update >= self.rate):
self.last_update = t
result = binary_search(self, self.pk, 0, 1800)
'''
self.kanon.append(result)
if(result < 0.80):
self.z += 1
if(result > 0.81 and self.z > 5):
self.z -=1
#print("Value of z: " + str(self.z))
'''
with open("output.json", "r") as fi:
data = json.load(fi)
if(result != -1):
self.z = result
i = 0
for detail in self.details:
#self.all_details[i].append(detail)
data['all_details'][i].append(detail)
i += 1
data['tot_anon'].append(len(self.anonymized))
data['time'].append(str(datetime.utcfromtimestamp(t).strftime('%Y-%m-%d %H:%M:%S')))
data['z'].append(self.z)
data['kanon'].append(compute_kanon(self))
with open("output.json", "w") as fi:
json.dump(data, fi)
def compute_kanon(self):
final_dataset = defaultdict(set)
for i in self.queue:
final_dataset[i[1]].add(i[2])
final_dataset_inv = defaultdict(list)
for k,v in final_dataset.items():
final_dataset_inv[str(v)].append(k)
ks = np.array([len(v) for v in final_dataset_inv.values()])
if(sum(ks) != 0):
return sum(ks[ks >= 2])/sum(ks)
else: return 1.0
def clean_queue(self,t):
if self.all_tuple:
while True:
if self.all_tuple:
temp = self.all_tuple.popleft()
if(t - temp[0] <= self.deltat):
self.all_tuple.appendleft(temp)
break
else: break
if self.queue:
while True:
if self.queue:
temp = self.queue.popleft()
detail = len(temp[2].split("*"))
self.details[detail - 1] -= 1
if(t - temp[0] <= self.deltat):
self.queue.appendleft(temp)
self.details[detail - 1] += 1
break
else: break
if self.anonymized:
while True:
if self.anonymized:
temp = self.anonymized.popleft()
if(t - temp[0] <= self.deltat):
self.anonymized.appendleft(temp)
break
else: break
def read_next_visit(line):
t, u, a = line.split(',')
t = float(t)
a = a.strip()
return t, u, a
def a_not_present(self, t, u, a):
self.H[a] = collections.OrderedDict()
self.H[a][u] = t
self.c[a] = 1
def a_present(self, t, u, a):
if u not in self.H[a]:
u_not_present(self, t, u, a)
else:
u_present(self, t, u, a)
def u_not_present(self, t, u, a):
self.H[a][u] = t
self.c[a] += 1
def u_present(self, t, u, a):
self.H[a][u] = t
self.H[a].move_to_end(u)
def evict(self, t, a):
to_remove = []
for u,time in self.H[a].items():
if (t - time > self.deltat):
to_remove.append(u)
else:break
for u in to_remove:
self.H[a].pop(u, None)
self.c[a] -= 1
if len(self.H[a]) == 0:
self.H.pop(a, None)
break
def manage_data_structure(self, t, u, a):
sep = '*'
cat = a.split(sep)
for level in range(len(cat)):
i = '*'.join(cat[:level + 1])
if i not in self.H:
a_not_present(self, t, u, i)
else:
a_present(self, t, u, i)
def check_and_output(self, t, u, a):
sep = '*'
cat = a.split(sep)
counters = []
output = None
for level in range(len(cat)):
attr = '*'.join(cat[:level + 1])
counters.append(self.c[attr])
if self.c[attr] >= self.z:
output = (t,u,attr)
if(output != None):
self.queue.append(output)
detail = len(output[2].split("*"))
self.details[detail - 1] += 1
self.f_out.write("\t".join(str(x) for x in output) + "\n")
else: self.anonymized.append((t,u,a))
self.f_count.write("\t".join(str(x) for x in [t,u,a])+ "\t" +
"\t".join(str(x) for x in counters) + '\n')
self.all_tuple.append((t,u,a,counters))
self.out_tuple.append((t,u,a,counters)) | zanon | /zanon-0.3.3.tar.gz/zanon-0.3.3/zanonymity/utils.py | utils.py |
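# handle_stats.py -- aggregates per-run statistics (z, pk, cpu, ram, entropy)
# into max_mean.json and plots them against the k or pk goal.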
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import LinearLocator
import statistics
import json
from mpl_toolkits.mplot3d import Axes3D
from collections import defaultdict
import math
from scipy.stats import entropy
MAX_GENERALIZATION = 20
def fill_json_stats():
with open('output.json',"r") as f:
data = json.load(f)
z = data['z']
z = np.array(z)
pk = data['kanon']
pk = np.array(pk)
cpu = data['cpu']
cpu = np.array(cpu)
ram = data['ram']
ram = np.array(ram)
ent = entropy_fun()
with open('max_mean.json', 'r') as f:
data = json.load(f)
data['zmedian'].append(float(np.median(z)))
data['zmax'].append(float(np.max(z)))
data["zmin"].append(float(np.min(z)))
data['zmean'].append(float(np.mean(z)))
data['pkmedian'].append(float(np.median(pk)))
data['pkmax'].append(float(np.max(pk)))
data['pkmin'].append(float(np.min(pk)))
data['pkmean'].append(float(np.mean(pk)))
data['cpumedian'].append(float(np.median(cpu)))
data['cpumax'].append(float(np.max(cpu)))
data['cpumin'].append(float(np.min(cpu)))
data['rammedian'].append(float(np.median(ram)))
data['rammax'].append(float(np.max(ram)))
data['rammin'].append(float(np.min(ram)))
data['entropy'].append(ent)
with open("max_mean.json", "w") as fi:
json.dump(data, fi)
def generate_empty_json():
data = {}
data['zmedian'] = []
data['zmax'] = []
data["zmin"] = []
data['zmean'] = []
data['pkmedian'] = []
data['pkmax'] = []
data['pkmin'] = []
data['pkmean'] = []
data['cpumax'] = []
data['cpumin'] = []
data['cpumedian'] = []
data['rammedian'] = []
data['rammax'] = []
data['rammin'] = []
data['entropy'] = []
with open("max_mean.json", "w") as fi:
json.dump(data, fi)
def plot_z(k):
if (k < 1):
kanon = [2,3,4,5,6,7,8,9]
k_pk = "k"
pk_k = "pk"
else:
kanon = [0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9]
k_pk = "pk"
pk_k = "k"
with open("max_mean.json", "r") as f:
data = json.load(f)
fig, ax_left = plt.subplots(figsize=(15, 8))
str_title = "Considering "+pk_k+" goal of " + str(k)
ax_left.set_title(str_title, fontdict={'fontsize': 15.0, 'fontweight': 'medium'})
#ax_right = ax_left.twinx()
ax_left.plot(kanon,data['zmedian'], color="#EF1717", linewidth="3", label="z median")
ax_left.plot(kanon,data['zmin'], color="#FF3333", linewidth="3", label="z min/max", linestyle="dashed")
ax_left.plot(kanon,data['zmax'], color="#FF3333", linewidth="3",linestyle="dashed")
#ax_right.plot(kanon,data['pkmedian'], color='#000000', linewidth="3", label = "pk median")
#ax_right.plot(kanon,data['pkmin'], color='#808080', linewidth="3", label="pk min/max",linestyle="dashed")
#ax_right.plot(kanon,data['pkmax'], color='#808080', linewidth="3",linestyle="dashed")
ax_left.fill_between(kanon, data['zmin'],data['zmax'], color='#FF3333', alpha=0.5)
#ax_right.fill_between(kanon, data['pkmin'],data['pkmax'], color='#808080', alpha=0.5)
ax_left.set_xlabel(k_pk+' goal', fontsize=30)
ax_left.set_ylabel('z', color="red", fontsize=30)
#ax_right.set_ylabel('pk', color='black', fontsize=30)
ax_left.autoscale()
#ax_right.set_ylim(bottom = 0.0, top = 1.0)
ax_left.tick_params(axis='y', labelcolor="red", labelsize=20.0)
#ax_right.tick_params(axis='y', labelcolor='black', labelsize = 20.0)
#ax_left.get_xaxis().set_major_locator(LinearLocator(numticks=20))
ax_left.tick_params(labelsize=20)
h1, l1 = ax_left.get_legend_handles_labels()
#h2, l2 = ax_right.get_legend_handles_labels()
ax_left.legend(h1,l1,bbox_to_anchor=(1.5, 1), prop={"size":20})
#ax_right.legend(loc='upper right', prop={"size":20})
fig.autofmt_xdate(rotation = 45)
fig.tight_layout()
stringa = "z with "+pk_k+"="+str(k)+".pdf"
fig.savefig(stringa)
def plot_pk(k):
if (k < 1):
kanon = [2,3,4,5,6,7,8,9]
k_pk = "k"
pk_k = "pk"
else:
kanon = [0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9]
k_pk = "pk"
pk_k = "k"
with open("max_mean.json", "r") as f:
data = json.load(f)
fig, ax_left = plt.subplots(figsize=(15, 8))
str_title = "Considering "+pk_k+" goal of " + str(k)
ax_left.set_title(str_title, fontdict={'fontsize': 15.0, 'fontweight': 'medium'})
ax_left.plot(kanon,data['pkmedian'], color="black", linewidth="3", label="pk median")
ax_left.plot(kanon,data['pkmin'], color="gray", linewidth="3", label="pk min/max", linestyle="dashed")
ax_left.plot(kanon,data['pkmax'], color="gray", linewidth="3",linestyle="dashed")
ax_left.fill_between(kanon, data['pkmin'],data['pkmax'], color='gray', alpha=0.4)
ax_left.set_xlabel(k_pk+' goal', fontsize=30)
#ax_left.set_ylabel('pk', color="red", fontsize=30)
ax_left.autoscale()
#ax_left.tick_params(axis='y', labelcolor="red", labelsize=20.0)
ax_left.tick_params(labelsize=20)
h1, l1 = ax_left.get_legend_handles_labels()
ax_left.legend(h1,l1,bbox_to_anchor=(1.5, 1), prop={"size":20})
fig.autofmt_xdate(rotation = 45)
fig.tight_layout()
stringa = pk_k+"="+str(k)+".pdf"
fig.savefig(stringa)
def plot_comp_time(k):
if (k < 1):
kanon = [2,3,4,5,6,7,8,9]
k_pk = "k"
pk_k = "pk"
else:
kanon = [0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9]
k_pk = "pk"
pk_k = "k"
with open("time.json", "r") as f:
data = json.load(f)
fig, ax = plt.subplots(figsize=(10, 8))
str_title = "Considering "+pk_k+" goal of " + str(k)
ax.set_title(str_title, fontdict={'fontsize': 15.0, 'fontweight': 'medium'})
ax.plot(kanon, data['time'])
ax.set_xlabel(k_pk+' goal', fontsize=20)
ax.set_ylabel('Seconds to compute', fontsize=20)
ax.autoscale()
ax.tick_params(axis='y', labelsize=12.0)
ax.tick_params(labelsize = 12.0)
fig.tight_layout()
stringa = "Time " + pk_k+"="+str(k)+".pdf"
fig.savefig(stringa)
def plot_cpu(k):
if (k < 1):
kanon = [2,3,4,5,6,7,8,9]
k_pk = "k"
pk_k = "pk"
else:
kanon = [0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9]
k_pk = "pk"
pk_k = "k"
with open("max_mean.json", "r") as f:
data = json.load(f)
fig, ax_left = plt.subplots(figsize=(15, 8))
str_title = "Considering "+pk_k+" goal of " + str(k)
ax_left.set_title(str_title, fontdict={'fontsize': 15.0, 'fontweight': 'medium'})
ax_left.plot(kanon,data['cpumedian'], linewidth="3", label="z median")
ax_left.plot(kanon,data['cpumin'], linewidth="3", label="z min/max", linestyle="dashed", color = "lightblue")
ax_left.plot(kanon,data['cpumax'], linewidth="3",linestyle="dashed", color = "lightblue")
ax_left.fill_between(kanon, data['cpumin'],data['cpumax'], color='blue', alpha=0.1)
ax_left.set_xlabel(k_pk+' goal', fontsize=30)
ax_left.set_ylabel('cpu %', fontsize=30)
ax_left.autoscale()
ax_left.tick_params(axis='y', labelsize=20.0)
#ax_left.get_xaxis().set_major_locator(LinearLocator(numticks=20))
ax_left.tick_params(labelsize=20)
ax_left.legend(bbox_to_anchor=(1.5, 1), prop={"size":20})
#ax_right.legend(loc='upper right', prop={"size":20})
#fig.autofmt_xdate(rotation = 45)
fig.tight_layout()
stringa = "cpu with "+pk_k+"="+str(k)+".pdf"
fig.savefig(stringa)
def plot_entropy(k):  # fix: k (the pk/k goal) is used throughout this function but was missing from the signature
if (k < 1):
kanon = [2,3,4,5,6,7,8,9]
k_pk = "k"
pk_k = "pk"
else:
kanon = [0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9]
k_pk = "pk"
pk_k = "k"
with open("max_mean.json", "r") as f:
data = json.load(f)
fig, ax_left = plt.subplots(figsize=(15, 8))
str_title = "Considering "+pk_k+" goal of " + str(k)
ax_left.set_title(str_title, fontdict={'fontsize': 15.0, 'fontweight': 'medium'})
ax_left.plot(kanon,data['entropy'], linewidth="3", label="Entropy")
ax_left.set_xlabel(k_pk+' goal', fontsize=30)
ax_left.set_ylabel('Entropy', fontsize=30)
ax_left.autoscale()
ax_left.tick_params(axis='y', labelsize=20.0)
#ax_left.get_xaxis().set_major_locator(LinearLocator(numticks=20))
ax_left.tick_params(labelsize=20)
#ax_left.legend(bbox_to_anchor=(1.5, 1), prop={"size":20})
#ax_right.legend(loc='upper right', prop={"size":20})
#fig.autofmt_xdate(rotation = 45)
fig.tight_layout()
stringa = "Entropy with "+pk_k+"="+str(k)+".pdf"
fig.savefig(stringa)
def entropy_fun():
final_dataset = defaultdict(set)
file = open('output.txt','r')
for line in file:
t,u,a = line.split("\t")
t = float(t)
        a = a.strip()  # strip() returns a new string; reassign or the trailing newline is kept
final_dataset[u].add(a)
tot_user = len(final_dataset)
#print("distinct users: " + str(len(final_dataset)))
final_dataset_inv = defaultdict(list)
for k,v in final_dataset.items():
final_dataset_inv[frozenset(v)].append(k)
ks = np.array([len(v) for v in final_dataset_inv.values()])
#for k in range(1,5):
# print("Final " + str(k) + "-anonymization: " + str(sum(ks[ks >= k])/sum(ks)))
groups = {}
for i in range(1,10000):
if(sum(ks[ks == i]) != 0):
groups[i] = sum(ks[ks == i])
print(groups)
#print(tot_user)
pk_list = []
for G,U in groups.items():
for x in range(int(U/G)):
pk_list.append(G/tot_user)
e = entropy(pk_list)
return e | zanon | /zanon-0.3.3.tar.gz/zanon-0.3.3/zanonymity/handle_stats.py | handle_stats.py |
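# evaluate_output.py -- reconstructs the released dataset from
# simulation_output.txt at different z thresholds and plots p_kanon vs z,
# with and without category generalization.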
import matplotlib.pyplot as plt
import numpy as np
import json
from collections import defaultdict
import pathlib
def plot_ca():
params = json.loads(open('output_params.json', 'r').read())
start = params['start']
c_oa = params['c_oa']
t_oa = params['t_oa']
observed_attributes = params['observed_attributes']
plt.figure(figsize = (10,6))
plt.xlabel('s')
plt.ylabel('c_a')
#plt.plot([0, stop-start], [z, z], '--', label = 'z')
for oa in observed_attributes:
plt.plot([(x - start) for x in t_oa[oa]], c_oa[oa], label = oa)
#plt.xticks()
plt.grid()
plt.legend()
plt.show()
def get_pkanon_with_cat(output, z):
pathlib.Path('./output/').mkdir(parents=True, exist_ok=True)
f = open('output/out_{:02d}.txt'.format(z), 'w+')
attribute_set = set()
final_dataset = defaultdict(set)
for o in output:
record = o[0] #tuple in the form (timestamp, user, attribute)
c_a = o[1] #list of counters per category, most general to the left
        for i, level in enumerate(reversed(c_a)):  # read the counters starting from the one referring to the most specific category
level = int(level)
if level >= z: #if the counter satisfy the threshold, select the appropriate level of generalization
if i == 0:
a = record[2] #if the most specific category satisfies the threshold, output the whole attribute
else:
a = '*'.join(record[2].split('*')[:-i]) #else, remove the z-private specifications
final_dataset[record[1]].add(a)
if a not in attribute_set:
attribute_set.add(a)
f.write(a + '\n')
break
f.close()
final_dataset_inv = defaultdict(list)
for k,v in final_dataset.items():
final_dataset_inv[str(v)].append(k)
#ks = np.array([len(v) for v in final_dataset_inv.values()])
return final_dataset_inv
def get_pkanon_without_cat(output, z):
final_dataset = defaultdict(set)
for o in output:
record = o[0]
c_a = int(o[1][-1])
if c_a >= z:
final_dataset[record[1]].add(record[2])
final_dataset_inv = defaultdict(list)
for k,v in final_dataset.items():
final_dataset_inv[str(v)].append(k)
#ks = np.array([len(v) for v in final_dataset_inv.values()])
return final_dataset_inv
def pkanon_vs_z():
output = []
for line in open('simulation_output.txt', 'r'):
items = line.split('\t')
output.append(((float(items[0]), items[1], items[2]), [x for x in items[3:]]))
z_range = range(1, 51)
#k1 = []
k2 = []
k3 = []
k4 = []
k2_nocat = []
for z in z_range:
ks = np.array([len(v) for v in get_pkanon_with_cat(output, z).values()])
k2.append(sum(ks[ks >= 2])/sum(ks))
k3.append(sum(ks[ks >= 3])/sum(ks))
k4.append(sum(ks[ks >= 4])/sum(ks))
ks_nocat = np.array([len(v) for v in get_pkanon_without_cat(output, z).values()])
if (sum(ks_nocat) == 0):
break
k2_nocat.append(sum(ks_nocat[ks_nocat >= 2])/sum(ks_nocat))
plt.figure()
plt.xlabel('z')
plt.ylabel('p_kanon')
    # the loop above may break early, so truncate the x axis to match each series
    plt.plot(z_range[:len(k2)], k2, label = 'w/ categories')
    plt.plot(z_range[:len(k2_nocat)], k2_nocat, label = 'w/o categories')
plt.ylim(bottom = 0., top = 1.)
plt.legend()
plt.savefig('pkanon_cat_nocat.pdf')
plt.show()
plt.figure()
plt.xlabel('z')
plt.ylabel('p_kanon')
    plt.plot(z_range[:len(k2)], k2, label = 'k = 2')
    plt.plot(z_range[:len(k3)], k3, label = 'k = 3')
    plt.plot(z_range[:len(k4)], k4, label = 'k = 4')
plt.ylim(bottom = 0., top = 1.)
plt.legend()
plt.savefig('pkanon_vs_k.pdf')
plt.show()
def evaluate_output():
#plot_ca()
pkanon_vs_z() | zanon | /zanon-0.3.3.tar.gz/zanon-0.3.3/zanonymity/evaluate_output.py | evaluate_output.py |
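# evaluate_category.py -- for a given z, counts records and distinct
# attributes released per generalization level and plots the top categories.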
import sys
import numpy as np
from numpy import array
import matplotlib.pyplot as plt
MAX_GENERALIZATION = 20
def evaluate(z):
categories = {}
count_attr = [0]*MAX_GENERALIZATION #count number of records passed per level of generalization
count_cat = {} #count distinct categories/attributes passed per level of generalization
j = 0
for line in open("simulation_output.txt","r"):
items = line.split("\t")
for i,count in enumerate(items[3::][::-1]):
if (int(count) >= z):
if (i == 0):
lev = items[2]
else:
lev = '*'.join(items[2].split('*')[:-i])
j += 1
count_attr[i] += 1
if i not in count_cat:
count_cat[i] = [lev]
else:
if lev not in count_cat[i]:
count_cat[i].append(lev)
if(lev not in categories):
categories[lev] = 1
else: categories[lev] += 1
break
dict = {k: v for k, v in sorted(categories.items(), key=lambda item: item[1], reverse = True)}
f = open("categories_zanon.txt", "w") #store top categories
for i in dict:
f.write(i + ": " + str(dict[i]) + "\n")
f.close()
fig = plt.figure(figsize=(8,8))
plt.barh(list(dict.keys())[:40],list(dict.values())[:40])
plt.gca().invert_yaxis()
plt.title("Top 40 attributes/categories over threshold with z = " + str(z))
plt.tight_layout()
plt.savefig('categories.pdf')
print("Outputs over threshold with z = " + str(z) + ": " + str(j))
for index,i in enumerate(count_attr):
if((i) == 0):
break
print("With " + str(index) + "-generalization: " + str(i) + ". Distinct: " + str(len(count_cat[index])))
print("Number of distinct outputs over threshold: " + str(len(categories)))
def evaluate_cat(zeta):
evaluate(zeta) | zanon | /zanon-0.3.3.tar.gz/zanon-0.3.3/zanonymity/evaluate_category.py | evaluate_category.py |
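# ez_setup.py -- the standard setuptools bootstrap script of its era (note:
# Python 2 syntax); it downloads and installs setuptools when it is missing.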
import sys
DEFAULT_VERSION = "0.6c1"
DEFAULT_URL = "http://cheeseshop.python.org/packages/%s/s/setuptools/" % sys.version[:3]
md5_data = {
'setuptools-0.6b1-py2.3.egg': '8822caf901250d848b996b7f25c6e6ca',
'setuptools-0.6b1-py2.4.egg': 'b79a8a403e4502fbb85ee3f1941735cb',
'setuptools-0.6b2-py2.3.egg': '5657759d8a6d8fc44070a9d07272d99b',
'setuptools-0.6b2-py2.4.egg': '4996a8d169d2be661fa32a6e52e4f82a',
'setuptools-0.6b3-py2.3.egg': 'bb31c0fc7399a63579975cad9f5a0618',
'setuptools-0.6b3-py2.4.egg': '38a8c6b3d6ecd22247f179f7da669fac',
'setuptools-0.6b4-py2.3.egg': '62045a24ed4e1ebc77fe039aa4e6f7e5',
'setuptools-0.6b4-py2.4.egg': '4cb2a185d228dacffb2d17f103b3b1c4',
'setuptools-0.6c1-py2.3.egg': 'b3f2b5539d65cb7f74ad79127f1a908c',
'setuptools-0.6c1-py2.4.egg': 'b45adeda0667d2d2ffe14009364f2a4b',
}
import sys, os
def _validate_md5(egg_name, data):
if egg_name in md5_data:
from md5 import md5
digest = md5(data).hexdigest()
if digest != md5_data[egg_name]:
print >>sys.stderr, (
"md5 validation of %s failed! (Possible download problem?)"
% egg_name
)
sys.exit(2)
return data
def use_setuptools(
version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir,
download_delay=15
):
"""Automatically find/download setuptools and make it available on sys.path
`version` should be a valid setuptools version number that is available
as an egg for download under the `download_base` URL (which should end with
a '/'). `to_dir` is the directory where setuptools will be downloaded, if
it is not already available. If `download_delay` is specified, it should
be the number of seconds that will be paused before initiating a download,
should one be required. If an older version of setuptools is installed,
this routine will print a message to ``sys.stderr`` and raise SystemExit in
an attempt to abort the calling script.
"""
try:
import setuptools
if setuptools.__version__ == '0.0.1':
print >>sys.stderr, (
"You have an obsolete version of setuptools installed. Please\n"
"remove it from your system entirely before rerunning this script."
)
sys.exit(2)
except ImportError:
egg = download_setuptools(version, download_base, to_dir, download_delay)
sys.path.insert(0, egg)
import setuptools; setuptools.bootstrap_install_from = egg
import pkg_resources
try:
pkg_resources.require("setuptools>="+version)
except pkg_resources.VersionConflict:
# XXX could we install in a subprocess here?
print >>sys.stderr, (
"The required version of setuptools (>=%s) is not available, and\n"
"can't be installed while this script is running. Please install\n"
" a more recent version first."
) % version
sys.exit(2)
def download_setuptools(
version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir,
delay = 15
):
"""Download setuptools from a specified location and return its filename
`version` should be a valid setuptools version number that is available
as an egg for download under the `download_base` URL (which should end
with a '/'). `to_dir` is the directory where the egg will be downloaded.
`delay` is the number of seconds to pause before an actual download attempt.
"""
import urllib2, shutil
egg_name = "setuptools-%s-py%s.egg" % (version,sys.version[:3])
url = download_base + egg_name
saveto = os.path.join(to_dir, egg_name)
src = dst = None
if not os.path.exists(saveto): # Avoid repeated downloads
try:
from distutils import log
if delay:
log.warn("""
---------------------------------------------------------------------------
This script requires setuptools version %s to run (even to display
help). I will attempt to download it for you (from
%s), but
you may need to enable firewall access for this script first.
I will start the download in %d seconds.
(Note: if this machine does not have network access, please obtain the file
%s
and place it in this directory before rerunning this script.)
---------------------------------------------------------------------------""",
version, download_base, delay, url
); from time import sleep; sleep(delay)
log.warn("Downloading %s", url)
src = urllib2.urlopen(url)
# Read/write all in one block, so we don't create a corrupt file
# if the download is interrupted.
data = _validate_md5(egg_name, src.read())
dst = open(saveto,"wb"); dst.write(data)
finally:
if src: src.close()
if dst: dst.close()
return os.path.realpath(saveto)
def main(argv, version=DEFAULT_VERSION):
"""Install or upgrade setuptools and EasyInstall"""
try:
import setuptools
except ImportError:
import tempfile, shutil
tmpdir = tempfile.mkdtemp(prefix="easy_install-")
try:
egg = download_setuptools(version, to_dir=tmpdir, delay=0)
sys.path.insert(0,egg)
from setuptools.command.easy_install import main
return main(list(argv)+[egg]) # we're done here
finally:
shutil.rmtree(tmpdir)
else:
if setuptools.__version__ == '0.0.1':
# tell the user to uninstall obsolete version
use_setuptools(version)
req = "setuptools>="+version
import pkg_resources
try:
pkg_resources.require(req)
except pkg_resources.VersionConflict:
try:
from setuptools.command.easy_install import main
except ImportError:
from easy_install import main
main(list(argv)+[download_setuptools(delay=0)])
sys.exit(0) # try to force an exit
else:
if argv:
from setuptools.command.easy_install import main
main(argv)
else:
print "Setuptools version",version,"or greater has been installed."
print '(Run "ez_setup.py -U setuptools" to reinstall or upgrade.)'
def update_md5(filenames):
"""Update our built-in md5 registry"""
import re
from md5 import md5
for name in filenames:
base = os.path.basename(name)
f = open(name,'rb')
md5_data[base] = md5(f.read()).hexdigest()
f.close()
data = [" %r: %r,\n" % it for it in md5_data.items()]
data.sort()
repl = "".join(data)
import inspect
srcfile = inspect.getsourcefile(sys.modules[__name__])
f = open(srcfile, 'rb'); src = f.read(); f.close()
match = re.search("\nmd5_data = {\n([^}]+)}", src)
if not match:
print >>sys.stderr, "Internal error!"
sys.exit(2)
src = src[:match.start(1)] + repl + src[match.end(1):]
f = open(srcfile,'w')
f.write(src)
f.close()
if __name__=='__main__':
if len(sys.argv)>2 and sys.argv[1]=='--md5update':
update_md5(sys.argv[2:])
else:
main(sys.argv[1:]) | zanshin | /zanshin-0.6.tar.gz/zanshin-0.6/ez_setup.py | ez_setup.py |
Overview
========
Zanshin is a library for collaboration over HTTP, WebDAV and CalDAV. It was originally conceived by Lisa Dusseault, and is currently being developed and maintained by Grant Baillie. Its primary client is the Chandler `Sharing Project`_.
Goals
=====
* **High-level** API: Zanshin works at the level of resources and properties of resources, rather than HTTP requests and responses. This, coupled with careful thought about what data to persist, will hopefully lead to an easier-to-use and better performing API than what you get by making the obvious 1:1 mapping between Python method calls and HTTP requests.
* **Asynchronicity** via the `Twisted networking framework`_. For an excellent discussion of Chandler's use of Twisted, see TwistedHome_.
Documentation
=============
See `Lisa's original design notes`_.
There are docstrings ranging from sparse to thorough in the code itself. You
can generate epydoc documentation by running::
python setup.py doc
inside the project directory. (The ``zanshin.webdav`` module contains a
fairly detailed doctest).
Installation
============
The project uses setuptools_, and is therefore installable via the
``setup.py`` script, or by using the standard `easy_install`_ script.
Source code
===========
The (read-only) subversion trunk is: `http://svn.osafoundation.org/zanshin/trunk`_
.. _http://svn.osafoundation.org/zanshin/trunk: http://svn.osafoundation.org/zanshin/trunk#egg=zanshin-dev
Er, What Does "Zanshin" Mean, Anyway?
=====================================
Zanshin means both readiness and follow-through; it's the attitude of being
ready for what happens and complete in how you react to it. Lisa picked
"zanshin" to sound cool (Japanese!) and to convey the first Goal above.
.. _Sharing Project: http://chandlerproject.org/Projects/SharingProject
.. _Twisted networking framework: http://twistedmatrix.com
.. _TwistedHome: http://chandlerproject.org/Projects/TwistedHome
.. _Lisa's original design notes: http://chandlerproject.org/Journal/LisaDusseault20050315
.. _setuptools: http://peak.telecommunity.com/DevCenter/setuptools
.. _easy_install: http://peak.telecommunity.com/DevCenter/EasyInstall
| zanshin | /zanshin-0.6.tar.gz/zanshin-0.6/README.txt | README.txt |
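# davlib.py -- a legacy Python 2 WebDAV client built on httplib: it wraps the
# DAV methods (PROPFIND, PROPPATCH, MKCOL, COPY, MOVE, LOCK, UNLOCK) and adds
# higher-level property helpers, sending HTTP Basic authentication.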
import httplib
import urllib
import string
import types
import mimetypes
import base64
INFINITY = 'infinity'
XML_DOC_HEADER = '<?xml version="1.0" encoding="utf-8"?>'
XML_CONTENT_TYPE = 'text/xml; charset="utf-8"'
# block size for copying files up to the server
BLOCKSIZE = 16384
class DAV(httplib.HTTPConnection):
def setauth(self, username, password):
self._username = username
self._password = password
def get(self, url, extra_hdrs={ }):
return self._request('GET', url, extra_hdrs=extra_hdrs)
def head(self, url, extra_hdrs={ }):
return self._request('HEAD', url, extra_hdrs=extra_hdrs)
def post(self, url, data={ }, body=None, extra_hdrs={ }):
headers = extra_hdrs.copy()
assert body or data, "body or data must be supplied"
assert not (body and data), "cannot supply both body and data"
if data:
body = ''
for key, value in data.items():
if isinstance(value, types.ListType):
for item in value:
body = body + '&' + key + '=' + urllib.quote(str(item))
else:
body = body + '&' + key + '=' + urllib.quote(str(value))
body = body[1:]
headers['Content-Type'] = 'application/x-www-form-urlencoded'
return self._request('POST', url, body, headers)
def options(self, url='*', extra_hdrs={ }):
return self._request('OPTIONS', url, extra_hdrs=extra_hdrs)
def trace(self, url, extra_hdrs={ }):
return self._request('TRACE', url, extra_hdrs=extra_hdrs)
def put(self, url, contents,
content_type=None, content_enc=None, extra_hdrs={ }):
if not content_type:
content_type, content_enc = mimetypes.guess_type(url)
headers = extra_hdrs.copy()
if content_type:
headers['Content-Type'] = content_type
if content_enc:
headers['Content-Encoding'] = content_enc
return self._request('PUT', url, contents, headers)
def delete(self, url, extra_hdrs={ }):
return self._request('DELETE', url, extra_hdrs=extra_hdrs)
def propfind(self, url, body=None, depth=None, extra_hdrs={ }):
headers = extra_hdrs.copy()
headers['Content-Type'] = XML_CONTENT_TYPE
if depth is not None:
headers['Depth'] = str(depth)
return self._request('PROPFIND', url, body, headers)
def proppatch(self, url, body, extra_hdrs={ }):
headers = extra_hdrs.copy()
headers['Content-Type'] = XML_CONTENT_TYPE
return self._request('PROPPATCH', url, body, headers)
def mkcol(self, url, extra_hdrs={ }):
return self._request('MKCOL', url, extra_hdrs=extra_hdrs)
def move(self, src, dst, extra_hdrs={ }):
headers = extra_hdrs.copy()
headers['Destination'] = dst
return self._request('MOVE', src, extra_hdrs=headers)
def copy(self, src, dst, depth=None, extra_hdrs={ }):
headers = extra_hdrs.copy()
headers['Destination'] = dst
if depth is not None:
headers['Depth'] = str(depth)
return self._request('COPY', src, extra_hdrs=headers)
def lock(self, url, owner='', timeout=None, depth=None,
scope='exclusive', type='write', extra_hdrs={ }):
headers = extra_hdrs.copy()
headers['Content-Type'] = XML_CONTENT_TYPE
if depth is not None:
headers['Depth'] = str(depth)
if timeout is not None:
headers['Timeout'] = timeout
body = XML_DOC_HEADER + \
'<DAV:lockinfo xmlns:DAV="DAV:">' + \
'<DAV:lockscope><DAV:%s/></DAV:lockscope>' % scope + \
'<DAV:locktype><DAV:%s/></DAV:locktype>' % type + \
owner + \
'</DAV:lockinfo>'
return self._request('LOCK', url, body, extra_hdrs=headers)
def unlock(self, url, locktoken, extra_hdrs={ }):
headers = extra_hdrs.copy()
if locktoken[0] != '<':
locktoken = '<' + locktoken + '>'
headers['Lock-Token'] = locktoken
return self._request('UNLOCK', url, extra_hdrs=headers)
def _request(self, method, url, body=None, extra_hdrs={}):
"Internal method for sending a request."
auth = 'Basic ' + string.strip(base64.encodestring(self._username + ':' + self._password))
extra_hdrs['Authorization'] = auth
self.request(method, url, body, extra_hdrs)
return self.getresponse()
#
# Higher-level methods for typical client use
#
def allprops(self, url, depth=None):
return self.propfind(url, depth=depth)
def propnames(self, url, depth=None):
body = XML_DOC_HEADER + \
'<DAV:propfind xmlns:DAV="DAV:"><DAV:propname/></DAV:propfind>'
return self.propfind(url, body, depth)
def getprops(self, url, *names, **kw):
assert names, 'at least one property name must be provided'
if kw.has_key('ns'):
xmlns = ' xmlns:NS="' + kw['ns'] + '"'
ns = 'NS:'
del kw['ns']
else:
xmlns = ns = ''
if kw.has_key('depth'):
depth = kw['depth']
del kw['depth']
else:
depth = 0
assert not kw, 'unknown arguments'
body = XML_DOC_HEADER + \
'<DAV:propfind xmlns:DAV="DAV:"' + xmlns + '><DAV:prop><' + ns + \
string.joinfields(names, '/><' + ns) + \
'/></DAV:prop></DAV:propfind>'
return self.propfind(url, body, depth)
def delprops(self, url, *names, **kw):
assert names, 'at least one property name must be provided'
if kw.has_key('ns'):
xmlns = ' xmlns:NS="' + kw['ns'] + '"'
ns = 'NS:'
del kw['ns']
else:
xmlns = ns = ''
assert not kw, 'unknown arguments'
body = XML_DOC_HEADER + \
'<DAV:propertyupdate xmlns:DAV="DAV:"' + xmlns + \
'><DAV:remove><DAV:prop><' + ns + \
string.joinfields(names, '/><' + ns) + \
'/></DAV:prop></DAV:remove></DAV:propertyupdate>'
return self.proppatch(url, body)
def setprops(self, url, *xmlprops, **props):
assert xmlprops or props, 'at least one property must be provided'
xmlprops = list(xmlprops)
if props.has_key('ns'):
xmlns = ' xmlns:NS="' + props['ns'] + '"'
ns = 'NS:'
del props['ns']
else:
xmlns = ns = ''
for key, value in props.items():
if value:
xmlprops.append('<%s%s>%s</%s%s>' % (ns, key, value, ns, key))
else:
xmlprops.append('<%s%s/>' % (ns, key))
elems = string.joinfields(xmlprops, '')
body = XML_DOC_HEADER + \
'<DAV:propertyupdate xmlns:DAV="DAV:"' + xmlns + \
'><DAV:set><DAV:prop>' + \
elems + \
'</DAV:prop></DAV:set></DAV:propertyupdate>'
return self.proppatch(url, body)
""" My new and improved? version """
def setprops2(self, url, xmlstuff):
body = XML_DOC_HEADER + \
'<D:propertyupdate xmlns:D="DAV:">' + \
'<D:set><D:prop>' + xmlstuff + '</D:prop></D:set>' + \
'</D:propertyupdate>'
return self.proppatch(url, body)
#def get_lock(self, url, owner='', timeout=None, depth=None):
# response = self.lock(url, owner, timeout, depth)
# #response.parse_lock_response()
# return response.locktoken | zanshin | /zanshin-0.6.tar.gz/zanshin-0.6/misc/davlib.py | davlib.py |
|PyPI version shields.io| |PyPI pyversions|
Zanshin CLI
===========
This Python package provides a command-line utility to interact with the
`API of the Zanshin SaaS
service <https://api.zanshin.tenchisecurity.com>`__ from `Tenchi
Security <https://www.tenchisecurity.com>`__.
Is it based on the Zanshin Python SDK available on
`Github <https://github.com/tenchi-security/zanshin-sdk-python>`__ and
`PyPI <https://pypi.python.org/pypi/zanshinsdk/>`__.
If you are a Zanshin customer and have any questions regarding the use
of the service, its API or this command-line utility, please get in
touch via e-mail at support {at} tenchisecurity {dot} com or via the
support widget on the `Zanshin
Portal <https://zanshin.tenchisecurity.com>`__.
Installation
------------
We recommend the CLI is installed using
`pipx <https://pypa.github.io/pipx/installation/>`__, using the command:
.. code:: shell
pipx install zanshincli
When a new version is available, you can upgrade it with:
.. code:: shell
pipx upgrade zanshincli
Configuration File
------------------
The way the SDK and CLI handles credentials is by using a configuration
file in the format created by the Python
`RawConfigParser <https://docs.python.org/3/library/configparser.html#configparser.RawConfigParser>`__
class.
The file is located at ``~/.tenchi/config``, where ``~`` is the `current
user's home
directory <https://docs.python.org/3/library/pathlib.html#pathlib.Path.home>`__.
Each section is treated as a configuration profile, and the SDK and CLI
will look for a section called ``default`` if another is not explicitly
selected.
These are the supported options:
- ``api_key`` (required) which contains the Zanshin API key obtained at
the `Zanshin web
portal <https://zanshin.tenchisecurity.com/my-profile>`__.
- ``user_agent`` (optional) allows you to override the default
user-agent header used by the SDK when making API requests.
- ``api_url`` (optional) directs the SDK and CLI to use a different API
endpoint than the default
(`https://api.zanshin.tenchisecurity.com <https://api.zanshin.tenchisecurity.com>`__).
You can populate the file with the ``zanshin init`` command of the CLI
tool. This is what a minimal configuration file would look like:
.. code:: ini
[default]
api_key = abcdefghijklmnopqrstuvxyz
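Because each section of the file is a separate profile, you can keep
credentials for multiple accounts in the same file and pick one with the
global ``--profile`` option. A sketch (the ``staging`` profile name and
both keys are placeholders):

.. code:: ini

   [default]
   api_key = abcdefghijklmnopqrstuvxyz

   [staging]
   api_key = zyxwvutsrqponmlkjihgfedcba

.. code:: shell

   $ zanshin --profile staging organization list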
Using the CLI Utility
---------------------
This package installs a command-line utility called ``zanshin`` built
with the great `Typer <https://typer.tiangolo.com/>`__ package.
You can obtain help by using the ``--help`` option.
Keep in mind that when options are present that expect multiple values,
these need to be provided as multiple options. For example if you wanted
to list an organization's alerts filtering by the OPEN and RISK_ACCEPTED
states, this is the command you would use:
.. code:: shell
$ zanshin organization alerts d48edaa6-871a-4082-a196-4daab372d4a1 --state OPEN --state RISK_ACCEPTED
Command Reference
-----------------
``zanshin``
===========
Command-line utility to interact with the Zanshin SaaS service offered
by Tenchi Security
(`https://tenchisecurity.com <https://tenchisecurity.com>`__), go to
`https://github.com/tenchi-security/zanshin-cli <https://github.com/tenchi-security/zanshin-cli>`__
for license, source code and documentation
**Usage**:
.. code:: console
$ zanshin [OPTIONS] COMMAND [ARGS]...
**Options**:
- ``--profile TEXT``: Configuration file section to read API key and
  configuration from [default: default]
- ``--format [json|table|csv|html]``: Output format to use for list
operations [default: json]
- ``--verbose / --no-verbose``: Print more information to stderr
[default: True]
- ``--debug / --no-debug``: Enable debug logging in the SDK [default:
False]
- ``--install-completion``: Install completion for the current shell.
- ``--show-completion``: Show completion for the current shell, to copy
it or customize the installation.
- ``--help``: Show this message and exit.
**Commands**:
- ``account``: Operations on user the API key owner has...
- ``alert``: Operations on alerts the API key owner has...
- ``init``: Update settings on configuration file.
- ``organization``: Operations on organizations the API key owner...
- ``summary``: Operations on summaries the API key owner has...
- ``version``: Display the program and Python versions in...
``zanshin account``
-------------------
Operations on user the API key owner has direct access to
**Usage**:
.. code:: console
$ zanshin account [OPTIONS] COMMAND [ARGS]...
**Options**:
- ``--help``: Show this message and exit.
**Commands**:
- ``api_key``: Operations on API keys from account the API...
- ``invites``: Operations on invites from account the API...
- ``me``: Returns the details of the user account that...
``zanshin account api_key``
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Operations on API keys from account the API key owner has direct access
to
**Usage**:
.. code:: console
$ zanshin account api_key [OPTIONS] COMMAND [ARGS]...
**Options**:
- ``--help``: Show this message and exit.
**Commands**:
- ``create``: Creates a new API key for the current logged...
- ``delete``: Deletes a given API key by its id, it will...
- ``list``: Iterates over the API keys of current logged...
``zanshin account api_key create``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Creates a new API key for the currently logged-in user. API keys can be
used to interact with the Zanshin API directly on behalf of that user.
**Usage**:
.. code:: console
$ zanshin account api_key create [OPTIONS] NAME
**Arguments**:
- ``NAME``: Name of the new API key [required]
**Options**:
- ``--help``: Show this message and exit.
``zanshin account api_key delete``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Deletes a given API key by its ID; it will only work if the given ID
belongs to the currently logged-in user.
**Usage**:
.. code:: console
$ zanshin account api_key delete [OPTIONS] API_KEY_ID
**Arguments**:
- ``API_KEY_ID``: UUID of the API key to delete [required]
**Options**:
- ``--help``: Show this message and exit.
``zanshin account api_key list``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Iterates over the API keys of the currently logged-in user.
**Usage**:
.. code:: console
$ zanshin account api_key list [OPTIONS]
**Options**:
- ``--help``: Show this message and exit.
``zanshin account invites``
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Operations on invites from account the API key owner has direct access
to
**Usage**:
.. code:: console
$ zanshin account invites [OPTIONS] COMMAND [ARGS]...
**Options**:
- ``--help``: Show this message and exit.
**Commands**:
- ``accept``: Accepts an invitation with the informed ID,...
- ``get``: Gets a specific invitation details, it only...
- ``list``: Iterates over the invites of current logged...
``zanshin account invites accept``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Accepts an invitation with the given ID; it only works if the user
accepting the invitation is the user that received it.
**Usage**:
.. code:: console
$ zanshin account invites accept [OPTIONS] INVITE_ID
**Arguments**:
- ``INVITE_ID``: UUID of the invite [required]
**Options**:
- ``--help``: Show this message and exit.
``zanshin account invites get``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Gets the details of a specific invitation; it only works if the
invitation was made for the currently logged-in user.
**Usage**:
.. code:: console
$ zanshin account invites get [OPTIONS] INVITE_ID
**Arguments**:
- ``INVITE_ID``: UUID of the invite [required]
**Options**:
- ``--help``: Show this message and exit.
``zanshin account invites list``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Iterates over the invites of the currently logged-in user.
**Usage**:
.. code:: console
$ zanshin account invites list [OPTIONS]
**Options**:
- ``--help``: Show this message and exit.
``zanshin account me``
~~~~~~~~~~~~~~~~~~~~~~
Returns the details of the user account that owns the API key used by
this Connection instance.
**Usage**:
.. code:: console
$ zanshin account me [OPTIONS]
**Options**:
- ``--help``: Show this message and exit.
``zanshin alert``
-----------------
Operations on alerts the API key owner has direct access to
**Usage**:
.. code:: console
$ zanshin alert [OPTIONS] COMMAND [ARGS]...
**Options**:
- ``--help``: Show this message and exit.
**Commands**:
- ``get``: Returns details about a specified alert
- ``list``: List alerts from a given organization, with...
- ``list_following``: List following alerts from a given...
- ``list_grouped``: List grouped alerts from a given...
- ``list_grouped_following``: List grouped following alerts from a
given...
- ``list_history``: List alerts from a given organization, with...
- ``list_history_following``: List alerts from a given organization,
with...
- ``update``: Updates the alert.
``zanshin alert get``
~~~~~~~~~~~~~~~~~~~~~
Returns details about a specified alert
**Usage**:
.. code:: console
$ zanshin alert get [OPTIONS] ALERT_ID
**Arguments**:
- ``ALERT_ID``: UUID of the alert to look up [required]
**Options**:
- ``--list-history / --no-list-history``: History of this alert.
[default: False]
- ``--list-comments / --no-list-comments``: Comments of this alert.
[default: False]
- ``--help``: Show this message and exit.
``zanshin alert list``
~~~~~~~~~~~~~~~~~~~~~~
List alerts from a given organization, with optional filters by scan
target, state or severity.
**Usage**:
.. code:: console
$ zanshin alert list [OPTIONS] ORGANIZATION_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
**Options**:
- ``--scan-target-id UUID``: Only list alerts from the specified scan
  targets.
- ``--states [OPEN|ACTIVE|IN_PROGRESS|RISK_ACCEPTED|MITIGATING_CONTROL|FALSE_POSITIVE|CLOSED]``:
Only list alerts in the specified states. [default: OPEN,
IN_PROGRESS, RISK_ACCEPTED, MITIGATING_CONTROL, FALSE_POSITIVE]
- ``--severity [CRITICAL|HIGH|MEDIUM|LOW|INFO]``: Only list alerts with
  the specified severities [default: CRITICAL, HIGH, MEDIUM, LOW, INFO]
- ``--language [pt-BR|en-US]``: Show alert titles in the specified
language [default: en-US]
- ``--created-at-start TEXT``: Date created starts at (format
YYYY-MM-DDTHH:MM:SS)
- ``--created-at-end TEXT``: Date created ends at (format
YYYY-MM-DDTHH:MM:SS)
- ``--updated-at-start TEXT``: Date updated starts at (format
YYYY-MM-DDTHH:MM:SS)
- ``--updated-at-end TEXT``: Date updated ends at (format
YYYY-MM-DDTHH:MM:SS)
- ``--search TEXT``: Text to search for in the alerts [default: ]
- ``--sort [asc|desc]``: Sort order [default: desc]
- ``--order [scanTargetId|resource|rule|severity|state|createdAt|updatedAt]``:
Field to sort results on [default: severity]
- ``--help``: Show this message and exit.
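For example, to list only critical and high severity alerts created since
the start of 2022 (the organization UUID below is a placeholder):

.. code:: console

   $ zanshin alert list d48edaa6-871a-4082-a196-4daab372d4a1 --severity CRITICAL --severity HIGH --created-at-start 2022-01-01T00:00:00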
``zanshin alert list_following``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
List following alerts from a given organization, with optional filters
by following ids, state or severity.
**Usage**:
.. code:: console
$ zanshin alert list_following [OPTIONS] ORGANIZATION_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
**Options**:
- ``--following-ids UUID``: Only list alerts from the specified scan
targets.
- ``--states [OPEN|ACTIVE|IN_PROGRESS|RISK_ACCEPTED|MITIGATING_CONTROL|FALSE_POSITIVE|CLOSED]``:
Only list alerts in the specified states. [default: OPEN,
IN_PROGRESS, RISK_ACCEPTED, MITIGATING_CONTROL, FALSE_POSITIVE]
- ``--severity [CRITICAL|HIGH|MEDIUM|LOW|INFO]``: Only list alerts with
the specified severities [default: CRITICAL, HIGH, MEDIUM, LOW, INFO]
- ``--created-at-start TEXT``: Date created starts at (format
YYYY-MM-DDTHH:MM:SS)
- ``--created-at-end TEXT``: Date created ends at (format
YYYY-MM-DDTHH:MM:SS)
- ``--updated-at-start TEXT``: Date updated starts at (format
YYYY-MM-DDTHH:MM:SS)
- ``--updated-at-end TEXT``: Date updated ends at (format
YYYY-MM-DDTHH:MM:SS)
- ``--search TEXT``: Text to search for in the alerts [default: ]
- ``--sort [asc|desc]``: Sort order [default: desc]
- ``--order [scanTargetId|resource|rule|severity|state|createdAt|updatedAt]``:
Field to sort results on [default: severity]
- ``--help``: Show this message and exit.
``zanshin alert list_grouped``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
List grouped alerts from a given organization, with optional filters by
scan target, state or severity.
**Usage**:
.. code:: console
$ zanshin alert list_grouped [OPTIONS] ORGANIZATION_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
**Options**:
- ``--scan-target-id UUID``: Only list alerts from the specified scan
  targets.
- ``--state [OPEN|ACTIVE|IN_PROGRESS|RISK_ACCEPTED|MITIGATING_CONTROL|FALSE_POSITIVE|CLOSED]``:
Only list alerts in the specified states. [default: OPEN,
IN_PROGRESS, RISK_ACCEPTED, MITIGATING_CONTROL, FALSE_POSITIVE]
- ``--severity [CRITICAL|HIGH|MEDIUM|LOW|INFO]``: Only list alerts with
  the specified severities [default: CRITICAL, HIGH, MEDIUM, LOW, INFO]
- ``--help``: Show this message and exit.
``zanshin alert list_grouped_following``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
List grouped following alerts from a given organization, with optional
filters by scan target, state or severity.
**Usage**:
.. code:: console
$ zanshin alert list_grouped_following [OPTIONS] ORGANIZATION_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
**Options**:
- ``--following-ids UUID``: Only list alerts from the specified scan
  targets.
- ``--state [OPEN|ACTIVE|IN_PROGRESS|RISK_ACCEPTED|MITIGATING_CONTROL|FALSE_POSITIVE|CLOSED]``:
Only list alerts in the specified states. [default: OPEN,
IN_PROGRESS, RISK_ACCEPTED, MITIGATING_CONTROL, FALSE_POSITIVE]
- ``--severity [CRITICAL|HIGH|MEDIUM|LOW|INFO]``: Only list alerts with
the specified severities [default: CRITICAL, HIGH, MEDIUM, LOW, INFO]
- ``--help``: Show this message and exit.
``zanshin alert list_history``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
List alerts from a given organization, with optional filters by scan
target, state or severity.
**Usage**:
.. code:: console
$ zanshin alert list_history [OPTIONS] ORGANIZATION_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
**Options**:
- ``--scan-target-id UUID``: Only list alerts from the specified scan
  targets.
- ``--cursor TEXT``: Cursor.
- ``--persist / --no-persist``: Persist. [default: False]
- ``--help``: Show this message and exit.
``zanshin alert list_history_following``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
List alerts from a given organization, with optional filters by scan
target, state or severity.
**Usage**:
.. code:: console
$ zanshin alert list_history_following [OPTIONS] ORGANIZATION_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
**Options**:
- ``--following-ids UUID``: Only list alerts from the specified scan
  targets.
- ``--cursor TEXT``: Cursor.
- ``--persist / --no-persist``: Persist. [default: False]
- ``--help``: Show this message and exit.
``zanshin alert update``
~~~~~~~~~~~~~~~~~~~~~~~~
Updates the alert.
**Usage**:
.. code:: console
$ zanshin alert update [OPTIONS] ORGANIZATION_ID SCAN_TARGET_ID ALERT_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization that owns the alert
[required]
- ``SCAN_TARGET_ID``: UUID of the scan target associated with the alert
[required]
- ``ALERT_ID``: UUID of the alert [required]
**Options**:
- ``--state [OPEN|IN_PROGRESS|RISK_ACCEPTED|MITIGATING_CONTROL|FALSE_POSITIVE]``:
New alert state
- ``--labels TEXT``: Custom label(s) for the alert
- ``--comment TEXT``: A comment when closing the alert with
RISK_ACCEPTED, FALSE_POSITIVE, MITIGATING_CONTROL
- ``--help``: Show this message and exit.
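For example, to close an alert as a false positive with an explanatory
comment (replace the three uppercase arguments with your own UUIDs):

.. code:: console

   $ zanshin alert update ORGANIZATION_ID SCAN_TARGET_ID ALERT_ID --state FALSE_POSITIVE --comment "Not exploitable in this environment"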
``zanshin init``
----------------
Update settings on configuration file.
**Usage**:
.. code:: console
$ zanshin init [OPTIONS]
**Options**:
- ``--help``: Show this message and exit.
``zanshin organization``
------------------------
Operations on organizations the API key owner has direct access to
**Usage**:
.. code:: console
$ zanshin organization [OPTIONS] COMMAND [ARGS]...
**Options**:
- ``--help``: Show this message and exit.
**Commands**:
- ``follower``: Operations on followers of organization the...
- ``following``: Operations on following of organization the...
- ``get``: Gets an organization given its ID.
- ``list``: Lists the organizations this user has direct...
- ``member``: Operations on members of organization the API...
- ``scan_target``: Operations on scan targets from organizations...
- ``update``: Updates an organization given its ID.
``zanshin organization follower``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Operations on followers of organization the API key owner has direct
access to
**Usage**:
.. code:: console
$ zanshin organization follower [OPTIONS] COMMAND [ARGS]...
**Options**:
- ``--help``: Show this message and exit.
**Commands**:
- ``list``: Lists the followers of organization this user...
- ``request``: Operations on follower requests of...
- ``stop``: Stops one organization follower of another.
``zanshin organization follower list``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Lists the followers of organization this user has direct access to.
**Usage**:
.. code:: console
$ zanshin organization follower list [OPTIONS] ORGANIZATION_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
**Options**:
- ``--help``: Show this message and exit.
``zanshin organization follower request``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Operations on follower requests of organization the API key owner has
direct access to
**Usage**:
.. code:: console
$ zanshin organization follower request [OPTIONS] COMMAND [ARGS]...
**Options**:
- ``--help``: Show this message and exit.
**Commands**:
- ``create``: Create organization follower request.
- ``delete``: Delete organization follower request.
- ``get``: Get organization follower request.
- ``list``: Lists the follower requests of organization...
``zanshin organization follower request create``
''''''''''''''''''''''''''''''''''''''''''''''''
Create organization follower request.
**Usage**:
.. code:: console
$ zanshin organization follower request create [OPTIONS] ORGANIZATION_ID TOKEN
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
- ``TOKEN``: Token of the follower request [required]
**Options**:
- ``--help``: Show this message and exit.
``zanshin organization follower request delete``
''''''''''''''''''''''''''''''''''''''''''''''''
Delete organization follower request.
**Usage**:
.. code:: console
$ zanshin organization follower request delete [OPTIONS] ORGANIZATION_ID TOKEN
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
- ``TOKEN``: Token of the follower request [required]
**Options**:
- ``--help``: Show this message and exit.
``zanshin organization follower request get``
'''''''''''''''''''''''''''''''''''''''''''''
Get organization follower request.
**Usage**:
.. code:: console
$ zanshin organization follower request get [OPTIONS] ORGANIZATION_ID TOKEN
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
- ``TOKEN``: Token of the follower request [required]
**Options**:
- ``--help``: Show this message and exit.
``zanshin organization follower request list``
''''''''''''''''''''''''''''''''''''''''''''''
Lists the follower requests of organization this user has direct access
to.
**Usage**:
.. code:: console
$ zanshin organization follower request list [OPTIONS] ORGANIZATION_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
**Options**:
- ``--help``: Show this message and exit.
``zanshin organization follower stop``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Stops one organization follower of another.
**Usage**:
.. code:: console
$ zanshin organization follower stop [OPTIONS] ORGANIZATION_ID ORGANIZATION_FOLLOWER_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
- ``ORGANIZATION_FOLLOWER_ID``: UUID of the organization follower
[required]
**Options**:
- ``--help``: Show this message and exit.
``zanshin organization following``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Operations on following of organization the API key owner has direct
access to
**Usage**:
.. code:: console
$ zanshin organization following [OPTIONS] COMMAND [ARGS]...
**Options**:
- ``--help``: Show this message and exit.
**Commands**:
- ``list``: Lists the following of organization this user...
- ``request``: Operations on following requests of...
- ``stop``: Stops one organization following of another.
``zanshin organization following list``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Lists the following of organization this user has direct access to.
**Usage**:
.. code:: console
$ zanshin organization following list [OPTIONS] ORGANIZATION_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
**Options**:
- ``--help``: Show this message and exit.
``zanshin organization following request``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Operations on following requests of organization the API key owner has
direct access to
**Usage**:
.. code:: console
$ zanshin organization following request [OPTIONS] COMMAND [ARGS]...
**Options**:
- ``--help``: Show this message and exit.
**Commands**:
- ``accept``: Accepts a request to follow another...
- ``decline``: Declines a request to follow another...
- ``get``: Returns a request received by an organization...
- ``list``: Lists the following requests of organization...
``zanshin organization following request accept``
'''''''''''''''''''''''''''''''''''''''''''''''''
Accepts a request to follow another organization.
**Usage**:
.. code:: console
$ zanshin organization following request accept [OPTIONS] ORGANIZATION_ID FOLLOWING_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
- ``FOLLOWING_ID``: UUID of the following request [required]
**Options**:
- ``--help``: Show this message and exit.
``zanshin organization following request decline``
''''''''''''''''''''''''''''''''''''''''''''''''''
Declines a request to follow another organization.
**Usage**:
.. code:: console
$ zanshin organization following request decline [OPTIONS] ORGANIZATION_ID FOLLOWING_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
- ``FOLLOWING_ID``: UUID of the following request [required]
**Options**:
- ``--help``: Show this message and exit.
``zanshin organization following request get``
''''''''''''''''''''''''''''''''''''''''''''''
Returns a request received by an organization to follow another.
**Usage**:
.. code:: console
$ zanshin organization following request get [OPTIONS] ORGANIZATION_ID FOLLOWING_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
- ``FOLLOWING_ID``: UUID of the following request [required]
**Options**:
- ``--help``: Show this message and exit.
``zanshin organization following request list``
'''''''''''''''''''''''''''''''''''''''''''''''
Lists the following requests of organization this user has direct access
to.
**Usage**:
.. code:: console
$ zanshin organization following request list [OPTIONS] ORGANIZATION_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
**Options**:
- ``--help``: Show this message and exit.
``zanshin organization following stop``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Stops one organization following of another.
**Usage**:
.. code:: console
$ zanshin organization following stop [OPTIONS] ORGANIZATION_ID ORGANIZATION_FOLLOWING_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
- ``ORGANIZATION_FOLLOWING_ID``: UUID of the organization following
[required]
**Options**:
- ``--help``: Show this message and exit.
``zanshin organization get``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Gets an organization given its ID.
**Usage**:
.. code:: console
$ zanshin organization get [OPTIONS] ORGANIZATION_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
**Options**:
- ``--help``: Show this message and exit.
``zanshin organization list``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lists the organizations this user has direct access to as a member.
**Usage**:
.. code:: console
$ zanshin organization list [OPTIONS]
**Options**:
- ``--help``: Show this message and exit.
``zanshin organization member``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Operations on members of organization the API key owner has direct
access to
**Usage**:
.. code:: console
$ zanshin organization member [OPTIONS] COMMAND [ARGS]...
**Options**:
- ``--help``: Show this message and exit.
**Commands**:
- ``delete``: Delete organization member.
- ``get``: Get organization member.
- ``invite``: Operations on member invites of organization...
- ``list``: Lists the members of organization this user...
- ``update``: Update organization member.
``zanshin organization member delete``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Delete organization member.
**Usage**:
.. code:: console
$ zanshin organization member delete [OPTIONS] ORGANIZATION_ID ORGANIZATION_MEMBER_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
- ``ORGANIZATION_MEMBER_ID``: UUID of the organization member
[required]
**Options**:
- ``--help``: Show this message and exit.
``zanshin organization member get``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Get organization member.
**Usage**:
.. code:: console
$ zanshin organization member get [OPTIONS] ORGANIZATION_ID ORGANIZATION_MEMBER_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
- ``ORGANIZATION_MEMBER_ID``: UUID of the organization member
[required]
**Options**:
- ``--help``: Show this message and exit.
``zanshin organization member invite``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Operations on member invites of organization the API key owner has
direct access to
**Usage**:
.. code:: console
$ zanshin organization member invite [OPTIONS] COMMAND [ARGS]...
**Options**:
- ``--help``: Show this message and exit.
**Commands**:
- ``create``: Create organization member invite.
- ``delete``: Delete organization member invite.
- ``get``: Get organization member invite.
- ``list``: Lists the member invites of organization this...
- ``resend``: Resend organization member invitation.
``zanshin organization member invite create``
'''''''''''''''''''''''''''''''''''''''''''''
Create organization member invite.
**Usage**:
.. code:: console
$ zanshin organization member invite create [OPTIONS] ORGANIZATION_ID ORGANIZATION_MEMBER_INVITE_EMAIL
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
- ``ORGANIZATION_MEMBER_INVITE_EMAIL``: E-mail of the organization
member [required]
**Options**:
- ``--organization-member-invite-role [ADMIN]``: Role of the
organization member [default: ADMIN]
- ``--help``: Show this message and exit.
``zanshin organization member invite delete``
'''''''''''''''''''''''''''''''''''''''''''''
Delete organization member invite.
**Usage**:
.. code:: console
$ zanshin organization member invite delete [OPTIONS] ORGANIZATION_ID ORGANIZATION_MEMBER_INVITE_EMAIL
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
- ``ORGANIZATION_MEMBER_INVITE_EMAIL``: E-mail of the organization
member [required]
**Options**:
- ``--help``: Show this message and exit.
``zanshin organization member invite get``
''''''''''''''''''''''''''''''''''''''''''
Get organization member invite.
**Usage**:
.. code:: console
$ zanshin organization member invite get [OPTIONS] ORGANIZATION_ID ORGANIZATION_MEMBER_INVITE_EMAIL
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
- ``ORGANIZATION_MEMBER_INVITE_EMAIL``: E-mail of the organization
member invite [required]
**Options**:
- ``--help``: Show this message and exit.
``zanshin organization member invite list``
'''''''''''''''''''''''''''''''''''''''''''
Lists the member invites of organization this user has direct access to.
**Usage**:
.. code:: console
$ zanshin organization member invite list [OPTIONS] ORGANIZATION_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
**Options**:
- ``--help``: Show this message and exit.
``zanshin organization member invite resend``
'''''''''''''''''''''''''''''''''''''''''''''
Resend organization member invitation.
**Usage**:
.. code:: console
$ zanshin organization member invite resend [OPTIONS] ORGANIZATION_ID ORGANIZATION_MEMBER_INVITE_EMAIL
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
- ``ORGANIZATION_MEMBER_INVITE_EMAIL``: E-mail of the organization
member [required]
**Options**:
- ``--help``: Show this message and exit.
``zanshin organization member list``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Lists the members of organization this user has direct access to.
**Usage**:
.. code:: console
$ zanshin organization member list [OPTIONS] ORGANIZATION_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
**Options**:
- ``--help``: Show this message and exit.
``zanshin organization member update``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Update organization member.
**Usage**:
.. code:: console
$ zanshin organization member update [OPTIONS] ORGANIZATION_ID ORGANIZATION_MEMBER_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
- ``ORGANIZATION_MEMBER_ID``: UUID of the organization member
[required]
**Options**:
- ``--role [ADMIN]``: Role of the organization member [default: ADMIN]
- ``--help``: Show this message and exit.
``zanshin organization scan_target``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Operations on scan targets from organizations the API key owner has
direct access to
**Usage**:
.. code:: console
$ zanshin organization scan_target [OPTIONS] COMMAND [ARGS]...
**Options**:
- ``--help``: Show this message and exit.
**Commands**:
- ``check``: Check scan target.
- ``create``: Create a new scan target in organization.
- ``delete``: Delete scan target of organization.
- ``get``: Get scan target of organization.
- ``list``: Lists the scan targets of organization this...
- ``onboard_aws``: Create a new scan target in organization and...
- ``onboard_aws_organization``: For each of selected accounts in AWS...
- ``scan``: Operations on scan targets from organizations...
- ``update``: Update scan target of organization.
``zanshin organization scan_target check``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Check scan target.
**Usage**:
.. code:: console
$ zanshin organization scan_target check [OPTIONS] ORGANIZATION_ID SCAN_TARGET_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
- ``SCAN_TARGET_ID``: UUID of the scan target [required]
**Options**:
- ``--help``: Show this message and exit.
``zanshin organization scan_target create``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Create a new scan target in organization.
**Usage**:
.. code:: console
$ zanshin organization scan_target create [OPTIONS] ORGANIZATION_ID KIND:[AWS|GCP|AZURE|HUAWEI|DOMAIN|ORACLE] NAME CREDENTIAL [SCHEDULE]:[1h|6h|12h|24h|7d]
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
- ``KIND:[AWS|GCP|AZURE|HUAWEI|DOMAIN|ORACLE]``: kind of the scan
target [required]
- ``NAME``: name of the scan target [required]
- ``CREDENTIAL``: credential of the scan target [required]
- ``[SCHEDULE]:[1h|6h|12h|24h|7d]``: schedule of the scan target
[default: 24h]
**Options**:
- ``--help``: Show this message and exit.
``zanshin organization scan_target delete``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Delete scan target of organization.
**Usage**:
.. code:: console
$ zanshin organization scan_target delete [OPTIONS] ORGANIZATION_ID SCAN_TARGET_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
- ``SCAN_TARGET_ID``: UUID of the scan target [required]
**Options**:
- ``--help``: Show this message and exit.
``zanshin organization scan_target get``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Get scan target of organization.
**Usage**:
.. code:: console
$ zanshin organization scan_target get [OPTIONS] ORGANIZATION_ID SCAN_TARGET_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
- ``SCAN_TARGET_ID``: UUID of the scan target [required]
**Options**:
- ``--help``: Show this message and exit.
``zanshin organization scan_target list``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Lists the scan targets of organization this user has direct access to.
**Usage**:
.. code:: console
$ zanshin organization scan_target list [OPTIONS] ORGANIZATION_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
**Options**:
- ``--help``: Show this message and exit.
``zanshin organization scan_target onboard_aws``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Create a new scan target in the organization and perform onboarding.
Requires boto3 and the correct AWS IAM privileges. Check out the required
AWS IAM privileges here:
`https://github.com/tenchi-security/zanshin-sdk-python/blob/main/zanshinsdk/docs/README.md <https://github.com/tenchi-security/zanshin-sdk-python/blob/main/zanshinsdk/docs/README.md>`__
**Usage**:
.. code:: console
$ zanshin organization scan_target onboard_aws [OPTIONS] REGION ORGANIZATION_ID NAME CREDENTIAL [SCHEDULE]:[1h|6h|12h|24h|7d]
**Arguments**:
- ``REGION``: AWS Region to deploy CloudFormation [required]
- ``ORGANIZATION_ID``: UUID of the organization [required]
- ``NAME``: name of the scan target [required]
- ``CREDENTIAL``: credential of the scan target [required]
- ``[SCHEDULE]:[1h|6h|12h|24h|7d]``: schedule of the scan target
[default: 24h]
**Options**:
- ``--boto3-profile TEXT``: Boto3 profile name to use for Onboard AWS
Account
- ``--help``: Show this message and exit.
``zanshin organization scan_target onboard_aws_organization``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For each of the selected accounts in the AWS Organization, creates a new
scan target in the given Zanshin organization and performs onboarding.
Requires boto3 and the correct AWS IAM privileges. Check out the required
AWS IAM privileges at
`https://github.com/tenchi-security/zanshin-cli/blob/main/zanshincli/docs/README.md <https://github.com/tenchi-security/zanshin-cli/blob/main/zanshincli/docs/README.md>`__
**Usage**:
.. code:: console
$ zanshin organization scan_target onboard_aws_organization [OPTIONS] REGION ORGANIZATION_ID [SCHEDULE]:[1h|6h|12h|24h|7d]
**Arguments**:
- ``REGION``: AWS Region to deploy CloudFormation [required]
- ``ORGANIZATION_ID``: UUID of the organization [required]
- ``[SCHEDULE]:[1h|6h|12h|24h|7d]``: schedule of the scan target
[default: 24h]
**Options**:
- ``--target-accounts [ALL|MASTER|MEMBERS|NONE]``: choose which
accounts to onboard
- ``--exclude-account TEXT``: ID, Name, E-mail or ARN of AWS Account
not to be onboarded
- ``--boto3-profile TEXT``: Boto3 profile name to use for Onboard AWS
Account
- ``--aws-role-name TEXT``: Name of AWS role that allow access from
Management Account to Member accounts [default:
OrganizationAccountAccessRole]
- ``--help``: Show this message and exit.
``zanshin organization scan_target scan``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Operations on scan targets from organizations the API key owner has
direct access to
**Usage**:
.. code:: console
$ zanshin organization scan_target scan [OPTIONS] COMMAND [ARGS]...
**Options**:
- ``--help``: Show this message and exit.
**Commands**:
- ``get``: Get scan of scan target.
- ``list``: Lists the scan target scans of organization...
- ``start``: Starts a scan on the specified scan target.
- ``stop``: Stop a scan on the specified scan target.
``zanshin organization scan_target scan get``
'''''''''''''''''''''''''''''''''''''''''''''
Get scan of scan target.
**Usage**:
.. code:: console
$ zanshin organization scan_target scan get [OPTIONS] ORGANIZATION_ID SCAN_TARGET_ID SCAN_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
- ``SCAN_TARGET_ID``: UUID of the scan target [required]
- ``SCAN_ID``: UUID of the scan [required]
**Options**:
- ``--help``: Show this message and exit.
``zanshin organization scan_target scan list``
''''''''''''''''''''''''''''''''''''''''''''''
Lists the scan target scans of organization this user has direct access
to.
**Usage**:
.. code:: console
$ zanshin organization scan_target scan list [OPTIONS] ORGANIZATION_ID SCAN_TARGET_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
- ``SCAN_TARGET_ID``: UUID of the scan target [required]
**Options**:
- ``--help``: Show this message and exit.
``zanshin organization scan_target scan start``
'''''''''''''''''''''''''''''''''''''''''''''''
Starts a scan on the specified scan target.
**Usage**:
.. code:: console
$ zanshin organization scan_target scan start [OPTIONS] ORGANIZATION_ID SCAN_TARGET_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
- ``SCAN_TARGET_ID``: UUID of the scan target [required]
**Options**:
- ``--force / --no-force``: Whether to force running a scan target that
has state INVALID_CREDENTIAL or NEW [default: False]
- ``--help``: Show this message and exit.
``zanshin organization scan_target scan stop``
''''''''''''''''''''''''''''''''''''''''''''''
Stop a scan on the specified scan target.
**Usage**:
.. code:: console
$ zanshin organization scan_target scan stop [OPTIONS] ORGANIZATION_ID SCAN_TARGET_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
- ``SCAN_TARGET_ID``: UUID of the scan target [required]
**Options**:
- ``--help``: Show this message and exit.
``zanshin organization scan_target update``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Update scan target of organization.
**Usage**:
.. code:: console
$ zanshin organization scan_target update [OPTIONS] ORGANIZATION_ID SCAN_TARGET_ID [NAME] [SCHEDULE]:[1h|6h|12h|24h|7d]
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
- ``SCAN_TARGET_ID``: UUID of the scan target [required]
- ``[NAME]``: name of the scan target
- ``[SCHEDULE]:[1h|6h|12h|24h|7d]``: schedule of the scan target
**Options**:
- ``--help``: Show this message and exit.
``zanshin organization update``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Updates an organization given its ID.
**Usage**:
.. code:: console
$ zanshin organization update [OPTIONS] ORGANIZATION_ID [NAME] [PICTURE] [EMAIL]
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
- ``[NAME]``: Name of the organization
- ``[PICTURE]``: Picture of the organization
- ``[EMAIL]``: Contact e-mail of the organization
**Options**:
- ``--help``: Show this message and exit.
``zanshin summary``
-------------------
Operations on summaries the API key owner has direct access to
**Usage**:
.. code:: console
$ zanshin summary [OPTIONS] COMMAND [ARGS]...
**Options**:
- ``--help``: Show this message and exit.
**Commands**:
- ``alert``: Gets a summary of the current state of alerts...
- ``alert_following``: Gets a summary of the current state of alerts...
- ``scan``: Returns summaries of scan results over a...
- ``scan_following``: Returns summaries of scan results over a...
``zanshin summary alert``
~~~~~~~~~~~~~~~~~~~~~~~~~
Gets a summary of the current state of alerts for an organization, both
in total and broken down by scan target.
**Usage**:
.. code:: console
$ zanshin summary alert [OPTIONS] ORGANIZATION_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
**Options**:
- ``--scan-target-id UUID``: Only summarize alerts from the specified
  scan targets, defaults to all.
- ``--help``: Show this message and exit.
``zanshin summary alert_following``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Gets a summary of the current state of alerts for followed
organizations.
**Usage**:
.. code:: console
$ zanshin summary alert_following [OPTIONS] ORGANIZATION_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
**Options**:
- ``--following-ids UUID``: Only summarize alerts from the specified
  following, defaults to all.
- ``--help``: Show this message and exit.
``zanshin summary scan``
~~~~~~~~~~~~~~~~~~~~~~~~
Returns summaries of scan results over a period of time, summarizing
number of alerts that changed states.
**Usage**:
.. code:: console
$ zanshin summary scan [OPTIONS] ORGANIZATION_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
**Options**:
- ``--scan-target-ids UUID``: Only summarize alerts from the specified
  scan targets, defaults to all.
- ``--days INTEGER``: Number of days to go back in time in historical
search [default: 7]
- ``--help``: Show this message and exit.
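For example, to summarize the last 30 days of scan results for an
organization (the UUID below is a placeholder):

.. code:: console

   $ zanshin summary scan d48edaa6-871a-4082-a196-4daab372d4a1 --days 30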
``zanshin summary scan_following``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Returns summaries of scan results over a period of time, summarizing
number of alerts that changed states.
**Usage**:
.. code:: console
$ zanshin summary scan_following [OPTIONS] ORGANIZATION_ID
**Arguments**:
- ``ORGANIZATION_ID``: UUID of the organization [required]
**Options**:
- ``--following-ids UUID``: Only summarize alerts from the specified
  following, defaults to all.
- ``--days INTEGER``: Number of days to go back in time in historical
  search [default: 7]
- ``--help``: Show this message and exit.
``zanshin version``
-------------------
Display the program and Python versions in use.
**Usage**:
.. code:: console
$ zanshin version [OPTIONS]
**Options**:
- ``--help``: Show this message and exit.
.. |PyPI version shields.io| image:: https://img.shields.io/pypi/v/zanshincli.svg
:target: https://pypi.python.org/pypi/zanshincli/
.. |PyPI pyversions| image:: https://img.shields.io/pypi/pyversions/zanshincli.svg
:target: https://pypi.python.org/pypi/zanshincli/
|Coverage badge| |PyPI version shields.io| |PyPI pyversions|
Zanshin Python SDK
==================
This Python package contains an SDK to interact with the `API of the
Zanshin SaaS service <https://api.zanshin.tenchisecurity.com>`__ from
`Tenchi Security <https://www.tenchisecurity.com>`__.
This SDK is used to implement a command-line utility, which is available
on `Github <https://github.com/tenchi-security/zanshin-cli>`__ and on
`PyPI <https://pypi.python.org/pypi/zanshincli/>`__.
Setting up Credentials
----------------------
There are three ways that the SDK handles credentials. The order of
evaluation is:
- `1st Client Parameters <#client-parameters>`__
- `2nd Environment Variables <#environment-variables>`__
- `3rd Config File <#config-file>`__
Client Parameters
~~~~~~~~~~~~~~~~~
When instantiating the ``Client`` class, you can pass the API key, API
URL, proxy URL and user agent values you want to use, as below:
.. code:: python
from zanshinsdk import Client
client = Client(api_key="my_zanshin_api_key")
print(client.get_me())
..
⚠️ These values will overwrite anything you set as Environment
Variables or in the Config File.
Environment Variables
~~~~~~~~~~~~~~~~~~~~~
You can use the following Environment Variables to configure Zanshin
SDK:
- ``ZANSHIN_API_KEY``: Will setup your Zanshin credentials
- ``ZANSHIN_API_URL``: Will define the API URL. Default is
``https://api.zanshin.tenchisecurity.com``
- ``ZANSHIN_USER_AGENT``: If you want to overwrite the User Agent when
calling Zanshin API
- ``HTTP_PROXY | HTTPS_PROXY``: Zanshin SDK uses HTTPX under the hood,
checkout the `Environment
Variables <https://www.python-httpx.org/environment_variables/#proxies>`__
section of their documentation for more use cases
Usage
^^^^^
.. code:: shell
export ZANSHIN_API_KEY="eyJhbGciOiJIU..."
..
⚠️ These Environment Variables will overwrite anything you set on the
Config File.
Config File
~~~~~~~~~~~
The third option is to use a configuration file in the format created by the
Python
`RawConfigParser <https://docs.python.org/3/library/configparser.html#configparser.RawConfigParser>`__
class.
The file is located at ``~/.tenchi/config``, where ``~`` is the `current
user's home
directory <https://docs.python.org/3/library/pathlib.html#pathlib.Path.home>`__.
Each section is treated as a configuration profile, and the SDK will
look for a section called ``default`` if another is not explicitly
selected.
These are the supported options:
- ``api_key`` (required) which contains the Zanshin API key obtained at
the `Zanshin web
portal <https://zanshin.tenchisecurity.com/my-profile>`__.
- ``user_agent`` (optional) allows you to override the default
user-agent header used by the SDK when making API requests.
- ``api_url`` (optional) directs the SDK to use a different API
endpoint than the default
(`https://api.zanshin.tenchisecurity.com <https://api.zanshin.tenchisecurity.com>`__).
This is what a minimal configuration file looks like:
.. code:: ini
[default]
api_key = abcdefghijklmnopqrstuvxyz
The SDK
-------
The SDK uses Python 3 type hints extensively. It attempts to abstract
API artifacts such as pagination by using Python generators, thus making
the service easier to interact with.
The network connections are done using the wonderful
`httpx <https://www.python-httpx.org/>`__ library.
Currently it focuses on returning the parsed JSON values instead of
converting them into native classes for higher level abstractions.
The ``zanshinsdk.Client`` class is the main entry point of the SDK. Here
is a quick example that shows information about the owner of the API key
in use:
.. code:: python
from zanshinsdk import Client
from json import dumps
client = Client() # loads API key from the "default" profile in ~/.tenchi/config
me = client.get_me() # calls /me API endpoint
print(dumps(me, indent=4))
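Because list endpoints are exposed as generators, results can be iterated
lazily. A short sketch (assuming the organization iterator
``iter_organizations`` and that each yielded item is a parsed JSON dict
with a ``name`` key; see the SDK reference for the exact method names):

.. code:: python

    from zanshinsdk import Client

    client = Client()
    for organization in client.iter_organizations():
        print(organization['name'])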
For more examples, check out the `docs <zanshinsdk/docs/README.md>`__.
All operations call ``raise_for_status`` on the httpx `Response
object <https://www.python-httpx.org/api/#response>`__ internally, so
any 4xx or 5xx will raise
`exceptions <https://www.python-httpx.org/exceptions/>`__.
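A minimal sketch of handling those errors (``httpx.HTTPStatusError`` is
the exception ``raise_for_status`` raises for 4xx/5xx responses):

.. code:: python

    import httpx

    from zanshinsdk import Client

    client = Client()
    try:
        me = client.get_me()
    except httpx.HTTPStatusError as error:
        print('API call failed with status', error.response.status_code)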
Installing
----------
To install the SDK, you can use ``pip``. You have two options to install
ZanshinSDK:
- *Essentials*
Using ``pip install zanshinsdk`` will install the SDK with all features
except the ability to perform onboarding of new Scan Targets, which
requires boto3.
- *With Boto3*
With ``pip install zanshinsdk[with_boto3]`` you'll automatically install
`boto3 <https://boto3.amazonaws.com/v1/documentation/api/latest/index.html>`__
along with ZanshinSDK. This will enable you to perform Onboard of new
Scan Targets via SDK.
Testing
-------
To run all tests call ``make test`` on the project root directory. Make
sure there's a ``[default]`` profile configured, else some tests will
fail. Also, be sure to install ``boto3`` and ``moto[all]`` or some
integration tests will fail.
Support
=======
If you are a Zanshin customer and have any questions regarding the use
of the service, its API or this SDK package, please get in touch via
e-mail at support {at} tenchisecurity {dot} com or via the support
widget on the `Zanshin Portal <https://zanshin.tenchisecurity.com>`__.
.. |Coverage badge| image:: https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/wiki/tenchi-security/zanshin-sdk-python/python-coverage-comment-action-badge.json
.. |PyPI version shields.io| image:: https://img.shields.io/pypi/v/zanshinsdk.svg
:target: https://pypi.python.org/pypi/zanshinsdk/
.. |PyPI pyversions| image:: https://img.shields.io/pypi/pyversions/zanshinsdk.svg
:target: https://pypi.python.org/pypi/zanshinsdk/
Zanthor!
http://www.zanthor.org
run_game.py
KEYBOARD: arrows or wasd - to move. F10 pause. f/space/ctrl to fire.
MOUSE: button one fire. button two move.
JOYSTICK: game pad move. button 1 fire.
We love you.
Thank you for downloading.
' ' . Thank you for installing.
.' '
!"' Thank you for clicking.
'
`' Thank you for loading.
\
_()()__ Thank you for playing.
| |
| | Thank you for quitting.
//\_____/\\
// \ / \\ Thank you for uninstalling.
|| | | ||
// | \ Thank you for deleting.
Sanity is as overrated as a nuclear bomb.
# Zanza
[](https://badge.fury.io/py/zanza)
Dead-simple string obfuscation algorithm.
Obfuscation works by identifying the Unicode code points (character codes) of
each character in the input string. The return value is a *list*, in which the
first element is also a list containing the first character's code point digits.
The rest of the elements are the code point delta values, where each element is
compared to the previous one.
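For example, in `"I am"` the first character `'I'` has code point 73, which
becomes `[7, 3]`; the deltas that follow are `32 - 73 = -41` (space),
`97 - 32 = 65` (`'a'`) and `109 - 97 = 12` (`'m'`). A minimal sketch of the
same idea (not the package's actual source):

```python
def obfuscate(text):
    """Digits of the first code point, then deltas between
    consecutive code points."""
    first = [int(digit) for digit in str(ord(text[0]))]
    deltas = [ord(b) - ord(a) for a, b in zip(text, text[1:])]
    return [first] + deltas

print(obfuscate("I am awesome!"))  # [[7, 3], -41, 65, 12, -77, ...]
```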
As the project supports **Python 3.0 and up**, all Unicode strings should work.
## Installation
To install as a **project dependency**, or **global package**, execute the
following command in the project's directory:
pip install zanza
To use as a standalone **command-line utility**, add the `--user` flag to the
previous command:
pip install --user zanza
## Usage
This package contains scripts for both string _obfuscation_ (`zanza`) and
_deobfuscation_ (`dezanza`).
### Obfuscation
```python
>>> from zanza import zanza
>>> zanza("I am awesome!")
[[7, 3], -41, 65, 12, -77, 65, 22, -18, 14, -4, -2, -8, -68]
>>> zanza("Emojis will work, too: 💪")
[[6, 9], 40, 2, -5, -1, 10, -83, 87, -14, 3, 0, -76, 87, -8, 3, -7, -63, -12, 84, -5, 0, -53, -26, 65501]
>>> zanza("""Another
... fancy
... stuff""")
[[6, 5], 45, 1, 5, -12, -3, 13, -104, 92, -5, 13, -11, 22, -111, 105, 1, 1, -15, 0]
```
In the command line input can be passed as a *script argument* or from *stdin*.
```bash
$ zanza "foo bar baz"
[[1, 0, 2], 9, 0, -79, 66, -1, 17, -82, 66, -1, 25]
$ echo "Encrypt me" | zanza
[[6, 9], 41, -11, 15, 7, -9, 4, -84, 77, -8]
```
### Deobfuscation
```python
>>> from dezanza import dezanza
>>> dezanza([[8, 3], 18, -2, 15, -13, 15, -84, 83, 1, -2, -9, 5, -7, -71, 82, -13, 17, -17, -4, 11, -7, -1])
'Secret string revealed'
>>> dezanza([[7, 8], 33, -101, 98, 3, -1, -7, -2, 13, -104, 101, -13, 4, 15, -2, -16, -2, 19, -15, -1])
'No\nlonger\nobfuscated'
```
Using the command line:
```bash
$ dezanza "[[7, 6], 35, 0, -4, -75, 65, 19, -84, 77, -8, -69, 78, 1, 8]"
Look at me now
$ echo "[[7, 3], 43, -84, 87, -8, 3, -7, 8, -82]" | dezanza
It works!
```
## License
[BSD-3-Clause](https://opensource.org/licenses/BSD-3-Clause)
# pysdk: the Zaoshu (造数) Python SDK

[](https://circleci.com/gh/BuleAnt/pysdk)

## Introduction

zaoshu is a wrapper around the Zaoshu OpenAPI that lets you focus on functionality instead of the underlying implementation details, which the SDK handles for you.

## Install the zaoshu module with pip

```
pip install zaoshu
```

## Once installed, import ZaoshuSdk and you are ready to go
```
from zaoshu import ZaoshuSdk

# test code
if __name__ == '__main__':
    ZAOSHU_URL = 'https://openapi.zaoshu.io/v2'
    # api_key = 'your api_key'
    API_KEY = 'your api_key'
    # api_secret = 'your api_secret'
    API_SECRET = 'your api_secret'

    sdk = ZaoshuSdk(API_KEY, API_SECRET, base_url=ZAOSHU_URL)

    # Zaoshu HTTP object
    sdk.request

    # Zaoshu instance object
    sdk.instance

    # Zaoshu user object
    sdk.user
```
## SDK features

### HTTP object features

- Send GET requests signed with the Zaoshu signature
- Send POST requests signed with the Zaoshu signature
- Send PATCH requests signed with the Zaoshu signature

### Zaoshu instance object

- Get the user's list of crawler instances
- Get instance details
- Get an instance's data schema
- Edit an instance
- Get the list of tasks under an instance
- Run an instance
- Get the details of a task under an instance
- Download the data of a task under an instance

### Zaoshu user object

- Get the user's account information
- Get the user's wallet information

## Structure of the zaoshu module

* zaoshuRequests: the Zaoshu HTTP library
* zaoshuSdk: the Zaoshu SDK
* Instance: the Zaoshu instance class
* User: the Zaoshu user class
### zaoshuRequests: the Zaoshu HTTP library

The Zaoshu HTTP library builds on the Requests library, adding functions that follow Zaoshu's request-signing rules. GET, POST and PATCH requests currently have the signature added automatically.

- **Send a GET request signed with the Zaoshu signature**

```
zaoshuRequests.get(self, url, params=None):
    """
    GET request
    :param url: request URL
    :param params: request parameters
    :return: requests.request
    """
```

- **Send a POST request signed with the Zaoshu signature**

```
zaoshuRequests.post(self, url, params=None, body=None):
    """
    POST request
    :param url: request URL
    :param params: request parameters
    :param body: request body
    :return: requests.request object
    """
```

- **Send a PATCH request signed with the Zaoshu signature**

```
zaoshuRequests.patch(self, url, params=None, body=None):
    """
    PATCH request
    :param url: request URL
    :param params: request parameters
    :param body: request body
    :return: requests.request
    """
```

- **requests.Response**

Detailed documentation for requests.Response is available at http://docs.python-requests.org/zh_CN/latest/user/quickstart.html
### zaoshuSdk: the Zaoshu SDK

The Zaoshu SDK bundles the HTTP library, the instance class and the user class together so they can be used through a single object.

- **zaoshuSdk attributes**

```
self.request = ZaoshuRequests(api_key, api_secret)
self.instance = Instance(self._base_url, self.request)
self.user = User(self._base_url, self.request)
```
### Instance: the Zaoshu instance class

The instance class wraps the Zaoshu instance API, so you can call its methods directly to use the services Zaoshu provides.

- **Get the user's list of crawler instances**

```
Instance.list(self):
    """
    Get the instance list
    :return: requests.Response
    """
```

- **Get instance details**

```
Instance.item(self, instance_id):
    """
    Get instance details
    :param instance_id: ID of the instance, which can be taken from the instance list
    :return: requests.Response
    """
```

- **Get an instance's data schema**

```
Instance.schema(self, instance_id):
    """
    Get the data schema of a single instance
    :param instance_id:
    :return: requests.Response
    """
```

- **Get the details of a single task under an instance**

```
Instance.task(self, instance_id, task_id):
    """
    Get the details of a single task under an instance
    :param instance_id:
    :param task_id:
    :return: requests.Response
    """
```

- **Download run result data**

```
Instance.download_run_data(self, instance_id, task_id, file_type='csv', save_file=False):
    """
    Download run results
    :param instance_id: instance ID
    :param task_id: task ID
    :param file_type: file type
    :param save_file: whether to save the file
    :return: path of the saved file / a tuple with the file contents
    """
```

- **Run an instance**

```
Instance.run(self, instance_id, body=None):
    """
    Run an instance
    :param instance_id: ID of the instance to run, which can be taken from the instance list
    :return: requests.Response
    """
```

- **Edit an instance**

```
Instance.edit(self, instance_id, title=None, result_notify_uri=None):
    """
    Edit an instance
    :param instance_id: instance ID
    :param title: new instance title
    :param result_notify_uri: callback URL
    :return: requests.Response
    """
```
### User: the Zaoshu user class

The user class wraps the Zaoshu user API, so you can call its methods directly to use the services Zaoshu provides.

- **Get the user's account information**

```
User.account(self):
    """
    Get the user's account information
    :return: requests.Response
    """
```

- **Get the user's wallet information**

```
User.wallet(self):
    """
    Get the user's wallet information
    :return: requests.Response
    """
```
# Tutorial: the demo explained

## The ZaoshuRequests object

The ZaoshuRequests object wraps the request headers of the Requests object, so all Requests methods and attributes are available, for example:

- Requests.url: the request URL
- Requests.status_code: the response status code
- Requests.text: the response body

## A shared helper that prints response information, taking a response object

```
def print_response_info(response, title=''):
    """
    Print response information
    :param response: response object
    :param title: display title
    :return: None
    """
    print('====[%s]========================================' % title)
    print("URL:" + response.url)
    print("Status:" + str(response.status_code))
    print("Body:" + response.text)
    print("Headers:", end='')
    print(response.headers)
    print('\n')
```
## 1. Create a ZaoshuSdk instance

```
# Zaoshu API base URL
ZAOSHU_URL = 'https://openapi.zaoshu.io/v2'
# api_key = 'your api_key'
API_KEY = 'your api_key'
# api_secret = 'your api_secret'
API_SECRET = 'your api_secret'

sdk = ZaoshuSdk(API_KEY, API_SECRET, base_url=ZAOSHU_URL)
```

## 2. User information: sdk.user is the user module object

```
# Get the user's account information
user_account_response = sdk.user.account()
print_response_info(user_account_response, 'get user account info')

# Get the user's wallet information
user_wallet_response = sdk.user.wallet()
print_response_info(user_wallet_response, 'get user wallet info')
```
## 3. User instances: sdk.instance

```
from time import sleep  # needed for the pause below

# Get the user's crawler instances
instance_list_response = sdk.instance.list()
print_response_info(instance_list_response, 'get the user crawler instances')
response_json = instance_list_response.json()

# Get instance details
if response_json['data']:
    # instance ID
    instance_id = response_json['data'][0]['id']

    # Get instance details
    instance_item_response = sdk.instance.item(instance_id)
    print_response_info(instance_item_response, 'get instance details')

    # Get the instance's data schema
    instance_schema_response = sdk.instance.schema(instance_id)
    print_response_info(instance_schema_response, 'get instance data schema')

    # Edit the instance
    instance_edit_response = sdk.instance.edit(instance_id, title='test: edited instance title')
    print_response_info(instance_edit_response, 'edit instance data')

    # Run the instance
    instance_run_response = sdk.instance.run(instance_id)
    print_response_info(instance_run_response, 'run instance')
    print('Pausing 10 seconds while the instance run completes')
    sleep(10)

    # Get the instance's task list
    instance_task_list_response = sdk.instance.task_list(instance_id)
    print_response_info(instance_task_list_response, 'get instance task list')

    # Parse the task list
    tasks = instance_task_list_response.json()

    # Get task details
    if tasks['data']:
        # task ID
        task_id = tasks['data'][-1]['id']

        # Get task details
        instance_task_response = sdk.instance.task(instance_id, task_id)
        print_response_info(instance_task_response, 'get task details')

        # Download the task data
        instance_download_path = sdk.instance.download_run_data(instance_id, task_id,
                                                                file_type='json')
        print('====[instance task data download]========================================')
        print('Download path: ' + instance_download_path)
else:
    print("No instances available; create an instance and try again")
```
from contextlib import contextmanager
import json
import os
import re
import sys
import click
from tabulate import tabulate
from zapcli.exceptions import ZAPError
from zapcli.log import console
def validate_ids(ctx, param, value):
"""Validate a list of IDs and convert them to a list."""
if not value:
return None
ids = [x.strip() for x in value.split(',')]
for id_item in ids:
if not id_item.isdigit():
raise click.BadParameter('Non-numeric value "{0}" provided for an ID.'.format(id_item))
return ids
def validate_scanner_list(ctx, param, value):
"""
Validate a comma-separated list of scanners and extract it into a list of groups and IDs.
"""
if not value:
return None
valid_groups = ctx.obj.scanner_groups
scanners = [x.strip() for x in value.split(',')]
if 'all' in scanners:
return ['all']
scanner_ids = []
for scanner in scanners:
if scanner.isdigit():
scanner_ids.append(scanner)
elif scanner in valid_groups:
scanner_ids += ctx.obj.scanner_group_map[scanner]
else:
raise click.BadParameter('Invalid scanner "{0}" provided. Must be a valid group or numeric ID.'
.format(scanner))
return scanner_ids
def validate_regex(ctx, param, value):
"""
Validate that a provided regex compiles.
"""
if not value:
return None
try:
re.compile(value)
except re.error:
raise click.BadParameter('Invalid regex "{0}" provided'.format(value))
return value
@contextmanager
def zap_error_handler():
"""Context manager to handle ZAPError exceptions in a standard way."""
try:
yield
except ZAPError as ex:
console.error(str(ex))
if not os.getenv("SOFT_FAIL"):
sys.exit(2)
else:
sys.exit(0)
def report_alerts(alerts, output_format='table'):
"""
Print our alerts in the given format.
"""
num_alerts = len(alerts)
if output_format == 'json':
click.echo(json.dumps(alerts, indent=4))
else:
console.info('Issues found: {0}'.format(num_alerts))
if num_alerts > 0:
click.echo(tabulate([[a['alert'], a['risk'], a['cweid'], a['url']] for a in alerts],
headers=['Alert', 'Risk', 'CWE ID', 'URL'], tablefmt='grid'))
def filter_by_ids(original_list, ids_to_filter):
"""Filter a list of dicts by IDs using an id key on each dict."""
if not ids_to_filter:
return original_list
    return [i for i in original_list if i['id'] in ids_to_filter]
| zap-cli-v2 | /zap_cli_v2-0.12.2-py3-none-any.whl/zapcli/helpers.py | helpers.py
import os
import platform
import re
import shlex
import subprocess
import time
from six import binary_type
import requests
from requests.exceptions import RequestException
from zapv2 import ZAPv2
from zapcli.exceptions import ZAPError
from zapcli.log import console
class ZAPHelper(object):
"""ZAPHelper class for wrapping the ZAP API client."""
alert_levels = {
'Informational': 1,
'Low': 2,
'Medium': 3,
'High': 4,
}
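    # Friendly group names mapped to the IDs of ZAP's built-in active-scan
    # rules (e.g. 40018 is ZAP's SQL Injection rule).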
scanner_group_map = {
'sqli': ['40018'],
'xss': ['40012', '40014', '40016', '40017'],
'xss_reflected': ['40012'],
'xss_persistent': ['40014', '40016', '40017'],
}
timeout = 60
_status_check_sleep = 10
def __init__(self, zap_path='', port=8090, url='http://127.0.0.1', api_key='', log_path=None, logger=None, soft_fail=False):
if os.path.isfile(zap_path):
zap_path = os.path.dirname(zap_path)
self.zap_path = zap_path
self.port = port
self.proxy_url = '{0}:{1}'.format(url, self.port)
self.zap = ZAPv2(proxies={'http': self.proxy_url, 'https': self.proxy_url}, apikey=api_key)
self.api_key = api_key
self.log_path = log_path
self.logger = logger or console
self.soft_fail = soft_fail
@property
def scanner_groups(self):
"""Available scanner groups."""
return ['all'] + list(self.scanner_group_map.keys())
def start(self, options=None):
"""Start the ZAP Daemon."""
if self.is_running():
self.logger.warning('ZAP is already running on port {0}'.format(self.port))
return
if platform.system() == 'Windows' or platform.system().startswith('CYGWIN'):
executable = 'zap.bat'
else:
executable = 'zap.sh'
executable_path = os.path.join(self.zap_path, executable)
if not os.path.isfile(executable_path):
raise ZAPError(('ZAP was not found in the path "{0}". You can set the path to where ZAP is ' +
'installed on your system using the --zap-path command line parameter or by ' +
'default using the ZAP_PATH environment variable.').format(self.zap_path))
zap_command = [executable_path, '-daemon', '-port', str(self.port)]
if options:
extra_options = shlex.split(options)
zap_command += extra_options
if self.log_path is None:
log_path = os.path.join(self.zap_path, 'zap.log')
else:
log_path = os.path.join(self.log_path, 'zap.log')
self.logger.debug('Starting ZAP process with command: {0}.'.format(' '.join(zap_command)))
self.logger.debug('Logging to {0}'.format(log_path))
with open(log_path, 'w+') as log_file:
subprocess.Popen(
zap_command, cwd=self.zap_path, stdout=log_file,
stderr=subprocess.STDOUT)
self.wait_for_zap(self.timeout)
self.logger.debug('ZAP started successfully.')
def shutdown(self):
"""Shutdown ZAP."""
if not self.is_running():
self.logger.warning('ZAP is not running.')
return
self.logger.debug('Shutting down ZAP.')
self.zap.core.shutdown()
timeout_time = time.time() + self.timeout
while self.is_running():
if time.time() > timeout_time:
raise ZAPError('Timed out waiting for ZAP to shutdown.')
time.sleep(2)
self.logger.debug('ZAP shutdown successfully.')
def wait_for_zap(self, timeout):
"""Wait for ZAP to be ready to receive API calls."""
timeout_time = time.time() + timeout
while not self.is_running():
if time.time() > timeout_time:
raise ZAPError('Timed out waiting for ZAP to start.')
time.sleep(2)
def is_running(self):
"""Check if ZAP is running."""
try:
result = requests.get(self.proxy_url)
except RequestException:
return False
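        # ZAP advertises itself through the Access-Control-Allow-Headers
        # response header; anything else answering on this port is not ZAP.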
if 'ZAP-Header' in result.headers.get('Access-Control-Allow-Headers', []):
return True
raise ZAPError('Another process is listening on {0}'.format(self.proxy_url))
def open_url(self, url, sleep_after_open=2):
"""Access a URL through ZAP."""
self.zap.urlopen(url)
# Give the sites tree a chance to get updated
time.sleep(sleep_after_open)
def run_spider(self, target_url, context_name=None, user_name=None):
"""Run spider against a URL."""
self.logger.debug('Spidering target {0}...'.format(target_url))
context_id, user_id = self._get_context_and_user_ids(context_name, user_name)
if user_id:
self.logger.debug('Running spider in context {0} as user {1}'.format(context_id, user_id))
scan_id = self.zap.spider.scan_as_user(context_id, user_id, target_url)
else:
scan_id = self.zap.spider.scan(target_url)
if not scan_id:
raise ZAPError('Error running spider.')
elif not scan_id.isdigit():
raise ZAPError('Error running spider: "{0}"'.format(scan_id))
self.logger.debug('Started spider with ID {0}...'.format(scan_id))
while int(self.zap.spider.status()) < 100:
self.logger.debug('Spider progress %: {0}'.format(self.zap.spider.status()))
time.sleep(self._status_check_sleep)
self.logger.debug('Spider #{0} completed'.format(scan_id))
def run_active_scan(self, target_url, recursive=False, context_name=None, user_name=None):
"""Run an active scan against a URL."""
self.logger.debug('Scanning target {0}...'.format(target_url))
context_id, user_id = self._get_context_and_user_ids(context_name, user_name)
if user_id:
self.logger.debug('Scanning in context {0} as user {1}'.format(context_id, user_id))
scan_id = self.zap.ascan.scan_as_user(target_url, context_id, user_id, recursive)
else:
scan_id = self.zap.ascan.scan(target_url, recurse=recursive)
if not scan_id:
raise ZAPError('Error running active scan.')
elif not scan_id.isdigit():
raise ZAPError(('Error running active scan: "{0}". Make sure the URL is in the site ' +
'tree by using the open-url or scanner commands before running an active ' +
'scan.').format(scan_id))
self.logger.debug('Started scan with ID {0}...'.format(scan_id))
while int(self.zap.ascan.status()) < 100:
self.logger.debug('Scan progress %: {0}'.format(self.zap.ascan.status()))
time.sleep(self._status_check_sleep)
self.logger.debug('Scan #{0} completed'.format(scan_id))
def run_ajax_spider(self, target_url):
"""Run AJAX Spider against a URL."""
self.logger.debug('AJAX Spidering target {0}...'.format(target_url))
self.zap.ajaxSpider.scan(target_url)
while self.zap.ajaxSpider.status == 'running':
self.logger.debug('AJAX Spider: {0}'.format(self.zap.ajaxSpider.status))
time.sleep(self._status_check_sleep)
self.logger.debug('AJAX Spider completed')
def alerts(self, alert_level='High'):
"""Get a filtered list of alerts at the given alert level, and sorted by alert level."""
alerts = self.zap.core.alerts()
alert_level_value = self.alert_levels[alert_level]
alerts = sorted((a for a in alerts if self.alert_levels[a['risk']] >= alert_level_value),
key=lambda k: self.alert_levels[k['risk']], reverse=True)
return alerts
def enabled_scanner_ids(self):
"""Retrieves a list of currently enabled scanners."""
enabled_scanners = []
scanners = self.zap.ascan.scanners()
for scanner in scanners:
if scanner['enabled'] == 'true':
enabled_scanners.append(scanner['id'])
return enabled_scanners
def enable_scanners_by_ids(self, scanner_ids):
"""Enable a list of scanner IDs."""
scanner_ids = ','.join(scanner_ids)
self.logger.debug('Enabling scanners with IDs {0}'.format(scanner_ids))
return self.zap.ascan.enable_scanners(scanner_ids)
def disable_scanners_by_ids(self, scanner_ids):
"""Disable a list of scanner IDs."""
scanner_ids = ','.join(scanner_ids)
self.logger.debug('Disabling scanners with IDs {0}'.format(scanner_ids))
return self.zap.ascan.disable_scanners(scanner_ids)
def enable_scanners_by_group(self, group):
"""
Enables the scanners in the group if it matches one in the scanner_group_map.
"""
if group == 'all':
self.logger.debug('Enabling all scanners')
return self.zap.ascan.enable_all_scanners()
try:
scanner_list = self.scanner_group_map[group]
except KeyError:
raise ZAPError(
'Invalid group "{0}" provided. Valid groups are: {1}'.format(
group, ', '.join(self.scanner_groups)
)
)
self.logger.debug('Enabling scanner group {0}'.format(group))
return self.enable_scanners_by_ids(scanner_list)
def disable_scanners_by_group(self, group):
"""
Disables the scanners in the group if it matches one in the scanner_group_map.
"""
if group == 'all':
self.logger.debug('Disabling all scanners')
return self.zap.ascan.disable_all_scanners()
try:
scanner_list = self.scanner_group_map[group]
except KeyError:
raise ZAPError(
'Invalid group "{0}" provided. Valid groups are: {1}'.format(
group, ', '.join(self.scanner_groups)
)
)
self.logger.debug('Disabling scanner group {0}'.format(group))
return self.disable_scanners_by_ids(scanner_list)
def enable_scanners(self, scanners):
"""
Enable the provided scanners by group and/or IDs.
"""
scanner_ids = []
for scanner in scanners:
if scanner in self.scanner_groups:
self.enable_scanners_by_group(scanner)
elif scanner.isdigit():
scanner_ids.append(scanner)
else:
raise ZAPError('Invalid scanner "{0}" provided. Must be a valid group or numeric ID.'.format(scanner))
if scanner_ids:
self.enable_scanners_by_ids(scanner_ids)
def disable_scanners(self, scanners):
"""
        Disable the provided scanners by group and/or IDs.
"""
scanner_ids = []
for scanner in scanners:
if scanner in self.scanner_groups:
self.disable_scanners_by_group(scanner)
elif scanner.isdigit():
scanner_ids.append(scanner)
else:
raise ZAPError('Invalid scanner "{0}" provided. Must be a valid group or numeric ID.'.format(scanner))
if scanner_ids:
self.disable_scanners_by_ids(scanner_ids)
def set_enabled_scanners(self, scanners):
"""
Set only the provided scanners by group and/or IDs and disable all others.
"""
self.logger.debug('Disabling all current scanners')
self.zap.ascan.disable_all_scanners()
self.enable_scanners(scanners)
def set_scanner_attack_strength(self, scanner_ids, attack_strength):
"""Set the attack strength for the given scanners."""
for scanner_id in scanner_ids:
self.logger.debug('Setting strength for scanner {0} to {1}'.format(scanner_id, attack_strength))
result = self.zap.ascan.set_scanner_attack_strength(scanner_id, attack_strength)
if result != 'OK':
raise ZAPError('Error setting strength for scanner with ID {0}: {1}'.format(scanner_id, result))
def set_scanner_alert_threshold(self, scanner_ids, alert_threshold):
"""Set the alert theshold for the given policies."""
for scanner_id in scanner_ids:
self.logger.debug('Setting alert threshold for scanner {0} to {1}'.format(scanner_id, alert_threshold))
result = self.zap.ascan.set_scanner_alert_threshold(scanner_id, alert_threshold)
if result != 'OK':
raise ZAPError('Error setting alert threshold for scanner with ID {0}: {1}'.format(scanner_id, result))
def enable_policies_by_ids(self, policy_ids):
"""Set enabled policy from a list of IDs."""
policy_ids = ','.join(policy_ids)
self.logger.debug('Setting enabled policies to IDs {0}'.format(policy_ids))
self.zap.ascan.set_enabled_policies(policy_ids)
def set_policy_attack_strength(self, policy_ids, attack_strength):
"""Set the attack strength for the given policies."""
for policy_id in policy_ids:
self.logger.debug('Setting strength for policy {0} to {1}'.format(policy_id, attack_strength))
result = self.zap.ascan.set_policy_attack_strength(policy_id, attack_strength)
if result != 'OK':
raise ZAPError('Error setting strength for policy with ID {0}: {1}'.format(policy_id, result))
def set_policy_alert_threshold(self, policy_ids, alert_threshold):
"""Set the alert theshold for the given policies."""
for policy_id in policy_ids:
self.logger.debug('Setting alert threshold for policy {0} to {1}'.format(policy_id, alert_threshold))
result = self.zap.ascan.set_policy_alert_threshold(policy_id, alert_threshold)
if result != 'OK':
raise ZAPError('Error setting alert threshold for policy with ID {0}: {1}'.format(policy_id, result))
def exclude_from_all(self, exclude_regex):
"""Exclude a pattern from proxy, spider and active scanner."""
try:
re.compile(exclude_regex)
except re.error:
raise ZAPError('Invalid regex "{0}" provided'.format(exclude_regex))
self.logger.debug('Excluding {0} from proxy, spider and active scanner.'.format(exclude_regex))
self.zap.core.exclude_from_proxy(exclude_regex)
self.zap.spider.exclude_from_scan(exclude_regex)
self.zap.ascan.exclude_from_scan(exclude_regex)
def xml_report(self, file_path):
"""Generate and save XML report"""
self.logger.debug('Generating XML report')
report = self.zap.core.xmlreport()
self._write_report(report, file_path)
def md_report(self, file_path):
"""Generate and save MD report"""
self.logger.debug('Generating MD report')
report = self.zap.core.mdreport()
self._write_report(report, file_path)
def html_report(self, file_path):
"""Generate and save HTML report."""
self.logger.debug('Generating HTML report')
report = self.zap.core.htmlreport()
self._write_report(report, file_path)
@staticmethod
def _write_report(report, file_path):
"""Write report to the given file path."""
with open(file_path, mode='wb') as f:
if not isinstance(report, binary_type):
report = report.encode('utf-8')
f.write(report)
def get_context_info(self, context_name):
"""Get the context ID for a given context name."""
context_info = self.zap.context.context(context_name)
if not isinstance(context_info, dict):
raise ZAPError('Context with name "{0}" wasn\'t found'.format(context_name))
return context_info
def _get_context_and_user_ids(self, context_name, user_name):
"""Helper to get the context ID and user ID from the given names."""
if context_name is None:
return None, None
context_id = self.get_context_info(context_name)['id']
user_id = None
if user_name:
user_id = self._get_user_id_from_name(context_id, user_name)
return context_id, user_id
def _get_user_id_from_name(self, context_id, user_name):
"""Get a user ID from the user name."""
users = self.zap.users.users_list(context_id)
for user in users:
if user['name'] == user_name:
return user['id']
        raise ZAPError('No user with the name "{0}" was found for context {1}'.format(user_name, context_id))
| zap-cli-v2 | /zap_cli_v2-0.12.2-py3-none-any.whl/zapcli/zap_helper.py | zap_helper.py
import sys
import os
import click
from zapcli.version import __version__
from zapcli import helpers
from zapcli.commands.context import context_group
from zapcli.commands.policies import policies_group
from zapcli.commands.scanners import scanner_group
from zapcli.commands.scripts import scripts_group
from zapcli.commands.session import session_group
from zapcli.log import console
from zapcli.zap_helper import ZAPHelper
@click.group(help='ZAP CLI v{0} - A simple commandline tool for OWASP ZAP.'.format(__version__))
@click.option('--boring', is_flag=True, default=False, help='Remove color from console output.')
@click.option('--verbose', '-v', is_flag=True, default=False, type=bool,
help='Add more verbose debugging output.')
@click.option('--zap-path', default='/zap', envvar='ZAP_PATH', type=str,
help='Path to the ZAP daemon. Defaults to /zap or the value of the environment variable ZAP_PATH.')
@click.option('--port', '-p', default=8090, envvar='ZAP_PORT', type=int,
help='Port of the ZAP proxy. Defaults to 8090 or the value of the environment variable ZAP_PORT.')
@click.option('--zap-url', default='http://127.0.0.1', envvar='ZAP_URL', type=str,
help='The URL of the ZAP proxy. Defaults to http://127.0.0.1 or the value of the environment ' +
'variable ZAP_URL.')
@click.option('--api-key', default='', envvar='ZAP_API_KEY', type=str,
help='The API key for using the ZAP API if required. Defaults to the value of the environment ' +
'variable ZAP_API_KEY.')
@click.option('--log-path', envvar='ZAP_LOG_PATH', type=str,
help='Path to the directory in which to save the ZAP output log file. Defaults to the value of ' +
'the environment variable ZAP_LOG_PATH and uses the value of --zap-path if it is not set.')
@click.option('--soft-fail', type=bool, default=False, is_flag=True, envvar="SOFT_FAIL", help="Runs scans but suppresses error code")
@click.pass_context
def cli(ctx, boring, verbose, zap_path, port, zap_url, api_key, log_path, soft_fail):
"""Main command line entry point."""
console.colorize = not boring
if verbose:
console.setLevel('DEBUG')
else:
console.setLevel('INFO')
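    # Propagate soft-fail through the environment so nested helpers
    # (e.g. helpers.zap_error_handler) can honor it.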
if soft_fail:
os.environ["SOFT_FAIL"] = "true"
ctx.obj = ZAPHelper(zap_path=zap_path, port=port, url=zap_url, api_key=api_key, log_path=log_path, soft_fail=soft_fail)
@cli.command('start', short_help='Start the ZAP daemon.')
@click.option('--start-options', '-o', type=str,
help='Extra options to pass to the ZAP start command, e.g. "-config api.key=12345"')
@click.pass_obj
def start_zap_daemon(zap_helper, start_options):
"""Helper to start the daemon using the current config."""
console.info('Starting ZAP daemon')
with helpers.zap_error_handler():
zap_helper.start(options=start_options)
@cli.command('shutdown')
@click.pass_obj
def shutdown_zap_daemon(zap_helper):
"""Shutdown the ZAP daemon."""
console.info('Shutting down ZAP daemon')
with helpers.zap_error_handler():
zap_helper.shutdown()
@cli.command('status', short_help='Check if ZAP is running.')
@click.option('--timeout', '-t', type=int,
help='Wait this number of seconds for ZAP to have started')
@click.pass_obj
def check_status(zap_helper, timeout):
"""
Check if ZAP is running and able to receive API calls.
You can provide a timeout option which is the amount of time in seconds
the command should wait for ZAP to start if it is not currently running.
This is useful to run before calling other commands if ZAP was started
outside of zap-cli. For example:
zap-cli status -t 60 && zap-cli open-url "http://127.0.0.1/"
    Exits with a non-zero code if ZAP is either not running or the command
    timed out waiting for ZAP to start.
"""
with helpers.zap_error_handler():
if zap_helper.is_running():
console.info('ZAP is running')
elif timeout is not None:
zap_helper.wait_for_zap(timeout)
console.info('ZAP is running')
else:
console.error('ZAP is not running')
sys.exit(2)
@cli.command('open-url')
@click.argument('url')
@click.pass_obj
def open_url(zap_helper, url):
"""Open a URL using the ZAP proxy."""
console.info('Accessing URL {0}'.format(url))
zap_helper.open_url(url)
@cli.command('spider')
@click.argument('url')
@click.option('--context-name', '-c', type=str, help='Context to use if provided.')
@click.option('--user-name', '-u', type=str,
help='Run scan as this user if provided. If this option is used, the context parameter must also ' +
'be provided.')
@click.pass_obj
def spider_url(zap_helper, url, context_name, user_name):
"""Run the spider against a URL."""
console.info('Running spider...')
with helpers.zap_error_handler():
zap_helper.run_spider(url, context_name, user_name)
@cli.command('ajax-spider')
@click.argument('url')
@click.pass_obj
def ajax_spider_url(zap_helper, url):
"""Run the AJAX Spider against a URL."""
console.info('Running AJAX Spider...')
zap_helper.run_ajax_spider(url)
@cli.command('active-scan', short_help='Run an Active Scan.')
@click.argument('url')
@click.option('--scanners', '-s', type=str, callback=helpers.validate_scanner_list,
help='Comma separated list of scanner IDs and/or groups to use in the scan. Use the scanners ' +
'subcommand to get a list of IDs. Available groups are: {0}.'.format(
', '.join(['all'] + list(ZAPHelper.scanner_group_map.keys()))))
@click.option('--recursive', '-r', is_flag=True, default=False, help='Make scan recursive.')
@click.option('--context-name', '-c', type=str, help='Context to use if provided.')
@click.option('--user-name', '-u', type=str,
help='Run scan as this user if provided. If this option is used, the context parameter must also ' +
'be provided.')
@click.option('--soft-fail', type=bool, default=False, is_flag=True, envvar="SOFT_FAIL", help="Runs scans but suppresses error code")
@click.pass_obj
def active_scan(zap_helper, url, scanners, recursive, context_name, user_name, soft_fail):
"""
Run an Active Scan against a URL.
The URL to be scanned must be in ZAP's site tree, i.e. it should have already
been opened using the open-url command or found by running the spider command.
"""
console.info('Running an active scan...')
with helpers.zap_error_handler():
if scanners:
zap_helper.set_enabled_scanners(scanners)
zap_helper.run_active_scan(url, recursive, context_name, user_name)
@cli.command('alerts')
@click.option('--alert-level', '-l', default='High', type=click.Choice(ZAPHelper.alert_levels.keys()),
help='Minimum alert level to include in report (default: High).')
@click.option('--output-format', '-f', default='table', type=click.Choice(['table', 'json']),
help='Output format to print the alerts.')
@click.option('--exit-code', default=True, type=bool,
help='Whether to set a non-zero exit code when there are any alerts of the specified ' +
'level (default: True).')
@click.pass_obj
def show_alerts(zap_helper, alert_level, output_format, exit_code):
"""Show alerts at the given alert level."""
alerts = zap_helper.alerts(alert_level)
helpers.report_alerts(alerts, output_format)
if exit_code:
code = 1 if len(alerts) > 0 else 0
sys.exit(code)
@cli.command('quick-scan', short_help='Run a quick scan.')
@click.argument('url')
@click.option('--self-contained', '-sc', is_flag=True, default=False,
help='Make the scan self-contained, i.e. start the daemon, open the URL, scan it, ' +
'and shutdown the daemon when done.')
@click.option('--scanners', '-s', type=str, callback=helpers.validate_scanner_list,
help='Comma separated list of scanner IDs and/or groups to use in the scan. Use the scanners ' +
'subcommand to get a list of IDs. Available groups are: {0}.'.format(
', '.join(['all'] + list(ZAPHelper.scanner_group_map.keys()))))
@click.option('--spider', is_flag=True, default=False, help='If set, run the spider before running the scan.')
@click.option('--ajax-spider', is_flag=True, default=False, help='If set, run the AJAX Spider before running the scan.')
@click.option('--recursive', '-r', is_flag=True, default=False, help='Make scan recursive.')
@click.option('--alert-level', '-l', default='High', type=click.Choice(ZAPHelper.alert_levels.keys()),
help='Minimum alert level to include in report.')
@click.option('--exclude', '-e', type=str, callback=helpers.validate_regex,
help='Regex to exclude from all aspects of the scan')
@click.option('--start-options', '-o', type=str,
help='Extra options to pass to the ZAP start command when the --self-contained option is used, ' +
' e.g. "-config api.key=12345"')
@click.option('--output-format', '-f', default='table', type=click.Choice(['table', 'json']),
help='Output format to print the alerts.')
@click.option('--context-name', '-c', type=str, help='Context to use if provided.')
@click.option('--user-name', '-u', type=str,
help='Run scan as this user if provided. If this option is used, the context parameter must also ' +
'be provided.')
@click.option('--soft-fail', type=bool, default=False, is_flag=True, envvar="SOFT_FAIL", help="Runs scans but suppresses error code")
@click.pass_obj
def quick_scan(zap_helper, url, **options):
"""
Run a quick scan of a site by opening a URL, optionally spidering the URL,
running an Active Scan, and reporting any issues found.
This command contains most scan options as parameters, so you can do
everything in one go.
If any alerts are found for the given alert level, this command will exit
with a status code of 1.
"""
if options['self_contained']:
console.info('Starting ZAP daemon')
with helpers.zap_error_handler():
zap_helper.start(options['start_options'])
console.info('Running a quick scan for {0}'.format(url))
with helpers.zap_error_handler():
if options['scanners']:
zap_helper.set_enabled_scanners(options['scanners'])
if options['exclude']:
zap_helper.exclude_from_all(options['exclude'])
zap_helper.open_url(url)
if options['spider']:
zap_helper.run_spider(url, options['context_name'], options['user_name'])
if options['ajax_spider']:
zap_helper.run_ajax_spider(url)
zap_helper.run_active_scan(url, options['recursive'], options['context_name'], options['user_name'])
alerts = zap_helper.alerts(options['alert_level'])
helpers.report_alerts(alerts, options['output_format'])
if options['self_contained']:
console.info('Shutting down ZAP daemon')
with helpers.zap_error_handler():
zap_helper.shutdown()
# Customization: Soft fail for error codes
if len(alerts) > 0 and not options.get("soft_fail") and not os.getenv("SOFT_FAIL"):
exit_code = 1
else:
exit_code = 0
# exit_code = 1 if len(alerts) > 0 else 0
sys.exit(exit_code)
@cli.command('exclude', short_help='Exclude a pattern from all scanners.')
@click.argument('pattern', callback=helpers.validate_regex)
@click.pass_obj
def exclude_from_scanners(zap_helper, pattern):
"""Exclude a pattern from proxy, spider and active scanner."""
with helpers.zap_error_handler():
zap_helper.exclude_from_all(pattern)
@cli.command('report')
@click.option('--output', '-o', help='Output file for report.')
@click.option('--output-format', '-f', default='xml', type=click.Choice(['xml', 'html', 'md']),
help='Report format.')
@click.pass_obj
def report(zap_helper, output, output_format):
"""Generate XML, MD or HTML report."""
if output_format == 'html':
zap_helper.html_report(output)
elif output_format == 'md':
zap_helper.md_report(output)
else:
zap_helper.xml_report(output)
console.info('Report saved to "{0}"'.format(output))
# Add subcommand groups
cli.add_command(context_group)
cli.add_command(policies_group)
cli.add_command(scanner_group)
cli.add_command(scripts_group)
cli.add_command(session_group)
| zap-cli-v2 | /zap_cli_v2-0.12.2-py3-none-any.whl/zapcli/cli.py | cli.py
import click
from zapcli.exceptions import ZAPError
from zapcli.helpers import validate_regex, zap_error_handler
from zapcli.log import console
@click.group(name='context', short_help='Manage contexts for the current session.')
@click.pass_context
def context_group(ctx):
"""Group of commands to manage the contexts for the current session."""
pass
@context_group.command('list')
@click.pass_obj
def context_list(zap_helper):
"""List the available contexts."""
contexts = zap_helper.zap.context.context_list
if len(contexts):
console.info('Available contexts: {0}'.format(contexts[1:-1]))
else:
console.info('No contexts available in the current session')
@context_group.command('new')
@click.argument('name')
@click.pass_obj
def context_new(zap_helper, name):
"""Create a new context."""
console.info('Creating context with name: {0}'.format(name))
res = zap_helper.zap.context.new_context(contextname=name)
console.info('Context "{0}" created with ID: {1}'.format(name, res))
@context_group.command('include')
@click.option('--name', '-n', type=str, required=True,
help='Name of the context.')
@click.option('--pattern', '-p', type=str, callback=validate_regex,
help='Regex to include.')
@click.pass_obj
def context_include(zap_helper, name, pattern):
"""Include a pattern in a given context."""
console.info('Including regex {0} in context with name: {1}'.format(pattern, name))
with zap_error_handler():
result = zap_helper.zap.context.include_in_context(contextname=name, regex=pattern)
if result != 'OK':
            raise ZAPError('Including regex in context failed: {}'.format(result))
@context_group.command('exclude')
@click.option('--name', '-n', type=str, required=True,
help='Name of the context.')
@click.option('--pattern', '-p', type=str, callback=validate_regex,
help='Regex to exclude.')
@click.pass_obj
def context_exclude(zap_helper, name, pattern):
"""Exclude a pattern from a given context."""
console.info('Excluding regex {0} from context with name: {1}'.format(pattern, name))
with zap_error_handler():
result = zap_helper.zap.context.exclude_from_context(contextname=name, regex=pattern)
if result != 'OK':
raise ZAPError('Excluding regex from context failed: {}'.format(result))
@context_group.command('info')
@click.argument('context-name')
@click.pass_obj
def context_info(zap_helper, context_name):
"""Get info about the given context."""
with zap_error_handler():
info = zap_helper.get_context_info(context_name)
console.info('ID: {}'.format(info['id']))
console.info('Name: {}'.format(info['name']))
console.info('Authentication type: {}'.format(info['authType']))
console.info('Included regexes: {}'.format(info['includeRegexs']))
console.info('Excluded regexes: {}'.format(info['excludeRegexs']))
@context_group.command('users')
@click.argument('context-name')
@click.pass_obj
def context_list_users(zap_helper, context_name):
"""List the users available for a given context."""
with zap_error_handler():
info = zap_helper.get_context_info(context_name)
users = zap_helper.zap.users.users_list(info['id'])
if len(users):
user_list = ', '.join([user['name'] for user in users])
console.info('Available users for the context {0}: {1}'.format(context_name, user_list))
else:
console.info('No users configured for the context {}'.format(context_name))
@context_group.command('import')
@click.argument('file-path')
@click.pass_obj
def context_import(zap_helper, file_path):
"""Import a saved context file."""
with zap_error_handler():
result = zap_helper.zap.context.import_context(file_path)
if not result.isdigit():
raise ZAPError('Importing context from file failed: {}'.format(result))
console.info('Imported context from {}'.format(file_path))
@context_group.command('export')
@click.option('--name', '-n', type=str, required=True,
help='Name of the context.')
@click.option('--file-path', '-f', type=str,
help='Output file to export the context.')
@click.pass_obj
def context_export(zap_helper, name, file_path):
"""Export a given context to a file."""
with zap_error_handler():
result = zap_helper.zap.context.export_context(name, file_path)
if result != 'OK':
raise ZAPError('Exporting context to file failed: {}'.format(result))
    console.info('Exported context {0} to {1}'.format(name, file_path))
| zap-cli-v2 | /zap_cli_v2-0.12.2-py3-none-any.whl/zapcli/commands/context.py | context.py
import click
from tabulate import tabulate
from zapcli.helpers import filter_by_ids, validate_scanner_list, zap_error_handler
from zapcli.zap_helper import ZAPHelper
from zapcli.log import console
@click.group(name='scanners', short_help='Enable, disable, or list a set of scanners.')
@click.pass_context
def scanner_group(ctx):
"""
Get a list of scanners and whether or not they are enabled,
or disable/enable scanners to use in the scan.
"""
pass
@scanner_group.command('list')
@click.option('--scanners', '-s', type=str, callback=validate_scanner_list,
help='Comma separated list of scanner IDs and/or groups to list (by default the list ' +
'command will output all scanners). Available groups are: {0}.'.format(
', '.join(['all'] + list(ZAPHelper.scanner_group_map.keys()))))
@click.pass_obj
def list_scanners(zap_helper, scanners):
"""Get a list of scanners and whether or not they are enabled."""
scanner_list = zap_helper.zap.ascan.scanners()
if scanners is not None and 'all' not in scanners:
scanner_list = filter_by_ids(scanner_list, scanners)
click.echo(tabulate([[s['id'], s['name'], s['policyId'], s['enabled'], s['attackStrength'], s['alertThreshold']]
for s in scanner_list],
headers=['ID', 'Name', 'Policy ID', 'Enabled', 'Strength', 'Threshold'],
tablefmt='grid'))
@scanner_group.command('enable')
@click.option('--scanners', '-s', type=str, callback=validate_scanner_list,
help='Comma separated list of scanner IDs and/or groups to enable. Available groups are: {0}.'.format(
', '.join(['all'] + list(ZAPHelper.scanner_group_map.keys()))))
@click.pass_obj
def enable_scanners(zap_helper, scanners):
"""Enable scanners to use in a scan."""
scanners = scanners or ['all']
zap_helper.enable_scanners(scanners)
@scanner_group.command('disable')
@click.option('--scanners', '-s', type=str, callback=validate_scanner_list,
help='Comma separated list of scanner IDs and/or groups to disable. Available groups are: {0}.'.format(
', '.join(['all'] + list(ZAPHelper.scanner_group_map.keys()))))
@click.pass_obj
def disable_scanners(zap_helper, scanners):
"""Disable scanners so they are not used in a scan."""
scanners = scanners or ['all']
zap_helper.disable_scanners(scanners)
@scanner_group.command('set-strength')
@click.option('--scanners', type=str, callback=validate_scanner_list,
help='Comma separated list of scanner IDs and/or groups for which to set the strength. Available ' +
'groups are: {0}.'.format(', '.join(['all'] + list(ZAPHelper.scanner_group_map.keys()))))
@click.option('--strength', default='Default',
type=click.Choice(['Default', 'Low', 'Medium', 'High', 'Insane']),
help='Attack strength to apply to the given policies.')
@click.pass_obj
def set_scanner_strength(zap_helper, scanners, strength):
"""Set the attack strength for scanners."""
if not scanners or 'all' in scanners:
scanners = _get_all_scanner_ids(zap_helper)
with zap_error_handler():
zap_helper.set_scanner_attack_strength(scanners, strength)
console.info('Set attack strength to {0}.'.format(strength))
@scanner_group.command('set-threshold')
@click.option('--scanners', '-s', type=str, callback=validate_scanner_list,
help='Comma separated list of scanner IDs and/or groups for which to set the threshold. Available ' +
'groups are: {0}.'.format(', '.join(['all'] + list(ZAPHelper.scanner_group_map.keys()))))
@click.option('--threshold', '-t', default='Default',
type=click.Choice(['Default', 'Off', 'Low', 'Medium', 'High']),
help='Alert threshold to apply to the given policies.')
@click.pass_obj
def set_scanner_threshold(zap_helper, scanners, threshold):
"""Set the alert threshold for scanners."""
if not scanners or 'all' in scanners:
scanners = _get_all_scanner_ids(zap_helper)
with zap_error_handler():
zap_helper.set_scanner_alert_threshold(scanners, threshold)
console.info('Set alert threshold to {0}.'.format(threshold))
def _get_all_scanner_ids(zap_helper):
"""Get all scanner IDs."""
scanners = zap_helper.zap.ascan.scanners()
    return [s['id'] for s in scanners]
| zap-cli-v2 | /zap_cli_v2-0.12.2-py3-none-any.whl/zapcli/commands/scanners.py | scanners.py
import click
from tabulate import tabulate
from zapcli.helpers import filter_by_ids, validate_ids, zap_error_handler
from zapcli.log import console
@click.group(name='policies', short_help='Enable or list a set of policies.')
@click.pass_context
def policies_group(ctx):
"""
Get a list of policies and whether or not they are enabled,
or set the enabled policies to use in the scan.
"""
pass
@policies_group.command('list')
@click.option('--policy-ids', '-p', type=str, callback=validate_ids,
help='Comma separated list of policy IDs to list ' +
'(by default the list command will output all policies).')
@click.pass_obj
def list_policies(zap_helper, policy_ids):
"""
Get a list of policies and whether or not they are enabled.
"""
policies = filter_by_ids(zap_helper.zap.ascan.policies(), policy_ids)
click.echo(tabulate([[p['id'], p['name'], p['enabled'], p['attackStrength'], p['alertThreshold']]
for p in policies],
headers=['ID', 'Name', 'Enabled', 'Strength', 'Threshold'],
tablefmt='grid'))
@policies_group.command('enable')
@click.option('--policy-ids', '-p', type=str, callback=validate_ids,
help='Comma separated list of policy IDs to enable ' +
'(by default the enable command will enable all policies).')
@click.pass_obj
def enable_policies(zap_helper, policy_ids):
"""
Set the enabled policies to use in a scan.
When you enable a selection of policies, all other policies are
disabled.
"""
if not policy_ids:
policy_ids = _get_all_policy_ids(zap_helper)
with zap_error_handler():
zap_helper.enable_policies_by_ids(policy_ids)
@policies_group.command('set-strength')
@click.option('--policy-ids', '-p', type=str, callback=validate_ids,
help='Comma separated list of policy IDs for which to set the strength.')
@click.option('--strength', '-s', default='Default',
type=click.Choice(['Default', 'Low', 'Medium', 'High', 'Insane']),
help='Attack strength to apply to the given policies.')
@click.pass_obj
def set_policy_strength(zap_helper, policy_ids, strength):
"""Set the attack strength for policies."""
if not policy_ids:
policy_ids = _get_all_policy_ids(zap_helper)
with zap_error_handler():
zap_helper.set_policy_attack_strength(policy_ids, strength)
console.info('Set attack strength to {0}.'.format(strength))
@policies_group.command('set-threshold')
@click.option('--policy-ids', '-p', type=str, callback=validate_ids,
help='Comma separated list of policy IDs for which to set the threshold.')
@click.option('--threshold', '-t', default='Default',
type=click.Choice(['Default', 'Off', 'Low', 'Medium', 'High']),
help='Alert threshold to apply to the given policies.')
@click.pass_obj
def set_policy_threshold(zap_helper, policy_ids, threshold):
"""Set the alert threshold for policies."""
if not policy_ids:
policy_ids = _get_all_policy_ids(zap_helper)
with zap_error_handler():
zap_helper.set_policy_alert_threshold(policy_ids, threshold)
console.info('Set alert threshold to {0}.'.format(threshold))
def _get_all_policy_ids(zap_helper):
"""Get all policy IDs."""
policies = zap_helper.zap.ascan.policies()
    return [p['id'] for p in policies]
| zap-cli-v2 | /zap_cli_v2-0.12.2-py3-none-any.whl/zapcli/commands/policies.py | policies.py
import os
import click
from tabulate import tabulate
from zapcli.exceptions import ZAPError
from zapcli.helpers import zap_error_handler
from zapcli.log import console
@click.group(name='scripts', short_help='Manage scripts.')
@click.pass_context
def scripts_group(ctx):
"""
Get a list of scripts and whether or not they are enabled,
load and remove scripts, or disable/enable scripts to use.
"""
pass
@scripts_group.command('list')
@click.pass_obj
def list_scripts(zap_helper):
"""List scripts currently loaded into ZAP."""
scripts = zap_helper.zap.script.list_scripts
output = []
for s in scripts:
if 'enabled' not in s:
s['enabled'] = 'N/A'
output.append([s['name'], s['type'], s['engine'], s['enabled']])
click.echo(tabulate(output, headers=['Name', 'Type', 'Engine', 'Enabled'], tablefmt='grid'))
@scripts_group.command('list-engines')
@click.pass_obj
def list_engines(zap_helper):
"""List engines that can be used to run scripts."""
engines = zap_helper.zap.script.list_engines
console.info('Available engines: {}'.format(', '.join(engines)))
@scripts_group.command('enable')
@click.argument('script-name', metavar='"SCRIPT NAME"')
@click.pass_obj
def enable_script(zap_helper, script_name):
"""Enable a script."""
with zap_error_handler():
console.debug('Enabling script "{0}"'.format(script_name))
result = zap_helper.zap.script.enable(script_name)
if result != 'OK':
raise ZAPError('Error enabling script: {0}'.format(result))
console.info('Script "{0}" enabled'.format(script_name))
@scripts_group.command('disable')
@click.argument('script-name', metavar='"SCRIPT NAME"')
@click.pass_obj
def disable_script(zap_helper, script_name):
"""Disable a script."""
with zap_error_handler():
console.debug('Disabling script "{0}"'.format(script_name))
result = zap_helper.zap.script.disable(script_name)
if result != 'OK':
raise ZAPError('Error disabling script: {0}'.format(result))
console.info('Script "{0}" disabled'.format(script_name))
@scripts_group.command('remove')
@click.argument('script-name', metavar='"SCRIPT NAME"')
@click.pass_obj
def remove_script(zap_helper, script_name):
"""Remove a script."""
with zap_error_handler():
console.debug('Removing script "{0}"'.format(script_name))
result = zap_helper.zap.script.remove(script_name)
if result != 'OK':
raise ZAPError('Error removing script: {0}'.format(result))
console.info('Script "{0}" removed'.format(script_name))
@scripts_group.command('load')
@click.option('--name', '-n', prompt=True, help='Name of the script')
@click.option('--script-type', '-t', prompt=True, help='Type of script')
@click.option('--engine', '-e', prompt=True, help='Engine the script should use')
@click.option('--file-path', '-f', prompt=True, help='Path to the script file (i.e. /home/user/script.js)')
@click.option('--description', '-d', default='', help='Optional description for the script')
@click.pass_obj
def load_script(zap_helper, **options):
"""Load a script from a file."""
with zap_error_handler():
if not os.path.isfile(options['file_path']):
raise ZAPError('No file found at "{0}", cannot load script.'.format(options['file_path']))
if not _is_valid_script_engine(zap_helper.zap, options['engine']):
engines = zap_helper.zap.script.list_engines
raise ZAPError('Invalid script engine provided. Valid engines are: {0}'.format(', '.join(engines)))
console.debug('Loading script "{0}" from "{1}"'.format(options['name'], options['file_path']))
result = zap_helper.zap.script.load(options['name'], options['script_type'], options['engine'],
options['file_path'], scriptdescription=options['description'])
if result != 'OK':
raise ZAPError('Error loading script: {0}'.format(result))
console.info('Script "{0}" loaded'.format(options['name']))
def _is_valid_script_engine(zap, engine):
"""Check if given script engine is valid."""
engine_names = zap.script.list_engines
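    # Engine entries have the form '<language> : <engine name>'; accept either
    # the full entry or the short name after ' : '.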
short_names = [e.split(' : ')[1] for e in engine_names]
    return engine in engine_names or engine in short_names
| zap-cli-v2 | /zap_cli_v2-0.12.2-py3-none-any.whl/zapcli/commands/scripts.py | scripts.py
ZAP (the Zurich Atmosphere Purge)
---------------------------------
Tired of sky subtraction residuals? ZAP them!
ZAP is a high-precision sky subtraction tool which can be used as a complete sky
subtraction solution, or as an enhancement to previously sky-subtracted MUSE
data. The method uses PCA to isolate the residual sky subtraction features and
remove them from the observed datacube. Future developments will include
modification for use on a variety of instruments.
The last stable release of ZAP can be installed simply with pip::
pip install zap
Or into the user path with::
pip install --user zap
Links
~~~~~
- `documentation <http://zap.readthedocs.io/en/latest/>`_
- `git repository <https://github.com/musevlt/zap>`_
- `changelog <https://github.com/musevlt/zap/blob/master/CHANGELOG>`_
Citation
~~~~~~~~
The paper describing the original method can be found here:
http://adsabs.harvard.edu/abs/2016MNRAS.458.3210S
Please cite ZAP as::
\bibitem[Soto et al.(2016)]{2016MNRAS.458.3210S} Soto, K.~T., Lilly, S.~J., Bacon, R., Richard, J., \& Conseil, S.\ 2016, \mnras, 458, 3210
| zap | /zap-2.0.tar.gz/zap-2.0/README.rst | README.rst |
================================
Welcome to ZAP's documentation!
================================
.. toctree::
:maxdepth: 2
Tired of sky subtraction residuals? ZAP them!
ZAP (the *Zurich Atmosphere Purge*) is a high-precision sky subtraction tool
which can be used as a complete sky subtraction solution, or as an enhancement to
previously sky-subtracted MUSE integral field spectroscopic data. The method
uses PCA to isolate the residual sky subtraction features and remove them from
the observed datacube. Though the operation of ZAP is not dependent on perfect
flatfielding of the data in a MUSE exposure, better results are obtained when
these corrections are made ahead of time. Future development will include
expansion to more instruments.
.. note::
   Version 2.0 is now compatible with the WFM-AO mode, and also brings major
   improvements for the sky subtraction. See the details below in the
   :ref:`changelog` section, as well as the discussion on the
   :ref:`eigenvectors-number`.
The paper describing the original method can be found here:
http://adsabs.harvard.edu/abs/2016MNRAS.458.3210S
Please cite ZAP as::
\bibitem[Soto et al.(2016)]{2016MNRAS.458.3210S} Soto, K.~T., Lilly, S.~J., Bacon, R., Richard, J., \& Conseil, S.\ 2016, \mnras, 458, 3210
Installation
============
ZAP requires the following packages:
* Numpy (1.6.0 or later)
* Astropy (1.0 or later)
* SciPy (0.13.3 or later)
* Scikit-learn
Many linear algebra operations are performed in ZAP, so it can be beneficial to
use an alternative BLAS package. In the Anaconda distribution, the default BLAS
comes with Numpy linked to MKL, which can amount to a 20% speedup of ZAP.
The last stable release of ZAP can be installed simply with pip::
pip install zap
Or into the user path with::
pip install --user zap
Usage
=====
In its most hands-off form, ZAP can take an input FITS datacube, operate on it,
and output a final FITS datacube. The main function to do this is
`zap.process`::
import zap
zap.process('INPUT.fits', outcubefits='OUTPUT.fits')
Care should be taken, however, since this case assumes a sparse field, and
better results can be obtained by applying masks.
There are a number of options that can be passed to the code which we describe
below.
Sparse Field Case
-----------------
This case specifically refers to the case where the sky can be measured in the
sky frame itself, using::
zap.process('INPUT.fits', outcubefits='OUTPUT.fits')
By default, the code will create a processed datacube named
``DATACUBE_ZAP.fits`` in the current directory. While this can work well in the
case of very faint sources, masks can improve the results.
For the sparse field case, a mask file can be included, which is a 2D FITS
image matching the spatial dimensions of the input datacube. Masks are defined
to be >= 1 on astronomical sources and 0 at the position of the sky. Set this
parameter with the ``mask`` keyword ::
zap.process('INPUT.fits', outcubefits='OUTPUT.fits', mask='mask.fits')
Filled Field Case
-----------------
This approach also can address the saturated field case and is robust in the
case of strong emission lines, in this case the input is an offset sky
observation. To achieve this, we calculate the SVD on an external sky frame
using the function `zap.SVDoutput`.
An example of running the code in this way is as follows::
extSVD = zap.SVDoutput('Offset_Field_CUBE.fits', mask='mask.fits')
zap.process('Source_cube.fits', outcubefits='OUTPUT.fits', extSVD=extSVD)
The integration time of this frame does not need to be the same as the object
exposure, but rather just a 2-3 minute exposure.
.. _eigenvectors-number:
Optimal number of eigenvectors
------------------------------
The major difficulty to get a high quality sky subtraction is to find the
optimal number of eigenvalues to use. ZAP provides an automated way for this,
trying to find the inflexion point of the variance curve. This is one way to do
it, but there is no right answer to this issue. A higher number of eigenvalues
used for the reconstruction will give a better sky subtraction, but with the
risk of subtracting signal from strong emission lines.
The first thing one can do to optimize the PCA quality is to use a good mask, to
avoid incorporating signal from astronomical sources in the eigenvectors. Then
it is highly recommended to have a look at the explained variance curves (which
can be saved with the ``varcurvefits`` parameter) and the selected number of
eigenvalues (saved in the FITS headers in ``ZAPNEV*``). It is also possible to
use the interactive mode (see below) to try a different number of eigenvectors.
This number can be specified manually with the ``neval`` parameter.
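For example, a hedged sketch combining these options (parameter names as
described above; the values are only illustrative)::

    zap.process('INPUT.fits', outcubefits='OUTPUT.fits',
                varcurvefits='VARCURVE.fits',  # save the variance curves
                neval=5)                       # fix the number of eigenvalues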
Command Line Interface
======================
ZAP can also be used from the command line::
python -m zap INPUT_CUBE.fits
More information use of the command line interface can be found with the
command ::
python -m zap -h
Interactive mode
================
ZAP can be used interactively from the Python console::
import zap
zobj = zap.process('INPUT.fits', interactive=True)
The run method operates on the datacube, and retains all of the data and methods
necessary to process a final data cube in a Python class named `~zap.Zap`. You
can elect to investigate the data product via the `~zap.Zap` object, and even
reprocess the cube with a different number of eigenspectra per region.
A workflow may go as follows:
.. code-block:: python
import zap
from matplotlib import pyplot as plt
# allow ZAP to run the optimize routine
zobj = zap.process('INPUT.fits', interactive=True)
# plot the variance curves and the selection of the number of eigenspectra used
zobj.plotvarcurve()
# plot a spectrum extracted from the original cube
plt.figure()
plt.plot(zobj.cube[:,50:100,50:100].sum(axis=(1,2)), 'b', alpha=0.3)
# plot a spectrum of the cleaned ZAP dataproduct
plt.plot(zobj.cleancube[:,50:100,50:100].sum(axis=(1,2)), 'g')
# choose just the first 3 spectra for all segments
zobj.reprocess(nevals=3)
# plot a spectrum extracted from the original cube
plt.plot(zobj.cube[:,50:100,50:100].sum(axis=(1,2)), 'b', alpha=0.3)
# plot a spectrum of the cleaned ZAP dataproduct
plt.plot(zobj.cleancube[:,50:100,50:100].sum(axis=(1,2)), 'g')
# choose some number of modes by hand
zobj.reprocess(nevals=[2,5,2,4,6,7,9,8,5,3,5])
# plot a spectrum
plt.plot(zobj.cleancube[:,50:100,50:100].sum(axis=(1,2)), 'k')
# Use the optimization algorithm to identify the best number of modes per segment
zobj.optimize()
# compare to the previous versions
plt.plot(zobj.cleancube[:,50:100,50:100].sum(axis=(1,2)), 'r')
# identify a pixel in the dispersion axis that shows a residual feature in
# the original
plt.figure()
plt.matshow(zobj.cube[2903,:,:])
# compare this to the zap dataproduct
plt.figure()
plt.matshow(zobj.cleancube[2903,:,:])
# write the processed cube as a single extension FITS
zobj.writecube('DATACUBE_ZAP.fits')
# or merge the zap datacube into the original input datacube, replacing the
# data extension
zobj.writefits(outcubefits='DATACUBE_FINAL_ZAP.fits')
.. _changelog:
Changelog
=========
.. include:: ../CHANGELOG
API
===
.. autofunction:: zap.process
.. autofunction:: zap.SVDoutput
.. autofunction:: zap.nancleanfits
.. autofunction:: zap.contsubfits
.. autofunction:: zap.wmedian
.. autoclass:: zap.Zap
:members:
| zap | /zap-2.0.tar.gz/zap-2.0/doc/index.rst | index.rst |
ZAP CLI
=======
.. image:: https://travis-ci.org/Grunny/zap-cli.svg?branch=master
:target: https://travis-ci.org/Grunny/zap-cli
A commandline tool that wraps the OWASP ZAP API for controlling ZAP and
executing quick, targeted attacks.
Installation
============
To install the latest release from PyPI, you can run the following command:
::
pip install --upgrade zapcli
To install the latest development version of ZAP CLI, you can run the
following:
::
pip install --upgrade git+https://github.com/Grunny/zap-cli.git
To install ZAP CLI for development, including the dependencies needed
in order to run unit tests, clone this repository and use
``pip install -e .[dev]``.
Usage
=====
To use ZAP CLI, you need to set the port ZAP runs on (defaults to 8090) and
the path to the folder in which ZAP is installed. These can be set either as
commandline parameters or with the environment variables ``ZAP_PORT`` and
``ZAP_PATH``. If you have an API key set for ZAP, this can likewise be set
either as a commandline parameter or with the ``ZAP_API_KEY`` environment
variable.
ZAP CLI can then be used with the following commands:
::
Usage: zap-cli [OPTIONS] COMMAND [ARGS]...
ZAP CLI - A simple commandline tool for OWASP ZAP.
Options:
--boring Remove color from console output.
-v, --verbose Add more verbose debugging output.
--zap-path TEXT Path to the ZAP daemon. Defaults to /zap or the value of
the environment variable ZAP_PATH.
-p, --port INTEGER Port of the ZAP proxy. Defaults to 8090 or the value of
the environment variable ZAP_PORT.
--zap-url TEXT The URL of the ZAP proxy. Defaults to http://127.0.0.1
or the value of the environment variable ZAP_URL.
--api-key TEXT The API key for using the ZAP API if required. Defaults
to the value of the environment variable ZAP_API_KEY.
--log-path TEXT Path to the directory in which to save the ZAP output
log file. Defaults to the value of the environment
variable ZAP_LOG_PATH and uses the value of --zap-path
if it is not set.
--help Show this message and exit.
Commands:
active-scan Run an Active Scan.
ajax-spider Run the AJAX Spider against a URL.
alerts Show alerts at the given alert level.
context Manage contexts for the current session.
exclude Exclude a pattern from all scanners.
open-url Open a URL using the ZAP proxy.
policies Enable or list a set of policies.
quick-scan Run a quick scan.
report Generate XML, MD or HTML report.
scanners Enable, disable, or list a set of scanners.
scripts Manage scripts.
session Manage sessions.
shutdown Shutdown the ZAP daemon.
spider Run the spider against a URL.
start Start the ZAP daemon.
status Check if ZAP is running.
You can use ``--help`` with any of the subcommands to get information on how to use
them.
Getting started running a scan
------------------------------
In order to run a scan, you can use either the ``active-scan`` or the ``quick-scan``
command. The ``active-scan`` only runs an active scan against a URL that is already
in ZAP's site tree (i.e. has already been opened using the ``open-url`` command or
found by running the ``spider``). The ``quick-scan`` command is intended to be a way
to run quick scans of a site with most options contained within a single command
(including being able to start and shutdown ZAP before and after), so you can do
everything in one go. Without any other options passed to the command, ``quick-scan``
will open the URL to make sure it's in the site tree, run an active scan, and will
output any found alerts.
As an example, to run a quick scan of a URL that will open and spider the URL, scan
recursively, exclude URLs matching a given regex, and only use XSS and SQLi scanners,
you could run:
::
$ zap-cli quick-scan -s xss,sqli --spider -r -e "some_regex_pattern" http://127.0.0.1/
[INFO] Running a quick scan for http://127.0.0.1/
[INFO] Issues found: 1
+----------------------------------+--------+----------+---------------------------------------------------------------------------------+
| Alert | Risk | CWE ID | URL |
+==================================+========+==========+=================================================================================+
| Cross Site Scripting (Reflected) | High | 79 | http://127.0.0.1/index.php?foo=%22%3E%3Cscript%3Ealert%281%29%3B%3C%2Fscript%3E |
+----------------------------------+--------+----------+---------------------------------------------------------------------------------+
The above example is equivalent to running the following commands in order:
::
$ zap-cli open-url http://127.0.0.1/
[INFO] Accessing URL http://127.0.0.1/
$ zap-cli exclude "some_regex_pattern"
$ zap-cli spider http://127.0.0.1/
[INFO] Running spider...
$ zap-cli active-scan --scanners xss,sqli --recursive http://127.0.0.1/
[INFO] Running an active scan...
$ zap-cli alerts
[INFO] Issues found: 1
+----------------------------------+--------+----------+---------------------------------------------------------------------------------+
| Alert | Risk | CWE ID | URL |
+==================================+========+==========+=================================================================================+
| Cross Site Scripting (Reflected) | High | 79 | http://127.0.0.1/index.php?foo=%22%3E%3Cscript%3Ealert%281%29%3B%3C%2Fscript%3E |
+----------------------------------+--------+----------+---------------------------------------------------------------------------------+
The ``quick-scan`` command also has a ``--self-contained`` option (or ``-sc`` for short)
which will first try to start ZAP if it isn't running already and shutdown ZAP once the
scan is finished. For example:
::
$ zap-cli quick-scan --self-contained --spider -r -s xss http://127.0.0.1/
[INFO] Starting ZAP daemon
[INFO] Running a quick scan for http://127.0.0.1/
[INFO] Issues found: 1
+----------------------------------+--------+----------+---------------------------------------------------------------------------------+
| Alert | Risk | CWE ID | URL |
+==================================+========+==========+=================================================================================+
| Cross Site Scripting (Reflected) | High | 79 | http://127.0.0.1/index.php?foo=%22%3E%3Cscript%3Ealert%281%29%3B%3C%2Fscript%3E |
+----------------------------------+--------+----------+---------------------------------------------------------------------------------+
[INFO] Shutting down ZAP daemon
Extra start options
-------------------
You can also pass extra options to the start command of ZAP using ``--start-options`` or ``-o``
with commands that allow it. For example, to start ZAP with a custom API key you could use:
::
$ zap-cli start --start-options '-config api.key=12345'
Or to run a self-contained quick scan (that will start ZAP and shut it down after the scan
is complete) with a custom API key, you could use:
::
$ zap-cli --api-key 12345 quick-scan --self-contained -o '-config api.key=12345' -s xss http://127.0.0.1/
Or to run the same scan with the API key disabled:
::
$ zap-cli quick-scan -sc -o '-config api.disablekey=true' -s xss http://127.0.0.1/
Running scans as authenticated users
------------------------------------
In order to run a scan as an authenticated user, first configure the authentication method and users for
a context using the ZAP UI (see the `ZAP help page <https://github.com/zaproxy/zap-core-help/wiki/HelpStartConceptsAuthentication>`_
for more information). Once the authentication method and users are prepared, you can then export the context
with the configured authentication method so it can be imported and used to run authenticated scans with ZAP CLI.
You can export a context with the authentication method and users configured either through the ZAP UI or using the
``context export`` ZAP CLI command. For example, to export a context with the name DevTest to a file, you could run:
::
$ zap-cli context export --name DevTest --file-path /home/user/DevTest.context
To import the saved context for use with ZAP CLI later, you could run:
::
$ zap-cli context import /home/user/DevTest.context
After importing the context with the configured authentication method and users, you can then provide the context name
and user name to the ``spider``, ``active-scan``, and ``quick-scan`` commands to run the scans while authenticated as
the given user. For example:
::
$ zap-cli context import /home/user/DevTest.context
$ zap-cli open-url "http://localhost/"
$ zap-cli spider --context-name DevTest --user-name SomeUser "http://localhost"
$ zap-cli active-scan --recursive -c DevTest -u SomeUser "http://localhost"
$ zap-cli quick-scan --recursive --spider -c DevTest -u SomeUser "http://localhost"
| zapcli | /zapcli-0.10.0.tar.gz/zapcli-0.10.0/README.rst | README.rst |
PILS? Zapf it!
==============
This is a client library for the PILS PLC interface specification,
found here: https://forge.frm2.tum.de/public/doc/plc/master/html/
A minimal example of usage::
import logging
import zapf.scan
# Connection via different protocols is abstracted via URIs.
# Here we connect via Modbus/TCP using slave number 0.
URI = 'modbus://my.plc.host:502/0'
# The Scanner allows reading the PLC's "indexer" which provides
# all metadata about the PLC and its devices.
scanner = zapf.scan.Scanner(URI, logging.root)
plc_data = scanner.get_plc_data()
print('connected to PLC:', plc_data.plc_name)
# For each found device, this will create a client object and
# read the most basic property - the current value.
for dev in scanner.scan_devices():
print('got a device:', dev)
        print('device value:', dev.read_value())
| zapf | /zapf-0.4.7.tar.gz/zapf-0.4.7/README.rst | README.rst |
.. _device:
The device API
==============
Since PILS structures the PLC functionality into devices as the basic unit,
Zapf provides a wrapper object for each such device.
The concrete class depends on the typecode, but all device classes share the
same interface.
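Device objects are typically not constructed directly but obtained from a
``zapf.scan.Scanner``, as in this minimal sketch (mirroring the README example)::

    import logging
    import zapf.scan

    scanner = zapf.scan.Scanner('modbus://my.plc.host:502/0', logging.root)
    for dev in scanner.scan_devices():
        print(dev, dev.read_value())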
.. currentmodule:: zapf.device
.. autoclass:: Device
:members:
.. autoclass:: DiscreteDevice
.. autoclass:: SimpleDiscreteIn
.. autoclass:: SimpleDiscreteOut
.. autoclass:: DiscreteIn
.. autoclass:: DiscreteOut
.. autoclass:: Keyword
.. autoclass:: AnalogDevice
.. autoclass:: SimpleAnalogIn
.. autoclass:: SimpleAnalogOut
.. autoclass:: AnalogIn
.. autoclass:: AnalogOut
.. autoclass:: RealValue
.. autoclass:: FlatIn
.. autoclass:: FlatOut
.. autoclass:: ParamIn
.. autoclass:: ParamOut
.. autoclass:: VectorDevice
.. autoclass:: VectorIn
.. autoclass:: VectorOut
.. autoclass:: MessageIO
:members:
.. attribute:: TYPECODE_MAP
Maps the PILS :ref:`type code <pils:type-codes>` to a namedtuple with the
following items:
* ``devcls``: The concrete `Device` subclass to use
* ``value_fmt``: The basic Python `struct` format for the device values
* ``num_values``: The number of main values of the device
* ``has_target``: If the device has one or more "target" fields
* ``readonly``: If the device can only be read, not written to.
* ``status_size``: The size, in bytes, of the device's status fields
* ``num_params``: The number of parameter fields
* ``has_pctrl``: Whether this device uses the :ref:`pils:param-ctrl` field
Most of this information is used by the `Device` classes to determine their
internal layout.
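   For example, the entry for a given type code can be inspected like this
   (an illustrative sketch; ``tc`` stands for any valid PILS type code)::

      from zapf.device import TYPECODE_MAP

      info = TYPECODE_MAP[tc]
      info.devcls    # the concrete Device subclass for this type code
      info.readonly  # True if devices of this type can only be read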
The spec module
---------------
``zapf.spec`` contains several helpers, enums, and constants that implement
aspects of the PILS specification.
.. currentmodule:: zapf.spec
.. data:: DevStatus
A mapping-like enumeration of the possible PLC states, see
:ref:`pils:status-word`.
* ``DevStatus.RESET`` - device is initializing or command to reset
* ``DevStatus.IDLE`` - device is idle and fully functional
* ``DevStatus.DISABLED`` - device is disabled (switched off)
* ``DevStatus.WARN`` - device is idle and possibly not fully functional
* ``DevStatus.START`` - command from client to change value
* ``DevStatus.BUSY`` - device is changing its value
* ``DevStatus.STOP`` - command from client to stop value change
* ``DevStatus.ERROR`` - device is unusable due to error
* ``DevStatus.DIAGNOSTIC_ERROR``
   Apart from attribute access such as ``DevStatus.IDLE``, which yields the
   numeric value, you can also get the numeric value by indexing with the name
   (``DevStatus['IDLE']``), and the state name by indexing with the numeric
   value (``DevStatus[1]``).
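   For example::

      from zapf.spec import DevStatus

      DevStatus.IDLE     # the numeric value of the IDLE state
      DevStatus['IDLE']  # the same numeric value, via name indexing
      DevStatus[1]       # the state name corresponding to the numeric value 1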
.. data:: ReasonMap
A list of text equivalent strings for the 4-bit "reason" code, see
:ref:`the spec <pils:reason-bits>`, inside a status word.
.. data:: ParamControl
A "bit field accessor" like `StatusStruct` for the parameter control field.
Its subfields are:
* ``CMD`` - the command as `ParamCMDs`
* ``SUBINDEX`` - the device subindex for the parameter
* ``IDX`` - the parameter index
.. data:: ParamCMDs
This is a mapping-like enumeration of the possible return values of a
device's :ref:`parameter state machine <pils:param-ctrl>`:
* ``ParamCMDs.INIT`` - parameter value is invalid, awaiting command
* ``ParamCMDs.DO_READ`` - command from client to read a value
* ``ParamCMDs.DO_WRITE`` - command from client to write a value
* ``ParamCMDs.BUSY`` - request is being processed
* ``ParamCMDs.DONE`` - request was processed, value field contains
current value (or return value for special functions)
* ``ParamCMDs.ERR_NO_IDX`` - parameter does not exist
* ``ParamCMDs.ERR_RO`` - parameter is read-only
* ``ParamCMDs.ERR_RETRY`` - parameter can *temporarily* not be changed
   Apart from attribute access such as ``ParamCMDs.DO_READ``, which yields the
   numeric value, you can also get the numeric value by indexing with the name
   (``ParamCMDs['DO_READ']``), and the string name by indexing with the numeric
   value (``ParamCMDs[1]``).
.. data:: Parameters
The mapping of known parameters and special functions.
It can be indexed by name (``Parameters['Speed'] == 60``) and by number
(``Parameters[60] == 'Speed'``).
Parameters not defined by the spec are given a generic name like
``'Param80'``.
.. function:: is_function(index)
Return true if the given parameter number is a special function.
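   For example, combined with the `Parameters` mapping above (illustrative)::

      from zapf.spec import Parameters, is_function

      Parameters['Speed']               # == 60, per the mapping above
      Parameters[60]                    # == 'Speed'
      is_function(Parameters['Speed'])  # True only if parameter 60 is a special function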
| zapf | /zapf-0.4.7.tar.gz/zapf-0.4.7/doc/device.rst | device.rst |
Introduction
============
What is it?
-----------
Zapf is a client library to access PLCs (Programmable logic controllers) that
offer an interface conforming to the **PILS** specification. The specification
is hosted here:
https://forge.frm2.tum.de/public/doc/plc/master/html/
Zapf provides APIs for:
* Connecting to a PLC via a variety of protocols
* Querying the PLC for its metadata and the available devices
* Creating a client object for each device, according to its :ref:`device type
<pils:device-types>`
* Fully interacting with the PLC using the device objects
The library abstracts over the different communication protocols and PILS
specification versions.
Example
-------
.. include:: ../README.rst
:start-line: 6
Installation
------------
Zapf can be installed from PyPI with the usual methods.
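For example::

    pip install zapf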
Its only (optional) non-standard-library dependency is ``PyTango``, which is
required to communicate using the `Tango <https://tango-controls.org>`_
framework.
| zapf | /zapf-0.4.7.tar.gz/zapf-0.4.7/doc/intro.rst | intro.rst |
Zapi: A simple lightweight web framework
=========================================
.. image:: https://img.shields.io/pypi/v/zapi.svg
:target: https://pypi.python.org/pypi/zapi
.. image:: https://img.shields.io/pypi/dm/zapi.svg
:target: https://pypi.python.org/pypi/zapi
Zapi is an application development framework - a library - for people who build HTTP API
services using Python, without the need for manual labor. There is no need to manually bind
a class or function to a URL path - everything is automatic.
Zapi lets you build your API service while minimizing the amount of code needed for a given task.
Zapi supports Python >= 2.6.
Installation
------------
To install Zapi, simply:
.. code-block:: bash
$ pip install zapi
Documentation
-------------
Documentation will be available at http://zapi.readthedocs.org/ soon.
How to Contribute
-----------------
#. Check for open issues or open a fresh issue to start a discussion around a feature idea or a bug. There is a `Contributor Friendly`_ tag for issues that should be ideal for people who are not very familiar with the codebase yet.
#. Fork `the repository`_ on GitHub to start making your changes to the **master** branch (or branch off of it).
#. Write a test which shows that the bug was fixed or that the feature works as expected.
#. Send a pull request and bug the maintainer until it gets merged and published. :) Make sure to add yourself to AUTHORS_.
.. _`the repository`: http://github.com/linzhonghong/zapi
.. _AUTHORS: https://github.com/linzhonghong/zapi/blob/master/AUTHORS.rst
.. _Contributor Friendly: https://github.com/linzhonghong/zapi/issues?direction=desc&labels=Contributor+Friendly&page=1&sort=updated&state=open | zapi | /zapi-0.0.8.zip/zapi-0.0.8/README.rst | README.rst |
zapian: schemaless python interface to Xapian
===============================================
As a Pythonista, you have every reason to love Xapian...
But xappy is unmaintained and far too dated... Do you like the Elasticsearch API, but hate
Lucene's Java architecture and don't want to introduce a new server process?
Then zapian may be what you need...
Feedback is welcome: http://weibo.com/panjunyong
Features
-------------------------------------
- A friendlier, schemaless API for Xapian
- Partitioned indexes: search one partition on its own, or several partitions together
- Keep historical data in separate index partitions
- Partition the index by where the data is stored
Schemaless API
-------------------------------------
First, initialize the database:
db = Zapian(path='/tmp/test_zapian_db')
Add a partition:
db.add_part('2001-02')
Index a document:
    db.add_document(part='2001-02',
                    uid='1111',
                    index={'+title': u'我们很好.doc',
                           'searchable_text': u'',
                           'modified': datetime.datetime(),
                           'created': datetime.datetime()},
                    data={})
Replace a document:
db.replace_document(part, uid, doc)
Delete a document:
db.delete_document(part, uid)
Search:
    db.search(parts,
              ["and",
               {"filters": ...,
                "exclude": ...},
               ["or",
                {"filters": ...,
                 "exclude": ...},
                {"filters": ...,
                 "exclude": ...}]])
How documents map to the index
-------------------------------------
Internally, Xapian uses data in three ways: term indexes, sortable values, and stored
data. zapian handles each Python data type automatically, as illustrated below:
- set/list/tuple: every contained value is indexed as an exact-match term
- string/unicode: indexed for full-text search (terms)
- datetime/int/float: used for sorting and range comparisons (value slots)
- a string field whose name starts with + is used for sorting in addition to full-text indexing
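For example, a single document can mix all of these types (an illustrative sketch):
    db.add_document(part='2001-02',
                    uid='2222',
                    index={'tags': ['news', 'tech'],         # set/list: exact-match terms
                           'body': u'some searchable text',  # string: full-text terms
                           '+title': u'hello.doc',           # '+' prefix: full text plus sorting
                           'created': datetime.datetime(2001, 2, 1)},  # datetime: sorting/ranges
                    data={})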
Database layout
-------------------------------------
The database is stored as a directory tree:
    schema.yaml    # schema information
    20120112/      # one partition, a standard Xapian database
    20120512/      # another partition, a standard Xapian database
The schema file (schema.yaml above) is maintained automatically by the system and records two things:
1. The mapping between prefixes and fields:
    prefixes: {'title': "NC", 'created': "LL"}
2. The value slot where each attribute is stored:
    slots: {'modified': 1, 'created': 2}
Installation
-------------------------------------
1. First install Xapian: http://xapian.org/download
2. Then install this package from PyPI: https://pypi.python.org/pypi/zapian | zapian | /zapian-0.4.0.tar.gz/zapian-0.4.0/README | README
# Zap Imoveis Scraper
zapimoveis-scraper is a Python package that works as a crawler and scraper using beautifulsoup4 to get data from [zap imóveis](https://zapimoveis.com.br).
### Installation
Use the package manager [pip](https://pip.pypa.io/en/stable/) to install zapimoveis-scraper.
```bash
pip install zapimoveis_scraper
```
### Usage
```python
import zapimoveis_scraper as zap
# returns a list with objects containing scraped data
zap.search(localization="go+goiania++setor-oeste", num_pages=5)
```
#### Available search parameters:
* localization (string): location in which the search will be performed
* default: 'go+goiania++setor-marista'
* The search string is available on the search url of zap imóveis. Eg: https://www.zapimoveis.com.br/aluguel/imoveis/rj+rio-de-janeiro+ilha-do-governador+cacuia/
* num\_pages (int): Number of pages to scrape
* default: 1
* acao (string): type of contract. Possible values: 'venda', 'aluguel', 'lancamentos'
* default: 'aluguel'
* tipo (string): type of property. Possible values: 'imoveis', 'apartamentos', 'casas'
* default: 'casas'
* dictionary\_out (boolean): specifies the method output (a list of objects or a dictionary of lists)
* default: False
* time_to_wait (float): time in seconds to wait before scraping the next page
* default: 0
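For example, a call that combines several of these parameters (values are illustrative):

```python
import zapimoveis_scraper as zap

# scrape 2 pages of apartments for sale, waiting 1.5 s between pages,
# and return the results as a dictionary of lists
data = zap.search(
    localization="go+goiania++setor-oeste",
    num_pages=2,
    acao="venda",
    tipo="apartamentos",
    dictionary_out=True,
    time_to_wait=1.5,
)
print(data["price"])
```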
#### Scraped attributes:
The objects returned from `search` contain the following attributes:
* description: property description
* price: property price (monthly)
* condo\_fee: property condo fee (monthly)
* bedrooms: number of bedrooms on property
* bathrooms: number of bathrooms on property
* total\_area\_m2: property area (square meters)
* vacancies: parking spots available on property
* address: property address
* link: link of the property
| zapimoveis-scraper | /zapimoveis_scraper-0.4.0.tar.gz/zapimoveis_scraper-0.4.0/README.md | README.md |
# Python bindings to the Google search engine
# Copyright (c) 2009-2016, Geovany Rodrigues
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice,this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
from urllib.request import Request, urlopen
from bs4 import BeautifulSoup
import json
import time
from zapimoveis_scraper.enums import ZapAcao, ZapTipo
from zapimoveis_scraper.item import ZapItem
from collections import defaultdict
__all__ = [
# Main search function.
'search',
]
# URL templates to make urls searches.
url_home = "https://www.zapimoveis.com.br/%(acao)s/%(tipo)s/%(localization)s/?pagina=%(page)s"
# Default user agent, unless instructed by the user to change it.
USER_AGENT = 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36'
def get_page(url):
request = Request(url)
request.add_header('User-Agent', USER_AGENT)
response = urlopen(request)
return response
def __get_text(element, content=False):
text = ''
if element is not None:
if content is False:
text = element.getText()
else:
text = element.get("content")
text.replace('\\n', '')
return text.strip()
def convert_dict(data):
'''
    Convert a list of ZapItem objects into a dictionary of lists,
    keyed by attribute name (price, bedrooms, address, ...).
'''
    # start the output dictionary
dicts = defaultdict(list)
    # the attribute names used as dictionary keys
keys = ['price', 'condo_fee', 'bedrooms','bathrooms','vacancies','total_area_m2','address','description', 'link']
    # collect each attribute of every item into the corresponding list
for i in keys:
for j in range(len(data)):
to_dict = data[j].__dict__
dicts[i].append(to_dict['%s' % i])
return dicts
def get_listings(soup):
page_data_string = soup.find(lambda tag:tag.name=="script" and isinstance(tag.string, str) and tag.string.startswith("window"))
json_string = page_data_string.string.replace("window.__INITIAL_STATE__=","").replace(";(function(){var s;(s=document.currentScript||document.scripts[document.scripts.length-1]).parentNode.removeChild(s);}());","")
return json.loads(json_string)['results']['listings']
def get_ZapItem(listing):
item = ZapItem()
item.link = listing['link']['href']
item.price = listing['listing']['pricingInfos'][0].get('price', None) if len(listing['listing']['pricingInfos']) > 0 else 0
item.condo_fee = listing['listing']['pricingInfos'][0].get('monthlyCondoFee', None) if len(listing['listing']['pricingInfos']) > 0 else 0
item.bedrooms = listing['listing']['bedrooms'][0] if len(listing['listing']['bedrooms']) > 0 else 0
item.bathrooms = listing['listing']['bathrooms'][0] if len(listing['listing']['bathrooms']) > 0 else 0
item.vacancies = listing['listing']['parkingSpaces'][0] if len(listing['listing']['parkingSpaces']) > 0 else 0
item.total_area_m2 = listing['listing']['usableAreas'][0] if len(listing['listing']['usableAreas']) > 0 else 0
item.address = (listing['link']['data']['street'] + ", " + listing['link']['data']['neighborhood']).strip(',').strip()
item.description = listing['listing']['title']
return item
def search(localization='go+goiania++setor-marista', num_pages=1, acao=ZapAcao.aluguel.value, tipo=ZapTipo.apartamentos.value, dictionary_out = False, time_to_wait=0):
page = 1
items = []
while page <= num_pages:
html = get_page(url_home % vars())
soup = BeautifulSoup(html, 'html.parser')
listings = get_listings(soup)
for listing in listings:
if 'type' not in listing or listing['type'] != 'nearby':
items.append(get_ZapItem(listing))
page += 1
time.sleep(time_to_wait)
if dictionary_out:
return convert_dict(items)
return items | zapimoveis-scraper | /zapimoveis_scraper-0.4.0.tar.gz/zapimoveis_scraper-0.4.0/zapimoveis_scraper/__init__.py | __init__.py |
| BRANCH | STATUS |
|---------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| main | [](https://github.com/btr1975/zapish-logger/actions/workflows/test-coverage-lint.yml) |
| develop | [](https://github.com/btr1975/zapish-logger/actions/workflows/test-coverage-lint.yml) |
[](https://pepy.tech/project/zapish-logger)
[](https://pypi.org/project/zapish-logger)
[](https://zapish-logger.readthedocs.io/en/latest/?badge=latest)
# zapish-logger
## Description
* A simple logger that logs in a JSON format inspired by go zap
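A minimal usage sketch (the path and logger name are illustrative; `file_logger` and `read_log_file` are defined in `zapish_logger/logger.py`):

```python
from zapish_logger.logger import file_logger, read_log_file

logger = file_logger(path='/tmp/some_log.log', name='example')
logger.info('hello world')

# every line of the log file is one JSON object
for entry in read_log_file('/tmp/some_log.log'):
    print(entry['level'], entry['msg'])
```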
#### Generated by CookieCutter
* Repository: [GitHub](https://github.com/btr1975/cookiecutter-python-library)
* Version: 1.0.3
| zapish-logger | /zapish-logger-1.0.3.tar.gz/zapish-logger-1.0.3/README.md | README.md |
from typing import List, Dict
import json
import logging
import sys
SCHEMA = '{"level": "%(levelname)s", "ts": "%(asctime)s", "caller": "%(name)s", "msg": "%(message)s"}'
def file_logger(path: str, name: str) -> logging.Logger:
"""Function to get a file logger
:type path: String
:param path: The full path to the log file Example: /tmp/some_log.log
:type name: String
:param name: The name of the logger
:rtype: logging.Logger
:returns: The logger
"""
this_logger = logging.getLogger(name)
logging.basicConfig(format=SCHEMA, filename=path)
logging.getLogger().setLevel(logging.INFO)
return this_logger
def console_logger(name: str) -> logging.Logger:
"""Function to get a console logger
:type name: String
:param name: The name of the logger
:rtype: logging.Logger
:returns: The logger
"""
this_logger = logging.getLogger(name)
logging.basicConfig(format=SCHEMA, stream=sys.stdout)
logging.getLogger().setLevel(logging.INFO)
return this_logger
def add_console_logger(root_logger: logging.Logger) -> None:
"""Function to add a console logger
:type root_logger: logging.Logger
:param root_logger: The logger to add console logger to
:rtype: None
:returns: Nothing it adds a console logger
:raises TypeError: If root_logger is not of type logging.Logger
"""
if not isinstance(root_logger, logging.Logger):
raise TypeError(f'root_logger must be of type logging.Logger but received a {type(root_logger)}')
console_handler = logging.StreamHandler(sys.stdout)
formatter = logging.Formatter(fmt=SCHEMA)
console_handler.setFormatter(formatter)
root_logger.addHandler(console_handler)
def process_log_file(data: str) -> List[Dict[str, str]]:
"""Function that converts log entries to dicts and appends to list
:type data: String
:param data: The log data
:rtype: List[Dict[str, str]]
    :returns: The log data as Python objects
"""
final_data = []
data_split = data.splitlines()
for line in data_split:
final_data.append(json.loads(line))
return final_data
def read_log_file(path: str) -> List[Dict[str, str]]:
"""Function that reads log data and converts log entries to dicts and appends to list
:type path: String
:param path: The full path to the log file Example: /tmp/some_log.log
:rtype: List[Dict[str, str]]
    :returns: The log data as Python objects
"""
with open(path, 'r', encoding='utf-8') as file:
log_file = file.read()
return process_log_file(log_file) | zapish-logger | /zapish-logger-1.0.3.tar.gz/zapish-logger-1.0.3/zapish_logger/logger.py | logger.py |
import matlab.engine
class bridge:
"""
A Python to MATLAB bridge for zapit
Zapit runs in MATLAB, and this class is a bridge to bring key API commands into Python.
The user sets up the sample in MATLAB, and then uses this class to run the experiment
    in Python.
    Attributes:
        eng (matlab.engine): reference to the MATLAB session in which Zapit is running
        hZP : reference to the Zapit API (model) MATLAB object
        hZPview : reference to the Zapit GUI controller MATLAB object
"""
_CONNECTION_OPEN = False # Set to true if we managed to connect to MATLAB
def __init__(self):
"""
Connect to the MATLAB engine
"""
names = matlab.engine.find_matlab()
if "zapit" in names:
print("Attempting MATLAB connection...")
self.eng = matlab.engine.connect_matlab("zapit")
self._CONNECTION_OPEN = True
print("Connected!")
else:
print('FAILED TO FIND MATLAB SESSION "zapit"')
return
try:
self.hZP = self.eng.workspace["hZP"]
self.hZPview = self.eng.workspace["hZPview"]
except matlab.engine.MatlabExecutionError:
self.release_matlab()
msg = """
Can not find variables hZP and hZPview in the MATLAB session.
Suggested Solution:
a. Start (or restart) Zapit
b. You should see the message "Opened Python Bridge".
If not set general.openPythonBridgeOnStart to 1 in the settings file.
Then restart Zapit again.
b. Re-instantiate this class.
"""
print(msg)
def __del__(self):
"""Destructor"""
self.release_matlab()
def release_matlab(self):
"""Disconnect from matlab engine"""
try:
self.eng.workspace["hZP"] # Fails if we have already quit
self.eng.quit()
print("Disconnected from MATLAB")
except matlab.engine.MatlabExecutionError:
pass
def send_samples(
self,
conditionNum=-1,
laserOn=True,
hardwareTriggered=True,
logging=True,
verbose=False,
):
"""Send samples to the DAQ"""
# fmt: off
condition_num, laser_on = self.eng.sendSamples(self.hZP,
"conditionNum", conditionNum,
"laserOn", laserOn,
"hardwareTriggered", hardwareTriggered,
"logging", logging,
"verbose", verbose,
nargout=2)
# fmt: on
def stop_opto_stim(self):
"""Stops the optostim
Inputs:
none
"""
self.eng.stopOptoStim(self.hZP, nargout=0)
def is_stimConfig_loaded(self):
"""Return true if zapit has a loaded stim config
Inputs:
none
"""
return self.eng.eval("~isempty(hZP.stimConfig)", nargout=1)
def num_stim_cond(self):
"""Return the number of stimulus conditions"""
if self.is_stimConfig_loaded():
n = self.eng.eval("hZP.stimConfig.numConditions", nargout=1)
else:
n = 0
return n
def get_experiment_path(self):
"""Get the experiment directory
Inputs:
none
"""
exp_dir = self.eng.eval("hZP.experimentPath", nargout=1)
return exp_dir
def set_experiment_path(self, exp_dir):
"""Set the experiment directory
Inputs:
none
"""
self.eng.eval("hZP.experimentPath='%s';" % exp_dir, nargout=0)
def clear_experiment_path(self):
"""Clear the experiment path
Inputs:
none
"""
self.eng.eval("hZP.clearExperimentPath", nargout=0) | zapit-Python-Bridge | /zapit_Python_Bridge-0.1.3-py3-none-any.whl/zapit_python_bridge/bridge.py | bridge.py |
..
Introduction
============
Build ``zipapp`` single file Python applications easily.
Usage
=====
Standalone application
----------------------
.. code::
zapp ~/bin/myapp myapp.cli:main 'myapp==1.2.3' 'mylib==3.2.1'
python3 -m zapp ~/bin/myapp myapp.cli:main 'myapp==1.2.3' 'mylib==3.2.1'
zapp toolmaker.pyz toolmaker.cli:main toolmaker
zapp pipdeptree.pyz pipdeptree:main pipdeptree
zapp ~/bin/httpie httpie.__main__:main httpie
# Without requirements
zapp zipfile.pyz zipfile:main
Library
-------
.. code::
import zapp
zapp.core.build_zapp(
[
'myapp==1.2.3',
'mylib==3.2.1',
],
'myapp.cli:main',
'myapp.pyz',
)
Setuptools command
------------------
.. code::
python3 setup.py bdist_zapp --entry-point myapp.cli:main
Details
=======
Similar applications
--------------------
* Shiv https://shiv.readthedocs.io
* Pex https://pex.readthedocs.io
Hacking
=======
This project makes extensive use of `tox`_, `pytest`_, and `GNU Make`_.
Development environment
-----------------------
Use following command to create a Python virtual environment with all
necessary dependencies::
tox --recreate -e develop
This creates a Python virtual environment in the ``.tox/develop`` directory. It
can be activated with the following command::
. .tox/develop/bin/activate
Run test suite
--------------
In a Python virtual environment run the following command::
make review
Outside of a Python virtual environment run the following command::
tox --recreate
Build and package
-----------------
In a Python virtual environment run the following command::
make package
Outside of a Python virtual environment run the following command::
tox --recreate -e package
.. Links
.. _`GNU Make`: https://www.gnu.org/software/make/
.. _`pytest`: https://pytest.org/
.. _`tox`: https://tox.readthedocs.io/
.. EOF
| zapp | /zapp-0.0.2.tar.gz/zapp-0.0.2/README.rst | README.rst |
import atexit
import base64
import copy
import json
import hashlib
import logging
import re
import subprocess
import os
import shutil
import sys
import time
import tempfile
import binascii
import textwrap
import requests
from urllib.request import urlopen
# Staging
# Amazon doesn't accept these though.
# DEFAULT_CA = "https://acme-staging.api.letsencrypt.org"
# Production
DEFAULT_CA = "https://acme-v01.api.letsencrypt.org"
LOGGER = logging.getLogger(__name__)
LOGGER.addHandler(logging.StreamHandler())
def get_cert_and_update_domain(
zappa_instance,
lambda_name,
api_stage,
domain=None,
manual=False,
):
"""
Main cert installer path.
"""
try:
create_domain_key()
create_domain_csr(domain)
get_cert(zappa_instance)
create_chained_certificate()
with open('{}/signed.crt'.format(gettempdir())) as f:
certificate_body = f.read()
with open('{}/domain.key'.format(gettempdir())) as f:
certificate_private_key = f.read()
with open('{}/intermediate.pem'.format(gettempdir())) as f:
certificate_chain = f.read()
if not manual:
if domain:
if not zappa_instance.get_domain_name(domain):
zappa_instance.create_domain_name(
domain_name=domain,
certificate_name=domain + "-Zappa-LE-Cert",
certificate_body=certificate_body,
certificate_private_key=certificate_private_key,
certificate_chain=certificate_chain,
certificate_arn=None,
lambda_name=lambda_name,
stage=api_stage
)
print("Created a new domain name. Please note that it can take up to 40 minutes for this domain to be created and propagated through AWS, but it requires no further work on your part.")
else:
zappa_instance.update_domain_name(
domain_name=domain,
certificate_name=domain + "-Zappa-LE-Cert",
certificate_body=certificate_body,
certificate_private_key=certificate_private_key,
certificate_chain=certificate_chain,
certificate_arn=None,
lambda_name=lambda_name,
stage=api_stage
)
else:
print("Cerificate body:\n")
print(certificate_body)
print("\nCerificate private key:\n")
print(certificate_private_key)
print("\nCerificate chain:\n")
print(certificate_chain)
except Exception as e:
print(e)
return False
return True
def create_domain_key():
devnull = open(os.devnull, 'wb')
out = subprocess.check_output(['openssl', 'genrsa', '2048'], stderr=devnull)
with open(os.path.join(gettempdir(), 'domain.key'), 'wb') as f:
f.write(out)
def create_domain_csr(domain):
subj = "/CN=" + domain
cmd = [
'openssl', 'req',
'-new',
'-sha256',
'-key', os.path.join(gettempdir(), 'domain.key'),
'-subj', subj
]
devnull = open(os.devnull, 'wb')
out = subprocess.check_output(cmd, stderr=devnull)
with open(os.path.join(gettempdir(), 'domain.csr'), 'wb') as f:
f.write(out)
def create_chained_certificate():
signed_crt = open(os.path.join(gettempdir(), 'signed.crt'), 'rb').read()
cross_cert_url = "https://letsencrypt.org/certs/lets-encrypt-x3-cross-signed.pem"
cert = requests.get(cross_cert_url)
with open(os.path.join(gettempdir(), 'intermediate.pem'), 'wb') as intermediate_pem:
intermediate_pem.write(cert.content)
with open(os.path.join(gettempdir(), 'chained.pem'), 'wb') as chained_pem:
chained_pem.write(signed_crt)
chained_pem.write(cert.content)
def parse_account_key():
"""Parse account key to get public key"""
LOGGER.info("Parsing account key...")
cmd = [
'openssl', 'rsa',
'-in', os.path.join(gettempdir(), 'account.key'),
'-noout',
'-text'
]
devnull = open(os.devnull, 'wb')
return subprocess.check_output(cmd, stderr=devnull)
def parse_csr():
"""
Parse certificate signing request for domains
"""
LOGGER.info("Parsing CSR...")
cmd = [
'openssl', 'req',
'-in', os.path.join(gettempdir(), 'domain.csr'),
'-noout',
'-text'
]
devnull = open(os.devnull, 'wb')
out = subprocess.check_output(cmd, stderr=devnull)
domains = set([])
common_name = re.search(r"Subject:.*? CN\s?=\s?([^\s,;/]+)", out.decode('utf8'))
if common_name is not None:
domains.add(common_name.group(1))
subject_alt_names = re.search(r"X509v3 Subject Alternative Name: \n +([^\n]+)\n", out.decode('utf8'), re.MULTILINE | re.DOTALL)
if subject_alt_names is not None:
for san in subject_alt_names.group(1).split(", "):
if san.startswith("DNS:"):
domains.add(san[4:])
return domains
def get_boulder_header(key_bytes):
"""
Use regular expressions to find crypto values from parsed account key,
and return a header we can send to our Boulder instance.
"""
pub_hex, pub_exp = re.search(
r"modulus:\n\s+00:([a-f0-9\:\s]+?)\npublicExponent: ([0-9]+)",
key_bytes.decode('utf8'), re.MULTILINE | re.DOTALL).groups()
pub_exp = "{0:x}".format(int(pub_exp))
pub_exp = "0{0}".format(pub_exp) if len(pub_exp) % 2 else pub_exp
header = {
"alg": "RS256",
"jwk": {
"e": _b64(binascii.unhexlify(pub_exp.encode("utf-8"))),
"kty": "RSA",
"n": _b64(binascii.unhexlify(re.sub(r"(\s|:)", "", pub_hex).encode("utf-8"))),
},
}
return header
def register_account():
"""
Agree to LE TOS
"""
LOGGER.info("Registering account...")
code, result = _send_signed_request(DEFAULT_CA + "/acme/new-reg", {
"resource": "new-reg",
"agreement": "https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf",
})
if code == 201: # pragma: no cover
LOGGER.info("Registered!")
elif code == 409: # pragma: no cover
LOGGER.info("Already registered!")
else: # pragma: no cover
raise ValueError("Error registering: {0} {1}".format(code, result))
def get_cert(zappa_instance, log=LOGGER, CA=DEFAULT_CA):
"""
Call LE to get a new signed CA.
"""
out = parse_account_key()
header = get_boulder_header(out)
accountkey_json = json.dumps(header['jwk'], sort_keys=True, separators=(',', ':'))
thumbprint = _b64(hashlib.sha256(accountkey_json.encode('utf8')).digest())
# find domains
domains = parse_csr()
# get the certificate domains and expiration
register_account()
# verify each domain
for domain in domains:
log.info("Verifying {0}...".format(domain))
# get new challenge
code, result = _send_signed_request(CA + "/acme/new-authz", {
"resource": "new-authz",
"identifier": {"type": "dns", "value": domain},
})
if code != 201:
raise ValueError("Error requesting challenges: {0} {1}".format(code, result))
challenge = [ch for ch in json.loads(result.decode('utf8'))['challenges'] if ch['type'] == "dns-01"][0]
token = re.sub(r"[^A-Za-z0-9_\-]", "_", challenge['token'])
keyauthorization = "{0}.{1}".format(token, thumbprint).encode('utf-8')
# sha256_b64
digest = _b64(hashlib.sha256(keyauthorization).digest())
zone_id = zappa_instance.get_hosted_zone_id_for_domain(domain)
if not zone_id:
raise ValueError("Could not find Zone ID for: " + domain)
zappa_instance.set_dns_challenge_txt(zone_id, domain, digest) # resp is unused
print("Waiting for DNS to propagate..")
        # What's optimal here?
time.sleep(45)
# notify challenge are met
code, result = _send_signed_request(challenge['uri'], {
"resource": "challenge",
"keyAuthorization": keyauthorization.decode('utf-8'),
})
if code != 202:
raise ValueError("Error triggering challenge: {0} {1}".format(code, result))
# wait for challenge to be verified
verify_challenge(challenge['uri'])
# Challenge verified, clean up R53
zappa_instance.remove_dns_challenge_txt(zone_id, domain, digest)
# Sign
result = sign_certificate()
# Encode to PEM format
encode_certificate(result)
return True
def verify_challenge(uri):
"""
Loop until our challenge is verified, else fail.
"""
while True:
try:
resp = urlopen(uri)
challenge_status = json.loads(resp.read().decode('utf8'))
except IOError as e:
raise ValueError("Error checking challenge: {0} {1}".format(
e.code, json.loads(e.read().decode('utf8'))))
if challenge_status['status'] == "pending":
time.sleep(2)
elif challenge_status['status'] == "valid":
LOGGER.info("Domain verified!")
break
else:
raise ValueError("Domain challenge did not pass: {0}".format(
challenge_status))
def sign_certificate():
"""
Get the new certificate.
Returns the signed bytes.
"""
LOGGER.info("Signing certificate...")
cmd = [
'openssl', 'req',
'-in', os.path.join(gettempdir(), 'domain.csr'),
'-outform', 'DER'
]
devnull = open(os.devnull, 'wb')
csr_der = subprocess.check_output(cmd, stderr=devnull)
code, result = _send_signed_request(DEFAULT_CA + "/acme/new-cert", {
"resource": "new-cert",
"csr": _b64(csr_der),
})
if code != 201:
raise ValueError("Error signing certificate: {0} {1}".format(code, result))
LOGGER.info("Certificate signed!")
return result
def encode_certificate(result):
"""
Encode cert bytes to PEM encoded cert file.
"""
cert_body = """-----BEGIN CERTIFICATE-----\n{0}\n-----END CERTIFICATE-----\n""".format(
"\n".join(textwrap.wrap(base64.b64encode(result).decode('utf8'), 64)))
signed_crt = open("{}/signed.crt".format(gettempdir()), "w")
signed_crt.write(cert_body)
signed_crt.close()
return True
##
# Request Utility
##
def _b64(b):
"""
Helper function base64 encode for jose spec
"""
return base64.urlsafe_b64encode(b).decode('utf8').replace("=", "")
def _send_signed_request(url, payload):
"""
Helper function to make signed requests to Boulder
"""
payload64 = _b64(json.dumps(payload).encode('utf8'))
out = parse_account_key()
header = get_boulder_header(out)
protected = copy.deepcopy(header)
protected["nonce"] = urlopen(DEFAULT_CA + "/directory").headers['Replay-Nonce']
protected64 = _b64(json.dumps(protected).encode('utf8'))
cmd = [
'openssl', 'dgst',
'-sha256',
'-sign', os.path.join(gettempdir(), 'account.key')
]
proc = subprocess.Popen(
cmd,
stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE
)
out, err = proc.communicate("{0}.{1}".format(protected64, payload64).encode('utf8'))
if proc.returncode != 0: # pragma: no cover
raise IOError("OpenSSL Error: {0}".format(err))
data = json.dumps({
"header": header, "protected": protected64,
"payload": payload64, "signature": _b64(out),
})
try:
resp = urlopen(url, data.encode('utf8'))
return resp.getcode(), resp.read()
except IOError as e:
return getattr(e, "code", None), getattr(e, "read", e.__str__)()
##
# Temporary Directory Utility
##
__tempdir = None
def gettempdir():
"""
Lazily creates a temporary directory in a secure manner. When Python exits,
or the cleanup() function is called, the directory is erased.
"""
global __tempdir
if __tempdir is not None:
return __tempdir
__tempdir = tempfile.mkdtemp()
return __tempdir
@atexit.register
def cleanup():
"""
Delete any temporary files.
"""
global __tempdir
if __tempdir is not None:
shutil.rmtree(__tempdir)
__tempdir = None | zappa-bepro | /zappa_bepro-0.51.11-py3-none-any.whl/zappa/letsencrypt.py | letsencrypt.py |
from werkzeug.wsgi import ClosingIterator
def all_casings(input_string):
"""
Permute all casings of a given string.
A pretty algorithm, via @Amber
http://stackoverflow.com/questions/6792803/finding-all-possible-case-permutations-in-python
"""
if not input_string:
yield ""
else:
first = input_string[:1]
if first.lower() == first.upper():
for sub_casing in all_casings(input_string[1:]):
yield first + sub_casing
else:
for sub_casing in all_casings(input_string[1:]):
yield first.lower() + sub_casing
yield first.upper() + sub_casing
class ZappaWSGIMiddleware:
"""
Middleware functions necessary for a Zappa deployment.
    Most hacks have now been removed except for the Set-Cookie permutation.
"""
def __init__(self, application):
self.application = application
def __call__(self, environ, start_response):
"""
We must case-mangle the Set-Cookie header name or AWS will use only a
single one of these headers.
"""
def encode_response(status, headers, exc_info=None):
"""
This makes the 'set-cookie' headers name lowercase,
all the non-cookie headers should be sent unharmed.
Related: https://github.com/Miserlou/Zappa/issues/1965
"""
new_headers = [header for header in headers
if ((type(header[0]) != str) or (header[0].lower() != 'set-cookie'))]
cookie_headers = [(header[0].lower(), header[1]) for header in headers
if ((type(header[0]) == str) and (header[0].lower() == "set-cookie"))]
new_headers = new_headers + cookie_headers
return start_response(status, new_headers, exc_info)
# Call the application with our modifier
response = self.application(environ, encode_response)
# Return the response as a WSGI-safe iterator
return ClosingIterator(response) | zappa-bepro | /zappa_bepro-0.51.11-py3-none-any.whl/zappa/middleware.py | middleware.py |
import base64
import boto3
import collections
import datetime
import importlib
import inspect
import json
import logging
import os
import sys
import traceback
import tarfile
from builtins import str
from werkzeug.wrappers import Response
# This file may be copied into a project's root,
# so handle both scenarios.
try:
from zappa.middleware import ZappaWSGIMiddleware
from zappa.wsgi import create_wsgi_request, common_log
from zappa.utilities import merge_headers, parse_s3_url
except ImportError as e: # pragma: no cover
from .middleware import ZappaWSGIMiddleware
from .wsgi import create_wsgi_request, common_log
from .utilities import merge_headers, parse_s3_url
# Set up logging
logging.basicConfig()
logger = logging.getLogger()
logger.setLevel(logging.INFO)
class LambdaHandler:
"""
Singleton for avoiding duplicate setup.
Pattern provided by @benbangert.
"""
__instance = None
settings = None
settings_name = None
session = None
# Application
app_module = None
wsgi_app = None
trailing_slash = False
def __new__(cls, settings_name="zappa_settings", session=None):
"""Singleton instance to avoid repeat setup"""
if LambdaHandler.__instance is None:
print("Instancing..")
LambdaHandler.__instance = object.__new__(cls)
return LambdaHandler.__instance
def __init__(self, settings_name="zappa_settings", session=None):
# We haven't cached our settings yet, load the settings and app.
if not self.settings:
# Loading settings from a python module
self.settings = importlib.import_module(settings_name)
self.settings_name = settings_name
self.session = session
# Custom log level
if self.settings.LOG_LEVEL:
level = logging.getLevelName(self.settings.LOG_LEVEL)
logger.setLevel(level)
remote_env = getattr(self.settings, 'REMOTE_ENV', None)
remote_bucket, remote_file = parse_s3_url(remote_env)
if remote_bucket and remote_file:
self.load_remote_settings(remote_bucket, remote_file)
# Let the system know that this will be a Lambda/Zappa/Stack
os.environ["SERVERTYPE"] = "AWS Lambda"
os.environ["FRAMEWORK"] = "Zappa"
try:
os.environ["PROJECT"] = self.settings.PROJECT_NAME
os.environ["STAGE"] = self.settings.API_STAGE
except Exception: # pragma: no cover
pass
# Set any locally defined env vars
# Environment variable keys can't be Unicode
# https://github.com/Miserlou/Zappa/issues/604
for key in self.settings.ENVIRONMENT_VARIABLES.keys():
os.environ[str(key)] = self.settings.ENVIRONMENT_VARIABLES[key]
# Pulling from S3 if given a zip path
project_archive_path = getattr(self.settings, 'ARCHIVE_PATH', None)
if project_archive_path:
self.load_remote_project_archive(project_archive_path)
# Load compiled library to the PythonPath
# checks if we are the slim_handler since this is not needed otherwise
# https://github.com/Miserlou/Zappa/issues/776
is_slim_handler = getattr(self.settings, 'SLIM_HANDLER', False)
if is_slim_handler:
included_libraries = getattr(self.settings, 'INCLUDE', ['libmysqlclient.so.18'])
try:
from ctypes import cdll, util
for library in included_libraries:
try:
cdll.LoadLibrary(os.path.join(os.getcwd(), library))
except OSError:
print("Failed to find library: {}...right filename?".format(library))
except ImportError:
print ("Failed to import cytpes library")
# This is a non-WSGI application
# https://github.com/Miserlou/Zappa/pull/748
if not hasattr(self.settings, 'APP_MODULE') and not self.settings.DJANGO_SETTINGS:
self.app_module = None
wsgi_app_function = None
# This is probably a normal WSGI app (Or django with overloaded wsgi application)
# https://github.com/Miserlou/Zappa/issues/1164
elif hasattr(self.settings, 'APP_MODULE'):
if self.settings.DJANGO_SETTINGS:
sys.path.append('/var/task')
from django.conf import ENVIRONMENT_VARIABLE as SETTINGS_ENVIRONMENT_VARIABLE
# add the Lambda root path into the sys.path
self.trailing_slash = True
os.environ[SETTINGS_ENVIRONMENT_VARIABLE] = self.settings.DJANGO_SETTINGS
else:
self.trailing_slash = False
# The app module
self.app_module = importlib.import_module(self.settings.APP_MODULE)
# The application
wsgi_app_function = getattr(self.app_module, self.settings.APP_FUNCTION)
# Django gets special treatment.
else:
try: # Support both for tests
from zappa.ext.django_zappa import get_django_wsgi
except ImportError: # pragma: no cover
from django_zappa_app import get_django_wsgi
# Get the Django WSGI app from our extension
wsgi_app_function = get_django_wsgi(self.settings.DJANGO_SETTINGS)
self.trailing_slash = True
self.wsgi_app = ZappaWSGIMiddleware(wsgi_app_function)
def load_remote_project_archive(self, project_zip_path):
"""
Puts the project files from S3 in /tmp and adds to path
"""
project_folder = '/tmp/{0!s}'.format(self.settings.PROJECT_NAME)
if not os.path.isdir(project_folder):
# The project folder doesn't exist in this cold lambda, get it from S3
if not self.session:
boto_session = boto3.Session()
else:
boto_session = self.session
# Download zip file from S3
remote_bucket, remote_file = parse_s3_url(project_zip_path)
s3 = boto_session.resource('s3')
archive_on_s3 = s3.Object(remote_bucket, remote_file).get()
with tarfile.open(fileobj=archive_on_s3['Body'], mode="r|gz") as t:
t.extractall(project_folder)
# Add to project path
sys.path.insert(0, project_folder)
# Change working directory to project folder
# Related: https://github.com/Miserlou/Zappa/issues/702
os.chdir(project_folder)
return True
def load_remote_settings(self, remote_bucket, remote_file):
"""
Attempt to read a file from s3 containing a flat json object. Adds each
key->value pair as environment variables. Helpful for keeping
        sensitive or stage-specific configuration variables in s3 instead of
version control.
"""
if not self.session:
boto_session = boto3.Session()
else:
boto_session = self.session
s3 = boto_session.resource('s3')
try:
remote_env_object = s3.Object(remote_bucket, remote_file).get()
except Exception as e: # pragma: no cover
# catch everything aws might decide to raise
print('Could not load remote settings file.', e)
return
try:
content = remote_env_object['Body'].read()
except Exception as e: # pragma: no cover
# catch everything aws might decide to raise
print('Exception while reading remote settings file.', e)
return
try:
settings_dict = json.loads(content)
except (ValueError, TypeError): # pragma: no cover
print('Failed to parse remote settings!')
return
# add each key-value to environment - overwrites existing keys!
for key, value in settings_dict.items():
if self.settings.LOG_LEVEL == "DEBUG":
print('Adding {} -> {} to environment'.format(
key,
value
))
# Environment variable keys can't be Unicode
# https://github.com/Miserlou/Zappa/issues/604
try:
os.environ[str(key)] = value
except Exception:
if self.settings.LOG_LEVEL == "DEBUG":
print("Environment variable keys must be non-unicode!")
@staticmethod
def import_module_and_get_function(whole_function):
"""
Given a modular path to a function, import that module
and return the function.
"""
module, function = whole_function.rsplit('.', 1)
app_module = importlib.import_module(module)
app_function = getattr(app_module, function)
return app_function
@classmethod
def lambda_handler(cls, event, context): # pragma: no cover
handler = cls()
exception_handler = handler.settings.EXCEPTION_HANDLER
try:
return handler.handler(event, context)
except Exception as ex:
exception_processed = cls._process_exception(exception_handler=exception_handler,
event=event, context=context, exception=ex)
if not exception_processed:
# Only re-raise exception if handler directed so. Allows handler to control if lambda has to retry
# an event execution in case of failure.
raise
@classmethod
def _process_exception(cls, exception_handler, event, context, exception):
exception_processed = False
if exception_handler:
try:
handler_function = cls.import_module_and_get_function(exception_handler)
exception_processed = handler_function(exception, event, context)
except Exception as cex:
logger.error(msg='Failed to process exception via custom handler.')
print(cex)
return exception_processed
@staticmethod
def run_function(app_function, event, context):
"""
Given a function and event context,
detect signature and execute, returning any result.
"""
# getargspec does not support python 3 method with type hints
# Related issue: https://github.com/Miserlou/Zappa/issues/1452
if hasattr(inspect, "getfullargspec"): # Python 3
args, varargs, keywords, defaults, _, _, _ = inspect.getfullargspec(app_function)
else: # Python 2
args, varargs, keywords, defaults = inspect.getargspec(app_function)
num_args = len(args)
if num_args == 0:
result = app_function(event, context) if varargs else app_function()
elif num_args == 1:
result = app_function(event, context) if varargs else app_function(event)
elif num_args == 2:
result = app_function(event, context)
else:
raise RuntimeError("Function signature is invalid. Expected a function that accepts at most "
"2 arguments or varargs.")
return result
def get_function_for_aws_event(self, record):
"""
Get the associated function to execute for a triggered AWS event
Support S3, SNS, DynamoDB, kinesis and SQS events
"""
if 's3' in record:
if ':' in record['s3']['configurationId']:
return record['s3']['configurationId'].split(':')[-1]
arn = None
if 'Sns' in record:
try:
message = json.loads(record['Sns']['Message'])
if message.get('command'):
return message['command']
except ValueError:
pass
arn = record['Sns'].get('TopicArn')
elif 'dynamodb' in record or 'kinesis' in record:
arn = record.get('eventSourceARN')
elif 'eventSource' in record and record.get('eventSource') == 'aws:sqs':
arn = record.get('eventSourceARN')
elif 's3' in record:
arn = record['s3']['bucket']['arn']
if arn:
return self.settings.AWS_EVENT_MAPPING.get(arn)
return None
def get_function_from_bot_intent_trigger(self, event):
"""
For the given event build ARN and return the configured function
"""
intent = event.get('currentIntent')
if intent:
intent = intent.get('name')
if intent:
return self.settings.AWS_BOT_EVENT_MAPPING.get(
"{}:{}".format(intent, event.get('invocationSource'))
)
def get_function_for_cognito_trigger(self, trigger):
"""
Get the associated function to execute for a cognito trigger
"""
print("get_function_for_cognito_trigger", self.settings.COGNITO_TRIGGER_MAPPING, trigger, self.settings.COGNITO_TRIGGER_MAPPING.get(trigger))
return self.settings.COGNITO_TRIGGER_MAPPING.get(trigger)
def handler(self, event, context):
"""
An AWS Lambda function which parses specific API Gateway input into a
WSGI request, feeds it to our WSGI app, processes the response, and returns
that back to the API Gateway.
"""
settings = self.settings
# If in DEBUG mode, log all raw incoming events.
if settings.DEBUG:
logger.debug('Zappa Event: {}'.format(event))
# Set any API Gateway defined Stage Variables
# as env vars
if event.get('stageVariables'):
for key in event['stageVariables'].keys():
os.environ[str(key)] = event['stageVariables'][key]
# This is the result of a keep alive, recertify
# or scheduled event.
if event.get('detail-type') == 'Scheduled Event':
whole_function = event['resources'][0].split('/')[-1].split('-')[-1]
# This is a scheduled function.
if '.' in whole_function:
app_function = self.import_module_and_get_function(whole_function)
# Execute the function!
return self.run_function(app_function, event, context)
# Else, let this execute as it were.
# This is a direct command invocation.
elif event.get('command', None):
whole_function = event['command']
app_function = self.import_module_and_get_function(whole_function)
result = self.run_function(app_function, event, context)
print("Result of %s:" % whole_function)
print(result)
return result
# This is a direct, raw python invocation.
# It's _extremely_ important we don't allow this event source
# to be overridden by unsanitized, non-admin user input.
elif event.get('raw_command', None):
raw_command = event['raw_command']
exec(raw_command)
return
# This is a Django management command invocation.
elif event.get('manage', None):
from django.core import management
try: # Support both for tests
from zappa.ext.django_zappa import get_django_wsgi
except ImportError as e: # pragma: no cover
from django_zappa_app import get_django_wsgi
# Get the Django WSGI app from our extension
# We don't actually need the function,
# but we do need to do all of the required setup for it.
app_function = get_django_wsgi(self.settings.DJANGO_SETTINGS)
# Couldn't figure out how to get the value into stdout with StringIO..
# Read the log for now. :[]
management.call_command(*event['manage'].split(' '))
return {}
# This is an AWS-event triggered invocation.
elif event.get('Records', None):
records = event.get('Records')
result = None
whole_function = self.get_function_for_aws_event(records[0])
if whole_function:
app_function = self.import_module_and_get_function(whole_function)
result = self.run_function(app_function, event, context)
logger.debug(result)
else:
logger.error("Cannot find a function to process the triggered event.")
return result
# this is an AWS-event triggered from Lex bot's intent
elif event.get('bot'):
result = None
whole_function = self.get_function_from_bot_intent_trigger(event)
if whole_function:
app_function = self.import_module_and_get_function(whole_function)
result = self.run_function(app_function, event, context)
logger.debug(result)
else:
logger.error("Cannot find a function to process the triggered event.")
return result
# This is an API Gateway authorizer event
elif event.get('type') == 'TOKEN':
whole_function = self.settings.AUTHORIZER_FUNCTION
if whole_function:
app_function = self.import_module_and_get_function(whole_function)
policy = self.run_function(app_function, event, context)
return policy
else:
logger.error("Cannot find a function to process the authorization request.")
raise Exception('Unauthorized')
# This is an AWS Cognito Trigger Event
elif event.get('triggerSource', None):
triggerSource = event.get('triggerSource')
whole_function = self.get_function_for_cognito_trigger(triggerSource)
result = event
if whole_function:
app_function = self.import_module_and_get_function(whole_function)
result = self.run_function(app_function, event, context)
logger.debug(result)
else:
logger.error("Cannot find a function to handle cognito trigger {}".format(triggerSource))
return result
# This is a CloudWatch event
# Related: https://github.com/Miserlou/Zappa/issues/1924
elif event.get('awslogs', None):
result = None
whole_function = '{}.{}'.format(settings.APP_MODULE, settings.APP_FUNCTION)
app_function = self.import_module_and_get_function(whole_function)
if app_function:
result = self.run_function(app_function, event, context)
logger.debug("Result of %s:" % whole_function)
logger.debug(result)
else:
logger.error("Cannot find a function to process the triggered event.")
return result
# Normal web app flow
try:
# Timing
time_start = datetime.datetime.now()
# This is a normal HTTP request
if event.get('httpMethod', None):
script_name = ''
is_elb_context = False
headers = merge_headers(event)
if event.get('requestContext', None) and event['requestContext'].get('elb', None):
# Related: https://github.com/Miserlou/Zappa/issues/1715
# inputs/outputs for lambda loadbalancer
# https://docs.aws.amazon.com/elasticloadbalancing/latest/application/lambda-functions.html
is_elb_context = True
# host is lower-case when forwarded from ELB
host = headers.get('host')
# TODO: pathParameters is a first-class citizen in apigateway but not available without
# some parsing work for ELB (is this parameter used for anything?)
event['pathParameters'] = ''
else:
if headers:
host = headers.get('Host')
else:
host = None
logger.debug('host found: [{}]'.format(host))
if host:
if 'amazonaws.com' in host:
logger.debug('amazonaws found in host')
                        # The path provided in the event doesn't include the
# stage, so we must tell Flask to include the API
# stage in the url it calculates. See https://github.com/Miserlou/Zappa/issues/1014
script_name = '/' + settings.API_STAGE
else:
# This is a test request sent from the AWS console
if settings.DOMAIN:
# Assume the requests received will be on the specified
# domain. No special handling is required
pass
else:
# Assume the requests received will be to the
# amazonaws.com endpoint, so tell Flask to include the
# API stage
script_name = '/' + settings.API_STAGE
base_path = getattr(settings, 'BASE_PATH', None)
# Create the environment for WSGI and handle the request
environ = create_wsgi_request(
event,
script_name=script_name,
base_path=base_path,
trailing_slash=self.trailing_slash,
binary_support=settings.BINARY_SUPPORT,
context_header_mappings=settings.CONTEXT_HEADER_MAPPINGS
)
# We are always on https on Lambda, so tell our wsgi app that.
environ['HTTPS'] = 'on'
environ['wsgi.url_scheme'] = 'https'
environ['lambda.context'] = context
environ['lambda.event'] = event
# Execute the application
with Response.from_app(self.wsgi_app, environ) as response:
# This is the object we're going to return.
# Pack the WSGI response into our special dictionary.
zappa_returndict = dict()
# Issue #1715: ALB support. ALB responses must always include
# base64 encoding and status description
if is_elb_context:
zappa_returndict.setdefault('isBase64Encoded', False)
zappa_returndict.setdefault('statusDescription', response.status)
if response.data:
if settings.BINARY_SUPPORT and \
not response.mimetype.startswith("text/") \
and response.mimetype != "application/json":
zappa_returndict['body'] = base64.b64encode(response.data).decode('utf-8')
zappa_returndict["isBase64Encoded"] = True
else:
zappa_returndict['body'] = response.get_data(as_text=True)
zappa_returndict['statusCode'] = response.status_code
if 'headers' in event:
zappa_returndict['headers'] = {}
for key, value in response.headers:
zappa_returndict['headers'][key] = value
if 'multiValueHeaders' in event:
zappa_returndict['multiValueHeaders'] = {}
for key, value in response.headers:
zappa_returndict['multiValueHeaders'][key] = response.headers.getlist(key)
# Calculate the total response time,
# and log it in the Common Log format.
time_end = datetime.datetime.now()
delta = time_end - time_start
response_time_ms = delta.total_seconds() * 1000
response.content = response.data
common_log(environ, response, response_time=response_time_ms)
return zappa_returndict
except Exception as e: # pragma: no cover
# Print statements are visible in the logs either way
print(e)
exc_info = sys.exc_info()
message = ('An uncaught exception happened while servicing this request. '
'You can investigate this with the `zappa tail` command.')
# If we didn't even build an app_module, just raise.
if not settings.DJANGO_SETTINGS:
try:
self.app_module
except NameError as ne:
                    message = 'Failed to import module: {}'.format(ne)
# Call exception handler for unhandled exceptions
exception_handler = self.settings.EXCEPTION_HANDLER
self._process_exception(exception_handler=exception_handler,
event=event, context=context, exception=e)
# Return this unspecified exception as a 500, using template that API Gateway expects.
content = collections.OrderedDict()
content['statusCode'] = 500
body = {'message': message}
if settings.DEBUG: # only include traceback if debug is on.
body['traceback'] = traceback.format_exception(*exc_info) # traceback as a list for readability.
content['body'] = json.dumps(str(body), sort_keys=True, indent=4)
return content
def lambda_handler(event, context): # pragma: no cover
return LambdaHandler.lambda_handler(event, context)
def keep_warm_callback(event, context):
"""Method is triggered by the CloudWatch event scheduled when keep_warm setting is set to true."""
lambda_handler(event={}, context=context) # overriding event with an empty one so that web app initialization will
# be triggered. | zappa-bepro | /zappa_bepro-0.51.11-py3-none-any.whl/zappa/handler.py | handler.py |
import base64
import logging
import six
import sys
from requestlogger import ApacheFormatter
from werkzeug import urls
from urllib.parse import urlencode
from .utilities import merge_headers, titlecase_keys
BINARY_METHODS = [
"POST",
"PUT",
"PATCH",
"DELETE",
"CONNECT",
"OPTIONS"
]
def create_wsgi_request(event_info,
server_name='zappa',
script_name=None,
trailing_slash=True,
binary_support=False,
base_path=None,
context_header_mappings={},
):
"""
Given some event_info via API Gateway,
create and return a valid WSGI request environ.
"""
method = event_info['httpMethod']
headers = merge_headers(event_info) or {} # Allow for the AGW console 'Test' button to work (Pull #735)
"""
API Gateway and ALB both started allowing for multi-value querystring
params in Nov. 2018. If there aren't multi-value params present, then
it acts identically to 'queryStringParameters', so we can use it as a
drop-in replacement.
The one caveat here is that ALB will only include _one_ of
queryStringParameters _or_ multiValueQueryStringParameters, which means
we have to check for the existence of one and then fall back to the
other.
"""
if 'multiValueQueryStringParameters' in event_info:
query = event_info['multiValueQueryStringParameters']
query_string = urlencode(query, doseq=True) if query else ''
else:
query = event_info.get('queryStringParameters', {})
query_string = urlencode(query) if query else ''
if context_header_mappings:
for key, value in context_header_mappings.items():
parts = value.split('.')
header_val = event_info['requestContext']
for part in parts:
if part not in header_val:
header_val = None
break
else:
header_val = header_val[part]
if header_val is not None:
headers[key] = header_val
# Extract remote user from context if Authorizer is enabled
remote_user = None
if event_info['requestContext'].get('authorizer'):
remote_user = event_info['requestContext']['authorizer'].get('principalId')
elif event_info['requestContext'].get('identity'):
remote_user = event_info['requestContext']['identity'].get('userArn')
# Related: https://github.com/Miserlou/Zappa/issues/677
# https://github.com/Miserlou/Zappa/issues/683
# https://github.com/Miserlou/Zappa/issues/696
# https://github.com/Miserlou/Zappa/issues/836
# https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol#Summary_table
if binary_support and (method in BINARY_METHODS):
if event_info.get('isBase64Encoded', False):
encoded_body = event_info['body']
body = base64.b64decode(encoded_body)
else:
body = event_info['body']
if isinstance(body, six.string_types):
body = body.encode("utf-8")
else:
body = event_info['body']
if isinstance(body, six.string_types):
body = body.encode("utf-8")
# Make header names canonical, e.g. content-type => Content-Type
# https://github.com/Miserlou/Zappa/issues/1188
headers = titlecase_keys(headers)
path = urls.url_unquote(event_info['path'])
if base_path:
script_name = '/' + base_path
if path.startswith(script_name):
path = path[len(script_name):]
x_forwarded_for = headers.get('X-Forwarded-For', '')
if ',' in x_forwarded_for:
# The last one is the cloudfront proxy ip. The second to last is the real client ip.
# Everything else is user supplied and untrustworthy.
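# Illustrative example (hypothetical addresses):
#   'X-Forwarded-For: 10.0.0.1, 203.0.113.7, 198.51.100.2'
#   -> remote_addr = '203.0.113.7' (the second-to-last entry)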
remote_addr = x_forwarded_for.split(', ')[-2]
else:
remote_addr = x_forwarded_for or '127.0.0.1'
environ = {
'PATH_INFO': get_wsgi_string(path),
'QUERY_STRING': get_wsgi_string(query_string),
'REMOTE_ADDR': remote_addr,
'REQUEST_METHOD': method,
'SCRIPT_NAME': get_wsgi_string(str(script_name)) if script_name else '',
'SERVER_NAME': str(server_name),
'SERVER_PORT': headers.get('X-Forwarded-Port', '80'),
'SERVER_PROTOCOL': str('HTTP/1.1'),
'wsgi.version': (1, 0),
'wsgi.url_scheme': headers.get('X-Forwarded-Proto', 'http'),
'wsgi.input': body,
'wsgi.errors': sys.stderr,
'wsgi.multiprocess': False,
'wsgi.multithread': False,
'wsgi.run_once': False,
}
# Input processing
if method in ["POST", "PUT", "PATCH", "DELETE"]:
if 'Content-Type' in headers:
environ['CONTENT_TYPE'] = headers['Content-Type']
# This must be Bytes or None
environ['wsgi.input'] = six.BytesIO(body)
if body:
environ['CONTENT_LENGTH'] = str(len(body))
else:
environ['CONTENT_LENGTH'] = '0'
for header in headers:
wsgi_name = "HTTP_" + header.upper().replace('-', '_')
environ[wsgi_name] = str(headers[header])
if script_name:
environ['SCRIPT_NAME'] = script_name
path_info = environ['PATH_INFO']
if script_name in path_info:
environ['PATH_INFO'] = path_info.replace(script_name, '')
if remote_user:
environ['REMOTE_USER'] = remote_user
if event_info['requestContext'].get('authorizer'):
environ['API_GATEWAY_AUTHORIZER'] = event_info['requestContext']['authorizer']
return environ
def common_log(environ, response, response_time=None):
"""
Given the WSGI environ and the response,
log this event in Common Log Format.
"""
logger = logging.getLogger()
if response_time:
formatter = ApacheFormatter(with_response_time=True)
try:
log_entry = formatter(response.status_code, environ,
len(response.content), rt_us=response_time)
except TypeError:
# Upstream introduced a very annoying breaking change on the rt_ms/rt_us kwarg.
log_entry = formatter(response.status_code, environ,
len(response.content), rt_ms=response_time)
else:
formatter = ApacheFormatter(with_response_time=False)
log_entry = formatter(response.status_code, environ,
len(response.content))
logger.info(log_entry)
return log_entry
# Related: https://github.com/Miserlou/Zappa/issues/1199
def get_wsgi_string(string, encoding='utf-8'):
"""
Returns wsgi-compatible string
"""
return string.encode(encoding).decode('iso-8859-1') | zappa-bepro | /zappa_bepro-0.51.11-py3-none-any.whl/zappa/wsgi.py | wsgi.py |
import boto3
import botocore
from functools import update_wrapper, wraps
import importlib
import inspect
import json
import os
import uuid
import time
from .utilities import get_topic_name
try:
from zappa_settings import ASYNC_RESPONSE_TABLE
except ImportError:
ASYNC_RESPONSE_TABLE = None
# Declare these here so they're kept warm.
try:
aws_session = boto3.Session()
LAMBDA_CLIENT = aws_session.client('lambda')
SNS_CLIENT = aws_session.client('sns')
STS_CLIENT = aws_session.client('sts')
DYNAMODB_CLIENT = aws_session.client('dynamodb')
except botocore.exceptions.NoRegionError as e: # pragma: no cover
# This can happen while testing on Travis, but it's taken care of
# during class initialization.
pass
##
# Response and Exception classes
##
LAMBDA_ASYNC_PAYLOAD_LIMIT = 256000
SNS_ASYNC_PAYLOAD_LIMIT = 256000
class AsyncException(Exception): # pragma: no cover
""" Simple exception class for async tasks. """
pass
class LambdaAsyncResponse:
"""
Base Response Dispatcher class
Can be used directly or subclassed if the method to send the message is changed.
"""
def __init__(self, lambda_function_name=None, aws_region=None, capture_response=False, **kwargs):
""" """
if kwargs.get('boto_session'):
self.client = kwargs.get('boto_session').client('lambda')
else: # pragma: no cover
self.client = LAMBDA_CLIENT
self.lambda_function_name = lambda_function_name
self.aws_region = aws_region
if capture_response:
if ASYNC_RESPONSE_TABLE is None:
print(
"Warning! Attempted to capture a response without "
"async_response_table configured in settings (you won't "
"capture async responses)."
)
capture_response = False
self.response_id = "MISCONFIGURED"
else:
self.response_id = str(uuid.uuid4())
else:
self.response_id = None
self.capture_response = capture_response
def send(self, task_path, args, kwargs):
"""
Create the message object and pass it to the actual sender.
"""
message = {
'task_path': task_path,
'capture_response': self.capture_response,
'response_id': self.response_id,
'args': args,
'kwargs': kwargs
}
self._send(message)
return self
def _send(self, message):
"""
Given a message, directly invoke the lambda function for this task.
"""
message['command'] = 'zappa.asynchronous.route_lambda_task'
payload = json.dumps(message).encode('utf-8')
if len(payload) > LAMBDA_ASYNC_PAYLOAD_LIMIT: # pragma: no cover
raise AsyncException("Payload too large for async Lambda call")
self.response = self.client.invoke(
FunctionName=self.lambda_function_name,
InvocationType='Event', #makes the call async
Payload=payload
)
self.sent = (self.response.get('StatusCode', 0) == 202)
class SnsAsyncResponse(LambdaAsyncResponse):
"""
Send an SNS message to a specified SNS topic
Serialise the func path and arguments
"""
def __init__(self, lambda_function_name=None, aws_region=None, capture_response=False, **kwargs):
self.lambda_function_name = lambda_function_name
self.aws_region = aws_region
if kwargs.get('boto_session'):
self.client = kwargs.get('boto_session').client('sns')
else: # pragma: no cover
self.client = SNS_CLIENT
if kwargs.get('arn'):
self.arn = kwargs.get('arn')
else:
if kwargs.get('boto_session'):
sts_client = kwargs.get('boto_session').client('sts')
else:
sts_client = STS_CLIENT
AWS_ACCOUNT_ID = sts_client.get_caller_identity()['Account']
self.arn = 'arn:aws:sns:{region}:{account}:{topic_name}'.format(
region=self.aws_region,
account=AWS_ACCOUNT_ID,
topic_name=get_topic_name(self.lambda_function_name)
)
# Issue: https://github.com/Miserlou/Zappa/issues/1209
# TODO: Refactor
self.capture_response = capture_response
if capture_response:
if ASYNC_RESPONSE_TABLE is None:
print(
"Warning! Attempted to capture a response without "
"async_response_table configured in settings (you won't "
"capture async responses)."
)
capture_response = False
self.response_id = "MISCONFIGURED"
else:
self.response_id = str(uuid.uuid4())
else:
self.response_id = None
self.capture_response = capture_response
def _send(self, message):
"""
Given a message, publish to this topic.
"""
message['command'] = 'zappa.asynchronous.route_sns_task'
payload = json.dumps(message).encode('utf-8')
if len(payload) > SNS_ASYNC_PAYLOAD_LIMIT: # pragma: no cover
raise AsyncException("Payload too large for SNS")
self.response = self.client.publish(
TargetArn=self.arn,
Message=payload
)
self.sent = self.response.get('MessageId')
##
# Async Routers
##
ASYNC_CLASSES = {
'lambda': LambdaAsyncResponse,
'sns': SnsAsyncResponse,
}
def route_lambda_task(event, context):
"""
Deserialises the message from the event passed to zappa.handler.run_function,
imports the function, and calls it with the given args.
"""
message = event
return run_message(message)
def route_sns_task(event, context):
"""
Gets SNS Message, deserialises the message,
imports the function, calls the function with args
"""
record = event['Records'][0]
message = json.loads(
record['Sns']['Message']
)
return run_message(message)
def run_message(message):
"""
Runs a function defined by a message object with keys:
'task_path', 'args', and 'kwargs' used by lambda routing
and a 'command' in handler.py
"""
if message.get('capture_response', False):
DYNAMODB_CLIENT.put_item(
TableName=ASYNC_RESPONSE_TABLE,
Item={
'id': {'S': str(message['response_id'])},
'ttl': {'N': str(int(time.time()+600))},
'async_status': {'S': 'in progress'},
'async_response': {'S': str(json.dumps('N/A'))},
}
)
func = import_and_get_task(message['task_path'])
if hasattr(func, 'sync'):
response = func.sync(
*message['args'],
**message['kwargs']
)
else:
response = func(
*message['args'],
**message['kwargs']
)
if message.get('capture_response', False):
DYNAMODB_CLIENT.update_item(
TableName=ASYNC_RESPONSE_TABLE,
Key={'id': {'S': str(message['response_id'])}},
UpdateExpression="SET async_response = :r, async_status = :s",
ExpressionAttributeValues={
':r': {'S': str(json.dumps(response))},
':s': {'S': 'complete'},
},
)
return response
##
# Execution interfaces and classes
##
def run(func, args=[], kwargs={}, service='lambda', capture_response=False,
remote_aws_lambda_function_name=None, remote_aws_region=None, **task_kwargs):
"""
Instead of decorating a function with @task, you can just run it directly.
If you were going to do func(*args, **kwargs), then you will call this:
import zappa.asynchronous
zappa.asynchronous.run(func, args, kwargs)
If you want to use SNS, then do:
zappa.asynchronous.run(func, args, kwargs, service='sns')
and other arguments are similar to @task
"""
lambda_function_name = remote_aws_lambda_function_name or os.environ.get('AWS_LAMBDA_FUNCTION_NAME')
aws_region = remote_aws_region or os.environ.get('AWS_REGION')
task_path = get_func_task_path(func)
return ASYNC_CLASSES[service](lambda_function_name=lambda_function_name,
aws_region=aws_region,
capture_response=capture_response,
**task_kwargs).send(task_path, args, kwargs)
# Handy:
# http://stackoverflow.com/questions/10294014/python-decorator-best-practice-using-a-class-vs-a-function
# However, this needs to pass inspect.getargspec() in handler.py which does not take classes
# Wrapper written to take optional arguments
# http://chase-seibert.github.io/blog/2013/12/17/python-decorator-optional-parameter.html
def task(*args, **kwargs):
"""Async task decorator so that running
Args:
func (function): the function to be wrapped
Further requirements:
func must be an independent top-level function.
i.e. not a class method or an anonymous function
service (str): either 'lambda' or 'sns'
remote_aws_lambda_function_name (str): the name of a remote lambda function to call with this task
remote_aws_region (str): the name of a remote region to make lambda/sns calls against
Returns:
A replacement function that dispatches func() to
run asynchronously through the service in question
"""
func = None
if len(args) == 1 and callable(args[0]):
func = args[0]
if not kwargs: # Default Values
service = 'lambda'
lambda_function_name_arg = None
aws_region_arg = None
else: # Arguments were passed
service = kwargs.get('service', 'lambda')
lambda_function_name_arg = kwargs.get('remote_aws_lambda_function_name')
aws_region_arg = kwargs.get('remote_aws_region')
capture_response = kwargs.get('capture_response', False)
def func_wrapper(func):
task_path = get_func_task_path(func)
@wraps(func)
def _run_async(*args, **kwargs):
"""
This is the wrapping async function that replaces the function
that is decorated with @task.
Args:
These are just passed through to @task's func
Assuming a valid service is passed to task() and it is run
inside a Lambda process (i.e. AWS_LAMBDA_FUNCTION_NAME exists),
it dispatches the function to be run through the service variable.
Otherwise, it runs the task synchronously.
Returns:
In async mode, the object returned includes state of the dispatch.
For instance, its `sent` attribute reports whether the message was accepted.
When outside of Lambda, the func passed to @task is run and we
return the actual value.
"""
lambda_function_name = lambda_function_name_arg or os.environ.get('AWS_LAMBDA_FUNCTION_NAME')
aws_region = aws_region_arg or os.environ.get('AWS_REGION')
if (service in ASYNC_CLASSES) and (lambda_function_name):
send_result = ASYNC_CLASSES[service](lambda_function_name=lambda_function_name,
aws_region=aws_region,
capture_response=capture_response).send(task_path, args, kwargs)
return send_result
else:
return func(*args, **kwargs)
update_wrapper(_run_async, func)
_run_async.service = service
_run_async.sync = func
return _run_async
return func_wrapper(func) if func else func_wrapper
def task_sns(func):
"""
SNS-based task dispatcher. Functions the same way as task()
"""
return task(func, service='sns')
##
# Utility Functions
##
def import_and_get_task(task_path):
"""
Given a modular path to a function, import that module
and return the function.
"""
module, function = task_path.rsplit('.', 1)
app_module = importlib.import_module(module)
app_function = getattr(app_module, function)
return app_function
def get_func_task_path(func):
"""
Format the modular task path for a function via inspection.
"""
module_path = inspect.getmodule(func).__name__
task_path = '{module_path}.{func_name}'.format(
module_path=module_path,
func_name=func.__name__
)
return task_path
def get_async_response(response_id):
"""
Get the response from the async table
"""
response = DYNAMODB_CLIENT.get_item(
TableName=ASYNC_RESPONSE_TABLE,
Key={'id': {'S': str(response_id)}}
)
if 'Item' not in response:
return None
return {
'status': response['Item']['async_status']['S'],
'response': json.loads(response['Item']['async_response']['S']),
} | zappa-bepro | /zappa_bepro-0.51.11-py3-none-any.whl/zappa/asynchronous.py | asynchronous.py |
import getpass
import glob
import hashlib
import json
import logging
import os
import random
import re
import shutil
import string
import subprocess
import tarfile
import tempfile
import time
import uuid
import zipfile
from builtins import bytes, int
from distutils.dir_util import copy_tree
from io import open
import requests
from setuptools import find_packages
import boto3
import botocore
import troposphere
import troposphere.apigateway
from botocore.exceptions import ClientError
from tqdm import tqdm
from .utilities import (add_event_source, conflicts_with_a_neighbouring_module,
contains_python_files_or_subdirs, copytree,
get_topic_name, get_venv_from_python_version,
human_size, remove_event_source)
##
# Logging Config
##
logging.basicConfig(format='%(levelname)s:%(message)s')
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
##
# Policies And Template Mappings
##
ASSUME_POLICY = """{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": [
"apigateway.amazonaws.com",
"lambda.amazonaws.com",
"events.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
]
}"""
ATTACH_POLICY = """{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:*"
],
"Resource": "arn:aws:logs:*:*:*"
},
{
"Effect": "Allow",
"Action": [
"lambda:InvokeFunction"
],
"Resource": [
"*"
]
},
{
"Effect": "Allow",
"Action": [
"xray:PutTraceSegments",
"xray:PutTelemetryRecords"
],
"Resource": [
"*"
]
},
{
"Effect": "Allow",
"Action": [
"ec2:AttachNetworkInterface",
"ec2:CreateNetworkInterface",
"ec2:DeleteNetworkInterface",
"ec2:DescribeInstances",
"ec2:DescribeNetworkInterfaces",
"ec2:DetachNetworkInterface",
"ec2:ModifyNetworkInterfaceAttribute",
"ec2:ResetNetworkInterfaceAttribute"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": "arn:aws:s3:::*"
},
{
"Effect": "Allow",
"Action": [
"kinesis:*"
],
"Resource": "arn:aws:kinesis:*:*:*"
},
{
"Effect": "Allow",
"Action": [
"sns:*"
],
"Resource": "arn:aws:sns:*:*:*"
},
{
"Effect": "Allow",
"Action": [
"sqs:*"
],
"Resource": "arn:aws:sqs:*:*:*"
},
{
"Effect": "Allow",
"Action": [
"dynamodb:*"
],
"Resource": "arn:aws:dynamodb:*:*:*"
},
{
"Effect": "Allow",
"Action": [
"route53:*"
],
"Resource": "*"
}
]
}"""
# Latest list: https://docs.aws.amazon.com/general/latest/gr/rande.html#apigateway_region
API_GATEWAY_REGIONS = ['us-east-1', 'us-east-2',
'us-west-1', 'us-west-2',
'eu-central-1',
'eu-north-1',
'eu-west-1', 'eu-west-2', 'eu-west-3',
'ap-northeast-1', 'ap-northeast-2', 'ap-northeast-3',
'ap-southeast-1', 'ap-southeast-2',
'ap-east-1',
'ap-south-1',
'ca-central-1',
'cn-north-1',
'cn-northwest-1',
'sa-east-1',
'us-gov-east-1', 'us-gov-west-1']
# Latest list: https://docs.aws.amazon.com/general/latest/gr/rande.html#lambda_region
LAMBDA_REGIONS = ['us-east-1', 'us-east-2',
'us-west-1', 'us-west-2',
'eu-central-1',
'eu-north-1',
'eu-west-1', 'eu-west-2', 'eu-west-3',
'ap-northeast-1', 'ap-northeast-2', 'ap-northeast-3',
'ap-southeast-1', 'ap-southeast-2',
'ap-east-1',
'ap-south-1',
'ca-central-1',
'cn-north-1',
'cn-northwest-1',
'sa-east-1',
'us-gov-east-1',
'us-gov-west-1']
# We never need to include these.
# Related: https://github.com/Miserlou/Zappa/pull/56
# Related: https://github.com/Miserlou/Zappa/pull/581
ZIP_EXCLUDES = [
'*.exe', '*.DS_Store', '*.Python', '*.git', '.git/*', '*.zip', '*.tar.gz',
'*.hg', 'pip', 'docutils*', 'setuputils*', '__pycache__/*'
]
# When using ALB as an event source for Lambdas, we need to create an alias
# to ensure that, on zappa update, the ALB doesn't lose permissions to access
# the Lambda.
# See: https://github.com/Miserlou/Zappa/pull/1730
ALB_LAMBDA_ALIAS = 'current-alb-version'
##
# Classes
##
class Zappa:
"""
Zappa!
Makes it easy to run Python web applications on AWS Lambda/API Gateway.
"""
##
# Configurables
##
http_methods = ['ANY']
role_name = "ZappaLambdaExecution"
extra_permissions = None
assume_policy = ASSUME_POLICY
attach_policy = ATTACH_POLICY
apigateway_policy = None
cloudwatch_log_levels = ['OFF', 'ERROR', 'INFO']
xray_tracing = False
##
# Credentials
##
boto_session = None
credentials_arn = None
def __init__(self,
boto_session=None,
profile_name=None,
aws_region=None,
load_credentials=True,
desired_role_name=None,
desired_role_arn=None,
runtime='python3.6', # Detected at runtime in CLI
tags=(),
endpoint_urls={},
xray_tracing=False
):
"""
Instantiate this new Zappa instance, loading any custom credentials if necessary.
"""
# Set aws_region to None to use the system's region instead
if aws_region is None:
# https://github.com/Miserlou/Zappa/issues/413
self.aws_region = boto3.Session().region_name
logger.debug("Set region from boto: %s", self.aws_region)
else:
self.aws_region = aws_region
if desired_role_name:
self.role_name = desired_role_name
if desired_role_arn:
self.credentials_arn = desired_role_arn
self.runtime = runtime
if self.runtime == 'python3.6':
self.manylinux_suffix_start = 'cp36m'
elif self.runtime == 'python3.7':
self.manylinux_suffix_start = 'cp37m'
else:
# The 'm' has been dropped in python 3.8+ since builds with and without pymalloc are ABI compatible
# See https://github.com/pypa/manylinux for a more detailed explanation
self.manylinux_suffix_start = 'cp38'
# AWS Lambda supports manylinux1/2010 and manylinux2014
manylinux_suffixes = ("2014", "2010", "1")
self.manylinux_wheel_file_match = re.compile(f'^.*{self.manylinux_suffix_start}-manylinux({"|".join(manylinux_suffixes)})_x86_64.whl$')
self.manylinux_wheel_abi3_file_match = re.compile(f'^.*cp3.-abi3-manylinux({"|".join(manylinux_suffixes)})_x86_64.whl$')
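# Illustrative filenames these patterns accept (hypothetical packages):
#   runtime python3.8: 'numpy-1.19.5-cp38-cp38-manylinux2010_x86_64.whl'
#   abi3:              'cryptography-3.4-cp36-abi3-manylinux2014_x86_64.whl'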
self.endpoint_urls = endpoint_urls
self.xray_tracing = xray_tracing
# Some common invocations, such as DB migrations,
# can take longer than the default.
# Note that this is set to 300s, but if connected to
# APIGW, Lambda will max out at 30s.
# Related: https://github.com/Miserlou/Zappa/issues/205
long_config_dict = {
'region_name': aws_region,
'connect_timeout': 5,
'read_timeout': 300
}
long_config = botocore.client.Config(**long_config_dict)
if load_credentials:
self.load_credentials(boto_session, profile_name)
# Initialize clients
self.s3_client = self.boto_client('s3')
self.lambda_client = self.boto_client('lambda', config=long_config)
self.elbv2_client = self.boto_client('elbv2')
self.events_client = self.boto_client('events')
self.apigateway_client = self.boto_client('apigateway')
# AWS ACM certificates need to be created from us-east-1 to be used by API gateway
east_config = botocore.client.Config(region_name='us-east-1')
self.acm_client = self.boto_client('acm', config=east_config)
self.logs_client = self.boto_client('logs')
self.iam_client = self.boto_client('iam')
self.iam = self.boto_resource('iam')
self.cloudwatch = self.boto_client('cloudwatch')
self.route53 = self.boto_client('route53')
self.sns_client = self.boto_client('sns')
self.cf_client = self.boto_client('cloudformation')
self.dynamodb_client = self.boto_client('dynamodb')
self.cognito_client = self.boto_client('cognito-idp')
self.sts_client = self.boto_client('sts')
self.tags = tags
self.cf_template = troposphere.Template()
self.cf_api_resources = []
self.cf_parameters = {}
def configure_boto_session_method_kwargs(self, service, kw):
"""Allow for custom endpoint urls for non-AWS (testing and bootleg cloud) deployments"""
if service in self.endpoint_urls and not 'endpoint_url' in kw:
kw['endpoint_url'] = self.endpoint_urls[service]
return kw
def boto_client(self, service, *args, **kwargs):
"""A wrapper to apply configuration options to boto clients"""
return self.boto_session.client(service, *args, **self.configure_boto_session_method_kwargs(service, kwargs))
def boto_resource(self, service, *args, **kwargs):
"""A wrapper to apply configuration options to boto resources"""
return self.boto_session.resource(service, *args, **self.configure_boto_session_method_kwargs(service, kwargs))
def cache_param(self, value):
'''Returns a troposphere Ref to a value cached as a parameter.'''
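# Illustrative behavior: the first distinct value is cached as Parameter 'A',
# the second as 'B', and repeated values reuse the same Ref.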
if value not in self.cf_parameters:
keyname = chr(ord('A') + len(self.cf_parameters))
param = self.cf_template.add_parameter(troposphere.Parameter(
keyname, Type="String", Default=value, tags=self.tags
))
self.cf_parameters[value] = param
return troposphere.Ref(self.cf_parameters[value])
##
# Packaging
##
def copy_editable_packages(self, egg_links, temp_package_path):
""" """
for egg_link in egg_links:
with open(egg_link, 'rb') as df:
egg_path = df.read().decode('utf-8').splitlines()[0].strip()
pkgs = set([x.split(".")[0] for x in find_packages(egg_path, exclude=['test', 'tests'])])
for pkg in pkgs:
copytree(os.path.join(egg_path, pkg), os.path.join(temp_package_path, pkg), metadata=False, symlinks=False)
if temp_package_path:
# now remove any egg-links as they will cause issues if they still exist
for link in glob.glob(os.path.join(temp_package_path, "*.egg-link")):
os.remove(link)
def get_deps_list(self, pkg_name, installed_distros=None):
"""
For a given package, returns a list of required packages. Recursive.
"""
# https://github.com/Miserlou/Zappa/issues/1478. Using `pkg_resources`
# instead of `pip` is the recommended approach. The usage is nearly
# identical.
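# Illustrative call (hypothetical versions):
#   get_deps_list('requests') -> [('requests', '2.25.1'), ('urllib3', '1.26.4'),
#   ('idna', '2.10'), ...] with duplicates removed.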
import pkg_resources
deps = []
if not installed_distros:
installed_distros = pkg_resources.WorkingSet()
for package in installed_distros:
if package.project_name.lower() == pkg_name.lower():
deps = [(package.project_name, package.version)]
for req in package.requires():
deps += self.get_deps_list(pkg_name=req.project_name, installed_distros=installed_distros)
return list(set(deps)) # de-dupe before returning
def create_handler_venv(self):
"""
Takes the installed zappa and brings it into a fresh virtualenv-like folder. All dependencies are then downloaded.
"""
import subprocess
# We will need the current venv to pull Zappa from
current_venv = self.get_current_venv()
# Make a new folder for the handler packages
ve_path = os.path.join(os.getcwd(), 'handler_venv')
if os.sys.platform == 'win32':
current_site_packages_dir = os.path.join(current_venv, 'Lib', 'site-packages')
venv_site_packages_dir = os.path.join(ve_path, 'Lib', 'site-packages')
else:
current_site_packages_dir = os.path.join(current_venv, 'lib', get_venv_from_python_version(), 'site-packages')
venv_site_packages_dir = os.path.join(ve_path, 'lib', get_venv_from_python_version(), 'site-packages')
if not os.path.isdir(venv_site_packages_dir):
os.makedirs(venv_site_packages_dir)
# Copy zappa* to the new virtualenv
zappa_things = [z for z in os.listdir(current_site_packages_dir) if z.lower()[:5] == 'zappa']
for z in zappa_things:
copytree(os.path.join(current_site_packages_dir, z), os.path.join(venv_site_packages_dir, z))
# Use pip to download zappa's dependencies. Copying from current venv causes issues with things like PyYAML that installs as yaml
zappa_deps = self.get_deps_list('zappa-bepro')
pkg_list = ['{0!s}=={1!s}'.format(dep, version) for dep, version in zappa_deps]
# Need to manually add setuptools
pkg_list.append('setuptools')
command = ["pip", "install", "--quiet", "--target", venv_site_packages_dir] + pkg_list
# This is the recommended method for installing packages if you don't
# want to depend on `setuptools`
# https://github.com/pypa/pip/issues/5240#issuecomment-381662679
pip_process = subprocess.Popen(command, stdout=subprocess.PIPE)
# Using communicate() to avoid deadlocks
pip_process.communicate()
pip_return_code = pip_process.returncode
if pip_return_code:
raise EnvironmentError("Pypi lookup failed")
return ve_path
# staticmethod as per https://github.com/Miserlou/Zappa/issues/780
@staticmethod
def get_current_venv():
"""
Returns the path to the current virtualenv
"""
if 'VIRTUAL_ENV' in os.environ:
venv = os.environ['VIRTUAL_ENV']
elif os.path.exists('.python-version'): # pragma: no cover
try:
subprocess.check_output(['pyenv', 'help'], stderr=subprocess.STDOUT)
except OSError:
print("This directory seems to have pyenv's local venv, "
"but pyenv executable was not found.")
with open('.python-version', 'r') as f:
# minor fix in how .python-version is read
# Related: https://github.com/Miserlou/Zappa/issues/921
env_name = f.readline().strip()
bin_path = subprocess.check_output(['pyenv', 'which', 'python']).decode('utf-8')
venv = bin_path[:bin_path.rfind(env_name)] + env_name
else: # pragma: no cover
return None
return venv
def create_lambda_zip( self,
prefix='lambda_package',
handler_file=None,
slim_handler=False,
minify=True,
exclude=None,
exclude_glob=None,
use_precompiled_packages=True,
include=None,
venv=None,
output=None,
disable_progress=False,
archive_format='zip'
):
"""
Create a Lambda-ready zip file of the current virtualenvironment and working directory.
Returns path to that file.
"""
# Validate archive_format
if archive_format not in ['zip', 'tarball']:
raise KeyError("The archive format to create a lambda package must be zip or tarball")
# Pip is a weird package.
# Calling this function in some environments without this can cause.. funkiness.
import pip
if not venv:
venv = self.get_current_venv()
build_time = str(int(time.time()))
cwd = os.getcwd()
if not output:
if archive_format == 'zip':
archive_fname = prefix + '-' + build_time + '.zip'
elif archive_format == 'tarball':
archive_fname = prefix + '-' + build_time + '.tar.gz'
else:
archive_fname = output
archive_path = os.path.join(cwd, archive_fname)
# Files that should be excluded from the zip
if exclude is None:
exclude = list()
if exclude_glob is None:
exclude_glob = list()
# Exclude the zip itself
exclude.append(archive_path)
# Make sure that 'concurrent' is always forbidden.
# https://github.com/Miserlou/Zappa/issues/827
if not 'concurrent' in exclude:
exclude.append('concurrent')
def splitpath(path):
parts = []
(path, tail) = os.path.split(path)
while path and tail:
parts.append(tail)
(path, tail) = os.path.split(path)
parts.append(os.path.join(path, tail))
return list(map(os.path.normpath, parts))[::-1]
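# Illustrative example: splitpath('/a/b/c') -> ['/', 'a', 'b', 'c']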
split_venv = splitpath(venv)
split_cwd = splitpath(cwd)
# Ideally this should be avoided automatically,
# but this serves as an okay stop-gap measure.
if split_venv[-1] == split_cwd[-1]: # pragma: no cover
print(
"Warning! Your project and virtualenv have the same name! You may want "
"to re-create your venv with a new name, or explicitly define a "
"'project_name', as this may cause errors."
)
# First, do the project..
temp_project_path = tempfile.mkdtemp(prefix='zappa-project')
if not slim_handler:
# Slim handler does not take the project files.
if minify:
# Related: https://github.com/Miserlou/Zappa/issues/744
excludes = ZIP_EXCLUDES + exclude + [split_venv[-1]]
copytree(cwd, temp_project_path, metadata=False, symlinks=False, ignore=shutil.ignore_patterns(*excludes))
else:
copytree(cwd, temp_project_path, metadata=False, symlinks=False)
for glob_path in exclude_glob:
for path in glob.glob(os.path.join(temp_project_path, glob_path)):
try:
os.remove(path)
except OSError: # is a directory
shutil.rmtree(path)
# If a handler_file is supplied, copy that to the root of the package,
# because that's where AWS Lambda looks for it. It can't be inside a package.
if handler_file:
filename = handler_file.split(os.sep)[-1]
shutil.copy(handler_file, os.path.join(temp_project_path, filename))
# Create and populate package ID file and write to temp project path
package_info = {}
package_info['uuid'] = str(uuid.uuid4())
package_info['build_time'] = build_time
package_info['build_platform'] = os.sys.platform
package_info['build_user'] = getpass.getuser()
# TODO: Add git head and info?
# Ex, from @scoates:
# def _get_git_branch():
# chdir(DIR)
# out = check_output(['git', 'rev-parse', '--abbrev-ref', 'HEAD']).strip()
# lambci_branch = environ.get('LAMBCI_BRANCH', None)
# if out == "HEAD" and lambci_branch:
# out += " lambci:{}".format(lambci_branch)
# return out
# def _get_git_hash():
# chdir(DIR)
# return check_output(['git', 'rev-parse', 'HEAD']).strip()
# def _get_uname():
# return check_output(['uname', '-a']).strip()
# def _get_user():
# return check_output(['whoami']).strip()
# def set_id_info(zappa_cli):
# build_info = {
# 'branch': _get_git_branch(),
# 'hash': _get_git_hash(),
# 'build_uname': _get_uname(),
# 'build_user': _get_user(),
# 'build_time': datetime.datetime.utcnow().isoformat(),
# }
# with open(path.join(DIR, 'id_info.json'), 'w') as f:
# json.dump(build_info, f)
# return True
package_id_file = open(os.path.join(temp_project_path, 'package_info.json'), 'w')
dumped = json.dumps(package_info, indent=4)
try:
package_id_file.write(dumped)
except TypeError: # This is a Python 2/3 issue. TODO: Make pretty!
package_id_file.write(str(dumped))
package_id_file.close()
# Then, do the site-packages..
egg_links = []
temp_package_path = tempfile.mkdtemp(prefix='zappa-packages')
if os.sys.platform == 'win32':
site_packages = os.path.join(venv, 'Lib', 'site-packages')
else:
site_packages = os.path.join(venv, 'lib', get_venv_from_python_version(), 'site-packages')
egg_links.extend(glob.glob(os.path.join(site_packages, '*.egg-link')))
if minify:
excludes = ZIP_EXCLUDES + exclude
copytree(site_packages, temp_package_path, metadata=False, symlinks=False, ignore=shutil.ignore_patterns(*excludes))
else:
copytree(site_packages, temp_package_path, metadata=False, symlinks=False)
# We may have 64-bit specific packages too.
site_packages_64 = os.path.join(venv, 'lib64', get_venv_from_python_version(), 'site-packages')
if os.path.exists(site_packages_64):
egg_links.extend(glob.glob(os.path.join(site_packages_64, '*.egg-link')))
if minify:
excludes = ZIP_EXCLUDES + exclude
copytree(site_packages_64, temp_package_path, metadata=False, symlinks=False, ignore=shutil.ignore_patterns(*excludes))
else:
copytree(site_packages_64, temp_package_path, metadata=False, symlinks=False)
if egg_links:
self.copy_editable_packages(egg_links, temp_package_path)
copy_tree(temp_package_path, temp_project_path, update=True)
# Then the pre-compiled packages..
if use_precompiled_packages:
print("Downloading and installing dependencies..")
installed_packages = self.get_installed_packages(site_packages, site_packages_64)
try:
for installed_package_name, installed_package_version in installed_packages.items():
cached_wheel_path = self.get_cached_manylinux_wheel(installed_package_name, installed_package_version, disable_progress)
if cached_wheel_path:
# Otherwise try to use manylinux packages from PyPi..
# Related: https://github.com/Miserlou/Zappa/issues/398
shutil.rmtree(os.path.join(temp_project_path, installed_package_name), ignore_errors=True)
with zipfile.ZipFile(cached_wheel_path) as zfile:
zfile.extractall(temp_project_path)
except Exception as e:
print(e)
# XXX - What should we do here?
# Cleanup
for glob_path in exclude_glob:
for path in glob.glob(os.path.join(temp_project_path, glob_path)):
try:
os.remove(path)
except OSError: # is a directory
shutil.rmtree(path)
# Then archive it all up..
if archive_format == 'zip':
print("Packaging project as zip.")
try:
compression_method = zipfile.ZIP_DEFLATED
except ImportError: # pragma: no cover
compression_method = zipfile.ZIP_STORED
archivef = zipfile.ZipFile(archive_path, 'w', compression_method)
elif archive_format == 'tarball':
print("Packaging project as gzipped tarball.")
archivef = tarfile.open(archive_path, 'w|gz')
for root, dirs, files in os.walk(temp_project_path):
for filename in files:
# Skip .pyc files for Django migrations
# https://github.com/Miserlou/Zappa/issues/436
# https://github.com/Miserlou/Zappa/issues/464
if filename[-4:] == '.pyc' and root[-10:] == 'migrations':
continue
# If there is a .pyc file in this package,
# we can skip the python source code as we'll just
# use the compiled bytecode anyway..
if filename[-3:] == '.py' and root[-10:] != 'migrations':
abs_filename = os.path.join(root, filename)
abs_pyc_filename = abs_filename + 'c'
if os.path.isfile(abs_pyc_filename):
# but only if the pyc is older than the py,
# otherwise we'll deploy outdated code!
py_time = os.stat(abs_filename).st_mtime
pyc_time = os.stat(abs_pyc_filename).st_mtime
if pyc_time > py_time:
continue
# Make sure that the files are all correctly chmodded
# Related: https://github.com/Miserlou/Zappa/issues/484
# Related: https://github.com/Miserlou/Zappa/issues/682
os.chmod(os.path.join(root, filename), 0o755)
if archive_format == 'zip':
# Actually put the file into the proper place in the zip
# Related: https://github.com/Miserlou/Zappa/pull/716
zipi = zipfile.ZipInfo(os.path.join(root.replace(temp_project_path, '').lstrip(os.sep), filename))
zipi.create_system = 3
zipi.external_attr = 0o755 << int(16) # Is this P2/P3 functional?
with open(os.path.join(root, filename), 'rb') as f:
archivef.writestr(zipi, f.read(), compression_method)
elif archive_format == 'tarball':
tarinfo = tarfile.TarInfo(os.path.join(root.replace(temp_project_path, '').lstrip(os.sep), filename))
tarinfo.mode = 0o755
stat = os.stat(os.path.join(root, filename))
tarinfo.mtime = stat.st_mtime
tarinfo.size = stat.st_size
with open(os.path.join(root, filename), 'rb') as f:
archivef.addfile(tarinfo, f)
# Create python init file if it does not exist
# Only do that if there are sub folders or python files and does not conflict with a neighbouring module
# Related: https://github.com/Miserlou/Zappa/issues/766
if not contains_python_files_or_subdirs(root):
# if the directory does not contain any .py file at any level, we can skip the rest
dirs[:] = [d for d in dirs if d != root]
else:
if '__init__.py' not in files and not conflicts_with_a_neighbouring_module(root):
tmp_init = os.path.join(temp_project_path, '__init__.py')
open(tmp_init, 'a').close()
os.chmod(tmp_init, 0o755)
arcname = os.path.join(root.replace(temp_project_path, ''),
os.path.join(root.replace(temp_project_path, ''), '__init__.py'))
if archive_format == 'zip':
archivef.write(tmp_init, arcname)
elif archive_format == 'tarball':
archivef.add(tmp_init, arcname)
# And, we're done!
archivef.close()
# Trash the temp directory
shutil.rmtree(temp_project_path)
shutil.rmtree(temp_package_path)
if os.path.isdir(venv) and slim_handler:
# Remove the temporary handler venv folder
shutil.rmtree(venv)
return archive_fname
@staticmethod
def get_installed_packages(site_packages, site_packages_64):
"""
Returns a dict of installed packages that Zappa cares about.
"""
import pkg_resources
package_to_keep = []
if os.path.isdir(site_packages):
package_to_keep += os.listdir(site_packages)
if os.path.isdir(site_packages_64):
package_to_keep += os.listdir(site_packages_64)
package_to_keep = [x.lower() for x in package_to_keep]
installed_packages = {package.project_name.lower(): package.version for package in
pkg_resources.WorkingSet()
if package.project_name.lower() in package_to_keep
or package.location.lower() in [site_packages.lower(), site_packages_64.lower()]}
return installed_packages
@staticmethod
def download_url_with_progress(url, stream, disable_progress):
"""
Downloads a given url in chunks and writes to the provided stream (can be any io stream).
Displays the progress bar for the download.
"""
resp = requests.get(url, timeout=float(os.environ.get('PIP_TIMEOUT', 2)), stream=True)
resp.raw.decode_content = True
progress = tqdm(unit="B", unit_scale=True, total=int(resp.headers.get('Content-Length', 0)), disable=disable_progress)
for chunk in resp.iter_content(chunk_size=1024):
if chunk:
progress.update(len(chunk))
stream.write(chunk)
progress.close()
def get_cached_manylinux_wheel(self, package_name, package_version, disable_progress=False):
"""
Gets the locally stored version of a manylinux wheel. If one does not exist, the function downloads it.
"""
cached_wheels_dir = os.path.join(tempfile.gettempdir(), 'cached_wheels')
if not os.path.isdir(cached_wheels_dir):
os.makedirs(cached_wheels_dir)
else:
# Check if we already have a cached copy
wheel_file = f'{package_name}-{package_version}-*_x86_64.whl'
wheel_path = os.path.join(cached_wheels_dir, wheel_file)
for pathname in glob.iglob(wheel_path):
if re.match(self.manylinux_wheel_file_match, pathname) or re.match(self.manylinux_wheel_abi3_file_match, pathname):
print(f" - {package_name}=={package_version}: Using locally cached manylinux wheel")
return pathname
# The file is not cached, download it.
wheel_url, filename = self.get_manylinux_wheel_url(package_name, package_version)
if not wheel_url:
return None
wheel_path = os.path.join(cached_wheels_dir, filename)
print(f" - {package_name}=={package_version}: Downloading")
with open(wheel_path, 'wb') as f:
self.download_url_with_progress(wheel_url, f, disable_progress)
if not zipfile.is_zipfile(wheel_path):
return None
return wheel_path
def get_manylinux_wheel_url(self, package_name, package_version):
"""
For a given package name, returns a link to the download URL,
else returns None.
Related: https://github.com/Miserlou/Zappa/issues/398
Examples here: https://gist.github.com/perrygeo/9545f94eaddec18a65fd7b56880adbae
This function downloads metadata JSON of `package_name` from Pypi
and examines if the package has a manylinux wheel. This function
also caches the JSON file so that we don't have to poll Pypi
every time.
"""
cached_pypi_info_dir = os.path.join(tempfile.gettempdir(), 'cached_pypi_info')
if not os.path.isdir(cached_pypi_info_dir):
os.makedirs(cached_pypi_info_dir)
# Even though the metadata is for the package, we save it in a
# filename that includes the package's version. This helps in
# invalidating the cached file if the user moves to a different
# version of the package.
# Related: https://github.com/Miserlou/Zappa/issues/899
json_file = '{0!s}-{1!s}.json'.format(package_name, package_version)
json_file_path = os.path.join(cached_pypi_info_dir, json_file)
if os.path.exists(json_file_path):
with open(json_file_path, 'rb') as metafile:
data = json.load(metafile)
else:
url = 'https://pypi.python.org/pypi/{}/json'.format(package_name)
try:
res = requests.get(url, timeout=float(os.environ.get('PIP_TIMEOUT', 1.5)))
data = res.json()
except Exception as e: # pragma: no cover
return None, None
with open(json_file_path, 'wb') as metafile:
jsondata = json.dumps(data)
metafile.write(bytes(jsondata, "utf-8"))
if package_version not in data['releases']:
return None, None
for f in data['releases'][package_version]:
if re.match(self.manylinux_wheel_file_match, f['filename']):
return f['url'], f['filename']
elif re.match(self.manylinux_wheel_abi3_file_match, f['filename']):
return f['url'], f['filename']
return None, None
##
# S3
##
def upload_to_s3(self, source_path, bucket_name, disable_progress=False):
r"""
Given a file, upload it to S3.
Credentials should be stored in environment variables or ~/.aws/credentials (%USERPROFILE%\.aws\credentials on Windows).
Returns True on success, false on failure.
"""
try:
self.s3_client.head_bucket(Bucket=bucket_name)
except botocore.exceptions.ClientError:
# This is a really stupid S3 quirk. Technically, us-east-1 has no S3,
# it's actually "US Standard", or something.
# More here: https://github.com/boto/boto3/issues/125
if self.aws_region == 'us-east-1':
self.s3_client.create_bucket(
Bucket=bucket_name,
)
else:
self.s3_client.create_bucket(
Bucket=bucket_name,
CreateBucketConfiguration={'LocationConstraint': self.aws_region},
)
if self.tags:
tags = {
'TagSet': [{'Key': key, 'Value': self.tags[key]} for key in self.tags.keys()]
}
self.s3_client.put_bucket_tagging(Bucket=bucket_name, Tagging=tags)
if not os.path.isfile(source_path) or os.stat(source_path).st_size == 0:
print("Problem with source file {}".format(source_path))
return False
dest_path = os.path.split(source_path)[1]
try:
source_size = os.stat(source_path).st_size
print("Uploading {0} ({1})..".format(dest_path, human_size(source_size)))
progress = tqdm(total=float(os.path.getsize(source_path)), unit_scale=True, unit='B', disable=disable_progress)
# Attempt to upload to S3 using the S3 meta client with the progress bar.
# If we're unable to do that, try one more time using a session client,
# which cannot use the progress bar.
# Related: https://github.com/boto/boto3/issues/611
try:
self.s3_client.upload_file(
source_path, bucket_name, dest_path,
Callback=progress.update
)
except Exception as e: # pragma: no cover
self.s3_client.upload_file(source_path, bucket_name, dest_path)
progress.close()
except (KeyboardInterrupt, SystemExit): # pragma: no cover
raise
except Exception as e: # pragma: no cover
print(e)
return False
return True
def copy_on_s3(self, src_file_name, dst_file_name, bucket_name):
"""
Copies src file to destination within a bucket.
"""
try:
self.s3_client.head_bucket(Bucket=bucket_name)
except botocore.exceptions.ClientError as e: # pragma: no cover
# If a client error is thrown, then check that it was a 404 error.
# If it was a 404 error, then the bucket does not exist.
error_code = int(e.response['Error']['Code'])
if error_code == 404:
return False
copy_src = {
"Bucket": bucket_name,
"Key": src_file_name
}
try:
self.s3_client.copy(
CopySource=copy_src,
Bucket=bucket_name,
Key=dst_file_name
)
return True
except botocore.exceptions.ClientError: # pragma: no cover
return False
def remove_from_s3(self, file_name, bucket_name):
"""
Given a file name and a bucket, remove it from S3.
There's no reason to keep the file hosted on S3 once it's been made into a Lambda function, so we can delete it from S3.
Returns True on success, False on failure.
"""
try:
self.s3_client.head_bucket(Bucket=bucket_name)
except botocore.exceptions.ClientError as e: # pragma: no cover
# If a client error is thrown, then check that it was a 404 error.
# If it was a 404 error, then the bucket does not exist.
error_code = int(e.response['Error']['Code'])
if error_code == 404:
return False
try:
self.s3_client.delete_object(Bucket=bucket_name, Key=file_name)
return True
except (botocore.exceptions.ParamValidationError, botocore.exceptions.ClientError): # pragma: no cover
return False
##
# Lambda
##
def create_lambda_function( self,
bucket=None,
function_name=None,
handler=None,
s3_key=None,
description='Zappa Deployment',
timeout=30,
memory_size=512,
publish=True,
vpc_config=None,
dead_letter_config=None,
runtime='python3.6',
aws_environment_variables=None,
aws_kms_key_arn=None,
xray_tracing=False,
local_zip=None,
use_alb=False,
layers=None,
concurrency=None,
):
"""
Given a bucket and key (or a local path) of a valid Lambda-zip, a function name and a handler, register that Lambda function.
"""
if not vpc_config:
vpc_config = {}
if not dead_letter_config:
dead_letter_config = {}
if not self.credentials_arn:
self.get_credentials_arn()
if not aws_environment_variables:
aws_environment_variables = {}
if not aws_kms_key_arn:
aws_kms_key_arn = ''
if not layers:
layers = []
kwargs = dict(
FunctionName=function_name,
Runtime=runtime,
Role=self.credentials_arn,
Handler=handler,
Description=description,
Timeout=timeout,
MemorySize=memory_size,
Publish=publish,
VpcConfig=vpc_config,
DeadLetterConfig=dead_letter_config,
Environment={'Variables': aws_environment_variables},
KMSKeyArn=aws_kms_key_arn,
TracingConfig={
'Mode': 'Active' if self.xray_tracing else 'PassThrough'
},
Layers=layers
)
if local_zip:
kwargs['Code'] = {
'ZipFile': local_zip
}
else:
kwargs['Code'] = {
'S3Bucket': bucket,
'S3Key': s3_key
}
response = self.lambda_client.create_function(**kwargs)
resource_arn = response['FunctionArn']
version = response['Version']
# If we're using an ALB, let's create an alias mapped to the newly
# created function. This allows clean, no downtime association when
# using application load balancers as an event source.
# See: https://github.com/Miserlou/Zappa/pull/1730
# https://github.com/Miserlou/Zappa/issues/1823
if use_alb:
self.lambda_client.create_alias(
FunctionName=resource_arn,
FunctionVersion=version,
Name=ALB_LAMBDA_ALIAS,
)
if self.tags:
self.lambda_client.tag_resource(Resource=resource_arn, Tags=self.tags)
if concurrency is not None:
self.lambda_client.put_function_concurrency(
FunctionName=resource_arn,
ReservedConcurrentExecutions=concurrency,
)
return resource_arn
def update_lambda_function(self, bucket, function_name, s3_key=None, publish=True, local_zip=None, num_revisions=None, concurrency=None):
"""
Given a bucket and key (or a local path) of a valid Lambda-zip, a function name and a handler, update that Lambda function's code.
Optionally, delete previous versions if they exceed the optional limit.
"""
print("Updating Lambda function code..")
kwargs = dict(
FunctionName=function_name,
Publish=publish
)
if local_zip:
kwargs['ZipFile'] = local_zip
else:
kwargs['S3Bucket'] = bucket
kwargs['S3Key'] = s3_key
response = self.lambda_client.update_function_code(**kwargs)
resource_arn = response['FunctionArn']
version = response['Version']
# If the lambda has an ALB alias, let's update the alias
# to point to the newest version of the function. We have to use a GET
# here, as there's no HEAD-esque call to retrieve metadata about a
# function alias.
# Related: https://github.com/Miserlou/Zappa/pull/1730
# https://github.com/Miserlou/Zappa/issues/1823
try:
response = self.lambda_client.get_alias(
FunctionName=function_name,
Name=ALB_LAMBDA_ALIAS,
)
alias_exists = True
except botocore.exceptions.ClientError as e: # pragma: no cover
if "ResourceNotFoundException" not in e.response["Error"]["Code"]:
raise e
alias_exists = False
if alias_exists:
self.lambda_client.update_alias(
FunctionName=function_name,
FunctionVersion=version,
Name=ALB_LAMBDA_ALIAS,
)
if concurrency is not None:
self.lambda_client.put_function_concurrency(
FunctionName=function_name,
ReservedConcurrentExecutions=concurrency,
)
else:
self.lambda_client.delete_function_concurrency(
FunctionName=function_name
)
if num_revisions:
# Find the existing revision IDs for the given function
# Related: https://github.com/Miserlou/Zappa/issues/1402
versions_in_lambda = []
versions = self.lambda_client.list_versions_by_function(FunctionName=function_name)
for version in versions['Versions']:
versions_in_lambda.append(version['Version'])
while 'NextMarker' in versions:
versions = self.lambda_client.list_versions_by_function(FunctionName=function_name,Marker=versions['NextMarker'])
for version in versions['Versions']:
versions_in_lambda.append(version['Version'])
versions_in_lambda.remove('$LATEST')
# Delete older revisions if their number exceeds the specified limit
for version in versions_in_lambda[::-1][num_revisions:]:
self.lambda_client.delete_function(FunctionName=function_name,Qualifier=version)
return resource_arn
def update_lambda_configuration( self,
lambda_arn,
function_name,
handler,
description='Zappa Deployment',
timeout=30,
memory_size=512,
publish=True,
vpc_config=None,
runtime='python3.6',
aws_environment_variables=None,
aws_kms_key_arn=None,
layers=None
):
"""
Given an existing function ARN, update the configuration variables.
"""
print("Updating Lambda function configuration..")
if not vpc_config:
vpc_config = {}
if not self.credentials_arn:
self.get_credentials_arn()
if not aws_kms_key_arn:
aws_kms_key_arn = ''
if not aws_environment_variables:
aws_environment_variables = {}
if not layers:
layers = []
# Check if there are any remote aws lambda env vars so they don't get trashed.
# https://github.com/Miserlou/Zappa/issues/987, Related: https://github.com/Miserlou/Zappa/issues/765
lambda_aws_config = self.lambda_client.get_function_configuration(FunctionName=function_name)
if "Environment" in lambda_aws_config:
lambda_aws_environment_variables = lambda_aws_config["Environment"].get("Variables", {})
# Append keys that are remote but not in settings file
for key, value in lambda_aws_environment_variables.items():
if key not in aws_environment_variables:
aws_environment_variables[key] = value
response = self.lambda_client.update_function_configuration(
FunctionName=function_name,
Runtime=runtime,
Role=self.credentials_arn,
Handler=handler,
Description=description,
Timeout=timeout,
MemorySize=memory_size,
VpcConfig=vpc_config,
Environment={'Variables': aws_environment_variables},
KMSKeyArn=aws_kms_key_arn,
TracingConfig={
'Mode': 'Active' if self.xray_tracing else 'PassThrough'
},
Layers=layers
)
resource_arn = response['FunctionArn']
if self.tags:
self.lambda_client.tag_resource(Resource=resource_arn, Tags=self.tags)
return resource_arn
def invoke_lambda_function( self,
function_name,
payload,
invocation_type='Event',
log_type='Tail',
client_context=None,
qualifier=None
):
"""
Directly invoke a named Lambda function with a payload.
Returns the response.
"""
kwargs = dict(
FunctionName=function_name,
InvocationType=invocation_type,
LogType=log_type,
Payload=payload
)
if client_context:
kwargs['ClientContext'] = client_context
if qualifier:
kwargs['Qualifier'] = qualifier
return self.lambda_client.invoke(**kwargs)
def rollback_lambda_function_version(self, function_name, versions_back=1, publish=True):
"""
Rollback the lambda function code 'versions_back' number of revisions.
Returns the Function ARN.
"""
response = self.lambda_client.list_versions_by_function(FunctionName=function_name)
# Take into account $LATEST
if len(response['Versions']) < versions_back + 1:
print("We do not have {} revisions. Aborting".format(str(versions_back)))
return False
revisions = [int(revision['Version']) for revision in response['Versions'] if revision['Version'] != '$LATEST']
revisions.sort(reverse=True)
response = self.lambda_client.get_function(FunctionName='function:{}:{}'.format(function_name, revisions[versions_back]))
response = requests.get(response['Code']['Location'])
if response.status_code != 200:
print("Failed to get version {} of {} code".format(versions_back, function_name))
return False
response = self.lambda_client.update_function_code(FunctionName=function_name, ZipFile=response.content, Publish=publish) # pragma: no cover
return response['FunctionArn']
def get_lambda_function(self, function_name):
"""
Returns the lambda function ARN, given a name
This requires the "lambda:GetFunction" role.
"""
response = self.lambda_client.get_function(
FunctionName=function_name)
return response['Configuration']['FunctionArn']
def get_lambda_function_versions(self, function_name):
"""
Simply returns the versions available for a Lambda function, given a function name.
"""
try:
response = self.lambda_client.list_versions_by_function(
FunctionName=function_name
)
return response.get('Versions', [])
except Exception:
return []
def delete_lambda_function(self, function_name):
"""
Given a function name, delete it from AWS Lambda.
Returns the response.
"""
print("Deleting Lambda function..")
return self.lambda_client.delete_function(
FunctionName=function_name,
)
##
# Application load balancer
##
def deploy_lambda_alb( self,
lambda_arn,
lambda_name,
alb_vpc_config,
timeout
):
"""
The `zappa deploy` functionality for ALB infrastructure.
"""
if not alb_vpc_config:
raise EnvironmentError('When creating an ALB, alb_vpc_config must be filled out in zappa_settings.')
if 'SubnetIds' not in alb_vpc_config:
raise EnvironmentError('When creating an ALB, you must supply two subnets in different availability zones.')
if 'SecurityGroupIds' not in alb_vpc_config:
alb_vpc_config["SecurityGroupIds"] = []
if not alb_vpc_config.get('CertificateArn'):
raise EnvironmentError('When creating an ALB, you must supply a CertificateArn for the HTTPS listener.')
# Related: https://github.com/Miserlou/Zappa/issues/1856
if 'Scheme' not in alb_vpc_config:
alb_vpc_config["Scheme"] = "internet-facing"
print("Deploying ALB infrastructure...")
# Create load balancer
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elbv2.html#ElasticLoadBalancingv2.Client.create_load_balancer
kwargs = dict(
Name=lambda_name,
Subnets=alb_vpc_config["SubnetIds"],
SecurityGroups=alb_vpc_config["SecurityGroupIds"],
Scheme=alb_vpc_config["Scheme"],
# TODO: Tags might be a useful means of stock-keeping zappa-generated assets.
#Tags=[],
Type="application",
# TODO: can be ipv4 or dualstack (for ipv4 and ipv6) ipv4 is required for internal Scheme.
IpAddressType="ipv4"
)
response = self.elbv2_client.create_load_balancer(**kwargs)
if not(response["LoadBalancers"]) or len(response["LoadBalancers"]) != 1:
raise EnvironmentError("Failure to create application load balancer. Response was in unexpected format. Response was: {}".format(repr(response)))
if response["LoadBalancers"][0]['State']['Code'] == 'failed':
raise EnvironmentError("Failure to create application load balancer. Response reported a failed state: {}".format(response["LoadBalancers"][0]['State']['Reason']))
load_balancer_arn = response["LoadBalancers"][0]["LoadBalancerArn"]
load_balancer_dns = response["LoadBalancers"][0]["DNSName"]
load_balancer_vpc = response["LoadBalancers"][0]["VpcId"]
waiter = self.elbv2_client.get_waiter('load_balancer_available')
print('Waiting for load balancer [{}] to become active..'.format(load_balancer_arn))
waiter.wait(LoadBalancerArns=[load_balancer_arn], WaiterConfig={"Delay": 3})
# Match the lambda timeout on the load balancer.
self.elbv2_client.modify_load_balancer_attributes(
LoadBalancerArn=load_balancer_arn,
Attributes=[{
'Key': 'idle_timeout.timeout_seconds',
'Value': str(timeout)
}]
)
# Create/associate target group.
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elbv2.html#ElasticLoadBalancingv2.Client.create_target_group
kwargs = dict(
Name=lambda_name,
TargetType="lambda",
# TODO: Add options for health checks
)
response = self.elbv2_client.create_target_group(**kwargs)
if not(response["TargetGroups"]) or len(response["TargetGroups"]) != 1:
raise EnvironmentError("Failure to create application load balancer target group. Response was in unexpected format. Response was: {}".format(repr(response)))
target_group_arn = response["TargetGroups"][0]["TargetGroupArn"]
# Enable multi-value headers by default.
response = self.elbv2_client.modify_target_group_attributes(
TargetGroupArn=target_group_arn,
Attributes=[
{
'Key': 'lambda.multi_value_headers.enabled',
'Value': 'true'
},
]
)
# Allow execute permissions from target group to lambda.
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/lambda.html#Lambda.Client.add_permission
kwargs = dict(
Action="lambda:InvokeFunction",
FunctionName="{}:{}".format(lambda_arn, ALB_LAMBDA_ALIAS),
Principal="elasticloadbalancing.amazonaws.com",
SourceArn=target_group_arn,
StatementId=lambda_name
)
response = self.lambda_client.add_permission(**kwargs)
# Register target group to lambda association.
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elbv2.html#ElasticLoadBalancingv2.Client.register_targets
kwargs = dict(
TargetGroupArn=target_group_arn,
Targets=[{"Id": "{}:{}".format(lambda_arn, ALB_LAMBDA_ALIAS)}]
)
response = self.elbv2_client.register_targets(**kwargs)
# Bind listener to load balancer with default rule to target group.
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elbv2.html#ElasticLoadBalancingv2.Client.create_listener
kwargs = dict(
# TODO: Listeners support custom ssl certificates (Certificates). For now we leave this default.
Certificates=[{"CertificateArn": alb_vpc_config['CertificateArn']}],
DefaultActions=[{
"Type": "forward",
"TargetGroupArn": target_group_arn,
}],
LoadBalancerArn=load_balancer_arn,
Protocol="HTTPS",
# TODO: Add option for custom ports
Port=443,
# TODO: Listeners support custom ssl security policy (SslPolicy). For now we leave this default.
)
response = self.elbv2_client.create_listener(**kwargs)
print("ALB created with DNS: {}".format(load_balancer_dns))
print("Note it may take several minutes for load balancer to become available.")
def undeploy_lambda_alb(self, lambda_name):
"""
The `zappa undeploy` functionality for ALB infrastructure.
"""
print("Undeploying ALB infrastructure...")
# Locate and delete alb/lambda permissions
try:
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/lambda.html#Lambda.Client.remove_permission
self.lambda_client.remove_permission(
FunctionName=lambda_name,
StatementId=lambda_name
)
except botocore.exceptions.ClientError as e: # pragma: no cover
if "ResourceNotFoundException" in e.response["Error"]["Code"]:
pass
else:
raise e
# Locate and delete load balancer
try:
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elbv2.html#ElasticLoadBalancingv2.Client.describe_load_balancers
response = self.elbv2_client.describe_load_balancers(
Names=[lambda_name]
)
if not(response["LoadBalancers"]) or len(response["LoadBalancers"]) > 1:
raise EnvironmentError("Failure to locate/delete ALB named [{}]. Response was: {}".format(lambda_name, repr(response)))
load_balancer_arn = response["LoadBalancers"][0]["LoadBalancerArn"]
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elbv2.html#ElasticLoadBalancingv2.Client.describe_listeners
response = self.elbv2_client.describe_listeners(LoadBalancerArn=load_balancer_arn)
if not(response["Listeners"]):
print('No listeners found.')
elif len(response["Listeners"]) > 1:
raise EnvironmentError("Failure to locate/delete listener for ALB named [{}]. Response was: {}".format(lambda_name, repr(response)))
else:
listener_arn = response["Listeners"][0]["ListenerArn"]
# Remove the listener. This explicit deletion of the listener seems necessary to avoid ResourceInUseExceptions when deleting target groups.
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elbv2.html#ElasticLoadBalancingv2.Client.delete_listener
response = self.elbv2_client.delete_listener(ListenerArn=listener_arn)
# Remove the load balancer and wait for completion
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elbv2.html#ElasticLoadBalancingv2.Client.delete_load_balancer
response = self.elbv2_client.delete_load_balancer(LoadBalancerArn=load_balancer_arn)
waiter = self.elbv2_client.get_waiter('load_balancers_deleted')
print('Waiting for load balancer [{}] to be deleted..'.format(lambda_name))
waiter.wait(LoadBalancerArns=[load_balancer_arn], WaiterConfig={"Delay": 3})
except botocore.exceptions.ClientError as e: # pragma: no cover
print(e.response["Error"]["Code"])
if "LoadBalancerNotFound" in e.response["Error"]["Code"]:
pass
else:
raise e
# Locate and delete target group
try:
# Locate the lambda ARN
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/lambda.html#Lambda.Client.get_function
response = self.lambda_client.get_function(FunctionName=lambda_name)
lambda_arn = response["Configuration"]["FunctionArn"]
# Locate the target group ARN
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elbv2.html#ElasticLoadBalancingv2.Client.describe_target_groups
response = self.elbv2_client.describe_target_groups(Names=[lambda_name])
if not(response["TargetGroups"]) or len(response["TargetGroups"]) > 1:
raise EnvironmentError("Failure to locate/delete ALB target group named [{}]. Response was: {}".format(lambda_name, repr(response)))
target_group_arn = response["TargetGroups"][0]["TargetGroupArn"]
# Deregister targets and wait for completion
self.elbv2_client.deregister_targets(
TargetGroupArn=target_group_arn,
Targets=[{"Id": lambda_arn}]
)
waiter = self.elbv2_client.get_waiter('target_deregistered')
print('Waiting for target [{}] to be deregistered...'.format(lambda_name))
waiter.wait(
TargetGroupArn=target_group_arn,
Targets=[{"Id": lambda_arn}],
WaiterConfig={"Delay": 3}
)
# Remove the target group
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elbv2.html#ElasticLoadBalancingv2.Client.delete_target_group
self.elbv2_client.delete_target_group(TargetGroupArn=target_group_arn)
except botocore.exceptions.ClientError as e: # pragma: no cover
print(e.response["Error"]["Code"])
if "TargetGroupNotFound" in e.response["Error"]["Code"]:
pass
else:
raise e
##
# API Gateway
##
def create_api_gateway_routes( self,
lambda_arn,
api_name=None,
api_key_required=False,
authorization_type='NONE',
authorizer=None,
cors_options=None,
description=None,
endpoint_configuration=None
):
"""
Create the API Gateway for this Zappa deployment.
Returns the new RestAPI CF resource.
"""
restapi = troposphere.apigateway.RestApi('Api')
restapi.Name = api_name or lambda_arn.split(':')[-1]
if not description:
description = 'Created automatically by Zappa.'
restapi.Description = description
endpoint_configuration = [] if endpoint_configuration is None else endpoint_configuration
if self.boto_session.region_name == "us-gov-west-1":
endpoint_configuration.append("REGIONAL")
if endpoint_configuration:
endpoint = troposphere.apigateway.EndpointConfiguration()
endpoint.Types = list(set(endpoint_configuration))
restapi.EndpointConfiguration = endpoint
if self.apigateway_policy:
restapi.Policy = json.loads(self.apigateway_policy)
self.cf_template.add_resource(restapi)
root_id = troposphere.GetAtt(restapi, 'RootResourceId')
invocation_prefix = "aws" if self.boto_session.region_name != "us-gov-west-1" else "aws-us-gov"
invocations_uri = 'arn:' + invocation_prefix + ':apigateway:' + self.boto_session.region_name + ':lambda:path/2015-03-31/functions/' + lambda_arn + '/invocations'
##
# The Resources
##
authorizer_resource = None
if authorizer:
authorizer_lambda_arn = authorizer.get('arn', lambda_arn)
lambda_uri = 'arn:{invocation_prefix}:apigateway:{region_name}:lambda:path/2015-03-31/functions/{lambda_arn}/invocations'.format(
invocation_prefix=invocation_prefix,
region_name=self.boto_session.region_name,
lambda_arn=authorizer_lambda_arn
)
authorizer_resource = self.create_authorizer(
restapi, lambda_uri, authorizer
)
self.create_and_setup_methods( restapi,
root_id,
api_key_required,
invocations_uri,
authorization_type,
authorizer_resource,
0
)
if cors_options:
self.create_and_setup_cors( restapi,
root_id,
invocations_uri,
0,
cors_options
)
resource = troposphere.apigateway.Resource('ResourceAnyPathSlashed')
self.cf_api_resources.append(resource.title)
resource.RestApiId = troposphere.Ref(restapi)
resource.ParentId = root_id
resource.PathPart = "{proxy+}"
self.cf_template.add_resource(resource)
self.create_and_setup_methods( restapi,
resource,
api_key_required,
invocations_uri,
authorization_type,
authorizer_resource,
1
) # pragma: no cover
if cors_options:
self.create_and_setup_cors( restapi,
resource,
invocations_uri,
1,
cors_options
) # pragma: no cover
return restapi
def create_authorizer(self, restapi, uri, authorizer):
"""
Create Authorizer for API gateway
"""
authorizer_type = authorizer.get("type", "TOKEN").upper()
identity_validation_expression = authorizer.get('validation_expression', None)
authorizer_resource = troposphere.apigateway.Authorizer("Authorizer")
authorizer_resource.RestApiId = troposphere.Ref(restapi)
authorizer_resource.Name = authorizer.get("name", "ZappaAuthorizer")
authorizer_resource.Type = authorizer_type
authorizer_resource.AuthorizerUri = uri
authorizer_resource.IdentitySource = "method.request.header.%s" % authorizer.get('token_header', 'Authorization')
if identity_validation_expression:
authorizer_resource.IdentityValidationExpression = identity_validation_expression
if authorizer_type == 'TOKEN':
if not self.credentials_arn:
self.get_credentials_arn()
authorizer_resource.AuthorizerResultTtlInSeconds = authorizer.get('result_ttl', 300)
authorizer_resource.AuthorizerCredentials = self.credentials_arn
if authorizer_type == 'COGNITO_USER_POOLS':
authorizer_resource.ProviderARNs = authorizer.get('provider_arns')
self.cf_api_resources.append(authorizer_resource.title)
self.cf_template.add_resource(authorizer_resource)
return authorizer_resource
def create_and_setup_methods(
self,
restapi,
resource,
api_key_required,
uri,
authorization_type,
authorizer_resource,
depth
):
"""
Set up the methods, integration responses and method responses for a given API Gateway resource.
"""
for method_name in self.http_methods:
method = troposphere.apigateway.Method(method_name + str(depth))
method.RestApiId = troposphere.Ref(restapi)
if type(resource) is troposphere.apigateway.Resource:
method.ResourceId = troposphere.Ref(resource)
else:
method.ResourceId = resource
method.HttpMethod = method_name.upper()
method.AuthorizationType = authorization_type
if authorizer_resource:
method.AuthorizerId = troposphere.Ref(authorizer_resource)
method.ApiKeyRequired = api_key_required
method.MethodResponses = []
self.cf_template.add_resource(method)
self.cf_api_resources.append(method.title)
if not self.credentials_arn:
self.get_credentials_arn()
credentials = self.credentials_arn # This must be a Role ARN
integration = troposphere.apigateway.Integration()
integration.CacheKeyParameters = []
integration.CacheNamespace = 'none'
integration.Credentials = credentials
integration.IntegrationHttpMethod = 'POST'
integration.IntegrationResponses = []
integration.PassthroughBehavior = 'NEVER'
integration.Type = 'AWS_PROXY'
integration.Uri = uri
method.Integration = integration
def create_and_setup_cors(self, restapi, resource, uri, depth, config):
"""
Set up the OPTIONS method, integration response and method response to enable CORS on a given API Gateway resource.
"""
if config is True:
config = {}
method_name = "OPTIONS"
method = troposphere.apigateway.Method(method_name + str(depth))
method.RestApiId = troposphere.Ref(restapi)
if type(resource) is troposphere.apigateway.Resource:
method.ResourceId = troposphere.Ref(resource)
else:
method.ResourceId = resource
method.HttpMethod = method_name.upper()
method.AuthorizationType = "NONE"
method_response = troposphere.apigateway.MethodResponse()
method_response.ResponseModels = {
"application/json": "Empty"
}
response_headers = {
"Access-Control-Allow-Headers": "'%s'" % ",".join(config.get(
"allowed_headers", ["Content-Type", "X-Amz-Date",
"Authorization", "X-Api-Key",
"X-Amz-Security-Token"])),
"Access-Control-Allow-Methods": "'%s'" % ",".join(config.get(
"allowed_methods", ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"])),
"Access-Control-Allow-Origin": "'%s'" % config.get(
"allowed_origin", "*")
}
method_response.ResponseParameters = {
"method.response.header.%s" % key: True for key in response_headers
}
method_response.StatusCode = "200"
method.MethodResponses = [
method_response
]
self.cf_template.add_resource(method)
self.cf_api_resources.append(method.title)
integration = troposphere.apigateway.Integration()
integration.Type = 'MOCK'
integration.PassthroughBehavior = 'NEVER'
integration.RequestTemplates = {
"application/json": "{\"statusCode\": 200}"
}
integration_response = troposphere.apigateway.IntegrationResponse()
integration_response.ResponseParameters = {
"method.response.header.%s" % key: value for key, value in response_headers.items()
}
integration_response.ResponseTemplates = {
"application/json": ""
}
integration_response.StatusCode = "200"
integration.IntegrationResponses = [
integration_response
]
integration.Uri = uri
method.Integration = integration
def deploy_api_gateway( self,
api_id,
stage_name,
stage_description="",
description="",
cache_cluster_enabled=False,
cache_cluster_size='0.5',
variables=None,
cloudwatch_log_level='OFF',
cloudwatch_data_trace=False,
cloudwatch_metrics_enabled=False,
cache_cluster_ttl=300,
cache_cluster_encrypted=False
):
"""
Deploy the API Gateway!
Return the deployed API URL.
"""
print("Deploying API Gateway..")
self.apigateway_client.create_deployment(
restApiId=api_id,
stageName=stage_name,
stageDescription=stage_description,
description=description,
cacheClusterEnabled=cache_cluster_enabled,
cacheClusterSize=cache_cluster_size,
variables=variables or {}
)
if cloudwatch_log_level not in self.cloudwatch_log_levels:
cloudwatch_log_level = 'OFF'
self.apigateway_client.update_stage(
restApiId=api_id,
stageName=stage_name,
patchOperations=[
self.get_patch_op('logging/loglevel', cloudwatch_log_level),
self.get_patch_op('logging/dataTrace', cloudwatch_data_trace),
self.get_patch_op('metrics/enabled', cloudwatch_metrics_enabled),
self.get_patch_op('caching/ttlInSeconds', str(cache_cluster_ttl)),
self.get_patch_op('caching/dataEncrypted', cache_cluster_encrypted)
]
)
return "https://{}.execute-api.{}.amazonaws.com/{}".format(api_id, self.boto_session.region_name, stage_name)
def add_binary_support(self, api_id, cors=False):
"""
Add binary support
"""
response = self.apigateway_client.get_rest_api(
restApiId=api_id
)
if "binaryMediaTypes" not in response or "*/*" not in response["binaryMediaTypes"]:
self.apigateway_client.update_rest_api(
restApiId=api_id,
patchOperations=[
{
'op': "add",
'path': '/binaryMediaTypes/*~1*'
}
]
)
if cors:
# fix for issue 699 and 1035, cors+binary support don't work together
# go through each resource and update the contentHandling type
response = self.apigateway_client.get_resources(restApiId=api_id)
resource_ids = [
item['id'] for item in response['items']
if 'OPTIONS' in item.get('resourceMethods', {})
]
for resource_id in resource_ids:
self.apigateway_client.update_integration(
restApiId=api_id,
resourceId=resource_id,
httpMethod='OPTIONS',
patchOperations=[
{
"op": "replace",
"path": "/contentHandling",
"value": "CONVERT_TO_TEXT"
}
]
)
def remove_binary_support(self, api_id, cors=False):
"""
Remove binary support
"""
response = self.apigateway_client.get_rest_api(
restApiId=api_id
)
if "binaryMediaTypes" in response and "*/*" in response["binaryMediaTypes"]:
self.apigateway_client.update_rest_api(
restApiId=api_id,
patchOperations=[
{
'op': 'remove',
'path': '/binaryMediaTypes/*~1*'
}
]
)
if cors:
# go through each resource and change the contentHandling type
response = self.apigateway_client.get_resources(restApiId=api_id)
resource_ids = [
item['id'] for item in response['items']
if 'OPTIONS' in item.get('resourceMethods', {})
]
for resource_id in resource_ids:
self.apigateway_client.update_integration(
restApiId=api_id,
resourceId=resource_id,
httpMethod='OPTIONS',
patchOperations=[
{
"op": "replace",
"path": "/contentHandling",
"value": ""
}
]
)
def add_api_compression(self, api_id, min_compression_size):
"""
Add Rest API compression
"""
self.apigateway_client.update_rest_api(
restApiId=api_id,
patchOperations=[
{
'op': 'replace',
'path': '/minimumCompressionSize',
'value': str(min_compression_size)
}
]
)
def remove_api_compression(self, api_id):
"""
Remove Rest API compression
"""
self.apigateway_client.update_rest_api(
restApiId=api_id,
patchOperations=[
{
'op': 'replace',
'path': '/minimumCompressionSize',
}
]
)
def get_api_keys(self, api_id, stage_name):
"""
Generator that iterates over the API keys associated with an api_id and a stage_name.
"""
response = self.apigateway_client.get_api_keys(limit=500)
stage_key = '{}/{}'.format(api_id, stage_name)
for api_key in response.get('items'):
if stage_key in api_key.get('stageKeys'):
yield api_key.get('id')
def create_api_key(self, api_id, stage_name):
"""
Create new API key and link it with an api_id and a stage_name
"""
response = self.apigateway_client.create_api_key(
name='{}_{}'.format(stage_name, api_id),
description='Api Key for {}'.format(api_id),
enabled=True,
stageKeys=[
{
'restApiId': '{}'.format(api_id),
'stageName': '{}'.format(stage_name)
},
]
)
print('Created a new x-api-key: {}'.format(response['id']))
def remove_api_key(self, api_id, stage_name):
"""
Remove a generated API key for api_id and stage_name
"""
response = self.apigateway_client.get_api_keys(
limit=1,
nameQuery='{}_{}'.format(stage_name, api_id)
)
for api_key in response.get('items'):
self.apigateway_client.delete_api_key(
apiKey="{}".format(api_key['id'])
)
def add_api_stage_to_api_key(self, api_key, api_id, stage_name):
"""
Add api stage to Api key
"""
self.apigateway_client.update_api_key(
apiKey=api_key,
patchOperations=[
{
'op': 'add',
'path': '/stages',
'value': '{}/{}'.format(api_id, stage_name)
}
]
)
def get_patch_op(self, keypath, value, op='replace'):
"""
Return an object that describes a change of configuration on the given staging.
Setting will be applied on all available HTTP methods.
"""
if isinstance(value, bool):
value = str(value).lower()
return {'op': op, 'path': '/*/*/{}'.format(keypath), 'value': value}
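# Worked example: get_patch_op('logging/loglevel', 'OFF') returns
#   {'op': 'replace', 'path': '/*/*/logging/loglevel', 'value': 'OFF'}
# and booleans are lower-cased, so get_patch_op('metrics/enabled', True)
# yields the string value 'true'.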
def get_rest_apis(self, project_name):
"""
Generator that iterates over every available API whose name matches the project name.
"""
all_apis = self.apigateway_client.get_rest_apis(
limit=500
)
for api in all_apis['items']:
if api['name'] != project_name:
continue
yield api
def undeploy_api_gateway(self, lambda_name, domain_name=None, base_path=None):
"""
Delete a deployed REST API Gateway.
"""
print("Deleting API Gateway..")
api_id = self.get_api_id(lambda_name)
if domain_name:
# XXX - Remove Route53 smartly here?
# XXX - This doesn't raise, but doesn't work either.
try:
self.apigateway_client.delete_base_path_mapping(
domainName=domain_name,
basePath='(none)' if base_path is None else base_path
)
except Exception:
# We may not have actually set up the domain.
pass
was_deleted = self.delete_stack(lambda_name, wait=True)
if not was_deleted:
# try erasing it with the older method
for api in self.get_rest_apis(lambda_name):
self.apigateway_client.delete_rest_api(
restApiId=api['id']
)
def update_stage_config( self,
project_name,
stage_name,
cloudwatch_log_level,
cloudwatch_data_trace,
cloudwatch_metrics_enabled
):
"""
Update CloudWatch metrics configuration.
"""
if cloudwatch_log_level not in self.cloudwatch_log_levels:
cloudwatch_log_level = 'OFF'
for api in self.get_rest_apis(project_name):
self.apigateway_client.update_stage(
restApiId=api['id'],
stageName=stage_name,
patchOperations=[
self.get_patch_op('logging/loglevel', cloudwatch_log_level),
self.get_patch_op('logging/dataTrace', cloudwatch_data_trace),
self.get_patch_op('metrics/enabled', cloudwatch_metrics_enabled),
]
)
def update_cognito(self, lambda_name, user_pool, lambda_configs, lambda_arn):
LambdaConfig = {}
for config in lambda_configs:
LambdaConfig[config] = lambda_arn
description = self.cognito_client.describe_user_pool(UserPoolId=user_pool)
description_kwargs = {}
for key, value in description['UserPool'].items():
if key in ('UserPoolId', 'Policies', 'AutoVerifiedAttributes', 'SmsVerificationMessage',
'EmailVerificationMessage', 'EmailVerificationSubject', 'VerificationMessageTemplate',
'SmsAuthenticationMessage', 'MfaConfiguration', 'DeviceConfiguration',
'EmailConfiguration', 'SmsConfiguration', 'UserPoolTags',
'AdminCreateUserConfig'):
description_kwargs[key] = value
elif key == 'LambdaConfig':
for lckey, lcvalue in value.items():
if lckey in LambdaConfig:
value[lckey] = LambdaConfig[lckey]
print("value", value)
description_kwargs[key] = value
if 'LambdaConfig' not in description_kwargs:
description_kwargs['LambdaConfig'] = LambdaConfig
if 'TemporaryPasswordValidityDays' in description_kwargs['Policies']['PasswordPolicy']:
description_kwargs['AdminCreateUserConfig'].pop(
'UnusedAccountValidityDays', None)
if 'UnusedAccountValidityDays' in description_kwargs['AdminCreateUserConfig']:
description_kwargs['Policies']['PasswordPolicy']\
['TemporaryPasswordValidityDays'] = description_kwargs['AdminCreateUserConfig'].pop(
'UnusedAccountValidityDays', None)
result = self.cognito_client.update_user_pool(UserPoolId=user_pool, **description_kwargs)
if result['ResponseMetadata']['HTTPStatusCode'] != 200:
print("Cognito: Failed to update user pool", result)
# Now we need to add a policy to the IAM that allows cognito access
result = self.create_event_permission(lambda_name,
'cognito-idp.amazonaws.com',
'arn:aws:cognito-idp:{}:{}:userpool/{}'.
format(self.aws_region,
self.sts_client.get_caller_identity().get('Account'),
user_pool)
)
if result['ResponseMetadata']['HTTPStatusCode'] != 201:
print("Cognito: Failed to update lambda permission", result)
def delete_stack(self, name, wait=False):
"""
Delete the CF stack managed by Zappa.
"""
try:
stack = self.cf_client.describe_stacks(StackName=name)['Stacks'][0]
except: # pragma: no cover
print('No Zappa stack named {0}'.format(name))
return False
tags = {x['Key']:x['Value'] for x in stack['Tags']}
if tags.get('ZappaProject') == name:
self.cf_client.delete_stack(StackName=name)
if wait:
waiter = self.cf_client.get_waiter('stack_delete_complete')
print('Waiting for stack {0} to be deleted..'.format(name))
waiter.wait(StackName=name)
return True
else:
print('ZappaProject tag not found on {0}, doing nothing'.format(name))
return False
def create_stack_template( self,
lambda_arn,
lambda_name,
api_key_required,
iam_authorization,
authorizer,
cors_options=None,
description=None,
endpoint_configuration=None
):
"""
Build the entire CF stack.
Just used for the API Gateway, but could be expanded in the future.
"""
auth_type = "NONE"
if iam_authorization and authorizer:
logger.warning("Both IAM Authorization and Authorizer are specified, this is not possible. "
"Setting Auth method to IAM Authorization")
authorizer = None
auth_type = "AWS_IAM"
elif iam_authorization:
auth_type = "AWS_IAM"
elif authorizer:
auth_type = authorizer.get("type", "CUSTOM")
# build a fresh template
self.cf_template = troposphere.Template()
self.cf_template.add_description('Automatically generated with Zappa')
self.cf_api_resources = []
self.cf_parameters = {}
restapi = self.create_api_gateway_routes(
lambda_arn,
api_name=lambda_name,
api_key_required=api_key_required,
authorization_type=auth_type,
authorizer=authorizer,
cors_options=cors_options,
description=description,
endpoint_configuration=endpoint_configuration
)
return self.cf_template
def update_stack(self, name, working_bucket, wait=False, update_only=False, disable_progress=False):
"""
Update or create the CF stack managed by Zappa.
"""
capabilities = []
template = name + '-template-' + str(int(time.time())) + '.json'
with open(template, 'wb') as out:
out.write(bytes(self.cf_template.to_json(indent=None, separators=(',',':')), "utf-8"))
self.upload_to_s3(template, working_bucket, disable_progress=disable_progress)
if self.boto_session.region_name == "us-gov-west-1":
url = 'https://s3-us-gov-west-1.amazonaws.com/{0}/{1}'.format(working_bucket, template)
else:
url = 'https://s3.amazonaws.com/{0}/{1}'.format(working_bucket, template)
tags = [{'Key': key, 'Value': self.tags[key]}
for key in self.tags.keys()
if key != 'ZappaProject']
tags.append({'Key':'ZappaProject','Value':name})
update = True
try:
self.cf_client.describe_stacks(StackName=name)
except botocore.client.ClientError:
update = False
if update_only and not update:
print('CloudFormation stack missing, re-deploy to enable updates')
return
if not update:
self.cf_client.create_stack(StackName=name,
Capabilities=capabilities,
TemplateURL=url,
Tags=tags)
print('Waiting for stack {0} to create (this can take a bit)..'.format(name))
else:
try:
self.cf_client.update_stack(StackName=name,
Capabilities=capabilities,
TemplateURL=url,
Tags=tags)
print('Waiting for stack {0} to update..'.format(name))
except botocore.client.ClientError as e:
if e.response['Error']['Message'] == 'No updates are to be performed.':
wait = False
else:
raise
if wait:
total_resources = len(self.cf_template.resources)
current_resources = 0
sr = self.cf_client.get_paginator('list_stack_resources')
progress = tqdm(total=total_resources, unit='res', disable=disable_progress)
while True:
time.sleep(3)
result = self.cf_client.describe_stacks(StackName=name)
if not result['Stacks']:
continue # might need to wait a bit
if result['Stacks'][0]['StackStatus'] in ['CREATE_COMPLETE', 'UPDATE_COMPLETE']:
break
# Something has gone wrong.
# Is raising enough? Should we also remove the Lambda function?
if result['Stacks'][0]['StackStatus'] in [
'DELETE_COMPLETE',
'DELETE_IN_PROGRESS',
'ROLLBACK_IN_PROGRESS',
'UPDATE_ROLLBACK_COMPLETE_CLEANUP_IN_PROGRESS',
'UPDATE_ROLLBACK_COMPLETE'
]:
raise EnvironmentError("Stack creation failed. "
"Please check your CloudFormation console. "
"You may also need to `undeploy`.")
count = 0
for result in sr.paginate(StackName=name):
done = (1 for x in result['StackResourceSummaries']
if 'COMPLETE' in x['ResourceStatus'])
count += sum(done)
if count:
# We can end up in a situation where we have more resources being created
# than anticipated.
if (count - current_resources) > 0:
progress.update(count - current_resources)
current_resources = count
progress.close()
try:
os.remove(template)
except OSError:
pass
self.remove_from_s3(template, working_bucket)
def stack_outputs(self, name):
"""
Given a name, describe the CloudFormation stack and return a dict of the stack Outputs,
else return an empty dict.
"""
try:
stack = self.cf_client.describe_stacks(StackName=name)['Stacks'][0]
return {x['OutputKey']: x['OutputValue'] for x in stack['Outputs']}
except botocore.client.ClientError:
return {}
def get_api_url(self, lambda_name, stage_name):
"""
Given a lambda_name and stage_name, return a valid API URL.
"""
api_id = self.get_api_id(lambda_name)
if api_id:
return "https://{}.execute-api.{}.amazonaws.com/{}".format(api_id, self.boto_session.region_name, stage_name)
else:
return None
def get_api_id(self, lambda_name):
"""
Given a lambda_name, return the API id.
"""
try:
response = self.cf_client.describe_stack_resource(StackName=lambda_name,
LogicalResourceId='Api')
return response['StackResourceDetail'].get('PhysicalResourceId', None)
except: # pragma: no cover
try:
# Try the old method (project was probably made on an older, non CF version)
response = self.apigateway_client.get_rest_apis(limit=500)
for item in response['items']:
if item['name'] == lambda_name:
return item['id']
logger.exception('Could not get API ID.')
return None
except: # pragma: no cover
# We don't even have an API deployed. That's okay!
return None
def create_domain_name(self,
domain_name,
certificate_name,
certificate_body=None,
certificate_private_key=None,
certificate_chain=None,
certificate_arn=None,
lambda_name=None,
stage=None,
base_path=None):
"""
Creates the API GW domain and returns the resulting DNS name.
"""
# This is a Let's Encrypt or custom certificate
if not certificate_arn:
agw_response = self.apigateway_client.create_domain_name(
domainName=domain_name,
certificateName=certificate_name,
certificateBody=certificate_body,
certificatePrivateKey=certificate_private_key,
certificateChain=certificate_chain
)
# This is an AWS ACM-hosted Certificate
else:
agw_response = self.apigateway_client.create_domain_name(
domainName=domain_name,
certificateName=certificate_name,
certificateArn=certificate_arn
)
api_id = self.get_api_id(lambda_name)
if not api_id:
raise LookupError("No API URL to certify found - did you deploy?")
self.apigateway_client.create_base_path_mapping(
domainName=domain_name,
basePath='' if base_path is None else base_path,
restApiId=api_id,
stage=stage
)
return agw_response['distributionDomainName']
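# Usage sketch (hypothetical, ACM path): certify an already-deployed API with an
# ACM certificate ARN and map the root base path to the 'production' stage.
#
#   dns_name = zappa.create_domain_name(
#       domain_name='api.example.com',
#       certificate_name='api.example.com-zappa',
#       certificate_arn='arn:aws:acm:us-east-1:123456789012:certificate/abcd-ef',
#       lambda_name='my-function',
#       stage='production',
#   )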
def update_route53_records(self, domain_name, dns_name):
"""
Updates Route53 Records following GW domain creation
"""
zone_id = self.get_hosted_zone_id_for_domain(domain_name)
is_apex = self.route53.get_hosted_zone(Id=zone_id)['HostedZone']['Name'][:-1] == domain_name
if is_apex:
record_set = {
'Name': domain_name,
'Type': 'A',
'AliasTarget': {
'HostedZoneId': 'Z2FDTNDATAQYW2', # This is a magic value that means "CloudFront"
'DNSName': dns_name,
'EvaluateTargetHealth': False
}
}
else:
record_set = {
'Name': domain_name,
'Type': 'CNAME',
'ResourceRecords': [
{
'Value': dns_name
}
],
'TTL': 60
}
# Related: https://github.com/boto/boto3/issues/157
# and: http://docs.aws.amazon.com/Route53/latest/APIReference/CreateAliasRRSAPI.html
# and policy: https://spin.atomicobject.com/2016/04/28/route-53-hosted-zone-managment/
# pure_zone_id = zone_id.split('/hostedzone/')[1]
# XXX: ClientError: An error occurred (InvalidChangeBatch) when calling the ChangeResourceRecordSets operation:
# Tried to create an alias that targets d1awfeji80d0k2.cloudfront.net., type A in zone Z1XWOQP59BYF6Z,
# but the alias target name does not lie within the target zone
response = self.route53.change_resource_record_sets(
HostedZoneId=zone_id,
ChangeBatch={
'Changes': [
{
'Action': 'UPSERT',
'ResourceRecordSet': record_set
}
]
}
)
return response
def update_domain_name(self,
domain_name,
certificate_name=None,
certificate_body=None,
certificate_private_key=None,
certificate_chain=None,
certificate_arn=None,
lambda_name=None,
stage=None,
route53=True,
base_path=None):
"""
This updates your certificate information for an existing domain,
with similar arguments to boto's update_domain_name API Gateway api.
It returns the resulting new domain information including the new certificate's ARN
if created during this process.
Previously, this method involved downtime that could take up to 40 minutes
because the API Gateway API only allowed this by deleting and then re-creating the domain.
Related issues: https://github.com/Miserlou/Zappa/issues/590
https://github.com/Miserlou/Zappa/issues/588
https://github.com/Miserlou/Zappa/pull/458
https://github.com/Miserlou/Zappa/issues/882
https://github.com/Miserlou/Zappa/pull/883
"""
print("Updating domain name!")
certificate_name = certificate_name + str(time.time())
api_gateway_domain = self.apigateway_client.get_domain_name(domainName=domain_name)
if not certificate_arn\
and certificate_body and certificate_private_key and certificate_chain:
acm_certificate = self.acm_client.import_certificate(Certificate=certificate_body,
PrivateKey=certificate_private_key,
CertificateChain=certificate_chain)
certificate_arn = acm_certificate['CertificateArn']
self.update_domain_base_path_mapping(domain_name, lambda_name, stage, base_path)
return self.apigateway_client.update_domain_name(domainName=domain_name,
patchOperations=[
{"op" : "replace",
"path" : "/certificateName",
"value" : certificate_name},
{"op" : "replace",
"path" : "/certificateArn",
"value" : certificate_arn}
])
def update_domain_base_path_mapping(self, domain_name, lambda_name, stage, base_path):
"""
Update domain base path mapping on API Gateway if it was changed
"""
api_id = self.get_api_id(lambda_name)
if not api_id:
print("Warning! Can't update base path mapping!")
return
base_path_mappings = self.apigateway_client.get_base_path_mappings(domainName=domain_name)
found = False
for base_path_mapping in base_path_mappings.get('items', []):
if base_path_mapping['restApiId'] == api_id and base_path_mapping['stage'] == stage:
found = True
if base_path_mapping['basePath'] != base_path:
self.apigateway_client.update_base_path_mapping(domainName=domain_name,
basePath=base_path_mapping['basePath'],
patchOperations=[
{"op" : "replace",
"path" : "/basePath",
"value" : '' if base_path is None else base_path}
])
if not found:
self.apigateway_client.create_base_path_mapping(
domainName=domain_name,
basePath='' if base_path is None else base_path,
restApiId=api_id,
stage=stage
)
def get_all_zones(self):
"""Same behaviour of list_host_zones, but transparently handling pagination."""
zones = {'HostedZones': []}
new_zones = self.route53.list_hosted_zones(MaxItems='100')
while new_zones['IsTruncated']:
zones['HostedZones'] += new_zones['HostedZones']
new_zones = self.route53.list_hosted_zones(Marker=new_zones['NextMarker'], MaxItems='100')
zones['HostedZones'] += new_zones['HostedZones']
return zones
def get_domain_name(self, domain_name, route53=True):
"""
Scan our hosted zones for the record of a given name.
Returns the record entry, else None.
"""
# Make sure api gateway domain is present
try:
self.apigateway_client.get_domain_name(domainName=domain_name)
except Exception:
return None
if not route53:
return True
try:
zones = self.get_all_zones()
for zone in zones['HostedZones']:
records = self.route53.list_resource_record_sets(HostedZoneId=zone['Id'])
for record in records['ResourceRecordSets']:
if record['Type'] in ('CNAME', 'A') and record['Name'][:-1] == domain_name:
return record
except Exception:
return None
##
# Old, automatic logic.
# If re-introduced, should be moved to a new function.
# Related ticket: https://github.com/Miserlou/Zappa/pull/458
##
# We may be in a position where Route53 doesn't have a domain, but the API Gateway does.
# We need to delete this before we can create the new Route53.
# try:
# api_gateway_domain = self.apigateway_client.get_domain_name(domainName=domain_name)
# self.apigateway_client.delete_domain_name(domainName=domain_name)
# except Exception:
# pass
return None
##
# IAM
##
def get_credentials_arn(self):
"""
Given our role name, get and set the credentials_arn.
"""
role = self.iam.Role(self.role_name)
self.credentials_arn = role.arn
return role, self.credentials_arn
def create_iam_roles(self):
"""
Create and define the IAM roles and policies necessary for Zappa.
If the IAM role already exists, it will be updated if necessary.
"""
attach_policy_obj = json.loads(self.attach_policy)
assume_policy_obj = json.loads(self.assume_policy)
if self.extra_permissions:
for permission in self.extra_permissions:
attach_policy_obj['Statement'].append(dict(permission))
self.attach_policy = json.dumps(attach_policy_obj)
updated = False
# Create the role if needed
try:
role, credentials_arn = self.get_credentials_arn()
except botocore.client.ClientError:
print("Creating " + self.role_name + " IAM Role..")
role = self.iam.create_role(
RoleName=self.role_name,
AssumeRolePolicyDocument=self.assume_policy
)
self.credentials_arn = role.arn
updated = True
# create or update the role's policies if needed
policy = self.iam.RolePolicy(self.role_name, 'zappa-permissions')
try:
if policy.policy_document != attach_policy_obj:
print("Updating zappa-permissions policy on " + self.role_name + " IAM Role.")
policy.put(PolicyDocument=self.attach_policy)
updated = True
except botocore.client.ClientError:
print("Creating zappa-permissions policy on " + self.role_name + " IAM Role.")
policy.put(PolicyDocument=self.attach_policy)
updated = True
if role.assume_role_policy_document != assume_policy_obj and \
set(role.assume_role_policy_document['Statement'][0]['Principal']['Service']) != set(assume_policy_obj['Statement'][0]['Principal']['Service']):
print("Updating assume role policy on " + self.role_name + " IAM Role.")
self.iam_client.update_assume_role_policy(
RoleName=self.role_name,
PolicyDocument=self.assume_policy
)
updated = True
return self.credentials_arn, updated
def _clear_policy(self, lambda_name):
"""
Remove obsolete policy statements to prevent policy from bloating over the limit after repeated updates.
"""
try:
policy_response = self.lambda_client.get_policy(
FunctionName=lambda_name
)
if policy_response['ResponseMetadata']['HTTPStatusCode'] == 200:
statement = json.loads(policy_response['Policy'])['Statement']
for s in statement:
delete_response = self.lambda_client.remove_permission(
FunctionName=lambda_name,
StatementId=s['Sid']
)
if delete_response['ResponseMetadata']['HTTPStatusCode'] != 204:
logger.error('Failed to delete an obsolete policy statement: {}'.format(policy_response))
else:
logger.debug('Failed to load Lambda function policy: {}'.format(policy_response))
except ClientError as e:
if e.args[0].find('ResourceNotFoundException') > -1:
logger.debug('No policy found, must be first run.')
else:
logger.error('Unexpected client error {}'.format(e.args[0]))
##
# CloudWatch Events
##
def create_event_permission(self, lambda_name, principal, source_arn):
"""
Create permissions to link to an event.
Related: http://docs.aws.amazon.com/lambda/latest/dg/with-s3-example-configure-event-source.html
"""
logger.debug('Adding new permission to invoke Lambda function: {}'.format(lambda_name))
permission_response = self.lambda_client.add_permission(
FunctionName=lambda_name,
StatementId=''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(8)),
Action='lambda:InvokeFunction',
Principal=principal,
SourceArn=source_arn,
)
if permission_response['ResponseMetadata']['HTTPStatusCode'] != 201:
print('Problem creating permission to invoke Lambda function')
return None # XXX: Raise?
return permission_response
def schedule_events(self, lambda_arn, lambda_name, events, default=True):
"""
Given a Lambda ARN, name and a list of events, schedule this as CloudWatch Events.
'events' is a list of dictionaries, where the dict must contain the string
of a 'function' and the string of the event 'expression', and an optional 'name' and 'description'.
Expressions can be in rate or cron format:
http://docs.aws.amazon.com/lambda/latest/dg/tutorial-scheduled-events-schedule-expressions.html
"""
# The stream sources - DynamoDB, Kinesis and SQS - are working differently than the other services (pull vs push)
# and do not require event permissions. They do require additional permissions on the Lambda roles though.
# http://docs.aws.amazon.com/lambda/latest/dg/lambda-api-permissions-ref.html
pull_services = ['dynamodb', 'kinesis', 'sqs']
# XXX: Not available in Lambda yet.
# We probably want to execute the latest code.
# if default:
# lambda_arn = lambda_arn + ":$LATEST"
self.unschedule_events(lambda_name=lambda_name, lambda_arn=lambda_arn, events=events,
excluded_source_services=pull_services)
for event in events:
function = event['function']
expression = event.get('expression', None) # single expression
expressions = event.get('expressions', None) # multiple expression
kwargs = event.get('kwargs', {}) # optional dict of keyword arguments for the event
event_source = event.get('event_source', None)
description = event.get('description', function)
# - If 'cron' or 'rate' in expression, use ScheduleExpression
# - Else, use EventPattern
# - ex https://github.com/awslabs/aws-lambda-ddns-function
if not self.credentials_arn:
self.get_credentials_arn()
if expression:
expressions = [expression] # same code for single and multiple expression
if expressions:
for index, expression in enumerate(expressions):
name = self.get_scheduled_event_name(event, function, lambda_name, index)
# if it's possible that we truncated name, generate a unique, shortened name
# https://github.com/Miserlou/Zappa/issues/970
if len(name) >= 64:
rule_name = self.get_hashed_rule_name(event, function, lambda_name)
else:
rule_name = name
rule_response = self.events_client.put_rule(
Name=rule_name,
ScheduleExpression=expression,
State='ENABLED',
Description=description,
RoleArn=self.credentials_arn
)
if 'RuleArn' in rule_response:
logger.debug('Rule created. ARN {}'.format(rule_response['RuleArn']))
# Specific permissions are necessary for any trigger to work.
self.create_event_permission(lambda_name, 'events.amazonaws.com', rule_response['RuleArn'])
# Overwriting the input, supply the original values and add kwargs
input_template = '{"time": <time>, ' \
'"detail-type": <detail-type>, ' \
'"source": <source>,' \
'"account": <account>, ' \
'"region": <region>,' \
'"detail": <detail>, ' \
'"version": <version>,' \
'"resources": <resources>,' \
'"id": <id>,' \
'"kwargs": %s' \
'}' % json.dumps(kwargs)
# Create the CloudWatch event ARN for this function.
# https://github.com/Miserlou/Zappa/issues/359
target_response = self.events_client.put_targets(
Rule=rule_name,
Targets=[
{
'Id': 'Id' + ''.join(random.choice(string.digits) for _ in range(12)),
'Arn': lambda_arn,
'InputTransformer': {
'InputPathsMap': {
'time': '$.time',
'detail-type': '$.detail-type',
'source': '$.source',
'account': '$.account',
'region': '$.region',
'detail': '$.detail',
'version': '$.version',
'resources': '$.resources',
'id': '$.id'
},
'InputTemplate': input_template
}
}
]
)
if target_response['ResponseMetadata']['HTTPStatusCode'] == 200:
print("Scheduled {} with expression {}!".format(rule_name, expression))
else:
print("Problem scheduling {} with expression {}.".format(rule_name, expression))
elif event_source:
service = self.service_from_arn(event_source['arn'])
if service not in pull_services:
svc = ','.join(event['event_source']['events'])
self.create_event_permission(
lambda_name,
service + '.amazonaws.com',
event['event_source']['arn']
)
else:
svc = service
rule_response = add_event_source(
event_source,
lambda_arn,
function,
self.boto_session
)
if rule_response == 'successful':
print("Created {} event schedule for {}!".format(svc, function))
elif rule_response == 'failed':
print("Problem creating {} event schedule for {}!".format(svc, function))
elif rule_response == 'exists':
print("{} event schedule for {} already exists - Nothing to do here.".format(svc, function))
elif rule_response == 'dryrun':
print("Dryrun for creating {} event schedule for {}!!".format(svc, function))
else:
print("Could not create event {} - Please define either an expression or an event source".format(name))
@staticmethod
def get_scheduled_event_name(event, function, lambda_name, index=0):
name = event.get('name', function)
if name != function:
# a custom event name has been provided, make sure function name is included as postfix,
# otherwise zappa's handler won't be able to locate the function.
name = '{}-{}'.format(name, function)
if index:
# to ensure unique cloudwatch rule names in the case of multiple expressions
# prefix all entries bar the first with the index
# Related: https://github.com/Miserlou/Zappa/pull/1051
name = '{}-{}'.format(index, name)
# prefix scheduled event names with lambda name. So we can look them up later via the prefix.
return Zappa.get_event_name(lambda_name, name)
@staticmethod
def get_event_name(lambda_name, name):
"""
Returns an AWS-valid Lambda event name.
"""
return '{prefix:.{width}}-{postfix}'.format(prefix=lambda_name, width=max(0, 63 - len(name)), postfix=name)[:64]
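# Worked example: the prefix (lambda name) is truncated so the combined name
# never exceeds AWS's 64-character limit. With a 60-char event name, only the
# first 3 characters of the lambda name survive:
#   Zappa.get_event_name('my-app-production', 'x' * 60)
#   -> 'my-' + '-' + 'x' * 60   # 64 chars: prefix cut to 3, then '-', then the name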
@staticmethod
def get_hashed_rule_name(event, function, lambda_name):
"""
Returns an AWS-valid CloudWatch rule name using a digest of the event name, lambda name, and function.
This allows support for rule names that may be longer than the 64 char limit.
"""
event_name = event.get('name', function)
name_hash = hashlib.sha1('{}-{}'.format(lambda_name, event_name).encode('UTF-8')).hexdigest()
return Zappa.get_event_name(name_hash, function)
def delete_rule(self, rule_name):
"""
Delete a CWE rule.
This deletes them, but they will still show up in the AWS console.
Annoying.
"""
logger.debug('Deleting existing rule {}'.format(rule_name))
# All targets must be removed before
# we can actually delete the rule.
try:
targets = self.events_client.list_targets_by_rule(Rule=rule_name)
except botocore.exceptions.ClientError as e:
# This avoids misbehavior if low permissions, related: https://github.com/Miserlou/Zappa/issues/286
error_code = e.response['Error']['Code']
if error_code == 'AccessDeniedException':
raise
else:
logger.debug('No target found for this rule: {} {}'.format(rule_name, e.args[0]))
return
if 'Targets' in targets and targets['Targets']:
self.events_client.remove_targets(Rule=rule_name, Ids=[x['Id'] for x in targets['Targets']])
else: # pragma: no cover
logger.debug('No target to delete')
# Delete our rule.
self.events_client.delete_rule(Name=rule_name)
def get_event_rule_names_for_lambda(self, lambda_arn):
"""
Get all of the rule names associated with a lambda function.
"""
response = self.events_client.list_rule_names_by_target(TargetArn=lambda_arn)
rule_names = response['RuleNames']
# Iterate when the results are paginated
while 'NextToken' in response:
response = self.events_client.list_rule_names_by_target(TargetArn=lambda_arn,
NextToken=response['NextToken'])
rule_names.extend(response['RuleNames'])
return rule_names
def get_event_rules_for_lambda(self, lambda_arn):
"""
Get all of the rule details associated with this function.
"""
rule_names = self.get_event_rule_names_for_lambda(lambda_arn=lambda_arn)
return [self.events_client.describe_rule(Name=r) for r in rule_names]
def unschedule_events(self, events, lambda_arn=None, lambda_name=None, excluded_source_services=None):
"""
Given a list of events, unschedule these CloudWatch Events.
'events' is a list of dictionaries, where the dict must contain the string
of a 'function' and the string of the event 'expression', and an optional 'name' and 'description'.
"""
excluded_source_services = excluded_source_services or []
self._clear_policy(lambda_name)
rule_names = self.get_event_rule_names_for_lambda(lambda_arn=lambda_arn)
for rule_name in rule_names:
self.delete_rule(rule_name)
print('Unscheduled ' + rule_name + '.')
non_cwe = [e for e in events if 'event_source' in e]
for event in non_cwe:
# TODO: This WILL miss non CW events that have been deployed but changed names. Figure out a way to remove
# them no matter what.
# These are non CWE event sources.
function = event['function']
name = event.get('name', function)
event_source = event.get('event_source', function)
service = self.service_from_arn(event_source['arn'])
# DynamoDB and Kinesis streams take quite a while to setup after they are created and do not need to be
# re-scheduled when a new Lambda function is deployed. Therefore, they should not be removed during zappa
# update or zappa schedule.
if service not in excluded_source_services:
remove_event_source(
event_source,
lambda_arn,
function,
self.boto_session
)
print("Removed event {}{}.".format(
name,
" ({})".format(str(event_source['events'])) if 'events' in event_source else '')
)
##
# Async / SNS
##
def create_async_sns_topic(self, lambda_name, lambda_arn):
"""
Create the SNS-based async topic.
"""
topic_name = get_topic_name(lambda_name)
# Create SNS topic
topic_arn = self.sns_client.create_topic(
Name=topic_name)['TopicArn']
# Create subscription
self.sns_client.subscribe(
TopicArn=topic_arn,
Protocol='lambda',
Endpoint=lambda_arn
)
# Add Lambda permission for SNS to invoke function
self.create_event_permission(
lambda_name=lambda_name,
principal='sns.amazonaws.com',
source_arn=topic_arn
)
# Add rule for SNS topic as a event source
add_event_source(
event_source={
"arn": topic_arn,
"events": ["sns:Publish"]
},
lambda_arn=lambda_arn,
target_function="zappa.asynchronous.route_task",
boto_session=self.boto_session
)
return topic_arn
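# Usage sketch (hypothetical): wire up the async SNS topic for a deployed
# function and keep the returned ARN for later teardown.
#
#   topic_arn = zappa.create_async_sns_topic(
#       lambda_name='my-function',
#       lambda_arn='arn:aws:lambda:us-east-1:123456789012:function:my-function',
#   )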
def remove_async_sns_topic(self, lambda_name):
"""
Remove the async SNS topic.
"""
topic_name = get_topic_name(lambda_name)
removed_arns = []
for sub in self.sns_client.list_subscriptions()['Subscriptions']:
if topic_name in sub['TopicArn']:
self.sns_client.delete_topic(TopicArn=sub['TopicArn'])
removed_arns.append(sub['TopicArn'])
return removed_arns
##
# Async / DynamoDB
##
def _set_async_dynamodb_table_ttl(self, table_name):
self.dynamodb_client.update_time_to_live(
TableName=table_name,
TimeToLiveSpecification={
'Enabled': True,
'AttributeName': 'ttl'
}
)
def create_async_dynamodb_table(self, table_name, read_capacity, write_capacity):
"""
Create the DynamoDB table for async task return values
"""
try:
dynamodb_table = self.dynamodb_client.describe_table(TableName=table_name)
return False, dynamodb_table
# catch this exception (triggered if the table doesn't exist)
except botocore.exceptions.ClientError:
dynamodb_table = self.dynamodb_client.create_table(
AttributeDefinitions=[
{
'AttributeName': 'id',
'AttributeType': 'S'
}
],
TableName=table_name,
KeySchema=[
{
'AttributeName': 'id',
'KeyType': 'HASH'
},
],
ProvisionedThroughput = {
'ReadCapacityUnits': read_capacity,
'WriteCapacityUnits': write_capacity
}
)
if dynamodb_table:
try:
self._set_async_dynamodb_table_ttl(table_name)
except botocore.exceptions.ClientError:
# this fails because the operation is async, so retry
time.sleep(10)
self._set_async_dynamodb_table_ttl(table_name)
return True, dynamodb_table
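# Usage sketch (hypothetical): ensure the async-response table exists. The first
# element of the returned tuple indicates whether the table was newly created.
#
#   created, table = zappa.create_async_dynamodb_table(
#       table_name='my-function-async-responses',
#       read_capacity=1,
#       write_capacity=1,
#   )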
def remove_async_dynamodb_table(self, table_name):
"""
Remove the DynamoDB Table used for async return values
"""
self.dynamodb_client.delete_table(TableName=table_name)
##
# CloudWatch Logging
##
def fetch_logs(self, lambda_name, filter_pattern='', limit=10000, start_time=0):
"""
Fetch the CloudWatch logs for a given Lambda name.
"""
log_name = '/aws/lambda/' + lambda_name
streams = self.logs_client.describe_log_streams(
logGroupName=log_name,
descending=True,
orderBy='LastEventTime'
)
all_streams = streams['logStreams']
all_names = [stream['logStreamName'] for stream in all_streams]
events = []
response = {}
# Amazon uses millisecond epoch for some reason.
# Thanks, Jeff.
# Convert once, before the loop, so pagination doesn't re-multiply start_time.
start_time = start_time * 1000
end_time = int(time.time()) * 1000
while not response or 'nextToken' in response:
extra_args = {}
if 'nextToken' in response:
extra_args['nextToken'] = response['nextToken']
response = self.logs_client.filter_log_events(
logGroupName=log_name,
logStreamNames=all_names,
startTime=start_time,
endTime=end_time,
filterPattern=filter_pattern,
limit=limit,
interleaved=True, # Does this actually improve performance?
**extra_args
)
if response and 'events' in response:
events += response['events']
return sorted(events, key=lambda k: k['timestamp'])
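# Usage sketch (hypothetical): fetch the last hour of logs for a function and
# print them in chronological order (fetch_logs already sorts by timestamp).
#
#   import time
#   events = zappa.fetch_logs('my-function', start_time=int(time.time()) - 3600)
#   for event in events:
#       print(event['timestamp'], event['message'].strip())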
def remove_log_group(self, group_name):
"""
Remove a CloudWatch log group by name.
"""
print("Removing log group: {}".format(group_name))
try:
self.logs_client.delete_log_group(logGroupName=group_name)
except botocore.exceptions.ClientError as e:
print("Couldn't remove '{}' because of: {}".format(group_name, e))
def remove_lambda_function_logs(self, lambda_function_name):
"""
Remove all logs that are assigned to a given lambda function id.
"""
self.remove_log_group('/aws/lambda/{}'.format(lambda_function_name))
def remove_api_gateway_logs(self, project_name):
"""
Remove all logs that are assigned to a given REST API id.
"""
for rest_api in self.get_rest_apis(project_name):
for stage in self.apigateway_client.get_stages(restApiId=rest_api['id'])['item']:
self.remove_log_group('API-Gateway-Execution-Logs_{}/{}'.format(rest_api['id'], stage['stageName']))
##
# Route53 Domain Name Entries
##
def get_hosted_zone_id_for_domain(self, domain):
"""
Get the Hosted Zone ID for a given domain.
"""
all_zones = self.get_all_zones()
return self.get_best_match_zone(all_zones, domain)
@staticmethod
def get_best_match_zone(all_zones, domain):
"""Return zone id which name is closer matched with domain name."""
# Related: https://github.com/Miserlou/Zappa/issues/459
public_zones = [zone for zone in all_zones['HostedZones'] if not zone['Config']['PrivateZone']]
zones = {zone['Name'][:-1]: zone['Id'] for zone in public_zones if zone['Name'][:-1] in domain}
if zones:
best_match = max(zones.keys(), key=len)  # longest key -- best match.
return zones[best_match]
else:
return None
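# Worked example: for domain 'api.example.com' with public zones named
# 'example.com.' and 'api.example.com.', both zone names are substrings of the
# domain, and the longer one ('api.example.com') wins as the best match.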
def set_dns_challenge_txt(self, zone_id, domain, txt_challenge):
"""
Set DNS challenge TXT.
"""
print("Setting DNS challenge..")
resp = self.route53.change_resource_record_sets(
HostedZoneId=zone_id,
ChangeBatch=self.get_dns_challenge_change_batch('UPSERT', domain, txt_challenge)
)
return resp
def remove_dns_challenge_txt(self, zone_id, domain, txt_challenge):
"""
Remove DNS challenge TXT.
"""
print("Deleting DNS challenge..")
resp = self.route53.change_resource_record_sets(
HostedZoneId=zone_id,
ChangeBatch=self.get_dns_challenge_change_batch('DELETE', domain, txt_challenge)
)
return resp
@staticmethod
def get_dns_challenge_change_batch(action, domain, txt_challenge):
"""
Given action, domain and challenge, return a change batch to use with
route53 call.
:param action: DELETE | UPSERT
:param domain: domain name
:param txt_challenge: challenge
:return: change set for a given action, domain and TXT challenge.
"""
return {
'Changes': [{
'Action': action,
'ResourceRecordSet': {
'Name': '_acme-challenge.{0}'.format(domain),
'Type': 'TXT',
'TTL': 60,
'ResourceRecords': [{
'Value': '"{0}"'.format(txt_challenge)
}]
}
}]
}
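# Worked example: get_dns_challenge_change_batch('UPSERT', 'example.com', 'abc123')
# returns a batch that upserts the record '_acme-challenge.example.com' of type
# TXT with the quoted value '"abc123"' and a 60-second TTL.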
##
# Utility
##
def shell(self):
"""
Spawn a PDB shell.
"""
import pdb
pdb.set_trace()
def load_credentials(self, boto_session=None, profile_name=None):
"""
Load AWS credentials.
An optional boto_session can be provided, but that's usually for testing.
An optional profile_name can be provided for config files that have multiple sets
of credentials.
"""
# Automatically load credentials from config or environment
if not boto_session:
# If provided, use the supplied profile name.
if profile_name:
self.boto_session = boto3.Session(profile_name=profile_name, region_name=self.aws_region)
elif os.environ.get('AWS_ACCESS_KEY_ID') and os.environ.get('AWS_SECRET_ACCESS_KEY'):
region_name = os.environ.get('AWS_DEFAULT_REGION') or self.aws_region
session_kw = {
"aws_access_key_id": os.environ.get('AWS_ACCESS_KEY_ID'),
"aws_secret_access_key": os.environ.get('AWS_SECRET_ACCESS_KEY'),
"region_name": region_name,
}
# If we're executing in a role, AWS_SESSION_TOKEN will be present, too.
if os.environ.get("AWS_SESSION_TOKEN"):
session_kw["aws_session_token"] = os.environ.get("AWS_SESSION_TOKEN")
self.boto_session = boto3.Session(**session_kw)
else:
self.boto_session = boto3.Session(region_name=self.aws_region)
logger.debug("Loaded boto session from config: %s", boto_session)
else:
logger.debug("Using provided boto session: %s", boto_session)
self.boto_session = boto_session
# use provided session's region in case it differs
self.aws_region = self.boto_session.region_name
if self.boto_session.region_name not in LAMBDA_REGIONS:
print("Warning! AWS Lambda may not be available in this AWS Region!")
if self.boto_session.region_name not in API_GATEWAY_REGIONS:
print("Warning! AWS API Gateway may not be available in this AWS Region!")
@staticmethod
def service_from_arn(arn):
return arn.split(':')[2]
import botocore
import calendar
import datetime
import durationpy
import fnmatch
import io
import json
import logging
import os
import re
import shutil
import stat
import sys
from past.builtins import basestring
from urllib.parse import urlparse
LOG = logging.getLogger(__name__)
##
# Settings / Packaging
##
def copytree(src, dst, metadata=True, symlinks=False, ignore=None):
"""
This is a contributed re-implementation of 'copytree' that
should work with the exact same behavior on multiple platforms.
When `metadata` is False, file metadata such as permissions and modification
times are not copied.
"""
def copy_file(src, dst, item):
s = os.path.join(src, item)
d = os.path.join(dst, item)
if symlinks and os.path.islink(s): # pragma: no cover
if os.path.lexists(d):
os.remove(d)
os.symlink(os.readlink(s), d)
if metadata:
try:
st = os.lstat(s)
mode = stat.S_IMODE(st.st_mode)
os.lchmod(d, mode)
                except (OSError, AttributeError, NotImplementedError):
                    pass  # lchmod not available on this platform
elif os.path.isdir(s):
copytree(s, d, metadata, symlinks, ignore)
else:
shutil.copy2(s, d) if metadata else shutil.copy(s, d)
try:
lst = os.listdir(src)
if not os.path.exists(dst):
os.makedirs(dst)
if metadata:
shutil.copystat(src, dst)
except NotADirectoryError: # egg-link files
copy_file(os.path.dirname(src), os.path.dirname(dst), os.path.basename(src))
return
if ignore:
excl = ignore(src, lst)
lst = [x for x in lst if x not in excl]
for item in lst:
copy_file(src, dst, item)
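# Usage sketch (hypothetical paths; `ignore` follows the shutil.copytree
# convention of a callable taking (directory, contents) and returning names to skip):
#
#   import shutil
#   copytree('myproject', '/tmp/package', metadata=False,
#            ignore=shutil.ignore_patterns('*.pyc', '__pycache__'))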
def parse_s3_url(url):
"""
Parses S3 URL.
Returns bucket (domain) and file (full path).
"""
bucket = ''
path = ''
if url:
result = urlparse(url)
bucket = result.netloc
path = result.path.strip('/')
return bucket, path
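# Examples (illustrative, hypothetical URL):
#   parse_s3_url('s3://my-bucket/path/to/package.zip') == ('my-bucket', 'path/to/package.zip')
#   parse_s3_url(None) == ('', '')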
def human_size(num, suffix='B'):
"""
Convert bytes length to a human-readable version
"""
for unit in ('', 'Ki', 'Mi', 'Gi', 'Ti', 'Pi', 'Ei', 'Zi'):
if abs(num) < 1024.0:
return "{0:3.1f}{1!s}{2!s}".format(num, unit, suffix)
num /= 1024.0
return "{0:.1f}{1!s}{2!s}".format(num, 'Yi', suffix)
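# Examples (illustrative):
#   human_size(999)     == '999.0B'
#   human_size(1234567) == '1.2MiB'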
def string_to_timestamp(timestring):
"""
Accepts a str, returns an int timestamp.
"""
    # Uses an extended version of Go's duration string.
    try:
        delta = durationpy.from_str(timestring)
        past = datetime.datetime.utcnow() - delta
        return calendar.timegm(past.timetuple())
    except Exception:
        # Unable to parse the timestring as a duration.
        return 0
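# Examples (illustrative; durationpy accepts Go-style strings such as '3m', '4h', '1mo'):
#   string_to_timestamp('1h')       -> Unix timestamp for one hour ago
#   string_to_timestamp('nonsense') -> 0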
##
# `init` related
##
def detect_django_settings():
"""
Automatically try to discover Django settings files,
return them as relative module paths.
"""
matches = []
for root, dirnames, filenames in os.walk(os.getcwd()):
for filename in fnmatch.filter(filenames, '*settings.py'):
            full = os.path.join(root, filename)
            if 'site-packages' in full:
                continue
package_path = full.replace(os.getcwd(), '')
package_module = package_path.replace(os.sep, '.').split('.', 1)[1].replace('.py', '')
matches.append(package_module)
return matches
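# Example (illustrative): a project laid out as myproject/settings.py would
# yield something like ['myproject.settings'].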
def detect_flask_apps():
"""
Automatically try to discover Flask apps files,
return them as relative module paths.
"""
matches = []
for root, dirnames, filenames in os.walk(os.getcwd()):
for filename in fnmatch.filter(filenames, '*.py'):
            full = os.path.join(root, filename)
            if 'site-packages' in full:
                continue
with io.open(full, 'r', encoding='utf-8') as f:
lines = f.readlines()
for line in lines:
app = None
# Kind of janky..
if '= Flask(' in line:
app = line.split('= Flask(')[0].strip()
if '=Flask(' in line:
app = line.split('=Flask(')[0].strip()
if not app:
continue
package_path = full.replace(os.getcwd(), '')
package_module = package_path.replace(os.sep, '.').split('.', 1)[1].replace('.py', '')
app_module = package_module + '.' + app
matches.append(app_module)
return matches
def get_venv_from_python_version():
return 'python{}.{}'.format(*sys.version_info)
def get_runtime_from_python_version():
    """
    Return the AWS Lambda runtime identifier matching the running Python version.
    """
if sys.version_info[0] < 3:
raise ValueError("Python 2.x is no longer supported.")
else:
if sys.version_info[1] <= 6:
return 'python3.6'
elif sys.version_info[1] <= 7:
return 'python3.7'
else:
return 'python3.8'
##
# Async Tasks
##
def get_topic_name(lambda_name):
""" Topic name generation """
return '%s-zappa-async' % lambda_name
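# Example (illustrative): get_topic_name('myapp-dev') == 'myapp-dev-zappa-async'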
##
# Event sources / Kappa
##
def get_event_source(event_source, lambda_arn, target_function, boto_session, dry=False):
"""
Given an event_source dictionary item, a session and a lambda_arn,
    hack into Kappa's Gibson, create an object we can call
to schedule this event, and return the event source.
"""
import kappa.function
import kappa.restapi
import kappa.event_source.base
import kappa.event_source.dynamodb_stream
import kappa.event_source.kinesis
import kappa.event_source.s3
import kappa.event_source.sns
import kappa.event_source.cloudwatch
import kappa.policy
import kappa.role
import kappa.awsclient
class PseudoContext:
def __init__(self):
return
class PseudoFunction:
def __init__(self):
return
# Mostly adapted from kappa - will probably be replaced by kappa support
class SqsEventSource(kappa.event_source.base.EventSource):
def __init__(self, context, config):
super().__init__(context, config)
self._lambda = kappa.awsclient.create_client(
'lambda', context.session)
def _get_uuid(self, function):
uuid = None
response = self._lambda.call(
'list_event_source_mappings',
FunctionName=function.name,
EventSourceArn=self.arn)
LOG.debug(response)
if len(response['EventSourceMappings']) > 0:
uuid = response['EventSourceMappings'][0]['UUID']
return uuid
def add(self, function):
try:
response = self._lambda.call(
'create_event_source_mapping',
FunctionName=function.name,
EventSourceArn=self.arn,
BatchSize=self.batch_size,
Enabled=self.enabled
)
LOG.debug(response)
except Exception:
LOG.exception('Unable to add event source')
def enable(self, function):
self._config['enabled'] = True
try:
response = self._lambda.call(
'update_event_source_mapping',
UUID=self._get_uuid(function),
Enabled=self.enabled
)
LOG.debug(response)
except Exception:
LOG.exception('Unable to enable event source')
        def disable(self, function):
            self._config['enabled'] = False
            try:
                # update_event_source_mapping identifies mappings by UUID.
                response = self._lambda.call(
                    'update_event_source_mapping',
                    UUID=self._get_uuid(function),
                    Enabled=self.enabled
                )
                LOG.debug(response)
            except Exception:
                LOG.exception('Unable to disable event source')
        def update(self, function):
            response = None
            uuid = self._get_uuid(function)
            if uuid:
                try:
                    response = self._lambda.call(
                        'update_event_source_mapping',
                        UUID=uuid,
                        BatchSize=self.batch_size,
                        Enabled=self.enabled,
                        FunctionName=function.arn)
                    LOG.debug(response)
                except Exception:
                    LOG.exception('Unable to update event source')
def remove(self, function):
response = None
uuid = self._get_uuid(function)
if uuid:
response = self._lambda.call(
'delete_event_source_mapping',
UUID=uuid)
LOG.debug(response)
return response
def status(self, function):
response = None
LOG.debug('getting status for event source %s', self.arn)
uuid = self._get_uuid(function)
if uuid:
try:
response = self._lambda.call(
'get_event_source_mapping',
UUID=self._get_uuid(function))
LOG.debug(response)
except botocore.exceptions.ClientError:
LOG.debug('event source %s does not exist', self.arn)
response = None
else:
LOG.debug('No UUID for event source %s', self.arn)
return response
class ExtendedSnsEventSource(kappa.event_source.sns.SNSEventSource):
@property
def filters(self):
return self._config.get('filters')
def add_filters(self, function):
try:
subscription = self.exists(function)
if subscription:
response = self._sns.call(
'set_subscription_attributes',
SubscriptionArn=subscription['SubscriptionArn'],
AttributeName='FilterPolicy',
AttributeValue=json.dumps(self.filters)
)
kappa.event_source.sns.LOG.debug(response)
except Exception:
kappa.event_source.sns.LOG.exception('Unable to add filters for SNS topic %s', self.arn)
def add(self, function):
super().add(function)
if self.filters:
self.add_filters(function)
event_source_map = {
'dynamodb': kappa.event_source.dynamodb_stream.DynamoDBStreamEventSource,
'kinesis': kappa.event_source.kinesis.KinesisEventSource,
's3': kappa.event_source.s3.S3EventSource,
'sns': ExtendedSnsEventSource,
'sqs': SqsEventSource,
'events': kappa.event_source.cloudwatch.CloudWatchEventSource
}
arn = event_source['arn']
_, _, svc, _ = arn.split(':', 3)
event_source_func = event_source_map.get(svc, None)
if not event_source_func:
raise ValueError('Unknown event source: {0}'.format(arn))
def autoreturn(self, function_name):
return function_name
event_source_func._make_notification_id = autoreturn
ctx = PseudoContext()
ctx.session = boto_session
funk = PseudoFunction()
funk.name = lambda_arn
# Kappa 0.6.0 requires this nasty hacking,
# hopefully we can remove at least some of this soon.
    # Kappa 0.7.0 introduces a whole host of other changes we don't
# really want, so we're stuck here for a little while.
# Related: https://github.com/Miserlou/Zappa/issues/684
# https://github.com/Miserlou/Zappa/issues/688
# https://github.com/Miserlou/Zappa/commit/3216f7e5149e76921ecdf9451167846b95616313
if svc == 's3':
split_arn = lambda_arn.split(':')
arn_front = ':'.join(split_arn[:-1])
arn_back = split_arn[-1]
ctx.environment = arn_back
funk.arn = arn_front
funk.name = ':'.join([arn_back, target_function])
else:
funk.arn = lambda_arn
funk._context = ctx
event_source_obj = event_source_func(ctx, event_source)
return event_source_obj, ctx, funk
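# Illustrative sketch (hypothetical ARN): an event_source dict such as
#   {'arn': 'arn:aws:sqs:us-east-1:123456789012:my-queue', 'batch_size': 10, 'enabled': True}
# is routed to SqsEventSource above, because the third ':'-separated field
# of the ARN ('sqs') selects the entry in event_source_map.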
def add_event_source(event_source, lambda_arn, target_function, boto_session, dry=False):
"""
Given an event_source dictionary, create the object and add the event source.
"""
event_source_obj, ctx, funk = get_event_source(event_source, lambda_arn, target_function, boto_session, dry=False)
# TODO: Detect changes in config and refine exists algorithm
if not dry:
if not event_source_obj.status(funk):
event_source_obj.add(funk)
return 'successful' if event_source_obj.status(funk) else 'failed'
else:
return 'exists'
return 'dryrun'
def remove_event_source(event_source, lambda_arn, target_function, boto_session, dry=False):
"""
Given an event_source dictionary, create the object and remove the event source.
"""
event_source_obj, ctx, funk = get_event_source(event_source, lambda_arn, target_function, boto_session, dry=False)
# This is slightly dirty, but necessary for using Kappa this way.
funk.arn = lambda_arn
if not dry:
rule_response = event_source_obj.remove(funk)
return rule_response
else:
return event_source_obj
def get_event_source_status(event_source, lambda_arn, target_function, boto_session, dry=False):
"""
Given an event_source dictionary, create the object and get the event source status.
"""
event_source_obj, ctx, funk = get_event_source(event_source, lambda_arn, target_function, boto_session, dry=False)
return event_source_obj.status(funk)
##
# Analytics / Surveillance / Nagging
##
def check_new_version_available(this_version):
"""
Checks if a newer version of Zappa is available.
    Returns True if an update is available, else False.
"""
import requests
pypi_url = 'https://pypi.org/pypi/Zappa/json'
resp = requests.get(pypi_url, timeout=1.5)
top_version = resp.json()['info']['version']
return this_version != top_version
class InvalidAwsLambdaName(Exception):
"""Exception: proposed AWS Lambda name is invalid"""
pass
def validate_name(name, maxlen=80):
"""Validate name for AWS Lambda function.
    name: actual name (without `arn:aws:lambda:...:` prefix and without
        `:$LATEST`, alias or version suffix).
    maxlen: max allowed length for name without prefix and suffix.
        The value 80 was calculated from the prefix with the longest known
        region name, assuming that no alias or version would be longer than `$LATEST`.
Based on AWS Lambda spec
http://docs.aws.amazon.com/lambda/latest/dg/API_CreateFunction.html
Return: the name
Raise: InvalidAwsLambdaName, if the name is invalid.
"""
if not isinstance(name, basestring):
msg = "Name must be of type string"
raise InvalidAwsLambdaName(msg)
if len(name) > maxlen:
msg = "Name is longer than {maxlen} characters."
raise InvalidAwsLambdaName(msg.format(maxlen=maxlen))
if len(name) == 0:
msg = "Name must not be empty string."
raise InvalidAwsLambdaName(msg)
if not re.match("^[a-zA-Z0-9-_]+$", name):
msg = "Name can only contain characters from a-z, A-Z, 0-9, _ and -"
raise InvalidAwsLambdaName(msg)
return name
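# Examples (illustrative):
#   validate_name('my-function_dev')  # returns 'my-function_dev'
#   validate_name('my function!')     # raises InvalidAwsLambdaName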
def contains_python_files_or_subdirs(folder):
    """
    Checks (recursively) if the directory contains .py or .pyc files
    """
    # os.walk() already descends into every subdirectory, so one pass suffices.
    for root, dirs, files in os.walk(folder):
        if any(filename.endswith(('.py', '.pyc')) for filename in files):
            return True
    return False
def conflicts_with_a_neighbouring_module(directory_path):
"""
Checks if a directory lies in the same directory as a .py file with the same name.
"""
parent_dir_path, current_dir_name = os.path.split(os.path.normpath(directory_path))
neighbours = os.listdir(parent_dir_path)
conflicting_neighbour_filename = current_dir_name+'.py'
return conflicting_neighbour_filename in neighbours
# https://github.com/Miserlou/Zappa/issues/1188
def titlecase_keys(d):
"""
Takes a dict with keys of type str and returns a new dict with all keys titlecased.
"""
return {k.title(): v for k, v in d.items()}
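# Example (illustrative; note str.title() capitalizes after every non-letter):
#   titlecase_keys({'content-type': 'text/html'}) == {'Content-Type': 'text/html'}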
# https://github.com/Miserlou/Zappa/issues/1688
def is_valid_bucket_name(name):
"""
Checks if an S3 bucket name is valid according to https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html#bucketnamingrules
"""
# Bucket names must be at least 3 and no more than 63 characters long.
if (len(name) < 3 or len(name) > 63):
return False
# Bucket names must not contain uppercase characters or underscores.
if (any(x.isupper() for x in name)):
return False
if "_" in name:
return False
# Bucket names must start with a lowercase letter or number.
if not (name[0].islower() or name[0].isdigit()):
return False
# Bucket names must be a series of one or more labels. Adjacent labels are separated by a single period (.).
for label in name.split("."):
# Each label must start and end with a lowercase letter or a number.
if len(label) < 1:
return False
if not (label[0].islower() or label[0].isdigit()):
return False
if not (label[-1].islower() or label[-1].isdigit()):
return False
# Bucket names must not be formatted as an IP address (for example, 192.168.5.4).
looks_like_IP = True
for label in name.split("."):
if not label.isdigit():
looks_like_IP = False
break
if looks_like_IP:
return False
return True
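# Examples (illustrative):
#   is_valid_bucket_name('my.bucket-1') == True
#   is_valid_bucket_name('MyBucket')    == False  # uppercase not allowed
#   is_valid_bucket_name('192.168.5.4') == False  # formatted like an IP address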
def merge_headers(event):
"""
Merge the values of headers and multiValueHeaders into a single dict.
Opens up support for multivalue headers via API Gateway and ALB.
See: https://github.com/Miserlou/Zappa/pull/1756
"""
headers = event.get('headers') or {}
multi_headers = (event.get('multiValueHeaders') or {}).copy()
for h in set(headers.keys()):
if h not in multi_headers:
multi_headers[h] = [headers[h]]
for h in multi_headers.keys():
multi_headers[h] = ', '.join(multi_headers[h])
    return multi_headers
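# Example (illustrative event fragment):
#   merge_headers({'headers': {'Host': 'example.com'},
#                  'multiValueHeaders': {'Accept': ['text/html', 'application/json']}})
#   == {'Host': 'example.com', 'Accept': 'text/html, application/json'}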
from past.builtins import basestring
from builtins import input, bytes
import argcomplete
import argparse
import base64
import pkgutil
import botocore
import click
import collections
import hjson as json
import inspect
import importlib
import logging
import os
import pkg_resources
import random
import re
import requests
import slugify
import string
import sys
import tempfile
import time
import toml
import yaml
import zipfile
from click import Context, BaseCommand
from click.exceptions import ClickException
from click.globals import push_context
from dateutil import parser
from datetime import datetime, timedelta
from .core import Zappa, logger, API_GATEWAY_REGIONS
from .utilities import (check_new_version_available, detect_django_settings,
detect_flask_apps, parse_s3_url, human_size,
validate_name, InvalidAwsLambdaName, get_venv_from_python_version,
get_runtime_from_python_version, string_to_timestamp, is_valid_bucket_name)
CUSTOM_SETTINGS = [
'apigateway_policy',
'assume_policy',
'attach_policy',
'aws_region',
'delete_local_zip',
'delete_s3_zip',
'exclude',
'exclude_glob',
'extra_permissions',
'include',
'role_name',
'touch',
]
BOTO3_CONFIG_DOCS_URL = 'https://boto3.readthedocs.io/en/latest/guide/quickstart.html#configuration'
##
# Main Input Processing
##
class ZappaCLI:
"""
ZappaCLI object is responsible for loading the settings,
handling the input arguments and executing the calls to the core library.
"""
# CLI
vargs = None
command = None
stage_env = None
# Zappa settings
zappa = None
zappa_settings = None
load_credentials = True
disable_progress = False
# Specific settings
api_stage = None
app_function = None
aws_region = None
debug = None
prebuild_script = None
project_name = None
profile_name = None
lambda_arn = None
lambda_name = None
lambda_description = None
lambda_concurrency = None
s3_bucket_name = None
settings_file = None
zip_path = None
handler_path = None
vpc_config = None
memory_size = None
use_apigateway = None
lambda_handler = None
django_settings = None
manage_roles = True
exception_handler = None
environment_variables = None
authorizer = None
xray_tracing = False
aws_kms_key_arn = ''
context_header_mappings = None
tags = []
layers = None
stage_name_env_pattern = re.compile('^[a-zA-Z0-9_]+$')
def __init__(self):
self._stage_config_overrides = {} # change using self.override_stage_config_setting(key, val)
@property
def stage_config(self):
"""
A shortcut property for settings of a stage.
"""
def get_stage_setting(stage, extended_stages=None):
if extended_stages is None:
extended_stages = []
if stage in extended_stages:
raise RuntimeError(stage + " has already been extended to these settings. "
"There is a circular extends within the settings file.")
extended_stages.append(stage)
try:
stage_settings = dict(self.zappa_settings[stage].copy())
except KeyError:
raise ClickException("Cannot extend settings for undefined stage '" + stage + "'.")
extends_stage = self.zappa_settings[stage].get('extends', None)
if not extends_stage:
return stage_settings
extended_settings = get_stage_setting(stage=extends_stage, extended_stages=extended_stages)
extended_settings.update(stage_settings)
return extended_settings
settings = get_stage_setting(stage=self.api_stage)
# Backwards compatible for delete_zip setting that was more explicitly named delete_local_zip
if 'delete_zip' in settings:
settings['delete_local_zip'] = settings.get('delete_zip')
settings.update(self.stage_config_overrides)
return settings
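    # Illustrative sketch of the `extends` mechanism above (hypothetical settings):
    # with zappa_settings == {'base': {'s3_bucket': 'my-bucket'},
    #                         'dev': {'extends': 'base', 'debug': True}}
    # and api_stage == 'dev', stage_config resolves to
    #   {'s3_bucket': 'my-bucket', 'extends': 'base', 'debug': True}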
@property
def stage_config_overrides(self):
"""
Returns zappa_settings we forcefully override for the current stage
set by `self.override_stage_config_setting(key, value)`
"""
return getattr(self, '_stage_config_overrides', {}).get(self.api_stage, {})
def override_stage_config_setting(self, key, val):
"""
Forcefully override a setting set by zappa_settings (for the current stage only)
:param key: settings key
:param val: value
"""
self._stage_config_overrides = getattr(self, '_stage_config_overrides', {})
self._stage_config_overrides.setdefault(self.api_stage, {})[key] = val
def handle(self, argv=None):
"""
Main function.
Parses command, load settings and dispatches accordingly.
"""
desc = ('Zappa - Deploy Python applications to AWS Lambda'
' and API Gateway.\n')
parser = argparse.ArgumentParser(description=desc)
parser.add_argument(
'-v', '--version', action='version',
version=pkg_resources.get_distribution("zappa-bepro").version,
help='Print the zappa version'
)
parser.add_argument(
'--color', default='auto', choices=['auto','never','always']
)
env_parser = argparse.ArgumentParser(add_help=False)
me_group = env_parser.add_mutually_exclusive_group()
all_help = ('Execute this command for all of our defined '
'Zappa stages.')
me_group.add_argument('--all', action='store_true', help=all_help)
me_group.add_argument('stage_env', nargs='?')
group = env_parser.add_argument_group()
group.add_argument(
'-a', '--app_function', help='The WSGI application function.'
)
group.add_argument(
'-s', '--settings_file', help='The path to a Zappa settings file.'
)
group.add_argument(
'-q', '--quiet', action='store_true', help='Silence all output.'
)
# https://github.com/Miserlou/Zappa/issues/407
# Moved when 'template' command added.
# Fuck Terraform.
group.add_argument(
'-j', '--json', action='store_true', help='Make the output of this command be machine readable.'
)
# https://github.com/Miserlou/Zappa/issues/891
group.add_argument(
'--disable_progress', action='store_true', help='Disable progress bars.'
)
group.add_argument(
"--no_venv", action="store_true", help="Skip venv check."
)
##
# Certify
##
subparsers = parser.add_subparsers(title='subcommands', dest='command')
cert_parser = subparsers.add_parser(
'certify', parents=[env_parser],
help='Create and install SSL certificate'
)
cert_parser.add_argument(
'--manual', action='store_true',
            help=("Gets new Let's Encrypt certificates, but prints them to console. "
                  "Does not update API Gateway domains.")
)
cert_parser.add_argument(
'-y', '--yes', action='store_true', help='Auto confirm yes.'
)
##
# Deploy
##
deploy_parser = subparsers.add_parser(
'deploy', parents=[env_parser], help='Deploy application.'
)
deploy_parser.add_argument(
'-z', '--zip', help='Deploy Lambda with specific local or S3 hosted zip package'
)
##
# Init
##
init_parser = subparsers.add_parser('init', help='Initialize Zappa app.')
##
# Package
##
package_parser = subparsers.add_parser(
'package', parents=[env_parser], help='Build the application zip package locally.'
)
package_parser.add_argument(
'-o', '--output', help='Name of file to output the package to.'
)
##
# Template
##
template_parser = subparsers.add_parser(
'template', parents=[env_parser], help='Create a CloudFormation template for this API Gateway.'
)
template_parser.add_argument(
'-l', '--lambda-arn', required=True, help='ARN of the Lambda function to template to.'
)
template_parser.add_argument(
'-r', '--role-arn', required=True, help='ARN of the Role to template with.'
)
template_parser.add_argument(
'-o', '--output', help='Name of file to output the template to.'
)
##
# Invocation
##
invoke_parser = subparsers.add_parser(
'invoke', parents=[env_parser],
help='Invoke remote function.'
)
invoke_parser.add_argument(
'--raw', action='store_true',
help=('When invoking remotely, invoke this python as a string,'
' not as a modular path.')
)
invoke_parser.add_argument(
'--no-color', action='store_true',
help=("Don't color the output")
)
invoke_parser.add_argument('command_rest')
##
# Manage
##
manage_parser = subparsers.add_parser(
'manage',
help='Invoke remote Django manage.py commands.'
)
rest_help = ("Command in the form of <env> <command>. <env> is not "
"required if --all is specified")
manage_parser.add_argument('--all', action='store_true', help=all_help)
manage_parser.add_argument('command_rest', nargs='+', help=rest_help)
manage_parser.add_argument(
'--no-color', action='store_true',
help=("Don't color the output")
)
# This is explicitly added here because this is the only subcommand that doesn't inherit from env_parser
# https://github.com/Miserlou/Zappa/issues/1002
manage_parser.add_argument(
'-s', '--settings_file', help='The path to a Zappa settings file.'
)
##
# Rollback
##
def positive_int(s):
""" Ensure an arg is positive """
i = int(s)
if i < 0:
msg = "This argument must be positive (got {})".format(s)
raise argparse.ArgumentTypeError(msg)
return i
rollback_parser = subparsers.add_parser(
'rollback', parents=[env_parser],
help='Rollback deployed code to a previous version.'
)
rollback_parser.add_argument(
'-n', '--num-rollback', type=positive_int, default=1,
help='The number of versions to rollback.'
)
##
# Scheduling
##
subparsers.add_parser(
'schedule', parents=[env_parser],
help='Schedule functions to occur at regular intervals.'
)
##
# Status
##
subparsers.add_parser(
'status', parents=[env_parser],
help='Show deployment status and event schedules.'
)
##
# Log Tailing
##
tail_parser = subparsers.add_parser(
'tail', parents=[env_parser], help='Tail deployment logs.'
)
tail_parser.add_argument(
'--no-color', action='store_true',
help="Don't color log tail output."
)
tail_parser.add_argument(
'--http', action='store_true',
help='Only show HTTP requests in tail output.'
)
tail_parser.add_argument(
'--non-http', action='store_true',
help='Only show non-HTTP requests in tail output.'
)
tail_parser.add_argument(
'--since', type=str, default="100000s",
            help="Only show lines since a certain timeframe (e.g. '3m', '4h', '1mo')."
)
tail_parser.add_argument(
'--filter', type=str, default="",
help="Apply a filter pattern to the logs."
)
tail_parser.add_argument(
'--force-color', action='store_true',
help='Force coloring log tail output even if coloring support is not auto-detected. (example: piping)'
)
tail_parser.add_argument(
'--disable-keep-open', action='store_true',
help="Exit after printing the last available log, rather than keeping the log open."
)
##
# Undeploy
##
undeploy_parser = subparsers.add_parser(
'undeploy', parents=[env_parser], help='Undeploy application.'
)
undeploy_parser.add_argument(
'--remove-logs', action='store_true',
            help=('Removes the API Gateway and Lambda log groups'
                  ' during the undeployment.'),
)
undeploy_parser.add_argument(
'-y', '--yes', action='store_true', help='Auto confirm yes.'
)
##
# Unschedule
##
subparsers.add_parser('unschedule', parents=[env_parser],
help='Unschedule functions.')
##
# Updating
##
update_parser = subparsers.add_parser(
'update', parents=[env_parser], help='Update deployed application.'
)
update_parser.add_argument(
'-z', '--zip', help='Update Lambda with specific local or S3 hosted zip package'
)
update_parser.add_argument(
            '-n', '--no-upload', action='store_true',
            help="Update configuration where appropriate, but don't upload new code"
)
##
# Debug
##
subparsers.add_parser(
'shell', parents=[env_parser], help='A debug shell with a loaded Zappa object.'
)
argcomplete.autocomplete(parser)
args = parser.parse_args(argv)
self.vargs = vars(args)
if args.color == 'never':
disable_click_colors()
elif args.color == 'always':
#TODO: Support aggressive coloring like "--force-color" on all commands
pass
elif args.color == 'auto':
pass
# Parse the input
# NOTE(rmoe): Special case for manage command
# The manage command can't have both stage_env and command_rest
# arguments. Since they are both positional arguments argparse can't
# differentiate the two. This causes problems when used with --all.
# (e.g. "manage --all showmigrations admin" argparse thinks --all has
# been specified AND that stage_env='showmigrations')
# By having command_rest collect everything but --all we can split it
# apart here instead of relying on argparse.
if not args.command:
parser.print_help()
return
if args.command == 'manage' and not self.vargs.get('all'):
self.stage_env = self.vargs['command_rest'].pop(0)
else:
self.stage_env = self.vargs.get('stage_env')
if args.command == 'package':
self.load_credentials = False
self.command = args.command
self.disable_progress = self.vargs.get('disable_progress')
if self.vargs.get('quiet'):
self.silence()
# We don't have any settings yet, so make those first!
# (Settings-based interactions will fail
# before a project has been initialized.)
if self.command == 'init':
self.init()
return
# Make sure there isn't a new version available
if not self.vargs.get('json'):
self.check_for_update()
# Load and Validate Settings File
self.load_settings_file(self.vargs.get('settings_file'))
# Should we execute this for all stages, or just one?
all_stages = self.vargs.get('all')
stages = []
if all_stages: # All stages!
stages = self.zappa_settings.keys()
else: # Just one env.
if not self.stage_env:
# If there's only one stage defined in the settings,
# use that as the default.
if len(self.zappa_settings.keys()) == 1:
stages.append(list(self.zappa_settings.keys())[0])
else:
parser.error("Please supply a stage to interact with.")
else:
stages.append(self.stage_env)
for stage in stages:
try:
self.dispatch_command(self.command, stage)
except ClickException as e:
# Discussion on exit codes: https://github.com/Miserlou/Zappa/issues/407
e.show()
sys.exit(e.exit_code)
def dispatch_command(self, command, stage):
"""
Given a command to execute and stage,
execute that command.
"""
self.api_stage = stage
if command not in ['status', 'manage']:
if not self.vargs.get('json', None):
click.echo("Calling " + click.style(command, fg="green", bold=True) + " for stage " +
click.style(self.api_stage, bold=True) + ".." )
# Explicitly define the app function.
# Related: https://github.com/Miserlou/Zappa/issues/832
if self.vargs.get('app_function', None):
self.app_function = self.vargs['app_function']
# Load our settings, based on api_stage.
try:
self.load_settings(self.vargs.get('settings_file'))
except ValueError as e:
if hasattr(e, 'message'):
print("Error: {}".format(e.message))
else:
print(str(e))
sys.exit(-1)
self.callback('settings')
# Hand it off
if command == 'deploy': # pragma: no cover
self.deploy(self.vargs['zip'])
if command == 'package': # pragma: no cover
self.package(self.vargs['output'])
if command == 'template': # pragma: no cover
self.template( self.vargs['lambda_arn'],
self.vargs['role_arn'],
output=self.vargs['output'],
json=self.vargs['json']
)
elif command == 'update': # pragma: no cover
self.update(self.vargs['zip'], self.vargs['no_upload'])
elif command == 'rollback': # pragma: no cover
self.rollback(self.vargs['num_rollback'])
elif command == 'invoke': # pragma: no cover
if not self.vargs.get('command_rest'):
print("Please enter the function to invoke.")
return
self.invoke(
self.vargs['command_rest'],
raw_python=self.vargs['raw'],
no_color=self.vargs['no_color'],
)
elif command == 'manage': # pragma: no cover
if not self.vargs.get('command_rest'):
print("Please enter the management command to invoke.")
return
if not self.django_settings:
print("This command is for Django projects only!")
print("If this is a Django project, please define django_settings in your zappa_settings.")
return
command_tail = self.vargs.get('command_rest')
            if len(command_tail) > 1:
                command = " ".join(command_tail) # ex: zappa manage dev showmigrations admin
            else:
                command = command_tail[0] # ex: zappa manage dev "shell --version"
self.invoke(
command,
command="manage",
no_color=self.vargs['no_color'],
)
elif command == 'tail': # pragma: no cover
self.tail(
colorize=(not self.vargs['no_color']),
http=self.vargs['http'],
non_http=self.vargs['non_http'],
since=self.vargs['since'],
filter_pattern=self.vargs['filter'],
force_colorize=self.vargs['force_color'] or None,
keep_open=not self.vargs['disable_keep_open']
)
elif command == 'undeploy': # pragma: no cover
self.undeploy(
no_confirm=self.vargs['yes'],
remove_logs=self.vargs['remove_logs']
)
elif command == 'schedule': # pragma: no cover
self.schedule()
elif command == 'unschedule': # pragma: no cover
self.unschedule()
elif command == 'status': # pragma: no cover
self.status(return_json=self.vargs['json'])
elif command == 'certify': # pragma: no cover
self.certify(
no_confirm=self.vargs['yes'],
manual=self.vargs['manual']
)
elif command == 'shell': # pragma: no cover
self.shell()
##
# The Commands
##
def package(self, output=None):
"""
Only build the package
"""
# Make sure we're in a venv.
self.check_venv()
# force not to delete the local zip
self.override_stage_config_setting('delete_local_zip', False)
# Execute the prebuild script
if self.prebuild_script:
self.execute_prebuild_script()
# Create the Lambda Zip
self.create_package(output)
self.callback('zip')
size = human_size(os.path.getsize(self.zip_path))
click.echo(click.style("Package created", fg="green", bold=True) + ": " + click.style(self.zip_path, bold=True) + " (" + size + ")")
def template(self, lambda_arn, role_arn, output=None, json=False):
"""
Only build the template file.
"""
if not lambda_arn:
raise ClickException("Lambda ARN is required to template.")
if not role_arn:
raise ClickException("Role ARN is required to template.")
self.zappa.credentials_arn = role_arn
# Create the template!
template = self.zappa.create_stack_template(
lambda_arn=lambda_arn,
lambda_name=self.lambda_name,
api_key_required=self.api_key_required,
iam_authorization=self.iam_authorization,
authorizer=self.authorizer,
cors_options=self.cors,
description=self.apigateway_description,
endpoint_configuration=self.endpoint_configuration
)
if not output:
template_file = self.lambda_name + '-template-' + str(int(time.time())) + '.json'
else:
template_file = output
with open(template_file, 'wb') as out:
out.write(bytes(template.to_json(indent=None, separators=(',',':')), "utf-8"))
if not json:
click.echo(click.style("Template created", fg="green", bold=True) + ": " + click.style(template_file, bold=True))
else:
with open(template_file, 'r') as out:
print(out.read())
def deploy(self, source_zip=None):
"""
Package your project, upload it to S3, register the Lambda function
and create the API Gateway routes.
"""
if not source_zip:
# Make sure we're in a venv.
self.check_venv()
# Execute the prebuild script
if self.prebuild_script:
self.execute_prebuild_script()
# Make sure this isn't already deployed.
deployed_versions = self.zappa.get_lambda_function_versions(self.lambda_name)
if len(deployed_versions) > 0:
raise ClickException("This application is " + click.style("already deployed", fg="red") +
" - did you mean to call " + click.style("update", bold=True) + "?")
# Make sure the necessary IAM execution roles are available
if self.manage_roles:
try:
self.zappa.create_iam_roles()
except botocore.client.ClientError as ce:
raise ClickException(
click.style("Failed", fg="red") + " to " + click.style("manage IAM roles", bold=True) + "!\n" +
"You may " + click.style("lack the necessary AWS permissions", bold=True) +
" to automatically manage a Zappa execution role.\n" +
click.style("Exception reported by AWS:", bold=True) + format(ce) + '\n' +
"To fix this, see here: " +
click.style(
"https://github.com/Miserlou/Zappa#custom-aws-iam-roles-and-policies-for-deployment",
bold=True)
+ '\n')
# Create the Lambda Zip
self.create_package()
self.callback('zip')
# Upload it to S3
success = self.zappa.upload_to_s3(
self.zip_path, self.s3_bucket_name, disable_progress=self.disable_progress)
if not success: # pragma: no cover
raise ClickException("Unable to upload to S3. Quitting.")
# If using a slim handler, upload it to S3 and tell lambda to use this slim handler zip
if self.stage_config.get('slim_handler', False):
# https://github.com/Miserlou/Zappa/issues/510
success = self.zappa.upload_to_s3(self.handler_path, self.s3_bucket_name, disable_progress=self.disable_progress)
if not success: # pragma: no cover
raise ClickException("Unable to upload handler to S3. Quitting.")
# Copy the project zip to the current project zip
current_project_name = '{0!s}_{1!s}_current_project.tar.gz'.format(self.api_stage, self.project_name)
success = self.zappa.copy_on_s3(src_file_name=self.zip_path, dst_file_name=current_project_name,
bucket_name=self.s3_bucket_name)
if not success: # pragma: no cover
raise ClickException("Unable to copy the zip to be the current project. Quitting.")
handler_file = self.handler_path
else:
handler_file = self.zip_path
# Fixes https://github.com/Miserlou/Zappa/issues/613
try:
self.lambda_arn = self.zappa.get_lambda_function(
function_name=self.lambda_name)
except botocore.client.ClientError:
# Register the Lambda function with that zip as the source
# You'll also need to define the path to your lambda_handler code.
kwargs = dict(
handler=self.lambda_handler,
description=self.lambda_description,
vpc_config=self.vpc_config,
dead_letter_config=self.dead_letter_config,
timeout=self.timeout_seconds,
memory_size=self.memory_size,
runtime=self.runtime,
aws_environment_variables=self.aws_environment_variables,
aws_kms_key_arn=self.aws_kms_key_arn,
use_alb=self.use_alb,
layers=self.layers,
concurrency=self.lambda_concurrency,
)
if source_zip and source_zip.startswith('s3://'):
bucket, key_name = parse_s3_url(source_zip)
kwargs['function_name'] = self.lambda_name
kwargs['bucket'] = bucket
kwargs['s3_key'] = key_name
elif source_zip and not source_zip.startswith('s3://'):
with open(source_zip, mode='rb') as fh:
byte_stream = fh.read()
kwargs['function_name'] = self.lambda_name
kwargs['local_zip'] = byte_stream
else:
kwargs['function_name'] = self.lambda_name
kwargs['bucket'] = self.s3_bucket_name
kwargs['s3_key'] = handler_file
self.lambda_arn = self.zappa.create_lambda_function(**kwargs)
# Schedule events for this deployment
self.schedule()
endpoint_url = ''
deployment_string = click.style("Deployment complete", fg="green", bold=True) + "!"
if self.use_alb:
kwargs = dict(
lambda_arn=self.lambda_arn,
lambda_name=self.lambda_name,
alb_vpc_config=self.alb_vpc_config,
timeout=self.timeout_seconds
)
self.zappa.deploy_lambda_alb(**kwargs)
if self.use_apigateway:
# Create and configure the API Gateway
template = self.zappa.create_stack_template(
lambda_arn=self.lambda_arn,
lambda_name=self.lambda_name,
api_key_required=self.api_key_required,
iam_authorization=self.iam_authorization,
authorizer=self.authorizer,
cors_options=self.cors,
description=self.apigateway_description,
endpoint_configuration=self.endpoint_configuration
)
self.zappa.update_stack(
self.lambda_name,
self.s3_bucket_name,
wait=True,
disable_progress=self.disable_progress
)
api_id = self.zappa.get_api_id(self.lambda_name)
# Add binary support
if self.binary_support:
self.zappa.add_binary_support(api_id=api_id, cors=self.cors)
# Add payload compression
if self.stage_config.get('payload_compression', True):
self.zappa.add_api_compression(
api_id=api_id,
min_compression_size=self.stage_config.get('payload_minimum_compression_size', 0))
# Deploy the API!
endpoint_url = self.deploy_api_gateway(api_id)
deployment_string = deployment_string + ": {}".format(endpoint_url)
# Create/link API key
if self.api_key_required:
if self.api_key is None:
self.zappa.create_api_key(api_id=api_id, stage_name=self.api_stage)
else:
self.zappa.add_api_stage_to_api_key(api_key=self.api_key, api_id=api_id, stage_name=self.api_stage)
if self.stage_config.get('touch', True):
self.touch_endpoint(endpoint_url)
# Finally, delete the local copy our zip package
if not source_zip:
if self.stage_config.get('delete_local_zip', True):
self.remove_local_zip()
# Remove the project zip from S3.
if not source_zip:
self.remove_uploaded_zip()
self.callback('post')
click.echo(deployment_string)
def update(self, source_zip=None, no_upload=False):
"""
Repackage and update the function code.
"""
if not source_zip:
# Make sure we're in a venv.
self.check_venv()
# Execute the prebuild script
if self.prebuild_script:
self.execute_prebuild_script()
# Temporary version check
try:
                # Unix timestamp for 2016-08-30 UTC; deployments older than this
                # predate a breaking change and require a full redeploy.
                updated_time = 1472581018
function_response = self.zappa.lambda_client.get_function(FunctionName=self.lambda_name)
conf = function_response['Configuration']
last_updated = parser.parse(conf['LastModified'])
last_updated_unix = time.mktime(last_updated.timetuple())
except botocore.exceptions.BotoCoreError as e:
click.echo(click.style(type(e).__name__, fg="red") + ": " + e.args[0])
sys.exit(-1)
except Exception as e:
click.echo(click.style("Warning!", fg="red") + " Couldn't get function " + self.lambda_name +
" in " + self.zappa.aws_region + " - have you deployed yet?")
sys.exit(-1)
if last_updated_unix <= updated_time:
click.echo(click.style("Warning!", fg="red") +
" You may have upgraded Zappa since deploying this application. You will need to " +
click.style("redeploy", bold=True) + " for this deployment to work properly!")
# Make sure the necessary IAM execution roles are available
if self.manage_roles:
try:
self.zappa.create_iam_roles()
except botocore.client.ClientError:
click.echo(click.style("Failed", fg="red") + " to " + click.style("manage IAM roles", bold=True) + "!")
click.echo("You may " + click.style("lack the necessary AWS permissions", bold=True) +
" to automatically manage a Zappa execution role.")
click.echo("To fix this, see here: " +
click.style("https://github.com/Miserlou/Zappa#custom-aws-iam-roles-and-policies-for-deployment",
bold=True))
sys.exit(-1)
# Create the Lambda Zip,
if not no_upload:
self.create_package()
self.callback('zip')
# Upload it to S3
if not no_upload:
success = self.zappa.upload_to_s3(self.zip_path, self.s3_bucket_name, disable_progress=self.disable_progress)
if not success: # pragma: no cover
raise ClickException("Unable to upload project to S3. Quitting.")
# If using a slim handler, upload it to S3 and tell lambda to use this slim handler zip
if self.stage_config.get('slim_handler', False):
# https://github.com/Miserlou/Zappa/issues/510
success = self.zappa.upload_to_s3(self.handler_path, self.s3_bucket_name, disable_progress=self.disable_progress)
if not success: # pragma: no cover
raise ClickException("Unable to upload handler to S3. Quitting.")
# Copy the project zip to the current project zip
current_project_name = '{0!s}_{1!s}_current_project.tar.gz'.format(self.api_stage, self.project_name)
success = self.zappa.copy_on_s3(src_file_name=self.zip_path, dst_file_name=current_project_name,
bucket_name=self.s3_bucket_name)
if not success: # pragma: no cover
raise ClickException("Unable to copy the zip to be the current project. Quitting.")
handler_file = self.handler_path
else:
handler_file = self.zip_path
# Register the Lambda function with that zip as the source
# You'll also need to define the path to your lambda_handler code.
kwargs = dict(
bucket=self.s3_bucket_name,
function_name=self.lambda_name,
num_revisions=self.num_retained_versions,
concurrency=self.lambda_concurrency,
)
if source_zip and source_zip.startswith('s3://'):
bucket, key_name = parse_s3_url(source_zip)
kwargs.update(dict(
bucket=bucket,
s3_key=key_name
))
self.lambda_arn = self.zappa.update_lambda_function(**kwargs)
elif source_zip and not source_zip.startswith('s3://'):
with open(source_zip, mode='rb') as fh:
byte_stream = fh.read()
kwargs['local_zip'] = byte_stream
self.lambda_arn = self.zappa.update_lambda_function(**kwargs)
else:
if not no_upload:
kwargs['s3_key'] = handler_file
self.lambda_arn = self.zappa.update_lambda_function(**kwargs)
# Remove the uploaded zip from S3, because it is now registered..
if not source_zip and not no_upload:
self.remove_uploaded_zip()
# Update the configuration, in case there are changes.
self.lambda_arn = self.zappa.update_lambda_configuration(
lambda_arn=self.lambda_arn,
function_name=self.lambda_name,
handler=self.lambda_handler,
description=self.lambda_description,
vpc_config=self.vpc_config,
timeout=self.timeout_seconds,
memory_size=self.memory_size,
runtime=self.runtime,
aws_environment_variables=self.aws_environment_variables,
aws_kms_key_arn=self.aws_kms_key_arn,
layers=self.layers
)
# Finally, delete the local copy our zip package
if not source_zip and not no_upload:
if self.stage_config.get('delete_local_zip', True):
self.remove_local_zip()
if self.use_apigateway:
self.zappa.create_stack_template(
lambda_arn=self.lambda_arn,
lambda_name=self.lambda_name,
api_key_required=self.api_key_required,
iam_authorization=self.iam_authorization,
authorizer=self.authorizer,
cors_options=self.cors,
description=self.apigateway_description,
endpoint_configuration=self.endpoint_configuration
)
self.zappa.update_stack(
self.lambda_name,
self.s3_bucket_name,
wait=True,
update_only=True,
disable_progress=self.disable_progress)
api_id = self.zappa.get_api_id(self.lambda_name)
# Update binary support
if self.binary_support:
self.zappa.add_binary_support(api_id=api_id, cors=self.cors)
else:
self.zappa.remove_binary_support(api_id=api_id, cors=self.cors)
if self.stage_config.get('payload_compression', True):
self.zappa.add_api_compression(
api_id=api_id,
min_compression_size=self.stage_config.get('payload_minimum_compression_size', 0))
else:
self.zappa.remove_api_compression(api_id=api_id)
# It looks a bit like we might actually be using this just to get the URL,
# but we're also updating a few of the APIGW settings.
endpoint_url = self.deploy_api_gateway(api_id)
if self.stage_config.get('domain', None):
endpoint_url = self.stage_config.get('domain')
else:
endpoint_url = None
self.schedule()
# Update any cognito pool with the lambda arn
# do this after schedule as schedule clears the lambda policy and we need to add one
self.update_cognito_triggers()
self.callback('post')
if endpoint_url and 'https://' not in endpoint_url:
endpoint_url = 'https://' + endpoint_url
if self.base_path:
endpoint_url += '/' + self.base_path
deployed_string = "Your updated Zappa deployment is " + click.style("live", fg='green', bold=True) + "!"
if self.use_apigateway:
deployed_string = deployed_string + ": " + click.style("{}".format(endpoint_url), bold=True)
api_url = None
if endpoint_url and 'amazonaws.com' not in endpoint_url:
api_url = self.zappa.get_api_url(
self.lambda_name,
self.api_stage)
if endpoint_url != api_url:
deployed_string = deployed_string + " (" + api_url + ")"
if self.stage_config.get('touch', True):
if api_url:
self.touch_endpoint(api_url)
elif endpoint_url:
self.touch_endpoint(endpoint_url)
click.echo(deployed_string)
def rollback(self, revision):
"""
        Rolls back the currently deployed lambda code to a previous revision.
"""
print("Rolling back..")
self.zappa.rollback_lambda_function_version(
self.lambda_name, versions_back=revision)
print("Done!")
def tail(self, since, filter_pattern, limit=10000, keep_open=True, colorize=True, http=False, non_http=False, force_colorize=False):
"""
Tail this function's logs.
if keep_open, do so repeatedly, printing any new logs
"""
try:
since_stamp = string_to_timestamp(since)
last_since = since_stamp
while True:
new_logs = self.zappa.fetch_logs(
self.lambda_name,
start_time=since_stamp,
limit=limit,
filter_pattern=filter_pattern,
)
new_logs = [ e for e in new_logs if e['timestamp'] > last_since ]
self.print_logs(new_logs, colorize, http, non_http, force_colorize)
if not keep_open:
break
if new_logs:
last_since = new_logs[-1]['timestamp']
time.sleep(1)
except KeyboardInterrupt: # pragma: no cover
# Die gracefully
try:
sys.exit(0)
except SystemExit:
os._exit(130)
def undeploy(self, no_confirm=False, remove_logs=False):
"""
Tear down an existing deployment.
"""
if not no_confirm: # pragma: no cover
confirm = input("Are you sure you want to undeploy? [y/n] ")
if confirm != 'y':
return
if self.use_alb:
self.zappa.undeploy_lambda_alb(self.lambda_name)
if self.use_apigateway:
if remove_logs:
self.zappa.remove_api_gateway_logs(self.lambda_name)
domain_name = self.stage_config.get('domain', None)
base_path = self.stage_config.get('base_path', None)
# Only remove the api key when not specified
if self.api_key_required and self.api_key is None:
api_id = self.zappa.get_api_id(self.lambda_name)
self.zappa.remove_api_key(api_id, self.api_stage)
gateway_id = self.zappa.undeploy_api_gateway(
self.lambda_name,
domain_name=domain_name,
base_path=base_path
)
self.unschedule() # removes event triggers, including warm up event.
self.zappa.delete_lambda_function(self.lambda_name)
if remove_logs:
self.zappa.remove_lambda_function_logs(self.lambda_name)
click.echo(click.style("Done", fg="green", bold=True) + "!")
def update_cognito_triggers(self):
"""
Update any cognito triggers
"""
if self.cognito:
user_pool = self.cognito.get('user_pool')
triggers = self.cognito.get('triggers', [])
lambda_configs = set()
for trigger in triggers:
lambda_configs.add(trigger['source'].split('_')[0])
self.zappa.update_cognito(self.lambda_name, user_pool, lambda_configs, self.lambda_arn)
def schedule(self):
"""
        Given a list of functions and a schedule to execute them,
        set up regular execution.
"""
events = self.stage_config.get('events', [])
if events:
if not isinstance(events, list): # pragma: no cover
print("Events must be supplied as a list.")
return
for event in events:
self.collision_warning(event.get('function'))
if self.stage_config.get('keep_warm', True):
if not events:
events = []
keep_warm_rate = self.stage_config.get('keep_warm_expression', "rate(4 minutes)")
events.append({'name': 'zappa-keep-warm',
'function': 'handler.keep_warm_callback',
'expression': keep_warm_rate,
'description': 'Zappa Keep Warm - {}'.format(self.lambda_name)})
if events:
try:
function_response = self.zappa.lambda_client.get_function(FunctionName=self.lambda_name)
except botocore.exceptions.ClientError as e: # pragma: no cover
                click.echo(click.style("Function does not exist", fg="yellow") + ", please " +
                           click.style("deploy", bold=True) + " first. Ex: " +
                           click.style("zappa deploy {}.".format(self.api_stage), bold=True))
sys.exit(-1)
print("Scheduling..")
self.zappa.schedule_events(
lambda_arn=function_response['Configuration']['FunctionArn'],
lambda_name=self.lambda_name,
events=events
)
# Add async tasks SNS
if self.stage_config.get('async_source', None) == 'sns' \
and self.stage_config.get('async_resources', True):
self.lambda_arn = self.zappa.get_lambda_function(
function_name=self.lambda_name)
topic_arn = self.zappa.create_async_sns_topic(
lambda_name=self.lambda_name,
lambda_arn=self.lambda_arn
)
click.echo('SNS Topic created: %s' % topic_arn)
# Add async tasks DynamoDB
table_name = self.stage_config.get('async_response_table', False)
read_capacity = self.stage_config.get('async_response_table_read_capacity', 1)
write_capacity = self.stage_config.get('async_response_table_write_capacity', 1)
if table_name and self.stage_config.get('async_resources', True):
created, response_table = self.zappa.create_async_dynamodb_table(
table_name, read_capacity, write_capacity)
if created:
click.echo('DynamoDB table created: %s' % table_name)
else:
click.echo('DynamoDB table exists: %s' % table_name)
provisioned_throughput = response_table['Table']['ProvisionedThroughput']
if provisioned_throughput['ReadCapacityUnits'] != read_capacity or \
provisioned_throughput['WriteCapacityUnits'] != write_capacity:
click.echo(click.style(
"\nWarning! Existing DynamoDB table ({}) does not match configured capacity.\n".format(table_name),
fg='red'
))
def unschedule(self):
"""
        Given a list of scheduled functions,
tear down their regular execution.
"""
# Run even if events are not defined to remove previously existing ones (thus default to []).
events = self.stage_config.get('events', [])
if not isinstance(events, list): # pragma: no cover
print("Events must be supplied as a list.")
return
function_arn = None
try:
function_response = self.zappa.lambda_client.get_function(FunctionName=self.lambda_name)
function_arn = function_response['Configuration']['FunctionArn']
except botocore.exceptions.ClientError as e: # pragma: no cover
            raise ClickException("Function does not exist, you should deploy first. "
                                 "Ex: zappa deploy {}.".format(self.api_stage))
print("Unscheduling..")
self.zappa.unschedule_events(
lambda_name=self.lambda_name,
lambda_arn=function_arn,
events=events,
)
# Remove async task SNS
if self.stage_config.get('async_source', None) == 'sns' \
and self.stage_config.get('async_resources', True):
removed_arns = self.zappa.remove_async_sns_topic(self.lambda_name)
click.echo('SNS Topic removed: %s' % ', '.join(removed_arns))
def invoke(self, function_name, raw_python=False, command=None, no_color=False):
"""
Invoke a remote function.
"""
# There are three likely scenarios for 'command' here:
# command, which is a modular function path
# raw_command, which is a string of python to execute directly
# manage, which is a Django-specific management command invocation
key = command if command is not None else 'command'
if raw_python:
command = {'raw_command': function_name}
else:
command = {key: function_name}
        # Can't use the module-level hjson import here; the payload must be plain JSON.
        import json
response = self.zappa.invoke_lambda_function(
self.lambda_name,
json.dumps(command),
invocation_type='RequestResponse',
)
if 'LogResult' in response:
if no_color:
print(base64.b64decode(response['LogResult']))
else:
decoded = base64.b64decode(response['LogResult']).decode()
formatted = self.format_invoke_command(decoded)
colorized = self.colorize_invoke_command(formatted)
print(colorized)
else:
print(response)
# For a successful request FunctionError is not in response.
# https://github.com/Miserlou/Zappa/pull/1254/
if 'FunctionError' in response:
raise ClickException(
"{} error occurred while invoking command.".format(response['FunctionError'])
)
def format_invoke_command(self, string):
"""
Formats correctly the string output from the invoke() method,
replacing line breaks and tabs when necessary.
"""
        string = string.replace('\\n', '\n')
        formatted_response = ''
        for line in string.splitlines():
            if line.startswith('REPORT'):
                line = line.replace('\t', '\n')
            if line.startswith('[DEBUG]'):
                line = line.replace('\t', ' ')
            formatted_response += line + '\n'
        formatted_response = formatted_response.replace('\n\n', '\n')
        return formatted_response
def colorize_invoke_command(self, string):
"""
        Apply various heuristics to return a colorized version of the invoke
        command string. If these fail, simply return the string in plaintext.
Inspired by colorize_log_entry().
"""
final_string = string
try:
# Line headers
try:
for token in ['START', 'END', 'REPORT', '[DEBUG]']:
if token in final_string:
format_string = '[{}]'
# match whole words only
pattern = r'\b{}\b'
if token == '[DEBUG]':
format_string = '{}'
pattern = re.escape(token)
repl = click.style(
format_string.format(token),
bold=True,
fg='cyan'
)
final_string = re.sub(
pattern.format(token), repl, final_string
)
except Exception: # pragma: no cover
pass
# Green bold Tokens
try:
for token in [
'Zappa Event:',
'RequestId:',
'Version:',
'Duration:',
'Billed',
'Memory Size:',
'Max Memory Used:'
]:
if token in final_string:
final_string = final_string.replace(token, click.style(
token,
bold=True,
fg='green'
))
except Exception: # pragma: no cover
pass
# UUIDs
for token in final_string.replace('\t', ' ').split(' '):
try:
if token.count('-') == 4 and token.replace('-', '').isalnum():
final_string = final_string.replace(
token,
click.style(token, fg='magenta')
)
except Exception: # pragma: no cover
pass
return final_string
except Exception:
return string
def status(self, return_json=False):
"""
Describe the status of the current deployment.
"""
def tabular_print(title, value):
"""
            Convenience function for printing formatted table items.
"""
click.echo('%-*s%s' % (32, click.style("\t" + title, fg='green') + ':', str(value)))
return
# Lambda Env Details
lambda_versions = self.zappa.get_lambda_function_versions(self.lambda_name)
if not lambda_versions:
raise ClickException(click.style("No Lambda %s detected in %s - have you deployed yet?" %
(self.lambda_name, self.zappa.aws_region), fg='red'))
status_dict = collections.OrderedDict()
status_dict["Lambda Versions"] = len(lambda_versions)
function_response = self.zappa.lambda_client.get_function(FunctionName=self.lambda_name)
conf = function_response['Configuration']
self.lambda_arn = conf['FunctionArn']
status_dict["Lambda Name"] = self.lambda_name
status_dict["Lambda ARN"] = self.lambda_arn
status_dict["Lambda Role ARN"] = conf['Role']
status_dict["Lambda Handler"] = conf['Handler']
status_dict["Lambda Code Size"] = conf['CodeSize']
status_dict["Lambda Version"] = conf['Version']
status_dict["Lambda Last Modified"] = conf['LastModified']
status_dict["Lambda Memory Size"] = conf['MemorySize']
status_dict["Lambda Timeout"] = conf['Timeout']
status_dict["Lambda Runtime"] = conf['Runtime']
if 'VpcConfig' in conf.keys():
status_dict["Lambda VPC ID"] = conf.get('VpcConfig', {}).get('VpcId', 'Not assigned')
else:
status_dict["Lambda VPC ID"] = None
# Calculated statistics
try:
function_invocations = self.zappa.cloudwatch.get_metric_statistics(
Namespace='AWS/Lambda',
MetricName='Invocations',
StartTime=datetime.utcnow()-timedelta(days=1),
EndTime=datetime.utcnow(),
Period=1440,
Statistics=['Sum'],
Dimensions=[{'Name': 'FunctionName',
'Value': '{}'.format(self.lambda_name)}]
)['Datapoints'][0]['Sum']
except Exception as e:
function_invocations = 0
try:
function_errors = self.zappa.cloudwatch.get_metric_statistics(
Namespace='AWS/Lambda',
MetricName='Errors',
StartTime=datetime.utcnow()-timedelta(days=1),
EndTime=datetime.utcnow(),
Period=1440,
Statistics=['Sum'],
Dimensions=[{'Name': 'FunctionName',
'Value': '{}'.format(self.lambda_name)}]
)['Datapoints'][0]['Sum']
except Exception as e:
function_errors = 0
try:
error_rate = "{0:.2f}%".format(function_errors / function_invocations * 100)
except:
error_rate = "Error calculating"
status_dict["Invocations (24h)"] = int(function_invocations)
status_dict["Errors (24h)"] = int(function_errors)
status_dict["Error Rate (24h)"] = error_rate
# URLs
if self.use_apigateway:
api_url = self.zappa.get_api_url(
self.lambda_name,
self.api_stage)
status_dict["API Gateway URL"] = api_url
# Api Keys
api_id = self.zappa.get_api_id(self.lambda_name)
for api_key in self.zappa.get_api_keys(api_id, self.api_stage):
status_dict["API Gateway x-api-key"] = api_key
# There literally isn't a better way to do this.
            # AWS provides no way to tie an APIGW domain name to its Lambda function.
domain_url = self.stage_config.get('domain', None)
base_path = self.stage_config.get('base_path', None)
if domain_url:
status_dict["Domain URL"] = 'https://' + domain_url
if base_path:
status_dict["Domain URL"] += '/' + base_path
else:
status_dict["Domain URL"] = "None Supplied"
# Scheduled Events
event_rules = self.zappa.get_event_rules_for_lambda(lambda_arn=self.lambda_arn)
status_dict["Num. Event Rules"] = len(event_rules)
if len(event_rules) > 0:
status_dict['Events'] = []
for rule in event_rules:
event_dict = {}
rule_name = rule['Name']
event_dict["Event Rule Name"] = rule_name
event_dict["Event Rule Schedule"] = rule.get('ScheduleExpression', None)
event_dict["Event Rule State"] = rule.get('State', None).title()
event_dict["Event Rule ARN"] = rule.get('Arn', None)
status_dict['Events'].append(event_dict)
if return_json:
# Put the status in machine-readable format
# https://github.com/Miserlou/Zappa/issues/407
print(json.dumpsJSON(status_dict))
else:
click.echo("Status for " + click.style(self.lambda_name, bold=True) + ": ")
for k, v in status_dict.items():
if k == 'Events':
# Events are a list of dicts
for event in v:
for item_k, item_v in event.items():
tabular_print(item_k, item_v)
else:
tabular_print(k, v)
# TODO: S3/SQS/etc. type events?
return True
def check_stage_name(self, stage_name):
"""
Make sure the stage name matches the AWS-allowed pattern
(calls to apigateway_client.create_deployment will fail with the error
message "ClientError: An error occurred (BadRequestException) when
calling the CreateDeployment operation: Stage name only allows
a-zA-Z0-9_" if the pattern does not match)
"""
if self.stage_name_env_pattern.match(stage_name):
return True
raise ValueError("AWS requires stage name to match a-zA-Z0-9_")
def check_environment(self, environment):
"""
Make sure the environment contains only strings
(since putenv needs a string)
"""
non_strings = []
for (k,v) in environment.items():
if not isinstance(v, str):  # Python 3: os.environ values must be str
non_strings.append(k)
if non_strings:
raise ValueError("The following environment variables are not strings: {}".format(", ".join(non_strings)))
else:
return True
def init(self, settings_file="zappa_settings.json"):
"""
Initialize a new Zappa project by creating a new zappa_settings.json in a guided process.
This should probably be broken up into a few separate components once it's stable.
Testing these inputs requires monkeypatching with mock, which isn't pretty.
"""
# Make sure we're in a venv.
self.check_venv()
# Ensure that we don't already have a zappa_settings file.
if os.path.isfile(settings_file):
raise ClickException("This project already has a " + click.style("{0!s} file".format(settings_file), fg="red", bold=True) + "!")
# Explain system.
click.echo(click.style("""\n███████╗ █████╗ ██████╗ ██████╗ █████╗
╚══███╔╝██╔══██╗██╔══██╗██╔══██╗██╔══██╗
███╔╝ ███████║██████╔╝██████╔╝███████║
███╔╝ ██╔══██║██╔═══╝ ██╔═══╝ ██╔══██║
███████╗██║ ██║██║ ██║ ██║ ██║
╚══════╝╚═╝ ╚═╝╚═╝ ╚═╝ ╚═╝ ╚═╝\n""", fg='green', bold=True))
click.echo(click.style("Welcome to ", bold=True) + click.style("Zappa", fg='green', bold=True) + click.style("!\n", bold=True))
click.echo(click.style("Zappa", bold=True) + " is a system for running server-less Python web applications"
" on AWS Lambda and AWS API Gateway.")
click.echo("This `init` command will help you create and configure your new Zappa deployment.")
click.echo("Let's get started!\n")
# Create Env
while True:
click.echo("Your Zappa configuration can support multiple production stages, like '" +
click.style("dev", bold=True) + "', '" + click.style("staging", bold=True) + "', and '" +
click.style("production", bold=True) + "'.")
env = input("What do you want to call this environment (default 'dev'): ") or "dev"
try:
self.check_stage_name(env)
break
except ValueError:
click.echo(click.style("Stage names must match a-zA-Z0-9_", fg="red"))
# Detect AWS profiles and regions
# If anyone knows a more straightforward way to easily detect and parse AWS profiles I'm happy to change this, feels like a hack
session = botocore.session.Session()
config = session.full_config
profiles = config.get("profiles", {})
profile_names = list(profiles.keys())
click.echo("\nAWS Lambda and API Gateway are only available in certain regions. "\
"Let's check to make sure you have a profile set up in one that will work.")
if not profile_names:
profile_name, profile = None, None
click.echo("We couldn't find an AWS profile to use. Before using Zappa, you'll need to set one up. See here for more info: {}"
.format(click.style(BOTO3_CONFIG_DOCS_URL, fg="blue", underline=True)))
elif len(profile_names) == 1:
profile_name = profile_names[0]
profile = profiles[profile_name]
click.echo("Okay, using profile {}!".format(click.style(profile_name, bold=True)))
else:
if "default" in profile_names:
default_profile = [p for p in profile_names if p == "default"][0]
else:
default_profile = profile_names[0]
while True:
profile_name = input("We found the following profiles: {}, and {}. "\
"Which would you like us to use? (default '{}'): "
.format(
', '.join(profile_names[:-1]),
profile_names[-1],
default_profile
)) or default_profile
if profile_name in profiles:
profile = profiles[profile_name]
break
else:
click.echo("Please enter a valid name for your AWS profile.")
profile_region = profile.get("region") if profile else None
# Create Bucket
click.echo("\nYour Zappa deployments will need to be uploaded to a " + click.style("private S3 bucket", bold=True) + ".")
click.echo("If you don't have a bucket yet, we'll create one for you too.")
default_bucket = "zappa-" + ''.join(random.choice(string.ascii_lowercase + string.digits) for _ in range(9))
while True:
bucket = input("What do you want to call your bucket? (default '%s'): " % default_bucket) or default_bucket
if is_valid_bucket_name(bucket):
break
click.echo(click.style("Invalid bucket name!", bold=True))
click.echo("S3 buckets must be named according to the following rules:")
click.echo("""* Bucket names must be unique across all existing bucket names in Amazon S3.
* Bucket names must comply with DNS naming conventions.
* Bucket names must be at least 3 and no more than 63 characters long.
* Bucket names must not contain uppercase characters or underscores.
* Bucket names must start with a lowercase letter or number.
* Bucket names must be a series of one or more labels. Adjacent labels are separated
by a single period (.). Bucket names can contain lowercase letters, numbers, and
hyphens. Each label must start and end with a lowercase letter or a number.
* Bucket names must not be formatted as an IP address (for example, 192.168.5.4).
* When you use virtual hosted–style buckets with Secure Sockets Layer (SSL), the SSL
wildcard certificate only matches buckets that don't contain periods. To work around
this, use HTTP or write your own certificate verification logic. We recommend that
you do not use periods (".") in bucket names when using virtual hosted–style buckets.
""")
# Detect Django/Flask
try: # pragma: no cover
import django
has_django = True
except ImportError as e:
has_django = False
try: # pragma: no cover
import flask
has_flask = True
except ImportError as e:
has_flask = False
print('')
# App-specific
if has_django: # pragma: no cover
click.echo("It looks like this is a " + click.style("Django", bold=True) + " application!")
click.echo("What is the " + click.style("module path", bold=True) + " to your projects's Django settings?")
django_settings = None
matches = detect_django_settings()
while django_settings in [None, '']:
if matches:
click.echo("We discovered: " + click.style(', '.join('{}'.format(i) for v, i in enumerate(matches)), bold=True))
django_settings = input("Where are your project's settings? (default '%s'): " % matches[0]) or matches[0]
else:
click.echo("(This will likely be something like 'your_project.settings')")
django_settings = input("Where are your project's settings?: ")
django_settings = django_settings.replace("'", "")
django_settings = django_settings.replace('"', "")
else:
matches = None
if has_flask:
click.echo("It looks like this is a " + click.style("Flask", bold=True) + " application.")
matches = detect_flask_apps()
click.echo("What's the " + click.style("modular path", bold=True) + " to your app's function?")
click.echo("This will likely be something like 'your_module.app'.")
app_function = None
while app_function in [None, '']:
if matches:
click.echo("We discovered: " + click.style(', '.join('{}'.format(i) for v, i in enumerate(matches)), bold=True))
app_function = input("Where is your app's function? (default '%s'): " % matches[0]) or matches[0]
else:
app_function = input("Where is your app's function?: ")
app_function = app_function.replace("'", "")
app_function = app_function.replace('"', "")
# TODO: Create VPC?
# Memory size? Time limit?
# Domain? LE keys? Region?
# 'Advanced Settings' mode?
# Globalize
click.echo("\nYou can optionally deploy to " + click.style("all available regions", bold=True) + " in order to provide fast global service.")
click.echo("If you are using Zappa for the first time, you probably don't want to do this!")
global_deployment = False
while True:
global_type = input("Would you like to deploy this application " + click.style("globally", bold=True) + "? (default 'n') [y/n/(p)rimary]: ")
if not global_type:
break
if global_type.lower() in ["y", "yes", "p", "primary"]:
global_deployment = True
break
if global_type.lower() in ["n", "no"]:
global_deployment = False
break
# The given environment name
zappa_settings = {
env: {
'profile_name': profile_name,
's3_bucket': bucket,
'runtime': get_venv_from_python_version(),
'project_name': self.get_project_name()
}
}
if profile_region:
zappa_settings[env]['aws_region'] = profile_region
if has_django:
zappa_settings[env]['django_settings'] = django_settings
else:
zappa_settings[env]['app_function'] = app_function
# Global Region Deployment
if global_deployment:
additional_regions = [r for r in API_GATEWAY_REGIONS if r != profile_region]
# Create additional stages
if global_type.lower() in ["p", "primary"]:
additional_regions = [r for r in additional_regions if '-1' in r]
for region in additional_regions:
env_name = env + '_' + region.replace('-', '_')
g_env = {
env_name: {
'extends': env,
'aws_region': region
}
}
zappa_settings.update(g_env)
import json as json # hjson is fine for loading, not fine for writing.
zappa_settings_json = json.dumps(zappa_settings, sort_keys=True, indent=4)
click.echo("\nOkay, here's your " + click.style("zappa_settings.json", bold=True) + ":\n")
click.echo(click.style(zappa_settings_json, fg="yellow", bold=False))
confirm = input("\nDoes this look " + click.style("okay", bold=True, fg="green") + "? (default 'y') [y/n]: ") or 'yes'
if confirm[0].lower() != 'y':
click.echo("" + click.style("Sorry", bold=True, fg='red') + " to hear that! Please init again.")
return
# Write
with open("zappa_settings.json", "w") as zappa_settings_file:
zappa_settings_file.write(zappa_settings_json)
if global_deployment:
click.echo("\n" + click.style("Done", bold=True) + "! You can also " + click.style("deploy all", bold=True) + " by executing:\n")
click.echo(click.style("\t$ zappa deploy --all", bold=True))
click.echo("\nAfter that, you can " + click.style("update", bold=True) + " your application code with:\n")
click.echo(click.style("\t$ zappa update --all", bold=True))
else:
click.echo("\n" + click.style("Done", bold=True) + "! Now you can " + click.style("deploy", bold=True) + " your Zappa application by executing:\n")
click.echo(click.style("\t$ zappa deploy %s" % env, bold=True))
click.echo("\nAfter that, you can " + click.style("update", bold=True) + " your application code with:\n")
click.echo(click.style("\t$ zappa update %s" % env, bold=True))
click.echo("\nTo learn more, check out our project page on " + click.style("GitHub", bold=True) +
" here: " + click.style("https://github.com/Miserlou/Zappa", fg="cyan", bold=True))
click.echo("and stop by our " + click.style("Slack", bold=True) + " channel here: " +
click.style("https://slack.zappa.io", fg="cyan", bold=True))
click.echo("\nEnjoy!,")
click.echo(" ~ Team " + click.style("Zappa", bold=True) + "!")
return
def certify(self, no_confirm=True, manual=False):
"""
Register or update a domain certificate for this env.
"""
if not self.domain:
raise ClickException("Can't certify a domain without " + click.style("domain", fg="red", bold=True) + " configured!")
if not no_confirm: # pragma: no cover
confirm = input("Are you sure you want to certify? [y/n] ")
if confirm != 'y':
return
# Make sure this isn't already deployed.
deployed_versions = self.zappa.get_lambda_function_versions(self.lambda_name)
if len(deployed_versions) == 0:
raise ClickException("This application " + click.style("isn't deployed yet", fg="red") +
" - did you mean to call " + click.style("deploy", bold=True) + "?")
account_key_location = self.stage_config.get('lets_encrypt_key', None)
cert_location = self.stage_config.get('certificate', None)
cert_key_location = self.stage_config.get('certificate_key', None)
cert_chain_location = self.stage_config.get('certificate_chain', None)
cert_arn = self.stage_config.get('certificate_arn', None)
base_path = self.stage_config.get('base_path', None)
# These are sensitive
certificate_body = None
certificate_private_key = None
certificate_chain = None
# Prepare for custom Let's Encrypt
if not cert_location and not cert_arn:
if not account_key_location:
raise ClickException("Can't certify a domain without " + click.style("lets_encrypt_key", fg="red", bold=True) +
" or " + click.style("certificate", fg="red", bold=True)+
" or " + click.style("certificate_arn", fg="red", bold=True) + " configured!")
# Download or copy the account key to <tempdir>/account.key
from .letsencrypt import gettempdir
if account_key_location.startswith('s3://'):
bucket, key_name = parse_s3_url(account_key_location)
self.zappa.s3_client.download_file(bucket, key_name, os.path.join(gettempdir(), 'account.key'))
else:
from shutil import copyfile
copyfile(account_key_location, os.path.join(gettempdir(), 'account.key'))
# Prepare for Custom SSL
elif not account_key_location and not cert_arn:
if not cert_location or not cert_key_location or not cert_chain_location:
raise ClickException("Can't certify a domain without " +
click.style("certificate, certificate_key and certificate_chain", fg="red", bold=True) + " configured!")
# Read the supplied certificates.
with open(cert_location) as f:
certificate_body = f.read()
with open(cert_key_location) as f:
certificate_private_key = f.read()
with open(cert_chain_location) as f:
certificate_chain = f.read()
click.echo("Certifying domain " + click.style(self.domain, fg="green", bold=True) + "..")
# Get cert and update domain.
# Let's Encrypt
if not cert_location and not cert_arn:
from .letsencrypt import get_cert_and_update_domain
cert_success = get_cert_and_update_domain(
self.zappa,
self.lambda_name,
self.api_stage,
self.domain,
manual
)
# Custom SSL / ACM
else:
route53 = self.stage_config.get('route53_enabled', True)
if not self.zappa.get_domain_name(self.domain, route53=route53):
dns_name = self.zappa.create_domain_name(
domain_name=self.domain,
certificate_name=self.domain + "-Zappa-Cert",
certificate_body=certificate_body,
certificate_private_key=certificate_private_key,
certificate_chain=certificate_chain,
certificate_arn=cert_arn,
lambda_name=self.lambda_name,
stage=self.api_stage,
base_path=base_path
)
if route53:
self.zappa.update_route53_records(self.domain, dns_name)
print("Created a new domain name with supplied certificate. Please note that it can take up to 40 minutes for this domain to be "
"created and propagated through AWS, but it requires no further work on your part.")
else:
self.zappa.update_domain_name(
domain_name=self.domain,
certificate_name=self.domain + "-Zappa-Cert",
certificate_body=certificate_body,
certificate_private_key=certificate_private_key,
certificate_chain=certificate_chain,
certificate_arn=cert_arn,
lambda_name=self.lambda_name,
stage=self.api_stage,
route53=route53,
base_path=base_path
)
cert_success = True
if cert_success:
click.echo("Certificate " + click.style("updated", fg="green", bold=True) + "!")
else:
click.echo(click.style("Failed", fg="red", bold=True) + " to generate or install certificate! :(")
click.echo("\n==============\n")
shamelessly_promote()
##
# Shell
##
def shell(self):
"""
Spawn a debug shell.
"""
click.echo(click.style("NOTICE!", fg="yellow", bold=True) + " This is a " + click.style("local", fg="green", bold=True) + " shell, inside a " + click.style("Zappa", bold=True) + " object!")
self.zappa.shell()
return
##
# Utility
##
def callback(self, position):
"""
Allows the execution of custom code between creation of the zip file and deployment to AWS.
:return: None
"""
callbacks = self.stage_config.get('callbacks', {})
callback = callbacks.get(position)
if callback:
(mod_path, cb_func_name) = callback.rsplit('.', 1)
try: # Prefer callback in working directory
if mod_path.count('.') >= 1: # Callback function is nested in a folder
(mod_folder_path, mod_name) = mod_path.rsplit('.', 1)
mod_folder_path_fragments = mod_folder_path.split('.')
working_dir = os.path.join(os.getcwd(), *mod_folder_path_fragments)
else:
mod_name = mod_path
working_dir = os.getcwd()
working_dir_importer = pkgutil.get_importer(working_dir)
module_ = working_dir_importer.find_module(mod_name).load_module(mod_name)
except (ImportError, AttributeError):
try: # Callback func might be in virtualenv
module_ = importlib.import_module(mod_path)
except ImportError: # pragma: no cover
raise ClickException(click.style("Failed ", fg="red") + 'to ' + click.style(
"import {position} callback ".format(position=position),
bold=True) + 'module: "{mod_path}"'.format(mod_path=click.style(mod_path, bold=True)))
if not hasattr(module_, cb_func_name): # pragma: no cover
raise ClickException(click.style("Failed ", fg="red") + 'to ' + click.style(
"find {position} callback ".format(position=position), bold=True) + 'function: "{cb_func_name}" '.format(
cb_func_name=click.style(cb_func_name, bold=True)) + 'in module "{mod_path}"'.format(mod_path=mod_path))
cb_func = getattr(module_, cb_func_name)
cb_func(self) # Call the function passing self
def check_for_update(self):
"""
Print a warning if there's a new Zappa version available.
"""
try:
version = pkg_resources.require("zappa")[0].version
updateable = check_new_version_available(version)
if updateable:
click.echo(click.style("Important!", fg="yellow", bold=True) +
" A new version of " + click.style("Zappa", bold=True) + " is available!")
click.echo("Upgrade with: " + click.style("pip install zappa --upgrade", bold=True))
click.echo("Visit the project page on GitHub to see the latest changes: " +
click.style("https://github.com/Miserlou/Zappa", bold=True))
except Exception as e: # pragma: no cover
print(e)
return
def load_settings(self, settings_file=None, session=None):
"""
Load the local zappa_settings file.
An existing boto session can be supplied, though this is likely for testing purposes.
Returns the loaded Zappa object.
"""
# Ensure we're passed a valid settings file.
if not settings_file:
settings_file = self.get_json_or_yaml_settings()
if not os.path.isfile(settings_file):
raise ClickException("Please configure your zappa_settings file.")
# Load up file
self.load_settings_file(settings_file)
# Make sure that the stages are valid names:
for stage_name in self.zappa_settings.keys():
try:
self.check_stage_name(stage_name)
except ValueError:
raise ValueError("API stage names must match a-zA-Z0-9_ ; '{0!s}' does not.".format(stage_name))
# Make sure that this stage is our settings
if self.api_stage not in self.zappa_settings.keys():
raise ClickException("Please define stage '{0!s}' in your Zappa settings.".format(self.api_stage))
# We need a working title for this project. Use one if supplied, else cwd dirname.
if 'project_name' in self.stage_config: # pragma: no cover
# If the name is invalid, this will raise an exception with a message further up the stack
self.project_name = validate_name(self.stage_config['project_name'])
else:
self.project_name = self.get_project_name()
# The name of the actual AWS Lambda function, ex, 'helloworld-dev'
# Assume that we already have validated the name beforehand.
# Related: https://github.com/Miserlou/Zappa/pull/664
# https://github.com/Miserlou/Zappa/issues/678
# And various others from Slack.
self.lambda_name = slugify.slugify(self.project_name + '-' + self.api_stage)
# Load stage-specific settings
self.s3_bucket_name = self.stage_config.get('s3_bucket', "zappa-" + ''.join(random.choice(string.ascii_lowercase + string.digits) for _ in range(9)))
self.vpc_config = self.stage_config.get('vpc_config', {})
self.memory_size = self.stage_config.get('memory_size', 512)
self.app_function = self.stage_config.get('app_function', None)
self.exception_handler = self.stage_config.get('exception_handler', None)
self.aws_region = self.stage_config.get('aws_region', None)
self.debug = self.stage_config.get('debug', True)
self.prebuild_script = self.stage_config.get('prebuild_script', None)
self.profile_name = self.stage_config.get('profile_name', None)
self.log_level = self.stage_config.get('log_level', "DEBUG")
self.domain = self.stage_config.get('domain', None)
self.base_path = self.stage_config.get('base_path', None)
self.timeout_seconds = self.stage_config.get('timeout_seconds', 30)
dead_letter_arn = self.stage_config.get('dead_letter_arn', '')
self.dead_letter_config = {'TargetArn': dead_letter_arn} if dead_letter_arn else {}
self.cognito = self.stage_config.get('cognito', None)
self.num_retained_versions = self.stage_config.get('num_retained_versions', None)
# Check for valid values of num_retained_versions
if self.num_retained_versions is not None and type(self.num_retained_versions) is not int:
raise ClickException("Please supply either an integer or null for num_retained_versions in the zappa_settings.json. Found %s" % type(self.num_retained_versions))
elif type(self.num_retained_versions) is int and self.num_retained_versions < 1:
raise ClickException("The value for num_retained_versions in the zappa_settings.json should be greater than 0.")
# Provide legacy support for `use_apigateway`, now `apigateway_enabled`.
# https://github.com/Miserlou/Zappa/issues/490
# https://github.com/Miserlou/Zappa/issues/493
self.use_apigateway = self.stage_config.get('use_apigateway', True)
if self.use_apigateway:
self.use_apigateway = self.stage_config.get('apigateway_enabled', True)
self.apigateway_description = self.stage_config.get('apigateway_description', None)
self.lambda_handler = self.stage_config.get('lambda_handler', 'handler.lambda_handler')
# DEPRECATED. https://github.com/Miserlou/Zappa/issues/456
self.remote_env_bucket = self.stage_config.get('remote_env_bucket', None)
self.remote_env_file = self.stage_config.get('remote_env_file', None)
self.remote_env = self.stage_config.get('remote_env', None)
self.settings_file = self.stage_config.get('settings_file', None)
self.django_settings = self.stage_config.get('django_settings', None)
self.manage_roles = self.stage_config.get('manage_roles', True)
self.binary_support = self.stage_config.get('binary_support', True)
self.api_key_required = self.stage_config.get('api_key_required', False)
self.api_key = self.stage_config.get('api_key')
self.endpoint_configuration = self.stage_config.get('endpoint_configuration', None)
self.iam_authorization = self.stage_config.get('iam_authorization', False)
self.cors = self.stage_config.get("cors", False)
self.lambda_description = self.stage_config.get('lambda_description', "Zappa Deployment")
self.lambda_concurrency = self.stage_config.get('lambda_concurrency', None)
self.environment_variables = self.stage_config.get('environment_variables', {})
self.aws_environment_variables = self.stage_config.get('aws_environment_variables', {})
self.check_environment(self.environment_variables)
self.authorizer = self.stage_config.get('authorizer', {})
self.runtime = self.stage_config.get('runtime', get_runtime_from_python_version())
self.aws_kms_key_arn = self.stage_config.get('aws_kms_key_arn', '')
self.context_header_mappings = self.stage_config.get('context_header_mappings', {})
self.xray_tracing = self.stage_config.get('xray_tracing', False)
self.desired_role_arn = self.stage_config.get('role_arn')
self.layers = self.stage_config.get('layers', None)
# Load ALB-related settings
self.use_alb = self.stage_config.get('alb_enabled', False)
self.alb_vpc_config = self.stage_config.get('alb_vpc_config', {})
# Additional tags
self.tags = self.stage_config.get('tags', {})
desired_role_name = self.lambda_name + "-ZappaLambdaExecutionRole"
self.zappa = Zappa( boto_session=session,
profile_name=self.profile_name,
aws_region=self.aws_region,
load_credentials=self.load_credentials,
desired_role_name=desired_role_name,
desired_role_arn=self.desired_role_arn,
runtime=self.runtime,
tags=self.tags,
endpoint_urls=self.stage_config.get('aws_endpoint_urls',{}),
xray_tracing=self.xray_tracing
)
for setting in CUSTOM_SETTINGS:
if setting in self.stage_config:
setting_val = self.stage_config[setting]
# Read the policy file contents.
if setting.endswith('policy'):
with open(setting_val, 'r') as f:
setting_val = f.read()
setattr(self.zappa, setting, setting_val)
if self.app_function:
self.collision_warning(self.app_function)
if self.app_function.endswith('.py'):
click.echo(click.style("Warning!", fg="red", bold=True) +
" Your app_function is pointing to a " + click.style("file and not a function", bold=True) +
"! It should probably be something like 'my_file.app', not 'my_file.py'!")
return self.zappa
def get_json_or_yaml_settings(self, settings_name="zappa_settings"):
"""
Return zappa_settings path as JSON or YAML (or TOML), as appropriate.
"""
zs_json = settings_name + ".json"
zs_yml = settings_name + ".yml"
zs_yaml = settings_name + ".yaml"
zs_toml = settings_name + ".toml"
# Must have at least one
if not os.path.isfile(zs_json) \
and not os.path.isfile(zs_yml) \
and not os.path.isfile(zs_yaml) \
and not os.path.isfile(zs_toml):
raise ClickException("Please configure a zappa_settings file or call `zappa init`.")
# Prefer JSON
if os.path.isfile(zs_json):
settings_file = zs_json
elif os.path.isfile(zs_toml):
settings_file = zs_toml
elif os.path.isfile(zs_yml):
settings_file = zs_yml
else:
settings_file = zs_yaml
return settings_file
def load_settings_file(self, settings_file=None):
"""
Load our settings file.
"""
if not settings_file:
settings_file = self.get_json_or_yaml_settings()
if not os.path.isfile(settings_file):
raise ClickException("Please configure your zappa_settings file or call `zappa init`.")
path, ext = os.path.splitext(settings_file)
if ext == '.yml' or ext == '.yaml':
with open(settings_file) as yaml_file:
try:
self.zappa_settings = yaml.safe_load(yaml_file)
except ValueError: # pragma: no cover
raise ValueError("Unable to load the Zappa settings YAML. It may be malformed.")
elif ext == '.toml':
with open(settings_file) as toml_file:
try:
self.zappa_settings = toml.load(toml_file)
except ValueError: # pragma: no cover
raise ValueError("Unable to load the Zappa settings TOML. It may be malformed.")
else:
with open(settings_file) as json_file:
try:
self.zappa_settings = json.load(json_file)
except ValueError: # pragma: no cover
raise ValueError("Unable to load the Zappa settings JSON. It may be malformed.")
def create_package(self, output=None):
"""
Ensure that the package can be properly configured,
and then create it.
"""
# Create the Lambda zip package (includes project and virtualenvironment)
# Also define the path to the handler file so it can be copied to the zip
# root for Lambda.
current_file = os.path.dirname(os.path.abspath(
inspect.getfile(inspect.currentframe())))
handler_file = os.path.join(current_file, 'handler.py')
# Create the zip file(s)
if self.stage_config.get('slim_handler', False):
# Create two zips. One with the application and the other with just the handler.
# https://github.com/Miserlou/Zappa/issues/510
self.zip_path = self.zappa.create_lambda_zip(
prefix=self.lambda_name,
use_precompiled_packages=self.stage_config.get('use_precompiled_packages', True),
exclude=self.stage_config.get('exclude', []),
exclude_glob=self.stage_config.get('exclude_glob', []),
disable_progress=self.disable_progress,
archive_format='tarball'
)
# Make sure the normal venv is not included in the handler's zip
exclude = self.stage_config.get('exclude', [])
cur_venv = self.zappa.get_current_venv()
exclude.append(cur_venv.split('/')[-1])
self.handler_path = self.zappa.create_lambda_zip(
prefix='handler_{0!s}'.format(self.lambda_name),
venv=self.zappa.create_handler_venv(),
handler_file=handler_file,
slim_handler=True,
exclude=exclude,
exclude_glob=self.stage_config.get('exclude_glob', []),
output=output,
disable_progress=self.disable_progress
)
else:
# This could be python3.6 optimized.
exclude = self.stage_config.get(
'exclude', [
"boto3",
"dateutil",
"botocore",
"s3transfer",
"concurrent"
])
# Create a single zip that has the handler and application
self.zip_path = self.zappa.create_lambda_zip(
prefix=self.lambda_name,
handler_file=handler_file,
use_precompiled_packages=self.stage_config.get('use_precompiled_packages', True),
exclude=exclude,
exclude_glob=self.stage_config.get('exclude_glob', []),
output=output,
disable_progress=self.disable_progress
)
# Warn if this is too large for Lambda.
file_stats = os.stat(self.zip_path)
if file_stats.st_size > 52428800: # pragma: no cover
print('\n\nWarning: Application zip package is likely to be too large for AWS Lambda. '
'Try setting "slim_handler" to true in your Zappa settings file.\n\n')
# Throw custom settings into the zip that handles requests
if self.stage_config.get('slim_handler', False):
handler_zip = self.handler_path
else:
handler_zip = self.zip_path
with zipfile.ZipFile(handler_zip, 'a') as lambda_zip:
settings_s = "# Generated by Zappa\n"
if self.app_function:
if '.' not in self.app_function: # pragma: no cover
raise ClickException("Your " + click.style("app_function", fg='red', bold=True) + " value is not a modular path." +
" It needs to be in the format `" + click.style("your_module.your_app_object", bold=True) + "`.")
app_module, app_function = self.app_function.rsplit('.', 1)
settings_s = settings_s + "APP_MODULE='{0!s}'\nAPP_FUNCTION='{1!s}'\n".format(app_module, app_function)
if self.exception_handler:
settings_s += "EXCEPTION_HANDLER='{0!s}'\n".format(self.exception_handler)
else:
settings_s += "EXCEPTION_HANDLER=None\n"
if self.debug:
settings_s = settings_s + "DEBUG=True\n"
else:
settings_s = settings_s + "DEBUG=False\n"
settings_s = settings_s + "LOG_LEVEL='{0!s}'\n".format((self.log_level))
if self.binary_support:
settings_s = settings_s + "BINARY_SUPPORT=True\n"
else:
settings_s = settings_s + "BINARY_SUPPORT=False\n"
head_map_dict = {}
head_map_dict.update(dict(self.context_header_mappings))
settings_s = settings_s + "CONTEXT_HEADER_MAPPINGS={0}\n".format(
head_map_dict
)
# If we're on a domain, we don't need to define the /<<env>> in
# the WSGI PATH
if self.domain:
settings_s = settings_s + "DOMAIN='{0!s}'\n".format((self.domain))
else:
settings_s = settings_s + "DOMAIN=None\n"
if self.base_path:
settings_s = settings_s + "BASE_PATH='{0!s}'\n".format((self.base_path))
else:
settings_s = settings_s + "BASE_PATH=None\n"
# Pass through remote config bucket and path
if self.remote_env:
settings_s = settings_s + "REMOTE_ENV='{0!s}'\n".format(
self.remote_env
)
# DEPRECATED. Use remote_env instead.
elif self.remote_env_bucket and self.remote_env_file:
settings_s = settings_s + "REMOTE_ENV='s3://{0!s}/{1!s}'\n".format(
self.remote_env_bucket, self.remote_env_file
)
# Local envs
env_dict = {}
if self.aws_region:
env_dict['AWS_REGION'] = self.aws_region
env_dict.update(dict(self.environment_variables))
# Environment variable keys must be ascii
# https://github.com/Miserlou/Zappa/issues/604
# https://github.com/Miserlou/Zappa/issues/998
try:
env_dict = dict((k.encode('ascii').decode('ascii'), v) for (k, v) in env_dict.items())
except Exception:
raise ValueError("Environment variable keys must be ascii.")
settings_s = settings_s + "ENVIRONMENT_VARIABLES={0}\n".format(
env_dict
)
# We can be environment-aware
settings_s = settings_s + "API_STAGE='{0!s}'\n".format((self.api_stage))
settings_s = settings_s + "PROJECT_NAME='{0!s}'\n".format((self.project_name))
if self.settings_file:
settings_s = settings_s + "SETTINGS_FILE='{0!s}'\n".format((self.settings_file))
else:
settings_s = settings_s + "SETTINGS_FILE=None\n"
if self.django_settings:
settings_s = settings_s + "DJANGO_SETTINGS='{0!s}'\n".format((self.django_settings))
else:
settings_s = settings_s + "DJANGO_SETTINGS=None\n"
# If slim handler, path to project zip
if self.stage_config.get('slim_handler', False):
settings_s += "ARCHIVE_PATH='s3://{0!s}/{1!s}_{2!s}_current_project.tar.gz'\n".format(
self.s3_bucket_name, self.api_stage, self.project_name)
# Since 'include' only applies to the slim handler, add that setting here from the zappa_settings file
# and tell the handler we are the slim_handler
# https://github.com/Miserlou/Zappa/issues/776
settings_s += "SLIM_HANDLER=True\n"
include = self.stage_config.get('include', [])
if len(include) >= 1:
settings_s += "INCLUDE=" + str(include) + '\n'
# AWS Events function mapping
event_mapping = {}
events = self.stage_config.get('events', [])
for event in events:
arn = event.get('event_source', {}).get('arn')
function = event.get('function')
if arn and function:
event_mapping[arn] = function
settings_s = settings_s + "AWS_EVENT_MAPPING={0!s}\n".format(event_mapping)
# Map Lex bot events
bot_events = self.stage_config.get('bot_events', [])
bot_events_mapping = {}
for bot_event in bot_events:
event_source = bot_event.get('event_source', {})
intent = event_source.get('intent')
invocation_source = event_source.get('invocation_source')
function = bot_event.get('function')
if intent and invocation_source and function:
bot_events_mapping[str(intent) + ':' + str(invocation_source)] = function
settings_s = settings_s + "AWS_BOT_EVENT_MAPPING={0!s}\n".format(bot_events_mapping)
# Map cognito triggers
cognito_trigger_mapping = {}
cognito_config = self.stage_config.get('cognito', {})
triggers = cognito_config.get('triggers', [])
for trigger in triggers:
source = trigger.get('source')
function = trigger.get('function')
if source and function:
cognito_trigger_mapping[source] = function
settings_s = settings_s + "COGNITO_TRIGGER_MAPPING={0!s}\n".format(cognito_trigger_mapping)
# Authorizer config
authorizer_function = self.authorizer.get('function', None)
if authorizer_function:
settings_s += "AUTHORIZER_FUNCTION='{0!s}'\n".format(authorizer_function)
# Copy our Django app into root of our package.
# It doesn't work otherwise.
if self.django_settings:
base = __file__.rsplit(os.sep, 1)[0]
django_py = os.path.join(base, 'ext', 'django_zappa.py')
lambda_zip.write(django_py, 'django_zappa_app.py')
# async response
async_response_table = self.stage_config.get('async_response_table', '')
settings_s += "ASYNC_RESPONSE_TABLE='{0!s}'\n".format(async_response_table)
# Lambda requires a specific chmod
temp_settings = tempfile.NamedTemporaryFile(delete=False)
os.chmod(temp_settings.name, 0o644)
temp_settings.write(bytes(settings_s, "utf-8"))
temp_settings.close()
lambda_zip.write(temp_settings.name, 'zappa_settings.py')
os.unlink(temp_settings.name)
def remove_local_zip(self):
"""
Remove our local zip file.
"""
if self.stage_config.get('delete_local_zip', True):
try:
if os.path.isfile(self.zip_path):
os.remove(self.zip_path)
if self.handler_path and os.path.isfile(self.handler_path):
os.remove(self.handler_path)
except Exception as e: # pragma: no cover
sys.exit(-1)
def remove_uploaded_zip(self):
"""
Remove the local and S3 zip file after uploading and updating.
"""
# Remove the uploaded zip from S3, because it is now registered.
if self.stage_config.get('delete_s3_zip', True):
self.zappa.remove_from_s3(self.zip_path, self.s3_bucket_name)
if self.stage_config.get('slim_handler', False):
# Need to keep the project zip as the slim handler uses it.
self.zappa.remove_from_s3(self.handler_path, self.s3_bucket_name)
def on_exit(self):
"""
Cleanup after the command finishes.
Always called, whether via SystemExit, KeyboardInterrupt, or any other Exception.
"""
if self.zip_path:
# Only try to remove uploaded zip if we're running a command that has loaded credentials
if self.load_credentials:
self.remove_uploaded_zip()
self.remove_local_zip()
def print_logs(self, logs, colorize=True, http=False, non_http=False, force_colorize=None):
"""
Parse, filter and print logs to the console.
"""
for log in logs:
timestamp = log['timestamp']
message = log['message']
if "START RequestId" in message:
continue
if "REPORT RequestId" in message:
continue
if "END RequestId" in message:
continue
if not colorize and not force_colorize:
if http:
if self.is_http_log_entry(message.strip()):
print("[" + str(timestamp) + "] " + message.strip())
elif non_http:
if not self.is_http_log_entry(message.strip()):
print("[" + str(timestamp) + "] " + message.strip())
else:
print("[" + str(timestamp) + "] " + message.strip())
else:
if http:
if self.is_http_log_entry(message.strip()):
click.echo(click.style("[", fg='cyan') + click.style(str(timestamp), bold=True) + click.style("]", fg='cyan') + self.colorize_log_entry(message.strip()), color=force_colorize)
elif non_http:
if not self.is_http_log_entry(message.strip()):
click.echo(click.style("[", fg='cyan') + click.style(str(timestamp), bold=True) + click.style("]", fg='cyan') + self.colorize_log_entry(message.strip()), color=force_colorize)
else:
click.echo(click.style("[", fg='cyan') + click.style(str(timestamp), bold=True) + click.style("]", fg='cyan') + self.colorize_log_entry(message.strip()), color=force_colorize)
def is_http_log_entry(self, string):
"""
Determines if a log entry is an HTTP-formatted log string or not.
"""
# Debug event filter
if 'Zappa Event' in string:
return False
# IP address filter
for token in string.replace('\t', ' ').split(' '):
try:
if (token.count('.') == 3 and token.replace('.', '').isnumeric()):
return True
except Exception: # pragma: no cover
pass
return False
def get_project_name(self):
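# e.g. a cwd of '/home/user/My Project' becomes 'my-project' (slugified, truncated to 15 chars)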
return slugify.slugify(os.getcwd().split(os.sep)[-1])[:15]
def colorize_log_entry(self, string):
"""
Apply various heuristics to return a colorized version of a string.
If these fail, simply return the string in plaintext.
"""
final_string = string
try:
# First, do stuff in square brackets
inside_squares = re.findall(r'\[([^]]*)\]', string)
for token in inside_squares:
if token in ['CRITICAL', 'ERROR', 'WARNING', 'DEBUG', 'INFO', 'NOTSET']:
final_string = final_string.replace('[' + token + ']', click.style("[", fg='cyan') + click.style(token, fg='cyan', bold=True) + click.style("]", fg='cyan'))
else:
final_string = final_string.replace('[' + token + ']', click.style("[", fg='cyan') + click.style(token, bold=True) + click.style("]", fg='cyan'))
# Then do quoted strings
quotes = re.findall(r'"[^"]*"', string)
for token in quotes:
final_string = final_string.replace(token, click.style(token, fg="yellow"))
# And UUIDs
for token in final_string.replace('\t', ' ').split(' '):
try:
if token.count('-') == 4 and token.replace('-', '').isalnum():
final_string = final_string.replace(token, click.style(token, fg="magenta"))
except Exception: # pragma: no cover
pass
# And IP addresses
try:
if token.count('.') == 3 and token.replace('.', '').isnumeric():
final_string = final_string.replace(token, click.style(token, fg="red"))
except Exception: # pragma: no cover
pass
# And status codes
try:
if token in ['200']:
final_string = final_string.replace(token, click.style(token, fg="green"))
if token in ['400', '401', '403', '404', '405', '500']:
final_string = final_string.replace(token, click.style(token, fg="red"))
except Exception: # pragma: no cover
pass
# And Zappa Events
try:
if "Zappa Event:" in final_string:
final_string = final_string.replace("Zappa Event:", click.style("Zappa Event:", bold=True, fg="green"))
except Exception: # pragma: no cover
pass
# And dates
for token in final_string.split('\t'):
try:
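# parser.parse raises for non-date tokens, so only real dates get colorized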
is_date = parser.parse(token)
final_string = final_string.replace(token, click.style(token, fg="green"))
except Exception: # pragma: no cover
pass
final_string = final_string.replace('\t', ' ').replace('  ', ' ')  # collapse tabs and doubled spaces
if final_string[0] != ' ':
final_string = ' ' + final_string
return final_string
except Exception as e: # pragma: no cover
return string
def execute_prebuild_script(self):
"""
Parse and execute the prebuild_script from the zappa_settings.
"""
(pb_mod_path, pb_func) = self.prebuild_script.rsplit('.', 1)
try: # Prefer prebuild script in working directory
if pb_mod_path.count('.') >= 1: # Prebuild script func is nested in a folder
(mod_folder_path, mod_name) = pb_mod_path.rsplit('.', 1)
mod_folder_path_fragments = mod_folder_path.split('.')
working_dir = os.path.join(os.getcwd(), *mod_folder_path_fragments)
else:
mod_name = pb_mod_path
working_dir = os.getcwd()
working_dir_importer = pkgutil.get_importer(working_dir)
module_ = working_dir_importer.find_module(mod_name).load_module(mod_name)
except (ImportError, AttributeError):
try: # Prebuild func might be in virtualenv
module_ = importlib.import_module(pb_mod_path)
except ImportError: # pragma: no cover
raise ClickException(click.style("Failed ", fg="red") + 'to ' + click.style(
"import prebuild script ", bold=True) + 'module: "{pb_mod_path}"'.format(
pb_mod_path=click.style(pb_mod_path, bold=True)))
if not hasattr(module_, pb_func): # pragma: no cover
raise ClickException(click.style("Failed ", fg="red") + 'to ' + click.style(
"find prebuild script ", bold=True) + 'function: "{pb_func}" '.format(
pb_func=click.style(pb_func, bold=True)) + 'in module "{pb_mod_path}"'.format(
pb_mod_path=pb_mod_path))
prebuild_function = getattr(module_, pb_func)
prebuild_function() # Call the function
def collision_warning(self, item):
"""
Given a string, print a warning if this could
collide with a Zappa core package module.
Use for app functions and events.
"""
namespace_collisions = [
"zappa.", "wsgi.", "middleware.", "handler.", "util.", "letsencrypt.", "cli."
]
for namespace_collision in namespace_collisions:
if item.startswith(namespace_collision):
click.echo(click.style("Warning!", fg="red", bold=True) +
" You may have a namespace collision between " +
click.style(item, bold=True) +
" and " +
click.style(namespace_collision, bold=True) +
"! You may want to rename that file.")
def deploy_api_gateway(self, api_id):
cache_cluster_enabled = self.stage_config.get('cache_cluster_enabled', False)
cache_cluster_size = str(self.stage_config.get('cache_cluster_size', .5))
endpoint_url = self.zappa.deploy_api_gateway(
api_id=api_id,
stage_name=self.api_stage,
cache_cluster_enabled=cache_cluster_enabled,
cache_cluster_size=cache_cluster_size,
cloudwatch_log_level=self.stage_config.get('cloudwatch_log_level', 'OFF'),
cloudwatch_data_trace=self.stage_config.get('cloudwatch_data_trace', False),
cloudwatch_metrics_enabled=self.stage_config.get('cloudwatch_metrics_enabled', False),
cache_cluster_ttl=self.stage_config.get('cache_cluster_ttl', 300),
cache_cluster_encrypted=self.stage_config.get('cache_cluster_encrypted', False)
)
return endpoint_url
def check_venv(self):
""" Ensure we're inside a virtualenv. """
if self.vargs and self.vargs.get("no_venv"):
return
if self.zappa:
venv = self.zappa.get_current_venv()
else:
# Just for `init`, when we don't have settings yet.
venv = Zappa.get_current_venv()
if not venv:
raise ClickException(
click.style("Zappa", bold=True) + " requires an " + click.style("active virtual environment", bold=True, fg="red") + "!\n" +
"Learn more about virtual environments here: " + click.style("http://docs.python-guide.org/en/latest/dev/virtualenvs/", bold=False, fg="cyan"))
def silence(self):
"""
Route all stdout to null.
"""
sys.stdout = open(os.devnull, 'w')
sys.stderr = open(os.devnull, 'w')
def touch_endpoint(self, endpoint_url):
"""
Test the deployed endpoint with a GET request.
"""
# Private APIGW endpoints most likely can't be reached by a deployer
# unless they're connected to the VPC by VPN. Instead of trying
# connect to the service, print a warning and let the user know
# to check it manually.
# See: https://github.com/Miserlou/Zappa/pull/1719#issuecomment-471341565
if 'PRIVATE' in self.stage_config.get('endpoint_configuration', []):
print(
click.style("Warning!", fg="yellow", bold=True) +
" Since you're deploying a private API Gateway endpoint,"
" Zappa cannot determine if your function is returning "
" a correct status code. You should check your API's response"
" manually before considering this deployment complete."
)
return
touch_path = self.stage_config.get('touch_path', '/')
req = requests.get(endpoint_url + touch_path)
# Sometimes on really large packages, it can take 60-90 secs to be
# ready and requests will return 504 status_code until ready.
# So, if we get a 504 status code, rerun the request up to 4 times or
# until we don't get a 504 error
if req.status_code == 504:
i = 0
status_code = 504
touch_try_count = self.stage_config.get('touch_try_count', 4)
while status_code == 504 and i <= touch_try_count:
req = requests.get(endpoint_url + touch_path)
status_code = req.status_code
i += 1
if req.status_code >= 500:
raise ClickException(click.style("Warning!", fg="red", bold=True) +
" Status check on the deployed lambda failed." +
" A GET request to '" + touch_path + "' yielded a " +
click.style(str(req.status_code), fg="red", bold=True) + " response code.")
####################################################################
# Main
####################################################################
def shamelessly_promote():
"""
Shamelessly promote our little community.
"""
click.echo("Need " + click.style("help", fg='green', bold=True) +
"? Found a " + click.style("bug", fg='green', bold=True) +
"? Let us " + click.style("know", fg='green', bold=True) + "! :D")
click.echo("File bug reports on " + click.style("GitHub", bold=True) + " here: "
+ click.style("https://github.com/Miserlou/Zappa", fg='cyan', bold=True))
click.echo("And join our " + click.style("Slack", bold=True) + " channel here: "
+ click.style("https://slack.zappa.io", fg='cyan', bold=True))
click.echo("Love!,")
click.echo(" ~ Team " + click.style("Zappa", bold=True) + "!")
def disable_click_colors():
"""
Set a Click context where colors are disabled. Creates a throwaway BaseCommand
to play nicely with the Context constructor.
The intended side-effect here is that click.echo() checks this context and will
suppress colors.
https://github.com/pallets/click/blob/e1aa43a3/click/globals.py#L39
"""
ctx = Context(BaseCommand('AllYourBaseAreBelongToUs'))
ctx.color = False
push_context(ctx)
def handle(): # pragma: no cover
"""
Main program execution handler.
"""
try:
cli = ZappaCLI()
sys.exit(cli.handle())
except SystemExit as e: # pragma: no cover
cli.on_exit()
sys.exit(e.code)
except KeyboardInterrupt: # pragma: no cover
cli.on_exit()
sys.exit(130)
except Exception as e:
cli.on_exit()
click.echo("Oh no! An " + click.style("error occurred", fg='red', bold=True) + "! :(")
click.echo("\n==============\n")
import traceback
traceback.print_exc()
click.echo("\n==============\n")
shamelessly_promote()
sys.exit(-1)
if __name__ == '__main__': # pragma: no cover
handle() | zappa-bepro | /zappa_bepro-0.51.11-py3-none-any.whl/zappa/cli.py | cli.py |
# zappa-call-later
## Description
A database-driven way to run tasks at a future point in time, or at a regular interval, for Django Zappa projects (https://github.com/Miserlou/Zappa).
## Installation
```
pip install zappa-call-later
```
To check for tasks every 4 minutes, add the below to zappa_settings.json:
```json
{
"dev": {
"keep_warm": false,
"events": [
{
"function": "zappa_call_later.zappa_check.now",
"expression": "rate(4 minute)"
}
]
}
}
```
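Zappa will then invoke `zappa_call_later.zappa_check.now` on that schedule, and each invocation runs any tasks that are due.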
## Usage
The API is currently low-level: you create and save your tasks straight to the database.
```python
def test_function(_arg1, _arg2, _kwarg1=1, _kwarg2=2):
return _arg1, _arg2, _kwarg1, _kwarg2
call_later = CallLater()
call_later.function = test_function
call_later.args = (3, 4) # for the above function
call_later.kwargs = {'_kwarg1': 11, '_kwarg2': 22} # for the above function
call_later.time_to_run = timezone.now() + timedelta(minutes=8)
call_later.save()
```
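Once saved, the task is picked up and executed by the first check event that runs after `time_to_run`.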
You can also run your task repeatedly:
```python
call_later_twice = CallLater()  # configure function/args/time_to_run as above
call_later_twice.every = timedelta(seconds=1)
call_later_twice.repeat = 2
call_later_twice.save()
```
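For reference, here is a complete sketch of a recurring task. The `CallLater` import path is an assumption for illustration; check the package source for the actual module:
```python
from datetime import timedelta

from django.utils import timezone
from zappa_call_later.models import CallLater  # import path is an assumption


def send_heartbeat():
    print("still alive")


task = CallLater()
task.function = send_heartbeat
task.time_to_run = timezone.now() + timedelta(minutes=5)  # first run
task.every = timedelta(hours=1)  # then re-run every hour
task.repeat = 24  # stop after 24 runs
task.save()
```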
There are 2 types of failure:
- If a task fails to run, it is retried on the next checking event. By default, there are 3 attempts to run a function.
- If a task takes too long to run, it is again retried on the next checking event. By default, there are 3 retries.
After repeated failures, the task is labelled as problematic.
| zappa-call-later | /zappa-call-later-1.0.2.tar.gz/zappa-call-later-1.0.2/README.md | README.md |
import atexit
import base64
import copy
import json
import hashlib
import logging
import re
import subprocess
import os
import shutil
import sys
import time
import tempfile
import binascii
import textwrap
import requests
from urllib.request import urlopen
# Staging
# Amazon doesn't accept these though.
# DEFAULT_CA = "https://acme-staging.api.letsencrypt.org"
# Production
DEFAULT_CA = "https://acme-v01.api.letsencrypt.org"
LOGGER = logging.getLogger(__name__)
LOGGER.addHandler(logging.StreamHandler())
def get_cert_and_update_domain(
zappa_instance,
lambda_name,
api_stage,
domain=None,
manual=False,
):
"""
Main cert installer path.
"""
try:
create_domain_key()
create_domain_csr(domain)
get_cert(zappa_instance)
create_chained_certificate()
with open('{}/signed.crt'.format(gettempdir())) as f:
certificate_body = f.read()
with open('{}/domain.key'.format(gettempdir())) as f:
certificate_private_key = f.read()
with open('{}/intermediate.pem'.format(gettempdir())) as f:
certificate_chain = f.read()
if not manual:
if domain:
if not zappa_instance.get_domain_name(domain):
zappa_instance.create_domain_name(
domain_name=domain,
certificate_name=domain + "-Zappa-LE-Cert",
certificate_body=certificate_body,
certificate_private_key=certificate_private_key,
certificate_chain=certificate_chain,
certificate_arn=None,
lambda_name=lambda_name,
stage=api_stage
)
print("Created a new domain name. Please note that it can take up to 40 minutes for this domain to be created and propagated through AWS, but it requires no further work on your part.")
else:
zappa_instance.update_domain_name(
domain_name=domain,
certificate_name=domain + "-Zappa-LE-Cert",
certificate_body=certificate_body,
certificate_private_key=certificate_private_key,
certificate_chain=certificate_chain,
certificate_arn=None,
lambda_name=lambda_name,
stage=api_stage
)
else:
print("Cerificate body:\n")
print(certificate_body)
print("\nCerificate private key:\n")
print(certificate_private_key)
print("\nCerificate chain:\n")
print(certificate_chain)
except Exception as e:
print(e)
return False
return True
def create_domain_key():
devnull = open(os.devnull, 'wb')
out = subprocess.check_output(['openssl', 'genrsa', '2048'], stderr=devnull)
with open(os.path.join(gettempdir(), 'domain.key'), 'wb') as f:
f.write(out)
def create_domain_csr(domain):
subj = "/CN=" + domain
cmd = [
'openssl', 'req',
'-new',
'-sha256',
'-key', os.path.join(gettempdir(), 'domain.key'),
'-subj', subj
]
devnull = open(os.devnull, 'wb')
out = subprocess.check_output(cmd, stderr=devnull)
with open(os.path.join(gettempdir(), 'domain.csr'), 'wb') as f:
f.write(out)
def create_chained_certificate():
signed_crt = open(os.path.join(gettempdir(), 'signed.crt'), 'rb').read()
cross_cert_url = "https://letsencrypt.org/certs/lets-encrypt-x3-cross-signed.pem"
cert = requests.get(cross_cert_url)
with open(os.path.join(gettempdir(), 'intermediate.pem'), 'wb') as intermediate_pem:
intermediate_pem.write(cert.content)
with open(os.path.join(gettempdir(), 'chained.pem'), 'wb') as chained_pem:
chained_pem.write(signed_crt)
chained_pem.write(cert.content)
def parse_account_key():
"""Parse account key to get public key"""
LOGGER.info("Parsing account key...")
cmd = [
'openssl', 'rsa',
'-in', os.path.join(gettempdir(), 'account.key'),
'-noout',
'-text'
]
devnull = open(os.devnull, 'wb')
return subprocess.check_output(cmd, stderr=devnull)
def parse_csr():
"""
Parse certificate signing request for domains
"""
LOGGER.info("Parsing CSR...")
cmd = [
'openssl', 'req',
'-in', os.path.join(gettempdir(), 'domain.csr'),
'-noout',
'-text'
]
devnull = open(os.devnull, 'wb')
out = subprocess.check_output(cmd, stderr=devnull)
domains = set([])
common_name = re.search(r"Subject:.*? CN\s?=\s?([^\s,;/]+)", out.decode('utf8'))
if common_name is not None:
domains.add(common_name.group(1))
subject_alt_names = re.search(r"X509v3 Subject Alternative Name: \n +([^\n]+)\n", out.decode('utf8'), re.MULTILINE | re.DOTALL)
if subject_alt_names is not None:
for san in subject_alt_names.group(1).split(", "):
if san.startswith("DNS:"):
domains.add(san[4:])
return domains
def get_boulder_header(key_bytes):
"""
Use regular expressions to find crypto values from parsed account key,
and return a header we can send to our Boulder instance.
"""
pub_hex, pub_exp = re.search(
r"modulus:\n\s+00:([a-f0-9\:\s]+?)\npublicExponent: ([0-9]+)",
key_bytes.decode('utf8'), re.MULTILINE | re.DOTALL).groups()
pub_exp = "{0:x}".format(int(pub_exp))
pub_exp = "0{0}".format(pub_exp) if len(pub_exp) % 2 else pub_exp
header = {
"alg": "RS256",
"jwk": {
"e": _b64(binascii.unhexlify(pub_exp.encode("utf-8"))),
"kty": "RSA",
"n": _b64(binascii.unhexlify(re.sub(r"(\s|:)", "", pub_hex).encode("utf-8"))),
},
}
return header
def register_account():
"""
Agree to LE TOS
"""
LOGGER.info("Registering account...")
code, result = _send_signed_request(DEFAULT_CA + "/acme/new-reg", {
"resource": "new-reg",
"agreement": "https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf",
})
if code == 201: # pragma: no cover
LOGGER.info("Registered!")
elif code == 409: # pragma: no cover
LOGGER.info("Already registered!")
else: # pragma: no cover
raise ValueError("Error registering: {0} {1}".format(code, result))
def get_cert(zappa_instance, log=LOGGER, CA=DEFAULT_CA):
"""
Call LE to get a new signed CA.
"""
out = parse_account_key()
header = get_boulder_header(out)
accountkey_json = json.dumps(header['jwk'], sort_keys=True, separators=(',', ':'))
thumbprint = _b64(hashlib.sha256(accountkey_json.encode('utf8')).digest())
# find domains
domains = parse_csr()
# get the certificate domains and expiration
register_account()
# verify each domain
for domain in domains:
log.info("Verifying {0}...".format(domain))
# get new challenge
code, result = _send_signed_request(CA + "/acme/new-authz", {
"resource": "new-authz",
"identifier": {"type": "dns", "value": domain},
})
if code != 201:
raise ValueError("Error requesting challenges: {0} {1}".format(code, result))
challenge = [ch for ch in json.loads(result.decode('utf8'))['challenges'] if ch['type'] == "dns-01"][0]
token = re.sub(r"[^A-Za-z0-9_\-]", "_", challenge['token'])
keyauthorization = "{0}.{1}".format(token, thumbprint).encode('utf-8')
# sha256_b64
digest = _b64(hashlib.sha256(keyauthorization).digest())
zone_id = zappa_instance.get_hosted_zone_id_for_domain(domain)
if not zone_id:
raise ValueError("Could not find Zone ID for: " + domain)
zappa_instance.set_dns_challenge_txt(zone_id, domain, digest) # resp is unused
print("Waiting for DNS to propagate..")
# What's optimal here?
# import time # double import; import in loop; shadowed import
time.sleep(45)
# notify challenge are met
code, result = _send_signed_request(challenge['uri'], {
"resource": "challenge",
"keyAuthorization": keyauthorization.decode('utf-8'),
})
if code != 202:
raise ValueError("Error triggering challenge: {0} {1}".format(code, result))
# wait for challenge to be verified
verify_challenge(challenge['uri'])
# Challenge verified, clean up R53
zappa_instance.remove_dns_challenge_txt(zone_id, domain, digest)
# Sign
result = sign_certificate()
# Encode to PEM format
encode_certificate(result)
return True
def verify_challenge(uri):
"""
Loop until our challenge is verified, else fail.
"""
while True:
try:
resp = urlopen(uri)
challenge_status = json.loads(resp.read().decode('utf8'))
except IOError as e:
raise ValueError("Error checking challenge: {0} {1}".format(
e.code, json.loads(e.read().decode('utf8'))))
if challenge_status['status'] == "pending":
time.sleep(2)
elif challenge_status['status'] == "valid":
LOGGER.info("Domain verified!")
break
else:
raise ValueError("Domain challenge did not pass: {0}".format(
challenge_status))
def sign_certificate():
"""
Get the new certificate.
Returns the signed bytes.
"""
LOGGER.info("Signing certificate...")
cmd = [
'openssl', 'req',
'-in', os.path.join(gettempdir(), 'domain.csr'),
'-outform', 'DER'
]
devnull = open(os.devnull, 'wb')
csr_der = subprocess.check_output(cmd, stderr=devnull)
code, result = _send_signed_request(DEFAULT_CA + "/acme/new-cert", {
"resource": "new-cert",
"csr": _b64(csr_der),
})
if code != 201:
raise ValueError("Error signing certificate: {0} {1}".format(code, result))
LOGGER.info("Certificate signed!")
return result
def encode_certificate(result):
"""
Encode cert bytes to PEM encoded cert file.
"""
cert_body = """-----BEGIN CERTIFICATE-----\n{0}\n-----END CERTIFICATE-----\n""".format(
"\n".join(textwrap.wrap(base64.b64encode(result).decode('utf8'), 64)))
signed_crt = open("{}/signed.crt".format(gettempdir()), "w")
signed_crt.write(cert_body)
signed_crt.close()
return True
##
# Request Utility
##
def _b64(b):
"""
Helper function base64 encode for jose spec
"""
return base64.urlsafe_b64encode(b).decode('utf8').replace("=", "")
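# Quick illustrative check of the padding-free base64url encoding above
# (values computed with the standard library, shown here for reference):
#     _b64(b'\x01\x02')     == 'AQI'              # standard b64 would be 'AQI='
#     _b64(b'hello world')  == 'aGVsbG8gd29ybGQ'  # standard b64 ends in '='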
def _send_signed_request(url, payload):
"""
Helper function to make signed requests to Boulder
"""
payload64 = _b64(json.dumps(payload).encode('utf8'))
out = parse_account_key()
header = get_boulder_header(out)
protected = copy.deepcopy(header)
protected["nonce"] = urlopen(DEFAULT_CA + "/directory").headers['Replay-Nonce']
protected64 = _b64(json.dumps(protected).encode('utf8'))
cmd = [
'openssl', 'dgst',
'-sha256',
'-sign', os.path.join(gettempdir(), 'account.key')
]
proc = subprocess.Popen(
cmd,
stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE
)
out, err = proc.communicate("{0}.{1}".format(protected64, payload64).encode('utf8'))
if proc.returncode != 0: # pragma: no cover
raise IOError("OpenSSL Error: {0}".format(err))
data = json.dumps({
"header": header, "protected": protected64,
"payload": payload64, "signature": _b64(out),
})
try:
resp = urlopen(url, data.encode('utf8'))
return resp.getcode(), resp.read()
except IOError as e:
return getattr(e, "code", None), getattr(e, "read", e.__str__)()
##
# Temporary Directory Utility
##
__tempdir = None
def gettempdir():
"""
Lazily creates a temporary directory in a secure manner. When Python exits,
or the cleanup() function is called, the directory is erased.
"""
global __tempdir
if __tempdir is not None:
return __tempdir
__tempdir = tempfile.mkdtemp()
return __tempdir
@atexit.register
def cleanup():
"""
Delete any temporary files.
"""
global __tempdir
if __tempdir is not None:
shutil.rmtree(__tempdir)
        __tempdir = None

# ---- end of zappa/letsencrypt.py (zappa-dateutil 0.51.0); next file: zappa/middleware.py ----
from werkzeug.wsgi import ClosingIterator
def all_casings(input_string):
"""
Permute all casings of a given string.
A pretty algorithm, via @Amber
http://stackoverflow.com/questions/6792803/finding-all-possible-case-permutations-in-python
"""
if not input_string:
yield ""
else:
first = input_string[:1]
if first.lower() == first.upper():
for sub_casing in all_casings(input_string[1:]):
yield first + sub_casing
else:
for sub_casing in all_casings(input_string[1:]):
yield first.lower() + sub_casing
yield first.upper() + sub_casing
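# Illustrative output of the generator above:
#     list(all_casings("ab")) == ["ab", "Ab", "aB", "AB"]
#     list(all_casings("a1")) == ["a1", "A1"]   # digits only have one casing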
class ZappaWSGIMiddleware:
"""
Middleware functions necessary for a Zappa deployment.
    Most hacks have now been removed except for the Set-Cookie permutation.
"""
def __init__(self, application):
self.application = application
def __call__(self, environ, start_response):
"""
We must case-mangle the Set-Cookie header name or AWS will use only a
single one of these headers.
"""
def encode_response(status, headers, exc_info=None):
"""
This makes the 'set-cookie' headers name lowercase,
all the non-cookie headers should be sent unharmed.
Related: https://github.com/Miserlou/Zappa/issues/1965
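            e.g. (illustrative): [('Set-Cookie', 'a=1'), ('Set-Cookie', 'b=2')]
            is passed through as [('set-cookie', 'a=1'), ('set-cookie', 'b=2')],
            with non-cookie headers left untouched.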
"""
new_headers = [header for header in headers
if ((type(header[0]) != str) or (header[0].lower() != 'set-cookie'))]
cookie_headers = [(header[0].lower(), header[1]) for header in headers
if ((type(header[0]) == str) and (header[0].lower() == "set-cookie"))]
new_headers = new_headers + cookie_headers
return start_response(status, new_headers, exc_info)
# Call the application with our modifier
response = self.application(environ, encode_response)
# Return the response as a WSGI-safe iterator
        return ClosingIterator(response)

# ---- end of zappa/middleware.py (zappa-dateutil 0.51.0); next file: zappa/handler.py ----
import base64
import boto3
import collections
import datetime
import importlib
import inspect
import json
import logging
import os
import sys
import traceback
import tarfile
from builtins import str
from werkzeug.wrappers import Response
# This file may be copied into a project's root,
# so handle both scenarios.
try:
from zappa.middleware import ZappaWSGIMiddleware
from zappa.wsgi import create_wsgi_request, common_log
from zappa.utilities import merge_headers, parse_s3_url
except ImportError as e: # pragma: no cover
from .middleware import ZappaWSGIMiddleware
from .wsgi import create_wsgi_request, common_log
from .utilities import merge_headers, parse_s3_url
# Set up logging
logging.basicConfig()
logger = logging.getLogger()
logger.setLevel(logging.INFO)
class LambdaHandler:
"""
Singleton for avoiding duplicate setup.
Pattern provided by @benbangert.
"""
__instance = None
settings = None
settings_name = None
session = None
# Application
app_module = None
wsgi_app = None
trailing_slash = False
def __new__(cls, settings_name="zappa_settings", session=None):
"""Singleton instance to avoid repeat setup"""
if LambdaHandler.__instance is None:
print("Instancing..")
LambdaHandler.__instance = object.__new__(cls)
return LambdaHandler.__instance
def __init__(self, settings_name="zappa_settings", session=None):
# We haven't cached our settings yet, load the settings and app.
if not self.settings:
# Loading settings from a python module
self.settings = importlib.import_module(settings_name)
self.settings_name = settings_name
self.session = session
# Custom log level
if self.settings.LOG_LEVEL:
level = logging.getLevelName(self.settings.LOG_LEVEL)
logger.setLevel(level)
remote_env = getattr(self.settings, 'REMOTE_ENV', None)
remote_bucket, remote_file = parse_s3_url(remote_env)
if remote_bucket and remote_file:
self.load_remote_settings(remote_bucket, remote_file)
# Let the system know that this will be a Lambda/Zappa/Stack
os.environ["SERVERTYPE"] = "AWS Lambda"
os.environ["FRAMEWORK"] = "Zappa"
try:
os.environ["PROJECT"] = self.settings.PROJECT_NAME
os.environ["STAGE"] = self.settings.API_STAGE
except Exception: # pragma: no cover
pass
# Set any locally defined env vars
# Environment variable keys can't be Unicode
# https://github.com/Miserlou/Zappa/issues/604
for key in self.settings.ENVIRONMENT_VARIABLES.keys():
os.environ[str(key)] = self.settings.ENVIRONMENT_VARIABLES[key]
# Pulling from S3 if given a zip path
project_archive_path = getattr(self.settings, 'ARCHIVE_PATH', None)
if project_archive_path:
self.load_remote_project_archive(project_archive_path)
# Load compiled library to the PythonPath
# checks if we are the slim_handler since this is not needed otherwise
# https://github.com/Miserlou/Zappa/issues/776
is_slim_handler = getattr(self.settings, 'SLIM_HANDLER', False)
if is_slim_handler:
included_libraries = getattr(self.settings, 'INCLUDE', ['libmysqlclient.so.18'])
try:
from ctypes import cdll, util
for library in included_libraries:
try:
cdll.LoadLibrary(os.path.join(os.getcwd(), library))
except OSError:
print("Failed to find library: {}...right filename?".format(library))
except ImportError:
print ("Failed to import cytpes library")
# This is a non-WSGI application
# https://github.com/Miserlou/Zappa/pull/748
if not hasattr(self.settings, 'APP_MODULE') and not self.settings.DJANGO_SETTINGS:
self.app_module = None
wsgi_app_function = None
# This is probably a normal WSGI app (Or django with overloaded wsgi application)
# https://github.com/Miserlou/Zappa/issues/1164
elif hasattr(self.settings, 'APP_MODULE'):
if self.settings.DJANGO_SETTINGS:
sys.path.append('/var/task')
from django.conf import ENVIRONMENT_VARIABLE as SETTINGS_ENVIRONMENT_VARIABLE
# add the Lambda root path into the sys.path
self.trailing_slash = True
os.environ[SETTINGS_ENVIRONMENT_VARIABLE] = self.settings.DJANGO_SETTINGS
else:
self.trailing_slash = False
# The app module
self.app_module = importlib.import_module(self.settings.APP_MODULE)
# The application
wsgi_app_function = getattr(self.app_module, self.settings.APP_FUNCTION)
# Django gets special treatment.
else:
try: # Support both for tests
from zappa.ext.django_zappa import get_django_wsgi
except ImportError: # pragma: no cover
from django_zappa_app import get_django_wsgi
# Get the Django WSGI app from our extension
wsgi_app_function = get_django_wsgi(self.settings.DJANGO_SETTINGS)
self.trailing_slash = True
self.wsgi_app = ZappaWSGIMiddleware(wsgi_app_function)
def load_remote_project_archive(self, project_zip_path):
"""
Puts the project files from S3 in /tmp and adds to path
"""
project_folder = '/tmp/{0!s}'.format(self.settings.PROJECT_NAME)
if not os.path.isdir(project_folder):
# The project folder doesn't exist in this cold lambda, get it from S3
if not self.session:
boto_session = boto3.Session()
else:
boto_session = self.session
# Download zip file from S3
remote_bucket, remote_file = parse_s3_url(project_zip_path)
s3 = boto_session.resource('s3')
archive_on_s3 = s3.Object(remote_bucket, remote_file).get()
with tarfile.open(fileobj=archive_on_s3['Body'], mode="r|gz") as t:
t.extractall(project_folder)
# Add to project path
sys.path.insert(0, project_folder)
# Change working directory to project folder
# Related: https://github.com/Miserlou/Zappa/issues/702
os.chdir(project_folder)
return True
def load_remote_settings(self, remote_bucket, remote_file):
"""
Attempt to read a file from s3 containing a flat json object. Adds each
key->value pair as environment variables. Helpful for keeping
        sensitive or stage-specific configuration variables in s3 instead of
version control.
"""
if not self.session:
boto_session = boto3.Session()
else:
boto_session = self.session
s3 = boto_session.resource('s3')
try:
remote_env_object = s3.Object(remote_bucket, remote_file).get()
except Exception as e: # pragma: no cover
# catch everything aws might decide to raise
print('Could not load remote settings file.', e)
return
try:
content = remote_env_object['Body'].read()
except Exception as e: # pragma: no cover
# catch everything aws might decide to raise
print('Exception while reading remote settings file.', e)
return
try:
settings_dict = json.loads(content)
except (ValueError, TypeError): # pragma: no cover
print('Failed to parse remote settings!')
return
# add each key-value to environment - overwrites existing keys!
for key, value in settings_dict.items():
if self.settings.LOG_LEVEL == "DEBUG":
print('Adding {} -> {} to environment'.format(
key,
value
))
# Environment variable keys can't be Unicode
# https://github.com/Miserlou/Zappa/issues/604
try:
os.environ[str(key)] = value
except Exception:
if self.settings.LOG_LEVEL == "DEBUG":
print("Environment variable keys must be non-unicode!")
@staticmethod
def import_module_and_get_function(whole_function):
"""
Given a modular path to a function, import that module
and return the function.
"""
module, function = whole_function.rsplit('.', 1)
app_module = importlib.import_module(module)
app_function = getattr(app_module, function)
return app_function
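    # Illustrative example (using a stdlib path):
    #     import_module_and_get_function('os.path.join') is os.path.join  # -> True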
@classmethod
def lambda_handler(cls, event, context): # pragma: no cover
handler = cls()
exception_handler = handler.settings.EXCEPTION_HANDLER
try:
return handler.handler(event, context)
except Exception as ex:
exception_processed = cls._process_exception(exception_handler=exception_handler,
event=event, context=context, exception=ex)
if not exception_processed:
# Only re-raise exception if handler directed so. Allows handler to control if lambda has to retry
# an event execution in case of failure.
raise
@classmethod
def _process_exception(cls, exception_handler, event, context, exception):
exception_processed = False
if exception_handler:
try:
handler_function = cls.import_module_and_get_function(exception_handler)
exception_processed = handler_function(exception, event, context)
except Exception as cex:
logger.error(msg='Failed to process exception via custom handler.')
print(cex)
return exception_processed
@staticmethod
def run_function(app_function, event, context):
"""
Given a function and event context,
detect signature and execute, returning any result.
"""
# getargspec does not support python 3 method with type hints
# Related issue: https://github.com/Miserlou/Zappa/issues/1452
if hasattr(inspect, "getfullargspec"): # Python 3
args, varargs, keywords, defaults, _, _, _ = inspect.getfullargspec(app_function)
else: # Python 2
args, varargs, keywords, defaults = inspect.getargspec(app_function)
num_args = len(args)
if num_args == 0:
result = app_function(event, context) if varargs else app_function()
elif num_args == 1:
result = app_function(event, context) if varargs else app_function(event)
elif num_args == 2:
result = app_function(event, context)
else:
raise RuntimeError("Function signature is invalid. Expected a function that accepts at most "
"2 arguments or varargs.")
return result
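    # Sketch of the dispatch rules above (function names are hypothetical):
    #     def f():               -> called as f()
    #     def f(event):          -> called as f(event)
    #     def f(event, context): -> called as f(event, context)
    # A zero- or one-argument function declaring *args is still passed (event, context).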
def get_function_for_aws_event(self, record):
"""
Get the associated function to execute for a triggered AWS event
        Supports S3, SNS, DynamoDB, Kinesis and SQS events
"""
if 's3' in record:
if ':' in record['s3']['configurationId']:
return record['s3']['configurationId'].split(':')[-1]
arn = None
if 'Sns' in record:
try:
message = json.loads(record['Sns']['Message'])
if message.get('command'):
return message['command']
except ValueError:
pass
arn = record['Sns'].get('TopicArn')
elif 'dynamodb' in record or 'kinesis' in record:
arn = record.get('eventSourceARN')
elif 'eventSource' in record and record.get('eventSource') == 'aws:sqs':
arn = record.get('eventSourceARN')
elif 's3' in record:
arn = record['s3']['bucket']['arn']
if arn:
return self.settings.AWS_EVENT_MAPPING.get(arn)
return None
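    # The settings.AWS_EVENT_MAPPING lookup above expects a dict shaped like
    # (ARN and task path are hypothetical):
    #     {'arn:aws:sns:us-east-1:123456789012:my-topic': 'mymodule.my_task'}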
def get_function_from_bot_intent_trigger(self, event):
"""
For the given event build ARN and return the configured function
"""
intent = event.get('currentIntent')
if intent:
intent = intent.get('name')
if intent:
return self.settings.AWS_BOT_EVENT_MAPPING.get(
"{}:{}".format(intent, event.get('invocationSource'))
)
def get_function_for_cognito_trigger(self, trigger):
"""
Get the associated function to execute for a cognito trigger
"""
print("get_function_for_cognito_trigger", self.settings.COGNITO_TRIGGER_MAPPING, trigger, self.settings.COGNITO_TRIGGER_MAPPING.get(trigger))
return self.settings.COGNITO_TRIGGER_MAPPING.get(trigger)
def handler(self, event, context):
"""
An AWS Lambda function which parses specific API Gateway input into a
WSGI request, feeds it to our WSGI app, processes the response, and returns
that back to the API Gateway.
"""
settings = self.settings
# If in DEBUG mode, log all raw incoming events.
if settings.DEBUG:
logger.debug('Zappa Event: {}'.format(event))
# Set any API Gateway defined Stage Variables
# as env vars
if event.get('stageVariables'):
for key in event['stageVariables'].keys():
os.environ[str(key)] = event['stageVariables'][key]
# This is the result of a keep alive, recertify
# or scheduled event.
if event.get('detail-type') == 'Scheduled Event':
whole_function = event['resources'][0].split('/')[-1].split('-')[-1]
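            # e.g. (illustrative rule name): 'arn:aws:events:us-east-1:123:rule/zappa-dev-mymodule.mytask'
            #     .split('/')[-1].split('-')[-1]  ->  'mymodule.mytask'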
# This is a scheduled function.
if '.' in whole_function:
app_function = self.import_module_and_get_function(whole_function)
# Execute the function!
return self.run_function(app_function, event, context)
# Else, let this execute as it were.
# This is a direct command invocation.
elif event.get('command', None):
whole_function = event['command']
app_function = self.import_module_and_get_function(whole_function)
result = self.run_function(app_function, event, context)
print("Result of %s:" % whole_function)
print(result)
return result
# This is a direct, raw python invocation.
# It's _extremely_ important we don't allow this event source
# to be overridden by unsanitized, non-admin user input.
elif event.get('raw_command', None):
raw_command = event['raw_command']
exec(raw_command)
return
# This is a Django management command invocation.
elif event.get('manage', None):
from django.core import management
try: # Support both for tests
from zappa.ext.django_zappa import get_django_wsgi
except ImportError as e: # pragma: no cover
from django_zappa_app import get_django_wsgi
# Get the Django WSGI app from our extension
# We don't actually need the function,
# but we do need to do all of the required setup for it.
app_function = get_django_wsgi(self.settings.DJANGO_SETTINGS)
# Couldn't figure out how to get the value into stdout with StringIO..
# Read the log for now. :[]
management.call_command(*event['manage'].split(' '))
return {}
# This is an AWS-event triggered invocation.
elif event.get('Records', None):
records = event.get('Records')
result = None
whole_function = self.get_function_for_aws_event(records[0])
if whole_function:
app_function = self.import_module_and_get_function(whole_function)
result = self.run_function(app_function, event, context)
logger.debug(result)
else:
logger.error("Cannot find a function to process the triggered event.")
return result
# this is an AWS-event triggered from Lex bot's intent
elif event.get('bot'):
result = None
whole_function = self.get_function_from_bot_intent_trigger(event)
if whole_function:
app_function = self.import_module_and_get_function(whole_function)
result = self.run_function(app_function, event, context)
logger.debug(result)
else:
logger.error("Cannot find a function to process the triggered event.")
return result
# This is an API Gateway authorizer event
elif event.get('type') == 'TOKEN':
whole_function = self.settings.AUTHORIZER_FUNCTION
if whole_function:
app_function = self.import_module_and_get_function(whole_function)
policy = self.run_function(app_function, event, context)
return policy
else:
logger.error("Cannot find a function to process the authorization request.")
raise Exception('Unauthorized')
# This is an AWS Cognito Trigger Event
elif event.get('triggerSource', None):
triggerSource = event.get('triggerSource')
whole_function = self.get_function_for_cognito_trigger(triggerSource)
result = event
if whole_function:
app_function = self.import_module_and_get_function(whole_function)
result = self.run_function(app_function, event, context)
logger.debug(result)
else:
logger.error("Cannot find a function to handle cognito trigger {}".format(triggerSource))
return result
# This is a CloudWatch event
# Related: https://github.com/Miserlou/Zappa/issues/1924
elif event.get('awslogs', None):
result = None
whole_function = '{}.{}'.format(settings.APP_MODULE, settings.APP_FUNCTION)
app_function = self.import_module_and_get_function(whole_function)
if app_function:
result = self.run_function(app_function, event, context)
logger.debug("Result of %s:" % whole_function)
logger.debug(result)
else:
logger.error("Cannot find a function to process the triggered event.")
return result
# Normal web app flow
try:
# Timing
time_start = datetime.datetime.now()
# This is a normal HTTP request
if event.get('httpMethod', None):
script_name = ''
is_elb_context = False
headers = merge_headers(event)
if event.get('requestContext', None) and event['requestContext'].get('elb', None):
# Related: https://github.com/Miserlou/Zappa/issues/1715
# inputs/outputs for lambda loadbalancer
# https://docs.aws.amazon.com/elasticloadbalancing/latest/application/lambda-functions.html
is_elb_context = True
# host is lower-case when forwarded from ELB
host = headers.get('host')
# TODO: pathParameters is a first-class citizen in apigateway but not available without
# some parsing work for ELB (is this parameter used for anything?)
event['pathParameters'] = ''
else:
if headers:
host = headers.get('Host')
else:
host = None
logger.debug('host found: [{}]'.format(host))
if host:
if 'amazonaws.com' in host:
logger.debug('amazonaws found in host')
                        # The path provided in the event doesn't include the
# stage, so we must tell Flask to include the API
# stage in the url it calculates. See https://github.com/Miserlou/Zappa/issues/1014
script_name = '/' + settings.API_STAGE
else:
# This is a test request sent from the AWS console
if settings.DOMAIN:
# Assume the requests received will be on the specified
# domain. No special handling is required
pass
else:
# Assume the requests received will be to the
# amazonaws.com endpoint, so tell Flask to include the
# API stage
script_name = '/' + settings.API_STAGE
base_path = getattr(settings, 'BASE_PATH', None)
# Create the environment for WSGI and handle the request
environ = create_wsgi_request(
event,
script_name=script_name,
base_path=base_path,
trailing_slash=self.trailing_slash,
binary_support=settings.BINARY_SUPPORT,
context_header_mappings=settings.CONTEXT_HEADER_MAPPINGS
)
# We are always on https on Lambda, so tell our wsgi app that.
environ['HTTPS'] = 'on'
environ['wsgi.url_scheme'] = 'https'
environ['lambda.context'] = context
environ['lambda.event'] = event
# Execute the application
with Response.from_app(self.wsgi_app, environ) as response:
# This is the object we're going to return.
# Pack the WSGI response into our special dictionary.
zappa_returndict = dict()
# Issue #1715: ALB support. ALB responses must always include
# base64 encoding and status description
if is_elb_context:
zappa_returndict.setdefault('isBase64Encoded', False)
zappa_returndict.setdefault('statusDescription', response.status)
if response.data:
if settings.BINARY_SUPPORT and \
not response.mimetype.startswith("text/") \
and response.mimetype != "application/json":
zappa_returndict['body'] = base64.b64encode(response.data).decode('utf-8')
zappa_returndict["isBase64Encoded"] = True
else:
zappa_returndict['body'] = response.get_data(as_text=True)
zappa_returndict['statusCode'] = response.status_code
if 'headers' in event:
zappa_returndict['headers'] = {}
for key, value in response.headers:
zappa_returndict['headers'][key] = value
if 'multiValueHeaders' in event:
zappa_returndict['multiValueHeaders'] = {}
for key, value in response.headers:
zappa_returndict['multiValueHeaders'][key] = response.headers.getlist(key)
# Calculate the total response time,
# and log it in the Common Log format.
time_end = datetime.datetime.now()
delta = time_end - time_start
response_time_ms = delta.total_seconds() * 1000
response.content = response.data
common_log(environ, response, response_time=response_time_ms)
return zappa_returndict
except Exception as e: # pragma: no cover
# Print statements are visible in the logs either way
print(e)
exc_info = sys.exc_info()
message = ('An uncaught exception happened while servicing this request. '
'You can investigate this with the `zappa tail` command.')
# If we didn't even build an app_module, just raise.
if not settings.DJANGO_SETTINGS:
try:
self.app_module
except NameError as ne:
                    message = 'Failed to import module: {}'.format(ne)
# Call exception handler for unhandled exceptions
exception_handler = self.settings.EXCEPTION_HANDLER
self._process_exception(exception_handler=exception_handler,
event=event, context=context, exception=e)
# Return this unspecified exception as a 500, using template that API Gateway expects.
content = collections.OrderedDict()
content['statusCode'] = 500
body = {'message': message}
if settings.DEBUG: # only include traceback if debug is on.
body['traceback'] = traceback.format_exception(*exc_info) # traceback as a list for readability.
content['body'] = json.dumps(str(body), sort_keys=True, indent=4)
return content
def lambda_handler(event, context): # pragma: no cover
return LambdaHandler.lambda_handler(event, context)
def keep_warm_callback(event, context):
"""Method is triggered by the CloudWatch event scheduled when keep_warm setting is set to true."""
lambda_handler(event={}, context=context) # overriding event with an empty one so that web app initialization will
    # be triggered.

# ---- end of zappa/handler.py (zappa-dateutil 0.51.0); next file: zappa/wsgi.py ----
import base64
import logging
import six
import sys
from requestlogger import ApacheFormatter
from werkzeug import urls
from urllib.parse import urlencode
from .utilities import merge_headers, titlecase_keys
BINARY_METHODS = [
"POST",
"PUT",
"PATCH",
"DELETE",
"CONNECT",
"OPTIONS"
]
def create_wsgi_request(event_info,
server_name='zappa',
script_name=None,
trailing_slash=True,
binary_support=False,
base_path=None,
context_header_mappings={},
):
"""
Given some event_info via API Gateway,
create and return a valid WSGI request environ.
"""
method = event_info['httpMethod']
headers = merge_headers(event_info) or {} # Allow for the AGW console 'Test' button to work (Pull #735)
"""
API Gateway and ALB both started allowing for multi-value querystring
params in Nov. 2018. If there aren't multi-value params present, then
it acts identically to 'queryStringParameters', so we can use it as a
drop-in replacement.
The one caveat here is that ALB will only include _one_ of
queryStringParameters _or_ multiValueQueryStringParameters, which means
we have to check for the existence of one and then fall back to the
other.
"""
if 'multiValueQueryStringParameters' in event_info:
query = event_info['multiValueQueryStringParameters']
query_string = urlencode(query, doseq=True) if query else ''
else:
query = event_info.get('queryStringParameters', {})
query_string = urlencode(query) if query else ''
if context_header_mappings:
for key, value in context_header_mappings.items():
parts = value.split('.')
header_val = event_info['requestContext']
for part in parts:
if part not in header_val:
header_val = None
break
else:
header_val = header_val[part]
if header_val is not None:
headers[key] = header_val
# Extract remote user from context if Authorizer is enabled
remote_user = None
if event_info['requestContext'].get('authorizer'):
remote_user = event_info['requestContext']['authorizer'].get('principalId')
elif event_info['requestContext'].get('identity'):
remote_user = event_info['requestContext']['identity'].get('userArn')
# Related: https://github.com/Miserlou/Zappa/issues/677
# https://github.com/Miserlou/Zappa/issues/683
# https://github.com/Miserlou/Zappa/issues/696
# https://github.com/Miserlou/Zappa/issues/836
# https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol#Summary_table
if binary_support and (method in BINARY_METHODS):
if event_info.get('isBase64Encoded', False):
encoded_body = event_info['body']
body = base64.b64decode(encoded_body)
else:
body = event_info['body']
if isinstance(body, six.string_types):
body = body.encode("utf-8")
else:
body = event_info['body']
if isinstance(body, six.string_types):
body = body.encode("utf-8")
# Make header names canonical, e.g. content-type => Content-Type
# https://github.com/Miserlou/Zappa/issues/1188
headers = titlecase_keys(headers)
path = urls.url_unquote(event_info['path'])
if base_path:
script_name = '/' + base_path
if path.startswith(script_name):
path = path[len(script_name):]
x_forwarded_for = headers.get('X-Forwarded-For', '')
if ',' in x_forwarded_for:
# The last one is the cloudfront proxy ip. The second to last is the real client ip.
# Everything else is user supplied and untrustworthy.
remote_addr = x_forwarded_for.split(', ')[-2]
else:
remote_addr = x_forwarded_for or '127.0.0.1'
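    # e.g. (hypothetical header): 'X-Forwarded-For: 10.0.0.1, 203.0.113.7, 64.252.70.1'
    # yields remote_addr == '203.0.113.7' (the client as seen by CloudFront).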
environ = {
'PATH_INFO': get_wsgi_string(path),
'QUERY_STRING': get_wsgi_string(query_string),
'REMOTE_ADDR': remote_addr,
'REQUEST_METHOD': method,
'SCRIPT_NAME': get_wsgi_string(str(script_name)) if script_name else '',
'SERVER_NAME': str(server_name),
'SERVER_PORT': headers.get('X-Forwarded-Port', '80'),
'SERVER_PROTOCOL': str('HTTP/1.1'),
'wsgi.version': (1, 0),
'wsgi.url_scheme': headers.get('X-Forwarded-Proto', 'http'),
'wsgi.input': body,
'wsgi.errors': sys.stderr,
'wsgi.multiprocess': False,
'wsgi.multithread': False,
'wsgi.run_once': False,
}
# Input processing
if method in ["POST", "PUT", "PATCH", "DELETE"]:
if 'Content-Type' in headers:
environ['CONTENT_TYPE'] = headers['Content-Type']
# This must be Bytes or None
environ['wsgi.input'] = six.BytesIO(body)
if body:
environ['CONTENT_LENGTH'] = str(len(body))
else:
environ['CONTENT_LENGTH'] = '0'
for header in headers:
wsgi_name = "HTTP_" + header.upper().replace('-', '_')
environ[wsgi_name] = str(headers[header])
if script_name:
environ['SCRIPT_NAME'] = script_name
path_info = environ['PATH_INFO']
if script_name in path_info:
            environ['PATH_INFO'] = path_info.replace(script_name, '')
if remote_user:
environ['REMOTE_USER'] = remote_user
if event_info['requestContext'].get('authorizer'):
environ['API_GATEWAY_AUTHORIZER'] = event_info['requestContext']['authorizer']
return environ
def common_log(environ, response, response_time=None):
"""
Given the WSGI environ and the response,
log this event in Common Log Format.
"""
logger = logging.getLogger()
if response_time:
formatter = ApacheFormatter(with_response_time=True)
try:
log_entry = formatter(response.status_code, environ,
len(response.content), rt_us=response_time)
except TypeError:
# Upstream introduced a very annoying breaking change on the rt_ms/rt_us kwarg.
log_entry = formatter(response.status_code, environ,
len(response.content), rt_ms=response_time)
else:
formatter = ApacheFormatter(with_response_time=False)
log_entry = formatter(response.status_code, environ,
len(response.content))
logger.info(log_entry)
return log_entry
# Related: https://github.com/Miserlou/Zappa/issues/1199
def get_wsgi_string(string, encoding='utf-8'):
"""
Returns wsgi-compatible string
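    e.g. (illustrative): '/café' becomes '/cafÃ©', i.e. the utf-8 bytes
    reinterpreted as latin-1, the native-string form PEP 3333 expects.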
"""
    return string.encode(encoding).decode('iso-8859-1')

# ---- end of zappa/wsgi.py (zappa-dateutil 0.51.0); next file: zappa/asynchronous.py ----
import boto3
import botocore
from functools import update_wrapper, wraps
import importlib
import inspect
import json
import os
import uuid
import time
from .utilities import get_topic_name
try:
from zappa_settings import ASYNC_RESPONSE_TABLE
except ImportError:
ASYNC_RESPONSE_TABLE = None
# Declare these here so they're kept warm.
try:
aws_session = boto3.Session()
LAMBDA_CLIENT = aws_session.client('lambda')
SNS_CLIENT = aws_session.client('sns')
STS_CLIENT = aws_session.client('sts')
DYNAMODB_CLIENT = aws_session.client('dynamodb')
except botocore.exceptions.NoRegionError as e: # pragma: no cover
# This can happen while testing on Travis, but it's taken care of
# during class initialization.
pass
##
# Response and Exception classes
##
LAMBDA_ASYNC_PAYLOAD_LIMIT = 256000
SNS_ASYNC_PAYLOAD_LIMIT = 256000
class AsyncException(Exception): # pragma: no cover
""" Simple exception class for async tasks. """
pass
class LambdaAsyncResponse:
"""
Base Response Dispatcher class
Can be used directly or subclassed if the method to send the message is changed.
"""
def __init__(self, lambda_function_name=None, aws_region=None, capture_response=False, **kwargs):
""" """
if kwargs.get('boto_session'):
self.client = kwargs.get('boto_session').client('lambda')
else: # pragma: no cover
self.client = LAMBDA_CLIENT
self.lambda_function_name = lambda_function_name
self.aws_region = aws_region
if capture_response:
if ASYNC_RESPONSE_TABLE is None:
print(
"Warning! Attempted to capture a response without "
"async_response_table configured in settings (you won't "
"capture async responses)."
)
capture_response = False
self.response_id = "MISCONFIGURED"
else:
self.response_id = str(uuid.uuid4())
else:
self.response_id = None
self.capture_response = capture_response
def send(self, task_path, args, kwargs):
"""
Create the message object and pass it to the actual sender.
"""
message = {
'task_path': task_path,
'capture_response': self.capture_response,
'response_id': self.response_id,
'args': args,
'kwargs': kwargs
}
self._send(message)
return self
def _send(self, message):
"""
        Given a message, directly invoke the lambda function for this task.
"""
message['command'] = 'zappa.asynchronous.route_lambda_task'
payload = json.dumps(message).encode('utf-8')
if len(payload) > LAMBDA_ASYNC_PAYLOAD_LIMIT: # pragma: no cover
raise AsyncException("Payload too large for async Lambda call")
self.response = self.client.invoke(
FunctionName=self.lambda_function_name,
InvocationType='Event', #makes the call async
Payload=payload
)
self.sent = (self.response.get('StatusCode', 0) == 202)
class SnsAsyncResponse(LambdaAsyncResponse):
"""
Send a SNS message to a specified SNS topic
Serialise the func path and arguments
"""
def __init__(self, lambda_function_name=None, aws_region=None, capture_response=False, **kwargs):
self.lambda_function_name = lambda_function_name
self.aws_region = aws_region
if kwargs.get('boto_session'):
self.client = kwargs.get('boto_session').client('sns')
else: # pragma: no cover
self.client = SNS_CLIENT
if kwargs.get('arn'):
self.arn = kwargs.get('arn')
else:
if kwargs.get('boto_session'):
sts_client = kwargs.get('boto_session').client('sts')
else:
sts_client = STS_CLIENT
AWS_ACCOUNT_ID = sts_client.get_caller_identity()['Account']
self.arn = 'arn:aws:sns:{region}:{account}:{topic_name}'.format(
region=self.aws_region,
account=AWS_ACCOUNT_ID,
topic_name=get_topic_name(self.lambda_function_name)
)
# Issue: https://github.com/Miserlou/Zappa/issues/1209
# TODO: Refactor
self.capture_response = capture_response
if capture_response:
if ASYNC_RESPONSE_TABLE is None:
print(
"Warning! Attempted to capture a response without "
"async_response_table configured in settings (you won't "
"capture async responses)."
)
capture_response = False
self.response_id = "MISCONFIGURED"
else:
self.response_id = str(uuid.uuid4())
else:
self.response_id = None
self.capture_response = capture_response
def _send(self, message):
"""
Given a message, publish to this topic.
"""
message['command'] = 'zappa.asynchronous.route_sns_task'
payload = json.dumps(message).encode('utf-8')
if len(payload) > LAMBDA_ASYNC_PAYLOAD_LIMIT: # pragma: no cover
raise AsyncException("Payload too large for SNS")
self.response = self.client.publish(
TargetArn=self.arn,
Message=payload
)
self.sent = self.response.get('MessageId')
##
# Async Routers
##
ASYNC_CLASSES = {
'lambda': LambdaAsyncResponse,
'sns': SnsAsyncResponse,
}
def route_lambda_task(event, context):
"""
    Deserialises the message from the event passed to zappa.handler.run_function,
    imports the function, and calls it with the args.
"""
message = event
return run_message(message)
def route_sns_task(event, context):
"""
    Gets the SNS message, deserialises it,
    imports the function, and calls it with the args.
"""
record = event['Records'][0]
message = json.loads(
record['Sns']['Message']
)
return run_message(message)
def run_message(message):
"""
Runs a function defined by a message object with keys:
'task_path', 'args', and 'kwargs' used by lambda routing
and a 'command' in handler.py
"""
if message.get('capture_response', False):
DYNAMODB_CLIENT.put_item(
TableName=ASYNC_RESPONSE_TABLE,
Item={
'id': {'S': str(message['response_id'])},
'ttl': {'N': str(int(time.time()+600))},
'async_status': {'S': 'in progress'},
'async_response': {'S': str(json.dumps('N/A'))},
}
)
func = import_and_get_task(message['task_path'])
if hasattr(func, 'sync'):
response = func.sync(
*message['args'],
**message['kwargs']
)
else:
response = func(
*message['args'],
**message['kwargs']
)
if message.get('capture_response', False):
DYNAMODB_CLIENT.update_item(
TableName=ASYNC_RESPONSE_TABLE,
Key={'id': {'S': str(message['response_id'])}},
UpdateExpression="SET async_response = :r, async_status = :s",
ExpressionAttributeValues={
':r': {'S': str(json.dumps(response))},
':s': {'S': 'complete'},
},
)
return response
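# A message consumed by run_message() looks like (illustrative values):
#     {
#         'task_path': 'mymodule.send_email',
#         'args': ['[email protected]'],
#         'kwargs': {'subject': 'hi'},
#         'capture_response': False,
#         'response_id': None,
#         'command': 'zappa.asynchronous.route_lambda_task',
#     }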
##
# Execution interfaces and classes
##
def run(func, args=[], kwargs={}, service='lambda', capture_response=False,
remote_aws_lambda_function_name=None, remote_aws_region=None, **task_kwargs):
"""
Instead of decorating a function with @task, you can just run it directly.
If you were going to do func(*args, **kwargs), then you will call this:
import zappa.asynchronous.run
zappa.asynchronous.run(func, args, kwargs)
If you want to use SNS, then do:
zappa.asynchronous.run(func, args, kwargs, service='sns')
and other arguments are similar to @task
"""
lambda_function_name = remote_aws_lambda_function_name or os.environ.get('AWS_LAMBDA_FUNCTION_NAME')
aws_region = remote_aws_region or os.environ.get('AWS_REGION')
task_path = get_func_task_path(func)
return ASYNC_CLASSES[service](lambda_function_name=lambda_function_name,
aws_region=aws_region,
capture_response=capture_response,
**task_kwargs).send(task_path, args, kwargs)
# Handy:
# http://stackoverflow.com/questions/10294014/python-decorator-best-practice-using-a-class-vs-a-function
# However, this needs to pass inspect.getargspec() in handler.py which does not take classes
# Wrapper written to take optional arguments
# http://chase-seibert.github.io/blog/2013/12/17/python-decorator-optional-parameter.html
def task(*args, **kwargs):
"""Async task decorator so that running
Args:
func (function): the function to be wrapped
Further requirements:
func must be an independent top-level function.
i.e. not a class method or an anonymous function
service (str): either 'lambda' or 'sns'
remote_aws_lambda_function_name (str): the name of a remote lambda function to call with this task
remote_aws_region (str): the name of a remote region to make lambda/sns calls against
Returns:
A replacement function that dispatches func() to
run asynchronously through the service in question
"""
func = None
if len(args) == 1 and callable(args[0]):
func = args[0]
if not kwargs: # Default Values
service = 'lambda'
lambda_function_name_arg = None
aws_region_arg = None
else: # Arguments were passed
service = kwargs.get('service', 'lambda')
lambda_function_name_arg = kwargs.get('remote_aws_lambda_function_name')
aws_region_arg = kwargs.get('remote_aws_region')
capture_response = kwargs.get('capture_response', False)
def func_wrapper(func):
task_path = get_func_task_path(func)
@wraps(func)
def _run_async(*args, **kwargs):
"""
This is the wrapping async function that replaces the function
that is decorated with @task.
Args:
These are just passed through to @task's func
Assuming a valid service is passed to task() and it is run
inside a Lambda process (i.e. AWS_LAMBDA_FUNCTION_NAME exists),
it dispatches the function to be run through the service variable.
Otherwise, it runs the task synchronously.
Returns:
                In async mode, the object returned includes state of the dispatch.
                When outside of Lambda, the func passed to @task is run and we
                return the actual value.
"""
lambda_function_name = lambda_function_name_arg or os.environ.get('AWS_LAMBDA_FUNCTION_NAME')
aws_region = aws_region_arg or os.environ.get('AWS_REGION')
if (service in ASYNC_CLASSES) and (lambda_function_name):
send_result = ASYNC_CLASSES[service](lambda_function_name=lambda_function_name,
aws_region=aws_region,
capture_response=capture_response).send(task_path, args, kwargs)
return send_result
else:
return func(*args, **kwargs)
update_wrapper(_run_async, func)
_run_async.service = service
_run_async.sync = func
return _run_async
return func_wrapper(func) if func else func_wrapper
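# Typical usage of the decorator above (function bodies are hypothetical):
#
#     @task
#     def send_email(to, subject):
#         ...
#
#     @task(service='sns')
#     def process_upload(key):
#         ...
#
# Inside Lambda, calling send_email('[email protected]', 'hi') dispatches an async
# invocation and returns the dispatch state; outside Lambda it runs synchronously
# and returns the function's actual value.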
def task_sns(func):
"""
SNS-based task dispatcher. Functions the same way as task()
"""
return task(func, service='sns')
##
# Utility Functions
##
def import_and_get_task(task_path):
"""
Given a modular path to a function, import that module
and return the function.
"""
module, function = task_path.rsplit('.', 1)
app_module = importlib.import_module(module)
app_function = getattr(app_module, function)
return app_function
def get_func_task_path(func):
"""
Format the modular task path for a function via inspection.
"""
module_path = inspect.getmodule(func).__name__
task_path = '{module_path}.{func_name}'.format(
module_path=module_path,
func_name=func.__name__
)
return task_path
def get_async_response(response_id):
"""
Get the response from the async table
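    Returns e.g. (illustrative) {'status': 'complete', 'response': {...}}
    for a finished task, or None when the id is unknown.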
"""
response = DYNAMODB_CLIENT.get_item(
TableName=ASYNC_RESPONSE_TABLE,
Key={'id': {'S': str(response_id)}}
)
if 'Item' not in response:
return None
return {
'status': response['Item']['async_status']['S'],
'response': json.loads(response['Item']['async_response']['S']),
    }

# ---- end of zappa/asynchronous.py (zappa-dateutil 0.51.0); next file: zappa/core.py ----
import getpass
import glob
import hashlib
import json
import logging
import os
import random
import re
import shutil
import string
import subprocess
import tarfile
import tempfile
import time
import uuid
import zipfile
from builtins import bytes, int
from distutils.dir_util import copy_tree
from io import open
import requests
from setuptools import find_packages
import boto3
import botocore
import troposphere
import troposphere.apigateway
from botocore.exceptions import ClientError
from tqdm import tqdm
from .utilities import (add_event_source, conflicts_with_a_neighbouring_module,
contains_python_files_or_subdirs, copytree,
get_topic_name, get_venv_from_python_version,
human_size, remove_event_source)
##
# Logging Config
##
logging.basicConfig(format='%(levelname)s:%(message)s')
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
##
# Policies And Template Mappings
##
ASSUME_POLICY = """{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": [
"apigateway.amazonaws.com",
"lambda.amazonaws.com",
"events.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
]
}"""
ATTACH_POLICY = """{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:*"
],
"Resource": "arn:aws:logs:*:*:*"
},
{
"Effect": "Allow",
"Action": [
"lambda:InvokeFunction"
],
"Resource": [
"*"
]
},
{
"Effect": "Allow",
"Action": [
"xray:PutTraceSegments",
"xray:PutTelemetryRecords"
],
"Resource": [
"*"
]
},
{
"Effect": "Allow",
"Action": [
"ec2:AttachNetworkInterface",
"ec2:CreateNetworkInterface",
"ec2:DeleteNetworkInterface",
"ec2:DescribeInstances",
"ec2:DescribeNetworkInterfaces",
"ec2:DetachNetworkInterface",
"ec2:ModifyNetworkInterfaceAttribute",
"ec2:ResetNetworkInterfaceAttribute"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": "arn:aws:s3:::*"
},
{
"Effect": "Allow",
"Action": [
"kinesis:*"
],
"Resource": "arn:aws:kinesis:*:*:*"
},
{
"Effect": "Allow",
"Action": [
"sns:*"
],
"Resource": "arn:aws:sns:*:*:*"
},
{
"Effect": "Allow",
"Action": [
"sqs:*"
],
"Resource": "arn:aws:sqs:*:*:*"
},
{
"Effect": "Allow",
"Action": [
"dynamodb:*"
],
"Resource": "arn:aws:dynamodb:*:*:*"
},
{
"Effect": "Allow",
"Action": [
"route53:*"
],
"Resource": "*"
}
]
}"""
# Latest list: https://docs.aws.amazon.com/general/latest/gr/rande.html#apigateway_region
API_GATEWAY_REGIONS = ['us-east-1', 'us-east-2',
'us-west-1', 'us-west-2',
'eu-central-1',
'eu-north-1',
'eu-west-1', 'eu-west-2', 'eu-west-3',
'ap-northeast-1', 'ap-northeast-2', 'ap-northeast-3',
'ap-southeast-1', 'ap-southeast-2',
'ap-east-1',
'ap-south-1',
'ca-central-1',
'cn-north-1',
'cn-northwest-1',
'sa-east-1',
'us-gov-east-1', 'us-gov-west-1']
# Latest list: https://docs.aws.amazon.com/general/latest/gr/rande.html#lambda_region
LAMBDA_REGIONS = ['us-east-1', 'us-east-2',
'us-west-1', 'us-west-2',
'eu-central-1',
'eu-north-1',
'eu-west-1', 'eu-west-2', 'eu-west-3',
'ap-northeast-1', 'ap-northeast-2', 'ap-northeast-3',
'ap-southeast-1', 'ap-southeast-2',
'ap-east-1',
'ap-south-1',
'ca-central-1',
'cn-north-1',
'cn-northwest-1',
'sa-east-1',
'us-gov-east-1',
'us-gov-west-1']
# We never need to include these.
# Related: https://github.com/Miserlou/Zappa/pull/56
# Related: https://github.com/Miserlou/Zappa/pull/581
ZIP_EXCLUDES = [
'*.exe', '*.DS_Store', '*.Python', '*.git', '.git/*', '*.zip', '*.tar.gz',
'*.hg', 'pip', 'docutils*', 'setuputils*', '__pycache__/*'
]
# When using ALB as an event source for Lambdas, we need to create an alias
# to ensure that, on zappa update, the ALB doesn't lose permissions to access
# the Lambda.
# See: https://github.com/Miserlou/Zappa/pull/1730
ALB_LAMBDA_ALIAS = 'current-alb-version'
##
# Classes
##
class Zappa:
"""
Zappa!
Makes it easy to run Python web applications on AWS Lambda/API Gateway.
"""
##
# Configurables
##
http_methods = ['ANY']
role_name = "ZappaLambdaExecution"
extra_permissions = None
assume_policy = ASSUME_POLICY
attach_policy = ATTACH_POLICY
apigateway_policy = None
cloudwatch_log_levels = ['OFF', 'ERROR', 'INFO']
xray_tracing = False
##
# Credentials
##
boto_session = None
credentials_arn = None
def __init__(self,
boto_session=None,
profile_name=None,
aws_region=None,
load_credentials=True,
desired_role_name=None,
desired_role_arn=None,
runtime='python3.6', # Detected at runtime in CLI
tags=(),
endpoint_urls={},
xray_tracing=False
):
"""
Instantiate this new Zappa instance, loading any custom credentials if necessary.
"""
# Set aws_region to None to use the system's region instead
if aws_region is None:
# https://github.com/Miserlou/Zappa/issues/413
self.aws_region = boto3.Session().region_name
logger.debug("Set region from boto: %s", self.aws_region)
else:
self.aws_region = aws_region
if desired_role_name:
self.role_name = desired_role_name
if desired_role_arn:
self.credentials_arn = desired_role_arn
self.runtime = runtime
if self.runtime == 'python3.6':
self.manylinux_suffix_start = 'cp36m'
elif self.runtime == 'python3.7':
self.manylinux_suffix_start = 'cp37m'
else:
# The 'm' has been dropped in python 3.8+ since builds with and without pymalloc are ABI compatible
# See https://github.com/pypa/manylinux for a more detailed explanation
self.manylinux_suffix_start = 'cp38'
# AWS Lambda supports manylinux1/2010 and manylinux2014
manylinux_suffixes = ("2014", "2010", "1")
self.manylinux_wheel_file_match = re.compile(f'^.*{self.manylinux_suffix_start}-manylinux({"|".join(manylinux_suffixes)})_x86_64.whl$')
self.manylinux_wheel_abi3_file_match = re.compile(f'^.*cp3.-abi3-manylinux({"|".join(manylinux_suffixes)})_x86_64.whl$')
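        # e.g. (illustrative wheel names): for python3.8 the first pattern matches
        #     numpy-1.19.5-cp38-cp38-manylinux2010_x86_64.whl
        # and the abi3 pattern matches
        #     cryptography-3.4.7-cp36-abi3-manylinux2014_x86_64.whl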
self.endpoint_urls = endpoint_urls
self.xray_tracing = xray_tracing
# Some common invocations, such as DB migrations,
# can take longer than the default.
# Note that this is set to 300s, but if connected to
# APIGW, Lambda will max out at 30s.
# Related: https://github.com/Miserlou/Zappa/issues/205
long_config_dict = {
'region_name': aws_region,
'connect_timeout': 5,
'read_timeout': 300
}
long_config = botocore.client.Config(**long_config_dict)
if load_credentials:
self.load_credentials(boto_session, profile_name)
# Initialize clients
self.s3_client = self.boto_client('s3')
self.lambda_client = self.boto_client('lambda', config=long_config)
self.elbv2_client = self.boto_client('elbv2')
self.events_client = self.boto_client('events')
self.apigateway_client = self.boto_client('apigateway')
# AWS ACM certificates need to be created from us-east-1 to be used by API gateway
east_config = botocore.client.Config(region_name='us-east-1')
self.acm_client = self.boto_client('acm', config=east_config)
self.logs_client = self.boto_client('logs')
self.iam_client = self.boto_client('iam')
self.iam = self.boto_resource('iam')
self.cloudwatch = self.boto_client('cloudwatch')
self.route53 = self.boto_client('route53')
self.sns_client = self.boto_client('sns')
self.cf_client = self.boto_client('cloudformation')
self.dynamodb_client = self.boto_client('dynamodb')
self.cognito_client = self.boto_client('cognito-idp')
self.sts_client = self.boto_client('sts')
self.tags = tags
self.cf_template = troposphere.Template()
self.cf_api_resources = []
self.cf_parameters = {}
def configure_boto_session_method_kwargs(self, service, kw):
"""Allow for custom endpoint urls for non-AWS (testing and bootleg cloud) deployments"""
if service in self.endpoint_urls and not 'endpoint_url' in kw:
kw['endpoint_url'] = self.endpoint_urls[service]
return kw
def boto_client(self, service, *args, **kwargs):
"""A wrapper to apply configuration options to boto clients"""
return self.boto_session.client(service, *args, **self.configure_boto_session_method_kwargs(service, kwargs))
def boto_resource(self, service, *args, **kwargs):
"""A wrapper to apply configuration options to boto resources"""
return self.boto_session.resource(service, *args, **self.configure_boto_session_method_kwargs(service, kwargs))
def cache_param(self, value):
'''Returns a troposphere Ref to a value cached as a parameter.'''
if value not in self.cf_parameters:
keyname = chr(ord('A') + len(self.cf_parameters))
param = self.cf_template.add_parameter(troposphere.Parameter(
keyname, Type="String", Default=value, tags=self.tags
))
self.cf_parameters[value] = param
return troposphere.Ref(self.cf_parameters[value])
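    # e.g. (illustrative): the first two distinct values cached become template
    # parameters "A" and "B"; caching the same value again reuses its Ref.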
##
# Packaging
##
def copy_editable_packages(self, egg_links, temp_package_path):
""" """
for egg_link in egg_links:
with open(egg_link, 'rb') as df:
egg_path = df.read().decode('utf-8').splitlines()[0].strip()
pkgs = set([x.split(".")[0] for x in find_packages(egg_path, exclude=['test', 'tests'])])
for pkg in pkgs:
copytree(os.path.join(egg_path, pkg), os.path.join(temp_package_path, pkg), metadata=False, symlinks=False)
if temp_package_path:
# now remove any egg-links as they will cause issues if they still exist
for link in glob.glob(os.path.join(temp_package_path, "*.egg-link")):
os.remove(link)
def get_deps_list(self, pkg_name, installed_distros=None):
"""
For a given package, returns a list of required packages. Recursive.
"""
# https://github.com/Miserlou/Zappa/issues/1478. Using `pkg_resources`
# instead of `pip` is the recommended approach. The usage is nearly
# identical.
import pkg_resources
deps = []
if not installed_distros:
installed_distros = pkg_resources.WorkingSet()
for package in installed_distros:
if package.project_name.lower() == pkg_name.lower():
deps = [(package.project_name, package.version)]
for req in package.requires():
deps += self.get_deps_list(pkg_name=req.project_name, installed_distros=installed_distros)
return list(set(deps)) # de-dupe before returning
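    # e.g. (illustrative, versions depend on the active environment):
    #     get_deps_list('requests') -> [('requests', '2.25.1'), ('urllib3', '1.26.4'),
    #                                   ('idna', '2.10'), ('chardet', '4.0.0'), ...]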
def create_handler_venv(self):
"""
Takes the installed zappa and brings it into a fresh virtualenv-like folder. All dependencies are then downloaded.
"""
import subprocess
        # We will need the current venv to pull Zappa from
current_venv = self.get_current_venv()
# Make a new folder for the handler packages
ve_path = os.path.join(os.getcwd(), 'handler_venv')
if os.sys.platform == 'win32':
current_site_packages_dir = os.path.join(current_venv, 'Lib', 'site-packages')
venv_site_packages_dir = os.path.join(ve_path, 'Lib', 'site-packages')
else:
current_site_packages_dir = os.path.join(current_venv, 'lib', get_venv_from_python_version(), 'site-packages')
venv_site_packages_dir = os.path.join(ve_path, 'lib', get_venv_from_python_version(), 'site-packages')
if not os.path.isdir(venv_site_packages_dir):
os.makedirs(venv_site_packages_dir)
# Copy zappa* to the new virtualenv
zappa_things = [z for z in os.listdir(current_site_packages_dir) if z.lower()[:5] == 'zappa']
for z in zappa_things:
copytree(os.path.join(current_site_packages_dir, z), os.path.join(venv_site_packages_dir, z))
# Use pip to download zappa's dependencies. Copying from current venv causes issues with things like PyYAML that installs as yaml
zappa_deps = self.get_deps_list('zappa')
pkg_list = ['{0!s}=={1!s}'.format(dep, version) for dep, version in zappa_deps]
# Need to manually add setuptools
pkg_list.append('setuptools')
command = ["pip", "install", "--quiet", "--target", venv_site_packages_dir] + pkg_list
        # This is the recommended method for installing packages if you don't
        # want to depend on `setuptools`
# https://github.com/pypa/pip/issues/5240#issuecomment-381662679
pip_process = subprocess.Popen(command, stdout=subprocess.PIPE)
# Using communicate() to avoid deadlocks
pip_process.communicate()
pip_return_code = pip_process.returncode
if pip_return_code:
raise EnvironmentError("Pypi lookup failed")
return ve_path
# staticmethod as per https://github.com/Miserlou/Zappa/issues/780
@staticmethod
def get_current_venv():
"""
Returns the path to the current virtualenv
"""
if 'VIRTUAL_ENV' in os.environ:
venv = os.environ['VIRTUAL_ENV']
elif os.path.exists('.python-version'): # pragma: no cover
try:
subprocess.check_output(['pyenv', 'help'], stderr=subprocess.STDOUT)
except OSError:
print("This directory seems to have pyenv's local venv, "
"but pyenv executable was not found.")
with open('.python-version', 'r') as f:
# minor fix in how .python-version is read
# Related: https://github.com/Miserlou/Zappa/issues/921
env_name = f.readline().strip()
bin_path = subprocess.check_output(['pyenv', 'which', 'python']).decode('utf-8')
venv = bin_path[:bin_path.rfind(env_name)] + env_name
else: # pragma: no cover
return None
return venv
def create_lambda_zip( self,
prefix='lambda_package',
handler_file=None,
slim_handler=False,
minify=True,
exclude=None,
exclude_glob=None,
use_precompiled_packages=True,
include=None,
venv=None,
output=None,
disable_progress=False,
archive_format='zip'
):
"""
Create a Lambda-ready zip file of the current virtualenvironment and working directory.
Returns path to that file.
"""
# Validate archive_format
if archive_format not in ['zip', 'tarball']:
raise KeyError("The archive format to create a lambda package must be zip or tarball")
# Pip is a weird package.
# Calling this function in some environments without this can cause.. funkiness.
import pip
if not venv:
venv = self.get_current_venv()
build_time = str(int(time.time()))
cwd = os.getcwd()
if not output:
if archive_format == 'zip':
archive_fname = prefix + '-' + build_time + '.zip'
elif archive_format == 'tarball':
archive_fname = prefix + '-' + build_time + '.tar.gz'
else:
archive_fname = output
archive_path = os.path.join(cwd, archive_fname)
# Files that should be excluded from the zip
if exclude is None:
exclude = list()
if exclude_glob is None:
exclude_glob = list()
# Exclude the zip itself
exclude.append(archive_path)
# Make sure that 'concurrent' is always forbidden.
# https://github.com/Miserlou/Zappa/issues/827
if not 'concurrent' in exclude:
exclude.append('concurrent')
def splitpath(path):
parts = []
(path, tail) = os.path.split(path)
while path and tail:
parts.append(tail)
(path, tail) = os.path.split(path)
parts.append(os.path.join(path, tail))
return list(map(os.path.normpath, parts))[::-1]
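        # e.g. (illustrative): splitpath('/home/user/project')
        #     -> ['/', 'home', 'user', 'project']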
split_venv = splitpath(venv)
split_cwd = splitpath(cwd)
# Ideally this should be avoided automatically,
# but this serves as an okay stop-gap measure.
if split_venv[-1] == split_cwd[-1]: # pragma: no cover
print(
"Warning! Your project and virtualenv have the same name! You may want "
"to re-create your venv with a new name, or explicitly define a "
"'project_name', as this may cause errors."
)
# First, do the project..
temp_project_path = tempfile.mkdtemp(prefix='zappa-project')
if not slim_handler:
# Slim handler does not take the project files.
if minify:
# Related: https://github.com/Miserlou/Zappa/issues/744
excludes = ZIP_EXCLUDES + exclude + [split_venv[-1]]
copytree(cwd, temp_project_path, metadata=False, symlinks=False, ignore=shutil.ignore_patterns(*excludes))
else:
copytree(cwd, temp_project_path, metadata=False, symlinks=False)
for glob_path in exclude_glob:
for path in glob.glob(os.path.join(temp_project_path, glob_path)):
try:
os.remove(path)
except OSError: # is a directory
shutil.rmtree(path)
# If a handler_file is supplied, copy that to the root of the package,
# because that's where AWS Lambda looks for it. It can't be inside a package.
if handler_file:
filename = handler_file.split(os.sep)[-1]
shutil.copy(handler_file, os.path.join(temp_project_path, filename))
# Create and populate package ID file and write to temp project path
package_info = {}
package_info['uuid'] = str(uuid.uuid4())
package_info['build_time'] = build_time
package_info['build_platform'] = os.sys.platform
package_info['build_user'] = getpass.getuser()
# TODO: Add git head and info?
# Ex, from @scoates:
# def _get_git_branch():
# chdir(DIR)
# out = check_output(['git', 'rev-parse', '--abbrev-ref', 'HEAD']).strip()
# lambci_branch = environ.get('LAMBCI_BRANCH', None)
# if out == "HEAD" and lambci_branch:
# out += " lambci:{}".format(lambci_branch)
# return out
# def _get_git_hash():
# chdir(DIR)
# return check_output(['git', 'rev-parse', 'HEAD']).strip()
# def _get_uname():
# return check_output(['uname', '-a']).strip()
# def _get_user():
# return check_output(['whoami']).strip()
# def set_id_info(zappa_cli):
# build_info = {
# 'branch': _get_git_branch(),
# 'hash': _get_git_hash(),
# 'build_uname': _get_uname(),
# 'build_user': _get_user(),
# 'build_time': datetime.datetime.utcnow().isoformat(),
# }
# with open(path.join(DIR, 'id_info.json'), 'w') as f:
# json.dump(build_info, f)
# return True
package_id_file = open(os.path.join(temp_project_path, 'package_info.json'), 'w')
dumped = json.dumps(package_info, indent=4)
try:
package_id_file.write(dumped)
except TypeError: # This is a Python 2/3 issue. TODO: Make pretty!
package_id_file.write(str(dumped))
package_id_file.close()
        # Then, do the site-packages..
egg_links = []
temp_package_path = tempfile.mkdtemp(prefix='zappa-packages')
if os.sys.platform == 'win32':
site_packages = os.path.join(venv, 'Lib', 'site-packages')
else:
site_packages = os.path.join(venv, 'lib', get_venv_from_python_version(), 'site-packages')
egg_links.extend(glob.glob(os.path.join(site_packages, '*.egg-link')))
if minify:
excludes = ZIP_EXCLUDES + exclude
copytree(site_packages, temp_package_path, metadata=False, symlinks=False, ignore=shutil.ignore_patterns(*excludes))
else:
copytree(site_packages, temp_package_path, metadata=False, symlinks=False)
# We may have 64-bit specific packages too.
site_packages_64 = os.path.join(venv, 'lib64', get_venv_from_python_version(), 'site-packages')
if os.path.exists(site_packages_64):
egg_links.extend(glob.glob(os.path.join(site_packages_64, '*.egg-link')))
if minify:
excludes = ZIP_EXCLUDES + exclude
copytree(site_packages_64, temp_package_path, metadata=False, symlinks=False, ignore=shutil.ignore_patterns(*excludes))
else:
copytree(site_packages_64, temp_package_path, metadata=False, symlinks=False)
if egg_links:
self.copy_editable_packages(egg_links, temp_package_path)
copy_tree(temp_package_path, temp_project_path, update=True)
# Then the pre-compiled packages..
if use_precompiled_packages:
print("Downloading and installing dependencies..")
installed_packages = self.get_installed_packages(site_packages, site_packages_64)
try:
for installed_package_name, installed_package_version in installed_packages.items():
cached_wheel_path = self.get_cached_manylinux_wheel(installed_package_name, installed_package_version, disable_progress)
if cached_wheel_path:
# Use the manylinux wheel from PyPI instead of the copy from site-packages.
# Related: https://github.com/Miserlou/Zappa/issues/398
shutil.rmtree(os.path.join(temp_project_path, installed_package_name), ignore_errors=True)
with zipfile.ZipFile(cached_wheel_path) as zfile:
zfile.extractall(temp_project_path)
except Exception as e:
print(e)
# XXX - What should we do here?
# Cleanup
for glob_path in exclude_glob:
for path in glob.glob(os.path.join(temp_project_path, glob_path)):
try:
os.remove(path)
except OSError: # is a directory
shutil.rmtree(path)
# Then archive it all up..
if archive_format == 'zip':
print("Packaging project as zip.")
try:
import zlib  # noqa: F401 -- only imported to verify deflate support is available
compression_method = zipfile.ZIP_DEFLATED
except ImportError: # pragma: no cover
compression_method = zipfile.ZIP_STORED
archivef = zipfile.ZipFile(archive_path, 'w', compression_method)
elif archive_format == 'tarball':
print("Packaging project as gzipped tarball.")
archivef = tarfile.open(archive_path, 'w|gz')
for root, dirs, files in os.walk(temp_project_path):
for filename in files:
# Skip .pyc files for Django migrations
# https://github.com/Miserlou/Zappa/issues/436
# https://github.com/Miserlou/Zappa/issues/464
if filename[-4:] == '.pyc' and root[-10:] == 'migrations':
continue
# If there is a .pyc file in this package,
# we can skip the python source code as we'll just
# use the compiled bytecode anyway..
if filename[-3:] == '.py' and root[-10:] != 'migrations':
abs_filename = os.path.join(root, filename)
abs_pyc_filename = abs_filename + 'c'
if os.path.isfile(abs_pyc_filename):
# but only if the pyc is older than the py,
# otherwise we'll deploy outdated code!
py_time = os.stat(abs_filename).st_mtime
pyc_time = os.stat(abs_pyc_filename).st_mtime
if pyc_time > py_time:
continue
# Make sure that the files are all correctly chmodded
# Related: https://github.com/Miserlou/Zappa/issues/484
# Related: https://github.com/Miserlou/Zappa/issues/682
os.chmod(os.path.join(root, filename), 0o755)
if archive_format == 'zip':
# Actually put the file into the proper place in the zip
# Related: https://github.com/Miserlou/Zappa/pull/716
zipi = zipfile.ZipInfo(os.path.join(root.replace(temp_project_path, '').lstrip(os.sep), filename))
zipi.create_system = 3
zipi.external_attr = 0o755 << 16 # Unix permission bits live in the high 16 bits of external_attr
with open(os.path.join(root, filename), 'rb') as f:
archivef.writestr(zipi, f.read(), compression_method)
elif archive_format == 'tarball':
tarinfo = tarfile.TarInfo(os.path.join(root.replace(temp_project_path, '').lstrip(os.sep), filename))
tarinfo.mode = 0o755
stat = os.stat(os.path.join(root, filename))
tarinfo.mtime = stat.st_mtime
tarinfo.size = stat.st_size
with open(os.path.join(root, filename), 'rb') as f:
archivef.addfile(tarinfo, f)
# Create python init file if it does not exist
# Only do so if there are subfolders or Python files, and it does not conflict with a neighbouring module
# Related: https://github.com/Miserlou/Zappa/issues/766
if not contains_python_files_or_subdirs(root):
# if the directory does not contain any .py file at any level, we can skip the rest
dirs[:] = [d for d in dirs if d != root]
else:
if '__init__.py' not in files and not conflicts_with_a_neighbouring_module(root):
tmp_init = os.path.join(temp_project_path, '__init__.py')
open(tmp_init, 'a').close()
os.chmod(tmp_init, 0o755)
arcname = os.path.join(root.replace(temp_project_path, ''), '__init__.py')
if archive_format == 'zip':
archivef.write(tmp_init, arcname)
elif archive_format == 'tarball':
archivef.add(tmp_init, arcname)
# And, we're done!
archivef.close()
# Trash the temp directory
shutil.rmtree(temp_project_path)
shutil.rmtree(temp_package_path)
if os.path.isdir(venv) and slim_handler:
# Remove the temporary handler venv folder
shutil.rmtree(venv)
return archive_fname
@staticmethod
def get_installed_packages(site_packages, site_packages_64):
"""
Returns a dict of installed packages that Zappa cares about.
"""
import pkg_resources
package_to_keep = []
if os.path.isdir(site_packages):
package_to_keep += os.listdir(site_packages)
if os.path.isdir(site_packages_64):
package_to_keep += os.listdir(site_packages_64)
package_to_keep = [x.lower() for x in package_to_keep]
installed_packages = {package.project_name.lower(): package.version for package in
pkg_resources.WorkingSet()
if package.project_name.lower() in package_to_keep
or package.location.lower() in [site_packages.lower(), site_packages_64.lower()]}
return installed_packages
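# Illustrative usage (not in the original source; assumes a configured instance `zappa`
# and example paths):
#   zappa.get_installed_packages(
#       '/path/to/venv/lib/python3.6/site-packages',
#       '/path/to/venv/lib64/python3.6/site-packages')
#   # -> {'flask': '1.0.2', 'requests': '2.22.0', ...}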
@staticmethod
def download_url_with_progress(url, stream, disable_progress):
"""
Downloads a given url in chunks and writes to the provided stream (can be any io stream).
Displays a progress bar for the download.
"""
resp = requests.get(url, timeout=float(os.environ.get('PIP_TIMEOUT', 2)), stream=True)
resp.raw.decode_content = True
progress = tqdm(unit="B", unit_scale=True, total=int(resp.headers.get('Content-Length', 0)), disable=disable_progress)
for chunk in resp.iter_content(chunk_size=1024):
if chunk:
progress.update(len(chunk))
stream.write(chunk)
progress.close()
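# Illustrative usage (not in the original source; the URL is a placeholder):
#   import io
#   buf = io.BytesIO()
#   zappa.download_url_with_progress('https://example.com/pkg.whl', buf, disable_progress=True)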
def get_cached_manylinux_wheel(self, package_name, package_version, disable_progress=False):
"""
Gets the locally stored version of a manylinux wheel. If one does not exist, the function downloads it.
"""
cached_wheels_dir = os.path.join(tempfile.gettempdir(), 'cached_wheels')
if not os.path.isdir(cached_wheels_dir):
os.makedirs(cached_wheels_dir)
else:
# Check if we already have a cached copy
wheel_file = f'{package_name}-{package_version}-*_x86_64.whl'
wheel_path = os.path.join(cached_wheels_dir, wheel_file)
for pathname in glob.iglob(wheel_path):
if re.match(self.manylinux_wheel_file_match, pathname) or re.match(self.manylinux_wheel_abi3_file_match, pathname):
print(f" - {package_name}=={package_version}: Using locally cached manylinux wheel")
return pathname
# The file is not cached, download it.
wheel_url, filename = self.get_manylinux_wheel_url(package_name, package_version)
if not wheel_url:
return None
wheel_path = os.path.join(cached_wheels_dir, filename)
print(f" - {package_name}=={package_version}: Downloading")
with open(wheel_path, 'wb') as f:
self.download_url_with_progress(wheel_url, f, disable_progress)
if not zipfile.is_zipfile(wheel_path):
return None
return wheel_path
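# Illustrative usage (not in the original source; package and version are examples):
#   wheel = zappa.get_cached_manylinux_wheel('cryptography', '2.7')
#   # -> '/tmp/cached_wheels/cryptography-2.7-...-manylinux1_x86_64.whl', or None if unavailable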
def get_manylinux_wheel_url(self, package_name, package_version):
"""
For a given package name and version, returns the download URL and filename
of its manylinux wheel, else (None, None).
Related: https://github.com/Miserlou/Zappa/issues/398
Examples here: https://gist.github.com/perrygeo/9545f94eaddec18a65fd7b56880adbae
This function downloads the metadata JSON for `package_name` from PyPI
and checks whether the package has a manylinux wheel. It also
caches the JSON file so that we don't have to poll PyPI
every time.
"""
cached_pypi_info_dir = os.path.join(tempfile.gettempdir(), 'cached_pypi_info')
if not os.path.isdir(cached_pypi_info_dir):
os.makedirs(cached_pypi_info_dir)
# Even though the metadata is for the package, we save it in a
# filename that includes the package's version. This helps in
# invalidating the cached file if the user moves to a different
# version of the package.
# Related: https://github.com/Miserlou/Zappa/issues/899
json_file = '{0!s}-{1!s}.json'.format(package_name, package_version)
json_file_path = os.path.join(cached_pypi_info_dir, json_file)
if os.path.exists(json_file_path):
with open(json_file_path, 'rb') as metafile:
data = json.load(metafile)
else:
url = 'https://pypi.python.org/pypi/{}/json'.format(package_name)
try:
res = requests.get(url, timeout=float(os.environ.get('PIP_TIMEOUT', 1.5)))
data = res.json()
except Exception: # pragma: no cover
return None, None
with open(json_file_path, 'wb') as metafile:
jsondata = json.dumps(data)
metafile.write(bytes(jsondata, "utf-8"))
if package_version not in data['releases']:
return None, None
for f in data['releases'][package_version]:
if re.match(self.manylinux_wheel_file_match, f['filename']):
return f['url'], f['filename']
elif re.match(self.manylinux_wheel_abi3_file_match, f['filename']):
return f['url'], f['filename']
return None, None
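# Illustrative usage (not in the original source; package and version are examples):
#   url, filename = zappa.get_manylinux_wheel_url('cryptography', '2.7')
#   # -> ('https://files.pythonhosted.org/...', 'cryptography-2.7-...-manylinux1_x86_64.whl'),
#   #    or (None, None) when no manylinux wheel exists for that release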
##
# S3
##
def upload_to_s3(self, source_path, bucket_name, disable_progress=False):
r"""
Given a file, upload it to S3.
Credentials should be stored in environment variables or ~/.aws/credentials (%USERPROFILE%\.aws\credentials on Windows).
Returns True on success, False on failure.
"""
try:
self.s3_client.head_bucket(Bucket=bucket_name)
except botocore.exceptions.ClientError:
# This is a really odd S3 quirk. Technically, us-east-1 has no LocationConstraint;
# it's the legacy "US Standard" region.
# More here: https://github.com/boto/boto3/issues/125
if self.aws_region == 'us-east-1':
self.s3_client.create_bucket(
Bucket=bucket_name,
)
else:
self.s3_client.create_bucket(
Bucket=bucket_name,
CreateBucketConfiguration={'LocationConstraint': self.aws_region},
)
if self.tags:
tags = {
'TagSet': [{'Key': key, 'Value': self.tags[key]} for key in self.tags]
}
self.s3_client.put_bucket_tagging(Bucket=bucket_name, Tagging=tags)
if not os.path.isfile(source_path) or os.stat(source_path).st_size == 0:
print("Problem with source file {}".format(source_path))
return False
dest_path = os.path.split(source_path)[1]
try:
source_size = os.stat(source_path).st_size
print("Uploading {0} ({1})..".format(dest_path, human_size(source_size)))
progress = tqdm(total=float(os.path.getsize(source_path)), unit_scale=True, unit='B', disable=disable_progress)
# Attempt to upload to S3 using the S3 meta client with the progress bar.
# If we're unable to do that, try one more time using a session client,
# which cannot use the progress bar.
# Related: https://github.com/boto/boto3/issues/611
try:
self.s3_client.upload_file(
source_path, bucket_name, dest_path,
Callback=progress.update
)
except Exception: # pragma: no cover
self.s3_client.upload_file(source_path, bucket_name, dest_path)
progress.close()
except (KeyboardInterrupt, SystemExit): # pragma: no cover
raise
except Exception as e: # pragma: no cover
print(e)
return False
return True
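# Illustrative usage (not in the original source; file and bucket names are examples):
#   if zappa.upload_to_s3('my-app-dev-1565120000.zip', 'my-zappa-bucket'):
#       print('Package uploaded.')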
def copy_on_s3(self, src_file_name, dst_file_name, bucket_name):
"""
Copies src file to destination within a bucket.
"""
try:
self.s3_client.head_bucket(Bucket=bucket_name)
except botocore.exceptions.ClientError as e: # pragma: no cover
# If a client error is thrown, then check that it was a 404 error.
# If it was a 404 error, then the bucket does not exist.
error_code = int(e.response['Error']['Code'])
if error_code == 404:
return False
copy_src = {
"Bucket": bucket_name,
"Key": src_file_name
}
try:
self.s3_client.copy(
CopySource=copy_src,
Bucket=bucket_name,
Key=dst_file_name
)
return True
except botocore.exceptions.ClientError: # pragma: no cover
return False
def remove_from_s3(self, file_name, bucket_name):
"""
Given a file name and a bucket, remove it from S3.
There's no reason to keep the file hosted on S3 once it's been made into a Lambda function, so we can delete it from S3.
Returns True on success, False on failure.
"""
try:
self.s3_client.head_bucket(Bucket=bucket_name)
except botocore.exceptions.ClientError as e: # pragma: no cover
# If a client error is thrown, then check that it was a 404 error.
# If it was a 404 error, then the bucket does not exist.
error_code = int(e.response['Error']['Code'])
if error_code == 404:
return False
try:
self.s3_client.delete_object(Bucket=bucket_name, Key=file_name)
return True
except (botocore.exceptions.ParamValidationError, botocore.exceptions.ClientError): # pragma: no cover
return False
##
# Lambda
##
def create_lambda_function( self,
bucket=None,
function_name=None,
handler=None,
s3_key=None,
description='Zappa Deployment',
timeout=30,
memory_size=512,
publish=True,
vpc_config=None,
dead_letter_config=None,
runtime='python3.6',
aws_environment_variables=None,
aws_kms_key_arn=None,
xray_tracing=False,
local_zip=None,
use_alb=False,
layers=None,
concurrency=None,
):
"""
Given a bucket and key (or a local path) of a valid Lambda-zip, a function name and a handler, register that Lambda function.
"""
if not vpc_config:
vpc_config = {}
if not dead_letter_config:
dead_letter_config = {}
if not self.credentials_arn:
self.get_credentials_arn()
if not aws_environment_variables:
aws_environment_variables = {}
if not aws_kms_key_arn:
aws_kms_key_arn = ''
if not layers:
layers = []
kwargs = dict(
FunctionName=function_name,
Runtime=runtime,
Role=self.credentials_arn,
Handler=handler,
Description=description,
Timeout=timeout,
MemorySize=memory_size,
Publish=publish,
VpcConfig=vpc_config,
DeadLetterConfig=dead_letter_config,
Environment={'Variables': aws_environment_variables},
KMSKeyArn=aws_kms_key_arn,
TracingConfig={
'Mode': 'Active' if self.xray_tracing else 'PassThrough'
},
Layers=layers
)
if local_zip:
kwargs['Code'] = {
'ZipFile': local_zip
}
else:
kwargs['Code'] = {
'S3Bucket': bucket,
'S3Key': s3_key
}
response = self.lambda_client.create_function(**kwargs)
resource_arn = response['FunctionArn']
version = response['Version']
# If we're using an ALB, let's create an alias mapped to the newly
# created function. This allows clean, no downtime association when
# using application load balancers as an event source.
# See: https://github.com/Miserlou/Zappa/pull/1730
# https://github.com/Miserlou/Zappa/issues/1823
if use_alb:
self.lambda_client.create_alias(
FunctionName=resource_arn,
FunctionVersion=version,
Name=ALB_LAMBDA_ALIAS,
)
if self.tags:
self.lambda_client.tag_resource(Resource=resource_arn, Tags=self.tags)
if concurrency is not None:
self.lambda_client.put_function_concurrency(
FunctionName=resource_arn,
ReservedConcurrentExecutions=concurrency,
)
return resource_arn
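# Illustrative usage (not in the original source; all names are examples):
#   arn = zappa.create_lambda_function(
#       bucket='my-zappa-bucket',
#       s3_key='my-app-dev-1565120000.zip',
#       function_name='my-app-dev',
#       handler='handler.lambda_handler',
#   )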
def update_lambda_function(self, bucket, function_name, s3_key=None, publish=True, local_zip=None, num_revisions=None, concurrency=None):
"""
Given a bucket and key (or a local path) of a valid Lambda-zip, a function name and a handler, update that Lambda function's code.
Optionally, delete previous versions if they exceed the optional limit.
"""
print("Updating Lambda function code..")
kwargs = dict(
FunctionName=function_name,
Publish=publish
)
if local_zip:
kwargs['ZipFile'] = local_zip
else:
kwargs['S3Bucket'] = bucket
kwargs['S3Key'] = s3_key
response = self.lambda_client.update_function_code(**kwargs)
resource_arn = response['FunctionArn']
version = response['Version']
# If the lambda has an ALB alias, let's update the alias
# to point to the newest version of the function. We have to use a GET
# here, as there's no HEAD-esque call to retrieve metadata about a
# function alias.
# Related: https://github.com/Miserlou/Zappa/pull/1730
# https://github.com/Miserlou/Zappa/issues/1823
try:
response = self.lambda_client.get_alias(
FunctionName=function_name,
Name=ALB_LAMBDA_ALIAS,
)
alias_exists = True
except botocore.exceptions.ClientError as e: # pragma: no cover
if "ResourceNotFoundException" not in e.response["Error"]["Code"]:
raise e
alias_exists = False
if alias_exists:
self.lambda_client.update_alias(
FunctionName=function_name,
FunctionVersion=version,
Name=ALB_LAMBDA_ALIAS,
)
if concurrency is not None:
self.lambda_client.put_function_concurrency(
FunctionName=function_name,
ReservedConcurrentExecutions=concurrency,
)
else:
self.lambda_client.delete_function_concurrency(
FunctionName=function_name
)
if num_revisions:
# Find the existing revision IDs for the given function
# Related: https://github.com/Miserlou/Zappa/issues/1402
versions_in_lambda = []
versions = self.lambda_client.list_versions_by_function(FunctionName=function_name)
for version in versions['Versions']:
versions_in_lambda.append(version['Version'])
while 'NextMarker' in versions:
versions = self.lambda_client.list_versions_by_function(FunctionName=function_name,Marker=versions['NextMarker'])
for version in versions['Versions']:
versions_in_lambda.append(version['Version'])
versions_in_lambda.remove('$LATEST')
# Delete older revisions if their number exceeds the specified limit
for version in versions_in_lambda[::-1][num_revisions:]:
self.lambda_client.delete_function(FunctionName=function_name,Qualifier=version)
return resource_arn
def update_lambda_configuration( self,
lambda_arn,
function_name,
handler,
description='Zappa Deployment',
timeout=30,
memory_size=512,
publish=True,
vpc_config=None,
runtime='python3.6',
aws_environment_variables=None,
aws_kms_key_arn=None,
layers=None
):
"""
Given an existing function ARN, update the configuration variables.
"""
print("Updating Lambda function configuration..")
if not vpc_config:
vpc_config = {}
if not self.credentials_arn:
self.get_credentials_arn()
if not aws_kms_key_arn:
aws_kms_key_arn = ''
if not aws_environment_variables:
aws_environment_variables = {}
if not layers:
layers = []
# Check if there are any remote aws lambda env vars so they don't get trashed.
# https://github.com/Miserlou/Zappa/issues/987, Related: https://github.com/Miserlou/Zappa/issues/765
lambda_aws_config = self.lambda_client.get_function_configuration(FunctionName=function_name)
if "Environment" in lambda_aws_config:
lambda_aws_environment_variables = lambda_aws_config["Environment"].get("Variables", {})
# Append keys that are remote but not in settings file
for key, value in lambda_aws_environment_variables.items():
if key not in aws_environment_variables:
aws_environment_variables[key] = value
response = self.lambda_client.update_function_configuration(
FunctionName=function_name,
Runtime=runtime,
Role=self.credentials_arn,
Handler=handler,
Description=description,
Timeout=timeout,
MemorySize=memory_size,
VpcConfig=vpc_config,
Environment={'Variables': aws_environment_variables},
KMSKeyArn=aws_kms_key_arn,
TracingConfig={
'Mode': 'Active' if self.xray_tracing else 'PassThrough'
},
Layers=layers
)
resource_arn = response['FunctionArn']
if self.tags:
self.lambda_client.tag_resource(Resource=resource_arn, Tags=self.tags)
return resource_arn
def invoke_lambda_function( self,
function_name,
payload,
invocation_type='Event',
log_type='Tail',
client_context=None,
qualifier=None
):
"""
Directly invoke a named Lambda function with a payload.
Returns the response.
"""
kwargs = dict(
FunctionName=function_name,
InvocationType=invocation_type,
LogType=log_type,
Payload=payload
)
# Pass the optional context/qualifier through only when provided.
if client_context:
kwargs['ClientContext'] = client_context
if qualifier:
kwargs['Qualifier'] = qualifier
return self.lambda_client.invoke(**kwargs)
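# Illustrative usage (not in the original source; function name and payload are examples):
#   import json
#   response = zappa.invoke_lambda_function(
#       'my-app-dev',
#       json.dumps({'command': 'status'}),
#       invocation_type='RequestResponse')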
def rollback_lambda_function_version(self, function_name, versions_back=1, publish=True):
"""
Rollback the lambda function code 'versions_back' number of revisions.
Returns the Function ARN.
"""
response = self.lambda_client.list_versions_by_function(FunctionName=function_name)
# Take into account $LATEST
if len(response['Versions']) < versions_back + 1:
print("We do not have {} revisions. Aborting".format(str(versions_back)))
return False
revisions = [int(revision['Version']) for revision in response['Versions'] if revision['Version'] != '$LATEST']
revisions.sort(reverse=True)
response = self.lambda_client.get_function(FunctionName='function:{}:{}'.format(function_name, revisions[versions_back]))
response = requests.get(response['Code']['Location'])
if response.status_code != 200:
print("Failed to get version {} of {} code".format(versions_back, function_name))
return False
response = self.lambda_client.update_function_code(FunctionName=function_name, ZipFile=response.content, Publish=publish) # pragma: no cover
return response['FunctionArn']
def get_lambda_function(self, function_name):
"""
Returns the lambda function ARN, given a name
This requires the "lambda:GetFunction" role.
"""
response = self.lambda_client.get_function(
FunctionName=function_name)
return response['Configuration']['FunctionArn']
def get_lambda_function_versions(self, function_name):
"""
Simply returns the versions available for a Lambda function, given a function name.
"""
try:
response = self.lambda_client.list_versions_by_function(
FunctionName=function_name
)
return response.get('Versions', [])
except Exception:
return []
def delete_lambda_function(self, function_name):
"""
Given a function name, delete it from AWS Lambda.
Returns the response.
"""
print("Deleting Lambda function..")
return self.lambda_client.delete_function(
FunctionName=function_name,
)
##
# Application load balancer
##
def deploy_lambda_alb( self,
lambda_arn,
lambda_name,
alb_vpc_config,
timeout
):
"""
The `zappa deploy` functionality for ALB infrastructure.
"""
if not alb_vpc_config:
raise EnvironmentError('When creating an ALB, alb_vpc_config must be filled out in zappa_settings.')
if 'SubnetIds' not in alb_vpc_config:
raise EnvironmentError('When creating an ALB, you must supply two subnets in different availability zones.')
if 'SecurityGroupIds' not in alb_vpc_config:
alb_vpc_config["SecurityGroupIds"] = []
if not alb_vpc_config.get('CertificateArn'):
raise EnvironmentError('When creating an ALB, you must supply a CertificateArn for the HTTPS listener.')
# Related: https://github.com/Miserlou/Zappa/issues/1856
if 'Scheme' not in alb_vpc_config:
alb_vpc_config["Scheme"] = "internet-facing"
print("Deploying ALB infrastructure...")
# Create load balancer
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elbv2.html#ElasticLoadBalancingv2.Client.create_load_balancer
kwargs = dict(
Name=lambda_name,
Subnets=alb_vpc_config["SubnetIds"],
SecurityGroups=alb_vpc_config["SecurityGroupIds"],
Scheme=alb_vpc_config["Scheme"],
# TODO: Tags might be a useful means of stock-keeping zappa-generated assets.
#Tags=[],
Type="application",
# TODO: can be ipv4 or dualstack (for ipv4 and ipv6); ipv4 is required for an internal Scheme.
IpAddressType="ipv4"
)
response = self.elbv2_client.create_load_balancer(**kwargs)
if not(response["LoadBalancers"]) or len(response["LoadBalancers"]) != 1:
raise EnvironmentError("Failure to create application load balancer. Response was in unexpected format. Response was: {}".format(repr(response)))
if response["LoadBalancers"][0]['State']['Code'] == 'failed':
raise EnvironmentError("Failure to create application load balancer. Response reported a failed state: {}".format(response["LoadBalancers"][0]['State']['Reason']))
load_balancer_arn = response["LoadBalancers"][0]["LoadBalancerArn"]
load_balancer_dns = response["LoadBalancers"][0]["DNSName"]
load_balancer_vpc = response["LoadBalancers"][0]["VpcId"]
waiter = self.elbv2_client.get_waiter('load_balancer_available')
print('Waiting for load balancer [{}] to become active..'.format(load_balancer_arn))
waiter.wait(LoadBalancerArns=[load_balancer_arn], WaiterConfig={"Delay": 3})
# Match the lambda timeout on the load balancer.
self.elbv2_client.modify_load_balancer_attributes(
LoadBalancerArn=load_balancer_arn,
Attributes=[{
'Key': 'idle_timeout.timeout_seconds',
'Value': str(timeout)
}]
)
# Create/associate target group.
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elbv2.html#ElasticLoadBalancingv2.Client.create_target_group
kwargs = dict(
Name=lambda_name,
TargetType="lambda",
# TODO: Add options for health checks
)
response = self.elbv2_client.create_target_group(**kwargs)
if not(response["TargetGroups"]) or len(response["TargetGroups"]) != 1:
raise EnvironmentError("Failure to create application load balancer target group. Response was in unexpected format. Response was: {}".format(repr(response)))
target_group_arn = response["TargetGroups"][0]["TargetGroupArn"]
# Enable multi-value headers by default.
response = self.elbv2_client.modify_target_group_attributes(
TargetGroupArn=target_group_arn,
Attributes=[
{
'Key': 'lambda.multi_value_headers.enabled',
'Value': 'true'
},
]
)
# Allow execute permissions from target group to lambda.
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/lambda.html#Lambda.Client.add_permission
kwargs = dict(
Action="lambda:InvokeFunction",
FunctionName="{}:{}".format(lambda_arn, ALB_LAMBDA_ALIAS),
Principal="elasticloadbalancing.amazonaws.com",
SourceArn=target_group_arn,
StatementId=lambda_name
)
response = self.lambda_client.add_permission(**kwargs)
# Register target group to lambda association.
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elbv2.html#ElasticLoadBalancingv2.Client.register_targets
kwargs = dict(
TargetGroupArn=target_group_arn,
Targets=[{"Id": "{}:{}".format(lambda_arn, ALB_LAMBDA_ALIAS)}]
)
response = self.elbv2_client.register_targets(**kwargs)
# Bind listener to load balancer with default rule to target group.
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elbv2.html#ElasticLoadBalancingv2.Client.create_listener
kwargs = dict(
# TODO: Listeners support custom ssl certificates (Certificates). For now we leave this default.
Certificates=[{"CertificateArn": alb_vpc_config['CertificateArn']}],
DefaultActions=[{
"Type": "forward",
"TargetGroupArn": target_group_arn,
}],
LoadBalancerArn=load_balancer_arn,
Protocol="HTTPS",
# TODO: Add option for custom ports
Port=443,
# TODO: Listeners support custom ssl security policy (SslPolicy). For now we leave this default.
)
response = self.elbv2_client.create_listener(**kwargs)
print("ALB created with DNS: {}".format(load_balancer_dns))
print("Note it may take several minutes for load balancer to become available.")
def undeploy_lambda_alb(self, lambda_name):
"""
The `zappa undeploy` functionality for ALB infrastructure.
"""
print("Undeploying ALB infrastructure...")
# Locate and delete alb/lambda permissions
try:
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/lambda.html#Lambda.Client.remove_permission
self.lambda_client.remove_permission(
FunctionName=lambda_name,
StatementId=lambda_name
)
except botocore.exceptions.ClientError as e: # pragma: no cover
if "ResourceNotFoundException" in e.response["Error"]["Code"]:
pass
else:
raise e
# Locate and delete load balancer
try:
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elbv2.html#ElasticLoadBalancingv2.Client.describe_load_balancers
response = self.elbv2_client.describe_load_balancers(
Names=[lambda_name]
)
if not(response["LoadBalancers"]) or len(response["LoadBalancers"]) > 1:
raise EnvironmentError("Failure to locate/delete ALB named [{}]. Response was: {}".format(lambda_name, repr(response)))
load_balancer_arn = response["LoadBalancers"][0]["LoadBalancerArn"]
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elbv2.html#ElasticLoadBalancingv2.Client.describe_listeners
response = self.elbv2_client.describe_listeners(LoadBalancerArn=load_balancer_arn)
if not(response["Listeners"]):
print('No listeners found.')
elif len(response["Listeners"]) > 1:
raise EnvironmentError("Failure to locate/delete listener for ALB named [{}]. Response was: {}".format(lambda_name, repr(response)))
else:
listener_arn = response["Listeners"][0]["ListenerArn"]
# Remove the listener. This explicit deletion of the listener seems necessary to avoid ResourceInUseExceptions when deleting target groups.
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elbv2.html#ElasticLoadBalancingv2.Client.delete_listener
response = self.elbv2_client.delete_listener(ListenerArn=listener_arn)
# Remove the load balancer and wait for completion
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elbv2.html#ElasticLoadBalancingv2.Client.delete_load_balancer
response = self.elbv2_client.delete_load_balancer(LoadBalancerArn=load_balancer_arn)
waiter = self.elbv2_client.get_waiter('load_balancers_deleted')
print('Waiting for load balancer [{}] to be deleted..'.format(lambda_name))
waiter.wait(LoadBalancerArns=[load_balancer_arn], WaiterConfig={"Delay": 3})
except botocore.exceptions.ClientError as e: # pragma: no cover
print(e.response["Error"]["Code"])
if "LoadBalancerNotFound" in e.response["Error"]["Code"]:
pass
else:
raise e
# Locate and delete target group
try:
# Locate the lambda ARN
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/lambda.html#Lambda.Client.get_function
response = self.lambda_client.get_function(FunctionName=lambda_name)
lambda_arn = response["Configuration"]["FunctionArn"]
# Locate the target group ARN
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elbv2.html#ElasticLoadBalancingv2.Client.describe_target_groups
response = self.elbv2_client.describe_target_groups(Names=[lambda_name])
if not(response["TargetGroups"]) or len(response["TargetGroups"]) > 1:
raise EnvironmentError("Failure to locate/delete ALB target group named [{}]. Response was: {}".format(lambda_name, repr(response)))
target_group_arn = response["TargetGroups"][0]["TargetGroupArn"]
# Deregister targets and wait for completion
self.elbv2_client.deregister_targets(
TargetGroupArn=target_group_arn,
Targets=[{"Id": lambda_arn}]
)
waiter = self.elbv2_client.get_waiter('target_deregistered')
print('Waiting for target [{}] to be deregistered...'.format(lambda_name))
waiter.wait(
TargetGroupArn=target_group_arn,
Targets=[{"Id": lambda_arn}],
WaiterConfig={"Delay": 3}
)
# Remove the target group
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elbv2.html#ElasticLoadBalancingv2.Client.delete_target_group
self.elbv2_client.delete_target_group(TargetGroupArn=target_group_arn)
except botocore.exceptions.ClientError as e: # pragma: no cover
print(e.response["Error"]["Code"])
if "TargetGroupNotFound" in e.response["Error"]["Code"]:
pass
else:
raise e
##
# API Gateway
##
def create_api_gateway_routes( self,
lambda_arn,
api_name=None,
api_key_required=False,
authorization_type='NONE',
authorizer=None,
cors_options=None,
description=None,
endpoint_configuration=None
):
"""
Create the API Gateway for this Zappa deployment.
Returns the new RestAPI CF resource.
"""
restapi = troposphere.apigateway.RestApi('Api')
restapi.Name = api_name or lambda_arn.split(':')[-1]
if not description:
description = 'Created automatically by Zappa.'
restapi.Description = description
endpoint_configuration = [] if endpoint_configuration is None else endpoint_configuration
if self.boto_session.region_name == "us-gov-west-1":
endpoint_configuration.append("REGIONAL")
if endpoint_configuration:
endpoint = troposphere.apigateway.EndpointConfiguration()
endpoint.Types = list(set(endpoint_configuration))
restapi.EndpointConfiguration = endpoint
if self.apigateway_policy:
restapi.Policy = json.loads(self.apigateway_policy)
self.cf_template.add_resource(restapi)
root_id = troposphere.GetAtt(restapi, 'RootResourceId')
invocation_prefix = "aws" if self.boto_session.region_name != "us-gov-west-1" else "aws-us-gov"
invocations_uri = 'arn:' + invocation_prefix + ':apigateway:' + self.boto_session.region_name + ':lambda:path/2015-03-31/functions/' + lambda_arn + '/invocations'
##
# The Resources
##
authorizer_resource = None
if authorizer:
authorizer_lambda_arn = authorizer.get('arn', lambda_arn)
lambda_uri = 'arn:{invocation_prefix}:apigateway:{region_name}:lambda:path/2015-03-31/functions/{lambda_arn}/invocations'.format(
invocation_prefix=invocation_prefix,
region_name=self.boto_session.region_name,
lambda_arn=authorizer_lambda_arn
)
authorizer_resource = self.create_authorizer(
restapi, lambda_uri, authorizer
)
self.create_and_setup_methods( restapi,
root_id,
api_key_required,
invocations_uri,
authorization_type,
authorizer_resource,
0
)
if cors_options:
self.create_and_setup_cors( restapi,
root_id,
invocations_uri,
0,
cors_options
)
resource = troposphere.apigateway.Resource('ResourceAnyPathSlashed')
self.cf_api_resources.append(resource.title)
resource.RestApiId = troposphere.Ref(restapi)
resource.ParentId = root_id
resource.PathPart = "{proxy+}"
self.cf_template.add_resource(resource)
self.create_and_setup_methods( restapi,
resource,
api_key_required,
invocations_uri,
authorization_type,
authorizer_resource,
1
) # pragma: no cover
if cors_options:
self.create_and_setup_cors( restapi,
resource,
invocations_uri,
1,
cors_options
) # pragma: no cover
return restapi
def create_authorizer(self, restapi, uri, authorizer):
"""
Create Authorizer for API gateway
"""
authorizer_type = authorizer.get("type", "TOKEN").upper()
identity_validation_expression = authorizer.get('validation_expression', None)
authorizer_resource = troposphere.apigateway.Authorizer("Authorizer")
authorizer_resource.RestApiId = troposphere.Ref(restapi)
authorizer_resource.Name = authorizer.get("name", "ZappaAuthorizer")
authorizer_resource.Type = authorizer_type
authorizer_resource.AuthorizerUri = uri
authorizer_resource.IdentitySource = "method.request.header.%s" % authorizer.get('token_header', 'Authorization')
if identity_validation_expression:
authorizer_resource.IdentityValidationExpression = identity_validation_expression
if authorizer_type == 'TOKEN':
if not self.credentials_arn:
self.get_credentials_arn()
authorizer_resource.AuthorizerResultTtlInSeconds = authorizer.get('result_ttl', 300)
authorizer_resource.AuthorizerCredentials = self.credentials_arn
if authorizer_type == 'COGNITO_USER_POOLS':
authorizer_resource.ProviderARNs = authorizer.get('provider_arns')
self.cf_api_resources.append(authorizer_resource.title)
self.cf_template.add_resource(authorizer_resource)
return authorizer_resource
def create_and_setup_methods(
self,
restapi,
resource,
api_key_required,
uri,
authorization_type,
authorizer_resource,
depth
):
"""
Set up the methods, integration responses and method responses for a given API Gateway resource.
"""
for method_name in self.http_methods:
method = troposphere.apigateway.Method(method_name + str(depth))
method.RestApiId = troposphere.Ref(restapi)
if isinstance(resource, troposphere.apigateway.Resource):
method.ResourceId = troposphere.Ref(resource)
else:
method.ResourceId = resource
method.HttpMethod = method_name.upper()
method.AuthorizationType = authorization_type
if authorizer_resource:
method.AuthorizerId = troposphere.Ref(authorizer_resource)
method.ApiKeyRequired = api_key_required
method.MethodResponses = []
self.cf_template.add_resource(method)
self.cf_api_resources.append(method.title)
if not self.credentials_arn:
self.get_credentials_arn()
credentials = self.credentials_arn # This must be a Role ARN
integration = troposphere.apigateway.Integration()
integration.CacheKeyParameters = []
integration.CacheNamespace = 'none'
integration.Credentials = credentials
integration.IntegrationHttpMethod = 'POST'
integration.IntegrationResponses = []
integration.PassthroughBehavior = 'NEVER'
integration.Type = 'AWS_PROXY'
integration.Uri = uri
method.Integration = integration
def create_and_setup_cors(self, restapi, resource, uri, depth, config):
"""
Set up the CORS OPTIONS method, with its integration and method responses, for a given API Gateway resource.
"""
if config is True:
config = {}
method_name = "OPTIONS"
method = troposphere.apigateway.Method(method_name + str(depth))
method.RestApiId = troposphere.Ref(restapi)
if isinstance(resource, troposphere.apigateway.Resource):
method.ResourceId = troposphere.Ref(resource)
else:
method.ResourceId = resource
method.HttpMethod = method_name.upper()
method.AuthorizationType = "NONE"
method_response = troposphere.apigateway.MethodResponse()
method_response.ResponseModels = {
"application/json": "Empty"
}
response_headers = {
"Access-Control-Allow-Headers": "'%s'" % ",".join(config.get(
"allowed_headers", ["Content-Type", "X-Amz-Date",
"Authorization", "X-Api-Key",
"X-Amz-Security-Token"])),
"Access-Control-Allow-Methods": "'%s'" % ",".join(config.get(
"allowed_methods", ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"])),
"Access-Control-Allow-Origin": "'%s'" % config.get(
"allowed_origin", "*")
}
method_response.ResponseParameters = {
"method.response.header.%s" % key: True for key in response_headers
}
method_response.StatusCode = "200"
method.MethodResponses = [
method_response
]
self.cf_template.add_resource(method)
self.cf_api_resources.append(method.title)
integration = troposphere.apigateway.Integration()
integration.Type = 'MOCK'
integration.PassthroughBehavior = 'NEVER'
integration.RequestTemplates = {
"application/json": "{\"statusCode\": 200}"
}
integration_response = troposphere.apigateway.IntegrationResponse()
integration_response.ResponseParameters = {
"method.response.header.%s" % key: value for key, value in response_headers.items()
}
integration_response.ResponseTemplates = {
"application/json": ""
}
integration_response.StatusCode = "200"
integration.IntegrationResponses = [
integration_response
]
integration.Uri = uri
method.Integration = integration
def deploy_api_gateway( self,
api_id,
stage_name,
stage_description="",
description="",
cache_cluster_enabled=False,
cache_cluster_size='0.5',
variables=None,
cloudwatch_log_level='OFF',
cloudwatch_data_trace=False,
cloudwatch_metrics_enabled=False,
cache_cluster_ttl=300,
cache_cluster_encrypted=False
):
"""
Deploy the API Gateway!
Return the deployed API URL.
"""
print("Deploying API Gateway..")
self.apigateway_client.create_deployment(
restApiId=api_id,
stageName=stage_name,
stageDescription=stage_description,
description=description,
cacheClusterEnabled=cache_cluster_enabled,
cacheClusterSize=cache_cluster_size,
variables=variables or {}
)
if cloudwatch_log_level not in self.cloudwatch_log_levels:
cloudwatch_log_level = 'OFF'
self.apigateway_client.update_stage(
restApiId=api_id,
stageName=stage_name,
patchOperations=[
self.get_patch_op('logging/loglevel', cloudwatch_log_level),
self.get_patch_op('logging/dataTrace', cloudwatch_data_trace),
self.get_patch_op('metrics/enabled', cloudwatch_metrics_enabled),
self.get_patch_op('caching/ttlInSeconds', str(cache_cluster_ttl)),
self.get_patch_op('caching/dataEncrypted', cache_cluster_encrypted)
]
)
return "https://{}.execute-api.{}.amazonaws.com/{}".format(api_id, self.boto_session.region_name, stage_name)
def add_binary_support(self, api_id, cors=False):
"""
Add binary support
"""
response = self.apigateway_client.get_rest_api(
restApiId=api_id
)
if "binaryMediaTypes" not in response or "*/*" not in response["binaryMediaTypes"]:
self.apigateway_client.update_rest_api(
restApiId=api_id,
patchOperations=[
{
'op': "add",
'path': '/binaryMediaTypes/*~1*'
}
]
)
if cors:
# Fix for issues 699 and 1035: CORS and binary support don't work together.
# Go through each resource and update the contentHandling type.
response = self.apigateway_client.get_resources(restApiId=api_id)
resource_ids = [
item['id'] for item in response['items']
if 'OPTIONS' in item.get('resourceMethods', {})
]
for resource_id in resource_ids:
self.apigateway_client.update_integration(
restApiId=api_id,
resourceId=resource_id,
httpMethod='OPTIONS',
patchOperations=[
{
"op": "replace",
"path": "/contentHandling",
"value": "CONVERT_TO_TEXT"
}
]
)
def remove_binary_support(self, api_id, cors=False):
"""
Remove binary support
"""
response = self.apigateway_client.get_rest_api(
restApiId=api_id
)
if "binaryMediaTypes" in response and "*/*" in response["binaryMediaTypes"]:
self.apigateway_client.update_rest_api(
restApiId=api_id,
patchOperations=[
{
'op': 'remove',
'path': '/binaryMediaTypes/*~1*'
}
]
)
if cors:
# go through each resource and change the contentHandling type
response = self.apigateway_client.get_resources(restApiId=api_id)
resource_ids = [
item['id'] for item in response['items']
if 'OPTIONS' in item.get('resourceMethods', {})
]
for resource_id in resource_ids:
self.apigateway_client.update_integration(
restApiId=api_id,
resourceId=resource_id,
httpMethod='OPTIONS',
patchOperations=[
{
"op": "replace",
"path": "/contentHandling",
"value": ""
}
]
)
def add_api_compression(self, api_id, min_compression_size):
"""
Add Rest API compression
"""
self.apigateway_client.update_rest_api(
restApiId=api_id,
patchOperations=[
{
'op': 'replace',
'path': '/minimumCompressionSize',
'value': str(min_compression_size)
}
]
)
def remove_api_compression(self, api_id):
"""
Remove Rest API compression
"""
self.apigateway_client.update_rest_api(
restApiId=api_id,
patchOperations=[
{
'op': 'replace',
'path': '/minimumCompressionSize',
}
]
)
def get_api_keys(self, api_id, stage_name):
"""
Generator that iterates over the API keys associated with an api_id and a stage_name.
"""
response = self.apigateway_client.get_api_keys(limit=500)
stage_key = '{}/{}'.format(api_id, stage_name)
for api_key in response.get('items'):
if stage_key in api_key.get('stageKeys'):
yield api_key.get('id')
def create_api_key(self, api_id, stage_name):
"""
Create a new API key and link it to an api_id and a stage_name.
"""
response = self.apigateway_client.create_api_key(
name='{}_{}'.format(stage_name, api_id),
description='Api Key for {}'.format(api_id),
enabled=True,
stageKeys=[
{
'restApiId': '{}'.format(api_id),
'stageName': '{}'.format(stage_name)
},
]
)
print('Created a new x-api-key: {}'.format(response['id']))
def remove_api_key(self, api_id, stage_name):
"""
Remove a generated API key for api_id and stage_name
"""
response = self.apigateway_client.get_api_keys(
limit=1,
nameQuery='{}_{}'.format(stage_name, api_id)
)
for api_key in response.get('items'):
self.apigateway_client.delete_api_key(
apiKey="{}".format(api_key['id'])
)
def add_api_stage_to_api_key(self, api_key, api_id, stage_name):
"""
Add api stage to Api key
"""
self.apigateway_client.update_api_key(
apiKey=api_key,
patchOperations=[
{
'op': 'add',
'path': '/stages',
'value': '{}/{}'.format(api_id, stage_name)
}
]
)
def get_patch_op(self, keypath, value, op='replace'):
"""
Return a patch-operation dict describing a configuration change for the given stage.
The setting is applied to all available HTTP methods.
"""
if isinstance(value, bool):
value = str(value).lower()
return {'op': op, 'path': '/*/*/{}'.format(keypath), 'value': value}
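# Illustrative examples (not in the original source):
#   zappa.get_patch_op('logging/loglevel', 'INFO')
#   # -> {'op': 'replace', 'path': '/*/*/logging/loglevel', 'value': 'INFO'}
#   zappa.get_patch_op('logging/dataTrace', False)
#   # -> {'op': 'replace', 'path': '/*/*/logging/dataTrace', 'value': 'false'}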
def get_rest_apis(self, project_name):
"""
Generator that iterates over the available REST APIs matching the given project name.
"""
all_apis = self.apigateway_client.get_rest_apis(
limit=500
)
for api in all_apis['items']:
if api['name'] != project_name:
continue
yield api
def undeploy_api_gateway(self, lambda_name, domain_name=None, base_path=None):
"""
Delete a deployed REST API Gateway.
"""
print("Deleting API Gateway..")
api_id = self.get_api_id(lambda_name)
if domain_name:
# XXX - Remove Route53 smartly here?
# XXX - This doesn't raise, but doesn't work either.
try:
self.apigateway_client.delete_base_path_mapping(
domainName=domain_name,
basePath='(none)' if base_path is None else base_path
)
except Exception:
# We may not have actually set up the domain.
pass
was_deleted = self.delete_stack(lambda_name, wait=True)
if not was_deleted:
# try erasing it with the older method
for api in self.get_rest_apis(lambda_name):
self.apigateway_client.delete_rest_api(
restApiId=api['id']
)
def update_stage_config( self,
project_name,
stage_name,
cloudwatch_log_level,
cloudwatch_data_trace,
cloudwatch_metrics_enabled
):
"""
Update CloudWatch metrics configuration.
"""
if cloudwatch_log_level not in self.cloudwatch_log_levels:
cloudwatch_log_level = 'OFF'
for api in self.get_rest_apis(project_name):
self.apigateway_client.update_stage(
restApiId=api['id'],
stageName=stage_name,
patchOperations=[
self.get_patch_op('logging/loglevel', cloudwatch_log_level),
self.get_patch_op('logging/dataTrace', cloudwatch_data_trace),
self.get_patch_op('metrics/enabled', cloudwatch_metrics_enabled),
]
)
def update_cognito(self, lambda_name, user_pool, lambda_configs, lambda_arn):
LambdaConfig = {}
for config in lambda_configs:
LambdaConfig[config] = lambda_arn
description = self.cognito_client.describe_user_pool(UserPoolId=user_pool)
description_kwargs = {}
for key, value in description['UserPool'].items():
if key in ('UserPoolId', 'Policies', 'AutoVerifiedAttributes', 'SmsVerificationMessage',
'EmailVerificationMessage', 'EmailVerificationSubject', 'VerificationMessageTemplate',
'SmsAuthenticationMessage', 'MfaConfiguration', 'DeviceConfiguration',
'EmailConfiguration', 'SmsConfiguration', 'UserPoolTags',
'AdminCreateUserConfig'):
description_kwargs[key] = value
elif key == 'LambdaConfig':
for lckey, lcvalue in value.items():
if lckey in LambdaConfig:
value[lckey] = LambdaConfig[lckey]
print("value", value)
description_kwargs[key] = value
if 'LambdaConfig' not in description_kwargs:
description_kwargs['LambdaConfig'] = LambdaConfig
if 'TemporaryPasswordValidityDays' in description_kwargs['Policies']['PasswordPolicy']:
description_kwargs['AdminCreateUserConfig'].pop(
'UnusedAccountValidityDays', None)
if 'UnusedAccountValidityDays' in description_kwargs['AdminCreateUserConfig']:
description_kwargs['Policies']['PasswordPolicy']\
['TemporaryPasswordValidityDays'] = description_kwargs['AdminCreateUserConfig'].pop(
'UnusedAccountValidityDays', None)
result = self.cognito_client.update_user_pool(UserPoolId=user_pool, **description_kwargs)
if result['ResponseMetadata']['HTTPStatusCode'] != 200:
print("Cognito: Failed to update user pool", result)
# Now we need to add a policy to the IAM that allows cognito access
result = self.create_event_permission(lambda_name,
'cognito-idp.amazonaws.com',
'arn:aws:cognito-idp:{}:{}:userpool/{}'.
format(self.aws_region,
self.sts_client.get_caller_identity().get('Account'),
user_pool)
)
if result['ResponseMetadata']['HTTPStatusCode'] != 201:
print("Cognito: Failed to update lambda permission", result)
def delete_stack(self, name, wait=False):
"""
Delete the CF stack managed by Zappa.
"""
try:
stack = self.cf_client.describe_stacks(StackName=name)['Stacks'][0]
except: # pragma: no cover
print('No Zappa stack named {0}'.format(name))
return False
tags = {x['Key']:x['Value'] for x in stack['Tags']}
if tags.get('ZappaProject') == name:
self.cf_client.delete_stack(StackName=name)
if wait:
waiter = self.cf_client.get_waiter('stack_delete_complete')
print('Waiting for stack {0} to be deleted..'.format(name))
waiter.wait(StackName=name)
return True
else:
print('ZappaProject tag not found on {0}, doing nothing'.format(name))
return False
def create_stack_template( self,
lambda_arn,
lambda_name,
api_key_required,
iam_authorization,
authorizer,
cors_options=None,
description=None,
endpoint_configuration=None
):
"""
Build the entire CF stack.
Just used for the API Gateway, but could be expanded in the future.
"""
auth_type = "NONE"
if iam_authorization and authorizer:
logger.warning("Both IAM Authorization and Authorizer are specified, which is not possible. "
"Setting Auth method to IAM Authorization")
authorizer = None
auth_type = "AWS_IAM"
elif iam_authorization:
auth_type = "AWS_IAM"
elif authorizer:
auth_type = authorizer.get("type", "CUSTOM")
# build a fresh template
self.cf_template = troposphere.Template()
self.cf_template.add_description('Automatically generated with Zappa')
self.cf_api_resources = []
self.cf_parameters = {}
restapi = self.create_api_gateway_routes(
lambda_arn,
api_name=lambda_name,
api_key_required=api_key_required,
authorization_type=auth_type,
authorizer=authorizer,
cors_options=cors_options,
description=description,
endpoint_configuration=endpoint_configuration
)
return self.cf_template
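# Illustrative usage (not in the original source; arguments are examples):
#   template = zappa.create_stack_template(
#       lambda_arn, 'my-app-dev',
#       api_key_required=False, iam_authorization=False, authorizer=None)
#   # template.to_json() yields the CloudFormation JSON uploaded by update_stack().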
def update_stack(self, name, working_bucket, wait=False, update_only=False, disable_progress=False):
"""
Update or create the CF stack managed by Zappa.
"""
capabilities = []
template = name + '-template-' + str(int(time.time())) + '.json'
with open(template, 'wb') as out:
out.write(bytes(self.cf_template.to_json(indent=None, separators=(',',':')), "utf-8"))
self.upload_to_s3(template, working_bucket, disable_progress=disable_progress)
if self.boto_session.region_name == "us-gov-west-1":
url = 'https://s3-us-gov-west-1.amazonaws.com/{0}/{1}'.format(working_bucket, template)
else:
url = 'https://s3.amazonaws.com/{0}/{1}'.format(working_bucket, template)
tags = [{'Key': key, 'Value': self.tags[key]}
for key in self.tags
if key != 'ZappaProject']
tags.append({'Key':'ZappaProject','Value':name})
update = True
try:
self.cf_client.describe_stacks(StackName=name)
except botocore.client.ClientError:
update = False
if update_only and not update:
print('CloudFormation stack missing, re-deploy to enable updates')
return
if not update:
self.cf_client.create_stack(StackName=name,
Capabilities=capabilities,
TemplateURL=url,
Tags=tags)
print('Waiting for stack {0} to create (this can take a bit)..'.format(name))
else:
try:
self.cf_client.update_stack(StackName=name,
Capabilities=capabilities,
TemplateURL=url,
Tags=tags)
print('Waiting for stack {0} to update..'.format(name))
except botocore.client.ClientError as e:
if e.response['Error']['Message'] == 'No updates are to be performed.':
wait = False
else:
raise
if wait:
total_resources = len(self.cf_template.resources)
current_resources = 0
sr = self.cf_client.get_paginator('list_stack_resources')
progress = tqdm(total=total_resources, unit='res', disable=disable_progress)
while True:
time.sleep(3)
result = self.cf_client.describe_stacks(StackName=name)
if not result['Stacks']:
continue # might need to wait a bit
if result['Stacks'][0]['StackStatus'] in ['CREATE_COMPLETE', 'UPDATE_COMPLETE']:
break
# Something has gone wrong.
# Is raising enough? Should we also remove the Lambda function?
if result['Stacks'][0]['StackStatus'] in [
'DELETE_COMPLETE',
'DELETE_IN_PROGRESS',
'ROLLBACK_IN_PROGRESS',
'UPDATE_ROLLBACK_COMPLETE_CLEANUP_IN_PROGRESS',
'UPDATE_ROLLBACK_COMPLETE'
]:
raise EnvironmentError("Stack creation failed. "
"Please check your CloudFormation console. "
"You may also need to `undeploy`.")
count = 0
for result in sr.paginate(StackName=name):
done = (1 for x in result['StackResourceSummaries']
if 'COMPLETE' in x['ResourceStatus'])
count += sum(done)
if count:
# We can end up in a situation where we have more resources being created
# than anticipated.
if (count - current_resources) > 0:
progress.update(count - current_resources)
current_resources = count
progress.close()
try:
os.remove(template)
except OSError:
pass
self.remove_from_s3(template, working_bucket)
def stack_outputs(self, name):
"""
Given a name, describes the CloudFormation stack and returns a dict of its
Outputs, else returns an empty dict.
"""
try:
stack = self.cf_client.describe_stacks(StackName=name)['Stacks'][0]
return {x['OutputKey']: x['OutputValue'] for x in stack['Outputs']}
except botocore.client.ClientError:
return {}
def get_api_url(self, lambda_name, stage_name):
"""
Given a lambda_name and stage_name, return a valid API URL.
"""
api_id = self.get_api_id(lambda_name)
if api_id:
return "https://{}.execute-api.{}.amazonaws.com/{}".format(api_id, self.boto_session.region_name, stage_name)
else:
return None
def get_api_id(self, lambda_name):
"""
Given a lambda_name, return the API id.
"""
try:
response = self.cf_client.describe_stack_resource(StackName=lambda_name,
LogicalResourceId='Api')
return response['StackResourceDetail'].get('PhysicalResourceId', None)
except: # pragma: no cover
try:
# Try the old method (project was probably made on an older, non CF version)
response = self.apigateway_client.get_rest_apis(limit=500)
for item in response['items']:
if item['name'] == lambda_name:
return item['id']
logger.exception('Could not get API ID.')
return None
except: # pragma: no cover
# We don't even have an API deployed. That's okay!
return None
def create_domain_name(self,
domain_name,
certificate_name,
certificate_body=None,
certificate_private_key=None,
certificate_chain=None,
certificate_arn=None,
lambda_name=None,
stage=None,
base_path=None):
"""
Creates the API GW domain and returns the resulting DNS name.
"""
# This is a Let's Encrypt or custom certificate
if not certificate_arn:
agw_response = self.apigateway_client.create_domain_name(
domainName=domain_name,
certificateName=certificate_name,
certificateBody=certificate_body,
certificatePrivateKey=certificate_private_key,
certificateChain=certificate_chain
)
# This is an AWS ACM-hosted Certificate
else:
agw_response = self.apigateway_client.create_domain_name(
domainName=domain_name,
certificateName=certificate_name,
certificateArn=certificate_arn
)
api_id = self.get_api_id(lambda_name)
if not api_id:
raise LookupError("No API URL to certify found - did you deploy?")
self.apigateway_client.create_base_path_mapping(
domainName=domain_name,
basePath='' if base_path is None else base_path,
restApiId=api_id,
stage=stage
)
return agw_response['distributionDomainName']
def update_route53_records(self, domain_name, dns_name):
"""
Updates Route53 Records following GW domain creation
"""
zone_id = self.get_hosted_zone_id_for_domain(domain_name)
is_apex = self.route53.get_hosted_zone(Id=zone_id)['HostedZone']['Name'][:-1] == domain_name
if is_apex:
record_set = {
'Name': domain_name,
'Type': 'A',
'AliasTarget': {
'HostedZoneId': 'Z2FDTNDATAQYW2', # This is a magic value that means "CloudFront"
'DNSName': dns_name,
'EvaluateTargetHealth': False
}
}
else:
record_set = {
'Name': domain_name,
'Type': 'CNAME',
'ResourceRecords': [
{
'Value': dns_name
}
],
'TTL': 60
}
# Related: https://github.com/boto/boto3/issues/157
# and: http://docs.aws.amazon.com/Route53/latest/APIReference/CreateAliasRRSAPI.html
# and policy: https://spin.atomicobject.com/2016/04/28/route-53-hosted-zone-managment/
# pure_zone_id = zone_id.split('/hostedzone/')[1]
# XXX: ClientError: An error occurred (InvalidChangeBatch) when calling the ChangeResourceRecordSets operation:
# Tried to create an alias that targets d1awfeji80d0k2.cloudfront.net., type A in zone Z1XWOQP59BYF6Z,
# but the alias target name does not lie within the target zone
response = self.route53.change_resource_record_sets(
HostedZoneId=zone_id,
ChangeBatch={
'Changes': [
{
'Action': 'UPSERT',
'ResourceRecordSet': record_set
}
]
}
)
return response
def update_domain_name(self,
domain_name,
certificate_name=None,
certificate_body=None,
certificate_private_key=None,
certificate_chain=None,
certificate_arn=None,
lambda_name=None,
stage=None,
route53=True,
base_path=None):
"""
This updates your certificate information for an existing domain,
with similar arguments to boto's update_domain_name API Gateway api.
It returns the resulting new domain information including the new certificate's ARN
if created during this process.
Previously, this method involved downtime that could take up to 40 minutes,
because the API Gateway API only allowed updating a domain by deleting and then recreating it.
Related issues: https://github.com/Miserlou/Zappa/issues/590
https://github.com/Miserlou/Zappa/issues/588
https://github.com/Miserlou/Zappa/pull/458
https://github.com/Miserlou/Zappa/issues/882
https://github.com/Miserlou/Zappa/pull/883
"""
print("Updating domain name!")
certificate_name = certificate_name + str(time.time())
api_gateway_domain = self.apigateway_client.get_domain_name(domainName=domain_name)
if not certificate_arn\
and certificate_body and certificate_private_key and certificate_chain:
acm_certificate = self.acm_client.import_certificate(Certificate=certificate_body,
PrivateKey=certificate_private_key,
CertificateChain=certificate_chain)
certificate_arn = acm_certificate['CertificateArn']
self.update_domain_base_path_mapping(domain_name, lambda_name, stage, base_path)
return self.apigateway_client.update_domain_name(domainName=domain_name,
patchOperations=[
{"op" : "replace",
"path" : "/certificateName",
"value" : certificate_name},
{"op" : "replace",
"path" : "/certificateArn",
"value" : certificate_arn}
])
def update_domain_base_path_mapping(self, domain_name, lambda_name, stage, base_path):
"""
Update domain base path mapping on API Gateway if it was changed
"""
api_id = self.get_api_id(lambda_name)
if not api_id:
print("Warning! Can't update base path mapping!")
return
base_path_mappings = self.apigateway_client.get_base_path_mappings(domainName=domain_name)
found = False
for base_path_mapping in base_path_mappings.get('items', []):
if base_path_mapping['restApiId'] == api_id and base_path_mapping['stage'] == stage:
found = True
if base_path_mapping['basePath'] != base_path:
self.apigateway_client.update_base_path_mapping(domainName=domain_name,
basePath=base_path_mapping['basePath'],
patchOperations=[
{"op" : "replace",
"path" : "/basePath",
"value" : '' if base_path is None else base_path}
])
if not found:
self.apigateway_client.create_base_path_mapping(
domainName=domain_name,
basePath='' if base_path is None else base_path,
restApiId=api_id,
stage=stage
)
def get_all_zones(self):
"""Same behaviour of list_host_zones, but transparently handling pagination."""
zones = {'HostedZones': []}
new_zones = self.route53.list_hosted_zones(MaxItems='100')
while new_zones['IsTruncated']:
zones['HostedZones'] += new_zones['HostedZones']
new_zones = self.route53.list_hosted_zones(Marker=new_zones['NextMarker'], MaxItems='100')
zones['HostedZones'] += new_zones['HostedZones']
return zones
def get_domain_name(self, domain_name, route53=True):
"""
Scan our hosted zones for the record of a given name.
Returns the record entry, else None.
"""
# Make sure api gateway domain is present
try:
self.apigateway_client.get_domain_name(domainName=domain_name)
except Exception:
return None
if not route53:
return True
try:
zones = self.get_all_zones()
for zone in zones['HostedZones']:
records = self.route53.list_resource_record_sets(HostedZoneId=zone['Id'])
for record in records['ResourceRecordSets']:
if record['Type'] in ('CNAME', 'A') and record['Name'][:-1] == domain_name:
return record
except Exception:
return None
##
# Old, automatic logic.
# If re-introduced, should be moved to a new function.
# Related ticket: https://github.com/Miserlou/Zappa/pull/458
##
# We may be in a position where Route53 doesn't have a domain, but the API Gateway does.
# We need to delete this before we can create the new Route53.
# try:
# api_gateway_domain = self.apigateway_client.get_domain_name(domainName=domain_name)
# self.apigateway_client.delete_domain_name(domainName=domain_name)
# except Exception:
# pass
return None
##
# IAM
##
def get_credentials_arn(self):
"""
Given our role name, get and set the credentials_arn.
"""
role = self.iam.Role(self.role_name)
self.credentials_arn = role.arn
return role, self.credentials_arn
def create_iam_roles(self):
"""
Creates and defines the IAM roles and policies necessary for Zappa.
If the IAM role already exists, it will be updated if necessary.
"""
attach_policy_obj = json.loads(self.attach_policy)
assume_policy_obj = json.loads(self.assume_policy)
if self.extra_permissions:
for permission in self.extra_permissions:
attach_policy_obj['Statement'].append(dict(permission))
self.attach_policy = json.dumps(attach_policy_obj)
updated = False
# Create the role if needed
try:
role, credentials_arn = self.get_credentials_arn()
except botocore.client.ClientError:
print("Creating " + self.role_name + " IAM Role..")
role = self.iam.create_role(
RoleName=self.role_name,
AssumeRolePolicyDocument=self.assume_policy
)
self.credentials_arn = role.arn
updated = True
# create or update the role's policies if needed
policy = self.iam.RolePolicy(self.role_name, 'zappa-permissions')
try:
if policy.policy_document != attach_policy_obj:
print("Updating zappa-permissions policy on " + self.role_name + " IAM Role.")
policy.put(PolicyDocument=self.attach_policy)
updated = True
except botocore.client.ClientError:
print("Creating zappa-permissions policy on " + self.role_name + " IAM Role.")
policy.put(PolicyDocument=self.attach_policy)
updated = True
if role.assume_role_policy_document != assume_policy_obj and \
set(role.assume_role_policy_document['Statement'][0]['Principal']['Service']) != set(assume_policy_obj['Statement'][0]['Principal']['Service']):
print("Updating assume role policy on " + self.role_name + " IAM Role.")
self.iam_client.update_assume_role_policy(
RoleName=self.role_name,
PolicyDocument=self.assume_policy
)
updated = True
return self.credentials_arn, updated
def _clear_policy(self, lambda_name):
"""
Remove obsolete policy statements to prevent policy from bloating over the limit after repeated updates.
"""
try:
policy_response = self.lambda_client.get_policy(
FunctionName=lambda_name
)
if policy_response['ResponseMetadata']['HTTPStatusCode'] == 200:
statement = json.loads(policy_response['Policy'])['Statement']
for s in statement:
delete_response = self.lambda_client.remove_permission(
FunctionName=lambda_name,
StatementId=s['Sid']
)
if delete_response['ResponseMetadata']['HTTPStatusCode'] != 204:
logger.error('Failed to delete an obsolete policy statement: {}'.format(policy_response))
else:
logger.debug('Failed to load Lambda function policy: {}'.format(policy_response))
except ClientError as e:
if e.args[0].find('ResourceNotFoundException') > -1:
logger.debug('No policy found, must be first run.')
else:
logger.error('Unexpected client error {}'.format(e.args[0]))
##
# CloudWatch Events
##
def create_event_permission(self, lambda_name, principal, source_arn):
"""
Create permissions to link to an event.
Related: http://docs.aws.amazon.com/lambda/latest/dg/with-s3-example-configure-event-source.html
"""
logger.debug('Adding new permission to invoke Lambda function: {}'.format(lambda_name))
permission_response = self.lambda_client.add_permission(
FunctionName=lambda_name,
StatementId=''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(8)),
Action='lambda:InvokeFunction',
Principal=principal,
SourceArn=source_arn,
)
if permission_response['ResponseMetadata']['HTTPStatusCode'] != 201:
print('Problem creating permission to invoke Lambda function')
return None # XXX: Raise?
return permission_response
def schedule_events(self, lambda_arn, lambda_name, events, default=True):
"""
Given a Lambda ARN, name and a list of events, schedule this as CloudWatch Events.
'events' is a list of dictionaries, where each dict must contain a 'function'
string and an event 'expression' string, plus an optional 'name' and 'description'.
Expressions can be in rate or cron format:
http://docs.aws.amazon.com/lambda/latest/dg/tutorial-scheduled-events-schedule-expressions.html
"""
# The stream sources - DynamoDB, Kinesis and SQS - work differently than the other services (pull vs push)
# and do not require event permissions. They do require additional permissions on the Lambda roles though.
# http://docs.aws.amazon.com/lambda/latest/dg/lambda-api-permissions-ref.html
pull_services = ['dynamodb', 'kinesis', 'sqs']
# XXX: Not available in Lambda yet.
# We probably want to execute the latest code.
# if default:
# lambda_arn = lambda_arn + ":$LATEST"
self.unschedule_events(lambda_name=lambda_name, lambda_arn=lambda_arn, events=events,
excluded_source_services=pull_services)
for event in events:
function = event['function']
expression = event.get('expression', None) # single expression
expressions = event.get('expressions', None) # multiple expressions
kwargs = event.get('kwargs', {}) # optional dict of keyword arguments for the event
event_source = event.get('event_source', None)
description = event.get('description', function)
# - If 'cron' or 'rate' in expression, use ScheduleExpression
# - Else, use EventPattern
# - ex https://github.com/awslabs/aws-lambda-ddns-function
if not self.credentials_arn:
self.get_credentials_arn()
if expression:
expressions = [expression] # same code path for single and multiple expressions
if expressions:
for index, expression in enumerate(expressions):
name = self.get_scheduled_event_name(event, function, lambda_name, index)
# if the name may have been truncated, generate a unique, shortened name instead
# https://github.com/Miserlou/Zappa/issues/970
if len(name) >= 64:
rule_name = self.get_hashed_rule_name(event, function, lambda_name)
else:
rule_name = name
rule_response = self.events_client.put_rule(
Name=rule_name,
ScheduleExpression=expression,
State='ENABLED',
Description=description,
RoleArn=self.credentials_arn
)
if 'RuleArn' in rule_response:
logger.debug('Rule created. ARN {}'.format(rule_response['RuleArn']))
# Specific permissions are necessary for any trigger to work.
self.create_event_permission(lambda_name, 'events.amazonaws.com', rule_response['RuleArn'])
# Overwriting the input, supply the original values and add kwargs
input_template = '{"time": <time>, ' \
'"detail-type": <detail-type>, ' \
'"source": <source>,' \
'"account": <account>, ' \
'"region": <region>,' \
'"detail": <detail>, ' \
'"version": <version>,' \
'"resources": <resources>,' \
'"id": <id>,' \
'"kwargs": %s' \
'}' % json.dumps(kwargs)
# Create the CloudWatch event ARN for this function.
# https://github.com/Miserlou/Zappa/issues/359
target_response = self.events_client.put_targets(
Rule=rule_name,
Targets=[
{
'Id': 'Id' + ''.join(random.choice(string.digits) for _ in range(12)),
'Arn': lambda_arn,
'InputTransformer': {
'InputPathsMap': {
'time': '$.time',
'detail-type': '$.detail-type',
'source': '$.source',
'account': '$.account',
'region': '$.region',
'detail': '$.detail',
'version': '$.version',
'resources': '$.resources',
'id': '$.id'
},
'InputTemplate': input_template
}
}
]
)
if target_response['ResponseMetadata']['HTTPStatusCode'] == 200:
print("Scheduled {} with expression {}!".format(rule_name, expression))
else:
print("Problem scheduling {} with expression {}.".format(rule_name, expression))
elif event_source:
service = self.service_from_arn(event_source['arn'])
if service not in pull_services:
svc = ','.join(event['event_source']['events'])
self.create_event_permission(
lambda_name,
service + '.amazonaws.com',
event['event_source']['arn']
)
else:
svc = service
rule_response = add_event_source(
event_source,
lambda_arn,
function,
self.boto_session
)
if rule_response == 'successful':
print("Created {} event schedule for {}!".format(svc, function))
elif rule_response == 'failed':
print("Problem creating {} event schedule for {}!".format(svc, function))
elif rule_response == 'exists':
print("{} event schedule for {} already exists - Nothing to do here.".format(svc, function))
elif rule_response == 'dryrun':
print("Dryrun for creating {} event schedule for {}!!".format(svc, function))
else:
print("Could not create event {} - Please define either an expression or an event source".format(name))
@staticmethod
def get_scheduled_event_name(event, function, lambda_name, index=0):
name = event.get('name', function)
if name != function:
# a custom event name has been provided, make sure function name is included as postfix,
# otherwise zappa's handler won't be able to locate the function.
name = '{}-{}'.format(name, function)
if index:
# to ensure unique cloudwatch rule names in the case of multiple expressions
# prefix all entries bar the first with the index
# Related: https://github.com/Miserlou/Zappa/pull/1051
name = '{}-{}'.format(index, name)
# prefix scheduled event names with lambda name. So we can look them up later via the prefix.
return Zappa.get_event_name(lambda_name, name)
@staticmethod
def get_event_name(lambda_name, name):
"""
Returns an AWS-valid Lambda event name.
"""
return '{prefix:.{width}}-{postfix}'.format(prefix=lambda_name, width=max(0, 63 - len(name)), postfix=name)[:64]
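# Illustrative sketch (names assumed, not from the source): the precision in
# the format spec truncates the lambda-name prefix so the combined event name
# never exceeds AWS's 64-character limit.
#   >>> Zappa.get_event_name('myapp-dev', 'task')
#   'myapp-dev-task'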
@staticmethod
def get_hashed_rule_name(event, function, lambda_name):
"""
Returns an AWS-valid CloudWatch rule name using a digest of the event name, lambda name, and function.
This allows support for rule names that may be longer than the 64 char limit.
"""
event_name = event.get('name', function)
name_hash = hashlib.sha1('{}-{}'.format(lambda_name, event_name).encode('UTF-8')).hexdigest()
return Zappa.get_event_name(name_hash, function)
def delete_rule(self, rule_name):
"""
Delete a CWE rule.
This deletes them, but they will still show up in the AWS console.
Annoying.
"""
logger.debug('Deleting existing rule {}'.format(rule_name))
# All targets must be removed before
# we can actually delete the rule.
try:
targets = self.events_client.list_targets_by_rule(Rule=rule_name)
except botocore.exceptions.ClientError as e:
# This avoids misbehavior when permissions are insufficient, related: https://github.com/Miserlou/Zappa/issues/286
error_code = e.response['Error']['Code']
if error_code == 'AccessDeniedException':
raise
else:
logger.debug('No target found for this rule: {} {}'.format(rule_name, e.args[0]))
return
if 'Targets' in targets and targets['Targets']:
self.events_client.remove_targets(Rule=rule_name, Ids=[x['Id'] for x in targets['Targets']])
else: # pragma: no cover
logger.debug('No target to delete')
# Delete our rule.
self.events_client.delete_rule(Name=rule_name)
def get_event_rule_names_for_lambda(self, lambda_arn):
"""
Get all of the rule names associated with a lambda function.
"""
response = self.events_client.list_rule_names_by_target(TargetArn=lambda_arn)
rule_names = response['RuleNames']
# Iterate when the results are paginated
while 'NextToken' in response:
response = self.events_client.list_rule_names_by_target(TargetArn=lambda_arn,
NextToken=response['NextToken'])
rule_names.extend(response['RuleNames'])
return rule_names
def get_event_rules_for_lambda(self, lambda_arn):
"""
Get all of the rule details associated with this function.
"""
rule_names = self.get_event_rule_names_for_lambda(lambda_arn=lambda_arn)
return [self.events_client.describe_rule(Name=r) for r in rule_names]
def unschedule_events(self, events, lambda_arn=None, lambda_name=None, excluded_source_services=None):
"""
Given a list of events, unschedule these CloudWatch Events.
'events' is a list of dictionaries, where each dict must contain a 'function'
string and an event 'expression' string, plus an optional 'name' and 'description'.
"""
excluded_source_services = excluded_source_services or []
self._clear_policy(lambda_name)
rule_names = self.get_event_rule_names_for_lambda(lambda_arn=lambda_arn)
for rule_name in rule_names:
self.delete_rule(rule_name)
print('Unscheduled ' + rule_name + '.')
non_cwe = [e for e in events if 'event_source' in e]
for event in non_cwe:
# TODO: This WILL miss non CW events that have been deployed but changed names. Figure out a way to remove
# them no matter what.
# These are non CWE event sources.
function = event['function']
name = event.get('name', function)
event_source = event.get('event_source', function)
service = self.service_from_arn(event_source['arn'])
# DynamoDB and Kinesis streams take quite a while to setup after they are created and do not need to be
# re-scheduled when a new Lambda function is deployed. Therefore, they should not be removed during zappa
# update or zappa schedule.
if service not in excluded_source_services:
remove_event_source(
event_source,
lambda_arn,
function,
self.boto_session
)
print("Removed event {}{}.".format(
name,
" ({})".format(str(event_source['events'])) if 'events' in event_source else '')
)
##
# Async / SNS
##
def create_async_sns_topic(self, lambda_name, lambda_arn):
"""
Create the SNS-based async topic.
"""
topic_name = get_topic_name(lambda_name)
# Create SNS topic
topic_arn = self.sns_client.create_topic(
Name=topic_name)['TopicArn']
# Create subscription
self.sns_client.subscribe(
TopicArn=topic_arn,
Protocol='lambda',
Endpoint=lambda_arn
)
# Add Lambda permission for SNS to invoke function
self.create_event_permission(
lambda_name=lambda_name,
principal='sns.amazonaws.com',
source_arn=topic_arn
)
# Add rule for SNS topic as an event source
add_event_source(
event_source={
"arn": topic_arn,
"events": ["sns:Publish"]
},
lambda_arn=lambda_arn,
target_function="zappa.asynchronous.route_task",
boto_session=self.boto_session
)
return topic_arn
def remove_async_sns_topic(self, lambda_name):
"""
Remove the async SNS topic.
"""
topic_name = get_topic_name(lambda_name)
removed_arns = []
for sub in self.sns_client.list_subscriptions()['Subscriptions']:
if topic_name in sub['TopicArn']:
self.sns_client.delete_topic(TopicArn=sub['TopicArn'])
removed_arns.append(sub['TopicArn'])
return removed_arns
##
# Async / DynamoDB
##
def _set_async_dynamodb_table_ttl(self, table_name):
self.dynamodb_client.update_time_to_live(
TableName=table_name,
TimeToLiveSpecification={
'Enabled': True,
'AttributeName': 'ttl'
}
)
def create_async_dynamodb_table(self, table_name, read_capacity, write_capacity):
"""
Create the DynamoDB table for async task return values
"""
try:
dynamodb_table = self.dynamodb_client.describe_table(TableName=table_name)
return False, dynamodb_table
# catch this exception (triggered if the table doesn't exist)
except botocore.exceptions.ClientError:
dynamodb_table = self.dynamodb_client.create_table(
AttributeDefinitions=[
{
'AttributeName': 'id',
'AttributeType': 'S'
}
],
TableName=table_name,
KeySchema=[
{
'AttributeName': 'id',
'KeyType': 'HASH'
},
],
ProvisionedThroughput = {
'ReadCapacityUnits': read_capacity,
'WriteCapacityUnits': write_capacity
}
)
if dynamodb_table:
try:
self._set_async_dynamodb_table_ttl(table_name)
except botocore.exceptions.ClientError:
# this fails because the operation is async, so retry
time.sleep(10)
self._set_async_dynamodb_table_ttl(table_name)
return True, dynamodb_table
def remove_async_dynamodb_table(self, table_name):
"""
Remove the DynamoDB Table used for async return values
"""
self.dynamodb_client.delete_table(TableName=table_name)
##
# CloudWatch Logging
##
def fetch_logs(self, lambda_name, filter_pattern='', limit=10000, start_time=0):
"""
Fetch the CloudWatch logs for a given Lambda name.
"""
log_name = '/aws/lambda/' + lambda_name
streams = self.logs_client.describe_log_streams(
logGroupName=log_name,
descending=True,
orderBy='LastEventTime'
)
all_streams = streams['logStreams']
all_names = [stream['logStreamName'] for stream in all_streams]
events = []
response = {}
# Amazon uses millisecond epoch for some reason.
# Thanks, Jeff.
start_time = start_time * 1000
end_time = int(time.time()) * 1000
while not response or 'nextToken' in response:
extra_args = {}
if 'nextToken' in response:
extra_args['nextToken'] = response['nextToken']
response = self.logs_client.filter_log_events(
logGroupName=log_name,
logStreamNames=all_names,
startTime=start_time,
endTime=end_time,
filterPattern=filter_pattern,
limit=limit,
interleaved=True, # Does this actually improve performance?
**extra_args
)
if response and 'events' in response:
events += response['events']
return sorted(events, key=lambda k: k['timestamp'])
def remove_log_group(self, group_name):
"""
Remove the given CloudWatch log group.
"""
print("Removing log group: {}".format(group_name))
try:
self.logs_client.delete_log_group(logGroupName=group_name)
except botocore.exceptions.ClientError as e:
print("Couldn't remove '{}' because of: {}".format(group_name, e))
def remove_lambda_function_logs(self, lambda_function_name):
"""
Remove all logs that are assigned to a given lambda function id.
"""
self.remove_log_group('/aws/lambda/{}'.format(lambda_function_name))
def remove_api_gateway_logs(self, project_name):
"""
Removes all logs that are assigned to a given rest api id.
"""
for rest_api in self.get_rest_apis(project_name):
for stage in self.apigateway_client.get_stages(restApiId=rest_api['id'])['item']:
self.remove_log_group('API-Gateway-Execution-Logs_{}/{}'.format(rest_api['id'], stage['stageName']))
##
# Route53 Domain Name Entries
##
def get_hosted_zone_id_for_domain(self, domain):
"""
Get the Hosted Zone ID for a given domain.
"""
all_zones = self.get_all_zones()
return self.get_best_match_zone(all_zones, domain)
@staticmethod
def get_best_match_zone(all_zones, domain):
"""Return zone id which name is closer matched with domain name."""
# Related: https://github.com/Miserlou/Zappa/issues/459
public_zones = [zone for zone in all_zones['HostedZones'] if not zone['Config']['PrivateZone']]
zones = {zone['Name'][:-1]: zone['Id'] for zone in public_zones if zone['Name'][:-1] in domain}
if zones:
keys = max(zones.keys(), key=lambda a: len(a)) # get longest key -- best match.
return zones[keys]
else:
return None
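# Illustrative sketch (zone data assumed, not from the source): the longest
# matching public zone wins, so a subdomain prefers its delegated zone.
#   >>> zones = {'HostedZones': [
#   ...     {'Name': 'example.com.', 'Id': 'Z1', 'Config': {'PrivateZone': False}},
#   ...     {'Name': 'eu.example.com.', 'Id': 'Z2', 'Config': {'PrivateZone': False}},
#   ... ]}
#   >>> Zappa.get_best_match_zone(zones, 'api.eu.example.com')
#   'Z2'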
def set_dns_challenge_txt(self, zone_id, domain, txt_challenge):
"""
Set DNS challenge TXT.
"""
print("Setting DNS challenge..")
resp = self.route53.change_resource_record_sets(
HostedZoneId=zone_id,
ChangeBatch=self.get_dns_challenge_change_batch('UPSERT', domain, txt_challenge)
)
return resp
def remove_dns_challenge_txt(self, zone_id, domain, txt_challenge):
"""
Remove DNS challenge TXT.
"""
print("Deleting DNS challenge..")
resp = self.route53.change_resource_record_sets(
HostedZoneId=zone_id,
ChangeBatch=self.get_dns_challenge_change_batch('DELETE', domain, txt_challenge)
)
return resp
@staticmethod
def get_dns_challenge_change_batch(action, domain, txt_challenge):
"""
Given action, domain and challenge, return a change batch to use with
route53 call.
:param action: DELETE | UPSERT
:param domain: domain name
:param txt_challenge: challenge
:return: change set for a given action, domain and TXT challenge.
"""
return {
'Changes': [{
'Action': action,
'ResourceRecordSet': {
'Name': '_acme-challenge.{0}'.format(domain),
'Type': 'TXT',
'TTL': 60,
'ResourceRecords': [{
'Value': '"{0}"'.format(txt_challenge)
}]
}
}]
}
##
# Utility
##
def shell(self):
"""
Spawn a PDB shell.
"""
import pdb
pdb.set_trace()
def load_credentials(self, boto_session=None, profile_name=None):
"""
Load AWS credentials.
An optional boto_session can be provided, but that's usually for testing.
An optional profile_name can be provided for config files that have multiple sets
of credentials.
"""
# Automatically load credentials from config or environment
if not boto_session:
# If provided, use the supplied profile name.
if profile_name:
self.boto_session = boto3.Session(profile_name=profile_name, region_name=self.aws_region)
elif os.environ.get('AWS_ACCESS_KEY_ID') and os.environ.get('AWS_SECRET_ACCESS_KEY'):
region_name = os.environ.get('AWS_DEFAULT_REGION') or self.aws_region
session_kw = {
"aws_access_key_id": os.environ.get('AWS_ACCESS_KEY_ID'),
"aws_secret_access_key": os.environ.get('AWS_SECRET_ACCESS_KEY'),
"region_name": region_name,
}
# If we're executing in a role, AWS_SESSION_TOKEN will be present, too.
if os.environ.get("AWS_SESSION_TOKEN"):
session_kw["aws_session_token"] = os.environ.get("AWS_SESSION_TOKEN")
self.boto_session = boto3.Session(**session_kw)
else:
self.boto_session = boto3.Session(region_name=self.aws_region)
logger.debug("Loaded boto session from config: %s", boto_session)
else:
logger.debug("Using provided boto session: %s", boto_session)
self.boto_session = boto_session
# use provided session's region in case it differs
self.aws_region = self.boto_session.region_name
if self.boto_session.region_name not in LAMBDA_REGIONS:
print("Warning! AWS Lambda may not be available in this AWS Region!")
if self.boto_session.region_name not in API_GATEWAY_REGIONS:
print("Warning! AWS API Gateway may not be available in this AWS Region!")
@staticmethod
def service_from_arn(arn):
return arn.split(':')[2]


# --- End of zappa/core.py (package: zappa-dateutil) ---
import botocore
import calendar
import datetime
import durationpy
import fnmatch
import io
import json
import logging
import os
import re
import shutil
import stat
import sys
from past.builtins import basestring
from urllib.parse import urlparse
LOG = logging.getLogger(__name__)
##
# Settings / Packaging
##
def copytree(src, dst, metadata=True, symlinks=False, ignore=None):
"""
This is a contributed re-implementation of 'copytree' that
should work with the exact same behavior on multiple platforms.
When `metadata` is False, file metadata such as permissions and modification
times are not copied.
"""
def copy_file(src, dst, item):
s = os.path.join(src, item)
d = os.path.join(dst, item)
if symlinks and os.path.islink(s): # pragma: no cover
if os.path.lexists(d):
os.remove(d)
os.symlink(os.readlink(s), d)
if metadata:
try:
st = os.lstat(s)
mode = stat.S_IMODE(st.st_mode)
os.lchmod(d, mode)
except Exception:
pass # lchmod not available
elif os.path.isdir(s):
copytree(s, d, metadata, symlinks, ignore)
else:
shutil.copy2(s, d) if metadata else shutil.copy(s, d)
try:
lst = os.listdir(src)
if not os.path.exists(dst):
os.makedirs(dst)
if metadata:
shutil.copystat(src, dst)
except NotADirectoryError: # egg-link files
copy_file(os.path.dirname(src), os.path.dirname(dst), os.path.basename(src))
return
if ignore:
excl = ignore(src, lst)
lst = [x for x in lst if x not in excl]
for item in lst:
copy_file(src, dst, item)
def parse_s3_url(url):
"""
Parses S3 URL.
Returns bucket (domain) and file (full path).
"""
bucket = ''
path = ''
if url:
result = urlparse(url)
bucket = result.netloc
path = result.path.strip('/')
return bucket, path
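# Illustrative sketch (URL assumed):
#   >>> parse_s3_url('s3://my-bucket/path/to/package.zip')
#   ('my-bucket', 'path/to/package.zip')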
def human_size(num, suffix='B'):
"""
Convert bytes length to a human-readable version
"""
for unit in ('', 'Ki', 'Mi', 'Gi', 'Ti', 'Pi', 'Ei', 'Zi'):
if abs(num) < 1024.0:
return "{0:3.1f}{1!s}{2!s}".format(num, unit, suffix)
num /= 1024.0
return "{0:.1f}{1!s}{2!s}".format(num, 'Yi', suffix)
def string_to_timestamp(timestring):
"""
Accepts a str, returns an int timestamp.
"""
ts = None
# Uses an extended version of Go's duration string.
try:
delta = durationpy.from_str(timestring)
past = datetime.datetime.utcnow() - delta
ts = calendar.timegm(past.timetuple())
return ts
except Exception:
pass
# Unable to parse the timestring; fall back to 0.
return 0
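# Illustrative sketch (duration strings assumed): durationpy parses extended
# Go-style durations, so '3h' should yield the epoch timestamp of three hours
# ago, while an unparseable string falls back to 0.
#   >>> string_to_timestamp('not-a-duration')
#   0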
##
# `init` related
##
def detect_django_settings():
"""
Automatically try to discover Django settings files,
return them as relative module paths.
"""
matches = []
for root, dirnames, filenames in os.walk(os.getcwd()):
for filename in fnmatch.filter(filenames, '*settings.py'):
full = os.path.join(root, filename)
if 'site-packages' in full:
continue
package_path = full.replace(os.getcwd(), '')
package_module = package_path.replace(os.sep, '.').split('.', 1)[1].replace('.py', '')
matches.append(package_module)
return matches
def detect_flask_apps():
"""
Automatically try to discover Flask apps files,
return them as relative module paths.
"""
matches = []
for root, dirnames, filenames in os.walk(os.getcwd()):
for filename in fnmatch.filter(filenames, '*.py'):
full = os.path.join(root, filename)
if 'site-packages' in full:
continue
with io.open(full, 'r', encoding='utf-8') as f:
lines = f.readlines()
for line in lines:
app = None
# Kind of janky..
if '= Flask(' in line:
app = line.split('= Flask(')[0].strip()
if '=Flask(' in line:
app = line.split('=Flask(')[0].strip()
if not app:
continue
package_path = full.replace(os.getcwd(), '')
package_module = package_path.replace(os.sep, '.').split('.', 1)[1].replace('.py', '')
app_module = package_module + '.' + app
matches.append(app_module)
return matches
def get_venv_from_python_version():
return 'python{}.{}'.format(*sys.version_info)
def get_runtime_from_python_version():
"""
"""
if sys.version_info[0] < 3:
raise ValueError("Python 2.x is no longer supported.")
else:
if sys.version_info[1] <= 6:
return 'python3.6'
elif sys.version_info[1] <= 7:
return 'python3.7'
else:
return 'python3.8'
##
# Async Tasks
##
def get_topic_name(lambda_name):
""" Topic name generation """
return '%s-zappa-async' % lambda_name
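# Illustrative sketch:
#   >>> get_topic_name('myapp-dev')
#   'myapp-dev-zappa-async'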
##
# Event sources / Kappa
##
def get_event_source(event_source, lambda_arn, target_function, boto_session, dry=False):
"""
Given an event_source dictionary item, a session and a lambda_arn,
hack into Kappa's Gibson, create out an object we can call
to schedule this event, and return the event source.
"""
import kappa.function
import kappa.restapi
import kappa.event_source.base
import kappa.event_source.dynamodb_stream
import kappa.event_source.kinesis
import kappa.event_source.s3
import kappa.event_source.sns
import kappa.event_source.cloudwatch
import kappa.policy
import kappa.role
import kappa.awsclient
class PseudoContext:
def __init__(self):
return
class PseudoFunction:
def __init__(self):
return
# Mostly adapted from kappa - will probably be replaced by kappa support
class SqsEventSource(kappa.event_source.base.EventSource):
def __init__(self, context, config):
super().__init__(context, config)
self._lambda = kappa.awsclient.create_client(
'lambda', context.session)
def _get_uuid(self, function):
uuid = None
response = self._lambda.call(
'list_event_source_mappings',
FunctionName=function.name,
EventSourceArn=self.arn)
LOG.debug(response)
if len(response['EventSourceMappings']) > 0:
uuid = response['EventSourceMappings'][0]['UUID']
return uuid
def add(self, function):
try:
response = self._lambda.call(
'create_event_source_mapping',
FunctionName=function.name,
EventSourceArn=self.arn,
BatchSize=self.batch_size,
Enabled=self.enabled
)
LOG.debug(response)
except Exception:
LOG.exception('Unable to add event source')
def enable(self, function):
self._config['enabled'] = True
try:
response = self._lambda.call(
'update_event_source_mapping',
UUID=self._get_uuid(function),
Enabled=self.enabled
)
LOG.debug(response)
except Exception:
LOG.exception('Unable to enable event source')
def disable(self, function):
self._config['enabled'] = False
try:
response = self._lambda.call(
'update_event_source_mapping',
UUID=self._get_uuid(function),  # the mapping is addressed by UUID, as in enable()
Enabled=self.enabled
)
LOG.debug(response)
except Exception:
LOG.exception('Unable to disable event source')
def update(self, function):
response = None
uuid = self._get_uuid(function)
if uuid:
try:
response = self._lambda.call(
'update_event_source_mapping',
BatchSize=self.batch_size,
Enabled=self.enabled,
FunctionName=function.arn)
LOG.debug(response)
except Exception:
LOG.exception('Unable to update event source')
def remove(self, function):
response = None
uuid = self._get_uuid(function)
if uuid:
response = self._lambda.call(
'delete_event_source_mapping',
UUID=uuid)
LOG.debug(response)
return response
def status(self, function):
response = None
LOG.debug('getting status for event source %s', self.arn)
uuid = self._get_uuid(function)
if uuid:
try:
response = self._lambda.call(
'get_event_source_mapping',
UUID=self._get_uuid(function))
LOG.debug(response)
except botocore.exceptions.ClientError:
LOG.debug('event source %s does not exist', self.arn)
response = None
else:
LOG.debug('No UUID for event source %s', self.arn)
return response
class ExtendedSnsEventSource(kappa.event_source.sns.SNSEventSource):
@property
def filters(self):
return self._config.get('filters')
def add_filters(self, function):
try:
subscription = self.exists(function)
if subscription:
response = self._sns.call(
'set_subscription_attributes',
SubscriptionArn=subscription['SubscriptionArn'],
AttributeName='FilterPolicy',
AttributeValue=json.dumps(self.filters)
)
kappa.event_source.sns.LOG.debug(response)
except Exception:
kappa.event_source.sns.LOG.exception('Unable to add filters for SNS topic %s', self.arn)
def add(self, function):
super().add(function)
if self.filters:
self.add_filters(function)
event_source_map = {
'dynamodb': kappa.event_source.dynamodb_stream.DynamoDBStreamEventSource,
'kinesis': kappa.event_source.kinesis.KinesisEventSource,
's3': kappa.event_source.s3.S3EventSource,
'sns': ExtendedSnsEventSource,
'sqs': SqsEventSource,
'events': kappa.event_source.cloudwatch.CloudWatchEventSource
}
arn = event_source['arn']
_, _, svc, _ = arn.split(':', 3)
event_source_func = event_source_map.get(svc, None)
if not event_source_func:
raise ValueError('Unknown event source: {0}'.format(arn))
def autoreturn(self, function_name):
return function_name
event_source_func._make_notification_id = autoreturn
ctx = PseudoContext()
ctx.session = boto_session
funk = PseudoFunction()
funk.name = lambda_arn
# Kappa 0.6.0 requires this nasty hacking,
# hopefully we can remove at least some of this soon.
# Kappa 0.7.0 introduces a whole host of other changes we don't
# really want, so we're stuck here for a little while.
# Related: https://github.com/Miserlou/Zappa/issues/684
# https://github.com/Miserlou/Zappa/issues/688
# https://github.com/Miserlou/Zappa/commit/3216f7e5149e76921ecdf9451167846b95616313
if svc == 's3':
split_arn = lambda_arn.split(':')
arn_front = ':'.join(split_arn[:-1])
arn_back = split_arn[-1]
ctx.environment = arn_back
funk.arn = arn_front
funk.name = ':'.join([arn_back, target_function])
else:
funk.arn = lambda_arn
funk._context = ctx
event_source_obj = event_source_func(ctx, event_source)
return event_source_obj, ctx, funk
def add_event_source(event_source, lambda_arn, target_function, boto_session, dry=False):
"""
Given an event_source dictionary, create the object and add the event source.
"""
event_source_obj, ctx, funk = get_event_source(event_source, lambda_arn, target_function, boto_session, dry=False)
# TODO: Detect changes in config and refine exists algorithm
if not dry:
if not event_source_obj.status(funk):
event_source_obj.add(funk)
return 'successful' if event_source_obj.status(funk) else 'failed'
else:
return 'exists'
return 'dryrun'
def remove_event_source(event_source, lambda_arn, target_function, boto_session, dry=False):
"""
Given an event_source dictionary, create the object and remove the event source.
"""
event_source_obj, ctx, funk = get_event_source(event_source, lambda_arn, target_function, boto_session, dry=False)
# This is slightly dirty, but necessary for using Kappa this way.
funk.arn = lambda_arn
if not dry:
rule_response = event_source_obj.remove(funk)
return rule_response
else:
return event_source_obj
def get_event_source_status(event_source, lambda_arn, target_function, boto_session, dry=False):
"""
Given an event_source dictionary, create the object and get the event source status.
"""
event_source_obj, ctx, funk = get_event_source(event_source, lambda_arn, target_function, boto_session, dry=False)
return event_source_obj.status(funk)
##
# Analytics / Surveillance / Nagging
##
def check_new_version_available(this_version):
"""
Checks if a newer version of Zappa is available.
Returns True if an update is available, else False.
"""
import requests
pypi_url = 'https://pypi.org/pypi/Zappa/json'
resp = requests.get(pypi_url, timeout=1.5)
top_version = resp.json()['info']['version']
return this_version != top_version
class InvalidAwsLambdaName(Exception):
"""Exception: proposed AWS Lambda name is invalid"""
pass
def validate_name(name, maxlen=80):
"""Validate name for AWS Lambda function.
name: actual name (without `arn:aws:lambda:...:` prefix and without
`:$LATEST`, alias or version suffix).
maxlen: max allowed length for name without prefix and suffix.
The value 80 was calculated from prefix with longest known region name
and assuming that no alias or version would be longer than `$LATEST`.
Based on AWS Lambda spec
http://docs.aws.amazon.com/lambda/latest/dg/API_CreateFunction.html
Return: the name
Raise: InvalidAwsLambdaName, if the name is invalid.
"""
if not isinstance(name, basestring):
msg = "Name must be of type string"
raise InvalidAwsLambdaName(msg)
if len(name) > maxlen:
msg = "Name is longer than {maxlen} characters."
raise InvalidAwsLambdaName(msg.format(maxlen=maxlen))
if len(name) == 0:
msg = "Name must not be empty string."
raise InvalidAwsLambdaName(msg)
if not re.match("^[a-zA-Z0-9-_]+$", name):
msg = "Name can only contain characters from a-z, A-Z, 0-9, _ and -"
raise InvalidAwsLambdaName(msg)
return name
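# Illustrative sketch (names assumed): valid names pass through unchanged,
# anything outside [a-zA-Z0-9-_] raises InvalidAwsLambdaName.
#   >>> validate_name('my_function-1')
#   'my_function-1'
#   >>> validate_name('bad name!')  # raises InvalidAwsLambdaName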
def contains_python_files_or_subdirs(folder):
"""
Checks (recursively) if the directory contains .py or .pyc files
"""
for root, dirs, files in os.walk(folder):
if [filename for filename in files if filename.endswith('.py') or filename.endswith('.pyc')]:
return True
for d in dirs:
for _, subdirs, subfiles in os.walk(d):
if [filename for filename in subfiles if filename.endswith('.py') or filename.endswith('.pyc')]:
return True
return False
def conflicts_with_a_neighbouring_module(directory_path):
"""
Checks if a directory lies in the same directory as a .py file with the same name.
"""
parent_dir_path, current_dir_name = os.path.split(os.path.normpath(directory_path))
neighbours = os.listdir(parent_dir_path)
conflicting_neighbour_filename = current_dir_name+'.py'
return conflicting_neighbour_filename in neighbours
# https://github.com/Miserlou/Zappa/issues/1188
def titlecase_keys(d):
"""
Takes a dict with keys of type str and returns a new dict with all keys titlecased.
"""
return {k.title(): v for k, v in d.items()}
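# Illustrative sketch:
#   >>> titlecase_keys({'content-type': 'application/json'})
#   {'Content-Type': 'application/json'}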
# https://github.com/Miserlou/Zappa/issues/1688
def is_valid_bucket_name(name):
"""
Checks if an S3 bucket name is valid according to https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html#bucketnamingrules
"""
# Bucket names must be at least 3 and no more than 63 characters long.
if (len(name) < 3 or len(name) > 63):
return False
# Bucket names must not contain uppercase characters or underscores.
if (any(x.isupper() for x in name)):
return False
if "_" in name:
return False
# Bucket names must start with a lowercase letter or number.
if not (name[0].islower() or name[0].isdigit()):
return False
# Bucket names must be a series of one or more labels. Adjacent labels are separated by a single period (.).
for label in name.split("."):
# Each label must start and end with a lowercase letter or a number.
if len(label) < 1:
return False
if not (label[0].islower() or label[0].isdigit()):
return False
if not (label[-1].islower() or label[-1].isdigit()):
return False
# Bucket names must not be formatted as an IP address (for example, 192.168.5.4).
looks_like_IP = True
for label in name.split("."):
if not label.isdigit():
looks_like_IP = False
break
if looks_like_IP:
return False
return True
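# Illustrative sketch (names assumed):
#   >>> is_valid_bucket_name('my.valid-bucket1')
#   True
#   >>> is_valid_bucket_name('My_Bucket')    # uppercase and underscore
#   False
#   >>> is_valid_bucket_name('192.168.5.4')  # formatted like an IP address
#   False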
def merge_headers(event):
"""
Merge the values of headers and multiValueHeaders into a single dict.
Opens up support for multivalue headers via API Gateway and ALB.
See: https://github.com/Miserlou/Zappa/pull/1756
"""
headers = event.get('headers') or {}
multi_headers = (event.get('multiValueHeaders') or {}).copy()
for h in set(headers.keys()):
if h not in multi_headers:
multi_headers[h] = [headers[h]]
for h in multi_headers.keys():
multi_headers[h] = ', '.join(multi_headers[h])
return multi_headers


# --- End of zappa/utilities.py (package: zappa-dateutil) ---
from past.builtins import basestring
from builtins import input, bytes
import argcomplete
import argparse
import base64
import pkgutil
import botocore
import click
import collections
import hjson as json
import inspect
import importlib
import logging
import os
import pkg_resources
import random
import re
import requests
import slugify
import string
import sys
import tempfile
import time
import toml
import yaml
import zipfile
from click import Context, BaseCommand
from click.exceptions import ClickException
from click.globals import push_context
from dateutil import parser
from datetime import datetime, timedelta
from .core import Zappa, logger, API_GATEWAY_REGIONS
from .utilities import (check_new_version_available, detect_django_settings,
detect_flask_apps, parse_s3_url, human_size,
validate_name, InvalidAwsLambdaName, get_venv_from_python_version,
get_runtime_from_python_version, string_to_timestamp, is_valid_bucket_name)
CUSTOM_SETTINGS = [
'apigateway_policy',
'assume_policy',
'attach_policy',
'aws_region',
'delete_local_zip',
'delete_s3_zip',
'exclude',
'exclude_glob',
'extra_permissions',
'include',
'role_name',
'touch',
]
BOTO3_CONFIG_DOCS_URL = 'https://boto3.readthedocs.io/en/latest/guide/quickstart.html#configuration'
##
# Main Input Processing
##
class ZappaCLI:
"""
ZappaCLI object is responsible for loading the settings,
handling the input arguments and executing the calls to the core library.
"""
# CLI
vargs = None
command = None
stage_env = None
# Zappa settings
zappa = None
zappa_settings = None
load_credentials = True
disable_progress = False
# Specific settings
api_stage = None
app_function = None
aws_region = None
debug = None
prebuild_script = None
project_name = None
profile_name = None
lambda_arn = None
lambda_name = None
lambda_description = None
lambda_concurrency = None
s3_bucket_name = None
settings_file = None
zip_path = None
handler_path = None
vpc_config = None
memory_size = None
use_apigateway = None
lambda_handler = None
django_settings = None
manage_roles = True
exception_handler = None
environment_variables = None
authorizer = None
xray_tracing = False
aws_kms_key_arn = ''
context_header_mappings = None
tags = []
layers = None
stage_name_env_pattern = re.compile('^[a-zA-Z0-9_]+$')
def __init__(self):
self._stage_config_overrides = {} # change using self.override_stage_config_setting(key, val)
@property
def stage_config(self):
"""
A shortcut property for settings of a stage.
"""
def get_stage_setting(stage, extended_stages=None):
if extended_stages is None:
extended_stages = []
if stage in extended_stages:
raise RuntimeError(stage + " has already been extended to these settings. "
"There is a circular extends within the settings file.")
extended_stages.append(stage)
try:
stage_settings = dict(self.zappa_settings[stage].copy())
except KeyError:
raise ClickException("Cannot extend settings for undefined stage '" + stage + "'.")
extends_stage = self.zappa_settings[stage].get('extends', None)
if not extends_stage:
return stage_settings
extended_settings = get_stage_setting(stage=extends_stage, extended_stages=extended_stages)
extended_settings.update(stage_settings)
return extended_settings
settings = get_stage_setting(stage=self.api_stage)
# Backwards compatibility for the 'delete_zip' setting, which was renamed to the more explicit 'delete_local_zip'
if 'delete_zip' in settings:
settings['delete_local_zip'] = settings.get('delete_zip')
settings.update(self.stage_config_overrides)
return settings
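# Illustrative sketch (settings assumed, not from the source): with the
# 'extends' mechanism resolved above, a stage inherits every setting from its
# parent and overrides only what it redefines, e.g. in zappa_settings:
#
#   {
#       "dev": {"app_function": "app.app", "s3_bucket": "my-dev-bucket"},
#       "prod": {"extends": "dev", "s3_bucket": "my-prod-bucket"}
#   }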
@property
def stage_config_overrides(self):
"""
Returns zappa_settings we forcefully override for the current stage
set by `self.override_stage_config_setting(key, value)`
"""
return getattr(self, '_stage_config_overrides', {}).get(self.api_stage, {})
def override_stage_config_setting(self, key, val):
"""
Forcefully override a setting set by zappa_settings (for the current stage only)
:param key: settings key
:param val: value
"""
self._stage_config_overrides = getattr(self, '_stage_config_overrides', {})
self._stage_config_overrides.setdefault(self.api_stage, {})[key] = val
def handle(self, argv=None):
"""
Main function.
Parses command, load settings and dispatches accordingly.
"""
desc = ('Zappa - Deploy Python applications to AWS Lambda'
' and API Gateway.\n')
parser = argparse.ArgumentParser(description=desc)
parser.add_argument(
'-v', '--version', action='version',
version=pkg_resources.get_distribution("zappa").version,
help='Print the zappa version'
)
parser.add_argument(
'--color', default='auto', choices=['auto','never','always']
)
env_parser = argparse.ArgumentParser(add_help=False)
me_group = env_parser.add_mutually_exclusive_group()
all_help = ('Execute this command for all of our defined '
'Zappa stages.')
me_group.add_argument('--all', action='store_true', help=all_help)
me_group.add_argument('stage_env', nargs='?')
group = env_parser.add_argument_group()
group.add_argument(
'-a', '--app_function', help='The WSGI application function.'
)
group.add_argument(
'-s', '--settings_file', help='The path to a Zappa settings file.'
)
group.add_argument(
'-q', '--quiet', action='store_true', help='Silence all output.'
)
# https://github.com/Miserlou/Zappa/issues/407
# Moved when 'template' command added.
# Fuck Terraform.
group.add_argument(
'-j', '--json', action='store_true', help='Make the output of this command be machine readable.'
)
# https://github.com/Miserlou/Zappa/issues/891
group.add_argument(
'--disable_progress', action='store_true', help='Disable progress bars.'
)
group.add_argument(
"--no_venv", action="store_true", help="Skip venv check."
)
##
# Certify
##
subparsers = parser.add_subparsers(title='subcommands', dest='command')
cert_parser = subparsers.add_parser(
'certify', parents=[env_parser],
help='Create and install SSL certificate'
)
cert_parser.add_argument(
'--manual', action='store_true',
help=("Gets new Let's Encrypt certificates, but prints them to console."
"Does not update API Gateway domains.")
)
cert_parser.add_argument(
'-y', '--yes', action='store_true', help='Auto confirm yes.'
)
##
# Deploy
##
deploy_parser = subparsers.add_parser(
'deploy', parents=[env_parser], help='Deploy application.'
)
deploy_parser.add_argument(
'-z', '--zip', help='Deploy Lambda with specific local or S3 hosted zip package'
)
##
# Init
##
init_parser = subparsers.add_parser('init', help='Initialize Zappa app.')
##
# Package
##
package_parser = subparsers.add_parser(
'package', parents=[env_parser], help='Build the application zip package locally.'
)
package_parser.add_argument(
'-o', '--output', help='Name of file to output the package to.'
)
##
# Template
##
template_parser = subparsers.add_parser(
'template', parents=[env_parser], help='Create a CloudFormation template for this API Gateway.'
)
template_parser.add_argument(
'-l', '--lambda-arn', required=True, help='ARN of the Lambda function to template to.'
)
template_parser.add_argument(
'-r', '--role-arn', required=True, help='ARN of the Role to template with.'
)
template_parser.add_argument(
'-o', '--output', help='Name of file to output the template to.'
)
##
# Invocation
##
invoke_parser = subparsers.add_parser(
'invoke', parents=[env_parser],
help='Invoke remote function.'
)
invoke_parser.add_argument(
'--raw', action='store_true',
help=('When invoking remotely, invoke this python as a string,'
' not as a modular path.')
)
invoke_parser.add_argument(
'--no-color', action='store_true',
help=("Don't color the output")
)
invoke_parser.add_argument('command_rest')
##
# Manage
##
manage_parser = subparsers.add_parser(
'manage',
help='Invoke remote Django manage.py commands.'
)
rest_help = ("Command in the form of <env> <command>. <env> is not "
"required if --all is specified")
manage_parser.add_argument('--all', action='store_true', help=all_help)
manage_parser.add_argument('command_rest', nargs='+', help=rest_help)
manage_parser.add_argument(
'--no-color', action='store_true',
help=("Don't color the output")
)
# This is explicitly added here because this is the only subcommand that doesn't inherit from env_parser
# https://github.com/Miserlou/Zappa/issues/1002
manage_parser.add_argument(
'-s', '--settings_file', help='The path to a Zappa settings file.'
)
##
# Rollback
##
def positive_int(s):
""" Ensure an arg is positive """
i = int(s)
if i < 0:
msg = "This argument must be positive (got {})".format(s)
raise argparse.ArgumentTypeError(msg)
return i
rollback_parser = subparsers.add_parser(
'rollback', parents=[env_parser],
help='Rollback deployed code to a previous version.'
)
rollback_parser.add_argument(
'-n', '--num-rollback', type=positive_int, default=1,
help='The number of versions to rollback.'
)
##
# Scheduling
##
subparsers.add_parser(
'schedule', parents=[env_parser],
help='Schedule functions to occur at regular intervals.'
)
##
# Status
##
subparsers.add_parser(
'status', parents=[env_parser],
help='Show deployment status and event schedules.'
)
##
# Log Tailing
##
tail_parser = subparsers.add_parser(
'tail', parents=[env_parser], help='Tail deployment logs.'
)
tail_parser.add_argument(
'--no-color', action='store_true',
help="Don't color log tail output."
)
tail_parser.add_argument(
'--http', action='store_true',
help='Only show HTTP requests in tail output.'
)
tail_parser.add_argument(
'--non-http', action='store_true',
help='Only show non-HTTP requests in tail output.'
)
tail_parser.add_argument(
'--since', type=str, default="100000s",
help="Only show lines since a certain timeframe."
)
tail_parser.add_argument(
'--filter', type=str, default="",
help="Apply a filter pattern to the logs."
)
tail_parser.add_argument(
'--force-color', action='store_true',
help='Force coloring log tail output even if coloring support is not auto-detected. (example: piping)'
)
tail_parser.add_argument(
'--disable-keep-open', action='store_true',
help="Exit after printing the last available log, rather than keeping the log open."
)
##
# Undeploy
##
undeploy_parser = subparsers.add_parser(
'undeploy', parents=[env_parser], help='Undeploy application.'
)
undeploy_parser.add_argument(
'--remove-logs', action='store_true',
help=('Removes log groups of api gateway and lambda task'
' during the undeployment.'),
)
undeploy_parser.add_argument(
'-y', '--yes', action='store_true', help='Auto confirm yes.'
)
##
# Unschedule
##
subparsers.add_parser('unschedule', parents=[env_parser],
help='Unschedule functions.')
##
# Updating
##
update_parser = subparsers.add_parser(
'update', parents=[env_parser], help='Update deployed application.'
)
update_parser.add_argument(
'-z', '--zip', help='Update Lambda with specific local or S3 hosted zip package'
)
update_parser.add_argument(
'-n', '--no-upload', help="Update configuration where appropriate, but don't upload new code"
)
##
# Debug
##
subparsers.add_parser(
'shell', parents=[env_parser], help='A debug shell with a loaded Zappa object.'
)
argcomplete.autocomplete(parser)
args = parser.parse_args(argv)
self.vargs = vars(args)
if args.color == 'never':
disable_click_colors()
elif args.color == 'always':
#TODO: Support aggressive coloring like "--force-color" on all commands
pass
elif args.color == 'auto':
pass
# Parse the input
# NOTE(rmoe): Special case for manage command
# The manage command can't have both stage_env and command_rest
# arguments. Since they are both positional arguments argparse can't
# differentiate the two. This causes problems when used with --all.
# (e.g. "manage --all showmigrations admin" argparse thinks --all has
# been specified AND that stage_env='showmigrations')
# By having command_rest collect everything but --all we can split it
# apart here instead of relying on argparse.
if not args.command:
parser.print_help()
return
if args.command == 'manage' and not self.vargs.get('all'):
self.stage_env = self.vargs['command_rest'].pop(0)
else:
self.stage_env = self.vargs.get('stage_env')
if args.command == 'package':
self.load_credentials = False
self.command = args.command
self.disable_progress = self.vargs.get('disable_progress')
if self.vargs.get('quiet'):
self.silence()
# We don't have any settings yet, so make those first!
# (Settings-based interactions will fail
# before a project has been initialized.)
if self.command == 'init':
self.init()
return
# Make sure there isn't a new version available
if not self.vargs.get('json'):
self.check_for_update()
# Load and Validate Settings File
self.load_settings_file(self.vargs.get('settings_file'))
# Should we execute this for all stages, or just one?
all_stages = self.vargs.get('all')
stages = []
if all_stages: # All stages!
stages = self.zappa_settings.keys()
else: # Just one env.
if not self.stage_env:
# If there's only one stage defined in the settings,
# use that as the default.
if len(self.zappa_settings.keys()) == 1:
stages.append(list(self.zappa_settings.keys())[0])
else:
parser.error("Please supply a stage to interact with.")
else:
stages.append(self.stage_env)
for stage in stages:
try:
self.dispatch_command(self.command, stage)
except ClickException as e:
# Discussion on exit codes: https://github.com/Miserlou/Zappa/issues/407
e.show()
sys.exit(e.exit_code)
def dispatch_command(self, command, stage):
"""
Given a command to execute and stage,
execute that command.
"""
self.api_stage = stage
if command not in ['status', 'manage']:
if not self.vargs.get('json', None):
click.echo("Calling " + click.style(command, fg="green", bold=True) + " for stage " +
click.style(self.api_stage, bold=True) + ".." )
# Explicitly define the app function.
# Related: https://github.com/Miserlou/Zappa/issues/832
if self.vargs.get('app_function', None):
self.app_function = self.vargs['app_function']
# Load our settings, based on api_stage.
try:
self.load_settings(self.vargs.get('settings_file'))
except ValueError as e:
if hasattr(e, 'message'):
print("Error: {}".format(e.message))
else:
print(str(e))
sys.exit(-1)
self.callback('settings')
# Hand it off
if command == 'deploy': # pragma: no cover
self.deploy(self.vargs['zip'])
if command == 'package': # pragma: no cover
self.package(self.vargs['output'])
if command == 'template': # pragma: no cover
self.template( self.vargs['lambda_arn'],
self.vargs['role_arn'],
output=self.vargs['output'],
json=self.vargs['json']
)
elif command == 'update': # pragma: no cover
self.update(self.vargs['zip'], self.vargs['no_upload'])
elif command == 'rollback': # pragma: no cover
self.rollback(self.vargs['num_rollback'])
elif command == 'invoke': # pragma: no cover
if not self.vargs.get('command_rest'):
print("Please enter the function to invoke.")
return
self.invoke(
self.vargs['command_rest'],
raw_python=self.vargs['raw'],
no_color=self.vargs['no_color'],
)
elif command == 'manage': # pragma: no cover
if not self.vargs.get('command_rest'):
print("Please enter the management command to invoke.")
return
if not self.django_settings:
print("This command is for Django projects only!")
print("If this is a Django project, please define django_settings in your zappa_settings.")
return
command_tail = self.vargs.get('command_rest')
if len(command_tail) > 1:
command = " ".join(command_tail) # ex: zappa manage dev "shell --version"
else:
command = command_tail[0] # ex: zappa manage dev showmigrations admin
self.invoke(
command,
command="manage",
no_color=self.vargs['no_color'],
)
elif command == 'tail': # pragma: no cover
self.tail(
colorize=(not self.vargs['no_color']),
http=self.vargs['http'],
non_http=self.vargs['non_http'],
since=self.vargs['since'],
filter_pattern=self.vargs['filter'],
force_colorize=self.vargs['force_color'] or None,
keep_open=not self.vargs['disable_keep_open']
)
elif command == 'undeploy': # pragma: no cover
self.undeploy(
no_confirm=self.vargs['yes'],
remove_logs=self.vargs['remove_logs']
)
elif command == 'schedule': # pragma: no cover
self.schedule()
elif command == 'unschedule': # pragma: no cover
self.unschedule()
elif command == 'status': # pragma: no cover
self.status(return_json=self.vargs['json'])
elif command == 'certify': # pragma: no cover
self.certify(
no_confirm=self.vargs['yes'],
manual=self.vargs['manual']
)
elif command == 'shell': # pragma: no cover
self.shell()
##
# The Commands
##
def package(self, output=None):
"""
Only build the package
"""
# Make sure we're in a venv.
self.check_venv()
# force not to delete the local zip
self.override_stage_config_setting('delete_local_zip', False)
# Execute the prebuild script
if self.prebuild_script:
self.execute_prebuild_script()
# Create the Lambda Zip
self.create_package(output)
self.callback('zip')
size = human_size(os.path.getsize(self.zip_path))
click.echo(click.style("Package created", fg="green", bold=True) + ": " + click.style(self.zip_path, bold=True) + " (" + size + ")")
def template(self, lambda_arn, role_arn, output=None, json=False):
"""
Only build the template file.
"""
if not lambda_arn:
raise ClickException("Lambda ARN is required to template.")
if not role_arn:
raise ClickException("Role ARN is required to template.")
self.zappa.credentials_arn = role_arn
# Create the template!
template = self.zappa.create_stack_template(
lambda_arn=lambda_arn,
lambda_name=self.lambda_name,
api_key_required=self.api_key_required,
iam_authorization=self.iam_authorization,
authorizer=self.authorizer,
cors_options=self.cors,
description=self.apigateway_description,
endpoint_configuration=self.endpoint_configuration
)
if not output:
template_file = self.lambda_name + '-template-' + str(int(time.time())) + '.json'
else:
template_file = output
with open(template_file, 'wb') as out:
out.write(bytes(template.to_json(indent=None, separators=(',',':')), "utf-8"))
if not json:
click.echo(click.style("Template created", fg="green", bold=True) + ": " + click.style(template_file, bold=True))
else:
with open(template_file, 'r') as out:
print(out.read())
def deploy(self, source_zip=None):
"""
Package your project, upload it to S3, register the Lambda function
and create the API Gateway routes.
"""
if not source_zip:
# Make sure we're in a venv.
self.check_venv()
# Execute the prebuild script
if self.prebuild_script:
self.execute_prebuild_script()
# Make sure this isn't already deployed.
deployed_versions = self.zappa.get_lambda_function_versions(self.lambda_name)
if len(deployed_versions) > 0:
raise ClickException("This application is " + click.style("already deployed", fg="red") +
" - did you mean to call " + click.style("update", bold=True) + "?")
# Make sure the necessary IAM execution roles are available
if self.manage_roles:
try:
self.zappa.create_iam_roles()
except botocore.client.ClientError as ce:
raise ClickException(
click.style("Failed", fg="red") + " to " + click.style("manage IAM roles", bold=True) + "!\n" +
"You may " + click.style("lack the necessary AWS permissions", bold=True) +
" to automatically manage a Zappa execution role.\n" +
click.style("Exception reported by AWS:", bold=True) + format(ce) + '\n' +
"To fix this, see here: " +
click.style(
"https://github.com/Miserlou/Zappa#custom-aws-iam-roles-and-policies-for-deployment",
bold=True)
+ '\n')
# Create the Lambda Zip
self.create_package()
self.callback('zip')
# Upload it to S3
success = self.zappa.upload_to_s3(
self.zip_path, self.s3_bucket_name, disable_progress=self.disable_progress)
if not success: # pragma: no cover
raise ClickException("Unable to upload to S3. Quitting.")
# If using a slim handler, upload it to S3 and tell lambda to use this slim handler zip
if self.stage_config.get('slim_handler', False):
# https://github.com/Miserlou/Zappa/issues/510
success = self.zappa.upload_to_s3(self.handler_path, self.s3_bucket_name, disable_progress=self.disable_progress)
if not success: # pragma: no cover
raise ClickException("Unable to upload handler to S3. Quitting.")
# Copy the project zip to the current project zip
current_project_name = '{0!s}_{1!s}_current_project.tar.gz'.format(self.api_stage, self.project_name)
success = self.zappa.copy_on_s3(src_file_name=self.zip_path, dst_file_name=current_project_name,
bucket_name=self.s3_bucket_name)
if not success: # pragma: no cover
raise ClickException("Unable to copy the zip to be the current project. Quitting.")
handler_file = self.handler_path
else:
handler_file = self.zip_path
# Fixes https://github.com/Miserlou/Zappa/issues/613
try:
self.lambda_arn = self.zappa.get_lambda_function(
function_name=self.lambda_name)
except botocore.client.ClientError:
# Register the Lambda function with that zip as the source
# You'll also need to define the path to your lambda_handler code.
kwargs = dict(
handler=self.lambda_handler,
description=self.lambda_description,
vpc_config=self.vpc_config,
dead_letter_config=self.dead_letter_config,
timeout=self.timeout_seconds,
memory_size=self.memory_size,
runtime=self.runtime,
aws_environment_variables=self.aws_environment_variables,
aws_kms_key_arn=self.aws_kms_key_arn,
use_alb=self.use_alb,
layers=self.layers,
concurrency=self.lambda_concurrency,
)
if source_zip and source_zip.startswith('s3://'):
bucket, key_name = parse_s3_url(source_zip)
kwargs['function_name'] = self.lambda_name
kwargs['bucket'] = bucket
kwargs['s3_key'] = key_name
elif source_zip and not source_zip.startswith('s3://'):
with open(source_zip, mode='rb') as fh:
byte_stream = fh.read()
kwargs['function_name'] = self.lambda_name
kwargs['local_zip'] = byte_stream
else:
kwargs['function_name'] = self.lambda_name
kwargs['bucket'] = self.s3_bucket_name
kwargs['s3_key'] = handler_file
self.lambda_arn = self.zappa.create_lambda_function(**kwargs)
# Schedule events for this deployment
self.schedule()
endpoint_url = ''
deployment_string = click.style("Deployment complete", fg="green", bold=True) + "!"
if self.use_alb:
kwargs = dict(
lambda_arn=self.lambda_arn,
lambda_name=self.lambda_name,
alb_vpc_config=self.alb_vpc_config,
timeout=self.timeout_seconds
)
self.zappa.deploy_lambda_alb(**kwargs)
if self.use_apigateway:
# Create and configure the API Gateway
template = self.zappa.create_stack_template(
lambda_arn=self.lambda_arn,
lambda_name=self.lambda_name,
api_key_required=self.api_key_required,
iam_authorization=self.iam_authorization,
authorizer=self.authorizer,
cors_options=self.cors,
description=self.apigateway_description,
endpoint_configuration=self.endpoint_configuration
)
self.zappa.update_stack(
self.lambda_name,
self.s3_bucket_name,
wait=True,
disable_progress=self.disable_progress
)
api_id = self.zappa.get_api_id(self.lambda_name)
# Add binary support
if self.binary_support:
self.zappa.add_binary_support(api_id=api_id, cors=self.cors)
# Add payload compression
if self.stage_config.get('payload_compression', True):
self.zappa.add_api_compression(
api_id=api_id,
min_compression_size=self.stage_config.get('payload_minimum_compression_size', 0))
# Deploy the API!
endpoint_url = self.deploy_api_gateway(api_id)
deployment_string = deployment_string + ": {}".format(endpoint_url)
# Create/link API key
if self.api_key_required:
if self.api_key is None:
self.zappa.create_api_key(api_id=api_id, stage_name=self.api_stage)
else:
self.zappa.add_api_stage_to_api_key(api_key=self.api_key, api_id=api_id, stage_name=self.api_stage)
if self.stage_config.get('touch', True):
self.touch_endpoint(endpoint_url)
# Finally, delete the local copy of our zip package
if not source_zip:
if self.stage_config.get('delete_local_zip', True):
self.remove_local_zip()
# Remove the project zip from S3.
if not source_zip:
self.remove_uploaded_zip()
self.callback('post')
click.echo(deployment_string)
def update(self, source_zip=None, no_upload=False):
"""
Repackage and update the function code.
"""
if not source_zip:
# Make sure we're in a venv.
self.check_venv()
# Execute the prebuild script
if self.prebuild_script:
self.execute_prebuild_script()
# Temporary version check
try:
updated_time = 1472581018
function_response = self.zappa.lambda_client.get_function(FunctionName=self.lambda_name)
conf = function_response['Configuration']
last_updated = parser.parse(conf['LastModified'])
last_updated_unix = time.mktime(last_updated.timetuple())
except botocore.exceptions.BotoCoreError as e:
click.echo(click.style(type(e).__name__, fg="red") + ": " + e.args[0])
sys.exit(-1)
except Exception as e:
click.echo(click.style("Warning!", fg="red") + " Couldn't get function " + self.lambda_name +
" in " + self.zappa.aws_region + " - have you deployed yet?")
sys.exit(-1)
if last_updated_unix <= updated_time:
click.echo(click.style("Warning!", fg="red") +
" You may have upgraded Zappa since deploying this application. You will need to " +
click.style("redeploy", bold=True) + " for this deployment to work properly!")
# Make sure the necessary IAM execution roles are available
if self.manage_roles:
try:
self.zappa.create_iam_roles()
except botocore.client.ClientError:
click.echo(click.style("Failed", fg="red") + " to " + click.style("manage IAM roles", bold=True) + "!")
click.echo("You may " + click.style("lack the necessary AWS permissions", bold=True) +
" to automatically manage a Zappa execution role.")
click.echo("To fix this, see here: " +
click.style("https://github.com/Miserlou/Zappa#custom-aws-iam-roles-and-policies-for-deployment",
bold=True))
sys.exit(-1)
# Create the Lambda Zip,
if not no_upload:
self.create_package()
self.callback('zip')
# Upload it to S3
if not no_upload:
success = self.zappa.upload_to_s3(self.zip_path, self.s3_bucket_name, disable_progress=self.disable_progress)
if not success: # pragma: no cover
raise ClickException("Unable to upload project to S3. Quitting.")
# If using a slim handler, upload it to S3 and tell lambda to use this slim handler zip
if self.stage_config.get('slim_handler', False):
# https://github.com/Miserlou/Zappa/issues/510
success = self.zappa.upload_to_s3(self.handler_path, self.s3_bucket_name, disable_progress=self.disable_progress)
if not success: # pragma: no cover
raise ClickException("Unable to upload handler to S3. Quitting.")
# Copy the project zip to the current project zip
current_project_name = '{0!s}_{1!s}_current_project.tar.gz'.format(self.api_stage, self.project_name)
success = self.zappa.copy_on_s3(src_file_name=self.zip_path, dst_file_name=current_project_name,
bucket_name=self.s3_bucket_name)
if not success: # pragma: no cover
raise ClickException("Unable to copy the zip to be the current project. Quitting.")
handler_file = self.handler_path
else:
handler_file = self.zip_path
# Register the Lambda function with that zip as the source
# You'll also need to define the path to your lambda_handler code.
kwargs = dict(
bucket=self.s3_bucket_name,
function_name=self.lambda_name,
num_revisions=self.num_retained_versions,
concurrency=self.lambda_concurrency,
)
if source_zip and source_zip.startswith('s3://'):
bucket, key_name = parse_s3_url(source_zip)
kwargs.update(dict(
bucket=bucket,
s3_key=key_name
))
self.lambda_arn = self.zappa.update_lambda_function(**kwargs)
elif source_zip and not source_zip.startswith('s3://'):
with open(source_zip, mode='rb') as fh:
byte_stream = fh.read()
kwargs['local_zip'] = byte_stream
self.lambda_arn = self.zappa.update_lambda_function(**kwargs)
else:
if not no_upload:
kwargs['s3_key'] = handler_file
self.lambda_arn = self.zappa.update_lambda_function(**kwargs)
# Remove the uploaded zip from S3, because it is now registered.
if not source_zip and not no_upload:
self.remove_uploaded_zip()
# Update the configuration, in case there are changes.
self.lambda_arn = self.zappa.update_lambda_configuration(
lambda_arn=self.lambda_arn,
function_name=self.lambda_name,
handler=self.lambda_handler,
description=self.lambda_description,
vpc_config=self.vpc_config,
timeout=self.timeout_seconds,
memory_size=self.memory_size,
runtime=self.runtime,
aws_environment_variables=self.aws_environment_variables,
aws_kms_key_arn=self.aws_kms_key_arn,
layers=self.layers
)
# Finally, delete the local copy of our zip package
if not source_zip and not no_upload:
if self.stage_config.get('delete_local_zip', True):
self.remove_local_zip()
if self.use_apigateway:
self.zappa.create_stack_template(
lambda_arn=self.lambda_arn,
lambda_name=self.lambda_name,
api_key_required=self.api_key_required,
iam_authorization=self.iam_authorization,
authorizer=self.authorizer,
cors_options=self.cors,
description=self.apigateway_description,
endpoint_configuration=self.endpoint_configuration
)
self.zappa.update_stack(
self.lambda_name,
self.s3_bucket_name,
wait=True,
update_only=True,
disable_progress=self.disable_progress)
api_id = self.zappa.get_api_id(self.lambda_name)
# Update binary support
if self.binary_support:
self.zappa.add_binary_support(api_id=api_id, cors=self.cors)
else:
self.zappa.remove_binary_support(api_id=api_id, cors=self.cors)
if self.stage_config.get('payload_compression', True):
self.zappa.add_api_compression(
api_id=api_id,
min_compression_size=self.stage_config.get('payload_minimum_compression_size', 0))
else:
self.zappa.remove_api_compression(api_id=api_id)
# It looks a bit like we might actually be using this just to get the URL,
# but we're also updating a few of the APIGW settings.
endpoint_url = self.deploy_api_gateway(api_id)
if self.stage_config.get('domain', None):
endpoint_url = self.stage_config.get('domain')
else:
endpoint_url = None
self.schedule()
# Update any Cognito pool with the Lambda ARN.
# Do this after schedule(), since schedule() clears the Lambda policy and we need to add one back.
self.update_cognito_triggers()
self.callback('post')
if endpoint_url and 'https://' not in endpoint_url:
endpoint_url = 'https://' + endpoint_url
if self.base_path:
endpoint_url += '/' + self.base_path
deployed_string = "Your updated Zappa deployment is " + click.style("live", fg='green', bold=True) + "!"
if self.use_apigateway:
deployed_string = deployed_string + ": " + click.style("{}".format(endpoint_url), bold=True)
api_url = None
if endpoint_url and 'amazonaws.com' not in endpoint_url:
api_url = self.zappa.get_api_url(
self.lambda_name,
self.api_stage)
if endpoint_url != api_url:
deployed_string = deployed_string + " (" + api_url + ")"
if self.stage_config.get('touch', True):
if api_url:
self.touch_endpoint(api_url)
elif endpoint_url:
self.touch_endpoint(endpoint_url)
click.echo(deployed_string)
def rollback(self, revision):
"""
Roll back the currently deployed Lambda code to a previous revision.
"""
print("Rolling back..")
self.zappa.rollback_lambda_function_version(
self.lambda_name, versions_back=revision)
print("Done!")
def tail(self, since, filter_pattern, limit=10000, keep_open=True, colorize=True, http=False, non_http=False, force_colorize=False):
"""
Tail this function's logs.
If keep_open, do so repeatedly, printing any new logs.
"""
try:
since_stamp = string_to_timestamp(since)
last_since = since_stamp
while True:
new_logs = self.zappa.fetch_logs(
self.lambda_name,
start_time=since_stamp,
limit=limit,
filter_pattern=filter_pattern,
)
new_logs = [ e for e in new_logs if e['timestamp'] > last_since ]
self.print_logs(new_logs, colorize, http, non_http, force_colorize)
if not keep_open:
break
if new_logs:
last_since = new_logs[-1]['timestamp']
time.sleep(1)
except KeyboardInterrupt: # pragma: no cover
# Die gracefully
try:
sys.exit(0)
except SystemExit:
os._exit(130)
def undeploy(self, no_confirm=False, remove_logs=False):
"""
Tear down an existing deployment.
"""
if not no_confirm: # pragma: no cover
confirm = input("Are you sure you want to undeploy? [y/n] ")
if confirm != 'y':
return
if self.use_alb:
self.zappa.undeploy_lambda_alb(self.lambda_name)
if self.use_apigateway:
if remove_logs:
self.zappa.remove_api_gateway_logs(self.lambda_name)
domain_name = self.stage_config.get('domain', None)
base_path = self.stage_config.get('base_path', None)
# Only remove the API key if it was auto-generated rather than user-supplied.
if self.api_key_required and self.api_key is None:
api_id = self.zappa.get_api_id(self.lambda_name)
self.zappa.remove_api_key(api_id, self.api_stage)
gateway_id = self.zappa.undeploy_api_gateway(
self.lambda_name,
domain_name=domain_name,
base_path=base_path
)
self.unschedule() # removes event triggers, including warm up event.
self.zappa.delete_lambda_function(self.lambda_name)
if remove_logs:
self.zappa.remove_lambda_function_logs(self.lambda_name)
click.echo(click.style("Done", fg="green", bold=True) + "!")
def update_cognito_triggers(self):
"""
Update any cognito triggers
"""
if self.cognito:
user_pool = self.cognito.get('user_pool')
triggers = self.cognito.get('triggers', [])
lambda_configs = set()
for trigger in triggers:
lambda_configs.add(trigger['source'].split('_')[0])
self.zappa.update_cognito(self.lambda_name, user_pool, lambda_configs, self.lambda_arn)
def schedule(self):
"""
Given a list of functions and a schedule on which to execute them,
set up regular execution.
"""
events = self.stage_config.get('events', [])
if events:
if not isinstance(events, list): # pragma: no cover
print("Events must be supplied as a list.")
return
for event in events:
self.collision_warning(event.get('function'))
if self.stage_config.get('keep_warm', True):
if not events:
events = []
keep_warm_rate = self.stage_config.get('keep_warm_expression', "rate(4 minutes)")
events.append({'name': 'zappa-keep-warm',
'function': 'handler.keep_warm_callback',
'expression': keep_warm_rate,
'description': 'Zappa Keep Warm - {}'.format(self.lambda_name)})
if events:
try:
function_response = self.zappa.lambda_client.get_function(FunctionName=self.lambda_name)
except botocore.exceptions.ClientError as e: # pragma: no cover
click.echo(click.style("Function does not exist", fg="yellow") + ", please " +
click.style("deploy", bold=True) + "first. Ex:" +
click.style("zappa deploy {}.".format(self.api_stage), bold=True))
sys.exit(-1)
print("Scheduling..")
self.zappa.schedule_events(
lambda_arn=function_response['Configuration']['FunctionArn'],
lambda_name=self.lambda_name,
events=events
)
# Add async tasks SNS
if self.stage_config.get('async_source', None) == 'sns' \
and self.stage_config.get('async_resources', True):
self.lambda_arn = self.zappa.get_lambda_function(
function_name=self.lambda_name)
topic_arn = self.zappa.create_async_sns_topic(
lambda_name=self.lambda_name,
lambda_arn=self.lambda_arn
)
click.echo('SNS Topic created: %s' % topic_arn)
# Add async tasks DynamoDB
table_name = self.stage_config.get('async_response_table', False)
read_capacity = self.stage_config.get('async_response_table_read_capacity', 1)
write_capacity = self.stage_config.get('async_response_table_write_capacity', 1)
if table_name and self.stage_config.get('async_resources', True):
created, response_table = self.zappa.create_async_dynamodb_table(
table_name, read_capacity, write_capacity)
if created:
click.echo('DynamoDB table created: %s' % table_name)
else:
click.echo('DynamoDB table exists: %s' % table_name)
provisioned_throughput = response_table['Table']['ProvisionedThroughput']
if provisioned_throughput['ReadCapacityUnits'] != read_capacity or \
provisioned_throughput['WriteCapacityUnits'] != write_capacity:
click.echo(click.style(
"\nWarning! Existing DynamoDB table ({}) does not match configured capacity.\n".format(table_name),
fg='red'
))
def unschedule(self):
"""
Given a list of scheduled functions,
tear down their regular execution.
"""
# Run even if events are not defined to remove previously existing ones (thus default to []).
events = self.stage_config.get('events', [])
if not isinstance(events, list): # pragma: no cover
print("Events must be supplied as a list.")
return
function_arn = None
try:
function_response = self.zappa.lambda_client.get_function(FunctionName=self.lambda_name)
function_arn = function_response['Configuration']['FunctionArn']
except botocore.exceptions.ClientError as e: # pragma: no cover
raise ClickException("Function does not exist, you should deploy first. Ex: zappa deploy {}. "
"Proceeding to unschedule CloudWatch based events.".format(self.api_stage))
print("Unscheduling..")
self.zappa.unschedule_events(
lambda_name=self.lambda_name,
lambda_arn=function_arn,
events=events,
)
# Remove async task SNS
if self.stage_config.get('async_source', None) == 'sns' \
and self.stage_config.get('async_resources', True):
removed_arns = self.zappa.remove_async_sns_topic(self.lambda_name)
click.echo('SNS Topic removed: %s' % ', '.join(removed_arns))
def invoke(self, function_name, raw_python=False, command=None, no_color=False):
"""
Invoke a remote function.
"""
# There are three likely scenarios for 'command' here:
# command, which is a modular function path
# raw_command, which is a string of python to execute directly
# manage, which is a Django-specific management command invocation
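# Usage sketch (hypothetical instance named `cli`, settings loaded):
#     cli.invoke('my_module.my_function')          # modular function path
#     cli.invoke('print(1 + 1)', raw_python=True)  # raw Python, executed remotely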
key = command if command is not None else 'command'
if raw_python:
command = {'raw_command': function_name}
else:
command = {key: function_name}
# Can't use hjson
import json as json
response = self.zappa.invoke_lambda_function(
self.lambda_name,
json.dumps(command),
invocation_type='RequestResponse',
)
if 'LogResult' in response:
if no_color:
print(base64.b64decode(response['LogResult']))
else:
decoded = base64.b64decode(response['LogResult']).decode()
formatted = self.format_invoke_command(decoded)
colorized = self.colorize_invoke_command(formatted)
print(colorized)
else:
print(response)
# For a successful request FunctionError is not in response.
# https://github.com/Miserlou/Zappa/pull/1254/
if 'FunctionError' in response:
raise ClickException(
"{} error occurred while invoking command.".format(response['FunctionError'])
)
def format_invoke_command(self, string):
"""
Correctly format the string output from the invoke() method,
replacing line breaks and tabs where necessary.
"""
string = string.replace('\\n', '\n')
formatted_response = ''
for line in string.splitlines():
if line.startswith('REPORT'):
line = line.replace('\t', '\n')
if line.startswith('[DEBUG]'):
line = line.replace('\t', ' ')
formatted_response += line + '\n'
formatted_response = formatted_response.replace('\n\n', '\n')
return formatted_response
def colorize_invoke_command(self, string):
"""
Apply various heuristics to return a colorized version of the invoke
command string. If these fail, simply return the string in plaintext.
Inspired by colorize_log_entry().
"""
final_string = string
try:
# Line headers
try:
for token in ['START', 'END', 'REPORT', '[DEBUG]']:
if token in final_string:
format_string = '[{}]'
# match whole words only
pattern = r'\b{}\b'
if token == '[DEBUG]':
format_string = '{}'
pattern = re.escape(token)
repl = click.style(
format_string.format(token),
bold=True,
fg='cyan'
)
final_string = re.sub(
pattern.format(token), repl, final_string
)
except Exception: # pragma: no cover
pass
# Green bold Tokens
try:
for token in [
'Zappa Event:',
'RequestId:',
'Version:',
'Duration:',
'Billed',
'Memory Size:',
'Max Memory Used:'
]:
if token in final_string:
final_string = final_string.replace(token, click.style(
token,
bold=True,
fg='green'
))
except Exception: # pragma: no cover
pass
# UUIDs
for token in final_string.replace('\t', ' ').split(' '):
try:
if token.count('-') == 4 and token.replace('-', '').isalnum():
final_string = final_string.replace(
token,
click.style(token, fg='magenta')
)
except Exception: # pragma: no cover
pass
return final_string
except Exception:
return string
def status(self, return_json=False):
"""
Describe the status of the current deployment.
"""
def tabular_print(title, value):
"""
Convenience function for printing formatted table items.
"""
click.echo('%-*s%s' % (32, click.style("\t" + title, fg='green') + ':', str(value)))
return
# Lambda Env Details
lambda_versions = self.zappa.get_lambda_function_versions(self.lambda_name)
if not lambda_versions:
raise ClickException(click.style("No Lambda %s detected in %s - have you deployed yet?" %
(self.lambda_name, self.zappa.aws_region), fg='red'))
status_dict = collections.OrderedDict()
status_dict["Lambda Versions"] = len(lambda_versions)
function_response = self.zappa.lambda_client.get_function(FunctionName=self.lambda_name)
conf = function_response['Configuration']
self.lambda_arn = conf['FunctionArn']
status_dict["Lambda Name"] = self.lambda_name
status_dict["Lambda ARN"] = self.lambda_arn
status_dict["Lambda Role ARN"] = conf['Role']
status_dict["Lambda Handler"] = conf['Handler']
status_dict["Lambda Code Size"] = conf['CodeSize']
status_dict["Lambda Version"] = conf['Version']
status_dict["Lambda Last Modified"] = conf['LastModified']
status_dict["Lambda Memory Size"] = conf['MemorySize']
status_dict["Lambda Timeout"] = conf['Timeout']
status_dict["Lambda Runtime"] = conf['Runtime']
if 'VpcConfig' in conf.keys():
status_dict["Lambda VPC ID"] = conf.get('VpcConfig', {}).get('VpcId', 'Not assigned')
else:
status_dict["Lambda VPC ID"] = None
# Calculated statistics
try:
function_invocations = self.zappa.cloudwatch.get_metric_statistics(
Namespace='AWS/Lambda',
MetricName='Invocations',
StartTime=datetime.utcnow()-timedelta(days=1),
EndTime=datetime.utcnow(),
Period=1440,
Statistics=['Sum'],
Dimensions=[{'Name': 'FunctionName',
'Value': '{}'.format(self.lambda_name)}]
)['Datapoints'][0]['Sum']
except Exception as e:
function_invocations = 0
try:
function_errors = self.zappa.cloudwatch.get_metric_statistics(
Namespace='AWS/Lambda',
MetricName='Errors',
StartTime=datetime.utcnow()-timedelta(days=1),
EndTime=datetime.utcnow(),
Period=1440,
Statistics=['Sum'],
Dimensions=[{'Name': 'FunctionName',
'Value': '{}'.format(self.lambda_name)}]
)['Datapoints'][0]['Sum']
except Exception as e:
function_errors = 0
try:
error_rate = "{0:.2f}%".format(function_errors / function_invocations * 100)
except Exception:
error_rate = "Error calculating"
status_dict["Invocations (24h)"] = int(function_invocations)
status_dict["Errors (24h)"] = int(function_errors)
status_dict["Error Rate (24h)"] = error_rate
# URLs
if self.use_apigateway:
api_url = self.zappa.get_api_url(
self.lambda_name,
self.api_stage)
status_dict["API Gateway URL"] = api_url
# Api Keys
api_id = self.zappa.get_api_id(self.lambda_name)
for api_key in self.zappa.get_api_keys(api_id, self.api_stage):
status_dict["API Gateway x-api-key"] = api_key
# There literally isn't a better way to do this.
# AWS provides no way to tie an APIGW domain name to its Lambda function.
domain_url = self.stage_config.get('domain', None)
base_path = self.stage_config.get('base_path', None)
if domain_url:
status_dict["Domain URL"] = 'https://' + domain_url
if base_path:
status_dict["Domain URL"] += '/' + base_path
else:
status_dict["Domain URL"] = "None Supplied"
# Scheduled Events
event_rules = self.zappa.get_event_rules_for_lambda(lambda_arn=self.lambda_arn)
status_dict["Num. Event Rules"] = len(event_rules)
if len(event_rules) > 0:
status_dict['Events'] = []
for rule in event_rules:
event_dict = {}
rule_name = rule['Name']
event_dict["Event Rule Name"] = rule_name
event_dict["Event Rule Schedule"] = rule.get('ScheduleExpression', None)
event_dict["Event Rule State"] = rule.get('State', None).title()
event_dict["Event Rule ARN"] = rule.get('Arn', None)
status_dict['Events'].append(event_dict)
if return_json:
# Putting the status in machine readable format
# https://github.com/Miserlou/Zappa/issues/407
print(json.dumpsJSON(status_dict))
else:
click.echo("Status for " + click.style(self.lambda_name, bold=True) + ": ")
for k, v in status_dict.items():
if k == 'Events':
# Events are a list of dicts
for event in v:
for item_k, item_v in event.items():
tabular_print(item_k, item_v)
else:
tabular_print(k, v)
# TODO: S3/SQS/etc. type events?
return True
def check_stage_name(self, stage_name):
"""
Make sure the stage name matches the AWS-allowed pattern
(calls to apigateway_client.create_deployment, will fail with error
message "ClientError: An error occurred (BadRequestException) when
calling the CreateDeployment operation: Stage name only allows
a-zA-Z0-9_" if the pattern does not match)
"""
if self.stage_name_env_pattern.match(stage_name):
return True
raise ValueError("AWS requires stage name to match a-zA-Z0-9_")
def check_environment(self, environment):
"""
Make sure the environment contains only strings
(since putenv needs a string)
"""
non_strings = []
for (k,v) in environment.items():
if not isinstance(v, str):
non_strings.append(k)
if non_strings:
raise ValueError("The following environment variables are not strings: {}".format(", ".join(non_strings)))
else:
return True
def init(self, settings_file="zappa_settings.json"):
"""
Initialize a new Zappa project by creating a new zappa_settings.json in a guided process.
This should probably be broken up into a few separate components once it's stable.
Testing these inputs requires monkeypatching with mock, which isn't pretty.
"""
# Make sure we're in a venv.
self.check_venv()
# Ensure that we don't already have a zappa_settings file.
if os.path.isfile(settings_file):
raise ClickException("This project already has a " + click.style("{0!s} file".format(settings_file), fg="red", bold=True) + "!")
# Explain system.
click.echo(click.style("""\n███████╗ █████╗ ██████╗ ██████╗ █████╗
╚══███╔╝██╔══██╗██╔══██╗██╔══██╗██╔══██╗
███╔╝ ███████║██████╔╝██████╔╝███████║
███╔╝ ██╔══██║██╔═══╝ ██╔═══╝ ██╔══██║
███████╗██║ ██║██║ ██║ ██║ ██║
╚══════╝╚═╝ ╚═╝╚═╝ ╚═╝ ╚═╝ ╚═╝\n""", fg='green', bold=True))
click.echo(click.style("Welcome to ", bold=True) + click.style("Zappa", fg='green', bold=True) + click.style("!\n", bold=True))
click.echo(click.style("Zappa", bold=True) + " is a system for running server-less Python web applications"
" on AWS Lambda and AWS API Gateway.")
click.echo("This `init` command will help you create and configure your new Zappa deployment.")
click.echo("Let's get started!\n")
# Create Env
while True:
click.echo("Your Zappa configuration can support multiple production stages, like '" +
click.style("dev", bold=True) + "', '" + click.style("staging", bold=True) + "', and '" +
click.style("production", bold=True) + "'.")
env = input("What do you want to call this environment (default 'dev'): ") or "dev"
try:
self.check_stage_name(env)
break
except ValueError:
click.echo(click.style("Stage names must match a-zA-Z0-9_", fg="red"))
# Detect AWS profiles and regions
# If anyone knows a more straightforward way to easily detect and parse AWS profiles I'm happy to change this, feels like a hack
session = botocore.session.Session()
config = session.full_config
profiles = config.get("profiles", {})
profile_names = list(profiles.keys())
click.echo("\nAWS Lambda and API Gateway are only available in certain regions. "\
"Let's check to make sure you have a profile set up in one that will work.")
if not profile_names:
profile_name, profile = None, None
click.echo("We couldn't find an AWS profile to use. Before using Zappa, you'll need to set one up. See here for more info: {}"
.format(click.style(BOTO3_CONFIG_DOCS_URL, fg="blue", underline=True)))
elif len(profile_names) == 1:
profile_name = profile_names[0]
profile = profiles[profile_name]
click.echo("Okay, using profile {}!".format(click.style(profile_name, bold=True)))
else:
if "default" in profile_names:
default_profile = [p for p in profile_names if p == "default"][0]
else:
default_profile = profile_names[0]
while True:
profile_name = input("We found the following profiles: {}, and {}. "\
"Which would you like us to use? (default '{}'): "
.format(
', '.join(profile_names[:-1]),
profile_names[-1],
default_profile
)) or default_profile
if profile_name in profiles:
profile = profiles[profile_name]
break
else:
click.echo("Please enter a valid name for your AWS profile.")
profile_region = profile.get("region") if profile else None
# Create Bucket
click.echo("\nYour Zappa deployments will need to be uploaded to a " + click.style("private S3 bucket", bold=True) + ".")
click.echo("If you don't have a bucket yet, we'll create one for you too.")
default_bucket = "zappa-" + ''.join(random.choice(string.ascii_lowercase + string.digits) for _ in range(9))
while True:
bucket = input("What do you want to call your bucket? (default '%s'): " % default_bucket) or default_bucket
if is_valid_bucket_name(bucket):
break
click.echo(click.style("Invalid bucket name!", bold=True))
click.echo("S3 buckets must be named according to the following rules:")
click.echo("""* Bucket names must be unique across all existing bucket names in Amazon S3.
* Bucket names must comply with DNS naming conventions.
* Bucket names must be at least 3 and no more than 63 characters long.
* Bucket names must not contain uppercase characters or underscores.
* Bucket names must start with a lowercase letter or number.
* Bucket names must be a series of one or more labels. Adjacent labels are separated
by a single period (.). Bucket names can contain lowercase letters, numbers, and
hyphens. Each label must start and end with a lowercase letter or a number.
* Bucket names must not be formatted as an IP address (for example, 192.168.5.4).
* When you use virtual hosted–style buckets with Secure Sockets Layer (SSL), the SSL
wildcard certificate only matches buckets that don't contain periods. To work around
this, use HTTP or write your own certificate verification logic. We recommend that
you do not use periods (".") in bucket names when using virtual hosted–style buckets.
""")
# Detect Django/Flask
try: # pragma: no cover
import django
has_django = True
except ImportError as e:
has_django = False
try: # pragma: no cover
import flask
has_flask = True
except ImportError as e:
has_flask = False
print('')
# App-specific
if has_django: # pragma: no cover
click.echo("It looks like this is a " + click.style("Django", bold=True) + " application!")
click.echo("What is the " + click.style("module path", bold=True) + " to your projects's Django settings?")
django_settings = None
matches = detect_django_settings()
while django_settings in [None, '']:
if matches:
click.echo("We discovered: " + click.style(', '.join('{}'.format(i) for v, i in enumerate(matches)), bold=True))
django_settings = input("Where are your project's settings? (default '%s'): " % matches[0]) or matches[0]
else:
click.echo("(This will likely be something like 'your_project.settings')")
django_settings = input("Where are your project's settings?: ")
django_settings = django_settings.replace("'", "")
django_settings = django_settings.replace('"', "")
else:
matches = None
if has_flask:
click.echo("It looks like this is a " + click.style("Flask", bold=True) + " application.")
matches = detect_flask_apps()
click.echo("What's the " + click.style("modular path", bold=True) + " to your app's function?")
click.echo("This will likely be something like 'your_module.app'.")
app_function = None
while app_function in [None, '']:
if matches:
click.echo("We discovered: " + click.style(', '.join('{}'.format(i) for v, i in enumerate(matches)), bold=True))
app_function = input("Where is your app's function? (default '%s'): " % matches[0]) or matches[0]
else:
app_function = input("Where is your app's function?: ")
app_function = app_function.replace("'", "")
app_function = app_function.replace('"', "")
# TODO: Create VPC?
# Memory size? Time limit?
# Domain? LE keys? Region?
# 'Advanced Settings' mode?
# Globalize
click.echo("\nYou can optionally deploy to " + click.style("all available regions", bold=True) + " in order to provide fast global service.")
click.echo("If you are using Zappa for the first time, you probably don't want to do this!")
global_deployment = False
while True:
global_type = input("Would you like to deploy this application " + click.style("globally", bold=True) + "? (default 'n') [y/n/(p)rimary]: ")
if not global_type:
break
if global_type.lower() in ["y", "yes", "p", "primary"]:
global_deployment = True
break
if global_type.lower() in ["n", "no"]:
global_deployment = False
break
# The given environment name
zappa_settings = {
env: {
'profile_name': profile_name,
's3_bucket': bucket,
'runtime': get_venv_from_python_version(),
'project_name': self.get_project_name()
}
}
if profile_region:
zappa_settings[env]['aws_region'] = profile_region
if has_django:
zappa_settings[env]['django_settings'] = django_settings
else:
zappa_settings[env]['app_function'] = app_function
# Global Region Deployment
if global_deployment:
additional_regions = [r for r in API_GATEWAY_REGIONS if r != profile_region]
# Create additional stages
if global_type.lower() in ["p", "primary"]:
additional_regions = [r for r in additional_regions if '-1' in r]
for region in additional_regions:
env_name = env + '_' + region.replace('-', '_')
g_env = {
env_name: {
'extends': env,
'aws_region': region
}
}
zappa_settings.update(g_env)
import json as json # hjson is fine for loading, not fine for writing.
zappa_settings_json = json.dumps(zappa_settings, sort_keys=True, indent=4)
click.echo("\nOkay, here's your " + click.style("zappa_settings.json", bold=True) + ":\n")
click.echo(click.style(zappa_settings_json, fg="yellow", bold=False))
confirm = input("\nDoes this look " + click.style("okay", bold=True, fg="green") + "? (default 'y') [y/n]: ") or 'yes'
if confirm[0] not in ['y', 'Y', 'yes', 'YES']:
click.echo("" + click.style("Sorry", bold=True, fg='red') + " to hear that! Please init again.")
return
# Write
with open("zappa_settings.json", "w") as zappa_settings_file:
zappa_settings_file.write(zappa_settings_json)
if global_deployment:
click.echo("\n" + click.style("Done", bold=True) + "! You can also " + click.style("deploy all", bold=True) + " by executing:\n")
click.echo(click.style("\t$ zappa deploy --all", bold=True))
click.echo("\nAfter that, you can " + click.style("update", bold=True) + " your application code with:\n")
click.echo(click.style("\t$ zappa update --all", bold=True))
else:
click.echo("\n" + click.style("Done", bold=True) + "! Now you can " + click.style("deploy", bold=True) + " your Zappa application by executing:\n")
click.echo(click.style("\t$ zappa deploy %s" % env, bold=True))
click.echo("\nAfter that, you can " + click.style("update", bold=True) + " your application code with:\n")
click.echo(click.style("\t$ zappa update %s" % env, bold=True))
click.echo("\nTo learn more, check out our project page on " + click.style("GitHub", bold=True) +
" here: " + click.style("https://github.com/Miserlou/Zappa", fg="cyan", bold=True))
click.echo("and stop by our " + click.style("Slack", bold=True) + " channel here: " +
click.style("https://zappateam.slack.com", fg="cyan", bold=True))
click.echo("\nEnjoy!,")
click.echo(" ~ Team " + click.style("Zappa", bold=True) + "!")
return
def certify(self, no_confirm=True, manual=False):
"""
Register or update a domain certificate for this env.
"""
if not self.domain:
raise ClickException("Can't certify a domain without " + click.style("domain", fg="red", bold=True) + " configured!")
if not no_confirm: # pragma: no cover
confirm = input("Are you sure you want to certify? [y/n] ")
if confirm != 'y':
return
# Make sure this isn't already deployed.
deployed_versions = self.zappa.get_lambda_function_versions(self.lambda_name)
if len(deployed_versions) == 0:
raise ClickException("This application " + click.style("isn't deployed yet", fg="red") +
" - did you mean to call " + click.style("deploy", bold=True) + "?")
account_key_location = self.stage_config.get('lets_encrypt_key', None)
cert_location = self.stage_config.get('certificate', None)
cert_key_location = self.stage_config.get('certificate_key', None)
cert_chain_location = self.stage_config.get('certificate_chain', None)
cert_arn = self.stage_config.get('certificate_arn', None)
base_path = self.stage_config.get('base_path', None)
# These are sensitive
certificate_body = None
certificate_private_key = None
certificate_chain = None
# Prepare for custom Let's Encrypt
if not cert_location and not cert_arn:
if not account_key_location:
raise ClickException("Can't certify a domain without " + click.style("lets_encrypt_key", fg="red", bold=True) +
" or " + click.style("certificate", fg="red", bold=True)+
" or " + click.style("certificate_arn", fg="red", bold=True) + " configured!")
# Fetch the account key and install it to <tempdir>/account.key
from .letsencrypt import gettempdir
if account_key_location.startswith('s3://'):
bucket, key_name = parse_s3_url(account_key_location)
self.zappa.s3_client.download_file(bucket, key_name, os.path.join(gettempdir(), 'account.key'))
else:
from shutil import copyfile
copyfile(account_key_location, os.path.join(gettempdir(), 'account.key'))
# Prepare for Custom SSL
elif not account_key_location and not cert_arn:
if not cert_location or not cert_key_location or not cert_chain_location:
raise ClickException("Can't certify a domain without " +
click.style("certificate, certificate_key and certificate_chain", fg="red", bold=True) + " configured!")
# Read the supplied certificates.
with open(cert_location) as f:
certificate_body = f.read()
with open(cert_key_location) as f:
certificate_private_key = f.read()
with open(cert_chain_location) as f:
certificate_chain = f.read()
click.echo("Certifying domain " + click.style(self.domain, fg="green", bold=True) + "..")
# Get cert and update domain.
# Let's Encrypt
if not cert_location and not cert_arn:
from .letsencrypt import get_cert_and_update_domain
cert_success = get_cert_and_update_domain(
self.zappa,
self.lambda_name,
self.api_stage,
self.domain,
manual
)
# Custom SSL / ACM
else:
route53 = self.stage_config.get('route53_enabled', True)
if not self.zappa.get_domain_name(self.domain, route53=route53):
dns_name = self.zappa.create_domain_name(
domain_name=self.domain,
certificate_name=self.domain + "-Zappa-Cert",
certificate_body=certificate_body,
certificate_private_key=certificate_private_key,
certificate_chain=certificate_chain,
certificate_arn=cert_arn,
lambda_name=self.lambda_name,
stage=self.api_stage,
base_path=base_path
)
if route53:
self.zappa.update_route53_records(self.domain, dns_name)
print("Created a new domain name with supplied certificate. Please note that it can take up to 40 minutes for this domain to be "
"created and propagated through AWS, but it requires no further work on your part.")
else:
self.zappa.update_domain_name(
domain_name=self.domain,
certificate_name=self.domain + "-Zappa-Cert",
certificate_body=certificate_body,
certificate_private_key=certificate_private_key,
certificate_chain=certificate_chain,
certificate_arn=cert_arn,
lambda_name=self.lambda_name,
stage=self.api_stage,
route53=route53,
base_path=base_path
)
cert_success = True
if cert_success:
click.echo("Certificate " + click.style("updated", fg="green", bold=True) + "!")
else:
click.echo(click.style("Failed", fg="red", bold=True) + " to generate or install certificate! :(")
click.echo("\n==============\n")
shamelessly_promote()
##
# Shell
##
def shell(self):
"""
Spawn a debug shell.
"""
click.echo(click.style("NOTICE!", fg="yellow", bold=True) + " This is a " + click.style("local", fg="green", bold=True) + " shell, inside a " + click.style("Zappa", bold=True) + " object!")
self.zappa.shell()
return
##
# Utility
##
def callback(self, position):
"""
Allows the execution of custom code between creation of the zip file and deployment to AWS.
:return: None
"""
callbacks = self.stage_config.get('callbacks', {})
callback = callbacks.get(position)
if callback:
(mod_path, cb_func_name) = callback.rsplit('.', 1)
try: # Prefer callback in working directory
if mod_path.count('.') >= 1: # Callback function is nested in a folder
(mod_folder_path, mod_name) = mod_path.rsplit('.', 1)
mod_folder_path_fragments = mod_folder_path.split('.')
working_dir = os.path.join(os.getcwd(), *mod_folder_path_fragments)
else:
mod_name = mod_path
working_dir = os.getcwd()
working_dir_importer = pkgutil.get_importer(working_dir)
module_ = working_dir_importer.find_module(mod_name).load_module(mod_name)
except (ImportError, AttributeError):
try: # Callback func might be in virtualenv
module_ = importlib.import_module(mod_path)
except ImportError: # pragma: no cover
raise ClickException(click.style("Failed ", fg="red") + 'to ' + click.style(
"import {position} callback ".format(position=position),
bold=True) + 'module: "{mod_path}"'.format(mod_path=click.style(mod_path, bold=True)))
if not hasattr(module_, cb_func_name): # pragma: no cover
raise ClickException(click.style("Failed ", fg="red") + 'to ' + click.style(
"find {position} callback ".format(position=position), bold=True) + 'function: "{cb_func_name}" '.format(
cb_func_name=click.style(cb_func_name, bold=True)) + 'in module "{mod_path}"'.format(mod_path=mod_path))
cb_func = getattr(module_, cb_func_name)
cb_func(self) # Call the function passing self
def check_for_update(self):
"""
Print a warning if there's a new Zappa version available.
"""
try:
version = pkg_resources.require("zappa")[0].version
updateable = check_new_version_available(version)
if updateable:
click.echo(click.style("Important!", fg="yellow", bold=True) +
" A new version of " + click.style("Zappa", bold=True) + " is available!")
click.echo("Upgrade with: " + click.style("pip install zappa --upgrade", bold=True))
click.echo("Visit the project page on GitHub to see the latest changes: " +
click.style("https://github.com/Miserlou/Zappa", bold=True))
except Exception as e: # pragma: no cover
print(e)
return
def load_settings(self, settings_file=None, session=None):
"""
Load the local zappa_settings file.
An existing boto session can be supplied, though this is likely for testing purposes.
Returns the loaded Zappa object.
"""
# Ensure we're passed a valid settings file.
if not settings_file:
settings_file = self.get_json_or_yaml_settings()
if not os.path.isfile(settings_file):
raise ClickException("Please configure your zappa_settings file.")
# Load up file
self.load_settings_file(settings_file)
# Make sure that the stages are valid names:
for stage_name in self.zappa_settings.keys():
try:
self.check_stage_name(stage_name)
except ValueError:
raise ValueError("API stage names must match a-zA-Z0-9_ ; '{0!s}' does not.".format(stage_name))
# Make sure that this stage is our settings
if self.api_stage not in self.zappa_settings.keys():
raise ClickException("Please define stage '{0!s}' in your Zappa settings.".format(self.api_stage))
# We need a working title for this project. Use one if supplied, else cwd dirname.
if 'project_name' in self.stage_config: # pragma: no cover
# If the name is invalid, this will throw an exception whose message propagates up the stack.
self.project_name = validate_name(self.stage_config['project_name'])
else:
self.project_name = self.get_project_name()
# The name of the actual AWS Lambda function, ex, 'helloworld-dev'
# Assume that we have already validated the name beforehand.
# Related: https://github.com/Miserlou/Zappa/pull/664
# https://github.com/Miserlou/Zappa/issues/678
# And various others from Slack.
self.lambda_name = slugify.slugify(self.project_name + '-' + self.api_stage)
# Load stage-specific settings
self.s3_bucket_name = self.stage_config.get('s3_bucket', "zappa-" + ''.join(random.choice(string.ascii_lowercase + string.digits) for _ in range(9)))
self.vpc_config = self.stage_config.get('vpc_config', {})
self.memory_size = self.stage_config.get('memory_size', 512)
self.app_function = self.stage_config.get('app_function', None)
self.exception_handler = self.stage_config.get('exception_handler', None)
self.aws_region = self.stage_config.get('aws_region', None)
self.debug = self.stage_config.get('debug', True)
self.prebuild_script = self.stage_config.get('prebuild_script', None)
self.profile_name = self.stage_config.get('profile_name', None)
self.log_level = self.stage_config.get('log_level', "DEBUG")
self.domain = self.stage_config.get('domain', None)
self.base_path = self.stage_config.get('base_path', None)
self.timeout_seconds = self.stage_config.get('timeout_seconds', 30)
dead_letter_arn = self.stage_config.get('dead_letter_arn', '')
self.dead_letter_config = {'TargetArn': dead_letter_arn} if dead_letter_arn else {}
self.cognito = self.stage_config.get('cognito', None)
self.num_retained_versions = self.stage_config.get('num_retained_versions', None)
# Check for valid values of num_retained_versions
if self.num_retained_versions is not None and type(self.num_retained_versions) is not int:
raise ClickException("Please supply either an integer or null for num_retained_versions in the zappa_settings.json. Found %s" % type(self.num_retained_versions))
elif type(self.num_retained_versions) is int and self.num_retained_versions < 1:
raise ClickException("The value for num_retained_versions in the zappa_settings.json should be greater than 0.")
# Provide legacy support for `use_apigateway`, now `apigateway_enabled`.
# https://github.com/Miserlou/Zappa/issues/490
# https://github.com/Miserlou/Zappa/issues/493
self.use_apigateway = self.stage_config.get('use_apigateway', True)
if self.use_apigateway:
self.use_apigateway = self.stage_config.get('apigateway_enabled', True)
self.apigateway_description = self.stage_config.get('apigateway_description', None)
self.lambda_handler = self.stage_config.get('lambda_handler', 'handler.lambda_handler')
# DEPRECATED. https://github.com/Miserlou/Zappa/issues/456
self.remote_env_bucket = self.stage_config.get('remote_env_bucket', None)
self.remote_env_file = self.stage_config.get('remote_env_file', None)
self.remote_env = self.stage_config.get('remote_env', None)
self.settings_file = self.stage_config.get('settings_file', None)
self.django_settings = self.stage_config.get('django_settings', None)
self.manage_roles = self.stage_config.get('manage_roles', True)
self.binary_support = self.stage_config.get('binary_support', True)
self.api_key_required = self.stage_config.get('api_key_required', False)
self.api_key = self.stage_config.get('api_key')
self.endpoint_configuration = self.stage_config.get('endpoint_configuration', None)
self.iam_authorization = self.stage_config.get('iam_authorization', False)
self.cors = self.stage_config.get("cors", False)
self.lambda_description = self.stage_config.get('lambda_description', "Zappa Deployment")
self.lambda_concurrency = self.stage_config.get('lambda_concurrency', None)
self.environment_variables = self.stage_config.get('environment_variables', {})
self.aws_environment_variables = self.stage_config.get('aws_environment_variables', {})
self.check_environment(self.environment_variables)
self.authorizer = self.stage_config.get('authorizer', {})
self.runtime = self.stage_config.get('runtime', get_runtime_from_python_version())
self.aws_kms_key_arn = self.stage_config.get('aws_kms_key_arn', '')
self.context_header_mappings = self.stage_config.get('context_header_mappings', {})
self.xray_tracing = self.stage_config.get('xray_tracing', False)
self.desired_role_arn = self.stage_config.get('role_arn')
self.layers = self.stage_config.get('layers', None)
# Load ALB-related settings
self.use_alb = self.stage_config.get('alb_enabled', False)
self.alb_vpc_config = self.stage_config.get('alb_vpc_config', {})
# Additional tags
self.tags = self.stage_config.get('tags', {})
desired_role_name = self.lambda_name + "-ZappaLambdaExecutionRole"
self.zappa = Zappa( boto_session=session,
profile_name=self.profile_name,
aws_region=self.aws_region,
load_credentials=self.load_credentials,
desired_role_name=desired_role_name,
desired_role_arn=self.desired_role_arn,
runtime=self.runtime,
tags=self.tags,
endpoint_urls=self.stage_config.get('aws_endpoint_urls',{}),
xray_tracing=self.xray_tracing
)
for setting in CUSTOM_SETTINGS:
if setting in self.stage_config:
setting_val = self.stage_config[setting]
# Read the policy file contents.
if setting.endswith('policy'):
with open(setting_val, 'r') as f:
setting_val = f.read()
setattr(self.zappa, setting, setting_val)
if self.app_function:
self.collision_warning(self.app_function)
if self.app_function[-3:] == '.py':
click.echo(click.style("Warning!", fg="red", bold=True) +
" Your app_function is pointing to a " + click.style("file and not a function", bold=True) +
"! It should probably be something like 'my_file.app', not 'my_file.py'!")
return self.zappa
def get_json_or_yaml_settings(self, settings_name="zappa_settings"):
"""
Return zappa_settings path as JSON or YAML (or TOML), as appropriate.
"""
zs_json = settings_name + ".json"
zs_yml = settings_name + ".yml"
zs_yaml = settings_name + ".yaml"
zs_toml = settings_name + ".toml"
# Must have at least one
if not os.path.isfile(zs_json) \
and not os.path.isfile(zs_yml) \
and not os.path.isfile(zs_yaml) \
and not os.path.isfile(zs_toml):
raise ClickException("Please configure a zappa_settings file or call `zappa init`.")
# Prefer JSON
if os.path.isfile(zs_json):
settings_file = zs_json
elif os.path.isfile(zs_toml):
settings_file = zs_toml
elif os.path.isfile(zs_yml):
settings_file = zs_yml
else:
settings_file = zs_yaml
return settings_file
def load_settings_file(self, settings_file=None):
"""
Load our settings file.
"""
if not settings_file:
settings_file = self.get_json_or_yaml_settings()
if not os.path.isfile(settings_file):
raise ClickException("Please configure your zappa_settings file or call `zappa init`.")
path, ext = os.path.splitext(settings_file)
if ext == '.yml' or ext == '.yaml':
with open(settings_file) as yaml_file:
try:
self.zappa_settings = yaml.safe_load(yaml_file)
except ValueError: # pragma: no cover
raise ValueError("Unable to load the Zappa settings YAML. It may be malformed.")
elif ext == '.toml':
with open(settings_file) as toml_file:
try:
self.zappa_settings = toml.load(toml_file)
except ValueError: # pragma: no cover
raise ValueError("Unable to load the Zappa settings TOML. It may be malformed.")
else:
with open(settings_file) as json_file:
try:
self.zappa_settings = json.load(json_file)
except ValueError: # pragma: no cover
raise ValueError("Unable to load the Zappa settings JSON. It may be malformed.")
def create_package(self, output=None):
"""
Ensure that the package can be properly configured,
and then create it.
"""
# Create the Lambda zip package (includes project and virtual environment).
# Also define the path to the handler file so it can be copied into the zip
# root for Lambda.
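# Sketch: with "slim_handler": true in the stage settings, two archives are built
# (a project tarball plus a minimal handler zip); otherwise a single zip carries
# both, excluding boto3/botocore and friends that Lambda already provides.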
current_file = os.path.dirname(os.path.abspath(
inspect.getfile(inspect.currentframe())))
handler_file = os.path.join(current_file, 'handler.py')
# Create the zip file(s)
if self.stage_config.get('slim_handler', False):
# Create two zips. One with the application and the other with just the handler.
# https://github.com/Miserlou/Zappa/issues/510
self.zip_path = self.zappa.create_lambda_zip(
prefix=self.lambda_name,
use_precompiled_packages=self.stage_config.get('use_precompiled_packages', True),
exclude=self.stage_config.get('exclude', []),
exclude_glob=self.stage_config.get('exclude_glob', []),
disable_progress=self.disable_progress,
archive_format='tarball'
)
# Make sure the normal venv is not included in the handler's zip
exclude = self.stage_config.get('exclude', [])
cur_venv = self.zappa.get_current_venv()
exclude.append(cur_venv.split('/')[-1])
self.handler_path = self.zappa.create_lambda_zip(
prefix='handler_{0!s}'.format(self.lambda_name),
venv=self.zappa.create_handler_venv(),
handler_file=handler_file,
slim_handler=True,
exclude=exclude,
exclude_glob=self.stage_config.get('exclude_glob', []),
output=output,
disable_progress=self.disable_progress
)
else:
# This could be python3.6 optimized.
exclude = self.stage_config.get(
'exclude', [
"boto3",
"dateutil",
"botocore",
"s3transfer",
"concurrent"
])
# Create a single zip that has the handler and application
self.zip_path = self.zappa.create_lambda_zip(
prefix=self.lambda_name,
handler_file=handler_file,
use_precompiled_packages=self.stage_config.get('use_precompiled_packages', True),
exclude=exclude,
exclude_glob=self.stage_config.get('exclude_glob', []),
output=output,
disable_progress=self.disable_progress
)
# Warn if this is too large for Lambda.
file_stats = os.stat(self.zip_path)
if file_stats.st_size > 52428800: # pragma: no cover
print('\n\nWarning: Application zip package is likely to be too large for AWS Lambda. '
'Try setting "slim_handler" to true in your Zappa settings file.\n\n')
# Throw custom settings into the zip that handles requests
if self.stage_config.get('slim_handler', False):
handler_zip = self.handler_path
else:
handler_zip = self.zip_path
with zipfile.ZipFile(handler_zip, 'a') as lambda_zip:
settings_s = "# Generated by Zappa\n"
if self.app_function:
if '.' not in self.app_function: # pragma: no cover
raise ClickException("Your " + click.style("app_function", fg='red', bold=True) + " value is not a modular path." +
" It needs to be in the format `" + click.style("your_module.your_app_object", bold=True) + "`.")
app_module, app_function = self.app_function.rsplit('.', 1)
settings_s = settings_s + "APP_MODULE='{0!s}'\nAPP_FUNCTION='{1!s}'\n".format(app_module, app_function)
if self.exception_handler:
settings_s += "EXCEPTION_HANDLER='{0!s}'\n".format(self.exception_handler)
else:
settings_s += "EXCEPTION_HANDLER=None\n"
if self.debug:
settings_s = settings_s + "DEBUG=True\n"
else:
settings_s = settings_s + "DEBUG=False\n"
settings_s = settings_s + "LOG_LEVEL='{0!s}'\n".format((self.log_level))
if self.binary_support:
settings_s = settings_s + "BINARY_SUPPORT=True\n"
else:
settings_s = settings_s + "BINARY_SUPPORT=False\n"
head_map_dict = {}
head_map_dict.update(dict(self.context_header_mappings))
settings_s = settings_s + "CONTEXT_HEADER_MAPPINGS={0}\n".format(
head_map_dict
)
# If we're on a domain, we don't need to define the /<<env>> in
# the WSGI PATH
if self.domain:
settings_s = settings_s + "DOMAIN='{0!s}'\n".format((self.domain))
else:
settings_s = settings_s + "DOMAIN=None\n"
if self.base_path:
settings_s = settings_s + "BASE_PATH='{0!s}'\n".format((self.base_path))
else:
settings_s = settings_s + "BASE_PATH=None\n"
# Pass through remote config bucket and path
if self.remote_env:
settings_s = settings_s + "REMOTE_ENV='{0!s}'\n".format(
self.remote_env
)
# DEPRECATED. Use remote_env instead.
elif self.remote_env_bucket and self.remote_env_file:
settings_s = settings_s + "REMOTE_ENV='s3://{0!s}/{1!s}'\n".format(
self.remote_env_bucket, self.remote_env_file
)
# Local envs
env_dict = {}
if self.aws_region:
env_dict['AWS_REGION'] = self.aws_region
env_dict.update(dict(self.environment_variables))
# Environment variable keys must be ascii
# https://github.com/Miserlou/Zappa/issues/604
# https://github.com/Miserlou/Zappa/issues/998
try:
env_dict = dict((k.encode('ascii').decode('ascii'), v) for (k, v) in env_dict.items())
except Exception:
raise ValueError("Environment variable keys must be ascii.")
settings_s = settings_s + "ENVIRONMENT_VARIABLES={0}\n".format(
env_dict
)
# We can be environment-aware
settings_s = settings_s + "API_STAGE='{0!s}'\n".format((self.api_stage))
settings_s = settings_s + "PROJECT_NAME='{0!s}'\n".format((self.project_name))
if self.settings_file:
settings_s = settings_s + "SETTINGS_FILE='{0!s}'\n".format((self.settings_file))
else:
settings_s = settings_s + "SETTINGS_FILE=None\n"
if self.django_settings:
settings_s = settings_s + "DJANGO_SETTINGS='{0!s}'\n".format((self.django_settings))
else:
settings_s = settings_s + "DJANGO_SETTINGS=None\n"
# If slim handler, path to project zip
if self.stage_config.get('slim_handler', False):
settings_s += "ARCHIVE_PATH='s3://{0!s}/{1!s}_{2!s}_current_project.tar.gz'\n".format(
self.s3_bucket_name, self.api_stage, self.project_name)
# Since 'include' entries are only used by the slim handler, add that setting here
# from the zappa_settings file, and tell the handler that we are the slim handler.
# https://github.com/Miserlou/Zappa/issues/776
settings_s += "SLIM_HANDLER=True\n"
include = self.stage_config.get('include', [])
if len(include) >= 1:
settings_s += "INCLUDE=" + str(include) + '\n'
# AWS Events function mapping
event_mapping = {}
events = self.stage_config.get('events', [])
for event in events:
arn = event.get('event_source', {}).get('arn')
function = event.get('function')
if arn and function:
event_mapping[arn] = function
settings_s = settings_s + "AWS_EVENT_MAPPING={0!s}\n".format(event_mapping)
# Map Lex bot events
bot_events = self.stage_config.get('bot_events', [])
bot_events_mapping = {}
for bot_event in bot_events:
event_source = bot_event.get('event_source', {})
intent = event_source.get('intent')
invocation_source = event_source.get('invocation_source')
function = bot_event.get('function')
if intent and invocation_source and function:
bot_events_mapping[str(intent) + ':' + str(invocation_source)] = function
settings_s = settings_s + "AWS_BOT_EVENT_MAPPING={0!s}\n".format(bot_events_mapping)
# Map cognito triggers
cognito_trigger_mapping = {}
cognito_config = self.stage_config.get('cognito', {})
triggers = cognito_config.get('triggers', [])
for trigger in triggers:
source = trigger.get('source')
function = trigger.get('function')
if source and function:
cognito_trigger_mapping[source] = function
settings_s = settings_s + "COGNITO_TRIGGER_MAPPING={0!s}\n".format(cognito_trigger_mapping)
# Authorizer config
authorizer_function = self.authorizer.get('function', None)
if authorizer_function:
settings_s += "AUTHORIZER_FUNCTION='{0!s}'\n".format(authorizer_function)
# Copy our Django app into root of our package.
# It doesn't work otherwise.
if self.django_settings:
base = __file__.rsplit(os.sep, 1)[0]
            django_py = os.path.join(base, 'ext', 'django_zappa.py')
lambda_zip.write(django_py, 'django_zappa_app.py')
# async response
async_response_table = self.stage_config.get('async_response_table', '')
settings_s += "ASYNC_RESPONSE_TABLE='{0!s}'\n".format(async_response_table)
# Lambda requires a specific chmod
temp_settings = tempfile.NamedTemporaryFile(delete=False)
os.chmod(temp_settings.name, 0o644)
temp_settings.write(bytes(settings_s, "utf-8"))
temp_settings.close()
lambda_zip.write(temp_settings.name, 'zappa_settings.py')
os.unlink(temp_settings.name)
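    # Illustrative sketch of the output (made-up values; the real content
    # depends entirely on the stage config): the zappa_settings.py written
    # into the archive above is a flat module of assignments, e.g.
    #
    #   APP_MODULE='my_app'
    #   APP_FUNCTION='app'
    #   EXCEPTION_HANDLER=None
    #   DEBUG=False
    #   LOG_LEVEL='DEBUG'
    #   BINARY_SUPPORT=True
    #   CONTEXT_HEADER_MAPPINGS={}
    #   DOMAIN=None
    #   BASE_PATH=None
    #   ENVIRONMENT_VARIABLES={'AWS_REGION': 'eu-central-1'}
    #   API_STAGE='production'
    #   PROJECT_NAME='my-project'
    #   SETTINGS_FILE=None
    #   DJANGO_SETTINGS=None
    #   AWS_EVENT_MAPPING={}
    #   AWS_BOT_EVENT_MAPPING={}
    #   COGNITO_TRIGGER_MAPPING={}
    #   ASYNC_RESPONSE_TABLE=''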
def remove_local_zip(self):
"""
Remove our local zip file.
"""
if self.stage_config.get('delete_local_zip', True):
try:
if os.path.isfile(self.zip_path):
os.remove(self.zip_path)
if self.handler_path and os.path.isfile(self.handler_path):
os.remove(self.handler_path)
except Exception as e: # pragma: no cover
sys.exit(-1)
def remove_uploaded_zip(self):
"""
Remove the local and S3 zip file after uploading and updating.
"""
        # Remove the uploaded zip from S3, because it is now registered.
if self.stage_config.get('delete_s3_zip', True):
self.zappa.remove_from_s3(self.zip_path, self.s3_bucket_name)
if self.stage_config.get('slim_handler', False):
            # Keep the project zip (the slim handler still needs it);
            # only remove the handler zip.
self.zappa.remove_from_s3(self.handler_path, self.s3_bucket_name)
def on_exit(self):
"""
Cleanup after the command finishes.
        Always called, whether the command exits via SystemExit, KeyboardInterrupt, or any other Exception.
"""
if self.zip_path:
# Only try to remove uploaded zip if we're running a command that has loaded credentials
if self.load_credentials:
self.remove_uploaded_zip()
self.remove_local_zip()
def print_logs(self, logs, colorize=True, http=False, non_http=False, force_colorize=None):
"""
Parse, filter and print logs to the console.
"""
for log in logs:
timestamp = log['timestamp']
message = log['message']
if "START RequestId" in message:
continue
if "REPORT RequestId" in message:
continue
if "END RequestId" in message:
continue
if not colorize and not force_colorize:
if http:
if self.is_http_log_entry(message.strip()):
print("[" + str(timestamp) + "] " + message.strip())
elif non_http:
if not self.is_http_log_entry(message.strip()):
print("[" + str(timestamp) + "] " + message.strip())
else:
print("[" + str(timestamp) + "] " + message.strip())
else:
if http:
if self.is_http_log_entry(message.strip()):
click.echo(click.style("[", fg='cyan') + click.style(str(timestamp), bold=True) + click.style("]", fg='cyan') + self.colorize_log_entry(message.strip()), color=force_colorize)
elif non_http:
if not self.is_http_log_entry(message.strip()):
click.echo(click.style("[", fg='cyan') + click.style(str(timestamp), bold=True) + click.style("]", fg='cyan') + self.colorize_log_entry(message.strip()), color=force_colorize)
else:
click.echo(click.style("[", fg='cyan') + click.style(str(timestamp), bold=True) + click.style("]", fg='cyan') + self.colorize_log_entry(message.strip()), color=force_colorize)
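    # Minimal usage sketch (assumption: `logs` has the shape returned by
    # CloudWatch log fetching -- dicts with 'timestamp' and 'message' keys;
    # the values below are made up):
    #
    #   logs = [
    #       {'timestamp': 1580000000000, 'message': 'START RequestId: abc'},  # filtered out
    #       {'timestamp': 1580000000001, 'message': '[INFO] hello world'},    # printed
    #   ]
    #   self.print_logs(logs, colorize=False)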
def is_http_log_entry(self, string):
"""
Determines if a log entry is an HTTP-formatted log string or not.
"""
# Debug event filter
if 'Zappa Event' in string:
return False
# IP address filter
for token in string.replace('\t', ' ').split(' '):
try:
if (token.count('.') == 3 and token.replace('.', '').isnumeric()):
return True
except Exception: # pragma: no cover
pass
return False
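    # Behavior sketch for the IPv4 heuristic above (made-up inputs):
    #
    #   self.is_http_log_entry('1.2.3.4 - - "GET / HTTP/1.1" 200')  # True: IPv4 token
    #   self.is_http_log_entry('Zappa Event: {}')                   # False: debug event
    #   self.is_http_log_entry('[INFO] plain application log')      # False: no IPv4 token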
def get_project_name(self):
return slugify.slugify(os.getcwd().split(os.sep)[-1])[:15]
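    # Example: a cwd of '/home/me/My Cool Project' slugifies to
    # 'my-cool-project' and is truncated to at most 15 characters.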
def colorize_log_entry(self, string):
"""
Apply various heuristics to return a colorized version of a string.
If these fail, simply return the string in plaintext.
"""
final_string = string
try:
# First, do stuff in square brackets
inside_squares = re.findall(r'\[([^]]*)\]', string)
for token in inside_squares:
if token in ['CRITICAL', 'ERROR', 'WARNING', 'DEBUG', 'INFO', 'NOTSET']:
final_string = final_string.replace('[' + token + ']', click.style("[", fg='cyan') + click.style(token, fg='cyan', bold=True) + click.style("]", fg='cyan'))
else:
final_string = final_string.replace('[' + token + ']', click.style("[", fg='cyan') + click.style(token, bold=True) + click.style("]", fg='cyan'))
# Then do quoted strings
quotes = re.findall(r'"[^"]*"', string)
for token in quotes:
final_string = final_string.replace(token, click.style(token, fg="yellow"))
# And UUIDs
for token in final_string.replace('\t', ' ').split(' '):
try:
if token.count('-') == 4 and token.replace('-', '').isalnum():
final_string = final_string.replace(token, click.style(token, fg="magenta"))
except Exception: # pragma: no cover
pass
# And IP addresses
try:
if token.count('.') == 3 and token.replace('.', '').isnumeric():
final_string = final_string.replace(token, click.style(token, fg="red"))
except Exception: # pragma: no cover
pass
# And status codes
try:
if token in ['200']:
final_string = final_string.replace(token, click.style(token, fg="green"))
if token in ['400', '401', '403', '404', '405', '500']:
final_string = final_string.replace(token, click.style(token, fg="red"))
except Exception: # pragma: no cover
pass
# And Zappa Events
try:
if "Zappa Event:" in final_string:
final_string = final_string.replace("Zappa Event:", click.style("Zappa Event:", bold=True, fg="green"))
except Exception: # pragma: no cover
pass
# And dates
for token in final_string.split('\t'):
try:
is_date = parser.parse(token)
final_string = final_string.replace(token, click.style(token, fg="green"))
except Exception: # pragma: no cover
pass
            final_string = final_string.replace('\t', ' ').replace('  ', ' ')
if final_string[0] != ' ':
final_string = ' ' + final_string
return final_string
except Exception as e: # pragma: no cover
return string
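    # Example (made-up input): '[INFO] "GET /" 200' comes back with the level
    # in cyan, the quoted request in yellow, and the 200 in green; if any
    # heuristic throws, the original string is returned unstyled.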
def execute_prebuild_script(self):
"""
Parse and execute the prebuild_script from the zappa_settings.
"""
(pb_mod_path, pb_func) = self.prebuild_script.rsplit('.', 1)
try: # Prefer prebuild script in working directory
if pb_mod_path.count('.') >= 1: # Prebuild script func is nested in a folder
(mod_folder_path, mod_name) = pb_mod_path.rsplit('.', 1)
mod_folder_path_fragments = mod_folder_path.split('.')
working_dir = os.path.join(os.getcwd(), *mod_folder_path_fragments)
else:
mod_name = pb_mod_path
working_dir = os.getcwd()
working_dir_importer = pkgutil.get_importer(working_dir)
module_ = working_dir_importer.find_module(mod_name).load_module(mod_name)
except (ImportError, AttributeError):
try: # Prebuild func might be in virtualenv
module_ = importlib.import_module(pb_mod_path)
except ImportError: # pragma: no cover
raise ClickException(click.style("Failed ", fg="red") + 'to ' + click.style(
"import prebuild script ", bold=True) + 'module: "{pb_mod_path}"'.format(
pb_mod_path=click.style(pb_mod_path, bold=True)))
if not hasattr(module_, pb_func): # pragma: no cover
raise ClickException(click.style("Failed ", fg="red") + 'to ' + click.style(
"find prebuild script ", bold=True) + 'function: "{pb_func}" '.format(
pb_func=click.style(pb_func, bold=True)) + 'in module "{pb_mod_path}"'.format(
pb_mod_path=pb_mod_path))
prebuild_function = getattr(module_, pb_func)
prebuild_function() # Call the function
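    # Configuration sketch (made-up names): a stage in zappa_settings might set
    #
    #   "prebuild_script": "scripts.build.collect_assets"
    #
    # which imports the module 'scripts.build' (preferring the working
    # directory, then the virtualenv) and calls its collect_assets() function:
    #
    #   # scripts/build.py
    #   def collect_assets():
    #       print("Collecting assets before packaging...")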
def collision_warning(self, item):
"""
Given a string, print a warning if this could
collide with a Zappa core package module.
Use for app functions and events.
"""
namespace_collisions = [
"zappa.", "wsgi.", "middleware.", "handler.", "util.", "letsencrypt.", "cli."
]
for namespace_collision in namespace_collisions:
if item.startswith(namespace_collision):
click.echo(click.style("Warning!", fg="red", bold=True) +
" You may have a namespace collision between " +
click.style(item, bold=True) +
" and " +
click.style(namespace_collision, bold=True) +
"! You may want to rename that file.")
def deploy_api_gateway(self, api_id):
cache_cluster_enabled = self.stage_config.get('cache_cluster_enabled', False)
cache_cluster_size = str(self.stage_config.get('cache_cluster_size', .5))
endpoint_url = self.zappa.deploy_api_gateway(
api_id=api_id,
stage_name=self.api_stage,
cache_cluster_enabled=cache_cluster_enabled,
cache_cluster_size=cache_cluster_size,
cloudwatch_log_level=self.stage_config.get('cloudwatch_log_level', 'OFF'),
cloudwatch_data_trace=self.stage_config.get('cloudwatch_data_trace', False),
cloudwatch_metrics_enabled=self.stage_config.get('cloudwatch_metrics_enabled', False),
cache_cluster_ttl=self.stage_config.get('cache_cluster_ttl', 300),
cache_cluster_encrypted=self.stage_config.get('cache_cluster_encrypted', False)
)
return endpoint_url
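    # Stage-config sketch (made-up values) covering the keys read above:
    #
    #   {
    #       "cache_cluster_enabled": true,
    #       "cache_cluster_size": 0.5,
    #       "cloudwatch_log_level": "INFO",
    #       "cloudwatch_data_trace": false,
    #       "cloudwatch_metrics_enabled": false,
    #       "cache_cluster_ttl": 300,
    #       "cache_cluster_encrypted": false
    #   }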
def check_venv(self):
""" Ensure we're inside a virtualenv. """
if self.vargs and self.vargs.get("no_venv"):
return
if self.zappa:
venv = self.zappa.get_current_venv()
else:
# Just for `init`, when we don't have settings yet.
venv = Zappa.get_current_venv()
if not venv:
raise ClickException(
click.style("Zappa", bold=True) + " requires an " + click.style("active virtual environment", bold=True, fg="red") + "!\n" +
"Learn more about virtual environments here: " + click.style("http://docs.python-guide.org/en/latest/dev/virtualenvs/", bold=False, fg="cyan"))
def silence(self):
"""
Route all stdout to null.
"""
sys.stdout = open(os.devnull, 'w')
sys.stderr = open(os.devnull, 'w')
def touch_endpoint(self, endpoint_url):
"""
Test the deployed endpoint with a GET request.
"""
# Private APIGW endpoints most likely can't be reached by a deployer
# unless they're connected to the VPC by VPN. Instead of trying
# connect to the service, print a warning and let the user know
# to check it manually.
# See: https://github.com/Miserlou/Zappa/pull/1719#issuecomment-471341565
if 'PRIVATE' in self.stage_config.get('endpoint_configuration', []):
print(
click.style("Warning!", fg="yellow", bold=True) +
" Since you're deploying a private API Gateway endpoint,"
" Zappa cannot determine if your function is returning "
" a correct status code. You should check your API's response"
" manually before considering this deployment complete."
)
return
touch_path = self.stage_config.get('touch_path', '/')
req = requests.get(endpoint_url + touch_path)
        # Sometimes on really large packages, it can take 60-90 secs for the
        # endpoint to become ready, and requests will return a 504 status code
        # until then. So, if we get a 504, retry the request up to 4 times or
        # until we no longer get a 504 error.
if req.status_code == 504:
i = 0
status_code = 504
while status_code == 504 and i <= 4:
req = requests.get(endpoint_url + touch_path)
status_code = req.status_code
i += 1
if req.status_code >= 500:
raise ClickException(click.style("Warning!", fg="red", bold=True) +
" Status check on the deployed lambda failed." +
" A GET request to '" + touch_path + "' yielded a " +
click.style(str(req.status_code), fg="red", bold=True) + " response code.")
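# Configuration sketch (made-up value): a stage can point the post-deploy
# check at a cheap health route instead of the default '/':
#
#   "touch_path": "/healthz"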
####################################################################
# Main
####################################################################
def shamelessly_promote():
"""
Shamelessly promote our little community.
"""
click.echo("Need " + click.style("help", fg='green', bold=True) +
"? Found a " + click.style("bug", fg='green', bold=True) +
"? Let us " + click.style("know", fg='green', bold=True) + "! :D")
click.echo("File bug reports on " + click.style("GitHub", bold=True) + " here: "
+ click.style("https://github.com/Miserlou/Zappa", fg='cyan', bold=True))
click.echo("And join our " + click.style("Slack", bold=True) + " channel here: "
+ click.style("https://zappateam.slack.com", fg='cyan', bold=True))
click.echo("Love!,")
click.echo(" ~ Team " + click.style("Zappa", bold=True) + "!")
def disable_click_colors():
"""
Set a Click context where colors are disabled. Creates a throwaway BaseCommand
to play nicely with the Context constructor.
The intended side-effect here is that click.echo() checks this context and will
suppress colors.
https://github.com/pallets/click/blob/e1aa43a3/click/globals.py#L39
"""
ctx = Context(BaseCommand('AllYourBaseAreBelongToUs'))
ctx.color = False
push_context(ctx)
def handle(): # pragma: no cover
"""
Main program execution handler.
"""
try:
cli = ZappaCLI()
sys.exit(cli.handle())
except SystemExit as e: # pragma: no cover
cli.on_exit()
sys.exit(e.code)
except KeyboardInterrupt: # pragma: no cover
cli.on_exit()
sys.exit(130)
except Exception as e:
cli.on_exit()
click.echo("Oh no! An " + click.style("error occurred", fg='red', bold=True) + "! :(")
click.echo("\n==============\n")
import traceback
traceback.print_exc()
click.echo("\n==============\n")
shamelessly_promote()
sys.exit(-1)
if __name__ == '__main__': # pragma: no cover
    handle()

####################################################################
# End of file: /zappa_dateutil-0.51.0-py3-none-any.whl/zappa/cli.py
# (package: zappa-dateutil, file: cli.py)
####################################################################
from django.core.management.base import BaseCommand
from django.contrib.auth import get_user_model
import random
import string
class Command(BaseCommand):
"""
This command will create a default Django admin superuser.
"""
help = 'Creates a Django admin superuser.'
def add_arguments(self, parser):
parser.add_argument('arguments', nargs='*')
def handle(self, *args, **options):
# Gets the model for the current Django project's user.
# This handles custom user models as well as Django's default.
User = get_user_model()
self.stdout.write(self.style.SUCCESS('Creating a new admin superuser...'))
        # If command args were given, try to create the user with those args
if options['arguments']:
try:
user = User.objects.create_superuser(*options['arguments'])
self.stdout.write(
self.style.SUCCESS(
'Created the admin superuser "{user}" with the given parameters.'.format(
user=user,
)
)
)
except Exception as e:
self.stdout.write('ERROR: Django returned an error when creating the admin superuser:')
self.stdout.write(str(e))
self.stdout.write('')
self.stdout.write('The arguments expected by the command are in this order:')
self.stdout.write(str(User.objects.create_superuser.__code__.co_varnames[1:-1]))
# or create default admin user
else:
pw = ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(10))
User.objects.create_superuser(username='admin', email='[email protected]', password=pw)
self.stdout.write(self.style.SUCCESS('Created user "admin", email: "[email protected]", password: ' + pw))
            self.stdout.write(self.style.SUCCESS('Log in and change this password immediately!'))

####################################################################
# End of file: /zappa_django_utils-0.4.1-py3-none-any.whl/
#   zappa_django_utils/management/commands/create_admin_user.py
# (package: zappa-django-utils, file: create_admin_user.py)
####################################################################
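# Usage sketch for the create_admin_user command above (assumption: the app
# that ships this command is listed in INSTALLED_APPS):
#
#   python manage.py create_admin_user
#       -> creates user "admin" / "[email protected]" with a random password
#   python manage.py create_admin_user admin [email protected] s3cr3t-pw
#       -> positional args are forwarded to User.objects.create_superuser()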