"""
Extended docstrings for functions.py
"""
pi = r"""
`\pi`, roughly equal to 3.141592654, represents the area of the unit
circle, the half-period of trigonometric functions, and many other
things in mathematics.
Mpmath can evaluate `\pi` to arbitrary precision::
>>> from mpmath import *
>>> mp.dps = 50; mp.pretty = True
>>> +pi
3.1415926535897932384626433832795028841971693993751
This shows digits 99991-100000 of `\pi` (the last digit is actually
a 4 when the decimal expansion is truncated, but here the nearest
rounding is used)::
>>> mp.dps = 100000
>>> str(pi)[-10:]
'5549362465'
**Possible issues**
:data:`pi` always rounds to the nearest floating-point
number when used. This means that exact mathematical identities
involving `\pi` will generally not be preserved in floating-point
arithmetic. In particular, multiples of :data:`pi` (except for
the trivial case ``0*pi``) are *not* the exact roots of
:func:`~mpmath.sin`, but differ roughly by the current epsilon::
>>> mp.dps = 15
>>> sin(pi)
1.22464679914735e-16
One solution is to use the :func:`~mpmath.sinpi` function instead::
>>> sinpi(1)
0.0
See the documentation of trigonometric functions for additional
details.
"""
degree = r"""
Represents one degree of angle, `1^{\circ} = \pi/180`, or
about 0.01745329. This constant may be evaluated to arbitrary
precision::
>>> from mpmath import *
>>> mp.dps = 50; mp.pretty = True
>>> +degree
0.017453292519943295769236907684886127134428718885417
The :data:`degree` object is convenient for conversion
to radians::
>>> sin(30 * degree)
0.5
>>> asin(0.5) / degree
30.0
"""
e = r"""
The transcendental number `e` = 2.718281828... is the base of the
natural logarithm (:func:`~mpmath.ln`) and of the exponential function
(:func:`~mpmath.exp`).
Mpmath can evaluate `e` to arbitrary precision::
>>> from mpmath import *
>>> mp.dps = 50; mp.pretty = True
>>> +e
2.7182818284590452353602874713526624977572470937
This shows digits 99991-100000 of `e` (the last digit is actually
a 5 when the decimal expansion is truncated, but here the nearest
rounding is used)::
>>> mp.dps = 100000
>>> str(e)[-10:]
'2100427166'
**Possible issues**
:data:`e` always rounds to the nearest floating-point number
when used, and mathematical identities involving `e` may not
hold in floating-point arithmetic. For example, ``ln(e)``
might not evaluate exactly to 1.
In particular, don't use ``e**x`` to compute the exponential
function. Use ``exp(x)`` instead; this is both faster and more
accurate.
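As an illustrative check (not one of the original doctests), the two
expressions agree to within working precision, though ``exp`` remains
the preferable entry point::
>>> mp.dps = 15
>>> chop(exp(3) - e**3)
0.0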
"""
phi = r"""
Represents the golden ratio `\phi = (1+\sqrt 5)/2`,
approximately equal to 1.6180339887. To high precision,
its value is::
>>> from mpmath import *
>>> mp.dps = 50; mp.pretty = True
>>> +phi
1.6180339887498948482045868343656381177203091798058
Formulas for the golden ratio include the following::
>>> (1+sqrt(5))/2
1.6180339887498948482045868343656381177203091798058
>>> findroot(lambda x: x**2-x-1, 1)
1.6180339887498948482045868343656381177203091798058
>>> limit(lambda n: fib(n+1)/fib(n), inf)
1.6180339887498948482045868343656381177203091798058
"""
euler = r"""
Euler's constant or the Euler-Mascheroni constant `\gamma`
= 0.57721566... is a number of central importance to
number theory and special functions. It is defined as the limit
.. math ::
\gamma = \lim_{n\to\infty} H_n - \log n
where `H_n = 1 + \frac{1}{2} + \ldots + \frac{1}{n}` is a harmonic
number (see :func:`~mpmath.harmonic`).
Evaluation of `\gamma` is supported at arbitrary precision::
>>> from mpmath import *
>>> mp.dps = 50; mp.pretty = True
>>> +euler
0.57721566490153286060651209008240243104215933593992
We can also compute `\gamma` directly from the definition,
although this is less efficient::
>>> limit(lambda n: harmonic(n)-log(n), inf)
0.57721566490153286060651209008240243104215933593992
This shows digits 9991-10000 of `\gamma` (the last digit is actually
a 5 when the decimal expansion is truncated, but here the nearest
rounding is used)::
>>> mp.dps = 10000
>>> str(euler)[-10:]
'4679858166'
Integrals, series, and representations for `\gamma` in terms of
special functions include the following (there are many others)::
>>> mp.dps = 25
>>> -quad(lambda x: exp(-x)*log(x), [0,inf])
0.5772156649015328606065121
>>> quad(lambda x,y: (x-1)/(1-x*y)/log(x*y), [0,1], [0,1])
0.5772156649015328606065121
>>> nsum(lambda k: 1/k-log(1+1/k), [1,inf])
0.5772156649015328606065121
>>> nsum(lambda k: (-1)**k*zeta(k)/k, [2,inf])
0.5772156649015328606065121
>>> -diff(gamma, 1)
0.5772156649015328606065121
>>> limit(lambda x: 1/x-gamma(x), 0)
0.5772156649015328606065121
>>> limit(lambda x: zeta(x)-1/(x-1), 1)
0.5772156649015328606065121
>>> (log(2*pi*nprod(lambda n:
... exp(-2+2/n)*(1+2/n)**n, [1,inf]))-3)/2
0.5772156649015328606065121
For generalizations of the identities `\gamma = -\Gamma'(1)`
and `\gamma = \lim_{x\to1} \zeta(x)-1/(x-1)`, see
:func:`~mpmath.psi` and :func:`~mpmath.stieltjes` respectively.
"""
catalan = r"""
Catalan's constant `K` = 0.91596559... is given by the infinite
series
.. math ::
K = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)^2}.
Mpmath can evaluate it to arbitrary precision::
>>> from mpmath import *
>>> mp.dps = 50; mp.pretty = True
>>> +catalan
0.91596559417721901505460351493238411077414937428167
One can also compute `K` directly from the definition, although
this is significantly less efficient::
>>> nsum(lambda k: (-1)**k/(2*k+1)**2, [0, inf])
0.91596559417721901505460351493238411077414937428167
This shows digits 9991-10000 of `K` (the last digit is actually
a 3 when the decimal expansion is truncated, but here the nearest
rounding is used)::
>>> mp.dps = 10000
>>> str(catalan)[-10:]
'9537871504'
Catalan's constant has numerous integral representations::
>>> mp.dps = 50
>>> quad(lambda x: -log(x)/(1+x**2), [0, 1])
0.91596559417721901505460351493238411077414937428167
>>> quad(lambda x: atan(x)/x, [0, 1])
0.91596559417721901505460351493238411077414937428167
>>> quad(lambda x: ellipk(x**2)/2, [0, 1])
0.91596559417721901505460351493238411077414937428167
>>> quad(lambda x,y: 1/(1+(x*y)**2), [0, 1], [0, 1])
0.91596559417721901505460351493238411077414937428167
As well as series representations::
>>> pi*log(sqrt(3)+2)/8 + 3*nsum(lambda n:
... (fac(n)/(2*n+1))**2/fac(2*n), [0, inf])/8
0.91596559417721901505460351493238411077414937428167
>>> 1-nsum(lambda n: n*zeta(2*n+1)/16**n, [1,inf])
0.91596559417721901505460351493238411077414937428167
"""
khinchin = r"""
Khinchin's constant `K` = 2.68542... is a number that
appears in the theory of continued fractions. Mpmath can evaluate
it to arbitrary precision::
>>> from mpmath import *
>>> mp.dps = 50; mp.pretty = True
>>> +khinchin
2.6854520010653064453097148354817956938203822939945
An integral representation is::
>>> I = quad(lambda x: log((1-x**2)/sincpi(x))/x/(1+x), [0, 1])
>>> 2*exp(1/log(2)*I)
2.6854520010653064453097148354817956938203822939945
The computation of ``khinchin`` is based on an efficient
implementation of the following series::
>>> f = lambda n: (zeta(2*n)-1)/n*sum((-1)**(k+1)/mpf(k)
... for k in range(1,2*int(n)))
>>> exp(nsum(f, [1,inf])/log(2))
2.6854520010653064453097148354817956938203822939945
"""
glaisher = r"""
Glaisher's constant `A`, also known as the Glaisher-Kinkelin
constant, is a number approximately equal to 1.282427129 that
sometimes appears in formulas related to gamma and zeta functions.
It is also related to the Barnes G-function (see :func:`~mpmath.barnesg`).
The constant is defined as `A = \exp(1/12-\zeta'(-1))` where
`\zeta'(s)` denotes the derivative of the Riemann zeta function
(see :func:`~mpmath.zeta`).
Mpmath can evaluate Glaisher's constant to arbitrary precision:
>>> from mpmath import *
>>> mp.dps = 50; mp.pretty = True
>>> +glaisher
1.282427129100622636875342568869791727767688927325
We can verify that the value computed by :data:`glaisher` is
correct using mpmath's facilities for numerical
differentiation and arbitrary evaluation of the zeta function:
>>> exp(mpf(1)/12 - diff(zeta, -1))
1.282427129100622636875342568869791727767688927325
Here is an example of an integral that can be evaluated in
terms of Glaisher's constant:
>>> mp.dps = 15
>>> quad(lambda x: log(gamma(x)), [1, 1.5])
-0.0428537406502909
>>> -0.5 - 7*log(2)/24 + log(pi)/4 + 3*log(glaisher)/2
-0.042853740650291
Mpmath computes Glaisher's constant by applying Euler-Maclaurin
summation to a slowly convergent series. The implementation is
reasonably efficient up to about 10,000 digits. See the source
code for additional details.
References:
http://mathworld.wolfram.com/Glaisher-KinkelinConstant.html
"""
apery = r"""
Represents Apery's constant, which is the irrational number
approximately equal to 1.2020569 given by
.. math ::
\zeta(3) = \sum_{k=1}^\infty\frac{1}{k^3}.
The calculation is based on an efficient hypergeometric
series. To 50 decimal places, the value is given by::
>>> from mpmath import *
>>> mp.dps = 50; mp.pretty = True
>>> +apery
1.2020569031595942853997381615114499907649862923405
Other ways to evaluate Apery's constant using mpmath
include::
>>> zeta(3)
1.2020569031595942853997381615114499907649862923405
>>> -psi(2,1)/2
1.2020569031595942853997381615114499907649862923405
>>> 8*nsum(lambda k: 1/(2*k+1)**3, [0,inf])/7
1.2020569031595942853997381615114499907649862923405
>>> f = lambda k: 2/k**3/(exp(2*pi*k)-1)
>>> 7*pi**3/180 - nsum(f, [1,inf])
1.2020569031595942853997381615114499907649862923405
This shows digits 9991-10000 of Apery's constant::
>>> mp.dps = 10000
>>> str(apery)[-10:]
'3189504235'
"""
mertens = r"""
Represents the Mertens or Meissel-Mertens constant, which is the
prime number analog of Euler's constant:
.. math ::
B_1 = \lim_{N\to\infty}
\left(\sum_{p_k \le N} \frac{1}{p_k} - \log \log N \right)
Here `p_k` denotes the `k`-th prime number. Other names for this
constant include the Hadamard-de la Vallee-Poussin constant or
the prime reciprocal constant.
The following gives the Mertens constant to 50 digits::
>>> from mpmath import *
>>> mp.dps = 50; mp.pretty = True
>>> +mertens
0.2614972128476427837554268386086958590515666482612
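The defining limit converges very slowly. As a rough illustration
(a sketch using a naive trial-division sieve, not one of the original
doctests), the partial sums over primes below 1000 already come close
to the constant::
>>> mp.dps = 15
>>> ps = [p for p in range(2, 1000) if all(p % q for q in range(2, int(p**0.5) + 1))]
>>> abs(fsum(mpf(1)/p for p in ps) - log(log(1000)) - mertens) < 0.05
True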
References:
http://mathworld.wolfram.com/MertensConstant.html
"""
twinprime = r"""
Represents the twin prime constant, which is the factor `C_2`
featuring in the Hardy-Littlewood conjecture for the growth of the
twin prime counting function,
.. math ::
\pi_2(n) \sim 2 C_2 \frac{n}{\log^2 n}.
It is given by the product over primes
.. math ::
C_2 = \prod_{p\ge3} \frac{p(p-2)}{(p-1)^2} \approx 0.66016
Computing `C_2` to 50 digits::
>>> from mpmath import *
>>> mp.dps = 50; mp.pretty = True
>>> +twinprime
0.66016181584686957392781211001455577843262336028473
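Since the factors approach 1 quickly, a partial product over small
primes already comes close (an illustrative sketch with a naive
trial-division sieve, not one of the original doctests)::
>>> mp.dps = 15
>>> ps = [p for p in range(3, 1000) if all(p % q for q in range(2, int(p**0.5) + 1))]
>>> abs(fprod(mpf(p)*(p-2)/(p-1)**2 for p in ps) - twinprime) < 0.01
True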
References:
http://mathworld.wolfram.com/TwinPrimesConstant.html
"""
ln = r"""
Computes the natural logarithm of `x`, `\ln x`.
See :func:`~mpmath.log` for additional documentation."""
sqrt = r"""
``sqrt(x)`` gives the principal square root of `x`, `\sqrt x`.
For positive real numbers, the principal root is simply the
positive square root. For arbitrary complex numbers, the principal
square root is defined to satisfy `\sqrt x = \exp(\log(x)/2)`.
The function thus has a branch cut along the negative half real axis.
For all mpmath numbers ``x``, calling ``sqrt(x)`` is equivalent to
performing ``x**0.5``.
**Examples**
Basic examples and limits::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> sqrt(10)
3.16227766016838
>>> sqrt(100)
10.0
>>> sqrt(-4)
(0.0 + 2.0j)
>>> sqrt(1+1j)
(1.09868411346781 + 0.455089860562227j)
>>> sqrt(inf)
+inf
Square root evaluation is fast at huge precision::
>>> mp.dps = 50000
>>> a = sqrt(3)
>>> str(a)[-10:]
'9329332815'
:func:`mpmath.iv.sqrt` supports interval arguments::
>>> iv.dps = 15; iv.pretty = True
>>> iv.sqrt([16,100])
[4.0, 10.0]
>>> iv.sqrt(2)
[1.4142135623730949234, 1.4142135623730951455]
>>> iv.sqrt(2) ** 2
[1.9999999999999995559, 2.0000000000000004441]
"""
cbrt = r"""
``cbrt(x)`` computes the cube root of `x`, `x^{1/3}`. This
function is faster and more accurate than raising to a floating-point
fraction::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = False
>>> 125**(mpf(1)/3)
mpf('4.9999999999999991')
>>> cbrt(125)
mpf('5.0')
Every nonzero complex number has three cube roots. This function
returns the cube root defined by `\exp(\log(x)/3)` where the
principal branch of the natural logarithm is used. Note that this
does not give a real cube root for negative real numbers::
>>> mp.pretty = True
>>> cbrt(-1)
(0.5 + 0.866025403784439j)
"""
exp = r"""
Computes the exponential function,
.. math ::
\exp(x) = e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!}.
For complex numbers, the exponential function also satisfies
.. math ::
\exp(x+yi) = e^x (\cos y + i \sin y).
**Basic examples**
Some values of the exponential function::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> exp(0)
1.0
>>> exp(1)
2.718281828459045235360287
>>> exp(-1)
0.3678794411714423215955238
>>> exp(inf)
+inf
>>> exp(-inf)
0.0
Arguments can be arbitrarily large::
>>> exp(10000)
8.806818225662921587261496e+4342
>>> exp(-10000)
1.135483865314736098540939e-4343
Evaluation is supported for interval arguments via
:func:`mpmath.iv.exp`::
>>> iv.dps = 25; iv.pretty = True
>>> iv.exp([-inf,0])
[0.0, 1.0]
>>> iv.exp([0,1])
[1.0, 2.71828182845904523536028749558]
The exponential function can be evaluated efficiently to arbitrary
precision::
>>> mp.dps = 10000
>>> exp(pi) #doctest: +ELLIPSIS
23.140692632779269005729...8984304016040616
**Functional properties**
Numerical verification of Euler's identity for the complex
exponential function::
>>> mp.dps = 15
>>> exp(j*pi)+1
(0.0 + 1.22464679914735e-16j)
>>> chop(exp(j*pi)+1)
0.0
This recovers the coefficients (reciprocal factorials) in the
Maclaurin series expansion of exp::
>>> nprint(taylor(exp, 0, 5))
[1.0, 1.0, 0.5, 0.166667, 0.0416667, 0.00833333]
The exponential function is its own derivative and antiderivative::
>>> exp(pi)
23.1406926327793
>>> diff(exp, pi)
23.1406926327793
>>> quad(exp, [-inf, pi])
23.1406926327793
The exponential function can be evaluated using various methods,
including direct summation of the series, limits, and solving
the defining differential equation::
>>> nsum(lambda k: pi**k/fac(k), [0,inf])
23.1406926327793
>>> limit(lambda k: (1+pi/k)**k, inf)
23.1406926327793
>>> odefun(lambda t, x: x, 0, 1)(pi)
23.1406926327793
"""
cosh = r"""
Computes the hyperbolic cosine of `x`,
`\cosh(x) = (e^x + e^{-x})/2`. Values and limits include::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> cosh(0)
1.0
>>> cosh(1)
1.543080634815243778477906
>>> cosh(-inf), cosh(+inf)
(+inf, +inf)
The hyperbolic cosine is an even, convex function with
a global minimum at `x = 0`, having a Maclaurin series
that starts::
>>> nprint(chop(taylor(cosh, 0, 5)))
[1.0, 0.0, 0.5, 0.0, 0.0416667, 0.0]
Generalized to complex numbers, the hyperbolic cosine is
equivalent to a cosine with the argument rotated
in the imaginary direction, or `\cosh x = \cos ix`::
>>> cosh(2+3j)
(-3.724545504915322565473971 + 0.5118225699873846088344638j)
>>> cos(3-2j)
(-3.724545504915322565473971 + 0.5118225699873846088344638j)
"""
sinh = r"""
Computes the hyperbolic sine of `x`,
`\sinh(x) = (e^x - e^{-x})/2`. Values and limits include::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> sinh(0)
0.0
>>> sinh(1)
1.175201193643801456882382
>>> sinh(-inf), sinh(+inf)
(-inf, +inf)
The hyperbolic sine is an odd function, with a Maclaurin
series that starts::
>>> nprint(chop(taylor(sinh, 0, 5)))
[0.0, 1.0, 0.0, 0.166667, 0.0, 0.00833333]
Generalized to complex numbers, the hyperbolic sine is
essentially a sine with a rotation `i` applied to
the argument; more precisely, `\sinh x = -i \sin ix`::
>>> sinh(2+3j)
(-3.590564589985779952012565 + 0.5309210862485198052670401j)
>>> j*sin(3-2j)
(-3.590564589985779952012565 + 0.5309210862485198052670401j)
"""
tanh = r"""
Computes the hyperbolic tangent of `x`,
`\tanh(x) = \sinh(x)/\cosh(x)`. Values and limits include::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> tanh(0)
0.0
>>> tanh(1)
0.7615941559557648881194583
>>> tanh(-inf), tanh(inf)
(-1.0, 1.0)
The hyperbolic tangent is an odd, sigmoidal function, similar
to the inverse tangent and error function. Its Maclaurin
series is::
>>> nprint(chop(taylor(tanh, 0, 5)))
[0.0, 1.0, 0.0, -0.333333, 0.0, 0.133333]
Generalized to complex numbers, the hyperbolic tangent is
essentially a tangent with a rotation `i` applied to
the argument; more precisely, `\tanh x = -i \tan ix`::
>>> tanh(2+3j)
(0.9653858790221331242784803 - 0.009884375038322493720314034j)
>>> j*tan(3-2j)
(0.9653858790221331242784803 - 0.009884375038322493720314034j)
"""
cos = r"""
Computes the cosine of `x`, `\cos(x)`.
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> cos(pi/3)
0.5
>>> cos(100000001)
-0.9802850113244713353133243
>>> cos(2+3j)
(-4.189625690968807230132555 - 9.109227893755336597979197j)
>>> cos(inf)
nan
>>> nprint(chop(taylor(cos, 0, 6)))
[1.0, 0.0, -0.5, 0.0, 0.0416667, 0.0, -0.00138889]
Intervals are supported via :func:`mpmath.iv.cos`::
>>> iv.dps = 25; iv.pretty = True
>>> iv.cos([0,1])
[0.540302305868139717400936602301, 1.0]
>>> iv.cos([0,2])
[-0.41614683654714238699756823214, 1.0]
"""
sin = r"""
Computes the sine of `x`, `\sin(x)`.
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> sin(pi/3)
0.8660254037844386467637232
>>> sin(100000001)
0.1975887055794968911438743
>>> sin(2+3j)
(9.1544991469114295734673 - 4.168906959966564350754813j)
>>> sin(inf)
nan
>>> nprint(chop(taylor(sin, 0, 6)))
[0.0, 1.0, 0.0, -0.166667, 0.0, 0.00833333, 0.0]
Intervals are supported via :func:`mpmath.iv.sin`::
>>> iv.dps = 25; iv.pretty = True
>>> iv.sin([0,1])
[0.0, 0.841470984807896506652502331201]
>>> iv.sin([0,2])
[0.0, 1.0]
"""
tan = r"""
Computes the tangent of `x`, `\tan(x) = \frac{\sin(x)}{\cos(x)}`.
The tangent function is singular at `x = (n+1/2)\pi`, but
``tan(x)`` always returns a finite result since `(n+1/2)\pi`
cannot be represented exactly using floating-point arithmetic.
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> tan(pi/3)
1.732050807568877293527446
>>> tan(100000001)
-0.2015625081449864533091058
>>> tan(2+3j)
(-0.003764025641504248292751221 + 1.003238627353609801446359j)
>>> tan(inf)
nan
>>> nprint(chop(taylor(tan, 0, 6)))
[0.0, 1.0, 0.0, 0.333333, 0.0, 0.133333, 0.0]
Intervals are supported via :func:`mpmath.iv.tan`::
>>> iv.dps = 25; iv.pretty = True
>>> iv.tan([0,1])
[0.0, 1.55740772465490223050697482944]
>>> iv.tan([0,2]) # Interval includes a singularity
[-inf, +inf]
"""
sec = r"""
Computes the secant of `x`, `\mathrm{sec}(x) = \frac{1}{\cos(x)}`.
The secant function is singular at `x = (n+1/2)\pi`, but
``sec(x)`` always returns a finite result since `(n+1/2)\pi`
cannot be represented exactly using floating-point arithmetic.
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> sec(pi/3)
2.0
>>> sec(10000001)
-1.184723164360392819100265
>>> sec(2+3j)
(-0.04167496441114427004834991 + 0.0906111371962375965296612j)
>>> sec(inf)
nan
>>> nprint(chop(taylor(sec, 0, 6)))
[1.0, 0.0, 0.5, 0.0, 0.208333, 0.0, 0.0847222]
Intervals are supported via :func:`mpmath.iv.sec`::
>>> iv.dps = 25; iv.pretty = True
>>> iv.sec([0,1])
[1.0, 1.85081571768092561791175326276]
>>> iv.sec([0,2]) # Interval includes a singularity
[-inf, +inf]
"""
csc = r"""
Computes the cosecant of `x`, `\mathrm{csc}(x) = \frac{1}{\sin(x)}`.
The cosecant function is singular at `x = n \pi`, but with the
exception of the point `x = 0`, ``csc(x)`` returns a finite result
since `n \pi` cannot be represented exactly using floating-point
arithmetic.
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> csc(pi/3)
1.154700538379251529018298
>>> csc(10000001)
-1.864910497503629858938891
>>> csc(2+3j)
(0.09047320975320743980579048 + 0.04120098628857412646300981j)
>>> csc(inf)
nan
Intervals are supported via :func:`mpmath.iv.csc`::
>>> iv.dps = 25; iv.pretty = True
>>> iv.csc([0,1]) # Interval includes a singularity
[1.18839510577812121626159943988, +inf]
>>> iv.csc([0,2])
[1.0, +inf]
"""
cot = r"""
Computes the cotangent of `x`,
`\mathrm{cot}(x) = \frac{1}{\tan(x)} = \frac{\cos(x)}{\sin(x)}`.
The cotangent function is singular at `x = n \pi`, but with the
exception of the point `x = 0`, ``cot(x)`` returns a finite result
since `n \pi` cannot be represented exactly using floating-point
arithmetic.
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> cot(pi/3)
0.5773502691896257645091488
>>> cot(10000001)
1.574131876209625656003562
>>> cot(2+3j)
(-0.003739710376336956660117409 - 0.9967577965693583104609688j)
>>> cot(inf)
nan
Intervals are supported via :func:`mpmath.iv.cot`::
>>> iv.dps = 25; iv.pretty = True
>>> iv.cot([0,1]) # Interval includes a singularity
[0.642092615934330703006419974862, +inf]
>>> iv.cot([1,2])
[-inf, +inf]
"""
acos = r"""
Computes the inverse cosine or arccosine of `x`, `\cos^{-1}(x)`.
Since `-1 \le \cos(x) \le 1` for real `x`, the inverse
cosine is real-valued only for `-1 \le x \le 1`. On this interval,
:func:`~mpmath.acos` is defined to be a monotonically decreasing
function assuming values between `+\pi` and `0`.
Basic values are::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> acos(-1)
3.141592653589793238462643
>>> acos(0)
1.570796326794896619231322
>>> acos(1)
0.0
>>> nprint(chop(taylor(acos, 0, 6)))
[1.5708, -1.0, 0.0, -0.166667, 0.0, -0.075, 0.0]
:func:`~mpmath.acos` is defined so as to be a proper inverse function of
`\cos(\theta)` for `0 \le \theta < \pi`.
We have `\cos(\cos^{-1}(x)) = x` for all `x`, but
`\cos^{-1}(\cos(x)) = x` only for `0 \le \Re[x] < \pi`::
>>> for x in [1, 10, -1, 2+3j, 10+3j]:
... print("%s %s" % (cos(acos(x)), acos(cos(x))))
...
1.0 1.0
(10.0 + 0.0j) 2.566370614359172953850574
-1.0 1.0
(2.0 + 3.0j) (2.0 + 3.0j)
(10.0 + 3.0j) (2.566370614359172953850574 - 3.0j)
The inverse cosine has two branch points: `x = \pm 1`. :func:`~mpmath.acos`
places the branch cuts along the line segments `(-\infty, -1)` and
`(+1, +\infty)`. In general,
.. math ::
\cos^{-1}(x) = \frac{\pi}{2} + i \log\left(ix + \sqrt{1-x^2} \right)
where the principal-branch log and square root are implied.
"""
asin = r"""
Computes the inverse sine or arcsine of `x`, `\sin^{-1}(x)`.
Since `-1 \le \sin(x) \le 1` for real `x`, the inverse
sine is real-valued only for `-1 \le x \le 1`.
On this interval, it is defined to be a monotonically increasing
function assuming values between `-\pi/2` and `\pi/2`.
Basic values are::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> asin(-1)
-1.570796326794896619231322
>>> asin(0)
0.0
>>> asin(1)
1.570796326794896619231322
>>> nprint(chop(taylor(asin, 0, 6)))
[0.0, 1.0, 0.0, 0.166667, 0.0, 0.075, 0.0]
:func:`~mpmath.asin` is defined so as to be a proper inverse function of
`\sin(\theta)` for `-\pi/2 < \theta < \pi/2`.
We have `\sin(\sin^{-1}(x)) = x` for all `x`, but
`\sin^{-1}(\sin(x)) = x` only for `-\pi/2 < \Re[x] < \pi/2`::
>>> for x in [1, 10, -1, 1+3j, -2+3j]:
... print("%s %s" % (chop(sin(asin(x))), asin(sin(x))))
...
1.0 1.0
10.0 -0.5752220392306202846120698
-1.0 -1.0
(1.0 + 3.0j) (1.0 + 3.0j)
(-2.0 + 3.0j) (-1.141592653589793238462643 - 3.0j)
The inverse sine has two branch points: `x = \pm 1`. :func:`~mpmath.asin`
places the branch cuts along the line segments `(-\infty, -1)` and
`(+1, +\infty)`. In general,
.. math ::
\sin^{-1}(x) = -i \log\left(ix + \sqrt{1-x^2} \right)
where the principal-branch log and square root are implied.
"""
atan = r"""
Computes the inverse tangent or arctangent of `x`, `\tan^{-1}(x)`.
This is a real-valued function for all real `x`, with range
`(-\pi/2, \pi/2)`.
Basic values are::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> atan(-inf)
-1.570796326794896619231322
>>> atan(-1)
-0.7853981633974483096156609
>>> atan(0)
0.0
>>> atan(1)
0.7853981633974483096156609
>>> atan(inf)
1.570796326794896619231322
>>> nprint(chop(taylor(atan, 0, 6)))
[0.0, 1.0, 0.0, -0.333333, 0.0, 0.2, 0.0]
The inverse tangent is often used to compute angles. However,
the atan2 function is often better for this as it preserves sign
(see :func:`~mpmath.atan2`).
:func:`~mpmath.atan` is defined so as to be a proper inverse function of
`\tan(\theta)` for `-\pi/2 < \theta < \pi/2`.
We have `\tan(\tan^{-1}(x)) = x` for all `x`, but
`\tan^{-1}(\tan(x)) = x` only for `-\pi/2 < \Re[x] < \pi/2`::
>>> mp.dps = 25
>>> for x in [1, 10, -1, 1+3j, -2+3j]:
... print("%s %s" % (tan(atan(x)), atan(tan(x))))
...
1.0 1.0
10.0 0.5752220392306202846120698
-1.0 -1.0
(1.0 + 3.0j) (1.000000000000000000000001 + 3.0j)
(-2.0 + 3.0j) (1.141592653589793238462644 + 3.0j)
The inverse tangent has two branch points: `x = \pm i`. :func:`~mpmath.atan`
places the branch cuts along the line segments `(-i \infty, -i)` and
`(+i, +i \infty)`. In general,
.. math ::
\tan^{-1}(x) = \frac{i}{2}\left(\log(1-ix)-\log(1+ix)\right)
where the principal-branch log is implied.
"""
acot = r"""Computes the inverse cotangent of `x`,
`\mathrm{cot}^{-1}(x) = \tan^{-1}(1/x)`."""
asec = r"""Computes the inverse secant of `x`,
`\mathrm{sec}^{-1}(x) = \cos^{-1}(1/x)`."""
acsc = r"""Computes the inverse cosecant of `x`,
`\mathrm{csc}^{-1}(x) = \sin^{-1}(1/x)`."""
coth = r"""Computes the hyperbolic cotangent of `x`,
`\mathrm{coth}(x) = \frac{\cosh(x)}{\sinh(x)}`.
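A quick consistency check against the defining quotient (an
illustrative doctest, not part of the original documentation)::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> chop(coth(2) - cosh(2)/sinh(2))
0.0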
"""
sech = r"""Computes the hyperbolic secant of `x`,
`\mathrm{sech}(x) = \frac{1}{\cosh(x)}`.
"""
csch = r"""Computes the hyperbolic cosecant of `x`,
`\mathrm{csch}(x) = \frac{1}{\sinh(x)}`.
"""
acosh = r"""Computes the inverse hyperbolic cosine of `x`,
`\mathrm{cosh}^{-1}(x) = \log(x+\sqrt{x+1}\sqrt{x-1})`.
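A round-trip check (an illustrative doctest, not part of the original
documentation)::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> chop(cosh(acosh(2.5)) - 2.5)
0.0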
"""
asinh = r"""Computes the inverse hyperbolic sine of `x`,
`\mathrm{sinh}^{-1}(x) = \log(x+\sqrt{1+x^2})`.
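A round-trip check (an illustrative doctest, not part of the original
documentation)::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> chop(sinh(asinh(-3)) + 3)
0.0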
"""
atanh = r"""Computes the inverse hyperbolic tangent of `x`,
`\mathrm{tanh}^{-1}(x) = \frac{1}{2}\left(\log(1+x)-\log(1-x)\right)`.
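A round-trip check (an illustrative doctest, not part of the original
documentation)::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> chop(tanh(atanh(0.5)) - 0.5)
0.0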
"""
acoth = r"""Computes the inverse hyperbolic cotangent of `x`,
`\mathrm{coth}^{-1}(x) = \tanh^{-1}(1/x)`."""
asech = r"""Computes the inverse hyperbolic secant of `x`,
`\mathrm{sech}^{-1}(x) = \cosh^{-1}(1/x)`."""
acsch = r"""Computes the inverse hyperbolic cosecant of `x`,
`\mathrm{csch}^{-1}(x) = \sinh^{-1}(1/x)`."""
sinpi = r"""
Computes `\sin(\pi x)`, more accurately than the expression
``sin(pi*x)``::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> sinpi(10**10), sin(pi*(10**10))
(0.0, -2.23936276195592e-6)
>>> sinpi(10**10+0.5), sin(pi*(10**10+0.5))
(1.0, 0.999999999998721)
"""
cospi = r"""
Computes `\cos(\pi x)`, more accurately than the expression
``cos(pi*x)``::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> cospi(10**10), cos(pi*(10**10))
(1.0, 0.999999999997493)
>>> cospi(10**10+0.5), cos(pi*(10**10+0.5))
(0.0, 1.59960492420134e-6)
"""
sinc = r"""
``sinc(x)`` computes the unnormalized sinc function, defined as
.. math ::
\mathrm{sinc}(x) = \begin{cases}
\sin(x)/x, & \mbox{if } x \ne 0 \\
1, & \mbox{if } x = 0.
\end{cases}
See :func:`~mpmath.sincpi` for the normalized sinc function.
Simple values and limits include::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> sinc(0)
1.0
>>> sinc(1)
0.841470984807897
>>> sinc(inf)
0.0
The integral of the sinc function is the sine integral Si::
>>> quad(sinc, [0, 1])
0.946083070367183
>>> si(1)
0.946083070367183
"""
sincpi = r"""
``sincpi(x)`` computes the normalized sinc function, defined as
.. math ::
\mathrm{sinc}_{\pi}(x) = \begin{cases}
\sin(\pi x)/(\pi x), & \mbox{if } x \ne 0 \\
1, & \mbox{if } x = 0.
\end{cases}
Equivalently, we have
`\mathrm{sinc}_{\pi}(x) = \mathrm{sinc}(\pi x)`.
The normalization entails that the function integrates
to unity over the entire real line::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> quadosc(sincpi, [-inf, inf], period=2.0)
1.0
Like :func:`~mpmath.sinpi`, :func:`~mpmath.sincpi` is evaluated accurately
at its roots::
>>> sincpi(10)
0.0
"""
expj = r"""
Convenience function for computing `e^{ix}`::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> expj(0)
(1.0 + 0.0j)
>>> expj(-1)
(0.5403023058681397174009366 - 0.8414709848078965066525023j)
>>> expj(j)
(0.3678794411714423215955238 + 0.0j)
>>> expj(1+j)
(0.1987661103464129406288032 + 0.3095598756531121984439128j)
"""
expjpi = r"""
Convenience function for computing `e^{i \pi x}`.
Evaluation is accurate near zeros (see also :func:`~mpmath.cospi`,
:func:`~mpmath.sinpi`)::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> expjpi(0)
(1.0 + 0.0j)
>>> expjpi(1)
(-1.0 + 0.0j)
>>> expjpi(0.5)
(0.0 + 1.0j)
>>> expjpi(-1)
(-1.0 + 0.0j)
>>> expjpi(j)
(0.04321391826377224977441774 + 0.0j)
>>> expjpi(1+j)
(-0.04321391826377224977441774 + 0.0j)
"""
floor = r"""
Computes the floor of `x`, `\lfloor x \rfloor`, defined as
the largest integer less than or equal to `x`::
>>> from mpmath import *
>>> mp.pretty = False
>>> floor(3.5)
mpf('3.0')
.. note ::
:func:`~mpmath.floor`, :func:`~mpmath.ceil` and :func:`~mpmath.nint` return a
floating-point number, not a Python ``int``. If `\lfloor x \rfloor` is
too large to be represented exactly at the present working precision,
the result will be rounded, not necessarily in the direction
implied by the mathematical definition of the function.
To avoid rounding, use *prec=0*::
>>> mp.dps = 15
>>> print(int(floor(10**30+1)))
1000000000000000019884624838656
>>> print(int(floor(10**30+1, prec=0)))
1000000000000000000000000000001
The floor function is defined for complex numbers and
acts on the real and imaginary parts separately::
>>> floor(3.25+4.75j)
mpc(real='3.0', imag='4.0')
"""
ceil = r"""
Computes the ceiling of `x`, `\lceil x \rceil`, defined as
the smallest integer greater than or equal to `x`::
>>> from mpmath import *
>>> mp.pretty = False
>>> ceil(3.5)
mpf('4.0')
The ceiling function is defined for complex numbers and
acts on the real and imaginary parts separately::
>>> ceil(3.25+4.75j)
mpc(real='4.0', imag='5.0')
See notes about rounding for :func:`~mpmath.floor`.
"""
nint = r"""
Evaluates the nearest integer function, `\mathrm{nint}(x)`.
This gives the nearest integer to `x`; on a tie, it
gives the nearest even integer::
>>> from mpmath import *
>>> mp.pretty = False
>>> nint(3.2)
mpf('3.0')
>>> nint(3.8)
mpf('4.0')
>>> nint(3.5)
mpf('4.0')
>>> nint(4.5)
mpf('4.0')
The nearest integer function is defined for complex numbers and
acts on the real and imaginary parts separately::
>>> nint(3.25+4.75j)
mpc(real='3.0', imag='5.0')
See notes about rounding for :func:`~mpmath.floor`.
"""
frac = r"""
Gives the fractional part of `x`, defined as
`\mathrm{frac}(x) = x - \lfloor x \rfloor` (see :func:`~mpmath.floor`).
In effect, this computes `x` modulo 1, or `x+n` where
`n \in \mathbb{Z}` is such that `x+n \in [0,1)`::
>>> from mpmath import *
>>> mp.pretty = False
>>> frac(1.25)
mpf('0.25')
>>> frac(3)
mpf('0.0')
>>> frac(-1.25)
mpf('0.75')
For a complex number, the fractional part function applies to
the real and imaginary parts separately::
>>> frac(2.25+3.75j)
mpc(real='0.25', imag='0.75')
Plotted, the fractional part function gives a sawtooth
wave. The Fourier series coefficients have a simple
form::
>>> mp.dps = 15
>>> nprint(fourier(lambda x: frac(x)-0.5, [0,1], 4))
([0.0, 0.0, 0.0, 0.0, 0.0], [0.0, -0.31831, -0.159155, -0.106103, -0.0795775])
>>> nprint([-1/(pi*k) for k in range(1,5)])
[-0.31831, -0.159155, -0.106103, -0.0795775]
.. note::
The fractional part is sometimes defined as a symmetric
function, i.e. returning `-\mathrm{frac}(-x)` if `x < 0`.
This convention is used, for instance, by Mathematica's
``FractionalPart``.
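Under that symmetric convention, the value at `-1.25` would be
`-0.25` rather than the `0.75` computed above (an illustrative
comparison, not one of the original doctests)::
>>> -frac(1.25)
mpf('-0.25')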
"""
sign = r"""
Returns the sign of `x`, defined as `\mathrm{sign}(x) = x / |x|`
(with the special case `\mathrm{sign}(0) = 0`)::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = False
>>> sign(10)
mpf('1.0')
>>> sign(-10)
mpf('-1.0')
>>> sign(0)
mpf('0.0')
Note that the sign function is also defined for complex numbers,
for which it gives the projection onto the unit circle::
>>> mp.dps = 15; mp.pretty = True
>>> sign(1+j)
(0.707106781186547 + 0.707106781186547j)
"""
arg = r"""
Computes the complex argument (phase) of `x`, defined as the
signed angle between the positive real axis and `x` in the
complex plane::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> arg(3)
0.0
>>> arg(3+3j)
0.785398163397448
>>> arg(3j)
1.5707963267949
>>> arg(-3)
3.14159265358979
>>> arg(-3j)
-1.5707963267949
The angle is defined to satisfy `-\pi < \arg(x) \le \pi`, with the
sign convention that a nonnegative imaginary part results in a
nonnegative argument.
The value returned by :func:`~mpmath.arg` is an ``mpf`` instance.
"""
fabs = r"""
Returns the absolute value of `x`, `|x|`. Unlike :func:`abs`,
:func:`~mpmath.fabs` converts non-mpmath numbers (such as ``int``)
into mpmath numbers::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = False
>>> fabs(3)
mpf('3.0')
>>> fabs(-3)
mpf('3.0')
>>> fabs(3+4j)
mpf('5.0')
"""
re = r"""
Returns the real part of `x`, `\Re(x)`. :func:`~mpmath.re`
converts a non-mpmath number to an mpmath number::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = False
>>> re(3)
mpf('3.0')
>>> re(-1+4j)
mpf('-1.0')
"""
im = r"""
Returns the imaginary part of `x`, `\Im(x)`. :func:`~mpmath.im`
converts a non-mpmath number to an mpmath number::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = False
>>> im(3)
mpf('0.0')
>>> im(-1+4j)
mpf('4.0')
"""
conj = r"""
Returns the complex conjugate of `x`, `\overline{x}`. Unlike
``x.conjugate()``, :func:`~mpmath.conj` converts `x` to an mpmath number::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = False
>>> conj(3)
mpf('3.0')
>>> conj(-1+4j)
mpc(real='-1.0', imag='-4.0')
"""
polar = r"""
Returns the polar representation of the complex number `z`
as a pair `(r, \phi)` such that `z = r e^{i \phi}`::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> polar(-2)
(2.0, 3.14159265358979)
>>> polar(3-4j)
(5.0, -0.927295218001612)
"""
rect = r"""
Returns the complex number represented by polar
coordinates `(r, \phi)`::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> chop(rect(2, pi))
-2.0
>>> rect(sqrt(2), -pi/4)
(1.0 - 1.0j)
"""
expm1 = r"""
Computes `e^x - 1`, accurately for small `x`.
Unlike the expression ``exp(x) - 1``, ``expm1(x)`` does not suffer from
potentially catastrophic cancellation::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> exp(1e-10)-1; print(expm1(1e-10))
1.00000008274037e-10
1.00000000005e-10
>>> exp(1e-20)-1; print(expm1(1e-20))
0.0
1.0e-20
>>> 1/(exp(1e-20)-1)
Traceback (most recent call last):
...
ZeroDivisionError
>>> 1/expm1(1e-20)
1.0e+20
Evaluation works for extremely tiny values::
>>> expm1(0)
0.0
>>> expm1('1e-10000000')
1.0e-10000000
"""
log1p = r"""
Computes `\log(1+x)`, accurately for small `x`.
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> log(1+1e-10); print(mp.log1p(1e-10))
1.00000008269037e-10
9.9999999995e-11
>>> mp.log1p(1e-100j)
(5.0e-201 + 1.0e-100j)
>>> mp.log1p(0)
0.0
"""
powm1 = r"""
Computes `x^y - 1`, accurately when `x^y` is very close to 1.
This avoids potentially catastrophic cancellation::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> power(0.99999995, 1e-10) - 1
0.0
>>> powm1(0.99999995, 1e-10)
-5.00000012791934e-18
Powers exactly equal to 1, and only those powers, yield 0 exactly::
>>> powm1(-j, 4)
(0.0 + 0.0j)
>>> powm1(3, 0)
0.0
>>> powm1(fadd(-1, 1e-100, exact=True), 4)
-4.0e-100
Evaluation works for extremely tiny `y`::
>>> powm1(2, '1e-100000')
6.93147180559945e-100001
>>> powm1(j, '1e-1000')
(-1.23370055013617e-2000 + 1.5707963267949e-1000j)
"""
root = r"""
``root(z, n, k=0)`` computes an `n`-th root of `z`, i.e. returns a number
`r` that (up to possible approximation error) satisfies `r^n = z`.
(``nthroot`` is available as an alias for ``root``.)
Every complex number `z \ne 0` has `n` distinct `n`-th roots, which are
equidistant points on a circle with radius `|z|^{1/n}`, centered around the
origin. A specific root may be selected using the optional index
`k`. The roots are indexed counterclockwise, starting with `k = 0` for the root
closest to the positive real half-axis.
The `k = 0` root is the so-called principal `n`-th root, often denoted by
`\sqrt[n]{z}` or `z^{1/n}`, and also given by `\exp(\log(z) / n)`. If `z` is
a positive real number, the principal root is just the unique positive
`n`-th root of `z`. Under some circumstances, non-principal real roots exist:
for positive real `z`, `n` even, there is a negative root given by `k = n/2`;
for negative real `z`, `n` odd, there is a negative root given by `k = (n-1)/2`.
To obtain all roots with a simple expression, use
``[root(z,n,k) for k in range(n)]``.
An important special case, ``root(1, n, k)`` returns the `k`-th `n`-th root of
unity, `\zeta_k = e^{2 \pi i k / n}`. Alternatively, :func:`~mpmath.unitroots`
provides a slightly more convenient way to obtain the roots of unity,
including the option to compute only the primitive roots of unity.
Both `k` and `n` should be integers; `k` outside of ``range(n)`` will be
reduced modulo `n`. If `n` is negative, `z^{-1/n} = 1/z^{1/n}` (or
the equivalent reciprocal for a non-principal root with `k \ne 0`) is computed.
:func:`~mpmath.root` is implemented to use Newton's method for small
`n`. At high precision, this makes `z^{1/n}` not much more
expensive than the regular exponentiation, `z^n`. For very large
`n`, :func:`~mpmath.nthroot` falls back to using the exponential function.
**Examples**
:func:`~mpmath.nthroot`/:func:`~mpmath.root` is faster and more accurate than raising to a
floating-point fraction::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = False
>>> 16807 ** (mpf(1)/5)
mpf('7.0000000000000009')
>>> root(16807, 5)
mpf('7.0')
>>> nthroot(16807, 5) # Alias
mpf('7.0')
A high-precision root::
>>> mp.dps = 50; mp.pretty = True
>>> nthroot(10, 5)
1.584893192461113485202101373391507013269442133825
>>> nthroot(10, 5) ** 5
10.0
Computing principal and non-principal square and cube roots::
>>> mp.dps = 15
>>> root(10, 2)
3.16227766016838
>>> root(10, 2, 1)
-3.16227766016838
>>> root(-10, 3)
(1.07721734501594 + 1.86579517236206j)
>>> root(-10, 3, 1)
-2.15443469003188
>>> root(-10, 3, 2)
(1.07721734501594 - 1.86579517236206j)
All the 7th roots of a complex number::
>>> for r in [root(3+4j, 7, k) for k in range(7)]:
... print("%s %s" % (r, r**7))
...
(1.24747270589553 + 0.166227124177353j) (3.0 + 4.0j)
(0.647824911301003 + 1.07895435170559j) (3.0 + 4.0j)
(-0.439648254723098 + 1.17920694574172j) (3.0 + 4.0j)
(-1.19605731775069 + 0.391492658196305j) (3.0 + 4.0j)
(-1.05181082538903 - 0.691023585965793j) (3.0 + 4.0j)
(-0.115529328478668 - 1.25318497558335j) (3.0 + 4.0j)
(0.907748109144957 - 0.871672518271819j) (3.0 + 4.0j)
Cube roots of unity::
>>> for k in range(3): print(root(1, 3, k))
...
1.0
(-0.5 + 0.866025403784439j)
(-0.5 - 0.866025403784439j)
Some exact high order roots::
>>> root(75**210, 105)
5625.0
>>> root(1, 128, 96)
(0.0 - 1.0j)
>>> root(4**128, 128, 96)
(0.0 - 4.0j)
"""
unitroots = r"""
``unitroots(n)`` returns `\zeta_0, \zeta_1, \ldots, \zeta_{n-1}`,
all the distinct `n`-th roots of unity, as a list. If the option
*primitive=True* is passed, only the primitive roots are returned.
Every `n`-th root of unity satisfies `(\zeta_k)^n = 1`. There are `n` distinct
roots for each `n` (`\zeta_k` and `\zeta_j` are the same when
`k = j \pmod n`), which form a regular polygon with vertices on the unit
circle. They are ordered counterclockwise with increasing `k`, starting
with `\zeta_0 = 1`.
**Examples**
The roots of unity up to `n = 4`::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> nprint(unitroots(1))
[1.0]
>>> nprint(unitroots(2))
[1.0, -1.0]
>>> nprint(unitroots(3))
[1.0, (-0.5 + 0.866025j), (-0.5 - 0.866025j)]
>>> nprint(unitroots(4))
[1.0, (0.0 + 1.0j), -1.0, (0.0 - 1.0j)]
Roots of unity form a geometric series that sums to 0::
>>> mp.dps = 50
>>> chop(fsum(unitroots(25)))
0.0
Primitive roots up to `n = 4`::
>>> mp.dps = 15
>>> nprint(unitroots(1, primitive=True))
[1.0]
>>> nprint(unitroots(2, primitive=True))
[-1.0]
>>> nprint(unitroots(3, primitive=True))
[(-0.5 + 0.866025j), (-0.5 - 0.866025j)]
>>> nprint(unitroots(4, primitive=True))
[(0.0 + 1.0j), (0.0 - 1.0j)]
There are only four primitive 12th roots::
>>> nprint(unitroots(12, primitive=True))
[(0.866025 + 0.5j), (-0.866025 + 0.5j), (-0.866025 - 0.5j), (0.866025 - 0.5j)]
The `n`-th roots of unity form a group, the cyclic group of order `n`.
Any primitive root `r` is a generator for this group, meaning that
`r^0, r^1, \ldots, r^{n-1}` gives the whole set of unit roots (in
some permuted order)::
>>> for r in unitroots(6): print(r)
...
1.0
(0.5 + 0.866025403784439j)
(-0.5 + 0.866025403784439j)
-1.0
(-0.5 - 0.866025403784439j)
(0.5 - 0.866025403784439j)
>>> r = unitroots(6, primitive=True)[1]
>>> for k in range(6): print(chop(r**k))
...
1.0
(0.5 - 0.866025403784439j)
(-0.5 - 0.866025403784439j)
-1.0
(-0.5 + 0.866025403784438j)
(0.5 + 0.866025403784438j)
The number of primitive roots equals the Euler totient function `\phi(n)`::
>>> [len(unitroots(n, primitive=True)) for n in range(1,20)]
[1, 1, 2, 2, 4, 2, 6, 4, 6, 4, 10, 4, 12, 6, 8, 8, 16, 6, 18]
"""
log = r"""
Computes the base-`b` logarithm of `x`, `\log_b(x)`. If `b` is
unspecified, :func:`~mpmath.log` computes the natural (base `e`) logarithm
and is equivalent to :func:`~mpmath.ln`. In general, the base `b` logarithm
is defined in terms of the natural logarithm as
`\log_b(x) = \ln(x)/\ln(b)`.
By convention, we take `\log(0) = -\infty`.
The natural logarithm is real if `x > 0` and complex if `x < 0` or if
`x` is complex. The principal branch of the complex logarithm is
used, meaning that `-\pi < \Im(\ln(x)) \le \pi`.
**Examples**
Some basic values and limits::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> log(1)
0.0
>>> log(2)
0.693147180559945
>>> log(1000,10)
3.0
>>> log(4, 16)
0.5
>>> log(j)
(0.0 + 1.5707963267949j)
>>> log(-1)
(0.0 + 3.14159265358979j)
>>> log(0)
-inf
>>> log(inf)
+inf
The natural logarithm is the antiderivative of `1/x`::
>>> quad(lambda x: 1/x, [1, 5])
1.6094379124341
>>> log(5)
1.6094379124341
>>> diff(log, 10)
0.1
The Taylor series expansion of the natural logarithm around
`x = 1` has coefficients `(-1)^{n+1}/n`::
>>> nprint(taylor(log, 1, 7))
[0.0, 1.0, -0.5, 0.333333, -0.25, 0.2, -0.166667, 0.142857]
:func:`~mpmath.log` supports arbitrary precision evaluation::
>>> mp.dps = 50
>>> log(pi)
1.1447298858494001741434273513530587116472948129153
>>> log(pi, pi**3)
0.33333333333333333333333333333333333333333333333333
>>> mp.dps = 25
>>> log(3+4j)
(1.609437912434100374600759 + 0.9272952180016122324285125j)
"""
log10 = r"""
Computes the base-10 logarithm of `x`, `\log_{10}(x)`. ``log10(x)``
is equivalent to ``log(x, 10)``.
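An illustrative example (not one of the original doctests)::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> log10(1000)
3.0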
"""
fmod = r"""
Converts `x` and `y` to mpmath numbers and returns `x \mod y`.
For mpmath numbers, this is equivalent to ``x % y``.
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> fmod(100, pi)
2.61062773871641
You can use :func:`~mpmath.fmod` to compute fractional parts of numbers::
>>> fmod(10.25, 1)
0.25
"""
radians = r"""
Converts the degree angle `x` to radians::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> radians(60)
1.0471975511966
"""
degrees = r"""
Converts the radian angle `x` to a degree angle::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> degrees(pi/3)
60.0
"""
atan2 = r"""
Computes the two-argument arctangent, `\mathrm{atan2}(y, x)`,
giving the signed angle between the positive `x`-axis and the
point `(x, y)` in the 2D plane. This function is defined for
real `x` and `y` only.
The two-argument arctangent essentially computes
`\mathrm{atan}(y/x)`, but accounts for the signs of both
`x` and `y` to give the angle for the correct quadrant. The
following examples illustrate the difference::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> atan2(1,1), atan(1/1.)
(0.785398163397448, 0.785398163397448)
>>> atan2(1,-1), atan(1/-1.)
(2.35619449019234, -0.785398163397448)
>>> atan2(-1,1), atan(-1/1.)
(-0.785398163397448, -0.785398163397448)
>>> atan2(-1,-1), atan(-1/-1.)
(-2.35619449019234, 0.785398163397448)
The angle convention is the same as that used for the complex
argument; see :func:`~mpmath.arg`.
"""
fibonacci = r"""
``fibonacci(n)`` computes the `n`-th Fibonacci number, `F(n)`. The
Fibonacci numbers are defined by the recurrence `F(n) = F(n-1) + F(n-2)`
with the initial values `F(0) = 0`, `F(1) = 1`. :func:`~mpmath.fibonacci`
extends this definition to arbitrary real and complex arguments
using the formula
.. math ::
F(z) = \frac{\phi^z - \cos(\pi z) \phi^{-z}}{\sqrt 5}
where `\phi` is the golden ratio. :func:`~mpmath.fibonacci` also uses this
continuous formula to compute `F(n)` for extremely large `n`, where
calculating the exact integer would be wasteful.
For convenience, :func:`~mpmath.fib` is available as an alias for
:func:`~mpmath.fibonacci`.
**Basic examples**
Some small Fibonacci numbers are::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> for i in range(10):
... print(fibonacci(i))
...
0.0
1.0
1.0
2.0
3.0
5.0
8.0
13.0
21.0
34.0
>>> fibonacci(50)
12586269025.0
The recurrence for `F(n)` extends backwards to negative `n`::
>>> for i in range(10):
... print(fibonacci(-i))
...
0.0
1.0
-1.0
2.0
-3.0
5.0
-8.0
13.0
-21.0
34.0
Large Fibonacci numbers will be computed approximately unless
the precision is set high enough::
>>> fib(200)
2.8057117299251e+41
>>> mp.dps = 45
>>> fib(200)
280571172992510140037611932413038677189525.0
:func:`~mpmath.fibonacci` can compute approximate Fibonacci numbers
of stupendous size::
>>> mp.dps = 15
>>> fibonacci(10**25)
3.49052338550226e+2089876402499787337692720
**Real and complex arguments**
The extended Fibonacci function is an analytic function. The
property `F(z) = F(z-1) + F(z-2)` holds for arbitrary `z`::
>>> mp.dps = 15
>>> fib(pi)
2.1170270579161
>>> fib(pi-1) + fib(pi-2)
2.1170270579161
>>> fib(3+4j)
(-5248.51130728372 - 14195.962288353j)
>>> fib(2+4j) + fib(1+4j)
(-5248.51130728372 - 14195.962288353j)
The Fibonacci function has infinitely many roots on the
negative half-real axis. The first root is at 0, the second is
close to -0.18, and then there are infinitely many roots that
asymptotically approach `-n+1/2`::
>>> findroot(fib, -0.2)
-0.183802359692956
>>> findroot(fib, -2)
-1.57077646820395
>>> findroot(fib, -17)
-16.4999999596115
>>> findroot(fib, -24)
-23.5000000000479
**Mathematical relationships**
For large `n`, `F(n+1)/F(n)` approaches the golden ratio::
>>> mp.dps = 50
>>> fibonacci(101)/fibonacci(100)
1.6180339887498948482045868343656381177203127439638
>>> +phi
1.6180339887498948482045868343656381177203091798058
The sum of reciprocal Fibonacci numbers converges to an irrational
number for which no closed form expression is known::
>>> mp.dps = 15
>>> nsum(lambda n: 1/fib(n), [1, inf])
3.35988566624318
Amazingly, however, the sum of odd-index reciprocal Fibonacci
numbers can be expressed in terms of a Jacobi theta function::
>>> nsum(lambda n: 1/fib(2*n+1), [0, inf])
1.82451515740692
>>> sqrt(5)*jtheta(2,0,(3-sqrt(5))/2)**2/4
1.82451515740692
Some related sums can be done in closed form::
>>> nsum(lambda k: 1/(1+fib(2*k+1)), [0, inf])
1.11803398874989
>>> phi - 0.5
1.11803398874989
>>> f = lambda k:(-1)**(k+1) / sum(fib(n)**2 for n in range(1,int(k+1)))
>>> nsum(f, [1, inf])
0.618033988749895
>>> phi-1
0.618033988749895
**References**
1. http://mathworld.wolfram.com/FibonacciNumber.html
"""
altzeta = r"""
Gives the Dirichlet eta function, `\eta(s)`, also known as the
alternating zeta function. This function is defined in analogy
with the Riemann zeta function as providing the sum of the
alternating series
.. math ::
    \eta(s) = \sum_{k=1}^{\infty} \frac{(-1)^{k-1}}{k^s}
        = 1-\frac{1}{2^s}+\frac{1}{3^s}-\frac{1}{4^s}+\ldots
The eta function, unlike the Riemann zeta function, is an entire
function, having a finite value for all complex `s`. The special case
`\eta(1) = \log(2)` gives the value of the alternating harmonic series.
The alternating zeta function may be expressed using the Riemann zeta function
as `\eta(s) = (1 - 2^{1-s}) \zeta(s)`. It can also be expressed
in terms of the Hurwitz zeta function, for example using
:func:`~mpmath.dirichlet` (see documentation for that function).
**Examples**
Some special values are::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> altzeta(1)
0.693147180559945
>>> altzeta(0)
0.5
>>> altzeta(-1)
0.25
>>> altzeta(-2)
0.0
An example of a sum that can be computed more accurately and
efficiently via :func:`~mpmath.altzeta` than via numerical summation::
>>> sum(-(-1)**n / mpf(n)**2.5 for n in range(1, 100))
0.867204951503984
>>> altzeta(2.5)
0.867199889012184
At positive even integers, the Dirichlet eta function
evaluates to a rational multiple of a power of `\pi`::
>>> altzeta(2)
0.822467033424113
>>> pi**2/12
0.822467033424113
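The relation `\eta(s) = (1 - 2^{1-s}) \zeta(s)` noted above can also
be checked numerically (an illustrative doctest, not one of the
originals)::
>>> chop(altzeta(3) - (1 - 2**(1-3))*zeta(3))
0.0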
Like the Riemann zeta function, `\eta(s)` approaches 1
as `s` approaches positive infinity, although it does
so from below rather than from above::
>>> altzeta(30)
0.999999999068682
>>> altzeta(inf)
1.0
>>> mp.pretty = False
>>> altzeta(1000, rounding='d')
mpf('0.99999999999999989')
>>> altzeta(1000, rounding='u')
mpf('1.0')
**References**
1. http://mathworld.wolfram.com/DirichletEtaFunction.html
2. http://en.wikipedia.org/wiki/Dirichlet_eta_function
"""
factorial = r"""
Computes the factorial, `x!`. For integers `n \ge 0`, we have
`n! = 1 \cdot 2 \cdots (n-1) \cdot n` and more generally the factorial
is defined for real or complex `x` by `x! = \Gamma(x+1)`.
**Examples**
Basic values and limits::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> for k in range(6):
... print("%s %s" % (k, fac(k)))
...
0 1.0
1 1.0
2 2.0
3 6.0
4 24.0
5 120.0
>>> fac(inf)
+inf
>>> fac(0.5), sqrt(pi)/2
(0.886226925452758, 0.886226925452758)
For large positive `x`, `x!` can be approximated by
Stirling's formula::
>>> x = 10**10
>>> fac(x)
2.32579620567308e+95657055186
>>> sqrt(2*pi*x)*(x/e)**x
2.32579597597705e+95657055186
:func:`~mpmath.fac` supports evaluation for astronomically large values::
>>> fac(10**30)
6.22311232304258e+29565705518096748172348871081098
Reciprocal factorials appear in the Taylor series of the
exponential function (among many other contexts)::
>>> nsum(lambda k: 1/fac(k), [0, inf]), exp(1)
(2.71828182845905, 2.71828182845905)
>>> nsum(lambda k: pi**k/fac(k), [0, inf]), exp(pi)
(23.1406926327793, 23.1406926327793)
"""
gamma = r"""
Computes the gamma function, `\Gamma(x)`. The gamma function is a
shifted version of the ordinary factorial, satisfying
`\Gamma(n) = (n-1)!` for integers `n > 0`. More generally, it
is defined by
.. math ::
\Gamma(x) = \int_0^{\infty} t^{x-1} e^{-t}\, dt
for any real or complex `x` with `\Re(x) > 0` and for `\Re(x) < 0`
by analytic continuation.
**Examples**
Basic values and limits::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> for k in range(1, 6):
... print("%s %s" % (k, gamma(k)))
...
1 1.0
2 1.0
3 2.0
4 6.0
5 24.0
>>> gamma(inf)
+inf
>>> gamma(0)
Traceback (most recent call last):
...
ValueError: gamma function pole
The gamma function of a half-integer is a rational multiple of
`\sqrt{\pi}`::
>>> gamma(0.5), sqrt(pi)
(1.77245385090552, 1.77245385090552)
>>> gamma(1.5), sqrt(pi)/2
(0.886226925452758, 0.886226925452758)
We can check the integral definition::
>>> gamma(3.5)
3.32335097044784
>>> quad(lambda t: t**2.5*exp(-t), [0,inf])
3.32335097044784
:func:`~mpmath.gamma` supports arbitrary-precision evaluation and
complex arguments::
>>> mp.dps = 50
>>> gamma(sqrt(3))
0.91510229697308632046045539308226554038315280564184
>>> mp.dps = 25
>>> gamma(2j)
(0.009902440080927490985955066 - 0.07595200133501806872408048j)
Arguments can also be large. Note that the gamma function grows
very quickly::
>>> mp.dps = 15
>>> gamma(10**20)
1.9328495143101e+1956570551809674817225
"""
psi = r"""
Gives the polygamma function of order `m` of `z`, `\psi^{(m)}(z)`.
Special cases are known as the *digamma function* (`\psi^{(0)}(z)`),
the *trigamma function* (`\psi^{(1)}(z)`), etc. The polygamma
functions are defined as the logarithmic derivatives of the gamma
function:
.. math ::
\psi^{(m)}(z) = \left(\frac{d}{dz}\right)^{m+1} \log \Gamma(z)
In particular, `\psi^{(0)}(z) = \Gamma'(z)/\Gamma(z)`. In the
present implementation of :func:`~mpmath.psi`, the order `m` must be a
nonnegative integer, while the argument `z` may be an arbitrary
complex number (except at the polygamma function's poles
`z = 0, -1, -2, \ldots`).
**Examples**
For various rational arguments, the polygamma function reduces to
a combination of standard mathematical constants::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> psi(0, 1), -euler
(-0.5772156649015328606065121, -0.5772156649015328606065121)
>>> psi(1, '1/4'), pi**2+8*catalan
(17.19732915450711073927132, 17.19732915450711073927132)
>>> psi(2, '1/2'), -14*apery
(-16.82879664423431999559633, -16.82879664423431999559633)
The polygamma functions are derivatives of each other::
>>> diff(lambda x: psi(3, x), pi), psi(4, pi)
(-0.1105749312578862734526952, -0.1105749312578862734526952)
>>> quad(lambda x: psi(4, x), [2, 3]), psi(3,3)-psi(3,2)
(-0.375, -0.375)
The digamma function diverges logarithmically as `z \to \infty`,
while higher orders tend to zero::
>>> psi(0,inf), psi(1,inf), psi(2,inf)
(+inf, 0.0, 0.0)
Evaluation for a complex argument::
>>> psi(2, -1-2j)
(0.03902435405364952654838445 + 0.1574325240413029954685366j)
Evaluation is supported for large orders `m` and/or large
arguments `z`::
>>> psi(3, 10**100)
2.0e-300
>>> psi(250, 10**30+10**20*j)
(-1.293142504363642687204865e-7010 + 3.232856260909107391513108e-7018j)
**Application to infinite series**
Any infinite series where the summand is a rational function of
the index `k` can be evaluated in closed form in terms of polygamma
functions of the roots and poles of the summand::
>>> a = sqrt(2)
>>> b = sqrt(3)
>>> nsum(lambda k: 1/((k+a)**2*(k+b)), [0, inf])
0.4049668927517857061917531
>>> (psi(0,a)-psi(0,b)-a*psi(1,a)+b*psi(1,a))/(a-b)**2
0.4049668927517857061917531
This follows from the series representation (`m > 0`)
.. math ::
\psi^{(m)}(z) = (-1)^{m+1} m! \sum_{k=0}^{\infty}
\frac{1}{(z+k)^{m+1}}.
Since the roots of a polynomial may be complex, it is sometimes
necessary to use the complex polygamma function to evaluate
an entirely real-valued sum::
>>> nsum(lambda k: 1/(k**2-2*k+3), [0, inf])
1.694361433907061256154665
>>> nprint(polyroots([1,-2,3]))
[(1.0 - 1.41421j), (1.0 + 1.41421j)]
>>> r1 = 1-sqrt(2)*j
>>> r2 = r1.conjugate()
>>> (psi(0,-r2)-psi(0,-r1))/(r1-r2)
(1.694361433907061256154665 + 0.0j)
"""
digamma = r"""
Shortcut for ``psi(0,z)``.
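For example (an illustrative doctest, not part of the original
documentation), `\psi^{(0)}(1) = -\gamma`::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> chop(digamma(1) + euler)
0.0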
"""
harmonic = r"""
If `n` is an integer, ``harmonic(n)`` gives a floating-point
approximation of the `n`-th harmonic number `H(n)`, defined as
.. math ::
H(n) = 1 + \frac{1}{2} + \frac{1}{3} + \ldots + \frac{1}{n}
The first few harmonic numbers are::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> for n in range(8):
... print("%s %s" % (n, harmonic(n)))
...
0 0.0
1 1.0
2 1.5
3 1.83333333333333
4 2.08333333333333
5 2.28333333333333
6 2.45
7 2.59285714285714
The infinite harmonic series `1 + 1/2 + 1/3 + \ldots` diverges::
>>> harmonic(inf)
+inf
:func:`~mpmath.harmonic` is evaluated using the digamma function rather
than by summing the harmonic series term by term. It can therefore
be computed quickly for arbitrarily large `n`, and even for
nonintegral arguments::
>>> harmonic(10**100)
230.835724964306
>>> harmonic(0.5)
0.613705638880109
>>> harmonic(3+4j)
(2.24757548223494 + 0.850502209186044j)
:func:`~mpmath.harmonic` supports arbitrary precision evaluation::
>>> mp.dps = 50
>>> harmonic(11)
3.0198773448773448773448773448773448773448773448773
>>> harmonic(pi)
1.8727388590273302654363491032336134987519132374152
The harmonic series diverges, but at a glacial pace. It is possible
to calculate the exact number of terms required before the sum
exceeds a given amount, say 100::
>>> mp.dps = 50
>>> v = 10**findroot(lambda x: harmonic(10**x) - 100, 10)
>>> v
15092688622113788323693563264538101449859496.864101
>>> v = int(ceil(v))
>>> print(v)
15092688622113788323693563264538101449859497
>>> harmonic(v-1)
99.999999999999999999999999999999999999999999942747
>>> harmonic(v)
100.000000000000000000000000000000000000000000009
"""
bernoulli = r"""
Computes the `n`-th Bernoulli number, `B_n`, for any integer `n \ge 0`.
The Bernoulli numbers are rational numbers, but this function
returns a floating-point approximation. To obtain an exact
fraction, use :func:`~mpmath.bernfrac` instead.
**Examples**
Numerical values of the first few Bernoulli numbers::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> for n in range(15):
... print("%s %s" % (n, bernoulli(n)))
...
0 1.0
1 -0.5
2 0.166666666666667
3 0.0
4 -0.0333333333333333
5 0.0
6 0.0238095238095238
7 0.0
8 -0.0333333333333333
9 0.0
10 0.0757575757575758
11 0.0
12 -0.253113553113553
13 0.0
14 1.16666666666667
Bernoulli numbers can be approximated with arbitrary precision::
>>> mp.dps = 50
>>> bernoulli(100)
-2.8382249570693706959264156336481764738284680928013e+78
Arbitrarily large `n` are supported::
>>> mp.dps = 15
>>> bernoulli(10**20 + 2)
3.09136296657021e+1876752564973863312327
The Bernoulli numbers are related to the Riemann zeta function
at integer arguments::
>>> -bernoulli(8) * (2*pi)**8 / (2*fac(8))
1.00407735619794
>>> zeta(8)
1.00407735619794
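For comparison with exact values, :func:`~mpmath.bernfrac` returns the
numerator and denominator of `B_n`; for instance `B_{12} = -691/2730`::
>>> bernfrac(12)
(-691, 2730)
>>> bernoulli(12); mpf(-691)/2730
-0.253113553113553
-0.253113553113553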
**Algorithm**
For small `n` (`n < 3000`) :func:`~mpmath.bernoulli` uses a recurrence
formula due to Ramanujan. All results in this range are cached,
so sequential computation of small Bernoulli numbers is
guaranteed to be fast.
For larger `n`, `B_n` is evaluated in terms of the Riemann zeta
function.
"""
stieltjes = r"""
For a nonnegative integer `n`, ``stieltjes(n)`` computes the
`n`-th Stieltjes constant `\gamma_n`, defined as the
`n`-th coefficient in the Laurent series expansion of the
Riemann zeta function around the pole at `s = 1`. That is,
we have:
.. math ::
\zeta(s) = \frac{1}{s-1} + \sum_{n=0}^{\infty}
\frac{(-1)^n}{n!} \gamma_n (s-1)^n
More generally, ``stieltjes(n, a)`` gives the corresponding
coefficient `\gamma_n(a)` for the Hurwitz zeta function
`\zeta(s,a)` (with `\gamma_n = \gamma_n(1)`).
**Examples**
The zeroth Stieltjes constant is just Euler's constant `\gamma`::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> stieltjes(0)
0.577215664901533
Some more values are::
>>> stieltjes(1)
-0.0728158454836767
>>> stieltjes(10)
0.000205332814909065
>>> stieltjes(30)
0.00355772885557316
>>> stieltjes(1000)
-1.57095384420474e+486
>>> stieltjes(2000)
2.680424678918e+1109
>>> stieltjes(1, 2.5)
-0.23747539175716
An alternative way to compute `\gamma_1`::
>>> diff(extradps(15)(lambda x: 1/(x-1) - zeta(x)), 1)
-0.0728158454836767
:func:`~mpmath.stieltjes` supports arbitrary precision evaluation::
>>> mp.dps = 50
>>> stieltjes(2)
-0.0096903631928723184845303860352125293590658061013408
**Algorithm**
:func:`~mpmath.stieltjes` numerically evaluates the integral in
the following representation due to Ainsworth, Howell and
Coffey [1], [2]:
.. math ::
\gamma_n(a) = \frac{\log^n a}{2a} - \frac{\log^{n+1}(a)}{n+1} +
\frac{2}{a} \Re \int_0^{\infty}
\frac{(x/a-i)\log^n(a-ix)}{(1+x^2/a^2)(e^{2\pi x}-1)} dx.
For some reference values with `a = 1`, see e.g. [4].
**References**
1. O. R. Ainsworth & L. W. Howell, "An integral representation of
the generalized Euler-Mascheroni constants", NASA Technical
Paper 2456 (1985),
http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19850014994_1985014994.pdf
2. M. W. Coffey, "The Stieltjes constants, their relation to the
`\eta_j` coefficients, and representation of the Hurwitz
zeta function", arXiv:0706.0343v1 http://arxiv.org/abs/0706.0343
3. http://mathworld.wolfram.com/StieltjesConstants.html
4. http://pi.lacim.uqam.ca/piDATA/stieltjesgamma.txt
"""
gammaprod = r"""
Given iterables `a` and `b`, ``gammaprod(a, b)`` computes the
product / quotient of gamma functions:
.. math ::
\frac{\Gamma(a_0) \Gamma(a_1) \cdots \Gamma(a_p)}
{\Gamma(b_0) \Gamma(b_1) \cdots \Gamma(b_q)}
Unlike direct calls to :func:`~mpmath.gamma`, :func:`~mpmath.gammaprod` considers
the entire product as a limit and evaluates this limit properly if
any of the numerator or denominator arguments are nonpositive
integers such that poles of the gamma function are encountered.
That is, :func:`~mpmath.gammaprod` evaluates
.. math ::
\lim_{\epsilon \to 0}
\frac{\Gamma(a_0+\epsilon) \Gamma(a_1+\epsilon) \cdots
\Gamma(a_p+\epsilon)}
{\Gamma(b_0+\epsilon) \Gamma(b_1+\epsilon) \cdots
\Gamma(b_q+\epsilon)}
In particular:
* If there are equally many poles in the numerator and the
denominator, the limit is a rational number times the remaining,
regular part of the product.
* If there are more poles in the numerator, :func:`~mpmath.gammaprod`
returns ``+inf``.
* If there are more poles in the denominator, :func:`~mpmath.gammaprod`
returns 0.
**Examples**
The reciprocal gamma function `1/\Gamma(x)` evaluated at `x = 0`::
>>> from mpmath import *
>>> mp.dps = 15
>>> gammaprod([], [0])
0.0
A limit::
>>> gammaprod([-4], [-3])
-0.25
>>> limit(lambda x: gamma(x-1)/gamma(x), -3, direction=1)
-0.25
>>> limit(lambda x: gamma(x-1)/gamma(x), -3, direction=-1)
-0.25
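When no poles are encountered, the result is simply the ordinary
quotient of the gamma functions; for instance
`\Gamma(5)/\Gamma(3) = 4!/2! = 12`::
>>> gammaprod([5], [3])
12.0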
"""
beta = r"""
Computes the beta function,
`B(x,y) = \Gamma(x) \Gamma(y) / \Gamma(x+y)`.
The beta function is also commonly defined by the integral
representation
.. math ::
B(x,y) = \int_0^1 t^{x-1} (1-t)^{y-1} \, dt
**Examples**
For integer and half-integer arguments where all three gamma
functions are finite, the beta function becomes either a rational
number or a rational multiple of `\pi`::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> beta(5, 2)
0.0333333333333333
>>> beta(1.5, 2)
0.266666666666667
>>> 16*beta(2.5, 1.5)
3.14159265358979
Where appropriate, :func:`~mpmath.beta` evaluates limits. A pole
of the beta function is taken to result in ``+inf``::
>>> beta(-0.5, 0.5)
0.0
>>> beta(-3, 3)
-0.333333333333333
>>> beta(-2, 3)
+inf
>>> beta(inf, 1)
0.0
>>> beta(inf, 0)
nan
:func:`~mpmath.beta` supports complex numbers and arbitrary precision
evaluation::
>>> beta(1, 2+j)
(0.4 - 0.2j)
>>> mp.dps = 25
>>> beta(j,0.5)
(1.079424249270925780135675 - 1.410032405664160838288752j)
>>> mp.dps = 50
>>> beta(pi, e)
0.037890298781212201348153837138927165984170287886464
Various integrals can be computed by means of the
beta function::
>>> mp.dps = 15
>>> quad(lambda t: t**2.5*(1-t)**2, [0, 1])
0.0230880230880231
>>> beta(3.5, 3)
0.0230880230880231
>>> quad(lambda t: sin(t)**4 * sqrt(cos(t)), [0, pi/2])
0.319504062596158
>>> beta(2.5, 0.75)/2
0.319504062596158
"""
betainc = r"""
``betainc(a, b, x1=0, x2=1, regularized=False)`` gives the generalized
incomplete beta function,
.. math ::
I_{x_1}^{x_2}(a,b) = \int_{x_1}^{x_2} t^{a-1} (1-t)^{b-1} dt.
When `x_1 = 0, x_2 = 1`, this reduces to the ordinary (complete)
beta function `B(a,b)`; see :func:`~mpmath.beta`.
With the keyword argument ``regularized=True``, :func:`~mpmath.betainc`
computes the regularized incomplete beta function
`I_{x_1}^{x_2}(a,b) / B(a,b)`. This is the cumulative distribution of the
beta distribution with parameters `a`, `b`.
.. note ::
Implementations of the incomplete beta function in some other
software use a different argument order. For example, Mathematica uses the
reversed argument order ``Beta[x1,x2,a,b]``. For the equivalent of SciPy's
three-argument incomplete beta integral (implicitly with `x1 = 0`), use
``betainc(a,b,0,x2,regularized=True)``.
**Examples**
Verifying that :func:`~mpmath.betainc` computes the integral in the
definition::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> x,y,a,b = 3, 4, 0, 6
>>> betainc(x, y, a, b)
-4010.4
>>> quad(lambda t: t**(x-1) * (1-t)**(y-1), [a, b])
-4010.4
The arguments may be arbitrary complex numbers::
>>> betainc(0.75, 1-4j, 0, 2+3j)
(0.2241657956955709603655887 + 0.3619619242700451992411724j)
With regularization::
>>> betainc(1, 2, 0, 0.25, regularized=True)
0.4375
>>> betainc(pi, e, 0, 1, regularized=True) # Complete
1.0
The beta integral satisfies some simple argument transformation
symmetries::
>>> mp.dps = 15
>>> betainc(2,3,4,5), -betainc(2,3,5,4), betainc(3,2,1-5,1-4)
(56.0833333333333, 56.0833333333333, 56.0833333333333)
The beta integral can often be evaluated analytically. For integer and
rational arguments, the incomplete beta function typically reduces to a
simple algebraic-logarithmic expression::
>>> mp.dps = 25
>>> identify(chop(betainc(0, 0, 3, 4)))
'-(log((9/8)))'
>>> identify(betainc(2, 3, 4, 5))
'(673/12)'
>>> identify(betainc(1.5, 1, 1, 2))
'((-12+sqrt(1152))/18)'
"""
binomial = r"""
Computes the binomial coefficient
.. math ::
{n \choose k} = \frac{n!}{k!(n-k)!}.
The binomial coefficient gives the number of ways that `k` items
can be chosen from a set of `n` items. More generally, the binomial
coefficient is a well-defined function of arbitrary real or
complex `n` and `k`, via the gamma function.
**Examples**
Generate Pascal's triangle::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> for n in range(5):
... nprint([binomial(n,k) for k in range(n+1)])
...
[1.0]
[1.0, 1.0]
[1.0, 2.0, 1.0]
[1.0, 3.0, 3.0, 1.0]
[1.0, 4.0, 6.0, 4.0, 1.0]
There is 1 way to select 0 items from the empty set, and 0 ways to
select 1 item from the empty set::
>>> binomial(0, 0)
1.0
>>> binomial(0, 1)
0.0
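The symmetry `{n \choose k} = {n \choose n-k}` can be observed
directly::
>>> binomial(10, 3); binomial(10, 7)
120.0
120.0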
:func:`~mpmath.binomial` supports large arguments::
>>> binomial(10**20, 10**20-5)
8.33333333333333e+97
>>> binomial(10**20, 10**10)
2.60784095465201e+104342944813
Nonintegral binomial coefficients find use in series
expansions::
>>> nprint(taylor(lambda x: (1+x)**0.25, 0, 4))
[1.0, 0.25, -0.09375, 0.0546875, -0.0375977]
>>> nprint([binomial(0.25, k) for k in range(5)])
[1.0, 0.25, -0.09375, 0.0546875, -0.0375977]
An integral representation::
>>> n, k = 5, 3
>>> f = lambda t: exp(-j*k*t)*(1+exp(j*t))**n
>>> chop(quad(f, [-pi,pi])/(2*pi))
10.0
>>> binomial(n,k)
10.0
"""
rf = r"""
Computes the rising factorial or Pochhammer symbol,
.. math ::
x^{(n)} = x (x+1) \cdots (x+n-1) = \frac{\Gamma(x+n)}{\Gamma(x)}
where the rightmost expression is valid for nonintegral `n`.
**Examples**
For integral `n`, the rising factorial is a polynomial::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> for n in range(5):
... nprint(taylor(lambda x: rf(x,n), 0, n))
...
[1.0]
[0.0, 1.0]
[0.0, 1.0, 1.0]
[0.0, 2.0, 3.0, 1.0]
[0.0, 6.0, 11.0, 6.0, 1.0]
Evaluation is supported for arbitrary arguments::
>>> rf(2+3j, 5.5)
(-7202.03920483347 - 3777.58810701527j)
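Since `1^{(n)} = n!`, the rising factorial generalizes the ordinary
factorial::
>>> rf(1, 5); fac(5)
120.0
120.0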
"""
ff = r"""
Computes the falling factorial,
.. math ::
(x)_n = x (x-1) \cdots (x-n+1) = \frac{\Gamma(x+1)}{\Gamma(x-n+1)}
where the rightmost expression is valid for nonintegral `n`.
**Examples**
For integral `n`, the falling factorial is a polynomial::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> for n in range(5):
... nprint(taylor(lambda x: ff(x,n), 0, n))
...
[1.0]
[0.0, 1.0]
[0.0, -1.0, 1.0]
[0.0, 2.0, -3.0, 1.0]
[0.0, -6.0, 11.0, -6.0, 1.0]
Evaluation is supported for arbitrary arguments::
>>> ff(2+3j, 5.5)
(-720.41085888203 + 316.101124983878j)
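Since `(n)_n = n!` for nonnegative integers `n`, the falling factorial
likewise generalizes the ordinary factorial::
>>> ff(5, 5); fac(5)
120.0
120.0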
"""
fac2 = r"""
Computes the double factorial `x!!`, defined for integers
`x > 0` by
.. math ::
x!! = \begin{cases}
1 \cdot 3 \cdots (x-2) \cdot x & x \;\mathrm{odd} \\
2 \cdot 4 \cdots (x-2) \cdot x & x \;\mathrm{even}
\end{cases}
and more generally by [1]
.. math ::
x!! = 2^{x/2} \left(\frac{\pi}{2}\right)^{(\cos(\pi x)-1)/4}
\Gamma\left(\frac{x}{2}+1\right).
**Examples**
The integer sequence of double factorials begins::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> nprint([fac2(n) for n in range(10)])
[1.0, 1.0, 2.0, 3.0, 8.0, 15.0, 48.0, 105.0, 384.0, 945.0]
For large `x`, double factorials follow a Stirling-like asymptotic
approximation::
>>> x = mpf(10000)
>>> fac2(x)
5.97272691416282e+17830
>>> sqrt(pi)*x**((x+1)/2)*exp(-x/2)
5.97262736954392e+17830
The recurrence formula `x!! = x (x-2)!!` can be reversed to
define the double factorial of negative odd integers (but
not negative even integers)::
>>> fac2(-1), fac2(-3), fac2(-5), fac2(-7)
(1.0, -1.0, 0.333333333333333, -0.0666666666666667)
>>> fac2(-2)
Traceback (most recent call last):
...
ValueError: gamma function pole
With the exception of the poles at negative even integers,
:func:`~mpmath.fac2` supports evaluation for arbitrary complex arguments.
The recurrence formula is valid generally::
>>> fac2(pi+2j)
(-1.3697207890154e-12 + 3.93665300979176e-12j)
>>> (pi+2j)*fac2(pi-2+2j)
(-1.3697207890154e-12 + 3.93665300979176e-12j)
Double factorials should not be confused with nested factorials,
which are immensely larger::
>>> fac(fac(20))
5.13805976125208e+43675043585825292774
>>> fac2(20)
3715891200.0
Double factorials appear, among other things, in series expansions
of Gaussian functions and the error function. Infinite series
include::
>>> nsum(lambda k: 1/fac2(k), [0, inf])
3.05940740534258
>>> sqrt(e)*(1+sqrt(pi/2)*erf(sqrt(2)/2))
3.05940740534258
>>> nsum(lambda k: 2**k/fac2(2*k-1), [1, inf])
4.06015693855741
>>> e * erf(1) * sqrt(pi)
4.06015693855741
A beautiful Ramanujan sum::
>>> nsum(lambda k: (-1)**k*(fac2(2*k-1)/fac2(2*k))**3, [0,inf])
0.90917279454693
>>> (gamma('9/8')/gamma('5/4')/gamma('7/8'))**2
0.90917279454693
**References**
1. http://functions.wolfram.com/GammaBetaErf/Factorial2/27/01/0002/
2. http://mathworld.wolfram.com/DoubleFactorial.html
"""
hyper = r"""
Evaluates the generalized hypergeometric function
.. math ::
\,_pF_q(a_1,\ldots,a_p; b_1,\ldots,b_q; z) =
\sum_{n=0}^\infty \frac{(a_1)_n (a_2)_n \ldots (a_p)_n}
{(b_1)_n(b_2)_n\ldots(b_q)_n} \frac{z^n}{n!}
where `(x)_n` denotes the rising factorial (see :func:`~mpmath.rf`).
The parameter lists ``a_s`` and ``b_s`` may contain integers,
real numbers, complex numbers, as well as exact fractions given in
the form of tuples `(p, q)`. :func:`~mpmath.hyper` is optimized to handle
integers and fractions more efficiently than arbitrary
floating-point parameters (since rational parameters are by
far the most common).
**Examples**
Verifying that :func:`~mpmath.hyper` gives the sum in the definition, by
comparison with :func:`~mpmath.nsum`::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> a,b,c,d = 2,3,4,5
>>> x = 0.25
>>> hyper([a,b],[c,d],x)
1.078903941164934876086237
>>> fn = lambda n: rf(a,n)*rf(b,n)/rf(c,n)/rf(d,n)*x**n/fac(n)
>>> nsum(fn, [0, inf])
1.078903941164934876086237
The parameters can be any combination of integers, fractions,
floats and complex numbers::
>>> a, b, c, d, e = 1, (-1,2), pi, 3+4j, (2,3)
>>> x = 0.2j
>>> hyper([a,b],[c,d,e],x)
(0.9923571616434024810831887 - 0.005753848733883879742993122j)
>>> b, e = -0.5, mpf(2)/3
>>> fn = lambda n: rf(a,n)*rf(b,n)/rf(c,n)/rf(d,n)/rf(e,n)*x**n/fac(n)
>>> nsum(fn, [0, inf])
(0.9923571616434024810831887 - 0.005753848733883879742993122j)
The `\,_0F_0` and `\,_1F_0` series are just elementary functions::
>>> a, z = sqrt(2), +pi
>>> hyper([],[],z)
23.14069263277926900572909
>>> exp(z)
23.14069263277926900572909
>>> hyper([a],[],z)
(-0.09069132879922920160334114 + 0.3283224323946162083579656j)
>>> (1-z)**(-a)
(-0.09069132879922920160334114 + 0.3283224323946162083579656j)
If any `a_k` coefficient is a nonpositive integer, the series terminates
and the function reduces to a finite polynomial::
>>> hyper([1,1,1,-3],[2,5],1)
0.7904761904761904761904762
>>> identify(_)
'(83/105)'
If any `b_k` is a nonpositive integer, the function is undefined (unless the
series terminates before the division by zero occurs)::
>>> hyper([1,1,1,-3],[-2,5],1)
Traceback (most recent call last):
...
ZeroDivisionError: pole in hypergeometric series
>>> hyper([1,1,1,-1],[-2,5],1)
1.1
Except for polynomial cases, the radius of convergence `R` of the hypergeometric
series is either `R = \infty` (if `p \le q`), `R = 1` (if `p = q+1`), or
`R = 0` (if `p > q+1`).
The analytic continuations of the functions with `p = q+1`, i.e. `\,_2F_1`,
`\,_3F_2`, `\,_4F_3`, etc, are all implemented and therefore these functions
can be evaluated for `|z| \ge 1`. The shortcuts :func:`~mpmath.hyp2f1`, :func:`~mpmath.hyp3f2`
are available to handle the most common cases (see their documentation),
but functions of higher degree are also supported via :func:`~mpmath.hyper`::
>>> hyper([1,2,3,4], [5,6,7], 1) # 4F3 at finite-valued branch point
1.141783505526870731311423
>>> hyper([4,5,6,7], [1,2,3], 1) # 4F3 at pole
+inf
>>> hyper([1,2,3,4,5], [6,7,8,9], 10) # 5F4
(1.543998916527972259717257 - 0.5876309929580408028816365j)
>>> hyper([1,2,3,4,5,6], [7,8,9,10,11], 1j) # 6F5
(0.9996565821853579063502466 + 0.0129721075905630604445669j)
Near `z = 1` with noninteger parameters::
>>> hyper(['1/3',1,'3/2',2], ['1/5','11/6','41/8'], 1)
2.219433352235586121250027
>>> hyper(['1/3',1,'3/2',2], ['1/5','11/6','5/4'], 1)
+inf
>>> eps1 = extradps(6)(lambda: 1 - mpf('1e-6'))()
>>> hyper(['1/3',1,'3/2',2], ['1/5','11/6','5/4'], eps1)
2923978034.412973409330956
Please note that, as currently implemented, evaluation of `\,_pF_{p-1}`
with `p \ge 3` may be slow or inaccurate when `|z-1|` is small,
for some parameter values.
Evaluation may be aborted if convergence appears to be too slow.
The optional ``maxterms`` (limiting the number of series terms) and ``maxprec``
(limiting the internal precision) keyword arguments can be used
to control evaluation::
>>> hyper([1,2,3], [4,5,6], 10000)
Traceback (most recent call last):
...
NoConvergence: Hypergeometric series converges too slowly. Try increasing maxterms.
>>> hyper([1,2,3], [4,5,6], 10000, maxterms=10**6)
7.622806053177969474396918e+4310
Additional options include ``force_series`` (which forces direct use of
a hypergeometric series even if another evaluation method might work better)
and ``asymp_tol`` which controls the target tolerance for using
asymptotic series.
When `p > q+1`, ``hyper`` computes the (iterated) Borel sum of the divergent
series. For `\,_2F_0` the Borel sum has an analytic solution and can be
computed efficiently (see :func:`~mpmath.hyp2f0`). For higher degrees, the function
is evaluated first by attempting to sum the series directly as an asymptotic
series (this only works for tiny `|z|`), and then by evaluating the Borel
regularized sum using numerical integration. Except for
special parameter combinations, this can be extremely slow.
>>> hyper([1,1], [], 0.5) # regularization of 2F0
(1.340965419580146562086448 + 0.8503366631752726568782447j)
>>> hyper([1,1,1,1], [1], 0.5) # regularization of 4F1
(1.108287213689475145830699 + 0.5327107430640678181200491j)
With the following magnitude of argument, the asymptotic series for `\,_3F_1`
gives only a few digits. Using Borel summation, ``hyper`` can produce
a value with full accuracy::
>>> mp.dps = 15
>>> hyper([2,0.5,4], [5.25], '0.08', force_series=True)
Traceback (most recent call last):
...
NoConvergence: Hypergeometric series converges too slowly. Try increasing maxterms.
>>> hyper([2,0.5,4], [5.25], '0.08', asymp_tol=1e-4)
1.0725535790737
>>> hyper([2,0.5,4], [5.25], '0.08')
(1.07269542893559 + 5.54668863216891e-5j)
>>> hyper([2,0.5,4], [5.25], '-0.08', asymp_tol=1e-4)
0.946344925484879
>>> hyper([2,0.5,4], [5.25], '-0.08')
0.946312503737771
>>> mp.dps = 25
>>> hyper([2,0.5,4], [5.25], '-0.08')
0.9463125037377662296700858
Note that with the positive `z` value, the correct result has a small
imaginary part, which falls below the tolerance of the asymptotic series.
By default, a parameter that appears in both ``a_s`` and ``b_s`` will be removed
unless it is a nonpositive integer. This generally speeds up evaluation
by producing a hypergeometric function of lower order.
This optimization can be disabled by passing ``eliminate=False``.
>>> hyper([1,2,3], [4,5,3], 10000)
1.268943190440206905892212e+4321
>>> hyper([1,2,3], [4,5,3], 10000, eliminate=False)
Traceback (most recent call last):
...
NoConvergence: Hypergeometric series converges too slowly. Try increasing maxterms.
>>> hyper([1,2,3], [4,5,3], 10000, eliminate=False, maxterms=10**6)
1.268943190440206905892212e+4321
If a nonpositive integer `-n` appears in both ``a_s`` and ``b_s``, this parameter
cannot be unambiguously removed since it creates a term 0 / 0.
In this case the hypergeometric series is understood to terminate before
the division by zero occurs. This convention is consistent with Mathematica.
An alternative convention of eliminating the parameters can be toggled
with ``eliminate_all=True``::
>>> hyper([2,-1], [-1], 3)
7.0
>>> hyper([2,-1], [-1], 3, eliminate_all=True)
0.25
>>> hyper([2], [], 3)
0.25
"""
hypercomb = r"""
Computes a weighted combination of hypergeometric functions
.. math ::
\sum_{r=1}^N \left[ \prod_{k=1}^{l_r} {w_{r,k}}^{c_{r,k}}
\frac{\prod_{k=1}^{m_r} \Gamma(\alpha_{r,k})}{\prod_{k=1}^{n_r}
\Gamma(\beta_{r,k})}
\,_{p_r}F_{q_r}(a_{r,1},\ldots,a_{r,p}; b_{r,1},
\ldots, b_{r,q}; z_r)\right].
Typically the parameters are linear combinations of a small set of base
parameters; :func:`~mpmath.hypercomb` permits computing a correct value in
the case that some of the `\alpha`, `\beta`, `b` turn out to be
nonpositive integers, or if division by zero occurs for some `w^c`,
assuming that there are opposing singularities that cancel out.
The limit is computed by evaluating the function with the base
parameters perturbed, at a higher working precision.
The first argument should be a function that takes the perturbable
base parameters ``params`` as input and returns `N` tuples
``(w, c, alpha, beta, a, b, z)``, where the coefficients ``w``, ``c``,
gamma factors ``alpha``, ``beta``, and hypergeometric coefficients
``a``, ``b`` each should be lists of numbers, and ``z`` should be a single
number.
**Examples**
The following evaluates
.. math ::
(a-1) \frac{\Gamma(a-3)}{\Gamma(a-4)} \,_1F_1(a,a-1,z) = e^z(a-4)(a+z-1)
with `a=1, z=3`. There is a zero factor, two gamma function poles, and
the 1F1 function is singular; all singularities cancel out to give a finite
value::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> hypercomb(lambda a: [([a-1],[1],[a-3],[a-4],[a],[a-1],3)], [1])
-180.769832308689
>>> -9*exp(3)
-180.769832308689
"""
hyp0f1 = r"""
Gives the hypergeometric function `\,_0F_1`, sometimes known as the
confluent limit function, defined as
.. math ::
\,_0F_1(a,z) = \sum_{k=0}^{\infty} \frac{1}{(a)_k} \frac{z^k}{k!}.
This function satisfies the differential equation `z f''(z) + a f'(z) = f(z)`,
and is related to the Bessel function of the first kind (see :func:`~mpmath.besselj`).
``hyp0f1(a,z)`` is equivalent to ``hyper([],[a],z)``; see documentation for
:func:`~mpmath.hyper` for more information.
**Examples**
Evaluation for arbitrary arguments::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> hyp0f1(2, 0.25)
1.130318207984970054415392
>>> hyp0f1((1,2), 1234567)
6.27287187546220705604627e+964
>>> hyp0f1(3+4j, 1000000j)
(3.905169561300910030267132e+606 + 3.807708544441684513934213e+606j)
Evaluation is supported for arbitrarily large values of `z`,
using asymptotic expansions::
>>> hyp0f1(1, 10**50)
2.131705322874965310390701e+8685889638065036553022565
>>> hyp0f1(1, -10**50)
1.115945364792025420300208e-13
Verifying the differential equation::
>>> a = 2.5
>>> f = lambda z: hyp0f1(a,z)
>>> for z in [0, 10, 3+4j]:
... chop(z*diff(f,z,2) + a*diff(f,z) - f(z))
...
0.0
0.0
0.0
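The Bessel-function connection mentioned above,
`J_{\nu}(z) = \frac{(z/2)^{\nu}}{\Gamma(\nu+1)} \,_0F_1(\nu+1, -z^2/4)`,
can be checked numerically (the parameter values and the tolerance here
are arbitrary choices)::
>>> nu, z = 1.5, 2.25
>>> lhs = besselj(nu, z)
>>> rhs = (z/2)**nu / gamma(nu+1) * hyp0f1(nu+1, -z**2/4)
>>> abs(lhs - rhs) < 1e-20
True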
"""
hyp1f1 = r"""
Gives the confluent hypergeometric function of the first kind,
.. math ::
\,_1F_1(a,b,z) = \sum_{k=0}^{\infty} \frac{(a)_k}{(b)_k} \frac{z^k}{k!},
also known as Kummer's function and sometimes denoted by `M(a,b,z)`. This
function gives one solution to the confluent (Kummer's) differential equation
.. math ::
z f''(z) + (b-z) f'(z) - af(z) = 0.
A second solution is given by the `U` function; see :func:`~mpmath.hyperu`.
Solutions are also given in an alternate form by the Whittaker
functions (:func:`~mpmath.whitm`, :func:`~mpmath.whitw`).
``hyp1f1(a,b,z)`` is equivalent
to ``hyper([a],[b],z)``; see documentation for :func:`~mpmath.hyper` for more
information.
**Examples**
Evaluation for real and complex values of the argument `z`, with
fixed parameters `a = 2, b = -1/3`::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> hyp1f1(2, (-1,3), 3.25)
-2815.956856924817275640248
>>> hyp1f1(2, (-1,3), -3.25)
-1.145036502407444445553107
>>> hyp1f1(2, (-1,3), 1000)
-8.021799872770764149793693e+441
>>> hyp1f1(2, (-1,3), -1000)
0.000003131987633006813594535331
>>> hyp1f1(2, (-1,3), 100+100j)
(-3.189190365227034385898282e+48 - 1.106169926814270418999315e+49j)
Parameters may be complex::
>>> hyp1f1(2+3j, -1+j, 10j)
(261.8977905181045142673351 + 160.8930312845682213562172j)
Arbitrarily large values of `z` are supported::
>>> hyp1f1(3, 4, 10**20)
3.890569218254486878220752e+43429448190325182745
>>> hyp1f1(3, 4, -10**20)
6.0e-60
>>> hyp1f1(3, 4, 10**20*j)
(-1.935753855797342532571597e-20 - 2.291911213325184901239155e-20j)
Verifying the differential equation::
>>> a, b = 1.5, 2
>>> f = lambda z: hyp1f1(a,b,z)
>>> for z in [0, -10, 3, 3+4j]:
... chop(z*diff(f,z,2) + (b-z)*diff(f,z) - a*f(z))
...
0.0
0.0
0.0
0.0
An integral representation::
>>> a, b = 1.5, 3
>>> z = 1.5
>>> hyp1f1(a,b,z)
2.269381460919952778587441
>>> g = lambda t: exp(z*t)*t**(a-1)*(1-t)**(b-a-1)
>>> gammaprod([b],[a,b-a])*quad(g, [0,1])
2.269381460919952778587441
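Kummer's transformation `\,_1F_1(a,b,z) = e^z \,_1F_1(b-a,b,-z)` holds
for the same parameters (an illustrative check against an arbitrary
loose tolerance)::
>>> abs(hyp1f1(a,b,z) - exp(z)*hyp1f1(b-a,b,-z)) < 1e-20
True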
"""
hyp1f2 = r"""
Gives the hypergeometric function `\,_1F_2(a_1;b_1,b_2;z)`.
The call ``hyp1f2(a1,b1,b2,z)`` is equivalent to
``hyper([a1],[b1,b2],z)``.
Evaluation works for complex and arbitrarily large arguments::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> a, b, c = 1.5, (-1,3), 2.25
>>> hyp1f2(a, b, c, 10**20)
-1.159388148811981535941434e+8685889639
>>> hyp1f2(a, b, c, -10**20)
-12.60262607892655945795907
>>> hyp1f2(a, b, c, 10**20*j)
(4.237220401382240876065501e+6141851464 - 2.950930337531768015892987e+6141851464j)
>>> hyp1f2(2+3j, -2j, 0.5j, 10-20j)
(135881.9905586966432662004 - 86681.95885418079535738828j)
"""
hyp2f2 = r"""
Gives the hypergeometric function `\,_2F_2(a_1,a_2;b_1,b_2; z)`.
The call ``hyp2f2(a1,a2,b1,b2,z)`` is equivalent to
``hyper([a1,a2],[b1,b2],z)``.
Evaluation works for complex and arbitrarily large arguments::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> a, b, c, d = 1.5, (-1,3), 2.25, 4
>>> hyp2f2(a, b, c, d, 10**20)
-5.275758229007902299823821e+43429448190325182663
>>> hyp2f2(a, b, c, d, -10**20)
2561445.079983207701073448
>>> hyp2f2(a, b, c, d, 10**20*j)
(2218276.509664121194836667 - 1280722.539991603850462856j)
>>> hyp2f2(2+3j, -2j, 0.5j, 4j, 10-20j)
(80500.68321405666957342788 - 20346.82752982813540993502j)
"""
hyp2f3 = r"""
Gives the hypergeometric function `\,_2F_3(a_1,a_2;b_1,b_2,b_3; z)`.
The call ``hyp2f3(a1,a2,b1,b2,b3,z)`` is equivalent to
``hyper([a1,a2],[b1,b2,b3],z)``.
Evaluation works for arbitrarily large arguments::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> a1,a2,b1,b2,b3 = 1.5, (-1,3), 2.25, 4, (1,5)
>>> hyp2f3(a1,a2,b1,b2,b3,10**20)
-4.169178177065714963568963e+8685889590
>>> hyp2f3(a1,a2,b1,b2,b3,-10**20)
7064472.587757755088178629
>>> hyp2f3(a1,a2,b1,b2,b3,10**20*j)
(-5.163368465314934589818543e+6141851415 + 1.783578125755972803440364e+6141851416j)
>>> hyp2f3(2+3j, -2j, 0.5j, 4j, -1-j, 10-20j)
(-2280.938956687033150740228 + 13620.97336609573659199632j)
>>> hyp2f3(2+3j, -2j, 0.5j, 4j, -1-j, 10000000-20000000j)
(4.849835186175096516193e+3504 - 3.365981529122220091353633e+3504j)
"""
hyp2f1 = r"""
Gives the Gauss hypergeometric function `\,_2F_1` (often simply referred to as
*the* hypergeometric function), defined for `|z| < 1` as
.. math ::
\,_2F_1(a,b,c,z) = \sum_{k=0}^{\infty}
\frac{(a)_k (b)_k}{(c)_k} \frac{z^k}{k!}.
and for `|z| \ge 1` by analytic continuation, with a branch cut on `(1, \infty)`
when necessary.
Special cases of this function include many of the orthogonal polynomials as
well as the incomplete beta function and other functions. Properties of the
Gauss hypergeometric function are documented comprehensively in many references,
for example Abramowitz & Stegun, section 15.
The implementation supports the analytic continuation as well as evaluation
close to the unit circle where `|z| \approx 1`. The syntax ``hyp2f1(a,b,c,z)``
is equivalent to ``hyper([a,b],[c],z)``.
**Examples**
Evaluation with `z` inside, outside and on the unit circle, for
fixed parameters::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> hyp2f1(2, (1,2), 4, 0.75)
1.303703703703703703703704
>>> hyp2f1(2, (1,2), 4, -1.75)
0.7431290566046919177853916
>>> hyp2f1(2, (1,2), 4, 1.75)
(1.418075801749271137026239 - 1.114976146679907015775102j)
>>> hyp2f1(2, (1,2), 4, 1)
1.6
>>> hyp2f1(2, (1,2), 4, -1)
0.8235498012182875315037882
>>> hyp2f1(2, (1,2), 4, j)
(0.9144026291433065674259078 + 0.2050415770437884900574923j)
>>> hyp2f1(2, (1,2), 4, 2+j)
(0.9274013540258103029011549 + 0.7455257875808100868984496j)
>>> hyp2f1(2, (1,2), 4, 0.25j)
(0.9931169055799728251931672 + 0.06154836525312066938147793j)
Evaluation with complex parameter values::
>>> hyp2f1(1+j, 0.75, 10j, 1+5j)
(0.8834833319713479923389638 + 0.7053886880648105068343509j)
Evaluation with `z = 1`::
>>> hyp2f1(-2.5, 3.5, 1.5, 1)
0.0
>>> hyp2f1(-2.5, 3, 4, 1)
0.06926406926406926406926407
>>> hyp2f1(2, 3, 4, 1)
+inf
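The finite values at `z = 1` agree with Gauss's summation theorem,
`\,_2F_1(a,b,c,1) = \frac{\Gamma(c)\Gamma(c-a-b)}{\Gamma(c-a)\Gamma(c-b)}`
for `\Re(c-a-b) > 0` (checked here with an arbitrary loose tolerance)::
>>> a, b, c = -2.5, 3, 4
>>> abs(hyp2f1(a,b,c,1) - gammaprod([c,c-a-b],[c-a,c-b])) < 1e-20
True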
Evaluation for huge arguments::
>>> hyp2f1((-1,3), 1.75, 4, '1e100')
(7.883714220959876246415651e+32 + 1.365499358305579597618785e+33j)
>>> hyp2f1((-1,3), 1.75, 4, '1e1000000')
(7.883714220959876246415651e+333332 + 1.365499358305579597618785e+333333j)
>>> hyp2f1((-1,3), 1.75, 4, '1e1000000j')
(1.365499358305579597618785e+333333 - 7.883714220959876246415651e+333332j)
An integral representation::
>>> a,b,c,z = -0.5, 1, 2.5, 0.25
>>> g = lambda t: t**(b-1) * (1-t)**(c-b-1) * (1-t*z)**(-a)
>>> gammaprod([c],[b,c-b]) * quad(g, [0,1])
0.9480458814362824478852618
>>> hyp2f1(a,b,c,z)
0.9480458814362824478852618
Verifying the hypergeometric differential equation::
>>> f = lambda z: hyp2f1(a,b,c,z)
>>> chop(z*(1-z)*diff(f,z,2) + (c-(a+b+1)*z)*diff(f,z) - a*b*f(z))
0.0
"""
hyp3f2 = r"""
Gives the generalized hypergeometric function `\,_3F_2`, defined for `|z| < 1`
as
.. math ::
\,_3F_2(a_1,a_2,a_3,b_1,b_2,z) = \sum_{k=0}^{\infty}
\frac{(a_1)_k (a_2)_k (a_3)_k}{(b_1)_k (b_2)_k} \frac{z^k}{k!}.
and for `|z| \ge 1` by analytic continuation. The analytic structure of this
function is similar to that of `\,_2F_1`, generally with a singularity at
`z = 1` and a branch cut on `(1, \infty)`.
Evaluation is supported inside, on, and outside
the circle of convergence `|z| = 1`::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> hyp3f2(1,2,3,4,5,0.25)
1.083533123380934241548707
>>> hyp3f2(1,2+2j,3,4,5,-10+10j)
(0.1574651066006004632914361 - 0.03194209021885226400892963j)
>>> hyp3f2(1,2,3,4,5,-10)
0.3071141169208772603266489
>>> hyp3f2(1,2,3,4,5,10)
(-0.4857045320523947050581423 - 0.5988311440454888436888028j)
>>> hyp3f2(0.25,1,1,2,1.5,1)
1.157370995096772047567631
>>> (8-pi-2*ln2)/3
1.157370995096772047567631
>>> hyp3f2(1+j,0.5j,2,1,-2j,-1)
(1.74518490615029486475959 + 0.1454701525056682297614029j)
>>> hyp3f2(1+j,0.5j,2,1,-2j,sqrt(j))
(0.9829816481834277511138055 - 0.4059040020276937085081127j)
>>> hyp3f2(-3,2,1,-5,4,1)
1.41
>>> hyp3f2(-3,2,1,-5,4,2)
2.12
Evaluation very close to the unit circle::
>>> hyp3f2(1,2,3,4,5,'1.0001')
(1.564877796743282766872279 - 3.76821518787438186031973e-11j)
>>> hyp3f2(1,2,3,4,5,'1+0.0001j')
(1.564747153061671573212831 + 0.0001305757570366084557648482j)
>>> hyp3f2(1,2,3,4,5,'0.9999')
1.564616644881686134983664
>>> hyp3f2(1,2,3,4,5,'-0.9999')
0.7823896253461678060196207
.. note ::
Evaluation for `|z-1|` small can currently be inaccurate or slow
for some parameter combinations.
For various parameter combinations, `\,_3F_2` admits representation in terms
of hypergeometric functions of lower degree, or in terms of
simpler functions::
>>> for a, b, z in [(1,2,-1), (2,0.5,1)]:
... hyp2f1(a,b,a+b+0.5,z)**2
... hyp3f2(2*a,a+b,2*b,a+b+0.5,2*a+2*b,z)
...
0.4246104461966439006086308
0.4246104461966439006086308
7.111111111111111111111111
7.111111111111111111111111
>>> z = 2+3j
>>> hyp3f2(0.5,1,1.5,2,2,z)
(0.7621440939243342419729144 + 0.4249117735058037649915723j)
>>> 4*(pi-2*ellipe(z))/(pi*z)
(0.7621440939243342419729144 + 0.4249117735058037649915723j)
"""
hyperu = r"""
Gives the Tricomi confluent hypergeometric function `U`, also known as
the Kummer or confluent hypergeometric function of the second kind. This
function gives a second linearly independent solution to the confluent
hypergeometric differential equation (the first is provided by `\,_1F_1` --
see :func:`~mpmath.hyp1f1`).
**Examples**
Evaluation for arbitrary complex arguments::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> hyperu(2,3,4)
0.0625
>>> hyperu(0.25, 5, 1000)
0.1779949416140579573763523
>>> hyperu(0.25, 5, -1000)
(0.1256256609322773150118907 - 0.1256256609322773150118907j)
The `U` function may be singular at `z = 0`::
>>> hyperu(1.5, 2, 0)
+inf
>>> hyperu(1.5, -2, 0)
0.1719434921288400112603671
Verifying the differential equation::
>>> a, b = 1.5, 2
>>> f = lambda z: hyperu(a,b,z)
>>> for z in [-10, 3, 3+4j]:
... chop(z*diff(f,z,2) + (b-z)*diff(f,z) - a*f(z))
...
0.0
0.0
0.0
An integral representation [1]::
>>> a,b,z = 2, 3.5, 4.25
>>> hyperu(a,b,z)
0.06674960718150520648014567
>>> quad(lambda t: exp(-z*t)*t**(a-1)*(1+t)**(b-a-1),[0,inf]) / gamma(a)
0.06674960718150520648014567
**References**
1. http://people.math.sfu.ca/~cbm/aands/page_504.htm
"""
hyp2f0 = r"""
Gives the hypergeometric function `\,_2F_0`, defined formally by the
series
.. math ::
\,_2F_0(a,b;;z) = \sum_{n=0}^{\infty} (a)_n (b)_n \frac{z^n}{n!}.
This series usually does not converge. For small enough `z`, it can be viewed
as an asymptotic series that may be summed directly with an appropriate
truncation. When this is not the case, :func:`~mpmath.hyp2f0` gives a regularized sum,
or equivalently, it uses a representation in terms of the
hypergeometric U function [1]. The series also converges when either `a` or `b`
is a nonpositive integer, as it then terminates, reducing to a
polynomial of degree `-a` or `-b`.
**Examples**
Evaluation is supported for arbitrary complex arguments::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> hyp2f0((2,3), 1.25, -100)
0.07095851870980052763312791
>>> hyp2f0((2,3), 1.25, 100)
(-0.03254379032170590665041131 + 0.07269254613282301012735797j)
>>> hyp2f0(-0.75, 1-j, 4j)
(-0.3579987031082732264862155 - 3.052951783922142735255881j)
Even with real arguments, the regularized value of 2F0 is often complex-valued,
but the imaginary part decreases exponentially as `z \to 0`. In the following
example, the first call uses complex evaluation while the second has a small
enough `z` to evaluate using the direct series and thus the returned value
is strictly real (this should be taken to indicate that the imaginary
part is less than ``eps``)::
>>> mp.dps = 15
>>> hyp2f0(1.5, 0.5, 0.05)
(1.04166637647907 + 8.34584913683906e-8j)
>>> hyp2f0(1.5, 0.5, 0.0005)
1.00037535207621
The imaginary part can be retrieved by increasing the working precision::
>>> mp.dps = 80
>>> nprint(hyp2f0(1.5, 0.5, 0.009).imag)
1.23828e-46
In the polynomial case (the series terminating), 2F0 can evaluate exactly::
>>> mp.dps = 15
>>> hyp2f0(-6,-6,2)
291793.0
>>> identify(hyp2f0(-2,1,0.25))
'(5/8)'
The coefficients of the polynomials can be recovered using Taylor expansion::
>>> nprint(taylor(lambda x: hyp2f0(-3,0.5,x), 0, 10))
[1.0, -1.5, 2.25, -1.875, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
>>> nprint(taylor(lambda x: hyp2f0(-4,0.5,x), 0, 10))
[1.0, -2.0, 4.5, -7.5, 6.5625, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
**References**
1. http://people.math.sfu.ca/~cbm/aands/page_504.htm
"""
gammainc = r"""
``gammainc(z, a=0, b=inf)`` computes the (generalized) incomplete
gamma function with integration limits `[a, b]`:
.. math ::
\Gamma(z,a,b) = \int_a^b t^{z-1} e^{-t} \, dt
The generalized incomplete gamma function reduces to the
following special cases when one or both endpoints are fixed:
* `\Gamma(z,0,\infty)` is the standard ("complete")
gamma function, `\Gamma(z)` (available directly
as the mpmath function :func:`~mpmath.gamma`)
* `\Gamma(z,a,\infty)` is the "upper" incomplete gamma
function, `\Gamma(z,a)`
* `\Gamma(z,0,b)` is the "lower" incomplete gamma
function, `\gamma(z,b)`.
Of course, we have
`\Gamma(z,0,x) + \Gamma(z,x,\infty) = \Gamma(z)`
for all `z` and `x`.
Note however that some authors reverse the order of the
arguments when defining the lower and upper incomplete
gamma function, so one should be careful to get the correct
definition.
If the keyword argument ``regularized=True`` is given,
:func:`~mpmath.gammainc` computes the "regularized" incomplete gamma
function
.. math ::
P(z,a,b) = \frac{\Gamma(z,a,b)}{\Gamma(z)}.
**Examples**
We can compare with numerical quadrature to verify that
:func:`~mpmath.gammainc` computes the integral in the definition::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> gammainc(2+3j, 4, 10)
(0.00977212668627705160602312 - 0.0770637306312989892451977j)
>>> quad(lambda t: t**(2+3j-1) * exp(-t), [4, 10])
(0.00977212668627705160602312 - 0.0770637306312989892451977j)
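With ``regularized=True``, the complete integral normalizes to 1
(a quick check; the tolerance is an arbitrary loose bound)::
>>> abs(gammainc(2.5, 0, inf, regularized=True) - 1) < 1e-20
True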
Argument symmetries follow directly from the integral definition::
>>> gammainc(3, 4, 5) + gammainc(3, 5, 4)
0.0
>>> gammainc(3,0,2) + gammainc(3,2,4); gammainc(3,0,4)
1.523793388892911312363331
1.523793388892911312363331
>>> findroot(lambda z: gammainc(2,z,3), 1)
3.0
Evaluation for arbitrarily large arguments::
>>> gammainc(10, 100)
4.083660630910611272288592e-26
>>> gammainc(10, 10000000000000000)
5.290402449901174752972486e-4342944819032375
>>> gammainc(3+4j, 1000000+1000000j)
(-1.257913707524362408877881e-434284 + 2.556691003883483531962095e-434284j)
Evaluation of a generalized incomplete gamma function automatically chooses
the representation that gives a more accurate result, depending on which
parameter is larger::
>>> gammainc(10000000, 3) - gammainc(10000000, 2) # Bad
0.0
>>> gammainc(10000000, 2, 3) # Good
1.755146243738946045873491e+4771204
>>> gammainc(2, 0, 100000001) - gammainc(2, 0, 100000000) # Bad
0.0
>>> gammainc(2, 100000000, 100000001) # Good
4.078258353474186729184421e-43429441
The incomplete gamma functions satisfy simple recurrence
relations::
>>> mp.dps = 25
>>> z, a = mpf(3.5), mpf(2)
>>> gammainc(z+1, a); z*gammainc(z,a) + a**z*exp(-a)
10.60130296933533459267329
10.60130296933533459267329
>>> gammainc(z+1,0,a); z*gammainc(z,0,a) - a**z*exp(-a)
1.030425427232114336470932
1.030425427232114336470932
Evaluation at integers and poles::
>>> gammainc(-3, -4, -5)
(-0.2214577048967798566234192 + 0.0j)
>>> gammainc(-3, 0, 5)
+inf
If `z` is an integer, the recurrence reduces the incomplete gamma
function to `P(a) \exp(-a) + Q(b) \exp(-b)` where `P` and
`Q` are polynomials::
>>> gammainc(1, 2); exp(-2)
0.1353352832366126918939995
0.1353352832366126918939995
>>> mp.dps = 50
>>> identify(gammainc(6, 1, 2), ['exp(-1)', 'exp(-2)'])
'(326*exp(-1) + (-872)*exp(-2))'
The incomplete gamma functions reduce to functions such as
the exponential integral Ei and the error function for special
arguments::
>>> mp.dps = 25
>>> gammainc(0, 4); -ei(-4)
0.00377935240984890647887486
0.00377935240984890647887486
>>> gammainc(0.5, 0, 2); sqrt(pi)*erf(sqrt(2))
1.691806732945198336509541
1.691806732945198336509541
"""
erf = r"""
Computes the error function, `\mathrm{erf}(x)`. The error
function is the normalized antiderivative of the Gaussian function
`\exp(-t^2)`. More precisely,
.. math::
\mathrm{erf}(x) = \frac{2}{\sqrt \pi} \int_0^x \exp(-t^2) \,dt
**Basic examples**
Simple values and limits include::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> erf(0)
0.0
>>> erf(1)
0.842700792949715
>>> erf(-1)
-0.842700792949715
>>> erf(inf)
1.0
>>> erf(-inf)
-1.0
For large real `x`, `\mathrm{erf}(x)` approaches 1 very
rapidly::
>>> erf(3)
0.999977909503001
>>> erf(5)
0.999999999998463
The error function is an odd function::
>>> nprint(chop(taylor(erf, 0, 5)))
[0.0, 1.12838, 0.0, -0.376126, 0.0, 0.112838]
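The first nonzero coefficient is `\mathrm{erf}'(0) = 2/\sqrt{\pi}`::
>>> 2/sqrt(pi)
1.12837916709551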
:func:`~mpmath.erf` implements arbitrary-precision evaluation and
supports complex numbers::
>>> mp.dps = 50
>>> erf(0.5)
0.52049987781304653768274665389196452873645157575796
>>> mp.dps = 25
>>> erf(1+j)
(1.316151281697947644880271 + 0.1904534692378346862841089j)
Evaluation is supported for large arguments::
>>> mp.dps = 25
>>> erf('1e1000')
1.0
>>> erf('-1e1000')
-1.0
>>> erf('1e-1000')
1.128379167095512573896159e-1000
>>> erf('1e7j')
(0.0 + 8.593897639029319267398803e+43429448190317j)
>>> erf('1e7+1e7j')
(0.9999999858172446172631323 + 3.728805278735270407053139e-8j)
**Related functions**
See also :func:`~mpmath.erfc`, which is more accurate for large `x`,
and :func:`~mpmath.erfi` which gives the antiderivative of
`\exp(t^2)`.
The Fresnel integrals :func:`~mpmath.fresnels` and :func:`~mpmath.fresnelc`
are also related to the error function.
"""
erfc = r"""
Computes the complementary error function,
`\mathrm{erfc}(x) = 1-\mathrm{erf}(x)`.
This function avoids cancellation that occurs when naively
computing the complementary error function as ``1-erf(x)``::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> 1 - erf(10)
0.0
>>> erfc(10)
2.08848758376254e-45
:func:`~mpmath.erfc` works accurately even for ludicrously large
arguments::
>>> erfc(10**10)
4.3504398860243e-43429448190325182776
Complex arguments are supported::
>>> erfc(500+50j)
(1.19739830969552e-107492 + 1.46072418957528e-107491j)
"""
erfi = r"""
Computes the imaginary error function, `\mathrm{erfi}(x)`.
The imaginary error function is defined in analogy with the
error function, but with a positive sign in the integrand:
.. math ::
\mathrm{erfi}(x) = \frac{2}{\sqrt \pi} \int_0^x \exp(t^2) \,dt
Whereas the error function rapidly converges to 1 as `x` grows,
the imaginary error function rapidly diverges to infinity.
The functions are related as
`\mathrm{erfi}(x) = -i\,\mathrm{erf}(ix)` for all complex
numbers `x`.
**Examples**
Basic values and limits::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> erfi(0)
0.0
>>> erfi(1)
1.65042575879754
>>> erfi(-1)
-1.65042575879754
>>> erfi(inf)
+inf
>>> erfi(-inf)
-inf
Note the symmetry between erf and erfi::
>>> erfi(3j)
(0.0 + 0.999977909503001j)
>>> erf(3)
0.999977909503001
>>> erf(1+2j)
(-0.536643565778565 - 5.04914370344703j)
>>> erfi(2+1j)
(-5.04914370344703 - 0.536643565778565j)
Large arguments are supported::
>>> erfi(1000)
1.71130938718796e+434291
>>> erfi(10**10)
7.3167287567024e+43429448190325182754
>>> erfi(-10**10)
-7.3167287567024e+43429448190325182754
>>> erfi(1000-500j)
(2.49895233563961e+325717 + 2.6846779342253e+325717j)
>>> erfi(100000j)
(0.0 + 1.0j)
>>> erfi(-100000j)
(0.0 - 1.0j)
"""
erfinv = r"""
Computes the inverse error function, satisfying
.. math ::
\mathrm{erf}(\mathrm{erfinv}(x)) =
\mathrm{erfinv}(\mathrm{erf}(x)) = x.
This function is defined only for `-1 \le x \le 1`.
**Examples**
Special values include::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> erfinv(0)
0.0
>>> erfinv(1)
+inf
>>> erfinv(-1)
-inf
The domain is limited to the standard interval::
>>> erfinv(2)
Traceback (most recent call last):
...
ValueError: erfinv(x) is defined only for -1 <= x <= 1
It is simple to check that :func:`~mpmath.erfinv` computes inverse values of
:func:`~mpmath.erf` as promised::
>>> erf(erfinv(0.75))
0.75
>>> erf(erfinv(-0.995))
-0.995
:func:`~mpmath.erfinv` supports arbitrary-precision evaluation::
>>> mp.dps = 50
>>> x = erf(2)
>>> x
0.99532226501895273416206925636725292861089179704006
>>> erfinv(x)
2.0
A definite integral involving the inverse error function::
>>> mp.dps = 15
>>> quad(erfinv, [0, 1])
0.564189583547756
>>> 1/sqrt(pi)
0.564189583547756
The inverse error function can be used to generate random numbers
with a Gaussian distribution (although this is a relatively
inefficient algorithm)::
>>> nprint([erfinv(2*rand()-1) for n in range(6)]) # doctest: +SKIP
[-0.586747, 1.10233, -0.376796, 0.926037, -0.708142, -0.732012]
"""
npdf = r"""
``npdf(x, mu=0, sigma=1)`` evaluates the probability density
function of a normal distribution with mean value `\mu`
and variance `\sigma^2`.
Elementary properties of the probability distribution can
be verified using numerical integration::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> quad(npdf, [-inf, inf])
1.0
>>> quad(lambda x: npdf(x, 3), [3, inf])
0.5
>>> quad(lambda x: npdf(x, 3, 2), [3, inf])
0.5
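The density is given explicitly by
`\frac{1}{\sigma\sqrt{2\pi}}\,e^{-(x-\mu)^2/(2\sigma^2)}`; a direct
check with `\mu = 0`, `\sigma = 1` (the tolerance is an arbitrary
loose bound)::
>>> abs(npdf(1) - exp(-0.5)/sqrt(2*pi)) < 1e-12
True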
See also :func:`~mpmath.ncdf`, which gives the cumulative
distribution.
"""
ncdf = r"""
``ncdf(x, mu=0, sigma=1)`` evaluates the cumulative distribution
function of a normal distribution with mean value `\mu`
and variance `\sigma^2`.
See also :func:`~mpmath.npdf`, which gives the probability density.
Elementary properties include::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> ncdf(pi, mu=pi)
0.5
>>> ncdf(-inf)
0.0
>>> ncdf(+inf)
1.0
The cumulative distribution is the integral of the density
function having identical mu and sigma::
>>> mp.dps = 15
>>> diff(ncdf, 2)
0.053990966513188
>>> npdf(2)
0.053990966513188
>>> diff(lambda x: ncdf(x, 1, 0.5), 0)
0.107981933026376
>>> npdf(0, 1, 0.5)
0.107981933026376
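The cumulative distribution is expressible through the error function
as `\tfrac{1}{2}(1 + \mathrm{erf}(x/\sqrt{2}))` for `\mu = 0`,
`\sigma = 1` (an illustrative check with an arbitrary loose tolerance)::
>>> abs(ncdf(1) - (1 + erf(1/sqrt(2)))/2) < 1e-12
True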
"""
expint = r"""
``expint(n, z)`` gives the generalized exponential integral
or En-function,
.. math ::
\mathrm{E}_n(z) = \int_1^{\infty} \frac{e^{-zt}}{t^n} dt,
where `n` and `z` may both be complex numbers. The case with `n = 1` is
also given by :func:`~mpmath.e1`.
**Examples**
Evaluation at real and complex arguments::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> expint(1, 6.25)
0.0002704758872637179088496194
>>> expint(-3, 2+3j)
(0.00299658467335472929656159 + 0.06100816202125885450319632j)
>>> expint(2+3j, 4-5j)
(0.001803529474663565056945248 - 0.002235061547756185403349091j)
At negative integer values of `n`, `E_n(z)` reduces to a
rational-exponential function::
>>> f = lambda n, z: fac(n)*sum(z**k/fac(k-1) for k in range(1,n+2))/\
... exp(z)/z**(n+2)
>>> n = 3
>>> z = 1/pi
>>> expint(-n,z)
584.2604820613019908668219
>>> f(n,z)
584.2604820613019908668219
>>> n = 5
>>> expint(-n,z)
115366.5762594725451811138
>>> f(n,z)
115366.5762594725451811138
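The En-functions satisfy the recurrence
`n \mathrm{E}_{n+1}(z) = e^{-z} - z \mathrm{E}_n(z)`, which can be
checked numerically (parameter values and tolerance chosen arbitrarily)::
>>> n, z = 3, 2.5
>>> abs(n*expint(n+1,z) - (exp(-z) - z*expint(n,z))) < 1e-20
True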
"""
e1 = r"""
Computes the exponential integral `\mathrm{E}_1(z)`, given by
.. math ::
\mathrm{E}_1(z) = \int_z^{\infty} \frac{e^{-t}}{t} dt.
This is equivalent to :func:`~mpmath.expint` with `n = 1`.
**Examples**
Two ways to evaluate this function::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> e1(6.25)
0.0002704758872637179088496194
>>> expint(1,6.25)
0.0002704758872637179088496194
The E1-function is essentially the same as the Ei-function (:func:`~mpmath.ei`)
with negated argument, except for an imaginary branch cut term::
>>> e1(2.5)
0.02491491787026973549562801
>>> -ei(-2.5)
0.02491491787026973549562801
>>> e1(-2.5)
(-7.073765894578600711923552 - 3.141592653589793238462643j)
>>> -ei(2.5)
-7.073765894578600711923552
"""
ei = r"""
Computes the exponential integral or Ei-function, `\mathrm{Ei}(x)`.
The exponential integral is defined as
.. math ::
\mathrm{Ei}(x) = \int_{-\infty\,}^x \frac{e^t}{t} \, dt.
When the integration range includes `t = 0`, the exponential
integral is interpreted as providing the Cauchy principal value.
For real `x`, the Ei-function behaves roughly like
`\mathrm{Ei}(x) \approx \exp(x) + \log(|x|)`.
The Ei-function is related to the more general family of exponential
integral functions denoted by `E_n`, which are available as :func:`~mpmath.expint`.
**Basic examples**
Some basic values and limits are::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> ei(0)
-inf
>>> ei(1)
1.89511781635594
>>> ei(inf)
+inf
>>> ei(-inf)
0.0
For `x < 0`, the defining integral can be evaluated
numerically as a reference::
>>> ei(-4)
-0.00377935240984891
>>> quad(lambda t: exp(t)/t, [-inf, -4])
-0.00377935240984891
:func:`~mpmath.ei` supports complex arguments and arbitrary
precision evaluation::
>>> mp.dps = 50
>>> ei(pi)
10.928374389331410348638445906907535171566338835056
>>> mp.dps = 25
>>> ei(3+4j)
(-4.154091651642689822535359 + 4.294418620024357476985535j)
**Related functions**
The exponential integral is closely related to the logarithmic
integral. See :func:`~mpmath.li` for additional information.
The exponential integral is related to the hyperbolic
and trigonometric integrals (see :func:`~mpmath.chi`, :func:`~mpmath.shi`,
:func:`~mpmath.ci`, :func:`~mpmath.si`) similarly to how the ordinary
exponential function is related to the hyperbolic and
trigonometric functions::
>>> mp.dps = 15
>>> ei(3)
9.93383257062542
>>> chi(3) + shi(3)
9.93383257062542
>>> chop(ci(3j) - j*si(3j) - pi*j/2)
9.93383257062542
Beware that logarithmic corrections, as in the last example
above, are required to obtain the correct branch in general.
For details, see [1].
The exponential integral is also a special case of the
hypergeometric function `\,_2F_2`::
>>> z = 0.6
>>> z*hyper([1,1],[2,2],z) + (ln(z)-ln(1/z))/2 + euler
0.769881289937359
>>> ei(z)
0.769881289937359
**References**
1. Relations between Ei and other functions:
http://functions.wolfram.com/GammaBetaErf/ExpIntegralEi/27/01/
2. Abramowitz & Stegun, section 5:
http://people.math.sfu.ca/~cbm/aands/page_228.htm
3. Asymptotic expansion for Ei:
http://mathworld.wolfram.com/En-Function.html
"""
li = r"""
Computes the logarithmic integral or li-function
`\mathrm{li}(x)`, defined by
.. math ::
\mathrm{li}(x) = \int_0^x \frac{1}{\log t} \, dt
The logarithmic integral has a singularity at `x = 1`.
Alternatively, ``li(x, offset=True)`` computes the offset
logarithmic integral (used in number theory)
.. math ::
\mathrm{Li}(x) = \int_2^x \frac{1}{\log t} \, dt.
These two functions are related via the simple identity
`\mathrm{Li}(x) = \mathrm{li}(x) - \mathrm{li}(2)`.
The logarithmic integral should also not be confused with
the polylogarithm (also denoted by Li), which is implemented
as :func:`~mpmath.polylog`.
**Examples**
Some basic values and limits::
>>> from mpmath import *
>>> mp.dps = 30; mp.pretty = True
>>> li(0)
0.0
>>> li(1)
-inf
>>> li(2)
1.04516378011749278484458888919
>>> findroot(li, 2)
1.45136923488338105028396848589
>>> li(inf)
+inf
>>> li(2, offset=True)
0.0
>>> li(1, offset=True)
-inf
>>> li(0, offset=True)
-1.04516378011749278484458888919
>>> li(10, offset=True)
5.12043572466980515267839286347
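The identity relating the two variants can be checked directly
(the tolerance is an arbitrary loose bound)::
>>> abs(li(10, offset=True) - (li(10) - li(2))) < 1e-25
True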
The logarithmic integral can be evaluated for arbitrary
complex arguments::
>>> mp.dps = 20
>>> li(3+4j)
(3.1343755504645775265 + 2.6769247817778742392j)
The logarithmic integral is related to the exponential integral::
>>> ei(log(3))
2.1635885946671919729
>>> li(3)
2.1635885946671919729
The logarithmic integral grows like `O(x/\log(x))`::
>>> mp.dps = 15
>>> x = 10**100
>>> x/log(x)
4.34294481903252e+97
>>> li(x)
4.3619719871407e+97
The prime number theorem states that the number of primes less
than `x` is asymptotic to `\mathrm{Li}(x)` (equivalently
`\mathrm{li}(x)`). For example, it is known that there are
exactly 1,925,320,391,606,803,968,923 prime numbers less than
`10^{23}` [1]. The logarithmic integral provides a very
accurate estimate::
>>> li(10**23, offset=True)
1.92532039161405e+21
A definite integral is::
>>> quad(li, [0, 1])
-0.693147180559945
>>> -ln(2)
-0.693147180559945
**References**
1. http://mathworld.wolfram.com/PrimeCountingFunction.html
2. http://mathworld.wolfram.com/LogarithmicIntegral.html
"""
ci = r"""
Computes the cosine integral,
.. math ::
\mathrm{Ci}(x) = -\int_x^{\infty} \frac{\cos t}{t}\,dt
= \gamma + \log x + \int_0^x \frac{\cos t - 1}{t}\,dt
**Examples**
Some values and limits::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> ci(0)
-inf
>>> ci(1)
0.3374039229009681346626462
>>> ci(pi)
0.07366791204642548599010096
>>> ci(inf)
0.0
>>> ci(-inf)
(0.0 + 3.141592653589793238462643j)
>>> ci(2+3j)
(1.408292501520849518759125 - 2.983617742029605093121118j)
The cosine integral behaves roughly like the sinc function
(see :func:`~mpmath.sinc`) for large real `x`::
>>> ci(10**10)
-4.875060251748226537857298e-11
>>> sinc(10**10)
-4.875060250875106915277943e-11
>>> chop(limit(ci, inf))
0.0
It has infinitely many roots on the positive real axis::
>>> findroot(ci, 1)
0.6165054856207162337971104
>>> findroot(ci, 2)
3.384180422551186426397851
Evaluation is supported for `z` anywhere in the complex plane::
>>> ci(10**6*(1+j))
(4.449410587611035724984376e+434287 + 9.75744874290013526417059e+434287j)
We can evaluate the defining integral as a reference::
>>> mp.dps = 15
>>> -quadosc(lambda t: cos(t)/t, [5, inf], omega=1)
-0.190029749656644
>>> ci(5)
-0.190029749656644
Some infinite series can be evaluated using the
cosine integral::
>>> nsum(lambda k: (-1)**k/(fac(2*k)*(2*k)), [1,inf])
-0.239811742000565
>>> ci(1) - euler
-0.239811742000565
"""
si = r"""
Computes the sine integral,
.. math ::
\mathrm{Si}(x) = \int_0^x \frac{\sin t}{t}\,dt.
The sine integral is thus the antiderivative of the sinc
function (see :func:`~mpmath.sinc`).
**Examples**
Some values and limits::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> si(0)
0.0
>>> si(1)
0.9460830703671830149413533
>>> si(-1)
-0.9460830703671830149413533
>>> si(pi)
1.851937051982466170361053
>>> si(inf)
1.570796326794896619231322
>>> si(-inf)
-1.570796326794896619231322
>>> si(2+3j)
(4.547513889562289219853204 + 1.399196580646054789459839j)
The sine integral approaches `\pi/2` for large real `x`::
>>> si(10**10)
1.570796326707584656968511
>>> pi/2
1.570796326794896619231322
Evaluation is supported for `z` anywhere in the complex plane::
>>> si(10**6*(1+j))
(-9.75744874290013526417059e+434287 + 4.449410587611035724984376e+434287j)
We can evaluate the defining integral as a reference::
>>> mp.dps = 15
>>> quad(sinc, [0, 5])
1.54993124494467
>>> si(5)
1.54993124494467
Some infinite series can be evaluated using the
sine integral::
>>> nsum(lambda k: (-1)**k/(fac(2*k+1)*(2*k+1)), [0,inf])
0.946083070367183
>>> si(1)
0.946083070367183
"""
chi = r"""
Computes the hyperbolic cosine integral, defined
in analogy with the cosine integral (see :func:`~mpmath.ci`) as
.. math ::
\mathrm{Chi}(x) = -\int_x^{\infty} \frac{\cosh t}{t}\,dt
= \gamma + \log x + \int_0^x \frac{\cosh t - 1}{t}\,dt
Some values and limits::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> chi(0)
-inf
>>> chi(1)
0.8378669409802082408946786
>>> chi(inf)
+inf
>>> findroot(chi, 0.5)
0.5238225713898644064509583
>>> chi(2+3j)
(-0.1683628683277204662429321 + 2.625115880451325002151688j)
Evaluation is supported for `z` anywhere in the complex plane::
>>> chi(10**6*(1+j))
(4.449410587611035724984376e+434287 - 9.75744874290013526417059e+434287j)
"""
shi = r"""
Computes the hyperbolic sine integral, defined
in analogy with the sine integral (see :func:`~mpmath.si`) as
.. math ::
\mathrm{Shi}(x) = \int_0^x \frac{\sinh t}{t}\,dt.
Some values and limits::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> shi(0)
0.0
>>> shi(1)
1.057250875375728514571842
>>> shi(-1)
-1.057250875375728514571842
>>> shi(inf)
+inf
>>> shi(2+3j)
(-0.1931890762719198291678095 + 2.645432555362369624818525j)
Evaluation is supported for `z` anywhere in the complex plane::
>>> shi(10**6*(1+j))
(4.449410587611035724984376e+434287 - 9.75744874290013526417059e+434287j)
"""
fresnels = r"""
Computes the Fresnel sine integral
.. math ::
S(x) = \int_0^x \sin\left(\frac{\pi t^2}{2}\right) \,dt
Note that some sources define this function
without the normalization factor `\pi/2`.
**Examples**
Some basic values and limits::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> fresnels(0)
0.0
>>> fresnels(inf)
0.5
>>> fresnels(-inf)
-0.5
>>> fresnels(1)
0.4382591473903547660767567
>>> fresnels(1+2j)
(36.72546488399143842838788 + 15.58775110440458732748279j)
Comparing with the definition::
>>> fresnels(3)
0.4963129989673750360976123
>>> quad(lambda t: sin(pi*t**2/2), [0,3])
0.4963129989673750360976123
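By construction, the derivative of `S(x)` is the integrand itself;
this can be verified with :func:`~mpmath.diff` (the evaluation point
and tolerance are arbitrary choices)::
>>> abs(diff(fresnels, 1.5) - sin(pi*1.5**2/2)) < 1e-15
True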
"""
fresnelc = r"""
Computes the Fresnel cosine integral
.. math ::
C(x) = \int_0^x \cos\left(\frac{\pi t^2}{2}\right) \,dt
Note that some sources define this function
without the normalization factor `\pi/2`.
**Examples**
Some basic values and limits::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> fresnelc(0)
0.0
>>> fresnelc(inf)
0.5
>>> fresnelc(-inf)
-0.5
>>> fresnelc(1)
0.7798934003768228294742064
>>> fresnelc(1+2j)
(16.08787137412548041729489 - 36.22568799288165021578758j)
Comparing with the definition::
>>> fresnelc(3)
0.6057207892976856295561611
>>> quad(lambda t: cos(pi*t**2/2), [0,3])
0.6057207892976856295561611
"""
airyai = r"""
Computes the Airy function `\operatorname{Ai}(z)`, which is
the solution of the Airy differential equation `f''(z) - z f(z) = 0`
with initial conditions
.. math ::
\operatorname{Ai}(0) =
\frac{1}{3^{2/3}\Gamma\left(\frac{2}{3}\right)}
\operatorname{Ai}'(0) =
-\frac{1}{3^{1/3}\Gamma\left(\frac{1}{3}\right)}.
Other common ways of defining the Ai-function include
integrals such as
.. math ::
\operatorname{Ai}(x) = \frac{1}{\pi}
\int_0^{\infty} \cos\left(\frac{1}{3}t^3+xt\right) dt
\qquad x \in \mathbb{R}
\operatorname{Ai}(z) = \frac{\sqrt{3}}{2\pi}
\int_0^{\infty}
\exp\left(-\frac{t^3}{3}-\frac{z^3}{3t^3}\right) dt.
The Ai-function is an entire function with a turning point,
behaving roughly like a slowly decaying sine wave for `z < 0` and
like a rapidly decreasing exponential for `z > 0`.
A second solution of the Airy differential equation
is given by `\operatorname{Bi}(z)` (see :func:`~mpmath.airybi`).
Optionally, with *derivative=alpha*, :func:`~mpmath.airyai` can compute the
`\alpha`-th order fractional derivative with respect to `z`.
For `\alpha = n = 1,2,3,\ldots` this gives the derivative
`\operatorname{Ai}^{(n)}(z)`, and for `\alpha = -n = -1,-2,-3,\ldots`
this gives the `n`-fold iterated integral
.. math ::
f_0(z) = \operatorname{Ai}(z)
f_n(z) = \int_0^z f_{n-1}(t) dt.
The Ai-function has infinitely many zeros, all located along the
negative half of the real axis. They can be computed with
:func:`~mpmath.airyaizero`.
**Plots**
.. literalinclude :: /plots/ai.py
.. image :: /plots/ai.png
.. literalinclude :: /plots/ai_c.py
.. image :: /plots/ai_c.png
**Basic examples**
Limits and values include::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> airyai(0); 1/(power(3,'2/3')*gamma('2/3'))
0.3550280538878172392600632
0.3550280538878172392600632
>>> airyai(1)
0.1352924163128814155241474
>>> airyai(-1)
0.5355608832923521187995166
>>> airyai(inf); airyai(-inf)
0.0
0.0
Evaluation is supported for large magnitudes of the argument::
>>> airyai(-100)
0.1767533932395528780908311
>>> airyai(100)
2.634482152088184489550553e-291
>>> airyai(50+50j)
(-5.31790195707456404099817e-68 - 1.163588003770709748720107e-67j)
>>> airyai(-50+50j)
(1.041242537363167632587245e+158 + 3.347525544923600321838281e+157j)
Huge arguments are also fine::
>>> airyai(10**10)
1.162235978298741779953693e-289529654602171
>>> airyai(-10**10)
0.0001736206448152818510510181
>>> w = airyai(10**10*(1+j))
>>> w.real
5.711508683721355528322567e-186339621747698
>>> w.imag
1.867245506962312577848166e-186339621747697
The first root of the Ai-function is::
>>> findroot(airyai, -2)
-2.338107410459767038489197
>>> airyaizero(1)
-2.338107410459767038489197
**Properties and relations**
Verifying the Airy differential equation::
>>> for z in [-3.4, 0, 2.5, 1+2j]:
... chop(airyai(z,2) - z*airyai(z))
...
0.0
0.0
0.0
0.0
The first few terms of the Taylor series expansion around `z = 0`
(every third term is zero)::
>>> nprint(taylor(airyai, 0, 5))
[0.355028, -0.258819, 0.0, 0.0591713, -0.0215683, 0.0]
The Airy functions satisfy the Wronskian relation
`\operatorname{Ai}(z) \operatorname{Bi}'(z) -
\operatorname{Ai}'(z) \operatorname{Bi}(z) = 1/\pi`::
>>> z = -0.5
>>> airyai(z)*airybi(z,1) - airyai(z,1)*airybi(z)
0.3183098861837906715377675
>>> 1/pi
0.3183098861837906715377675
The Airy functions can be expressed in terms of Bessel
functions of order `\pm 1/3`. For `\Re[z] \le 0`, we have::
>>> z = -3
>>> airyai(z)
-0.3788142936776580743472439
>>> y = 2*power(-z,'3/2')/3
>>> (sqrt(-z) * (besselj('1/3',y) + besselj('-1/3',y)))/3
-0.3788142936776580743472439
**Derivatives and integrals**
Derivatives of the Ai-function (directly and using :func:`~mpmath.diff`)::
>>> airyai(-3,1); diff(airyai,-3)
0.3145837692165988136507873
0.3145837692165988136507873
>>> airyai(-3,2); diff(airyai,-3,2)
1.136442881032974223041732
1.136442881032974223041732
>>> airyai(1000,1); diff(airyai,1000)
-2.943133917910336090459748e-9156
-2.943133917910336090459748e-9156
Several derivatives at `z = 0`::
>>> airyai(0,0); airyai(0,1); airyai(0,2)
0.3550280538878172392600632
-0.2588194037928067984051836
0.0
>>> airyai(0,3); airyai(0,4); airyai(0,5)
0.3550280538878172392600632
-0.5176388075856135968103671
0.0
>>> airyai(0,15); airyai(0,16); airyai(0,17)
1292.30211615165475090663
-3188.655054727379756351861
0.0
The integral of the Ai-function::
>>> airyai(3,-1); quad(airyai, [0,3])
0.3299203760070217725002701
0.3299203760070217725002701
>>> airyai(-10,-1); quad(airyai, [0,-10])
-0.765698403134212917425148
-0.765698403134212917425148
Integrals of high or fractional order::
>>> airyai(-2,0.5); differint(airyai,-2,0.5,0)
(0.0 + 0.2453596101351438273844725j)
(0.0 + 0.2453596101351438273844725j)
>>> airyai(-2,-4); differint(airyai,-2,-4,0)
0.2939176441636809580339365
0.2939176441636809580339365
>>> airyai(0,-1); airyai(0,-2); airyai(0,-3)
0.0
0.0
0.0
Integrals of the Ai-function can be evaluated at limit points::
>>> airyai(-1000000,-1); airyai(-inf,-1)
-0.6666843728311539978751512
-0.6666666666666666666666667
>>> airyai(10,-1); airyai(+inf,-1)
0.3333333332991690159427932
0.3333333333333333333333333
>>> airyai(+inf,-2); airyai(+inf,-3)
+inf
+inf
>>> airyai(-1000000,-2); airyai(-inf,-2)
666666.4078472650651209742
+inf
>>> airyai(-1000000,-3); airyai(-inf,-3)
-333333074513.7520264995733
-inf
**References**
1. [DLMF]_ Chapter 9: Airy and Related Functions
2. [WolframFunctions]_ section: Bessel-Type Functions
"""
airybi = r"""
Computes the Airy function `\operatorname{Bi}(z)`, which is
the solution of the Airy differential equation `f''(z) - z f(z) = 0`
with initial conditions
.. math ::
\operatorname{Bi}(0) =
\frac{1}{3^{1/6}\Gamma\left(\frac{2}{3}\right)}
\operatorname{Bi}'(0) =
\frac{3^{1/6}}{\Gamma\left(\frac{1}{3}\right)}.
Like the Ai-function (see :func:`~mpmath.airyai`), the Bi-function
is oscillatory for `z < 0`, but it grows rather than decreases
for `z > 0`.
Optionally, as for :func:`~mpmath.airyai`, derivatives, integrals
and fractional derivatives can be computed with the *derivative*
parameter.
The Bi-function has infinitely many zeros along the negative
half-axis, as well as complex zeros, which can all be computed
with :func:`~mpmath.airybizero`.
**Plots**
.. literalinclude :: /plots/bi.py
.. image :: /plots/bi.png
.. literalinclude :: /plots/bi_c.py
.. image :: /plots/bi_c.png
**Basic examples**
Limits and values include::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> airybi(0); 1/(power(3,'1/6')*gamma('2/3'))
0.6149266274460007351509224
0.6149266274460007351509224
>>> airybi(1)
1.207423594952871259436379
>>> airybi(-1)
0.10399738949694461188869
>>> airybi(inf); airybi(-inf)
+inf
0.0
Evaluation is supported for large magnitudes of the argument::
>>> airybi(-100)
0.02427388768016013160566747
>>> airybi(100)
6.041223996670201399005265e+288
>>> airybi(50+50j)
(-5.322076267321435669290334e+63 + 1.478450291165243789749427e+65j)
>>> airybi(-50+50j)
(-3.347525544923600321838281e+157 + 1.041242537363167632587245e+158j)
Huge arguments::
>>> airybi(10**10)
1.369385787943539818688433e+289529654602165
>>> airybi(-10**10)
0.001775656141692932747610973
>>> w = airybi(10**10*(1+j))
>>> w.real
-6.559955931096196875845858e+186339621747689
>>> w.imag
-6.822462726981357180929024e+186339621747690
The first real root of the Bi-function is::
>>> findroot(airybi, -1); airybizero(1)
-1.17371322270912792491998
-1.17371322270912792491998
**Properties and relations**
Verifying the Airy differential equation::
>>> for z in [-3.4, 0, 2.5, 1+2j]:
... chop(airybi(z,2) - z*airybi(z))
...
0.0
0.0
0.0
0.0
The first few terms of the Taylor series expansion around `z = 0`
(every third term is zero)::
>>> nprint(taylor(airybi, 0, 5))
[0.614927, 0.448288, 0.0, 0.102488, 0.0373574, 0.0]
The Airy functions can be expressed in terms of Bessel
functions of order `\pm 1/3`. For `\Re[z] \le 0`, we have::
>>> z = -3
>>> airybi(z)
-0.1982896263749265432206449
>>> p = 2*power(-z,'3/2')/3
>>> sqrt(-mpf(z)/3)*(besselj('-1/3',p) - besselj('1/3',p))
-0.1982896263749265432206449
**Derivatives and integrals**
Derivatives of the Bi-function (directly and using :func:`~mpmath.diff`)::
>>> airybi(-3,1); diff(airybi,-3)
-0.675611222685258537668032
-0.675611222685258537668032
>>> airybi(-3,2); diff(airybi,-3,2)
0.5948688791247796296619346
0.5948688791247796296619346
>>> airybi(1000,1); diff(airybi,1000)
1.710055114624614989262335e+9156
1.710055114624614989262335e+9156
Several derivatives at `z = 0`::
>>> airybi(0,0); airybi(0,1); airybi(0,2)
0.6149266274460007351509224
0.4482883573538263579148237
0.0
>>> airybi(0,3); airybi(0,4); airybi(0,5)
0.6149266274460007351509224
0.8965767147076527158296474
0.0
>>> airybi(0,15); airybi(0,16); airybi(0,17)
2238.332923903442675949357
5522.912562599140729510628
0.0
The integral of the Bi-function::
>>> airybi(3,-1); quad(airybi, [0,3])
10.06200303130620056316655
10.06200303130620056316655
>>> airybi(-10,-1); quad(airybi, [0,-10])
-0.01504042480614002045135483
-0.01504042480614002045135483
Integrals of high or fractional order::
>>> airybi(-2,0.5); differint(airybi, -2, 0.5, 0)
(0.0 + 0.5019859055341699223453257j)
(0.0 + 0.5019859055341699223453257j)
>>> airybi(-2,-4); differint(airybi,-2,-4,0)
0.2809314599922447252139092
0.2809314599922447252139092
>>> airybi(0,-1); airybi(0,-2); airybi(0,-3)
0.0
0.0
0.0
Integrals of the Bi-function can be evaluated at limit points::
>>> airybi(-1000000,-1); airybi(-inf,-1)
0.000002191261128063434047966873
0.0
>>> airybi(10,-1); airybi(+inf,-1)
147809803.1074067161675853
+inf
>>> airybi(+inf,-2); airybi(+inf,-3)
+inf
+inf
>>> airybi(-1000000,-2); airybi(-inf,-2)
0.4482883750599908479851085
0.4482883573538263579148237
>>> gamma('2/3')*power(3,'2/3')/(2*pi)
0.4482883573538263579148237
>>> airybi(-100000,-3); airybi(-inf,-3)
-44828.52827206932872493133
-inf
>>> airybi(-100000,-4); airybi(-inf,-4)
2241411040.437759489540248
+inf
"""
airyaizero = r"""
Gives the `k`-th zero of the Airy Ai-function,
i.e. the `k`-th number `a_k` ordered by magnitude for which
`\operatorname{Ai}(a_k) = 0`.
Optionally, with *derivative=1*, the corresponding
zero `a'_k` of the derivative function, i.e.
`\operatorname{Ai}'(a'_k) = 0`, is computed.
**Examples**
Some values of `a_k`::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> airyaizero(1)
-2.338107410459767038489197
>>> airyaizero(2)
-4.087949444130970616636989
>>> airyaizero(3)
-5.520559828095551059129856
>>> airyaizero(1000)
-281.0315196125215528353364
Some values of `a'_k`::
>>> airyaizero(1,1)
-1.018792971647471089017325
>>> airyaizero(2,1)
-3.248197582179836537875424
>>> airyaizero(3,1)
-4.820099211178735639400616
>>> airyaizero(1000,1)
-280.9378080358935070607097
Verification::
>>> chop(airyai(airyaizero(1)))
0.0
>>> chop(airyai(airyaizero(1,1),1))
0.0
"""
airybizero = r"""
With *complex=False*, gives the `k`-th real zero of the Airy Bi-function,
i.e. the `k`-th number `b_k` ordered by magnitude for which
`\operatorname{Bi}(b_k) = 0`.
With *complex=True*, gives the `k`-th complex zero in the upper
half plane `\beta_k`. Also the conjugate `\overline{\beta_k}`
is a zero.
Optionally, with *derivative=1*, the corresponding
zero `b'_k` or `\beta'_k` of the derivative function, i.e.
`\operatorname{Bi}'(b'_k) = 0` or `\operatorname{Bi}'(\beta'_k) = 0`,
is computed.
**Examples**
Some values of `b_k`::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> airybizero(1)
-1.17371322270912792491998
>>> airybizero(2)
-3.271093302836352715680228
>>> airybizero(3)
-4.830737841662015932667709
>>> airybizero(1000)
-280.9378112034152401578834
Some values of `b'_k`::
>>> airybizero(1,1)
-2.294439682614123246622459
>>> airybizero(2,1)
-4.073155089071828215552369
>>> airybizero(3,1)
-5.512395729663599496259593
>>> airybizero(1000,1)
-281.0315164471118527161362
Some values of `\beta_k`::
>>> airybizero(1,complex=True)
(0.9775448867316206859469927 + 2.141290706038744575749139j)
>>> airybizero(2,complex=True)
(1.896775013895336346627217 + 3.627291764358919410440499j)
>>> airybizero(3,complex=True)
(2.633157739354946595708019 + 4.855468179979844983174628j)
>>> airybizero(1000,complex=True)
(140.4978560578493018899793 + 243.3907724215792121244867j)
Some values of `\beta'_k`::
>>> airybizero(1,1,complex=True)
(0.2149470745374305676088329 + 1.100600143302797880647194j)
>>> airybizero(2,1,complex=True)
(1.458168309223507392028211 + 2.912249367458445419235083j)
>>> airybizero(3,1,complex=True)
(2.273760763013482299792362 + 4.254528549217097862167015j)
>>> airybizero(1000,1,complex=True)
(140.4509972835270559730423 + 243.3096175398562811896208j)
Verification::
>>> chop(airybi(airybizero(1)))
0.0
>>> chop(airybi(airybizero(1,1),1))
0.0
>>> u = airybizero(1,complex=True)
>>> chop(airybi(u))
0.0
>>> chop(airybi(conj(u)))
0.0
The complex zeros (in the upper and lower half-planes respectively)
asymptotically approach the rays `z = R \exp(\pm i \pi /3)`::
>>> arg(airybizero(1,complex=True))
1.142532510286334022305364
>>> arg(airybizero(1000,complex=True))
1.047271114786212061583917
>>> arg(airybizero(1000000,complex=True))
1.047197624741816183341355
>>> pi/3
1.047197551196597746154214
"""
ellipk = r"""
Evaluates the complete elliptic integral of the first kind,
`K(m)`, defined by
.. math ::
K(m) = \int_0^{\pi/2} \frac{dt}{\sqrt{1-m \sin^2 t}} \, = \,
\frac{\pi}{2} \,_2F_1\left(\frac{1}{2}, \frac{1}{2}, 1, m\right).
Note that the argument is the parameter `m = k^2`,
not the modulus `k` which is sometimes used.
**Plots**
.. literalinclude :: /plots/ellipk.py
.. image :: /plots/ellipk.png
**Examples**
Values and limits include::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> ellipk(0)
1.570796326794896619231322
>>> ellipk(inf)
(0.0 + 0.0j)
>>> ellipk(-inf)
0.0
>>> ellipk(1)
+inf
>>> ellipk(-1)
1.31102877714605990523242
>>> ellipk(2)
(1.31102877714605990523242 - 1.31102877714605990523242j)
Verifying the defining integral and hypergeometric
representation::
>>> ellipk(0.5)
1.85407467730137191843385
>>> quad(lambda t: (1-0.5*sin(t)**2)**-0.5, [0, pi/2])
1.85407467730137191843385
>>> pi/2*hyp2f1(0.5,0.5,1,0.5)
1.85407467730137191843385
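The connection with the arithmetic-geometric mean,
`K(m) = \pi/(2\,\mathrm{agm}(1,\sqrt{1-m}))`, gives another route
(an illustrative cross-check)::
>>> pi/(2*agm(1,sqrt(1-0.5)))
1.85407467730137191843385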
Evaluation is supported for arbitrary complex `m`::
>>> ellipk(3+4j)
(0.9111955638049650086562171 + 0.6313342832413452438845091j)
A definite integral::
>>> quad(ellipk, [0, 1])
2.0
"""
agm = r"""
``agm(a, b)`` computes the arithmetic-geometric mean of `a` and
`b`, defined as the limit of the following iteration:
.. math ::
a_0 = a
b_0 = b
a_{n+1} = \frac{a_n+b_n}{2}
b_{n+1} = \sqrt{a_n b_n}
This function can be called with a single argument, computing
`\mathrm{agm}(a,1) = \mathrm{agm}(1,a)`.
**Examples**
It is a well-known theorem that the geometric mean of
two distinct positive numbers is less than the arithmetic
mean. It follows that the arithmetic-geometric mean lies
between the two means::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> a = mpf(3)
>>> b = mpf(4)
>>> sqrt(a*b)
3.46410161513775
>>> agm(a,b)
3.48202767635957
>>> (a+b)/2
3.5
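The defining iteration converges to this value after only a few steps
(a minimal sketch of the recurrence)::
>>> x, y = a, b
>>> for i in range(5):
...     x, y = (x+y)/2, sqrt(x*y)
...
>>> chop(x - agm(a,b))
0.0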
The arithmetic-geometric mean is scale-invariant::
>>> agm(10*e, 10*pi)
29.261085515723
>>> 10*agm(e, pi)
29.261085515723
For large `x`, we have `\mathrm{agm}(1,x) \approx \pi x/(2 \log(4x))`,
so the growth is linear in `x` up to a logarithmic factor::
>>> agm(10**10)
643448704.760133
>>> agm(10**50)
1.34814309345871e+48
For tiny `x`, `\mathrm{agm}(1,x) \approx -\pi/(2 \log(x/4))`::
>>> agm('0.01')
0.262166887202249
>>> -pi/2/log('0.0025')
0.262172347753122
The arithmetic-geometric mean can also be computed for complex
numbers::
>>> agm(3, 2+j)
(2.51055133276184 + 0.547394054060638j)
The AGM iteration converges very quickly (each step doubles
the number of correct digits), so :func:`~mpmath.agm` supports efficient
high-precision evaluation::
>>> mp.dps = 10000
>>> a = agm(1,2)
>>> str(a)[-10:]
'1679581912'
**Mathematical relations**
The arithmetic-geometric mean may be used to evaluate the
following two parametric definite integrals:
.. math ::
I_1 = \int_0^{\infty}
\frac{1}{\sqrt{(x^2+a^2)(x^2+b^2)}} \,dx
I_2 = \int_0^{\pi/2}
\frac{1}{\sqrt{a^2 \cos^2(x) + b^2 \sin^2(x)}} \,dx
We have::
>>> mp.dps = 15
>>> a = 3
>>> b = 4
>>> f1 = lambda x: ((x**2+a**2)*(x**2+b**2))**-0.5
>>> f2 = lambda x: ((a*cos(x))**2 + (b*sin(x))**2)**-0.5
>>> quad(f1, [0, inf])
0.451115405388492
>>> quad(f2, [0, pi/2])
0.451115405388492
>>> pi/(2*agm(a,b))
0.451115405388492
A formula for `\Gamma(1/4)`::
>>> gamma(0.25)
3.62560990822191
>>> sqrt(2*sqrt(2*pi**3)/agm(1,sqrt(2)))
3.62560990822191
**Possible issues**
The branch cut chosen for complex `a` and `b` is somewhat
arbitrary.
"""
gegenbauer = r"""
Evaluates the Gegenbauer polynomial, or ultraspherical polynomial,
.. math ::
C_n^{(a)}(z) = {n+2a-1 \choose n} \,_2F_1\left(-n, n+2a;
a+\frac{1}{2}; \frac{1}{2}(1-z)\right).
When `n` is a nonnegative integer, this formula gives a polynomial
in `z` of degree `n`, but all parameters are permitted to be
complex numbers. With `a = 1/2`, the Gegenbauer polynomial
reduces to a Legendre polynomial.
**Examples**
Evaluation for arbitrary arguments::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> gegenbauer(3, 0.5, -10)
-2485.0
>>> gegenbauer(1000, 10, 100)
3.012757178975667428359374e+2322
>>> gegenbauer(2+3j, -0.75, -1000j)
(-5038991.358609026523401901 + 9414549.285447104177860806j)
Evaluation at negative integer orders::
>>> gegenbauer(-4, 2, 1.75)
-1.0
>>> gegenbauer(-4, 3, 1.75)
0.0
>>> gegenbauer(-4, 2j, 1.75)
0.0
>>> gegenbauer(-7, 0.5, 3)
8989.0
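As noted above, with `a = 1/2` the Gegenbauer polynomials reduce to the
Legendre polynomials (an illustrative cross-check)::
>>> gegenbauer(3, 0.5, 0.25); legendre(3, 0.25)
-0.3359375
-0.3359375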
The Gegenbauer polynomials solve the differential equation::
>>> n, a = 4.5, 1+2j
>>> f = lambda z: gegenbauer(n, a, z)
>>> for z in [0, 0.75, -0.5j]:
... chop((1-z**2)*diff(f,z,2) - (2*a+1)*z*diff(f,z) + n*(n+2*a)*f(z))
...
0.0
0.0
0.0
The Gegenbauer polynomials have generating function
`(1-2zt+t^2)^{-a}`::
>>> a, z = 2.5, 1
>>> taylor(lambda t: (1-2*z*t+t**2)**(-a), 0, 3)
[1.0, 5.0, 15.0, 35.0]
>>> [gegenbauer(n,a,z) for n in range(4)]
[1.0, 5.0, 15.0, 35.0]
The Gegenbauer polynomials are orthogonal on `[-1, 1]` with respect
to the weight `(1-z^2)^{a-\frac{1}{2}}`::
>>> a, n, m = 2.5, 4, 5
>>> Cn = lambda z: gegenbauer(n, a, z, zeroprec=1000)
>>> Cm = lambda z: gegenbauer(m, a, z, zeroprec=1000)
>>> chop(quad(lambda z: Cn(z)*Cm(z)*(1-z**2)**(a-0.5), [-1, 1]))
0.0
"""
laguerre = r"""
Gives the generalized (associated) Laguerre polynomial, defined by
.. math ::
L_n^a(z) = \frac{\Gamma(n+a+1)}{\Gamma(a+1) \Gamma(n+1)}
\,_1F_1(-n, a+1, z).
With `a = 0` and `n` a nonnegative integer, this reduces to an ordinary
Laguerre polynomial, the sequence of which begins
`L_0(z) = 1, L_1(z) = 1-z, L_2(z) = 1-2z+z^2/2, \ldots`.
The Laguerre polynomials are orthogonal with respect to the weight
`z^a e^{-z}` on `[0, \infty)`.
**Plots**
.. literalinclude :: /plots/laguerre.py
.. image :: /plots/laguerre.png
**Examples**
Evaluation for arbitrary arguments::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> laguerre(5, 0, 0.25)
0.03726399739583333333333333
>>> laguerre(1+j, 0.5, 2+3j)
(4.474921610704496808379097 - 11.02058050372068958069241j)
>>> laguerre(2, 0, 10000)
49980001.0
>>> laguerre(2.5, 0, 10000)
-9.327764910194842158583189e+4328
The first few Laguerre polynomials, normalized to have integer
coefficients::
>>> for n in range(7):
... chop(taylor(lambda z: fac(n)*laguerre(n, 0, z), 0, n))
...
[1.0]
[1.0, -1.0]
[2.0, -4.0, 1.0]
[6.0, -18.0, 9.0, -1.0]
[24.0, -96.0, 72.0, -16.0, 1.0]
[120.0, -600.0, 600.0, -200.0, 25.0, -1.0]
[720.0, -4320.0, 5400.0, -2400.0, 450.0, -36.0, 1.0]
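For instance, the `n = 2` row encodes `2 L_2(z) = z^2-4z+2`, i.e.
`L_2(z) = 1-2z+z^2/2`, which equals 1 at `z = 4` (an illustrative
spot check)::
>>> laguerre(2, 0, 4)
1.0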
Verifying orthogonality::
>>> Lm = lambda t: laguerre(m,a,t)
>>> Ln = lambda t: laguerre(n,a,t)
>>> a, n, m = 2.5, 2, 3
>>> chop(quad(lambda t: exp(-t)*t**a*Lm(t)*Ln(t), [0,inf]))
0.0
"""
hermite = r"""
Evaluates the Hermite polynomial `H_n(z)`, which may be defined using
the recurrence
.. math ::
H_0(z) = 1
H_1(z) = 2z
H_{n+1}(z) = 2z H_n(z) - 2n H_{n-1}(z).
The Hermite polynomials are orthogonal on `(-\infty, \infty)` with
respect to the weight `e^{-z^2}`. More generally, allowing arbitrary complex
values of `n`, the Hermite function `H_n(z)` is defined as
.. math ::
H_n(z) = (2z)^n \,_2F_0\left(-\frac{n}{2}, \frac{1-n}{2},
-\frac{1}{z^2}\right)
for `\Re{z} > 0`, or generally
.. math ::
H_n(z) = 2^n \sqrt{\pi} \left(
\frac{1}{\Gamma\left(\frac{1-n}{2}\right)}
\,_1F_1\left(-\frac{n}{2}, \frac{1}{2}, z^2\right) -
\frac{2z}{\Gamma\left(-\frac{n}{2}\right)}
\,_1F_1\left(\frac{1-n}{2}, \frac{3}{2}, z^2\right)
\right).
**Plots**
.. literalinclude :: /plots/hermite.py
.. image :: /plots/hermite.png
**Examples**
Evaluation for arbitrary arguments::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> hermite(0, 10)
1.0
>>> hermite(1, 10); hermite(2, 10)
20.0
398.0
>>> hermite(10000, 2)
4.950440066552087387515653e+19334
>>> hermite(3, -10**8)
-7999999999999998800000000.0
>>> hermite(-3, -10**8)
1.675159751729877682920301e+4342944819032534
>>> hermite(2+3j, -1+2j)
(-0.07652130602993513389421901 - 0.1084662449961914580276007j)
Coefficients of the first few Hermite polynomials are::
>>> for n in range(7):
... chop(taylor(lambda z: hermite(n, z), 0, n))
...
[1.0]
[0.0, 2.0]
[-2.0, 0.0, 4.0]
[0.0, -12.0, 0.0, 8.0]
[12.0, 0.0, -48.0, 0.0, 16.0]
[0.0, 120.0, 0.0, -160.0, 0.0, 32.0]
[-120.0, 0.0, 720.0, 0.0, -480.0, 0.0, 64.0]
Values at `z = 0`::
>>> for n in range(-5, 9):
... hermite(n, 0)
...
0.02769459142039868792653387
0.08333333333333333333333333
0.2215567313631895034122709
0.5
0.8862269254527580136490837
1.0
0.0
-2.0
0.0
12.0
0.0
-120.0
0.0
1680.0
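The recurrence formula stated above is easy to check at an exact
point such as `z = 2` (an illustrative verification)::
>>> hermite(5, 2); 2*2*hermite(4, 2) - 2*4*hermite(3, 2)
-16.0
-16.0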
Hermite functions satisfy the differential equation::
>>> n = 4
>>> f = lambda z: hermite(n, z)
>>> z = 1.5
>>> chop(diff(f,z,2) - 2*z*diff(f,z) + 2*n*f(z))
0.0
Verifying orthogonality::
>>> chop(quad(lambda t: hermite(2,t)*hermite(4,t)*exp(-t**2), [-inf,inf]))
0.0
"""
jacobi = r"""
``jacobi(n, a, b, x)`` evaluates the Jacobi polynomial
`P_n^{(a,b)}(x)`. The Jacobi polynomials are a special
case of the hypergeometric function `\,_2F_1` given by:
.. math ::
P_n^{(a,b)}(x) = {n+a \choose n}
\,_2F_1\left(-n,1+a+b+n,a+1,\frac{1-x}{2}\right).
Note that this definition generalizes to nonintegral values
of `n`. When `n` is an integer, the hypergeometric series
terminates after a finite number of terms, giving
a polynomial in `x`.
**Evaluation of Jacobi polynomials**
A special evaluation is `P_n^{(a,b)}(1) = {n+a \choose n}`::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> jacobi(4, 0.5, 0.25, 1)
2.4609375
>>> binomial(4+0.5, 4)
2.4609375
A Jacobi polynomial of degree `n` is equal to its
Taylor polynomial of degree `n`. The explicit
coefficients of Jacobi polynomials can therefore
be recovered easily using :func:`~mpmath.taylor`::
>>> for n in range(5):
... nprint(taylor(lambda x: jacobi(n,1,2,x), 0, n))
...
[1.0]
[-0.5, 2.5]
[-0.75, -1.5, 5.25]
[0.5, -3.5, -3.5, 10.5]
[0.625, 2.5, -11.25, -7.5, 20.625]
For nonintegral `n`, the Jacobi "polynomial" is no longer
a polynomial::
>>> nprint(taylor(lambda x: jacobi(0.5,1,2,x), 0, 4))
[0.309983, 1.84119, -1.26933, 1.26699, -1.34808]
**Orthogonality**
The Jacobi polynomials are orthogonal on the interval
`[-1, 1]` with respect to the weight function
`w(x) = (1-x)^a (1+x)^b`. That is,
`w(x) P_n^{(a,b)}(x) P_m^{(a,b)}(x)` integrates to
zero if `m \ne n` and to a nonzero number if `m = n`.
The orthogonality is easy to verify using numerical
quadrature::
>>> P = jacobi
>>> f = lambda x: (1-x)**a * (1+x)**b * P(m,a,b,x) * P(n,a,b,x)
>>> a = 2
>>> b = 3
>>> m, n = 3, 4
>>> chop(quad(f, [-1, 1]), 1)
0.0
>>> m, n = 4, 4
>>> quad(f, [-1, 1])
1.9047619047619
**Differential equation**
The Jacobi polynomials are solutions of the differential
equation
.. math ::
(1-x^2) y'' + (b-a-(a+b+2)x) y' + n (n+a+b+1) y = 0.
We can verify that :func:`~mpmath.jacobi` approximately satisfies
this equation::
>>> from mpmath import *
>>> mp.dps = 15
>>> a = 2.5
>>> b = 4
>>> n = 3
>>> y = lambda x: jacobi(n,a,b,x)
>>> x = pi
>>> A0 = n*(n+a+b+1)*y(x)
>>> A1 = (b-a-(a+b+2)*x)*diff(y,x)
>>> A2 = (1-x**2)*diff(y,x,2)
>>> nprint(A2 + A1 + A0, 1)
4.0e-12
The difference of order `10^{-12}` is as close to zero as
it could be at 15-digit working precision, since the terms
are large::
>>> A0, A1, A2
(26560.2328981879, -21503.7641037294, -5056.46879445852)
"""
legendre = r"""
``legendre(n, x)`` evaluates the Legendre polynomial `P_n(x)`.
The Legendre polynomials are given by the formula
.. math ::
P_n(x) = \frac{1}{2^n n!} \frac{d^n}{dx^n} (x^2 -1)^n.
Alternatively, they can be computed recursively using
.. math ::
P_0(x) = 1
P_1(x) = x
(n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x).
A third definition is in terms of the hypergeometric function
`\,_2F_1`, whereby they can be generalized to arbitrary `n`:
.. math ::
P_n(x) = \,_2F_1\left(-n, n+1, 1, \frac{1-x}{2}\right)
**Plots**
.. literalinclude :: /plots/legendre.py
.. image :: /plots/legendre.png
**Basic evaluation**
The Legendre polynomials assume fixed values at the points
`x = -1` and `x = 1`::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> nprint([legendre(n, 1) for n in range(6)])
[1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
>>> nprint([legendre(n, -1) for n in range(6)])
[1.0, -1.0, 1.0, -1.0, 1.0, -1.0]
The coefficients of Legendre polynomials can be recovered
using degree-`n` Taylor expansion::
>>> for n in range(5):
... nprint(chop(taylor(lambda x: legendre(n, x), 0, n)))
...
[1.0]
[0.0, 1.0]
[-0.5, 0.0, 1.5]
[0.0, -1.5, 0.0, 2.5]
[0.375, 0.0, -3.75, 0.0, 4.375]
The roots of Legendre polynomials are located symmetrically
on the interval `[-1, 1]`::
>>> for n in range(5):
... nprint(polyroots(taylor(lambda x: legendre(n, x), 0, n)[::-1]))
...
[]
[0.0]
[-0.57735, 0.57735]
[-0.774597, 0.0, 0.774597]
[-0.861136, -0.339981, 0.339981, 0.861136]
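The three-term recurrence stated above can be verified numerically
(an illustrative check)::
>>> n, x = 5, mpf(0.3)
>>> chop((n+1)*legendre(n+1,x) - (2*n+1)*x*legendre(n,x) + n*legendre(n-1,x))
0.0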
An example of an evaluation for arbitrary `n`::
>>> legendre(0.75, 2+4j)
(1.94952805264875 + 2.1071073099422j)
**Orthogonality**
The Legendre polynomials are orthogonal on `[-1, 1]` with respect
to the trivial weight `w(x) = 1`. That is, `P_m(x) P_n(x)`
integrates to zero if `m \ne n` and to `2/(2n+1)` if `m = n`::
>>> m, n = 3, 4
>>> quad(lambda x: legendre(m,x)*legendre(n,x), [-1, 1])
0.0
>>> m, n = 4, 4
>>> quad(lambda x: legendre(m,x)*legendre(n,x), [-1, 1])
0.222222222222222
**Differential equation**
The Legendre polynomials satisfy the differential equation
.. math ::
((1-x^2) y')' + n(n+1) y = 0.
We can verify this numerically::
>>> n = 3.6
>>> x = 0.73
>>> P = legendre
>>> A = diff(lambda t: (1-t**2)*diff(lambda u: P(n,u), t), x)
>>> B = n*(n+1)*P(n,x)
>>> nprint(A+B,1)
9.0e-16
"""
legenp = r"""
Calculates the (associated) Legendre function of the first kind of
degree *n* and order *m*, `P_n^m(z)`. Taking `m = 0` gives the ordinary
Legendre function of the first kind, `P_n(z)`. The parameters may be
complex numbers.
In terms of the Gauss hypergeometric function, the (associated) Legendre
function is defined as
.. math ::
P_n^m(z) = \frac{1}{\Gamma(1-m)} \frac{(1+z)^{m/2}}{(1-z)^{m/2}}
\,_2F_1\left(-n, n+1, 1-m, \frac{1-z}{2}\right).
With *type=3* instead of *type=2*, the alternative
definition
.. math ::
\hat{P}_n^m(z) = \frac{1}{\Gamma(1-m)} \frac{(z+1)^{m/2}}{(z-1)^{m/2}}
\,_2F_1\left(-n, n+1, 1-m, \frac{1-z}{2}\right).
is used. These functions correspond respectively to ``LegendreP[n,m,2,z]``
and ``LegendreP[n,m,3,z]`` in Mathematica.
The general solution of the (associated) Legendre differential equation
.. math ::
(1-z^2) f''(z) - 2zf'(z) + \left(n(n+1)-\frac{m^2}{1-z^2}\right)f(z) = 0
is given by `C_1 P_n^m(z) + C_2 Q_n^m(z)` for arbitrary constants
`C_1`, `C_2`, where `Q_n^m(z)` is a Legendre function of the
second kind as implemented by :func:`~mpmath.legenq`.
**Examples**
Evaluation for arbitrary parameters and arguments::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> legenp(2, 0, 10); legendre(2, 10)
149.5
149.5
>>> legenp(-2, 0.5, 2.5)
(1.972260393822275434196053 - 1.972260393822275434196053j)
>>> legenp(2+3j, 1-j, -0.5+4j)
(-3.335677248386698208736542 - 5.663270217461022307645625j)
>>> chop(legenp(3, 2, -1.5, type=2))
28.125
>>> chop(legenp(3, 2, -1.5, type=3))
-28.125
Verifying the associated Legendre differential equation::
>>> n, m = 2, -0.5
>>> C1, C2 = 1, -3
>>> f = lambda z: C1*legenp(n,m,z) + C2*legenq(n,m,z)
>>> deq = lambda z: (1-z**2)*diff(f,z,2) - 2*z*diff(f,z) + \
... (n*(n+1)-m**2/(1-z**2))*f(z)
>>> for z in [0, 2, -1.5, 0.5+2j]:
... chop(deq(mpmathify(z)))
...
0.0
0.0
0.0
0.0
"""
legenq = r"""
Calculates the (associated) Legendre function of the second kind of
degree *n* and order *m*, `Q_n^m(z)`. Taking `m = 0` gives the ordinary
Legendre function of the second kind, `Q_n(z)`. The parameters may be
complex numbers.
The Legendre functions of the second kind give a second set of
solutions to the (associated) Legendre differential equation.
(See :func:`~mpmath.legenp`.)
Unlike the Legendre functions of the first kind, they are not
polynomials of `z` for integer `n`, `m` but rational or logarithmic
functions with poles at `z = \pm 1`.
There are various ways to define Legendre functions of
the second kind, giving rise to different branch structures.
A version can be selected using the *type* keyword argument.
The *type=2* and *type=3* functions are given respectively by
.. math ::
Q_n^m(z) = \frac{\pi}{2 \sin(\pi m)}
\left( \cos(\pi m) P_n^m(z) -
\frac{\Gamma(1+m+n)}{\Gamma(1-m+n)} P_n^{-m}(z)\right)
\hat{Q}_n^m(z) = \frac{\pi}{2 \sin(\pi m)} e^{\pi i m}
\left( \hat{P}_n^m(z) -
\frac{\Gamma(1+m+n)}{\Gamma(1-m+n)} \hat{P}_n^{-m}(z)\right)
where `P` and `\hat{P}` are the *type=2* and *type=3* Legendre functions
of the first kind. The formulas above should be understood as limits
when `m` is an integer.
These functions correspond to ``LegendreQ[n,m,2,z]`` (or ``LegendreQ[n,m,z]``)
and ``LegendreQ[n,m,3,z]`` in Mathematica. The *type=3* function
is essentially the same as the function defined in
Abramowitz & Stegun (eq. 8.1.3) but with `(z+1)^{m/2}(z-1)^{m/2}` instead
of `(z^2-1)^{m/2}`, giving slightly different branches.
**Examples**
Evaluation for arbitrary parameters and arguments::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> legenq(2, 0, 0.5)
-0.8186632680417568557122028
>>> legenq(-1.5, -2, 2.5)
(0.6655964618250228714288277 + 0.3937692045497259717762649j)
>>> legenq(2-j, 3+4j, -6+5j)
(-10001.95256487468541686564 - 6011.691337610097577791134j)
Different versions of the function::
>>> legenq(2, 1, 0.5)
0.7298060598018049369381857
>>> legenq(2, 1, 1.5)
(-7.902916572420817192300921 + 0.1998650072605976600724502j)
>>> legenq(2, 1, 0.5, type=3)
(2.040524284763495081918338 - 0.7298060598018049369381857j)
>>> chop(legenq(2, 1, 1.5, type=3))
-0.1998650072605976600724502
"""
chebyt = r"""
``chebyt(n, x)`` evaluates the Chebyshev polynomial of the first
kind `T_n(x)`, defined by the identity
.. math ::
T_n(\cos x) = \cos(n x).
The Chebyshev polynomials of the first kind are a special
case of the Jacobi polynomials, and by extension of the
hypergeometric function `\,_2F_1`. They can thus also be
evaluated for nonintegral `n`.
**Plots**
.. literalinclude :: /plots/chebyt.py
.. image :: /plots/chebyt.png
**Basic evaluation**
The coefficients of the `n`-th polynomial can be recovered
using a degree-`n` Taylor expansion::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> for n in range(5):
... nprint(chop(taylor(lambda x: chebyt(n, x), 0, n)))
...
[1.0]
[0.0, 1.0]
[-1.0, 0.0, 2.0]
[0.0, -3.0, 0.0, 4.0]
[1.0, 0.0, -8.0, 0.0, 8.0]
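The defining identity `T_n(\cos x) = \cos(n x)` can be checked
directly (an illustrative verification)::
>>> n, x = 5, mpf(0.7)
>>> chop(chebyt(n, cos(x)) - cos(n*x))
0.0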
**Orthogonality**
The Chebyshev polynomials of the first kind are orthogonal
on the interval `[-1, 1]` with respect to the weight
function `w(x) = 1/\sqrt{1-x^2}`::
>>> f = lambda x: chebyt(m,x)*chebyt(n,x)/sqrt(1-x**2)
>>> m, n = 3, 4
>>> nprint(quad(f, [-1, 1]),1)
0.0
>>> m, n = 4, 4
>>> quad(f, [-1, 1])
1.57079632596448
"""
chebyu = r"""
``chebyu(n, x)`` evaluates the Chebyshev polynomial of the second
kind `U_n(x)`, defined by the identity
.. math ::
U_n(\cos x) = \frac{\sin((n+1)x)}{\sin(x)}.
The Chebyshev polynomials of the second kind are a special
case of the Jacobi polynomials, and by extension of the
hypergeometric function `\,_2F_1`. They can thus also be
evaluated for nonintegral `n`.
**Plots**
.. literalinclude :: /plots/chebyu.py
.. image :: /plots/chebyu.png
**Basic evaluation**
The coefficients of the `n`-th polynomial can be recovered
using a degree-`n` Taylor expansion::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> for n in range(5):
... nprint(chop(taylor(lambda x: chebyu(n, x), 0, n)))
...
[1.0]
[0.0, 2.0]
[-1.0, 0.0, 4.0]
[0.0, -4.0, 0.0, 8.0]
[1.0, 0.0, -12.0, 0.0, 16.0]
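The defining identity can likewise be checked directly (an
illustrative verification)::
>>> n, x = 5, mpf(0.7)
>>> chop(chebyu(n, cos(x))*sin(x) - sin((n+1)*x))
0.0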
**Orthogonality**
The Chebyshev polynomials of the second kind are orthogonal
on the interval `[-1, 1]` with respect to the weight
function `w(x) = \sqrt{1-x^2}`::
>>> f = lambda x: chebyu(m,x)*chebyu(n,x)*sqrt(1-x**2)
>>> m, n = 3, 4
>>> quad(f, [-1, 1])
0.0
>>> m, n = 4, 4
>>> quad(f, [-1, 1])
1.5707963267949
"""
besselj = r"""
``besselj(n, x, derivative=0)`` gives the Bessel function of the first kind
`J_n(x)`. Bessel functions of the first kind are defined as
solutions of the differential equation
.. math ::
x^2 y'' + x y' + (x^2 - n^2) y = 0
which appears, among other things, when solving the radial
part of Laplace's equation in cylindrical coordinates. This
equation has two solutions for given `n`, where the
`J_n`-function is the solution that is nonsingular at `x = 0`.
For positive integer `n`, `J_n(x)` behaves roughly like a sine
(odd `n`) or cosine (even `n`) multiplied by a magnitude factor
that decays slowly as `x \to \pm\infty`.
Generally, `J_n` is a special case of the hypergeometric
function `\,_0F_1`:
.. math ::
J_n(x) = \frac{x^n}{2^n \Gamma(n+1)}
\,_0F_1\left(n+1,-\frac{x^2}{4}\right)
With *derivative* = `m \ne 0`, the `m`-th derivative
.. math ::
\frac{d^m}{dx^m} J_n(x)
is computed.
**Plots**
.. literalinclude :: /plots/besselj.py
.. image :: /plots/besselj.png
.. literalinclude :: /plots/besselj_c.py
.. image :: /plots/besselj_c.png
**Examples**
Evaluation is supported for arbitrary arguments, and at
arbitrary precision::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> besselj(2, 1000)
-0.024777229528606
>>> besselj(4, 0.75)
0.000801070086542314
>>> besselj(2, 1000j)
(-2.48071721019185e+432 + 6.41567059811949e-437j)
>>> mp.dps = 25
>>> besselj(0.75j, 3+4j)
(-2.778118364828153309919653 - 1.5863603889018621585533j)
>>> mp.dps = 50
>>> besselj(1, pi)
0.28461534317975275734531059968613140570981118184947
Arguments may be large::
>>> mp.dps = 25
>>> besselj(0, 10000)
-0.007096160353388801477265164
>>> besselj(0, 10**10)
0.000002175591750246891726859055
>>> besselj(2, 10**100)
7.337048736538615712436929e-51
>>> besselj(2, 10**5*j)
(-3.540725411970948860173735e+43426 + 4.4949812409615803110051e-43433j)
The Bessel functions of the first kind satisfy simple
symmetries around `x = 0`::
>>> mp.dps = 15
>>> nprint([besselj(n,0) for n in range(5)])
[1.0, 0.0, 0.0, 0.0, 0.0]
>>> nprint([besselj(n,pi) for n in range(5)])
[-0.304242, 0.284615, 0.485434, 0.333458, 0.151425]
>>> nprint([besselj(n,-pi) for n in range(5)])
[-0.304242, -0.284615, 0.485434, -0.333458, 0.151425]
Roots of Bessel functions are often used::
>>> nprint([findroot(j0, k) for k in [2, 5, 8, 11, 14]])
[2.40483, 5.52008, 8.65373, 11.7915, 14.9309]
>>> nprint([findroot(j1, k) for k in [3, 7, 10, 13, 16]])
[3.83171, 7.01559, 10.1735, 13.3237, 16.4706]
The roots are not periodic, but the distance between successive
roots asymptotically approaches `\pi`. Bessel functions of
the first kind have the following normalization::
>>> quadosc(j0, [0, inf], period=2*pi)
1.0
>>> quadosc(j1, [0, inf], period=2*pi)
1.0
For `n = 1/2` or `n = -1/2`, the Bessel function reduces to a
trigonometric function::
>>> x = 10
>>> besselj(0.5, x), sqrt(2/(pi*x))*sin(x)
(-0.13726373575505, -0.13726373575505)
>>> besselj(-0.5, x), sqrt(2/(pi*x))*cos(x)
(-0.211708866331398, -0.211708866331398)
Derivatives of any order can be computed (negative orders
correspond to integration)::
>>> mp.dps = 25
>>> besselj(0, 7.5, 1)
-0.1352484275797055051822405
>>> diff(lambda x: besselj(0,x), 7.5)
-0.1352484275797055051822405
>>> besselj(0, 7.5, 10)
-0.1377811164763244890135677
>>> diff(lambda x: besselj(0,x), 7.5, 10)
-0.1377811164763244890135677
>>> besselj(0,7.5,-1) - besselj(0,3.5,-1)
-0.1241343240399987693521378
>>> quad(j0, [3.5, 7.5])
-0.1241343240399987693521378
Differentiation with a noninteger order gives the fractional derivative
in the sense of the Riemann-Liouville differintegral, as computed by
:func:`~mpmath.differint`::
>>> mp.dps = 15
>>> besselj(1, 3.5, 0.75)
-0.385977722939384
>>> differint(lambda x: besselj(1, x), 3.5, 0.75)
-0.385977722939384
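Bessel functions of the first kind satisfy the three-term recurrence
`J_{n-1}(x) + J_{n+1}(x) = (2n/x) J_n(x)` (an illustrative check
with `n = 1`)::
>>> x = mpf(2.5)
>>> chop(besselj(0,x) + besselj(2,x) - 2/x*besselj(1,x))
0.0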
"""
besseli = r"""
``besseli(n, x, derivative=0)`` gives the modified Bessel function of the
first kind,
.. math ::
I_n(x) = i^{-n} J_n(ix).
With *derivative* = `m \ne 0`, the `m`-th derivative
.. math ::
\frac{d^m}{dx^m} I_n(x)
is computed.
**Plots**
.. literalinclude :: /plots/besseli.py
.. image :: /plots/besseli.png
.. literalinclude :: /plots/besseli_c.py
.. image :: /plots/besseli_c.png
**Examples**
Some values of `I_n(x)`::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> besseli(0,0)
1.0
>>> besseli(1,0)
0.0
>>> besseli(0,1)
1.266065877752008335598245
>>> besseli(3.5, 2+3j)
(-0.2904369752642538144289025 - 0.4469098397654815837307006j)
Arguments may be large::
>>> besseli(2, 1000)
2.480717210191852440616782e+432
>>> besseli(2, 10**10)
4.299602851624027900335391e+4342944813
>>> besseli(2, 6000+10000j)
(-2.114650753239580827144204e+2603 + 4.385040221241629041351886e+2602j)
For integers `n`, the following integral representation holds::
>>> mp.dps = 15
>>> n = 3
>>> x = 2.3
>>> quad(lambda t: exp(x*cos(t))*cos(n*t), [0,pi])/pi
0.349223221159309
>>> besseli(n,x)
0.349223221159309
Derivatives and antiderivatives of any order can be computed::
>>> mp.dps = 25
>>> besseli(2, 7.5, 1)
195.8229038931399062565883
>>> diff(lambda x: besseli(2,x), 7.5)
195.8229038931399062565883
>>> besseli(2, 7.5, 10)
153.3296508971734525525176
>>> diff(lambda x: besseli(2,x), 7.5, 10)
153.3296508971734525525176
>>> besseli(2,7.5,-1) - besseli(2,3.5,-1)
202.5043900051930141956876
>>> quad(lambda x: besseli(2,x), [3.5, 7.5])
202.5043900051930141956876
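The modified Bessel functions satisfy the recurrence
`I_{n-1}(x) - I_{n+1}(x) = (2n/x) I_n(x)` (an illustrative check
with `n = 1`)::
>>> x = mpf(1.5)
>>> chop(besseli(0,x) - besseli(2,x) - 2/x*besseli(1,x))
0.0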
"""
bessely = r"""
``bessely(n, x, derivative=0)`` gives the Bessel function of the second kind,
.. math ::
Y_n(x) = \frac{J_n(x) \cos(\pi n) - J_{-n}(x)}{\sin(\pi n)}.
For `n` an integer, this formula should be understood as a
limit. With *derivative* = `m \ne 0`, the `m`-th derivative
.. math ::
\frac{d^m}{dx^m} Y_n(x)
is computed.
**Plots**
.. literalinclude :: /plots/bessely.py
.. image :: /plots/bessely.png
.. literalinclude :: /plots/bessely_c.py
.. image :: /plots/bessely_c.png
**Examples**
Some values of `Y_n(x)`::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> bessely(0,0), bessely(1,0), bessely(2,0)
(-inf, -inf, -inf)
>>> bessely(1, pi)
0.3588729167767189594679827
>>> bessely(0.5, 3+4j)
(9.242861436961450520325216 - 3.085042824915332562522402j)
Arguments may be large::
>>> bessely(0, 10000)
0.00364780555898660588668872
>>> bessely(2.5, 10**50)
-4.8952500412050989295774e-26
>>> bessely(2.5, -10**50)
(0.0 + 4.8952500412050989295774e-26j)
Derivatives and antiderivatives of any order can be computed::
>>> bessely(2, 3.5, 1)
0.3842618820422660066089231
>>> diff(lambda x: bessely(2, x), 3.5)
0.3842618820422660066089231
>>> bessely(0.5, 3.5, 1)
-0.2066598304156764337900417
>>> diff(lambda x: bessely(0.5, x), 3.5)
-0.2066598304156764337900417
>>> diff(lambda x: bessely(2, x), 0.5, 10)
-208173867409.5547350101511
>>> bessely(2, 0.5, 10)
-208173867409.5547350101511
>>> bessely(2, 100.5, 100)
0.02668487547301372334849043
>>> quad(lambda x: bessely(2,x), [1,3])
-1.377046859093181969213262
>>> bessely(2,3,-1) - bessely(2,1,-1)
-1.377046859093181969213262
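For noninteger `n`, the defining formula in terms of `J_{\pm n}` can
be verified directly (an illustrative check)::
>>> n, x = mpf(2.5), mpf(3.5)
>>> chop(bessely(n,x) - (besselj(n,x)*cospi(n) - besselj(-n,x))/sinpi(n))
0.0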
"""
besselk = r"""
``besselk(n, x)`` gives the modified Bessel function of the
second kind,
.. math ::
K_n(x) = \frac{\pi}{2} \frac{I_{-n}(x)-I_{n}(x)}{\sin(\pi n)}
For `n` an integer, this formula should be understood as a
limit.
**Plots**
.. literalinclude :: /plots/besselk.py
.. image :: /plots/besselk.png
.. literalinclude :: /plots/besselk_c.py
.. image :: /plots/besselk_c.png
**Examples**
Evaluation is supported for arbitrary complex arguments::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> besselk(0,1)
0.4210244382407083333356274
>>> besselk(0, -1)
(0.4210244382407083333356274 - 3.97746326050642263725661j)
>>> besselk(3.5, 2+3j)
(-0.02090732889633760668464128 + 0.2464022641351420167819697j)
>>> besselk(2+3j, 0.5)
(0.9615816021726349402626083 + 0.1918250181801757416908224j)
Arguments may be large::
>>> besselk(0, 100)
4.656628229175902018939005e-45
>>> besselk(1, 10**6)
4.131967049321725588398296e-434298
>>> besselk(1, 10**6*j)
(0.001140348428252385844876706 - 0.0005200017201681152909000961j)
>>> besselk(4.5, fmul(10**50, j, exact=True))
(1.561034538142413947789221e-26 + 1.243554598118700063281496e-25j)
The point `x = 0` is a singularity (logarithmic if `n = 0`)::
>>> besselk(0,0)
+inf
>>> besselk(1,0)
+inf
>>> for n in range(-4, 5):
... print(besselk(n, '1e-1000'))
...
4.8e+4001
8.0e+3000
2.0e+2000
1.0e+1000
2302.701024509704096466802
1.0e+1000
2.0e+2000
8.0e+3000
4.8e+4001
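For noninteger `n`, the defining formula can be verified directly
(an illustrative check)::
>>> n, x = mpf(2.5), mpf(3.5)
>>> chop(besselk(n,x) - pi/2*(besseli(-n,x) - besseli(n,x))/sinpi(n))
0.0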
"""
hankel1 = r"""
``hankel1(n,x)`` computes the Hankel function of the first kind,
which is the complex combination of Bessel functions given by
.. math ::
H_n^{(1)}(x) = J_n(x) + i Y_n(x).
**Plots**
.. literalinclude :: /plots/hankel1.py
.. image :: /plots/hankel1.png
.. literalinclude :: /plots/hankel1_c.py
.. image :: /plots/hankel1_c.png
**Examples**
The Hankel function is generally complex-valued::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> hankel1(2, pi)
(0.4854339326315091097054957 - 0.0999007139290278787734903j)
>>> hankel1(3.5, pi)
(0.2340002029630507922628888 - 0.6419643823412927142424049j)
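The decomposition into Bessel functions of the first and second kind
is easy to verify (an illustrative check)::
>>> chop(hankel1(2, pi) - (besselj(2, pi) + j*bessely(2, pi)))
0.0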
"""
hankel2 = r"""
``hankel2(n,x)`` computes the Hankel function of the second kind,
which is the complex combination of Bessel functions given by
.. math ::
H_n^{(2)}(x) = J_n(x) - i Y_n(x).
**Plots**
.. literalinclude :: /plots/hankel2.py
.. image :: /plots/hankel2.png
.. literalinclude :: /plots/hankel2_c.py
.. image :: /plots/hankel2_c.png
**Examples**
The Hankel function is generally complex-valued::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> hankel2(2, pi)
(0.4854339326315091097054957 + 0.0999007139290278787734903j)
>>> hankel2(3.5, pi)
(0.2340002029630507922628888 + 0.6419643823412927142424049j)
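The decomposition into Bessel functions of the first and second kind
is easy to verify (an illustrative check)::
>>> chop(hankel2(2, pi) - (besselj(2, pi) - j*bessely(2, pi)))
0.0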
"""
lambertw = r"""
The Lambert W function `W(z)` is defined as the inverse function
of `w \exp(w)`. In other words, the value of `W(z)` is such that
`z = W(z) \exp(W(z))` for any complex number `z`.
The Lambert W function is a multivalued function with infinitely
many branches `W_k(z)`, indexed by `k \in \mathbb{Z}`. Each branch
gives a different solution `w` of the equation `z = w \exp(w)`.
All branches are supported by :func:`~mpmath.lambertw`:
* ``lambertw(z)`` gives the principal solution (branch 0)
* ``lambertw(z, k)`` gives the solution on branch `k`
The Lambert W function has two partially real branches: the
principal branch (`k = 0`) is real for real `z > -1/e`, and the
`k = -1` branch is real for `-1/e < z < 0`. All branches except
`k = 0` have a logarithmic singularity at `z = 0`.
The definition, implementation and choice of branches
is based on [Corless]_.
**Plots**
.. literalinclude :: /plots/lambertw.py
.. image :: /plots/lambertw.png
.. literalinclude :: /plots/lambertw_c.py
.. image :: /plots/lambertw_c.png
**Basic examples**
The Lambert W function is the inverse of `w \exp(w)`::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> w = lambertw(1)
>>> w
0.5671432904097838729999687
>>> w*exp(w)
1.0
Any branch gives a valid inverse::
>>> w = lambertw(1, k=3)
>>> w
(-2.853581755409037807206819 + 17.11353553941214591260783j)
>>> w = lambertw(1, k=25)
>>> w
(-5.047020464221569709378686 + 155.4763860949415867162066j)
>>> chop(w*exp(w))
1.0
**Applications to equation-solving**
The Lambert W function may be used to solve various kinds of
equations, such as finding the value of the infinite power
tower `z^{z^{z^{\ldots}}}`::
>>> def tower(z, n):
... if n == 0:
... return z
... return z ** tower(z, n-1)
...
>>> tower(mpf(0.5), 100)
0.6411857445049859844862005
>>> -lambertw(-log(0.5))/log(0.5)
0.6411857445049859844862005
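More generally, equations of the form `x b^x = a` are solved by
`x = W(a \log b)/\log b` (an illustrative sketch, here with
`x 2^x = 5`)::
>>> x = lambertw(5*log(2))/log(2)
>>> chop(x*2**x - 5)
0.0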
**Properties**
The Lambert W function grows roughly like the natural logarithm
for large arguments::
>>> lambertw(1000); log(1000)
5.249602852401596227126056
6.907755278982137052053974
>>> lambertw(10**100); log(10**100)
224.8431064451185015393731
230.2585092994045684017991
The principal branch of the Lambert W function has a rational
Taylor series expansion around `z = 0`::
>>> nprint(taylor(lambertw, 0, 6), 10)
[0.0, 1.0, -1.0, 1.5, -2.666666667, 5.208333333, -10.8]
Some special values and limits are::
>>> lambertw(0)
0.0
>>> lambertw(1)
0.5671432904097838729999687
>>> lambertw(e)
1.0
>>> lambertw(inf)
+inf
>>> lambertw(0, k=-1)
-inf
>>> lambertw(0, k=3)
-inf
>>> lambertw(inf, k=2)
(+inf + 12.56637061435917295385057j)
>>> lambertw(inf, k=3)
(+inf + 18.84955592153875943077586j)
>>> lambertw(-inf, k=3)
(+inf + 21.9911485751285526692385j)
The `k = 0` and `k = -1` branches join at `z = -1/e` where
`W(z) = -1` for both branches. Since `-1/e` can only be represented
approximately with binary floating-point numbers, evaluating the
Lambert W function at this point only gives `-1` approximately::
>>> lambertw(-1/e, 0)
-0.9999999999998371330228251
>>> lambertw(-1/e, -1)
-1.000000000000162866977175
If `-1/e` happens to round in the negative direction, there might be
a small imaginary part::
>>> mp.dps = 15
>>> lambertw(-1/e)
(-1.0 + 8.22007971483662e-9j)
>>> lambertw(-1/e+eps)
-0.999999966242188
**References**
1. [Corless]_
"""
barnesg = r"""
Evaluates the Barnes G-function, which generalizes the
superfactorial (:func:`~mpmath.superfac`) and by extension also the
hyperfactorial (:func:`~mpmath.hyperfac`) to the complex numbers
in an analogous way to how the gamma function generalizes
the ordinary factorial.
The Barnes G-function may be defined in terms of a Weierstrass
product:
.. math ::
G(z+1) = (2\pi)^{z/2} e^{-[z(z+1)+\gamma z^2]/2}
\prod_{n=1}^\infty
\left[\left(1+\frac{z}{n}\right)^ne^{-z+z^2/(2n)}\right]
For positive integers `n`, we have the relation to superfactorials
`G(n) = \mathrm{sf}(n-2) = 0! \cdot 1! \cdots (n-2)!`.
**Examples**
Some elementary values and limits of the Barnes G-function::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> barnesg(1), barnesg(2), barnesg(3)
(1.0, 1.0, 1.0)
>>> barnesg(4)
2.0
>>> barnesg(5)
12.0
>>> barnesg(6)
288.0
>>> barnesg(7)
34560.0
>>> barnesg(8)
24883200.0
>>> barnesg(inf)
+inf
>>> barnesg(0), barnesg(-1), barnesg(-2)
(0.0, 0.0, 0.0)
Closed-form values are known for some rational arguments::
>>> barnesg('1/2')
0.603244281209446
>>> sqrt(exp(0.25+log(2)/12)/sqrt(pi)/glaisher**3)
0.603244281209446
>>> barnesg('1/4')
0.29375596533861
>>> nthroot(exp('3/8')/exp(catalan/pi)/
... gamma(0.25)**3/sqrt(glaisher)**9, 4)
0.29375596533861
The Barnes G-function satisfies the functional equation
`G(z+1) = \Gamma(z) G(z)`::
>>> z = pi
>>> barnesg(z+1)
2.39292119327948
>>> gamma(z)*barnesg(z)
2.39292119327948
The asymptotic growth rate of the Barnes G-function is related to
the Glaisher-Kinkelin constant::
>>> limit(lambda n: barnesg(n+1)/(n**(n**2/2-mpf(1)/12)*
... (2*pi)**(n/2)*exp(-3*n**2/4)), inf)
0.847536694177301
>>> exp('1/12')/glaisher
0.847536694177301
The Barnes G-function can be differentiated in closed form::
>>> z = 3
>>> diff(barnesg, z)
0.264507203401607
>>> barnesg(z)*((z-1)*psi(0,z)-z+(log(2*pi)+1)/2)
0.264507203401607
Evaluation is supported for arbitrary arguments and at arbitrary
precision::
>>> barnesg(6.5)
2548.7457695685
>>> barnesg(-pi)
0.00535976768353037
>>> barnesg(3+4j)
(-0.000676375932234244 - 4.42236140124728e-5j)
>>> mp.dps = 50
>>> barnesg(1/sqrt(2))
0.81305501090451340843586085064413533788206204124732
>>> q = barnesg(10j)
>>> q.real
0.000000000021852360840356557241543036724799812371995850552234
>>> q.imag
-0.00000000000070035335320062304849020654215545839053210041457588
>>> mp.dps = 15
>>> barnesg(100)
3.10361006263698e+6626
>>> barnesg(-101)
0.0
>>> barnesg(-10.5)
5.94463017605008e+25
>>> barnesg(-10000.5)
-6.14322868174828e+167480422
>>> barnesg(1000j)
(5.21133054865546e-1173597 + 4.27461836811016e-1173597j)
>>> barnesg(-1000+1000j)
(2.43114569750291e+1026623 + 2.24851410674842e+1026623j)
**References**
1. Whittaker & Watson, *A Course of Modern Analysis*,
Cambridge University Press, 4th edition (1927), p.264
2. http://en.wikipedia.org/wiki/Barnes_G-function
3. http://mathworld.wolfram.com/BarnesG-Function.html
"""
superfac = r"""
Computes the superfactorial, defined as the product of
consecutive factorials
.. math ::
\mathrm{sf}(n) = \prod_{k=1}^n k!
For general complex `z`, `\mathrm{sf}(z)` is defined
in terms of the Barnes G-function (see :func:`~mpmath.barnesg`).
**Examples**
The first few superfactorials are (OEIS A000178)::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> for n in range(10):
... print("%s %s" % (n, superfac(n)))
...
0 1.0
1 1.0
2 2.0
3 12.0
4 288.0
5 34560.0
6 24883200.0
7 125411328000.0
8 5.05658474496e+15
9 1.83493347225108e+21
Superfactorials grow very rapidly::
>>> superfac(1000)
3.24570818422368e+1177245
>>> superfac(10**10)
2.61398543581249e+467427913956904067453
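By the definition above, the superfactorial is a shifted Barnes
G-function, `\mathrm{sf}(n) = G(n+2)` (an illustrative check)::
>>> superfac(5); barnesg(7)
34560.0
34560.0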
Evaluation is supported for arbitrary arguments::
>>> mp.dps = 25
>>> superfac(pi)
17.20051550121297985285333
>>> superfac(2+3j)
(-0.005915485633199789627466468 + 0.008156449464604044948738263j)
>>> diff(superfac, 1)
0.2645072034016070205673056
**References**
1. http://oeis.org/A000178
"""
hyperfac = r"""
Computes the hyperfactorial, defined for integers as the product
.. math ::
H(n) = \prod_{k=1}^n k^k.
The hyperfactorial satisfies the recurrence formula `H(z) = z^z H(z-1)`.
It can be defined more generally in terms of the Barnes G-function (see
:func:`~mpmath.barnesg`) and the gamma function by the formula
.. math ::
H(z) = \frac{\Gamma(z+1)^z}{G(z+1)}.
The extension to complex numbers can also be done via
the integral representation
.. math ::
H(z) = (2\pi)^{-z/2} \exp \left[
{z+1 \choose 2} + \int_0^z \log(t!)\,dt
\right].
**Examples**
The rapidly-growing sequence of hyperfactorials begins
(OEIS A002109)::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> for n in range(10):
... print("%s %s" % (n, hyperfac(n)))
...
0 1.0
1 1.0
2 4.0
3 108.0
4 27648.0
5 86400000.0
6 4031078400000.0
7 3.3197663987712e+18
8 5.56964379417266e+25
9 2.15779412229419e+34
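These values are consistent with the Barnes G-function formula given
above (an illustrative check)::
>>> hyperfac(4); gamma(5)**4/barnesg(5)
27648.0
27648.0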
Some even larger hyperfactorials are::
>>> hyperfac(1000)
5.46458120882585e+1392926
>>> hyperfac(10**10)
4.60408207642219e+489142638002418704309
The hyperfactorial can be evaluated for arbitrary arguments::
>>> hyperfac(0.5)
0.880449235173423
>>> diff(hyperfac, 1)
0.581061466795327
>>> hyperfac(pi)
205.211134637462
>>> hyperfac(-10+1j)
(3.01144471378225e+46 - 2.45285242480185e+46j)
The recurrence property of the hyperfactorial holds
generally::
>>> z = 3-4*j
>>> hyperfac(z)
(-4.49795891462086e-7 - 6.33262283196162e-7j)
>>> z**z * hyperfac(z-1)
(-4.49795891462086e-7 - 6.33262283196162e-7j)
>>> z = mpf(-0.6)
>>> chop(z**z * hyperfac(z-1))
1.28170142849352
>>> hyperfac(z)
1.28170142849352
The hyperfactorial may also be computed using the integral
definition::
>>> z = 2.5
>>> hyperfac(z)
15.9842119922237
>>> (2*pi)**(-z/2)*exp(binomial(z+1,2) +
... quad(lambda t: loggamma(t+1), [0, z]))
15.9842119922237
:func:`~mpmath.hyperfac` supports arbitrary-precision evaluation::
>>> mp.dps = 50
>>> hyperfac(10)
215779412229418562091680268288000000000000000.0
>>> hyperfac(1/sqrt(2))
0.89404818005227001975423476035729076375705084390942
**References**
1. http://oeis.org/A002109
2. http://mathworld.wolfram.com/Hyperfactorial.html
"""
rgamma = r"""
Computes the reciprocal of the gamma function, `1/\Gamma(z)`. This
function evaluates to zero at the poles
of the gamma function, `z = 0, -1, -2, \ldots`.
**Examples**
Basic examples::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> rgamma(1)
1.0
>>> rgamma(4)
0.1666666666666666666666667
>>> rgamma(0); rgamma(-1)
0.0
0.0
>>> rgamma(1000)
2.485168143266784862783596e-2565
>>> rgamma(inf)
0.0
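The reciprocal relation `\Gamma(z)\,\mathrm{rgamma}(z) = 1` holds away
from the poles (an illustrative check)::
>>> chop(gamma(3.5)*rgamma(3.5) - 1)
0.0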
A definite integral that can be evaluated in terms of elementary
integrals::
>>> quad(rgamma, [0,inf])
2.807770242028519365221501
>>> e + quad(lambda t: exp(-t)/(pi**2+log(t)**2), [0,inf])
2.807770242028519365221501
"""
loggamma = r"""
Computes the principal branch of the log-gamma function,
`\ln \Gamma(z)`. Unlike `\ln(\Gamma(z))`, which has infinitely many
complex branch cuts, the principal log-gamma function only has a single
branch cut along the negative half-axis. The principal branch
continuously matches the asymptotic Stirling expansion
.. math ::
\ln \Gamma(z) \sim \frac{\ln(2 \pi)}{2} +
\left(z-\frac{1}{2}\right) \ln(z) - z + O(z^{-1}).
The real parts of both functions agree, but their imaginary
parts generally differ by `2 n \pi` for some `n \in \mathbb{Z}`.
They coincide for `z \in \mathbb{R}, z > 0`.
Computationally, it is advantageous to use :func:`~mpmath.loggamma`
instead of :func:`~mpmath.gamma` for extremely large arguments.
**Examples**
Comparing with `\ln(\Gamma(z))`::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> loggamma('13.2'); log(gamma('13.2'))
20.49400419456603678498394
20.49400419456603678498394
>>> loggamma(3+4j)
(-1.756626784603784110530604 + 4.742664438034657928194889j)
>>> log(gamma(3+4j))
(-1.756626784603784110530604 - 1.540520869144928548730397j)
>>> log(gamma(3+4j)) + 2*pi*j
(-1.756626784603784110530604 + 4.742664438034657928194889j)
Note the imaginary parts for negative arguments::
>>> loggamma(-0.5); loggamma(-1.5); loggamma(-2.5)
(1.265512123484645396488946 - 3.141592653589793238462643j)
(0.8600470153764810145109327 - 6.283185307179586476925287j)
(-0.05624371649767405067259453 - 9.42477796076937971538793j)
Some special values::
>>> loggamma(1); loggamma(2)
0.0
0.0
>>> loggamma(3); +ln2
0.6931471805599453094172321
0.6931471805599453094172321
>>> loggamma(3.5); log(15*sqrt(pi)/8)
1.200973602347074224816022
1.200973602347074224816022
>>> loggamma(inf)
+inf
Huge arguments are permitted::
>>> loggamma('1e30')
6.807755278982137052053974e+31
>>> loggamma('1e300')
6.897755278982137052053974e+302
>>> loggamma('1e3000')
6.906755278982137052053974e+3003
>>> loggamma('1e100000000000000000000')
2.302585092994045684007991e+100000000000000000020
>>> loggamma('1e30j')
(-1.570796326794896619231322e+30 + 6.807755278982137052053974e+31j)
>>> loggamma('1e300j')
(-1.570796326794896619231322e+300 + 6.897755278982137052053974e+302j)
>>> loggamma('1e3000j')
(-1.570796326794896619231322e+3000 + 6.906755278982137052053974e+3003j)
The log-gamma function can be integrated analytically
on any interval of unit length::
>>> z = 0
>>> quad(loggamma, [z,z+1]); log(2*pi)/2
0.9189385332046727417803297
0.9189385332046727417803297
>>> z = 3+4j
>>> quad(loggamma, [z,z+1]); (log(z)-1)*z + log(2*pi)/2
(-0.9619286014994750641314421 + 5.219637303741238195688575j)
(-0.9619286014994750641314421 + 5.219637303741238195688575j)
The derivatives of the log-gamma function are given by the
polygamma function (:func:`~mpmath.psi`)::
>>> diff(loggamma, -4+3j); psi(0, -4+3j)
(1.688493531222971393607153 + 2.554898911356806978892748j)
(1.688493531222971393607153 + 2.554898911356806978892748j)
>>> diff(loggamma, -4+3j, 2); psi(1, -4+3j)
(-0.1539414829219882371561038 - 0.1020485197430267719746479j)
(-0.1539414829219882371561038 - 0.1020485197430267719746479j)
The log-gamma function satisfies an additive form of the
recurrence relation for the ordinary gamma function::
>>> z = 2+3j
>>> loggamma(z); loggamma(z+1) - log(z)
(-2.092851753092733349564189 + 2.302396543466867626153708j)
(-2.092851753092733349564189 + 2.302396543466867626153708j)
"""
siegeltheta = r"""
Computes the Riemann-Siegel theta function,
.. math ::
\theta(t) = \frac{
\log\Gamma\left(\frac{1+2it}{4}\right) -
\log\Gamma\left(\frac{1-2it}{4}\right)
}{2i} - \frac{\log \pi}{2} t.
The Riemann-Siegel theta function is important in
providing the phase factor for the Z-function
(see :func:`~mpmath.siegelz`). Evaluation is supported for real and
complex arguments::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> siegeltheta(0)
0.0
>>> siegeltheta(inf)
+inf
>>> siegeltheta(-inf)
-inf
>>> siegeltheta(1)
-1.767547952812290388302216
>>> siegeltheta(10+0.25j)
(-3.068638039426838572528867 + 0.05804937947429712998395177j)
Arbitrary derivatives may be computed, using the optional *derivative* keyword argument::
>>> siegeltheta(1234, derivative=2)
0.0004051864079114053109473741
>>> diff(siegeltheta, 1234, n=2)
0.0004051864079114053109473741
The Riemann-Siegel theta function has odd symmetry around `t = 0`,
two local extreme points and three real roots including 0 (located
symmetrically)::
>>> nprint(chop(taylor(siegeltheta, 0, 5)))
[0.0, -2.68609, 0.0, 2.69433, 0.0, -6.40218]
>>> findroot(diffun(siegeltheta), 7)
6.28983598883690277966509
>>> findroot(siegeltheta, 20)
17.84559954041086081682634
For large `t`, there is a famous asymptotic formula
for `\theta(t)`, to first order given by::
>>> t = mpf(10**6)
>>> siegeltheta(t)
5488816.353078403444882823
>>> -t*log(2*pi/t)/2-t/2
5488816.745777464310273645
"""
grampoint = r"""
Gives the `n`-th Gram point `g_n`, defined as the solution
to the equation `\theta(g_n) = \pi n` where `\theta(t)`
is the Riemann-Siegel theta function (:func:`~mpmath.siegeltheta`).
The first few Gram points are::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> grampoint(0)
17.84559954041086081682634
>>> grampoint(1)
23.17028270124630927899664
>>> grampoint(2)
27.67018221781633796093849
>>> grampoint(3)
31.71797995476405317955149
Checking the definition::
>>> siegeltheta(grampoint(3))
9.42477796076937971538793
>>> 3*pi
9.42477796076937971538793
A large Gram point::
>>> grampoint(10**10)
3293531632.728335454561153
Gram points are useful when studying the Z-function
(:func:`~mpmath.siegelz`). See the documentation of that function
for additional examples.
:func:`~mpmath.grampoint` can solve the defining equation for
nonintegral `n`. There is a fixed point where `g(x) = x`::
>>> findroot(lambda x: grampoint(x) - x, 10000)
9146.698193171459265866198
**References**
1. http://mathworld.wolfram.com/GramPoint.html
"""
siegelz = r"""
Computes the Z-function, also known as the Riemann-Siegel Z function,
.. math ::
Z(t) = e^{i \theta(t)} \zeta(1/2+it)
where `\zeta(s)` is the Riemann zeta function (:func:`~mpmath.zeta`)
and where `\theta(t)` denotes the Riemann-Siegel theta function
(see :func:`~mpmath.siegeltheta`).
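The defining relation can be checked directly; a small sketch (for real
`t` the product is real up to rounding noise, which :func:`~mpmath.chop`
removes)::

    from mpmath import mp, mpf, j, exp, zeta, siegeltheta, siegelz, chop

    mp.dps = 25
    t = mpf(25)
    lhs = siegelz(t)
    rhs = chop(exp(j*siegeltheta(t)) * zeta(mpf(0.5) + j*t))
    # lhs and rhs agree to working precision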
Evaluation is supported for real and complex arguments::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> siegelz(1)
-0.7363054628673177346778998
>>> siegelz(3+4j)
(-0.1852895764366314976003936 - 0.2773099198055652246992479j)
The first four derivatives are supported, using the
optional *derivative* keyword argument::
>>> siegelz(1234567, derivative=3)
56.89689348495089294249178
>>> diff(siegelz, 1234567, n=3)
56.89689348495089294249178
The Z-function has a Maclaurin expansion::
>>> nprint(chop(taylor(siegelz, 0, 4)))
[-1.46035, 0.0, 2.73588, 0.0, -8.39357]
The Z-function `Z(t)` is equal to `\pm |\zeta(s)|` on the
critical line `s = 1/2+it` (i.e. for real arguments `t`
to `Z`). Its zeros coincide with those of the Riemann zeta
function::
>>> findroot(siegelz, 14)
14.13472514173469379045725
>>> findroot(siegelz, 20)
21.02203963877155499262848
>>> findroot(zeta, 0.5+14j)
(0.5 + 14.13472514173469379045725j)
>>> findroot(zeta, 0.5+20j)
(0.5 + 21.02203963877155499262848j)
Since the Z-function is real-valued on the critical line
(and unlike `|\zeta(s)|` analytic), it is useful for
investigating the zeros of the Riemann zeta function.
For example, one can use a root-finding algorithm based
on sign changes::
>>> findroot(siegelz, [100, 200], solver='bisect')
176.4414342977104188888926
To locate roots, Gram points `g_n` which can be computed
by :func:`~mpmath.grampoint` are useful. If `(-1)^n Z(g_n)` is
positive for two consecutive `n`, then `Z(t)` must have
a zero between those points::
>>> g10 = grampoint(10)
>>> g11 = grampoint(11)
>>> (-1)**10 * siegelz(g10) > 0
True
>>> (-1)**11 * siegelz(g11) > 0
True
>>> findroot(siegelz, [g10, g11], solver='bisect')
56.44624769706339480436776
>>> g10, g11
(54.67523744685325626632663, 57.54516517954725443703014)
"""
riemannr = r"""
Evaluates the Riemann R function, a smooth approximation of the
prime counting function `\pi(x)` (see :func:`~mpmath.primepi`). The Riemann
R function gives a fast numerical approximation useful e.g. to
roughly estimate the number of primes in a given interval.
The Riemann R function is computed using the rapidly convergent Gram
series,
.. math ::
R(x) = 1 + \sum_{k=1}^{\infty}
\frac{\log^k x}{k k! \zeta(k+1)}.
From the Gram series, one sees that the Riemann R function is a
well-defined analytic function (except for a branch cut along
the negative real half-axis); it can be evaluated for arbitrary
real or complex arguments.
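As an illustration, the Gram series can be summed directly with
:func:`~mpmath.nsum`; a minimal sketch, not the library's optimized
implementation::

    from mpmath import mp, log, fac, zeta, nsum, inf

    mp.dps = 15

    def riemann_r_gram(x):
        # R(x) = 1 + sum_{k>=1} (log x)^k / (k * k! * zeta(k+1))
        L = log(x)
        return 1 + nsum(lambda k: L**k/(k*fac(k)*zeta(k+1)), [1, inf])

    # riemann_r_gram(50) should agree with riemannr(50)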
The Riemann R function gives a very accurate approximation
of the prime counting function. For example, it is wrong by at
most 2 for `x < 1000`, and for `x = 10^9` differs from the exact
value of `\pi(x)` by 79, or less than two parts in a million.
It is about 10 times more accurate than the logarithmic integral
estimate (see :func:`~mpmath.li`), which however is even faster to evaluate.
It is orders of magnitude more accurate than the extremely
fast `x/\log x` estimate.
**Examples**
For small arguments, the Riemann R function almost exactly
gives the prime counting function if rounded to the nearest
integer::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> primepi(50), riemannr(50)
(15, 14.9757023241462)
>>> max(abs(primepi(n)-int(round(riemannr(n)))) for n in range(100))
1
>>> max(abs(primepi(n)-int(round(riemannr(n)))) for n in range(300))
2
The Riemann R function can be evaluated for arguments far too large
for exact determination of `\pi(x)` to be computationally
feasible with any presently known algorithm::
>>> riemannr(10**30)
1.46923988977204e+28
>>> riemannr(10**100)
4.3619719871407e+97
>>> riemannr(10**1000)
4.3448325764012e+996
A comparison of the Riemann R function and logarithmic integral estimates
for `\pi(x)` using exact values of `\pi(10^n)` up to `n = 9`.
The fractional error is shown in parentheses::
>>> exact = [4,25,168,1229,9592,78498,664579,5761455,50847534]
>>> for n, p in enumerate(exact):
... n += 1
... r, l = riemannr(10**n), li(10**n)
... rerr, lerr = nstr((r-p)/p,3), nstr((l-p)/p,3)
... print("%i %i %s(%s) %s(%s)" % (n, p, r, rerr, l, lerr))
...
1 4 4.56458314100509(0.141) 6.1655995047873(0.541)
2 25 25.6616332669242(0.0265) 30.1261415840796(0.205)
3 168 168.359446281167(0.00214) 177.609657990152(0.0572)
4 1229 1226.93121834343(-0.00168) 1246.13721589939(0.0139)
5 9592 9587.43173884197(-0.000476) 9629.8090010508(0.00394)
6 78498 78527.3994291277(0.000375) 78627.5491594622(0.00165)
7 664579 664667.447564748(0.000133) 664918.405048569(0.000511)
8 5761455 5761551.86732017(1.68e-5) 5762209.37544803(0.000131)
9 50847534 50847455.4277214(-1.55e-6) 50849234.9570018(3.35e-5)
The derivative of the Riemann R function gives the approximate
probability for a number of magnitude `x` to be prime::
>>> diff(riemannr, 1000)
0.141903028110784
>>> mpf(primepi(1050) - primepi(950)) / 100
0.15
Evaluation is supported for arbitrary arguments and at arbitrary
precision::
>>> mp.dps = 30
>>> riemannr(7.5)
3.72934743264966261918857135136
>>> riemannr(-4+2j)
(-0.551002208155486427591793957644 + 2.16966398138119450043195899746j)
"""
primepi = r"""
Evaluates the prime counting function, `\pi(x)`, which gives
the number of primes less than or equal to `x`. The argument
`x` may be fractional.
The prime counting function is very expensive to evaluate
precisely for large `x`, and the present implementation is
not optimized in any way. For numerical approximation of the
prime counting function, it is better to use :func:`~mpmath.primepi2`
or :func:`~mpmath.riemannr`.
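For illustration only, `\pi(x)` for small `x` can be computed by naive
trial division; a sketch (``primepi_naive`` is a hypothetical helper,
unrelated to the actual implementation)::

    def primepi_naive(x):
        # count primes <= x by trial division; usable only for small x
        n, count = int(x), 0
        for k in range(2, n + 1):
            if all(k % d for d in range(2, int(k**0.5) + 1)):
                count += 1
        return count

    # primepi_naive(100) == 25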
Some values of the prime counting function::
>>> from mpmath import *
>>> [primepi(k) for k in range(20)]
[0, 0, 1, 2, 2, 3, 3, 4, 4, 4, 4, 5, 5, 6, 6, 6, 6, 7, 7, 8]
>>> primepi(3.5)
2
>>> primepi(100000)
9592
"""
primepi2 = r"""
Returns an interval (as an ``mpi`` instance) providing bounds
for the value of the prime counting function `\pi(x)`. For small
`x`, :func:`~mpmath.primepi2` returns an exact interval based on
the output of :func:`~mpmath.primepi`. For `x > 2656`, a loose interval
based on Schoenfeld's inequality
.. math ::
|\pi(x) - \mathrm{li}(x)| < \frac{\sqrt x \log x}{8 \pi}
is returned. This estimate is rigorous assuming the truth of
the Riemann hypothesis, and can be computed very quickly.
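A sketch of how such an interval can be formed from the inequality (the
actual implementation may round and tighten the endpoints differently)::

    from mpmath import mp, mpf, li, log, sqrt, pi

    mp.dps = 15

    def schoenfeld_interval(x):
        # |pi(x) - li(x)| < sqrt(x)*log(x)/(8*pi) for x >= 2657, assuming RH
        x = mpf(x)
        r = sqrt(x)*log(x)/(8*pi)
        return li(x) - r, li(x) + r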
**Examples**
Exact values of the prime counting function for small `x`::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> iv.dps = 15; iv.pretty = True
>>> primepi2(10)
[4.0, 4.0]
>>> primepi2(100)
[25.0, 25.0]
>>> primepi2(1000)
[168.0, 168.0]
Loose intervals are generated for moderately large `x`::
>>> primepi2(10000), primepi(10000)
([1209.0, 1283.0], 1229)
>>> primepi2(50000), primepi(50000)
([5070.0, 5263.0], 5133)
As `x` increases, the absolute error gets worse while the relative
error improves. The exact value of `\pi(10^{23})` is
1925320391606803968923, and :func:`~mpmath.primepi2` gives 9 significant
digits::
>>> p = primepi2(10**23)
>>> p
[1.9253203909477020467e+21, 1.925320392280406229e+21]
>>> mpf(p.delta) / mpf(p.a)
6.9219865355293e-10
A more precise, nonrigorous estimate for `\pi(x)` can be
obtained using the Riemann R function (:func:`~mpmath.riemannr`).
For large enough `x`, the value returned by :func:`~mpmath.primepi2`
essentially amounts to a small perturbation of the value returned by
:func:`~mpmath.riemannr`::
>>> primepi2(10**100)
[4.3619719871407024816e+97, 4.3619719871407032404e+97]
>>> riemannr(10**100)
4.3619719871407e+97
"""
primezeta = r"""
Computes the prime zeta function, which is defined
in analogy with the Riemann zeta function (:func:`~mpmath.zeta`)
as
.. math ::
P(s) = \sum_p \frac{1}{p^s}
where the sum is taken over all prime numbers `p`. Although
this sum only converges for `\mathrm{Re}(s) > 1`, the
function is defined by analytic continuation in the
half-plane `\mathrm{Re}(s) > 0`.
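One classical route to the continuation is the Moebius inversion
`P(s) = \sum_{n\ge1} \mu(n) \log\zeta(ns)/n`; a minimal sketch for real
`s > 1` (branch choices require more care for complex `s`, and this is
not necessarily the library's own algorithm)::

    from mpmath import mp, mpf, moebius, zeta, log, fsum

    mp.dps = 30

    def primezeta_via_zeta(s, N=50):
        # tail terms decay like 2^(-N*s), so modest N suffices for s > 1
        return fsum(moebius(n)/mpf(n) * log(zeta(n*s)) for n in range(1, N+1))

    # primezeta_via_zeta(2) should agree with primezeta(2)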
**Examples**
Arbitrary-precision evaluation for real and complex arguments is
supported::
>>> from mpmath import *
>>> mp.dps = 30; mp.pretty = True
>>> primezeta(2)
0.452247420041065498506543364832
>>> primezeta(pi)
0.15483752698840284272036497397
>>> mp.dps = 50
>>> primezeta(3)
0.17476263929944353642311331466570670097541212192615
>>> mp.dps = 20
>>> primezeta(3+4j)
(-0.12085382601645763295 - 0.013370403397787023602j)
The prime zeta function has a logarithmic pole at `s = 1`,
with residue equal to the difference of the Mertens and
Euler constants::
>>> primezeta(1)
+inf
>>> extradps(25)(lambda x: primezeta(1+x)+log(x))(+eps)
-0.31571845205389007685
>>> mertens-euler
-0.31571845205389007685
The analytic continuation to `0 < \mathrm{Re}(s) \le 1`
is implemented. In this strip the function exhibits
very complex behavior; on the unit interval, it has poles at
`1/n` for every squarefree integer `n`::
>>> primezeta(0.5) # Pole at s = 1/2
(-inf + 3.1415926535897932385j)
>>> primezeta(0.25)
(-1.0416106801757269036 + 0.52359877559829887308j)
>>> primezeta(0.5+10j)
(0.54892423556409790529 + 0.45626803423487934264j)
Although evaluation works in principle for any `\mathrm{Re}(s) > 0`,
it should be noted that the evaluation time increases exponentially
as `s` approaches the imaginary axis.
For large `\mathrm{Re}(s)`, `P(s)` is asymptotic to `2^{-s}`::
>>> primezeta(inf)
0.0
>>> primezeta(10), mpf(2)**-10
(0.00099360357443698021786, 0.0009765625)
>>> primezeta(1000)
9.3326361850321887899e-302
>>> primezeta(1000+1000j)
(-3.8565440833654995949e-302 - 8.4985390447553234305e-302j)
**References**
Carl-Erik Froberg, "On the prime zeta function",
BIT 8 (1968), pp. 187-202.
"""
bernpoly = r"""
Evaluates the Bernoulli polynomial `B_n(z)`.
The first few Bernoulli polynomials are::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> for n in range(6):
... nprint(chop(taylor(lambda x: bernpoly(n,x), 0, n)))
...
[1.0]
[-0.5, 1.0]
[0.166667, -1.0, 1.0]
[0.0, 0.5, -1.5, 1.0]
[-0.0333333, 0.0, 1.0, -2.0, 1.0]
[0.0, -0.166667, 0.0, 1.66667, -2.5, 1.0]
At `z = 0`, the Bernoulli polynomial evaluates to a
Bernoulli number (see :func:`~mpmath.bernoulli`)::
>>> bernpoly(12, 0), bernoulli(12)
(-0.253113553113553, -0.253113553113553)
>>> bernpoly(13, 0), bernoulli(13)
(0.0, 0.0)
Evaluation is accurate for large `n` and small `z`::
>>> mp.dps = 25
>>> bernpoly(100, 0.5)
2.838224957069370695926416e+78
>>> bernpoly(1000, 10.5)
5.318704469415522036482914e+1769
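For comparison, the defining binomial sum
`B_n(z) = \sum_{k=0}^n \binom{n}{k} B_k z^{n-k}` can be coded directly,
though it suffers from catastrophic cancellation in exactly the
large-`n` regime shown above; a sketch::

    from mpmath import mp, mpf, binomial, bernoulli, fsum

    mp.dps = 25

    def bernpoly_naive(n, z):
        # direct binomial sum; ill-conditioned for large n
        z = mpf(z)
        return fsum(binomial(n, k)*bernoulli(k)*z**(n-k) for k in range(n + 1))

    # bernpoly_naive(12, 0) reproduces bernoulli(12)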
"""
polylog = r"""
Computes the polylogarithm, defined by the sum
.. math ::
\mathrm{Li}_s(z) = \sum_{k=1}^{\infty} \frac{z^k}{k^s}.
This series is convergent only for `|z| < 1`, so elsewhere
the analytic continuation is implied.
The polylogarithm should not be confused with the logarithmic
integral (also denoted by Li or li), which is implemented
as :func:`~mpmath.li`.
**Examples**
The polylogarithm satisfies a huge number of functional identities.
A sample of polylogarithm evaluations is shown below::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> polylog(1,0.5), log(2)
(0.693147180559945, 0.693147180559945)
>>> polylog(2,0.5), (pi**2-6*log(2)**2)/12
(0.582240526465012, 0.582240526465012)
>>> polylog(2,-phi), -log(phi)**2-pi**2/10
(-1.21852526068613, -1.21852526068613)
>>> polylog(3,0.5), 7*zeta(3)/8-pi**2*log(2)/12+log(2)**3/6
(0.53721319360804, 0.53721319360804)
:func:`~mpmath.polylog` can evaluate the analytic continuation of the
polylogarithm when `s` is an integer::
>>> polylog(2, 10)
(0.536301287357863 - 7.23378441241546j)
>>> polylog(2, -10)
-4.1982778868581
>>> polylog(2, 10j)
(-3.05968879432873 + 3.71678149306807j)
>>> polylog(-2, 10)
-0.150891632373114
>>> polylog(-2, -10)
0.067618332081142
>>> polylog(-2, 10j)
(0.0384353698579347 + 0.0912451798066779j)
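For nonpositive integer orders, the polylogarithm is in fact a rational
function of `z`; the first few closed forms (standard identities) can be
checked against :func:`~mpmath.polylog`::

    from mpmath import mp, mpf, polylog

    mp.dps = 15
    z = mpf(0.25)
    assert abs(polylog(0, z) - z/(1-z)) < 1e-12
    assert abs(polylog(-1, z) - z/(1-z)**2) < 1e-12
    assert abs(polylog(-2, z) - z*(1+z)/(1-z)**3) < 1e-12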
Some more examples, with arguments on the unit circle (note that
the series definition cannot be used for computation here)::
>>> polylog(2,j)
(-0.205616758356028 + 0.915965594177219j)
>>> j*catalan-pi**2/48
(-0.205616758356028 + 0.915965594177219j)
>>> polylog(3,exp(2*pi*j/3))
(-0.534247512515375 + 0.765587078525922j)
>>> -4*zeta(3)/9 + 2*j*pi**3/81
(-0.534247512515375 + 0.765587078525921j)
Polylogarithms of different order are related by integration
and differentiation::
>>> s, z = 3, 0.5
>>> polylog(s+1, z)
0.517479061673899
>>> quad(lambda t: polylog(s,t)/t, [0, z])
0.517479061673899
>>> z*diff(lambda t: polylog(s+2,t), z)
0.517479061673899
Taylor series expansions around `z = 0` are::
>>> for n in range(-3, 4):
... nprint(taylor(lambda x: polylog(n,x), 0, 5))
...
[0.0, 1.0, 8.0, 27.0, 64.0, 125.0]
[0.0, 1.0, 4.0, 9.0, 16.0, 25.0]
[0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
[0.0, 1.0, 1.0, 1.0, 1.0, 1.0]
[0.0, 1.0, 0.5, 0.333333, 0.25, 0.2]
[0.0, 1.0, 0.25, 0.111111, 0.0625, 0.04]
[0.0, 1.0, 0.125, 0.037037, 0.015625, 0.008]
The series defining the polylogarithm is simultaneously
a Taylor series and an L-series. For certain values of `z`, the
polylogarithm reduces to a pure zeta function::
>>> polylog(pi, 1), zeta(pi)
(1.17624173838258, 1.17624173838258)
>>> polylog(pi, -1), -altzeta(pi)
(-0.909670702980385, -0.909670702980385)
Evaluation for arbitrary, nonintegral `s` is supported
for `z` within the unit circle::
>>> polylog(3+4j, 0.25)
(0.24258605789446 - 0.00222938275488344j)
>>> nsum(lambda k: 0.25**k / k**(3+4j), [1,inf])
(0.24258605789446 - 0.00222938275488344j)
It is also supported outside of the unit circle::
>>> polylog(1+j, 20+40j)
(-7.1421172179728 - 3.92726697721369j)
>>> polylog(1+j, 200+400j)
(-5.41934747194626 - 9.94037752563927j)
**References**
1. Richard Crandall, "Note on fast polylogarithm computation"
http://www.reed.edu/physics/faculty/crandall/papers/Polylog.pdf
2. http://en.wikipedia.org/wiki/Polylogarithm
3. http://mathworld.wolfram.com/Polylogarithm.html
"""
bell = r"""
For `n` a nonnegative integer, ``bell(n,x)`` evaluates the Bell
polynomial `B_n(x)`, the first few of which are
.. math ::
B_0(x) = 1
B_1(x) = x
B_2(x) = x^2+x
B_3(x) = x^3+3x^2+x
If `x = 1` or :func:`~mpmath.bell` is called with only one argument, it
gives the `n`-th Bell number `B_n`, which is the number of
partitions of a set with `n` elements. By setting the precision to
at least `\log_{10} B_n` digits, :func:`~mpmath.bell` provides fast
calculation of exact Bell numbers.
In general, :func:`~mpmath.bell` computes
.. math ::
B_n(x) = e^{-x} \left(\mathrm{sinc}(\pi n) + E_n(x)\right)
where `E_n(x)` is the generalized exponential function implemented
by :func:`~mpmath.polyexp`. This is an extension of Dobinski's formula [1],
where the modification is the sinc term ensuring that `B_n(x)` is
continuous in `n`; :func:`~mpmath.bell` can thus be evaluated,
differentiated, etc for arbitrary complex arguments.
**Examples**
Simple evaluations::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> bell(0, 2.5)
1.0
>>> bell(1, 2.5)
2.5
>>> bell(2, 2.5)
8.75
Evaluation for arbitrary complex arguments::
>>> bell(5.75+1j, 2-3j)
(-10767.71345136587098445143 - 15449.55065599872579097221j)
The first few Bell polynomials::
>>> for k in range(7):
... nprint(taylor(lambda x: bell(k,x), 0, k))
...
[1.0]
[0.0, 1.0]
[0.0, 1.0, 1.0]
[0.0, 1.0, 3.0, 1.0]
[0.0, 1.0, 7.0, 6.0, 1.0]
[0.0, 1.0, 15.0, 25.0, 10.0, 1.0]
[0.0, 1.0, 31.0, 90.0, 65.0, 15.0, 1.0]
The first few Bell numbers and complementary Bell numbers::
>>> [int(bell(k)) for k in range(10)]
[1, 1, 2, 5, 15, 52, 203, 877, 4140, 21147]
>>> [int(bell(k,-1)) for k in range(10)]
[1, -1, 0, 1, 1, -2, -9, -9, 50, 267]
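These values can be cross-checked against Dobinski's formula
`B_n = e^{-1} \sum_{k\ge0} k^n/k!`; a sketch::

    from mpmath import mp, nsum, fac, exp, inf

    mp.dps = 25
    n = 7
    # Dobinski's formula, summed directly
    b = exp(-1)*nsum(lambda k: k**n/fac(k), [0, inf])
    # b rounds to bell(7) = 877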
Large Bell numbers::
>>> mp.dps = 50
>>> bell(50)
185724268771078270438257767181908917499221852770.0
>>> bell(50,-1)
-29113173035759403920216141265491160286912.0
Some even larger values::
>>> mp.dps = 25
>>> bell(1000,-1)
-1.237132026969293954162816e+1869
>>> bell(1000)
2.989901335682408421480422e+1927
>>> bell(1000,2)
6.591553486811969380442171e+1987
>>> bell(1000,100.5)
9.101014101401543575679639e+2529
A determinant identity satisfied by Bell numbers::
>>> mp.dps = 15
>>> N = 8
>>> det([[bell(k+j) for j in range(N)] for k in range(N)])
125411328000.0
>>> superfac(N-1)
125411328000.0
**References**
1. http://mathworld.wolfram.com/DobinskisFormula.html
"""
polyexp = r"""
Evaluates the polyexponential function, defined for arbitrary
complex `s`, `z` by the series
.. math ::
E_s(z) = \sum_{k=1}^{\infty} \frac{k^s}{k!} z^k.
`E_s(z)` is constructed from the exponential function analogously
to how the polylogarithm is constructed from the ordinary
logarithm; as a function of `s` (with `z` fixed), `E_s` is an L-series.
It is an entire function of both `s` and `z`.
The polyexponential function provides a generalization of the
Bell polynomials `B_n(x)` (see :func:`~mpmath.bell`) to noninteger orders `n`.
In terms of the Bell polynomials,
.. math ::
E_s(z) = e^z B_s(z) - \mathrm{sinc}(\pi s).
Note that `B_n(x)` and `e^{-x} E_n(x)` are identical if `n`
is a nonzero integer, but not otherwise. In particular, they differ
at `n = 0`.
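The relation to the Bell polynomials is easy to check numerically; a
minimal sketch with arbitrarily chosen arguments::

    from mpmath import mp, mpf, exp, sinc, pi, bell, polyexp

    mp.dps = 25
    s, z = mpf(0.5), mpf(1)
    lhs = polyexp(s, z)
    rhs = exp(z)*bell(s, z) - sinc(pi*s)
    # lhs and rhs agree to working precision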
**Examples**
Evaluating a series::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> nsum(lambda k: sqrt(k)/fac(k), [1,inf])
2.101755547733791780315904
>>> polyexp(0.5,1)
2.101755547733791780315904
Evaluation for arbitrary arguments::
>>> polyexp(-3-4j, 2.5+2j)
(2.351660261190434618268706 + 1.202966666673054671364215j)
Evaluation is accurate for tiny function values::
>>> polyexp(4, -100)
3.499471750566824369520223e-36
If `n` is a nonpositive integer, `E_n` reduces to a special
instance of the hypergeometric function `\,_pF_q`::
>>> n = 3
>>> x = pi
>>> polyexp(-n,x)
4.042192318847986561771779
>>> x*hyper([1]*(n+1), [2]*(n+1), x)
4.042192318847986561771779
"""
cyclotomic = r"""
Evaluates the cyclotomic polynomial `\Phi_n(x)`, defined by
.. math ::
\Phi_n(x) = \prod_{\zeta} (x - \zeta)
where `\zeta` ranges over all primitive `n`-th roots of unity
(see :func:`~mpmath.unitroots`). An equivalent representation, used
for computation, is
.. math ::
\Phi_n(x) = \prod_{d\mid n}(x^d-1)^{\mu(n/d)}
where `\mu(m)` denotes the Moebius function. The cyclotomic
polynomials are integer polynomials, the first of which can be
written explicitly as
.. math ::
\Phi_0(x) = 1
\Phi_1(x) = x - 1
\Phi_2(x) = x + 1
\Phi_3(x) = x^2 + x + 1
\Phi_4(x) = x^2 + 1
\Phi_5(x) = x^4 + x^3 + x^2 + x + 1
\Phi_6(x) = x^2 - x + 1
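The Moebius product used for computation is straightforward to code
directly; a sketch (valid when `x` is not itself an `n`-th root of
unity, where individual factors vanish)::

    from mpmath import mp, mpf, fprod, moebius

    mp.dps = 25

    def cyclotomic_moebius(n, x):
        # product of (x^d - 1)^mu(n/d) over the divisors d of n
        x = mpf(x)
        return fprod((x**d - 1)**moebius(n//d)
                     for d in range(1, n + 1) if n % d == 0)

    # cyclotomic_moebius(10, 3) gives 61.0, matching cyclotomic(10, 3)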
**Examples**
The coefficients of low-order cyclotomic polynomials can be recovered
using Taylor expansion::
>>> from mpmath import *
>>> mp.dps = 15; mp.pretty = True
>>> for n in range(9):
... p = chop(taylor(lambda x: cyclotomic(n,x), 0, 10))
... print("%s %s" % (n, nstr(p[:10+1-p[::-1].index(1)])))
...
0 [1.0]
1 [-1.0, 1.0]
2 [1.0, 1.0]
3 [1.0, 1.0, 1.0]
4 [1.0, 0.0, 1.0]
5 [1.0, 1.0, 1.0, 1.0, 1.0]
6 [1.0, -1.0, 1.0]
7 [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
8 [1.0, 0.0, 0.0, 0.0, 1.0]
The definition as a product over primitive roots may be checked
by computing the product explicitly (for a real argument, this
method will generally introduce numerical noise in the imaginary
part)::
>>> mp.dps = 25
>>> z = 3+4j
>>> cyclotomic(10, z)
(-419.0 - 360.0j)
>>> fprod(z-r for r in unitroots(10, primitive=True))
(-419.0 - 360.0j)
>>> z = 3
>>> cyclotomic(10, z)
61.0
>>> fprod(z-r for r in unitroots(10, primitive=True))
(61.0 - 3.146045605088568607055454e-25j)
Up to permutation, the roots of a given cyclotomic polynomial
can be checked to agree with the list of primitive roots::
>>> p = taylor(lambda x: cyclotomic(6,x), 0, 6)[:3]
>>> for r in polyroots(p[::-1]):
... print(r)
...
(0.5 - 0.8660254037844386467637232j)
(0.5 + 0.8660254037844386467637232j)
>>>
>>> for r in unitroots(6, primitive=True):
... print(r)
...
(0.5 + 0.8660254037844386467637232j)
(0.5 - 0.8660254037844386467637232j)
"""
meijerg = r"""
Evaluates the Meijer G-function, defined as
.. math ::
G^{m,n}_{p,q} \left( \left. \begin{matrix}
a_1, \dots, a_n ; a_{n+1} \dots a_p \\
b_1, \dots, b_m ; b_{m+1} \dots b_q
\end{matrix}\; \right| \; z ; r \right) =
\frac{1}{2 \pi i} \int_L
\frac{\prod_{j=1}^m \Gamma(b_j+s) \prod_{j=1}^n\Gamma(1-a_j-s)}
{\prod_{j=n+1}^{p}\Gamma(a_j+s) \prod_{j=m+1}^q \Gamma(1-b_j-s)}
z^{-s/r} ds
for an appropriate choice of the contour `L` (see references).
There are `p` elements `a_j`.
The argument *a_s* should be a pair of lists, the first containing the
`n` elements `a_1, \ldots, a_n` and the second containing
the `p-n` elements `a_{n+1}, \ldots a_p`.
There are `q` elements `b_j`.
The argument *b_s* should be a pair of lists, the first containing the
`m` elements `b_1, \ldots, b_m` and the second containing
the `q-m` elements `b_{m+1}, \ldots b_q`.
The implicit tuple `(m, n, p, q)` constitutes the order or degree of the
Meijer G-function, and is determined by the lengths of the coefficient
vectors. Confusingly, the indices in this tuple appear in a different order
from the coefficients, but this notation is standard. The many examples
given below should hopefully clear up any potential confusion.
**Algorithm**
The Meijer G-function is evaluated as a combination of hypergeometric series.
There are two versions of the function, which can be selected with
the optional *series* argument.
*series=1* uses a sum of `m` `\,_pF_{q-1}` functions of `z`
*series=2* uses a sum of `n` `\,_qF_{p-1}` functions of `1/z`
The default series is chosen based on the degree and `|z|` in order
to be consistent with Mathematica's convention. This definition of the Meijer G-function
has a discontinuity at `|z| = 1` for some orders, which can
be avoided by explicitly specifying a series.
Keyword arguments are forwarded to :func:`~mpmath.hypercomb`.
**Examples**
Many standard functions are special cases of the Meijer G-function
(possibly rescaled and/or with branch cut corrections). We define
some test parameters::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> a = mpf(0.75)
>>> b = mpf(1.5)
>>> z = mpf(2.25)
The exponential function:
`e^z = G^{1,0}_{0,1} \left( \left. \begin{matrix} - \\ 0 \end{matrix} \;
\right| \; -z \right)`
>>> meijerg([[],[]], [[0],[]], -z)
9.487735836358525720550369
>>> exp(z)
9.487735836358525720550369
The natural logarithm:
`\log(1+z) = G^{1,2}_{2,2} \left( \left. \begin{matrix} 1, 1 \\ 1, 0
\end{matrix} \; \right| \; -z \right)`
>>> meijerg([[1,1],[]], [[1],[0]], z)
1.178654996341646117219023
>>> log(1+z)
1.178654996341646117219023
A rational function:
`\frac{z}{z+1} = G^{1,2}_{2,2} \left( \left. \begin{matrix} 1, 1 \\ 1, 1
\end{matrix} \; \right| \; z \right)`
>>> meijerg([[1,1],[]], [[1],[1]], z)
0.6923076923076923076923077
>>> z/(z+1)
0.6923076923076923076923077
The sine and cosine functions:
`\frac{1}{\sqrt \pi} \sin(2 \sqrt z) = G^{1,0}_{0,2} \left( \left. \begin{matrix}
- \\ \frac{1}{2}, 0 \end{matrix} \; \right| \; z \right)`
`\frac{1}{\sqrt \pi} \cos(2 \sqrt z) = G^{1,0}_{0,2} \left( \left. \begin{matrix}
- \\ 0, \frac{1}{2} \end{matrix} \; \right| \; z \right)`
>>> meijerg([[],[]], [[0.5],[0]], (z/2)**2)
0.4389807929218676682296453
>>> sin(z)/sqrt(pi)
0.4389807929218676682296453
>>> meijerg([[],[]], [[0],[0.5]], (z/2)**2)
-0.3544090145996275423331762
>>> cos(z)/sqrt(pi)
-0.3544090145996275423331762
Bessel functions:
`J_a(2 \sqrt z) = G^{1,0}_{0,2} \left( \left.
\begin{matrix} - \\ \frac{a}{2}, -\frac{a}{2}
\end{matrix} \; \right| \; z \right)`
`Y_a(2 \sqrt z) = G^{2,0}_{1,3} \left( \left.
\begin{matrix} \frac{-a-1}{2} \\ \frac{a}{2}, -\frac{a}{2}, \frac{-a-1}{2}
\end{matrix} \; \right| \; z \right)`
`(-z)^{a/2} z^{-a/2} I_a(2 \sqrt z) = G^{1,0}_{0,2} \left( \left.
\begin{matrix} - \\ \frac{a}{2}, -\frac{a}{2}
\end{matrix} \; \right| \; -z \right)`
`2 K_a(2 \sqrt z) = G^{2,0}_{0,2} \left( \left.
\begin{matrix} - \\ \frac{a}{2}, -\frac{a}{2}
\end{matrix} \; \right| \; z \right)`
As the example with the Bessel *I* function shows, a branch
factor is required for some arguments when inverting the square root::
>>> meijerg([[],[]], [[a/2],[-a/2]], (z/2)**2)
0.5059425789597154858527264
>>> besselj(a,z)
0.5059425789597154858527264
>>> meijerg([[],[(-a-1)/2]], [[a/2,-a/2],[(-a-1)/2]], (z/2)**2)
0.1853868950066556941442559
>>> bessely(a, z)
0.1853868950066556941442559
>>> meijerg([[],[]], [[a/2],[-a/2]], -(z/2)**2)
(0.8685913322427653875717476 + 2.096964974460199200551738j)
>>> (-z)**(a/2) / z**(a/2) * besseli(a, z)
(0.8685913322427653875717476 + 2.096964974460199200551738j)
>>> 0.5*meijerg([[],[]], [[a/2,-a/2],[]], (z/2)**2)
0.09334163695597828403796071
>>> besselk(a,z)
0.09334163695597828403796071
Error functions:
`\sqrt{\pi} z^{2(a-1)} \mathrm{erfc}(z) = G^{2,0}_{1,2} \left( \left.
\begin{matrix} a \\ a-1, a-\frac{1}{2}
\end{matrix} \; \right| \; z, \frac{1}{2} \right)`
>>> meijerg([[],[a]], [[a-1,a-0.5],[]], z, 0.5)
0.00172839843123091957468712
>>> sqrt(pi) * z**(2*a-2) * erfc(z)
0.00172839843123091957468712
A Meijer G-function of higher degree, (1,1,2,3)::
>>> meijerg([[a],[b]], [[a],[b,a-1]], z)
1.55984467443050210115617
>>> sin((b-a)*pi)/pi*(exp(z)-1)*z**(a-1)
1.55984467443050210115617
A Meijer G-function of still higher degree, (4,1,2,4), that can
be expanded as a messy combination of exponential integrals::
>>> meijerg([[a],[2*b-a]], [[b,a,b-0.5,-1-a+2*b],[]], z)
0.3323667133658557271898061
>>> chop(4**(a-b+1)*sqrt(pi)*gamma(2*b-2*a)*z**a*\
... expint(2*b-2*a, -2*sqrt(-z))*expint(2*b-2*a, 2*sqrt(-z)))
0.3323667133658557271898061
In the following case, different series give different values::
>>> chop(meijerg([[1],[0.25]],[[3],[0.5]],-2))
-0.06417628097442437076207337
>>> meijerg([[1],[0.25]],[[3],[0.5]],-2,series=1)
0.1428699426155117511873047
>>> chop(meijerg([[1],[0.25]],[[3],[0.5]],-2,series=2))
-0.06417628097442437076207337
**References**
1. http://en.wikipedia.org/wiki/Meijer_G-function
2. http://mathworld.wolfram.com/MeijerG-Function.html
3. http://functions.wolfram.com/HypergeometricFunctions/MeijerG/
4. http://functions.wolfram.com/HypergeometricFunctions/MeijerG1/
"""
clsin = r"""
Computes the Clausen sine function, defined formally by the series
.. math ::
\mathrm{Cl}_s(z) = \sum_{k=1}^{\infty} \frac{\sin(kz)}{k^s}.
The special case `\mathrm{Cl}_2(z)` (i.e. ``clsin(2,z)``) is the classical
"Clausen function". More generally, the Clausen function is defined for
complex `s` and `z`, even when the series does not converge. The
Clausen function is related to the polylogarithm (:func:`~mpmath.polylog`) as
.. math ::
\mathrm{Cl}_s(z) = \frac{1}{2i}\left(\mathrm{Li}_s\left(e^{iz}\right) -
\mathrm{Li}_s\left(e^{-iz}\right)\right)
= \mathrm{Im}\left[\mathrm{Li}_s(e^{iz})\right] \quad (s, z \in \mathbb{R}),
and this representation can be taken to provide the analytic continuation of the
series. The complementary function :func:`~mpmath.clcos` gives the corresponding
cosine sum.
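The polylogarithm representation gives an immediate cross-check; a
sketch for real `s`, `z` (the imaginary part of the right-hand side is
rounding noise)::

    from mpmath import mp, mpf, j, exp, polylog, clsin

    mp.dps = 25
    s, z = 2, mpf(1.5)
    lhs = clsin(s, z)
    rhs = (polylog(s, exp(j*z)) - polylog(s, exp(-j*z)))/(2*j)
    # re(rhs) agrees with lhs to working precision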
**Examples**
Evaluation for arbitrarily chosen `s` and `z`::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> s, z = 3, 4
>>> clsin(s, z); nsum(lambda k: sin(z*k)/k**s, [1,inf])
-0.6533010136329338746275795
-0.6533010136329338746275795
Using `z + \pi` instead of `z` gives an alternating series::
>>> clsin(s, z+pi)
0.8860032351260589402871624
>>> nsum(lambda k: (-1)**k*sin(z*k)/k**s, [1,inf])
0.8860032351260589402871624
With `s = 1`, the sum can be expressed in closed form
using elementary functions::
>>> z = 1 + sqrt(3)
>>> clsin(1, z)
0.2047709230104579724675985
>>> chop((log(1-exp(-j*z)) - log(1-exp(j*z)))/(2*j))
0.2047709230104579724675985
>>> nsum(lambda k: sin(k*z)/k, [1,inf])
0.2047709230104579724675985
The classical Clausen function `\mathrm{Cl}_2(\theta)` gives the
value of the integral `\int_0^{\theta} -\ln(2\sin(x/2)) dx` for
`0 < \theta < 2 \pi`::
>>> cl2 = lambda t: clsin(2, t)
>>> cl2(3.5)
-0.2465045302347694216534255
>>> -quad(lambda x: ln(2*sin(0.5*x)), [0, 3.5])
-0.2465045302347694216534255
This function is antisymmetric about `\theta = \pi`, with zeros and extreme
points::
>>> cl2(0); cl2(pi/3); chop(cl2(pi)); cl2(5*pi/3); chop(cl2(2*pi))
0.0
1.014941606409653625021203
0.0
-1.014941606409653625021203
0.0
Catalan's constant is a special value::
>>> cl2(pi/2)
0.9159655941772190150546035
>>> +catalan
0.9159655941772190150546035
The Clausen sine function can be expressed in closed form when
`s` is an odd integer (becoming zero when `s < 0`)::
>>> z = 1 + sqrt(2)
>>> clsin(1, z); (pi-z)/2
0.3636895456083490948304773
0.3636895456083490948304773
>>> clsin(3, z); pi**2/6*z - pi*z**2/4 + z**3/12
0.5661751584451144991707161
0.5661751584451144991707161
>>> clsin(-1, z)
0.0
>>> clsin(-3, z)
0.0
It can also be expressed in closed form for even integer `s \le 0`,
assigning finite values to divergent series such as
`\sin(z) + \sin(2z) + \sin(3z) + \ldots`::
>>> z = 1 + sqrt(2)
>>> clsin(0, z)
0.1903105029507513881275865
>>> cot(z/2)/2
0.1903105029507513881275865
>>> clsin(-2, z)
-0.1089406163841548817581392
>>> -cot(z/2)*csc(z/2)**2/4
-0.1089406163841548817581392
Call with ``pi=True`` to multiply `z` by `\pi` exactly::
>>> clsin(3, 3*pi)
-8.892316224968072424732898e-26
>>> clsin(3, 3, pi=True)
0.0
Evaluation for complex `s`, `z` in a nonconvergent case::
>>> s, z = -1-j, 1+2j
>>> clsin(s, z)
(-0.593079480117379002516034 + 0.9038644233367868273362446j)
>>> extraprec(20)(nsum)(lambda k: sin(k*z)/k**s, [1,inf])
(-0.593079480117379002516034 + 0.9038644233367868273362446j)
"""
clcos = r"""
Computes the Clausen cosine function, defined formally by the series
.. math ::
\mathrm{\widetilde{Cl}}_s(z) = \sum_{k=1}^{\infty} \frac{\cos(kz)}{k^s}.
This function is complementary to the Clausen sine function
:func:`~mpmath.clsin`. In terms of the polylogarithm,
.. math ::
\mathrm{\widetilde{Cl}}_s(z) =
\frac{1}{2}\left(\mathrm{Li}_s\left(e^{iz}\right) +
\mathrm{Li}_s\left(e^{-iz}\right)\right)
= \mathrm{Re}\left[\mathrm{Li}_s(e^{iz})\right] \quad (s, z \in \mathbb{R}).
**Examples**
Evaluation for arbitrarily chosen `s` and `z`::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> s, z = 3, 4
>>> clcos(s, z); nsum(lambda k: cos(z*k)/k**s, [1,inf])
-0.6518926267198991308332759
-0.6518926267198991308332759
Using `z + \pi` instead of `z` gives an alternating series::
>>> s, z = 3, 0.5
>>> clcos(s, z+pi)
-0.8155530586502260817855618
>>> nsum(lambda k: (-1)**k*cos(z*k)/k**s, [1,inf])
-0.8155530586502260817855618
With `s = 1`, the sum can be expressed in closed form
using elementary functions::
>>> z = 1 + sqrt(3)
>>> clcos(1, z)
-0.6720334373369714849797918
>>> chop(-0.5*(log(1-exp(j*z))+log(1-exp(-j*z))))
-0.6720334373369714849797918
>>> -log(abs(2*sin(0.5*z))) # Equivalent to above when z is real
-0.6720334373369714849797918
>>> nsum(lambda k: cos(k*z)/k, [1,inf])
-0.6720334373369714849797918
It can also be expressed in closed form when `s` is an even integer.
For example::
>>> clcos(2,z)
-0.7805359025135583118863007
>>> pi**2/6 - pi*z/2 + z**2/4
-0.7805359025135583118863007
The case `s = 0` gives the renormalized sum of
`\cos(z) + \cos(2z) + \cos(3z) + \ldots` (which happens to be the same for
any value of `z`)::
>>> clcos(0, z)
-0.5
>>> nsum(lambda k: cos(k*z), [1,inf])
-0.5
Also the sums
.. math ::
\cos(z) + 2\cos(2z) + 3\cos(3z) + \ldots
and
.. math ::
\cos(z) + 2^n \cos(2z) + 3^n \cos(3z) + \ldots
for higher integer powers `n = -s` can be done in closed form. They are zero
when `n` is positive and even (`s` negative and even)::
>>> clcos(-1, z); 1/(2*cos(z)-2)
-0.2607829375240542480694126
-0.2607829375240542480694126
>>> clcos(-3, z); (2+cos(z))*csc(z/2)**4/8
0.1472635054979944390848006
0.1472635054979944390848006
>>> clcos(-2, z); clcos(-4, z); clcos(-6, z)
0.0
0.0
0.0
With `z = \pi`, the series reduces to that of the Riemann zeta function
(more generally, if `z = p \pi/q`, it is a finite sum over Hurwitz zeta
function values)::
>>> clcos(2.5, 0); zeta(2.5)
1.34148725725091717975677
1.34148725725091717975677
>>> clcos(2.5, pi); -altzeta(2.5)
-0.8671998890121841381913472
-0.8671998890121841381913472
Call with ``pi=True`` to multiply `z` by `\pi` exactly::
>>> clcos(-3, 2*pi)
2.997921055881167659267063e+102
>>> clcos(-3, 2, pi=True)
0.008333333333333333333333333
Evaluation for complex `s`, `z` in a nonconvergent case::
>>> s, z = -1-j, 1+2j
>>> clcos(s, z)
(0.9407430121562251476136807 + 0.715826296033590204557054j)
>>> extraprec(20)(nsum)(lambda k: cos(k*z)/k**s, [1,inf])
(0.9407430121562251476136807 + 0.715826296033590204557054j)
"""
whitm = r"""
Evaluates the Whittaker function `M(k,m,z)`, which gives a solution
to the Whittaker differential equation
.. math ::
\frac{d^2f}{dz^2} + \left(-\frac{1}{4}+\frac{k}{z}+
\frac{(\frac{1}{4}-m^2)}{z^2}\right) f = 0.
A second solution is given by :func:`~mpmath.whitw`.
The Whittaker functions are defined in Abramowitz & Stegun, section 13.1.
They are alternate forms of the confluent hypergeometric functions
`\,_1F_1` and `U`:
.. math ::
M(k,m,z) = e^{-\frac{1}{2}z} z^{\frac{1}{2}+m}
\,_1F_1(\tfrac{1}{2}+m-k, 1+2m, z)
W(k,m,z) = e^{-\frac{1}{2}z} z^{\frac{1}{2}+m}
U(\tfrac{1}{2}+m-k, 1+2m, z).
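The `\,_1F_1` form can be evaluated directly as a cross-check; a sketch
with arbitrarily chosen parameters::

    from mpmath import mp, mpf, exp, hyp1f1, whitm

    mp.dps = 25
    k, m, z = mpf(0.25), mpf(1.5), mpf(2)
    lhs = whitm(k, m, z)
    rhs = exp(-z/2) * z**(mpf(0.5) + m) * hyp1f1(mpf(0.5) + m - k, 1 + 2*m, z)
    # lhs and rhs agree to working precision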
**Examples**
Evaluation for arbitrary real and complex arguments is supported::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> whitm(1, 1, 1)
0.7302596799460411820509668
>>> whitm(1, 1, -1)
(0.0 - 1.417977827655098025684246j)
>>> whitm(j, j/2, 2+3j)
(3.245477713363581112736478 - 0.822879187542699127327782j)
>>> whitm(2, 3, 100000)
4.303985255686378497193063e+21707
Evaluation at zero::
>>> whitm(1,-1,0); whitm(1,-0.5,0); whitm(1,0,0)
+inf
nan
0.0
We can verify that :func:`~mpmath.whitm` numerically satisfies the
differential equation for arbitrarily chosen values::
>>> k = mpf(0.25)
>>> m = mpf(1.5)
>>> f = lambda z: whitm(k,m,z)
>>> for z in [-1, 2.5, 3, 1+2j]:
... chop(diff(f,z,2) + (-0.25 + k/z + (0.25-m**2)/z**2)*f(z))
...
0.0
0.0
0.0
0.0
An integral involving both :func:`~mpmath.whitm` and :func:`~mpmath.whitw`,
verifying evaluation along the real axis::
>>> quad(lambda x: exp(-x)*whitm(3,2,x)*whitw(1,-2,x), [0,inf])
3.438869842576800225207341
>>> 128/(21*sqrt(pi))
3.438869842576800225207341
"""
whitw = r"""
Evaluates the Whittaker function `W(k,m,z)`, which gives a second
solution to the Whittaker differential equation. (See :func:`~mpmath.whitm`.)
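As with :func:`~mpmath.whitm`, the confluent hypergeometric form (here
in terms of :func:`~mpmath.hyperu`) provides a quick cross-check; a
sketch::

    from mpmath import mp, mpf, exp, hyperu, whitw

    mp.dps = 25
    k, m, z = mpf(0.25), mpf(1.5), mpf(2)
    lhs = whitw(k, m, z)
    rhs = exp(-z/2) * z**(mpf(0.5) + m) * hyperu(mpf(0.5) + m - k, 1 + 2*m, z)
    # lhs and rhs agree to working precision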
**Examples**
Evaluation for arbitrary real and complex arguments is supported::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> whitw(1, 1, 1)
1.19532063107581155661012
>>> whitw(1, 1, -1)
(-0.9424875979222187313924639 - 0.2607738054097702293308689j)
>>> whitw(j, j/2, 2+3j)
(0.1782899315111033879430369 - 0.01609578360403649340169406j)
>>> whitw(2, 3, 100000)
1.887705114889527446891274e-21705
>>> whitw(-1, -1, 100)
1.905250692824046162462058e-24
Evaluation at zero::
>>> for m in [-1, -0.5, 0, 0.5, 1]:
... whitw(1, m, 0)
...
+inf
nan
0.0
nan
+inf
We can verify that :func:`~mpmath.whitw` numerically satisfies the
differential equation for arbitrarily chosen values::
>>> k = mpf(0.25)
>>> m = mpf(1.5)
>>> f = lambda z: whitw(k,m,z)
>>> for z in [-1, 2.5, 3, 1+2j]:
... chop(diff(f,z,2) + (-0.25 + k/z + (0.25-m**2)/z**2)*f(z))
...
0.0
0.0
0.0
0.0
"""
ber = r"""
Computes the Kelvin function ber, which for real arguments gives the real part
of the Bessel J function of a rotated argument
.. math ::
J_n\left(x e^{3\pi i/4}\right) = \mathrm{ber}_n(x) + i \mathrm{bei}_n(x).
The imaginary part is given by :func:`~mpmath.bei`.
**Plots**
.. literalinclude :: /plots/ber.py
.. image :: /plots/ber.png
**Examples**
Verifying the defining relation::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> n, x = 2, 3.5
>>> ber(n,x)
1.442338852571888752631129
>>> bei(n,x)
-0.948359035324558320217678
>>> besselj(n, x*root(1,8,3))
(1.442338852571888752631129 - 0.948359035324558320217678j)
The ber and bei functions are also defined by analytic continuation
for complex arguments::
>>> ber(1+j, 2+3j)
(4.675445984756614424069563 - 15.84901771719130765656316j)
>>> bei(1+j, 2+3j)
(15.83886679193707699364398 + 4.684053288183046528703611j)
"""
bei = r"""
Computes the Kelvin function bei, which for real arguments gives the
imaginary part of the Bessel J function of a rotated argument.
See :func:`~mpmath.ber`.
"""
ker = r"""
Computes the Kelvin function ker, which for real arguments gives the real part
of the (rescaled) Bessel K function of a rotated argument
.. math ::
e^{-\pi i/2} K_n\left(x e^{3\pi i/4}\right) = \mathrm{ker}_n(x) + i \mathrm{kei}_n(x).
The imaginary part is given by :func:`~mpmath.kei`.
**Plots**
.. literalinclude :: /plots/ker.py
.. image :: /plots/ker.png
**Examples**
Verifying the defining relation::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> n, x = 2, 4.5
>>> ker(n,x)
0.02542895201906369640249801
>>> kei(n,x)
-0.02074960467222823237055351
>>> exp(-n*pi*j/2) * besselk(n, x*root(1,8,1))
(0.02542895201906369640249801 - 0.02074960467222823237055351j)
The ker and kei functions are also defined by analytic continuation
for complex arguments::
>>> ker(1+j, 3+4j)
(1.586084268115490421090533 - 2.939717517906339193598719j)
>>> kei(1+j, 3+4j)
(-2.940403256319453402690132 - 1.585621643835618941044855j)
"""
kei = r"""
Computes the Kelvin function kei, which for real arguments gives the
imaginary part of the (rescaled) Bessel K function of a rotated argument.
See :func:`~mpmath.ker`.
"""
struveh = r"""
Gives the Struve function
.. math ::
\,\mathbf{H}_n(z) =
\sum_{k=0}^\infty \frac{(-1)^k}{\Gamma(k+\frac{3}{2})
\Gamma(k+n+\frac{3}{2})} {\left({\frac{z}{2}}\right)}^{2k+n+1}
which is a solution to the Struve differential equation
.. math ::
z^2 f''(z) + z f'(z) + (z^2-n^2) f(z) = \frac{2 z^{n+1}}{\pi (2n-1)!!}.
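The defining series is entire in `z` and can be summed directly with
:func:`~mpmath.nsum`; a sketch, not the library's own algorithm::

    from mpmath import mp, mpf, nsum, gamma, inf, struveh

    mp.dps = 25
    n, z = 0, mpf(3.5)
    lhs = struveh(n, z)
    rhs = nsum(lambda k: (-1)**k * (z/2)**(2*k + n + 1)
               / (gamma(k + 1.5)*gamma(k + n + 1.5)), [0, inf])
    # lhs and rhs agree to working precision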
**Examples**
Evaluation for arbitrary real and complex arguments::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> struveh(0, 3.5)
0.3608207733778295024977797
>>> struveh(-1, 10)
-0.255212719726956768034732
>>> struveh(1, -100.5)
0.5819566816797362287502246
>>> struveh(2.5, 10000000000000)
3153915652525200060.308937
>>> struveh(2.5, -10000000000000)
(0.0 - 3153915652525200060.308937j)
>>> struveh(1+j, 1000000+4000000j)
(-3.066421087689197632388731e+1737173 - 1.596619701076529803290973e+1737173j)
A Struve function of half-integer order is elementary; for example:
>>> z = 3
>>> struveh(0.5, 3)
0.9167076867564138178671595
>>> sqrt(2/(pi*z))*(1-cos(z))
0.9167076867564138178671595
Numerically verifying the differential equation::
>>> z = mpf(4.5)
>>> n = 3
>>> f = lambda z: struveh(n,z)
>>> lhs = z**2*diff(f,z,2) + z*diff(f,z) + (z**2-n**2)*f(z)
>>> rhs = 2*z**(n+1)/fac2(2*n-1)/pi
>>> lhs
17.40359302709875496632744
>>> rhs
17.40359302709875496632744
"""
struvel = r"""
Gives the modified Struve function
.. math ::
\,\mathbf{L}_n(z) = -i e^{-n\pi i/2} \mathbf{H}_n(i z)
which solves the modified Struve differential equation
.. math ::
z^2 f''(z) + z f'(z) - (z^2+n^2) f(z) = \frac{2 z^{n+1}}{\pi (2n-1)!!}.
**Examples**
Evaluation for arbitrary real and complex arguments::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> struvel(0, 3.5)
7.180846515103737996249972
>>> struvel(-1, 10)
2670.994904980850550721511
>>> struvel(1, -100.5)
1.757089288053346261497686e+42
>>> struvel(2.5, 10000000000000)
4.160893281017115450519948e+4342944819025
>>> struvel(2.5, -10000000000000)
(0.0 - 4.160893281017115450519948e+4342944819025j)
>>> struvel(1+j, 700j)
(-0.1721150049480079451246076 + 0.1240770953126831093464055j)
>>> struvel(1+j, 1000000+4000000j)
(-2.973341637511505389128708e+434290 - 5.164633059729968297147448e+434290j)
Numerically verifying the differential equation::
>>> z = mpf(3.5)
>>> n = 3
>>> f = lambda z: struvel(n,z)
>>> lhs = z**2*diff(f,z,2) + z*diff(f,z) - (z**2+n**2)*f(z)
>>> rhs = 2*z**(n+1)/fac2(2*n-1)/pi
>>> lhs
6.368850306060678353018165
>>> rhs
6.368850306060678353018165
"""
appellf1 = r"""
Gives the Appell F1 hypergeometric function of two variables,
.. math ::
F_1(a,b_1,b_2,c,x,y) = \sum_{m=0}^{\infty} \sum_{n=0}^{\infty}
\frac{(a)_{m+n} (b_1)_m (b_2)_n}{(c)_{m+n}}
\frac{x^m y^n}{m! n!}.
This series is only generally convergent when `|x| < 1` and `|y| < 1`,
although :func:`~mpmath.appellf1` can evaluate an analytic continuation
with respect to either variable, and sometimes both.
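Inside the unit polydisc, the double series can be truncated directly,
which makes a useful sanity check though it is far less efficient than
:func:`~mpmath.appellf1` itself; a sketch (``f1_truncated`` is a
hypothetical helper)::

    from mpmath import mp, mpf, rf, fac

    mp.dps = 15

    def f1_truncated(a, b1, b2, c, x, y, N=40):
        # direct truncation; only sensible well inside |x| < 1, |y| < 1
        s = mpf(0)
        for m in range(N):
            for n in range(N - m):
                s += (rf(a, m+n)*rf(b1, m)*rf(b2, n)/rf(c, m+n)
                      * x**m * y**n/(fac(m)*fac(n)))
        return s

    # f1_truncated(1, 0, 0.5, 1, 0.5, 0.25) approaches
    # appellf1(1, 0, 0.5, 1, 0.5, 0.25)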
**Examples**
Evaluation is supported for real and complex parameters::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> appellf1(1,0,0.5,1,0.5,0.25)
1.154700538379251529018298
>>> appellf1(1,1+j,0.5,1,0.5,0.5j)
(1.138403860350148085179415 + 1.510544741058517621110615j)
For some integer parameters, the F1 series reduces to a polynomial::
>>> appellf1(2,-4,-3,1,2,5)
-816.0
>>> appellf1(-5,1,2,1,4,5)
-20528.0
The analytic continuation with respect to either `x` or `y`,
and sometimes with respect to both, can be evaluated::
>>> appellf1(2,3,4,5,100,0.5)
(0.0006231042714165329279738662 + 0.0000005769149277148425774499857j)
>>> appellf1('1.1', '0.3', '0.2+2j', '0.4', '0.2', 1.5+3j)
(-0.1782604566893954897128702 + 0.002472407104546216117161499j)
>>> appellf1(1,2,3,4,10,12)
-0.07122993830066776374929313
For certain arguments, F1 reduces to an ordinary hypergeometric function::
>>> appellf1(1,2,3,5,0.5,0.25)
1.547902270302684019335555
>>> 4*hyp2f1(1,2,5,'1/3')/3
1.547902270302684019335555
>>> appellf1(1,2,3,4,0,1.5)
(-1.717202506168937502740238 - 2.792526803190927323077905j)
>>> hyp2f1(1,3,4,1.5)
(-1.717202506168937502740238 - 2.792526803190927323077905j)
The F1 function satisfies a system of partial differential equations::
>>> a,b1,b2,c,x,y = map(mpf, [1,0.5,0.25,1.125,0.25,-0.25])
>>> F = lambda x,y: appellf1(a,b1,b2,c,x,y)
>>> chop(x*(1-x)*diff(F,(x,y),(2,0)) +
... y*(1-x)*diff(F,(x,y),(1,1)) +
... (c-(a+b1+1)*x)*diff(F,(x,y),(1,0)) -
... b1*y*diff(F,(x,y),(0,1)) -
... a*b1*F(x,y))
0.0
>>>
>>> chop(y*(1-y)*diff(F,(x,y),(0,2)) +
... x*(1-y)*diff(F,(x,y),(1,1)) +
... (c-(a+b2+1)*y)*diff(F,(x,y),(0,1)) -
... b2*x*diff(F,(x,y),(1,0)) -
... a*b2*F(x,y))
0.0
The Appell F1 function allows for closed-form evaluation of various
integrals, such as any integral of the form
`\int x^r (x+a)^p (x+b)^q dx`::
>>> def integral(a,b,p,q,r,x1,x2):
... a,b,p,q,r,x1,x2 = map(mpmathify, [a,b,p,q,r,x1,x2])
... f = lambda x: x**r * (x+a)**p * (x+b)**q
... def F(x):
... v = x**(r+1)/(r+1) * (a+x)**p * (b+x)**q
... v *= (1+x/a)**(-p)
... v *= (1+x/b)**(-q)
... v *= appellf1(r+1,-p,-q,2+r,-x/a,-x/b)
... return v
... print("Num. quad: %s" % quad(f, [x1,x2]))
... print("Appell F1: %s" % (F(x2)-F(x1)))
...
>>> integral('1/5','4/3','-2','3','1/2',0,1)
Num. quad: 9.073335358785776206576981
Appell F1: 9.073335358785776206576981
>>> integral('3/2','4/3','-2','3','1/2',0,1)
Num. quad: 1.092829171999626454344678
Appell F1: 1.092829171999626454344678
>>> integral('3/2','4/3','-2','3','1/2',12,25)
Num. quad: 1106.323225040235116498927
Appell F1: 1106.323225040235116498927
Also incomplete elliptic integrals fall into this category [1]::
>>> def E(z, m):
... if (pi/2).ae(z):
... return ellipe(m)
... return 2*round(re(z)/pi)*ellipe(m) + mpf(-1)**round(re(z)/pi)*\
... sin(z)*appellf1(0.5,0.5,-0.5,1.5,sin(z)**2,m*sin(z)**2)
...
>>> z, m = 1, 0.5
>>> E(z,m); quad(lambda t: sqrt(1-m*sin(t)**2), [0,pi/4,3*pi/4,z])
0.9273298836244400669659042
0.9273298836244400669659042
>>> z, m = 3, 2
>>> E(z,m); quad(lambda t: sqrt(1-m*sin(t)**2), [0,pi/4,3*pi/4,z])
(1.057495752337234229715836 + 1.198140234735592207439922j)
(1.057495752337234229715836 + 1.198140234735592207439922j)
**References**
1. [WolframFunctions]_ http://functions.wolfram.com/EllipticIntegrals/EllipticE2/26/01/
2. [SrivastavaKarlsson]_
3. [CabralRosetti]_
4. [Vidunas]_
5. [Slater]_
"""
angerj = r"""
Gives the Anger function
.. math ::
\mathbf{J}_{\nu}(z) = \frac{1}{\pi}
\int_0^{\pi} \cos(\nu t - z \sin t) dt
which is an entire function of both the parameter `\nu` and
the argument `z`. It solves the inhomogeneous Bessel differential
equation
.. math ::
f''(z) + \frac{1}{z}f'(z) + \left(1-\frac{\nu^2}{z^2}\right) f(z)
= \frac{(z-\nu)}{\pi z^2} \sin(\pi \nu).
**Examples**
Evaluation for real and complex parameter and argument::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> angerj(2,3)
0.4860912605858910769078311
>>> angerj(-3+4j, 2+5j)
(-5033.358320403384472395612 + 585.8011892476145118551756j)
>>> angerj(3.25, 1e6j)
(4.630743639715893346570743e+434290 - 1.117960409887505906848456e+434291j)
>>> angerj(-1.5, 1e6)
0.0002795719747073879393087011
The Anger function coincides with the Bessel J-function when `\nu`
is an integer, but not otherwise::
>>> angerj(1,3); besselj(1,3)
0.3390589585259364589255146
0.3390589585259364589255146
>>> angerj(1.5,3); besselj(1.5,3)
0.4088969848691080859328847
0.4777182150870917715515015
Verifying the differential equation::
>>> v,z = mpf(2.25), 0.75
>>> f = lambda z: angerj(v,z)
>>> diff(f,z,2) + diff(f,z)/z + (1-(v/z)**2)*f(z)
-0.6002108774380707130367995
>>> (z-v)/(pi*z**2) * sinpi(v)
-0.6002108774380707130367995
Verifying the integral representation::
>>> angerj(v,z)
0.1145380759919333180900501
>>> quad(lambda t: cos(v*t-z*sin(t))/pi, [0,pi])
0.1145380759919333180900501
**References**
1. [DLMF]_ section 11.10: Anger-Weber Functions
"""
webere = r"""
Gives the Weber function
.. math ::
\mathbf{E}_{\nu}(z) = \frac{1}{\pi}
\int_0^{\pi} \sin(\nu t - z \sin t) dt
which is an entire function of both the parameter `\nu` and
the argument `z`. It solves the inhomogeneous Bessel differential
equation
.. math ::
f''(z) + \frac{1}{z}f'(z) + \left(1-\frac{\nu^2}{z^2}\right) f(z)
= -\frac{1}{\pi z^2} (z+\nu+(z-\nu)\cos(\pi \nu)).
**Examples**
Evaluation for real and complex parameter and argument::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> webere(2,3)
-0.1057668973099018425662646
>>> webere(-3+4j, 2+5j)
(-585.8081418209852019290498 - 5033.314488899926921597203j)
>>> webere(3.25, 1e6j)
(-1.117960409887505906848456e+434291 - 4.630743639715893346570743e+434290j)
>>> webere(3.25, 1e6)
-0.00002812518265894315604914453
Up to addition of a rational function of `z`, the Weber function coincides
with the Struve H-function when `\nu` is an integer::
>>> webere(1,3); 2/pi-struveh(1,3)
-0.3834897968188690177372881
-0.3834897968188690177372881
>>> webere(5,3); 26/(35*pi)-struveh(5,3)
0.2009680659308154011878075
0.2009680659308154011878075
Verifying the differential equation::
>>> v,z = mpf(2.25), 0.75
>>> f = lambda z: webere(v,z)
>>> diff(f,z,2) + diff(f,z)/z + (1-(v/z)**2)*f(z)
-1.097441848875479535164627
>>> -(z+v+(z-v)*cospi(v))/(pi*z**2)
-1.097441848875479535164627
Verifying the integral representation::
>>> webere(v,z)
0.1486507351534283744485421
>>> quad(lambda t: sin(v*t-z*sin(t))/pi, [0,pi])
0.1486507351534283744485421
**References**
1. [DLMF]_ section 11.10: Anger-Weber Functions
"""
lommels1 = r"""
Gives the Lommel function `s_{\mu,\nu}` or `s^{(1)}_{\mu,\nu}`
.. math ::
s_{\mu,\nu}(z) = \frac{z^{\mu+1}}{(\mu-\nu+1)(\mu+\nu+1)}
\,_1F_2\left(1; \frac{\mu-\nu+3}{2}, \frac{\mu+\nu+3}{2};
-\frac{z^2}{4} \right)
which solves the inhomogeneous Bessel equation
.. math ::
z^2 f''(z) + z f'(z) + (z^2-\nu^2) f(z) = z^{\mu+1}.
A second solution is given by :func:`~mpmath.lommels2`.
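The `\,_1F_2` representation can be evaluated directly; a sketch reusing
the test parameters from the examples below::

    from mpmath import mp, mpf, hyp1f2, lommels1

    mp.dps = 25
    u, v, z = mpf(0.25), mpf(0.125), mpf(0.75)
    lhs = lommels1(u, v, z)
    rhs = z**(u+1)/((u-v+1)*(u+v+1)) * hyp1f2(1, (u-v+3)/2, (u+v+3)/2, -z**2/4)
    # lhs and rhs agree to working precision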
**Plots**
.. literalinclude :: /plots/lommels1.py
.. image :: /plots/lommels1.png
**Examples**
An integral representation::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> u,v,z = 0.25, 0.125, mpf(0.75)
>>> lommels1(u,v,z)
0.4276243877565150372999126
>>> (bessely(v,z)*quad(lambda t: t**u*besselj(v,t), [0,z]) - \
... besselj(v,z)*quad(lambda t: t**u*bessely(v,t), [0,z]))*(pi/2)
0.4276243877565150372999126
A special value::
>>> lommels1(v,v,z)
0.5461221367746048054932553
>>> gamma(v+0.5)*sqrt(pi)*power(2,v-1)*struveh(v,z)
0.5461221367746048054932553
Verifying the differential equation::
>>> f = lambda z: lommels1(u,v,z)
>>> z**2*diff(f,z,2) + z*diff(f,z) + (z**2-v**2)*f(z)
0.6979536443265746992059141
>>> z**(u+1)
0.6979536443265746992059141
**References**
1. [GradshteynRyzhik]_
2. [Weisstein]_ http://mathworld.wolfram.com/LommelFunction.html
"""
lommels2 = r"""
Gives the second Lommel function `S_{\mu,\nu}` or `s^{(2)}_{\mu,\nu}`
.. math ::
S_{\mu,\nu}(z) = s_{\mu,\nu}(z) + 2^{\mu-1}
\Gamma\left(\tfrac{1}{2}(\mu-\nu+1)\right)
\Gamma\left(\tfrac{1}{2}(\mu+\nu+1)\right) \times
\left[\sin(\tfrac{1}{2}(\mu-\nu)\pi) J_{\nu}(z) -
\cos(\tfrac{1}{2}(\mu-\nu)\pi) Y_{\nu}(z)
\right]
which solves the same differential equation as
:func:`~mpmath.lommels1`.
**Plots**
.. literalinclude :: /plots/lommels2.py
.. image :: /plots/lommels2.png
**Examples**
For large `|z|`, `S_{\mu,\nu} \sim z^{\mu-1}`::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> lommels2(10,2,30000)
1.968299831601008419949804e+40
>>> power(30000,9)
1.9683e+40
A special value::
>>> u,v,z = 0.5, 0.125, mpf(0.75)
>>> lommels2(v,v,z)
0.9589683199624672099969765
>>> (struveh(v,z)-bessely(v,z))*power(2,v-1)*sqrt(pi)*gamma(v+0.5)
0.9589683199624672099969765
Verifying the differential equation::
>>> f = lambda z: lommels2(u,v,z)
>>> z**2*diff(f,z,2) + z*diff(f,z) + (z**2-v**2)*f(z)
0.6495190528383289850727924
>>> z**(u+1)
0.6495190528383289850727924
**References**
1. [GradshteynRyzhik]_
2. [Weisstein]_ http://mathworld.wolfram.com/LommelFunction.html
"""
appellf2 = r"""
Gives the Appell F2 hypergeometric function of two variables
.. math ::
F_2(a,b_1,b_2,c_1,c_2,x,y) = \sum_{m=0}^{\infty} \sum_{n=0}^{\infty}
\frac{(a)_{m+n} (b_1)_m (b_2)_n}{(c_1)_m (c_2)_n}
\frac{x^m y^n}{m! n!}.
The series is generally absolutely convergent for `|x| + |y| < 1`.
**Examples**
Evaluation for real and complex arguments::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> appellf2(1,2,3,4,5,0.25,0.125)
1.257417193533135344785602
>>> appellf2(1,-3,-4,2,3,2,3)
-42.8
>>> appellf2(0.5,0.25,-0.25,2,3,0.25j,0.25)
(0.9880539519421899867041719 + 0.01497616165031102661476978j)
>>> chop(appellf2(1,1+j,1-j,3j,-3j,0.25,0.25))
1.201311219287411337955192
>>> appellf2(1,1,1,4,6,0.125,16)
(-0.09455532250274744282125152 - 0.7647282253046207836769297j)
A transformation formula::
>>> a,b1,b2,c1,c2,x,y = map(mpf, [1,2,0.5,0.25,1.625,-0.125,0.125])
>>> appellf2(a,b1,b2,c1,c2,x,y)
0.2299211717841180783309688
>>> (1-x)**(-a)*appellf2(a,c1-b1,b2,c1,c2,x/(x-1),y/(1-x))
0.2299211717841180783309688
A system of partial differential equations satisfied by F2::
>>> a,b1,b2,c1,c2,x,y = map(mpf, [1,0.5,0.25,1.125,1.5,0.0625,-0.0625])
>>> F = lambda x,y: appellf2(a,b1,b2,c1,c2,x,y)
>>> chop(x*(1-x)*diff(F,(x,y),(2,0)) -
... x*y*diff(F,(x,y),(1,1)) +
... (c1-(a+b1+1)*x)*diff(F,(x,y),(1,0)) -
... b1*y*diff(F,(x,y),(0,1)) -
... a*b1*F(x,y))
0.0
>>> chop(y*(1-y)*diff(F,(x,y),(0,2)) -
... x*y*diff(F,(x,y),(1,1)) +
... (c2-(a+b2+1)*y)*diff(F,(x,y),(0,1)) -
... b2*x*diff(F,(x,y),(1,0)) -
... a*b2*F(x,y))
0.0
**References**
See references for :func:`~mpmath.appellf1`.
"""
appellf3 = r"""
Gives the Appell F3 hypergeometric function of two variables
.. math ::
F_3(a_1,a_2,b_1,b_2,c,x,y) = \sum_{m=0}^{\infty} \sum_{n=0}^{\infty}
\frac{(a_1)_m (a_2)_n (b_1)_m (b_2)_n}{(c)_{m+n}}
\frac{x^m y^n}{m! n!}.
The series is generally absolutely convergent for `|x| < 1, |y| < 1`.
**Examples**
Evaluation for various parameters and variables::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> appellf3(1,2,3,4,5,0.5,0.25)
2.221557778107438938158705
>>> appellf3(1,2,3,4,5,6,0); hyp2f1(1,3,5,6)
(-0.5189554589089861284537389 - 0.1454441043328607980769742j)
(-0.5189554589089861284537389 - 0.1454441043328607980769742j)
>>> appellf3(1,-2,-3,1,1,4,6)
-17.4
>>> appellf3(1,2,-3,1,1,4,6)
(17.7876136773677356641825 + 19.54768762233649126154534j)
>>> appellf3(1,2,-3,1,1,6,4)
(85.02054175067929402953645 + 148.4402528821177305173599j)
>>> chop(appellf3(1+j,2,1-j,2,3,0.25,0.25))
1.719992169545200286696007
Many transformations and evaluations for special combinations
of the parameters are possible, e.g.::
>>> a,b,c,x,y = map(mpf, [0.5,0.25,0.125,0.125,-0.125])
>>> appellf3(a,c-a,b,c-b,c,x,y)
1.093432340896087107444363
>>> (1-y)**(a+b-c)*hyp2f1(a,b,c,x+y-x*y)
1.093432340896087107444363
>>> x**2*appellf3(1,1,1,1,3,x,-x)
0.01568646277445385390945083
>>> polylog(2,x**2)
0.01568646277445385390945083
>>> a1,a2,b1,b2,c,x = map(mpf, [0.5,0.25,0.125,0.5,4.25,0.125])
>>> appellf3(a1,a2,b1,b2,c,x,1)
1.03947361709111140096947
>>> gammaprod([c,c-a2-b2],[c-a2,c-b2])*hyp3f2(a1,b1,c-a2-b2,c-a2,c-b2,x)
1.03947361709111140096947
The Appell F3 function satisfies a pair of partial
differential equations::
>>> a1,a2,b1,b2,c,x,y = map(mpf, [0.5,0.25,0.125,0.5,0.625,0.0625,-0.0625])
>>> F = lambda x,y: appellf3(a1,a2,b1,b2,c,x,y)
>>> chop(x*(1-x)*diff(F,(x,y),(2,0)) +
... y*diff(F,(x,y),(1,1)) +
... (c-(a1+b1+1)*x)*diff(F,(x,y),(1,0)) -
... a1*b1*F(x,y))
0.0
>>> chop(y*(1-y)*diff(F,(x,y),(0,2)) +
... x*diff(F,(x,y),(1,1)) +
... (c-(a2+b2+1)*y)*diff(F,(x,y),(0,1)) -
... a2*b2*F(x,y))
0.0
**References**
See references for :func:`~mpmath.appellf1`.
"""
appellf4 = r"""
Gives the Appell F4 hypergeometric function of two variables
.. math ::
F_4(a,b,c_1,c_2,x,y) = \sum_{m=0}^{\infty} \sum_{n=0}^{\infty}
\frac{(a)_{m+n} (b)_{m+n}}{(c_1)_m (c_2)_n}
\frac{x^m y^n}{m! n!}.
The series is generally absolutely convergent for
`\sqrt{|x|} + \sqrt{|y|} < 1`.
**Examples**
Evaluation for various parameters and arguments::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> appellf4(1,1,2,2,0.25,0.125)
1.286182069079718313546608
>>> appellf4(-2,-3,4,5,4,5)
34.8
>>> appellf4(5,4,2,3,0.25j,-0.125j)
(-0.2585967215437846642163352 + 2.436102233553582711818743j)
Reduction to `\,_2F_1` in a special case::
>>> a,b,c,x,y = map(mpf, [0.5,0.25,0.125,0.125,-0.125])
>>> appellf4(a,b,c,a+b-c+1,x*(1-y),y*(1-x))
1.129143488466850868248364
>>> hyp2f1(a,b,c,x)*hyp2f1(a,b,a+b-c+1,y)
1.129143488466850868248364
A system of partial differential equations satisfied by F4::
>>> a,b,c1,c2,x,y = map(mpf, [1,0.5,0.25,1.125,0.0625,-0.0625])
>>> F = lambda x,y: appellf4(a,b,c1,c2,x,y)
>>> chop(x*(1-x)*diff(F,(x,y),(2,0)) -
... y**2*diff(F,(x,y),(0,2)) -
... 2*x*y*diff(F,(x,y),(1,1)) +
... (c1-(a+b+1)*x)*diff(F,(x,y),(1,0)) -
... ((a+b+1)*y)*diff(F,(x,y),(0,1)) -
... a*b*F(x,y))
0.0
>>> chop(y*(1-y)*diff(F,(x,y),(0,2)) -
... x**2*diff(F,(x,y),(2,0)) -
... 2*x*y*diff(F,(x,y),(1,1)) +
... (c2-(a+b+1)*y)*diff(F,(x,y),(0,1)) -
... ((a+b+1)*x)*diff(F,(x,y),(1,0)) -
... a*b*F(x,y))
0.0
**References**
See references for :func:`~mpmath.appellf1`.
"""
zeta = r"""
Computes the Riemann zeta function
.. math ::
\zeta(s) = 1+\frac{1}{2^s}+\frac{1}{3^s}+\frac{1}{4^s}+\ldots
or, with `a \ne 1`, the more general Hurwitz zeta function
.. math ::
\zeta(s,a) = \sum_{k=0}^\infty \frac{1}{(a+k)^s}.
Optionally, ``zeta(s, a, n)`` computes the `n`-th derivative with
respect to `s`,
.. math ::
\zeta^{(n)}(s,a) = (-1)^n \sum_{k=0}^\infty \frac{\log^n(a+k)}{(a+k)^s}.
Although these series only converge for `\Re(s) > 1`, the Riemann and Hurwitz
zeta functions are defined through analytic continuation for arbitrary
complex `s \ne 1` (`s = 1` is a pole).
The implementation uses three algorithms: the Borwein algorithm for
the Riemann zeta function when `s` is close to the real line;
the Riemann-Siegel formula for the Riemann zeta function when the
imaginary part of `s` is large; and Euler-Maclaurin summation in all other cases.
The reflection formula for `\Re(s) < 0` is implemented in some cases.
The algorithm can be chosen with ``method = 'borwein'``,
``method='riemann-siegel'`` or ``method = 'euler-maclaurin'``.
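Borwein-type schemes rest on the alternating (Dirichlet eta) series; the
underlying identity `\zeta(s) = \eta(s)/(1-2^{1-s})` is easy to verify
with :func:`~mpmath.altzeta`; a sketch::

    from mpmath import mp, mpf, zeta, altzeta

    mp.dps = 25
    s = mpf(2.5)
    lhs = zeta(s)
    rhs = altzeta(s)/(1 - 2**(1 - s))
    # lhs and rhs agree (for s != 1 with 2^(1-s) != 1)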
The parameter `a` is usually a rational number `a = p/q`, and may be specified
as such by passing an integer tuple `(p, q)`. Evaluation is supported for
arbitrary complex `a`, but may be slow and/or inaccurate when `\Re(s) < 0` for
nonrational `a` or when computing derivatives.
**Examples**
Some values of the Riemann zeta function::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> zeta(2); pi**2 / 6
1.644934066848226436472415
1.644934066848226436472415
>>> zeta(0)
-0.5
>>> zeta(-1)
-0.08333333333333333333333333
>>> zeta(-2)
0.0
For large positive `s`, `\zeta(s)` rapidly approaches 1::
>>> zeta(50)
1.000000000000000888178421
>>> zeta(100)
1.0
>>> zeta(inf)
1.0
>>> 1-sum((zeta(k)-1)/k for k in range(2,85)); +euler
0.5772156649015328606065121
0.5772156649015328606065121
>>> nsum(lambda k: zeta(k)-1, [2, inf])
1.0
Evaluation is supported for complex `s` and `a`::
>>> zeta(-3+4j)
(-0.03373057338827757067584698 + 0.2774499251557093745297677j)
>>> zeta(2+3j, -1+j)
(389.6841230140842816370741 + 295.2674610150305334025962j)
The Riemann zeta function has so-called nontrivial zeros on
the critical line `s = 1/2 + it`::
>>> findroot(zeta, 0.5+14j); zetazero(1)
(0.5 + 14.13472514173469379045725j)
(0.5 + 14.13472514173469379045725j)
>>> findroot(zeta, 0.5+21j); zetazero(2)
(0.5 + 21.02203963877155499262848j)
(0.5 + 21.02203963877155499262848j)
>>> findroot(zeta, 0.5+25j); zetazero(3)
(0.5 + 25.01085758014568876321379j)
(0.5 + 25.01085758014568876321379j)
>>> chop(zeta(zetazero(10)))
0.0
Evaluation on and near the critical line is supported for large
heights `t` by means of the Riemann-Siegel formula (currently
for `a = 1`, `n \le 4`)::
>>> zeta(0.5+100000j)
(1.073032014857753132114076 + 5.780848544363503984261041j)
>>> zeta(0.75+1000000j)
(0.9535316058375145020351559 + 0.9525945894834273060175651j)
>>> zeta(0.5+10000000j)
(11.45804061057709254500227 - 8.643437226836021723818215j)
>>> zeta(0.5+100000000j, derivative=1)
(51.12433106710194942681869 + 43.87221167872304520599418j)
>>> zeta(0.5+100000000j, derivative=2)
(-444.2760822795430400549229 - 896.3789978119185981665403j)
>>> zeta(0.5+100000000j, derivative=3)
(3230.72682687670422215339 + 14374.36950073615897616781j)
>>> zeta(0.5+100000000j, derivative=4)
(-11967.35573095046402130602 - 218945.7817789262839266148j)
>>> zeta(1+10000000j) # off the line
(2.859846483332530337008882 + 0.491808047480981808903986j)
>>> zeta(1+10000000j, derivative=1)
(-4.333835494679647915673205 - 0.08405337962602933636096103j)
>>> zeta(1+10000000j, derivative=4)
(453.2764822702057701894278 - 581.963625832768189140995j)
For investigation of the zeta function zeros, the Riemann-Siegel
Z-function is often more convenient than working with the Riemann
zeta function directly (see :func:`~mpmath.siegelz`).
Some values of the Hurwitz zeta function::
>>> zeta(2, 3); -5./4 + pi**2/6
0.3949340668482264364724152
0.3949340668482264364724152
>>> zeta(2, (3,4)); pi**2 - 8*catalan
2.541879647671606498397663
2.541879647671606498397663
For positive integer values of `s`, the Hurwitz zeta function is
equivalent to a polygamma function (except for a normalizing factor)::
>>> zeta(4, (1,5)); psi(3, '1/5')/6
625.5408324774542966919938
625.5408324774542966919938
Evaluation of derivatives::
>>> zeta(0, 3+4j, 1); loggamma(3+4j) - ln(2*pi)/2
(-2.675565317808456852310934 + 4.742664438034657928194889j)
(-2.675565317808456852310934 + 4.742664438034657928194889j)
>>> zeta(2, 1, 20)
2432902008176640000.000242
>>> zeta(3+4j, 5.5+2j, 4)
(-0.140075548947797130681075 - 0.3109263360275413251313634j)
>>> zeta(0.5+100000j, 1, 4)
(-10407.16081931495861539236 + 13777.78669862804508537384j)
>>> zeta(-100+0.5j, (1,3), derivative=4)
(4.007180821099823942702249e+79 + 4.916117957092593868321778e+78j)
Generating a Taylor series at `s = 2` using derivatives::
>>> for k in range(11): print("%s * (s-2)^%i" % (zeta(2,1,k)/fac(k), k))
...
1.644934066848226436472415 * (s-2)^0
-0.9375482543158437537025741 * (s-2)^1
0.9946401171494505117104293 * (s-2)^2
-1.000024300473840810940657 * (s-2)^3
1.000061933072352565457512 * (s-2)^4
-1.000006869443931806408941 * (s-2)^5
1.000000173233769531820592 * (s-2)^6
-0.9999999569989868493432399 * (s-2)^7
0.9999999937218844508684206 * (s-2)^8
-0.9999999996355013916608284 * (s-2)^9
1.000000000004610645020747 * (s-2)^10
Evaluation at zero and for negative integer `s`::
>>> zeta(0, 10)
-9.5
>>> zeta(-2, (2,3)); mpf(1)/81
0.01234567901234567901234568
0.01234567901234567901234568
>>> zeta(-3+4j, (5,4))
(0.2899236037682695182085988 + 0.06561206166091757973112783j)
>>> zeta(-3.25, 1/pi)
-0.0005117269627574430494396877
>>> zeta(-3.5, pi, 1)
11.156360390440003294709
>>> zeta(-100.5, (8,3))
-4.68162300487989766727122e+77
>>> zeta(-10.5, (-8,3))
(-0.01521913704446246609237979 + 29907.72510874248161608216j)
>>> zeta(-1000.5, (-8,3))
(1.031911949062334538202567e+1770 + 1.519555750556794218804724e+426j)
>>> zeta(-1+j, 3+4j)
(-16.32988355630802510888631 - 22.17706465801374033261383j)
>>> zeta(-1+j, 3+4j, 2)
(32.48985276392056641594055 - 51.11604466157397267043655j)
>>> diff(lambda s: zeta(s, 3+4j), -1+j, 2)
(32.48985276392056641594055 - 51.11604466157397267043655j)
**References**
1. http://mathworld.wolfram.com/RiemannZetaFunction.html
2. http://mathworld.wolfram.com/HurwitzZetaFunction.html
3. http://www.cecm.sfu.ca/personal/pborwein/PAPERS/P155.pdf
"""
dirichlet = r"""
Evaluates the Dirichlet L-function
.. math ::
L(s,\chi) = \sum_{k=1}^\infty \frac{\chi(k)}{k^s}.
where `\chi` is a periodic sequence of length `q` which should be supplied
in the form of a list `[\chi(0), \chi(1), \ldots, \chi(q-1)]`.
Strictly, `\chi` should be a Dirichlet character, but any periodic
sequence will work.
For example, ``dirichlet(s, [1])`` gives the ordinary
Riemann zeta function and ``dirichlet(s, [-1,1])`` gives
the alternating zeta function (Dirichlet eta function).
Also the derivative with respect to `s` (currently only a first
derivative) can be evaluated.
**Examples**
The ordinary Riemann zeta function::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> dirichlet(3, [1]); zeta(3)
1.202056903159594285399738
1.202056903159594285399738
>>> dirichlet(1, [1])
+inf
The alternating zeta function::
>>> dirichlet(1, [-1,1]); ln(2)
0.6931471805599453094172321
0.6931471805599453094172321
The following defines the Dirichlet beta function
`\beta(s) = \sum_{k=0}^\infty \frac{(-1)^k}{(2k+1)^s}` and verifies
several values of this function::
>>> B = lambda s, d=0: dirichlet(s, [0, 1, 0, -1], d)
>>> B(0); 1./2
0.5
0.5
>>> B(1); pi/4
0.7853981633974483096156609
0.7853981633974483096156609
>>> B(2); +catalan
0.9159655941772190150546035
0.9159655941772190150546035
>>> B(2,1); diff(B, 2)
0.08158073611659279510291217
0.08158073611659279510291217
>>> B(-1,1); 2*catalan/pi
0.5831218080616375602767689
0.5831218080616375602767689
>>> B(0,1); log(gamma(0.25)**2/(2*pi*sqrt(2)))
0.3915943927068367764719453
0.3915943927068367764719454
>>> B(1,1); 0.25*pi*(euler+2*ln2+3*ln(pi)-4*ln(gamma(0.25)))
0.1929013167969124293631898
0.1929013167969124293631898
A custom L-series of period 3::
>>> dirichlet(2, [2,0,1])
0.7059715047839078092146831
>>> 2*nsum(lambda k: (3*k)**-2, [1,inf]) + \
... nsum(lambda k: (3*k+2)**-2, [0,inf])
0.7059715047839078092146831
"""
coulombf = r"""
Calculates the regular Coulomb wave function
.. math ::
F_l(\eta,z) = C_l(\eta) z^{l+1} e^{-iz} \,_1F_1(l+1-i\eta, 2l+2, 2iz)
where the normalization constant `C_l(\eta)` is as calculated by
:func:`~mpmath.coulombc`. This function solves the differential equation
.. math ::
f''(z) + \left(1-\frac{2\eta}{z}-\frac{l(l+1)}{z^2}\right) f(z) = 0.
A second linearly independent solution is given by the irregular
Coulomb wave function `G_l(\eta,z)` (see :func:`~mpmath.coulombg`)
and thus the general solution is
`f(z) = C_1 F_l(\eta,z) + C_2 G_l(\eta,z)` for arbitrary
constants `C_1`, `C_2`.
Physically, the Coulomb wave functions give the radial solution
to the Schrodinger equation for a point particle in a `1/z` potential; `z` is
then the radius and `l`, `\eta` are quantum numbers.
The Coulomb wave functions with real parameters are defined
in Abramowitz & Stegun, section 14. However, all parameters are permitted
to be complex in this implementation (see references).
**Plots**
.. literalinclude :: /plots/coulombf.py
.. image :: /plots/coulombf.png
.. literalinclude :: /plots/coulombf_c.py
.. image :: /plots/coulombf_c.png
**Examples**
Evaluation is supported for arbitrary magnitudes of `z`::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> coulombf(2, 1.5, 3.5)
0.4080998961088761187426445
>>> coulombf(-2, 1.5, 3.5)
0.7103040849492536747533465
>>> coulombf(2, 1.5, '1e-10')
4.143324917492256448770769e-33
>>> coulombf(2, 1.5, 1000)
0.4482623140325567050716179
>>> coulombf(2, 1.5, 10**10)
-0.066804196437694360046619
Verifying the differential equation::
>>> l, eta, z = 2, 3, mpf(2.75)
>>> A, B = 1, 2
>>> f = lambda z: A*coulombf(l,eta,z) + B*coulombg(l,eta,z)
>>> chop(diff(f,z,2) + (1-2*eta/z - l*(l+1)/z**2)*f(z))
0.0
A Wronskian relation satisfied by the Coulomb wave functions::
>>> l = 2
>>> eta = 1.5
>>> F = lambda z: coulombf(l,eta,z)
>>> G = lambda z: coulombg(l,eta,z)
>>> for z in [3.5, -1, 2+3j]:
... chop(diff(F,z)*G(z) - F(z)*diff(G,z))
...
1.0
1.0
1.0
Another Wronskian relation::
>>> F = coulombf
>>> G = coulombg
>>> for z in [3.5, -1, 2+3j]:
... chop(F(l-1,eta,z)*G(l,eta,z)-F(l,eta,z)*G(l-1,eta,z) - l/sqrt(l**2+eta**2))
...
0.0
0.0
0.0
An integral identity connecting the regular and irregular wave functions::
>>> l, eta, z = 4+j, 2-j, 5+2j
>>> coulombf(l,eta,z) + j*coulombg(l,eta,z)
(0.7997977752284033239714479 + 0.9294486669502295512503127j)
>>> g = lambda t: exp(-t)*t**(l-j*eta)*(t+2*j*z)**(l+j*eta)
>>> j*exp(-j*z)*z**(-l)/fac(2*l+1)/coulombc(l,eta)*quad(g, [0,inf])
(0.7997977752284033239714479 + 0.9294486669502295512503127j)
Some test cases with complex parameters, taken from Michel [2]::
>>> mp.dps = 15
>>> coulombf(1+0.1j, 50+50j, 100.156)
(-1.02107292320897e+15 - 2.83675545731519e+15j)
>>> coulombg(1+0.1j, 50+50j, 100.156)
(2.83675545731519e+15 - 1.02107292320897e+15j)
>>> coulombf(1e-5j, 10+1e-5j, 0.1+1e-6j)
(4.30566371247811e-14 - 9.03347835361657e-19j)
>>> coulombg(1e-5j, 10+1e-5j, 0.1+1e-6j)
(778709182061.134 + 18418936.2660553j)
The following reproduces a table in Abramowitz & Stegun, at twice
the precision::
>>> mp.dps = 10
>>> eta = 2; z = 5
>>> for l in [5, 4, 3, 2, 1, 0]:
... print("%s %s %s" % (l, coulombf(l,eta,z),
... diff(lambda z: coulombf(l,eta,z), z)))
...
5 0.09079533488 0.1042553261
4 0.2148205331 0.2029591779
3 0.4313159311 0.320534053
2 0.7212774133 0.3952408216
1 0.9935056752 0.3708676452
0 1.143337392 0.2937960375
**References**
1. I.J. Thompson & A.R. Barnett, "Coulomb and Bessel Functions of Complex
Arguments and Order", J. Comp. Phys., vol 64, no. 2, June 1986.
2. N. Michel, "Precise Coulomb wave functions for a wide range of
complex `l`, `\eta` and `z`", http://arxiv.org/abs/physics/0702051v1
"""
coulombg = r"""
Calculates the irregular Coulomb wave function
.. math ::
G_l(\eta,z) = \frac{F_l(\eta,z) \cos(\chi) - F_{-l-1}(\eta,z)}{\sin(\chi)}
where `\chi = \sigma_l - \sigma_{-l-1} - (l+1/2) \pi`
and `\sigma_l(\eta) = (\ln \Gamma(1+l+i\eta)-\ln \Gamma(1+l-i\eta))/(2i)`.
See :func:`~mpmath.coulombf` for additional information.
**Plots**
.. literalinclude :: /plots/coulombg.py
.. image :: /plots/coulombg.png
.. literalinclude :: /plots/coulombg_c.py
.. image :: /plots/coulombg_c.png
**Examples**
Evaluation is supported for arbitrary magnitudes of `z`::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> coulombg(-2, 1.5, 3.5)
1.380011900612186346255524
>>> coulombg(2, 1.5, 3.5)
1.919153700722748795245926
>>> coulombg(-2, 1.5, '1e-10')
201126715824.7329115106793
>>> coulombg(-2, 1.5, 1000)
0.1802071520691149410425512
>>> coulombg(-2, 1.5, 10**10)
0.652103020061678070929794
The following reproduces a table in Abramowitz & Stegun,
at twice the precision::
>>> mp.dps = 10
>>> eta = 2; z = 5
>>> for l in [1, 2, 3, 4, 5]:
... print("%s %s %s" % (l, coulombg(l,eta,z),
... -diff(lambda z: coulombg(l,eta,z), z)))
...
1 1.08148276 0.6028279961
2 1.496877075 0.5661803178
3 2.048694714 0.7959909551
4 3.09408669 1.731802374
5 5.629840456 4.549343289
Evaluation close to the singularity at `z = 0`::
>>> mp.dps = 15
>>> coulombg(0,10,1)
3088184933.67358
>>> coulombg(0,10,'1e-10')
5554866000719.8
>>> coulombg(0,10,'1e-100')
5554866221524.1
Evaluation with a half-integer value for `l`::
>>> coulombg(1.5, 1, 10)
0.852320038297334
"""
coulombc = r"""
Gives the normalizing Gamow constant for Coulomb wave functions,
.. math ::
C_l(\eta) = 2^l \exp\left(-\pi \eta/2 + [\ln \Gamma(1+l+i\eta) +
\ln \Gamma(1+l-i\eta)]/2 - \ln \Gamma(2l+2)\right),
where the log gamma function with continuous imaginary part
away from the negative half axis (see :func:`~mpmath.loggamma`) is implied.
This function is used internally for the calculation of
Coulomb wave functions, and automatically cached to make multiple
evaluations with fixed `l`, `\eta` fast.
"""
ellipfun = r"""
Computes any of the Jacobi elliptic functions, defined
in terms of Jacobi theta functions as
.. math ::
\mathrm{sn}(u,m) = \frac{\vartheta_3(0,q)}{\vartheta_2(0,q)}
\frac{\vartheta_1(t,q)}{\vartheta_4(t,q)}
\mathrm{cn}(u,m) = \frac{\vartheta_4(0,q)}{\vartheta_2(0,q)}
\frac{\vartheta_2(t,q)}{\vartheta_4(t,q)}
\mathrm{dn}(u,m) = \frac{\vartheta_4(0,q)}{\vartheta_3(0,q)}
\frac{\vartheta_3(t,q)}{\vartheta_4(t,q)},
or more generally computes a ratio of two such functions. Here
`t = u/\vartheta_3(0,q)^2`, and `q = q(m)` denotes the nome (see
:func:`~mpmath.nome`). Optionally, you can specify the nome directly
instead of `m` by passing ``q=<value>``, or you can directly
specify the elliptic parameter `k` with ``k=<value>``.
The first argument should be a two-character string specifying the
function using any combination of ``'s'``, ``'c'``, ``'d'``, ``'n'``. These
letters respectively denote the basic functions
`\mathrm{sn}(u,m)`, `\mathrm{cn}(u,m)`, `\mathrm{dn}(u,m)`, and `1`.
The identifier specifies the ratio of two such functions.
For example, ``'ns'`` identifies the function
.. math ::
\mathrm{ns}(u,m) = \frac{1}{\mathrm{sn}(u,m)}
and ``'cd'`` identifies the function
.. math ::
\mathrm{cd}(u,m) = \frac{\mathrm{cn}(u,m)}{\mathrm{dn}(u,m)}.
If called with only the first argument, a function object
evaluating the chosen function for given arguments is returned.
**Examples**
Basic evaluation::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> ellipfun('cd', 3.5, 0.5)
-0.9891101840595543931308394
>>> ellipfun('cd', 3.5, q=0.25)
0.07111979240214668158441418
The sn-function is doubly periodic in the complex plane with periods
`4 K(m)` and `2 i K(1-m)` (see :func:`~mpmath.ellipk`)::
>>> sn = ellipfun('sn')
>>> sn(2, 0.25)
0.9628981775982774425751399
>>> sn(2+4*ellipk(0.25), 0.25)
0.9628981775982774425751399
>>> chop(sn(2+2*j*ellipk(1-0.25), 0.25))
0.9628981775982774425751399
The cn-function is doubly periodic with periods `4 K(m)` and `2 K(m) + 2 i K(1-m)`::
>>> cn = ellipfun('cn')
>>> cn(2, 0.25)
-0.2698649654510865792581416
>>> cn(2+4*ellipk(0.25), 0.25)
-0.2698649654510865792581416
>>> chop(cn(2+2*ellipk(0.25)+2*j*ellipk(1-0.25), 0.25))
-0.2698649654510865792581416
The dn-function is doubly periodic with periods `2 K(m)` and `4 i K(1-m)`::
>>> dn = ellipfun('dn')
>>> dn(2, 0.25)
0.8764740583123262286931578
>>> dn(2+2*ellipk(0.25), 0.25)
0.8764740583123262286931578
>>> chop(dn(2+4*j*ellipk(1-0.25), 0.25))
0.8764740583123262286931578
"""
jtheta = r"""
Computes the Jacobi theta function `\vartheta_n(z, q)`, where
`n = 1, 2, 3, 4`, defined by the infinite series:
.. math ::
\vartheta_1(z,q) = 2 q^{1/4} \sum_{n=0}^{\infty}
(-1)^n q^{n^2+n\,} \sin((2n+1)z)
\vartheta_2(z,q) = 2 q^{1/4} \sum_{n=0}^{\infty}
q^{n^{2\,} + n} \cos((2n+1)z)
\vartheta_3(z,q) = 1 + 2 \sum_{n=1}^{\infty}
q^{n^2\,} \cos(2 n z)
\vartheta_4(z,q) = 1 + 2 \sum_{n=1}^{\infty}
(-q)^{n^2\,} \cos(2 n z)
The theta functions are functions of two variables:
* `z` is the *argument*, an arbitrary real or complex number
* `q` is the *nome*, which must be a real or complex number
in the unit disk (i.e. `|q| < 1`). For `|q| \ll 1`, the
series converge very quickly, so the Jacobi theta functions
can efficiently be evaluated to high precision.
The compact notations `\vartheta_n(q) = \vartheta_n(0,q)`
and `\vartheta_n = \vartheta_n(0,q)` are also frequently
encountered. Finally, Jacobi theta functions are frequently
considered as functions of the half-period ratio `\tau`
and then usually denoted by `\vartheta_n(z|\tau)`.
Optionally, ``jtheta(n, z, q, derivative=d)`` with `d > 0` computes
a `d`-th derivative with respect to `z`.
**Examples and basic properties**
Considered as functions of `z`, the Jacobi theta functions may be
viewed as generalizations of the ordinary trigonometric functions
cos and sin. They are periodic functions::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> jtheta(1, 0.25, '0.2')
0.2945120798627300045053104
>>> jtheta(1, 0.25 + 2*pi, '0.2')
0.2945120798627300045053104
Indeed, the series defining the theta functions are essentially
trigonometric Fourier series. The coefficients can be retrieved
using :func:`~mpmath.fourier`::
>>> mp.dps = 10
>>> nprint(fourier(lambda x: jtheta(2, x, 0.5), [-pi, pi], 4))
([0.0, 1.68179, 0.0, 0.420448, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0])
The Jacobi theta functions are also so-called quasiperiodic
functions of `z` and `\tau`, meaning that for fixed `\tau`,
`\vartheta_n(z, q)` and `\vartheta_n(z+\pi \tau, q)` are the same
except for an exponential factor::
>>> mp.dps = 25
>>> tau = 3*j/10
>>> q = exp(pi*j*tau)
>>> z = 10
>>> jtheta(4, z+tau*pi, q)
(-0.682420280786034687520568 + 1.526683999721399103332021j)
>>> -exp(-2*j*z)/q * jtheta(4, z, q)
(-0.682420280786034687520568 + 1.526683999721399103332021j)
The Jacobi theta functions satisfy a huge number of other
functional equations, such as the following identity (valid for
any `q`)::
>>> q = mpf(3)/10
>>> jtheta(3,0,q)**4
6.823744089352763305137427
>>> jtheta(2,0,q)**4 + jtheta(4,0,q)**4
6.823744089352763305137427
Extensive listings of identities satisfied by the Jacobi theta
functions can be found in standard reference works.
The Jacobi theta functions are related to the gamma function
for special arguments::
>>> jtheta(3, 0, exp(-pi))
1.086434811213308014575316
>>> pi**(1/4.) / gamma(3/4.)
1.086434811213308014575316
:func:`~mpmath.jtheta` supports arbitrary precision evaluation and complex
arguments::
>>> mp.dps = 50
>>> jtheta(4, sqrt(2), 0.5)
2.0549510717571539127004115835148878097035750653737
>>> mp.dps = 25
>>> jtheta(4, 1+2j, (1+j)/5)
(7.180331760146805926356634 - 1.634292858119162417301683j)
Evaluation of derivatives::
>>> mp.dps = 25
>>> jtheta(1, 7, 0.25, 1); diff(lambda z: jtheta(1, z, 0.25), 7)
1.209857192844475388637236
1.209857192844475388637236
>>> jtheta(1, 7, 0.25, 2); diff(lambda z: jtheta(1, z, 0.25), 7, 2)
-0.2598718791650217206533052
-0.2598718791650217206533052
>>> jtheta(2, 7, 0.25, 1); diff(lambda z: jtheta(2, z, 0.25), 7)
-1.150231437070259644461474
-1.150231437070259644461474
>>> jtheta(2, 7, 0.25, 2); diff(lambda z: jtheta(2, z, 0.25), 7, 2)
-0.6226636990043777445898114
-0.6226636990043777445898114
>>> jtheta(3, 7, 0.25, 1); diff(lambda z: jtheta(3, z, 0.25), 7)
-0.9990312046096634316587882
-0.9990312046096634316587882
>>> jtheta(3, 7, 0.25, 2); diff(lambda z: jtheta(3, z, 0.25), 7, 2)
-0.1530388693066334936151174
-0.1530388693066334936151174
>>> jtheta(4, 7, 0.25, 1); diff(lambda z: jtheta(4, z, 0.25), 7)
0.9820995967262793943571139
0.9820995967262793943571139
>>> jtheta(4, 7, 0.25, 2); diff(lambda z: jtheta(4, z, 0.25), 7, 2)
0.3936902850291437081667755
0.3936902850291437081667755
**Possible issues**
For `|q| \ge 1` or `\Im(\tau) \le 0`, :func:`~mpmath.jtheta` raises
``ValueError``. This exception is also raised for `|q|` extremely
close to 1 (or equivalently `\tau` very close to 0), since the
series would converge too slowly::
>>> jtheta(1, 10, 0.99999999 * exp(0.5*j))
Traceback (most recent call last):
...
ValueError: abs(q) > THETA_Q_LIM = 1.000000
"""
eulernum = r"""
Gives the `n`-th Euler number, defined as the `n`-th derivative of
`\mathrm{sech}(t) = 1/\cosh(t)` evaluated at `t = 0`. Equivalently, the
Euler numbers give the coefficients of the Taylor series
.. math ::
\mathrm{sech}(t) = \sum_{n=0}^{\infty} \frac{E_n}{n!} t^n.
The Euler numbers are closely related to Bernoulli numbers
and Bernoulli polynomials. They can also be evaluated in terms of
Euler polynomials (see :func:`~mpmath.eulerpoly`) as `E_n = 2^n E_n(1/2)`.
**Examples**
Computing the first few Euler numbers and verifying that they
agree with the Taylor series::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> [eulernum(n) for n in range(11)]
[1.0, 0.0, -1.0, 0.0, 5.0, 0.0, -61.0, 0.0, 1385.0, 0.0, -50521.0]
>>> chop(diffs(sech, 0, 10))
[1.0, 0.0, -1.0, 0.0, 5.0, 0.0, -61.0, 0.0, 1385.0, 0.0, -50521.0]
Euler numbers grow very rapidly. :func:`~mpmath.eulernum` efficiently
computes numerical approximations for large indices::
>>> eulernum(50)
-6.053285248188621896314384e+54
>>> eulernum(1000)
3.887561841253070615257336e+2371
>>> eulernum(10**20)
4.346791453661149089338186e+1936958564106659551331
Comparing with an asymptotic formula for the Euler numbers::
>>> n = 10**5
>>> (-1)**(n//2) * 8 * sqrt(n/(2*pi)) * (2*n/(pi*e))**n
3.69919063017432362805663e+436961
>>> eulernum(n)
3.699193712834466537941283e+436961
Pass ``exact=True`` to obtain exact values of Euler numbers as integers::
>>> print(eulernum(50, exact=True))
-6053285248188621896314383785111649088103498225146815121
>>> print(eulernum(200, exact=True) % 10**10)
1925859625
>>> eulernum(1001, exact=True)
0
"""
eulerpoly = r"""
Evaluates the Euler polynomial `E_n(z)`, defined by the generating function
representation
.. math ::
\frac{2e^{zt}}{e^t+1} = \sum_{n=0}^\infty E_n(z) \frac{t^n}{n!}.
The Euler polynomials may also be represented in terms of
Bernoulli polynomials (see :func:`~mpmath.bernpoly`) using various formulas, for
example
.. math ::
    E_n(z) = \frac{2}{n+1} \left(
        B_{n+1}(z)-2^{n+1}B_{n+1}\left(\frac{z}{2}\right)
        \right).
Special values include the Euler numbers `E_n = 2^n E_n(1/2)` (see
:func:`~mpmath.eulernum`).
**Examples**
Computing the coefficients of the first few Euler polynomials::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> for n in range(6):
... chop(taylor(lambda z: eulerpoly(n,z), 0, n))
...
[1.0]
[-0.5, 1.0]
[0.0, -1.0, 1.0]
[0.25, 0.0, -1.5, 1.0]
[0.0, 1.0, 0.0, -2.0, 1.0]
[-0.5, 0.0, 2.5, 0.0, -2.5, 1.0]
Evaluation for arbitrary `z`::
>>> eulerpoly(2,3)
6.0
>>> eulerpoly(5,4)
423.5
>>> eulerpoly(35, 11111111112)
3.994957561486776072734601e+351
>>> eulerpoly(4, 10+20j)
(-47990.0 - 235980.0j)
>>> eulerpoly(2, '-3.5e-5')
0.000035001225
>>> eulerpoly(3, 0.5)
0.0
>>> eulerpoly(55, -10**80)
-1.0e+4400
>>> eulerpoly(5, -inf)
-inf
>>> eulerpoly(6, -inf)
+inf
Computing Euler numbers::
>>> 2**26 * eulerpoly(26,0.5)
-4087072509293123892361.0
>>> eulernum(26)
-4087072509293123892361.0
Evaluation is accurate for large `n` and small `z`::
>>> eulerpoly(100, 0.5)
2.29047999988194114177943e+108
>>> eulerpoly(1000, 10.5)
3.628120031122876847764566e+2070
>>> eulerpoly(10000, 10.5)
1.149364285543783412210773e+30688
"""
spherharm = r"""
Evaluates the spherical harmonic `Y_l^m(\theta,\phi)`,
.. math ::
Y_l^m(\theta,\phi) = \sqrt{\frac{2l+1}{4\pi}\frac{(l-m)!}{(l+m)!}}
P_l^m(\cos \theta) e^{i m \phi}
where `P_l^m` is an associated Legendre function (see :func:`~mpmath.legenp`).
Here `\theta \in [0, \pi]` denotes the polar coordinate (ranging
from the north pole to the south pole) and `\phi \in [0, 2 \pi]` denotes the
azimuthal coordinate on a sphere. Care should be used since many different
conventions for spherical coordinate variables are used.
Usually spherical harmonics are considered for `l \in \mathbb{N}`,
`m \in \mathbb{Z}`, `|m| \le l`. More generally, `l,m,\theta,\phi`
are permitted to be complex numbers.
.. note ::
:func:`~mpmath.spherharm` returns a complex number, even if the value is
purely real.
**Plots**
.. literalinclude :: /plots/spherharm40.py
`Y_{4,0}`:
.. image :: /plots/spherharm40.png
`Y_{4,1}`:
.. image :: /plots/spherharm41.png
`Y_{4,2}`:
.. image :: /plots/spherharm42.png
`Y_{4,3}`:
.. image :: /plots/spherharm43.png
`Y_{4,4}`:
.. image :: /plots/spherharm44.png
**Examples**
Some low-order spherical harmonics with reference values::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> theta = pi/4
>>> phi = pi/3
>>> spherharm(0,0,theta,phi); 0.5*sqrt(1/pi)*expj(0)
(0.2820947917738781434740397 + 0.0j)
(0.2820947917738781434740397 + 0.0j)
>>> spherharm(1,-1,theta,phi); 0.5*sqrt(3/(2*pi))*expj(-phi)*sin(theta)
(0.1221506279757299803965962 - 0.2115710938304086076055298j)
(0.1221506279757299803965962 - 0.2115710938304086076055298j)
>>> spherharm(1,0,theta,phi); 0.5*sqrt(3/pi)*cos(theta)*expj(0)
(0.3454941494713354792652446 + 0.0j)
(0.3454941494713354792652446 + 0.0j)
>>> spherharm(1,1,theta,phi); -0.5*sqrt(3/(2*pi))*expj(phi)*sin(theta)
(-0.1221506279757299803965962 - 0.2115710938304086076055298j)
(-0.1221506279757299803965962 - 0.2115710938304086076055298j)
With the normalization convention used, the spherical harmonics are orthonormal
on the unit sphere::
>>> sphere = [0,pi], [0,2*pi]
>>> dS = lambda t,p: fp.sin(t) # differential element
>>> Y1 = lambda t,p: fp.spherharm(l1,m1,t,p)
>>> Y2 = lambda t,p: fp.conj(fp.spherharm(l2,m2,t,p))
>>> l1 = l2 = 3; m1 = m2 = 2
>>> fp.chop(fp.quad(lambda t,p: Y1(t,p)*Y2(t,p)*dS(t,p), *sphere))
1.0000000000000007
>>> m2 = 1 # m1 != m2
>>> print(fp.chop(fp.quad(lambda t,p: Y1(t,p)*Y2(t,p)*dS(t,p), *sphere)))
0.0
Evaluation is accurate for large orders::
>>> spherharm(1000,750,0.5,0.25)
(3.776445785304252879026585e-102 - 5.82441278771834794493484e-102j)
Evaluation works with complex parameter values::
>>> spherharm(1+j, 2j, 2+3j, -0.5j)
(64.44922331113759992154992 + 1981.693919841408089681743j)
"""
scorergi = r"""
Evaluates the Scorer function
.. math ::
\operatorname{Gi}(z) =
\operatorname{Ai}(z) \int_0^z \operatorname{Bi}(t) dt +
\operatorname{Bi}(z) \int_z^{\infty} \operatorname{Ai}(t) dt
which gives a particular solution to the inhomogeneous Airy
differential equation `f''(z) - z f(z) = -1/\pi`. Another
particular solution is given by the Scorer Hi-function
(:func:`~mpmath.scorerhi`). The two functions are related as
`\operatorname{Gi}(z) + \operatorname{Hi}(z) = \operatorname{Bi}(z)`.
**Plots**
.. literalinclude :: /plots/gi.py
.. image :: /plots/gi.png
.. literalinclude :: /plots/gi_c.py
.. image :: /plots/gi_c.png
**Examples**
Some values and limits::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> scorergi(0); 1/(power(3,'7/6')*gamma('2/3'))
0.2049755424820002450503075
0.2049755424820002450503075
>>> diff(scorergi, 0); 1/(power(3,'5/6')*gamma('1/3'))
0.1494294524512754526382746
0.1494294524512754526382746
>>> scorergi(+inf); scorergi(-inf)
0.0
0.0
>>> scorergi(1)
0.2352184398104379375986902
>>> scorergi(-1)
-0.1166722172960152826494198
Evaluation for large arguments::
>>> scorergi(10)
0.03189600510067958798062034
>>> scorergi(100)
0.003183105228162961476590531
>>> scorergi(1000000)
0.0000003183098861837906721743873
>>> 1/(pi*1000000)
0.0000003183098861837906715377675
>>> scorergi(-1000)
-0.08358288400262780392338014
>>> scorergi(-100000)
0.02886866118619660226809581
>>> scorergi(50+10j)
(0.0061214102799778578790984 - 0.001224335676457532180747917j)
>>> scorergi(-50-10j)
(5.236047850352252236372551e+29 - 3.08254224233701381482228e+29j)
>>> scorergi(100000j)
(-8.806659285336231052679025e+6474077 + 8.684731303500835514850962e+6474077j)
Verifying the connection between Gi and Hi::
>>> z = 0.25
>>> scorergi(z) + scorerhi(z)
0.7287469039362150078694543
>>> airybi(z)
0.7287469039362150078694543
Verifying the differential equation::
>>> for z in [-3.4, 0, 2.5, 1+2j]:
... chop(diff(scorergi,z,2) - z*scorergi(z))
...
-0.3183098861837906715377675
-0.3183098861837906715377675
-0.3183098861837906715377675
-0.3183098861837906715377675
Verifying the integral representation::
>>> z = 0.5
>>> scorergi(z)
0.2447210432765581976910539
>>> Ai,Bi = airyai,airybi
>>> Bi(z)*(Ai(inf,-1)-Ai(z,-1)) + Ai(z)*(Bi(z,-1)-Bi(0,-1))
0.2447210432765581976910539
**References**
1. [DLMF]_ section 9.12: Scorer Functions
"""
scorerhi = r"""
Evaluates the second Scorer function
.. math ::
\operatorname{Hi}(z) =
\operatorname{Bi}(z) \int_{-\infty}^z \operatorname{Ai}(t) dt -
\operatorname{Ai}(z) \int_{-\infty}^z \operatorname{Bi}(t) dt
which gives a particular solution to the inhomogeneous Airy
differential equation `f''(z) - z f(z) = 1/\pi`. See also
:func:`~mpmath.scorergi`.
**Plots**
.. literalinclude :: /plots/hi.py
.. image :: /plots/hi.png
.. literalinclude :: /plots/hi_c.py
.. image :: /plots/hi_c.png
**Examples**
Some values and limits::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> scorerhi(0); 2/(power(3,'7/6')*gamma('2/3'))
0.4099510849640004901006149
0.4099510849640004901006149
>>> diff(scorerhi,0); 2/(power(3,'5/6')*gamma('1/3'))
0.2988589049025509052765491
0.2988589049025509052765491
>>> scorerhi(+inf); scorerhi(-inf)
+inf
0.0
>>> scorerhi(1)
0.9722051551424333218376886
>>> scorerhi(-1)
0.2206696067929598945381098
Evaluation for large arguments::
>>> scorerhi(10)
455641153.5163291358991077
>>> scorerhi(100)
6.041223996670201399005265e+288
>>> scorerhi(1000000)
7.138269638197858094311122e+289529652
>>> scorerhi(-10)
0.0317685352825022727415011
>>> scorerhi(-100)
0.003183092495767499864680483
>>> scorerhi(100j)
(-6.366197716545672122983857e-9 + 0.003183098861710582761688475j)
>>> scorerhi(50+50j)
(-5.322076267321435669290334e+63 + 1.478450291165243789749427e+65j)
>>> scorerhi(-1000-1000j)
(0.0001591549432510502796565538 - 0.000159154943091895334973109j)
Verifying the differential equation::
>>> for z in [-3.4, 0, 2, 1+2j]:
... chop(diff(scorerhi,z,2) - z*scorerhi(z))
...
0.3183098861837906715377675
0.3183098861837906715377675
0.3183098861837906715377675
0.3183098861837906715377675
Verifying the integral representation::
>>> z = 0.5
>>> scorerhi(z)
0.6095559998265972956089949
>>> Ai,Bi = airyai,airybi
>>> Bi(z)*(Ai(z,-1)-Ai(-inf,-1)) - Ai(z)*(Bi(z,-1)-Bi(-inf,-1))
0.6095559998265972956089949
"""
stirling1 = r"""
Gives the Stirling number of the first kind `s(n,k)`, defined by
.. math ::
x(x-1)(x-2)\cdots(x-n+1) = \sum_{k=0}^n s(n,k) x^k.
The value is computed using an integer recurrence. The implementation
is not optimized for approximating large values quickly.
**Examples**
Comparing with the generating function::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> taylor(lambda x: ff(x, 5), 0, 5)
[0.0, 24.0, -50.0, 35.0, -10.0, 1.0]
>>> [stirling1(5, k) for k in range(6)]
[0.0, 24.0, -50.0, 35.0, -10.0, 1.0]
Recurrence relation::
>>> n, k = 5, 3
>>> stirling1(n+1,k) + n*stirling1(n,k) - stirling1(n,k-1)
0.0
The matrices of Stirling numbers of first and second kind are inverses
of each other::
>>> A = matrix(5, 5); B = matrix(5, 5)
>>> for n in range(5):
... for k in range(5):
... A[n,k] = stirling1(n,k)
... B[n,k] = stirling2(n,k)
...
>>> A * B
[1.0 0.0 0.0 0.0 0.0]
[0.0 1.0 0.0 0.0 0.0]
[0.0 0.0 1.0 0.0 0.0]
[0.0 0.0 0.0 1.0 0.0]
[0.0 0.0 0.0 0.0 1.0]
Pass ``exact=True`` to obtain exact values of Stirling numbers as integers::
>>> stirling1(42, 5)
-2.864498971768501633736628e+50
>>> print(stirling1(42, 5, exact=True))
-286449897176850163373662803014001546235808317440000
"""
stirling2 = r"""
Gives the Stirling number of the second kind `S(n,k)`, defined by
.. math ::
x^n = \sum_{k=0}^n S(n,k) x(x-1)(x-2)\cdots(x-k+1)
The value is computed using integer arithmetic to evaluate a power sum.
The implementation is not optimized for approximating large values quickly.
**Examples**
Comparing with the generating function::
>>> from mpmath import *
>>> mp.dps = 25; mp.pretty = True
>>> taylor(lambda x: sum(stirling2(5,k) * ff(x,k) for k in range(6)), 0, 5)
[0.0, 0.0, 0.0, 0.0, 0.0, 1.0]
Recurrence relation::
>>> n, k = 5, 3
>>> stirling2(n+1,k) - k*stirling2(n,k) - stirling2(n,k-1)
0.0
Pass ``exact=True`` to obtain exact values of Stirling numbers as integers::
>>> stirling2(52, 10)
2.641822121003543906807485e+45
>>> print(stirling2(52, 10, exact=True))
2641822121003543906807485307053638921722527655
"""
| bsd-3-clause | -8,398,017,694,677,076,000 | 26.906685 | 94 | 0.628188 | false |
BenKeyFSI/poedit | deps/boost/tools/build/test/library_property.py | 44 | 1126 | #!/usr/bin/python
# Copyright 2004 Vladimir Prus
# Distributed under the Boost Software License, Version 1.0.
# (See accompanying file LICENSE_1_0.txt or http://www.boost.org/LICENSE_1_0.txt)
# Test that the <library> property has no effect on "obj" targets. Previously,
# it affected all targets, so
#
# project : requirements <library>foo ;
# exe a : a.cpp helper ;
# obj helper : helper.cpp : <optimization>off ;
#
# caused 'foo' to be built with and without optimization.
import BoostBuild
t = BoostBuild.Tester(use_test_config=False)
t.write("jamroot.jam", """
project : requirements <library>lib//x ;
exe a : a.cpp foo ;
obj foo : foo.cpp : <variant>release ;
""")
t.write("a.cpp", """
void aux();
int main() { aux(); }
""")
t.write("foo.cpp", """
void gee();
void aux() { gee(); }
""")
t.write("lib/x.cpp", """
void
#if defined(_WIN32)
__declspec(dllexport)
#endif
gee() {}
""")
t.write("lib/jamfile.jam", """
lib x : x.cpp ;
""")
t.write("lib/jamroot.jam", """
""")
t.run_build_system()
t.expect_addition("bin/$toolset/debug/a.exe")
t.expect_nothing("lib/bin/$toolset/release/x.obj")
t.cleanup()
| mit | 9,121,069,835,786,699,000 | 19.107143 | 81 | 0.650977 | false |
jaggu303619/asylum-v2.0 | openerp/addons/project_issue/__init__.py | 433 | 1131 | # -*- encoding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2010 Tiny SPRL (<http://tiny.be>). All Rights Reserved
# $Id$
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
import project_issue
import report
import res_config
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 | 1,729,967,506,114,666,200 | 40.888889 | 80 | 0.623342 | false |
snowch/bluemix-spark-examples | examples/DashDB/importfromdashdb.py | 2 | 1389 | #
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from __future__ import print_function
import sys
from operator import add
import base64
from pyspark import SparkContext
from pyspark.sql import SQLContext
if __name__ == "__main__":
if len(sys.argv) != 2:
print("Usage: importfromdashdb <dash jdbc url>", file=sys.stderr)
exit(-1)
dashdb_jdbc_url = base64.b64decode(sys.argv[1])
    sc = SparkContext(appName="dashDB data pull")
sqlContext = SQLContext(sc)
dashdata = sqlContext.read.format('jdbc').options(url=dashdb_jdbc_url, dbtable='SAMPLES.LANGUAGE').load()
print(dashdata.rdd.take(10))
sc.stop()
| apache-2.0 | 1,663,112,396,057,408,500 | 31.302326 | 109 | 0.733621 | false |
lmgichin/formations | python/npyscr.py | 1 | 1948 | # -*- coding: utf-8 -*-
import npyscreen
class MyAutoComplete(npyscreen.Autocomplete):
colors = ["Jaune","Bleu","Rouge","Vert", "Vert foncΓ©"]
def auto_complete(self, input):
choices = []
for word in MyAutoComplete.colors:
if word.startswith(self.value):
choices.append(word)
self.value = choices[self.get_choice(choices)]
class MyTitleAutoComplete(npyscreen.TitleText):
_entry_type = MyAutoComplete
class MyScreen(npyscreen.NPSApp):
def main(self):
npyscreen.setTheme(npyscreen.Themes.ColorfulTheme)
f = npyscreen.ActionForm(name = u"C'est ma fenΓͺtre...")
f.add(npyscreen.FixedText, value="Texte non modifiable...")
nom = f.add(npyscreen.TitleText, name = "Saisir le nom", value ="<None>")
rvalues = ["Option 1","Option 2", "Option 3"]
radio = f.add(npyscreen.TitleSelectOne, max_height=len(rvalues)+1, value=[0], \
name="Choix :", values = rvalues, scroll_exit=False)
cbox = f.add(npyscreen.TitleMultiSelect, max_height=len(rvalues)+1, value=[0], \
name="Choix :", values = rvalues, scroll_exit=False)
tauto = f.add(MyTitleAutoComplete, name = "Couleur : ")
f.edit()
#npyscreen.notify_wait("Valeur saisie : " + nom.value, title="Check...")
#npyscreen.notify_wait("Valeur saisie : " + radio.get_selected_objects()[0], title="Check radio...")
#npyscreen.notify_wait("Valeur saisie : " + tauto.value, title="Check auto...")
#lval = ""
#for val in cbox.get_selected_objects():
# lval += val
#npyscreen.notify_wait("Valeur saisie : " + lval, title="Check box...")
def on_cancel(self):
npyscreen.notify_wait("Valeur cancel", title="OK")
def on_ok(self):
npyscreen.notify_wait("Valeur ok", title="OK")
if __name__ == '__main__':
app = MyScreen()
app.run() | gpl-2.0 | 6,961,815,802,718,789,000 | 30.918033 | 108 | 0.600206 | false |
pexip/meson | mesonbuild/modules/__init__.py | 5 | 2345 | import os
from .. import build
class ExtensionModule:
def __init__(self, interpreter):
self.interpreter = interpreter
self.snippets = set() # List of methods that operate only on the interpreter.
def is_snippet(self, funcname):
return funcname in self.snippets
def get_include_args(include_dirs, prefix='-I'):
'''
Expand include arguments to refer to the source and build dirs
by using @SOURCE_ROOT@ and @BUILD_ROOT@ for later substitution
'''
if not include_dirs:
return []
dirs_str = []
for incdirs in include_dirs:
if hasattr(incdirs, "held_object"):
dirs = incdirs.held_object
else:
dirs = incdirs
if isinstance(dirs, str):
dirs_str += ['%s%s' % (prefix, dirs)]
continue
# Should be build.IncludeDirs object.
basedir = dirs.get_curdir()
for d in dirs.get_incdirs():
expdir = os.path.join(basedir, d)
srctreedir = os.path.join('@SOURCE_ROOT@', expdir)
buildtreedir = os.path.join('@BUILD_ROOT@', expdir)
dirs_str += ['%s%s' % (prefix, buildtreedir),
'%s%s' % (prefix, srctreedir)]
for d in dirs.get_extra_build_dirs():
dirs_str += ['%s%s' % (prefix, d)]
return dirs_str
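# Minimal usage sketch (the helper name is illustrative, not part of the
# Meson API): plain string entries are simply prefixed, while IncludeDirs
# objects expand to both the source and build trees as implemented above.
def _include_args_example():
    return get_include_args(['/usr/include/foo'])  # ['-I/usr/include/foo']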
class ModuleReturnValue:
def __init__(self, return_value, new_objects):
self.return_value = return_value
assert(isinstance(new_objects, list))
self.new_objects = new_objects
class GResourceTarget(build.CustomTarget):
def __init__(self, name, subdir, subproject, kwargs):
super().__init__(name, subdir, subproject, kwargs)
class GResourceHeaderTarget(build.CustomTarget):
def __init__(self, name, subdir, subproject, kwargs):
super().__init__(name, subdir, subproject, kwargs)
class GirTarget(build.CustomTarget):
def __init__(self, name, subdir, subproject, kwargs):
super().__init__(name, subdir, subproject, kwargs)
class TypelibTarget(build.CustomTarget):
def __init__(self, name, subdir, subproject, kwargs):
super().__init__(name, subdir, subproject, kwargs)
class VapiTarget(build.CustomTarget):
def __init__(self, name, subdir, subproject, kwargs):
super().__init__(name, subdir, subproject, kwargs)
| apache-2.0 | 6,426,759,657,867,208,000 | 32.028169 | 85 | 0.61322 | false |
pim89/youtube-dl | youtube_dl/extractor/noz.py | 26 | 3664 | # coding: utf-8
from __future__ import unicode_literals
from .common import InfoExtractor
from ..compat import (
compat_urllib_parse_unquote,
compat_xpath,
)
from ..utils import (
int_or_none,
find_xpath_attr,
xpath_text,
update_url_query,
)
class NozIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?noz\.de/video/(?P<id>[0-9]+)/'
_TESTS = [{
'url': 'http://www.noz.de/video/25151/32-Deutschland-gewinnt-Badminton-Lnderspiel-in-Melle',
'info_dict': {
'id': '25151',
'ext': 'mp4',
'duration': 215,
            'title': '3:2 - Deutschland gewinnt Badminton-Länderspiel in Melle',
'description': 'Vor rund 370 Zuschauern gewinnt die deutsche Badminton-Nationalmannschaft am Donnerstag ein EM-Vorbereitungsspiel gegen Frankreich in Melle. Video Moritz Frankenberg.',
'thumbnail': 're:^http://.*\.jpg',
},
}]
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
description = self._og_search_description(webpage)
edge_url = self._html_search_regex(
r'<script\s+(?:type="text/javascript"\s+)?src="(.*?/videojs_.*?)"',
webpage, 'edge URL')
edge_content = self._download_webpage(edge_url, 'meta configuration')
config_url_encoded = self._search_regex(
r'so\.addVariable\("config_url","[^,]*,(.*?)"',
edge_content, 'config URL'
)
config_url = compat_urllib_parse_unquote(config_url_encoded)
doc = self._download_xml(config_url, 'video configuration')
title = xpath_text(doc, './/title')
thumbnail = xpath_text(doc, './/article/thumbnail/url')
duration = int_or_none(xpath_text(
doc, './/article/movie/file/duration'))
formats = []
for qnode in doc.findall(compat_xpath('.//article/movie/file/qualities/qual')):
http_url_ele = find_xpath_attr(
qnode, './html_urls/video_url', 'format', 'video/mp4')
http_url = http_url_ele.text if http_url_ele is not None else None
if http_url:
formats.append({
'url': http_url,
'format_name': xpath_text(qnode, './name'),
'format_id': '%s-%s' % ('http', xpath_text(qnode, './id')),
'height': int_or_none(xpath_text(qnode, './height')),
'width': int_or_none(xpath_text(qnode, './width')),
'tbr': int_or_none(xpath_text(qnode, './bitrate'), scale=1000),
})
else:
f4m_url = xpath_text(qnode, 'url_hd2')
if f4m_url:
formats.extend(self._extract_f4m_formats(
update_url_query(f4m_url, {'hdcore': '3.4.0'}),
video_id, f4m_id='hds', fatal=False))
m3u8_url_ele = find_xpath_attr(
qnode, './html_urls/video_url',
'format', 'application/vnd.apple.mpegurl')
m3u8_url = m3u8_url_ele.text if m3u8_url_ele is not None else None
if m3u8_url:
formats.extend(self._extract_m3u8_formats(
m3u8_url, video_id, 'mp4', 'm3u8_native',
m3u8_id='hls', fatal=False))
self._sort_formats(formats)
return {
'id': video_id,
'formats': formats,
'title': title,
'duration': duration,
'description': description,
'thumbnail': thumbnail,
}
| unlicense | 3,701,815,465,301,491,000 | 40.157303 | 196 | 0.528529 | false |
akaihola/django | django/core/cache/backends/filebased.py | 11 | 4711 | "File-based cache backend"
import hashlib
import os
import shutil
import time
try:
import cPickle as pickle
except ImportError:
import pickle
from django.core.cache.backends.base import BaseCache
class FileBasedCache(BaseCache):
def __init__(self, dir, params):
BaseCache.__init__(self, params)
self._dir = dir
if not os.path.exists(self._dir):
self._createdir()
def add(self, key, value, timeout=None, version=None):
if self.has_key(key, version=version):
return False
self.set(key, value, timeout, version=version)
return True
def get(self, key, default=None, version=None):
key = self.make_key(key, version=version)
self.validate_key(key)
fname = self._key_to_file(key)
try:
with open(fname, 'rb') as f:
exp = pickle.load(f)
now = time.time()
if exp < now:
self._delete(fname)
else:
return pickle.load(f)
except (IOError, OSError, EOFError, pickle.PickleError):
pass
return default
def set(self, key, value, timeout=None, version=None):
key = self.make_key(key, version=version)
self.validate_key(key)
fname = self._key_to_file(key)
dirname = os.path.dirname(fname)
if timeout is None:
timeout = self.default_timeout
self._cull()
try:
if not os.path.exists(dirname):
os.makedirs(dirname)
with open(fname, 'wb') as f:
now = time.time()
pickle.dump(now + timeout, f, pickle.HIGHEST_PROTOCOL)
pickle.dump(value, f, pickle.HIGHEST_PROTOCOL)
except (IOError, OSError):
pass
def delete(self, key, version=None):
key = self.make_key(key, version=version)
self.validate_key(key)
try:
self._delete(self._key_to_file(key))
except (IOError, OSError):
pass
def _delete(self, fname):
os.remove(fname)
try:
# Remove the 2 subdirs if they're empty
dirname = os.path.dirname(fname)
os.rmdir(dirname)
os.rmdir(os.path.dirname(dirname))
except (IOError, OSError):
pass
def has_key(self, key, version=None):
key = self.make_key(key, version=version)
self.validate_key(key)
fname = self._key_to_file(key)
try:
with open(fname, 'rb') as f:
exp = pickle.load(f)
now = time.time()
if exp < now:
self._delete(fname)
return False
else:
return True
except (IOError, OSError, EOFError, pickle.PickleError):
return False
def _cull(self):
if int(self._num_entries) < self._max_entries:
return
try:
filelist = sorted(os.listdir(self._dir))
except (IOError, OSError):
return
if self._cull_frequency == 0:
doomed = filelist
else:
doomed = [os.path.join(self._dir, k) for (i, k) in enumerate(filelist) if i % self._cull_frequency == 0]
for topdir in doomed:
try:
for root, _, files in os.walk(topdir):
for f in files:
self._delete(os.path.join(root, f))
except (IOError, OSError):
pass
def _createdir(self):
try:
os.makedirs(self._dir)
except OSError:
raise EnvironmentError("Cache directory '%s' does not exist and could not be created'" % self._dir)
def _key_to_file(self, key):
"""
Convert the filename into an md5 string. We'll turn the first couple
bits of the path into directory prefixes to be nice to filesystems
that have problems with large numbers of files in a directory.
Thus, a cache key of "foo" gets turnned into a file named
``{cache-dir}ac/bd/18db4cc2f85cedef654fccc4a4d8``.
"""
path = hashlib.md5(key).hexdigest()
path = os.path.join(path[:2], path[2:4], path[4:])
return os.path.join(self._dir, path)
def _get_num_entries(self):
count = 0
for _,_,files in os.walk(self._dir):
count += len(files)
return count
_num_entries = property(_get_num_entries)
def clear(self):
try:
shutil.rmtree(self._dir)
except (IOError, OSError):
pass
# For backwards compatibility
class CacheClass(FileBasedCache):
pass
| bsd-3-clause | 5,261,399,535,842,471,000 | 28.816456 | 116 | 0.54447 | false |
rimbalinux/MSISDNArea | django/utils/simplejson/decoder.py | 13 | 12297 | """Implementation of JSONDecoder
"""
import re
import sys
import struct
from django.utils.simplejson.scanner import make_scanner
c_scanstring = None
__all__ = ['JSONDecoder']
FLAGS = re.VERBOSE | re.MULTILINE | re.DOTALL
def _floatconstants():
_BYTES = '7FF80000000000007FF0000000000000'.decode('hex')
if sys.byteorder != 'big':
_BYTES = _BYTES[:8][::-1] + _BYTES[8:][::-1]
nan, inf = struct.unpack('dd', _BYTES)
return nan, inf, -inf
NaN, PosInf, NegInf = _floatconstants()
def linecol(doc, pos):
lineno = doc.count('\n', 0, pos) + 1
if lineno == 1:
colno = pos
else:
colno = pos - doc.rindex('\n', 0, pos)
return lineno, colno
def errmsg(msg, doc, pos, end=None):
# Note that this function is called from _speedups
lineno, colno = linecol(doc, pos)
if end is None:
return '%s: line %d column %d (char %d)' % (msg, lineno, colno, pos)
endlineno, endcolno = linecol(doc, end)
return '%s: line %d column %d - line %d column %d (char %d - %d)' % (
msg, lineno, colno, endlineno, endcolno, pos, end)
_CONSTANTS = {
'-Infinity': NegInf,
'Infinity': PosInf,
'NaN': NaN,
}
STRINGCHUNK = re.compile(r'(.*?)(["\\\x00-\x1f])', FLAGS)
BACKSLASH = {
'"': u'"', '\\': u'\\', '/': u'/',
'b': u'\b', 'f': u'\f', 'n': u'\n', 'r': u'\r', 't': u'\t',
}
DEFAULT_ENCODING = "utf-8"
def py_scanstring(s, end, encoding=None, strict=True, _b=BACKSLASH, _m=STRINGCHUNK.match):
"""Scan the string s for a JSON string. End is the index of the
character in s after the quote that started the JSON string.
Unescapes all valid JSON string escape sequences and raises ValueError
on attempt to decode an invalid string. If strict is False then literal
control characters are allowed in the string.
Returns a tuple of the decoded string and the index of the character in s
after the end quote."""
if encoding is None:
encoding = DEFAULT_ENCODING
chunks = []
_append = chunks.append
begin = end - 1
while 1:
chunk = _m(s, end)
if chunk is None:
raise ValueError(
errmsg("Unterminated string starting at", s, begin))
end = chunk.end()
content, terminator = chunk.groups()
        # Content contains zero or more unescaped string characters
if content:
if not isinstance(content, unicode):
content = unicode(content, encoding)
_append(content)
# Terminator is the end of string, a literal control character,
# or a backslash denoting that an escape sequence follows
if terminator == '"':
break
elif terminator != '\\':
if strict:
msg = "Invalid control character %r at" % (terminator,)
raise ValueError(msg, s, end)
else:
_append(terminator)
continue
try:
esc = s[end]
except IndexError:
raise ValueError(
errmsg("Unterminated string starting at", s, begin))
# If not a unicode escape sequence, must be in the lookup table
if esc != 'u':
try:
char = _b[esc]
except KeyError:
raise ValueError(
errmsg("Invalid \\escape: %r" % (esc,), s, end))
end += 1
else:
# Unicode escape sequence
esc = s[end + 1:end + 5]
next_end = end + 5
if len(esc) != 4:
msg = "Invalid \\uXXXX escape"
raise ValueError(errmsg(msg, s, end))
uni = int(esc, 16)
# Check for surrogate pair on UCS-4 systems
if 0xd800 <= uni <= 0xdbff and sys.maxunicode > 65535:
msg = "Invalid \\uXXXX\\uXXXX surrogate pair"
if not s[end + 5:end + 7] == '\\u':
raise ValueError(errmsg(msg, s, end))
esc2 = s[end + 7:end + 11]
if len(esc2) != 4:
raise ValueError(errmsg(msg, s, end))
uni2 = int(esc2, 16)
uni = 0x10000 + (((uni - 0xd800) << 10) | (uni2 - 0xdc00))
next_end += 6
char = unichr(uni)
end = next_end
# Append the unescaped character
_append(char)
return u''.join(chunks), end
# Use speedup if available
scanstring = c_scanstring or py_scanstring
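# Minimal usage sketch (the helper name is illustrative, not part of the
# simplejson API): scanstring receives the index just past the opening quote
# and returns the decoded string plus the index one past the closing quote.
def _scanstring_example():
    return scanstring('"abc" tail', 1)  # (u'abc', 5)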
WHITESPACE = re.compile(r'[ \t\n\r]*', FLAGS)
WHITESPACE_STR = ' \t\n\r'
def JSONObject((s, end), encoding, strict, scan_once, object_hook, _w=WHITESPACE.match, _ws=WHITESPACE_STR):
pairs = {}
# Use a slice to prevent IndexError from being raised, the following
# check will raise a more specific ValueError if the string is empty
nextchar = s[end:end + 1]
# Normally we expect nextchar == '"'
if nextchar != '"':
if nextchar in _ws:
end = _w(s, end).end()
nextchar = s[end:end + 1]
# Trivial empty object
if nextchar == '}':
return pairs, end + 1
elif nextchar != '"':
raise ValueError(errmsg("Expecting property name", s, end))
end += 1
while True:
key, end = scanstring(s, end, encoding, strict)
# To skip some function call overhead we optimize the fast paths where
# the JSON key separator is ": " or just ":".
if s[end:end + 1] != ':':
end = _w(s, end).end()
if s[end:end + 1] != ':':
raise ValueError(errmsg("Expecting : delimiter", s, end))
end += 1
try:
if s[end] in _ws:
end += 1
if s[end] in _ws:
end = _w(s, end + 1).end()
except IndexError:
pass
try:
value, end = scan_once(s, end)
except StopIteration:
raise ValueError(errmsg("Expecting object", s, end))
pairs[key] = value
try:
nextchar = s[end]
if nextchar in _ws:
end = _w(s, end + 1).end()
nextchar = s[end]
except IndexError:
nextchar = ''
end += 1
if nextchar == '}':
break
elif nextchar != ',':
raise ValueError(errmsg("Expecting , delimiter", s, end - 1))
try:
nextchar = s[end]
if nextchar in _ws:
end += 1
nextchar = s[end]
if nextchar in _ws:
end = _w(s, end + 1).end()
nextchar = s[end]
except IndexError:
nextchar = ''
end += 1
if nextchar != '"':
raise ValueError(errmsg("Expecting property name", s, end - 1))
if object_hook is not None:
pairs = object_hook(pairs)
return pairs, end
def JSONArray((s, end), scan_once, _w=WHITESPACE.match, _ws=WHITESPACE_STR):
values = []
nextchar = s[end:end + 1]
if nextchar in _ws:
end = _w(s, end + 1).end()
nextchar = s[end:end + 1]
# Look-ahead for trivial empty array
if nextchar == ']':
return values, end + 1
_append = values.append
while True:
try:
value, end = scan_once(s, end)
except StopIteration:
raise ValueError(errmsg("Expecting object", s, end))
_append(value)
nextchar = s[end:end + 1]
if nextchar in _ws:
end = _w(s, end + 1).end()
nextchar = s[end:end + 1]
end += 1
if nextchar == ']':
break
elif nextchar != ',':
raise ValueError(errmsg("Expecting , delimiter", s, end))
try:
if s[end] in _ws:
end += 1
if s[end] in _ws:
end = _w(s, end + 1).end()
except IndexError:
pass
return values, end
class JSONDecoder(object):
"""Simple JSON <http://json.org> decoder
Performs the following translations in decoding by default:
+---------------+-------------------+
| JSON | Python |
+===============+===================+
| object | dict |
+---------------+-------------------+
| array | list |
+---------------+-------------------+
| string | unicode |
+---------------+-------------------+
| number (int) | int, long |
+---------------+-------------------+
| number (real) | float |
+---------------+-------------------+
| true | True |
+---------------+-------------------+
| false | False |
+---------------+-------------------+
| null | None |
+---------------+-------------------+
It also understands ``NaN``, ``Infinity``, and ``-Infinity`` as
their corresponding ``float`` values, which is outside the JSON spec.
"""
def __init__(self, encoding=None, object_hook=None, parse_float=None,
parse_int=None, parse_constant=None, strict=True):
"""``encoding`` determines the encoding used to interpret any ``str``
objects decoded by this instance (utf-8 by default). It has no
effect when decoding ``unicode`` objects.
Note that currently only encodings that are a superset of ASCII work,
strings of other encodings should be passed in as ``unicode``.
``object_hook``, if specified, will be called with the result
of every JSON object decoded and its return value will be used in
place of the given ``dict``. This can be used to provide custom
deserializations (e.g. to support JSON-RPC class hinting).
``parse_float``, if specified, will be called with the string
of every JSON float to be decoded. By default this is equivalent to
float(num_str). This can be used to use another datatype or parser
for JSON floats (e.g. decimal.Decimal).
``parse_int``, if specified, will be called with the string
of every JSON int to be decoded. By default this is equivalent to
int(num_str). This can be used to use another datatype or parser
for JSON integers (e.g. float).
``parse_constant``, if specified, will be called with one of the
following strings: -Infinity, Infinity, NaN.
This can be used to raise an exception if invalid JSON numbers
are encountered.
"""
self.encoding = encoding
self.object_hook = object_hook
self.parse_float = parse_float or float
self.parse_int = parse_int or int
self.parse_constant = parse_constant or _CONSTANTS.__getitem__
self.strict = strict
self.parse_object = JSONObject
self.parse_array = JSONArray
self.parse_string = scanstring
self.scan_once = make_scanner(self)
def decode(self, s, _w=WHITESPACE.match):
"""Return the Python representation of ``s`` (a ``str`` or ``unicode``
instance containing a JSON document)
"""
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
end = _w(s, end).end()
if end != len(s):
raise ValueError(errmsg("Extra data", s, end, len(s)))
return obj
def raw_decode(self, s, idx=0):
"""Decode a JSON document from ``s`` (a ``str`` or ``unicode`` beginning
with a JSON document) and return a 2-tuple of the Python
representation and the index in ``s`` where the document ended.
This can be used to decode a JSON document from a string that may
have extraneous data at the end.
"""
try:
obj, end = self.scan_once(s, idx)
except StopIteration:
raise ValueError("No JSON object could be decoded")
return obj, end
| bsd-3-clause | 8,789,575,553,706,194,000 | 33.643478 | 108 | 0.507034 | false |
DSLituiev/scikit-learn | examples/plot_johnson_lindenstrauss_bound.py | 8 | 7473 | r"""
=====================================================================
The Johnson-Lindenstrauss bound for embedding with random projections
=====================================================================
The `Johnson-Lindenstrauss lemma`_ states that any high dimensional
dataset can be randomly projected into a lower dimensional Euclidean
space while controlling the distortion in the pairwise distances.
.. _`Johnson-Lindenstrauss lemma`: http://en.wikipedia.org/wiki/Johnson%E2%80%93Lindenstrauss_lemma
Theoretical bounds
==================
The distortion introduced by a random projection `p` is asserted by
the fact that `p` is defining an eps-embedding with good probability
as defined by:
.. math::
(1 - eps) \|u - v\|^2 < \|p(u) - p(v)\|^2 < (1 + eps) \|u - v\|^2
Where u and v are any rows taken from a dataset of shape [n_samples,
n_features] and p is a projection by a random Gaussian N(0, 1) matrix
with shape [n_components, n_features] (or a sparse Achlioptas matrix).
The minimum number of components to guarantees the eps-embedding is
given by:
.. math::
n\_components >= 4 log(n\_samples) / (eps^2 / 2 - eps^3 / 3)
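As a quick illustration (the numeric value below is indicative and assumes
the bound implemented by ``johnson_lindenstrauss_min_dim``)::
    from sklearn.random_projection import johnson_lindenstrauss_min_dim
    johnson_lindenstrauss_min_dim(n_samples=1e6, eps=0.5)  # 663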
The first plot shows that with an increasing number of samples ``n_samples``,
the minimal number of dimensions ``n_components`` increases logarithmically
in order to guarantee an ``eps``-embedding.
The second plot shows that increasing the admissible
distortion ``eps`` drastically reduces the minimal number of
dimensions ``n_components`` for a given number of samples ``n_samples``.
Empirical validation
====================
We validate the above bounds on the digits dataset or on the 20 newsgroups
text documents (TF-IDF word frequencies) dataset:
- for the digits dataset, the 8x8 gray-level pixel data of 500
  handwritten digit pictures are randomly projected to spaces with various
  larger numbers of dimensions ``n_components``.
- for the 20 newsgroups dataset, some 500 documents with 100k
  features in total are projected using a sparse random matrix to smaller
  Euclidean spaces with various values for the target number of dimensions
  ``n_components``.
The default dataset is the digits dataset. To run the example on the twenty
newsgroups dataset, pass the --twenty-newsgroups command line argument to this
script.
For each value of ``n_components``, we plot:
- 2D distribution of sample pairs with pairwise distances in original
and projected spaces as x and y axis respectively.
- 1D histogram of the ratio of those distances (projected / original).
We can see that for low values of ``n_components`` the distribution is wide,
with many distorted pairs and a skewed distribution (due to the hard
limit of zero ratio on the left, as distances are always positive),
while for larger values of ``n_components`` the distortion is controlled
and the distances are well preserved by the random projection.
Remarks
=======
According to the JL lemma, projecting 500 samples without too much distortion
will require at least several thousand dimensions, irrespective of the
number of features of the original dataset.
Hence using random projections on the digits dataset, which only has 64 features
in the input space, does not make sense: it does not allow for dimensionality
reduction in this case.
On the twenty newsgroups dataset, on the other hand, the dimensionality can be
decreased from 56436 down to 10000 while reasonably preserving pairwise
distances.
"""
print(__doc__)
import sys
from time import time
import numpy as np
import matplotlib.pyplot as plt
from sklearn.random_projection import johnson_lindenstrauss_min_dim
from sklearn.random_projection import SparseRandomProjection
from sklearn.datasets import fetch_20newsgroups_vectorized
from sklearn.datasets import load_digits
from sklearn.metrics.pairwise import euclidean_distances
# Part 1: plot the theoretical dependency between n_components_min and
# n_samples
# range of admissible distortions
eps_range = np.linspace(0.1, 0.99, 5)
colors = plt.cm.Blues(np.linspace(0.3, 1.0, len(eps_range)))
# range of number of samples (observation) to embed
n_samples_range = np.logspace(1, 9, 9)
plt.figure()
for eps, color in zip(eps_range, colors):
min_n_components = johnson_lindenstrauss_min_dim(n_samples_range, eps=eps)
plt.loglog(n_samples_range, min_n_components, color=color)
plt.legend(["eps = %0.1f" % eps for eps in eps_range], loc="lower right")
plt.xlabel("Number of observations to eps-embed")
plt.ylabel("Minimum number of dimensions")
plt.title("Johnson-Lindenstrauss bounds:\nn_samples vs n_components")
# range of admissible distortions
eps_range = np.linspace(0.01, 0.99, 100)
# range of number of samples (observation) to embed
n_samples_range = np.logspace(2, 6, 5)
colors = plt.cm.Blues(np.linspace(0.3, 1.0, len(n_samples_range)))
plt.figure()
for n_samples, color in zip(n_samples_range, colors):
min_n_components = johnson_lindenstrauss_min_dim(n_samples, eps=eps_range)
plt.semilogy(eps_range, min_n_components, color=color)
plt.legend(["n_samples = %d" % n for n in n_samples_range], loc="upper right")
plt.xlabel("Distortion eps")
plt.ylabel("Minimum number of dimensions")
plt.title("Johnson-Lindenstrauss bounds:\nn_components vs eps")
# Part 2: perform sparse random projection of some digits images which are
# quite low dimensional and dense or documents of the 20 newsgroups dataset
# which is both high dimensional and sparse
if '--twenty-newsgroups' in sys.argv:
# Need an internet connection hence not enabled by default
data = fetch_20newsgroups_vectorized().data[:500]
else:
data = load_digits().data[:500]
n_samples, n_features = data.shape
print("Embedding %d samples with dim %d using various random projections"
% (n_samples, n_features))
n_components_range = np.array([300, 1000, 10000])
dists = euclidean_distances(data, squared=True).ravel()
# select only non-identical samples pairs
nonzero = dists != 0
dists = dists[nonzero]
for n_components in n_components_range:
t0 = time()
rp = SparseRandomProjection(n_components=n_components)
projected_data = rp.fit_transform(data)
print("Projected %d samples from %d to %d in %0.3fs"
% (n_samples, n_features, n_components, time() - t0))
if hasattr(rp, 'components_'):
n_bytes = rp.components_.data.nbytes
n_bytes += rp.components_.indices.nbytes
print("Random matrix with size: %0.3fMB" % (n_bytes / 1e6))
projected_dists = euclidean_distances(
projected_data, squared=True).ravel()[nonzero]
plt.figure()
plt.hexbin(dists, projected_dists, gridsize=100, cmap=plt.cm.PuBu)
plt.xlabel("Pairwise squared distances in original space")
plt.ylabel("Pairwise squared distances in projected space")
plt.title("Pairwise distances distribution for n_components=%d" %
n_components)
cb = plt.colorbar()
cb.set_label('Sample pairs counts')
rates = projected_dists / dists
print("Mean distances rate: %0.2f (%0.2f)"
% (np.mean(rates), np.std(rates)))
plt.figure()
    plt.hist(rates, bins=50, density=True, range=(0., 2.))  # density= replaces the long-deprecated normed= argument
plt.xlabel("Squared distances rate: projected / original")
plt.ylabel("Distribution of samples pairs")
plt.title("Histogram of pairwise distance rates for n_components=%d" %
n_components)
# TODO: compute the expected value of eps and add them to the previous plot
# as vertical lines / region
plt.show()
| bsd-3-clause | -1,598,708,507,861,591,600 | 36.552764 | 99 | 0.722869 | false |
40023154/2015cd_midterm | static/Brython3.1.1-20150328-091302/Lib/unittest/result.py | 727 | 6397 | """Test result object"""
import io
import sys
import traceback
from . import util
from functools import wraps
__unittest = True
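# Decorator used on the reporting methods below: when the result's
# ``failfast`` flag is set, the run is stopped as soon as an error,
# failure or unexpected success is recorded.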
def failfast(method):
@wraps(method)
def inner(self, *args, **kw):
if getattr(self, 'failfast', False):
self.stop()
return method(self, *args, **kw)
return inner
STDOUT_LINE = '\nStdout:\n%s'
STDERR_LINE = '\nStderr:\n%s'
class TestResult(object):
"""Holder for test result information.
Test results are automatically managed by the TestCase and TestSuite
classes, and do not need to be explicitly manipulated by writers of tests.
Each instance holds the total number of tests run, and collections of
failures and errors that occurred among those test runs. The collections
contain tuples of (testcase, exceptioninfo), where exceptioninfo is the
formatted traceback of the error that occurred.
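    A minimal usage sketch (``suite`` here stands for any TestSuite)::

        result = TestResult()
        suite.run(result)
        for test, formatted_tb in result.failures:
            print(test, formatted_tb)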
"""
_previousTestClass = None
_testRunEntered = False
_moduleSetUpFailed = False
def __init__(self, stream=None, descriptions=None, verbosity=None):
self.failfast = False
self.failures = []
self.errors = []
self.testsRun = 0
self.skipped = []
self.expectedFailures = []
self.unexpectedSuccesses = []
self.shouldStop = False
self.buffer = False
self._stdout_buffer = None
self._stderr_buffer = None
self._original_stdout = sys.stdout
self._original_stderr = sys.stderr
self._mirrorOutput = False
def printErrors(self):
"Called by TestRunner after test run"
#fixme brython
pass
def startTest(self, test):
"Called when the given test is about to be run"
self.testsRun += 1
self._mirrorOutput = False
self._setupStdout()
def _setupStdout(self):
if self.buffer:
if self._stderr_buffer is None:
self._stderr_buffer = io.StringIO()
self._stdout_buffer = io.StringIO()
sys.stdout = self._stdout_buffer
sys.stderr = self._stderr_buffer
def startTestRun(self):
"""Called once before any tests are executed.
See startTest for a method called before each test.
"""
def stopTest(self, test):
"""Called when the given test has been run"""
self._restoreStdout()
self._mirrorOutput = False
def _restoreStdout(self):
if self.buffer:
if self._mirrorOutput:
output = sys.stdout.getvalue()
error = sys.stderr.getvalue()
if output:
if not output.endswith('\n'):
output += '\n'
self._original_stdout.write(STDOUT_LINE % output)
if error:
if not error.endswith('\n'):
error += '\n'
self._original_stderr.write(STDERR_LINE % error)
sys.stdout = self._original_stdout
sys.stderr = self._original_stderr
self._stdout_buffer.seek(0)
self._stdout_buffer.truncate()
self._stderr_buffer.seek(0)
self._stderr_buffer.truncate()
def stopTestRun(self):
"""Called once after all tests are executed.
See stopTest for a method called after each test.
"""
@failfast
def addError(self, test, err):
"""Called when an error has occurred. 'err' is a tuple of values as
returned by sys.exc_info().
"""
self.errors.append((test, self._exc_info_to_string(err, test)))
self._mirrorOutput = True
@failfast
def addFailure(self, test, err):
"""Called when an error has occurred. 'err' is a tuple of values as
returned by sys.exc_info()."""
self.failures.append((test, self._exc_info_to_string(err, test)))
self._mirrorOutput = True
def addSuccess(self, test):
"Called when a test has completed successfully"
pass
def addSkip(self, test, reason):
"""Called when a test is skipped."""
self.skipped.append((test, reason))
def addExpectedFailure(self, test, err):
"""Called when an expected failure/error occured."""
self.expectedFailures.append(
(test, self._exc_info_to_string(err, test)))
@failfast
def addUnexpectedSuccess(self, test):
"""Called when a test was expected to fail, but succeed."""
self.unexpectedSuccesses.append(test)
def wasSuccessful(self):
"Tells whether or not this result was a success"
return len(self.failures) == len(self.errors) == 0
def stop(self):
"Indicates that the tests should be aborted"
self.shouldStop = True
def _exc_info_to_string(self, err, test):
"""Converts a sys.exc_info()-style tuple of values into a string."""
exctype, value, tb = err
# Skip test runner traceback levels
while tb and self._is_relevant_tb_level(tb):
tb = tb.tb_next
if exctype is test.failureException:
# Skip assert*() traceback levels
length = self._count_relevant_tb_levels(tb)
msgLines = traceback.format_exception(exctype, value, tb, length)
else:
msgLines = traceback.format_exception(exctype, value, tb)
if self.buffer:
output = sys.stdout.getvalue()
error = sys.stderr.getvalue()
if output:
if not output.endswith('\n'):
output += '\n'
msgLines.append(STDOUT_LINE % output)
if error:
if not error.endswith('\n'):
error += '\n'
msgLines.append(STDERR_LINE % error)
return ''.join(msgLines)
def _is_relevant_tb_level(self, tb):
#fix me brython
#return '__unittest' in tb.tb_frame.f_globals
        return True  # for now, always treat the level as relevant
def _count_relevant_tb_levels(self, tb):
length = 0
while tb and not self._is_relevant_tb_level(tb):
length += 1
tb = tb.tb_next
return length
def __repr__(self):
return ("<%s run=%i errors=%i failures=%i>" %
(util.strclass(self.__class__), self.testsRun, len(self.errors),
len(self.failures)))
| gpl-2.0 | -709,746,041,949,951,400 | 31.805128 | 79 | 0.582304 | false |
tnkteja/myhelp | virtualEnvironment/lib/python2.7/site-packages/pkginfo/installed.py | 3 | 1987 | import glob
import os
import sys
import warnings
from pkginfo.distribution import Distribution
from pkginfo._compat import STRING_TYPES
class Installed(Distribution):
def __init__(self, package, metadata_version=None):
if isinstance(package, STRING_TYPES):
self.package_name = package
try:
__import__(package)
except ImportError:
package = None
else:
package = sys.modules[package]
else:
self.package_name = package.__name__
self.package = package
self.metadata_version = metadata_version
self.extractMetadata()
def read(self):
opj = os.path.join
if self.package is not None:
package = self.package.__package__
if package is None:
package = self.package.__name__
pattern = '%s*.egg-info' % package
file = getattr(self.package, '__file__', None)
if file is not None:
candidates = []
def _add_candidate(where):
candidates.extend(glob.glob(where))
for entry in sys.path:
if file.startswith(entry):
_add_candidate(opj(entry, 'EGG-INFO')) # egg?
_add_candidate(opj(entry, pattern)) # dist-installed?
dir, name = os.path.split(self.package.__file__)
_add_candidate(opj(dir, pattern))
_add_candidate(opj(dir, '..', pattern))
for candidate in candidates:
if os.path.isdir(candidate):
path = opj(candidate, 'PKG-INFO')
else:
path = candidate
if os.path.exists(path):
with open(path) as f:
return f.read()
warnings.warn('No PKG-INFO found for package: %s' % self.package_name)
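# Illustrative usage sketch (assumes the queried package is importable in
# the running interpreter; attribute names come from pkginfo.Distribution):
#
#   from pkginfo import Installed
#   meta = Installed('pkginfo')
#   print(meta.name, meta.version)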
| mit | -4,016,740,089,818,886,000 | 36.490566 | 78 | 0.508807 | false |
JGarcia-Panach/odoo | addons/procurement_jit/__init__.py | 374 | 1078 | # -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2010 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
import procurement_jit
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 | -4,534,564,349,899,700,000 | 43.916667 | 79 | 0.611317 | false |
seanchen/taiga-back | taiga/export_import/serializers.py | 4 | 22718 | # Copyright (C) 2014 Andrey Antukh <[email protected]>
# Copyright (C) 2014 JesΓΊs Espino <[email protected]>
# Copyright (C) 2014 David BarragΓ‘n <[email protected]>
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import base64
import copy
import os
from collections import OrderedDict
from django.core.files.base import ContentFile
from django.core.exceptions import ObjectDoesNotExist
from django.core.exceptions import ValidationError
from django.utils.translation import ugettext as _
from django.contrib.contenttypes.models import ContentType
from taiga import mdrender
from taiga.base.api import serializers
from taiga.base.fields import JsonField, PgArrayField
from taiga.projects import models as projects_models
from taiga.projects.custom_attributes import models as custom_attributes_models
from taiga.projects.userstories import models as userstories_models
from taiga.projects.tasks import models as tasks_models
from taiga.projects.issues import models as issues_models
from taiga.projects.milestones import models as milestones_models
from taiga.projects.wiki import models as wiki_models
from taiga.projects.history import models as history_models
from taiga.projects.attachments import models as attachments_models
from taiga.timeline import models as timeline_models
from taiga.timeline import service as timeline_service
from taiga.users import models as users_models
from taiga.projects.votes import services as votes_service
from taiga.projects.history import services as history_service
class AttachedFileField(serializers.WritableField):
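    # Descriptive note: attachments are embedded in the export itself as
    # {"data": <base64>, "name": <basename>}, so a project dump stays a
    # single self-contained document (see to_native/from_native below).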
read_only = False
def to_native(self, obj):
if not obj:
return None
data = base64.b64encode(obj.read()).decode('utf-8')
return OrderedDict([
("data", data),
("name", os.path.basename(obj.name)),
])
def from_native(self, data):
if not data:
return None
return ContentFile(base64.b64decode(data['data']), name=data['name'])
class RelatedNoneSafeField(serializers.RelatedField):
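    # Descriptive note: "None-safe" means values that fail to resolve are
    # dropped (when many=True) or stored as None instead of raising, so an
    # import that references missing objects degrades gracefully.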
def field_from_native(self, data, files, field_name, into):
if self.read_only:
return
try:
if self.many:
try:
# Form data
value = data.getlist(field_name)
if value == [''] or value == []:
raise KeyError
except AttributeError:
# Non-form data
value = data[field_name]
else:
value = data[field_name]
except KeyError:
if self.partial:
return
value = self.get_default_value()
key = self.source or field_name
if value in self.null_values:
if self.required:
raise ValidationError(self.error_messages['required'])
into[key] = None
elif self.many:
into[key] = [self.from_native(item) for item in value if self.from_native(item) is not None]
else:
into[key] = self.from_native(value)
class UserRelatedField(RelatedNoneSafeField):
read_only = False
def to_native(self, obj):
if obj:
return obj.email
return None
def from_native(self, data):
try:
return users_models.User.objects.get(email=data)
except users_models.User.DoesNotExist:
return None
class UserPkField(serializers.RelatedField):
read_only = False
def to_native(self, obj):
try:
user = users_models.User.objects.get(pk=obj)
return user.email
except users_models.User.DoesNotExist:
return None
def from_native(self, data):
try:
user = users_models.User.objects.get(email=data)
return user.pk
except users_models.User.DoesNotExist:
return None
class CommentField(serializers.WritableField):
read_only = False
def field_from_native(self, data, files, field_name, into):
super().field_from_native(data, files, field_name, into)
into["comment_html"] = mdrender.render(self.context['project'], data.get("comment", ""))
class ProjectRelatedField(serializers.RelatedField):
read_only = False
def __init__(self, slug_field, *args, **kwargs):
self.slug_field = slug_field
super().__init__(*args, **kwargs)
def to_native(self, obj):
if obj:
return getattr(obj, self.slug_field)
return None
def from_native(self, data):
try:
kwargs = {self.slug_field: data, "project": self.context['project']}
return self.queryset.get(**kwargs)
except ObjectDoesNotExist:
raise ValidationError(_("{}=\"{}\" not found in this project".format(self.slug_field, data)))
class HistoryUserField(JsonField):
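    # Descriptive note: the stored {"pk": ..., "name": ...} snapshot is
    # exported as [email, name]; on import the email is resolved back to a
    # pk (None if that user no longer exists), keeping history portable.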
def to_native(self, obj):
if obj is None or obj == {}:
return []
try:
user = users_models.User.objects.get(pk=obj['pk'])
except users_models.User.DoesNotExist:
user = None
return (UserRelatedField().to_native(user), obj['name'])
def from_native(self, data):
if data is None:
return {}
if len(data) < 2:
return {}
user = UserRelatedField().from_native(data[0])
if user:
pk = user.pk
else:
pk = None
return {"pk": pk, "name": data[1]}
class HistoryValuesField(JsonField):
def to_native(self, obj):
if obj is None:
return []
if "users" in obj:
obj['users'] = list(map(UserPkField().to_native, obj['users']))
return obj
def from_native(self, data):
if data is None:
return []
if "users" in data:
data['users'] = list(map(UserPkField().from_native, data['users']))
return data
class HistoryDiffField(JsonField):
def to_native(self, obj):
if obj is None:
return []
if "assigned_to" in obj:
obj['assigned_to'] = list(map(UserPkField().to_native, obj['assigned_to']))
return obj
def from_native(self, data):
if data is None:
return []
if "assigned_to" in data:
data['assigned_to'] = list(map(UserPkField().from_native, data['assigned_to']))
return data
class HistoryExportSerializer(serializers.ModelSerializer):
user = HistoryUserField()
diff = HistoryDiffField(required=False)
snapshot = JsonField(required=False)
values = HistoryValuesField(required=False)
comment = CommentField(required=False)
delete_comment_date = serializers.DateTimeField(required=False)
delete_comment_user = HistoryUserField(required=False)
class Meta:
model = history_models.HistoryEntry
exclude = ("id", "comment_html", "key")
class HistoryExportSerializerMixin(serializers.ModelSerializer):
history = serializers.SerializerMethodField("get_history")
def get_history(self, obj):
history_qs = history_service.get_history_queryset_by_model_instance(obj)
return HistoryExportSerializer(history_qs, many=True).data
class AttachmentExportSerializer(serializers.ModelSerializer):
owner = UserRelatedField(required=False)
attached_file = AttachedFileField()
modified_date = serializers.DateTimeField(required=False)
class Meta:
model = attachments_models.Attachment
exclude = ('id', 'content_type', 'object_id', 'project')
class AttachmentExportSerializerMixin(serializers.ModelSerializer):
attachments = serializers.SerializerMethodField("get_attachments")
def get_attachments(self, obj):
content_type = ContentType.objects.get_for_model(obj.__class__)
attachments_qs = attachments_models.Attachment.objects.filter(object_id=obj.pk,
content_type=content_type)
return AttachmentExportSerializer(attachments_qs, many=True).data
class PointsExportSerializer(serializers.ModelSerializer):
class Meta:
model = projects_models.Points
exclude = ('id', 'project')
class UserStoryStatusExportSerializer(serializers.ModelSerializer):
class Meta:
model = projects_models.UserStoryStatus
exclude = ('id', 'project')
class TaskStatusExportSerializer(serializers.ModelSerializer):
class Meta:
model = projects_models.TaskStatus
exclude = ('id', 'project')
class IssueStatusExportSerializer(serializers.ModelSerializer):
class Meta:
model = projects_models.IssueStatus
exclude = ('id', 'project')
class PriorityExportSerializer(serializers.ModelSerializer):
class Meta:
model = projects_models.Priority
exclude = ('id', 'project')
class SeverityExportSerializer(serializers.ModelSerializer):
class Meta:
model = projects_models.Severity
exclude = ('id', 'project')
class IssueTypeExportSerializer(serializers.ModelSerializer):
class Meta:
model = projects_models.IssueType
exclude = ('id', 'project')
class RoleExportSerializer(serializers.ModelSerializer):
permissions = PgArrayField(required=False)
class Meta:
model = users_models.Role
exclude = ('id', 'project')
class UserStoryCustomAttributeExportSerializer(serializers.ModelSerializer):
modified_date = serializers.DateTimeField(required=False)
class Meta:
model = custom_attributes_models.UserStoryCustomAttribute
exclude = ('id', 'project')
class TaskCustomAttributeExportSerializer(serializers.ModelSerializer):
modified_date = serializers.DateTimeField(required=False)
class Meta:
model = custom_attributes_models.TaskCustomAttribute
exclude = ('id', 'project')
class IssueCustomAttributeExportSerializer(serializers.ModelSerializer):
modified_date = serializers.DateTimeField(required=False)
class Meta:
model = custom_attributes_models.IssueCustomAttribute
exclude = ('id', 'project')
class CustomAttributesValuesExportSerializerMixin(serializers.ModelSerializer):
custom_attributes_values = serializers.SerializerMethodField("get_custom_attributes_values")
def custom_attributes_queryset(self, project):
raise NotImplementedError()
def get_custom_attributes_values(self, obj):
def _use_name_instead_id_as_key_in_custom_attributes_values(custom_attributes, values):
ret = {}
for attr in custom_attributes:
value = values.get(str(attr["id"]), None)
if value is not None:
ret[attr["name"]] = value
return ret
try:
values = obj.custom_attributes_values.attributes_values
custom_attributes = self.custom_attributes_queryset(obj.project).values('id', 'name')
return _use_name_instead_id_as_key_in_custom_attributes_values(custom_attributes, values)
except ObjectDoesNotExist:
return None
class BaseCustomAttributesValuesExportSerializer(serializers.ModelSerializer):
    attributes_values = JsonField(source="attributes_values", required=True)
_custom_attribute_model = None
_container_field = None
class Meta:
exclude = ("id",)
def validate_attributes_values(self, attrs, source):
# values must be a dict
data_values = attrs.get("attributes_values", None)
if self.object:
data_values = (data_values or self.object.attributes_values)
if type(data_values) is not dict:
raise ValidationError(_("Invalid content. It must be {\"key\": \"value\",...}"))
# Values keys must be in the container object project
data_container = attrs.get(self._container_field, None)
if data_container:
project_id = data_container.project_id
elif self.object:
project_id = getattr(self.object, self._container_field).project_id
else:
project_id = None
values_ids = list(data_values.keys())
qs = self._custom_attribute_model.objects.filter(project=project_id,
id__in=values_ids)
if qs.count() != len(values_ids):
raise ValidationError(_("It contain invalid custom fields."))
return attrs
class UserStoryCustomAttributesValuesExportSerializer(BaseCustomAttributesValuesExportSerializer):
_custom_attribute_model = custom_attributes_models.UserStoryCustomAttribute
_container_model = "userstories.UserStory"
_container_field = "user_story"
class Meta(BaseCustomAttributesValuesExportSerializer.Meta):
model = custom_attributes_models.UserStoryCustomAttributesValues
class TaskCustomAttributesValuesExportSerializer(BaseCustomAttributesValuesExportSerializer):
_custom_attribute_model = custom_attributes_models.TaskCustomAttribute
_container_field = "task"
class Meta(BaseCustomAttributesValuesExportSerializer.Meta):
model = custom_attributes_models.TaskCustomAttributesValues
class IssueCustomAttributesValuesExportSerializer(BaseCustomAttributesValuesExportSerializer):
_custom_attribute_model = custom_attributes_models.IssueCustomAttribute
_container_field = "issue"
class Meta(BaseCustomAttributesValuesExportSerializer.Meta):
model = custom_attributes_models.IssueCustomAttributesValues
class MembershipExportSerializer(serializers.ModelSerializer):
user = UserRelatedField(required=False)
role = ProjectRelatedField(slug_field="name")
invited_by = UserRelatedField(required=False)
class Meta:
model = projects_models.Membership
exclude = ('id', 'project', 'token')
def full_clean(self, instance):
return instance
class RolePointsExportSerializer(serializers.ModelSerializer):
role = ProjectRelatedField(slug_field="name")
points = ProjectRelatedField(slug_field="name")
class Meta:
model = userstories_models.RolePoints
exclude = ('id', 'user_story')
class MilestoneExportSerializer(serializers.ModelSerializer):
owner = UserRelatedField(required=False)
watchers = UserRelatedField(many=True, required=False)
modified_date = serializers.DateTimeField(required=False)
def __init__(self, *args, **kwargs):
project = kwargs.pop('project', None)
super(MilestoneExportSerializer, self).__init__(*args, **kwargs)
if project:
self.project = project
def validate_name(self, attrs, source):
"""
Check the milestone name is not duplicated in the project
"""
name = attrs[source]
qs = self.project.milestones.filter(name=name)
if qs.exists():
raise serializers.ValidationError(_("Name duplicated for the project"))
return attrs
class Meta:
model = milestones_models.Milestone
exclude = ('id', 'project')
class TaskExportSerializer(CustomAttributesValuesExportSerializerMixin, HistoryExportSerializerMixin,
AttachmentExportSerializerMixin, serializers.ModelSerializer):
owner = UserRelatedField(required=False)
status = ProjectRelatedField(slug_field="name")
user_story = ProjectRelatedField(slug_field="ref", required=False)
milestone = ProjectRelatedField(slug_field="name", required=False)
assigned_to = UserRelatedField(required=False)
watchers = UserRelatedField(many=True, required=False)
modified_date = serializers.DateTimeField(required=False)
class Meta:
model = tasks_models.Task
exclude = ('id', 'project')
def custom_attributes_queryset(self, project):
return project.taskcustomattributes.all()
class UserStoryExportSerializer(CustomAttributesValuesExportSerializerMixin, HistoryExportSerializerMixin,
AttachmentExportSerializerMixin, serializers.ModelSerializer):
role_points = RolePointsExportSerializer(many=True, required=False)
owner = UserRelatedField(required=False)
assigned_to = UserRelatedField(required=False)
status = ProjectRelatedField(slug_field="name")
milestone = ProjectRelatedField(slug_field="name", required=False)
watchers = UserRelatedField(many=True, required=False)
modified_date = serializers.DateTimeField(required=False)
generated_from_issue = ProjectRelatedField(slug_field="ref", required=False)
class Meta:
model = userstories_models.UserStory
exclude = ('id', 'project', 'points', 'tasks')
def custom_attributes_queryset(self, project):
return project.userstorycustomattributes.all()
class IssueExportSerializer(CustomAttributesValuesExportSerializerMixin, HistoryExportSerializerMixin,
AttachmentExportSerializerMixin, serializers.ModelSerializer):
owner = UserRelatedField(required=False)
status = ProjectRelatedField(slug_field="name")
assigned_to = UserRelatedField(required=False)
priority = ProjectRelatedField(slug_field="name")
severity = ProjectRelatedField(slug_field="name")
type = ProjectRelatedField(slug_field="name")
milestone = ProjectRelatedField(slug_field="name", required=False)
watchers = UserRelatedField(many=True, required=False)
votes = serializers.SerializerMethodField("get_votes")
modified_date = serializers.DateTimeField(required=False)
class Meta:
model = issues_models.Issue
exclude = ('id', 'project')
def get_votes(self, obj):
return [x.email for x in votes_service.get_voters(obj)]
def custom_attributes_queryset(self, project):
return project.issuecustomattributes.all()
class WikiPageExportSerializer(HistoryExportSerializerMixin, AttachmentExportSerializerMixin,
serializers.ModelSerializer):
owner = UserRelatedField(required=False)
last_modifier = UserRelatedField(required=False)
watchers = UserRelatedField(many=True, required=False)
modified_date = serializers.DateTimeField(required=False)
class Meta:
model = wiki_models.WikiPage
exclude = ('id', 'project')
class WikiLinkExportSerializer(serializers.ModelSerializer):
class Meta:
model = wiki_models.WikiLink
exclude = ('id', 'project')
class TimelineDataField(serializers.WritableField):
read_only = False
def to_native(self, data):
new_data = copy.deepcopy(data)
try:
user = users_models.User.objects.get(pk=new_data["user"]["id"])
new_data["user"]["email"] = user.email
del new_data["user"]["id"]
except users_models.User.DoesNotExist:
pass
return new_data
def from_native(self, data):
new_data = copy.deepcopy(data)
try:
user = users_models.User.objects.get(email=new_data["user"]["email"])
new_data["user"]["id"] = user.id
del new_data["user"]["email"]
except users_models.User.DoesNotExist:
pass
return new_data
class TimelineExportSerializer(serializers.ModelSerializer):
data = TimelineDataField()
class Meta:
model = timeline_models.Timeline
exclude = ('id', 'project', 'namespace', 'object_id')
class ProjectExportSerializer(serializers.ModelSerializer):
owner = UserRelatedField(required=False)
default_points = serializers.SlugRelatedField(slug_field="name", required=False)
default_us_status = serializers.SlugRelatedField(slug_field="name", required=False)
default_task_status = serializers.SlugRelatedField(slug_field="name", required=False)
default_priority = serializers.SlugRelatedField(slug_field="name", required=False)
default_severity = serializers.SlugRelatedField(slug_field="name", required=False)
default_issue_status = serializers.SlugRelatedField(slug_field="name", required=False)
default_issue_type = serializers.SlugRelatedField(slug_field="name", required=False)
memberships = MembershipExportSerializer(many=True, required=False)
points = PointsExportSerializer(many=True, required=False)
us_statuses = UserStoryStatusExportSerializer(many=True, required=False)
task_statuses = TaskStatusExportSerializer(many=True, required=False)
issue_statuses = IssueStatusExportSerializer(many=True, required=False)
priorities = PriorityExportSerializer(many=True, required=False)
severities = SeverityExportSerializer(many=True, required=False)
issue_types = IssueTypeExportSerializer(many=True, required=False)
userstorycustomattributes = UserStoryCustomAttributeExportSerializer(many=True, required=False)
taskcustomattributes = TaskCustomAttributeExportSerializer(many=True, required=False)
issuecustomattributes = IssueCustomAttributeExportSerializer(many=True, required=False)
roles = RoleExportSerializer(many=True, required=False)
milestones = MilestoneExportSerializer(many=True, required=False)
wiki_pages = WikiPageExportSerializer(many=True, required=False)
wiki_links = WikiLinkExportSerializer(many=True, required=False)
user_stories = UserStoryExportSerializer(many=True, required=False)
tasks = TaskExportSerializer(many=True, required=False)
issues = IssueExportSerializer(many=True, required=False)
tags_colors = JsonField(required=False)
anon_permissions = PgArrayField(required=False)
public_permissions = PgArrayField(required=False)
modified_date = serializers.DateTimeField(required=False)
timeline = serializers.SerializerMethodField("get_timeline")
class Meta:
model = projects_models.Project
exclude = ('id', 'creation_template', 'members')
def get_timeline(self, obj):
timeline_qs = timeline_service.get_project_timeline(obj)
return TimelineExportSerializer(timeline_qs, many=True).data
| agpl-3.0 | 4,087,791,144,488,356,400 | 35.28754 | 106 | 0.685244 | false |
openstack/ironic-inspector | ironic_inspector/test/unit/test_pxe_filter.py | 1 | 19461 | # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from unittest import mock
from automaton import exceptions as automaton_errors
from eventlet import semaphore
import fixtures
from futurist import periodics
from openstack import exceptions as os_exc
from oslo_config import cfg
import stevedore
from ironic_inspector.common import ironic as ir_utils
from ironic_inspector import node_cache
from ironic_inspector.pxe_filter import base as pxe_filter
from ironic_inspector.pxe_filter import interface
from ironic_inspector.test import base as test_base
CONF = cfg.CONF
class TestFilter(pxe_filter.BaseFilter):
pass
class TestDriverManager(test_base.BaseTest):
def setUp(self):
super(TestDriverManager, self).setUp()
pxe_filter._DRIVER_MANAGER = None
stevedore_driver_fixture = self.useFixture(fixtures.MockPatchObject(
stevedore.driver, 'DriverManager', autospec=True))
self.stevedore_driver_mock = stevedore_driver_fixture.mock
def test_default(self):
driver_manager = pxe_filter._driver_manager()
self.stevedore_driver_mock.assert_called_once_with(
pxe_filter._STEVEDORE_DRIVER_NAMESPACE,
name='iptables',
invoke_on_load=True
)
self.assertIsNotNone(driver_manager)
self.assertIs(pxe_filter._DRIVER_MANAGER, driver_manager)
def test_pxe_filter_name(self):
CONF.set_override('driver', 'foo', 'pxe_filter')
driver_manager = pxe_filter._driver_manager()
self.stevedore_driver_mock.assert_called_once_with(
pxe_filter._STEVEDORE_DRIVER_NAMESPACE,
'foo',
invoke_on_load=True
)
self.assertIsNotNone(driver_manager)
self.assertIs(pxe_filter._DRIVER_MANAGER, driver_manager)
def test_default_existing_driver_manager(self):
pxe_filter._DRIVER_MANAGER = True
driver_manager = pxe_filter._driver_manager()
self.stevedore_driver_mock.assert_not_called()
self.assertIs(pxe_filter._DRIVER_MANAGER, driver_manager)
class TestDriverManagerLoading(test_base.BaseTest):
def setUp(self):
super(TestDriverManagerLoading, self).setUp()
pxe_filter._DRIVER_MANAGER = None
@mock.patch.object(pxe_filter, 'NoopFilter', autospec=True)
def test_pxe_filter_driver_loads(self, noop_driver_cls):
CONF.set_override('driver', 'noop', 'pxe_filter')
driver_manager = pxe_filter._driver_manager()
noop_driver_cls.assert_called_once_with()
self.assertIs(noop_driver_cls.return_value, driver_manager.driver)
def test_invalid_filter_driver(self):
CONF.set_override('driver', 'foo', 'pxe_filter')
self.assertRaisesRegex(stevedore.exception.NoMatches, 'foo',
pxe_filter._driver_manager)
self.assertIsNone(pxe_filter._DRIVER_MANAGER)
class BaseFilterBaseTest(test_base.BaseTest):
def setUp(self):
super(BaseFilterBaseTest, self).setUp()
self.mock_lock = mock.MagicMock(spec=semaphore.BoundedSemaphore)
self.mock_bounded_semaphore = self.useFixture(
fixtures.MockPatchObject(semaphore, 'BoundedSemaphore')).mock
self.mock_bounded_semaphore.return_value = self.mock_lock
self.driver = TestFilter()
def assert_driver_is_locked(self):
"""Assert the driver is currently locked and wasn't locked before."""
self.driver.lock.__enter__.assert_called_once_with()
self.driver.lock.__exit__.assert_not_called()
def assert_driver_was_locked_once(self):
"""Assert the driver was locked exactly once before."""
self.driver.lock.__enter__.assert_called_once_with()
self.driver.lock.__exit__.assert_called_once_with(None, None, None)
def assert_driver_was_not_locked(self):
"""Assert the driver was not locked"""
self.mock_lock.__enter__.assert_not_called()
self.mock_lock.__exit__.assert_not_called()
class TestLockedDriverEvent(BaseFilterBaseTest):
def setUp(self):
super(TestLockedDriverEvent, self).setUp()
self.mock_fsm_reset_on_error = self.useFixture(
fixtures.MockPatchObject(self.driver, 'fsm_reset_on_error')).mock
self.expected_args = (None,)
self.expected_kwargs = {'foo': None}
self.mock_fsm = self.useFixture(
fixtures.MockPatchObject(self.driver, 'fsm')).mock
(self.driver.fsm_reset_on_error.return_value.
__enter__.return_value) = self.mock_fsm
def test_locked_driver_event(self):
event = 'foo'
@pxe_filter.locked_driver_event(event)
def fun(driver, *args, **kwargs):
self.assertIs(self.driver, driver)
self.assertEqual(self.expected_args, args)
self.assertEqual(self.expected_kwargs, kwargs)
self.assert_driver_is_locked()
self.assert_driver_was_not_locked()
fun(self.driver, *self.expected_args, **self.expected_kwargs)
self.mock_fsm_reset_on_error.assert_called_once_with()
self.mock_fsm.process_event.assert_called_once_with(event)
self.assert_driver_was_locked_once()
class TestBaseFilterFsmPrecautions(BaseFilterBaseTest):
def setUp(self):
super(TestBaseFilterFsmPrecautions, self).setUp()
self.mock_fsm = self.useFixture(
fixtures.MockPatchObject(TestFilter, 'fsm')).mock
# NOTE(milan): overriding driver so that the patch ^ is applied
self.mock_bounded_semaphore.reset_mock()
self.driver = TestFilter()
self.mock_reset = self.useFixture(
fixtures.MockPatchObject(self.driver, 'reset')).mock
def test___init__(self):
self.assertIs(self.mock_lock, self.driver.lock)
self.mock_bounded_semaphore.assert_called_once_with()
self.assertIs(self.mock_fsm, self.driver.fsm)
self.mock_fsm.initialize.assert_called_once_with(
start_state=pxe_filter.States.uninitialized)
def test_fsm_reset_on_error(self):
with self.driver.fsm_reset_on_error() as fsm:
self.assertIs(self.mock_fsm, fsm)
self.mock_reset.assert_not_called()
def test_fsm_automaton_error(self):
def fun():
with self.driver.fsm_reset_on_error():
raise automaton_errors.NotFound('Oops!')
self.assertRaisesRegex(pxe_filter.InvalidFilterDriverState,
'.*TestFilter.*Oops!', fun)
self.mock_reset.assert_not_called()
def test_fsm_reset_on_error_ctx_custom_error(self):
class MyError(Exception):
pass
def fun():
with self.driver.fsm_reset_on_error():
raise MyError('Oops!')
self.assertRaisesRegex(MyError, 'Oops!', fun)
self.mock_reset.assert_called_once_with()
class TestBaseFilterInterface(BaseFilterBaseTest):
def setUp(self):
super(TestBaseFilterInterface, self).setUp()
self.mock_get_client = self.useFixture(
fixtures.MockPatchObject(ir_utils, 'get_client')).mock
self.mock_ironic = mock.Mock()
self.mock_get_client.return_value = self.mock_ironic
self.mock_periodic = self.useFixture(
fixtures.MockPatchObject(periodics, 'periodic')).mock
self.mock_reset = self.useFixture(
fixtures.MockPatchObject(self.driver, 'reset')).mock
self.mock_log = self.useFixture(
fixtures.MockPatchObject(pxe_filter, 'LOG')).mock
self.driver.fsm_reset_on_error = self.useFixture(
fixtures.MockPatchObject(self.driver, 'fsm_reset_on_error')).mock
def test_init_filter(self):
self.driver.init_filter()
self.mock_log.debug.assert_called_once_with(
'Initializing the PXE filter driver %s', self.driver)
self.mock_reset.assert_not_called()
def test_sync(self):
self.driver.sync(self.mock_ironic)
self.mock_reset.assert_not_called()
def test_tear_down_filter(self):
self.assert_driver_was_not_locked()
self.driver.tear_down_filter()
self.assert_driver_was_locked_once()
self.mock_reset.assert_called_once_with()
def test_get_periodic_sync_task(self):
sync_mock = self.useFixture(
fixtures.MockPatchObject(self.driver, 'sync')).mock
self.driver.get_periodic_sync_task()
self.mock_periodic.assert_called_once_with(spacing=15, enabled=True)
self.mock_periodic.return_value.call_args[0][0]()
sync_mock.assert_called_once_with(self.mock_get_client.return_value)
def test_get_periodic_sync_task_invalid_state(self):
sync_mock = self.useFixture(
fixtures.MockPatchObject(self.driver, 'sync')).mock
sync_mock.side_effect = pxe_filter.InvalidFilterDriverState('Oops!')
self.driver.get_periodic_sync_task()
self.mock_periodic.assert_called_once_with(spacing=15, enabled=True)
self.assertRaisesRegex(periodics.NeverAgain, 'Oops!',
self.mock_periodic.return_value.call_args[0][0])
def test_get_periodic_sync_task_custom_error(self):
class MyError(Exception):
pass
sync_mock = self.useFixture(
fixtures.MockPatchObject(self.driver, 'sync')).mock
sync_mock.side_effect = MyError('Oops!')
self.driver.get_periodic_sync_task()
self.mock_periodic.assert_called_once_with(spacing=15, enabled=True)
self.assertRaisesRegex(
MyError, 'Oops!', self.mock_periodic.return_value.call_args[0][0])
def test_get_periodic_sync_task_disabled(self):
CONF.set_override('sync_period', 0, 'pxe_filter')
self.driver.get_periodic_sync_task()
self.mock_periodic.assert_called_once_with(spacing=float('inf'),
enabled=False)
def test_get_periodic_sync_task_custom_spacing(self):
CONF.set_override('sync_period', 4224, 'pxe_filter')
self.driver.get_periodic_sync_task()
self.mock_periodic.assert_called_once_with(spacing=4224, enabled=True)
class TestDriverReset(BaseFilterBaseTest):
def setUp(self):
super(TestDriverReset, self).setUp()
self.mock_fsm = self.useFixture(
fixtures.MockPatchObject(self.driver, 'fsm')).mock
def test_reset(self):
self.driver.reset()
self.assert_driver_was_not_locked()
self.mock_fsm.process_event.assert_called_once_with(
pxe_filter.Events.reset)
class TestDriver(test_base.BaseTest):
def setUp(self):
super(TestDriver, self).setUp()
self.mock_driver = mock.Mock(spec=interface.FilterDriver)
self.mock__driver_manager = self.useFixture(
fixtures.MockPatchObject(pxe_filter, '_driver_manager')).mock
self.mock__driver_manager.return_value.driver = self.mock_driver
def test_driver(self):
ret = pxe_filter.driver()
self.assertIs(self.mock_driver, ret)
self.mock__driver_manager.assert_called_once_with()
class TestIBMapping(test_base.BaseTest):
def setUp(self):
super(TestIBMapping, self).setUp()
CONF.set_override('ethoib_interfaces', ['eth0'], 'iptables')
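        # self.ib_data mimics /sys/class/net/eth0/eth/neighs: each line maps
        # an Ethernet-over-InfiniBand MAC (EMAC) to an InfiniBand MAC (IMAC).
        # The filter is expected to rewrite the IB port address to the EMAC
        # whose IMAC tail matches the port's DHCP client-id below.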
self.ib_data = (
'EMAC=02:00:02:97:00:01 IMAC=97:fe:80:00:00:00:00:00:00:7c:fe:90:'
'03:00:29:26:52\n'
'EMAC=02:00:00:61:00:02 IMAC=61:fe:80:00:00:00:00:00:00:7c:fe:90:'
'03:00:29:24:4f\n'
)
self.client_id = ('ff:00:00:00:00:00:02:00:00:02:c9:00:7c:fe:90:03:00:'
'29:24:4f')
self.ib_address = '7c:fe:90:29:24:4f'
self.ib_port = mock.Mock(address=self.ib_address,
extra={'client-id': self.client_id},
spec=['address', 'extra'])
self.port = mock.Mock(address='aa:bb:cc:dd:ee:ff',
extra={}, spec=['address', 'extra'])
self.ports = [self.ib_port, self.port]
self.expected_rmac = '02:00:00:61:00:02'
self.fileobj = mock.mock_open(read_data=self.ib_data)
def test_matching_ib(self):
with mock.patch('builtins.open', self.fileobj,
create=True) as mock_open:
pxe_filter._ib_mac_to_rmac_mapping(self.ports)
self.assertEqual(self.expected_rmac, self.ib_port.address)
self.assertEqual(self.ports, [self.ib_port, self.port])
mock_open.assert_called_once_with('/sys/class/net/eth0/eth/neighs',
'r')
def test_ib_not_match(self):
self.ports[0].extra['client-id'] = 'foo'
with mock.patch('builtins.open', self.fileobj,
create=True) as mock_open:
pxe_filter._ib_mac_to_rmac_mapping(self.ports)
self.assertEqual(self.ib_address, self.ib_port.address)
self.assertEqual(self.ports, [self.ib_port, self.port])
mock_open.assert_called_once_with('/sys/class/net/eth0/eth/neighs',
'r')
def test_open_no_such_file(self):
with mock.patch('builtins.open',
side_effect=IOError(), autospec=True) as mock_open:
pxe_filter._ib_mac_to_rmac_mapping(self.ports)
self.assertEqual(self.ib_address, self.ib_port.address)
self.assertEqual(self.ports, [self.ib_port, self.port])
mock_open.assert_called_once_with('/sys/class/net/eth0/eth/neighs',
'r')
def test_no_interfaces(self):
CONF.set_override('ethoib_interfaces', [], 'iptables')
with mock.patch('builtins.open', self.fileobj,
create=True) as mock_open:
pxe_filter._ib_mac_to_rmac_mapping(self.ports)
self.assertEqual(self.ib_address, self.ib_port.address)
self.assertEqual(self.ports, [self.ib_port, self.port])
mock_open.assert_not_called()
class TestGetInactiveMacs(test_base.BaseTest):
def setUp(self):
super(TestGetInactiveMacs, self).setUp()
self.mock__ib_mac_to_rmac_mapping = self.useFixture(
fixtures.MockPatchObject(pxe_filter,
'_ib_mac_to_rmac_mapping')).mock
self.mock_active_macs = self.useFixture(
fixtures.MockPatchObject(node_cache, 'active_macs')).mock
self.mock_ironic = mock.Mock()
def test_inactive_port(self):
mock_ports_list = [
mock.Mock(address='foo'),
mock.Mock(address='bar'),
]
self.mock_ironic.ports.return_value = mock_ports_list
self.mock_active_macs.return_value = {'foo'}
ports = pxe_filter.get_inactive_macs(self.mock_ironic)
self.assertEqual({'bar'}, ports)
self.mock_ironic.ports.assert_called_once_with(
limit=None, fields=['address', 'extra'])
self.mock__ib_mac_to_rmac_mapping.assert_called_once_with(
[mock_ports_list[1]])
@mock.patch('time.sleep', lambda _x: None)
def test_retry_on_port_list_failure(self):
mock_ports_list = [
mock.Mock(address='foo'),
mock.Mock(address='bar'),
]
self.mock_ironic.ports.side_effect = [
os_exc.SDKException('boom'),
mock_ports_list
]
self.mock_active_macs.return_value = {'foo'}
ports = pxe_filter.get_inactive_macs(self.mock_ironic)
self.assertEqual({'bar'}, ports)
self.mock_ironic.ports.assert_called_with(
limit=None, fields=['address', 'extra'])
self.mock__ib_mac_to_rmac_mapping.assert_called_once_with(
[mock_ports_list[1]])
class TestGetActiveMacs(test_base.BaseTest):
def setUp(self):
super(TestGetActiveMacs, self).setUp()
self.mock__ib_mac_to_rmac_mapping = self.useFixture(
fixtures.MockPatchObject(pxe_filter,
'_ib_mac_to_rmac_mapping')).mock
self.mock_active_macs = self.useFixture(
fixtures.MockPatchObject(node_cache, 'active_macs')).mock
self.mock_ironic = mock.Mock()
def test_active_port(self):
mock_ports_list = [
mock.Mock(address='foo'),
mock.Mock(address='bar'),
]
self.mock_ironic.ports.return_value = mock_ports_list
self.mock_active_macs.return_value = {'foo'}
ports = pxe_filter.get_active_macs(self.mock_ironic)
self.assertEqual({'foo'}, ports)
self.mock_ironic.ports.assert_called_once_with(
limit=None, fields=['address', 'extra'])
self.mock__ib_mac_to_rmac_mapping.assert_called_once_with(
[mock_ports_list[0]])
@mock.patch('time.sleep', lambda _x: None)
def test_retry_on_port_list_failure(self):
mock_ports_list = [
mock.Mock(address='foo'),
mock.Mock(address='bar'),
]
self.mock_ironic.ports.side_effect = [
os_exc.SDKException('boom'),
mock_ports_list
]
self.mock_active_macs.return_value = {'foo'}
ports = pxe_filter.get_active_macs(self.mock_ironic)
self.assertEqual({'foo'}, ports)
self.mock_ironic.ports.assert_called_with(
limit=None, fields=['address', 'extra'])
self.mock__ib_mac_to_rmac_mapping.assert_called_once_with(
[mock_ports_list[0]])
class TestGetIronicMacs(test_base.BaseTest):
def setUp(self):
super(TestGetIronicMacs, self).setUp()
self.mock__ib_mac_to_rmac_mapping = self.useFixture(
fixtures.MockPatchObject(pxe_filter,
'_ib_mac_to_rmac_mapping')).mock
self.mock_ironic = mock.Mock()
def test_active_port(self):
mock_ports_list = [
mock.Mock(address='foo'),
mock.Mock(address='bar'),
]
self.mock_ironic.ports.return_value = mock_ports_list
ports = pxe_filter.get_ironic_macs(self.mock_ironic)
self.assertEqual({'foo', 'bar'}, ports)
self.mock_ironic.ports.assert_called_once_with(
limit=None, fields=['address', 'extra'])
self.mock__ib_mac_to_rmac_mapping.assert_called_once_with(
mock_ports_list)
@mock.patch('time.sleep', lambda _x: None)
def test_retry_on_port_list_failure(self):
mock_ports_list = [
mock.Mock(address='foo'),
mock.Mock(address='bar'),
]
self.mock_ironic.ports.side_effect = [
os_exc.SDKException('boom'),
mock_ports_list
]
ports = pxe_filter.get_ironic_macs(self.mock_ironic)
self.assertEqual({'foo', 'bar'}, ports)
self.mock_ironic.ports.assert_called_with(
limit=None, fields=['address', 'extra'])
self.mock__ib_mac_to_rmac_mapping.assert_called_once_with(
mock_ports_list)
| apache-2.0 | -4,775,636,206,923,448,000 | 38.635438 | 79 | 0.624069 | false |
djmuhlestein/fx2lib | examples/eeprom/client.py | 9 | 2964 | # Copyright (C) 2009 Ubixum, Inc.
#
# This library is free software; you can redistribute it and/or
#
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
import sys
from fx2load import *
def get_eeprom(addr,length):
assert f.isopen()
    prom_val = ''
    while len(prom_val) < length:
        buf = '\x00' * 1024  # read 1024 bytes max at a time
        transfer_len = min(length - len(prom_val), 1024)
        ret = f.do_usb_command(buf,
                               0xc0,
                               0xb1,
                               addr + len(prom_val), 0, transfer_len)
if (ret>=0):
prom_val += buf[:ret]
else:
raise Exception("eeprom read didn't work: %d" % ret )
return prom_val
def hexchartoint(c):
return int(c.encode('hex'),16)
def fetch_eeprom():
"""
    See TRM 3.4.2, 3.4.3.
    This function dynamically determines how much data to read for c2 eeprom
    data and reads back the full eeprom image (iic format) from the device.
"""
assert f.isopen()
# fetch 1st 8 bytes
prom=get_eeprom(0,8)
if prom[0] == '\xc0':
return prom # c0 blocks are 8 bytes long
    if prom[0] != '\xc2': raise Exception("Invalid eeprom (%s)" % prom[0].encode('hex'))
# the length of the 1st data block is bytes 8,9 (0 based)
read_addr=8
while True:
size_read = get_eeprom(read_addr,4) # get the data length and start address
prom += size_read
read_addr+=4
# if this is the end 0x80 0x01 0xe6 0x00, then break
if size_read == '\x80\x01\xe6\x00': break
# else it is a data block
size = (hexchartoint(size_read[0]) << 8) + hexchartoint(size_read[1])
print "Next eeprom data size %d" % size
prom += get_eeprom(read_addr,size)
read_addr+=size
# one last byte
prom += get_eeprom(read_addr,1) # should always be 0
assert prom[-1] == '\x00'
return prom
def set_eeprom(prom):
assert f.isopen()
    bytes_written = 0
    while bytes_written < len(prom):
        # attempt 1024 bytes at a time
        to_write = min(len(prom) - bytes_written, 1024)
        print "Writing %d Bytes.." % to_write
        ret = f.do_usb_command(prom[bytes_written:bytes_written + to_write], 0x40, 0xb1, bytes_written, 0, to_write, 10000)
        if ret > 0:
            bytes_written += ret
        else:
            raise Exception("eeprom write didn't work: %d" % ret)
if __name__=='__main__':
openfx2(0x04b4,0x0083) # vid/pid of eeprom firmware
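    # Illustrative follow-up sketch (relies on fetch_eeprom() above; the
    # backup filename is arbitrary):
    #
    #   prom = fetch_eeprom()
    #   open('eeprom_backup.iic', 'wb').write(prom)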
| gpl-3.0 | 4,221,992,412,039,080,000 | 31.217391 | 117 | 0.657557 | false |
brianmay/karaage | karaage/tests/projects/test_forms.py | 2 | 2372 | # Copyright 2010-2017, The University of Melbourne
# Copyright 2010-2017, Brian May
#
# This file is part of Karaage.
#
# Karaage is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Karaage is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Karaage If not, see <http://www.gnu.org/licenses/>.
import pytest
import six
from django.test import TestCase
from karaage.projects.forms import ProjectForm
from karaage.tests.fixtures import ProjectFactory
@pytest.mark.django_db
class ProjectFormTestCase(TestCase):
def setUp(self):
super(ProjectFormTestCase, self).setUp()
self.project = ProjectFactory()
def _valid_form_data(self):
data = {
'pid': self.project.pid,
'name': self.project.name,
'description': self.project.description,
'institute': self.project.institute.id,
'additional_req': self.project.additional_req,
'start_date': self.project.start_date,
'end_date': self.project.end_date
}
return data
def test_valid_data(self):
form_data = self._valid_form_data()
form_data['name'] = 'test-project'
form = ProjectForm(data=form_data,
instance=self.project)
self.assertEqual(form.is_valid(), True, form.errors.items())
form.save()
self.assertEqual(self.project.name, 'test-project')
def test_invalid_pid(self):
form_data = self._valid_form_data()
form_data['pid'] = '!test-project'
form = ProjectForm(data=form_data)
self.assertEqual(form.is_valid(), False)
self.assertEqual(
form.errors.items(),
dict.items({
'leaders': [six.u('This field is required.')],
'pid': [six.u(
'Project names can only contain letters,'
' numbers and underscores')]
})
)
| gpl-3.0 | 9,215,623,259,874,224,000 | 33.882353 | 70 | 0.634907 | false |
gpiotti/tsflask | server/lib/werkzeug/debug/__init__.py | 310 | 7800 | # -*- coding: utf-8 -*-
"""
werkzeug.debug
~~~~~~~~~~~~~~
WSGI application traceback debugger.
:copyright: (c) 2013 by the Werkzeug Team, see AUTHORS for more details.
:license: BSD, see LICENSE for more details.
"""
import json
import mimetypes
from os.path import join, dirname, basename, isfile
from werkzeug.wrappers import BaseRequest as Request, BaseResponse as Response
from werkzeug.debug.tbtools import get_current_traceback, render_console_html
from werkzeug.debug.console import Console
from werkzeug.security import gen_salt
#: import this here because it once was documented as being available
#: from this module. In case there are users left ...
from werkzeug.debug.repr import debug_repr
class _ConsoleFrame(object):
"""Helper class so that we can reuse the frame console code for the
standalone console.
"""
def __init__(self, namespace):
self.console = Console(namespace)
self.id = 0
class DebuggedApplication(object):
"""Enables debugging support for a given application::
from werkzeug.debug import DebuggedApplication
from myapp import app
app = DebuggedApplication(app, evalex=True)
The `evalex` keyword argument allows evaluating expressions in a
traceback's frame context.
.. versionadded:: 0.9
The `lodgeit_url` parameter was deprecated.
:param app: the WSGI application to run debugged.
:param evalex: enable exception evaluation feature (interactive
debugging). This requires a non-forking server.
:param request_key: The key that points to the request object in ths
environment. This parameter is ignored in current
versions.
:param console_path: the URL for a general purpose console.
:param console_init_func: the function that is executed before starting
the general purpose console. The return value
is used as initial namespace.
:param show_hidden_frames: by default hidden traceback frames are skipped.
You can show them by setting this parameter
to `True`.
"""
# this class is public
__module__ = 'werkzeug'
def __init__(self, app, evalex=False, request_key='werkzeug.request',
console_path='/console', console_init_func=None,
show_hidden_frames=False, lodgeit_url=None):
if lodgeit_url is not None:
from warnings import warn
warn(DeprecationWarning('Werkzeug now pastes into gists.'))
if not console_init_func:
console_init_func = dict
self.app = app
self.evalex = evalex
self.frames = {}
self.tracebacks = {}
self.request_key = request_key
self.console_path = console_path
self.console_init_func = console_init_func
self.show_hidden_frames = show_hidden_frames
self.secret = gen_salt(20)
def debug_application(self, environ, start_response):
"""Run the application and conserve the traceback frames."""
app_iter = None
try:
app_iter = self.app(environ, start_response)
for item in app_iter:
yield item
if hasattr(app_iter, 'close'):
app_iter.close()
except Exception:
if hasattr(app_iter, 'close'):
app_iter.close()
traceback = get_current_traceback(skip=1, show_hidden_frames=
self.show_hidden_frames,
ignore_system_exceptions=True)
for frame in traceback.frames:
self.frames[frame.id] = frame
self.tracebacks[traceback.id] = traceback
try:
start_response('500 INTERNAL SERVER ERROR', [
('Content-Type', 'text/html; charset=utf-8'),
# Disable Chrome's XSS protection, the debug
# output can cause false-positives.
('X-XSS-Protection', '0'),
])
except Exception:
                # If we end up here, there has been output but an error
                # occurred. In that situation we can't do anything fancy
                # anymore; better to log something to the error log and
                # fall back gracefully.
environ['wsgi.errors'].write(
'Debugging middleware caught exception in streamed '
'response at a point where response headers were already '
'sent.\n')
else:
yield traceback.render_full(evalex=self.evalex,
secret=self.secret) \
.encode('utf-8', 'replace')
traceback.log(environ['wsgi.errors'])
def execute_command(self, request, command, frame):
"""Execute a command in a console."""
return Response(frame.console.eval(command), mimetype='text/html')
def display_console(self, request):
"""Display a standalone shell."""
if 0 not in self.frames:
self.frames[0] = _ConsoleFrame(self.console_init_func())
return Response(render_console_html(secret=self.secret),
mimetype='text/html')
def paste_traceback(self, request, traceback):
"""Paste the traceback and return a JSON response."""
rv = traceback.paste()
return Response(json.dumps(rv), mimetype='application/json')
def get_source(self, request, frame):
"""Render the source viewer."""
return Response(frame.render_source(), mimetype='text/html')
def get_resource(self, request, filename):
"""Return a static resource from the shared folder."""
filename = join(dirname(__file__), 'shared', basename(filename))
if isfile(filename):
mimetype = mimetypes.guess_type(filename)[0] \
or 'application/octet-stream'
f = open(filename, 'rb')
try:
return Response(f.read(), mimetype=mimetype)
finally:
f.close()
return Response('Not Found', status=404)
def __call__(self, environ, start_response):
"""Dispatch the requests."""
# important: don't ever access a function here that reads the incoming
# form data! Otherwise the application won't have access to that data
# any more!
request = Request(environ)
response = self.debug_application
if request.args.get('__debugger__') == 'yes':
cmd = request.args.get('cmd')
arg = request.args.get('f')
secret = request.args.get('s')
traceback = self.tracebacks.get(request.args.get('tb', type=int))
frame = self.frames.get(request.args.get('frm', type=int))
if cmd == 'resource' and arg:
response = self.get_resource(request, arg)
elif cmd == 'paste' and traceback is not None and \
secret == self.secret:
response = self.paste_traceback(request, traceback)
elif cmd == 'source' and frame and self.secret == secret:
response = self.get_source(request, frame)
elif self.evalex and cmd is not None and frame is not None and \
self.secret == secret:
response = self.execute_command(request, cmd, frame)
elif self.evalex and self.console_path is not None and \
request.path == self.console_path:
response = self.display_console(request)
return response(environ, start_response)
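# A minimal usage sketch (illustrative only, not part of this module):
# wrap a WSGI app and serve it with Werkzeug's development server.
# `make_app` is a hypothetical application factory; passing
# use_debugger=False to run_simple avoids wrapping the app twice.
#
#     from werkzeug.serving import run_simple
#     from myapp import make_app
#     app = DebuggedApplication(make_app(), evalex=True)
#     run_simple('localhost', 5000, app, use_debugger=False)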
| apache-2.0 | 2,949,149,047,933,335,000 | 41.162162 | 78 | 0.587436 | false |
ArcherSys/ArcherSys | Lib/tkinter/ttk.py | 1 | 167711 | """Ttk wrapper.
This module provides classes to allow using Tk themed widget set.
Ttk is based on a revised and enhanced version of
TIP #48 (http://tip.tcl.tk/48) specified style engine.
Its basic idea is to separate, to the extent possible, the code
implementing a widget's behavior from the code implementing its
appearance. Widget class bindings are primarily responsible for
maintaining the widget state and invoking callbacks; all aspects
of the widget's appearance lie in Themes.
"""
__version__ = "0.3.1"
__author__ = "Guilherme Polo <[email protected]>"
__all__ = ["Button", "Checkbutton", "Combobox", "Entry", "Frame", "Label",
"Labelframe", "LabelFrame", "Menubutton", "Notebook", "Panedwindow",
"PanedWindow", "Progressbar", "Radiobutton", "Scale", "Scrollbar",
"Separator", "Sizegrip", "Style", "Treeview",
# Extensions
"LabeledScale", "OptionMenu",
# functions
"tclobjs_to_py", "setup_master"]
import tkinter
from tkinter import _flatten, _join, _stringify, _splitdict
# Verify if Tk is new enough to not need the Tile package
_REQUIRE_TILE = True if tkinter.TkVersion < 8.5 else False
def _load_tile(master):
if _REQUIRE_TILE:
import os
tilelib = os.environ.get('TILE_LIBRARY')
if tilelib:
# append custom tile path to the list of directories that
# Tcl uses when attempting to resolve packages with the package
# command
master.tk.eval(
'global auto_path; '
'lappend auto_path {%s}' % tilelib)
master.tk.eval('package require tile') # TclError may be raised here
master._tile_loaded = True
def _format_optvalue(value, script=False):
"""Internal function."""
if script:
# if caller passes a Tcl script to tk.call, all the values need to
# be grouped into words (arguments to a command in Tcl dialect)
value = _stringify(value)
elif isinstance(value, (list, tuple)):
value = _join(value)
return value
def _format_optdict(optdict, script=False, ignore=None):
"""Formats optdict to a tuple to pass it to tk.call.
E.g. (script=False):
{'foreground': 'blue', 'padding': [1, 2, 3, 4]} returns:
('-foreground', 'blue', '-padding', '1 2 3 4')"""
opts = []
for opt, value in optdict.items():
if not ignore or opt not in ignore:
opts.append("-%s" % opt)
if value is not None:
opts.append(_format_optvalue(value, script))
return _flatten(opts)
def _mapdict_values(items):
# each value in mapdict is expected to be a sequence, where each item
# is another sequence containing a state (or several) and a value
# E.g. (script=False):
# [('active', 'selected', 'grey'), ('focus', [1, 2, 3, 4])]
# returns:
# ['active selected', 'grey', 'focus', [1, 2, 3, 4]]
opt_val = []
for *state, val in items:
        # hacks for backward compatibility
state[0] # raise IndexError if empty
if len(state) == 1:
# if it is empty (something that evaluates to False), then
# format it to Tcl code to denote the "normal" state
state = state[0] or ''
else:
# group multiple states
state = ' '.join(state) # raise TypeError if not str
opt_val.append(state)
if val is not None:
opt_val.append(val)
return opt_val
def _format_mapdict(mapdict, script=False):
"""Formats mapdict to pass it to tk.call.
E.g. (script=False):
{'expand': [('active', 'selected', 'grey'), ('focus', [1, 2, 3, 4])]}
returns:
('-expand', '{active selected} grey focus {1, 2, 3, 4}')"""
opts = []
for opt, value in mapdict.items():
opts.extend(("-%s" % opt,
_format_optvalue(_mapdict_values(value), script)))
return _flatten(opts)
def _format_elemcreate(etype, script=False, *args, **kw):
"""Formats args and kw according to the given element factory etype."""
spec = None
opts = ()
if etype in ("image", "vsapi"):
if etype == "image": # define an element based on an image
# first arg should be the default image name
iname = args[0]
# next args, if any, are statespec/value pairs which is almost
# a mapdict, but we just need the value
imagespec = _join(_mapdict_values(args[1:]))
spec = "%s %s" % (iname, imagespec)
else:
# define an element whose visual appearance is drawn using the
# Microsoft Visual Styles API which is responsible for the
# themed styles on Windows XP and Vista.
# Availability: Tk 8.6, Windows XP and Vista.
class_name, part_id = args[:2]
statemap = _join(_mapdict_values(args[2:]))
spec = "%s %s %s" % (class_name, part_id, statemap)
opts = _format_optdict(kw, script)
elif etype == "from": # clone an element
# it expects a themename and optionally an element to clone from,
# otherwise it will clone {} (empty element)
spec = args[0] # theme name
if len(args) > 1: # elementfrom specified
opts = (_format_optvalue(args[1], script),)
if script:
spec = '{%s}' % spec
opts = ' '.join(opts)
return spec, opts
def _format_layoutlist(layout, indent=0, indent_size=2):
"""Formats a layout list so we can pass the result to ttk::style
    layout and ttk::style settings. Note that the layout doesn't
    necessarily have to be a list.
E.g.:
[("Menubutton.background", None),
("Menubutton.button", {"children":
[("Menubutton.focus", {"children":
[("Menubutton.padding", {"children":
[("Menubutton.label", {"side": "left", "expand": 1})]
})]
})]
}),
("Menubutton.indicator", {"side": "right"})
]
returns:
Menubutton.background
Menubutton.button -children {
Menubutton.focus -children {
Menubutton.padding -children {
Menubutton.label -side left -expand 1
}
}
}
Menubutton.indicator -side right"""
script = []
for layout_elem in layout:
elem, opts = layout_elem
opts = opts or {}
fopts = ' '.join(_format_optdict(opts, True, ("children",)))
head = "%s%s%s" % (' ' * indent, elem, (" %s" % fopts) if fopts else '')
if "children" in opts:
script.append(head + " -children {")
indent += indent_size
newscript, indent = _format_layoutlist(opts['children'], indent,
indent_size)
script.append(newscript)
indent -= indent_size
script.append('%s}' % (' ' * indent))
else:
script.append(head)
return '\n'.join(script), indent
def _script_from_settings(settings):
"""Returns an appropriate script, based on settings, according to
theme_settings definition to be used by theme_settings and
theme_create."""
script = []
# a script will be generated according to settings passed, which
# will then be evaluated by Tcl
for name, opts in settings.items():
# will format specific keys according to Tcl code
if opts.get('configure'): # format 'configure'
s = ' '.join(_format_optdict(opts['configure'], True))
script.append("ttk::style configure %s %s;" % (name, s))
if opts.get('map'): # format 'map'
s = ' '.join(_format_mapdict(opts['map'], True))
script.append("ttk::style map %s %s;" % (name, s))
if 'layout' in opts: # format 'layout' which may be empty
if not opts['layout']:
s = 'null' # could be any other word, but this one makes sense
else:
s, _ = _format_layoutlist(opts['layout'])
script.append("ttk::style layout %s {\n%s\n}" % (name, s))
if opts.get('element create'): # format 'element create'
eopts = opts['element create']
etype = eopts[0]
# find where args end, and where kwargs start
argc = 1 # etype was the first one
while argc < len(eopts) and not hasattr(eopts[argc], 'items'):
argc += 1
elemargs = eopts[1:argc]
elemkw = eopts[argc] if argc < len(eopts) and eopts[argc] else {}
spec, opts = _format_elemcreate(etype, True, *elemargs, **elemkw)
script.append("ttk::style element create %s %s %s %s" % (
name, etype, spec, opts))
return '\n'.join(script)
def _list_from_statespec(stuple):
"""Construct a list from the given statespec tuple according to the
accepted statespec accepted by _format_mapdict."""
nval = []
for val in stuple:
typename = getattr(val, 'typename', None)
if typename is None:
nval.append(val)
else: # this is a Tcl object
val = str(val)
if typename == 'StateSpec':
val = val.split()
nval.append(val)
it = iter(nval)
return [_flatten(spec) for spec in zip(it, it)]
def _list_from_layouttuple(tk, ltuple):
"""Construct a list from the tuple returned by ttk::layout, this is
somewhat the reverse of _format_layoutlist."""
ltuple = tk.splitlist(ltuple)
res = []
indx = 0
while indx < len(ltuple):
name = ltuple[indx]
opts = {}
res.append((name, opts))
indx += 1
while indx < len(ltuple): # grab name's options
opt, val = ltuple[indx:indx + 2]
if not opt.startswith('-'): # found next name
break
opt = opt[1:] # remove the '-' from the option
indx += 2
if opt == 'children':
val = _list_from_layouttuple(tk, val)
opts[opt] = val
return res
def _val_or_dict(tk, options, *args):
"""Format options then call Tk command with args and options and return
the appropriate result.
    If no option is specified, a dict is returned. If an option is
specified with the None value, the value for that option is returned.
Otherwise, the function just sets the passed options and the caller
shouldn't be expecting a return value anyway."""
options = _format_optdict(options)
res = tk.call(*(args + options))
if len(options) % 2: # option specified without a value, return its value
return res
return _splitdict(tk, res, conv=_tclobj_to_py)
def _convert_stringval(value):
"""Converts a value to, hopefully, a more appropriate Python object."""
value = str(value)
try:
value = int(value)
except (ValueError, TypeError):
pass
return value
def _to_number(x):
if isinstance(x, str):
if '.' in x:
x = float(x)
else:
x = int(x)
return x
def _tclobj_to_py(val):
"""Return value converted from Tcl object to Python object."""
if val and hasattr(val, '__len__') and not isinstance(val, str):
if getattr(val[0], 'typename', None) == 'StateSpec':
val = _list_from_statespec(val)
else:
val = list(map(_convert_stringval, val))
elif hasattr(val, 'typename'): # some other (single) Tcl object
val = _convert_stringval(val)
return val
def tclobjs_to_py(adict):
"""Returns adict with its values converted from Tcl objects to Python
objects."""
for opt, val in adict.items():
adict[opt] = _tclobj_to_py(val)
return adict
def setup_master(master=None):
"""If master is not None, itself is returned. If master is None,
the default master is returned if there is one, otherwise a new
master is created and returned.
If it is not allowed to use the default root and master is None,
RuntimeError is raised."""
if master is None:
if tkinter._support_default_root:
master = tkinter._default_root or tkinter.Tk()
else:
raise RuntimeError(
"No master specified and tkinter is "
"configured to not support default root")
return master
class Style(object):
"""Manipulate style database."""
_name = "ttk::style"
def __init__(self, master=None):
master = setup_master(master)
if not getattr(master, '_tile_loaded', False):
# Load tile now, if needed
_load_tile(master)
self.master = master
self.tk = self.master.tk
def configure(self, style, query_opt=None, **kw):
"""Query or sets the default value of the specified option(s) in
style.
Each key in kw is an option and each value is either a string or
a sequence identifying the value for that option."""
if query_opt is not None:
kw[query_opt] = None
return _val_or_dict(self.tk, kw, self._name, "configure", style)
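    # A minimal sketch of configure() (illustrative; assumes a Tk root
    # window named `root` exists): derive a custom style from TButton and
    # attach it to a widget through its style option.
    #
    #     style = Style(root)
    #     style.configure('Flat.TButton', padding=6, relief='flat')
    #     btn = Button(root, text='Quit', style='Flat.TButton')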
def map(self, style, query_opt=None, **kw):
"""Query or sets dynamic values of the specified option(s) in
style.
Each key in kw is an option and each value should be a list or a
        tuple (usually) containing statespecs grouped in tuples, lists,
        or something else of your preference. A statespec is composed of
        one or more states followed by a value."""
if query_opt is not None:
return _list_from_statespec(self.tk.splitlist(
self.tk.call(self._name, "map", style, '-%s' % query_opt)))
return _splitdict(
self.tk,
self.tk.call(self._name, "map", style, *_format_mapdict(kw)),
conv=_tclobj_to_py)
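    # A minimal sketch of map() (illustrative): turn TButton text red while
    # pressed and blue while active; querying with query_opt returns the
    # statespec list back, roughly [('pressed', 'red'), ('active', 'blue')].
    #
    #     style = Style()
    #     style.map('TButton',
    #               foreground=[('pressed', 'red'), ('active', 'blue')])
    #     style.map('TButton', query_opt='foreground')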
def lookup(self, style, option, state=None, default=None):
"""Returns the value specified for option in style.
If state is specified it is expected to be a sequence of one
or more states. If the default argument is set, it is used as
a fallback value in case no specification for option is found."""
state = ' '.join(state) if state else ''
return self.tk.call(self._name, "lookup", style, '-%s' % option,
state, default)
def layout(self, style, layoutspec=None):
"""Define the widget layout for given style. If layoutspec is
omitted, return the layout specification for given style.
layoutspec is expected to be a list or an object different than
None that evaluates to False if you want to "turn off" that style.
If it is a list (or tuple, or something else), each item should be
a tuple where the first item is the layout name and the second item
should have the format described below:
LAYOUTS
        A layout can contain the value None, if it takes no options, or
a dict of options specifying how to arrange the element.
The layout mechanism uses a simplified version of the pack
geometry manager: given an initial cavity, each element is
allocated a parcel. Valid options/values are:
side: whichside
Specifies which side of the cavity to place the
element; one of top, right, bottom or left. If
omitted, the element occupies the entire cavity.
sticky: nswe
Specifies where the element is placed inside its
allocated parcel.
children: [sublayout... ]
Specifies a list of elements to place inside the
element. Each element is a tuple (or other sequence)
where the first item is the layout name, and the other
is a LAYOUT."""
lspec = None
if layoutspec:
lspec = _format_layoutlist(layoutspec)[0]
elif layoutspec is not None: # will disable the layout ({}, '', etc)
lspec = "null" # could be any other word, but this may make sense
# when calling layout(style) later
return _list_from_layouttuple(self.tk,
self.tk.call(self._name, "layout", style, lspec))
def element_create(self, elementname, etype, *args, **kw):
"""Create a new element in the current theme of given etype."""
spec, opts = _format_elemcreate(etype, False, *args, **kw)
self.tk.call(self._name, "element", "create", elementname, etype,
spec, *opts)
def element_names(self):
"""Returns the list of elements defined in the current theme."""
return self.tk.splitlist(self.tk.call(self._name, "element", "names"))
def element_options(self, elementname):
"""Return the list of elementname's options."""
return self.tk.splitlist(self.tk.call(self._name, "element", "options", elementname))
def theme_create(self, themename, parent=None, settings=None):
"""Creates a new theme.
It is an error if themename already exists. If parent is
specified, the new theme will inherit styles, elements and
layouts from the specified parent theme. If settings are present,
they are expected to have the same syntax used for theme_settings."""
script = _script_from_settings(settings) if settings else ''
if parent:
self.tk.call(self._name, "theme", "create", themename,
"-parent", parent, "-settings", script)
else:
self.tk.call(self._name, "theme", "create", themename,
"-settings", script)
def theme_settings(self, themename, settings):
"""Temporarily sets the current theme to themename, apply specified
settings and then restore the previous theme.
Each key in settings is a style and each value may contain the
keys 'configure', 'map', 'layout' and 'element create' and they
are expected to have the same format as specified by the methods
configure, map, layout and element_create respectively."""
script = _script_from_settings(settings)
self.tk.call(self._name, "theme", "settings", themename, script)
def theme_names(self):
"""Returns a list of all known themes."""
return self.tk.splitlist(self.tk.call(self._name, "theme", "names"))
def theme_use(self, themename=None):
"""If themename is None, returns the theme in use, otherwise, set
the current theme to themename, refreshes all widgets and emits
a <<ThemeChanged>> event."""
if themename is None:
# Starting on Tk 8.6, checking this global is no longer needed
# since it allows doing self.tk.call(self._name, "theme", "use")
return self.tk.eval("return $ttk::currentTheme")
# using "ttk::setTheme" instead of "ttk::style theme use" causes
# the variable currentTheme to be updated, also, ttk::setTheme calls
# "ttk::style theme use" in order to change theme.
self.tk.call("ttk::setTheme", themename)
class Widget(tkinter.Widget):
"""Base class for Tk themed widgets."""
def __init__(self, master, widgetname, kw=None):
"""Constructs a Ttk Widget with the parent master.
STANDARD OPTIONS
class, cursor, takefocus, style
SCROLLABLE WIDGET OPTIONS
xscrollcommand, yscrollcommand
LABEL WIDGET OPTIONS
text, textvariable, underline, image, compound, width
WIDGET STATES
active, disabled, focus, pressed, selected, background,
readonly, alternate, invalid
"""
master = setup_master(master)
if not getattr(master, '_tile_loaded', False):
# Load tile now, if needed
_load_tile(master)
tkinter.Widget.__init__(self, master, widgetname, kw=kw)
def identify(self, x, y):
"""Returns the name of the element at position x, y, or the empty
string if the point does not lie within any element.
x and y are pixel coordinates relative to the widget."""
return self.tk.call(self._w, "identify", x, y)
def instate(self, statespec, callback=None, *args, **kw):
"""Test the widget's state.
If callback is not specified, returns True if the widget state
matches statespec and False otherwise. If callback is specified,
then it will be invoked with *args, **kw if the widget state
matches statespec. statespec is expected to be a sequence."""
ret = self.tk.getboolean(
self.tk.call(self._w, "instate", ' '.join(statespec)))
if ret and callback:
return callback(*args, **kw)
return bool(ret)
def state(self, statespec=None):
"""Modify or inquire widget state.
Widget state is returned if statespec is None, otherwise it is
set according to the statespec flags and then a new state spec
is returned indicating which flags were changed. statespec is
expected to be a sequence."""
if statespec is not None:
statespec = ' '.join(statespec)
return self.tk.splitlist(str(self.tk.call(self._w, "state", statespec)))
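    # A minimal sketch of state()/instate() (illustrative; assumes `btn` is
    # a ttk widget instance). Flags prefixed with '!' clear a state, and
    # instate() takes the same statespec sequences:
    #
    #     btn.state(['disabled'])     # set the disabled flag
    #     btn.state(['!disabled'])    # clear it again
    #     btn.instate(['!disabled'], print, 'button is enabled')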
class Button(Widget):
"""Ttk Button widget, displays a textual label and/or image, and
evaluates a command when pressed."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Button widget with the parent master.
STANDARD OPTIONS
class, compound, cursor, image, state, style, takefocus,
text, textvariable, underline, width
WIDGET-SPECIFIC OPTIONS
command, default, width
"""
Widget.__init__(self, master, "ttk::button", kw)
def invoke(self):
"""Invokes the command associated with the button."""
return self.tk.call(self._w, "invoke")
class Checkbutton(Widget):
"""Ttk Checkbutton widget which is either in on- or off-state."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Checkbutton widget with the parent master.
STANDARD OPTIONS
class, compound, cursor, image, state, style, takefocus,
text, textvariable, underline, width
WIDGET-SPECIFIC OPTIONS
command, offvalue, onvalue, variable
"""
Widget.__init__(self, master, "ttk::checkbutton", kw)
def invoke(self):
"""Toggles between the selected and deselected states and
invokes the associated command. If the widget is currently
selected, sets the option variable to the offvalue option
and deselects the widget; otherwise, sets the option variable
to the option onvalue.
Returns the result of the associated command."""
return self.tk.call(self._w, "invoke")
class Entry(Widget, tkinter.Entry):
"""Ttk Entry widget displays a one-line text string and allows that
string to be edited by the user."""
def __init__(self, master=None, widget=None, **kw):
"""Constructs a Ttk Entry widget with the parent master.
STANDARD OPTIONS
class, cursor, style, takefocus, xscrollcommand
WIDGET-SPECIFIC OPTIONS
exportselection, invalidcommand, justify, show, state,
textvariable, validate, validatecommand, width
VALIDATION MODES
none, key, focus, focusin, focusout, all
"""
Widget.__init__(self, master, widget or "ttk::entry", kw)
def bbox(self, index):
"""Return a tuple of (x, y, width, height) which describes the
bounding box of the character given by index."""
return self._getints(self.tk.call(self._w, "bbox", index))
def identify(self, x, y):
"""Returns the name of the element at position x, y, or the
empty string if the coordinates are outside the window."""
return self.tk.call(self._w, "identify", x, y)
def validate(self):
"""Force revalidation, independent of the conditions specified
by the validate option. Returns False if validation fails, True
if it succeeds. Sets or clears the invalid state accordingly."""
return bool(self.tk.getboolean(self.tk.call(self._w, "validate")))
class Combobox(Entry):
"""Ttk Combobox widget combines a text field with a pop-down list of
values."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Combobox widget with the parent master.
STANDARD OPTIONS
class, cursor, style, takefocus
WIDGET-SPECIFIC OPTIONS
exportselection, justify, height, postcommand, state,
textvariable, values, width
"""
Entry.__init__(self, master, "ttk::combobox", **kw)
def current(self, newindex=None):
"""If newindex is supplied, sets the combobox value to the
element at position newindex in the list of values. Otherwise,
returns the index of the current value in the list of values
or -1 if the current value does not appear in the list."""
if newindex is None:
return self.tk.getint(self.tk.call(self._w, "current"))
return self.tk.call(self._w, "current", newindex)
def set(self, value):
"""Sets the value of the combobox to value."""
self.tk.call(self._w, "set", value)
class Frame(Widget):
"""Ttk Frame widget is a container, used to group other widgets
together."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Frame with parent master.
STANDARD OPTIONS
class, cursor, style, takefocus
WIDGET-SPECIFIC OPTIONS
borderwidth, relief, padding, width, height
"""
Widget.__init__(self, master, "ttk::frame", kw)
class Label(Widget):
"""Ttk Label widget displays a textual label and/or image."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Label with parent master.
STANDARD OPTIONS
class, compound, cursor, image, style, takefocus, text,
textvariable, underline, width
WIDGET-SPECIFIC OPTIONS
anchor, background, font, foreground, justify, padding,
relief, text, wraplength
"""
Widget.__init__(self, master, "ttk::label", kw)
class Labelframe(Widget):
"""Ttk Labelframe widget is a container used to group other widgets
together. It has an optional label, which may be a plain text string
or another widget."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Labelframe with parent master.
STANDARD OPTIONS
class, cursor, style, takefocus
WIDGET-SPECIFIC OPTIONS
labelanchor, text, underline, padding, labelwidget, width,
height
"""
Widget.__init__(self, master, "ttk::labelframe", kw)
LabelFrame = Labelframe # tkinter name compatibility
class Menubutton(Widget):
"""Ttk Menubutton widget displays a textual label and/or image, and
displays a menu when pressed."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Menubutton with parent master.
STANDARD OPTIONS
class, compound, cursor, image, state, style, takefocus,
text, textvariable, underline, width
WIDGET-SPECIFIC OPTIONS
direction, menu
"""
Widget.__init__(self, master, "ttk::menubutton", kw)
class Notebook(Widget):
"""Ttk Notebook widget manages a collection of windows and displays
a single one at a time. Each child window is associated with a tab,
which the user may select to change the currently-displayed window."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Notebook with parent master.
STANDARD OPTIONS
class, cursor, style, takefocus
WIDGET-SPECIFIC OPTIONS
height, padding, width
TAB OPTIONS
state, sticky, padding, text, image, compound, underline
TAB IDENTIFIERS (tab_id)
The tab_id argument found in several methods may take any of
the following forms:
* An integer between zero and the number of tabs
* The name of a child window
* A positional specification of the form "@x,y", which
defines the tab
* The string "current", which identifies the
currently-selected tab
* The string "end", which returns the number of tabs (only
valid for method index)
"""
Widget.__init__(self, master, "ttk::notebook", kw)
def add(self, child, **kw):
"""Adds a new tab to the notebook.
If window is currently managed by the notebook but hidden, it is
restored to its previous position."""
self.tk.call(self._w, "add", child, *(_format_optdict(kw)))
def forget(self, tab_id):
"""Removes the tab specified by tab_id, unmaps and unmanages the
associated window."""
self.tk.call(self._w, "forget", tab_id)
def hide(self, tab_id):
"""Hides the tab specified by tab_id.
The tab will not be displayed, but the associated window remains
managed by the notebook and its configuration remembered. Hidden
tabs may be restored with the add command."""
self.tk.call(self._w, "hide", tab_id)
def identify(self, x, y):
"""Returns the name of the tab element at position x, y, or the
empty string if none."""
return self.tk.call(self._w, "identify", x, y)
def index(self, tab_id):
"""Returns the numeric index of the tab specified by tab_id, or
the total number of tabs if tab_id is the string "end"."""
return self.tk.getint(self.tk.call(self._w, "index", tab_id))
def insert(self, pos, child, **kw):
"""Inserts a pane at the specified position.
pos is either the string end, an integer index, or the name of
a managed child. If child is already managed by the notebook,
moves it to the specified position."""
self.tk.call(self._w, "insert", pos, child, *(_format_optdict(kw)))
def select(self, tab_id=None):
"""Selects the specified tab.
The associated child window will be displayed, and the
previously-selected window (if different) is unmapped. If tab_id
is omitted, returns the widget name of the currently selected
pane."""
return self.tk.call(self._w, "select", tab_id)
def tab(self, tab_id, option=None, **kw):
"""Query or modify the options of the specific tab_id.
If kw is not given, returns a dict of the tab option values. If option
is specified, returns the value of that option. Otherwise, sets the
options to the corresponding values."""
if option is not None:
kw[option] = None
return _val_or_dict(self.tk, kw, self._w, "tab", tab_id)
def tabs(self):
"""Returns a list of windows managed by the notebook."""
return self.tk.splitlist(self.tk.call(self._w, "tabs") or ())
def enable_traversal(self):
"""Enable keyboard traversal for a toplevel window containing
this notebook.
This will extend the bindings for the toplevel window containing
this notebook as follows:
Control-Tab: selects the tab following the currently selected
one
Shift-Control-Tab: selects the tab preceding the currently
selected one
Alt-K: where K is the mnemonic (underlined) character of any
tab, will select that tab.
Multiple notebooks in a single toplevel may be enabled for
traversal, including nested notebooks. However, notebook traversal
only works properly if all panes are direct children of the
notebook."""
        # The only, and welcome, difference I see is about mnemonics, which
        # work after calling this method. Control-Tab and Shift-Control-Tab
        # always work (here at least).
self.tk.call("ttk::notebook::enableTraversal", self._w)
class Panedwindow(Widget, tkinter.PanedWindow):
"""Ttk Panedwindow widget displays a number of subwindows, stacked
either vertically or horizontally."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Panedwindow with parent master.
STANDARD OPTIONS
class, cursor, style, takefocus
WIDGET-SPECIFIC OPTIONS
orient, width, height
PANE OPTIONS
weight
"""
Widget.__init__(self, master, "ttk::panedwindow", kw)
forget = tkinter.PanedWindow.forget # overrides Pack.forget
def insert(self, pos, child, **kw):
"""Inserts a pane at the specified positions.
pos is either the string end, and integer index, or the name
of a child. If child is already managed by the paned window,
moves it to the specified position."""
self.tk.call(self._w, "insert", pos, child, *(_format_optdict(kw)))
def pane(self, pane, option=None, **kw):
"""Query or modify the options of the specified pane.
pane is either an integer index or the name of a managed subwindow.
If kw is not given, returns a dict of the pane option values. If
option is specified then the value for that option is returned.
Otherwise, sets the options to the corresponding values."""
if option is not None:
kw[option] = None
return _val_or_dict(self.tk, kw, self._w, "pane", pane)
def sashpos(self, index, newpos=None):
"""If newpos is specified, sets the position of sash number index.
May adjust the positions of adjacent sashes to ensure that
positions are monotonically increasing. Sash positions are further
constrained to be between 0 and the total size of the widget.
Returns the new position of sash number index."""
return self.tk.getint(self.tk.call(self._w, "sashpos", index, newpos))
PanedWindow = Panedwindow # tkinter name compatibility
class Progressbar(Widget):
"""Ttk Progressbar widget shows the status of a long-running
operation. They can operate in two modes: determinate mode shows the
amount completed relative to the total amount of work to be done, and
indeterminate mode provides an animated display to let the user know
that something is happening."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Progressbar with parent master.
STANDARD OPTIONS
class, cursor, style, takefocus
WIDGET-SPECIFIC OPTIONS
orient, length, mode, maximum, value, variable, phase
"""
Widget.__init__(self, master, "ttk::progressbar", kw)
def start(self, interval=None):
"""Begin autoincrement mode: schedules a recurring timer event
that calls method step every interval milliseconds.
        interval defaults to 50 milliseconds (20 steps/second) if omitted."""
self.tk.call(self._w, "start", interval)
def step(self, amount=None):
"""Increments the value option by amount.
amount defaults to 1.0 if omitted."""
self.tk.call(self._w, "step", amount)
def stop(self):
"""Stop autoincrement mode: cancels any recurring timer event
initiated by start."""
self.tk.call(self._w, "stop")
class Radiobutton(Widget):
"""Ttk Radiobutton widgets are used in groups to show or change a
set of mutually-exclusive options."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Radiobutton with parent master.
STANDARD OPTIONS
class, compound, cursor, image, state, style, takefocus,
text, textvariable, underline, width
WIDGET-SPECIFIC OPTIONS
command, value, variable
"""
Widget.__init__(self, master, "ttk::radiobutton", kw)
def invoke(self):
"""Sets the option variable to the option value, selects the
widget, and invokes the associated command.
Returns the result of the command, or an empty string if
no command is specified."""
return self.tk.call(self._w, "invoke")
class Scale(Widget, tkinter.Scale):
"""Ttk Scale widget is typically used to control the numeric value of
a linked variable that varies uniformly over some range."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Scale with parent master.
STANDARD OPTIONS
class, cursor, style, takefocus
WIDGET-SPECIFIC OPTIONS
command, from, length, orient, to, value, variable
"""
Widget.__init__(self, master, "ttk::scale", kw)
def configure(self, cnf=None, **kw):
"""Modify or query scale options.
Setting a value for any of the "from", "from_" or "to" options
generates a <<RangeChanged>> event."""
if cnf:
kw.update(cnf)
Widget.configure(self, **kw)
if any(['from' in kw, 'from_' in kw, 'to' in kw]):
self.event_generate('<<RangeChanged>>')
def get(self, x=None, y=None):
"""Get the current value of the value option, or the value
corresponding to the coordinates x, y if they are specified.
x and y are pixel coordinates relative to the scale widget
origin."""
return self.tk.call(self._w, 'get', x, y)
class Scrollbar(Widget, tkinter.Scrollbar):
"""Ttk Scrollbar controls the viewport of a scrollable widget."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Scrollbar with parent master.
STANDARD OPTIONS
class, cursor, style, takefocus
WIDGET-SPECIFIC OPTIONS
command, orient
"""
Widget.__init__(self, master, "ttk::scrollbar", kw)
class Separator(Widget):
"""Ttk Separator widget displays a horizontal or vertical separator
bar."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Separator with parent master.
STANDARD OPTIONS
class, cursor, style, takefocus
WIDGET-SPECIFIC OPTIONS
orient
"""
Widget.__init__(self, master, "ttk::separator", kw)
class Sizegrip(Widget):
"""Ttk Sizegrip allows the user to resize the containing toplevel
window by pressing and dragging the grip."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Sizegrip with parent master.
STANDARD OPTIONS
class, cursor, state, style, takefocus
"""
Widget.__init__(self, master, "ttk::sizegrip", kw)
class Treeview(Widget, tkinter.XView, tkinter.YView):
"""Ttk Treeview widget displays a hierarchical collection of items.
Each item has a textual label, an optional image, and an optional list
of data values. The data values are displayed in successive columns
after the tree label."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Treeview with parent master.
STANDARD OPTIONS
class, cursor, style, takefocus, xscrollcommand,
yscrollcommand
WIDGET-SPECIFIC OPTIONS
columns, displaycolumns, height, padding, selectmode, show
ITEM OPTIONS
text, image, values, open, tags
TAG OPTIONS
foreground, background, font, image
"""
Widget.__init__(self, master, "ttk::treeview", kw)
def bbox(self, item, column=None):
"""Returns the bounding box (relative to the treeview widget's
window) of the specified item in the form x y width height.
If column is specified, returns the bounding box of that cell.
If the item is not visible (i.e., if it is a descendant of a
closed item or is scrolled offscreen), returns an empty string."""
return self._getints(self.tk.call(self._w, "bbox", item, column)) or ''
def get_children(self, item=None):
"""Returns a tuple of children belonging to item.
If item is not specified, returns root children."""
return self.tk.splitlist(
self.tk.call(self._w, "children", item or '') or ())
def set_children(self, item, *newchildren):
"""Replaces item's child with newchildren.
Children present in item that are not present in newchildren
are detached from tree. No items in newchildren may be an
ancestor of item."""
self.tk.call(self._w, "children", item, newchildren)
def column(self, column, option=None, **kw):
"""Query or modify the options for the specified column.
If kw is not given, returns a dict of the column option values. If
option is specified then the value for that option is returned.
Otherwise, sets the options to the corresponding values."""
if option is not None:
kw[option] = None
return _val_or_dict(self.tk, kw, self._w, "column", column)
def delete(self, *items):
"""Delete all specified items and all their descendants. The root
item may not be deleted."""
self.tk.call(self._w, "delete", items)
def detach(self, *items):
"""Unlinks all of the specified items from the tree.
The items and all of their descendants are still present, and may
be reinserted at another point in the tree, but will not be
displayed. The root item may not be detached."""
self.tk.call(self._w, "detach", items)
def exists(self, item):
"""Returns True if the specified item is present in the tree,
False otherwise."""
return bool(self.tk.getboolean(self.tk.call(self._w, "exists", item)))
def focus(self, item=None):
"""If item is specified, sets the focus item to item. Otherwise,
returns the current focus item, or '' if there is none."""
return self.tk.call(self._w, "focus", item)
def heading(self, column, option=None, **kw):
"""Query or modify the heading options for the specified column.
If kw is not given, returns a dict of the heading option values. If
option is specified then the value for that option is returned.
Otherwise, sets the options to the corresponding values.
Valid options/values are:
text: text
The text to display in the column heading
image: image_name
Specifies an image to display to the right of the column
heading
anchor: anchor
Specifies how the heading text should be aligned. One of
the standard Tk anchor values
command: callback
A callback to be invoked when the heading label is
pressed.
To configure the tree column heading, call this with column = "#0" """
cmd = kw.get('command')
if cmd and not isinstance(cmd, str):
# callback not registered yet, do it now
kw['command'] = self.master.register(cmd, self._substitute)
if option is not None:
kw[option] = None
return _val_or_dict(self.tk, kw, self._w, 'heading', column)
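    # A minimal sketch of heading() (illustrative; assumes `tree` was
    # created with columns=('size',) and `sort_by_size` is a hypothetical
    # callback):
    #
    #     tree.heading('#0', text='Name', anchor='w')
    #     tree.heading('size', text='Size', command=sort_by_size)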
def identify(self, component, x, y):
"""Returns a description of the specified component under the
point given by x and y, or the empty string if no such component
is present at that position."""
return self.tk.call(self._w, "identify", component, x, y)
def identify_row(self, y):
"""Returns the item ID of the item at position y."""
return self.identify("row", 0, y)
def identify_column(self, x):
"""Returns the data column identifier of the cell at position x.
The tree column has ID #0."""
return self.identify("column", x, 0)
def identify_region(self, x, y):
"""Returns one of:
heading: Tree heading area.
separator: Space between two columns headings;
tree: The tree area.
cell: A data cell.
* Availability: Tk 8.6"""
return self.identify("region", x, y)
def identify_element(self, x, y):
"""Returns the element at position x, y.
* Availability: Tk 8.6"""
return self.identify("element", x, y)
def index(self, item):
"""Returns the integer index of item within its parent's list
of children."""
return self.tk.getint(self.tk.call(self._w, "index", item))
def insert(self, parent, index, iid=None, **kw):
"""Creates a new item and return the item identifier of the newly
created item.
parent is the item ID of the parent item, or the empty string
to create a new top-level item. index is an integer, or the value
end, specifying where in the list of parent's children to insert
the new item. If index is less than or equal to zero, the new node
is inserted at the beginning, if index is greater than or equal to
the current number of children, it is inserted at the end. If iid
is specified, it is used as the item identifier, iid must not
already exist in the tree. Otherwise, a new unique identifier
is generated."""
opts = _format_optdict(kw)
if iid:
res = self.tk.call(self._w, "insert", parent, index,
"-id", iid, *opts)
else:
res = self.tk.call(self._w, "insert", parent, index, *opts)
return res
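    # A minimal sketch of insert() (illustrative; assumes a Tk root window
    # named `root`): one data column plus the tree column, with a child row
    # nested under a parent row.
    #
    #     tree = Treeview(root, columns=('size',))
    #     top = tree.insert('', 'end', text='usr', values=('4096',))
    #     tree.insert(top, 'end', text='bin', values=('12288',))
    #     tree.item(top, open=True)   # expand the parent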
def item(self, item, option=None, **kw):
"""Query or modify the options for the specified item.
If no options are given, a dict with options/values for the item
is returned. If option is specified then the value for that option
is returned. Otherwise, sets the options to the corresponding
values as given by kw."""
if option is not None:
kw[option] = None
return _val_or_dict(self.tk, kw, self._w, "item", item)
def move(self, item, parent, index):
"""Moves item to position index in parent's list of children.
It is illegal to move an item under one of its descendants. If
index is less than or equal to zero, item is moved to the
beginning, if greater than or equal to the number of children,
it is moved to the end. If item was detached it is reattached."""
self.tk.call(self._w, "move", item, parent, index)
reattach = move # A sensible method name for reattaching detached items
def next(self, item):
"""Returns the identifier of item's next sibling, or '' if item
is the last child of its parent."""
return self.tk.call(self._w, "next", item)
def parent(self, item):
"""Returns the ID of the parent of item, or '' if item is at the
top level of the hierarchy."""
return self.tk.call(self._w, "parent", item)
def prev(self, item):
"""Returns the identifier of item's previous sibling, or '' if
item is the first child of its parent."""
return self.tk.call(self._w, "prev", item)
def see(self, item):
"""Ensure that item is visible.
Sets all of item's ancestors open option to True, and scrolls
the widget if necessary so that item is within the visible
portion of the tree."""
self.tk.call(self._w, "see", item)
def selection(self, selop=None, items=None):
"""If selop is not specified, returns selected items."""
return self.tk.call(self._w, "selection", selop, items)
def selection_set(self, items):
"""items becomes the new selection."""
self.selection("set", items)
def selection_add(self, items):
"""Add items to the selection."""
self.selection("add", items)
def selection_remove(self, items):
"""Remove items from the selection."""
self.selection("remove", items)
def selection_toggle(self, items):
"""Toggle the selection state of each item in items."""
self.selection("toggle", items)
def set(self, item, column=None, value=None):
"""Query or set the value of given item.
With one argument, return a dictionary of column/value pairs
for the specified item. With two arguments, return the current
value of the specified column. With three arguments, set the
value of given column in given item to the specified value."""
res = self.tk.call(self._w, "set", item, column, value)
if column is None and value is None:
return _splitdict(self.tk, res,
cut_minus=False, conv=_tclobj_to_py)
else:
return res
def tag_bind(self, tagname, sequence=None, callback=None):
"""Bind a callback for the given event sequence to the tag tagname.
When an event is delivered to an item, the callbacks for each
of the item's tags option are called."""
self._bind((self._w, "tag", "bind", tagname), sequence, callback, add=0)
def tag_configure(self, tagname, option=None, **kw):
"""Query or modify the options for the specified tagname.
If kw is not given, returns a dict of the option settings for tagname.
If option is specified, returns the value for that option for the
specified tagname. Otherwise, sets the options to the corresponding
values for the given tagname."""
if option is not None:
kw[option] = None
return _val_or_dict(self.tk, kw, self._w, "tag", "configure",
tagname)
def tag_has(self, tagname, item=None):
"""If item is specified, returns 1 or 0 depending on whether the
specified item has the given tagname. Otherwise, returns a list of
all items which have the specified tag.
* Availability: Tk 8.6"""
if item is None:
return self.tk.splitlist(
self.tk.call(self._w, "tag", "has", tagname))
else:
return self.tk.getboolean(
self.tk.call(self._w, "tag", "has", tagname, item))
# Extensions
class LabeledScale(Frame):
"""A Ttk Scale widget with a Ttk Label widget indicating its
current value.
The Ttk Scale can be accessed through instance.scale, and Ttk Label
can be accessed through instance.label"""
def __init__(self, master=None, variable=None, from_=0, to=10, **kw):
"""Construct an horizontal LabeledScale with parent master, a
variable to be associated with the Ttk Scale widget and its range.
If variable is not specified, a tkinter.IntVar is created.
WIDGET-SPECIFIC OPTIONS
compound: 'top' or 'bottom'
Specifies how to display the label relative to the scale.
Defaults to 'top'.
"""
self._label_top = kw.pop('compound', 'top') == 'top'
Frame.__init__(self, master, **kw)
self._variable = variable or tkinter.IntVar(master)
self._variable.set(from_)
self._last_valid = from_
self.label = Label(self)
self.scale = Scale(self, variable=self._variable, from_=from_, to=to)
self.scale.bind('<<RangeChanged>>', self._adjust)
# position scale and label according to the compound option
scale_side = 'bottom' if self._label_top else 'top'
label_side = 'top' if scale_side == 'bottom' else 'bottom'
self.scale.pack(side=scale_side, fill='x')
tmp = Label(self).pack(side=label_side) # place holder
self.label.place(anchor='n' if label_side == 'top' else 's')
# update the label as scale or variable changes
self.__tracecb = self._variable.trace_variable('w', self._adjust)
self.bind('<Configure>', self._adjust)
self.bind('<Map>', self._adjust)
def destroy(self):
"""Destroy this widget and possibly its associated variable."""
try:
self._variable.trace_vdelete('w', self.__tracecb)
except AttributeError:
# widget has been destroyed already
pass
else:
del self._variable
Frame.destroy(self)
def _adjust(self, *args):
"""Adjust the label position according to the scale."""
def adjust_label():
self.update_idletasks() # "force" scale redraw
x, y = self.scale.coords()
if self._label_top:
y = self.scale.winfo_y() - self.label.winfo_reqheight()
else:
y = self.scale.winfo_reqheight() + self.label.winfo_reqheight()
self.label.place_configure(x=x, y=y)
from_ = _to_number(self.scale['from'])
to = _to_number(self.scale['to'])
if to < from_:
from_, to = to, from_
newval = self._variable.get()
if not from_ <= newval <= to:
# value outside range, set value back to the last valid one
self.value = self._last_valid
return
self._last_valid = newval
self.label['text'] = newval
self.after_idle(adjust_label)
def _get_value(self):
"""Return current scale value."""
return self._variable.get()
def _set_value(self, val):
"""Set new scale value."""
self._variable.set(val)
value = property(_get_value, _set_value)
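    # A minimal sketch of LabeledScale (illustrative; assumes a Tk root
    # window named `root`). The label tracks the scale and the current
    # value is exposed through the `value` property:
    #
    #     ls = LabeledScale(root, from_=0, to=100, compound='bottom')
    #     ls.pack(fill='x')
    #     ls.value = 42   # moves the scale and updates the label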
class OptionMenu(Menubutton):
"""Themed OptionMenu, based after tkinter's OptionMenu, which allows
the user to select a value from a menu."""
def __init__(self, master, variable, default=None, *values, **kwargs):
"""Construct a themed OptionMenu widget with master as the parent,
the resource textvariable set to variable, the initially selected
value specified by the default parameter, the menu values given by
*values and additional keywords.
WIDGET-SPECIFIC OPTIONS
style: stylename
Menubutton style.
direction: 'above', 'below', 'left', 'right', or 'flush'
Menubutton direction.
command: callback
A callback that will be invoked after selecting an item.
"""
kw = {'textvariable': variable, 'style': kwargs.pop('style', None),
'direction': kwargs.pop('direction', None)}
Menubutton.__init__(self, master, **kw)
self['menu'] = tkinter.Menu(self, tearoff=False)
self._variable = variable
self._callback = kwargs.pop('command', None)
if kwargs:
raise tkinter.TclError('unknown option -%s' % (
next(iter(kwargs.keys()))))
self.set_menu(default, *values)
def __getitem__(self, item):
if item == 'menu':
return self.nametowidget(Menubutton.__getitem__(self, item))
return Menubutton.__getitem__(self, item)
def set_menu(self, default=None, *values):
"""Build a new menu of radiobuttons with *values and optionally
a default value."""
menu = self['menu']
menu.delete(0, 'end')
for val in values:
menu.add_radiobutton(label=val,
command=tkinter._setit(self._variable, val, self._callback))
if default:
self._variable.set(default)
def destroy(self):
"""Destroy this widget and its associated variable."""
del self._variable
Menubutton.destroy(self)
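    # A minimal sketch of OptionMenu (illustrative; assumes a Tk root
    # window named `root`). The selection lands in the associated variable
    # and command fires after each pick:
    #
    #     var = tkinter.StringVar(root)
    #     om = OptionMenu(root, var, 'red', 'red', 'green', 'blue',
    #                     command=lambda v: print('picked', v))
    #     om.pack()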
=======
"""Ttk wrapper.
This module provides classes to allow using Tk themed widget set.
Ttk is based on a revised and enhanced version of
TIP #48 (http://tip.tcl.tk/48) specified style engine.
Its basic idea is to separate, to the extent possible, the code
implementing a widget's behavior from the code implementing its
appearance. Widget class bindings are primarily responsible for
maintaining the widget state and invoking callbacks, all aspects
of the widgets appearance lies at Themes.
"""
__version__ = "0.3.1"
__author__ = "Guilherme Polo <[email protected]>"
__all__ = ["Button", "Checkbutton", "Combobox", "Entry", "Frame", "Label",
"Labelframe", "LabelFrame", "Menubutton", "Notebook", "Panedwindow",
"PanedWindow", "Progressbar", "Radiobutton", "Scale", "Scrollbar",
"Separator", "Sizegrip", "Style", "Treeview",
# Extensions
"LabeledScale", "OptionMenu",
# functions
"tclobjs_to_py", "setup_master"]
import tkinter
from tkinter import _flatten, _join, _stringify, _splitdict
# Verify if Tk is new enough to not need the Tile package
_REQUIRE_TILE = True if tkinter.TkVersion < 8.5 else False
def _load_tile(master):
if _REQUIRE_TILE:
import os
tilelib = os.environ.get('TILE_LIBRARY')
if tilelib:
# append custom tile path to the list of directories that
# Tcl uses when attempting to resolve packages with the package
# command
master.tk.eval(
'global auto_path; '
'lappend auto_path {%s}' % tilelib)
master.tk.eval('package require tile') # TclError may be raised here
master._tile_loaded = True
def _format_optvalue(value, script=False):
"""Internal function."""
if script:
# if caller passes a Tcl script to tk.call, all the values need to
# be grouped into words (arguments to a command in Tcl dialect)
value = _stringify(value)
elif isinstance(value, (list, tuple)):
value = _join(value)
return value
def _format_optdict(optdict, script=False, ignore=None):
"""Formats optdict to a tuple to pass it to tk.call.
E.g. (script=False):
{'foreground': 'blue', 'padding': [1, 2, 3, 4]} returns:
('-foreground', 'blue', '-padding', '1 2 3 4')"""
opts = []
for opt, value in optdict.items():
if not ignore or opt not in ignore:
opts.append("-%s" % opt)
if value is not None:
opts.append(_format_optvalue(value, script))
return _flatten(opts)
def _mapdict_values(items):
# each value in mapdict is expected to be a sequence, where each item
# is another sequence containing a state (or several) and a value
# E.g. (script=False):
# [('active', 'selected', 'grey'), ('focus', [1, 2, 3, 4])]
# returns:
# ['active selected', 'grey', 'focus', [1, 2, 3, 4]]
opt_val = []
for *state, val in items:
# hacks for bakward compatibility
state[0] # raise IndexError if empty
if len(state) == 1:
# if it is empty (something that evaluates to False), then
# format it to Tcl code to denote the "normal" state
state = state[0] or ''
else:
# group multiple states
state = ' '.join(state) # raise TypeError if not str
opt_val.append(state)
if val is not None:
opt_val.append(val)
return opt_val
def _format_mapdict(mapdict, script=False):
"""Formats mapdict to pass it to tk.call.
E.g. (script=False):
{'expand': [('active', 'selected', 'grey'), ('focus', [1, 2, 3, 4])]}
returns:
('-expand', '{active selected} grey focus {1, 2, 3, 4}')"""
opts = []
for opt, value in mapdict.items():
opts.extend(("-%s" % opt,
_format_optvalue(_mapdict_values(value), script)))
return _flatten(opts)
def _format_elemcreate(etype, script=False, *args, **kw):
"""Formats args and kw according to the given element factory etype."""
spec = None
opts = ()
if etype in ("image", "vsapi"):
if etype == "image": # define an element based on an image
# first arg should be the default image name
iname = args[0]
# next args, if any, are statespec/value pairs which is almost
# a mapdict, but we just need the value
imagespec = _join(_mapdict_values(args[1:]))
spec = "%s %s" % (iname, imagespec)
else:
# define an element whose visual appearance is drawn using the
# Microsoft Visual Styles API which is responsible for the
# themed styles on Windows XP and Vista.
# Availability: Tk 8.6, Windows XP and Vista.
class_name, part_id = args[:2]
statemap = _join(_mapdict_values(args[2:]))
spec = "%s %s %s" % (class_name, part_id, statemap)
opts = _format_optdict(kw, script)
elif etype == "from": # clone an element
# it expects a themename and optionally an element to clone from,
# otherwise it will clone {} (empty element)
spec = args[0] # theme name
if len(args) > 1: # elementfrom specified
opts = (_format_optvalue(args[1], script),)
if script:
spec = '{%s}' % spec
opts = ' '.join(opts)
return spec, opts
def _format_layoutlist(layout, indent=0, indent_size=2):
"""Formats a layout list so we can pass the result to ttk::style
layout and ttk::style settings. Note that the layout doesn't has to
be a list necessarily.
E.g.:
[("Menubutton.background", None),
("Menubutton.button", {"children":
[("Menubutton.focus", {"children":
[("Menubutton.padding", {"children":
[("Menubutton.label", {"side": "left", "expand": 1})]
})]
})]
}),
("Menubutton.indicator", {"side": "right"})
]
returns:
Menubutton.background
Menubutton.button -children {
Menubutton.focus -children {
Menubutton.padding -children {
Menubutton.label -side left -expand 1
}
}
}
Menubutton.indicator -side right"""
script = []
for layout_elem in layout:
elem, opts = layout_elem
opts = opts or {}
fopts = ' '.join(_format_optdict(opts, True, ("children",)))
head = "%s%s%s" % (' ' * indent, elem, (" %s" % fopts) if fopts else '')
if "children" in opts:
script.append(head + " -children {")
indent += indent_size
newscript, indent = _format_layoutlist(opts['children'], indent,
indent_size)
script.append(newscript)
indent -= indent_size
script.append('%s}' % (' ' * indent))
else:
script.append(head)
return '\n'.join(script), indent
def _script_from_settings(settings):
"""Returns an appropriate script, based on settings, according to
theme_settings definition to be used by theme_settings and
theme_create."""
script = []
# a script will be generated according to settings passed, which
# will then be evaluated by Tcl
for name, opts in settings.items():
# will format specific keys according to Tcl code
if opts.get('configure'): # format 'configure'
s = ' '.join(_format_optdict(opts['configure'], True))
script.append("ttk::style configure %s %s;" % (name, s))
if opts.get('map'): # format 'map'
s = ' '.join(_format_mapdict(opts['map'], True))
script.append("ttk::style map %s %s;" % (name, s))
if 'layout' in opts: # format 'layout' which may be empty
if not opts['layout']:
s = 'null' # could be any other word, but this one makes sense
else:
s, _ = _format_layoutlist(opts['layout'])
script.append("ttk::style layout %s {\n%s\n}" % (name, s))
if opts.get('element create'): # format 'element create'
eopts = opts['element create']
etype = eopts[0]
# find where args end, and where kwargs start
argc = 1 # etype was the first one
while argc < len(eopts) and not hasattr(eopts[argc], 'items'):
argc += 1
elemargs = eopts[1:argc]
elemkw = eopts[argc] if argc < len(eopts) and eopts[argc] else {}
spec, opts = _format_elemcreate(etype, True, *elemargs, **elemkw)
script.append("ttk::style element create %s %s %s %s" % (
name, etype, spec, opts))
return '\n'.join(script)
def _list_from_statespec(stuple):
"""Construct a list from the given statespec tuple according to the
accepted statespec accepted by _format_mapdict."""
nval = []
for val in stuple:
typename = getattr(val, 'typename', None)
if typename is None:
nval.append(val)
else: # this is a Tcl object
val = str(val)
if typename == 'StateSpec':
val = val.split()
nval.append(val)
it = iter(nval)
return [_flatten(spec) for spec in zip(it, it)]
def _list_from_layouttuple(tk, ltuple):
"""Construct a list from the tuple returned by ttk::layout, this is
somewhat the reverse of _format_layoutlist."""
ltuple = tk.splitlist(ltuple)
res = []
indx = 0
while indx < len(ltuple):
name = ltuple[indx]
opts = {}
res.append((name, opts))
indx += 1
while indx < len(ltuple): # grab name's options
opt, val = ltuple[indx:indx + 2]
if not opt.startswith('-'): # found next name
break
opt = opt[1:] # remove the '-' from the option
indx += 2
if opt == 'children':
val = _list_from_layouttuple(tk, val)
opts[opt] = val
return res
def _val_or_dict(tk, options, *args):
"""Format options then call Tk command with args and options and return
the appropriate result.
If no option is specified, a dict is returned. If an option is
specified with the None value, the value for that option is returned.
Otherwise, the function just sets the passed options and the caller
shouldn't be expecting a return value anyway."""
options = _format_optdict(options)
res = tk.call(*(args + options))
if len(options) % 2: # option specified without a value, return its value
return res
return _splitdict(tk, res, conv=_tclobj_to_py)
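# Usage sketch (illustrative comments only) of the three calling modes:
#
#   _val_or_dict(tk, {}, "ttk::style", "configure", "TButton")
#       returns a dict of all option/value pairs;
#   _val_or_dict(tk, {"padding": None}, "ttk::style", "configure", "TButton")
#       returns the current value of -padding;
#   _val_or_dict(tk, {"padding": 5}, "ttk::style", "configure", "TButton")
#       sets the option; the return value is not meaningful.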
def _convert_stringval(value):
"""Converts a value to, hopefully, a more appropriate Python object."""
value = str(value)
try:
value = int(value)
except (ValueError, TypeError):
pass
return value
def _to_number(x):
if isinstance(x, str):
if '.' in x:
x = float(x)
else:
x = int(x)
return x
def _tclobj_to_py(val):
"""Return value converted from Tcl object to Python object."""
if val and hasattr(val, '__len__') and not isinstance(val, str):
if getattr(val[0], 'typename', None) == 'StateSpec':
val = _list_from_statespec(val)
else:
val = list(map(_convert_stringval, val))
elif hasattr(val, 'typename'): # some other (single) Tcl object
val = _convert_stringval(val)
return val
def tclobjs_to_py(adict):
"""Returns adict with its values converted from Tcl objects to Python
objects."""
for opt, val in adict.items():
adict[opt] = _tclobj_to_py(val)
return adict
def setup_master(master=None):
"""If master is not None, itself is returned. If master is None,
the default master is returned if there is one, otherwise a new
master is created and returned.
If use of the default root is not allowed and master is None,
a RuntimeError is raised."""
if master is None:
if tkinter._support_default_root:
master = tkinter._default_root or tkinter.Tk()
else:
raise RuntimeError(
"No master specified and tkinter is "
"configured to not support default root")
return master
class Style(object):
"""Manipulate style database."""
_name = "ttk::style"
def __init__(self, master=None):
master = setup_master(master)
if not getattr(master, '_tile_loaded', False):
# Load tile now, if needed
_load_tile(master)
self.master = master
self.tk = self.master.tk
def configure(self, style, query_opt=None, **kw):
"""Query or sets the default value of the specified option(s) in
style.
Each key in kw is an option and each value is either a string or
a sequence identifying the value for that option."""
if query_opt is not None:
kw[query_opt] = None
return _val_or_dict(self.tk, kw, self._name, "configure", style)
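# Example (a usage sketch; assumes an existing Tk root named `root`):
#
#   style = Style(root)
#   style.configure("TButton", padding=6, relief="flat")   # set options
#   style.configure("TButton", "padding")                  # query one option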
def map(self, style, query_opt=None, **kw):
"""Query or sets dynamic values of the specified option(s) in
style.
Each key in kw is an option and each value should be a list or a
tuple (usually) containing statespecs grouped in tuples, or list,
or something else of your preference. A statespec is compound of
one or more states and then a value."""
if query_opt is not None:
return _list_from_statespec(self.tk.splitlist(
self.tk.call(self._name, "map", style, '-%s' % query_opt)))
return _splitdict(
self.tk,
self.tk.call(self._name, "map", style, *_format_mapdict(kw)),
conv=_tclobj_to_py)
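# Example (a usage sketch; "TButton" is a standard ttk style and `root`
# is assumed to be an existing Tk root):
#
#   style = Style(root)
#   style.map("TButton",
#             foreground=[("pressed", "red"), ("active", "blue")],
#             background=[("pressed", "!disabled", "black")])
#   style.map("TButton", "foreground")   # -> list of (state(s), value) specs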
def lookup(self, style, option, state=None, default=None):
"""Returns the value specified for option in style.
If state is specified it is expected to be a sequence of one
or more states. If the default argument is set, it is used as
a fallback value in case no specification for option is found."""
state = ' '.join(state) if state else ''
return self.tk.call(self._name, "lookup", style, '-%s' % option,
state, default)
def layout(self, style, layoutspec=None):
"""Define the widget layout for given style. If layoutspec is
omitted, return the layout specification for given style.
layoutspec is expected to be a list, or, to "turn off" that
style, an object other than None that evaluates to False.
If it is a list (or other sequence), each item should be
a tuple where the first item is the layout name and the second item
should have the format described below:
LAYOUTS
A layout can contain the value None, if it takes no options, or
a dict of options specifying how to arrange the element.
The layout mechanism uses a simplified version of the pack
geometry manager: given an initial cavity, each element is
allocated a parcel. Valid options/values are:
side: whichside
Specifies which side of the cavity to place the
element; one of top, right, bottom or left. If
omitted, the element occupies the entire cavity.
sticky: nswe
Specifies where the element is placed inside its
allocated parcel.
children: [sublayout... ]
Specifies a list of elements to place inside the
element. Each element is a tuple (or other sequence)
where the first item is the layout name, and the other
is a LAYOUT."""
lspec = None
if layoutspec:
lspec = _format_layoutlist(layoutspec)[0]
elif layoutspec is not None: # will disable the layout ({}, '', etc)
lspec = "null" # could be any other word, but this may make sense
# when calling layout(style) later
return _list_from_layouttuple(self.tk,
self.tk.call(self._name, "layout", style, lspec))
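# Example (a usage sketch): querying, then defining, a layout. The style
# name "My.TButton" is a placeholder; `root` is an assumed Tk root.
#
#   style = Style(root)
#   current = style.layout("TButton")     # list of (name, options) pairs
#   style.layout("My.TButton",
#                [("Button.border", {"sticky": "nswe", "children":
#                    [("Button.label", {"sticky": "nswe"})]})])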
def element_create(self, elementname, etype, *args, **kw):
"""Create a new element in the current theme of given etype."""
spec, opts = _format_elemcreate(etype, False, *args, **kw)
self.tk.call(self._name, "element", "create", elementname, etype,
spec, *opts)
def element_names(self):
"""Returns the list of elements defined in the current theme."""
return self.tk.splitlist(self.tk.call(self._name, "element", "names"))
def element_options(self, elementname):
"""Return the list of elementname's options."""
return self.tk.splitlist(self.tk.call(self._name, "element", "options", elementname))
def theme_create(self, themename, parent=None, settings=None):
"""Creates a new theme.
It is an error if themename already exists. If parent is
specified, the new theme will inherit styles, elements and
layouts from the specified parent theme. If settings are present,
they are expected to have the same syntax used for theme_settings."""
script = _script_from_settings(settings) if settings else ''
if parent:
self.tk.call(self._name, "theme", "create", themename,
"-parent", parent, "-settings", script)
else:
self.tk.call(self._name, "theme", "create", themename,
"-settings", script)
def theme_settings(self, themename, settings):
"""Temporarily sets the current theme to themename, apply specified
settings and then restore the previous theme.
Each key in settings is a style and each value may contain the
keys 'configure', 'map', 'layout' and 'element create' and they
are expected to have the same format as specified by the methods
configure, map, layout and element_create respectively."""
script = _script_from_settings(settings)
self.tk.call(self._name, "theme", "settings", themename, script)
def theme_names(self):
"""Returns a list of all known themes."""
return self.tk.splitlist(self.tk.call(self._name, "theme", "names"))
def theme_use(self, themename=None):
"""If themename is None, returns the theme in use, otherwise, set
the current theme to themename, refreshes all widgets and emits
a <<ThemeChanged>> event."""
if themename is None:
# Starting on Tk 8.6, checking this global is no longer needed
# since it allows doing self.tk.call(self._name, "theme", "use")
return self.tk.eval("return $ttk::currentTheme")
# using "ttk::setTheme" instead of "ttk::style theme use" causes
# the variable currentTheme to be updated, also, ttk::setTheme calls
# "ttk::style theme use" in order to change theme.
self.tk.call("ttk::setTheme", themename)
class Widget(tkinter.Widget):
"""Base class for Tk themed widgets."""
def __init__(self, master, widgetname, kw=None):
"""Constructs a Ttk Widget with the parent master.
STANDARD OPTIONS
class, cursor, takefocus, style
SCROLLABLE WIDGET OPTIONS
xscrollcommand, yscrollcommand
LABEL WIDGET OPTIONS
text, textvariable, underline, image, compound, width
WIDGET STATES
active, disabled, focus, pressed, selected, background,
readonly, alternate, invalid
"""
master = setup_master(master)
if not getattr(master, '_tile_loaded', False):
# Load tile now, if needed
_load_tile(master)
tkinter.Widget.__init__(self, master, widgetname, kw=kw)
def identify(self, x, y):
"""Returns the name of the element at position x, y, or the empty
string if the point does not lie within any element.
x and y are pixel coordinates relative to the widget."""
return self.tk.call(self._w, "identify", x, y)
def instate(self, statespec, callback=None, *args, **kw):
"""Test the widget's state.
If callback is not specified, returns True if the widget state
matches statespec and False otherwise. If callback is specified,
then it will be invoked with *args, **kw if the widget state
matches statespec. statespec is expected to be a sequence."""
ret = self.tk.getboolean(
self.tk.call(self._w, "instate", ' '.join(statespec)))
if ret and callback:
return callback(*args, **kw)
return bool(ret)
def state(self, statespec=None):
"""Modify or inquire widget state.
Widget state is returned if statespec is None, otherwise it is
set according to the statespec flags and then a new state spec
is returned indicating which flags were changed. statespec is
expected to be a sequence."""
if statespec is not None:
statespec = ' '.join(statespec)
return self.tk.splitlist(str(self.tk.call(self._w, "state", statespec)))
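# Example (a usage sketch; `root` is an assumed Tk root): using state and
# instate on any themed widget.
#
#   btn = Button(root, text="Quit")
#   btn.state(["disabled"])                 # set the disabled flag
#   btn.instate(["disabled"])               # -> True
#   btn.instate(["!disabled"], print, "enabled")  # called only if enabled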
class Button(Widget):
"""Ttk Button widget, displays a textual label and/or image, and
evaluates a command when pressed."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Button widget with the parent master.
STANDARD OPTIONS
class, compound, cursor, image, state, style, takefocus,
text, textvariable, underline, width
WIDGET-SPECIFIC OPTIONS
command, default, width
"""
Widget.__init__(self, master, "ttk::button", kw)
def invoke(self):
"""Invokes the command associated with the button."""
return self.tk.call(self._w, "invoke")
class Checkbutton(Widget):
"""Ttk Checkbutton widget which is either in on- or off-state."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Checkbutton widget with the parent master.
STANDARD OPTIONS
class, compound, cursor, image, state, style, takefocus,
text, textvariable, underline, width
WIDGET-SPECIFIC OPTIONS
command, offvalue, onvalue, variable
"""
Widget.__init__(self, master, "ttk::checkbutton", kw)
def invoke(self):
"""Toggles between the selected and deselected states and
invokes the associated command. If the widget is currently
selected, sets the option variable to the offvalue option
and deselects the widget; otherwise, sets the option variable
to the option onvalue.
Returns the result of the associated command."""
return self.tk.call(self._w, "invoke")
class Entry(Widget, tkinter.Entry):
"""Ttk Entry widget displays a one-line text string and allows that
string to be edited by the user."""
def __init__(self, master=None, widget=None, **kw):
"""Constructs a Ttk Entry widget with the parent master.
STANDARD OPTIONS
class, cursor, style, takefocus, xscrollcommand
WIDGET-SPECIFIC OPTIONS
exportselection, invalidcommand, justify, show, state,
textvariable, validate, validatecommand, width
VALIDATION MODES
none, key, focus, focusin, focusout, all
"""
Widget.__init__(self, master, widget or "ttk::entry", kw)
def bbox(self, index):
"""Return a tuple of (x, y, width, height) which describes the
bounding box of the character given by index."""
return self._getints(self.tk.call(self._w, "bbox", index))
def identify(self, x, y):
"""Returns the name of the element at position x, y, or the
empty string if the coordinates are outside the window."""
return self.tk.call(self._w, "identify", x, y)
def validate(self):
"""Force revalidation, independent of the conditions specified
by the validate option. Returns False if validation fails, True
if it succeeds. Sets or clears the invalid state accordingly."""
return bool(self.tk.getboolean(self.tk.call(self._w, "validate")))
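# Example (a usage sketch; `root` is an assumed Tk root): hooking up
# per-keystroke validation on a Ttk Entry. %P is the standard Tk
# substitution for the prospective value.
#
#   def only_digits(text):
#       return text.isdigit() or text == ""
#   vcmd = (root.register(only_digits), "%P")
#   entry = Entry(root, validate="key", validatecommand=vcmd)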
class Combobox(Entry):
"""Ttk Combobox widget combines a text field with a pop-down list of
values."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Combobox widget with the parent master.
STANDARD OPTIONS
class, cursor, style, takefocus
WIDGET-SPECIFIC OPTIONS
exportselection, justify, height, postcommand, state,
textvariable, values, width
"""
Entry.__init__(self, master, "ttk::combobox", **kw)
def current(self, newindex=None):
"""If newindex is supplied, sets the combobox value to the
element at position newindex in the list of values. Otherwise,
returns the index of the current value in the list of values
or -1 if the current value does not appear in the list."""
if newindex is None:
return self.tk.getint(self.tk.call(self._w, "current"))
return self.tk.call(self._w, "current", newindex)
def set(self, value):
"""Sets the value of the combobox to value."""
self.tk.call(self._w, "set", value)
class Frame(Widget):
"""Ttk Frame widget is a container, used to group other widgets
together."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Frame with parent master.
STANDARD OPTIONS
class, cursor, style, takefocus
WIDGET-SPECIFIC OPTIONS
borderwidth, relief, padding, width, height
"""
Widget.__init__(self, master, "ttk::frame", kw)
class Label(Widget):
"""Ttk Label widget displays a textual label and/or image."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Label with parent master.
STANDARD OPTIONS
class, compound, cursor, image, style, takefocus, text,
textvariable, underline, width
WIDGET-SPECIFIC OPTIONS
anchor, background, font, foreground, justify, padding,
relief, text, wraplength
"""
Widget.__init__(self, master, "ttk::label", kw)
class Labelframe(Widget):
"""Ttk Labelframe widget is a container used to group other widgets
together. It has an optional label, which may be a plain text string
or another widget."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Labelframe with parent master.
STANDARD OPTIONS
class, cursor, style, takefocus
WIDGET-SPECIFIC OPTIONS
labelanchor, text, underline, padding, labelwidget, width,
height
"""
Widget.__init__(self, master, "ttk::labelframe", kw)
LabelFrame = Labelframe # tkinter name compatibility
class Menubutton(Widget):
"""Ttk Menubutton widget displays a textual label and/or image, and
displays a menu when pressed."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Menubutton with parent master.
STANDARD OPTIONS
class, compound, cursor, image, state, style, takefocus,
text, textvariable, underline, width
WIDGET-SPECIFIC OPTIONS
direction, menu
"""
Widget.__init__(self, master, "ttk::menubutton", kw)
class Notebook(Widget):
"""Ttk Notebook widget manages a collection of windows and displays
a single one at a time. Each child window is associated with a tab,
which the user may select to change the currently-displayed window."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Notebook with parent master.
STANDARD OPTIONS
class, cursor, style, takefocus
WIDGET-SPECIFIC OPTIONS
height, padding, width
TAB OPTIONS
state, sticky, padding, text, image, compound, underline
TAB IDENTIFIERS (tab_id)
The tab_id argument found in several methods may take any of
the following forms:
* An integer between zero and the number of tabs
* The name of a child window
* A positional specification of the form "@x,y", which
defines the tab
* The string "current", which identifies the
currently-selected tab
* The string "end", which returns the number of tabs (only
valid for method index)
"""
Widget.__init__(self, master, "ttk::notebook", kw)
def add(self, child, **kw):
"""Adds a new tab to the notebook.
If child is currently managed by the notebook but hidden, it is
restored to its previous position."""
self.tk.call(self._w, "add", child, *(_format_optdict(kw)))
def forget(self, tab_id):
"""Removes the tab specified by tab_id, unmaps and unmanages the
associated window."""
self.tk.call(self._w, "forget", tab_id)
def hide(self, tab_id):
"""Hides the tab specified by tab_id.
The tab will not be displayed, but the associated window remains
managed by the notebook and its configuration remembered. Hidden
tabs may be restored with the add command."""
self.tk.call(self._w, "hide", tab_id)
def identify(self, x, y):
"""Returns the name of the tab element at position x, y, or the
empty string if none."""
return self.tk.call(self._w, "identify", x, y)
def index(self, tab_id):
"""Returns the numeric index of the tab specified by tab_id, or
the total number of tabs if tab_id is the string "end"."""
return self.tk.getint(self.tk.call(self._w, "index", tab_id))
def insert(self, pos, child, **kw):
"""Inserts a pane at the specified position.
pos is either the string end, an integer index, or the name of
a managed child. If child is already managed by the notebook,
moves it to the specified position."""
self.tk.call(self._w, "insert", pos, child, *(_format_optdict(kw)))
def select(self, tab_id=None):
"""Selects the specified tab.
The associated child window will be displayed, and the
previously-selected window (if different) is unmapped. If tab_id
is omitted, returns the widget name of the currently selected
pane."""
return self.tk.call(self._w, "select", tab_id)
def tab(self, tab_id, option=None, **kw):
"""Query or modify the options of the specific tab_id.
If kw is not given, returns a dict of the tab option values. If option
is specified, returns the value of that option. Otherwise, sets the
options to the corresponding values."""
if option is not None:
kw[option] = None
return _val_or_dict(self.tk, kw, self._w, "tab", tab_id)
def tabs(self):
"""Returns a list of windows managed by the notebook."""
return self.tk.splitlist(self.tk.call(self._w, "tabs") or ())
def enable_traversal(self):
"""Enable keyboard traversal for a toplevel window containing
this notebook.
This will extend the bindings for the toplevel window containing
this notebook as follows:
Control-Tab: selects the tab following the currently selected
one
Shift-Control-Tab: selects the tab preceding the currently
selected one
Alt-K: where K is the mnemonic (underlined) character of any
tab, will select that tab.
Multiple notebooks in a single toplevel may be enabled for
traversal, including nested notebooks. However, notebook traversal
only works properly if all panes are direct children of the
notebook."""
# The only, and good, difference I see is about mnemonics, which works
# after calling this method. Control-Tab and Shift-Control-Tab always
# work (here at least).
self.tk.call("ttk::notebook::enableTraversal", self._w)
class Panedwindow(Widget, tkinter.PanedWindow):
"""Ttk Panedwindow widget displays a number of subwindows, stacked
either vertically or horizontally."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Panedwindow with parent master.
STANDARD OPTIONS
class, cursor, style, takefocus
WIDGET-SPECIFIC OPTIONS
orient, width, height
PANE OPTIONS
weight
"""
Widget.__init__(self, master, "ttk::panedwindow", kw)
forget = tkinter.PanedWindow.forget # overrides Pack.forget
def insert(self, pos, child, **kw):
"""Inserts a pane at the specified positions.
pos is either the string end, and integer index, or the name
of a child. If child is already managed by the paned window,
moves it to the specified position."""
self.tk.call(self._w, "insert", pos, child, *(_format_optdict(kw)))
def pane(self, pane, option=None, **kw):
"""Query or modify the options of the specified pane.
pane is either an integer index or the name of a managed subwindow.
If kw is not given, returns a dict of the pane option values. If
option is specified then the value for that option is returned.
Otherwise, sets the options to the corresponding values."""
if option is not None:
kw[option] = None
return _val_or_dict(self.tk, kw, self._w, "pane", pane)
def sashpos(self, index, newpos=None):
"""If newpos is specified, sets the position of sash number index.
May adjust the positions of adjacent sashes to ensure that
positions are monotonically increasing. Sash positions are further
constrained to be between 0 and the total size of the widget.
Returns the new position of sash number index."""
return self.tk.getint(self.tk.call(self._w, "sashpos", index, newpos))
PanedWindow = Panedwindow # tkinter name compatibility
class Progressbar(Widget):
"""Ttk Progressbar widget shows the status of a long-running
operation. It can operate in two modes: determinate mode shows the
amount completed relative to the total amount of work to be done, and
indeterminate mode provides an animated display to let the user know
that something is happening."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Progressbar with parent master.
STANDARD OPTIONS
class, cursor, style, takefocus
WIDGET-SPECIFIC OPTIONS
orient, length, mode, maximum, value, variable, phase
"""
Widget.__init__(self, master, "ttk::progressbar", kw)
def start(self, interval=None):
"""Begin autoincrement mode: schedules a recurring timer event
that calls method step every interval milliseconds.
interval defaults to 50 milliseconds (20 steps/second) if omitted."""
self.tk.call(self._w, "start", interval)
def step(self, amount=None):
"""Increments the value option by amount.
amount defaults to 1.0 if omitted."""
self.tk.call(self._w, "step", amount)
def stop(self):
"""Stop autoincrement mode: cancels any recurring timer event
initiated by start."""
self.tk.call(self._w, "stop")
class Radiobutton(Widget):
"""Ttk Radiobutton widgets are used in groups to show or change a
set of mutually-exclusive options."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Radiobutton with parent master.
STANDARD OPTIONS
class, compound, cursor, image, state, style, takefocus,
text, textvariable, underline, width
WIDGET-SPECIFIC OPTIONS
command, value, variable
"""
Widget.__init__(self, master, "ttk::radiobutton", kw)
def invoke(self):
"""Sets the option variable to the option value, selects the
widget, and invokes the associated command.
Returns the result of the command, or an empty string if
no command is specified."""
return self.tk.call(self._w, "invoke")
class Scale(Widget, tkinter.Scale):
"""Ttk Scale widget is typically used to control the numeric value of
a linked variable that varies uniformly over some range."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Scale with parent master.
STANDARD OPTIONS
class, cursor, style, takefocus
WIDGET-SPECIFIC OPTIONS
command, from, length, orient, to, value, variable
"""
Widget.__init__(self, master, "ttk::scale", kw)
def configure(self, cnf=None, **kw):
"""Modify or query scale options.
Setting a value for any of the "from", "from_" or "to" options
generates a <<RangeChanged>> event."""
if cnf:
kw.update(cnf)
Widget.configure(self, **kw)
if any(['from' in kw, 'from_' in kw, 'to' in kw]):
self.event_generate('<<RangeChanged>>')
def get(self, x=None, y=None):
"""Get the current value of the value option, or the value
corresponding to the coordinates x, y if they are specified.
x and y are pixel coordinates relative to the scale widget
origin."""
return self.tk.call(self._w, 'get', x, y)
class Scrollbar(Widget, tkinter.Scrollbar):
"""Ttk Scrollbar controls the viewport of a scrollable widget."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Scrollbar with parent master.
STANDARD OPTIONS
class, cursor, style, takefocus
WIDGET-SPECIFIC OPTIONS
command, orient
"""
Widget.__init__(self, master, "ttk::scrollbar", kw)
class Separator(Widget):
"""Ttk Separator widget displays a horizontal or vertical separator
bar."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Separator with parent master.
STANDARD OPTIONS
class, cursor, style, takefocus
WIDGET-SPECIFIC OPTIONS
orient
"""
Widget.__init__(self, master, "ttk::separator", kw)
class Sizegrip(Widget):
"""Ttk Sizegrip allows the user to resize the containing toplevel
window by pressing and dragging the grip."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Sizegrip with parent master.
STANDARD OPTIONS
class, cursor, state, style, takefocus
"""
Widget.__init__(self, master, "ttk::sizegrip", kw)
class Treeview(Widget, tkinter.XView, tkinter.YView):
"""Ttk Treeview widget displays a hierarchical collection of items.
Each item has a textual label, an optional image, and an optional list
of data values. The data values are displayed in successive columns
after the tree label."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Treeview with parent master.
STANDARD OPTIONS
class, cursor, style, takefocus, xscrollcommand,
yscrollcommand
WIDGET-SPECIFIC OPTIONS
columns, displaycolumns, height, padding, selectmode, show
ITEM OPTIONS
text, image, values, open, tags
TAG OPTIONS
foreground, background, font, image
"""
Widget.__init__(self, master, "ttk::treeview", kw)
def bbox(self, item, column=None):
"""Returns the bounding box (relative to the treeview widget's
window) of the specified item in the form x y width height.
If column is specified, returns the bounding box of that cell.
If the item is not visible (i.e., if it is a descendant of a
closed item or is scrolled offscreen), returns an empty string."""
return self._getints(self.tk.call(self._w, "bbox", item, column)) or ''
def get_children(self, item=None):
"""Returns a tuple of children belonging to item.
If item is not specified, returns root children."""
return self.tk.splitlist(
self.tk.call(self._w, "children", item or '') or ())
def set_children(self, item, *newchildren):
"""Replaces item's child with newchildren.
Children present in item that are not present in newchildren
are detached from tree. No items in newchildren may be an
ancestor of item."""
self.tk.call(self._w, "children", item, newchildren)
def column(self, column, option=None, **kw):
"""Query or modify the options for the specified column.
If kw is not given, returns a dict of the column option values. If
option is specified then the value for that option is returned.
Otherwise, sets the options to the corresponding values."""
if option is not None:
kw[option] = None
return _val_or_dict(self.tk, kw, self._w, "column", column)
def delete(self, *items):
"""Delete all specified items and all their descendants. The root
item may not be deleted."""
self.tk.call(self._w, "delete", items)
def detach(self, *items):
"""Unlinks all of the specified items from the tree.
The items and all of their descendants are still present, and may
be reinserted at another point in the tree, but will not be
displayed. The root item may not be detached."""
self.tk.call(self._w, "detach", items)
def exists(self, item):
"""Returns True if the specified item is present in the tree,
False otherwise."""
return bool(self.tk.getboolean(self.tk.call(self._w, "exists", item)))
def focus(self, item=None):
"""If item is specified, sets the focus item to item. Otherwise,
returns the current focus item, or '' if there is none."""
return self.tk.call(self._w, "focus", item)
def heading(self, column, option=None, **kw):
"""Query or modify the heading options for the specified column.
If kw is not given, returns a dict of the heading option values. If
option is specified then the value for that option is returned.
Otherwise, sets the options to the corresponding values.
Valid options/values are:
text: text
The text to display in the column heading
image: image_name
Specifies an image to display to the right of the column
heading
anchor: anchor
Specifies how the heading text should be aligned. One of
the standard Tk anchor values
command: callback
A callback to be invoked when the heading label is
pressed.
To configure the tree column heading, call this with column = "#0" """
cmd = kw.get('command')
if cmd and not isinstance(cmd, str):
# callback not registered yet, do it now
kw['command'] = self.master.register(cmd, self._substitute)
if option is not None:
kw[option] = None
return _val_or_dict(self.tk, kw, self._w, 'heading', column)
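# Example (a usage sketch; `root` is an assumed Tk root): a clickable
# column heading with a callback.
#
#   tree = Treeview(root, columns=("size",))
#   tree.heading("#0", text="Name")
#   tree.heading("size", text="Size",
#                command=lambda: print("size heading pressed"))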
def identify(self, component, x, y):
"""Returns a description of the specified component under the
point given by x and y, or the empty string if no such component
is present at that position."""
return self.tk.call(self._w, "identify", component, x, y)
def identify_row(self, y):
"""Returns the item ID of the item at position y."""
return self.identify("row", 0, y)
def identify_column(self, x):
"""Returns the data column identifier of the cell at position x.
The tree column has ID #0."""
return self.identify("column", x, 0)
def identify_region(self, x, y):
"""Returns one of:
heading: Tree heading area.
separator: Space between two columns headings;
tree: The tree area.
cell: A data cell.
* Availability: Tk 8.6"""
return self.identify("region", x, y)
def identify_element(self, x, y):
"""Returns the element at position x, y.
* Availability: Tk 8.6"""
return self.identify("element", x, y)
def index(self, item):
"""Returns the integer index of item within its parent's list
of children."""
return self.tk.getint(self.tk.call(self._w, "index", item))
def insert(self, parent, index, iid=None, **kw):
"""Creates a new item and return the item identifier of the newly
created item.
parent is the item ID of the parent item, or the empty string
to create a new top-level item. index is an integer, or the value
end, specifying where in the list of parent's children to insert
the new item. If index is less than or equal to zero, the new node
is inserted at the beginning; if index is greater than or equal to
the current number of children, it is inserted at the end. If iid
is specified, it is used as the item identifier, iid must not
already exist in the tree. Otherwise, a new unique identifier
is generated."""
opts = _format_optdict(kw)
if iid:
res = self.tk.call(self._w, "insert", parent, index,
"-id", iid, *opts)
else:
res = self.tk.call(self._w, "insert", parent, index, *opts)
return res
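# Example (a usage sketch; `root` is an assumed Tk root): building a
# small hierarchy.
#
#   tree = Treeview(root)
#   parent = tree.insert("", "end", text="Folder", open=True)
#   tree.insert(parent, "end", text="File A")
#   tree.insert(parent, 0, text="File B")   # becomes the first child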
def item(self, item, option=None, **kw):
"""Query or modify the options for the specified item.
If no options are given, a dict with options/values for the item
is returned. If option is specified then the value for that option
is returned. Otherwise, sets the options to the corresponding
values as given by kw."""
if option is not None:
kw[option] = None
return _val_or_dict(self.tk, kw, self._w, "item", item)
def move(self, item, parent, index):
"""Moves item to position index in parent's list of children.
It is illegal to move an item under one of its descendants. If
index is less than or equal to zero, item is moved to the
beginning; if greater than or equal to the number of children,
it is moved to the end. If item was detached it is reattached."""
self.tk.call(self._w, "move", item, parent, index)
reattach = move # A sensible method name for reattaching detached items
def next(self, item):
"""Returns the identifier of item's next sibling, or '' if item
is the last child of its parent."""
return self.tk.call(self._w, "next", item)
def parent(self, item):
"""Returns the ID of the parent of item, or '' if item is at the
top level of the hierarchy."""
return self.tk.call(self._w, "parent", item)
def prev(self, item):
"""Returns the identifier of item's previous sibling, or '' if
item is the first child of its parent."""
return self.tk.call(self._w, "prev", item)
def see(self, item):
"""Ensure that item is visible.
Sets all of item's ancestors open option to True, and scrolls
the widget if necessary so that item is within the visible
portion of the tree."""
self.tk.call(self._w, "see", item)
def selection(self, selop=None, items=None):
"""If selop is not specified, returns selected items."""
return self.tk.call(self._w, "selection", selop, items)
def selection_set(self, items):
"""items becomes the new selection."""
self.selection("set", items)
def selection_add(self, items):
"""Add items to the selection."""
self.selection("add", items)
def selection_remove(self, items):
"""Remove items from the selection."""
self.selection("remove", items)
def selection_toggle(self, items):
"""Toggle the selection state of each item in items."""
self.selection("toggle", items)
def set(self, item, column=None, value=None):
"""Query or set the value of given item.
With one argument, return a dictionary of column/value pairs
for the specified item. With two arguments, return the current
value of the specified column. With three arguments, set the
value of given column in given item to the specified value."""
res = self.tk.call(self._w, "set", item, column, value)
if column is None and value is None:
return _splitdict(self.tk, res,
cut_minus=False, conv=_tclobj_to_py)
else:
return res
def tag_bind(self, tagname, sequence=None, callback=None):
"""Bind a callback for the given event sequence to the tag tagname.
When an event is delivered to an item, the callbacks for each
of the item's tags option are called."""
self._bind((self._w, "tag", "bind", tagname), sequence, callback, add=0)
def tag_configure(self, tagname, option=None, **kw):
"""Query or modify the options for the specified tagname.
If kw is not given, returns a dict of the option settings for tagname.
If option is specified, returns the value for that option for the
specified tagname. Otherwise, sets the options to the corresponding
values for the given tagname."""
if option is not None:
kw[option] = None
return _val_or_dict(self.tk, kw, self._w, "tag", "configure",
tagname)
def tag_has(self, tagname, item=None):
"""If item is specified, returns 1 or 0 depending on whether the
specified item has the given tagname. Otherwise, returns a list of
all items which have the specified tag.
* Availability: Tk 8.6"""
if item is None:
return self.tk.splitlist(
self.tk.call(self._w, "tag", "has", tagname))
else:
return self.tk.getboolean(
self.tk.call(self._w, "tag", "has", tagname, item))
# Extensions
class LabeledScale(Frame):
"""A Ttk Scale widget with a Ttk Label widget indicating its
current value.
The Ttk Scale can be accessed through instance.scale, and Ttk Label
can be accessed through instance.label"""
def __init__(self, master=None, variable=None, from_=0, to=10, **kw):
"""Construct an horizontal LabeledScale with parent master, a
variable to be associated with the Ttk Scale widget and its range.
If variable is not specified, a tkinter.IntVar is created.
WIDGET-SPECIFIC OPTIONS
compound: 'top' or 'bottom'
Specifies how to display the label relative to the scale.
Defaults to 'top'.
"""
self._label_top = kw.pop('compound', 'top') == 'top'
Frame.__init__(self, master, **kw)
self._variable = variable or tkinter.IntVar(master)
self._variable.set(from_)
self._last_valid = from_
self.label = Label(self)
self.scale = Scale(self, variable=self._variable, from_=from_, to=to)
self.scale.bind('<<RangeChanged>>', self._adjust)
# position scale and label according to the compound option
scale_side = 'bottom' if self._label_top else 'top'
label_side = 'top' if scale_side == 'bottom' else 'bottom'
self.scale.pack(side=scale_side, fill='x')
tmp = Label(self).pack(side=label_side) # place holder
self.label.place(anchor='n' if label_side == 'top' else 's')
# update the label as scale or variable changes
self.__tracecb = self._variable.trace_variable('w', self._adjust)
self.bind('<Configure>', self._adjust)
self.bind('<Map>', self._adjust)
def destroy(self):
"""Destroy this widget and possibly its associated variable."""
try:
self._variable.trace_vdelete('w', self.__tracecb)
except AttributeError:
# widget has been destroyed already
pass
else:
del self._variable
Frame.destroy(self)
def _adjust(self, *args):
"""Adjust the label position according to the scale."""
def adjust_label():
self.update_idletasks() # "force" scale redraw
x, y = self.scale.coords()
if self._label_top:
y = self.scale.winfo_y() - self.label.winfo_reqheight()
else:
y = self.scale.winfo_reqheight() + self.label.winfo_reqheight()
self.label.place_configure(x=x, y=y)
from_ = _to_number(self.scale['from'])
to = _to_number(self.scale['to'])
if to < from_:
from_, to = to, from_
newval = self._variable.get()
if not from_ <= newval <= to:
# value outside range, set value back to the last valid one
self.value = self._last_valid
return
self._last_valid = newval
self.label['text'] = newval
self.after_idle(adjust_label)
def _get_value(self):
"""Return current scale value."""
return self._variable.get()
def _set_value(self, val):
"""Set new scale value."""
self._variable.set(val)
value = property(_get_value, _set_value)
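# Example (a usage sketch; `root` is an assumed Tk root): a LabeledScale
# bound to an IntVar.
#
#   var = tkinter.IntVar(root)
#   ls = LabeledScale(root, variable=var, from_=0, to=100, compound="bottom")
#   ls.pack(fill="x")
#   ls.value = 50            # moves the scale and updates the label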
class OptionMenu(Menubutton):
"""Themed OptionMenu, based after tkinter's OptionMenu, which allows
the user to select a value from a menu."""
def __init__(self, master, variable, default=None, *values, **kwargs):
"""Construct a themed OptionMenu widget with master as the parent,
the resource textvariable set to variable, the initially selected
value specified by the default parameter, the menu values given by
*values and additional keywords.
WIDGET-SPECIFIC OPTIONS
style: stylename
Menubutton style.
direction: 'above', 'below', 'left', 'right', or 'flush'
Menubutton direction.
command: callback
A callback that will be invoked after selecting an item.
"""
kw = {'textvariable': variable, 'style': kwargs.pop('style', None),
'direction': kwargs.pop('direction', None)}
Menubutton.__init__(self, master, **kw)
self['menu'] = tkinter.Menu(self, tearoff=False)
self._variable = variable
self._callback = kwargs.pop('command', None)
if kwargs:
raise tkinter.TclError('unknown option -%s' % (
next(iter(kwargs.keys()))))
self.set_menu(default, *values)
def __getitem__(self, item):
if item == 'menu':
return self.nametowidget(Menubutton.__getitem__(self, item))
return Menubutton.__getitem__(self, item)
def set_menu(self, default=None, *values):
"""Build a new menu of radiobuttons with *values and optionally
a default value."""
menu = self['menu']
menu.delete(0, 'end')
for val in values:
menu.add_radiobutton(label=val,
command=tkinter._setit(self._variable, val, self._callback))
if default:
self._variable.set(default)
def destroy(self):
"""Destroy this widget and its associated variable."""
del self._variable
Menubutton.destroy(self)
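# Example (a usage sketch; `root` is an assumed Tk root): an OptionMenu
# with a selection callback.
#
#   var = tkinter.StringVar(root)
#   om = OptionMenu(root, var, "red", "red", "green", "blue",
#                   command=lambda value: print("picked", value))
#   om.pack()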
or another widget."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Labelframe with parent master.
STANDARD OPTIONS
class, cursor, style, takefocus
WIDGET-SPECIFIC OPTIONS
labelanchor, text, underline, padding, labelwidget, width,
height
"""
Widget.__init__(self, master, "ttk::labelframe", kw)
LabelFrame = Labelframe # tkinter name compatibility
class Menubutton(Widget):
"""Ttk Menubutton widget displays a textual label and/or image, and
displays a menu when pressed."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Menubutton with parent master.
STANDARD OPTIONS
class, compound, cursor, image, state, style, takefocus,
text, textvariable, underline, width
WIDGET-SPECIFIC OPTIONS
direction, menu
"""
Widget.__init__(self, master, "ttk::menubutton", kw)
class Notebook(Widget):
"""Ttk Notebook widget manages a collection of windows and displays
a single one at a time. Each child window is associated with a tab,
which the user may select to change the currently-displayed window."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Notebook with parent master.
STANDARD OPTIONS
class, cursor, style, takefocus
WIDGET-SPECIFIC OPTIONS
height, padding, width
TAB OPTIONS
state, sticky, padding, text, image, compound, underline
TAB IDENTIFIERS (tab_id)
The tab_id argument found in several methods may take any of
the following forms:
* An integer between zero and the number of tabs
* The name of a child window
* A positional specification of the form "@x,y", which
defines the tab
* The string "current", which identifies the
currently-selected tab
* The string "end", which returns the number of tabs (only
valid for method index)
"""
Widget.__init__(self, master, "ttk::notebook", kw)
def add(self, child, **kw):
"""Adds a new tab to the notebook.
        If child is currently managed by the notebook but hidden, it is
restored to its previous position."""
self.tk.call(self._w, "add", child, *(_format_optdict(kw)))
def forget(self, tab_id):
"""Removes the tab specified by tab_id, unmaps and unmanages the
associated window."""
self.tk.call(self._w, "forget", tab_id)
def hide(self, tab_id):
"""Hides the tab specified by tab_id.
The tab will not be displayed, but the associated window remains
managed by the notebook and its configuration remembered. Hidden
tabs may be restored with the add command."""
self.tk.call(self._w, "hide", tab_id)
def identify(self, x, y):
"""Returns the name of the tab element at position x, y, or the
empty string if none."""
return self.tk.call(self._w, "identify", x, y)
def index(self, tab_id):
"""Returns the numeric index of the tab specified by tab_id, or
the total number of tabs if tab_id is the string "end"."""
return self.tk.getint(self.tk.call(self._w, "index", tab_id))
def insert(self, pos, child, **kw):
"""Inserts a pane at the specified position.
pos is either the string end, an integer index, or the name of
a managed child. If child is already managed by the notebook,
moves it to the specified position."""
self.tk.call(self._w, "insert", pos, child, *(_format_optdict(kw)))
def select(self, tab_id=None):
"""Selects the specified tab.
The associated child window will be displayed, and the
previously-selected window (if different) is unmapped. If tab_id
is omitted, returns the widget name of the currently selected
pane."""
return self.tk.call(self._w, "select", tab_id)
def tab(self, tab_id, option=None, **kw):
"""Query or modify the options of the specific tab_id.
If kw is not given, returns a dict of the tab option values. If option
is specified, returns the value of that option. Otherwise, sets the
options to the corresponding values."""
if option is not None:
kw[option] = None
return _val_or_dict(self.tk, kw, self._w, "tab", tab_id)
def tabs(self):
"""Returns a list of windows managed by the notebook."""
return self.tk.splitlist(self.tk.call(self._w, "tabs") or ())
def enable_traversal(self):
"""Enable keyboard traversal for a toplevel window containing
this notebook.
This will extend the bindings for the toplevel window containing
this notebook as follows:
Control-Tab: selects the tab following the currently selected
one
Shift-Control-Tab: selects the tab preceding the currently
selected one
Alt-K: where K is the mnemonic (underlined) character of any
tab, will select that tab.
Multiple notebooks in a single toplevel may be enabled for
traversal, including nested notebooks. However, notebook traversal
only works properly if all panes are direct children of the
notebook."""
        # The only, and good, difference I see is about mnemonics, which work
        # after calling this method. Control-Tab and Shift-Control-Tab always
        # work (here at least).
self.tk.call("ttk::notebook::enableTraversal", self._w)
class Panedwindow(Widget, tkinter.PanedWindow):
"""Ttk Panedwindow widget displays a number of subwindows, stacked
either vertically or horizontally."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Panedwindow with parent master.
STANDARD OPTIONS
class, cursor, style, takefocus
WIDGET-SPECIFIC OPTIONS
orient, width, height
PANE OPTIONS
weight
"""
Widget.__init__(self, master, "ttk::panedwindow", kw)
forget = tkinter.PanedWindow.forget # overrides Pack.forget
def insert(self, pos, child, **kw):
"""Inserts a pane at the specified positions.
pos is either the string end, and integer index, or the name
of a child. If child is already managed by the paned window,
moves it to the specified position."""
self.tk.call(self._w, "insert", pos, child, *(_format_optdict(kw)))
def pane(self, pane, option=None, **kw):
"""Query or modify the options of the specified pane.
pane is either an integer index or the name of a managed subwindow.
If kw is not given, returns a dict of the pane option values. If
option is specified then the value for that option is returned.
Otherwise, sets the options to the corresponding values."""
if option is not None:
kw[option] = None
return _val_or_dict(self.tk, kw, self._w, "pane", pane)
def sashpos(self, index, newpos=None):
"""If newpos is specified, sets the position of sash number index.
May adjust the positions of adjacent sashes to ensure that
positions are monotonically increasing. Sash positions are further
constrained to be between 0 and the total size of the widget.
Returns the new position of sash number index."""
return self.tk.getint(self.tk.call(self._w, "sashpos", index, newpos))
PanedWindow = Panedwindow # tkinter name compatibility
class Progressbar(Widget):
"""Ttk Progressbar widget shows the status of a long-running
    operation. It can operate in two modes: determinate mode shows the
amount completed relative to the total amount of work to be done, and
indeterminate mode provides an animated display to let the user know
that something is happening."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Progressbar with parent master.
STANDARD OPTIONS
class, cursor, style, takefocus
WIDGET-SPECIFIC OPTIONS
orient, length, mode, maximum, value, variable, phase
"""
Widget.__init__(self, master, "ttk::progressbar", kw)
def start(self, interval=None):
"""Begin autoincrement mode: schedules a recurring timer event
that calls method step every interval milliseconds.
        interval defaults to 50 milliseconds (20 steps/second) if omitted."""
self.tk.call(self._w, "start", interval)
def step(self, amount=None):
"""Increments the value option by amount.
amount defaults to 1.0 if omitted."""
self.tk.call(self._w, "step", amount)
def stop(self):
"""Stop autoincrement mode: cancels any recurring timer event
initiated by start."""
self.tk.call(self._w, "stop")
class Radiobutton(Widget):
"""Ttk Radiobutton widgets are used in groups to show or change a
set of mutually-exclusive options."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Radiobutton with parent master.
STANDARD OPTIONS
class, compound, cursor, image, state, style, takefocus,
text, textvariable, underline, width
WIDGET-SPECIFIC OPTIONS
command, value, variable
"""
Widget.__init__(self, master, "ttk::radiobutton", kw)
def invoke(self):
"""Sets the option variable to the option value, selects the
widget, and invokes the associated command.
Returns the result of the command, or an empty string if
no command is specified."""
return self.tk.call(self._w, "invoke")
class Scale(Widget, tkinter.Scale):
"""Ttk Scale widget is typically used to control the numeric value of
a linked variable that varies uniformly over some range."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Scale with parent master.
STANDARD OPTIONS
class, cursor, style, takefocus
WIDGET-SPECIFIC OPTIONS
command, from, length, orient, to, value, variable
"""
Widget.__init__(self, master, "ttk::scale", kw)
def configure(self, cnf=None, **kw):
"""Modify or query scale options.
Setting a value for any of the "from", "from_" or "to" options
generates a <<RangeChanged>> event."""
if cnf:
kw.update(cnf)
Widget.configure(self, **kw)
if any(['from' in kw, 'from_' in kw, 'to' in kw]):
self.event_generate('<<RangeChanged>>')
def get(self, x=None, y=None):
"""Get the current value of the value option, or the value
corresponding to the coordinates x, y if they are specified.
x and y are pixel coordinates relative to the scale widget
origin."""
return self.tk.call(self._w, 'get', x, y)
class Scrollbar(Widget, tkinter.Scrollbar):
"""Ttk Scrollbar controls the viewport of a scrollable widget."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Scrollbar with parent master.
STANDARD OPTIONS
class, cursor, style, takefocus
WIDGET-SPECIFIC OPTIONS
command, orient
"""
Widget.__init__(self, master, "ttk::scrollbar", kw)
class Separator(Widget):
"""Ttk Separator widget displays a horizontal or vertical separator
bar."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Separator with parent master.
STANDARD OPTIONS
class, cursor, style, takefocus
WIDGET-SPECIFIC OPTIONS
orient
"""
Widget.__init__(self, master, "ttk::separator", kw)
class Sizegrip(Widget):
"""Ttk Sizegrip allows the user to resize the containing toplevel
window by pressing and dragging the grip."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Sizegrip with parent master.
STANDARD OPTIONS
class, cursor, state, style, takefocus
"""
Widget.__init__(self, master, "ttk::sizegrip", kw)
class Treeview(Widget, tkinter.XView, tkinter.YView):
"""Ttk Treeview widget displays a hierarchical collection of items.
Each item has a textual label, an optional image, and an optional list
of data values. The data values are displayed in successive columns
after the tree label."""
def __init__(self, master=None, **kw):
"""Construct a Ttk Treeview with parent master.
STANDARD OPTIONS
class, cursor, style, takefocus, xscrollcommand,
yscrollcommand
WIDGET-SPECIFIC OPTIONS
columns, displaycolumns, height, padding, selectmode, show
ITEM OPTIONS
text, image, values, open, tags
TAG OPTIONS
foreground, background, font, image
"""
Widget.__init__(self, master, "ttk::treeview", kw)
def bbox(self, item, column=None):
"""Returns the bounding box (relative to the treeview widget's
window) of the specified item in the form x y width height.
If column is specified, returns the bounding box of that cell.
If the item is not visible (i.e., if it is a descendant of a
closed item or is scrolled offscreen), returns an empty string."""
return self._getints(self.tk.call(self._w, "bbox", item, column)) or ''
def get_children(self, item=None):
"""Returns a tuple of children belonging to item.
If item is not specified, returns root children."""
return self.tk.splitlist(
self.tk.call(self._w, "children", item or '') or ())
def set_children(self, item, *newchildren):
"""Replaces item's child with newchildren.
Children present in item that are not present in newchildren
are detached from tree. No items in newchildren may be an
ancestor of item."""
self.tk.call(self._w, "children", item, newchildren)
def column(self, column, option=None, **kw):
"""Query or modify the options for the specified column.
If kw is not given, returns a dict of the column option values. If
option is specified then the value for that option is returned.
Otherwise, sets the options to the corresponding values."""
if option is not None:
kw[option] = None
return _val_or_dict(self.tk, kw, self._w, "column", column)
def delete(self, *items):
"""Delete all specified items and all their descendants. The root
item may not be deleted."""
self.tk.call(self._w, "delete", items)
def detach(self, *items):
"""Unlinks all of the specified items from the tree.
The items and all of their descendants are still present, and may
be reinserted at another point in the tree, but will not be
displayed. The root item may not be detached."""
self.tk.call(self._w, "detach", items)
def exists(self, item):
"""Returns True if the specified item is present in the tree,
False otherwise."""
return bool(self.tk.getboolean(self.tk.call(self._w, "exists", item)))
def focus(self, item=None):
"""If item is specified, sets the focus item to item. Otherwise,
returns the current focus item, or '' if there is none."""
return self.tk.call(self._w, "focus", item)
def heading(self, column, option=None, **kw):
"""Query or modify the heading options for the specified column.
If kw is not given, returns a dict of the heading option values. If
option is specified then the value for that option is returned.
Otherwise, sets the options to the corresponding values.
Valid options/values are:
text: text
The text to display in the column heading
image: image_name
Specifies an image to display to the right of the column
heading
anchor: anchor
Specifies how the heading text should be aligned. One of
the standard Tk anchor values
command: callback
A callback to be invoked when the heading label is
pressed.
To configure the tree column heading, call this with column = "#0" """
cmd = kw.get('command')
if cmd and not isinstance(cmd, str):
# callback not registered yet, do it now
kw['command'] = self.master.register(cmd, self._substitute)
if option is not None:
kw[option] = None
return _val_or_dict(self.tk, kw, self._w, 'heading', column)
def identify(self, component, x, y):
"""Returns a description of the specified component under the
point given by x and y, or the empty string if no such component
is present at that position."""
return self.tk.call(self._w, "identify", component, x, y)
def identify_row(self, y):
"""Returns the item ID of the item at position y."""
return self.identify("row", 0, y)
def identify_column(self, x):
"""Returns the data column identifier of the cell at position x.
The tree column has ID #0."""
return self.identify("column", x, 0)
def identify_region(self, x, y):
"""Returns one of:
heading: Tree heading area.
separator: Space between two columns headings;
tree: The tree area.
cell: A data cell.
* Availability: Tk 8.6"""
return self.identify("region", x, y)
def identify_element(self, x, y):
"""Returns the element at position x, y.
* Availability: Tk 8.6"""
return self.identify("element", x, y)
def index(self, item):
"""Returns the integer index of item within its parent's list
of children."""
return self.tk.getint(self.tk.call(self._w, "index", item))
def insert(self, parent, index, iid=None, **kw):
"""Creates a new item and return the item identifier of the newly
created item.
parent is the item ID of the parent item, or the empty string
to create a new top-level item. index is an integer, or the value
end, specifying where in the list of parent's children to insert
the new item. If index is less than or equal to zero, the new node
is inserted at the beginning, if index is greater than or equal to
the current number of children, it is inserted at the end. If iid
is specified, it is used as the item identifier, iid must not
already exist in the tree. Otherwise, a new unique identifier
is generated."""
opts = _format_optdict(kw)
if iid:
res = self.tk.call(self._w, "insert", parent, index,
"-id", iid, *opts)
else:
res = self.tk.call(self._w, "insert", parent, index, *opts)
return res
def item(self, item, option=None, **kw):
"""Query or modify the options for the specified item.
If no options are given, a dict with options/values for the item
is returned. If option is specified then the value for that option
is returned. Otherwise, sets the options to the corresponding
values as given by kw."""
if option is not None:
kw[option] = None
return _val_or_dict(self.tk, kw, self._w, "item", item)
def move(self, item, parent, index):
"""Moves item to position index in parent's list of children.
It is illegal to move an item under one of its descendants. If
index is less than or equal to zero, item is moved to the
beginning, if greater than or equal to the number of children,
it is moved to the end. If item was detached it is reattached."""
self.tk.call(self._w, "move", item, parent, index)
reattach = move # A sensible method name for reattaching detached items
def next(self, item):
"""Returns the identifier of item's next sibling, or '' if item
is the last child of its parent."""
return self.tk.call(self._w, "next", item)
def parent(self, item):
"""Returns the ID of the parent of item, or '' if item is at the
top level of the hierarchy."""
return self.tk.call(self._w, "parent", item)
def prev(self, item):
"""Returns the identifier of item's previous sibling, or '' if
item is the first child of its parent."""
return self.tk.call(self._w, "prev", item)
def see(self, item):
"""Ensure that item is visible.
Sets all of item's ancestors open option to True, and scrolls
the widget if necessary so that item is within the visible
portion of the tree."""
self.tk.call(self._w, "see", item)
def selection(self, selop=None, items=None):
"""If selop is not specified, returns selected items."""
return self.tk.call(self._w, "selection", selop, items)
def selection_set(self, items):
"""items becomes the new selection."""
self.selection("set", items)
def selection_add(self, items):
"""Add items to the selection."""
self.selection("add", items)
def selection_remove(self, items):
"""Remove items from the selection."""
self.selection("remove", items)
def selection_toggle(self, items):
"""Toggle the selection state of each item in items."""
self.selection("toggle", items)
def set(self, item, column=None, value=None):
"""Query or set the value of given item.
With one argument, return a dictionary of column/value pairs
for the specified item. With two arguments, return the current
value of the specified column. With three arguments, set the
value of given column in given item to the specified value."""
res = self.tk.call(self._w, "set", item, column, value)
if column is None and value is None:
return _splitdict(self.tk, res,
cut_minus=False, conv=_tclobj_to_py)
else:
return res
def tag_bind(self, tagname, sequence=None, callback=None):
"""Bind a callback for the given event sequence to the tag tagname.
When an event is delivered to an item, the callbacks for each
of the item's tags option are called."""
self._bind((self._w, "tag", "bind", tagname), sequence, callback, add=0)
def tag_configure(self, tagname, option=None, **kw):
"""Query or modify the options for the specified tagname.
If kw is not given, returns a dict of the option settings for tagname.
If option is specified, returns the value for that option for the
specified tagname. Otherwise, sets the options to the corresponding
values for the given tagname."""
if option is not None:
kw[option] = None
return _val_or_dict(self.tk, kw, self._w, "tag", "configure",
tagname)
def tag_has(self, tagname, item=None):
"""If item is specified, returns 1 or 0 depending on whether the
specified item has the given tagname. Otherwise, returns a list of
all items which have the specified tag.
* Availability: Tk 8.6"""
if item is None:
return self.tk.splitlist(
self.tk.call(self._w, "tag", "has", tagname))
else:
return self.tk.getboolean(
self.tk.call(self._w, "tag", "has", tagname, item))
# Extensions
class LabeledScale(Frame):
"""A Ttk Scale widget with a Ttk Label widget indicating its
current value.
The Ttk Scale can be accessed through instance.scale, and Ttk Label
can be accessed through instance.label"""
def __init__(self, master=None, variable=None, from_=0, to=10, **kw):
"""Construct an horizontal LabeledScale with parent master, a
variable to be associated with the Ttk Scale widget and its range.
If variable is not specified, a tkinter.IntVar is created.
WIDGET-SPECIFIC OPTIONS
compound: 'top' or 'bottom'
Specifies how to display the label relative to the scale.
Defaults to 'top'.
"""
self._label_top = kw.pop('compound', 'top') == 'top'
Frame.__init__(self, master, **kw)
self._variable = variable or tkinter.IntVar(master)
self._variable.set(from_)
self._last_valid = from_
self.label = Label(self)
self.scale = Scale(self, variable=self._variable, from_=from_, to=to)
self.scale.bind('<<RangeChanged>>', self._adjust)
# position scale and label according to the compound option
scale_side = 'bottom' if self._label_top else 'top'
label_side = 'top' if scale_side == 'bottom' else 'bottom'
self.scale.pack(side=scale_side, fill='x')
tmp = Label(self).pack(side=label_side) # place holder
self.label.place(anchor='n' if label_side == 'top' else 's')
# update the label as scale or variable changes
self.__tracecb = self._variable.trace_variable('w', self._adjust)
self.bind('<Configure>', self._adjust)
self.bind('<Map>', self._adjust)
def destroy(self):
"""Destroy this widget and possibly its associated variable."""
try:
self._variable.trace_vdelete('w', self.__tracecb)
except AttributeError:
# widget has been destroyed already
pass
else:
del self._variable
Frame.destroy(self)
def _adjust(self, *args):
"""Adjust the label position according to the scale."""
def adjust_label():
self.update_idletasks() # "force" scale redraw
x, y = self.scale.coords()
if self._label_top:
y = self.scale.winfo_y() - self.label.winfo_reqheight()
else:
y = self.scale.winfo_reqheight() + self.label.winfo_reqheight()
self.label.place_configure(x=x, y=y)
from_ = _to_number(self.scale['from'])
to = _to_number(self.scale['to'])
if to < from_:
from_, to = to, from_
newval = self._variable.get()
if not from_ <= newval <= to:
# value outside range, set value back to the last valid one
self.value = self._last_valid
return
self._last_valid = newval
self.label['text'] = newval
self.after_idle(adjust_label)
def _get_value(self):
"""Return current scale value."""
return self._variable.get()
def _set_value(self, val):
"""Set new scale value."""
self._variable.set(val)
value = property(_get_value, _set_value)
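# Illustrative sketch (editor's addition, hypothetical names):
#
#     ls = LabeledScale(root, from_=0, to=100, compound='bottom')
#     ls.pack(fill='x')
#     ls.value = 42   # uses the value property above; the label follows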
class OptionMenu(Menubutton):
"""Themed OptionMenu, based after tkinter's OptionMenu, which allows
the user to select a value from a menu."""
def __init__(self, master, variable, default=None, *values, **kwargs):
"""Construct a themed OptionMenu widget with master as the parent,
the resource textvariable set to variable, the initially selected
value specified by the default parameter, the menu values given by
*values and additional keywords.
WIDGET-SPECIFIC OPTIONS
style: stylename
Menubutton style.
direction: 'above', 'below', 'left', 'right', or 'flush'
Menubutton direction.
command: callback
A callback that will be invoked after selecting an item.
"""
kw = {'textvariable': variable, 'style': kwargs.pop('style', None),
'direction': kwargs.pop('direction', None)}
Menubutton.__init__(self, master, **kw)
self['menu'] = tkinter.Menu(self, tearoff=False)
self._variable = variable
self._callback = kwargs.pop('command', None)
if kwargs:
raise tkinter.TclError('unknown option -%s' % (
next(iter(kwargs.keys()))))
self.set_menu(default, *values)
def __getitem__(self, item):
if item == 'menu':
return self.nametowidget(Menubutton.__getitem__(self, item))
return Menubutton.__getitem__(self, item)
def set_menu(self, default=None, *values):
"""Build a new menu of radiobuttons with *values and optionally
a default value."""
menu = self['menu']
menu.delete(0, 'end')
for val in values:
menu.add_radiobutton(label=val,
command=tkinter._setit(self._variable, val, self._callback))
if default:
self._variable.set(default)
def destroy(self):
"""Destroy this widget and its associated variable."""
del self._variable
Menubutton.destroy(self)
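# Illustrative sketch (editor's addition, hypothetical names):
#
#     var = tkinter.StringVar(root)
#     om = OptionMenu(root, var, 'first', 'first', 'second', 'third',
#                     command=lambda v: print('picked', v))
#     om.pack()
#     om.set_menu('second', 'second', 'third')  # rebuild the menu later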
| mit | 5,245,644,120,300,814,000 | 33.465886 | 93 | 0.608106 | false |
dstockwell/pachi | tools/sgflib/typelib.py | 11 | 14710 | #!/usr/local/bin/python
# typelib.py (Type Class Library)
# Copyright (c) 2000 David John Goodger
#
# This software is provided "as-is", without any express or implied warranty.
# In no event will the authors be held liable for any damages arising from the
# use of this software.
#
# Permission is granted to anyone to use this software for any purpose,
# including commercial applications, and to alter it and redistribute it
# freely, subject to the following restrictions:
#
# 1. The origin of this software must not be misrepresented; you must not
# claim that you wrote the original software. If you use this software in a
# product, an acknowledgment in the product documentation would be appreciated
# but is not required.
#
# 2. Altered source versions must be plainly marked as such, and must not be
# misrepresented as being the original software.
#
# 3. This notice may not be removed or altered from any source distribution.
"""
================================
Type Class Library: typelib.py
================================
version 1.0 (2000-03-27)
Homepage: [[http://gotools.sourceforge.net/]] (see sgflib.py)
Copyright (C) 2000 David John Goodger ([[mailto:[email protected]]]).
typelib.py comes with ABSOLUTELY NO WARRANTY. This is free software, and you are
welcome to redistribute it and/or modify it under certain conditions; see the
source code for details.
Description
===========
This library implements abstract superclasses to emulate Python's built-in data
types. This is useful when you want a class which acts like a built-in type, but
with added/modified behaviour (methods) and/or data (attributes).
Implemented types are: 'String', 'Tuple', 'List', 'Dictionary', 'Integer',
'Long', 'Float', 'Complex' (along with their abstract superclasses).
All methods, including special overloading methods, are implemented for each
type-emulation class. Instance data is stored internally in the 'data' attribute
(i.e., 'self.data'). The type the class is emulating is stored in the class
attribute 'self.TYPE' (as given by the built-in 'type(class)'). The
'SuperClass.__init__()' method uses two class-specific methods to instantiate
objects: '_reset()' and '_convert()'.
See "sgflib.py" (at module's homepage, see above) for examples of use. The Node
class is of particular interest: a modified 'Dictionary' which is ordered and
allows for offset-indexed retrieval."""
# Revision History
#
# 1.0 (2000-03-27): First public release.
# - Implemented Integer, Long, Float, and Complex.
# - Cleaned up a few loose ends.
# - Completed docstring documentation.
#
# 0.1 (2000-01-27):
# - Implemented String, Tuple, List, and Dictionary emulation.
#
# To do:
# - Implement Function? File? (Have to come up with a good reason first ;-)
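# Illustrative sketch (editor's addition, not from sgflib.py): subclassing one
# of the emulation types below; the name SortedList is hypothetical.
#
#     class SortedList(List):
#         """ A List whose items are re-sorted after every append."""
#         def append(self, x):
#             MutableSequence.append(self, x)
#             self.data.sort()
#
#     s = SortedList([3, 1, 2])
#     s.append(0)
#     print s                             # prints [0, 1, 2, 3]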
class SuperType:
""" Superclass of all type classes. Implements methods common to all types.
Concrete (as opposed to abstract) subclasses must define a class
attribute 'self.TYPE' ('=type(Class)'), and methods '_reset(self)' and
'_convert(self, data)'."""
def __init__(self, data=None):
"""
On 'Class()', initialize 'self.data'. Argument:
- 'data' : optional, default 'None' --
- If the type of 'data' is identical to the Class' 'TYPE',
'data' will be shared (relevant for mutable types only).
- If 'data' is given (and not false), it will be converted by
the Class-specific method 'self._convert(data)'. Incompatible
data types will raise an exception.
- If 'data' is 'None', false, or not given, a Class-specific method
'self._reset()' is called to initialize an empty instance."""
if data:
if type(data) is self.TYPE:
self.data = data
else:
self.data = self._convert(data)
else:
self._reset()
def __str__(self):
""" On 'str(self)' and 'print self'. Returns string representation."""
return str(self.data)
def __cmp__(self, x):
""" On 'self>x', 'self==x', 'cmp(self,x)', etc. Catches all
comparisons: returns -1, 0, or 1 for less, equal, or greater."""
return cmp(self.data, x)
def __rcmp__(self, x):
""" On 'x>self', 'x==self', 'cmp(x,self)', etc. Catches all
comparisons: returns -1, 0, or 1 for less, equal, or greater."""
return cmp(x, self.data)
def __hash__(self):
""" On 'dictionary[self]', 'hash(self)'. Returns a unique and unchanging
integer hash-key."""
return hash(self.data)
class AddMulMixin:
""" Addition & multiplication for numbers, concatenation & repetition for
sequences."""
def __add__(self, other):
""" On 'self+other'. Numeric addition, or sequence concatenation."""
return self.data + other
def __radd__(self, other):
""" On 'other+self'. Numeric addition, or sequence concatenation."""
return other + self.data
def __mul__(self, other):
""" On 'self*other'. Numeric multiplication, or sequence repetition."""
return self.data * other
def __rmul__(self, other):
""" On 'other*self'. Numeric multiplication, or sequence repetition."""
return other * self.data
class MutableMixin:
""" Assignment to and deletion of collection component."""
def __setitem__(self, key, x):
""" On 'self[key]=x'."""
self.data[key] = x
def __delitem__(self, key):
""" On 'del self[key]'."""
del self.data[key]
class ModMixin:
""" Modulo remainder and string formatting."""
def __mod__(self, other):
""" On 'self%other'."""
return self.data % other
def __rmod__(self, other):
""" On 'other%self'."""
return other % self.data
class Number(SuperType, AddMulMixin, ModMixin):
""" Superclass for numeric emulation types."""
def __sub__(self, other):
""" On 'self-other'."""
return self.data - other
def __rsub__(self, other):
""" On 'other-self'."""
return other - self.data
def __div__(self, other):
""" On 'self/other'."""
return self.data / other
def __rdiv__(self, other):
""" On 'other/self'."""
return other / self.data
def __divmod__(self, other):
""" On 'divmod(self,other)'."""
return divmod(self.data, other)
def __rdivmod__(self, other):
""" On 'divmod(other,self)'."""
return divmod(other, self.data)
def __pow__(self, other, mod=None):
""" On 'pow(self,other[,mod])', 'self**other'."""
if mod is None:
return self.data ** other
else:
return pow(self.data, other, mod)
def __rpow__(self, other):
""" On 'pow(other,self)', 'other**self'."""
return other ** self.data
def __neg__(self):
""" On '-self'."""
return -self.data
def __pos__(self):
""" On '+self'."""
return +self.data
def __abs__(self):
""" On 'abs(self)'."""
return abs(self.data)
def __int__(self):
""" On 'int(self)'."""
return int(self.data)
def __long__(self):
""" On 'long(self)'."""
return long(self.data)
def __float__(self):
""" On 'float(self)'."""
return float(self.data)
def __complex__(self):
""" On 'complex(self)'."""
return complex(self.data)
def __nonzero__(self):
""" On truth-value (or uses '__len__()' if defined)."""
return self.data != 0
def __coerce__(self, other):
""" On mixed-type expression, 'coerce()'. Returns tuple of '(self, other)'
converted to a common type."""
return coerce(self.data, other)
class Integer(Number):
""" Emulates a Python integer."""
TYPE = type(1)
def _reset(self):
""" Initialize an integer."""
self.data = 0
def _convert(self, data):
""" Convert data into an integer."""
return int(data)
def __lshift__(self, other):
""" On 'self<<other'."""
return self.data << other
def __rlshift__(self, other):
""" On 'other<<self'."""
return other << self.data
def __rshift__(self, other):
""" On 'self>>other'."""
return self.data >> other
def __rrshift__(self, other):
""" On 'other>>self'."""
return other >> self.data
def __and__(self, other):
""" On 'self&other'."""
return self.data & other
def __rand__(self, other):
""" On 'other&self'."""
return other & self.data
def __or__(self, other):
""" On 'self|other'."""
return self.data | other
def __ror__(self, other):
""" On 'other|self'."""
return other | self.data
def __xor__(self, other):
""" On 'self^other'."""
return self.data ^ other
def __rxor__(self, other):
""" On 'other%self'."""
return other % self.data
def __invert__(self):
""" On '~self'."""
return ~self.data
def __oct__(self):
""" On 'oct(self)'. Returns octal string representation."""
return oct(self.data)
def __hex__(self):
""" On 'hex(self)'. Returns hexidecimal string representation."""
return hex(self.data)
class Long(Integer):
""" Emulates a Python long integer."""
TYPE = type(1L)
def _reset(self):
""" Initialize an integer."""
self.data = 0L
def _convert(self, data):
""" Convert data into an integer."""
return long(data)
class Float(Number):
""" Emulates a Python floating-point number."""
TYPE = type(0.1)
def _reset(self):
""" Initialize a float."""
self.data = 0.0
def _convert(self, data):
""" Convert data into a float."""
return float(data)
class Complex(Number):
""" Emulates a Python complex number."""
TYPE = type(0+0j)
def _reset(self):
""" Initialize an integer."""
self.data = 0+0j
def _convert(self, data):
""" Convert data into an integer."""
return complex(data)
def __getattr__(self, name):
""" On 'self.real' & 'self.imag'."""
if name == "real":
return self.data.real
elif name == "imag":
return self.data.imag
else:
raise AttributeError(name)
def conjugate(self):
""" On 'self.conjugate()'."""
return self.data.conjugate()
class Container(SuperType):
""" Superclass for countable, indexable collection types ('Sequence', 'Mapping')."""
def __len__(self):
""" On 'len(self)', truth-value tests. Returns sequence or mapping
collection size. Zero means false."""
return len(self.data)
def __getitem__(self, key):
""" On 'self[key]', 'x in self', 'for x in self'. Implements all
indexing-related operations. Membership and iteration ('in', 'for')
repeatedly index from 0 until 'IndexError'."""
return self.data[key]
class Sequence(Container, AddMulMixin):
""" Superclass for classes which emulate sequences ('List', 'Tuple', 'String')."""
def __getslice__(self, low, high):
""" On 'self[low:high]'."""
return self.data[low:high]
class String(Sequence, ModMixin):
""" Emulates a Python string."""
TYPE = type("")
def _reset(self):
""" Initialize an empty string."""
self.data = ""
def _convert(self, data):
""" Convert data into a string."""
return str(data)
class Tuple(Sequence):
""" Emulates a Python tuple."""
TYPE = type(())
def _reset(self):
""" Initialize an empty tuple."""
self.data = ()
def _convert(self, data):
""" Non-tuples cannot be converted. Raise an exception."""
raise TypeError("Non-tuples cannot be converted to a tuple.")
class MutableSequence(Sequence, MutableMixin):
""" Superclass for classes which emulate mutable (modifyable in-place)
sequences ('List')."""
def __setslice__(self, low, high, seq):
""" On 'self[low:high]=seq'."""
self.data[low:high] = seq
def __delslice__(self, low, high):
""" On 'del self[low:high]'."""
del self.data[low:high]
def append(self, x):
""" Inserts object 'x' at the end of 'self.data' in-place."""
self.data.append(x)
def count(self, x):
""" Returns the number of occurrences of 'x' in 'self.data'."""
return self.data.count(x)
def extend(self, x):
""" Concatenates sequence 'x' to the end of 'self' in-place
(like 'self=self+x')."""
self.data.extend(x)
def index(self, x):
""" Returns the offset of the first occurrence of object 'x' in
'self.data'; raises an exception if not found."""
return self.data.index(x)
def insert(self, i, x):
""" Inserts object 'x' into 'self.data' at offset 'i'
(like 'self[i:i]=[x]')."""
self.data.insert(i, x)
def pop(self, i=-1):
""" Returns and deletes the last item of 'self.data' (or item
'self.data[i]' if 'i' given)."""
return self.data.pop(i)
def remove(self, x):
""" Deletes the first occurrence of object 'x' from 'self.data';
raise an exception if not found."""
self.data.remove(x)
def reverse(self):
""" Reverses items in 'self.data' in-place."""
self.data.reverse()
def sort(self, func=None):
"""
Sorts 'self.data' in-place. Argument:
- func : optional, default 'None' --
- If 'func' not given, sorting will be in ascending
order.
- If 'func' given, it will determine the sort order.
'func' must be a two-argument comparison function
which returns -1, 0, or 1, to mean before, same,
or after ordering."""
if func:
self.data.sort(func)
else:
self.data.sort()
class List(MutableSequence):
""" Emulates a Python list. When instantiating an object with data
('List(data)'), you can force a copy with 'List(list(data))'."""
TYPE = type([])
def _reset(self):
""" Initialize an empty list."""
self.data = []
def _convert(self, data):
""" Convert data into a list."""
return list(data)
class Mapping(Container):
""" Superclass for classes which emulate mappings/hashes ('Dictionary')."""
def has_key(self, key):
""" Returns 1 (true) if 'self.data' has a key 'key', or 0 otherwise."""
return self.data.has_key(key)
def keys(self):
""" Returns a new list holding all keys from 'self.data'."""
return self.data.keys()
def values(self):
""" Returns a new list holding all values from 'self.data'."""
return self.data.values()
def items(self):
""" Returns a new list of tuple pairs '(key, value)', one for each entry
in 'self.data'."""
return self.data.items()
def clear(self):
""" Removes all items from 'self.data'."""
self.data.clear()
def get(self, key, default=None):
""" Similar to 'self[key]', but returns 'default' (or 'None') instead of
raising an exception when 'key' is not found in 'self.data'."""
return self.data.get(key, default)
def copy(self):
""" Returns a shallow (top-level) copy of 'self.data'."""
return self.data.copy()
def update(self, dict):
""" Merges 'dict' into 'self.data'
(i.e., 'for (k,v) in dict.items(): self.data[k]=v')."""
self.data.update(dict)
class Dictionary(Mapping, MutableMixin):
""" Emulates a Python dictionary, a mutable mapping. When instantiating an
object with data ('Dictionary(data)'), you can force a (shallow) copy
with 'Dictionary(data.copy())'."""
TYPE = type({})
def _reset(self):
""" Initialize an empty dictionary."""
self.data = {}
def _convert(self, data):
""" Non-dictionaries cannot be converted. Raise an exception."""
raise TypeError("Non-dictionaries cannot be converted to a dictionary.")
if __name__ == "__main__":
print __doc__ # show module's documentation string
| gpl-2.0 | -619,525,922,261,596,800 | 25.941392 | 85 | 0.646431 | false |
tizianasellitto/servo | tests/wpt/web-platform-tests/tools/html5lib/html5lib/utils.py | 982 | 2545 | from __future__ import absolute_import, division, unicode_literals
from types import ModuleType
try:
import xml.etree.cElementTree as default_etree
except ImportError:
import xml.etree.ElementTree as default_etree
__all__ = ["default_etree", "MethodDispatcher", "isSurrogatePair",
"surrogatePairToCodepoint", "moduleFactoryFactory"]
class MethodDispatcher(dict):
"""Dict with 2 special properties:
On initiation, keys that are lists, sets or tuples are converted to
multiple keys so accessing any one of the items in the original
list-like object returns the matching value
md = MethodDispatcher({("foo", "bar"):"baz"})
md["foo"] == "baz"
A default value which can be set through the default attribute.
"""
def __init__(self, items=()):
# Using _dictEntries instead of directly assigning to self is about
# twice as fast. Please do careful performance testing before changing
# anything here.
_dictEntries = []
for name, value in items:
if type(name) in (list, tuple, frozenset, set):
for item in name:
_dictEntries.append((item, value))
else:
_dictEntries.append((name, value))
dict.__init__(self, _dictEntries)
self.default = None
def __getitem__(self, key):
return dict.get(self, key, self.default)
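# Illustrative sketch (editor's addition): list-like keys fan out to multiple
# entries, and missing keys fall back to the default attribute.
#
#     md = MethodDispatcher({("img", "image"): "media", "p": "text"})
#     md.default = "unknown"
#     md["image"]   # -> "media"
#     md["div"]     # -> "unknown"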
# Some utility functions to deal with weirdness around UCS2 vs UCS4
# python builds
def isSurrogatePair(data):
return (len(data) == 2 and
ord(data[0]) >= 0xD800 and ord(data[0]) <= 0xDBFF and
ord(data[1]) >= 0xDC00 and ord(data[1]) <= 0xDFFF)
def surrogatePairToCodepoint(data):
char_val = (0x10000 + (ord(data[0]) - 0xD800) * 0x400 +
(ord(data[1]) - 0xDC00))
return char_val
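# Illustrative sketch (editor's addition): on a narrow (UCS2) build, a
# character outside the BMP such as U+1D11E arrives as two code units.
#
#     data = u"\uD834\uDD1E"
#     if isSurrogatePair(data):
#         surrogatePairToCodepoint(data)   # -> 0x1D11E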
# Module Factory Factory (no, this isn't Java, I know)
# Here to stop this being duplicated all over the place.
def moduleFactoryFactory(factory):
moduleCache = {}
def moduleFactory(baseModule, *args, **kwargs):
if isinstance(ModuleType.__name__, type("")):
name = "_%s_factory" % baseModule.__name__
else:
name = b"_%s_factory" % baseModule.__name__
if name in moduleCache:
return moduleCache[name]
else:
mod = ModuleType(name)
objs = factory(baseModule, *args, **kwargs)
mod.__dict__.update(objs)
moduleCache[name] = mod
return mod
return moduleFactory
| mpl-2.0 | -4,285,529,845,807,071,000 | 30.036585 | 78 | 0.616896 | false |
jamielennox/tempest | tempest/api/compute/admin/test_quotas_negative.py | 1 | 7093 | # Copyright 2014 NEC Corporation. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from tempest.api.compute import base
from tempest.common.utils import data_utils
from tempest import config
from tempest import exceptions
from tempest import test
CONF = config.CONF
class QuotasAdminNegativeTestJSON(base.BaseV2ComputeAdminTest):
force_tenant_isolation = True
@classmethod
def resource_setup(cls):
super(QuotasAdminNegativeTestJSON, cls).resource_setup()
cls.client = cls.os.quotas_client
cls.adm_client = cls.os_adm.quotas_client
cls.sg_client = cls.security_groups_client
        # NOTE(afazekas): these test cases should always create and use a new
        # tenant; most of them should be skipped if we can't do that
cls.demo_tenant_id = cls.client.tenant_id
@test.attr(type=['negative', 'gate'])
def test_update_quota_normal_user(self):
self.assertRaises(exceptions.Unauthorized,
self.client.update_quota_set,
self.demo_tenant_id,
ram=0)
    # TODO(afazekas): Add dedicated tenant to the skipped quota tests
# it can be moved into the setUpClass as well
@test.attr(type=['negative', 'gate'])
def test_create_server_when_cpu_quota_is_full(self):
# Disallow server creation when tenant's vcpu quota is full
quota_set = self.adm_client.get_quota_set(self.demo_tenant_id)
default_vcpu_quota = quota_set['cores']
vcpu_quota = 0 # Set the quota to zero to conserve resources
quota_set = self.adm_client.update_quota_set(self.demo_tenant_id,
force=True,
cores=vcpu_quota)
self.addCleanup(self.adm_client.update_quota_set, self.demo_tenant_id,
cores=default_vcpu_quota)
self.assertRaises((exceptions.Unauthorized, exceptions.OverLimit),
self.create_test_server)
@test.attr(type=['negative', 'gate'])
def test_create_server_when_memory_quota_is_full(self):
# Disallow server creation when tenant's memory quota is full
quota_set = self.adm_client.get_quota_set(self.demo_tenant_id)
default_mem_quota = quota_set['ram']
mem_quota = 0 # Set the quota to zero to conserve resources
self.adm_client.update_quota_set(self.demo_tenant_id,
force=True,
ram=mem_quota)
self.addCleanup(self.adm_client.update_quota_set, self.demo_tenant_id,
ram=default_mem_quota)
self.assertRaises((exceptions.Unauthorized, exceptions.OverLimit),
self.create_test_server)
@test.attr(type=['negative', 'gate'])
def test_create_server_when_instances_quota_is_full(self):
# Once instances quota limit is reached, disallow server creation
quota_set = self.adm_client.get_quota_set(self.demo_tenant_id)
default_instances_quota = quota_set['instances']
instances_quota = 0 # Set quota to zero to disallow server creation
self.adm_client.update_quota_set(self.demo_tenant_id,
force=True,
instances=instances_quota)
self.addCleanup(self.adm_client.update_quota_set, self.demo_tenant_id,
instances=default_instances_quota)
self.assertRaises((exceptions.Unauthorized, exceptions.OverLimit),
self.create_test_server)
@test.skip_because(bug="1186354",
condition=CONF.service_available.neutron)
@test.attr(type='gate')
@test.services('network')
def test_security_groups_exceed_limit(self):
# Negative test: Creation Security Groups over limit should FAIL
quota_set = self.adm_client.get_quota_set(self.demo_tenant_id)
default_sg_quota = quota_set['security_groups']
sg_quota = 0 # Set the quota to zero to conserve resources
quota_set =\
self.adm_client.update_quota_set(self.demo_tenant_id,
force=True,
security_groups=sg_quota)
self.addCleanup(self.adm_client.update_quota_set,
self.demo_tenant_id,
security_groups=default_sg_quota)
# Check we cannot create anymore
# A 403 Forbidden or 413 Overlimit (old behaviour) exception
# will be raised when out of quota
self.assertRaises((exceptions.Unauthorized, exceptions.OverLimit),
self.sg_client.create_security_group,
"sg-overlimit", "sg-desc")
@test.skip_because(bug="1186354",
condition=CONF.service_available.neutron)
@test.attr(type=['negative', 'gate'])
@test.services('network')
def test_security_groups_rules_exceed_limit(self):
# Negative test: Creation of Security Group Rules should FAIL
# when we reach limit maxSecurityGroupRules
quota_set = self.adm_client.get_quota_set(self.demo_tenant_id)
default_sg_rules_quota = quota_set['security_group_rules']
sg_rules_quota = 0 # Set the quota to zero to conserve resources
quota_set =\
self.adm_client.update_quota_set(
self.demo_tenant_id,
force=True,
security_group_rules=sg_rules_quota)
self.addCleanup(self.adm_client.update_quota_set,
self.demo_tenant_id,
security_group_rules=default_sg_rules_quota)
s_name = data_utils.rand_name('securitygroup-')
s_description = data_utils.rand_name('description-')
securitygroup =\
self.sg_client.create_security_group(s_name, s_description)
self.addCleanup(self.sg_client.delete_security_group,
securitygroup['id'])
secgroup_id = securitygroup['id']
ip_protocol = 'tcp'
# Check we cannot create SG rule anymore
# A 403 Forbidden or 413 Overlimit (old behaviour) exception
# will be raised when out of quota
self.assertRaises((exceptions.OverLimit, exceptions.Unauthorized),
self.sg_client.create_security_group_rule,
secgroup_id, ip_protocol, 1025, 1025)
| apache-2.0 | 2,859,684,301,012,371,000 | 43.892405 | 78 | 0.612153 | false |
cafecivet/django_girls_tutorial | Lib/site-packages/django/db/models/sql/subqueries.py | 66 | 10450 | """
Query subclasses which provide extra functionality beyond simple data retrieval.
"""
from django.conf import settings
from django.core.exceptions import FieldError
from django.db import connections
from django.db.models.query_utils import Q
from django.db.models.constants import LOOKUP_SEP
from django.db.models.fields import DateField, DateTimeField, FieldDoesNotExist
from django.db.models.sql.constants import GET_ITERATOR_CHUNK_SIZE, NO_RESULTS, SelectInfo
from django.db.models.sql.datastructures import Date, DateTime
from django.db.models.sql.query import Query
from django.utils import six
from django.utils import timezone
__all__ = ['DeleteQuery', 'UpdateQuery', 'InsertQuery', 'DateQuery',
'DateTimeQuery', 'AggregateQuery']
class DeleteQuery(Query):
"""
Delete queries are done through this class, since they are more constrained
than general queries.
"""
compiler = 'SQLDeleteCompiler'
def do_query(self, table, where, using):
self.tables = [table]
self.where = where
self.get_compiler(using).execute_sql(NO_RESULTS)
def delete_batch(self, pk_list, using, field=None):
"""
Set up and execute delete queries for all the objects in pk_list.
More than one physical query may be executed if there are a
lot of values in pk_list.
"""
if not field:
field = self.get_meta().pk
for offset in range(0, len(pk_list), GET_ITERATOR_CHUNK_SIZE):
self.where = self.where_class()
self.add_q(Q(
**{field.attname + '__in': pk_list[offset:offset + GET_ITERATOR_CHUNK_SIZE]}))
self.do_query(self.get_meta().db_table, self.where, using=using)
def delete_qs(self, query, using):
"""
Delete the queryset in one SQL query (if possible). For simple queries
this is done by copying the query.query.where to self.query, for
        complex queries by using a subquery.
"""
innerq = query.query
# Make sure the inner query has at least one table in use.
innerq.get_initial_alias()
# The same for our new query.
self.get_initial_alias()
innerq_used_tables = [t for t in innerq.tables
if innerq.alias_refcount[t]]
if ((not innerq_used_tables or innerq_used_tables == self.tables)
and not len(innerq.having)):
# There is only the base table in use in the query, and there is
# no aggregate filtering going on.
self.where = innerq.where
else:
pk = query.model._meta.pk
if not connections[using].features.update_can_self_select:
# We can't do the delete using subquery.
values = list(query.values_list('pk', flat=True))
if not values:
return
self.delete_batch(values, using)
return
else:
innerq.clear_select_clause()
innerq.select = [
SelectInfo((self.get_initial_alias(), pk.column), None)
]
values = innerq
self.where = self.where_class()
self.add_q(Q(pk__in=values))
self.get_compiler(using).execute_sql(NO_RESULTS)
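# Illustrative sketch (editor's addition; `Entry` is a hypothetical model):
# deleting rows in pk-sized batches through the internal API above.
#
#     dq = DeleteQuery(Entry)
#     dq.delete_batch([obj.pk for obj in Entry.objects.all()], using='default')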
class UpdateQuery(Query):
"""
Represents an "update" SQL query.
"""
compiler = 'SQLUpdateCompiler'
def __init__(self, *args, **kwargs):
super(UpdateQuery, self).__init__(*args, **kwargs)
self._setup_query()
def _setup_query(self):
"""
Runs on initialization and after cloning. Any attributes that would
normally be set in __init__ should go in here, instead, so that they
are also set up after a clone() call.
"""
self.values = []
self.related_ids = None
if not hasattr(self, 'related_updates'):
self.related_updates = {}
def clone(self, klass=None, **kwargs):
return super(UpdateQuery, self).clone(klass,
related_updates=self.related_updates.copy(), **kwargs)
def update_batch(self, pk_list, values, using):
self.add_update_values(values)
for offset in range(0, len(pk_list), GET_ITERATOR_CHUNK_SIZE):
self.where = self.where_class()
self.add_q(Q(pk__in=pk_list[offset: offset + GET_ITERATOR_CHUNK_SIZE]))
self.get_compiler(using).execute_sql(NO_RESULTS)
def add_update_values(self, values):
"""
Convert a dictionary of field name to value mappings into an update
query. This is the entry point for the public update() method on
querysets.
"""
values_seq = []
for name, val in six.iteritems(values):
field, model, direct, m2m = self.get_meta().get_field_by_name(name)
if not direct or m2m:
raise FieldError('Cannot update model field %r (only non-relations and foreign keys permitted).' % field)
if model:
self.add_related_update(model, field, val)
continue
values_seq.append((field, model, val))
return self.add_update_fields(values_seq)
def add_update_fields(self, values_seq):
"""
Turn a sequence of (field, model, value) triples into an update query.
Used by add_update_values() as well as the "fast" update path when
saving models.
"""
self.values.extend(values_seq)
def add_related_update(self, model, field, value):
"""
Adds (name, value) to an update query for an ancestor model.
Updates are coalesced so that we only run one update query per ancestor.
"""
self.related_updates.setdefault(model, []).append((field, None, value))
def get_related_updates(self):
"""
Returns a list of query objects: one for each update required to an
ancestor model. Each query will have the same filtering conditions as
the current query but will only update a single table.
"""
if not self.related_updates:
return []
result = []
for model, values in six.iteritems(self.related_updates):
query = UpdateQuery(model)
query.values = values
if self.related_ids is not None:
query.add_filter(('pk__in', self.related_ids))
result.append(query)
return result
class InsertQuery(Query):
compiler = 'SQLInsertCompiler'
def __init__(self, *args, **kwargs):
super(InsertQuery, self).__init__(*args, **kwargs)
self.fields = []
self.objs = []
def clone(self, klass=None, **kwargs):
extras = {
'fields': self.fields[:],
'objs': self.objs[:],
'raw': self.raw,
}
extras.update(kwargs)
return super(InsertQuery, self).clone(klass, **extras)
def insert_values(self, fields, objs, raw=False):
"""
Set up the insert query from the 'insert_values' dictionary. The
dictionary gives the model field names and their target values.
If 'raw_values' is True, the values in the 'insert_values' dictionary
are inserted directly into the query, rather than passed as SQL
parameters. This provides a way to insert NULL and DEFAULT keywords
into the query, for example.
"""
self.fields = fields
self.objs = objs
self.raw = raw
class DateQuery(Query):
"""
A DateQuery is a normal query, except that it specifically selects a single
date field. This requires some special handling when converting the results
back to Python objects, so we put it in a separate class.
"""
compiler = 'SQLDateCompiler'
def add_select(self, field_name, lookup_type, order='ASC'):
"""
Converts the query into an extraction query.
"""
try:
result = self.setup_joins(
field_name.split(LOOKUP_SEP),
self.get_meta(),
self.get_initial_alias(),
)
except FieldError:
raise FieldDoesNotExist("%s has no field named '%s'" % (
self.get_meta().object_name, field_name
))
field = result[0]
self._check_field(field) # overridden in DateTimeQuery
alias = result[3][-1]
select = self._get_select((alias, field.column), lookup_type)
self.clear_select_clause()
self.select = [SelectInfo(select, None)]
self.distinct = True
self.order_by = [1] if order == 'ASC' else [-1]
if field.null:
self.add_filter(("%s__isnull" % field_name, False))
def _check_field(self, field):
assert isinstance(field, DateField), \
"%r isn't a DateField." % field.name
if settings.USE_TZ:
assert not isinstance(field, DateTimeField), \
"%r is a DateTimeField, not a DateField." % field.name
def _get_select(self, col, lookup_type):
return Date(col, lookup_type)
class DateTimeQuery(DateQuery):
"""
A DateTimeQuery is like a DateQuery but for a datetime field. If time zone
support is active, the tzinfo attribute contains the time zone to use for
converting the values before truncating them. Otherwise it's set to None.
"""
compiler = 'SQLDateTimeCompiler'
def clone(self, klass=None, memo=None, **kwargs):
if 'tzinfo' not in kwargs and hasattr(self, 'tzinfo'):
kwargs['tzinfo'] = self.tzinfo
return super(DateTimeQuery, self).clone(klass, memo, **kwargs)
def _check_field(self, field):
assert isinstance(field, DateTimeField), \
"%r isn't a DateTimeField." % field.name
def _get_select(self, col, lookup_type):
if self.tzinfo is None:
tzname = None
else:
tzname = timezone._get_timezone_name(self.tzinfo)
return DateTime(col, lookup_type, tzname)
class AggregateQuery(Query):
"""
An AggregateQuery takes another query as a parameter to the FROM
clause and only selects the elements in the provided list.
"""
compiler = 'SQLAggregateCompiler'
def add_subquery(self, query, using):
self.subquery, self.sub_params = query.get_compiler(using).as_sql(with_col_aliases=True)
| gpl-2.0 | -4,727,673,045,453,679,000 | 35.666667 | 121 | 0.604498 | false |
eino-makitalo/odoo | addons/edi/models/res_currency.py | 437 | 2892 | # -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Business Applications
# Copyright (c) 2011-2012 OpenERP S.A. <http://openerp.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
from openerp.osv import osv
from edi import EDIMixin
from openerp import SUPERUSER_ID
RES_CURRENCY_EDI_STRUCT = {
#custom: 'code'
'symbol': True,
'rate': True,
}
class res_currency(osv.osv, EDIMixin):
_inherit = "res.currency"
def edi_export(self, cr, uid, records, edi_struct=None, context=None):
edi_struct = dict(edi_struct or RES_CURRENCY_EDI_STRUCT)
edi_doc_list = []
for currency in records:
# Get EDI doc based on struct. The result will also contain all metadata fields and attachments.
edi_doc = super(res_currency,self).edi_export(cr, uid, [currency], edi_struct, context)[0]
edi_doc.update(code=currency.name)
edi_doc_list.append(edi_doc)
return edi_doc_list
def edi_import(self, cr, uid, edi_document, context=None):
self._edi_requires_attributes(('code','symbol'), edi_document)
external_id = edi_document['__id']
existing_currency = self._edi_get_object_by_external_id(cr, uid, external_id, 'res_currency', context=context)
if existing_currency:
return existing_currency.id
# find with unique ISO code
existing_ids = self.search(cr, uid, [('name','=',edi_document['code'])])
if existing_ids:
return existing_ids[0]
# nothing found, create a new one
currency_id = self.create(cr, SUPERUSER_ID, {'name': edi_document['code'],
'symbol': edi_document['symbol']}, context=context)
rate = edi_document.pop('rate')
if rate:
self.pool.get('res.currency.rate').create(cr, SUPERUSER_ID, {'currency_id': currency_id,
'rate': rate}, context=context)
return currency_id
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 | -3,118,350,269,596,763,000 | 42.818182 | 118 | 0.59751 | false |
AuyaJackie/odoo | addons/resource/resource.py | 81 | 42822 | # -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-TODAY OpenERP SA (http://www.openerp.com)
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
import datetime
from dateutil import rrule
from dateutil.relativedelta import relativedelta
from operator import itemgetter
from openerp import tools
from openerp.osv import fields, osv
from openerp.tools.float_utils import float_compare
from openerp.tools.translate import _
import pytz
class resource_calendar(osv.osv):
""" Calendar model for a resource. It has
- attendance_ids: list of resource.calendar.attendance that are a working
interval in a given weekday.
- leave_ids: list of leaves linked to this calendar. A leave can be general
or linked to a specific resource, depending on its resource_id.
All methods in this class use intervals. An interval is a tuple holding
(begin_datetime, end_datetime). A list of intervals is therefore a list of
tuples, holding several intervals of work or leaves. """
_name = "resource.calendar"
_description = "Resource Calendar"
_columns = {
'name': fields.char("Name", required=True),
'company_id': fields.many2one('res.company', 'Company', required=False),
'attendance_ids': fields.one2many('resource.calendar.attendance', 'calendar_id', 'Working Time', copy=True),
'manager': fields.many2one('res.users', 'Workgroup Manager'),
'leave_ids': fields.one2many(
'resource.calendar.leaves', 'calendar_id', 'Leaves',
            help='Leaves linked to this calendar. A leave can be general or linked to a specific resource.'
),
}
_defaults = {
'company_id': lambda self, cr, uid, context: self.pool.get('res.company')._company_default_get(cr, uid, 'resource.calendar', context=context)
}
# --------------------------------------------------
# Utility methods
# --------------------------------------------------
def interval_clean(self, intervals):
""" Utility method that sorts and removes overlapping inside datetime
intervals. The intervals are sorted based on increasing starting datetime.
Overlapping intervals are merged into a single one.
:param list intervals: list of intervals; each interval is a tuple
(datetime_from, datetime_to)
:return list cleaned: list of sorted intervals without overlap """
intervals = sorted(intervals, key=itemgetter(0)) # sort on first datetime
cleaned = []
working_interval = None
while intervals:
current_interval = intervals.pop(0)
if not working_interval: # init
working_interval = [current_interval[0], current_interval[1]]
elif working_interval[1] < current_interval[0]: # interval is disjoint
cleaned.append(tuple(working_interval))
working_interval = [current_interval[0], current_interval[1]]
elif working_interval[1] < current_interval[1]: # union of greater intervals
working_interval[1] = current_interval[1]
if working_interval: # handle void lists
cleaned.append(tuple(working_interval))
return cleaned
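    # Illustrative sketch (not part of the original module; dt = datetime.datetime):
    # unsorted, overlapping intervals come out as one ordered, disjoint list.
    #
    #   interval_clean([(dt(2014, 1, 6, 13), dt(2014, 1, 6, 17)),
    #                   (dt(2014, 1, 6, 8), dt(2014, 1, 6, 12)),
    #                   (dt(2014, 1, 6, 11), dt(2014, 1, 6, 14))])
    #   -> [(dt(2014, 1, 6, 8), dt(2014, 1, 6, 17))]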
def interval_remove_leaves(self, interval, leave_intervals):
""" Utility method that remove leave intervals from a base interval:
- clean the leave intervals, to have an ordered list of not-overlapping
intervals
- initiate the current interval to be the base interval
- for each leave interval:
- finishing before the current interval: skip, go to next
- beginning after the current interval: skip and get out of the loop
because we are outside range (leaves are ordered)
- beginning within the current interval: close the current interval
and begin a new current interval that begins at the end of the leave
interval
- ending within the current interval: update the current interval begin
to match the leave interval ending
:param tuple interval: a tuple (beginning datetime, ending datetime) that
is the base interval from which the leave intervals
will be removed
:param list leave_intervals: a list of tuples (beginning datetime, ending datetime)
that are intervals to remove from the base interval
:return list intervals: a list of tuples (begin datetime, end datetime)
that are the remaining valid intervals """
if not interval:
return interval
if leave_intervals is None:
leave_intervals = []
intervals = []
leave_intervals = self.interval_clean(leave_intervals)
current_interval = [interval[0], interval[1]]
for leave in leave_intervals:
if leave[1] <= current_interval[0]:
continue
if leave[0] >= current_interval[1]:
break
if current_interval[0] < leave[0] < current_interval[1]:
current_interval[1] = leave[0]
intervals.append((current_interval[0], current_interval[1]))
current_interval = [leave[1], interval[1]]
# if current_interval[0] <= leave[1] <= current_interval[1]:
if current_interval[0] <= leave[1]:
current_interval[0] = leave[1]
if current_interval and current_interval[0] < interval[1]: # remove intervals moved outside base interval due to leaves
intervals.append((current_interval[0], current_interval[1]))
return intervals
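    # Illustrative sketch (not part of the original module; dt = datetime.datetime):
    # a lunch-break leave splits one working interval in two.
    #
    #   interval_remove_leaves((dt(2014, 1, 6, 8), dt(2014, 1, 6, 17)),
    #                          [(dt(2014, 1, 6, 12), dt(2014, 1, 6, 13))])
    #   -> [(dt(2014, 1, 6, 8), dt(2014, 1, 6, 12)),
    #       (dt(2014, 1, 6, 13), dt(2014, 1, 6, 17))]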
def interval_schedule_hours(self, intervals, hour, remove_at_end=True):
""" Schedule hours in intervals. The last matching interval is truncated
to match the specified hours.
It is possible to truncate the last interval at its beginning or ending.
However this does nothing on the given interval order that should be
submitted accordingly.
:param list intervals: a list of tuples (beginning datetime, ending datetime)
        :param int/float hour: number of hours to schedule. It will be converted
                               into a timedelta, but should be submitted as an
                               int or float.
:param boolean remove_at_end: remove extra hours at the end of the last
matching interval. Otherwise, do it at the
beginning.
:return list results: a list of intervals. If the number of hours to schedule
is greater than the possible scheduling in the intervals, no extra-scheduling
is done, and results == intervals. """
results = []
res = datetime.timedelta()
limit = datetime.timedelta(hours=hour)
for interval in intervals:
res += interval[1] - interval[0]
if res > limit and remove_at_end:
interval = (interval[0], interval[1] + relativedelta(seconds=seconds(limit-res)))
elif res > limit:
interval = (interval[0] + relativedelta(seconds=seconds(res-limit)), interval[1])
results.append(interval)
if res > limit:
break
return results
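    # Illustrative sketch (not part of the original module; dt = datetime.datetime):
    # scheduling 6 hours over an 8-hour day truncates the last interval.
    #
    #   interval_schedule_hours([(dt(2014, 1, 6, 8), dt(2014, 1, 6, 12)),
    #                            (dt(2014, 1, 6, 13), dt(2014, 1, 6, 17))], 6)
    #   -> [(dt(2014, 1, 6, 8), dt(2014, 1, 6, 12)),
    #       (dt(2014, 1, 6, 13), dt(2014, 1, 6, 15))]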
# --------------------------------------------------
# Date and hours computation
# --------------------------------------------------
def get_attendances_for_weekdays(self, cr, uid, id, weekdays, context=None):
""" Given a list of weekdays, return matching resource.calendar.attendance"""
calendar = self.browse(cr, uid, id, context=None)
return [att for att in calendar.attendance_ids if int(att.dayofweek) in weekdays]
def get_weekdays(self, cr, uid, id, default_weekdays=None, context=None):
""" Return the list of weekdays that contain at least one working interval.
If no id is given (no calendar), return default weekdays. """
if id is None:
return default_weekdays if default_weekdays is not None else [0, 1, 2, 3, 4]
calendar = self.browse(cr, uid, id, context=None)
weekdays = set()
for attendance in calendar.attendance_ids:
weekdays.add(int(attendance.dayofweek))
return list(weekdays)
def get_next_day(self, cr, uid, id, day_date, context=None):
""" Get following date of day_date, based on resource.calendar. If no
calendar is provided, just return the next day.
:param int id: id of a resource.calendar. If not given, simply add one day
to the submitted date.
:param date day_date: current day as a date
:return date: next day of calendar, or just next day """
if not id:
return day_date + relativedelta(days=1)
weekdays = self.get_weekdays(cr, uid, id, context)
base_index = -1
for weekday in weekdays:
if weekday > day_date.weekday():
break
base_index += 1
new_index = (base_index + 1) % len(weekdays)
days = (weekdays[new_index] - day_date.weekday())
if days < 0:
days = 7 + days
return day_date + relativedelta(days=days)
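    # Illustrative sketch (calendar_id is hypothetical): with a Mon-Fri calendar,
    #
    #   get_next_day(cr, uid, calendar_id, datetime.date(2014, 1, 10))  # Friday
    #   -> datetime.date(2014, 1, 13)                                   # Monday
    #
    # Without a calendar id, the result is simply the next day (Saturday).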
def get_previous_day(self, cr, uid, id, day_date, context=None):
""" Get previous date of day_date, based on resource.calendar. If no
calendar is provided, just return the previous day.
:param int id: id of a resource.calendar. If not given, simply remove
one day from the submitted date.
:param date day_date: current day as a date
:return date: previous day of calendar, or just previous day """
if not id:
return day_date + relativedelta(days=-1)
weekdays = self.get_weekdays(cr, uid, id, context)
weekdays.reverse()
base_index = -1
for weekday in weekdays:
if weekday < day_date.weekday():
break
base_index += 1
new_index = (base_index + 1) % len(weekdays)
days = (weekdays[new_index] - day_date.weekday())
if days > 0:
days = days - 7
return day_date + relativedelta(days=days)
def get_leave_intervals(self, cr, uid, id, resource_id=None,
start_datetime=None, end_datetime=None,
context=None):
"""Get the leaves of the calendar. Leaves can be filtered on the resource,
the start datetime or the end datetime.
:param int resource_id: the id of the resource to take into account when
computing the leaves. If not set, only general
leaves are computed. If set, generic and
specific leaves are computed.
:param datetime start_datetime: if provided, do not take into account leaves
ending before this date.
:param datetime end_datetime: if provided, do not take into account leaves
beginning after this date.
:return list leaves: list of tuples (start_datetime, end_datetime) of
leave intervals
"""
resource_calendar = self.browse(cr, uid, id, context=context)
leaves = []
for leave in resource_calendar.leave_ids:
if leave.resource_id and not resource_id == leave.resource_id.id:
continue
date_from = datetime.datetime.strptime(leave.date_from, tools.DEFAULT_SERVER_DATETIME_FORMAT)
if end_datetime and date_from > end_datetime:
continue
date_to = datetime.datetime.strptime(leave.date_to, tools.DEFAULT_SERVER_DATETIME_FORMAT)
if start_datetime and date_to < start_datetime:
continue
leaves.append((date_from, date_to))
return leaves
def get_working_intervals_of_day(self, cr, uid, id, start_dt=None, end_dt=None,
leaves=None, compute_leaves=False, resource_id=None,
default_interval=None, context=None):
""" Get the working intervals of the day based on calendar. This method
handle leaves that come directly from the leaves parameter or can be computed.
:param int id: resource.calendar id; take the first one if is a list
:param datetime start_dt: datetime object that is the beginning hours
for the working intervals computation; any
working interval beginning before start_dt
will be truncated. If not set, set to end_dt
or today() if no end_dt at 00.00.00.
:param datetime end_dt: datetime object that is the ending hour
for the working intervals computation; any
working interval ending after end_dt
will be truncated. If not set, set to start_dt()
at 23.59.59.
:param list leaves: a list of tuples(start_datetime, end_datetime) that
represent leaves.
:param boolean compute_leaves: if set and if leaves is None, compute the
leaves based on calendar and resource.
If leaves is None and compute_leaves false
no leaves are taken into account.
:param int resource_id: the id of the resource to take into account when
computing the leaves. If not set, only general
leaves are computed. If set, generic and
specific leaves are computed.
:param tuple default_interval: if no id, try to return a default working
day using default_interval[0] as beginning
hour, and default_interval[1] as ending hour.
Example: default_interval = (8, 16).
Otherwise, a void list of working intervals
is returned when id is None.
:return list intervals: a list of tuples (start_datetime, end_datetime)
of work intervals """
if isinstance(id, (list, tuple)):
id = id[0]
# Computes start_dt, end_dt (with default values if not set) + off-interval work limits
work_limits = []
if start_dt is None and end_dt is not None:
start_dt = end_dt.replace(hour=0, minute=0, second=0)
elif start_dt is None:
start_dt = datetime.datetime.now().replace(hour=0, minute=0, second=0)
else:
work_limits.append((start_dt.replace(hour=0, minute=0, second=0), start_dt))
if end_dt is None:
end_dt = start_dt.replace(hour=23, minute=59, second=59)
else:
work_limits.append((end_dt, end_dt.replace(hour=23, minute=59, second=59)))
assert start_dt.date() == end_dt.date(), 'get_working_intervals_of_day is restricted to one day'
intervals = []
work_dt = start_dt.replace(hour=0, minute=0, second=0)
# no calendar: try to use the default_interval, then return directly
if id is None:
working_interval = []
if default_interval:
working_interval = (start_dt.replace(hour=default_interval[0], minute=0, second=0), start_dt.replace(hour=default_interval[1], minute=0, second=0))
intervals = self.interval_remove_leaves(working_interval, work_limits)
return intervals
working_intervals = []
tz_info = fields.datetime.context_timestamp(cr, uid, work_dt, context=context).tzinfo
for calendar_working_day in self.get_attendances_for_weekdays(cr, uid, id, [start_dt.weekday()], context):
x = work_dt.replace(hour=int(calendar_working_day.hour_from))
y = work_dt.replace(hour=int(calendar_working_day.hour_to))
x = x.replace(tzinfo=tz_info).astimezone(pytz.UTC).replace(tzinfo=None)
y = y.replace(tzinfo=tz_info).astimezone(pytz.UTC).replace(tzinfo=None)
working_interval = (x, y)
working_intervals += self.interval_remove_leaves(working_interval, work_limits)
# find leave intervals
if leaves is None and compute_leaves:
leaves = self.get_leave_intervals(cr, uid, id, resource_id=resource_id, context=None)
# filter according to leaves
for interval in working_intervals:
work_intervals = self.interval_remove_leaves(interval, leaves)
intervals += work_intervals
return intervals
def get_working_hours_of_date(self, cr, uid, id, start_dt=None, end_dt=None,
leaves=None, compute_leaves=False, resource_id=None,
default_interval=None, context=None):
""" Get the working hours of the day based on calendar. This method uses
get_working_intervals_of_day to have the work intervals of the day. It
then calculates the number of hours contained in those intervals. """
res = datetime.timedelta()
intervals = self.get_working_intervals_of_day(
cr, uid, id,
start_dt, end_dt, leaves,
compute_leaves, resource_id,
default_interval, context)
for interval in intervals:
res += interval[1] - interval[0]
return seconds(res) / 3600.0
def get_working_hours(self, cr, uid, id, start_dt, end_dt, compute_leaves=False,
resource_id=None, default_interval=None, context=None):
hours = 0.0
for day in rrule.rrule(rrule.DAILY, dtstart=start_dt,
until=(end_dt + datetime.timedelta(days=1)).replace(hour=0, minute=0, second=0),
byweekday=self.get_weekdays(cr, uid, id, context=context)):
day_start_dt = day.replace(hour=0, minute=0, second=0)
if start_dt and day.date() == start_dt.date():
day_start_dt = start_dt
day_end_dt = day.replace(hour=23, minute=59, second=59)
if end_dt and day.date() == end_dt.date():
day_end_dt = end_dt
hours += self.get_working_hours_of_date(
cr, uid, id, start_dt=day_start_dt, end_dt=day_end_dt,
compute_leaves=compute_leaves, resource_id=resource_id,
default_interval=default_interval,
context=context)
return hours
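    # Typical call (a sketch; calendar_id and resource_id are hypothetical):
    # working hours over one week, with leaves deducted and an 8h-16h fallback
    # day when no calendar is set.
    #
    #   calendar_obj.get_working_hours(
    #       cr, uid, calendar_id,
    #       datetime.datetime(2014, 1, 6, 8, 0), datetime.datetime(2014, 1, 10, 18, 0),
    #       compute_leaves=True, resource_id=resource_id,
    #       default_interval=(8, 16))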
# --------------------------------------------------
# Hours scheduling
# --------------------------------------------------
def _schedule_hours(self, cr, uid, id, hours, day_dt=None,
compute_leaves=False, resource_id=None,
default_interval=None, context=None):
""" Schedule hours of work, using a calendar and an optional resource to
compute working and leave days. This method can be used backwards, i.e.
scheduling days before a deadline.
:param int hours: number of hours to schedule. Use a negative number to
compute a backwards scheduling.
:param datetime day_dt: reference date to compute working days. If days is
> 0 date is the starting date. If days is < 0
date is the ending date.
:param boolean compute_leaves: if set, compute the leaves based on calendar
and resource. Otherwise no leaves are taken
into account.
:param int resource_id: the id of the resource to take into account when
computing the leaves. If not set, only general
leaves are computed. If set, generic and
specific leaves are computed.
:param tuple default_interval: if no id, try to return a default working
day using default_interval[0] as beginning
hour, and default_interval[1] as ending hour.
Example: default_interval = (8, 16).
Otherwise, a void list of working intervals
is returned when id is None.
:return tuple (datetime, intervals): datetime is the beginning/ending date
                                             of the scheduling; intervals are the
working intervals of the scheduling.
        Note: rrule.rrule is not used because it does not seem to allow
        getting back in time.
"""
if day_dt is None:
day_dt = datetime.datetime.now()
backwards = (hours < 0)
hours = abs(hours)
intervals = []
remaining_hours = hours * 1.0
iterations = 0
current_datetime = day_dt
call_args = dict(compute_leaves=compute_leaves, resource_id=resource_id, default_interval=default_interval, context=context)
while float_compare(remaining_hours, 0.0, precision_digits=2) in (1, 0) and iterations < 1000:
if backwards:
call_args['end_dt'] = current_datetime
else:
call_args['start_dt'] = current_datetime
working_intervals = self.get_working_intervals_of_day(cr, uid, id, **call_args)
if id is None and not working_intervals: # no calendar -> consider working 8 hours
remaining_hours -= 8.0
elif working_intervals:
if backwards:
working_intervals.reverse()
new_working_intervals = self.interval_schedule_hours(working_intervals, remaining_hours, not backwards)
if backwards:
new_working_intervals.reverse()
res = datetime.timedelta()
for interval in working_intervals:
res += interval[1] - interval[0]
remaining_hours -= (seconds(res) / 3600.0)
if backwards:
intervals = new_working_intervals + intervals
else:
intervals = intervals + new_working_intervals
# get next day
if backwards:
current_datetime = datetime.datetime.combine(self.get_previous_day(cr, uid, id, current_datetime, context), datetime.time(23, 59, 59))
else:
current_datetime = datetime.datetime.combine(self.get_next_day(cr, uid, id, current_datetime, context), datetime.time())
# avoid infinite loops
iterations += 1
return intervals
def schedule_hours_get_date(self, cr, uid, id, hours, day_dt=None,
compute_leaves=False, resource_id=None,
default_interval=None, context=None):
""" Wrapper on _schedule_hours: return the beginning/ending datetime of
an hours scheduling. """
res = self._schedule_hours(cr, uid, id, hours, day_dt, compute_leaves, resource_id, default_interval, context)
return res and res[0][0] or False
def schedule_hours(self, cr, uid, id, hours, day_dt=None,
compute_leaves=False, resource_id=None,
default_interval=None, context=None):
""" Wrapper on _schedule_hours: return the working intervals of an hours
scheduling. """
return self._schedule_hours(cr, uid, id, hours, day_dt, compute_leaves, resource_id, default_interval, context)
# --------------------------------------------------
# Days scheduling
# --------------------------------------------------
def _schedule_days(self, cr, uid, id, days, day_date=None, compute_leaves=False,
resource_id=None, default_interval=None, context=None):
"""Schedule days of work, using a calendar and an optional resource to
compute working and leave days. This method can be used backwards, i.e.
scheduling days before a deadline.
:param int days: number of days to schedule. Use a negative number to
compute a backwards scheduling.
:param date day_date: reference date to compute working days. If days is > 0
date is the starting date. If days is < 0 date is the
ending date.
:param boolean compute_leaves: if set, compute the leaves based on calendar
and resource. Otherwise no leaves are taken
into account.
:param int resource_id: the id of the resource to take into account when
computing the leaves. If not set, only general
leaves are computed. If set, generic and
specific leaves are computed.
:param tuple default_interval: if no id, try to return a default working
day using default_interval[0] as beginning
hour, and default_interval[1] as ending hour.
Example: default_interval = (8, 16).
Otherwise, a void list of working intervals
is returned when id is None.
:return tuple (datetime, intervals): datetime is the beginning/ending date
                                             of the scheduling; intervals are the
working intervals of the scheduling.
        Implementation note: rrule.rrule is not used because it does not seem
        to allow getting back in time.
"""
if day_date is None:
day_date = datetime.datetime.now()
backwards = (days < 0)
days = abs(days)
intervals = []
planned_days = 0
iterations = 0
current_datetime = day_date.replace(hour=0, minute=0, second=0)
while planned_days < days and iterations < 1000:
working_intervals = self.get_working_intervals_of_day(
cr, uid, id, current_datetime,
compute_leaves=compute_leaves, resource_id=resource_id,
default_interval=default_interval,
context=context)
if id is None or working_intervals: # no calendar -> no working hours, but day is considered as worked
planned_days += 1
intervals += working_intervals
# get next day
if backwards:
current_datetime = self.get_previous_day(cr, uid, id, current_datetime, context)
else:
current_datetime = self.get_next_day(cr, uid, id, current_datetime, context)
# avoid infinite loops
iterations += 1
return intervals
def schedule_days_get_date(self, cr, uid, id, days, day_date=None, compute_leaves=False,
resource_id=None, default_interval=None, context=None):
""" Wrapper on _schedule_days: return the beginning/ending datetime of
a days scheduling. """
res = self._schedule_days(cr, uid, id, days, day_date, compute_leaves, resource_id, default_interval, context)
return res and res[-1][1] or False
def schedule_days(self, cr, uid, id, days, day_date=None, compute_leaves=False,
resource_id=None, default_interval=None, context=None):
""" Wrapper on _schedule_days: return the working intervals of a days
scheduling. """
return self._schedule_days(cr, uid, id, days, day_date, compute_leaves, resource_id, default_interval, context)
# --------------------------------------------------
# Compatibility / to clean / to remove
# --------------------------------------------------
def working_hours_on_day(self, cr, uid, resource_calendar_id, day, context=None):
""" Used in hr_payroll/hr_payroll.py
:deprecated: OpenERP saas-3. Use get_working_hours_of_date instead. Note:
        since saas-3, takes hours/minutes into account, not just the whole day."""
if isinstance(day, datetime.datetime):
day = day.replace(hour=0, minute=0)
return self.get_working_hours_of_date(cr, uid, resource_calendar_id.id, start_dt=day, context=None)
def interval_min_get(self, cr, uid, id, dt_from, hours, resource=False):
""" Schedule hours backwards. Used in mrp_operations/mrp_operations.py.
:deprecated: OpenERP saas-3. Use schedule_hours instead. Note: since
saas-3, counts leave hours instead of all-day leaves."""
return self.schedule_hours(
cr, uid, id, hours * -1.0,
day_dt=dt_from.replace(minute=0, second=0),
compute_leaves=True, resource_id=resource,
default_interval=(8, 16)
)
def interval_get_multi(self, cr, uid, date_and_hours_by_cal, resource=False, byday=True):
""" Used in mrp_operations/mrp_operations.py (default parameters) and in
interval_get()
:deprecated: OpenERP saas-3. Use schedule_hours instead. Note:
        byday was not used. Since saas-3, counts leave hours instead of all-day leaves."""
res = {}
for dt_str, hours, calendar_id in date_and_hours_by_cal:
result = self.schedule_hours(
cr, uid, calendar_id, hours,
day_dt=datetime.datetime.strptime(dt_str, '%Y-%m-%d %H:%M:%S').replace(second=0),
compute_leaves=True, resource_id=resource,
default_interval=(8, 16)
)
res[(dt_str, hours, calendar_id)] = result
return res
def interval_get(self, cr, uid, id, dt_from, hours, resource=False, byday=True):
""" Unifier of interval_get_multi. Used in: mrp_operations/mrp_operations.py,
crm/crm_lead.py (res given).
:deprecated: OpenERP saas-3. Use get_working_hours instead."""
res = self.interval_get_multi(
cr, uid, [(dt_from.strftime('%Y-%m-%d %H:%M:%S'), hours, id)], resource, byday)[(dt_from.strftime('%Y-%m-%d %H:%M:%S'), hours, id)]
return res
def interval_hours_get(self, cr, uid, id, dt_from, dt_to, resource=False):
""" Unused wrapper.
:deprecated: OpenERP saas-3. Use get_working_hours instead."""
return self._interval_hours_get(cr, uid, id, dt_from, dt_to, resource_id=resource)
def _interval_hours_get(self, cr, uid, id, dt_from, dt_to, resource_id=False, timezone_from_uid=None, exclude_leaves=True, context=None):
""" Computes working hours between two dates, taking always same hour/minuts.
:deprecated: OpenERP saas-3. Use get_working_hours instead. Note: since saas-3,
now resets hour/minuts. Now counts leave hours instead of all-day leaves."""
return self.get_working_hours(
cr, uid, id, dt_from, dt_to,
compute_leaves=(not exclude_leaves), resource_id=resource_id,
default_interval=(8, 16), context=context)
class resource_calendar_attendance(osv.osv):
_name = "resource.calendar.attendance"
_description = "Work Detail"
_columns = {
'name' : fields.char("Name", required=True),
'dayofweek': fields.selection([('0','Monday'),('1','Tuesday'),('2','Wednesday'),('3','Thursday'),('4','Friday'),('5','Saturday'),('6','Sunday')], 'Day of Week', required=True, select=True),
'date_from' : fields.date('Starting Date'),
'hour_from' : fields.float('Work from', required=True, help="Start and End time of working.", select=True),
'hour_to' : fields.float("Work to", required=True),
'calendar_id' : fields.many2one("resource.calendar", "Resource's Calendar", required=True),
}
_order = 'dayofweek, hour_from'
_defaults = {
'dayofweek' : '0'
}
def hours_time_string(hours):
""" convert a number of hours (float) into a string with format '%H:%M' """
minutes = int(round(hours * 60))
return "%02d:%02d" % divmod(minutes, 60)
class resource_resource(osv.osv):
_name = "resource.resource"
_description = "Resource Detail"
_columns = {
'name': fields.char("Name", required=True),
'code': fields.char('Code', size=16, copy=False),
'active' : fields.boolean('Active', help="If the active field is set to False, it will allow you to hide the resource record without removing it."),
'company_id' : fields.many2one('res.company', 'Company'),
'resource_type': fields.selection([('user','Human'),('material','Material')], 'Resource Type', required=True),
'user_id' : fields.many2one('res.users', 'User', help='Related user name for the resource to manage its access.'),
        'time_efficiency' : fields.float('Efficiency Factor', size=8, required=True, help="This field depicts the efficiency of the resource at completing tasks, e.g. a resource put alone on a phase of 5 days with 5 tasks assigned will show a load of 100% for this phase by default; with an efficiency of 200%, its load will only be 50%."),
'calendar_id' : fields.many2one("resource.calendar", "Working Time", help="Define the schedule of resource"),
}
_defaults = {
'resource_type' : 'user',
'time_efficiency' : 1,
'active' : True,
'company_id': lambda self, cr, uid, context: self.pool.get('res.company')._company_default_get(cr, uid, 'resource.resource', context=context)
}
def copy(self, cr, uid, id, default=None, context=None):
if default is None:
default = {}
if not default.get('name', False):
default.update(name=_('%s (copy)') % (self.browse(cr, uid, id, context=context).name))
return super(resource_resource, self).copy(cr, uid, id, default, context)
def generate_resources(self, cr, uid, user_ids, calendar_id, context=None):
"""
Return a list of Resource Class objects for the resources allocated to the phase.
NOTE: Used in project/project.py
"""
resource_objs = {}
user_pool = self.pool.get('res.users')
for user in user_pool.browse(cr, uid, user_ids, context=context):
resource_objs[user.id] = {
'name' : user.name,
'vacation': [],
'efficiency': 1.0,
}
resource_ids = self.search(cr, uid, [('user_id', '=', user.id)], context=context)
if resource_ids:
for resource in self.browse(cr, uid, resource_ids, context=context):
resource_objs[user.id]['efficiency'] = resource.time_efficiency
resource_cal = resource.calendar_id.id
if resource_cal:
leaves = self.compute_vacation(cr, uid, calendar_id, resource.id, resource_cal, context=context)
resource_objs[user.id]['vacation'] += list(leaves)
return resource_objs
def compute_vacation(self, cr, uid, calendar_id, resource_id=False, resource_calendar=False, context=None):
"""
Compute the vacation from the working calendar of the resource.
@param calendar_id : working calendar of the project
@param resource_id : resource working on phase/task
@param resource_calendar : working calendar of the resource
NOTE: used in project/project.py, and in generate_resources
"""
resource_calendar_leaves_pool = self.pool.get('resource.calendar.leaves')
leave_list = []
if resource_id:
leave_ids = resource_calendar_leaves_pool.search(cr, uid, ['|', ('calendar_id', '=', calendar_id),
('calendar_id', '=', resource_calendar),
('resource_id', '=', resource_id)
], context=context)
else:
leave_ids = resource_calendar_leaves_pool.search(cr, uid, [('calendar_id', '=', calendar_id),
('resource_id', '=', False)
], context=context)
leaves = resource_calendar_leaves_pool.read(cr, uid, leave_ids, ['date_from', 'date_to'], context=context)
for i in range(len(leaves)):
dt_start = datetime.datetime.strptime(leaves[i]['date_from'], '%Y-%m-%d %H:%M:%S')
dt_end = datetime.datetime.strptime(leaves[i]['date_to'], '%Y-%m-%d %H:%M:%S')
no = dt_end - dt_start
            for x in range(int(no.days + 1)):
                leave_list.append((dt_start + datetime.timedelta(days=x)).strftime('%Y-%m-%d'))
leave_list.sort()
return leave_list
def compute_working_calendar(self, cr, uid, calendar_id=False, context=None):
"""
        Convert the working calendar from the 'OpenERP' format into the 'Faces' format.
@param calendar_id : working calendar of the project
NOTE: used in project/project.py
"""
if not calendar_id:
            # Calendar is not specified: default to Mon-Fri, 8:00-12:00 and 13:00-17:00
return [('fri', '8:0-12:0','13:0-17:0'), ('thu', '8:0-12:0','13:0-17:0'), ('wed', '8:0-12:0','13:0-17:0'),
('mon', '8:0-12:0','13:0-17:0'), ('tue', '8:0-12:0','13:0-17:0')]
resource_attendance_pool = self.pool.get('resource.calendar.attendance')
time_range = "8:00-8:00"
non_working = ""
week_days = {"0": "mon", "1": "tue", "2": "wed","3": "thu", "4": "fri", "5": "sat", "6": "sun"}
wk_days = {}
wk_time = {}
wktime_list = []
wktime_cal = []
week_ids = resource_attendance_pool.search(cr, uid, [('calendar_id', '=', calendar_id)], context=context)
weeks = resource_attendance_pool.read(cr, uid, week_ids, ['dayofweek', 'hour_from', 'hour_to'], context=context)
# Convert time formats into appropriate format required
# and create a list like [('mon', '8:00-12:00'), ('mon', '13:00-18:00')]
for week in weeks:
res_str = ""
day = None
if week_days.get(week['dayofweek'],False):
day = week_days[week['dayofweek']]
wk_days[week['dayofweek']] = week_days[week['dayofweek']]
else:
raise osv.except_osv(_('Configuration Error!'),_('Make sure the Working time has been configured with proper week days!'))
hour_from_str = hours_time_string(week['hour_from'])
hour_to_str = hours_time_string(week['hour_to'])
res_str = hour_from_str + '-' + hour_to_str
wktime_list.append((day, res_str))
# Convert into format like [('mon', '8:00-12:00', '13:00-18:00')]
for item in wktime_list:
            if item[0] in wk_time:
wk_time[item[0]].append(item[1])
else:
wk_time[item[0]] = [item[0]]
wk_time[item[0]].append(item[1])
for k,v in wk_time.items():
wktime_cal.append(tuple(v))
# Add for the non-working days like: [('sat, sun', '8:00-8:00')]
for k, v in wk_days.items():
            if k in week_days:
week_days.pop(k)
for v in week_days.itervalues():
non_working += v + ','
if non_working:
wktime_cal.append((non_working[:-1], time_range))
return wktime_cal
class resource_calendar_leaves(osv.osv):
_name = "resource.calendar.leaves"
_description = "Leave Detail"
_columns = {
'name' : fields.char("Name"),
'company_id' : fields.related('calendar_id','company_id',type='many2one',relation='res.company',string="Company", store=True, readonly=True),
'calendar_id' : fields.many2one("resource.calendar", "Working Time"),
'date_from' : fields.datetime('Start Date', required=True),
'date_to' : fields.datetime('End Date', required=True),
'resource_id' : fields.many2one("resource.resource", "Resource", help="If empty, this is a generic holiday for the company. If a resource is set, the holiday/leave is only for this resource"),
}
def check_dates(self, cr, uid, ids, context=None):
for leave in self.browse(cr, uid, ids, context=context):
if leave.date_from and leave.date_to and leave.date_from > leave.date_to:
return False
return True
_constraints = [
        (check_dates, 'Error! Leave start-date must be earlier than leave end-date.', ['date_from', 'date_to'])
]
def onchange_resource(self, cr, uid, ids, resource, context=None):
result = {}
if resource:
resource_pool = self.pool.get('resource.resource')
result['calendar_id'] = resource_pool.browse(cr, uid, resource, context=context).calendar_id.id
return {'value': result}
return {'value': {'calendar_id': []}}
def seconds(td):
assert isinstance(td, datetime.timedelta)
return (td.microseconds + (td.seconds + td.days * 24 * 3600) * 10**6) / 10.**6
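# Example (sketch): seconds(datetime.timedelta(hours=1, minutes=30)) -> 5400.0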
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 | 9,137,665,114,203,164,000 | 50.345324 | 356 | 0.57363 | false |
SCSSG/Odoo-SCS | openerp/addons/base/res/res_request.py | 342 | 1677 | # -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2009 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
from openerp.osv import osv, fields
def referencable_models(self, cr, uid, context=None):
obj = self.pool.get('res.request.link')
ids = obj.search(cr, uid, [], context=context)
res = obj.read(cr, uid, ids, ['object', 'name'], context)
return [(r['object'], r['name']) for r in res]
class res_request_link(osv.osv):
_name = 'res.request.link'
_columns = {
'name': fields.char('Name', required=True, translate=True),
'object': fields.char('Object', required=True),
'priority': fields.integer('Priority'),
}
_defaults = {
'priority': 5,
}
_order = 'priority'
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 | 6,838,821,030,197,906,000 | 38 | 78 | 0.605844 | false |
cvegaj/ElectriCERT | venv3/lib/python3.6/site-packages/bitcoin/rpc.py | 1 | 24025 | # Copyright (C) 2007 Jan-Klaas Kollhof
# Copyright (C) 2011-2015 The python-bitcoinlib developers
#
# This file is part of python-bitcoinlib.
#
# It is subject to the license terms in the LICENSE file found in the top-level
# directory of this distribution.
#
# No part of python-bitcoinlib, including this file, may be copied, modified,
# propagated, or distributed except according to the terms contained in the
# LICENSE file.
"""Bitcoin Core RPC support
By default this uses the standard library ``json`` module. By monkey patching,
a different implementation can be used instead, at your own risk:
>>> import simplejson
>>> import bitcoin.rpc
>>> bitcoin.rpc.json = simplejson
(``simplejson`` is the externally maintained version of the same module and
thus better optimized but perhaps less stable.)
"""
from __future__ import absolute_import, division, print_function, unicode_literals
import ssl
try:
import http.client as httplib
except ImportError:
import httplib
import base64
import binascii
import decimal
import json
import os
import platform
import sys
try:
import urllib.parse as urlparse
except ImportError:
import urlparse
import bitcoin
from bitcoin.core import COIN, x, lx, b2lx, CBlock, CBlockHeader, CTransaction, COutPoint, CTxOut
from bitcoin.core.script import CScript
from bitcoin.wallet import CBitcoinAddress, CBitcoinSecret
DEFAULT_USER_AGENT = "AuthServiceProxy/0.1"
DEFAULT_HTTP_TIMEOUT = 30
# (un)hexlify to/from unicode, needed for Python3
unhexlify = binascii.unhexlify
hexlify = binascii.hexlify
if sys.version > '3':
unhexlify = lambda h: binascii.unhexlify(h.encode('utf8'))
hexlify = lambda b: binascii.hexlify(b).decode('utf8')
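# Round-trip sketch: on Python 3 the wrappers above let both helpers speak
# text, e.g. hexlify(b'\x00\xff') -> '00ff' and unhexlify('00ff') -> b'\x00\xff'.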
class JSONRPCError(Exception):
"""JSON-RPC protocol error base class
Subclasses of this class also exist for specific types of errors; the set
of all subclasses is by no means complete.
"""
SUBCLS_BY_CODE = {}
@classmethod
def _register_subcls(cls, subcls):
cls.SUBCLS_BY_CODE[subcls.RPC_ERROR_CODE] = subcls
return subcls
def __new__(cls, rpc_error):
assert cls is JSONRPCError
cls = JSONRPCError.SUBCLS_BY_CODE.get(rpc_error['code'], cls)
self = Exception.__new__(cls)
super(JSONRPCError, self).__init__(
'msg: %r code: %r' %
(rpc_error['message'], rpc_error['code']))
self.error = rpc_error
return self
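# Dispatch sketch (illustrative): instantiating the base class returns the
# registered subclass matching the error code, e.g.
#
#   err = JSONRPCError({'code': -8, 'message': 'Invalid parameter'})
#   isinstance(err, InvalidParameterError)  # -> True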
@JSONRPCError._register_subcls
class ForbiddenBySafeModeError(JSONRPCError):
RPC_ERROR_CODE = -2
@JSONRPCError._register_subcls
class InvalidAddressOrKeyError(JSONRPCError):
RPC_ERROR_CODE = -5
@JSONRPCError._register_subcls
class InvalidParameterError(JSONRPCError):
RPC_ERROR_CODE = -8
@JSONRPCError._register_subcls
class VerifyError(JSONRPCError):
RPC_ERROR_CODE = -25
@JSONRPCError._register_subcls
class VerifyRejectedError(JSONRPCError):
RPC_ERROR_CODE = -26
@JSONRPCError._register_subcls
class VerifyAlreadyInChainError(JSONRPCError):
RPC_ERROR_CODE = -27
@JSONRPCError._register_subcls
class InWarmupError(JSONRPCError):
RPC_ERROR_CODE = -28
class BaseProxy(object):
"""Base JSON-RPC proxy class. Contains only private methods; do not use
directly."""
def __init__(self,
service_url=None,
service_port=None,
btc_conf_file=None,
timeout=DEFAULT_HTTP_TIMEOUT):
# Create a dummy connection early on so if __init__() fails prior to
# __conn being created __del__() can detect the condition and handle it
# correctly.
self.__conn = None
if service_url is None:
# Figure out the path to the bitcoin.conf file
if btc_conf_file is None:
if platform.system() == 'Darwin':
btc_conf_file = os.path.expanduser('~/Library/Application Support/Bitcoin/')
elif platform.system() == 'Windows':
btc_conf_file = os.path.join(os.environ['APPDATA'], 'Bitcoin')
else:
btc_conf_file = os.path.expanduser('~/.bitcoin')
btc_conf_file = os.path.join(btc_conf_file, 'bitcoin.conf')
# Extract contents of bitcoin.conf to build service_url
with open(btc_conf_file, 'r') as fd:
# Bitcoin Core accepts empty rpcuser, not specified in btc_conf_file
conf = {'rpcuser': ""}
for line in fd.readlines():
if '#' in line:
line = line[:line.index('#')]
if '=' not in line:
continue
k, v = line.split('=', 1)
conf[k.strip()] = v.strip()
if service_port is None:
service_port = bitcoin.params.RPC_PORT
conf['rpcport'] = int(conf.get('rpcport', service_port))
conf['rpchost'] = conf.get('rpcconnect', 'localhost')
if 'rpcpassword' not in conf:
raise ValueError('The value of rpcpassword not specified in the configuration file: %s' % btc_conf_file)
service_url = ('%s://%s:%s@%s:%d' %
('http',
conf['rpcuser'], conf['rpcpassword'],
conf['rpchost'], conf['rpcport']))
self.__service_url = service_url
self.__url = urlparse.urlparse(service_url)
if self.__url.scheme not in ('http',):
raise ValueError('Unsupported URL scheme %r' % self.__url.scheme)
if self.__url.port is None:
port = httplib.HTTP_PORT
else:
port = self.__url.port
self.__id_count = 0
authpair = "%s:%s" % (self.__url.username, self.__url.password)
authpair = authpair.encode('utf8')
self.__auth_header = b"Basic " + base64.b64encode(authpair)
self.__conn = httplib.HTTPConnection(self.__url.hostname, port=port,
timeout=timeout)
def _call(self, service_name, *args):
self.__id_count += 1
postdata = json.dumps({'version': '1.1',
'method': service_name,
'params': args,
'id': self.__id_count})
self.__conn.request('POST', self.__url.path, postdata,
{'Host': self.__url.hostname,
'User-Agent': DEFAULT_USER_AGENT,
'Authorization': self.__auth_header,
'Content-type': 'application/json'})
response = self._get_response()
if response['error'] is not None:
raise JSONRPCError(response['error'])
elif 'result' not in response:
raise JSONRPCError({
'code': -343, 'message': 'missing JSON-RPC result'})
else:
return response['result']
def _batch(self, rpc_call_list):
postdata = json.dumps(list(rpc_call_list))
self.__conn.request('POST', self.__url.path, postdata,
{'Host': self.__url.hostname,
'User-Agent': DEFAULT_USER_AGENT,
'Authorization': self.__auth_header,
'Content-type': 'application/json'})
return self._get_response()
def _get_response(self):
http_response = self.__conn.getresponse()
if http_response is None:
raise JSONRPCError({
'code': -342, 'message': 'missing HTTP response from server'})
return json.loads(http_response.read().decode('utf8'),
parse_float=decimal.Decimal)
def __del__(self):
if self.__conn is not None:
self.__conn.close()
class RawProxy(BaseProxy):
"""Low-level proxy to a bitcoin JSON-RPC service
Unlike ``Proxy``, no conversion is done besides parsing JSON. As far as
Python is concerned, you can call any method; ``JSONRPCError`` will be
raised if the server does not recognize it.
"""
def __init__(self,
service_url=None,
service_port=None,
btc_conf_file=None,
timeout=DEFAULT_HTTP_TIMEOUT,
**kwargs):
super(RawProxy, self).__init__(service_url=service_url,
service_port=service_port,
btc_conf_file=btc_conf_file,
timeout=timeout,
**kwargs)
def __getattr__(self, name):
if name.startswith('__') and name.endswith('__'):
# Python internal stuff
raise AttributeError
# Create a callable to do the actual call
f = lambda *args: self._call(name, *args)
# Make debuggers show <function bitcoin.rpc.name> rather than <function
# bitcoin.rpc.<lambda>>
f.__name__ = name
return f
class Proxy(BaseProxy):
"""Proxy to a bitcoin RPC service
Unlike ``RawProxy``, data is passed as ``bitcoin.core`` objects or packed
bytes, rather than JSON or hex strings. Not all methods are implemented
yet; you can use ``call`` to access missing ones in a forward-compatible
way. Assumes Bitcoin Core version >= v0.13.0; older versions mostly work,
but there are a few incompatibilities.
"""
def __init__(self,
service_url=None,
service_port=None,
btc_conf_file=None,
timeout=DEFAULT_HTTP_TIMEOUT,
**kwargs):
"""Create a proxy object
If ``service_url`` is not specified, the username and password are read
out of the file ``btc_conf_file``. If ``btc_conf_file`` is not
specified, ``~/.bitcoin/bitcoin.conf`` or equivalent is used by
default. The default port is set according to the chain parameters in
use: mainnet, testnet, or regtest.
Usually no arguments to ``Proxy()`` are needed; the local bitcoind will
be used.
``timeout`` - timeout in seconds before the HTTP interface times out
"""
super(Proxy, self).__init__(service_url=service_url,
service_port=service_port,
btc_conf_file=btc_conf_file,
timeout=timeout,
**kwargs)
def call(self, service_name, *args):
"""Call an RPC method by name and raw (JSON encodable) arguments"""
return self._call(service_name, *args)
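    # Minimal usage sketch (assumes a reachable local bitcoind and a readable
    # bitcoin.conf with rpcpassword set):
    #
    #   proxy = Proxy()
    #   tip = proxy.getbestblockhash()
    #   block = proxy.getblock(tip)
    #   n_txs = len(block.vtx)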
def dumpprivkey(self, addr):
"""Return the private key matching an address
"""
r = self._call('dumpprivkey', str(addr))
return CBitcoinSecret(r)
def fundrawtransaction(self, tx, include_watching=False):
"""Add inputs to a transaction until it has enough in value to meet its out value.
include_watching - Also select inputs which are watch only
Returns dict:
{'tx': Resulting tx,
'fee': Fee the resulting transaction pays,
'changepos': Position of added change output, or -1,
}
"""
hextx = hexlify(tx.serialize())
r = self._call('fundrawtransaction', hextx, include_watching)
r['tx'] = CTransaction.deserialize(unhexlify(r['hex']))
del r['hex']
r['fee'] = int(r['fee'] * COIN)
return r
def generate(self, numblocks):
"""Mine blocks immediately (before the RPC call returns)
numblocks - How many blocks are generated immediately.
Returns iterable of block hashes generated.
"""
r = self._call('generate', numblocks)
return (lx(blk_hash) for blk_hash in r)
def getaccountaddress(self, account=None):
"""Return the current Bitcoin address for receiving payments to this
account."""
r = self._call('getaccountaddress', account)
return CBitcoinAddress(r)
def getbalance(self, account='*', minconf=1):
"""Get the balance
account - The selected account. Defaults to "*" for entire wallet. It
may be the default account using "".
minconf - Only include transactions confirmed at least this many times.
(default=1)
"""
r = self._call('getbalance', account, minconf)
return int(r*COIN)
def getbestblockhash(self):
"""Return hash of best (tip) block in longest block chain."""
return lx(self._call('getbestblockhash'))
def getblockheader(self, block_hash, verbose=False):
"""Get block header <block_hash>
verbose - If true a dict is returned with the values returned by
getblockheader that are not in the block header itself
(height, nextblockhash, etc.)
Raises IndexError if block_hash is not valid.
"""
try:
block_hash = b2lx(block_hash)
except TypeError:
raise TypeError('%s.getblockheader(): block_hash must be bytes; got %r instance' %
(self.__class__.__name__, block_hash.__class__))
try:
r = self._call('getblockheader', block_hash, verbose)
except InvalidAddressOrKeyError as ex:
raise IndexError('%s.getblockheader(): %s (%d)' %
(self.__class__.__name__, ex.error['message'], ex.error['code']))
if verbose:
nextblockhash = None
if 'nextblockhash' in r:
nextblockhash = lx(r['nextblockhash'])
return {'confirmations':r['confirmations'],
'height':r['height'],
'mediantime':r['mediantime'],
'nextblockhash':nextblockhash,
'chainwork':x(r['chainwork'])}
else:
return CBlockHeader.deserialize(unhexlify(r))
def getblock(self, block_hash):
"""Get block <block_hash>
Raises IndexError if block_hash is not valid.
"""
try:
block_hash = b2lx(block_hash)
except TypeError:
raise TypeError('%s.getblock(): block_hash must be bytes; got %r instance' %
(self.__class__.__name__, block_hash.__class__))
try:
r = self._call('getblock', block_hash, False)
except InvalidAddressOrKeyError as ex:
raise IndexError('%s.getblock(): %s (%d)' %
(self.__class__.__name__, ex.error['message'], ex.error['code']))
return CBlock.deserialize(unhexlify(r))
def getblockcount(self):
"""Return the number of blocks in the longest block chain"""
return self._call('getblockcount')
def getblockhash(self, height):
"""Return hash of block in best-block-chain at height.
Raises IndexError if height is not valid.
"""
try:
return lx(self._call('getblockhash', height))
except InvalidParameterError as ex:
raise IndexError('%s.getblockhash(): %s (%d)' %
(self.__class__.__name__, ex.error['message'], ex.error['code']))
def getinfo(self):
"""Return a JSON object containing various state info"""
r = self._call('getinfo')
if 'balance' in r:
r['balance'] = int(r['balance'] * COIN)
if 'paytxfee' in r:
r['paytxfee'] = int(r['paytxfee'] * COIN)
return r
def getmininginfo(self):
"""Return a JSON object containing mining-related information"""
return self._call('getmininginfo')
def getnewaddress(self, account=None):
"""Return a new Bitcoin address for receiving payments.
If account is not None, it is added to the address book so payments
received with the address will be credited to account.
"""
r = None
if account is not None:
r = self._call('getnewaddress', account)
else:
r = self._call('getnewaddress')
return CBitcoinAddress(r)
def getrawchangeaddress(self):
"""Returns a new Bitcoin address, for receiving change.
This is for use with raw transactions, NOT normal use.
"""
r = self._call('getrawchangeaddress')
return CBitcoinAddress(r)
def getrawmempool(self, verbose=False):
"""Return the mempool"""
if verbose:
return self._call('getrawmempool', verbose)
else:
r = self._call('getrawmempool')
r = [lx(txid) for txid in r]
return r
def getrawtransaction(self, txid, verbose=False):
"""Return transaction with hash txid
Raises IndexError if transaction not found.
verbose - If true a dict is returned instead with additional
information on the transaction.
Note that if all txouts are spent and the transaction index is not
enabled the transaction may not be available.
"""
try:
r = self._call('getrawtransaction', b2lx(txid), 1 if verbose else 0)
except InvalidAddressOrKeyError as ex:
raise IndexError('%s.getrawtransaction(): %s (%d)' %
(self.__class__.__name__, ex.error['message'], ex.error['code']))
if verbose:
r['tx'] = CTransaction.deserialize(unhexlify(r['hex']))
del r['hex']
del r['txid']
del r['version']
del r['locktime']
del r['vin']
del r['vout']
r['blockhash'] = lx(r['blockhash']) if 'blockhash' in r else None
else:
r = CTransaction.deserialize(unhexlify(r))
return r
def getreceivedbyaddress(self, addr, minconf=1):
"""Return total amount received by given a (wallet) address
Get the amount received by <address> in transactions with at least
[minconf] confirmations.
Works only for addresses in the local wallet; other addresses will
always show zero.
addr - The address. (CBitcoinAddress instance)
minconf - Only include transactions confirmed at least this many times.
(default=1)
"""
r = self._call('getreceivedbyaddress', str(addr), minconf)
return int(r * COIN)
def gettransaction(self, txid):
"""Get detailed information about in-wallet transaction txid
Raises IndexError if transaction not found in the wallet.
FIXME: Returned data types are not yet converted.
"""
try:
r = self._call('gettransaction', b2lx(txid))
except InvalidAddressOrKeyError as ex:
            raise IndexError('%s.gettransaction(): %s (%d)' %
(self.__class__.__name__, ex.error['message'], ex.error['code']))
return r
def gettxout(self, outpoint, includemempool=True):
"""Return details about an unspent transaction output.
Raises IndexError if outpoint is not found or was spent.
includemempool - Include mempool txouts
"""
r = self._call('gettxout', b2lx(outpoint.hash), outpoint.n, includemempool)
if r is None:
raise IndexError('%s.gettxout(): unspent txout %r not found' % (self.__class__.__name__, outpoint))
r['txout'] = CTxOut(int(r['value'] * COIN),
CScript(unhexlify(r['scriptPubKey']['hex'])))
del r['value']
del r['scriptPubKey']
r['bestblock'] = lx(r['bestblock'])
return r
def importaddress(self, addr, label='', rescan=True):
"""Adds an address or pubkey to wallet without the associated privkey."""
addr = str(addr)
r = self._call('importaddress', addr, label, rescan)
return r
def listunspent(self, minconf=0, maxconf=9999999, addrs=None):
"""Return unspent transaction outputs in wallet
Outputs will have between minconf and maxconf (inclusive)
confirmations, optionally filtered to only include txouts paid to
addresses in addrs.
"""
r = None
if addrs is None:
r = self._call('listunspent', minconf, maxconf)
else:
addrs = [str(addr) for addr in addrs]
r = self._call('listunspent', minconf, maxconf, addrs)
r2 = []
for unspent in r:
unspent['outpoint'] = COutPoint(lx(unspent['txid']), unspent['vout'])
del unspent['txid']
del unspent['vout']
unspent['address'] = CBitcoinAddress(unspent['address'])
unspent['scriptPubKey'] = CScript(unhexlify(unspent['scriptPubKey']))
unspent['amount'] = int(unspent['amount'] * COIN)
r2.append(unspent)
return r2
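    # Shape of the converted result (sketch; values are illustrative): each
    # entry carries packed bytes and satoshi amounts instead of the JSON-level
    # hex strings and BTC decimals:
    #
    #   [{'outpoint': COutPoint(lx('e3c0...'), 0),
    #     'address': CBitcoinAddress('1...'),
    #     'scriptPubKey': CScript(...),
    #     'amount': 150000000,  # 1.5 BTC in satoshis
    #     'confirmations': 6, ...}]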
def lockunspent(self, unlock, outpoints):
"""Lock or unlock outpoints"""
json_outpoints = [{'txid':b2lx(outpoint.hash), 'vout':outpoint.n}
for outpoint in outpoints]
return self._call('lockunspent', unlock, json_outpoints)
def sendrawtransaction(self, tx, allowhighfees=False):
"""Submit transaction to local node and network.
allowhighfees - Allow even if fees are unreasonably high.
"""
hextx = hexlify(tx.serialize())
r = None
if allowhighfees:
r = self._call('sendrawtransaction', hextx, True)
else:
r = self._call('sendrawtransaction', hextx)
return lx(r)
def sendmany(self, fromaccount, payments, minconf=1, comment='', subtractfeefromamount=[]):
"""Sent amount to a given address"""
json_payments = {str(addr):float(amount)/COIN
for addr, amount in payments.items()}
r = self._call('sendmany', fromaccount, json_payments, minconf, comment, subtractfeefromamount)
return lx(r)
def sendtoaddress(self, addr, amount, comment='', commentto='', subtractfeefromamount=False):
"""Sent amount to a given address"""
addr = str(addr)
amount = float(amount)/COIN
r = self._call('sendtoaddress', addr, amount, comment, commentto, subtractfeefromamount)
return lx(r)
def signrawtransaction(self, tx, *args):
"""Sign inputs for transaction
FIXME: implement options
"""
hextx = hexlify(tx.serialize())
r = self._call('signrawtransaction', hextx, *args)
r['tx'] = CTransaction.deserialize(unhexlify(r['hex']))
del r['hex']
return r
def submitblock(self, block, params=None):
"""Submit a new block to the network.
params is optional and is currently ignored by bitcoind. See
https://en.bitcoin.it/wiki/BIP_0022 for full specification.
"""
hexblock = hexlify(block.serialize())
if params is not None:
return self._call('submitblock', hexblock, params)
else:
return self._call('submitblock', hexblock)
def validateaddress(self, address):
"""Return information about an address"""
r = self._call('validateaddress', str(address))
if r['isvalid']:
r['address'] = CBitcoinAddress(r['address'])
if 'pubkey' in r:
r['pubkey'] = unhexlify(r['pubkey'])
return r
def _addnode(self, node, arg):
r = self._call('addnode', node, arg)
return r
def addnode(self, node):
return self._addnode(node, 'add')
def addnodeonetry(self, node):
return self._addnode(node, 'onetry')
def removenode(self, node):
return self._addnode(node, 'remove')
__all__ = (
'JSONRPCError',
'ForbiddenBySafeModeError',
'InvalidAddressOrKeyError',
'InvalidParameterError',
'VerifyError',
'VerifyRejectedError',
'VerifyAlreadyInChainError',
'InWarmupError',
'RawProxy',
'Proxy',
)
| gpl-3.0 | -8,538,065,926,282,861,000 | 34.279001 | 124 | 0.581061 | false |
rlewis1988/lean | script/check_md_links.py | 17 | 2586 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2016 Sebastian Ullrich. All rights reserved.
# Released under Apache 2.0 license as described in the file LICENSE.
#
# Author: Sebastian Ullrich
#
# Python 2/3 compatibility
from __future__ import print_function
import argparse
import collections
import os
import sys
try:
from urllib.request import urlopen
from urllib.parse import urlparse
except ImportError:
from urlparse import urlparse
from urllib import urlopen
try:
import mistune
except ImportError:
print("Mistune package not found. Install e.g. via `pip install mistune`.")
parser = argparse.ArgumentParser(description="Check all *.md files of the current directory's subtree for broken links.")
parser.add_argument('--http', help="also check external links (can be slow)", action='store_true')
parser.add_argument('--check-missing', help="also find unreferenced lean files", action='store_true')
args = parser.parse_args()
lean_root = os.path.join(os.path.dirname(__file__), os.path.pardir)
lean_root = os.path.normpath(lean_root)
result = {}
def check_link(link, root):
if link.startswith('http'):
if not args.http:
return True
if link not in result:
try:
                urlopen(link)
result[link] = True
except:
result[link] = False
return result[link]
else:
if link.startswith('/'):
# project root-relative link
path = lean_root + link
else:
path = os.path.join(root, link)
path = os.path.normpath(path) # should make it work on Windows
result[path] = os.path.exists(path)
return result[path]
# check all .md files
for root, _, files in os.walk('.'):
for f in files:
if not f.endswith('.md'):
continue
path = os.path.join(root, f)
class CheckLinks(mistune.Renderer):
def link(self, link, title, content):
if not check_link(link, root):
print("Broken link", link, "in file", path)
mistune.Markdown(renderer=CheckLinks())(open(path).read())
if args.check_missing:
# check all .(h)lean files
for root, _, files in os.walk('.'):
for f in files:
path = os.path.normpath(os.path.join(root, f))
if (path.endswith('.lean') or path.endswith('.hlean')) and path not in result:
result[path] = False
print("Missing file", path)
if not all(result.values()):
sys.exit(1)
| apache-2.0 | 5,352,114,230,549,614,000 | 29.785714 | 121 | 0.619876 | false |
alexmojaki/blaze | docs/source/conf.py | 8 | 9883 | # -*- coding: utf-8 -*-
#
# Blaze documentation build configuration file, created by
# sphinx-quickstart on Mon Oct 8 12:29:11 2012.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys, os, subprocess
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('.'))
sys.path.insert(0, os.path.abspath('..'))
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.intersphinx',
'sphinx.ext.doctest', 'sphinx.ext.extlinks',
'sphinx.ext.autosummary',
# Optional
'sphinx.ext.graphviz',
]
extlinks = dict(issue=('https://github.com/blaze/blaze/issues/%s', '#'))
# -- Math ---------------------------------------------------------------------
try:
subprocess.call(["pdflatex", "--version"])
extensions += ['sphinx.ext.pngmath']
except OSError:
extensions += ['sphinx.ext.mathjax']
# -- Docstrings ---------------------------------------------------------------
import numpydoc
extensions += ['numpydoc']
numpydoc_show_class_members = False
# -- Diagrams -----------------------------------------------------------------
# TODO: check about the legal requirements of putting this in the
# tree. sphinx-ditaa is BSD so should be fine...
#try:
#sys.path.append(os.path.abspath('sphinxext'))
#extensions += ['sphinxext.ditaa']
#diagrams = True
#except ImportError:
#diagrams = False
# -----------------------------------------------------------------------------
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Blaze'
copyright = u'2012, Continuum Analytics'
#------------------------------------------------------------------------
# Path Munging
#------------------------------------------------------------------------
# This is beautiful... yeah
sys.path.append(os.path.abspath('.'))
sys.path.append(os.path.abspath('..'))
sys.path.append(os.path.abspath('../..'))
from blaze import __version__ as version
#------------------------------------------------------------------------
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version is the same as the long version
#version = '0.x.x'
# The full version, including alpha/beta/rc tags.
release = version
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
if not on_rtd: # only import and set the theme if we're building docs locally
try:
import sphinx_rtd_theme
except ImportError:
html_theme = 'default'
html_theme_path = []
else:
html_theme = 'sphinx_rtd_theme'
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
# The name of the Pygments (syntax highlighting) style to use.
highlight_language = 'python'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#html_theme = ''
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
html_favicon = os.path.join('svg', 'blaze.ico')
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
html_show_sphinx = False
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'blazedoc'
# -- Options for LaTeX output --------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
('index', 'blaze.tex', u'Blaze Documentation',
u'Continuum', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output --------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'blaze', u'Blaze Documentation',
[u'Continuum'], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output ------------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'blaze', u'Blaze Documentation',
u'Continuum Analytics', 'blaze', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
intersphinx_mapping = {
'http://docs.python.org/dev': None,
}
doctest_global_setup = "import blaze"
| bsd-3-clause | 6,952,066,812,865,936,000 | 30.57508 | 80 | 0.656076 | false |
danmergens/mi-instrument | mi/dataset/dataset_driver.py | 7 | 2920 | import os
from mi.logging import config
from mi.core.log import get_logger
from mi.core.exceptions import NotImplementedException
__author__ = 'wordenm'
log = get_logger()
class ParticleDataHandler(object):
def __init__(self):
self._samples = {}
self._failure = False
def addParticleSample(self, sample_type, sample):
log.debug("Sample type: %s, Sample data: %s", sample_type, sample)
self._samples.setdefault(sample_type, []).append(sample)
def setParticleDataCaptureFailure(self):
log.debug("Particle data capture failed")
self._failure = True
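# Illustrative usage sketch (assumption; in production uFrame supplies this
# object, but a local test could exercise it directly):
#
#     handler = ParticleDataHandler()
#     handler.addParticleSample('my_stream', '{"value": 1}')
#     assert not handler._failure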
class DataSetDriver(object):
"""
    Base class for dataset drivers used within uFrame.
    This class's processFileStream method is used by the parse
    method, which is called directly from uFrame.
"""
def __init__(self, parser, particle_data_handler):
self._parser = parser
self._particle_data_handler = particle_data_handler
def processFileStream(self):
"""
Method to extract records from a parser's get_records method
and pass them to the Java particle_data_handler passed in from uFrame
"""
while True:
try:
records = self._parser.get_records(1)
if len(records) == 0:
log.debug("Done retrieving records.")
break
for record in records:
self._particle_data_handler.addParticleSample(record.data_particle_type(), record.generate())
except Exception as e:
log.error(e)
self._particle_data_handler.setParticleDataCaptureFailure()
break
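# Illustrative usage sketch (assumption; SomeParser and source_file_path are
# hypothetical stand-ins for the uFrame-side glue):
#
#     parser = SomeParser(config, open(source_file_path, 'rb'))
#     driver = DataSetDriver(parser, particle_data_handler)
#     driver.processFileStream()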
class SimpleDatasetDriver(DataSetDriver):
"""
Abstract class to simplify driver writing. Derived classes simply need to provide
the _build_parser method
"""
def __init__(self, unused, stream_handle, particle_data_handler):
parser = self._build_parser(stream_handle)
super(SimpleDatasetDriver, self).__init__(parser, particle_data_handler)
def _build_parser(self, stream_handle):
"""
abstract method that must be provided by derived classes to build a parser
:param stream_handle: an open fid created from the source_file_path passed in from edex
:return: A properly configured parser object
"""
raise NotImplementedException("_build_parser must be implemented")
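    # Illustrative sketch (assumption; MyParser is a hypothetical parser
    # class). A derived driver typically only supplies _build_parser:
    #
    #     class MyDatasetDriver(SimpleDatasetDriver):
    #         def _build_parser(self, stream_handle):
    #             return MyParser({}, stream_handle, self._exception_callback)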
def _exception_callback(self, exception):
"""
A common exception callback method that can be used by _build_parser methods to
map any exceptions coming from the parser back to the edex particle_data_handler
:param exception: any exception from the parser
:return: None
"""
log.debug("ERROR: %r", exception)
self._particle_data_handler.setParticleDataCaptureFailure()
| bsd-2-clause | -7,939,177,868,648,343,000 | 31.444444 | 113 | 0.643151 | false |
UniMOOC/gcb-new-module | modules/dashboard/unit_lesson_editor.py | 3 | 30735 | # Copyright 2013 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS-IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Classes supporting unit and lesson editing."""
__author__ = 'John Orr ([email protected])'
import cgi
import logging
import urllib
import messages
from common import utils as common_utils
from controllers import sites
from controllers.utils import ApplicationHandler
from controllers.utils import BaseRESTHandler
from controllers.utils import XsrfTokenManager
from models import courses
from models import resources_display
from models import custom_units
from models import roles
from models import transforms
from modules.oeditor import oeditor
from tools import verify
class CourseOutlineRights(object):
"""Manages view/edit rights for course outline."""
@classmethod
def can_view(cls, handler):
return cls.can_edit(handler)
@classmethod
def can_edit(cls, handler):
return roles.Roles.is_course_admin(handler.app_context)
@classmethod
def can_delete(cls, handler):
return cls.can_edit(handler)
@classmethod
def can_add(cls, handler):
return cls.can_edit(handler)
class UnitLessonEditor(ApplicationHandler):
"""An editor for the unit and lesson titles."""
HIDE_ACTIVITY_ANNOTATIONS = [
(['properties', 'activity_title', '_inputex'], {'_type': 'hidden'}),
(['properties', 'activity_listed', '_inputex'], {'_type': 'hidden'}),
(['properties', 'activity', '_inputex'], {'_type': 'hidden'}),
]
def get_import_course(self):
"""Shows setup form for course import."""
template_values = {}
template_values['page_title'] = self.format_title('Import Course')
annotations = ImportCourseRESTHandler.SCHEMA_ANNOTATIONS_DICT()
if not annotations:
template_values['main_content'] = 'No courses to import from.'
self.render_page(template_values)
return
exit_url = self.canonicalize_url('/dashboard')
rest_url = self.canonicalize_url(ImportCourseRESTHandler.URI)
form_html = oeditor.ObjectEditor.get_html_for(
self,
ImportCourseRESTHandler.SCHEMA_JSON,
annotations,
None, rest_url, exit_url,
auto_return=True,
save_button_caption='Import',
required_modules=ImportCourseRESTHandler.REQUIRED_MODULES)
template_values = {}
template_values['page_title'] = self.format_title('Import Course')
template_values['page_description'] = messages.IMPORT_COURSE_DESCRIPTION
template_values['main_content'] = form_html
self.render_page(template_values)
def post_add_lesson(self):
"""Adds new lesson to a first unit of the course."""
course = courses.Course(self)
target_unit = None
if self.request.get('unit_id'):
target_unit = course.find_unit_by_id(self.request.get('unit_id'))
else:
for unit in course.get_units():
if unit.type == verify.UNIT_TYPE_UNIT:
target_unit = unit
break
if target_unit:
lesson = course.add_lesson(target_unit)
course.save()
# TODO(psimakov): complete 'edit_lesson' view
self.redirect(self.get_action_url(
'edit_lesson', key=lesson.lesson_id,
extra_args={'is_newly_created': 1}))
else:
self.redirect('/dashboard')
def post_add_unit(self):
"""Adds new unit to a course."""
course = courses.Course(self)
unit = course.add_unit()
course.save()
self.redirect(self.get_action_url(
'edit_unit', key=unit.unit_id, extra_args={'is_newly_created': 1}))
def post_add_link(self):
"""Adds new link to a course."""
course = courses.Course(self)
link = course.add_link()
link.href = ''
course.save()
self.redirect(self.get_action_url(
'edit_link', key=link.unit_id, extra_args={'is_newly_created': 1}))
def post_add_assessment(self):
"""Adds new assessment to a course."""
course = courses.Course(self)
assessment = course.add_assessment()
course.save()
self.redirect(self.get_action_url(
'edit_assessment', key=assessment.unit_id,
extra_args={'is_newly_created': 1}))
def post_add_custom_unit(self):
"""Adds a custom unit to a course."""
course = courses.Course(self)
custom_unit_type = self.request.get('unit_type')
custom_unit = course.add_custom_unit(custom_unit_type)
course.save()
self.redirect(self.get_action_url(
'edit_custom_unit', key=custom_unit.unit_id,
extra_args={'is_newly_created': 1,
'unit_type': custom_unit_type}))
def post_set_draft_status(self):
"""Sets the draft status of a course component.
Only works with CourseModel13 courses, but the REST handler
        is only called with this type of course.
"""
key = self.request.get('key')
if not CourseOutlineRights.can_edit(self):
transforms.send_json_response(
self, 401, 'Access denied.', {'key': key})
return
course = courses.Course(self)
component_type = self.request.get('type')
if component_type == 'unit':
course_component = course.find_unit_by_id(key)
elif component_type == 'lesson':
course_component = course.find_lesson_by_id(None, key)
else:
transforms.send_json_response(
self, 401, 'Invalid key.', {'key': key})
return
set_draft = self.request.get('set_draft')
if set_draft == '1':
set_draft = True
elif set_draft == '0':
set_draft = False
else:
transforms.send_json_response(
self, 401, 'Invalid set_draft value, expected 0 or 1.',
{'set_draft': set_draft}
)
return
course_component.now_available = not set_draft
course.save()
transforms.send_json_response(
self,
200,
'Draft status set to %s.' % (
resources_display.DRAFT_TEXT if set_draft else
resources_display.PUBLISHED_TEXT
), {
'is_draft': set_draft
}
)
return
def _render_edit_form_for(
self, rest_handler_cls, title, schema=None, annotations_dict=None,
delete_xsrf_token='delete-unit', page_description=None,
additional_dirs=None, extra_js_files=None, extra_css_files=None):
"""Renders an editor form for a given REST handler class."""
annotations_dict = annotations_dict or []
if schema:
schema_json = schema.get_json_schema()
annotations_dict = schema.get_schema_dict() + annotations_dict
else:
schema_json = rest_handler_cls.SCHEMA_JSON
if not annotations_dict:
annotations_dict = rest_handler_cls.SCHEMA_ANNOTATIONS_DICT
key = self.request.get('key')
extra_args = {}
if self.request.get('is_newly_created'):
extra_args['is_newly_created'] = 1
exit_url = self.canonicalize_url('/dashboard')
rest_url = self.canonicalize_url(rest_handler_cls.URI)
delete_url = '%s?%s' % (
self.canonicalize_url(rest_handler_cls.URI),
urllib.urlencode({
'key': key,
'xsrf_token': cgi.escape(
self.create_xsrf_token(delete_xsrf_token))
}))
def extend_list(target_list, ext_name):
# Extend the optional arg lists such as extra_js_files by an
# optional list field on the REST handler class. Used to provide
# seams for modules to add js files, etc. See LessonRESTHandler
if hasattr(rest_handler_cls, ext_name):
target_list = target_list or []
return (target_list or []) + getattr(rest_handler_cls, ext_name)
return target_list
form_html = oeditor.ObjectEditor.get_html_for(
self,
schema_json,
annotations_dict,
key, rest_url, exit_url,
extra_args=extra_args,
delete_url=delete_url, delete_method='delete',
read_only=not self.app_context.is_editable_fs(),
required_modules=rest_handler_cls.REQUIRED_MODULES,
additional_dirs=extend_list(additional_dirs, 'ADDITIONAL_DIRS'),
extra_css_files=extend_list(extra_css_files, 'EXTRA_CSS_FILES'),
extra_js_files=extend_list(extra_js_files, 'EXTRA_JS_FILES'))
template_values = {}
template_values['page_title'] = self.format_title('Edit %s' % title)
if page_description:
template_values['page_description'] = page_description
template_values['main_content'] = form_html
self.render_page(template_values)
def get_edit_unit(self):
"""Shows unit editor."""
self._render_edit_form_for(
UnitRESTHandler, 'Unit',
page_description=messages.UNIT_EDITOR_DESCRIPTION,
annotations_dict=UnitRESTHandler.get_annotations_dict(
courses.Course(self), int(self.request.get('key'))))
def get_edit_custom_unit(self):
"""Shows custom_unit_editor."""
custom_unit_type = self.request.get('unit_type')
custom_unit = custom_units.UnitTypeRegistry.get(custom_unit_type)
rest_handler = custom_unit.rest_handler
self._render_edit_form_for(
rest_handler,
custom_unit.name,
page_description=rest_handler.DESCRIPTION,
annotations_dict=rest_handler.get_schema_annotations_dict(
courses.Course(self)))
def get_edit_link(self):
"""Shows link editor."""
self._render_edit_form_for(
LinkRESTHandler, 'Link',
page_description=messages.LINK_EDITOR_DESCRIPTION)
def get_edit_assessment(self):
"""Shows assessment editor."""
self._render_edit_form_for(
AssessmentRESTHandler, 'Assessment',
page_description=messages.ASSESSMENT_EDITOR_DESCRIPTION,
extra_js_files=['assessment_editor_lib.js', 'assessment_editor.js'])
def get_edit_lesson(self):
"""Shows the lesson/activity editor."""
key = self.request.get('key')
course = courses.Course(self)
lesson = course.find_lesson_by_id(None, key)
annotations_dict = (
None if lesson.has_activity
else UnitLessonEditor.HIDE_ACTIVITY_ANNOTATIONS)
schema = LessonRESTHandler.get_schema(course, key)
if courses.has_only_new_style_activities(course):
schema.get_property('objectives').extra_schema_dict_values[
'excludedCustomTags'] = set(['gcb-activity'])
self._render_edit_form_for(
LessonRESTHandler, 'Lessons and Activities',
schema=schema,
annotations_dict=annotations_dict,
delete_xsrf_token='delete-lesson')
class CommonUnitRESTHandler(BaseRESTHandler):
"""A common super class for all unit REST handlers."""
# These functions are called with an updated unit object whenever a
# change is saved.
POST_SAVE_HOOKS = []
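    # Illustrative sketch (assumption; _on_unit_saved is hypothetical). A
    # module could observe saves with:
    #
    #     def _on_unit_saved(unit):
    #         logging.info('unit %s saved', unit.unit_id)
    #     CommonUnitRESTHandler.POST_SAVE_HOOKS.append(_on_unit_saved)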
def unit_to_dict(self, unit):
"""Converts a unit to a dictionary representation."""
return resources_display.UnitTools(self.get_course()).unit_to_dict(unit)
def apply_updates(self, unit, updated_unit_dict, errors):
"""Applies changes to a unit; modifies unit input argument."""
resources_display.UnitTools(courses.Course(self)).apply_updates(
unit, updated_unit_dict, errors)
def get(self):
"""A GET REST method shared by all unit types."""
key = self.request.get('key')
if not CourseOutlineRights.can_view(self):
transforms.send_json_response(
self, 401, 'Access denied.', {'key': key})
return
unit = courses.Course(self).find_unit_by_id(key)
if not unit:
transforms.send_json_response(
self, 404, 'Object not found.', {'key': key})
return
message = ['Success.']
if self.request.get('is_newly_created'):
unit_type = verify.UNIT_TYPE_NAMES[unit.type].lower()
message.append(
'New %s has been created and saved.' % unit_type)
transforms.send_json_response(
self, 200, '\n'.join(message),
payload_dict=self.unit_to_dict(unit),
xsrf_token=XsrfTokenManager.create_xsrf_token('put-unit'))
def put(self):
"""A PUT REST method shared by all unit types."""
request = transforms.loads(self.request.get('request'))
key = request.get('key')
if not self.assert_xsrf_token_or_fail(
request, 'put-unit', {'key': key}):
return
if not CourseOutlineRights.can_edit(self):
transforms.send_json_response(
self, 401, 'Access denied.', {'key': key})
return
unit = courses.Course(self).find_unit_by_id(key)
if not unit:
transforms.send_json_response(
self, 404, 'Object not found.', {'key': key})
return
payload = request.get('payload')
updated_unit_dict = transforms.json_to_dict(
transforms.loads(payload), self.SCHEMA_DICT)
errors = []
self.apply_updates(unit, updated_unit_dict, errors)
if not errors:
course = courses.Course(self)
assert course.update_unit(unit)
course.save()
common_utils.run_hooks(self.POST_SAVE_HOOKS, unit)
transforms.send_json_response(self, 200, 'Saved.')
else:
transforms.send_json_response(self, 412, '\n'.join(errors))
def delete(self):
"""Handles REST DELETE verb with JSON payload."""
key = self.request.get('key')
if not self.assert_xsrf_token_or_fail(
self.request, 'delete-unit', {'key': key}):
return
if not CourseOutlineRights.can_delete(self):
transforms.send_json_response(
self, 401, 'Access denied.', {'key': key})
return
course = courses.Course(self)
unit = course.find_unit_by_id(key)
if not unit:
transforms.send_json_response(
self, 404, 'Object not found.', {'key': key})
return
course.delete_unit(unit)
course.save()
transforms.send_json_response(self, 200, 'Deleted.')
class UnitRESTHandler(CommonUnitRESTHandler):
"""Provides REST API to unit."""
URI = '/rest/course/unit'
SCHEMA = resources_display.ResourceUnit.get_schema(course=None, key=None)
SCHEMA_JSON = SCHEMA.get_json_schema()
SCHEMA_DICT = SCHEMA.get_json_schema_dict()
REQUIRED_MODULES = [
'inputex-string', 'inputex-select', 'inputex-uneditable',
'inputex-list', 'inputex-hidden', 'inputex-number', 'inputex-integer',
'inputex-checkbox', 'gcb-rte']
@classmethod
def get_annotations_dict(cls, course, this_unit_id):
        # The set of available assessments needs to be dynamically
# generated and set as selection choices on the form.
# We want to only show assessments that are not already
# selected by other units.
available_assessments = {}
referenced_assessments = {}
for unit in course.get_units():
if unit.type == verify.UNIT_TYPE_ASSESSMENT:
model_version = course.get_assessment_model_version(unit)
track_labels = course.get_unit_track_labels(unit)
# Don't allow selecting old-style assessments, which we
# can't display within Unit page.
# Don't allow selection of assessments with parents
if (model_version != courses.ASSESSMENT_MODEL_VERSION_1_4 and
not track_labels):
available_assessments[unit.unit_id] = unit
elif (unit.type == verify.UNIT_TYPE_UNIT and
this_unit_id != unit.unit_id):
if unit.pre_assessment:
referenced_assessments[unit.pre_assessment] = True
if unit.post_assessment:
referenced_assessments[unit.post_assessment] = True
for referenced in referenced_assessments:
if referenced in available_assessments:
del available_assessments[referenced]
schema = resources_display.ResourceUnit.get_schema(course, this_unit_id)
choices = [(-1, '-- None --')]
for assessment_id in sorted(available_assessments):
choices.append(
(assessment_id, available_assessments[assessment_id].title))
schema.get_property('pre_assessment').set_select_data(choices)
schema.get_property('post_assessment').set_select_data(choices)
return schema.get_schema_dict()
class LinkRESTHandler(CommonUnitRESTHandler):
"""Provides REST API to link."""
URI = '/rest/course/link'
SCHEMA = resources_display.ResourceLink.get_schema(course=None, key=None)
SCHEMA_JSON = SCHEMA.get_json_schema()
SCHEMA_DICT = SCHEMA.get_json_schema_dict()
SCHEMA_ANNOTATIONS_DICT = SCHEMA.get_schema_dict()
REQUIRED_MODULES = [
'inputex-string', 'inputex-select', 'inputex-uneditable',
'inputex-list', 'inputex-hidden', 'inputex-number', 'inputex-checkbox']
class ImportCourseRESTHandler(CommonUnitRESTHandler):
"""Provides REST API to course import."""
URI = '/rest/course/import'
SCHEMA_JSON = """
{
"id": "Import Course Entity",
"type": "object",
"description": "Import Course",
"properties": {
"course" : {"type": "string"}
}
}
"""
SCHEMA_DICT = transforms.loads(SCHEMA_JSON)
REQUIRED_MODULES = [
'inputex-string', 'inputex-select', 'inputex-uneditable']
@classmethod
def _get_course_list(cls):
# Make a list of courses user has the rights to.
course_list = []
for acourse in sites.get_all_courses():
if not roles.Roles.is_course_admin(acourse):
continue
if acourse == sites.get_course_for_current_request():
continue
atitle = '%s (%s)' % (acourse.get_title(), acourse.get_slug())
course_list.append({
'value': acourse.raw, 'label': cgi.escape(atitle)})
return course_list
@classmethod
def SCHEMA_ANNOTATIONS_DICT(cls):
"""Schema annotations are dynamic and include a list of courses."""
course_list = cls._get_course_list()
if not course_list:
return None
# Format annotations.
return [
(['title'], 'Import Course'),
(
['properties', 'course', '_inputex'],
{
'label': 'Available Courses',
'_type': 'select',
'choices': course_list})]
def get(self):
"""Handles REST GET verb and returns an object as JSON payload."""
if not CourseOutlineRights.can_view(self):
transforms.send_json_response(self, 401, 'Access denied.', {})
return
first_course_in_dropdown = self._get_course_list()[0]['value']
transforms.send_json_response(
self, 200, None,
payload_dict={'course': first_course_in_dropdown},
xsrf_token=XsrfTokenManager.create_xsrf_token(
'import-course'))
def put(self):
"""Handles REST PUT verb with JSON payload."""
request = transforms.loads(self.request.get('request'))
if not self.assert_xsrf_token_or_fail(
request, 'import-course', {'key': None}):
return
if not CourseOutlineRights.can_edit(self):
transforms.send_json_response(self, 401, 'Access denied.', {})
return
payload = request.get('payload')
course_raw = transforms.json_to_dict(
transforms.loads(payload), self.SCHEMA_DICT)['course']
source = None
for acourse in sites.get_all_courses():
if acourse.raw == course_raw:
source = acourse
break
if not source:
transforms.send_json_response(
self, 404, 'Object not found.', {'raw': course_raw})
return
course = courses.Course(self)
errors = []
try:
course.import_from(source, errors)
except Exception as e: # pylint: disable=broad-except
logging.exception(e)
errors.append('Import failed: %s' % e)
if errors:
transforms.send_json_response(self, 412, '\n'.join(errors))
return
course.save()
transforms.send_json_response(self, 200, 'Imported.')
class AssessmentRESTHandler(CommonUnitRESTHandler):
"""Provides REST API to assessment."""
URI = '/rest/course/assessment'
SCHEMA = resources_display.ResourceAssessment.get_schema(
course=None, key=None)
SCHEMA_JSON = SCHEMA.get_json_schema()
SCHEMA_DICT = SCHEMA.get_json_schema_dict()
SCHEMA_ANNOTATIONS_DICT = SCHEMA.get_schema_dict()
REQUIRED_MODULES = [
'gcb-rte', 'inputex-select', 'inputex-string', 'inputex-textarea',
'inputex-uneditable', 'inputex-integer', 'inputex-hidden',
'inputex-checkbox', 'inputex-list']
class UnitLessonTitleRESTHandler(BaseRESTHandler):
"""Provides REST API to reorder unit and lesson titles."""
URI = '/rest/course/outline'
XSRF_TOKEN = 'unit-lesson-reorder'
SCHEMA_JSON = """
{
"type": "object",
"description": "Course Outline",
"properties": {
"outline": {
"type": "array",
"items": {
"type": "object",
"properties": {
"id": {"type": "string"},
"title": {"type": "string"},
"lessons": {
"type": "array",
"items": {
"type": "object",
"properties": {
"id": {"type": "string"},
"title": {"type": "string"}
}
}
}
}
}
}
}
}
"""
SCHEMA_DICT = transforms.loads(SCHEMA_JSON)
def put(self):
"""Handles REST PUT verb with JSON payload."""
request = transforms.loads(self.request.get('request'))
if not self.assert_xsrf_token_or_fail(
request, self.XSRF_TOKEN, {'key': None}):
return
if not CourseOutlineRights.can_edit(self):
transforms.send_json_response(self, 401, 'Access denied.', {})
return
payload = request.get('payload')
payload_dict = transforms.json_to_dict(
transforms.loads(payload), self.SCHEMA_DICT)
course = courses.Course(self)
course.reorder_units(payload_dict['outline'])
course.save()
transforms.send_json_response(self, 200, 'Saved.')
class LessonRESTHandler(BaseRESTHandler):
"""Provides REST API to handle lessons and activities."""
URI = '/rest/course/lesson'
REQUIRED_MODULES = [
'inputex-string', 'gcb-rte', 'inputex-select', 'inputex-textarea',
'inputex-uneditable', 'inputex-checkbox', 'inputex-hidden']
# Enable modules to specify locations to load JS and CSS files
ADDITIONAL_DIRS = []
# Enable modules to add css files to be shown in the editor page.
EXTRA_CSS_FILES = []
# Enable modules to add js files to be shown in the editor page.
EXTRA_JS_FILES = []
# Enable other modules to add transformations to the schema.Each member must
# be a function of the form:
# callback(lesson_field_registry)
# where the argument is the root FieldRegistry for the schema
SCHEMA_LOAD_HOOKS = []
# Enable other modules to add transformations to the load. Each member must
# be a function of the form:
# callback(lesson, lesson_dict)
# and the callback should update fields of the lesson_dict, which will be
# returned to the caller of a GET request.
PRE_LOAD_HOOKS = []
# Enable other modules to add transformations to the save. Each member must
# be a function of the form:
# callback(lesson, lesson_dict)
# and the callback should update fields of the lesson with values read from
# the dict which was the payload of a PUT request.
PRE_SAVE_HOOKS = []
# These functions are called with an updated lesson object whenever a
# change is saved.
POST_SAVE_HOOKS = []
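    # Illustrative sketch (assumption; my_field is hypothetical). A module
    # adding a custom lesson field might pair load/save hooks like:
    #
    #     def _load(lesson, lesson_dict):
    #         lesson_dict['my_field'] = getattr(lesson, 'my_field', None)
    #     def _save(lesson, lesson_dict):
    #         lesson.my_field = lesson_dict.get('my_field')
    #     LessonRESTHandler.PRE_LOAD_HOOKS.append(_load)
    #     LessonRESTHandler.PRE_SAVE_HOOKS.append(_save)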
@classmethod
def get_schema(cls, course, key):
lesson_schema = resources_display.ResourceLesson.get_schema(course, key)
common_utils.run_hooks(cls.SCHEMA_LOAD_HOOKS, lesson_schema)
return lesson_schema
@classmethod
def get_lesson_dict(cls, course, lesson):
return cls.get_lesson_dict_for(course, lesson)
@classmethod
def get_lesson_dict_for(cls, course, lesson):
lesson_dict = resources_display.ResourceLesson.get_data_dict(
course, lesson.lesson_id)
common_utils.run_hooks(cls.PRE_LOAD_HOOKS, lesson, lesson_dict)
return lesson_dict
def get(self):
"""Handles GET REST verb and returns lesson object as JSON payload."""
if not CourseOutlineRights.can_view(self):
transforms.send_json_response(self, 401, 'Access denied.', {})
return
key = self.request.get('key')
course = courses.Course(self)
lesson = course.find_lesson_by_id(None, key)
assert lesson
payload_dict = self.get_lesson_dict(course, lesson)
message = ['Success.']
if self.request.get('is_newly_created'):
message.append('New lesson has been created and saved.')
transforms.send_json_response(
self, 200, '\n'.join(message),
payload_dict=payload_dict,
xsrf_token=XsrfTokenManager.create_xsrf_token('lesson-edit'))
def put(self):
"""Handles PUT REST verb to save lesson and associated activity."""
request = transforms.loads(self.request.get('request'))
key = request.get('key')
if not self.assert_xsrf_token_or_fail(
request, 'lesson-edit', {'key': key}):
return
if not CourseOutlineRights.can_edit(self):
transforms.send_json_response(
self, 401, 'Access denied.', {'key': key})
return
course = courses.Course(self)
lesson = course.find_lesson_by_id(None, key)
if not lesson:
transforms.send_json_response(
self, 404, 'Object not found.', {'key': key})
return
payload = request.get('payload')
updates_dict = transforms.json_to_dict(
transforms.loads(payload),
self.get_schema(course, key).get_json_schema_dict())
lesson.title = updates_dict['title']
lesson.unit_id = updates_dict['unit_id']
lesson.scored = (updates_dict['scored'] == 'scored')
lesson.objectives = updates_dict['objectives']
lesson.video = updates_dict['video']
lesson.notes = updates_dict['notes']
lesson.auto_index = updates_dict['auto_index']
lesson.activity_title = updates_dict['activity_title']
lesson.activity_listed = updates_dict['activity_listed']
lesson.manual_progress = updates_dict['manual_progress']
lesson.now_available = not updates_dict['is_draft']
activity = updates_dict.get('activity', '').strip()
errors = []
if activity:
if lesson.has_activity:
course.set_activity_content(lesson, activity, errors=errors)
else:
errors.append('Old-style activities are not supported.')
else:
lesson.has_activity = False
fs = self.app_context.fs
path = fs.impl.physical_to_logical(course.get_activity_filename(
lesson.unit_id, lesson.lesson_id))
if fs.isfile(path):
fs.delete(path)
if not errors:
common_utils.run_hooks(self.PRE_SAVE_HOOKS, lesson, updates_dict)
assert course.update_lesson(lesson)
course.save()
common_utils.run_hooks(self.POST_SAVE_HOOKS, lesson)
transforms.send_json_response(self, 200, 'Saved.')
else:
transforms.send_json_response(self, 412, '\n'.join(errors))
def delete(self):
"""Handles REST DELETE verb with JSON payload."""
key = self.request.get('key')
if not self.assert_xsrf_token_or_fail(
self.request, 'delete-lesson', {'key': key}):
return
if not CourseOutlineRights.can_delete(self):
transforms.send_json_response(
self, 401, 'Access denied.', {'key': key})
return
course = courses.Course(self)
lesson = course.find_lesson_by_id(None, key)
if not lesson:
transforms.send_json_response(
self, 404, 'Object not found.', {'key': key})
return
assert course.delete_lesson(lesson)
course.save()
transforms.send_json_response(self, 200, 'Deleted.')
| apache-2.0 | -6,512,185,131,179,540,000 | 35.808383 | 80 | 0.585684 | false |
Apelsin/trello-tools | trello_tools/burndown.py | 1 | 6529 | import re
from pprint import pprint
from StringIO import StringIO
ProgISO8601Date = re.compile('(\d{4})-([01]\d)-([0-3]\d)')
def _get_next_row(d):
    # Build one iterator per column up front so successive rows advance
    # through each column instead of restarting it on every pass.
    iterators = dict((key, iter(column)) for key, column in d.iteritems())
    while True:
        row = {}
        for key, it in iterators.iteritems():
            try:
                row[key] = it.next()
            except StopIteration:
                return
        yield row
def _get_next_row_list(_dict):
    row = _get_next_row(_dict).next()
    yield [value for name, value in row.iteritems()]
def _dump_list(_list, delimiter):
return delimiter.join([str(e) for e in _list])
class Burndown:
class Snapshot:
def __init__(self, to_do, doing, done):
self.to_do = to_do
self.doing = doing
self.done = done
def counts(self):
return (len(self.to_do), len(self.doing), len(self.done))
def __str__(self):
return '<Snapshot; To Do=%s, Doing=%s, Done=%s>' % self.counts()
def __init__(self, data, *args, **kwargs):
if kwargs is None:
kwargs = {}
self.additional = kwargs.get('additional', [])
self.delimiter = (kwargs.get('delimiter') or ',').decode('string-escape')
self.ideal = kwargs.get('ideal', False)
self.data = data
self.card_deck = {}
# Make the card deck
for j in self.data:
new = {card['id']:card for card in j['cards']}
self.card_deck.update(new)
@staticmethod
def _limit_cards_by_name(cards, lists, list_name):
matching = []
for card in cards:
if lists[card['idList']]==list_name:
matching.append(card)
return matching
def _lookup_cards(self, cards):
return [self.card_deck[card['id']] for card in cards]
def generate(self):
d = {}
column_names = set()
date_rows = {}
stream = StringIO()
overall_to_do = set()
overall_done = set()
for datum in self.data:
# Get lists from this board
lists = {_list['id']:_list['name'] for _list in datum['lists']}
# Lookup the card in the cards deck after finding it in its board's list
cards = datum['cards']
#to_do = self._limit_cards_by_name(cards, lists, 'To Do')
#doing = self._limit_cards_by_name(cards, lists, 'Doing')
done = self._limit_cards_by_name(cards, lists, 'Done')
#to_do = self._lookup_cards(to_do)
#doing = self._lookup_cards(doing)
done = self._lookup_cards(done)
cards_ids = [card['id'] for card in cards]
#to_do_ids = [card['id'] for card in to_do]
#doing_ids = [card['id'] for card in doing]
archive_ids = []
for card in cards:
if card['closed']:
archive_ids.append(card['id'])
done_ids = [card['id'] for card in done]
#not_done_ids = list(set(cards_ids) - set(done_ids))
#not_done_ids = list(set(to_do_ids) + set(doing_ids))
done_or_closed_set = set(done_ids) - set(archive_ids)
not_done_ids = set(cards_ids) - done_or_closed_set
overall_to_do.update(not_done_ids)
#overall_done.update(set([item['id'] for item in done]))
overall_done.update(done_or_closed_set)
#remaining = len(overall_to_do.difference(overall_done))
remaining = len(cards) - len(overall_done)
date_match = ProgISO8601Date.search(datum['file-name'])
year, month, day = (date_match.group(i) for i in xrange(1,4))
date_string = '%s-%s-%s' % (year, month, day)
_sort_index = date_string
column_name = datum['name']
date_rows[date_string] = {
'column-name': column_name,
'remaining': remaining,
'_sort_index': _sort_index,
}
column_names.add(column_name)
column_names_list = list(column_names)
column_names_list.sort()
if self.ideal:
column_names_list.append('_ideal')
idx_ideal = len(column_names_list) - 1
csv_header_list = [u'Date']
csv_header_list.extend(column_names_list)
csv = [_dump_list(csv_header_list, self.delimiter)]
prev_idx = None
prev_value = None
def _csv_row(date, row_data):
csv_row = [date]
csv_row.extend(row_data)
return csv_row
def _first():
yield True
while True:
yield False
first = _first()
for date in sorted(date_rows, key=lambda key: date_rows[key]['_sort_index']):
date_row = date_rows[date]
row_data = ['']*len(column_names_list)
idx = column_names_list.index(date_row['column-name'])
row_data[idx] = date_row['remaining']
# Make data pretty for Excel or whatever
if prev_idx is not None:
if idx != prev_idx:
if self.ideal:
# I love Python...
row_data_trick = [e if i is not idx else '' for i, e in enumerate(row_data)]
row_data_trick[idx_ideal] = 0
row_data_trick[prev_idx] = prev_value
csv.append(_dump_list(_csv_row(date, row_data_trick), self.delimiter))
# Empty row with same date
csv.append(_dump_list(_csv_row(date, ['']*len(column_names_list)), self.delimiter))
row_data[idx_ideal] = row_data[idx]
elif self.ideal:
row_data[idx_ideal] = row_data[idx]
if self.ideal:
if idx == prev_idx:
row_data[idx_ideal] = '=na()'
csv.append(_dump_list(_csv_row(date, row_data), self.delimiter))
prev_idx = idx
prev_value = row_data[idx]
d['csv'] = csv
for row in csv:
print >> stream, row
s = stream.getvalue()
stream.close()
return s | lgpl-2.1 | -4,438,705,502,268,646,000 | 35.892655 | 107 | 0.498698 | false |
shownomercy/django | django/contrib/auth/backends.py | 468 | 6114 | from __future__ import unicode_literals
from django.contrib.auth import get_user_model
from django.contrib.auth.models import Permission
class ModelBackend(object):
"""
Authenticates against settings.AUTH_USER_MODEL.
"""
def authenticate(self, username=None, password=None, **kwargs):
UserModel = get_user_model()
if username is None:
username = kwargs.get(UserModel.USERNAME_FIELD)
try:
user = UserModel._default_manager.get_by_natural_key(username)
if user.check_password(password):
return user
except UserModel.DoesNotExist:
# Run the default password hasher once to reduce the timing
# difference between an existing and a non-existing user (#20760).
UserModel().set_password(password)
def _get_user_permissions(self, user_obj):
return user_obj.user_permissions.all()
def _get_group_permissions(self, user_obj):
user_groups_field = get_user_model()._meta.get_field('groups')
user_groups_query = 'group__%s' % user_groups_field.related_query_name()
return Permission.objects.filter(**{user_groups_query: user_obj})
def _get_permissions(self, user_obj, obj, from_name):
"""
Returns the permissions of `user_obj` from `from_name`. `from_name` can
be either "group" or "user" to return permissions from
`_get_group_permissions` or `_get_user_permissions` respectively.
"""
if not user_obj.is_active or user_obj.is_anonymous() or obj is not None:
return set()
perm_cache_name = '_%s_perm_cache' % from_name
if not hasattr(user_obj, perm_cache_name):
if user_obj.is_superuser:
perms = Permission.objects.all()
else:
perms = getattr(self, '_get_%s_permissions' % from_name)(user_obj)
perms = perms.values_list('content_type__app_label', 'codename').order_by()
setattr(user_obj, perm_cache_name, set("%s.%s" % (ct, name) for ct, name in perms))
return getattr(user_obj, perm_cache_name)
def get_user_permissions(self, user_obj, obj=None):
"""
Returns a set of permission strings the user `user_obj` has from their
`user_permissions`.
"""
return self._get_permissions(user_obj, obj, 'user')
def get_group_permissions(self, user_obj, obj=None):
"""
Returns a set of permission strings the user `user_obj` has from the
groups they belong.
"""
return self._get_permissions(user_obj, obj, 'group')
def get_all_permissions(self, user_obj, obj=None):
if not user_obj.is_active or user_obj.is_anonymous() or obj is not None:
return set()
if not hasattr(user_obj, '_perm_cache'):
user_obj._perm_cache = self.get_user_permissions(user_obj)
user_obj._perm_cache.update(self.get_group_permissions(user_obj))
return user_obj._perm_cache
def has_perm(self, user_obj, perm, obj=None):
if not user_obj.is_active:
return False
return perm in self.get_all_permissions(user_obj, obj)
def has_module_perms(self, user_obj, app_label):
"""
Returns True if user_obj has any permissions in the given app_label.
"""
if not user_obj.is_active:
return False
for perm in self.get_all_permissions(user_obj):
if perm[:perm.index('.')] == app_label:
return True
return False
def get_user(self, user_id):
UserModel = get_user_model()
try:
return UserModel._default_manager.get(pk=user_id)
except UserModel.DoesNotExist:
return None
class RemoteUserBackend(ModelBackend):
"""
This backend is to be used in conjunction with the ``RemoteUserMiddleware``
found in the middleware module of this package, and is used when the server
is handling authentication outside of Django.
By default, the ``authenticate`` method creates ``User`` objects for
usernames that don't already exist in the database. Subclasses can disable
this behavior by setting the ``create_unknown_user`` attribute to
``False``.
"""
# Create a User object if not already in the database?
create_unknown_user = True
def authenticate(self, remote_user):
"""
The username passed as ``remote_user`` is considered trusted. This
method simply returns the ``User`` object with the given username,
creating a new ``User`` object if ``create_unknown_user`` is ``True``.
Returns None if ``create_unknown_user`` is ``False`` and a ``User``
object with the given username is not found in the database.
"""
if not remote_user:
return
user = None
username = self.clean_username(remote_user)
UserModel = get_user_model()
# Note that this could be accomplished in one try-except clause, but
# instead we use get_or_create when creating unknown users since it has
# built-in safeguards for multiple threads.
if self.create_unknown_user:
user, created = UserModel._default_manager.get_or_create(**{
UserModel.USERNAME_FIELD: username
})
if created:
user = self.configure_user(user)
else:
try:
user = UserModel._default_manager.get_by_natural_key(username)
except UserModel.DoesNotExist:
pass
return user
def clean_username(self, username):
"""
Performs any cleaning on the "username" prior to using it to get or
create the user object. Returns the cleaned username.
By default, returns the username unchanged.
"""
return username
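    # Illustrative sketch (assumption; MyRemoteUserBackend is hypothetical).
    # A subclass might strip a trusted domain before lookup:
    #
    #     class MyRemoteUserBackend(RemoteUserBackend):
    #         def clean_username(self, username):
    #             return username.partition('@')[0]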
def configure_user(self, user):
"""
Configures a user after creation and returns the updated user.
By default, returns the user unmodified.
"""
return user
| bsd-3-clause | -6,925,748,106,298,386,000 | 37.2125 | 95 | 0.616454 | false |
prospwro/odoo | openerp/report/printscreen/ps_list.py | 381 | 11955 | # -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2009 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
import openerp
from openerp.report.interface import report_int
import openerp.tools as tools
from openerp.tools.safe_eval import safe_eval as eval
from lxml import etree
from openerp.report import render, report_sxw
import locale
import time, os
from operator import itemgetter
from datetime import datetime
class report_printscreen_list(report_int):
def __init__(self, name):
report_int.__init__(self, name)
self.context = {}
self.groupby = []
self.cr=''
def _parse_node(self, root_node):
result = []
for node in root_node:
field_name = node.get('name')
if not eval(str(node.attrib.get('invisible',False)),{'context':self.context}):
if node.tag == 'field':
if field_name in self.groupby:
continue
result.append(field_name)
else:
result.extend(self._parse_node(node))
return result
def _parse_string(self, view):
try:
dom = etree.XML(view.encode('utf-8'))
except Exception:
dom = etree.XML(view)
return self._parse_node(dom)
def create(self, cr, uid, ids, datas, context=None):
if not context:
context={}
self.cr=cr
self.context = context
self.groupby = context.get('group_by',[])
self.groupby_no_leaf = context.get('group_by_no_leaf',False)
registry = openerp.registry(cr.dbname)
model = registry[datas['model']]
model_id = registry['ir.model'].search(cr, uid, [('model','=',model._name)])
model_desc = model._description
if model_id:
model_desc = registry['ir.model'].browse(cr, uid, model_id[0], context).name
self.title = model_desc
datas['ids'] = ids
result = model.fields_view_get(cr, uid, view_type='tree', context=context)
fields_order = self.groupby + self._parse_string(result['arch'])
if self.groupby:
rows = []
def get_groupby_data(groupby = [], domain = []):
records = model.read_group(cr, uid, domain, fields_order, groupby , 0, None, context)
for rec in records:
rec['__group'] = True
rec['__no_leaf'] = self.groupby_no_leaf
rec['__grouped_by'] = groupby[0] if (isinstance(groupby, list) and groupby) else groupby
for f in fields_order:
if f not in rec:
rec.update({f:False})
elif isinstance(rec[f], tuple):
rec[f] = rec[f][1]
rows.append(rec)
inner_groupby = (rec.get('__context', {})).get('group_by',[])
inner_domain = rec.get('__domain', [])
if inner_groupby:
get_groupby_data(inner_groupby, inner_domain)
else:
if self.groupby_no_leaf:
continue
child_ids = model.search(cr, uid, inner_domain)
res = model.read(cr, uid, child_ids, result['fields'].keys(), context)
res.sort(lambda x,y: cmp(ids.index(x['id']), ids.index(y['id'])))
rows.extend(res)
dom = [('id','in',ids)]
if self.groupby_no_leaf and len(ids) and not ids[0]:
dom = datas.get('_domain',[])
get_groupby_data(self.groupby, dom)
else:
rows = model.read(cr, uid, datas['ids'], result['fields'].keys(), context)
ids2 = map(itemgetter('id'), rows) # getting the ids from read result
if datas['ids'] != ids2: # sorted ids were not taken into consideration for print screen
rows_new = []
for id in datas['ids']:
rows_new += [elem for elem in rows if elem['id'] == id]
rows = rows_new
res = self._create_table(uid, datas['ids'], result['fields'], fields_order, rows, context, model_desc)
return self.obj.get(), 'pdf'
def _create_table(self, uid, ids, fields, fields_order, results, context, title=''):
pageSize=[297.0, 210.0]
new_doc = etree.Element("report")
config = etree.SubElement(new_doc, 'config')
def _append_node(name, text):
n = etree.SubElement(config, name)
n.text = text
#_append_node('date', time.strftime('%d/%m/%Y'))
_append_node('date', time.strftime(str(locale.nl_langinfo(locale.D_FMT).replace('%y', '%Y'))))
_append_node('PageSize', '%.2fmm,%.2fmm' % tuple(pageSize))
_append_node('PageWidth', '%.2f' % (pageSize[0] * 2.8346,))
_append_node('PageHeight', '%.2f' %(pageSize[1] * 2.8346,))
_append_node('report-header', title)
registry = openerp.registry(self.cr.dbname)
_append_node('company', registry['res.users'].browse(self.cr,uid,uid).company_id.name)
rpt_obj = registry['res.users']
rml_obj=report_sxw.rml_parse(self.cr, uid, rpt_obj._name,context)
_append_node('header-date', str(rml_obj.formatLang(time.strftime("%Y-%m-%d"),date=True))+' ' + str(time.strftime("%H:%M")))
l = []
t = 0
strmax = (pageSize[0]-40) * 2.8346
temp = []
tsum = []
for i in range(0, len(fields_order)):
temp.append(0)
tsum.append(0)
ince = -1
for f in fields_order:
s = 0
ince += 1
if fields[f]['type'] in ('date','time','datetime','float','integer'):
s = 60
strmax -= s
if fields[f]['type'] in ('float','integer'):
temp[ince] = 1
else:
t += fields[f].get('size', 80) / 28 + 1
l.append(s)
for pos in range(len(l)):
if not l[pos]:
s = fields[fields_order[pos]].get('size', 80) / 28 + 1
l[pos] = strmax * s / t
_append_node('tableSize', ','.join(map(str,l)) )
header = etree.SubElement(new_doc, 'header')
for f in fields_order:
field = etree.SubElement(header, 'field')
field.text = tools.ustr(fields[f]['string'] or '')
lines = etree.SubElement(new_doc, 'lines')
for line in results:
node_line = etree.SubElement(lines, 'row')
count = -1
for f in fields_order:
float_flag = 0
count += 1
if fields[f]['type']=='many2one' and line[f]:
if not line.get('__group'):
line[f] = line[f][1]
if fields[f]['type']=='selection' and line[f]:
for key, value in fields[f]['selection']:
if key == line[f]:
line[f] = value
break
if fields[f]['type'] in ('one2many','many2many') and line[f]:
line[f] = '( '+tools.ustr(len(line[f])) + ' )'
if fields[f]['type'] == 'float' and line[f]:
precision=(('digits' in fields[f]) and fields[f]['digits'][1]) or 2
prec ='%.' + str(precision) +'f'
line[f]=prec%(line[f])
float_flag = 1
if fields[f]['type'] == 'date' and line[f]:
new_d1 = line[f]
if not line.get('__group'):
format = str(locale.nl_langinfo(locale.D_FMT).replace('%y', '%Y'))
d1 = datetime.strptime(line[f],'%Y-%m-%d')
new_d1 = d1.strftime(format)
line[f] = new_d1
if fields[f]['type'] == 'time' and line[f]:
new_d1 = line[f]
if not line.get('__group'):
format = str(locale.nl_langinfo(locale.T_FMT))
d1 = datetime.strptime(line[f], '%H:%M:%S')
new_d1 = d1.strftime(format)
line[f] = new_d1
if fields[f]['type'] == 'datetime' and line[f]:
new_d1 = line[f]
if not line.get('__group'):
format = str(locale.nl_langinfo(locale.D_FMT).replace('%y', '%Y'))+' '+str(locale.nl_langinfo(locale.T_FMT))
d1 = datetime.strptime(line[f], '%Y-%m-%d %H:%M:%S')
new_d1 = d1.strftime(format)
line[f] = new_d1
if line.get('__group'):
col = etree.SubElement(node_line, 'col', para='group', tree='no')
else:
col = etree.SubElement(node_line, 'col', para='yes', tree='no')
# Prevent empty labels in groups
if f == line.get('__grouped_by') and line.get('__group') and not line[f] and not float_flag and not temp[count]:
col.text = line[f] = 'Undefined'
col.set('tree', 'undefined')
if line[f] is not None:
col.text = tools.ustr(line[f] or '')
if float_flag:
col.set('tree','float')
if line.get('__no_leaf') and temp[count] == 1 and f != 'id' and not line['__context']['group_by']:
tsum[count] = float(tsum[count]) + float(line[f])
if not line.get('__group') and f != 'id' and temp[count] == 1:
tsum[count] = float(tsum[count]) + float(line[f])
else:
col.text = '/'
node_line = etree.SubElement(lines, 'row')
for f in range(0, len(fields_order)):
col = etree.SubElement(node_line, 'col', para='group', tree='no')
col.set('tree', 'float')
if tsum[f] is not None:
if tsum[f] != 0.0:
digits = fields[fields_order[f]].get('digits', (16, 2))
prec = '%%.%sf' % (digits[1], )
total = prec % (tsum[f], )
txt = str(total or '')
else:
txt = str(tsum[f] or '')
else:
txt = '/'
if f == 0:
txt ='Total'
col.set('tree','no')
col.text = tools.ustr(txt or '')
transform = etree.XSLT(
etree.parse(os.path.join(tools.config['root_path'],
'addons/base/report/custom_new.xsl')))
rml = etree.tostring(transform(new_doc))
self.obj = render.rml(rml, title=self.title)
self.obj.render()
return True
report_printscreen_list('report.printscreen.list')
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 | 6,119,848,660,432,600,000 | 42.791209 | 132 | 0.487746 | false |
jimb0616/namebench | nb_third_party/jinja2/filters.py | 199 | 22056 | # -*- coding: utf-8 -*-
"""
jinja2.filters
~~~~~~~~~~~~~~
Bundled jinja filters.
:copyright: (c) 2010 by the Jinja Team.
:license: BSD, see LICENSE for more details.
"""
import re
import math
from random import choice
from operator import itemgetter
from itertools import imap, groupby
from jinja2.utils import Markup, escape, pformat, urlize, soft_unicode
from jinja2.runtime import Undefined
from jinja2.exceptions import FilterArgumentError, SecurityError
_word_re = re.compile(r'\w+(?u)')
def contextfilter(f):
"""Decorator for marking context dependent filters. The current
:class:`Context` will be passed as first argument.
"""
f.contextfilter = True
return f
def evalcontextfilter(f):
"""Decorator for marking eval-context dependent filters. An eval
context object is passed as first argument. For more information
about the eval context, see :ref:`eval-context`.
.. versionadded:: 2.4
"""
f.evalcontextfilter = True
return f
def environmentfilter(f):
"""Decorator for marking evironment dependent filters. The current
:class:`Environment` is passed to the filter as first argument.
"""
f.environmentfilter = True
return f
def do_forceescape(value):
"""Enforce HTML escaping. This will probably double escape variables."""
if hasattr(value, '__html__'):
value = value.__html__()
return escape(unicode(value))
@evalcontextfilter
def do_replace(eval_ctx, s, old, new, count=None):
"""Return a copy of the value with all occurrences of a substring
replaced with a new one. The first argument is the substring
that should be replaced, the second is the replacement string.
If the optional third argument ``count`` is given, only the first
``count`` occurrences are replaced:
.. sourcecode:: jinja
{{ "Hello World"|replace("Hello", "Goodbye") }}
-> Goodbye World
{{ "aaaaargh"|replace("a", "d'oh, ", 2) }}
-> d'oh, d'oh, aaargh
"""
if count is None:
count = -1
if not eval_ctx.autoescape:
return unicode(s).replace(unicode(old), unicode(new), count)
if hasattr(old, '__html__') or hasattr(new, '__html__') and \
not hasattr(s, '__html__'):
s = escape(s)
else:
s = soft_unicode(s)
return s.replace(soft_unicode(old), soft_unicode(new), count)
def do_upper(s):
"""Convert a value to uppercase."""
return soft_unicode(s).upper()
def do_lower(s):
"""Convert a value to lowercase."""
return soft_unicode(s).lower()
@evalcontextfilter
def do_xmlattr(_eval_ctx, d, autospace=True):
"""Create an SGML/XML attribute string based on the items in a dict.
All values that are neither `none` nor `undefined` are automatically
escaped:
.. sourcecode:: html+jinja
<ul{{ {'class': 'my_list', 'missing': none,
'id': 'list-%d'|format(variable)}|xmlattr }}>
...
</ul>
Results in something like this:
.. sourcecode:: html
<ul class="my_list" id="list-42">
...
</ul>
    As you can see, it automatically prepends a space in front of the item
    if the filter returned something, unless the second parameter is false.
"""
rv = u' '.join(
u'%s="%s"' % (escape(key), escape(value))
for key, value in d.iteritems()
if value is not None and not isinstance(value, Undefined)
)
if autospace and rv:
rv = u' ' + rv
if _eval_ctx.autoescape:
rv = Markup(rv)
return rv
def do_capitalize(s):
"""Capitalize a value. The first character will be uppercase, all others
lowercase.
"""
return soft_unicode(s).capitalize()
def do_title(s):
"""Return a titlecased version of the value. I.e. words will start with
uppercase letters, all remaining characters are lowercase.
"""
return soft_unicode(s).title()
def do_dictsort(value, case_sensitive=False, by='key'):
"""Sort a dict and yield (key, value) pairs. Because python dicts are
unsorted you may want to use this function to order them by either
key or value:
.. sourcecode:: jinja
{% for item in mydict|dictsort %}
sort the dict by key, case insensitive
        {% for item in mydict|dictsort(true) %}
sort the dict by key, case sensitive
{% for item in mydict|dictsort(false, 'value') %}
sort the dict by key, case insensitive, sorted
normally and ordered by value.
"""
if by == 'key':
pos = 0
elif by == 'value':
pos = 1
else:
raise FilterArgumentError('You can only sort by either '
'"key" or "value"')
def sort_func(item):
value = item[pos]
if isinstance(value, basestring) and not case_sensitive:
value = value.lower()
return value
return sorted(value.items(), key=sort_func)
def do_sort(value, case_sensitive=False):
"""Sort an iterable. If the iterable is made of strings the second
    parameter can be used to control the case sensitivity of the
    comparison, which is disabled by default.
.. sourcecode:: jinja
{% for item in iterable|sort %}
...
{% endfor %}
"""
if not case_sensitive:
def sort_func(item):
if isinstance(item, basestring):
item = item.lower()
return item
else:
sort_func = None
    return sorted(value, key=sort_func)
def do_default(value, default_value=u'', boolean=False):
"""If the value is undefined it will return the passed default value,
otherwise the value of the variable:
.. sourcecode:: jinja
{{ my_variable|default('my_variable is not defined') }}
This will output the value of ``my_variable`` if the variable was
defined, otherwise ``'my_variable is not defined'``. If you want
to use default with variables that evaluate to false you have to
set the second parameter to `true`:
.. sourcecode:: jinja
{{ ''|default('the string was empty', true) }}
"""
if (boolean and not value) or isinstance(value, Undefined):
return default_value
return value
@evalcontextfilter
def do_join(eval_ctx, value, d=u''):
"""Return a string which is the concatenation of the strings in the
sequence. The separator between elements is an empty string per
default, you can define it with the optional parameter:
.. sourcecode:: jinja
{{ [1, 2, 3]|join('|') }}
-> 1|2|3
{{ [1, 2, 3]|join }}
-> 123
"""
    # no automatic escaping?  joining is a lot easier then
if not eval_ctx.autoescape:
return unicode(d).join(imap(unicode, value))
# if the delimiter doesn't have an html representation we check
# if any of the items has. If yes we do a coercion to Markup
if not hasattr(d, '__html__'):
value = list(value)
do_escape = False
for idx, item in enumerate(value):
if hasattr(item, '__html__'):
do_escape = True
else:
value[idx] = unicode(item)
if do_escape:
d = escape(d)
else:
d = unicode(d)
return d.join(value)
    # no html involved, do normal joining
return soft_unicode(d).join(imap(soft_unicode, value))
def do_center(value, width=80):
"""Centers the value in a field of a given width."""
return unicode(value).center(width)
@environmentfilter
def do_first(environment, seq):
"""Return the first item of a sequence."""
try:
return iter(seq).next()
except StopIteration:
return environment.undefined('No first item, sequence was empty.')
@environmentfilter
def do_last(environment, seq):
"""Return the last item of a sequence."""
try:
return iter(reversed(seq)).next()
except StopIteration:
return environment.undefined('No last item, sequence was empty.')
@environmentfilter
def do_random(environment, seq):
"""Return a random item from the sequence."""
try:
return choice(seq)
except IndexError:
return environment.undefined('No random item, sequence was empty.')
def do_filesizeformat(value, binary=False):
"""Format the value like a 'human-readable' file size (i.e. 13 KB,
4.1 MB, 102 bytes, etc). Per default decimal prefixes are used (mega,
giga, etc.), if the second parameter is set to `True` the binary
prefixes are used (mebi, gibi).
"""
bytes = float(value)
base = binary and 1024 or 1000
middle = binary and 'i' or ''
if bytes < base:
return "%d Byte%s" % (bytes, bytes != 1 and 's' or '')
elif bytes < base * base:
return "%.1f K%sB" % (bytes / base, middle)
elif bytes < base * base * base:
return "%.1f M%sB" % (bytes / (base * base), middle)
return "%.1f G%sB" % (bytes / (base * base * base), middle)
def do_pprint(value, verbose=False):
"""Pretty print a variable. Useful for debugging.
With Jinja 1.2 onwards you can pass it a parameter. If this parameter
is truthy the output will be more verbose (this requires `pretty`)
"""
return pformat(value, verbose=verbose)
@evalcontextfilter
def do_urlize(eval_ctx, value, trim_url_limit=None, nofollow=False):
"""Converts URLs in plain text into clickable links.
If you pass the filter an additional integer it will shorten the urls
to that number. Also a third argument exists that makes the urls
"nofollow":
.. sourcecode:: jinja
{{ mytext|urlize(40, true) }}
links are shortened to 40 chars and defined with rel="nofollow"
"""
rv = urlize(value, trim_url_limit, nofollow)
if eval_ctx.autoescape:
rv = Markup(rv)
return rv
def do_indent(s, width=4, indentfirst=False):
"""Return a copy of the passed string, each line indented by
4 spaces. The first line is not indented. If you want to
change the number of spaces or indent the first line too
you can pass additional parameters to the filter:
.. sourcecode:: jinja
{{ mytext|indent(2, true) }}
indent by two spaces and indent the first line too.
"""
indention = u' ' * width
rv = (u'\n' + indention).join(s.splitlines())
if indentfirst:
rv = indention + rv
return rv
def do_truncate(s, length=255, killwords=False, end='...'):
"""Return a truncated copy of the string. The length is specified
with the first parameter which defaults to ``255``. If the second
parameter is ``true`` the filter will cut the text at length. Otherwise
it will try to save the last word. If the text was in fact
truncated it will append an ellipsis sign (``"..."``). If you want a
different ellipsis sign than ``"..."`` you can specify it using the
third parameter.
    .. sourcecode:: jinja
{{ mytext|truncate(300, false, '»') }}
truncate mytext to 300 chars, don't split up words, use a
right pointing double arrow as ellipsis sign.
"""
if len(s) <= length:
return s
elif killwords:
return s[:length] + end
words = s.split(' ')
result = []
m = 0
for word in words:
m += len(word) + 1
if m > length:
break
result.append(word)
result.append(end)
return u' '.join(result)
def do_wordwrap(s, width=79, break_long_words=True):
"""
Return a copy of the string passed to the filter wrapped after
``79`` characters. You can override this default using the first
parameter. If you set the second parameter to `false` Jinja will not
split words apart if they are longer than `width`.
"""
import textwrap
return u'\n'.join(textwrap.wrap(s, width=width, expand_tabs=False,
replace_whitespace=False,
break_long_words=break_long_words))
def do_wordcount(s):
"""Count the words in that string."""
return len(_word_re.findall(s))
def do_int(value, default=0):
"""Convert the value into an integer. If the
conversion doesn't work it will return ``0``. You can
override this default using the first parameter.
"""
try:
return int(value)
except (TypeError, ValueError):
# this quirk is necessary so that "42.23"|int gives 42.
try:
return int(float(value))
except (TypeError, ValueError):
return default
def do_float(value, default=0.0):
"""Convert the value into a floating point number. If the
conversion doesn't work it will return ``0.0``. You can
override this default using the first parameter.
"""
try:
return float(value)
except (TypeError, ValueError):
return default
def do_format(value, *args, **kwargs):
"""
Apply python string formatting on an object:
.. sourcecode:: jinja
{{ "%s - %s"|format("Hello?", "Foo!") }}
-> Hello? - Foo!
"""
if args and kwargs:
raise FilterArgumentError('can\'t handle positional and keyword '
'arguments at the same time')
return soft_unicode(value) % (kwargs or args)
def do_trim(value):
"""Strip leading and trailing whitespace."""
return soft_unicode(value).strip()
def do_striptags(value):
"""Strip SGML/XML tags and replace adjacent whitespace by one space.
"""
if hasattr(value, '__html__'):
value = value.__html__()
return Markup(unicode(value)).striptags()
def do_slice(value, slices, fill_with=None):
"""Slice an iterator and return a list of lists containing
those items. Useful if you want to create a div containing
three ul tags that represent columns:
.. sourcecode:: html+jinja
<div class="columwrapper">
{%- for column in items|slice(3) %}
<ul class="column-{{ loop.index }}">
{%- for item in column %}
<li>{{ item }}</li>
{%- endfor %}
</ul>
{%- endfor %}
</div>
If you pass it a second argument it's used to fill missing
values on the last iteration.
"""
seq = list(value)
length = len(seq)
items_per_slice = length // slices
slices_with_extra = length % slices
offset = 0
for slice_number in xrange(slices):
start = offset + slice_number * items_per_slice
if slice_number < slices_with_extra:
offset += 1
end = offset + (slice_number + 1) * items_per_slice
tmp = seq[start:end]
if fill_with is not None and slice_number >= slices_with_extra:
tmp.append(fill_with)
yield tmp
def do_batch(value, linecount, fill_with=None):
"""
A filter that batches items. It works pretty much like `slice`
just the other way round. It returns a list of lists with the
given number of items. If you provide a second parameter this
is used to fill missing items. See this example:
.. sourcecode:: html+jinja
<table>
{%- for row in items|batch(3, ' ') %}
<tr>
{%- for column in row %}
<td>{{ column }}</td>
{%- endfor %}
</tr>
{%- endfor %}
</table>
"""
result = []
tmp = []
for item in value:
if len(tmp) == linecount:
yield tmp
tmp = []
tmp.append(item)
if tmp:
if fill_with is not None and len(tmp) < linecount:
tmp += [fill_with] * (linecount - len(tmp))
yield tmp
def do_round(value, precision=0, method='common'):
"""Round the number to a given precision. The first
parameter specifies the precision (default is ``0``), the
second the rounding method:
- ``'common'`` rounds either up or down
- ``'ceil'`` always rounds up
- ``'floor'`` always rounds down
If you don't specify a method ``'common'`` is used.
.. sourcecode:: jinja
{{ 42.55|round }}
-> 43.0
{{ 42.55|round(1, 'floor') }}
-> 42.5
Note that even if rounded to 0 precision, a float is returned. If
you need a real integer, pipe it through `int`:
.. sourcecode:: jinja
{{ 42.55|round|int }}
-> 43
"""
if not method in ('common', 'ceil', 'floor'):
raise FilterArgumentError('method must be common, ceil or floor')
if precision < 0:
raise FilterArgumentError('precision must be a postive integer '
'or zero.')
if method == 'common':
return round(value, precision)
func = getattr(math, method)
if precision:
        return func(value * 10 ** precision) / (10 ** precision)
else:
return func(value)
def do_sort(value, reverse=False):
"""Sort a sequence. Per default it sorts ascending, if you pass it
true as first argument it will reverse the sorting.
"""
return sorted(value, reverse=reverse)
@environmentfilter
def do_groupby(environment, value, attribute):
"""Group a sequence of objects by a common attribute.
If you for example have a list of dicts or objects that represent persons
with `gender`, `first_name` and `last_name` attributes and you want to
    group all users by gender you can do something like the following
snippet:
.. sourcecode:: html+jinja
<ul>
{% for group in persons|groupby('gender') %}
<li>{{ group.grouper }}<ul>
{% for person in group.list %}
<li>{{ person.first_name }} {{ person.last_name }}</li>
{% endfor %}</ul></li>
{% endfor %}
</ul>
Additionally it's possible to use tuple unpacking for the grouper and
list:
.. sourcecode:: html+jinja
<ul>
{% for grouper, list in persons|groupby('gender') %}
...
{% endfor %}
</ul>
As you can see the item we're grouping by is stored in the `grouper`
attribute and the `list` contains all the objects that have this grouper
in common.
"""
expr = lambda x: environment.getitem(x, attribute)
return sorted(map(_GroupTuple, groupby(sorted(value, key=expr), expr)))
class _GroupTuple(tuple):
__slots__ = ()
grouper = property(itemgetter(0))
list = property(itemgetter(1))
def __new__(cls, (key, value)):
return tuple.__new__(cls, (key, list(value)))
def do_list(value):
"""Convert the value into a list. If it was a string the returned list
will be a list of characters.
"""
return list(value)
def do_mark_safe(value):
"""Mark the value as safe which means that in an environment with automatic
escaping enabled this variable will not be escaped.
"""
return Markup(value)
def do_mark_unsafe(value):
"""Mark a value as unsafe. This is the reverse operation for :func:`safe`."""
return unicode(value)
def do_reverse(value):
"""Reverse the object or return an iterator the iterates over it the other
way round.
"""
if isinstance(value, basestring):
return value[::-1]
try:
return reversed(value)
except TypeError:
try:
rv = list(value)
rv.reverse()
return rv
except TypeError:
raise FilterArgumentError('argument must be iterable')
@environmentfilter
def do_attr(environment, obj, name):
"""Get an attribute of an object. ``foo|attr("bar")`` works like
``foo["bar"]`` just that always an attribute is returned and items are not
looked up.
See :ref:`Notes on subscriptions <notes-on-subscriptions>` for more details.
"""
try:
name = str(name)
except UnicodeError:
pass
else:
try:
value = getattr(obj, name)
except AttributeError:
pass
else:
if environment.sandboxed and not \
environment.is_safe_attribute(obj, name, value):
return environment.unsafe_undefined(obj, name)
return value
return environment.undefined(obj=obj, name=name)
FILTERS = {
'attr': do_attr,
'replace': do_replace,
'upper': do_upper,
'lower': do_lower,
'escape': escape,
'e': escape,
'forceescape': do_forceescape,
'capitalize': do_capitalize,
'title': do_title,
'default': do_default,
'd': do_default,
'join': do_join,
'count': len,
'dictsort': do_dictsort,
'sort': do_sort,
'length': len,
'reverse': do_reverse,
'center': do_center,
'indent': do_indent,
'title': do_title,
'capitalize': do_capitalize,
'first': do_first,
'last': do_last,
'random': do_random,
'filesizeformat': do_filesizeformat,
'pprint': do_pprint,
'truncate': do_truncate,
'wordwrap': do_wordwrap,
'wordcount': do_wordcount,
'int': do_int,
'float': do_float,
'string': soft_unicode,
'list': do_list,
'urlize': do_urlize,
'format': do_format,
'trim': do_trim,
'striptags': do_striptags,
'slice': do_slice,
'batch': do_batch,
'sum': sum,
'abs': abs,
'round': do_round,
'sort': do_sort,
'groupby': do_groupby,
'safe': do_mark_safe,
'xmlattr': do_xmlattr
}
| apache-2.0 | -6,190,842,606,843,897,000 | 29.213699 | 82 | 0.587459 | false |
butala/pyrsss | pyrsss/emtf/grdio/grd_io.py | 1 | 5556 | """
Read/write tools for nonuniform electric field .grd format.
Matthew Grawe, grawe2 (at) illinois.edu
January 2017
"""
import numpy as np
def next_line(grd_file):
"""
next_line
Function returns the next line in the file
that is not a blank line, unless the line is
'', which is a typical EOF marker.
"""
done = False
while not done:
line = grd_file.readline()
if line == '':
return line, False
elif line.strip():
return line, True
def read_block(grd_file, n_lats):
lats = []
# read+store until we have collected n_lats
go = True
while go:
fline, status = next_line(grd_file)
line = fline.split()
        # the line has lats in it
        lats.extend(np.array(line).astype('float'))
        if len(lats) == n_lats:
go = False
return np.array(lats)
def grd_read(grd_filename):
"""
    Opens the .grd file grd_filename and returns the following:
lon_grid : 1D numpy array of lons
lat_grid : 1D numpy array of lats
time_grid: 1D numpy array of times
DATA : 3D numpy array of the electric field data, such that
the electric field at (lon, lat) for time t
is accessed via DATA[lon, lat, t].
"""
with open(grd_filename, 'rb') as grd_file:
# read the header line
fline, status = next_line(grd_file)
line = fline.split()
lon_res = float(line[0])
lon_west = float(line[1])
n_lons = int(line[2])
lat_res = float(line[3])
lat_south = float(line[4])
n_lats = int(line[5])
DATA = []
times = []
go = True
while go:
# get the time index line
ftline, status = next_line(grd_file)
tline = ftline.split()
t = float(tline[0])
times.append(t)
SLICE = np.zeros([n_lons, n_lats])
for lon_index in range(0, n_lons):
data_slice = read_block(grd_file, n_lats)
SLICE[lon_index, :] = data_slice
DATA.append(SLICE.T)
# current line should have length one to indicate next time index
# make sure, then back up
before = grd_file.tell()
fline, status = next_line(grd_file)
line = fline.split()
if len(line) != 1:
if status == False:
# EOF, leave
break
else:
raise Exception('Unexpected number of lat entries.')
grd_file.seek(before)
DATA = np.array(DATA).T
lon_grid = np.arange(lon_west, lon_west + lon_res*n_lons, lon_res)
lat_grid = np.arange(lat_south, lat_south + lat_res*n_lats, lat_res)
time_grid = np.array(times)
        return lon_grid, lat_grid, time_grid, DATA
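# Illustrative usage of grd_read (the file name is hypothetical):
#   lon_grid, lat_grid, time_grid, DATA = grd_read('efield.grd')
#   DATA[i, j, k] is the field at lon_grid[i], lat_grid[j], time_grid[k]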
def write_lon_block(grd_file, n_lats, data):
"""
len(data) == n_lats should be True
"""
current_index = 0
go1 = True
while go1:
line = ['']*81
go2 = True
internal_index = 0
while go2:
datum = data[current_index]
line[16*internal_index:16*internal_index+16] = ('%.11g' % datum).rjust(16)
current_index += 1
internal_index += 1
if(current_index >= len(data)):
line[80] = '\n'
grd_file.write("".join(line))
go2 = False
go1 = False
elif(internal_index >= 5):
line[80] = '\n'
grd_file.write("".join(line))
go2 = False
def grd_write(grd_filename, lon_grid, lat_grid, time_grid, DATA):
"""
Writes out DATA corresponding to the locations
specified by lon_grid, lat_grid in the .grd format.
lon_grid must have the westmost point as lon_grid[0].
lat_grid must have the southmost point as lat_grid[0].
Assumptions made:
latitude/longitude resolutions are positive
number of latitude/longitude points in header is positive
at least one space must be between each number
data lines have no more than 5 entries
Assumed structure of header line:
# first 16 spaces allocated as longitude resolution
# next 16 spaces allocated as westmost longitude
# next 5 spaces allocated as number of longitude points
# next 11 spaces allocated as latitude resolution
# next 16 spaces allocated as southmost latitude
# next 16 spaces allocated for number of latitude points
# TOTAL: 80 characters
    Assumed structure of time line:
    # 5 blank spaces
    # next 16 spaces allocated for time
    Assumed structure of a data line:
# 16 spaces allocated for data entry
# .. .. ..
"""
with open(grd_filename, 'wb') as grd_file:
# convert the lon grid to -180 to 180 if necessary
lon_grid = np.array(lon_grid)
lon_grid = lon_grid % 360
lon_grid = ((lon_grid + 180) % 360) - 180
lon_res = np.abs(lon_grid[1] - lon_grid[0])
lon_west = lon_grid[0]
n_lons = len(lon_grid)
lat_res = np.abs(lat_grid[1] - lat_grid[0])
lat_south = lat_grid[0]
n_lats = len(lat_grid)
n_times = len(time_grid)
# write the header: 80 characters
header = ['']*81
header[0:16] = ('%.7g' % lon_res).rjust(16)
header[16:32] = ('%.15g' % lon_west).rjust(16)
header[32:37] = str(n_lons).rjust(5)
header[37:48] = ('%.7g' % lat_res).rjust(11)
header[48:64] = ('%.15g' % lat_south).rjust(16)
header[64:80] = str(n_lats).rjust(16)
header[80] = '\n'
header_str = "".join(header)
grd_file.write(header_str)
for i, t in enumerate(time_grid):
# write the time line
timeline = ['']*(16+5)
timeline[5:-1] = ('%.8g' % t).rjust(9)
timeline[-1] = '\n'
timeline_str = "".join(timeline)
grd_file.write(timeline_str)
for j, lon in enumerate(lon_grid):
# write the lon blocks
write_lon_block(grd_file, n_lats, DATA[j, :, i])
grd_file.close() | mit | 1,698,533,633,989,522,400 | 24.721154 | 77 | 0.610151 | false |
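# Minimal round-trip sketch (an assumption: the synthetic grid values and the
# file name 'example.grd' are illustrative only).
if __name__ == '__main__':
    lons = np.arange(-90.0, -80.0, 1.0)
    lats = np.arange(30.0, 40.0, 1.0)
    times = [0.0, 60.0]
    # DATA is indexed as [lon, lat, time], matching grd_read/grd_write above
    DATA = np.zeros([len(lons), len(lats), len(times)])
    grd_write('example.grd', lons, lats, times, DATA)
    lon_g, lat_g, t_g, D = grd_read('example.grd')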
openqt/algorithms | leetcode/python/lc945-minimum-increment-to-make-array-unique.py | 1 | 1062 | # coding=utf-8
import unittest
"""945. Minimum Increment to Make Array Unique
https://leetcode.com/problems/minimum-increment-to-make-array-unique/description/
Given an array of integers A, a _move_ consists of choosing any `A[i]`, and
incrementing it by `1`.
Return the least number of moves to make every value in `A` unique.
**Example 1:**
**Input:** [1,2,2]
**Output:** 1
**Explanation:** After 1 move, the array could be [1, 2, 3].
**Example 2:**
**Input:** [3,2,1,2,1,7]
**Output:** 6
**Explanation:** After 6 moves, the array could be [3, 4, 1, 2, 5, 7].
It can be shown with 5 or less moves that it is impossible for the array to have all unique values.
**Note:**
1. `0 <= A.length <= 40000`
2. `0 <= A[i] < 40000`
Similar Questions:
"""
class Solution(object):
def minIncrementForUnique(self, A):
"""
:type A: List[int]
:rtype: int
"""
def test(self):
pass
if __name__ == "__main__":
unittest.main()
| gpl-3.0 | 6,598,860,312,799,576,000 | 16.847458 | 103 | 0.572505 | false |
esiivola/GPYgradients | GPy/models/gp_grid_regression.py | 6 | 1195 | # Copyright (c) 2012-2014, GPy authors (see AUTHORS.txt).
# Licensed under the BSD 3-clause license (see LICENSE.txt)
# Kurt Cutajar
from ..core import GpGrid
from .. import likelihoods
from .. import kern
class GPRegressionGrid(GpGrid):
"""
Gaussian Process model for grid inputs using Kronecker products
This is a thin wrapper around the models.GpGrid class, with a set of sensible defaults
:param X: input observations
:param Y: observed values
:param kernel: a GPy kernel, defaults to the kron variation of SqExp
:param Norm normalizer: [False]
Normalize Y with the norm given.
If normalizer is False, no normalization will be done
If it is None, we use GaussianNorm(alization)
.. Note:: Multiple independent outputs are allowed using columns of Y
"""
def __init__(self, X, Y, kernel=None, Y_metadata=None, normalizer=None):
if kernel is None:
kernel = kern.RBF(1) # no other kernels implemented so far
likelihood = likelihoods.Gaussian()
super(GPRegressionGrid, self).__init__(X, Y, kernel, likelihood, name='GP Grid regression', Y_metadata=Y_metadata, normalizer=normalizer)
| bsd-3-clause | 800,365,994,982,342,300 | 32.194444 | 145 | 0.693724 | false |
flyfei/python-for-android | python3-alpha/python3-src/Lib/ctypes/test/test_frombuffer.py | 52 | 2485 | from ctypes import *
import array
import gc
import unittest
class X(Structure):
_fields_ = [("c_int", c_int)]
init_called = False
def __init__(self):
self._init_called = True
class Test(unittest.TestCase):
    def test_from_buffer(self):
a = array.array("i", range(16))
x = (c_int * 16).from_buffer(a)
y = X.from_buffer(a)
self.assertEqual(y.c_int, a[0])
self.assertFalse(y.init_called)
self.assertEqual(x[:], a.tolist())
a[0], a[-1] = 200, -200
self.assertEqual(x[:], a.tolist())
self.assertTrue(a in x._objects.values())
self.assertRaises(ValueError,
c_int.from_buffer, a, -1)
expected = x[:]
del a; gc.collect(); gc.collect(); gc.collect()
self.assertEqual(x[:], expected)
self.assertRaises(TypeError,
(c_char * 16).from_buffer, "a" * 16)
    def test_from_buffer_with_offset(self):
a = array.array("i", range(16))
x = (c_int * 15).from_buffer(a, sizeof(c_int))
self.assertEqual(x[:], a.tolist()[1:])
self.assertRaises(ValueError, lambda: (c_int * 16).from_buffer(a, sizeof(c_int)))
self.assertRaises(ValueError, lambda: (c_int * 1).from_buffer(a, 16 * sizeof(c_int)))
def test_from_buffer_copy(self):
a = array.array("i", range(16))
x = (c_int * 16).from_buffer_copy(a)
y = X.from_buffer_copy(a)
self.assertEqual(y.c_int, a[0])
self.assertFalse(y.init_called)
self.assertEqual(x[:], list(range(16)))
a[0], a[-1] = 200, -200
self.assertEqual(x[:], list(range(16)))
self.assertEqual(x._objects, None)
self.assertRaises(ValueError,
c_int.from_buffer, a, -1)
del a; gc.collect(); gc.collect(); gc.collect()
self.assertEqual(x[:], list(range(16)))
x = (c_char * 16).from_buffer_copy(b"a" * 16)
self.assertEqual(x[:], b"a" * 16)
    def test_from_buffer_copy_with_offset(self):
a = array.array("i", range(16))
x = (c_int * 15).from_buffer_copy(a, sizeof(c_int))
self.assertEqual(x[:], a.tolist()[1:])
self.assertRaises(ValueError,
(c_int * 16).from_buffer_copy, a, sizeof(c_int))
self.assertRaises(ValueError,
(c_int * 1).from_buffer_copy, a, 16 * sizeof(c_int))
if __name__ == '__main__':
unittest.main()
| apache-2.0 | -7,650,167,065,506,483,000 | 29.679012 | 93 | 0.538028 | false |
FEniCS/uflacs | test/unit/xtest_ufl_shapes_and_indexing.py | 1 | 3320 | #!/usr/bin/env python
"""
Tests of utilities for dealing with ufl indexing and components vs flattened index spaces.
"""
from ufl import *
from ufl import product
from ufl.permutation import compute_indices
from uflacs.analysis.indexing import (map_indexed_arg_components,
map_component_tensor_arg_components)
from uflacs.analysis.graph_symbols import (map_list_tensor_symbols,
map_transposed_symbols, get_node_symbols)
from uflacs.analysis.graph import build_graph
from operator import eq as equal
def test_map_indexed_arg_components():
W = TensorElement("CG", triangle, 1)
A = Coefficient(W)
i, j = indices(2)
# Ordered indices:
d = map_indexed_arg_components(A[i, j])
assert equal(d, [0, 1, 2, 3])
# Swapped ordering of indices:
d = map_indexed_arg_components(A[j, i])
assert equal(d, [0, 2, 1, 3])
def test_map_indexed_arg_components2():
# This was the previous return type, copied here to preserve the test without having to rewrite
def map_indexed_arg_components2(Aii):
c1, c2 = map_indexed_to_arg_components(Aii)
d = [None]*len(c1)
for k in range(len(c1)):
d[c1[k]] = k
return d
W = TensorElement("CG", triangle, 1)
A = Coefficient(W)
i, j = indices(2)
# Ordered indices:
d = map_indexed_arg_components2(A[i, j])
assert equal(d, [0, 1, 2, 3])
# Swapped ordering of indices:
d = map_indexed_arg_components2(A[j, i])
assert equal(d, [0, 2, 1, 3])
def test_map_componenttensor_arg_components():
W = TensorElement("CG", triangle, 1)
A = Coefficient(W)
i, j = indices(2)
# Ordered indices:
d = map_component_tensor_arg_components(as_tensor(2*A[i, j], (i, j)))
assert equal(d, [0, 1, 2, 3])
# Swapped ordering of indices:
d = map_component_tensor_arg_components(as_tensor(2*A[i, j], (j, i)))
assert equal(d, [0, 2, 1, 3])
def test_map_list_tensor_symbols():
U = FiniteElement("CG", triangle, 1)
u = Coefficient(U)
A = as_tensor(((u+1, u+2, u+3), (u**2+1, u**2+2, u**2+3)))
# Would be nicer to refactor build_graph a bit so we could call map_list_tensor_symbols directly...
G = build_graph([A], DEBUG=False)
s1 = list(get_node_symbols(A, G.e2i, G.V_symbols))
s2 = [get_node_symbols(e, G.e2i, G.V_symbols)[0] for e in (u+1, u+2, u+3, u**2+1, u**2+2, u**2+3)]
assert s1 == s2
def test_map_transposed_symbols():
W = TensorElement("CG", triangle, 1)
w = Coefficient(W)
A = w.T
# Would be nicer to refactor build_graph a bit so we could call map_transposed_symbols directly...
G = build_graph([A], DEBUG=False)
s1 = list(get_node_symbols(A, G.e2i, G.V_symbols))
s2 = list(get_node_symbols(w, G.e2i, G.V_symbols))
s2[1], s2[2] = s2[2], s2[1]
assert s1 == s2
W = TensorElement("CG", tetrahedron, 1)
w = Coefficient(W)
A = w.T
# Would be nicer to refactor build_graph a bit so we could call map_transposed_symbols directly...
G = build_graph([A], DEBUG=False)
s1 = list(get_node_symbols(A, G.e2i, G.V_symbols))
s2 = list(get_node_symbols(w, G.e2i, G.V_symbols))
s2[1], s2[2], s2[5], s2[3], s2[6], s2[7] = s2[3], s2[6], s2[7], s2[1], s2[2], s2[5]
assert s1 == s2
| gpl-3.0 | 6,493,757,585,278,681,000 | 33.947368 | 103 | 0.612651 | false |
nttcom/eclcli | eclcli/rca/v2/user.py | 2 | 4491 | # -*- coding: utf-8 -*-
from eclcli.common import command
from eclcli.common import exceptions
from eclcli.common import utils
from ..rcaclient.common.utils import objectify
class ListUser(command.Lister):
def get_parser(self, prog_name):
parser = super(ListUser, self).get_parser(prog_name)
return parser
def take_action(self, parsed_args):
rca_client = self.app.client_manager.rca
columns = (
'name',
'vpn_endpoints'
)
column_headers = (
'Name',
'VPN Endpoints'
)
data = rca_client.users.list()
return (column_headers,
(utils.get_item_properties(
s, columns, formatters={'vpn_endpoints': utils.format_list_of_dicts}
) for s in data))
class ShowUser(command.ShowOne):
def get_parser(self, prog_name):
parser = super(ShowUser, self).get_parser(prog_name)
parser.add_argument(
'name',
metavar='<name>',
help="User Name for Inter Connect Gateway Service"
)
return parser
def take_action(self, parsed_args):
rca_client = self.app.client_manager.rca
name = parsed_args.name
try:
user = rca_client.users.get(name)
printout = user._info
except exceptions.ClientException as clientexp:
printout = {"code": clientexp.code,
"message": clientexp.message}
columns = utils.get_columns(printout)
data = utils.get_item_properties(
objectify(printout),
columns,
formatters={'vpn_endpoints': utils.format_list_of_dicts})
return columns, data
class CreateUser(command.ShowOne):
def get_parser(self, prog_name):
parser = super(CreateUser, self).get_parser(prog_name)
parser.add_argument(
'--name',
metavar='<name>',
help="User Name for Inter Connect Gateway Service")
parser.add_argument(
'--password',
metavar='<password>',
default=None,
help="User Password for Inter Connect Gateway Service")
return parser
def take_action(self, parsed_args):
rca_client = self.app.client_manager.rca
name = parsed_args.name
password = parsed_args.password
try:
user = rca_client.users.create(name, password)
printout = user._info
except exceptions.ClientException as clientexp:
printout = {"code": clientexp.code,
"message": clientexp.message}
columns = utils.get_columns(printout)
data = utils.get_item_properties(
objectify(printout),
columns,
formatters={'vpn_endpoints': utils.format_list_of_dicts})
return columns, data
class SetUser(command.ShowOne):
def get_parser(self, prog_name):
parser = super(SetUser, self).get_parser(prog_name)
parser.add_argument(
'name',
metavar='<name>',
help="User Name for Inter Connect Gateway Service")
parser.add_argument(
'--password',
metavar='<password>',
default=None,
help="User Password for Inter Connect Gateway Service")
return parser
def take_action(self, parsed_args):
rca_client = self.app.client_manager.rca
name = parsed_args.name
password = parsed_args.password
try:
user = rca_client.users.update(name, password)
printout = user._info
except exceptions.ClientException as clientexp:
printout = {"code": clientexp.code,
"message": clientexp.message}
columns = utils.get_columns(printout)
data = utils.get_item_properties(
objectify(printout),
columns,
formatters={'vpn_endpoints': utils.format_list_of_dicts})
return columns, data
class DeleteUser(command.Command):
def get_parser(self, prog_name):
parser = super(DeleteUser, self).get_parser(prog_name)
parser.add_argument(
'name',
metavar='<name>',
help="User Name for Inter Connect Gateway Service")
return parser
def take_action(self, parsed_args):
rca_client = self.app.client_manager.rca
name = parsed_args.name
rca_client.users.delete(name)
| apache-2.0 | 5,470,362,338,758,852,000 | 31.543478 | 88 | 0.580717 | false |
anderson-81/djangoproject | crud/migrations/0001_initial.py | 1 | 2095 | # -*- coding: utf-8 -*-
# Generated by Django 1.11.2 on 2017-06-27 00:23
from __future__ import unicode_literals
from decimal import Decimal
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Car',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('model', models.CharField(default=b'', max_length=100)),
('plate', models.CharField(default=b'', max_length=7, unique=True)),
('yearCar', models.IntegerField(default=b'')),
('marketVal', models.DecimalField(decimal_places=2, default=Decimal('0.00'), max_digits=12)),
('imageCar', models.ImageField(blank=True, default=b'no_image.png', null=True, upload_to=b'media')),
('description', models.CharField(default=b'', max_length=200)),
],
),
migrations.CreateModel(
name='Customer',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(default=b'', max_length=100)),
('email', models.EmailField(default=b'', max_length=254, unique=True)),
('salary', models.DecimalField(decimal_places=2, default=Decimal('0.00'), max_digits=12)),
('birthday', models.DateField(blank=True, default=b'26/06/1999')),
('gender', models.CharField(choices=[(b'M', b'MALE'), (b'F', b'FEMALE')], default=b'M', max_length=1)),
('imageCustomer', models.ImageField(blank=True, default=b'no_image.png', null=True, upload_to=b'media')),
],
),
migrations.AddField(
model_name='car',
name='customer',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='crud.Customer'),
),
]
| mit | 2,575,349,278,705,911,000 | 43.574468 | 121 | 0.579952 | false |
bigdataelephants/scikit-learn | sklearn/datasets/tests/test_lfw.py | 50 | 6849 | """This test for the LFW requires medium-size data downloading and processing
If the data has not been already downloaded by running the examples,
the tests won't run (skipped).
If the test are run, the first execution will be long (typically a bit
more than a couple of minutes) but as the dataset loader is leveraging
joblib, successive runs will be fast (less than 200ms).
"""
import random
import os
import shutil
import tempfile
import numpy as np
from sklearn.externals import six
try:
try:
from scipy.misc import imsave
except ImportError:
from scipy.misc.pilutil import imsave
except ImportError:
imsave = None
from sklearn.datasets import load_lfw_pairs
from sklearn.datasets import load_lfw_people
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import SkipTest
from sklearn.utils.testing import raises
SCIKIT_LEARN_DATA = tempfile.mkdtemp(prefix="scikit_learn_lfw_test_")
SCIKIT_LEARN_EMPTY_DATA = tempfile.mkdtemp(prefix="scikit_learn_empty_test_")
LFW_HOME = os.path.join(SCIKIT_LEARN_DATA, 'lfw_home')
FAKE_NAMES = [
'Abdelatif_Smith',
'Abhati_Kepler',
'Camara_Alvaro',
'Chen_Dupont',
'John_Lee',
'Lin_Bauman',
'Onur_Lopez',
]
def setup_module():
"""Test fixture run once and common to all tests of this module"""
if imsave is None:
raise SkipTest("PIL not installed.")
if not os.path.exists(LFW_HOME):
os.makedirs(LFW_HOME)
random_state = random.Random(42)
np_rng = np.random.RandomState(42)
# generate some random jpeg files for each person
counts = {}
for name in FAKE_NAMES:
folder_name = os.path.join(LFW_HOME, 'lfw_funneled', name)
if not os.path.exists(folder_name):
os.makedirs(folder_name)
n_faces = np_rng.randint(1, 5)
counts[name] = n_faces
for i in range(n_faces):
file_path = os.path.join(folder_name, name + '_%04d.jpg' % i)
uniface = np_rng.randint(0, 255, size=(250, 250, 3))
try:
imsave(file_path, uniface)
except ImportError:
raise SkipTest("PIL not installed")
# add some random file pollution to test robustness
with open(os.path.join(LFW_HOME, 'lfw_funneled', '.test.swp'), 'wb') as f:
f.write(six.b('Text file to be ignored by the dataset loader.'))
# generate some pairing metadata files using the same format as LFW
with open(os.path.join(LFW_HOME, 'pairsDevTrain.txt'), 'wb') as f:
f.write(six.b("10\n"))
more_than_two = [name for name, count in six.iteritems(counts)
if count >= 2]
for i in range(5):
name = random_state.choice(more_than_two)
first, second = random_state.sample(range(counts[name]), 2)
f.write(six.b('%s\t%d\t%d\n' % (name, first, second)))
for i in range(5):
first_name, second_name = random_state.sample(FAKE_NAMES, 2)
first_index = random_state.choice(np.arange(counts[first_name]))
second_index = random_state.choice(np.arange(counts[second_name]))
f.write(six.b('%s\t%d\t%s\t%d\n' % (first_name, first_index,
second_name, second_index)))
with open(os.path.join(LFW_HOME, 'pairsDevTest.txt'), 'wb') as f:
f.write(six.b("Fake place holder that won't be tested"))
with open(os.path.join(LFW_HOME, 'pairs.txt'), 'wb') as f:
f.write(six.b("Fake place holder that won't be tested"))
def teardown_module():
"""Test fixture (clean up) run once after all tests of this module"""
if os.path.isdir(SCIKIT_LEARN_DATA):
shutil.rmtree(SCIKIT_LEARN_DATA)
if os.path.isdir(SCIKIT_LEARN_EMPTY_DATA):
shutil.rmtree(SCIKIT_LEARN_EMPTY_DATA)
@raises(IOError)
def test_load_empty_lfw_people():
load_lfw_people(data_home=SCIKIT_LEARN_EMPTY_DATA)
def test_load_fake_lfw_people():
lfw_people = load_lfw_people(data_home=SCIKIT_LEARN_DATA,
min_faces_per_person=3)
    # The data is cropped around the center as a rectangular bounding box
    # around the face. Colors are converted to gray levels:
assert_equal(lfw_people.images.shape, (10, 62, 47))
assert_equal(lfw_people.data.shape, (10, 2914))
# the target is array of person integer ids
assert_array_equal(lfw_people.target, [2, 0, 1, 0, 2, 0, 2, 1, 1, 2])
# names of the persons can be found using the target_names array
expected_classes = ['Abdelatif Smith', 'Abhati Kepler', 'Onur Lopez']
assert_array_equal(lfw_people.target_names, expected_classes)
    # It is possible to ask for the original data without any cropping or color
    # conversion and no limit on the number of pictures per person
lfw_people = load_lfw_people(data_home=SCIKIT_LEARN_DATA,
resize=None, slice_=None, color=True)
assert_equal(lfw_people.images.shape, (17, 250, 250, 3))
# the ids and class names are the same as previously
assert_array_equal(lfw_people.target,
[0, 0, 1, 6, 5, 6, 3, 6, 0, 3, 6, 1, 2, 4, 5, 1, 2])
assert_array_equal(lfw_people.target_names,
['Abdelatif Smith', 'Abhati Kepler', 'Camara Alvaro',
'Chen Dupont', 'John Lee', 'Lin Bauman', 'Onur Lopez'])
@raises(ValueError)
def test_load_fake_lfw_people_too_restrictive():
load_lfw_people(data_home=SCIKIT_LEARN_DATA, min_faces_per_person=100)
@raises(IOError)
def test_load_empty_lfw_pairs():
load_lfw_pairs(data_home=SCIKIT_LEARN_EMPTY_DATA)
def test_load_fake_lfw_pairs():
lfw_pairs_train = load_lfw_pairs(data_home=SCIKIT_LEARN_DATA)
    # The data is cropped around the center as a rectangular bounding box
    # around the face. Colors are converted to gray levels:
assert_equal(lfw_pairs_train.pairs.shape, (10, 2, 62, 47))
# the target is whether the person is the same or not
assert_array_equal(lfw_pairs_train.target, [1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
# names of the persons can be found using the target_names array
expected_classes = ['Different persons', 'Same person']
assert_array_equal(lfw_pairs_train.target_names, expected_classes)
    # It is possible to ask for the original data without any cropping or color
    # conversion
lfw_pairs_train = load_lfw_pairs(data_home=SCIKIT_LEARN_DATA,
resize=None, slice_=None, color=True)
assert_equal(lfw_pairs_train.pairs.shape, (10, 2, 250, 250, 3))
# the ids and class names are the same as previously
assert_array_equal(lfw_pairs_train.target, [1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
assert_array_equal(lfw_pairs_train.target_names, expected_classes)
| bsd-3-clause | 30,111,154,124,737,570 | 37.05 | 79 | 0.649146 | false |
saghul/aiohttp | aiohttp/helpers.py | 1 | 10084 | """Various helper functions"""
__all__ = ['BasicAuth', 'FormData', 'parse_mimetype']
import base64
import binascii
import io
import os
import uuid
import urllib.parse
from collections import namedtuple
from wsgiref.handlers import format_date_time
from . import hdrs, multidict
class BasicAuth(namedtuple('BasicAuth', ['login', 'password', 'encoding'])):
"""Http basic authentication helper.
:param str login: Login
:param str password: Password
:param str encoding: (optional) encoding ('latin1' by default)
"""
def __new__(cls, login, password='', encoding='latin1'):
if login is None:
raise ValueError('None is not allowed as login value')
if password is None:
raise ValueError('None is not allowed as password value')
return super().__new__(cls, login, password, encoding)
def encode(self):
"""Encode credentials."""
creds = ('%s:%s' % (self.login, self.password)).encode(self.encoding)
return 'Basic %s' % base64.b64encode(creds).decode(self.encoding)
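# Illustrative check of the encoding above (credentials are arbitrary):
#   BasicAuth('user', 'pass').encode() == 'Basic dXNlcjpwYXNz'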
class FormData:
"""Helper class for multipart/form-data and
application/x-www-form-urlencoded body generation."""
def __init__(self, fields=()):
self._fields = []
self._is_multipart = False
self._boundary = uuid.uuid4().hex
if isinstance(fields, dict):
fields = list(fields.items())
elif not isinstance(fields, (list, tuple)):
fields = (fields,)
self.add_fields(*fields)
@property
def is_multipart(self):
return self._is_multipart
@property
def content_type(self):
if self._is_multipart:
return 'multipart/form-data; boundary=%s' % self._boundary
else:
return 'application/x-www-form-urlencoded'
def add_field(self, name, value, *, content_type=None, filename=None,
content_transfer_encoding=None):
if isinstance(value, io.IOBase):
self._is_multipart = True
type_options = multidict.MultiDict({'name': name})
if filename is None and isinstance(value, io.IOBase):
filename = guess_filename(value, name)
if filename is not None:
type_options['filename'] = filename
self._is_multipart = True
headers = {}
if content_type is not None:
headers[hdrs.CONTENT_TYPE] = content_type
self._is_multipart = True
if content_transfer_encoding is not None:
headers[hdrs.CONTENT_TRANSFER_ENCODING] = content_transfer_encoding
self._is_multipart = True
            supported_transfer_encoding = {
                'base64': binascii.b2a_base64,
                'quoted-printable': binascii.b2a_qp
            }
            conv = supported_transfer_encoding.get(content_transfer_encoding)
if conv is not None:
value = conv(value)
self._fields.append((type_options, headers, value))
def add_fields(self, *fields):
to_add = list(fields)
while to_add:
rec = to_add.pop(0)
if isinstance(rec, io.IOBase):
k = guess_filename(rec, 'unknown')
self.add_field(k, rec)
elif isinstance(rec,
(multidict.MultiDictProxy,
multidict.MultiDict)):
to_add.extend(rec.items())
elif isinstance(rec, (list, tuple)) and len(rec) == 2:
k, fp = rec
self.add_field(k, fp)
else:
raise TypeError('Only io.IOBase, multidict and (name, file) '
'pairs allowed, use .add_field() for passing '
'more complex parameters')
def _gen_form_urlencoded(self, encoding):
# form data (x-www-form-urlencoded)
data = []
for type_options, _, value in self._fields:
data.append((type_options['name'], value))
data = urllib.parse.urlencode(data, doseq=True)
return data.encode(encoding)
def _gen_form_data(self, encoding='utf-8', chunk_size=8192):
"""Encode a list of fields using the multipart/form-data MIME format"""
boundary = self._boundary.encode('latin1')
for type_options, headers, value in self._fields:
yield b'--' + boundary + b'\r\n'
out_headers = []
opts = '; '.join('{0[0]}="{0[1]}"'.format(i)
for i in type_options.items())
out_headers.append(
('Content-Disposition: form-data; ' + opts).encode(encoding)
+ b'\r\n')
for k, v in headers.items():
out_headers.append('{}: {}\r\n'.format(k, v).encode(encoding))
out_headers.append(b'\r\n')
yield b''.join(out_headers)
if isinstance(value, str):
yield value.encode(encoding)
else:
if isinstance(value, (bytes, bytearray)):
value = io.BytesIO(value)
while True:
chunk = value.read(chunk_size)
if not chunk:
break
yield str_to_bytes(chunk, encoding)
yield b'\r\n'
yield b'--' + boundary + b'--\r\n'
def __call__(self, encoding):
if self._is_multipart:
return self._gen_form_data(encoding)
else:
return self._gen_form_urlencoded(encoding)
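# Illustrative usage (field names and values are arbitrary):
#   form = FormData({'name': 'value'})
#   form.content_type    # 'application/x-www-form-urlencoded'
#   body = form('utf-8') # urlencoded bytes, or a generator when multipart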
def parse_mimetype(mimetype):
"""Parses a MIME type into its components.
:param str mimetype: MIME type
:returns: 4 element tuple for MIME type, subtype, suffix and parameters
:rtype: tuple
Example:
>>> parse_mimetype('text/html; charset=utf-8')
('text', 'html', '', {'charset': 'utf-8'})
"""
if not mimetype:
return '', '', '', {}
parts = mimetype.split(';')
params = []
for item in parts[1:]:
if not item:
continue
key, value = item.split('=', 1) if '=' in item else (item, '')
params.append((key.lower().strip(), value.strip(' "')))
params = dict(params)
fulltype = parts[0].strip().lower()
if fulltype == '*':
fulltype = '*/*'
mtype, stype = fulltype.split('/', 1) \
if '/' in fulltype else (fulltype, '')
stype, suffix = stype.split('+', 1) if '+' in stype else (stype, '')
return mtype, stype, suffix, params
def str_to_bytes(s, encoding='utf-8'):
if isinstance(s, str):
return s.encode(encoding)
return s
def guess_filename(obj, default=None):
name = getattr(obj, 'name', None)
if name and name[0] != '<' and name[-1] != '>':
return os.path.split(name)[-1]
return default
def parse_remote_addr(forward):
if isinstance(forward, str):
# we only took the last one
# http://en.wikipedia.org/wiki/X-Forwarded-For
if ',' in forward:
forward = forward.rsplit(',', 1)[-1].strip()
# find host and port on ipv6 address
if '[' in forward and ']' in forward:
host = forward.split(']')[0][1:].lower()
elif ':' in forward and forward.count(':') == 1:
host = forward.split(':')[0].lower()
else:
host = forward
forward = forward.split(']')[-1]
if ':' in forward and forward.count(':') == 1:
port = forward.split(':', 1)[1]
else:
port = 80
remote = (host, port)
else:
remote = forward
return remote[0], str(remote[1])
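# Illustrative: with an X-Forwarded-For style value only the last hop is used:
#   parse_remote_addr('10.0.0.1, 203.0.113.7:8080') == ('203.0.113.7', '8080')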
def atoms(message, environ, response, transport, request_time):
"""Gets atoms for log formatting."""
if message:
r = '{} {} HTTP/{}.{}'.format(
message.method, message.path,
message.version[0], message.version[1])
headers = message.headers
else:
r = ''
headers = {}
remote_addr = parse_remote_addr(
transport.get_extra_info('addr', '127.0.0.1'))
atoms = {
'h': remote_addr[0],
'l': '-',
'u': '-',
't': format_date_time(None),
'r': r,
's': str(getattr(response, 'status', '')),
'b': str(getattr(response, 'output_length', '')),
'f': headers.get(hdrs.REFERER, '-'),
'a': headers.get(hdrs.USER_AGENT, '-'),
'T': str(int(request_time)),
'D': str(request_time).split('.', 1)[-1][:5],
'p': "<%s>" % os.getpid()
}
return atoms
class SafeAtoms(dict):
"""Copy from gunicorn"""
def __init__(self, atoms, i_headers, o_headers):
dict.__init__(self)
self._i_headers = i_headers
self._o_headers = o_headers
for key, value in atoms.items():
self[key] = value.replace('"', '\\"')
def __getitem__(self, k):
if k.startswith('{'):
if k.endswith('}i'):
headers = self._i_headers
elif k.endswith('}o'):
headers = self._o_headers
else:
headers = None
if headers is not None:
return headers.get(k[1:-2], '-')
if k in self:
return super(SafeAtoms, self).__getitem__(k)
else:
return '-'
class reify(object):
""" Use as a class method decorator. It operates almost exactly like the
Python ``@property`` decorator, but it puts the result of the method it
decorates into the instance dict after the first call, effectively
replacing the function it decorates with an instance variable. It is, in
Python parlance, a non-data descriptor. """
def __init__(self, wrapped):
self.wrapped = wrapped
try:
self.__doc__ = wrapped.__doc__
except: # pragma: no cover
pass
def __get__(self, inst, objtype=None):
if inst is None: # pragma: no cover
return self
val = self.wrapped(inst)
setattr(inst, self.wrapped.__name__, val)
return val
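# Illustrative usage (class, attribute and helper names are arbitrary):
#   class Conn:
#       @reify
#       def dsn(self):
#           return compute_expensive_dsn()  # computed once, then cached on the instance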
| apache-2.0 | -5,142,592,529,588,322,000 | 29.282282 | 79 | 0.540361 | false |
tthtlc/volatility | volatility/plugins/gui/gahti.py | 44 | 2226 | # Volatility
# Copyright (C) 2007-2013 Volatility Foundation
# Copyright (C) 2010,2011,2012 Michael Hale Ligh <[email protected]>
#
# This file is part of Volatility.
#
# Volatility is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# Volatility is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Volatility. If not, see <http://www.gnu.org/licenses/>.
#
import volatility.utils as utils
import volatility.plugins.gui.constants as consts
import volatility.plugins.gui.sessions as sessions
class Gahti(sessions.Sessions):
"""Dump the USER handle type information"""
def render_text(self, outfd, data):
profile = utils.load_as(self._config).profile
# Get the OS version being analyzed
version = (profile.metadata.get('major', 0),
profile.metadata.get('minor', 0))
# Choose which USER handle enum to use
if version >= (6, 1):
handle_types = consts.HANDLE_TYPE_ENUM_SEVEN
else:
handle_types = consts.HANDLE_TYPE_ENUM
self.table_header(outfd,
[("Session", "8"),
("Type", "20"),
("Tag", "8"),
("fnDestroy", "[addrpad]"),
("Flags", ""),
])
for session in data:
gahti = session.find_gahti()
if gahti:
for i, h in handle_types.items():
self.table_row(outfd,
session.SessionId,
h,
gahti.types[i].dwAllocTag,
gahti.types[i].fnDestroy,
gahti.types[i].bObjectCreateFlags)
| gpl-2.0 | 2,695,576,877,743,722,500 | 36.728814 | 72 | 0.570979 | false |
moto-timo/ironpython3 | Src/StdLib/Lib/getpass.py | 86 | 6069 | """Utilities to get a password and/or the current user name.
getpass(prompt[, stream]) - Prompt for a password, with echo turned off.
getuser() - Get the user name from the environment or password database.
GetPassWarning - This UserWarning is issued when getpass() cannot prevent
echoing of the password contents while reading.
On Windows, the msvcrt module will be used.
On the Mac EasyDialogs.AskPassword is used, if available.
"""
# Authors: Piers Lauder (original)
# Guido van Rossum (Windows support and cleanup)
# Gregory P. Smith (tty support & GetPassWarning)
import contextlib
import io
import os
import sys
import warnings
__all__ = ["getpass","getuser","GetPassWarning"]
class GetPassWarning(UserWarning): pass
def unix_getpass(prompt='Password: ', stream=None):
"""Prompt for a password, with echo turned off.
Args:
prompt: Written on stream to ask for the input. Default: 'Password: '
stream: A writable file object to display the prompt. Defaults to
the tty. If no tty is available defaults to sys.stderr.
Returns:
The seKr3t input.
Raises:
EOFError: If our input tty or stdin was closed.
GetPassWarning: When we were unable to turn echo off on the input.
Always restores terminal settings before returning.
"""
passwd = None
with contextlib.ExitStack() as stack:
try:
# Always try reading and writing directly on the tty first.
fd = os.open('/dev/tty', os.O_RDWR|os.O_NOCTTY)
tty = io.FileIO(fd, 'w+')
stack.enter_context(tty)
input = io.TextIOWrapper(tty)
stack.enter_context(input)
if not stream:
stream = input
except OSError as e:
# If that fails, see if stdin can be controlled.
stack.close()
try:
fd = sys.stdin.fileno()
except (AttributeError, ValueError):
fd = None
passwd = fallback_getpass(prompt, stream)
input = sys.stdin
if not stream:
stream = sys.stderr
if fd is not None:
try:
old = termios.tcgetattr(fd) # a copy to save
new = old[:]
new[3] &= ~termios.ECHO # 3 == 'lflags'
tcsetattr_flags = termios.TCSAFLUSH
if hasattr(termios, 'TCSASOFT'):
tcsetattr_flags |= termios.TCSASOFT
try:
termios.tcsetattr(fd, tcsetattr_flags, new)
passwd = _raw_input(prompt, stream, input=input)
finally:
termios.tcsetattr(fd, tcsetattr_flags, old)
stream.flush() # issue7208
except termios.error:
if passwd is not None:
# _raw_input succeeded. The final tcsetattr failed. Reraise
# instead of leaving the terminal in an unknown state.
raise
# We can't control the tty or stdin. Give up and use normal IO.
# fallback_getpass() raises an appropriate warning.
if stream is not input:
# clean up unused file objects before blocking
stack.close()
passwd = fallback_getpass(prompt, stream)
stream.write('\n')
return passwd
def win_getpass(prompt='Password: ', stream=None):
"""Prompt for password with echo off, using Windows getch()."""
if sys.stdin is not sys.__stdin__:
return fallback_getpass(prompt, stream)
import msvcrt
for c in prompt:
msvcrt.putwch(c)
pw = ""
while 1:
c = msvcrt.getwch()
if c == '\r' or c == '\n':
break
if c == '\003':
raise KeyboardInterrupt
if c == '\b':
pw = pw[:-1]
else:
pw = pw + c
msvcrt.putwch('\r')
msvcrt.putwch('\n')
return pw
def fallback_getpass(prompt='Password: ', stream=None):
warnings.warn("Can not control echo on the terminal.", GetPassWarning,
stacklevel=2)
if not stream:
stream = sys.stderr
print("Warning: Password input may be echoed.", file=stream)
return _raw_input(prompt, stream)
def _raw_input(prompt="", stream=None, input=None):
# This doesn't save the string in the GNU readline history.
if not stream:
stream = sys.stderr
if not input:
input = sys.stdin
prompt = str(prompt)
if prompt:
try:
stream.write(prompt)
except UnicodeEncodeError:
# Use replace error handler to get as much as possible printed.
prompt = prompt.encode(stream.encoding, 'replace')
prompt = prompt.decode(stream.encoding)
stream.write(prompt)
stream.flush()
# NOTE: The Python C API calls flockfile() (and unlock) during readline.
line = input.readline()
if not line:
raise EOFError
if line[-1] == '\n':
line = line[:-1]
return line
def getuser():
"""Get the username from the environment or password database.
First try various environment variables, then the password
database. This works on Windows as long as USERNAME is set.
"""
for name in ('LOGNAME', 'USER', 'LNAME', 'USERNAME'):
user = os.environ.get(name)
if user:
return user
# If this fails, the exception will "explain" why
import pwd
return pwd.getpwuid(os.getuid())[0]
# Bind the name getpass to the appropriate function
try:
import termios
# it's possible there is an incompatible termios from the
# McMillan Installer, make sure we have a UNIX-compatible termios
termios.tcgetattr, termios.tcsetattr
except (ImportError, AttributeError):
try:
import msvcrt
except ImportError:
getpass = fallback_getpass
else:
getpass = win_getpass
else:
getpass = unix_getpass
| apache-2.0 | 803,129,823,616,422,000 | 31.629032 | 81 | 0.588235 | false |
vanpact/scipy | scipy/signal/cont2discrete.py | 68 | 5033 | """
Continuous to discrete transformations for state-space and transfer function.
"""
from __future__ import division, print_function, absolute_import
# Author: Jeffrey Armstrong <[email protected]>
# March 29, 2011
import numpy as np
from scipy import linalg
from .ltisys import tf2ss, ss2tf, zpk2ss, ss2zpk
__all__ = ['cont2discrete']
def cont2discrete(sys, dt, method="zoh", alpha=None):
"""
Transform a continuous to a discrete state-space system.
Parameters
----------
sys : a tuple describing the system.
The following gives the number of elements in the tuple and
the interpretation:
* 2: (num, den)
* 3: (zeros, poles, gain)
* 4: (A, B, C, D)
dt : float
The discretization time step.
method : {"gbt", "bilinear", "euler", "backward_diff", "zoh"}, optional
Which method to use:
* gbt: generalized bilinear transformation
* bilinear: Tustin's approximation ("gbt" with alpha=0.5)
* euler: Euler (or forward differencing) method ("gbt" with alpha=0)
* backward_diff: Backwards differencing ("gbt" with alpha=1.0)
* zoh: zero-order hold (default)
alpha : float within [0, 1], optional
The generalized bilinear transformation weighting parameter, which
        should only be specified with method="gbt", and is ignored otherwise.
Returns
-------
sysd : tuple containing the discrete system
Based on the input type, the output will be of the form
* (num, den, dt) for transfer function input
* (zeros, poles, gain, dt) for zeros-poles-gain input
* (A, B, C, D, dt) for state-space system input
Notes
-----
By default, the routine uses a Zero-Order Hold (zoh) method to perform
the transformation. Alternatively, a generalized bilinear transformation
may be used, which includes the common Tustin's bilinear approximation,
an Euler's method technique, or a backwards differencing technique.
The Zero-Order Hold (zoh) method is based on [1]_, the generalized bilinear
approximation is based on [2]_ and [3]_.
References
----------
.. [1] http://en.wikipedia.org/wiki/Discretization#Discretization_of_linear_state_space_models
.. [2] http://techteach.no/publications/discretetime_signals_systems/discrete.pdf
.. [3] G. Zhang, X. Chen, and T. Chen, Digital redesign via the generalized
bilinear transformation, Int. J. Control, vol. 82, no. 4, pp. 741-754,
2009.
(http://www.ece.ualberta.ca/~gfzhang/research/ZCC07_preprint.pdf)
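
    Examples
    --------
    A minimal sketch discretizing a double integrator with a zero-order
    hold (the matrices here are illustrative assumptions, not taken from
    this module):

    >>> import numpy as np
    >>> from scipy.signal import cont2discrete
    >>> a = np.array([[0., 1.], [0., 0.]])
    >>> b = np.array([[0.], [1.]])
    >>> c = np.eye(2)
    >>> d = np.zeros((2, 1))
    >>> ad, bd, cd, dd, dt = cont2discrete((a, b, c, d), 0.1, method='zoh')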
"""
if len(sys) == 2:
sysd = cont2discrete(tf2ss(sys[0], sys[1]), dt, method=method,
alpha=alpha)
return ss2tf(sysd[0], sysd[1], sysd[2], sysd[3]) + (dt,)
elif len(sys) == 3:
sysd = cont2discrete(zpk2ss(sys[0], sys[1], sys[2]), dt, method=method,
alpha=alpha)
return ss2zpk(sysd[0], sysd[1], sysd[2], sysd[3]) + (dt,)
elif len(sys) == 4:
a, b, c, d = sys
else:
raise ValueError("First argument must either be a tuple of 2 (tf), "
"3 (zpk), or 4 (ss) arrays.")
if method == 'gbt':
if alpha is None:
raise ValueError("Alpha parameter must be specified for the "
"generalized bilinear transform (gbt) method")
elif alpha < 0 or alpha > 1:
raise ValueError("Alpha parameter must be within the interval "
"[0,1] for the gbt method")
if method == 'gbt':
# This parameter is used repeatedly - compute once here
ima = np.eye(a.shape[0]) - alpha*dt*a
ad = linalg.solve(ima, np.eye(a.shape[0]) + (1.0-alpha)*dt*a)
bd = linalg.solve(ima, dt*b)
# Similarly solve for the output equation matrices
cd = linalg.solve(ima.transpose(), c.transpose())
cd = cd.transpose()
dd = d + alpha*np.dot(c, bd)
elif method == 'bilinear' or method == 'tustin':
return cont2discrete(sys, dt, method="gbt", alpha=0.5)
elif method == 'euler' or method == 'forward_diff':
return cont2discrete(sys, dt, method="gbt", alpha=0.0)
elif method == 'backward_diff':
return cont2discrete(sys, dt, method="gbt", alpha=1.0)
elif method == 'zoh':
# Build an exponential matrix
em_upper = np.hstack((a, b))
# Need to stack zeros under the a and b matrices
em_lower = np.hstack((np.zeros((b.shape[1], a.shape[0])),
np.zeros((b.shape[1], b.shape[1]))))
em = np.vstack((em_upper, em_lower))
ms = linalg.expm(dt * em)
# Dispose of the lower rows
ms = ms[:a.shape[0], :]
ad = ms[:, 0:a.shape[1]]
bd = ms[:, a.shape[1]:]
cd = c
dd = d
else:
raise ValueError("Unknown transformation method '%s'" % method)
return ad, bd, cd, dd, dt
| bsd-3-clause | 1,442,810,713,257,586,700 | 34.443662 | 98 | 0.591496 | false |
ClaudiaSaxer/PlasoScaffolder | src/plasoscaffolder/bll/mappings/formatter_init_mapping.py | 1 | 1250 | # -*- coding: utf-8 -*-
"""Class representing the mapper for the formatter init files."""
from plasoscaffolder.bll.mappings import base_mapping_helper
from plasoscaffolder.bll.mappings import base_sqliteplugin_mapping
from plasoscaffolder.model import init_data_model
class FormatterInitMapping(
base_sqliteplugin_mapping.BaseSQLitePluginMapper):
"""Class representing the formatter init mapper."""
_FORMATTER_INIT_TEMPLATE = 'formatter_init_template.jinja2'
def __init__(self, mapping_helper: base_mapping_helper.BaseMappingHelper):
"""Initializing the init mapper class.
Args:
mapping_helper (base_mapping_helper.BaseMappingHelper): the helper class
for the mapping
"""
super().__init__()
self._helper = mapping_helper
def GetRenderedTemplate(
self,
data: init_data_model.InitDataModel) -> str:
"""Retrieves the formatter.
Args:
data (init_data_model.InitDataModel): the data for init file
Returns:
str: the rendered template
"""
context = {'plugin_name': data.plugin_name,
'is_create_template': data.is_create_template}
rendered = self._helper.RenderTemplate(
self._FORMATTER_INIT_TEMPLATE, context)
return rendered
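# A minimal usage sketch (assumes `helper` is an already-constructed
# BaseMappingHelper subclass instance and `data` an InitDataModel instance;
# construction details are omitted here):
#
#     mapper = FormatterInitMapping(helper)
#     rendered = mapper.GetRenderedTemplate(data)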
| apache-2.0 | 2,401,879,130,831,245,300 | 31.051282 | 78 | 0.7008 | false |
nwhidden/ND101-Deep-Learning | transfer-learning/tensorflow_vgg/utils.py | 145 | 1972 | import skimage
import skimage.io
import skimage.transform
import numpy as np
# synset = [l.strip() for l in open('synset.txt').readlines()]
# returns image of shape [224, 224, 3]
# [height, width, depth]
def load_image(path):
# load image
img = skimage.io.imread(path)
img = img / 255.0
assert (0 <= img).all() and (img <= 1.0).all()
# print "Original Image Shape: ", img.shape
# we crop image from center
short_edge = min(img.shape[:2])
yy = int((img.shape[0] - short_edge) / 2)
xx = int((img.shape[1] - short_edge) / 2)
crop_img = img[yy: yy + short_edge, xx: xx + short_edge]
# resize to 224, 224
resized_img = skimage.transform.resize(crop_img, (224, 224), mode='constant')
return resized_img
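# A minimal usage sketch (assumes a local image path; illustrative only):
#
#     img = load_image("./test_data/starry_night.jpg")
#     batch = img.reshape((1, 224, 224, 3))  # batch shape for VGG-style nets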
# returns the top1 string
def print_prob(prob, file_path):
synset = [l.strip() for l in open(file_path).readlines()]
# print prob
pred = np.argsort(prob)[::-1]
# Get top1 label
top1 = synset[pred[0]]
print(("Top1: ", top1, prob[pred[0]]))
# Get top5 label
top5 = [(synset[pred[i]], prob[pred[i]]) for i in range(5)]
print(("Top5: ", top5))
return top1
def load_image2(path, height=None, width=None):
# load image
img = skimage.io.imread(path)
img = img / 255.0
if height is not None and width is not None:
ny = height
nx = width
elif height is not None:
ny = height
        nx = int(round(img.shape[1] * ny / img.shape[0]))
elif width is not None:
nx = width
        ny = int(round(img.shape[0] * nx / img.shape[1]))
else:
ny = img.shape[0]
nx = img.shape[1]
return skimage.transform.resize(img, (ny, nx), mode='constant')
def test():
img = skimage.io.imread("./test_data/starry_night.jpg")
ny = 300
    nx = int(round(img.shape[1] * ny / img.shape[0]))
img = skimage.transform.resize(img, (ny, nx), mode='constant')
skimage.io.imsave("./test_data/test/output.jpg", img)
if __name__ == "__main__":
test()
| mit | -3,846,405,492,483,713,500 | 26.388889 | 81 | 0.592292 | false |
cs243iitg/vehicle-webapp | webapp/vms/admin_views.py | 1 | 18451 | from django.shortcuts import render, render_to_response
from django.http import HttpResponse, HttpResponseRedirect
from django.views import generic
from django.core.context_processors import csrf
from django.views.decorators.csrf import csrf_protect, csrf_exempt
from django.contrib import auth
from django.contrib.auth.models import User
from django.contrib import messages
from django.template import RequestContext
from django.contrib.auth.decorators import login_required
from django.contrib.admin.views.decorators import staff_member_required
from django.shortcuts import get_object_or_404
from .forms import TheftForm, StudentVehicleForm, PersonPassForm, BusTimingForm, EmployeeVehicleForm, DocumentForm
from .models import TheftReport, StudentVehicle, EmployeeVehicle, BusTiming, Guard, Place, ParkingSlot, PersonPass, OnDutyGuard, Gate, StudentCycle
from datetime import datetime
from django.forms.models import model_to_dict
from itertools import chain
import os
from django.conf import settings
#------------------------------------------------------------
# User Authentication
#------------------------------------------------------------
@login_required(login_url="/vms/")
def cancel_theft_report(request, theft_report_id):
"""
Cancels theft report with specified id
"""
theft_report = TheftReport.objects.get(id=theft_report_id)
if request.user == theft_report.reporter:
theft_report.delete()
return HttpResponseRedirect("/users/user_theft_reports")
#------------------------------------------------------------
# Student Vehicle Registration
#------------------------------------------------------------
@login_required(login_url="/vms/")
def cancel_student_vehicle_registration(request, student_vehicle_id):
"""
Cancels student's vehicle registration of specified id
"""
student_vehicle = StudentVehicle.objects.get(id=student_vehicle_id)
if request.user == student_vehicle.registered_in_the_name_of:
student_vehicle.delete()
return HttpResponseRedirect("/vms/users/your-vehicle-registrations")
@login_required(login_url="/vms/")
def block_passes(request):
"""
blocks pass of the specified id
"""
if request.method == 'POST':
if 'block' in request.POST:
pnum = request.POST['passnumber']
num = PersonPass.objects.all()
reasons = request.POST['reason']
flag=0
for n in num:
if n.pass_number == pnum:
flag=1
if flag == 0 or len(reasons) == 0:
if flag == 0:
messages.error(request, "You have entered an invalid pass number")
if len(reasons) == 0:
messages.error(request, 'Reason is required.')
else:
person = PersonPass.objects.get(pass_number= pnum)
#if person is not None:
#if pnum == passnum.pass_number:
if person.is_blocked == False:
                    # return HttpResponse('You have already blocked this pass!!')
                    person.reason = reasons
                    person.is_blocked = True
                    person.save()
                    # return HttpResponse('You have successfully blocked!!')
                    messages.success(request, 'You have successfully blocked the pass for ' + person.name)
                else:
                    messages.error(request, 'You have already blocked the pass for ' + person.name)
elif 'unblock' in request.POST:
pnum = request.POST['passnumber']
num = PersonPass.objects.all()
reasons = request.POST['reason']
flag=0
for n in num:
if n.pass_number == pnum:
flag=1
if flag == 0 or len(reasons) == 0:
if flag == 0:
messages.error(request, "You have entered an invalid pass number")
if len(reasons) == 0:
messages.error(request, 'Reason is required.')
else:
person = PersonPass.objects.get(pass_number= pnum)
#reasons = request.POST['reason']
#if person is not None:
#if pnum == passnum.pass_number:
if person.is_blocked == True:
                    # return HttpResponse('You have already unblocked this pass!!')
                    person.reason = reasons
                    person.is_blocked = False
                    person.save()
                    # return HttpResponse('You have successfully unblocked!!')
                    messages.success(request, 'You have successfully unblocked the pass for ' + person.name)
                else:
                    messages.error(request, 'You have already unblocked the pass for ' + person.name)
#return HttpResponseRedirect("admin/block.pass.html")
# else:
# return render_to_response('admin/block.pass.html' ,{'error' : "You have entered an invalid pass number"}, context_instance=RequestContext(request))
return render_to_response('admin/block_pass.html' , context_instance=RequestContext(request))
@login_required(login_url="/vms/")
def update_bus_details(request):
if request.method == "POST":
form = BusTimingForm(data=request.POST)
if form.is_valid():
form.save()
form2 = BusTimingForm()
return render(request, 'admin/bustiming.html', {
'message': "Bus Timings updated successfully",
'form': form2,
})
else:
form2 = BusTimingForm()
return render(request, 'admin/bustiming.html', {
'message': "Sorry, your given timings could not be updated",
'form': form2,
})
else:
form = BusTimingForm()
places = Place.objects.all()
return render(request, 'admin/bustiming.html', {
'form': form,
'places': places,
})
@login_required(login_url="/vms/")
def issue_pass(request):
if request.method == "POST":
form = PersonPassForm(request.POST,request.FILES)
if form.is_valid():
task = form.save(commit=False)
task.is_blocked = False
task.save()
form2 = PersonPassForm()
messages.success(request, "Pass creation completed successfully")
return render(request, 'admin/issue_pass.html', {
'form': form2,
})
else:
messages.warning(request, "Unable to generate pass successfully")
return render(request, 'admin/issue_pass.html',{
'form':form,
})
else:
form=PersonPassForm()
return render(request, 'admin/issue_pass.html',{
'form':form,
})
@login_required(login_url="/vms/")
def parking_slot_update(request):
if request.method == "POST":
parkings=ParkingSlot.objects.all()
parking=parkings.get(parking_area_name=request.POST['parking_area_name'])
if int(request.POST['total_slots']) < int(request.POST['available_slots']) or int(request.POST['total_slots']) < 0 or int(request.POST['available_slots']) < 0 :
return render(request, 'admin/parking_slot_update.html',{
'parkings':parkings,
'parking1':parkings[0],
'message':"Enter valid slot details for "+str(request.POST['parking_area_name']),
'success':False,
})
else:
parking.total_slots=request.POST['total_slots']
parking.available_slots=request.POST['available_slots']
parking.save()
parkings=ParkingSlot.objects.all()
return render(request, 'admin/parking_slot_update.html',{
'parkings':parkings,
'parking1':parkings[0],
'message':"Information of the parking area is updated",
'success':True,
})
else:
parkings=ParkingSlot.objects.all()
return render(request, 'admin/parking_slot_update.html',{
'parkings':parkings,
'parking1':parkings[0],
})
@login_required(login_url="/vms/")
def guards_on_duty(request):
guards = Guard.objects.all()
success=True
message=""
places = Place.objects.filter(in_campus=True)
gates = Gate.objects.all()
if request.method == "POST":
try:
user=User.objects.get(username=request.POST['guard_name'])
guard=Guard.objects.get(guard_user=user)
temp=Place.objects.filter(place_name=request.POST['place'])
ondutyguard=OnDutyGuard.objects.filter(guard=guard)
is_gate=False
if len(temp) == 0:
temp = Gate.objects.filter(gate_name=request.POST['place'])
place=temp[0]
is_gate=True
else:
place=temp[0]
if len(ondutyguard) == 0 and is_gate:
OnDutyGuard.objects.create(guard=guard, place=place.gate_name, is_gate=is_gate)
elif len(ondutyguard) == 0 and not is_gate:
OnDutyGuard.objects.create(guard=guard, place=place.place_name, is_gate=is_gate)
else:
update=OnDutyGuard.objects.get(guard=guard)
if is_gate:
update.place=place.gate_name
else:
update.place=place.place_name
update.is_gate=is_gate
update.save()
message = "Guard has been alloted the duty"
success=True
return render(request, 'admin/onduty_guards.html',{
'guards':guards,
'places':places,
'gates':gates,
'message':message,
'success':success,
})
except:
message="Username not found"
success=False
return render(request, 'admin/onduty_guards.html', {
'guards': guards,
'places':places,
'gates':gates,
'success':success,
'message':message,
})
@login_required(login_url="/vms/")
def security(request):
guards=Guard.objects.all()
return render(request, 'admin/security.html',{
'guards':guards,
'user':request.user,
})
@login_required(login_url="/vms/")
def registered_vehicles(request):
"""
DUMMY: Function to display all the registered vehicles to the admin
"""
return render(request, 'admin/registered.html', {
'username': request.user.username,
'is_admin': True,
})
@login_required(login_url="/vms/")
def process_empl_vehicle_registration(request, empl_vehicle_id):
obj = EmployeeVehicle.objects.get(id=empl_vehicle_id)
reg_form = EmployeeVehicleForm(data=model_to_dict(obj))
reg_form.driving_license = obj.driving_license
#print str(reg_form.driving_license) + "\n-------------------\n\n\n\n"
#print str(reg_form) + "\n\n\n\n\n\n\n"
return render(request, 'admin/process.html', {
'readonly': True,
'form': reg_form,
'type': 'stud' if obj.user.user.is_student else 'empl',
'reg_id': empl_vehicle_id,
})
@login_required(login_url="/vms/")
def process_stud_vehicle_registration(request, student_vehicle_id):
obj = StudentVehicle.objects.get(id=student_vehicle_id)
reg_form = StudentVehicleForm(data=model_to_dict(obj))
reg_form.driving_license = obj.driving_license
#print str(reg_form.driving_license) + "\n-------------------\n\n\n\n"
#print str(reg_form) + "\n\n\n\n\n\n\n"
return render(request, 'admin/process.html', {
'readonly': True,
'form': reg_form,
'type': 'stud' if obj.user.user.is_student else 'empl',
'reg_id': student_vehicle_id,
})
@csrf_exempt
@login_required(login_url="/vms/")
def approve_reg(request, vehicle_id):
if "stud" in request.path:
obj = StudentVehicle.objects.get(id=vehicle_id)
else:
obj = EmployeeVehicle.objects.get(id=vehicle_id)
obj.registered_with_security_section = True
obj.vehicle_pass_no = str(vehicle_id)
obj.issue_date = datetime.now()
d = datetime.now()
d = d.replace(year=d.year+1)
obj.expiry_date = d
obj.save()
stud_reg_veh = StudentVehicle.objects.filter(registered_with_security_section=True)
empl_reg_veh = EmployeeVehicle.objects.filter(registered_with_security_section=True)
stud_cycles = StudentCycle.objects.all()
return render(request, 'admin/old_registered.html', {
'message': "Vehicle successfully approved. Pass generation and assignment completed successfully.",
'username': request.user.username,
'stud_reg_veh': stud_reg_veh,
'empl_reg_veh': empl_reg_veh,
'stud_cycles':stud_cycles,
})
@csrf_exempt
@login_required(login_url="/vms/")
def deny_reg(request, vehicle_id):
if "stud" in request.path:
obj = StudentVehicle.objects.get(id=vehicle_id)
else:
obj = EmployeeVehicle.objects.get(id=vehicle_id)
obj.registered_with_security_section = False
obj.vehicle_pass_no = str(vehicle_id)
obj.issue_date = datetime.now()
d = datetime.now()
d = d.replace(year=d.year-1)
obj.expiry_date = d
obj.save()
stud_reg_veh = StudentVehicle.objects.filter(registered_with_security_section=True)
empl_reg_veh = EmployeeVehicle.objects.filter(registered_with_security_section=True)
stud_cycles = StudentCycle.objects.all()
return render(request, 'admin/old_registered.html', {
'message': "Vehicle application successfully denied.",
'username': request.user.username,
'stud_reg_veh': stud_reg_veh,
'empl_reg_veh': empl_reg_veh,
'stud_cycles':stud_cycles,
})
@login_required(login_url="/vms/")
def registered_vehicles(request):
"""
Function to display all the registered vehicles to the admin
"""
stud_regs = StudentVehicle.objects.filter(registered_with_security_section=None)
empl_regs = EmployeeVehicle.objects.filter(registered_with_security_section=None)
return render(request, 'admin/registered.html', {
'username': request.user.username,
'num_stud_regs': len(stud_regs),
'num_empl_regs': len(empl_regs),
'stud_regs': stud_regs,
'empl_regs': empl_regs,
})
@login_required(login_url="/vms/")
def old_registered_vehicles(request):
stud_reg_veh = StudentVehicle.objects.filter(registered_with_security_section=True)
empl_reg_veh = EmployeeVehicle.objects.filter(registered_with_security_section=True)
stud_cycles = StudentCycle.objects.all()
return render(request, 'admin/old_registered.html', {
'username': request.user.username,
'stud_reg_veh': stud_reg_veh,
'empl_reg_veh': empl_reg_veh,
'stud_cycles':stud_cycles,
})
@login_required(login_url="/vms/")
def add_guards(request):
form = DocumentForm()
return render_to_response('admin/add_guards.html',{'type':'type','form':form},context_instance=RequestContext(request))
@login_required(login_url="/vms/")
def upload_log(request):
form=DocumentForm()
return render_to_response('admin/csv.html',{'type':'type','form':form},context_instance=RequestContext(request))
@login_required(login_url="/vms/")
def uploadcsv(request):
return render_to_response('admin/csv.html',
{'type':"type" },context_instance=RequestContext(request))
@login_required(login_url="/vms/")
def viewcsv(request):
# Handle file upload
if request.method == 'POST':
f=request.FILES['docfile']
if f.name.split('.')[-1]!="csv":
messages.error(request, "Upload CSV File only.")
return render(request, 'admin/csv.html',{})
with open(os.path.join(settings.MEDIA_ROOT,'csv/cs243iitg.csv'), 'wb+') as destination:
for chunk in f.chunks():
destination.write(chunk)
#checkcsv('/home/fireman/Django-1.6/mysite/article/jai.csv')
import csv
#csvfile(jai)
upload=open(os.path.join(settings.MEDIA_ROOT,'csv/cs243iitg.csv'), 'r')
# upload=open('csv/cs243iitg.csv','r')
data=[j for j in csv.reader(upload)]
upload.close()
rowNo=1
flag=0
f=open(os.path.join(settings.MEDIA_ROOT,'csv/log.txt'), 'wb+')
#dataReader=csv.reader(open('/home/fireman/Django-1.6/mysite/article/jai.csv'),delimiter=',',quotechar='"')
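    # Expected CSV layout, inferred from the indexing below rather than any
    # documented spec: first_name, last_name, username, password, phone.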
for row in data:
if not row[0] or row[0]=="" or not row[1] or row[1]=="" or not row[2] or row[2]=="" or not row[3] or row[3]=="" or not row[4] or row[4]=="":
# f.truncate()
f.write("The row number " +rowNo+ " has some error.\n")
flag=1
rowNo=rowNo+1
f.close()
if flag==1:
messages.error(request, "Check the file log.txt to see the error in CSV file data.")
return render(request, 'admin/csv.html',{})
else:
for row in data:
test=Guard()
u = User.objects.create_user(username=row[2], password=row[3],first_name=row[0],last_name=row[1])
u.save()
test.guard_user=u
test.guard_phone_number=int(row[4])
test.save()
return HttpResponseRedirect("../security/viewlog")
#return HttpResponse("dataReader")
return render_to_response('enter_log.html', {'form': 'form'})
@login_required(login_url="/vms/")
def add_place(request):
if request.method=="POST":
placename = request.POST['place']
try:
in_campus=request.POST['in_campus']
except:
in_campus=False
try:
Place.objects.create(place_name=placename, in_campus=in_campus)
message="Place Added successfully"
success=True
except:
message="Place already exists"
success=False
form=BusTimingForm()
return render(request, "admin/bustiming.html",{
'message':message,
'success':success,
'form':form,
})
| mit | 8,053,553,436,382,796,000 | 38.67957 | 169 | 0.586635 | false |
knifeyspoony/pyswf | swf/movie.py | 1 | 5645 | """
SWF
"""
from tag import SWFTimelineContainer
from stream import SWFStream
from export import SVGExporter
try:
import cStringIO as StringIO
except ImportError:
import StringIO
class SWFHeaderException(Exception):
""" Exception raised in case of an invalid SWFHeader """
def __init__(self, message):
super(SWFHeaderException, self).__init__(message)
class SWFHeader(object):
""" SWF header """
def __init__(self, stream):
a = stream.readUI8()
b = stream.readUI8()
c = stream.readUI8()
if not a in [0x43, 0x46, 0x5A] or b != 0x57 or c != 0x53:
            # Invalid signature! ('FWS' or 'CWS' or 'ZWS')
raise SWFHeaderException("not a SWF file! (invalid signature)")
self._compressed_zlib = (a == 0x43)
self._compressed_lzma = (a == 0x5A)
self._version = stream.readUI8()
self._file_length = stream.readUI32()
if not (self._compressed_zlib or self._compressed_lzma):
self._frame_size = stream.readRECT()
self._frame_rate = stream.readFIXED8()
self._frame_count = stream.readUI16()
@property
def frame_size(self):
""" Return frame size as a SWFRectangle """
return self._frame_size
@property
def frame_rate(self):
""" Return frame rate """
return self._frame_rate
@property
def frame_count(self):
""" Return number of frames """
return self._frame_count
@property
def file_length(self):
""" Return uncompressed file length """
return self._file_length
@property
def version(self):
""" Return SWF version """
return self._version
@property
def compressed(self):
""" Whether the SWF is compressed """
return self._compressed_zlib or self._compressed_lzma
@property
def compressed_zlib(self):
""" Whether the SWF is compressed using ZLIB """
return self._compressed_zlib
@property
def compressed_lzma(self):
""" Whether the SWF is compressed using LZMA """
return self._compressed_lzma
def __str__(self):
return " [SWFHeader]\n" + \
" Version: %d\n" % self.version + \
" FileLength: %d\n" % self.file_length + \
" FrameSize: %s\n" % self.frame_size.__str__() + \
" FrameRate: %d\n" % self.frame_rate + \
" FrameCount: %d\n" % self.frame_count
class SWF(SWFTimelineContainer):
"""
SWF class
The SWF (pronounced 'swiff') file format delivers vector graphics, text,
video, and sound over the Internet and is supported by Adobe Flash
Player software. The SWF file format is designed to be an efficient
delivery format, not a format for exchanging graphics between graphics
editors.
@param file: a file object with read(), seek(), tell() methods.
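
    A minimal usage sketch (the file name is an assumption)::

        swf = SWF(open('example.swf', 'rb'))
        print(swf)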
"""
def __init__(self, file=None):
super(SWF, self).__init__()
self._data = None if file is None else SWFStream(file)
self._header = None
if self._data is not None:
self.parse(self._data)
@property
def data(self):
"""
Return the SWFStream object (READ ONLY)
"""
return self._data
@property
def header(self):
""" Return the SWFHeader """
return self._header
def export(self, exporter=None, force_stroke=False):
"""
Export this SWF using the specified exporter.
When no exporter is passed in the default exporter used
is swf.export.SVGExporter.
Exporters should extend the swf.export.BaseExporter class.
@param exporter : the exporter to use
@param force_stroke : set to true to force strokes on fills,
useful for some edge cases.
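
        For example (a sketch; persisting the result is up to the caller)::

            svg = swf.export()  # uses the default SVGExporter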
"""
exporter = SVGExporter() if exporter is None else exporter
if self._data is None:
raise Exception("This SWF was not loaded! (no data)")
if len(self.tags) == 0:
raise Exception("This SWF doesn't contain any tags!")
return exporter.export(self, force_stroke)
def parse_file(self, filename):
""" Parses the SWF from a filename """
self.parse(open(filename, 'rb'))
def parse(self, data):
"""
Parses the SWF.
The @data parameter can be a file object or a SWFStream
"""
self._data = data = data if isinstance(data, SWFStream) else SWFStream(data)
self._header = SWFHeader(self._data)
if self._header.compressed:
temp = StringIO.StringIO()
if self._header.compressed_zlib:
import zlib
data = data.f.read()
zip = zlib.decompressobj()
temp.write(zip.decompress(data))
else:
import pylzma
data.readUI32() #consume compressed length
data = data.f.read()
temp.write(pylzma.decompress(data))
temp.seek(0)
data = SWFStream(temp)
self._header._frame_size = data.readRECT()
self._header._frame_rate = data.readFIXED8()
self._header._frame_count = data.readUI16()
self.parse_tags(data)
def __str__(self):
s = "[SWF]\n"
s += self._header.__str__()
for tag in self.tags:
s += tag.__str__() + "\n"
return s
| mit | -8,566,016,073,623,992,000 | 32.011696 | 84 | 0.55713 | false |
foss-transportationmodeling/rettina-server | .env/local/lib/python2.7/encodings/cp1257.py | 593 | 13630 | """ Python Character Mapping Codec cp1257 generated from 'MAPPINGS/VENDORS/MICSFT/WINDOWS/CP1257.TXT' with gencodec.py.
"""#"
import codecs
### Codec APIs
class Codec(codecs.Codec):
def encode(self,input,errors='strict'):
return codecs.charmap_encode(input,errors,encoding_table)
def decode(self,input,errors='strict'):
return codecs.charmap_decode(input,errors,decoding_table)
class IncrementalEncoder(codecs.IncrementalEncoder):
def encode(self, input, final=False):
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
class IncrementalDecoder(codecs.IncrementalDecoder):
def decode(self, input, final=False):
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
class StreamWriter(Codec,codecs.StreamWriter):
pass
class StreamReader(Codec,codecs.StreamReader):
pass
### encodings module API
def getregentry():
return codecs.CodecInfo(
name='cp1257',
encode=Codec().encode,
decode=Codec().decode,
incrementalencoder=IncrementalEncoder,
incrementaldecoder=IncrementalDecoder,
streamreader=StreamReader,
streamwriter=StreamWriter,
)
### Decoding Table
decoding_table = (
u'\x00' # 0x00 -> NULL
u'\x01' # 0x01 -> START OF HEADING
u'\x02' # 0x02 -> START OF TEXT
u'\x03' # 0x03 -> END OF TEXT
u'\x04' # 0x04 -> END OF TRANSMISSION
u'\x05' # 0x05 -> ENQUIRY
u'\x06' # 0x06 -> ACKNOWLEDGE
u'\x07' # 0x07 -> BELL
u'\x08' # 0x08 -> BACKSPACE
u'\t' # 0x09 -> HORIZONTAL TABULATION
u'\n' # 0x0A -> LINE FEED
u'\x0b' # 0x0B -> VERTICAL TABULATION
u'\x0c' # 0x0C -> FORM FEED
u'\r' # 0x0D -> CARRIAGE RETURN
u'\x0e' # 0x0E -> SHIFT OUT
u'\x0f' # 0x0F -> SHIFT IN
u'\x10' # 0x10 -> DATA LINK ESCAPE
u'\x11' # 0x11 -> DEVICE CONTROL ONE
u'\x12' # 0x12 -> DEVICE CONTROL TWO
u'\x13' # 0x13 -> DEVICE CONTROL THREE
u'\x14' # 0x14 -> DEVICE CONTROL FOUR
u'\x15' # 0x15 -> NEGATIVE ACKNOWLEDGE
u'\x16' # 0x16 -> SYNCHRONOUS IDLE
u'\x17' # 0x17 -> END OF TRANSMISSION BLOCK
u'\x18' # 0x18 -> CANCEL
u'\x19' # 0x19 -> END OF MEDIUM
u'\x1a' # 0x1A -> SUBSTITUTE
u'\x1b' # 0x1B -> ESCAPE
u'\x1c' # 0x1C -> FILE SEPARATOR
u'\x1d' # 0x1D -> GROUP SEPARATOR
u'\x1e' # 0x1E -> RECORD SEPARATOR
u'\x1f' # 0x1F -> UNIT SEPARATOR
u' ' # 0x20 -> SPACE
u'!' # 0x21 -> EXCLAMATION MARK
u'"' # 0x22 -> QUOTATION MARK
u'#' # 0x23 -> NUMBER SIGN
u'$' # 0x24 -> DOLLAR SIGN
u'%' # 0x25 -> PERCENT SIGN
u'&' # 0x26 -> AMPERSAND
u"'" # 0x27 -> APOSTROPHE
u'(' # 0x28 -> LEFT PARENTHESIS
u')' # 0x29 -> RIGHT PARENTHESIS
u'*' # 0x2A -> ASTERISK
u'+' # 0x2B -> PLUS SIGN
u',' # 0x2C -> COMMA
u'-' # 0x2D -> HYPHEN-MINUS
u'.' # 0x2E -> FULL STOP
u'/' # 0x2F -> SOLIDUS
u'0' # 0x30 -> DIGIT ZERO
u'1' # 0x31 -> DIGIT ONE
u'2' # 0x32 -> DIGIT TWO
u'3' # 0x33 -> DIGIT THREE
u'4' # 0x34 -> DIGIT FOUR
u'5' # 0x35 -> DIGIT FIVE
u'6' # 0x36 -> DIGIT SIX
u'7' # 0x37 -> DIGIT SEVEN
u'8' # 0x38 -> DIGIT EIGHT
u'9' # 0x39 -> DIGIT NINE
u':' # 0x3A -> COLON
u';' # 0x3B -> SEMICOLON
u'<' # 0x3C -> LESS-THAN SIGN
u'=' # 0x3D -> EQUALS SIGN
u'>' # 0x3E -> GREATER-THAN SIGN
u'?' # 0x3F -> QUESTION MARK
u'@' # 0x40 -> COMMERCIAL AT
u'A' # 0x41 -> LATIN CAPITAL LETTER A
u'B' # 0x42 -> LATIN CAPITAL LETTER B
u'C' # 0x43 -> LATIN CAPITAL LETTER C
u'D' # 0x44 -> LATIN CAPITAL LETTER D
u'E' # 0x45 -> LATIN CAPITAL LETTER E
u'F' # 0x46 -> LATIN CAPITAL LETTER F
u'G' # 0x47 -> LATIN CAPITAL LETTER G
u'H' # 0x48 -> LATIN CAPITAL LETTER H
u'I' # 0x49 -> LATIN CAPITAL LETTER I
u'J' # 0x4A -> LATIN CAPITAL LETTER J
u'K' # 0x4B -> LATIN CAPITAL LETTER K
u'L' # 0x4C -> LATIN CAPITAL LETTER L
u'M' # 0x4D -> LATIN CAPITAL LETTER M
u'N' # 0x4E -> LATIN CAPITAL LETTER N
u'O' # 0x4F -> LATIN CAPITAL LETTER O
u'P' # 0x50 -> LATIN CAPITAL LETTER P
u'Q' # 0x51 -> LATIN CAPITAL LETTER Q
u'R' # 0x52 -> LATIN CAPITAL LETTER R
u'S' # 0x53 -> LATIN CAPITAL LETTER S
u'T' # 0x54 -> LATIN CAPITAL LETTER T
u'U' # 0x55 -> LATIN CAPITAL LETTER U
u'V' # 0x56 -> LATIN CAPITAL LETTER V
u'W' # 0x57 -> LATIN CAPITAL LETTER W
u'X' # 0x58 -> LATIN CAPITAL LETTER X
u'Y' # 0x59 -> LATIN CAPITAL LETTER Y
u'Z' # 0x5A -> LATIN CAPITAL LETTER Z
u'[' # 0x5B -> LEFT SQUARE BRACKET
u'\\' # 0x5C -> REVERSE SOLIDUS
u']' # 0x5D -> RIGHT SQUARE BRACKET
u'^' # 0x5E -> CIRCUMFLEX ACCENT
u'_' # 0x5F -> LOW LINE
u'`' # 0x60 -> GRAVE ACCENT
u'a' # 0x61 -> LATIN SMALL LETTER A
u'b' # 0x62 -> LATIN SMALL LETTER B
u'c' # 0x63 -> LATIN SMALL LETTER C
u'd' # 0x64 -> LATIN SMALL LETTER D
u'e' # 0x65 -> LATIN SMALL LETTER E
u'f' # 0x66 -> LATIN SMALL LETTER F
u'g' # 0x67 -> LATIN SMALL LETTER G
u'h' # 0x68 -> LATIN SMALL LETTER H
u'i' # 0x69 -> LATIN SMALL LETTER I
u'j' # 0x6A -> LATIN SMALL LETTER J
u'k' # 0x6B -> LATIN SMALL LETTER K
u'l' # 0x6C -> LATIN SMALL LETTER L
u'm' # 0x6D -> LATIN SMALL LETTER M
u'n' # 0x6E -> LATIN SMALL LETTER N
u'o' # 0x6F -> LATIN SMALL LETTER O
u'p' # 0x70 -> LATIN SMALL LETTER P
u'q' # 0x71 -> LATIN SMALL LETTER Q
u'r' # 0x72 -> LATIN SMALL LETTER R
u's' # 0x73 -> LATIN SMALL LETTER S
u't' # 0x74 -> LATIN SMALL LETTER T
u'u' # 0x75 -> LATIN SMALL LETTER U
u'v' # 0x76 -> LATIN SMALL LETTER V
u'w' # 0x77 -> LATIN SMALL LETTER W
u'x' # 0x78 -> LATIN SMALL LETTER X
u'y' # 0x79 -> LATIN SMALL LETTER Y
u'z' # 0x7A -> LATIN SMALL LETTER Z
u'{' # 0x7B -> LEFT CURLY BRACKET
u'|' # 0x7C -> VERTICAL LINE
u'}' # 0x7D -> RIGHT CURLY BRACKET
u'~' # 0x7E -> TILDE
u'\x7f' # 0x7F -> DELETE
u'\u20ac' # 0x80 -> EURO SIGN
u'\ufffe' # 0x81 -> UNDEFINED
u'\u201a' # 0x82 -> SINGLE LOW-9 QUOTATION MARK
u'\ufffe' # 0x83 -> UNDEFINED
u'\u201e' # 0x84 -> DOUBLE LOW-9 QUOTATION MARK
u'\u2026' # 0x85 -> HORIZONTAL ELLIPSIS
u'\u2020' # 0x86 -> DAGGER
u'\u2021' # 0x87 -> DOUBLE DAGGER
u'\ufffe' # 0x88 -> UNDEFINED
u'\u2030' # 0x89 -> PER MILLE SIGN
u'\ufffe' # 0x8A -> UNDEFINED
u'\u2039' # 0x8B -> SINGLE LEFT-POINTING ANGLE QUOTATION MARK
u'\ufffe' # 0x8C -> UNDEFINED
u'\xa8' # 0x8D -> DIAERESIS
u'\u02c7' # 0x8E -> CARON
u'\xb8' # 0x8F -> CEDILLA
u'\ufffe' # 0x90 -> UNDEFINED
u'\u2018' # 0x91 -> LEFT SINGLE QUOTATION MARK
u'\u2019' # 0x92 -> RIGHT SINGLE QUOTATION MARK
u'\u201c' # 0x93 -> LEFT DOUBLE QUOTATION MARK
u'\u201d' # 0x94 -> RIGHT DOUBLE QUOTATION MARK
u'\u2022' # 0x95 -> BULLET
u'\u2013' # 0x96 -> EN DASH
u'\u2014' # 0x97 -> EM DASH
u'\ufffe' # 0x98 -> UNDEFINED
u'\u2122' # 0x99 -> TRADE MARK SIGN
u'\ufffe' # 0x9A -> UNDEFINED
u'\u203a' # 0x9B -> SINGLE RIGHT-POINTING ANGLE QUOTATION MARK
u'\ufffe' # 0x9C -> UNDEFINED
u'\xaf' # 0x9D -> MACRON
u'\u02db' # 0x9E -> OGONEK
u'\ufffe' # 0x9F -> UNDEFINED
u'\xa0' # 0xA0 -> NO-BREAK SPACE
u'\ufffe' # 0xA1 -> UNDEFINED
u'\xa2' # 0xA2 -> CENT SIGN
u'\xa3' # 0xA3 -> POUND SIGN
u'\xa4' # 0xA4 -> CURRENCY SIGN
u'\ufffe' # 0xA5 -> UNDEFINED
u'\xa6' # 0xA6 -> BROKEN BAR
u'\xa7' # 0xA7 -> SECTION SIGN
u'\xd8' # 0xA8 -> LATIN CAPITAL LETTER O WITH STROKE
u'\xa9' # 0xA9 -> COPYRIGHT SIGN
u'\u0156' # 0xAA -> LATIN CAPITAL LETTER R WITH CEDILLA
u'\xab' # 0xAB -> LEFT-POINTING DOUBLE ANGLE QUOTATION MARK
u'\xac' # 0xAC -> NOT SIGN
u'\xad' # 0xAD -> SOFT HYPHEN
u'\xae' # 0xAE -> REGISTERED SIGN
u'\xc6' # 0xAF -> LATIN CAPITAL LETTER AE
u'\xb0' # 0xB0 -> DEGREE SIGN
u'\xb1' # 0xB1 -> PLUS-MINUS SIGN
u'\xb2' # 0xB2 -> SUPERSCRIPT TWO
u'\xb3' # 0xB3 -> SUPERSCRIPT THREE
u'\xb4' # 0xB4 -> ACUTE ACCENT
u'\xb5' # 0xB5 -> MICRO SIGN
u'\xb6' # 0xB6 -> PILCROW SIGN
u'\xb7' # 0xB7 -> MIDDLE DOT
u'\xf8' # 0xB8 -> LATIN SMALL LETTER O WITH STROKE
u'\xb9' # 0xB9 -> SUPERSCRIPT ONE
u'\u0157' # 0xBA -> LATIN SMALL LETTER R WITH CEDILLA
u'\xbb' # 0xBB -> RIGHT-POINTING DOUBLE ANGLE QUOTATION MARK
u'\xbc' # 0xBC -> VULGAR FRACTION ONE QUARTER
u'\xbd' # 0xBD -> VULGAR FRACTION ONE HALF
u'\xbe' # 0xBE -> VULGAR FRACTION THREE QUARTERS
u'\xe6' # 0xBF -> LATIN SMALL LETTER AE
u'\u0104' # 0xC0 -> LATIN CAPITAL LETTER A WITH OGONEK
u'\u012e' # 0xC1 -> LATIN CAPITAL LETTER I WITH OGONEK
u'\u0100' # 0xC2 -> LATIN CAPITAL LETTER A WITH MACRON
u'\u0106' # 0xC3 -> LATIN CAPITAL LETTER C WITH ACUTE
u'\xc4' # 0xC4 -> LATIN CAPITAL LETTER A WITH DIAERESIS
u'\xc5' # 0xC5 -> LATIN CAPITAL LETTER A WITH RING ABOVE
u'\u0118' # 0xC6 -> LATIN CAPITAL LETTER E WITH OGONEK
u'\u0112' # 0xC7 -> LATIN CAPITAL LETTER E WITH MACRON
u'\u010c' # 0xC8 -> LATIN CAPITAL LETTER C WITH CARON
u'\xc9' # 0xC9 -> LATIN CAPITAL LETTER E WITH ACUTE
u'\u0179' # 0xCA -> LATIN CAPITAL LETTER Z WITH ACUTE
u'\u0116' # 0xCB -> LATIN CAPITAL LETTER E WITH DOT ABOVE
u'\u0122' # 0xCC -> LATIN CAPITAL LETTER G WITH CEDILLA
u'\u0136' # 0xCD -> LATIN CAPITAL LETTER K WITH CEDILLA
u'\u012a' # 0xCE -> LATIN CAPITAL LETTER I WITH MACRON
u'\u013b' # 0xCF -> LATIN CAPITAL LETTER L WITH CEDILLA
u'\u0160' # 0xD0 -> LATIN CAPITAL LETTER S WITH CARON
u'\u0143' # 0xD1 -> LATIN CAPITAL LETTER N WITH ACUTE
u'\u0145' # 0xD2 -> LATIN CAPITAL LETTER N WITH CEDILLA
u'\xd3' # 0xD3 -> LATIN CAPITAL LETTER O WITH ACUTE
u'\u014c' # 0xD4 -> LATIN CAPITAL LETTER O WITH MACRON
u'\xd5' # 0xD5 -> LATIN CAPITAL LETTER O WITH TILDE
u'\xd6' # 0xD6 -> LATIN CAPITAL LETTER O WITH DIAERESIS
u'\xd7' # 0xD7 -> MULTIPLICATION SIGN
u'\u0172' # 0xD8 -> LATIN CAPITAL LETTER U WITH OGONEK
u'\u0141' # 0xD9 -> LATIN CAPITAL LETTER L WITH STROKE
u'\u015a' # 0xDA -> LATIN CAPITAL LETTER S WITH ACUTE
u'\u016a' # 0xDB -> LATIN CAPITAL LETTER U WITH MACRON
u'\xdc' # 0xDC -> LATIN CAPITAL LETTER U WITH DIAERESIS
u'\u017b' # 0xDD -> LATIN CAPITAL LETTER Z WITH DOT ABOVE
u'\u017d' # 0xDE -> LATIN CAPITAL LETTER Z WITH CARON
u'\xdf' # 0xDF -> LATIN SMALL LETTER SHARP S
u'\u0105' # 0xE0 -> LATIN SMALL LETTER A WITH OGONEK
u'\u012f' # 0xE1 -> LATIN SMALL LETTER I WITH OGONEK
u'\u0101' # 0xE2 -> LATIN SMALL LETTER A WITH MACRON
u'\u0107' # 0xE3 -> LATIN SMALL LETTER C WITH ACUTE
u'\xe4' # 0xE4 -> LATIN SMALL LETTER A WITH DIAERESIS
u'\xe5' # 0xE5 -> LATIN SMALL LETTER A WITH RING ABOVE
u'\u0119' # 0xE6 -> LATIN SMALL LETTER E WITH OGONEK
u'\u0113' # 0xE7 -> LATIN SMALL LETTER E WITH MACRON
u'\u010d' # 0xE8 -> LATIN SMALL LETTER C WITH CARON
u'\xe9' # 0xE9 -> LATIN SMALL LETTER E WITH ACUTE
u'\u017a' # 0xEA -> LATIN SMALL LETTER Z WITH ACUTE
u'\u0117' # 0xEB -> LATIN SMALL LETTER E WITH DOT ABOVE
u'\u0123' # 0xEC -> LATIN SMALL LETTER G WITH CEDILLA
u'\u0137' # 0xED -> LATIN SMALL LETTER K WITH CEDILLA
u'\u012b' # 0xEE -> LATIN SMALL LETTER I WITH MACRON
u'\u013c' # 0xEF -> LATIN SMALL LETTER L WITH CEDILLA
u'\u0161' # 0xF0 -> LATIN SMALL LETTER S WITH CARON
u'\u0144' # 0xF1 -> LATIN SMALL LETTER N WITH ACUTE
u'\u0146' # 0xF2 -> LATIN SMALL LETTER N WITH CEDILLA
u'\xf3' # 0xF3 -> LATIN SMALL LETTER O WITH ACUTE
u'\u014d' # 0xF4 -> LATIN SMALL LETTER O WITH MACRON
u'\xf5' # 0xF5 -> LATIN SMALL LETTER O WITH TILDE
u'\xf6' # 0xF6 -> LATIN SMALL LETTER O WITH DIAERESIS
u'\xf7' # 0xF7 -> DIVISION SIGN
u'\u0173' # 0xF8 -> LATIN SMALL LETTER U WITH OGONEK
u'\u0142' # 0xF9 -> LATIN SMALL LETTER L WITH STROKE
u'\u015b' # 0xFA -> LATIN SMALL LETTER S WITH ACUTE
u'\u016b' # 0xFB -> LATIN SMALL LETTER U WITH MACRON
u'\xfc' # 0xFC -> LATIN SMALL LETTER U WITH DIAERESIS
u'\u017c' # 0xFD -> LATIN SMALL LETTER Z WITH DOT ABOVE
u'\u017e' # 0xFE -> LATIN SMALL LETTER Z WITH CARON
u'\u02d9' # 0xFF -> DOT ABOVE
)
### Encoding table
encoding_table=codecs.charmap_build(decoding_table)
| apache-2.0 | 8,060,740,897,651,665,000 | 43.397394 | 119 | 0.544901 | false |
rdaton/ARS2015 | otroScript.py | 1 | 3473 | #!/usr/bin/python2
# -*- coding: utf-8 -*-
##try to import cElementTree, which is native (C) code
import sys
reload(sys)
sys.setdefaultencoding('utf-8')
import csv
import os
import glob
try:
import xml.etree.cElementTree as ET
except ImportError:
import xml.etree.ElementTree as ET
"""
Created 1 December 2015
@author: R. Daton
"""
##the first version was written with lxml;
##the difference from plain xml (the current version)
##is that we do not have to build the namespace
##dictionary by hand... but that hardly matters,
##because even with lxml there are references to names in the xml that are hardcoded
##seen at http://stackoverflow.com/questions/14853243/parsing-xml-with-namespace-in-python-via-elementtree
unNameSpaces={'dc': 'http://purl.org/dc/terms/',
'movie': 'http://data.linkedmdb.org/resource/movie/',
'rdf': 'http://www.w3.org/1999/02/22-rdf-syntax-ns#',
}
def parseaPelicula (ficheroPeliculas,fCsvPeliculas):
  ##grab a handle on the root of the tree
unArbol = ET.parse(ficheroPeliculas)
unaRaiz = unArbol.getroot()
  #extract the film id
unIdPelicula= unaRaiz.find('.//movie:filmid',unNameSpaces).text
  ##extract the film title
unTituloPelicula= unaRaiz.find('.//dc:title',unNameSpaces).text
fCsvPeliculas.write(unIdPelicula+';'+unTituloPelicula+'\n')
  ##extract the actors who took part
unaListaActores=list();
for todoElemento in unaRaiz.iterfind('.//movie:actor',unNameSpaces):
    ##take the list of attribute keys and keep the first key name;
    ##this saves having to stuff the following monster into unaKey:
    ##{http://www.w3.org/1999/02/22-rdf-syntax-ns#}resource
unaKey=todoElemento.attrib.keys()[0]
unaCadenaActor=todoElemento.attrib.get(unaKey)
    ##unaCadenaActor contains a url of the form http://A/B/C/idActor;
    ##extract idActor by splitting on "/" and taking position 5 (range 0-5)
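    ##e.g. (hypothetical id): "http://data.linkedmdb.org/resource/actor/123".split("/")[5] -> "123"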
unSeparador="/"
print unIdPelicula,',',unaCadenaActor.split(unSeparador)[5]
def abreFicheroRw (nombreFichero) :
f=open(nombreFichero,'w');
return f;
def cierraFicheroRw(f):
f.close()
def main():
  ##declaration block for the input and output files,
  ##file indices, etc.;
  ##declarations and initializations for the film files
  ##handle to the current xml file
directorioPelis = 'films'
fEntradaPelicula = ' '
fEntradaPeliculaConRuta=' '
formatoNFicheroXML='data.linkedmdb.org.data.film.*.xml'
  ##csv file of films
  ##create the output folder for the csv file
directorioPelisCsv='films_csv'
if not os.path.exists(directorioPelisCsv):
os.makedirs(directorioPelisCsv)
fSalidaPeliculaCsv = 'pelisCsv.csv'
fSalidaPeliculasCsvConRuta=os.path.join(directorioPelisCsv,fSalidaPeliculaCsv)
  ##open the csv file for writing
ficheroPelisCsv=abreFicheroRw(fSalidaPeliculasCsvConRuta)
  ##build the list of xml files in the films folder (enter and then leave the folder)
dirAux=os.getcwd()
os.chdir(directorioPelis)
listaPelisXML=glob.glob(formatoNFicheroXML)
os.chdir(dirAux)
  ##generate the file of film nodes
for elem in listaPelisXML:
fEntradaPelicula = elem
fEntradaPeliculaConRuta=os.path.join(directorioPelis,fEntradaPelicula)
parseaPelicula(fEntradaPeliculaConRuta,ficheroPelisCsv)
cierraFicheroRw(ficheroPelisCsv)
if __name__ == "__main__":
sys.exit(main())
| gpl-3.0 | -7,756,345,002,758,109,000 | 30.788991 | 107 | 0.716017 | false |
lincolnloop/emailed-me | lib/werkzeug/routing.py | 25 | 55181 | # -*- coding: utf-8 -*-
"""
werkzeug.routing
~~~~~~~~~~~~~~~~
When it comes to combining multiple controller or view functions (however
you want to call them) you need a dispatcher. A simple way would be
applying regular expression tests on the ``PATH_INFO`` and calling
registered callback functions that then return the value.
This module implements a much more powerful system than simple regular
expression matching because it can also convert values in the URLs and
build URLs.
Here is a simple example that creates a URL map for an application with
two subdomains (www and kb) and some URL rules:
>>> m = Map([
... # Static URLs
... Rule('/', endpoint='static/index'),
... Rule('/about', endpoint='static/about'),
... Rule('/help', endpoint='static/help'),
... # Knowledge Base
... Subdomain('kb', [
... Rule('/', endpoint='kb/index'),
... Rule('/browse/', endpoint='kb/browse'),
... Rule('/browse/<int:id>/', endpoint='kb/browse'),
... Rule('/browse/<int:id>/<int:page>', endpoint='kb/browse')
... ])
... ], default_subdomain='www')
If the application doesn't use subdomains it's perfectly fine to not set
the default subdomain and not use the `Subdomain` rule factory. The endpoint
in the rules can be anything, for example import paths or unique
identifiers. The WSGI application can use those endpoints to get the
handler for that URL. It doesn't have to be a string at all but it's
recommended.
Now it's possible to create a URL adapter for one of the subdomains and
build URLs:
>>> c = m.bind('example.com')
>>> c.build("kb/browse", dict(id=42))
'http://kb.example.com/browse/42/'
>>> c.build("kb/browse", dict())
'http://kb.example.com/browse/'
>>> c.build("kb/browse", dict(id=42, page=3))
'http://kb.example.com/browse/42/3'
>>> c.build("static/about")
'/about'
>>> c.build("static/index", force_external=True)
'http://www.example.com/'
>>> c = m.bind('example.com', subdomain='kb')
>>> c.build("static/about")
'http://www.example.com/about'
The first argument to bind is the server name *without* the subdomain.
By default it will assume that the script is mounted on the root, but
often that's not the case so you can provide the real mount point as
second argument:
>>> c = m.bind('example.com', '/applications/example')
The third argument can be the subdomain; if not given, the default
subdomain is used. For more details about binding have a look at the
documentation of the `MapAdapter`.
And here is how you can match URLs:
>>> c = m.bind('example.com')
>>> c.match("/")
('static/index', {})
>>> c.match("/about")
('static/about', {})
>>> c = m.bind('example.com', '/', 'kb')
>>> c.match("/")
('kb/index', {})
>>> c.match("/browse/42/23")
('kb/browse', {'id': 42, 'page': 23})
If matching fails you get a `NotFound` exception; if the rule thinks
it's a good idea to redirect (for example because the URL was defined
to have a slash at the end but the request was missing that slash) it
will raise a `RequestRedirect` exception. Both are subclasses of the
`HTTPException` so you can use those errors as responses in the
application.
If matching succeeded but the URL rule was incompatible with the given
method (for example there were only rules for `GET` and `HEAD` and the
routing system tried to match a `POST` request) a `MethodNotAllowed`
exception is raised.
:copyright: (c) 2010 by the Werkzeug Team, see AUTHORS for more details.
Thomas Johansson.
:license: BSD, see LICENSE for more details.
"""
import re
from pprint import pformat
from urlparse import urljoin
from itertools import izip
from werkzeug.urls import url_encode, url_quote
from werkzeug.utils import redirect, format_string
from werkzeug.exceptions import HTTPException, NotFound, MethodNotAllowed
from werkzeug._internal import _get_environ
_rule_re = re.compile(r'''
(?P<static>[^<]*) # static rule data
<
(?:
(?P<converter>[a-zA-Z_][a-zA-Z0-9_]*) # converter name
(?:\((?P<args>.*?)\))? # converter arguments
\: # variable delimiter
)?
(?P<variable>[a-zA-Z][a-zA-Z0-9_]*) # variable name
>
''', re.VERBOSE)
_simple_rule_re = re.compile(r'<([^>]+)>')
def parse_rule(rule):
"""Parse a rule and return it as generator. Each iteration yields tuples
in the form ``(converter, arguments, variable)``. If the converter is
`None` it's a static url part, otherwise it's a dynamic one.
:internal:
"""
pos = 0
end = len(rule)
do_match = _rule_re.match
used_names = set()
while pos < end:
m = do_match(rule, pos)
if m is None:
break
data = m.groupdict()
if data['static']:
yield None, None, data['static']
variable = data['variable']
converter = data['converter'] or 'default'
if variable in used_names:
raise ValueError('variable name %r used twice.' % variable)
used_names.add(variable)
yield converter, data['args'] or None, variable
pos = m.end()
if pos < end:
remaining = rule[pos:]
if '>' in remaining or '<' in remaining:
raise ValueError('malformed url rule: %r' % rule)
yield None, None, remaining
def get_converter(map, name, args):
"""Create a new converter for the given arguments or raise
exception if the converter does not exist.
:internal:
"""
if not name in map.converters:
raise LookupError('the converter %r does not exist' % name)
if args:
storage = type('_Storage', (), {'__getitem__': lambda s, x: x})()
args, kwargs = eval(u'(lambda *a, **kw: (a, kw))(%s)' % args, {}, storage)
else:
args = ()
kwargs = {}
return map.converters[name](map, *args, **kwargs)
class RoutingException(Exception):
"""Special exceptions that require the application to redirect, notifying
about missing urls, etc.
:internal:
"""
class RequestRedirect(HTTPException, RoutingException):
"""Raise if the map requests a redirect. This is for example the case if
`strict_slashes` are activated and an url that requires a trailing slash.
The attribute `new_url` contains the absolute destination url.
"""
code = 301
def __init__(self, new_url):
RoutingException.__init__(self, new_url)
self.new_url = new_url
def get_response(self, environ):
return redirect(self.new_url, 301)
class RequestSlash(RoutingException):
"""Internal exception."""
class BuildError(RoutingException, LookupError):
"""Raised if the build system cannot find a URL for an endpoint with the
values provided.
"""
def __init__(self, endpoint, values, method):
LookupError.__init__(self, endpoint, values, method)
self.endpoint = endpoint
self.values = values
self.method = method
class ValidationError(ValueError):
"""Validation error. If a rule converter raises this exception the rule
does not match the current URL and the next URL is tried.
"""
class RuleFactory(object):
"""As soon as you have more complex URL setups it's a good idea to use rule
factories to avoid repetitive tasks. Some of them are builtin, others can
be added by subclassing `RuleFactory` and overriding `get_rules`.
"""
def get_rules(self, map):
"""Subclasses of `RuleFactory` have to override this method and return
an iterable of rules."""
raise NotImplementedError()
class Subdomain(RuleFactory):
"""All URLs provided by this factory have the subdomain set to a
specific domain. For example if you want to use the subdomain for
the current language this can be a good setup::
url_map = Map([
Rule('/', endpoint='#select_language'),
Subdomain('<string(length=2):lang_code>', [
Rule('/', endpoint='index'),
Rule('/about', endpoint='about'),
Rule('/help', endpoint='help')
])
])
All the rules except for the ``'#select_language'`` endpoint will now
listen on a two letter long subdomain that holds the language code
for the current request.
"""
def __init__(self, subdomain, rules):
self.subdomain = subdomain
self.rules = rules
def get_rules(self, map):
for rulefactory in self.rules:
for rule in rulefactory.get_rules(map):
rule = rule.empty()
rule.subdomain = self.subdomain
yield rule
class Submount(RuleFactory):
"""Like `Subdomain` but prefixes the URL rule with a given string::
url_map = Map([
Rule('/', endpoint='index'),
Submount('/blog', [
Rule('/', endpoint='blog/index'),
Rule('/entry/<entry_slug>', endpoint='blog/show')
])
])
Now the rule ``'blog/show'`` matches ``/blog/entry/<entry_slug>``.
"""
def __init__(self, path, rules):
self.path = path.rstrip('/')
self.rules = rules
def get_rules(self, map):
for rulefactory in self.rules:
for rule in rulefactory.get_rules(map):
rule = rule.empty()
rule.rule = self.path + rule.rule
yield rule
class EndpointPrefix(RuleFactory):
"""Prefixes all endpoints (which must be strings for this factory) with
another string. This can be useful for sub applications::
url_map = Map([
Rule('/', endpoint='index'),
EndpointPrefix('blog/', [Submount('/blog', [
Rule('/', endpoint='index'),
Rule('/entry/<entry_slug>', endpoint='show')
])])
])
"""
def __init__(self, prefix, rules):
self.prefix = prefix
self.rules = rules
def get_rules(self, map):
for rulefactory in self.rules:
for rule in rulefactory.get_rules(map):
rule = rule.empty()
rule.endpoint = self.prefix + rule.endpoint
yield rule
class RuleTemplate(object):
"""Returns copies of the rules wrapped and expands string templates in
the endpoint, rule, defaults or subdomain sections.
Here a small example for such a rule template::
from werkzeug.routing import Map, Rule, RuleTemplate
resource = RuleTemplate([
Rule('/$name/', endpoint='$name.list'),
Rule('/$name/<int:id>', endpoint='$name.show')
])
url_map = Map([resource(name='user'), resource(name='page')])
When a rule template is called the keyword arguments are used to
replace the placeholders in all the string parameters.
"""
def __init__(self, rules):
self.rules = list(rules)
def __call__(self, *args, **kwargs):
return RuleTemplateFactory(self.rules, dict(*args, **kwargs))
class RuleTemplateFactory(RuleFactory):
"""A factory that fills in template variables into rules. Used by
`RuleTemplate` internally.
:internal:
"""
def __init__(self, rules, context):
self.rules = rules
self.context = context
def get_rules(self, map):
for rulefactory in self.rules:
for rule in rulefactory.get_rules(map):
new_defaults = subdomain = None
if rule.defaults is not None:
new_defaults = {}
for key, value in rule.defaults.iteritems():
if isinstance(value, basestring):
value = format_string(value, self.context)
new_defaults[key] = value
if rule.subdomain is not None:
subdomain = format_string(rule.subdomain, self.context)
new_endpoint = rule.endpoint
if isinstance(new_endpoint, basestring):
new_endpoint = format_string(new_endpoint, self.context)
yield Rule(
format_string(rule.rule, self.context),
new_defaults,
subdomain,
rule.methods,
rule.build_only,
new_endpoint,
rule.strict_slashes
)
class Rule(RuleFactory):
"""A Rule represents one URL pattern. There are some options for `Rule`
that change the way it behaves and are passed to the `Rule` constructor.
Note that besides the rule-string all arguments *must* be keyword arguments
in order to not break the application on Werkzeug upgrades.
`string`
Rule strings basically are just normal URL paths with placeholders in
the format ``<converter(arguments):name>`` where the converter and the
arguments are optional. If no converter is defined the `default`
converter is used which means `string` in the normal configuration.
URL rules that end with a slash are branch URLs, others are leaves.
If you have `strict_slashes` enabled (which is the default), all
branch URLs that are matched without a trailing slash will trigger a
redirect to the same URL with the missing slash appended.
The converters are defined on the `Map`.
`endpoint`
The endpoint for this rule. This can be anything. A reference to a
function, a string, a number etc. The preferred way is using a string
because the endpoint is used for URL generation.
`defaults`
An optional dict with defaults for other rules with the same endpoint.
This is a bit tricky but useful if you want to have unique URLs::
url_map = Map([
Rule('/all/', defaults={'page': 1}, endpoint='all_entries'),
Rule('/all/page/<int:page>', endpoint='all_entries')
])
If a user now visits ``http://example.com/all/page/1`` he will be
redirected to ``http://example.com/all/``. If `redirect_defaults` is
disabled on the `Map` instance this will only affect the URL
generation.
`subdomain`
The subdomain rule string for this rule. If not specified the rule
only matches for the `default_subdomain` of the map. If the map is
not bound to a subdomain this feature is disabled.
Can be useful if you want to have user profiles on different subdomains
and all subdomains are forwarded to your application::
url_map = Map([
Rule('/', subdomain='<username>', endpoint='user/homepage'),
Rule('/stats', subdomain='<username>', endpoint='user/stats')
])
`methods`
A sequence of http methods this rule applies to. If not specified, all
methods are allowed. For example this can be useful if you want different
endpoints for `POST` and `GET`. If methods are defined and the path
matches but the method matched against is not in this list or in the
list of another rule for that path the error raised is of the type
`MethodNotAllowed` rather than `NotFound`. If `GET` is present in the
list of methods and `HEAD` is not, `HEAD` is added automatically.
.. versionchanged:: 0.6.1
`HEAD` is now automatically added to the methods if `GET` is
present. The reason for this is that existing code often did not
work properly in servers not rewriting `HEAD` to `GET`
automatically and it was not documented how `HEAD` should be
treated. This was considered a bug in Werkzeug because of that.
`strict_slashes`
Override the `Map` setting for `strict_slashes` only for this rule. If
not specified the `Map` setting is used.
`build_only`
        Set this to True and the rule will never match but will create a URL
        that can be built. This is useful if you have resources on a subdomain
        or folder that are not handled by the WSGI application (like static data).
`redirect_to`
        If given, this must be either a string or a callable. In case of a
callable it's called with the url adapter that triggered the match and
the values of the URL as keyword arguments and has to return the target
for the redirect, otherwise it has to be a string with placeholders in
rule syntax::
def foo_with_slug(adapter, id):
# ask the database for the slug for the old id. this of
# course has nothing to do with werkzeug.
return 'foo/' + Foo.get_slug_for_id(id)
url_map = Map([
Rule('/foo/<slug>', endpoint='foo'),
Rule('/some/old/url/<slug>', redirect_to='foo/<slug>'),
Rule('/other/old/url/<int:id>', redirect_to=foo_with_slug)
])
When the rule is matched the routing system will raise a
`RequestRedirect` exception with the target for the redirect.
Keep in mind that the URL will be joined against the URL root of the
script so don't use a leading slash on the target URL unless you
really mean root of that domain.
"""
def __init__(self, string, defaults=None, subdomain=None, methods=None,
build_only=False, endpoint=None, strict_slashes=None,
redirect_to=None):
if not string.startswith('/'):
raise ValueError('urls must start with a leading slash')
self.rule = string
self.is_leaf = not string.endswith('/')
self.map = None
self.strict_slashes = strict_slashes
self.subdomain = subdomain
self.defaults = defaults
self.build_only = build_only
if methods is None:
self.methods = None
else:
self.methods = set([x.upper() for x in methods])
if 'HEAD' not in self.methods and 'GET' in self.methods:
self.methods.add('HEAD')
self.endpoint = endpoint
self.greediness = 0
self.redirect_to = redirect_to
if defaults is not None:
self.arguments = set(map(str, defaults))
else:
self.arguments = set()
self._trace = self._converters = self._regex = self._weights = None
def empty(self):
"""Return an unbound copy of this rule. This can be useful if you
want to reuse an already bound URL for another map."""
defaults = None
if self.defaults is not None:
defaults = dict(self.defaults)
return Rule(self.rule, defaults, self.subdomain, self.methods,
self.build_only, self.endpoint, self.strict_slashes,
self.redirect_to)
def get_rules(self, map):
yield self
def refresh(self):
"""Rebinds and refreshes the URL. Call this if you modified the
rule in place.
:internal:
"""
self.bind(self.map, rebind=True)
def bind(self, map, rebind=False):
"""Bind the url to a map and create a regular expression based on
the information from the rule itself and the defaults from the map.
:internal:
"""
if self.map is not None and not rebind:
raise RuntimeError('url rule %r already bound to map %r' %
(self, self.map))
self.map = map
if self.strict_slashes is None:
self.strict_slashes = map.strict_slashes
if self.subdomain is None:
self.subdomain = map.default_subdomain
rule = self.subdomain + '|' + (self.is_leaf and self.rule
or self.rule.rstrip('/'))
self._trace = []
self._converters = {}
self._weights = []
regex_parts = []
for converter, arguments, variable in parse_rule(rule):
if converter is None:
regex_parts.append(re.escape(variable))
self._trace.append((False, variable))
self._weights.append(len(variable))
else:
convobj = get_converter(map, converter, arguments)
regex_parts.append('(?P<%s>%s)' % (variable, convobj.regex))
self._converters[variable] = convobj
self._trace.append((True, variable))
self._weights.append(convobj.weight)
self.arguments.add(str(variable))
if convobj.is_greedy:
self.greediness += 1
if not self.is_leaf:
self._trace.append((False, '/'))
if not self.build_only:
regex = r'^%s%s$' % (
u''.join(regex_parts),
(not self.is_leaf or not self.strict_slashes) and \
'(?<!/)(?P<__suffix__>/?)' or ''
)
self._regex = re.compile(regex, re.UNICODE)
def match(self, path):
"""Check if the rule matches a given path. Path is a string in the
form ``"subdomain|/path(method)"`` and is assembled by the map.
If the rule matches a dict with the converted values is returned,
otherwise the return value is `None`.
:internal:
"""
if not self.build_only:
m = self._regex.search(path)
if m is not None:
groups = m.groupdict()
# we have a folder like part of the url without a trailing
# slash and strict slashes enabled. raise an exception that
# tells the map to redirect to the same url but with a
# trailing slash
if self.strict_slashes and not self.is_leaf and \
not groups.pop('__suffix__'):
raise RequestSlash()
# if we are not in strict slashes mode we have to remove
# a __suffix__
elif not self.strict_slashes:
del groups['__suffix__']
result = {}
for name, value in groups.iteritems():
try:
value = self._converters[name].to_python(value)
except ValidationError:
return
result[str(name)] = value
if self.defaults is not None:
result.update(self.defaults)
return result
def build(self, values, append_unknown=True):
"""Assembles the relative url for that rule and the subdomain.
        If building doesn't work for some reason, `None` is returned.
:internal:
"""
tmp = []
add = tmp.append
processed = set(self.arguments)
for is_dynamic, data in self._trace:
if is_dynamic:
try:
add(self._converters[data].to_url(values[data]))
except ValidationError:
return
processed.add(data)
else:
add(data)
subdomain, url = (u''.join(tmp)).split('|', 1)
if append_unknown:
query_vars = MultiDict(values)
for key in processed:
if key in query_vars:
del query_vars[key]
if query_vars:
url += '?' + url_encode(query_vars, self.map.charset,
sort=self.map.sort_parameters,
key=self.map.sort_key)
return subdomain, url
def provides_defaults_for(self, rule):
"""Check if this rule has defaults for a given rule.
:internal:
"""
return not self.build_only and self.defaults is not None and \
self.endpoint == rule.endpoint and self != rule and \
self.arguments == rule.arguments
def suitable_for(self, values, method=None):
"""Check if the dict of values has enough data for url generation.
:internal:
"""
if method is not None:
if self.methods is not None and method not in self.methods:
return False
valueset = set(values)
for key in self.arguments - set(self.defaults or ()):
if key not in values:
return False
if self.arguments.issubset(valueset):
if self.defaults is None:
return True
for key, value in self.defaults.iteritems():
if value != values[key]:
return False
return True
def match_compare(self, other):
"""Compare this object with another one for matching.
:internal:
"""
for sw, ow in izip(self._weights, other._weights):
if sw > ow:
return -1
elif sw < ow:
return 1
if len(self._weights) > len(other._weights):
return -1
if len(self._weights) < len(other._weights):
return 1
if not other.arguments and self.arguments:
return 1
elif other.arguments and not self.arguments:
return -1
elif other.defaults is None and self.defaults is not None:
return 1
elif other.defaults is not None and self.defaults is None:
return -1
elif self.greediness > other.greediness:
return -1
elif self.greediness < other.greediness:
return 1
elif len(self.arguments) > len(other.arguments):
return 1
elif len(self.arguments) < len(other.arguments):
return -1
return 1
def build_compare(self, other):
"""Compare this object with another one for building.
:internal:
"""
if not other.arguments and self.arguments:
return -1
elif other.arguments and not self.arguments:
return 1
elif other.defaults is None and self.defaults is not None:
return -1
elif other.defaults is not None and self.defaults is None:
return 1
elif self.provides_defaults_for(other):
return -1
elif other.provides_defaults_for(self):
return 1
elif self.greediness > other.greediness:
return -1
elif self.greediness < other.greediness:
return 1
elif len(self.arguments) > len(other.arguments):
return -1
elif len(self.arguments) < len(other.arguments):
return 1
return -1
def __eq__(self, other):
return self.__class__ is other.__class__ and \
self._trace == other._trace
def __ne__(self, other):
return not self.__eq__(other)
def __unicode__(self):
return self.rule
def __str__(self):
charset = self.map is not None and self.map.charset or 'utf-8'
return unicode(self).encode(charset)
def __repr__(self):
if self.map is None:
return '<%s (unbound)>' % self.__class__.__name__
charset = self.map is not None and self.map.charset or 'utf-8'
tmp = []
for is_dynamic, data in self._trace:
if is_dynamic:
tmp.append('<%s>' % data)
else:
tmp.append(data)
return '<%s %r%s -> %s>' % (
self.__class__.__name__,
(u''.join(tmp).encode(charset)).lstrip('|'),
self.methods is not None and ' (%s)' % \
', '.join(self.methods) or '',
self.endpoint
)
class BaseConverter(object):
"""Base class for all converters."""
regex = '[^/]+'
is_greedy = False
weight = 100
def __init__(self, map):
self.map = map
def to_python(self, value):
return value
def to_url(self, value):
return url_quote(value, self.map.charset)
class UnicodeConverter(BaseConverter):
"""This converter is the default converter and accepts any string but
    only one path segment. Thus the string cannot include a slash.
This is the default validator.
Example::
Rule('/pages/<page>'),
Rule('/<string(length=2):lang_code>')
:param map: the :class:`Map`.
    :param minlength: the minimum length of the string. Must be greater
                      than or equal to 1.
:param maxlength: the maximum length of the string.
:param length: the exact length of the string.
"""
def __init__(self, map, minlength=1, maxlength=None, length=None):
BaseConverter.__init__(self, map)
if length is not None:
length = '{%d}' % int(length)
else:
if maxlength is None:
maxlength = ''
else:
maxlength = int(maxlength)
length = '{%s,%s}' % (
int(minlength),
maxlength
)
self.regex = '[^/]' + length
class AnyConverter(BaseConverter):
"""Matches one of the items provided. Items can either be Python
identifiers or unicode strings::
Rule('/<any(about, help, imprint, u"class"):page_name>')
:param map: the :class:`Map`.
:param items: this function accepts the possible items as positional
arguments.
"""
def __init__(self, map, *items):
BaseConverter.__init__(self, map)
self.regex = '(?:%s)' % '|'.join([re.escape(x) for x in items])
class PathConverter(BaseConverter):
"""Like the default :class:`UnicodeConverter`, but it also matches
slashes. This is useful for wikis and similar applications::
Rule('/<path:wikipage>')
Rule('/<path:wikipage>/edit')
:param map: the :class:`Map`.
"""
regex = '[^/].*?'
is_greedy = True
weight = 50
class NumberConverter(BaseConverter):
"""Baseclass for `IntegerConverter` and `FloatConverter`.
:internal:
"""
def __init__(self, map, fixed_digits=0, min=None, max=None):
BaseConverter.__init__(self, map)
self.fixed_digits = fixed_digits
self.min = min
self.max = max
def to_python(self, value):
if (self.fixed_digits and len(value) != self.fixed_digits):
raise ValidationError()
value = self.num_convert(value)
if (self.min is not None and value < self.min) or \
(self.max is not None and value > self.max):
raise ValidationError()
return value
def to_url(self, value):
value = self.num_convert(value)
if self.fixed_digits:
value = ('%%0%sd' % self.fixed_digits) % value
return str(value)
class IntegerConverter(NumberConverter):
"""This converter only accepts integer values::
Rule('/page/<int:page>')
This converter does not support negative values.
:param map: the :class:`Map`.
:param fixed_digits: the number of fixed digits in the URL. If you set
this to ``4`` for example, the application will
only match if the url looks like ``/0001/``. The
default is variable length.
:param min: the minimal value.
:param max: the maximal value.
"""
regex = r'\d+'
num_convert = int
class FloatConverter(NumberConverter):
"""This converter only accepts floating point values::
Rule('/probability/<float:probability>')
This converter does not support negative values.
:param map: the :class:`Map`.
:param min: the minimal value.
:param max: the maximal value.
"""
regex = r'\d+\.\d+'
num_convert = float
def __init__(self, map, min=None, max=None):
NumberConverter.__init__(self, map, 0, min, max)
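# Editor's note: a hedged sketch of a user-defined converter; the class name
# and the converter key are illustrative, not part of this module.  Custom
# converters subclass `BaseConverter` and are registered through the
# `converters` argument of `Map` (defined below), e.g.
# ``Map([Rule('/f/<flag:answer>', endpoint='f')], converters={'flag': BooleanConverterExample})``.
class BooleanConverterExample(BaseConverter):
    """Matches the literal strings ``yes`` and ``no`` and maps them to bools."""
    regex = '(?:yes|no)'
    def to_python(self, value):
        return value == 'yes'
    def to_url(self, value):
        return value and 'yes' or 'no'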
class Map(object):
"""The map class stores all the URL rules and some configuration
parameters. Some of the configuration values are only stored on the
`Map` instance since those affect all rules, others are just defaults
and can be overridden for each rule. Note that you have to specify all
arguments besides the `rules` as keyword arguments!
:param rules: sequence of url rules for this map.
:param default_subdomain: The default subdomain for rules without a
subdomain defined.
    :param charset: charset of the URL; defaults to ``"utf-8"``
:param strict_slashes: Take care of trailing slashes.
:param redirect_defaults: This will redirect to the default rule if it
                              wasn't visited that way. This helps create
                              unique URLs.
:param converters: A dict of converters that adds additional converters
to the list of converters. If you redefine one
converter this will override the original one.
:param sort_parameters: If set to `True` the url parameters are sorted.
See `url_encode` for more details.
:param sort_key: The sort key function for `url_encode`.
.. versionadded:: 0.5
        `sort_parameters` and `sort_key` were added.
"""
#: .. versionadded:: 0.6
#: a dict of default converters to be used.
default_converters = None
def __init__(self, rules=None, default_subdomain='', charset='utf-8',
strict_slashes=True, redirect_defaults=True,
converters=None, sort_parameters=False, sort_key=None):
self._rules = []
self._rules_by_endpoint = {}
self._remap = True
self.default_subdomain = default_subdomain
self.charset = charset
self.strict_slashes = strict_slashes
self.redirect_defaults = redirect_defaults
self.converters = self.default_converters.copy()
if converters:
self.converters.update(converters)
self.sort_parameters = sort_parameters
self.sort_key = sort_key
for rulefactory in rules or ():
self.add(rulefactory)
def is_endpoint_expecting(self, endpoint, *arguments):
"""Iterate over all rules and check if the endpoint expects
        the arguments provided. This is for example useful if you have
        some URLs that expect a language code and others that do not, and
        you want to wrap the builder a bit so that the current language
        code is automatically added if it is not provided but the endpoint
        expects it.
:param endpoint: the endpoint to check.
:param arguments: this function accepts one or more arguments
as positional arguments. Each one of them is
checked.
"""
self.update()
arguments = set(arguments)
for rule in self._rules_by_endpoint[endpoint]:
if arguments.issubset(rule.arguments):
return True
return False
def iter_rules(self, endpoint=None):
"""Iterate over all rules or the rules of an endpoint.
:param endpoint: if provided only the rules for that endpoint
are returned.
:return: an iterator
"""
if endpoint is not None:
return iter(self._rules_by_endpoint[endpoint])
return iter(self._rules)
def add(self, rulefactory):
"""Add a new rule or factory to the map and bind it. Requires that the
rule is not bound to another map.
:param rulefactory: a :class:`Rule` or :class:`RuleFactory`
"""
for rule in rulefactory.get_rules(self):
rule.bind(self)
self._rules.append(rule)
self._rules_by_endpoint.setdefault(rule.endpoint, []).append(rule)
self._remap = True
def bind(self, server_name, script_name=None, subdomain=None,
url_scheme='http', default_method='GET', path_info=None):
"""Return a new :class:`MapAdapter` with the details specified to the
call. Note that `script_name` will default to ``'/'`` if not further
specified or `None`. The `server_name` at least is a requirement
because the HTTP RFC requires absolute URLs for redirects and so all
redirect exceptions raised by Werkzeug will contain the full canonical
URL.
If no path_info is passed to :meth:`match` it will use the default path
info passed to bind. While this doesn't really make sense for
manual bind calls, it's useful if you bind a map to a WSGI
environment which already contains the path info.
`subdomain` will default to the `default_subdomain` for this map if
        not defined. If there is no `default_subdomain` you cannot use the
subdomain feature.
"""
if subdomain is None:
subdomain = self.default_subdomain
if script_name is None:
script_name = '/'
return MapAdapter(self, server_name, script_name, subdomain,
url_scheme, path_info, default_method)
def bind_to_environ(self, environ, server_name=None, subdomain=None):
"""Like :meth:`bind` but you can pass it an WSGI environment and it
will fetch the information from that dictionary. Note that because of
limitations in the protocol there is no way to get the current
subdomain and real `server_name` from the environment. If you don't
provide it, Werkzeug will use `SERVER_NAME` and `SERVER_PORT` (or
        `HTTP_HOST` if provided) as the `server_name`, with the subdomain
        feature disabled.
        If `subdomain` is `None` but an environment and a server name are
        provided, the current subdomain is calculated automatically.
        Example: if `server_name` is ``'example.com'`` and the `SERVER_NAME`
        in the wsgi `environ` is ``'staging.dev.example.com'``, the calculated
        subdomain will be ``'staging.dev'``.
If the object passed as environ has an environ attribute, the value of
this attribute is used instead. This allows you to pass request
        objects. Additionally, `PATH_INFO` is added as a default to the
:class:`MapAdapter` so that you don't have to pass the path info to
the match method.
.. versionchanged:: 0.5
previously this method accepted a bogus `calculate_subdomain`
parameter that did not have any effect. It was removed because
of that.
:param environ: a WSGI environment.
:param server_name: an optional server name hint (see above).
:param subdomain: optionally the current subdomain (see above).
"""
environ = _get_environ(environ)
if server_name is None:
if 'HTTP_HOST' in environ:
server_name = environ['HTTP_HOST']
else:
server_name = environ['SERVER_NAME']
if (environ['wsgi.url_scheme'], environ['SERVER_PORT']) not \
in (('https', '443'), ('http', '80')):
server_name += ':' + environ['SERVER_PORT']
elif subdomain is None:
wsgi_server_name = environ.get('HTTP_HOST', environ['SERVER_NAME'])
cur_server_name = wsgi_server_name.split(':', 1)[0].split('.')
real_server_name = server_name.split(':', 1)[0].split('.')
offset = -len(real_server_name)
if cur_server_name[offset:] != real_server_name:
raise ValueError('the server name provided (%r) does not '
'match the server name from the WSGI '
'environment (%r)' %
(server_name, wsgi_server_name))
subdomain = '.'.join(filter(None, cur_server_name[:offset]))
return Map.bind(self, server_name, environ.get('SCRIPT_NAME'),
subdomain, environ['wsgi.url_scheme'],
environ['REQUEST_METHOD'], environ.get('PATH_INFO'))
def update(self):
"""Called before matching and building to keep the compiled rules
        in the correct order after things have changed.
"""
if self._remap:
self._rules.sort(lambda a, b: a.match_compare(b))
for rules in self._rules_by_endpoint.itervalues():
rules.sort(lambda a, b: a.build_compare(b))
self._remap = False
def __repr__(self):
rules = self.iter_rules()
return '%s([%s])' % (self.__class__.__name__, pformat(list(rules)))
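# Editor's note: a minimal sketch of `Map.bind_to_environ`, not part of the
# original module.  The environ dict below is hand-written; real applications
# pass the WSGI environ (or a request object exposing `.environ`) through.
def _bind_to_environ_demo():
    environ = {
        'HTTP_HOST': 'staging.dev.example.com',
        'PATH_INFO': '/',
        'SCRIPT_NAME': '',
        'REQUEST_METHOD': 'GET',
        'wsgi.url_scheme': 'http',
    }
    m = Map([Rule('/', endpoint='index', subdomain='<sub>')])
    adapter = m.bind_to_environ(environ, server_name='example.com')
    # With server_name='example.com' the calculated subdomain is 'staging.dev'.
    return adapter.subdomain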
class MapAdapter(object):
"""Returned by :meth:`Map.bind` or :meth:`Map.bind_to_environ` and does
the URL matching and building based on runtime information.
"""
def __init__(self, map, server_name, script_name, subdomain,
url_scheme, path_info, default_method):
self.map = map
self.server_name = server_name
if not script_name.endswith('/'):
script_name += '/'
self.script_name = script_name
self.subdomain = subdomain
self.url_scheme = url_scheme
self.path_info = path_info or u''
self.default_method = default_method
def dispatch(self, view_func, path_info=None, method=None,
catch_http_exceptions=False):
"""Does the complete dispatching process. `view_func` is called with
the endpoint and a dict with the values for the view. It should
look up the view function, call it, and return a response object
or WSGI application. http exceptions are not caught by default
so that applications can display nicer error messages by just
catching them by hand. If you want to stick with the default
error messages you can pass it ``catch_http_exceptions=True`` and
it will catch the http exceptions.
        Here is a small example of dispatch usage::
from werkzeug import Request, Response, responder
from werkzeug.routing import Map, Rule
def on_index(request):
return Response('Hello from the index')
url_map = Map([Rule('/', endpoint='index')])
views = {'index': on_index}
@responder
def application(environ, start_response):
request = Request(environ)
urls = url_map.bind_to_environ(environ)
return urls.dispatch(lambda e, v: views[e](request, **v),
catch_http_exceptions=True)
Keep in mind that this method might return exception objects, too, so
use :class:`Response.force_type` to get a response object.
:param view_func: a function that is called with the endpoint as
first argument and the value dict as second. Has
to dispatch to the actual view function with this
information. (see above)
:param path_info: the path info to use for matching. Overrides the
path info specified on binding.
:param method: the HTTP method used for matching. Overrides the
method specified on binding.
:param catch_http_exceptions: set to `True` to catch any of the
werkzeug :class:`HTTPException`\s.
"""
try:
try:
endpoint, args = self.match(path_info, method)
except RequestRedirect, e:
return e
return view_func(endpoint, args)
except HTTPException, e:
if catch_http_exceptions:
return e
raise
def match(self, path_info=None, method=None, return_rule=False):
"""The usage is simple: you just pass the match method the current
path info as well as the method (which defaults to `GET`). The
following things can then happen:
        - you receive a `NotFound` exception that indicates that no URL
          matched. A `NotFound` exception is also a WSGI application you
          can call to get a default "page not found" page (it happens to be
          the same object as `werkzeug.exceptions.NotFound`)
- you receive a `MethodNotAllowed` exception that indicates that there
is a match for this URL but not for the current request method.
This is useful for RESTful applications.
- you receive a `RequestRedirect` exception with a `new_url`
          attribute. This exception is used to notify you about a redirect
          Werkzeug requests from your WSGI application. This is for example the
          case if you request ``/foo`` although the correct URL is ``/foo/``.
          You can use the `RequestRedirect` instance as a response-like object,
          similar to all other subclasses of `HTTPException`.
- you get a tuple in the form ``(endpoint, arguments)`` if there is
a match (unless `return_rule` is True, in which case you get a tuple
in the form ``(rule, arguments)``)
If the path info is not passed to the match method the default path
info of the map is used (defaults to the root URL if not defined
explicitly).
All of the exceptions raised are subclasses of `HTTPException` so they
        can be used as WSGI responses. They will all render generic error or
redirect pages.
Here is a small example for matching:
>>> m = Map([
... Rule('/', endpoint='index'),
... Rule('/downloads/', endpoint='downloads/index'),
... Rule('/downloads/<int:id>', endpoint='downloads/show')
... ])
>>> urls = m.bind("example.com", "/")
>>> urls.match("/", "GET")
('index', {})
>>> urls.match("/downloads/42")
('downloads/show', {'id': 42})
And here is what happens on redirect and missing URLs:
>>> urls.match("/downloads")
Traceback (most recent call last):
...
RequestRedirect: http://example.com/downloads/
>>> urls.match("/missing")
Traceback (most recent call last):
...
NotFound: 404 Not Found
:param path_info: the path info to use for matching. Overrides the
path info specified on binding.
:param method: the HTTP method used for matching. Overrides the
method specified on binding.
:param return_rule: return the rule that matched instead of just the
endpoint (defaults to `False`).
.. versionadded:: 0.6
`return_rule` was added.
"""
self.map.update()
if path_info is None:
path_info = self.path_info
if not isinstance(path_info, unicode):
path_info = path_info.decode(self.map.charset, 'ignore')
method = (method or self.default_method).upper()
path = u'%s|/%s' % (self.subdomain, path_info.lstrip('/'))
have_match_for = set()
for rule in self.map._rules:
try:
rv = rule.match(path)
except RequestSlash:
raise RequestRedirect(str('%s://%s%s%s/%s/' % (
self.url_scheme,
self.subdomain and self.subdomain + '.' or '',
self.server_name,
self.script_name[:-1],
url_quote(path_info.lstrip('/'), self.map.charset)
)))
if rv is None:
continue
if rule.methods is not None and method not in rule.methods:
have_match_for.update(rule.methods)
continue
if self.map.redirect_defaults:
for r in self.map._rules_by_endpoint[rule.endpoint]:
if r.provides_defaults_for(rule) and \
r.suitable_for(rv, method):
rv.update(r.defaults)
subdomain, path = r.build(rv)
raise RequestRedirect(str('%s://%s%s%s/%s' % (
self.url_scheme,
subdomain and subdomain + '.' or '',
self.server_name,
self.script_name[:-1],
url_quote(path.lstrip('/'), self.map.charset)
)))
if rule.redirect_to is not None:
if isinstance(rule.redirect_to, basestring):
def _handle_match(match):
value = rv[match.group(1)]
return rule._converters[match.group(1)].to_url(value)
redirect_url = _simple_rule_re.sub(_handle_match,
rule.redirect_to)
else:
redirect_url = rule.redirect_to(self, **rv)
raise RequestRedirect(str(urljoin('%s://%s%s%s' % (
self.url_scheme,
self.subdomain and self.subdomain + '.' or '',
self.server_name,
self.script_name
), redirect_url)))
if return_rule:
return rule, rv
else:
return rule.endpoint, rv
if have_match_for:
raise MethodNotAllowed(valid_methods=list(have_match_for))
raise NotFound()
def test(self, path_info=None, method=None):
"""Test if a rule would match. Works like `match` but returns `True`
        if the URL matches, or `False` if it does not.
:param path_info: the path info to use for matching. Overrides the
path info specified on binding.
:param method: the HTTP method used for matching. Overrides the
method specified on binding.
"""
try:
self.match(path_info, method)
except RequestRedirect:
pass
except NotFound:
return False
return True
def _partial_build(self, endpoint, values, method, append_unknown):
"""Helper for :meth:`build`. Returns subdomain and path for the
rule that accepts this endpoint, values and method.
:internal:
"""
# in case the method is none, try with the default method first
if method is None:
rv = self._partial_build(endpoint, values, self.default_method,
append_unknown)
if rv is not None:
return rv
# default method did not match or a specific method is passed,
# check all and go with first result.
for rule in self.map._rules_by_endpoint.get(endpoint, ()):
if rule.suitable_for(values, method):
rv = rule.build(values, append_unknown)
if rv is not None:
return rv
def build(self, endpoint, values=None, method=None, force_external=False,
append_unknown=True):
"""Building URLs works pretty much the other way round. Instead of
`match` you call `build` and pass it the endpoint and a dict of
arguments for the placeholders.
The `build` function also accepts an argument called `force_external`
        which, if set to `True`, will force external URLs. By default,
        external URLs (including the server name) will only be used if the
target URL is on a different subdomain.
>>> m = Map([
... Rule('/', endpoint='index'),
... Rule('/downloads/', endpoint='downloads/index'),
... Rule('/downloads/<int:id>', endpoint='downloads/show')
... ])
>>> urls = m.bind("example.com", "/")
>>> urls.build("index", {})
'/'
>>> urls.build("downloads/show", {'id': 42})
'/downloads/42'
>>> urls.build("downloads/show", {'id': 42}, force_external=True)
'http://example.com/downloads/42'
        Because URLs cannot contain non-ASCII data you will always get
        bytestrings back. Non-ASCII characters are URL-encoded with the
charset defined on the map instance.
Additional values are converted to unicode and appended to the URL as
URL querystring parameters:
>>> urls.build("index", {'q': 'My Searchstring'})
'/?q=My+Searchstring'
        If a rule does not exist when building, a `BuildError` exception is
raised.
The build method accepts an argument called `method` which allows you
        to specify the method you want to have a URL built for, if you have
        different methods specified for the same endpoint.
.. versionadded:: 0.6
the `append_unknown` parameter was added.
:param endpoint: the endpoint of the URL to build.
:param values: the values for the URL to build. Unhandled values are
appended to the URL as query parameters.
:param method: the HTTP method for the rule if there are different
URLs for different methods on the same endpoint.
:param force_external: enforce full canonical external URLs.
:param append_unknown: unknown parameters are appended to the generated
URL as query string argument. Disable this
if you want the builder to ignore those.
"""
self.map.update()
if values:
if isinstance(values, MultiDict):
values = dict((k, v) for k, v in values.iteritems(multi=True)
if v is not None)
else:
values = dict((k, v) for k, v in values.iteritems()
if v is not None)
else:
values = {}
rv = self._partial_build(endpoint, values, method, append_unknown)
if rv is None:
raise BuildError(endpoint, values, method)
subdomain, path = rv
if not force_external and subdomain == self.subdomain:
return str(urljoin(self.script_name, path.lstrip('/')))
return str('%s://%s%s%s/%s' % (
self.url_scheme,
subdomain and subdomain + '.' or '',
self.server_name,
self.script_name[:-1],
path.lstrip('/')
))
#: the default converter mapping for the map.
DEFAULT_CONVERTERS = {
'default': UnicodeConverter,
'string': UnicodeConverter,
'any': AnyConverter,
'path': PathConverter,
'int': IntegerConverter,
'float': FloatConverter
}
from werkzeug.datastructures import ImmutableDict, MultiDict
Map.default_converters = ImmutableDict(DEFAULT_CONVERTERS)
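# Editor's note: a self-contained usage sketch assembled from the doctest
# examples in `MapAdapter.match` and `MapAdapter.build` above; it is not part
# of the original module and only runs when this file is executed directly.
if __name__ == '__main__':
    m = Map([
        Rule('/', endpoint='index'),
        Rule('/downloads/', endpoint='downloads/index'),
        Rule('/downloads/<int:id>', endpoint='downloads/show')
    ])
    urls = m.bind('example.com', '/')
    assert urls.match('/downloads/42') == ('downloads/show', {'id': 42})
    assert urls.build('downloads/show', {'id': 42}) == '/downloads/42'
    assert urls.build('downloads/show', {'id': 42},
                      force_external=True) == 'http://example.com/downloads/42'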
| bsd-3-clause | 2,565,140,452,500,172,300 | 37.480474 | 82 | 0.577953 | false |
zmaruo/coreclr | src/pal/automation/compile.py | 154 | 2660 | import logging as log
import sys
import getopt
import os
import subprocess
import shutil
def RunCMake(workspace, target, platform):
# run CMake
print "\n==================================================\n"
returncode = 0
if platform == "windows":
print "Running: vcvarsall.bat x86_amd64 && " + workspace + "\ProjectK\NDP\clr\src\pal\\tools\gen-buildsys-win.bat " + workspace + "\ProjectK\NDP\clr"
print "\n==================================================\n"
sys.stdout.flush()
returncode = subprocess.call(["vcvarsall.bat", "x86_amd64", "&&", workspace + "\ProjectK\NDP\clr\src\pal\\tools\gen-buildsys-win.bat", workspace + "\ProjectK\NDP\clr"])
elif platform == "linux":
print "Running: " + workspace + "/ProjectK/NDP/clr/src/pal/tools/gen-buildsys-clang.sh " + workspace + "/ProjectK/NDP/clr DEBUG"
print "\n==================================================\n"
sys.stdout.flush()
returncode = subprocess.call(workspace + "/ProjectK/NDP/clr/src/pal/tools/gen-buildsys-clang.sh " + workspace + "/ProjectK/NDP/clr " + target, shell=True)
if returncode != 0:
print "ERROR: cmake failed with exit code " + str(returncode)
return returncode
def RunBuild(target, platform, arch):
if platform == "windows":
return RunMsBuild(target, arch)
elif platform == "linux":
return RunMake()
def RunMsBuild(target, arch):
# run MsBuild
print "\n==================================================\n"
print "Running: vcvarsall.bat x86_amd64 && msbuild CoreCLR.sln /p:Configuration=" + target + " /p:Platform=" + arch
print "\n==================================================\n"
sys.stdout.flush()
returncode = subprocess.call(["vcvarsall.bat","x86_amd64","&&","msbuild","CoreCLR.sln","/p:Configuration=" + target,"/p:Platform=" + arch])
if returncode != 0:
print "ERROR: vcvarsall.bat failed with exit code " + str(returncode)
return returncode
def RunMake():
print "\n==================================================\n"
print "Running: make"
print "\n==================================================\n"
sys.stdout.flush()
returncode = subprocess.call(["make"])
if returncode != 0:
print "ERROR: make failed with exit code " + str(returncode)
return returncode
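# Editor's note: a hypothetical command-line entry point added for
# illustration; the original script only defines functions, so the argument
# order below is an assumption rather than a documented interface.
def _main(argv):
    if len(argv) != 5:
        print "usage: compile.py <workspace> <target> <windows|linux> <arch>"
        return 1
    return Compile(argv[1], argv[2], argv[3], argv[4])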
def Compile(workspace, target, platform, arch):
returncode = RunCMake(workspace, target, platform)
if returncode != 0:
return returncode
returncode += RunBuild(target, platform, arch)
if returncode != 0:
return returncode
return returncode
| mit | -5,234,175,158,317,980,000 | 37 | 176 | 0.556767 | false |
Windowsfreak/OpenNI2 | Packaging/ReleaseVersion.py | 32 | 6788 | #!/usr/bin/python
#/****************************************************************************
#* *
#* OpenNI 2.x Alpha *
#* Copyright (C) 2012 PrimeSense Ltd. *
#* *
#* This file is part of OpenNI. *
#* *
#* Licensed under the Apache License, Version 2.0 (the "License"); *
#* you may not use this file except in compliance with the License. *
#* You may obtain a copy of the License at *
#* *
#* http://www.apache.org/licenses/LICENSE-2.0 *
#* *
#* Unless required by applicable law or agreed to in writing, software *
#* distributed under the License is distributed on an "AS IS" BASIS, *
#* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. *
#* See the License for the specific language governing permissions and *
#* limitations under the License. *
#* *
#****************************************************************************/
import os
import re
import sys
import shutil
import subprocess
import platform
import argparse
import stat
import UpdateVersion
if len(sys.argv) < 2 or sys.argv[1] in ('-h','--help'):
print "usage: " + sys.argv[0] + " <x86|x64|Arm|android> [UpdateVersion]"
sys.exit(1)
plat = sys.argv[1]
origDir = os.getcwd()
shouldUpdate = 0
if len(sys.argv) >= 3 and sys.argv[2] == 'UpdateVersion':
shouldUpdate = 1
if shouldUpdate == 1:
# Increase Build
UpdateVersion.VERSION_BUILD += 1
UpdateVersion.update()
def get_reg_values(reg_key, value_list):
# open the reg key
try:
reg_key = win32api.RegOpenKeyEx(*reg_key)
except pywintypes.error as e:
raise Exception("Failed to open registry key!")
# Get the values
try:
values = [(win32api.RegQueryValueEx(reg_key, name), data_type) for name, data_type in value_list]
# values list of ((value, type), expected_type)
for (value, data_type), expected in values:
if data_type != expected:
raise Exception("Bad registry value type! Expected %d, got %d instead." % (expected, data_type))
# values okay, leave only values
values = [value for ((value, data_type), expected) in values]
except pywintypes.error as e:
raise Exception("Failed to get registry value!")
finally:
try:
win32api.RegCloseKey(reg_key)
except pywintypes.error as e:
# We don't care if reg key close failed...
pass
return tuple(values)
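def _get_msvc_install_dir_example():
    # Editor's note: an illustrative wrapper, not part of the original script;
    # it mirrors the lookup performed in the Windows branch below, requires
    # pywin32 plus an installed VS 10.0, and on 64-bit Python the Wow6432Node
    # key (as used below) is needed instead.
    import win32con
    key = (win32con.HKEY_LOCAL_MACHINE, r"SOFTWARE\Microsoft\VisualStudio\10.0")
    return get_reg_values(key, [("InstallDir", win32con.REG_SZ)])[0]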
def calc_jobs_number():
    # Note: the original body referenced `self`, `OSMac` and `gop`, none of
    # which exist at module scope; the bare `except` swallowed the resulting
    # NameError, so the function always returned '2'.  This keeps the same
    # intent (twice the physical core count) with stdlib calls instead.
    cores = 1
    try:
        if platform.system() == 'Darwin':
            txt = subprocess.check_output(['sysctl', '-n', 'hw.physicalcpu'])
        else:
            txt = subprocess.check_output(
                'grep "processor\W:" /proc/cpuinfo | wc -l', shell=True)
        cores = int(txt)
    except:
        pass
    return str(cores * 2)
# Create installer
strVersion = UpdateVersion.getVersionName()
print "Creating installer for OpenNI " + strVersion + " " + plat
finalDir = "Final"
if not os.path.isdir(finalDir):
os.mkdir(finalDir)
if plat == 'android':
if not 'NDK_ROOT' in os.environ:
print 'Please define NDK_ROOT!'
sys.exit(2)
ndkDir = os.environ['NDK_ROOT']
buildDir = 'AndroidBuild'
if os.path.isdir(buildDir):
shutil.rmtree(buildDir)
outputDir = 'OpenNI-android-' + strVersion
if os.path.isdir(outputDir):
shutil.rmtree(outputDir)
os.makedirs(buildDir + '/jni')
os.symlink('../../../', buildDir + '/jni/OpenNI2')
shutil.copy('../Android.mk', buildDir + '/jni')
shutil.copy('../Application.mk', buildDir + '/jni')
rc = subprocess.call([ ndkDir + '/ndk-build', '-C', buildDir, '-j8' ])
if rc != 0:
print 'Build failed!'
sys.exit(3)
finalFile = finalDir + '/' + outputDir + '.tar'
shutil.move(buildDir + '/libs/armeabi-v7a', outputDir)
# add config files
shutil.copy('../Config/OpenNI.ini', outputDir)
shutil.copy('../Config/OpenNI2/Drivers/PS1080.ini', outputDir)
print('Creating archive ' + finalFile)
subprocess.check_call(['tar', '-cf', finalFile, outputDir])
elif platform.system() == 'Windows':
import win32con,pywintypes,win32api,platform
(bits,linkage) = platform.architecture()
matchObject = re.search('64',bits)
is_64_bit_machine = matchObject is not None
if is_64_bit_machine:
MSVC_KEY = (win32con.HKEY_LOCAL_MACHINE, r"SOFTWARE\Wow6432Node\Microsoft\VisualStudio\10.0")
else:
MSVC_KEY = (win32con.HKEY_LOCAL_MACHINE, r"SOFTWARE\Microsoft\VisualStudio\10.0")
MSVC_VALUES = [("InstallDir", win32con.REG_SZ)]
VS_INST_DIR = get_reg_values(MSVC_KEY, MSVC_VALUES)[0]
PROJECT_SLN = "..\OpenNI.sln"
bulidLog = origDir+'/build.Release.'+plat+'.txt'
devenv_cmd = '\"'+VS_INST_DIR + 'devenv\" '+PROJECT_SLN + ' /Project Install /Rebuild "Release|'+plat+'\" /out '+bulidLog
print(devenv_cmd)
subprocess.check_call(devenv_cmd, close_fds=True)
# everything OK, can remove build log
os.remove(bulidLog)
outFile = 'OpenNI-Windows-' + plat + '-' + strVersion + '.msi'
finalFile = os.path.join(finalDir, outFile)
if os.path.exists(finalFile):
os.remove(finalFile)
shutil.move('Install/bin/' + plat + '/en-us/' + outFile, finalDir)
elif platform.system() == 'Linux' or platform.system() == 'Darwin':
devNull = open('/dev/null', 'w')
subprocess.check_call(['make', '-C', '../', '-j' + calc_jobs_number(), 'PLATFORM=' + plat, 'clean'], stdout=devNull, stderr=devNull)
devNull.close()
buildLog = open(origDir + '/build.release.' + plat + '.log', 'w')
subprocess.check_call(['make', '-C', '../', '-j' + calc_jobs_number(), 'PLATFORM=' + plat, 'release'], stdout=buildLog, stderr=buildLog)
buildLog.close()
# everything OK, can remove build log
os.remove(origDir + '/build.release.' + plat + '.log')
else:
print "Unknown OS"
sys.exit(2)
# also copy Release Notes and CHANGES documents
shutil.copy('../ReleaseNotes.txt', finalDir)
shutil.copy('../CHANGES.txt', finalDir)
print "Installer can be found under: " + finalDir
print "Done"
| apache-2.0 | 4,098,053,627,494,208,500 | 35.691892 | 140 | 0.546258 | false |
iho/wagtail | wagtail/wagtailcore/migrations/0014_add_verbose_name.py | 26 | 4031 | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
('wagtailcore', '0013_update_golive_expire_help_text'),
]
operations = [
migrations.AlterField(
model_name='grouppagepermission',
name='group',
field=models.ForeignKey(verbose_name='Group', related_name='page_permissions', to='auth.Group'),
preserve_default=True,
),
migrations.AlterField(
model_name='grouppagepermission',
name='page',
field=models.ForeignKey(verbose_name='Page', related_name='group_permissions', to='wagtailcore.Page'),
preserve_default=True,
),
migrations.AlterField(
model_name='grouppagepermission',
name='permission_type',
field=models.CharField(choices=[('add', 'Add/edit pages you own'), ('edit', 'Add/edit any page'), ('publish', 'Publish any page'), ('lock', 'Lock/unlock any page')], max_length=20, verbose_name='Permission type'),
preserve_default=True,
),
migrations.AlterField(
model_name='page',
name='search_description',
field=models.TextField(blank=True, verbose_name='Search description'),
preserve_default=True,
),
migrations.AlterField(
model_name='page',
name='show_in_menus',
field=models.BooleanField(default=False, help_text='Whether a link to this page will appear in automatically generated menus', verbose_name='Show in menus'),
preserve_default=True,
),
migrations.AlterField(
model_name='page',
name='slug',
field=models.SlugField(help_text='The name of the page as it will appear in URLs e.g http://domain.com/blog/[my-slug]/', max_length=255, verbose_name='Slug'),
preserve_default=True,
),
migrations.AlterField(
model_name='page',
name='title',
field=models.CharField(help_text="The page title as you'd like it to be seen by the public", max_length=255, verbose_name='Title'),
preserve_default=True,
),
migrations.AlterField(
model_name='pageviewrestriction',
name='page',
field=models.ForeignKey(verbose_name='Page', related_name='view_restrictions', to='wagtailcore.Page'),
preserve_default=True,
),
migrations.AlterField(
model_name='pageviewrestriction',
name='password',
field=models.CharField(max_length=255, verbose_name='Password'),
preserve_default=True,
),
migrations.AlterField(
model_name='site',
name='hostname',
field=models.CharField(db_index=True, max_length=255, verbose_name='Hostname'),
preserve_default=True,
),
migrations.AlterField(
model_name='site',
name='is_default_site',
field=models.BooleanField(default=False, help_text='If true, this site will handle requests for all other hostnames that do not have a site entry of their own', verbose_name='Is default site'),
preserve_default=True,
),
migrations.AlterField(
model_name='site',
name='port',
field=models.IntegerField(default=80, help_text='Set this to something other than 80 if you need a specific port number to appear in URLs (e.g. development on port 8000). Does not affect request handling (so port forwarding still works).', verbose_name='Port'),
preserve_default=True,
),
migrations.AlterField(
model_name='site',
name='root_page',
field=models.ForeignKey(verbose_name='Root page', related_name='sites_rooted_here', to='wagtailcore.Page'),
preserve_default=True,
),
]
| bsd-3-clause | 4,108,682,181,955,955,700 | 42.815217 | 273 | 0.596874 | false |
samuellefever/server-tools | cron_run_manually/ir_cron.py | 42 | 2778 | # -*- coding: utf-8 -*-
# OpenERP, Open Source Management Solution
# This module copyright (C) 2013 Therp BV (<http://therp.nl>)
# Code snippets from openobject-server copyright (C) 2004-2013 OpenERP S.A.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import logging
from openerp import _, api, exceptions, models, SUPERUSER_ID
from openerp.tools.safe_eval import safe_eval
from psycopg2 import OperationalError
_logger = logging.getLogger(__name__)
class Cron(models.Model):
_name = _inherit = "ir.cron"
@api.one
def run_manually(self):
"""Run a job from the cron form view."""
if self.env.uid != SUPERUSER_ID and (not self.active or
not self.numbercall):
raise exceptions.AccessError(
_('Only the admin user is allowed to '
'execute inactive cron jobs manually'))
try:
# Try to grab an exclusive lock on the job row
# until the end of the transaction
self.env.cr.execute(
"""SELECT *
FROM ir_cron
WHERE id=%s
FOR UPDATE NOWAIT""",
(self.id,),
log_exceptions=False)
except OperationalError as e:
# User friendly error if the lock could not be claimed
if getattr(e, "pgcode", None) == '55P03':
raise exceptions.Warning(
_('Another process/thread is already busy '
'executing this job'))
raise
_logger.info('Job `%s` triggered from form', self.name)
# Do not propagate active_test to the method to execute
ctx = dict(self.env.context)
ctx.pop('active_test', None)
# Execute the cron job
method = getattr(
self.with_context(ctx).sudo(self.user_id).env[self.model],
self.function)
args = safe_eval('tuple(%s)' % (self.args or ''))
return method(*args)
@api.model
def _current_uid(self):
"""This function returns the current UID, for testing purposes."""
return self.env.uid
| agpl-3.0 | 3,361,049,242,280,642,600 | 34.615385 | 75 | 0.610151 | false |
andreadean5/python-hpOneView | hpOneView/resources/servers/id_pools_vsn_ranges.py | 1 | 1946 | # -*- coding: utf-8 -*-
###
# (C) Copyright (2012-2016) Hewlett Packard Enterprise Development LP
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
###
from __future__ import print_function
from __future__ import unicode_literals
from __future__ import division
from __future__ import absolute_import
from future import standard_library
standard_library.install_aliases()
__title__ = 'id-pools-vsn-ranges'
__version__ = '0.0.1'
__copyright__ = '(C) Copyright (2012-2016) Hewlett Packard Enterprise Development LP'
__license__ = 'MIT'
__status__ = 'Development'
from hpOneView.resources.resource import ResourceClient
from hpOneView.resources.servers.id_pools_ranges import IdPoolsRanges
class IdPoolsVsnRanges(IdPoolsRanges):
URI = '/rest/id-pools/vsn/ranges'
def __init__(self, con):
IdPoolsRanges.__init__(self, object, self.URI)
self._connection = con
self._client = ResourceClient(con, self.URI)
| mit | -5,179,355,912,824,232,000 | 39.541667 | 85 | 0.747174 | false |
kaday/rose | lib/python/rose/macros/duplicate.py | 1 | 3415 | # -*- coding: utf-8 -*-
# -----------------------------------------------------------------------------
# (C) British Crown Copyright 2012-6 Met Office.
#
# This file is part of Rose, a framework for meteorological suites.
#
# Rose is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Rose is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Rose. If not, see <http://www.gnu.org/licenses/>.
# -----------------------------------------------------------------------------
import re
import rose.macro
class DuplicateChecker(rose.macro.MacroBase):
"""Returns settings whose duplicate status does not match their name."""
WARNING_DUPL_SECT_NO_NUM = ('incorrect "duplicate=true" metadata')
WARNING_NUM_SECT_NO_DUPL = ('{0} requires "duplicate=true" metadata')
def validate(self, config, meta_config=None):
"""Return a list of errors, if any."""
self.reports = []
sect_error_no_dupl = {}
sect_keys = config.value.keys()
sorter = rose.config.sort_settings
sect_keys.sort(sorter)
for section in sect_keys:
node = config.get([section])
if not isinstance(node.value, dict):
continue
metadata = self.get_metadata_for_config_id(section, meta_config)
duplicate = metadata.get(rose.META_PROP_DUPLICATE)
is_duplicate = duplicate == rose.META_PROP_VALUE_TRUE
basic_section = rose.macro.REC_ID_STRIP.sub("", section)
if is_duplicate:
if basic_section == section:
self.add_report(section, None, None,
self.WARNING_DUPL_SECT_NO_NUM)
elif section != basic_section:
if basic_section not in sect_error_no_dupl:
sect_error_no_dupl.update({basic_section: 1})
no_index_section = rose.macro.REC_ID_STRIP_DUPL.sub(
"", section)
if no_index_section != section:
basic_section = no_index_section
warning = self.WARNING_NUM_SECT_NO_DUPL
if self._get_has_metadata(metadata, basic_section,
meta_config):
self.add_report(section, None, None,
warning.format(basic_section))
return self.reports
def _get_has_metadata(self, metadata, basic_section, meta_config):
if metadata.keys() != ["id"]:
return True
for meta_keys, meta_node in meta_config.walk(no_ignore=True):
meta_section = meta_keys[0]
if len(meta_keys) > 1:
continue
if ((meta_section == basic_section or
meta_section.startswith(
basic_section + rose.CONFIG_DELIMITER)) and
isinstance(meta_node.value, dict)):
return True
return False
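# Editor's note: a hedged sketch of running this macro outside the rose macro
# framework; `config` and `meta_config` are assumed to be rose.config node
# trees as passed by the framework.
def _duplicate_checker_demo(config, meta_config):
    checker = DuplicateChecker()
    # Returns a list of report objects, one per mismatched section.
    return checker.validate(config, meta_config=meta_config)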
| gpl-3.0 | -9,092,590,079,516,549,000 | 42.782051 | 79 | 0.558126 | false |
Alwnikrotikz/pmx | scripts/DTI_analysis.py | 2 | 88009 |
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" >
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1" >
<meta name="ROBOTS" content="NOARCHIVE">
<link rel="icon" type="image/vnd.microsoft.icon" href="https://ssl.gstatic.com/codesite/ph/images/phosting.ico">
<script type="text/javascript">
var codesite_token = "3Cd3YLziNQwHJ6q0INBaXA2gZls:1366032649547";
var CS_env = {"token":"3Cd3YLziNQwHJ6q0INBaXA2gZls:1366032649547","projectName":"pmx","domainName":null,"assetHostPath":"https://ssl.gstatic.com/codesite/ph","loggedInUserEmail":"[email protected]","profileUrl":"/u/110130407061490526737/","assetVersionPath":"https://ssl.gstatic.com/codesite/ph/14689258884487974863","projectHomeUrl":"/p/pmx","relativeBaseUrl":""};
var _gaq = _gaq || [];
_gaq.push(
['siteTracker._setAccount', 'UA-18071-1'],
['siteTracker._trackPageview']);
(function() {
var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
(document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(ga);
})();
</script>
<title>DTI_analysis.py -
pmx -
python library and tools for computational and structural biophysics - Google Project Hosting
</title>
<link type="text/css" rel="stylesheet" href="https://ssl.gstatic.com/codesite/ph/14689258884487974863/css/core.css">
<link type="text/css" rel="stylesheet" href="https://ssl.gstatic.com/codesite/ph/14689258884487974863/css/ph_detail.css" >
<link type="text/css" rel="stylesheet" href="https://ssl.gstatic.com/codesite/ph/14689258884487974863/css/d_sb.css" >
<!--[if IE]>
<link type="text/css" rel="stylesheet" href="https://ssl.gstatic.com/codesite/ph/14689258884487974863/css/d_ie.css" >
<![endif]-->
<style type="text/css">
.menuIcon.off { background: no-repeat url(https://ssl.gstatic.com/codesite/ph/images/dropdown_sprite.gif) 0 -42px }
.menuIcon.on { background: no-repeat url(https://ssl.gstatic.com/codesite/ph/images/dropdown_sprite.gif) 0 -28px }
.menuIcon.down { background: no-repeat url(https://ssl.gstatic.com/codesite/ph/images/dropdown_sprite.gif) 0 0; }
tr.inline_comment {
background: #fff;
vertical-align: top;
}
div.draft, div.published {
padding: .3em;
border: 1px solid #999;
margin-bottom: .1em;
font-family: arial, sans-serif;
max-width: 60em;
}
div.draft {
background: #ffa;
}
div.published {
background: #e5ecf9;
}
div.published .body, div.draft .body {
padding: .5em .1em .1em .1em;
max-width: 60em;
white-space: pre-wrap;
white-space: -moz-pre-wrap;
white-space: -pre-wrap;
white-space: -o-pre-wrap;
word-wrap: break-word;
font-size: 1em;
}
div.draft .actions {
margin-left: 1em;
font-size: 90%;
}
div.draft form {
padding: .5em .5em .5em 0;
}
div.draft textarea, div.published textarea {
width: 95%;
height: 10em;
font-family: arial, sans-serif;
margin-bottom: .5em;
}
.nocursor, .nocursor td, .cursor_hidden, .cursor_hidden td {
background-color: white;
height: 2px;
}
.cursor, .cursor td {
background-color: darkblue;
height: 2px;
display: '';
}
.list {
border: 1px solid white;
border-bottom: 0;
}
</style>
</head>
<body class="t4">
<script type="text/javascript">
window.___gcfg = {lang: 'en'};
(function()
{var po = document.createElement("script");
po.type = "text/javascript"; po.async = true;po.src = "https://apis.google.com/js/plusone.js";
var s = document.getElementsByTagName("script")[0];
s.parentNode.insertBefore(po, s);
})();
</script>
<div class="headbg">
<div id="gaia">
<span>
<b>[email protected]</b>
| <a href="/u/110130407061490526737/" id="projects-dropdown" onclick="return false;"
><u>My favorites</u> <small>▼</small></a>
| <a href="/u/110130407061490526737/" onclick="_CS_click('/gb/ph/profile');"
title="Profile, Updates, and Settings"
><u>Profile</u></a>
| <a href="https://www.google.com/accounts/Logout?continue=https%3A%2F%2Fcode.google.com%2Fp%2Fpmx%2Fsource%2Fbrowse%2Fscripts%2FDTI_analysis.py"
onclick="_CS_click('/gb/ph/signout');"
><u>Sign out</u></a>
</span>
</div>
<div class="gbh" style="left: 0pt;"></div>
<div class="gbh" style="right: 0pt;"></div>
<div style="height: 1px"></div>
<!--[if lte IE 7]>
<div style="text-align:center;">
Your version of Internet Explorer is not supported. Try a browser that
contributes to open source, such as <a href="http://www.firefox.com">Firefox</a>,
<a href="http://www.google.com/chrome">Google Chrome</a>, or
<a href="http://code.google.com/chrome/chromeframe/">Google Chrome Frame</a>.
</div>
<![endif]-->
<table style="padding:0px; margin: 0px 0px 10px 0px; width:100%" cellpadding="0" cellspacing="0"
itemscope itemtype="http://schema.org/CreativeWork">
<tr style="height: 58px;">
<td id="plogo">
<link itemprop="url" href="/p/pmx">
<a href="/p/pmx/">
<img src="/p/pmx/logo?cct=1355339915"
alt="Logo" itemprop="image">
</a>
</td>
<td style="padding-left: 0.5em">
<div id="pname">
<a href="/p/pmx/"><span itemprop="name">pmx</span></a>
</div>
<div id="psum">
<a id="project_summary_link"
href="/p/pmx/"><span itemprop="description">python library and tools for computational and structural biophysics</span></a>
</div>
</td>
<td style="white-space:nowrap;text-align:right; vertical-align:bottom;">
<form action="/hosting/search">
<input size="30" name="q" value="" type="text">
<input type="submit" name="projectsearch" value="Search projects" >
</form>
</tr>
</table>
</div>
<div id="mt" class="gtb">
<a href="/p/pmx/" class="tab ">Project Home</a>
<a href="/p/pmx/downloads/list" class="tab ">Downloads</a>
<a href="/p/pmx/w/list" class="tab ">Wiki</a>
<a href="/p/pmx/issues/list"
class="tab ">Issues</a>
<a href="/p/pmx/source/checkout"
class="tab active">Source</a>
<div class=gtbc></div>
</div>
<table cellspacing="0" cellpadding="0" width="100%" align="center" border="0" class="st">
<tr>
<td class="subt">
<div class="st2">
<div class="isf">
<form action="/p/pmx/source/browse" style="display: inline">
Repository:
<select name="repo" id="repo" style="font-size: 92%" onchange="submit()">
<option value="default">default</option><option value="wiki">wiki</option>
</select>
</form>
<span class="inst1"><a href="/p/pmx/source/checkout">Checkout</a></span>
<span class="inst2"><a href="/p/pmx/source/browse/">Browse</a></span>
<span class="inst3"><a href="/p/pmx/source/list">Changes</a></span>
<span class="inst4"><a href="/p/pmx/source/clones">Clones</a></span>
<a href="/p/pmx/issues/entry?show=review&former=sourcelist">Request code review</a>
</form>
<script type="text/javascript">
function codesearchQuery(form) {
var query = document.getElementById('q').value;
if (query) { form.action += '%20' + query; }
}
</script>
</div>
</div>
</td>
<td align="right" valign="top" class="bevel-right"></td>
</tr>
</table>
<script type="text/javascript">
var cancelBubble = false;
function _go(url) { document.location = url; }
</script>
<div id="maincol"
>
<div class="collapse">
<div id="colcontrol">
<style type="text/css">
#file_flipper { white-space: nowrap; padding-right: 2em; }
#file_flipper.hidden { display: none; }
#file_flipper .pagelink { color: #0000CC; text-decoration: underline; }
#file_flipper #visiblefiles { padding-left: 0.5em; padding-right: 0.5em; }
</style>
<table id="nav_and_rev" class="list"
cellpadding="0" cellspacing="0" width="100%">
<tr>
<td nowrap="nowrap" class="src_crumbs src_nav" width="33%">
<strong class="src_nav">Source path: </strong>
<span id="crumb_root">
<a href="/p/pmx/source/browse/">git</a>/ </span>
<span id="crumb_links" class="ifClosed"><a href="/p/pmx/source/browse/scripts/">scripts</a><span class="sp">/ </span>DTI_analysis.py</span>
<form class="src_nav">
<span class="sourcelabel"><strong>Branch:</strong>
<select id="branch_select" name="name" onchange="submit()">
<option value="David"
>
David
</option>
<option value="Upgradegrom4.6"
>
Upgradegrom4.6
</option>
<option value="master"
selected>
master
</option>
</select>
</span>
</form>
</td>
<td nowrap="nowrap" width="33%" align="center">
<a href="/p/pmx/source/browse/scripts/DTI_analysis.py?edit=1"
><img src="https://ssl.gstatic.com/codesite/ph/images/pencil-y14.png"
class="edit_icon">Edit file</a>
</td>
<td nowrap="nowrap" width="33%" align="right">
<table cellpadding="0" cellspacing="0" style="font-size: 100%"><tr>
<td class="flipper">
<ul class="leftside">
<li><a href="/p/pmx/source/browse/scripts/DTI_analysis.py?r=a2102ac8113476c16e34079d2812b130339f54bb" title="Previous">‹a2102ac81134</a></li>
</ul>
</td>
<td class="flipper"><b>82a17baf41be</b></td>
</tr></table>
</td>
</tr>
</table>
<div class="fc">
<style type="text/css">
.undermouse span {
background-image: url(https://ssl.gstatic.com/codesite/ph/images/comments.gif); }
</style>
<table class="opened" id="review_comment_area"
onmouseout="gutterOut()"><tr>
<td id="nums">
<pre><table width="100%"><tr class="nocursor"><td></td></tr></table></pre>
<pre><table width="100%" id="nums_table_0"><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_1"
onmouseover="gutterOver(1)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',1);"> </span
></td><td id="1"><a href="#1">1</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_2"
onmouseover="gutterOver(2)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',2);"> </span
></td><td id="2"><a href="#2">2</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_3"
onmouseover="gutterOver(3)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',3);"> </span
></td><td id="3"><a href="#3">3</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_4"
onmouseover="gutterOver(4)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',4);"> </span
></td><td id="4"><a href="#4">4</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_5"
onmouseover="gutterOver(5)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',5);"> </span
></td><td id="5"><a href="#5">5</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_6"
onmouseover="gutterOver(6)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',6);"> </span
></td><td id="6"><a href="#6">6</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_7"
onmouseover="gutterOver(7)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',7);"> </span
></td><td id="7"><a href="#7">7</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_8"
onmouseover="gutterOver(8)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',8);"> </span
></td><td id="8"><a href="#8">8</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_9"
onmouseover="gutterOver(9)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',9);"> </span
></td><td id="9"><a href="#9">9</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_10"
onmouseover="gutterOver(10)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',10);"> </span
></td><td id="10"><a href="#10">10</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_11"
onmouseover="gutterOver(11)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',11);"> </span
></td><td id="11"><a href="#11">11</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_12"
onmouseover="gutterOver(12)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',12);"> </span
></td><td id="12"><a href="#12">12</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_13"
onmouseover="gutterOver(13)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',13);"> </span
></td><td id="13"><a href="#13">13</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_14"
onmouseover="gutterOver(14)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',14);"> </span
></td><td id="14"><a href="#14">14</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_15"
onmouseover="gutterOver(15)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',15);"> </span
></td><td id="15"><a href="#15">15</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_16"
onmouseover="gutterOver(16)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',16);"> </span
></td><td id="16"><a href="#16">16</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_17"
onmouseover="gutterOver(17)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',17);"> </span
></td><td id="17"><a href="#17">17</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_18"
onmouseover="gutterOver(18)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',18);"> </span
></td><td id="18"><a href="#18">18</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_19"
onmouseover="gutterOver(19)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',19);"> </span
></td><td id="19"><a href="#19">19</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_20"
onmouseover="gutterOver(20)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',20);"> </span
></td><td id="20"><a href="#20">20</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_21"
onmouseover="gutterOver(21)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',21);"> </span
></td><td id="21"><a href="#21">21</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_22"
onmouseover="gutterOver(22)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',22);"> </span
></td><td id="22"><a href="#22">22</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_23"
onmouseover="gutterOver(23)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',23);"> </span
></td><td id="23"><a href="#23">23</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_24"
onmouseover="gutterOver(24)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',24);"> </span
></td><td id="24"><a href="#24">24</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_25"
onmouseover="gutterOver(25)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',25);"> </span
></td><td id="25"><a href="#25">25</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_26"
onmouseover="gutterOver(26)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',26);"> </span
></td><td id="26"><a href="#26">26</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_27"
onmouseover="gutterOver(27)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',27);"> </span
></td><td id="27"><a href="#27">27</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_28"
onmouseover="gutterOver(28)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',28);"> </span
></td><td id="28"><a href="#28">28</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_29"
onmouseover="gutterOver(29)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',29);"> </span
></td><td id="29"><a href="#29">29</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_30"
onmouseover="gutterOver(30)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',30);"> </span
></td><td id="30"><a href="#30">30</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_31"
onmouseover="gutterOver(31)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',31);"> </span
></td><td id="31"><a href="#31">31</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_32"
onmouseover="gutterOver(32)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',32);"> </span
></td><td id="32"><a href="#32">32</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_33"
onmouseover="gutterOver(33)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',33);"> </span
></td><td id="33"><a href="#33">33</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_34"
onmouseover="gutterOver(34)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',34);"> </span
></td><td id="34"><a href="#34">34</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_35"
onmouseover="gutterOver(35)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',35);"> </span
></td><td id="35"><a href="#35">35</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_36"
onmouseover="gutterOver(36)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',36);"> </span
></td><td id="36"><a href="#36">36</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_37"
onmouseover="gutterOver(37)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',37);"> </span
></td><td id="37"><a href="#37">37</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_38"
onmouseover="gutterOver(38)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',38);"> </span
></td><td id="38"><a href="#38">38</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_39"
onmouseover="gutterOver(39)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',39);"> </span
></td><td id="39"><a href="#39">39</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_40"
onmouseover="gutterOver(40)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',40);"> </span
></td><td id="40"><a href="#40">40</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_41"
onmouseover="gutterOver(41)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',41);"> </span
></td><td id="41"><a href="#41">41</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_42"
onmouseover="gutterOver(42)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',42);"> </span
></td><td id="42"><a href="#42">42</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_43"
onmouseover="gutterOver(43)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',43);"> </span
></td><td id="43"><a href="#43">43</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_44"
onmouseover="gutterOver(44)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',44);"> </span
></td><td id="44"><a href="#44">44</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_45"
onmouseover="gutterOver(45)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',45);"> </span
></td><td id="45"><a href="#45">45</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_46"
onmouseover="gutterOver(46)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',46);"> </span
></td><td id="46"><a href="#46">46</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_47"
onmouseover="gutterOver(47)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',47);"> </span
></td><td id="47"><a href="#47">47</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_48"
onmouseover="gutterOver(48)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',48);"> </span
></td><td id="48"><a href="#48">48</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_49"
onmouseover="gutterOver(49)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',49);"> </span
></td><td id="49"><a href="#49">49</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_50"
onmouseover="gutterOver(50)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',50);"> </span
></td><td id="50"><a href="#50">50</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_51"
onmouseover="gutterOver(51)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',51);"> </span
></td><td id="51"><a href="#51">51</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_52"
onmouseover="gutterOver(52)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',52);"> </span
></td><td id="52"><a href="#52">52</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_53"
onmouseover="gutterOver(53)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',53);"> </span
></td><td id="53"><a href="#53">53</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_54"
onmouseover="gutterOver(54)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',54);"> </span
></td><td id="54"><a href="#54">54</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_55"
onmouseover="gutterOver(55)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',55);"> </span
></td><td id="55"><a href="#55">55</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_56"
onmouseover="gutterOver(56)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',56);"> </span
></td><td id="56"><a href="#56">56</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_57"
onmouseover="gutterOver(57)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',57);"> </span
></td><td id="57"><a href="#57">57</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_58"
onmouseover="gutterOver(58)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',58);"> </span
></td><td id="58"><a href="#58">58</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_59"
onmouseover="gutterOver(59)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',59);"> </span
></td><td id="59"><a href="#59">59</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_60"
onmouseover="gutterOver(60)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',60);"> </span
></td><td id="60"><a href="#60">60</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_61"
onmouseover="gutterOver(61)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',61);"> </span
></td><td id="61"><a href="#61">61</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_62"
onmouseover="gutterOver(62)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',62);"> </span
></td><td id="62"><a href="#62">62</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_63"
onmouseover="gutterOver(63)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',63);"> </span
></td><td id="63"><a href="#63">63</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_64"
onmouseover="gutterOver(64)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',64);"> </span
></td><td id="64"><a href="#64">64</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_65"
onmouseover="gutterOver(65)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',65);"> </span
></td><td id="65"><a href="#65">65</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_66"
onmouseover="gutterOver(66)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',66);"> </span
></td><td id="66"><a href="#66">66</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_67"
onmouseover="gutterOver(67)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',67);"> </span
></td><td id="67"><a href="#67">67</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_68"
onmouseover="gutterOver(68)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',68);"> </span
></td><td id="68"><a href="#68">68</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_69"
onmouseover="gutterOver(69)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',69);"> </span
></td><td id="69"><a href="#69">69</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_70"
onmouseover="gutterOver(70)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',70);"> </span
></td><td id="70"><a href="#70">70</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_71"
onmouseover="gutterOver(71)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',71);"> </span
></td><td id="71"><a href="#71">71</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_72"
onmouseover="gutterOver(72)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',72);"> </span
></td><td id="72"><a href="#72">72</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_73"
onmouseover="gutterOver(73)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',73);"> </span
></td><td id="73"><a href="#73">73</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_74"
onmouseover="gutterOver(74)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',74);"> </span
></td><td id="74"><a href="#74">74</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_75"
onmouseover="gutterOver(75)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',75);"> </span
></td><td id="75"><a href="#75">75</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_76"
onmouseover="gutterOver(76)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',76);"> </span
></td><td id="76"><a href="#76">76</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_77"
onmouseover="gutterOver(77)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',77);"> </span
></td><td id="77"><a href="#77">77</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_78"
onmouseover="gutterOver(78)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',78);"> </span
></td><td id="78"><a href="#78">78</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_79"
onmouseover="gutterOver(79)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',79);"> </span
></td><td id="79"><a href="#79">79</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_80"
onmouseover="gutterOver(80)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',80);"> </span
></td><td id="80"><a href="#80">80</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_81"
onmouseover="gutterOver(81)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',81);"> </span
></td><td id="81"><a href="#81">81</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_82"
onmouseover="gutterOver(82)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',82);"> </span
></td><td id="82"><a href="#82">82</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_83"
onmouseover="gutterOver(83)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',83);"> </span
></td><td id="83"><a href="#83">83</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_84"
onmouseover="gutterOver(84)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',84);"> </span
></td><td id="84"><a href="#84">84</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_85"
onmouseover="gutterOver(85)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',85);"> </span
></td><td id="85"><a href="#85">85</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_86"
onmouseover="gutterOver(86)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',86);"> </span
></td><td id="86"><a href="#86">86</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_87"
onmouseover="gutterOver(87)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',87);"> </span
></td><td id="87"><a href="#87">87</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_88"
onmouseover="gutterOver(88)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',88);"> </span
></td><td id="88"><a href="#88">88</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_89"
onmouseover="gutterOver(89)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',89);"> </span
></td><td id="89"><a href="#89">89</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_90"
onmouseover="gutterOver(90)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',90);"> </span
></td><td id="90"><a href="#90">90</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_91"
onmouseover="gutterOver(91)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',91);"> </span
></td><td id="91"><a href="#91">91</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_92"
onmouseover="gutterOver(92)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',92);"> </span
></td><td id="92"><a href="#92">92</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_93"
onmouseover="gutterOver(93)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',93);"> </span
></td><td id="93"><a href="#93">93</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_94"
onmouseover="gutterOver(94)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',94);"> </span
></td><td id="94"><a href="#94">94</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_95"
onmouseover="gutterOver(95)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',95);"> </span
></td><td id="95"><a href="#95">95</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_96"
onmouseover="gutterOver(96)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',96);"> </span
></td><td id="96"><a href="#96">96</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_97"
onmouseover="gutterOver(97)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',97);"> </span
></td><td id="97"><a href="#97">97</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_98"
onmouseover="gutterOver(98)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',98);"> </span
></td><td id="98"><a href="#98">98</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_99"
onmouseover="gutterOver(99)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',99);"> </span
></td><td id="99"><a href="#99">99</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_100"
onmouseover="gutterOver(100)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',100);"> </span
></td><td id="100"><a href="#100">100</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_101"
onmouseover="gutterOver(101)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',101);"> </span
></td><td id="101"><a href="#101">101</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_102"
onmouseover="gutterOver(102)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',102);"> </span
></td><td id="102"><a href="#102">102</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_103"
onmouseover="gutterOver(103)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',103);"> </span
></td><td id="103"><a href="#103">103</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_104"
onmouseover="gutterOver(104)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',104);"> </span
></td><td id="104"><a href="#104">104</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_105"
onmouseover="gutterOver(105)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',105);"> </span
></td><td id="105"><a href="#105">105</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_106"
onmouseover="gutterOver(106)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',106);"> </span
></td><td id="106"><a href="#106">106</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_107"
onmouseover="gutterOver(107)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',107);"> </span
></td><td id="107"><a href="#107">107</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_108"
onmouseover="gutterOver(108)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',108);"> </span
></td><td id="108"><a href="#108">108</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_109"
onmouseover="gutterOver(109)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',109);"> </span
></td><td id="109"><a href="#109">109</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_110"
onmouseover="gutterOver(110)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',110);"> </span
></td><td id="110"><a href="#110">110</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_111"
onmouseover="gutterOver(111)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',111);"> </span
></td><td id="111"><a href="#111">111</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_112"
onmouseover="gutterOver(112)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',112);"> </span
></td><td id="112"><a href="#112">112</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_113"
onmouseover="gutterOver(113)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',113);"> </span
></td><td id="113"><a href="#113">113</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_114"
onmouseover="gutterOver(114)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',114);"> </span
></td><td id="114"><a href="#114">114</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_115"
onmouseover="gutterOver(115)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',115);"> </span
></td><td id="115"><a href="#115">115</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_116"
onmouseover="gutterOver(116)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',116);"> </span
></td><td id="116"><a href="#116">116</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_117"
onmouseover="gutterOver(117)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',117);"> </span
></td><td id="117"><a href="#117">117</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_118"
onmouseover="gutterOver(118)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',118);"> </span
></td><td id="118"><a href="#118">118</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_119"
onmouseover="gutterOver(119)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',119);"> </span
></td><td id="119"><a href="#119">119</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_120"
onmouseover="gutterOver(120)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',120);"> </span
></td><td id="120"><a href="#120">120</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_121"
onmouseover="gutterOver(121)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',121);"> </span
></td><td id="121"><a href="#121">121</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_122"
onmouseover="gutterOver(122)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',122);"> </span
></td><td id="122"><a href="#122">122</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_123"
onmouseover="gutterOver(123)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',123);"> </span
></td><td id="123"><a href="#123">123</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_124"
onmouseover="gutterOver(124)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',124);"> </span
></td><td id="124"><a href="#124">124</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_125"
onmouseover="gutterOver(125)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',125);"> </span
></td><td id="125"><a href="#125">125</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_126"
onmouseover="gutterOver(126)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',126);"> </span
></td><td id="126"><a href="#126">126</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_127"
onmouseover="gutterOver(127)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',127);"> </span
></td><td id="127"><a href="#127">127</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_128"
onmouseover="gutterOver(128)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',128);"> </span
></td><td id="128"><a href="#128">128</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_129"
onmouseover="gutterOver(129)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',129);"> </span
></td><td id="129"><a href="#129">129</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_130"
onmouseover="gutterOver(130)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',130);"> </span
></td><td id="130"><a href="#130">130</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_131"
onmouseover="gutterOver(131)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',131);"> </span
></td><td id="131"><a href="#131">131</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_132"
onmouseover="gutterOver(132)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',132);"> </span
></td><td id="132"><a href="#132">132</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_133"
onmouseover="gutterOver(133)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',133);"> </span
></td><td id="133"><a href="#133">133</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_134"
onmouseover="gutterOver(134)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',134);"> </span
></td><td id="134"><a href="#134">134</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_135"
onmouseover="gutterOver(135)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',135);"> </span
></td><td id="135"><a href="#135">135</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_136"
onmouseover="gutterOver(136)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',136);"> </span
></td><td id="136"><a href="#136">136</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_137"
onmouseover="gutterOver(137)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',137);"> </span
></td><td id="137"><a href="#137">137</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_138"
onmouseover="gutterOver(138)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',138);"> </span
></td><td id="138"><a href="#138">138</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_139"
onmouseover="gutterOver(139)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',139);"> </span
></td><td id="139"><a href="#139">139</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_140"
onmouseover="gutterOver(140)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',140);"> </span
></td><td id="140"><a href="#140">140</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_141"
onmouseover="gutterOver(141)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',141);"> </span
></td><td id="141"><a href="#141">141</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_142"
onmouseover="gutterOver(142)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',142);"> </span
></td><td id="142"><a href="#142">142</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_143"
onmouseover="gutterOver(143)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',143);"> </span
></td><td id="143"><a href="#143">143</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_144"
onmouseover="gutterOver(144)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',144);"> </span
></td><td id="144"><a href="#144">144</a></td></tr
><tr id="gr_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_145"
onmouseover="gutterOver(145)"
><td><span title="Add comment" onclick="codereviews.startEdit('svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b',145);"> </span
></td><td id="145"><a href="#145">145</a></td></tr
></table></pre>
<pre><table width="100%"><tr class="nocursor"><td></td></tr></table></pre>
</td>
<td id="lines">
<pre><table width="100%"><tr class="cursor_stop cursor_hidden"><td></td></tr></table></pre>
<pre class="prettyprint lang-py"><table id="src_table_0"><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_1
onmouseover="gutterOver(1)"
><td class="source"># pmx Copyright Notice<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_2
onmouseover="gutterOver(2)"
><td class="source"># ============================<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_3
onmouseover="gutterOver(3)"
><td class="source">#<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_4
onmouseover="gutterOver(4)"
><td class="source"># The pmx source code is copyrighted, but you can freely use and<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_5
onmouseover="gutterOver(5)"
><td class="source"># copy it as long as you don't change or remove any of the copyright<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_6
onmouseover="gutterOver(6)"
><td class="source"># notices.<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_7
onmouseover="gutterOver(7)"
><td class="source">#<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_8
onmouseover="gutterOver(8)"
><td class="source"># ----------------------------------------------------------------------<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_9
onmouseover="gutterOver(9)"
><td class="source"># pmx is Copyright (C) 2006-2013 by Daniel Seeliger<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_10
onmouseover="gutterOver(10)"
><td class="source">#<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_11
onmouseover="gutterOver(11)"
><td class="source"># All Rights Reserved<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_12
onmouseover="gutterOver(12)"
><td class="source">#<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_13
onmouseover="gutterOver(13)"
><td class="source"># Permission to use, copy, modify, distribute, and distribute modified<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_14
onmouseover="gutterOver(14)"
><td class="source"># versions of this software and its documentation for any purpose and<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_15
onmouseover="gutterOver(15)"
><td class="source"># without fee is hereby granted, provided that the above copyright<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_16
onmouseover="gutterOver(16)"
><td class="source"># notice appear in all copies and that both the copyright notice and<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_17
onmouseover="gutterOver(17)"
><td class="source"># this permission notice appear in supporting documentation, and that<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_18
onmouseover="gutterOver(18)"
><td class="source"># the name of Daniel Seeliger not be used in advertising or publicity<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_19
onmouseover="gutterOver(19)"
><td class="source"># pertaining to distribution of the software without specific, written<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_20
onmouseover="gutterOver(20)"
><td class="source"># prior permission.<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_21
onmouseover="gutterOver(21)"
><td class="source">#<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_22
onmouseover="gutterOver(22)"
><td class="source"># DANIEL SEELIGER DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_23
onmouseover="gutterOver(23)"
><td class="source"># SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_24
onmouseover="gutterOver(24)"
><td class="source"># FITNESS. IN NO EVENT SHALL DANIEL SEELIGER BE LIABLE FOR ANY<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_25
onmouseover="gutterOver(25)"
><td class="source"># SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_26
onmouseover="gutterOver(26)"
><td class="source"># RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_27
onmouseover="gutterOver(27)"
><td class="source"># CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_28
onmouseover="gutterOver(28)"
><td class="source"># CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_29
onmouseover="gutterOver(29)"
><td class="source"># ----------------------------------------------------------------------<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_30
onmouseover="gutterOver(30)"
><td class="source">import sys, os<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_31
onmouseover="gutterOver(31)"
><td class="source">from glob import glob<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_32
onmouseover="gutterOver(32)"
><td class="source">from numpy import *<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_33
onmouseover="gutterOver(33)"
><td class="source">from pmx import *<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_34
onmouseover="gutterOver(34)"
><td class="source"><br></td></tr
def read_data(fn, b = 0, e = -1):
    # Read the time (first) and dH/dl (second) columns of an xvg file,
    # skipping xmgrace header lines that start with '@' or '#', and
    # keep only points with b < time < e.
    if e == -1: e = 9999999999
    l = open(fn).readlines()
    data = []
    for line in l:
        if line[0] not in ['@','#']:
            entr = line.split()
            try:
                time = float(entr[0])
                if time > b and time < e:
                    data.append( float(entr[1]) )
            except:
                pass
#    print >>sys.stderr, 'Read file:', fn, ' with %d data points' % len(data)
    return data

def datapoint_from_time(time):
    # map a time in ps to an index in the data array; the factor 500
    # assumes dH/dl was written every 0.002 ps (500 points per ps)
    return time*500

def block_aver( data, block_size = 1000):
    # average dH/dl over consecutive, non-overlapping blocks of
    # block_size ps; returns a list of ('start-end', average) tuples
    total_time = len(data) / 500.
    next_time = block_size
    results = []
    offset = 0
    while next_time < total_time:
        beg = datapoint_from_time(offset)
        end = datapoint_from_time(next_time)
        res = average( data[beg:end] )
        results.append( (str(offset)+'-'+str(next_time), res ) )
        offset = next_time
        next_time += block_size
    return results

def convergence( data, block_size = 1000):
    # cumulative average of dH/dl from t = 0 up to next_time; offset is
    # never advanced, so each entry averages over a growing time window
    total_time = len(data) / 500.
    next_time = block_size
    results = []
    offset = 0
    while next_time < total_time:
        beg = datapoint_from_time(offset)
        end = datapoint_from_time(next_time)
        res = average( data[beg:end] )
        results.append( (next_time, res ) )
        next_time += block_size
    return results


help_text = ('Calculate delta G from multiple DTI runs',)

options = [
    Option( "-b", "real", 500, "Start time [ps]"),
    Option( "-e", "real", -1, "End time [ps]"),
    Option( "-block1", "int", 100, "Time [ps] for block average"),
    Option( "-block2", "int", 500, "Time [ps] for block average"),
#    Option( "-r2", "rvec", [1,2,3], "some vector that does wonderful things and returns always segfaults")
    ]

files = [
    FileOption("-dgdl", "r",["xvg"],"run", "Input file with dH/dl values"),
    FileOption("-o", "w",["txt"],"results.txt", "Results"),
    FileOption("-oc", "w",["txt"],"convergence.txt", "Convergence of <dH/dl> over time"),
    FileOption("-ob", "w",["txt"],"block.txt", "Files with block averages"),
]


cmdl = Commandline( sys.argv, options = options,
                    fileoptions = files,
                    program_desc = help_text,
                    check_for_existing_files = False )

dgdl_file = cmdl['-dgdl']
start_time = cmdl['-b']
end_time = cmdl['-e']

print 'DTI_analysis__> Reading: ', dgdl_file
print 'DTI_analysis__> Start time = ', start_time, ' End time = ', end_time
data = read_data( dgdl_file, b = start_time, e = end_time )

# overall average, standard deviation and number of dH/dl samples,
# written as a single line to the '-o' results file
av = average(data)
st = std(data)
size = len(data)
print 'DTI_analysis__> <dH/dl> = %8.4f'% av, ' | #data points = ', size
fp = open(cmdl['-o'],'w')
print >>fp, av, st, size
fp.close()

><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_123
onmouseover="gutterOver(123)"
><td class="source">block1 = cmdl['-block1']<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_124
onmouseover="gutterOver(124)"
><td class="source">fn =os.path.splitext(cmdl['-ob'])[0]+str(block1)+os.path.splitext(cmdl['-ob'])[1]<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_125
onmouseover="gutterOver(125)"
><td class="source">print 'DTI_analysis__> Block averaging 1: ', block1<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_126
onmouseover="gutterOver(126)"
><td class="source">res = block_aver( data, block1 )<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_127
onmouseover="gutterOver(127)"
><td class="source">fp = open(fn,'w')<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_128
onmouseover="gutterOver(128)"
><td class="source">for a, b in res:<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_129
onmouseover="gutterOver(129)"
><td class="source"> print >>fp, a, b<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_130
onmouseover="gutterOver(130)"
><td class="source">fp.close()<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_131
onmouseover="gutterOver(131)"
><td class="source">block2 = cmdl['-block2']<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_132
onmouseover="gutterOver(132)"
><td class="source">fn =os.path.splitext(cmdl['-ob'])[0]+str(block2)+os.path.splitext(cmdl['-ob'])[1]<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_133
onmouseover="gutterOver(133)"
><td class="source">print 'DTI_analysis__> Block averaging 2: ', block2<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_134
onmouseover="gutterOver(134)"
><td class="source">res = block_aver( data, block2 )<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_135
onmouseover="gutterOver(135)"
><td class="source">fp = open(fn,'w')<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_136
onmouseover="gutterOver(136)"
><td class="source">for a, b in res:<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_137
onmouseover="gutterOver(137)"
><td class="source"> print >>fp, a, b<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_138
onmouseover="gutterOver(138)"
><td class="source">fp.close()<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_139
onmouseover="gutterOver(139)"
><td class="source"><br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_140
onmouseover="gutterOver(140)"
><td class="source">res = convergence( data, 100 )<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_141
onmouseover="gutterOver(141)"
><td class="source">fp = open(cmdl['-oc'],'w')<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_142
onmouseover="gutterOver(142)"
><td class="source">for t, r in res:<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_143
onmouseover="gutterOver(143)"
><td class="source"> print >>fp, t, r<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_144
onmouseover="gutterOver(144)"
><td class="source">fp.close()<br></td></tr
><tr
id=sl_svn82a17baf41be6e3dd11997ac8d7dff4272c3a37b_145
onmouseover="gutterOver(145)"
><td class="source"><br></td></tr
| lgpl-3.0 | 8,436,432,717,271,101,000 | 32.024015 | 419 | 0.72446 | false |
wwj718/ANALYSE | cms/envs/aws.py | 4 | 11519 | """
This is the default template for our main set of AWS servers.
"""
# We intentionally define lots of variables that aren't used, and
# want to import all variables from base settings files
# pylint: disable=W0401, W0614
import json
from .common import *
from logsettings import get_logger_config
import os
from path import path
from dealer.git import git
from xmodule.modulestore.modulestore_settings import convert_module_store_setting_if_needed
# SERVICE_VARIANT specifies name of the variant used, which decides what JSON
# configuration files are read during startup.
SERVICE_VARIANT = os.environ.get('SERVICE_VARIANT', None)
# CONFIG_ROOT specifies the directory where the JSON configuration
# files are expected to be found. If not specified, use the project
# directory.
CONFIG_ROOT = path(os.environ.get('CONFIG_ROOT', ENV_ROOT))
# CONFIG_PREFIX specifies the prefix of the JSON configuration files,
# based on the service variant. If no variant is use, don't use a
# prefix.
CONFIG_PREFIX = SERVICE_VARIANT + "." if SERVICE_VARIANT else ""
############### ALWAYS THE SAME ################################
DEBUG = False
TEMPLATE_DEBUG = False
EMAIL_BACKEND = 'django_ses.SESBackend'
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
###################################### CELERY ################################
# Don't use a connection pool, since connections are dropped by ELB.
BROKER_POOL_LIMIT = 0
BROKER_CONNECTION_TIMEOUT = 1
# For the Result Store, use the django cache named 'celery'
CELERY_RESULT_BACKEND = 'cache'
CELERY_CACHE_BACKEND = 'celery'
# When the broker is behind an ELB, use a heartbeat to refresh the
# connection and to detect if it has been dropped.
BROKER_HEARTBEAT = 10.0
BROKER_HEARTBEAT_CHECKRATE = 2
# Each worker should only fetch one message at a time
CELERYD_PREFETCH_MULTIPLIER = 1
# Skip djcelery migrations, since we don't use the database as the broker
SOUTH_MIGRATION_MODULES = {
'djcelery': 'ignore',
}
# Rename the exchange and queues for each variant
QUEUE_VARIANT = CONFIG_PREFIX.lower()
CELERY_DEFAULT_EXCHANGE = 'edx.{0}core'.format(QUEUE_VARIANT)
HIGH_PRIORITY_QUEUE = 'edx.{0}core.high'.format(QUEUE_VARIANT)
DEFAULT_PRIORITY_QUEUE = 'edx.{0}core.default'.format(QUEUE_VARIANT)
LOW_PRIORITY_QUEUE = 'edx.{0}core.low'.format(QUEUE_VARIANT)
CELERY_DEFAULT_QUEUE = DEFAULT_PRIORITY_QUEUE
CELERY_DEFAULT_ROUTING_KEY = DEFAULT_PRIORITY_QUEUE
CELERY_QUEUES = {
HIGH_PRIORITY_QUEUE: {},
LOW_PRIORITY_QUEUE: {},
DEFAULT_PRIORITY_QUEUE: {}
}
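# For example, with SERVICE_VARIANT = 'cms' the names above resolve to the
# exchange 'edx.cms.core' and the queues 'edx.cms.core.high',
# 'edx.cms.core.default' and 'edx.cms.core.low'.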
############# NON-SECURE ENV CONFIG ##############################
# Things like server locations, ports, etc.
with open(CONFIG_ROOT / CONFIG_PREFIX + "env.json") as env_file:
ENV_TOKENS = json.load(env_file)
# STATIC_URL_BASE specifies the base url to use for static files
STATIC_URL_BASE = ENV_TOKENS.get('STATIC_URL_BASE', None)
if STATIC_URL_BASE:
# collectstatic will fail if STATIC_URL is a unicode string
STATIC_URL = STATIC_URL_BASE.encode('ascii')
if not STATIC_URL.endswith("/"):
STATIC_URL += "/"
STATIC_URL += git.revision + "/"
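    # The resulting URL embeds the git revision, e.g.
    # 'https://cdn.example.com/static/<revision>/' (illustrative host), so
    # every deploy gets a distinct, safely cacheable static prefix.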
# GITHUB_REPO_ROOT is the base directory
# for course data
GITHUB_REPO_ROOT = ENV_TOKENS.get('GITHUB_REPO_ROOT', GITHUB_REPO_ROOT)
# STATIC_ROOT specifies the directory where static files are
# collected
STATIC_ROOT_BASE = ENV_TOKENS.get('STATIC_ROOT_BASE', None)
if STATIC_ROOT_BASE:
STATIC_ROOT = path(STATIC_ROOT_BASE) / git.revision
EMAIL_BACKEND = ENV_TOKENS.get('EMAIL_BACKEND', EMAIL_BACKEND)
EMAIL_FILE_PATH = ENV_TOKENS.get('EMAIL_FILE_PATH', None)
EMAIL_HOST = ENV_TOKENS.get('EMAIL_HOST', EMAIL_HOST)
EMAIL_PORT = ENV_TOKENS.get('EMAIL_PORT', EMAIL_PORT)
EMAIL_USE_TLS = ENV_TOKENS.get('EMAIL_USE_TLS', EMAIL_USE_TLS)
LMS_BASE = ENV_TOKENS.get('LMS_BASE')
# Note that FEATURES['PREVIEW_LMS_BASE'] gets read in from the environment file.
SITE_NAME = ENV_TOKENS['SITE_NAME']
LOG_DIR = ENV_TOKENS['LOG_DIR']
CACHES = ENV_TOKENS['CACHES']
# Cache used for location mapping -- called many times with the same key/value
# in a given request.
if 'loc_cache' not in CACHES:
CACHES['loc_cache'] = {
'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
'LOCATION': 'edx_location_mem_cache',
}
SESSION_COOKIE_DOMAIN = ENV_TOKENS.get('SESSION_COOKIE_DOMAIN')
SESSION_ENGINE = ENV_TOKENS.get('SESSION_ENGINE', SESSION_ENGINE)
SESSION_COOKIE_SECURE = ENV_TOKENS.get('SESSION_COOKIE_SECURE', SESSION_COOKIE_SECURE)
# allow for environments to specify what cookie name our login subsystem should use
# this is to fix a bug regarding simultaneous logins between edx.org and edge.edx.org which can
# happen with some browsers (e.g. Firefox)
if ENV_TOKENS.get('SESSION_COOKIE_NAME', None):
# NOTE, there's a bug in Django (http://bugs.python.org/issue18012) which necessitates this being a str()
SESSION_COOKIE_NAME = str(ENV_TOKENS.get('SESSION_COOKIE_NAME'))
# Email overrides
DEFAULT_FROM_EMAIL = ENV_TOKENS.get('DEFAULT_FROM_EMAIL', DEFAULT_FROM_EMAIL)
DEFAULT_FEEDBACK_EMAIL = ENV_TOKENS.get('DEFAULT_FEEDBACK_EMAIL', DEFAULT_FEEDBACK_EMAIL)
ADMINS = ENV_TOKENS.get('ADMINS', ADMINS)
SERVER_EMAIL = ENV_TOKENS.get('SERVER_EMAIL', SERVER_EMAIL)
MKTG_URLS = ENV_TOKENS.get('MKTG_URLS', MKTG_URLS)
TECH_SUPPORT_EMAIL = ENV_TOKENS.get('TECH_SUPPORT_EMAIL', TECH_SUPPORT_EMAIL)
COURSES_WITH_UNSAFE_CODE = ENV_TOKENS.get("COURSES_WITH_UNSAFE_CODE", [])
ASSET_IGNORE_REGEX = ENV_TOKENS.get('ASSET_IGNORE_REGEX', ASSET_IGNORE_REGEX)
# Theme overrides
THEME_NAME = ENV_TOKENS.get('THEME_NAME', None)
# Timezone overrides
TIME_ZONE = ENV_TOKENS.get('TIME_ZONE', TIME_ZONE)
# Push to LMS overrides
GIT_REPO_EXPORT_DIR = ENV_TOKENS.get('GIT_REPO_EXPORT_DIR', '/edx/var/edxapp/export_course_repos')
# Translation overrides
LANGUAGES = ENV_TOKENS.get('LANGUAGES', LANGUAGES)
LANGUAGE_CODE = ENV_TOKENS.get('LANGUAGE_CODE', LANGUAGE_CODE)
USE_I18N = ENV_TOKENS.get('USE_I18N', USE_I18N)
ENV_FEATURES = ENV_TOKENS.get('FEATURES', ENV_TOKENS.get('MITX_FEATURES', {}))
for feature, value in ENV_FEATURES.items():
FEATURES[feature] = value
# Additional installed apps
for app in ENV_TOKENS.get('ADDL_INSTALLED_APPS', []):
INSTALLED_APPS += (app,)
WIKI_ENABLED = ENV_TOKENS.get('WIKI_ENABLED', WIKI_ENABLED)
LOGGING = get_logger_config(LOG_DIR,
logging_env=ENV_TOKENS['LOGGING_ENV'],
debug=False,
service_variant=SERVICE_VARIANT)
# Theming start:
PLATFORM_NAME = ENV_TOKENS.get('PLATFORM_NAME', 'edX')
# Event Tracking
if "TRACKING_IGNORE_URL_PATTERNS" in ENV_TOKENS:
TRACKING_IGNORE_URL_PATTERNS = ENV_TOKENS.get("TRACKING_IGNORE_URL_PATTERNS")
# Django CAS external authentication settings
CAS_EXTRA_LOGIN_PARAMS = ENV_TOKENS.get("CAS_EXTRA_LOGIN_PARAMS", None)
if FEATURES.get('AUTH_USE_CAS'):
CAS_SERVER_URL = ENV_TOKENS.get("CAS_SERVER_URL", None)
AUTHENTICATION_BACKENDS = (
'django.contrib.auth.backends.ModelBackend',
'django_cas.backends.CASBackend',
)
INSTALLED_APPS += ('django_cas',)
MIDDLEWARE_CLASSES += ('django_cas.middleware.CASMiddleware',)
CAS_ATTRIBUTE_CALLBACK = ENV_TOKENS.get('CAS_ATTRIBUTE_CALLBACK', None)
if CAS_ATTRIBUTE_CALLBACK:
import importlib
CAS_USER_DETAILS_RESOLVER = getattr(
importlib.import_module(CAS_ATTRIBUTE_CALLBACK['module']),
CAS_ATTRIBUTE_CALLBACK['function']
)
################ SECURE AUTH ITEMS ###############################
# Secret things: passwords, access keys, etc.
with open(CONFIG_ROOT / CONFIG_PREFIX + "auth.json") as auth_file:
AUTH_TOKENS = json.load(auth_file)
############### XBlock filesystem field config ##########
if 'DJFS' in AUTH_TOKENS and AUTH_TOKENS['DJFS'] is not None:
DJFS = AUTH_TOKENS['DJFS']
EMAIL_HOST_USER = AUTH_TOKENS.get('EMAIL_HOST_USER', EMAIL_HOST_USER)
EMAIL_HOST_PASSWORD = AUTH_TOKENS.get('EMAIL_HOST_PASSWORD', EMAIL_HOST_PASSWORD)
# If Segment.io key specified, load it and turn on Segment.io if the feature flag is set
# Note that this is the Studio key. There is a separate key for the LMS.
SEGMENT_IO_KEY = AUTH_TOKENS.get('SEGMENT_IO_KEY')
if SEGMENT_IO_KEY:
FEATURES['SEGMENT_IO'] = ENV_TOKENS.get('SEGMENT_IO', False)
AWS_ACCESS_KEY_ID = AUTH_TOKENS["AWS_ACCESS_KEY_ID"]
if AWS_ACCESS_KEY_ID == "":
AWS_ACCESS_KEY_ID = None
AWS_SECRET_ACCESS_KEY = AUTH_TOKENS["AWS_SECRET_ACCESS_KEY"]
if AWS_SECRET_ACCESS_KEY == "":
AWS_SECRET_ACCESS_KEY = None
DATABASES = AUTH_TOKENS['DATABASES']
MODULESTORE = convert_module_store_setting_if_needed(AUTH_TOKENS.get('MODULESTORE', MODULESTORE))
CONTENTSTORE = AUTH_TOKENS['CONTENTSTORE']
DOC_STORE_CONFIG = AUTH_TOKENS['DOC_STORE_CONFIG']
# Datadog for events!
DATADOG = AUTH_TOKENS.get("DATADOG", {})
DATADOG.update(ENV_TOKENS.get("DATADOG", {}))
# TODO: deprecated (compatibility with previous settings)
if 'DATADOG_API' in AUTH_TOKENS:
DATADOG['api_key'] = AUTH_TOKENS['DATADOG_API']
# Celery Broker
CELERY_ALWAYS_EAGER = ENV_TOKENS.get("CELERY_ALWAYS_EAGER", False)
CELERY_BROKER_TRANSPORT = ENV_TOKENS.get("CELERY_BROKER_TRANSPORT", "")
CELERY_BROKER_HOSTNAME = ENV_TOKENS.get("CELERY_BROKER_HOSTNAME", "")
CELERY_BROKER_VHOST = ENV_TOKENS.get("CELERY_BROKER_VHOST", "")
CELERY_BROKER_USER = AUTH_TOKENS.get("CELERY_BROKER_USER", "")
CELERY_BROKER_PASSWORD = AUTH_TOKENS.get("CELERY_BROKER_PASSWORD", "")
BROKER_URL = "{0}://{1}:{2}@{3}/{4}".format(CELERY_BROKER_TRANSPORT,
CELERY_BROKER_USER,
CELERY_BROKER_PASSWORD,
CELERY_BROKER_HOSTNAME,
CELERY_BROKER_VHOST)
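# e.g. 'amqp://celery_user:celery_pass@broker.internal/celery_vhost'
# (illustrative values; the real ones come from ENV_TOKENS/AUTH_TOKENS)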
# Event tracking
TRACKING_BACKENDS.update(AUTH_TOKENS.get("TRACKING_BACKENDS", {}))
EVENT_TRACKING_BACKENDS.update(AUTH_TOKENS.get("EVENT_TRACKING_BACKENDS", {}))
SUBDOMAIN_BRANDING = ENV_TOKENS.get('SUBDOMAIN_BRANDING', {})
VIRTUAL_UNIVERSITIES = ENV_TOKENS.get('VIRTUAL_UNIVERSITIES', [])
##### ACCOUNT LOCKOUT DEFAULT PARAMETERS #####
MAX_FAILED_LOGIN_ATTEMPTS_ALLOWED = ENV_TOKENS.get("MAX_FAILED_LOGIN_ATTEMPTS_ALLOWED", 5)
MAX_FAILED_LOGIN_ATTEMPTS_LOCKOUT_PERIOD_SECS = ENV_TOKENS.get("MAX_FAILED_LOGIN_ATTEMPTS_LOCKOUT_PERIOD_SECS", 15 * 60)
MICROSITE_CONFIGURATION = ENV_TOKENS.get('MICROSITE_CONFIGURATION', {})
MICROSITE_ROOT_DIR = path(ENV_TOKENS.get('MICROSITE_ROOT_DIR', ''))
#### PASSWORD POLICY SETTINGS #####
PASSWORD_MIN_LENGTH = ENV_TOKENS.get("PASSWORD_MIN_LENGTH")
PASSWORD_MAX_LENGTH = ENV_TOKENS.get("PASSWORD_MAX_LENGTH")
PASSWORD_COMPLEXITY = ENV_TOKENS.get("PASSWORD_COMPLEXITY", {})
PASSWORD_DICTIONARY_EDIT_DISTANCE_THRESHOLD = ENV_TOKENS.get("PASSWORD_DICTIONARY_EDIT_DISTANCE_THRESHOLD")
PASSWORD_DICTIONARY = ENV_TOKENS.get("PASSWORD_DICTIONARY", [])
### INACTIVITY SETTINGS ####
SESSION_INACTIVITY_TIMEOUT_IN_SECONDS = AUTH_TOKENS.get("SESSION_INACTIVITY_TIMEOUT_IN_SECONDS")
##### X-Frame-Options response header settings #####
X_FRAME_OPTIONS = ENV_TOKENS.get('X_FRAME_OPTIONS', X_FRAME_OPTIONS)
##### ADVANCED_SECURITY_CONFIG #####
ADVANCED_SECURITY_CONFIG = ENV_TOKENS.get('ADVANCED_SECURITY_CONFIG', {})
################ ADVANCED COMPONENT/PROBLEM TYPES ###############
ADVANCED_COMPONENT_TYPES = ENV_TOKENS.get('ADVANCED_COMPONENT_TYPES', ADVANCED_COMPONENT_TYPES)
ADVANCED_PROBLEM_TYPES = ENV_TOKENS.get('ADVANCED_PROBLEM_TYPES', ADVANCED_PROBLEM_TYPES)
| agpl-3.0 | -8,380,252,749,934,635,000 | 37.784512 | 120 | 0.701189 | false |
maohongyuan/kbengine | kbe/res/scripts/common/Lib/test/test_email/test__encoded_words.py | 123 | 6387 | import unittest
from email import _encoded_words as _ew
from email import errors
from test.test_email import TestEmailBase
class TestDecodeQ(TestEmailBase):
def _test(self, source, ex_result, ex_defects=[]):
result, defects = _ew.decode_q(source)
self.assertEqual(result, ex_result)
self.assertDefectsEqual(defects, ex_defects)
def test_no_encoded(self):
self._test(b'foobar', b'foobar')
def test_spaces(self):
self._test(b'foo=20bar=20', b'foo bar ')
self._test(b'foo_bar_', b'foo bar ')
def test_run_of_encoded(self):
self._test(b'foo=20=20=21=2Cbar', b'foo !,bar')
class TestDecodeB(TestEmailBase):
def _test(self, source, ex_result, ex_defects=[]):
result, defects = _ew.decode_b(source)
self.assertEqual(result, ex_result)
self.assertDefectsEqual(defects, ex_defects)
def test_simple(self):
self._test(b'Zm9v', b'foo')
def test_missing_padding(self):
self._test(b'dmk', b'vi', [errors.InvalidBase64PaddingDefect])
def test_invalid_character(self):
self._test(b'dm\x01k===', b'vi', [errors.InvalidBase64CharactersDefect])
def test_invalid_character_and_bad_padding(self):
self._test(b'dm\x01k', b'vi', [errors.InvalidBase64CharactersDefect,
errors.InvalidBase64PaddingDefect])
class TestDecode(TestEmailBase):
def test_wrong_format_input_raises(self):
with self.assertRaises(ValueError):
_ew.decode('=?badone?=')
with self.assertRaises(ValueError):
_ew.decode('=?')
with self.assertRaises(ValueError):
_ew.decode('')
def _test(self, source, result, charset='us-ascii', lang='', defects=[]):
res, char, l, d = _ew.decode(source)
self.assertEqual(res, result)
self.assertEqual(char, charset)
self.assertEqual(l, lang)
self.assertDefectsEqual(d, defects)
def test_simple_q(self):
self._test('=?us-ascii?q?foo?=', 'foo')
def test_simple_b(self):
self._test('=?us-ascii?b?dmk=?=', 'vi')
def test_q_case_ignored(self):
self._test('=?us-ascii?Q?foo?=', 'foo')
def test_b_case_ignored(self):
self._test('=?us-ascii?B?dmk=?=', 'vi')
def test_non_trivial_q(self):
        self._test('=?latin-1?q?=20F=fcr=20Elise=20?=', ' Für Elise ', 'latin-1')
def test_q_escaped_bytes_preserved(self):
self._test(b'=?us-ascii?q?=20\xACfoo?='.decode('us-ascii',
'surrogateescape'),
' \uDCACfoo',
defects = [errors.UndecodableBytesDefect])
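
    # Note: the 'surrogateescape' error handler maps the undecodable byte
    # 0xAC to the lone surrogate U+DCAC, which is why the expected result
    # above contains '\uDCAC'.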
def test_b_undecodable_bytes_ignored_with_defect(self):
self._test(b'=?us-ascii?b?dm\xACk?='.decode('us-ascii',
'surrogateescape'),
'vi',
defects = [
errors.InvalidBase64CharactersDefect,
errors.InvalidBase64PaddingDefect])
def test_b_invalid_bytes_ignored_with_defect(self):
self._test('=?us-ascii?b?dm\x01k===?=',
'vi',
defects = [errors.InvalidBase64CharactersDefect])
def test_b_invalid_bytes_incorrect_padding(self):
self._test('=?us-ascii?b?dm\x01k?=',
'vi',
defects = [
errors.InvalidBase64CharactersDefect,
errors.InvalidBase64PaddingDefect])
def test_b_padding_defect(self):
self._test('=?us-ascii?b?dmk?=',
'vi',
defects = [errors.InvalidBase64PaddingDefect])
def test_nonnull_lang(self):
self._test('=?us-ascii*jive?q?test?=', 'test', lang='jive')
def test_unknown_8bit_charset(self):
self._test('=?unknown-8bit?q?foo=ACbar?=',
b'foo\xacbar'.decode('ascii', 'surrogateescape'),
charset = 'unknown-8bit',
defects = [])
def test_unknown_charset(self):
self._test('=?foobar?q?foo=ACbar?=',
b'foo\xacbar'.decode('ascii', 'surrogateescape'),
charset = 'foobar',
# XXX Should this be a new Defect instead?
defects = [errors.CharsetError])
def test_q_nonascii(self):
self._test('=?utf-8?q?=C3=89ric?=',
                   'Éric',
charset='utf-8')
class TestEncodeQ(TestEmailBase):
def _test(self, src, expected):
self.assertEqual(_ew.encode_q(src), expected)
def test_all_safe(self):
self._test(b'foobar', 'foobar')
def test_spaces(self):
self._test(b'foo bar ', 'foo_bar_')
def test_run_of_encodables(self):
self._test(b'foo ,,bar', 'foo__=2C=2Cbar')
class TestEncodeB(TestEmailBase):
def test_simple(self):
self.assertEqual(_ew.encode_b(b'foo'), 'Zm9v')
def test_padding(self):
self.assertEqual(_ew.encode_b(b'vi'), 'dmk=')
class TestEncode(TestEmailBase):
def test_q(self):
self.assertEqual(_ew.encode('foo', 'utf-8', 'q'), '=?utf-8?q?foo?=')
def test_b(self):
self.assertEqual(_ew.encode('foo', 'utf-8', 'b'), '=?utf-8?b?Zm9v?=')
def test_auto_q(self):
self.assertEqual(_ew.encode('foo', 'utf-8'), '=?utf-8?q?foo?=')
def test_auto_q_if_short_mostly_safe(self):
self.assertEqual(_ew.encode('vi.', 'utf-8'), '=?utf-8?q?vi=2E?=')
def test_auto_b_if_enough_unsafe(self):
self.assertEqual(_ew.encode('.....', 'utf-8'), '=?utf-8?b?Li4uLi4=?=')
def test_auto_b_if_long_unsafe(self):
self.assertEqual(_ew.encode('vi.vi.vi.vi.vi.', 'utf-8'),
'=?utf-8?b?dmkudmkudmkudmkudmku?=')
def test_auto_q_if_long_mostly_safe(self):
self.assertEqual(_ew.encode('vi vi vi.vi ', 'utf-8'),
'=?utf-8?q?vi_vi_vi=2Evi_?=')
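
    # The auto tests above exercise the encoder's choice between 'q' and
    # 'b': roughly, 'q' is preferred for readability unless base64 would
    # be clearly shorter (i.e. the string has many unsafe characters).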
def test_utf8_default(self):
self.assertEqual(_ew.encode('foo'), '=?utf-8?q?foo?=')
def test_lang(self):
self.assertEqual(_ew.encode('foo', lang='jive'), '=?utf-8*jive?q?foo?=')
def test_unknown_8bit(self):
self.assertEqual(_ew.encode('foo\uDCACbar', charset='unknown-8bit'),
'=?unknown-8bit?q?foo=ACbar?=')
if __name__ == '__main__':
unittest.main()
| lgpl-3.0 | 2,052,561,620,146,257,400 | 32.255208 | 81 | 0.557713 | false |
iamprakashom/offlineimap | offlineimap/folder/GmailMaildir.py | 10 | 13398 | # Maildir folder support with labels
# Copyright (C) 2002 - 2011 John Goerzen & contributors
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
import os
from sys import exc_info
from .Maildir import MaildirFolder
from offlineimap import OfflineImapError
import offlineimap.accounts
from offlineimap import imaputil
class GmailMaildirFolder(MaildirFolder):
"""Folder implementation to support adding labels to messages in a Maildir.
"""
def __init__(self, root, name, sep, repository):
super(GmailMaildirFolder, self).__init__(root, name, sep, repository)
# The header under which labels are stored
self.labelsheader = self.repository.account.getconf('labelsheader', 'X-Keywords')
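        # Gmail stores a message's labels as a comma-separated list in this
        # header, e.g. 'X-Keywords: \Important,work' (illustrative value).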
# enables / disables label sync
self.synclabels = self.repository.account.getconfboolean('synclabels', 0)
# if synclabels is enabled, add a 4th pass to sync labels
if self.synclabels:
self.syncmessagesto_passes.append(('syncing labels', self.syncmessagesto_labels))
def quickchanged(self, statusfolder):
"""Returns True if the Maildir has changed. Checks uids, flags and mtimes"""
self.cachemessagelist()
# Folder has different uids than statusfolder => TRUE
if sorted(self.getmessageuidlist()) != \
sorted(statusfolder.getmessageuidlist()):
return True
# check for flag changes, it's quick on a Maildir
for (uid, message) in self.getmessagelist().iteritems():
if message['flags'] != statusfolder.getmessageflags(uid):
return True
# check for newer mtimes. it is also fast
for (uid, message) in self.getmessagelist().iteritems():
if message['mtime'] > statusfolder.getmessagemtime(uid):
return True
return False #Nope, nothing changed
# Interface from BaseFolder
def msglist_item_initializer(self, uid):
return {'flags': set(), 'labels': set(), 'labels_cached': False,
'filename': '/no-dir/no-such-file/', 'mtime': 0}
def cachemessagelist(self, min_date=None, min_uid=None):
if self.ismessagelistempty():
self.messagelist = self._scanfolder(min_date=min_date, min_uid=min_uid)
# Get mtimes
if self.synclabels:
for uid, msg in self.messagelist.items():
filepath = os.path.join(self.getfullname(), msg['filename'])
msg['mtime'] = long(os.stat(filepath).st_mtime)
def getmessagelabels(self, uid):
# Labels are not cached in cachemessagelist because it is too slow.
if not self.messagelist[uid]['labels_cached']:
filename = self.messagelist[uid]['filename']
filepath = os.path.join(self.getfullname(), filename)
if not os.path.exists(filepath):
return set()
file = open(filepath, 'rt')
content = file.read()
file.close()
self.messagelist[uid]['labels'] = set()
for hstr in self.getmessageheaderlist(content, self.labelsheader):
self.messagelist[uid]['labels'].update(
imaputil.labels_from_header(self.labelsheader, hstr))
self.messagelist[uid]['labels_cached'] = True
return self.messagelist[uid]['labels']
def getmessagemtime(self, uid):
if not 'mtime' in self.messagelist[uid]:
return 0
else:
return self.messagelist[uid]['mtime']
def savemessage(self, uid, content, flags, rtime):
"""Writes a new message, with the specified uid.
See folder/Base for detail. Note that savemessage() does not
check against dryrun settings, so you need to ensure that
savemessage is never called in a dryrun mode."""
if not self.synclabels:
return super(GmailMaildirFolder, self).savemessage(uid, content, flags, rtime)
labels = set()
for hstr in self.getmessageheaderlist(content, self.labelsheader):
labels.update(imaputil.labels_from_header(self.labelsheader, hstr))
ret = super(GmailMaildirFolder, self).savemessage(uid, content, flags, rtime)
# Update the mtime and labels
filename = self.messagelist[uid]['filename']
filepath = os.path.join(self.getfullname(), filename)
self.messagelist[uid]['mtime'] = long(os.stat(filepath).st_mtime)
self.messagelist[uid]['labels'] = labels
return ret
def savemessagelabels(self, uid, labels, ignorelabels=set()):
"""Change a message's labels to `labels`.
Note that this function does not check against dryrun settings,
so you need to ensure that it is never called in a dryrun mode."""
filename = self.messagelist[uid]['filename']
filepath = os.path.join(self.getfullname(), filename)
file = open(filepath, 'rt')
content = file.read()
file.close()
oldlabels = set()
for hstr in self.getmessageheaderlist(content, self.labelsheader):
oldlabels.update(imaputil.labels_from_header(self.labelsheader, hstr))
labels = labels - ignorelabels
ignoredlabels = oldlabels & ignorelabels
oldlabels = oldlabels - ignorelabels
# Nothing to change
if labels == oldlabels:
return
# Change labels into content
labels_str = imaputil.format_labels_string(self.labelsheader,
sorted(labels | ignoredlabels))
# First remove old labels header, and then add the new one
content = self.deletemessageheaders(content, self.labelsheader)
content = self.addmessageheader(content, '\n', self.labelsheader, labels_str)
mtime = long(os.stat(filepath).st_mtime)
# write file with new labels to a unique file in tmp
messagename = self.new_message_filename(uid, set())
tmpname = self.save_to_tmp_file(messagename, content)
tmppath = os.path.join(self.getfullname(), tmpname)
# move to actual location
try:
os.rename(tmppath, filepath)
except OSError as e:
raise OfflineImapError("Can't rename file '%s' to '%s': %s" % \
(tmppath, filepath, e[1]), OfflineImapError.ERROR.FOLDER), \
None, exc_info()[2]
# if utime_from_header=true, we don't want to change the mtime.
if self.utime_from_header and mtime:
os.utime(filepath, (mtime, mtime))
# save the new mtime and labels
self.messagelist[uid]['mtime'] = long(os.stat(filepath).st_mtime)
self.messagelist[uid]['labels'] = labels
def copymessageto(self, uid, dstfolder, statusfolder, register = 1):
"""Copies a message from self to dst if needed, updating the status
Note that this function does not check against dryrun settings,
so you need to ensure that it is never called in a
dryrun mode.
:param uid: uid of the message to be copied.
:param dstfolder: A BaseFolder-derived instance
:param statusfolder: A LocalStatusFolder instance
        :param register: whether we should register a new thread.
:returns: Nothing on success, or raises an Exception."""
# Check if we are really copying
realcopy = uid > 0 and not dstfolder.uidexists(uid)
# first copy the message
super(GmailMaildirFolder, self).copymessageto(uid, dstfolder, statusfolder, register)
        # Sync labels and mtime now when the message is new (the embedded
        # labels are up to date, and have already propagated to the remote
        # server).  For messages which already existed on the remote this
        # is useless, as the labels may get updated later.
if realcopy and self.synclabels:
try:
labels = dstfolder.getmessagelabels(uid)
statusfolder.savemessagelabels(uid, labels, mtime=self.getmessagemtime(uid))
# dstfolder is not GmailMaildir.
except NotImplementedError:
return
def syncmessagesto_labels(self, dstfolder, statusfolder):
"""Pass 4: Label Synchronization (Gmail only)
Compare label mismatches in self with those in statusfolder. If
msg has a valid UID and exists on dstfolder (has not e.g. been
deleted there), sync the labels change to both dstfolder and
statusfolder.
Also skips messages whose mtime remains the same as statusfolder, as the
contents have not changed.
        This function checks and protects us from action in dryrun mode.
"""
# For each label, we store a list of uids to which it should be
# added. Then, we can call addmessageslabels() to apply them in
# bulk, rather than one call per message.
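        # For example, if uid 7 gained {'work'} and uid 9 gained
        # {'work', 'todo'}, addlabellist becomes
        # {'work': [7, 9], 'todo': [9]}, i.e. one bulk call per label.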
addlabellist = {}
dellabellist = {}
uidlist = []
try:
# filter uids (fast)
for uid in self.getmessageuidlist():
# bail out on CTRL-C or SIGTERM
if offlineimap.accounts.Account.abort_NOW_signal.is_set():
break
# Ignore messages with negative UIDs missed by pass 1 and
# don't do anything if the message has been deleted remotely
if uid < 0 or not dstfolder.uidexists(uid):
continue
selfmtime = self.getmessagemtime(uid)
if statusfolder.uidexists(uid):
statusmtime = statusfolder.getmessagemtime(uid)
else:
statusmtime = 0
if selfmtime > statusmtime:
uidlist.append(uid)
self.ui.collectingdata(uidlist, self)
# This can be slow if there is a lot of modified files
for uid in uidlist:
# bail out on CTRL-C or SIGTERM
if offlineimap.accounts.Account.abort_NOW_signal.is_set():
break
selflabels = self.getmessagelabels(uid)
if statusfolder.uidexists(uid):
statuslabels = statusfolder.getmessagelabels(uid)
else:
statuslabels = set()
addlabels = selflabels - statuslabels
dellabels = statuslabels - selflabels
for lb in addlabels:
if not lb in addlabellist:
addlabellist[lb] = []
addlabellist[lb].append(uid)
for lb in dellabels:
if not lb in dellabellist:
dellabellist[lb] = []
dellabellist[lb].append(uid)
for lb, uids in addlabellist.items():
# bail out on CTRL-C or SIGTERM
if offlineimap.accounts.Account.abort_NOW_signal.is_set():
break
self.ui.addinglabels(uids, lb, dstfolder)
if self.repository.account.dryrun:
continue #don't actually add in a dryrun
dstfolder.addmessageslabels(uids, set([lb]))
statusfolder.addmessageslabels(uids, set([lb]))
for lb, uids in dellabellist.items():
# bail out on CTRL-C or SIGTERM
if offlineimap.accounts.Account.abort_NOW_signal.is_set():
break
self.ui.deletinglabels(uids, lb, dstfolder)
if self.repository.account.dryrun:
continue #don't actually remove in a dryrun
dstfolder.deletemessageslabels(uids, set([lb]))
statusfolder.deletemessageslabels(uids, set([lb]))
            # Update mtimes on StatusFolder. It is done last to be safe. If something else fails
# and the mtime is not updated, the labels will still be synced next time.
mtimes = {}
for uid in uidlist:
# bail out on CTRL-C or SIGTERM
if offlineimap.accounts.Account.abort_NOW_signal.is_set():
break
if self.repository.account.dryrun:
continue #don't actually update statusfolder
filename = self.messagelist[uid]['filename']
filepath = os.path.join(self.getfullname(), filename)
mtimes[uid] = long(os.stat(filepath).st_mtime)
# finally update statusfolder in a single DB transaction
statusfolder.savemessagesmtimebulk(mtimes)
except NotImplementedError:
self.ui.warn("Can't sync labels. You need to configure a remote repository of type Gmail.")
| gpl-2.0 | 2,524,864,323,799,201,300 | 39.847561 | 103 | 0.615166 | false |
PeterWangIntel/chromium-crosswalk | native_client_sdk/src/tools/tests/create_nmf_test.py | 3 | 19466 | #!/usr/bin/env python
# Copyright (c) 2012 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import json
import os
import posixpath
import shutil
import subprocess
import sys
import tempfile
import unittest
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
TOOLS_DIR = os.path.dirname(SCRIPT_DIR)
DATA_DIR = os.path.join(TOOLS_DIR, 'lib', 'tests', 'data')
BUILD_TOOLS_DIR = os.path.join(os.path.dirname(TOOLS_DIR), 'build_tools')
CHROME_SRC = os.path.dirname(os.path.dirname(os.path.dirname(TOOLS_DIR)))
MOCK_DIR = os.path.join(CHROME_SRC, 'third_party', 'pymock')
# For the mock library
sys.path.append(MOCK_DIR)
sys.path.append(TOOLS_DIR)
sys.path.append(BUILD_TOOLS_DIR)
import build_paths
import create_nmf
import getos
from mock import patch, Mock
TOOLCHAIN_OUT = os.path.join(build_paths.OUT_DIR, 'sdk_tests', 'toolchain')
NACL_X86_GLIBC_TOOLCHAIN = os.path.join(TOOLCHAIN_OUT,
'%s_x86' % getos.GetPlatform(),
'nacl_x86_glibc')
PosixRelPath = create_nmf.PosixRelPath
def StripSo(name):
"""Strip trailing hexidecimal characters from the name of a shared object.
It strips everything after the last '.' in the name, and checks that the new
name ends with .so.
e.g.
libc.so.ad6acbfa => libc.so
foo.bar.baz => foo.bar.baz
"""
stripped_name = '.'.join(name.split('.')[:-1])
if stripped_name.endswith('.so'):
return stripped_name
return name
class TestPosixRelPath(unittest.TestCase):
def testBasic(self):
# Note that PosixRelPath only converts from native path format to posix
# path format, that's why we have to use os.path.join here.
path = os.path.join(os.path.sep, 'foo', 'bar', 'baz.blah')
start = os.path.sep + 'foo'
self.assertEqual(PosixRelPath(path, start), 'bar/baz.blah')
class TestDefaultLibpath(unittest.TestCase):
def setUp(self):
patcher = patch('create_nmf.GetSDKRoot', Mock(return_value='/dummy/path'))
patcher.start()
self.addCleanup(patcher.stop)
def testUsesSDKRoot(self):
paths = create_nmf.GetDefaultLibPath('Debug')
for path in paths:
self.assertTrue(path.startswith('/dummy/path'))
def testIncludesNaClPorts(self):
paths = create_nmf.GetDefaultLibPath('Debug')
self.assertTrue(any(os.path.join('ports', 'lib') in p for p in paths),
"naclports libpath missing: %s" % str(paths))
class TestNmfUtils(unittest.TestCase):
"""Tests for the main NmfUtils class in create_nmf."""
def setUp(self):
self.tempdir = None
self.toolchain = NACL_X86_GLIBC_TOOLCHAIN
self.objdump = os.path.join(self.toolchain, 'bin', 'i686-nacl-objdump')
if os.name == 'nt':
self.objdump += '.exe'
self._Mktemp()
def _CreateTestNexe(self, name, arch):
"""Create an empty test .nexe file for use in create_nmf tests.
This is used rather than checking in test binaries since the
checked in binaries depend on .so files that only exist in the
certain SDK that build them.
"""
compiler = os.path.join(self.toolchain, 'bin', '%s-nacl-g++' % arch)
if os.name == 'nt':
compiler += '.exe'
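    # Suppress cygwin's warning about MS-DOS style paths when the
    # toolchain is invoked on Windows.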
os.environ['CYGWIN'] = 'nodosfilewarning'
program = 'int main() { return 0; }'
name = os.path.join(self.tempdir, name)
dst_dir = os.path.dirname(name)
if not os.path.exists(dst_dir):
os.makedirs(dst_dir)
    cmd = [compiler, '-pthread', '-x', 'c', '-o', name, '-']
p = subprocess.Popen(cmd, stdin=subprocess.PIPE)
p.communicate(input=program)
self.assertEqual(p.returncode, 0)
return name
def tearDown(self):
if self.tempdir:
shutil.rmtree(self.tempdir)
def _Mktemp(self):
self.tempdir = tempfile.mkdtemp()
def _CreateNmfUtils(self, nexes, **kwargs):
if not kwargs.get('lib_path'):
# Use lib instead of lib64 (lib64 is a symlink to lib).
kwargs['lib_path'] = [
os.path.join(self.toolchain, 'x86_64-nacl', 'lib'),
os.path.join(self.toolchain, 'x86_64-nacl', 'lib32')]
return create_nmf.NmfUtils(nexes,
objdump=self.objdump,
**kwargs)
def _CreateStatic(self, arch_path=None, **kwargs):
"""Copy all static .nexe files from the DATA_DIR to a temporary directory.
Args:
arch_path: A dictionary mapping architecture to the directory to generate
the .nexe for the architecture in.
kwargs: Keyword arguments to pass through to create_nmf.NmfUtils
constructor.
Returns:
A tuple with 2 elements:
* The generated NMF as a dictionary (i.e. parsed by json.loads)
* A list of the generated .nexe paths
"""
arch_path = arch_path or {}
nexes = []
for arch in ('x86_64', 'x86_32', 'arm'):
nexe_name = 'test_static_%s.nexe' % arch
src_nexe = os.path.join(DATA_DIR, nexe_name)
dst_nexe = os.path.join(self.tempdir, arch_path.get(arch, ''), nexe_name)
dst_dir = os.path.dirname(dst_nexe)
if not os.path.exists(dst_dir):
os.makedirs(dst_dir)
shutil.copy(src_nexe, dst_nexe)
nexes.append(dst_nexe)
nexes.sort()
nmf_utils = self._CreateNmfUtils(nexes, **kwargs)
nmf = json.loads(nmf_utils.GetJson())
return nmf, nexes
def _CreateDynamicAndStageDeps(self, arch_path=None, **kwargs):
"""Create dynamic .nexe files and put them in a temporary directory, with
their dependencies staged in the same directory.
Args:
arch_path: A dictionary mapping architecture to the directory to generate
the .nexe for the architecture in.
kwargs: Keyword arguments to pass through to create_nmf.NmfUtils
constructor.
Returns:
A tuple with 2 elements:
* The generated NMF as a dictionary (i.e. parsed by json.loads)
* A list of the generated .nexe paths
"""
arch_path = arch_path or {}
nexes = []
for arch in ('x86_64', 'x86_32'):
nexe_name = 'test_dynamic_%s.nexe' % arch
rel_nexe = os.path.join(arch_path.get(arch, ''), nexe_name)
arch_alt = 'i686' if arch == 'x86_32' else arch
nexe = self._CreateTestNexe(rel_nexe, arch_alt)
nexes.append(nexe)
nexes.sort()
nmf_utils = self._CreateNmfUtils(nexes, **kwargs)
nmf = json.loads(nmf_utils.GetJson())
nmf_utils.StageDependencies(self.tempdir)
return nmf, nexes
def _CreatePexe(self, **kwargs):
"""Copy test.pexe from the DATA_DIR to a temporary directory.
Args:
kwargs: Keyword arguments to pass through to create_nmf.NmfUtils
constructor.
Returns:
A tuple with 2 elements:
* The generated NMF as a dictionary (i.e. parsed by json.loads)
* A list of the generated .pexe paths
"""
pexe_name = 'test.pexe'
src_pexe = os.path.join(DATA_DIR, pexe_name)
dst_pexe = os.path.join(self.tempdir, pexe_name)
shutil.copy(src_pexe, dst_pexe)
pexes = [dst_pexe]
nmf_utils = self._CreateNmfUtils(pexes, **kwargs)
nmf = json.loads(nmf_utils.GetJson())
return nmf, pexes
def _CreateBitCode(self, **kwargs):
"""Copy test.bc from the DATA_DIR to a temporary directory.
Args:
kwargs: Keyword arguments to pass through to create_nmf.NmfUtils
constructor.
Returns:
A tuple with 2 elements:
* The generated NMF as a dictionary (i.e. parsed by json.loads)
* A list of the generated .bc paths
"""
bc_name = 'test.bc'
src_bc = os.path.join(DATA_DIR, bc_name)
dst_bc = os.path.join(self.tempdir, bc_name)
shutil.copy(src_bc, dst_bc)
bcs = [dst_bc]
nmf_utils = self._CreateNmfUtils(bcs, **kwargs)
nmf = json.loads(nmf_utils.GetJson())
return nmf, bcs
def assertManifestEquals(self, manifest, expected):
"""Compare two manifest dictionaries.
The input manifest is regenerated with all string keys and values being
    processed through StripSo, to remove the random hexadecimal characters at
the end of shared object names.
Args:
manifest: The generated manifest.
expected: The expected manifest.
"""
def StripSoCopyDict(d):
new_d = {}
for k, v in d.iteritems():
new_k = StripSo(k)
if isinstance(v, (str, unicode)):
new_v = StripSo(v)
elif type(v) is list:
new_v = v[:]
elif type(v) is dict:
new_v = StripSoCopyDict(v)
else:
# Assume that anything else can be copied directly.
new_v = v
new_d[new_k] = new_v
return new_d
self.assertEqual(StripSoCopyDict(manifest), expected)
def assertStagingEquals(self, expected):
"""Compare the contents of the temporary directory, to an expected
directory layout.
Args:
expected: The expected directory layout.
"""
all_files = []
for root, _, files in os.walk(self.tempdir):
rel_root_posix = PosixRelPath(root, self.tempdir)
for f in files:
path = posixpath.join(rel_root_posix, StripSo(f))
if path.startswith('./'):
path = path[2:]
all_files.append(path)
self.assertEqual(set(expected), set(all_files))
def testStatic(self):
nmf, _ = self._CreateStatic()
expected_manifest = {
'files': {},
'program': {
'x86-64': {'url': 'test_static_x86_64.nexe'},
'x86-32': {'url': 'test_static_x86_32.nexe'},
'arm': {'url': 'test_static_arm.nexe'},
}
}
self.assertManifestEquals(nmf, expected_manifest)
def testStaticWithPath(self):
arch_dir = {'x86_32': 'x86_32', 'x86_64': 'x86_64', 'arm': 'arm'}
nmf, _ = self._CreateStatic(arch_dir, nmf_root=self.tempdir)
expected_manifest = {
'files': {},
'program': {
'x86-32': {'url': 'x86_32/test_static_x86_32.nexe'},
'x86-64': {'url': 'x86_64/test_static_x86_64.nexe'},
'arm': {'url': 'arm/test_static_arm.nexe'},
}
}
self.assertManifestEquals(nmf, expected_manifest)
def testStaticWithPathNoNmfRoot(self):
# This case is not particularly useful, but it is similar to how create_nmf
# used to work. If there is no nmf_root given, all paths are relative to
# the first nexe passed on the commandline. I believe the assumption
# previously was that all .nexes would be in the same directory.
arch_dir = {'x86_32': 'x86_32', 'x86_64': 'x86_64', 'arm': 'arm'}
nmf, _ = self._CreateStatic(arch_dir)
expected_manifest = {
'files': {},
'program': {
'x86-32': {'url': '../x86_32/test_static_x86_32.nexe'},
'x86-64': {'url': '../x86_64/test_static_x86_64.nexe'},
'arm': {'url': 'test_static_arm.nexe'},
}
}
self.assertManifestEquals(nmf, expected_manifest)
def testStaticWithNexePrefix(self):
nmf, _ = self._CreateStatic(nexe_prefix='foo')
expected_manifest = {
'files': {},
'program': {
'x86-64': {'url': 'foo/test_static_x86_64.nexe'},
'x86-32': {'url': 'foo/test_static_x86_32.nexe'},
'arm': {'url': 'foo/test_static_arm.nexe'},
}
}
self.assertManifestEquals(nmf, expected_manifest)
def testDynamic(self):
nmf, nexes = self._CreateDynamicAndStageDeps()
expected_manifest = {
'files': {
'main.nexe': {
'x86-32': {'url': 'test_dynamic_x86_32.nexe'},
'x86-64': {'url': 'test_dynamic_x86_64.nexe'},
},
'libc.so': {
'x86-32': {'url': 'lib32/libc.so'},
'x86-64': {'url': 'lib64/libc.so'},
},
'libgcc_s.so': {
'x86-32': {'url': 'lib32/libgcc_s.so'},
'x86-64': {'url': 'lib64/libgcc_s.so'},
},
'libpthread.so': {
'x86-32': {'url': 'lib32/libpthread.so'},
'x86-64': {'url': 'lib64/libpthread.so'},
},
},
'program': {
'x86-32': {'url': 'lib32/runnable-ld.so'},
'x86-64': {'url': 'lib64/runnable-ld.so'},
}
}
expected_staging = [os.path.basename(f) for f in nexes]
expected_staging.extend([
'lib32/libc.so',
'lib32/libgcc_s.so',
'lib32/libpthread.so',
'lib32/runnable-ld.so',
'lib64/libc.so',
'lib64/libgcc_s.so',
'lib64/libpthread.so',
'lib64/runnable-ld.so'])
self.assertManifestEquals(nmf, expected_manifest)
self.assertStagingEquals(expected_staging)
def testDynamicWithPath(self):
arch_dir = {'x86_64': 'x86_64', 'x86_32': 'x86_32'}
nmf, nexes = self._CreateDynamicAndStageDeps(arch_dir,
nmf_root=self.tempdir)
expected_manifest = {
'files': {
'main.nexe': {
'x86-32': {'url': 'x86_32/test_dynamic_x86_32.nexe'},
'x86-64': {'url': 'x86_64/test_dynamic_x86_64.nexe'},
},
'libc.so': {
'x86-32': {'url': 'x86_32/lib32/libc.so'},
'x86-64': {'url': 'x86_64/lib64/libc.so'},
},
'libgcc_s.so': {
'x86-32': {'url': 'x86_32/lib32/libgcc_s.so'},
'x86-64': {'url': 'x86_64/lib64/libgcc_s.so'},
},
'libpthread.so': {
'x86-32': {'url': 'x86_32/lib32/libpthread.so'},
'x86-64': {'url': 'x86_64/lib64/libpthread.so'},
},
},
'program': {
'x86-32': {'url': 'x86_32/lib32/runnable-ld.so'},
'x86-64': {'url': 'x86_64/lib64/runnable-ld.so'},
}
}
expected_staging = [PosixRelPath(f, self.tempdir) for f in nexes]
expected_staging.extend([
'x86_32/lib32/libc.so',
'x86_32/lib32/libgcc_s.so',
'x86_32/lib32/libpthread.so',
'x86_32/lib32/runnable-ld.so',
'x86_64/lib64/libc.so',
'x86_64/lib64/libgcc_s.so',
'x86_64/lib64/libpthread.so',
'x86_64/lib64/runnable-ld.so'])
self.assertManifestEquals(nmf, expected_manifest)
self.assertStagingEquals(expected_staging)
def testDynamicWithRelPath(self):
"""Test that when the nmf root is a relative path that things work."""
arch_dir = {'x86_64': 'x86_64', 'x86_32': 'x86_32'}
old_path = os.getcwd()
try:
os.chdir(self.tempdir)
nmf, nexes = self._CreateDynamicAndStageDeps(arch_dir, nmf_root='')
expected_manifest = {
'files': {
'main.nexe': {
'x86-32': {'url': 'x86_32/test_dynamic_x86_32.nexe'},
'x86-64': {'url': 'x86_64/test_dynamic_x86_64.nexe'},
},
'libc.so': {
'x86-32': {'url': 'x86_32/lib32/libc.so'},
'x86-64': {'url': 'x86_64/lib64/libc.so'},
},
'libgcc_s.so': {
'x86-32': {'url': 'x86_32/lib32/libgcc_s.so'},
'x86-64': {'url': 'x86_64/lib64/libgcc_s.so'},
},
'libpthread.so': {
'x86-32': {'url': 'x86_32/lib32/libpthread.so'},
'x86-64': {'url': 'x86_64/lib64/libpthread.so'},
},
},
'program': {
'x86-32': {'url': 'x86_32/lib32/runnable-ld.so'},
'x86-64': {'url': 'x86_64/lib64/runnable-ld.so'},
}
}
expected_staging = [PosixRelPath(f, self.tempdir) for f in nexes]
expected_staging.extend([
'x86_32/lib32/libc.so',
'x86_32/lib32/libgcc_s.so',
'x86_32/lib32/libpthread.so',
'x86_32/lib32/runnable-ld.so',
'x86_64/lib64/libc.so',
'x86_64/lib64/libgcc_s.so',
'x86_64/lib64/libpthread.so',
'x86_64/lib64/runnable-ld.so'])
self.assertManifestEquals(nmf, expected_manifest)
self.assertStagingEquals(expected_staging)
finally:
os.chdir(old_path)
def testDynamicWithPathNoArchPrefix(self):
arch_dir = {'x86_64': 'x86_64', 'x86_32': 'x86_32'}
nmf, nexes = self._CreateDynamicAndStageDeps(arch_dir,
nmf_root=self.tempdir,
no_arch_prefix=True)
expected_manifest = {
'files': {
'main.nexe': {
'x86-32': {'url': 'x86_32/test_dynamic_x86_32.nexe'},
'x86-64': {'url': 'x86_64/test_dynamic_x86_64.nexe'},
},
'libc.so': {
'x86-32': {'url': 'x86_32/libc.so'},
'x86-64': {'url': 'x86_64/libc.so'},
},
'libgcc_s.so': {
'x86-32': {'url': 'x86_32/libgcc_s.so'},
'x86-64': {'url': 'x86_64/libgcc_s.so'},
},
'libpthread.so': {
'x86-32': {'url': 'x86_32/libpthread.so'},
'x86-64': {'url': 'x86_64/libpthread.so'},
},
},
'program': {
'x86-32': {'url': 'x86_32/runnable-ld.so'},
'x86-64': {'url': 'x86_64/runnable-ld.so'},
}
}
expected_staging = [PosixRelPath(f, self.tempdir) for f in nexes]
expected_staging.extend([
'x86_32/libc.so',
'x86_32/libgcc_s.so',
'x86_32/libpthread.so',
'x86_32/runnable-ld.so',
'x86_64/libc.so',
'x86_64/libgcc_s.so',
'x86_64/libpthread.so',
'x86_64/runnable-ld.so'])
self.assertManifestEquals(nmf, expected_manifest)
self.assertStagingEquals(expected_staging)
def testDynamicWithLibPrefix(self):
nmf, nexes = self._CreateDynamicAndStageDeps(lib_prefix='foo')
expected_manifest = {
'files': {
'main.nexe': {
'x86-32': {'url': 'test_dynamic_x86_32.nexe'},
'x86-64': {'url': 'test_dynamic_x86_64.nexe'},
},
'libc.so': {
'x86-32': {'url': 'foo/lib32/libc.so'},
'x86-64': {'url': 'foo/lib64/libc.so'},
},
'libgcc_s.so': {
'x86-32': {'url': 'foo/lib32/libgcc_s.so'},
'x86-64': {'url': 'foo/lib64/libgcc_s.so'},
},
'libpthread.so': {
'x86-32': {'url': 'foo/lib32/libpthread.so'},
'x86-64': {'url': 'foo/lib64/libpthread.so'},
},
},
'program': {
'x86-32': {'url': 'foo/lib32/runnable-ld.so'},
'x86-64': {'url': 'foo/lib64/runnable-ld.so'},
}
}
expected_staging = [PosixRelPath(f, self.tempdir) for f in nexes]
expected_staging.extend([
'foo/lib32/libc.so',
'foo/lib32/libgcc_s.so',
'foo/lib32/libpthread.so',
'foo/lib32/runnable-ld.so',
'foo/lib64/libc.so',
'foo/lib64/libgcc_s.so',
'foo/lib64/libpthread.so',
'foo/lib64/runnable-ld.so'])
self.assertManifestEquals(nmf, expected_manifest)
self.assertStagingEquals(expected_staging)
def testPexe(self):
nmf, _ = self._CreatePexe()
expected_manifest = {
'program': {
'portable': {
'pnacl-translate': {
'url': 'test.pexe'
}
}
}
}
self.assertManifestEquals(nmf, expected_manifest)
def testPexeOptLevel(self):
nmf, _ = self._CreatePexe(pnacl_optlevel=2)
expected_manifest = {
'program': {
'portable': {
'pnacl-translate': {
'url': 'test.pexe',
'optlevel': 2,
}
}
}
}
self.assertManifestEquals(nmf, expected_manifest)
def testBitCode(self):
nmf, _ = self._CreateBitCode(pnacl_debug_optlevel=0)
expected_manifest = {
'program': {
'portable': {
'pnacl-debug': {
'url': 'test.bc',
'optlevel': 0,
}
}
}
}
self.assertManifestEquals(nmf, expected_manifest)
if __name__ == '__main__':
unittest.main()
| bsd-3-clause | -7,886,083,085,232,335,000 | 31.335548 | 79 | 0.578239 | false |
adityacs/ansible | lib/ansible/modules/network/nxos/nxos_mtu.py | 11 | 11446 | #!/usr/bin/python
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
ANSIBLE_METADATA = {'status': ['preview'],
'supported_by': 'community',
'version': '1.0'}
DOCUMENTATION = '''
---
module: nxos_mtu
version_added: "2.2"
short_description: Manages MTU settings on Nexus switch.
description:
- Manages MTU settings on Nexus switch.
author:
- Jason Edelman (@jedelman8)
notes:
    - Either C(sysmtu) param is required or C(interface) AND C(mtu) params are required.
- C(state=absent) unconfigures a given MTU if that value is currently present.
options:
interface:
description:
- Full name of interface, i.e. Ethernet1/1.
required: false
default: null
mtu:
description:
- MTU for a specific interface.
required: false
default: null
sysmtu:
description:
- System jumbo MTU.
required: false
default: null
state:
description:
- Specify desired state of the resource.
required: false
default: present
choices: ['present','absent']
'''
EXAMPLES = '''
# Ensure system mtu is 9216
- nxos_mtu:
sysmtu: 9216
host: "{{ inventory_hostname }}"
username: "{{ un }}"
password: "{{ pwd }}"
# Config mtu on Eth1/1 (routed interface)
- nxos_mtu:
interface: Ethernet1/1
mtu: 1600
host: "{{ inventory_hostname }}"
username: "{{ un }}"
password: "{{ pwd }}"
# Config mtu on Eth1/3 (switched interface)
- nxos_mtu:
interface: Ethernet1/3
mtu: 9216
host: "{{ inventory_hostname }}"
username: "{{ un }}"
password: "{{ pwd }}"
# Unconfigure mtu on a given interface
- nxos_mtu:
interface: Ethernet1/3
mtu: 9216
host: "{{ inventory_hostname }}"
username: "{{ un }}"
password: "{{ pwd }}"
state: absent
'''
RETURN = '''
proposed:
description: k/v pairs of parameters passed into module
returned: always
type: dict
sample: {"mtu": "1700"}
existing:
    description:
        - k/v pairs of existing mtu/sysmtu on the interface/system
    returned: always
    type: dict
sample: {"mtu": "1600", "sysmtu": "9216"}
end_state:
description: k/v pairs of mtu/sysmtu values after module execution
returned: always
type: dict
sample: {"mtu": "1700", sysmtu": "9216"}
updates:
description: command sent to the device
returned: always
type: list
sample: ["interface vlan10", "mtu 1700"]
changed:
description: check to see if a change was made on the device
returned: always
type: boolean
sample: true
'''
from ansible.module_utils.nxos import load_config, run_commands
from ansible.module_utils.nxos import nxos_argument_spec, check_args
from ansible.module_utils.basic import AnsibleModule
def execute_show_command(command, module, command_type='cli_show'):
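    # Note (added for clarity): command_type is accepted for call-site
    # compatibility but is not used below; the output format is determined by
    # the transport and the ' | json' suffix appended for CLI show commands.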
if module.params['transport'] == 'cli':
if 'show run' not in command:
command += ' | json'
cmds = [command]
body = run_commands(module, cmds)
elif module.params['transport'] == 'nxapi':
cmds = [command]
body = run_commands(module, cmds)
return body
def flatten_list(command_lists):
flat_command_list = []
for command in command_lists:
if isinstance(command, list):
flat_command_list.extend(command)
else:
flat_command_list.append(command)
return flat_command_list
def get_mtu(interface, module):
command = 'show interface {0}'.format(interface)
mtu = {}
body = execute_show_command(command, module)
try:
mtu_table = body[0]['TABLE_interface']['ROW_interface']
mtu['mtu'] = str(
mtu_table.get('eth_mtu',
mtu_table.get('svi_mtu', 'unreadable_via_api')))
mtu['sysmtu'] = get_system_mtu(module)['sysmtu']
except KeyError:
mtu = {}
return mtu
def get_system_mtu(module):
command = 'show run all | inc jumbomtu'
sysmtu = ''
body = execute_show_command(command, module, command_type='cli_show_ascii')
if body:
sysmtu = str(body[0].split(' ')[-1])
try:
sysmtu = int(sysmtu)
    except ValueError:
sysmtu = ""
return dict(sysmtu=str(sysmtu))
def get_commands_config_mtu(delta, interface):
CONFIG_ARGS = {
'mtu': 'mtu {mtu}',
'sysmtu': 'system jumbomtu {sysmtu}',
}
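    # Illustrative example (not from the original source): for
    # delta = {'mtu': '9216'} and interface 'Ethernet1/1', this returns
    # ['interface Ethernet1/1', 'mtu 9216'].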
commands = []
for param, value in delta.items():
command = CONFIG_ARGS.get(param, 'DNE').format(**delta)
if command and command != 'DNE':
commands.append(command)
command = None
mtu_check = delta.get('mtu', None)
if mtu_check:
commands.insert(0, 'interface {0}'.format(interface))
return commands
def get_commands_remove_mtu(delta, interface):
CONFIG_ARGS = {
'mtu': 'no mtu {mtu}',
'sysmtu': 'no system jumbomtu {sysmtu}',
}
commands = []
for param, value in delta.items():
command = CONFIG_ARGS.get(param, 'DNE').format(**delta)
if command and command != 'DNE':
commands.append(command)
command = None
mtu_check = delta.get('mtu', None)
if mtu_check:
commands.insert(0, 'interface {0}'.format(interface))
return commands
def get_interface_type(interface):
if interface.upper().startswith('ET'):
return 'ethernet'
elif interface.upper().startswith('VL'):
return 'svi'
elif interface.upper().startswith('LO'):
return 'loopback'
elif interface.upper().startswith('MG'):
return 'management'
elif interface.upper().startswith('MA'):
return 'management'
elif interface.upper().startswith('PO'):
return 'portchannel'
else:
return 'unknown'
def is_default(interface, module):
command = 'show run interface {0}'.format(interface)
try:
body = execute_show_command(
command, module, command_type='cli_show_ascii')[0]
if body == 'DNE':
return 'DNE'
else:
raw_list = body.split('\n')
if raw_list[-1].startswith('interface'):
return True
else:
return False
    except KeyError:
return 'DNE'
def get_interface_mode(interface, intf_type, module):
command = 'show interface {0}'.format(interface)
mode = 'unknown'
interface_table = {}
body = execute_show_command(command, module)
try:
interface_table = body[0]['TABLE_interface']['ROW_interface']
except (KeyError, AttributeError, IndexError):
return mode
if intf_type in ['ethernet', 'portchannel']:
mode = str(interface_table.get('eth_mode', 'layer3'))
if mode in ['access', 'trunk']:
mode = 'layer2'
elif mode == 'routed':
mode = 'layer3'
elif intf_type in ['loopback', 'svi']:
mode = 'layer3'
return mode
def main():
argument_spec = dict(
mtu=dict(type='str'),
interface=dict(type='str'),
sysmtu=dict(type='str'),
state=dict(choices=['absent', 'present'], default='present'),
)
argument_spec.update(nxos_argument_spec)
module = AnsibleModule(argument_spec=argument_spec,
required_together=[['mtu', 'interface']],
supports_check_mode=True)
warnings = list()
check_args(module, warnings)
interface = module.params['interface']
mtu = module.params['mtu']
sysmtu = module.params['sysmtu']
state = module.params['state']
if sysmtu and (interface or mtu):
module.fail_json(msg='Proper usage-- either just use the sysmtu param '
'or use interface AND mtu params')
if interface:
intf_type = get_interface_type(interface)
if intf_type != 'ethernet':
if is_default(interface, module) == 'DNE':
module.fail_json(msg='Invalid interface. It does not exist '
'on the switch.')
existing = get_mtu(interface, module)
else:
existing = get_system_mtu(module)
if interface and mtu:
if intf_type == 'loopback':
module.fail_json(msg='Cannot set MTU for loopback interface.')
mode = get_interface_mode(interface, intf_type, module)
if mode == 'layer2':
if intf_type in ['ethernet', 'portchannel']:
if mtu not in [existing['sysmtu'], '1500']:
                    module.fail_json(msg='MTU on L2 interfaces can only be set'
                                         ' to the system default (1500) or the'
                                         ' existing sysmtu value, which is'
                                         ' {0}'.format(existing['sysmtu']))
elif mode == 'layer3':
if intf_type in ['ethernet', 'portchannel', 'svi']:
if ((int(mtu) < 576 or int(mtu) > 9216) or
((int(mtu) % 2) != 0)):
                    module.fail_json(msg='Invalid MTU for Layer 3 interface. '
                                         'It needs to be an even number '
                                         'between 576 and 9216.')
if sysmtu:
if ((int(sysmtu) < 576 or int(sysmtu) > 9216 or
((int(sysmtu) % 2) != 0))):
            module.fail_json(msg='Invalid MTU. It needs to be an even '
                                 'number between 576 and 9216.')
args = dict(mtu=mtu, sysmtu=sysmtu)
proposed = dict((k, v) for k, v in args.items() if v is not None)
delta = dict(set(proposed.items()).difference(existing.items()))
changed = False
end_state = existing
commands = []
if state == 'present':
if delta:
command = get_commands_config_mtu(delta, interface)
commands.append(command)
elif state == 'absent':
common = set(proposed.items()).intersection(existing.items())
if common:
command = get_commands_remove_mtu(dict(common), interface)
commands.append(command)
cmds = flatten_list(commands)
if cmds:
if module.check_mode:
module.exit_json(changed=True, commands=cmds)
else:
changed = True
load_config(module, cmds)
if interface:
end_state = get_mtu(interface, module)
else:
end_state = get_system_mtu(module)
if 'configure' in cmds:
cmds.pop(0)
results = {}
results['proposed'] = proposed
results['existing'] = existing
results['end_state'] = end_state
results['updates'] = cmds
results['changed'] = changed
results['warnings'] = warnings
module.exit_json(**results)
if __name__ == '__main__':
main()
| gpl-3.0 | 4,923,917,830,572,851,000 | 29.200528 | 85 | 0.578805 | false |
drufat/sympy | sympy/physics/mechanics/kane.py | 6 | 37043 | from __future__ import print_function, division
from sympy import zeros, Matrix, diff, solve_linear_system_LU, eye
from sympy.core.compatibility import range
from sympy.utilities import default_sort_key
from sympy.physics.vector import (ReferenceFrame, dynamicsymbols,
partial_velocity)
from sympy.physics.mechanics.particle import Particle
from sympy.physics.mechanics.rigidbody import RigidBody
from sympy.physics.mechanics.functions import (msubs, find_dynamicsymbols,
_f_list_parser)
from sympy.physics.mechanics.linearize import Linearizer
from sympy.utilities.exceptions import SymPyDeprecationWarning
from sympy.utilities.iterables import iterable
__all__ = ['KanesMethod']
class KanesMethod(object):
"""Kane's method object.
This object is used to do the "book-keeping" as you go through and form
equations of motion in the way Kane presents in:
Kane, T., Levinson, D. Dynamics Theory and Applications. 1985 McGraw-Hill
The attributes are for equations in the form [M] udot = forcing.
Attributes
==========
q, u : Matrix
Matrices of the generalized coordinates and speeds
bodylist : iterable
Iterable of Point and RigidBody objects in the system.
forcelist : iterable
Iterable of (Point, vector) or (ReferenceFrame, vector) tuples
describing the forces on the system.
auxiliary : Matrix
If applicable, the set of auxiliary Kane's
equations used to solve for non-contributing
forces.
mass_matrix : Matrix
The system's mass matrix
forcing : Matrix
The system's forcing vector
mass_matrix_full : Matrix
The "mass matrix" for the u's and q's
forcing_full : Matrix
The "forcing vector" for the u's and q's
Examples
========
This is a simple example for a one degree of freedom translational
spring-mass-damper.
In this example, we first need to do the kinematics.
This involves creating generalized speeds and coordinates and their
derivatives.
Then we create a point and set its velocity in a frame.
>>> from sympy import symbols
>>> from sympy.physics.mechanics import dynamicsymbols, ReferenceFrame
>>> from sympy.physics.mechanics import Point, Particle, KanesMethod
>>> q, u = dynamicsymbols('q u')
>>> qd, ud = dynamicsymbols('q u', 1)
>>> m, c, k = symbols('m c k')
>>> N = ReferenceFrame('N')
>>> P = Point('P')
>>> P.set_vel(N, u * N.x)
Next we need to arrange/store information in the way that KanesMethod
    requires. The kinematic differential equations need to be stored in an
    iterable. A list of forces/torques must be constructed, where each entry in
the list is a (Point, Vector) or (ReferenceFrame, Vector) tuple, where the
Vectors represent the Force or Torque.
Next a particle needs to be created, and it needs to have a point and mass
assigned to it.
Finally, a list of all bodies and particles needs to be created.
>>> kd = [qd - u]
>>> FL = [(P, (-k * q - c * u) * N.x)]
>>> pa = Particle('pa', P, m)
>>> BL = [pa]
Finally we can generate the equations of motion.
First we create the KanesMethod object and supply an inertial frame,
coordinates, generalized speeds, and the kinematic differential equations.
Additional quantities such as configuration and motion constraints,
dependent coordinates and speeds, and auxiliary speeds are also supplied
here (see the online documentation).
Next we form FR* and FR to complete: Fr + Fr* = 0.
We have the equations of motion at this point.
    It makes sense to rearrange them though, so we calculate the mass matrix and
the forcing terms, for E.o.M. in the form: [MM] udot = forcing, where MM is
the mass matrix, udot is a vector of the time derivatives of the
generalized speeds, and forcing is a vector representing "forcing" terms.
>>> KM = KanesMethod(N, q_ind=[q], u_ind=[u], kd_eqs=kd)
>>> (fr, frstar) = KM.kanes_equations(BL, FL)
>>> MM = KM.mass_matrix
>>> forcing = KM.forcing
>>> rhs = MM.inv() * forcing
>>> rhs
Matrix([[(-c*u(t) - k*q(t))/m]])
>>> KM.linearize(A_and_B=True, new_method=True)[0]
Matrix([
[ 0, 1],
[-k/m, -c/m]])
Please look at the documentation pages for more information on how to
perform linearization and how to deal with dependent coordinates & speeds,
    and how to deal with bringing non-contributing forces into evidence.
"""
def __init__(self, frame, q_ind, u_ind, kd_eqs=None, q_dependent=None,
configuration_constraints=None, u_dependent=None,
velocity_constraints=None, acceleration_constraints=None,
u_auxiliary=None):
"""Please read the online documentation. """
if not isinstance(frame, ReferenceFrame):
            raise TypeError('An inertial ReferenceFrame must be supplied')
self._inertial = frame
self._fr = None
self._frstar = None
self._forcelist = None
self._bodylist = None
self._initialize_vectors(q_ind, q_dependent, u_ind, u_dependent,
u_auxiliary)
self._initialize_kindiffeq_matrices(kd_eqs)
self._initialize_constraint_matrices(configuration_constraints,
velocity_constraints, acceleration_constraints)
def _initialize_vectors(self, q_ind, q_dep, u_ind, u_dep, u_aux):
"""Initialize the coordinate and speed vectors."""
none_handler = lambda x: Matrix(x) if x else Matrix()
# Initialize generalized coordinates
q_dep = none_handler(q_dep)
if not iterable(q_ind):
raise TypeError('Generalized coordinates must be an iterable.')
if not iterable(q_dep):
raise TypeError('Dependent coordinates must be an iterable.')
q_ind = Matrix(q_ind)
self._qdep = q_dep
self._q = Matrix([q_ind, q_dep])
self._qdot = self.q.diff(dynamicsymbols._t)
# Initialize generalized speeds
u_dep = none_handler(u_dep)
if not iterable(u_ind):
raise TypeError('Generalized speeds must be an iterable.')
if not iterable(u_dep):
raise TypeError('Dependent speeds must be an iterable.')
u_ind = Matrix(u_ind)
self._udep = u_dep
self._u = Matrix([u_ind, u_dep])
self._udot = self.u.diff(dynamicsymbols._t)
self._uaux = none_handler(u_aux)
def _initialize_constraint_matrices(self, config, vel, acc):
"""Initializes constraint matrices."""
# Define vector dimensions
o = len(self.u)
m = len(self._udep)
p = o - m
none_handler = lambda x: Matrix(x) if x else Matrix()
# Initialize configuration constraints
config = none_handler(config)
if len(self._qdep) != len(config):
raise ValueError('There must be an equal number of dependent '
'coordinates and configuration constraints.')
self._f_h = none_handler(config)
# Initialize velocity and acceleration constraints
vel = none_handler(vel)
acc = none_handler(acc)
if len(vel) != m:
raise ValueError('There must be an equal number of dependent '
'speeds and velocity constraints.')
if acc and (len(acc) != m):
raise ValueError('There must be an equal number of dependent '
'speeds and acceleration constraints.')
if vel:
u_zero = dict((i, 0) for i in self.u)
udot_zero = dict((i, 0) for i in self._udot)
            # When calling kanes_equations, another class instance will be
            # created if auxiliary u's are present. In this case, the
            # computation of kinematic differential equation matrices will be
            # skipped, as it was already done for the original KanesMethod
            # object, and the _qdot_u_map will not be available.
if self._qdot_u_map is not None:
vel = msubs(vel, self._qdot_u_map)
self._f_nh = msubs(vel, u_zero)
self._k_nh = (vel - self._f_nh).jacobian(self.u)
# If no acceleration constraints given, calculate them.
if not acc:
self._f_dnh = (self._k_nh.diff(dynamicsymbols._t) * self.u +
self._f_nh.diff(dynamicsymbols._t))
self._k_dnh = self._k_nh
else:
if self._qdot_u_map is not None:
acc = msubs(acc, self._qdot_u_map)
self._f_dnh = msubs(acc, udot_zero)
self._k_dnh = (acc - self._f_dnh).jacobian(self._udot)
# Form of non-holonomic constraints is B*u + C = 0.
# We partition B into independent and dependent columns:
# Ars is then -B_dep.inv() * B_ind, and it relates dependent speeds
# to independent speeds as: udep = Ars*uind, neglecting the C term.
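            # A worked sketch of that relation (standard linear algebra added
            # for clarity, not original commentary): from
            #     B_ind*u_ind + B_dep*u_dep + C = 0,
            # neglecting C gives u_dep = -B_dep**-1 * B_ind * u_ind, so
            # Ars = -B_dep**-1 * B_ind, computed below via an LU solve
            # rather than an explicit inverse.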
B_ind = self._k_nh[:, :p]
B_dep = self._k_nh[:, p:o]
self._Ars = -B_dep.LUsolve(B_ind)
else:
self._f_nh = Matrix()
self._k_nh = Matrix()
self._f_dnh = Matrix()
self._k_dnh = Matrix()
self._Ars = Matrix()
def _initialize_kindiffeq_matrices(self, kdeqs):
"""Initialize the kinematic differential equation matrices."""
if kdeqs:
if len(self.q) != len(kdeqs):
raise ValueError('There must be an equal number of kinematic '
'differential equations and coordinates.')
kdeqs = Matrix(kdeqs)
u = self.u
qdot = self._qdot
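            # Convention assumed below (sketch added for clarity): the
            # kinematic differential equations are linear in u and q', i.e.
            #     f_k + k_ku*u + k_kqdot*q' = 0,
            # so q' can be isolated and stored in _qdot_u_map.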
# Dictionaries setting things to zero
u_zero = dict((i, 0) for i in u)
uaux_zero = dict((i, 0) for i in self._uaux)
qdot_zero = dict((i, 0) for i in qdot)
f_k = msubs(kdeqs, u_zero, qdot_zero)
k_ku = (msubs(kdeqs, qdot_zero) - f_k).jacobian(u)
k_kqdot = (msubs(kdeqs, u_zero) - f_k).jacobian(qdot)
f_k = k_kqdot.LUsolve(f_k)
k_ku = k_kqdot.LUsolve(k_ku)
k_kqdot = eye(len(qdot))
self._qdot_u_map = solve_linear_system_LU(
Matrix([k_kqdot.T, -(k_ku * u + f_k).T]).T, qdot)
self._f_k = msubs(f_k, uaux_zero)
self._k_ku = msubs(k_ku, uaux_zero)
self._k_kqdot = k_kqdot
else:
self._qdot_u_map = None
self._f_k = Matrix()
self._k_ku = Matrix()
self._k_kqdot = Matrix()
def _form_fr(self, fl):
"""Form the generalized active force."""
        if fl is not None and (not iterable(fl) or len(fl) == 0):
            raise ValueError('Force pairs must be supplied in a '
                'non-empty iterable or None.')
N = self._inertial
# pull out relevant velocities for constructing partial velocities
vel_list, f_list = _f_list_parser(fl, N)
vel_list = [msubs(i, self._qdot_u_map) for i in vel_list]
# Fill Fr with dot product of partial velocities and forces
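        # For reference (standard Kane's method formula, added for clarity):
        #     Fr = sum_over_loads( (d v / d u_r) . F )
        # i.e. each generalized active force is the sum of the partial
        # velocities dotted with the corresponding applied forces/torques.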
o = len(self.u)
b = len(f_list)
FR = zeros(o, 1)
partials = partial_velocity(vel_list, self.u, N)
for i in range(o):
FR[i] = sum(partials[j][i] & f_list[j] for j in range(b))
# In case there are dependent speeds
if self._udep:
p = o - len(self._udep)
FRtilde = FR[:p, 0]
FRold = FR[p:o, 0]
FRtilde += self._Ars.T * FRold
FR = FRtilde
self._forcelist = fl
self._fr = FR
return FR
def _form_frstar(self, bl):
"""Form the generalized inertia force."""
if not iterable(bl):
raise TypeError('Bodies must be supplied in an iterable.')
t = dynamicsymbols._t
N = self._inertial
# Dicts setting things to zero
udot_zero = dict((i, 0) for i in self._udot)
uaux_zero = dict((i, 0) for i in self._uaux)
uauxdot = [diff(i, t) for i in self._uaux]
uauxdot_zero = dict((i, 0) for i in uauxdot)
# Dictionary of q' and q'' to u and u'
q_ddot_u_map = dict((k.diff(t), v.diff(t)) for (k, v) in
self._qdot_u_map.items())
q_ddot_u_map.update(self._qdot_u_map)
# Fill up the list of partials: format is a list with num elements
# equal to number of entries in body list. Each of these elements is a
# list - either of length 1 for the translational components of
# particles or of length 2 for the translational and rotational
# components of rigid bodies. The inner most list is the list of
# partial velocities.
def get_partial_velocity(body):
if isinstance(body, RigidBody):
vlist = [body.masscenter.vel(N), body.frame.ang_vel_in(N)]
elif isinstance(body, Particle):
vlist = [body.point.vel(N),]
else:
raise TypeError('The body list may only contain either '
'RigidBody or Particle as list elements.')
v = [msubs(vel, self._qdot_u_map) for vel in vlist]
return partial_velocity(v, self.u, N)
partials = [get_partial_velocity(body) for body in bl]
# Compute fr_star in two components:
# fr_star = -(MM*u' + nonMM)
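        # For reference (standard Kane's method, added for clarity): each
        # particle contributes an inertia force R* = -m*a, and each rigid
        # body additionally an inertia torque T* = -(I.alpha + w x (I.w));
        # Fr* sums the partial (angular) velocities dotted with R* and T*.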
o = len(self.u)
MM = zeros(o, o)
nonMM = zeros(o, 1)
zero_uaux = lambda expr: msubs(expr, uaux_zero)
zero_udot_uaux = lambda expr: msubs(msubs(expr, udot_zero), uaux_zero)
for i, body in enumerate(bl):
if isinstance(body, RigidBody):
M = zero_uaux(body.mass)
I = zero_uaux(body.central_inertia)
vel = zero_uaux(body.masscenter.vel(N))
omega = zero_uaux(body.frame.ang_vel_in(N))
acc = zero_udot_uaux(body.masscenter.acc(N))
inertial_force = (M.diff(t) * vel + M * acc)
inertial_torque = zero_uaux((I.dt(body.frame) & omega) +
msubs(I & body.frame.ang_acc_in(N), udot_zero) +
(omega ^ (I & omega)))
for j in range(o):
tmp_vel = zero_uaux(partials[i][0][j])
tmp_ang = zero_uaux(I & partials[i][1][j])
for k in range(o):
# translational
MM[j, k] += M * (tmp_vel & partials[i][0][k])
# rotational
MM[j, k] += (tmp_ang & partials[i][1][k])
nonMM[j] += inertial_force & partials[i][0][j]
nonMM[j] += inertial_torque & partials[i][1][j]
else:
M = zero_uaux(body.mass)
vel = zero_uaux(body.point.vel(N))
acc = zero_udot_uaux(body.point.acc(N))
inertial_force = (M.diff(t) * vel + M * acc)
for j in range(o):
temp = zero_uaux(partials[i][0][j])
for k in range(o):
MM[j, k] += M * (temp & partials[i][0][k])
nonMM[j] += inertial_force & partials[i][0][j]
# Compose fr_star out of MM and nonMM
MM = zero_uaux(msubs(MM, q_ddot_u_map))
nonMM = msubs(msubs(nonMM, q_ddot_u_map),
udot_zero, uauxdot_zero, uaux_zero)
fr_star = -(MM * msubs(Matrix(self._udot), uauxdot_zero) + nonMM)
# If there are dependent speeds, we need to find fr_star_tilde
if self._udep:
p = o - len(self._udep)
fr_star_ind = fr_star[:p, 0]
fr_star_dep = fr_star[p:o, 0]
fr_star = fr_star_ind + (self._Ars.T * fr_star_dep)
# Apply the same to MM
MMi = MM[:p, :]
MMd = MM[p:o, :]
MM = MMi + (self._Ars.T * MMd)
self._bodylist = bl
self._frstar = fr_star
self._k_d = MM
self._f_d = -msubs(self._fr + self._frstar, udot_zero)
return fr_star
def to_linearizer(self):
"""Returns an instance of the Linearizer class, initiated from the
data in the KanesMethod class. This may be more desirable than using
the linearize class method, as the Linearizer object will allow more
efficient recalculation (i.e. about varying operating points)."""
if (self._fr is None) or (self._frstar is None):
raise ValueError('Need to compute Fr, Fr* first.')
# Get required equation components. The Kane's method class breaks
# these into pieces. Need to reassemble
f_c = self._f_h
if self._f_nh and self._k_nh:
f_v = self._f_nh + self._k_nh*Matrix(self.u)
else:
f_v = Matrix()
if self._f_dnh and self._k_dnh:
f_a = self._f_dnh + self._k_dnh*Matrix(self._udot)
else:
f_a = Matrix()
# Dicts to sub to zero, for splitting up expressions
u_zero = dict((i, 0) for i in self.u)
ud_zero = dict((i, 0) for i in self._udot)
qd_zero = dict((i, 0) for i in self._qdot)
qd_u_zero = dict((i, 0) for i in Matrix([self._qdot, self.u]))
# Break the kinematic differential eqs apart into f_0 and f_1
f_0 = msubs(self._f_k, u_zero) + self._k_kqdot*Matrix(self._qdot)
f_1 = msubs(self._f_k, qd_zero) + self._k_ku*Matrix(self.u)
# Break the dynamic differential eqs into f_2 and f_3
f_2 = msubs(self._frstar, qd_u_zero)
f_3 = msubs(self._frstar, ud_zero) + self._fr
f_4 = zeros(len(f_2), 1)
# Get the required vector components
q = self.q
u = self.u
if self._qdep:
q_i = q[:-len(self._qdep)]
else:
q_i = q
q_d = self._qdep
if self._udep:
u_i = u[:-len(self._udep)]
else:
u_i = u
u_d = self._udep
# Form dictionary to set auxiliary speeds & their derivatives to 0.
uaux = self._uaux
uauxdot = uaux.diff(dynamicsymbols._t)
uaux_zero = dict((i, 0) for i in Matrix([uaux, uauxdot]))
# Checking for dynamic symbols outside the dynamic differential
# equations; throws error if there is.
sym_list = set(Matrix([q, self._qdot, u, self._udot, uaux, uauxdot]))
if any(find_dynamicsymbols(i, sym_list) for i in [self._k_kqdot,
self._k_ku, self._f_k, self._k_dnh, self._f_dnh, self._k_d]):
            raise ValueError('Cannot have dynamicsymbols outside dynamic '
                             'forcing vector.')
# Find all other dynamic symbols, forming the forcing vector r.
# Sort r to make it canonical.
r = list(find_dynamicsymbols(msubs(self._f_d, uaux_zero), sym_list))
r.sort(key=default_sort_key)
# Check for any derivatives of variables in r that are also found in r.
for i in r:
if diff(i, dynamicsymbols._t) in r:
                raise ValueError('Cannot have derivatives of specified '
                                 'quantities when linearizing forcing terms.')
return Linearizer(f_0, f_1, f_2, f_3, f_4, f_c, f_v, f_a, q, u, q_i,
q_d, u_i, u_d, r)
def linearize(self, **kwargs):
""" Linearize the equations of motion about a symbolic operating point.
If kwarg A_and_B is False (default), returns M, A, B, r for the
linearized form, M*[q', u']^T = A*[q_ind, u_ind]^T + B*r.
If kwarg A_and_B is True, returns A, B, r for the linearized form
dx = A*x + B*r, where x = [q_ind, u_ind]^T. Note that this is
computationally intensive if there are many symbolic parameters. For
this reason, it may be more desirable to use the default A_and_B=False,
returning M, A, and B. Values may then be substituted in to these
matrices, and the state space form found as
A = P.T*M.inv()*A, B = P.T*M.inv()*B, where P = Linearizer.perm_mat.
In both cases, r is found as all dynamicsymbols in the equations of
motion that are not part of q, u, q', or u'. They are sorted in
canonical form.
The operating points may be also entered using the ``op_point`` kwarg.
        This takes a dictionary of {symbol: value}, or an iterable of such
        dictionaries. The values may be numeric or symbolic. The more values
you can specify beforehand, the faster this computation will run.
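        A minimal illustrative sketch (reusing ``KM`` from the class
        docstring; this example is an addition for clarity, not part of the
        original documentation)::
        >>> M, A, B, r = KM.linearize(new_method=True)  # doctest: +SKIP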
As part of the deprecation cycle, the new method will not be used unless
the kwarg ``new_method`` is set to True. If the kwarg is missing, or set
to false, the old linearization method will be used. After next release
the need for this kwarg will be removed.
For more documentation, please see the ``Linearizer`` class."""
if 'new_method' not in kwargs or not kwargs['new_method']:
# User is still using old code.
SymPyDeprecationWarning('The linearize class method has changed '
'to a new interface, the old method is deprecated. To '
'use the new method, set the kwarg `new_method=True`. '
'For more information, read the docstring '
'of `linearize`.').warn()
return self._old_linearize()
# Remove the new method flag, before passing kwargs to linearize
kwargs.pop('new_method')
linearizer = self.to_linearizer()
result = linearizer.linearize(**kwargs)
return result + (linearizer.r,)
def _old_linearize(self):
"""Old method to linearize the equations of motion. Returns a tuple of
(f_lin_A, f_lin_B, y) for forming [M]qudot = [f_lin_A]qu + [f_lin_B]y.
Deprecated in favor of new method using Linearizer class. Please change
your code to use the new `linearize` method."""
if (self._fr is None) or (self._frstar is None):
raise ValueError('Need to compute Fr, Fr* first.')
        # Note that this is now unnecessary, and it should never be
# encountered; I still think it should be in here in case the user
# manually sets these matrices incorrectly.
for i in self.q:
if self._k_kqdot.diff(i) != 0 * self._k_kqdot:
raise ValueError('Matrix K_kqdot must not depend on any q.')
t = dynamicsymbols._t
uaux = self._uaux
uauxdot = [diff(i, t) for i in uaux]
# dictionary of auxiliary speeds & derivatives which are equal to zero
subdict = dict(zip(uaux[:] + uauxdot[:],
[0] * (len(uaux) + len(uauxdot))))
# Checking for dynamic symbols outside the dynamic differential
# equations; throws error if there is.
insyms = set(self.q[:] + self._qdot[:] + self.u[:] + self._udot[:] +
uaux[:] + uauxdot)
if any(find_dynamicsymbols(i, insyms) for i in [self._k_kqdot,
self._k_ku, self._f_k, self._k_dnh, self._f_dnh, self._k_d]):
            raise ValueError('Cannot have dynamicsymbols outside dynamic '
                             'forcing vector.')
other_dyns = list(find_dynamicsymbols(msubs(self._f_d, subdict), insyms))
# make it canonically ordered so the jacobian is canonical
other_dyns.sort(key=default_sort_key)
for i in other_dyns:
if diff(i, dynamicsymbols._t) in other_dyns:
raise ValueError('Cannot have derivatives of specified '
'quantities when linearizing forcing terms.')
o = len(self.u) # number of speeds
n = len(self.q) # number of coordinates
l = len(self._qdep) # number of configuration constraints
m = len(self._udep) # number of motion constraints
qi = Matrix(self.q[: n - l]) # independent coords
qd = Matrix(self.q[n - l: n]) # dependent coords; could be empty
ui = Matrix(self.u[: o - m]) # independent speeds
ud = Matrix(self.u[o - m: o]) # dependent speeds; could be empty
qdot = Matrix(self._qdot) # time derivatives of coordinates
# with equations in the form MM udot = forcing, expand that to:
# MM_full [q,u].T = forcing_full. This combines coordinates and
# speeds together for the linearization, which is necessary for the
# linearization process, due to dependent coordinates. f1 is the rows
# from the kinematic differential equations, f2 is the rows from the
# dynamic differential equations (and differentiated non-holonomic
# constraints).
f1 = self._k_ku * Matrix(self.u) + self._f_k
f2 = self._f_d
# Only want to do this if these matrices have been filled in, which
# occurs when there are dependent speeds
if m != 0:
f2 = self._f_d.col_join(self._f_dnh)
fnh = self._f_nh + self._k_nh * Matrix(self.u)
f1 = msubs(f1, subdict)
f2 = msubs(f2, subdict)
fh = msubs(self._f_h, subdict)
fku = msubs(self._k_ku * Matrix(self.u), subdict)
fkf = msubs(self._f_k, subdict)
# In the code below, we are applying the chain rule by hand on these
# things. All the matrices have been changed into vectors (by
# multiplying the dynamic symbols which it is paired with), so we can
# take the jacobian of them. The basic operation is take the jacobian
# of the f1, f2 vectors wrt all of the q's and u's. f1 is a function of
# q, u, and t; f2 is a function of q, qdot, u, and t. In the code
# below, we are not considering perturbations in t. So if f1 is a
# function of the q's, u's but some of the q's or u's could be
# dependent on other q's or u's (qd's might be dependent on qi's, ud's
# might be dependent on ui's or qi's), so what we do is take the
# jacobian of the f1 term wrt qi's and qd's, the jacobian wrt the qd's
# gets multiplied by the jacobian of qd wrt qi, this is extended for
# the ud's as well. dqd_dqi is computed by taking a taylor expansion of
# the holonomic constraint equations about q*, treating q* - q as dq,
        # separating into dqd (dependent q's) and dqi (independent q's) and
        # then rearranging for dqd/dqi. This is again extended for the speeds.
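        # Sketch of the implicit-function step described above (added for
        # clarity): expanding the holonomic constraints f_h(qi, qd) = 0 to
        # first order gives fh_jac_qi*dqi + fh_jac_qd*dqd = 0, hence
        #     dqd/dqi = -fh_jac_qd**-1 * fh_jac_qi,
        # matching dqd_dqi = -fh_jac_qd.LUsolve(fh_jac_qi) in the first case
        # below.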
# First case: configuration and motion constraints
if (l != 0) and (m != 0):
fh_jac_qi = fh.jacobian(qi)
fh_jac_qd = fh.jacobian(qd)
fnh_jac_qi = fnh.jacobian(qi)
fnh_jac_qd = fnh.jacobian(qd)
fnh_jac_ui = fnh.jacobian(ui)
fnh_jac_ud = fnh.jacobian(ud)
fku_jac_qi = fku.jacobian(qi)
fku_jac_qd = fku.jacobian(qd)
fku_jac_ui = fku.jacobian(ui)
fku_jac_ud = fku.jacobian(ud)
fkf_jac_qi = fkf.jacobian(qi)
fkf_jac_qd = fkf.jacobian(qd)
f1_jac_qi = f1.jacobian(qi)
f1_jac_qd = f1.jacobian(qd)
f1_jac_ui = f1.jacobian(ui)
f1_jac_ud = f1.jacobian(ud)
f2_jac_qi = f2.jacobian(qi)
f2_jac_qd = f2.jacobian(qd)
f2_jac_ui = f2.jacobian(ui)
f2_jac_ud = f2.jacobian(ud)
f2_jac_qdot = f2.jacobian(qdot)
dqd_dqi = - fh_jac_qd.LUsolve(fh_jac_qi)
dud_dqi = fnh_jac_ud.LUsolve(fnh_jac_qd * dqd_dqi - fnh_jac_qi)
dud_dui = - fnh_jac_ud.LUsolve(fnh_jac_ui)
dqdot_dui = - self._k_kqdot.inv() * (fku_jac_ui +
fku_jac_ud * dud_dui)
dqdot_dqi = - self._k_kqdot.inv() * (fku_jac_qi + fkf_jac_qi +
(fku_jac_qd + fkf_jac_qd) * dqd_dqi + fku_jac_ud * dud_dqi)
f1_q = f1_jac_qi + f1_jac_qd * dqd_dqi + f1_jac_ud * dud_dqi
f1_u = f1_jac_ui + f1_jac_ud * dud_dui
f2_q = (f2_jac_qi + f2_jac_qd * dqd_dqi + f2_jac_qdot * dqdot_dqi +
f2_jac_ud * dud_dqi)
f2_u = f2_jac_ui + f2_jac_ud * dud_dui + f2_jac_qdot * dqdot_dui
# Second case: configuration constraints only
elif l != 0:
dqd_dqi = - fh.jacobian(qd).LUsolve(fh.jacobian(qi))
dqdot_dui = - self._k_kqdot.inv() * fku.jacobian(ui)
dqdot_dqi = - self._k_kqdot.inv() * (fku.jacobian(qi) +
fkf.jacobian(qi) + (fku.jacobian(qd) + fkf.jacobian(qd)) *
dqd_dqi)
f1_q = (f1.jacobian(qi) + f1.jacobian(qd) * dqd_dqi)
f1_u = f1.jacobian(ui)
f2_jac_qdot = f2.jacobian(qdot)
            f2_q = (f2.jacobian(qi) + f2.jacobian(qd) * dqd_dqi +
                f2_jac_qdot * dqdot_dqi)
f2_u = f2.jacobian(ui) + f2_jac_qdot * dqdot_dui
# Third case: motion constraints only
elif m != 0:
dud_dqi = fnh.jacobian(ud).LUsolve(- fnh.jacobian(qi))
dud_dui = - fnh.jacobian(ud).LUsolve(fnh.jacobian(ui))
dqdot_dui = - self._k_kqdot.inv() * (fku.jacobian(ui) +
fku.jacobian(ud) * dud_dui)
dqdot_dqi = - self._k_kqdot.inv() * (fku.jacobian(qi) +
fkf.jacobian(qi) + fku.jacobian(ud) * dud_dqi)
f1_jac_ud = f1.jacobian(ud)
f2_jac_qdot = f2.jacobian(qdot)
f2_jac_ud = f2.jacobian(ud)
f1_q = f1.jacobian(qi) + f1_jac_ud * dud_dqi
f1_u = f1.jacobian(ui) + f1_jac_ud * dud_dui
f2_q = (f2.jacobian(qi) + f2_jac_qdot * dqdot_dqi + f2_jac_ud
* dud_dqi)
f2_u = (f2.jacobian(ui) + f2_jac_ud * dud_dui + f2_jac_qdot *
dqdot_dui)
# Fourth case: No constraints
else:
dqdot_dui = - self._k_kqdot.inv() * fku.jacobian(ui)
dqdot_dqi = - self._k_kqdot.inv() * (fku.jacobian(qi) +
fkf.jacobian(qi))
f1_q = f1.jacobian(qi)
f1_u = f1.jacobian(ui)
f2_jac_qdot = f2.jacobian(qdot)
f2_q = f2.jacobian(qi) + f2_jac_qdot * dqdot_dqi
f2_u = f2.jacobian(ui) + f2_jac_qdot * dqdot_dui
f_lin_A = -(f1_q.row_join(f1_u)).col_join(f2_q.row_join(f2_u))
if other_dyns:
f1_oths = f1.jacobian(other_dyns)
f2_oths = f2.jacobian(other_dyns)
f_lin_B = -f1_oths.col_join(f2_oths)
else:
f_lin_B = Matrix()
return (f_lin_A, f_lin_B, Matrix(other_dyns))
def kanes_equations(self, bodies, loads=None):
""" Method to form Kane's equations, Fr + Fr* = 0.
Returns (Fr, Fr*). In the case where auxiliary generalized speeds are
present (say, s auxiliary speeds, o generalized speeds, and m motion
constraints) the length of the returned vectors will be o - m + s in
length. The first o - m equations will be the constrained Kane's
equations, then the s auxiliary Kane's equations. These auxiliary
        equations can be accessed with the auxiliary_eqs property.
Parameters
==========
bodies : iterable
An iterable of all RigidBody's and Particle's in the system.
A system must have at least one body.
loads : iterable
Takes in an iterable of (Particle, Vector) or (ReferenceFrame, Vector)
tuples which represent the force at a point or torque on a frame.
Must be either a non-empty iterable of tuples or None which corresponds
            to a system with no applied loads.
"""
        if (bodies is None and loads is not None) or isinstance(bodies[0], tuple):
# This switches the order if they use the old way.
bodies, loads = loads, bodies
SymPyDeprecationWarning(value='The API for kanes_equations() has changed such '
                    'that the loads (forces and torques) are now the second argument '
                    'and are optional, with None being the default.',
                    feature='The kanes_equations() argument order',
                    useinstead='switch the argument order to update your code. For example: '
                    'kanes_equations(loads, bodies) -> kanes_equations(bodies, loads).',
issue=10945, deprecated_since_version="1.1").warn()
if not self._k_kqdot:
raise AttributeError('Create an instance of KanesMethod with '
'kinematic differential equations to use this method.')
fr = self._form_fr(loads)
frstar = self._form_frstar(bodies)
if self._uaux:
if not self._udep:
km = KanesMethod(self._inertial, self.q, self._uaux,
u_auxiliary=self._uaux)
else:
km = KanesMethod(self._inertial, self.q, self._uaux,
u_auxiliary=self._uaux, u_dependent=self._udep,
velocity_constraints=(self._k_nh * self.u +
self._f_nh))
km._qdot_u_map = self._qdot_u_map
self._km = km
fraux = km._form_fr(loads)
frstaraux = km._form_frstar(bodies)
self._aux_eq = fraux + frstaraux
self._fr = fr.col_join(fraux)
self._frstar = frstar.col_join(frstaraux)
return (self._fr, self._frstar)
def rhs(self, inv_method=None):
"""Returns the system's equations of motion in first order form. The
output is the right hand side of::
x' = |q'| =: f(q, u, r, p, t)
|u'|
The right hand side is what is needed by most numerical ODE
integrators.
Parameters
==========
inv_method : str
The specific sympy inverse matrix calculation method to use. For a
list of valid methods, see
:meth:`~sympy.matrices.matrices.MatrixBase.inv`
"""
rhs = zeros(len(self.q) + len(self.u), c=1)
kdes = self.kindiffdict()
for i, q_i in enumerate(self.q):
rhs[i] = kdes[q_i.diff()]
if inv_method is None:
rhs[len(self.q):, 0] = self.mass_matrix.LUsolve(self.forcing)
else:
rhs[len(self.q):, 0] = (self.mass_matrix.inv(inv_method,
try_block_diag=True) *
self.forcing)
return rhs
def kindiffdict(self):
"""Returns a dictionary mapping q' to u."""
if not self._qdot_u_map:
raise AttributeError('Create an instance of KanesMethod with '
'kinematic differential equations to use this method.')
return self._qdot_u_map
@property
def auxiliary_eqs(self):
"""A matrix containing the auxiliary equations."""
if not self._fr or not self._frstar:
raise ValueError('Need to compute Fr, Fr* first.')
if not self._uaux:
raise ValueError('No auxiliary speeds have been declared.')
return self._aux_eq
@property
def mass_matrix(self):
"""The mass matrix of the system."""
if not self._fr or not self._frstar:
raise ValueError('Need to compute Fr, Fr* first.')
return Matrix([self._k_d, self._k_dnh])
@property
def mass_matrix_full(self):
"""The mass matrix of the system, augmented by the kinematic
differential equations."""
if not self._fr or not self._frstar:
raise ValueError('Need to compute Fr, Fr* first.')
o = len(self.u)
n = len(self.q)
return ((self._k_kqdot).row_join(zeros(n, o))).col_join((zeros(o,
n)).row_join(self.mass_matrix))
@property
def forcing(self):
"""The forcing vector of the system."""
if not self._fr or not self._frstar:
raise ValueError('Need to compute Fr, Fr* first.')
return -Matrix([self._f_d, self._f_dnh])
@property
def forcing_full(self):
"""The forcing vector of the system, augmented by the kinematic
differential equations."""
if not self._fr or not self._frstar:
raise ValueError('Need to compute Fr, Fr* first.')
f1 = self._k_ku * Matrix(self.u) + self._f_k
return -Matrix([f1, self._f_d, self._f_dnh])
@property
def q(self):
return self._q
@property
def u(self):
return self._u
@property
def bodylist(self):
return self._bodylist
@property
def forcelist(self):
return self._forcelist
| bsd-3-clause | -3,747,339,343,442,382,000 | 43.151371 | 91 | 0.564668 | false |
kevin8909/xjerp | openerp/addons/account/account_invoice.py | 9 | 96259 | # -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2010 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
import time
from lxml import etree
import openerp.addons.decimal_precision as dp
import openerp.exceptions
from openerp import netsvc
from openerp import pooler
from openerp.osv import fields, osv, orm
from openerp.tools.translate import _
class account_invoice(osv.osv):
def _amount_all(self, cr, uid, ids, name, args, context=None):
res = {}
for invoice in self.browse(cr, uid, ids, context=context):
res[invoice.id] = {
'amount_untaxed': 0.0,
'amount_tax': 0.0,
'amount_total': 0.0
}
for line in invoice.invoice_line:
res[invoice.id]['amount_untaxed'] += line.price_subtotal
for line in invoice.tax_line:
res[invoice.id]['amount_tax'] += line.amount
res[invoice.id]['amount_total'] = res[invoice.id]['amount_tax'] + res[invoice.id]['amount_untaxed']
return res
def _get_journal(self, cr, uid, context=None):
if context is None:
context = {}
type_inv = context.get('type', 'out_invoice')
user = self.pool.get('res.users').browse(cr, uid, uid, context=context)
company_id = context.get('company_id', user.company_id.id)
type2journal = {'out_invoice': 'sale', 'in_invoice': 'purchase', 'out_refund': 'sale_refund', 'in_refund': 'purchase_refund'}
journal_obj = self.pool.get('account.journal')
domain = [('company_id', '=', company_id)]
if isinstance(type_inv, list):
domain.append(('type', 'in', [type2journal.get(type) for type in type_inv if type2journal.get(type)]))
else:
domain.append(('type', '=', type2journal.get(type_inv, 'sale')))
res = journal_obj.search(cr, uid, domain, limit=1)
return res and res[0] or False
def _get_currency(self, cr, uid, context=None):
res = False
journal_id = self._get_journal(cr, uid, context=context)
if journal_id:
journal = self.pool.get('account.journal').browse(cr, uid, journal_id, context=context)
res = journal.currency and journal.currency.id or journal.company_id.currency_id.id
return res
def _get_journal_analytic(self, cr, uid, type_inv, context=None):
type2journal = {'out_invoice': 'sale', 'in_invoice': 'purchase', 'out_refund': 'sale', 'in_refund': 'purchase'}
tt = type2journal.get(type_inv, 'sale')
result = self.pool.get('account.analytic.journal').search(cr, uid, [('type','=',tt)], context=context)
if not result:
raise osv.except_osv(_('No Analytic Journal!'),_("You must define an analytic journal of type '%s'!") % (tt,))
return result[0]
def _get_type(self, cr, uid, context=None):
if context is None:
context = {}
return context.get('type', 'out_invoice')
def _reconciled(self, cr, uid, ids, name, args, context=None):
res = {}
wf_service = netsvc.LocalService("workflow")
for inv in self.browse(cr, uid, ids, context=context):
res[inv.id] = self.test_paid(cr, uid, [inv.id])
if not res[inv.id] and inv.state == 'paid':
wf_service.trg_validate(uid, 'account.invoice', inv.id, 'open_test', cr)
return res
def _get_reference_type(self, cr, uid, context=None):
return [('none', _('Free Reference'))]
def _amount_residual(self, cr, uid, ids, name, args, context=None):
"""Function of the field residua. It computes the residual amount (balance) for each invoice"""
if context is None:
context = {}
ctx = context.copy()
result = {}
currency_obj = self.pool.get('res.currency')
for invoice in self.browse(cr, uid, ids, context=context):
nb_inv_in_partial_rec = max_invoice_id = 0
result[invoice.id] = 0.0
if invoice.move_id:
for aml in invoice.move_id.line_id:
if aml.account_id.type in ('receivable','payable'):
if aml.currency_id and aml.currency_id.id == invoice.currency_id.id:
result[invoice.id] += aml.amount_residual_currency
else:
ctx['date'] = aml.date
result[invoice.id] += currency_obj.compute(cr, uid, aml.company_id.currency_id.id, invoice.currency_id.id, aml.amount_residual, context=ctx)
if aml.reconcile_partial_id.line_partial_ids:
#we check if the invoice is partially reconciled and if there are other invoices
#involved in this partial reconciliation (and we sum these invoices)
for line in aml.reconcile_partial_id.line_partial_ids:
if line.invoice and invoice.type == line.invoice.type:
nb_inv_in_partial_rec += 1
#store the max invoice id as for this invoice we will make a balance instead of a simple division
max_invoice_id = max(max_invoice_id, line.invoice.id)
if nb_inv_in_partial_rec:
                    #if there are several invoices in a partial reconciliation, we split the residual by the number
                    #of invoices to have a sum of residual amounts that matches the partner balance
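                    # Illustrative numeric example (not from the original
                    # comments): a residual of 100.01 split across 3 invoices
                    # rounds to 33.34 for the first two, while the invoice
                    # with the highest id gets 100.01 - 2*33.34 = 33.33 so
                    # the total still matches the partner balance.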
new_value = currency_obj.round(cr, uid, invoice.currency_id, result[invoice.id] / nb_inv_in_partial_rec)
if invoice.id == max_invoice_id:
#if it's the last the invoice of the bunch of invoices partially reconciled together, we make a
#balance to avoid rounding errors
result[invoice.id] = result[invoice.id] - ((nb_inv_in_partial_rec - 1) * new_value)
else:
result[invoice.id] = new_value
            #prevent the residual amount on the invoice from being less than 0
result[invoice.id] = max(result[invoice.id], 0.0)
return result
    # Return the journal items related to the payments reconciled with this invoice,
    # i.e. the ids of partial and total payments related to the selected invoices.
def _get_lines(self, cr, uid, ids, name, arg, context=None):
res = {}
for invoice in self.browse(cr, uid, ids, context=context):
id = invoice.id
res[id] = []
if not invoice.move_id:
continue
data_lines = [x for x in invoice.move_id.line_id if x.account_id.id == invoice.account_id.id]
partial_ids = []
for line in data_lines:
ids_line = []
if line.reconcile_id:
ids_line = line.reconcile_id.line_id
elif line.reconcile_partial_id:
ids_line = line.reconcile_partial_id.line_partial_ids
l = map(lambda x: x.id, ids_line)
partial_ids.append(line.id)
                res[id] = [x for x in l if x <> line.id and x not in partial_ids]
return res
def _get_invoice_line(self, cr, uid, ids, context=None):
result = {}
for line in self.pool.get('account.invoice.line').browse(cr, uid, ids, context=context):
result[line.invoice_id.id] = True
return result.keys()
def _get_invoice_tax(self, cr, uid, ids, context=None):
result = {}
for tax in self.pool.get('account.invoice.tax').browse(cr, uid, ids, context=context):
result[tax.invoice_id.id] = True
return result.keys()
def _compute_lines(self, cr, uid, ids, name, args, context=None):
result = {}
for invoice in self.browse(cr, uid, ids, context=context):
src = []
lines = []
if invoice.move_id:
for m in invoice.move_id.line_id:
temp_lines = []
if m.reconcile_id:
temp_lines = map(lambda x: x.id, m.reconcile_id.line_id)
elif m.reconcile_partial_id:
temp_lines = map(lambda x: x.id, m.reconcile_partial_id.line_partial_ids)
lines += [x for x in temp_lines if x not in lines]
src.append(m.id)
lines = filter(lambda x: x not in src, lines)
result[invoice.id] = lines
return result
def _get_invoice_from_line(self, cr, uid, ids, context=None):
move = {}
for line in self.pool.get('account.move.line').browse(cr, uid, ids, context=context):
if line.reconcile_partial_id:
for line2 in line.reconcile_partial_id.line_partial_ids:
move[line2.move_id.id] = True
if line.reconcile_id:
for line2 in line.reconcile_id.line_id:
move[line2.move_id.id] = True
invoice_ids = []
if move:
invoice_ids = self.pool.get('account.invoice').search(cr, uid, [('move_id','in',move.keys())], context=context)
return invoice_ids
def _get_invoice_from_reconcile(self, cr, uid, ids, context=None):
move = {}
for r in self.pool.get('account.move.reconcile').browse(cr, uid, ids, context=context):
for line in r.line_partial_ids:
move[line.move_id.id] = True
for line in r.line_id:
move[line.move_id.id] = True
invoice_ids = []
if move:
invoice_ids = self.pool.get('account.invoice').search(cr, uid, [('move_id','in',move.keys())], context=context)
return invoice_ids
_name = "account.invoice"
_inherit = ['mail.thread']
_description = 'Invoice'
_order = "id desc"
_track = {
'type': {
},
'state': {
'account.mt_invoice_paid': lambda self, cr, uid, obj, ctx=None: obj['state'] == 'paid' and obj['type'] in ('out_invoice', 'out_refund'),
'account.mt_invoice_validated': lambda self, cr, uid, obj, ctx=None: obj['state'] == 'open' and obj['type'] in ('out_invoice', 'out_refund'),
},
}
_columns = {
'name': fields.char('Description', size=64, select=True, readonly=True, states={'draft':[('readonly',False)]}),
'origin': fields.char('Source Document', size=64, help="Reference of the document that produced this invoice.", readonly=True, states={'draft':[('readonly',False)]}),
'supplier_invoice_number': fields.char('Supplier Invoice Number', size=64, help="The reference of this invoice as provided by the supplier.", readonly=True, states={'draft':[('readonly',False)]}),
'type': fields.selection([
('out_invoice','Customer Invoice'),
('in_invoice','Supplier Invoice'),
('out_refund','Customer Refund'),
('in_refund','Supplier Refund'),
],'Type', readonly=True, select=True, change_default=True, track_visibility='always'),
'number': fields.related('move_id','name', type='char', readonly=True, size=64, relation='account.move', store=True, string='Number'),
'internal_number': fields.char('Invoice Number', size=32, readonly=True, help="Unique number of the invoice, computed automatically when the invoice is created."),
'reference': fields.char('Invoice Reference', size=64, help="The partner reference of this invoice."),
'reference_type': fields.selection(_get_reference_type, 'Payment Reference',
required=True, readonly=True, states={'draft':[('readonly',False)]}),
'comment': fields.text('Additional Information'),
'state': fields.selection([
('draft','Draft'),
('proforma','Pro-forma'),
('proforma2','Pro-forma'),
('open','Open'),
('paid','Paid'),
('cancel','Cancelled'),
],'Status', select=True, readonly=True, track_visibility='onchange',
            help=' * The \'Draft\' status is used when a user is encoding a new and unconfirmed Invoice. \
            \n* The \'Pro-forma\' status is used when the invoice is in Pro-forma status; the invoice does not have an invoice number yet. \
            \n* The \'Open\' status is used when the user creates the invoice; an invoice number is generated. It stays in the open status until the user pays the invoice. \
            \n* The \'Paid\' status is set automatically when the invoice is paid. Its related journal entries may or may not be reconciled. \
            \n* The \'Cancelled\' status is used when the user cancels the invoice.'),
'sent': fields.boolean('Sent', readonly=True, help="It indicates that the invoice has been sent."),
'date_invoice': fields.date('Invoice Date', readonly=True, states={'draft':[('readonly',False)]}, select=True, help="Keep empty to use the current date"),
'date_due': fields.date('Due Date', readonly=True, states={'draft':[('readonly',False)]}, select=True,
help="If you use payment terms, the due date will be computed automatically at the generation "\
"of accounting entries. The payment term may compute several due dates, for example 50% now and 50% in one month, but if you want to force a due date, make sure that the payment term is not set on the invoice. If you keep the payment term and the due date empty, it means direct payment."),
'partner_id': fields.many2one('res.partner', 'Partner', change_default=True, readonly=True, required=True, states={'draft':[('readonly',False)]}, track_visibility='always'),
'payment_term': fields.many2one('account.payment.term', 'Payment Terms',readonly=True, states={'draft':[('readonly',False)]},
help="If you use payment terms, the due date will be computed automatically at the generation "\
"of accounting entries. If you keep the payment term and the due date empty, it means direct payment. "\
"The payment term may compute several due dates, for example 50% now, 50% in one month."),
'period_id': fields.many2one('account.period', 'Force Period', domain=[('state','<>','done')], help="Keep empty to use the period of the validation(invoice) date.", readonly=True, states={'draft':[('readonly',False)]}),
'account_id': fields.many2one('account.account', 'Account', required=True, readonly=True, states={'draft':[('readonly',False)]}, help="The partner account used for this invoice."),
'invoice_line': fields.one2many('account.invoice.line', 'invoice_id', 'Invoice Lines', readonly=True, states={'draft':[('readonly',False)]}),
'tax_line': fields.one2many('account.invoice.tax', 'invoice_id', 'Tax Lines', readonly=True, states={'draft':[('readonly',False)]}),
'move_id': fields.many2one('account.move', 'Journal Entry', readonly=True, select=1, ondelete='restrict', help="Link to the automatically generated Journal Items."),
'amount_untaxed': fields.function(_amount_all, digits_compute=dp.get_precision('Account'), string='Subtotal', track_visibility='always',
store={
'account.invoice': (lambda self, cr, uid, ids, c={}: ids, ['invoice_line'], 20),
'account.invoice.tax': (_get_invoice_tax, None, 20),
'account.invoice.line': (_get_invoice_line, ['price_unit','invoice_line_tax_id','quantity','discount','invoice_id'], 20),
},
multi='all'),
'amount_tax': fields.function(_amount_all, digits_compute=dp.get_precision('Account'), string='Tax',
store={
'account.invoice': (lambda self, cr, uid, ids, c={}: ids, ['invoice_line'], 20),
'account.invoice.tax': (_get_invoice_tax, None, 20),
'account.invoice.line': (_get_invoice_line, ['price_unit','invoice_line_tax_id','quantity','discount','invoice_id'], 20),
},
multi='all'),
'amount_total': fields.function(_amount_all, digits_compute=dp.get_precision('Account'), string='Total',
store={
'account.invoice': (lambda self, cr, uid, ids, c={}: ids, ['invoice_line'], 20),
'account.invoice.tax': (_get_invoice_tax, None, 20),
'account.invoice.line': (_get_invoice_line, ['price_unit','invoice_line_tax_id','quantity','discount','invoice_id'], 20),
},
multi='all'),
'currency_id': fields.many2one('res.currency', 'Currency', required=True, readonly=True, states={'draft':[('readonly',False)]}, track_visibility='always'),
'journal_id': fields.many2one('account.journal', 'Journal', required=True, readonly=True, states={'draft':[('readonly',False)]}),
'company_id': fields.many2one('res.company', 'Company', required=True, change_default=True, readonly=True, states={'draft':[('readonly',False)]}),
'check_total': fields.float('Verification Total', digits_compute=dp.get_precision('Account'), readonly=True, states={'draft':[('readonly',False)]}),
'reconciled': fields.function(_reconciled, string='Paid/Reconciled', type='boolean',
store={
'account.invoice': (lambda self, cr, uid, ids, c={}: ids, None, 50), # Check if we can remove ?
'account.move.line': (_get_invoice_from_line, None, 50),
'account.move.reconcile': (_get_invoice_from_reconcile, None, 50),
}, help="It indicates that the invoice has been paid and the journal entry of the invoice has been reconciled with one or several journal entries of payment."),
'partner_bank_id': fields.many2one('res.partner.bank', 'Bank Account',
help='Bank Account Number to which the invoice will be paid. A Company bank account if this is a Customer Invoice or Supplier Refund, otherwise a Partner bank account number.', readonly=True, states={'draft':[('readonly',False)]}),
'move_lines':fields.function(_get_lines, type='many2many', relation='account.move.line', string='Entry Lines'),
'residual': fields.function(_amount_residual, digits_compute=dp.get_precision('Account'), string='Balance',
store={
'account.invoice': (lambda self, cr, uid, ids, c={}: ids, ['invoice_line','move_id'], 50),
'account.invoice.tax': (_get_invoice_tax, None, 50),
'account.invoice.line': (_get_invoice_line, ['price_unit','invoice_line_tax_id','quantity','discount','invoice_id'], 50),
'account.move.line': (_get_invoice_from_line, None, 50),
'account.move.reconcile': (_get_invoice_from_reconcile, None, 50),
},
help="Remaining amount due."),
'payment_ids': fields.function(_compute_lines, relation='account.move.line', type="many2many", string='Payments'),
'move_name': fields.char('Journal Entry', size=64, readonly=True, states={'draft':[('readonly',False)]}),
'user_id': fields.many2one('res.users', 'Salesperson', readonly=True, track_visibility='onchange', states={'draft':[('readonly',False)]}),
'fiscal_position': fields.many2one('account.fiscal.position', 'Fiscal Position', readonly=True, states={'draft':[('readonly',False)]})
}
_defaults = {
'type': _get_type,
'state': 'draft',
'journal_id': _get_journal,
'currency_id': _get_currency,
'company_id': lambda self,cr,uid,c: self.pool.get('res.company')._company_default_get(cr, uid, 'account.invoice', context=c),
'reference_type': 'none',
'check_total': 0.0,
'internal_number': False,
'user_id': lambda s, cr, u, c: u,
'sent': False,
}
_sql_constraints = [
('number_uniq', 'unique(number, company_id, journal_id, type)', 'Invoice Number must be unique per Company!'),
]
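    # Adapt the requested view to the context: pick the customer or supplier
    # invoice form when called from a partner, restrict the journal selection
    # to the journal type at hand, and adjust the partner/bank fields to the
    # invoice type.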
def fields_view_get(self, cr, uid, view_id=None, view_type=False, context=None, toolbar=False, submenu=False):
journal_obj = self.pool.get('account.journal')
if context is None:
context = {}
if context.get('active_model', '') in ['res.partner'] and context.get('active_ids', False) and context['active_ids']:
partner = self.pool.get(context['active_model']).read(cr, uid, context['active_ids'], ['supplier','customer'])[0]
if not view_type:
view_id = self.pool.get('ir.ui.view').search(cr, uid, [('name', '=', 'account.invoice.tree')])
view_type = 'tree'
if view_type == 'form':
if partner['supplier'] and not partner['customer']:
view_id = self.pool.get('ir.ui.view').search(cr,uid,[('name', '=', 'account.invoice.supplier.form')])
elif partner['customer'] and not partner['supplier']:
view_id = self.pool.get('ir.ui.view').search(cr,uid,[('name', '=', 'account.invoice.form')])
if view_id and isinstance(view_id, (list, tuple)):
view_id = view_id[0]
res = super(account_invoice,self).fields_view_get(cr, uid, view_id=view_id, view_type=view_type, context=context, toolbar=toolbar, submenu=submenu)
type = context.get('journal_type', False)
for field in res['fields']:
if field == 'journal_id' and type:
journal_select = journal_obj._name_search(cr, uid, '', [('type', '=', type)], context=context, limit=None, name_get_uid=1)
res['fields'][field]['selection'] = journal_select
doc = etree.XML(res['arch'])
if context.get('type', False):
for node in doc.xpath("//field[@name='partner_bank_id']"):
if context['type'] == 'in_refund':
node.set('domain', "[('partner_id.ref_companies', 'in', [company_id])]")
elif context['type'] == 'out_refund':
node.set('domain', "[('partner_id', '=', partner_id)]")
res['arch'] = etree.tostring(doc)
if view_type == 'search':
if context.get('type', 'in_invoice') in ('out_invoice', 'out_refund'):
for node in doc.xpath("//group[@name='extended filter']"):
doc.remove(node)
res['arch'] = etree.tostring(doc)
if view_type == 'tree':
partner_string = _('Customer')
if context.get('type', 'out_invoice') in ('in_invoice', 'in_refund'):
partner_string = _('Supplier')
for node in doc.xpath("//field[@name='reference']"):
node.set('invisible', '0')
for node in doc.xpath("//field[@name='partner_id']"):
node.set('string', partner_string)
res['arch'] = etree.tostring(doc)
return res
def get_log_context(self, cr, uid, context=None):
if context is None:
context = {}
res = self.pool.get('ir.model.data').get_object_reference(cr, uid, 'account', 'invoice_form')
view_id = res and res[1] or False
context['view_id'] = view_id
return context
def invoice_print(self, cr, uid, ids, context=None):
'''
This function prints the invoice and mark it as sent, so that we can see more easily the next step of the workflow
'''
assert len(ids) == 1, 'This option should only be used for a single id at a time.'
self.write(cr, uid, ids, {'sent': True}, context=context)
datas = {
'ids': ids,
'model': 'account.invoice',
'form': self.read(cr, uid, ids[0], context=context)
}
return {
'type': 'ir.actions.report.xml',
'report_name': 'account.invoice',
'datas': datas,
'nodestroy' : True
}
def action_invoice_sent(self, cr, uid, ids, context=None):
'''
This function opens a window to compose an email, with the edi invoice template message loaded by default
'''
assert len(ids) == 1, 'This option should only be used for a single id at a time.'
ir_model_data = self.pool.get('ir.model.data')
try:
template_id = ir_model_data.get_object_reference(cr, uid, 'account', 'email_template_edi_invoice')[1]
except ValueError:
template_id = False
try:
compose_form_id = ir_model_data.get_object_reference(cr, uid, 'mail', 'email_compose_message_wizard_form')[1]
except ValueError:
compose_form_id = False
ctx = dict(context)
ctx.update({
'default_model': 'account.invoice',
'default_res_id': ids[0],
'default_use_template': bool(template_id),
'default_template_id': template_id,
'default_composition_mode': 'comment',
'mark_invoice_as_sent': True,
})
return {
'type': 'ir.actions.act_window',
'view_type': 'form',
'view_mode': 'form',
'res_model': 'mail.compose.message',
'views': [(compose_form_id, 'form')],
'view_id': compose_form_id,
'target': 'new',
'context': ctx,
}
def confirm_paid(self, cr, uid, ids, context=None):
if context is None:
context = {}
self.write(cr, uid, ids, {'state':'paid'}, context=context)
return True
def unlink(self, cr, uid, ids, context=None):
if context is None:
context = {}
invoices = self.read(cr, uid, ids, ['state','internal_number'], context=context)
unlink_ids = []
for t in invoices:
if t['state'] not in ('draft', 'cancel'):
raise openerp.exceptions.Warning(_('You cannot delete an invoice which is not draft or cancelled. You should refund it instead.'))
elif t['internal_number']:
raise openerp.exceptions.Warning(_('You cannot delete an invoice after it has been validated (and received a number). You can set it back to "Draft" state and modify its content, then re-confirm it.'))
else:
unlink_ids.append(t['id'])
osv.osv.unlink(self, cr, uid, unlink_ids, context=context)
return True
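    # When the partner changes, propose its receivable or payable account
    # (depending on the invoice type), its payment term, fiscal position and
    # first bank account, and recompute the due date when the payment term
    # differs from the current one.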
def onchange_partner_id(self, cr, uid, ids, type, partner_id,\
date_invoice=False, payment_term=False, partner_bank_id=False, company_id=False):
partner_payment_term = False
acc_id = False
bank_id = False
fiscal_position = False
opt = [('uid', str(uid))]
if partner_id:
opt.insert(0, ('id', partner_id))
p = self.pool.get('res.partner').browse(cr, uid, partner_id)
if company_id:
if (p.property_account_receivable.company_id and (p.property_account_receivable.company_id.id != company_id)) and (p.property_account_payable.company_id and (p.property_account_payable.company_id.id != company_id)):
property_obj = self.pool.get('ir.property')
rec_pro_id = property_obj.search(cr,uid,[('name','=','property_account_receivable'),('res_id','=','res.partner,'+str(partner_id)+''),('company_id','=',company_id)])
pay_pro_id = property_obj.search(cr,uid,[('name','=','property_account_payable'),('res_id','=','res.partner,'+str(partner_id)+''),('company_id','=',company_id)])
if not rec_pro_id:
rec_pro_id = property_obj.search(cr,uid,[('name','=','property_account_receivable'),('company_id','=',company_id)])
if not pay_pro_id:
pay_pro_id = property_obj.search(cr,uid,[('name','=','property_account_payable'),('company_id','=',company_id)])
rec_line_data = property_obj.read(cr,uid,rec_pro_id,['name','value_reference','res_id'])
pay_line_data = property_obj.read(cr,uid,pay_pro_id,['name','value_reference','res_id'])
rec_res_id = rec_line_data and rec_line_data[0].get('value_reference',False) and int(rec_line_data[0]['value_reference'].split(',')[1]) or False
pay_res_id = pay_line_data and pay_line_data[0].get('value_reference',False) and int(pay_line_data[0]['value_reference'].split(',')[1]) or False
if not rec_res_id and not pay_res_id:
raise osv.except_osv(_('Configuration Error!'),
_('Cannot find a chart of accounts for this company, you should create one.'))
account_obj = self.pool.get('account.account')
rec_obj_acc = account_obj.browse(cr, uid, [rec_res_id])
pay_obj_acc = account_obj.browse(cr, uid, [pay_res_id])
p.property_account_receivable = rec_obj_acc[0]
p.property_account_payable = pay_obj_acc[0]
if type in ('out_invoice', 'out_refund'):
acc_id = p.property_account_receivable.id
partner_payment_term = p.property_payment_term and p.property_payment_term.id or False
else:
acc_id = p.property_account_payable.id
partner_payment_term = p.property_supplier_payment_term and p.property_supplier_payment_term.id or False
fiscal_position = p.property_account_position and p.property_account_position.id or False
if p.bank_ids:
bank_id = p.bank_ids[0].id
result = {'value': {
'account_id': acc_id,
'payment_term': partner_payment_term,
'fiscal_position': fiscal_position
}
}
if type in ('in_invoice', 'in_refund'):
result['value']['partner_bank_id'] = bank_id
if payment_term != partner_payment_term:
if partner_payment_term:
to_update = self.onchange_payment_term_date_invoice(
cr, uid, ids, partner_payment_term, date_invoice)
result['value'].update(to_update['value'])
else:
result['value']['date_due'] = False
if partner_bank_id != bank_id:
to_update = self.onchange_partner_bank(cr, uid, ids, bank_id)
result['value'].update(to_update['value'])
return result
def onchange_journal_id(self, cr, uid, ids, journal_id=False, context=None):
result = {}
if journal_id:
journal = self.pool.get('account.journal').browse(cr, uid, journal_id, context=context)
currency_id = journal.currency and journal.currency.id or journal.company_id.currency_id.id
company_id = journal.company_id.id
result = {'value': {
'currency_id': currency_id,
'company_id': company_id,
}
}
return result
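    # Recompute the due date from the payment term: keep the user-entered due
    # date when no payment term is set, otherwise take the latest date produced
    # by account.payment.term.compute().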
def onchange_payment_term_date_invoice(self, cr, uid, ids, payment_term_id, date_invoice):
res = {}
if isinstance(ids, (int, long)):
ids = [ids]
if not date_invoice:
date_invoice = time.strftime('%Y-%m-%d')
if not payment_term_id:
inv = self.browse(cr, uid, ids[0])
            # When no payment term is defined, keep the due date entered by the user (or default to the invoice date)
return {'value':{'date_due': inv.date_due and inv.date_due or date_invoice}}
pterm_list = self.pool.get('account.payment.term').compute(cr, uid, payment_term_id, value=1, date_ref=date_invoice)
if pterm_list:
pterm_list = [line[0] for line in pterm_list]
pterm_list.sort()
res = {'value':{'date_due': pterm_list[-1]}}
else:
            raise osv.except_osv(_('Insufficient Data!'), _('The payment term of the supplier does not have a payment term line.'))
return res
def onchange_invoice_line(self, cr, uid, ids, lines):
return {}
def onchange_partner_bank(self, cursor, user, ids, partner_bank_id=False):
return {'value': {}}
def onchange_company_id(self, cr, uid, ids, company_id, part_id, type, invoice_line, currency_id):
#TODO: add the missing context parameter when forward-porting in trunk so we can remove
# this hack!
context = self.pool['res.users'].context_get(cr, uid)
val = {}
dom = {}
obj_journal = self.pool.get('account.journal')
account_obj = self.pool.get('account.account')
inv_line_obj = self.pool.get('account.invoice.line')
if company_id and part_id and type:
acc_id = False
partner_obj = self.pool.get('res.partner').browse(cr,uid,part_id)
if partner_obj.property_account_payable and partner_obj.property_account_receivable:
if partner_obj.property_account_payable.company_id.id != company_id and partner_obj.property_account_receivable.company_id.id != company_id:
property_obj = self.pool.get('ir.property')
rec_pro_id = property_obj.search(cr, uid, [('name','=','property_account_receivable'),('res_id','=','res.partner,'+str(part_id)+''),('company_id','=',company_id)])
pay_pro_id = property_obj.search(cr, uid, [('name','=','property_account_payable'),('res_id','=','res.partner,'+str(part_id)+''),('company_id','=',company_id)])
if not rec_pro_id:
rec_pro_id = property_obj.search(cr, uid, [('name','=','property_account_receivable'),('company_id','=',company_id)])
if not pay_pro_id:
pay_pro_id = property_obj.search(cr, uid, [('name','=','property_account_payable'),('company_id','=',company_id)])
rec_line_data = property_obj.read(cr, uid, rec_pro_id, ['name','value_reference','res_id'])
pay_line_data = property_obj.read(cr, uid, pay_pro_id, ['name','value_reference','res_id'])
rec_res_id = rec_line_data and rec_line_data[0].get('value_reference',False) and int(rec_line_data[0]['value_reference'].split(',')[1]) or False
pay_res_id = pay_line_data and pay_line_data[0].get('value_reference',False) and int(pay_line_data[0]['value_reference'].split(',')[1]) or False
if not rec_res_id and not pay_res_id:
raise osv.except_osv(_('Configuration Error!'),
                            _('Cannot find a chart of accounts, you should create one from the Settings\Configuration\Accounting menu.'))
if type in ('out_invoice', 'out_refund'):
acc_id = rec_res_id
else:
acc_id = pay_res_id
val= {'account_id': acc_id}
if ids:
if company_id:
inv_obj = self.browse(cr,uid,ids)
for line in inv_obj[0].invoice_line:
if line.account_id:
if line.account_id.company_id.id != company_id:
result_id = account_obj.search(cr, uid, [('name','=',line.account_id.name),('company_id','=',company_id)])
if not result_id:
raise osv.except_osv(_('Configuration Error!'),
                                    _('Cannot find a chart of accounts, you should create one from the Settings\Configuration\Accounting menu.'))
inv_line_obj.write(cr, uid, [line.id], {'account_id': result_id[-1]})
else:
if invoice_line:
for inv_line in invoice_line:
obj_l = account_obj.browse(cr, uid, inv_line[2]['account_id'])
if obj_l.company_id.id != company_id:
raise osv.except_osv(_('Configuration Error!'),
_('Invoice line account\'s company and invoice\'s company does not match.'))
else:
continue
if company_id and type:
journal_mapping = {
'out_invoice': 'sale',
'out_refund': 'sale_refund',
'in_refund': 'purchase_refund',
'in_invoice': 'purchase',
}
journal_type = journal_mapping[type]
journal_ids = obj_journal.search(cr, uid, [('company_id','=',company_id), ('type', '=', journal_type)])
if journal_ids:
val['journal_id'] = journal_ids[0]
ir_values_obj = self.pool.get('ir.values')
res_journal_default = ir_values_obj.get(cr, uid, 'default', 'type=%s' % (type), ['account.invoice'])
for r in res_journal_default:
if r[1] == 'journal_id' and r[2] in journal_ids:
val['journal_id'] = r[2]
if not val.get('journal_id', False):
journal_type_map = dict(obj_journal._columns['type'].selection)
journal_type_label = self.pool['ir.translation']._get_source(cr, uid, None, ('code','selection'),
context.get('lang'),
journal_type_map.get(journal_type))
raise osv.except_osv(_('Configuration Error!'),
_('Cannot find any account journal of %s type for this company.\n\nYou can create one in the menu: \nConfiguration\Journals\Journals.') % ('"%s"' % journal_type_label))
dom = {'journal_id': [('id', 'in', journal_ids)]}
else:
journal_ids = obj_journal.search(cr, uid, [])
return {'value': val, 'domain': dom}
# go from canceled state to draft state
def action_cancel_draft(self, cr, uid, ids, *args):
self.write(cr, uid, ids, {'state':'draft'})
wf_service = netsvc.LocalService("workflow")
for inv_id in ids:
wf_service.trg_delete(uid, 'account.invoice', inv_id, cr)
wf_service.trg_create(uid, 'account.invoice', inv_id, cr)
return True
# Workflow stuff
#################
    # return the ids of the move lines which have the same account as the invoice
    # whose id is in ids
def move_line_id_payment_get(self, cr, uid, ids, *args):
if not ids: return []
result = self.move_line_id_payment_gets(cr, uid, ids, *args)
return result.get(ids[0], [])
def move_line_id_payment_gets(self, cr, uid, ids, *args):
res = {}
if not ids: return res
cr.execute('SELECT i.id, l.id '\
'FROM account_move_line l '\
'LEFT JOIN account_invoice i ON (i.move_id=l.move_id) '\
'WHERE i.id IN %s '\
'AND l.account_id=i.account_id',
(tuple(ids),))
for r in cr.fetchall():
res.setdefault(r[0], [])
res[r[0]].append( r[1] )
return res
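    # A duplicated invoice must not inherit the number, journal entry, period
    # or 'sent' flag of the original: the copy starts over as a draft document.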
def copy(self, cr, uid, id, default=None, context=None):
default = default or {}
default.update({
'state':'draft',
'number':False,
'move_id':False,
'move_name':False,
'internal_number': False,
'period_id': False,
'sent': False,
})
if 'date_invoice' not in default:
default.update({
'date_invoice':False
})
if 'date_due' not in default:
default.update({
'date_due':False
})
return super(account_invoice, self).copy(cr, uid, id, default, context)
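    # Workflow condition: an invoice is considered paid when every move line on
    # the invoice account is reconciled.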
def test_paid(self, cr, uid, ids, *args):
res = self.move_line_id_payment_get(cr, uid, ids)
if not res:
return False
ok = True
for id in res:
cr.execute('select reconcile_id from account_move_line where id=%s', (id,))
ok = ok and bool(cr.fetchone()[0])
return ok
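    # Delete all automatically computed tax lines and recreate them from the
    # invoice lines; manually entered tax lines are preserved.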
def button_reset_taxes(self, cr, uid, ids, context=None):
if context is None:
context = {}
ctx = context.copy()
ait_obj = self.pool.get('account.invoice.tax')
for id in ids:
cr.execute("DELETE FROM account_invoice_tax WHERE invoice_id=%s AND manual is False", (id,))
partner = self.browse(cr, uid, id, context=ctx).partner_id
if partner.lang:
ctx.update({'lang': partner.lang})
for taxe in ait_obj.compute(cr, uid, id, context=ctx).values():
ait_obj.create(cr, uid, taxe)
# Update the stored value (fields.function), so we write to trigger recompute
self.pool.get('account.invoice').write(cr, uid, ids, {'invoice_line':[]}, context=ctx)
return True
def button_compute(self, cr, uid, ids, context=None, set_total=False):
self.button_reset_taxes(cr, uid, ids, context)
for inv in self.browse(cr, uid, ids, context=context):
if set_total:
self.pool.get('account.invoice').write(cr, uid, [inv.id], {'check_total': inv.amount_total})
return True
def _convert_ref(self, cr, uid, ref):
return (ref or '').replace('/','')
def _get_analytic_lines(self, cr, uid, id, context=None):
if context is None:
context = {}
inv = self.browse(cr, uid, id)
cur_obj = self.pool.get('res.currency')
company_currency = self.pool['res.company'].browse(cr, uid, inv.company_id.id).currency_id.id
if inv.type in ('out_invoice', 'in_refund'):
sign = 1
else:
sign = -1
iml = self.pool.get('account.invoice.line').move_line_get(cr, uid, inv.id, context=context)
for il in iml:
if il['account_analytic_id']:
if inv.type in ('in_invoice', 'in_refund'):
ref = inv.reference
else:
ref = self._convert_ref(cr, uid, inv.number)
if not inv.journal_id.analytic_journal_id:
raise osv.except_osv(_('No Analytic Journal!'),_("You have to define an analytic journal on the '%s' journal!") % (inv.journal_id.name,))
il['analytic_lines'] = [(0,0, {
'name': il['name'],
'date': inv['date_invoice'],
'account_id': il['account_analytic_id'],
'unit_amount': il['quantity'],
'amount': cur_obj.compute(cr, uid, inv.currency_id.id, company_currency, il['price'], context={'date': inv.date_invoice}) * sign,
'product_id': il['product_id'],
'product_uom_id': il['uos_id'],
'general_account_id': il['account_id'],
'journal_id': inv.journal_id.analytic_journal_id.id,
'ref': ref,
})]
return iml
def action_date_assign(self, cr, uid, ids, *args):
for inv in self.browse(cr, uid, ids):
res = self.onchange_payment_term_date_invoice(cr, uid, inv.id, inv.payment_term.id, inv.date_invoice)
if res and res['value']:
self.write(cr, uid, [inv.id], res['value'])
return True
def finalize_invoice_move_lines(self, cr, uid, invoice_browse, move_lines):
"""finalize_invoice_move_lines(cr, uid, invoice, move_lines) -> move_lines
Hook method to be overridden in additional modules to verify and possibly alter the
move lines to be created by an invoice, for special cases.
:param invoice_browse: browsable record of the invoice that is generating the move lines
:param move_lines: list of dictionaries with the account.move.lines (as for create())
:return: the (possibly updated) final move_lines to create for this invoice
"""
return move_lines
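    # Check that the tax lines stored on the invoice are consistent with the
    # taxes computed from the invoice lines; raise on any missing key or base
    # amount mismatch (manual tax lines are ignored).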
def check_tax_lines(self, cr, uid, inv, compute_taxes, ait_obj):
company_currency = self.pool['res.company'].browse(cr, uid, inv.company_id.id).currency_id
if not inv.tax_line:
for tax in compute_taxes.values():
ait_obj.create(cr, uid, tax)
else:
tax_key = []
for tax in inv.tax_line:
if tax.manual:
continue
key = (tax.tax_code_id.id, tax.base_code_id.id, tax.account_id.id, tax.account_analytic_id.id)
tax_key.append(key)
                if key not in compute_taxes:
                    raise osv.except_osv(_('Warning!'), _('Global taxes are defined, but they are not in invoice lines!'))
base = compute_taxes[key]['base']
if abs(base - tax.base) > company_currency.rounding:
raise osv.except_osv(_('Warning!'), _('Tax base different!\nClick on compute to update the tax base.'))
for key in compute_taxes:
                if key not in tax_key:
                    raise osv.except_osv(_('Warning!'), _('Taxes are missing!\nClick on the compute button.'))
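    # Convert the move line amounts to the company currency (keeping the
    # original value in 'amount_currency') and accumulate the signed totals;
    # the sign of each line depends on the invoice type.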
def compute_invoice_totals(self, cr, uid, inv, company_currency, ref, invoice_move_lines, context=None):
if context is None:
context={}
total = 0
total_currency = 0
cur_obj = self.pool.get('res.currency')
for i in invoice_move_lines:
if inv.currency_id.id != company_currency:
context.update({'date': inv.date_invoice or time.strftime('%Y-%m-%d')})
i['currency_id'] = inv.currency_id.id
i['amount_currency'] = i['price']
i['price'] = cur_obj.compute(cr, uid, inv.currency_id.id,
company_currency, i['price'],
context=context)
else:
i['amount_currency'] = False
i['currency_id'] = False
i['ref'] = ref
if inv.type in ('out_invoice','in_refund'):
total += i['price']
total_currency += i['amount_currency'] or i['price']
i['price'] = - i['price']
else:
total -= i['price']
total_currency -= i['amount_currency'] or i['price']
return total, total_currency, invoice_move_lines
def inv_line_characteristic_hashcode(self, invoice, invoice_line):
"""Overridable hashcode generation for invoice lines. Lines having the same hashcode
will be grouped together if the journal has the 'group line' option. Of course a module
can add fields to invoice lines that would need to be tested too before merging lines
or not."""
return "%s-%s-%s-%s-%s"%(
invoice_line['account_id'],
invoice_line.get('tax_code_id',"False"),
invoice_line.get('product_id',"False"),
invoice_line.get('analytic_account_id',"False"),
invoice_line.get('date_maturity',"False"))
def group_lines(self, cr, uid, iml, line, inv):
"""Merge account move lines (and hence analytic lines) if invoice line hashcodes are equals"""
if inv.journal_id.group_invoice_lines:
line2 = {}
for x, y, l in line:
tmp = self.inv_line_characteristic_hashcode(inv, l)
if tmp in line2:
am = line2[tmp]['debit'] - line2[tmp]['credit'] + (l['debit'] - l['credit'])
line2[tmp]['debit'] = (am > 0) and am or 0.0
line2[tmp]['credit'] = (am < 0) and -am or 0.0
line2[tmp]['tax_amount'] += l['tax_amount']
line2[tmp]['analytic_lines'] += l['analytic_lines']
else:
line2[tmp] = l
line = []
for key, val in line2.items():
line.append((0,0,val))
return line
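    # Main validation step: build the analytic and financial move lines from
    # the invoice lines, check taxes and payment term consistency, append one
    # counterpart line per due date, then create and post the account.move.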
def action_move_create(self, cr, uid, ids, context=None):
"""Creates invoice related analytics and financial move lines"""
ait_obj = self.pool.get('account.invoice.tax')
cur_obj = self.pool.get('res.currency')
period_obj = self.pool.get('account.period')
payment_term_obj = self.pool.get('account.payment.term')
journal_obj = self.pool.get('account.journal')
move_obj = self.pool.get('account.move')
if context is None:
context = {}
for inv in self.browse(cr, uid, ids, context=context):
if not inv.journal_id.sequence_id:
raise osv.except_osv(_('Error!'), _('Please define sequence on the journal related to this invoice.'))
if not inv.invoice_line:
raise osv.except_osv(_('No Invoice Lines!'), _('Please create some invoice lines.'))
if inv.move_id:
continue
ctx = context.copy()
ctx.update({'lang': inv.partner_id.lang})
if not inv.date_invoice:
self.write(cr, uid, [inv.id], {'date_invoice': fields.date.context_today(self,cr,uid,context=context)}, context=ctx)
company_currency = self.pool['res.company'].browse(cr, uid, inv.company_id.id).currency_id.id
# create the analytical lines
# one move line per invoice line
iml = self._get_analytic_lines(cr, uid, inv.id, context=ctx)
# check if taxes are all computed
compute_taxes = ait_obj.compute(cr, uid, inv.id, context=ctx)
self.check_tax_lines(cr, uid, inv, compute_taxes, ait_obj)
            # the check_total verification is disabled by default; it is only enforced for users belonging to the dedicated group
group_check_total_id = self.pool.get('ir.model.data').get_object_reference(cr, uid, 'account', 'group_supplier_inv_check_total')[1]
group_check_total = self.pool.get('res.groups').browse(cr, uid, group_check_total_id, context=context)
if group_check_total and uid in [x.id for x in group_check_total.users]:
if (inv.type in ('in_invoice', 'in_refund') and abs(inv.check_total - inv.amount_total) >= (inv.currency_id.rounding/2.0)):
raise osv.except_osv(_('Bad Total!'), _('Please verify the price of the invoice!\nThe encoded total does not match the computed total.'))
if inv.payment_term:
total_fixed = total_percent = 0
for line in inv.payment_term.line_ids:
if line.value == 'fixed':
total_fixed += line.value_amount
if line.value == 'procent':
total_percent += line.value_amount
total_fixed = (total_fixed * 100) / (inv.amount_total or 1.0)
if (total_fixed + total_percent) > 100:
raise osv.except_osv(_('Error!'), _("Cannot create the invoice.\nThe related payment term is probably misconfigured as it gives a computed amount greater than the total invoiced amount. In order to avoid rounding issues, the latest line of your payment term must be of type 'balance'."))
# one move line per tax line
iml += ait_obj.move_line_get(cr, uid, inv.id)
entry_type = ''
if inv.type in ('in_invoice', 'in_refund'):
ref = inv.reference
entry_type = 'journal_pur_voucher'
if inv.type == 'in_refund':
entry_type = 'cont_voucher'
else:
ref = self._convert_ref(cr, uid, inv.number)
entry_type = 'journal_sale_vou'
if inv.type == 'out_refund':
entry_type = 'cont_voucher'
            diff_currency_p = inv.currency_id.id != company_currency
# create one move line for the total and possibly adjust the other lines amount
total = 0
total_currency = 0
total, total_currency, iml = self.compute_invoice_totals(cr, uid, inv, company_currency, ref, iml, context=ctx)
acc_id = inv.account_id.id
name = inv['name'] or inv['supplier_invoice_number'] or '/'
totlines = False
if inv.payment_term:
totlines = payment_term_obj.compute(cr,
uid, inv.payment_term.id, total, inv.date_invoice or False, context=ctx)
if totlines:
res_amount_currency = total_currency
i = 0
ctx.update({'date': inv.date_invoice})
for t in totlines:
if inv.currency_id.id != company_currency:
amount_currency = cur_obj.compute(cr, uid, company_currency, inv.currency_id.id, t[1], context=ctx)
else:
amount_currency = False
                    # the last line absorbs the rounding difference
res_amount_currency -= amount_currency or 0
i += 1
if i == len(totlines):
amount_currency += res_amount_currency
iml.append({
'type': 'dest',
'name': name,
'price': t[1],
'account_id': acc_id,
'date_maturity': t[0],
'amount_currency': diff_currency_p \
and amount_currency or False,
'currency_id': diff_currency_p \
and inv.currency_id.id or False,
'ref': ref,
})
else:
iml.append({
'type': 'dest',
'name': name,
'price': total,
'account_id': acc_id,
'date_maturity': inv.date_due or False,
'amount_currency': diff_currency_p \
and total_currency or False,
'currency_id': diff_currency_p \
and inv.currency_id.id or False,
'ref': ref
})
date = inv.date_invoice or time.strftime('%Y-%m-%d')
part = self.pool.get("res.partner")._find_accounting_partner(inv.partner_id)
line = map(lambda x:(0,0,self.line_get_convert(cr, uid, x, part.id, date, context=ctx)),iml)
line = self.group_lines(cr, uid, iml, line, inv)
journal_id = inv.journal_id.id
journal = journal_obj.browse(cr, uid, journal_id, context=ctx)
if journal.centralisation:
raise osv.except_osv(_('User Error!'),
_('You cannot create an invoice on a centralized journal. Uncheck the centralized counterpart box in the related journal from the configuration menu.'))
line = self.finalize_invoice_move_lines(cr, uid, inv, line)
move = {
'ref': inv.reference and inv.reference or inv.name,
'line_id': line,
'journal_id': journal_id,
'date': date,
'narration': inv.comment,
'company_id': inv.company_id.id,
}
period_id = inv.period_id and inv.period_id.id or False
ctx.update(company_id=inv.company_id.id,
account_period_prefer_normal=True)
if not period_id:
period_ids = period_obj.find(cr, uid, inv.date_invoice, context=ctx)
period_id = period_ids and period_ids[0] or False
if period_id:
move['period_id'] = period_id
for i in line:
i[2]['period_id'] = period_id
ctx.update(invoice=inv)
move_id = move_obj.create(cr, uid, move, context=ctx)
new_move_name = move_obj.browse(cr, uid, move_id, context=ctx).name
# make the invoice point to that move
self.write(cr, uid, [inv.id], {'move_id': move_id,'period_id':period_id, 'move_name':new_move_name}, context=ctx)
# Pass invoice in context in method post: used if you want to get the same
# account move reference when creating the same invoice after a cancelled one:
move_obj.post(cr, uid, [move_id], context=ctx)
self._log_event(cr, uid, ids)
return True
def invoice_validate(self, cr, uid, ids, context=None):
self.write(cr, uid, ids, {'state':'open'}, context=context)
return True
def line_get_convert(self, cr, uid, x, part, date, context=None):
return {
'date_maturity': x.get('date_maturity', False),
'partner_id': part,
'name': x['name'][:64],
'date': date,
'debit': x['price']>0 and x['price'],
'credit': x['price']<0 and -x['price'],
'account_id': x['account_id'],
'analytic_lines': x.get('analytic_lines', []),
'amount_currency': x['price']>0 and abs(x.get('amount_currency', False)) or -abs(x.get('amount_currency', False)),
'currency_id': x.get('currency_id', False),
'tax_code_id': x.get('tax_code_id', False),
'tax_amount': x.get('tax_amount', False),
'ref': x.get('ref', False),
'quantity': x.get('quantity',1.00),
'product_id': x.get('product_id', False),
'product_uom_id': x.get('uos_id', False),
'analytic_account_id': x.get('account_analytic_id', False),
}
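    # Called by the workflow once the move is created: store the generated
    # number as 'internal_number' and propagate the reference to the journal
    # entry, its move lines and the related analytic lines.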
def action_number(self, cr, uid, ids, context=None):
if context is None:
context = {}
        #TODO: not a correct fix, but we need fresh values before reading them
self.write(cr, uid, ids, {})
for obj_inv in self.browse(cr, uid, ids, context=context):
invtype = obj_inv.type
number = obj_inv.number
move_id = obj_inv.move_id and obj_inv.move_id.id or False
reference = obj_inv.reference or ''
self.write(cr, uid, ids, {'internal_number': number})
if invtype in ('in_invoice', 'in_refund'):
if not reference:
ref = self._convert_ref(cr, uid, number)
else:
ref = reference
else:
ref = self._convert_ref(cr, uid, number)
cr.execute('UPDATE account_move SET ref=%s ' \
'WHERE id=%s AND (ref is null OR ref = \'\')',
(ref, move_id))
cr.execute('UPDATE account_move_line SET ref=%s ' \
'WHERE move_id=%s AND (ref is null OR ref = \'\')',
(ref, move_id))
cr.execute('UPDATE account_analytic_line SET ref=%s ' \
'FROM account_move_line ' \
'WHERE account_move_line.move_id = %s ' \
'AND account_analytic_line.move_id = account_move_line.id',
(ref, move_id))
return True
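    # Cancel the invoice: refused when a partial payment is still reconciled
    # with it; otherwise detach and delete the journal entry (its move lines
    # and reconciliations are deleted in cascade).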
def action_cancel(self, cr, uid, ids, context=None):
if context is None:
context = {}
account_move_obj = self.pool.get('account.move')
invoices = self.read(cr, uid, ids, ['move_id', 'payment_ids'])
move_ids = [] # ones that we will need to remove
for i in invoices:
if i['move_id']:
move_ids.append(i['move_id'][0])
if i['payment_ids']:
account_move_line_obj = self.pool.get('account.move.line')
pay_ids = account_move_line_obj.browse(cr, uid, i['payment_ids'])
for move_line in pay_ids:
if move_line.reconcile_partial_id and move_line.reconcile_partial_id.line_partial_ids:
raise osv.except_osv(_('Error!'), _('You cannot cancel an invoice which is partially paid. You need to unreconcile related payment entries first.'))
# First, set the invoices as cancelled and detach the move ids
self.write(cr, uid, ids, {'state':'cancel', 'move_id':False})
if move_ids:
# second, invalidate the move(s)
account_move_obj.button_cancel(cr, uid, move_ids, context=context)
# delete the move this invoice was pointing to
# Note that the corresponding move_lines and move_reconciles
# will be automatically deleted too
account_move_obj.unlink(cr, uid, move_ids, context=context)
self._log_event(cr, uid, ids, -1.0, 'Cancel Invoice')
return True
###################
def list_distinct_taxes(self, cr, uid, ids):
invoices = self.browse(cr, uid, ids)
taxes = {}
for inv in invoices:
for tax in inv.tax_line:
if not tax['name'] in taxes:
taxes[tax['name']] = {'name': tax['name']}
return taxes.values()
def _log_event(self, cr, uid, ids, factor=1.0, name='Open Invoice'):
#TODO: implement messages system
return True
def name_get(self, cr, uid, ids, context=None):
if not ids:
return []
types = {
'out_invoice': _('Invoice'),
'in_invoice': _('Supplier Invoice'),
'out_refund': _('Refund'),
'in_refund': _('Supplier Refund'),
}
return [(r['id'], '%s %s' % (r['number'] or types[r['type']], r['name'] or '')) for r in self.read(cr, uid, ids, ['type', 'number', 'name'], context, load='_classic_write')]
def name_search(self, cr, user, name, args=None, operator='ilike', context=None, limit=100):
if not args:
args = []
if context is None:
context = {}
ids = []
if name:
ids = self.search(cr, user, [('number','=',name)] + args, limit=limit, context=context)
if not ids:
ids = self.search(cr, user, [('name',operator,name)] + args, limit=limit, context=context)
return self.name_get(cr, user, ids, context)
def _refund_cleanup_lines(self, cr, uid, lines, context=None):
"""Convert records to dict of values suitable for one2many line creation
:param list(browse_record) lines: records to convert
        :return: list of command tuples for one2many line creation [(0, 0, dict of values), ...]
"""
clean_lines = []
for line in lines:
clean_line = {}
for field in line._all_columns.keys():
if line._all_columns[field].column._type == 'many2one':
clean_line[field] = line[field].id
elif line._all_columns[field].column._type not in ['many2many','one2many']:
clean_line[field] = line[field]
elif field == 'invoice_line_tax_id':
tax_list = []
for tax in line[field]:
tax_list.append(tax.id)
clean_line[field] = [(6,0, tax_list)]
clean_lines.append(clean_line)
return map(lambda x: (0,0,x), clean_lines)
def _prepare_refund(self, cr, uid, invoice, date=None, period_id=None, description=None, journal_id=None, context=None):
"""Prepare the dict of values to create the new refund from the invoice.
This method may be overridden to implement custom
refund generation (making sure to call super() to establish
a clean extension chain).
        :param invoice: browse record of the invoice to refund
:param string date: refund creation date from the wizard
:param integer period_id: force account.period from the wizard
:param string description: description of the refund from the wizard
:param integer journal_id: account.journal from the wizard
:return: dict of value to create() the refund
"""
obj_journal = self.pool.get('account.journal')
type_dict = {
'out_invoice': 'out_refund', # Customer Invoice
'in_invoice': 'in_refund', # Supplier Invoice
'out_refund': 'out_invoice', # Customer Refund
'in_refund': 'in_invoice', # Supplier Refund
}
invoice_data = {}
for field in ['name', 'reference', 'comment', 'date_due', 'partner_id', 'company_id',
'account_id', 'currency_id', 'payment_term', 'user_id', 'fiscal_position']:
if invoice._all_columns[field].column._type == 'many2one':
invoice_data[field] = invoice[field].id
else:
invoice_data[field] = invoice[field] if invoice[field] else False
invoice_lines = self._refund_cleanup_lines(cr, uid, invoice.invoice_line, context=context)
tax_lines = filter(lambda l: l['manual'], invoice.tax_line)
tax_lines = self._refund_cleanup_lines(cr, uid, tax_lines, context=context)
if journal_id:
refund_journal_ids = [journal_id]
elif invoice['type'] == 'in_invoice':
refund_journal_ids = obj_journal.search(cr, uid, [('type','=','purchase_refund')], context=context)
else:
refund_journal_ids = obj_journal.search(cr, uid, [('type','=','sale_refund')], context=context)
if not date:
date = time.strftime('%Y-%m-%d')
invoice_data.update({
'type': type_dict[invoice['type']],
'date_invoice': date,
'state': 'draft',
'number': False,
'invoice_line': invoice_lines,
'tax_line': tax_lines,
'journal_id': refund_journal_ids and refund_journal_ids[0] or False,
})
if period_id:
invoice_data['period_id'] = period_id
if description:
invoice_data['name'] = description
return invoice_data
def refund(self, cr, uid, ids, date=None, period_id=None, description=None, journal_id=None, context=None):
new_ids = []
for invoice in self.browse(cr, uid, ids, context=context):
invoice = self._prepare_refund(cr, uid, invoice,
date=date,
period_id=period_id,
description=description,
journal_id=journal_id,
context=context)
# create the new invoice
new_ids.append(self.create(cr, uid, invoice, context=context))
return new_ids
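    # Register a payment against a single invoice: create a two-line journal
    # entry (invoice account vs. payment account) and reconcile it with the
    # invoice move lines, fully when the residual is zero or a write-off
    # account is given, partially otherwise.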
def pay_and_reconcile(self, cr, uid, ids, pay_amount, pay_account_id, period_id, pay_journal_id, writeoff_acc_id, writeoff_period_id, writeoff_journal_id, context=None, name=''):
if context is None:
context = {}
#TODO check if we can use different period for payment and the writeoff line
assert len(ids)==1, "Can only pay one invoice at a time."
invoice = self.browse(cr, uid, ids[0], context=context)
src_account_id = invoice.account_id.id
        # Determine the sign of the payment from the invoice type
types = {'out_invoice': -1, 'in_invoice': 1, 'out_refund': 1, 'in_refund': -1}
direction = types[invoice.type]
        # take the chosen date
if 'date_p' in context and context['date_p']:
date=context['date_p']
else:
date=time.strftime('%Y-%m-%d')
# Take the amount in currency and the currency of the payment
if 'amount_currency' in context and context['amount_currency'] and 'currency_id' in context and context['currency_id']:
amount_currency = context['amount_currency']
currency_id = context['currency_id']
else:
amount_currency = False
currency_id = False
pay_journal = self.pool.get('account.journal').read(cr, uid, pay_journal_id, ['type'], context=context)
if invoice.type in ('in_invoice', 'out_invoice'):
if pay_journal['type'] == 'bank':
entry_type = 'bank_pay_voucher' # Bank payment
else:
entry_type = 'pay_voucher' # Cash payment
else:
entry_type = 'cont_voucher'
if invoice.type in ('in_invoice', 'in_refund'):
ref = invoice.reference
else:
ref = self._convert_ref(cr, uid, invoice.number)
partner = self.pool['res.partner']._find_accounting_partner(invoice.partner_id)
# Pay attention to the sign for both debit/credit AND amount_currency
l1 = {
'debit': direction * pay_amount>0 and direction * pay_amount,
'credit': direction * pay_amount<0 and - direction * pay_amount,
'account_id': src_account_id,
'partner_id': partner.id,
'ref':ref,
'date': date,
'currency_id':currency_id,
'amount_currency':amount_currency and direction * amount_currency or 0.0,
'company_id': invoice.company_id.id,
}
l2 = {
'debit': direction * pay_amount<0 and - direction * pay_amount,
'credit': direction * pay_amount>0 and direction * pay_amount,
'account_id': pay_account_id,
'partner_id': partner.id,
'ref':ref,
'date': date,
'currency_id':currency_id,
'amount_currency':amount_currency and - direction * amount_currency or 0.0,
'company_id': invoice.company_id.id,
}
if not name:
name = invoice.invoice_line and invoice.invoice_line[0].name or invoice.number
l1['name'] = name
l2['name'] = name
lines = [(0, 0, l1), (0, 0, l2)]
move = {'ref': ref, 'line_id': lines, 'journal_id': pay_journal_id, 'period_id': period_id, 'date': date}
move_id = self.pool.get('account.move').create(cr, uid, move, context=context)
line_ids = []
total = 0.0
line = self.pool.get('account.move.line')
move_ids = [move_id,]
if invoice.move_id:
move_ids.append(invoice.move_id.id)
cr.execute('SELECT id FROM account_move_line '\
'WHERE move_id IN %s',
((move_id, invoice.move_id.id),))
lines = line.browse(cr, uid, map(lambda x: x[0], cr.fetchall()) )
for l in lines+invoice.payment_ids:
if l.account_id.id == src_account_id:
line_ids.append(l.id)
total += (l.debit or 0.0) - (l.credit or 0.0)
inv_id, name = self.name_get(cr, uid, [invoice.id], context=context)[0]
if (not round(total,self.pool.get('decimal.precision').precision_get(cr, uid, 'Account'))) or writeoff_acc_id:
self.pool.get('account.move.line').reconcile(cr, uid, line_ids, 'manual', writeoff_acc_id, writeoff_period_id, writeoff_journal_id, context)
else:
code = invoice.currency_id.symbol
# TODO: use currency's formatting function
msg = _("Invoice partially paid: %s%s of %s%s (%s%s remaining).") % \
(pay_amount, code, invoice.amount_total, code, total, code)
self.message_post(cr, uid, [inv_id], body=msg, context=context)
self.pool.get('account.move.line').reconcile_partial(cr, uid, line_ids, 'manual', context)
# Update the stored value (fields.function), so we write to trigger recompute
self.pool.get('account.invoice').write(cr, uid, ids, {}, context=context)
return True
class account_invoice_line(osv.osv):
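    # Compute 'price_subtotal': the discounted unit price times the quantity,
    # as returned tax-excluded by account.tax.compute_all(), rounded in the
    # invoice currency.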
def _amount_line(self, cr, uid, ids, prop, unknow_none, unknow_dict):
res = {}
tax_obj = self.pool.get('account.tax')
cur_obj = self.pool.get('res.currency')
for line in self.browse(cr, uid, ids):
price = line.price_unit * (1-(line.discount or 0.0)/100.0)
taxes = tax_obj.compute_all(cr, uid, line.invoice_line_tax_id, price, line.quantity, product=line.product_id, partner=line.invoice_id.partner_id)
res[line.id] = taxes['total']
if line.invoice_id:
cur = line.invoice_id.currency_id
res[line.id] = cur_obj.round(cr, uid, cur, res[line.id])
return res
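    # Default unit price for a new line: when 'check_total' is passed in the
    # context, propose the remainder of the verification total after deducting
    # the lines already encoded and their taxes.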
def _price_unit_default(self, cr, uid, context=None):
if context is None:
context = {}
if context.get('check_total', False):
t = context['check_total']
for l in context.get('invoice_line', {}):
if isinstance(l, (list, tuple)) and len(l) >= 3 and l[2]:
tax_obj = self.pool.get('account.tax')
p = l[2].get('price_unit', 0) * (1-l[2].get('discount', 0)/100.0)
t = t - (p * l[2].get('quantity'))
taxes = l[2].get('invoice_line_tax_id')
                    if taxes and len(taxes[0]) >= 3 and taxes[0][2]:
taxes = tax_obj.browse(cr, uid, list(taxes[0][2]))
for tax in tax_obj.compute_all(cr, uid, taxes, p,l[2].get('quantity'), l[2].get('product_id', False), context.get('partner_id', False))['taxes']:
t = t - tax['amount']
return t
return 0
_name = "account.invoice.line"
_description = "Invoice Line"
_order = "invoice_id,sequence,id"
_columns = {
'name': fields.text('Description', required=True),
'origin': fields.char('Source Document', size=256, help="Reference of the document that produced this invoice."),
'sequence': fields.integer('Sequence', help="Gives the sequence of this line when displaying the invoice."),
'invoice_id': fields.many2one('account.invoice', 'Invoice Reference', ondelete='cascade', select=True),
'uos_id': fields.many2one('product.uom', 'Unit of Measure', ondelete='set null', select=True),
'product_id': fields.many2one('product.product', 'Product', ondelete='set null', select=True),
'account_id': fields.many2one('account.account', 'Account', required=True, domain=[('type','<>','view'), ('type', '<>', 'closed')], help="The income or expense account related to the selected product."),
'price_unit': fields.float('Unit Price', required=True, digits_compute= dp.get_precision('Product Price')),
'price_subtotal': fields.function(_amount_line, string='Amount', type="float",
digits_compute= dp.get_precision('Account'), store=True),
'quantity': fields.float('Quantity', digits_compute= dp.get_precision('Product Unit of Measure'), required=True),
'discount': fields.float('Discount (%)', digits_compute= dp.get_precision('Discount')),
'invoice_line_tax_id': fields.many2many('account.tax', 'account_invoice_line_tax', 'invoice_line_id', 'tax_id', 'Taxes', domain=[('parent_id','=',False)]),
'account_analytic_id': fields.many2one('account.analytic.account', 'Analytic Account'),
'company_id': fields.related('invoice_id','company_id',type='many2one',relation='res.company',string='Company', store=True, readonly=True),
'partner_id': fields.related('invoice_id','partner_id',type='many2one',relation='res.partner',string='Partner',store=True)
}
def _default_account_id(self, cr, uid, context=None):
# XXX this gets the default account for the user's company,
# it should get the default account for the invoice's company
# however, the invoice's company does not reach this point
if context is None:
context = {}
if context.get('type') in ('out_invoice','out_refund'):
prop = self.pool.get('ir.property').get(cr, uid, 'property_account_income_categ', 'product.category', context=context)
else:
prop = self.pool.get('ir.property').get(cr, uid, 'property_account_expense_categ', 'product.category', context=context)
return prop and prop.id or False
_defaults = {
'quantity': 1,
'discount': 0.0,
'price_unit': _price_unit_default,
'account_id': _default_account_id,
'sequence': 10,
}
def fields_view_get(self, cr, uid, view_id=None, view_type='form', context=None, toolbar=False, submenu=False):
if context is None:
context = {}
res = super(account_invoice_line,self).fields_view_get(cr, uid, view_id=view_id, view_type=view_type, context=context, toolbar=toolbar, submenu=submenu)
if context.get('type', False):
doc = etree.XML(res['arch'])
for node in doc.xpath("//field[@name='product_id']"):
if context['type'] in ('in_invoice', 'in_refund'):
node.set('domain', "[('purchase_ok', '=', True)]")
else:
node.set('domain', "[('sale_ok', '=', True)]")
res['arch'] = etree.tostring(doc)
return res
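    # Product onchange: derive the income/expense account (mapped through the
    # fiscal position), the applicable taxes, the unit price, the description
    # and the unit of measure from the selected product, adjusting the price
    # when the invoice currency or unit of measure differs from the product's.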
def product_id_change(self, cr, uid, ids, product, uom_id, qty=0, name='', type='out_invoice', partner_id=False, fposition_id=False, price_unit=False, currency_id=False, context=None, company_id=None):
if context is None:
context = {}
        company_id = company_id if company_id is not None else context.get('company_id',False)
context = dict(context)
context.update({'company_id': company_id, 'force_company': company_id})
if not partner_id:
raise osv.except_osv(_('No Partner Defined!'),_("You must first select a partner!") )
if not product:
if type in ('in_invoice', 'in_refund'):
return {'value': {}, 'domain':{'product_uom':[]}}
else:
return {'value': {'price_unit': 0.0}, 'domain':{'product_uom':[]}}
part = self.pool.get('res.partner').browse(cr, uid, partner_id, context=context)
fpos_obj = self.pool.get('account.fiscal.position')
fpos = fposition_id and fpos_obj.browse(cr, uid, fposition_id, context=context) or False
if part.lang:
context.update({'lang': part.lang})
result = {}
res = self.pool.get('product.product').browse(cr, uid, product, context=context)
if type in ('out_invoice','out_refund'):
a = res.property_account_income.id
if not a:
a = res.categ_id.property_account_income_categ.id
else:
a = res.property_account_expense.id
if not a:
a = res.categ_id.property_account_expense_categ.id
a = fpos_obj.map_account(cr, uid, fpos, a)
if a:
result['account_id'] = a
if type in ('out_invoice', 'out_refund'):
taxes = res.taxes_id and res.taxes_id or (a and self.pool.get('account.account').browse(cr, uid, a, context=context).tax_ids or False)
else:
taxes = res.supplier_taxes_id and res.supplier_taxes_id or (a and self.pool.get('account.account').browse(cr, uid, a, context=context).tax_ids or False)
tax_id = fpos_obj.map_tax(cr, uid, fpos, taxes)
if type in ('in_invoice', 'in_refund'):
result.update( {'price_unit': price_unit or res.standard_price,'invoice_line_tax_id': tax_id} )
else:
result.update({'price_unit': res.list_price, 'invoice_line_tax_id': tax_id})
result['name'] = res.partner_ref
result['uos_id'] = uom_id or res.uom_id.id
if res.description:
result['name'] += '\n'+res.description
domain = {'uos_id':[('category_id','=',res.uom_id.category_id.id)]}
res_final = {'value':result, 'domain':domain}
if not company_id or not currency_id:
return res_final
company = self.pool.get('res.company').browse(cr, uid, company_id, context=context)
currency = self.pool.get('res.currency').browse(cr, uid, currency_id, context=context)
if company.currency_id.id != currency.id:
if type in ('in_invoice', 'in_refund'):
res_final['value']['price_unit'] = res.standard_price
new_price = res_final['value']['price_unit'] * currency.rate
res_final['value']['price_unit'] = new_price
if result['uos_id'] and result['uos_id'] != res.uom_id.id:
selected_uom = self.pool.get('product.uom').browse(cr, uid, result['uos_id'], context=context)
new_price = self.pool.get('product.uom')._compute_price(cr, uid, res.uom_id.id, res_final['value']['price_unit'], result['uos_id'])
res_final['value']['price_unit'] = new_price
return res_final
def uos_id_change(self, cr, uid, ids, product, uom, qty=0, name='', type='out_invoice', partner_id=False, fposition_id=False, price_unit=False, currency_id=False, context=None, company_id=None):
if context is None:
context = {}
        company_id = company_id if company_id is not None else context.get('company_id',False)
context = dict(context)
context.update({'company_id': company_id})
warning = {}
res = self.product_id_change(cr, uid, ids, product, uom, qty, name, type, partner_id, fposition_id, price_unit, currency_id, context=context)
if not uom:
res['value']['price_unit'] = 0.0
if product and uom:
prod = self.pool.get('product.product').browse(cr, uid, product, context=context)
prod_uom = self.pool.get('product.uom').browse(cr, uid, uom, context=context)
if prod.uom_id.category_id.id != prod_uom.category_id.id:
warning = {
'title': _('Warning!'),
'message': _('The selected unit of measure is not compatible with the unit of measure of the product.')
}
res['value'].update({'uos_id': prod.uom_id.id})
return {'value': res['value'], 'warning': warning}
return res
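    # Build one move line dict per invoice line; when several base tax codes
    # apply to a line, extra zero-price lines are appended so that each tax
    # code carries its own base amount, converted to the company currency.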
def move_line_get(self, cr, uid, invoice_id, context=None):
res = []
tax_obj = self.pool.get('account.tax')
cur_obj = self.pool.get('res.currency')
if context is None:
context = {}
inv = self.pool.get('account.invoice').browse(cr, uid, invoice_id, context=context)
company_currency = self.pool['res.company'].browse(cr, uid, inv.company_id.id).currency_id.id
for line in inv.invoice_line:
mres = self.move_line_get_item(cr, uid, line, context)
if not mres:
continue
res.append(mres)
tax_code_found= False
for tax in tax_obj.compute_all(cr, uid, line.invoice_line_tax_id,
(line.price_unit * (1.0 - (line['discount'] or 0.0) / 100.0)),
line.quantity, line.product_id,
inv.partner_id)['taxes']:
if inv.type in ('out_invoice', 'in_invoice'):
tax_code_id = tax['base_code_id']
tax_amount = line.price_subtotal * tax['base_sign']
else:
tax_code_id = tax['ref_base_code_id']
tax_amount = line.price_subtotal * tax['ref_base_sign']
if tax_code_found:
if not tax_code_id:
continue
res.append(self.move_line_get_item(cr, uid, line, context))
res[-1]['price'] = 0.0
res[-1]['account_analytic_id'] = False
elif not tax_code_id:
continue
tax_code_found = True
res[-1]['tax_code_id'] = tax_code_id
res[-1]['tax_amount'] = cur_obj.compute(cr, uid, inv.currency_id.id, company_currency, tax_amount, context={'date': inv.date_invoice})
return res
def move_line_get_item(self, cr, uid, line, context=None):
return {
'type':'src',
'name': line.name.split('\n')[0][:64],
'price_unit':line.price_unit,
'quantity':line.quantity,
'price':line.price_subtotal,
'account_id':line.account_id.id,
'product_id':line.product_id.id,
'uos_id':line.uos_id.id,
'account_analytic_id':line.account_analytic_id.id,
'taxes':line.invoice_line_tax_id,
}
#
# Set the tax field according to the account and the fiscal position
#
def onchange_account_id(self, cr, uid, ids, product_id, partner_id, inv_type, fposition_id, account_id):
if not account_id:
return {}
unique_tax_ids = []
fpos = fposition_id and self.pool.get('account.fiscal.position').browse(cr, uid, fposition_id) or False
account = self.pool.get('account.account').browse(cr, uid, account_id)
if not product_id:
taxes = account.tax_ids
unique_tax_ids = self.pool.get('account.fiscal.position').map_tax(cr, uid, fpos, taxes)
else:
product_change_result = self.product_id_change(cr, uid, ids, product_id, False, type=inv_type,
partner_id=partner_id, fposition_id=fposition_id,
company_id=account.company_id.id)
if product_change_result and 'value' in product_change_result and 'invoice_line_tax_id' in product_change_result['value']:
unique_tax_ids = product_change_result['value']['invoice_line_tax_id']
return {'value':{'invoice_line_tax_id': unique_tax_ids}}
account_invoice_line()
class account_invoice_tax(osv.osv):
_name = "account.invoice.tax"
_description = "Invoice Tax"
def _count_factor(self, cr, uid, ids, name, args, context=None):
res = {}
for invoice_tax in self.browse(cr, uid, ids, context=context):
res[invoice_tax.id] = {
'factor_base': 1.0,
'factor_tax': 1.0,
}
if invoice_tax.amount != 0.0:
factor_tax = invoice_tax.tax_amount / invoice_tax.amount
res[invoice_tax.id]['factor_tax'] = factor_tax
if invoice_tax.base != 0.0:
factor_base = invoice_tax.base_amount / invoice_tax.base
res[invoice_tax.id]['factor_base'] = factor_base
return res
_columns = {
'invoice_id': fields.many2one('account.invoice', 'Invoice Line', ondelete='cascade', select=True),
'name': fields.char('Tax Description', size=64, required=True),
'account_id': fields.many2one('account.account', 'Tax Account', required=True, domain=[('type','<>','view'),('type','<>','income'), ('type', '<>', 'closed')]),
'account_analytic_id': fields.many2one('account.analytic.account', 'Analytic account'),
'base': fields.float('Base', digits_compute=dp.get_precision('Account')),
'amount': fields.float('Amount', digits_compute=dp.get_precision('Account')),
'manual': fields.boolean('Manual'),
'sequence': fields.integer('Sequence', help="Gives the sequence order when displaying a list of invoice tax."),
'base_code_id': fields.many2one('account.tax.code', 'Base Code', help="The account basis of the tax declaration."),
'base_amount': fields.float('Base Code Amount', digits_compute=dp.get_precision('Account')),
'tax_code_id': fields.many2one('account.tax.code', 'Tax Code', help="The tax basis of the tax declaration."),
'tax_amount': fields.float('Tax Code Amount', digits_compute=dp.get_precision('Account')),
'company_id': fields.related('account_id', 'company_id', type='many2one', relation='res.company', string='Company', store=True, readonly=True),
'factor_base': fields.function(_count_factor, string='Multiplication factor for Base code', type='float', multi="all"),
'factor_tax': fields.function(_count_factor, string='Multiplication factor for Tax code', type='float', multi="all")
}
def base_change(self, cr, uid, ids, base, currency_id=False, company_id=False, date_invoice=False):
cur_obj = self.pool.get('res.currency')
company_obj = self.pool.get('res.company')
company_currency = False
factor = 1
if ids:
factor = self.read(cr, uid, ids[0], ['factor_base'])['factor_base']
if company_id:
company_currency = company_obj.read(cr, uid, [company_id], ['currency_id'])[0]['currency_id'][0]
if currency_id and company_currency:
base = cur_obj.compute(cr, uid, currency_id, company_currency, base*factor, context={'date': date_invoice or time.strftime('%Y-%m-%d')}, round=False)
return {'value': {'base_amount':base}}
def amount_change(self, cr, uid, ids, amount, currency_id=False, company_id=False, date_invoice=False):
cur_obj = self.pool.get('res.currency')
company_obj = self.pool.get('res.company')
company_currency = False
factor = 1
if ids:
factor = self.read(cr, uid, ids[0], ['factor_tax'])['factor_tax']
if company_id:
company_currency = company_obj.read(cr, uid, [company_id], ['currency_id'])[0]['currency_id'][0]
if currency_id and company_currency:
amount = cur_obj.compute(cr, uid, currency_id, company_currency, amount*factor, context={'date': date_invoice or time.strftime('%Y-%m-%d')}, round=False)
return {'value': {'tax_amount': amount}}
_order = 'sequence'
_defaults = {
'manual': 1,
'base_amount': 0.0,
'tax_amount': 0.0,
}
def compute(self, cr, uid, invoice_id, context=None):
tax_grouped = {}
tax_obj = self.pool.get('account.tax')
cur_obj = self.pool.get('res.currency')
inv = self.pool.get('account.invoice').browse(cr, uid, invoice_id, context=context)
cur = inv.currency_id
company_currency = self.pool['res.company'].browse(cr, uid, inv.company_id.id).currency_id.id
for line in inv.invoice_line:
for tax in tax_obj.compute_all(cr, uid, line.invoice_line_tax_id, (line.price_unit* (1-(line.discount or 0.0)/100.0)), line.quantity, line.product_id, inv.partner_id)['taxes']:
val={}
val['invoice_id'] = inv.id
val['name'] = tax['name']
val['amount'] = tax['amount']
val['manual'] = False
val['sequence'] = tax['sequence']
val['base'] = cur_obj.round(cr, uid, cur, tax['price_unit'] * line['quantity'])
if inv.type in ('out_invoice','in_invoice'):
val['base_code_id'] = tax['base_code_id']
val['tax_code_id'] = tax['tax_code_id']
val['base_amount'] = cur_obj.compute(cr, uid, inv.currency_id.id, company_currency, val['base'] * tax['base_sign'], context={'date': inv.date_invoice or time.strftime('%Y-%m-%d')}, round=False)
val['tax_amount'] = cur_obj.compute(cr, uid, inv.currency_id.id, company_currency, val['amount'] * tax['tax_sign'], context={'date': inv.date_invoice or time.strftime('%Y-%m-%d')}, round=False)
val['account_id'] = tax['account_collected_id'] or line.account_id.id
val['account_analytic_id'] = tax['account_analytic_collected_id']
else:
val['base_code_id'] = tax['ref_base_code_id']
val['tax_code_id'] = tax['ref_tax_code_id']
val['base_amount'] = cur_obj.compute(cr, uid, inv.currency_id.id, company_currency, val['base'] * tax['ref_base_sign'], context={'date': inv.date_invoice or time.strftime('%Y-%m-%d')}, round=False)
val['tax_amount'] = cur_obj.compute(cr, uid, inv.currency_id.id, company_currency, val['amount'] * tax['ref_tax_sign'], context={'date': inv.date_invoice or time.strftime('%Y-%m-%d')}, round=False)
val['account_id'] = tax['account_paid_id'] or line.account_id.id
val['account_analytic_id'] = tax['account_analytic_paid_id']
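# Merge tax lines that share the same tax code, base code, account and analytic account, summing their amounts.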
key = (val['tax_code_id'], val['base_code_id'], val['account_id'], val['account_analytic_id'])
if key not in tax_grouped:
tax_grouped[key] = val
else:
tax_grouped[key]['amount'] += val['amount']
tax_grouped[key]['base'] += val['base']
tax_grouped[key]['base_amount'] += val['base_amount']
tax_grouped[key]['tax_amount'] += val['tax_amount']
for t in tax_grouped.values():
t['base'] = cur_obj.round(cr, uid, cur, t['base'])
t['amount'] = cur_obj.round(cr, uid, cur, t['amount'])
t['base_amount'] = cur_obj.round(cr, uid, cur, t['base_amount'])
t['tax_amount'] = cur_obj.round(cr, uid, cur, t['tax_amount'])
return tax_grouped
def move_line_get(self, cr, uid, invoice_id):
res = []
cr.execute('SELECT * FROM account_invoice_tax WHERE invoice_id=%s', (invoice_id,))
for t in cr.dictfetchall():
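# Skip tax lines that carry no amount, no tax code and no tax amount.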
if not t['amount'] \
and not t['tax_code_id'] \
and not t['tax_amount']:
continue
res.append({
'type':'tax',
'name':t['name'],
'price_unit': t['amount'],
'quantity': 1,
'price': t['amount'] or 0.0,
'account_id': t['account_id'],
'tax_code_id': t['tax_code_id'],
'tax_amount': t['tax_amount'],
'account_analytic_id': t['account_analytic_id'],
})
return res
class res_partner(osv.osv):
""" Inherits partner and adds invoice information in the partner form """
_inherit = 'res.partner'
_columns = {
'invoice_ids': fields.one2many('account.invoice.line', 'partner_id', 'Invoices', readonly=True),
}
def _find_accounting_partner(self, partner):
'''
Find the partner for which the accounting entries will be created
'''
# FIXME: after 7.0, to replace by function field partner.commercial_partner_id
#if the chosen partner is not a company and has a parent company, use the parent for the journal entries
#because you want to invoice 'Agrolait, accounting department' but the journal items are for 'Agrolait'
while not partner.is_company and partner.parent_id:
partner = partner.parent_id
return partner
def copy(self, cr, uid, id, default=None, context=None):
default = default or {}
default.update({'invoice_ids' : []})
return super(res_partner, self).copy(cr, uid, id, default, context)
class mail_compose_message(osv.Model):
_inherit = 'mail.compose.message'
def send_mail(self, cr, uid, ids, context=None):
context = context or {}
if context.get('default_model') == 'account.invoice' and context.get('default_res_id') and context.get('mark_invoice_as_sent'):
context = dict(context, mail_post_autofollow=True)
self.pool.get('account.invoice').write(cr, uid, [context['default_res_id']], {'sent': True}, context=context)
return super(mail_compose_message, self).send_mail(cr, uid, ids, context=context)
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 | -1,727,593,621,761,000,200 | 52.211166 | 307 | 0.559449 | false |
lakewood999/Ciphers | rcipherx.py | 1 | 5693 | '''
=================================================================================================
R Cipher Suite
Includes all variants of the R cipher
=================================================================================================
Developed by: ProgramRandom, a division of RandomCorporations
A Page For This Project Will Be Created Soon On lakewood999.github.io
Visit my webpage at: https://lakewood999.github.io -- Note that this is my personal page, not the RandomCorporations page
=================================================================================================
What is the R cipher: This is just a random cipher that I came up with. I will not say this is a good cipher, or a perfect cipher, but it's just something I decided to make. The R cipher is an improved version of the Caesar cipher
Root of the name: R cipher
-Well, cipher is just what it is, and R stands for random, or things being randomly generated
=================================================================================================
License:
You are free to use this script free of charge; however, I am not responsible for any problems caused by this script. By using this program, you agree not to hold me liable for any damages related to this program.
You are free to modify and distribute this software (free of charge), but you are NOT allowed to commercialize (sell) this software. Please attribute this program to me if you are sharing or re-distributing it
=================================================================================================
Status:
This project is currently a WIP
-Variant "i" of the R cipher coming up
Version: Version 1: The X Update
R Cipher X - Progress: 100%
=================================================================================================
'''
import random
def letterout(x):
    # Map 1..26 to 'a'..'z'; any other value yields an empty string.
    n = int(x)
    if 1 <= n <= 26:
        return chr(ord("a") + n - 1)
    return ""
# This function returns the number corresponding to the input -- WIP: need to alternate
def numberout(x):
    # Map 'a'..'z' to "1".."26"; an empty string maps to "0" and anything
    # else (e.g. a space) to "".
    if x == "":
        return "0"
    if "a" <= x <= "z":
        return str(ord(x) - ord("a") + 1)
    return ""
def rcipherx(x):
# This nested function encrypts the text.
def encrypt(text):
encrypted = ""
key = ""
totalscan = len(text)
scan = 0
while scan < totalscan:
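# Each letter gets its own random shift (1-26); the shifts are collected into the key string.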
prekey = random.randint(1, 26)
letter = text[scan]
letternum = numberout(letter)
encryptout = ""
if letternum == "":
encryptout = " "
prekey = ""
else:
lettersum = prekey+int(letternum)
if lettersum > 26:
    lettersum -= 26  # wrap back into 1..26 (a bare % 26 would map 52 to 0)
encryptout = letterout(lettersum)
if key != "":
if prekey == "":
key = key
else:
key = key + ", " + str(prekey)
else:
if prekey == "":
key = key
else:
key = key + str(prekey)
encrypted += encryptout
scan += 1
print("Your encrypted message: "+encrypted)
print("Here is your key: "+key)
def decrypt(text):
decrypted = ""
key = input("What is the key(Key Numbers Must Be Separated By Commas With Spaces, e.g. 1, 2, 4): ")
keylist = key.split(', ')
print("Warning: Your key length must be equal to the number of characters in the text your are trying to decrypt, or this decryption will be unsuccessful")
totalscan = len(text)
scan = 0
keyscan = 0
while scan < totalscan:
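# Undo each shift using the next key entry; spaces consume no key entries.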
letter = text[scan]
letternum = numberout(letter)
decryptout = ""
if letternum == "":
decryptout = " "
scan = scan +1
else:
decryptout = int(letternum) - int(keylist[keyscan])
if decryptout <= 0:  # a result of 0 must wrap around to 'z'
    decryptout = letterout(26 - abs(decryptout))
else:
decryptout = letterout(decryptout)
scan = scan + 1
keyscan = keyscan+1
decrypted += str(decryptout)
print("Your decrpyted message is: "+decrypted)
print("This message was decrypted with a key of: "+key)
if x == "encrypt":
encrypt(input("Please type in the text you would like to encrypt: "))
elif x == "decrypt":
decrypt(input("Please type in the text you would like to decrypt: "))
#encrypt(input("Please type in the text you would like to encrypt: "))
#decrypt(input("Please type in the text you would like to decrypt: "))
#rcipherx()
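# A minimal usage sketch (not part of the original script; the prompt text is
# an assumption). The cipher is driven interactively through rcipherx(mode).
if __name__ == "__main__":
    mode = input("Type 'encrypt' or 'decrypt': ").strip().lower()
    if mode in ("encrypt", "decrypt"):
        rcipherx(mode)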
| mit | 2,846,263,050,524,925,000 | 23.433476 | 230 | 0.529773 | false |
SnappyDataInc/spark | examples/src/main/python/mllib/fpgrowth_example.py | 158 | 1280 | #
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# $example on$
from pyspark.mllib.fpm import FPGrowth
# $example off$
from pyspark import SparkContext
if __name__ == "__main__":
sc = SparkContext(appName="FPGrowth")
# $example on$
data = sc.textFile("data/mllib/sample_fpgrowth.txt")
transactions = data.map(lambda line: line.strip().split(' '))
model = FPGrowth.train(transactions, minSupport=0.2, numPartitions=10)
result = model.freqItemsets().collect()
for fi in result:
print(fi)
# $example off$
| apache-2.0 | 5,924,723,334,621,177,000 | 37.787879 | 74 | 0.73125 | false |
webjunkie/python-social-auth | social/backends/slack.py | 68 | 2414 | """
Slack OAuth2 backend, docs at:
http://psa.matiasaguirre.net/docs/backends/slack.html
https://api.slack.com/docs/oauth
"""
import re
from social.backends.oauth import BaseOAuth2
class SlackOAuth2(BaseOAuth2):
"""Slack OAuth authentication backend"""
name = 'slack'
AUTHORIZATION_URL = 'https://slack.com/oauth/authorize'
ACCESS_TOKEN_URL = 'https://slack.com/api/oauth.access'
ACCESS_TOKEN_METHOD = 'POST'
SCOPE_SEPARATOR = ','
REDIRECT_STATE = False
EXTRA_DATA = [
('id', 'id'),
('name', 'name'),
('real_name', 'real_name')
]
def get_user_details(self, response):
"""Return user details from Slack account"""
# Build the username with the team $username@$team_url
# Necessary to get unique names for all of slack
username = response.get('user')
if self.setting('USERNAME_WITH_TEAM', True):
match = re.search(r'//([^.]+)\.slack\.com', response['url'])
username = '{0}@{1}'.format(username, match.group(1))
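# e.g. user 'alice' on https://acme.slack.com becomes 'alice@acme' (illustrative values)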
out = {'username': username}
if 'profile' in response:
out.update({
'email': response['profile'].get('email'),
'fullname': response['profile'].get('real_name'),
'first_name': response['profile'].get('first_name'),
'last_name': response['profile'].get('last_name')
})
return out
def user_data(self, access_token, *args, **kwargs):
"""Loads user data from service"""
# Has to be two calls, because the users.info requires a username,
# And we want the team information. Check auth.test details at:
# https://api.slack.com/methods/auth.test
auth_test = self.get_json('https://slack.com/api/auth.test', params={
'token': access_token
})
# https://api.slack.com/methods/users.info
user_info = self.get_json('https://slack.com/api/users.info', params={
'token': access_token,
'user': auth_test.get('user_id')
})
if user_info.get('user'):
# Capture the user data, if available based on the scope
auth_test.update(user_info['user'])
# Clean up user_id vs id
auth_test['id'] = auth_test['user_id']
auth_test.pop('ok', None)
auth_test.pop('user_id', None)
return auth_test
| bsd-3-clause | -3,803,897,133,941,440,500 | 35.575758 | 78 | 0.575394 | false |
DESHRAJ/crowdsource-platform | crowdsourcing/models.py | 4 | 22804 | from django.contrib.auth.models import User
from django.db import models
from django.utils import timezone
from oauth2client.django_orm import FlowField, CredentialsField
from crowdsourcing.utils import get_delimiter
import pandas as pd
import os
class RegistrationModel(models.Model):
user = models.OneToOneField(User)
activation_key = models.CharField(max_length=40)
created_timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
class PasswordResetModel(models.Model):
user = models.OneToOneField(User)
reset_key = models.CharField(max_length=40)
created_timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
class Region(models.Model):
name = models.CharField(max_length=64, error_messages={'required': 'Please specify the region!', })
code = models.CharField(max_length=16, error_messages={'required': 'Please specify the region code!', })
created_timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
class Country(models.Model):
name = models.CharField(max_length=64, error_messages={'required': 'Please specify the country!', })
code = models.CharField(max_length=8, error_messages={'required': 'Please specify the country code!', })
region = models.ForeignKey(Region)
created_timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
def __unicode__(self):
return u'%s' % (self.name)
class City(models.Model):
name = models.CharField(max_length=64, error_messages={'required': 'Please specify the city!', })
country = models.ForeignKey(Country)
created_timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
def __unicode__(self):
return u'%s' % (self.name)
class Address(models.Model):
street = models.CharField(max_length=128, error_messages={'required': 'Please specify the street name!', })
country = models.ForeignKey(Country)
city = models.ForeignKey(City)
created_timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
def __unicode__(self):
return u'%s, %s, %s' % (self.street, self.city, self.country)
class Role(models.Model):
name = models.CharField(max_length=32, unique=True, error_messages={'required': 'Please specify the role name!',
'unique': 'The role %(value)r already exists. Please provide another name!'})
is_active = models.BooleanField(default=True)
deleted = models.BooleanField(default=False)
created_timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
class Language(models.Model):
name = models.CharField(max_length=64, error_messages={'required': 'Please specify the language!'})
iso_code = models.CharField(max_length=8)
created_timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
class UserProfile(models.Model):
user = models.OneToOneField(User)
gender_choices = (('M', 'Male'), ('F', 'Female'))
gender = models.CharField(max_length=1, choices=gender_choices)
address = models.ForeignKey(Address, null=True)
birthday = models.DateField(null=True, error_messages={'invalid': "Please enter a correct date format"})
nationality = models.ManyToManyField(Country, through='UserCountry')
verified = models.BooleanField(default=False)
picture = models.BinaryField(null=True)
friends = models.ManyToManyField('self', through='Friendship',
symmetrical=False)
roles = models.ManyToManyField(Role, through='UserRole')
deleted = models.BooleanField(default=False)
languages = models.ManyToManyField(Language, through='UserLanguage')
created_timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
class UserCountry(models.Model):
country = models.ForeignKey(Country)
user = models.ForeignKey(UserProfile)
created_timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
class Skill(models.Model):
name = models.CharField(max_length=128, error_messages={'required': "Please enter the skill name!"})
description = models.CharField(max_length=512, error_messages={'required': "Please enter the skill description!"})
verified = models.BooleanField(default=False)
parent = models.ForeignKey('self', null=True)
deleted = models.BooleanField(default=False)
created_timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
class Worker(models.Model):
profile = models.OneToOneField(UserProfile)
skills = models.ManyToManyField(Skill, through='WorkerSkill')
deleted = models.BooleanField(default=False)
alias = models.CharField(max_length=32, error_messages={'required': "Please enter an alias!"})
class WorkerSkill(models.Model):
worker = models.ForeignKey(Worker)
skill = models.ForeignKey(Skill)
level = models.IntegerField(null=True)
verified = models.BooleanField(default=False)
created_timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
class Meta:
unique_together = ('worker', 'skill')
class Requester(models.Model):
profile = models.OneToOneField(UserProfile)
alias = models.CharField(max_length=32, error_messages={'required': "Please enter an alias!"})
class UserRole(models.Model):
user_profile = models.ForeignKey(UserProfile)
role = models.ForeignKey(Role)
created_timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
class Friendship(models.Model):
user_source = models.ForeignKey(UserProfile, related_name='user_source')
user_target = models.ForeignKey(UserProfile, related_name='user_target')
deleted = models.BooleanField(default=False)
created_timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
class Category(models.Model):
name = models.CharField(max_length=128, error_messages={'required': "Please enter the category name!"})
parent = models.ForeignKey('self', null=True)
deleted = models.BooleanField(default=False)
created_timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
class Project(models.Model):
name = models.CharField(max_length=128, error_messages={'required': "Please enter the project name!"})
start_date = models.DateTimeField(auto_now_add=True, auto_now=False)
end_date = models.DateTimeField(auto_now_add=True, auto_now=False)
owner = models.ForeignKey(Requester, related_name='project_owner')
description = models.CharField(max_length=1024, default='')
collaborators = models.ManyToManyField(Requester, through='ProjectRequester')
keywords = models.TextField(null=True)
save_to_drive = models.BooleanField(default=False)
deleted = models.BooleanField(default=False)
categories = models.ManyToManyField(Category, through='ProjectCategory')
created_timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
class ProjectRequester(models.Model):
"""
Tracks the list of requesters that collaborate on a specific project
"""
requester = models.ForeignKey(Requester)
project = models.ForeignKey(Project)
created_timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
class Meta:
unique_together = ('requester', 'project')
class Template(models.Model):
name = models.CharField(max_length=128, error_messages={'required': "Please enter the template name!"})
owner = models.ForeignKey(UserProfile)
source_html = models.TextField(default=None, null=True)
price = models.FloatField(default=0)
share_with_others = models.BooleanField(default=False)
deleted = models.BooleanField(default=False)
created_timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
class Module(models.Model):
"""
aka Milestone
This is a group of similar tasks of the same kind.
Fields
-repetition: number of times a task needs to be performed
"""
name = models.CharField(max_length=128, error_messages={'required': "Please enter the module name!"})
description = models.TextField(error_messages={'required': "Please enter the module description!"})
owner = models.ForeignKey(Requester)
project = models.ForeignKey(Project, related_name='modules')
categories = models.ManyToManyField(Category, through='ModuleCategory')
keywords = models.TextField(null=True)
# TODO: To be refined
statuses = ((1, "Created"),
(2, 'In Review'),
(3, 'In Progress'),
(4, 'Completed')
)
permission_types = ((1, "Others:Read+Write::Workers:Read+Write"),
(2, 'Others:Read::Workers:Read+Write'),
(3, 'Others:Read::Workers:Read'),
(4, 'Others:None::Workers:Read')
)
status = models.IntegerField(choices=statuses, default=1)
price = models.FloatField()
repetition = models.IntegerField(default=1)
module_timeout = models.IntegerField(default=0)
has_data_set = models.BooleanField(default=False)
data_set_location = models.CharField(max_length=256, default='No data set', null=True)
task_time = models.FloatField(default=0) # in minutes
deleted = models.BooleanField(default=False)
created_timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
template = models.ManyToManyField(Template, through='ModuleTemplate')
is_micro = models.BooleanField(default=True)
is_prototype = models.BooleanField(default=False)
min_rating = models.FloatField(default=0)
allow_feedback = models.BooleanField(default=True)
feedback_permissions = models.IntegerField(choices=permission_types, default=1)
class ModuleCategory(models.Model):
module = models.ForeignKey(Module)
category = models.ForeignKey(Category)
created_timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
class Meta:
unique_together = ('category', 'module')
class ProjectCategory(models.Model):
project = models.ForeignKey(Project)
category = models.ForeignKey(Category)
created_timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
class Meta:
unique_together = ('project', 'category')
class TemplateItem(models.Model):
name = models.CharField(max_length=128, error_messages={'required': "Please enter the name of the template item!"})
template = models.ForeignKey(Template, related_name='template_items')
id_string = models.CharField(max_length=128)
role = models.CharField(max_length=16)
icon = models.CharField(max_length=256, null=True)
data_source = models.CharField(max_length=256, null=True)
layout = models.CharField(max_length=16, default='column')
type = models.CharField(max_length=16)
sub_type = models.CharField(max_length=16)
values = models.TextField(null=True)
position = models.IntegerField()
deleted = models.BooleanField(default=False)
created_timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
class Meta:
ordering = ['position']
class ModuleTemplate(models.Model):
module = models.ForeignKey(Module)
template = models.ForeignKey(Template)
class TemplateItemProperties(models.Model):
template_item = models.ForeignKey(TemplateItem)
attribute = models.CharField(max_length=128)
operator = models.CharField(max_length=128)
value1 = models.CharField(max_length=128)
value2 = models.CharField(max_length=128)
created_timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
class Task(models.Model):
module = models.ForeignKey(Module, related_name='module_tasks')
# TODO: To be refined
statuses = ((1, "Created"),
(2, 'Accepted'),
(3, 'Assigned'),
(4, 'Finished')
)
status = models.IntegerField(choices=statuses, default=1)
data = models.TextField(null=True)
deleted = models.BooleanField(default=False)
created_timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
price = models.FloatField(default=0)
class TaskWorker(models.Model):
task = models.ForeignKey(Task, related_name='task_workers')
worker = models.ForeignKey(Worker)
statuses = ((1, 'In Progress'),
(2, 'Submitted'),
(3, 'Accepted'),
(4, 'Rejected'),
(5, 'Returned'),
(6, 'Skipped')
)
task_status = models.IntegerField(choices=statuses, default=1)
created_timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
is_paid = models.BooleanField(default=False)
class TaskWorkerResult(models.Model):
task_worker = models.ForeignKey(TaskWorker, related_name='task_worker_results')
result = models.TextField(null=True)
template_item = models.ForeignKey(TemplateItem)
# TODO: To be refined
statuses = ((1, 'Created'),
(2, 'Accepted'),
(3, 'Rejected')
)
status = models.IntegerField(choices=statuses, default=1)
created_timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
class WorkerModuleApplication(models.Model):
worker = models.ForeignKey(Worker)
module = models.ForeignKey(Module)
# TODO: To be refined
statuses = ((1, "Created"),
(2, 'Accepted'),
(3, 'Rejected')
)
status = models.IntegerField(choices=statuses, default=1)
created_timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
class ActivityLog(models.Model):
"""
Track all user's activities: Create, Update and Delete
"""
activity = models.CharField(max_length=512)
author = models.ForeignKey(User)
created_timestamp = models.DateTimeField(auto_now_add=False, auto_now=True)
class Qualification(models.Model):
module = models.ForeignKey(Module)
# TODO: To be refined
types = ((1, "Strict"),
(2, 'Flexible'))
type = models.IntegerField(choices=types, default=1)
created_timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
class QualificationItem(models.Model):
qualification = models.ForeignKey(Qualification)
attribute = models.CharField(max_length=128)
operator = models.CharField(max_length=128)
value1 = models.CharField(max_length=128)
value2 = models.CharField(max_length=128)
created_timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
class UserLanguage(models.Model):
language = models.ForeignKey(Language)
user = models.ForeignKey(UserProfile)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
class Currency(models.Model):
name = models.CharField(max_length=32)
iso_code = models.CharField(max_length=8)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
class UserPreferences(models.Model):
user = models.OneToOneField(User)
language = models.ForeignKey(Language)
currency = models.ForeignKey(Currency)
login_alerts = models.SmallIntegerField(default=0)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
class RequesterRanking(models.Model):
requester_name = models.CharField(max_length=128)
requester_payRank = models.FloatField()
requester_fairRank = models.FloatField()
requester_speedRank = models.FloatField()
requester_communicationRank = models.FloatField()
requester_numberofReviews = models.IntegerField(default=0)
class ModuleRating(models.Model):
worker = models.ForeignKey(Worker)
module = models.ForeignKey(Module)
value = models.IntegerField()
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
class Meta:
unique_together = ('worker', 'module')
class ModuleReview(models.Model):
worker = models.ForeignKey(Worker)
anonymous = models.BooleanField(default=False)
module = models.ForeignKey(Module)
comments = models.TextField()
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
class Meta:
unique_together = ('worker', 'module')
class FlowModel(models.Model):
id = models.OneToOneField(User, primary_key=True)
flow = FlowField()
class AccountModel(models.Model):
name = models.CharField(max_length=128)
type = models.CharField(max_length=16)
email = models.EmailField()
access_token = models.TextField(max_length=2048)
root = models.CharField(max_length=256)
is_active = models.IntegerField()
quota = models.BigIntegerField()
used_space = models.BigIntegerField()
assigned_space = models.BigIntegerField()
status = models.IntegerField(default=0)  # assumed fix: the original 'default=quota' passed a Field object, which cannot serve as a default
owner = models.ForeignKey(User)
class CredentialsModel(models.Model):
account = models.ForeignKey(AccountModel)
credential = CredentialsField()
class TemporaryFlowModel(models.Model):
user = models.ForeignKey(User)
type = models.CharField(max_length=16)
email = models.EmailField()
class BookmarkedProjects(models.Model):
profile = models.ForeignKey(UserProfile)
project = models.ForeignKey(Project)
class Conversation(models.Model):
subject = models.CharField(max_length=64)
sender = models.ForeignKey(User, related_name='sender')
created_timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
deleted = models.BooleanField(default=False)
recipients = models.ManyToManyField(User, through='ConversationRecipient')
class Message(models.Model):
conversation = models.ForeignKey(Conversation, related_name='messages')
sender = models.ForeignKey(User)
body = models.TextField(max_length=8192)
deleted = models.BooleanField(default=False)
status = models.IntegerField(default=1) # 1:Sent 2:Delivered 3:Read
created_timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
class ConversationRecipient(models.Model):
recipient = models.ForeignKey(User, related_name='recipients')
conversation = models.ForeignKey(Conversation, related_name='conversation_recipient')
date_added = models.DateTimeField(auto_now_add=True, auto_now=False)
class UserMessage(models.Model):
message = models.ForeignKey(Message)
user = models.ForeignKey(User)
deleted = models.BooleanField(default=False)
class RequesterInputFile(models.Model):
# TODO: will need to save files on a server rather than in a temporary folder
file = models.FileField(upload_to='tmp/')
deleted = models.BooleanField(default=False)
def parse_csv(self):
delimiter = get_delimiter(self.file.name)
df = pd.DataFrame(pd.read_csv(self.file, sep=delimiter))
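# orient='records' yields one dict per CSV row, e.g. [{'column': value, ...}, ...]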
return df.to_dict(orient='records')
def delete(self, *args, **kwargs):
root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
path = os.path.join(root, self.file.url[1:])
os.remove(path)
super(RequesterInputFile, self).delete(*args, **kwargs)
class WorkerRequesterRating(models.Model):
origin = models.ForeignKey(UserProfile, related_name='rating_origin')
target = models.ForeignKey(UserProfile, related_name='rating_target')
module = models.ForeignKey(Module, related_name='rating_module')
weight = models.FloatField(default=2)
origin_type = models.CharField(max_length=16)
created_timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
class Comment(models.Model):
sender = models.ForeignKey(UserProfile, related_name='comment_sender')
body = models.TextField(max_length=8192)
parent = models.ForeignKey('self', related_name='reply_to', null=True)
deleted = models.BooleanField(default=False)
created_timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
last_updated = models.DateTimeField(auto_now_add=False, auto_now=True)
class Meta:
ordering = ['created_timestamp']
class ModuleComment(models.Model):
module = models.ForeignKey(Module, related_name='modulecomment_module')
comment = models.ForeignKey(Comment, related_name='modulecomment_comment')
deleted = models.BooleanField(default=False)
class TaskComment(models.Model):
task = models.ForeignKey(Task, related_name='taskcomment_task')
comment = models.ForeignKey(Comment, related_name='taskcomment_comment')
deleted = models.BooleanField(default=False) | mit | -9,201,666,082,569,424,000 | 40.691042 | 149 | 0.710139 | false |
andela-bojengwa/talk | venv/lib/python2.7/site-packages/pip/req/req_set.py | 79 | 24967 | from __future__ import absolute_import
import logging
import os
from pip._vendor import pkg_resources
from pip._vendor import requests
from pip.download import (url_to_path, unpack_url)
from pip.exceptions import (InstallationError, BestVersionAlreadyInstalled,
DistributionNotFound, PreviousBuildDirError)
from pip.locations import (PIP_DELETE_MARKER_FILENAME, build_prefix)
from pip.req.req_install import InstallRequirement
from pip.utils import (display_path, rmtree, dist_in_usersite,
_make_build_dir, normalize_path)
from pip.utils.logging import indent_log
from pip.vcs import vcs
from pip.wheel import wheel_ext
logger = logging.getLogger(__name__)
class Requirements(object):
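# An insertion-ordered mapping from requirement name to InstallRequirement;
# key order is preserved so requirements are processed in the order added.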
def __init__(self):
self._keys = []
self._dict = {}
def keys(self):
return self._keys
def values(self):
return [self._dict[key] for key in self._keys]
def __contains__(self, item):
return item in self._keys
def __setitem__(self, key, value):
if key not in self._keys:
self._keys.append(key)
self._dict[key] = value
def __getitem__(self, key):
return self._dict[key]
def __repr__(self):
values = ['%s: %s' % (repr(k), repr(self[k])) for k in self.keys()]
return 'Requirements({%s})' % ', '.join(values)
class RequirementSet(object):
def __init__(self, build_dir, src_dir, download_dir, upgrade=False,
ignore_installed=False, as_egg=False, target_dir=None,
ignore_dependencies=False, force_reinstall=False,
use_user_site=False, session=None, pycompile=True,
isolated=False, wheel_download_dir=None):
if session is None:
raise TypeError(
"RequirementSet() missing 1 required keyword argument: "
"'session'"
)
self.build_dir = build_dir
self.src_dir = src_dir
self.download_dir = download_dir
self.upgrade = upgrade
self.ignore_installed = ignore_installed
self.force_reinstall = force_reinstall
self.requirements = Requirements()
# Mapping of alias: real_name
self.requirement_aliases = {}
self.unnamed_requirements = []
self.ignore_dependencies = ignore_dependencies
self.successfully_downloaded = []
self.successfully_installed = []
self.reqs_to_cleanup = []
self.as_egg = as_egg
self.use_user_site = use_user_site
self.target_dir = target_dir # set from --target option
self.session = session
self.pycompile = pycompile
self.isolated = isolated
if wheel_download_dir:
wheel_download_dir = normalize_path(wheel_download_dir)
self.wheel_download_dir = wheel_download_dir
def __str__(self):
reqs = [req for req in self.requirements.values()
if not req.comes_from]
reqs.sort(key=lambda req: req.name.lower())
return ' '.join([str(req.req) for req in reqs])
def add_requirement(self, install_req):
if not install_req.match_markers():
logger.debug("Ignore %s: markers %r don't match",
install_req.name, install_req.markers)
return
name = install_req.name
install_req.as_egg = self.as_egg
install_req.use_user_site = self.use_user_site
install_req.target_dir = self.target_dir
install_req.pycompile = self.pycompile
if not name:
# url or path requirement w/o an egg fragment
self.unnamed_requirements.append(install_req)
else:
if self.has_requirement(name):
raise InstallationError(
'Double requirement given: %s (already in %s, name=%r)'
% (install_req, self.get_requirement(name), name))
self.requirements[name] = install_req
# FIXME: what about other normalizations? E.g., _ vs. -?
if name.lower() != name:
self.requirement_aliases[name.lower()] = name
def has_requirement(self, project_name):
for name in project_name, project_name.lower():
if name in self.requirements or name in self.requirement_aliases:
return True
return False
@property
def has_requirements(self):
return list(self.requirements.values()) or self.unnamed_requirements
@property
def is_download(self):
if self.download_dir:
self.download_dir = os.path.expanduser(self.download_dir)
if os.path.exists(self.download_dir):
return True
else:
logger.critical('Could not find download directory')
raise InstallationError(
"Could not find or access download directory '%s'"
% display_path(self.download_dir))
return False
def get_requirement(self, project_name):
for name in project_name, project_name.lower():
if name in self.requirements:
return self.requirements[name]
if name in self.requirement_aliases:
return self.requirements[self.requirement_aliases[name]]
raise KeyError("No project with the name %r" % project_name)
def uninstall(self, auto_confirm=False):
for req in self.requirements.values():
req.uninstall(auto_confirm=auto_confirm)
req.commit_uninstall()
def locate_files(self):
# FIXME: duplicates code from prepare_files; relevant code should
# probably be factored out into a separate method
unnamed = list(self.unnamed_requirements)
reqs = list(self.requirements.values())
while reqs or unnamed:
if unnamed:
req_to_install = unnamed.pop(0)
else:
req_to_install = reqs.pop(0)
install_needed = True
if not self.ignore_installed and not req_to_install.editable:
req_to_install.check_if_exists()
if req_to_install.satisfied_by:
if self.upgrade:
# don't uninstall conflict if user install and
# conflict is not user install
if not (self.use_user_site
and not dist_in_usersite(
req_to_install.satisfied_by
)):
req_to_install.conflicts_with = \
req_to_install.satisfied_by
req_to_install.satisfied_by = None
else:
install_needed = False
if req_to_install.satisfied_by:
logger.info(
'Requirement already satisfied (use --upgrade to '
'upgrade): %s',
req_to_install,
)
if req_to_install.editable:
if req_to_install.source_dir is None:
req_to_install.source_dir = req_to_install.build_location(
self.src_dir
)
elif install_needed:
req_to_install.source_dir = req_to_install.build_location(
self.build_dir,
)
if (req_to_install.source_dir is not None
and not os.path.isdir(req_to_install.source_dir)):
raise InstallationError(
'Could not install requirement %s because source folder %s'
' does not exist (perhaps --no-download was used without '
'first running an equivalent install with --no-install?)' %
(req_to_install, req_to_install.source_dir)
)
def prepare_files(self, finder):
"""
Prepare process. Create temp directories, download and/or unpack files.
"""
from pip.index import Link
unnamed = list(self.unnamed_requirements)
reqs = list(self.requirements.values())
while reqs or unnamed:
if unnamed:
req_to_install = unnamed.pop(0)
else:
req_to_install = reqs.pop(0)
install = True
best_installed = False
not_found = None
# ############################################# #
# # Search for archive to fulfill requirement # #
# ############################################# #
if not self.ignore_installed and not req_to_install.editable:
req_to_install.check_if_exists()
if req_to_install.satisfied_by:
if self.upgrade:
if not self.force_reinstall and not req_to_install.url:
try:
url = finder.find_requirement(
req_to_install, self.upgrade)
except BestVersionAlreadyInstalled:
best_installed = True
install = False
except DistributionNotFound as exc:
not_found = exc
else:
# Avoid the need to call find_requirement again
req_to_install.url = url.url
if not best_installed:
# don't uninstall conflict if user install and
# conflict is not user install
if not (self.use_user_site
and not dist_in_usersite(
req_to_install.satisfied_by
)):
req_to_install.conflicts_with = \
req_to_install.satisfied_by
req_to_install.satisfied_by = None
else:
install = False
if req_to_install.satisfied_by:
if best_installed:
logger.info(
'Requirement already up-to-date: %s',
req_to_install,
)
else:
logger.info(
'Requirement already satisfied (use --upgrade to '
'upgrade): %s',
req_to_install,
)
if req_to_install.editable:
logger.info('Obtaining %s', req_to_install)
elif install:
if (req_to_install.url
and req_to_install.url.lower().startswith('file:')):
path = url_to_path(req_to_install.url)
logger.info('Processing %s', display_path(path))
else:
logger.info('Collecting %s', req_to_install)
with indent_log():
# ################################ #
# # vcs update or unpack archive # #
# ################################ #
is_wheel = False
if req_to_install.editable:
if req_to_install.source_dir is None:
location = req_to_install.build_location(self.src_dir)
req_to_install.source_dir = location
else:
location = req_to_install.source_dir
if not os.path.exists(self.build_dir):
_make_build_dir(self.build_dir)
req_to_install.update_editable(not self.is_download)
if self.is_download:
req_to_install.run_egg_info()
req_to_install.archive(self.download_dir)
else:
req_to_install.run_egg_info()
elif install:
# @@ if filesystem packages are not marked
# editable in a req, a non deterministic error
# occurs when the script attempts to unpack the
# build directory
# NB: This call can result in the creation of a temporary
# build directory
location = req_to_install.build_location(
self.build_dir,
)
unpack = True
url = None
# If a checkout exists, it's unwise to keep going. version
# inconsistencies are logged later, but do not fail the
# installation.
if os.path.exists(os.path.join(location, 'setup.py')):
raise PreviousBuildDirError(
"pip can't proceed with requirements '%s' due to a"
" pre-existing build directory (%s). This is "
"likely due to a previous installation that failed"
". pip is being responsible and not assuming it "
"can delete this. Please delete it and try again."
% (req_to_install, location)
)
else:
# FIXME: this won't upgrade when there's an existing
# package unpacked in `location`
if req_to_install.url is None:
if not_found:
raise not_found
url = finder.find_requirement(
req_to_install,
upgrade=self.upgrade,
)
else:
# FIXME: should req_to_install.url already be a
# link?
url = Link(req_to_install.url)
assert url
if url:
try:
if (
url.filename.endswith(wheel_ext)
and self.wheel_download_dir
):
# when doing 'pip wheel`
download_dir = self.wheel_download_dir
do_download = True
else:
download_dir = self.download_dir
do_download = self.is_download
unpack_url(
url, location, download_dir,
do_download, session=self.session,
)
except requests.HTTPError as exc:
logger.critical(
'Could not install requirement %s because '
'of error %s',
req_to_install,
exc,
)
raise InstallationError(
'Could not install requirement %s because '
'of HTTP error %s for URL %s' %
(req_to_install, exc, url)
)
else:
unpack = False
if unpack:
is_wheel = url and url.filename.endswith(wheel_ext)
if self.is_download:
req_to_install.source_dir = location
if not is_wheel:
# FIXME:https://github.com/pypa/pip/issues/1112
req_to_install.run_egg_info()
if url and url.scheme in vcs.all_schemes:
req_to_install.archive(self.download_dir)
elif is_wheel:
req_to_install.source_dir = location
req_to_install.url = url.url
else:
req_to_install.source_dir = location
req_to_install.run_egg_info()
req_to_install.assert_source_matches_version()
# req_to_install.req is only avail after unpack for URL
# pkgs repeat check_if_exists to uninstall-on-upgrade
# (#14)
if not self.ignore_installed:
req_to_install.check_if_exists()
if req_to_install.satisfied_by:
if self.upgrade or self.ignore_installed:
# don't uninstall conflict if user install and
# conflict is not user install
if not (self.use_user_site
and not dist_in_usersite(
req_to_install.satisfied_by)):
req_to_install.conflicts_with = \
req_to_install.satisfied_by
req_to_install.satisfied_by = None
else:
logger.info(
'Requirement already satisfied (use '
'--upgrade to upgrade): %s',
req_to_install,
)
install = False
# ###################### #
# # parse dependencies # #
# ###################### #
if (req_to_install.extras):
logger.debug(
"Installing extra requirements: %r",
','.join(req_to_install.extras),
)
if is_wheel:
dist = list(
pkg_resources.find_distributions(location)
)[0]
else: # sdists
if req_to_install.satisfied_by:
dist = req_to_install.satisfied_by
else:
dist = req_to_install.get_dist()
# FIXME: shouldn't be globally added:
if dist.has_metadata('dependency_links.txt'):
finder.add_dependency_links(
dist.get_metadata_lines('dependency_links.txt')
)
if not self.ignore_dependencies:
for subreq in dist.requires(
req_to_install.extras):
if self.has_requirement(
subreq.project_name):
# FIXME: check for conflict
continue
subreq = InstallRequirement(
str(subreq),
req_to_install,
isolated=self.isolated,
)
reqs.append(subreq)
self.add_requirement(subreq)
if not self.has_requirement(req_to_install.name):
# 'unnamed' requirements will get added here
self.add_requirement(req_to_install)
# cleanup tmp src
if (self.is_download or
req_to_install._temp_build_dir is not None):
self.reqs_to_cleanup.append(req_to_install)
if install:
self.successfully_downloaded.append(req_to_install)
def cleanup_files(self):
"""Clean up files, remove builds."""
logger.debug('Cleaning up...')
with indent_log():
for req in self.reqs_to_cleanup:
req.remove_temporary_source()
if self._pip_has_created_build_dir():
logger.debug('Removing temporary dir %s...', self.build_dir)
rmtree(self.build_dir)
def _pip_has_created_build_dir(self):
return (
self.build_dir == build_prefix
and os.path.exists(
os.path.join(self.build_dir, PIP_DELETE_MARKER_FILENAME)
)
)
def install(self, install_options, global_options=(), *args, **kwargs):
"""
Install everything in this set (after having downloaded and unpacked
the packages)
"""
to_install = [r for r in self.requirements.values()[::-1]
if not r.satisfied_by]
# DISTRIBUTE TO SETUPTOOLS UPGRADE HACK (1 of 3 parts)
# move the distribute-0.7.X wrapper to the end because it does not
# install a setuptools package. By moving it to the end, we ensure its
# setuptools dependency is handled first, which will provide the
# setuptools package
# TODO: take this out later
distribute_req = pkg_resources.Requirement.parse("distribute>=0.7")
for req in to_install:
if (req.name == 'distribute'
and req.installed_version is not None
and req.installed_version in distribute_req):
to_install.remove(req)
to_install.append(req)
if to_install:
logger.info(
'Installing collected packages: %s',
', '.join([req.name for req in to_install]),
)
with indent_log():
for requirement in to_install:
# DISTRIBUTE TO SETUPTOOLS UPGRADE HACK (1 of 3 parts)
# when upgrading from distribute-0.6.X to the new merged
# setuptools in py2, we need to force setuptools to uninstall
# distribute. In py3, which is always using distribute, this
# conversion is already happening in distribute's
# pkg_resources. It's ok *not* to check if setuptools>=0.7
# because if someone were actually trying to upgrade from
# distribute to setuptools 0.6.X, then all this could do is
# actually help, although that upgrade path was certainly never
# "supported"
# TODO: remove this later
if requirement.name == 'setuptools':
try:
# only uninstall distribute<0.7. For >=0.7, setuptools
# will also be present, and that's what we need to
# uninstall
distribute_requirement = \
pkg_resources.Requirement.parse("distribute<0.7")
existing_distribute = \
pkg_resources.get_distribution("distribute")
if existing_distribute in distribute_requirement:
requirement.conflicts_with = existing_distribute
except pkg_resources.DistributionNotFound:
# distribute wasn't installed, so nothing to do
pass
if requirement.conflicts_with:
logger.info(
'Found existing installation: %s',
requirement.conflicts_with,
)
with indent_log():
requirement.uninstall(auto_confirm=True)
try:
requirement.install(
install_options,
global_options,
*args,
**kwargs
)
except:
# if install did not succeed, rollback previous uninstall
if (requirement.conflicts_with
and not requirement.install_succeeded):
requirement.rollback_uninstall()
raise
else:
if (requirement.conflicts_with
and requirement.install_succeeded):
requirement.commit_uninstall()
requirement.remove_temporary_source()
self.successfully_installed = to_install
| mit | -1,400,298,254,899,841,800 | 43.346359 | 79 | 0.464894 | false |
apurvbhartia/gnuradio-routing | gr-wxgui/grc/top_block_gui.py | 18 | 2250 | # Copyright 2008, 2009 Free Software Foundation, Inc.
#
# This file is part of GNU Radio
#
# GNU Radio is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3, or (at your option)
# any later version.
#
# GNU Radio is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with GNU Radio; see the file COPYING. If not, write to
# the Free Software Foundation, Inc., 51 Franklin Street,
# Boston, MA 02110-1301, USA.
#
import wx
from gnuradio import gr
import panel
default_gui_size = (200, 100)
class top_block_gui(gr.top_block):
"""gr top block with wx gui app and grid sizer."""
def __init__(self, title='', size=default_gui_size):
"""
Initialize the gr top block.
Create the wx gui elements.
@param title the main window title
@param size the main window size tuple in pixels
"""
#initialize
gr.top_block.__init__(self)
self._size = size
#create gui elements
self._app = wx.App()
self._frame = wx.Frame(None, title=title)
self._panel = panel.Panel(self._frame)
self.Add = self._panel.Add
self.GridAdd = self._panel.GridAdd
self.GetWin = self._panel.GetWin
def SetIcon(self, *args, **kwargs): self._frame.SetIcon(*args, **kwargs)
def Run(self, start=True):
"""
Setup the wx gui elements.
Start the gr top block.
Block with the wx main loop.
"""
#set minimal window size
self._frame.SetSizeHints(*self._size)
#create callback for quit
def _quit(event):
self.stop(); self.wait()
self._frame.Destroy()
#setup app
self._frame.Bind(wx.EVT_CLOSE, _quit)
self._sizer = wx.BoxSizer(wx.VERTICAL)
self._sizer.Add(self._panel, 0, wx.EXPAND)
self._frame.SetSizerAndFit(self._sizer)
self._frame.SetAutoLayout(True)
self._frame.Show(True)
self._app.SetTopWindow(self._frame)
#start flow graph
if start: self.start()
#blocking main loop
self._app.MainLoop()
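# A minimal usage sketch (an assumption, not part of the original file): even
# an empty flow graph is enough to exercise the window and main-loop plumbing.
if __name__ == '__main__':
    top_block_gui(title='Demo').Run()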
| gpl-3.0 | 7,023,478,842,067,168,000 | 29.405405 | 73 | 0.708444 | false |
borisroman/vdsm | vdsm_hooks/ovs/ovs_after_network_setup_fail.py | 1 | 1636 | #!/usr/bin/env python
# Copyright 2015 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
#
# Refer to the README and COPYING files for full details of the license
#
from functools import partial
import traceback
from vdsm import supervdsm
import hooking
import ovs_utils
log = partial(ovs_utils.log, tag='ovs_after_network_setup_fail: ')
def main():
setup_nets_config = hooking.read_json()
in_rollback = setup_nets_config['request']['options'].get('_inRollback')
if in_rollback:
log('Configuration failed with _inRollback=True.')
else:
log('Configuration failed. At this point, non-OVS rollback should be '
'done. Executing OVS rollback.')
supervdsm.getProxy().setupNetworks(
{}, {}, {'connectivityCheck': False, '_inRollback': True,
'_inOVSRollback': True})
if __name__ == '__main__':
try:
main()
except:
hooking.exit_hook(traceback.format_exc())
| gpl-2.0 | -1,356,778,651,513,556,700 | 31.078431 | 78 | 0.69621 | false |
40223220/2015_cdb_g7_40223220 | static/Brython3.1.1-20150328-091302/Lib/xml/sax/__init__.py | 637 | 3505 | """Simple API for XML (SAX) implementation for Python.
This module provides an implementation of the SAX 2 interface;
information about the Java version of the interface can be found at
http://www.megginson.com/SAX/. The Python version of the interface is
documented at <...>.
This package contains the following modules:
handler -- Base classes and constants which define the SAX 2 API for
the 'client-side' of SAX for Python.
saxutils -- Implementation of the convenience classes commonly used to
work with SAX.
xmlreader -- Base classes and constants which define the SAX 2 API for
the parsers used with SAX for Python.
expatreader -- Driver that allows use of the Expat parser with SAX.
"""
from .xmlreader import InputSource
from .handler import ContentHandler, ErrorHandler
from ._exceptions import SAXException, SAXNotRecognizedException, \
SAXParseException, SAXNotSupportedException, \
SAXReaderNotAvailable
def parse(source, handler, errorHandler=ErrorHandler()):
parser = make_parser()
parser.setContentHandler(handler)
parser.setErrorHandler(errorHandler)
parser.parse(source)
def parseString(string, handler, errorHandler=ErrorHandler()):
from io import BytesIO
if errorHandler is None:
errorHandler = ErrorHandler()
parser = make_parser()
parser.setContentHandler(handler)
parser.setErrorHandler(errorHandler)
inpsrc = InputSource()
inpsrc.setByteStream(BytesIO(string))
parser.parse(inpsrc)
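# Example (sketch, not part of the original module): a minimal ContentHandler
# driven through parseString. The XML payload is made up for illustration;
# note that parseString wraps its input in BytesIO, so it expects bytes.
#
#   class TagCounter(ContentHandler):
#       def __init__(self):
#           ContentHandler.__init__(self)
#           self.count = 0
#       def startElement(self, name, attrs):
#           self.count += 1
#
#   handler = TagCounter()
#   parseString(b'<root><a/><b/></root>', handler)
#   assert handler.count == 3  # root, a, b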
# this is the parser list used by the make_parser function if no
# alternatives are given as parameters to the function
default_parser_list = ["xml.sax.expatreader"]
# tell modulefinder that importing sax potentially imports expatreader
_false = 0
if _false:
import xml.sax.expatreader
import os, sys
#if "PY_SAX_PARSER" in os.environ:
# default_parser_list = os.environ["PY_SAX_PARSER"].split(",")
del os
_key = "python.xml.sax.parser"
if sys.platform[:4] == "java" and sys.registry.containsKey(_key):
default_parser_list = sys.registry.getProperty(_key).split(",")
def make_parser(parser_list = []):
"""Creates and returns a SAX parser.
Creates the first parser it is able to instantiate of the ones
given in the list created by doing parser_list +
default_parser_list. The lists must contain the names of Python
modules containing both a SAX parser and a create_parser function."""
for parser_name in parser_list + default_parser_list:
try:
return _create_parser(parser_name)
except ImportError as e:
import sys
if parser_name in sys.modules:
# The parser module was found, but importing it
# failed unexpectedly, pass this exception through
raise
except SAXReaderNotAvailable:
# The parser module detected that it won't work properly,
# so try the next one
pass
raise SAXReaderNotAvailable("No parsers found", None)
# --- Internal utility methods used by make_parser
if sys.platform[ : 4] == "java":
def _create_parser(parser_name):
from org.python.core import imp
drv_module = imp.importName(parser_name, 0, globals())
return drv_module.create_parser()
else:
def _create_parser(parser_name):
drv_module = __import__(parser_name,{},{},['create_parser'])
return drv_module.create_parser()
del sys
| gpl-3.0 | 3,683,180,443,397,805,600 | 32.380952 | 73 | 0.690728 | false |
tntnatbry/tensorflow | tensorflow/contrib/learn/python/learn/estimators/state_saving_rnn_estimator.py | 9 | 38410 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Estimator for State Saving RNNs."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import functools
from tensorflow.contrib import layers
from tensorflow.contrib import metrics
from tensorflow.contrib import rnn as rnn_cell
from tensorflow.contrib.framework.python.framework import deprecated
from tensorflow.contrib.layers.python.layers import feature_column_ops
from tensorflow.contrib.layers.python.layers import optimizers
from tensorflow.contrib.learn.python.learn import metric_spec
from tensorflow.contrib.learn.python.learn.estimators import constants
from tensorflow.contrib.learn.python.learn.estimators import estimator
from tensorflow.contrib.learn.python.learn.estimators import model_fn
from tensorflow.contrib.learn.python.learn.estimators import prediction_key
from tensorflow.contrib.learn.python.learn.estimators import rnn_common
from tensorflow.contrib.rnn.python.ops import core_rnn
from tensorflow.contrib.training.python.training import sequence_queueing_state_saver as sqss
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.framework import sparse_tensor
from tensorflow.python.ops import array_ops
from tensorflow.python.training import momentum as momentum_opt
from tensorflow.python.util import nest
def construct_state_saving_rnn(cell,
inputs,
num_label_columns,
state_saver,
state_name,
scope='rnn'):
"""Build a state saving RNN and apply a fully connected layer.
Args:
cell: An instance of `RNNCell`.
inputs: A length `T` list of inputs, each a `Tensor` of shape
`[batch_size, input_size, ...]`.
num_label_columns: The desired output dimension.
state_saver: A state saver object with methods `state` and `save_state`.
state_name: Python string or tuple of strings. The name to use with the
state_saver. If the cell returns tuples of states (i.e.,
`cell.state_size` is a tuple) then `state_name` should be a tuple of
strings having the same length as `cell.state_size`. Otherwise it should
be a single string.
scope: `VariableScope` for the created subgraph; defaults to "rnn".
Returns:
activations: The output of the RNN, projected to `num_label_columns`
dimensions, a `Tensor` of shape `[batch_size, T, num_label_columns]`.
final_state: The final state output by the RNN
"""
with ops.name_scope(scope):
rnn_outputs, final_state = core_rnn.static_state_saving_rnn(
cell=cell,
inputs=inputs,
state_saver=state_saver,
state_name=state_name,
scope=scope)
# Convert rnn_outputs from a list of time-major order Tensors to a single
# Tensor of batch-major order.
rnn_outputs = array_ops.stack(rnn_outputs, axis=1)
activations = layers.fully_connected(
inputs=rnn_outputs,
num_outputs=num_label_columns,
activation_fn=None,
trainable=True)
# Use `identity` to rename `final_state`.
final_state = array_ops.identity(
final_state, name=rnn_common.RNNKeys.FINAL_STATE_KEY)
return activations, final_state
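# Call-pattern sketch (added for illustration, mirroring how _rnn_model_fn
# below uses this function; the variable names are assumptions): `batch` is a
# NextQueuedSequenceBatch produced by the SQSS, `inputs` is a list of
# `num_unroll` tensors of shape [batch_size, d], and `activations` comes back
# with shape [batch_size, num_unroll, num_label_columns].
#
#   activations, final_state = construct_state_saving_rnn(
#       cell=lstm_cell(num_units=32, num_rnn_layers=1,
#                      dropout_keep_probabilities=None),
#       inputs=inputs,
#       num_label_columns=2,
#       state_saver=batch,
#       state_name=_get_lstm_state_names(1))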
# TODO(jtbates): As per cl/14156248, remove this function and switch from
# MetricSpec to metric ops.
def _mask_multivalue(sequence_length, metric):
"""Wrapper function that masks values by `sequence_length`.
Args:
sequence_length: A `Tensor` with shape `[batch_size]` and dtype `int32`
containing the length of each sequence in the batch. If `None`, sequences
are assumed to be unpadded.
metric: A metric function. Its signature must contain `predictions` and
`labels`.
Returns:
A metric function that masks `predictions` and `labels` using
`sequence_length` and then applies `metric` to the results.
"""
@functools.wraps(metric)
def _metric(predictions, labels, *args, **kwargs):
predictions, labels = rnn_common.mask_activations_and_labels(
predictions, labels, sequence_length)
return metric(predictions, labels, *args, **kwargs)
return _metric
def _get_default_metrics(problem_type, sequence_length):
"""Returns default `MetricSpec`s for `problem_type`.
Args:
problem_type: `ProblemType.CLASSIFICATION` or
`ProblemType.LINEAR_REGRESSION`.
sequence_length: A `Tensor` with shape `[batch_size]` and dtype `int32`
containing the length of each sequence in the batch. If `None`, sequences
are assumed to be unpadded.
Returns:
A `dict` mapping strings to `MetricSpec`s.
"""
default_metrics = {}
if problem_type == constants.ProblemType.CLASSIFICATION:
default_metrics['accuracy'] = metric_spec.MetricSpec(
metric_fn=_mask_multivalue(sequence_length, metrics.streaming_accuracy),
prediction_key=prediction_key.PredictionKey.CLASSES)
elif problem_type == constants.ProblemType.LINEAR_REGRESSION:
pass
return default_metrics
def _multi_value_loss(
activations, labels, sequence_length, target_column, features):
"""Maps `activations` from the RNN to loss for multi value models.
Args:
activations: Output from an RNN. Should have dtype `float32` and shape
`[batch_size, padded_length, ?]`.
labels: A `Tensor` with length `[batch_size, padded_length]`.
sequence_length: A `Tensor` with shape `[batch_size]` and dtype `int32`
containing the length of each sequence in the batch. If `None`, sequences
are assumed to be unpadded.
target_column: An initialized `TargetColumn`, calculate predictions.
features: A `dict` containing the input and (optionally) sequence length
information and initial state.
Returns:
A scalar `Tensor` containing the loss.
"""
with ops.name_scope('MultiValueLoss'):
activations_masked, labels_masked = rnn_common.mask_activations_and_labels(
activations, labels, sequence_length)
return target_column.loss(activations_masked, labels_masked, features)
def _get_name_or_parent_names(column):
"""Gets the name of a column or its parent columns' names.
Args:
column: A sequence feature column derived from `FeatureColumn`.
Returns:
A list of the name of `column` or the names of its parent columns,
if any exist.
"""
# pylint: disable=protected-access
parent_columns = feature_column_ops._get_parent_columns(column)
if parent_columns:
return [x.name for x in parent_columns]
return [column.name]
def _prepare_features_for_sqss(features, labels, mode,
sequence_feature_columns,
context_feature_columns):
"""Prepares features for batching by the SQSS.
In preparation for batching by the SQSS, this function:
- Extracts the input key from the features dict.
- Separates sequence and context features dicts from the features dict.
- Adds the labels tensor to the sequence features dict.
Args:
features: A dict of Python string to an iterable of `Tensor` or
`SparseTensor` of rank 2, the `features` argument of a TF.Learn model_fn.
labels: An iterable of `Tensor`.
mode: Defines whether this is training, evaluation or prediction.
See `ModeKeys`.
sequence_feature_columns: An iterable containing all the feature columns
describing sequence features. All items in the set should be instances
of classes derived from `FeatureColumn`.
context_feature_columns: An iterable containing all the feature columns
      describing context features, i.e., features that apply across all time
steps. All items in the set should be instances of classes derived from
`FeatureColumn`.
Returns:
sequence_features: A dict mapping feature names to sequence features.
context_features: A dict mapping feature names to context features.
Raises:
ValueError: If `features` does not contain a value for every key in
`sequence_feature_columns` or `context_feature_columns`.
"""
# Extract sequence features.
feature_column_ops._check_supported_sequence_columns(sequence_feature_columns) # pylint: disable=protected-access
sequence_features = {}
for column in sequence_feature_columns:
for name in _get_name_or_parent_names(column):
feature = features.get(name, None)
if feature is None:
raise ValueError('No key in features for sequence feature: ' + name)
sequence_features[name] = feature
# Extract context features.
context_features = {}
if context_feature_columns is not None:
for column in context_feature_columns:
name = column.name
feature = features.get(name, None)
if feature is None:
raise ValueError('No key in features for context feature: ' + name)
context_features[name] = feature
# Add labels to the resulting sequence features dict.
if mode != model_fn.ModeKeys.INFER:
sequence_features[rnn_common.RNNKeys.LABELS_KEY] = labels
return sequence_features, context_features
def _read_batch(cell,
features,
labels,
mode,
num_unroll,
num_rnn_layers,
batch_size,
sequence_feature_columns,
context_feature_columns=None,
num_threads=3,
queue_capacity=1000,
seed=None):
"""Reads a batch from a state saving sequence queue.
Args:
cell: An initialized `RNNCell` to be used in the RNN.
features: A dict of Python string to an iterable of `Tensor`, the
`features` argument of a TF.Learn model_fn.
labels: An iterable of `Tensor`, the `labels` argument of a
TF.Learn model_fn.
mode: Defines whether this is training, evaluation or prediction.
See `ModeKeys`.
num_unroll: Python integer, how many time steps to unroll at a time.
The input sequences of length `k` are then split into `k / num_unroll`
many segments.
num_rnn_layers: Python integer, number of layers in the RNN.
batch_size: Python integer, the size of the minibatch produced by the SQSS.
sequence_feature_columns: An iterable containing all the feature columns
describing sequence features. All items in the set should be instances
of classes derived from `FeatureColumn`.
context_feature_columns: An iterable containing all the feature columns
      describing context features, i.e., features that apply across all time
steps. All items in the set should be instances of classes derived from
`FeatureColumn`.
num_threads: The Python integer number of threads enqueuing input examples
into a queue. Defaults to 3.
queue_capacity: The max capacity of the queue in number of examples.
Needs to be at least `batch_size`. Defaults to 1000. When iterating
over the same input example multiple times reusing their keys the
`queue_capacity` must be smaller than the number of examples.
seed: Fixes the random seed used for generating input keys by the SQSS.
Returns:
batch: A `NextQueuedSequenceBatch` containing batch_size `SequenceExample`
values and their saved internal states.
"""
# Set batch_size=1 to initialize SQSS with cell's zero state.
values = cell.zero_state(batch_size=1, dtype=dtypes.float32)
# Set up stateful queue reader.
states = {}
state_names = _get_lstm_state_names(num_rnn_layers)
for i in range(num_rnn_layers):
states[state_names[i][0]] = array_ops.squeeze(values[i][0], axis=0)
states[state_names[i][1]] = array_ops.squeeze(values[i][1], axis=0)
sequences, context = _prepare_features_for_sqss(
features, labels, mode, sequence_feature_columns,
context_feature_columns)
return sqss.batch_sequences_with_states(
input_key='key',
input_sequences=sequences,
input_context=context,
input_length=None, # infer sequence lengths
initial_states=states,
num_unroll=num_unroll,
batch_size=batch_size,
pad=True, # pad to a multiple of num_unroll
make_keys_unique=True,
make_keys_unique_seed=seed,
num_threads=num_threads,
capacity=queue_capacity)
def _get_state_name(i):
"""Constructs the name string for state component `i`."""
return '{}_{}'.format(rnn_common.RNNKeys.STATE_PREFIX, i)
def state_tuple_to_dict(state):
"""Returns a dict containing flattened `state`.
Args:
state: A `Tensor` or a nested tuple of `Tensors`. All of the `Tensor`s must
have the same rank and agree on all dimensions except the last.
Returns:
A dict containing the `Tensor`s that make up `state`. The keys of the dict
are of the form "STATE_PREFIX_i" where `i` is the place of this `Tensor`
in a depth-first traversal of `state`.
"""
with ops.name_scope('state_tuple_to_dict'):
flat_state = nest.flatten(state)
state_dict = {}
for i, state_component in enumerate(flat_state):
state_name = _get_state_name(i)
state_value = (None if state_component is None else array_ops.identity(
state_component, name=state_name))
state_dict[state_name] = state_value
return state_dict
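# Illustration (assumed example, not from the original file): for a
# single-layer LSTM state tuple (c, m), nest.flatten yields [c, m], so the
# result has two entries keyed _get_state_name(0) and _get_state_name(1):
#
#   c = array_ops.zeros([4, 32])
#   m = array_ops.zeros([4, 32])
#   state_dict = state_tuple_to_dict((c, m))
#   # sorted(state_dict) == [_get_state_name(0), _get_state_name(1)]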
def _prepare_inputs_for_rnn(sequence_features, context_features,
sequence_feature_columns, num_unroll):
"""Prepares features batched by the SQSS for input to a state-saving RNN.
Args:
sequence_features: A dict of sequence feature name to `Tensor` or
`SparseTensor`, with `Tensor`s of shape `[batch_size, num_unroll, ...]`
or `SparseTensors` of dense shape `[batch_size, num_unroll, d]`.
context_features: A dict of context feature name to `Tensor`, with
tensors of shape `[batch_size, 1, ...]` and type float32.
sequence_feature_columns: An iterable containing all the feature columns
describing sequence features. All items in the set should be instances
of classes derived from `FeatureColumn`.
num_unroll: Python integer, how many time steps to unroll at a time.
The input sequences of length `k` are then split into `k / num_unroll`
many segments.
Returns:
features_by_time: A list of length `num_unroll` with `Tensor` entries of
shape `[batch_size, sum(sequence_features dimensions) +
sum(context_features dimensions)]` of type float32.
Context features are copied into each time step.
"""
def _tile(feature):
return array_ops.squeeze(
array_ops.tile(array_ops.expand_dims(feature, 1), [1, num_unroll, 1]),
axis=2)
for feature in sequence_features.values():
if isinstance(feature, sparse_tensor.SparseTensor):
# Explicitly set dense_shape's shape to 3 ([batch_size, num_unroll, d])
# since it can't be statically inferred.
feature.dense_shape.set_shape([3])
sequence_features = layers.sequence_input_from_feature_columns(
columns_to_tensors=sequence_features,
feature_columns=sequence_feature_columns,
weight_collections=None,
scope=None)
# Explicitly set shape along dimension 1 to num_unroll for the unstack op.
sequence_features.set_shape([None, num_unroll, None])
if not context_features:
return array_ops.unstack(sequence_features, axis=1)
# TODO(jtbates): Call layers.input_from_feature_columns for context features.
context_features = [
_tile(context_features[k]) for k in sorted(context_features)
]
return array_ops.unstack(
array_ops.concat(
[sequence_features, array_ops.stack(context_features, 2)], axis=2),
axis=1)
def _get_rnn_model_fn(target_column,
problem_type,
optimizer,
num_unroll,
num_units,
num_rnn_layers,
num_threads,
queue_capacity,
batch_size,
sequence_feature_columns,
context_feature_columns=None,
predict_probabilities=False,
learning_rate=None,
gradient_clipping_norm=None,
dropout_keep_probabilities=None,
name='StateSavingRNNModel',
seed=None):
"""Creates a state saving RNN model function for an `Estimator`.
Args:
target_column: An initialized `TargetColumn`, used to calculate prediction
and loss.
problem_type: `ProblemType.CLASSIFICATION` or
`ProblemType.LINEAR_REGRESSION`.
optimizer: A subclass of `Optimizer`, an instance of an `Optimizer` or a
string.
num_unroll: Python integer, how many time steps to unroll at a time.
The input sequences of length `k` are then split into `k / num_unroll`
many segments.
num_units: The number of units in the `RNNCell`.
num_rnn_layers: Python integer, number of layers in the RNN.
num_threads: The Python integer number of threads enqueuing input examples
into a queue.
queue_capacity: The max capacity of the queue in number of examples.
Needs to be at least `batch_size`. When iterating over the same input
example multiple times reusing their keys the `queue_capacity` must be
smaller than the number of examples.
batch_size: Python integer, the size of the minibatch produced by the SQSS.
sequence_feature_columns: An iterable containing all the feature columns
describing sequence features. All items in the set should be instances
of classes derived from `FeatureColumn`.
context_feature_columns: An iterable containing all the feature columns
      describing context features, i.e., features that apply across all time
steps. All items in the set should be instances of classes derived from
`FeatureColumn`.
predict_probabilities: A boolean indicating whether to predict probabilities
for all classes.
Must only be used with `ProblemType.CLASSIFICATION`.
learning_rate: Learning rate used for optimization. This argument has no
effect if `optimizer` is an instance of an `Optimizer`.
gradient_clipping_norm: A float. Gradients will be clipped to this value.
dropout_keep_probabilities: a list of dropout keep probabilities or `None`.
If given a list, it must have length `num_rnn_layers + 1`.
name: A string that will be used to create a scope for the RNN.
seed: Fixes the random seed used for generating input keys by the SQSS.
Returns:
A model function to be passed to an `Estimator`.
Raises:
ValueError: `problem_type` is not one of
`ProblemType.LINEAR_REGRESSION`
or `ProblemType.CLASSIFICATION`.
ValueError: `predict_probabilities` is `True` for `problem_type` other
than `ProblemType.CLASSIFICATION`.
ValueError: `num_unroll` is not positive.
"""
if problem_type not in (constants.ProblemType.CLASSIFICATION,
constants.ProblemType.LINEAR_REGRESSION):
raise ValueError(
'problem_type must be ProblemType.LINEAR_REGRESSION or '
'ProblemType.CLASSIFICATION; got {}'.
format(problem_type))
if (problem_type != constants.ProblemType.CLASSIFICATION and
predict_probabilities):
raise ValueError(
'predict_probabilities can only be set to True for problem_type'
' ProblemType.CLASSIFICATION; got {}.'.format(problem_type))
if num_unroll <= 0:
raise ValueError('num_unroll must be positive; got {}.'.format(num_unroll))
def _rnn_model_fn(features, labels, mode):
"""The model to be passed to an `Estimator`."""
with ops.name_scope(name):
dropout = (dropout_keep_probabilities
if mode == model_fn.ModeKeys.TRAIN
else None)
cell = lstm_cell(num_units, num_rnn_layers, dropout)
batch = _read_batch(
cell=cell,
features=features,
labels=labels,
mode=mode,
num_unroll=num_unroll,
num_rnn_layers=num_rnn_layers,
batch_size=batch_size,
sequence_feature_columns=sequence_feature_columns,
context_feature_columns=context_feature_columns,
num_threads=num_threads,
queue_capacity=queue_capacity,
seed=seed)
sequence_features = batch.sequences
context_features = batch.context
if mode != model_fn.ModeKeys.INFER:
labels = sequence_features.pop(rnn_common.RNNKeys.LABELS_KEY)
inputs = _prepare_inputs_for_rnn(sequence_features, context_features,
sequence_feature_columns, num_unroll)
state_name = _get_lstm_state_names(num_rnn_layers)
rnn_activations, final_state = construct_state_saving_rnn(
cell=cell,
inputs=inputs,
num_label_columns=target_column.num_label_columns,
state_saver=batch,
state_name=state_name)
loss = None # Created below for modes TRAIN and EVAL.
prediction_dict = rnn_common.multi_value_predictions(
rnn_activations, target_column, problem_type, predict_probabilities)
if mode != model_fn.ModeKeys.INFER:
loss = _multi_value_loss(rnn_activations, labels, batch.length,
target_column, features)
eval_metric_ops = None
if mode != model_fn.ModeKeys.INFER:
default_metrics = _get_default_metrics(problem_type, batch.length)
eval_metric_ops = estimator._make_metrics_ops( # pylint: disable=protected-access
default_metrics, features, labels, prediction_dict)
state_dict = state_tuple_to_dict(final_state)
prediction_dict.update(state_dict)
train_op = None
if mode == model_fn.ModeKeys.TRAIN:
train_op = optimizers.optimize_loss(
loss=loss,
global_step=None, # Get it internally.
learning_rate=learning_rate,
optimizer=optimizer,
clip_gradients=gradient_clipping_norm,
summaries=optimizers.OPTIMIZER_SUMMARIES)
return model_fn.ModelFnOps(mode=mode,
predictions=prediction_dict,
loss=loss,
train_op=train_op,
eval_metric_ops=eval_metric_ops)
return _rnn_model_fn
def _get_lstm_state_names(num_rnn_layers):
"""Returns a num_rnn_layers long list of lstm state name pairs.
Args:
num_rnn_layers: The number of layers in the RNN.
Returns:
A num_rnn_layers long list of lstm state name pairs of the form:
    ['lstm_state_cN', 'lstm_state_mN'] for all N from 0 to num_rnn_layers - 1.
"""
return [['lstm_state_c' + str(i), 'lstm_state_m' + str(i)]
for i in range(num_rnn_layers)]
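# For example, _get_lstm_state_names(2) returns
#   [['lstm_state_c0', 'lstm_state_m0'], ['lstm_state_c1', 'lstm_state_m1']]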
# TODO(jtbates): Allow users to specify cell types other than LSTM.
def lstm_cell(num_units, num_rnn_layers, dropout_keep_probabilities):
"""Constructs a `MultiRNNCell` with num_rnn_layers `BasicLSTMCell`s.
Args:
num_units: The number of units in the `RNNCell`.
num_rnn_layers: The number of layers in the RNN.
dropout_keep_probabilities: a list whose elements are either floats in
`[0.0, 1.0]` or `None`. It must have length `num_rnn_layers + 1`.
Returns:
An intiialized `MultiRNNCell`.
"""
cells = [
rnn_cell.BasicLSTMCell(num_units=num_units, state_is_tuple=True)
for _ in range(num_rnn_layers)
]
if dropout_keep_probabilities:
cells = rnn_common.apply_dropout(cells, dropout_keep_probabilities)
return rnn_cell.MultiRNNCell(cells)
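# Usage sketch (added for illustration): a two-layer cell with dropout between
# layers; the keep-probability list has length num_rnn_layers + 1, as
# rnn_common.apply_dropout requires.
#
#   cell = lstm_cell(num_units=64, num_rnn_layers=2,
#                    dropout_keep_probabilities=[1.0, 0.5, 1.0])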
class StateSavingRnnEstimator(estimator.Estimator):
def __init__(self,
problem_type,
num_units,
num_unroll,
batch_size,
sequence_feature_columns,
context_feature_columns=None,
num_classes=None,
num_rnn_layers=1,
optimizer_type='SGD',
learning_rate=0.1,
predict_probabilities=False,
momentum=None,
gradient_clipping_norm=5.0,
dropout_keep_probabilities=None,
model_dir=None,
config=None,
feature_engineering_fn=None,
num_threads=3,
queue_capacity=1000,
seed=None):
"""Initializes a StateSavingRnnEstimator.
Args:
problem_type: `ProblemType.CLASSIFICATION` or
`ProblemType.LINEAR_REGRESSION`.
num_units: The size of the RNN cells.
num_unroll: Python integer, how many time steps to unroll at a time.
The input sequences of length `k` are then split into `k / num_unroll`
many segments.
batch_size: Python integer, the size of the minibatch.
sequence_feature_columns: An iterable containing all the feature columns
describing sequence features. All items in the set should be instances
of classes derived from `FeatureColumn`.
context_feature_columns: An iterable containing all the feature columns
        describing context features, i.e., features that apply across all time
steps. All items in the set should be instances of classes derived from
`FeatureColumn`.
num_classes: The number of classes for categorization. Used only and
required if `problem_type` is `ProblemType.CLASSIFICATION`
num_rnn_layers: Number of RNN layers.
optimizer_type: The type of optimizer to use. Either a subclass of
`Optimizer`, an instance of an `Optimizer` or a string. Strings must be
one of 'Adagrad', 'Adam', 'Ftrl', Momentum', 'RMSProp', or 'SGD'.
learning_rate: Learning rate. This argument has no effect if `optimizer`
is an instance of an `Optimizer`.
predict_probabilities: A boolean indicating whether to predict
probabilities for all classes. Used only if `problem_type` is
`ProblemType.CLASSIFICATION`.
momentum: Momentum value. Only used if `optimizer_type` is 'Momentum'.
gradient_clipping_norm: Parameter used for gradient clipping. If `None`,
then no clipping is performed.
dropout_keep_probabilities: a list of dropout keep probabilities or
`None`. If given a list, it must have length `num_rnn_layers + 1`.
model_dir: The directory in which to save and restore the model graph,
parameters, etc.
config: A `RunConfig` instance.
feature_engineering_fn: Takes features and labels which are the output of
`input_fn` and returns features and labels which will be fed into
`model_fn`. Please check `model_fn` for a definition of features and
labels.
num_threads: The Python integer number of threads enqueuing input examples
into a queue. Defaults to 3.
queue_capacity: The max capacity of the queue in number of examples.
Needs to be at least `batch_size`. Defaults to 1000. When iterating
over the same input example multiple times reusing their keys the
`queue_capacity` must be smaller than the number of examples.
seed: Fixes the random seed used for generating input keys by the SQSS.
Raises:
ValueError: `problem_type` is not one of
`ProblemType.LINEAR_REGRESSION` or `ProblemType.CLASSIFICATION`.
ValueError: `problem_type` is `ProblemType.CLASSIFICATION` but
`num_classes` is not specified.
"""
name = 'MultiValueStateSavingRNN'
if problem_type == constants.ProblemType.LINEAR_REGRESSION:
name += 'Regressor'
target_column = layers.regression_target()
elif problem_type == constants.ProblemType.CLASSIFICATION:
if not num_classes:
raise ValueError('For CLASSIFICATION problem_type, num_classes must be '
'specified.')
target_column = layers.multi_class_target(n_classes=num_classes)
name += 'Classifier'
else:
raise ValueError(
'problem_type must be either ProblemType.LINEAR_REGRESSION '
'or ProblemType.CLASSIFICATION; got {}'.format(
problem_type))
if optimizer_type == 'Momentum':
optimizer_type = momentum_opt.MomentumOptimizer(learning_rate, momentum)
rnn_model_fn = _get_rnn_model_fn(
target_column=target_column,
problem_type=problem_type,
optimizer=optimizer_type,
num_unroll=num_unroll,
num_units=num_units,
num_rnn_layers=num_rnn_layers,
num_threads=num_threads,
queue_capacity=queue_capacity,
batch_size=batch_size,
sequence_feature_columns=sequence_feature_columns,
context_feature_columns=context_feature_columns,
predict_probabilities=predict_probabilities,
learning_rate=learning_rate,
gradient_clipping_norm=gradient_clipping_norm,
dropout_keep_probabilities=dropout_keep_probabilities,
name=name,
seed=seed)
super(StateSavingRnnEstimator, self).__init__(
model_fn=rnn_model_fn,
model_dir=model_dir,
config=config,
feature_engineering_fn=feature_engineering_fn)
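# Construction sketch (illustrative, not from the original file): the feature
# column below is hypothetical and assumes integer-encoded token sequences.
#
#   tokens = layers.sparse_column_with_integerized_feature('tokens',
#                                                          bucket_size=100)
#   seq_cols = [layers.embedding_column(tokens, dimension=8)]
#   est = StateSavingRnnEstimator(
#       constants.ProblemType.CLASSIFICATION,
#       num_units=32,
#       num_unroll=10,
#       batch_size=16,
#       sequence_feature_columns=seq_cols,
#       num_classes=2)
#   # est.fit(input_fn=..., steps=...) would then train the model.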
@deprecated('2017-04-01', 'multi_value_rnn_regressor is deprecated. '
'Please construct a StateSavingRnnEstimator directly.')
def multi_value_rnn_regressor(num_units,
num_unroll,
batch_size,
sequence_feature_columns,
context_feature_columns=None,
num_rnn_layers=1,
optimizer_type='SGD',
learning_rate=0.1,
momentum=None,
gradient_clipping_norm=5.0,
dropout_keep_probabilities=None,
model_dir=None,
config=None,
feature_engineering_fn=None,
num_threads=3,
queue_capacity=1000,
seed=None):
"""Creates a RNN `Estimator` that predicts sequences of values.
Args:
num_units: The size of the RNN cells.
num_unroll: Python integer, how many time steps to unroll at a time.
The input sequences of length `k` are then split into `k / num_unroll`
many segments.
batch_size: Python integer, the size of the minibatch.
sequence_feature_columns: An iterable containing all the feature columns
describing sequence features. All items in the set should be instances
of classes derived from `FeatureColumn`.
context_feature_columns: An iterable containing all the feature columns
      describing context features, i.e., features that apply across all time
steps. All items in the set should be instances of classes derived from
`FeatureColumn`.
    num_rnn_layers: Number of RNN layers.
optimizer_type: The type of optimizer to use. Either a subclass of
`Optimizer`, an instance of an `Optimizer` or a string. Strings must be
one of 'Adagrad', 'Momentum' or 'SGD'.
learning_rate: Learning rate. This argument has no effect if `optimizer`
is an instance of an `Optimizer`.
momentum: Momentum value. Only used if `optimizer_type` is 'Momentum'.
gradient_clipping_norm: Parameter used for gradient clipping. If `None`,
then no clipping is performed.
dropout_keep_probabilities: a list of dropout keep probabilities or `None`.
If given a list, it must have length `num_rnn_layers + 1`.
model_dir: The directory in which to save and restore the model graph,
parameters, etc.
config: A `RunConfig` instance.
feature_engineering_fn: Takes features and labels which are the output of
`input_fn` and returns features and labels which will be fed into
`model_fn`. Please check `model_fn` for a definition of features and
labels.
num_threads: The Python integer number of threads enqueuing input examples
into a queue. Defaults to 3.
queue_capacity: The max capacity of the queue in number of examples.
Needs to be at least `batch_size`. Defaults to 1000. When iterating
over the same input example multiple times reusing their keys the
`queue_capacity` must be smaller than the number of examples.
seed: Fixes the random seed used for generating input keys by the SQSS.
Returns:
An initialized `Estimator`.
"""
return StateSavingRnnEstimator(
constants.ProblemType.LINEAR_REGRESSION,
num_units,
num_unroll,
batch_size,
sequence_feature_columns,
context_feature_columns=context_feature_columns,
num_classes=None,
num_rnn_layers=num_rnn_layers,
optimizer_type=optimizer_type,
learning_rate=learning_rate,
predict_probabilities=False,
momentum=momentum,
gradient_clipping_norm=gradient_clipping_norm,
dropout_keep_probabilities=dropout_keep_probabilities,
model_dir=model_dir,
config=config,
feature_engineering_fn=feature_engineering_fn,
num_threads=num_threads,
queue_capacity=queue_capacity,
seed=seed)
@deprecated('2017-04-01', 'multi_value_rnn_classifier is deprecated. '
'Please construct a StateSavingRnnEstimator directly.')
def multi_value_rnn_classifier(num_classes,
num_units,
num_unroll,
batch_size,
sequence_feature_columns,
context_feature_columns=None,
num_rnn_layers=1,
optimizer_type='SGD',
learning_rate=0.1,
predict_probabilities=False,
momentum=None,
gradient_clipping_norm=5.0,
dropout_keep_probabilities=None,
model_dir=None,
config=None,
feature_engineering_fn=None,
num_threads=3,
queue_capacity=1000,
seed=None):
"""Creates a RNN `Estimator` that predicts sequences of labels.
Args:
num_classes: The number of classes for categorization.
num_units: The size of the RNN cells.
num_unroll: Python integer, how many time steps to unroll at a time.
The input sequences of length `k` are then split into `k / num_unroll`
many segments.
batch_size: Python integer, the size of the minibatch.
sequence_feature_columns: An iterable containing all the feature columns
describing sequence features. All items in the set should be instances
of classes derived from `FeatureColumn`.
context_feature_columns: An iterable containing all the feature columns
      describing context features, i.e., features that apply across all time
steps. All items in the set should be instances of classes derived from
`FeatureColumn`.
num_rnn_layers: Number of RNN layers.
optimizer_type: The type of optimizer to use. Either a subclass of
`Optimizer`, an instance of an `Optimizer` or a string. Strings must be
one of 'Adagrad', 'Momentum' or 'SGD'.
learning_rate: Learning rate. This argument has no effect if `optimizer`
is an instance of an `Optimizer`.
predict_probabilities: A boolean indicating whether to predict probabilities
for all classes.
momentum: Momentum value. Only used if `optimizer_type` is 'Momentum'.
gradient_clipping_norm: Parameter used for gradient clipping. If `None`,
then no clipping is performed.
dropout_keep_probabilities: a list of dropout keep probabilities or `None`.
If given a list, it must have length `num_rnn_layers + 1`.
model_dir: The directory in which to save and restore the model graph,
parameters, etc.
config: A `RunConfig` instance.
feature_engineering_fn: Takes features and labels which are the output of
`input_fn` and returns features and labels which will be fed into
`model_fn`. Please check `model_fn` for a definition of features and
labels.
num_threads: The Python integer number of threads enqueuing input examples
into a queue. Defaults to 3.
queue_capacity: The max capacity of the queue in number of examples.
Needs to be at least `batch_size`. Defaults to 1000. When iterating
over the same input example multiple times reusing their keys the
`queue_capacity` must be smaller than the number of examples.
seed: Fixes the random seed used for generating input keys by the SQSS.
Returns:
An initialized `Estimator`.
"""
return StateSavingRnnEstimator(
constants.ProblemType.CLASSIFICATION,
num_units,
num_unroll,
batch_size,
sequence_feature_columns,
context_feature_columns=context_feature_columns,
num_classes=num_classes,
num_rnn_layers=num_rnn_layers,
optimizer_type=optimizer_type,
learning_rate=learning_rate,
predict_probabilities=predict_probabilities,
momentum=momentum,
gradient_clipping_norm=gradient_clipping_norm,
dropout_keep_probabilities=dropout_keep_probabilities,
model_dir=model_dir,
config=config,
feature_engineering_fn=feature_engineering_fn,
num_threads=num_threads,
queue_capacity=queue_capacity,
seed=seed)
| apache-2.0 | -7,088,845,920,904,883,000 | 42.847032 | 116 | 0.664827 | false |
TripleDogDare/RadioWCSpy | env/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/contrib/pyopenssl.py | 153 | 9905 | '''SSL with SNI_-support for Python 2. Follow these instructions if you would
like to verify SSL certificates in Python 2. Note, the default libraries do
*not* do certificate checking; you need to do additional work to validate
certificates yourself.
This needs the following packages installed:
* pyOpenSSL (tested with 0.13)
* ndg-httpsclient (tested with 0.3.2)
* pyasn1 (tested with 0.1.6)
You can install them with the following command:
pip install pyopenssl ndg-httpsclient pyasn1
To activate certificate checking, call
:func:`~urllib3.contrib.pyopenssl.inject_into_urllib3` from your Python code
before you begin making HTTP requests. This can be done in a ``sitecustomize``
module, or at any other time before your application begins using ``urllib3``,
like this::
try:
import urllib3.contrib.pyopenssl
urllib3.contrib.pyopenssl.inject_into_urllib3()
except ImportError:
pass
Now you can use :mod:`urllib3` as you normally would, and it will support SNI
when the required modules are installed.
Activating this module also has the positive side effect of disabling SSL/TLS
compression in Python 2 (see `CRIME attack`_).
If you want to configure the default list of supported cipher suites, you can
set the ``urllib3.contrib.pyopenssl.DEFAULT_SSL_CIPHER_LIST`` variable.
Module Variables
----------------
:var DEFAULT_SSL_CIPHER_LIST: The list of supported SSL/TLS cipher suites.
Default: ``ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:
ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS``
.. _sni: https://en.wikipedia.org/wiki/Server_Name_Indication
.. _crime attack: https://en.wikipedia.org/wiki/CRIME_(security_exploit)
'''
try:
from ndg.httpsclient.ssl_peer_verification import SUBJ_ALT_NAME_SUPPORT
from ndg.httpsclient.subj_alt_name import SubjectAltName as BaseSubjectAltName
except SyntaxError as e:
raise ImportError(e)
import OpenSSL.SSL
from pyasn1.codec.der import decoder as der_decoder
from pyasn1.type import univ, constraint
from socket import _fileobject, timeout
import ssl
import select
from .. import connection
from .. import util
__all__ = ['inject_into_urllib3', 'extract_from_urllib3']
# SNI only *really* works if we can read the subjectAltName of certificates.
HAS_SNI = SUBJ_ALT_NAME_SUPPORT
# Map from urllib3 to PyOpenSSL compatible parameter-values.
_openssl_versions = {
ssl.PROTOCOL_SSLv23: OpenSSL.SSL.SSLv23_METHOD,
ssl.PROTOCOL_TLSv1: OpenSSL.SSL.TLSv1_METHOD,
}
try:
_openssl_versions.update({ssl.PROTOCOL_SSLv3: OpenSSL.SSL.SSLv3_METHOD})
except AttributeError:
pass
_openssl_verify = {
ssl.CERT_NONE: OpenSSL.SSL.VERIFY_NONE,
ssl.CERT_OPTIONAL: OpenSSL.SSL.VERIFY_PEER,
ssl.CERT_REQUIRED: OpenSSL.SSL.VERIFY_PEER
+ OpenSSL.SSL.VERIFY_FAIL_IF_NO_PEER_CERT,
}
# A secure default.
# Sources for more information on TLS ciphers:
#
# - https://wiki.mozilla.org/Security/Server_Side_TLS
# - https://www.ssllabs.com/projects/best-practices/index.html
# - https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
#
# The general intent is:
# - Prefer cipher suites that offer perfect forward secrecy (DHE/ECDHE),
# - prefer ECDHE over DHE for better performance,
# - prefer any AES-GCM over any AES-CBC for better performance and security,
# - use 3DES as fallback which is secure but slow,
# - disable NULL authentication, MD5 MACs and DSS for security reasons.
DEFAULT_SSL_CIPHER_LIST = "ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:" + \
"ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:" + \
"!aNULL:!MD5:!DSS"
orig_util_HAS_SNI = util.HAS_SNI
orig_connection_ssl_wrap_socket = connection.ssl_wrap_socket
def inject_into_urllib3():
'Monkey-patch urllib3 with PyOpenSSL-backed SSL-support.'
connection.ssl_wrap_socket = ssl_wrap_socket
util.HAS_SNI = HAS_SNI
def extract_from_urllib3():
'Undo monkey-patching by :func:`inject_into_urllib3`.'
connection.ssl_wrap_socket = orig_connection_ssl_wrap_socket
util.HAS_SNI = orig_util_HAS_SNI
### Note: This is a slightly bug-fixed version of the same class from
### ndg-httpsclient.
class SubjectAltName(BaseSubjectAltName):
'''ASN.1 implementation for subjectAltNames support'''
# There is no limit to how many SAN certificates a certificate may have,
# however this needs to have some limit so we'll set an arbitrarily high
# limit.
sizeSpec = univ.SequenceOf.sizeSpec + \
constraint.ValueSizeConstraint(1, 1024)
### Note: This is a slightly bug-fixed version of the same function from
### ndg-httpsclient.
def get_subj_alt_name(peer_cert):
# Search through extensions
dns_name = []
if not SUBJ_ALT_NAME_SUPPORT:
return dns_name
general_names = SubjectAltName()
for i in range(peer_cert.get_extension_count()):
ext = peer_cert.get_extension(i)
ext_name = ext.get_short_name()
if ext_name != 'subjectAltName':
continue
# PyOpenSSL returns extension data in ASN.1 encoded form
ext_dat = ext.get_data()
decoded_dat = der_decoder.decode(ext_dat,
asn1Spec=general_names)
for name in decoded_dat:
if not isinstance(name, SubjectAltName):
continue
for entry in range(len(name)):
component = name.getComponentByPosition(entry)
if component.getName() != 'dNSName':
continue
dns_name.append(str(component.getComponent()))
return dns_name
class WrappedSocket(object):
'''API-compatibility wrapper for Python OpenSSL's Connection-class.
Note: _makefile_refs, _drop() and _reuse() are needed for the garbage
collector of pypy.
'''
def __init__(self, connection, socket, suppress_ragged_eofs=True):
self.connection = connection
self.socket = socket
self.suppress_ragged_eofs = suppress_ragged_eofs
self._makefile_refs = 0
def fileno(self):
return self.socket.fileno()
def makefile(self, mode, bufsize=-1):
self._makefile_refs += 1
return _fileobject(self, mode, bufsize, close=True)
def recv(self, *args, **kwargs):
try:
data = self.connection.recv(*args, **kwargs)
except OpenSSL.SSL.SysCallError as e:
if self.suppress_ragged_eofs and e.args == (-1, 'Unexpected EOF'):
return b''
else:
raise
except OpenSSL.SSL.WantReadError:
rd, wd, ed = select.select(
[self.socket], [], [], self.socket.gettimeout())
if not rd:
raise timeout('The read operation timed out')
else:
return self.recv(*args, **kwargs)
else:
return data
def settimeout(self, timeout):
return self.socket.settimeout(timeout)
def _send_until_done(self, data):
while True:
try:
return self.connection.send(data)
except OpenSSL.SSL.WantWriteError:
_, wlist, _ = select.select([], [self.socket], [],
self.socket.gettimeout())
if not wlist:
raise timeout()
continue
def sendall(self, data):
while len(data):
sent = self._send_until_done(data)
data = data[sent:]
def close(self):
if self._makefile_refs < 1:
return self.connection.shutdown()
else:
self._makefile_refs -= 1
def getpeercert(self, binary_form=False):
x509 = self.connection.get_peer_certificate()
if not x509:
return x509
if binary_form:
return OpenSSL.crypto.dump_certificate(
OpenSSL.crypto.FILETYPE_ASN1,
x509)
return {
'subject': (
(('commonName', x509.get_subject().CN),),
),
'subjectAltName': [
('DNS', value)
for value in get_subj_alt_name(x509)
]
}
def _reuse(self):
self._makefile_refs += 1
def _drop(self):
if self._makefile_refs < 1:
self.close()
else:
self._makefile_refs -= 1
def _verify_callback(cnx, x509, err_no, err_depth, return_code):
return err_no == 0
def ssl_wrap_socket(sock, keyfile=None, certfile=None, cert_reqs=None,
ca_certs=None, server_hostname=None,
ssl_version=None):
ctx = OpenSSL.SSL.Context(_openssl_versions[ssl_version])
if certfile:
keyfile = keyfile or certfile # Match behaviour of the normal python ssl library
ctx.use_certificate_file(certfile)
if keyfile:
ctx.use_privatekey_file(keyfile)
if cert_reqs != ssl.CERT_NONE:
ctx.set_verify(_openssl_verify[cert_reqs], _verify_callback)
if ca_certs:
try:
ctx.load_verify_locations(ca_certs, None)
except OpenSSL.SSL.Error as e:
raise ssl.SSLError('bad ca_certs: %r' % ca_certs, e)
else:
ctx.set_default_verify_paths()
    # Disable TLS compression to mitigate CRIME attack (issue #309)
OP_NO_COMPRESSION = 0x20000
ctx.set_options(OP_NO_COMPRESSION)
# Set list of supported ciphersuites.
ctx.set_cipher_list(DEFAULT_SSL_CIPHER_LIST)
cnx = OpenSSL.SSL.Connection(ctx, sock)
cnx.set_tlsext_host_name(server_hostname)
cnx.set_connect_state()
while True:
try:
cnx.do_handshake()
except OpenSSL.SSL.WantReadError:
select.select([sock], [], [])
continue
except OpenSSL.SSL.Error as e:
raise ssl.SSLError('bad handshake', e)
break
return WrappedSocket(cnx, sock)
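# Usage sketch (illustrative addition): wrapping a plain TCP socket directly.
# The host and CA-bundle path are made-up values; cert_reqs must be passed
# explicitly, since the default of None is not a valid key in _openssl_verify.
#
#   import socket
#   s = socket.create_connection(('example.com', 443))
#   ws = ssl_wrap_socket(s, cert_reqs=ssl.CERT_REQUIRED,
#                        ca_certs='/etc/ssl/certs/ca-certificates.crt',
#                        server_hostname='example.com',
#                        ssl_version=ssl.PROTOCOL_SSLv23)
#   ws.sendall(b'GET / HTTP/1.0\r\nHost: example.com\r\n\r\n')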
| mit | 7,411,130,221,127,615,000 | 31.689769 | 89 | 0.641999 | false |
elkingtonmcb/scikit-learn | sklearn/feature_selection/tests/test_feature_select.py | 103 | 22297 | """
Todo: cross-check the F-value with statsmodels
"""
from __future__ import division
import itertools
import warnings
import numpy as np
from scipy import stats, sparse
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_almost_equal
from sklearn.utils.testing import assert_raises
from sklearn.utils.testing import assert_true
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.utils.testing import assert_not_in
from sklearn.utils.testing import assert_less
from sklearn.utils.testing import assert_warns
from sklearn.utils.testing import ignore_warnings
from sklearn.utils.testing import assert_warns_message
from sklearn.utils.testing import assert_greater
from sklearn.utils.testing import assert_greater_equal
from sklearn.utils import safe_mask
from sklearn.datasets.samples_generator import (make_classification,
make_regression)
from sklearn.feature_selection import (chi2, f_classif, f_oneway, f_regression,
SelectPercentile, SelectKBest,
SelectFpr, SelectFdr, SelectFwe,
GenericUnivariateSelect)
##############################################################################
# Test the score functions
def test_f_oneway_vs_scipy_stats():
# Test that our f_oneway gives the same result as scipy.stats
rng = np.random.RandomState(0)
X1 = rng.randn(10, 3)
X2 = 1 + rng.randn(10, 3)
f, pv = stats.f_oneway(X1, X2)
f2, pv2 = f_oneway(X1, X2)
assert_true(np.allclose(f, f2))
assert_true(np.allclose(pv, pv2))
def test_f_oneway_ints():
    # Smoke test f_oneway on integers: check that it does not raise casting
    # errors with recent numpys
rng = np.random.RandomState(0)
X = rng.randint(10, size=(10, 10))
y = np.arange(10)
fint, pint = f_oneway(X, y)
    # test that it gives the same result as with float
f, p = f_oneway(X.astype(np.float), y)
assert_array_almost_equal(f, fint, decimal=4)
assert_array_almost_equal(p, pint, decimal=4)
def test_f_classif():
# Test whether the F test yields meaningful results
# on a simple simulated classification problem
X, y = make_classification(n_samples=200, n_features=20,
n_informative=3, n_redundant=2,
n_repeated=0, n_classes=8,
n_clusters_per_class=1, flip_y=0.0,
class_sep=10, shuffle=False, random_state=0)
F, pv = f_classif(X, y)
F_sparse, pv_sparse = f_classif(sparse.csr_matrix(X), y)
assert_true((F > 0).all())
assert_true((pv > 0).all())
assert_true((pv < 1).all())
assert_true((pv[:5] < 0.05).all())
assert_true((pv[5:] > 1.e-4).all())
assert_array_almost_equal(F_sparse, F)
assert_array_almost_equal(pv_sparse, pv)
def test_f_regression():
# Test whether the F test yields meaningful results
# on a simple simulated regression problem
X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
shuffle=False, random_state=0)
F, pv = f_regression(X, y)
assert_true((F > 0).all())
assert_true((pv > 0).all())
assert_true((pv < 1).all())
assert_true((pv[:5] < 0.05).all())
assert_true((pv[5:] > 1.e-4).all())
# again without centering, compare with sparse
F, pv = f_regression(X, y, center=False)
F_sparse, pv_sparse = f_regression(sparse.csr_matrix(X), y, center=False)
assert_array_almost_equal(F_sparse, F)
assert_array_almost_equal(pv_sparse, pv)
def test_f_regression_input_dtype():
# Test whether f_regression returns the same value
# for any numeric data_type
rng = np.random.RandomState(0)
X = rng.rand(10, 20)
y = np.arange(10).astype(np.int)
F1, pv1 = f_regression(X, y)
F2, pv2 = f_regression(X, y.astype(np.float))
assert_array_almost_equal(F1, F2, 5)
assert_array_almost_equal(pv1, pv2, 5)
def test_f_regression_center():
# Test whether f_regression preserves dof according to 'center' argument
# We use two centered variates so we have a simple relationship between
# F-score with variates centering and F-score without variates centering.
# Create toy example
X = np.arange(-5, 6).reshape(-1, 1) # X has zero mean
n_samples = X.size
Y = np.ones(n_samples)
Y[::2] *= -1.
Y[0] = 0. # have Y mean being null
F1, _ = f_regression(X, Y, center=True)
F2, _ = f_regression(X, Y, center=False)
assert_array_almost_equal(F1 * (n_samples - 1.) / (n_samples - 2.), F2)
assert_almost_equal(F2[0], 0.232558139) # value from statsmodels OLS
def test_f_classif_multi_class():
# Test whether the F test yields meaningful results
# on a simple simulated classification problem
X, y = make_classification(n_samples=200, n_features=20,
n_informative=3, n_redundant=2,
n_repeated=0, n_classes=8,
n_clusters_per_class=1, flip_y=0.0,
class_sep=10, shuffle=False, random_state=0)
F, pv = f_classif(X, y)
assert_true((F > 0).all())
assert_true((pv > 0).all())
assert_true((pv < 1).all())
assert_true((pv[:5] < 0.05).all())
assert_true((pv[5:] > 1.e-4).all())
def test_select_percentile_classif():
# Test whether the relative univariate feature selection
# gets the correct items in a simple classification problem
# with the percentile heuristic
X, y = make_classification(n_samples=200, n_features=20,
n_informative=3, n_redundant=2,
n_repeated=0, n_classes=8,
n_clusters_per_class=1, flip_y=0.0,
class_sep=10, shuffle=False, random_state=0)
univariate_filter = SelectPercentile(f_classif, percentile=25)
X_r = univariate_filter.fit(X, y).transform(X)
X_r2 = GenericUnivariateSelect(f_classif, mode='percentile',
param=25).fit(X, y).transform(X)
assert_array_equal(X_r, X_r2)
support = univariate_filter.get_support()
gtruth = np.zeros(20)
gtruth[:5] = 1
assert_array_equal(support, gtruth)
def test_select_percentile_classif_sparse():
# Test whether the relative univariate feature selection
# gets the correct items in a simple classification problem
# with the percentile heuristic
X, y = make_classification(n_samples=200, n_features=20,
n_informative=3, n_redundant=2,
n_repeated=0, n_classes=8,
n_clusters_per_class=1, flip_y=0.0,
class_sep=10, shuffle=False, random_state=0)
X = sparse.csr_matrix(X)
univariate_filter = SelectPercentile(f_classif, percentile=25)
X_r = univariate_filter.fit(X, y).transform(X)
X_r2 = GenericUnivariateSelect(f_classif, mode='percentile',
param=25).fit(X, y).transform(X)
assert_array_equal(X_r.toarray(), X_r2.toarray())
support = univariate_filter.get_support()
gtruth = np.zeros(20)
gtruth[:5] = 1
assert_array_equal(support, gtruth)
X_r2inv = univariate_filter.inverse_transform(X_r2)
assert_true(sparse.issparse(X_r2inv))
support_mask = safe_mask(X_r2inv, support)
assert_equal(X_r2inv.shape, X.shape)
assert_array_equal(X_r2inv[:, support_mask].toarray(), X_r.toarray())
# Check other columns are empty
assert_equal(X_r2inv.getnnz(), X_r.getnnz())
##############################################################################
# Test univariate selection in classification settings
def test_select_kbest_classif():
# Test whether the relative univariate feature selection
# gets the correct items in a simple classification problem
# with the k best heuristic
X, y = make_classification(n_samples=200, n_features=20,
n_informative=3, n_redundant=2,
n_repeated=0, n_classes=8,
n_clusters_per_class=1, flip_y=0.0,
class_sep=10, shuffle=False, random_state=0)
univariate_filter = SelectKBest(f_classif, k=5)
X_r = univariate_filter.fit(X, y).transform(X)
X_r2 = GenericUnivariateSelect(
f_classif, mode='k_best', param=5).fit(X, y).transform(X)
assert_array_equal(X_r, X_r2)
support = univariate_filter.get_support()
gtruth = np.zeros(20)
gtruth[:5] = 1
assert_array_equal(support, gtruth)
def test_select_kbest_all():
# Test whether k="all" correctly returns all features.
X, y = make_classification(n_samples=20, n_features=10,
shuffle=False, random_state=0)
univariate_filter = SelectKBest(f_classif, k='all')
X_r = univariate_filter.fit(X, y).transform(X)
assert_array_equal(X, X_r)
def test_select_kbest_zero():
# Test whether k=0 correctly returns no features.
X, y = make_classification(n_samples=20, n_features=10,
shuffle=False, random_state=0)
univariate_filter = SelectKBest(f_classif, k=0)
univariate_filter.fit(X, y)
support = univariate_filter.get_support()
gtruth = np.zeros(10, dtype=bool)
assert_array_equal(support, gtruth)
X_selected = assert_warns_message(UserWarning, 'No features were selected',
univariate_filter.transform, X)
assert_equal(X_selected.shape, (20, 0))
def test_select_heuristics_classif():
# Test whether the relative univariate feature selection
# gets the correct items in a simple classification problem
# with the fdr, fwe and fpr heuristics
X, y = make_classification(n_samples=200, n_features=20,
n_informative=3, n_redundant=2,
n_repeated=0, n_classes=8,
n_clusters_per_class=1, flip_y=0.0,
class_sep=10, shuffle=False, random_state=0)
univariate_filter = SelectFwe(f_classif, alpha=0.01)
X_r = univariate_filter.fit(X, y).transform(X)
gtruth = np.zeros(20)
gtruth[:5] = 1
for mode in ['fdr', 'fpr', 'fwe']:
X_r2 = GenericUnivariateSelect(
f_classif, mode=mode, param=0.01).fit(X, y).transform(X)
assert_array_equal(X_r, X_r2)
support = univariate_filter.get_support()
assert_array_almost_equal(support, gtruth)
##############################################################################
# Test univariate selection in regression settings
def assert_best_scores_kept(score_filter):
scores = score_filter.scores_
support = score_filter.get_support()
assert_array_equal(np.sort(scores[support]),
np.sort(scores)[-support.sum():])
def test_select_percentile_regression():
# Test whether the relative univariate feature selection
# gets the correct items in a simple regression problem
# with the percentile heuristic
X, y = make_regression(n_samples=200, n_features=20,
n_informative=5, shuffle=False, random_state=0)
univariate_filter = SelectPercentile(f_regression, percentile=25)
X_r = univariate_filter.fit(X, y).transform(X)
assert_best_scores_kept(univariate_filter)
X_r2 = GenericUnivariateSelect(
f_regression, mode='percentile', param=25).fit(X, y).transform(X)
assert_array_equal(X_r, X_r2)
support = univariate_filter.get_support()
gtruth = np.zeros(20)
gtruth[:5] = 1
assert_array_equal(support, gtruth)
X_2 = X.copy()
X_2[:, np.logical_not(support)] = 0
assert_array_equal(X_2, univariate_filter.inverse_transform(X_r))
# Check inverse_transform respects dtype
assert_array_equal(X_2.astype(bool),
univariate_filter.inverse_transform(X_r.astype(bool)))
def test_select_percentile_regression_full():
# Test whether the relative univariate feature selection
# selects all features when '100%' is asked.
X, y = make_regression(n_samples=200, n_features=20,
n_informative=5, shuffle=False, random_state=0)
univariate_filter = SelectPercentile(f_regression, percentile=100)
X_r = univariate_filter.fit(X, y).transform(X)
assert_best_scores_kept(univariate_filter)
X_r2 = GenericUnivariateSelect(
f_regression, mode='percentile', param=100).fit(X, y).transform(X)
assert_array_equal(X_r, X_r2)
support = univariate_filter.get_support()
gtruth = np.ones(20)
assert_array_equal(support, gtruth)
def test_invalid_percentile():
X, y = make_regression(n_samples=10, n_features=20,
n_informative=2, shuffle=False, random_state=0)
assert_raises(ValueError, SelectPercentile(percentile=-1).fit, X, y)
assert_raises(ValueError, SelectPercentile(percentile=101).fit, X, y)
assert_raises(ValueError, GenericUnivariateSelect(mode='percentile',
param=-1).fit, X, y)
assert_raises(ValueError, GenericUnivariateSelect(mode='percentile',
param=101).fit, X, y)
def test_select_kbest_regression():
# Test whether the relative univariate feature selection
# gets the correct items in a simple regression problem
# with the k best heuristic
X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
shuffle=False, random_state=0, noise=10)
univariate_filter = SelectKBest(f_regression, k=5)
X_r = univariate_filter.fit(X, y).transform(X)
assert_best_scores_kept(univariate_filter)
X_r2 = GenericUnivariateSelect(
f_regression, mode='k_best', param=5).fit(X, y).transform(X)
assert_array_equal(X_r, X_r2)
support = univariate_filter.get_support()
gtruth = np.zeros(20)
gtruth[:5] = 1
assert_array_equal(support, gtruth)
def test_select_heuristics_regression():
# Test whether the relative univariate feature selection
# gets the correct items in a simple regression problem
# with the fpr, fdr or fwe heuristics
X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
shuffle=False, random_state=0, noise=10)
univariate_filter = SelectFpr(f_regression, alpha=0.01)
X_r = univariate_filter.fit(X, y).transform(X)
gtruth = np.zeros(20)
gtruth[:5] = 1
for mode in ['fdr', 'fpr', 'fwe']:
X_r2 = GenericUnivariateSelect(
f_regression, mode=mode, param=0.01).fit(X, y).transform(X)
assert_array_equal(X_r, X_r2)
support = univariate_filter.get_support()
assert_array_equal(support[:5], np.ones((5, ), dtype=np.bool))
assert_less(np.sum(support[5:] == 1), 3)
def test_select_fdr_regression():
# Test that fdr heuristic actually has low FDR.
def single_fdr(alpha, n_informative, random_state):
X, y = make_regression(n_samples=150, n_features=20,
n_informative=n_informative, shuffle=False,
random_state=random_state, noise=10)
with warnings.catch_warnings(record=True):
# Warnings can be raised when no features are selected
# (low alpha or very noisy data)
univariate_filter = SelectFdr(f_regression, alpha=alpha)
X_r = univariate_filter.fit(X, y).transform(X)
X_r2 = GenericUnivariateSelect(
f_regression, mode='fdr', param=alpha).fit(X, y).transform(X)
assert_array_equal(X_r, X_r2)
support = univariate_filter.get_support()
num_false_positives = np.sum(support[n_informative:] == 1)
num_true_positives = np.sum(support[:n_informative] == 1)
if num_false_positives == 0:
return 0.
false_discovery_rate = (num_false_positives /
(num_true_positives + num_false_positives))
return false_discovery_rate
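    # Worked example (illustrative): if the support flags 4 of the
    # informative features plus 1 noise feature, single_fdr returns
    # 1 / (4 + 1) = 0.2 for that run.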
for alpha in [0.001, 0.01, 0.1]:
for n_informative in [1, 5, 10]:
# As per Benjamini-Hochberg, the expected false discovery rate
# should be lower than alpha:
# FDR = E(FP / (TP + FP)) <= alpha
false_discovery_rate = np.mean([single_fdr(alpha, n_informative,
random_state) for
random_state in range(30)])
assert_greater_equal(alpha, false_discovery_rate)
# Make sure that the empirical false discovery rate increases
# with alpha:
if false_discovery_rate != 0:
assert_greater(false_discovery_rate, alpha / 10)
def test_select_fwe_regression():
# Test whether the relative univariate feature selection
# gets the correct items in a simple regression problem
# with the fwe heuristic
X, y = make_regression(n_samples=200, n_features=20,
n_informative=5, shuffle=False, random_state=0)
univariate_filter = SelectFwe(f_regression, alpha=0.01)
X_r = univariate_filter.fit(X, y).transform(X)
X_r2 = GenericUnivariateSelect(
f_regression, mode='fwe', param=0.01).fit(X, y).transform(X)
assert_array_equal(X_r, X_r2)
support = univariate_filter.get_support()
gtruth = np.zeros(20)
gtruth[:5] = 1
assert_array_equal(support[:5], np.ones((5, ), dtype=np.bool))
assert_less(np.sum(support[5:] == 1), 2)
def test_selectkbest_tiebreaking():
# Test whether SelectKBest actually selects k features in case of ties.
# Prior to 0.11, SelectKBest would return more features than requested.
Xs = [[0, 1, 1], [0, 0, 1], [1, 0, 0], [1, 1, 0]]
y = [1]
dummy_score = lambda X, y: (X[0], X[0])
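    # Note (added for clarity): dummy_score returns the first row of X as
    # both the scores and the p-values, so e.g. X = [[0, 1, 1]] yields tied
    # scores for the last two features.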
for X in Xs:
sel = SelectKBest(dummy_score, k=1)
X1 = ignore_warnings(sel.fit_transform)([X], y)
assert_equal(X1.shape[1], 1)
assert_best_scores_kept(sel)
sel = SelectKBest(dummy_score, k=2)
X2 = ignore_warnings(sel.fit_transform)([X], y)
assert_equal(X2.shape[1], 2)
assert_best_scores_kept(sel)
def test_selectpercentile_tiebreaking():
# Test if SelectPercentile selects the right n_features in case of ties.
Xs = [[0, 1, 1], [0, 0, 1], [1, 0, 0], [1, 1, 0]]
y = [1]
dummy_score = lambda X, y: (X[0], X[0])
for X in Xs:
sel = SelectPercentile(dummy_score, percentile=34)
X1 = ignore_warnings(sel.fit_transform)([X], y)
assert_equal(X1.shape[1], 1)
assert_best_scores_kept(sel)
sel = SelectPercentile(dummy_score, percentile=67)
X2 = ignore_warnings(sel.fit_transform)([X], y)
assert_equal(X2.shape[1], 2)
assert_best_scores_kept(sel)
def test_tied_pvalues():
# Test whether k-best and percentiles work with tied pvalues from chi2.
# chi2 will return the same p-values for the following features, but it
# will return different scores.
X0 = np.array([[10000, 9999, 9998], [1, 1, 1]])
y = [0, 1]
for perm in itertools.permutations((0, 1, 2)):
X = X0[:, perm]
Xt = SelectKBest(chi2, k=2).fit_transform(X, y)
assert_equal(Xt.shape, (2, 2))
assert_not_in(9998, Xt)
Xt = SelectPercentile(chi2, percentile=67).fit_transform(X, y)
assert_equal(Xt.shape, (2, 2))
assert_not_in(9998, Xt)
def test_tied_scores():
# Test for stable sorting in k-best with tied scores.
X_train = np.array([[0, 0, 0], [1, 1, 1]])
y_train = [0, 1]
for n_features in [1, 2, 3]:
sel = SelectKBest(chi2, k=n_features).fit(X_train, y_train)
X_test = sel.transform([[0, 1, 2]])
assert_array_equal(X_test[0], np.arange(3)[-n_features:])
def test_nans():
# Assert that SelectKBest and SelectPercentile can handle NaNs.
# First feature has zero variance to confuse f_classif (ANOVA) and
# make it return a NaN.
X = [[0, 1, 0], [0, -1, -1], [0, .5, .5]]
y = [1, 0, 1]
for select in (SelectKBest(f_classif, 2),
SelectPercentile(f_classif, percentile=67)):
ignore_warnings(select.fit)(X, y)
assert_array_equal(select.get_support(indices=True), np.array([1, 2]))
def test_score_func_error():
X = [[0, 1, 0], [0, -1, -1], [0, .5, .5]]
y = [1, 0, 1]
for SelectFeatures in [SelectKBest, SelectPercentile, SelectFwe,
SelectFdr, SelectFpr, GenericUnivariateSelect]:
assert_raises(TypeError, SelectFeatures(score_func=10).fit, X, y)
def test_invalid_k():
X = [[0, 1, 0], [0, -1, -1], [0, .5, .5]]
y = [1, 0, 1]
assert_raises(ValueError, SelectKBest(k=-1).fit, X, y)
assert_raises(ValueError, SelectKBest(k=4).fit, X, y)
assert_raises(ValueError,
GenericUnivariateSelect(mode='k_best', param=-1).fit, X, y)
assert_raises(ValueError,
GenericUnivariateSelect(mode='k_best', param=4).fit, X, y)
def test_f_classif_constant_feature():
# Test that f_classif warns if a feature is constant throughout.
X, y = make_classification(n_samples=10, n_features=5)
X[:, 0] = 2.0
assert_warns(UserWarning, f_classif, X, y)
def test_no_feature_selected():
rng = np.random.RandomState(0)
    # Generate random uncorrelated data: a strict univariate test should
    # reject all the features
X = rng.rand(40, 10)
y = rng.randint(0, 4, size=40)
strict_selectors = [
SelectFwe(alpha=0.01).fit(X, y),
SelectFdr(alpha=0.01).fit(X, y),
SelectFpr(alpha=0.01).fit(X, y),
SelectPercentile(percentile=0).fit(X, y),
SelectKBest(k=0).fit(X, y),
]
for selector in strict_selectors:
assert_array_equal(selector.get_support(), np.zeros(10))
X_selected = assert_warns_message(
UserWarning, 'No features were selected', selector.transform, X)
assert_equal(X_selected.shape, (40, 0))
| bsd-3-clause | -3,523,390,748,438,043,000 | 38.958781 | 79 | 0.609364 | false |
jnishi/chainer | chainer/links/connection/lstm.py | 2 | 12108 | import six
import chainer
from chainer.backends import cuda
from chainer.functions.activation import lstm
from chainer.functions.array import concat
from chainer.functions.array import split_axis
from chainer import initializers
from chainer import link
from chainer.links.connection import linear
from chainer import utils
from chainer import variable
class LSTMBase(link.Chain):
def __init__(self, in_size, out_size=None, lateral_init=None,
upward_init=None, bias_init=None, forget_bias_init=None):
if out_size is None:
out_size, in_size = in_size, None
super(LSTMBase, self).__init__()
if bias_init is None:
bias_init = 0
if forget_bias_init is None:
forget_bias_init = 1
self.state_size = out_size
self.lateral_init = lateral_init
self.upward_init = upward_init
self.bias_init = bias_init
self.forget_bias_init = forget_bias_init
with self.init_scope():
self.upward = linear.Linear(in_size, 4 * out_size, initialW=0)
self.lateral = linear.Linear(out_size, 4 * out_size, initialW=0,
nobias=True)
if in_size is not None:
self._initialize_params()
def _initialize_params(self):
lateral_init = initializers._get_initializer(self.lateral_init)
upward_init = initializers._get_initializer(self.upward_init)
bias_init = initializers._get_initializer(self.bias_init)
forget_bias_init = initializers._get_initializer(self.forget_bias_init)
for i in six.moves.range(0, 4 * self.state_size, self.state_size):
lateral_init(self.lateral.W.array[i:i + self.state_size, :])
upward_init(self.upward.W.array[i:i + self.state_size, :])
a, i, f, o = lstm._extract_gates(
self.upward.b.array.reshape(1, 4 * self.state_size, 1))
bias_init(a)
bias_init(i)
forget_bias_init(f)
bias_init(o)
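    # Note (added for clarity): the upward/lateral layers stack all four
    # LSTM gates into a single 4 * state_size output, so the loop above
    # applies the weight initializers in state_size-sized chunks, while the
    # bias is split per gate via lstm._extract_gates so that only the
    # forget gate receives forget_bias_init.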
class StatelessLSTM(LSTMBase):
"""Stateless LSTM layer.
This is a fully-connected LSTM layer as a chain. Unlike the
:func:`~chainer.functions.lstm` function, this chain holds upward and
lateral connections as child links. This link doesn't keep cell and
hidden states.
Args:
in_size (int or None): Dimension of input vectors. If ``None``,
parameter initialization will be deferred until the first forward
data pass at which time the size will be determined.
out_size (int): Dimensionality of output vectors.
Attributes:
upward (chainer.links.Linear): Linear layer of upward connections.
lateral (chainer.links.Linear): Linear layer of lateral connections.
.. admonition:: Example
There are several ways to make a StatelessLSTM link.
    Let a two-dimensional input array :math:`x`, a cell state array
    :math:`c`, and the output array of the previous step :math:`h` be:
>>> x = np.zeros((1, 10), dtype=np.float32)
>>> c = np.zeros((1, 20), dtype=np.float32)
>>> h = np.zeros((1, 20), dtype=np.float32)
1. Give both ``in_size`` and ``out_size`` arguments:
>>> l = L.StatelessLSTM(10, 20)
>>> c_new, h_new = l(c, h, x)
>>> c_new.shape
(1, 20)
>>> h_new.shape
(1, 20)
2. Omit ``in_size`` argument or fill it with ``None``:
The below two cases are the same.
>>> l = L.StatelessLSTM(20)
>>> c_new, h_new = l(c, h, x)
>>> c_new.shape
(1, 20)
>>> h_new.shape
(1, 20)
>>> l = L.StatelessLSTM(None, 20)
>>> c_new, h_new = l(c, h, x)
>>> c_new.shape
(1, 20)
>>> h_new.shape
(1, 20)
"""
def forward(self, c, h, x):
"""Returns new cell state and updated output of LSTM.
Args:
c (~chainer.Variable): Cell states of LSTM units.
h (~chainer.Variable): Output at the previous time step.
x (~chainer.Variable): A new batch from the input sequence.
Returns:
tuple of ~chainer.Variable: Returns ``(c_new, h_new)``, where
``c_new`` represents new cell state, and ``h_new`` is updated
output of LSTM units.
"""
if self.upward.W.array is None:
in_size = x.size // x.shape[0]
with chainer.using_device(self.device):
self.upward._initialize_params(in_size)
self._initialize_params()
lstm_in = self.upward(x)
if h is not None:
lstm_in += self.lateral(h)
if c is None:
xp = self.xp
with chainer.using_device(self.device):
c = variable.Variable(
xp.zeros((x.shape[0], self.state_size), dtype=x.dtype))
return lstm.lstm(c, lstm_in)
class LSTM(LSTMBase):
"""Fully-connected LSTM layer.
This is a fully-connected LSTM layer as a chain. Unlike the
:func:`~chainer.functions.lstm` function, which is defined as a stateless
activation function, this chain holds upward and lateral connections as
child links.
It also maintains *states*, including the cell state and the output
at the previous time step. Therefore, it can be used as a *stateful LSTM*.
This link supports variable length inputs. The mini-batch size of the
current input must be equal to or smaller than that of the previous one.
The mini-batch size of ``c`` and ``h`` is determined as that of the first
input ``x``.
    When the mini-batch size of the ``i``-th input is smaller than that of
    the previous input, this link only updates ``c[0:len(x)]`` and
    ``h[0:len(x)]`` and doesn't change the rest of ``c`` and ``h``.
So, please sort input sequences in descending order of lengths before
applying the function.
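    For example (an illustrative note, not part of the original docstring):
    after feeding a batch of size 3, a subsequent batch of size 2 updates
    only ``c[0:2]`` and ``h[0:2]``; the third row of each state is carried
    over unchanged.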
Args:
in_size (int): Dimension of input vectors. If it is ``None`` or
omitted, parameter initialization will be deferred until the first
forward data pass at which time the size will be determined.
out_size (int): Dimensionality of output vectors.
lateral_init: A callable that takes ``numpy.ndarray`` or
``cupy.ndarray`` and edits its value.
It is used for initialization of the lateral connections.
May be ``None`` to use default initialization.
upward_init: A callable that takes ``numpy.ndarray`` or
``cupy.ndarray`` and edits its value.
It is used for initialization of the upward connections.
May be ``None`` to use default initialization.
        bias_init: A callable that takes ``numpy.ndarray`` or
            ``cupy.ndarray`` and edits its value.
            It is used for initialization of the biases of the cell input,
            input gate and output gate of the upward connection.
            May be a scalar; in that case, the bias is
            initialized to this value.
            If it is ``None``, these biases are initialized to zero.
forget_bias_init: A callable that takes ``numpy.ndarray`` or
            ``cupy.ndarray`` and edits its value.
            It is used for initialization of the biases of the forget gate of
            the upward connection.
            May be a scalar; in that case, the bias is
            initialized to this value.
If it is ``None``, the forget bias is initialized to one.
Attributes:
upward (~chainer.links.Linear): Linear layer of upward connections.
lateral (~chainer.links.Linear): Linear layer of lateral connections.
c (~chainer.Variable): Cell states of LSTM units.
h (~chainer.Variable): Output at the previous time step.
.. admonition:: Example
There are several ways to make a LSTM link.
Let a two-dimensional input array :math:`x` be:
>>> x = np.zeros((1, 10), dtype=np.float32)
1. Give both ``in_size`` and ``out_size`` arguments:
>>> l = L.LSTM(10, 20)
>>> h_new = l(x)
>>> h_new.shape
(1, 20)
2. Omit ``in_size`` argument or fill it with ``None``:
The below two cases are the same.
>>> l = L.LSTM(20)
>>> h_new = l(x)
>>> h_new.shape
(1, 20)
>>> l = L.LSTM(None, 20)
>>> h_new = l(x)
>>> h_new.shape
(1, 20)
"""
def __init__(self, in_size, out_size=None, lateral_init=None,
upward_init=None, bias_init=None, forget_bias_init=None):
if out_size is None:
in_size, out_size = None, in_size
super(LSTM, self).__init__(
in_size, out_size, lateral_init, upward_init, bias_init,
forget_bias_init)
self.reset_state()
def _to_device(self, device, skip_between_cupy_devices=False):
# Overrides Link._to_device
# TODO(niboshi): Avoid forcing concrete links to override _to_device
device = chainer.get_device(device)
super(LSTM, self)._to_device(
device, skip_between_cupy_devices=skip_between_cupy_devices)
if self.c is not None:
if not (skip_between_cupy_devices
and device.xp is cuda.cupy
and isinstance(self.c, cuda.ndarray)):
self.c.to_device(device)
if self.h is not None:
if not (skip_between_cupy_devices
and device.xp is cuda.cupy
and isinstance(self.h, cuda.ndarray)):
self.h.to_device(device)
return self
def set_state(self, c, h):
"""Sets the internal state.
It sets the :attr:`c` and :attr:`h` attributes.
Args:
c (~chainer.Variable): A new cell states of LSTM units.
h (~chainer.Variable): A new output at the previous time step.
"""
assert isinstance(c, variable.Variable)
assert isinstance(h, variable.Variable)
c.to_device(self.device)
h.to_device(self.device)
self.c = c
self.h = h
def reset_state(self):
"""Resets the internal state.
It sets ``None`` to the :attr:`c` and :attr:`h` attributes.
"""
self.c = self.h = None
def forward(self, x):
"""Updates the internal state and returns the LSTM outputs.
Args:
x (~chainer.Variable): A new batch from the input sequence.
Returns:
~chainer.Variable: Outputs of updated LSTM units.
"""
if self.upward.W.array is None:
with chainer.using_device(self.device):
in_size = utils.size_of_shape(x.shape[1:])
self.upward._initialize_params(in_size)
self._initialize_params()
batch = x.shape[0]
lstm_in = self.upward(x)
h_rest = None
if self.h is not None:
h_size = self.h.shape[0]
if batch == 0:
h_rest = self.h
elif h_size < batch:
                msg = ('The batch size of x must be equal to or less than '
                       'the size of the previous state h.')
raise TypeError(msg)
elif h_size > batch:
h_update, h_rest = split_axis.split_axis(
self.h, [batch], axis=0)
lstm_in += self.lateral(h_update)
else:
lstm_in += self.lateral(self.h)
if self.c is None:
with chainer.using_device(self.device):
self.c = variable.Variable(
self.xp.zeros((batch, self.state_size), dtype=x.dtype))
self.c, y = lstm.lstm(self.c, lstm_in)
if h_rest is None:
self.h = y
elif len(y.array) == 0:
self.h = h_rest
else:
self.h = concat.concat([y, h_rest], axis=0)
return y
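# Minimal usage sketch (an illustrative addition mirroring the docstring
# example above; variable names are hypothetical):
#
#     rnn = LSTM(None, 20)
#     rnn.reset_state()
#     for x in batches:   # batches sorted by descending sequence length
#         y = rnn(x)      # cell/hidden state is carried between calls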
| mit | 8,633,784,803,710,555,000 | 35.251497 | 79 | 0.569954 | false |
israeltobias/DownMedia | youtube-dl/youtube_dl/extractor/atresplayer.py | 5 | 7688 | from __future__ import unicode_literals
import time
import hmac
import hashlib
import re
from .common import InfoExtractor
from ..compat import compat_str
from ..utils import (
ExtractorError,
float_or_none,
int_or_none,
sanitized_Request,
urlencode_postdata,
xpath_text,
)
class AtresPlayerIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?atresplayer\.com/television/[^/]+/[^/]+/[^/]+/(?P<id>.+?)_\d+\.html'
_NETRC_MACHINE = 'atresplayer'
_TESTS = [
{
'url': 'http://www.atresplayer.com/television/programas/el-club-de-la-comedia/temporada-4/capitulo-10-especial-solidario-nochebuena_2014122100174.html',
'md5': 'efd56753cda1bb64df52a3074f62e38a',
'info_dict': {
'id': 'capitulo-10-especial-solidario-nochebuena',
'ext': 'mp4',
'title': 'Especial Solidario de Nochebuena',
'description': 'md5:e2d52ff12214fa937107d21064075bf1',
'duration': 5527.6,
'thumbnail': r're:^https?://.*\.jpg$',
},
'skip': 'This video is only available for registered users'
},
{
'url': 'http://www.atresplayer.com/television/especial/videoencuentros/temporada-1/capitulo-112-david-bustamante_2014121600375.html',
'md5': '0d0e918533bbd4b263f2de4d197d4aac',
'info_dict': {
'id': 'capitulo-112-david-bustamante',
'ext': 'flv',
'title': 'David Bustamante',
'description': 'md5:f33f1c0a05be57f6708d4dd83a3b81c6',
'duration': 1439.0,
'thumbnail': r're:^https?://.*\.jpg$',
},
},
{
'url': 'http://www.atresplayer.com/television/series/el-secreto-de-puente-viejo/el-chico-de-los-tres-lunares/capitulo-977-29-12-14_2014122400174.html',
'only_matching': True,
},
]
_USER_AGENT = 'Dalvik/1.6.0 (Linux; U; Android 4.3; GT-I9300 Build/JSS15J'
_MAGIC = 'QWtMLXs414Yo+c#_+Q#K@NN)'
_TIMESTAMP_SHIFT = 30000
_TIME_API_URL = 'http://servicios.atresplayer.com/api/admin/time.json'
_URL_VIDEO_TEMPLATE = 'https://servicios.atresplayer.com/api/urlVideo/{1}/{0}/{1}|{2}|{3}.json'
_PLAYER_URL_TEMPLATE = 'https://servicios.atresplayer.com/episode/getplayer.json?episodePk=%s'
_EPISODE_URL_TEMPLATE = 'http://www.atresplayer.com/episodexml/%s'
_LOGIN_URL = 'https://servicios.atresplayer.com/j_spring_security_check'
_ERRORS = {
'UNPUBLISHED': 'We\'re sorry, but this video is not yet available.',
'DELETED': 'This video has expired and is no longer available for online streaming.',
        'GEOUNPUBLISHED': 'We\'re sorry, but this video is not available in your region due to rights restrictions.',
# 'PREMIUM': 'PREMIUM',
}
def _real_initialize(self):
self._login()
def _login(self):
(username, password) = self._get_login_info()
if username is None:
return
login_form = {
'j_username': username,
'j_password': password,
}
request = sanitized_Request(
self._LOGIN_URL, urlencode_postdata(login_form))
request.add_header('Content-Type', 'application/x-www-form-urlencoded')
response = self._download_webpage(
request, None, 'Logging in as %s' % username)
error = self._html_search_regex(
r'(?s)<ul class="list_error">(.+?)</ul>', response, 'error', default=None)
if error:
raise ExtractorError(
'Unable to login: %s' % error, expected=True)
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
episode_id = self._search_regex(
r'episode="([^"]+)"', webpage, 'episode id')
request = sanitized_Request(
self._PLAYER_URL_TEMPLATE % episode_id,
headers={'User-Agent': self._USER_AGENT})
player = self._download_json(request, episode_id, 'Downloading player JSON')
episode_type = player.get('typeOfEpisode')
error_message = self._ERRORS.get(episode_type)
if error_message:
raise ExtractorError(
'%s returned error: %s' % (self.IE_NAME, error_message), expected=True)
formats = []
video_url = player.get('urlVideo')
if video_url:
format_info = {
'url': video_url,
'format_id': 'http',
}
mobj = re.search(r'(?P<bitrate>\d+)K_(?P<width>\d+)x(?P<height>\d+)', video_url)
if mobj:
format_info.update({
'width': int_or_none(mobj.group('width')),
'height': int_or_none(mobj.group('height')),
'tbr': int_or_none(mobj.group('bitrate')),
})
formats.append(format_info)
timestamp = int_or_none(self._download_webpage(
self._TIME_API_URL,
video_id, 'Downloading timestamp', fatal=False), 1000, time.time())
timestamp_shifted = compat_str(timestamp + self._TIMESTAMP_SHIFT)
token = hmac.new(
self._MAGIC.encode('ascii'),
(episode_id + timestamp_shifted).encode('utf-8'), hashlib.md5
).hexdigest()
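        # Signing sketch (restating the computation above with illustrative
        # values): for episode_id='abc' and timestamp_shifted='1430030000',
        # the token is the hex MD5-HMAC of 'abc1430030000' keyed with _MAGIC.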
request = sanitized_Request(
self._URL_VIDEO_TEMPLATE.format('windows', episode_id, timestamp_shifted, token),
headers={'User-Agent': self._USER_AGENT})
fmt_json = self._download_json(
request, video_id, 'Downloading windows video JSON')
result = fmt_json.get('resultDes')
if result.lower() != 'ok':
raise ExtractorError(
'%s returned error: %s' % (self.IE_NAME, result), expected=True)
for format_id, video_url in fmt_json['resultObject'].items():
if format_id == 'token' or not video_url.startswith('http'):
continue
if 'geodeswowsmpra3player' in video_url:
f4m_path = video_url.split('smil:', 1)[-1].split('free_', 1)[0]
f4m_url = 'http://drg.antena3.com/{0}hds/es/sd.f4m'.format(f4m_path)
                # these videos are protected by DRM; the f4m downloader doesn't support them
continue
else:
f4m_url = video_url[:-9] + '/manifest.f4m'
formats.extend(self._extract_f4m_formats(f4m_url, video_id, f4m_id='hds', fatal=False))
self._sort_formats(formats)
path_data = player.get('pathData')
episode = self._download_xml(
self._EPISODE_URL_TEMPLATE % path_data, video_id,
'Downloading episode XML')
duration = float_or_none(xpath_text(
episode, './media/asset/info/technical/contentDuration', 'duration'))
art = episode.find('./media/asset/info/art')
title = xpath_text(art, './name', 'title')
description = xpath_text(art, './description', 'description')
thumbnail = xpath_text(episode, './media/asset/files/background', 'thumbnail')
subtitles = {}
subtitle_url = xpath_text(episode, './media/asset/files/subtitle', 'subtitle')
if subtitle_url:
subtitles['es'] = [{
'ext': 'srt',
'url': subtitle_url,
}]
return {
'id': video_id,
'title': title,
'description': description,
'thumbnail': thumbnail,
'duration': duration,
'formats': formats,
'subtitles': subtitles,
}
| gpl-3.0 | -1,045,066,340,403,646,200 | 38.025381 | 164 | 0.562175 | false |
ocadotechnology/django-forge | forge/views/v3.py | 2 | 4771 | import urllib
import urlparse
from django.core.paginator import Paginator
from django.core.urlresolvers import reverse
from .utils import json_response
from ..models import Author, Module, Release
## Helper methods
def error_response(errors, **kwargs):
"""
Returns an error response for v3 Forge API.
"""
error_dict = {'errors': errors}
if 'message' in kwargs:
error_dict['message'] = kwargs['message']
return json_response(
error_dict, indent=2,
status=kwargs.get('status', 400)
)
def query_dict(request):
"""
    Returns a query dictionary initialized with common parameters for v3 views.
"""
try:
limit = int(request.GET.get('limit', 20))
except ValueError:
limit = 20
try:
offset = int(request.GET.get('offset', 0))
except ValueError:
offset = 0
return {
'limit': limit,
'offset': offset,
}
def pagination_data(qs, query, url_name):
"""
    Returns a two-tuple comprising a Page and a dictionary of pagination data
corresponding to the given queryset, query parameters, and URL name.
"""
limit = query['limit']
offset = query['offset']
p = Paginator(qs, limit)
page_num = (offset / p.per_page) + 1
page = p.page(page_num)
cur_url = urlparse.urlsplit(reverse(url_name))
first_query = query.copy()
first_query['offset'] = 0
first_url = urlparse.urlunsplit(
(cur_url.scheme, cur_url.netloc, cur_url.path,
urllib.urlencode(first_query), cur_url.fragment)
)
if page.has_previous():
prev_query = query.copy()
prev_query['offset'] = (page_num - 2) * p.per_page
prev_url = urlparse.urlunsplit(
(cur_url.scheme, cur_url.netloc, cur_url.path,
urllib.urlencode(prev_query), cur_url.fragment)
)
else:
prev_url = None
if page.has_next():
next_query = query.copy()
next_query['offset'] = page_num * p.per_page
next_url = urlparse.urlunsplit(
(cur_url.scheme, cur_url.netloc, cur_url.path,
urllib.urlencode(next_query), cur_url.fragment)
)
else:
next_url = None
pagination_dict = {
'limit': limit,
'offset': offset,
'first': first_url,
'previous': prev_url,
'next': next_url,
'total': p.count,
}
return page, pagination_dict
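# Usage sketch (illustrative; mirrors the view functions below):
#
#     page, pagination = pagination_data(qs, query_dict(request), 'modules_v3')
#     data = {'pagination': pagination,
#             'results': [obj.v3 for obj in page.object_list]}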
## API views
def modules(request):
"""
Provides the `/v3/modules` API endpoint.
"""
query = query_dict(request)
q = request.GET.get('query', None)
if q:
        # Client has provided a search query.
query['query'] = q
parsed = Module.objects.parse_full_name(q)
if parsed:
# If query looks like a module name, try and get it.
author, name = parsed
qs = Module.objects.filter(author__name=author, name=name)
else:
# Otherwise we search other fields.
qs = (
Module.objects.filter(name__icontains=q) |
Module.objects.filter(author__name__icontains=q) |
Module.objects.filter(releases__version__icontains=q) |
Module.objects.filter(tags__icontains=q) |
Module.objects.filter(desc__icontains=q)
)
else:
qs = Module.objects.all()
# Ensure only distinct records are returned.
qs = qs.order_by('author__name').distinct()
# Get pagination page and data.
page, pagination_dict = pagination_data(qs, query, 'modules_v3')
modules_data = {
'pagination': pagination_dict,
'results': [module.v3 for module in page.object_list],
}
return json_response(modules_data, indent=2)
def releases(request):
"""
Provides the `/v3/releases` API endpoint.
"""
query = query_dict(request)
qs = Release.objects.all()
module_name = request.GET.get('module', None)
if module_name:
query['module'] = module_name
if Module.objects.parse_full_name(module_name):
try:
qs = qs.filter(
module=Module.objects.get_for_full_name(module_name)
)
except Module.DoesNotExist:
qs = qs.none()
else:
return error_response(
["'%s' is not a valid full modulename" % module_name]
)
# Get pagination page and data.
page, pagination_dict = pagination_data(qs, query, 'releases_v3')
# Constructing releases_data dictionary for serialization.
releases_data = {
'pagination': pagination_dict,
'results': [rel.v3 for rel in page.object_list],
}
return json_response(releases_data, indent=2)
| apache-2.0 | -1,914,287,290,377,931,000 | 26.578035 | 76 | 0.584783 | false |
wouwei/PiLapse | picam/picamEnv/Lib/site-packages/pip/req/req_install.py | 50 | 45589 | from __future__ import absolute_import
import logging
import os
import re
import shutil
import sys
import tempfile
import traceback
import warnings
import zipfile
from distutils import sysconfig
from distutils.util import change_root
from email.parser import FeedParser
from pip._vendor import pkg_resources, six
from pip._vendor.distlib.markers import interpret as markers_interpret
from pip._vendor.packaging import specifiers
from pip._vendor.packaging.utils import canonicalize_name
from pip._vendor.six.moves import configparser
import pip.wheel
from pip.compat import native_str, get_stdlib, WINDOWS
from pip.download import is_url, url_to_path, path_to_url, is_archive_file
from pip.exceptions import (
InstallationError, UninstallationError, UnsupportedWheel,
)
from pip.locations import (
bin_py, running_under_virtualenv, PIP_DELETE_MARKER_FILENAME, bin_user,
)
from pip.utils import (
display_path, rmtree, ask_path_exists, backup_dir, is_installable_dir,
dist_in_usersite, dist_in_site_packages, egg_link_path,
call_subprocess, read_text_file, FakeFile, _make_build_dir, ensure_dir,
get_installed_version, normalize_path, dist_is_local,
)
from pip.utils.hashes import Hashes
from pip.utils.deprecation import RemovedInPip9Warning, RemovedInPip10Warning
from pip.utils.logging import indent_log
from pip.utils.setuptools_build import SETUPTOOLS_SHIM
from pip.utils.ui import open_spinner
from pip.req.req_uninstall import UninstallPathSet
from pip.vcs import vcs
from pip.wheel import move_wheel_files, Wheel
from pip._vendor.packaging.version import Version
logger = logging.getLogger(__name__)
operators = specifiers.Specifier._operators.keys()
def _strip_extras(path):
m = re.match(r'^(.+)(\[[^\]]+\])$', path)
extras = None
if m:
path_no_extras = m.group(1)
extras = m.group(2)
else:
path_no_extras = path
return path_no_extras, extras
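# Example (illustrative): _strip_extras('./pkg[extra1,extra2]') returns
# ('./pkg', '[extra1,extra2]'); a path without extras comes back unchanged
# with extras=None.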
class InstallRequirement(object):
def __init__(self, req, comes_from, source_dir=None, editable=False,
link=None, as_egg=False, update=True,
pycompile=True, markers=None, isolated=False, options=None,
wheel_cache=None, constraint=False):
self.extras = ()
if isinstance(req, six.string_types):
try:
req = pkg_resources.Requirement.parse(req)
except pkg_resources.RequirementParseError:
if os.path.sep in req:
add_msg = "It looks like a path. Does it exist ?"
elif '=' in req and not any(op in req for op in operators):
add_msg = "= is not a valid operator. Did you mean == ?"
else:
add_msg = traceback.format_exc()
raise InstallationError(
"Invalid requirement: '%s'\n%s" % (req, add_msg))
self.extras = req.extras
self.req = req
self.comes_from = comes_from
self.constraint = constraint
self.source_dir = source_dir
self.editable = editable
self._wheel_cache = wheel_cache
self.link = self.original_link = link
self.as_egg = as_egg
self.markers = markers
self._egg_info_path = None
# This holds the pkg_resources.Distribution object if this requirement
# is already available:
self.satisfied_by = None
# This hold the pkg_resources.Distribution object if this requirement
# conflicts with another installed distribution:
self.conflicts_with = None
# Temporary build location
self._temp_build_dir = None
# Used to store the global directory where the _temp_build_dir should
# have been created. Cf _correct_build_location method.
self._ideal_build_dir = None
# True if the editable should be updated:
self.update = update
# Set to True after successful installation
self.install_succeeded = None
# UninstallPathSet of uninstalled distribution (for possible rollback)
self.uninstalled = None
# Set True if a legitimate do-nothing-on-uninstall has happened - e.g.
# system site packages, stdlib packages.
self.nothing_to_uninstall = False
self.use_user_site = False
self.target_dir = None
self.options = options if options else {}
self.pycompile = pycompile
# Set to True after successful preparation of this requirement
self.prepared = False
self.isolated = isolated
@classmethod
def from_editable(cls, editable_req, comes_from=None, default_vcs=None,
isolated=False, options=None, wheel_cache=None,
constraint=False):
from pip.index import Link
name, url, extras_override = parse_editable(
editable_req, default_vcs)
if url.startswith('file:'):
source_dir = url_to_path(url)
else:
source_dir = None
res = cls(name, comes_from, source_dir=source_dir,
editable=True,
link=Link(url),
constraint=constraint,
isolated=isolated,
options=options if options else {},
wheel_cache=wheel_cache)
if extras_override is not None:
res.extras = extras_override
return res
@classmethod
def from_line(
cls, name, comes_from=None, isolated=False, options=None,
wheel_cache=None, constraint=False):
"""Creates an InstallRequirement from a name, which might be a
        requirement, a directory containing 'setup.py', a filename, or a URL.
"""
from pip.index import Link
if is_url(name):
marker_sep = '; '
else:
marker_sep = ';'
if marker_sep in name:
name, markers = name.split(marker_sep, 1)
markers = markers.strip()
if not markers:
markers = None
else:
markers = None
name = name.strip()
req = None
path = os.path.normpath(os.path.abspath(name))
link = None
extras = None
if is_url(name):
link = Link(name)
else:
p, extras = _strip_extras(path)
if (os.path.isdir(p) and
(os.path.sep in name or name.startswith('.'))):
if not is_installable_dir(p):
raise InstallationError(
"Directory %r is not installable. File 'setup.py' "
"not found." % name
)
link = Link(path_to_url(p))
elif is_archive_file(p):
if not os.path.isfile(p):
logger.warning(
'Requirement %r looks like a filename, but the '
'file does not exist',
name
)
link = Link(path_to_url(p))
# it's a local file, dir, or url
if link:
# Handle relative file URLs
if link.scheme == 'file' and re.search(r'\.\./', link.url):
link = Link(
path_to_url(os.path.normpath(os.path.abspath(link.path))))
# wheel file
if link.is_wheel:
wheel = Wheel(link.filename) # can raise InvalidWheelFilename
if not wheel.supported():
raise UnsupportedWheel(
"%s is not a supported wheel on this platform." %
wheel.filename
)
req = "%s==%s" % (wheel.name, wheel.version)
else:
# set the req to the egg fragment. when it's not there, this
# will become an 'unnamed' requirement
req = link.egg_fragment
# a requirement specifier
else:
req = name
options = options if options else {}
res = cls(req, comes_from, link=link, markers=markers,
isolated=isolated, options=options,
wheel_cache=wheel_cache, constraint=constraint)
if extras:
res.extras = pkg_resources.Requirement.parse('__placeholder__' +
extras).extras
return res
def __str__(self):
if self.req:
s = str(self.req)
if self.link:
s += ' from %s' % self.link.url
else:
s = self.link.url if self.link else None
if self.satisfied_by is not None:
s += ' in %s' % display_path(self.satisfied_by.location)
if self.comes_from:
if isinstance(self.comes_from, six.string_types):
comes_from = self.comes_from
else:
comes_from = self.comes_from.from_path()
if comes_from:
s += ' (from %s)' % comes_from
return s
def __repr__(self):
return '<%s object: %s editable=%r>' % (
self.__class__.__name__, str(self), self.editable)
def populate_link(self, finder, upgrade, require_hashes):
"""Ensure that if a link can be found for this, that it is found.
Note that self.link may still be None - if Upgrade is False and the
requirement is already installed.
If require_hashes is True, don't use the wheel cache, because cached
wheels, always built locally, have different hashes than the files
downloaded from the index server and thus throw false hash mismatches.
        Furthermore, cached wheels at present have nondeterministic contents due
to file modification times.
"""
if self.link is None:
self.link = finder.find_requirement(self, upgrade)
if self._wheel_cache is not None and not require_hashes:
old_link = self.link
self.link = self._wheel_cache.cached_wheel(self.link, self.name)
if old_link != self.link:
logger.debug('Using cached wheel link: %s', self.link)
@property
def specifier(self):
return self.req.specifier
@property
def is_pinned(self):
"""Return whether I am pinned to an exact version.
For example, some-package==1.2 is pinned; some-package>1.2 is not.
"""
specifiers = self.specifier
return (len(specifiers) == 1 and
next(iter(specifiers)).operator in ('==', '==='))
def from_path(self):
if self.req is None:
return None
s = str(self.req)
if self.comes_from:
if isinstance(self.comes_from, six.string_types):
comes_from = self.comes_from
else:
comes_from = self.comes_from.from_path()
if comes_from:
s += '->' + comes_from
return s
def build_location(self, build_dir):
if self._temp_build_dir is not None:
return self._temp_build_dir
if self.req is None:
# for requirement via a path to a directory: the name of the
# package is not available yet so we create a temp directory
            # Once run_egg_info has run, we'll be able
# to fix it via _correct_build_location
self._temp_build_dir = tempfile.mkdtemp('-build', 'pip-')
self._ideal_build_dir = build_dir
return self._temp_build_dir
if self.editable:
name = self.name.lower()
else:
name = self.name
# FIXME: Is there a better place to create the build_dir? (hg and bzr
# need this)
if not os.path.exists(build_dir):
logger.debug('Creating directory %s', build_dir)
_make_build_dir(build_dir)
return os.path.join(build_dir, name)
def _correct_build_location(self):
"""Move self._temp_build_dir to self._ideal_build_dir/self.req.name
For some requirements (e.g. a path to a directory), the name of the
package is not available until we run egg_info, so the build_location
will return a temporary directory and store the _ideal_build_dir.
        This is only called from run_egg_info, once the package name is
        known, to fix the temporary build directory.
"""
if self.source_dir is not None:
return
assert self.req is not None
assert self._temp_build_dir
assert self._ideal_build_dir
old_location = self._temp_build_dir
self._temp_build_dir = None
new_location = self.build_location(self._ideal_build_dir)
if os.path.exists(new_location):
raise InstallationError(
'A package already exists in %s; please remove it to continue'
% display_path(new_location))
logger.debug(
'Moving package %s from %s to new location %s',
self, display_path(old_location), display_path(new_location),
)
shutil.move(old_location, new_location)
self._temp_build_dir = new_location
self._ideal_build_dir = None
self.source_dir = new_location
self._egg_info_path = None
@property
def name(self):
if self.req is None:
return None
return native_str(self.req.project_name)
@property
def setup_py_dir(self):
return os.path.join(
self.source_dir,
self.link and self.link.subdirectory_fragment or '')
@property
def setup_py(self):
assert self.source_dir, "No source dir for %s" % self
try:
import setuptools # noqa
except ImportError:
if get_installed_version('setuptools') is None:
add_msg = "Please install setuptools."
else:
add_msg = traceback.format_exc()
# Setuptools is not available
raise InstallationError(
"Could not import setuptools which is required to "
"install from a source distribution.\n%s" % add_msg
)
setup_py = os.path.join(self.setup_py_dir, 'setup.py')
# Python2 __file__ should not be unicode
if six.PY2 and isinstance(setup_py, six.text_type):
setup_py = setup_py.encode(sys.getfilesystemencoding())
return setup_py
def run_egg_info(self):
assert self.source_dir
if self.name:
logger.debug(
'Running setup.py (path:%s) egg_info for package %s',
self.setup_py, self.name,
)
else:
logger.debug(
'Running setup.py (path:%s) egg_info for package from %s',
self.setup_py, self.link,
)
with indent_log():
script = SETUPTOOLS_SHIM % self.setup_py
base_cmd = [sys.executable, '-c', script]
if self.isolated:
base_cmd += ["--no-user-cfg"]
egg_info_cmd = base_cmd + ['egg_info']
# We can't put the .egg-info files at the root, because then the
# source code will be mistaken for an installed egg, causing
# problems
if self.editable:
egg_base_option = []
else:
egg_info_dir = os.path.join(self.setup_py_dir, 'pip-egg-info')
ensure_dir(egg_info_dir)
egg_base_option = ['--egg-base', 'pip-egg-info']
call_subprocess(
egg_info_cmd + egg_base_option,
cwd=self.setup_py_dir,
show_stdout=False,
command_level=logging.DEBUG,
command_desc='python setup.py egg_info')
if not self.req:
if isinstance(
pkg_resources.parse_version(self.pkg_info()["Version"]),
Version):
op = "=="
else:
op = "==="
self.req = pkg_resources.Requirement.parse(
"".join([
self.pkg_info()["Name"],
op,
self.pkg_info()["Version"],
]))
self._correct_build_location()
else:
metadata_name = canonicalize_name(self.pkg_info()["Name"])
if canonicalize_name(self.req.project_name) != metadata_name:
logger.warning(
'Running setup.py (path:%s) egg_info for package %s '
'produced metadata for project name %s. Fix your '
'#egg=%s fragments.',
self.setup_py, self.name, metadata_name, self.name
)
self.req = pkg_resources.Requirement.parse(metadata_name)
def egg_info_data(self, filename):
if self.satisfied_by is not None:
if not self.satisfied_by.has_metadata(filename):
return None
return self.satisfied_by.get_metadata(filename)
assert self.source_dir
filename = self.egg_info_path(filename)
if not os.path.exists(filename):
return None
data = read_text_file(filename)
return data
def egg_info_path(self, filename):
if self._egg_info_path is None:
if self.editable:
base = self.source_dir
else:
base = os.path.join(self.setup_py_dir, 'pip-egg-info')
filenames = os.listdir(base)
if self.editable:
filenames = []
for root, dirs, files in os.walk(base):
for dir in vcs.dirnames:
if dir in dirs:
dirs.remove(dir)
# Iterate over a copy of ``dirs``, since mutating
# a list while iterating over it can cause trouble.
# (See https://github.com/pypa/pip/pull/462.)
for dir in list(dirs):
# Don't search in anything that looks like a virtualenv
# environment
if (
os.path.exists(
os.path.join(root, dir, 'bin', 'python')
) or
os.path.exists(
os.path.join(
root, dir, 'Scripts', 'Python.exe'
)
)):
dirs.remove(dir)
# Also don't search through tests
elif dir == 'test' or dir == 'tests':
dirs.remove(dir)
filenames.extend([os.path.join(root, dir)
for dir in dirs])
filenames = [f for f in filenames if f.endswith('.egg-info')]
if not filenames:
raise InstallationError(
'No files/directories in %s (from %s)' % (base, filename)
)
assert filenames, \
"No files/directories in %s (from %s)" % (base, filename)
# if we have more than one match, we pick the toplevel one. This
# can easily be the case if there is a dist folder which contains
# an extracted tarball for testing purposes.
if len(filenames) > 1:
filenames.sort(
key=lambda x: x.count(os.path.sep) +
(os.path.altsep and x.count(os.path.altsep) or 0)
)
self._egg_info_path = os.path.join(base, filenames[0])
return os.path.join(self._egg_info_path, filename)
def pkg_info(self):
p = FeedParser()
data = self.egg_info_data('PKG-INFO')
if not data:
logger.warning(
'No PKG-INFO file found in %s',
display_path(self.egg_info_path('PKG-INFO')),
)
p.feed(data or '')
return p.close()
_requirements_section_re = re.compile(r'\[(.*?)\]')
@property
def installed_version(self):
return get_installed_version(self.name)
def assert_source_matches_version(self):
assert self.source_dir
version = self.pkg_info()['version']
if version not in self.req:
logger.warning(
'Requested %s, but installing version %s',
self,
self.installed_version,
)
else:
logger.debug(
'Source in %s has version %s, which satisfies requirement %s',
display_path(self.source_dir),
version,
self,
)
def update_editable(self, obtain=True):
if not self.link:
logger.debug(
"Cannot update repository at %s; repository location is "
"unknown",
self.source_dir,
)
return
assert self.editable
assert self.source_dir
if self.link.scheme == 'file':
# Static paths don't get updated
return
assert '+' in self.link.url, "bad url: %r" % self.link.url
if not self.update:
return
vc_type, url = self.link.url.split('+', 1)
backend = vcs.get_backend(vc_type)
if backend:
vcs_backend = backend(self.link.url)
if obtain:
vcs_backend.obtain(self.source_dir)
else:
vcs_backend.export(self.source_dir)
else:
assert 0, (
'Unexpected version control type (in %s): %s'
% (self.link, vc_type))
def uninstall(self, auto_confirm=False):
"""
Uninstall the distribution currently satisfying this requirement.
Prompts before removing or modifying files unless
``auto_confirm`` is True.
Refuses to delete or modify files outside of ``sys.prefix`` -
thus uninstallation within a virtual environment can only
modify that virtual environment, even if the virtualenv is
linked to global site-packages.
"""
if not self.check_if_exists():
raise UninstallationError(
"Cannot uninstall requirement %s, not installed" % (self.name,)
)
dist = self.satisfied_by or self.conflicts_with
dist_path = normalize_path(dist.location)
if not dist_is_local(dist):
logger.info(
"Not uninstalling %s at %s, outside environment %s",
dist.key,
dist_path,
sys.prefix,
)
self.nothing_to_uninstall = True
return
if dist_path in get_stdlib():
logger.info(
"Not uninstalling %s at %s, as it is in the standard library.",
dist.key,
dist_path,
)
self.nothing_to_uninstall = True
return
paths_to_remove = UninstallPathSet(dist)
develop_egg_link = egg_link_path(dist)
develop_egg_link_egg_info = '{0}.egg-info'.format(
pkg_resources.to_filename(dist.project_name))
egg_info_exists = dist.egg_info and os.path.exists(dist.egg_info)
# Special case for distutils installed package
distutils_egg_info = getattr(dist._provider, 'path', None)
        # The order of the uninstall cases matters: given two installs of the
        # same package, pip needs to uninstall the currently detected version
if (egg_info_exists and dist.egg_info.endswith('.egg-info') and
not dist.egg_info.endswith(develop_egg_link_egg_info)):
# if dist.egg_info.endswith(develop_egg_link_egg_info), we
# are in fact in the develop_egg_link case
paths_to_remove.add(dist.egg_info)
if dist.has_metadata('installed-files.txt'):
for installed_file in dist.get_metadata(
'installed-files.txt').splitlines():
path = os.path.normpath(
os.path.join(dist.egg_info, installed_file)
)
paths_to_remove.add(path)
# FIXME: need a test for this elif block
# occurs with --single-version-externally-managed/--record outside
# of pip
elif dist.has_metadata('top_level.txt'):
if dist.has_metadata('namespace_packages.txt'):
namespaces = dist.get_metadata('namespace_packages.txt')
else:
namespaces = []
for top_level_pkg in [
p for p
in dist.get_metadata('top_level.txt').splitlines()
if p and p not in namespaces]:
path = os.path.join(dist.location, top_level_pkg)
paths_to_remove.add(path)
paths_to_remove.add(path + '.py')
paths_to_remove.add(path + '.pyc')
paths_to_remove.add(path + '.pyo')
elif distutils_egg_info:
warnings.warn(
"Uninstalling a distutils installed project ({0}) has been "
"deprecated and will be removed in a future version. This is "
"due to the fact that uninstalling a distutils project will "
"only partially uninstall the project.".format(self.name),
RemovedInPip10Warning,
)
paths_to_remove.add(distutils_egg_info)
elif dist.location.endswith('.egg'):
# package installed by easy_install
# We cannot match on dist.egg_name because it can slightly vary
# i.e. setuptools-0.6c11-py2.6.egg vs setuptools-0.6rc11-py2.6.egg
paths_to_remove.add(dist.location)
easy_install_egg = os.path.split(dist.location)[1]
easy_install_pth = os.path.join(os.path.dirname(dist.location),
'easy-install.pth')
paths_to_remove.add_pth(easy_install_pth, './' + easy_install_egg)
elif develop_egg_link:
# develop egg
with open(develop_egg_link, 'r') as fh:
link_pointer = os.path.normcase(fh.readline().strip())
assert (link_pointer == dist.location), (
'Egg-link %s does not match installed location of %s '
'(at %s)' % (link_pointer, self.name, dist.location)
)
paths_to_remove.add(develop_egg_link)
easy_install_pth = os.path.join(os.path.dirname(develop_egg_link),
'easy-install.pth')
paths_to_remove.add_pth(easy_install_pth, dist.location)
elif egg_info_exists and dist.egg_info.endswith('.dist-info'):
for path in pip.wheel.uninstallation_paths(dist):
paths_to_remove.add(path)
else:
logger.debug(
'Not sure how to uninstall: %s - Check: %s',
dist, dist.location)
        # find scripts installed via the distutils scripts= argument
if dist.has_metadata('scripts') and dist.metadata_isdir('scripts'):
for script in dist.metadata_listdir('scripts'):
if dist_in_usersite(dist):
bin_dir = bin_user
else:
bin_dir = bin_py
paths_to_remove.add(os.path.join(bin_dir, script))
if WINDOWS:
paths_to_remove.add(os.path.join(bin_dir, script) + '.bat')
# find console_scripts
if dist.has_metadata('entry_points.txt'):
if six.PY2:
options = {}
else:
options = {"delimiters": ('=', )}
config = configparser.SafeConfigParser(**options)
config.readfp(
FakeFile(dist.get_metadata_lines('entry_points.txt'))
)
if config.has_section('console_scripts'):
for name, value in config.items('console_scripts'):
if dist_in_usersite(dist):
bin_dir = bin_user
else:
bin_dir = bin_py
paths_to_remove.add(os.path.join(bin_dir, name))
if WINDOWS:
paths_to_remove.add(
os.path.join(bin_dir, name) + '.exe'
)
paths_to_remove.add(
os.path.join(bin_dir, name) + '.exe.manifest'
)
paths_to_remove.add(
os.path.join(bin_dir, name) + '-script.py'
)
paths_to_remove.remove(auto_confirm)
self.uninstalled = paths_to_remove
def rollback_uninstall(self):
if self.uninstalled:
self.uninstalled.rollback()
else:
logger.error(
"Can't rollback %s, nothing uninstalled.", self.name,
)
def commit_uninstall(self):
if self.uninstalled:
self.uninstalled.commit()
elif not self.nothing_to_uninstall:
logger.error(
"Can't commit %s, nothing uninstalled.", self.name,
)
def archive(self, build_dir):
assert self.source_dir
create_archive = True
archive_name = '%s-%s.zip' % (self.name, self.pkg_info()["version"])
archive_path = os.path.join(build_dir, archive_name)
if os.path.exists(archive_path):
response = ask_path_exists(
'The file %s exists. (i)gnore, (w)ipe, (b)ackup ' %
display_path(archive_path), ('i', 'w', 'b'))
if response == 'i':
create_archive = False
elif response == 'w':
logger.warning('Deleting %s', display_path(archive_path))
os.remove(archive_path)
elif response == 'b':
dest_file = backup_dir(archive_path)
logger.warning(
'Backing up %s to %s',
display_path(archive_path),
display_path(dest_file),
)
shutil.move(archive_path, dest_file)
if create_archive:
zip = zipfile.ZipFile(
archive_path, 'w', zipfile.ZIP_DEFLATED,
allowZip64=True
)
dir = os.path.normcase(os.path.abspath(self.setup_py_dir))
for dirpath, dirnames, filenames in os.walk(dir):
if 'pip-egg-info' in dirnames:
dirnames.remove('pip-egg-info')
for dirname in dirnames:
dirname = os.path.join(dirpath, dirname)
name = self._clean_zip_name(dirname, dir)
zipdir = zipfile.ZipInfo(self.name + '/' + name + '/')
zipdir.external_attr = 0x1ED << 16 # 0o755
zip.writestr(zipdir, '')
for filename in filenames:
if filename == PIP_DELETE_MARKER_FILENAME:
continue
filename = os.path.join(dirpath, filename)
name = self._clean_zip_name(filename, dir)
zip.write(filename, self.name + '/' + name)
zip.close()
logger.info('Saved %s', display_path(archive_path))
def _clean_zip_name(self, name, prefix):
assert name.startswith(prefix + os.path.sep), (
"name %r doesn't start with prefix %r" % (name, prefix)
)
name = name[len(prefix) + 1:]
name = name.replace(os.path.sep, '/')
return name
def match_markers(self):
if self.markers is not None:
return markers_interpret(self.markers)
else:
return True
def install(self, install_options, global_options=[], root=None,
prefix=None):
if self.editable:
self.install_editable(
install_options, global_options, prefix=prefix)
return
if self.is_wheel:
version = pip.wheel.wheel_version(self.source_dir)
pip.wheel.check_compatibility(version, self.name)
self.move_wheel_files(self.source_dir, root=root, prefix=prefix)
self.install_succeeded = True
return
# Extend the list of global and install options passed on to
# the setup.py call with the ones from the requirements file.
# Options specified in requirements file override those
# specified on the command line, since the last option given
# to setup.py is the one that is used.
global_options += self.options.get('global_options', [])
install_options += self.options.get('install_options', [])
if self.isolated:
global_options = list(global_options) + ["--no-user-cfg"]
temp_location = tempfile.mkdtemp('-record', 'pip-')
record_filename = os.path.join(temp_location, 'install-record.txt')
try:
install_args = [sys.executable, "-u"]
install_args.append('-c')
install_args.append(SETUPTOOLS_SHIM % self.setup_py)
install_args += list(global_options) + \
['install', '--record', record_filename]
if not self.as_egg:
install_args += ['--single-version-externally-managed']
if root is not None:
install_args += ['--root', root]
if prefix is not None:
install_args += ['--prefix', prefix]
if self.pycompile:
install_args += ["--compile"]
else:
install_args += ["--no-compile"]
if running_under_virtualenv():
py_ver_str = 'python' + sysconfig.get_python_version()
install_args += ['--install-headers',
os.path.join(sys.prefix, 'include', 'site',
py_ver_str, self.name)]
msg = 'Running setup.py install for %s' % (self.name,)
with open_spinner(msg) as spinner:
with indent_log():
call_subprocess(
install_args + install_options,
cwd=self.setup_py_dir,
show_stdout=False,
spinner=spinner,
)
if not os.path.exists(record_filename):
logger.debug('Record file %s not found', record_filename)
return
self.install_succeeded = True
if self.as_egg:
# there's no --always-unzip option we can pass to install
            # command, so we are unable to save the installed-files.txt
return
def prepend_root(path):
if root is None or not os.path.isabs(path):
return path
else:
return change_root(root, path)
with open(record_filename) as f:
for line in f:
directory = os.path.dirname(line)
if directory.endswith('.egg-info'):
egg_info_dir = prepend_root(directory)
break
else:
logger.warning(
'Could not find .egg-info directory in install record'
' for %s',
self,
)
# FIXME: put the record somewhere
# FIXME: should this be an error?
return
new_lines = []
with open(record_filename) as f:
for line in f:
filename = line.strip()
if os.path.isdir(filename):
filename += os.path.sep
new_lines.append(
os.path.relpath(
prepend_root(filename), egg_info_dir)
)
inst_files_path = os.path.join(egg_info_dir, 'installed-files.txt')
with open(inst_files_path, 'w') as f:
f.write('\n'.join(new_lines) + '\n')
finally:
if os.path.exists(record_filename):
os.remove(record_filename)
rmtree(temp_location)
def ensure_has_source_dir(self, parent_dir):
"""Ensure that a source_dir is set.
This will create a temporary build dir if the name of the requirement
isn't known yet.
:param parent_dir: The ideal pip parent_dir for the source_dir.
Generally src_dir for editables and build_dir for sdists.
:return: self.source_dir
"""
if self.source_dir is None:
self.source_dir = self.build_location(parent_dir)
return self.source_dir
def remove_temporary_source(self):
"""Remove the source files from this requirement, if they are marked
for deletion"""
if self.source_dir and os.path.exists(
os.path.join(self.source_dir, PIP_DELETE_MARKER_FILENAME)):
logger.debug('Removing source in %s', self.source_dir)
rmtree(self.source_dir)
self.source_dir = None
if self._temp_build_dir and os.path.exists(self._temp_build_dir):
rmtree(self._temp_build_dir)
self._temp_build_dir = None
def install_editable(self, install_options,
global_options=(), prefix=None):
logger.info('Running setup.py develop for %s', self.name)
if self.isolated:
global_options = list(global_options) + ["--no-user-cfg"]
if prefix:
prefix_param = ['--prefix={0}'.format(prefix)]
install_options = list(install_options) + prefix_param
with indent_log():
# FIXME: should we do --install-headers here too?
call_subprocess(
[
sys.executable,
'-c',
SETUPTOOLS_SHIM % self.setup_py
] +
list(global_options) +
['develop', '--no-deps'] +
list(install_options),
cwd=self.setup_py_dir,
show_stdout=False)
self.install_succeeded = True
def check_if_exists(self):
"""Find an installed distribution that satisfies or conflicts
with this requirement, and set self.satisfied_by or
self.conflicts_with appropriately.
"""
if self.req is None:
return False
try:
self.satisfied_by = pkg_resources.get_distribution(self.req)
except pkg_resources.DistributionNotFound:
return False
except pkg_resources.VersionConflict:
existing_dist = pkg_resources.get_distribution(
self.req.project_name
)
if self.use_user_site:
if dist_in_usersite(existing_dist):
self.conflicts_with = existing_dist
elif (running_under_virtualenv() and
dist_in_site_packages(existing_dist)):
raise InstallationError(
"Will not install to the user site because it will "
"lack sys.path precedence to %s in %s" %
(existing_dist.project_name, existing_dist.location)
)
else:
self.conflicts_with = existing_dist
return True
@property
def is_wheel(self):
return self.link and self.link.is_wheel
def move_wheel_files(self, wheeldir, root=None, prefix=None):
move_wheel_files(
self.name, self.req, wheeldir,
user=self.use_user_site,
home=self.target_dir,
root=root,
prefix=prefix,
pycompile=self.pycompile,
isolated=self.isolated,
)
def get_dist(self):
"""Return a pkg_resources.Distribution built from self.egg_info_path"""
egg_info = self.egg_info_path('').rstrip('/')
base_dir = os.path.dirname(egg_info)
metadata = pkg_resources.PathMetadata(base_dir, egg_info)
dist_name = os.path.splitext(os.path.basename(egg_info))[0]
return pkg_resources.Distribution(
os.path.dirname(egg_info),
project_name=dist_name,
metadata=metadata)
@property
def has_hash_options(self):
"""Return whether any known-good hashes are specified as options.
These activate --require-hashes mode; hashes specified as part of a
URL do not.
"""
return bool(self.options.get('hashes', {}))
def hashes(self, trust_internet=True):
"""Return a hash-comparer that considers my option- and URL-based
hashes to be known-good.
Hashes in URLs--ones embedded in the requirements file, not ones
downloaded from an index server--are almost peers with ones from
flags. They satisfy --require-hashes (whether it was implicitly or
explicitly activated) but do not activate it. md5 and sha224 are not
allowed in flags, which should nudge people toward good algos. We
always OR all hashes together, even ones from URLs.
:param trust_internet: Whether to trust URL-based (#md5=...) hashes
downloaded from the internet, as by populate_link()
"""
good_hashes = self.options.get('hashes', {}).copy()
link = self.link if trust_internet else self.original_link
if link and link.hash:
good_hashes.setdefault(link.hash_name, []).append(link.hash)
return Hashes(good_hashes)
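    # Illustrative sketch (requirement and digest hypothetical): for a line
    # such as "foo==1.0 --hash=sha256:abc123...", self.options['hashes'] holds
    # {'sha256': ['abc123...']}; hashes() then ORs in any "#sha256=..."
    # fragment carried by self.link before building the Hashes comparer.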
def _strip_postfix(req):
"""
    Strip req postfix (-dev, -0.2, etc.)
"""
# FIXME: use package_to_requirement?
match = re.search(r'^(.*?)(?:-dev|-\d.*)$', req)
if match:
# Strip off -dev, -0.2, etc.
req = match.group(1)
return req
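# Illustrative behavior (project names hypothetical):
#   _strip_postfix('django-dev')  -> 'django'
#   _strip_postfix('django-0.2')  -> 'django'
#   _strip_postfix('django')      -> 'django'  (no postfix, returned as-is)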
def _build_req_from_url(url):
parts = [p for p in url.split('#', 1)[0].split('/') if p]
req = None
if len(parts) > 2 and parts[-2] in ('tags', 'branches', 'tag', 'branch'):
req = parts[-3]
elif len(parts) > 1 and parts[-1] == 'trunk':
req = parts[-2]
if req:
warnings.warn(
'Sniffing the requirement name from the url is deprecated and '
'will be removed in the future. Please specify an #egg segment '
'instead.', RemovedInPip9Warning,
stacklevel=2)
return req
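# Illustrative behavior (URLs hypothetical); both cases emit the
# RemovedInPip9Warning above:
#   _build_req_from_url('http://host/svn/Foo/trunk')    -> 'Foo'
#   _build_req_from_url('http://host/svn/Foo/tags/1.0') -> 'Foo'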
def parse_editable(editable_req, default_vcs=None):
"""Parses an editable requirement into:
- a requirement name
    - a URL
- extras
- editable options
Accepted requirements:
svn+http://blahblah@rev#egg=Foobar[baz]&subdirectory=version_subdir
.[some_extra]
"""
from pip.index import Link
url = editable_req
extras = None
# If a file path is specified with extras, strip off the extras.
m = re.match(r'^(.+)(\[[^\]]+\])$', url)
if m:
url_no_extras = m.group(1)
extras = m.group(2)
else:
url_no_extras = url
if os.path.isdir(url_no_extras):
if not os.path.exists(os.path.join(url_no_extras, 'setup.py')):
raise InstallationError(
"Directory %r is not installable. File 'setup.py' not found." %
url_no_extras
)
# Treating it as code that has already been checked out
url_no_extras = path_to_url(url_no_extras)
if url_no_extras.lower().startswith('file:'):
package_name = Link(url_no_extras).egg_fragment
if extras:
return (
package_name,
url_no_extras,
pkg_resources.Requirement.parse(
'__placeholder__' + extras
).extras,
)
else:
return package_name, url_no_extras, None
for version_control in vcs:
if url.lower().startswith('%s:' % version_control):
url = '%s+%s' % (version_control, url)
break
if '+' not in url:
if default_vcs:
url = default_vcs + '+' + url
else:
raise InstallationError(
'%s should either be a path to a local project or a VCS url '
'beginning with svn+, git+, hg+, or bzr+' %
editable_req
)
vc_type = url.split('+', 1)[0].lower()
if not vcs.get_backend(vc_type):
error_message = 'For --editable=%s only ' % editable_req + \
', '.join([backend.name + '+URL' for backend in vcs.backends]) + \
' is currently supported'
raise InstallationError(error_message)
package_name = Link(url).egg_fragment
if not package_name:
package_name = _build_req_from_url(editable_req)
if not package_name:
raise InstallationError(
'--editable=%s is not the right format; it must have '
'#egg=Package' % editable_req
)
return _strip_postfix(package_name), url, None
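# Illustrative results (URLs hypothetical):
#   parse_editable('git+https://host/repo#egg=Foo')
#       -> ('Foo', 'git+https://host/repo#egg=Foo', None)
#   parse_editable('https://host/repo') with no VCS prefix and no default_vcs
#       raises InstallationError before the '#egg=Package' check is reached.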
| apache-2.0 | -5,672,599,904,934,145,000 | 37.536771 | 79 | 0.535239 | false |
talbrecht/pism_pik07 | site-packages/PISM/options.py | 2 | 8548 | # Copyright (C) 2011, 2014, 2015 David Maxwell
#
# This file is part of PISM.
#
# PISM is free software; you can redistribute it and/or modify it under the
# terms of the GNU General Public License as published by the Free Software
# Foundation; either version 3 of the License, or (at your option) any later
# version.
#
# PISM is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
# details.
#
# You should have received a copy of the GNU General Public License
# along with PISM; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
"""Helper functions to make working with the PISM/PETSc option system more pythonic."""
import PISM
def _to_tuple(option, use_default):
"""Convert a PISM Option object into a tuple of (value, flag). Return
(None, False) if use_default is False and the option was not set.
"""
if option.is_set() or use_default:
return (option.value(), option.is_set())
return (None, False)
def optionsIntWasSet(option, text, default=None):
"""Determines if an integer-valued command-line option was set.
:param option: Name of command-line option.
:param text: Description of option.
:param default: Default value if option was not set.
:returns: Tuple ``(value, wasSet)`` where ``value`` is the value that was set (or the ``default`` value if it was not)
and ``wasSet`` is a boolean that is ``True`` if the command line option was set explicitly.
"""
if default is None:
return _to_tuple(PISM.cpp.OptionInteger(option, text, 0), False)
else:
return _to_tuple(PISM.cpp.OptionInteger(option, text, default), True)
def optionsInt(*args, **kwargs):
"""Same as :func:`optionsIntWasSet` but only returns the integer value."""
return optionsIntWasSet(*args, **kwargs)[0]
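# Illustrative usage (option name hypothetical):
#   Mx, Mx_was_set = optionsIntWasSet("-Mx", "grid points in the x-direction")
#   Mx = optionsInt("-Mx", "grid points in the x-direction", default=61)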
def optionsRealWasSet(option, text, default=None):
"""Determines if a real-valued command line option was set.
:param option: Name of command line option.
:param text: Description of option.
:param default: Default value if option was not set.
:returns: Tuple ``(value, wasSet)`` where ``value`` is the value that was set (or the ``default`` value if it was not)
and ``wasSet`` is a boolean that is ``True`` if the command line option was set explicitly.
"""
if default is None:
return _to_tuple(PISM.cpp.OptionReal(option, text, 0.0), False)
else:
return _to_tuple(PISM.cpp.OptionReal(option, text, default), True)
def optionsReal(*args, **kwargs):
"""Same as :func:`optionsRealWasSet` but only returns the real value."""
return optionsRealWasSet(*args, **kwargs)[0]
def optionsStringWasSet(option, text, default=None):
"""Determines if a string-valued command line option was set.
:param option: Name of command line option.
:param text: Description of option.
:param default: Default value if option was not set.
:returns: Tuple ``(value, wasSet)`` where ``value`` is the value that was set (or the ``default`` value if it was not)
and ``wasSet`` is a boolean that is ``True`` if the command line option was set explicitly.
"""
if default is None:
return _to_tuple(PISM.cpp.OptionString(option, text, ""), False)
else:
return _to_tuple(PISM.cpp.OptionString(option, text, default), True)
def optionsString(*args, **kwargs):
"""Same as :func:`optionsStringWasSet` but only returns the string value."""
return optionsStringWasSet(*args, **kwargs)[0]
def optionsIntArrayWasSet(option, text, default=None):
"""Determines if an integer-array-valued command line option was set.
:param option: Name of command line option.
:param text: Description of option.
:param default: Default value if option was not set.
:returns: Tuple ``(value, wasSet)`` where ``value`` is the value that was set (or the ``default`` value if it was not)
and ``wasSet`` is a boolean that is ``True`` if the command line option was set explicitly.
"""
if default is None:
return _to_tuple(PISM.cpp.OptionIntegerList(option, text), False)
else:
option = PISM.cpp.OptionIntegerList(option, text)
if option.is_set():
return _to_tuple(option, True)
else:
return (default, False)
def optionsIntArray(*args, **kwargs):
"""Same as :func:`optionsIntArrayWasSet` but only returns the integer array."""
return optionsIntArrayWasSet(*args, **kwargs)[0]
def optionsRealArrayWasSet(option, text, default=None):
"""Determines if a real-array-valued command line option was set.
:param option: Name of command line option.
:param text: Description of option.
:param default: Default value if option was not set.
:returns: Tuple ``(value, wasSet)`` where ``value`` is the value that was set (or the ``default`` value if it was not)
and ``wasSet`` is a boolean that is ``True`` if the command line option was set explicitly.
"""
if default is None:
return _to_tuple(PISM.cpp.OptionRealList(option, text), False)
else:
option = PISM.cpp.OptionRealList(option, text)
if option.is_set():
return _to_tuple(option, True)
else:
return (default, False)
def optionsRealArray(*args, **kwargs):
"""Same as :func:`optionsRealArrayWasSet` but only returns the real array."""
return optionsRealArrayWasSet(*args, **kwargs)[0]
def optionsStringArrayWasSet(option, text, default=None):
"""Determines if a string-array-valued command line option was set.
:param option: Name of command line option.
:param text: Description of option.
:param default: Default value if option was not set.
:returns: Tuple ``(value, wasSet)`` where ``value`` is the value that was set (or the ``default`` value if it was not)
and ``wasSet`` is a boolean that is ``True`` if the command line option was set explicitly.
"""
if default is None:
return _to_tuple(PISM.cpp.OptionStringList(option, text, ""), False)
else:
option = PISM.cpp.OptionStringList(option, text, default)
if option.is_set():
return _to_tuple(option, True)
else:
return (default, False)
def optionsStringArray(*args, **kwargs):
"""Same as :func:`optionsStringArrayWasSet` but only returns the string array."""
return optionsStringArrayWasSet(*args, **kwargs)[0]
def optionsListWasSet(option, text, choices, default):
"""Determines if a string command line option was set, where the string can be one of a few legal options.
:param option: Name of command line option.
:param text: Description of option.
:param choices: Comma-separated list of legal values (a string).
:param default: Default value.
:returns: Tuple ``(value, wasSet)`` where ``value`` is the value that was set (or the ``default`` value if it was not)
and ``wasSet`` is a boolean that is ``True`` if the command line option was set explicitly.
"""
if default is None:
return _to_tuple(PISM.cpp.OptionKeyword(option, text, choices, ""), False)
else:
return _to_tuple(PISM.cpp.OptionKeyword(option, text, choices, default), True)
def optionsList(*args, **kwargs):
"""Same as :func:`optionsListWasSet` but only returns the option value."""
return optionsListWasSet(*args, **kwargs)[0]
def optionsFlag(option, text, default=False):
"""Determines if a flag command line option of the form ``-foo`` or ``-no_foo`` was set.
:param option: Name of command line option.
:param text: Description of option.
:param default: Default value.
:returns: ``True`` if ``-foo`` was set and ``False`` if ``-no_foo`` was set. If
neither is set, the `default` is used, and if both are set a :exc:`RuntimeError` is raised.
"""
if option[0] == '-':
option = option[1:]
true_set = PISM.OptionBool("-" + option, text)
false_set = PISM.OptionBool("-no_" + option, text)
if true_set and false_set:
raise RuntimeError("Command line options inconsistent: both -%s and -no_%s are set" % (option, option))
if true_set:
return True
if false_set:
return False
return default
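# Illustrative usage (flag name hypothetical):
#   if optionsFlag("-report", "write a report file"):
#       ...  # entered when -report is given; -no_report (or neither) -> False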
| gpl-3.0 | -8,233,226,754,999,077,000 | 39.131455 | 122 | 0.667174 | false |
armab/st2contrib | packs/opsgenie/actions/list_users.py | 4 | 1144 | # Licensed to the StackStorm, Inc ('StackStorm') under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
from lib.actions import OpsGenieBaseAction
class ListUsersAction(OpsGenieBaseAction):
def run(self):
"""
List users in OpsGenie.
Returns:
- dict: Data from OpsGenie.
"""
payload = {"apiKey": self.api_key}
data = self._req("GET",
"v1/json/user",
payload=payload)
return data
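    # Illustrative invocation from StackStorm (action name assumed to mirror
    # this file name):
    #   st2 run opsgenie.list_users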
| apache-2.0 | -1,229,180,218,176,857,600 | 34.75 | 74 | 0.681818 | false |
Linaro/squad | squad/core/migrations/0028_suite_and_test_name_length.py | 2 | 1629 | # -*- coding: utf-8 -*-
# Generated by Django 1.10.7 on 2017-06-06 15:14
from __future__ import unicode_literals
import django.core.validators
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('core', '0027_project_notification_strategy'),
]
operations = [
migrations.AlterField(
model_name='environment',
name='slug',
field=models.CharField(max_length=100, validators=[django.core.validators.RegexValidator(regex='^[a-zA-Z0-9][a-zA-Z0-9_.-]*')]),
),
migrations.AlterField(
model_name='group',
name='slug',
field=models.CharField(max_length=100, unique=True, validators=[django.core.validators.RegexValidator(regex='^[a-zA-Z0-9][a-zA-Z0-9_.-]*')]),
),
migrations.AlterField(
model_name='project',
name='slug',
field=models.CharField(max_length=100, validators=[django.core.validators.RegexValidator(regex='^[a-zA-Z0-9][a-zA-Z0-9_.-]*')]),
),
migrations.AlterField(
model_name='suite',
name='name',
field=models.CharField(max_length=256, null=True),
),
migrations.AlterField(
model_name='suite',
name='slug',
field=models.CharField(max_length=256, validators=[django.core.validators.RegexValidator(regex='^[a-zA-Z0-9][a-zA-Z0-9_.-]*')]),
),
migrations.AlterField(
model_name='test',
name='name',
field=models.CharField(max_length=256),
),
]
| agpl-3.0 | 8,913,729,166,419,853,000 | 34.413043 | 153 | 0.576427 | false |
zorroz/microblog | flask/lib/python2.7/site-packages/sqlalchemy/testing/fixtures.py | 32 | 10721 | # testing/fixtures.py
# Copyright (C) 2005-2017 the SQLAlchemy authors and contributors
# <see AUTHORS file>
#
# This module is part of SQLAlchemy and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
from . import config
from . import assertions, schema
from .util import adict
from .. import util
from .engines import drop_all_tables
from .entities import BasicEntity, ComparableEntity
import sys
import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base, DeclarativeMeta
# whether or not we use unittest changes things dramatically,
# as far as how py.test collection works.
class TestBase(object):
# A sequence of database names to always run, regardless of the
# constraints below.
__whitelist__ = ()
# A sequence of requirement names matching testing.requires decorators
__requires__ = ()
# A sequence of dialect names to exclude from the test class.
__unsupported_on__ = ()
# If present, test class is only runnable for the *single* specified
# dialect. If you need multiple, use __unsupported_on__ and invert.
__only_on__ = None
# A sequence of no-arg callables. If any are True, the entire testcase is
# skipped.
__skip_if__ = None
def assert_(self, val, msg=None):
assert val, msg
# apparently a handful of tests are doing this....OK
def setup(self):
if hasattr(self, "setUp"):
self.setUp()
def teardown(self):
if hasattr(self, "tearDown"):
self.tearDown()
class TablesTest(TestBase):
# 'once', None
run_setup_bind = 'once'
# 'once', 'each', None
run_define_tables = 'once'
# 'once', 'each', None
run_create_tables = 'once'
# 'once', 'each', None
run_inserts = 'each'
# 'each', None
run_deletes = 'each'
# 'once', None
run_dispose_bind = None
bind = None
metadata = None
tables = None
other = None
@classmethod
def setup_class(cls):
cls._init_class()
cls._setup_once_tables()
cls._setup_once_inserts()
@classmethod
def _init_class(cls):
if cls.run_define_tables == 'each':
if cls.run_create_tables == 'once':
cls.run_create_tables = 'each'
assert cls.run_inserts in ('each', None)
cls.other = adict()
cls.tables = adict()
cls.bind = cls.setup_bind()
cls.metadata = sa.MetaData()
cls.metadata.bind = cls.bind
@classmethod
def _setup_once_inserts(cls):
if cls.run_inserts == 'once':
cls._load_fixtures()
cls.insert_data()
@classmethod
def _setup_once_tables(cls):
if cls.run_define_tables == 'once':
cls.define_tables(cls.metadata)
if cls.run_create_tables == 'once':
cls.metadata.create_all(cls.bind)
cls.tables.update(cls.metadata.tables)
def _setup_each_tables(self):
if self.run_define_tables == 'each':
self.tables.clear()
if self.run_create_tables == 'each':
drop_all_tables(self.metadata, self.bind)
self.metadata.clear()
self.define_tables(self.metadata)
if self.run_create_tables == 'each':
self.metadata.create_all(self.bind)
self.tables.update(self.metadata.tables)
elif self.run_create_tables == 'each':
drop_all_tables(self.metadata, self.bind)
self.metadata.create_all(self.bind)
def _setup_each_inserts(self):
if self.run_inserts == 'each':
self._load_fixtures()
self.insert_data()
def _teardown_each_tables(self):
# no need to run deletes if tables are recreated on setup
if self.run_define_tables != 'each' and self.run_deletes == 'each':
with self.bind.connect() as conn:
for table in reversed(self.metadata.sorted_tables):
try:
conn.execute(table.delete())
except sa.exc.DBAPIError as ex:
util.print_(
("Error emptying table %s: %r" % (table, ex)),
file=sys.stderr)
def setup(self):
self._setup_each_tables()
self._setup_each_inserts()
def teardown(self):
self._teardown_each_tables()
@classmethod
def _teardown_once_metadata_bind(cls):
if cls.run_create_tables:
drop_all_tables(cls.metadata, cls.bind)
if cls.run_dispose_bind == 'once':
cls.dispose_bind(cls.bind)
cls.metadata.bind = None
if cls.run_setup_bind is not None:
cls.bind = None
@classmethod
def teardown_class(cls):
cls._teardown_once_metadata_bind()
@classmethod
def setup_bind(cls):
return config.db
@classmethod
def dispose_bind(cls, bind):
if hasattr(bind, 'dispose'):
bind.dispose()
elif hasattr(bind, 'close'):
bind.close()
@classmethod
def define_tables(cls, metadata):
pass
@classmethod
def fixtures(cls):
return {}
@classmethod
def insert_data(cls):
pass
def sql_count_(self, count, fn):
self.assert_sql_count(self.bind, fn, count)
def sql_eq_(self, callable_, statements):
self.assert_sql(self.bind, callable_, statements)
@classmethod
def _load_fixtures(cls):
"""Insert rows as represented by the fixtures() method."""
headers, rows = {}, {}
for table, data in cls.fixtures().items():
if len(data) < 2:
continue
if isinstance(table, util.string_types):
table = cls.tables[table]
headers[table] = data[0]
rows[table] = data[1:]
for table in cls.metadata.sorted_tables:
if table not in headers:
continue
cls.bind.execute(
table.insert(),
[dict(zip(headers[table], column_values))
for column_values in rows[table]])
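    # Illustrative fixtures() return value (table and column names
    # hypothetical); the first tuple is the header row, the remaining tuples
    # are the rows to insert:
    #   return {'users': (('id', 'name'),
    #                     (1, 'ed'),
    #                     (2, 'wendy'))}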
from sqlalchemy import event
class RemovesEvents(object):
@util.memoized_property
def _event_fns(self):
return set()
def event_listen(self, target, name, fn):
self._event_fns.add((target, name, fn))
event.listen(target, name, fn)
def teardown(self):
for key in self._event_fns:
event.remove(*key)
super_ = super(RemovesEvents, self)
if hasattr(super_, "teardown"):
super_.teardown()
class _ORMTest(object):
@classmethod
def teardown_class(cls):
sa.orm.session.Session.close_all()
sa.orm.clear_mappers()
class ORMTest(_ORMTest, TestBase):
pass
class MappedTest(_ORMTest, TablesTest, assertions.AssertsExecutionResults):
# 'once', 'each', None
run_setup_classes = 'once'
# 'once', 'each', None
run_setup_mappers = 'each'
classes = None
@classmethod
def setup_class(cls):
cls._init_class()
if cls.classes is None:
cls.classes = adict()
cls._setup_once_tables()
cls._setup_once_classes()
cls._setup_once_mappers()
cls._setup_once_inserts()
@classmethod
def teardown_class(cls):
cls._teardown_once_class()
cls._teardown_once_metadata_bind()
def setup(self):
self._setup_each_tables()
self._setup_each_classes()
self._setup_each_mappers()
self._setup_each_inserts()
def teardown(self):
sa.orm.session.Session.close_all()
self._teardown_each_mappers()
self._teardown_each_classes()
self._teardown_each_tables()
@classmethod
def _teardown_once_class(cls):
cls.classes.clear()
_ORMTest.teardown_class()
@classmethod
def _setup_once_classes(cls):
if cls.run_setup_classes == 'once':
cls._with_register_classes(cls.setup_classes)
@classmethod
def _setup_once_mappers(cls):
if cls.run_setup_mappers == 'once':
cls._with_register_classes(cls.setup_mappers)
def _setup_each_mappers(self):
if self.run_setup_mappers == 'each':
self._with_register_classes(self.setup_mappers)
def _setup_each_classes(self):
if self.run_setup_classes == 'each':
self._with_register_classes(self.setup_classes)
@classmethod
def _with_register_classes(cls, fn):
"""Run a setup method, framing the operation with a Base class
that will catch new subclasses to be established within
the "classes" registry.
"""
cls_registry = cls.classes
class FindFixture(type):
def __init__(cls, classname, bases, dict_):
cls_registry[classname] = cls
return type.__init__(cls, classname, bases, dict_)
class _Base(util.with_metaclass(FindFixture, object)):
pass
class Basic(BasicEntity, _Base):
pass
class Comparable(ComparableEntity, _Base):
pass
cls.Basic = Basic
cls.Comparable = Comparable
fn()
def _teardown_each_mappers(self):
# some tests create mappers in the test bodies
# and will define setup_mappers as None -
# clear mappers in any case
if self.run_setup_mappers != 'once':
sa.orm.clear_mappers()
def _teardown_each_classes(self):
if self.run_setup_classes != 'once':
self.classes.clear()
@classmethod
def setup_classes(cls):
pass
@classmethod
def setup_mappers(cls):
pass
class DeclarativeMappedTest(MappedTest):
run_setup_classes = 'once'
run_setup_mappers = 'once'
@classmethod
def _setup_once_tables(cls):
pass
@classmethod
def _with_register_classes(cls, fn):
cls_registry = cls.classes
class FindFixtureDeclarative(DeclarativeMeta):
def __init__(cls, classname, bases, dict_):
cls_registry[classname] = cls
return DeclarativeMeta.__init__(
cls, classname, bases, dict_)
class DeclarativeBasic(object):
__table_cls__ = schema.Table
_DeclBase = declarative_base(metadata=cls.metadata,
metaclass=FindFixtureDeclarative,
cls=DeclarativeBasic)
cls.DeclarativeBasic = _DeclBase
fn()
if cls.metadata.tables and cls.run_create_tables:
cls.metadata.create_all(config.db)
| bsd-3-clause | 6,513,110,466,575,245,000 | 26.774611 | 77 | 0.582595 | false |
cneumann/vrjuggler | modules/gadgeteer/tools/matrix_solver/matrix_solver.py | 7 | 3642 | """
Matrix Solver
Parses a calibration table and solves the equations for the alpha constants
used in Hardy's Multi-Quadric method of calibration.
"""
import os, sys, string
from math import sqrt
from xml.dom import *
from xml.dom.minidom import *
import Numeric, LinearAlgebra
# Define useful functions
def length(v):
"""
Determines the magnitude of a three dimensional vector, v.
"""
return sqrt( v[0] * v[0] + v[1] * v[1] + v[2] * v[2] )
def vec_subtract(a, b):
"""
Returns a tuple c, s.t. c = a - b
"""
return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def vec_multiply(a, b):
"""
Returns the scalar result of a dot b.
"""
return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
argc = len(sys.argv)
if argc < 2 or argc > 3:
print "Usage: matrix_solver.py input_file [output_file]"
sys.exit(1)
# XXX: Take out the debug file when ready.
dbg_file = file('debug_output.txt', 'w')
# Open the table file
in_file = file(sys.argv[1], 'r')
doc = parse(in_file)
root_element = doc.documentElement
# Get the offsets from the table
offset_elements = root_element.getElementsByTagName('Offset')
offset_table = {}
# This has to be done since keys and values in Python dictionaries are stored
# in arbitrary order.
keys_in_order = []
dbg_file.write('Parsed Offsets\n')
# Build an offset table.
for e in offset_elements:
curr_offset = string.split(e.firstChild.data)
qx = e.attributes['X'].nodeValue
qy = e.attributes['Y'].nodeValue
qz = e.attributes['Z'].nodeValue
q = ( float(qx), float(qy), float(qz) )
px = curr_offset[0]
py = curr_offset[1]
pz = curr_offset[2]
p = ( float(px), float(py), float(pz) )
dbg_file.write('(' + qx + ',' + qy + ',' + qz + '):(' + px + ',' + py + ',' + pz + ')\n')
dbg_file.write(str(q) + ' : ' + str(p) + '\n')
offset_table[q] = p
keys_in_order.append(q)
dbg_file.write('\nOffset Table\n')
dbg_file.write(str(offset_table))
# w[j](p) = sqrt( (p-p[j]) * (p-p[j]) + R^2 )
# s.t. 10 <= pow(R, 2) <= 1000
w_matrix_list = []
r_squared = 0.4
print 'Calculating W Matrix...'
for i in range(0, len(offset_table)):
w_matrix_row = []
p = offset_table[keys_in_order[i]]
for j in range(0, len(offset_table)):
pj = offset_table[keys_in_order[j]]
p_difference = vec_subtract(p, pj)
w = sqrt(vec_multiply(p_difference, p_difference) + r_squared)
w_matrix_row.append(w)
w_matrix_list.append(w_matrix_row)
dbg_file.write('\nW Matrix List\n')
dbg_file.write( str(w_matrix_list) )
w_matrix = Numeric.array(w_matrix_list)
dbg_file.write('\nW Matrix\n')
dbg_file.write( str(w_matrix) )
q_list = []
#for q in offset_table.values():
# q_list.append(list(q))
for k in keys_in_order:
q_list.append( list(k) )
dbg_file.write('\nQ List\n')
dbg_file.write( str(q_list) )
q_vector = Numeric.array(q_list)
print 'Solving for alpha vector...'
alpha_vector = LinearAlgebra.solve_linear_equations(w_matrix, q_vector)
dbg_file.write('\nAlpha Vector\n')
dbg_file.write( str(alpha_vector) )
print 'Alpha Vector found.'
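# Illustrative helper (not part of the original solver): with the alpha
# constants known, a raw tracker reading p is calibrated via Hardy's
# multi-quadric interpolation, q(p) = sum_j alpha[j] * w[j](p), reusing the
# offset_table, keys_in_order and r_squared built above.
def interpolate_position(p):
    q = Numeric.zeros(3, 'd')
    for j in range(0, len(keys_in_order)):
        pj = offset_table[keys_in_order[j]]
        d = vec_subtract(p, pj)
        q = q + sqrt(vec_multiply(d, d) + r_squared) * alpha_vector[j]
    return q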
out_file = ''
if argc == 2:
out_file = sys.argv[1]
else:
out_file = sys.argv[2]
in_file.close()
out_file = file(out_file, 'w')
alpha_vector_list = alpha_vector.tolist()
dbg_file.write('\nCheck Solution\n')
solution_check = Numeric.matrixmultiply(w_matrix, alpha_vector)
dbg_file.write( str(solution_check) )
# Add Alpha constants to XML Tree
for i in alpha_vector_list:
element = Element('Alpha')
element.setAttribute('X', str(i[0]))
element.setAttribute('Y', str(i[1]))
element.setAttribute('Z', str(i[2]))
root_element.appendChild(element)
out_file.write(doc.toprettyxml())
out_file.close()
| lgpl-2.1 | -4,380,167,994,130,430,000 | 29.864407 | 92 | 0.644975 | false |
pilou-/ansible | lib/ansible/modules/network/aci/aci_contract_subject.py | 22 | 10735 | #!/usr/bin/python
# -*- coding: utf-8 -*-
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'certified'}
DOCUMENTATION = r'''
---
module: aci_contract_subject
short_description: Manage initial Contract Subjects (vz:Subj)
description:
- Manage initial Contract Subjects on Cisco ACI fabrics.
version_added: '2.4'
options:
tenant:
description:
- The name of the tenant.
type: str
aliases: [ tenant_name ]
subject:
description:
- The contract subject name.
type: str
aliases: [ contract_subject, name, subject_name ]
contract:
description:
- The name of the Contract.
type: str
aliases: [ contract_name ]
reverse_filter:
description:
- Determines if the APIC should reverse the src and dst ports to allow the
return traffic back, since ACI is a stateless filter.
- The APIC defaults to C(yes) when unset during creation.
type: bool
priority:
description:
- The QoS class.
- The APIC defaults to C(unspecified) when unset during creation.
type: str
choices: [ level1, level2, level3, unspecified ]
dscp:
description:
- The target DSCP.
- The APIC defaults to C(unspecified) when unset during creation.
type: str
choices: [ AF11, AF12, AF13, AF21, AF22, AF23, AF31, AF32, AF33, AF41, AF42, AF43,
CS0, CS1, CS2, CS3, CS4, CS5, CS6, CS7, EF, VA, unspecified ]
aliases: [ target ]
description:
description:
- Description for the contract subject.
type: str
aliases: [ descr ]
consumer_match:
description:
- The match criteria across consumers.
- The APIC defaults to C(at_least_one) when unset during creation.
type: str
choices: [ all, at_least_one, at_most_one, none ]
provider_match:
description:
- The match criteria across providers.
- The APIC defaults to C(at_least_one) when unset during creation.
type: str
choices: [ all, at_least_one, at_most_one, none ]
state:
description:
- Use C(present) or C(absent) for adding or removing.
- Use C(query) for listing an object or multiple objects.
type: str
choices: [ absent, present, query ]
default: present
extends_documentation_fragment: aci
notes:
- The C(tenant) and C(contract) used must exist before using this module in your playbook.
The M(aci_tenant) and M(aci_contract) modules can be used for this.
seealso:
- module: aci_contract
- module: aci_tenant
- name: APIC Management Information Model reference
description: More information about the internal APIC class B(vz:Subj).
link: https://developer.cisco.com/docs/apic-mim-ref/
author:
- Swetha Chunduri (@schunduri)
'''
EXAMPLES = r'''
- name: Add a new contract subject
aci_contract_subject:
host: apic
username: admin
password: SomeSecretPassword
tenant: production
contract: web_to_db
subject: default
description: test
reverse_filter: yes
priority: level1
dscp: unspecified
state: present
delegate_to: localhost
- name: Remove a contract subject
aci_contract_subject:
host: apic
username: admin
password: SomeSecretPassword
tenant: production
contract: web_to_db
subject: default
state: absent
delegate_to: localhost
- name: Query a contract subject
aci_contract_subject:
host: apic
username: admin
password: SomeSecretPassword
tenant: production
contract: web_to_db
subject: default
state: query
delegate_to: localhost
register: query_result
- name: Query all contract subjects
aci_contract_subject:
host: apic
username: admin
password: SomeSecretPassword
state: query
delegate_to: localhost
register: query_result
'''
RETURN = r'''
current:
description: The existing configuration from the APIC after the module has finished
returned: success
type: list
sample:
[
{
"fvTenant": {
"attributes": {
"descr": "Production environment",
"dn": "uni/tn-production",
"name": "production",
"nameAlias": "",
"ownerKey": "",
"ownerTag": ""
}
}
}
]
error:
description: The error information as returned from the APIC
returned: failure
type: dict
sample:
{
"code": "122",
"text": "unknown managed object class foo"
}
raw:
description: The raw output returned by the APIC REST API (xml or json)
returned: parse error
type: str
sample: '<?xml version="1.0" encoding="UTF-8"?><imdata totalCount="1"><error code="122" text="unknown managed object class foo"/></imdata>'
sent:
description: The actual/minimal configuration pushed to the APIC
returned: info
type: list
sample:
{
"fvTenant": {
"attributes": {
"descr": "Production environment"
}
}
}
previous:
description: The original configuration from the APIC before the module has started
returned: info
type: list
sample:
[
{
"fvTenant": {
"attributes": {
"descr": "Production",
"dn": "uni/tn-production",
"name": "production",
"nameAlias": "",
"ownerKey": "",
"ownerTag": ""
}
}
}
]
proposed:
description: The assembled configuration from the user-provided parameters
returned: info
type: dict
sample:
{
"fvTenant": {
"attributes": {
"descr": "Production environment",
"name": "production"
}
}
}
filter_string:
description: The filter string used for the request
returned: failure or debug
type: str
sample: ?rsp-prop-include=config-only
method:
description: The HTTP method used for the request to the APIC
returned: failure or debug
type: str
sample: POST
response:
description: The HTTP response from the APIC
returned: failure or debug
type: str
sample: OK (30 bytes)
status:
description: The HTTP status from the APIC
returned: failure or debug
type: int
sample: 200
url:
description: The HTTP url used for the request to the APIC
returned: failure or debug
type: str
sample: https://10.11.12.13/api/mo/uni/tn-production.json
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.network.aci.aci import ACIModule, aci_argument_spec
MATCH_MAPPING = dict(
all='All',
at_least_one='AtleastOne',
at_most_one='AtmostOne',
none='None',
)
def main():
argument_spec = aci_argument_spec()
argument_spec.update(
contract=dict(type='str', aliases=['contract_name']), # Not required for querying all objects
subject=dict(type='str', aliases=['contract_subject', 'name', 'subject_name']), # Not required for querying all objects
tenant=dict(type='str', aliases=['tenant_name']), # Not required for querying all objects
priority=dict(type='str', choices=['unspecified', 'level1', 'level2', 'level3']),
reverse_filter=dict(type='bool'),
dscp=dict(type='str', aliases=['target'],
choices=['AF11', 'AF12', 'AF13', 'AF21', 'AF22', 'AF23', 'AF31', 'AF32', 'AF33', 'AF41', 'AF42', 'AF43',
'CS0', 'CS1', 'CS2', 'CS3', 'CS4', 'CS5', 'CS6', 'CS7', 'EF', 'VA', 'unspecified']),
description=dict(type='str', aliases=['descr']),
consumer_match=dict(type='str', choices=['all', 'at_least_one', 'at_most_one', 'none']),
provider_match=dict(type='str', choices=['all', 'at_least_one', 'at_most_one', 'none']),
state=dict(type='str', default='present', choices=['absent', 'present', 'query']),
directive=dict(type='str', removed_in_version='2.4'), # Deprecated starting from v2.4
filter=dict(type='str', aliases=['filter_name'], removed_in_version='2.4'), # Deprecated starting from v2.4
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
required_if=[
['state', 'absent', ['contract', 'subject', 'tenant']],
['state', 'present', ['contract', 'subject', 'tenant']],
],
)
aci = ACIModule(module)
subject = module.params['subject']
priority = module.params['priority']
reverse_filter = aci.boolean(module.params['reverse_filter'])
contract = module.params['contract']
dscp = module.params['dscp']
description = module.params['description']
filter_name = module.params['filter']
directive = module.params['directive']
consumer_match = module.params['consumer_match']
if consumer_match is not None:
consumer_match = MATCH_MAPPING[consumer_match]
provider_match = module.params['provider_match']
if provider_match is not None:
provider_match = MATCH_MAPPING[provider_match]
state = module.params['state']
tenant = module.params['tenant']
if directive is not None or filter_name is not None:
module.fail_json(msg="Managing Contract Subjects to Filter bindings has been moved to module 'aci_subject_bind_filter'")
aci.construct_url(
root_class=dict(
aci_class='fvTenant',
aci_rn='tn-{0}'.format(tenant),
module_object=tenant,
target_filter={'name': tenant},
),
subclass_1=dict(
aci_class='vzBrCP',
aci_rn='brc-{0}'.format(contract),
module_object=contract,
target_filter={'name': contract},
),
subclass_2=dict(
aci_class='vzSubj',
aci_rn='subj-{0}'.format(subject),
module_object=subject,
target_filter={'name': subject},
),
)
aci.get_existing()
if state == 'present':
aci.payload(
aci_class='vzSubj',
class_config=dict(
name=subject,
prio=priority,
revFltPorts=reverse_filter,
targetDscp=dscp,
consMatchT=consumer_match,
provMatchT=provider_match,
descr=description,
),
)
aci.get_diff(aci_class='vzSubj')
aci.post_config()
elif state == 'absent':
aci.delete_config()
aci.exit_json()
if __name__ == "__main__":
main()
| gpl-3.0 | -634,954,004,349,788,900 | 29.070028 | 141 | 0.609595 | false |
georgid/sms-tools | lectures/5-Sinusoidal-model/plots-code/sine-analysis-synthesis.py | 2 | 1538 | import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import hamming, triang, blackmanharris
import sys, os, functools, time
from scipy.fftpack import fft, ifft, fftshift
sys.path.append(os.path.join(os.path.dirname(os.path.realpath(__file__)), '../../../software/models/'))
import dftModel as DFT
import utilFunctions as UF
(fs, x) = UF.wavread('../../../sounds/oboe-A4.wav')
M = 601
w = np.blackman(M)
N = 1024
hN = N/2
Ns = 512
hNs = Ns/2
pin = 5000
t = -70
x1 = x[pin:pin+w.size]
mX, pX = DFT.dftAnal(x1, w, N)
ploc = UF.peakDetection(mX, hN, t)
iploc, ipmag, ipphase = UF.peakInterp(mX, pX, ploc)
freqs = iploc*fs/N
Y = UF.genSpecSines(freqs, ipmag, ipphase, Ns, fs)
mY = 20*np.log10(abs(Y[:hNs]))
pY = np.unwrap(np.angle(Y[:hNs]))
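# synthesis: invert the peak spectrum back to the time domain; fftshift
# centers the frame, and the sum(blackmanharris(Ns)) factor undoes the
# synthesis-window normalization assumed by genSpecSines (an assumption
# about the sms-tools convention).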
y = fftshift(ifft(Y))*sum(blackmanharris(Ns))
plt.figure(1, figsize=(9, 6))
plt.subplot(4,1,1)
plt.plot(np.arange(-M/2,M/2), x1, 'b', lw=1.5)
plt.axis([-M/2,M/2, min(x1), max(x1)])
plt.title("x (oboe-A4.wav), M = 601")
plt.subplot(4,1,2)
plt.plot(np.arange(hN), mX, 'r', lw=1.5)
plt.plot(iploc, ipmag, marker='x', color='b', linestyle='', markeredgewidth=1.5)
plt.axis([0, hN,-90,max(mX)+2])
plt.title("mX + spectral peaks; Blackman, N = 1024")
plt.subplot(4,1,3)
plt.plot(np.arange(hNs), mY, 'r', lw=1.5)
plt.axis([0, hNs,-90,max(mY)+2])
plt.title("mY; Blackman-Harris; Ns = 512")
plt.subplot(4,1,4)
plt.plot(np.arange(Ns), y, 'b', lw=1.5)
plt.axis([0, Ns,min(y),max(y)])
plt.title("y; Ns = 512")
plt.tight_layout()
plt.savefig('sine-analysis-synthesis.png')
plt.show()
| agpl-3.0 | 1,270,301,819,498,110,500 | 27.481481 | 103 | 0.653446 | false |
commtrack/temp-aquatest | apps/user_registration/forms.py | 1 | 4996 | """
Forms and validation code for user registration.
"""
from django.contrib.auth.models import User
from django import forms
from django.utils.translation import ugettext_lazy as _
# I put this on all required fields, because it's easier to pick up
# on them with CSS or JavaScript if they have a class of "required"
# in the HTML. Your mileage may vary. If/when Django ticket #3515
# lands in trunk, this will no longer be necessary.
attrs_dict = { 'class': 'required' }
class RegistrationForm(forms.Form):
"""
Form for registering a new user account.
Validates that the requested username is not already in use, and
requires the password to be entered twice to catch typos.
Subclasses should feel free to add any additional validation they
need, but should avoid defining a ``save()`` method -- the actual
saving of collected user data is delegated to the active
registration backend.
"""
username = forms.RegexField(regex=r'^\w+$',
max_length=30,
widget=forms.TextInput(attrs=attrs_dict),
label=_("Username"),
error_messages={ 'invalid': _("This value must contain only letters, numbers and underscores.") })
email = forms.EmailField(widget=forms.TextInput(attrs=dict(attrs_dict,
maxlength=75)),
label=_("Email address"))
password1 = forms.CharField(widget=forms.PasswordInput(attrs=attrs_dict, render_value=False),
label=_("Password"))
password2 = forms.CharField(widget=forms.PasswordInput(attrs=attrs_dict, render_value=False),
label=_("Password (again)"))
def clean_username(self):
"""
Validate that the username is alphanumeric and is not already
in use.
"""
try:
user = User.objects.get(username__iexact=self.cleaned_data['username'])
except User.DoesNotExist:
return self.cleaned_data['username']
raise forms.ValidationError(_("A user with that username already exists."))
def clean(self):
"""
        Verify that the values entered into the two password fields
match. Note that an error here will end up in
``non_field_errors()`` because it doesn't apply to a single
field.
"""
if 'password1' in self.cleaned_data and 'password2' in self.cleaned_data:
if self.cleaned_data['password1'] != self.cleaned_data['password2']:
raise forms.ValidationError(_("The two password fields didn't match."))
return self.cleaned_data
class RegistrationFormTermsOfService(RegistrationForm):
"""
Subclass of ``RegistrationForm`` which adds a required checkbox
for agreeing to a site's Terms of Service.
"""
tos = forms.BooleanField(widget=forms.CheckboxInput(attrs=attrs_dict),
label=_(u'I have read and agree to the Terms of Service'),
error_messages={ 'required': _("You must agree to the terms to register") })
class RegistrationFormUniqueEmail(RegistrationForm):
"""
Subclass of ``RegistrationForm`` which enforces uniqueness of
email addresses.
"""
def clean_email(self):
"""
Validate that the supplied email address is unique for the
site.
"""
if User.objects.filter(email__iexact=self.cleaned_data['email']):
raise forms.ValidationError(_("This email address is already in use. Please supply a different email address."))
return self.cleaned_data['email']
class RegistrationFormNoFreeEmail(RegistrationForm):
"""
Subclass of ``RegistrationForm`` which disallows registration with
email addresses from popular free webmail services; moderately
useful for preventing automated spam registrations.
To change the list of banned domains, subclass this form and
override the attribute ``bad_domains``.
"""
bad_domains = ['aim.com', 'aol.com', 'email.com', 'gmail.com',
'googlemail.com', 'hotmail.com', 'hushmail.com',
'msn.com', 'mail.ru', 'mailinator.com', 'live.com',
'yahoo.com']
def clean_email(self):
"""
Check the supplied email address against a list of known free
webmail domains.
"""
email_domain = self.cleaned_data['email'].split('@')[1]
if email_domain in self.bad_domains:
raise forms.ValidationError(_("Registration using free email addresses is prohibited. Please supply a different email address."))
return self.cleaned_data['email']
| bsd-3-clause | -5,780,155,725,748,824,000 | 38.617886 | 141 | 0.602082 | false |
sg00dwin/origin | vendor/github.com/google/certificate-transparency/python/ct/crypto/asn1/x509_extension.py | 34 | 7546 | """ASN.1 specification for X509 extensions."""
from ct.crypto.asn1 import named_value
from ct.crypto.asn1 import oid
from ct.crypto.asn1 import tag
from ct.crypto.asn1 import types
from ct.crypto.asn1 import x509_common
from ct.crypto.asn1 import x509_name
# Standard extensions from RFC 5280.
class BasicConstraints(types.Sequence):
print_delimiter = ", "
components = (
(types.Component("cA", types.Boolean, default=False)),
(types.Component("pathLenConstraint", types.Integer, optional=True))
)
class SubjectAlternativeNames(types.SequenceOf):
print_delimiter = ", "
component = x509_name.GeneralName
class KeyUsage(types.NamedBitList):
DIGITAL_SIGNATURE = named_value.NamedValue("digitalSignature", 0)
NON_REPUDIATION = named_value.NamedValue("nonRepudiation", 1)
KEY_ENCIPHERMENT = named_value.NamedValue("keyEncipherment", 2)
DATA_ENCIPHERMENT = named_value.NamedValue("dataEncipherment", 3)
KEY_AGREEMENT = named_value.NamedValue("keyAgreement", 4)
KEY_CERT_SIGN = named_value.NamedValue("keyCertSign", 5)
CRL_SIGN = named_value.NamedValue("cRLSign", 6)
ENCIPHER_ONLY = named_value.NamedValue("encipherOnly", 7)
DECIPHER_ONLY = named_value.NamedValue("decipherOnly", 8)
named_bit_list = (DIGITAL_SIGNATURE, NON_REPUDIATION, KEY_ENCIPHERMENT,
DATA_ENCIPHERMENT, KEY_AGREEMENT, KEY_CERT_SIGN,
CRL_SIGN, ENCIPHER_ONLY, DECIPHER_ONLY)
class KeyPurposeID(oid.ObjectIdentifier):
pass
class ExtendedKeyUsage(types.SequenceOf):
print_delimiter = ", "
print_labels = False
component = KeyPurposeID
class KeyIdentifier(types.OctetString):
pass
class SubjectKeyIdentifier(KeyIdentifier):
pass
KEY_IDENTIFIER = "keyIdentifier"
AUTHORITY_CERT_ISSUER = "authorityCertIssuer"
AUTHORITY_CERT_SERIAL_NUMBER = "authorityCertSerialNumber"
class AuthorityKeyIdentifier(types.Sequence):
components = (
types.Component(KEY_IDENTIFIER, KeyIdentifier.implicit(0), optional=True),
types.Component(AUTHORITY_CERT_ISSUER, x509_name.GeneralNames.implicit(1),
optional=True),
types.Component(AUTHORITY_CERT_SERIAL_NUMBER,
x509_common.CertificateSerialNumber.implicit(2),
optional=True)
)
class DisplayText(types.Choice):
components = {
"ia5String": types.IA5String,
"visibleString": types.VisibleString,
"bmpString": types.BMPString,
"utf8String": types.UTF8String
}
class NoticeNumbers(types.SequenceOf):
component = types.Integer
class NoticeReference(types.Sequence):
components = (
types.Component("organization", DisplayText),
types.Component("noticeNumbers", NoticeNumbers)
)
NOTICE_REF = "noticeRef"
EXPLICIT_TEXT = "explicitText"
class UserNotice(types.Sequence):
components = (
types.Component(NOTICE_REF, NoticeReference, optional=True),
types.Component(EXPLICIT_TEXT, DisplayText, optional=True)
)
class CPSuri(types.IA5String):
pass
_POLICY_QUALIFIER_DICT = {
oid.ID_QT_CPS: CPSuri,
oid.ID_QT_UNOTICE: UserNotice
}
POLICY_QUALIFIER_ID = "policyQualifierId"
QUALIFIER = "qualifier"
class PolicyQualifierInfo(types.Sequence):
print_labels = False
print_delimiter = ": "
components = (
types.Component(POLICY_QUALIFIER_ID, oid.ObjectIdentifier),
types.Component(QUALIFIER, types.Any, defined_by="policyQualifierId",
lookup=_POLICY_QUALIFIER_DICT)
)
class PolicyQualifiers(types.SequenceOf):
print_labels = False
component = PolicyQualifierInfo
POLICY_IDENTIFIER = "policyIdentifier"
POLICY_QUALIFIERS = "policyQualifiers"
class PolicyInformation(types.Sequence):
components = (
types.Component(POLICY_IDENTIFIER, oid.ObjectIdentifier),
types.Component(POLICY_QUALIFIERS, PolicyQualifiers, optional=True)
)
class CertificatePolicies(types.SequenceOf):
component = PolicyInformation
FULL_NAME = "fullName"
RELATIVE_NAME = "nameRelativetoCRLIssuer"
class DistributionPointName(types.Choice):
components = {
FULL_NAME: x509_name.GeneralNames.implicit(0),
RELATIVE_NAME: x509_name.RelativeDistinguishedName.implicit(1)
}
class ReasonFlags(types.NamedBitList):
UNUSED = named_value.NamedValue("unused", 0)
KEY_COMPROMISE = named_value.NamedValue("keyCompromise", 1)
    CA_COMPROMISE = named_value.NamedValue("cACompromise", 2)
AFFILIATION_CHANGED = named_value.NamedValue("affiliationChanged", 3)
SUPERSEDED = named_value.NamedValue("superseded", 4)
CESSATION_OF_OPERATION = named_value.NamedValue("cessationOfOperation", 5)
CERTIFICATE_HOLD = named_value.NamedValue("certificateHold", 6)
PRIVILEGE_WITHDRAWN = named_value.NamedValue("privilegeWithdrawn", 7)
AA_COMPROMISE = named_value.NamedValue("aACompromise", 8)
named_bit_list = (UNUSED, KEY_COMPROMISE, CA_COMPROMISE,
AFFILIATION_CHANGED, SUPERSEDED, CESSATION_OF_OPERATION,
CERTIFICATE_HOLD, PRIVILEGE_WITHDRAWN, AA_COMPROMISE)
DISTRIBUTION_POINT = "distributionPoint"
REASONS = "reasons"
CRL_ISSUER = "cRLIssuer"
class DistributionPoint(types.Sequence):
components = (
types.Component(DISTRIBUTION_POINT, DistributionPointName.explicit(0),
optional=True),
types.Component(REASONS, ReasonFlags.implicit(1), optional=True),
types.Component(CRL_ISSUER, x509_name.GeneralNames.implicit(2),
optional=True)
)
class CRLDistributionPoints(types.SequenceOf):
component = DistributionPoint
ACCESS_METHOD = "accessMethod"
ACCESS_LOCATION = "accessLocation"
class AccessDescription(types.Sequence):
print_labels = False
print_delimiter = ": "
components = (
types.Component(ACCESS_METHOD, oid.ObjectIdentifier),
types.Component(ACCESS_LOCATION, x509_name.GeneralName)
)
# Called AuthorityInfoAccessSyntax in RFC 5280.
class AuthorityInfoAccess(types.SequenceOf):
component = AccessDescription
class SignedCertificateTimestampList(types.OctetString):
pass
# Hack! This is not a valid ASN.1 definition but it works: an extension value
# value is defined as a DER-encoded value wrapped in an OctetString.
# This is functionally equivalent to an Any type that is tagged with the
# OctetString tag.
@types.Universal(4, tag.PRIMITIVE)
class ExtensionValue(types.Any):
pass
_EXTENSION_DICT = {
oid.ID_CE_BASIC_CONSTRAINTS: BasicConstraints,
oid.ID_CE_SUBJECT_ALT_NAME: SubjectAlternativeNames,
oid.ID_CE_KEY_USAGE: KeyUsage,
oid.ID_CE_EXT_KEY_USAGE: ExtendedKeyUsage,
oid.ID_CE_SUBJECT_KEY_IDENTIFIER: SubjectKeyIdentifier,
oid.ID_CE_AUTHORITY_KEY_IDENTIFIER: AuthorityKeyIdentifier,
oid.ID_CE_CERTIFICATE_POLICIES: CertificatePolicies,
oid.ID_CE_CRL_DISTRIBUTION_POINTS: CRLDistributionPoints,
oid.ID_PE_AUTHORITY_INFO_ACCESS: AuthorityInfoAccess,
oid.CT_POISON: types.Null,
oid.CT_EMBEDDED_SCT_LIST: SignedCertificateTimestampList
}
class Extension(types.Sequence):
print_delimiter = ", "
components = (
types.Component("extnID", oid.ObjectIdentifier),
types.Component("critical", types.Boolean, default=False),
types.Component("extnValue", ExtensionValue, defined_by="extnID",
lookup=_EXTENSION_DICT)
)
class Extensions(types.SequenceOf):
component = Extension
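# Illustrative decoding behavior: "extnValue" is resolved through "extnID"
# via _EXTENSION_DICT, so an Extension whose extnID is
# oid.ID_CE_BASIC_CONSTRAINTS carries a BasicConstraints value, one whose
# extnID is oid.CT_EMBEDDED_SCT_LIST carries a SignedCertificateTimestampList,
# and OIDs absent from the lookup keep the opaque ExtensionValue wrapper.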
| apache-2.0 | 2,483,610,300,752,415,000 | 29.305221 | 80 | 0.708852 | false |
dkroy/luigi | luigi/six.py | 65 | 29796 | """Utilities for writing code that runs on Python 2 and 3
In luigi, we hard-copy this file into the project itself, to ensure that all
luigi users use the same version of six.
"""
# Copyright (c) 2010-2015 Benjamin Peterson
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
from __future__ import absolute_import
import functools
import itertools
import operator
import sys
import types
__author__ = "Benjamin Peterson <[email protected]>"
__version__ = "1.9.0"
# Useful for very coarse version differentiation.
PY2 = sys.version_info[0] == 2
PY3 = sys.version_info[0] == 3
if PY3:
string_types = str,
integer_types = int,
class_types = type,
text_type = str
binary_type = bytes
MAXSIZE = sys.maxsize
else:
string_types = basestring,
integer_types = (int, long)
class_types = (type, types.ClassType)
text_type = unicode
binary_type = str
if sys.platform.startswith("java"):
# Jython always uses 32 bits.
MAXSIZE = int((1 << 31) - 1)
else:
# It's possible to have sizeof(long) != sizeof(Py_ssize_t).
class X(object):
def __len__(self):
return 1 << 31
try:
len(X())
except OverflowError:
# 32-bit
MAXSIZE = int((1 << 31) - 1)
else:
# 64-bit
MAXSIZE = int((1 << 63) - 1)
del X
def _add_doc(func, doc):
"""Add documentation to a function."""
func.__doc__ = doc
def _import_module(name):
"""Import module, returning the module after the last dot."""
__import__(name)
return sys.modules[name]
class _LazyDescr(object):
def __init__(self, name):
self.name = name
def __get__(self, obj, tp):
result = self._resolve()
setattr(obj, self.name, result) # Invokes __set__.
try:
# This is a bit ugly, but it avoids running this again by
# removing this descriptor.
delattr(obj.__class__, self.name)
except AttributeError:
pass
return result
class MovedModule(_LazyDescr):
def __init__(self, name, old, new=None):
super(MovedModule, self).__init__(name)
if PY3:
if new is None:
new = name
self.mod = new
else:
self.mod = old
def _resolve(self):
return _import_module(self.mod)
def __getattr__(self, attr):
_module = self._resolve()
value = getattr(_module, attr)
setattr(self, attr, value)
return value
class _LazyModule(types.ModuleType):
def __init__(self, name):
super(_LazyModule, self).__init__(name)
self.__doc__ = self.__class__.__doc__
def __dir__(self):
attrs = ["__doc__", "__name__"]
attrs += [attr.name for attr in self._moved_attributes]
return attrs
# Subclasses should override this
_moved_attributes = []
class MovedAttribute(_LazyDescr):
def __init__(self, name, old_mod, new_mod, old_attr=None, new_attr=None):
super(MovedAttribute, self).__init__(name)
if PY3:
if new_mod is None:
new_mod = name
self.mod = new_mod
if new_attr is None:
if old_attr is None:
new_attr = name
else:
new_attr = old_attr
self.attr = new_attr
else:
self.mod = old_mod
if old_attr is None:
old_attr = name
self.attr = old_attr
def _resolve(self):
module = _import_module(self.mod)
return getattr(module, self.attr)
class _SixMetaPathImporter(object):
"""
A meta path importer to from luigi import six.moves and its submodules.
This class implements a PEP302 finder and loader. It should be compatible
with Python 2.5 and all existing versions of Python3
"""
def __init__(self, six_module_name):
self.name = six_module_name
self.known_modules = {}
def _add_module(self, mod, *fullnames):
for fullname in fullnames:
self.known_modules[self.name + "." + fullname] = mod
def _get_module(self, fullname):
return self.known_modules[self.name + "." + fullname]
def find_module(self, fullname, path=None):
if fullname in self.known_modules:
return self
return None
def __get_module(self, fullname):
try:
return self.known_modules[fullname]
except KeyError:
raise ImportError("This loader does not know module " + fullname)
def load_module(self, fullname):
try:
# in case of a reload
return sys.modules[fullname]
except KeyError:
pass
mod = self.__get_module(fullname)
if isinstance(mod, MovedModule):
mod = mod._resolve()
else:
mod.__loader__ = self
sys.modules[fullname] = mod
return mod
def is_package(self, fullname):
"""
Return true, if the named module is a package.
We need this method to get correct spec objects with
Python 3.4 (see PEP451)
"""
return hasattr(self.__get_module(fullname), "__path__")
def get_code(self, fullname):
"""Return None
Required, if is_package is implemented"""
self.__get_module(fullname) # eventually raises ImportError
return None
get_source = get_code # same as get_code
_importer = _SixMetaPathImporter(__name__)
class _MovedItems(_LazyModule):
"""Lazy loading of moved objects"""
__path__ = [] # mark as package
_moved_attributes = [
MovedAttribute("cStringIO", "cStringIO", "io", "StringIO"),
MovedAttribute("filter", "itertools", "builtins", "ifilter", "filter"),
MovedAttribute("filterfalse", "itertools", "itertools", "ifilterfalse", "filterfalse"),
MovedAttribute("input", "__builtin__", "builtins", "raw_input", "input"),
MovedAttribute("intern", "__builtin__", "sys"),
MovedAttribute("map", "itertools", "builtins", "imap", "map"),
MovedAttribute("range", "__builtin__", "builtins", "xrange", "range"),
MovedAttribute("reload_module", "__builtin__", "imp", "reload"),
MovedAttribute("reduce", "__builtin__", "functools"),
MovedAttribute("shlex_quote", "pipes", "shlex", "quote"),
MovedAttribute("StringIO", "StringIO", "io"),
MovedAttribute("UserDict", "UserDict", "collections"),
MovedAttribute("UserList", "UserList", "collections"),
MovedAttribute("UserString", "UserString", "collections"),
MovedAttribute("xrange", "__builtin__", "builtins", "xrange", "range"),
MovedAttribute("zip", "itertools", "builtins", "izip", "zip"),
MovedAttribute("zip_longest", "itertools", "itertools", "izip_longest", "zip_longest"),
MovedModule("builtins", "__builtin__"),
MovedModule("configparser", "ConfigParser"),
MovedModule("copyreg", "copy_reg"),
MovedModule("dbm_gnu", "gdbm", "dbm.gnu"),
MovedModule("_dummy_thread", "dummy_thread", "_dummy_thread"),
MovedModule("http_cookiejar", "cookielib", "http.cookiejar"),
MovedModule("http_cookies", "Cookie", "http.cookies"),
MovedModule("html_entities", "htmlentitydefs", "html.entities"),
MovedModule("html_parser", "HTMLParser", "html.parser"),
MovedModule("http_client", "httplib", "http.client"),
MovedModule("email_mime_multipart", "email.MIMEMultipart", "email.mime.multipart"),
MovedModule("email_mime_nonmultipart", "email.MIMENonMultipart", "email.mime.nonmultipart"),
MovedModule("email_mime_text", "email.MIMEText", "email.mime.text"),
MovedModule("email_mime_base", "email.MIMEBase", "email.mime.base"),
MovedModule("BaseHTTPServer", "BaseHTTPServer", "http.server"),
MovedModule("CGIHTTPServer", "CGIHTTPServer", "http.server"),
MovedModule("SimpleHTTPServer", "SimpleHTTPServer", "http.server"),
MovedModule("cPickle", "cPickle", "pickle"),
MovedModule("queue", "Queue"),
MovedModule("reprlib", "repr"),
MovedModule("socketserver", "SocketServer"),
MovedModule("_thread", "thread", "_thread"),
MovedModule("tkinter", "Tkinter"),
MovedModule("tkinter_dialog", "Dialog", "tkinter.dialog"),
MovedModule("tkinter_filedialog", "FileDialog", "tkinter.filedialog"),
MovedModule("tkinter_scrolledtext", "ScrolledText", "tkinter.scrolledtext"),
MovedModule("tkinter_simpledialog", "SimpleDialog", "tkinter.simpledialog"),
MovedModule("tkinter_tix", "Tix", "tkinter.tix"),
MovedModule("tkinter_ttk", "ttk", "tkinter.ttk"),
MovedModule("tkinter_constants", "Tkconstants", "tkinter.constants"),
MovedModule("tkinter_dnd", "Tkdnd", "tkinter.dnd"),
MovedModule("tkinter_colorchooser", "tkColorChooser",
"tkinter.colorchooser"),
MovedModule("tkinter_commondialog", "tkCommonDialog",
"tkinter.commondialog"),
MovedModule("tkinter_tkfiledialog", "tkFileDialog", "tkinter.filedialog"),
MovedModule("tkinter_font", "tkFont", "tkinter.font"),
MovedModule("tkinter_messagebox", "tkMessageBox", "tkinter.messagebox"),
MovedModule("tkinter_tksimpledialog", "tkSimpleDialog",
"tkinter.simpledialog"),
MovedModule("urllib_parse", __name__ + ".moves.urllib_parse", "urllib.parse"),
MovedModule("urllib_error", __name__ + ".moves.urllib_error", "urllib.error"),
MovedModule("urllib", __name__ + ".moves.urllib", __name__ + ".moves.urllib"),
MovedModule("urllib_robotparser", "robotparser", "urllib.robotparser"),
MovedModule("xmlrpc_client", "xmlrpclib", "xmlrpc.client"),
MovedModule("xmlrpc_server", "SimpleXMLRPCServer", "xmlrpc.server"),
MovedModule("winreg", "_winreg"),
]
for attr in _moved_attributes:
setattr(_MovedItems, attr.name, attr)
if isinstance(attr, MovedModule):
_importer._add_module(attr, "moves." + attr.name)
del attr
_MovedItems._moved_attributes = _moved_attributes
moves = _MovedItems(__name__ + ".moves")
_importer._add_module(moves, "moves")
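# Illustrative usage sketch (not part of six itself): once this module is
# importable as ``six``, the registrations above let Py2/Py3 code share a
# single spelling for renamed builtins and stdlib modules, e.g.
#
#     from six.moves import range, urllib
#     list(range(3))                       # xrange on Py2, range on Py3
#     urllib.parse.urlparse("http://x/")   # urlparse on Py2, urllib.parse on Py3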
class Module_six_moves_urllib_parse(_LazyModule):
"""Lazy loading of moved objects in six.moves.urllib_parse"""
_urllib_parse_moved_attributes = [
MovedAttribute("ParseResult", "urlparse", "urllib.parse"),
MovedAttribute("SplitResult", "urlparse", "urllib.parse"),
MovedAttribute("parse_qs", "urlparse", "urllib.parse"),
MovedAttribute("parse_qsl", "urlparse", "urllib.parse"),
MovedAttribute("urldefrag", "urlparse", "urllib.parse"),
MovedAttribute("urljoin", "urlparse", "urllib.parse"),
MovedAttribute("urlparse", "urlparse", "urllib.parse"),
MovedAttribute("urlsplit", "urlparse", "urllib.parse"),
MovedAttribute("urlunparse", "urlparse", "urllib.parse"),
MovedAttribute("urlunsplit", "urlparse", "urllib.parse"),
MovedAttribute("quote", "urllib", "urllib.parse"),
MovedAttribute("quote_plus", "urllib", "urllib.parse"),
MovedAttribute("unquote", "urllib", "urllib.parse"),
MovedAttribute("unquote_plus", "urllib", "urllib.parse"),
MovedAttribute("urlencode", "urllib", "urllib.parse"),
MovedAttribute("splitquery", "urllib", "urllib.parse"),
MovedAttribute("splittag", "urllib", "urllib.parse"),
MovedAttribute("splituser", "urllib", "urllib.parse"),
MovedAttribute("uses_fragment", "urlparse", "urllib.parse"),
MovedAttribute("uses_netloc", "urlparse", "urllib.parse"),
MovedAttribute("uses_params", "urlparse", "urllib.parse"),
MovedAttribute("uses_query", "urlparse", "urllib.parse"),
MovedAttribute("uses_relative", "urlparse", "urllib.parse"),
]
for attr in _urllib_parse_moved_attributes:
setattr(Module_six_moves_urllib_parse, attr.name, attr)
del attr
Module_six_moves_urllib_parse._moved_attributes = _urllib_parse_moved_attributes
_importer._add_module(Module_six_moves_urllib_parse(__name__ + ".moves.urllib_parse"),
"moves.urllib_parse", "moves.urllib.parse")
class Module_six_moves_urllib_error(_LazyModule):
"""Lazy loading of moved objects in six.moves.urllib_error"""
_urllib_error_moved_attributes = [
MovedAttribute("URLError", "urllib2", "urllib.error"),
MovedAttribute("HTTPError", "urllib2", "urllib.error"),
MovedAttribute("ContentTooShortError", "urllib", "urllib.error"),
]
for attr in _urllib_error_moved_attributes:
setattr(Module_six_moves_urllib_error, attr.name, attr)
del attr
Module_six_moves_urllib_error._moved_attributes = _urllib_error_moved_attributes
_importer._add_module(Module_six_moves_urllib_error(__name__ + ".moves.urllib.error"),
"moves.urllib_error", "moves.urllib.error")
class Module_six_moves_urllib_request(_LazyModule):
"""Lazy loading of moved objects in six.moves.urllib_request"""
_urllib_request_moved_attributes = [
MovedAttribute("urlopen", "urllib2", "urllib.request"),
MovedAttribute("install_opener", "urllib2", "urllib.request"),
MovedAttribute("build_opener", "urllib2", "urllib.request"),
MovedAttribute("pathname2url", "urllib", "urllib.request"),
MovedAttribute("url2pathname", "urllib", "urllib.request"),
MovedAttribute("getproxies", "urllib", "urllib.request"),
MovedAttribute("Request", "urllib2", "urllib.request"),
MovedAttribute("OpenerDirector", "urllib2", "urllib.request"),
MovedAttribute("HTTPDefaultErrorHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPRedirectHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPCookieProcessor", "urllib2", "urllib.request"),
MovedAttribute("ProxyHandler", "urllib2", "urllib.request"),
MovedAttribute("BaseHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPPasswordMgr", "urllib2", "urllib.request"),
MovedAttribute("HTTPPasswordMgrWithDefaultRealm", "urllib2", "urllib.request"),
MovedAttribute("AbstractBasicAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPBasicAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("ProxyBasicAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("AbstractDigestAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPDigestAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("ProxyDigestAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPSHandler", "urllib2", "urllib.request"),
MovedAttribute("FileHandler", "urllib2", "urllib.request"),
MovedAttribute("FTPHandler", "urllib2", "urllib.request"),
MovedAttribute("CacheFTPHandler", "urllib2", "urllib.request"),
MovedAttribute("UnknownHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPErrorProcessor", "urllib2", "urllib.request"),
MovedAttribute("urlretrieve", "urllib", "urllib.request"),
MovedAttribute("urlcleanup", "urllib", "urllib.request"),
MovedAttribute("URLopener", "urllib", "urllib.request"),
MovedAttribute("FancyURLopener", "urllib", "urllib.request"),
MovedAttribute("proxy_bypass", "urllib", "urllib.request"),
]
for attr in _urllib_request_moved_attributes:
setattr(Module_six_moves_urllib_request, attr.name, attr)
del attr
Module_six_moves_urllib_request._moved_attributes = _urllib_request_moved_attributes
_importer._add_module(Module_six_moves_urllib_request(__name__ + ".moves.urllib.request"),
"moves.urllib_request", "moves.urllib.request")
class Module_six_moves_urllib_response(_LazyModule):
"""Lazy loading of moved objects in six.moves.urllib_response"""
_urllib_response_moved_attributes = [
MovedAttribute("addbase", "urllib", "urllib.response"),
MovedAttribute("addclosehook", "urllib", "urllib.response"),
MovedAttribute("addinfo", "urllib", "urllib.response"),
MovedAttribute("addinfourl", "urllib", "urllib.response"),
]
for attr in _urllib_response_moved_attributes:
setattr(Module_six_moves_urllib_response, attr.name, attr)
del attr
Module_six_moves_urllib_response._moved_attributes = _urllib_response_moved_attributes
_importer._add_module(Module_six_moves_urllib_response(__name__ + ".moves.urllib.response"),
"moves.urllib_response", "moves.urllib.response")
class Module_six_moves_urllib_robotparser(_LazyModule):
"""Lazy loading of moved objects in six.moves.urllib_robotparser"""
_urllib_robotparser_moved_attributes = [
MovedAttribute("RobotFileParser", "robotparser", "urllib.robotparser"),
]
for attr in _urllib_robotparser_moved_attributes:
setattr(Module_six_moves_urllib_robotparser, attr.name, attr)
del attr
Module_six_moves_urllib_robotparser._moved_attributes = _urllib_robotparser_moved_attributes
_importer._add_module(Module_six_moves_urllib_robotparser(__name__ + ".moves.urllib.robotparser"),
"moves.urllib_robotparser", "moves.urllib.robotparser")
class Module_six_moves_urllib(types.ModuleType):
"""Create a six.moves.urllib namespace that resembles the Python 3 namespace"""
__path__ = [] # mark as package
parse = _importer._get_module("moves.urllib_parse")
error = _importer._get_module("moves.urllib_error")
request = _importer._get_module("moves.urllib_request")
response = _importer._get_module("moves.urllib_response")
robotparser = _importer._get_module("moves.urllib_robotparser")
def __dir__(self):
return ['parse', 'error', 'request', 'response', 'robotparser']
_importer._add_module(Module_six_moves_urllib(__name__ + ".moves.urllib"),
"moves.urllib")
def add_move(move):
"""Add an item to six.moves."""
setattr(_MovedItems, move.name, move)
def remove_move(name):
"""Remove item from six.moves."""
try:
delattr(_MovedItems, name)
except AttributeError:
try:
del moves.__dict__[name]
except KeyError:
raise AttributeError("no such move, %r" % (name,))
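# Illustrative sketch (hypothetical move): registering and removing a custom
# alias at runtime. ``winsound`` keeps the same name on both majors, so the
# second module argument can be omitted.
#
#     add_move(MovedModule("winsound", "winsound"))
#     from six.moves import winsound
#     remove_move("winsound")    # raises AttributeError if it was never added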
if PY3:
_meth_func = "__func__"
_meth_self = "__self__"
_func_closure = "__closure__"
_func_code = "__code__"
_func_defaults = "__defaults__"
_func_globals = "__globals__"
else:
_meth_func = "im_func"
_meth_self = "im_self"
_func_closure = "func_closure"
_func_code = "func_code"
_func_defaults = "func_defaults"
_func_globals = "func_globals"
try:
advance_iterator = next
except NameError:
def advance_iterator(it):
return it.next()
next = advance_iterator
try:
callable = callable
except NameError:
def callable(obj):
return any("__call__" in klass.__dict__ for klass in type(obj).__mro__)
if PY3:
def get_unbound_function(unbound):
return unbound
create_bound_method = types.MethodType
Iterator = object
else:
def get_unbound_function(unbound):
return unbound.im_func
def create_bound_method(func, obj):
return types.MethodType(func, obj, obj.__class__)
class Iterator(object):
def next(self):
return type(self).__next__(self)
callable = callable
_add_doc(get_unbound_function,
"""Get the function out of a possibly unbound function""")
get_method_function = operator.attrgetter(_meth_func)
get_method_self = operator.attrgetter(_meth_self)
get_function_closure = operator.attrgetter(_func_closure)
get_function_code = operator.attrgetter(_func_code)
get_function_defaults = operator.attrgetter(_func_defaults)
get_function_globals = operator.attrgetter(_func_globals)
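# Illustrative sketch: these attrgetters paper over the Py2/Py3 attribute
# renames selected above.
#
#     def f(x, y=1):
#         return x + y
#
#     get_function_code(f)       # f.__code__ on Py3, f.func_code on Py2
#     get_function_defaults(f)   # (1,) on either version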
if PY3:
def iterkeys(d, **kw):
return iter(d.keys(**kw))
def itervalues(d, **kw):
return iter(d.values(**kw))
def iteritems(d, **kw):
return iter(d.items(**kw))
def iterlists(d, **kw):
return iter(d.lists(**kw))
viewkeys = operator.methodcaller("keys")
viewvalues = operator.methodcaller("values")
viewitems = operator.methodcaller("items")
else:
def iterkeys(d, **kw):
return iter(d.iterkeys(**kw))
def itervalues(d, **kw):
return iter(d.itervalues(**kw))
def iteritems(d, **kw):
return iter(d.iteritems(**kw))
def iterlists(d, **kw):
return iter(d.iterlists(**kw))
viewkeys = operator.methodcaller("viewkeys")
viewvalues = operator.methodcaller("viewvalues")
viewitems = operator.methodcaller("viewitems")
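# Illustrative sketch: iterating a dict without materialising a list on Py2.
#
#     d = {"a": 1, "b": 2}
#     for key, value in iteritems(d):
#         ...   # dispatches to d.iteritems() on Py2, iter(d.items()) on Py3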
_add_doc(iterkeys, "Return an iterator over the keys of a dictionary.")
_add_doc(itervalues, "Return an iterator over the values of a dictionary.")
_add_doc(iteritems,
"Return an iterator over the (key, value) pairs of a dictionary.")
_add_doc(iterlists,
"Return an iterator over the (key, [values]) pairs of a dictionary.")
if PY3:
def b(s):
return s.encode("latin-1")
def u(s):
return s
unichr = chr
if sys.version_info[1] <= 1:
def int2byte(i):
return bytes((i,))
else:
# This is about 2x faster than the implementation above on 3.2+
int2byte = operator.methodcaller("to_bytes", 1, "big")
byte2int = operator.itemgetter(0)
indexbytes = operator.getitem
iterbytes = iter
import io
StringIO = io.StringIO
BytesIO = io.BytesIO
_assertCountEqual = "assertCountEqual"
_assertRaisesRegex = "assertRaisesRegex"
_assertRegex = "assertRegex"
else:
def b(s):
return s
# Workaround for standalone backslash
def u(s):
return unicode(s.replace(r'\\', r'\\\\'), "unicode_escape")
unichr = unichr
int2byte = chr
def byte2int(bs):
return ord(bs[0])
def indexbytes(buf, i):
return ord(buf[i])
iterbytes = functools.partial(itertools.imap, ord)
import StringIO
StringIO = BytesIO = StringIO.StringIO
_assertCountEqual = "assertItemsEqual"
_assertRaisesRegex = "assertRaisesRegexp"
_assertRegex = "assertRegexpMatches"
_add_doc(b, """Byte literal""")
_add_doc(u, """Text literal""")
def assertCountEqual(self, *args, **kwargs):
return getattr(self, _assertCountEqual)(*args, **kwargs)
def assertRaisesRegex(self, *args, **kwargs):
return getattr(self, _assertRaisesRegex)(*args, **kwargs)
def assertRegex(self, *args, **kwargs):
return getattr(self, _assertRegex)(*args, **kwargs)
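# Illustrative sketch (hypothetical test case, assumes ``import unittest``):
# the wrappers above dispatch to whichever unittest method name exists on the
# running interpreter.
#
#     class ExampleTest(unittest.TestCase):
#         def test_counts(self):
#             assertCountEqual(self, [1, 2, 2], [2, 1, 2])
#         def test_message(self):
#             with assertRaisesRegex(self, ValueError, "bad"):
#                 raise ValueError("bad input")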
if PY3:
exec_ = getattr(moves.builtins, "exec")
def reraise(tp, value, tb=None):
if value is None:
value = tp()
if value.__traceback__ is not tb:
raise value.with_traceback(tb)
raise value
else:
def exec_(_code_, _globs_=None, _locs_=None):
"""Execute code in a namespace."""
if _globs_ is None:
frame = sys._getframe(1)
_globs_ = frame.f_globals
if _locs_ is None:
_locs_ = frame.f_locals
del frame
elif _locs_ is None:
_locs_ = _globs_
exec("""exec _code_ in _globs_, _locs_""")
exec_("""def reraise(tp, value, tb=None):
raise tp, value, tb
""")
if sys.version_info[:2] == (3, 2):
exec_("""def raise_from(value, from_value):
if from_value is None:
raise value
raise value from from_value
""")
elif sys.version_info[:2] > (3, 2):
exec_("""def raise_from(value, from_value):
raise value from from_value
""")
else:
def raise_from(value, from_value):
raise value
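# Illustrative sketch: re-raising with a preserved traceback, portably.
#
#     try:
#         parse(blob)    # hypothetical call
#     except KeyError:
#         reraise(ValueError, ValueError("bad blob"), sys.exc_info()[2])
#
# ``raise_from(new_exc, cause)`` additionally chains the cause on Python 3;
# on Python 2 it simply raises ``new_exc``.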
print_ = getattr(moves.builtins, "print", None)
if print_ is None:
def print_(*args, **kwargs):
"""The new-style print function for Python 2.4 and 2.5."""
fp = kwargs.pop("file", sys.stdout)
if fp is None:
return
def write(data):
if not isinstance(data, basestring):
data = str(data)
# If the file has an encoding, encode unicode with it.
if (isinstance(fp, file) and
isinstance(data, unicode) and
fp.encoding is not None):
errors = getattr(fp, "errors", None)
if errors is None:
errors = "strict"
data = data.encode(fp.encoding, errors)
fp.write(data)
want_unicode = False
sep = kwargs.pop("sep", None)
if sep is not None:
if isinstance(sep, unicode):
want_unicode = True
elif not isinstance(sep, str):
raise TypeError("sep must be None or a string")
end = kwargs.pop("end", None)
if end is not None:
if isinstance(end, unicode):
want_unicode = True
elif not isinstance(end, str):
raise TypeError("end must be None or a string")
if kwargs:
raise TypeError("invalid keyword arguments to print()")
if not want_unicode:
for arg in args:
if isinstance(arg, unicode):
want_unicode = True
break
if want_unicode:
newline = unicode("\n")
space = unicode(" ")
else:
newline = "\n"
space = " "
if sep is None:
sep = space
if end is None:
end = newline
for i, arg in enumerate(args):
if i:
write(sep)
write(arg)
write(end)
if sys.version_info[:2] < (3, 3):
_print = print_
def print_(*args, **kwargs):
fp = kwargs.get("file", sys.stdout)
flush = kwargs.pop("flush", False)
_print(*args, **kwargs)
if flush and fp is not None:
fp.flush()
_add_doc(reraise, """Reraise an exception.""")
if sys.version_info[0:2] < (3, 4):
def wraps(wrapped, assigned=functools.WRAPPER_ASSIGNMENTS,
updated=functools.WRAPPER_UPDATES):
def wrapper(f):
f = functools.wraps(wrapped, assigned, updated)(f)
f.__wrapped__ = wrapped
return f
return wrapper
else:
wraps = functools.wraps
def with_metaclass(meta, *bases):
"""Create a base class with a metaclass."""
# This requires a bit of explanation: the basic idea is to make a dummy
# metaclass for one level of class instantiation that replaces itself with
# the actual metaclass.
class metaclass(meta):
def __new__(cls, name, this_bases, d):
return meta(name, bases, d)
return type.__new__(metaclass, 'temporary_class', (), {})
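# Illustrative sketch (hypothetical metaclass): one class statement that works
# under both majors, with no ``__metaclass__``/``metaclass=`` syntax split.
#
#     class Meta(type):
#         pass
#
#     class Base(with_metaclass(Meta, object)):
#         pass
#
#     assert type(Base) is Meta    # the temporary dummy class is gone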
def add_metaclass(metaclass):
"""Class decorator for creating a class with a metaclass."""
def wrapper(cls):
orig_vars = cls.__dict__.copy()
slots = orig_vars.get('__slots__')
if slots is not None:
if isinstance(slots, str):
slots = [slots]
for slots_var in slots:
orig_vars.pop(slots_var)
orig_vars.pop('__dict__', None)
orig_vars.pop('__weakref__', None)
return metaclass(cls.__name__, cls.__bases__, orig_vars)
return wrapper
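# Illustrative sketch: same effect as ``with_metaclass`` but as a decorator;
# ``Meta`` is the hypothetical metaclass from the sketch above.
#
#     @add_metaclass(Meta)
#     class Widget(object):
#         __slots__ = ("name",)    # slots are popped and re-created safely
#
#     assert type(Widget) is Meta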
def python_2_unicode_compatible(klass):
"""
A decorator that defines __unicode__ and __str__ methods under Python 2.
Under Python 3 it does nothing.
To support Python 2 and 3 with a single code base, define a __str__ method
returning text and apply this decorator to the class.
"""
if PY2:
if '__str__' not in klass.__dict__:
raise ValueError("@python_2_unicode_compatible cannot be applied "
"to %s because it doesn't define __str__()." %
klass.__name__)
klass.__unicode__ = klass.__str__
klass.__str__ = lambda self: self.__unicode__().encode('utf-8')
return klass
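# Illustrative sketch: define one text-returning __str__; on Py2 the decorator
# installs it as __unicode__ and adds a UTF-8-encoding __str__.
#
#     @python_2_unicode_compatible
#     class Tag(object):    # hypothetical class
#         def __str__(self):
#             return u"caf\u00e9"
#
#     str(Tag())    # 'caf\xc3\xa9' (bytes) on Py2, 'caf\xe9' on Py3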
# Complete the moves implementation.
# This code is at the end of this module to speed up module loading.
# Turn this module into a package.
__path__ = [] # required for PEP 302 and PEP 451
__package__ = __name__ # see PEP 366 @ReservedAssignment
if globals().get("__spec__") is not None:
__spec__.submodule_search_locations = [] # PEP 451 @UndefinedVariable
# Remove other six meta path importers, since they cause problems. This can
# happen if six is removed from sys.modules and then reloaded. (Setuptools does
# this for some reason.)
if sys.meta_path:
for i, importer in enumerate(sys.meta_path):
# Here's some real nastiness: Another "instance" of the six module might
# be floating around. Therefore, we can't use isinstance() to check for
# the six meta path importer, since the other six instance will have
# inserted an importer with different class.
if (type(importer).__name__ == "_SixMetaPathImporter" and
importer.name == __name__):
del sys.meta_path[i]
break
del i, importer
# Finally, add the importer to the meta path import hook.
sys.meta_path.append(_importer)
| apache-2.0 | 5,586,783,883,803,160,000 | 34.345196 | 98 | 0.632602 | false |
kalxas/geonode | geonode/base/api/pagination.py | 6 | 1935 | # -*- coding: utf-8 -*-
#########################################################################
#
# Copyright (C) 2020 OSGeo
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
#########################################################################
from django.conf import settings
from rest_framework.response import Response
from rest_framework.pagination import PageNumberPagination
DEFAULT_PAGE = getattr(settings, 'REST_API_DEFAULT_PAGE', 1)
DEFAULT_PAGE_SIZE = getattr(settings, 'REST_API_DEFAULT_PAGE_SIZE', 10)
DEFAULT_PAGE_QUERY_PARAM = getattr(settings, 'REST_API_DEFAULT_PAGE_QUERY_PARAM', 'page_size')
class GeoNodeApiPagination(PageNumberPagination):
page = DEFAULT_PAGE
page_size = DEFAULT_PAGE_SIZE
page_size_query_param = DEFAULT_PAGE_QUERY_PARAM
def get_paginated_response(self, data):
_paginated_response = {
'links': {
'next': self.get_next_link(),
'previous': self.get_previous_link()
},
'total': self.page.paginator.count,
'page': int(self.request.GET.get('page', DEFAULT_PAGE)), # cannot use self.page as the default here
DEFAULT_PAGE_QUERY_PARAM: int(self.request.GET.get(DEFAULT_PAGE_QUERY_PARAM, self.page_size))
}
_paginated_response.update(data)
return Response(_paginated_response)
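# Illustrative sketch (assumed settings; endpoint name hypothetical): plugging
# this paginator into DRF and the envelope it produces.
#
#     REST_FRAMEWORK = {
#         "DEFAULT_PAGINATION_CLASS":
#             "geonode.base.api.pagination.GeoNodeApiPagination",
#     }
#
#     GET /api/v2/datasets/?page=2&page_size=10 ->
#     {"links": {"next": ..., "previous": ...}, "total": 57,
#      "page": 2, "page_size": 10, ...keys from the wrapped data...}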
| gpl-3.0 | -1,415,260,359,998,245 | 41.065217 | 105 | 0.645478 | false |
joshblum/django-with-audit | django/contrib/localflavor/ar/forms.py | 87 | 4024 | # -*- coding: utf-8 -*-
"""
AR-specific Form helpers.
"""
from __future__ import absolute_import
from django.contrib.localflavor.ar.ar_provinces import PROVINCE_CHOICES
from django.core.validators import EMPTY_VALUES
from django.forms import ValidationError
from django.forms.fields import RegexField, CharField, Select
from django.utils.translation import ugettext_lazy as _
class ARProvinceSelect(Select):
"""
A Select widget that uses a list of Argentinean provinces/autonomous cities
as its choices.
"""
def __init__(self, attrs=None):
super(ARProvinceSelect, self).__init__(attrs, choices=PROVINCE_CHOICES)
class ARPostalCodeField(RegexField):
"""
A field that accepts a 'classic' NNNN Postal Code or a CPA.
See http://www.correoargentino.com.ar/consulta_cpa/home.php
"""
default_error_messages = {
'invalid': _("Enter a postal code in the format NNNN or ANNNNAAA."),
}
def __init__(self, max_length=8, min_length=4, *args, **kwargs):
super(ARPostalCodeField, self).__init__(r'^\d{4}$|^[A-HJ-NP-Za-hj-np-z]\d{4}\D{3}$',
max_length, min_length, *args, **kwargs)
def clean(self, value):
value = super(ARPostalCodeField, self).clean(value)
if value in EMPTY_VALUES:
return u''
if len(value) not in (4, 8):
raise ValidationError(self.error_messages['invalid'])
if len(value) == 8:
return u'%s%s%s' % (value[0].upper(), value[1:5], value[5:].upper())
return value
class ARDNIField(CharField):
"""
A field that validates 'Documento Nacional de Identidad' (DNI) numbers.
"""
default_error_messages = {
'invalid': _("This field requires only numbers."),
'max_digits': _("This field requires 7 or 8 digits."),
}
def __init__(self, max_length=10, min_length=7, *args, **kwargs):
super(ARDNIField, self).__init__(max_length, min_length, *args,
**kwargs)
def clean(self, value):
"""
Value can be a string either in the [X]X.XXX.XXX or [X]XXXXXXX formats.
"""
value = super(ARDNIField, self).clean(value)
if value in EMPTY_VALUES:
return u''
if not value.isdigit():
value = value.replace('.', '')
if not value.isdigit():
raise ValidationError(self.error_messages['invalid'])
if len(value) not in (7, 8):
raise ValidationError(self.error_messages['max_digits'])
return value
class ARCUITField(RegexField):
"""
This field validates a CUIT (Código Único de Identificación Tributaria). A
CUIT is of the form XX-XXXXXXXX-V. The last digit is a check digit.
"""
default_error_messages = {
'invalid': _('Enter a valid CUIT in XX-XXXXXXXX-X or XXXXXXXXXXXX format.'),
'checksum': _("Invalid CUIT."),
}
def __init__(self, max_length=None, min_length=None, *args, **kwargs):
super(ARCUITField, self).__init__(r'^\d{2}-?\d{8}-?\d$',
max_length, min_length, *args, **kwargs)
def clean(self, value):
"""
Value can be either a string in the format XX-XXXXXXXX-X or an
11-digit number.
"""
value = super(ARCUITField, self).clean(value)
if value in EMPTY_VALUES:
return u''
value, cd = self._canon(value)
if self._calc_cd(value) != cd:
raise ValidationError(self.error_messages['checksum'])
return self._format(value, cd)
def _canon(self, cuit):
cuit = cuit.replace('-', '')
return cuit[:-1], cuit[-1]
def _calc_cd(self, cuit):
# 11 - (tmp % 11) can itself be 10 or 11; fold those to 9 and 0, matching
# the check-digit fix later adopted by Django.
mults = (5, 4, 3, 2, 7, 6, 5, 4, 3, 2)
tmp = sum(m * int(cuit[idx]) for idx, m in enumerate(mults))
result = 11 - (tmp % 11)
return str({11: 0, 10: 9}.get(result, result))
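# Worked example (illustrative): for cuit = '2012345678' the weighted sum is
# 2*5 + 0*4 + 1*3 + 2*2 + 3*7 + 4*6 + 5*5 + 6*4 + 7*3 + 8*2 = 148;
# 148 % 11 = 5 and 11 - 5 = 6, so the check digit is '6'.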
def _format(self, cuit, check_digit=None):
if check_digit is None:
check_digit = cuit[-1]
cuit = cuit[:-1]
return u'%s-%s-%s' % (cuit[:2], cuit[2:], check_digit)
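# Illustrative sketch (hypothetical form; assumes ``from django import forms``):
#
#     class InvoiceForm(forms.Form):
#         province = forms.CharField(widget=ARProvinceSelect())
#         postal_code = ARPostalCodeField()
#         cuit = ARCUITField()
#
#     form = InvoiceForm({"province": "B", "postal_code": "c1064aab",
#                         "cuit": "20-12345678-6"})
#     form.is_valid()    # True; the postal code is cleaned to 'C1064AAB'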
| bsd-3-clause | -8,263,621,960,817,804,000 | 33.367521 | 92 | 0.59239 | false |
yamila-moreno/django | django/templatetags/i18n.py | 16 | 18205 | from __future__ import unicode_literals
import sys
from django.conf import settings
from django.template import Library, Node, TemplateSyntaxError, Variable
from django.template.base import TOKEN_TEXT, TOKEN_VAR, render_value_in_context
from django.template.defaulttags import token_kwargs
from django.utils import six, translation
register = Library()
class GetAvailableLanguagesNode(Node):
def __init__(self, variable):
self.variable = variable
def render(self, context):
context[self.variable] = [(k, translation.ugettext(v)) for k, v in settings.LANGUAGES]
return ''
class GetLanguageInfoNode(Node):
def __init__(self, lang_code, variable):
self.lang_code = lang_code
self.variable = variable
def render(self, context):
lang_code = self.lang_code.resolve(context)
context[self.variable] = translation.get_language_info(lang_code)
return ''
class GetLanguageInfoListNode(Node):
def __init__(self, languages, variable):
self.languages = languages
self.variable = variable
def get_language_info(self, language):
# ``language`` is either a language code string or a sequence
# with the language code as its first item
if len(language[0]) > 1:
return translation.get_language_info(language[0])
else:
return translation.get_language_info(str(language))
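# Illustrative note: ``len(language[0]) > 1`` distinguishes a plain code
# string from a LANGUAGES-style pair -- for "fr", language[0] is "f"
# (length 1), while for ("fr", "French"), language[0] is "fr" (length 2).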
def render(self, context):
langs = self.languages.resolve(context)
context[self.variable] = [self.get_language_info(lang) for lang in langs]
return ''
class GetCurrentLanguageNode(Node):
def __init__(self, variable):
self.variable = variable
def render(self, context):
context[self.variable] = translation.get_language()
return ''
class GetCurrentLanguageBidiNode(Node):
def __init__(self, variable):
self.variable = variable
def render(self, context):
context[self.variable] = translation.get_language_bidi()
return ''
class TranslateNode(Node):
def __init__(self, filter_expression, noop, asvar=None,
message_context=None):
self.noop = noop
self.asvar = asvar
self.message_context = message_context
self.filter_expression = filter_expression
if isinstance(self.filter_expression.var, six.string_types):
self.filter_expression.var = Variable("'%s'" %
self.filter_expression.var)
def render(self, context):
self.filter_expression.var.translate = not self.noop
if self.message_context:
self.filter_expression.var.message_context = (
self.message_context.resolve(context))
output = self.filter_expression.resolve(context)
value = render_value_in_context(output, context)
if self.asvar:
context[self.asvar] = value
return ''
else:
return value
class BlockTranslateNode(Node):
def __init__(self, extra_context, singular, plural=None, countervar=None,
counter=None, message_context=None, trimmed=False):
self.extra_context = extra_context
self.singular = singular
self.plural = plural
self.countervar = countervar
self.counter = counter
self.message_context = message_context
self.trimmed = trimmed
def render_token_list(self, tokens):
result = []
vars = []
for token in tokens:
if token.token_type == TOKEN_TEXT:
result.append(token.contents.replace('%', '%%'))
elif token.token_type == TOKEN_VAR:
result.append('%%(%s)s' % token.contents)
vars.append(token.contents)
msg = ''.join(result)
if self.trimmed:
msg = translation.trim_whitespace(msg)
return msg, vars
def render(self, context, nested=False):
if self.message_context:
message_context = self.message_context.resolve(context)
else:
message_context = None
tmp_context = {}
for var, val in self.extra_context.items():
tmp_context[var] = val.resolve(context)
# Update() works like a push(), so corresponding context.pop() is at
# the end of function
context.update(tmp_context)
singular, vars = self.render_token_list(self.singular)
if self.plural and self.countervar and self.counter:
count = self.counter.resolve(context)
context[self.countervar] = count
plural, plural_vars = self.render_token_list(self.plural)
if message_context:
result = translation.npgettext(message_context, singular,
plural, count)
else:
result = translation.ungettext(singular, plural, count)
vars.extend(plural_vars)
else:
if message_context:
result = translation.pgettext(message_context, singular)
else:
result = translation.ugettext(singular)
default_value = context.template.engine.string_if_invalid
def render_value(key):
if key in context:
val = context[key]
else:
val = default_value % key if '%s' in default_value else default_value
return render_value_in_context(val, context)
data = {v: render_value(v) for v in vars}
context.pop()
try:
result = result % data
except (KeyError, ValueError):
if nested:
# Either string is malformed, or it's a bug
raise TemplateSyntaxError("'blocktrans' is unable to format "
"string returned by gettext: %r using %r" % (result, data))
with translation.override(None):
result = self.render(context, nested=True)
return result
class LanguageNode(Node):
def __init__(self, nodelist, language):
self.nodelist = nodelist
self.language = language
def render(self, context):
with translation.override(self.language.resolve(context)):
output = self.nodelist.render(context)
return output
@register.tag("get_available_languages")
def do_get_available_languages(parser, token):
"""
This will store a list of available languages
in the context.
Usage::
{% get_available_languages as languages %}
{% for language in languages %}
...
{% endfor %}
This will just pull the LANGUAGES setting from
your settings file (or the default settings) and
put it into the named variable.
"""
# token.split_contents() isn't useful here because this tag doesn't accept variables as arguments
args = token.contents.split()
if len(args) != 3 or args[1] != 'as':
raise TemplateSyntaxError("'get_available_languages' requires 'as variable' (got %r)" % args)
return GetAvailableLanguagesNode(args[2])
@register.tag("get_language_info")
def do_get_language_info(parser, token):
"""
This will store the language information dictionary for the given language
code in a context variable.
Usage::
{% get_language_info for LANGUAGE_CODE as l %}
{{ l.code }}
{{ l.name }}
{{ l.name_translated }}
{{ l.name_local }}
{{ l.bidi|yesno:"bi-directional,uni-directional" }}
"""
args = token.split_contents()
if len(args) != 5 or args[1] != 'for' or args[3] != 'as':
raise TemplateSyntaxError("'%s' requires 'for string as variable' (got %r)" % (args[0], args[1:]))
return GetLanguageInfoNode(parser.compile_filter(args[2]), args[4])
@register.tag("get_language_info_list")
def do_get_language_info_list(parser, token):
"""
This will store a list of language information dictionaries for the given
language codes in a context variable. The language codes can be specified
either as a list of strings or a settings.LANGUAGES style list (or any
sequence of sequences whose first items are language codes).
Usage::
{% get_language_info_list for LANGUAGES as langs %}
{% for l in langs %}
{{ l.code }}
{{ l.name }}
{{ l.name_translated }}
{{ l.name_local }}
{{ l.bidi|yesno:"bi-directional,uni-directional" }}
{% endfor %}
"""
args = token.split_contents()
if len(args) != 5 or args[1] != 'for' or args[3] != 'as':
raise TemplateSyntaxError("'%s' requires 'for sequence as variable' (got %r)" % (args[0], args[1:]))
return GetLanguageInfoListNode(parser.compile_filter(args[2]), args[4])
@register.filter
def language_name(lang_code):
return translation.get_language_info(lang_code)['name']
@register.filter
def language_name_translated(lang_code):
english_name = translation.get_language_info(lang_code)['name']
return translation.ugettext(english_name)
@register.filter
def language_name_local(lang_code):
return translation.get_language_info(lang_code)['name_local']
@register.filter
def language_bidi(lang_code):
return translation.get_language_info(lang_code)['bidi']
@register.tag("get_current_language")
def do_get_current_language(parser, token):
"""
This will store the current language in the context.
Usage::
{% get_current_language as language %}
This will fetch the currently active language and
put its value into the ``language`` context
variable.
"""
# token.split_contents() isn't useful here because this tag doesn't accept variables as arguments
args = token.contents.split()
if len(args) != 3 or args[1] != 'as':
raise TemplateSyntaxError("'get_current_language' requires 'as variable' (got %r)" % args)
return GetCurrentLanguageNode(args[2])
@register.tag("get_current_language_bidi")
def do_get_current_language_bidi(parser, token):
"""
This will store the current language layout in the context.
Usage::
{% get_current_language_bidi as bidi %}
This will fetch the currently active language's layout and
put its value into the ``bidi`` context variable.
True indicates right-to-left layout, otherwise left-to-right
"""
# token.split_contents() isn't useful here because this tag doesn't accept variables as arguments
args = token.contents.split()
if len(args) != 3 or args[1] != 'as':
raise TemplateSyntaxError("'get_current_language_bidi' requires 'as variable' (got %r)" % args)
return GetCurrentLanguageBidiNode(args[2])
@register.tag("trans")
def do_translate(parser, token):
"""
This will mark a string for translation and will
translate the string for the current language.
Usage::
{% trans "this is a test" %}
This will mark the string for translation so it will
be pulled out by mark-messages.py into the .po files
and will run the string through the translation engine.
There is a second form::
{% trans "this is a test" noop %}
This will only mark for translation, but will return
the string unchanged. Use it when you need to store
values into forms that should be translated later on.
You can use variables instead of constant strings
to translate stuff you marked somewhere else::
{% trans variable %}
This will just try to translate the contents of
the variable ``variable``. Make sure that the string
in there is something that is in the .po file.
It is possible to store the translated string into a variable::
{% trans "this is a test" as var %}
{{ var }}
Contextual translations are also supported::
{% trans "this is a test" context "greeting" %}
This is equivalent to calling pgettext instead of (u)gettext.
"""
bits = token.split_contents()
if len(bits) < 2:
raise TemplateSyntaxError("'%s' takes at least one argument" % bits[0])
message_string = parser.compile_filter(bits[1])
remaining = bits[2:]
noop = False
asvar = None
message_context = None
seen = set()
invalid_context = {'as', 'noop'}
while remaining:
option = remaining.pop(0)
if option in seen:
raise TemplateSyntaxError(
"The '%s' option was specified more than once." % option,
)
elif option == 'noop':
noop = True
elif option == 'context':
try:
value = remaining.pop(0)
except IndexError:
msg = "No argument provided to the '%s' tag for the context option." % bits[0]
six.reraise(TemplateSyntaxError, TemplateSyntaxError(msg), sys.exc_info()[2])
if value in invalid_context:
raise TemplateSyntaxError(
"Invalid argument '%s' provided to the '%s' tag for the context option" % (value, bits[0]),
)
message_context = parser.compile_filter(value)
elif option == 'as':
try:
value = remaining.pop(0)
except IndexError:
msg = "No argument provided to the '%s' tag for the as option." % bits[0]
six.reraise(TemplateSyntaxError, TemplateSyntaxError(msg), sys.exc_info()[2])
asvar = value
else:
raise TemplateSyntaxError(
"Unknown argument for '%s' tag: '%s'. The only options "
"available are 'noop', 'context' \"xxx\", and 'as VAR'." % (
bits[0], option,
)
)
seen.add(option)
return TranslateNode(message_string, noop, asvar, message_context)
@register.tag("blocktrans")
def do_block_translate(parser, token):
"""
This will translate a block of text with parameters.
Usage::
{% blocktrans with bar=foo|filter boo=baz|filter %}
This is {{ bar }} and {{ boo }}.
{% endblocktrans %}
Additionally, this supports pluralization::
{% blocktrans count count=var|length %}
There is {{ count }} object.
{% plural %}
There are {{ count }} objects.
{% endblocktrans %}
This is much like ngettext, only in template syntax.
The "var as value" legacy format is still supported::
{% blocktrans with foo|filter as bar and baz|filter as boo %}
{% blocktrans count var|length as count %}
Contextual translations are supported::
{% blocktrans with bar=foo|filter context "greeting" %}
This is {{ bar }}.
{% endblocktrans %}
This is equivalent to calling pgettext/npgettext instead of
(u)gettext/(u)ngettext.
"""
bits = token.split_contents()
options = {}
remaining_bits = bits[1:]
while remaining_bits:
option = remaining_bits.pop(0)
if option in options:
raise TemplateSyntaxError('The %r option was specified more '
'than once.' % option)
if option == 'with':
value = token_kwargs(remaining_bits, parser, support_legacy=True)
if not value:
raise TemplateSyntaxError('"with" in %r tag needs at least '
'one keyword argument.' % bits[0])
elif option == 'count':
value = token_kwargs(remaining_bits, parser, support_legacy=True)
if len(value) != 1:
raise TemplateSyntaxError('"count" in %r tag expected exactly '
'one keyword argument.' % bits[0])
elif option == "context":
try:
value = remaining_bits.pop(0)
value = parser.compile_filter(value)
except Exception:
msg = (
'"context" in %r tag expected '
'exactly one argument.') % bits[0]
six.reraise(TemplateSyntaxError, TemplateSyntaxError(msg), sys.exc_info()[2])
elif option == "trimmed":
value = True
else:
raise TemplateSyntaxError('Unknown argument for %r tag: %r.' %
(bits[0], option))
options[option] = value
if 'count' in options:
countervar, counter = list(options['count'].items())[0]
else:
countervar, counter = None, None
if 'context' in options:
message_context = options['context']
else:
message_context = None
extra_context = options.get('with', {})
trimmed = options.get("trimmed", False)
singular = []
plural = []
while parser.tokens:
token = parser.next_token()
if token.token_type in (TOKEN_VAR, TOKEN_TEXT):
singular.append(token)
else:
break
if countervar and counter:
if token.contents.strip() != 'plural':
raise TemplateSyntaxError("'blocktrans' doesn't allow other block tags inside it")
while parser.tokens:
token = parser.next_token()
if token.token_type in (TOKEN_VAR, TOKEN_TEXT):
plural.append(token)
else:
break
if token.contents.strip() != 'endblocktrans':
raise TemplateSyntaxError("'blocktrans' doesn't allow other block tags (seen %r) inside it" % token.contents)
return BlockTranslateNode(extra_context, singular, plural, countervar,
counter, message_context, trimmed=trimmed)
@register.tag
def language(parser, token):
"""
This will enable the given language just for this block.
Usage::
{% language "de" %}
This is {{ bar }} and {{ boo }}.
{% endlanguage %}
"""
bits = token.split_contents()
if len(bits) != 2:
raise TemplateSyntaxError("'%s' takes one argument (language)" % bits[0])
language = parser.compile_filter(bits[1])
nodelist = parser.parse(('endlanguage',))
parser.delete_first_token()
return LanguageNode(nodelist, language)
| bsd-3-clause | 1,640,208,146,679,332,600 | 33.349057 | 117 | 0.604779 | false |
SublimeText-Markdown/MarkdownEditing | references.py | 1 | 26853 | """
Commands related to links, references and footnotes.
Exported commands:
ReferenceJumpCommand
ReferenceJumpContextCommand
ReferenceNewReferenceCommand
ReferenceNewInlineLinkCommand
ReferenceNewInlineImage
ReferenceNewImage
ReferenceNewFootnote
ReferenceDeleteReference
ReferenceOrganize
GatherMissingLinkMarkersCommand
ConvertInlineLinkToReferenceCommand
ConvertInlineLinksToReferencesCommand
"""
import sublime
import re
import operator
try:
from MarkdownEditing.mdeutils import MDETextCommand
except ImportError:
from mdeutils import MDETextCommand
refname_scope_name = "constant.other.reference.link.markdown"
definition_scope_name = "meta.link.reference.def.markdown"
footnote_scope_name = "meta.link.reference.footnote.markdown"
marker_scope_name = "meta.link.reference.markdown"
marker_literal_scope_name = "meta.link.reference.literal.markdown"
marker_image_scope_name = "meta.image.reference.markdown"
ref_link_scope_name = "markup.underline.link.markdown"
marker_begin_scope_name = "punctuation.definition.string.begin.markdown"
marker_text_end_scope_name = "punctuation.definition.string.end.markdown"
marker_text_scope_name = "string.other.link.title.markdown"
refname_start_scope_name = "punctuation.definition.constant.begin.markdown"
marker_end_scope_name = "punctuation.definition.constant.end.markdown"
def hasScope(scope_name, to_find):
"""Test to_find's existence in scope_name."""
return to_find in scope_name.split(" ")
class Obj(object):
"""A utility class for anonymous objects."""
def __init__(self, **kwargs):
"""Take keyword arguments."""
self.__dict__.update(kwargs)
def getMarkers(view, name=''):
"""Find all markers."""
# returns {name -> Region}
markers = []
name = re.escape(name)
if name == '':
markers.extend(view.find_all(r"(?<=\]\[)([^\]]+)(?=\])", 0)) # ][???]
markers.extend(view.find_all(r"(?<=\[)([^\]]*)(?=\]\[\])", 0)) # [???][]
markers.extend(view.find_all(r"(?<=\[)(\^[^\]]+)(?=\])(?!\s*\]:)", 0)) # [^???]
markers.extend(view.find_all(r"(?<!\]\[)(?<=\[)([^\]]+)(?=\])(?!\]\[)(?!\]\()(?!\]:)", 0)) # [???]
else:
# ][name]
markers.extend(view.find_all(r"(?<=\]\[)(?i)(%s)(?=\])" % name, 0))
markers.extend(view.find_all(r"(?<=\[)(?i)(%s)(?=\]\[\])" % name, 0)) # [name][]
markers.extend(view.find_all(r"(?<!\]\[)(?<=\[)(?i)(%s)(?=\])(?!\]\[)(?!\]\()(?!\]:)" % name, 0)) # [name]
if name[0] == '^':
# [(^)name]
markers.extend(view.find_all(r"(?<=\[)(%s)(?=\])(?!\s*\]:)" % name, 0))
regions = []
for x in markers:
scope_name = view.scope_name(x.begin())
if (hasScope(scope_name, refname_scope_name) or hasScope(scope_name, marker_text_scope_name)) and \
not hasScope(view.scope_name(x.begin()), definition_scope_name):
regions.append(x)
ids = {}
for reg in regions:
name = view.substr(reg).strip()
key = name.lower()
if key in ids:
ids[key].regions.append(reg)
else:
ids[key] = Obj(regions=[reg], label=name)
return ids
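# Illustrative sketch: for a buffer containing
#
#     See [spec][rfc] and [intro].
#
# getMarkers(view) maps {"rfc": Obj(regions=[...], label="rfc"),
# "intro": Obj(regions=[...], label="intro")} -- keys are lower-cased,
# labels keep the original spelling, and "spec" is link text, not a marker.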
def getReferences(view, name=''):
"""Find all reference definitions."""
# returns {name -> Region}
refs = []
name = re.escape(name)
if name == '':
refs.extend(view.find_all(r"(?<=^\[)([^\]]+)(?=\]:)", 0))
else:
refs.extend(view.find_all(r"(?<=^\[)(%s)(?=\]:)" % name, 0))
regions = refs
ids = {}
for reg in regions:
name = view.substr(reg).strip()
key = name.lower()
if key in ids:
ids[key].regions.append(reg)
else:
ids[key] = Obj(regions=[reg], label=name)
return ids
def isMarkerDefined(view, name):
"""Return True if a marker is defined by that name."""
return len(getReferences(view, name)) > 0
def getCurrentScopeRegion(view, pt):
"""Extend the region under current scope."""
scope = view.scope_name(pt)
l = pt
while l > 0 and view.scope_name(l - 1) == scope:
l -= 1
r = pt
while r < view.size() and view.scope_name(r) == scope:
r += 1
return sublime.Region(l, r)
def findScopeFrom(view, pt, scope, backwards=False, char=None):
"""Find the nearest position of a scope from given position."""
if backwards:
while pt >= 0 and (not hasScope(view.scope_name(pt), scope) or
(char is not None and view.substr(pt) != char)):
pt -= 1
else:
while pt < view.size() and (not hasScope(view.scope_name(pt), scope) or
(char is not None and view.substr(pt) != char)):
pt += 1
return pt
def get_reference(view, pos):
"""Try to match a marker or reference at the given position; return a tuple (matched, is_definition, name)."""
scope = view.scope_name(pos).split(" ")
if definition_scope_name in scope or footnote_scope_name in scope:
if refname_scope_name in scope:
# Definition name
defname = view.substr(getCurrentScopeRegion(view, pos))
elif refname_start_scope_name in scope:
# Starting "["
defname = view.substr(getCurrentScopeRegion(view, pos + 1))
else:
# URL or footnote
marker_pt = findScopeFrom(view, pos, refname_scope_name, True)
defname = view.substr(getCurrentScopeRegion(view, marker_pt))
return (True, True, defname)
elif marker_scope_name in scope or marker_image_scope_name in scope or marker_literal_scope_name in scope:
if refname_scope_name in scope:
# defname name
defname = view.substr(getCurrentScopeRegion(view, pos))
else:
# Text
if marker_begin_scope_name in scope:
pos += 1
while pos >= 0 and view.substr(sublime.Region(pos, pos + 1)) in '[]':
pos -= 1
if not (marker_scope_name in scope or marker_image_scope_name in scope or marker_literal_scope_name in scope):
return (False, None, None)
marker_text_end = findScopeFrom(view, pos, marker_text_end_scope_name) + 1
if hasScope(view.scope_name(marker_text_end), refname_start_scope_name) and not hasScope(view.scope_name(marker_text_end + 1), marker_end_scope_name):
# of [Text][name] struct
marker_pt = marker_text_end + 1
marker_pt_end = findScopeFrom(view, marker_pt, marker_end_scope_name)
defname = view.substr(sublime.Region(marker_pt, marker_pt_end))
else:
# of [Text] struct or [Text][] struct
defname = view.substr(getCurrentScopeRegion(view, pos))
return (True, False, defname)
else:
return (False, None, None)
class ReferenceJumpCommand(MDETextCommand):
"""Jump between definition and reference."""
def description(self):
"""Description for package control."""
return 'Jump between definition and reference'
def run(self, edit):
"""Run command callback."""
view = self.view
edit_regions = []
markers = getMarkers(view)
refs = getReferences(view)
missing_markers = []
missing_refs = []
for sel in view.sel():
matched, is_definition, defname = get_reference(view, sel.begin())
if matched:
defname_key = defname.lower()
if is_definition:
if defname_key in markers:
edit_regions.extend(markers[defname_key].regions)
else:
missing_markers.append(defname)
else:
if defname_key in refs:
edit_regions.extend(refs[defname_key].regions)
else:
missing_refs.append(defname)
if len(edit_regions) > 0:
sels = view.sel()
sels.clear()
sels.add_all(edit_regions)
view.show(edit_regions[0])
if len(missing_refs) + len(missing_markers) > 0:
# has something missing
if len(missing_markers) == 0:
sublime.status_message("The definition%s of %s cannot be found." % ("" if len(missing_refs) == 1 else "s", ", ".join(missing_refs)))
elif len(missing_refs) == 0:
sublime.status_message("The marker%s of %s cannot be found." % ("" if len(missing_markers) == 1 else "s", ", ".join(missing_markers)))
else:
sublime.status_message("The definition%s of %s and the marker%s of %s cannot be found." % ("" if len(missing_refs) == 1 else "s", ", ".join(missing_refs), "" if len(missing_markers) == 1 else "s", ", ".join(missing_markers)))
class ReferenceJumpContextCommand(ReferenceJumpCommand):
"""Jump between definition and reference. Used in context menu."""
def is_visible(self):
"""Return True if cursor is on a marker or reference."""
return ReferenceJumpCommand.is_visible(self) and any(get_reference(self.view, sel.begin())[0] for sel in self.view.sel())
def is_url(contents):
"""Return True if ``contents`` contains a URL."""
re_match_urls = re.compile(r"""((?:[a-z][\w-]+:(?:/{1,3}|[a-z0-9%])|www\d{0,3}[.]|[a-z0-9.\-]+[.][a-z]{2,4}/)(?:[^\s()<>]+|(([^\s()<>]+|(([^\s()<>]+)))*))+(?:(([^\s()<>]+|(([^\s()<>]+)))*)|[^\s`!()[]{};:'".,<>?«»“”‘’]))""", re.DOTALL)
m = re_match_urls.search(contents)
return True if m else False
def mangle_url(url):
"""Mangle URL for links."""
url = url.strip()
if re.match(r'^([a-z0-9-]+\.)+\w{2,4}', url, re.IGNORECASE):
url = 'http://' + url
return url
def append_reference_link(edit, view, name, url):
r"""Append a reference link definition at the end of the file, handling a missing trailing \n."""
if view.substr(view.size() - 1) == '\n':
nl = ''
else:
nl = '\n'
# Append the new reference link to the end of the file
edit_position = view.size() + len(nl) + 1
view.insert(edit, view.size(), '{0}[{1}]: {2}\n'.format(nl, name, url))
return sublime.Region(edit_position, edit_position + len(name))
def suggest_default_link_name(name, image):
"""Suggest default link name in camel case."""
ret = ''
name_segs = name.split()
if len(name_segs) > 1:
for word in name_segs:
ret += word.capitalize()
if len(ret) > 30:
break
return ('image' if image else '') + ret
else:
return name
def check_for_link(view, link):
"""Check if the link already defined. Return the name if so."""
refs = getReferences(view)
link = link.strip()
for name in refs:
link_begin = findScopeFrom(view, refs[name].regions[0].begin(), ref_link_scope_name)
reg = getCurrentScopeRegion(view, link_begin)
found_link = view.substr(reg).strip()
if found_link == link:
return name
return None
class ReferenceNewReferenceCommand(MDETextCommand):
"""Create a new reference."""
def run(self, edit, image=False):
"""Run command callback."""
view = self.view
edit_regions = []
contents = sublime.get_clipboard().strip()
link = mangle_url(contents) if is_url(contents) else ""
suggested_name = ""
if len(link) > 0:
# If link already exists, reuse existing reference
suggested_link_name = suggested_name = check_for_link(view, link)
for sel in view.sel():
text = view.substr(sel)
if not suggested_name:
suggested_link_name = suggest_default_link_name(text, image)
suggested_name = suggested_link_name if suggested_link_name != text else ""
edit_position = sel.end() + 3
if image:
edit_position += 1
view.replace(edit, sel, "![" + text + "][" + suggested_name + "]")
else:
view.replace(edit, sel, "[" + text + "][" + suggested_name + "]")
edit_regions.append(sublime.Region(edit_position, edit_position + len(suggested_name)))
if len(edit_regions) > 0:
selection = view.sel()
selection.clear()
reference_region = append_reference_link(edit, view, suggested_link_name, link)
selection.add(reference_region)
selection.add_all(edit_regions)
class ReferenceNewInlineLinkCommand(MDETextCommand):
"""Create a new inline link."""
def run(self, edit, image=False):
"""Run command callback."""
view = self.view
contents = sublime.get_clipboard().strip()
link = mangle_url(contents) if is_url(contents) else ""
link = link.replace("$", "\\$")
if image:
view.run_command("insert_snippet", {"contents": ""})
else:
view.run_command("insert_snippet", {"contents": "[${1:$SELECTION}](${2:" + link + "})"})
class ReferenceNewInlineImage(MDETextCommand):
"""Create a new inline image."""
def run(self, edit):
"""Run command callback."""
self.view.run_command("reference_new_inline_link", {"image": True})
class ReferenceNewImage(MDETextCommand):
"""Create a new image."""
def run(self, edit):
"""Run command callback."""
self.view.run_command("reference_new_reference", {"image": True})
def get_next_footnote_marker(view):
"""Get the number of the next footnote."""
refs = getReferences(view)
footnotes = [int(ref[1:]) for ref in refs if view.substr(refs[ref].regions[0])[0] == "^"]
def target_loc(num):
return (num - 1) % len(footnotes)
for i in range(len(footnotes)):
footnote = footnotes[i]
tl = target_loc(footnote)
# footnotes = [1 2 {4} 5], i = 2, footnote = 4, tl = 3
while tl != i:
target_fn = footnotes[tl]
ttl = target_loc(target_fn)
# target_fn = 5, ttl = 0
if ttl != tl or target_fn > footnote:
footnotes[i], footnotes[tl] = footnotes[tl], footnotes[i]
tl, footnote = ttl, target_fn
# [1 2 {5} 4]
else:
break
for i in range(len(footnotes)):
if footnotes[i] != i + 1:
return i + 1
return len(footnotes) + 1
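# Illustrative sketch: the permutation pass above lets the final scan find the
# smallest unused footnote number -- with existing markers [^1], [^2] and
# [^4], get_next_footnote_marker(view) returns 3; with no footnotes, 1.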
class ReferenceNewFootnote(MDETextCommand):
"""Create a new footnote."""
def run(self, edit):
"""Run command callback."""
view = self.view
markernum = get_next_footnote_marker(view)
markernum_str = '[^%s]' % markernum
for sel in view.sel():
startloc = sel.end()
if bool(view.size()):
targetloc = view.find('(\s|$)', startloc).begin()
else:
targetloc = 0
view.insert(edit, targetloc, markernum_str)
if len(view.sel()) > 0:
view.show(view.size())
view.insert(edit, view.size(), '\n' + markernum_str + ': ')
view.sel().clear()
view.sel().add(sublime.Region(view.size(), view.size()))
class ReferenceDeleteReference(MDETextCommand):
"""Delete a reference."""
def run(self, edit):
"""Run command callback."""
view = self.view
edit_regions = []
markers = getMarkers(view)
refs = getReferences(view)
for sel in view.sel():
matched, is_definition, defname = get_reference(view, sel.begin())
if matched:
defname_key = defname.lower()
if defname_key in markers:
for marker in markers[defname_key].regions:
if defname[0] == "^":
edit_regions.append(sublime.Region(marker.begin() - 1, marker.end() + 1))
else:
l = findScopeFrom(view, marker.begin(), marker_begin_scope_name, True)
if l > 0 and view.substr(sublime.Region(l - 1, l)) == "!":
edit_regions.append(sublime.Region(l - 1, l + 1))
else:
edit_regions.append(sublime.Region(l, l + 1))
if hasScope(view.scope_name(marker.end()), marker_text_end_scope_name):
if view.substr(sublime.Region(marker.end() + 1, marker.end() + 2)) == '[':
# [Text][]
r = findScopeFrom(view, marker.end(), marker_end_scope_name, False)
edit_regions.append(sublime.Region(marker.end(), r + 1))
else:
# [Text]
edit_regions.append(sublime.Region(marker.end(), marker.end() + 1))
else:
# [Text][name]
r = findScopeFrom(view, marker.begin(), marker_text_end_scope_name, True)
edit_regions.append(sublime.Region(r, marker.end() + 1))
if defname_key in refs:
for ref in refs[defname_key].regions:
edit_regions.append(view.full_line(ref.begin()))
if len(edit_regions) > 0:
sel = view.sel()
sel.clear()
sel.add_all(edit_regions)
def delete_all(index):
if index == 0:
view.run_command("left_delete")
view.window().show_quick_panel(["Delete the References", "Preview the Changes"], delete_all, sublime.MONOSPACE_FONT)
class ReferenceOrganize(MDETextCommand):
"""Sort and report all references."""
def run(self, edit):
"""Run command callback."""
view = self.view
# reorder
markers = getMarkers(view)
marker_order = sorted(markers.keys(), key=lambda marker: min(markers[marker].regions, key=lambda reg: reg.a).a)
marker_order = dict(zip(marker_order, range(0, len(marker_order))))
refs = getReferences(view)
flatrefs = []
flatfns = []
sel = view.sel()
sel.clear()
for name in refs:
for link_reg in refs[name].regions:
line_reg = view.full_line(link_reg)
if name[0] == "^":
flatfns.append((name, view.substr(line_reg).strip("\n")))
else:
flatrefs.append((name, view.substr(line_reg).strip("\n")))
sel.add(line_reg)
flatfns.sort(key=operator.itemgetter(0))
flatrefs.sort(key=lambda x: marker_order[x[0].lower()] if x[0].lower() in marker_order else 9999)
view.run_command("left_delete")
if view.size() >= 2 and view.substr(sublime.Region(view.size() - 2, view.size())) == "\n\n":
view.erase(edit, sublime.Region(view.size() - 1, view.size()))
for fn_tuple in flatfns:
view.insert(edit, view.size(), fn_tuple[1])
view.insert(edit, view.size(), "\n")
view.insert(edit, view.size(), "\n")
for ref_tuple in flatrefs:
view.insert(edit, view.size(), ref_tuple[1])
view.insert(edit, view.size(), "\n")
# delete duplicate / report conflict
sel.clear()
refs = getReferences(view)
conflicts = {}
unique_links = {}
output = ""
for name in refs:
if name[0] == '^':
continue
n_links = len(refs[name].regions)
if n_links > 1:
for ref in refs[name].regions:
link_begin = findScopeFrom(view, ref.end(), ref_link_scope_name)
link = view.substr(getCurrentScopeRegion(view, link_begin))
if name in unique_links:
if link == unique_links[name]:
output += "%s has duplicate value of %s\n" % (refs[name].label, link)
sel.add(view.full_line(ref.begin()))
elif name in conflicts:
conflicts[name].append(link)
else:
conflicts[name] = [link]
else:
unique_links[name] = link
# view.run_command("left_delete")
for name in conflicts:
output += "%s has conflict values: %s with %s\n" % (refs[name].label, unique_links[name], ", ".join(conflicts[name]))
# report missing
refs = getReferences(view)
lower_refs = [ref.lower() for ref in refs]
missings = []
for ref in refs:
if ref not in marker_order:
missings.append(refs[ref].label)
if len(missings) > 0:
output += "Error: Definition [%s] %s no reference\n" % (", ".join(missings), "have" if len(missings) > 1 else "has")
missings = []
for marker in markers:
if marker not in lower_refs:
missings.append(markers[marker].label)
if len(missings) > 0:
output += "Error: [%s] %s no definition\n" % (", ".join(missings), "have" if len(missings) > 1 else "has")
# sel.clear()
if len(output) == 0:
output = "All references are well defined :)\n"
output += "===================\n"
def get_times_string(n):
if n == 0:
return "0 time"
elif n == 1:
return "1 time"
else:
return "%i times" % n
output += "\n".join(('[%s] is referenced %s' % (markers[m].label, get_times_string(len(markers[m].regions)))) for m in markers)
window = view.window()
output_panel = window.create_output_panel("mde")
output_panel.run_command('erase_view')
output_panel.run_command('append', {'characters': output})
window.run_command("show_panel", {"panel": "output.mde"})
class GatherMissingLinkMarkersCommand(MDETextCommand):
"""Gather all missing references and creates them."""
def run(self, edit):
"""Run command callback."""
view = self.view
refs = getReferences(view)
markers = getMarkers(view)
missings = []
for marker in markers:
if marker not in refs:
missings.append(marker)
if len(missings):
# Remove all whitespace at the end of the file
whitespace_at_end = view.find(r'\s*\z', 0)
view.replace(edit, whitespace_at_end, "\n")
# If there is not already a reference list at the end, insert a new line at the end
if not view.find(r'\n\s*\[[^\]]*\]:.*\s*\z', 0):
view.insert(edit, view.size(), "\n")
for link in missings:
view.insert(edit, view.size(), '[%s]: \n' % link)
def convert2ref(view, edit, link_span, name, omit_name=False):
"""Convert single link to reference."""
view.sel().clear()
link = view.substr(sublime.Region(link_span.a + 1, link_span.b - 1))
if omit_name:
view.replace(edit, link_span, '[]')
link_span = sublime.Region(link_span.a + 1, link_span.a + 1)
offset = len(link)
else:
view.replace(edit, link_span, '[%s]' % name)
link_span = sublime.Region(link_span.a + 1, link_span.a + 1 + len(name))
offset = len(link) - len(name)
view.sel().add(link_span)
view.show_at_center(link_span)
_viewsize = view.size()
view.insert(edit, _viewsize, '[%s]: %s\n' % (name, link))
reference_span = sublime.Region(_viewsize + 1, _viewsize + 1 + len(name))
view.sel().add(reference_span)
return offset
class ConvertInlineLinkToReferenceCommand(MDETextCommand):
"""Convert an inline link to reference."""
def is_visible(self):
"""Return True if cursor is on a marker or reference."""
for sel in self.view.sel():
scope_name = self.view.scope_name(sel.b)
if hasScope(scope_name, 'meta.link.inline.markdown'):
return True
return False
def run(self, edit, name=None):
"""Run command callback."""
view = self.view
pattern = r"\[([^\]]+)\]\((?!#)([^\)]+)\)"
# Remove all whitespace at the end of the file
whitespace_at_end = view.find(r'\s*\z', 0)
view.replace(edit, whitespace_at_end, "\n")
# If there is not already a reference list at the end, insert a new line at the end
if not view.find(r'\n\s*\[[^\]]*\]:.*\s*\z', 0):
view.insert(edit, view.size(), "\n")
link_spans = []
for sel in view.sel():
scope_name = view.scope_name(sel.b)
if not hasScope(scope_name, 'meta.link.inline.markdown'):
continue
start = findScopeFrom(view, sel.b, marker_begin_scope_name, backwards=True)
end = findScopeFrom(view, sel.b, 'punctuation.definition.metadata.markdown', char=')') + 1
text = view.substr(sublime.Region(start, end))
m = re.match(pattern, text)
if m is None:
continue
text = m.group(1)
link = m.group(2)
link_span = sublime.Region(start + m.span(2)[0] - 1, start + m.span(2)[1] + 1)
if is_url(link):
link = mangle_url(link)
if len(link) > 0:
if name is None:
# If link already exists, reuse existing reference
suggested_name = check_for_link(view, link)
if suggested_name is None:
is_image = view.substr(start - 1) == '!' if start > 0 else False
suggested_name = suggest_default_link_name(text, is_image)
_name = name if name is not None else suggested_name
link_spans.append((link_span, _name, _name == text))
offset = 0
for link_span in link_spans:
_link_span = sublime.Region(link_span[0].a + offset, link_span[0].b + offset)
offset -= convert2ref(view, edit, _link_span, link_span[1], link_span[2])
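# Offset bookkeeping (illustrative): each conversion shrinks the buffer ahead of the
# remaining spans (the reference definitions are appended after them, at the end of
# the file), so every stored span is shifted by the accumulated offset before
# convert2ref runs on it.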
class ConvertInlineLinksToReferencesCommand(MDETextCommand):
"""Convert inline links to references."""
def run(self, edit):
"""Run command callback."""
view = self.view
pattern = r"(?<=\]\()(?!#)([^\)]+)(?=\))"
_sel = []
for sel in view.sel():
_sel.append(sel)
view.sel().clear()
view.sel().add_all(view.find_all(pattern))
view.run_command('convert_inline_link_to_reference')
| mit | 3,459,411,057,167,930,400 | 38.521355 | 242 | 0.543767 | false |
gdelpierre/ansible-modules-core | utilities/helper/meta.py | 11 | 3274 | #!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2016, Ansible, a Red Hat company
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
module: meta
short_description: Execute Ansible 'actions'
version_added: "1.2"
description:
- Meta tasks are a special kind of task which can influence Ansible internal execution or state. Prior to Ansible 2.0,
      the only meta option available was `flush_handlers`. As of 2.2, there are six meta tasks which can be used.
Meta tasks can be used anywhere within your playbook.
options:
free_form:
description:
- This module takes a free form command, as a string. There's not an actual option named "free form". See the examples!
- "C(flush_handlers) makes Ansible run any handler tasks which have thus far been notified. Ansible inserts these tasks internally at certain points to implicitly trigger handler runs (after pre/post tasks, the final role execution, and the main tasks section of your plays)."
- "C(refresh_inventory) (added in 2.0) forces the reload of the inventory, which in the case of dynamic inventory scripts means they will be re-executed. This is mainly useful when additional hosts are created and users wish to use them instead of using the `add_host` module."
- "C(noop) (added in 2.0) This literally does 'nothing'. It is mainly used internally and not recommended for general use."
- "C(clear_facts) (added in 2.1) causes the gathered facts for the hosts specified in the play's list of hosts to be cleared, including the fact cache."
- "C(clear_host_errors) (added in 2.1) clears the failed state (if any) from hosts specified in the play's list of hosts."
- "C(end_play) (added in 2.2) causes the play to end without failing the host."
choices: ['noop', 'flush_handlers', 'refresh_inventory', 'clear_facts', 'clear_host_errors', 'end_play']
required: true
default: null
notes:
    - meta is not really a module nor an action_plugin; as such, it cannot be overwritten.
author:
- "Ansible Core Team"
'''
EXAMPLES = '''
# force all notified handlers to run at this point, not waiting for normal sync points
- template: src=new.j2 dest=/etc/config.txt
notify: myhandler
- meta: flush_handlers
# reload inventory, useful with dynamic inventories when play makes changes to the existing hosts
- cloud_guest: name=newhost state=present # this is fake module
- meta: refresh_inventory
# clear gathered facts from all currently targeted hosts
- meta: clear_facts
# bring host back to play after failure
- copy: src=file dest=/etc/file
remote_user: imightnothavepermission
- meta: clear_host_errors
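# end the play early for the remaining targeted hosts without marking them failed
# (illustrative usage; end_play is available as of Ansible 2.2)
- meta: end_play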
'''
| gpl-3.0 | 4,374,397,211,470,802,400 | 49.369231 | 285 | 0.733048 | false |
tangentlabs/wagtail | wagtail/wagtailsearch/tests/test_backends.py | 18 | 7650 | import unittest
import time
from django.test import TestCase
from django.test.utils import override_settings
from django.conf import settings
from django.core import management
from django.utils.six import StringIO
from wagtail.tests.utils import WagtailTestUtils
from wagtail.tests.search import models
from wagtail.wagtailsearch.backends import get_search_backend, get_search_backends, InvalidSearchBackendError
from wagtail.wagtailsearch.backends.db import DBSearch
class BackendTests(WagtailTestUtils):
# To test a specific backend, subclass BackendTests and define self.backend_path.
def setUp(self):
# Search WAGTAILSEARCH_BACKENDS for an entry that uses the given backend path
for backend_name, backend_conf in settings.WAGTAILSEARCH_BACKENDS.items():
if backend_conf['BACKEND'] == self.backend_path:
self.backend = get_search_backend(backend_name)
self.backend_name = backend_name
break
else:
# no conf entry found - skip tests for this backend
raise unittest.SkipTest("No WAGTAILSEARCH_BACKENDS entry for the backend %s" % self.backend_path)
self.load_test_data()
def load_test_data(self):
# Reset the index
self.backend.reset_index()
self.backend.add_type(models.SearchTest)
self.backend.add_type(models.SearchTestChild)
# Create a test database
testa = models.SearchTest()
testa.title = "Hello World"
testa.save()
self.backend.add(testa)
self.testa = testa
testb = models.SearchTest()
testb.title = "Hello"
testb.live = True
testb.save()
self.backend.add(testb)
self.testb = testb
testc = models.SearchTestChild()
testc.title = "Hello"
testc.live = True
testc.save()
self.backend.add(testc)
self.testc = testc
testd = models.SearchTestChild()
testd.title = "World"
testd.save()
self.backend.add(testd)
self.testd = testd
# Refresh the index
self.backend.refresh_index()
def test_blank_search(self):
results = self.backend.search("", models.SearchTest)
self.assertEqual(set(results), set())
def test_search(self):
results = self.backend.search("Hello", models.SearchTest)
self.assertEqual(set(results), {self.testa, self.testb, self.testc.searchtest_ptr})
results = self.backend.search("World", models.SearchTest)
self.assertEqual(set(results), {self.testa, self.testd.searchtest_ptr})
def test_callable_indexed_field(self):
results = self.backend.search("Callable", models.SearchTest)
self.assertEqual(set(results), {self.testa, self.testb, self.testc.searchtest_ptr, self.testd.searchtest_ptr})
def test_filters(self):
results = self.backend.search(None, models.SearchTest, filters=dict(live=True))
self.assertEqual(set(results), {self.testb, self.testc.searchtest_ptr})
def test_filters_with_in_lookup(self):
live_page_titles = models.SearchTest.objects.filter(live=True).values_list('title', flat=True)
results = self.backend.search(None, models.SearchTest, filters=dict(title__in=live_page_titles))
self.assertEqual(set(results), {self.testb, self.testc.searchtest_ptr})
def test_single_result(self):
result = self.backend.search(None, models.SearchTest)[0]
self.assertIsInstance(result, models.SearchTest)
def test_sliced_results(self):
sliced_results = self.backend.search(None, models.SearchTest)[1:3]
self.assertEqual(len(sliced_results), 2)
for result in sliced_results:
self.assertIsInstance(result, models.SearchTest)
def test_child_model(self):
results = self.backend.search(None, models.SearchTestChild)
self.assertEqual(set(results), {self.testc, self.testd})
def test_delete(self):
# Delete one of the objects
self.backend.delete(self.testa)
self.testa.delete()
self.backend.refresh_index()
results = self.backend.search(None, models.SearchTest)
self.assertEqual(set(results), {self.testb, self.testc.searchtest_ptr, self.testd.searchtest_ptr})
def test_update_index_command(self):
# Reset the index, this should clear out the index
self.backend.reset_index()
        # Give the search backend (e.g. Elasticsearch) some time to catch up...
time.sleep(1)
results = self.backend.search(None, models.SearchTest)
self.assertEqual(set(results), set())
# Run update_index command
with self.ignore_deprecation_warnings(): # ignore any DeprecationWarnings thrown by models with old-style indexed_fields definitions
management.call_command('update_index', backend_name=self.backend_name, interactive=False, stdout=StringIO())
results = self.backend.search(None, models.SearchTest)
self.assertEqual(set(results), {self.testa, self.testb, self.testc.searchtest_ptr, self.testd.searchtest_ptr})
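# Minimal sketch (illustrative, not part of the original test suite): a concrete
# subclass wiring BackendTests to the database backend, following the subclassing
# pattern described at the top of the class.
#
# class TestDBBackend(BackendTests, TestCase):
#     backend_path = 'wagtail.wagtailsearch.backends.db'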
@override_settings(
WAGTAILSEARCH_BACKENDS={
'default': {'BACKEND': 'wagtail.wagtailsearch.backends.db'}
}
)
class TestBackendLoader(TestCase):
def test_import_by_name(self):
db = get_search_backend(backend='default')
self.assertIsInstance(db, DBSearch)
def test_import_by_path(self):
db = get_search_backend(backend='wagtail.wagtailsearch.backends.db')
self.assertIsInstance(db, DBSearch)
def test_import_by_full_path(self):
db = get_search_backend(backend='wagtail.wagtailsearch.backends.db.DBSearch')
self.assertIsInstance(db, DBSearch)
def test_nonexistent_backend_import(self):
self.assertRaises(InvalidSearchBackendError, get_search_backend, backend='wagtail.wagtailsearch.backends.doesntexist')
def test_invalid_backend_import(self):
self.assertRaises(InvalidSearchBackendError, get_search_backend, backend="I'm not a backend!")
def test_get_search_backends(self):
backends = list(get_search_backends())
self.assertEqual(len(backends), 1)
self.assertIsInstance(backends[0], DBSearch)
@override_settings(
WAGTAILSEARCH_BACKENDS={
'default': {
'BACKEND': 'wagtail.wagtailsearch.backends.db'
},
'another-backend': {
'BACKEND': 'wagtail.wagtailsearch.backends.db'
},
}
)
def test_get_search_backends_multiple(self):
backends = list(get_search_backends())
self.assertEqual(len(backends), 2)
def test_get_search_backends_with_auto_update(self):
backends = list(get_search_backends(with_auto_update=True))
# Auto update is the default
self.assertEqual(len(backends), 1)
@override_settings(
WAGTAILSEARCH_BACKENDS={
'default': {
'BACKEND': 'wagtail.wagtailsearch.backends.db',
'AUTO_UPDATE': False,
},
}
)
def test_get_search_backends_with_auto_update_disabled(self):
backends = list(get_search_backends(with_auto_update=True))
self.assertEqual(len(backends), 0)
@override_settings(
WAGTAILSEARCH_BACKENDS={
'default': {
'BACKEND': 'wagtail.wagtailsearch.backends.db',
'AUTO_UPDATE': False,
},
}
)
def test_get_search_backends_without_auto_update_disabled(self):
backends = list(get_search_backends())
self.assertEqual(len(backends), 1)
| bsd-3-clause | -3,600,358,642,008,149,000 | 35.428571 | 141 | 0.657255 | false |
kxliugang/edx-platform | common/djangoapps/util/model_utils.py | 31 | 6782 | """
Utilities for Django models.
"""
import unicodedata
import re
from eventtracking import tracker
from django.conf import settings
from django.utils.encoding import force_unicode
from django.utils.safestring import mark_safe
from django_countries.fields import Country
# The setting name used for events when "settings" (account settings, preferences, profile information) change.
USER_SETTINGS_CHANGED_EVENT_NAME = u'edx.user.settings.changed'
def get_changed_fields_dict(instance, model_class):
"""
Helper method for tracking field changes on a model.
Given a model instance and class, return a dict whose keys are that
instance's fields which differ from the last saved ones and whose values
are the old values of those fields. Related fields are not considered.
Args:
instance (Model instance): the model instance with changes that are
being tracked
model_class (Model class): the class of the model instance we are
tracking
Returns:
dict: a mapping of field names to current database values of those
fields, or an empty dict if the model is new
"""
try:
old_model = model_class.objects.get(pk=instance.pk)
except model_class.DoesNotExist:
# Object is new, so fields haven't technically changed. We'll return
# an empty dict as a default value.
return {}
else:
field_names = [
field[0].name for field in model_class._meta.get_fields_with_model()
]
changed_fields = {
field_name: getattr(old_model, field_name) for field_name in field_names
if getattr(old_model, field_name) != getattr(instance, field_name)
}
return changed_fields
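# Minimal usage sketch (illustrative; ``SomeModel`` and the handler below are
# hypothetical, not part of this module): capture the old values in a pre_save
# handler so that emit_field_changed_events() can report them after the save.
#
# from django.db.models.signals import pre_save
# from django.dispatch import receiver
#
# @receiver(pre_save, sender=SomeModel)
# def capture_changed_fields(sender, instance, **kwargs):
#     instance._changed_fields = get_changed_fields_dict(instance, sender)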
def emit_field_changed_events(instance, user, db_table, excluded_fields=None, hidden_fields=None):
"""Emits a settings changed event for each field that has changed.
Note that this function expects that a `_changed_fields` dict has been set
    as an attribute on `instance` (see `get_changed_fields_dict`).
Args:
instance (Model instance): the model instance that is being saved
user (User): the user that this instance is associated with
db_table (str): the name of the table that we're modifying
excluded_fields (list): a list of field names for which events should
not be emitted
hidden_fields (list): a list of field names specifying fields whose
values should not be included in the event (None will be used
instead)
Returns:
None
"""
def clean_field(field_name, value):
"""
Prepare a field to be emitted in a JSON serializable format. If
`field_name` is a hidden field, return None.
"""
if field_name in hidden_fields:
return None
# Country is not JSON serializable. Return the country code.
if isinstance(value, Country):
if value.code:
return value.code
else:
return None
return value
excluded_fields = excluded_fields or []
hidden_fields = hidden_fields or []
changed_fields = getattr(instance, '_changed_fields', {})
for field_name in changed_fields:
if field_name not in excluded_fields:
old_value = clean_field(field_name, changed_fields[field_name])
new_value = clean_field(field_name, getattr(instance, field_name))
emit_setting_changed_event(user, db_table, field_name, old_value, new_value)
# Remove the now inaccurate _changed_fields attribute.
if hasattr(instance, '_changed_fields'):
del instance._changed_fields
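# Companion sketch (illustrative, same hypothetical ``SomeModel`` as above): emit the
# captured changes from a post_save handler, excluding bookkeeping columns and hiding
# sensitive ones.
#
# from django.db.models.signals import post_save
#
# @receiver(post_save, sender=SomeModel)
# def report_changed_fields(sender, instance, **kwargs):
#     emit_field_changed_events(
#         instance, instance.user, 'myapp_somemodel',
#         excluded_fields=['modified'], hidden_fields=['password'])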
def truncate_fields(old_value, new_value):
"""
Truncates old_value and new_value for analytics event emission if necessary.
Args:
old_value(obj): the value before the change
new_value(obj): the new value being saved
Returns:
a dictionary with the following fields:
'old': the truncated old value
'new': the truncated new value
'truncated': the list of fields that have been truncated
"""
# Compute the maximum value length so that two copies can fit into the maximum event size
# in addition to all the other fields recorded.
max_value_length = settings.TRACK_MAX_EVENT / 4
serialized_old_value, old_was_truncated = _get_truncated_setting_value(old_value, max_length=max_value_length)
serialized_new_value, new_was_truncated = _get_truncated_setting_value(new_value, max_length=max_value_length)
truncated_values = []
if old_was_truncated:
truncated_values.append("old")
if new_was_truncated:
truncated_values.append("new")
return {'old': serialized_old_value, 'new': serialized_new_value, 'truncated': truncated_values}
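# For example (illustrative, assuming settings.TRACK_MAX_EVENT == 2000): calling
# truncate_fields('x' * 1000, 'short') keeps only the first 500 characters of the
# old value and returns {'old': 'x' * 500, 'new': 'short', 'truncated': ['old']}.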
def emit_setting_changed_event(user, db_table, setting_name, old_value, new_value):
"""Emits an event for a change in a setting.
Args:
user (User): the user that this setting is associated with.
db_table (str): the name of the table that we're modifying.
setting_name (str): the name of the setting being changed.
old_value (object): the value before the change.
new_value (object): the new value being saved.
Returns:
None
"""
truncated_fields = truncate_fields(old_value, new_value)
truncated_fields['setting'] = setting_name
truncated_fields['user_id'] = user.id
truncated_fields['table'] = db_table
tracker.emit(
USER_SETTINGS_CHANGED_EVENT_NAME,
truncated_fields
)
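# Sketch of the resulting event (illustrative values): renaming a user emits
# tracker.emit('edx.user.settings.changed', {
#     'setting': 'username', 'old': 'old_name', 'new': 'new_name',
#     'truncated': [], 'user_id': user.id, 'table': 'auth_user'})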
def _get_truncated_setting_value(value, max_length=None):
"""
Returns the truncated form of a setting value.
Returns:
truncated_value (object): the possibly truncated version of the value.
was_truncated (bool): returns true if the serialized value was truncated.
"""
if isinstance(value, basestring) and max_length is not None and len(value) > max_length:
return value[0:max_length], True
else:
return value, False
# Taken from the Django 1.8 source code because it is not available in Django 1.4
def slugify(value):
"""Converts value into a string suitable for readable URLs.
Converts to ASCII. Converts spaces to hyphens. Removes characters that
aren't alphanumerics, underscores, or hyphens. Converts to lowercase.
Also strips leading and trailing whitespace.
Args:
value (string): String to slugify.
"""
value = force_unicode(value)
value = unicodedata.normalize('NFKD', value).encode('ascii', 'ignore').decode('ascii')
value = re.sub(r'[^\w\s-]', '', value).strip().lower()
return mark_safe(re.sub(r'[-\s]+', '-', value))
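# Examples (illustrative):
#     slugify(u" Héllo, Wörld! ") -> u'hello-world'
#     slugify(u'Django 1.8') -> u'django-18'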
| agpl-3.0 | -2,166,354,602,985,927,700 | 35.858696 | 114 | 0.664258 | false |