repo_name (string, 5-100 chars) | path (string, 4-254 chars) | copies (string, 1-5 chars) | size (string, 4-7 chars) | content (string, 681-1M chars) | license (string, 15 classes) | hash (int64) | line_mean (float64, 3.5-100) | line_max (int64, 15-1k) | alpha_frac (float64, 0.25-0.97) | autogenerated (bool, 1 class) | ratio (float64, 1.5-8.15) | config_test (bool, 2 classes) | has_no_keywords (bool, 2 classes) | few_assignments (bool, 1 class)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
kelseyoo14/Wander | venv_2_7/lib/python2.7/site-packages/numpy/fft/info.py | 34 | 7236 | """
Discrete Fourier Transform (:mod:`numpy.fft`)
=============================================
.. currentmodule:: numpy.fft
Standard FFTs
-------------
.. autosummary::
:toctree: generated/
fft Discrete Fourier transform.
ifft Inverse discrete Fourier transform.
fft2 Discrete Fourier transform in two dimensions.
ifft2 Inverse discrete Fourier transform in two dimensions.
fftn Discrete Fourier transform in N-dimensions.
ifftn Inverse discrete Fourier transform in N dimensions.
Real FFTs
---------
.. autosummary::
:toctree: generated/
rfft Real discrete Fourier transform.
irfft Inverse real discrete Fourier transform.
rfft2 Real discrete Fourier transform in two dimensions.
irfft2 Inverse real discrete Fourier transform in two dimensions.
rfftn Real discrete Fourier transform in N dimensions.
irfftn Inverse real discrete Fourier transform in N dimensions.
Hermitian FFTs
--------------
.. autosummary::
:toctree: generated/
hfft Hermitian discrete Fourier transform.
ihfft Inverse Hermitian discrete Fourier transform.
Helper routines
---------------
.. autosummary::
:toctree: generated/
fftfreq Discrete Fourier Transform sample frequencies.
rfftfreq DFT sample frequencies (for usage with rfft, irfft).
fftshift Shift zero-frequency component to center of spectrum.
ifftshift Inverse of fftshift.
Background information
----------------------
Fourier analysis is fundamentally a method for expressing a function as a
sum of periodic components, and for recovering the function from those
components. When both the function and its Fourier transform are
replaced with discretized counterparts, it is called the discrete Fourier
transform (DFT). The DFT has become a mainstay of numerical computing in
part because of a very fast algorithm for computing it, called the Fast
Fourier Transform (FFT), which was known to Gauss (1805) and was brought
to light in its current form by Cooley and Tukey [CT]_. Press et al. [NR]_
provide an accessible introduction to Fourier analysis and its
applications.
Because the discrete Fourier transform separates its input into
components that contribute at discrete frequencies, it has a great number
of applications in digital signal processing, e.g., for filtering, and in
this context the discretized input to the transform is customarily
referred to as a *signal*, which exists in the *time domain*. The output
is called a *spectrum* or *transform* and exists in the *frequency
domain*.
Implementation details
----------------------
There are many ways to define the DFT, varying in the sign of the
exponent, normalization, etc. In this implementation, the DFT is defined
as
.. math::
A_k = \\sum_{m=0}^{n-1} a_m \\exp\\left\\{-2\\pi i{mk \\over n}\\right\\}
\\qquad k = 0,\\ldots,n-1.
The DFT is in general defined for complex inputs and outputs, and a
single-frequency component at linear frequency :math:`f` is
represented by a complex exponential
:math:`a_m = \\exp\\{2\\pi i\\,f m\\Delta t\\}`, where :math:`\\Delta t`
is the sampling interval.
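As a small illustration (added here, not part of the original text), a pure
complex exponential at an integer frequency shows up in exactly one output
bin of the transform:
>>> import numpy as np
>>> n, f = 32, 5
>>> m = np.arange(n)
>>> a = np.exp(2j * np.pi * f * m / n)
>>> int(np.argmax(np.abs(np.fft.fft(a))))
5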
The values in the result follow so-called "standard" order: If ``A =
fft(a, n)``, then ``A[0]`` contains the zero-frequency term (the sum of
the signal), which is always purely real for real inputs. Then ``A[1:n/2]``
contains the positive-frequency terms, and ``A[n/2+1:]`` contains the
negative-frequency terms, in order of decreasingly negative frequency.
For an even number of input points, ``A[n/2]`` represents both positive and
negative Nyquist frequency, and is also purely real for real input. For
an odd number of input points, ``A[(n-1)/2]`` contains the largest positive
frequency, while ``A[(n+1)/2]`` contains the largest negative frequency.
The routine ``np.fft.fftfreq(n)`` returns an array giving the frequencies
of corresponding elements in the output. The routine
``np.fft.fftshift(A)`` shifts transforms and their frequencies to put the
zero-frequency components in the middle, and ``np.fft.ifftshift(A)`` undoes
that shift.
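For instance (an added illustration, not original text), for an 8-point
transform the frequency ordering and the effect of ``fftshift`` are:
>>> (np.fft.fftfreq(8) * 8).astype(int)
array([ 0,  1,  2,  3, -4, -3, -2, -1])
>>> (np.fft.fftshift(np.fft.fftfreq(8)) * 8).astype(int)
array([-4, -3, -2, -1,  0,  1,  2,  3])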
When the input `a` is a time-domain signal and ``A = fft(a)``, ``np.abs(A)``
is its amplitude spectrum and ``np.abs(A)**2`` is its power spectrum.
The phase spectrum is obtained by ``np.angle(A)``.
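A tiny example (added for illustration): the zero-frequency amplitude of a
real signal equals the sum of its samples, as defined above.
>>> a = np.array([1.0, 2.0, 1.0, 0.0])
>>> A = np.fft.fft(a)
>>> bool(np.isclose(np.abs(A)[0], a.sum()))
True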
The inverse DFT is defined as
.. math::
a_m = \\frac{1}{n}\\sum_{k=0}^{n-1}A_k\\exp\\left\\{2\\pi i{mk\\over n}\\right\\}
\\qquad m = 0,\\ldots,n-1.
It differs from the forward transform by the sign of the exponential
argument and the default normalization by :math:`1/n`.
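A one-line sanity check (illustrative only) that ``ifft`` inverts ``fft``:
>>> x = np.random.rand(10) + 1j * np.random.rand(10)
>>> np.allclose(np.fft.ifft(np.fft.fft(x)), x)
True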
Normalization
-------------
The default normalization has the direct transforms unscaled and the inverse
transforms are scaled by :math:`1/n`. It is possible to obtain unitary
transforms by setting the keyword argument ``norm`` to ``"ortho"`` (default is
`None`) so that both direct and inverse transforms will be scaled by
:math:`1/\\sqrt{n}`.
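A quick check (added here as an illustration) that the ``"ortho"`` scaling
is unitary, i.e. that it preserves the energy of the signal:
>>> x = np.random.rand(16)
>>> X = np.fft.fft(x, norm="ortho")
>>> np.allclose(np.sum(np.abs(x)**2), np.sum(np.abs(X)**2))
True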
Real and Hermitian transforms
-----------------------------
When the input is purely real, its transform is Hermitian, i.e., the
component at frequency :math:`f_k` is the complex conjugate of the
component at frequency :math:`-f_k`, which means that for real
inputs there is no information in the negative frequency components that
is not already available from the positive frequency components.
The family of `rfft` functions is
designed to operate on real inputs, and exploits this symmetry by
computing only the positive frequency components, up to and including the
Nyquist frequency. Thus, ``n`` input points produce ``n/2+1`` complex
output points. The inverses of this family assume the same symmetry of
their input, and for an output of ``n`` points use ``n/2+1`` input points.
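For example (added for illustration), 8 real input points give ``8/2+1 = 5``
complex output points, and the round trip recovers the original signal:
>>> x = np.random.rand(8)
>>> np.fft.rfft(x).shape
(5,)
>>> np.allclose(np.fft.irfft(np.fft.rfft(x), 8), x)
True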
Correspondingly, when the spectrum is purely real, the signal is
Hermitian. The `hfft` family of functions exploits this symmetry by
using ``n/2+1`` complex points in the input (time) domain for ``n`` real
points in the frequency domain.
In higher dimensions, FFTs are used, e.g., for image analysis and
filtering. The computational efficiency of the FFT means that it can
also be a faster way to compute large convolutions, using the property
that a convolution in the time domain is equivalent to a point-by-point
multiplication in the frequency domain.
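A small sketch (not part of the original text) of FFT-based convolution;
zero-padding to length ``len(a) + len(b) - 1`` makes the circular result
match the linear convolution:
>>> a, b = np.random.rand(16), np.random.rand(16)
>>> n = len(a) + len(b) - 1
>>> via_fft = np.fft.irfft(np.fft.rfft(a, n) * np.fft.rfft(b, n), n)
>>> np.allclose(via_fft, np.convolve(a, b))
True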
Higher dimensions
-----------------
In two dimensions, the DFT is defined as
.. math::
A_{kl} = \\sum_{m=0}^{M-1} \\sum_{n=0}^{N-1}
a_{mn}\\exp\\left\\{-2\\pi i \\left({mk\\over M}+{nl\\over N}\\right)\\right\\}
\\qquad k = 0, \\ldots, M-1;\\quad l = 0, \\ldots, N-1,
which extends in the obvious way to higher dimensions, and the inverses
in higher dimensions also extend in the same way.
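For instance (illustrative only), a two-dimensional transform and its
inverse round-trip an arbitrary array:
>>> img = np.random.rand(4, 6)
>>> np.allclose(np.fft.ifft2(np.fft.fft2(img)), img)
True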
References
----------
.. [CT] Cooley, James W., and John W. Tukey, 1965, "An algorithm for the
machine calculation of complex Fourier series," *Math. Comput.*
19: 297-301.
.. [NR] Press, W., Teukolsky, S., Vetterline, W.T., and Flannery, B.P.,
2007, *Numerical Recipes: The Art of Scientific Computing*, ch.
12-13. Cambridge Univ. Press, Cambridge, UK.
Examples
--------
For examples, see the various functions.
"""
from __future__ import division, absolute_import, print_function
depends = ['core']
| artistic-2.0 | -7,611,868,567,570,339,000 | 37.695187 | 84 | 0.711443 | false | 3.684318 | false | false | false |
unix-beard/gloria | service/decorator.py | 1 | 3222 | import inspect
import logging
from gloria.service.runnable import Task, Service
on_task_wrapped = None
on_service_wrapped = None
class _WrapperHelper(object):
def __call__(self, klass, base, _enabled, _autostart, _respawn):
return type(klass.__name__, (base,), dict(klass.__dict__, enabled=_enabled, autostart=_autostart, respawn=_respawn))
def task(enabled=True, autostart=False, respawn=False):
"""Mark class as a task"""
def _task(klass):
class Wrapper(object):
wrapped_task = _WrapperHelper()(klass, Task, enabled, autostart, respawn)
# Notify whoever is interested in the task being wrapped
if on_task_wrapped is not None:
on_task_wrapped(Wrapper.wrapped_task)
return Wrapper
return _task
class _ServiceWrapperHelper:
def __call__(self, klass, service_dir=''):
if klass == Service:
return type(service_dir, (object,), dict(klass.__dict__))
return type(klass.__name__, (Service,), dict(klass.__dict__))
def service(tasks=[]):
"""Mark class as a service"""
#####################################
# TODO: add autostart=False parameter
#####################################
def _service(klass, service_dir=''):
class Wrapper:
wrapped_class = _ServiceWrapperHelper()(klass, service_dir)
wrapped_tasks = tasks
# Notify whoever is interested in the class being wrapped
if on_service_wrapped is not None:
on_service_wrapped(Wrapper)
return Wrapper
return _service
class Property(object):
"""Expose task's property to the outside world"""
def __init__(self, fget=None, fset=None, doc=None):
self.fget = fget
self.fset = fset
if doc is None and fget is not None:
doc = fget.__doc__
self.__doc__ = doc
def __get__(self, obj, objtype=None):
if obj is None:
return self
if self.fget is None:
raise AttributeError('unreadable attribute')
return self.fget(obj)
def __set__(self, obj, value):
if self.fset is None:
raise AttributeError('cannot set attribute')
self.fset(obj, value)
def __str__(self):
return '{0}:{1}'.format(self.fget.__name__, self.__doc__)
def getter(self, fget):
return type(self)(fget, self.fset, self.__doc__)
def setter(self, fset):
return type(self)(self.fget, fset, self.__doc__)
class Command(object):
"""Expose task's command to the outside world"""
def __init__(self, func=None):
self.func = func
# Number of arguments this function takes ('self' is counted as well)
self.argc = len(inspect.getargspec(func)[0])
logging.debug('Command __init__: func={0} argc={1}'.format(func.__name__, self.argc))
def __call__(self, obj, args=''):
if obj is None:
return self
if self.func is None:
raise AttributeError('uncallable method')
if self.argc == 1:
return self.func(obj)
return self.func(obj, args)
def __str__(self):
return '{0}:{1}'.format(self.func.__name__, self.func.__doc__)
| mit | -8,969,074,680,863,516,000 | 29.685714 | 124 | 0.581006 | false | 3.96798 | false | false | false |
jmanday/Master | TFM/library/boost_1_63_0/libs/geometry/doc/make_qbk.py | 4 | 6751 | #! /usr/bin/env python
# -*- coding: utf-8 -*-
# ===========================================================================
# Copyright (c) 2007-2012 Barend Gehrels, Amsterdam, the Netherlands.
# Copyright (c) 2008-2012 Bruno Lalande, Paris, France.
# Copyright (c) 2009-2012 Mateusz Loskot ([email protected]), London, UK
#
# Use, modification and distribution is subject to the Boost Software License,
# Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at
# http://www.boost.org/LICENSE_1_0.txt)
# ============================================================================
import os, sys
script_dir = os.path.dirname(__file__)
os.chdir(os.path.abspath(script_dir))
print("Boost.Geometry is making .qbk files in %s" % os.getcwd())
if 'DOXYGEN' in os.environ:
doxygen_cmd = os.environ['DOXYGEN']
else:
doxygen_cmd = 'doxygen'
if 'DOXYGEN_XML2QBK' in os.environ:
doxygen_xml2qbk_cmd = os.environ['DOXYGEN_XML2QBK']
elif '--doxygen-xml2qbk' in sys.argv:
doxygen_xml2qbk_cmd = sys.argv[sys.argv.index('--doxygen-xml2qbk')+1]
else:
doxygen_xml2qbk_cmd = 'doxygen_xml2qbk'
os.environ['PATH'] = os.environ['PATH']+os.pathsep+os.path.dirname(doxygen_xml2qbk_cmd)
doxygen_xml2qbk_cmd = os.path.basename(doxygen_xml2qbk_cmd)
cmd = doxygen_xml2qbk_cmd
cmd = cmd + " --xml doxy/doxygen_output/xml/%s.xml"
cmd = cmd + " --start_include boost/geometry/"
cmd = cmd + " --convenience_header_path ../../../boost/geometry/"
cmd = cmd + " --convenience_headers geometry.hpp,geometries/geometries.hpp"
cmd = cmd + " --skip_namespace boost::geometry::"
cmd = cmd + " --copyright src/copyright_block.qbk"
cmd = cmd + " --output_member_variables false"
cmd = cmd + " > generated/%s.qbk"
def run_command(command):
if os.system(command) != 0:
raise Exception("Error running %s" % command)
def remove_all_files(dir):
if os.path.exists(dir):
for f in os.listdir(dir):
os.remove(dir+f)
def call_doxygen():
os.chdir("doxy")
remove_all_files("doxygen_output/xml/")
run_command(doxygen_cmd)
os.chdir("..")
def group_to_quickbook(section):
run_command(cmd % ("group__" + section.replace("_", "__"), section))
def model_to_quickbook(section):
run_command(cmd % ("classboost_1_1geometry_1_1model_1_1" + section.replace("_", "__"), section))
def model_to_quickbook2(classname, section):
run_command(cmd % ("classboost_1_1geometry_1_1model_1_1" + classname, section))
def struct_to_quickbook(section):
run_command(cmd % ("structboost_1_1geometry_1_1" + section.replace("_", "__"), section))
def class_to_quickbook(section):
run_command(cmd % ("classboost_1_1geometry_1_1" + section.replace("_", "__"), section))
def class_to_quickbook2(classname, section):
run_command(cmd % ("classboost_1_1geometry_1_1" + classname, section))
def strategy_to_quickbook(section):
p = section.find("::")
ns = section[:p]
strategy = section[p+2:]
run_command(cmd % ("classboost_1_1geometry_1_1strategy_1_1"
+ ns.replace("_", "__") + "_1_1" + strategy.replace("_", "__"),
ns + "_" + strategy))
def cs_to_quickbook(section):
run_command(cmd % ("structboost_1_1geometry_1_1cs_1_1" + section.replace("_", "__"), section))
call_doxygen()
algorithms = ["append", "assign", "make", "clear"
, "area", "buffer", "centroid", "convert", "correct", "covered_by"
, "convex_hull", "crosses", "difference", "disjoint", "distance"
, "envelope", "equals", "expand", "for_each", "is_empty"
, "is_simple", "is_valid", "intersection", "intersects", "length"
, "num_geometries", "num_interior_rings", "num_points"
, "num_segments", "overlaps", "perimeter", "relate", "relation"
, "reverse", "simplify", "sym_difference", "touches"
, "transform", "union", "unique", "within"]
access_functions = ["get", "set", "exterior_ring", "interior_rings"
, "num_points", "num_interior_rings", "num_geometries"]
coordinate_systems = ["cartesian", "geographic", "polar", "spherical", "spherical_equatorial"]
core = ["closure", "coordinate_system", "coordinate_type", "cs_tag"
, "dimension", "exception", "interior_type"
, "degree", "radian"
, "is_radian", "point_order"
, "point_type", "ring_type", "tag", "tag_cast" ]
exceptions = ["exception", "centroid_exception"];
iterators = ["circular_iterator", "closing_iterator"
, "ever_circling_iterator"]
models = ["point", "linestring", "box"
, "polygon", "segment", "ring"
, "multi_linestring", "multi_point", "multi_polygon", "referring_segment"]
strategies = ["distance::pythagoras", "distance::pythagoras_box_box"
, "distance::pythagoras_point_box", "distance::haversine"
, "distance::cross_track", "distance::cross_track_point_box"
, "distance::projected_point"
, "within::winding", "within::franklin", "within::crossings_multiply"
, "area::surveyor", "area::huiller"
, "buffer::point_circle", "buffer::point_square"
, "buffer::join_round", "buffer::join_miter"
, "buffer::end_round", "buffer::end_flat"
, "buffer::distance_symmetric", "buffer::distance_asymmetric"
, "buffer::side_straight"
, "centroid::bashein_detmer", "centroid::average"
, "convex_hull::graham_andrew"
, "simplify::douglas_peucker"
, "side::side_by_triangle", "side::side_by_cross_track", "side::spherical_side_formula"
, "transform::inverse_transformer", "transform::map_transformer"
, "transform::rotate_transformer", "transform::scale_transformer"
, "transform::translate_transformer", "transform::ublas_transformer"
]
views = ["box_view", "segment_view"
, "closeable_view", "reversible_view", "identity_view"]
for i in algorithms:
group_to_quickbook(i)
for i in access_functions:
group_to_quickbook(i)
for i in coordinate_systems:
cs_to_quickbook(i)
for i in core:
struct_to_quickbook(i)
for i in exceptions:
class_to_quickbook(i)
for i in iterators:
struct_to_quickbook(i)
for i in models:
model_to_quickbook(i)
for i in strategies:
strategy_to_quickbook(i)
for i in views:
struct_to_quickbook(i)
model_to_quickbook2("d2_1_1point__xy", "point_xy")
group_to_quickbook("arithmetic")
group_to_quickbook("enum")
group_to_quickbook("register")
group_to_quickbook("svg")
class_to_quickbook("svg_mapper")
group_to_quickbook("wkt")
class_to_quickbook2("de9im_1_1matrix", "de9im_matrix")
class_to_quickbook2("de9im_1_1mask", "de9im_mask")
class_to_quickbook2("de9im_1_1static__mask", "de9im_static_mask")
os.chdir("index")
execfile("make_qbk.py")
os.chdir("..")
# Use either bjam or b2 or ../../../b2 (the last should be done on Release branch)
if "--release-build" not in sys.argv:
run_command("b2")
| apache-2.0 | -7,518,914,146,392,926,000 | 33.979275 | 100 | 0.641386 | false | 2.966169 | false | false | false |
npiganeau/odoo | addons/hr_timesheet_invoice/report/hr_timesheet_invoice_report.py | 23 | 9496 | # -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2010 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
from openerp.osv import fields,osv
from openerp.tools.sql import drop_view_if_exists
class report_timesheet_line(osv.osv):
_name = "report.timesheet.line"
_description = "Timesheet Line"
_auto = False
_columns = {
'name': fields.char('Year', required=False, readonly=True),
'user_id': fields.many2one('res.users', 'User', readonly=True),
'date': fields.date('Date', readonly=True),
'day': fields.char('Day', size=128, readonly=True),
'quantity': fields.float('Time', readonly=True),
'cost': fields.float('Cost', readonly=True),
'product_id': fields.many2one('product.product', 'Product',readonly=True),
'account_id': fields.many2one('account.analytic.account', 'Analytic Account', readonly=True),
'general_account_id': fields.many2one('account.account', 'Financial Account', readonly=True),
'invoice_id': fields.many2one('account.invoice', 'Invoiced', readonly=True),
'month': fields.selection([('01','January'), ('02','February'), ('03','March'), ('04','April'), ('05','May'), ('06','June'),
('07','July'), ('08','August'), ('09','September'), ('10','October'), ('11','November'), ('12','December')],'Month', readonly=True),
}
_order = 'name desc,user_id desc'
def init(self, cr):
drop_view_if_exists(cr, 'report_timesheet_line')
cr.execute("""
create or replace view report_timesheet_line as (
select
min(l.id) as id,
l.date as date,
to_char(l.date,'YYYY') as name,
to_char(l.date,'MM') as month,
l.user_id,
to_char(l.date, 'YYYY-MM-DD') as day,
l.invoice_id,
l.product_id,
l.account_id,
l.general_account_id,
sum(l.unit_amount) as quantity,
sum(l.amount) as cost
from
account_analytic_line l
where
l.user_id is not null
group by
l.date,
l.user_id,
l.product_id,
l.account_id,
l.general_account_id,
l.invoice_id
)
""")
class report_timesheet_user(osv.osv):
_name = "report_timesheet.user"
_description = "Timesheet per day"
_auto = False
_columns = {
'name': fields.char('Year', required=False, readonly=True),
'user_id':fields.many2one('res.users', 'User', readonly=True),
'quantity': fields.float('Time', readonly=True),
'cost': fields.float('Cost', readonly=True),
'month':fields.selection([('01','January'), ('02','February'), ('03','March'), ('04','April'), ('05','May'), ('06','June'),
('07','July'), ('08','August'), ('09','September'), ('10','October'), ('11','November'), ('12','December')],'Month', readonly=True),
}
_order = 'name desc,user_id desc'
def init(self, cr):
drop_view_if_exists(cr, 'report_timesheet_user')
cr.execute("""
create or replace view report_timesheet_user as (
select
min(l.id) as id,
to_char(l.date,'YYYY') as name,
to_char(l.date,'MM') as month,
l.user_id,
sum(l.unit_amount) as quantity,
sum(l.amount) as cost
from
account_analytic_line l
where
user_id is not null
group by l.date, to_char(l.date,'YYYY'),to_char(l.date,'MM'), l.user_id
)
""")
class report_timesheet_account(osv.osv):
_name = "report_timesheet.account"
_description = "Timesheet per account"
_auto = False
_columns = {
'name': fields.char('Year', required=False, readonly=True),
'user_id':fields.many2one('res.users', 'User', readonly=True),
'account_id':fields.many2one('account.analytic.account', 'Analytic Account', readonly=True),
'quantity': fields.float('Time', readonly=True),
'month':fields.selection([('01','January'), ('02','February'), ('03','March'), ('04','April'), ('05','May'), ('06','June'),
('07','July'), ('08','August'), ('09','September'), ('10','October'), ('11','November'), ('12','December')],'Month', readonly=True),
}
_order = 'name desc,account_id desc,user_id desc'
def init(self, cr):
drop_view_if_exists(cr, 'report_timesheet_account')
cr.execute("""
create or replace view report_timesheet_account as (
select
min(id) as id,
to_char(create_date, 'YYYY') as name,
to_char(create_date,'MM') as month,
user_id,
account_id,
sum(unit_amount) as quantity
from
account_analytic_line
group by
to_char(create_date, 'YYYY'),to_char(create_date, 'MM'), user_id, account_id
)
""")
class report_timesheet_account_date(osv.osv):
_name = "report_timesheet.account.date"
_description = "Daily timesheet per account"
_auto = False
_columns = {
'name': fields.char('Year', required=False, readonly=True),
'user_id':fields.many2one('res.users', 'User', readonly=True),
'account_id':fields.many2one('account.analytic.account', 'Analytic Account', readonly=True),
'quantity': fields.float('Time', readonly=True),
'month':fields.selection([('01','January'), ('02','February'), ('03','March'), ('04','April'), ('05','May'), ('06','June'),
('07','July'), ('08','August'), ('09','September'), ('10','October'), ('11','November'), ('12','December')],'Month', readonly=True),
}
_order = 'name desc,account_id desc,user_id desc'
def init(self, cr):
drop_view_if_exists(cr, 'report_timesheet_account_date')
cr.execute("""
create or replace view report_timesheet_account_date as (
select
min(id) as id,
to_char(date,'YYYY') as name,
to_char(date,'MM') as month,
user_id,
account_id,
sum(unit_amount) as quantity
from
account_analytic_line
group by
to_char(date,'YYYY'),to_char(date,'MM'), user_id, account_id
)
""")
class report_timesheet_invoice(osv.osv):
_name = "report_timesheet.invoice"
_description = "Costs to invoice"
_auto = False
_columns = {
'user_id':fields.many2one('res.users', 'User', readonly=True),
'account_id':fields.many2one('account.analytic.account', 'Project', readonly=True),
'manager_id':fields.many2one('res.users', 'Manager', readonly=True),
'quantity': fields.float('Time', readonly=True),
'amount_invoice': fields.float('To invoice', readonly=True)
}
_rec_name = 'user_id'
_order = 'user_id desc'
def init(self, cr):
drop_view_if_exists(cr, 'report_timesheet_invoice')
cr.execute("""
create or replace view report_timesheet_invoice as (
select
min(l.id) as id,
l.user_id as user_id,
l.account_id as account_id,
a.user_id as manager_id,
sum(l.unit_amount) as quantity,
sum(l.unit_amount * t.list_price) as amount_invoice
from account_analytic_line l
left join hr_timesheet_invoice_factor f on (l.to_invoice=f.id)
left join account_analytic_account a on (l.account_id=a.id)
left join product_product p on (l.to_invoice=f.id)
left join product_template t on (l.to_invoice=f.id)
where
l.to_invoice is not null and
l.invoice_id is null
group by
l.user_id,
l.account_id,
a.user_id
)
""")
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 | -6,737,287,767,771,691,000 | 42.962963 | 166 | 0.518639 | false | 3.958316 | false | false | false |
jblackburne/scikit-learn | sklearn/neural_network/rbm.py | 46 | 12291 | """Restricted Boltzmann Machine
"""
# Authors: Yann N. Dauphin <[email protected]>
# Vlad Niculae
# Gabriel Synnaeve
# Lars Buitinck
# License: BSD 3 clause
import time
import numpy as np
import scipy.sparse as sp
from ..base import BaseEstimator
from ..base import TransformerMixin
from ..externals.six.moves import xrange
from ..utils import check_array
from ..utils import check_random_state
from ..utils import gen_even_slices
from ..utils import issparse
from ..utils.extmath import safe_sparse_dot
from ..utils.extmath import log_logistic
from ..utils.fixes import expit # logistic function
from ..utils.validation import check_is_fitted
class BernoulliRBM(BaseEstimator, TransformerMixin):
"""Bernoulli Restricted Boltzmann Machine (RBM).
A Restricted Boltzmann Machine with binary visible units and
binary hidden units. Parameters are estimated using Stochastic Maximum
Likelihood (SML), also known as Persistent Contrastive Divergence (PCD)
[2].
The time complexity of this implementation is ``O(d ** 2)`` assuming
d ~ n_features ~ n_components.
Read more in the :ref:`User Guide <rbm>`.
Parameters
----------
n_components : int, optional
Number of binary hidden units.
learning_rate : float, optional
The learning rate for weight updates. It is *highly* recommended
to tune this hyper-parameter. Reasonable values are in the
10**[0., -3.] range.
batch_size : int, optional
Number of examples per minibatch.
n_iter : int, optional
Number of iterations/sweeps over the training dataset to perform
during training.
verbose : int, optional
The verbosity level. The default, zero, means silent mode.
random_state : integer or numpy.RandomState, optional
A random number generator instance to define the state of the
random permutations generator. If an integer is given, it fixes the
seed. Defaults to the global numpy random number generator.
Attributes
----------
intercept_hidden_ : array-like, shape (n_components,)
Biases of the hidden units.
intercept_visible_ : array-like, shape (n_features,)
Biases of the visible units.
components_ : array-like, shape (n_components, n_features)
Weight matrix, where n_features is the number of
visible units and n_components is the number of hidden units.
Examples
--------
>>> import numpy as np
>>> from sklearn.neural_network import BernoulliRBM
>>> X = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
>>> model = BernoulliRBM(n_components=2)
>>> model.fit(X)
BernoulliRBM(batch_size=10, learning_rate=0.1, n_components=2, n_iter=10,
random_state=None, verbose=0)
References
----------
[1] Hinton, G. E., Osindero, S. and Teh, Y. A fast learning algorithm for
deep belief nets. Neural Computation 18, pp 1527-1554.
http://www.cs.toronto.edu/~hinton/absps/fastnc.pdf
[2] Tieleman, T. Training Restricted Boltzmann Machines using
Approximations to the Likelihood Gradient. International Conference
on Machine Learning (ICML) 2008
"""
def __init__(self, n_components=256, learning_rate=0.1, batch_size=10,
n_iter=10, verbose=0, random_state=None):
self.n_components = n_components
self.learning_rate = learning_rate
self.batch_size = batch_size
self.n_iter = n_iter
self.verbose = verbose
self.random_state = random_state
def transform(self, X):
"""Compute the hidden layer activation probabilities, P(h=1|v=X).
Parameters
----------
X : {array-like, sparse matrix} shape (n_samples, n_features)
The data to be transformed.
Returns
-------
h : array, shape (n_samples, n_components)
Latent representations of the data.
"""
check_is_fitted(self, "components_")
X = check_array(X, accept_sparse='csr', dtype=np.float64)
return self._mean_hiddens(X)
def _mean_hiddens(self, v):
"""Computes the probabilities P(h=1|v).
Parameters
----------
v : array-like, shape (n_samples, n_features)
Values of the visible layer.
Returns
-------
h : array-like, shape (n_samples, n_components)
Corresponding mean field values for the hidden layer.
"""
p = safe_sparse_dot(v, self.components_.T)
p += self.intercept_hidden_
return expit(p, out=p)
def _sample_hiddens(self, v, rng):
"""Sample from the distribution P(h|v).
Parameters
----------
v : array-like, shape (n_samples, n_features)
Values of the visible layer to sample from.
rng : RandomState
Random number generator to use.
Returns
-------
h : array-like, shape (n_samples, n_components)
Values of the hidden layer.
"""
p = self._mean_hiddens(v)
return (rng.random_sample(size=p.shape) < p)
def _sample_visibles(self, h, rng):
"""Sample from the distribution P(v|h).
Parameters
----------
h : array-like, shape (n_samples, n_components)
Values of the hidden layer to sample from.
rng : RandomState
Random number generator to use.
Returns
-------
v : array-like, shape (n_samples, n_features)
Values of the visible layer.
"""
p = np.dot(h, self.components_)
p += self.intercept_visible_
expit(p, out=p)
return (rng.random_sample(size=p.shape) < p)
def _free_energy(self, v):
"""Computes the free energy F(v) = - log sum_h exp(-E(v,h)).
Parameters
----------
v : array-like, shape (n_samples, n_features)
Values of the visible layer.
Returns
-------
free_energy : array-like, shape (n_samples,)
The value of the free energy.
"""
return (- safe_sparse_dot(v, self.intercept_visible_)
- np.logaddexp(0, safe_sparse_dot(v, self.components_.T)
+ self.intercept_hidden_).sum(axis=1))
def gibbs(self, v):
"""Perform one Gibbs sampling step.
Parameters
----------
v : array-like, shape (n_samples, n_features)
Values of the visible layer to start from.
Returns
-------
v_new : array-like, shape (n_samples, n_features)
Values of the visible layer after one Gibbs step.
"""
check_is_fitted(self, "components_")
if not hasattr(self, "random_state_"):
self.random_state_ = check_random_state(self.random_state)
h_ = self._sample_hiddens(v, self.random_state_)
v_ = self._sample_visibles(h_, self.random_state_)
return v_
def partial_fit(self, X, y=None):
"""Fit the model to the data X which should contain a partial
segment of the data.
Parameters
----------
X : array-like, shape (n_samples, n_features)
Training data.
Returns
-------
self : BernoulliRBM
The fitted model.
"""
X = check_array(X, accept_sparse='csr', dtype=np.float64)
if not hasattr(self, 'random_state_'):
self.random_state_ = check_random_state(self.random_state)
if not hasattr(self, 'components_'):
self.components_ = np.asarray(
self.random_state_.normal(
0,
0.01,
(self.n_components, X.shape[1])
),
order='F')
if not hasattr(self, 'intercept_hidden_'):
self.intercept_hidden_ = np.zeros(self.n_components, )
if not hasattr(self, 'intercept_visible_'):
self.intercept_visible_ = np.zeros(X.shape[1], )
if not hasattr(self, 'h_samples_'):
self.h_samples_ = np.zeros((self.batch_size, self.n_components))
self._fit(X, self.random_state_)
def _fit(self, v_pos, rng):
"""Inner fit for one mini-batch.
Adjust the parameters to maximize the likelihood of v using
Stochastic Maximum Likelihood (SML).
Parameters
----------
v_pos : array-like, shape (n_samples, n_features)
The data to use for training.
rng : RandomState
Random number generator to use for sampling.
"""
h_pos = self._mean_hiddens(v_pos)
v_neg = self._sample_visibles(self.h_samples_, rng)
h_neg = self._mean_hiddens(v_neg)
lr = float(self.learning_rate) / v_pos.shape[0]
update = safe_sparse_dot(v_pos.T, h_pos, dense_output=True).T
update -= np.dot(h_neg.T, v_neg)
self.components_ += lr * update
self.intercept_hidden_ += lr * (h_pos.sum(axis=0) - h_neg.sum(axis=0))
self.intercept_visible_ += lr * (np.asarray(
v_pos.sum(axis=0)).squeeze() -
v_neg.sum(axis=0))
h_neg[rng.uniform(size=h_neg.shape) < h_neg] = 1.0 # sample binomial
self.h_samples_ = np.floor(h_neg, h_neg)
def score_samples(self, X):
"""Compute the pseudo-likelihood of X.
Parameters
----------
X : {array-like, sparse matrix} shape (n_samples, n_features)
Values of the visible layer. Must be all-boolean (not checked).
Returns
-------
pseudo_likelihood : array-like, shape (n_samples,)
Value of the pseudo-likelihood (proxy for likelihood).
Notes
-----
This method is not deterministic: it computes a quantity called the
free energy on X, then on a randomly corrupted version of X, and
returns the log of the logistic function of the difference.
"""
check_is_fitted(self, "components_")
v = check_array(X, accept_sparse='csr')
rng = check_random_state(self.random_state)
# Randomly corrupt one feature in each sample in v.
ind = (np.arange(v.shape[0]),
rng.randint(0, v.shape[1], v.shape[0]))
if issparse(v):
data = -2 * v[ind] + 1
v_ = v + sp.csr_matrix((data.A.ravel(), ind), shape=v.shape)
else:
v_ = v.copy()
v_[ind] = 1 - v_[ind]
fe = self._free_energy(v)
fe_ = self._free_energy(v_)
return v.shape[1] * log_logistic(fe_ - fe)
def fit(self, X, y=None):
"""Fit the model to the data X.
Parameters
----------
X : {array-like, sparse matrix} shape (n_samples, n_features)
Training data.
Returns
-------
self : BernoulliRBM
The fitted model.
"""
X = check_array(X, accept_sparse='csr', dtype=np.float64)
n_samples = X.shape[0]
rng = check_random_state(self.random_state)
self.components_ = np.asarray(
rng.normal(0, 0.01, (self.n_components, X.shape[1])),
order='F')
self.intercept_hidden_ = np.zeros(self.n_components, )
self.intercept_visible_ = np.zeros(X.shape[1], )
self.h_samples_ = np.zeros((self.batch_size, self.n_components))
n_batches = int(np.ceil(float(n_samples) / self.batch_size))
batch_slices = list(gen_even_slices(n_batches * self.batch_size,
n_batches, n_samples))
verbose = self.verbose
begin = time.time()
for iteration in xrange(1, self.n_iter + 1):
for batch_slice in batch_slices:
self._fit(X[batch_slice], rng)
if verbose:
end = time.time()
print("[%s] Iteration %d, pseudo-likelihood = %.2f,"
" time = %.2fs"
% (type(self).__name__, iteration,
self.score_samples(X).mean(), end - begin))
begin = end
return self
| bsd-3-clause | -1,363,436,273,807,866,000 | 32.673973 | 78 | 0.566105 | false | 3.867527 | false | false | false |
LREN-CHUV/data-factory-airflow-dags | reorganisation_steps/cleanup_all_local.py | 2 | 1473 | """
Reorganisation step: cleanup all local data.
Cleanup the local data (for the whole data-set) created during copy_to_local step.
Configuration variables used:
* :reorganisation:copy_to_local section
* OUTPUT_FOLDER: destination folder for the local copy
"""
from datetime import timedelta
from textwrap import dedent
from airflow import configuration
from airflow.operators.bash_operator import BashOperator
from common_steps import Step
def cleanup_all_local_cfg(dag, upstream_step, step_section=None):
cleanup_folder = configuration.get(step_section, "OUTPUT_FOLDER")
return cleanup_all_local_step(dag, upstream_step, cleanup_folder)
def cleanup_all_local_step(dag, upstream_step, cleanup_folder):
cleanup_local_cmd = dedent("""
rm -rf {{ params["cleanup_folder"] }}/*
""")
cleanup_all_local = BashOperator(
task_id='cleanup_all_local',
bash_command=cleanup_local_cmd,
params={'cleanup_folder': cleanup_folder},
priority_weight=upstream_step.priority_weight,
execution_timeout=timedelta(hours=1),
dag=dag
)
if upstream_step.task:
cleanup_all_local.set_upstream(upstream_step.task)
cleanup_all_local.doc_md = dedent("""\
# Cleanup all local files
Remove locally stored files as they have been already reorganised.
""")
return Step(cleanup_all_local, cleanup_all_local.task_id, upstream_step.priority_weight + 10)
| apache-2.0 | -5,086,110,847,583,015,000 | 26.277778 | 97 | 0.698574 | false | 3.896825 | false | false | false |
ozturkemre/programming-challanges | 02-temperature_converter/temperature_converter.py | 1 | 1147 | print("""Enter 'C' or 'c' for Celsius,
'K' or 'k' for Kelvin,
'F' or 'f' for Fahrenheit\n\n""")
converted=0
fr=input("I want converter from: \n")
value1=input("Enter value: \n")
to=input("to: \n")
try:
value1=float(value1)
if(fr=='C' or fr=='c'):
if(to=='F' or to=='f'):
converted = value1 * 1.8 + 32
elif(to=='K' or to=='k'):
converted = value1 + 273.15
else:
print("you enter different value\n")
exit()
elif(fr=='K' or fr=='k'):
if(to=='C' or to=='c'):
converted=value1-273.15
elif(to=='F' or to=='f'):
converted = (value1 - 273.15) * 1.8 + 32
else:
print("you enter different value\n")
exit()
elif(fr=='F' or fr=='f'):
if(to=='C' or to=='c'):
converted=(value1-32)/1.8
elif(to=='K' or to=='k'):
converted = ((value1 - 32) / 1.8) + 273.15
else:
print("you enter different value\n")
exit()
except ValueError:
print("That was no valid number.")
print("result = {}".format(converted))
| mit | -510,726,133,871,561,400 | 23.934783 | 52 | 0.470793 | false | 3.203911 | false | false | false |
Empeeric/dirometer | django/views/generic/edit.py | 159 | 7457 | from django.forms import models as model_forms
from django.core.exceptions import ImproperlyConfigured
from django.http import HttpResponseRedirect
from django.views.generic.base import TemplateResponseMixin, View
from django.views.generic.detail import (SingleObjectMixin,
SingleObjectTemplateResponseMixin, BaseDetailView)
class FormMixin(object):
"""
A mixin that provides a way to show and handle a form in a request.
"""
initial = {}
form_class = None
success_url = None
def get_initial(self):
"""
Returns the initial data to use for forms on this view.
"""
return self.initial
def get_form_class(self):
"""
Returns the form class to use in this view
"""
return self.form_class
def get_form(self, form_class):
"""
Returns an instance of the form to be used in this view.
"""
return form_class(**self.get_form_kwargs())
def get_form_kwargs(self):
"""
Returns the keyword arguments for instantiating the form.
"""
kwargs = {'initial': self.get_initial()}
if self.request.method in ('POST', 'PUT'):
kwargs.update({
'data': self.request.POST,
'files': self.request.FILES,
})
return kwargs
def get_context_data(self, **kwargs):
return kwargs
def get_success_url(self):
if self.success_url:
url = self.success_url
else:
raise ImproperlyConfigured(
"No URL to redirect to. Provide a success_url.")
return url
def form_valid(self, form):
return HttpResponseRedirect(self.get_success_url())
def form_invalid(self, form):
return self.render_to_response(self.get_context_data(form=form))
class ModelFormMixin(FormMixin, SingleObjectMixin):
"""
A mixin that provides a way to show and handle a modelform in a request.
"""
def get_form_class(self):
"""
Returns the form class to use in this view
"""
if self.form_class:
return self.form_class
else:
if self.model is not None:
# If a model has been explicitly provided, use it
model = self.model
elif hasattr(self, 'object') and self.object is not None:
# If this view is operating on a single object, use
# the class of that object
model = self.object.__class__
else:
# Try to get a queryset and extract the model class
# from that
model = self.get_queryset().model
return model_forms.modelform_factory(model)
def get_form_kwargs(self):
"""
Returns the keyword arguments for instantiating the form.
"""
kwargs = super(ModelFormMixin, self).get_form_kwargs()
kwargs.update({'instance': self.object})
return kwargs
def get_success_url(self):
if self.success_url:
url = self.success_url % self.object.__dict__
else:
try:
url = self.object.get_absolute_url()
except AttributeError:
raise ImproperlyConfigured(
"No URL to redirect to. Either provide a url or define"
" a get_absolute_url method on the Model.")
return url
def form_valid(self, form):
self.object = form.save()
return super(ModelFormMixin, self).form_valid(form)
def get_context_data(self, **kwargs):
context = kwargs
if self.object:
context['object'] = self.object
context_object_name = self.get_context_object_name(self.object)
if context_object_name:
context[context_object_name] = self.object
return context
class ProcessFormView(View):
"""
A mixin that processes a form on POST.
"""
def get(self, request, *args, **kwargs):
form_class = self.get_form_class()
form = self.get_form(form_class)
return self.render_to_response(self.get_context_data(form=form))
def post(self, request, *args, **kwargs):
form_class = self.get_form_class()
form = self.get_form(form_class)
if form.is_valid():
return self.form_valid(form)
else:
return self.form_invalid(form)
# PUT is a valid HTTP verb for creating (with a known URL) or editing an
# object, note that browsers only support POST for now.
def put(self, *args, **kwargs):
return self.post(*args, **kwargs)
class BaseFormView(FormMixin, ProcessFormView):
"""
A base view for displaying a form
"""
class FormView(TemplateResponseMixin, BaseFormView):
"""
A view for displaying a form, and rendering a template response.
"""
class BaseCreateView(ModelFormMixin, ProcessFormView):
"""
Base view for creating a new object instance.
Using this base class requires subclassing to provide a response mixin.
"""
def get(self, request, *args, **kwargs):
self.object = None
return super(BaseCreateView, self).get(request, *args, **kwargs)
def post(self, request, *args, **kwargs):
self.object = None
return super(BaseCreateView, self).post(request, *args, **kwargs)
class CreateView(SingleObjectTemplateResponseMixin, BaseCreateView):
"""
View for creating a new object instance,
with a response rendered by template.
"""
template_name_suffix = '_form'
class BaseUpdateView(ModelFormMixin, ProcessFormView):
"""
Base view for updating an existing object.
Using this base class requires subclassing to provide a response mixin.
"""
def get(self, request, *args, **kwargs):
self.object = self.get_object()
return super(BaseUpdateView, self).get(request, *args, **kwargs)
def post(self, request, *args, **kwargs):
self.object = self.get_object()
return super(BaseUpdateView, self).post(request, *args, **kwargs)
class UpdateView(SingleObjectTemplateResponseMixin, BaseUpdateView):
"""
View for updating an object,
with a response rendered by template.
"""
template_name_suffix = '_form'
class DeletionMixin(object):
"""
A mixin providing the ability to delete objects
"""
success_url = None
def delete(self, request, *args, **kwargs):
self.object = self.get_object()
self.object.delete()
return HttpResponseRedirect(self.get_success_url())
# Add support for browsers which only accept GET and POST for now.
def post(self, *args, **kwargs):
return self.delete(*args, **kwargs)
def get_success_url(self):
if self.success_url:
return self.success_url
else:
raise ImproperlyConfigured(
"No URL to redirect to. Provide a success_url.")
class BaseDeleteView(DeletionMixin, BaseDetailView):
"""
Base view for deleting an object.
Using this base class requires subclassing to provide a response mixin.
"""
class DeleteView(SingleObjectTemplateResponseMixin, BaseDeleteView):
"""
View for deleting an object retrieved with `self.get_object()`,
with a response rendered by template.
"""
template_name_suffix = '_confirm_delete'
| mit | 9,162,839,054,210,751,000 | 29.81405 | 76 | 0.615127 | false | 4.315394 | false | false | false |
ipylypiv/grpc | src/python/grpcio/grpc/_plugin_wrapping.py | 19 | 4602 | # Copyright 2015, Google Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import collections
import threading
import grpc
from grpc import _common
from grpc._cython import cygrpc
class AuthMetadataContext(
collections.namedtuple('AuthMetadataContext', (
'service_url', 'method_name',)), grpc.AuthMetadataContext):
pass
class AuthMetadataPluginCallback(grpc.AuthMetadataContext):
def __init__(self, callback):
self._callback = callback
def __call__(self, metadata, error):
self._callback(metadata, error)
class _WrappedCygrpcCallback(object):
def __init__(self, cygrpc_callback):
self.is_called = False
self.error = None
self.is_called_lock = threading.Lock()
self.cygrpc_callback = cygrpc_callback
def _invoke_failure(self, error):
# TODO(atash) translate different Exception superclasses into different
# status codes.
self.cygrpc_callback(_common.EMPTY_METADATA, cygrpc.StatusCode.internal,
_common.encode(str(error)))
def _invoke_success(self, metadata):
try:
cygrpc_metadata = _common.to_cygrpc_metadata(metadata)
except Exception as exception: # pylint: disable=broad-except
self._invoke_failure(exception)
return
self.cygrpc_callback(cygrpc_metadata, cygrpc.StatusCode.ok, b'')
def __call__(self, metadata, error):
with self.is_called_lock:
if self.is_called:
raise RuntimeError('callback should only ever be invoked once')
if self.error:
self._invoke_failure(self.error)
return
self.is_called = True
if error is None:
self._invoke_success(metadata)
else:
self._invoke_failure(error)
def notify_failure(self, error):
with self.is_called_lock:
if not self.is_called:
self.error = error
class _WrappedPlugin(object):
def __init__(self, plugin):
self.plugin = plugin
def __call__(self, context, cygrpc_callback):
wrapped_cygrpc_callback = _WrappedCygrpcCallback(cygrpc_callback)
wrapped_context = AuthMetadataContext(
_common.decode(context.service_url),
_common.decode(context.method_name))
try:
self.plugin(wrapped_context,
AuthMetadataPluginCallback(wrapped_cygrpc_callback))
except Exception as error:
wrapped_cygrpc_callback.notify_failure(error)
raise
def call_credentials_metadata_plugin(plugin, name):
"""
Args:
plugin: A callable accepting a grpc.AuthMetadataContext
object and a callback (itself accepting a list of metadata key/value
2-tuples and a None-able exception value). The callback must be eventually
called, but need not be called in plugin's invocation.
plugin's invocation must be non-blocking.
"""
return cygrpc.call_credentials_metadata_plugin(
cygrpc.CredentialsMetadataPlugin(
_WrappedPlugin(plugin), _common.encode(name)))
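# Illustrative sketch (not from the gRPC sources): any callable of this shape
# can serve as a plugin; the header value and names below are hypothetical.
#
# def static_token_plugin(context, callback):
#     # context.service_url / context.method_name identify the RPC being made.
#     callback((('authorization', 'Bearer my-token'),), None)
#
# creds = call_credentials_metadata_plugin(static_token_plugin, 'static_token')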
| bsd-3-clause | -8,882,402,637,771,358,000 | 36.414634 | 80 | 0.684485 | false | 4.467961 | false | false | false |
bobcyw/django | django/core/management/commands/check.py | 316 | 1892 | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.apps import apps
from django.core import checks
from django.core.checks.registry import registry
from django.core.management.base import BaseCommand, CommandError
class Command(BaseCommand):
help = "Checks the entire Django project for potential problems."
requires_system_checks = False
def add_arguments(self, parser):
parser.add_argument('args', metavar='app_label', nargs='*')
parser.add_argument('--tag', '-t', action='append', dest='tags',
help='Run only checks labeled with given tag.')
parser.add_argument('--list-tags', action='store_true', dest='list_tags',
help='List available tags.')
parser.add_argument('--deploy', action='store_true', dest='deploy',
help='Check deployment settings.')
def handle(self, *app_labels, **options):
include_deployment_checks = options['deploy']
if options.get('list_tags'):
self.stdout.write('\n'.join(sorted(registry.tags_available(include_deployment_checks))))
return
if app_labels:
app_configs = [apps.get_app_config(app_label) for app_label in app_labels]
else:
app_configs = None
tags = options.get('tags')
if tags:
try:
invalid_tag = next(
tag for tag in tags if not checks.tag_exists(tag, include_deployment_checks)
)
except StopIteration:
# no invalid tags
pass
else:
raise CommandError('There is no system check with the "%s" tag.' % invalid_tag)
self.check(
app_configs=app_configs,
tags=tags,
display_num_errors=True,
include_deployment_checks=include_deployment_checks,
)
| bsd-3-clause | 7,410,625,854,500,169,000 | 35.384615 | 100 | 0.597252 | false | 4.410256 | false | false | false |
ldbc/ldbc_snb_datagen | tools/get-sizes.py | 1 | 1059 | #!/usr/bin/env python3
import argparse
import os
import sys
import boto3
import json
def get_entity_sizes(bucket, prefix):
s3 = boto3.client("s3")
prefix = f"{prefix}social_network/csv/raw/composite-merged-fk/dynamic/"
more = True
token = None
sizes = {}
while more:
resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix, **({'ContinuationToken': token} if token else {}))
for obj in resp["Contents"]:
splits = obj["Key"][len(prefix):].split("/", 1)
if len(splits) > 1:
entity, rest = splits
if rest.endswith(".csv"):
total, c, m = sizes.get(entity, [0, 0, 0])
sizes[entity] = [total + obj["Size"], c + 1, max(m, obj["Size"])]
more = False
if 'NextContinuationToken' in resp.keys():
token = resp['NextContinuationToken']
more = True
with open(f"sizes.json", "w") as f:
json.dump(sizes, f)
get_entity_sizes("ldbc-datagen-sf10k-debug", "sf1000/runs/20210412_091530/")
| gpl-3.0 | 5,652,502,450,209,840,000 | 32.09375 | 114 | 0.568461 | false | 3.449511 | false | false | false |
energicryptocurrency/energi | qa/rpc-tests/bipdersig-p2p.py | 1 | 7055 | #!/usr/bin/env python3
# Copyright (c) 2015-2018 The Energi Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
# Copyright (c) 2015-2016 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
from test_framework.test_framework import ComparisonTestFramework
from test_framework.util import *
from test_framework.mininode import CTransaction, NetworkThread
from test_framework.blocktools import create_coinbase, create_block
from test_framework.comptool import TestInstance, TestManager
from test_framework.script import CScript
from io import BytesIO
# A canonical signature consists of:
# <30> <total len> <02> <len R> <R> <02> <len S> <S> <hashtype>
def unDERify(tx):
'''
Make the signature in vin 0 of a tx non-DER-compliant,
by adding padding after the S-value.
'''
scriptSig = CScript(tx.vin[0].scriptSig)
newscript = []
for i in scriptSig:
if (len(newscript) == 0):
newscript.append(i[0:-1] + b'\0' + i[-1:])
else:
newscript.append(i)
tx.vin[0].scriptSig = CScript(newscript)
'''
This test is meant to exercise BIP66 (DER SIG).
Connect to a single node.
Mine 2 (version 2) blocks (save the coinbases for later).
Generate 298 more version 2 blocks, verify the node accepts.
Mine 749 version 3 blocks, verify the node accepts.
Check that the new DERSIG rules are not enforced on the 750th version 3 block.
Mine 199 new version blocks.
Mine 1 old-version block.
Mine 1 new version block.
Check that the new DERSIG rules are enforced on the 951st version 3 block.
Mine 1 old version block, see that the node rejects.
'''
class BIP66Test(ComparisonTestFramework):
def __init__(self):
super().__init__()
self.num_nodes = 1
def setup_network(self):
# Must set the blockversion for this test
self.nodes = start_nodes(self.num_nodes, self.options.tmpdir,
extra_args=[['-debug', '-whitelist=127.0.0.1', '-blockversion=2']],
binary=[self.options.testbinary])
def run_test(self):
test = TestManager(self, self.options.tmpdir)
test.add_all_connections(self.nodes)
NetworkThread().start() # Start up network handling in another thread
test.run()
def create_transaction(self, node, coinbase, to_address, amount):
from_txid = node.getblock(coinbase)['tx'][0]
inputs = [{ "txid" : from_txid, "vout" : 0}]
outputs = { to_address : amount }
rawtx = node.createrawtransaction(inputs, outputs)
signresult = node.signrawtransaction(rawtx)
tx = CTransaction()
f = BytesIO(hex_str_to_bytes(signresult['hex']))
tx.deserialize(f)
return tx
def get_tests(self):
self.coinbase_blocks = self.nodes[0].generate(2)
height = 3 # height of the next block to build
self.tip = int("0x" + self.nodes[0].getbestblockhash(), 0)
self.nodeaddress = self.nodes[0].getnewaddress()
self.last_block_time = get_mocktime() + 1
''' 298 more version 2 blocks '''
test_blocks = []
for i in range(298):
block = create_block(self.tip, create_coinbase(height), self.last_block_time + 1)
block.nVersion = 2
block.rehash()
block.solve()
test_blocks.append([block, True])
self.last_block_time += 1
self.tip = block.sha256
height += 1
yield TestInstance(test_blocks, sync_every_block=False)
''' Mine 749 version 3 blocks '''
test_blocks = []
for i in range(749):
block = create_block(self.tip, create_coinbase(height), self.last_block_time + 1)
block.nVersion = 3
block.rehash()
block.solve()
test_blocks.append([block, True])
self.last_block_time += 1
self.tip = block.sha256
height += 1
yield TestInstance(test_blocks, sync_every_block=False)
'''
Check that the new DERSIG rules are not enforced in the 750th
version 3 block.
'''
spendtx = self.create_transaction(self.nodes[0],
self.coinbase_blocks[0], self.nodeaddress, 1.0)
unDERify(spendtx)
spendtx.rehash()
block = create_block(self.tip, create_coinbase(height), self.last_block_time + 1)
block.nVersion = 3
block.vtx.append(spendtx)
block.hashMerkleRoot = block.calc_merkle_root()
block.rehash()
block.solve()
self.last_block_time += 1
self.tip = block.sha256
height += 1
yield TestInstance([[block, True]])
''' Mine 199 new version blocks on last valid tip '''
test_blocks = []
for i in range(199):
block = create_block(self.tip, create_coinbase(height), self.last_block_time + 1)
block.nVersion = 3
block.rehash()
block.solve()
test_blocks.append([block, True])
self.last_block_time += 1
self.tip = block.sha256
height += 1
yield TestInstance(test_blocks, sync_every_block=False)
''' Mine 1 old version block '''
block = create_block(self.tip, create_coinbase(height), self.last_block_time + 1)
block.nVersion = 2
block.rehash()
block.solve()
self.last_block_time += 1
self.tip = block.sha256
height += 1
yield TestInstance([[block, True]])
''' Mine 1 new version block '''
block = create_block(self.tip, create_coinbase(height), self.last_block_time + 1)
block.nVersion = 3
block.rehash()
block.solve()
self.last_block_time += 1
self.tip = block.sha256
height += 1
yield TestInstance([[block, True]])
'''
Check that the new DERSIG rules are enforced in the 951st version 3
block.
'''
spendtx = self.create_transaction(self.nodes[0],
self.coinbase_blocks[1], self.nodeaddress, 1.0)
unDERify(spendtx)
spendtx.rehash()
block = create_block(self.tip, create_coinbase(height), self.last_block_time + 1)
block.nVersion = 3
block.vtx.append(spendtx)
block.hashMerkleRoot = block.calc_merkle_root()
block.rehash()
block.solve()
self.last_block_time += 1
yield TestInstance([[block, False]])
''' Mine 1 old version block, should be invalid '''
block = create_block(self.tip, create_coinbase(height), self.last_block_time + 1)
block.nVersion = 2
block.rehash()
block.solve()
self.last_block_time += 1
yield TestInstance([[block, False]])
if __name__ == '__main__':
BIP66Test().main()
| mit | 9,093,959,642,825,869,000 | 35.937173 | 100 | 0.60652 | false | 3.813514 | true | false | false |
CalvinHsu1223/LinuxCNC-HAL-EtherCAT-Driver-with-ILC | configs/gladevcp/probe/probe.py | 10 | 7401 | #!/usr/bin/env python
# vim: sts=4 sw=4 et
# This is a component of EMC
# probe.py Copyright 2010 Michael Haberler
#
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA''''''
'''
gladevcp probe demo example
Michael Haberler 11/2010
'''
import os,sys
from gladevcp.persistence import IniFile,widget_defaults,set_debug,select_widgets
import hal
import hal_glib
import gtk
import glib
import linuxcnc
debug = 0
class EmcInterface(object):
def __init__(self):
try:
self.s = linuxcnc.stat();
self.c = linuxcnc.command()
except Exception, msg:
print "cant initialize EmcInterface: %s - EMC not running?" %(msg)
def running(self,do_poll=True):
if do_poll: self.s.poll()
return self.s.task_mode == linuxcnc.MODE_AUTO and self.s.interp_state != linuxcnc.INTERP_IDLE
def manual_ok(self,do_poll=True):
if do_poll: self.s.poll()
if self.s.task_state != linuxcnc.STATE_ON: return False
return self.s.interp_state == linuxcnc.INTERP_IDLE
def ensure_mode(self,m, *p):
'''
If emc is not already in one of the modes given, switch it to the first mode
example:
ensure_mode(linuxcnc.MODE_MDI)
ensure_mode(linuxcnc.MODE_AUTO, linuxcnc.MODE_MDI)
'''
self.s.poll()
if self.s.task_mode == m or self.s.task_mode in p: return True
if self.running(do_poll=False): return False
self.c.mode(m)
self.c.wait_complete()
return True
def active_codes(self):
self.s.poll()
return self.s.gcodes
def get_current_system(self):
for i in self.active_codes():
if i >= 540 and i <= 590:
return i/10 - 53
elif i >= 590 and i <= 593:
return i - 584
return 1
def mdi_command(self,command, wait=True):
#ensure_mode(emself.c.MODE_MDI)
self.c.mdi(command)
if wait: self.c.wait_complete()
def emc_status(self):
'''
return tuple (task mode, task state, exec state, interp state) as strings
'''
self.s.poll()
task_mode = ['invalid', 'MANUAL', 'AUTO', 'MDI'][self.s.task_mode]
task_state = ['invalid', 'ESTOP', 'ESTOP_RESET', 'OFF', 'ON'][self.s.task_state]
exec_state = ['invalid', 'ERROR', 'DONE',
'WAITING_FOR_MOTION',
'WAITING_FOR_MOTION_QUEUE',
'WAITING_FOR_IO',
'WAITING_FOR_PAUSE',
'WAITING_FOR_MOTION_AND_IO',
'WAITING_FOR_DELAY',
'WAITING_FOR_SYSTEM_CMD' ][self.s.exec_state]
interp_state = ['invalid', 'IDLE', 'READING', 'PAUSED', 'WAITING'][self.s.interp_state]
return (task_mode, task_state, exec_state, interp_state)
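# Minimal usage sketch for EmcInterface (illustrative only, assumes a running
# LinuxCNC session; not part of the original demo):
#
#   e = EmcInterface()
#   if e.manual_ok():
#       e.ensure_mode(linuxcnc.MODE_MDI)
#       e.mdi_command("G0 X0 Y0")
#   print e.emc_status()        # e.g. ('MDI', 'ON', 'DONE', 'IDLE')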
class HandlerClass:
def on_manual_mode(self,widget,data=None):
if self.e.ensure_mode(linuxcnc.MODE_MANUAL):
print "switched to manual mode"
else:
print "cant switch to manual in this state"
def on_mdi_mode(self,widget,data=None):
if self.e.ensure_mode(linuxcnc.MODE_MDI):
print "switched to MDI mode"
else:
print "cant switch to MDI in this state"
def _query_emc_status(self,data=None):
(task_mode, task_state, exec_state, interp_state) = self.e.emc_status()
self.builder.get_object('task_mode').set_label("Task mode: " + task_mode)
self.builder.get_object('task_state').set_label("Task state: " + task_state)
self.builder.get_object('exec_state').set_label("Exec state: " + exec_state)
self.builder.get_object('interp_state').set_label("Interp state: " + interp_state)
return True
def on_probe(self,widget,data=None):
label = widget.get_label()
axis = ord(label[0].lower()) - ord('x')
direction = 1.0
if label[1] == '-':
direction = -1.0
self.e.s.poll()
self.start_feed = self.e.s.settings[1]
# determine system we are touching off - 1...g54 etc
self.current_system = self.e.get_current_system()
# remember current abs or rel mode - g91
self.start_relative = (910 in self.e.active_codes())
self.previous_mode = self.e.s.task_mode
if self.e.s.task_state != linuxcnc.STATE_ON:
print "machine not turned on"
return
if not self.e.s.homed[axis]:
print "%s axis not homed" %(chr(axis + ord('X')))
return
if self.e.running(do_poll=False):
print "cant do that now - intepreter running"
return
self.e.ensure_mode(linuxcnc.MODE_MDI)
self.e.mdi_command("#<_Probe_System> = %d " % (self.current_system ),wait=False)
self.e.mdi_command("#<_Probe_Axis> = %d " % (axis),wait=False)
self.e.mdi_command("#<_Probe_Speed> = %s " % (self.builder.get_object('probe_feed').get_value()),wait=False)
self.e.mdi_command("#<_Probe_Diameter> = %s " % (self.builder.get_object('probe_diameter').get_value() ),wait=False)
self.e.mdi_command("#<_Probe_Distance> = %s " % (self.builder.get_object('probe_travel').get_value() * direction),wait=False)
self.e.mdi_command("#<_Probe_Retract> = %s " % (self.builder.get_object('retract').get_value() * direction * -1.0),wait=False)
self.e.mdi_command("O<probe> call",wait=False)
self.e.mdi_command('F%f' % (self.start_feed),wait=False)
self.e.mdi_command('G91' if self.start_relative else 'G90',wait=False)
# self.e.ensure_mode(self.previous_mode)
def on_destroy(self,obj,data=None):
self.ini.save_state(self)
def on_restore_defaults(self,button,data=None):
'''
example callback for 'Reset to defaults' button
currently unused
'''
self.ini.create_default_ini()
self.ini.restore_state(self)
def __init__(self, halcomp,builder,useropts):
self.halcomp = halcomp
self.builder = builder
self.ini_filename = __name__ + '.ini'
self.defaults = { IniFile.vars: dict(),
IniFile.widgets : widget_defaults(select_widgets(self.builder.get_objects(), hal_only=False,output_only = True))
}
self.ini = IniFile(self.ini_filename,self.defaults,self.builder)
self.ini.restore_state(self)
self.e = EmcInterface()
glib.timeout_add_seconds(1, self._query_emc_status)
def get_handlers(halcomp,builder,useropts):
global debug
for cmd in useropts:
exec cmd in globals()
set_debug(debug)
return [HandlerClass(halcomp,builder,useropts)]
| gpl-2.0 | 5,929,274,333,969,834,000 | 35.279412 | 139 | 0.602351 | false | 3.364091 | false | false | false |
brianlsharp/MissionPlanner | Lib/site-packages/numpy/lib/type_check.py | 53 | 17548 | ## Automatically adapted for numpy Sep 19, 2005 by convertcode.py
__all__ = ['iscomplexobj','isrealobj','imag','iscomplex',
'isreal','nan_to_num','real','real_if_close',
'typename','asfarray','mintypecode','asscalar',
'common_type', 'datetime_data']
import numpy.core.numeric as _nx
from numpy.core.numeric import asarray, asanyarray, array, isnan, \
obj2sctype, zeros
from ufunclike import isneginf, isposinf
_typecodes_by_elsize = 'GDFgdfQqLlIiHhBb?'
def mintypecode(typechars,typeset='GDFgdf',default='d'):
"""
Return the character for the minimum-size type to which given types can
be safely cast.
The returned type character must represent the smallest size dtype such
that an array of the returned type can handle the data from an array of
all types in `typechars` (or if `typechars` is an array, then its
dtype.char).
Parameters
----------
typechars : list of str or array_like
If a list of strings, each string should represent a dtype.
If array_like, the character representation of the array dtype is used.
typeset : str or list of str, optional
The set of characters that the returned character is chosen from.
The default set is 'GDFgdf'.
default : str, optional
The default character, this is returned if none of the characters in
`typechars` matches a character in `typeset`.
Returns
-------
typechar : str
The character representing the minimum-size type that was found.
See Also
--------
dtype, sctype2char, maximum_sctype
Examples
--------
>>> np.mintypecode(['d', 'f', 'S'])
'd'
>>> x = np.array([1.1, 2-3.j])
>>> np.mintypecode(x)
'D'
>>> np.mintypecode('abceh', default='G')
'G'
"""
typecodes = [(type(t) is type('') and t) or asarray(t).dtype.char\
for t in typechars]
intersection = [t for t in typecodes if t in typeset]
if not intersection:
return default
if 'F' in intersection and 'd' in intersection:
return 'D'
l = []
for t in intersection:
i = _typecodes_by_elsize.index(t)
l.append((i,t))
l.sort()
return l[0][1]
def asfarray(a, dtype=_nx.float_):
"""
Return an array converted to a float type.
Parameters
----------
a : array_like
The input array.
dtype : str or dtype object, optional
Float type code to coerce input array `a`. If `dtype` is one of the
'int' dtypes, it is replaced with float64.
Returns
-------
out : ndarray
The input `a` as a float ndarray.
Examples
--------
>>> np.asfarray([2, 3])
array([ 2., 3.])
>>> np.asfarray([2, 3], dtype='float')
array([ 2., 3.])
>>> np.asfarray([2, 3], dtype='int8')
array([ 2., 3.])
"""
dtype = _nx.obj2sctype(dtype)
if not issubclass(dtype, _nx.inexact):
dtype = _nx.float_
return asarray(a,dtype=dtype)
def real(val):
"""
Return the real part of the elements of the array.
Parameters
----------
val : array_like
Input array.
Returns
-------
out : ndarray
Output array. If `val` is real, the type of `val` is used for the
output. If `val` has complex elements, the returned type is float.
See Also
--------
real_if_close, imag, angle
Examples
--------
>>> a = np.array([1+2j, 3+4j, 5+6j])
>>> a.real
array([ 1., 3., 5.])
>>> a.real = 9
>>> a
array([ 9.+2.j, 9.+4.j, 9.+6.j])
>>> a.real = np.array([9, 8, 7])
>>> a
array([ 9.+2.j, 8.+4.j, 7.+6.j])
"""
return asanyarray(val).real
def imag(val):
"""
Return the imaginary part of the elements of the array.
Parameters
----------
val : array_like
Input array.
Returns
-------
out : ndarray
Output array. If `val` is real, the type of `val` is used for the
output. If `val` has complex elements, the returned type is float.
See Also
--------
real, angle, real_if_close
Examples
--------
>>> a = np.array([1+2j, 3+4j, 5+6j])
>>> a.imag
array([ 2., 4., 6.])
>>> a.imag = np.array([8, 10, 12])
>>> a
array([ 1. +8.j, 3.+10.j, 5.+12.j])
"""
return asanyarray(val).imag
def iscomplex(x):
"""
Returns a bool array, where True if input element is complex.
What is tested is whether the input has a non-zero imaginary part, not if
the input type is complex.
Parameters
----------
x : array_like
Input array.
Returns
-------
out : ndarray of bools
Output array.
See Also
--------
isreal
iscomplexobj : Return True if x is a complex type or an array of complex
numbers.
Examples
--------
>>> np.iscomplex([1+1j, 1+0j, 4.5, 3, 2, 2j])
array([ True, False, False, False, False, True], dtype=bool)
"""
ax = asanyarray(x)
if issubclass(ax.dtype.type, _nx.complexfloating):
return ax.imag != 0
res = zeros(ax.shape, bool)
    return +res # convert to array-scalar if needed
def isreal(x):
"""
Returns a bool array, where True if input element is real.
If element has complex type with zero complex part, the return value
for that element is True.
Parameters
----------
x : array_like
Input array.
Returns
-------
out : ndarray, bool
Boolean array of same shape as `x`.
See Also
--------
iscomplex
isrealobj : Return True if x is not a complex type.
Examples
--------
>>> np.isreal([1+1j, 1+0j, 4.5, 3, 2, 2j])
array([False, True, True, True, True, False], dtype=bool)
"""
return imag(x) == 0
def iscomplexobj(x):
"""
Return True if x is a complex type or an array of complex numbers.
The type of the input is checked, not the value. So even if the input
has an imaginary part equal to zero, `iscomplexobj` evaluates to True
if the data type is complex.
Parameters
----------
x : any
The input can be of any type and shape.
Returns
-------
y : bool
The return value, True if `x` is of a complex type.
See Also
--------
isrealobj, iscomplex
Examples
--------
>>> np.iscomplexobj(1)
False
>>> np.iscomplexobj(1+0j)
True
>>> np.iscomplexobj([3, 1+0j, True])
True
"""
return issubclass( asarray(x).dtype.type, _nx.complexfloating)
def isrealobj(x):
"""
    Return True if x is not a complex type nor an array of complex numbers.
The type of the input is checked, not the value. So even if the input
has an imaginary part equal to zero, `isrealobj` evaluates to False
if the data type is complex.
Parameters
----------
x : any
The input can be of any type and shape.
Returns
-------
y : bool
The return value, False if `x` is of a complex type.
See Also
--------
iscomplexobj, isreal
Examples
--------
>>> np.isrealobj(1)
True
>>> np.isrealobj(1+0j)
False
>>> np.isrealobj([3, 1+0j, True])
False
"""
return not issubclass( asarray(x).dtype.type, _nx.complexfloating)
#-----------------------------------------------------------------------------
def _getmaxmin(t):
from numpy.core import getlimits
f = getlimits.finfo(t)
return f.max, f.min
def nan_to_num(x):
"""
Replace nan with zero and inf with finite numbers.
Returns an array or scalar replacing Not a Number (NaN) with zero,
(positive) infinity with a very large number and negative infinity
with a very small (or negative) number.
Parameters
----------
x : array_like
Input data.
Returns
-------
out : ndarray, float
Array with the same shape as `x` and dtype of the element in `x` with
the greatest precision. NaN is replaced by zero, and infinity
(-infinity) is replaced by the largest (smallest or most negative)
floating point value that fits in the output dtype. All finite numbers
are upcast to the output dtype (default float64).
See Also
--------
isinf : Shows which elements are negative or negative infinity.
isneginf : Shows which elements are negative infinity.
isposinf : Shows which elements are positive infinity.
isnan : Shows which elements are Not a Number (NaN).
isfinite : Shows which elements are finite (not NaN, not infinity)
Notes
-----
Numpy uses the IEEE Standard for Binary Floating-Point for Arithmetic
(IEEE 754). This means that Not a Number is not equivalent to infinity.
Examples
--------
>>> np.set_printoptions(precision=8)
>>> x = np.array([np.inf, -np.inf, np.nan, -128, 128])
>>> np.nan_to_num(x)
array([ 1.79769313e+308, -1.79769313e+308, 0.00000000e+000,
-1.28000000e+002, 1.28000000e+002])
"""
try:
t = x.dtype.type
except AttributeError:
t = obj2sctype(type(x))
if issubclass(t, _nx.complexfloating):
return nan_to_num(x.real) + 1j * nan_to_num(x.imag)
else:
try:
y = x.copy()
except AttributeError:
y = array(x)
if not issubclass(t, _nx.integer):
if not y.shape:
y = array([x])
scalar = True
else:
scalar = False
are_inf = isposinf(y)
are_neg_inf = isneginf(y)
are_nan = isnan(y)
maxf, minf = _getmaxmin(y.dtype.type)
y[are_nan] = 0
y[are_inf] = maxf
y[are_neg_inf] = minf
if scalar:
y = y[0]
return y
#-----------------------------------------------------------------------------
def real_if_close(a,tol=100):
"""
If complex input returns a real array if complex parts are close to zero.
"Close to zero" is defined as `tol` * (machine epsilon of the type for
`a`).
Parameters
----------
a : array_like
Input array.
tol : float
Tolerance in machine epsilons for the complex part of the elements
in the array.
Returns
-------
out : ndarray
If `a` is real, the type of `a` is used for the output. If `a`
has complex elements, the returned type is float.
See Also
--------
real, imag, angle
Notes
-----
Machine epsilon varies from machine to machine and between data types
but Python floats on most platforms have a machine epsilon equal to
2.2204460492503131e-16. You can use 'np.finfo(np.float).eps' to print
out the machine epsilon for floats.
Examples
--------
>>> np.finfo(np.float).eps
2.2204460492503131e-16
>>> np.real_if_close([2.1 + 4e-14j], tol=1000)
array([ 2.1])
>>> np.real_if_close([2.1 + 4e-13j], tol=1000)
array([ 2.1 +4.00000000e-13j])
"""
a = asanyarray(a)
if not issubclass(a.dtype.type, _nx.complexfloating):
return a
if tol > 1:
from numpy.core import getlimits
f = getlimits.finfo(a.dtype.type)
tol = f.eps * tol
if _nx.allclose(a.imag, 0, atol=tol):
a = a.real
return a
def asscalar(a):
"""
Convert an array of size 1 to its scalar equivalent.
Parameters
----------
a : ndarray
Input array of size 1.
Returns
-------
out : scalar
Scalar representation of `a`. The input data type is preserved.
Examples
--------
>>> np.asscalar(np.array([24]))
24
"""
return a.item()
#-----------------------------------------------------------------------------
_namefromtype = {'S1' : 'character',
'?' : 'bool',
'b' : 'signed char',
'B' : 'unsigned char',
'h' : 'short',
'H' : 'unsigned short',
'i' : 'integer',
'I' : 'unsigned integer',
'l' : 'long integer',
'L' : 'unsigned long integer',
'q' : 'long long integer',
'Q' : 'unsigned long long integer',
'f' : 'single precision',
'd' : 'double precision',
'g' : 'long precision',
'F' : 'complex single precision',
'D' : 'complex double precision',
'G' : 'complex long double precision',
'S' : 'string',
'U' : 'unicode',
'V' : 'void',
'O' : 'object'
}
def typename(char):
"""
Return a description for the given data type code.
Parameters
----------
char : str
Data type code.
Returns
-------
out : str
Description of the input data type code.
See Also
--------
dtype, typecodes
Examples
--------
>>> typechars = ['S1', '?', 'B', 'D', 'G', 'F', 'I', 'H', 'L', 'O', 'Q',
... 'S', 'U', 'V', 'b', 'd', 'g', 'f', 'i', 'h', 'l', 'q']
>>> for typechar in typechars:
... print typechar, ' : ', np.typename(typechar)
...
S1 : character
? : bool
B : unsigned char
D : complex double precision
G : complex long double precision
F : complex single precision
I : unsigned integer
H : unsigned short
L : unsigned long integer
O : object
Q : unsigned long long integer
S : string
U : unicode
V : void
b : signed char
d : double precision
g : long precision
f : single precision
i : integer
h : short
l : long integer
q : long long integer
"""
return _namefromtype[char]
#-----------------------------------------------------------------------------
#determine the "minimum common type" for a group of arrays.
array_type = [[_nx.single, _nx.double, _nx.longdouble],
[_nx.csingle, _nx.cdouble, _nx.clongdouble]]
array_precision = {_nx.single : 0,
_nx.double : 1,
_nx.longdouble : 2,
_nx.csingle : 0,
_nx.cdouble : 1,
_nx.clongdouble : 2}
def common_type(*arrays):
"""
Return a scalar type which is common to the input arrays.
The return type will always be an inexact (i.e. floating point) scalar
type, even if all the arrays are integer arrays. If one of the inputs is
an integer array, the minimum precision type that is returned is a
64-bit floating point dtype.
All input arrays can be safely cast to the returned dtype without loss
of information.
Parameters
----------
array1, array2, ... : ndarrays
Input arrays.
Returns
-------
out : data type code
Data type code.
See Also
--------
dtype, mintypecode
Examples
--------
>>> np.common_type(np.arange(2, dtype=np.float32))
<type 'numpy.float32'>
>>> np.common_type(np.arange(2, dtype=np.float32), np.arange(2))
<type 'numpy.float64'>
>>> np.common_type(np.arange(4), np.array([45, 6.j]), np.array([45.0]))
<type 'numpy.complex128'>
"""
is_complex = False
precision = 0
for a in arrays:
t = a.dtype.type
if iscomplexobj(a):
is_complex = True
if issubclass(t, _nx.integer):
p = 1
else:
p = array_precision.get(t, None)
if p is None:
raise TypeError("can't get common type for non-numeric array")
precision = max(precision, p)
if is_complex:
return array_type[1][precision]
else:
return array_type[0][precision]
def datetime_data(dtype):
"""Return (unit, numerator, denominator, events) from a datetime dtype
"""
try:
import ctypes
except ImportError:
raise RuntimeError, "Cannot access date-time internals without ctypes installed"
if dtype.kind not in ['m','M']:
raise ValueError, "Not a date-time dtype"
# TODO: This used to have
# obj = dtype.metadata[METADATA_DTSTR]
# now we get an error because obj is not set.
class DATETIMEMETA(ctypes.Structure):
_fields_ = [('base', ctypes.c_int),
('num', ctypes.c_int),
('den', ctypes.c_int),
('events', ctypes.c_int)]
import sys
if sys.version_info[:2] >= (3, 0):
func = ctypes.pythonapi.PyCapsule_GetPointer
func.argtypes = [ctypes.py_object, ctypes.c_char_p]
func.restype = ctypes.c_void_p
result = func(ctypes.py_object(obj), ctypes.c_char_p(None))
else:
func = ctypes.pythonapi.PyCObject_AsVoidPtr
func.argtypes = [ctypes.py_object]
func.restype = ctypes.c_void_p
result = func(ctypes.py_object(obj))
result = ctypes.cast(ctypes.c_void_p(result), ctypes.POINTER(DATETIMEMETA))
struct = result[0]
base = struct.base
# FIXME: This needs to be kept consistent with enum in ndarrayobject.h
from numpy.core.multiarray import DATETIMEUNITS
obj = ctypes.py_object(DATETIMEUNITS)
if sys.version_info[:2] >= (2,7):
result = func(obj, ctypes.c_char_p(None))
else:
result = func(obj)
_unitnum2name = ctypes.cast(ctypes.c_void_p(result), ctypes.POINTER(ctypes.c_char_p))
return (_unitnum2name[base], struct.num, struct.den, struct.events)
| gpl-3.0 | 2,043,922,071,066,819,800 | 26.038521 | 89 | 0.55049 | false | 3.728065 | false | false | false |
jusdng/odoo | openerp/addons/base/module/wizard/base_import_language.py | 337 | 2644 | # -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2010 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
import base64
from tempfile import TemporaryFile
from openerp import tools
from openerp.osv import osv, fields
class base_language_import(osv.osv_memory):
""" Language Import """
_name = "base.language.import"
_description = "Language Import"
_columns = {
'name': fields.char('Language Name', required=True),
'code': fields.char('ISO Code', size=5, help="ISO Language and Country code, e.g. en_US", required=True),
'data': fields.binary('File', required=True),
'overwrite': fields.boolean('Overwrite Existing Terms',
help="If you enable this option, existing translations (including custom ones) "
"will be overwritten and replaced by those in this file"),
}
def import_lang(self, cr, uid, ids, context=None):
if context is None:
context = {}
this = self.browse(cr, uid, ids[0])
if this.overwrite:
context = dict(context, overwrite=True)
fileobj = TemporaryFile('w+')
try:
fileobj.write(base64.decodestring(this.data))
# now we determine the file format
fileobj.seek(0)
first_line = fileobj.readline().strip().replace('"', '').replace(' ', '')
fileformat = first_line.endswith("type,name,res_id,src,value") and 'csv' or 'po'
fileobj.seek(0)
tools.trans_load_data(cr, fileobj, fileformat, this.code, lang_name=this.name, context=context)
finally:
fileobj.close()
return True
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 | -347,243,725,224,010,430 | 40.3125 | 116 | 0.596445 | false | 4.320261 | false | false | false |
EnviroCentre/jython-upgrade | jython/lib/site-packages/pip/commands/uninstall.py | 3 | 2289 | from pip.req import InstallRequirement, RequirementSet, parse_requirements
from pip.basecommand import Command
from pip.exceptions import InstallationError
class UninstallCommand(Command):
"""
Uninstall packages.
pip is able to uninstall most installed packages. Known exceptions are:
- Pure distutils packages installed with ``python setup.py install``, which
leave behind no metadata to determine what files were installed.
- Script wrappers installed by ``python setup.py develop``.
"""
name = 'uninstall'
usage = """
%prog [options] <package> ...
%prog [options] -r <requirements file> ..."""
summary = 'Uninstall packages.'
def __init__(self, *args, **kw):
super(UninstallCommand, self).__init__(*args, **kw)
self.cmd_opts.add_option(
'-r', '--requirement',
dest='requirements',
action='append',
default=[],
metavar='file',
help='Uninstall all the packages listed in the given requirements '
'file. This option can be used multiple times.',
)
self.cmd_opts.add_option(
'-y', '--yes',
dest='yes',
action='store_true',
help="Don't ask for confirmation of uninstall deletions.")
self.parser.insert_option_group(0, self.cmd_opts)
def run(self, options, args):
session = self._build_session(options)
requirement_set = RequirementSet(
build_dir=None,
src_dir=None,
download_dir=None,
session=session,
)
for name in args:
requirement_set.add_requirement(
InstallRequirement.from_line(name))
for filename in options.requirements:
for req in parse_requirements(
filename,
options=options,
session=session):
requirement_set.add_requirement(req)
if not requirement_set.has_requirements:
raise InstallationError(
'You must give at least one requirement to %(name)s (see "pip '
'help %(name)s")' % dict(name=self.name)
)
requirement_set.uninstall(auto_confirm=options.yes)
| mit | -6,908,659,263,130,169,000 | 34.765625 | 79 | 0.58235 | false | 4.633603 | false | false | false |
emonty/ansible | lib/ansible/modules/system/iptables.py | 20 | 28339 | #!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2015, Linus Unnebäck <[email protected]>
# Copyright: (c) 2017, Sébastien DA ROCHA <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'core'}
DOCUMENTATION = r'''
---
module: iptables
short_description: Modify iptables rules
version_added: "2.0"
author:
- Linus Unnebäck (@LinusU) <[email protected]>
- Sébastien DA ROCHA (@sebastiendarocha)
description:
- C(iptables) is used to set up, maintain, and inspect the tables of IP packet
filter rules in the Linux kernel.
- This module does not handle the saving and/or loading of rules, but rather
only manipulates the current rules that are present in memory. This is the
same as the behaviour of the C(iptables) and C(ip6tables) command which
this module uses internally.
notes:
- This module just deals with individual rules.If you need advanced
chaining of rules the recommended way is to template the iptables restore
file.
options:
table:
description:
- This option specifies the packet matching table which the command should operate on.
- If the kernel is configured with automatic module loading, an attempt will be made
to load the appropriate module for that table if it is not already there.
type: str
choices: [ filter, nat, mangle, raw, security ]
default: filter
state:
description:
- Whether the rule should be absent or present.
type: str
choices: [ absent, present ]
default: present
action:
description:
- Whether the rule should be appended at the bottom or inserted at the top.
- If the rule already exists the chain will not be modified.
type: str
choices: [ append, insert ]
default: append
version_added: "2.2"
rule_num:
description:
- Insert the rule as the given rule number.
- This works only with C(action=insert).
type: str
version_added: "2.5"
ip_version:
description:
- Which version of the IP protocol this rule should apply to.
type: str
choices: [ ipv4, ipv6 ]
default: ipv4
chain:
description:
- Specify the iptables chain to modify.
- This could be a user-defined chain or one of the standard iptables chains, like
C(INPUT), C(FORWARD), C(OUTPUT), C(PREROUTING), C(POSTROUTING), C(SECMARK) or C(CONNSECMARK).
type: str
protocol:
description:
- The protocol of the rule or of the packet to check.
- The specified protocol can be one of C(tcp), C(udp), C(udplite), C(icmp), C(esp),
C(ah), C(sctp) or the special keyword C(all), or it can be a numeric value,
representing one of these protocols or a different one.
- A protocol name from I(/etc/protocols) is also allowed.
- A C(!) argument before the protocol inverts the test.
- The number zero is equivalent to all.
- C(all) will match with all protocols and is taken as default when this option is omitted.
type: str
source:
description:
- Source specification.
- Address can be either a network name, a hostname, a network IP address
(with /mask), or a plain IP address.
- Hostnames will be resolved once only, before the rule is submitted to
the kernel. Please note that specifying any name to be resolved with
a remote query such as DNS is a really bad idea.
- The mask can be either a network mask or a plain number, specifying
the number of 1's at the left side of the network mask. Thus, a mask
of 24 is equivalent to 255.255.255.0. A C(!) argument before the
address specification inverts the sense of the address.
type: str
destination:
description:
- Destination specification.
- Address can be either a network name, a hostname, a network IP address
(with /mask), or a plain IP address.
- Hostnames will be resolved once only, before the rule is submitted to
the kernel. Please note that specifying any name to be resolved with
a remote query such as DNS is a really bad idea.
- The mask can be either a network mask or a plain number, specifying
the number of 1's at the left side of the network mask. Thus, a mask
of 24 is equivalent to 255.255.255.0. A C(!) argument before the
address specification inverts the sense of the address.
type: str
tcp_flags:
description:
- TCP flags specification.
- C(tcp_flags) expects a dict with the two keys C(flags) and C(flags_set).
type: dict
default: {}
version_added: "2.4"
suboptions:
flags:
description:
- List of flags you want to examine.
type: list
flags_set:
description:
- Flags to be set.
type: list
match:
description:
- Specifies a match to use, that is, an extension module that tests for
a specific property.
- The set of matches make up the condition under which a target is invoked.
- Matches are evaluated first to last if specified as an array and work in short-circuit
fashion, i.e. if one extension yields false, evaluation will stop.
type: list
default: []
jump:
description:
- This specifies the target of the rule; i.e., what to do if the packet matches it.
- The target can be a user-defined chain (other than the one
this rule is in), one of the special builtin targets which decide the
fate of the packet immediately, or an extension (see EXTENSIONS
below).
- If this option is omitted in a rule (and the goto parameter
is not used), then matching the rule will have no effect on the
packet's fate, but the counters on the rule will be incremented.
type: str
gateway:
description:
- This specifies the IP address of host to send the cloned packets.
- This option is only valid when C(jump) is set to C(TEE).
type: str
version_added: "2.8"
log_prefix:
description:
- Specifies a log text for the rule. Only make sense with a LOG jump.
type: str
version_added: "2.5"
log_level:
description:
- Logging level according to the syslogd-defined priorities.
- The value can be strings or numbers from 1-8.
- This parameter is only applicable if C(jump) is set to C(LOG).
type: str
version_added: "2.8"
choices: [ '0', '1', '2', '3', '4', '5', '6', '7', 'emerg', 'alert', 'crit', 'error', 'warning', 'notice', 'info', 'debug' ]
goto:
description:
- This specifies that the processing should continue in a user specified chain.
- Unlike the jump argument return will not continue processing in
this chain but instead in the chain that called us via jump.
type: str
in_interface:
description:
- Name of an interface via which a packet was received (only for packets
entering the C(INPUT), C(FORWARD) and C(PREROUTING) chains).
- When the C(!) argument is used before the interface name, the sense is inverted.
- If the interface name ends in a C(+), then any interface which begins with
this name will match.
- If this option is omitted, any interface name will match.
type: str
out_interface:
description:
- Name of an interface via which a packet is going to be sent (for
packets entering the C(FORWARD), C(OUTPUT) and C(POSTROUTING) chains).
- When the C(!) argument is used before the interface name, the sense is inverted.
- If the interface name ends in a C(+), then any interface which begins
with this name will match.
- If this option is omitted, any interface name will match.
type: str
fragment:
description:
- This means that the rule only refers to second and further fragments
of fragmented packets.
- Since there is no way to tell the source or destination ports of such
a packet (or ICMP type), such a packet will not match any rules which specify them.
- When the "!" argument precedes fragment argument, the rule will only match head fragments,
or unfragmented packets.
type: str
set_counters:
description:
- This enables the administrator to initialize the packet and byte
counters of a rule (during C(INSERT), C(APPEND), C(REPLACE) operations).
type: str
source_port:
description:
- Source port or port range specification.
- This can either be a service name or a port number.
- An inclusive range can also be specified, using the format C(first:last).
- If the first port is omitted, C(0) is assumed; if the last is omitted, C(65535) is assumed.
- If the first port is greater than the second one they will be swapped.
type: str
destination_port:
description:
- "Destination port or port range specification. This can either be
a service name or a port number. An inclusive range can also be
specified, using the format first:last. If the first port is omitted,
'0' is assumed; if the last is omitted, '65535' is assumed. If the
first port is greater than the second one they will be swapped.
This is only valid if the rule also specifies one of the following
protocols: tcp, udp, dccp or sctp."
type: str
to_ports:
description:
- This specifies a destination port or range of ports to use, without
this, the destination port is never altered.
- This is only valid if the rule also specifies one of the protocol
C(tcp), C(udp), C(dccp) or C(sctp).
type: str
to_destination:
description:
- This specifies a destination address to use with C(DNAT).
- Without this, the destination address is never altered.
type: str
version_added: "2.1"
to_source:
description:
- This specifies a source address to use with C(SNAT).
- Without this, the source address is never altered.
type: str
version_added: "2.2"
syn:
description:
- This allows matching packets that have the SYN bit set and the ACK
and RST bits unset.
- When negated, this matches all packets with the RST or the ACK bits set.
type: str
choices: [ ignore, match, negate ]
default: ignore
version_added: "2.5"
set_dscp_mark:
description:
- This allows specifying a DSCP mark to be added to packets.
It takes either an integer or hex value.
- Mutually exclusive with C(set_dscp_mark_class).
type: str
version_added: "2.1"
set_dscp_mark_class:
description:
- This allows specifying a predefined DiffServ class which will be
translated to the corresponding DSCP mark.
- Mutually exclusive with C(set_dscp_mark).
type: str
version_added: "2.1"
comment:
description:
- This specifies a comment that will be added to the rule.
type: str
ctstate:
description:
- C(ctstate) is a list of the connection states to match in the conntrack module.
- Possible states are C(INVALID), C(NEW), C(ESTABLISHED), C(RELATED), C(UNTRACKED), C(SNAT), C(DNAT)
type: list
default: []
src_range:
description:
- Specifies the source IP range to match in the iprange module.
type: str
version_added: "2.8"
dst_range:
description:
- Specifies the destination IP range to match in the iprange module.
type: str
version_added: "2.8"
limit:
description:
- Specifies the maximum average number of matches to allow per second.
- The number can specify units explicitly, using `/second', `/minute',
`/hour' or `/day', or parts of them (so `5/second' is the same as
`5/s').
type: str
limit_burst:
description:
- Specifies the maximum burst before the above limit kicks in.
type: str
version_added: "2.1"
uid_owner:
description:
- Specifies the UID or username to use in match by owner rule.
- From Ansible 2.6 when the C(!) argument is prepended then the it inverts
the rule to apply instead to all users except that one specified.
type: str
version_added: "2.1"
gid_owner:
description:
- Specifies the GID or group to use in match by owner rule.
type: str
version_added: "2.9"
reject_with:
description:
- 'Specifies the error packet type to return while rejecting. It implies
"jump: REJECT"'
type: str
version_added: "2.1"
icmp_type:
description:
- This allows specification of the ICMP type, which can be a numeric
ICMP type, type/code pair, or one of the ICMP type names shown by the
command 'iptables -p icmp -h'
type: str
version_added: "2.2"
flush:
description:
- Flushes the specified table and chain of all rules.
- If no chain is specified then the entire table is purged.
- Ignores all other parameters.
type: bool
version_added: "2.2"
policy:
description:
- Set the policy for the chain to the given target.
- Only built-in chains can have policies.
- This parameter requires the C(chain) parameter.
- Ignores all other parameters.
type: str
choices: [ ACCEPT, DROP, QUEUE, RETURN ]
version_added: "2.2"
wait:
description:
- Wait N seconds for the xtables lock to prevent multiple instances of
the program from running concurrently.
type: str
version_added: "2.10"
'''
EXAMPLES = r'''
- name: Block specific IP
iptables:
chain: INPUT
source: 8.8.8.8
jump: DROP
become: yes
- name: Forward port 80 to 8600
iptables:
table: nat
chain: PREROUTING
in_interface: eth0
protocol: tcp
match: tcp
destination_port: 80
jump: REDIRECT
to_ports: 8600
comment: Redirect web traffic to port 8600
become: yes
- name: Allow related and established connections
iptables:
chain: INPUT
ctstate: ESTABLISHED,RELATED
jump: ACCEPT
become: yes
- name: Allow new incoming SYN packets on TCP port 22 (SSH).
iptables:
chain: INPUT
protocol: tcp
destination_port: 22
ctstate: NEW
syn: match
jump: ACCEPT
comment: Accept new SSH connections.
- name: Match on IP ranges
iptables:
chain: FORWARD
src_range: 192.168.1.100-192.168.1.199
dst_range: 10.0.0.1-10.0.0.50
jump: ACCEPT
- name: Tag all outbound tcp packets with DSCP mark 8
iptables:
chain: OUTPUT
jump: DSCP
table: mangle
set_dscp_mark: 8
protocol: tcp
- name: Tag all outbound tcp packets with DSCP DiffServ class CS1
iptables:
chain: OUTPUT
jump: DSCP
table: mangle
set_dscp_mark_class: CS1
protocol: tcp
- name: Insert a rule on line 5
iptables:
chain: INPUT
protocol: tcp
destination_port: 8080
jump: ACCEPT
action: insert
rule_num: 5
- name: Set the policy for the INPUT chain to DROP
iptables:
chain: INPUT
policy: DROP
- name: Reject tcp with tcp-reset
iptables:
chain: INPUT
protocol: tcp
reject_with: tcp-reset
ip_version: ipv4
- name: Set tcp flags
iptables:
chain: OUTPUT
jump: DROP
protocol: tcp
tcp_flags:
flags: ALL
flags_set:
- ACK
- RST
- SYN
- FIN
- name: iptables flush filter
iptables:
chain: "{{ item }}"
flush: yes
with_items: [ 'INPUT', 'FORWARD', 'OUTPUT' ]
- name: iptables flush nat
iptables:
table: nat
chain: '{{ item }}'
flush: yes
with_items: [ 'INPUT', 'OUTPUT', 'PREROUTING', 'POSTROUTING' ]
- name: Log packets arriving into an user-defined chain
iptables:
chain: LOGGING
action: append
state: present
limit: 2/second
limit_burst: 20
log_prefix: "IPTABLES:INFO: "
log_level: info
'''
import re
from distutils.version import LooseVersion
from ansible.module_utils.basic import AnsibleModule
IPTABLES_WAIT_SUPPORT_ADDED = '1.4.20'
IPTABLES_WAIT_WITH_SECONDS_SUPPORT_ADDED = '1.6.0'
BINS = dict(
ipv4='iptables',
ipv6='ip6tables',
)
ICMP_TYPE_OPTIONS = dict(
ipv4='--icmp-type',
ipv6='--icmpv6-type',
)
def append_param(rule, param, flag, is_list):
if is_list:
for item in param:
append_param(rule, item, flag, False)
else:
if param is not None:
if param[0] == '!':
rule.extend(['!', flag, param[1:]])
else:
rule.extend([flag, param])
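# Illustrative behaviour of append_param() (hypothetical inputs, shown as
# comments only): a plain value extends the rule with [flag, value], e.g.
#   append_param(rule, '8.8.8.8', '-s', False)   -> rule += ['-s', '8.8.8.8']
# while a leading '!' negates the match:
#   append_param(rule, '!8.8.8.8', '-s', False)  -> rule += ['!', '-s', '8.8.8.8']
# and is_list=True applies the same expansion to every item of a list.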
def append_tcp_flags(rule, param, flag):
if param:
if 'flags' in param and 'flags_set' in param:
rule.extend([flag, ','.join(param['flags']), ','.join(param['flags_set'])])
def append_match_flag(rule, param, flag, negatable):
if param == 'match':
rule.extend([flag])
elif negatable and param == 'negate':
rule.extend(['!', flag])
def append_csv(rule, param, flag):
if param:
rule.extend([flag, ','.join(param)])
def append_match(rule, param, match):
if param:
rule.extend(['-m', match])
def append_jump(rule, param, jump):
if param:
rule.extend(['-j', jump])
def append_wait(rule, param, flag):
if param:
rule.extend([flag, param])
def construct_rule(params):
rule = []
append_wait(rule, params['wait'], '-w')
append_param(rule, params['protocol'], '-p', False)
append_param(rule, params['source'], '-s', False)
append_param(rule, params['destination'], '-d', False)
append_param(rule, params['match'], '-m', True)
append_tcp_flags(rule, params['tcp_flags'], '--tcp-flags')
append_param(rule, params['jump'], '-j', False)
if params.get('jump') and params['jump'].lower() == 'tee':
append_param(rule, params['gateway'], '--gateway', False)
append_param(rule, params['log_prefix'], '--log-prefix', False)
append_param(rule, params['log_level'], '--log-level', False)
append_param(rule, params['to_destination'], '--to-destination', False)
append_param(rule, params['to_source'], '--to-source', False)
append_param(rule, params['goto'], '-g', False)
append_param(rule, params['in_interface'], '-i', False)
append_param(rule, params['out_interface'], '-o', False)
append_param(rule, params['fragment'], '-f', False)
append_param(rule, params['set_counters'], '-c', False)
append_param(rule, params['source_port'], '--source-port', False)
append_param(rule, params['destination_port'], '--destination-port', False)
append_param(rule, params['to_ports'], '--to-ports', False)
append_param(rule, params['set_dscp_mark'], '--set-dscp', False)
append_param(
rule,
params['set_dscp_mark_class'],
'--set-dscp-class',
False)
append_match_flag(rule, params['syn'], '--syn', True)
append_match(rule, params['comment'], 'comment')
append_param(rule, params['comment'], '--comment', False)
if 'conntrack' in params['match']:
append_csv(rule, params['ctstate'], '--ctstate')
elif 'state' in params['match']:
append_csv(rule, params['ctstate'], '--state')
elif params['ctstate']:
append_match(rule, params['ctstate'], 'conntrack')
append_csv(rule, params['ctstate'], '--ctstate')
if 'iprange' in params['match']:
append_param(rule, params['src_range'], '--src-range', False)
append_param(rule, params['dst_range'], '--dst-range', False)
elif params['src_range'] or params['dst_range']:
append_match(rule, params['src_range'] or params['dst_range'], 'iprange')
append_param(rule, params['src_range'], '--src-range', False)
append_param(rule, params['dst_range'], '--dst-range', False)
append_match(rule, params['limit'] or params['limit_burst'], 'limit')
append_param(rule, params['limit'], '--limit', False)
append_param(rule, params['limit_burst'], '--limit-burst', False)
append_match(rule, params['uid_owner'], 'owner')
append_match_flag(rule, params['uid_owner'], '--uid-owner', True)
append_param(rule, params['uid_owner'], '--uid-owner', False)
append_match(rule, params['gid_owner'], 'owner')
append_match_flag(rule, params['gid_owner'], '--gid-owner', True)
append_param(rule, params['gid_owner'], '--gid-owner', False)
if params['jump'] is None:
append_jump(rule, params['reject_with'], 'REJECT')
append_param(rule, params['reject_with'], '--reject-with', False)
append_param(
rule,
params['icmp_type'],
ICMP_TYPE_OPTIONS[params['ip_version']],
False)
return rule
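# Rough sketch of the output (hypothetical values): for params containing
# protocol='tcp', source='8.8.8.8', jump='DROP' and everything else unset,
# construct_rule() returns roughly ['-p', 'tcp', '-s', '8.8.8.8', '-j', 'DROP'];
# push_arguments() below then prepends the iptables binary, '-t <table>' and
# the action/chain pair (plus rule_num for inserts).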
def push_arguments(iptables_path, action, params, make_rule=True):
cmd = [iptables_path]
cmd.extend(['-t', params['table']])
cmd.extend([action, params['chain']])
if action == '-I' and params['rule_num']:
cmd.extend([params['rule_num']])
if make_rule:
cmd.extend(construct_rule(params))
return cmd
def check_present(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-C', params)
rc, _, __ = module.run_command(cmd, check_rc=False)
return (rc == 0)
def append_rule(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-A', params)
module.run_command(cmd, check_rc=True)
def insert_rule(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-I', params)
module.run_command(cmd, check_rc=True)
def remove_rule(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-D', params)
module.run_command(cmd, check_rc=True)
def flush_table(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-F', params, make_rule=False)
module.run_command(cmd, check_rc=True)
def set_chain_policy(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-P', params, make_rule=False)
cmd.append(params['policy'])
module.run_command(cmd, check_rc=True)
def get_chain_policy(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-L', params)
rc, out, _ = module.run_command(cmd, check_rc=True)
chain_header = out.split("\n")[0]
result = re.search(r'\(policy ([A-Z]+)\)', chain_header)
if result:
return result.group(1)
return None
def get_iptables_version(iptables_path, module):
cmd = [iptables_path, '--version']
rc, out, _ = module.run_command(cmd, check_rc=True)
return out.split('v')[1].rstrip('\n')
def main():
module = AnsibleModule(
supports_check_mode=True,
argument_spec=dict(
table=dict(type='str', default='filter', choices=['filter', 'nat', 'mangle', 'raw', 'security']),
state=dict(type='str', default='present', choices=['absent', 'present']),
action=dict(type='str', default='append', choices=['append', 'insert']),
ip_version=dict(type='str', default='ipv4', choices=['ipv4', 'ipv6']),
chain=dict(type='str'),
rule_num=dict(type='str'),
protocol=dict(type='str'),
wait=dict(type='str'),
source=dict(type='str'),
to_source=dict(type='str'),
destination=dict(type='str'),
to_destination=dict(type='str'),
match=dict(type='list', default=[]),
tcp_flags=dict(type='dict',
options=dict(
flags=dict(type='list'),
flags_set=dict(type='list'))
),
jump=dict(type='str'),
gateway=dict(type='str'),
log_prefix=dict(type='str'),
log_level=dict(type='str',
choices=['0', '1', '2', '3', '4', '5', '6', '7',
'emerg', 'alert', 'crit', 'error',
'warning', 'notice', 'info', 'debug'],
default=None,
),
goto=dict(type='str'),
in_interface=dict(type='str'),
out_interface=dict(type='str'),
fragment=dict(type='str'),
set_counters=dict(type='str'),
source_port=dict(type='str'),
destination_port=dict(type='str'),
to_ports=dict(type='str'),
set_dscp_mark=dict(type='str'),
set_dscp_mark_class=dict(type='str'),
comment=dict(type='str'),
ctstate=dict(type='list', default=[]),
src_range=dict(type='str'),
dst_range=dict(type='str'),
limit=dict(type='str'),
limit_burst=dict(type='str'),
uid_owner=dict(type='str'),
gid_owner=dict(type='str'),
reject_with=dict(type='str'),
icmp_type=dict(type='str'),
syn=dict(type='str', default='ignore', choices=['ignore', 'match', 'negate']),
flush=dict(type='bool', default=False),
policy=dict(type='str', choices=['ACCEPT', 'DROP', 'QUEUE', 'RETURN']),
),
mutually_exclusive=(
['set_dscp_mark', 'set_dscp_mark_class'],
['flush', 'policy'],
),
required_if=[
['jump', 'TEE', ['gateway']],
['jump', 'tee', ['gateway']],
]
)
args = dict(
changed=False,
failed=False,
ip_version=module.params['ip_version'],
table=module.params['table'],
chain=module.params['chain'],
flush=module.params['flush'],
rule=' '.join(construct_rule(module.params)),
state=module.params['state'],
)
ip_version = module.params['ip_version']
iptables_path = module.get_bin_path(BINS[ip_version], True)
# Check if chain option is required
if args['flush'] is False and args['chain'] is None:
module.fail_json(msg="Either chain or flush parameter must be specified.")
if module.params.get('log_prefix', None) or module.params.get('log_level', None):
if module.params['jump'] is None:
module.params['jump'] = 'LOG'
elif module.params['jump'] != 'LOG':
module.fail_json(msg="Logging options can only be used with the LOG jump target.")
# Check if wait option is supported
iptables_version = LooseVersion(get_iptables_version(iptables_path, module))
if iptables_version >= LooseVersion(IPTABLES_WAIT_SUPPORT_ADDED):
if iptables_version < LooseVersion(IPTABLES_WAIT_WITH_SECONDS_SUPPORT_ADDED):
module.params['wait'] = ''
else:
module.params['wait'] = None
# Flush the table
if args['flush'] is True:
args['changed'] = True
if not module.check_mode:
flush_table(iptables_path, module, module.params)
# Set the policy
elif module.params['policy']:
current_policy = get_chain_policy(iptables_path, module, module.params)
if not current_policy:
module.fail_json(msg='Can\'t detect current policy')
changed = current_policy != module.params['policy']
args['changed'] = changed
if changed and not module.check_mode:
set_chain_policy(iptables_path, module, module.params)
else:
insert = (module.params['action'] == 'insert')
rule_is_present = check_present(iptables_path, module, module.params)
should_be_present = (args['state'] == 'present')
# Check if target is up to date
args['changed'] = (rule_is_present != should_be_present)
if args['changed'] is False:
# Target is already up to date
module.exit_json(**args)
# Check only; don't modify
if not module.check_mode:
if should_be_present:
if insert:
insert_rule(iptables_path, module, module.params)
else:
append_rule(iptables_path, module, module.params)
else:
remove_rule(iptables_path, module, module.params)
module.exit_json(**args)
if __name__ == '__main__':
main()
| gpl-3.0 | -7,721,591,528,611,428,000 | 34.596734 | 128 | 0.623116 | false | 3.81308 | false | false | false |
TheoChevalier/bedrock | bedrock/mozorg/middleware.py | 11 | 2753 | # This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import datetime
from email.utils import formatdate
import time
from django.conf import settings
from django.core.exceptions import MiddlewareNotUsed
from django_statsd.middleware import GraphiteRequestTimingMiddleware
class CacheMiddleware(object):
def process_response(self, request, response):
cache = (request.method != 'POST' and
response.status_code != 404 and
'Cache-Control' not in response)
if cache:
d = datetime.datetime.now() + datetime.timedelta(minutes=10)
stamp = time.mktime(d.timetuple())
response['Cache-Control'] = 'max-age=600'
response['Expires'] = formatdate(timeval=stamp, localtime=False,
usegmt=True)
return response
class MozorgRequestTimingMiddleware(GraphiteRequestTimingMiddleware):
def process_view(self, request, view, view_args, view_kwargs):
if hasattr(view, 'page_name'):
request._view_module = 'page'
request._view_name = view.page_name.replace('/', '.')
request._start_time = time.time()
else:
f = super(MozorgRequestTimingMiddleware, self)
f.process_view(request, view, view_args, view_kwargs)
class ClacksOverheadMiddleware(object):
# bug 1144901
@staticmethod
def process_response(request, response):
if response.status_code == 200:
response['X-Clacks-Overhead'] = 'GNU Terry Pratchett'
return response
class HostnameMiddleware(object):
def __init__(self):
if not settings.ENABLE_HOSTNAME_MIDDLEWARE:
raise MiddlewareNotUsed
values = [getattr(settings, x) for x in ['HOSTNAME', 'DEIS_APP', 'DEIS_DOMAIN']]
self.backend_server = '.'.join(x for x in values if x)
def process_response(self, request, response):
response['X-Backend-Server'] = self.backend_server
return response
class VaryNoCacheMiddleware(object):
def __init__(self):
if not settings.ENABLE_VARY_NOCACHE_MIDDLEWARE:
raise MiddlewareNotUsed
@staticmethod
def process_response(request, response):
if 'vary' in response:
path = request.path
if path != '/' and not any(path.startswith(x) for x in
settings.VARY_NOCACHE_EXEMPT_URL_PREFIXES):
del response['vary']
del response['expires']
response['Cache-Control'] = 'max-age=0'
return response
| mpl-2.0 | -4,599,908,344,236,206,000 | 33.4125 | 88 | 0.626589 | false | 4.164902 | false | false | false |
wcmckee/moejobs-site | cache/.mako.tmp/comments_helper_googleplus.tmpl.py | 1 | 2430 | # -*- coding:utf-8 -*-
from mako import runtime, filters, cache
UNDEFINED = runtime.UNDEFINED
STOP_RENDERING = runtime.STOP_RENDERING
__M_dict_builtin = dict
__M_locals_builtin = locals
_magic_number = 10
_modified_time = 1443802885.4031692
_enable_loop = True
_template_filename = '/usr/local/lib/python3.4/dist-packages/nikola/data/themes/base/templates/comments_helper_googleplus.tmpl'
_template_uri = 'comments_helper_googleplus.tmpl'
_source_encoding = 'utf-8'
_exports = ['comment_link_script', 'comment_form', 'comment_link']
def render_body(context,**pageargs):
__M_caller = context.caller_stack._push_frame()
try:
__M_locals = __M_dict_builtin(pageargs=pageargs)
__M_writer = context.writer()
__M_writer('\n\n')
__M_writer('\n\n')
__M_writer('\n')
return ''
finally:
context.caller_stack._pop_frame()
def render_comment_link_script(context):
__M_caller = context.caller_stack._push_frame()
try:
__M_writer = context.writer()
__M_writer('\n')
return ''
finally:
context.caller_stack._pop_frame()
def render_comment_form(context,url,title,identifier):
__M_caller = context.caller_stack._push_frame()
try:
__M_writer = context.writer()
__M_writer('\n<script src="https://apis.google.com/js/plusone.js"></script>\n<div class="g-comments"\n data-href="')
__M_writer(str(url))
__M_writer('"\n data-first_party_property="BLOGGER"\n data-view_type="FILTERED_POSTMOD">\n</div>\n')
return ''
finally:
context.caller_stack._pop_frame()
def render_comment_link(context,link,identifier):
__M_caller = context.caller_stack._push_frame()
try:
__M_writer = context.writer()
__M_writer('\n<div class="g-commentcount" data-href="')
__M_writer(str(link))
__M_writer('"></div>\n<script src="https://apis.google.com/js/plusone.js"></script>\n')
return ''
finally:
context.caller_stack._pop_frame()
"""
__M_BEGIN_METADATA
{"uri": "comments_helper_googleplus.tmpl", "source_encoding": "utf-8", "filename": "/usr/local/lib/python3.4/dist-packages/nikola/data/themes/base/templates/comments_helper_googleplus.tmpl", "line_map": {"33": 16, "39": 2, "57": 12, "43": 2, "44": 5, "45": 5, "16": 0, "51": 11, "21": 9, "22": 14, "23": 17, "56": 12, "55": 11, "29": 16, "63": 57}}
__M_END_METADATA
"""
| mit | -6,546,587,520,571,600,000 | 35.268657 | 348 | 0.617695 | false | 3.079848 | false | false | false |
muffinresearch/olympia | conftest.py | 6 | 4031 | from django import http, test
from django.conf import settings
from django.core.cache import cache
from django.utils import translation
import caching
import pytest
import amo
from access.models import Group, GroupUser
from translations.hold import clean_translations
from users.models import UserProfile
@pytest.fixture(autouse=True)
def mock_inline_css(monkeypatch):
"""Mock jingo_minify.helpers.is_external: don't break on missing files.
When testing, we don't want nor need the bundled/minified css files, so
pretend that all the css files are external.
Mocking this will prevent amo.helpers.inline_css to believe it should
bundle the css.
"""
import amo.helpers
monkeypatch.setattr(amo.helpers, 'is_external', lambda css: True)
def prefix_indexes(config):
"""Prefix all ES index names and cache keys with `test_` and, if running
under xdist, the ID of the current slave."""
if hasattr(config, 'slaveinput'):
prefix = 'test_{[slaveid]}'.format(config.slaveinput)
else:
prefix = 'test'
from django.conf import settings
# Ideally, this should be a session-scoped fixture that gets injected into
# any test that requires ES. This would be especially useful, as it would
# allow xdist to transparently group all ES tests into a single process.
    # Unfortunately, it's surprisingly difficult to achieve with our current
# unittest-based setup.
for key, index in settings.ES_INDEXES.items():
if not index.startswith(prefix):
settings.ES_INDEXES[key] = '{prefix}_amo_{index}'.format(
prefix=prefix, index=index)
settings.CACHE_PREFIX = 'amo:{0}:'.format(prefix)
settings.KEY_PREFIX = settings.CACHE_PREFIX
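    # For example (illustrative values only): under xdist with slaveid 'gw0',
    # an index named 'default' becomes 'test_gw0_amo_default' and
    # CACHE_PREFIX becomes 'amo:test_gw0:'.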
def pytest_configure(config):
prefix_indexes(config)
@pytest.fixture(autouse=True, scope='session')
def instrument_jinja():
"""Make sure the "templates" list in a response is properly updated, even
though we're using Jinja2 and not the default django template engine."""
import jinja2
old_render = jinja2.Template.render
def instrumented_render(self, *args, **kwargs):
context = dict(*args, **kwargs)
test.signals.template_rendered.send(
sender=self, template=self, context=context)
return old_render(self, *args, **kwargs)
jinja2.Template.render = instrumented_render
def default_prefixer():
"""Make sure each test starts with a default URL prefixer."""
request = http.HttpRequest()
request.META['SCRIPT_NAME'] = ''
prefixer = amo.urlresolvers.Prefixer(request)
prefixer.app = settings.DEFAULT_APP
prefixer.locale = settings.LANGUAGE_CODE
amo.urlresolvers.set_url_prefix(prefixer)
@pytest.fixture(autouse=True)
def test_pre_setup():
cache.clear()
# Override django-cache-machine caching.base.TIMEOUT because it's
# computed too early, before settings_test.py is imported.
caching.base.TIMEOUT = settings.CACHE_COUNT_TIMEOUT
translation.trans_real.deactivate()
# Django fails to clear this cache.
translation.trans_real._translations = {}
translation.trans_real.activate(settings.LANGUAGE_CODE)
# Reset the prefixer.
default_prefixer()
@pytest.fixture(autouse=True)
def test_post_teardown():
amo.set_user(None)
clean_translations(None) # Make sure queued translations are removed.
# Make sure we revert everything we might have changed to prefixers.
amo.urlresolvers.clean_url_prefixes()
@pytest.fixture
def admin_group(db):
"""Create the Admins group."""
return Group.objects.create(name='Admins', rules='*:*')
@pytest.fixture
def mozilla_user(admin_group):
"""Create a "Mozilla User"."""
user = UserProfile.objects.create(pk=settings.TASK_USER_ID,
email='[email protected]',
username='admin')
user.set_password('password')
user.save()
GroupUser.objects.create(user=user, group=admin_group)
return user
| bsd-3-clause | -6,607,943,819,425,651,000 | 30.992063 | 78 | 0.696105 | false | 3.963618 | true | false | false |
m039/Void | third-party/void-boost/tools/build/src/build/scanner.py | 8 | 6258 | # Status: ported.
# Base revision: 45462
#
# Copyright 2003 Dave Abrahams
# Copyright 2002, 2003, 2004, 2005 Vladimir Prus
# Distributed under the Boost Software License, Version 1.0.
# (See accompanying file LICENSE_1_0.txt or http://www.boost.org/LICENSE_1_0.txt)
# Implements scanners: objects that compute implicit dependencies for
# files, such as includes in C++.
#
# Scanner has a regular expression used to find dependencies, some
# data needed to interpret those dependencies (for example, include
# paths), and a code which actually established needed relationship
# between actual jam targets.
#
# Scanner objects are created by actions, when they try to actualize
# virtual targets, passed to 'virtual-target.actualize' method and are
# then associated with actual targets. It is possible to use
# several scanners for a virtual-target. For example, a single source
# might be used by two compile actions, with different include paths.
# In this case, two different actual targets will be created, each
# having scanner of its own.
#
# Typically, scanners are created from target type and action's
# properties, using the rule 'get' in this module. Directly creating
# scanners is not recommended, because it might create many equivalent
# but different instances, and lead to unneeded duplication of
# actual targets. However, actions can also create scanners in a special
# way, instead of relying on just target type.
import property
import bjam
import os
from b2.manager import get_manager
from b2.util import is_iterable_typed
def reset ():
""" Clear the module state. This is mainly for testing purposes.
"""
global __scanners, __rv_cache, __scanner_cache
# Maps registered scanner classes to relevant properties
__scanners = {}
# A cache of scanners.
# The key is: class_name.properties_tag, where properties_tag is the concatenation
# of all relevant properties, separated by '-'
__scanner_cache = {}
reset ()
def register(scanner_class, relevant_properties):
""" Registers a new generator class, specifying a set of
properties relevant to this scanner. Ctor for that class
should have one parameter: list of properties.
"""
assert issubclass(scanner_class, Scanner)
assert isinstance(relevant_properties, basestring)
__scanners[str(scanner_class)] = relevant_properties
def registered(scanner_class):
""" Returns true iff a scanner of that class is registered
"""
return str(scanner_class) in __scanners
def get(scanner_class, properties):
""" Returns an instance of previously registered scanner
with the specified properties.
"""
assert issubclass(scanner_class, Scanner)
assert is_iterable_typed(properties, basestring)
scanner_name = str(scanner_class)
if not registered(scanner_name):
raise BaseException ("attempt to get unregisted scanner: %s" % scanner_name)
relevant_properties = __scanners[scanner_name]
r = property.select(relevant_properties, properties)
scanner_id = scanner_name + '.' + '-'.join(r)
if scanner_id not in __scanner_cache:
__scanner_cache[scanner_id] = scanner_class(r)
return __scanner_cache[scanner_id]
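# Usage sketch (illustrative only; the scanner class and property below are
# hypothetical, not part of Boost.Build):
#
#   register(MyIncludeScanner, 'include')           # declare relevant features
#   s1 = get(MyIncludeScanner, ['<include>foo'])    # instance is created once
#   s2 = get(MyIncludeScanner, ['<include>foo'])    # ... and cached per
#   assert s1 is s2                                 # relevant-property subset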
class Scanner:
""" Base scanner class.
"""
def __init__ (self):
pass
def pattern (self):
""" Returns a pattern to use for scanning.
"""
raise BaseException ("method must be overriden")
def process (self, target, matches, binding):
""" Establish necessary relationship between targets,
            given actual target being scanned, and a list of
pattern matches in that file.
"""
raise BaseException ("method must be overriden")
# Common scanner class, which can be used when there's only one
# kind of includes (unlike C, where "" and <> includes have different
# search paths).
class CommonScanner(Scanner):
def __init__ (self, includes):
Scanner.__init__(self)
self.includes = includes
def process(self, target, matches, binding):
target_path = os.path.normpath(os.path.dirname(binding[0]))
bjam.call("mark-included", target, matches)
get_manager().engine().set_target_variable(matches, "SEARCH",
[target_path] + self.includes)
get_manager().scanners().propagate(self, matches)
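# Illustrative sketch only (not part of the original module): a minimal concrete
# scanner built on CommonScanner, assuming a C-like '#include "..."' syntax. It
# is defined purely as an example and is never registered or instantiated here.
class _ExampleQuotedIncludeScanner(CommonScanner):

    def pattern(self):
        # HDRSCAN-style regular expression capturing the quoted include path.
        return r'#[ \t]*include[ \t]*"([^"]*)"'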
class ScannerRegistry:
def __init__ (self, manager):
self.manager_ = manager
self.count_ = 0
self.exported_scanners_ = {}
def install (self, scanner, target, vtarget):
""" Installs the specified scanner on actual target 'target'.
vtarget: virtual target from which 'target' was actualized.
"""
assert isinstance(scanner, Scanner)
assert isinstance(target, basestring)
assert isinstance(vtarget, basestring)
engine = self.manager_.engine()
engine.set_target_variable(target, "HDRSCAN", scanner.pattern())
if scanner not in self.exported_scanners_:
exported_name = "scanner_" + str(self.count_)
self.count_ = self.count_ + 1
self.exported_scanners_[scanner] = exported_name
bjam.import_rule("", exported_name, scanner.process)
else:
exported_name = self.exported_scanners_[scanner]
engine.set_target_variable(target, "HDRRULE", exported_name)
        # The scanner reflects differences in the properties affecting the
        # binding of 'target'; those will be known when processing includes
        # for it and will determine how quoted includes are interpreted.
engine.set_target_variable(target, "HDRGRIST", str(id(scanner)))
pass
def propagate(self, scanner, targets):
assert isinstance(scanner, Scanner)
assert is_iterable_typed(targets, basestring) or isinstance(targets, basestring)
engine = self.manager_.engine()
engine.set_target_variable(targets, "HDRSCAN", scanner.pattern())
engine.set_target_variable(targets, "HDRRULE",
self.exported_scanners_[scanner])
engine.set_target_variable(targets, "HDRGRIST", str(id(scanner)))
| mit | 6,886,645,102,377,342,000 | 36.473054 | 88 | 0.675935 | false | 4.208473 | false | false | false |
dieterich-lab/DCC | DCC/circFilter.py | 1 | 5910 | import numpy as np
import os
import sys
import HTSeq
from IntervalTree import IntervalTree
##########################
# Input of this script #
##########################
# This script takes as input a count table:
# chr start end junctiontype count1 count2 ... countn
# a repetitive region file in GTF format,
# and a minimum circular RNA length.
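# Example of one (hypothetical) count-table row in that format, with three
# replicates:
# chr1    10525432    10535678    1    15    8    11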
class Circfilter(object):
def __init__(self, length, countthreshold, replicatethreshold, tmp_dir):
'''
counttable: the circular RNA count file, typically generated by findcircRNA.py: chr start end junctiontype count1 count2 ... countn
        rep_file: the GTF file specifying the repetitive regions of the analyzed genome
length: the minimum length of circular RNAs
countthreshold: the minimum expression level of junction type 1 circular RNAs
'''
# self.counttable = counttable
# self.rep_file = rep_file
self.length = int(length)
# self.level0 = int(level0)
self.countthreshold = int(countthreshold)
# self.threshold0 = int(threshold0)
self.replicatethreshold = int(replicatethreshold)
self.tmp_dir = tmp_dir
# Read circRNA count and coordinates information to numpy array
def readcirc(self, countfile, coordinates):
# Read the circRNA count file
circ = open(countfile, 'r')
coor = open(coordinates, 'r')
count = []
indx = []
for line in circ:
fields = line.split('\t')
# row_indx = [str(itm) for itm in fields[0:4]]
# print row_indx
try:
row_count = [int(itm) for itm in fields[4:]]
except ValueError:
row_count = [float(itm) for itm in fields[4:]]
count.append(row_count)
# indx.append(row_indx)
for line in coor:
fields = line.split('\t')
row_indx = [str(itm).strip() for itm in fields[0:6]]
indx.append(row_indx)
count = np.array(count)
indx = np.array(indx)
circ.close()
return count, indx
# Do filtering
def filtercount(self, count, indx):
print 'Filtering by read counts'
sel = [] # store the passed filtering rows
for itm in range(len(count)):
if indx[itm][4] == '0':
# if sum( count[itm] >= self.level0 ) >= self.threshold0:
# sel.append(itm)
pass
elif indx[itm][4] != '0':
if sum(count[itm] >= self.countthreshold) >= self.replicatethreshold:
sel.append(itm)
# splicing the passed filtering rows
if len(sel) == 0:
sys.exit("No circRNA passed the expression threshold filtering.")
return count[sel], indx[sel]
def read_rep_region(self, regionfile):
regions = HTSeq.GFF_Reader(regionfile, end_included=True)
rep_tree = IntervalTree()
for feature in regions:
iv = feature.iv
rep_tree.insert(iv, annotation='.')
return rep_tree
def filter_nonrep(self, regionfile, indx0, count0):
if not regionfile is None:
rep_tree = self.read_rep_region(regionfile)
def numpy_array_2_GenomiInterval(array):
left = HTSeq.GenomicInterval(str(array[0]), int(array[1]), int(array[1]) + self.length, str(array[5]))
right = HTSeq.GenomicInterval(str(array[0]), int(array[2]) - self.length, int(array[2]), str(array[5]))
return left, right
keep_index = []
for i, j in enumerate(indx0):
out = []
left, right = numpy_array_2_GenomiInterval(j)
rep_tree.intersect(left, lambda x: out.append(x))
rep_tree.intersect(right, lambda x: out.append(x))
if not out:
# not in repetitive region
keep_index.append(i)
indx0 = indx0[keep_index]
count0 = count0[keep_index]
nonrep = np.column_stack((indx0, count0))
# write the result
np.savetxt(self.tmp_dir + 'tmp_unsortedWithChrM', nonrep, delimiter='\t', newline='\n', fmt='%s')
def dummy_filter(self, indx0, count0):
nonrep = np.column_stack((indx0, count0))
# write the result
np.savetxt(self.tmp_dir + 'tmp_unsortedWithChrM', nonrep, delimiter='\t', newline='\n', fmt='%s')
def removeChrM(self, withChrM):
print 'Remove ChrM'
unremoved = open(withChrM, 'r').readlines()
removed = []
for lines in unremoved:
if not lines.startswith('chrM') and not lines.startswith('MT'):
removed.append(lines)
removedfile = open(self.tmp_dir + 'tmp_unsortedNoChrM', 'w')
removedfile.writelines(removed)
removedfile.close()
def sortOutput(self, unsorted, outCount, outCoordinates, samplelist=None):
        # Sample list is a string with sample names separated by \t.
        # Coordinate and count information are split apart here in case they
        # were integrated into one input file.
count = open(outCount, 'w')
coor = open(outCoordinates, 'w')
if samplelist:
count.write('Chr\tStart\tEnd\t' + samplelist + '\n')
lines = open(unsorted).readlines()
for line in lines:
linesplit = [x.strip() for x in line.split('\t')]
count.write('\t'.join(linesplit[0:3] + list(linesplit[6:])) + '\n')
coor.write('\t'.join(linesplit[0:6]) + '\n')
coor.close()
count.close()
def remove_tmp(self):
try:
os.remove(self.tmp_dir + 'tmp_left')
os.remove(self.tmp_dir + 'tmp_right')
os.remove(self.tmp_dir + 'tmp_unsortedWithChrM')
os.remove(self.tmp_dir + 'tmp_unsortedNoChrM')
except OSError:
pass
| gpl-3.0 | 5,880,789,453,425,107,000 | 37.881579 | 140 | 0.573604 | false | 3.740506 | false | false | false |
gx1997/chrome-loongson | net/tools/testserver/chromiumsync.py | 9 | 50327 | # Copyright (c) 2012 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""An implementation of the server side of the Chromium sync protocol.
The details of the protocol are described mostly by comments in the protocol
buffer definition at chrome/browser/sync/protocol/sync.proto.
"""
import cgi
import copy
import operator
import pickle
import random
import sys
import threading
import time
import urlparse
import app_notification_specifics_pb2
import app_setting_specifics_pb2
import app_specifics_pb2
import autofill_specifics_pb2
import bookmark_specifics_pb2
import get_updates_caller_info_pb2
import extension_setting_specifics_pb2
import extension_specifics_pb2
import nigori_specifics_pb2
import password_specifics_pb2
import preference_specifics_pb2
import search_engine_specifics_pb2
import session_specifics_pb2
import sync_pb2
import sync_enums_pb2
import theme_specifics_pb2
import typed_url_specifics_pb2
# An enumeration of the various kinds of data that can be synced.
# Over the wire, this enumeration is not used: a sync object's type is
# inferred by which EntitySpecifics field it has. But in the context
# of a program, it is useful to have an enumeration.
ALL_TYPES = (
TOP_LEVEL, # The type of the 'Google Chrome' folder.
APPS,
APP_NOTIFICATION,
APP_SETTINGS,
AUTOFILL,
AUTOFILL_PROFILE,
BOOKMARK,
EXTENSIONS,
NIGORI,
PASSWORD,
PREFERENCE,
SEARCH_ENGINE,
SESSION,
THEME,
TYPED_URL,
EXTENSION_SETTINGS) = range(16)
# An enumeration of the frequency at which the server should send errors
# to the client. This is specified by the URL that triggers the error.
# Note: This enum should be kept in the same order as the enum in sync_test.h.
SYNC_ERROR_FREQUENCY = (
ERROR_FREQUENCY_NONE,
ERROR_FREQUENCY_ALWAYS,
ERROR_FREQUENCY_TWO_THIRDS) = range(3)
# Well-known server tag of the top level 'Google Chrome' folder.
TOP_LEVEL_FOLDER_TAG = 'google_chrome'
# Given a sync type from ALL_TYPES, find the FieldDescriptor corresponding
# to that datatype. Note that TOP_LEVEL has no such token.
SYNC_TYPE_FIELDS = sync_pb2.EntitySpecifics.DESCRIPTOR.fields_by_name
SYNC_TYPE_TO_DESCRIPTOR = {
APP_NOTIFICATION: SYNC_TYPE_FIELDS['app_notification'],
APP_SETTINGS: SYNC_TYPE_FIELDS['app_setting'],
APPS: SYNC_TYPE_FIELDS['app'],
AUTOFILL: SYNC_TYPE_FIELDS['autofill'],
AUTOFILL_PROFILE: SYNC_TYPE_FIELDS['autofill_profile'],
BOOKMARK: SYNC_TYPE_FIELDS['bookmark'],
EXTENSION_SETTINGS: SYNC_TYPE_FIELDS['extension_setting'],
EXTENSIONS: SYNC_TYPE_FIELDS['extension'],
NIGORI: SYNC_TYPE_FIELDS['nigori'],
PASSWORD: SYNC_TYPE_FIELDS['password'],
PREFERENCE: SYNC_TYPE_FIELDS['preference'],
SEARCH_ENGINE: SYNC_TYPE_FIELDS['search_engine'],
SESSION: SYNC_TYPE_FIELDS['session'],
THEME: SYNC_TYPE_FIELDS['theme'],
TYPED_URL: SYNC_TYPE_FIELDS['typed_url'],
}
# The parent ID used to indicate a top-level node.
ROOT_ID = '0'
# Unix time epoch in struct_time format. The tuple corresponds to UTC Wednesday
# Jan 1 1970, 00:00:00, non-dst.
UNIX_TIME_EPOCH = (1970, 1, 1, 0, 0, 0, 3, 1, 0)
class Error(Exception):
"""Error class for this module."""
class ProtobufDataTypeFieldNotUnique(Error):
"""An entry should not have more than one data type present."""
class DataTypeIdNotRecognized(Error):
"""The requested data type is not recognized."""
class MigrationDoneError(Error):
"""A server-side migration occurred; clients must re-sync some datatypes.
Attributes:
datatypes: a list of the datatypes (python enum) needing migration.
"""
def __init__(self, datatypes):
self.datatypes = datatypes
class StoreBirthdayError(Error):
"""The client sent a birthday that doesn't correspond to this server."""
class TransientError(Error):
"""The client would be sent a transient error."""
class SyncInducedError(Error):
"""The client would be sent an error."""
class InducedErrorFrequencyNotDefined(Error):
"""The error frequency defined is not handled."""
def GetEntryType(entry):
"""Extract the sync type from a SyncEntry.
Args:
entry: A SyncEntity protobuf object whose type to determine.
Returns:
An enum value from ALL_TYPES if the entry's type can be determined, or None
if the type cannot be determined.
Raises:
ProtobufDataTypeFieldNotUnique: More than one type was indicated by
the entry.
"""
if entry.server_defined_unique_tag == TOP_LEVEL_FOLDER_TAG:
return TOP_LEVEL
entry_types = GetEntryTypesFromSpecifics(entry.specifics)
if not entry_types:
return None
# If there is more than one, either there's a bug, or else the caller
# should use GetEntryTypes.
if len(entry_types) > 1:
raise ProtobufDataTypeFieldNotUnique
return entry_types[0]
def GetEntryTypesFromSpecifics(specifics):
"""Determine the sync types indicated by an EntitySpecifics's field(s).
If the specifics have more than one recognized data type field (as commonly
happens with the requested_types field of GetUpdatesMessage), all types
will be returned. Callers must handle the possibility of the returned
value having more than one item.
Args:
specifics: A EntitySpecifics protobuf message whose extensions to
enumerate.
Returns:
A list of the sync types (values from ALL_TYPES) associated with each
recognized extension of the specifics message.
"""
return [data_type for data_type, field_descriptor
in SYNC_TYPE_TO_DESCRIPTOR.iteritems()
if specifics.HasField(field_descriptor.name)]
def SyncTypeToProtocolDataTypeId(data_type):
"""Convert from a sync type (python enum) to the protocol's data type id."""
return SYNC_TYPE_TO_DESCRIPTOR[data_type].number
def ProtocolDataTypeIdToSyncType(protocol_data_type_id):
"""Convert from the protocol's data type id to a sync type (python enum)."""
for data_type, field_descriptor in SYNC_TYPE_TO_DESCRIPTOR.iteritems():
if field_descriptor.number == protocol_data_type_id:
return data_type
raise DataTypeIdNotRecognized
def DataTypeStringToSyncTypeLoose(data_type_string):
"""Converts a human-readable string to a sync type (python enum).
Capitalization and pluralization don't matter; this function is appropriate
for values that might have been typed by a human being; e.g., command-line
flags or query parameters.
"""
if data_type_string.isdigit():
return ProtocolDataTypeIdToSyncType(int(data_type_string))
name = data_type_string.lower().rstrip('s')
for data_type, field_descriptor in SYNC_TYPE_TO_DESCRIPTOR.iteritems():
if field_descriptor.name.lower().rstrip('s') == name:
return data_type
raise DataTypeIdNotRecognized
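# Illustrative examples for the loose conversion above (the names come from the
# fields registered in SYNC_TYPE_FIELDS): 'Bookmarks', 'bookmark' and 'BOOKMARK'
# all resolve to the BOOKMARK enum value, while an all-digit string is
# interpreted as a protocol data type id instead.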
def SyncTypeToString(data_type):
"""Formats a sync type enum (from ALL_TYPES) to a human-readable string."""
return SYNC_TYPE_TO_DESCRIPTOR[data_type].name
def CallerInfoToString(caller_info_source):
"""Formats a GetUpdatesSource enum value to a readable string."""
return get_updates_caller_info_pb2.GetUpdatesCallerInfo \
.DESCRIPTOR.enum_types_by_name['GetUpdatesSource'] \
.values_by_number[caller_info_source].name
def ShortDatatypeListSummary(data_types):
"""Formats compactly a list of sync types (python enums) for human eyes.
This function is intended for use by logging. If the list of datatypes
contains almost all of the values, the return value will be expressed
in terms of the datatypes that aren't set.
"""
included = set(data_types) - set([TOP_LEVEL])
if not included:
return 'nothing'
excluded = set(ALL_TYPES) - included - set([TOP_LEVEL])
if not excluded:
return 'everything'
simple_text = '+'.join(sorted([SyncTypeToString(x) for x in included]))
all_but_text = 'all except %s' % (
'+'.join(sorted([SyncTypeToString(x) for x in excluded])))
if len(included) < len(excluded) or len(simple_text) <= len(all_but_text):
return simple_text
else:
return all_but_text
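# Illustrative outputs of ShortDatatypeListSummary (hypothetical inputs):
#   [BOOKMARK, PREFERENCE]         -> 'bookmark+preference'
#   ALL_TYPES minus [TYPED_URL]    -> 'all except typed_url'
#   [TOP_LEVEL]                    -> 'nothing'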
def GetDefaultEntitySpecifics(data_type):
"""Get an EntitySpecifics having a sync type's default field value."""
specifics = sync_pb2.EntitySpecifics()
if data_type in SYNC_TYPE_TO_DESCRIPTOR:
descriptor = SYNC_TYPE_TO_DESCRIPTOR[data_type]
getattr(specifics, descriptor.name).SetInParent()
return specifics
class PermanentItem(object):
"""A specification of one server-created permanent item.
Attributes:
tag: A known-to-the-client value that uniquely identifies a server-created
permanent item.
name: The human-readable display name for this item.
parent_tag: The tag of the permanent item's parent. If ROOT_ID, indicates
a top-level item. Otherwise, this must be the tag value of some other
server-created permanent item.
sync_type: A value from ALL_TYPES, giving the datatype of this permanent
item. This controls which types of client GetUpdates requests will
cause the permanent item to be created and returned.
create_by_default: Whether the permanent item is created at startup or not.
This value is set to True in the default case. Non-default permanent items
are those that are created only when a client explicitly tells the server
to do so.
"""
def __init__(self, tag, name, parent_tag, sync_type, create_by_default=True):
self.tag = tag
self.name = name
self.parent_tag = parent_tag
self.sync_type = sync_type
self.create_by_default = create_by_default
class MigrationHistory(object):
"""A record of the migration events associated with an account.
Each migration event invalidates one or more datatypes on all clients
that had synced the datatype before the event. Such clients will continue
to receive MigrationDone errors until they throw away their progress and
re-sync that datatype from the beginning.
"""
def __init__(self):
self._migrations = {}
for datatype in ALL_TYPES:
self._migrations[datatype] = [1]
self._next_migration_version = 2
def GetLatestVersion(self, datatype):
return self._migrations[datatype][-1]
def CheckAllCurrent(self, versions_map):
"""Raises an error if any the provided versions are out of date.
This function intentionally returns migrations in the order that they were
triggered. Doing it this way allows the client to queue up two migrations
in a row, so the second one is received while responding to the first.
Arguments:
      versions_map: a map whose keys are datatypes and whose values are versions.
Raises:
MigrationDoneError: if a mismatch is found.
"""
problems = {}
for datatype, client_migration in versions_map.iteritems():
for server_migration in self._migrations[datatype]:
if client_migration < server_migration:
problems.setdefault(server_migration, []).append(datatype)
if problems:
raise MigrationDoneError(problems[min(problems.keys())])
def Bump(self, datatypes):
"""Add a record of a migration, to cause errors on future requests."""
for idx, datatype in enumerate(datatypes):
self._migrations[datatype].append(self._next_migration_version)
self._next_migration_version += 1
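# Illustrative sketch of how MigrationHistory is used (hypothetical sequence):
# every datatype starts at version 1; Bump([BOOKMARK]) records version 2 for
# bookmarks, so a client whose progress marker still carries version 1 for
# bookmarks gets MigrationDoneError from CheckAllCurrent until it resyncs.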
class UpdateSieve(object):
"""A filter to remove items the client has already seen."""
def __init__(self, request, migration_history=None):
self._original_request = request
self._state = {}
self._migration_history = migration_history or MigrationHistory()
self._migration_versions_to_check = {}
if request.from_progress_marker:
for marker in request.from_progress_marker:
data_type = ProtocolDataTypeIdToSyncType(marker.data_type_id)
if marker.HasField('timestamp_token_for_migration'):
timestamp = marker.timestamp_token_for_migration
if timestamp:
self._migration_versions_to_check[data_type] = 1
elif marker.token:
(timestamp, version) = pickle.loads(marker.token)
self._migration_versions_to_check[data_type] = version
elif marker.HasField('token'):
timestamp = 0
else:
raise ValueError('No timestamp information in progress marker.')
data_type = ProtocolDataTypeIdToSyncType(marker.data_type_id)
self._state[data_type] = timestamp
elif request.HasField('from_timestamp'):
for data_type in GetEntryTypesFromSpecifics(request.requested_types):
self._state[data_type] = request.from_timestamp
self._migration_versions_to_check[data_type] = 1
if self._state:
self._state[TOP_LEVEL] = min(self._state.itervalues())
def SummarizeRequest(self):
timestamps = {}
for data_type, timestamp in self._state.iteritems():
if data_type == TOP_LEVEL:
continue
timestamps.setdefault(timestamp, []).append(data_type)
return ', '.join('<%s>@%d' % (ShortDatatypeListSummary(types), stamp)
for stamp, types in sorted(timestamps.iteritems()))
def CheckMigrationState(self):
self._migration_history.CheckAllCurrent(self._migration_versions_to_check)
def ClientWantsItem(self, item):
"""Return true if the client hasn't already seen an item."""
return self._state.get(GetEntryType(item), sys.maxint) < item.version
def HasAnyTimestamp(self):
"""Return true if at least one datatype was requested."""
return bool(self._state)
def GetMinTimestamp(self):
"""Return true the smallest timestamp requested across all datatypes."""
return min(self._state.itervalues())
def GetFirstTimeTypes(self):
"""Return a list of datatypes requesting updates from timestamp zero."""
return [datatype for datatype, timestamp in self._state.iteritems()
if timestamp == 0]
def SaveProgress(self, new_timestamp, get_updates_response):
"""Write the new_timestamp or new_progress_marker fields to a response."""
if self._original_request.from_progress_marker:
for data_type, old_timestamp in self._state.iteritems():
if data_type == TOP_LEVEL:
continue
new_marker = sync_pb2.DataTypeProgressMarker()
new_marker.data_type_id = SyncTypeToProtocolDataTypeId(data_type)
final_stamp = max(old_timestamp, new_timestamp)
final_migration = self._migration_history.GetLatestVersion(data_type)
new_marker.token = pickle.dumps((final_stamp, final_migration))
if new_marker not in self._original_request.from_progress_marker:
get_updates_response.new_progress_marker.add().MergeFrom(new_marker)
elif self._original_request.HasField('from_timestamp'):
if self._original_request.from_timestamp < new_timestamp:
get_updates_response.new_timestamp = new_timestamp
class SyncDataModel(object):
"""Models the account state of one sync user."""
_BATCH_SIZE = 100
# Specify all the permanent items that a model might need.
_PERMANENT_ITEM_SPECS = [
PermanentItem('google_chrome', name='Google Chrome',
parent_tag=ROOT_ID, sync_type=TOP_LEVEL),
PermanentItem('google_chrome_bookmarks', name='Bookmarks',
parent_tag='google_chrome', sync_type=BOOKMARK),
PermanentItem('bookmark_bar', name='Bookmark Bar',
parent_tag='google_chrome_bookmarks', sync_type=BOOKMARK),
PermanentItem('other_bookmarks', name='Other Bookmarks',
parent_tag='google_chrome_bookmarks', sync_type=BOOKMARK),
PermanentItem('synced_bookmarks', name='Synced Bookmarks',
parent_tag='google_chrome_bookmarks', sync_type=BOOKMARK,
create_by_default=False),
PermanentItem('google_chrome_preferences', name='Preferences',
parent_tag='google_chrome', sync_type=PREFERENCE),
PermanentItem('google_chrome_autofill', name='Autofill',
parent_tag='google_chrome', sync_type=AUTOFILL),
PermanentItem('google_chrome_autofill_profiles', name='Autofill Profiles',
parent_tag='google_chrome', sync_type=AUTOFILL_PROFILE),
PermanentItem('google_chrome_app_settings',
name='App Settings',
parent_tag='google_chrome', sync_type=APP_SETTINGS),
PermanentItem('google_chrome_extension_settings',
name='Extension Settings',
parent_tag='google_chrome', sync_type=EXTENSION_SETTINGS),
PermanentItem('google_chrome_extensions', name='Extensions',
parent_tag='google_chrome', sync_type=EXTENSIONS),
PermanentItem('google_chrome_passwords', name='Passwords',
parent_tag='google_chrome', sync_type=PASSWORD),
PermanentItem('google_chrome_search_engines', name='Search Engines',
parent_tag='google_chrome', sync_type=SEARCH_ENGINE),
PermanentItem('google_chrome_sessions', name='Sessions',
parent_tag='google_chrome', sync_type=SESSION),
PermanentItem('google_chrome_themes', name='Themes',
parent_tag='google_chrome', sync_type=THEME),
PermanentItem('google_chrome_typed_urls', name='Typed URLs',
parent_tag='google_chrome', sync_type=TYPED_URL),
PermanentItem('google_chrome_nigori', name='Nigori',
parent_tag='google_chrome', sync_type=NIGORI),
PermanentItem('google_chrome_apps', name='Apps',
parent_tag='google_chrome', sync_type=APPS),
PermanentItem('google_chrome_app_notifications', name='App Notifications',
parent_tag='google_chrome', sync_type=APP_NOTIFICATION),
]
def __init__(self):
# Monotonically increasing version number. The next object change will
# take on this value + 1.
self._version = 0
# The definitive copy of this client's items: a map from ID string to a
# SyncEntity protocol buffer.
self._entries = {}
self.ResetStoreBirthday()
self.migration_history = MigrationHistory()
self.induced_error = sync_pb2.ClientToServerResponse.Error()
self.induced_error_frequency = 0
self.sync_count_before_errors = 0
def _SaveEntry(self, entry):
"""Insert or update an entry in the change log, and give it a new version.
The ID fields of this entry are assumed to be valid server IDs. This
entry will be updated with a new version number and sync_timestamp.
Args:
entry: The entry to be added or updated.
"""
self._version += 1
# Maintain a global (rather than per-item) sequence number and use it
# both as the per-entry version as well as the update-progress timestamp.
# This simulates the behavior of the original server implementation.
entry.version = self._version
entry.sync_timestamp = self._version
# Preserve the originator info, which the client is not required to send
# when updating.
base_entry = self._entries.get(entry.id_string)
if base_entry:
entry.originator_cache_guid = base_entry.originator_cache_guid
entry.originator_client_item_id = base_entry.originator_client_item_id
self._entries[entry.id_string] = copy.deepcopy(entry)
def _ServerTagToId(self, tag):
"""Determine the server ID from a server-unique tag.
The resulting value is guaranteed not to collide with the other ID
generation methods.
Args:
tag: The unique, known-to-the-client tag of a server-generated item.
Returns:
The string value of the computed server ID.
"""
if not tag or tag == ROOT_ID:
return tag
spec = [x for x in self._PERMANENT_ITEM_SPECS if x.tag == tag][0]
return self._MakeCurrentId(spec.sync_type, '<server tag>%s' % tag)
def _ClientTagToId(self, datatype, tag):
"""Determine the server ID from a client-unique tag.
The resulting value is guaranteed not to collide with the other ID
generation methods.
Args:
datatype: The sync type (python enum) of the identified object.
tag: The unique, opaque-to-the-server tag of a client-tagged item.
Returns:
The string value of the computed server ID.
"""
return self._MakeCurrentId(datatype, '<client tag>%s' % tag)
def _ClientIdToId(self, datatype, client_guid, client_item_id):
"""Compute a unique server ID from a client-local ID tag.
The resulting value is guaranteed not to collide with the other ID
generation methods.
Args:
datatype: The sync type (python enum) of the identified object.
client_guid: A globally unique ID that identifies the client which
created this item.
client_item_id: An ID that uniquely identifies this item on the client
which created it.
Returns:
The string value of the computed server ID.
"""
# Using the client ID info is not required here (we could instead generate
# a random ID), but it's useful for debugging.
return self._MakeCurrentId(datatype,
'<server ID originally>%s/%s' % (client_guid, client_item_id))
def _MakeCurrentId(self, datatype, inner_id):
return '%d^%d^%s' % (datatype,
self.migration_history.GetLatestVersion(datatype),
inner_id)
def _ExtractIdInfo(self, id_string):
if not id_string or id_string == ROOT_ID:
return None
datatype_string, separator, remainder = id_string.partition('^')
migration_version_string, separator, inner_id = remainder.partition('^')
return (int(datatype_string), int(migration_version_string), inner_id)
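  # Illustrative sketch of the server ID scheme used by the two methods above
  # (the tag value is hypothetical): a BOOKMARK item client-tagged 'xyz' at
  # migration version 1 gets the id '6^1^<client tag>xyz', and _ExtractIdInfo()
  # recovers the (6, 1, '<client tag>xyz') triple from it.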
def _WritePosition(self, entry, parent_id):
"""Ensure the entry has an absolute, numeric position and parent_id.
Historically, clients would specify positions using the predecessor-based
references in the insert_after_item_id field; starting July 2011, this
was changed and Chrome now sends up the absolute position. The server
must store a position_in_parent value and must not maintain
insert_after_item_id.
Args:
      entry: The entry for which to write a position. Its ID fields are
assumed to be server IDs. This entry will have its parent_id_string
and position_in_parent fields updated; its insert_after_item_id field
will be cleared.
parent_id: The ID of the entry intended as the new parent.
"""
entry.parent_id_string = parent_id
if not entry.HasField('position_in_parent'):
entry.position_in_parent = 1337 # A debuggable, distinctive default.
entry.ClearField('insert_after_item_id')
def _ItemExists(self, id_string):
"""Determine whether an item exists in the changelog."""
return id_string in self._entries
def _CreatePermanentItem(self, spec):
"""Create one permanent item from its spec, if it doesn't exist.
The resulting item is added to the changelog.
Args:
spec: A PermanentItem object holding the properties of the item to create.
"""
id_string = self._ServerTagToId(spec.tag)
if self._ItemExists(id_string):
return
print 'Creating permanent item: %s' % spec.name
entry = sync_pb2.SyncEntity()
entry.id_string = id_string
entry.non_unique_name = spec.name
entry.name = spec.name
entry.server_defined_unique_tag = spec.tag
entry.folder = True
entry.deleted = False
entry.specifics.CopyFrom(GetDefaultEntitySpecifics(spec.sync_type))
self._WritePosition(entry, self._ServerTagToId(spec.parent_tag))
self._SaveEntry(entry)
def _CreateDefaultPermanentItems(self, requested_types):
"""Ensure creation of all default permanent items for a given set of types.
Args:
requested_types: A list of sync data types from ALL_TYPES.
All default permanent items of only these types will be created.
"""
for spec in self._PERMANENT_ITEM_SPECS:
if spec.sync_type in requested_types and spec.create_by_default:
self._CreatePermanentItem(spec)
def ResetStoreBirthday(self):
"""Resets the store birthday to a random value."""
# TODO(nick): uuid.uuid1() is better, but python 2.5 only.
self.store_birthday = '%0.30f' % random.random()
def StoreBirthday(self):
"""Gets the store birthday."""
return self.store_birthday
def GetChanges(self, sieve):
"""Get entries which have changed, oldest first.
The returned entries are limited to being _BATCH_SIZE many. The entries
are returned in strict version order.
Args:
sieve: An update sieve to use to filter out updates the client
has already seen.
Returns:
A tuple of (version, entries, changes_remaining). Version is a new
timestamp value, which should be used as the starting point for the
next query. Entries is the batch of entries meeting the current
timestamp query. Changes_remaining indicates the number of changes
left on the server after this batch.
"""
if not sieve.HasAnyTimestamp():
return (0, [], 0)
min_timestamp = sieve.GetMinTimestamp()
self._CreateDefaultPermanentItems(sieve.GetFirstTimeTypes())
change_log = sorted(self._entries.values(),
key=operator.attrgetter('version'))
new_changes = [x for x in change_log if x.version > min_timestamp]
# Pick batch_size new changes, and then filter them. This matches
# the RPC behavior of the production sync server.
batch = new_changes[:self._BATCH_SIZE]
if not batch:
# Client is up to date.
return (min_timestamp, [], 0)
# Restrict batch to requested types. Tombstones are untyped
# and will always get included.
filtered = [copy.deepcopy(item) for item in batch
if item.deleted or sieve.ClientWantsItem(item)]
# The new client timestamp is the timestamp of the last item in the
# batch, even if that item was filtered out.
return (batch[-1].version, filtered, len(new_changes) - len(batch))
def _CopyOverImmutableFields(self, entry):
"""Preserve immutable fields by copying pre-commit state.
Args:
entry: A sync entity from the client.
"""
if entry.id_string in self._entries:
if self._entries[entry.id_string].HasField(
'server_defined_unique_tag'):
entry.server_defined_unique_tag = (
self._entries[entry.id_string].server_defined_unique_tag)
def _CheckVersionForCommit(self, entry):
"""Perform an optimistic concurrency check on the version number.
Clients are only allowed to commit if they report having seen the most
recent version of an object.
Args:
entry: A sync entity from the client. It is assumed that ID fields
have been converted to server IDs.
Returns:
A boolean value indicating whether the client's version matches the
newest server version for the given entry.
"""
if entry.id_string in self._entries:
# Allow edits/deletes if the version matches, and any undeletion.
return (self._entries[entry.id_string].version == entry.version or
self._entries[entry.id_string].deleted)
else:
# Allow unknown ID only if the client thinks it's new too.
return entry.version == 0
def _CheckParentIdForCommit(self, entry):
"""Check that the parent ID referenced in a SyncEntity actually exists.
Args:
entry: A sync entity from the client. It is assumed that ID fields
have been converted to server IDs.
Returns:
A boolean value indicating whether the entity's parent ID is an object
that actually exists (and is not deleted) in the current account state.
"""
if entry.parent_id_string == ROOT_ID:
# This is generally allowed.
return True
if entry.parent_id_string not in self._entries:
print 'Warning: Client sent unknown ID. Should never happen.'
return False
if entry.parent_id_string == entry.id_string:
print 'Warning: Client sent circular reference. Should never happen.'
return False
if self._entries[entry.parent_id_string].deleted:
# This can happen in a race condition between two clients.
return False
if not self._entries[entry.parent_id_string].folder:
print 'Warning: Client sent non-folder parent. Should never happen.'
return False
return True
def _RewriteIdsAsServerIds(self, entry, cache_guid, commit_session):
"""Convert ID fields in a client sync entry to server IDs.
A commit batch sent by a client may contain new items for which the
server has not generated IDs yet. And within a commit batch, later
items are allowed to refer to earlier items. This method will
generate server IDs for new items, as well as rewrite references
to items whose server IDs were generated earlier in the batch.
Args:
entry: The client sync entry to modify.
cache_guid: The globally unique ID of the client that sent this
commit request.
commit_session: A dictionary mapping the original IDs to the new server
IDs, for any items committed earlier in the batch.
"""
if entry.version == 0:
data_type = GetEntryType(entry)
if entry.HasField('client_defined_unique_tag'):
# When present, this should determine the item's ID.
new_id = self._ClientTagToId(data_type, entry.client_defined_unique_tag)
else:
new_id = self._ClientIdToId(data_type, cache_guid, entry.id_string)
entry.originator_cache_guid = cache_guid
entry.originator_client_item_id = entry.id_string
commit_session[entry.id_string] = new_id # Remember the remapping.
entry.id_string = new_id
if entry.parent_id_string in commit_session:
entry.parent_id_string = commit_session[entry.parent_id_string]
if entry.insert_after_item_id in commit_session:
entry.insert_after_item_id = commit_session[entry.insert_after_item_id]
def ValidateCommitEntries(self, entries):
"""Raise an exception if a commit batch contains any global errors.
Arguments:
entries: an iterable containing commit-form SyncEntity protocol buffers.
Raises:
MigrationDoneError: if any of the entries reference a recently-migrated
datatype.
"""
server_ids_in_commit = set()
local_ids_in_commit = set()
for entry in entries:
if entry.version:
server_ids_in_commit.add(entry.id_string)
else:
local_ids_in_commit.add(entry.id_string)
if entry.HasField('parent_id_string'):
if entry.parent_id_string not in local_ids_in_commit:
server_ids_in_commit.add(entry.parent_id_string)
versions_present = {}
for server_id in server_ids_in_commit:
parsed = self._ExtractIdInfo(server_id)
if parsed:
datatype, version, _ = parsed
versions_present.setdefault(datatype, []).append(version)
self.migration_history.CheckAllCurrent(
dict((k, min(v)) for k, v in versions_present.iteritems()))
def CommitEntry(self, entry, cache_guid, commit_session):
"""Attempt to commit one entry to the user's account.
Args:
entry: A SyncEntity protobuf representing desired object changes.
cache_guid: A string value uniquely identifying the client; this
is used for ID generation and will determine the originator_cache_guid
if the entry is new.
commit_session: A dictionary mapping client IDs to server IDs for any
objects committed earlier this session. If the entry gets a new ID
during commit, the change will be recorded here.
Returns:
A SyncEntity reflecting the post-commit value of the entry, or None
if the entry was not committed due to an error.
"""
entry = copy.deepcopy(entry)
# Generate server IDs for this entry, and write generated server IDs
# from earlier entries into the message's fields, as appropriate. The
# ID generation state is stored in 'commit_session'.
self._RewriteIdsAsServerIds(entry, cache_guid, commit_session)
# Perform the optimistic concurrency check on the entry's version number.
# Clients are not allowed to commit unless they indicate that they've seen
# the most recent version of an object.
if not self._CheckVersionForCommit(entry):
return None
# Check the validity of the parent ID; it must exist at this point.
# TODO(nick): Implement cycle detection and resolution.
if not self._CheckParentIdForCommit(entry):
return None
    self._CopyOverImmutableFields(entry)
# At this point, the commit is definitely going to happen.
# Deletion works by storing a limited record for an entry, called a
# tombstone. A sync server must track deleted IDs forever, since it does
# not keep track of client knowledge (there's no deletion ACK event).
if entry.deleted:
def MakeTombstone(id_string):
"""Make a tombstone entry that will replace the entry being deleted.
Args:
id_string: Index of the SyncEntity to be deleted.
Returns:
A new SyncEntity reflecting the fact that the entry is deleted.
"""
# Only the ID, version and deletion state are preserved on a tombstone.
# TODO(nick): Does the production server not preserve the type? Not
# doing so means that tombstones cannot be filtered based on
# requested_types at GetUpdates time.
tombstone = sync_pb2.SyncEntity()
tombstone.id_string = id_string
tombstone.deleted = True
tombstone.name = ''
return tombstone
def IsChild(child_id):
"""Check if a SyncEntity is a child of entry, or any of its children.
Args:
child_id: Index of the SyncEntity that is a possible child of entry.
Returns:
True if it is a child; false otherwise.
"""
if child_id not in self._entries:
return False
if self._entries[child_id].parent_id_string == entry.id_string:
return True
return IsChild(self._entries[child_id].parent_id_string)
# Identify any children entry might have.
child_ids = [child.id_string for child in self._entries.itervalues()
if IsChild(child.id_string)]
# Mark all children that were identified as deleted.
for child_id in child_ids:
self._SaveEntry(MakeTombstone(child_id))
# Delete entry itself.
entry = MakeTombstone(entry.id_string)
else:
# Comments in sync.proto detail how the representation of positional
# ordering works: either the 'insert_after_item_id' field or the
# 'position_in_parent' field may determine the sibling order during
# Commit operations. The 'position_in_parent' field provides an absolute
# ordering in GetUpdates contexts. Here we assume the client will
# always send a valid position_in_parent (this is the newer style), and
# we ignore insert_after_item_id (an older style).
self._WritePosition(entry, entry.parent_id_string)
# Preserve the originator info, which the client is not required to send
# when updating.
base_entry = self._entries.get(entry.id_string)
if base_entry and not entry.HasField('originator_cache_guid'):
entry.originator_cache_guid = base_entry.originator_cache_guid
entry.originator_client_item_id = base_entry.originator_client_item_id
# Store the current time since the Unix epoch in milliseconds.
entry.mtime = (int((time.mktime(time.gmtime()) -
time.mktime(UNIX_TIME_EPOCH))*1000))
# Commit the change. This also updates the version number.
self._SaveEntry(entry)
return entry
def _RewriteVersionInId(self, id_string):
"""Rewrites an ID so that its migration version becomes current."""
parsed_id = self._ExtractIdInfo(id_string)
if not parsed_id:
return id_string
datatype, old_migration_version, inner_id = parsed_id
return self._MakeCurrentId(datatype, inner_id)
def TriggerMigration(self, datatypes):
"""Cause a migration to occur for a set of datatypes on this account.
Clients will see the MIGRATION_DONE error for these datatypes until they
resync them.
"""
versions_to_remap = self.migration_history.Bump(datatypes)
all_entries = self._entries.values()
self._entries.clear()
for entry in all_entries:
new_id = self._RewriteVersionInId(entry.id_string)
entry.id_string = new_id
if entry.HasField('parent_id_string'):
entry.parent_id_string = self._RewriteVersionInId(
entry.parent_id_string)
self._entries[entry.id_string] = entry
def TriggerSyncTabs(self):
"""Set the 'sync_tabs' field to this account's nigori node.
If the field is not currently set, will write a new nigori node entry
with the field set. Else does nothing.
"""
nigori_tag = "google_chrome_nigori"
nigori_original = self._entries.get(self._ServerTagToId(nigori_tag))
if (nigori_original.specifics.nigori.sync_tabs):
return
nigori_new = copy.deepcopy(nigori_original)
nigori_new.specifics.nigori.sync_tabs = True
self._SaveEntry(nigori_new)
def TriggerCreateSyncedBookmarks(self):
"""Create the Synced Bookmarks folder under the Bookmarks permanent item.
Clients will then receive the Synced Bookmarks folder on future
GetUpdates, and new bookmarks can be added within the Synced Bookmarks
folder.
"""
synced_bookmarks_spec, = [spec for spec in self._PERMANENT_ITEM_SPECS
if spec.tag == "synced_bookmarks"]
self._CreatePermanentItem(synced_bookmarks_spec)
def SetInducedError(self, error, error_frequency,
sync_count_before_errors):
self.induced_error = error
self.induced_error_frequency = error_frequency
self.sync_count_before_errors = sync_count_before_errors
def GetInducedError(self):
return self.induced_error
class TestServer(object):
"""An object to handle requests for one (and only one) Chrome Sync account.
TestServer consumes the sync command messages that are the outermost
layers of the protocol, performs the corresponding actions on its
  SyncDataModel, and constructs an appropriate response message.
"""
def __init__(self):
# The implementation supports exactly one account; its state is here.
self.account = SyncDataModel()
self.account_lock = threading.Lock()
# Clients that have talked to us: a map from the full client ID
# to its nickname.
self.clients = {}
self.client_name_generator = ('+' * times + chr(c)
for times in xrange(0, sys.maxint) for c in xrange(ord('A'), ord('Z')))
self.transient_error = False
self.sync_count = 0
def GetShortClientName(self, query):
parsed = cgi.parse_qs(query[query.find('?')+1:])
client_id = parsed.get('client_id')
if not client_id:
return '?'
client_id = client_id[0]
if client_id not in self.clients:
self.clients[client_id] = self.client_name_generator.next()
return self.clients[client_id]
def CheckStoreBirthday(self, request):
"""Raises StoreBirthdayError if the request's birthday is a mismatch."""
if not request.HasField('store_birthday'):
return
if self.account.StoreBirthday() != request.store_birthday:
raise StoreBirthdayError
def CheckTransientError(self):
"""Raises TransientError if transient_error variable is set."""
if self.transient_error:
raise TransientError
def CheckSendError(self):
"""Raises SyncInducedError if needed."""
if (self.account.induced_error.error_type !=
sync_enums_pb2.SyncEnums.UNKNOWN):
# Always means return the given error for all requests.
if self.account.induced_error_frequency == ERROR_FREQUENCY_ALWAYS:
raise SyncInducedError
# This means the FIRST 2 requests of every 3 requests
# return an error. Don't switch the order of failures. There are
# test cases that rely on the first 2 being the failure rather than
# the last 2.
elif (self.account.induced_error_frequency ==
ERROR_FREQUENCY_TWO_THIRDS):
if (((self.sync_count -
self.account.sync_count_before_errors) % 3) != 0):
raise SyncInducedError
else:
raise InducedErrorFrequencyNotDefined
def HandleMigrate(self, path):
query = urlparse.urlparse(path)[4]
code = 200
self.account_lock.acquire()
try:
datatypes = [DataTypeStringToSyncTypeLoose(x)
for x in urlparse.parse_qs(query).get('type',[])]
if datatypes:
self.account.TriggerMigration(datatypes)
response = 'Migrated datatypes %s' % (
' and '.join(SyncTypeToString(x).upper() for x in datatypes))
else:
response = 'Please specify one or more <i>type=name</i> parameters'
code = 400
except DataTypeIdNotRecognized, error:
response = 'Could not interpret datatype name'
code = 400
finally:
self.account_lock.release()
return (code, '<html><title>Migration: %d</title><H1>%d %s</H1></html>' %
(code, code, response))
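  # Illustrative request for the handler above (the exact URL path is an
  # assumption of the surrounding test server): a GET whose query string is
  # '?type=BOOKMARKS&type=PREFERENCES' bumps the migration version for those
  # datatypes, so clients receive MIGRATION_DONE until they resync them.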
def HandleSetInducedError(self, path):
query = urlparse.urlparse(path)[4]
self.account_lock.acquire()
    code = 200
response = 'Success'
error = sync_pb2.ClientToServerResponse.Error()
try:
error_type = urlparse.parse_qs(query)['error']
action = urlparse.parse_qs(query)['action']
error.error_type = int(error_type[0])
error.action = int(action[0])
try:
error.url = (urlparse.parse_qs(query)['url'])[0]
except KeyError:
error.url = ''
try:
error.error_description =(
(urlparse.parse_qs(query)['error_description'])[0])
except KeyError:
error.error_description = ''
try:
error_frequency = int((urlparse.parse_qs(query)['frequency'])[0])
except KeyError:
error_frequency = ERROR_FREQUENCY_ALWAYS
self.account.SetInducedError(error, error_frequency, self.sync_count)
response = ('Error = %d, action = %d, url = %s, description = %s' %
(error.error_type, error.action,
error.url,
error.error_description))
    except (KeyError, ValueError):
response = 'Could not parse url'
code = 400
finally:
self.account_lock.release()
return (code, '<html><title>SetError: %d</title><H1>%d %s</H1></html>' %
(code, code, response))
def HandleCreateBirthdayError(self):
self.account.ResetStoreBirthday()
return (
200,
'<html><title>Birthday error</title><H1>Birthday error</H1></html>')
def HandleSetTransientError(self):
self.transient_error = True
return (
200,
'<html><title>Transient error</title><H1>Transient error</H1></html>')
def HandleSetSyncTabs(self):
"""Set the 'sync_tab' field of the nigori node for this account."""
self.account.TriggerSyncTabs()
return (
200,
'<html><title>Sync Tabs</title><H1>Sync Tabs</H1></html>')
def HandleCreateSyncedBookmarks(self):
"""Create the Synced Bookmarks folder under Bookmarks."""
self.account.TriggerCreateSyncedBookmarks()
return (
200,
'<html><title>Synced Bookmarks</title><H1>Synced Bookmarks</H1></html>')
def HandleCommand(self, query, raw_request):
"""Decode and handle a sync command from a raw input of bytes.
This is the main entry point for this class. It is safe to call this
method from multiple threads.
Args:
raw_request: An iterable byte sequence to be interpreted as a sync
protocol command.
Returns:
A tuple (response_code, raw_response); the first value is an HTTP
result code, while the second value is a string of bytes which is the
serialized reply to the command.
"""
self.account_lock.acquire()
self.sync_count += 1
def print_context(direction):
print '[Client %s %s %s.py]' % (self.GetShortClientName(query), direction,
__name__),
try:
request = sync_pb2.ClientToServerMessage()
request.MergeFromString(raw_request)
contents = request.message_contents
response = sync_pb2.ClientToServerResponse()
response.error_code = sync_enums_pb2.SyncEnums.SUCCESS
self.CheckStoreBirthday(request)
response.store_birthday = self.account.store_birthday
      self.CheckTransientError()
      self.CheckSendError()
print_context('->')
if contents == sync_pb2.ClientToServerMessage.AUTHENTICATE:
print 'Authenticate'
# We accept any authentication token, and support only one account.
# TODO(nick): Mock out the GAIA authentication as well; hook up here.
response.authenticate.user.email = 'syncjuser@chromium'
response.authenticate.user.display_name = 'Sync J User'
elif contents == sync_pb2.ClientToServerMessage.COMMIT:
print 'Commit %d item(s)' % len(request.commit.entries)
self.HandleCommit(request.commit, response.commit)
elif contents == sync_pb2.ClientToServerMessage.GET_UPDATES:
print 'GetUpdates',
self.HandleGetUpdates(request.get_updates, response.get_updates)
print_context('<-')
print '%d update(s)' % len(response.get_updates.entries)
else:
print 'Unrecognizable sync request!'
return (400, None) # Bad request.
return (200, response.SerializeToString())
except MigrationDoneError, error:
print_context('<-')
print 'MIGRATION_DONE: <%s>' % (ShortDatatypeListSummary(error.datatypes))
response = sync_pb2.ClientToServerResponse()
response.store_birthday = self.account.store_birthday
response.error_code = sync_enums_pb2.SyncEnums.MIGRATION_DONE
response.migrated_data_type_id[:] = [
SyncTypeToProtocolDataTypeId(x) for x in error.datatypes]
return (200, response.SerializeToString())
except StoreBirthdayError, error:
print_context('<-')
print 'NOT_MY_BIRTHDAY'
response = sync_pb2.ClientToServerResponse()
response.store_birthday = self.account.store_birthday
response.error_code = sync_enums_pb2.SyncEnums.NOT_MY_BIRTHDAY
return (200, response.SerializeToString())
except TransientError, error:
### This is deprecated now. Would be removed once test cases are removed.
print_context('<-')
print 'TRANSIENT_ERROR'
response.store_birthday = self.account.store_birthday
response.error_code = sync_enums_pb2.SyncEnums.TRANSIENT_ERROR
return (200, response.SerializeToString())
except SyncInducedError, error:
print_context('<-')
print 'INDUCED_ERROR'
response.store_birthday = self.account.store_birthday
error = self.account.GetInducedError()
response.error.error_type = error.error_type
response.error.url = error.url
response.error.error_description = error.error_description
response.error.action = error.action
return (200, response.SerializeToString())
finally:
self.account_lock.release()
def HandleCommit(self, commit_message, commit_response):
"""Respond to a Commit request by updating the user's account state.
Commit attempts stop after the first error, returning a CONFLICT result
for any unattempted entries.
Args:
commit_message: A sync_pb.CommitMessage protobuf holding the content
of the client's request.
commit_response: A sync_pb.CommitResponse protobuf into which a reply
to the client request will be written.
"""
commit_response.SetInParent()
batch_failure = False
session = {} # Tracks ID renaming during the commit operation.
guid = commit_message.cache_guid
self.account.ValidateCommitEntries(commit_message.entries)
for entry in commit_message.entries:
server_entry = None
if not batch_failure:
# Try to commit the change to the account.
server_entry = self.account.CommitEntry(entry, guid, session)
# An entryresponse is returned in both success and failure cases.
reply = commit_response.entryresponse.add()
if not server_entry:
reply.response_type = sync_pb2.CommitResponse.CONFLICT
reply.error_message = 'Conflict.'
batch_failure = True # One failure halts the batch.
else:
reply.response_type = sync_pb2.CommitResponse.SUCCESS
# These are the properties that the server is allowed to override
# during commit; the client wants to know their values at the end
# of the operation.
reply.id_string = server_entry.id_string
if not server_entry.deleted:
# Note: the production server doesn't actually send the
# parent_id_string on commit responses, so we don't either.
reply.position_in_parent = server_entry.position_in_parent
reply.version = server_entry.version
reply.name = server_entry.name
reply.non_unique_name = server_entry.non_unique_name
else:
reply.version = entry.version + 1
def HandleGetUpdates(self, update_request, update_response):
"""Respond to a GetUpdates request by querying the user's account.
Args:
update_request: A sync_pb.GetUpdatesMessage protobuf holding the content
of the client's request.
update_response: A sync_pb.GetUpdatesResponse protobuf into which a reply
to the client request will be written.
"""
update_response.SetInParent()
update_sieve = UpdateSieve(update_request, self.account.migration_history)
print CallerInfoToString(update_request.caller_info.source),
print update_sieve.SummarizeRequest()
update_sieve.CheckMigrationState()
new_timestamp, entries, remaining = self.account.GetChanges(update_sieve)
update_response.changes_remaining = remaining
for entry in entries:
reply = update_response.entries.add()
reply.CopyFrom(entry)
update_sieve.SaveProgress(new_timestamp, update_response)
| bsd-3-clause | -7,226,555,059,539,434,000 | 38.815665 | 80 | 0.686292 | false | 3.954038 | false | false | false |
ncf-ds/chloroform | samples/build_fixtures.py | 1 | 6130 | import os
import sys
sys.path.append(os.path.join(os.path.dirname(__file__), ".."))
from chloroform import db
from chloroform.models import *
#Things this has:
#Same client different retail chains
#Same retail chain different clients
#question_groups containing a question_group
#same question_groups across forms
#Question with a free-form response
#Things this doesn't have:
#questions containing question_groups
#question_groups containing multiple question_groups
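# Note (an assumption about the chloroform models, added for readability): each
# "${placeholder}" in a Question's text is expected to be resolved through a
# QuestionMadlib whose name matches the placeholder and which points at the
# Madlib holding the substituted value.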
# Form 1
form = Form(title="Palermos for CVS")
form.form_context = FormContext(name = "CVS")
form.client = Client(name="Palermos")
# Questions
question1 = Question("Did you find the ${name}?")
question2 = Question("Was the ${dname} full stocked and organized?")
question3 = Question("How many {$product} are on the ${dname}?")
quest_mad1 = QuestionMadlib("name")
quest_mad2 = QuestionMadlib("dname")
quest_mad3 = QuestionMadlib("product")
quest_mad4 = QuestionMadlib("dname2")
question1.choices = [Choice("Yes"),Choice("No"),Choice("Did not look")]
question1.madlib_associations = [quest_mad1]
quest_mad1.madlib = Madlib("display")
question2.choices = [Choice("Yes"),Choice("No, poorly organized"),Choice("No, not enough items")]
question2.madlib_associations = [quest_mad2]
quest_mad2.madlib = Madlib("display")
question3.choices = [Choice("Write in the number")]
question3.madlib_associations = [quest_mad3, quest_mad4]
quest_mad3.madlib = Madlib("frozen pizzas")
quest_mad4.madlib = Madlib("display")
# Question Groups
question_group1 = QuestionGroup("QuestGroup title")
question_group2 = QuestionGroup("QuestGroup title")
question_group1.question_groups = [question_group2]
question_group2.questions = [question1, question2, question3]
# question_group1.questions = [question1]
# question_group2.questions = [question2, question3]
form.question_group = question_group1
# Add to session
db.session.add(form)
db.session.add(question_group1)
db.session.add(question_group2)
db.session.add(question1)
db.session.add(question2)
db.session.add(question3)
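# Illustrative sketch only: each QuestionMadlib above ties a template variable
# (e.g. "name") to a Madlib value, so a renderer is expected to substitute
# "${name}" with the madlib text. Assuming plain string substitution (the real
# rendering lives elsewhere in chloroform):
#
#   template = "Did you find the ${name}?"
#   rendered = template.replace("${name}", "display")
#   # rendered == "Did you find the display?"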
# Form 2
form = Form(title="L'Oreal for CVS")
form.form_context = FormContext(name = "CVS")
form.client = Client(name="L'Oreal")
# Questions
question1 = Question("Did you find the ${dname}?")
question2 = Question("Was the ${dname} fully stocked and organized?")
question3 = Question("How many ${product} are on the ${dname2}?")
quest_mad1 = QuestionMadlib("dname")
quest_mad2 = QuestionMadlib("dname")
quest_mad3 = QuestionMadlib("product")
quest_mad4 = QuestionMadlib("dname2")
question1.choices = [Choice("Yes"),Choice("No"),Choice("Did not look")]
question1.madlib_associations = [quest_mad1]
quest_mad1.madlib = Madlib("display")
question2.choices = [Choice("Yes"),Choice("No, poorly organized"),Choice("No, not enough items")]
question2.madlib_associations = [quest_mad2]
quest_mad2.madlib = Madlib("display")
question3.choices = [Choice("Write in the number")]
question3.madlib_associations = [quest_mad3, quest_mad4]
quest_mad3.madlib = Madlib("shampoo")
quest_mad4.madlib = Madlib("display")
# Question Groups
question_group1 = QuestionGroup("QuestGroup title")
question_group2 = QuestionGroup("QuestGroup title")
question_group1.question_groups = [question_group2]
question_group2.questions = [question1, question2, question3]
# question_group1.questions = [question1]
# question_group2.questions = [question2, question3]
form.question_group = question_group1
# Add to session
db.session.add(form)
db.session.add(question_group2)
db.session.add(question2)
db.session.add(question3)
db.session.add(quest_mad1)
db.session.add(quest_mad2)
db.session.add(quest_mad3)
# Form 3
form = Form(title="Palermos for Publix")
form.form_context = FormContext(name = "Publix")
form.client = Client(name="Palermos1")
# Questions
question1 = Question("Did you find the ${name}?")
question2 = Question("Are there ${pname1} on the ${dname1}?")
question3 = Question("Are there ${pname2} on the ${dname2}?")
question4 = Question("Are there ${pname3} on the ${dname3}?")
quest_mad1 = QuestionMadlib("name")
quest_mad2 = QuestionMadlib("pname1")
quest_mad3 = QuestionMadlib("dname1")
quest_mad4 = QuestionMadlib("pname2")
quest_mad5 = QuestionMadlib("dname2")
quest_mad6 = QuestionMadlib("pname3")
quest_mad7 = QuestionMadlib("dname3")
question1.choices = [Choice("Yes"),Choice("No"),Choice("Did not look")]
question1.madlib_associations = [quest_mad1]
quest_mad1.madlib = Madlib("display")
question2.choices = [Choice("Yes, there are many items"),Choice("Yes, but there are only a few items"),Choice("No")]
question2.madlib_associations = [quest_mad2, quest_mad3]
quest_mad2.madlib = Madlib("Palermos Pepperoni Pizza")
quest_mad3.madlib = Madlib("display")
question3.choices = [Choice("Yes, there are many items"),Choice("Yes, but there are only a few items"),Choice("No")]
question3.madlib_associations = [quest_mad4, quest_mad5]
quest_mad4.madlib = Madlib("Palermos Cheese Pizza")
quest_mad5.madlib = Madlib("display")
question4.choices = [Choice("Yes, there are many items"),Choice("Yes, but there are only a few items"),Choice("No")]
question4.madlib_associations = [quest_mad6, quest_mad7]
quest_mad6.madlib = Madlib("Palermos Sausage Pizza")
quest_mad7.madlib = Madlib("display")
# Question Groups
question_group1 = QuestionGroup("QuestGroup title")
question_group2 = QuestionGroup("QuestGroup title")
question_group1.question_groups = [question_group2]
question_group2.questions = [question1, question2, question3, question4]
# question_group1.questions = [question1]
# question_group2.questions = [question2, question3, question4]
form.question_group = question_group1
# Add to session
db.session.add(form)
# db.session.add(question_group2)
db.session.add(question2)
db.session.add(question3)
db.session.add(question4)
db.session.add(quest_mad1)
db.session.add(quest_mad2)
db.session.add(quest_mad3)
db.session.add(quest_mad4)
db.session.add(quest_mad5)
db.session.add(quest_mad6)
db.session.add(quest_mad7)
db.session.commit()
# form = Form.query.filter_by(title='Palermos for CVS').first()
# form.question_group
# form.question_group.questions
| agpl-3.0 | 6,657,443,697,487,821,000 | 31.094241 | 116 | 0.748613 | false | 2.864486 | false | false | false |
JFriel/honours_project | venv/lib/python2.7/site-packages/numpy/f2py/f2py2e.py | 174 | 22908 | #!/usr/bin/env python
"""
f2py2e - Fortran to Python C/API generator. 2nd Edition.
See __usage__ below.
Copyright 1999--2011 Pearu Peterson all rights reserved,
Pearu Peterson <[email protected]>
Permission to use, modify, and distribute this software is given under the
terms of the NumPy License.
NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK.
$Date: 2005/05/06 08:31:19 $
Pearu Peterson
"""
from __future__ import division, absolute_import, print_function
import sys
import os
import pprint
import re
from . import crackfortran
from . import rules
from . import cb_rules
from . import auxfuncs
from . import cfuncs
from . import f90mod_rules
from . import __version__
f2py_version = __version__.version
errmess = sys.stderr.write
# outmess=sys.stdout.write
show = pprint.pprint
outmess = auxfuncs.outmess
try:
from numpy import __version__ as numpy_version
except ImportError:
numpy_version = 'N/A'
__usage__ = """\
Usage:
1) To construct extension module sources:
f2py [<options>] <fortran files> [[[only:]||[skip:]] \\
<fortran functions> ] \\
[: <fortran files> ...]
2) To compile fortran files and build extension modules:
f2py -c [<options>, <build_flib options>, <extra options>] <fortran files>
3) To generate signature files:
f2py -h <filename.pyf> ...< same options as in (1) >
Description: This program generates a Python C/API file (<modulename>module.c)
that contains wrappers for given fortran functions so that they
can be called from Python. With the -c option the corresponding
extension modules are built.
Options:
--2d-numpy Use numpy.f2py tool with NumPy support. [DEFAULT]
--2d-numeric Use f2py2e tool with Numeric support.
--2d-numarray Use f2py2e tool with Numarray support.
--g3-numpy Use 3rd generation f2py from the separate f2py package.
[NOT AVAILABLE YET]
-h <filename> Write signatures of the fortran routines to file <filename>
and exit. You can then edit <filename> and use it instead
of <fortran files>. If <filename>==stdout then the
signatures are printed to stdout.
<fortran functions> Names of fortran routines for which Python C/API
functions will be generated. Default is all that are found
in <fortran files>.
<fortran files> Paths to fortran/signature files that will be scanned for
<fortran functions> in order to determine their signatures.
skip: Ignore fortran functions that follow until `:'.
only: Use only fortran functions that follow until `:'.
: Get back to <fortran files> mode.
-m <modulename> Name of the module; f2py generates a Python/C API
file <modulename>module.c or extension module <modulename>.
Default is 'untitled'.
--[no-]lower Do [not] lower the cases in <fortran files>. By default,
--lower is assumed with -h key, and --no-lower without -h key.
--build-dir <dirname> All f2py generated files are created in <dirname>.
Default is tempfile.mkdtemp().
--overwrite-signature Overwrite existing signature file.
--[no-]latex-doc Create (or not) <modulename>module.tex.
Default is --no-latex-doc.
--short-latex Create 'incomplete' LaTeX document (without commands
\\documentclass, \\tableofcontents, and \\begin{document},
\\end{document}).
--[no-]rest-doc Create (or not) <modulename>module.rst.
Default is --no-rest-doc.
--debug-capi Create C/API code that reports the state of the wrappers
during runtime. Useful for debugging.
--[no-]wrap-functions Create Fortran subroutine wrappers to Fortran 77
functions. --wrap-functions is default because it ensures
maximum portability/compiler independence.
--include-paths <path1>:<path2>:... Search include files from the given
directories.
--help-link [..] List system resources found by system_info.py. See also
--link-<resource> switch below. [..] is optional list
of resources names. E.g. try 'f2py --help-link lapack_opt'.
--quiet Run quietly.
--verbose Run with extra verbosity.
-v Print f2py version ID and exit.
numpy.distutils options (only effective with -c):
--fcompiler= Specify Fortran compiler type by vendor
--compiler= Specify C compiler type (as defined by distutils)
--help-fcompiler List available Fortran compilers and exit
--f77exec= Specify the path to F77 compiler
--f90exec= Specify the path to F90 compiler
--f77flags= Specify F77 compiler flags
--f90flags= Specify F90 compiler flags
--opt= Specify optimization flags
--arch= Specify architecture specific optimization flags
--noopt Compile without optimization
--noarch Compile without arch-dependent optimization
--debug Compile with debugging information
Extra options (only effective with -c):
--link-<resource> Link extension module with <resource> as defined
by numpy.distutils/system_info.py. E.g. to link
with optimized LAPACK libraries (vecLib on MacOSX,
ATLAS elsewhere), use --link-lapack_opt.
See also --help-link switch.
-L/path/to/lib/ -l<libname>
-D<define> -U<name>
-I/path/to/include/
<filename>.o <filename>.so <filename>.a
Using the following macros may be required with non-gcc Fortran
compilers:
-DPREPEND_FORTRAN -DNO_APPEND_FORTRAN -DUPPERCASE_FORTRAN
-DUNDERSCORE_G77
When using -DF2PY_REPORT_ATEXIT, a performance report of F2PY
interface is printed out at exit (platforms: Linux).
When using -DF2PY_REPORT_ON_ARRAY_COPY=<int>, a message is
sent to stderr whenever F2PY interface makes a copy of an
array. Integer <int> sets the threshold for array sizes when
a message should be shown.
Version: %s
numpy Version: %s
Requires: Python 2.3 or higher.
License: NumPy license (see LICENSE.txt in the NumPy source code)
Copyright 1999 - 2011 Pearu Peterson all rights reserved.
http://cens.ioc.ee/projects/f2py2e/""" % (f2py_version, numpy_version)
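# Illustrative invocations of the usage documented above; "example" and the
# file names are placeholders, not files shipped with f2py:
#
#   f2py -m example example.f90                 # generate examplemodule.c
#   f2py -c -m example example.f90              # build the extension module
#   f2py -h example.pyf -m example example.f90  # only write a signature file
#
# The same behaviour is reachable programmatically, e.g.
# run_main(['-m', 'example', 'example.f90']) mirrors the first call (see
# run_main below).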
def scaninputline(inputline):
files, skipfuncs, onlyfuncs, debug = [], [], [], []
# Parser state: ``f`` is the positional mode (1 = collect files, 0 = only:,
# -1 = skip:); ``f5`` records --show-compilers; ``f2``, ``f3``, ``f6``, ``f7``,
# ``f8`` and ``f9`` mark that the next argument is the value of -h, -m,
# --build-dir, --include-paths, --coutput or --f2py-wrapper-output.
f, f2, f3, f5, f6, f7, f8, f9 = 1, 0, 0, 0, 0, 0, 0, 0
verbose = 1
dolc = -1
dolatexdoc = 0
dorestdoc = 0
wrapfuncs = 1
buildpath = '.'
include_paths = []
signsfile, modulename = None, None
options = {'buildpath': buildpath,
'coutput': None,
'f2py_wrapper_output': None}
for l in inputline:
if l == '':
pass
elif l == 'only:':
f = 0
elif l == 'skip:':
f = -1
elif l == ':':
f = 1
elif l[:8] == '--debug-':
debug.append(l[8:])
elif l == '--lower':
dolc = 1
elif l == '--build-dir':
f6 = 1
elif l == '--no-lower':
dolc = 0
elif l == '--quiet':
verbose = 0
elif l == '--verbose':
verbose += 1
elif l == '--latex-doc':
dolatexdoc = 1
elif l == '--no-latex-doc':
dolatexdoc = 0
elif l == '--rest-doc':
dorestdoc = 1
elif l == '--no-rest-doc':
dorestdoc = 0
elif l == '--wrap-functions':
wrapfuncs = 1
elif l == '--no-wrap-functions':
wrapfuncs = 0
elif l == '--short-latex':
options['shortlatex'] = 1
elif l == '--coutput':
f8 = 1
elif l == '--f2py-wrapper-output':
f9 = 1
elif l == '--overwrite-signature':
options['h-overwrite'] = 1
elif l == '-h':
f2 = 1
elif l == '-m':
f3 = 1
elif l[:2] == '-v':
print(f2py_version)
sys.exit()
elif l == '--show-compilers':
f5 = 1
elif l[:8] == '-include':
cfuncs.outneeds['userincludes'].append(l[9:-1])
cfuncs.userincludes[l[9:-1]] = '#include ' + l[8:]
elif l[:15] in '--include_paths':
outmess(
'f2py option --include_paths is deprecated, use --include-paths instead.\n')
f7 = 1
elif l[:15] in '--include-paths':
f7 = 1
elif l[0] == '-':
errmess('Unknown option %s\n' % repr(l))
sys.exit()
elif f2:
f2 = 0
signsfile = l
elif f3:
f3 = 0
modulename = l
elif f6:
f6 = 0
buildpath = l
elif f7:
f7 = 0
include_paths.extend(l.split(os.pathsep))
elif f8:
f8 = 0
options["coutput"] = l
elif f9:
f9 = 0
options["f2py_wrapper_output"] = l
elif f == 1:
try:
open(l).close()
files.append(l)
except IOError as detail:
errmess('IOError: %s. Skipping file "%s".\n' %
(str(detail), l))
elif f == -1:
skipfuncs.append(l)
elif f == 0:
onlyfuncs.append(l)
if not f5 and not files and not modulename:
print(__usage__)
sys.exit()
if not os.path.isdir(buildpath):
if not verbose:
outmess('Creating build directory %s' % (buildpath))
os.mkdir(buildpath)
if signsfile:
signsfile = os.path.join(buildpath, signsfile)
if signsfile and os.path.isfile(signsfile) and 'h-overwrite' not in options:
errmess(
'Signature file "%s" exists!!! Use --overwrite-signature to overwrite.\n' % (signsfile))
sys.exit()
options['debug'] = debug
options['verbose'] = verbose
if dolc == -1 and not signsfile:
options['do-lower'] = 0
else:
options['do-lower'] = dolc
if modulename:
options['module'] = modulename
if signsfile:
options['signsfile'] = signsfile
if onlyfuncs:
options['onlyfuncs'] = onlyfuncs
if skipfuncs:
options['skipfuncs'] = skipfuncs
options['dolatexdoc'] = dolatexdoc
options['dorestdoc'] = dorestdoc
options['wrapfuncs'] = wrapfuncs
options['buildpath'] = buildpath
options['include_paths'] = include_paths
return files, options
def callcrackfortran(files, options):
rules.options = options
crackfortran.debug = options['debug']
crackfortran.verbose = options['verbose']
if 'module' in options:
crackfortran.f77modulename = options['module']
if 'skipfuncs' in options:
crackfortran.skipfuncs = options['skipfuncs']
if 'onlyfuncs' in options:
crackfortran.onlyfuncs = options['onlyfuncs']
crackfortran.include_paths[:] = options['include_paths']
crackfortran.dolowercase = options['do-lower']
postlist = crackfortran.crackfortran(files)
if 'signsfile' in options:
outmess('Saving signatures to file "%s"\n' % (options['signsfile']))
pyf = crackfortran.crack2fortran(postlist)
if options['signsfile'][-6:] == 'stdout':
sys.stdout.write(pyf)
else:
f = open(options['signsfile'], 'w')
f.write(pyf)
f.close()
if options["coutput"] is None:
for mod in postlist:
mod["coutput"] = "%smodule.c" % mod["name"]
else:
for mod in postlist:
mod["coutput"] = options["coutput"]
if options["f2py_wrapper_output"] is None:
for mod in postlist:
mod["f2py_wrapper_output"] = "%s-f2pywrappers.f" % mod["name"]
else:
for mod in postlist:
mod["f2py_wrapper_output"] = options["f2py_wrapper_output"]
return postlist
def buildmodules(lst):
cfuncs.buildcfuncs()
outmess('Building modules...\n')
modules, mnames, isusedby = [], [], {}
for i in range(len(lst)):
if '__user__' in lst[i]['name']:
cb_rules.buildcallbacks(lst[i])
else:
if 'use' in lst[i]:
for u in lst[i]['use'].keys():
if u not in isusedby:
isusedby[u] = []
isusedby[u].append(lst[i]['name'])
modules.append(lst[i])
mnames.append(lst[i]['name'])
ret = {}
for i in range(len(mnames)):
if mnames[i] in isusedby:
outmess('\tSkipping module "%s" which is used by %s.\n' % (
mnames[i], ','.join(['"%s"' % s for s in isusedby[mnames[i]]])))
else:
um = []
if 'use' in modules[i]:
for u in modules[i]['use'].keys():
if u in isusedby and u in mnames:
um.append(modules[mnames.index(u)])
else:
outmess(
'\tModule "%s" uses nonexisting "%s" which will be ignored.\n' % (mnames[i], u))
ret[mnames[i]] = {}
dict_append(ret[mnames[i]], rules.buildmodule(modules[i], um))
return ret
def dict_append(d_out, d_in):
for (k, v) in d_in.items():
if k not in d_out:
d_out[k] = []
if isinstance(v, list):
d_out[k] = d_out[k] + v
else:
d_out[k].append(v)
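# Small verifiable sketch of dict_append's merge semantics (scalars are
# appended, lists are concatenated); for illustration only:
#
#   d = {'a': [1]}
#   dict_append(d, {'a': 2, 'b': [3]})
#   # d == {'a': [1, 2], 'b': [3]}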
def run_main(comline_list):
"""Run f2py as if string.join(comline_list,' ') is used as a command line.
In case of using -h flag, return None.
"""
crackfortran.reset_global_f2py_vars()
f2pydir = os.path.dirname(os.path.abspath(cfuncs.__file__))
fobjhsrc = os.path.join(f2pydir, 'src', 'fortranobject.h')
fobjcsrc = os.path.join(f2pydir, 'src', 'fortranobject.c')
files, options = scaninputline(comline_list)
auxfuncs.options = options
postlist = callcrackfortran(files, options)
isusedby = {}
for i in range(len(postlist)):
if 'use' in postlist[i]:
for u in postlist[i]['use'].keys():
if u not in isusedby:
isusedby[u] = []
isusedby[u].append(postlist[i]['name'])
for i in range(len(postlist)):
if postlist[i]['block'] == 'python module' and '__user__' in postlist[i]['name']:
if postlist[i]['name'] in isusedby:
# if not quiet:
outmess('Skipping Makefile build for module "%s" which is used by %s\n' % (
postlist[i]['name'], ','.join(['"%s"' % s for s in isusedby[postlist[i]['name']]])))
if 'signsfile' in options:
if options['verbose'] > 1:
outmess(
'Stopping. Edit the signature file and then run f2py on the signature file: ')
outmess('%s %s\n' %
(os.path.basename(sys.argv[0]), options['signsfile']))
return
for i in range(len(postlist)):
if postlist[i]['block'] != 'python module':
if 'python module' not in options:
errmess(
'Tip: If your original code is Fortran source then you must use -m option.\n')
raise TypeError('All blocks must be python module blocks but got %s' % (
repr(postlist[i]['block'])))
auxfuncs.debugoptions = options['debug']
f90mod_rules.options = options
auxfuncs.wrapfuncs = options['wrapfuncs']
ret = buildmodules(postlist)
for mn in ret.keys():
dict_append(ret[mn], {'csrc': fobjcsrc, 'h': fobjhsrc})
return ret
def filter_files(prefix, suffix, files, remove_prefix=None):
"""
Filter files by prefix and suffix.
"""
filtered, rest = [], []
match = re.compile(prefix + r'.*' + suffix + r'\Z').match
if remove_prefix:
ind = len(prefix)
else:
ind = 0
for file in [x.strip() for x in files]:
if match(file):
filtered.append(file[ind:])
else:
rest.append(file)
return filtered, rest
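# Illustrative call, mirroring how run_compile() below splits command line
# arguments (the paths are placeholders):
#
#   include_dirs, rest = filter_files('-I', '', ['-I/usr/include', 'spam.f'],
#                                     remove_prefix=1)
#   # include_dirs == ['/usr/include'], rest == ['spam.f']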
def get_prefix(module):
p = os.path.dirname(os.path.dirname(module.__file__))
return p
def run_compile():
"""
Do it all in one call!
"""
import tempfile
i = sys.argv.index('-c')
del sys.argv[i]
remove_build_dir = 0
try:
i = sys.argv.index('--build-dir')
except ValueError:
i = None
if i is not None:
build_dir = sys.argv[i + 1]
del sys.argv[i + 1]
del sys.argv[i]
else:
remove_build_dir = 1
build_dir = tempfile.mkdtemp()
_reg1 = re.compile(r'[-][-]link[-]')
sysinfo_flags = [_m for _m in sys.argv[1:] if _reg1.match(_m)]
sys.argv = [_m for _m in sys.argv if _m not in sysinfo_flags]
if sysinfo_flags:
sysinfo_flags = [f[7:] for f in sysinfo_flags]
_reg2 = re.compile(
r'[-][-]((no[-]|)(wrap[-]functions|lower)|debug[-]capi|quiet)|[-]include')
f2py_flags = [_m for _m in sys.argv[1:] if _reg2.match(_m)]
sys.argv = [_m for _m in sys.argv if _m not in f2py_flags]
f2py_flags2 = []
fl = 0
for a in sys.argv[1:]:
if a in ['only:', 'skip:']:
fl = 1
elif a == ':':
fl = 0
if fl or a == ':':
f2py_flags2.append(a)
if f2py_flags2 and f2py_flags2[-1] != ':':
f2py_flags2.append(':')
f2py_flags.extend(f2py_flags2)
sys.argv = [_m for _m in sys.argv if _m not in f2py_flags2]
_reg3 = re.compile(
r'[-][-]((f(90)?compiler([-]exec|)|compiler)=|help[-]compiler)')
flib_flags = [_m for _m in sys.argv[1:] if _reg3.match(_m)]
sys.argv = [_m for _m in sys.argv if _m not in flib_flags]
_reg4 = re.compile(
r'[-][-]((f(77|90)(flags|exec)|opt|arch)=|(debug|noopt|noarch|help[-]fcompiler))')
fc_flags = [_m for _m in sys.argv[1:] if _reg4.match(_m)]
sys.argv = [_m for _m in sys.argv if _m not in fc_flags]
if 1:
del_list = []
for s in flib_flags:
v = '--fcompiler='
if s[:len(v)] == v:
from numpy.distutils import fcompiler
fcompiler.load_all_fcompiler_classes()
allowed_keys = list(fcompiler.fcompiler_class.keys())
nv = ov = s[len(v):].lower()
if ov not in allowed_keys:
vmap = {} # XXX
try:
nv = vmap[ov]
except KeyError:
if ov not in vmap.values():
print('Unknown vendor: "%s"' % (s[len(v):]))
nv = ov
i = flib_flags.index(s)
flib_flags[i] = '--fcompiler=' + nv
continue
for s in del_list:
i = flib_flags.index(s)
del flib_flags[i]
assert len(flib_flags) <= 2, repr(flib_flags)
_reg5 = re.compile(r'[-][-](verbose)')
setup_flags = [_m for _m in sys.argv[1:] if _reg5.match(_m)]
sys.argv = [_m for _m in sys.argv if _m not in setup_flags]
if '--quiet' in f2py_flags:
setup_flags.append('--quiet')
modulename = 'untitled'
sources = sys.argv[1:]
for optname in ['--include_paths', '--include-paths']:
if optname in sys.argv:
i = sys.argv.index(optname)
f2py_flags.extend(sys.argv[i:i + 2])
del sys.argv[i + 1], sys.argv[i]
sources = sys.argv[1:]
if '-m' in sys.argv:
i = sys.argv.index('-m')
modulename = sys.argv[i + 1]
del sys.argv[i + 1], sys.argv[i]
sources = sys.argv[1:]
else:
from numpy.distutils.command.build_src import get_f2py_modulename
pyf_files, sources = filter_files('', '[.]pyf([.]src|)', sources)
sources = pyf_files + sources
for f in pyf_files:
modulename = get_f2py_modulename(f)
if modulename:
break
extra_objects, sources = filter_files('', '[.](o|a|so)', sources)
include_dirs, sources = filter_files('-I', '', sources, remove_prefix=1)
library_dirs, sources = filter_files('-L', '', sources, remove_prefix=1)
libraries, sources = filter_files('-l', '', sources, remove_prefix=1)
undef_macros, sources = filter_files('-U', '', sources, remove_prefix=1)
define_macros, sources = filter_files('-D', '', sources, remove_prefix=1)
for i in range(len(define_macros)):
name_value = define_macros[i].split('=', 1)
if len(name_value) == 1:
name_value.append(None)
if len(name_value) == 2:
define_macros[i] = tuple(name_value)
else:
print('Invalid use of -D:', name_value)
from numpy.distutils.system_info import get_info
num_info = {}
if num_info:
include_dirs.extend(num_info.get('include_dirs', []))
from numpy.distutils.core import setup, Extension
ext_args = {'name': modulename, 'sources': sources,
'include_dirs': include_dirs,
'library_dirs': library_dirs,
'libraries': libraries,
'define_macros': define_macros,
'undef_macros': undef_macros,
'extra_objects': extra_objects,
'f2py_options': f2py_flags,
}
if sysinfo_flags:
from numpy.distutils.misc_util import dict_append
for n in sysinfo_flags:
i = get_info(n)
if not i:
outmess('No %s resources found in system'
' (try `f2py --help-link`)\n' % (repr(n)))
dict_append(ext_args, **i)
ext = Extension(**ext_args)
sys.argv = [sys.argv[0]] + setup_flags
sys.argv.extend(['build',
'--build-temp', build_dir,
'--build-base', build_dir,
'--build-platlib', '.'])
if fc_flags:
sys.argv.extend(['config_fc'] + fc_flags)
if flib_flags:
sys.argv.extend(['build_ext'] + flib_flags)
setup(ext_modules=[ext])
if remove_build_dir and os.path.exists(build_dir):
import shutil
outmess('Removing build directory %s\n' % (build_dir))
shutil.rmtree(build_dir)
def main():
if '--help-link' in sys.argv[1:]:
sys.argv.remove('--help-link')
from numpy.distutils.system_info import show_all
show_all()
return
if '-c' in sys.argv[1:]:
run_compile()
else:
run_main(sys.argv[1:])
# if __name__ == "__main__":
# main()
# EOF
| gpl-3.0 | -2,306,411,861,323,313,700 | 33.920732 | 108 | 0.549022 | false | 3.664107 | false | false | false |
prasen-ftech/pywinauto | doc_src/build_autodoc_files.py | 16 | 2513 | "Build up the sphinx autodoc file for the python code"
import os
import sys
docs_folder = os.path.dirname(__file__)
pywin_folder = os.path.dirname(docs_folder)
sys.path.append(pywin_folder)
pywin_folder = os.path.join(pywin_folder, "pywinauto")
excluded_dirs = ["unittests"]
excluded_files = [
"_menux.py",
"__init__.py",
"win32defines.py",
"win32structures.py",
"win32functions.py"]
output_folder = os.path.join(docs_folder, "code")
try:
os.mkdir(output_folder)
except WindowsError:
pass
module_docs = []
for root, dirs, files in os.walk(pywin_folder):
# Skip over directories we don't want to document
for i, d in enumerate(dirs):
if d in excluded_dirs:
del dirs[i]
py_files = [f for f in files if f.endswith(".py")]
for py_filename in py_files:
# skip over py files we don't want to document
if py_filename in excluded_files:
continue
py_filepath = os.path.join(root, py_filename)
# find the last instance of 'pywinauto' to make a module name from
# the path
modulename = 'pywinauto' + py_filepath.rsplit("pywinauto", 1)[1]
modulename = os.path.splitext(modulename)[0]
modulename = modulename.replace('\\', '.')
# the final doc name is the modulename + .txt
doc_source_filename = os.path.join(output_folder, modulename + ".txt")
# skip files that are already generated
if os.path.exists(doc_source_filename):
continue
print py_filename
out = open(doc_source_filename, "w")
out.write(modulename + "\n")
out.write("-" * len(modulename) + "\n")
out.write(" .. automodule:: %s\n"% modulename)
out.write(" :members:\n")
out.write(" :undoc-members:\n\n")
#out.write(" :inherited-members:\n")
#out.write(" .. autoattribute:: %s\n"% modulename)
out.close()
module_docs.append(doc_source_filename)
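# For reference, each generated stub (e.g. code/pywinauto.application.txt --
# the module name is only an example) contains a short reST block of the form
# produced by the write calls above:
#
#   pywinauto.application
#   ---------------------
#    .. automodule:: pywinauto.application
#    :members:
#    :undoc-members: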
# This section needs to be updated - I should ideally parse the
# existing file to see if any new docs have been added; if not, then
# I should just leave the file alone rather than re-create it.
#
#c = open(os.path.join(output_folder, "code.txt"), "w")
#c.write("Source Code\n")
#c.write("=" * 30 + "\n")
#
#c.write(".. toctree::\n")
#c.write(" :maxdepth: 3\n\n")
#for doc in module_docs:
# c.write(" " + doc + "\n")
#
#c.close()
| lgpl-2.1 | 3,804,347,580,718,601,000 | 27.22093 | 78 | 0.592121 | false | 3.373154 | false | false | false |
yukoba/sympy | sympy/polys/rootisolation.py | 78 | 55536 | """Real and complex root isolation and refinement algorithms. """
from __future__ import print_function, division
from sympy.polys.densebasic import (
dup_LC, dup_TC, dup_degree,
dup_strip, dup_reverse,
dup_convert,
dup_terms_gcd)
from sympy.polys.densearith import (
dup_neg, dup_rshift, dup_rem)
from sympy.polys.densetools import (
dup_clear_denoms,
dup_mirror, dup_scale, dup_shift,
dup_transform,
dup_diff,
dup_eval, dmp_eval_in,
dup_sign_variations,
dup_real_imag)
from sympy.polys.sqfreetools import (
dup_sqf_part, dup_sqf_list)
from sympy.polys.factortools import (
dup_factor_list)
from sympy.polys.polyerrors import (
RefinementFailed,
DomainError)
from sympy.core.compatibility import range
def dup_sturm(f, K):
"""
Computes the Sturm sequence of ``f`` in ``F[x]``.
Given a univariate, square-free polynomial ``f(x)`` returns the
associated Sturm sequence ``f_0(x), ..., f_n(x)`` defined by::
f_0(x), f_1(x) = f(x), f'(x)
f_n = -rem(f_{n-2}(x), f_{n-1}(x))
Examples
========
>>> from sympy.polys import ring, QQ
>>> R, x = ring("x", QQ)
>>> R.dup_sturm(x**3 - 2*x**2 + x - 3)
[x**3 - 2*x**2 + x - 3, 3*x**2 - 4*x + 1, 2/9*x + 25/9, -2079/4]
References
==========
1. [Davenport88]_
"""
if not K.has_Field:
raise DomainError("can't compute Sturm sequence over %s" % K)
f = dup_sqf_part(f, K)
sturm = [f, dup_diff(f, 1, K)]
while sturm[-1]:
s = dup_rem(sturm[-2], sturm[-1], K)
sturm.append(dup_neg(s, K))
return sturm[:-1]
def dup_root_upper_bound(f, K):
"""Compute the LMQ upper bound for the positive roots of `f`;
LMQ (Local Max Quadratic) was developed by Akritas-Strzebonski-Vigklas.
Reference:
==========
Alkiviadis G. Akritas: "Linear and Quadratic Complexity Bounds on the
Values of the Positive Roots of Polynomials"
Journal of Universal Computer Science, Vol. 15, No. 3, 523-537, 2009.
"""
n, P = len(f), []
t = n * [K.one]
if dup_LC(f, K) < 0:
f = dup_neg(f, K)
f = list(reversed(f))
for i in range(0, n):
if f[i] >= 0:
continue
a, QL = K.log(-f[i], 2), []
for j in range(i + 1, n):
if f[j] <= 0:
continue
q = t[j] + a - K.log(f[j], 2)
QL.append([q // (j - i) , j])
if not QL:
continue
q = min(QL)
t[q[1]] = t[q[1]] + 1
P.append(q[0])
if not P:
return None
else:
return K.get_field()(2)**(max(P) + 1)
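# Hedged doctest-style sketch (dense coefficient lists over ZZ; the exact
# rational returned depends on the bound computation, but it always majorises
# the positive roots):
#
#   >>> from sympy.polys.domains import ZZ
#   >>> dup_root_upper_bound([ZZ(1), ZZ(0), ZZ(-2)], ZZ)   # x**2 - 2
#   4
#
# Here 4 is an upper bound for the positive root sqrt(2) ~ 1.414.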
def dup_root_lower_bound(f, K):
"""Compute the LMQ lower bound for the positive roots of `f`;
LMQ (Local Max Quadratic) was developed by Akritas-Strzebonski-Vigklas.
Reference:
==========
Alkiviadis G. Akritas: "Linear and Quadratic Complexity Bounds on the
Values of the Positive Roots of Polynomials"
Journal of Universal Computer Science, Vol. 15, No. 3, 523-537, 2009.
"""
bound = dup_root_upper_bound(dup_reverse(f), K)
if bound is not None:
return 1/bound
else:
return None
def _mobius_from_interval(I, field):
"""Convert an open interval to a Mobius transform. """
s, t = I
a, c = field.numer(s), field.denom(s)
b, d = field.numer(t), field.denom(t)
return a, b, c, d
def _mobius_to_interval(M, field):
"""Convert a Mobius transform to an open interval. """
a, b, c, d = M
s, t = field(a, c), field(b, d)
if s <= t:
return (s, t)
else:
return (t, s)
def dup_step_refine_real_root(f, M, K, fast=False):
"""One step of positive real root refinement algorithm. """
a, b, c, d = M
if a == b and c == d:
return f, (a, b, c, d)
A = dup_root_lower_bound(f, K)
if A is not None:
A = K(int(A))
else:
A = K.zero
if fast and A > 16:
f = dup_scale(f, A, K)
a, c, A = A*a, A*c, K.one
if A >= K.one:
f = dup_shift(f, A, K)
b, d = A*a + b, A*c + d
if not dup_eval(f, K.zero, K):
return f, (b, b, d, d)
f, g = dup_shift(f, K.one, K), f
a1, b1, c1, d1 = a, a + b, c, c + d
if not dup_eval(f, K.zero, K):
return f, (b1, b1, d1, d1)
k = dup_sign_variations(f, K)
if k == 1:
a, b, c, d = a1, b1, c1, d1
else:
f = dup_shift(dup_reverse(g), K.one, K)
if not dup_eval(f, K.zero, K):
f = dup_rshift(f, 1, K)
a, b, c, d = b, a + b, d, c + d
return f, (a, b, c, d)
def dup_inner_refine_real_root(f, M, K, eps=None, steps=None, disjoint=None, fast=False, mobius=False):
"""Refine a positive root of `f` given a Mobius transform or an interval. """
F = K.get_field()
if len(M) == 2:
a, b, c, d = _mobius_from_interval(M, F)
else:
a, b, c, d = M
while not c:
f, (a, b, c, d) = dup_step_refine_real_root(f, (a, b, c,
d), K, fast=fast)
if eps is not None and steps is not None:
for i in range(0, steps):
if abs(F(a, c) - F(b, d)) >= eps:
f, (a, b, c, d) = dup_step_refine_real_root(f, (a, b, c, d), K, fast=fast)
else:
break
else:
if eps is not None:
while abs(F(a, c) - F(b, d)) >= eps:
f, (a, b, c, d) = dup_step_refine_real_root(f, (a, b, c, d), K, fast=fast)
if steps is not None:
for i in range(0, steps):
f, (a, b, c, d) = dup_step_refine_real_root(f, (a, b, c, d), K, fast=fast)
if disjoint is not None:
while True:
u, v = _mobius_to_interval((a, b, c, d), F)
if v <= disjoint or disjoint <= u:
break
else:
f, (a, b, c, d) = dup_step_refine_real_root(f, (a, b, c, d), K, fast=fast)
if not mobius:
return _mobius_to_interval((a, b, c, d), F)
else:
return f, (a, b, c, d)
def dup_outer_refine_real_root(f, s, t, K, eps=None, steps=None, disjoint=None, fast=False):
"""Refine a positive root of `f` given an interval `(s, t)`. """
a, b, c, d = _mobius_from_interval((s, t), K.get_field())
f = dup_transform(f, dup_strip([a, b]),
dup_strip([c, d]), K)
if dup_sign_variations(f, K) != 1:
raise RefinementFailed("there should be exactly one root in the (%s, %s) interval" % (s, t))
return dup_inner_refine_real_root(f, (a, b, c, d), K, eps=eps, steps=steps, disjoint=disjoint, fast=fast)
def dup_refine_real_root(f, s, t, K, eps=None, steps=None, disjoint=None, fast=False):
"""Refine real root's approximating interval to the given precision. """
if K.is_QQ:
(_, f), K = dup_clear_denoms(f, K, convert=True), K.get_ring()
elif not K.is_ZZ:
raise DomainError("real root refinement not supported over %s" % K)
if s == t:
return (s, t)
if s > t:
s, t = t, s
negative = False
if s < 0:
if t <= 0:
f, s, t, negative = dup_mirror(f, K), -t, -s, True
else:
raise ValueError("can't refine a real root in (%s, %s)" % (s, t))
if negative and disjoint is not None:
if disjoint < 0:
disjoint = -disjoint
else:
disjoint = None
s, t = dup_outer_refine_real_root(
f, s, t, K, eps=eps, steps=steps, disjoint=disjoint, fast=fast)
if negative:
return (-t, -s)
else:
return ( s, t)
def dup_inner_isolate_real_roots(f, K, eps=None, fast=False):
"""Internal function for isolation positive roots up to given precision.
References:
===========
1. Alkiviadis G. Akritas and Adam W. Strzebonski: A Comparative Study of Two Real Root
Isolation Methods. Nonlinear Analysis: Modelling and Control, Vol. 10, No. 4, 297-304, 2005.
2. Alkiviadis G. Akritas, Adam W. Strzebonski and Panagiotis S. Vigklas: Improving the
Performance of the Continued Fractions Method Using new Bounds of Positive Roots. Nonlinear
Analysis: Modelling and Control, Vol. 13, No. 3, 265-279, 2008.
"""
a, b, c, d = K.one, K.zero, K.zero, K.one
k = dup_sign_variations(f, K)
if k == 0:
return []
if k == 1:
roots = [dup_inner_refine_real_root(
f, (a, b, c, d), K, eps=eps, fast=fast, mobius=True)]
else:
roots, stack = [], [(a, b, c, d, f, k)]
while stack:
a, b, c, d, f, k = stack.pop()
A = dup_root_lower_bound(f, K)
if A is not None:
A = K(int(A))
else:
A = K.zero
if fast and A > 16:
f = dup_scale(f, A, K)
a, c, A = A*a, A*c, K.one
if A >= K.one:
f = dup_shift(f, A, K)
b, d = A*a + b, A*c + d
if not dup_TC(f, K):
roots.append((f, (b, b, d, d)))
f = dup_rshift(f, 1, K)
k = dup_sign_variations(f, K)
if k == 0:
continue
if k == 1:
roots.append(dup_inner_refine_real_root(
f, (a, b, c, d), K, eps=eps, fast=fast, mobius=True))
continue
f1 = dup_shift(f, K.one, K)
a1, b1, c1, d1, r = a, a + b, c, c + d, 0
if not dup_TC(f1, K):
roots.append((f1, (b1, b1, d1, d1)))
f1, r = dup_rshift(f1, 1, K), 1
k1 = dup_sign_variations(f1, K)
k2 = k - k1 - r
a2, b2, c2, d2 = b, a + b, d, c + d
if k2 > 1:
f2 = dup_shift(dup_reverse(f), K.one, K)
if not dup_TC(f2, K):
f2 = dup_rshift(f2, 1, K)
k2 = dup_sign_variations(f2, K)
else:
f2 = None
if k1 < k2:
a1, a2, b1, b2 = a2, a1, b2, b1
c1, c2, d1, d2 = c2, c1, d2, d1
f1, f2, k1, k2 = f2, f1, k2, k1
if not k1:
continue
if f1 is None:
f1 = dup_shift(dup_reverse(f), K.one, K)
if not dup_TC(f1, K):
f1 = dup_rshift(f1, 1, K)
if k1 == 1:
roots.append(dup_inner_refine_real_root(
f1, (a1, b1, c1, d1), K, eps=eps, fast=fast, mobius=True))
else:
stack.append((a1, b1, c1, d1, f1, k1))
if not k2:
continue
if f2 is None:
f2 = dup_shift(dup_reverse(f), K.one, K)
if not dup_TC(f2, K):
f2 = dup_rshift(f2, 1, K)
if k2 == 1:
roots.append(dup_inner_refine_real_root(
f2, (a2, b2, c2, d2), K, eps=eps, fast=fast, mobius=True))
else:
stack.append((a2, b2, c2, d2, f2, k2))
return roots
def _discard_if_outside_interval(f, M, inf, sup, K, negative, fast, mobius):
"""Discard an isolating interval if outside ``(inf, sup)``. """
F = K.get_field()
while True:
u, v = _mobius_to_interval(M, F)
if negative:
u, v = -v, -u
if (inf is None or u >= inf) and (sup is None or v <= sup):
if not mobius:
return u, v
else:
return f, M
elif (sup is not None and u > sup) or (inf is not None and v < inf):
return None
else:
f, M = dup_step_refine_real_root(f, M, K, fast=fast)
def dup_inner_isolate_positive_roots(f, K, eps=None, inf=None, sup=None, fast=False, mobius=False):
"""Iteratively compute disjoint positive root isolation intervals. """
if sup is not None and sup < 0:
return []
roots = dup_inner_isolate_real_roots(f, K, eps=eps, fast=fast)
F, results = K.get_field(), []
if inf is not None or sup is not None:
for f, M in roots:
result = _discard_if_outside_interval(f, M, inf, sup, K, False, fast, mobius)
if result is not None:
results.append(result)
elif not mobius:
for f, M in roots:
u, v = _mobius_to_interval(M, F)
results.append((u, v))
else:
results = roots
return results
def dup_inner_isolate_negative_roots(f, K, inf=None, sup=None, eps=None, fast=False, mobius=False):
"""Iteratively compute disjoint negative root isolation intervals. """
if inf is not None and inf >= 0:
return []
roots = dup_inner_isolate_real_roots(dup_mirror(f, K), K, eps=eps, fast=fast)
F, results = K.get_field(), []
if inf is not None or sup is not None:
for f, M in roots:
result = _discard_if_outside_interval(f, M, inf, sup, K, True, fast, mobius)
if result is not None:
results.append(result)
elif not mobius:
for f, M in roots:
u, v = _mobius_to_interval(M, F)
results.append((-v, -u))
else:
results = roots
return results
def _isolate_zero(f, K, inf, sup, basis=False, sqf=False):
"""Handle special case of CF algorithm when ``f`` is homogeneous. """
j, f = dup_terms_gcd(f, K)
if j > 0:
F = K.get_field()
if (inf is None or inf <= 0) and (sup is None or 0 <= sup):
if not sqf:
if not basis:
return [((F.zero, F.zero), j)], f
else:
return [((F.zero, F.zero), j, [K.one, K.zero])], f
else:
return [(F.zero, F.zero)], f
return [], f
def dup_isolate_real_roots_sqf(f, K, eps=None, inf=None, sup=None, fast=False, blackbox=False):
"""Isolate real roots of a square-free polynomial using the Vincent-Akritas-Strzebonski (VAS) CF approach.
References:
===========
1. Alkiviadis G. Akritas and Adam W. Strzebonski: A Comparative Study of Two Real Root Isolation Methods.
Nonlinear Analysis: Modelling and Control, Vol. 10, No. 4, 297-304, 2005.
2. Alkiviadis G. Akritas, Adam W. Strzebonski and Panagiotis S. Vigklas: Improving the Performance
of the Continued Fractions Method Using New Bounds of Positive Roots.
Nonlinear Analysis: Modelling and Control, Vol. 13, No. 3, 265-279, 2008.
"""
if K.is_QQ:
(_, f), K = dup_clear_denoms(f, K, convert=True), K.get_ring()
elif not K.is_ZZ:
raise DomainError("isolation of real roots not supported over %s" % K)
if dup_degree(f) <= 0:
return []
I_zero, f = _isolate_zero(f, K, inf, sup, basis=False, sqf=True)
I_neg = dup_inner_isolate_negative_roots(f, K, eps=eps, inf=inf, sup=sup, fast=fast)
I_pos = dup_inner_isolate_positive_roots(f, K, eps=eps, inf=inf, sup=sup, fast=fast)
roots = sorted(I_neg + I_zero + I_pos)
if not blackbox:
return roots
else:
return [ RealInterval((a, b), f, K) for (a, b) in roots ]
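# Hedged usage sketch (the isolating intervals shown are what the VAS method
# is expected to produce for this input; treat them as illustrative):
#
#   >>> from sympy.polys.domains import ZZ
#   >>> dup_isolate_real_roots_sqf([ZZ(1), ZZ(0), ZZ(-2)], ZZ)   # x**2 - 2
#   [(-2, -1), (1, 2)]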
def dup_isolate_real_roots(f, K, eps=None, inf=None, sup=None, basis=False, fast=False):
"""Isolate real roots using Vincent-Akritas-Strzebonski (VAS) continued fractions approach.
References:
===========
1. Alkiviadis G. Akritas and Adam W. Strzebonski: A Comparative Study of Two Real Root Isolation Methods.
Nonlinear Analysis: Modelling and Control, Vol. 10, No. 4, 297-304, 2005.
2. Alkiviadis G. Akritas, Adam W. Strzebonski and Panagiotis S. Vigklas: Improving the Performance
of the Continued Fractions Method Using New Bounds of Positive Roots.
Nonlinear Analysis: Modelling and Control, Vol. 13, No. 3, 265-279, 2008.
"""
if K.is_QQ:
(_, f), K = dup_clear_denoms(f, K, convert=True), K.get_ring()
elif not K.is_ZZ:
raise DomainError("isolation of real roots not supported over %s" % K)
if dup_degree(f) <= 0:
return []
I_zero, f = _isolate_zero(f, K, inf, sup, basis=basis, sqf=False)
_, factors = dup_sqf_list(f, K)
if len(factors) == 1:
((f, k),) = factors
I_neg = dup_inner_isolate_negative_roots(f, K, eps=eps, inf=inf, sup=sup, fast=fast)
I_pos = dup_inner_isolate_positive_roots(f, K, eps=eps, inf=inf, sup=sup, fast=fast)
I_neg = [ ((u, v), k) for u, v in I_neg ]
I_pos = [ ((u, v), k) for u, v in I_pos ]
else:
I_neg, I_pos = _real_isolate_and_disjoin(factors, K,
eps=eps, inf=inf, sup=sup, basis=basis, fast=fast)
return sorted(I_neg + I_zero + I_pos)
def dup_isolate_real_roots_list(polys, K, eps=None, inf=None, sup=None, strict=False, basis=False, fast=False):
"""Isolate real roots of a list of square-free polynomial using Vincent-Akritas-Strzebonski (VAS) CF approach.
References:
===========
1. Alkiviadis G. Akritas and Adam W. Strzebonski: A Comparative Study of Two Real Root Isolation Methods.
Nonlinear Analysis: Modelling and Control, Vol. 10, No. 4, 297-304, 2005.
2. Alkiviadis G. Akritas, Adam W. Strzebonski and Panagiotis S. Vigklas: Improving the Performance
of the Continued Fractions Method Using New Bounds of Positive Roots.
Nonlinear Analysis: Modelling and Control, Vol. 13, No. 3, 265-279, 2008.
"""
if K.is_QQ:
K, F, polys = K.get_ring(), K, polys[:]
for i, p in enumerate(polys):
polys[i] = dup_clear_denoms(p, F, K, convert=True)[1]
elif not K.is_ZZ:
raise DomainError("isolation of real roots not supported over %s" % K)
zeros, factors_dict = False, {}
if (inf is None or inf <= 0) and (sup is None or 0 <= sup):
zeros, zero_indices = True, {}
for i, p in enumerate(polys):
j, p = dup_terms_gcd(p, K)
if zeros and j > 0:
zero_indices[i] = j
for f, k in dup_factor_list(p, K)[1]:
f = tuple(f)
if f not in factors_dict:
factors_dict[f] = {i: k}
else:
factors_dict[f][i] = k
factors_list = []
for f, indices in factors_dict.items():
factors_list.append((list(f), indices))
I_neg, I_pos = _real_isolate_and_disjoin(factors_list, K, eps=eps,
inf=inf, sup=sup, strict=strict, basis=basis, fast=fast)
F = K.get_field()
if not zeros or not zero_indices:
I_zero = []
else:
if not basis:
I_zero = [((F.zero, F.zero), zero_indices)]
else:
I_zero = [((F.zero, F.zero), zero_indices, [K.one, K.zero])]
return sorted(I_neg + I_zero + I_pos)
def _disjoint_p(M, N, strict=False):
"""Check if Mobius transforms define disjoint intervals. """
a1, b1, c1, d1 = M
a2, b2, c2, d2 = N
a1d1, b1c1 = a1*d1, b1*c1
a2d2, b2c2 = a2*d2, b2*c2
if a1d1 == b1c1 and a2d2 == b2c2:
return True
if a1d1 > b1c1:
a1, c1, b1, d1 = b1, d1, a1, c1
if a2d2 > b2c2:
a2, c2, b2, d2 = b2, d2, a2, c2
if not strict:
return a2*d1 >= c2*b1 or b2*c1 <= d2*a1
else:
return a2*d1 > c2*b1 or b2*c1 < d2*a1
def _real_isolate_and_disjoin(factors, K, eps=None, inf=None, sup=None, strict=False, basis=False, fast=False):
"""Isolate real roots of a list of polynomials and disjoin intervals. """
I_pos, I_neg = [], []
for i, (f, k) in enumerate(factors):
for F, M in dup_inner_isolate_positive_roots(f, K, eps=eps, inf=inf, sup=sup, fast=fast, mobius=True):
I_pos.append((F, M, k, f))
for G, N in dup_inner_isolate_negative_roots(f, K, eps=eps, inf=inf, sup=sup, fast=fast, mobius=True):
I_neg.append((G, N, k, f))
for i, (f, M, k, F) in enumerate(I_pos):
for j, (g, N, m, G) in enumerate(I_pos[i + 1:]):
while not _disjoint_p(M, N, strict=strict):
f, M = dup_inner_refine_real_root(f, M, K, steps=1, fast=fast, mobius=True)
g, N = dup_inner_refine_real_root(g, N, K, steps=1, fast=fast, mobius=True)
I_pos[i + j + 1] = (g, N, m, G)
I_pos[i] = (f, M, k, F)
for i, (f, M, k, F) in enumerate(I_neg):
for j, (g, N, m, G) in enumerate(I_neg[i + 1:]):
while not _disjoint_p(M, N, strict=strict):
f, M = dup_inner_refine_real_root(f, M, K, steps=1, fast=fast, mobius=True)
g, N = dup_inner_refine_real_root(g, N, K, steps=1, fast=fast, mobius=True)
I_neg[i + j + 1] = (g, N, m, G)
I_neg[i] = (f, M, k, F)
if strict:
for i, (f, M, k, F) in enumerate(I_neg):
if not M[0]:
while not M[0]:
f, M = dup_inner_refine_real_root(f, M, K, steps=1, fast=fast, mobius=True)
I_neg[i] = (f, M, k, F)
break
for j, (g, N, m, G) in enumerate(I_pos):
if not N[0]:
while not N[0]:
g, N = dup_inner_refine_real_root(g, N, K, steps=1, fast=fast, mobius=True)
I_pos[j] = (g, N, m, G)
break
field = K.get_field()
I_neg = [ (_mobius_to_interval(M, field), k, f) for (_, M, k, f) in I_neg ]
I_pos = [ (_mobius_to_interval(M, field), k, f) for (_, M, k, f) in I_pos ]
if not basis:
I_neg = [ ((-v, -u), k) for ((u, v), k, _) in I_neg ]
I_pos = [ (( u, v), k) for ((u, v), k, _) in I_pos ]
else:
I_neg = [ ((-v, -u), k, f) for ((u, v), k, f) in I_neg ]
I_pos = [ (( u, v), k, f) for ((u, v), k, f) in I_pos ]
return I_neg, I_pos
def dup_count_real_roots(f, K, inf=None, sup=None):
"""Returns the number of distinct real roots of ``f`` in ``[inf, sup]``. """
if dup_degree(f) <= 0:
return 0
if not K.has_Field:
R, K = K, K.get_field()
f = dup_convert(f, R, K)
sturm = dup_sturm(f, K)
if inf is None:
signs_inf = dup_sign_variations([ dup_LC(s, K)*(-1)**dup_degree(s) for s in sturm ], K)
else:
signs_inf = dup_sign_variations([ dup_eval(s, inf, K) for s in sturm ], K)
if sup is None:
signs_sup = dup_sign_variations([ dup_LC(s, K) for s in sturm ], K)
else:
signs_sup = dup_sign_variations([ dup_eval(s, sup, K) for s in sturm ], K)
count = abs(signs_inf - signs_sup)
if inf is not None and not dup_eval(f, inf, K):
count += 1
return count
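# Hedged example of the Sturm-based count above (illustrative):
#
#   >>> from sympy.polys.domains import ZZ
#   >>> dup_count_real_roots([ZZ(1), ZZ(0), ZZ(-2)], ZZ)          # x**2 - 2
#   2
#   >>> dup_count_real_roots([ZZ(1), ZZ(0), ZZ(-2)], ZZ, inf=0)   # only sqrt(2)
#   1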
OO = 'OO' # Origin of (re, im) coordinate system
Q1 = 'Q1' # Quadrant #1 (++): re > 0 and im > 0
Q2 = 'Q2' # Quadrant #2 (-+): re < 0 and im > 0
Q3 = 'Q3' # Quadrant #3 (--): re < 0 and im < 0
Q4 = 'Q4' # Quadrant #4 (+-): re > 0 and im < 0
A1 = 'A1' # Axis #1 (+0): re > 0 and im = 0
A2 = 'A2' # Axis #2 (0+): re = 0 and im > 0
A3 = 'A3' # Axis #3 (-0): re < 0 and im = 0
A4 = 'A4' # Axis #4 (0-): re = 0 and im < 0
_rules_simple = {
# Q --> Q (same) => no change
(Q1, Q1): 0,
(Q2, Q2): 0,
(Q3, Q3): 0,
(Q4, Q4): 0,
# A -- CCW --> Q => +1/4 (CCW)
(A1, Q1): 1,
(A2, Q2): 1,
(A3, Q3): 1,
(A4, Q4): 1,
# A -- CW --> Q => -1/4 (CCW)
(A1, Q4): 2,
(A2, Q1): 2,
(A3, Q2): 2,
(A4, Q3): 2,
# Q -- CCW --> A => +1/4 (CCW)
(Q1, A2): 3,
(Q2, A3): 3,
(Q3, A4): 3,
(Q4, A1): 3,
# Q -- CW --> A => -1/4 (CCW)
(Q1, A1): 4,
(Q2, A2): 4,
(Q3, A3): 4,
(Q4, A4): 4,
# Q -- CCW --> Q => +1/2 (CCW)
(Q1, Q2): +5,
(Q2, Q3): +5,
(Q3, Q4): +5,
(Q4, Q1): +5,
# Q -- CW --> Q => -1/2 (CW)
(Q1, Q4): -5,
(Q2, Q1): -5,
(Q3, Q2): -5,
(Q4, Q3): -5,
}
_rules_ambiguous = {
# A -- CCW --> Q => { +1/4 (CCW), -9/4 (CW) }
(A1, OO, Q1): -1,
(A2, OO, Q2): -1,
(A3, OO, Q3): -1,
(A4, OO, Q4): -1,
# A -- CW --> Q => { -1/4 (CCW), +7/4 (CW) }
(A1, OO, Q4): -2,
(A2, OO, Q1): -2,
(A3, OO, Q2): -2,
(A4, OO, Q3): -2,
# Q -- CCW --> A => { +1/4 (CCW), -9/4 (CW) }
(Q1, OO, A2): -3,
(Q2, OO, A3): -3,
(Q3, OO, A4): -3,
(Q4, OO, A1): -3,
# Q -- CW --> A => { -1/4 (CCW), +7/4 (CW) }
(Q1, OO, A1): -4,
(Q2, OO, A2): -4,
(Q3, OO, A3): -4,
(Q4, OO, A4): -4,
# A -- OO --> A => { +1 (CCW), -1 (CW) }
(A1, A3): 7,
(A2, A4): 7,
(A3, A1): 7,
(A4, A2): 7,
(A1, OO, A3): 7,
(A2, OO, A4): 7,
(A3, OO, A1): 7,
(A4, OO, A2): 7,
# Q -- DIA --> Q => { +1 (CCW), -1 (CW) }
(Q1, Q3): 8,
(Q2, Q4): 8,
(Q3, Q1): 8,
(Q4, Q2): 8,
(Q1, OO, Q3): 8,
(Q2, OO, Q4): 8,
(Q3, OO, Q1): 8,
(Q4, OO, Q2): 8,
# A --- R ---> A => { +1/2 (CCW), -3/2 (CW) }
(A1, A2): 9,
(A2, A3): 9,
(A3, A4): 9,
(A4, A1): 9,
(A1, OO, A2): 9,
(A2, OO, A3): 9,
(A3, OO, A4): 9,
(A4, OO, A1): 9,
# A --- L ---> A => { +3/2 (CCW), -1/2 (CW) }
(A1, A4): 10,
(A2, A1): 10,
(A3, A2): 10,
(A4, A3): 10,
(A1, OO, A4): 10,
(A2, OO, A1): 10,
(A3, OO, A2): 10,
(A4, OO, A3): 10,
# Q --- 1 ---> A => { +3/4 (CCW), -5/4 (CW) }
(Q1, A3): 11,
(Q2, A4): 11,
(Q3, A1): 11,
(Q4, A2): 11,
(Q1, OO, A3): 11,
(Q2, OO, A4): 11,
(Q3, OO, A1): 11,
(Q4, OO, A2): 11,
# Q --- 2 ---> A => { +5/4 (CCW), -3/4 (CW) }
(Q1, A4): 12,
(Q2, A1): 12,
(Q3, A2): 12,
(Q4, A3): 12,
(Q1, OO, A4): 12,
(Q2, OO, A1): 12,
(Q3, OO, A2): 12,
(Q4, OO, A3): 12,
# A --- 1 ---> Q => { +5/4 (CCW), -3/4 (CW) }
(A1, Q3): 13,
(A2, Q4): 13,
(A3, Q1): 13,
(A4, Q2): 13,
(A1, OO, Q3): 13,
(A2, OO, Q4): 13,
(A3, OO, Q1): 13,
(A4, OO, Q2): 13,
# A --- 2 ---> Q => { +3/4 (CCW), -5/4 (CW) }
(A1, Q2): 14,
(A2, Q3): 14,
(A3, Q4): 14,
(A4, Q1): 14,
(A1, OO, Q2): 14,
(A2, OO, Q3): 14,
(A3, OO, Q4): 14,
(A4, OO, Q1): 14,
# Q --> OO --> Q => { +1/2 (CCW), -3/2 (CW) }
(Q1, OO, Q2): 15,
(Q2, OO, Q3): 15,
(Q3, OO, Q4): 15,
(Q4, OO, Q1): 15,
# Q --> OO --> Q => { +3/2 (CCW), -1/2 (CW) }
(Q1, OO, Q4): 16,
(Q2, OO, Q1): 16,
(Q3, OO, Q2): 16,
(Q4, OO, Q3): 16,
# A --> OO --> A => { +2 (CCW), 0 (CW) }
(A1, OO, A1): 17,
(A2, OO, A2): 17,
(A3, OO, A3): 17,
(A4, OO, A4): 17,
# Q --> OO --> Q => { +2 (CCW), 0 (CW) }
(Q1, OO, Q1): 18,
(Q2, OO, Q2): 18,
(Q3, OO, Q3): 18,
(Q4, OO, Q4): 18,
}
# Winding-number contributions (rational ``(numerator, denominator)`` pairs)
# associated with each transition rule above; ambiguous rules carry both
# candidate values, and the index recorded by _traverse_quadrants picks which
# one applies.
_values = {
0: [( 0, 1)],
1: [(+1, 4)],
2: [(-1, 4)],
3: [(+1, 4)],
4: [(-1, 4)],
-1: [(+9, 4), (+1, 4)],
-2: [(+7, 4), (-1, 4)],
-3: [(+9, 4), (+1, 4)],
-4: [(+7, 4), (-1, 4)],
+5: [(+1, 2)],
-5: [(-1, 2)],
7: [(+1, 1), (-1, 1)],
8: [(+1, 1), (-1, 1)],
9: [(+1, 2), (-3, 2)],
10: [(+3, 2), (-1, 2)],
11: [(+3, 4), (-5, 4)],
12: [(+5, 4), (-3, 4)],
13: [(+5, 4), (-3, 4)],
14: [(+3, 4), (-5, 4)],
15: [(+1, 2), (-3, 2)],
16: [(+3, 2), (-1, 2)],
17: [(+2, 1), ( 0, 1)],
18: [(+2, 1), ( 0, 1)],
}
def _classify_point(re, im):
"""Return the half-axis (or origin) on which (re, im) point is located. """
if not re and not im:
return OO
if not re:
if im > 0:
return A2
else:
return A4
elif not im:
if re > 0:
return A1
else:
return A3
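# Quick sketch of the classification above (values are the half-axis/origin
# constants defined earlier; interior points fall through and return None):
#
#   _classify_point(0, 0)    # -> OO
#   _classify_point(2, 0)    # -> A1
#   _classify_point(0, -3)   # -> A4
#   _classify_point(1, 1)    # -> None (handled by the quadrant sign maps)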
def _intervals_to_quadrants(intervals, f1, f2, s, t, F):
"""Generate a sequence of extended quadrants from a list of critical points. """
if not intervals:
return []
Q = []
if not f1:
(a, b), _, _ = intervals[0]
if a == b == s:
if len(intervals) == 1:
if dup_eval(f2, t, F) > 0:
return [OO, A2]
else:
return [OO, A4]
else:
(a, _), _, _ = intervals[1]
if dup_eval(f2, (s + a)/2, F) > 0:
Q.extend([OO, A2])
f2_sgn = +1
else:
Q.extend([OO, A4])
f2_sgn = -1
intervals = intervals[1:]
else:
if dup_eval(f2, s, F) > 0:
Q.append(A2)
f2_sgn = +1
else:
Q.append(A4)
f2_sgn = -1
for (a, _), indices, _ in intervals:
Q.append(OO)
if indices[1] % 2 == 1:
f2_sgn = -f2_sgn
if a != t:
if f2_sgn > 0:
Q.append(A2)
else:
Q.append(A4)
return Q
if not f2:
(a, b), _, _ = intervals[0]
if a == b == s:
if len(intervals) == 1:
if dup_eval(f1, t, F) > 0:
return [OO, A1]
else:
return [OO, A3]
else:
(a, _), _, _ = intervals[1]
if dup_eval(f1, (s + a)/2, F) > 0:
Q.extend([OO, A1])
f1_sgn = +1
else:
Q.extend([OO, A3])
f1_sgn = -1
intervals = intervals[1:]
else:
if dup_eval(f1, s, F) > 0:
Q.append(A1)
f1_sgn = +1
else:
Q.append(A3)
f1_sgn = -1
for (a, _), indices, _ in intervals:
Q.append(OO)
if indices[0] % 2 == 1:
f1_sgn = -f1_sgn
if a != t:
if f1_sgn > 0:
Q.append(A1)
else:
Q.append(A3)
return Q
re = dup_eval(f1, s, F)
im = dup_eval(f2, s, F)
if not re or not im:
Q.append(_classify_point(re, im))
if len(intervals) == 1:
re = dup_eval(f1, t, F)
im = dup_eval(f2, t, F)
else:
(a, _), _, _ = intervals[1]
re = dup_eval(f1, (s + a)/2, F)
im = dup_eval(f2, (s + a)/2, F)
intervals = intervals[1:]
if re > 0:
f1_sgn = +1
else:
f1_sgn = -1
if im > 0:
f2_sgn = +1
else:
f2_sgn = -1
sgn = {
(+1, +1): Q1,
(-1, +1): Q2,
(-1, -1): Q3,
(+1, -1): Q4,
}
Q.append(sgn[(f1_sgn, f2_sgn)])
for (a, b), indices, _ in intervals:
if a == b:
re = dup_eval(f1, a, F)
im = dup_eval(f2, a, F)
cls = _classify_point(re, im)
if cls is not None:
Q.append(cls)
if 0 in indices:
if indices[0] % 2 == 1:
f1_sgn = -f1_sgn
if 1 in indices:
if indices[1] % 2 == 1:
f2_sgn = -f2_sgn
if not (a == b and b == t):
Q.append(sgn[(f1_sgn, f2_sgn)])
return Q
def _traverse_quadrants(Q_L1, Q_L2, Q_L3, Q_L4, exclude=None):
"""Transform sequences of quadrants to a sequence of rules. """
if exclude is True:
edges = [1, 1, 0, 0]
corners = {
(0, 1): 1,
(1, 2): 1,
(2, 3): 0,
(3, 0): 1,
}
else:
edges = [0, 0, 0, 0]
corners = {
(0, 1): 0,
(1, 2): 0,
(2, 3): 0,
(3, 0): 0,
}
if exclude is not None and exclude is not True:
exclude = set(exclude)
for i, edge in enumerate(['S', 'E', 'N', 'W']):
if edge in exclude:
edges[i] = 1
for i, corner in enumerate(['SW', 'SE', 'NE', 'NW']):
if corner in exclude:
corners[((i - 1) % 4, i)] = 1
QQ, rules = [Q_L1, Q_L2, Q_L3, Q_L4], []
for i, Q in enumerate(QQ):
if not Q:
continue
if Q[-1] == OO:
Q = Q[:-1]
if Q[0] == OO:
j, Q = (i - 1) % 4, Q[1:]
qq = (QQ[j][-2], OO, Q[0])
if qq in _rules_ambiguous:
rules.append((_rules_ambiguous[qq], corners[(j, i)]))
else:
raise NotImplementedError("3 element rule (corner): " + str(qq))
q1, k = Q[0], 1
while k < len(Q):
q2, k = Q[k], k + 1
if q2 != OO:
qq = (q1, q2)
if qq in _rules_simple:
rules.append((_rules_simple[qq], 0))
elif qq in _rules_ambiguous:
rules.append((_rules_ambiguous[qq], edges[i]))
else:
raise NotImplementedError("2 element rule (inside): " + str(qq))
else:
qq, k = (q1, q2, Q[k]), k + 1
if qq in _rules_ambiguous:
rules.append((_rules_ambiguous[qq], edges[i]))
else:
raise NotImplementedError("3 element rule (edge): " + str(qq))
q1 = qq[-1]
return rules
def _reverse_intervals(intervals):
"""Reverse intervals for traversal from right to left and from top to bottom. """
return [ ((b, a), indices, f) for (a, b), indices, f in reversed(intervals) ]
def _winding_number(T, field):
"""Compute the winding number of the input polynomial, i.e. the number of roots. """
return int(sum([ field(*_values[t][i]) for t, i in T ]) / field(2))
def dup_count_complex_roots(f, K, inf=None, sup=None, exclude=None):
"""Count all roots in [u + v*I, s + t*I] rectangle using Collins-Krandick algorithm. """
if not K.is_ZZ and not K.is_QQ:
raise DomainError("complex root counting is not supported over %s" % K)
if K.is_ZZ:
R, F = K, K.get_field()
else:
R, F = K.get_ring(), K
f = dup_convert(f, K, F)
if inf is None or sup is None:
n, lc = dup_degree(f), abs(dup_LC(f, F))
B = 2*max([ F.quo(abs(c), lc) for c in f ])
if inf is None:
(u, v) = (-B, -B)
else:
(u, v) = inf
if sup is None:
(s, t) = (+B, +B)
else:
(s, t) = sup
f1, f2 = dup_real_imag(f, F)
f1L1F = dmp_eval_in(f1, v, 1, 1, F)
f2L1F = dmp_eval_in(f2, v, 1, 1, F)
_, f1L1R = dup_clear_denoms(f1L1F, F, R, convert=True)
_, f2L1R = dup_clear_denoms(f2L1F, F, R, convert=True)
f1L2F = dmp_eval_in(f1, s, 0, 1, F)
f2L2F = dmp_eval_in(f2, s, 0, 1, F)
_, f1L2R = dup_clear_denoms(f1L2F, F, R, convert=True)
_, f2L2R = dup_clear_denoms(f2L2F, F, R, convert=True)
f1L3F = dmp_eval_in(f1, t, 1, 1, F)
f2L3F = dmp_eval_in(f2, t, 1, 1, F)
_, f1L3R = dup_clear_denoms(f1L3F, F, R, convert=True)
_, f2L3R = dup_clear_denoms(f2L3F, F, R, convert=True)
f1L4F = dmp_eval_in(f1, u, 0, 1, F)
f2L4F = dmp_eval_in(f2, u, 0, 1, F)
_, f1L4R = dup_clear_denoms(f1L4F, F, R, convert=True)
_, f2L4R = dup_clear_denoms(f2L4F, F, R, convert=True)
S_L1 = [f1L1R, f2L1R]
S_L2 = [f1L2R, f2L2R]
S_L3 = [f1L3R, f2L3R]
S_L4 = [f1L4R, f2L4R]
I_L1 = dup_isolate_real_roots_list(S_L1, R, inf=u, sup=s, fast=True, basis=True, strict=True)
I_L2 = dup_isolate_real_roots_list(S_L2, R, inf=v, sup=t, fast=True, basis=True, strict=True)
I_L3 = dup_isolate_real_roots_list(S_L3, R, inf=u, sup=s, fast=True, basis=True, strict=True)
I_L4 = dup_isolate_real_roots_list(S_L4, R, inf=v, sup=t, fast=True, basis=True, strict=True)
I_L3 = _reverse_intervals(I_L3)
I_L4 = _reverse_intervals(I_L4)
Q_L1 = _intervals_to_quadrants(I_L1, f1L1F, f2L1F, u, s, F)
Q_L2 = _intervals_to_quadrants(I_L2, f1L2F, f2L2F, v, t, F)
Q_L3 = _intervals_to_quadrants(I_L3, f1L3F, f2L3F, s, u, F)
Q_L4 = _intervals_to_quadrants(I_L4, f1L4F, f2L4F, t, v, F)
T = _traverse_quadrants(Q_L1, Q_L2, Q_L3, Q_L4, exclude=exclude)
return _winding_number(T, F)
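# Hedged example (counting in the default rectangle, which here contains the
# two roots +-I of x**2 + 1):
#
#   >>> from sympy.polys.domains import ZZ
#   >>> dup_count_complex_roots([ZZ(1), ZZ(0), ZZ(1)], ZZ)   # x**2 + 1
#   2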
def _vertical_bisection(N, a, b, I, Q, F1, F2, f1, f2, F):
"""Vertical bisection step in Collins-Krandick root isolation algorithm. """
(u, v), (s, t) = a, b
I_L1, I_L2, I_L3, I_L4 = I
Q_L1, Q_L2, Q_L3, Q_L4 = Q
f1L1F, f1L2F, f1L3F, f1L4F = F1
f2L1F, f2L2F, f2L3F, f2L4F = F2
x = (u + s) / 2
f1V = dmp_eval_in(f1, x, 0, 1, F)
f2V = dmp_eval_in(f2, x, 0, 1, F)
I_V = dup_isolate_real_roots_list([f1V, f2V], F, inf=v, sup=t, fast=True, strict=True, basis=True)
I_L1_L, I_L1_R = [], []
I_L2_L, I_L2_R = I_V, I_L2
I_L3_L, I_L3_R = [], []
I_L4_L, I_L4_R = I_L4, _reverse_intervals(I_V)
for I in I_L1:
(a, b), indices, h = I
if a == b:
if a == x:
I_L1_L.append(I)
I_L1_R.append(I)
elif a < x:
I_L1_L.append(I)
else:
I_L1_R.append(I)
else:
if b <= x:
I_L1_L.append(I)
elif a >= x:
I_L1_R.append(I)
else:
a, b = dup_refine_real_root(h, a, b, F.get_ring(), disjoint=x, fast=True)
if b <= x:
I_L1_L.append(((a, b), indices, h))
if a >= x:
I_L1_R.append(((a, b), indices, h))
for I in I_L3:
(b, a), indices, h = I
if a == b:
if a == x:
I_L3_L.append(I)
I_L3_R.append(I)
elif a < x:
I_L3_L.append(I)
else:
I_L3_R.append(I)
else:
if b <= x:
I_L3_L.append(I)
elif a >= x:
I_L3_R.append(I)
else:
a, b = dup_refine_real_root(h, a, b, F.get_ring(), disjoint=x, fast=True)
if b <= x:
I_L3_L.append(((b, a), indices, h))
if a >= x:
I_L3_R.append(((b, a), indices, h))
Q_L1_L = _intervals_to_quadrants(I_L1_L, f1L1F, f2L1F, u, x, F)
Q_L2_L = _intervals_to_quadrants(I_L2_L, f1V, f2V, v, t, F)
Q_L3_L = _intervals_to_quadrants(I_L3_L, f1L3F, f2L3F, x, u, F)
Q_L4_L = Q_L4
Q_L1_R = _intervals_to_quadrants(I_L1_R, f1L1F, f2L1F, x, s, F)
Q_L2_R = Q_L2
Q_L3_R = _intervals_to_quadrants(I_L3_R, f1L3F, f2L3F, s, x, F)
Q_L4_R = _intervals_to_quadrants(I_L4_R, f1V, f2V, t, v, F)
T_L = _traverse_quadrants(Q_L1_L, Q_L2_L, Q_L3_L, Q_L4_L, exclude=True)
T_R = _traverse_quadrants(Q_L1_R, Q_L2_R, Q_L3_R, Q_L4_R, exclude=True)
N_L = _winding_number(T_L, F)
N_R = _winding_number(T_R, F)
I_L = (I_L1_L, I_L2_L, I_L3_L, I_L4_L)
Q_L = (Q_L1_L, Q_L2_L, Q_L3_L, Q_L4_L)
I_R = (I_L1_R, I_L2_R, I_L3_R, I_L4_R)
Q_R = (Q_L1_R, Q_L2_R, Q_L3_R, Q_L4_R)
F1_L = (f1L1F, f1V, f1L3F, f1L4F)
F2_L = (f2L1F, f2V, f2L3F, f2L4F)
F1_R = (f1L1F, f1L2F, f1L3F, f1V)
F2_R = (f2L1F, f2L2F, f2L3F, f2V)
a, b = (u, v), (x, t)
c, d = (x, v), (s, t)
D_L = (N_L, a, b, I_L, Q_L, F1_L, F2_L)
D_R = (N_R, c, d, I_R, Q_R, F1_R, F2_R)
return D_L, D_R
def _horizontal_bisection(N, a, b, I, Q, F1, F2, f1, f2, F):
"""Horizontal bisection step in Collins-Krandick root isolation algorithm. """
(u, v), (s, t) = a, b
I_L1, I_L2, I_L3, I_L4 = I
Q_L1, Q_L2, Q_L3, Q_L4 = Q
f1L1F, f1L2F, f1L3F, f1L4F = F1
f2L1F, f2L2F, f2L3F, f2L4F = F2
y = (v + t) / 2
f1H = dmp_eval_in(f1, y, 1, 1, F)
f2H = dmp_eval_in(f2, y, 1, 1, F)
I_H = dup_isolate_real_roots_list([f1H, f2H], F, inf=u, sup=s, fast=True, strict=True, basis=True)
I_L1_B, I_L1_U = I_L1, I_H
I_L2_B, I_L2_U = [], []
I_L3_B, I_L3_U = _reverse_intervals(I_H), I_L3
I_L4_B, I_L4_U = [], []
for I in I_L2:
(a, b), indices, h = I
if a == b:
if a == y:
I_L2_B.append(I)
I_L2_U.append(I)
elif a < y:
I_L2_B.append(I)
else:
I_L2_U.append(I)
else:
if b <= y:
I_L2_B.append(I)
elif a >= y:
I_L2_U.append(I)
else:
a, b = dup_refine_real_root(h, a, b, F.get_ring(), disjoint=y, fast=True)
if b <= y:
I_L2_B.append(((a, b), indices, h))
if a >= y:
I_L2_U.append(((a, b), indices, h))
for I in I_L4:
(b, a), indices, h = I
if a == b:
if a == y:
I_L4_B.append(I)
I_L4_U.append(I)
elif a < y:
I_L4_B.append(I)
else:
I_L4_U.append(I)
else:
if b <= y:
I_L4_B.append(I)
elif a >= y:
I_L4_U.append(I)
else:
a, b = dup_refine_real_root(h, a, b, F.get_ring(), disjoint=y, fast=True)
if b <= y:
I_L4_B.append(((b, a), indices, h))
if a >= y:
I_L4_U.append(((b, a), indices, h))
Q_L1_B = Q_L1
Q_L2_B = _intervals_to_quadrants(I_L2_B, f1L2F, f2L2F, v, y, F)
Q_L3_B = _intervals_to_quadrants(I_L3_B, f1H, f2H, s, u, F)
Q_L4_B = _intervals_to_quadrants(I_L4_B, f1L4F, f2L4F, y, v, F)
Q_L1_U = _intervals_to_quadrants(I_L1_U, f1H, f2H, u, s, F)
Q_L2_U = _intervals_to_quadrants(I_L2_U, f1L2F, f2L2F, y, t, F)
Q_L3_U = Q_L3
Q_L4_U = _intervals_to_quadrants(I_L4_U, f1L4F, f2L4F, t, y, F)
T_B = _traverse_quadrants(Q_L1_B, Q_L2_B, Q_L3_B, Q_L4_B, exclude=True)
T_U = _traverse_quadrants(Q_L1_U, Q_L2_U, Q_L3_U, Q_L4_U, exclude=True)
N_B = _winding_number(T_B, F)
N_U = _winding_number(T_U, F)
I_B = (I_L1_B, I_L2_B, I_L3_B, I_L4_B)
Q_B = (Q_L1_B, Q_L2_B, Q_L3_B, Q_L4_B)
I_U = (I_L1_U, I_L2_U, I_L3_U, I_L4_U)
Q_U = (Q_L1_U, Q_L2_U, Q_L3_U, Q_L4_U)
F1_B = (f1L1F, f1L2F, f1H, f1L4F)
F2_B = (f2L1F, f2L2F, f2H, f2L4F)
F1_U = (f1H, f1L2F, f1L3F, f1L4F)
F2_U = (f2H, f2L2F, f2L3F, f2L4F)
a, b = (u, v), (s, y)
c, d = (u, y), (s, t)
D_B = (N_B, a, b, I_B, Q_B, F1_B, F2_B)
D_U = (N_U, c, d, I_U, Q_U, F1_U, F2_U)
return D_B, D_U
def _depth_first_select(rectangles):
"""Find a rectangle of minimum area for bisection. """
min_area, j = None, None
for i, (_, (u, v), (s, t), _, _, _, _) in enumerate(rectangles):
area = (s - u)*(t - v)
if min_area is None or area < min_area:
min_area, j = area, i
return rectangles.pop(j)
def _rectangle_small_p(a, b, eps):
"""Return ``True`` if the given rectangle is small enough. """
(u, v), (s, t) = a, b
if eps is not None:
return s - u < eps and t - v < eps
else:
return True
def dup_isolate_complex_roots_sqf(f, K, eps=None, inf=None, sup=None, blackbox=False):
"""Isolate complex roots of a square-free polynomial using Collins-Krandick algorithm. """
if not K.is_ZZ and not K.is_QQ:
raise DomainError("isolation of complex roots is not supported over %s" % K)
if dup_degree(f) <= 0:
return []
if K.is_ZZ:
R, F = K, K.get_field()
else:
R, F = K.get_ring(), K
f = dup_convert(f, K, F)
n, lc = dup_degree(f), abs(dup_LC(f, F))
B = 2*max([ F.quo(abs(c), lc) for c in f ])
(u, v), (s, t) = (-B, F.zero), (B, B)
if inf is not None:
u = inf
if sup is not None:
s = sup
if v < 0 or t <= v or s <= u:
raise ValueError("not a valid complex isolation rectangle")
f1, f2 = dup_real_imag(f, F)
f1L1 = dmp_eval_in(f1, v, 1, 1, F)
f2L1 = dmp_eval_in(f2, v, 1, 1, F)
f1L2 = dmp_eval_in(f1, s, 0, 1, F)
f2L2 = dmp_eval_in(f2, s, 0, 1, F)
f1L3 = dmp_eval_in(f1, t, 1, 1, F)
f2L3 = dmp_eval_in(f2, t, 1, 1, F)
f1L4 = dmp_eval_in(f1, u, 0, 1, F)
f2L4 = dmp_eval_in(f2, u, 0, 1, F)
S_L1 = [f1L1, f2L1]
S_L2 = [f1L2, f2L2]
S_L3 = [f1L3, f2L3]
S_L4 = [f1L4, f2L4]
I_L1 = dup_isolate_real_roots_list(S_L1, F, inf=u, sup=s, fast=True, strict=True, basis=True)
I_L2 = dup_isolate_real_roots_list(S_L2, F, inf=v, sup=t, fast=True, strict=True, basis=True)
I_L3 = dup_isolate_real_roots_list(S_L3, F, inf=u, sup=s, fast=True, strict=True, basis=True)
I_L4 = dup_isolate_real_roots_list(S_L4, F, inf=v, sup=t, fast=True, strict=True, basis=True)
I_L3 = _reverse_intervals(I_L3)
I_L4 = _reverse_intervals(I_L4)
Q_L1 = _intervals_to_quadrants(I_L1, f1L1, f2L1, u, s, F)
Q_L2 = _intervals_to_quadrants(I_L2, f1L2, f2L2, v, t, F)
Q_L3 = _intervals_to_quadrants(I_L3, f1L3, f2L3, s, u, F)
Q_L4 = _intervals_to_quadrants(I_L4, f1L4, f2L4, t, v, F)
T = _traverse_quadrants(Q_L1, Q_L2, Q_L3, Q_L4)
N = _winding_number(T, F)
if not N:
return []
I = (I_L1, I_L2, I_L3, I_L4)
Q = (Q_L1, Q_L2, Q_L3, Q_L4)
F1 = (f1L1, f1L2, f1L3, f1L4)
F2 = (f2L1, f2L2, f2L3, f2L4)
rectangles, roots = [(N, (u, v), (s, t), I, Q, F1, F2)], []
while rectangles:
N, (u, v), (s, t), I, Q, F1, F2 = _depth_first_select(rectangles)
if s - u > t - v:
D_L, D_R = _vertical_bisection(N, (u, v), (s, t), I, Q, F1, F2, f1, f2, F)
N_L, a, b, I_L, Q_L, F1_L, F2_L = D_L
N_R, c, d, I_R, Q_R, F1_R, F2_R = D_R
if N_L >= 1:
if N_L == 1 and _rectangle_small_p(a, b, eps):
roots.append(ComplexInterval(a, b, I_L, Q_L, F1_L, F2_L, f1, f2, F))
else:
rectangles.append(D_L)
if N_R >= 1:
if N_R == 1 and _rectangle_small_p(c, d, eps):
roots.append(ComplexInterval(c, d, I_R, Q_R, F1_R, F2_R, f1, f2, F))
else:
rectangles.append(D_R)
else:
D_B, D_U = _horizontal_bisection(N, (u, v), (s, t), I, Q, F1, F2, f1, f2, F)
N_B, a, b, I_B, Q_B, F1_B, F2_B = D_B
N_U, c, d, I_U, Q_U, F1_U, F2_U = D_U
if N_B >= 1:
if N_B == 1 and _rectangle_small_p(a, b, eps):
roots.append(ComplexInterval(
a, b, I_B, Q_B, F1_B, F2_B, f1, f2, F))
else:
rectangles.append(D_B)
if N_U >= 1:
if N_U == 1 and _rectangle_small_p(c, d, eps):
roots.append(ComplexInterval(
c, d, I_U, Q_U, F1_U, F2_U, f1, f2, F))
else:
rectangles.append(D_U)
_roots, roots = sorted(roots, key=lambda r: (r.ax, r.ay)), []
for root in _roots:
roots.extend([root.conjugate(), root])
if blackbox:
return roots
else:
return [ r.as_tuple() for r in roots ]
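# A minimal usage sketch (added for illustration, not part of the original
# module): isolating the complex roots of x**2 + 1 over ZZ. The polynomial is
# given in dense "dup" form, i.e. coefficients from the leading term down to
# the constant term.
def _example_isolate_complex_roots_sqf():
    from sympy.polys.domains import ZZ
    f = [ZZ(1), ZZ(0), ZZ(1)]  # x**2 + 1, whose roots are -I and I
    # Each entry is an isolating rectangle ((ax, ay), (bx, by)) with rational corners.
    return dup_isolate_complex_roots_sqf(f, ZZ, eps=None)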
def dup_isolate_all_roots_sqf(f, K, eps=None, inf=None, sup=None, fast=False, blackbox=False):
"""Isolate real and complex roots of a square-free polynomial ``f``. """
return (
dup_isolate_real_roots_sqf( f, K, eps=eps, inf=inf, sup=sup, fast=fast, blackbox=blackbox),
dup_isolate_complex_roots_sqf(f, K, eps=eps, inf=inf, sup=sup, blackbox=blackbox))
def dup_isolate_all_roots(f, K, eps=None, inf=None, sup=None, fast=False):
"""Isolate real and complex roots of a non-square-free polynomial ``f``. """
if not K.is_ZZ and not K.is_QQ:
raise DomainError("isolation of real and complex roots is not supported over %s" % K)
_, factors = dup_sqf_list(f, K)
if len(factors) == 1:
((f, k),) = factors
real_part, complex_part = dup_isolate_all_roots_sqf(
f, K, eps=eps, inf=inf, sup=sup, fast=fast)
real_part = [ ((a, b), k) for (a, b) in real_part ]
complex_part = [ ((a, b), k) for (a, b) in complex_part ]
return real_part, complex_part
else:
raise NotImplementedError( "only trivial square-free polynomials are supported")
class RealInterval(object):
"""A fully qualified representation of a real isolation interval. """
def __init__(self, data, f, dom):
"""Initialize new real interval with complete information. """
if len(data) == 2:
s, t = data
self.neg = False
if s < 0:
if t <= 0:
f, s, t, self.neg = dup_mirror(f, dom), -t, -s, True
else:
raise ValueError("can't refine a real root in (%s, %s)" % (s, t))
a, b, c, d = _mobius_from_interval((s, t), dom.get_field())
f = dup_transform(f, dup_strip([a, b]),
dup_strip([c, d]), dom)
self.mobius = a, b, c, d
else:
self.mobius = data[:-1]
self.neg = data[-1]
self.f, self.dom = f, dom
@property
def a(self):
"""Return the position of the left end. """
field = self.dom.get_field()
a, b, c, d = self.mobius
if not self.neg:
if a*d < b*c:
return field(a, c)
return field(b, d)
else:
if a*d > b*c:
return -field(a, c)
return -field(b, d)
@property
def b(self):
"""Return the position of the right end. """
was = self.neg
self.neg = not was
rv = -self.a
self.neg = was
return rv
@property
def dx(self):
"""Return width of the real isolating interval. """
return self.b - self.a
@property
def center(self):
"""Return the center of the real isolating interval. """
return (self.a + self.b)/2
def as_tuple(self):
"""Return tuple representation of real isolating interval. """
return (self.a, self.b)
def __repr__(self):
return "(%s, %s)" % (self.a, self.b)
def is_disjoint(self, other):
"""Return ``True`` if two isolation intervals are disjoint. """
return (self.b <= other.a or other.b <= self.a)
def _inner_refine(self):
"""Internal one step real root refinement procedure. """
if self.mobius is None:
return self
f, mobius = dup_inner_refine_real_root(
self.f, self.mobius, self.dom, steps=1, mobius=True)
return RealInterval(mobius + (self.neg,), f, self.dom)
def refine_disjoint(self, other):
"""Refine an isolating interval until it is disjoint with another one. """
expr = self
while not expr.is_disjoint(other):
expr, other = expr._inner_refine(), other._inner_refine()
return expr, other
def refine_size(self, dx):
"""Refine an isolating interval until it is of sufficiently small size. """
expr = self
while not (expr.dx < dx):
expr = expr._inner_refine()
return expr
def refine_step(self, steps=1):
"""Perform several steps of real root refinement algorithm. """
expr = self
for _ in range(steps):
expr = expr._inner_refine()
return expr
def refine(self):
"""Perform one step of real root refinement algorithm. """
return self._inner_refine()
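# A small usage sketch (added for illustration, not part of the original
# module): obtaining RealInterval objects through the blackbox interface and
# refining one of them below a requested width.
def _example_refine_real_interval():
    from sympy.polys.domains import ZZ, QQ
    f = [ZZ(1), ZZ(0), ZZ(-2)]  # x**2 - 2, with roots -sqrt(2) and sqrt(2)
    _, positive_root = dup_isolate_real_roots_sqf(f, ZZ, blackbox=True)
    return positive_root.refine_size(QQ(1, 100)).as_tuple()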
class ComplexInterval(object):
"""A fully qualified representation of a complex isolation interval.
The printed form is shown as (x1, y1) x (x2, y2): the southwest x northeast
coordinates of the interval's rectangle."""
def __init__(self, a, b, I, Q, F1, F2, f1, f2, dom, conj=False):
"""Initialize new complex interval with complete information. """
self.a, self.b = a, b # the southwest and northeast corner: (x1, y1), (x2, y2)
self.I, self.Q = I, Q
self.f1, self.F1 = f1, F1
self.f2, self.F2 = f2, F2
self.dom = dom
self.conj = conj
@property
def ax(self):
"""Return ``x`` coordinate of south-western corner. """
return self.a[0]
@property
def ay(self):
"""Return ``y`` coordinate of south-western corner. """
if not self.conj:
return self.a[1]
else:
return -self.b[1]
@property
def bx(self):
"""Return ``x`` coordinate of north-eastern corner. """
return self.b[0]
@property
def by(self):
"""Return ``y`` coordinate of north-eastern corner. """
if not self.conj:
return self.b[1]
else:
return -self.a[1]
@property
def dx(self):
"""Return width of the complex isolating interval. """
return self.b[0] - self.a[0]
@property
def dy(self):
"""Return height of the complex isolating interval. """
return self.b[1] - self.a[1]
@property
def center(self):
"""Return the center of the complex isolating interval. """
return ((self.ax + self.bx)/2, (self.ay + self.by)/2)
def as_tuple(self):
"""Return tuple representation of complex isolating interval. """
return ((self.ax, self.ay), (self.bx, self.by))
def __repr__(self):
return "(%s, %s) x (%s, %s)" % (self.ax, self.bx, self.ay, self.by)
def conjugate(self):
"""This complex interval really is located in lower half-plane. """
return ComplexInterval(self.a, self.b, self.I, self.Q,
self.F1, self.F2, self.f1, self.f2, self.dom, conj=True)
def is_disjoint(self, other):
"""Return ``True`` if two isolation intervals are disjoint. """
if self.conj != other.conj:
return True
re_distinct = (self.bx <= other.ax or other.bx <= self.ax)
if re_distinct:
return True
im_distinct = (self.by <= other.ay or other.by <= self.ay)
return im_distinct
def _inner_refine(self):
"""Internal one step complex root refinement procedure. """
(u, v), (s, t) = self.a, self.b
I, Q = self.I, self.Q
f1, F1 = self.f1, self.F1
f2, F2 = self.f2, self.F2
dom = self.dom
if s - u > t - v:
D_L, D_R = _vertical_bisection(1, (u, v), (s, t), I, Q, F1, F2, f1, f2, dom)
if D_L[0] == 1:
_, a, b, I, Q, F1, F2 = D_L
else:
_, a, b, I, Q, F1, F2 = D_R
else:
D_B, D_U = _horizontal_bisection(1, (u, v), (s, t), I, Q, F1, F2, f1, f2, dom)
if D_B[0] == 1:
_, a, b, I, Q, F1, F2 = D_B
else:
_, a, b, I, Q, F1, F2 = D_U
return ComplexInterval(a, b, I, Q, F1, F2, f1, f2, dom, self.conj)
def refine_disjoint(self, other):
"""Refine an isolating interval until it is disjoint with another one. """
expr = self
while not expr.is_disjoint(other):
expr, other = expr._inner_refine(), other._inner_refine()
return expr, other
def refine_size(self, dx, dy=None):
"""Refine an isolating interval until it is of sufficiently small size. """
if dy is None:
dy = dx
expr = self
while not (expr.dx < dx and expr.dy < dy):
expr = expr._inner_refine()
return expr
def refine_step(self, steps=1):
"""Perform several steps of complex root refinement algorithm. """
expr = self
for _ in range(steps):
expr = expr._inner_refine()
return expr
def refine(self):
"""Perform one step of complex root refinement algorithm. """
return self._inner_refine()
| bsd-3-clause | 3,254,446,479,256,429,600 | 28.198738 | 114 | 0.481309 | false | 2.669358 | false | false | false |
nest/nest-simulator | examples/NESTServerClient/NESTServerClient.py | 17 | 2145 | # -*- coding: utf-8 -*-
#
# NESTServerClient.py
#
# This file is part of NEST.
#
# Copyright (C) 2004 The NEST Initiative
#
# NEST is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 2 of the License, or
# (at your option) any later version.
#
# NEST is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with NEST. If not, see <http://www.gnu.org/licenses/>.
import requests
from werkzeug.exceptions import BadRequest
__all__ = [
'NESTServerClient',
]
def encode(response):
if response.ok:
return response.json()
elif response.status_code == 400:
raise BadRequest(response.text)
class NESTServerClient(object):
def __init__(self, host='localhost', port=5000):
self.url = 'http://{}:{}/'.format(host, port)
self.headers = {'Content-type': 'application/json', 'Accept': 'text/plain'}
def __getattr__(self, call):
def method(*args, **kwargs):
kwargs.update({'args': args})
response = requests.post(self.url + 'api/' + call, json=kwargs, headers=self.headers)
return encode(response)
return method
def exec_script(self, source, return_vars=None):
params = {
'source': source,
'return': return_vars,
}
response = requests.post(self.url + 'exec', json=params, headers=self.headers)
return encode(response)
def from_file(self, filename, return_vars=None):
with open(filename, 'r') as f:
lines = f.readlines()
script = ''.join(lines)
print('Execute script code of {}'.format(filename))
print('Return variables: {}'.format(return_vars))
print(20*'-')
print(script)
print(20*'-')
return self.exec_script(script, return_vars)
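# A minimal usage sketch (added for illustration, not shipped with the client):
# the method names below ('Create', 'Simulate') are forwarded verbatim to the
# NEST Server API and assume a server is listening on localhost:5000.
def _example_usage():
    nestsc = NESTServerClient(host='localhost', port=5000)
    neurons = nestsc.Create('iaf_psc_alpha', 2)  # proxied via __getattr__
    nestsc.Simulate(100.0)                       # simulate 100 ms
    return neurons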
| gpl-2.0 | 6,658,699,441,154,078,000 | 30.544118 | 97 | 0.640093 | false | 3.907104 | false | false | false |
zsjohny/jumpserver | apps/perms/urls/views_urls.py | 1 | 2684 | # coding:utf-8
from django.conf.urls import url
from django.urls import path
from .. import views
app_name = 'perms'
urlpatterns = [
# asset-permission
path('asset-permission/', views.AssetPermissionListView.as_view(), name='asset-permission-list'),
path('asset-permission/create/', views.AssetPermissionCreateView.as_view(), name='asset-permission-create'),
path('asset-permission/<uuid:pk>/update/', views.AssetPermissionUpdateView.as_view(), name='asset-permission-update'),
path('asset-permission/<uuid:pk>/', views.AssetPermissionDetailView.as_view(),name='asset-permission-detail'),
path('asset-permission/<uuid:pk>/delete/', views.AssetPermissionDeleteView.as_view(), name='asset-permission-delete'),
path('asset-permission/<uuid:pk>/user/', views.AssetPermissionUserView.as_view(), name='asset-permission-user-list'),
path('asset-permission/<uuid:pk>/asset/', views.AssetPermissionAssetView.as_view(), name='asset-permission-asset-list'),
# remote-app-permission
path('remote-app-permission/', views.RemoteAppPermissionListView.as_view(), name='remote-app-permission-list'),
path('remote-app-permission/create/', views.RemoteAppPermissionCreateView.as_view(), name='remote-app-permission-create'),
path('remote-app-permission/<uuid:pk>/update/', views.RemoteAppPermissionUpdateView.as_view(), name='remote-app-permission-update'),
path('remote-app-permission/<uuid:pk>/', views.RemoteAppPermissionDetailView.as_view(), name='remote-app-permission-detail'),
path('remote-app-permission/<uuid:pk>/user/', views.RemoteAppPermissionUserView.as_view(), name='remote-app-permission-user-list'),
path('remote-app-permission/<uuid:pk>/remote-app/', views.RemoteAppPermissionRemoteAppView.as_view(), name='remote-app-permission-remote-app-list'),
# database-app-permission
path('database-app-permission/', views.DatabaseAppPermissionListView.as_view(), name='database-app-permission-list'),
path('database-app-permission/create/', views.DatabaseAppPermissionCreateView.as_view(), name='database-app-permission-create'),
path('database-app-permission/<uuid:pk>/update/', views.DatabaseAppPermissionUpdateView.as_view(), name='database-app-permission-update'),
path('database-app-permission/<uuid:pk>/', views.DatabaseAppPermissionDetailView.as_view(), name='database-app-permission-detail'),
path('database-app-permission/<uuid:pk>/user/', views.DatabaseAppPermissionUserView.as_view(), name='database-app-permission-user-list'),
path('database-app-permission/<uuid:pk>/database-app/', views.DatabaseAppPermissionDatabaseAppView.as_view(), name='database-app-permission-database-app-list'),
]
| gpl-2.0 | 1,337,201,133,897,868,300 | 77.941176 | 164 | 0.754471 | false | 3.753846 | false | true | false |
cctaylor/googleads-python-lib | examples/dfp/v201505/proposal_service/submit_proposals_for_approval.py | 3 | 2395 | #!/usr/bin/python
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This code example approves a single proposal.
To determine which proposals exist, run get_all_proposals.py."""
__author__ = 'Nicholas Chen'
# Import appropriate modules from the client library.
from googleads import dfp
PROPOSAL_ID = 'INSERT_PROPOSAL_ID_HERE'
def main(client, proposal_id):
# Initialize appropriate service.
proposal_service = client.GetService('ProposalService', version='v201505')
# Create query.
values = [{
'key': 'proposalId',
'value': {
'xsi_type': 'TextValue',
'value': proposal_id
}
}]
query = 'WHERE id = :proposalId'
# Create a filter statement.
statement = dfp.FilterStatement(query, values)
proposals_approved = 0
# Get proposals by statement.
while True:
response = proposal_service.getProposalsByStatement(statement.ToStatement())
if 'results' in response:
# Display results.
for proposal in response['results']:
print ('Proposal with id \'%s\', name \'%s\', and status \'%s\' will be'
' approved.' % (proposal['id'], proposal['name'],
proposal['status']))
# Perform action.
result = proposal_service.performProposalAction(
{'xsi_type': 'SubmitProposalsForApproval'}, statement.ToStatement())
if result and int(result['numChanges']) > 0:
proposals_approved += int(result['numChanges'])
statement.offset += dfp.SUGGESTED_PAGE_LIMIT
else:
break
# Display results.
if proposals_approved > 0:
print '\nNumber of proposals approved: %s' % proposals_approved
else:
print '\nNo proposals were approved.'
if __name__ == '__main__':
# Initialize client object.
dfp_client = dfp.DfpClient.LoadFromStorage()
main(dfp_client, PROPOSAL_ID)
| apache-2.0 | -5,491,334,083,840,781,000 | 31.808219 | 80 | 0.675157 | false | 3.795563 | false | false | false |
wtanaka/google-app-engine-django-openid | src/openid/consumer/html_parse.py | 167 | 7161 | """
This module implements a VERY limited parser that finds <link> tags in
the head of HTML or XHTML documents and parses out their attributes
according to the OpenID spec. It is a liberal parser, but it requires
these things from the data in order to work:
- There must be an open <html> tag
- There must be an open <head> tag inside of the <html> tag
- Only <link>s that are found inside of the <head> tag are parsed
(this is by design)
- The parser follows the OpenID specification in resolving the
attributes of the link tags. This means that the attributes DO NOT
get resolved as they would by an XML or HTML parser. In particular,
only certain entities get replaced, and href attributes do not get
resolved relative to a base URL.
From http://openid.net/specs.bml#linkrel:
- The openid.server URL MUST be an absolute URL. OpenID consumers
MUST NOT attempt to resolve relative URLs.
- The openid.server URL MUST NOT include entities other than &,
<, >, and ".
The parser ignores SGML comments and <![CDATA[blocks]]>. Both kinds of
quoting are allowed for attributes.
The parser deals with invalid markup in these ways:
- Tag names are not case-sensitive
- The <html> tag is accepted even when it is not at the top level
- The <head> tag is accepted even when it is not a direct child of
the <html> tag, but a <html> tag must be an ancestor of the <head>
tag
- <link> tags are accepted even when they are not direct children of
the <head> tag, but a <head> tag must be an ancestor of the <link>
tag
- If there is no closing tag for an open <html> or <head> tag, the
remainder of the document is viewed as being inside of the tag. If
there is no closing tag for a <link> tag, the link tag is treated
as a short tag. Exceptions to this rule are that <html> closes
<html> and <body> or <head> closes <head>
- Attributes of the <link> tag are not required to be quoted.
- In the case of duplicated attribute names, the attribute coming
last in the tag will be the value returned.
- Any text that does not parse as an attribute within a link tag will
be ignored. (e.g. <link pumpkin rel='openid.server' /> will ignore
pumpkin)
- If there are more than one <html> or <head> tag, the parser only
looks inside of the first one.
- The contents of <script> tags are ignored entirely, except unclosed
<script> tags. Unclosed <script> tags are ignored.
- Any other invalid markup is ignored, including unclosed SGML
comments and unclosed <![CDATA[blocks.
"""
__all__ = ['parseLinkAttrs']
import re
flags = ( re.DOTALL # Match newlines with '.'
| re.IGNORECASE
| re.VERBOSE # Allow comments and whitespace in patterns
| re.UNICODE # Make \b respect Unicode word boundaries
)
# Stuff to remove before we start looking for tags
removed_re = re.compile(r'''
# Comments
<!--.*?-->
# CDATA blocks
| <!\[CDATA\[.*?\]\]>
# script blocks
| <script\b
# make sure script is not an XML namespace
(?!:)
[^>]*>.*?</script>
''', flags)
tag_expr = r'''
# Starts with the tag name at a word boundary, where the tag name is
# not a namespace
<%(tag_name)s\b(?!:)
# All of the stuff up to a ">", hopefully attributes.
(?P<attrs>[^>]*?)
(?: # Match a short tag
/>
| # Match a full tag
>
(?P<contents>.*?)
# Closed by
(?: # One of the specified close tags
</?%(closers)s\s*>
# End of the string
| \Z
)
)
'''
def tagMatcher(tag_name, *close_tags):
if close_tags:
options = '|'.join((tag_name,) + close_tags)
closers = '(?:%s)' % (options,)
else:
closers = tag_name
expr = tag_expr % locals()
return re.compile(expr, flags)
# Must contain at least an open html and an open head tag
html_find = tagMatcher('html')
head_find = tagMatcher('head', 'body')
link_find = re.compile(r'<link\b(?!:)', flags)
attr_find = re.compile(r'''
# Must start with a sequence of word-characters, followed by an equals sign
(?P<attr_name>\w+)=
# Then either a quoted or unquoted attribute
(?:
# Match everything that\'s between matching quote marks
(?P<qopen>["\'])(?P<q_val>.*?)(?P=qopen)
|
# If the value is not quoted, match up to whitespace
(?P<unq_val>(?:[^\s<>/]|/(?!>))+)
)
|
(?P<end_link>[<>])
''', flags)
# Entity replacement:
replacements = {
'amp':'&',
'lt':'<',
'gt':'>',
'quot':'"',
}
ent_replace = re.compile(r'&(%s);' % '|'.join(replacements.keys()))
def replaceEnt(mo):
"Replace the entities that are specified by OpenID"
return replacements.get(mo.group(1), mo.group())
def parseLinkAttrs(html):
"""Find all link tags in a string representing a HTML document and
return a list of their attributes.
@param html: the text to parse
@type html: str or unicode
@return: A list of dictionaries of attributes, one for each link tag
@rtype: [[(type(html), type(html))]]
"""
stripped = removed_re.sub('', html)
html_mo = html_find.search(stripped)
if html_mo is None or html_mo.start('contents') == -1:
return []
start, end = html_mo.span('contents')
head_mo = head_find.search(stripped, start, end)
if head_mo is None or head_mo.start('contents') == -1:
return []
start, end = head_mo.span('contents')
link_mos = link_find.finditer(stripped, head_mo.start(), head_mo.end())
matches = []
for link_mo in link_mos:
start = link_mo.start() + 5
link_attrs = {}
for attr_mo in attr_find.finditer(stripped, start):
if attr_mo.lastgroup == 'end_link':
break
# Either q_val or unq_val must be present, but not both
# unq_val is a True (non-empty) value if it is present
attr_name, q_val, unq_val = attr_mo.group(
'attr_name', 'q_val', 'unq_val')
attr_val = ent_replace.sub(replaceEnt, unq_val or q_val)
link_attrs[attr_name] = attr_val
matches.append(link_attrs)
return matches
def relMatches(rel_attr, target_rel):
"""Does this target_rel appear in the rel_str?"""
# XXX: TESTME
rels = rel_attr.strip().split()
for rel in rels:
rel = rel.lower()
if rel == target_rel:
return 1
return 0
def linkHasRel(link_attrs, target_rel):
"""Does this link have target_rel as a relationship?"""
# XXX: TESTME
rel_attr = link_attrs.get('rel')
return rel_attr and relMatches(rel_attr, target_rel)
def findLinksRel(link_attrs_list, target_rel):
"""Filter the list of link attributes on whether it has target_rel
as a relationship."""
# XXX: TESTME
matchesTarget = lambda attrs: linkHasRel(attrs, target_rel)
return filter(matchesTarget, link_attrs_list)
def findFirstHref(link_attrs_list, target_rel):
"""Return the value of the href attribute for the first link tag
in the list that has target_rel as a relationship."""
# XXX: TESTME
matches = findLinksRel(link_attrs_list, target_rel)
if not matches:
return None
first = matches[0]
return first.get('href')
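# A short usage sketch (added for illustration, not part of the original
# module); the URLs below are placeholders.
def _exampleParseLinkAttrs():
    html = ('<html><head>'
            '<link rel="openid.server" href="http://example.com/server">'
            '</head></html>')
    link_attrs = parseLinkAttrs(html)
    return findFirstHref(link_attrs, 'openid.server')  # 'http://example.com/server'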
| gpl-3.0 | -5,548,681,114,163,012,000 | 27.759036 | 75 | 0.64558 | false | 3.560915 | false | false | false |
chengdezhi/language_model_for_typing | lm_prediction.py | 1 | 6516 | import datrie
from data_utils import Vocabulary, Dataset
import string
import re
from flask import Flask
from flask_restful import Resource, Api
import traceback
import time
import sys
#import thriftpy
import os
from flask import Flask, request, redirect, url_for
from werkzeug.utils import secure_filename
from newPyClient import computeKSR
import json
import numpy as np
import time
import tensorflow as tf
from data_utils import Vocabulary, Dataset
from language_model import LM
from common import CheckpointLoader
import heapq
UPLOAD_FOLDER = '/data/ngramTest/uploads'
UPLOAD_FOLDER = './'
top_k = 3
pattern = re.compile('[\w+]')
p_punc = re.compile('(\.|\"|,|\?|\!)')
hps = LM.get_default_hparams()
vocab = Vocabulary.from_file("1b_word_vocab.txt")
with tf.variable_scope("model"):
    hps.num_sampled = 0  # Always use the full softmax at evaluation; note this can run out of memory.
hps.keep_prob = 1.0
hps.num_gpus = 1
model = LM(hps,"predict_next", "/cpu:0")
if hps.average_params:
print("Averaging parameters for evaluation.")
saver = tf.train.Saver(model.avg_dict)
else:
saver = tf.train.Saver()
# Use only 4 threads for the evaluation.
config = tf.ConfigProto(allow_soft_placement=True,
intra_op_parallelism_threads=20,
inter_op_parallelism_threads=1)
config.gpu_options.allow_growth=True
sess = tf.Session(config=config)
ckpt_loader = CheckpointLoader(saver, model.global_step, "log.txt/train")
saver.restore(sess,"log.txt/train/model.ckpt-742996")
app = Flask(__name__)
api = Api(app)
'''
#build vocab trie
trie = datrie.new(string.printable)
cnt = 0
vocab_size = 140000
for i in range(vocab_size):
word = vocab.get_token(i)
trie[word] = i
for key in trie.keys(u"pre"):
print key,trie[key]
trie.save("data/vocab_trie")
'''
trie = datrie.Trie.load("data/vocab_trie")
class ngramPredict(Resource):
def get(self,input):
input = input.decode("utf-8")
#print "input:",input
input_words = input
if input_words.find('<S>')!=0:
input_words = '<S> ' + input
isCompletion = False
if input_words[-1] == ' ':
#print "Predict:"
prefix_input = [vocab.get_id(w) for w in input_words.split()]
else:
#print "Compeletion:"
isCompletion = True
prefix_input = [vocab.get_id(w) for w in input_words.split()[:-1]]
prefix = input_words.split()[-1]
#print "prefix:",prefix,type(prefix)
#print("input:",input,"pre:",prefix_input,"len:",len(prefix_input))
w = np.zeros([1, len(prefix_input)], np.uint8)
w[:] =1
inputs = np.zeros([hps.batch_size*hps.num_gpus,hps.num_steps])
weights = np.zeros([hps.batch_size*hps.num_gpus,hps.num_steps])
inputs[0,:len(prefix_input)] = prefix_input[:]
weights[0,:len(prefix_input)] = w[:]
words = []
with sess.as_default():
#ckpt_loader.load_checkpoint() # FOR ONLY ONE CHECKPOINT
sess.run(tf.local_variables_initializer())
words = []
if not isCompletion:
indexes = sess.run([model.index],{model.x:inputs, model.w:weights})
indexes = np.reshape(indexes,[hps.num_steps,hps.arg_max])
for j in range(hps.arg_max):
word = vocab.get_token(indexes[len(prefix_input)-1][j])
if not p_punc.match(word)==None:
words += [word]
continue
if pattern.match(word)==None:
continue
words += [word]
else:
prob = sess.run([model.logits],{model.x:inputs, model.w:weights})
prob = np.reshape(prob,[hps.num_steps,hps.vocab_size])
prob = prob[len(prefix_input)-1] # the last prefix_input step prob is the predict one
#print "prob:", len(prob)
#print "prefix:",trie.keys(prefix)
cand = [trie[cand_index] for cand_index in trie.keys(prefix)]
#print "cand:", cand
#print "prefix:", prefix
cand_prob = [prob[pb] for pb in cand]
ins = heapq.nlargest(top_k, range(len(cand_prob)), cand_prob.__getitem__)
for j in ins:
word = vocab.get_token(cand[j])
words += [word]
#print words
return words[:top_k]
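# A small client-side sketch (added for illustration, not part of the service):
# a trailing space in the path asks for next-word prediction, while no trailing
# space asks for completion of the last (partial) word. Assumes the Flask app
# defined below is already running on port 9898.
def _example_client_query():
    import requests
    predict = requests.get("http://localhost:9898/ngram/how are ").json()   # next word
    complete = requests.get("http://localhost:9898/ngram/how ar").json()    # complete "ar"
    return predict, complete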
@app.route('/ngramfile/', methods=['GET', 'POST'])
def upload_file():
if request.method == 'POST':
doc = request.json
if doc:
doc = doc['text']
ngramClient = ngramPredict()
res = computeKSR(ngramClient,doc)
return json.dumps(res)
if 'text' not in request.files:
return "{\"ret\":-1}"
file = request.files['text']
if file.filename == '':
return "{\"ret\":-2}"
filename = secure_filename(file.filename)
uploadFilePath = os.path.join(UPLOAD_FOLDER, filename)
file.save(uploadFilePath)
doc = ""
with open(uploadFilePath, 'rb') as textFile:
doc = textFile.read()
ngramClient = ngramPredict()
res = computeKSR(ngramClient,doc)
#print("res:",res)
        return json.dumps(res)
api.add_resource(ngramPredict, '/ngram/<input>')
#predictClient = PredictClient()
if __name__ == '__main__':
'''
ngrampredict = ngramPredict()
ngrampredict.get("how are")
ngrampredict.get("what the")
ngrampredict.get("i am")
ngrampredict.get("how do")
'''
#print('test for grep ksr')
app.run(host = "0",port=9898)
| mit | -5,495,016,405,757,070,000 | 35.606742 | 104 | 0.511971 | false | 3.857904 | false | false | false |
alsrgv/tensorflow | tensorflow/contrib/distributions/python/ops/bijectors/softplus.py | 35 | 5563 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Softplus bijector."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow.python.framework import ops
from tensorflow.python.ops import check_ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import nn_ops
from tensorflow.python.ops.distributions import bijector
from tensorflow.python.ops.distributions import util as distribution_util
from tensorflow.python.util import deprecation
__all__ = [
"Softplus",
]
class Softplus(bijector.Bijector):
"""Bijector which computes `Y = g(X) = Log[1 + exp(X)]`.
The softplus `Bijector` has the following two useful properties:
* The domain is the positive real numbers
* `softplus(x) approx x`, for large `x`, so it does not overflow as easily as
the `Exp` `Bijector`.
The optional nonzero `hinge_softness` parameter changes the transition at
zero. With `hinge_softness = c`, the bijector is:
```f_c(x) := c * g(x / c) = c * Log[1 + exp(x / c)].```
For large `x >> 1`, `c * Log[1 + exp(x / c)] approx c * Log[exp(x / c)] = x`,
so the behavior for large `x` is the same as the standard softplus.
As `c > 0` approaches 0 from the right, `f_c(x)` becomes less and less soft,
approaching `max(0, x)`.
* `c = 1` is the default.
* `c > 0` but small means `f(x) approx ReLu(x) = max(0, x)`.
* `c < 0` flips sign and reflects around the `y-axis`: `f_{-c}(x) = -f_c(-x)`.
* `c = 0` results in a non-bijective transformation and triggers an exception.
Example Use:
```python
# Create the Y=g(X)=softplus(X) transform which works only on Tensors with 1
# batch ndim and 2 event ndims (i.e., vector of matrices).
softplus = Softplus()
x = [[[1., 2],
[3, 4]],
[[5, 6],
[7, 8]]]
log(1 + exp(x)) == softplus.forward(x)
log(exp(x) - 1) == softplus.inverse(x)
```
Note: log(.) and exp(.) are applied element-wise but the Jacobian is a
reduction over the event space.
"""
@distribution_util.AppendDocstring(
kwargs_dict={
"hinge_softness": (
"Nonzero floating point `Tensor`. Controls the softness of what "
"would otherwise be a kink at the origin. Default is 1.0")})
@deprecation.deprecated(
"2018-10-01",
"The TensorFlow Distributions library has moved to "
"TensorFlow Probability "
"(https://github.com/tensorflow/probability). You "
"should update all references to use `tfp.distributions` "
"instead of `tf.contrib.distributions`.",
warn_once=True)
def __init__(self,
hinge_softness=None,
validate_args=False,
name="softplus"):
with ops.name_scope(name, values=[hinge_softness]):
if hinge_softness is not None:
self._hinge_softness = ops.convert_to_tensor(
hinge_softness, name="hinge_softness")
else:
self._hinge_softness = None
if validate_args:
nonzero_check = check_ops.assert_none_equal(
ops.convert_to_tensor(
0, dtype=self.hinge_softness.dtype),
self.hinge_softness,
message="hinge_softness must be non-zero")
self._hinge_softness = control_flow_ops.with_dependencies(
[nonzero_check], self.hinge_softness)
super(Softplus, self).__init__(
forward_min_event_ndims=0,
validate_args=validate_args,
name=name)
def _forward(self, x):
if self.hinge_softness is None:
return nn_ops.softplus(x)
hinge_softness = math_ops.cast(self.hinge_softness, x.dtype)
return hinge_softness * nn_ops.softplus(x / hinge_softness)
def _inverse(self, y):
if self.hinge_softness is None:
return distribution_util.softplus_inverse(y)
hinge_softness = math_ops.cast(self.hinge_softness, y.dtype)
return hinge_softness * distribution_util.softplus_inverse(
y / hinge_softness)
def _inverse_log_det_jacobian(self, y):
# Could also do:
# ildj = math_ops.reduce_sum(y - distribution_util.softplus_inverse(y),
# axis=event_dims)
# but the following is more numerically stable. Ie,
# Y = Log[1 + exp{X}] ==> X = Log[exp{Y} - 1]
# ==> dX/dY = exp{Y} / (exp{Y} - 1)
# = 1 / (1 - exp{-Y}),
# which is the most stable for large Y > 0. For small Y, we use
# 1 - exp{-Y} approx Y.
if self.hinge_softness is not None:
y /= math_ops.cast(self.hinge_softness, y.dtype)
return -math_ops.log(-math_ops.expm1(-y))
def _forward_log_det_jacobian(self, x):
if self.hinge_softness is not None:
x /= math_ops.cast(self.hinge_softness, x.dtype)
return -nn_ops.softplus(-x)
@property
def hinge_softness(self):
return self._hinge_softness
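# A brief usage sketch (added for illustration, not part of the library): this
# only builds the graph ops; run them in a session to obtain numeric values.
def _example_softplus_bijector():
  softplus = Softplus(hinge_softness=2.)
  x = [-1., 0., 1.]
  y = softplus.forward(x)       # 2 * log(1 + exp(x / 2))
  x_back = softplus.inverse(y)  # recovers x up to floating-point error
  return y, x_back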
| apache-2.0 | -4,781,450,353,001,732,000 | 36.086667 | 80 | 0.63401 | false | 3.425493 | false | false | false |
rueckstiess/dopamine | adapters/explorers/explorer.py | 1 | 2647 | from dopamine.adapters import Adapter
import numpy as np
class Explorer(Adapter):
# define the conditions of the environment
inConditions = {}
# define the conditions of the environment
outConditions = {}
def __init__(self):
Adapter.__init__(self)
# set this to False to turn off exploration
self.active = True
def applyAction(self, action):
""" apply transformations to action and return it. """
if self.active:
action = self._explore(action)
# tell agent the action that was executed (for the history)
self.experiment.agent.action = action
return action
def _explore(self, action):
return action
class DecayExplorer(Explorer):
finalFactor = 0.001
def __init__(self, epsilon, episodeCount=None, actionCount=None):
""" DecayExplorer is an explorer base class that has exploration decay, i.e.
the amount of exploration weakens exponentially over time. epsilon is the
initial parameter (can mean different things for different explorers),
which reduced over time. if episodeCount is given, epsilon reduces to
1/1000 of the initial value in the given number of episodes. if actionCount
is given, epsilon reduces to 1/1000 of the initial value in the given
number of actions executed. actionCount takes priority if both values
are given. In either case, after epsilon is 1/1000 of its initial value,
exploration automatically deactivates.
"""
Explorer.__init__(self)
self.episodeCount = episodeCount
self.actionCount = actionCount
self.epsilon = epsilon
self.initialEpsilon = epsilon
if self.episodeCount:
self.decay = np.power(self.finalFactor, 1./self.episodeCount)
if self.actionCount:
self.decay = np.power(self.finalFactor, 1./self.actionCount)
self.episodeCount = None
def resetExploration(self):
self.epsilon = self.initialEpsilon
def applyAction(self, action):
action = Explorer.applyAction(self, action)
if self.actionCount and self.active:
self.epsilon *= self.decay
if self.epsilon <= self.initialEpsilon * self.finalFactor:
self.active = False
return action
def applyEpisodeFinished(self, episodeFinished):
if episodeFinished and self.episodeCount and self.active:
self.epsilon *= self.decay
return episodeFinished
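# A worked numeric sketch (added for illustration, not part of the original
# module): with the default finalFactor of 0.001 and episodeCount=1000 the
# per-episode decay is 0.001**(1/1000) ~= 0.99311, so after 1000 episodes
# epsilon has dropped to ~0.001 times its initial value and exploration
# switches off.
def _example_decay_schedule(epsilon=0.3, episodeCount=1000, finalFactor=0.001):
    decay = np.power(finalFactor, 1. / episodeCount)
    return epsilon * decay ** episodeCount  # ~= epsilon * finalFactor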
| gpl-3.0 | -3,794,373,032,818,563,600 | 32.0875 | 87 | 0.633925 | false | 4.693262 | false | false | false |
szeged/servo | tests/wpt/web-platform-tests/tools/wptrunner/wptrunner/browsers/servo.py | 5 | 2728 | import os
from .base import NullBrowser, ExecutorBrowser, require_arg
from .base import get_timeout_multiplier # noqa: F401
from ..executors import executor_kwargs as base_executor_kwargs
from ..executors.executorservo import ServoTestharnessExecutor, ServoRefTestExecutor, ServoWdspecExecutor # noqa: F401
here = os.path.join(os.path.split(__file__)[0])
__wptrunner__ = {
"product": "servo",
"check_args": "check_args",
"browser": "ServoBrowser",
"executor": {
"testharness": "ServoTestharnessExecutor",
"reftest": "ServoRefTestExecutor",
"wdspec": "ServoWdspecExecutor",
},
"browser_kwargs": "browser_kwargs",
"executor_kwargs": "executor_kwargs",
"env_extras": "env_extras",
"env_options": "env_options",
"timeout_multiplier": "get_timeout_multiplier",
"update_properties": "update_properties",
}
def check_args(**kwargs):
require_arg(kwargs, "binary")
def browser_kwargs(test_type, run_info_data, config, **kwargs):
return {
"binary": kwargs["binary"],
"debug_info": kwargs["debug_info"],
"binary_args": kwargs["binary_args"],
"user_stylesheets": kwargs.get("user_stylesheets"),
"ca_certificate_path": config.ssl_config["ca_cert_path"],
}
def executor_kwargs(test_type, server_config, cache_manager, run_info_data,
**kwargs):
rv = base_executor_kwargs(test_type, server_config,
cache_manager, run_info_data, **kwargs)
rv["pause_after_test"] = kwargs["pause_after_test"]
if test_type == "wdspec":
rv["capabilities"] = {}
return rv
def env_extras(**kwargs):
return []
def env_options():
return {"server_host": "127.0.0.1",
"bind_address": False,
"testharnessreport": "testharnessreport-servo.js",
"supports_debugger": True}
def update_properties():
return ["debug", "os", "version", "processor", "bits"], None
class ServoBrowser(NullBrowser):
def __init__(self, logger, binary, debug_info=None, binary_args=None,
user_stylesheets=None, ca_certificate_path=None):
NullBrowser.__init__(self, logger)
self.binary = binary
self.debug_info = debug_info
self.binary_args = binary_args or []
self.user_stylesheets = user_stylesheets or []
self.ca_certificate_path = ca_certificate_path
def executor_browser(self):
return ExecutorBrowser, {
"binary": self.binary,
"debug_info": self.debug_info,
"binary_args": self.binary_args,
"user_stylesheets": self.user_stylesheets,
"ca_certificate_path": self.ca_certificate_path,
}
| mpl-2.0 | -6,204,316,042,673,528,000 | 31.47619 | 119 | 0.623534 | false | 3.622842 | true | false | false |
hujiajie/pa-chromium | base/android/jni_generator/jni_generator.py | 20 | 38680 | #!/usr/bin/env python
# Copyright (c) 2012 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Extracts native methods from a Java file and generates the JNI bindings.
If you change this, please run and update the tests."""
import collections
import errno
import optparse
import os
import re
import string
from string import Template
import subprocess
import sys
import textwrap
import zipfile
class ParseError(Exception):
"""Exception thrown when we can't parse the input file."""
def __init__(self, description, *context_lines):
Exception.__init__(self)
self.description = description
self.context_lines = context_lines
def __str__(self):
context = '\n'.join(self.context_lines)
return '***\nERROR: %s\n\n%s\n***' % (self.description, context)
class Param(object):
"""Describes a param for a method, either java or native."""
def __init__(self, **kwargs):
self.datatype = kwargs['datatype']
self.name = kwargs['name']
class NativeMethod(object):
"""Describes a C/C++ method that is called by Java code"""
def __init__(self, **kwargs):
self.static = kwargs['static']
self.java_class_name = kwargs['java_class_name']
self.return_type = kwargs['return_type']
self.name = kwargs['name']
self.params = kwargs['params']
if self.params:
assert type(self.params) is list
assert type(self.params[0]) is Param
if (self.params and
self.params[0].datatype == 'int' and
self.params[0].name.startswith('native')):
self.type = 'method'
self.p0_type = self.params[0].name[len('native'):]
if kwargs.get('native_class_name'):
self.p0_type = kwargs['native_class_name']
else:
self.type = 'function'
self.method_id_var_name = kwargs.get('method_id_var_name', None)
class CalledByNative(object):
"""Describes a java method exported to c/c++"""
def __init__(self, **kwargs):
self.system_class = kwargs['system_class']
self.unchecked = kwargs['unchecked']
self.static = kwargs['static']
self.java_class_name = kwargs['java_class_name']
self.return_type = kwargs['return_type']
self.name = kwargs['name']
self.params = kwargs['params']
self.method_id_var_name = kwargs.get('method_id_var_name', None)
self.is_constructor = kwargs.get('is_constructor', False)
self.env_call = GetEnvCall(self.is_constructor, self.static,
self.return_type)
self.static_cast = GetStaticCastForReturnType(self.return_type)
def JavaDataTypeToC(java_type):
"""Returns a C datatype for the given java type."""
java_pod_type_map = {
'int': 'jint',
'byte': 'jbyte',
'char': 'jchar',
'short': 'jshort',
'boolean': 'jboolean',
'long': 'jlong',
'double': 'jdouble',
'float': 'jfloat',
}
java_type_map = {
'void': 'void',
'String': 'jstring',
'java/lang/String': 'jstring',
'Class': 'jclass',
'java/lang/Class': 'jclass',
}
if java_type in java_pod_type_map:
return java_pod_type_map[java_type]
elif java_type in java_type_map:
return java_type_map[java_type]
elif java_type.endswith('[]'):
if java_type[:-2] in java_pod_type_map:
return java_pod_type_map[java_type[:-2]] + 'Array'
return 'jobjectArray'
else:
return 'jobject'
class JniParams(object):
_imports = []
_fully_qualified_class = ''
_package = ''
_inner_classes = []
_remappings = []
@staticmethod
def SetFullyQualifiedClass(fully_qualified_class):
JniParams._fully_qualified_class = 'L' + fully_qualified_class
JniParams._package = '/'.join(fully_qualified_class.split('/')[:-1])
@staticmethod
def ExtractImportsAndInnerClasses(contents):
contents = contents.replace('\n', '')
re_import = re.compile(r'import.*?(?P<class>\S*?);')
for match in re.finditer(re_import, contents):
JniParams._imports += ['L' + match.group('class').replace('.', '/')]
re_inner = re.compile(r'(class|interface)\s+?(?P<name>\w+?)\W')
for match in re.finditer(re_inner, contents):
inner = match.group('name')
if not JniParams._fully_qualified_class.endswith(inner):
JniParams._inner_classes += [JniParams._fully_qualified_class + '$' +
inner]
@staticmethod
def JavaToJni(param):
"""Converts a java param into a JNI signature type."""
pod_param_map = {
'int': 'I',
'boolean': 'Z',
'char': 'C',
'short': 'S',
'long': 'J',
'double': 'D',
'float': 'F',
'byte': 'B',
'void': 'V',
}
object_param_list = [
'Ljava/lang/Boolean',
'Ljava/lang/Integer',
'Ljava/lang/Long',
'Ljava/lang/Object',
'Ljava/lang/String',
'Ljava/lang/Class',
]
prefix = ''
# Array?
while param[-2:] == '[]':
prefix += '['
param = param[:-2]
# Generic?
if '<' in param:
param = param[:param.index('<')]
if param in pod_param_map:
return prefix + pod_param_map[param]
if '/' in param:
# Coming from javap, use the fully qualified param directly.
return prefix + 'L' + JniParams.RemapClassName(param) + ';'
for qualified_name in (object_param_list +
[JniParams._fully_qualified_class] +
JniParams._inner_classes):
if (qualified_name.endswith('/' + param) or
qualified_name.endswith('$' + param.replace('.', '$')) or
qualified_name == 'L' + param):
return prefix + JniParams.RemapClassName(qualified_name) + ';'
    # Is it from an import? (e.g. referencing Class from import pkg.Class;
# note that referencing an inner class Inner from import pkg.Class.Inner
# is not supported).
for qualified_name in JniParams._imports:
if qualified_name.endswith('/' + param):
# Ensure it's not an inner class.
components = qualified_name.split('/')
if len(components) > 2 and components[-2][0].isupper():
raise SyntaxError('Inner class (%s) can not be imported '
'and used by JNI (%s). Please import the outer '
'class and use Outer.Inner instead.' %
(qualified_name, param))
return prefix + JniParams.RemapClassName(qualified_name) + ';'
# Is it an inner class from an outer class import? (e.g. referencing
# Class.Inner from import pkg.Class).
if '.' in param:
components = param.split('.')
outer = '/'.join(components[:-1])
inner = components[-1]
for qualified_name in JniParams._imports:
if qualified_name.endswith('/' + outer):
return (prefix + JniParams.RemapClassName(qualified_name) +
'$' + inner + ';')
# Type not found, falling back to same package as this class.
return (prefix + 'L' +
JniParams.RemapClassName(JniParams._package + '/' + param) + ';')
@staticmethod
def Signature(params, returns, wrap):
"""Returns the JNI signature for the given datatypes."""
items = ['(']
items += [JniParams.JavaToJni(param.datatype) for param in params]
items += [')']
items += [JniParams.JavaToJni(returns)]
if wrap:
return '\n' + '\n'.join(['"' + item + '"' for item in items])
else:
return '"' + ''.join(items) + '"'
@staticmethod
def Parse(params):
"""Parses the params into a list of Param objects."""
if not params:
return []
ret = []
for p in [p.strip() for p in params.split(',')]:
items = p.split(' ')
if 'final' in items:
items.remove('final')
param = Param(
datatype=items[0],
name=(items[1] if len(items) > 1 else 'p%s' % len(ret)),
)
ret += [param]
return ret
@staticmethod
def RemapClassName(class_name):
"""Remaps class names using the jarjar mapping table."""
for old, new in JniParams._remappings:
if old in class_name:
return class_name.replace(old, new, 1)
return class_name
@staticmethod
def SetJarJarMappings(mappings):
"""Parse jarjar mappings from a string."""
JniParams._remappings = []
for line in mappings.splitlines():
keyword, src, dest = line.split()
if keyword != 'rule':
continue
assert src.endswith('.**')
src = src[:-2].replace('.', '/')
dest = dest.replace('.', '/')
if dest.endswith('@0'):
JniParams._remappings.append((src, dest[:-2] + src))
else:
assert dest.endswith('@1')
JniParams._remappings.append((src, dest[:-2]))
def ExtractJNINamespace(contents):
re_jni_namespace = re.compile('.*?@JNINamespace\("(.*?)"\)')
m = re.findall(re_jni_namespace, contents)
if not m:
return ''
return m[0]
def ExtractFullyQualifiedJavaClassName(java_file_name, contents):
re_package = re.compile('.*?package (.*?);')
matches = re.findall(re_package, contents)
if not matches:
raise SyntaxError('Unable to find "package" line in %s' % java_file_name)
return (matches[0].replace('.', '/') + '/' +
os.path.splitext(os.path.basename(java_file_name))[0])
def ExtractNatives(contents):
"""Returns a list of dict containing information about a native method."""
contents = contents.replace('\n', '')
natives = []
re_native = re.compile(r'(@NativeClassQualifiedName'
'\(\"(?P<native_class_name>.*?)\"\))?\s*'
'(@NativeCall(\(\"(?P<java_class_name>.*?)\"\)))?\s*'
'(?P<qualifiers>\w+\s\w+|\w+|\s+)\s*?native '
'(?P<return_type>\S*?) '
'(?P<name>\w+?)\((?P<params>.*?)\);')
for match in re.finditer(re_native, contents):
native = NativeMethod(
static='static' in match.group('qualifiers'),
java_class_name=match.group('java_class_name'),
native_class_name=match.group('native_class_name'),
return_type=match.group('return_type'),
name=match.group('name').replace('native', ''),
params=JniParams.Parse(match.group('params')))
natives += [native]
return natives
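# Illustrative sketch (added for this write-up, not part of the generator): the
# Java snippet below is hypothetical and only meant to show what ExtractNatives
# pulls out of a native method declaration.
def _ExampleExtractNatives():
  contents = ('private native int nativeInit('
              'int nativeChromeBrowserProvider, String path);')
  [native] = ExtractNatives(contents)
  # native.name == 'Init', native.type == 'method',
  # native.p0_type == 'ChromeBrowserProvider'
  return native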
def GetStaticCastForReturnType(return_type):
type_map = { 'String' : 'jstring',
'java/lang/String' : 'jstring',
'boolean[]': 'jbooleanArray',
'byte[]': 'jbyteArray',
'char[]': 'jcharArray',
'short[]': 'jshortArray',
'int[]': 'jintArray',
'long[]': 'jlongArray',
'double[]': 'jdoubleArray' }
ret = type_map.get(return_type, None)
if ret:
return ret
if return_type.endswith('[]'):
return 'jobjectArray'
return None
def GetEnvCall(is_constructor, is_static, return_type):
"""Maps the types availabe via env->Call__Method."""
if is_constructor:
return 'NewObject'
env_call_map = {'boolean': 'Boolean',
'byte': 'Byte',
'char': 'Char',
'short': 'Short',
'int': 'Int',
'long': 'Long',
'float': 'Float',
'void': 'Void',
'double': 'Double',
'Object': 'Object',
}
call = env_call_map.get(return_type, 'Object')
if is_static:
call = 'Static' + call
return 'Call' + call + 'Method'
def GetMangledParam(datatype):
"""Returns a mangled identifier for the datatype."""
if len(datatype) <= 2:
return datatype.replace('[', 'A')
ret = ''
for i in range(1, len(datatype)):
c = datatype[i]
if c == '[':
ret += 'A'
elif c.isupper() or datatype[i - 1] in ['/', 'L']:
ret += c.upper()
return ret
def GetMangledMethodName(name, params, return_type):
"""Returns a mangled method name for the given signature.
The returned name can be used as a C identifier and will be unique for all
valid overloads of the same method.
Args:
name: string.
params: list of Param.
return_type: string.
Returns:
A mangled name.
"""
mangled_items = []
for datatype in [return_type] + [x.datatype for x in params]:
mangled_items += [GetMangledParam(JniParams.JavaToJni(datatype))]
mangled_name = name + '_'.join(mangled_items)
assert re.match(r'[0-9a-zA-Z_]+', mangled_name)
return mangled_name
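# Illustrative sketch (added for this write-up): mangling one hypothetical
# overload. With an 'int' parameter and a 'String' return type the JNI
# signatures 'I' and 'Ljava/lang/String;' should mangle to 'I' and 'JLS'.
def _ExampleGetMangledMethodName():
  params = JniParams.Parse('int x')
  return GetMangledMethodName('getString', params, 'String')  # 'getStringJLS_I'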
def MangleCalledByNatives(called_by_natives):
"""Mangles all the overloads from the call_by_natives list."""
method_counts = collections.defaultdict(
lambda: collections.defaultdict(lambda: 0))
for called_by_native in called_by_natives:
java_class_name = called_by_native.java_class_name
name = called_by_native.name
method_counts[java_class_name][name] += 1
for called_by_native in called_by_natives:
java_class_name = called_by_native.java_class_name
method_name = called_by_native.name
method_id_var_name = method_name
if method_counts[java_class_name][method_name] > 1:
method_id_var_name = GetMangledMethodName(method_name,
called_by_native.params,
called_by_native.return_type)
called_by_native.method_id_var_name = method_id_var_name
return called_by_natives
# Regex to match the JNI return types that should be included in a
# ScopedJavaLocalRef.
RE_SCOPED_JNI_RETURN_TYPES = re.compile('jobject|jclass|jstring|.*Array')
# Regex to match a string like "@CalledByNative public void foo(int bar)".
RE_CALLED_BY_NATIVE = re.compile(
'@CalledByNative(?P<Unchecked>(Unchecked)*?)(?:\("(?P<annotation>.*)"\))?'
'\s+(?P<prefix>[\w ]*?)'
'\s*(?P<return_type>\S+?)'
'\s+(?P<name>\w+)'
'\s*\((?P<params>[^\)]*)\)')
def ExtractCalledByNatives(contents):
"""Parses all methods annotated with @CalledByNative.
Args:
contents: the contents of the java file.
Returns:
A list of dict with information about the annotated methods.
TODO(bulach): return a CalledByNative object.
Raises:
ParseError: if unable to parse.
"""
called_by_natives = []
for match in re.finditer(RE_CALLED_BY_NATIVE, contents):
called_by_natives += [CalledByNative(
system_class=False,
unchecked='Unchecked' in match.group('Unchecked'),
static='static' in match.group('prefix'),
java_class_name=match.group('annotation') or '',
return_type=match.group('return_type'),
name=match.group('name'),
params=JniParams.Parse(match.group('params')))]
# Check for any @CalledByNative occurrences that weren't matched.
unmatched_lines = re.sub(RE_CALLED_BY_NATIVE, '', contents).split('\n')
for line1, line2 in zip(unmatched_lines, unmatched_lines[1:]):
if '@CalledByNative' in line1:
raise ParseError('could not parse @CalledByNative method signature',
line1, line2)
return MangleCalledByNatives(called_by_natives)
class JNIFromJavaP(object):
"""Uses 'javap' to parse a .class file and generate the JNI header file."""
def __init__(self, contents, namespace):
self.contents = contents
self.namespace = namespace
self.fully_qualified_class = re.match(
'.*?(class|interface) (?P<class_name>.*?)( |{)',
contents[1]).group('class_name')
self.fully_qualified_class = self.fully_qualified_class.replace('.', '/')
JniParams.SetFullyQualifiedClass(self.fully_qualified_class)
self.java_class_name = self.fully_qualified_class.split('/')[-1]
if not self.namespace:
self.namespace = 'JNI_' + self.java_class_name
re_method = re.compile('(?P<prefix>.*?)(?P<return_type>\S+?) (?P<name>\w+?)'
'\((?P<params>.*?)\)')
self.called_by_natives = []
for content in contents[2:]:
match = re.match(re_method, content)
if not match:
continue
self.called_by_natives += [CalledByNative(
system_class=True,
unchecked=False,
static='static' in match.group('prefix'),
java_class_name='',
return_type=match.group('return_type').replace('.', '/'),
name=match.group('name'),
params=JniParams.Parse(match.group('params').replace('.', '/')))]
re_constructor = re.compile('.*? public ' +
self.fully_qualified_class.replace('/', '.') +
'\((?P<params>.*?)\)')
for content in contents[2:]:
match = re.match(re_constructor, content)
if not match:
continue
self.called_by_natives += [CalledByNative(
system_class=True,
unchecked=False,
static=False,
java_class_name='',
return_type=self.fully_qualified_class,
name='Constructor',
params=JniParams.Parse(match.group('params').replace('.', '/')),
is_constructor=True)]
self.called_by_natives = MangleCalledByNatives(self.called_by_natives)
self.inl_header_file_generator = InlHeaderFileGenerator(
self.namespace, self.fully_qualified_class, [], self.called_by_natives)
def GetContent(self):
return self.inl_header_file_generator.GetContent()
@staticmethod
def CreateFromClass(class_file, namespace):
class_name = os.path.splitext(os.path.basename(class_file))[0]
p = subprocess.Popen(args=['javap', class_name],
cwd=os.path.dirname(class_file),
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
stdout, _ = p.communicate()
jni_from_javap = JNIFromJavaP(stdout.split('\n'), namespace)
return jni_from_javap
class JNIFromJavaSource(object):
"""Uses the given java source file to generate the JNI header file."""
def __init__(self, contents, fully_qualified_class):
contents = self._RemoveComments(contents)
JniParams.SetFullyQualifiedClass(fully_qualified_class)
JniParams.ExtractImportsAndInnerClasses(contents)
jni_namespace = ExtractJNINamespace(contents)
natives = ExtractNatives(contents)
called_by_natives = ExtractCalledByNatives(contents)
if len(natives) == 0 and len(called_by_natives) == 0:
raise SyntaxError('Unable to find any JNI methods for %s.' %
fully_qualified_class)
inl_header_file_generator = InlHeaderFileGenerator(
jni_namespace, fully_qualified_class, natives, called_by_natives)
self.content = inl_header_file_generator.GetContent()
def _RemoveComments(self, contents):
# We need to support both inline and block comments, and we need to handle
# strings that contain '//' or '/*'. Rather than trying to do all that with
# regexps, we just pipe the contents through the C preprocessor. We tell cpp
# the file has already been preprocessed, so it just removes comments and
# doesn't try to parse #include, #pragma etc.
#
# TODO(husky): This is a bit hacky. It would be cleaner to use a real Java
# parser. Maybe we could ditch JNIFromJavaSource and just always use
# JNIFromJavaP; or maybe we could rewrite this script in Java and use APT.
# http://code.google.com/p/chromium/issues/detail?id=138941
p = subprocess.Popen(args=['cpp', '-fpreprocessed'],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
stdout, _ = p.communicate(contents)
return stdout
def GetContent(self):
return self.content
@staticmethod
def CreateFromFile(java_file_name):
contents = file(java_file_name).read()
fully_qualified_class = ExtractFullyQualifiedJavaClassName(java_file_name,
contents)
return JNIFromJavaSource(contents, fully_qualified_class)
class InlHeaderFileGenerator(object):
"""Generates an inline header file for JNI integration."""
def __init__(self, namespace, fully_qualified_class, natives,
called_by_natives):
self.namespace = namespace
self.fully_qualified_class = fully_qualified_class
self.class_name = self.fully_qualified_class.split('/')[-1]
self.natives = natives
self.called_by_natives = called_by_natives
self.header_guard = fully_qualified_class.replace('/', '_') + '_JNI'
def GetContent(self):
"""Returns the content of the JNI binding file."""
template = Template("""\
// Copyright (c) 2012 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
// This file is autogenerated by
// ${SCRIPT_NAME}
// For
// ${FULLY_QUALIFIED_CLASS}
#ifndef ${HEADER_GUARD}
#define ${HEADER_GUARD}
#include <jni.h>
#include "base/android/jni_android.h"
#include "base/android/scoped_java_ref.h"
#include "base/basictypes.h"
#include "base/logging.h"
using base::android::ScopedJavaLocalRef;
// Step 1: forward declarations.
namespace {
$CLASS_PATH_DEFINITIONS
} // namespace
$OPEN_NAMESPACE
$FORWARD_DECLARATIONS
// Step 2: method stubs.
$METHOD_STUBS
// Step 3: RegisterNatives.
static bool RegisterNativesImpl(JNIEnv* env) {
$REGISTER_NATIVES_IMPL
return true;
}
$CLOSE_NAMESPACE
#endif // ${HEADER_GUARD}
""")
script_components = os.path.abspath(sys.argv[0]).split(os.path.sep)
base_index = script_components.index('base')
script_name = os.sep.join(script_components[base_index:])
values = {
'SCRIPT_NAME': script_name,
'FULLY_QUALIFIED_CLASS': self.fully_qualified_class,
'CLASS_PATH_DEFINITIONS': self.GetClassPathDefinitionsString(),
'FORWARD_DECLARATIONS': self.GetForwardDeclarationsString(),
'METHOD_STUBS': self.GetMethodStubsString(),
'OPEN_NAMESPACE': self.GetOpenNamespaceString(),
'REGISTER_NATIVES_IMPL': self.GetRegisterNativesImplString(),
'CLOSE_NAMESPACE': self.GetCloseNamespaceString(),
'HEADER_GUARD': self.header_guard,
}
return WrapOutput(template.substitute(values))
def GetClassPathDefinitionsString(self):
ret = []
ret += [self.GetClassPathDefinitions()]
return '\n'.join(ret)
def GetForwardDeclarationsString(self):
ret = []
for native in self.natives:
if native.type != 'method':
ret += [self.GetForwardDeclaration(native)]
return '\n'.join(ret)
def GetMethodStubsString(self):
ret = []
for native in self.natives:
if native.type == 'method':
ret += [self.GetNativeMethodStub(native)]
for called_by_native in self.called_by_natives:
ret += [self.GetCalledByNativeMethodStub(called_by_native)]
return '\n'.join(ret)
def GetKMethodsString(self, clazz):
ret = []
for native in self.natives:
if (native.java_class_name == clazz or
(not native.java_class_name and clazz == self.class_name)):
ret += [self.GetKMethodArrayEntry(native)]
return '\n'.join(ret)
def GetRegisterNativesImplString(self):
"""Returns the implementation for RegisterNatives."""
template = Template("""\
static const JNINativeMethod kMethods${JAVA_CLASS}[] = {
${KMETHODS}
};
const int kMethods${JAVA_CLASS}Size = arraysize(kMethods${JAVA_CLASS});
if (env->RegisterNatives(g_${JAVA_CLASS}_clazz,
kMethods${JAVA_CLASS},
kMethods${JAVA_CLASS}Size) < 0) {
LOG(ERROR) << "RegisterNatives failed in " << __FILE__;
return false;
}
""")
ret = [self.GetFindClasses()]
all_classes = self.GetUniqueClasses(self.natives)
all_classes[self.class_name] = self.fully_qualified_class
for clazz in all_classes:
kmethods = self.GetKMethodsString(clazz)
if kmethods:
values = {'JAVA_CLASS': clazz,
'KMETHODS': kmethods}
ret += [template.substitute(values)]
if not ret: return ''
return '\n' + '\n'.join(ret)
def GetOpenNamespaceString(self):
if self.namespace:
all_namespaces = ['namespace %s {' % ns
for ns in self.namespace.split('::')]
return '\n'.join(all_namespaces)
return ''
def GetCloseNamespaceString(self):
if self.namespace:
all_namespaces = ['} // namespace %s' % ns
for ns in self.namespace.split('::')]
all_namespaces.reverse()
return '\n'.join(all_namespaces) + '\n'
return ''
def GetJNIFirstParam(self, native):
ret = []
if native.type == 'method':
ret = ['jobject obj']
elif native.type == 'function':
if native.static:
ret = ['jclass clazz']
else:
ret = ['jobject obj']
return ret
def GetParamsInDeclaration(self, native):
"""Returns the params for the stub declaration.
Args:
native: the native dictionary describing the method.
Returns:
A string containing the params.
"""
return ',\n '.join(self.GetJNIFirstParam(native) +
[JavaDataTypeToC(param.datatype) + ' ' +
param.name
for param in native.params])
def GetCalledByNativeParamsInDeclaration(self, called_by_native):
return ',\n '.join([JavaDataTypeToC(param.datatype) + ' ' +
param.name
for param in called_by_native.params])
def GetForwardDeclaration(self, native):
template = Template("""
static ${RETURN} ${NAME}(JNIEnv* env, ${PARAMS});
""")
values = {'RETURN': JavaDataTypeToC(native.return_type),
'NAME': native.name,
'PARAMS': self.GetParamsInDeclaration(native)}
return template.substitute(values)
def GetNativeMethodStub(self, native):
"""Returns stubs for native methods."""
template = Template("""\
static ${RETURN} ${NAME}(JNIEnv* env, ${PARAMS_IN_DECLARATION}) {
DCHECK(${PARAM0_NAME}) << "${NAME}";
${P0_TYPE}* native = reinterpret_cast<${P0_TYPE}*>(${PARAM0_NAME});
return native->${NAME}(env, obj${PARAMS_IN_CALL})${POST_CALL};
}
""")
params_for_call = ', '.join(p.name for p in native.params[1:])
if params_for_call:
params_for_call = ', ' + params_for_call
return_type = JavaDataTypeToC(native.return_type)
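    # Return types matching RE_SCOPED_JNI_RETURN_TYPES are Java object handles;
    # the native side hands them back as ScopedJavaLocalRef, and the generated
    # stub calls .Release() so a raw local reference crosses the JNI boundary.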
if re.match(RE_SCOPED_JNI_RETURN_TYPES, return_type):
scoped_return_type = 'ScopedJavaLocalRef<' + return_type + '>'
post_call = '.Release()'
else:
scoped_return_type = return_type
post_call = ''
values = {
'RETURN': return_type,
'SCOPED_RETURN': scoped_return_type,
'NAME': native.name,
'PARAMS_IN_DECLARATION': self.GetParamsInDeclaration(native),
'PARAM0_NAME': native.params[0].name,
'P0_TYPE': native.p0_type,
'PARAMS_IN_CALL': params_for_call,
'POST_CALL': post_call
}
return template.substitute(values)
def GetCalledByNativeMethodStub(self, called_by_native):
"""Returns a string."""
function_signature_template = Template("""\
static ${RETURN_TYPE} Java_${JAVA_CLASS}_${METHOD_ID_VAR_NAME}(\
JNIEnv* env${FIRST_PARAM_IN_DECLARATION}${PARAMS_IN_DECLARATION})""")
function_header_template = Template("""\
${FUNCTION_SIGNATURE} {""")
function_header_with_unused_template = Template("""\
${FUNCTION_SIGNATURE} __attribute__ ((unused));
${FUNCTION_SIGNATURE} {""")
template = Template("""
static base::subtle::AtomicWord g_${JAVA_CLASS}_${METHOD_ID_VAR_NAME} = 0;
${FUNCTION_HEADER}
/* Must call RegisterNativesImpl() */
DCHECK(g_${JAVA_CLASS}_clazz);
jmethodID method_id =
${GET_METHOD_ID_IMPL}
${RETURN_DECLARATION}
${PRE_CALL}env->${ENV_CALL}(${FIRST_PARAM_IN_CALL},
method_id${PARAMS_IN_CALL})${POST_CALL};
${CHECK_EXCEPTION}
${RETURN_CLAUSE}
}""")
if called_by_native.static or called_by_native.is_constructor:
first_param_in_declaration = ''
first_param_in_call = ('g_%s_clazz' %
(called_by_native.java_class_name or
self.class_name))
else:
first_param_in_declaration = ', jobject obj'
first_param_in_call = 'obj'
params_in_declaration = self.GetCalledByNativeParamsInDeclaration(
called_by_native)
if params_in_declaration:
params_in_declaration = ', ' + params_in_declaration
params_for_call = ', '.join(param.name
for param in called_by_native.params)
if params_for_call:
params_for_call = ', ' + params_for_call
pre_call = ''
post_call = ''
if called_by_native.static_cast:
pre_call = 'static_cast<%s>(' % called_by_native.static_cast
post_call = ')'
check_exception = ''
if not called_by_native.unchecked:
check_exception = 'base::android::CheckException(env);'
return_type = JavaDataTypeToC(called_by_native.return_type)
return_declaration = ''
return_clause = ''
if return_type != 'void':
pre_call = ' ' + pre_call
return_declaration = return_type + ' ret ='
if re.match(RE_SCOPED_JNI_RETURN_TYPES, return_type):
return_type = 'ScopedJavaLocalRef<' + return_type + '>'
return_clause = 'return ' + return_type + '(env, ret);'
else:
return_clause = 'return ret;'
values = {
'JAVA_CLASS': called_by_native.java_class_name or self.class_name,
'METHOD': called_by_native.name,
'RETURN_TYPE': return_type,
'RETURN_DECLARATION': return_declaration,
'RETURN_CLAUSE': return_clause,
'FIRST_PARAM_IN_DECLARATION': first_param_in_declaration,
'PARAMS_IN_DECLARATION': params_in_declaration,
'STATIC': 'Static' if called_by_native.static else '',
'PRE_CALL': pre_call,
'POST_CALL': post_call,
'ENV_CALL': called_by_native.env_call,
'FIRST_PARAM_IN_CALL': first_param_in_call,
'PARAMS_IN_CALL': params_for_call,
'METHOD_ID_VAR_NAME': called_by_native.method_id_var_name,
'CHECK_EXCEPTION': check_exception,
'GET_METHOD_ID_IMPL': self.GetMethodIDImpl(called_by_native)
}
values['FUNCTION_SIGNATURE'] = (
function_signature_template.substitute(values))
if called_by_native.system_class:
values['FUNCTION_HEADER'] = (
function_header_with_unused_template.substitute(values))
else:
values['FUNCTION_HEADER'] = function_header_template.substitute(values)
return template.substitute(values)
def GetKMethodArrayEntry(self, native):
template = Template("""\
{ "native${NAME}", ${JNI_SIGNATURE}, reinterpret_cast<void*>(${NAME}) },""")
values = {'NAME': native.name,
'JNI_SIGNATURE': JniParams.Signature(native.params,
native.return_type,
True)}
return template.substitute(values)
def GetUniqueClasses(self, origin):
ret = {self.class_name: self.fully_qualified_class}
for entry in origin:
class_name = self.class_name
jni_class_path = self.fully_qualified_class
if entry.java_class_name:
class_name = entry.java_class_name
jni_class_path = self.fully_qualified_class + '$' + class_name
ret[class_name] = jni_class_path
return ret
def GetClassPathDefinitions(self):
"""Returns the ClassPath constants."""
ret = []
template = Template("""\
const char k${JAVA_CLASS}ClassPath[] = "${JNI_CLASS_PATH}";""")
native_classes = self.GetUniqueClasses(self.natives)
called_by_native_classes = self.GetUniqueClasses(self.called_by_natives)
all_classes = native_classes
all_classes.update(called_by_native_classes)
for clazz in all_classes:
values = {
'JAVA_CLASS': clazz,
'JNI_CLASS_PATH': JniParams.RemapClassName(all_classes[clazz]),
}
ret += [template.substitute(values)]
    ret += ['']
for clazz in called_by_native_classes:
template = Template("""\
// Leaking this jclass as we cannot use LazyInstance from some threads.
jclass g_${JAVA_CLASS}_clazz = NULL;""")
values = {
'JAVA_CLASS': clazz,
}
ret += [template.substitute(values)]
return '\n'.join(ret)
def GetFindClasses(self):
"""Returns the imlementation of FindClass for all known classes."""
template = Template("""\
g_${JAVA_CLASS}_clazz = reinterpret_cast<jclass>(env->NewGlobalRef(
base::android::GetClass(env, k${JAVA_CLASS}ClassPath).obj()));""")
ret = []
for clazz in self.GetUniqueClasses(self.called_by_natives):
values = {'JAVA_CLASS': clazz}
ret += [template.substitute(values)]
return '\n'.join(ret)
def GetMethodIDImpl(self, called_by_native):
"""Returns the implementation of GetMethodID."""
template = Template("""\
base::android::MethodID::LazyGet<
base::android::MethodID::TYPE_${STATIC}>(
env, g_${JAVA_CLASS}_clazz,
"${JNI_NAME}",
${JNI_SIGNATURE},
&g_${JAVA_CLASS}_${METHOD_ID_VAR_NAME});
""")
jni_name = called_by_native.name
jni_return_type = called_by_native.return_type
if called_by_native.is_constructor:
jni_name = '<init>'
jni_return_type = 'void'
values = {
'JAVA_CLASS': called_by_native.java_class_name or self.class_name,
'JNI_NAME': jni_name,
'METHOD_ID_VAR_NAME': called_by_native.method_id_var_name,
'STATIC': 'STATIC' if called_by_native.static else 'INSTANCE',
'JNI_SIGNATURE': JniParams.Signature(called_by_native.params,
jni_return_type,
True)
}
return template.substitute(values)
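# WrapOutput re-wraps the generated header at 80 columns: preprocessor lines
# are kept intact, wrapped lines keep their original indentation plus four
# spaces, and wrapped '//' comments stay commented on continuation lines.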
def WrapOutput(output):
ret = []
for line in output.splitlines():
# Do not wrap lines under 80 characters or preprocessor directives.
if len(line) < 80 or line.lstrip()[:1] == '#':
stripped = line.rstrip()
if len(ret) == 0 or len(ret[-1]) or len(stripped):
ret.append(stripped)
else:
first_line_indent = ' ' * (len(line) - len(line.lstrip()))
subsequent_indent = first_line_indent + ' ' * 4
if line.startswith('//'):
subsequent_indent = '//' + subsequent_indent
wrapper = textwrap.TextWrapper(width=80,
subsequent_indent=subsequent_indent,
break_long_words=False)
ret += [wrapped.rstrip() for wrapped in wrapper.wrap(line)]
ret += ['']
return '\n'.join(ret)
def ExtractJarInputFile(jar_file, input_file, out_dir):
"""Extracts input file from jar and returns the filename.
  The input file is extracted to the same directory that the generated jni
  headers will be placed in. This is passed as an argument to the script.
  Args:
    jar_file: the jar file containing the input file to extract.
    input_file: the name of the file to extract from the jar file.
    out_dir: the name of the directory to extract to.
  Returns:
    The name of the extracted input file.
"""
jar_file = zipfile.ZipFile(jar_file)
out_dir = os.path.join(out_dir, os.path.dirname(input_file))
try:
os.makedirs(out_dir)
except OSError as e:
if e.errno != errno.EEXIST:
raise
extracted_file_name = os.path.join(out_dir, os.path.basename(input_file))
with open(extracted_file_name, 'w') as outfile:
outfile.write(jar_file.read(input_file))
return extracted_file_name
def GenerateJNIHeader(input_file, output_file, namespace, skip_if_same):
try:
if os.path.splitext(input_file)[1] == '.class':
jni_from_javap = JNIFromJavaP.CreateFromClass(input_file, namespace)
content = jni_from_javap.GetContent()
else:
jni_from_java_source = JNIFromJavaSource.CreateFromFile(input_file)
content = jni_from_java_source.GetContent()
except ParseError, e:
print e
sys.exit(1)
if output_file:
if not os.path.exists(os.path.dirname(os.path.abspath(output_file))):
os.makedirs(os.path.dirname(os.path.abspath(output_file)))
if skip_if_same and os.path.exists(output_file):
with file(output_file, 'r') as f:
existing_content = f.read()
if existing_content == content:
return
with file(output_file, 'w') as f:
f.write(content)
else:
    print content
def main(argv):
usage = """usage: %prog [OPTIONS]
This script will parse the given java source code extracting the native
declarations and print the header file to stdout (or a file).
See SampleForTests.java for more details.
"""
option_parser = optparse.OptionParser(usage=usage)
option_parser.add_option('-j', dest='jar_file',
help='Extract the list of input files from'
' a specified jar file.'
' Uses javap to extract the methods from a'
' pre-compiled class. --input should point'
' to pre-compiled Java .class files.')
option_parser.add_option('-n', dest='namespace',
help='Uses as a namespace in the generated header,'
' instead of the javap class name.')
option_parser.add_option('--input_file',
help='Single input file name. The output file name '
'will be derived from it. Must be used with '
'--output_dir.')
option_parser.add_option('--output_dir',
help='The output directory. Must be used with '
'--input')
option_parser.add_option('--optimize_generation', type="int",
default=0, help='Whether we should optimize JNI '
'generation by not regenerating files if they have '
'not changed.')
option_parser.add_option('--jarjar',
help='Path to optional jarjar rules file.')
options, args = option_parser.parse_args(argv)
if options.jar_file:
input_file = ExtractJarInputFile(options.jar_file, options.input_file,
options.output_dir)
else:
input_file = options.input_file
output_file = None
if options.output_dir:
root_name = os.path.splitext(os.path.basename(input_file))[0]
output_file = os.path.join(options.output_dir, root_name) + '_jni.h'
if options.jarjar:
with open(options.jarjar) as f:
JniParams.SetJarJarMappings(f.read())
GenerateJNIHeader(input_file, output_file, options.namespace,
options.optimize_generation)
if __name__ == '__main__':
sys.exit(main(sys.argv))
| bsd-3-clause | -6,610,376,184,051,497,000 | 35.319249 | 80 | 0.611841 | false | 3.648368 | false | false | false |
BhallaLab/moose-examples | symcomp/symcomp.py | 4 | 3064 | # symcompartment.py ---
#
# Filename: symcompartment.py
# Description:
# Author:
# Maintainer:
# Created: Thu Jun 20 17:47:10 2013 (+0530)
# Version:
# Last-Updated: Wed Jun 26 11:43:47 2013 (+0530)
# By: subha
# Update #: 90
# URL:
# Keywords:
# Compatibility:
#
#
# Commentary:
#
#
#
#
# Change log:
#
#
#
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; see the file COPYING. If not, write to
# the Free Software Foundation, Inc., 51 Franklin Street, Fifth
# Floor, Boston, MA 02110-1301, USA.
#
#
# Code:
import numpy as np
import pylab
import moose
simdt = 1e-6
simtime = 100e-3
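# The test below builds a minimal three-compartment model: a soma with two
# daughter SymCompartments d1 and d2 wired through proximal/distal (and a
# d1-d2 sibling) messages, injects a current pulse into d1, records Vm of all
# three compartments into tables, then saves and plots the traces.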
def test_symcompartment():
model = moose.Neutral('model')
soma = moose.SymCompartment('%s/soma' % (model.path))
soma.Em = -60e-3
soma.Rm = 1e9
soma.Cm = 1e-11
soma.Ra = 1e6
d1 = moose.SymCompartment('%s/d1' % (model.path))
d1.Rm = 1e8
d1.Cm = 1e-10
d1.Ra = 1e7
d2 = moose.SymCompartment('%s/d2' % (model.path))
d2.Rm = 1e8
d2.Cm = 1e-10
d2.Ra = 2e7
moose.connect(d1, 'proximal', soma, 'distal')
moose.connect(d2, 'proximal', soma, 'distal')
moose.connect(d1, 'sibling', d2, 'sibling')
pg = moose.PulseGen('/model/pulse')
pg.delay[0] = 10e-3
pg.width[0] = 20e-3
pg.level[0] = 1e-6
pg.delay[1] = 1e9
moose.connect(pg, 'output', d1, 'injectMsg')
data = moose.Neutral('/data')
tab_soma = moose.Table('%s/soma_Vm' % (data.path))
tab_d1 = moose.Table('%s/d1_Vm' % (data.path))
tab_d2 = moose.Table('%s/d2_Vm' % (data.path))
moose.connect(tab_soma, 'requestOut', soma, 'getVm')
moose.connect(tab_d1, 'requestOut', d1, 'getVm')
moose.connect(tab_d2, 'requestOut', d2, 'getVm')
moose.setClock(0, simdt)
moose.setClock(1, simdt)
moose.setClock(2, simdt)
moose.useClock(0, '/model/##[ISA=Compartment]', 'init') # This is allowed because SymCompartment is a subclass of Compartment
moose.useClock(1, '/model/##', 'process')
moose.useClock(2, '/data/##[ISA=Table]', 'process')
moose.reinit()
moose.start(simtime)
t = np.linspace(0, simtime, len(tab_soma.vector))
data_matrix = np.vstack((t, tab_soma.vector, tab_d1.vector, tab_d2.vector))
np.savetxt('symcompartment.txt', data_matrix.transpose())
pylab.plot(t, tab_soma.vector, label='Vm_soma')
pylab.plot(t, tab_d1.vector, label='Vm_d1')
pylab.plot(t, tab_d2.vector, label='Vm_d2')
pylab.show()
if __name__ == '__main__':
test_symcompartment()
#
# symcompartment.py ends here
| gpl-2.0 | -4,284,771,184,869,985,000 | 27.110092 | 129 | 0.642624 | false | 2.671316 | false | false | false |
sznekol/django-cms | cms/management/commands/subcommands/base.py | 48 | 1618 | # -*- coding: utf-8 -*-
import sys
from django.core.management.base import BaseCommand, CommandError
class SubcommandsCommand(BaseCommand):
subcommands = {}
command_name = ''
def __init__(self):
super(SubcommandsCommand, self).__init__()
for name, subcommand in self.subcommands.items():
subcommand.command_name = '%s %s' % (self.command_name, name)
def handle(self, *args, **options):
stderr = getattr(self, 'stderr', sys.stderr)
stdout = getattr(self, 'stdout', sys.stdout)
if len(args) > 0:
if args[0] in self.subcommands.keys():
handle_command = self.subcommands.get(args[0])()
handle_command.stdout = stdout
handle_command.stderr = stderr
handle_command.handle(*args[1:], **options)
else:
stderr.write("%r is not a valid subcommand for %r\n" % (args[0], self.command_name))
stderr.write("Available subcommands are:\n")
for subcommand in sorted(self.subcommands.keys()):
stderr.write(" %r\n" % subcommand)
raise CommandError('Invalid subcommand %r for %r' % (args[0], self.command_name))
else:
stderr.write("%r must be called with at least one argument, it's subcommand.\n" % self.command_name)
stderr.write("Available subcommands are:\n")
for subcommand in sorted(self.subcommands.keys()):
stderr.write(" %r\n" % subcommand)
raise CommandError('No subcommand given for %r' % self.command_name)
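# Example (the names below are purely illustrative, not part of this module):
#
#     class PageCommand(SubcommandsCommand):
#         command_name = 'cms page'
#         subcommands = {'publish': PublishCommand, 'list': ListCommand}
#
# handle() looks up args[0] in `subcommands`, instantiates the matching class
# and forwards the remaining arguments to that subcommand's handle().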
| bsd-3-clause | -2,089,971,199,633,882,000 | 43.944444 | 112 | 0.58529 | false | 4.127551 | false | false | false |
henrykironde/weaverhenry | app/app.py | 2 | 1395 |
import wx
class Frame(wx.Frame):
def __init__(self, parent, id, title):
wx.Frame.__init__(self, parent, id, title, wx.DefaultPosition, wx.Size(450, 350))
hbox = wx.BoxSizer(wx.HORIZONTAL)
vbox = wx.BoxSizer(wx.VERTICAL)
panel1 = wx.Panel(self, -1)
panel2 = wx.Panel(self, -1)
self.tree = wx.TreeCtrl(panel1, 1, wx.DefaultPosition, (-1,-1), wx.TR_HIDE_ROOT|wx.TR_HAS_BUTTONS)
root = self.tree.AddRoot('weaver Data')
os = self.tree.AppendItem(root, 'Gis data')
pl = self.tree.AppendItem(root, 'Time Series')
cl = self.tree.AppendItem(pl, 'Integrated Data')
sl = self.tree.AppendItem(pl, 'world data')
self.tree.AppendItem(cl, 'Plants')
self.tree.Bind(wx.EVT_TREE_SEL_CHANGED, self.OnSelChanged, id=1)
self.display = wx.StaticText(panel2, -1, '',(10,10), style=wx.ALIGN_CENTRE)
vbox.Add(self.tree, 1, wx.EXPAND)
hbox.Add(panel1, 1, wx.EXPAND)
hbox.Add(panel2, 1, wx.EXPAND)
panel1.SetSizer(vbox)
self.SetSizer(hbox)
self.Centre()
def OnSelChanged(self, event):
item = event.GetItem()
self.display.SetLabel(self.tree.GetItemText(item))
class App(wx.App):
def OnInit(self):
frame = Frame(None, -1, 'treectrl.py')
frame.Show(True)
self.SetTopWindow(frame)
return True | mit | 7,659,714,490,347,819,000 | 33.04878 | 106 | 0.602151 | false | 3.086283 | false | false | false |
827992983/yue | yue/urls.py | 1 | 2674 | """yue URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/1.9/topics/http/urls/
Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: url(r'^$', views.home, name='home')
Class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: url(r'^$', Home.as_view(), name='home')
Including another URLconf
1. Import the include() function: from django.conf.urls import url, include
2. Add a URL to urlpatterns: url(r'^blog/', include('blog.urls'))
"""
from django.conf.urls import url
from django.contrib.staticfiles.urls import static
from django.conf import settings
# from django.contrib import admin
from django.contrib.staticfiles.urls import staticfiles_urlpatterns
from api import views as api_views
from admin import views as admin_views
from guest import views as guest_views
from login import views as login_views
urlpatterns = [
url(r'^login', login_views.login, name='login'),
url(r'^logout', login_views.logout, name='logout'),
url(r'^admin', admin_views.index, name='admin'),
url(r'^changepwd', admin_views.changepwd, name='changepwd'),
url(r'^configure', admin_views.configure, name='configure'),
url(r'^checkenv', admin_views.checkenv, name='checkenv'),
url(r'^storage', admin_views.storage, name='storage'),
url(r'^network', admin_views.network, name='network'),
url(r'^vms', admin_views.vm, name='vm'),
url(r'^vm/status', admin_views.vm_status, name='vm_status'),
url(r'^vm/edit', admin_views.vm_edit, name='vm_edit'),
url(r'^vm/delete', admin_views.vm_delete, name='vm_delete'),
url(r'^vm/start', admin_views.vm_start, name='vm_start'),
url(r'^vm/stop', admin_views.vm_stop, name='vm_stop'),
url(r'^vm/template', admin_views.template, name='template'),
url(r'^vm/snapshot', admin_views.snapshot, name='snapshot'),
url(r'^vm/conninfo', admin_views.connect_info, name='conninfo'),
url(r'^iso', admin_views.iso, name='iso'),
url(r'^users', login_views.users, name='users'),
url(r'^user/create', login_views.create_user, name='create_user'),
url(r'^user/delete', login_views.delete_user, name='delete_user'),
url(r'^user/edit', login_views.edit_user, name='edit_user'),
url(r'^guest', guest_views.index, name='guest'),
url(r'^get_vms_by_user', admin_views.get_vms_by_user, name='get_vms_by_user'),
url(r'^api', api_views.index),
url(r'^index', login_views.index, name='index'),
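    # r'^' below matches every path; Django tries patterns in order, so this
    # catch-all has to stay at the end of the list.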
url(r'^', login_views.index, name='index'),
]
urlpatterns += staticfiles_urlpatterns()
| gpl-3.0 | 6,424,795,993,814,014,000 | 46.75 | 82 | 0.684368 | false | 3.260976 | false | false | false |
FlorentChamault/My_sickbeard | lib/hachoir_parser/file_system/mbr.py | 90 | 7784 | """
Master Boot Record.
"""
# cfdisk uses the following algorithm to compute the geometry:
# 0. Use the values given by the user.
# 1. Try to guess the geometry from the partition table:
# if all the used partitions end at the same head H and the
# same sector S, then there are (H+1) heads and S sectors/cylinder.
# 2. Ask the system (ioctl/HDIO_GETGEO).
# 3. 255 heads and 63 sectors/cylinder.
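# Example for step 1: if every used partition ends at head 254 and sector 63,
# the guessed geometry is 255 heads and 63 sectors/cylinder (the same values
# as the fallback in step 3).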
from lib.hachoir_parser import Parser
from lib.hachoir_core.field import (FieldSet,
Enum, Bits, UInt8, UInt16, UInt32,
RawBytes)
from lib.hachoir_core.endian import LITTLE_ENDIAN
from lib.hachoir_core.tools import humanFilesize
from lib.hachoir_core.text_handler import textHandler, hexadecimal
BLOCK_SIZE = 512 # bytes
class CylinderNumber(Bits):
def __init__(self, parent, name, description=None):
Bits.__init__(self, parent, name, 10, description)
def createValue(self):
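        # In the CHS encoding the 10-bit cylinder number is split across two
        # bytes (its two high bits sit in the sector byte), so the raw value
        # read below is out of order; the return expression moves those two
        # bits back up to positions 8-9.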
i = self.parent.stream.readInteger(
self.absolute_address, False, self._size, self.parent.endian)
return i >> 2 | i % 4 << 8
class PartitionHeader(FieldSet):
static_size = 16*8
# taken from the source of cfdisk:
# sed -n 's/.*{\(.*\), N_(\(.*\))}.*/ \1: \2,/p' i386_sys_types.c
system_name = {
0x00: "Empty",
0x01: "FAT12",
0x02: "XENIX root",
0x03: "XENIX usr",
0x04: "FAT16 <32M",
0x05: "Extended",
0x06: "FAT16",
0x07: "HPFS/NTFS",
0x08: "AIX",
0x09: "AIX bootable",
0x0a: "OS/2 Boot Manager",
0x0b: "W95 FAT32",
0x0c: "W95 FAT32 (LBA)",
0x0e: "W95 FAT16 (LBA)",
0x0f: "W95 Ext'd (LBA)",
0x10: "OPUS",
0x11: "Hidden FAT12",
0x12: "Compaq diagnostics",
0x14: "Hidden FAT16 <32M",
0x16: "Hidden FAT16",
0x17: "Hidden HPFS/NTFS",
0x18: "AST SmartSleep",
0x1b: "Hidden W95 FAT32",
0x1c: "Hidden W95 FAT32 (LBA)",
0x1e: "Hidden W95 FAT16 (LBA)",
0x24: "NEC DOS",
0x39: "Plan 9",
0x3c: "PartitionMagic recovery",
0x40: "Venix 80286",
0x41: "PPC PReP Boot",
0x42: "SFS",
0x4d: "QNX4.x",
0x4e: "QNX4.x 2nd part",
0x4f: "QNX4.x 3rd part",
0x50: "OnTrack DM",
0x51: "OnTrack DM6 Aux1",
0x52: "CP/M",
0x53: "OnTrack DM6 Aux3",
0x54: "OnTrackDM6",
0x55: "EZ-Drive",
0x56: "Golden Bow",
0x5c: "Priam Edisk",
0x61: "SpeedStor",
0x63: "GNU HURD or SysV",
0x64: "Novell Netware 286",
0x65: "Novell Netware 386",
0x70: "DiskSecure Multi-Boot",
0x75: "PC/IX",
0x80: "Old Minix",
0x81: "Minix / old Linux",
0x82: "Linux swap / Solaris",
0x83: "Linux (ext2/ext3)",
0x84: "OS/2 hidden C: drive",
0x85: "Linux extended",
0x86: "NTFS volume set",
0x87: "NTFS volume set",
0x88: "Linux plaintext",
0x8e: "Linux LVM",
0x93: "Amoeba",
0x94: "Amoeba BBT",
0x9f: "BSD/OS",
0xa0: "IBM Thinkpad hibernation",
0xa5: "FreeBSD",
0xa6: "OpenBSD",
0xa7: "NeXTSTEP",
0xa8: "Darwin UFS",
0xa9: "NetBSD",
0xab: "Darwin boot",
0xb7: "BSDI fs",
0xb8: "BSDI swap",
0xbb: "Boot Wizard hidden",
0xbe: "Solaris boot",
0xbf: "Solaris",
0xc1: "DRDOS/sec (FAT-12)",
0xc4: "DRDOS/sec (FAT-16 < 32M)",
0xc6: "DRDOS/sec (FAT-16)",
0xc7: "Syrinx",
0xda: "Non-FS data",
0xdb: "CP/M / CTOS / ...",
0xde: "Dell Utility",
0xdf: "BootIt",
0xe1: "DOS access",
0xe3: "DOS R/O",
0xe4: "SpeedStor",
0xeb: "BeOS fs",
0xee: "EFI GPT",
0xef: "EFI (FAT-12/16/32)",
0xf0: "Linux/PA-RISC boot",
0xf1: "SpeedStor",
0xf4: "SpeedStor",
0xf2: "DOS secondary",
0xfd: "Linux raid autodetect",
0xfe: "LANstep",
0xff: "BBT"
}
def createFields(self):
yield UInt8(self, "bootable", "Bootable flag (true if equals to 0x80)")
if self["bootable"].value not in (0x00, 0x80):
self.warning("Stream doesn't look like master boot record (partition bootable error)!")
yield UInt8(self, "start_head", "Starting head number of the partition")
yield Bits(self, "start_sector", 6, "Starting sector number of the partition")
yield CylinderNumber(self, "start_cylinder", "Starting cylinder number of the partition")
yield Enum(UInt8(self, "system", "System indicator"), self.system_name)
yield UInt8(self, "end_head", "Ending head number of the partition")
yield Bits(self, "end_sector", 6, "Ending sector number of the partition")
yield CylinderNumber(self, "end_cylinder", "Ending cylinder number of the partition")
yield UInt32(self, "LBA", "LBA (number of sectors before this partition)")
yield UInt32(self, "size", "Size (block count)")
def isUsed(self):
return self["system"].value != 0
def createDescription(self):
desc = "Partition header: "
if self.isUsed():
system = self["system"].display
size = self["size"].value * BLOCK_SIZE
desc += "%s, %s" % (system, humanFilesize(size))
else:
desc += "(unused)"
return desc
class MasterBootRecord(FieldSet):
static_size = 512*8
def createFields(self):
yield RawBytes(self, "program", 446, "Boot program (Intel x86 machine code)")
yield PartitionHeader(self, "header[0]")
yield PartitionHeader(self, "header[1]")
yield PartitionHeader(self, "header[2]")
yield PartitionHeader(self, "header[3]")
yield textHandler(UInt16(self, "signature", "Signature (0xAA55)"), hexadecimal)
def _getPartitions(self):
return ( self[index] for index in xrange(1,5) )
headers = property(_getPartitions)
class Partition(FieldSet):
def createFields(self):
mbr = MasterBootRecord(self, "mbr")
yield mbr
        # No error if we only want to analyse a backup of an MBR
if self.eof:
return
for start, index, header in sorted((hdr["LBA"].value, index, hdr)
for index, hdr in enumerate(mbr.headers) if hdr.isUsed()):
# Seek to the beginning of the partition
padding = self.seekByte(start * BLOCK_SIZE, "padding[]")
if padding:
yield padding
# Content of the partition
name = "partition[%u]" % index
size = BLOCK_SIZE * header["size"].value
desc = header["system"].display
if header["system"].value == 5:
yield Partition(self, name, desc, size * 8)
else:
yield RawBytes(self, name, size, desc)
# Padding at the end
if self.current_size < self._size:
yield self.seekBit(self._size, "end")
class MSDos_HardDrive(Parser, Partition):
endian = LITTLE_ENDIAN
MAGIC = "\x55\xAA"
PARSER_TAGS = {
"id": "msdos_harddrive",
"category": "file_system",
"description": "MS-DOS hard drive with Master Boot Record (MBR)",
"min_size": 512*8,
"file_ext": ("",),
# "magic": ((MAGIC, 510*8),),
}
def validate(self):
if self.stream.readBytes(510*8, 2) != self.MAGIC:
return "Invalid signature"
used = False
for hdr in self["mbr"].headers:
if hdr["bootable"].value not in (0x00, 0x80):
return "Wrong boot flag"
used |= hdr.isUsed()
return used or "No partition found"
| gpl-3.0 | -6,416,897,720,142,389,000 | 32.843478 | 99 | 0.561151 | false | 3.235245 | false | false | false |
testmana2/test | ThirdParty/Pygments/pygments/lexers/erlang.py | 72 | 18195 | # -*- coding: utf-8 -*-
"""
pygments.lexers.erlang
~~~~~~~~~~~~~~~~~~~~~~
Lexers for Erlang.
:copyright: Copyright 2006-2014 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import re
from pygments.lexer import Lexer, RegexLexer, bygroups, words, do_insertions, \
include, default
from pygments.token import Text, Comment, Operator, Keyword, Name, String, \
Number, Punctuation, Generic
__all__ = ['ErlangLexer', 'ErlangShellLexer', 'ElixirConsoleLexer',
'ElixirLexer']
line_re = re.compile('.*?\n')
class ErlangLexer(RegexLexer):
"""
For the Erlang functional programming language.
Blame Jeremy Thurgood (http://jerith.za.net/).
.. versionadded:: 0.9
"""
name = 'Erlang'
aliases = ['erlang']
filenames = ['*.erl', '*.hrl', '*.es', '*.escript']
mimetypes = ['text/x-erlang']
keywords = (
'after', 'begin', 'case', 'catch', 'cond', 'end', 'fun', 'if',
'let', 'of', 'query', 'receive', 'try', 'when',
)
builtins = ( # See erlang(3) man page
'abs', 'append_element', 'apply', 'atom_to_list', 'binary_to_list',
'bitstring_to_list', 'binary_to_term', 'bit_size', 'bump_reductions',
'byte_size', 'cancel_timer', 'check_process_code', 'delete_module',
'demonitor', 'disconnect_node', 'display', 'element', 'erase', 'exit',
'float', 'float_to_list', 'fun_info', 'fun_to_list',
'function_exported', 'garbage_collect', 'get', 'get_keys',
'group_leader', 'hash', 'hd', 'integer_to_list', 'iolist_to_binary',
'iolist_size', 'is_atom', 'is_binary', 'is_bitstring', 'is_boolean',
'is_builtin', 'is_float', 'is_function', 'is_integer', 'is_list',
'is_number', 'is_pid', 'is_port', 'is_process_alive', 'is_record',
'is_reference', 'is_tuple', 'length', 'link', 'list_to_atom',
'list_to_binary', 'list_to_bitstring', 'list_to_existing_atom',
'list_to_float', 'list_to_integer', 'list_to_pid', 'list_to_tuple',
'load_module', 'localtime_to_universaltime', 'make_tuple', 'md5',
'md5_final', 'md5_update', 'memory', 'module_loaded', 'monitor',
'monitor_node', 'node', 'nodes', 'open_port', 'phash', 'phash2',
'pid_to_list', 'port_close', 'port_command', 'port_connect',
'port_control', 'port_call', 'port_info', 'port_to_list',
'process_display', 'process_flag', 'process_info', 'purge_module',
'put', 'read_timer', 'ref_to_list', 'register', 'resume_process',
'round', 'send', 'send_after', 'send_nosuspend', 'set_cookie',
'setelement', 'size', 'spawn', 'spawn_link', 'spawn_monitor',
'spawn_opt', 'split_binary', 'start_timer', 'statistics',
'suspend_process', 'system_flag', 'system_info', 'system_monitor',
'system_profile', 'term_to_binary', 'tl', 'trace', 'trace_delivered',
'trace_info', 'trace_pattern', 'trunc', 'tuple_size', 'tuple_to_list',
'universaltime_to_localtime', 'unlink', 'unregister', 'whereis'
)
operators = r'(\+\+?|--?|\*|/|<|>|/=|=:=|=/=|=<|>=|==?|<-|!|\?)'
word_operators = (
'and', 'andalso', 'band', 'bnot', 'bor', 'bsl', 'bsr', 'bxor',
'div', 'not', 'or', 'orelse', 'rem', 'xor'
)
atom_re = r"(?:[a-z]\w*|'[^\n']*[^\\]')"
variable_re = r'(?:[A-Z_]\w*)'
escape_re = r'(?:\\(?:[bdefnrstv\'"\\/]|[0-7][0-7]?[0-7]?|\^[a-zA-Z]))'
macro_re = r'(?:'+variable_re+r'|'+atom_re+r')'
base_re = r'(?:[2-9]|[12][0-9]|3[0-6])'
tokens = {
'root': [
(r'\s+', Text),
(r'%.*\n', Comment),
(words(keywords, suffix=r'\b'), Keyword),
(words(builtins, suffix=r'\b'), Name.Builtin),
(words(word_operators, suffix=r'\b'), Operator.Word),
(r'^-', Punctuation, 'directive'),
(operators, Operator),
(r'"', String, 'string'),
(r'<<', Name.Label),
(r'>>', Name.Label),
('(' + atom_re + ')(:)', bygroups(Name.Namespace, Punctuation)),
('(?:^|(?<=:))(' + atom_re + r')(\s*)(\()',
bygroups(Name.Function, Text, Punctuation)),
(r'[+-]?' + base_re + r'#[0-9a-zA-Z]+', Number.Integer),
            (r'[+-]?\d+\.\d+', Number.Float),
            (r'[+-]?\d+', Number.Integer),
(r'[]\[:_@\".{}()|;,]', Punctuation),
(variable_re, Name.Variable),
(atom_re, Name),
(r'\?'+macro_re, Name.Constant),
(r'\$(?:'+escape_re+r'|\\[ %]|[^\\])', String.Char),
(r'#'+atom_re+r'(:?\.'+atom_re+r')?', Name.Label),
],
'string': [
(escape_re, String.Escape),
(r'"', String, '#pop'),
(r'~[0-9.*]*[~#+bBcdefginpPswWxX]', String.Interpol),
(r'[^"\\~]+', String),
(r'~', String),
],
'directive': [
(r'(define)(\s*)(\()('+macro_re+r')',
bygroups(Name.Entity, Text, Punctuation, Name.Constant), '#pop'),
(r'(record)(\s*)(\()('+macro_re+r')',
bygroups(Name.Entity, Text, Punctuation, Name.Label), '#pop'),
(atom_re, Name.Entity, '#pop'),
],
}
class ErlangShellLexer(Lexer):
"""
Shell sessions in erl (for Erlang code).
.. versionadded:: 1.1
"""
name = 'Erlang erl session'
aliases = ['erl']
filenames = ['*.erl-sh']
mimetypes = ['text/x-erl-shellsession']
_prompt_re = re.compile(r'\d+>(?=\s|\Z)')
def get_tokens_unprocessed(self, text):
erlexer = ErlangLexer(**self.options)
curcode = ''
insertions = []
for match in line_re.finditer(text):
line = match.group()
m = self._prompt_re.match(line)
if m is not None:
end = m.end()
insertions.append((len(curcode),
[(0, Generic.Prompt, line[:end])]))
curcode += line[end:]
else:
if curcode:
for item in do_insertions(insertions,
erlexer.get_tokens_unprocessed(curcode)):
yield item
curcode = ''
insertions = []
if line.startswith('*'):
yield match.start(), Generic.Traceback, line
else:
yield match.start(), Generic.Output, line
if curcode:
for item in do_insertions(insertions,
erlexer.get_tokens_unprocessed(curcode)):
yield item
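# The helpers below build the per-delimiter string states shared by the Elixir
# lexer: gen_elixir_string_rules handles ordinary strings (escapes plus #{...}
# interpolation) and gen_elixir_sigstr_rules handles sigil bodies, where
# interpolation is optional.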
def gen_elixir_string_rules(name, symbol, token):
states = {}
states['string_' + name] = [
(r'[^#%s\\]+' % (symbol,), token),
include('escapes'),
(r'\\.', token),
(r'(%s)' % (symbol,), bygroups(token), "#pop"),
include('interpol')
]
return states
def gen_elixir_sigstr_rules(term, token, interpol=True):
if interpol:
return [
(r'[^#%s\\]+' % (term,), token),
include('escapes'),
(r'\\.', token),
(r'%s[a-zA-Z]*' % (term,), token, '#pop'),
include('interpol')
]
else:
return [
(r'[^%s\\]+' % (term,), token),
(r'\\.', token),
(r'%s[a-zA-Z]*' % (term,), token, '#pop'),
]
class ElixirLexer(RegexLexer):
"""
For the `Elixir language <http://elixir-lang.org>`_.
.. versionadded:: 1.5
"""
name = 'Elixir'
aliases = ['elixir', 'ex', 'exs']
filenames = ['*.ex', '*.exs']
mimetypes = ['text/x-elixir']
KEYWORD = ('fn', 'do', 'end', 'after', 'else', 'rescue', 'catch')
KEYWORD_OPERATOR = ('not', 'and', 'or', 'when', 'in')
BUILTIN = (
'case', 'cond', 'for', 'if', 'unless', 'try', 'receive', 'raise',
'quote', 'unquote', 'unquote_splicing', 'throw', 'super'
)
BUILTIN_DECLARATION = (
'def', 'defp', 'defmodule', 'defprotocol', 'defmacro', 'defmacrop',
'defdelegate', 'defexception', 'defstruct', 'defimpl', 'defcallback'
)
BUILTIN_NAMESPACE = ('import', 'require', 'use', 'alias')
CONSTANT = ('nil', 'true', 'false')
PSEUDO_VAR = ('_', '__MODULE__', '__DIR__', '__ENV__', '__CALLER__')
OPERATORS3 = (
'<<<', '>>>', '|||', '&&&', '^^^', '~~~', '===', '!==',
'~>>', '<~>', '|~>', '<|>',
)
OPERATORS2 = (
'==', '!=', '<=', '>=', '&&', '||', '<>', '++', '--', '|>', '=~',
'->', '<-', '|', '.', '=', '~>', '<~',
)
OPERATORS1 = ('<', '>', '+', '-', '*', '/', '!', '^', '&')
PUNCTUATION = (
'\\\\', '<<', '>>', '=>', '(', ')', ':', ';', ',', '[', ']'
)
def get_tokens_unprocessed(self, text):
for index, token, value in RegexLexer.get_tokens_unprocessed(self, text):
if token is Name:
if value in self.KEYWORD:
yield index, Keyword, value
elif value in self.KEYWORD_OPERATOR:
yield index, Operator.Word, value
elif value in self.BUILTIN:
yield index, Keyword, value
elif value in self.BUILTIN_DECLARATION:
yield index, Keyword.Declaration, value
elif value in self.BUILTIN_NAMESPACE:
yield index, Keyword.Namespace, value
elif value in self.CONSTANT:
yield index, Name.Constant, value
elif value in self.PSEUDO_VAR:
yield index, Name.Builtin.Pseudo, value
else:
yield index, token, value
else:
yield index, token, value
def gen_elixir_sigil_rules():
# all valid sigil terminators (excluding heredocs)
terminators = [
(r'\{', r'\}', 'cb'),
(r'\[', r'\]', 'sb'),
(r'\(', r'\)', 'pa'),
(r'<', r'>', 'ab'),
(r'/', r'/', 'slas'),
(r'\|', r'\|', 'pipe'),
('"', '"', 'quot'),
("'", "'", 'apos'),
]
# heredocs have slightly different rules
triquotes = [(r'"""', 'triquot'), (r"'''", 'triapos')]
token = String.Other
states = {'sigils': []}
for term, name in triquotes:
states['sigils'] += [
(r'(~[a-z])(%s)' % (term,), bygroups(token, String.Heredoc),
(name + '-end', name + '-intp')),
(r'(~[A-Z])(%s)' % (term,), bygroups(token, String.Heredoc),
(name + '-end', name + '-no-intp')),
]
states[name + '-end'] = [
(r'[a-zA-Z]+', token, '#pop'),
default('#pop'),
]
states[name + '-intp'] = [
(r'^\s*' + term, String.Heredoc, '#pop'),
include('heredoc_interpol'),
]
states[name + '-no-intp'] = [
(r'^\s*' + term, String.Heredoc, '#pop'),
include('heredoc_no_interpol'),
]
for lterm, rterm, name in terminators:
states['sigils'] += [
(r'~[a-z]' + lterm, token, name + '-intp'),
(r'~[A-Z]' + lterm, token, name + '-no-intp'),
]
states[name + '-intp'] = gen_elixir_sigstr_rules(rterm, token)
states[name + '-no-intp'] = \
gen_elixir_sigstr_rules(rterm, token, interpol=False)
return states
op3_re = "|".join(re.escape(s) for s in OPERATORS3)
op2_re = "|".join(re.escape(s) for s in OPERATORS2)
op1_re = "|".join(re.escape(s) for s in OPERATORS1)
ops_re = r'(?:%s|%s|%s)' % (op3_re, op2_re, op1_re)
punctuation_re = "|".join(re.escape(s) for s in PUNCTUATION)
alnum = '\w'
name_re = r'(?:\.\.\.|[a-z_]%s*[!?]?)' % alnum
modname_re = r'[A-Z]%(alnum)s*(?:\.[A-Z]%(alnum)s*)*' % {'alnum': alnum}
complex_name_re = r'(?:%s|%s|%s)' % (name_re, modname_re, ops_re)
special_atom_re = r'(?:\.\.\.|<<>>|%\{\}|%|\{\})'
long_hex_char_re = r'(\\x\{)([\da-fA-F]+)(\})'
hex_char_re = r'(\\x[\da-fA-F]{1,2})'
escape_char_re = r'(\\[abdefnrstv])'
tokens = {
'root': [
(r'\s+', Text),
(r'#.*$', Comment.Single),
# Various kinds of characters
(r'(\?)' + long_hex_char_re,
bygroups(String.Char,
String.Escape, Number.Hex, String.Escape)),
(r'(\?)' + hex_char_re,
bygroups(String.Char, String.Escape)),
(r'(\?)' + escape_char_re,
bygroups(String.Char, String.Escape)),
(r'\?\\?.', String.Char),
# '::' has to go before atoms
(r':::', String.Symbol),
(r'::', Operator),
# atoms
(r':' + special_atom_re, String.Symbol),
(r':' + complex_name_re, String.Symbol),
(r':"', String.Symbol, 'string_double_atom'),
(r":'", String.Symbol, 'string_single_atom'),
# [keywords: ...]
(r'(%s|%s)(:)(?=\s|\n)' % (special_atom_re, complex_name_re),
bygroups(String.Symbol, Punctuation)),
# @attributes
(r'@' + name_re, Name.Attribute),
# identifiers
(name_re, Name),
(r'(%%?)(%s)' % (modname_re,), bygroups(Punctuation, Name.Class)),
# operators and punctuation
(op3_re, Operator),
(op2_re, Operator),
(punctuation_re, Punctuation),
(r'&\d', Name.Entity), # anon func arguments
(op1_re, Operator),
# numbers
(r'0b[01]+', Number.Bin),
(r'0o[0-7]+', Number.Oct),
(r'0x[\da-fA-F]+', Number.Hex),
(r'\d(_?\d)*\.\d(_?\d)*([eE][-+]?\d(_?\d)*)?', Number.Float),
(r'\d(_?\d)*', Number.Integer),
# strings and heredocs
(r'"""\s*', String.Heredoc, 'heredoc_double'),
(r"'''\s*$", String.Heredoc, 'heredoc_single'),
(r'"', String.Double, 'string_double'),
(r"'", String.Single, 'string_single'),
include('sigils'),
(r'%\{', Punctuation, 'map_key'),
(r'\{', Punctuation, 'tuple'),
],
'heredoc_double': [
(r'^\s*"""', String.Heredoc, '#pop'),
include('heredoc_interpol'),
],
'heredoc_single': [
(r"^\s*'''", String.Heredoc, '#pop'),
include('heredoc_interpol'),
],
'heredoc_interpol': [
(r'[^#\\\n]+', String.Heredoc),
include('escapes'),
(r'\\.', String.Heredoc),
(r'\n+', String.Heredoc),
include('interpol'),
],
'heredoc_no_interpol': [
(r'[^\\\n]+', String.Heredoc),
(r'\\.', String.Heredoc),
(r'\n+', String.Heredoc),
],
'escapes': [
(long_hex_char_re,
bygroups(String.Escape, Number.Hex, String.Escape)),
(hex_char_re, String.Escape),
(escape_char_re, String.Escape),
],
'interpol': [
(r'#\{', String.Interpol, 'interpol_string'),
],
'interpol_string': [
(r'\}', String.Interpol, "#pop"),
include('root')
],
'map_key': [
include('root'),
(r':', Punctuation, 'map_val'),
(r'=>', Punctuation, 'map_val'),
(r'\}', Punctuation, '#pop'),
],
'map_val': [
include('root'),
(r',', Punctuation, '#pop'),
(r'(?=\})', Punctuation, '#pop'),
],
'tuple': [
include('root'),
(r'\}', Punctuation, '#pop'),
],
}
tokens.update(gen_elixir_string_rules('double', '"', String.Double))
tokens.update(gen_elixir_string_rules('single', "'", String.Single))
tokens.update(gen_elixir_string_rules('double_atom', '"', String.Symbol))
tokens.update(gen_elixir_string_rules('single_atom', "'", String.Symbol))
tokens.update(gen_elixir_sigil_rules())
class ElixirConsoleLexer(Lexer):
"""
For Elixir interactive console (iex) output like:
.. sourcecode:: iex
iex> [head | tail] = [1,2,3]
[1,2,3]
iex> head
1
iex> tail
[2,3]
iex> [head | tail]
[1,2,3]
iex> length [head | tail]
3
.. versionadded:: 1.5
"""
name = 'Elixir iex session'
aliases = ['iex']
mimetypes = ['text/x-elixir-shellsession']
_prompt_re = re.compile('(iex|\.{3})(\(\d+\))?> ')
def get_tokens_unprocessed(self, text):
exlexer = ElixirLexer(**self.options)
curcode = ''
in_error = False
insertions = []
for match in line_re.finditer(text):
line = match.group()
if line.startswith(u'** '):
in_error = True
insertions.append((len(curcode),
[(0, Generic.Error, line[:-1])]))
curcode += line[-1:]
else:
m = self._prompt_re.match(line)
if m is not None:
in_error = False
end = m.end()
insertions.append((len(curcode),
[(0, Generic.Prompt, line[:end])]))
curcode += line[end:]
else:
if curcode:
for item in do_insertions(
insertions, exlexer.get_tokens_unprocessed(curcode)):
yield item
curcode = ''
insertions = []
token = Generic.Error if in_error else Generic.Output
yield match.start(), token, line
if curcode:
for item in do_insertions(
insertions, exlexer.get_tokens_unprocessed(curcode)):
yield item
| gpl-3.0 | -6,485,025,965,252,877,000 | 34.606654 | 87 | 0.452872 | false | 3.51187 | false | false | false |
Arcanemagus/SickRage | lib/six.py | 172 | 30888 | # Copyright (c) 2010-2017 Benjamin Peterson
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
"""Utilities for writing code that runs on Python 2 and 3"""
from __future__ import absolute_import
import functools
import itertools
import operator
import sys
import types
__author__ = "Benjamin Peterson <[email protected]>"
__version__ = "1.11.0"
# Useful for very coarse version differentiation.
PY2 = sys.version_info[0] == 2
PY3 = sys.version_info[0] == 3
PY34 = sys.version_info[0:2] >= (3, 4)
if PY3:
string_types = str,
integer_types = int,
class_types = type,
text_type = str
binary_type = bytes
MAXSIZE = sys.maxsize
else:
string_types = basestring,
integer_types = (int, long)
class_types = (type, types.ClassType)
text_type = unicode
binary_type = str
if sys.platform.startswith("java"):
# Jython always uses 32 bits.
MAXSIZE = int((1 << 31) - 1)
else:
# It's possible to have sizeof(long) != sizeof(Py_ssize_t).
class X(object):
def __len__(self):
return 1 << 31
try:
len(X())
except OverflowError:
# 32-bit
MAXSIZE = int((1 << 31) - 1)
else:
# 64-bit
MAXSIZE = int((1 << 63) - 1)
del X
def _add_doc(func, doc):
"""Add documentation to a function."""
func.__doc__ = doc
def _import_module(name):
"""Import module, returning the module after the last dot."""
__import__(name)
return sys.modules[name]
class _LazyDescr(object):
def __init__(self, name):
self.name = name
def __get__(self, obj, tp):
result = self._resolve()
setattr(obj, self.name, result) # Invokes __set__.
try:
# This is a bit ugly, but it avoids running this again by
# removing this descriptor.
delattr(obj.__class__, self.name)
except AttributeError:
pass
return result
class MovedModule(_LazyDescr):
def __init__(self, name, old, new=None):
super(MovedModule, self).__init__(name)
if PY3:
if new is None:
new = name
self.mod = new
else:
self.mod = old
def _resolve(self):
return _import_module(self.mod)
def __getattr__(self, attr):
_module = self._resolve()
value = getattr(_module, attr)
setattr(self, attr, value)
return value
class _LazyModule(types.ModuleType):
def __init__(self, name):
super(_LazyModule, self).__init__(name)
self.__doc__ = self.__class__.__doc__
def __dir__(self):
attrs = ["__doc__", "__name__"]
attrs += [attr.name for attr in self._moved_attributes]
return attrs
# Subclasses should override this
_moved_attributes = []
class MovedAttribute(_LazyDescr):
def __init__(self, name, old_mod, new_mod, old_attr=None, new_attr=None):
super(MovedAttribute, self).__init__(name)
if PY3:
if new_mod is None:
new_mod = name
self.mod = new_mod
if new_attr is None:
if old_attr is None:
new_attr = name
else:
new_attr = old_attr
self.attr = new_attr
else:
self.mod = old_mod
if old_attr is None:
old_attr = name
self.attr = old_attr
def _resolve(self):
module = _import_module(self.mod)
return getattr(module, self.attr)
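# For illustration: the MovedAttribute("range", "__builtin__", "builtins",
# "xrange", "range") entry defined further down resolves six.moves.range to
# xrange on Python 2 and to the builtin range on Python 3 the first time it is
# accessed, after which _LazyDescr.__get__ caches the result on the moves
# module.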
class _SixMetaPathImporter(object):
"""
A meta path importer to import six.moves and its submodules.
This class implements a PEP302 finder and loader. It should be compatible
with Python 2.5 and all existing versions of Python3
"""
def __init__(self, six_module_name):
self.name = six_module_name
self.known_modules = {}
def _add_module(self, mod, *fullnames):
for fullname in fullnames:
self.known_modules[self.name + "." + fullname] = mod
def _get_module(self, fullname):
return self.known_modules[self.name + "." + fullname]
def find_module(self, fullname, path=None):
if fullname in self.known_modules:
return self
return None
def __get_module(self, fullname):
try:
return self.known_modules[fullname]
except KeyError:
raise ImportError("This loader does not know module " + fullname)
def load_module(self, fullname):
try:
# in case of a reload
return sys.modules[fullname]
except KeyError:
pass
mod = self.__get_module(fullname)
if isinstance(mod, MovedModule):
mod = mod._resolve()
else:
mod.__loader__ = self
sys.modules[fullname] = mod
return mod
def is_package(self, fullname):
"""
Return true, if the named module is a package.
We need this method to get correct spec objects with
Python 3.4 (see PEP451)
"""
return hasattr(self.__get_module(fullname), "__path__")
def get_code(self, fullname):
"""Return None
Required, if is_package is implemented"""
self.__get_module(fullname) # eventually raises ImportError
return None
get_source = get_code # same as get_code
_importer = _SixMetaPathImporter(__name__)
class _MovedItems(_LazyModule):
"""Lazy loading of moved objects"""
__path__ = [] # mark as package
_moved_attributes = [
MovedAttribute("cStringIO", "cStringIO", "io", "StringIO"),
MovedAttribute("filter", "itertools", "builtins", "ifilter", "filter"),
MovedAttribute("filterfalse", "itertools", "itertools", "ifilterfalse", "filterfalse"),
MovedAttribute("input", "__builtin__", "builtins", "raw_input", "input"),
MovedAttribute("intern", "__builtin__", "sys"),
MovedAttribute("map", "itertools", "builtins", "imap", "map"),
MovedAttribute("getcwd", "os", "os", "getcwdu", "getcwd"),
MovedAttribute("getcwdb", "os", "os", "getcwd", "getcwdb"),
MovedAttribute("getoutput", "commands", "subprocess"),
MovedAttribute("range", "__builtin__", "builtins", "xrange", "range"),
MovedAttribute("reload_module", "__builtin__", "importlib" if PY34 else "imp", "reload"),
MovedAttribute("reduce", "__builtin__", "functools"),
MovedAttribute("shlex_quote", "pipes", "shlex", "quote"),
MovedAttribute("StringIO", "StringIO", "io"),
MovedAttribute("UserDict", "UserDict", "collections"),
MovedAttribute("UserList", "UserList", "collections"),
MovedAttribute("UserString", "UserString", "collections"),
MovedAttribute("xrange", "__builtin__", "builtins", "xrange", "range"),
MovedAttribute("zip", "itertools", "builtins", "izip", "zip"),
MovedAttribute("zip_longest", "itertools", "itertools", "izip_longest", "zip_longest"),
MovedModule("builtins", "__builtin__"),
MovedModule("configparser", "ConfigParser"),
MovedModule("copyreg", "copy_reg"),
MovedModule("dbm_gnu", "gdbm", "dbm.gnu"),
MovedModule("_dummy_thread", "dummy_thread", "_dummy_thread"),
MovedModule("http_cookiejar", "cookielib", "http.cookiejar"),
MovedModule("http_cookies", "Cookie", "http.cookies"),
MovedModule("html_entities", "htmlentitydefs", "html.entities"),
MovedModule("html_parser", "HTMLParser", "html.parser"),
MovedModule("http_client", "httplib", "http.client"),
MovedModule("email_mime_base", "email.MIMEBase", "email.mime.base"),
MovedModule("email_mime_image", "email.MIMEImage", "email.mime.image"),
MovedModule("email_mime_multipart", "email.MIMEMultipart", "email.mime.multipart"),
MovedModule("email_mime_nonmultipart", "email.MIMENonMultipart", "email.mime.nonmultipart"),
MovedModule("email_mime_text", "email.MIMEText", "email.mime.text"),
MovedModule("BaseHTTPServer", "BaseHTTPServer", "http.server"),
MovedModule("CGIHTTPServer", "CGIHTTPServer", "http.server"),
MovedModule("SimpleHTTPServer", "SimpleHTTPServer", "http.server"),
MovedModule("cPickle", "cPickle", "pickle"),
MovedModule("queue", "Queue"),
MovedModule("reprlib", "repr"),
MovedModule("socketserver", "SocketServer"),
MovedModule("_thread", "thread", "_thread"),
MovedModule("tkinter", "Tkinter"),
MovedModule("tkinter_dialog", "Dialog", "tkinter.dialog"),
MovedModule("tkinter_filedialog", "FileDialog", "tkinter.filedialog"),
MovedModule("tkinter_scrolledtext", "ScrolledText", "tkinter.scrolledtext"),
MovedModule("tkinter_simpledialog", "SimpleDialog", "tkinter.simpledialog"),
MovedModule("tkinter_tix", "Tix", "tkinter.tix"),
MovedModule("tkinter_ttk", "ttk", "tkinter.ttk"),
MovedModule("tkinter_constants", "Tkconstants", "tkinter.constants"),
MovedModule("tkinter_dnd", "Tkdnd", "tkinter.dnd"),
MovedModule("tkinter_colorchooser", "tkColorChooser",
"tkinter.colorchooser"),
MovedModule("tkinter_commondialog", "tkCommonDialog",
"tkinter.commondialog"),
MovedModule("tkinter_tkfiledialog", "tkFileDialog", "tkinter.filedialog"),
MovedModule("tkinter_font", "tkFont", "tkinter.font"),
MovedModule("tkinter_messagebox", "tkMessageBox", "tkinter.messagebox"),
MovedModule("tkinter_tksimpledialog", "tkSimpleDialog",
"tkinter.simpledialog"),
MovedModule("urllib_parse", __name__ + ".moves.urllib_parse", "urllib.parse"),
MovedModule("urllib_error", __name__ + ".moves.urllib_error", "urllib.error"),
MovedModule("urllib", __name__ + ".moves.urllib", __name__ + ".moves.urllib"),
MovedModule("urllib_robotparser", "robotparser", "urllib.robotparser"),
MovedModule("xmlrpc_client", "xmlrpclib", "xmlrpc.client"),
MovedModule("xmlrpc_server", "SimpleXMLRPCServer", "xmlrpc.server"),
]
# Add windows specific modules.
if sys.platform == "win32":
_moved_attributes += [
MovedModule("winreg", "_winreg"),
]
for attr in _moved_attributes:
setattr(_MovedItems, attr.name, attr)
if isinstance(attr, MovedModule):
_importer._add_module(attr, "moves." + attr.name)
del attr
_MovedItems._moved_attributes = _moved_attributes
moves = _MovedItems(__name__ + ".moves")
_importer._add_module(moves, "moves")
class Module_six_moves_urllib_parse(_LazyModule):
"""Lazy loading of moved objects in six.moves.urllib_parse"""
_urllib_parse_moved_attributes = [
MovedAttribute("ParseResult", "urlparse", "urllib.parse"),
MovedAttribute("SplitResult", "urlparse", "urllib.parse"),
MovedAttribute("parse_qs", "urlparse", "urllib.parse"),
MovedAttribute("parse_qsl", "urlparse", "urllib.parse"),
MovedAttribute("urldefrag", "urlparse", "urllib.parse"),
MovedAttribute("urljoin", "urlparse", "urllib.parse"),
MovedAttribute("urlparse", "urlparse", "urllib.parse"),
MovedAttribute("urlsplit", "urlparse", "urllib.parse"),
MovedAttribute("urlunparse", "urlparse", "urllib.parse"),
MovedAttribute("urlunsplit", "urlparse", "urllib.parse"),
MovedAttribute("quote", "urllib", "urllib.parse"),
MovedAttribute("quote_plus", "urllib", "urllib.parse"),
MovedAttribute("unquote", "urllib", "urllib.parse"),
MovedAttribute("unquote_plus", "urllib", "urllib.parse"),
MovedAttribute("unquote_to_bytes", "urllib", "urllib.parse", "unquote", "unquote_to_bytes"),
MovedAttribute("urlencode", "urllib", "urllib.parse"),
MovedAttribute("splitquery", "urllib", "urllib.parse"),
MovedAttribute("splittag", "urllib", "urllib.parse"),
MovedAttribute("splituser", "urllib", "urllib.parse"),
MovedAttribute("splitvalue", "urllib", "urllib.parse"),
MovedAttribute("uses_fragment", "urlparse", "urllib.parse"),
MovedAttribute("uses_netloc", "urlparse", "urllib.parse"),
MovedAttribute("uses_params", "urlparse", "urllib.parse"),
MovedAttribute("uses_query", "urlparse", "urllib.parse"),
MovedAttribute("uses_relative", "urlparse", "urllib.parse"),
]
for attr in _urllib_parse_moved_attributes:
setattr(Module_six_moves_urllib_parse, attr.name, attr)
del attr
Module_six_moves_urllib_parse._moved_attributes = _urllib_parse_moved_attributes
_importer._add_module(Module_six_moves_urllib_parse(__name__ + ".moves.urllib_parse"),
"moves.urllib_parse", "moves.urllib.parse")
class Module_six_moves_urllib_error(_LazyModule):
"""Lazy loading of moved objects in six.moves.urllib_error"""
_urllib_error_moved_attributes = [
MovedAttribute("URLError", "urllib2", "urllib.error"),
MovedAttribute("HTTPError", "urllib2", "urllib.error"),
MovedAttribute("ContentTooShortError", "urllib", "urllib.error"),
]
for attr in _urllib_error_moved_attributes:
setattr(Module_six_moves_urllib_error, attr.name, attr)
del attr
Module_six_moves_urllib_error._moved_attributes = _urllib_error_moved_attributes
_importer._add_module(Module_six_moves_urllib_error(__name__ + ".moves.urllib.error"),
"moves.urllib_error", "moves.urllib.error")
class Module_six_moves_urllib_request(_LazyModule):
"""Lazy loading of moved objects in six.moves.urllib_request"""
_urllib_request_moved_attributes = [
MovedAttribute("urlopen", "urllib2", "urllib.request"),
MovedAttribute("install_opener", "urllib2", "urllib.request"),
MovedAttribute("build_opener", "urllib2", "urllib.request"),
MovedAttribute("pathname2url", "urllib", "urllib.request"),
MovedAttribute("url2pathname", "urllib", "urllib.request"),
MovedAttribute("getproxies", "urllib", "urllib.request"),
MovedAttribute("Request", "urllib2", "urllib.request"),
MovedAttribute("OpenerDirector", "urllib2", "urllib.request"),
MovedAttribute("HTTPDefaultErrorHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPRedirectHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPCookieProcessor", "urllib2", "urllib.request"),
MovedAttribute("ProxyHandler", "urllib2", "urllib.request"),
MovedAttribute("BaseHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPPasswordMgr", "urllib2", "urllib.request"),
MovedAttribute("HTTPPasswordMgrWithDefaultRealm", "urllib2", "urllib.request"),
MovedAttribute("AbstractBasicAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPBasicAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("ProxyBasicAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("AbstractDigestAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPDigestAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("ProxyDigestAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPSHandler", "urllib2", "urllib.request"),
MovedAttribute("FileHandler", "urllib2", "urllib.request"),
MovedAttribute("FTPHandler", "urllib2", "urllib.request"),
MovedAttribute("CacheFTPHandler", "urllib2", "urllib.request"),
MovedAttribute("UnknownHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPErrorProcessor", "urllib2", "urllib.request"),
MovedAttribute("urlretrieve", "urllib", "urllib.request"),
MovedAttribute("urlcleanup", "urllib", "urllib.request"),
MovedAttribute("URLopener", "urllib", "urllib.request"),
MovedAttribute("FancyURLopener", "urllib", "urllib.request"),
MovedAttribute("proxy_bypass", "urllib", "urllib.request"),
MovedAttribute("parse_http_list", "urllib2", "urllib.request"),
MovedAttribute("parse_keqv_list", "urllib2", "urllib.request"),
]
for attr in _urllib_request_moved_attributes:
setattr(Module_six_moves_urllib_request, attr.name, attr)
del attr
Module_six_moves_urllib_request._moved_attributes = _urllib_request_moved_attributes
_importer._add_module(Module_six_moves_urllib_request(__name__ + ".moves.urllib.request"),
"moves.urllib_request", "moves.urllib.request")
class Module_six_moves_urllib_response(_LazyModule):
"""Lazy loading of moved objects in six.moves.urllib_response"""
_urllib_response_moved_attributes = [
MovedAttribute("addbase", "urllib", "urllib.response"),
MovedAttribute("addclosehook", "urllib", "urllib.response"),
MovedAttribute("addinfo", "urllib", "urllib.response"),
MovedAttribute("addinfourl", "urllib", "urllib.response"),
]
for attr in _urllib_response_moved_attributes:
setattr(Module_six_moves_urllib_response, attr.name, attr)
del attr
Module_six_moves_urllib_response._moved_attributes = _urllib_response_moved_attributes
_importer._add_module(Module_six_moves_urllib_response(__name__ + ".moves.urllib.response"),
"moves.urllib_response", "moves.urllib.response")
class Module_six_moves_urllib_robotparser(_LazyModule):
"""Lazy loading of moved objects in six.moves.urllib_robotparser"""
_urllib_robotparser_moved_attributes = [
MovedAttribute("RobotFileParser", "robotparser", "urllib.robotparser"),
]
for attr in _urllib_robotparser_moved_attributes:
setattr(Module_six_moves_urllib_robotparser, attr.name, attr)
del attr
Module_six_moves_urllib_robotparser._moved_attributes = _urllib_robotparser_moved_attributes
_importer._add_module(Module_six_moves_urllib_robotparser(__name__ + ".moves.urllib.robotparser"),
"moves.urllib_robotparser", "moves.urllib.robotparser")
class Module_six_moves_urllib(types.ModuleType):
"""Create a six.moves.urllib namespace that resembles the Python 3 namespace"""
__path__ = [] # mark as package
parse = _importer._get_module("moves.urllib_parse")
error = _importer._get_module("moves.urllib_error")
request = _importer._get_module("moves.urllib_request")
response = _importer._get_module("moves.urllib_response")
robotparser = _importer._get_module("moves.urllib_robotparser")
def __dir__(self):
return ['parse', 'error', 'request', 'response', 'robotparser']
_importer._add_module(Module_six_moves_urllib(__name__ + ".moves.urllib"),
"moves.urllib")
def add_move(move):
"""Add an item to six.moves."""
setattr(_MovedItems, move.name, move)
def remove_move(name):
"""Remove item from six.moves."""
try:
delattr(_MovedItems, name)
except AttributeError:
try:
del moves.__dict__[name]
except KeyError:
raise AttributeError("no such move, %r" % (name,))
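# Hedged usage sketch (added commentary, not part of six itself): add_move()
# registers an extra name under six.moves at runtime and remove_move() deletes
# it again; the MovedModule below ("mock" mapping to unittest.mock on Python 3)
# is only an illustration.
#
#   import six
#   six.add_move(six.MovedModule("mock", "mock", "unittest.mock"))
#   from six.moves import mock   # `mock` on PY2, `unittest.mock` on PY3
#   six.remove_move("mock")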
if PY3:
_meth_func = "__func__"
_meth_self = "__self__"
_func_closure = "__closure__"
_func_code = "__code__"
_func_defaults = "__defaults__"
_func_globals = "__globals__"
else:
_meth_func = "im_func"
_meth_self = "im_self"
_func_closure = "func_closure"
_func_code = "func_code"
_func_defaults = "func_defaults"
_func_globals = "func_globals"
try:
advance_iterator = next
except NameError:
def advance_iterator(it):
return it.next()
next = advance_iterator
try:
callable = callable
except NameError:
def callable(obj):
return any("__call__" in klass.__dict__ for klass in type(obj).__mro__)
if PY3:
def get_unbound_function(unbound):
return unbound
create_bound_method = types.MethodType
def create_unbound_method(func, cls):
return func
Iterator = object
else:
def get_unbound_function(unbound):
return unbound.im_func
def create_bound_method(func, obj):
return types.MethodType(func, obj, obj.__class__)
def create_unbound_method(func, cls):
return types.MethodType(func, None, cls)
class Iterator(object):
def next(self):
return type(self).__next__(self)
callable = callable
_add_doc(get_unbound_function,
"""Get the function out of a possibly unbound function""")
get_method_function = operator.attrgetter(_meth_func)
get_method_self = operator.attrgetter(_meth_self)
get_function_closure = operator.attrgetter(_func_closure)
get_function_code = operator.attrgetter(_func_code)
get_function_defaults = operator.attrgetter(_func_defaults)
get_function_globals = operator.attrgetter(_func_globals)
if PY3:
def iterkeys(d, **kw):
return iter(d.keys(**kw))
def itervalues(d, **kw):
return iter(d.values(**kw))
def iteritems(d, **kw):
return iter(d.items(**kw))
def iterlists(d, **kw):
return iter(d.lists(**kw))
viewkeys = operator.methodcaller("keys")
viewvalues = operator.methodcaller("values")
viewitems = operator.methodcaller("items")
else:
def iterkeys(d, **kw):
return d.iterkeys(**kw)
def itervalues(d, **kw):
return d.itervalues(**kw)
def iteritems(d, **kw):
return d.iteritems(**kw)
def iterlists(d, **kw):
return d.iterlists(**kw)
viewkeys = operator.methodcaller("viewkeys")
viewvalues = operator.methodcaller("viewvalues")
viewitems = operator.methodcaller("viewitems")
_add_doc(iterkeys, "Return an iterator over the keys of a dictionary.")
_add_doc(itervalues, "Return an iterator over the values of a dictionary.")
_add_doc(iteritems,
"Return an iterator over the (key, value) pairs of a dictionary.")
_add_doc(iterlists,
"Return an iterator over the (key, [values]) pairs of a dictionary.")
if PY3:
def b(s):
return s.encode("latin-1")
def u(s):
return s
unichr = chr
import struct
int2byte = struct.Struct(">B").pack
del struct
byte2int = operator.itemgetter(0)
indexbytes = operator.getitem
iterbytes = iter
import io
StringIO = io.StringIO
BytesIO = io.BytesIO
_assertCountEqual = "assertCountEqual"
if sys.version_info[1] <= 1:
_assertRaisesRegex = "assertRaisesRegexp"
_assertRegex = "assertRegexpMatches"
else:
_assertRaisesRegex = "assertRaisesRegex"
_assertRegex = "assertRegex"
else:
def b(s):
return s
# Workaround for standalone backslash
def u(s):
return unicode(s.replace(r'\\', r'\\\\'), "unicode_escape")
unichr = unichr
int2byte = chr
def byte2int(bs):
return ord(bs[0])
def indexbytes(buf, i):
return ord(buf[i])
iterbytes = functools.partial(itertools.imap, ord)
import StringIO
StringIO = BytesIO = StringIO.StringIO
_assertCountEqual = "assertItemsEqual"
_assertRaisesRegex = "assertRaisesRegexp"
_assertRegex = "assertRegexpMatches"
_add_doc(b, """Byte literal""")
_add_doc(u, """Text literal""")
def assertCountEqual(self, *args, **kwargs):
return getattr(self, _assertCountEqual)(*args, **kwargs)
def assertRaisesRegex(self, *args, **kwargs):
return getattr(self, _assertRaisesRegex)(*args, **kwargs)
def assertRegex(self, *args, **kwargs):
return getattr(self, _assertRegex)(*args, **kwargs)
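# Hedged usage sketch (added commentary): the three wrappers above paper over
# the renamed unittest assertions; inside a unittest.TestCase method one would
# call, for example:
#
#   assertCountEqual(self, [1, 2, 2], [2, 1, 2])
#   assertRaisesRegex(self, ValueError, "invalid", int, "x")
#   assertRegex(self, "abc123", r"\d+")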
if PY3:
exec_ = getattr(moves.builtins, "exec")
def reraise(tp, value, tb=None):
try:
if value is None:
value = tp()
if value.__traceback__ is not tb:
raise value.with_traceback(tb)
raise value
finally:
value = None
tb = None
else:
def exec_(_code_, _globs_=None, _locs_=None):
"""Execute code in a namespace."""
if _globs_ is None:
frame = sys._getframe(1)
_globs_ = frame.f_globals
if _locs_ is None:
_locs_ = frame.f_locals
del frame
elif _locs_ is None:
_locs_ = _globs_
exec("""exec _code_ in _globs_, _locs_""")
exec_("""def reraise(tp, value, tb=None):
try:
raise tp, value, tb
finally:
tb = None
""")
if sys.version_info[:2] == (3, 2):
exec_("""def raise_from(value, from_value):
try:
if from_value is None:
raise value
raise value from from_value
finally:
value = None
""")
elif sys.version_info[:2] > (3, 2):
exec_("""def raise_from(value, from_value):
try:
raise value from from_value
finally:
value = None
""")
else:
def raise_from(value, from_value):
raise value
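# Hedged usage sketch (added commentary, `do_work` is hypothetical): reraise()
# re-raises an exception with an explicit traceback on both Python 2 and 3, and
# raise_from() mirrors Python 3's `raise ... from ...` chaining.
#
#   import sys
#   try:
#       do_work()
#   except Exception:
#       exc_type, exc_value, exc_tb = sys.exc_info()
#       reraise(exc_type, exc_value, exc_tb)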
print_ = getattr(moves.builtins, "print", None)
if print_ is None:
def print_(*args, **kwargs):
"""The new-style print function for Python 2.4 and 2.5."""
fp = kwargs.pop("file", sys.stdout)
if fp is None:
return
def write(data):
if not isinstance(data, basestring):
data = str(data)
# If the file has an encoding, encode unicode with it.
if (isinstance(fp, file) and
isinstance(data, unicode) and
fp.encoding is not None):
errors = getattr(fp, "errors", None)
if errors is None:
errors = "strict"
data = data.encode(fp.encoding, errors)
fp.write(data)
want_unicode = False
sep = kwargs.pop("sep", None)
if sep is not None:
if isinstance(sep, unicode):
want_unicode = True
elif not isinstance(sep, str):
raise TypeError("sep must be None or a string")
end = kwargs.pop("end", None)
if end is not None:
if isinstance(end, unicode):
want_unicode = True
elif not isinstance(end, str):
raise TypeError("end must be None or a string")
if kwargs:
raise TypeError("invalid keyword arguments to print()")
if not want_unicode:
for arg in args:
if isinstance(arg, unicode):
want_unicode = True
break
if want_unicode:
newline = unicode("\n")
space = unicode(" ")
else:
newline = "\n"
space = " "
if sep is None:
sep = space
if end is None:
end = newline
for i, arg in enumerate(args):
if i:
write(sep)
write(arg)
write(end)
if sys.version_info[:2] < (3, 3):
_print = print_
def print_(*args, **kwargs):
fp = kwargs.get("file", sys.stdout)
flush = kwargs.pop("flush", False)
_print(*args, **kwargs)
if flush and fp is not None:
fp.flush()
_add_doc(reraise, """Reraise an exception.""")
if sys.version_info[0:2] < (3, 4):
def wraps(wrapped, assigned=functools.WRAPPER_ASSIGNMENTS,
updated=functools.WRAPPER_UPDATES):
def wrapper(f):
f = functools.wraps(wrapped, assigned, updated)(f)
f.__wrapped__ = wrapped
return f
return wrapper
else:
wraps = functools.wraps
def with_metaclass(meta, *bases):
"""Create a base class with a metaclass."""
# This requires a bit of explanation: the basic idea is to make a dummy
# metaclass for one level of class instantiation that replaces itself with
# the actual metaclass.
class metaclass(type):
def __new__(cls, name, this_bases, d):
return meta(name, bases, d)
@classmethod
def __prepare__(cls, name, this_bases):
return meta.__prepare__(name, bases)
return type.__new__(metaclass, 'temporary_class', (), {})
def add_metaclass(metaclass):
"""Class decorator for creating a class with a metaclass."""
def wrapper(cls):
orig_vars = cls.__dict__.copy()
slots = orig_vars.get('__slots__')
if slots is not None:
if isinstance(slots, str):
slots = [slots]
for slots_var in slots:
orig_vars.pop(slots_var)
orig_vars.pop('__dict__', None)
orig_vars.pop('__weakref__', None)
return metaclass(cls.__name__, cls.__bases__, orig_vars)
return wrapper
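# Hedged usage sketch (illustrative names, added commentary): with_metaclass()
# goes in the base-class list, add_metaclass() is applied as a decorator; both
# attach the metaclass `Meta` in a way that works on Python 2 and 3.
#
#   class Meta(type):
#       pass
#
#   class WithMeta(with_metaclass(Meta, object)):
#       pass
#
#   @add_metaclass(Meta)
#   class AlsoWithMeta(object):
#       pass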
def python_2_unicode_compatible(klass):
"""
A decorator that defines __unicode__ and __str__ methods under Python 2.
Under Python 3 it does nothing.
To support Python 2 and 3 with a single code base, define a __str__ method
returning text and apply this decorator to the class.
"""
if PY2:
if '__str__' not in klass.__dict__:
raise ValueError("@python_2_unicode_compatible cannot be applied "
"to %s because it doesn't define __str__()." %
klass.__name__)
klass.__unicode__ = klass.__str__
klass.__str__ = lambda self: self.__unicode__().encode('utf-8')
return klass
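# Hedged usage sketch (illustrative class, added commentary): define __str__ to
# return text and apply the decorator; under Python 2 it derives __unicode__
# from __str__ and makes __str__ return UTF-8 encoded bytes.
#
#   @python_2_unicode_compatible
#   class Greeting(object):
#       def __str__(self):
#           return u"hello"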
# Complete the moves implementation.
# This code is at the end of this module to speed up module loading.
# Turn this module into a package.
__path__ = [] # required for PEP 302 and PEP 451
__package__ = __name__ # see PEP 366 @ReservedAssignment
if globals().get("__spec__") is not None:
__spec__.submodule_search_locations = [] # PEP 451 @UndefinedVariable
# Remove other six meta path importers, since they cause problems. This can
# happen if six is removed from sys.modules and then reloaded. (Setuptools does
# this for some reason.)
if sys.meta_path:
for i, importer in enumerate(sys.meta_path):
# Here's some real nastiness: Another "instance" of the six module might
# be floating around. Therefore, we can't use isinstance() to check for
# the six meta path importer, since the other six instance will have
        # inserted an importer with a different class.
if (type(importer).__name__ == "_SixMetaPathImporter" and
importer.name == __name__):
del sys.meta_path[i]
break
del i, importer
# Finally, add the importer to the meta path import hook.
sys.meta_path.append(_importer)
| gpl-3.0 | 3,833,100,061,493,608,400 | 33.666667 | 98 | 0.629468 | false | 4.021875 | false | false | false |
LePastis/pyload | module/gui/MainWindow.py | 41 | 30215 | # -*- coding: utf-8 -*-
"""
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 3 of the License,
or (at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, see <http://www.gnu.org/licenses/>.
@author: mkaay
"""
from PyQt4.QtCore import *
from PyQt4.QtGui import *
from os.path import join
from module.gui.PackageDock import *
from module.gui.LinkDock import *
from module.gui.CaptchaDock import CaptchaDock
from module.gui.SettingsWidget import SettingsWidget
from module.gui.Collector import CollectorView, Package, Link
from module.gui.Queue import QueueView
from module.gui.Overview import OverviewView
from module.gui.Accounts import AccountView
from module.gui.AccountEdit import AccountEdit
from module.remote.thriftbackend.ThriftClient import AccountInfo
class MainWindow(QMainWindow):
def __init__(self, connector):
"""
set up main window
"""
QMainWindow.__init__(self)
#window stuff
self.setWindowTitle(_("pyLoad Client"))
self.setWindowIcon(QIcon(join(pypath, "icons","logo.png")))
self.resize(1000,600)
#layout version
self.version = 3
#init docks
self.newPackDock = NewPackageDock()
self.addDockWidget(Qt.RightDockWidgetArea, self.newPackDock)
self.connect(self.newPackDock, SIGNAL("done"), self.slotAddPackage)
self.captchaDock = CaptchaDock()
self.addDockWidget(Qt.BottomDockWidgetArea, self.captchaDock)
self.newLinkDock = NewLinkDock()
self.addDockWidget(Qt.RightDockWidgetArea, self.newLinkDock)
self.connect(self.newLinkDock, SIGNAL("done"), self.slotAddLinksToPackage)
#central widget, layout
self.masterlayout = QVBoxLayout()
lw = QWidget()
lw.setLayout(self.masterlayout)
self.setCentralWidget(lw)
#status
self.statusw = QFrame()
self.statusw.setFrameStyle(QFrame.StyledPanel | QFrame.Raised)
self.statusw.setLineWidth(2)
self.statusw.setLayout(QGridLayout())
#palette = self.statusw.palette()
#palette.setColor(QPalette.Window, QColor(255, 255, 255))
#self.statusw.setPalette(palette)
#self.statusw.setAutoFillBackground(True)
l = self.statusw.layout()
class BoldLabel(QLabel):
def __init__(self, text):
QLabel.__init__(self, text)
f = self.font()
f.setBold(True)
self.setFont(f)
self.setAlignment(Qt.AlignRight)
class Seperator(QFrame):
def __init__(self):
QFrame.__init__(self)
self.setFrameShape(QFrame.VLine)
self.setFrameShadow(QFrame.Sunken)
l.addWidget(BoldLabel(_("Packages:")), 0, 0)
self.packageCount = QLabel("0")
l.addWidget(self.packageCount, 0, 1)
l.addWidget(BoldLabel(_("Files:")), 0, 2)
self.fileCount = QLabel("0")
l.addWidget(self.fileCount, 0, 3)
l.addWidget(BoldLabel(_("Status:")), 0, 4)
self.status = QLabel("running")
l.addWidget(self.status, 0, 5)
l.addWidget(BoldLabel(_("Space:")), 0, 6)
self.space = QLabel("")
l.addWidget(self.space, 0, 7)
l.addWidget(BoldLabel(_("Speed:")), 0, 8)
self.speed = QLabel("")
l.addWidget(self.speed, 0, 9)
#l.addWidget(BoldLabel(_("Max. downloads:")), 0, 9)
#l.addWidget(BoldLabel(_("Max. chunks:")), 1, 9)
#self.maxDownloads = QSpinBox()
#self.maxDownloads.setEnabled(False)
#self.maxChunks = QSpinBox()
#self.maxChunks.setEnabled(False)
#l.addWidget(self.maxDownloads, 0, 10)
#l.addWidget(self.maxChunks, 1, 10)
#set menubar and statusbar
self.menubar = self.menuBar()
#self.statusbar = self.statusBar()
#self.connect(self.statusbar, SIGNAL("showMsg"), self.statusbar.showMessage)
#self.serverStatus = QLabel(_("Status: Not Connected"))
#self.statusbar.addPermanentWidget(self.serverStatus)
#menu
self.menus = {"file": self.menubar.addMenu(_("File")),
"connections": self.menubar.addMenu(_("Connections"))}
#menu actions
self.mactions = {"exit": QAction(_("Exit"), self.menus["file"]),
"manager": QAction(_("Connection manager"), self.menus["connections"])}
#add menu actions
self.menus["file"].addAction(self.mactions["exit"])
self.menus["connections"].addAction(self.mactions["manager"])
#toolbar
self.actions = {}
self.init_toolbar()
#tabs
self.tabw = QTabWidget()
self.tabs = {"overview": {"w": QWidget()},
"queue": {"w": QWidget()},
"collector": {"w": QWidget()},
"accounts": {"w": QWidget()},
"settings": {}}
#self.tabs["settings"]["s"] = QScrollArea()
self.tabs["settings"]["w"] = SettingsWidget()
#self.tabs["settings"]["s"].setWidgetResizable(True)
#self.tabs["settings"]["s"].setWidget(self.tabs["settings"]["w"])
self.tabs["log"] = {"w":QWidget()}
self.tabw.addTab(self.tabs["overview"]["w"], _("Overview"))
self.tabw.addTab(self.tabs["queue"]["w"], _("Queue"))
self.tabw.addTab(self.tabs["collector"]["w"], _("Collector"))
self.tabw.addTab(self.tabs["accounts"]["w"], _("Accounts"))
self.tabw.addTab(self.tabs["settings"]["w"], _("Settings"))
self.tabw.addTab(self.tabs["log"]["w"], _("Log"))
#init tabs
self.init_tabs(connector)
#context menus
self.init_context()
#layout
self.masterlayout.addWidget(self.tabw)
self.masterlayout.addWidget(self.statusw)
#signals..
self.connect(self.mactions["manager"], SIGNAL("triggered()"), self.slotShowConnector)
self.connect(self.tabs["queue"]["view"], SIGNAL('customContextMenuRequested(const QPoint &)'), self.slotQueueContextMenu)
self.connect(self.tabs["collector"]["package_view"], SIGNAL('customContextMenuRequested(const QPoint &)'), self.slotCollectorContextMenu)
self.connect(self.tabs["accounts"]["view"], SIGNAL('customContextMenuRequested(const QPoint &)'), self.slotAccountContextMenu)
self.connect(self.tabw, SIGNAL("currentChanged(int)"), self.slotTabChanged)
self.lastAddedID = None
self.connector = connector
def init_toolbar(self):
"""
create toolbar
"""
self.toolbar = self.addToolBar(_("Hide Toolbar"))
self.toolbar.setObjectName("Main Toolbar")
self.toolbar.setIconSize(QSize(30,30))
self.toolbar.setMovable(False)
self.actions["toggle_status"] = self.toolbar.addAction(_("Toggle Pause/Resume"))
pricon = QIcon()
pricon.addFile(join(pypath, "icons","toolbar_start.png"), QSize(), QIcon.Normal, QIcon.Off)
pricon.addFile(join(pypath, "icons","toolbar_pause.png"), QSize(), QIcon.Normal, QIcon.On)
self.actions["toggle_status"].setIcon(pricon)
self.actions["toggle_status"].setCheckable(True)
self.actions["status_stop"] = self.toolbar.addAction(QIcon(join(pypath, "icons","toolbar_stop.png")), _("Stop"))
self.toolbar.addSeparator()
self.actions["add"] = self.toolbar.addAction(QIcon(join(pypath, "icons","toolbar_add.png")), _("Add"))
self.toolbar.addSeparator()
self.actions["clipboard"] = self.toolbar.addAction(QIcon(join(pypath, "icons","clipboard.png")), _("Check Clipboard"))
self.actions["clipboard"].setCheckable(True)
self.connect(self.actions["toggle_status"], SIGNAL("toggled(bool)"), self.slotToggleStatus)
self.connect(self.actions["clipboard"], SIGNAL("toggled(bool)"), self.slotToggleClipboard)
self.connect(self.actions["status_stop"], SIGNAL("triggered()"), self.slotStatusStop)
self.addMenu = QMenu()
packageAction = self.addMenu.addAction(_("Package"))
containerAction = self.addMenu.addAction(_("Container"))
accountAction = self.addMenu.addAction(_("Account"))
linksAction = self.addMenu.addAction(_("Links"))
self.connect(self.actions["add"], SIGNAL("triggered()"), self.slotAdd)
self.connect(packageAction, SIGNAL("triggered()"), self.slotShowAddPackage)
self.connect(containerAction, SIGNAL("triggered()"), self.slotShowAddContainer)
self.connect(accountAction, SIGNAL("triggered()"), self.slotNewAccount)
self.connect(linksAction, SIGNAL("triggered()"), self.slotShowAddLinks)
def init_tabs(self, connector):
"""
create tabs
"""
#overview
self.tabs["overview"]["l"] = QGridLayout()
self.tabs["overview"]["w"].setLayout(self.tabs["overview"]["l"])
self.tabs["overview"]["view"] = OverviewView(connector)
self.tabs["overview"]["l"].addWidget(self.tabs["overview"]["view"])
#queue
self.tabs["queue"]["l"] = QGridLayout()
self.tabs["queue"]["w"].setLayout(self.tabs["queue"]["l"])
self.tabs["queue"]["view"] = QueueView(connector)
self.tabs["queue"]["l"].addWidget(self.tabs["queue"]["view"])
#collector
toQueue = QPushButton(_("Push selected packages to queue"))
self.tabs["collector"]["l"] = QGridLayout()
self.tabs["collector"]["w"].setLayout(self.tabs["collector"]["l"])
self.tabs["collector"]["package_view"] = CollectorView(connector)
self.tabs["collector"]["l"].addWidget(self.tabs["collector"]["package_view"], 0, 0)
self.tabs["collector"]["l"].addWidget(toQueue, 1, 0)
self.connect(toQueue, SIGNAL("clicked()"), self.slotPushPackageToQueue)
self.tabs["collector"]["package_view"].setContextMenuPolicy(Qt.CustomContextMenu)
self.tabs["queue"]["view"].setContextMenuPolicy(Qt.CustomContextMenu)
#log
self.tabs["log"]["l"] = QGridLayout()
self.tabs["log"]["w"].setLayout(self.tabs["log"]["l"])
self.tabs["log"]["text"] = QTextEdit()
self.tabs["log"]["text"].logOffset = 0
self.tabs["log"]["text"].setReadOnly(True)
self.connect(self.tabs["log"]["text"], SIGNAL("append(QString)"), self.tabs["log"]["text"].append)
self.tabs["log"]["l"].addWidget(self.tabs["log"]["text"])
#accounts
self.tabs["accounts"]["view"] = AccountView(connector)
self.tabs["accounts"]["w"].setLayout(QVBoxLayout())
self.tabs["accounts"]["w"].layout().addWidget(self.tabs["accounts"]["view"])
newbutton = QPushButton(_("New Account"))
self.tabs["accounts"]["w"].layout().addWidget(newbutton)
self.connect(newbutton, SIGNAL("clicked()"), self.slotNewAccount)
self.tabs["accounts"]["view"].setContextMenuPolicy(Qt.CustomContextMenu)
def init_context(self):
"""
create context menus
"""
self.activeMenu = None
#queue
self.queueContext = QMenu()
self.queueContext.buttons = {}
self.queueContext.item = (None, None)
self.queueContext.buttons["remove"] = QAction(QIcon(join(pypath, "icons","remove_small.png")), _("Remove"), self.queueContext)
self.queueContext.buttons["restart"] = QAction(QIcon(join(pypath, "icons","refresh_small.png")), _("Restart"), self.queueContext)
self.queueContext.buttons["pull"] = QAction(QIcon(join(pypath, "icons","pull_small.png")), _("Pull out"), self.queueContext)
self.queueContext.buttons["abort"] = QAction(QIcon(join(pypath, "icons","abort.png")), _("Abort"), self.queueContext)
self.queueContext.buttons["edit"] = QAction(QIcon(join(pypath, "icons","edit_small.png")), _("Edit Name"), self.queueContext)
self.queueContext.addAction(self.queueContext.buttons["pull"])
self.queueContext.addAction(self.queueContext.buttons["edit"])
self.queueContext.addAction(self.queueContext.buttons["remove"])
self.queueContext.addAction(self.queueContext.buttons["restart"])
self.queueContext.addAction(self.queueContext.buttons["abort"])
self.connect(self.queueContext.buttons["remove"], SIGNAL("triggered()"), self.slotRemoveDownload)
self.connect(self.queueContext.buttons["restart"], SIGNAL("triggered()"), self.slotRestartDownload)
self.connect(self.queueContext.buttons["pull"], SIGNAL("triggered()"), self.slotPullOutPackage)
self.connect(self.queueContext.buttons["abort"], SIGNAL("triggered()"), self.slotAbortDownload)
self.connect(self.queueContext.buttons["edit"], SIGNAL("triggered()"), self.slotEditPackage)
#collector
self.collectorContext = QMenu()
self.collectorContext.buttons = {}
self.collectorContext.item = (None, None)
self.collectorContext.buttons["remove"] = QAction(QIcon(join(pypath, "icons","remove_small.png")), _("Remove"), self.collectorContext)
self.collectorContext.buttons["push"] = QAction(QIcon(join(pypath, "icons","push_small.png")), _("Push to queue"), self.collectorContext)
self.collectorContext.buttons["edit"] = QAction(QIcon(join(pypath, "icons","edit_small.png")), _("Edit Name"), self.collectorContext)
self.collectorContext.buttons["restart"] = QAction(QIcon(join(pypath, "icons","refresh_small.png")), _("Restart"), self.collectorContext)
self.collectorContext.buttons["refresh"] = QAction(QIcon(join(pypath, "icons","refresh1_small.png")),_("Refresh Status"), self.collectorContext)
self.collectorContext.addAction(self.collectorContext.buttons["push"])
self.collectorContext.addSeparator()
self.collectorContext.buttons["add"] = self.collectorContext.addMenu(QIcon(join(pypath, "icons","add_small.png")), _("Add"))
self.collectorContext.addAction(self.collectorContext.buttons["edit"])
self.collectorContext.addAction(self.collectorContext.buttons["remove"])
self.collectorContext.addAction(self.collectorContext.buttons["restart"])
self.collectorContext.addSeparator()
self.collectorContext.addAction(self.collectorContext.buttons["refresh"])
packageAction = self.collectorContext.buttons["add"].addAction(_("Package"))
containerAction = self.collectorContext.buttons["add"].addAction(_("Container"))
linkAction = self.collectorContext.buttons["add"].addAction(_("Links"))
self.connect(self.collectorContext.buttons["remove"], SIGNAL("triggered()"), self.slotRemoveDownload)
self.connect(self.collectorContext.buttons["push"], SIGNAL("triggered()"), self.slotPushPackageToQueue)
self.connect(self.collectorContext.buttons["edit"], SIGNAL("triggered()"), self.slotEditPackage)
self.connect(self.collectorContext.buttons["restart"], SIGNAL("triggered()"), self.slotRestartDownload)
self.connect(self.collectorContext.buttons["refresh"], SIGNAL("triggered()"), self.slotRefreshPackage)
self.connect(packageAction, SIGNAL("triggered()"), self.slotShowAddPackage)
self.connect(containerAction, SIGNAL("triggered()"), self.slotShowAddContainer)
self.connect(linkAction, SIGNAL("triggered()"), self.slotShowAddLinks)
self.accountContext = QMenu()
self.accountContext.buttons = {}
self.accountContext.buttons["add"] = QAction(QIcon(join(pypath, "icons","add_small.png")), _("Add"), self.accountContext)
self.accountContext.buttons["remove"] = QAction(QIcon(join(pypath, "icons","remove_small.png")), _("Remove"), self.accountContext)
self.accountContext.buttons["edit"] = QAction(QIcon(join(pypath, "icons","edit_small.png")), _("Edit"), self.accountContext)
self.accountContext.addAction(self.accountContext.buttons["add"])
self.accountContext.addAction(self.accountContext.buttons["edit"])
self.accountContext.addAction(self.accountContext.buttons["remove"])
self.connect(self.accountContext.buttons["add"], SIGNAL("triggered()"), self.slotNewAccount)
self.connect(self.accountContext.buttons["edit"], SIGNAL("triggered()"), self.slotEditAccount)
self.connect(self.accountContext.buttons["remove"], SIGNAL("triggered()"), self.slotRemoveAccount)
def slotToggleStatus(self, status):
"""
pause/start toggle (toolbar)
"""
self.emit(SIGNAL("setDownloadStatus"), status)
def slotStatusStop(self):
"""
stop button (toolbar)
"""
self.emit(SIGNAL("stopAllDownloads"))
def slotAdd(self):
"""
add button (toolbar)
show context menu (choice: links/package)
"""
self.addMenu.exec_(QCursor.pos())
def slotShowAddPackage(self):
"""
action from add-menu
show new-package dock
"""
self.tabw.setCurrentIndex(1)
self.newPackDock.show()
def slotShowAddLinks(self):
"""
action from add-menu
show new-links dock
"""
self.tabw.setCurrentIndex(1)
self.newLinkDock.show()
def slotShowConnector(self):
"""
        connection manager action triggered
        let main do the work
"""
self.emit(SIGNAL("connector"))
def slotAddPackage(self, name, links, password=None):
"""
new package
        let main do the work
"""
self.emit(SIGNAL("addPackage"), name, links, password)
def slotAddLinksToPackage(self, links):
"""
adds links to currently selected package
only in collector
"""
if self.tabw.currentIndex() != 1:
return
smodel = self.tabs["collector"]["package_view"].selectionModel()
for index in smodel.selectedRows(0):
item = index.internalPointer()
if isinstance(item, Package):
self.connector.proxy.addFiles(item.id, links)
break
def slotShowAddContainer(self):
"""
action from add-menu
show file selector, emit upload
"""
typeStr = ";;".join([
_("All Container Types (%s)") % "*.dlc *.ccf *.rsdf *.txt",
_("DLC (%s)") % "*.dlc",
_("CCF (%s)") % "*.ccf",
_("RSDF (%s)") % "*.rsdf",
_("Text Files (%s)") % "*.txt"
])
fileNames = QFileDialog.getOpenFileNames(self, _("Open container"), "", typeStr)
for name in fileNames:
self.emit(SIGNAL("addContainer"), str(name))
def slotPushPackageToQueue(self):
"""
push collector pack to queue
get child ids
        let main do the rest
"""
smodel = self.tabs["collector"]["package_view"].selectionModel()
for index in smodel.selectedRows(0):
item = index.internalPointer()
if isinstance(item, Package):
self.emit(SIGNAL("pushPackageToQueue"), item.id)
else:
self.emit(SIGNAL("pushPackageToQueue"), item.package.id)
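    # Illustrative sketch (added commentary, not pyLoad code): the slots above do
    # not talk to the server directly; they emit old-style Qt SIGNALs that the
    # controller is expected to connect, roughly like this (`main` is a
    # hypothetical controller object):
    #
    #   main.connect(mainWindow, SIGNAL("pushPackageToQueue"),
    #                main.slotPushPackageToQueue)
    #   main.connect(mainWindow, SIGNAL("addPackage"), main.slotAddPackage)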
def saveWindow(self):
"""
get window state/geometry
pass data to main
"""
state_raw = self.saveState(self.version)
geo_raw = self.saveGeometry()
state = str(state_raw.toBase64())
geo = str(geo_raw.toBase64())
self.emit(SIGNAL("saveMainWindow"), state, geo)
def closeEvent(self, event):
"""
somebody wants to close me!
let me first save my state
"""
self.saveWindow()
event.ignore()
self.hide()
self.emit(SIGNAL("hidden"))
# quit when no tray is available
if not QSystemTrayIcon.isSystemTrayAvailable():
self.emit(SIGNAL("Quit"))
def restoreWindow(self, state, geo):
"""
restore window state/geometry
"""
state = QByteArray(state)
geo = QByteArray(geo)
state_raw = QByteArray.fromBase64(state)
geo_raw = QByteArray.fromBase64(geo)
self.restoreState(state_raw, self.version)
self.restoreGeometry(geo_raw)
def slotQueueContextMenu(self, pos):
"""
custom context menu in queue view requested
"""
globalPos = self.tabs["queue"]["view"].mapToGlobal(pos)
i = self.tabs["queue"]["view"].indexAt(pos)
if not i:
return
item = i.internalPointer()
menuPos = QCursor.pos()
menuPos.setX(menuPos.x()+2)
self.activeMenu = self.queueContext
showAbort = False
if isinstance(item, Link) and item.data["downloading"]:
showAbort = True
elif isinstance(item, Package):
for child in item.children:
if child.data["downloading"]:
showAbort = True
break
if showAbort:
self.queueContext.buttons["abort"].setEnabled(True)
else:
self.queueContext.buttons["abort"].setEnabled(False)
if isinstance(item, Package):
self.queueContext.index = i
#self.queueContext.buttons["remove"].setEnabled(True)
#self.queueContext.buttons["restart"].setEnabled(True)
self.queueContext.buttons["pull"].setEnabled(True)
self.queueContext.buttons["edit"].setEnabled(True)
elif isinstance(item, Link):
self.collectorContext.index = i
self.collectorContext.buttons["edit"].setEnabled(False)
self.collectorContext.buttons["remove"].setEnabled(True)
self.collectorContext.buttons["push"].setEnabled(False)
self.collectorContext.buttons["restart"].setEnabled(True)
else:
self.queueContext.index = None
#self.queueContext.buttons["remove"].setEnabled(False)
#self.queueContext.buttons["restart"].setEnabled(False)
self.queueContext.buttons["pull"].setEnabled(False)
self.queueContext.buttons["edit"].setEnabled(False)
self.queueContext.exec_(menuPos)
def slotCollectorContextMenu(self, pos):
"""
custom context menu in package collector view requested
"""
globalPos = self.tabs["collector"]["package_view"].mapToGlobal(pos)
i = self.tabs["collector"]["package_view"].indexAt(pos)
if not i:
return
item = i.internalPointer()
menuPos = QCursor.pos()
menuPos.setX(menuPos.x()+2)
self.activeMenu = self.collectorContext
if isinstance(item, Package):
self.collectorContext.index = i
self.collectorContext.buttons["edit"].setEnabled(True)
self.collectorContext.buttons["remove"].setEnabled(True)
self.collectorContext.buttons["push"].setEnabled(True)
self.collectorContext.buttons["restart"].setEnabled(True)
elif isinstance(item, Link):
self.collectorContext.index = i
self.collectorContext.buttons["edit"].setEnabled(False)
self.collectorContext.buttons["remove"].setEnabled(True)
self.collectorContext.buttons["push"].setEnabled(False)
self.collectorContext.buttons["restart"].setEnabled(True)
else:
self.collectorContext.index = None
self.collectorContext.buttons["edit"].setEnabled(False)
self.collectorContext.buttons["remove"].setEnabled(False)
self.collectorContext.buttons["push"].setEnabled(False)
self.collectorContext.buttons["restart"].setEnabled(False)
self.collectorContext.exec_(menuPos)
def slotLinkCollectorContextMenu(self, pos):
"""
custom context menu in link collector view requested
"""
pass
def slotRestartDownload(self):
"""
restart download action is triggered
"""
smodel = self.tabs["queue"]["view"].selectionModel()
for index in smodel.selectedRows(0):
item = index.internalPointer()
self.emit(SIGNAL("restartDownload"), item.id, isinstance(item, Package))
def slotRemoveDownload(self):
"""
remove download action is triggered
"""
if self.activeMenu == self.queueContext:
view = self.tabs["queue"]["view"]
else:
view = self.tabs["collector"]["package_view"]
smodel = view.selectionModel()
for index in smodel.selectedRows(0):
item = index.internalPointer()
self.emit(SIGNAL("removeDownload"), item.id, isinstance(item, Package))
def slotToggleClipboard(self, status):
"""
check clipboard (toolbar)
"""
self.emit(SIGNAL("setClipboardStatus"), status)
def slotEditPackage(self):
# in Queue, only edit name
if self.activeMenu == self.queueContext:
view = self.tabs["queue"]["view"]
else:
view = self.tabs["collector"]["package_view"]
view.edit(self.activeMenu.index)
def slotEditCommit(self, editor):
self.emit(SIGNAL("changePackageName"), self.activeMenu.index.internalPointer().id, editor.text())
def slotPullOutPackage(self):
"""
pull package out of the queue
"""
smodel = self.tabs["queue"]["view"].selectionModel()
for index in smodel.selectedRows(0):
item = index.internalPointer()
if isinstance(item, Package):
self.emit(SIGNAL("pullOutPackage"), item.id)
else:
self.emit(SIGNAL("pullOutPackage"), item.package.id)
def slotAbortDownload(self):
view = self.tabs["queue"]["view"]
smodel = view.selectionModel()
for index in smodel.selectedRows(0):
item = index.internalPointer()
self.emit(SIGNAL("abortDownload"), item.id, isinstance(item, Package))
    # TODO disabled because the main window disappears when switching desktops on Linux
#def changeEvent(self, e):
# if e.type() == QEvent.WindowStateChange and self.isMinimized():
# e.ignore()
# self.hide()
# self.emit(SIGNAL("hidden"))
# else:
# super(MainWindow, self).changeEvent(e)
def slotTabChanged(self, index):
if index == 2:
self.emit(SIGNAL("reloadAccounts"))
elif index == 3:
self.tabs["settings"]["w"].loadConfig()
def slotRefreshPackage(self):
smodel = self.tabs["collector"]["package_view"].selectionModel()
for index in smodel.selectedRows(0):
item = index.internalPointer()
pid = item.id
if isinstance(item, Link):
pid = item.package.id
self.emit(SIGNAL("refreshStatus"), pid)
def slotNewAccount(self):
types = self.connector.proxy.getAccountTypes()
self.accountEdit = AccountEdit.newAccount(types)
        #TODO simplify the n1, n2, n3 handling
def save(data):
if data["password"]:
self.accountEdit.close()
n1 = data["acctype"]
n2 = data["login"]
n3 = data["password"]
self.connector.updateAccount(n1, n2, n3, None)
self.accountEdit.connect(self.accountEdit, SIGNAL("done"), save)
self.accountEdit.show()
def slotEditAccount(self):
types = self.connector.getAccountTypes()
data = self.tabs["accounts"]["view"].selectedIndexes()
if len(data) < 1:
return
data = data[0].internalPointer()
self.accountEdit = AccountEdit.editAccount(types, data)
        #TODO simplify the n1, n2, n3 handling
        #TODO reload the accounts tab after inserting or editing an account
        #TODO show an error if the account does not exist
def save(data):
self.accountEdit.close()
n1 = data["acctype"]
n2 = data["login"]
if data["password"]:
n3 = data["password"]
self.connector.updateAccount(n1, n2, n3, None)
self.accountEdit.connect(self.accountEdit, SIGNAL("done"), save)
self.accountEdit.show()
def slotRemoveAccount(self):
data = self.tabs["accounts"]["view"].selectedIndexes()
if len(data) < 1:
return
data = data[0].internalPointer()
self.connector.removeAccount(data.type, data.login)
def slotAccountContextMenu(self, pos):
globalPos = self.tabs["accounts"]["view"].mapToGlobal(pos)
i = self.tabs["accounts"]["view"].indexAt(pos)
if not i:
return
data = i.internalPointer()
if data is None:
self.accountContext.buttons["edit"].setEnabled(False)
self.accountContext.buttons["remove"].setEnabled(False)
else:
self.accountContext.buttons["edit"].setEnabled(True)
self.accountContext.buttons["remove"].setEnabled(True)
menuPos = QCursor.pos()
menuPos.setX(menuPos.x()+2)
self.accountContext.exec_(menuPos)
| gpl-3.0 | 4,450,976,661,799,134,700 | 42.350072 | 152 | 0.60556 | false | 4.202949 | false | false | false |
dfalt974/SickRage | lib/sqlalchemy/orm/unitofwork.py | 78 | 23204 | # orm/unitofwork.py
# Copyright (C) 2005-2014 the SQLAlchemy authors and contributors <see AUTHORS file>
#
# This module is part of SQLAlchemy and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
"""The internals for the unit of work system.
The session's flush() process passes objects to a contextual object
here, which assembles flush tasks based on mappers and their properties,
organizes them in order of dependency, and executes.
"""
from .. import util, event
from ..util import topological
from . import attributes, persistence, util as orm_util
def track_cascade_events(descriptor, prop):
"""Establish event listeners on object attributes which handle
cascade-on-set/append.
"""
key = prop.key
def append(state, item, initiator):
# process "save_update" cascade rules for when
# an instance is appended to the list of another instance
if item is None:
return
sess = state.session
if sess:
if sess._warn_on_events:
sess._flush_warning("collection append")
prop = state.manager.mapper._props[key]
item_state = attributes.instance_state(item)
if prop._cascade.save_update and \
(prop.cascade_backrefs or key == initiator.key) and \
not sess._contains_state(item_state):
sess._save_or_update_state(item_state)
return item
def remove(state, item, initiator):
if item is None:
return
sess = state.session
if sess:
prop = state.manager.mapper._props[key]
if sess._warn_on_events:
sess._flush_warning(
"collection remove"
if prop.uselist
else "related attribute delete")
# expunge pending orphans
item_state = attributes.instance_state(item)
if prop._cascade.delete_orphan and \
item_state in sess._new and \
prop.mapper._is_orphan(item_state):
sess.expunge(item)
def set_(state, newvalue, oldvalue, initiator):
# process "save_update" cascade rules for when an instance
# is attached to another instance
if oldvalue is newvalue:
return newvalue
sess = state.session
if sess:
if sess._warn_on_events:
sess._flush_warning("related attribute set")
prop = state.manager.mapper._props[key]
if newvalue is not None:
newvalue_state = attributes.instance_state(newvalue)
if prop._cascade.save_update and \
(prop.cascade_backrefs or key == initiator.key) and \
not sess._contains_state(newvalue_state):
sess._save_or_update_state(newvalue_state)
if oldvalue is not None and \
oldvalue is not attributes.PASSIVE_NO_RESULT and \
prop._cascade.delete_orphan:
# possible to reach here with attributes.NEVER_SET ?
oldvalue_state = attributes.instance_state(oldvalue)
if oldvalue_state in sess._new and \
prop.mapper._is_orphan(oldvalue_state):
sess.expunge(oldvalue)
return newvalue
event.listen(descriptor, 'append', append, raw=True, retval=True)
event.listen(descriptor, 'remove', remove, raw=True, retval=True)
event.listen(descriptor, 'set', set_, raw=True, retval=True)
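# Hedged usage sketch (added commentary, illustrative mapping): the listeners
# installed above implement the cascade rules configured on a relationship(),
# for example:
#
#   addresses = relationship(Address, cascade="all, delete-orphan")
#
# With that configuration, appending a pending Address to parent.addresses pulls
# it into the parent's Session via the "append" hook, and removing it lets the
# "remove" hook expunge the pending orphan.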
class UOWTransaction(object):
def __init__(self, session):
self.session = session
# dictionary used by external actors to
# store arbitrary state information.
self.attributes = {}
# dictionary of mappers to sets of
# DependencyProcessors, which are also
# set to be part of the sorted flush actions,
# which have that mapper as a parent.
self.deps = util.defaultdict(set)
# dictionary of mappers to sets of InstanceState
# items pending for flush which have that mapper
# as a parent.
self.mappers = util.defaultdict(set)
# a dictionary of Preprocess objects, which gather
# additional states impacted by the flush
# and determine if a flush action is needed
self.presort_actions = {}
# dictionary of PostSortRec objects, each
# one issues work during the flush within
# a certain ordering.
self.postsort_actions = {}
# a set of 2-tuples, each containing two
# PostSortRec objects where the second
# is dependent on the first being executed
# first
self.dependencies = set()
# dictionary of InstanceState-> (isdelete, listonly)
# tuples, indicating if this state is to be deleted
# or insert/updated, or just refreshed
self.states = {}
# tracks InstanceStates which will be receiving
# a "post update" call. Keys are mappers,
# values are a set of states and a set of the
# columns which should be included in the update.
self.post_update_states = util.defaultdict(lambda: (set(), set()))
@property
def has_work(self):
return bool(self.states)
def is_deleted(self, state):
"""return true if the given state is marked as deleted
within this uowtransaction."""
return state in self.states and self.states[state][0]
def memo(self, key, callable_):
if key in self.attributes:
return self.attributes[key]
else:
self.attributes[key] = ret = callable_()
return ret
def remove_state_actions(self, state):
"""remove pending actions for a state from the uowtransaction."""
isdelete = self.states[state][0]
self.states[state] = (isdelete, True)
def get_attribute_history(self, state, key,
passive=attributes.PASSIVE_NO_INITIALIZE):
"""facade to attributes.get_state_history(), including
caching of results."""
hashkey = ("history", state, key)
# cache the objects, not the states; the strong reference here
# prevents newly loaded objects from being dereferenced during the
# flush process
if hashkey in self.attributes:
history, state_history, cached_passive = self.attributes[hashkey]
# if the cached lookup was "passive" and now
# we want non-passive, do a non-passive lookup and re-cache
if not cached_passive & attributes.SQL_OK \
and passive & attributes.SQL_OK:
impl = state.manager[key].impl
history = impl.get_history(state, state.dict,
attributes.PASSIVE_OFF |
attributes.LOAD_AGAINST_COMMITTED)
if history and impl.uses_objects:
state_history = history.as_state()
else:
state_history = history
self.attributes[hashkey] = (history, state_history, passive)
else:
impl = state.manager[key].impl
# TODO: store the history as (state, object) tuples
# so we don't have to keep converting here
history = impl.get_history(state, state.dict, passive |
attributes.LOAD_AGAINST_COMMITTED)
if history and impl.uses_objects:
state_history = history.as_state()
else:
state_history = history
self.attributes[hashkey] = (history, state_history,
passive)
return state_history
def has_dep(self, processor):
return (processor, True) in self.presort_actions
def register_preprocessor(self, processor, fromparent):
key = (processor, fromparent)
if key not in self.presort_actions:
self.presort_actions[key] = Preprocess(processor, fromparent)
def register_object(self, state, isdelete=False,
listonly=False, cancel_delete=False,
operation=None, prop=None):
if not self.session._contains_state(state):
if not state.deleted and operation is not None:
util.warn("Object of type %s not in session, %s operation "
"along '%s' will not proceed" %
(orm_util.state_class_str(state), operation, prop))
return False
if state not in self.states:
mapper = state.manager.mapper
if mapper not in self.mappers:
self._per_mapper_flush_actions(mapper)
self.mappers[mapper].add(state)
self.states[state] = (isdelete, listonly)
else:
if not listonly and (isdelete or cancel_delete):
self.states[state] = (isdelete, False)
return True
def issue_post_update(self, state, post_update_cols):
mapper = state.manager.mapper.base_mapper
states, cols = self.post_update_states[mapper]
states.add(state)
cols.update(post_update_cols)
def _per_mapper_flush_actions(self, mapper):
saves = SaveUpdateAll(self, mapper.base_mapper)
deletes = DeleteAll(self, mapper.base_mapper)
self.dependencies.add((saves, deletes))
for dep in mapper._dependency_processors:
dep.per_property_preprocessors(self)
for prop in mapper.relationships:
if prop.viewonly:
continue
dep = prop._dependency_processor
dep.per_property_preprocessors(self)
@util.memoized_property
def _mapper_for_dep(self):
"""return a dynamic mapping of (Mapper, DependencyProcessor) to
True or False, indicating if the DependencyProcessor operates
on objects of that Mapper.
The result is stored in the dictionary persistently once
calculated.
"""
return util.PopulateDict(
lambda tup: tup[0]._props.get(tup[1].key) is tup[1].prop
)
def filter_states_for_dep(self, dep, states):
"""Filter the given list of InstanceStates to those relevant to the
given DependencyProcessor.
"""
mapper_for_dep = self._mapper_for_dep
return [s for s in states if mapper_for_dep[(s.manager.mapper, dep)]]
def states_for_mapper_hierarchy(self, mapper, isdelete, listonly):
checktup = (isdelete, listonly)
for mapper in mapper.base_mapper.self_and_descendants:
for state in self.mappers[mapper]:
if self.states[state] == checktup:
yield state
def _generate_actions(self):
"""Generate the full, unsorted collection of PostSortRecs as
well as dependency pairs for this UOWTransaction.
"""
# execute presort_actions, until all states
# have been processed. a presort_action might
# add new states to the uow.
while True:
ret = False
for action in list(self.presort_actions.values()):
if action.execute(self):
ret = True
if not ret:
break
# see if the graph of mapper dependencies has cycles.
self.cycles = cycles = topological.find_cycles(
self.dependencies,
list(self.postsort_actions.values()))
if cycles:
# if yes, break the per-mapper actions into
# per-state actions
convert = dict(
(rec, set(rec.per_state_flush_actions(self)))
for rec in cycles
)
# rewrite the existing dependencies to point to
# the per-state actions for those per-mapper actions
# that were broken up.
for edge in list(self.dependencies):
if None in edge or \
edge[0].disabled or edge[1].disabled or \
cycles.issuperset(edge):
self.dependencies.remove(edge)
elif edge[0] in cycles:
self.dependencies.remove(edge)
for dep in convert[edge[0]]:
self.dependencies.add((dep, edge[1]))
elif edge[1] in cycles:
self.dependencies.remove(edge)
for dep in convert[edge[1]]:
self.dependencies.add((edge[0], dep))
return set([a for a in self.postsort_actions.values()
if not a.disabled
]
).difference(cycles)
def execute(self):
postsort_actions = self._generate_actions()
#sort = topological.sort(self.dependencies, postsort_actions)
#print "--------------"
#print "\ndependencies:", self.dependencies
#print "\ncycles:", self.cycles
#print "\nsort:", list(sort)
#print "\nCOUNT OF POSTSORT ACTIONS", len(postsort_actions)
# execute
if self.cycles:
for set_ in topological.sort_as_subsets(
self.dependencies,
postsort_actions):
while set_:
n = set_.pop()
n.execute_aggregate(self, set_)
else:
for rec in topological.sort(
self.dependencies,
postsort_actions):
rec.execute(self)
def finalize_flush_changes(self):
"""mark processed objects as clean / deleted after a successful
flush().
this method is called within the flush() method after the
execute() method has succeeded and the transaction has been committed.
"""
states = set(self.states)
isdel = set(
s for (s, (isdelete, listonly)) in self.states.items()
if isdelete
)
other = states.difference(isdel)
self.session._remove_newly_deleted(isdel)
self.session._register_newly_persistent(other)
class IterateMappersMixin(object):
def _mappers(self, uow):
if self.fromparent:
return iter(
m for m in
self.dependency_processor.parent.self_and_descendants
if uow._mapper_for_dep[(m, self.dependency_processor)]
)
else:
return self.dependency_processor.mapper.self_and_descendants
class Preprocess(IterateMappersMixin):
def __init__(self, dependency_processor, fromparent):
self.dependency_processor = dependency_processor
self.fromparent = fromparent
self.processed = set()
self.setup_flush_actions = False
def execute(self, uow):
delete_states = set()
save_states = set()
for mapper in self._mappers(uow):
for state in uow.mappers[mapper].difference(self.processed):
(isdelete, listonly) = uow.states[state]
if not listonly:
if isdelete:
delete_states.add(state)
else:
save_states.add(state)
if delete_states:
self.dependency_processor.presort_deletes(uow, delete_states)
self.processed.update(delete_states)
if save_states:
self.dependency_processor.presort_saves(uow, save_states)
self.processed.update(save_states)
if (delete_states or save_states):
if not self.setup_flush_actions and (
self.dependency_processor.\
prop_has_changes(uow, delete_states, True) or
self.dependency_processor.\
prop_has_changes(uow, save_states, False)
):
self.dependency_processor.per_property_flush_actions(uow)
self.setup_flush_actions = True
return True
else:
return False
class PostSortRec(object):
disabled = False
def __new__(cls, uow, *args):
key = (cls, ) + args
if key in uow.postsort_actions:
return uow.postsort_actions[key]
else:
uow.postsort_actions[key] = \
ret = \
object.__new__(cls)
return ret
def execute_aggregate(self, uow, recs):
self.execute(uow)
def __repr__(self):
return "%s(%s)" % (
self.__class__.__name__,
",".join(str(x) for x in self.__dict__.values())
)
class ProcessAll(IterateMappersMixin, PostSortRec):
def __init__(self, uow, dependency_processor, delete, fromparent):
self.dependency_processor = dependency_processor
self.delete = delete
self.fromparent = fromparent
uow.deps[dependency_processor.parent.base_mapper].\
add(dependency_processor)
def execute(self, uow):
states = self._elements(uow)
if self.delete:
self.dependency_processor.process_deletes(uow, states)
else:
self.dependency_processor.process_saves(uow, states)
def per_state_flush_actions(self, uow):
# this is handled by SaveUpdateAll and DeleteAll,
# since a ProcessAll should unconditionally be pulled
# into per-state if either the parent/child mappers
# are part of a cycle
return iter([])
def __repr__(self):
return "%s(%s, delete=%s)" % (
self.__class__.__name__,
self.dependency_processor,
self.delete
)
def _elements(self, uow):
for mapper in self._mappers(uow):
for state in uow.mappers[mapper]:
(isdelete, listonly) = uow.states[state]
if isdelete == self.delete and not listonly:
yield state
class IssuePostUpdate(PostSortRec):
def __init__(self, uow, mapper, isdelete):
self.mapper = mapper
self.isdelete = isdelete
def execute(self, uow):
states, cols = uow.post_update_states[self.mapper]
states = [s for s in states if uow.states[s][0] == self.isdelete]
persistence.post_update(self.mapper, states, uow, cols)
class SaveUpdateAll(PostSortRec):
def __init__(self, uow, mapper):
self.mapper = mapper
assert mapper is mapper.base_mapper
def execute(self, uow):
persistence.save_obj(self.mapper,
uow.states_for_mapper_hierarchy(self.mapper, False, False),
uow
)
def per_state_flush_actions(self, uow):
states = list(uow.states_for_mapper_hierarchy(
self.mapper, False, False))
base_mapper = self.mapper.base_mapper
delete_all = DeleteAll(uow, base_mapper)
for state in states:
# keep saves before deletes -
# this ensures 'row switch' operations work
action = SaveUpdateState(uow, state, base_mapper)
uow.dependencies.add((action, delete_all))
yield action
for dep in uow.deps[self.mapper]:
states_for_prop = uow.filter_states_for_dep(dep, states)
dep.per_state_flush_actions(uow, states_for_prop, False)
class DeleteAll(PostSortRec):
def __init__(self, uow, mapper):
self.mapper = mapper
assert mapper is mapper.base_mapper
def execute(self, uow):
persistence.delete_obj(self.mapper,
uow.states_for_mapper_hierarchy(self.mapper, True, False),
uow
)
def per_state_flush_actions(self, uow):
states = list(uow.states_for_mapper_hierarchy(
self.mapper, True, False))
base_mapper = self.mapper.base_mapper
save_all = SaveUpdateAll(uow, base_mapper)
for state in states:
# keep saves before deletes -
# this ensures 'row switch' operations work
action = DeleteState(uow, state, base_mapper)
uow.dependencies.add((save_all, action))
yield action
for dep in uow.deps[self.mapper]:
states_for_prop = uow.filter_states_for_dep(dep, states)
dep.per_state_flush_actions(uow, states_for_prop, True)
class ProcessState(PostSortRec):
def __init__(self, uow, dependency_processor, delete, state):
self.dependency_processor = dependency_processor
self.delete = delete
self.state = state
def execute_aggregate(self, uow, recs):
cls_ = self.__class__
dependency_processor = self.dependency_processor
delete = self.delete
our_recs = [r for r in recs
if r.__class__ is cls_ and
r.dependency_processor is dependency_processor and
r.delete is delete]
recs.difference_update(our_recs)
states = [self.state] + [r.state for r in our_recs]
if delete:
dependency_processor.process_deletes(uow, states)
else:
dependency_processor.process_saves(uow, states)
def __repr__(self):
return "%s(%s, %s, delete=%s)" % (
self.__class__.__name__,
self.dependency_processor,
orm_util.state_str(self.state),
self.delete
)
class SaveUpdateState(PostSortRec):
def __init__(self, uow, state, mapper):
self.state = state
self.mapper = mapper
def execute_aggregate(self, uow, recs):
cls_ = self.__class__
mapper = self.mapper
our_recs = [r for r in recs
if r.__class__ is cls_ and
r.mapper is mapper]
recs.difference_update(our_recs)
persistence.save_obj(mapper,
[self.state] +
[r.state for r in our_recs],
uow)
def __repr__(self):
return "%s(%s)" % (
self.__class__.__name__,
orm_util.state_str(self.state)
)
class DeleteState(PostSortRec):
def __init__(self, uow, state, mapper):
self.state = state
self.mapper = mapper
def execute_aggregate(self, uow, recs):
cls_ = self.__class__
mapper = self.mapper
our_recs = [r for r in recs
if r.__class__ is cls_ and
r.mapper is mapper]
recs.difference_update(our_recs)
states = [self.state] + [r.state for r in our_recs]
persistence.delete_obj(mapper,
[s for s in states if uow.states[s][0]],
uow)
def __repr__(self):
return "%s(%s)" % (
self.__class__.__name__,
orm_util.state_str(self.state)
)
| gpl-3.0 | 3,319,720,626,130,826,000 | 34.919505 | 84 | 0.564687 | false | 4.426555 | false | false | false |
sandeepgupta2k4/tensorflow | tensorflow/python/ops/parsing_ops.py | 21 | 49286 | # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Parsing Ops."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import re
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.framework import sparse_tensor
from tensorflow.python.framework import tensor_shape
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import gen_parsing_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import sparse_ops
# go/tf-wildcard-import
# pylint: disable=wildcard-import,undefined-variable
from tensorflow.python.ops.gen_parsing_ops import *
# pylint: enable=wildcard-import,undefined-variable
from tensorflow.python.platform import tf_logging
ops.NotDifferentiable("DecodeRaw")
ops.NotDifferentiable("ParseTensor")
ops.NotDifferentiable("StringToNumber")
class VarLenFeature(collections.namedtuple("VarLenFeature", ["dtype"])):
"""Configuration for parsing a variable-length input feature.
Fields:
dtype: Data type of input.
"""
pass
class SparseFeature(
collections.namedtuple(
"SparseFeature",
["index_key", "value_key", "dtype", "size", "already_sorted"])):
"""Configuration for parsing a sparse input feature from an `Example`.
Note, preferably use `VarLenFeature` (possibly in combination with a
`SequenceExample`) in order to parse out `SparseTensor`s instead of
`SparseFeature` due to its simplicity.
Closely mimicking the `SparseTensor` that will be obtained by parsing an
`Example` with a `SparseFeature` config, a `SparseFeature` contains a
* `value_key`: The name of key for a `Feature` in the `Example` whose parsed
`Tensor` will be the resulting `SparseTensor.values`.
* `index_key`: A list of names - one for each dimension in the resulting
`SparseTensor` whose `indices[i][dim]` indicating the position of
the `i`-th value in the `dim` dimension will be equal to the `i`-th value in
the Feature with key named `index_key[dim]` in the `Example`.
* `size`: A list of ints for the resulting `SparseTensor.dense_shape`.
For example, we can represent the following 2D `SparseTensor`
```python
SparseTensor(indices=[[3, 1], [20, 0]],
values=[0.5, -1.0]
dense_shape=[100, 3])
```
with an `Example` input proto
```python
features {
feature { key: "val" value { float_list { value: [ 0.5, -1.0 ] } } }
feature { key: "ix0" value { int64_list { value: [ 3, 20 ] } } }
feature { key: "ix1" value { int64_list { value: [ 1, 0 ] } } }
}
```
and `SparseFeature` config with 2 `index_key`s
```python
SparseFeature(index_key=["ix0", "ix1"],
value_key="val",
dtype=tf.float32,
size=[100, 3])
```
Fields:
index_key: A single string name or a list of string names of index features.
For each key the underlying feature's type must be `int64` and its length
must always match that of the `value_key` feature.
To represent `SparseTensor`s with a `dense_shape` of `rank` higher than 1
a list of length `rank` should be used.
value_key: Name of value feature. The underlying feature's type must
be `dtype` and its length must always match that of all the `index_key`s'
features.
dtype: Data type of the `value_key` feature.
size: A Python int or list thereof specifying the dense shape. Should be a
      list if and only if `index_key` is a list. In that case the length of the
      list must equal the length of `index_key`. For each entry `i`, all values
      in the `index_key[i]` feature must be in `[0, size[i])`.
already_sorted: A Python boolean to specify whether the values in
`value_key` are already sorted by their index position. If so skip
sorting. False by default (optional).
"""
pass
SparseFeature.__new__.__defaults__ = (False,)
class FixedLenFeature(collections.namedtuple(
"FixedLenFeature", ["shape", "dtype", "default_value"])):
"""Configuration for parsing a fixed-length input feature.
To treat sparse input as dense, provide a `default_value`; otherwise,
the parse functions will fail on any examples missing this feature.
Fields:
shape: Shape of input data.
dtype: Data type of input.
default_value: Value to be used if an example is missing this feature. It
must be compatible with `dtype` and of the specified `shape`.
"""
pass
FixedLenFeature.__new__.__defaults__ = (None,)
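# A minimal usage sketch for `FixedLenFeature` (the feature name "age" and the
# `serialized_batch` argument are illustrative assumptions, not part of this
# module): every example is expected to carry a scalar int64 "age", and
# examples missing it fall back to `default_value`.
def _fixed_len_feature_usage_sketch(serialized_batch):
  features = {"age": FixedLenFeature([], dtype=dtypes.int64, default_value=-1)}
  parsed = parse_example(serialized_batch, features)
  return parsed["age"]  # Dense `Tensor` of shape (batch_size,).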
class FixedLenSequenceFeature(collections.namedtuple(
"FixedLenSequenceFeature",
["shape", "dtype", "allow_missing", "default_value"])):
"""Configuration for parsing a variable-length input feature into a `Tensor`.
The resulting `Tensor` of parsing a single `SequenceExample` or `Example` has
a static `shape` of `[None] + shape` and the specified `dtype`.
The resulting `Tensor` of parsing a `batch_size` many `Example`s has
a static `shape` of `[batch_size, None] + shape` and the specified `dtype`.
The entries in the `batch` from different `Examples` will be padded with
`default_value` to the maximum length present in the `batch`.
To treat a sparse input as dense, provide `allow_missing=True`; otherwise,
the parse functions will fail on any examples missing this feature.
Fields:
shape: Shape of input data for dimension 2 and higher. First dimension is
of variable length `None`.
dtype: Data type of input.
allow_missing: Whether to allow this feature to be missing from a feature
list item. Is available only for parsing `SequenceExample` not for
parsing `Examples`.
default_value: Scalar value to be used to pad multiple `Example`s to their
maximum length. Irrelevant for parsing a single `Example` or
`SequenceExample`. Defaults to "" for dtype string and 0 otherwise
(optional).
"""
pass
FixedLenSequenceFeature.__new__.__defaults__ = (False, None)
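# A minimal usage sketch for `FixedLenSequenceFeature` (the feature name "ft"
# and `serialized_batch` are illustrative assumptions): a variable number of
# float values per example, padded with `default_value` up to the longest
# example in the batch. `allow_missing=True` is required when parsing
# `Example`s (see `_prepend_none_dimension` below).
def _fixed_len_sequence_feature_usage_sketch(serialized_batch):
  features = {
      "ft": FixedLenSequenceFeature(
          [], dtype=dtypes.float32, allow_missing=True, default_value=-1.0)
  }
  parsed = parse_example(serialized_batch, features)
  return parsed["ft"]  # Dense `Tensor` of shape (batch_size, longest length).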
def _features_to_raw_params(features, types):
"""Split feature tuples into raw params used by `gen_parsing_ops`.
Args:
features: A `dict` mapping feature keys to objects of a type in `types`.
types: Type of features to allow, among `FixedLenFeature`, `VarLenFeature`,
`SparseFeature`, and `FixedLenSequenceFeature`.
Returns:
Tuple of `sparse_keys`, `sparse_types`, `dense_keys`, `dense_types`,
`dense_defaults`, `dense_shapes`.
Raises:
ValueError: if `features` contains an item not in `types`, or an invalid
feature.
"""
sparse_keys = []
sparse_types = []
dense_keys = []
dense_types = []
dense_defaults = {}
dense_shapes = []
if features:
# NOTE: We iterate over sorted keys to keep things deterministic.
for key in sorted(features.keys()):
feature = features[key]
if isinstance(feature, VarLenFeature):
if VarLenFeature not in types:
raise ValueError("Unsupported VarLenFeature %s.", feature)
if not feature.dtype:
raise ValueError("Missing type for feature %s." % key)
sparse_keys.append(key)
sparse_types.append(feature.dtype)
elif isinstance(feature, SparseFeature):
if SparseFeature not in types:
raise ValueError("Unsupported SparseFeature %s.", feature)
if not feature.index_key:
raise ValueError(
"Missing index_key for SparseFeature %s.", feature)
if not feature.value_key:
raise ValueError(
"Missing value_key for SparseFeature %s.", feature)
if not feature.dtype:
raise ValueError("Missing type for feature %s." % key)
index_keys = feature.index_key
if isinstance(index_keys, str):
index_keys = [index_keys]
elif len(index_keys) > 1:
tf_logging.warning("SparseFeature is a complicated feature config "
"and should only be used after careful "
"consideration of VarLenFeature.")
for index_key in sorted(index_keys):
if index_key in sparse_keys:
dtype = sparse_types[sparse_keys.index(index_key)]
if dtype != dtypes.int64:
raise ValueError("Conflicting type %s vs int64 for feature %s." %
(dtype, index_key))
else:
sparse_keys.append(index_key)
sparse_types.append(dtypes.int64)
if feature.value_key in sparse_keys:
dtype = sparse_types[sparse_keys.index(feature.value_key)]
if dtype != feature.dtype:
raise ValueError("Conflicting type %s vs %s for feature %s." % (
dtype, feature.dtype, feature.value_key))
else:
sparse_keys.append(feature.value_key)
sparse_types.append(feature.dtype)
elif isinstance(feature, FixedLenFeature):
if FixedLenFeature not in types:
raise ValueError("Unsupported FixedLenFeature %s.", feature)
if not feature.dtype:
raise ValueError("Missing type for feature %s." % key)
if feature.shape is None:
raise ValueError("Missing shape for feature %s." % key)
feature_tensor_shape = tensor_shape.as_shape(feature.shape)
if (feature.shape and feature_tensor_shape.ndims and
feature_tensor_shape.dims[0].value is None):
raise ValueError("First dimension of shape for feature %s unknown. "
"Consider using FixedLenSequenceFeature." % key)
if (feature.shape is not None and
not feature_tensor_shape.is_fully_defined()):
raise ValueError("All dimensions of shape for feature %s need to be "
"known but received %s." % (key, str(feature.shape)))
dense_keys.append(key)
dense_shapes.append(feature.shape)
dense_types.append(feature.dtype)
if feature.default_value is not None:
dense_defaults[key] = feature.default_value
elif isinstance(feature, FixedLenSequenceFeature):
if FixedLenSequenceFeature not in types:
raise ValueError("Unsupported FixedLenSequenceFeature %s.", feature)
if not feature.dtype:
raise ValueError("Missing type for feature %s." % key)
if feature.shape is None:
raise ValueError("Missing shape for feature %s." % key)
dense_keys.append(key)
dense_shapes.append(feature.shape)
dense_types.append(feature.dtype)
if feature.allow_missing:
dense_defaults[key] = None
if feature.default_value is not None:
dense_defaults[key] = feature.default_value
else:
raise ValueError("Invalid feature %s:%s." % (key, feature))
return (
sparse_keys, sparse_types, dense_keys, dense_types, dense_defaults,
dense_shapes)
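# A worked illustration of the split above (feature names are illustrative):
# keys are processed in sorted order, so
#     {"age": FixedLenFeature([], dtypes.int64, default_value=-1),
#      "tags": VarLenFeature(dtypes.string)}
# yields sparse_keys=["tags"], sparse_types=[dtypes.string],
# dense_keys=["age"], dense_types=[dtypes.int64], dense_defaults={"age": -1}
# and dense_shapes=[[]].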
def _construct_sparse_tensors_for_sparse_features(features, tensor_dict):
"""Merges SparseTensors of indices and values of SparseFeatures.
Constructs new dict based on `tensor_dict`. For `SparseFeatures` in the values
of `features` expects their `index_key`s and `index_value`s to be present in
`tensor_dict` mapping to `SparseTensor`s. Constructs a single `SparseTensor`
from them, and adds it to the result with the key from `features`.
Copies other keys and values from `tensor_dict` with keys present in
`features`.
Args:
features: A `dict` mapping feature keys to `SparseFeature` values.
Values of other types will be ignored.
tensor_dict: A `dict` mapping feature keys to `Tensor` and `SparseTensor`
values. Expected to contain keys of the `SparseFeature`s' `index_key`s and
`value_key`s and mapping them to `SparseTensor`s.
Returns:
A `dict` mapping feature keys to `Tensor` and `SparseTensor` values. Similar
    to `tensor_dict` except each `SparseFeature` in `features` results in a
single `SparseTensor`.
"""
tensor_dict = dict(tensor_dict) # Do not modify argument passed in.
# Construct SparseTensors for SparseFeatures.
for key in sorted(features.keys()):
feature = features[key]
if isinstance(feature, SparseFeature):
if isinstance(feature.index_key, str):
sp_ids = tensor_dict[feature.index_key]
else:
sp_ids = [tensor_dict[index_key] for index_key in feature.index_key]
sp_values = tensor_dict[feature.value_key]
tensor_dict[key] = sparse_ops.sparse_merge(
sp_ids,
sp_values,
vocab_size=feature.size,
already_sorted=feature.already_sorted)
# Remove tensors from dictionary that were only used to construct
# SparseTensors for SparseFeature.
for key in set(tensor_dict) - set(features):
del tensor_dict[key]
return tensor_dict
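# A small self-contained sketch of the merge above, assuming a batch of two
# parsed examples: the columns "ix" and "val" (as `_parse_example_raw` would
# produce them for a `SparseFeature(index_key="ix", value_key="val")`) are
# combined into one `SparseTensor` keyed by the `SparseFeature`'s name.
def _sparse_feature_merge_sketch():
  ix = sparse_tensor.SparseTensor(
      indices=[[0, 0], [0, 1], [1, 0]],
      values=constant_op.constant([3, 20, 42], dtype=dtypes.int64),
      dense_shape=[2, 2])
  val = sparse_tensor.SparseTensor(
      indices=[[0, 0], [0, 1], [1, 0]],
      values=constant_op.constant([0.5, -1.0, 0.0], dtype=dtypes.float32),
      dense_shape=[2, 2])
  features = {"sp": SparseFeature("ix", "val", dtypes.float32, 100)}
  merged = _construct_sparse_tensors_for_sparse_features(
      features, {"ix": ix, "val": val})
  return merged["sp"]  # `SparseTensor` with `dense_shape` [2, 100].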
def _prepend_none_dimension(features):
if features:
modified_features = dict(features) # Create a copy to modify
for key, feature in features.items():
if isinstance(feature, FixedLenSequenceFeature):
if not feature.allow_missing:
raise ValueError("Unsupported: FixedLenSequenceFeature requires "
"allow_missing to be True.")
modified_features[key] = FixedLenSequenceFeature(
[None] + list(feature.shape),
feature.dtype,
feature.allow_missing,
feature.default_value)
return modified_features
else:
return features
def parse_example(serialized, features, name=None, example_names=None):
# pylint: disable=line-too-long
"""Parses `Example` protos into a `dict` of tensors.
Parses a number of serialized [`Example`](https://www.tensorflow.org/code/tensorflow/core/example/example.proto)
protos given in `serialized`. We refer to `serialized` as a batch with
`batch_size` many entries of individual `Example` protos.
`example_names` may contain descriptive names for the corresponding serialized
protos. These may be useful for debugging purposes, but they have no effect on
the output. If not `None`, `example_names` must be the same length as
`serialized`.
This op parses serialized examples into a dictionary mapping keys to `Tensor`
and `SparseTensor` objects. `features` is a dict from keys to `VarLenFeature`,
`SparseFeature`, and `FixedLenFeature` objects. Each `VarLenFeature`
and `SparseFeature` is mapped to a `SparseTensor`, and each
`FixedLenFeature` is mapped to a `Tensor`.
Each `VarLenFeature` maps to a `SparseTensor` of the specified type
representing a ragged matrix. Its indices are `[batch, index]` where `batch`
identifies the example in `serialized`, and `index` is the value's index in
the list of values associated with that feature and example.
Each `SparseFeature` maps to a `SparseTensor` of the specified type
representing a Tensor of `dense_shape` `[batch_size] + SparseFeature.size`.
Its `values` come from the feature in the examples with key `value_key`.
A `values[i]` comes from a position `k` in the feature of an example at batch
entry `batch`. This positional information is recorded in `indices[i]` as
`[batch, index_0, index_1, ...]` where `index_j` is the `k-th` value of
  the feature in the example with key `SparseFeature.index_key[j]`.
In other words, we split the indices (except the first index indicating the
batch entry) of a `SparseTensor` by dimension into different features of the
`Example`. Due to its complexity a `VarLenFeature` should be preferred over a
`SparseFeature` whenever possible.
Each `FixedLenFeature` `df` maps to a `Tensor` of the specified type (or
`tf.float32` if not specified) and shape `(serialized.size(),) + df.shape`.
`FixedLenFeature` entries with a `default_value` are optional. With no default
value, we will fail if that `Feature` is missing from any example in
`serialized`.
Each `FixedLenSequenceFeature` `df` maps to a `Tensor` of the specified type
(or `tf.float32` if not specified) and shape
`(serialized.size(), None) + df.shape`.
All examples in `serialized` will be padded with `default_value` along the
second dimension.
Examples:
For example, if one expects a `tf.float32` `VarLenFeature` `ft` and three
serialized `Example`s are provided:
```
serialized = [
features
{ feature { key: "ft" value { float_list { value: [1.0, 2.0] } } } },
features
{ feature []},
features
{ feature { key: "ft" value { float_list { value: [3.0] } } }
]
```
then the output will look like:
```
{"ft": SparseTensor(indices=[[0, 0], [0, 1], [2, 0]],
values=[1.0, 2.0, 3.0],
dense_shape=(3, 2)) }
```
If instead a `FixedLenSequenceFeature` with `default_value = -1.0` and
`shape=[]` is used then the output will look like:
```
{"ft": [[1.0, 2.0], [3.0, -1.0]]}
```
Given two `Example` input protos in `serialized`:
```
[
features {
feature { key: "kw" value { bytes_list { value: [ "knit", "big" ] } } }
feature { key: "gps" value { float_list { value: [] } } }
},
features {
feature { key: "kw" value { bytes_list { value: [ "emmy" ] } } }
feature { key: "dank" value { int64_list { value: [ 42 ] } } }
feature { key: "gps" value { } }
}
]
```
And arguments
```
example_names: ["input0", "input1"],
features: {
"kw": VarLenFeature(tf.string),
"dank": VarLenFeature(tf.int64),
"gps": VarLenFeature(tf.float32),
}
```
Then the output is a dictionary:
```python
{
"kw": SparseTensor(
indices=[[0, 0], [0, 1], [1, 0]],
values=["knit", "big", "emmy"]
dense_shape=[2, 2]),
"dank": SparseTensor(
indices=[[1, 0]],
values=[42],
dense_shape=[2, 1]),
"gps": SparseTensor(
indices=[],
values=[],
dense_shape=[2, 0]),
}
```
For dense results in two serialized `Example`s:
```
[
features {
feature { key: "age" value { int64_list { value: [ 0 ] } } }
feature { key: "gender" value { bytes_list { value: [ "f" ] } } }
},
features {
feature { key: "age" value { int64_list { value: [] } } }
feature { key: "gender" value { bytes_list { value: [ "f" ] } } }
}
]
```
We can use arguments:
```
example_names: ["input0", "input1"],
features: {
"age": FixedLenFeature([], dtype=tf.int64, default_value=-1),
"gender": FixedLenFeature([], dtype=tf.string),
}
```
And the expected output is:
```python
{
"age": [[0], [-1]],
"gender": [["f"], ["f"]],
}
```
An alternative to `VarLenFeature` to obtain a `SparseTensor` is
`SparseFeature`. For example, given two `Example` input protos in
`serialized`:
```
[
features {
feature { key: "val" value { float_list { value: [ 0.5, -1.0 ] } } }
feature { key: "ix" value { int64_list { value: [ 3, 20 ] } } }
},
features {
feature { key: "val" value { float_list { value: [ 0.0 ] } } }
feature { key: "ix" value { int64_list { value: [ 42 ] } } }
}
]
```
And arguments
```
example_names: ["input0", "input1"],
features: {
"sparse": SparseFeature(
index_key="ix", value_key="val", dtype=tf.float32, size=100),
}
```
Then the output is a dictionary:
```python
{
"sparse": SparseTensor(
indices=[[0, 3], [0, 20], [1, 42]],
values=[0.5, -1.0, 0.0]
dense_shape=[2, 100]),
}
```
Args:
serialized: A vector (1-D Tensor) of strings, a batch of binary
serialized `Example` protos.
features: A `dict` mapping feature keys to `FixedLenFeature`,
`VarLenFeature`, and `SparseFeature` values.
name: A name for this operation (optional).
example_names: A vector (1-D Tensor) of strings (optional), the names of
the serialized protos in the batch.
Returns:
A `dict` mapping feature keys to `Tensor` and `SparseTensor` values.
Raises:
ValueError: if any feature is invalid.
"""
if not features:
raise ValueError("Missing: features was %s." % features)
features = _prepend_none_dimension(features)
(sparse_keys, sparse_types, dense_keys, dense_types, dense_defaults,
dense_shapes) = _features_to_raw_params(
features,
[VarLenFeature, SparseFeature, FixedLenFeature, FixedLenSequenceFeature])
outputs = _parse_example_raw(
serialized, example_names, sparse_keys, sparse_types, dense_keys,
dense_types, dense_defaults, dense_shapes, name)
return _construct_sparse_tensors_for_sparse_features(features, outputs)
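# A minimal end-to-end sketch of `parse_example`, assuming the serialized
# `Example` protos arrive through a string placeholder (e.g. fed from a
# TFRecord reader); the feature names "age" and "kw" are illustrative only.
def _parse_example_usage_sketch():
  serialized = array_ops.placeholder(dtypes.string, shape=[None])
  features = {
      "age": FixedLenFeature([], dtype=dtypes.int64, default_value=-1),
      "kw": VarLenFeature(dtypes.string),
  }
  parsed = parse_example(serialized, features)
  # parsed["age"] is a dense `Tensor` of shape (batch_size,), while
  # parsed["kw"] is a `SparseTensor` whose indices have the form [batch, index].
  return parsed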
def _parse_example_raw(serialized,
names=None,
sparse_keys=None,
sparse_types=None,
dense_keys=None,
dense_types=None,
dense_defaults=None,
dense_shapes=None,
name=None):
"""Parses `Example` protos.
Args:
serialized: A vector (1-D Tensor) of strings, a batch of binary
serialized `Example` protos.
names: A vector (1-D Tensor) of strings (optional), the names of
the serialized protos.
sparse_keys: A list of string keys in the examples' features.
The results for these keys will be returned as `SparseTensor` objects.
sparse_types: A list of `DTypes` of the same length as `sparse_keys`.
Only `tf.float32` (`FloatList`), `tf.int64` (`Int64List`),
and `tf.string` (`BytesList`) are supported.
dense_keys: A list of string keys in the examples' features.
The results for these keys will be returned as `Tensor`s
dense_types: A list of DTypes of the same length as `dense_keys`.
Only `tf.float32` (`FloatList`), `tf.int64` (`Int64List`),
and `tf.string` (`BytesList`) are supported.
dense_defaults: A dict mapping string keys to `Tensor`s.
The keys of the dict must match the dense_keys of the feature.
dense_shapes: A list of tuples with the same length as `dense_keys`.
The shape of the data for each dense feature referenced by `dense_keys`.
Required for any input tensors identified by `dense_keys`. Must be
either fully defined, or may contain an unknown first dimension.
An unknown first dimension means the feature is treated as having
a variable number of blocks, and the output shape along this dimension
is considered unknown at graph build time. Padding is applied for
minibatch elements smaller than the maximum number of blocks for the
given feature along this dimension.
name: A name for this operation (optional).
Returns:
A `dict` mapping keys to `Tensor`s and `SparseTensor`s.
Raises:
ValueError: If sparse and dense key sets intersect, or input lengths do not
match up.
"""
with ops.name_scope(name, "ParseExample", [serialized, names]):
names = [] if names is None else names
dense_defaults = {} if dense_defaults is None else dense_defaults
sparse_keys = [] if sparse_keys is None else sparse_keys
sparse_types = [] if sparse_types is None else sparse_types
dense_keys = [] if dense_keys is None else dense_keys
dense_types = [] if dense_types is None else dense_types
dense_shapes = (
[[]] * len(dense_keys) if dense_shapes is None else dense_shapes)
num_dense = len(dense_keys)
num_sparse = len(sparse_keys)
if len(dense_shapes) != num_dense:
raise ValueError("len(dense_shapes) != len(dense_keys): %d vs. %d"
% (len(dense_shapes), num_dense))
if len(dense_types) != num_dense:
raise ValueError("len(dense_types) != len(num_dense): %d vs. %d"
% (len(dense_types), num_dense))
if len(sparse_types) != num_sparse:
raise ValueError("len(sparse_types) != len(sparse_keys): %d vs. %d"
% (len(sparse_types), num_sparse))
if num_dense + num_sparse == 0:
raise ValueError("Must provide at least one sparse key or dense key")
if not set(dense_keys).isdisjoint(set(sparse_keys)):
raise ValueError(
"Dense and sparse keys must not intersect; intersection: %s" %
set(dense_keys).intersection(set(sparse_keys)))
# Convert dense_shapes to TensorShape object.
dense_shapes = [tensor_shape.as_shape(shape) for shape in dense_shapes]
dense_defaults_vec = []
for i, key in enumerate(dense_keys):
default_value = dense_defaults.get(key)
dense_shape = dense_shapes[i]
if (dense_shape.ndims is not None and dense_shape.ndims > 0 and
dense_shape[0].value is None):
# Variable stride dense shape, the default value should be a
# scalar padding value
if default_value is None:
default_value = ops.convert_to_tensor(
"" if dense_types[i] == dtypes.string else 0,
dtype=dense_types[i])
else:
# Reshape to a scalar to ensure user gets an error if they
# provide a tensor that's not intended to be a padding value
# (0 or 2+ elements).
key_name = "padding_" + re.sub("[^A-Za-z0-9_.\\-/]", "_", key)
default_value = ops.convert_to_tensor(
default_value, dtype=dense_types[i], name=key_name)
default_value = array_ops.reshape(default_value, [])
else:
if default_value is None:
default_value = constant_op.constant([], dtype=dense_types[i])
elif not isinstance(default_value, ops.Tensor):
key_name = "key_" + re.sub("[^A-Za-z0-9_.\\-/]", "_", key)
default_value = ops.convert_to_tensor(
default_value, dtype=dense_types[i], name=key_name)
default_value = array_ops.reshape(default_value, dense_shape)
dense_defaults_vec.append(default_value)
# Finally, convert dense_shapes to TensorShapeProto
dense_shapes = [shape.as_proto() for shape in dense_shapes]
# pylint: disable=protected-access
outputs = gen_parsing_ops._parse_example(
serialized=serialized,
names=names,
dense_defaults=dense_defaults_vec,
sparse_keys=sparse_keys,
sparse_types=sparse_types,
dense_keys=dense_keys,
dense_shapes=dense_shapes,
name=name)
# pylint: enable=protected-access
(sparse_indices, sparse_values, sparse_shapes, dense_values) = outputs
sparse_tensors = [
sparse_tensor.SparseTensor(ix, val, shape) for (ix, val, shape)
in zip(sparse_indices, sparse_values, sparse_shapes)]
return dict(zip(sparse_keys + dense_keys, sparse_tensors + dense_values))
def parse_single_example(serialized, features, name=None, example_names=None):
"""Parses a single `Example` proto.
Similar to `parse_example`, except:
For dense tensors, the returned `Tensor` is identical to the output of
`parse_example`, except there is no batch dimension, the output shape is the
same as the shape given in `dense_shape`.
For `SparseTensor`s, the first (batch) column of the indices matrix is removed
(the indices matrix is a column vector), the values vector is unchanged, and
the first (`batch_size`) entry of the shape vector is removed (it is now a
single element vector).
One might see performance advantages by batching `Example` protos with
`parse_example` instead of using this function directly.
Args:
serialized: A scalar string Tensor, a single serialized Example.
See `_parse_single_example_raw` documentation for more details.
features: A `dict` mapping feature keys to `FixedLenFeature` or
`VarLenFeature` values.
name: A name for this operation (optional).
example_names: (Optional) A scalar string Tensor, the associated name.
See `_parse_single_example_raw` documentation for more details.
Returns:
A `dict` mapping feature keys to `Tensor` and `SparseTensor` values.
Raises:
ValueError: if any feature is invalid.
"""
if not features:
raise ValueError("Missing features.")
features = _prepend_none_dimension(features)
(sparse_keys, sparse_types, dense_keys, dense_types, dense_defaults,
dense_shapes) = _features_to_raw_params(
features,
[VarLenFeature, FixedLenFeature, FixedLenSequenceFeature, SparseFeature])
outputs = _parse_single_example_raw(
serialized, example_names, sparse_keys, sparse_types, dense_keys,
dense_types, dense_defaults, dense_shapes, name)
return _construct_sparse_tensors_for_sparse_features(features, outputs)
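# A minimal usage sketch of `parse_single_example`, assuming a scalar string
# Tensor holding a single serialized `Example` (feature names are
# illustrative only).
def _parse_single_example_usage_sketch():
  serialized = array_ops.placeholder(dtypes.string, shape=[])
  features = {
      "age": FixedLenFeature([], dtype=dtypes.int64, default_value=-1),
      "kw": VarLenFeature(dtypes.string),
  }
  parsed = parse_single_example(serialized, features)
  # No batch dimension here: parsed["age"] is a scalar `Tensor`, and
  # parsed["kw"].indices is a column vector of positions within the example.
  return parsed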
def _parse_single_example_raw(serialized,
names=None,
sparse_keys=None,
sparse_types=None,
dense_keys=None,
dense_types=None,
dense_defaults=None,
dense_shapes=None,
name=None):
"""Parses a single `Example` proto.
Args:
serialized: A scalar string Tensor, a single serialized Example.
See `_parse_example_raw` documentation for more details.
names: (Optional) A scalar string Tensor, the associated name.
See `_parse_example_raw` documentation for more details.
sparse_keys: See `_parse_example_raw` documentation for more details.
sparse_types: See `_parse_example_raw` documentation for more details.
dense_keys: See `_parse_example_raw` documentation for more details.
dense_types: See `_parse_example_raw` documentation for more details.
dense_defaults: See `_parse_example_raw` documentation for more details.
dense_shapes: See `_parse_example_raw` documentation for more details.
name: A name for this operation (optional).
Returns:
A `dict` mapping feature keys to `Tensor` and `SparseTensor` values.
Raises:
ValueError: if any feature is invalid.
"""
with ops.name_scope(name, "ParseSingleExample", [serialized, names]):
serialized = ops.convert_to_tensor(serialized)
serialized_shape = serialized.get_shape()
if serialized_shape.ndims is not None:
if serialized_shape.ndims != 0:
raise ValueError("Input serialized must be a scalar")
else:
serialized = control_flow_ops.with_dependencies(
[control_flow_ops.Assert(
math_ops.equal(array_ops.rank(serialized), 0),
["Input serialized must be a scalar"],
name="SerializedIsScalar")],
serialized,
name="SerializedDependencies")
serialized = array_ops.expand_dims(serialized, 0)
if names is not None:
names = ops.convert_to_tensor(names)
names_shape = names.get_shape()
if names_shape.ndims is not None:
if names_shape.ndims != 0:
raise ValueError("Input names must be a scalar")
else:
names = control_flow_ops.with_dependencies(
[control_flow_ops.Assert(
math_ops.equal(array_ops.rank(names), 0),
["Input names must be a scalar"],
name="NamesIsScalar")],
names,
name="NamesDependencies")
names = array_ops.expand_dims(names, 0)
outputs = _parse_example_raw(
serialized,
names=names,
sparse_keys=sparse_keys,
sparse_types=sparse_types,
dense_keys=dense_keys,
dense_types=dense_types,
dense_defaults=dense_defaults,
dense_shapes=dense_shapes,
name=name)
if dense_keys is not None:
for d in dense_keys:
d_name = re.sub("[^A-Za-z0-9_.\\-/]", "_", d)
outputs[d] = array_ops.squeeze(
outputs[d], [0], name="Squeeze_%s" % d_name)
if sparse_keys is not None:
for s in sparse_keys:
s_name = re.sub("[^A-Za-z0-9_.\\-/]", "_", s)
outputs[s] = sparse_tensor.SparseTensor(
array_ops.slice(outputs[s].indices,
[0, 1], [-1, -1], name="Slice_Indices_%s" % s_name),
outputs[s].values,
array_ops.slice(outputs[s].dense_shape,
[1], [-1], name="Squeeze_Shape_%s" % s_name))
return outputs
def parse_single_sequence_example(
serialized, context_features=None, sequence_features=None,
example_name=None, name=None):
# pylint: disable=line-too-long
"""Parses a single `SequenceExample` proto.
Parses a single serialized [`SequenceExample`](https://www.tensorflow.org/code/tensorflow/core/example/example.proto)
proto given in `serialized`.
This op parses a serialized sequence example into a tuple of dictionaries
mapping keys to `Tensor` and `SparseTensor` objects respectively.
The first dictionary contains mappings for keys appearing in
`context_features`, and the second dictionary contains mappings for keys
appearing in `sequence_features`.
At least one of `context_features` and `sequence_features` must be provided
and non-empty.
The `context_features` keys are associated with a `SequenceExample` as a
whole, independent of time / frame. In contrast, the `sequence_features` keys
provide a way to access variable-length data within the `FeatureList` section
of the `SequenceExample` proto. While the shapes of `context_features` values
are fixed with respect to frame, the frame dimension (the first dimension)
of `sequence_features` values may vary between `SequenceExample` protos,
and even between `feature_list` keys within the same `SequenceExample`.
`context_features` contains `VarLenFeature` and `FixedLenFeature` objects.
Each `VarLenFeature` is mapped to a `SparseTensor`, and each `FixedLenFeature`
is mapped to a `Tensor`, of the specified type, shape, and default value.
`sequence_features` contains `VarLenFeature` and `FixedLenSequenceFeature`
objects. Each `VarLenFeature` is mapped to a `SparseTensor`, and each
`FixedLenSequenceFeature` is mapped to a `Tensor`, each of the specified type.
  The shape will be `(T,) + df.shape` for `FixedLenSequenceFeature` `df`, where
`T` is the length of the associated `FeatureList` in the `SequenceExample`.
For instance, `FixedLenSequenceFeature([])` yields a scalar 1-D `Tensor` of
static shape `[None]` and dynamic shape `[T]`, while
`FixedLenSequenceFeature([k])` (for `int k >= 1`) yields a 2-D matrix `Tensor`
of static shape `[None, k]` and dynamic shape `[T, k]`.
Each `SparseTensor` corresponding to `sequence_features` represents a ragged
vector. Its indices are `[time, index]`, where `time` is the `FeatureList`
entry and `index` is the value's index in the list of values associated with
that time.
`FixedLenFeature` entries with a `default_value` and `FixedLenSequenceFeature`
entries with `allow_missing=True` are optional; otherwise, we will fail if
that `Feature` or `FeatureList` is missing from any example in `serialized`.
`example_name` may contain a descriptive name for the corresponding serialized
proto. This may be useful for debugging purposes, but it has no effect on the
output. If not `None`, `example_name` must be a scalar.
Args:
serialized: A scalar (0-D Tensor) of type string, a single binary
serialized `SequenceExample` proto.
context_features: A `dict` mapping feature keys to `FixedLenFeature` or
`VarLenFeature` values. These features are associated with a
`SequenceExample` as a whole.
sequence_features: A `dict` mapping feature keys to
`FixedLenSequenceFeature` or `VarLenFeature` values. These features are
associated with data within the `FeatureList` section of the
`SequenceExample` proto.
example_name: A scalar (0-D Tensor) of strings (optional), the name of
the serialized proto.
name: A name for this operation (optional).
Returns:
A tuple of two `dict`s, each mapping keys to `Tensor`s and `SparseTensor`s.
The first dict contains the context key/values.
The second dict contains the feature_list key/values.
Raises:
ValueError: if any feature is invalid.
"""
# pylint: enable=line-too-long
if not (context_features or sequence_features):
raise ValueError("Missing features.")
(context_sparse_keys, context_sparse_types, context_dense_keys,
context_dense_types, context_dense_defaults,
context_dense_shapes) = _features_to_raw_params(
context_features, [VarLenFeature, FixedLenFeature])
(feature_list_sparse_keys, feature_list_sparse_types,
feature_list_dense_keys, feature_list_dense_types,
feature_list_dense_defaults,
feature_list_dense_shapes) = _features_to_raw_params(
sequence_features, [VarLenFeature, FixedLenSequenceFeature])
return _parse_single_sequence_example_raw(
serialized, context_sparse_keys, context_sparse_types,
context_dense_keys, context_dense_types, context_dense_defaults,
context_dense_shapes, feature_list_sparse_keys,
feature_list_sparse_types, feature_list_dense_keys,
feature_list_dense_types, feature_list_dense_shapes,
feature_list_dense_defaults, example_name, name)
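# A minimal usage sketch of `parse_single_sequence_example`, assuming one
# serialized `SequenceExample` with a fixed-length context feature and two
# frame features (the feature names are illustrative only).
def _parse_single_sequence_example_usage_sketch():
  serialized = array_ops.placeholder(dtypes.string, shape=[])
  context_features = {
      "length": FixedLenFeature([], dtype=dtypes.int64),
  }
  sequence_features = {
      "tokens": FixedLenSequenceFeature([], dtype=dtypes.int64),
      "labels": VarLenFeature(dtypes.string),
  }
  context, sequences = parse_single_sequence_example(
      serialized, context_features=context_features,
      sequence_features=sequence_features)
  # context["length"] is a scalar `Tensor`; sequences["tokens"] has shape [T],
  # where T is the number of entries in the "tokens" `FeatureList`.
  return context, sequences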
def _parse_single_sequence_example_raw(serialized,
context_sparse_keys=None,
context_sparse_types=None,
context_dense_keys=None,
context_dense_types=None,
context_dense_defaults=None,
context_dense_shapes=None,
feature_list_sparse_keys=None,
feature_list_sparse_types=None,
feature_list_dense_keys=None,
feature_list_dense_types=None,
feature_list_dense_shapes=None,
feature_list_dense_defaults=None,
debug_name=None,
name=None):
"""Parses a single `SequenceExample` proto.
Args:
serialized: A scalar (0-D Tensor) of type string, a single binary
serialized `SequenceExample` proto.
context_sparse_keys: A list of string keys in the `SequenceExample`'s
features. The results for these keys will be returned as
`SparseTensor` objects.
context_sparse_types: A list of `DTypes`, the same length as `sparse_keys`.
Only `tf.float32` (`FloatList`), `tf.int64` (`Int64List`),
and `tf.string` (`BytesList`) are supported.
context_dense_keys: A list of string keys in the examples' features.
The results for these keys will be returned as `Tensor`s
context_dense_types: A list of DTypes, same length as `context_dense_keys`.
Only `tf.float32` (`FloatList`), `tf.int64` (`Int64List`),
and `tf.string` (`BytesList`) are supported.
context_dense_defaults: A dict mapping string keys to `Tensor`s.
The keys of the dict must match the context_dense_keys of the feature.
context_dense_shapes: A list of tuples, same length as `context_dense_keys`.
The shape of the data for each context_dense feature referenced by
`context_dense_keys`. Required for any input tensors identified by
`context_dense_keys` whose shapes are anything other than `[]` or `[1]`.
feature_list_sparse_keys: A list of string keys in the `SequenceExample`'s
feature_lists. The results for these keys will be returned as
`SparseTensor` objects.
feature_list_sparse_types: A list of `DTypes`, same length as `sparse_keys`.
Only `tf.float32` (`FloatList`), `tf.int64` (`Int64List`),
and `tf.string` (`BytesList`) are supported.
feature_list_dense_keys: A list of string keys in the `SequenceExample`'s
features_lists. The results for these keys will be returned as `Tensor`s.
feature_list_dense_types: A list of `DTypes`, same length as
`feature_list_dense_keys`. Only `tf.float32` (`FloatList`),
`tf.int64` (`Int64List`), and `tf.string` (`BytesList`) are supported.
feature_list_dense_shapes: A list of tuples, same length as
`feature_list_dense_keys`. The shape of the data for each
`FeatureList` feature referenced by `feature_list_dense_keys`.
feature_list_dense_defaults: A dict mapping key strings to values.
The only currently allowed value is `None`. Any key appearing
in this dict with value `None` is allowed to be missing from the
`SequenceExample`. If missing, the key is treated as zero-length.
debug_name: A scalar (0-D Tensor) of strings (optional), the name of
the serialized proto.
name: A name for this operation (optional).
Returns:
A tuple of two `dict`s, each mapping keys to `Tensor`s and `SparseTensor`s.
The first dict contains the context key/values.
The second dict contains the feature_list key/values.
Raises:
ValueError: If context_sparse and context_dense key sets intersect,
if input lengths do not match up, or if a value in
feature_list_dense_defaults is not None.
TypeError: if feature_list_dense_defaults is not either None or a dict.
"""
with ops.name_scope(name, "ParseSingleSequenceExample", [serialized]):
context_dense_defaults = (
{} if context_dense_defaults is None else context_dense_defaults)
context_sparse_keys = (
[] if context_sparse_keys is None else context_sparse_keys)
context_sparse_types = (
[] if context_sparse_types is None else context_sparse_types)
context_dense_keys = (
[] if context_dense_keys is None else context_dense_keys)
context_dense_types = (
[] if context_dense_types is None else context_dense_types)
context_dense_shapes = (
[[]] * len(context_dense_keys)
if context_dense_shapes is None else context_dense_shapes)
feature_list_sparse_keys = (
[] if feature_list_sparse_keys is None else feature_list_sparse_keys)
feature_list_sparse_types = (
[] if feature_list_sparse_types is None else feature_list_sparse_types)
feature_list_dense_keys = (
[] if feature_list_dense_keys is None else feature_list_dense_keys)
feature_list_dense_types = (
[] if feature_list_dense_types is None else feature_list_dense_types)
feature_list_dense_shapes = (
[[]] * len(feature_list_dense_keys)
if feature_list_dense_shapes is None else feature_list_dense_shapes)
feature_list_dense_defaults = (
dict() if feature_list_dense_defaults is None
else feature_list_dense_defaults)
debug_name = "" if debug_name is None else debug_name
# Internal
feature_list_dense_missing_assumed_empty = []
num_context_dense = len(context_dense_keys)
num_feature_list_dense = len(feature_list_dense_keys)
num_context_sparse = len(context_sparse_keys)
num_feature_list_sparse = len(feature_list_sparse_keys)
if len(context_dense_shapes) != num_context_dense:
raise ValueError(
"len(context_dense_shapes) != len(context_dense_keys): %d vs. %d"
% (len(context_dense_shapes), num_context_dense))
if len(context_dense_types) != num_context_dense:
raise ValueError(
"len(context_dense_types) != len(num_context_dense): %d vs. %d"
% (len(context_dense_types), num_context_dense))
if len(feature_list_dense_shapes) != num_feature_list_dense:
raise ValueError(
"len(feature_list_dense_shapes) != len(feature_list_dense_keys): "
"%d vs. %d" % (len(feature_list_dense_shapes),
num_feature_list_dense))
if len(feature_list_dense_types) != num_feature_list_dense:
raise ValueError(
"len(feature_list_dense_types) != len(num_feature_list_dense):"
"%d vs. %d" % (len(feature_list_dense_types), num_feature_list_dense))
if len(context_sparse_types) != num_context_sparse:
raise ValueError(
"len(context_sparse_types) != len(context_sparse_keys): %d vs. %d"
% (len(context_sparse_types), num_context_sparse))
if len(feature_list_sparse_types) != num_feature_list_sparse:
raise ValueError(
"len(feature_list_sparse_types) != len(feature_list_sparse_keys): "
"%d vs. %d"
% (len(feature_list_sparse_types), num_feature_list_sparse))
if (num_context_dense + num_context_sparse
+ num_feature_list_dense + num_feature_list_sparse) == 0:
raise ValueError(
"Must provide at least one context_sparse key, context_dense key, "
", feature_list_sparse key, or feature_list_dense key")
if not set(context_dense_keys).isdisjoint(set(context_sparse_keys)):
raise ValueError(
"context_dense and context_sparse keys must not intersect; "
"intersection: %s" %
set(context_dense_keys).intersection(set(context_sparse_keys)))
if not set(feature_list_dense_keys).isdisjoint(
set(feature_list_sparse_keys)):
raise ValueError(
"feature_list_dense and feature_list_sparse keys must not intersect; "
"intersection: %s" %
set(feature_list_dense_keys).intersection(
set(feature_list_sparse_keys)))
if not isinstance(feature_list_dense_defaults, dict):
raise TypeError("feature_list_dense_defaults must be a dict")
for k, v in feature_list_dense_defaults.items():
if v is not None:
raise ValueError("Value feature_list_dense_defaults[%s] must be None"
% k)
feature_list_dense_missing_assumed_empty.append(k)
context_dense_defaults_vec = []
for i, key in enumerate(context_dense_keys):
default_value = context_dense_defaults.get(key)
if default_value is None:
default_value = constant_op.constant([], dtype=context_dense_types[i])
elif not isinstance(default_value, ops.Tensor):
key_name = "key_" + re.sub("[^A-Za-z0-9_.\\-/]", "_", key)
default_value = ops.convert_to_tensor(
default_value, dtype=context_dense_types[i], name=key_name)
default_value = array_ops.reshape(
default_value, context_dense_shapes[i])
context_dense_defaults_vec.append(default_value)
context_dense_shapes = [tensor_shape.as_shape(shape).as_proto()
for shape in context_dense_shapes]
feature_list_dense_shapes = [tensor_shape.as_shape(shape).as_proto()
for shape in feature_list_dense_shapes]
# pylint: disable=protected-access
outputs = gen_parsing_ops._parse_single_sequence_example(
serialized=serialized,
debug_name=debug_name,
context_dense_defaults=context_dense_defaults_vec,
context_sparse_keys=context_sparse_keys,
context_sparse_types=context_sparse_types,
context_dense_keys=context_dense_keys,
context_dense_shapes=context_dense_shapes,
feature_list_sparse_keys=feature_list_sparse_keys,
feature_list_sparse_types=feature_list_sparse_types,
feature_list_dense_keys=feature_list_dense_keys,
feature_list_dense_types=feature_list_dense_types,
feature_list_dense_shapes=feature_list_dense_shapes,
feature_list_dense_missing_assumed_empty=(
feature_list_dense_missing_assumed_empty),
name=name)
# pylint: enable=protected-access
(context_sparse_indices, context_sparse_values,
context_sparse_shapes, context_dense_values,
feature_list_sparse_indices, feature_list_sparse_values,
feature_list_sparse_shapes, feature_list_dense_values) = outputs
context_sparse_tensors = [
sparse_tensor.SparseTensor(ix, val, shape) for (ix, val, shape)
in zip(context_sparse_indices,
context_sparse_values,
context_sparse_shapes)]
feature_list_sparse_tensors = [
sparse_tensor.SparseTensor(ix, val, shape) for (ix, val, shape)
in zip(feature_list_sparse_indices,
feature_list_sparse_values,
feature_list_sparse_shapes)]
context_output = dict(
zip(context_sparse_keys + context_dense_keys,
context_sparse_tensors + context_dense_values))
feature_list_output = dict(
zip(feature_list_sparse_keys + feature_list_dense_keys,
feature_list_sparse_tensors + feature_list_dense_values))
return (context_output, feature_list_output)
| apache-2.0 | -5,405,936,189,678,612,000 | 41.634948 | 119 | 0.653674 | false | 3.945721 | false | false | false |
abonil91/ncanda-data-integration | scripts/redcap/scoring/ctq/__init__.py | 1 | 3092 | #!/usr/bin/env python
##
## Copyright 2016 SRI International
## See COPYING file distributed along with the package for the copyright and license terms.
##
import pandas
import Rwrapper
#
# Variables from surveys needed for CTQ
#
# LimeSurvey field names
lime_fields = [ "ctq_set1 [ctq1]", "ctq_set1 [ctq2]", "ctq_set1 [ctq3]", "ctq_set1 [ctq4]", "ctq_set1 [ctq5]", "ctq_set1 [ctq6]", "ctq_set1 [ctq7]", "ctq_set2 [ctq8]", "ctq_set2 [ctq9]", "ctq_set2 [ct10]", "ctq_set2 [ct11]",
"ctq_set2 [ct12]", "ctq_set2 [ct13]", "ctq_set2 [ct14]", "ctq_set3 [ctq15]", "ctq_set3 [ctq16]", "ctq_set3 [ctq17]", "ctq_set3 [ctq18]", "ctq_set3 [ctq19]", "ctq_set3 [ctq20]", "ctq_set3 [ctq21]",
"ctq_set4 [ctq22]", "ctq_set4 [ctq23]", "ctq_set4 [ctq24]", "ctq_set4 [ctq25]", "ctq_set4 [ctq26]", "ctq_set4 [ctq27]", "ctq_set4 [ctq28]" ]
# Dictionary to recover LimeSurvey field names from REDCap names
rc2lime = dict()
for field in lime_fields:
rc2lime[Rwrapper.label_to_sri( 'youthreport2', field )] = field
# REDCap fields names
input_fields = { 'mrireport' : [ 'youth_report_2_complete', 'youthreport2_missing' ] + rc2lime.keys() }
#
# This determines the name of the form in REDCap where the results are posted.
#
output_form = 'clinical'
#
# CTQ field names mapping from R to REDCap
#
R2rc = { 'Emotional Abuse Scale Total Score' : 'ctq_ea',
'Physical Abuse Scale Total Score' : 'ctq_pa',
'Sexual Abuse Scale Total Score' : 'ctq_sa',
'Emotional Neglect Scale Total Score' : 'ctq_en',
'Physical Neglect Scale Total Score' : 'ctq_pn',
'Minimization/Denial Scale Total Score' : 'ctq_minds' }
#
# Scoring function - take requested data (as requested by "input_fields") for each (subject,event), and demographics (date of birth, gender) for each subject.
#
def compute_scores( data, demographics ):
# Get rid of all records that don't have YR2
    data = data.dropna( axis=0, subset=['youth_report_2_complete'] )
data = data[ data['youth_report_2_complete'] > 0 ]
data = data[ ~(data['youthreport2_missing'] > 0) ]
# If no records to score, return empty DF
if len( data ) == 0:
return pandas.DataFrame()
# Replace all column labels with the original LimeSurvey names
data.columns = Rwrapper.map_labels( data.columns, rc2lime )
# Call the scoring function for all table rows
scores = data.apply( Rwrapper.runscript, axis=1, Rscript='ctq/CTQ.R', scores_key='CTQ.ary' )
# Replace all score columns with REDCap field names
scores.columns = Rwrapper.map_labels( scores.columns, R2rc )
# Simply copy completion status from the input surveys
scores['ctq_complete'] = data['youth_report_2_complete'].map( int )
# Make a proper multi-index for the scores table
scores.index = pandas.MultiIndex.from_tuples(scores.index)
scores.index.names = ['study_id', 'redcap_event_name']
# Return the computed scores - this is what will be imported back into REDCap
outfield_list = [ 'ctq_complete' ] + R2rc.values()
return scores[ outfield_list ]
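# A minimal usage sketch (the subject id and event name below are assumptions):
# `data` holds the REDCap fields listed in `input_fields`, indexed by
# (study_id, redcap_event_name) tuples; the returned frame keeps that index and
# carries the ctq_* columns defined in R2rc plus 'ctq_complete', e.g.
#   scores = compute_scores( data, demographics )
#   scores.loc[ ('X-00000-F-0', 'baseline_visit_arm_1'), 'ctq_ea' ]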
| bsd-3-clause | 6,952,695,894,297,899,000 | 39.684211 | 224 | 0.663648 | false | 2.906015 | false | false | false |
Simran-B/arangodb | 3rdParty/V8-4.3.61/third_party/python_26/Lib/site-packages/win32/Demos/security/account_rights.py | 34 | 1472 | import win32security,win32file,win32api,ntsecuritycon,win32con
from security_enums import TRUSTEE_TYPE,TRUSTEE_FORM,ACE_FLAGS,ACCESS_MODE
new_privs = ((win32security.LookupPrivilegeValue('',ntsecuritycon.SE_SECURITY_NAME),win32con.SE_PRIVILEGE_ENABLED),
(win32security.LookupPrivilegeValue('',ntsecuritycon.SE_CREATE_PERMANENT_NAME),win32con.SE_PRIVILEGE_ENABLED),
(win32security.LookupPrivilegeValue('','SeEnableDelegationPrivilege'),win32con.SE_PRIVILEGE_ENABLED) ##doesn't seem to be in ntsecuritycon.py ?
)
ph = win32api.GetCurrentProcess()
th = win32security.OpenProcessToken(ph,win32security.TOKEN_ALL_ACCESS) ##win32con.TOKEN_ADJUST_PRIVILEGES)
win32security.AdjustTokenPrivileges(th,0,new_privs)
policy_handle = win32security.GetPolicyHandle('',win32security.POLICY_ALL_ACCESS)
tmp_sid = win32security.LookupAccountName('','tmp')[0]
privs=[ntsecuritycon.SE_DEBUG_NAME,ntsecuritycon.SE_TCB_NAME,ntsecuritycon.SE_RESTORE_NAME,ntsecuritycon.SE_REMOTE_SHUTDOWN_NAME]
win32security.LsaAddAccountRights(policy_handle,tmp_sid,privs)
privlist=win32security.LsaEnumerateAccountRights(policy_handle,tmp_sid)
for priv in privlist:
print priv
privs=[ntsecuritycon.SE_DEBUG_NAME,ntsecuritycon.SE_TCB_NAME]
win32security.LsaRemoveAccountRights(policy_handle,tmp_sid,0,privs)
privlist=win32security.LsaEnumerateAccountRights(policy_handle,tmp_sid)
for priv in privlist:
print priv
win32security.LsaClose(policy_handle)
| apache-2.0 | -8,382,494,482,341,797,000 | 46.483871 | 156 | 0.80231 | false | 3.041322 | false | false | false |
deklungel/iRulez | src/dimmer/mqtt_sender.py | 1 | 1164 | import src.irulez.util as util
import src.irulez.topic_factory as topic_factory
import src.irulez.log as log
import paho.mqtt.client as mqtt
import uuid
logger = log.get_logger('dimmer_mqtt_sender')
class MqttSender:
def __init__(self, client: mqtt.Client):
self.__client = client
def publish_dimming_action_to_timer(self, dimming_action_id: uuid.UUID, delay: int):
publish_topic = topic_factory.create_timer_dimmer_timer_fired_topic()
topic_name = topic_factory.create_timer_dimmer_timer_fired_response_topic()
payload = util.serialize_json({
'topic': topic_name,
'payload': str(dimming_action_id),
'delay': delay
})
logger.debug(f"Publishing: {publish_topic}{payload}")
self.__client.publish(publish_topic, payload, 0, False)
def publish_dimming_action_to_arduino(self, arduino_name: str, pin_number: int, dim_value: int):
publish_topic = topic_factory.create_arduino_dim_action_topic(arduino_name, pin_number)
logger.debug(f"Publishing: {publish_topic} / {dim_value}")
self.__client.publish(publish_topic, dim_value, 0, False)
| mit | -1,275,875,996,077,186,000 | 39.137931 | 100 | 0.67354 | false | 3.278873 | false | false | false |
master2be1/pychess | lib/pychess/widgets/preferencesDialog.py | 20 | 23931 | from __future__ import print_function
import sys, os
from os import listdir
from os.path import isdir, isfile, splitext
from xml.dom import minidom
from gi.repository import Gtk, GdkPixbuf
from gi.repository import Gdk
from pychess.System.prefix import addDataPrefix, getDataPrefix
from pychess.System.glock import glock_connect_after
from pychess.System import conf, gstreamer, uistuff
from pychess.System.uistuff import POSITION_GOLDEN
from pychess.Players.engineNest import discoverer
from pychess.Utils.const import *
from pychess.Utils.IconLoader import load_icon
from pychess.gfx import Pieces
firstRun = True
def run(widgets):
global firstRun
if firstRun:
initialize(widgets)
firstRun = False
widgets["preferences"].show()
widgets["preferences"].present()
def initialize(widgets):
GeneralTab(widgets)
HintTab(widgets)
SoundTab(widgets)
PanelTab(widgets)
ThemeTab(widgets)
uistuff.keepWindowSize("preferencesdialog", widgets["preferences"],
defaultPosition=POSITION_GOLDEN)
def delete_event (widget, *args):
widgets["preferences"].hide()
return True
widgets["preferences"].connect("delete-event", delete_event)
widgets["preferences"].connect("key-press-event",
lambda w,e: w.event(Gdk.Event(Gdk.EventType.DELETE))
if e.keyval == Gdk.KEY_Escape else None)
################################################################################
# General initing #
################################################################################
class GeneralTab:
def __init__ (self, widgets):
conf.set("firstName", conf.get("firstName", conf.username))
conf.set("secondName", conf.get("secondName", _("Guest")))
# Give to uistuff.keeper
for key in ("firstName", "secondName", "showEmt", "showEval",
"hideTabs", "faceToFace", "showCords", "showCaptured",
"figuresInNotation", "fullAnimation", "moveAnimation", "noAnimation"):
uistuff.keep(widgets[key], key)
# Options on by default
for key in ("autoRotate", "fullAnimation", "showBlunder"):
uistuff.keep(widgets[key], key, first_value=True)
################################################################################
# Hint initing #
################################################################################
def anal_combo_get_value (combobox):
engine = list(discoverer.getAnalyzers())[combobox.get_active()]
return engine.get("md5")
def anal_combo_set_value (combobox, value, show_arrow_check, ana_check, analyzer_type):
engine = discoverer.getEngineByMd5(value)
if engine is None:
combobox.set_active(0)
# This return saves us from the None-engine being used
# in later code -Jonas Thiem
return
else:
try:
index = list(discoverer.getAnalyzers()).index(engine)
except ValueError:
index = 0
combobox.set_active(index)
from pychess.Main import gameDic
from pychess.widgets.gamewidget import widgets
for gmwidg in gameDic.keys():
spectators = gmwidg.gamemodel.spectators
md5 = engine.get('md5')
if analyzer_type in spectators and \
spectators[analyzer_type].md5 != md5:
gmwidg.gamemodel.remove_analyzer(analyzer_type)
gmwidg.gamemodel.start_analyzer(analyzer_type)
if not widgets[show_arrow_check].get_active():
gmwidg.gamemodel.pause_analyzer(analyzer_type)
class HintTab:
def __init__ (self, widgets):
self.widgets = widgets
# Options on by default
for key in ("opening_check", "endgame_check", "online_egtb_check",
"analyzer_check", "inv_analyzer_check"):
uistuff.keep(widgets[key], key, first_value=True)
# Opening book
default_path = os.path.join(addDataPrefix("pychess_book.bin"))
path = conf.get("opening_file_entry", default_path)
conf.set("opening_file_entry", path)
book_chooser_dialog = Gtk.FileChooserDialog(_("Select book file"), None, Gtk.FileChooserAction.OPEN,
(Gtk.STOCK_CANCEL, Gtk.ResponseType.CANCEL, Gtk.STOCK_OPEN, Gtk.ResponseType.OK))
book_chooser_button = Gtk.FileChooserButton(book_chooser_dialog)
filter = Gtk.FileFilter()
filter.set_name(_("Opening books"))
filter.add_pattern("*.bin")
book_chooser_dialog.add_filter(filter)
book_chooser_button.set_filename(path)
self.widgets["bookChooserDock"].add(book_chooser_button)
book_chooser_button.show()
def select_new_book(button):
new_book = book_chooser_dialog.get_filename()
if new_book:
conf.set("opening_file_entry", new_book)
else:
# restore the original
book_chooser_dialog.set_filename(path)
book_chooser_button.connect("file-set", select_new_book)
def on_opening_check_toggled (check):
widgets["opening_hbox"].set_sensitive(check.get_active())
widgets["opening_check"].connect_after("toggled",
on_opening_check_toggled)
# Endgame
default_path = os.path.join(getDataPrefix())
egtb_path = conf.get("egtb_path", default_path)
conf.set("egtb_path", egtb_path)
egtb_chooser_dialog = Gtk.FileChooserDialog(_("Select Gaviota TB path"), None, Gtk.FileChooserAction.SELECT_FOLDER,
(Gtk.STOCK_CANCEL, Gtk.ResponseType.CANCEL, Gtk.STOCK_OPEN, Gtk.ResponseType.OK))
egtb_chooser_button = Gtk.FileChooserButton.new_with_dialog(egtb_chooser_dialog)
egtb_chooser_button.set_current_folder(egtb_path)
self.widgets["egtbChooserDock"].add(egtb_chooser_button)
egtb_chooser_button.show()
def select_egtb(button):
new_directory = egtb_chooser_dialog.get_filename()
if new_directory != egtb_path:
conf.set("egtb_path", new_directory)
egtb_chooser_button.connect("current-folder-changed", select_egtb)
def on_endgame_check_toggled (check):
widgets["endgame_hbox"].set_sensitive(check.get_active())
widgets["endgame_check"].connect_after("toggled",
on_endgame_check_toggled)
# Analyzing engines
uistuff.createCombo(widgets["ana_combobox"])
uistuff.createCombo(widgets["inv_ana_combobox"])
from pychess.widgets import newGameDialog
def update_analyzers_store(discoverer):
data = [(item[0], item[1]) for item in newGameDialog.analyzerItems]
uistuff.updateCombo(widgets["ana_combobox"], data)
uistuff.updateCombo(widgets["inv_ana_combobox"], data)
glock_connect_after(discoverer, "all_engines_discovered",
update_analyzers_store)
update_analyzers_store(discoverer)
# Save, load and make analyze combos active
conf.set("ana_combobox", conf.get("ana_combobox", 0))
conf.set("inv_ana_combobox", conf.get("inv_ana_combobox", 0))
def on_analyzer_check_toggled (check):
widgets["analyzers_vbox"].set_sensitive(check.get_active())
from pychess.Main import gameDic
if gameDic:
if check.get_active():
for gmwidg in gameDic.keys():
gmwidg.gamemodel.restart_analyzer(HINT)
if not widgets["hint_mode"].get_active():
gmwidg.gamemodel.pause_analyzer(HINT)
else:
for gmwidg in gameDic.keys():
gmwidg.gamemodel.remove_analyzer(HINT)
widgets["analyzers_vbox"].set_sensitive(
widgets["analyzer_check"].get_active())
widgets["analyzer_check"].connect_after("toggled",
on_analyzer_check_toggled)
def on_invanalyzer_check_toggled (check):
widgets["inv_analyzers_vbox"].set_sensitive(check.get_active())
from pychess.Main import gameDic
if gameDic:
if check.get_active():
for gmwidg in gameDic.keys():
gmwidg.gamemodel.restart_analyzer(SPY)
if not widgets["spy_mode"].get_active():
gmwidg.gamemodel.pause_analyzer(SPY)
else:
for gmwidg in gameDic.keys():
gmwidg.gamemodel.remove_analyzer(SPY)
widgets["inv_analyzers_vbox"].set_sensitive(
widgets["inv_analyzer_check"].get_active())
widgets["inv_analyzer_check"].connect_after("toggled",
on_invanalyzer_check_toggled)
# Give widgets to keeper
uistuff.keep(widgets["ana_combobox"], "ana_combobox", anal_combo_get_value,
lambda combobox, value: anal_combo_set_value(combobox, value, "hint_mode",
"analyzer_check", HINT))
uistuff.keep(widgets["inv_ana_combobox"], "inv_ana_combobox", anal_combo_get_value,
lambda combobox, value: anal_combo_set_value(combobox, value, "spy_mode",
"inv_analyzer_check", SPY))
uistuff.keep(widgets["max_analysis_spin"], "max_analysis_spin", first_value=3)
################################################################################
# Sound initing #
################################################################################
# Setup default sounds
for i in range(11):
if not conf.hasKey("soundcombo%d" % i):
conf.set("soundcombo%d" % i, SOUND_URI)
if not conf.hasKey("sounduri0"):
conf.set("sounduri0", "file://"+addDataPrefix("sounds/move1.ogg"))
if not conf.hasKey("sounduri1"):
conf.set("sounduri1", "file://"+addDataPrefix("sounds/check1.ogg"))
if not conf.hasKey("sounduri2"):
conf.set("sounduri2", "file://"+addDataPrefix("sounds/capture1.ogg"))
if not conf.hasKey("sounduri3"):
conf.set("sounduri3", "file://"+addDataPrefix("sounds/start1.ogg"))
if not conf.hasKey("sounduri4"):
conf.set("sounduri4", "file://"+addDataPrefix("sounds/win1.ogg"))
if not conf.hasKey("sounduri5"):
conf.set("sounduri5", "file://"+addDataPrefix("sounds/lose1.ogg"))
if not conf.hasKey("sounduri6"):
conf.set("sounduri6", "file://"+addDataPrefix("sounds/draw1.ogg"))
if not conf.hasKey("sounduri7"):
conf.set("sounduri7", "file://"+addDataPrefix("sounds/obs_mov.ogg"))
if not conf.hasKey("sounduri8"):
conf.set("sounduri8", "file://"+addDataPrefix("sounds/obs_end.ogg"))
if not conf.hasKey("sounduri9"):
conf.set("sounduri9", "file://"+addDataPrefix("sounds/alarm.ogg"))
if not conf.hasKey("sounduri10"):
conf.set("sounduri10", "file://"+addDataPrefix("sounds/invalid.ogg"))
class SoundTab:
SOUND_DIRS = (addDataPrefix("sounds"), "/usr/share/sounds",
"/usr/local/share/sounds", os.environ["HOME"])
COUNT_OF_SOUNDS = 11
actionToKeyNo = {
"aPlayerMoves": 0,
"aPlayerChecks": 1,
"aPlayerCaptures": 2,
"gameIsSetup": 3,
"gameIsWon": 4,
"gameIsLost": 5,
"gameIsDrawn": 6,
"observedMoves": 7,
"oberservedEnds": 8,
"shortOnTime": 9,
"invalidMove": 10,
}
_player = None
@classmethod
def getPlayer (cls):
if not cls._player:
cls._player = gstreamer.Player()
return cls._player
@classmethod
def playAction (cls, action):
if not conf.get("useSounds", True):
return
if isinstance(action, str):
no = cls.actionToKeyNo[action]
else: no = action
typ = conf.get("soundcombo%d" % no, SOUND_MUTE)
if typ == SOUND_BEEP:
sys.stdout.write("\a")
sys.stdout.flush()
elif typ == SOUND_URI:
uri = conf.get("sounduri%d" % no, "")
if not os.path.isfile(uri[7:]):
conf.set("soundcombo%d" % no, SOUND_MUTE)
return
cls.getPlayer().play(uri)
def __init__ (self, widgets):
# Init open dialog
opendialog = Gtk.FileChooserDialog (
_("Open Sound File"), None, Gtk.FileChooserAction.OPEN,
(Gtk.STOCK_CANCEL, Gtk.ResponseType.CANCEL, Gtk.STOCK_OPEN,
Gtk.ResponseType.ACCEPT))
for dir in self.SOUND_DIRS:
if os.path.isdir(dir):
opendialog.set_current_folder(dir)
break
soundfilter = Gtk.FileFilter()
soundfilter.set_name(_("Sound files"))
#soundfilter.add_custom(soundfilter.get_needed(),
# lambda data: data[3] and data[3].startswith("audio/"))
soundfilter.add_mime_type("audio/*")
opendialog.add_filter(soundfilter)
opendialog.set_filter(soundfilter)
# Get combo icons
icons = ((_("No sound"), "audio-volume-muted", "audio-volume-muted"),
(_("Beep"), "stock_bell", "audio-x-generic"),
(_("Select sound file..."), "gtk-open", "document-open"))
items = []
for level, stock, altstock in icons:
image = load_icon(16, stock, altstock)
items += [(image, level)]
audioIco = load_icon(16, "audio-x-generic")
# Set-up combos
def callback (combobox, index):
if combobox.get_active() == SOUND_SELECT:
if opendialog.run() == Gtk.ResponseType.ACCEPT:
uri = opendialog.get_uri()
model = combobox.get_model()
conf.set("sounduri%d"%index, uri)
label = os.path.split(uri)[1]
if len(model) == 3:
model.append([audioIco, label])
else:
model.set(model.get_iter((3,)), 1, label)
combobox.set_active(3)
else:
combobox.set_active(conf.get("soundcombo%d"%index,SOUND_MUTE))
opendialog.hide()
for i in range(self.COUNT_OF_SOUNDS):
combo = widgets["soundcombo%d"%i]
uistuff.createCombo (combo, items)
combo.set_active(0)
combo.connect("changed", callback, i)
label = widgets["soundlabel%d"%i]
label.props.mnemonic_widget = combo
uri = conf.get("sounduri%d"%i,"")
if os.path.isfile(uri[7:]):
model = combo.get_model()
model.append([audioIco, os.path.split(uri)[1]])
combo.set_active(3)
for i in range(self.COUNT_OF_SOUNDS):
if conf.get("soundcombo%d"%i, SOUND_MUTE) == SOUND_URI and \
not os.path.isfile(conf.get("sounduri%d"%i,"")[7:]):
conf.set("soundcombo%d"%i, SOUND_MUTE)
uistuff.keep(widgets["soundcombo%d"%i], "soundcombo%d"%i)
#widgets["soundcombo%d"%i].set_active(conf.get("soundcombo%d"%i, SOUND_MUTE))
# Init play button
def playCallback (button, index):
SoundTab.playAction(index)
for i in range (self.COUNT_OF_SOUNDS):
button = widgets["soundbutton%d"%i]
button.connect("clicked", playCallback, i)
# Init 'use sound" checkbutton
def checkCallBack (*args):
checkbox = widgets["useSounds"]
widgets["frame23"].set_property("sensitive", checkbox.get_active())
conf.notify_add("useSounds", checkCallBack)
widgets["useSounds"].set_active(True)
uistuff.keep(widgets["useSounds"], "useSounds")
checkCallBack()
def soundError (player, gstmessage):
widgets["useSounds"].set_sensitive(False)
widgets["useSounds"].set_active(False)
self.getPlayer().connect("error", soundError)
uistuff.keep(widgets["alarm_spin"], "alarm_spin", first_value=15)
################################################################################
# Panel initing #
################################################################################
class PanelTab:
def __init__ (self, widgets):
# Put panels in trees
self.widgets = widgets
from pychess.widgets.gamewidget import sidePanels, dockLocation
saved_panels = []
xmlOK = os.path.isfile(dockLocation)
if xmlOK:
doc = minidom.parse(dockLocation)
for elem in doc.getElementsByTagName("panel"):
saved_panels.append(elem.getAttribute("id"))
store = Gtk.ListStore(bool, GdkPixbuf.Pixbuf, str, object)
for panel in sidePanels:
checked = True if not xmlOK else panel.__name__ in saved_panels
panel_icon = GdkPixbuf.Pixbuf.new_from_file_at_size(panel.__icon__, 32, 32)
text = "<b>%s</b>\n%s" % (panel.__title__, panel.__desc__)
store.append((checked, panel_icon, text, panel))
self.tv = widgets["treeview1"]
self.tv.set_model(store)
self.widgets['panel_about_button'].connect('clicked', self.panel_about)
self.widgets['panel_enable_button'].connect('toggled', self.panel_toggled)
self.tv.get_selection().connect('changed', self.selection_changed)
pixbuf = Gtk.CellRendererPixbuf()
pixbuf.props.yalign = 0
pixbuf.props.ypad = 3
pixbuf.props.xpad = 3
self.tv.append_column(Gtk.TreeViewColumn("Icon", pixbuf, pixbuf=1, sensitive=0))
uistuff.appendAutowrapColumn(self.tv, "Name", markup=2, sensitive=0)
widgets['notebook1'].connect("switch-page", self.__on_switch_page)
widgets["preferences"].connect("show", self.__on_show_window)
widgets["preferences"].connect("hide", self.__on_hide_window)
def selection_changed(self, treeselection):
store, iter = self.tv.get_selection().get_selected()
self.widgets['panel_enable_button'].set_sensitive(bool(iter))
self.widgets['panel_about_button'].set_sensitive(bool(iter))
if iter:
active = self.tv.get_model().get(iter, 0)[0]
self.widgets['panel_enable_button'].set_active(active)
def panel_about(self, button):
store, iter = self.tv.get_selection().get_selected()
assert iter # The button should only be clickable when we have a selection
path = store.get_path(iter)
panel = store[path][3]
d = Gtk.MessageDialog (type=Gtk.MessageType.INFO, buttons=Gtk.ButtonsType.CLOSE)
d.set_markup ("<big><b>%s</b></big>" % panel.__title__)
text = panel.__about__ if hasattr(panel, '__about__') else _('Undescribed panel')
d.format_secondary_text (text)
d.run()
d.hide()
def panel_toggled(self, button):
store, iter = self.tv.get_selection().get_selected()
assert iter # The button should only be clickable when we have a selection
path = store.get_path(iter)
active = button.get_active()
if store[path][0] == active:
return
store[path][0] = active
self.__set_panel_active(store[path][3], active)
def __set_panel_active(self, panel, active):
name = panel.__name__
from pychess.widgets.gamewidget import notebooks, docks
from pychess.widgets.pydock import EAST
if active:
leaf = notebooks["board"].get_parent().get_parent()
leaf.dock(docks[name][1], EAST, docks[name][0], name)
else:
try:
notebooks[name].get_parent().get_parent().undock(notebooks[name])
except AttributeError:
# A new panel appeared in the panels directory
leaf = notebooks["board"].get_parent().get_parent()
leaf.dock(docks[name][1], EAST, docks[name][0], name)
def showit(self):
from pychess.widgets.gamewidget import showDesignGW
showDesignGW()
def hideit(self):
from pychess.widgets.gamewidget import hideDesignGW
hideDesignGW()
def __on_switch_page(self, notebook, page, page_num):
if notebook.get_nth_page(page_num) == self.widgets['sidepanels']:
self.showit()
else: self.hideit()
def __on_show_window(self, widget):
notebook = self.widgets['notebook1']
page_num = notebook.get_current_page()
if notebook.get_nth_page(page_num) == self.widgets['sidepanels']:
self.showit()
def __on_hide_window(self, widget):
self.hideit()
class ThemeTab:
def __init__ (self, widgets):
self.themes = self.discover_themes()
store = Gtk.ListStore(GdkPixbuf.Pixbuf, str)
for theme in self.themes:
pngfile = "%s/%s.png" % (addDataPrefix("pieces"), theme)
if isfile(pngfile):
pixbuf = GdkPixbuf.Pixbuf.new_from_file(pngfile)
store.append((pixbuf, theme))
else:
print("WARNING: No piece theme preview icons find. Run create_theme_preview.sh !")
break
iconView = widgets["pieceTheme"]
iconView.set_model(store)
iconView.set_pixbuf_column(0)
iconView.set_text_column(1)
#############################################
# Hack to fix spacing problem in iconview
# http://stackoverflow.com/questions/14090094/what-causes-the-different-display-behaviour-for-a-gtkiconview-between-different
def keep_size(crt, *args):
crt.handler_block(crt_notify)
crt.set_property('width', 40)
crt.handler_unblock(crt_notify)
crt, crp = iconView.get_cells()
crt_notify = crt.connect('notify', keep_size)
#############################################
def _get_active(iconview):
model = iconview.get_model()
selected = iconview.get_selected_items()
if len(selected) == 0:
return conf.get("pieceTheme", "Pychess")
i = selected[0][0]
theme = model[i][1]
Pieces.set_piece_theme(theme)
return theme
def _set_active(iconview, value):
try:
index = self.themes.index(value)
except ValueError:
index = 0
iconview.select_path(Gtk.TreePath(index,))
uistuff.keep(widgets["pieceTheme"], "pieceTheme", _get_active,
_set_active, "Pychess")
def discover_themes(self):
themes = ['Pychess']
pieces = addDataPrefix("pieces")
themes += [d.capitalize() for d in listdir(pieces) if isdir(os.path.join(pieces,d)) and d != 'ttf']
ttf = addDataPrefix("pieces/ttf")
themes += ["ttf-" + splitext(d)[0].capitalize() for d in listdir(ttf) if splitext(d)[1] == '.ttf']
themes.sort()
return themes
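# Illustrative result (directory names are hypothetical): with piece folders
# "merida" and "spatial" plus a "ttf" folder containing "chess7.ttf",
# discover_themes() returns ['Merida', 'Pychess', 'Spatial', 'ttf-Chess7'].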
| gpl-3.0 | -1,862,141,245,394,663,400 | 38.951586 | 133 | 0.551962 | false | 3.918618 | false | false | false |
alsrgv/tensorflow | tensorflow/python/data/kernel_tests/interleave_test.py | 2 | 10138 | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for `tf.data.Dataset.interleave()`."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import multiprocessing
from absl.testing import parameterized
import numpy as np
from tensorflow.python.data.kernel_tests import test_base
from tensorflow.python.data.ops import dataset_ops
from tensorflow.python.framework import errors
from tensorflow.python.framework import sparse_tensor
from tensorflow.python.framework import test_util
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import sparse_ops
from tensorflow.python.platform import test
def _interleave(lists, cycle_length, block_length):
"""Reference implementation of interleave used for testing.
Args:
lists: a list of lists to interleave
cycle_length: the length of the interleave cycle
block_length: the length of the interleave block
Yields:
Elements of `lists` interleaved in the order determined by `cycle_length`
and `block_length`.
"""
num_open = 0
# `all_iterators` acts as a queue of iterators over each element of `lists`.
all_iterators = [iter(l) for l in lists]
# `open_iterators` are the iterators whose elements are currently being
# interleaved.
open_iterators = []
if cycle_length == dataset_ops.AUTOTUNE:
cycle_length = multiprocessing.cpu_count()
for i in range(cycle_length):
if all_iterators:
open_iterators.append(all_iterators.pop(0))
num_open += 1
else:
open_iterators.append(None)
while num_open or all_iterators:
for i in range(cycle_length):
if open_iterators[i] is None:
if all_iterators:
open_iterators[i] = all_iterators.pop(0)
num_open += 1
else:
continue
for _ in range(block_length):
try:
yield next(open_iterators[i])
except StopIteration:
open_iterators[i] = None
num_open -= 1
break
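# Illustrative example (not part of the test suite): with a cycle of two and a
# block length of one, elements are drawn alternately from each open iterator:
#   list(_interleave([[1, 1], [2, 2, 2]], cycle_length=2, block_length=1))
#   == [1, 2, 1, 2, 2]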
def _repeat(values, count):
"""Produces a list of lists suitable for testing interleave.
Args:
values: for each element `x` the result contains `[x] * x`
count: determines how many times to repeat `[x] * x` in the result
Returns:
A list of lists of values suitable for testing interleave.
"""
return [[value] * value for value in np.tile(values, count)]
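# Illustrative example (not part of the test suite):
#   _repeat([2, 3], 1) == [[2, 2], [3, 3, 3]]
#   _repeat([2], 2)    == [[2, 2], [2, 2]]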
@test_util.run_all_in_graph_and_eager_modes
class InterleaveTest(test_base.DatasetTestBase, parameterized.TestCase):
@parameterized.named_parameters(
("1", [4, 5, 6], 1, 1, [
4, 4, 4, 4, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 4, 4, 4, 4, 5, 5, 5, 5,
5, 6, 6, 6, 6, 6, 6
]),
("2", [4, 5, 6], 2, 1, [
4, 5, 4, 5, 4, 5, 4, 5, 5, 6, 6, 4, 6, 4, 6, 4, 6, 4, 6, 5, 6, 5, 6,
5, 6, 5, 6, 5, 6, 6
]),
("3", [4, 5, 6], 2, 3, [
4, 4, 4, 5, 5, 5, 4, 5, 5, 6, 6, 6, 4, 4, 4, 6, 6, 6, 4, 5, 5, 5, 6,
6, 6, 5, 5, 6, 6, 6
]),
("4", [4, 5, 6], 7, 2, [
4, 4, 5, 5, 6, 6, 4, 4, 5, 5, 6, 6, 4, 4, 5, 5, 6, 6, 4, 4, 5, 5, 6,
6, 5, 6, 6, 5, 6, 6
]),
("5", [4, 0, 6], 2, 1,
[4, 4, 6, 4, 6, 4, 6, 6, 4, 6, 4, 6, 4, 4, 6, 6, 6, 6, 6, 6]),
)
def testPythonImplementation(self, input_values, cycle_length, block_length,
expected_elements):
input_lists = _repeat(input_values, 2)
for expected, produced in zip(
expected_elements, _interleave(input_lists, cycle_length,
block_length)):
self.assertEqual(expected, produced)
@parameterized.named_parameters(
("1", np.int64([4, 5, 6]), 1, 3, None),
("2", np.int64([4, 5, 6]), 1, 3, 1),
("3", np.int64([4, 5, 6]), 2, 1, None),
("4", np.int64([4, 5, 6]), 2, 1, 1),
("5", np.int64([4, 5, 6]), 2, 1, 2),
("6", np.int64([4, 5, 6]), 2, 3, None),
("7", np.int64([4, 5, 6]), 2, 3, 1),
("8", np.int64([4, 5, 6]), 2, 3, 2),
("9", np.int64([4, 5, 6]), 7, 2, None),
("10", np.int64([4, 5, 6]), 7, 2, 1),
("11", np.int64([4, 5, 6]), 7, 2, 3),
("12", np.int64([4, 5, 6]), 7, 2, 5),
("13", np.int64([4, 5, 6]), 7, 2, 7),
("14", np.int64([4, 5, 6]), dataset_ops.AUTOTUNE, 3, None),
("15", np.int64([4, 5, 6]), dataset_ops.AUTOTUNE, 3, 1),
("16", np.int64([]), 2, 3, None),
("17", np.int64([0, 0, 0]), 2, 3, None),
("18", np.int64([4, 0, 6]), 2, 3, None),
("19", np.int64([4, 0, 6]), 2, 3, 1),
("20", np.int64([4, 0, 6]), 2, 3, 2),
)
def testInterleaveDataset(self, input_values, cycle_length, block_length,
num_parallel_calls):
count = 2
dataset = dataset_ops.Dataset.from_tensor_slices(input_values).repeat(
count).interleave(
lambda x: dataset_ops.Dataset.from_tensors(x).repeat(x),
cycle_length, block_length, num_parallel_calls)
expected_output = [
element for element in _interleave(
_repeat(input_values, count), cycle_length, block_length)
]
self.assertDatasetProduces(dataset, expected_output)
@parameterized.named_parameters(
("1", np.float32([1., np.nan, 2., np.nan, 3.]), 1, 3, None),
("2", np.float32([1., np.nan, 2., np.nan, 3.]), 1, 3, 1),
("3", np.float32([1., np.nan, 2., np.nan, 3.]), 2, 1, None),
("4", np.float32([1., np.nan, 2., np.nan, 3.]), 2, 1, 1),
("5", np.float32([1., np.nan, 2., np.nan, 3.]), 2, 1, 2),
("6", np.float32([1., np.nan, 2., np.nan, 3.]), 2, 3, None),
("7", np.float32([1., np.nan, 2., np.nan, 3.]), 2, 3, 1),
("8", np.float32([1., np.nan, 2., np.nan, 3.]), 2, 3, 2),
("9", np.float32([1., np.nan, 2., np.nan, 3.]), 7, 2, None),
("10", np.float32([1., np.nan, 2., np.nan, 3.]), 7, 2, 1),
("11", np.float32([1., np.nan, 2., np.nan, 3.]), 7, 2, 3),
("12", np.float32([1., np.nan, 2., np.nan, 3.]), 7, 2, 5),
("13", np.float32([1., np.nan, 2., np.nan, 3.]), 7, 2, 7),
)
def testInterleaveDatasetError(self, input_values, cycle_length, block_length,
num_parallel_calls):
dataset = dataset_ops.Dataset.from_tensor_slices(input_values).map(
lambda x: array_ops.check_numerics(x, "message")).interleave(
dataset_ops.Dataset.from_tensors, cycle_length, block_length,
num_parallel_calls)
get_next = self.getNext(dataset)
for value in input_values:
if np.isnan(value):
with self.assertRaises(errors.InvalidArgumentError):
self.evaluate(get_next())
else:
self.assertEqual(value, self.evaluate(get_next()))
with self.assertRaises(errors.OutOfRangeError):
self.evaluate(get_next())
def testInterleaveSparse(self):
def _map_fn(i):
return sparse_tensor.SparseTensorValue(
indices=[[0, 0], [1, 1]], values=(i * [1, -1]), dense_shape=[2, 2])
def _interleave_fn(x):
return dataset_ops.Dataset.from_tensor_slices(
sparse_ops.sparse_to_dense(x.indices, x.dense_shape, x.values))
dataset = dataset_ops.Dataset.range(10).map(_map_fn).interleave(
_interleave_fn, cycle_length=1)
get_next = self.getNext(dataset)
for i in range(10):
for j in range(2):
expected = [i, 0] if j % 2 == 0 else [0, -i]
self.assertAllEqual(expected, self.evaluate(get_next()))
with self.assertRaises(errors.OutOfRangeError):
self.evaluate(get_next())
with self.assertRaises(errors.OutOfRangeError):
self.evaluate(get_next())
@parameterized.named_parameters(
("1", np.int64([4, 5, 6]), 1, 3, 1),
("2", np.int64([4, 5, 6]), 2, 1, 1),
("3", np.int64([4, 5, 6]), 2, 1, 2),
("4", np.int64([4, 5, 6]), 2, 3, 1),
("5", np.int64([4, 5, 6]), 2, 3, 2),
("6", np.int64([4, 5, 6]), 7, 2, 1),
("7", np.int64([4, 5, 6]), 7, 2, 3),
("8", np.int64([4, 5, 6]), 7, 2, 5),
("9", np.int64([4, 5, 6]), 7, 2, 7),
("10", np.int64([4, 5, 6]), dataset_ops.AUTOTUNE, 3, 1),
("11", np.int64([4, 0, 6]), 2, 3, 1),
("12", np.int64([4, 0, 6]), 2, 3, 2),
)
def testSloppyInterleaveDataset(self, input_values, cycle_length,
block_length, num_parallel_calls):
count = 2
dataset = dataset_ops.Dataset.from_tensor_slices(input_values).repeat(
count).interleave(
lambda x: dataset_ops.Dataset.from_tensors(x).repeat(x),
cycle_length, block_length, num_parallel_calls)
options = dataset_ops.Options()
options.experimental_deterministic = False
dataset = dataset.with_options(options)
expected_output = [
element for element in _interleave(
_repeat(input_values, count), cycle_length, block_length)
]
get_next = self.getNext(dataset)
actual_output = []
for _ in range(len(expected_output)):
actual_output.append(self.evaluate(get_next()))
self.assertAllEqual(expected_output.sort(), actual_output.sort())
def testInterleaveMap(self):
dataset = dataset_ops.Dataset.range(100)
def interleave_fn(x):
dataset = dataset_ops.Dataset.from_tensors(x)
return dataset.map(lambda x: x + x)
dataset = dataset.interleave(interleave_fn, cycle_length=5)
dataset = dataset.interleave(interleave_fn, cycle_length=5)
self.assertDatasetProduces(dataset, [4 * x for x in range(100)])
if __name__ == "__main__":
test.main()
| apache-2.0 | 1,540,068,868,335,517,000 | 37.547529 | 80 | 0.574078 | false | 3.086149 | true | false | false |
foursquare/pants | contrib/node/src/python/pants/contrib/node/subsystems/package_managers.py | 2 | 9205 | # coding=utf-8
# Copyright 2018 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
from __future__ import absolute_import, division, print_function, unicode_literals
import logging
from builtins import object
from pants.contrib.node.subsystems.command import command_gen
LOG = logging.getLogger(__name__)
PACKAGE_MANAGER_NPM = 'npm'
PACKAGE_MANAGER_YARNPKG = 'yarnpkg'
PACKAGE_MANAGER_YARNPKG_ALIAS = 'yarn'
VALID_PACKAGE_MANAGERS = [PACKAGE_MANAGER_NPM, PACKAGE_MANAGER_YARNPKG, PACKAGE_MANAGER_YARNPKG_ALIAS]
# TODO: Change to enum type when migrated to Python 3.4+
class PackageInstallationTypeOption(object):
PROD = 'prod'
DEV = 'dev'
PEER = 'peer'
BUNDLE = 'bundle'
OPTIONAL = 'optional'
NO_SAVE = 'not saved'
class PackageInstallationVersionOption(object):
EXACT = 'exact'
TILDE = 'tilde'
class PackageManager(object):
"""Defines node package manager functionalities."""
def __init__(self, name, tool_installations):
self.name = name
self.tool_installations = tool_installations
def _get_installation_args(self, install_optional, production_only, force, frozen_lockfile):
"""Returns command line args for installing package.
:param install_optional: True to request install optional dependencies.
:param production_only: True to only install production dependencies, i.e.
ignore devDependencies.
:param force: True to force re-download dependencies.
:param frozen_lockfile: True to disallow automatic update of lock files.
:rtype: list of strings
"""
raise NotImplementedError
def _get_run_script_args(self):
"""Returns command line args to run a package.json script.
:rtype: list of strings
"""
raise NotImplementedError
def _get_add_package_args(self, package, type_option, version_option):
"""Returns command line args to add a node pacakge.
:rtype: list of strings
"""
raise NotImplementedError()
def run_command(self, args=None, node_paths=None):
"""Returns a command that when executed will run an arbitury command via package manager."""
return command_gen(
self.tool_installations,
self.name,
args=args,
node_paths=node_paths
)
def install_module(
self,
install_optional=False,
production_only=False,
force=False,
frozen_lockfile=True,
node_paths=None):
"""Returns a command that when executed will install node package.
:param install_optional: True to install optional dependencies.
:param production_only: True to only install production dependencies, i.e.
ignore devDependencies.
:param force: True to force re-download dependencies.
:param frozen_lockfile: True to disallow automatic update of lock files.
:param node_paths: A list of paths that should be included in $PATH when
running installation.
"""
args=self._get_installation_args(
install_optional=install_optional,
production_only=production_only,
force=force,
frozen_lockfile=frozen_lockfile)
return self.run_command(args=args, node_paths=node_paths)
def run_script(self, script_name, script_args=None, node_paths=None):
"""Returns a command to execute a package.json script.
:param script_name: Name of the script to name. Note that script name 'test'
can be used to run node tests.
:param script_args: Args to be passed to package.json script.
:param node_paths: A list of paths that should be included in $PATH when
running the script.
"""
# TODO: consider add a pants.util function to manipulate command line.
package_manager_args = self._get_run_script_args()
package_manager_args.append(script_name)
if script_args:
package_manager_args.append('--')
package_manager_args.extend(script_args)
return self.run_command(args=package_manager_args, node_paths=node_paths)
def add_package(
self,
package,
node_paths=None,
type_option=PackageInstallationTypeOption.PROD,
version_option=None):
"""Returns a command that when executed will add a node package to current node module.
:param package: string. A valid npm/yarn package description. The accepted forms are
package-name, package-name@version, package-name@tag, file:/folder, file:/path/to.tgz
https://url/to.tgz
:param node_paths: A list of paths that should be included in $PATH when
running the script.
:param type_option: A value from PackageInstallationTypeOption that indicates the type
of package to be installed. Default to 'prod', which is a production dependency.
:param version_option: A value from PackageInstallationVersionOption that indicates how
to match version. Default to None, which uses package manager default.
"""
args=self._get_add_package_args(
package,
type_option=type_option,
version_option=version_option)
return self.run_command(args=args, node_paths=node_paths)
def run_cli(self, cli, args=None, node_paths=None):
"""Returns a command that when executed will run an installed cli via package manager."""
cli_args = [cli]
if args:
cli_args.append('--')
cli_args.extend(args)
return self.run_command(args=cli_args, node_paths=node_paths)
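# Illustrative sketch (script name and args are hypothetical): for a concrete
# manager, run_script("test", ["--verbose"]) builds
#   <run-script args> + ["test", "--", "--verbose"]
# e.g. ["run", "test", "--", "--verbose"] for yarn or
# ["run-script", "test", "--", "--verbose"] for npm, then wraps it with
# command_gen via run_command.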
class PackageManagerYarnpkg(PackageManager):
def __init__(self, tool_installation):
super(PackageManagerYarnpkg, self).__init__(PACKAGE_MANAGER_YARNPKG, tool_installation)
def _get_run_script_args(self):
return ['run']
def _get_installation_args(self, install_optional, production_only, force, frozen_lockfile):
return_args = ['--non-interactive']
if not install_optional:
return_args.append('--ignore-optional')
if production_only:
return_args.append('--production=true')
if force:
return_args.append('--force')
if frozen_lockfile:
return_args.append('--frozen-lockfile')
return return_args
def _get_add_package_args(self, package, type_option, version_option):
return_args = ['add', package]
package_type_option = {
PackageInstallationTypeOption.PROD: '', # Yarn save production is the default.
PackageInstallationTypeOption.DEV: '--dev',
PackageInstallationTypeOption.PEER: '--peer',
PackageInstallationTypeOption.OPTIONAL: '--optional',
PackageInstallationTypeOption.BUNDLE: None,
PackageInstallationTypeOption.NO_SAVE: None,
}.get(type_option)
if package_type_option is None:
LOG.warning('{} does not support {} packages, ignored.'.format(self.name, type_option))
elif package_type_option: # Skip over '' entries
return_args.append(package_type_option)
package_version_option = {
PackageInstallationVersionOption.EXACT: '--exact',
PackageInstallationVersionOption.TILDE: '--tilde',
}.get(version_option)
if package_version_option is None:
LOG.warning(
'{} does not support install with {} version, ignored'.format(self.name, version_option))
elif package_version_option: # Skip over '' entries
return_args.append(package_version_option)
return return_args
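# Illustrative example (the package name "left-pad" is hypothetical):
#   _get_add_package_args("left-pad",
#                         type_option=PackageInstallationTypeOption.DEV,
#                         version_option=PackageInstallationVersionOption.EXACT)
#   == ["add", "left-pad", "--dev", "--exact"]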
class PackageManagerNpm(PackageManager):
def __init__(self, tool_installation):
super(PackageManagerNpm, self).__init__(PACKAGE_MANAGER_NPM, tool_installation)
def _get_run_script_args(self):
return ['run-script']
def _get_installation_args(self, install_optional, production_only, force, frozen_lockfile):
return_args = ['install']
if not install_optional:
return_args.append('--no-optional')
if production_only:
return_args.append('--production')
if force:
return_args.append('--force')
if frozen_lockfile:
LOG.warning('{} does not support frozen lockfile option. Ignored.'.format(self.name))
return return_args
def _get_add_package_args(self, package, type_option, version_option):
return_args = ['install', package]
package_type_option = {
PackageInstallationTypeOption.PROD: '--save-prod',
PackageInstallationTypeOption.DEV: '--save-dev',
PackageInstallationTypeOption.PEER: None,
PackageInstallationTypeOption.OPTIONAL: '--save-optional',
PackageInstallationTypeOption.BUNDLE: '--save-bundle',
PackageInstallationTypeOption.NO_SAVE: '--no-save',
}.get(type_option)
if package_type_option is None:
LOG.warning('{} does not support {} packages, ignored.'.format(self.name, type_option))
elif package_type_option: # Skip over '' entries
return_args.append(package_type_option)
package_version_option = {
PackageInstallationVersionOption.EXACT: '--save-exact',
PackageInstallationVersionOption.TILDE: None,
}.get(version_option)
if package_version_option is None:
LOG.warning(
'{} does not support install with {} version, ignored.'.format(self.name, version_option))
elif package_version_option: # Skip over '' entries
return_args.append(package_version_option)
return return_args
def run_cli(self, cli, args=None, node_paths=None):
raise RuntimeError('npm does not support run cli directly. Please use Yarn instead.')
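# Illustrative example (same hypothetical package) showing that npm uses
# different flag names than yarn for the equivalent request:
#   _get_add_package_args("left-pad",
#                         type_option=PackageInstallationTypeOption.DEV,
#                         version_option=PackageInstallationVersionOption.EXACT)
#   == ["install", "left-pad", "--save-dev", "--save-exact"]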
| apache-2.0 | 3,468,031,671,416,038,000 | 36.72541 | 102 | 0.705703 | false | 3.920358 | false | false | false |
segwitcoin/SegwitCoin | contrib/linearize/linearize-hashes.py | 27 | 4579 | #!/usr/bin/env python3
#
# linearize-hashes.py: List blocks in a linear, no-fork version of the chain.
#
# Copyright (c) 2013-2016 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
#
from __future__ import print_function
try: # Python 3
import http.client as httplib
except ImportError: # Python 2
import httplib
import json
import re
import base64
import sys
import os
import os.path
settings = {}
##### Switch endian-ness #####
def hex_switchEndian(s):
""" Switches the endianness of a hex string (in pairs of hex chars) """
pairList = [s[i:i+2].encode() for i in range(0, len(s), 2)]
return b''.join(pairList[::-1]).decode()
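# Illustrative example: hex_switchEndian('a1b2c3') == 'c3b2a1' (byte pairs reversed).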
class BitcoinRPC:
def __init__(self, host, port, username, password):
authpair = "%s:%s" % (username, password)
authpair = authpair.encode('utf-8')
self.authhdr = b"Basic " + base64.b64encode(authpair)
self.conn = httplib.HTTPConnection(host, port=port, timeout=30)
def execute(self, obj):
try:
self.conn.request('POST', '/', json.dumps(obj),
{ 'Authorization' : self.authhdr,
'Content-type' : 'application/json' })
except ConnectionRefusedError:
print('RPC connection refused. Check RPC settings and the server status.',
file=sys.stderr)
return None
resp = self.conn.getresponse()
if resp is None:
print("JSON-RPC: no response", file=sys.stderr)
return None
body = resp.read().decode('utf-8')
resp_obj = json.loads(body)
return resp_obj
@staticmethod
def build_request(idx, method, params):
obj = { 'version' : '1.1',
'method' : method,
'id' : idx }
if params is None:
obj['params'] = []
else:
obj['params'] = params
return obj
@staticmethod
def response_is_error(resp_obj):
return 'error' in resp_obj and resp_obj['error'] is not None
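# Illustrative sketch (host and credentials are placeholders): requests are
# JSON-RPC 1.1 objects built by build_request and sent as a batch:
#   rpc = BitcoinRPC('127.0.0.1', 8332, 'someuser', 'somepassword')
#   reply = rpc.execute([rpc.build_request(0, 'getblockhash', [0])])
#   # on success, reply[0]['result'] holds the hash of block 0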
def get_block_hashes(settings, max_blocks_per_call=10000):
rpc = BitcoinRPC(settings['host'], settings['port'],
settings['rpcuser'], settings['rpcpassword'])
height = settings['min_height']
while height < settings['max_height']+1:
num_blocks = min(settings['max_height']+1-height, max_blocks_per_call)
batch = []
for x in range(num_blocks):
batch.append(rpc.build_request(x, 'getblockhash', [height + x]))
reply = rpc.execute(batch)
if reply is None:
print('Cannot continue. Program will halt.')
return None
for x,resp_obj in enumerate(reply):
if rpc.response_is_error(resp_obj):
print('JSON-RPC: error at height', height+x, ': ', resp_obj['error'], file=sys.stderr)
exit(1)
assert(resp_obj['id'] == x) # assume replies are in-sequence
if settings['rev_hash_bytes'] == 'true':
resp_obj['result'] = hex_switchEndian(resp_obj['result'])
print(resp_obj['result'])
height += num_blocks
def get_rpc_cookie():
# Open the cookie file
with open(os.path.join(os.path.expanduser(settings['datadir']), '.cookie'), 'r') as f:
combined = f.readline()
combined_split = combined.split(":")
settings['rpcuser'] = combined_split[0]
settings['rpcpassword'] = combined_split[1]
if __name__ == '__main__':
if len(sys.argv) != 2:
print("Usage: linearize-hashes.py CONFIG-FILE")
sys.exit(1)
f = open(sys.argv[1])
for line in f:
# skip comment lines
m = re.search('^\s*#', line)
if m:
continue
# parse key=value lines
m = re.search('^(\w+)\s*=\s*(\S.*)$', line)
if m is None:
continue
settings[m.group(1)] = m.group(2)
f.close()
if 'host' not in settings:
settings['host'] = '127.0.0.1'
if 'port' not in settings:
settings['port'] = 8332
if 'min_height' not in settings:
settings['min_height'] = 0
if 'max_height' not in settings:
settings['max_height'] = 313000
if 'rev_hash_bytes' not in settings:
settings['rev_hash_bytes'] = 'false'
use_userpass = True
use_datadir = False
if 'rpcuser' not in settings or 'rpcpassword' not in settings:
use_userpass = False
if 'datadir' in settings and not use_userpass:
use_datadir = True
if not use_userpass and not use_datadir:
print("Missing datadir or username and/or password in cfg file", file=stderr)
sys.exit(1)
settings['port'] = int(settings['port'])
settings['min_height'] = int(settings['min_height'])
settings['max_height'] = int(settings['max_height'])
# Force hash byte format setting to be lowercase to make comparisons easier.
settings['rev_hash_bytes'] = settings['rev_hash_bytes'].lower()
# Get the rpc user and pass from the cookie if the datadir is set
if use_datadir:
get_rpc_cookie()
get_block_hashes(settings)
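# Illustrative config file for this script (values are placeholders; rpcuser and
# rpcpassword may be omitted when datadir is set, so the .cookie file is used):
#   host=127.0.0.1
#   port=8332
#   rpcuser=someuser
#   rpcpassword=somepassword
#   min_height=0
#   max_height=313000
#   rev_hash_bytes=false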
| mit | 2,729,803,349,214,221,000 | 28.165605 | 90 | 0.672199 | false | 3.046574 | false | false | false |
perryjrandall/arsenalsuite | cpp/apps/bach/web/bach/models/keyword.py | 10 | 1381 | #
# Copyright (c) 2009 Dr. D Studios. (Please refer to license for details)
# SVN_META_HEADURL = "$HeadURL: $"
# SVN_META_ID = "$Id: keyword.py 9408 2010-03-03 22:35:49Z brobison $"
#
from sqlalchemy import Column, Table, types, ForeignKey, Index
from sqlalchemy.orm import relation, backref
from ..config import mapper, metadata
from .asset import Asset
class Keyword( object ):
def __init__( self ):
self.keybachkeyword = None
self.name = None
@property
def asset_count(self):
return 0 #len(self.assets)
def __repr__( self ):
return '<%s:%s:%s>' % ( self.__class__.__name__, self.keybachkeyword, self.name )
table = Table( 'bachkeyword', metadata,
Column( 'keybachkeyword', types.Integer, primary_key=True ),
Column( 'name', types.String, nullable=False ) )
join_table = Table( 'bachkeywordmap', metadata,
Column( 'fkeybachkeyword', types.Integer, ForeignKey( 'bachkeyword.keybachkeyword' ) ),
Column( 'fkeybachasset', types.Integer, ForeignKey( 'bachasset.keybachasset' ) ) )
mapper( Keyword, table,
properties={
'assets':relation( Asset,
secondary=join_table,
# backref='buckets'
),
} )
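# Illustrative usage (assumes a configured SQLAlchemy session from ..config; the
# keyword name is hypothetical):
#   session.query(Keyword).filter_by(name='hero').first().assets
# returns the Asset rows linked through the bachkeywordmap association table.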
| gpl-2.0 | -8,533,436,228,025,473,000 | 33.525 | 107 | 0.566256 | false | 4.002899 | false | false | false |
liu602348184/django | tests/migrations/test_writer.py | 65 | 22965 | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
import datetime
import math
import os
import re
import tokenize
import unittest
import custom_migration_operations.more_operations
import custom_migration_operations.operations
from django.conf import settings
from django.core.validators import EmailValidator, RegexValidator
from django.db import migrations, models
from django.db.migrations.writer import (
MigrationWriter, OperationWriter, SettingsReference,
)
from django.test import SimpleTestCase, ignore_warnings
from django.utils import datetime_safe, six
from django.utils._os import upath
from django.utils.deconstruct import deconstructible
from django.utils.timezone import FixedOffset, get_default_timezone, utc
from django.utils.translation import ugettext_lazy as _
from .models import FoodManager, FoodQuerySet
class TestModel1(object):
def upload_to(self):
return "somewhere dynamic"
thing = models.FileField(upload_to=upload_to)
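# TestModel1.thing references a function defined in the same class body; it is
# used below in test_serialize_direct_function_reference to check that such
# direct function references cannot be serialized (Python 2 only).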
class OperationWriterTests(SimpleTestCase):
def test_empty_signature(self):
operation = custom_migration_operations.operations.TestOperation()
buff, imports = OperationWriter(operation, indentation=0).serialize()
self.assertEqual(imports, {'import custom_migration_operations.operations'})
self.assertEqual(
buff,
'custom_migration_operations.operations.TestOperation(\n'
'),'
)
def test_args_signature(self):
operation = custom_migration_operations.operations.ArgsOperation(1, 2)
buff, imports = OperationWriter(operation, indentation=0).serialize()
self.assertEqual(imports, {'import custom_migration_operations.operations'})
self.assertEqual(
buff,
'custom_migration_operations.operations.ArgsOperation(\n'
' arg1=1,\n'
' arg2=2,\n'
'),'
)
def test_kwargs_signature(self):
operation = custom_migration_operations.operations.KwargsOperation(kwarg1=1)
buff, imports = OperationWriter(operation, indentation=0).serialize()
self.assertEqual(imports, {'import custom_migration_operations.operations'})
self.assertEqual(
buff,
'custom_migration_operations.operations.KwargsOperation(\n'
' kwarg1=1,\n'
'),'
)
def test_args_kwargs_signature(self):
operation = custom_migration_operations.operations.ArgsKwargsOperation(1, 2, kwarg2=4)
buff, imports = OperationWriter(operation, indentation=0).serialize()
self.assertEqual(imports, {'import custom_migration_operations.operations'})
self.assertEqual(
buff,
'custom_migration_operations.operations.ArgsKwargsOperation(\n'
' arg1=1,\n'
' arg2=2,\n'
' kwarg2=4,\n'
'),'
)
def test_nested_args_signature(self):
operation = custom_migration_operations.operations.ArgsOperation(
custom_migration_operations.operations.ArgsOperation(1, 2),
custom_migration_operations.operations.KwargsOperation(kwarg1=3, kwarg2=4)
)
buff, imports = OperationWriter(operation, indentation=0).serialize()
self.assertEqual(imports, {'import custom_migration_operations.operations'})
self.assertEqual(
buff,
'custom_migration_operations.operations.ArgsOperation(\n'
' arg1=custom_migration_operations.operations.ArgsOperation(\n'
' arg1=1,\n'
' arg2=2,\n'
' ),\n'
' arg2=custom_migration_operations.operations.KwargsOperation(\n'
' kwarg1=3,\n'
' kwarg2=4,\n'
' ),\n'
'),'
)
def test_multiline_args_signature(self):
operation = custom_migration_operations.operations.ArgsOperation("test\n arg1", "test\narg2")
buff, imports = OperationWriter(operation, indentation=0).serialize()
self.assertEqual(imports, {'import custom_migration_operations.operations'})
self.assertEqual(
buff,
"custom_migration_operations.operations.ArgsOperation(\n"
" arg1='test\\n arg1',\n"
" arg2='test\\narg2',\n"
"),"
)
def test_expand_args_signature(self):
operation = custom_migration_operations.operations.ExpandArgsOperation([1, 2])
buff, imports = OperationWriter(operation, indentation=0).serialize()
self.assertEqual(imports, {'import custom_migration_operations.operations'})
self.assertEqual(
buff,
'custom_migration_operations.operations.ExpandArgsOperation(\n'
' arg=[\n'
' 1,\n'
' 2,\n'
' ],\n'
'),'
)
def test_nested_operation_expand_args_signature(self):
operation = custom_migration_operations.operations.ExpandArgsOperation(
arg=[
custom_migration_operations.operations.KwargsOperation(
kwarg1=1,
kwarg2=2,
),
]
)
buff, imports = OperationWriter(operation, indentation=0).serialize()
self.assertEqual(imports, {'import custom_migration_operations.operations'})
self.assertEqual(
buff,
'custom_migration_operations.operations.ExpandArgsOperation(\n'
' arg=[\n'
' custom_migration_operations.operations.KwargsOperation(\n'
' kwarg1=1,\n'
' kwarg2=2,\n'
' ),\n'
' ],\n'
'),'
)
class WriterTests(SimpleTestCase):
"""
Tests the migration writer (makes migration files from Migration instances)
"""
def safe_exec(self, string, value=None):
l = {}
try:
exec(string, globals(), l)
except Exception as e:
if value:
self.fail("Could not exec %r (from value %r): %s" % (string.strip(), value, e))
else:
self.fail("Could not exec %r: %s" % (string.strip(), e))
return l
def serialize_round_trip(self, value):
string, imports = MigrationWriter.serialize(value)
return self.safe_exec("%s\ntest_value_result = %s" % ("\n".join(imports), string), value)['test_value_result']
def assertSerializedEqual(self, value):
self.assertEqual(self.serialize_round_trip(value), value)
def assertSerializedResultEqual(self, value, target):
self.assertEqual(MigrationWriter.serialize(value), target)
def assertSerializedFieldEqual(self, value):
new_value = self.serialize_round_trip(value)
self.assertEqual(value.__class__, new_value.__class__)
self.assertEqual(value.max_length, new_value.max_length)
self.assertEqual(value.null, new_value.null)
self.assertEqual(value.unique, new_value.unique)
def test_serialize_numbers(self):
self.assertSerializedEqual(1)
self.assertSerializedEqual(1.2)
self.assertTrue(math.isinf(self.serialize_round_trip(float("inf"))))
self.assertTrue(math.isinf(self.serialize_round_trip(float("-inf"))))
self.assertTrue(math.isnan(self.serialize_round_trip(float("nan"))))
def test_serialize_constants(self):
self.assertSerializedEqual(None)
self.assertSerializedEqual(True)
self.assertSerializedEqual(False)
def test_serialize_strings(self):
self.assertSerializedEqual(b"foobar")
string, imports = MigrationWriter.serialize(b"foobar")
self.assertEqual(string, "b'foobar'")
self.assertSerializedEqual("föobár")
string, imports = MigrationWriter.serialize("foobar")
self.assertEqual(string, "'foobar'")
def test_serialize_multiline_strings(self):
self.assertSerializedEqual(b"foo\nbar")
string, imports = MigrationWriter.serialize(b"foo\nbar")
self.assertEqual(string, "b'foo\\nbar'")
self.assertSerializedEqual("föo\nbár")
string, imports = MigrationWriter.serialize("foo\nbar")
self.assertEqual(string, "'foo\\nbar'")
def test_serialize_collections(self):
self.assertSerializedEqual({1: 2})
self.assertSerializedEqual(["a", 2, True, None])
self.assertSerializedEqual({2, 3, "eighty"})
self.assertSerializedEqual({"lalalala": ["yeah", "no", "maybe"]})
self.assertSerializedEqual(_('Hello'))
def test_serialize_builtin_types(self):
self.assertSerializedEqual([list, tuple, dict, set, frozenset])
self.assertSerializedResultEqual(
[list, tuple, dict, set, frozenset],
("[list, tuple, dict, set, frozenset]", set())
)
def test_serialize_functions(self):
with six.assertRaisesRegex(self, ValueError, 'Cannot serialize function: lambda'):
self.assertSerializedEqual(lambda x: 42)
self.assertSerializedEqual(models.SET_NULL)
string, imports = MigrationWriter.serialize(models.SET(42))
self.assertEqual(string, 'models.SET(42)')
self.serialize_round_trip(models.SET(42))
def test_serialize_datetime(self):
self.assertSerializedEqual(datetime.datetime.utcnow())
self.assertSerializedEqual(datetime.datetime.utcnow)
self.assertSerializedEqual(datetime.datetime.today())
self.assertSerializedEqual(datetime.datetime.today)
self.assertSerializedEqual(datetime.date.today())
self.assertSerializedEqual(datetime.date.today)
self.assertSerializedEqual(datetime.datetime.now().time())
self.assertSerializedEqual(datetime.datetime(2014, 1, 1, 1, 1, tzinfo=get_default_timezone()))
self.assertSerializedEqual(datetime.datetime(2013, 12, 31, 22, 1, tzinfo=FixedOffset(180)))
self.assertSerializedResultEqual(
datetime.datetime(2014, 1, 1, 1, 1),
("datetime.datetime(2014, 1, 1, 1, 1)", {'import datetime'})
)
self.assertSerializedResultEqual(
datetime.datetime(2012, 1, 1, 1, 1, tzinfo=utc),
(
"datetime.datetime(2012, 1, 1, 1, 1, tzinfo=utc)",
{'import datetime', 'from django.utils.timezone import utc'},
)
)
def test_serialize_datetime_safe(self):
self.assertSerializedResultEqual(
datetime_safe.date(2014, 3, 31),
("datetime.date(2014, 3, 31)", {'import datetime'})
)
self.assertSerializedResultEqual(
datetime_safe.time(10, 25),
("datetime.time(10, 25)", {'import datetime'})
)
self.assertSerializedResultEqual(
datetime_safe.datetime(2014, 3, 31, 16, 4, 31),
("datetime.datetime(2014, 3, 31, 16, 4, 31)", {'import datetime'})
)
def test_serialize_fields(self):
self.assertSerializedFieldEqual(models.CharField(max_length=255))
self.assertSerializedResultEqual(
models.CharField(max_length=255),
("models.CharField(max_length=255)", {"from django.db import models"})
)
self.assertSerializedFieldEqual(models.TextField(null=True, blank=True))
self.assertSerializedResultEqual(
models.TextField(null=True, blank=True),
("models.TextField(blank=True, null=True)", {'from django.db import models'})
)
def test_serialize_settings(self):
self.assertSerializedEqual(SettingsReference(settings.AUTH_USER_MODEL, "AUTH_USER_MODEL"))
self.assertSerializedResultEqual(
SettingsReference("someapp.model", "AUTH_USER_MODEL"),
("settings.AUTH_USER_MODEL", {"from django.conf import settings"})
)
self.assertSerializedResultEqual(
((x, x * x) for x in range(3)),
("((0, 0), (1, 1), (2, 4))", set())
)
def test_serialize_compiled_regex(self):
"""
Make sure compiled regex can be serialized.
"""
regex = re.compile(r'^\w+$', re.U)
self.assertSerializedEqual(regex)
def test_serialize_class_based_validators(self):
"""
Ticket #22943: Test serialization of class-based validators, including
compiled regexes.
"""
validator = RegexValidator(message="hello")
string = MigrationWriter.serialize(validator)[0]
self.assertEqual(string, "django.core.validators.RegexValidator(message='hello')")
self.serialize_round_trip(validator)
# Test with a compiled regex.
validator = RegexValidator(regex=re.compile(r'^\w+$', re.U))
string = MigrationWriter.serialize(validator)[0]
self.assertEqual(string, "django.core.validators.RegexValidator(regex=re.compile('^\\\\w+$', 32))")
self.serialize_round_trip(validator)
# Test a string regex with flag
validator = RegexValidator(r'^[0-9]+$', flags=re.U)
string = MigrationWriter.serialize(validator)[0]
self.assertEqual(string, "django.core.validators.RegexValidator('^[0-9]+$', flags=32)")
self.serialize_round_trip(validator)
# Test message and code
validator = RegexValidator('^[-a-zA-Z0-9_]+$', 'Invalid', 'invalid')
string = MigrationWriter.serialize(validator)[0]
self.assertEqual(string, "django.core.validators.RegexValidator('^[-a-zA-Z0-9_]+$', 'Invalid', 'invalid')")
self.serialize_round_trip(validator)
# Test with a subclass.
validator = EmailValidator(message="hello")
string = MigrationWriter.serialize(validator)[0]
self.assertEqual(string, "django.core.validators.EmailValidator(message='hello')")
self.serialize_round_trip(validator)
validator = deconstructible(path="migrations.test_writer.EmailValidator")(EmailValidator)(message="hello")
string = MigrationWriter.serialize(validator)[0]
self.assertEqual(string, "migrations.test_writer.EmailValidator(message='hello')")
validator = deconstructible(path="custom.EmailValidator")(EmailValidator)(message="hello")
with six.assertRaisesRegex(self, ImportError, "No module named '?custom'?"):
MigrationWriter.serialize(validator)
validator = deconstructible(path="django.core.validators.EmailValidator2")(EmailValidator)(message="hello")
with self.assertRaisesMessage(ValueError, "Could not find object EmailValidator2 in django.core.validators."):
MigrationWriter.serialize(validator)
def test_serialize_empty_nonempty_tuple(self):
"""
Ticket #22679: makemigrations generates invalid code for (an empty
tuple) default_permissions = ()
"""
empty_tuple = ()
one_item_tuple = ('a',)
many_items_tuple = ('a', 'b', 'c')
self.assertSerializedEqual(empty_tuple)
self.assertSerializedEqual(one_item_tuple)
self.assertSerializedEqual(many_items_tuple)
@unittest.skipUnless(six.PY2, "Only applies on Python 2")
def test_serialize_direct_function_reference(self):
"""
Ticket #22436: You cannot use a function straight from its body
(e.g. define the method and use it in the same body)
"""
with self.assertRaises(ValueError):
self.serialize_round_trip(TestModel1.thing)
def test_serialize_local_function_reference(self):
"""
Neither py2 nor py3 can serialize a reference in a local scope.
"""
class TestModel2(object):
def upload_to(self):
return "somewhere dynamic"
thing = models.FileField(upload_to=upload_to)
with self.assertRaises(ValueError):
self.serialize_round_trip(TestModel2.thing)
def test_serialize_local_function_reference_message(self):
"""
Make sure the user sees which module/function is the issue.
"""
class TestModel2(object):
def upload_to(self):
return "somewhere dynamic"
thing = models.FileField(upload_to=upload_to)
with six.assertRaisesRegex(self, ValueError,
'^Could not find function upload_to in migrations.test_writer'):
self.serialize_round_trip(TestModel2.thing)
def test_serialize_managers(self):
self.assertSerializedEqual(models.Manager())
self.assertSerializedResultEqual(
FoodQuerySet.as_manager(),
('migrations.models.FoodQuerySet.as_manager()', {'import migrations.models'})
)
self.assertSerializedEqual(FoodManager('a', 'b'))
self.assertSerializedEqual(FoodManager('x', 'y', c=3, d=4))
def test_serialize_frozensets(self):
self.assertSerializedEqual(frozenset())
self.assertSerializedEqual(frozenset("let it go"))
def test_serialize_timedelta(self):
self.assertSerializedEqual(datetime.timedelta())
self.assertSerializedEqual(datetime.timedelta(minutes=42))
def test_simple_migration(self):
"""
Tests serializing a simple migration.
"""
fields = {
'charfield': models.DateTimeField(default=datetime.datetime.utcnow),
'datetimefield': models.DateTimeField(default=datetime.datetime.utcnow),
}
options = {
'verbose_name': 'My model',
'verbose_name_plural': 'My models',
}
migration = type(str("Migration"), (migrations.Migration,), {
"operations": [
migrations.CreateModel("MyModel", tuple(fields.items()), options, (models.Model,)),
migrations.CreateModel("MyModel2", tuple(fields.items()), bases=(models.Model,)),
migrations.CreateModel(name="MyModel3", fields=tuple(fields.items()), options=options, bases=(models.Model,)),
migrations.DeleteModel("MyModel"),
migrations.AddField("OtherModel", "datetimefield", fields["datetimefield"]),
],
"dependencies": [("testapp", "some_other_one")],
})
writer = MigrationWriter(migration)
output = writer.as_string()
# It should NOT be unicode.
self.assertIsInstance(output, six.binary_type, "Migration as_string returned unicode")
# We don't test the output formatting - that's too fragile.
# Just make sure it runs for now, and that things look alright.
result = self.safe_exec(output)
self.assertIn("Migration", result)
# In order to preserve compatibility with Python 3.2 unicode literals
# prefix shouldn't be added to strings.
tokens = tokenize.generate_tokens(six.StringIO(str(output)).readline)
for token_type, token_source, (srow, scol), __, line in tokens:
if token_type == tokenize.STRING:
self.assertFalse(
token_source.startswith('u'),
"Unicode literal prefix found at %d:%d: %r" % (
srow, scol, line.strip()
)
)
# Silence warning on Python 2: Not importing directory
# 'tests/migrations/migrations_test_apps/without_init_file/migrations':
# missing __init__.py
@ignore_warnings(category=ImportWarning)
def test_migration_path(self):
test_apps = [
'migrations.migrations_test_apps.normal',
'migrations.migrations_test_apps.with_package_model',
'migrations.migrations_test_apps.without_init_file',
]
base_dir = os.path.dirname(os.path.dirname(upath(__file__)))
for app in test_apps:
with self.modify_settings(INSTALLED_APPS={'append': app}):
migration = migrations.Migration('0001_initial', app.split('.')[-1])
expected_path = os.path.join(base_dir, *(app.split('.') + ['migrations', '0001_initial.py']))
writer = MigrationWriter(migration)
self.assertEqual(writer.path, expected_path)
def test_custom_operation(self):
migration = type(str("Migration"), (migrations.Migration,), {
"operations": [
custom_migration_operations.operations.TestOperation(),
custom_migration_operations.operations.CreateModel(),
migrations.CreateModel("MyModel", (), {}, (models.Model,)),
custom_migration_operations.more_operations.TestOperation()
],
"dependencies": []
})
writer = MigrationWriter(migration)
output = writer.as_string()
result = self.safe_exec(output)
self.assertIn("custom_migration_operations", result)
self.assertNotEqual(
result['custom_migration_operations'].operations.TestOperation,
result['custom_migration_operations'].more_operations.TestOperation
)
def test_sorted_imports(self):
"""
#24155 - Tests ordering of imports.
"""
migration = type(str("Migration"), (migrations.Migration,), {
"operations": [
migrations.AddField("mymodel", "myfield", models.DateTimeField(
default=datetime.datetime(2012, 1, 1, 1, 1, tzinfo=utc),
)),
]
})
writer = MigrationWriter(migration)
output = writer.as_string().decode('utf-8')
self.assertIn(
"import datetime\n"
"from django.db import migrations, models\n"
"from django.utils.timezone import utc\n",
output
)
def test_models_import_omitted(self):
"""
django.db.models shouldn't be imported if unused.
"""
migration = type(str("Migration"), (migrations.Migration,), {
"operations": [
migrations.AlterModelOptions(
name='model',
options={'verbose_name': 'model', 'verbose_name_plural': 'models'},
),
]
})
writer = MigrationWriter(migration)
output = writer.as_string().decode('utf-8')
self.assertIn("from django.db import migrations\n", output)
def test_deconstruct_class_arguments(self):
# Yes, it doesn't make sense to use a class as a default for a
# CharField. It does make sense for custom fields though, for example
# an enumfield that takes the enum class as an argument.
class DeconstructableInstances(object):
def deconstruct(self):
return ('DeconstructableInstances', [], {})
string = MigrationWriter.serialize(models.CharField(default=DeconstructableInstances))[0]
self.assertEqual(string, "models.CharField(default=migrations.test_writer.DeconstructableInstances)")
| bsd-3-clause | -8,129,352,722,528,693,000 | 41.52037 | 126 | 0.622621 | false | 4.306264 | true | false | false |
freyley/trello-traceability | rungui.py | 1 | 12592 | #!/usr/bin/env python
import urwid
from trello import TrelloClient
import models
import settings
from models import Board, get_session, TrelloList
from trellointerface import create_dbcard_and_ensure_checklist
class RemoveOrgUser(object):
def __init__(self, parent):
self.items = [urwid.Text("foo"), urwid.Text("bar")]
self.main_content = urwid.SimpleListWalker(
[urwid.AttrMap(w, None, 'reveal focus') for w in self.items])
self.parent = parent
self.listbox = urwid.ListBox(self.main_content)
@property
def widget(self):
return urwid.AttrWrap(self.listbox, 'body')
def handle_input(self, k):
if k in ('u', 'U'):
self.parent.set_view(Top)
class NoRefocusColumns(urwid.Columns):
def keypress(self, size, key):
return key
class NoRefocusPile(urwid.Pile):
def keypress(self, size, key):
return key
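# NoRefocusColumns and NoRefocusPile decline to handle key presses themselves,
# so arrow keys do not move focus within these containers; key handling stays
# with the surrounding application code.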
class TrelloCard(object):
def __init__(self, card, trello):
self.card = card
self.trello = trello
self.initialize()
def initialize(self):
raise NotImplementedError()
@property
def trellocard(self):
return self.trello.get_card(self.card.id)
@property
def url(self):
return self.trellocard.url
@property
def id(self):
return self.card.id
@property
def name(self):
return self.card.name
class Story(TrelloCard):
def initialize(self):
pass
@property
def meta_checklist(self):
tc = self.trellocard
tc.fetch(eager=True)
return [ checklist for checklist in tc.checklists if checklist.id == self.card.magic_checklist_id ][0]
def connect_to(self, epic):
self.meta_checklist.add_checklist_item("Epic Connection: {}: {}".format(epic.id, epic.url))
epic.story_checklist.add_checklist_item("{}: {}".format(self.id, self.url))
self.card.connected_to_id = epic.id
@property
def more_info_area(self):
if self.card.connected_to is None:
return "No connection"
else:
epic = self.card.connected_to
return "Connected to {}".format(epic.name)
class Epic(TrelloCard):
def initialize(self):
pass
@property
def story_checklist(self):
tc = self.trellocard
tc.fetch(eager=True)
return [ checklist for checklist in tc.checklists if checklist.id == self.card.magic_checklist_id ][0]
@property
def more_info_area(self):
return ""
class Panel(object):
def __init__(self, parent, board, card_cls):
self.db_session = parent.db_session
self.trello = parent.trello
self.parent = parent
self.board = board
self.card_list_ptr = 0
self.card_lists = self.db_session.query(TrelloList).filter_by(board=self.board)
self.card_cls = card_cls
self.content = urwid.SimpleListWalker(
[urwid.AttrMap(w, None, 'reveal focus') for w in self.items])
self.listbox = urwid.ListBox(self.content)
def reset_content(self):
while self.content:
self.content.pop()
self.content += [ urwid.AttrMap(w, None, 'reveal focus') for w in self.items]
if len(self.items) >2:
self.listbox.set_focus(2)
# TODO: this doesn't belong here exactly.
self.parent.more_info_area.set_text(self.card.more_info_area)
@property
def trelloboard(self):
return self.trello.get_board(self.board.id)
@property
def items(self):
items = [urwid.Text(self.card_list.name), urwid.Text('-=-=-=-=-=-=-=-=-=-')]
items += [urwid.Text("{}] {}".format(i, card.name)) for i, card in enumerate(self.get_cards())]
return items
@property
def card_list(self):
return self.card_lists[self.card_list_ptr]
def get_cards(self):
db_cards = self.db_session.query(models.Card).filter_by(trellolist=self.card_list)
self.cards = [ self.card_cls(card, self.trello) for card in db_cards]
return self.cards
def set_focus(self, idx):
self.listbox.set_focus(idx)
@property
def card(self):
return self.cards[self.listbox.get_focus()[1] - 2]
def go_left(self):
if self.card_list_ptr > 0:
self.card_list_ptr -= 1
self.reset_content()
def go_right(self):
if self.card_list_ptr < self.card_lists.count() - 1:
self.card_list_ptr += 1
self.reset_content()
def move_up(self):
focus_widget, idx = self.listbox.get_focus()
if idx > 2:
idx = idx - 1
self.listbox.set_focus(idx)
self.parent.more_info_area.set_text(self.card.more_info_area)
def move_down(self):
focus_widget, idx = self.listbox.get_focus()
if idx < len(self.content) - 1:
idx = idx + 1
self.listbox.set_focus(idx)
self.parent.more_info_area.set_text(self.card.more_info_area)
class Connect(object):
def __init__(self, parent):
self.db_session = get_session()()
self.mid_cmd = self.old_focus = None
self.parent = parent
self.story_board = self.db_session.query(Board).filter_by(story_board=True).first()
self.epic_board = self.db_session.query(Board).filter_by(epic_board=True).first()
self.future_story_boards = self.db_session.query(Board).filter_by(future_story_board=True)
self.panels = [Panel(self, board=self.story_board, card_cls=Story)]
self.panels += [ Panel(self, board=fsb, card_cls=Story) for fsb in self.future_story_boards]
self.left_panel_idx = 0
self.left_panel = self.panels[self.left_panel_idx]
self.right_panel = Panel(self, board=self.epic_board, card_cls=Epic)
self.left_panel.set_focus(2)
self.columns = NoRefocusColumns([self.left_panel.listbox, self.right_panel.listbox], focus_column=0)
self.more_info_area = urwid.Text(self.left_panel.card.more_info_area)
self.command_area = urwid.Edit(caption="")
self.edit_area_listbox = urwid.ListBox([urwid.Text("-=-=-=-=-=-=-=-"), self.more_info_area, self.command_area])
#urwid.AttrMap(self.command_area, "notfocus", "focus")])
self.frame = NoRefocusPile([self.columns, self.edit_area_listbox], focus_item=0)
@property
def widget(self):
return urwid.AttrWrap(self.frame, 'body')
@property
def trello(self):
return self.parent.trelloclient
def _complete(self):
self.command_area.set_edit_text("")
self.mid_cmd = False
self.left_panel.listbox.set_focus(self.old_focus)
self.frame.set_focus(0)
def complete_n(self):
output = self.command_area.get_edit_text().strip()
card_list = self.right_panel.card_list
trello_list = self.right_panel.trelloboard.get_list(card_list.id)
card = trello_list.add_card(output)
db_card = create_dbcard_and_ensure_checklist(self.db_session, card, prefetch_checklists=True)
self.db_session.commit()
self.db_session = get_session()()
self.left_panel.card.connect_to(Epic(db_card, self.trello))
self.right_panel.reset_content()
self._complete()
def complete_c(self):
output = int(self.command_area.get_edit_text())
self._complete()
self.left_panel.card.connect_to(self.right_panel.cards[output])
self.db_session.commit()
def switch_story_boards(self):
self.left_panel_idx += 1
if self.left_panel_idx == len(self.panels):
self.left_panel_idx = 0
self.left_panel = self.panels[self.left_panel_idx]
self.columns.widget_list = [self.left_panel.listbox, self.right_panel.listbox]
self.columns.set_focus(0)
self.left_panel.listbox.set_focus(2)
def handle_input(self, k):
if self.mid_cmd:
if k == 'esc':
self._complete()
if k == 'enter':
if self.mid_cmd == 'n':
self.complete_n()
elif self.mid_cmd == 'c':
self.complete_c()
else:
self.command_area.keypress([0], k)
return
if k in ('u', 'U'):
self.parent.set_view(Top)
if k == 's':
self.switch_story_boards()
if k == 'c':
self.frame.set_focus(1)
self.command_area.set_edit_pos(0)
self.mid_cmd = 'c'
self.old_focus = self.left_panel.listbox.get_focus()[1]
if k == 'n':
self.frame.set_focus(1)
self.command_area.set_edit_pos(0)
self.mid_cmd = 'n'
self.old_focus = self.left_panel.listbox.get_focus()[1]
# navigation
elif k == 'j':
self.right_panel.go_left()
elif k == 'l':
self.right_panel.go_right()
elif k == 'a':
self.left_panel.go_left()
elif k == 'd':
self.left_panel.go_right()
elif k == 'up':
self.left_panel.move_up()
elif k == 'down':
self.left_panel.move_down()
VIEWS = {
"Remove Organization User": RemoveOrgUser,
"Connect Stories to Epics": Connect,
}
class Top(object):
def __init__(self, parent):
self.commands = [urwid.Text(text) for text in VIEWS.keys() ]
self.main_content = urwid.SimpleListWalker(
[urwid.Text('Commands'),
urwid.Text('-=-=-=-=-=-=-=-=-=-')]+
[urwid.AttrMap(w, None, 'reveal focus') for w in self.commands])
self.listbox = urwid.ListBox(self.main_content)
self.listbox.set_focus(2)
self.parent = parent
@property
def widget(self):
return urwid.AttrWrap(self.listbox, 'body')
def enter_command(self):
focus_widget, idx = self.listbox.get_focus()
item = self.main_content[idx].original_widget.text
view = VIEWS[item]
self.parent.set_view(view)
# focus_widget, idx = self.listbox.get_focus()
# item = self.main_content[idx].original_widget.text
# new_item = urwid.Text(item)
# self.commands.append(new_item)
# self.main_content.append(urwid.AttrMap(new_item, None, 'reveal focus'))
def handle_input(self, k):
if k == 'up':
focus_widget, idx = self.listbox.get_focus()
if idx > 2:
idx = idx - 1
self.listbox.set_focus(idx)
elif k == 'down':
focus_widget, idx = self.listbox.get_focus()
if idx < len(self.main_content) - 1:
idx = idx + 1
self.listbox.set_focus(idx)
elif k == 'enter':
self.enter_command()
class TrelloTraceability:
palette = [
('body', 'black', 'light gray'),
('focus', 'light gray', 'dark blue', 'standout'),
('head', 'yellow', 'black', 'standout'),
('foot', 'light gray', 'black'),
('key', 'light cyan', 'black','underline'),
('title', 'white', 'black', 'bold'),
('flag', 'dark gray', 'light gray'),
('error', 'dark red', 'light gray'),
]
cmdstrings = [
('title', "Commands"), " ",
('key', "Q"), ' ',
('key', 'U')
]
def __init__(self):
self.current_view = Top(self)
# header and footer
self.header = urwid.Text( "Trello Traceability" )
self.cmds = urwid.AttrWrap(urwid.Text(self.cmdstrings),
'foot')
self.view = urwid.Frame(
self.current_view.widget,
header=urwid.AttrWrap(self.header, 'head' ),
footer=self.cmds )
self.trelloclient = TrelloClient(
api_key=settings.TRELLO_API_KEY,
api_secret=settings.TRELLO_API_SECRET,
token=settings.TRELLO_OAUTH_TOKEN,
)
self.organization = self.trelloclient.get_organization(settings.TRELLO_ORGANIZATION_ID)
def set_view(self, cls):
self.current_view = cls(self)
self.view.body = self.current_view.widget
def main(self):
"""Run the program."""
self.loop = urwid.MainLoop(self.view, self.palette,
unhandled_input=self.unhandled_input)
self.loop.run()
def unhandled_input(self, k):
if k in ('q','Q'):
raise urwid.ExitMainLoop()
else:
self.current_view.handle_input(k)
def main():
TrelloTraceability().main()
if __name__=="__main__":
main() | agpl-3.0 | -8,318,284,779,255,094,000 | 31.456186 | 119 | 0.579018 | false | 3.415243 | false | false | false |
tsgit/invenio | modules/bibfield/lib/functions/is_type_isbn.py | 17 | 1815 | ## This file is part of Invenio.
## Copyright (C) 2013 CERN.
##
## Invenio is free software; you can redistribute it and/or
## modify it under the terms of the GNU General Public License as
## published by the Free Software Foundation; either version 2 of the
## License, or (at your option) any later version.
##
## Invenio is distributed in the hope that it will be useful, but
## WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
## General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with Invenio; if not, write to the Free Software Foundation, Inc.,
## 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
def _convert_x_to_10(x):
if x != 'X':
return int(x)
else:
return 10
def is_type_isbn10(val):
"""
Test if argument is an ISBN-10 number
Courtesy Wikipedia:
http://en.wikipedia.org/wiki/International_Standard_Book_Number
"""
val = val.replace("-", "").replace(" ", "")
if len(val) != 10:
return False
r = sum([(10 - i) * (_convert_x_to_10(x)) for i, x in enumerate(val)])
return not (r % 11)
def is_type_isbn13(val):
"""
Test if argument is an ISBN-13 number
Courtesy Wikipedia:
http://en.wikipedia.org/wiki/International_Standard_Book_Number
"""
val = val.replace("-", "").replace(" ", "")
if len(val) != 13:
return False
total = sum([int(num) * weight for num, weight in zip(val, (1, 3) * 6)])
ck = (10 - total) % 10
return ck == int(val[-1])
def is_type_isbn(val):
""" Test if argument is an ISBN-10 or ISBN-13 number """
try:
return is_type_isbn10(val) or is_type_isbn13(val)
except:
return False
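
# Minimal self-check (an illustrative addition, not part of the original
# module): the ISBNs below are the well-known example numbers 0-306-40615-2
# (ISBN-10) and 978-0-306-40615-7 (ISBN-13); hyphens are stripped by the
# validators themselves.
if __name__ == '__main__':
    assert is_type_isbn10('0-306-40615-2')
    assert not is_type_isbn10('0-306-40615-3')
    assert is_type_isbn13('978-0-306-40615-7')
    assert is_type_isbn('0-306-40615-2') and is_type_isbn('978-0-306-40615-7')
    assert not is_type_isbn('not-an-isbn')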
| gpl-2.0 | -7,659,831,587,807,510,000 | 29.25 | 76 | 0.638567 | false | 3.49711 | false | false | false |
supergentle/migueltutorial | flask/lib/python2.7/site-packages/pbr/tests/test_core.py | 86 | 5269 | # Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Copyright (C) 2013 Association of Universities for Research in Astronomy
# (AURA)
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following
# disclaimer in the documentation and/or other materials provided
# with the distribution.
#
# 3. The name of AURA and its representatives may not be used to
# endorse or promote products derived from this software without
# specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY AURA ``AS IS'' AND ANY EXPRESS OR IMPLIED
# WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL AURA BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
# OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
# ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import glob
import os
import tarfile
import fixtures
from pbr.tests import base
class TestCore(base.BaseTestCase):
cmd_names = ('pbr_test_cmd', 'pbr_test_cmd_with_class')
def check_script_install(self, install_stdout):
for cmd_name in self.cmd_names:
install_txt = 'Installing %s script to %s' % (cmd_name,
self.temp_dir)
self.assertIn(install_txt, install_stdout)
cmd_filename = os.path.join(self.temp_dir, cmd_name)
script_txt = open(cmd_filename, 'r').read()
self.assertNotIn('pkg_resources', script_txt)
stdout, _, return_code = self._run_cmd(cmd_filename)
self.assertIn("PBR", stdout)
def test_setup_py_keywords(self):
"""setup.py --keywords.
Test that the `./setup.py --keywords` command returns the correct
value without balking.
"""
self.run_setup('egg_info')
stdout, _, _ = self.run_setup('--keywords')
assert stdout == 'packaging,distutils,setuptools'
def test_sdist_extra_files(self):
"""Test that the extra files are correctly added."""
stdout, _, return_code = self.run_setup('sdist', '--formats=gztar')
# There can be only one
try:
tf_path = glob.glob(os.path.join('dist', '*.tar.gz'))[0]
except IndexError:
assert False, 'source dist not found'
tf = tarfile.open(tf_path)
names = ['/'.join(p.split('/')[1:]) for p in tf.getnames()]
self.assertIn('extra-file.txt', names)
def test_console_script_install(self):
"""Test that we install a non-pkg-resources console script."""
if os.name == 'nt':
self.skipTest('Windows support is passthrough')
stdout, _, return_code = self.run_setup(
'install_scripts', '--install-dir=%s' % self.temp_dir)
self.useFixture(
fixtures.EnvironmentVariable('PYTHONPATH', '.'))
self.check_script_install(stdout)
def test_console_script_develop(self):
"""Test that we develop a non-pkg-resources console script."""
if os.name == 'nt':
self.skipTest('Windows support is passthrough')
self.useFixture(
fixtures.EnvironmentVariable(
'PYTHONPATH', ".:%s" % self.temp_dir))
stdout, _, return_code = self.run_setup(
'develop', '--install-dir=%s' % self.temp_dir)
self.check_script_install(stdout)
class TestGitSDist(base.BaseTestCase):
def setUp(self):
super(TestGitSDist, self).setUp()
stdout, _, return_code = self._run_cmd('git', ('init',))
if return_code:
self.skipTest("git not installed")
stdout, _, return_code = self._run_cmd('git', ('add', '.'))
stdout, _, return_code = self._run_cmd(
'git', ('commit', '-m', 'Turn this into a git repo'))
stdout, _, return_code = self.run_setup('sdist', '--formats=gztar')
def test_sdist_git_extra_files(self):
"""Test that extra files found in git are correctly added."""
# There can be only one
tf_path = glob.glob(os.path.join('dist', '*.tar.gz'))[0]
tf = tarfile.open(tf_path)
names = ['/'.join(p.split('/')[1:]) for p in tf.getnames()]
self.assertIn('git-extra-file.txt', names)
| bsd-3-clause | 7,623,451,611,574,101,000 | 34.843537 | 77 | 0.633707 | false | 4.006844 | true | false | false |
wolverineav/neutron | neutron/pecan_wsgi/hooks/body_validation.py | 4 | 2556 | # Copyright (c) 2015 Mirantis, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log
from oslo_serialization import jsonutils
from pecan import hooks
from neutron.api.v2 import attributes as v2_attributes
from neutron.api.v2 import base as v2_base
LOG = log.getLogger(__name__)
class BodyValidationHook(hooks.PecanHook):
priority = 120
def before(self, state):
if state.request.method not in ('POST', 'PUT'):
return
resource = state.request.context.get('resource')
collection = state.request.context.get('collection')
neutron_context = state.request.context['neutron_context']
is_create = state.request.method == 'POST'
if not resource:
return
try:
json_data = jsonutils.loads(state.request.body)
except ValueError:
LOG.debug("No JSON Data in %(method)s request for %(collection)s",
{'method': state.request.method,
'collection': collection})
return
# Raw data are consumed by member actions such as add_router_interface
state.request.context['request_data'] = json_data
if not (resource in json_data or collection in json_data):
# there is no resource in the request. This can happen when a
# member action is being processed or on agent scheduler operations
return
# Prepare data to be passed to the plugin from request body
data = v2_base.Controller.prepare_request_body(
neutron_context,
json_data,
is_create,
resource,
v2_attributes.get_collection_info(collection),
allow_bulk=is_create)
if collection in data:
state.request.context['resources'] = [item[resource] for item in
data[collection]]
else:
state.request.context['resources'] = [data[resource]]
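
# Illustrative request bodies handled by this hook (resource/attribute names
# here are hypothetical examples, not taken from this module):
#
#   single resource:  {"port": {"network_id": "<uuid>"}}
#   bulk collection:  {"ports": [{"network_id": "<uuid>"}, {"network_id": "<uuid>"}]}
#
# In the single case state.request.context['resources'] holds one prepared
# item; in the bulk case it holds one item per element of the collection.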
| apache-2.0 | 6,770,917,538,147,770,000 | 38.323077 | 79 | 0.634585 | false | 4.406897 | false | false | false |
Chilledheart/chromium | build/android/gyp/process_resources.py | 1 | 15150 | #!/usr/bin/env python
#
# Copyright (c) 2012 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Process Android resources to generate R.java, and prepare for packaging.
This will crunch images and generate v14 compatible resources
(see generate_v14_compatible_resources.py).
"""
import codecs
import optparse
import os
import re
import shutil
import sys
import generate_v14_compatible_resources
from util import build_utils
# Import jinja2 from third_party/jinja2
sys.path.insert(1,
os.path.join(os.path.dirname(__file__), '../../../third_party'))
from jinja2 import Template # pylint: disable=F0401
def ParseArgs(args):
"""Parses command line options.
Returns:
An options object as from optparse.OptionsParser.parse_args()
"""
parser = optparse.OptionParser()
build_utils.AddDepfileOption(parser)
parser.add_option('--android-sdk', help='path to the Android SDK folder')
parser.add_option('--aapt-path',
help='path to the Android aapt tool')
parser.add_option('--non-constant-id', action='store_true')
parser.add_option('--android-manifest', help='AndroidManifest.xml path')
parser.add_option('--custom-package', help='Java package for R.java')
parser.add_option(
'--shared-resources',
action='store_true',
help='Make a resource package that can be loaded by a different'
'application at runtime to access the package\'s resources.')
parser.add_option('--resource-dirs',
help='Directories containing resources of this target.')
parser.add_option('--dependencies-res-zips',
help='Resources from dependents.')
parser.add_option('--resource-zip-out',
help='Path for output zipped resources.')
parser.add_option('--R-dir',
help='directory to hold generated R.java.')
parser.add_option('--srcjar-out',
help='Path to srcjar to contain generated R.java.')
parser.add_option('--r-text-out',
help='Path to store the R.txt file generated by appt.')
parser.add_option('--proguard-file',
help='Path to proguard.txt generated file')
parser.add_option(
'--v14-skip',
action="store_true",
help='Do not generate nor verify v14 resources')
parser.add_option(
'--extra-res-packages',
help='Additional package names to generate R.java files for')
parser.add_option(
'--extra-r-text-files',
help='For each additional package, the R.txt file should contain a '
'list of resources to be included in the R.java file in the format '
'generated by aapt')
parser.add_option(
'--include-all-resources',
action='store_true',
help='Include every resource ID in every generated R.java file '
'(ignoring R.txt).')
parser.add_option(
'--all-resources-zip-out',
help='Path for output of all resources. This includes resources in '
'dependencies.')
parser.add_option('--stamp', help='File to touch on success')
(options, args) = parser.parse_args(args)
if args:
parser.error('No positional arguments should be given.')
# Check that required options have been provided.
required_options = (
'android_sdk',
'aapt_path',
'android_manifest',
'dependencies_res_zips',
'resource_dirs',
'resource_zip_out',
)
build_utils.CheckOptions(options, parser, required=required_options)
if (options.R_dir is None) == (options.srcjar_out is None):
raise Exception('Exactly one of --R-dir or --srcjar-out must be specified.')
return options
def CreateExtraRJavaFiles(
r_dir, extra_packages, extra_r_text_files, shared_resources, include_all):
if include_all:
java_files = build_utils.FindInDirectory(r_dir, "R.java")
if len(java_files) != 1:
return
r_java_file = java_files[0]
r_java_contents = codecs.open(r_java_file, encoding='utf-8').read()
for package in extra_packages:
package_r_java_dir = os.path.join(r_dir, *package.split('.'))
build_utils.MakeDirectory(package_r_java_dir)
package_r_java_path = os.path.join(package_r_java_dir, 'R.java')
new_r_java = re.sub(r'package [.\w]*;', u'package %s;' % package,
r_java_contents)
codecs.open(package_r_java_path, 'w', encoding='utf-8').write(new_r_java)
else:
if len(extra_packages) != len(extra_r_text_files):
raise Exception('Need one R.txt file per extra package')
all_resources = {}
r_txt_file = os.path.join(r_dir, 'R.txt')
if not os.path.exists(r_txt_file):
return
with open(r_txt_file) as f:
for line in f:
m = re.match(r'(int(?:\[\])?) (\w+) (\w+) (.+)$', line)
if not m:
raise Exception('Unexpected line in R.txt: %s' % line)
java_type, resource_type, name, value = m.groups()
all_resources[(resource_type, name)] = (java_type, value)
for package, r_text_file in zip(extra_packages, extra_r_text_files):
if os.path.exists(r_text_file):
package_r_java_dir = os.path.join(r_dir, *package.split('.'))
build_utils.MakeDirectory(package_r_java_dir)
package_r_java_path = os.path.join(package_r_java_dir, 'R.java')
CreateExtraRJavaFile(
package, package_r_java_path, r_text_file, all_resources,
shared_resources)
def CreateExtraRJavaFile(
package, r_java_path, r_text_file, all_resources, shared_resources):
resources = {}
with open(r_text_file) as f:
for line in f:
m = re.match(r'int(?:\[\])? (\w+) (\w+) ', line)
if not m:
raise Exception('Unexpected line in R.txt: %s' % line)
resource_type, name = m.groups()
java_type, value = all_resources[(resource_type, name)]
if resource_type not in resources:
resources[resource_type] = []
resources[resource_type].append((name, java_type, value))
template = Template("""/* AUTO-GENERATED FILE. DO NOT MODIFY. */
package {{ package }};
public final class R {
{% for resource_type in resources %}
public static final class {{ resource_type }} {
{% for name, java_type, value in resources[resource_type] %}
{% if shared_resources %}
public static {{ java_type }} {{ name }} = {{ value }};
{% else %}
public static final {{ java_type }} {{ name }} = {{ value }};
{% endif %}
{% endfor %}
}
{% endfor %}
{% if shared_resources %}
public static void onResourcesLoaded(int packageId) {
{% for resource_type in resources %}
{% for name, java_type, value in resources[resource_type] %}
{% if java_type == 'int[]' %}
for(int i = 0; i < {{ resource_type }}.{{ name }}.length; ++i) {
{{ resource_type }}.{{ name }}[i] =
({{ resource_type }}.{{ name }}[i] & 0x00ffffff)
| (packageId << 24);
}
{% else %}
{{ resource_type }}.{{ name }} =
({{ resource_type }}.{{ name }} & 0x00ffffff)
| (packageId << 24);
{% endif %}
{% endfor %}
{% endfor %}
}
{% endif %}
}
""", trim_blocks=True, lstrip_blocks=True)
output = template.render(package=package, resources=resources,
shared_resources=shared_resources)
with open(r_java_path, 'w') as f:
f.write(output)
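
# For reference (illustrative values, added for clarity): the R.txt lines
# parsed above look like
#
#   int drawable icon 0x7f020001
#   int[] styleable ActionBar { 0x7f010001, 0x7f010003 }
#
# and with --shared-resources the generated onResourcesLoaded() keeps the low
# 24 bits of each id and swaps in the runtime package id, e.g. 0x7f020001
# with packageId 0x02 becomes 0x02020001.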
def CrunchDirectory(aapt, input_dir, output_dir):
"""Crunches the images in input_dir and its subdirectories into output_dir.
If an image is already optimized, crunching often increases image size. In
this case, the crunched image is overwritten with the original image.
"""
aapt_cmd = [aapt,
'crunch',
'-C', output_dir,
'-S', input_dir,
'--ignore-assets', build_utils.AAPT_IGNORE_PATTERN]
build_utils.CheckOutput(aapt_cmd, stderr_filter=FilterCrunchStderr,
fail_func=DidCrunchFail)
# Check for images whose size increased during crunching and replace them
# with their originals (except for 9-patches, which must be crunched).
for dir_, _, files in os.walk(output_dir):
for crunched in files:
if crunched.endswith('.9.png'):
continue
if not crunched.endswith('.png'):
raise Exception('Unexpected file in crunched dir: ' + crunched)
crunched = os.path.join(dir_, crunched)
original = os.path.join(input_dir, os.path.relpath(crunched, output_dir))
original_size = os.path.getsize(original)
crunched_size = os.path.getsize(crunched)
if original_size < crunched_size:
shutil.copyfile(original, crunched)
def FilterCrunchStderr(stderr):
"""Filters out lines from aapt crunch's stderr that can safely be ignored."""
filtered_lines = []
for line in stderr.splitlines(True):
# Ignore this libpng warning, which is a known non-error condition.
# http://crbug.com/364355
if ('libpng warning: iCCP: Not recognizing known sRGB profile that has '
+ 'been edited' in line):
continue
filtered_lines.append(line)
return ''.join(filtered_lines)
def DidCrunchFail(returncode, stderr):
"""Determines whether aapt crunch failed from its return code and output.
Because aapt's return code cannot be trusted, any output to stderr is
an indication that aapt has failed (http://crbug.com/314885).
"""
return returncode != 0 or stderr
def ZipResources(resource_dirs, zip_path):
# Python zipfile does not provide a way to replace a file (it just writes
# another file with the same name). So, first collect all the files to put
# in the zip (with proper overriding), and then zip them.
files_to_zip = dict()
for d in resource_dirs:
for root, _, files in os.walk(d):
for f in files:
archive_path = f
parent_dir = os.path.relpath(root, d)
if parent_dir != '.':
archive_path = os.path.join(parent_dir, f)
path = os.path.join(root, f)
files_to_zip[archive_path] = path
build_utils.DoZip(files_to_zip.iteritems(), zip_path)
def CombineZips(zip_files, output_path):
# When packaging resources, if the top-level directories in the zip file are
# of the form 0, 1, ..., then each subdirectory will be passed to aapt as a
# resources directory. While some resources just clobber others (image files,
# etc), other resources (particularly .xml files) need to be more
# intelligently merged. That merging is left up to aapt.
def path_transform(name, src_zip):
return '%d/%s' % (zip_files.index(src_zip), name)
build_utils.MergeZips(output_path, zip_files, path_transform=path_transform)
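
# For example (illustrative names): with zip_files == ['base.zip', 'dep.zip'],
# an entry 'values/strings.xml' coming from 'dep.zip' is written to the output
# as '1/values/strings.xml', so aapt later treats each numbered prefix as a
# separate resource directory.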
def main():
args = build_utils.ExpandFileArgs(sys.argv[1:])
options = ParseArgs(args)
android_jar = os.path.join(options.android_sdk, 'android.jar')
aapt = options.aapt_path
input_files = []
with build_utils.TempDir() as temp_dir:
deps_dir = os.path.join(temp_dir, 'deps')
build_utils.MakeDirectory(deps_dir)
v14_dir = os.path.join(temp_dir, 'v14')
build_utils.MakeDirectory(v14_dir)
gen_dir = os.path.join(temp_dir, 'gen')
build_utils.MakeDirectory(gen_dir)
input_resource_dirs = build_utils.ParseGypList(options.resource_dirs)
if not options.v14_skip:
for resource_dir in input_resource_dirs:
generate_v14_compatible_resources.GenerateV14Resources(
resource_dir,
v14_dir)
dep_zips = build_utils.ParseGypList(options.dependencies_res_zips)
input_files += dep_zips
dep_subdirs = []
for z in dep_zips:
subdir = os.path.join(deps_dir, os.path.basename(z))
if os.path.exists(subdir):
raise Exception('Resource zip name conflict: ' + os.path.basename(z))
build_utils.ExtractAll(z, path=subdir)
dep_subdirs.append(subdir)
# Generate R.java. This R.java contains non-final constants and is used only
# while compiling the library jar (e.g. chromium_content.jar). When building
# an apk, a new R.java file with the correct resource -> ID mappings will be
# generated by merging the resources from all libraries and the main apk
# project.
package_command = [aapt,
'package',
'-m',
'-M', options.android_manifest,
'--auto-add-overlay',
'-I', android_jar,
'--output-text-symbols', gen_dir,
'-J', gen_dir,
'--ignore-assets', build_utils.AAPT_IGNORE_PATTERN]
for d in input_resource_dirs:
package_command += ['-S', d]
for d in dep_subdirs:
package_command += ['-S', d]
if options.non_constant_id:
package_command.append('--non-constant-id')
if options.custom_package:
package_command += ['--custom-package', options.custom_package]
if options.proguard_file:
package_command += ['-G', options.proguard_file]
if options.shared_resources:
package_command.append('--shared-lib')
build_utils.CheckOutput(package_command, print_stderr=False)
if options.extra_res_packages:
CreateExtraRJavaFiles(
gen_dir,
build_utils.ParseGypList(options.extra_res_packages),
build_utils.ParseGypList(options.extra_r_text_files),
options.shared_resources,
options.include_all_resources)
# This is the list of directories with resources to put in the final .zip
# file. The order of these is important so that crunched/v14 resources
# override the normal ones.
zip_resource_dirs = input_resource_dirs + [v14_dir]
base_crunch_dir = os.path.join(temp_dir, 'crunch')
# Crunch image resources. This shrinks png files and is necessary for
# 9-patch images to display correctly. 'aapt crunch' accepts only a single
# directory at a time and deletes everything in the output directory.
for idx, input_dir in enumerate(input_resource_dirs):
crunch_dir = os.path.join(base_crunch_dir, str(idx))
build_utils.MakeDirectory(crunch_dir)
zip_resource_dirs.append(crunch_dir)
CrunchDirectory(aapt, input_dir, crunch_dir)
ZipResources(zip_resource_dirs, options.resource_zip_out)
if options.all_resources_zip_out:
CombineZips([options.resource_zip_out] + dep_zips,
options.all_resources_zip_out)
if options.R_dir:
build_utils.DeleteDirectory(options.R_dir)
shutil.copytree(gen_dir, options.R_dir)
else:
build_utils.ZipDir(options.srcjar_out, gen_dir)
if options.r_text_out:
r_text_path = os.path.join(gen_dir, 'R.txt')
if os.path.exists(r_text_path):
shutil.copyfile(r_text_path, options.r_text_out)
else:
open(options.r_text_out, 'w').close()
if options.depfile:
input_files += build_utils.GetPythonDependencies()
build_utils.WriteDepfile(options.depfile, input_files)
if options.stamp:
build_utils.Touch(options.stamp)
if __name__ == '__main__':
main()
| bsd-3-clause | -3,804,866,868,311,570,400 | 35.244019 | 80 | 0.636964 | false | 3.594306 | false | false | false |
rruebner/odoo | addons/point_of_sale/wizard/pos_confirm.py | 343 | 2403 | # -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2010 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
from openerp.osv import osv
class pos_confirm(osv.osv_memory):
_name = 'pos.confirm'
_description = 'Post POS Journal Entries'
def action_confirm(self, cr, uid, ids, context=None):
order_obj = self.pool.get('pos.order')
ids = order_obj.search(cr, uid, [('state','=','paid')], context=context)
for order in order_obj.browse(cr, uid, ids, context=context):
todo = True
for line in order.statement_ids:
if line.statement_id.state != 'confirm':
todo = False
break
if todo:
order.signal_workflow('done')
# Check if there are orders whose invoices need to be reconciled
ids = order_obj.search(cr, uid, [('state','=','invoiced'),('invoice_id.state','=','open')], context=context)
for order in order_obj.browse(cr, uid, ids, context=context):
invoice = order.invoice_id
data_lines = [x.id for x in invoice.move_id.line_id if x.account_id.id == invoice.account_id.id]
for st in order.statement_ids:
for move in st.move_ids:
data_lines += [x.id for x in move.line_id if x.account_id.id == invoice.account_id.id]
self.pool.get('account.move.line').reconcile(cr, uid, data_lines, context=context)
return {}
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 | 7,988,854,076,979,739,000 | 45.211538 | 116 | 0.589263 | false | 4.086735 | false | false | false |
rdipietro/tensorflow | tensorflow/python/ops/split_benchmark.py | 7 | 4025 | # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Benchmark for split and grad of split."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow as tf
from tensorflow.python.platform import benchmark
from tensorflow.python.platform import tf_logging as logging
def build_graph(device, input_shape, output_sizes, axis):
"""Build a graph containing a sequence of batch normalizations.
Args:
device: string, the device to run on.
input_shape: shape of the input tensor.
output_sizes: size of each output along axis.
axis: axis to be split along.
Returns:
A grouped op depending on all split outputs, to be passed to run().
"""
with tf.device("/%s:0" % device):
inp = tf.zeros(input_shape)
outputs = []
for _ in range(100):
outputs.extend(tf.split_v(inp, output_sizes, axis))
return tf.group(*outputs)
class SplitBenchmark(tf.test.Benchmark):
"""Benchmark split!"""
def _run_graph(self, device, output_shape, variable, num_outputs, axis):
"""Run the graph and print its execution time.
Args:
device: string, the device to run on.
output_shape: shape of each output tensor.
variable: whether or not the output shape should be fixed
num_outputs: the number of outputs to split the input into
axis: axis to be split
Returns:
The duration of the run in seconds.
"""
graph = tf.Graph()
with graph.as_default():
if not variable:
if axis == 0:
input_shape = [output_shape[0] * num_outputs, output_shape[1]]
sizes = [output_shape[0] for _ in range(num_outputs)]
else:
input_shape = [output_shape[0], output_shape[1] * num_outputs]
sizes = [output_shape[1] for _ in range(num_outputs)]
else:
sizes = np.random.randint(
low=max(1, output_shape[axis] - 2),
high=output_shape[axis] + 2,
size=num_outputs)
total_size = np.sum(sizes)
if axis == 0:
input_shape = [total_size, output_shape[1]]
else:
input_shape = [output_shape[0], total_size]
outputs = build_graph(device, input_shape, sizes, axis)
config = tf.ConfigProto(graph_options=tf.GraphOptions(
optimizer_options=tf.OptimizerOptions(
opt_level=tf.OptimizerOptions.L0)))
with tf.Session(graph=graph, config=config) as session:
logging.set_verbosity("info")
tf.global_variables_initializer().run()
bench = benchmark.TensorFlowBenchmark()
bench.run_op_benchmark(
session,
outputs,
mbs=input_shape[0] * input_shape[1] * 4 * 2 * 100 / 1e6,
extras={
"input_shape": input_shape,
"variable": variable,
"axis": axis
})
def benchmark_split(self):
print("Forward vs backward concat")
shapes = [[2000, 8], [8, 2000], [100, 18], [1000, 18], [10000, 18],
[100, 97], [1000, 97], [10000, 1], [1, 10000]]
axis_ = [1] # 0 is very fast because it doesn't actually do any copying
num_outputs = 100
variable = [False, True] # fixed input size or not
for shape in shapes:
for axis in axis_:
for v in variable:
self._run_graph("gpu", shape, v, num_outputs, axis)
if __name__ == "__main__":
tf.test.main()
| apache-2.0 | 1,387,651,702,457,593,900 | 33.698276 | 80 | 0.624099 | false | 3.877649 | false | false | false |
byu-aml-lab/bzrflag | bzagents/bzrc.py | 18 | 14582 | #!/usr/bin/python -tt
# Control BZFlag tanks remotely with synchronous communication.
####################################################################
# NOTE TO STUDENTS:
# You CAN and probably SHOULD modify this code. Just because it is
# in a separate file does not mean that you can ignore it or that
# you have to leave it alone. Treat it as your code. You are
# required to understand it enough to be able to modify it if you
# find something wrong. This is merely a help to get you started
# on interacting with BZRC. It is provided AS IS, with NO WARRANTY,
# express or implied.
####################################################################
from __future__ import division
import math
import sys
import socket
import time
class BZRC:
"""Class handles queries and responses with remote controled tanks."""
def __init__(self, host, port, debug=False):
"""Given a hostname and port number, connect to the RC tanks."""
self.debug = debug
# Note that AF_INET and SOCK_STREAM are defaults.
sock = socket.socket()
sock.connect((host, port))
# Make a line-buffered "file" from the socket.
self.conn = sock.makefile(bufsize=1)
self.handshake()
def handshake(self):
"""Perform the handshake with the remote tanks."""
self.expect(('bzrobots', '1'), True)
print >>self.conn, 'agent 1'
def close(self):
"""Close the socket."""
self.conn.close()
def read_arr(self):
"""Read a response from the RC tanks as an array split on
whitespace.
"""
try:
line = self.conn.readline()
except socket.error:
print 'Server Shut down. Aborting'
sys.exit(1)
if self.debug:
print 'Received: %s' % line.split()
return line.split()
def sendline(self, line):
"""Send a line to the RC tanks."""
print >>self.conn, line
def die_confused(self, expected, got_arr):
"""When we think the RC tanks should have responded differently, call
this method with a string explaining what should have been sent and
with the array containing what was actually sent.
"""
raise UnexpectedResponse(expected, ' '.join(got_arr))
def expect(self, expected, full=False):
"""Verify that server's response is as expected."""
if isinstance(expected, str):
expected = (expected,)
line = self.read_arr()
good = True
if full and len(expected) != len(line):
good = False
else:
for a,b in zip(expected,line):
if a!=b:
good = False
break
if not good:
self.die_confused(' '.join(expected), line)
if full:
return True
return line[len(expected):]
def expect_multi(self, *expecteds, **kwds):
"""Verify the server's response looks like one of
several possible responses. Return the index of the matched response,
and the server's line response.
"""
line = self.read_arr()
for i,expected in enumerate(expecteds):
for a,b in zip(expected, line):
if a!=b:
break
else:
if not kwds.get('full',False) or len(expected) == len(line):
break
else:
self.die_confused(' or '.join(' '.join(one) for one in expecteds),
line)
return i, line[len(expected):]
def read_ack(self):
"""Expect an "ack" line from the remote tanks.
Raise an UnexpectedResponse exception if we get something else.
"""
self.expect('ack')
def read_bool(self):
"""Expect a boolean response from the remote tanks.
Return True or False in accordance with the response. Raise an
UnexpectedResponse exception if we get something else.
"""
i, rest = self.expect_multi(('ok',),('fail',))
return (True, False)[i]
def read_teams(self):
"""Get team information."""
self.expect('begin')
teams = []
while True:
i, rest = self.expect_multi(('team',),('end',))
if i == 1:
break
team = Answer()
team.color = rest[0]
team.count = float(rest[1])
team.base = [(float(x), float(y)) for (x, y) in
zip(rest[2:10:2], rest[3:10:2])]
teams.append(team)
return teams
def read_obstacles(self):
"""Get obstacle information."""
self.expect('begin')
obstacles = []
while True:
i, rest = self.expect_multi(('obstacle',),('end',))
if i == 1:
break
obstacle = [(float(x), float(y)) for (x, y) in
zip(rest[::2], rest[1::2])]
obstacles.append(obstacle)
return obstacles
def read_occgrid(self):
"""Read grid."""
response = self.read_arr()
if 'fail' in response:
return None
pos = tuple(int(a) for a in self.expect('at')[0].split(','))
size = tuple(int(a) for a in self.expect('size')[0].split('x'))
grid = [[0 for i in range(size[1])] for j in range(size[0])]
for x in range(size[0]):
line = self.read_arr()[0]
for y in range(size[1]):
if line[y] == '1':
grid[x][y] = 1
self.expect('end', True)
return pos, grid
def read_flags(self):
"""Get flag information."""
line = self.read_arr()
if line[0] != 'begin':
self.die_confused('begin', line)
flags = []
while True:
line = self.read_arr()
if line[0] == 'flag':
flag = Answer()
flag.color = line[1]
flag.poss_color = line[2]
flag.x = float(line[3])
flag.y = float(line[4])
flags.append(flag)
elif line[0] == 'end':
break
else:
self.die_confused('flag or end', line)
return flags
def read_shots(self):
"""Get shot information."""
line = self.read_arr()
if line[0] != 'begin':
self.die_confused('begin', line)
shots = []
while True:
line = self.read_arr()
if line[0] == 'shot':
shot = Answer()
shot.x = float(line[1])
shot.y = float(line[2])
shot.vx = float(line[3])
shot.vy = float(line[4])
shots.append(shot)
elif line[0] == 'end':
break
else:
self.die_confused('shot or end', line)
return shots
def read_mytanks(self):
"""Get friendly tank information."""
line = self.read_arr()
if line[0] != 'begin':
self.die_confused('begin', line)
tanks = []
while True:
line = self.read_arr()
if line[0] == 'mytank':
tank = Answer()
tank.index = int(line[1])
tank.callsign = line[2]
tank.status = line[3]
tank.shots_avail = int(line[4])
tank.time_to_reload = float(line[5])
tank.flag = line[6]
tank.x = float(line[7])
tank.y = float(line[8])
tank.angle = float(line[9])
tank.vx = float(line[10])
tank.vy = float(line[11])
tank.angvel = float(line[12])
tanks.append(tank)
elif line[0] == 'end':
break
else:
self.die_confused('mytank or end', line)
return tanks
def read_othertanks(self):
"""Get enemy tank information."""
line = self.read_arr()
if line[0] != 'begin':
self.die_confused('begin', line)
tanks = []
while True:
line = self.read_arr()
if line[0] == 'othertank':
tank = Answer()
tank.callsign = line[1]
tank.color = line[2]
tank.status = line[3]
tank.flag = line[4]
tank.x = float(line[5])
tank.y = float(line[6])
tank.angle = float(line[7])
tanks.append(tank)
elif line[0] == 'end':
break
else:
self.die_confused('othertank or end', line)
return tanks
def read_bases(self):
"""Get base information."""
bases = []
line = self.read_arr()
if line[0] != 'begin':
self.die_confused('begin', line)
while True:
line = self.read_arr()
if line[0] == 'base':
base = Answer()
base.color = line[1]
base.corner1_x = float(line[2])
base.corner1_y = float(line[3])
base.corner2_x = float(line[4])
base.corner2_y = float(line[5])
base.corner3_x = float(line[6])
base.corner3_y = float(line[7])
base.corner4_x = float(line[8])
base.corner4_y = float(line[9])
bases.append(base)
elif line[0] == 'end':
break
else:
self.die_confused('othertank or end', line)
return bases
def read_constants(self):
"""Get constants."""
line = self.read_arr()
if line[0] != 'begin':
self.die_confused('begin', line)
constants = {}
while True:
line = self.read_arr()
if line[0] == 'constant':
constants[line[1]] = line[2]
elif line[0] == 'end':
break
else:
self.die_confused('constant or end', line)
return constants
# Commands:
def shoot(self, index):
"""Perform a shoot request."""
self.sendline('shoot %s' % index)
self.read_ack()
return self.read_bool()
def speed(self, index, value):
"""Set the desired speed to the specified value."""
self.sendline('speed %s %s' % (index, value))
self.read_ack()
return self.read_bool()
def angvel(self, index, value):
"""Set the desired angular velocity to the specified value."""
self.sendline('angvel %s %s' % (index, value))
self.read_ack()
return self.read_bool()
# Information Requests:
def get_teams(self):
"""Request a list of teams."""
self.sendline('teams')
self.read_ack()
return self.read_teams()
def get_obstacles(self):
"""Request a list of obstacles."""
self.sendline('obstacles')
self.read_ack()
return self.read_obstacles()
def get_occgrid(self, tankid):
"""Request an occupancy grid for a tank"""
self.sendline('occgrid %d' % tankid)
self.read_ack()
return self.read_occgrid()
def get_flags(self):
"""Request a list of flags."""
self.sendline('flags')
self.read_ack()
return self.read_flags()
def get_shots(self):
"""Request a list of shots."""
self.sendline('shots')
self.read_ack()
return self.read_shots()
def get_mytanks(self):
"""Request a list of our tanks."""
self.sendline('mytanks')
self.read_ack()
return self.read_mytanks()
def get_othertanks(self):
"""Request a list of tanks that aren't ours."""
self.sendline('othertanks')
self.read_ack()
return self.read_othertanks()
def get_bases(self):
"""Request a list of bases."""
self.sendline('bases')
self.read_ack()
return self.read_bases()
def get_constants(self):
"""Request a dictionary of game constants."""
self.sendline('constants')
self.read_ack()
return self.read_constants()
# Optimized queries
def get_lots_o_stuff(self):
"""Network-optimized request for mytanks, othertanks, flags, and shots.
Returns a tuple with the four results.
"""
self.sendline('mytanks')
self.sendline('othertanks')
self.sendline('flags')
self.sendline('shots')
self.read_ack()
mytanks = self.read_mytanks()
self.read_ack()
othertanks = self.read_othertanks()
self.read_ack()
flags = self.read_flags()
self.read_ack()
shots = self.read_shots()
return (mytanks, othertanks, flags, shots)
def do_commands(self, commands):
"""Send commands for a bunch of tanks in a network-optimized way."""
for cmd in commands:
self.sendline('speed %s %s' % (cmd.index, cmd.speed))
self.sendline('angvel %s %s' % (cmd.index, cmd.angvel))
if cmd.shoot:
self.sendline('shoot %s' % cmd.index)
results = []
for cmd in commands:
self.read_ack()
result_speed = self.read_bool()
self.read_ack()
result_angvel = self.read_bool()
if cmd.shoot:
self.read_ack()
result_shoot = self.read_bool()
else:
result_shoot = False
results.append((result_speed, result_angvel, result_shoot))
return results
class Answer(object):
"""BZRC returns an Answer for things like tanks, obstacles, etc.
You should probably write your own code for this sort of stuff. We
created this class just to keep things short and sweet.
"""
pass
class Command(object):
"""Class for setting a command for a tank."""
def __init__(self, index, speed, angvel, shoot):
self.index = index
self.speed = speed
self.angvel = angvel
self.shoot = shoot
class UnexpectedResponse(Exception):
"""Exception raised when the BZRC gets confused by a bad response."""
def __init__(self, expected, got):
self.expected = expected
self.got = got
def __str__(self):
return 'BZRC: Expected "%s". Instead got "%s".' % (self.expected,
self.got)
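
# Illustrative usage sketch (an addition, not part of the original module;
# the host, port and driving logic below are hypothetical and depend on how
# the BZRC server was started):
if __name__ == '__main__':
    bzrc = BZRC('localhost', 50100)
    tanks = bzrc.get_mytanks()
    # Ask every tank to drive straight ahead without shooting.
    bzrc.do_commands([Command(tank.index, 1.0, 0.0, False) for tank in tanks])
    bzrc.close()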
# vim: et sw=4 sts=4
| gpl-3.0 | 26,795,852,626,714,104 | 29.698947 | 79 | 0.510012 | false | 3.967891 | false | false | false |
leiferikb/bitpop | src/third_party/pyftpdlib/src/demo/tls_ftpd.py | 4 | 2359 | #!/usr/bin/env python
# $Id: tls_ftpd.py 977 2012-01-22 23:05:09Z g.rodola $
# pyftpdlib is released under the MIT license, reproduced below:
# ======================================================================
# Copyright (C) 2007-2012 Giampaolo Rodola' <[email protected]>
#
# All Rights Reserved
#
# Permission is hereby granted, free of charge, to any person
# obtaining a copy of this software and associated documentation
# files (the "Software"), to deal in the Software without
# restriction, including without limitation the rights to use,
# copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following
# conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
# OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
# OTHER DEALINGS IN THE SOFTWARE.
#
# ======================================================================
"""An RFC-4217 asynchronous FTPS server supporting both SSL and TLS.
Requires PyOpenSSL module (http://pypi.python.org/pypi/pyOpenSSL).
"""
import os
from pyftpdlib import ftpserver
from pyftpdlib.contrib.handlers import TLS_FTPHandler
CERTFILE = os.path.abspath(os.path.join(os.path.dirname(__file__),
"keycert.pem"))
def main():
authorizer = ftpserver.DummyAuthorizer()
authorizer.add_user('user', '12345', '.', perm='elradfmw')
authorizer.add_anonymous('.')
ftp_handler = TLS_FTPHandler
ftp_handler.certfile = CERTFILE
ftp_handler.authorizer = authorizer
# requires SSL for both control and data channel
#ftp_handler.tls_control_required = True
#ftp_handler.tls_data_required = True
ftpd = ftpserver.FTPServer(('', 8021), ftp_handler)
ftpd.serve_forever()
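
# Illustrative client-side session (an addition for reference; it assumes the
# server above is running locally with the demo credentials hard-coded in
# main()):
def example_client_session():
    from ftplib import FTP_TLS
    ftps = FTP_TLS()
    ftps.connect('127.0.0.1', 8021)
    ftps.login('user', '12345')
    ftps.prot_p()            # protect the data channel with TLS as well
    ftps.retrlines('LIST')   # list the shared directory
    ftps.quit()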
if __name__ == '__main__':
main()
| gpl-3.0 | 6,446,451,407,849,144,000 | 38.316667 | 73 | 0.673167 | false | 3.998305 | false | false | false |
mith1979/ansible_automation | applied_python/applied_python/lib/python2.7/site-packages/pylint/test/functional/bad_reversed_sequence.py | 12 | 2132 | """ Checks that reversed() receive proper argument """
# pylint: disable=missing-docstring
# pylint: disable=too-few-public-methods,no-self-use,no-absolute-import
from collections import deque
__revision__ = 0
class GoodReversed(object):
""" Implements __reversed__ """
def __reversed__(self):
return [1, 2, 3]
class SecondGoodReversed(object):
""" Implements __len__ and __getitem__ """
def __len__(self):
return 3
def __getitem__(self, index):
return index
class BadReversed(object):
""" implements only len() """
def __len__(self):
return 3
class SecondBadReversed(object):
""" implements only __getitem__ """
def __getitem__(self, index):
return index
class ThirdBadReversed(dict):
""" dict subclass """
def uninferable(seq):
""" This can't be infered at this moment,
make sure we don't have a false positive.
"""
return reversed(seq)
def test(path):
""" test function """
seq = reversed() # No argument given
seq = reversed(None) # [bad-reversed-sequence]
seq = reversed([1, 2, 3])
seq = reversed((1, 2, 3))
seq = reversed(set()) # [bad-reversed-sequence]
seq = reversed({'a': 1, 'b': 2}) # [bad-reversed-sequence]
seq = reversed(iter([1, 2, 3])) # [bad-reversed-sequence]
seq = reversed(GoodReversed())
seq = reversed(SecondGoodReversed())
seq = reversed(BadReversed()) # [bad-reversed-sequence]
seq = reversed(SecondBadReversed()) # [bad-reversed-sequence]
seq = reversed(range(100))
seq = reversed(ThirdBadReversed()) # [bad-reversed-sequence]
seq = reversed(lambda: None) # [bad-reversed-sequence]
seq = reversed(deque([]))
seq = reversed("123")
seq = uninferable([1, 2, 3])
seq = reversed(path.split("/"))
return seq
def test_dict_ancestor_and_reversed():
"""Don't emit for subclasses of dict, with __reversed__ implemented."""
from collections import OrderedDict
class Child(dict):
def __reversed__(self):
return reversed(range(10))
seq = reversed(OrderedDict())
return reversed(Child()), seq
| apache-2.0 | 6,344,979,343,260,533,000 | 29.028169 | 75 | 0.625235 | false | 3.766784 | false | false | false |
yaoandw/joke | Pods/AVOSCloudCrashReporting/Breakpad/src/testing/test/gmock_output_test.py | 986 | 5999 | #!/usr/bin/env python
#
# Copyright 2008, Google Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Tests the text output of Google C++ Mocking Framework.
SYNOPSIS
gmock_output_test.py --build_dir=BUILD/DIR --gengolden
# where BUILD/DIR contains the built gmock_output_test_ file.
gmock_output_test.py --gengolden
gmock_output_test.py
"""
__author__ = '[email protected] (Zhanyong Wan)'
import os
import re
import sys
import gmock_test_utils
# The flag for generating the golden file
GENGOLDEN_FLAG = '--gengolden'
PROGRAM_PATH = gmock_test_utils.GetTestExecutablePath('gmock_output_test_')
COMMAND = [PROGRAM_PATH, '--gtest_stack_trace_depth=0', '--gtest_print_time=0']
GOLDEN_NAME = 'gmock_output_test_golden.txt'
GOLDEN_PATH = os.path.join(gmock_test_utils.GetSourceDir(), GOLDEN_NAME)
def ToUnixLineEnding(s):
"""Changes all Windows/Mac line endings in s to UNIX line endings."""
return s.replace('\r\n', '\n').replace('\r', '\n')
def RemoveReportHeaderAndFooter(output):
"""Removes Google Test result report's header and footer from the output."""
output = re.sub(r'.*gtest_main.*\n', '', output)
output = re.sub(r'\[.*\d+ tests.*\n', '', output)
output = re.sub(r'\[.* test environment .*\n', '', output)
output = re.sub(r'\[=+\] \d+ tests .* ran.*', '', output)
output = re.sub(r'.* FAILED TESTS\n', '', output)
return output
def RemoveLocations(output):
"""Removes all file location info from a Google Test program's output.
Args:
output: the output of a Google Test program.
Returns:
output with all file location info (in the form of
'DIRECTORY/FILE_NAME:LINE_NUMBER: 'or
'DIRECTORY\\FILE_NAME(LINE_NUMBER): ') replaced by
'FILE:#: '.
"""
return re.sub(r'.*[/\\](.+)(\:\d+|\(\d+\))\:', 'FILE:#:', output)
def NormalizeErrorMarker(output):
"""Normalizes the error marker, which is different on Windows vs on Linux."""
return re.sub(r' error: ', ' Failure\n', output)
def RemoveMemoryAddresses(output):
"""Removes memory addresses from the test output."""
return re.sub(r'@\w+', '@0x#', output)
def RemoveTestNamesOfLeakedMocks(output):
"""Removes the test names of leaked mock objects from the test output."""
return re.sub(r'\(used in test .+\) ', '', output)
def GetLeakyTests(output):
"""Returns a list of test names that leak mock objects."""
# findall() returns a list of all matches of the regex in output.
# For example, if '(used in test FooTest.Bar)' is in output, the
# list will contain 'FooTest.Bar'.
return re.findall(r'\(used in test (.+)\)', output)
def GetNormalizedOutputAndLeakyTests(output):
"""Normalizes the output of gmock_output_test_.
Args:
output: The test output.
Returns:
A tuple (the normalized test output, the list of test names that have
leaked mocks).
"""
output = ToUnixLineEnding(output)
output = RemoveReportHeaderAndFooter(output)
output = NormalizeErrorMarker(output)
output = RemoveLocations(output)
output = RemoveMemoryAddresses(output)
return (RemoveTestNamesOfLeakedMocks(output), GetLeakyTests(output))
def GetShellCommandOutput(cmd):
"""Runs a command in a sub-process, and returns its STDOUT in a string."""
return gmock_test_utils.Subprocess(cmd, capture_stderr=False).output
def GetNormalizedCommandOutputAndLeakyTests(cmd):
"""Runs a command and returns its normalized output and a list of leaky tests.
Args:
cmd: the shell command.
"""
# Disables exception pop-ups on Windows.
os.environ['GTEST_CATCH_EXCEPTIONS'] = '1'
return GetNormalizedOutputAndLeakyTests(GetShellCommandOutput(cmd))
class GMockOutputTest(gmock_test_utils.TestCase):
def testOutput(self):
(output, leaky_tests) = GetNormalizedCommandOutputAndLeakyTests(COMMAND)
golden_file = open(GOLDEN_PATH, 'rb')
golden = golden_file.read()
golden_file.close()
# The normalized output should match the golden file.
self.assertEquals(golden, output)
# The raw output should contain 2 leaked mock object errors for
# test GMockOutputTest.CatchesLeakedMocks.
self.assertEquals(['GMockOutputTest.CatchesLeakedMocks',
'GMockOutputTest.CatchesLeakedMocks'],
leaky_tests)
if __name__ == '__main__':
if sys.argv[1:] == [GENGOLDEN_FLAG]:
(output, _) = GetNormalizedCommandOutputAndLeakyTests(COMMAND)
golden_file = open(GOLDEN_PATH, 'wb')
golden_file.write(output)
golden_file.close()
else:
gmock_test_utils.Main()
| mit | -4,474,121,058,681,443,000 | 32.327778 | 80 | 0.710118 | false | 3.716853 | true | false | false |
slarosa/QGIS | python/plugins/sextante/tests/SextanteToolsTest.py | 3 | 3013 | # -*- coding: utf-8 -*-
"""
***************************************************************************
SextanteToolsTest.py
---------------------
Date : April 2013
Copyright : (C) 2013 by Victor Olaya
Email : volayaf at gmail dot com
***************************************************************************
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
* the Free Software Foundation; either version 2 of the License, or *
* (at your option) any later version. *
* *
***************************************************************************
"""
__author__ = 'Victor Olaya'
__date__ = 'April 2013'
__copyright__ = '(C) 2013, Victor Olaya'
# This will get replaced with a git SHA1 when you do a git archive
__revision__ = '$Format:%H$'
import sextante
import unittest
from sextante.tests.TestData import points, points2, polygons, polygons2, lines, union,\
table, polygonsGeoJson, raster
from sextante.core import Sextante
from sextante.tools.vector import values
from sextante.tools.general import getfromname
class SextanteToolsTest(unittest.TestCase):
'''tests the method imported when doing an "import sextante", and also in sextante.tools.
They are mostly convenience tools'''
def test_getobject(self):
layer = sextante.getobject(points());
self.assertIsNotNone(layer)
layer = sextante.getobject("points");
self.assertIsNotNone(layer)
def test_runandload(self):
sextante.runandload("qgis:countpointsinpolygon",polygons(),points(),"NUMPOINTS", None)
layer = getfromname("Result")
self.assertIsNotNone(layer)
def test_featuresWithoutSelection(self):
layer = sextante.getobject(points())
features = sextante.getfeatures(layer)
self.assertEqual(12, len(features))
def test_featuresWithSelection(self):
layer = sextante.getobject(points())
feature = layer.getFeatures().next()
selected = [feature.id()]
layer.setSelectedFeatures(selected)
features = sextante.getfeatures(layer)
self.assertEqual(1, len(features))
layer.setSelectedFeatures([])
def test_attributeValues(self):
layer = sextante.getobject(points())
attributeValues = values(layer, "ID")
i = 1
for value in attributeValues['ID']:
self.assertEqual(int(i), int(value))
i+=1
self.assertEquals(13,i)
def test_extent(self):
pass
def suite():
suite = unittest.makeSuite(SextanteToolsTest, 'test')
return suite
def runtests():
result = unittest.TestResult()
testsuite = suite()
testsuite.run(result)
return result
| gpl-2.0 | 6,003,430,409,727,838,000 | 34.447059 | 94 | 0.563226 | false | 4.392128 | true | false | false |
largelymfs/IRModel | src/models/Tfidf.py | 1 | 2000 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Author: largelymfs
# @Date: 2014-12-23 20:06:10
# @Last Modified by: largelymfs
# @Last Modified time: 2014-12-23 23:18:08
import numpy as np
import numpy.linalg as LA
class TFIDF:
def __init__(self, filename):
#get the self.vocab
self.vocab = {}
id = 0
doc = 0
with open(filename) as fin:
self.doc = [l.strip().split() for l in fin]
for words in self.doc:
for word in words:
if word not in self.vocab:
self.vocab[word] = id
id+=1
doc+=1
self.word_number = id
self.doc_number = doc
self.matrix = np.zeros((self.doc_number, self.word_number))
self.idf = {}
for words in self.doc:
now = set(words)
for word in now:
word_id = self.vocab[word]
if word_id not in self.idf:
self.idf[word_id] = 1
else:
self.idf[word_id] +=1
for k in self.idf.keys():
self.idf[k] = np.log((float(self.doc_number) / float(self.idf[k])))
id = 0
for words in self.doc:
total = 0.0
for word in words:
self.matrix[id][self.vocab[word]] +=1.0
total +=1.0
			if total==0:
				print words
				# keep matrix row indices aligned with self.doc even for empty documents
				id+=1
				continue
self.matrix[id] = self.matrix[id] * (1./total)
id+=1
self.matrix = self.matrix.T
for i in range(self.word_number):
self.matrix[i] = self.matrix[i] * (self.idf[i])
self.matrix = self.matrix.T
def get_score(self, v1, v2):
return np.dot(v1, v2)/(LA.norm(v1) * LA.norm(v2))
def querry(self, q):
vector = np.zeros(self.word_number)
total = 0.0
for w in q:
if w in self.vocab:
vector[self.vocab[w]]+=1.0
total +=1.0
vector = vector * (1./total)
for i in range(self.word_number):
vector[i] *= (self.idf[i])
result = [(i, self.get_score(self.matrix[i], vector)) for i in range(self.doc_number)]
result = sorted(result, cmp=lambda x, y:-cmp(x[1],y[1]))[:10]
for (id, score) in result:
print id, "".join(self.doc[id])
if __name__=='__main__':
model = TFIDF("./../../data/demo.txt.out")
model.querry(["进球", "晋级", "胜利"]) | mit | -5,200,241,484,263,836,000 | 25.878378 | 88 | 0.606137 | false | 2.478803 | false | false | false |
vprnet/traces | app/index.py | 1 | 1098 | #!/usr/local/bin/python2.7
from flask import Flask
import sys
from flask_frozen import Freezer
from upload_s3 import set_metadata
from config import AWS_DIRECTORY
from query import get_slugs
app = Flask(__name__)
app.config.from_object('config')
from views import *
# Serving from s3 leads to some complications in how static files are served
if len(sys.argv) > 1 and sys.argv[1] == 'build':
PROJECT_ROOT = '/' + AWS_DIRECTORY
else:
PROJECT_ROOT = '/'
class WebFactionMiddleware(object):
def __init__(self, app):
self.app = app
def __call__(self, environ, start_response):
environ['SCRIPT_NAME'] = PROJECT_ROOT
return self.app(environ, start_response)
app.wsgi_app = WebFactionMiddleware(app.wsgi_app)
freezer = Freezer(app)
@freezer.register_generator
def post():
slugs, links = get_slugs(title=False)
for i in slugs:
yield {'title': i}
if __name__ == '__main__':
if len(sys.argv) > 1 and sys.argv[1] == 'build':
app.debug = True
freezer.freeze()
set_metadata()
else:
app.run(debug=True)
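# Usage sketch (added for illustration): the app is either served locally or frozen
# to static files and pushed to S3, e.g.
#   python index.py          # run the local development server
#   python index.py build    # freeze the site and upload via set_metadata()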
| apache-2.0 | -6,234,646,654,411,311,000 | 21.875 | 76 | 0.652095 | false | 3.287425 | false | false | false |
jorik041/scikit-learn | sklearn/linear_model/randomized_l1.py | 95 | 23365 | """
Randomized Lasso/Logistic: feature selection based on Lasso and
sparse Logistic Regression
"""
# Author: Gael Varoquaux, Alexandre Gramfort
#
# License: BSD 3 clause
import itertools
from abc import ABCMeta, abstractmethod
import warnings
import numpy as np
from scipy.sparse import issparse
from scipy import sparse
from scipy.interpolate import interp1d
from .base import center_data
from ..base import BaseEstimator, TransformerMixin
from ..externals import six
from ..externals.joblib import Memory, Parallel, delayed
from ..utils import (as_float_array, check_random_state, check_X_y,
check_array, safe_mask, ConvergenceWarning)
from ..utils.validation import check_is_fitted
from .least_angle import lars_path, LassoLarsIC
from .logistic import LogisticRegression
###############################################################################
# Randomized linear model: feature selection
def _resample_model(estimator_func, X, y, scaling=.5, n_resampling=200,
n_jobs=1, verbose=False, pre_dispatch='3*n_jobs',
random_state=None, sample_fraction=.75, **params):
random_state = check_random_state(random_state)
# We are generating 1 - weights, and not weights
n_samples, n_features = X.shape
if not (0 < scaling < 1):
raise ValueError(
"'scaling' should be between 0 and 1. Got %r instead." % scaling)
scaling = 1. - scaling
scores_ = 0.0
for active_set in Parallel(n_jobs=n_jobs, verbose=verbose,
pre_dispatch=pre_dispatch)(
delayed(estimator_func)(
X, y, weights=scaling * random_state.random_integers(
0, 1, size=(n_features,)),
mask=(random_state.rand(n_samples) < sample_fraction),
verbose=max(0, verbose - 1),
**params)
for _ in range(n_resampling)):
scores_ += active_set
scores_ /= n_resampling
return scores_
class BaseRandomizedLinearModel(six.with_metaclass(ABCMeta, BaseEstimator,
TransformerMixin)):
"""Base class to implement randomized linear models for feature selection
This implements the strategy by Meinshausen and Buhlman:
stability selection with randomized sampling, and random re-weighting of
the penalty.
"""
@abstractmethod
def __init__(self):
pass
_center_data = staticmethod(center_data)
def fit(self, X, y):
"""Fit the model using X, y as training data.
Parameters
----------
X : array-like, sparse matrix shape = [n_samples, n_features]
Training data.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : object
Returns an instance of self.
"""
X, y = check_X_y(X, y, ['csr', 'csc', 'coo'], y_numeric=True)
X = as_float_array(X, copy=False)
n_samples, n_features = X.shape
X, y, X_mean, y_mean, X_std = self._center_data(X, y,
self.fit_intercept,
self.normalize)
estimator_func, params = self._make_estimator_and_params(X, y)
memory = self.memory
if isinstance(memory, six.string_types):
memory = Memory(cachedir=memory)
scores_ = memory.cache(
_resample_model, ignore=['verbose', 'n_jobs', 'pre_dispatch']
)(
estimator_func, X, y,
scaling=self.scaling, n_resampling=self.n_resampling,
n_jobs=self.n_jobs, verbose=self.verbose,
pre_dispatch=self.pre_dispatch, random_state=self.random_state,
sample_fraction=self.sample_fraction, **params)
if scores_.ndim == 1:
scores_ = scores_[:, np.newaxis]
self.all_scores_ = scores_
self.scores_ = np.max(self.all_scores_, axis=1)
return self
def _make_estimator_and_params(self, X, y):
"""Return the parameters passed to the estimator"""
raise NotImplementedError
def get_support(self, indices=False):
"""Return a mask, or list, of the features/indices selected."""
check_is_fitted(self, 'scores_')
mask = self.scores_ > self.selection_threshold
return mask if not indices else np.where(mask)[0]
# XXX: the two function below are copy/pasted from feature_selection,
# Should we add an intermediate base class?
def transform(self, X):
"""Transform a new matrix using the selected features"""
mask = self.get_support()
X = check_array(X)
if len(mask) != X.shape[1]:
raise ValueError("X has a different shape than during fitting.")
return check_array(X)[:, safe_mask(X, mask)]
def inverse_transform(self, X):
"""Transform a new matrix using the selected features"""
support = self.get_support()
if X.ndim == 1:
X = X[None, :]
Xt = np.zeros((X.shape[0], support.size))
Xt[:, support] = X
return Xt
###############################################################################
# Randomized lasso: regression settings
def _randomized_lasso(X, y, weights, mask, alpha=1., verbose=False,
precompute=False, eps=np.finfo(np.float).eps,
max_iter=500):
X = X[safe_mask(X, mask)]
y = y[mask]
# Center X and y to avoid fit the intercept
X -= X.mean(axis=0)
y -= y.mean()
alpha = np.atleast_1d(np.asarray(alpha, dtype=np.float))
X = (1 - weights) * X
with warnings.catch_warnings():
warnings.simplefilter('ignore', ConvergenceWarning)
alphas_, _, coef_ = lars_path(X, y,
Gram=precompute, copy_X=False,
copy_Gram=False, alpha_min=np.min(alpha),
method='lasso', verbose=verbose,
max_iter=max_iter, eps=eps)
if len(alpha) > 1:
if len(alphas_) > 1: # np.min(alpha) < alpha_min
interpolator = interp1d(alphas_[::-1], coef_[:, ::-1],
bounds_error=False, fill_value=0.)
scores = (interpolator(alpha) != 0.0)
else:
scores = np.zeros((X.shape[1], len(alpha)), dtype=np.bool)
else:
scores = coef_[:, -1] != 0.0
return scores
class RandomizedLasso(BaseRandomizedLinearModel):
"""Randomized Lasso.
Randomized Lasso works by resampling the train data and computing
a Lasso on each resampling. In short, the features selected more
often are good features. It is also known as stability selection.
Read more in the :ref:`User Guide <randomized_l1>`.
Parameters
----------
alpha : float, 'aic', or 'bic', optional
The regularization parameter alpha parameter in the Lasso.
Warning: this is not the alpha parameter in the stability selection
article which is scaling.
scaling : float, optional
The alpha parameter in the stability selection article used to
randomly scale the features. Should be between 0 and 1.
sample_fraction : float, optional
The fraction of samples to be used in each randomized design.
Should be between 0 and 1. If 1, all samples are used.
n_resampling : int, optional
Number of randomized models.
selection_threshold: float, optional
The score above which features should be selected.
fit_intercept : boolean, optional
whether to calculate the intercept for this model. If set
to false, no intercept will be used in calculations
(e.g. data is expected to be already centered).
verbose : boolean or integer, optional
Sets the verbosity amount
normalize : boolean, optional, default True
If True, the regressors X will be normalized before regression.
precompute : True | False | 'auto'
Whether to use a precomputed Gram matrix to speed up
calculations. If set to 'auto' let us decide. The Gram
matrix can also be passed as argument.
max_iter : integer, optional
Maximum number of iterations to perform in the Lars algorithm.
eps : float, optional
The machine-precision regularization in the computation of the
Cholesky diagonal factors. Increase this for very ill-conditioned
systems. Unlike the 'tol' parameter in some iterative
optimization-based algorithms, this parameter does not control
the tolerance of the optimization.
n_jobs : integer, optional
Number of CPUs to use during the resampling. If '-1', use
all the CPUs
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`.
pre_dispatch : int, or string, optional
Controls the number of jobs that get dispatched during parallel
execution. Reducing this number can be useful to avoid an
explosion of memory consumption when more jobs get dispatched
than CPUs can process. This parameter can be:
- None, in which case all the jobs are immediately
created and spawned. Use this for lightweight and
fast-running jobs, to avoid delays due to on-demand
spawning of the jobs
- An int, giving the exact number of total jobs that are
spawned
- A string, giving an expression as a function of n_jobs,
as in '2*n_jobs'
memory : Instance of joblib.Memory or string
Used for internal caching. By default, no caching is done.
If a string is given, it is the path to the caching directory.
Attributes
----------
scores_ : array, shape = [n_features]
Feature scores between 0 and 1.
all_scores_ : array, shape = [n_features, n_reg_parameter]
Feature scores between 0 and 1 for all values of the regularization \
parameter. The reference article suggests ``scores_`` is the max of \
``all_scores_``.
Examples
--------
>>> from sklearn.linear_model import RandomizedLasso
>>> randomized_lasso = RandomizedLasso()
Notes
-----
See examples/linear_model/plot_sparse_recovery.py for an example.
References
----------
Stability selection
Nicolai Meinshausen, Peter Buhlmann
Journal of the Royal Statistical Society: Series B
Volume 72, Issue 4, pages 417-473, September 2010
DOI: 10.1111/j.1467-9868.2010.00740.x
See also
--------
RandomizedLogisticRegression, LogisticRegression
"""
def __init__(self, alpha='aic', scaling=.5, sample_fraction=.75,
n_resampling=200, selection_threshold=.25,
fit_intercept=True, verbose=False,
normalize=True, precompute='auto',
max_iter=500,
eps=np.finfo(np.float).eps, random_state=None,
n_jobs=1, pre_dispatch='3*n_jobs',
memory=Memory(cachedir=None, verbose=0)):
self.alpha = alpha
self.scaling = scaling
self.sample_fraction = sample_fraction
self.n_resampling = n_resampling
self.fit_intercept = fit_intercept
self.max_iter = max_iter
self.verbose = verbose
self.normalize = normalize
self.precompute = precompute
self.eps = eps
self.random_state = random_state
self.n_jobs = n_jobs
self.selection_threshold = selection_threshold
self.pre_dispatch = pre_dispatch
self.memory = memory
def _make_estimator_and_params(self, X, y):
assert self.precompute in (True, False, None, 'auto')
alpha = self.alpha
if alpha in ('aic', 'bic'):
model = LassoLarsIC(precompute=self.precompute,
criterion=self.alpha,
max_iter=self.max_iter,
eps=self.eps)
model.fit(X, y)
self.alpha_ = alpha = model.alpha_
return _randomized_lasso, dict(alpha=alpha, max_iter=self.max_iter,
eps=self.eps,
precompute=self.precompute)
###############################################################################
# Randomized logistic: classification settings
def _randomized_logistic(X, y, weights, mask, C=1., verbose=False,
fit_intercept=True, tol=1e-3):
X = X[safe_mask(X, mask)]
y = y[mask]
if issparse(X):
size = len(weights)
weight_dia = sparse.dia_matrix((1 - weights, 0), (size, size))
X = X * weight_dia
else:
X *= (1 - weights)
C = np.atleast_1d(np.asarray(C, dtype=np.float))
scores = np.zeros((X.shape[1], len(C)), dtype=np.bool)
for this_C, this_scores in zip(C, scores.T):
# XXX : would be great to do it with a warm_start ...
clf = LogisticRegression(C=this_C, tol=tol, penalty='l1', dual=False,
fit_intercept=fit_intercept)
clf.fit(X, y)
this_scores[:] = np.any(
np.abs(clf.coef_) > 10 * np.finfo(np.float).eps, axis=0)
return scores
class RandomizedLogisticRegression(BaseRandomizedLinearModel):
"""Randomized Logistic Regression
Randomized Regression works by resampling the train data and computing
a LogisticRegression on each resampling. In short, the features selected
more often are good features. It is also known as stability selection.
Read more in the :ref:`User Guide <randomized_l1>`.
Parameters
----------
C : float, optional, default=1
The regularization parameter C in the LogisticRegression.
scaling : float, optional, default=0.5
The alpha parameter in the stability selection article used to
randomly scale the features. Should be between 0 and 1.
sample_fraction : float, optional, default=0.75
The fraction of samples to be used in each randomized design.
Should be between 0 and 1. If 1, all samples are used.
n_resampling : int, optional, default=200
Number of randomized models.
selection_threshold : float, optional, default=0.25
The score above which features should be selected.
fit_intercept : boolean, optional, default=True
whether to calculate the intercept for this model. If set
to false, no intercept will be used in calculations
(e.g. data is expected to be already centered).
verbose : boolean or integer, optional
Sets the verbosity amount
normalize : boolean, optional, default=True
If True, the regressors X will be normalized before regression.
tol : float, optional, default=1e-3
tolerance for stopping criteria of LogisticRegression
n_jobs : integer, optional
Number of CPUs to use during the resampling. If '-1', use
all the CPUs
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`.
pre_dispatch : int, or string, optional
Controls the number of jobs that get dispatched during parallel
execution. Reducing this number can be useful to avoid an
explosion of memory consumption when more jobs get dispatched
than CPUs can process. This parameter can be:
- None, in which case all the jobs are immediately
created and spawned. Use this for lightweight and
fast-running jobs, to avoid delays due to on-demand
spawning of the jobs
- An int, giving the exact number of total jobs that are
spawned
- A string, giving an expression as a function of n_jobs,
as in '2*n_jobs'
memory : Instance of joblib.Memory or string
Used for internal caching. By default, no caching is done.
If a string is given, it is the path to the caching directory.
Attributes
----------
scores_ : array, shape = [n_features]
Feature scores between 0 and 1.
all_scores_ : array, shape = [n_features, n_reg_parameter]
Feature scores between 0 and 1 for all values of the regularization \
parameter. The reference article suggests ``scores_`` is the max \
of ``all_scores_``.
Examples
--------
>>> from sklearn.linear_model import RandomizedLogisticRegression
>>> randomized_logistic = RandomizedLogisticRegression()
Notes
-----
See examples/linear_model/plot_sparse_recovery.py for an example.
References
----------
Stability selection
Nicolai Meinshausen, Peter Buhlmann
Journal of the Royal Statistical Society: Series B
Volume 72, Issue 4, pages 417-473, September 2010
DOI: 10.1111/j.1467-9868.2010.00740.x
See also
--------
RandomizedLasso, Lasso, ElasticNet
"""
def __init__(self, C=1, scaling=.5, sample_fraction=.75,
n_resampling=200,
selection_threshold=.25, tol=1e-3,
fit_intercept=True, verbose=False,
normalize=True,
random_state=None,
n_jobs=1, pre_dispatch='3*n_jobs',
memory=Memory(cachedir=None, verbose=0)):
self.C = C
self.scaling = scaling
self.sample_fraction = sample_fraction
self.n_resampling = n_resampling
self.fit_intercept = fit_intercept
self.verbose = verbose
self.normalize = normalize
self.tol = tol
self.random_state = random_state
self.n_jobs = n_jobs
self.selection_threshold = selection_threshold
self.pre_dispatch = pre_dispatch
self.memory = memory
def _make_estimator_and_params(self, X, y):
params = dict(C=self.C, tol=self.tol,
fit_intercept=self.fit_intercept)
return _randomized_logistic, params
def _center_data(self, X, y, fit_intercept, normalize=False):
"""Center the data in X but not in y"""
X, _, Xmean, _, X_std = center_data(X, y, fit_intercept,
normalize=normalize)
return X, y, Xmean, y, X_std
###############################################################################
# Stability paths
def _lasso_stability_path(X, y, mask, weights, eps):
"Inner loop of lasso_stability_path"
X = X * weights[np.newaxis, :]
X = X[safe_mask(X, mask), :]
y = y[mask]
alpha_max = np.max(np.abs(np.dot(X.T, y))) / X.shape[0]
alpha_min = eps * alpha_max # set for early stopping in path
with warnings.catch_warnings():
warnings.simplefilter('ignore', ConvergenceWarning)
alphas, _, coefs = lars_path(X, y, method='lasso', verbose=False,
alpha_min=alpha_min)
# Scale alpha by alpha_max
alphas /= alphas[0]
# Sort alphas in assending order
alphas = alphas[::-1]
coefs = coefs[:, ::-1]
# Get rid of the alphas that are too small
mask = alphas >= eps
# We also want to keep the first one: it should be close to the OLS
# solution
mask[0] = True
alphas = alphas[mask]
coefs = coefs[:, mask]
return alphas, coefs
def lasso_stability_path(X, y, scaling=0.5, random_state=None,
n_resampling=200, n_grid=100,
sample_fraction=0.75,
eps=4 * np.finfo(np.float).eps, n_jobs=1,
verbose=False):
"""Stabiliy path based on randomized Lasso estimates
Read more in the :ref:`User Guide <randomized_l1>`.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
training data.
y : array-like, shape = [n_samples]
target values.
scaling : float, optional, default=0.5
The alpha parameter in the stability selection article used to
randomly scale the features. Should be between 0 and 1.
random_state : integer or numpy.random.RandomState, optional
The generator used to randomize the design.
n_resampling : int, optional, default=200
Number of randomized models.
n_grid : int, optional, default=100
Number of grid points. The path is linearly reinterpolated
on a grid between 0 and 1 before computing the scores.
sample_fraction : float, optional, default=0.75
The fraction of samples to be used in each randomized design.
Should be between 0 and 1. If 1, all samples are used.
eps : float, optional
Smallest value of alpha / alpha_max considered
n_jobs : integer, optional
Number of CPUs to use during the resampling. If '-1', use
all the CPUs
verbose : boolean or integer, optional
Sets the verbosity amount
Returns
-------
alphas_grid : array, shape ~ [n_grid]
The grid points between 0 and 1: alpha/alpha_max
scores_path : array, shape = [n_features, n_grid]
The scores for each feature along the path.
Notes
-----
See examples/linear_model/plot_sparse_recovery.py for an example.
"""
rng = check_random_state(random_state)
if not (0 < scaling < 1):
raise ValueError("Parameter 'scaling' should be between 0 and 1."
" Got %r instead." % scaling)
n_samples, n_features = X.shape
paths = Parallel(n_jobs=n_jobs, verbose=verbose)(
delayed(_lasso_stability_path)(
X, y, mask=rng.rand(n_samples) < sample_fraction,
weights=1. - scaling * rng.random_integers(0, 1,
size=(n_features,)),
eps=eps)
for k in range(n_resampling))
all_alphas = sorted(list(set(itertools.chain(*[p[0] for p in paths]))))
# Take approximately n_grid values
stride = int(max(1, int(len(all_alphas) / float(n_grid))))
all_alphas = all_alphas[::stride]
if not all_alphas[-1] == 1:
all_alphas.append(1.)
all_alphas = np.array(all_alphas)
scores_path = np.zeros((n_features, len(all_alphas)))
for alphas, coefs in paths:
if alphas[0] != 0:
alphas = np.r_[0, alphas]
coefs = np.c_[np.ones((n_features, 1)), coefs]
if alphas[-1] != all_alphas[-1]:
alphas = np.r_[alphas, all_alphas[-1]]
coefs = np.c_[coefs, np.zeros((n_features, 1))]
scores_path += (interp1d(alphas, coefs,
kind='nearest', bounds_error=False,
fill_value=0, axis=-1)(all_alphas) != 0)
scores_path /= n_resampling
return all_alphas, scores_path
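if __name__ == '__main__':
    # Minimal illustration (added; not part of scikit-learn): stability selection on
    # a tiny synthetic regression problem. The shapes, noise level, alpha and
    # n_resampling values are arbitrary examples, not recommended settings.
    rng = np.random.RandomState(0)
    X = rng.randn(60, 10)
    y = X[:, 0] + 0.5 * X[:, 1] + 0.01 * rng.randn(60)
    selector = RandomizedLasso(alpha=0.01, random_state=0, n_resampling=50).fit(X, y)
    print(selector.scores_)                      # selection frequency per feature
    print(selector.get_support(indices=True))    # features above selection_threshold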
| bsd-3-clause | 3,757,223,006,510,591,000 | 36.028526 | 79 | 0.599486 | false | 4.17605 | false | false | false |
canvasnetworks/canvas | common/boto/s3/acl.py | 17 | 5397 | # Copyright (c) 2006,2007 Mitch Garnaat http://garnaat.org/
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
from boto.s3.user import User
CannedACLStrings = ['private', 'public-read',
'public-read-write', 'authenticated-read',
'bucket-owner-read', 'bucket-owner-full-control']
class Policy:
def __init__(self, parent=None):
self.parent = parent
self.acl = None
def __repr__(self):
grants = []
for g in self.acl.grants:
if g.id == self.owner.id:
grants.append("%s (owner) = %s" % (g.display_name, g.permission))
else:
if g.type == 'CanonicalUser':
u = g.display_name
elif g.type == 'Group':
u = g.uri
else:
u = g.email_address
grants.append("%s = %s" % (u, g.permission))
return "<Policy: %s>" % ", ".join(grants)
def startElement(self, name, attrs, connection):
if name == 'Owner':
self.owner = User(self)
return self.owner
elif name == 'AccessControlList':
self.acl = ACL(self)
return self.acl
else:
return None
def endElement(self, name, value, connection):
if name == 'Owner':
pass
elif name == 'AccessControlList':
pass
else:
setattr(self, name, value)
def to_xml(self):
s = '<AccessControlPolicy>'
s += self.owner.to_xml()
s += self.acl.to_xml()
s += '</AccessControlPolicy>'
return s
class ACL:
def __init__(self, policy=None):
self.policy = policy
self.grants = []
def add_grant(self, grant):
self.grants.append(grant)
def add_email_grant(self, permission, email_address):
grant = Grant(permission=permission, type='AmazonCustomerByEmail',
email_address=email_address)
self.grants.append(grant)
def add_user_grant(self, permission, user_id, display_name=None):
grant = Grant(permission=permission, type='CanonicalUser', id=user_id, display_name=display_name)
self.grants.append(grant)
def startElement(self, name, attrs, connection):
if name == 'Grant':
self.grants.append(Grant(self))
return self.grants[-1]
else:
return None
def endElement(self, name, value, connection):
if name == 'Grant':
pass
else:
setattr(self, name, value)
def to_xml(self):
s = '<AccessControlList>'
for grant in self.grants:
s += grant.to_xml()
s += '</AccessControlList>'
return s
class Grant:
NameSpace = 'xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"'
def __init__(self, permission=None, type=None, id=None,
display_name=None, uri=None, email_address=None):
self.permission = permission
self.id = id
self.display_name = display_name
self.uri = uri
self.email_address = email_address
self.type = type
def startElement(self, name, attrs, connection):
if name == 'Grantee':
self.type = attrs['xsi:type']
return None
def endElement(self, name, value, connection):
if name == 'ID':
self.id = value
elif name == 'DisplayName':
self.display_name = value
elif name == 'URI':
self.uri = value
elif name == 'EmailAddress':
self.email_address = value
elif name == 'Grantee':
pass
elif name == 'Permission':
self.permission = value
else:
setattr(self, name, value)
def to_xml(self):
s = '<Grant>'
s += '<Grantee %s xsi:type="%s">' % (self.NameSpace, self.type)
if self.type == 'CanonicalUser':
s += '<ID>%s</ID>' % self.id
s += '<DisplayName>%s</DisplayName>' % self.display_name
elif self.type == 'Group':
s += '<URI>%s</URI>' % self.uri
else:
s += '<EmailAddress>%s</EmailAddress>' % self.email_address
s += '</Grantee>'
s += '<Permission>%s</Permission>' % self.permission
s += '</Grant>'
return s
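# Illustrative sketch (added; not part of boto): build an ACL by hand and render the
# XML that S3 expects. The canonical user id, display name and e-mail address below
# are made-up placeholders.
if __name__ == '__main__':
    owner = User()
    owner.id = '1234567890abcdef'
    owner.display_name = 'example-owner'
    acl = ACL()
    # The owner keeps full control; a second grantee gets read access by e-mail.
    acl.add_user_grant('FULL_CONTROL', owner.id, display_name=owner.display_name)
    acl.add_email_grant('READ', '[email protected]')
    policy = Policy()
    policy.owner = owner
    policy.acl = acl
    print(policy.to_xml())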
| bsd-3-clause | 1,424,259,395,550,491,400 | 32.110429 | 105 | 0.569205 | false | 4.02161 | false | false | false |
dana-i2cat/felix | vt_manager/src/python/vt_manager/communication/XmlRpcClient.py | 3 | 1276 | import xmlrpclib, logging
from urlparse import urlparse
"""
author: msune, CarolinaFernandez
Server monitoring thread
"""
class XmlRpcClient():
"""
Calling a remote method with variable number of parameters
"""
@staticmethod
def callRPCMethodBasicAuth(url,userName,password,methodName,*params):
result = None
#Incrust basic authentication
parsed = urlparse(url)
newUrl = parsed.scheme+"://"+userName+":"+password+"@"+parsed.netloc+parsed.path
if not parsed.query == "":
newUrl += "?"+parsed.query
try:
result = XmlRpcClient.callRPCMethod(newUrl,methodName,*params)
except Exception:
raise
return result
@staticmethod
def callRPCMethod(url,methodName,*params):
result = None
try:
server = xmlrpclib.Server(url)
result = getattr(server,methodName)(*params)
except Exception as e:
turl=url.split('@')
if len(turl)>1:
url = turl[0].split('//')[0]+'//'+turl[-1]
te =str(e)
if '@' in te:
e=te[0:te.find('for ')]+te[te.find('@')+1:]
logging.error("XMLRPC Client error: can't connect to method %s at %s" % (methodName, url))
logging.error(e)
raise Exception("XMLRPC Client error: can't connect to method %s at %s\n" % (methodName, url) + str(e))
return result
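# Usage sketch (added for illustration; not part of the original module). The
# endpoint URL, credentials and remote method names below are hypothetical
# placeholders; callRPCMethodBasicAuth only rewrites the URL into the
# scheme://user:password@host/path form before delegating to callRPCMethod.
#
#   XmlRpcClient.callRPCMethod(
#       "https://agent.example.org:9229/xmlrpc/agent", "ping")
#
#   XmlRpcClient.callRPCMethodBasicAuth(
#       "https://agent.example.org:9229/xmlrpc/agent",
#       "user", "secret", "listResources", "param1", "param2")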
| apache-2.0 | 4,492,516,822,640,603,000 | 27.355556 | 106 | 0.659091 | false | 3.197995 | false | false | false |
westinedu/newertrends | django/template/__init__.py | 561 | 3247 | """
This is the Django template system.
How it works:
The Lexer.tokenize() function converts a template string (i.e., a string containing
markup with custom template tags) to tokens, which can be either plain text
(TOKEN_TEXT), variables (TOKEN_VAR) or block statements (TOKEN_BLOCK).
The Parser() class takes a list of tokens in its constructor, and its parse()
method returns a compiled template -- which is, under the hood, a list of
Node objects.
Each Node is responsible for creating some sort of output -- e.g. simple text
(TextNode), variable values in a given context (VariableNode), results of basic
logic (IfNode), results of looping (ForNode), or anything else. The core Node
types are TextNode, VariableNode, IfNode and ForNode, but plugin modules can
define their own custom node types.
Each Node has a render() method, which takes a Context and returns a string of
the rendered node. For example, the render() method of a Variable Node returns
the variable's value as a string. The render() method of an IfNode returns the
rendered output of whatever was inside the loop, recursively.
The Template class is a convenient wrapper that takes care of template
compilation and rendering.
Usage:
The only thing you should ever use directly in this file is the Template class.
Create a compiled template object with a template_string, then call render()
with a context. In the compilation stage, the TemplateSyntaxError exception
will be raised if the template doesn't have proper syntax.
Sample code:
>>> from django import template
>>> s = u'<html>{% if test %}<h1>{{ varvalue }}</h1>{% endif %}</html>'
>>> t = template.Template(s)
(t is now a compiled template, and its render() method can be called multiple
times with multiple contexts)
>>> c = template.Context({'test':True, 'varvalue': 'Hello'})
>>> t.render(c)
u'<html><h1>Hello</h1></html>'
>>> c = template.Context({'test':False, 'varvalue': 'Hello'})
>>> t.render(c)
u'<html></html>'
"""
# Template lexing symbols
from django.template.base import (ALLOWED_VARIABLE_CHARS, BLOCK_TAG_END,
BLOCK_TAG_START, COMMENT_TAG_END, COMMENT_TAG_START,
FILTER_ARGUMENT_SEPARATOR, FILTER_SEPARATOR, SINGLE_BRACE_END,
SINGLE_BRACE_START, TOKEN_BLOCK, TOKEN_COMMENT, TOKEN_TEXT, TOKEN_VAR,
TRANSLATOR_COMMENT_MARK, UNKNOWN_SOURCE, VARIABLE_ATTRIBUTE_SEPARATOR,
VARIABLE_TAG_END, VARIABLE_TAG_START, filter_re, tag_re)
# Exceptions
from django.template.base import (ContextPopException, InvalidTemplateLibrary,
TemplateDoesNotExist, TemplateEncodingError, TemplateSyntaxError,
VariableDoesNotExist)
# Template parts
from django.template.base import (Context, FilterExpression, Lexer, Node,
NodeList, Parser, RequestContext, Origin, StringOrigin, Template,
TextNode, Token, TokenParser, Variable, VariableNode, constant_string,
filter_raw_string)
# Compiling templates
from django.template.base import (compile_string, resolve_variable,
unescape_string_literal, generic_tag_compiler)
# Library management
from django.template.base import (Library, add_to_builtins, builtins,
get_library, get_templatetags_modules, get_text_list, import_library,
libraries)
__all__ = ('Template', 'Context', 'RequestContext', 'compile_string')
| bsd-3-clause | 7,799,414,344,892,095,000 | 39.5875 | 83 | 0.755159 | false | 3.893285 | false | false | false |
dhermes/project-euler | python/complete/no190.py | 1 | 1063 | #!/usr/bin/env python
# Let S_m = (x_1, x_2, ... , x_m) be the m-tuple of positive real
# numbers with x_1 + x_2 + ... + x_m = m for which
# P_m = x_1 * x_2^2 * ... * x_m^m is maximised.
# For example, it can be verified that [P_10] = 4112
# ([] is the integer part function).
# Find SUM[P_m] for 2 <= m <= 15.
# -------- LAGRANGE --------
# maximize f(x,...) given g(x,....) = c
# set ratio of partials equal to lambda
# Since g = x_1 + ... + x_m
# We need d(P_m)/d(x_i) = i P_m/x_i = lambda
# Hence i/x_i = 1/x_1, x_i = i*x_1
# m = x_1(1 + ... + m) = x_1(m)(m+1)/2
# x_1 = 2/(m + 1)
# P_m = (2/(m+1))**(m*(m+1)/2)*(1*2**2*...*m**m)
# P_10 = (2/11)**(55)*(1*4*...*(10**10)) = 4112.0850028536197
import operator
from math import floor
from python.decorators import euler_timer
def P(m):
return reduce(operator.mul,
[((2 * n) / (1.0 * (m + 1))) ** n for n in range(1, m + 1)])
def main(verbose=False):
return int(sum(floor(P(n)) for n in range(2, 16)))
if __name__ == '__main__':
print euler_timer(190)(main)(verbose=True)
| apache-2.0 | 3,962,063,337,530,785,000 | 25.575 | 78 | 0.521167 | false | 2.300866 | false | false | false |
huanpc/lab_cloud_computing | docs/learning-by-doing/week09_10_connectDB_restApi/auto scaling system/constant.py | 1 | 3300 | __author__ = 'huanpc'
CPU_THRESHOLD_UP = 0.1
CPU_THRESHOLD_DOWN = 0.001
MEM_THRESHOLD_UP = 15700000.0
MEM_THRESHOLD_DOWN = 2097152.0
HOST = '25.22.28.94'
PORT = 8086
USER = 'root'
PASS = 'root'
DATABASE = 'cadvisor'
SELECT_CPU = 'derivative(cpu_cumulative_usage)'
SELECT_MEMORY = 'median(memory_usage)'
SERIES = '"stats"'
APP_NAME = 'demo-server'
NAME = ''
WHERE_BEGIN = 'container_name =~ /.*'
WHERE_END = '.*/ and time>now()-5m'
GROUP_BY = "time(10s), container_name"
CONDITION = " limit 1 "
JSON_APP_DEFINE = './demo_web_server.json'
APP_ID = 'demo-server'
MARATHON_URI = 'localhost:8080'
HEADER = {'Content-Type': 'application/json'}
# scale
SCALE_LINK = '/v2/apps/' + APP_ID + '?force=true'
TIME_DELAY_LONG = 15
TIME_DELAY_SORT = 5
ROOT_PASSWORD = '444455555'
MODEL_ENGINE = 'mysql+pymysql://root:autoscaling@[email protected]:3306/policydb'
SCHEMA = '''
# PolicyDB
# apps.enabled: 0-not scaled, 1-scaled
# apps.locked: 0-unlocked, 1-locked
# apps.next_time: time in the future the app'll be checked for scaling
# next_time = last success caused by policyX + policyX.cooldown_period
# policies.metric_type: 0-CPU, 1-memory
# policies.cooldown_period: in second
# policies.measurement_period: in second
# deleted: 0-active, 1-deleted
DROP DATABASE IF EXISTS policydb;
CREATE DATABASE policydb;
USE policydb;
CREATE TABLE apps(\
Id INT AUTO_INCREMENT PRIMARY KEY, \
app_uuid VARCHAR(255), \
name VARCHAR(255), \
min_instances SMALLINT UNSIGNED, \
max_instances SMALLINT UNSIGNED, \
enabled TINYINT UNSIGNED, \
locked TINYINT UNSIGNED, \
next_time INT \
);
CREATE TABLE policies(\
Id INT AUTO_INCREMENT PRIMARY KEY, \
app_uuid VARCHAR(255), \
policy_uuid VARCHAR(255), \
metric_type TINYINT UNSIGNED, \
upper_threshold FLOAT, \
lower_threshold FLOAT, \
instances_out SMALLINT UNSIGNED, \
instances_in SMALLINT UNSIGNED, \
cooldown_period SMALLINT UNSIGNED, \
measurement_period SMALLINT UNSIGNED, \
deleted TINYINT UNSIGNED \
);
# tuna
CREATE TABLE crons(\
Id INT AUTO_INCREMENT PRIMARY KEY, \
app_uuid VARCHAR(255), \
cron_uuid VARCHAR(255), \
min_instances SMALLINT UNSIGNED, \
max_instances SMALLINT UNSIGNED, \
cron_string VARCHAR(255), \
deleted TINYINT UNSIGNED \
);
# end tuna
-----
# Test data
# Stresser
INSERT INTO apps(app_uuid, name, min_instances, max_instances, enabled, locked, next_time) \
VALUES ("f5bfcbad-7daa-4317-97cc-e42ae46b6ad1", "java-allocateMemory", 1, 5, 1, 0, 0);
INSERT INTO policies(app_uuid, policy_uuid, metric_type, upper_threshold, lower_threshold, instances_out, instances_in, cooldown_period, measurement_period, deleted) \
VALUES ("f5bfcbad-7daa-4317-97cc-e42ae46b6ad1", "b3da4493-58f1-4d65-bf43-e52e7de62151", 1, 0.7, 0.3, 1, 1, 30, 10, 0);
# INSERT INTO policies(app_uuid, policy_uuid, metric_type, upper_threshold, lower_threshold, instances_out, instances_in, cooldown_period, measurement_period, deleted) \
# VALUES ("f5bfcbad-7daa-4317-97cc-e42ae46b6ad1", "b3da4493-58f1-4d65-bf43-e52e7dpolicy", 1, 0.7, 0.3, 1, 1, 30, 10, 0);
INSERT INTO crons(app_uuid, cron_uuid, min_instances, max_instances, cron_string, deleted) \
VALUES ("f5bfcbad-7daa-4317-97cc-e42ae46b6ad1", "b3da4493-58f1-4d65-bf43-e52eacascron", 1, 10, "* * * * * *", false);
'''
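# Illustrative sketch (added; not part of the original module): how MODEL_ENGINE and
# the schema above would typically be read with SQLAlchemy. Left as comments because
# it needs a running MySQL instance matching the credentials in MODEL_ENGINE.
#
#   from sqlalchemy import create_engine, text
#   engine = create_engine(MODEL_ENGINE)
#   with engine.connect() as conn:
#       for row in conn.execute(text(
#               "SELECT app_uuid, metric_type, upper_threshold, lower_threshold "
#               "FROM policies WHERE deleted = 0")):
#           print(row)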
| apache-2.0 | 8,404,436,930,996,980,000 | 33.736842 | 169 | 0.701818 | false | 2.818104 | false | false | false |
martinohanlon/pelmetcam | GPSController.py | 1 | 2913 | from gps import *
import time
import datetime
import threading
import math
import sys
class GpsUtils():
MPS_TO_MPH = 2.2369362920544
@staticmethod
def latLongToXY(lat, lon):
rMajor = 6378137 # Equatorial Radius, WGS84
shift = math.pi * rMajor
x = lon * shift / 180
y = math.log(math.tan((90 + lat) * math.pi / 360)) / (math.pi / 180)
y = y * shift / 180
return x,y
class GpsController(threading.Thread):
def __init__(self):
threading.Thread.__init__(self)
self.gpsd = gps(mode=WATCH_ENABLE) #starting the stream of info
self.running = False
def run(self):
self.running = True
while self.running:
# grab EACH set of gpsd info to clear the buffer
self.gpsd.next()
def stopController(self):
self.running = False
@property
def fix(self):
return self.gpsd.fix
@property
def utc(self):
return self.gpsd.utc
@property
def satellites(self):
return self.gpsd.satellites
@property
def fixdatetime(self):
#return None if we cant get a time
UTCTime = None
try:
# have we got a fix?
if self.fix.mode != 1:
#strip time from utc
UTCTime = time.strptime(self.utc, "%Y-%m-%dT%H:%M:%S.%fz")
#convert time struct to datetime
UTCTime = datetime.datetime.fromtimestamp(time.mktime(UTCTime))
except:
#return None if we get an error
UTCTime = None
return UTCTime
if __name__ == '__main__':
gpsc = GpsController() # create the thread
try:
gpsc.start() # start it up
while True:
print "latitude ", gpsc.fix.latitude
print "longitude ", gpsc.fix.longitude
print "time utc ", gpsc.utc, " + ", gpsc.gpsd.fix.time
print "altitude (m)", gpsc.fix.altitude
#print "eps ", gpsc.gpsd.fix.eps
#print "epx ", gpsc.gpsd.fix.epx
#print "epv ", gpsc.gpsd.fix.epv
#print "ept ", gpsc.gpsd.fix.ept
print "speed (m/s) ", gpsc.fix.speed
print "track ", gpsc.gpsd.fix.track
print "mode ", gpsc.gpsd.fix.mode
#print "sats ", gpsc.satellites
print "climb ", gpsc.fix.climb
print gpsc.fixdatetime
x,y = GpsUtils.latLongToXY(gpsc.fix.latitude, gpsc.fix.longitude)
print "x", x
print "y", y
time.sleep(0.5)
#Ctrl C
except KeyboardInterrupt:
print "User cancelled"
#Error
except:
print "Unexpected error:", sys.exc_info()[0]
raise
finally:
print "Stopping gps controller"
gpsc.stopController()
#wait for the tread to finish
gpsc.join()
print "Done"
| mit | -870,046,744,598,906,500 | 26.742857 | 79 | 0.544113 | false | 3.63217 | false | false | false |
Symmetry-Innovations-Pty-Ltd/Python-2.7-for-QNX6.5.0-x86 | usr/pkg/lib/python2.7/idlelib/IOBinding.py | 16 | 20975 | # changes by [email protected]
# - IOBinding.open() replaces the current window with the opened file,
# if the current window is both unmodified and unnamed
# - IOBinding.loadfile() interprets Windows, UNIX, and Macintosh
# end-of-line conventions, instead of relying on the standard library,
# which will only understand the local convention.
import os
import types
import sys
import codecs
import tempfile
import tkFileDialog
import tkMessageBox
import re
from Tkinter import *
from SimpleDialog import SimpleDialog
from idlelib.configHandler import idleConf
try:
from codecs import BOM_UTF8
except ImportError:
# only available since Python 2.3
BOM_UTF8 = '\xef\xbb\xbf'
# Try setting the locale, so that we can find out
# what encoding to use
try:
import locale
locale.setlocale(locale.LC_CTYPE, "")
except (ImportError, locale.Error):
pass
# Encoding for file names
filesystemencoding = sys.getfilesystemencoding()
encoding = "ascii"
if sys.platform == 'win32':
# On Windows, we could use "mbcs". However, to give the user
# a portable encoding name, we need to find the code page
try:
encoding = locale.getdefaultlocale()[1]
codecs.lookup(encoding)
except LookupError:
pass
else:
try:
# Different things can fail here: the locale module may not be
# loaded, it may not offer nl_langinfo, or CODESET, or the
# resulting codeset may be unknown to Python. We ignore all
# these problems, falling back to ASCII
encoding = locale.nl_langinfo(locale.CODESET)
if encoding is None or encoding is '':
# situation occurs on Mac OS X
encoding = 'ascii'
codecs.lookup(encoding)
except (NameError, AttributeError, LookupError):
# Try getdefaultlocale well: it parses environment variables,
# which may give a clue. Unfortunately, getdefaultlocale has
# bugs that can cause ValueError.
try:
encoding = locale.getdefaultlocale()[1]
if encoding is None or encoding is '':
# situation occurs on Mac OS X
encoding = 'ascii'
codecs.lookup(encoding)
except (ValueError, LookupError):
pass
encoding = encoding.lower()
coding_re = re.compile("coding[:=]\s*([-\w_.]+)")
class EncodingMessage(SimpleDialog):
"Inform user that an encoding declaration is needed."
def __init__(self, master, enc):
self.should_edit = False
self.root = top = Toplevel(master)
top.bind("<Return>", self.return_event)
top.bind("<Escape>", self.do_ok)
top.protocol("WM_DELETE_WINDOW", self.wm_delete_window)
top.wm_title("I/O Warning")
top.wm_iconname("I/O Warning")
self.top = top
l1 = Label(top,
text="Non-ASCII found, yet no encoding declared. Add a line like")
l1.pack(side=TOP, anchor=W)
l2 = Entry(top, font="courier")
l2.insert(0, "# -*- coding: %s -*-" % enc)
# For some reason, the text is not selectable anymore if the
# widget is disabled.
# l2['state'] = DISABLED
l2.pack(side=TOP, anchor = W, fill=X)
l3 = Label(top, text="to your file\n"
"Choose OK to save this file as %s\n"
"Edit your general options to silence this warning" % enc)
l3.pack(side=TOP, anchor = W)
buttons = Frame(top)
buttons.pack(side=TOP, fill=X)
# Both return and cancel mean the same thing: do nothing
self.default = self.cancel = 0
b1 = Button(buttons, text="Ok", default="active",
command=self.do_ok)
b1.pack(side=LEFT, fill=BOTH, expand=1)
b2 = Button(buttons, text="Edit my file",
command=self.do_edit)
b2.pack(side=LEFT, fill=BOTH, expand=1)
self._set_transient(master)
def do_ok(self):
self.done(0)
def do_edit(self):
self.done(1)
def coding_spec(str):
"""Return the encoding declaration according to PEP 263.
Raise LookupError if the encoding is declared but unknown.
"""
# Only consider the first two lines
str = str.split("\n")[:2]
str = "\n".join(str)
match = coding_re.search(str)
if not match:
return None
name = match.group(1)
# Check whether the encoding is known
import codecs
try:
codecs.lookup(name)
except LookupError:
# The standard encoding error does not indicate the encoding
raise LookupError, "Unknown encoding "+name
return name
class IOBinding:
def __init__(self, editwin):
self.editwin = editwin
self.text = editwin.text
self.__id_open = self.text.bind("<<open-window-from-file>>", self.open)
self.__id_save = self.text.bind("<<save-window>>", self.save)
self.__id_saveas = self.text.bind("<<save-window-as-file>>",
self.save_as)
self.__id_savecopy = self.text.bind("<<save-copy-of-window-as-file>>",
self.save_a_copy)
self.fileencoding = None
self.__id_print = self.text.bind("<<print-window>>", self.print_window)
def close(self):
# Undo command bindings
self.text.unbind("<<open-window-from-file>>", self.__id_open)
self.text.unbind("<<save-window>>", self.__id_save)
self.text.unbind("<<save-window-as-file>>",self.__id_saveas)
self.text.unbind("<<save-copy-of-window-as-file>>", self.__id_savecopy)
self.text.unbind("<<print-window>>", self.__id_print)
# Break cycles
self.editwin = None
self.text = None
self.filename_change_hook = None
def get_saved(self):
return self.editwin.get_saved()
def set_saved(self, flag):
self.editwin.set_saved(flag)
def reset_undo(self):
self.editwin.reset_undo()
filename_change_hook = None
def set_filename_change_hook(self, hook):
self.filename_change_hook = hook
filename = None
dirname = None
def set_filename(self, filename):
if filename and os.path.isdir(filename):
self.filename = None
self.dirname = filename
else:
self.filename = filename
self.dirname = None
self.set_saved(1)
if self.filename_change_hook:
self.filename_change_hook()
def open(self, event=None, editFile=None):
if self.editwin.flist:
if not editFile:
filename = self.askopenfile()
else:
filename=editFile
if filename:
# If the current window has no filename and hasn't been
# modified, we replace its contents (no loss). Otherwise
# we open a new window. But we won't replace the
# shell window (which has an interp(reter) attribute), which
# gets set to "not modified" at every new prompt.
try:
interp = self.editwin.interp
except AttributeError:
interp = None
if not self.filename and self.get_saved() and not interp:
self.editwin.flist.open(filename, self.loadfile)
else:
self.editwin.flist.open(filename)
else:
self.text.focus_set()
return "break"
#
# Code for use outside IDLE:
if self.get_saved():
reply = self.maybesave()
if reply == "cancel":
self.text.focus_set()
return "break"
if not editFile:
filename = self.askopenfile()
else:
filename=editFile
if filename:
self.loadfile(filename)
else:
self.text.focus_set()
return "break"
eol = r"(\r\n)|\n|\r" # \r\n (Windows), \n (UNIX), or \r (Mac)
eol_re = re.compile(eol)
eol_convention = os.linesep # Default
def loadfile(self, filename):
try:
# open the file in binary mode so that we can handle
# end-of-line convention ourselves.
f = open(filename,'rb')
chars = f.read()
f.close()
except IOError, msg:
tkMessageBox.showerror("I/O Error", str(msg), master=self.text)
return False
chars = self.decode(chars)
# We now convert all end-of-lines to '\n's
firsteol = self.eol_re.search(chars)
if firsteol:
self.eol_convention = firsteol.group(0)
if isinstance(self.eol_convention, unicode):
# Make sure it is an ASCII string
self.eol_convention = self.eol_convention.encode("ascii")
chars = self.eol_re.sub(r"\n", chars)
self.text.delete("1.0", "end")
self.set_filename(None)
self.text.insert("1.0", chars)
self.reset_undo()
self.set_filename(filename)
self.text.mark_set("insert", "1.0")
self.text.see("insert")
self.updaterecentfileslist(filename)
return True
def decode(self, chars):
"""Create a Unicode string
If that fails, let Tcl try its best
"""
# Check presence of a UTF-8 signature first
if chars.startswith(BOM_UTF8):
try:
chars = chars[3:].decode("utf-8")
except UnicodeError:
# has UTF-8 signature, but fails to decode...
return chars
else:
# Indicates that this file originally had a BOM
self.fileencoding = BOM_UTF8
return chars
# Next look for coding specification
try:
enc = coding_spec(chars)
except LookupError, name:
tkMessageBox.showerror(
title="Error loading the file",
message="The encoding '%s' is not known to this Python "\
"installation. The file may not display correctly" % name,
master = self.text)
enc = None
if enc:
try:
return unicode(chars, enc)
except UnicodeError:
pass
# If it is ASCII, we need not to record anything
try:
return unicode(chars, 'ascii')
except UnicodeError:
pass
# Finally, try the locale's encoding. This is deprecated;
# the user should declare a non-ASCII encoding
try:
chars = unicode(chars, encoding)
self.fileencoding = encoding
except UnicodeError:
pass
return chars
def maybesave(self):
if self.get_saved():
return "yes"
message = "Do you want to save %s before closing?" % (
self.filename or "this untitled document")
confirm = tkMessageBox.askyesnocancel(
title="Save On Close",
message=message,
default=tkMessageBox.YES,
master=self.text)
if confirm:
reply = "yes"
self.save(None)
if not self.get_saved():
reply = "cancel"
elif confirm is None:
reply = "cancel"
else:
reply = "no"
self.text.focus_set()
return reply
def save(self, event):
if not self.filename:
self.save_as(event)
else:
if self.writefile(self.filename):
self.set_saved(True)
try:
self.editwin.store_file_breaks()
except AttributeError: # may be a PyShell
pass
self.text.focus_set()
return "break"
def save_as(self, event):
filename = self.asksavefile()
if filename:
if self.writefile(filename):
self.set_filename(filename)
self.set_saved(1)
try:
self.editwin.store_file_breaks()
except AttributeError:
pass
self.text.focus_set()
self.updaterecentfileslist(filename)
return "break"
def save_a_copy(self, event):
filename = self.asksavefile()
if filename:
self.writefile(filename)
self.text.focus_set()
self.updaterecentfileslist(filename)
return "break"
def writefile(self, filename):
self.fixlastline()
chars = self.encode(self.text.get("1.0", "end-1c"))
if self.eol_convention != "\n":
chars = chars.replace("\n", self.eol_convention)
try:
f = open(filename, "wb")
f.write(chars)
f.flush()
f.close()
return True
except IOError, msg:
tkMessageBox.showerror("I/O Error", str(msg),
master=self.text)
return False
def encode(self, chars):
if isinstance(chars, types.StringType):
# This is either plain ASCII, or Tk was returning mixed-encoding
# text to us. Don't try to guess further.
return chars
# See whether there is anything non-ASCII in it.
# If not, no need to figure out the encoding.
try:
return chars.encode('ascii')
except UnicodeError:
pass
# If there is an encoding declared, try this first.
try:
enc = coding_spec(chars)
failed = None
except LookupError, msg:
failed = msg
enc = None
if enc:
try:
return chars.encode(enc)
except UnicodeError:
failed = "Invalid encoding '%s'" % enc
if failed:
tkMessageBox.showerror(
"I/O Error",
"%s. Saving as UTF-8" % failed,
master = self.text)
# If there was a UTF-8 signature, use that. This should not fail
if self.fileencoding == BOM_UTF8 or failed:
return BOM_UTF8 + chars.encode("utf-8")
# Try the original file encoding next, if any
if self.fileencoding:
try:
return chars.encode(self.fileencoding)
except UnicodeError:
tkMessageBox.showerror(
"I/O Error",
"Cannot save this as '%s' anymore. Saving as UTF-8" \
% self.fileencoding,
master = self.text)
return BOM_UTF8 + chars.encode("utf-8")
# Nothing was declared, and we had not determined an encoding
# on loading. Recommend an encoding line.
config_encoding = idleConf.GetOption("main","EditorWindow",
"encoding")
if config_encoding == 'utf-8':
# User has requested that we save files as UTF-8
return BOM_UTF8 + chars.encode("utf-8")
ask_user = True
try:
chars = chars.encode(encoding)
enc = encoding
if config_encoding == 'locale':
ask_user = False
except UnicodeError:
chars = BOM_UTF8 + chars.encode("utf-8")
enc = "utf-8"
if not ask_user:
return chars
dialog = EncodingMessage(self.editwin.top, enc)
dialog.go()
if dialog.num == 1:
# User asked us to edit the file
encline = "# -*- coding: %s -*-\n" % enc
firstline = self.text.get("1.0", "2.0")
if firstline.startswith("#!"):
# Insert encoding after #! line
self.text.insert("2.0", encline)
else:
self.text.insert("1.0", encline)
return self.encode(self.text.get("1.0", "end-1c"))
return chars
def fixlastline(self):
c = self.text.get("end-2c")
if c != '\n':
self.text.insert("end-1c", "\n")
def print_window(self, event):
confirm = tkMessageBox.askokcancel(
title="Print",
message="Print to Default Printer",
default=tkMessageBox.OK,
master=self.text)
if not confirm:
self.text.focus_set()
return "break"
tempfilename = None
saved = self.get_saved()
if saved:
filename = self.filename
# shell undo is reset after every prompt, looks saved, probably isn't
if not saved or filename is None:
(tfd, tempfilename) = tempfile.mkstemp(prefix='IDLE_tmp_')
filename = tempfilename
os.close(tfd)
if not self.writefile(tempfilename):
os.unlink(tempfilename)
return "break"
platform = os.name
printPlatform = True
if platform == 'posix': #posix platform
command = idleConf.GetOption('main','General',
'print-command-posix')
command = command + " 2>&1"
elif platform == 'nt': #win32 platform
command = idleConf.GetOption('main','General','print-command-win')
else: #no printing for this platform
printPlatform = False
if printPlatform: #we can try to print for this platform
command = command % filename
pipe = os.popen(command, "r")
# things can get ugly on NT if there is no printer available.
output = pipe.read().strip()
status = pipe.close()
if status:
output = "Printing failed (exit status 0x%x)\n" % \
status + output
if output:
output = "Printing command: %s\n" % repr(command) + output
tkMessageBox.showerror("Print status", output, master=self.text)
else: #no printing for this platform
message = "Printing is not enabled for this platform: %s" % platform
tkMessageBox.showinfo("Print status", message, master=self.text)
if tempfilename:
os.unlink(tempfilename)
return "break"
opendialog = None
savedialog = None
filetypes = [
("Python files", "*.py *.pyw", "TEXT"),
("Text files", "*.txt", "TEXT"),
("All files", "*"),
]
def askopenfile(self):
dir, base = self.defaultfilename("open")
if not self.opendialog:
self.opendialog = tkFileDialog.Open(master=self.text,
filetypes=self.filetypes)
filename = self.opendialog.show(initialdir=dir, initialfile=base)
if isinstance(filename, unicode):
filename = filename.encode(filesystemencoding)
return filename
def defaultfilename(self, mode="open"):
if self.filename:
return os.path.split(self.filename)
elif self.dirname:
return self.dirname, ""
else:
try:
pwd = os.getcwd()
except os.error:
pwd = ""
return pwd, ""
def asksavefile(self):
dir, base = self.defaultfilename("save")
if not self.savedialog:
self.savedialog = tkFileDialog.SaveAs(master=self.text,
filetypes=self.filetypes)
filename = self.savedialog.show(initialdir=dir, initialfile=base)
if isinstance(filename, unicode):
filename = filename.encode(filesystemencoding)
return filename
def updaterecentfileslist(self,filename):
"Update recent file list on all editor windows"
self.editwin.update_recent_files_list(filename)
def test():
root = Tk()
class MyEditWin:
def __init__(self, text):
self.text = text
self.flist = None
self.text.bind("<Control-o>", self.open)
self.text.bind("<Control-s>", self.save)
self.text.bind("<Alt-s>", self.save_as)
self.text.bind("<Alt-z>", self.save_a_copy)
def get_saved(self): return 0
def set_saved(self, flag): pass
def reset_undo(self): pass
def open(self, event):
self.text.event_generate("<<open-window-from-file>>")
def save(self, event):
self.text.event_generate("<<save-window>>")
def save_as(self, event):
self.text.event_generate("<<save-window-as-file>>")
def save_a_copy(self, event):
self.text.event_generate("<<save-copy-of-window-as-file>>")
text = Text(root)
text.pack()
text.focus_set()
editwin = MyEditWin(text)
io = IOBinding(editwin)
root.mainloop()
if __name__ == "__main__":
test()
| mit | 3,166,070,688,643,800,000 | 34.311448 | 80 | 0.548653 | false | 4.192485 | false | false | false |
bmispelon/csvkit | csvkit/sql.py | 21 | 3016 | #!/usr/bin/env python
import datetime
import six
from sqlalchemy import Column, MetaData, Table, create_engine
from sqlalchemy import BigInteger, Boolean, Date, DateTime, Float, Integer, String, Time
from sqlalchemy.schema import CreateTable
NoneType = type(None)
DIALECTS = {
'access': 'access.base',
'firebird': 'firebird.kinterbasdb',
'informix': 'informix.informixdb',
'maxdb': 'maxdb.sapdb',
'mssql': 'mssql.pyodbc',
'mysql': 'mysql.mysqlconnector',
'oracle': 'oracle.cx_oracle',
'postgresql': 'postgresql.psycopg2',
'sqlite': 'sqlite.pysqlite',
'sybase': 'sybase.pyodbc'
}
NULL_COLUMN_MAX_LENGTH = 32
SQL_INTEGER_MAX = 2147483647
SQL_INTEGER_MIN = -2147483647
def make_column(column, no_constraints=False):
"""
Creates a sqlalchemy column from a csvkit Column.
"""
sql_column_kwargs = {}
sql_type_kwargs = {}
column_types = {
bool: Boolean,
#int: Integer, see special case below
float: Float,
datetime.datetime: DateTime,
datetime.date: Date,
datetime.time: Time,
NoneType: String,
six.text_type: String
}
if column.type in column_types:
sql_column_type = column_types[column.type]
elif column.type is int:
column_max = max([v for v in column if v is not None])
column_min = min([v for v in column if v is not None])
if column_max > SQL_INTEGER_MAX or column_min < SQL_INTEGER_MIN:
sql_column_type = BigInteger
else:
sql_column_type = Integer
else:
raise ValueError('Unexpected normalized column type: %s' % column.type)
if no_constraints is False:
if column.type is NoneType:
sql_type_kwargs['length'] = NULL_COLUMN_MAX_LENGTH
elif column.type is six.text_type:
sql_type_kwargs['length'] = column.max_length()
sql_column_kwargs['nullable'] = column.has_nulls()
return Column(column.name, sql_column_type(**sql_type_kwargs), **sql_column_kwargs)
def get_connection(connection_string):
engine = create_engine(connection_string)
metadata = MetaData(engine)
return engine, metadata
def make_table(csv_table, name='table_name', no_constraints=False, db_schema=None, metadata=None):
"""
Creates a sqlalchemy table from a csvkit Table.
"""
if not metadata:
metadata = MetaData()
sql_table = Table(csv_table.name, metadata, schema=db_schema)
for column in csv_table:
sql_table.append_column(make_column(column, no_constraints))
return sql_table
def make_create_table_statement(sql_table, dialect=None):
"""
Generates a CREATE TABLE statement for a sqlalchemy table.
"""
if dialect:
module = __import__('sqlalchemy.dialects.%s' % DIALECTS[dialect], fromlist=['dialect'])
sql_dialect = module.dialect()
else:
sql_dialect = None
return six.text_type(CreateTable(sql_table).compile(dialect=sql_dialect)).strip() + ';'
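# Illustrative sketch (not part of the original module): the helpers above are
# typically chained to turn an in-memory csvkit table into a CREATE TABLE
# statement. The `csv_table` argument and the 'postgresql' dialect below are
# assumptions for the example only.
def _example_create_table_sql(csv_table):
    sql_table = make_table(csv_table)
    return make_create_table_statement(sql_table, dialect='postgresql')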
| mit | 191,187,529,732,256,830 | 28.568627 | 98 | 0.647546 | false | 3.691554 | false | false | false |
CloudServer/cinder | cinder/openstack/common/scheduler/filters/capabilities_filter.py | 26 | 2792 | # Copyright (c) 2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import six
from cinder.openstack.common.scheduler import filters
from cinder.openstack.common.scheduler.filters import extra_specs_ops
LOG = logging.getLogger(__name__)
class CapabilitiesFilter(filters.BaseHostFilter):
"""HostFilter to work with resource (instance & volume) type records."""
def _satisfies_extra_specs(self, capabilities, resource_type):
"""Check that the capabilities provided by the services satisfy
the extra specs associated with the resource type.
"""
extra_specs = resource_type.get('extra_specs', [])
if not extra_specs:
return True
for key, req in six.iteritems(extra_specs):
# Either not scope format, or in capabilities scope
scope = key.split(':')
if len(scope) > 1 and scope[0] != "capabilities":
continue
elif scope[0] == "capabilities":
del scope[0]
cap = capabilities
for index in range(len(scope)):
try:
cap = cap.get(scope[index])
except AttributeError:
return False
if cap is None:
return False
if not extra_specs_ops.match(cap, req):
LOG.debug("extra_spec requirement '%(req)s' "
"does not match '%(cap)s'",
{'req': req, 'cap': cap})
return False
return True
def host_passes(self, host_state, filter_properties):
"""Return a list of hosts that can create resource_type."""
# Note(zhiteng) Currently only Cinder and Nova are using
# this filter, so the resource type is either instance or
# volume.
resource_type = filter_properties.get('resource_type')
if not self._satisfies_extra_specs(host_state.capabilities,
resource_type):
LOG.debug("%(host_state)s fails resource_type extra_specs "
"requirements", {'host_state': host_state})
return False
return True
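# Illustrative sketch (not part of the original filter): scoped extra specs such
# as 'capabilities:qos:max_iops' are matched against the nested capabilities
# dict reported by a backend. The sample dict and requirement below are
# assumptions for the example only.
def _example_scoped_match():
    capabilities = {'free_capacity_gb': 200, 'qos': {'max_iops': 5000}}
    key, req = 'capabilities:qos:max_iops', '<= 10000'
    scope = key.split(':')[1:]  # drop the leading 'capabilities' qualifier
    cap = capabilities
    for part in scope:  # walk the nested dict the same way the filter does
        cap = cap.get(part)
    return extra_specs_ops.match(cap, req)  # expected to be True for this sample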
| apache-2.0 | -1,256,998,803,532,345,900 | 38.323944 | 78 | 0.602077 | false | 4.614876 | false | false | false |
arizvisa/syringe | template/lnkfile.py | 1 | 40522 | import ptypes, ndk, office.propertyset
from ptypes import *
from ndk.datatypes import *
class uint0(pint.uint_t):
length = 0
class GUID(ndk.GUID):
pass
@pbinary.littleendian
class LinkFlags(pbinary.flags):
_fields_ = [
(1, 'HasLinkTargetIDList'), # The shell link is saved with an item ID list (IDList). If this bit is set, a LinkTargetIDList structure (section 2.2) MUST follow the ShellLinkHeader. If this bit is not set, this structure MUST NOT be present.
(1, 'HasLinkInfo'), # The shell link is saved with link information. If this bit is set, a LinkInfo structure (section 2.3) MUST be present. If this bit is not set, this structure MUST NOT be present.
(1, 'HasName'), # The shell link is saved with a name string. If this bit is set, a NAME_STRING StringData structure (section 2.4) MUST be present. If this bit is not set, this structure MUST NOT be present.
(1, 'HasRelativePath'), # The shell link is saved with a relative path string. If this bit is set, a RELATIVE_PATH StringData structure (section 2.4) MUST be present. If this bit is not set, this structure MUST NOT be present.
(1, 'HasWorkingDir'), # The shell link is saved with a working directory string. If this bit is set, a WORKING_DIR StringData structure (section 2.4) MUST be present. If this bit is not set, this structure MUST NOT be present.
(1, 'HasArguments'), # The shell link is saved with command line arguments. If this bit is set, a COMMAND_LINE_ARGUMENTS StringData structure (section 2.4) MUST be present. If this bit is not set, this structure MUST NOT be present.
(1, 'HasIconLocation'), # The shell link is saved with an icon location string. If this bit is set, an ICON_LOCATION StringData structure (section 2.4) MUST be present. If this bit is not set, this structure MUST NOT be present.
(1, 'IsUnicode'), # The shell link contains Unicode encoded strings. This bit SHOULD be set. If this bit is set, the StringData section contains Unicode-encoded strings; otherwise, it contains strings that are encoded using the system default code page.
(1, 'ForceNoLinkInfo'), # The LinkInfo structure (section 2.3) is ignored.
(1, 'HasExpString'), # The shell link is saved with an EnvironmentVariableDataBlock (section 2.5.4).
(1, 'RunInSeparateProcess'), # The target is run in a separate virtual machine when launching a link target that is a 16-bit application.
(1, 'Unused1'), # A bit that is undefined and MUST be ignored.
(1, 'HasDarwinID'), # The shell link is saved with a DarwinDataBlock (section 2.5.3).
(1, 'RunAsUser'), # The application is run as a different user when the target of the shell link is activated.
(1, 'HasExpIcon'), # The shell link is saved with an IconEnvironmentDataBlock (section 2.5.5).
(1, 'NoPidlAlias'), # The file system location is represented in the shell namespace when the path to an item is parsed into an IDList.
(1, 'Unused2'), # A bit that is undefined and MUST be ignored.
(1, 'RunWithShimLayer'), # The shell link is saved with a ShimDataBlock (section 2.5.8).
(1, 'ForceNoLinkTrack'), # The TrackerDataBlock (section 2.5.10) is ignored.
(1, 'EnableTargetMetadata'), # The shell link attempts to collect target properties and store them in the PropertyStoreDataBlock (section 2.5.7) when the link target is set.
(1, 'DisableLinkPathTracking'), # The EnvironmentVariableDataBlock is ignored.
(1, 'DisableKnownFolderTracking'), # The SpecialFolderDataBlock (section 2.5.9) and the KnownFolderDataBlock (section 2.5.6) are ignored when loading the shell link. If this bit is set, these extra data blocks SHOULD NOT be saved when saving the shell link.
(1, 'DisableKnownFolderAlias'), # If the link has a KnownFolderDataBlock (section 2.5.6), the unaliased form of the known folder IDList SHOULD be used when translating the target IDList at the time that the link is loaded.
(1, 'AllowLinkToLink'), # Creating a link that references another link is enabled. Otherwise, specifying a link as the target IDList SHOULD NOT be allowed.
(1, 'UnaliasOnSave'), # When saving a link for which the target IDList is under a known folder, either the unaliased form of that known folder or the target IDList SHOULD be used.
(1, 'PreferEnvironmentPath'), # The target IDList SHOULD NOT be stored; instead, the path specified in the EnvironmentVariableDataBlock (section 2.5.4) SHOULD be used to refer to the target.
(1, 'KeepLocalIDListForUNCTarget'), # When the target is a UNC name that refers to a location on a local machine, the local path IDList in the PropertyStoreDataBlock (section 2.5.7) SHOULD be stored, so it can be used when the link is loaded on the local machine.
(5, 'Unused'),
][::-1]
@pbinary.littleendian
class FileAttributesFlags(pbinary.flags):
_fields_ = [
(1, 'FILE_ATTRIBUTE_READONLY'), # The file or directory is read-only. For a file, if this bit is set, applications can read the file but cannot write to it or delete it. For a directory, if this bit is set, applications cannot delete the directory.
(1, 'FILE_ATTRIBUTE_HIDDEN'), # The file or directory is hidden. If this bit is set, the file or folder is not included in an ordinary directory listing.
(1, 'FILE_ATTRIBUTE_SYSTEM'), # The file or directory is part of the operating system or is used exclusively by the operating system.
(1, 'Reserved1'), # A bit that MUST be zero.
(1, 'FILE_ATTRIBUTE_DIRECTORY'), # The link target is a directory instead of a file.
(1, 'FILE_ATTRIBUTE_ARCHIVE'), # The file or directory is an archive file. Applications use this flag to mark files for backup or removal.
(1, 'Reserved2'), # A bit that MUST be zero.
(1, 'FILE_ATTRIBUTE_NORMAL'), # The file or directory has no other flags set. If this bit is 1, all other bits in this structure MUST be clear.
(1, 'FILE_ATTRIBUTE_TEMPORARY'), # The file is being used for temporary storage.
(1, 'FILE_ATTRIBUTE_SPARSE_FILE'), # The file is a sparse file.
(1, 'FILE_ATTRIBUTE_REPARSE_POINT'), # The file or directory has an associated reparse point.
(1, 'FILE_ATTRIBUTE_COMPRESSED'), # The file or directory is compressed. For a file, this means that all data in the file is compressed. For a directory, this means that compression is the default for newly created files and subdirectories.
(1, 'FILE_ATTRIBUTE_OFFLINE'), # The data of the file is not immediately available.
(1, 'FILE_ATTRIBUTE_NOT_CONTENT_INDEXED'), # The contents of the file need to be indexed.
(1, 'FILE_ATTRIBUTE_ENCRYPTED'), # The file or directory is encrypted. For a file, this means that all data in the file is encrypted. For a directory, this means that encryption is the default for newly created files and subdirectories.
(17, 'Unused'),
][::-1]
class SW_SHOW(pint.enum, DWORD):
_values_ = [
('NORMAL', 1),
('MAXIMIZED', 3),
('MINNOACTIVE', 7),
]
class VK_(pint.enum, BYTE):
_values_ = [
('None', 0x00), # No key assigned.
('VK_0', 0x30), # "0" key
('VK_1', 0x31), # "1" key
('VK_2', 0x32), # "2" key
('VK_3', 0x33), # "3" key
('VK_4', 0x34), # "4" key
('VK_5', 0x35), # "5" key
('VK_6', 0x36), # "6" key
('VK_7', 0x37), # "7" key
('VK_8', 0x38), # "8" key
('VK_9', 0x39), # "9" key
('VK_A', 0x41), # "A" key
('VK_B', 0x42), # "B" key
('VK_C', 0x43), # "C" key
('VK_D', 0x44), # "D" key
('VK_E', 0x45), # "E" key
('VK_F', 0x46), # "F" key
('VK_G', 0x47), # "G" key
('VK_H', 0x48), # "H" key
('VK_I', 0x49), # "I" key
('VK_J', 0x4A), # "J" key
('VK_K', 0x4B), # "K" key
('VK_L', 0x4C), # "L" key
('VK_M', 0x4D), # "M" key
('VK_N', 0x4E), # "N" key
('VK_O', 0x4F), # "O" key
('VK_P', 0x50), # "P" key
('VK_Q', 0x51), # "Q" key
('VK_R', 0x52), # "R" key
('VK_S', 0x53), # "S" key
('VK_T', 0x54), # "T" key
('VK_U', 0x55), # "U" key
('VK_V', 0x56), # "V" key
('VK_W', 0x57), # "W" key
('VK_X', 0x58), # "X" key
('VK_Y', 0x59), # "Y" key
('VK_Z', 0x5A), # "Z" key
('VK_F1', 0x70), # "F1" key
('VK_F2', 0x71), # "F2" key
('VK_F3', 0x72), # "F3" key
('VK_F4', 0x73), # "F4" key
('VK_F5', 0x74), # "F5" key
('VK_F6', 0x75), # "F6" key
('VK_F7', 0x76), # "F7" key
('VK_F8', 0x77), # "F8" key
('VK_F9', 0x78), # "F9" key
('VK_F10', 0x79), # "F10" key
('VK_F11', 0x7A), # "F11" key
('VK_F12', 0x7B), # "F12" key
('VK_F13', 0x7C), # "F13" key
('VK_F14', 0x7D), # "F14" key
('VK_F15', 0x7E), # "F15" key
('VK_F16', 0x7F), # "F16" key
('VK_F17', 0x80), # "F17" key
('VK_F18', 0x81), # "F18" key
('VK_F19', 0x82), # "F19" key
('VK_F20', 0x83), # "F20" key
('VK_F21', 0x84), # "F21" key
('VK_F22', 0x85), # "F22" key
('VK_F23', 0x86), # "F23" key
('VK_F24', 0x87), # "F24" key
('VK_NUMLOCK', 0x90), # "NUM LOCK" key
('VK_SCROLL', 0x91), # "SCROLL LOCK" key
]
class HOTKEYF_(pbinary.flags):
_fields_ = [
(5, 'RESERVED'),
(1, 'ALT'),
(1, 'CONTROL'),
(1, 'SHIFT'),
]
class HotKeyFlags(pstruct.type):
_fields_ = [
(VK_, 'LowByte'),
(HOTKEYF_, 'HighByte'),
]
class ShellLinkHeader(pstruct.type):
def blocksize(self):
# If we're allocated, then we can just read our size field to determine
# the blocksize, otherwise we need to cheat and assume it's a complete
# structure. We do this by making a copy using the original blocksize
# to allocate it and calculate the expected size.
Fblocksize = super(ShellLinkHeader, self).blocksize
return self['HeaderSize'].li.int() if self.value else self.copy(blocksize=Fblocksize).a.size()
def __Reserved(type, required):
def Freserved(self):
expected = self['HeaderSize'].li
return type if required <= expected.int() else uint0
return Freserved
def __Padding(self):
expected, fields = self['HeaderSize'].li, ['HeaderSize', 'LinkCLSID', 'LinkFlags', 'FileAttributes', 'CreationTime', 'AccessTime', 'WriteTime', 'FileSize', 'IconIndex', 'ShowCommand', 'HotKey', 'Reserved1', 'Reserved2', 'Reserved3']
return dyn.block(max(0, expected.int() - sum(self[fld].li.size() for fld in fields)))
_fields_ = [
(DWORD, 'HeaderSize'),
(CLSID, 'LinkCLSID'),
(LinkFlags, 'LinkFlags'),
(FileAttributesFlags, 'FileAttributes'),
(FILETIME, 'CreationTime'),
(FILETIME, 'AccessTime'),
(FILETIME, 'WriteTime'),
(DWORD, 'FileSize'),
(DWORD, 'IconIndex'),
(SW_SHOW, 'ShowCommand'),
(HotKeyFlags, 'HotKey'),
(__Reserved(WORD, 0x44), 'Reserved1'),
(__Reserved(DWORD, 0x48), 'Reserved2'),
(__Reserved(DWORD, 0x4c), 'Reserved3'),
(__Padding, 'Padding'),
]
class ItemID(pstruct.type):
def __Data(self):
expected, fields = self['ItemIDSize'].li, ['ItemIDSize']
return dyn.block(max(0, expected.int() - sum(self[fld].li.size() for fld in fields)))
_fields_ = [
(WORD, 'ItemIDSize'),
(__Data, 'Data'),
]
class IDList(parray.terminated):
_object_ = ItemID
def isTerminator(self, item):
return item['ItemIDSize'].int() == 0
class LinkTargetIDList(pstruct.type):
def __padding_IDList(self):
expected = self['IDListSize'].li.int()
return dyn.block(max(0, expected - self['IDList'].li.size()))
_fields_ = [
(WORD, 'IDListSize'),
(IDList, 'IDList'),
(__padding_IDList, 'padding(IDList)'),
]
class DRIVE_(pint.enum, DWORD):
_values_ = [
('UNKNOWN', 0x00000000), # The drive type cannot be determined.
('NO_ROOT_DIR', 0x00000001), # The root path is invalid; for example, there is no volume mounted at the path.
('REMOVABLE', 0x00000002), # The drive has removable media, such as a floppy drive, thumb drive, or flash card reader.
('FIXED', 0x00000003), # The drive has fixed media, such as a hard drive or flash drive.
('REMOTE', 0x00000004), # The drive is a remote (network) drive.
('CDROM', 0x00000005), # The drive is a CD-ROM drive.
('RAMDISK', 0x00000006), # The drive is a RAM disk.
]
class VolumeID(pstruct.type):
def blocksize(self):
# If we're allocated, then we can just read our size field to determine
# the blocksize, otherwise we need to cheat and assume it's a complete
# structure. We do this by making a copy using the original blocksize
# to allocate it and calculate the expected size.
Fblocksize = super(VolumeID, self).blocksize
return self['VolumeIDSize'].li.int() if self.value else self.copy(blocksize=Fblocksize).a.size()
def __VolumeLabelOffset(self):
size = self['VolumeIDSize'].li
if size.int() < 0x10:
return dyn.rpointer(pstr.szstring, self, uint0)
return dyn.rpointer(pstr.szstring, self, DWORD)
def __VolumeLabelOffsetUnicode(self):
size, offset = (self[fld].li for fld in ['VolumeIDSize', 'VolumeLabelOffset'])
t = uint0 if any(item.int() < 0x14 for item in {size, offset}) else DWORD
return dyn.rpointer(pstr.szwstring, self, t)
def __Data(self):
expected, fields = self['VolumeIDSize'].li.int(), ['VolumeIDSize', 'DriveType', 'DriveSerialNumber', 'VolumeLabelOffset', 'VolumeLabelOffsetUnicode']
return dyn.block(max(0, expected - sum(self[fld].li.size() for fld in fields)))
_fields_ = [
(DWORD, 'VolumeIDSize'),
(DRIVE_, 'DriveType'),
(DWORD, 'DriveSerialNumber'),
(__VolumeLabelOffset, 'VolumeLabelOffset'),
(__VolumeLabelOffsetUnicode, 'VolumeLabelOffsetUnicode'),
#(ptype.undefined, 'VolumeLabelOffset'),
#(ptype.undefined, 'VolumeLabelOffsetUnicode'),
# This data contains the two previously defined strings
(__Data, 'Data'),
]
class WNNC_NET_(pint.enum, DWORD):
_values_ = [
('AVID', 0x001A0000),
('DOCUSPACE', 0x001B0000),
('MANGOSOFT', 0x001C0000),
('SERNET', 0x001D0000),
('RIVERFRONT1', 0X001E0000),
('RIVERFRONT2', 0x001F0000),
('DECORB', 0x00200000),
('PROTSTOR', 0x00210000),
('FJ_REDIR', 0x00220000),
('DISTINCT', 0x00230000),
('TWINS', 0x00240000),
('RDR2SAMPLE', 0x00250000),
('CSC', 0x00260000),
('3IN1', 0x00270000),
('EXTENDNET', 0x00290000),
('STAC', 0x002A0000),
('FOXBAT', 0x002B0000),
('YAHOO', 0x002C0000),
('EXIFS', 0x002D0000),
('DAV', 0x002E0000),
('KNOWARE', 0x002F0000),
('OBJECT_DIRE', 0x00300000),
('MASFAX', 0x00310000),
('HOB_NFS', 0x00320000),
('SHIVA', 0x00330000),
('IBMAL', 0x00340000),
('LOCK', 0x00350000),
('TERMSRV', 0x00360000),
('SRT', 0x00370000),
('QUINCY', 0x00380000),
('OPENAFS', 0x00390000),
('AVID1', 0X003A0000),
('DFS', 0x003B0000),
('KWNP', 0x003C0000),
('ZENWORKS', 0x003D0000),
('DRIVEONWEB', 0x003E0000),
('VMWARE', 0x003F0000),
('RSFX', 0x00400000),
('MFILES', 0x00410000),
('MS_NFS', 0x00420000),
('GOOGLE', 0x00430000),
]
class CommonNetworkRelativeLink(pstruct.type):
def blocksize(self):
# If we're allocated, then we can just read our size field to determine
# the blocksize, otherwise we need to cheat and assume it's a complete
# structure. We do this by making a copy using the original blocksize
# to allocate it and calculate the expected size.
Fblocksize = super(CommonNetworkRelativeLink, self).blocksize
return self['CommonNetworkRelativeLinkSize'].li.int() if self.value else self.copy(blocksize=Fblocksize).a.size()
@pbinary.littleendian
class _CommonNetworkRelativeLinkFlags(pbinary.flags):
_fields_ = [
(1, 'ValidDevice'),
(1, 'ValidNetType'),
(30, 'Unused'),
][::-1]
def __CommonNetworkRelativeLinkOffset(target, required):
def Foffset(self):
expected = self['CommonNetworkRelativeLinkSize'].li
t = DWORD if required <= expected.int() else uint0
return dyn.rpointer(target, self, t)
return Foffset
def __CommonNetworkRelativeLinkData(self):
expected, fields = self['CommonNetworkRelativeLinkSize'].li, ['CommonNetworkRelativeLinkSize', 'CommonNetworkRelativeLinkFlags', 'NetNameOffset', 'DeviceNameOffset', 'NetworkProviderType', 'NetNameOffsetUnicode', 'DeviceNameOffsetUnicode']
return dyn.block(max(0, expected.int() - sum(self[fld].li.size() for fld in fields)))
_fields_ = [
(DWORD, 'CommonNetworkRelativeLinkSize'),
(_CommonNetworkRelativeLinkFlags, 'CommonNetworkRelativeLinkFlags'),
(__CommonNetworkRelativeLinkOffset(pstr.szstring, 0xc), 'NetNameOffset'),
(__CommonNetworkRelativeLinkOffset(pstr.szstring, 0x10), 'DeviceNameOffset'), # ValidDevice
(WNNC_NET_, 'NetworkProviderType'), # ValidNetType
### These are conditional depending on the size
(__CommonNetworkRelativeLinkOffset(pstr.szwstring, 0x18), 'NetNameOffsetUnicode'),
(__CommonNetworkRelativeLinkOffset(pstr.szwstring, 0x1c), 'DeviceNameOffsetUnicode'),
### These might be in an arbitrary order despite what the documentation claims
#(pstr.szstring, 'NetName'),
#(pstr.szstring, 'DeviceName'),
#(pstr.szwstring, 'NetNameUnicode'),
#(pstr.szwstring, 'DeviceNameUnicode'),
(__CommonNetworkRelativeLinkData, 'CommonNetworkRelativeLinkData'),
]
class LinkInfo(pstruct.type):
def blocksize(self):
# If we're allocated, then we can just read our size field to determine
# the blocksize, otherwise we need to cheat and assume it's a complete
# structure. We do this by making a copy using the original blocksize
# to allocate it and calculate the expected size.
Fblocksize = super(LinkInfo, self).blocksize
return self['LinkInfoSize'].li.int() if self.value else self.copy(blocksize=Fblocksize).a.size()
@pbinary.littleendian
class _LinkInfoFlags(pbinary.flags):
_fields_ = [
(1, 'VolumeIDAndLocalBasePath'),
(1, 'CommonNetworkRelativeLinkAndPathSuffix'),
(30, 'Unused'),
][::-1]
def __LinkInfoHeaderOffset(target, required):
def Foffset(self):
expected = self['LinkInfoHeaderSize'].li
t = DWORD if required <= expected.int() else uint0
return dyn.rpointer(target, self, t)
return Foffset
def __LinkInfoData(self):
expected, header = (self[fld].li for fld in ['LinkInfoSize', 'LinkInfoHeaderSize'])
return dyn.block(max(0, expected.int() - header.int()))
_fields_ = [
(DWORD, 'LinkInfoSize'),
(DWORD, 'LinkInfoHeaderSize'),
(_LinkInfoFlags, 'LinkInfoFlags'),
### XXX: These are conditional depending on the LinkInfoFlags and sized by LinkInfoHeaderSize
(__LinkInfoHeaderOffset(VolumeID, 0x10), 'VolumeIDOffset'), # VolumeIDAndLocalBasePath
(__LinkInfoHeaderOffset(pstr.szstring, 0x14), 'LocalBasePathOffset'), # VolumeIDAndLocalBasePath
(__LinkInfoHeaderOffset(CommonNetworkRelativeLink, 0x18), 'CommonNetworkRelativeLinkOffset'), # CommonNetworkRelativeLinkAndPathSuffix
(__LinkInfoHeaderOffset(pstr.szstring, 0x1c), 'CommonPathSuffixOffset'), #
(__LinkInfoHeaderOffset(pstr.szwstring, 0x20), 'LocalBasePathOffsetUnicode'), # VolumeIDAndLocalBasePath
(__LinkInfoHeaderOffset(pstr.szwstring, 0x24), 'CommonPathSuffixOffsetUnicode'), # If size >= 0x24
### These might be in an arbitrary order despite what the documentation claims
#(VolumeID, 'VolumeID'), #
#(pstr.szwstring, 'LocalBasePath'), #
#(CommonNetworkRelativeLink, 'CommonNetworkRelativeLink'),
#(pstr.szwstring, 'CommonPathSuffix'), #
#(pstr.szwstring, 'LocalBasePathUnicode'),
#(pstr.szwstring, 'CommonPathSuffixUnicode'),
(__LinkInfoData, 'LinkInfoData'),
]
class StringData(pstruct.type):
_fields_ = [
(WORD, 'CountCharacters'),
(pstr.string, 'String'),
]
def str(self):
item = self['String']
return item.str()
def summary(self):
count, string = self['CountCharacters'], self.str()
return "(CountCharacters={:d}) String: {:s}".format(count.int(), string)
class AnsiStringData(StringData):
_fields_ = [
(WORD, 'CountCharacters'),
(lambda self: dyn.clone(pstr.string, length=self['CountCharacters'].li.int()), 'String'),
]
class UnicodeStringData(StringData):
_fields_ = [
(WORD, 'CountCharacters'),
(lambda self: dyn.clone(pstr.wstring, length=self['CountCharacters'].li.int()), 'String'),
]
class EXTRA_DATA(ptype.definition):
attribute, cache = 'signature', {}
class EXTRA_DATA_BLOCK(pint.enum, DWORD):
_values_ = [
('CONSOLE_PROPS', 0xA0000002), # A ConsoleDataBlock structure (section 2.5.1).
('CONSOLE_FE_PROPS', 0xA0000004), # A ConsoleFEDataBlock structure (section 2.5.2).
('DARWIN_PROPS', 0xA0000006), # A DarwinDataBlock structure (section 2.5.3).
('ENVIRONMENT_PROPS', 0xA0000001), # An EnvironmentVariableDataBlock structure (section 2.5.4).
('ICON_ENVIRONMENT_PROPS', 0xA0000007), # An IconEnvironmentDataBlock structure (section 2.5.5).
('KNOWN_FOLDER_PROPS', 0xA000000B), # A KnownFolderDataBlock structure (section 2.5.6).
('PROPERTY_STORE_PROPS', 0xA0000009), # A PropertyStoreDataBlock structure (section 2.5.7).
('SHIM_PROPS', 0xA0000008), # A ShimDataBlock structure (section 2.5.8).
('SPECIAL_FOLDER_PROPS', 0xA0000005), # A SpecialFolderDataBlock structure (section 2.5.9).
('TRACKER_PROPS', 0xA0000003), # A TrackerDataBlock structure (section 2.5.10).
('VISTA_AND_ABOVE_IDLIST_PROPS', 0xA000000C), # A VistaAndAboveIDListDataBlock structure (section 2.5.11).
]
class ExtraDataBlock(pstruct.type):
def __BlockData(self):
size, signature = (self[fld].li for fld in ['BlockSize', 'BlockSignature'])
total, fields = self['BlockSize'].li.int(), ['BlockSize', 'BlockSignature']
expected = total - sum(self[fld].li.size() for fld in fields)
return EXTRA_DATA.withdefault(signature.int(), ptype.block, length=max(0, expected))
def __padding_BlockData(self):
expected, fields = self['BlockSize'].li.int(), ['BlockSize', 'BlockSignature', 'BlockData']
return dyn.block(max(0, expected - sum(self[fld].li.size() for fld in fields)))
_fields_ = [
(DWORD, 'BlockSize'),
(lambda self: dyn.clone(EXTRA_DATA_BLOCK, length=0) if self['BlockSize'].li.int() < 8 else EXTRA_DATA_BLOCK, 'BlockSignature'),
(__BlockData, 'BlockData'),
(__padding_BlockData, 'padding(BlockData)'),
]
class ExtraData(parray.terminated):
_object_ = ExtraDataBlock
def isTerminator(self, item):
return item['BlockSize'].int() < 4
### extra data blocks
class RGBI(pbinary.flags):
_fields_ = [
(1, 'INTENSITY'),
(1, 'RED'),
(1, 'GREEN'),
(1, 'BLUE'),
]
class FOREGROUND_(RGBI): pass
class BACKGROUND_(RGBI): pass
class FF_(pbinary.enum):
length, _values_ = 4, [
('DONTCARE', 0),
('ROMAN', 1),
('SWISS', 2),
('MODERN', 3),
('SCRIPT', 4),
('DECORATIVE', 5),
]
class TMPF_(pbinary.flags):
_fields_ = [
(1, 'DEVICE'),
(1, 'TRUETYPE'),
(1, 'VECTOR'),
(1, 'FIXED_PITCH'),
]
@EXTRA_DATA.define
class ConsoleDataBlock(pstruct.type):
signature = 0xA0000002
@pbinary.littleendian
class _FillAttributes(pbinary.flags):
_fields_ = [
(8, 'Unused'),
(BACKGROUND_, 'BACKGROUND'),
(FOREGROUND_, 'FOREGROUND'),
]
@pbinary.littleendian
class _FontFamily(pbinary.struct):
_fields_ = [
(24, 'Unused'),
(FF_, 'Family'),
(TMPF_, 'Pitch'),
]
_fields_ = [
(_FillAttributes, 'FillAttributes'),
(_FillAttributes, 'PopupFillAttributes'),
(INT16, 'ScreenBufferSizeX'),
(INT16, 'ScreenBufferSizeY'),
(INT16, 'WindowSizeX'),
(INT16, 'WindowSizeY'),
(INT16, 'WindowOriginX'),
(INT16, 'WindowOriginY'),
(DWORD, 'Unused1'),
(DWORD, 'Unused2'),
(DWORD, 'FontSize'),
(_FontFamily, 'FontFamily'),
(DWORD, 'FontWeight'),
(dyn.clone(pstr.wstring, length=32), 'Face Name'),
(DWORD, 'CursorSize'),
(DWORD, 'FullScreen'),
(DWORD, 'QuickEdit'),
(DWORD, 'InsertMode'),
(DWORD, 'AutoPosition'),
(DWORD, 'HistoryBufferSize'),
(DWORD, 'NumberOfHistoryBuffers'),
(DWORD, 'HistoryNoDup'),
(dyn.array(DWORD, 16), 'ColorTable'),
]
@EXTRA_DATA.define
class ConsoleFEDataBlock(pstruct.type):
signature = 0xA0000004
_fields_ = [
(DWORD, 'CodePage'),
]
@EXTRA_DATA.define
class DarwinDataBlock(pstruct.type):
signature = 0xA0000006
def __padding(field, size):
def Fpadding(self):
return dyn.block(max(0, size - self[field].li.size()))
return Fpadding
_fields_ = [
(pstr.szstring, 'DarwinDataAnsi'),
(__padding('DarwinDataAnsi', 260), 'padding(DarwinDataAnsi)'),
(pstr.szwstring, 'DarwinDataUnicode'),
(__padding('DarwinDataUnicode', 520), 'padding(DarwinDataUnicode)'),
]
@EXTRA_DATA.define
class EnvironmentVariableDataBlock(pstruct.type):
signature = 0xA0000001
def __padding(field, size):
def Fpadding(self):
return dyn.block(max(0, size - self[field].li.size()))
return Fpadding
_fields_ = [
(pstr.szstring, 'TargetAnsi'),
(__padding('TargetAnsi', 260), 'padding(TargetAnsi)'),
(pstr.szwstring, 'TargetUnicode'),
(__padding('TargetUnicode', 520), 'padding(TargetUnicode)'),
]
@EXTRA_DATA.define
class IconEnvironmentDataBlock(EnvironmentVariableDataBlock):
signature = 0xA0000007
@EXTRA_DATA.define
class KnownFolderDataBlock(pstruct.type):
signature = 0xA000000B
_fields_ = [
(GUID, 'KnownFolderID'),
(DWORD, 'Offset'),
]
class SerializedPropertyValueStringName(pstruct.type):
_fields_ = [
(DWORD, 'Value Size'),
(DWORD, 'Name Size'),
(BYTE, 'Reserved'),
(pstr.szwstring, 'Name'),
(lambda self: dyn.block(max(0, self['Name Size'].li.int() - self['Name'].li.size())), 'padding(Name)'),
(office.propertyset.TypedPropertyValue, 'Value'),
(lambda self: dyn.block(max(0, self['Value Size'].li.int() - self['Value'].li.size())), 'padding(Value)'),
]
class SerializedPropertyValueIntegerName(pstruct.type):
_fields_ = [
(DWORD, 'Value Size'),
(DWORD, 'Id'),
(BYTE, 'Reserved'),
(office.propertyset.TypedPropertyValue, 'Value'),
(lambda self: dyn.block(max(0, self['Value Size'].li.int() - self['Value'].li.size())), 'padding(Value)'),
]
@EXTRA_DATA.define
class PropertyStoreDataBlock(pstruct.type):
signature = 0xA0000009
class _Serialized_Property_Value(parray.terminated):
def isTerminator(self, item):
return item['Value Size'].int() == 0
def __Serialized_Property_Value(self):
format = self['Format ID']
items = [component for component in format.iterate()]
if items == [0xD5CDD505, 0x2E9C, 0x101B, 0x9397, 0x08002B2CF9AE]:
t = SerializedPropertyValueStringName
else:
t = SerializedPropertyValueIntegerName
return dyn.clone(self._Serialized_Property_Value, _object_=t)
def __padding_Serialized_Property_Value(self):
expected, fields = self['Storage Size'].li, ['Storage Size', 'Version', 'Format ID']
return dyn.block(max(0, expected.int() - sum(self[fld].li.size() for fld in fields)))
_fields_ = [
(DWORD, 'Storage Size'),
(DWORD, 'Version'),
(GUID, 'Format ID'),
(__Serialized_Property_Value, 'Serialized Property Value'),
(__padding_Serialized_Property_Value, 'padding(Serialized Property Value)'),
]
@EXTRA_DATA.define
class ShimDataBlock(pstruct.type):
signature = 0xA0000008
def __LayerName(self):
p = self.parent
if p:
size = p['BlockSize'].li.int() - sum(p[fld].li.size() for fld in ['BlockSize', 'BlockSignature'])
return dyn.clone(pstr.wstring, length=size // 2)
return pstr.wstring
_fields_ = [
(__LayerName, 'LayerName'),
]
@EXTRA_DATA.define
class SpecialFolderDataBlock(pstruct.type):
signature = 0xA0000005
_fields_ = [
(DWORD, 'SpecialFolderID'),
(DWORD, 'Offset'),
]
@EXTRA_DATA.define
class TrackerDataBlock(pstruct.type):
signature = 0xA0000003
class _Droid(parray.type):
length, _object_ = 2, GUID
def summary(self):
items = [item.str() for item in self]
return "({:d}) {:s}".format(len(items), ', '.join(items))
_fields_ = [
(DWORD, 'Length'),
(DWORD, 'Version'),
(dyn.clone(pstr.string, length=16), 'MachineID'),
(_Droid, 'Droid'),
(_Droid, 'DroidBirth'),
]
@EXTRA_DATA.define
class VistaAndAboveIDListDataBlock(IDList):
signature = 0xA000000C
class File(pstruct.type):
def __ConditionalType(flag, type):
def Ftype(self):
header = self['Header'].li
return type if header['LinkFlags'][flag] else ptype.undefined
return Ftype
def __ConditionalStringData(flag):
def FStringData(self):
header = self['Header'].li
string_t = UnicodeStringData if header['LinkFlags']['IsUnicode'] else AnsiStringData
return string_t if header['LinkFlags'][flag] else ptype.undefined
return FStringData
_fields_ = [
(ShellLinkHeader, 'Header'),
(__ConditionalType('HasLinkTargetIDList', LinkTargetIDList), 'IDList'), # HasLinkTargetIDList
(__ConditionalType('HasLinkInfo', LinkInfo), 'Info'), # HasLinkInfo
(__ConditionalStringData('HasName'), 'NAME_STRING'), # HasName
(__ConditionalStringData('HasRelativePath'), 'RELATIVE_PATH'), # HasRelativePath
(__ConditionalStringData('HasWorkingDir'), 'WORKING_DIR'), # HasWorkingDir
(__ConditionalStringData('HasArguments'), 'COMMAND_LINE_ARGUMENTS'), # HasArguments
(__ConditionalStringData('HasIconLocation'), 'ICON_LOCATION'), # HasIconLocation
(ExtraData, 'Extra'),
]
if __name__ == '__main__':
import builtins, operator, os, math, functools, itertools, sys, types
# x0 x1 x2 x3 x4 x5 x6 x7 x8 x9 xA xB xC xD xE xF
hexadecimal_representation = '''
0000 4C 00 00 00 01 14 02 00 00 00 00 00 C0 00 00 00
0010 00 00 00 46 9B 00 08 00 20 00 00 00 D0 E9 EE F2
0020 15 15 C9 01 D0 E9 EE F2 15 15 C9 01 D0 E9 EE F2
0030 15 15 C9 01 00 00 00 00 00 00 00 00 01 00 00 00
0040 00 00 00 00 00 00 00 00 00 00 00 00 BD 00 14 00
0050 1F 50 E0 4F D0 20 EA 3A 69 10 A2 D8 08 00 2B 30
0060 30 9D 19 00 2F 43 3A 5C 00 00 00 00 00 00 00 00
0070 00 00 00 00 00 00 00 00 00 00 00 46 00 31 00 00
0080 00 00 00 2C 39 69 A3 10 00 74 65 73 74 00 00 32
0090 00 07 00 04 00 EF BE 2C 39 65 A3 2C 39 69 A3 26
00A0 00 00 00 03 1E 00 00 00 00 F5 1E 00 00 00 00 00
00B0 00 00 00 00 00 74 00 65 00 73 00 74 00 00 00 14
00C0 00 48 00 32 00 00 00 00 00 2C 39 69 A3 20 00 61
00D0 2E 74 78 74 00 34 00 07 00 04 00 EF BE 2C 39 69
00E0 A3 2C 39 69 A3 26 00 00 00 2D 6E 00 00 00 00 96
00F0 01 00 00 00 00 00 00 00 00 00 00 61 00 2E 00 74
0100 00 78 00 74 00 00 00 14 00 00 00 3C 00 00 00 1C
0110 00 00 00 01 00 00 00 1C 00 00 00 2D 00 00 00 00
0120 00 00 00 3B 00 00 00 11 00 00 00 03 00 00 00 81
0130 8A 7A 30 10 00 00 00 00 43 3A 5C 74 65 73 74 5C
0140 61 2E 74 78 74 00 00 07 00 2E 00 5C 00 61 00 2E
0150 00 74 00 78 00 74 00 07 00 43 00 3A 00 5C 00 74
0160 00 65 00 73 00 74 00 60 00 00 00 03 00 00 A0 58
0170 00 00 00 00 00 00 00 63 68 72 69 73 2D 78 70 73
0180 00 00 00 00 00 00 00 40 78 C7 94 47 FA C7 46 B3
0190 56 5C 2D C6 B6 D1 15 EC 46 CD 7B 22 7F DD 11 94
01A0 99 00 13 72 16 87 4A 40 78 C7 94 47 FA C7 46 B3
01B0 56 5C 2D C6 B6 D1 15 EC 46 CD 7B 22 7F DD 11 94
01C0 99 00 13 72 16 87 4A 00 00 00 00
'''
rows = map(operator.methodcaller('strip'), hexadecimal_representation.split('\n'))
items = [item.replace(' ', '') for offset, item in map(operator.methodcaller('split', ' ', 1), filter(None, rows))]
data = bytes().join(map(operator.methodcaller('decode', 'hex') if sys.version_info.major < 3 else bytes.fromhex, items))
# HeaderSize: (4 bytes, offset 0x0000), 0x0000004C as required.
# LinkCLSID: (16 bytes, offset 0x0004), 00021401-0000-0000-C000-000000000046.
# LinkFlags: (4 bytes, offset 0x0014), 0x0008009B means the following LinkFlags (section 2.1.1) are set:
# HasLinkTargetIDList
# HasLinkInfo
# HasRelativePath
# HasWorkingDir
# IsUnicode
# EnableTargetMetadata
# FileAttributes: (4 bytes, offset 0x0018), 0x00000020, means the following FileAttributesFlags (section 2.1.2) are set:
# FILE_ATTRIBUTE_ARCHIVE
# CreationTime: (8 bytes, offset 0x001C) FILETIME 9/12/08, 8:27:17PM.
# AccessTime: (8 bytes, offset 0x0024) FILETIME 9/12/08, 8:27:17PM.
# WriteTime: (8 bytes, offset 0x002C) FILETIME 9/12/08, 8:27:17PM.
# FileSize: (4 bytes, offset 0x0034), 0x00000000.
# IconIndex: (4 bytes, offset 0x0038), 0x00000000.
# ShowCommand: (4 bytes, offset 0x003C), SW_SHOWNORMAL(1).
# Hotkey: (2 bytes, offset 0x0040), 0x0000.
# Reserved: (2 bytes, offset 0x0042), 0x0000.
# Reserved2: (4 bytes, offset 0x0044), 0x00000000.
# Reserved3: (4 bytes, offset 0x0048), 0x00000000.
# Because HasLinkTargetIDList is set, a LinkTargetIDList structure (section 2.2) follows:
# IDListSize: (2 bytes, offset 0x004C), 0x00BD, the size of IDList.
# IDList: (189 bytes, offset 0x004E) an IDList structure (section 2.2.1) follows:
# ItemIDList: (187 bytes, offset 0x004E), ItemID structures (section 2.2.2) follow:
# ItemIDSize: (2 bytes, offset 0x004E), 0x0014
# Data: (12 bytes, offset 0x0050), <18 bytes of data> [computer]
# ItemIDSize: (2 bytes, offset 0x0062), 0x0019
# Data: (23 bytes, offset 0x0064), <23 bytes of data> [c:]
# ItemIDSize: (2 bytes, offset 0x007B), 0x0046
# Data: (68 bytes, offset 0x007D), <68 bytes of data> [test]
# ItemIDSize: (2 bytes, offset 0x00C1), 0x0048
# Data: (68 bytes, offset 0x00C3), <70 bytes of data> [a.txt]
# TerminalID: (2 bytes, offset 0x0109), 0x0000 indicates the end of the IDList.
# Because HasLinkInfo is set, a LinkInfo structure (section 2.3) follows:
# LinkInfoSize: (4 bytes, offset 0x010B), 0x0000003C
# LinkInfoHeaderSize: (4 bytes, offset 0x010F), 0x0000001C as specified in the LinkInfo structure definition.
# LinkInfoFlags: (4 bytes, offset 0x0113), 0x00000001 VolumeIDAndLocalBasePath is set.
# VolumeIDOffset: (4 bytes, offset 0x0117), 0x0000001C, references offset 0x0127.
# LocalBasePathOffset: (4 bytes, offset 0x011B), 0x0000002D, references the character string "C:\test\a.txt".
# CommonNetworkRelativeLinkOffset: (4 bytes, offset 0x011F), 0x00000000 indicates CommonNetworkRelativeLink is not present.
# CommonPathSuffixOffset: (4 bytes, offset 0x0123), 0x0000003B, references offset 0x00000146, the character string "" (empty string).
# VolumeID: (17 bytes, offset 0x0127), because VolumeIDAndLocalBasePath is set, a VolumeID structure (section 2.3.1) follows:
# VolumeIDSize: (4 bytes, offset 0x0127), 0x00000011 indicates the size of the VolumeID structure.
# DriveType: (4 bytes, offset 0x012B), DRIVE_FIXED(3).
# DriveSerialNumber: (4 bytes, offset 0x012F), 0x307A8A81.
# VolumeLabelOffset: (4 bytes, offset 0x0133), 0x00000010, indicates that Volume Label Offset Unicode is not specified and references offset 0x0137 where the Volume Label is stored.
# Data: (1 byte, offset 0x0137), "" an empty character string.
# LocalBasePath: (14 bytes, offset 0x0138), because VolumeIDAndLocalBasePath is set, the character string "c:\test\a.txt" is present.
# CommonPathSuffix: (1 byte, offset 0x0146), "" an empty character string.
# Because HasRelativePath is set, the RELATIVE_PATH StringData structure (section 2.4) follows:
# CountCharacters: (2 bytes, offset 0x0147), 0x0007 Unicode characters.
# String (14 bytes, offset 0x0149), the Unicode string: ".\a.txt".
# Because HasWorkingDir is set, the WORKING_DIR StringData structure (section 2.4) follows:
# CountCharacters: (2 bytes, offset 0x0157), 0x0007 Unicode characters.
# String (14 bytes, offset 0x0159), the Unicode string: "c:\test".
# Extra data section: (100 bytes, offset 0x0167), an ExtraData structure (section 2.5) follows:
# ExtraDataBlock (96 bytes, offset 0x0167), the TrackerDataBlock structure (section 2.5.10) follows:
# BlockSize: (4 bytes, offset 0x0167), 0x00000060
# BlockSignature: (4 bytes, offset 0x016B), 0xA0000003, which identifies the TrackerDataBlock structure (section 2.5.10).
# Length: (4 bytes, offset 0x016F), 0x00000058, the required minimum size of this extra data block.
# Version: (4 bytes, offset 0x0173), 0x00000000, the required version.
# MachineID: (16 bytes, offset 0x0177), the character string "chris-xps", with zero fill.
# Droid: (32 bytes, offset 0x0187), 2 GUID values.
# DroidBirth: (32 bytes, offset 0x01A7), 2 GUID values.
# TerminalBlock: (4 bytes, offset 0x01C7), 0x00000000 indicates the end of the extra data section.
import ptypes, lnkfile
from lnkfile import *
#importlib.reload(lnkfile)
source = ptypes.setsource(ptypes.prov.bytes(data))
z = File()
z = z.l
print(z)
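# Illustrative follow-up (not part of the original script): once a shell link is
# parsed, individual structures can be pulled out of the File instance by the
# field names defined above. Treat this as a sketch, not verified output.
def _example_inspect(parsed):
    header = parsed['Header']
    print(header['LinkFlags'])
    if header['LinkFlags']['HasLinkInfo']:
        print(parsed['Info']['LinkInfoSize'])
    for block in parsed['Extra']:
        print(block['BlockSignature'])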
| bsd-2-clause | 8,419,824,990,856,185,000 | 47.704327 | 279 | 0.605054 | false | 3.501123 | false | false | false |
chrisndodge/edx-platform | openedx/core/djangoapps/user_api/migrations/0001_initial.py | 20 | 2731 | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations, models
import django.utils.timezone
from django.conf import settings
import model_utils.fields
import django.core.validators
from openedx.core.djangoapps.xmodule_django.models import CourseKeyField
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='UserCourseTag',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('key', models.CharField(max_length=255, db_index=True)),
('course_id', CourseKeyField(max_length=255, db_index=True)),
('value', models.TextField()),
('user', models.ForeignKey(related_name='+', to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='UserOrgTag',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', model_utils.fields.AutoCreatedField(default=django.utils.timezone.now, verbose_name='created', editable=False)),
('modified', model_utils.fields.AutoLastModifiedField(default=django.utils.timezone.now, verbose_name='modified', editable=False)),
('key', models.CharField(max_length=255, db_index=True)),
('org', models.CharField(max_length=255, db_index=True)),
('value', models.TextField()),
('user', models.ForeignKey(related_name='+', to=settings.AUTH_USER_MODEL)),
],
),
migrations.CreateModel(
name='UserPreference',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('key', models.CharField(db_index=True, max_length=255, validators=[django.core.validators.RegexValidator(b'[-_a-zA-Z0-9]+')])),
('value', models.TextField()),
('user', models.ForeignKey(related_name='preferences', to=settings.AUTH_USER_MODEL)),
],
),
migrations.AlterUniqueTogether(
name='userpreference',
unique_together=set([('user', 'key')]),
),
migrations.AlterUniqueTogether(
name='userorgtag',
unique_together=set([('user', 'org', 'key')]),
),
migrations.AlterUniqueTogether(
name='usercoursetag',
unique_together=set([('user', 'course_id', 'key')]),
),
]
| agpl-3.0 | 5,830,344,502,259,138,000 | 43.048387 | 147 | 0.589894 | false | 4.273865 | false | false | false |
jingxiang-li/kaggle-yelp | model/level3_model_rf.py | 1 | 5669 | from __future__ import division
from __future__ import absolute_import
from __future__ import print_function
from __future__ import unicode_literals
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.calibration import CalibratedClassifierCV
from sklearn.metrics import f1_score
import argparse
from os import path
import os
from hyperopt import fmin, tpe, hp, STATUS_OK, Trials
from utils import *
import pickle
np.random.seed(54568464)
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument('--yix', type=int, default=0)
return parser.parse_args()
# functions for hyperparameters optimization
class Score:
def __init__(self, X, y):
self.y = y
self.X = X
def get_score(self, params):
params['n_estimators'] = int(params['n_estimators'])
params['max_depth'] = int(params['max_depth'])
params['min_samples_split'] = int(params['min_samples_split'])
params['min_samples_leaf'] = int(params['min_samples_leaf'])
print('Training with params:')
print(params)
# cross validation here
scores = []
for train_ix, test_ix in makeKFold(5, self.y, 1):
X_train, y_train = self.X[train_ix, :], self.y[train_ix]
X_test, y_test = self.X[test_ix, :], self.y[test_ix]
weight = y_train.shape[0] / (2 * np.bincount(y_train))
sample_weight = np.array([weight[i] for i in y_train])
clf = RandomForestClassifier(**params)
cclf = CalibratedClassifierCV(base_estimator=clf,
method='isotonic',
cv=makeKFold(3, y_train, 1))
cclf.fit(X_train, y_train, sample_weight)
pred = cclf.predict(X_test)
scores.append(f1_score(y_true=y_test, y_pred=pred))
print(scores)
score = np.mean(scores)
print(score)
return {'loss': -score, 'status': STATUS_OK}
def optimize(trials, X, y, max_evals):
space = {
'n_estimators': hp.quniform('n_estimators', 100, 500, 50),
'criterion': hp.choice('criterion', ['gini', 'entropy']),
'max_depth': hp.quniform('max_depth', 1, 7, 1),
'min_samples_split': hp.quniform('min_samples_split', 1, 9, 2),
'min_samples_leaf': hp.quniform('min_samples_leaf', 1, 5, 1),
'bootstrap': True,
'oob_score': True,
'n_jobs': -1
}
s = Score(X, y)
best = fmin(s.get_score,
space,
algo=tpe.suggest,
trials=trials,
max_evals=max_evals
)
best['n_estimators'] = int(best['n_estimators'])
best['max_depth'] = int(best['max_depth'])
best['min_samples_split'] = int(best['min_samples_split'])
best['min_samples_leaf'] = int(best['min_samples_leaf'])
best['criterion'] = ['gini', 'entropy'][best['criterion']]
best['bootstrap'] = True
best['oob_score'] = True
best['n_jobs'] = -1
del s
return best
def out_fold_pred(params, X, y):
# cross validation here
preds = np.zeros((y.shape[0]))
for train_ix, test_ix in makeKFold(5, y, 1):
X_train, y_train = X[train_ix, :], y[train_ix]
X_test = X[test_ix, :]
weight = y_train.shape[0] / (2 * np.bincount(y_train))
sample_weight = np.array([weight[i] for i in y_train])
clf = RandomForestClassifier(**params)
cclf = CalibratedClassifierCV(base_estimator=clf,
method='isotonic',
cv=makeKFold(3, y_train, 1))
cclf.fit(X_train, y_train, sample_weight)
pred = cclf.predict_proba(X_test)[:, 1]
preds[test_ix] = pred
return preds
def get_model(params, X, y):
clf = RandomForestClassifier(**params)
cclf = CalibratedClassifierCV(base_estimator=clf,
method='isotonic',
cv=makeKFold(3, y, 1))
weight = y.shape[0] / (2 * np.bincount(y))
sample_weight = np.array([weight[i] for i in y])
cclf.fit(X, y, sample_weight)
return cclf
args = parse_args()
data_dir = '../level3-feature/' + str(args.yix)
X_train = np.load(path.join(data_dir, 'X_train.npy'))
X_test = np.load(path.join(data_dir, 'X_test.npy'))
y_train = np.load(path.join(data_dir, 'y_train.npy'))
print(X_train.shape, X_test.shape, y_train.shape)
X_train_ext = np.load('../extra_ftrs/' + str(args.yix) + '/X_train_ext.npy')
X_test_ext = np.load('../extra_ftrs/' + str(args.yix) + '/X_test_ext.npy')
print(X_train_ext.shape, X_test_ext.shape)
X_train = np.hstack((X_train, X_train_ext))
X_test = np.hstack((X_test, X_test_ext))
print('Add Extra')
print(X_train.shape, X_test.shape, y_train.shape)
# Now we have X_train, X_test, y_train
trials = Trials()
params = optimize(trials, X_train, y_train, 50)
out_fold = out_fold_pred(params, X_train, y_train)
clf = get_model(params, X_train, y_train)
preds = clf.predict_proba(X_test)[:, 1]
save_dir = '../level3-model-final/' + str(args.yix)
print(save_dir)
if not path.exists(save_dir):
os.makedirs(save_dir)
# save model, parameter, outFold_pred, pred
with open(path.join(save_dir, 'model_rf.pkl'), 'wb') as f_model:
pickle.dump(clf.calibrated_classifiers_, f_model)
with open(path.join(save_dir, 'param_rf.pkl'), 'wb') as f_param:
pickle.dump(params, f_param)
np.save(path.join(save_dir, 'pred_rf.npy'), preds)
np.save(path.join(save_dir, 'outFold_rf.npy'), out_fold)
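# Illustrative sketch (not part of the original script): a later stacking step
# could reload the artifacts written above roughly like this. The directory
# layout matches what this script saves; everything else is an assumption for
# the example only.
def _example_reload(save_dir):
    with open(path.join(save_dir, 'model_rf.pkl'), 'rb') as f_model:
        calibrated_classifiers = pickle.load(f_model)
    out_fold_pred = np.load(path.join(save_dir, 'outFold_rf.npy'))
    test_pred = np.load(path.join(save_dir, 'pred_rf.npy'))
    return calibrated_classifiers, out_fold_pred, test_pred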
| mit | 6,082,043,993,278,494,000 | 33.357576 | 76 | 0.594814 | false | 3.135509 | true | false | false |
jreback/pandas | pandas/io/formats/html.py | 2 | 23192 | """
Module for formatting output data in HTML.
"""
from textwrap import dedent
from typing import Any, Dict, Iterable, List, Mapping, Optional, Tuple, Union, cast
from pandas._config import get_option
from pandas._libs import lib
from pandas import MultiIndex, option_context
from pandas.io.common import is_url
from pandas.io.formats.format import DataFrameFormatter, get_level_lengths
from pandas.io.formats.printing import pprint_thing
class HTMLFormatter:
"""
Internal class for formatting output data in html.
This class is intended for shared functionality between
DataFrame.to_html() and DataFrame._repr_html_().
Any logic in common with other output formatting methods
should ideally be inherited from classes in format.py
and this class responsible for only producing html markup.
"""
indent_delta = 2
def __init__(
self,
formatter: DataFrameFormatter,
classes: Optional[Union[str, List[str], Tuple[str, ...]]] = None,
border: Optional[int] = None,
table_id: Optional[str] = None,
render_links: bool = False,
) -> None:
self.fmt = formatter
self.classes = classes
self.frame = self.fmt.frame
self.columns = self.fmt.tr_frame.columns
self.elements: List[str] = []
self.bold_rows = self.fmt.bold_rows
self.escape = self.fmt.escape
self.show_dimensions = self.fmt.show_dimensions
if border is None:
border = cast(int, get_option("display.html.border"))
self.border = border
self.table_id = table_id
self.render_links = render_links
self.col_space = {
column: f"{value}px" if isinstance(value, int) else value
for column, value in self.fmt.col_space.items()
}
def to_string(self) -> str:
lines = self.render()
if any(isinstance(x, str) for x in lines):
lines = [str(x) for x in lines]
return "\n".join(lines)
def render(self) -> List[str]:
self._write_table()
if self.should_show_dimensions:
by = chr(215) # ×
self.write(
f"<p>{len(self.frame)} rows {by} {len(self.frame.columns)} columns</p>"
)
return self.elements
@property
def should_show_dimensions(self):
return self.fmt.should_show_dimensions
@property
def show_row_idx_names(self) -> bool:
return self.fmt.show_row_idx_names
@property
def show_col_idx_names(self) -> bool:
return self.fmt.show_col_idx_names
@property
def row_levels(self) -> int:
if self.fmt.index:
# showing (row) index
return self.frame.index.nlevels
elif self.show_col_idx_names:
# see gh-22579
# Column misalignment also occurs for
# a standard index when the columns index is named.
# If the row index is not displayed a column of
# blank cells need to be included before the DataFrame values.
return 1
# not showing (row) index
return 0
def _get_columns_formatted_values(self) -> Iterable:
return self.columns
@property
def is_truncated(self) -> bool:
return self.fmt.is_truncated
@property
def ncols(self) -> int:
return len(self.fmt.tr_frame.columns)
def write(self, s: Any, indent: int = 0) -> None:
rs = pprint_thing(s)
self.elements.append(" " * indent + rs)
def write_th(
self, s: Any, header: bool = False, indent: int = 0, tags: Optional[str] = None
) -> None:
"""
Method for writing a formatted <th> cell.
If col_space is set on the formatter then that is used for
the value of min-width.
Parameters
----------
s : object
The data to be written inside the cell.
header : bool, default False
Set to True if the <th> is for use inside <thead>. This will
cause min-width to be set if there is one.
indent : int, default 0
The indentation level of the cell.
tags : str, default None
Tags to include in the cell.
Returns
-------
A written <th> cell.
"""
col_space = self.col_space.get(s, None)
if header and col_space is not None:
tags = tags or ""
tags += f'style="min-width: {col_space};"'
self._write_cell(s, kind="th", indent=indent, tags=tags)
def write_td(self, s: Any, indent: int = 0, tags: Optional[str] = None) -> None:
self._write_cell(s, kind="td", indent=indent, tags=tags)
def _write_cell(
self, s: Any, kind: str = "td", indent: int = 0, tags: Optional[str] = None
) -> None:
if tags is not None:
start_tag = f"<{kind} {tags}>"
else:
start_tag = f"<{kind}>"
if self.escape:
# escape & first to prevent double escaping of &
esc = {"&": r"&", "<": r"<", ">": r">"}
else:
esc = {}
rs = pprint_thing(s, escape_chars=esc).strip()
if self.render_links and is_url(rs):
rs_unescaped = pprint_thing(s, escape_chars={}).strip()
start_tag += f'<a href="{rs_unescaped}" target="_blank">'
end_a = "</a>"
else:
end_a = ""
self.write(f"{start_tag}{rs}{end_a}</{kind}>", indent)
def write_tr(
self,
line: Iterable,
indent: int = 0,
indent_delta: int = 0,
header: bool = False,
align: Optional[str] = None,
tags: Optional[Dict[int, str]] = None,
nindex_levels: int = 0,
) -> None:
if tags is None:
tags = {}
if align is None:
self.write("<tr>", indent)
else:
self.write(f'<tr style="text-align: {align};">', indent)
indent += indent_delta
for i, s in enumerate(line):
val_tag = tags.get(i, None)
if header or (self.bold_rows and i < nindex_levels):
self.write_th(s, indent=indent, header=header, tags=val_tag)
else:
self.write_td(s, indent, tags=val_tag)
indent -= indent_delta
self.write("</tr>", indent)
def _write_table(self, indent: int = 0) -> None:
_classes = ["dataframe"] # Default class.
use_mathjax = get_option("display.html.use_mathjax")
if not use_mathjax:
_classes.append("tex2jax_ignore")
if self.classes is not None:
if isinstance(self.classes, str):
self.classes = self.classes.split()
if not isinstance(self.classes, (list, tuple)):
raise TypeError(
"classes must be a string, list, "
f"or tuple, not {type(self.classes)}"
)
_classes.extend(self.classes)
if self.table_id is None:
id_section = ""
else:
id_section = f' id="{self.table_id}"'
self.write(
f'<table border="{self.border}" class="{" ".join(_classes)}"{id_section}>',
indent,
)
if self.fmt.header or self.show_row_idx_names:
self._write_header(indent + self.indent_delta)
self._write_body(indent + self.indent_delta)
self.write("</table>", indent)
def _write_col_header(self, indent: int) -> None:
is_truncated_horizontally = self.fmt.is_truncated_horizontally
if isinstance(self.columns, MultiIndex):
template = 'colspan="{span:d}" halign="left"'
if self.fmt.sparsify:
# GH3547
sentinel = lib.no_default
else:
sentinel = False
levels = self.columns.format(sparsify=sentinel, adjoin=False, names=False)
level_lengths = get_level_lengths(levels, sentinel)
inner_lvl = len(level_lengths) - 1
for lnum, (records, values) in enumerate(zip(level_lengths, levels)):
if is_truncated_horizontally:
# modify the header lines
ins_col = self.fmt.tr_col_num
if self.fmt.sparsify:
recs_new = {}
# Increment tags after ... col.
for tag, span in list(records.items()):
if tag >= ins_col:
recs_new[tag + 1] = span
elif tag + span > ins_col:
recs_new[tag] = span + 1
if lnum == inner_lvl:
values = (
values[:ins_col] + ("...",) + values[ins_col:]
)
else:
# sparse col headers do not receive a ...
values = (
values[:ins_col]
+ (values[ins_col - 1],)
+ values[ins_col:]
)
else:
recs_new[tag] = span
# if ins_col lies between tags, all col headers
# get ...
if tag + span == ins_col:
recs_new[ins_col] = 1
values = values[:ins_col] + ("...",) + values[ins_col:]
records = recs_new
inner_lvl = len(level_lengths) - 1
if lnum == inner_lvl:
records[ins_col] = 1
else:
recs_new = {}
for tag, span in list(records.items()):
if tag >= ins_col:
recs_new[tag + 1] = span
else:
recs_new[tag] = span
recs_new[ins_col] = 1
records = recs_new
values = values[:ins_col] + ["..."] + values[ins_col:]
# see gh-22579
# Column Offset Bug with to_html(index=False) with
# MultiIndex Columns and Index.
# Initially fill row with blank cells before column names.
# TODO: Refactor to remove code duplication with code
# block below for standard columns index.
row = [""] * (self.row_levels - 1)
if self.fmt.index or self.show_col_idx_names:
# see gh-22747
# If to_html(index_names=False) do not show columns
# index names.
# TODO: Refactor to use _get_column_name_list from
# DataFrameFormatter class and create a
# _get_formatted_column_labels function for code
# parity with DataFrameFormatter class.
if self.fmt.show_index_names:
name = self.columns.names[lnum]
row.append(pprint_thing(name or ""))
else:
row.append("")
tags = {}
j = len(row)
for i, v in enumerate(values):
if i in records:
if records[i] > 1:
tags[j] = template.format(span=records[i])
else:
continue
j += 1
row.append(v)
self.write_tr(row, indent, self.indent_delta, tags=tags, header=True)
else:
# see gh-22579
# Column misalignment also occurs for
# a standard index when the columns index is named.
# Initially fill row with blank cells before column names.
# TODO: Refactor to remove code duplication with code block
# above for columns MultiIndex.
row = [""] * (self.row_levels - 1)
if self.fmt.index or self.show_col_idx_names:
# see gh-22747
# If to_html(index_names=False) do not show columns
# index names.
# TODO: Refactor to use _get_column_name_list from
# DataFrameFormatter class.
if self.fmt.show_index_names:
row.append(self.columns.name or "")
else:
row.append("")
row.extend(self._get_columns_formatted_values())
align = self.fmt.justify
if is_truncated_horizontally:
ins_col = self.row_levels + self.fmt.tr_col_num
row.insert(ins_col, "...")
self.write_tr(row, indent, self.indent_delta, header=True, align=align)
def _write_row_header(self, indent: int) -> None:
is_truncated_horizontally = self.fmt.is_truncated_horizontally
row = [x if x is not None else "" for x in self.frame.index.names] + [""] * (
self.ncols + (1 if is_truncated_horizontally else 0)
)
self.write_tr(row, indent, self.indent_delta, header=True)
def _write_header(self, indent: int) -> None:
self.write("<thead>", indent)
if self.fmt.header:
self._write_col_header(indent + self.indent_delta)
if self.show_row_idx_names:
self._write_row_header(indent + self.indent_delta)
self.write("</thead>", indent)
def _get_formatted_values(self) -> Dict[int, List[str]]:
with option_context("display.max_colwidth", None):
fmt_values = {i: self.fmt.format_col(i) for i in range(self.ncols)}
return fmt_values
def _write_body(self, indent: int) -> None:
self.write("<tbody>", indent)
fmt_values = self._get_formatted_values()
# write values
if self.fmt.index and isinstance(self.frame.index, MultiIndex):
self._write_hierarchical_rows(fmt_values, indent + self.indent_delta)
else:
self._write_regular_rows(fmt_values, indent + self.indent_delta)
self.write("</tbody>", indent)
def _write_regular_rows(
self, fmt_values: Mapping[int, List[str]], indent: int
) -> None:
is_truncated_horizontally = self.fmt.is_truncated_horizontally
is_truncated_vertically = self.fmt.is_truncated_vertically
nrows = len(self.fmt.tr_frame)
if self.fmt.index:
fmt = self.fmt._get_formatter("__index__")
if fmt is not None:
index_values = self.fmt.tr_frame.index.map(fmt)
else:
index_values = self.fmt.tr_frame.index.format()
row: List[str] = []
for i in range(nrows):
if is_truncated_vertically and i == (self.fmt.tr_row_num):
str_sep_row = ["..."] * len(row)
self.write_tr(
str_sep_row,
indent,
self.indent_delta,
tags=None,
nindex_levels=self.row_levels,
)
row = []
if self.fmt.index:
row.append(index_values[i])
# see gh-22579
# Column misalignment also occurs for
# a standard index when the columns index is named.
# Add blank cell before data cells.
elif self.show_col_idx_names:
row.append("")
row.extend(fmt_values[j][i] for j in range(self.ncols))
if is_truncated_horizontally:
dot_col_ix = self.fmt.tr_col_num + self.row_levels
row.insert(dot_col_ix, "...")
self.write_tr(
row, indent, self.indent_delta, tags=None, nindex_levels=self.row_levels
)
def _write_hierarchical_rows(
self, fmt_values: Mapping[int, List[str]], indent: int
) -> None:
template = 'rowspan="{span}" valign="top"'
is_truncated_horizontally = self.fmt.is_truncated_horizontally
is_truncated_vertically = self.fmt.is_truncated_vertically
frame = self.fmt.tr_frame
nrows = len(frame)
assert isinstance(frame.index, MultiIndex)
idx_values = frame.index.format(sparsify=False, adjoin=False, names=False)
idx_values = list(zip(*idx_values))
if self.fmt.sparsify:
# GH3547
sentinel = lib.no_default
levels = frame.index.format(sparsify=sentinel, adjoin=False, names=False)
level_lengths = get_level_lengths(levels, sentinel)
inner_lvl = len(level_lengths) - 1
if is_truncated_vertically:
# Insert ... row and adjust idx_values and
# level_lengths to take this into account.
ins_row = self.fmt.tr_row_num
inserted = False
for lnum, records in enumerate(level_lengths):
rec_new = {}
for tag, span in list(records.items()):
if tag >= ins_row:
rec_new[tag + 1] = span
elif tag + span > ins_row:
rec_new[tag] = span + 1
# GH 14882 - Make sure insertion done once
if not inserted:
dot_row = list(idx_values[ins_row - 1])
dot_row[-1] = "..."
idx_values.insert(ins_row, tuple(dot_row))
inserted = True
else:
dot_row = list(idx_values[ins_row])
dot_row[inner_lvl - lnum] = "..."
idx_values[ins_row] = tuple(dot_row)
else:
rec_new[tag] = span
# If ins_row lies between tags, all cols idx cols
# receive ...
if tag + span == ins_row:
rec_new[ins_row] = 1
if lnum == 0:
idx_values.insert(
ins_row, tuple(["..."] * len(level_lengths))
)
# GH 14882 - Place ... in correct level
elif inserted:
dot_row = list(idx_values[ins_row])
dot_row[inner_lvl - lnum] = "..."
idx_values[ins_row] = tuple(dot_row)
level_lengths[lnum] = rec_new
level_lengths[inner_lvl][ins_row] = 1
for ix_col in range(len(fmt_values)):
fmt_values[ix_col].insert(ins_row, "...")
nrows += 1
for i in range(nrows):
row = []
tags = {}
sparse_offset = 0
j = 0
for records, v in zip(level_lengths, idx_values[i]):
if i in records:
if records[i] > 1:
tags[j] = template.format(span=records[i])
else:
sparse_offset += 1
continue
j += 1
row.append(v)
row.extend(fmt_values[j][i] for j in range(self.ncols))
if is_truncated_horizontally:
row.insert(
self.row_levels - sparse_offset + self.fmt.tr_col_num, "..."
)
self.write_tr(
row,
indent,
self.indent_delta,
tags=tags,
nindex_levels=len(levels) - sparse_offset,
)
else:
row = []
for i in range(len(frame)):
if is_truncated_vertically and i == (self.fmt.tr_row_num):
str_sep_row = ["..."] * len(row)
self.write_tr(
str_sep_row,
indent,
self.indent_delta,
tags=None,
nindex_levels=self.row_levels,
)
idx_values = list(
zip(*frame.index.format(sparsify=False, adjoin=False, names=False))
)
row = []
row.extend(idx_values[i])
row.extend(fmt_values[j][i] for j in range(self.ncols))
if is_truncated_horizontally:
row.insert(self.row_levels + self.fmt.tr_col_num, "...")
self.write_tr(
row,
indent,
self.indent_delta,
tags=None,
nindex_levels=frame.index.nlevels,
)
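# --- Editor's sketch: shape of get_level_lengths() output (illustrative) ----
# The rowspan/colspan handling above relies on get_level_lengths() returning
# one dict per index level that maps each group's starting position to its
# span; positions covered by an earlier span are absent.  For a sparsified
# index like [("a", 1), ("a", 2), ("b", 1)] this is roughly:
#     [{0: 2, 2: 1},        # outer level: "a" spans rows 0-1, "b" spans row 2
#      {0: 1, 1: 1, 2: 1}]  # inner level: every label spans a single row
# This shape is inferred from the usage above, not from upstream docs.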
class NotebookFormatter(HTMLFormatter):
"""
Internal class for formatting output data in html for display in Jupyter
Notebooks. This class is intended for functionality specific to
DataFrame._repr_html_() and DataFrame.to_html(notebook=True)
"""
def _get_formatted_values(self) -> Dict[int, List[str]]:
return {i: self.fmt.format_col(i) for i in range(self.ncols)}
def _get_columns_formatted_values(self) -> List[str]:
return self.columns.format()
def write_style(self) -> None:
# We use the "scoped" attribute here so that the desired
# style properties for the data frame are not then applied
# throughout the entire notebook.
template_first = """\
<style scoped>"""
template_last = """\
</style>"""
template_select = """\
.dataframe %s {
%s: %s;
}"""
element_props = [
("tbody tr th:only-of-type", "vertical-align", "middle"),
("tbody tr th", "vertical-align", "top"),
]
if isinstance(self.columns, MultiIndex):
element_props.append(("thead tr th", "text-align", "left"))
if self.show_row_idx_names:
element_props.append(
("thead tr:last-of-type th", "text-align", "right")
)
else:
element_props.append(("thead th", "text-align", "right"))
template_mid = "\n\n".join(map(lambda t: template_select % t, element_props))
template = dedent("\n".join((template_first, template_mid, template_last)))
self.write(template)
def render(self) -> List[str]:
self.write("<div>")
self.write_style()
super().render()
self.write("</div>")
return self.elements
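# --- Editor's sketch: public entry points (not part of the original file) ---
# The formatter classes above are internal plumbing that is normally reached
# through DataFrame.to_html() and DataFrame._repr_html_().  The guarded demo
# below is a minimal, assumed example of those public entry points.
if __name__ == "__main__":
    import pandas as pd

    demo = pd.DataFrame(
        {"a": [1, 2, 3], "b": [4.0, 5.0, 6.0]},
        index=pd.MultiIndex.from_product([["x"], [1, 2, 3]], names=["grp", "n"]),
    )
    # Truncated plain HTML table (exercises the "..." insertion logic above).
    print(demo.to_html(max_rows=2))
    # Notebook-flavoured HTML including the scoped <style> block.
    print(demo.to_html(notebook=True))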
| bsd-3-clause | 8,511,209,384,092,984,000 | 37.018033 | 88 | 0.486525 | false | 4.267759 | false | false | false |
systers/mailman | src/mailman/handlers/acknowledge.py | 7 | 3436 | # Copyright (C) 1998-2015 by the Free Software Foundation, Inc.
#
# This file is part of GNU Mailman.
#
# GNU Mailman is free software: you can redistribute it and/or modify it under
# the terms of the GNU General Public License as published by the Free
# Software Foundation, either version 3 of the License, or (at your option)
# any later version.
#
# GNU Mailman is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
# more details.
#
# You should have received a copy of the GNU General Public License along with
# GNU Mailman. If not, see <http://www.gnu.org/licenses/>.
"""Send an acknowledgment of the successful post to the sender.
This only happens if the sender has set their AcknowledgePosts attribute.
"""
__all__ = [
'Acknowledge',
]
from mailman.core.i18n import _
from mailman.email.message import UserNotification
from mailman.interfaces.handler import IHandler
from mailman.interfaces.languages import ILanguageManager
from mailman.utilities.i18n import make
from mailman.utilities.string import oneline
from zope.component import getUtility
from zope.interface import implementer
@implementer(IHandler)
class Acknowledge:
"""Send an acknowledgment."""
name = 'acknowledge'
description = _("""Send an acknowledgment of a posting.""")
def process(self, mlist, msg, msgdata):
"""See `IHandler`."""
# Extract the sender's address and find them in the user database
sender = msgdata.get('original_sender', msg.sender)
member = mlist.members.get_member(sender)
if member is None or not member.acknowledge_posts:
# Either the sender is not a member, in which case we can't know
# whether they want an acknowledgment or not, or they are a member
# who definitely does not want an acknowledgment.
return
# Okay, they are a member that wants an acknowledgment of their post.
# Give them their original subject. BAW: do we want to use the
# decoded header?
original_subject = msgdata.get(
'origsubj', msg.get('subject', _('(no subject)')))
# Get the user's preferred language.
language_manager = getUtility(ILanguageManager)
language = (language_manager[msgdata['lang']]
if 'lang' in msgdata
else member.preferred_language)
# Now get the acknowledgement template.
display_name = mlist.display_name
text = make('postack.txt',
mailing_list=mlist,
language=language.code,
wrap=False,
subject=oneline(original_subject, in_unicode=True),
list_name=mlist.list_name,
display_name=display_name,
listinfo_url=mlist.script_url('listinfo'),
optionsurl=member.options_url,
)
# Craft the outgoing message, with all headers and attributes
# necessary for general delivery. Then enqueue it to the outgoing
# queue.
subject = _('$display_name post acknowledgment')
usermsg = UserNotification(sender, mlist.bounces_address,
subject, text, language)
usermsg.send(mlist)
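# --- Editor's sketch (illustrative, not part of the original handler) -------
# In the Mailman pipeline this handler is invoked as
#     Acknowledge().process(mlist, msg, msgdata)
# after a message has been accepted for posting.  The early return above
# reduces to the small predicate below; the helper name is an assumption.
def _wants_acknowledgment(mlist, sender):
    """Return True only when the sender is a member who opted in to acks."""
    member = mlist.members.get_member(sender)
    return member is not None and member.acknowledge_posts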
| gpl-3.0 | 9,196,911,609,368,220,000 | 39.423529 | 78 | 0.655995 | false | 4.247219 | false | false | false |
sciCloud/OLiMS | lims/browser/fields/datetimefield.py | 2 | 2641 | from time import strptime
from dependencies.dependency import ClassSecurityInfo
from dependencies.dependency import DateTime, safelocaltime
from dependencies.dependency import DateTimeError
from dependencies.dependency import registerField
from dependencies.dependency import IDateTimeField
from dependencies.dependency import *
from dependencies.dependency import DateTimeField as DTF
from lims import logger
from dependencies.dependency import implements
class DateTimeField(DTF):
"""A field that stores dates and times
This is identical to the AT widget on which it's based, but it checks
the i18n translation values for date formats. It does not specifically
check date_format_short_datepicker, which means that date_formats
should be identical between the Python strftime and the jQuery version.
"""
_properties = Field._properties.copy()
_properties.update({
'type': 'datetime',
'widget': CalendarWidget,
})
implements(IDateTimeField)
security = ClassSecurityInfo()
security.declarePrivate('set')
def set(self, instance, value, **kwargs):
"""
Check if value is an actual date/time value. If not, attempt
to convert it to one; otherwise, set to None. Assign all
properties passed as kwargs to object.
"""
val = value
if not value:
val = None
elif not isinstance(value, DateTime):
for fmt in ['date_format_long', 'date_format_short']:
fmtstr = instance.translate(fmt, domain='bika', mapping={})
fmtstr = fmtstr.replace(r"${", '%').replace('}', '')
try:
val = strptime(value, fmtstr)
except ValueError:
continue
try:
val = DateTime(*list(val)[:-6])
except DateTimeError:
val = None
if val.timezoneNaive():
# Use local timezone for tz naive strings
# see http://dev.plone.org/plone/ticket/10141
zone = val.localZone(safelocaltime(val.timeTime()))
parts = val.parts()[:-1] + (zone,)
val = DateTime(*parts)
break
else:
logger.warning("DateTimeField failed to format date "
"string '%s' with '%s'" % (value, fmtstr))
super(DateTimeField, self).set(instance, val, **kwargs)
registerField(DateTimeField,
title='Date Time',
description='Used for storing date/time')
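# --- Editor's sketch (illustrative only, not part of the original field) ----
# set() above converts translated patterns such as "${Y}/${m}/${d}" into
# strftime patterns ("%Y/%m/%d") and tries them in order.  The helper below
# reproduces that fallback parsing in isolation; the default patterns are
# assumptions, not values taken from a real translation catalog.
def _parse_with_fallbacks(value,
                          patterns=("${Y}-${m}-${d} ${H}:${M}",
                                    "${Y}-${m}-${d}")):
    for pattern in patterns:
        fmtstr = pattern.replace(r"${", '%').replace('}', '')
        try:
            return strptime(value, fmtstr)
        except ValueError:
            continue
    return None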
| agpl-3.0 | 4,101,780,731,871,722,000 | 36.197183 | 77 | 0.605074 | false | 4.881701 | false | false | false |
gbiggs/rtcshell | rtcshell/rtmgr.py | 1 | 6160 | #!/usr/bin/env python
# -*- Python -*-
# -*- coding: utf-8 -*-
'''rtcshell
Copyright (C) 2009-2010
Geoffrey Biggs
RT-Synthesis Research Group
Intelligent Systems Research Institute,
National Institute of Advanced Industrial Science and Technology (AIST),
Japan
All rights reserved.
Licensed under the Eclipse Public License -v 1.0 (EPL)
http://www.opensource.org/licenses/eclipse-1.0.txt
File: rtmgr.py
Implementation of the command for controlling managers.
'''
# $Source$
from optparse import OptionParser, OptionError
import os
from rtctree.exceptions import RtcTreeError, FailedToLoadModuleError, \
FailedToUnloadModuleError, \
FailedToCreateComponentError, \
FailedToDeleteComponentError
from rtctree.tree import create_rtctree, InvalidServiceError, \
FailedToNarrowRootNamingError, \
NonRootPathError
from rtctree.path import parse_path
import sys
from rtcshell import RTSH_PATH_USAGE, RTSH_VERSION
from rtcshell.path import cmd_path_to_full_path
def get_manager(cmd_path, full_path, tree=None):
path, port = parse_path(full_path)
if port:
# Can't configure a port
print >>sys.stderr, '{0}: Cannot access {1}: No such \
object.'.format(sys.argv[0], cmd_path)
return None, None
if not path[-1]:
# There was a trailing slash - ignore it
path = path[:-1]
if not tree:
tree = create_rtctree(paths=path)
if not tree:
return None, None
object = tree.get_node(path)
if not object:
print >>sys.stderr, '{0}: Cannot access {1}: No such \
object.'.format(sys.argv[0], cmd_path)
return tree, None
if not object.is_manager:
print >>sys.stderr, '{0}: Cannot access {1}: Not a \
manager.'.format(sys.argv[0], cmd_path)
return tree, None
return tree, object
def load_module(cmd_path, full_path, module_path, init_func, tree=None):
tree, mgr = get_manager(cmd_path, full_path, tree)
if not mgr:
return 1
try:
mgr.load_module(module_path, init_func)
except FailedToLoadModuleError:
print >>sys.stderr, '{0}: Failed to load module {1}'.format(\
sys.argv[0], module_path)
return 1
return 0
def unload_module(cmd_path, full_path, module_path, tree=None):
tree, mgr = get_manager(cmd_path, full_path, tree)
if not mgr:
return 1
try:
mgr.unload_module(module_path)
except FailedToUnloadModuleError:
print >>sys.stderr, '{0}: Failed to unload module {1}'.format(\
sys.argv[0], module_path)
return 1
return 0
def create_component(cmd_path, full_path, module_name, tree=None):
tree, mgr = get_manager(cmd_path, full_path, tree)
if not mgr:
return 1
try:
mgr.create_component(module_name)
except FailedToCreateComponentError:
print >>sys.stderr, '{0}: Failed to create component from module \
{1}'.format(sys.argv[0], module_name)
return 1
return 0
def delete_component(cmd_path, full_path, instance_name, tree=None):
tree, mgr = get_manager(cmd_path, full_path, tree)
if not mgr:
return 1
try:
mgr.delete_component(instance_name)
except FailedToDeleteComponentError, e:
print >>sys.stderr, '{0}: Failed to delete component {1}'.format(\
sys.argv[0], instance_name)
return 1
return 0
def main(argv=None, tree=None):
usage = '''Usage: %prog [options] <path> <command> [args]
Control a manager, adding and removing shared libraries and components. To
set a manager's configuration, use rtconf.
A command should be one of:
load, unload, create, delete
load <file system path> <init function>
Load a shared library (DLL file or .so file) into the manager.
unload <file system path>
Unload a shared library (DLL file or .so file) from the manager.
create <module name>
Create a new component instance from a loaded shared library.
Properties of the new component can be set by specifying them as part of the
module name argument, prefixed by a question mark. For example, to set the
instance name of a new component of type ConsoleIn, use:
rtmgr manager.mgr create ConsoleIn?instance_name=blag
delete <instance name>
Delete a component instance from the manager, destroying it.
''' + RTSH_PATH_USAGE
version = RTSH_VERSION
parser = OptionParser(usage=usage, version=version)
parser.add_option('-d', '--debug', dest='debug', action='store_true',
default=False, help='Print debugging information. \
[Default: %default]')
if argv:
sys.argv = [sys.argv[0]] + argv
try:
options, args = parser.parse_args()
except OptionError, e:
print 'OptionError:', e
return 1
if len(args) > 2:
cmd_path = args[0]
cmd = args[1]
args = args[2:]
else:
print >>sys.stderr, usage
return 1
full_path = cmd_path_to_full_path(cmd_path)
if cmd == 'load':
if len(args) != 2:
print >>sys.stderr, '{0}: Incorrect number of arguments for load \
command.'.format(sys.argv[0])
return 1
return load_module(cmd_path, full_path, args[0], args[1], tree)
elif cmd == 'unload':
if len(args) != 1:
print >>sys.stderr, '{0}: Incorrect number of arguments for \
unload command.'.format(sys.argv[0])
return 1
return unload_module(cmd_path, full_path, args[0], tree)
elif cmd == 'create':
if len(args) != 1:
print >>sys.stderr, '{0}: Incorrect number of arguments for \
create command.'.format(sys.argv[0])
return 1
return create_component(cmd_path, full_path, args[0], tree)
elif cmd == 'delete':
if len(args) != 1:
print >>sys.stderr, '{0}: Incorrect number of arguments for \
delete command.'.format(sys.argv[0])
return 1
return delete_component(cmd_path, full_path, args[0], tree)
print >>sys.stderr, usage
return 1
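# --- Editor's sketch: direct-execution entry point (not in the original) ----
# The rtcshell commands are normally installed as console scripts that call
# main(); this guard is an assumed convenience for running the module
# directly, e.g.:
#     python rtmgr.py /localhost/manager.mgr load /usr/lib/libfoo.so foo_init
if __name__ == '__main__':
    sys.exit(main())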
# vim: tw=79
| epl-1.0 | 4,634,796,370,147,265,000 | 28.473684 | 78 | 0.631818 | false | 3.677612 | false | false | false |
stone5495/NewsBlur | apps/profile/views.py | 8 | 22968 | import stripe
import datetime
from django.contrib.auth.decorators import login_required
from django.views.decorators.http import require_POST
from django.views.decorators.csrf import csrf_protect
from django.contrib.auth import logout as logout_user
from django.contrib.auth import login as login_user
from django.db.models.aggregates import Sum
from django.http import HttpResponse, HttpResponseRedirect
from django.contrib.sites.models import Site
from django.contrib.auth.models import User
from django.contrib.admin.views.decorators import staff_member_required
from django.core.urlresolvers import reverse
from django.template import RequestContext
from django.shortcuts import render_to_response
from django.core.mail import mail_admins
from django.conf import settings
from apps.profile.models import Profile, PaymentHistory, RNewUserQueue, MRedeemedCode, MGiftCode
from apps.reader.models import UserSubscription, UserSubscriptionFolders, RUserStory
from apps.profile.forms import StripePlusPaymentForm, PLANS, DeleteAccountForm
from apps.profile.forms import ForgotPasswordForm, ForgotPasswordReturnForm, AccountSettingsForm
from apps.profile.forms import RedeemCodeForm
from apps.reader.forms import SignupForm, LoginForm
from apps.rss_feeds.models import MStarredStory, MStarredStoryCounts
from apps.social.models import MSocialServices, MActivity, MSocialProfile
from apps.analyzer.models import MClassifierTitle, MClassifierAuthor, MClassifierFeed, MClassifierTag
from utils import json_functions as json
from utils.user_functions import ajax_login_required
from utils.view_functions import render_to
from utils.user_functions import get_user
from utils import log as logging
from vendor.paypalapi.exceptions import PayPalAPIResponseError
from vendor.paypal.standard.forms import PayPalPaymentsForm
SINGLE_FIELD_PREFS = ('timezone','feed_pane_size','hide_mobile','send_emails',
'hide_getting_started', 'has_setup_feeds', 'has_found_friends',
'has_trained_intelligence',)
SPECIAL_PREFERENCES = ('old_password', 'new_password', 'autofollow_friends', 'dashboard_date',)
@ajax_login_required
@require_POST
@json.json_view
def set_preference(request):
code = 1
message = ''
new_preferences = request.POST
preferences = json.decode(request.user.profile.preferences)
for preference_name, preference_value in new_preferences.items():
if preference_value in ['true','false']: preference_value = True if preference_value == 'true' else False
if preference_name in SINGLE_FIELD_PREFS:
setattr(request.user.profile, preference_name, preference_value)
elif preference_name in SPECIAL_PREFERENCES:
if preference_name == 'autofollow_friends':
social_services = MSocialServices.get_user(request.user.pk)
social_services.autofollow = preference_value
social_services.save()
elif preference_name == 'dashboard_date':
request.user.profile.dashboard_date = datetime.datetime.utcnow()
else:
if preference_value in ["true", "false"]:
preference_value = True if preference_value == "true" else False
preferences[preference_name] = preference_value
if preference_name == 'intro_page':
logging.user(request, "~FBAdvancing intro to page ~FM~SB%s" % preference_value)
request.user.profile.preferences = json.encode(preferences)
request.user.profile.save()
logging.user(request, "~FMSaving preference: %s" % new_preferences)
response = dict(code=code, message=message, new_preferences=new_preferences)
return response
@ajax_login_required
@json.json_view
def get_preference(request):
code = 1
preference_name = request.POST.get('preference')
preferences = json.decode(request.user.profile.preferences)
payload = preferences
if preference_name:
payload = preferences.get(preference_name)
response = dict(code=code, payload=payload)
return response
@csrf_protect
def login(request):
form = LoginForm()
if request.method == "POST":
form = LoginForm(data=request.POST)
if form.is_valid():
login_user(request, form.get_user())
logging.user(form.get_user(), "~FG~BBOAuth Login~FW")
return HttpResponseRedirect(request.POST['next'] or reverse('index'))
return render_to_response('accounts/login.html', {
'form': form,
'next': request.REQUEST.get('next', "")
}, context_instance=RequestContext(request))
@csrf_protect
def signup(request):
form = SignupForm()
if request.method == "POST":
form = SignupForm(data=request.POST)
if form.is_valid():
new_user = form.save()
login_user(request, new_user)
logging.user(new_user, "~FG~SB~BBNEW SIGNUP: ~FW%s" % new_user.email)
new_user.profile.activate_free()
return HttpResponseRedirect(request.POST['next'] or reverse('index'))
return render_to_response('accounts/signup.html', {
'form': form,
'next': request.REQUEST.get('next', "")
}, context_instance=RequestContext(request))
@login_required
@csrf_protect
def redeem_code(request):
code = request.GET.get('code', None)
form = RedeemCodeForm(initial={'gift_code': code})
if request.method == "POST":
form = RedeemCodeForm(data=request.POST)
if form.is_valid():
gift_code = request.POST['gift_code']
MRedeemedCode.redeem(user=request.user, gift_code=gift_code)
return render_to_response('reader/paypal_return.xhtml',
{}, context_instance=RequestContext(request))
return render_to_response('accounts/redeem_code.html', {
'form': form,
'code': request.REQUEST.get('code', ""),
'next': request.REQUEST.get('next', "")
}, context_instance=RequestContext(request))
@ajax_login_required
@require_POST
@json.json_view
def set_account_settings(request):
code = -1
message = 'OK'
form = AccountSettingsForm(user=request.user, data=request.POST)
if form.is_valid():
form.save()
code = 1
else:
message = form.errors[form.errors.keys()[0]][0]
payload = {
"username": request.user.username,
"email": request.user.email,
"social_profile": MSocialProfile.profile(request.user.pk)
}
return dict(code=code, message=message, payload=payload)
@ajax_login_required
@require_POST
@json.json_view
def set_view_setting(request):
code = 1
feed_id = request.POST['feed_id']
feed_view_setting = request.POST.get('feed_view_setting')
feed_order_setting = request.POST.get('feed_order_setting')
feed_read_filter_setting = request.POST.get('feed_read_filter_setting')
feed_layout_setting = request.POST.get('feed_layout_setting')
view_settings = json.decode(request.user.profile.view_settings)
setting = view_settings.get(feed_id, {})
if isinstance(setting, basestring): setting = {'v': setting}
if feed_view_setting: setting['v'] = feed_view_setting
if feed_order_setting: setting['o'] = feed_order_setting
if feed_read_filter_setting: setting['r'] = feed_read_filter_setting
if feed_layout_setting: setting['l'] = feed_layout_setting
view_settings[feed_id] = setting
request.user.profile.view_settings = json.encode(view_settings)
request.user.profile.save()
logging.user(request, "~FMView settings: %s/%s/%s/%s" % (feed_view_setting,
feed_order_setting, feed_read_filter_setting, feed_layout_setting))
response = dict(code=code)
return response
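# --- Editor's sketch (illustrative only) -------------------------------------
# Per-feed view settings are stored as a JSON blob of single-letter keys:
# 'v' = view, 'o' = order, 'r' = read filter, 'l' = layout.  The pure helper
# below mirrors the merge performed in set_view_setting(); any example values
# passed to it are assumptions.
def _merged_view_setting(view_settings, feed_id, view=None, order=None,
                         read_filter=None, layout=None):
    setting = view_settings.get(feed_id, {})
    if isinstance(setting, basestring): setting = {'v': setting}
    if view: setting['v'] = view
    if order: setting['o'] = order
    if read_filter: setting['r'] = read_filter
    if layout: setting['l'] = layout
    view_settings[feed_id] = setting
    return view_settings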
@ajax_login_required
@require_POST
@json.json_view
def clear_view_setting(request):
code = 1
view_setting_type = request.POST.get('view_setting_type')
view_settings = json.decode(request.user.profile.view_settings)
new_view_settings = {}
removed = 0
for feed_id, view_setting in view_settings.items():
if view_setting_type == 'layout' and 'l' in view_setting:
del view_setting['l']
removed += 1
if view_setting_type == 'view' and 'v' in view_setting:
del view_setting['v']
removed += 1
if view_setting_type == 'order' and 'o' in view_setting:
del view_setting['o']
removed += 1
if view_setting_type == 'order' and 'r' in view_setting:
del view_setting['r']
removed += 1
new_view_settings[feed_id] = view_setting
request.user.profile.view_settings = json.encode(new_view_settings)
request.user.profile.save()
logging.user(request, "~FMClearing view settings: %s (found %s)" % (view_setting_type, removed))
response = dict(code=code, view_settings=view_settings, removed=removed)
return response
@ajax_login_required
@json.json_view
def get_view_setting(request):
code = 1
feed_id = request.POST['feed_id']
view_settings = json.decode(request.user.profile.view_settings)
response = dict(code=code, payload=view_settings.get(feed_id))
return response
@ajax_login_required
@require_POST
@json.json_view
def set_collapsed_folders(request):
code = 1
collapsed_folders = request.POST['collapsed_folders']
request.user.profile.collapsed_folders = collapsed_folders
request.user.profile.save()
logging.user(request, "~FMCollapsing folder: %s" % collapsed_folders)
response = dict(code=code)
return response
@ajax_login_required
def paypal_form(request):
domain = Site.objects.get_current().domain
paypal_dict = {
"cmd": "_xclick-subscriptions",
"business": "[email protected]",
"a3": "12.00", # price
"p3": 1, # duration of each unit (depends on unit)
"t3": "Y", # duration unit ("M for Month")
"src": "1", # make payments recur
"sra": "1", # reattempt payment on payment error
"no_note": "1", # remove extra notes (optional)
"item_name": "NewsBlur Premium Account",
"notify_url": "http://%s%s" % (domain, reverse('paypal-ipn')),
"return_url": "http://%s%s" % (domain, reverse('paypal-return')),
"cancel_return": "http://%s%s" % (domain, reverse('index')),
"custom": request.user.username,
}
# Create the instance.
form = PayPalPaymentsForm(initial=paypal_dict, button_type="subscribe")
logging.user(request, "~FBLoading paypal/feedchooser")
# Output the button.
return HttpResponse(form.render(), mimetype='text/html')
def paypal_return(request):
return render_to_response('reader/paypal_return.xhtml', {
}, context_instance=RequestContext(request))
@login_required
def activate_premium(request):
return HttpResponseRedirect(reverse('index'))
@ajax_login_required
@json.json_view
def profile_is_premium(request):
# Check tries
code = 0
retries = int(request.GET['retries'])
profile = Profile.objects.get(user=request.user)
subs = UserSubscription.objects.filter(user=request.user)
total_subs = subs.count()
activated_subs = subs.filter(active=True).count()
if retries >= 30:
code = -1
if not request.user.profile.is_premium:
subject = "Premium activation failed: %s (%s/%s)" % (request.user, activated_subs, total_subs)
message = """User: %s (%s) -- Email: %s""" % (request.user.username, request.user.pk, request.user.email)
mail_admins(subject, message, fail_silently=True)
request.user.profile.is_premium = True
request.user.profile.save()
return {
'is_premium': profile.is_premium,
'code': code,
'activated_subs': activated_subs,
'total_subs': total_subs,
}
@login_required
def stripe_form(request):
user = request.user
success_updating = False
stripe.api_key = settings.STRIPE_SECRET
plan = int(request.GET.get('plan', 2))
plan = PLANS[plan-1][0]
error = None
if request.method == 'POST':
zebra_form = StripePlusPaymentForm(request.POST, email=user.email)
if zebra_form.is_valid():
user.email = zebra_form.cleaned_data['email']
user.save()
current_premium = (user.profile.is_premium and
user.profile.premium_expire and
user.profile.premium_expire > datetime.datetime.now())
# Are they changing their existing card?
if user.profile.stripe_id and current_premium:
customer = stripe.Customer.retrieve(user.profile.stripe_id)
try:
card = customer.cards.create(card=zebra_form.cleaned_data['stripe_token'])
except stripe.CardError:
error = "This card was declined."
else:
customer.default_card = card.id
customer.save()
success_updating = True
else:
try:
customer = stripe.Customer.create(**{
'card': zebra_form.cleaned_data['stripe_token'],
'plan': zebra_form.cleaned_data['plan'],
'email': user.email,
'description': user.username,
})
except stripe.CardError:
error = "This card was declined."
else:
user.profile.strip_4_digits = zebra_form.cleaned_data['last_4_digits']
user.profile.stripe_id = customer.id
user.profile.save()
user.profile.activate_premium() # TODO: Remove, because webhooks are slow
success_updating = True
else:
zebra_form = StripePlusPaymentForm(email=user.email, plan=plan)
if success_updating:
return render_to_response('reader/paypal_return.xhtml',
{}, context_instance=RequestContext(request))
new_user_queue_count = RNewUserQueue.user_count()
new_user_queue_position = RNewUserQueue.user_position(request.user.pk)
new_user_queue_behind = 0
if new_user_queue_position >= 0:
new_user_queue_behind = new_user_queue_count - new_user_queue_position
new_user_queue_position -= 1
logging.user(request, "~BM~FBLoading Stripe form")
return render_to_response('profile/stripe_form.xhtml',
{
'zebra_form': zebra_form,
'publishable': settings.STRIPE_PUBLISHABLE,
'success_updating': success_updating,
'new_user_queue_count': new_user_queue_count - 1,
'new_user_queue_position': new_user_queue_position,
'new_user_queue_behind': new_user_queue_behind,
'error': error,
},
context_instance=RequestContext(request)
)
@render_to('reader/activities_module.xhtml')
def load_activities(request):
user = get_user(request)
page = max(1, int(request.REQUEST.get('page', 1)))
activities, has_next_page = MActivity.user(user.pk, page=page)
return {
'activities': activities,
'page': page,
'has_next_page': has_next_page,
'username': 'You',
}
@ajax_login_required
@json.json_view
def payment_history(request):
user = request.user
if request.user.is_staff:
user_id = request.REQUEST.get('user_id', request.user.pk)
user = User.objects.get(pk=user_id)
history = PaymentHistory.objects.filter(user=user)
statistics = {
"created_date": user.date_joined,
"last_seen_date": user.profile.last_seen_on,
"last_seen_ip": user.profile.last_seen_ip,
"timezone": unicode(user.profile.timezone),
"stripe_id": user.profile.stripe_id,
"profile": user.profile,
"feeds": UserSubscription.objects.filter(user=user).count(),
"email": user.email,
"read_story_count": RUserStory.read_story_count(user.pk),
"feed_opens": UserSubscription.objects.filter(user=user).aggregate(sum=Sum('feed_opens'))['sum'],
"training": {
'title': MClassifierTitle.objects.filter(user_id=user.pk).count(),
'tag': MClassifierTag.objects.filter(user_id=user.pk).count(),
'author': MClassifierAuthor.objects.filter(user_id=user.pk).count(),
'feed': MClassifierFeed.objects.filter(user_id=user.pk).count(),
}
}
return {
'is_premium': user.profile.is_premium,
'premium_expire': user.profile.premium_expire,
'payments': history,
'statistics': statistics,
}
@ajax_login_required
@json.json_view
def cancel_premium(request):
canceled = request.user.profile.cancel_premium()
return {
'code': 1 if canceled else -1,
}
@staff_member_required
@ajax_login_required
@json.json_view
def refund_premium(request):
user_id = request.REQUEST.get('user_id')
partial = request.REQUEST.get('partial', False)
user = User.objects.get(pk=user_id)
try:
refunded = user.profile.refund_premium(partial=partial)
except stripe.InvalidRequestError, e:
refunded = e
except PayPalAPIResponseError, e:
refunded = e
return {'code': 1 if refunded else -1, 'refunded': refunded}
@staff_member_required
@ajax_login_required
@json.json_view
def upgrade_premium(request):
user_id = request.REQUEST.get('user_id')
user = User.objects.get(pk=user_id)
gift = MGiftCode.add(gifting_user_id=User.objects.get(username='samuel').pk,
receiving_user_id=user.pk)
MRedeemedCode.redeem(user, gift.gift_code)
return {'code': user.profile.is_premium}
@staff_member_required
@ajax_login_required
@json.json_view
def never_expire_premium(request):
user_id = request.REQUEST.get('user_id')
user = User.objects.get(pk=user_id)
if user.profile.is_premium:
user.profile.premium_expire = None
user.profile.save()
return {'code': 1}
return {'code': -1}
@staff_member_required
@ajax_login_required
@json.json_view
def update_payment_history(request):
user_id = request.REQUEST.get('user_id')
user = User.objects.get(pk=user_id)
user.profile.setup_premium_history(check_premium=False)
return {'code': 1}
@login_required
@render_to('profile/delete_account.xhtml')
def delete_account(request):
if request.method == 'POST':
form = DeleteAccountForm(request.POST, user=request.user)
if form.is_valid():
logging.user(request.user, "~SK~BC~FRDeleting ~SB%s~SN's account." %
request.user.username)
request.user.profile.delete_user(confirm=True)
logout_user(request)
return HttpResponseRedirect(reverse('index'))
else:
logging.user(request.user, "~BC~FRFailed attempt to delete ~SB%s~SN's account." %
request.user.username)
else:
logging.user(request.user, "~BC~FRAttempting to delete ~SB%s~SN's account." %
request.user.username)
form = DeleteAccountForm(user=request.user)
return {
'delete_form': form,
}
@render_to('profile/forgot_password.xhtml')
def forgot_password(request):
if request.method == 'POST':
form = ForgotPasswordForm(request.POST)
if form.is_valid():
logging.user(request.user, "~BC~FRForgot password: ~SB%s" % request.POST['email'])
try:
user = User.objects.get(email__iexact=request.POST['email'])
except User.MultipleObjectsReturned:
user = User.objects.filter(email__iexact=request.POST['email'])[0]
user.profile.send_forgot_password_email()
return HttpResponseRedirect(reverse('index'))
else:
logging.user(request.user, "~BC~FRFailed forgot password: ~SB%s~SN" %
request.POST['email'])
else:
logging.user(request.user, "~BC~FRAttempting to retrieve forgotten password.")
form = ForgotPasswordForm()
return {
'forgot_password_form': form,
}
@login_required
@render_to('profile/forgot_password_return.xhtml')
def forgot_password_return(request):
if request.method == 'POST':
logging.user(request.user, "~BC~FRReseting ~SB%s~SN's password." %
request.user.username)
new_password = request.POST.get('password', '')
request.user.set_password(new_password)
request.user.save()
return HttpResponseRedirect(reverse('index'))
else:
logging.user(request.user, "~BC~FRAttempting to reset ~SB%s~SN's password." %
request.user.username)
form = ForgotPasswordReturnForm()
return {
'forgot_password_return_form': form,
}
@ajax_login_required
@json.json_view
def delete_starred_stories(request):
timestamp = request.POST.get('timestamp', None)
if timestamp:
delete_date = datetime.datetime.fromtimestamp(int(timestamp))
else:
delete_date = datetime.datetime.now()
starred_stories = MStarredStory.objects.filter(user_id=request.user.pk,
starred_date__lte=delete_date)
stories_deleted = starred_stories.count()
starred_stories.delete()
MStarredStoryCounts.count_for_user(request.user.pk, total_only=True)
starred_counts, starred_count = MStarredStoryCounts.user_counts(request.user.pk, include_total=True)
logging.user(request.user, "~BC~FRDeleting %s/%s starred stories (%s)" % (stories_deleted,
stories_deleted+starred_count, delete_date))
return dict(code=1, stories_deleted=stories_deleted, starred_counts=starred_counts,
starred_count=starred_count)
@ajax_login_required
@json.json_view
def delete_all_sites(request):
request.user.profile.send_opml_export_email(reason="You have deleted all of your sites, so here's a backup just in case.")
subs = UserSubscription.objects.filter(user=request.user)
sub_count = subs.count()
subs.delete()
usf = UserSubscriptionFolders.objects.get(user=request.user)
usf.folders = '[]'
usf.save()
logging.user(request.user, "~BC~FRDeleting %s sites" % sub_count)
return dict(code=1)
@login_required
@render_to('profile/email_optout.xhtml')
def email_optout(request):
user = request.user
user.profile.send_emails = False
user.profile.save()
return {
"user": user,
}
| mit | 5,065,385,919,666,274,000 | 36.408795 | 126 | 0.636973 | false | 3.748654 | false | false | false |
Aaron1992/v2ex | mapreduce/handlers.py | 20 | 27693 | #!/usr/bin/env python
#
# Copyright 2010 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""Defines executor tasks handlers for MapReduce implementation."""
# Disable "Invalid method name"
# pylint: disable-msg=C6409
import datetime
import logging
import math
import os
from mapreduce.lib import simplejson
import time
from google.appengine.api import memcache
from google.appengine.api.labs import taskqueue
from google.appengine.ext import db
from mapreduce import base_handler
from mapreduce import context
from mapreduce import quota
from mapreduce import model
from mapreduce import quota
from mapreduce import util
# TODO(user): Make this a product of the reader or in quotas.py
_QUOTA_BATCH_SIZE = 20
# The amount of time to perform scanning in one slice. New slice will be
# scheduled as soon as current one takes this long.
_SLICE_DURATION_SEC = 15
# Delay between consecutive controller callback invocations.
_CONTROLLER_PERIOD_SEC = 2
class Error(Exception):
"""Base class for exceptions in this module."""
class NotEnoughArgumentsError(Error):
"""Required argument is missing."""
class NoDataError(Error):
"""There is no data present for a desired input."""
class MapperWorkerCallbackHandler(base_handler.BaseHandler):
"""Callback handler for mapreduce worker task.
Request Parameters:
mapreduce_spec: MapreduceSpec of the mapreduce serialized to json.
shard_id: id of the shard.
slice_id: id of the slice.
"""
def __init__(self, time_function=time.time):
"""Constructor.
Args:
time_function: time function to use to obtain current time.
"""
base_handler.BaseHandler.__init__(self)
self._time = time_function
def post(self):
"""Handle post request."""
spec = model.MapreduceSpec.from_json_str(
self.request.get("mapreduce_spec"))
self._start_time = self._time()
shard_id = self.shard_id()
# TODO(user): Make this prettier
logging.debug("post: shard=%s slice=%s headers=%s",
shard_id, self.slice_id(), self.request.headers)
shard_state, control = db.get([
model.ShardState.get_key_by_shard_id(shard_id),
model.MapreduceControl.get_key_by_job_id(spec.mapreduce_id),
])
if not shard_state:
# We're letting this task die. It's up to the controller code to
# reinitialize and restart the task.
logging.error("State not found for shard ID %r; shutting down",
shard_id)
return
if control and control.command == model.MapreduceControl.ABORT:
logging.info("Abort command received by shard %d of job '%s'",
shard_state.shard_number, shard_state.mapreduce_id)
shard_state.active = False
shard_state.result_status = model.ShardState.RESULT_ABORTED
shard_state.put()
model.MapreduceControl.abort(spec.mapreduce_id)
return
input_reader = self.input_reader(spec.mapper)
if spec.mapper.params.get("enable_quota", True):
quota_consumer = quota.QuotaConsumer(
quota.QuotaManager(memcache.Client()),
shard_id,
_QUOTA_BATCH_SIZE)
else:
quota_consumer = None
ctx = context.Context(spec, shard_state)
context.Context._set(ctx)
try:
# consume quota ahead, because we do not want to run a datastore
# query if there's not enough quota for the shard.
if not quota_consumer or quota_consumer.check():
scan_aborted = False
entity = None
# We shouldn't fetch an entity from the reader if there's not enough
# quota to process it. Perform all quota checks proactively.
if not quota_consumer or quota_consumer.consume():
for entity in input_reader:
if isinstance(entity, db.Model):
shard_state.last_work_item = repr(entity.key())
else:
shard_state.last_work_item = repr(entity)[:100]
scan_aborted = not self.process_entity(entity, ctx)
# Check if we've got enough quota for the next entity.
if (quota_consumer and not scan_aborted and
not quota_consumer.consume()):
scan_aborted = True
if scan_aborted:
break
else:
scan_aborted = True
if not scan_aborted:
logging.info("Processing done for shard %d of job '%s'",
shard_state.shard_number, shard_state.mapreduce_id)
# We consumed an extra quota item at the end of the for loop.
# Just be nice here and give it back :)
if quota_consumer:
quota_consumer.put(1)
shard_state.active = False
shard_state.result_status = model.ShardState.RESULT_SUCCESS
# TODO(user): Mike said we don't want this to happen in case of an
# exception while scanning. Figure out when it's appropriate to skip.
ctx.flush()
finally:
context.Context._set(None)
if quota_consumer:
quota_consumer.dispose()
# Rescheduling work should always be the last statement. It shouldn't happen
# if there were any exceptions in code before it.
if shard_state.active:
self.reschedule(spec, input_reader)
def process_entity(self, entity, ctx):
"""Process a single entity.
Call mapper handler on the entity.
Args:
entity: an entity to process.
ctx: current execution context.
Returns:
True if scan should be continued, False if scan should be aborted.
"""
ctx.counters.increment(context.COUNTER_MAPPER_CALLS)
handler = ctx.mapreduce_spec.mapper.handler
if util.is_generator_function(handler):
for result in handler(entity):
if callable(result):
result(ctx)
else:
try:
if len(result) == 2:
logging.error("Collectors not implemented yet")
else:
logging.error("Got bad output tuple of length %d", len(result))
except TypeError:
logging.error(
"Handler yielded type %s, expected a callable or a tuple",
result.__class__.__name__)
else:
handler(entity)
if self._time() - self._start_time > _SLICE_DURATION_SEC:
logging.debug("Spent %s seconds. Rescheduling",
self._time() - self._start_time)
return False
return True
def shard_id(self):
"""Get shard unique identifier of this task from request.
Returns:
shard identifier as string.
"""
return str(self.request.get("shard_id"))
def slice_id(self):
"""Get slice unique identifier of this task from request.
Returns:
slice identifier as int.
"""
return int(self.request.get("slice_id"))
def input_reader(self, mapper_spec):
"""Get the reader from mapper_spec initialized with the request's state.
Args:
mapper_spec: a mapper spec containing the immutable mapper state.
Returns:
An initialized InputReader.
"""
input_reader_spec_dict = simplejson.loads(
self.request.get("input_reader_state"))
return mapper_spec.input_reader_class().from_json(
input_reader_spec_dict)
@staticmethod
def worker_parameters(mapreduce_spec,
shard_id,
slice_id,
input_reader):
"""Fill in mapper worker task parameters.
Returned parameters map is to be used as task payload, and it contains
all the data, required by mapper worker to perform its function.
Args:
mapreduce_spec: specification of the mapreduce.
shard_id: id of the shard (part of the whole dataset).
slice_id: id of the slice (part of the shard).
input_reader: InputReader containing the remaining inputs for this
shard.
Returns:
string->string map of parameters to be used as task payload.
"""
return {"mapreduce_spec": mapreduce_spec.to_json_str(),
"shard_id": shard_id,
"slice_id": str(slice_id),
"input_reader_state": input_reader.to_json_str()}
@staticmethod
def get_task_name(shard_id, slice_id):
"""Compute single worker task name.
Args:
shard_id: id of the shard (part of the whole dataset) as string.
slice_id: id of the slice (part of the shard) as int.
Returns:
task name which should be used to process specified shard/slice.
"""
# Prefix the task name with something unique to this framework's
# namespace so we don't conflict with user tasks on the queue.
return "appengine-mrshard-%s-%s" % (shard_id, slice_id)
def reschedule(self, mapreduce_spec, input_reader):
"""Reschedule worker task to continue scanning work.
Args:
mapreduce_spec: mapreduce specification.
input_reader: remaining input reader to process.
"""
MapperWorkerCallbackHandler.schedule_slice(
self.base_path(), mapreduce_spec, self.shard_id(),
self.slice_id() + 1, input_reader)
@classmethod
def schedule_slice(cls,
base_path,
mapreduce_spec,
shard_id,
slice_id,
input_reader,
queue_name=None,
eta=None,
countdown=None):
"""Schedule slice scanning by adding it to the task queue.
Args:
base_path: base_path of mapreduce request handlers as string.
mapreduce_spec: mapreduce specification as MapreduceSpec.
shard_id: current shard id as string.
slice_id: slice id as int.
input_reader: remaining InputReader for given shard.
queue_name: Optional queue to run on; uses the current queue of
execution or the default queue if unspecified.
eta: Absolute time when the MR should execute. May not be specified
if 'countdown' is also supplied. This may be timezone-aware or
timezone-naive.
countdown: Time in seconds into the future that this MR should execute.
Defaults to zero.
"""
task_params = MapperWorkerCallbackHandler.worker_parameters(
mapreduce_spec, shard_id, slice_id, input_reader)
task_name = MapperWorkerCallbackHandler.get_task_name(shard_id, slice_id)
queue_name = os.environ.get("HTTP_X_APPENGINE_QUEUENAME",
queue_name or "default")
try:
taskqueue.Task(url=base_path + "/worker_callback",
params=task_params,
name=task_name,
eta=eta,
countdown=countdown).add(queue_name)
except (taskqueue.TombstonedTaskError, taskqueue.TaskAlreadyExistsError), e:
logging.warning("Task %r with params %r already exists. %s: %s",
task_name, task_params, e.__class__, e)
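# --- Editor's sketch: mapper handler contract (illustrative only) -----------
# process_entity() above accepts either a plain function or a generator
# function as the mapper handler; a generator may yield callables that are
# invoked with the current context.  The handlers below are assumed examples
# of that contract -- only ctx.counters.increment() comes from this module.
def _example_counter_op(counter_name):
  """Return a callable suitable for yielding from a generator mapper."""
  def op(ctx):
    ctx.counters.increment(counter_name)
  return op


def _example_word_count_map(entity):
  """Generator mapper: count whitespace-separated words in entity.text."""
  for _ in getattr(entity, "text", "").split():
    yield _example_counter_op("words")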
class ControllerCallbackHandler(base_handler.BaseHandler):
"""Supervises mapreduce execution.
Is also responsible for gathering execution status from shards together.
This task is "continuously" running by adding itself again to taskqueue if
mapreduce is still active.
"""
def __init__(self, time_function=time.time):
"""Constructor.
Args:
time_function: time function to use to obtain current time.
"""
base_handler.BaseHandler.__init__(self)
self._time = time_function
def post(self):
"""Handle post request."""
spec = model.MapreduceSpec.from_json_str(
self.request.get("mapreduce_spec"))
# TODO(user): Make this logging prettier.
logging.debug("post: id=%s headers=%s",
spec.mapreduce_id, self.request.headers)
state, control = db.get([
model.MapreduceState.get_key_by_job_id(spec.mapreduce_id),
model.MapreduceControl.get_key_by_job_id(spec.mapreduce_id),
])
if not state:
logging.error("State not found for mapreduce_id '%s'; skipping",
spec.mapreduce_id)
return
shard_states = model.ShardState.find_by_mapreduce_id(spec.mapreduce_id)
if state.active and len(shard_states) != spec.mapper.shard_count:
# Some shards were lost
logging.error("Incorrect number of shard states: %d vs %d; "
"aborting job '%s'",
len(shard_states), spec.mapper.shard_count,
spec.mapreduce_id)
state.active = False
state.result_status = model.MapreduceState.RESULT_FAILED
model.MapreduceControl.abort(spec.mapreduce_id)
active_shards = [s for s in shard_states if s.active]
failed_shards = [s for s in shard_states
if s.result_status == model.ShardState.RESULT_FAILED]
aborted_shards = [s for s in shard_states
if s.result_status == model.ShardState.RESULT_ABORTED]
if state.active:
state.active = bool(active_shards)
state.active_shards = len(active_shards)
state.failed_shards = len(failed_shards)
state.aborted_shards = len(aborted_shards)
if (not state.active and control and
control.command == model.MapreduceControl.ABORT):
# User-initiated abort *after* all shards have completed.
logging.info("Abort signal received for job '%s'", spec.mapreduce_id)
state.result_status = model.MapreduceState.RESULT_ABORTED
if not state.active:
state.active_shards = 0
if not state.result_status:
# Set final result status derived from shard states.
if [s for s in shard_states
if s.result_status != model.ShardState.RESULT_SUCCESS]:
state.result_status = model.MapreduceState.RESULT_FAILED
else:
state.result_status = model.MapreduceState.RESULT_SUCCESS
logging.info("Final result for job '%s' is '%s'",
spec.mapreduce_id, state.result_status)
# We don't need a transaction here, since we change only statistics data,
# and we don't care if it gets overwritten/slightly inconsistent.
self.aggregate_state(state, shard_states)
poll_time = state.last_poll_time
state.last_poll_time = datetime.datetime.utcfromtimestamp(self._time())
if not state.active:
# This is the last execution.
# Enqueue done_callback if needed.
def put_state(state):
state.put()
done_callback = spec.params.get(
model.MapreduceSpec.PARAM_DONE_CALLBACK)
if done_callback:
taskqueue.Task(
url=done_callback,
headers={"Mapreduce-Id": spec.mapreduce_id}).add(
spec.params.get(
model.MapreduceSpec.PARAM_DONE_CALLBACK_QUEUE,
"default"),
transactional=True)
db.run_in_transaction(put_state, state)
return
else:
state.put()
processing_rate = int(spec.mapper.params.get(
"processing_rate") or model._DEFAULT_PROCESSING_RATE_PER_SEC)
self.refill_quotas(poll_time, processing_rate, active_shards)
ControllerCallbackHandler.reschedule(
self.base_path(), spec, self.serial_id() + 1)
def aggregate_state(self, mapreduce_state, shard_states):
"""Update current mapreduce state by aggregating shard states.
Args:
mapreduce_state: current mapreduce state as MapreduceState.
shard_states: all shard states (active and inactive). list of ShardState.
"""
processed_counts = []
mapreduce_state.counters_map.clear()
for shard_state in shard_states:
mapreduce_state.counters_map.add_map(shard_state.counters_map)
processed_counts.append(shard_state.counters_map.get(
context.COUNTER_MAPPER_CALLS))
mapreduce_state.set_processed_counts(processed_counts)
def refill_quotas(self,
last_poll_time,
processing_rate,
active_shard_states):
"""Refill quotas for all active shards.
Args:
last_poll_time: Datetime with the last time the job state was updated.
processing_rate: How many items to process per second overall.
active_shard_states: All active shard states, list of ShardState.
"""
if not active_shard_states:
return
quota_manager = quota.QuotaManager(memcache.Client())
current_time = int(self._time())
last_poll_time = time.mktime(last_poll_time.timetuple())
total_quota_refill = processing_rate * max(0, current_time - last_poll_time)
quota_refill = int(math.ceil(
1.0 * total_quota_refill / len(active_shard_states)))
if not quota_refill:
return
# TODO(user): use batch memcache API to refill quota in one API call.
for shard_state in active_shard_states:
quota_manager.put(shard_state.shard_id, quota_refill)
def serial_id(self):
"""Get serial unique identifier of this task from request.
Returns:
serial identifier as int.
"""
return int(self.request.get("serial_id"))
@staticmethod
def get_task_name(mapreduce_spec, serial_id):
"""Compute single controller task name.
Args:
mapreduce_spec: specification of the mapreduce.
serial_id: id of the invocation as int.
Returns:
task name which should be used to process specified shard/slice.
"""
# Prefix the task name with something unique to this framework's
# namespace so we don't conflict with user tasks on the queue.
return "appengine-mrcontrol-%s-%s" % (
mapreduce_spec.mapreduce_id, serial_id)
@staticmethod
def controller_parameters(mapreduce_spec, serial_id):
"""Fill in controller task parameters.
Returned parameters map is to be used as task payload, and it contains
all the data, required by controller to perform its function.
Args:
mapreduce_spec: specification of the mapreduce.
serial_id: id of the invocation as int.
Returns:
string->string map of parameters to be used as task payload.
"""
return {"mapreduce_spec": mapreduce_spec.to_json_str(),
"serial_id": str(serial_id)}
@classmethod
def reschedule(cls, base_path, mapreduce_spec, serial_id, queue_name=None):
"""Schedule new update status callback task.
Args:
base_path: mapreduce handlers url base path as string.
mapreduce_spec: mapreduce specification as MapreduceSpec.
serial_id: id of the invocation as int.
queue_name: The queue to schedule this task on. Will use the current
queue of execution if not supplied.
"""
task_name = ControllerCallbackHandler.get_task_name(
mapreduce_spec, serial_id)
task_params = ControllerCallbackHandler.controller_parameters(
mapreduce_spec, serial_id)
if not queue_name:
queue_name = os.environ.get("HTTP_X_APPENGINE_QUEUENAME", "default")
try:
taskqueue.Task(url=base_path + "/controller_callback",
name=task_name, params=task_params,
countdown=_CONTROLLER_PERIOD_SEC).add(queue_name)
except (taskqueue.TombstonedTaskError, taskqueue.TaskAlreadyExistsError), e:
logging.warning("Task %r with params %r already exists. %s: %s",
task_name, task_params, e.__class__, e)
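# --- Editor's sketch: quota refill arithmetic (illustrative only) -----------
# refill_quotas() above spreads processing_rate items/sec across the active
# shards.  The helper mirrors that computation; the default numbers are
# assumptions chosen purely for illustration (100 items/sec, 8 shards, 2s
# since the last poll => ceil(200 / 8) = 25 items added to each shard quota).
def _example_refill_per_shard(processing_rate=100, seconds_since_poll=2,
                              active_shards=8):
  total_quota_refill = processing_rate * max(0, seconds_since_poll)
  return int(math.ceil(1.0 * total_quota_refill / active_shards))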
class KickOffJobHandler(base_handler.BaseHandler):
"""Taskqueue handler which kicks off a mapreduce processing.
Request Parameters:
mapreduce_spec: MapreduceSpec of the mapreduce serialized to json.
input_readers: JSON-encoded list of serialized InputReader objects.
"""
def post(self):
"""Handles kick off request."""
spec = model.MapreduceSpec.from_json_str(
self._get_required_param("mapreduce_spec"))
input_readers_json = simplejson.loads(
self._get_required_param("input_readers"))
queue_name = os.environ.get("HTTP_X_APPENGINE_QUEUENAME", "default")
mapper_input_reader_class = spec.mapper.input_reader_class()
input_readers = [mapper_input_reader_class.from_json_str(reader_json)
for reader_json in input_readers_json]
KickOffJobHandler._schedule_shards(
spec, input_readers, queue_name, self.base_path())
ControllerCallbackHandler.reschedule(
self.base_path(), spec, queue_name=queue_name, serial_id=0)
def _get_required_param(self, param_name):
"""Get a required request parameter.
Args:
param_name: name of request parameter to fetch.
Returns:
parameter value
Raises:
NotEnoughArgumentsError: if parameter is not specified.
"""
value = self.request.get(param_name)
if not value:
raise NotEnoughArgumentsError(param_name + " not specified")
return value
@classmethod
def _schedule_shards(cls, spec, input_readers, queue_name, base_path):
"""Prepares shard states and schedules their execution.
Args:
spec: mapreduce specification as MapreduceSpec.
input_readers: list of InputReaders describing shard splits.
queue_name: The queue to run this job on.
base_path: The base url path of mapreduce callbacks.
"""
# Note: it's safe to re-attempt this handler because:
# - shard state has deterministic and unique key.
# - schedule_slice will fall back gracefully if a task already exists.
shard_states = []
for shard_number, input_reader in enumerate(input_readers):
shard = model.ShardState.create_new(spec.mapreduce_id, shard_number)
shard.shard_description = str(input_reader)
shard_states.append(shard)
# Retrieve already-existing shards.
existing_shard_states = db.get(shard.key() for shard in shard_states)
existing_shard_keys = set(shard.key() for shard in existing_shard_states
if shard is not None)
# Put only the shards that do not already exist.
db.put(shard for shard in shard_states
if shard.key() not in existing_shard_keys)
for shard_number, input_reader in enumerate(input_readers):
shard_id = model.ShardState.shard_id_from_number(
spec.mapreduce_id, shard_number)
MapperWorkerCallbackHandler.schedule_slice(
base_path, spec, shard_id, 0, input_reader, queue_name=queue_name)
class StartJobHandler(base_handler.JsonHandler):
"""Command handler starts a mapreduce job."""
def handle(self):
"""Handles start request."""
# Mapper spec as form arguments.
mapreduce_name = self._get_required_param("name")
mapper_input_reader_spec = self._get_required_param("mapper_input_reader")
mapper_handler_spec = self._get_required_param("mapper_handler")
mapper_params = self._get_params(
"mapper_params_validator", "mapper_params.")
params = self._get_params(
"params_validator", "params.")
# Set some mapper param defaults if not present.
mapper_params["processing_rate"] = int(mapper_params.get(
"processing_rate") or model._DEFAULT_PROCESSING_RATE_PER_SEC)
queue_name = mapper_params["queue_name"] = mapper_params.get(
"queue_name", "default")
# Validate the Mapper spec, handler, and input reader.
mapper_spec = model.MapperSpec(
mapper_handler_spec,
mapper_input_reader_spec,
mapper_params,
int(mapper_params.get("shard_count", model._DEFAULT_SHARD_COUNT)))
mapreduce_id = type(self)._start_map(
mapreduce_name,
mapper_spec,
params,
base_path=self.base_path(),
queue_name=queue_name,
_app=mapper_params.get("_app"))
self.json_response["mapreduce_id"] = mapreduce_id
def _get_params(self, validator_parameter, name_prefix):
"""Retrieves additional user-supplied params for the job and validates them.
Args:
validator_parameter: name of the request parameter which supplies
validator for this parameter set.
name_prefix: common prefix for all parameter names in the request.
Raises:
Any exception raised by the 'params_validator' request parameter if
the params fail to validate.
"""
params_validator = self.request.get(validator_parameter)
user_params = {}
for key in self.request.arguments():
if key.startswith(name_prefix):
values = self.request.get_all(key)
adjusted_key = key[len(name_prefix):]
if len(values) == 1:
user_params[adjusted_key] = values[0]
else:
user_params[adjusted_key] = values
if params_validator:
resolved_validator = util.for_name(params_validator)
resolved_validator(user_params)
return user_params
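# Example (hypothetical request values, for illustration only): a POST carrying
#   mapper_params.shard_count=16&mapper_params.queue_name=crawl-queue
# processed with name_prefix="mapper_params." yields
#   {"shard_count": "16", "queue_name": "crawl-queue"}
# and keys that appear more than once are collected into a list of values.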
def _get_required_param(self, param_name):
"""Get a required request parameter.
Args:
param_name: name of request parameter to fetch.
Returns:
parameter value
Raises:
NotEnoughArgumentsError: if parameter is not specified.
"""
value = self.request.get(param_name)
if not value:
raise NotEnoughArgumentsError(param_name + " not specified")
return value
@classmethod
def _start_map(cls, name, mapper_spec,
mapreduce_params,
base_path="/mapreduce",
queue_name="default",
eta=None,
countdown=None,
_app=None):
# Check that handler can be instantiated.
mapper_spec.get_handler()
mapper_input_reader_class = mapper_spec.input_reader_class()
mapper_input_readers = mapper_input_reader_class.split_input(mapper_spec)
if not mapper_input_readers:
raise NoDataError("Found no mapper input readers to process.")
mapper_spec.shard_count = len(mapper_input_readers)
state = model.MapreduceState.create_new()
mapreduce_spec = model.MapreduceSpec(
name,
state.key().id_or_name(),
mapper_spec.to_json(),
mapreduce_params)
state.mapreduce_spec = mapreduce_spec
state.active = True
state.active_shards = mapper_spec.shard_count
if _app:
state.app_id = _app
# TODO(user): Initialize UI fields correctly.
state.char_url = ""
state.sparkline_url = ""
def schedule_mapreduce(state, mapper_input_readers, eta, countdown):
state.put()
readers_json = [reader.to_json_str() for reader in mapper_input_readers]
taskqueue.Task(
url=base_path + "/kickoffjob_callback",
params={"mapreduce_spec": state.mapreduce_spec.to_json_str(),
"input_readers": simplejson.dumps(readers_json)},
eta=eta, countdown=countdown).add(queue_name, transactional=True)
# Point of no return: We're actually going to run this job!
db.run_in_transaction(
schedule_mapreduce, state, mapper_input_readers, eta, countdown)
return state.key().id_or_name()
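# Minimal usage sketch (hypothetical handler and reader names, not part of this
# module):
#   mapreduce_id = StartJobHandler._start_map(
#       "word count",
#       model.MapperSpec("main.word_count_map",
#                        "my_readers.LineInputReader",
#                        {"file": "corpus.txt"},
#                        8),
#       {})
# The kickoff task is enqueued in the same transaction that saves the
# MapreduceState, so the job either starts fully or leaves no state behind.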
class CleanUpJobHandler(base_handler.JsonHandler):
"""Command to kick off tasks to clean up a job's data."""
def handle(self):
# TODO(user): Have this kick off a task to clean up all MapreduceState,
# ShardState, and MapreduceControl entities for a job ID.
self.json_response["status"] = "This does nothing yet."
class AbortJobHandler(base_handler.JsonHandler):
"""Command to abort a running job."""
def handle(self):
model.MapreduceControl.abort(self.request.get("mapreduce_id"))
self.json_response["status"] = "Abort signal sent."
| bsd-3-clause | 8,927,027,168,797,040,000 | 34.232824 | 80 | 0.654245 | false | 4.00825 | false | false | false |
slozier/ironpython2 | Tests/test_bytes.py | 2 | 65335 | # Licensed to the .NET Foundation under one or more agreements.
# The .NET Foundation licenses this file to you under the Apache 2.0 License.
# See the LICENSE file in the project root for more information.
import sys
import unittest
from iptest import IronPythonTestCase, ip_supported_encodings, is_cli, is_mono, is_osx, run_test
types = [bytearray, bytes]
class IndexableOC:
def __init__(self, value):
self.value = value
def __index__(self):
return self.value
class Indexable(object):
def __init__(self, value):
self.value = value
def __index__(self):
return self.value
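# The two helper classes above exercise the __index__ protocol from an
# old-style class (IndexableOC) and a new-style class (Indexable): bytearray
# item assignment accepts any object whose __index__ returns an int in
# range(256), as the bytearray tests below verify.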
class BytesTest(IronPythonTestCase):
def test_capitalize(self):
tests = [(b'foo', b'Foo'),
(b' foo', b' foo'),
(b'fOO', b'Foo'),
(b' fOO BAR', b' foo bar'),
(b'fOO BAR', b'Foo bar'),
]
for testType in types:
for data, result in tests:
self.assertEqual(testType(data).capitalize(), result)
y = b''
x = y.capitalize()
self.assertEqual(id(x), id(y))
y = bytearray(b'')
x = y.capitalize()
self.assertTrue(id(x) != id(y), "bytearray.capitalize returned self")
def test_center(self):
for testType in types:
self.assertEqual(testType(b'aa').center(4), b' aa ')
self.assertEqual(testType(b'aa').center(4, b'*'), b'*aa*')
self.assertEqual(testType(b'aa').center(4, '*'), b'*aa*')
self.assertEqual(testType(b'aa').center(2), b'aa')
self.assertEqual(testType(b'aa').center(2, '*'), b'aa')
self.assertEqual(testType(b'aa').center(2, b'*'), b'aa')
self.assertRaises(TypeError, testType(b'abc').center, 3, [2, ])
x = b'aa'
self.assertEqual(id(x.center(2, '*')), id(x))
self.assertEqual(id(x.center(2, b'*')), id(x))
x = bytearray(b'aa')
self.assertTrue(id(x.center(2, '*')) != id(x))
self.assertTrue(id(x.center(2, b'*')) != id(x))
def test_count(self):
for testType in types:
self.assertEqual(testType(b"adadad").count(b"d"), 3)
self.assertEqual(testType(b"adbaddads").count(b"ad"), 3)
self.assertEqual(testType(b"adbaddads").count(b"ad", 1, 8), 2)
self.assertEqual(testType(b"adbaddads").count(b"ad", -1, -1), 0)
self.assertEqual(testType(b"adbaddads").count(b"ad", 0, -1), 3)
self.assertEqual(testType(b"adbaddads").count(b"", 0, -1), 9)
self.assertEqual(testType(b"adbaddads").count(b"", 27), 0)
self.assertRaises(TypeError, testType(b"adbaddads").count, [2,])
self.assertRaises(TypeError, testType(b"adbaddads").count, [2,], 0)
self.assertRaises(TypeError, testType(b"adbaddads").count, [2,], 0, 1)
def test_decode(self):
for testType in types:
self.assertEqual(testType(b'\xff\xfea\x00b\x00c\x00').decode('utf-16'), 'abc')
def test_endswith(self):
for testType in types:
self.assertRaises(TypeError, testType(b'abcdef').endswith, ([], ))
self.assertRaises(TypeError, testType(b'abcdef').endswith, [])
self.assertRaises(TypeError, testType(b'abcdef').endswith, [], 0)
self.assertRaises(TypeError, testType(b'abcdef').endswith, [], 0, 1)
self.assertEqual(testType(b'abcdef').endswith(b'def'), True)
self.assertEqual(testType(b'abcdef').endswith(b'def', -1, -2), False)
self.assertEqual(testType(b'abcdef').endswith(b'def', 0, 42), True)
self.assertEqual(testType(b'abcdef').endswith(b'def', 0, -7), False)
self.assertEqual(testType(b'abcdef').endswith(b'def', 42, -7), False)
self.assertEqual(testType(b'abcdef').endswith(b'def', 42), False)
self.assertEqual(testType(b'abcdef').endswith(b'bar'), False)
self.assertEqual(testType(b'abcdef').endswith((b'def', )), True)
self.assertEqual(testType(b'abcdef').endswith((b'baz', )), False)
self.assertEqual(testType(b'abcdef').endswith((b'baz', ), 0, 42), False)
self.assertEqual(testType(b'abcdef').endswith((b'baz', ), 0, -42), False)
for x in (0, 1, 2, 3, -10, -3, -4):
self.assertEqual(testType(b"abcdef").endswith(b"def", x), True)
self.assertEqual(testType(b"abcdef").endswith(b"de", x, 5), True)
self.assertEqual(testType(b"abcdef").endswith(b"de", x, -1), True)
self.assertEqual(testType(b"abcdef").endswith((b"def", ), x), True)
self.assertEqual(testType(b"abcdef").endswith((b"de", ), x, 5), True)
self.assertEqual(testType(b"abcdef").endswith((b"de", ), x, -1), True)
for x in (4, 5, 6, 10, -1, -2):
self.assertEqual(testType(b"abcdef").endswith((b"def", ), x), False)
self.assertEqual(testType(b"abcdef").endswith((b"de", ), x, 5), False)
self.assertEqual(testType(b"abcdef").endswith((b"de", ), x, -1), False)
def test_expandtabs(self):
for testType in types:
self.assertTrue(testType(b"\ttext\t").expandtabs(0) == b"text")
self.assertTrue(testType(b"\ttext\t").expandtabs(-10) == b"text")
self.assertEqual(testType(b"\r\ntext\t").expandtabs(-10), b"\r\ntext")
self.assertEqual(len(testType(b"aaa\taaa\taaa").expandtabs()), 19)
self.assertEqual(testType(b"aaa\taaa\taaa").expandtabs(), b"aaa aaa aaa")
self.assertRaises(OverflowError, bytearray(b'\t\t').expandtabs, sys.maxint)
def test_extend(self):
b = bytearray(b'abc')
b.extend(b'def')
self.assertEqual(b, b'abcdef')
b.extend(bytearray(b'ghi'))
self.assertEqual(b, b'abcdefghi')
b = bytearray(b'abc')
b.extend([2,3,4])
self.assertEqual(b, b'abc' + b'\x02\x03\x04')
b = bytearray(b'abc')
b.extend(memoryview(b"def"))
self.assertEqual(b, b'abcdef')
def test_find(self):
for testType in types:
self.assertEqual(testType(b"abcdbcda").find(b"cd", 1), 2)
self.assertEqual(testType(b"abcdbcda").find(b"cd", 3), 5)
self.assertEqual(testType(b"abcdbcda").find(b"cd", 7), -1)
self.assertEqual(testType(b'abc').find(b'abc', -1, 1), -1)
self.assertEqual(testType(b'abc').find(b'abc', 25), -1)
self.assertEqual(testType(b'abc').find(b'add', 0, 3), -1)
if testType == bytes:
self.assertEqual(testType(b'abc').find(b'add', 0, None), -1)
self.assertEqual(testType(b'abc').find(b'add', None, None), -1)
self.assertEqual(testType(b'abc').find(b'', None, 0), 0)
self.assertEqual(testType(b'x').find(b'x', None, 0), -1)
self.assertEqual(testType(b'abc').find(b'', 0, 0), 0)
self.assertEqual(testType(b'abc').find(b'', 0, 1), 0)
self.assertEqual(testType(b'abc').find(b'', 0, 2), 0)
self.assertEqual(testType(b'abc').find(b'', 0, 3), 0)
self.assertEqual(testType(b'abc').find(b'', 0, 4), 0)
self.assertEqual(testType(b'').find(b'', 0, 4), 0)
self.assertEqual(testType(b'x').find(b'x', 0, 0), -1)
self.assertEqual(testType(b'x').find(b'x', 3, 0), -1)
self.assertEqual(testType(b'x').find(b'', 3, 0), -1)
self.assertRaises(TypeError, testType(b'x').find, [1])
self.assertRaises(TypeError, testType(b'x').find, [1], 0)
self.assertRaises(TypeError, testType(b'x').find, [1], 0, 1)
def test_fromhex(self):
for testType in types:
if testType != str:
self.assertRaises(ValueError, testType.fromhex, u'0')
self.assertRaises(ValueError, testType.fromhex, u'A')
self.assertRaises(ValueError, testType.fromhex, u'a')
self.assertRaises(ValueError, testType.fromhex, u'aG')
self.assertRaises(ValueError, testType.fromhex, u'Ga')
self.assertEqual(testType.fromhex(u'00'), b'\x00')
self.assertEqual(testType.fromhex(u'00 '), b'\x00')
self.assertEqual(testType.fromhex(u'00 '), b'\x00')
self.assertEqual(testType.fromhex(u'00 01'), b'\x00\x01')
self.assertEqual(testType.fromhex(u'00 01 0a'), b'\x00\x01\x0a')
self.assertEqual(testType.fromhex(u'00 01 0a 0B'), b'\x00\x01\x0a\x0B')
self.assertEqual(testType.fromhex(u'00 a1 Aa 0B'), b'\x00\xA1\xAa\x0B')
def test_index(self):
for testType in types:
self.assertRaises(TypeError, testType(b'abc').index, 257)
self.assertEqual(testType(b'abc').index(b'a'), 0)
self.assertEqual(testType(b'abc').index(b'a', 0, -1), 0)
self.assertRaises(ValueError, testType(b'abc').index, b'c', 0, -1)
self.assertRaises(ValueError, testType(b'abc').index, b'a', -1)
self.assertEqual(testType(b'abc').index(b'ab'), 0)
self.assertEqual(testType(b'abc').index(b'bc'), 1)
self.assertRaises(ValueError, testType(b'abc').index, b'abcd')
self.assertRaises(ValueError, testType(b'abc').index, b'e')
self.assertRaises(TypeError, testType(b'x').index, [1])
self.assertRaises(TypeError, testType(b'x').index, [1], 0)
self.assertRaises(TypeError, testType(b'x').index, [1], 0, 1)
def test_insert(self):
b = bytearray(b'abc')
b.insert(0, ord('d'))
self.assertEqual(b, b'dabc')
b.insert(1000, ord('d'))
self.assertEqual(b, b'dabcd')
b.insert(-1, ord('d'))
self.assertEqual(b, b'dabcdd')
self.assertRaises(ValueError, b.insert, 0, 256)
def check_is_method(self, methodName, result):
for testType in types:
self.assertEqual(getattr(testType(b''), methodName)(), False)
for i in xrange(256):
data = bytearray()
data.append(i)
self.assertTrue(getattr(testType(data), methodName)() == result(i), chr(i) + " (" + str(i) + ") should be " + str(result(i)))
def test_isalnum(self):
self.check_is_method('isalnum', lambda i : i >= ord('a') and i <= ord('z') or i >= ord('A') and i <= ord('Z') or i >= ord('0') and i <= ord('9'))
def test_isalpha(self):
self.check_is_method('isalpha', lambda i : i >= ord('a') and i <= ord('z') or i >= ord('A') and i <= ord('Z'))
def test_isdigit(self):
self.check_is_method('isdigit', lambda i : (i >= ord('0') and i <= ord('9')))
def test_islower(self):
self.check_is_method('islower', lambda i : i >= ord('a') and i <= ord('z'))
for testType in types:
for i in xrange(256):
if not chr(i).isupper():
self.assertEqual((testType(b'a') + testType([i])).islower(), True)
def test_isspace(self):
self.check_is_method('isspace', lambda i : i in [ord(' '), ord('\t'), ord('\f'), ord('\n'), ord('\r'), 11])
for testType in types:
for i in xrange(256):
if not chr(i).islower():
self.assertEqual((testType(b'A') + testType([i])).isupper(), True)
def test_istitle(self):
for testType in types:
self.assertEqual(testType(b'').istitle(), False)
self.assertEqual(testType(b'Foo').istitle(), True)
self.assertEqual(testType(b'Foo Bar').istitle(), True)
self.assertEqual(testType(b'FooBar').istitle(), False)
self.assertEqual(testType(b'foo').istitle(), False)
def test_isupper(self):
self.check_is_method('isupper', lambda i : i >= ord('A') and i <= ord('Z'))
def test_join(self):
x = b''
self.assertEqual(id(x.join(b'')), id(x))
x = bytearray(x)
self.assertTrue(id(x.join(b'')) != id(x))
x = b'abc'
self.assertEqual(id(b'foo'.join([x])), id(x))
self.assertRaises(TypeError, b'foo'.join, [42])
x = bytearray(b'foo')
self.assertTrue(id(bytearray(b'foo').join([x])) != id(x), "got back same object on single arg join w/ bytearray")
for testType in types:
self.assertEqual(testType(b'x').join([b'd', b'e', b'f']), b'dxexf')
self.assertEqual(testType(b'x').join([b'd', b'e', b'f']), b'dxexf')
self.assertEqual(type(testType(b'x').join([b'd', b'e', b'f'])), testType)
if str != bytes:
# works in Py3k/Ipy, not in Py2.6
self.assertEqual(b'x'.join([testType(b'd'), testType(b'e'), testType(b'f')]), b'dxexf')
self.assertEqual(bytearray(b'x').join([testType(b'd'), testType(b'e'), testType(b'f')]), b'dxexf')
self.assertEqual(testType(b'').join([]), b'')
self.assertEqual(testType(b'').join((b'abc', )), b'abc')
self.assertEqual(testType(b'').join((b'abc', b'def')), b'abcdef')
self.assertRaises(TypeError, testType(b'').join, (42, ))
def test_ljust(self):
for testType in types:
self.assertRaises(TypeError, testType(b'').ljust, 42, ' ')
self.assertRaises(TypeError, testType(b'').ljust, 42, b' ')
self.assertRaises(TypeError, testType(b'').ljust, 42, u'\u0100')
self.assertEqual(testType(b'abc').ljust(4), b'abc ')
self.assertEqual(testType(b'abc').ljust(4, b'x'), b'abcx')
self.assertEqual(testType(b'abc').ljust(4, 'x'), b'abcx')
x = b'abc'
self.assertEqual(id(x.ljust(2)), id(x))
x = bytearray(x)
self.assertTrue(id(x.ljust(2)) != id(x))
def test_lower(self):
expected = b'\x00\x01\x02\x03\x04\x05\x06\x07\x08\t\n\x0b\x0c\r\x0e\x0f' \
b'\x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f !"#$%' \
b'&\'()*+,-./0123456789:;<=>?@abcdefghijklmnopqrstuvwxyz[\\]^_`' \
b'abcdefghijklmnopqrstuvwxyz{|}~\x7f\x80\x81\x82\x83\x84\x85\x86\x87\x88' \
b'\x89\x8a\x8b\x8c\x8d\x8e\x8f\x90\x91\x92\x93\x94\x95\x96\x97\x98\x99' \
b'\x9a\x9b\x9c\x9d\x9e\x9f\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7\xa8\xa9\xaa' \
b'\xab\xac\xad\xae\xaf\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7\xb8\xb9\xba\xbb' \
b'\xbc\xbd\xbe\xbf\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7\xc8\xc9\xca\xcb\xcc' \
b'\xcd\xce\xcf\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7\xd8\xd9\xda\xdb\xdc\xdd' \
b'\xde\xdf\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7\xe8\xe9\xea\xeb\xec\xed\xee' \
b'\xef\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff'
data = bytearray()
for i in xrange(256):
data.append(i)
for testType in types:
self.assertEqual(testType(data).lower(), expected)
def test_lstrip(self):
for testType in types:
self.assertEqual(testType(b' abc').lstrip(), b'abc')
self.assertEqual(testType(b' abc ').lstrip(), b'abc ')
self.assertEqual(testType(b' ').lstrip(), b'')
x = b'abc'
self.assertEqual(id(x.lstrip()), id(x))
x = bytearray(x)
self.assertTrue(id(x.lstrip()) != id(x))
def test_partition(self):
for testType in types:
self.assertRaises(TypeError, testType(b'').partition, None)
self.assertRaises(ValueError, testType(b'').partition, b'')
self.assertRaises(ValueError, testType(b'').partition, b'')
if testType == bytearray:
self.assertEqual(testType(b'a\x01c').partition([1]), (b'a', b'\x01', b'c'))
else:
self.assertRaises(TypeError, testType(b'a\x01c').partition, [1])
self.assertEqual(testType(b'abc').partition(b'b'), (b'a', b'b', b'c'))
self.assertEqual(testType(b'abc').partition(b'd'), (b'abc', b'', b''))
x = testType(b'abc')
one, two, three = x.partition(b'd')
if testType == bytearray:
self.assertTrue(id(one) != id(x))
else:
self.assertEqual(id(one), id(x))
one, two, three = b''.partition(b'abc')
self.assertEqual(id(one), id(two))
self.assertEqual(id(two), id(three))
one, two, three = bytearray().partition(b'abc')
self.assertTrue(id(one) != id(two))
self.assertTrue(id(two) != id(three))
self.assertTrue(id(three) != id(one))
def test_pop(self):
b = bytearray()
self.assertRaises(IndexError, b.pop)
self.assertRaises(IndexError, b.pop, 0)
b = bytearray(b'abc')
self.assertEqual(b.pop(), ord('c'))
self.assertEqual(b, b'ab')
b = bytearray(b'abc')
b.pop(1)
self.assertEqual(b, b'ac')
b = bytearray(b'abc')
b.pop(-1)
self.assertEqual(b, b'ab')
def test_replace(self):
for testType in types:
self.assertRaises(TypeError, testType(b'abc').replace, None, b'abc')
self.assertRaises(TypeError, testType(b'abc').replace, b'abc', None)
self.assertRaises(TypeError, testType(b'abc').replace, None, b'abc', 1)
self.assertRaises(TypeError, testType(b'abc').replace, b'abc', None, 1)
self.assertRaises(TypeError, testType(b'abc').replace, [1], b'abc')
self.assertRaises(TypeError, testType(b'abc').replace, b'abc', [1])
self.assertRaises(TypeError, testType(b'abc').replace, [1], b'abc', 1)
self.assertRaises(TypeError, testType(b'abc').replace, b'abc', [1], 1)
self.assertEqual(testType(b'abc').replace(b'b', b'foo'), b'afooc')
self.assertEqual(testType(b'abc').replace(b'b', b''), b'ac')
self.assertEqual(testType(b'abcb').replace(b'b', b'foo', 1), b'afoocb')
self.assertEqual(testType(b'abcb').replace(b'b', b'foo', 2), b'afoocfoo')
self.assertEqual(testType(b'abcb').replace(b'b', b'foo', 3), b'afoocfoo')
self.assertEqual(testType(b'abcb').replace(b'b', b'foo', -1), b'afoocfoo')
self.assertEqual(testType(b'abcb').replace(b'', b'foo', 100), b'fooafoobfoocfoobfoo')
self.assertEqual(testType(b'abcb').replace(b'', b'foo', 0), b'abcb')
self.assertEqual(testType(b'abcb').replace(b'', b'foo', 1), b'fooabcb')
self.assertEqual(testType(b'ooooooo').replace(b'o', b'u'), b'uuuuuuu')
x = b'abc'
self.assertEqual(id(x.replace(b'foo', b'bar', 0)), id(x))
if is_cli:
# CPython bug in 2.6 - http://bugs.python.org/issue4348
x = bytearray(b'abc')
self.assertTrue(id(x.replace(b'foo', b'bar', 0)) != id(x))
def test_remove(self):
for toremove in (ord('a'), b'a', Indexable(ord('a')), IndexableOC(ord('a'))):
b = bytearray(b'abc')
b.remove(ord('a'))
self.assertEqual(b, b'bc')
self.assertRaises(ValueError, b.remove, ord('x'))
b = bytearray(b'abc')
self.assertRaises(TypeError, b.remove, bytearray(b'a'))
def test_reverse(self):
b = bytearray(b'abc')
b.reverse()
self.assertEqual(b, b'cba')
# CoreCLR bug xxxx found in build 30324 from silverlight_w2
def test_rfind(self):
for testType in types:
self.assertEqual(testType(b"abcdbcda").rfind(b"cd", 1), 5)
self.assertEqual(testType(b"abcdbcda").rfind(b"cd", 3), 5)
self.assertEqual(testType(b"abcdbcda").rfind(b"cd", 7), -1)
self.assertEqual(testType(b"abcdbcda").rfind(b"cd", -1, -2), -1)
self.assertEqual(testType(b"abc").rfind(b"add", 3, 0), -1)
self.assertEqual(testType(b'abc').rfind(b'bd'), -1)
self.assertRaises(TypeError, testType(b'abc').rfind, [1])
self.assertRaises(TypeError, testType(b'abc').rfind, [1], 1)
self.assertRaises(TypeError, testType(b'abc').rfind, [1], 1, 2)
if testType == bytes:
self.assertEqual(testType(b"abc").rfind(b"add", None, 0), -1)
self.assertEqual(testType(b"abc").rfind(b"add", 3, None), -1)
self.assertEqual(testType(b"abc").rfind(b"add", None, None), -1)
self.assertEqual(testType(b'abc').rfind(b'', 0, 0), 0)
self.assertEqual(testType(b'abc').rfind(b'', 0, 1), 1)
self.assertEqual(testType(b'abc').rfind(b'', 0, 2), 2)
self.assertEqual(testType(b'abc').rfind(b'', 0, 3), 3)
self.assertEqual(testType(b'abc').rfind(b'', 0, 4), 3)
self.assertEqual(testType(b'x').rfind(b'x', 0, 0), -1)
self.assertEqual(testType(b'x').rfind(b'x', 3, 0), -1)
self.assertEqual(testType(b'x').rfind(b'', 3, 0), -1)
def test_rindex(self):
for testType in types:
self.assertRaises(TypeError, testType(b'abc').rindex, 257)
self.assertEqual(testType(b'abc').rindex(b'a'), 0)
self.assertEqual(testType(b'abc').rindex(b'a', 0, -1), 0)
self.assertRaises(TypeError, testType(b'abc').rindex, [1])
self.assertRaises(TypeError, testType(b'abc').rindex, [1], 1)
self.assertRaises(TypeError, testType(b'abc').rindex, [1], 1, 2)
self.assertRaises(ValueError, testType(b'abc').rindex, b'c', 0, -1)
self.assertRaises(ValueError, testType(b'abc').rindex, b'a', -1)
def test_rjust(self):
for testType in types:
self.assertRaises(TypeError, testType(b'').rjust, 42, ' ')
self.assertRaises(TypeError, testType(b'').rjust, 42, b' ')
self.assertRaises(TypeError, testType(b'').rjust, 42, u'\u0100')
self.assertRaises(TypeError, testType(b'').rjust, 42, [2])
self.assertEqual(testType(b'abc').rjust(4), b' abc')
self.assertEqual(testType(b'abc').rjust(4, b'x'), b'xabc')
self.assertEqual(testType(b'abc').rjust(4, 'x'), b'xabc')
x = b'abc'
self.assertEqual(id(x.rjust(2)), id(x))
x = bytearray(x)
self.assertTrue(id(x.rjust(2)) != id(x))
def test_rpartition(self):
for testType in types:
self.assertRaises(TypeError, testType(b'').rpartition, None)
self.assertRaises(ValueError, testType(b'').rpartition, b'')
if testType == bytearray:
self.assertEqual(testType(b'a\x01c').rpartition([1]), (b'a', b'\x01', b'c'))
else:
self.assertRaises(TypeError, testType(b'a\x01c').rpartition, [1])
self.assertEqual(testType(b'abc').rpartition(b'b'), (b'a', b'b', b'c'))
self.assertEqual(testType(b'abc').rpartition(b'd'), (b'', b'', b'abc'))
x = testType(b'abc')
one, two, three = x.rpartition(b'd')
if testType == bytearray:
self.assertTrue(id(three) != id(x))
else:
self.assertEqual(id(three), id(x))
b = testType(b'mississippi')
self.assertEqual(b.rpartition(b'i'), (b'mississipp', b'i', b''))
self.assertEqual(type(b.rpartition(b'i')[0]), testType)
self.assertEqual(type(b.rpartition(b'i')[1]), testType)
self.assertEqual(type(b.rpartition(b'i')[2]), testType)
b = testType(b'abcdefgh')
self.assertEqual(b.rpartition(b'a'), (b'', b'a', b'bcdefgh'))
one, two, three = b''.rpartition(b'abc')
self.assertEqual(id(one), id(two))
self.assertEqual(id(two), id(three))
one, two, three = bytearray().rpartition(b'abc')
self.assertTrue(id(one) != id(two))
self.assertTrue(id(two) != id(three))
self.assertTrue(id(three) != id(one))
def test_rsplit(self):
for testType in types:
x=testType(b"Hello Worllds")
self.assertEqual(x.rsplit(), [b'Hello', b'Worllds'])
s = x.rsplit(b"ll")
self.assertTrue(s[0] == b"He")
self.assertTrue(s[1] == b"o Wor")
self.assertTrue(s[2] == b"ds")
self.assertTrue(testType(b"1--2--3--4--5--6--7--8--9--0").rsplit(b"--", 2) == [b'1--2--3--4--5--6--7--8', b'9', b'0'])
for temp_string in [b"", b" ", b" ", b"\t", b" \t", b"\t ", b"\t\t", b"\n", b"\n\n", b"\n \n"]:
self.assertEqual(temp_string.rsplit(None), [])
self.assertEqual(testType(b"ab").rsplit(None), [b"ab"])
self.assertEqual(testType(b"a b").rsplit(None), [b"a", b"b"])
self.assertRaises(TypeError, testType(b'').rsplit, [2])
self.assertRaises(TypeError, testType(b'').rsplit, [2], 2)
def test_rstrip(self):
for testType in types:
self.assertEqual(testType(b'abc ').rstrip(), b'abc')
self.assertEqual(testType(b' abc ').rstrip(), b' abc')
self.assertEqual(testType(b' ').rstrip(), b'')
self.assertEqual(testType(b'abcx').rstrip(b'x'), b'abc')
self.assertEqual(testType(b'xabc').rstrip(b'x'), b'xabc')
self.assertEqual(testType(b'x').rstrip(b'x'), b'')
self.assertRaises(TypeError, testType(b'').rstrip, [2])
x = b'abc'
self.assertEqual(id(x.rstrip()), id(x))
x = bytearray(x)
self.assertTrue(id(x.rstrip()) != id(x))
def test_split(self):
for testType in types:
x=testType(b"Hello Worllds")
self.assertRaises(ValueError, x.split, b'')
self.assertEqual(x.split(None, 0), [b'Hello Worllds'])
self.assertEqual(x.split(None, -1), [b'Hello', b'Worllds'])
self.assertEqual(x.split(None, 2), [b'Hello', b'Worllds'])
self.assertEqual(x.split(), [b'Hello', b'Worllds'])
self.assertEqual(testType(b'abc').split(b'c'), [b'ab', b''])
self.assertEqual(testType(b'abcd').split(b'c'), [b'ab', b'd'])
self.assertEqual(testType(b'abccdef').split(b'c'), [b'ab', b'', b'def'])
s = x.split(b"ll")
self.assertTrue(s[0] == b"He")
self.assertTrue(s[1] == b"o Wor")
self.assertTrue(s[2] == b"ds")
self.assertTrue(testType(b"1,2,3,4,5,6,7,8,9,0").split(b",") == [b'1',b'2',b'3',b'4',b'5',b'6',b'7',b'8',b'9',b'0'])
self.assertTrue(testType(b"1,2,3,4,5,6,7,8,9,0").split(b",", -1) == [b'1',b'2',b'3',b'4',b'5',b'6',b'7',b'8',b'9',b'0'])
self.assertTrue(testType(b"1,2,3,4,5,6,7,8,9,0").split(b",", 2) == [b'1',b'2',b'3,4,5,6,7,8,9,0'])
self.assertTrue(testType(b"1--2--3--4--5--6--7--8--9--0").split(b"--") == [b'1',b'2',b'3',b'4',b'5',b'6',b'7',b'8',b'9',b'0'])
self.assertTrue(testType(b"1--2--3--4--5--6--7--8--9--0").split(b"--", -1) == [b'1',b'2',b'3',b'4',b'5',b'6',b'7',b'8',b'9',b'0'])
self.assertTrue(testType(b"1--2--3--4--5--6--7--8--9--0").split(b"--", 2) == [b'1', b'2', b'3--4--5--6--7--8--9--0'])
self.assertEqual(testType(b"").split(None), [])
self.assertEqual(testType(b"ab").split(None), [b"ab"])
self.assertEqual(testType(b"a b").split(None), [b"a", b"b"])
self.assertEqual(bytearray(b' a bb c ').split(None, 1), [bytearray(b'a'), bytearray(b'bb c ')])
self.assertEqual(testType(b' ').split(), [])
self.assertRaises(TypeError, testType(b'').split, [2])
self.assertRaises(TypeError, testType(b'').split, [2], 2)
def test_splitlines(self):
for testType in types:
self.assertEqual(testType(b'foo\nbar\n').splitlines(), [b'foo', b'bar'])
self.assertEqual(testType(b'foo\nbar\n').splitlines(True), [b'foo\n', b'bar\n'])
self.assertEqual(testType(b'foo\r\nbar\r\n').splitlines(True), [b'foo\r\n', b'bar\r\n'])
self.assertEqual(testType(b'foo\r\nbar\r\n').splitlines(), [b'foo', b'bar'])
self.assertEqual(testType(b'foo\rbar\r').splitlines(True), [b'foo\r', b'bar\r'])
self.assertEqual(testType(b'foo\nbar\nbaz').splitlines(), [b'foo', b'bar', b'baz'])
self.assertEqual(testType(b'foo\nbar\nbaz').splitlines(True), [b'foo\n', b'bar\n', b'baz'])
self.assertEqual(testType(b'foo\r\nbar\r\nbaz').splitlines(True), [b'foo\r\n', b'bar\r\n', b'baz'])
self.assertEqual(testType(b'foo\rbar\rbaz').splitlines(True), [b'foo\r', b'bar\r', b'baz'])
def test_startswith(self):
for testType in types:
self.assertRaises(TypeError, testType(b'abcdef').startswith, [])
self.assertRaises(TypeError, testType(b'abcdef').startswith, [], 0)
self.assertRaises(TypeError, testType(b'abcdef').startswith, [], 0, 1)
self.assertEqual(testType(b"abcde").startswith(b'c', 2, 6), True)
self.assertEqual(testType(b"abc").startswith(b'c', 4, 6), False)
self.assertEqual(testType(b"abcde").startswith(b'cde', 2, 9), True)
self.assertEqual(testType(b'abc').startswith(b'abcd', 4), False)
self.assertEqual(testType(b'abc').startswith(b'abc', -3), True)
self.assertEqual(testType(b'abc').startswith(b'abc', -10), True)
self.assertEqual(testType(b'abc').startswith(b'abc', -3, 0), False)
self.assertEqual(testType(b'abc').startswith(b'abc', -10, 0), False)
self.assertEqual(testType(b'abc').startswith(b'abc', -10, -10), False)
self.assertEqual(testType(b'abc').startswith(b'ab', 0, -1), True)
self.assertEqual(testType(b'abc').startswith((b'abc', ), -10), True)
self.assertEqual(testType(b'abc').startswith((b'abc', ), 10), False)
self.assertEqual(testType(b'abc').startswith((b'abc', ), -10, 0), False)
self.assertEqual(testType(b'abc').startswith((b'abc', ), 10, 0), False)
self.assertEqual(testType(b'abc').startswith((b'abc', ), 1, -10), False)
self.assertEqual(testType(b'abc').startswith((b'abc', ), 1, -1), False)
self.assertEqual(testType(b'abc').startswith((b'abc', ), -1, -2), False)
self.assertEqual(testType(b'abc').startswith((b'abc', b'def')), True)
self.assertEqual(testType(b'abc').startswith((b'qrt', b'def')), False)
self.assertEqual(testType(b'abc').startswith((b'abc', b'def'), -3), True)
self.assertEqual(testType(b'abc').startswith((b'qrt', b'def'), -3), False)
self.assertEqual(testType(b'abc').startswith((b'abc', b'def'), 0), True)
self.assertEqual(testType(b'abc').startswith((b'qrt', b'def'), 0), False)
self.assertEqual(testType(b'abc').startswith((b'abc', b'def'), -3, 3), True)
self.assertEqual(testType(b'abc').startswith((b'qrt', b'def'), -3, 3), False)
self.assertEqual(testType(b'abc').startswith((b'abc', b'def'), 0, 3), True)
self.assertEqual(testType(b'abc').startswith((b'qrt', b'def'), 0, 3), False)
hw = testType(b"hello world")
self.assertTrue(hw.startswith(b"hello"))
self.assertTrue(not hw.startswith(b"heloo"))
self.assertTrue(hw.startswith(b"llo", 2))
self.assertTrue(not hw.startswith(b"lno", 2))
self.assertTrue(hw.startswith(b"wor", 6, 9))
self.assertTrue(not hw.startswith(b"wor", 6, 7))
self.assertTrue(not hw.startswith(b"wox", 6, 10))
self.assertTrue(not hw.startswith(b"wor", 6, 2))
def test_strip(self):
for testType in types:
self.assertEqual(testType(b'abc ').strip(), b'abc')
self.assertEqual(testType(b' abc').strip(), b'abc')
self.assertEqual(testType(b' abc ').strip(), b'abc')
self.assertEqual(testType(b' ').strip(), b'')
self.assertEqual(testType(b'abcx').strip(b'x'), b'abc')
self.assertEqual(testType(b'xabc').strip(b'x'), b'abc')
self.assertEqual(testType(b'xabcx').strip(b'x'), b'abc')
self.assertEqual(testType(b'x').strip(b'x'), b'')
x = b'abc'
self.assertEqual(id(x.strip()), id(x))
x = bytearray(x)
self.assertTrue(id(x.strip()) != id(x))
def test_swapcase(self):
expected = b'\x00\x01\x02\x03\x04\x05\x06\x07\x08\t\n\x0b\x0c\r\x0e\x0f' \
b'\x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f !"#$%' \
b'&\'()*+,-./0123456789:;<=>?@abcdefghijklmnopqrstuvwxyz[\\]^_`' \
b'ABCDEFGHIJKLMNOPQRSTUVWXYZ{|}~\x7f\x80\x81\x82\x83\x84\x85\x86\x87\x88' \
b'\x89\x8a\x8b\x8c\x8d\x8e\x8f\x90\x91\x92\x93\x94\x95\x96\x97\x98\x99' \
b'\x9a\x9b\x9c\x9d\x9e\x9f\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7\xa8\xa9\xaa' \
b'\xab\xac\xad\xae\xaf\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7\xb8\xb9\xba\xbb' \
b'\xbc\xbd\xbe\xbf\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7\xc8\xc9\xca\xcb\xcc' \
b'\xcd\xce\xcf\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7\xd8\xd9\xda\xdb\xdc\xdd' \
b'\xde\xdf\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7\xe8\xe9\xea\xeb\xec\xed\xee' \
b'\xef\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff'
data = bytearray()
for i in xrange(256):
data.append(i)
for testType in types:
self.assertEqual(testType(b'123').swapcase(), b'123')
b = testType(b'123')
self.assertTrue(id(b.swapcase()) != id(b))
self.assertEqual(testType(b'abc').swapcase(), b'ABC')
self.assertEqual(testType(b'ABC').swapcase(), b'abc')
self.assertEqual(testType(b'ABc').swapcase(), b'abC')
x = testType(data).swapcase()
self.assertEqual(testType(data).swapcase(), expected)
def test_title(self):
for testType in types:
self.assertEqual(testType(b'').title(), b'')
self.assertEqual(testType(b'foo').title(), b'Foo')
self.assertEqual(testType(b'Foo').title(), b'Foo')
self.assertEqual(testType(b'foo bar baz').title(), b'Foo Bar Baz')
for i in xrange(256):
b = bytearray()
b.append(i)
if (b >= b'a' and b <= b'z') or (b >= b'A' and b <= b'Z'):
continue
inp = testType(b.join([b'foo', b'bar', b'baz']))
exp = b.join([b'Foo', b'Bar', b'Baz'])
self.assertEqual(inp.title(), exp)
x = b''
self.assertEqual(id(x.title()), id(x))
x = bytearray(b'')
self.assertTrue(id(x.title()) != id(x))
def test_translate(self):
identTable = bytearray()
for i in xrange(256):
identTable.append(i)
repAtable = bytearray(identTable)
repAtable[ord('A')] = ord('B')
for testType in types:
self.assertRaises(TypeError, testType(b'').translate, {})
self.assertRaises(ValueError, testType(b'foo').translate, b'')
self.assertRaises(ValueError, testType(b'').translate, b'')
self.assertEqual(testType(b'AAA').translate(repAtable), b'BBB')
self.assertEqual(testType(b'AAA').translate(repAtable, b'A'), b'')
self.assertRaises(TypeError, b''.translate, identTable, None)
self.assertEqual(b'AAA'.translate(None, b'A'), b'')
self.assertEqual(b'AAABBB'.translate(None, b'A'), b'BBB')
self.assertEqual(b'AAA'.translate(None), b'AAA')
self.assertEqual(bytearray(b'AAA').translate(None, b'A'),
b'')
self.assertEqual(bytearray(b'AAA').translate(None),
b'AAA')
b = b'abc'
self.assertEqual(id(b.translate(None)), id(b))
b = b''
self.assertEqual(id(b.translate(identTable)), id(b))
b = b''
self.assertEqual(id(b.translate(identTable, b'')), id(b))
b = b''
self.assertEqual(id(b.translate(identTable, b'')), id(b))
if is_cli:
# CPython bug 4348 - http://bugs.python.org/issue4348
b = bytearray(b'')
self.assertTrue(id(b.translate(identTable)) != id(b))
self.assertRaises(TypeError, testType(b'').translate, [])
self.assertRaises(TypeError, testType(b'').translate, [], [])
def test_upper(self):
expected = b'\x00\x01\x02\x03\x04\x05\x06\x07\x08\t\n\x0b\x0c\r\x0e\x0f' \
b'\x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f !"#$%' \
b'&\'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`' \
b'ABCDEFGHIJKLMNOPQRSTUVWXYZ{|}~\x7f\x80\x81\x82\x83\x84\x85\x86\x87\x88' \
b'\x89\x8a\x8b\x8c\x8d\x8e\x8f\x90\x91\x92\x93\x94\x95\x96\x97\x98\x99' \
b'\x9a\x9b\x9c\x9d\x9e\x9f\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7\xa8\xa9\xaa' \
b'\xab\xac\xad\xae\xaf\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7\xb8\xb9\xba\xbb' \
b'\xbc\xbd\xbe\xbf\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7\xc8\xc9\xca\xcb\xcc' \
b'\xcd\xce\xcf\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7\xd8\xd9\xda\xdb\xdc\xdd' \
b'\xde\xdf\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7\xe8\xe9\xea\xeb\xec\xed\xee' \
b'\xef\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff'
data = bytearray()
for i in xrange(256):
data.append(i)
for testType in types:
self.assertEqual(testType(data).upper(), expected)
def test_zfill(self):
for testType in types:
self.assertEqual(testType(b'abc').zfill(0), b'abc')
self.assertEqual(testType(b'abc').zfill(4), b'0abc')
self.assertEqual(testType(b'+abc').zfill(5), b'+0abc')
self.assertEqual(testType(b'-abc').zfill(5), b'-0abc')
self.assertEqual(testType(b'').zfill(2), b'00')
self.assertEqual(testType(b'+').zfill(2), b'+0')
self.assertEqual(testType(b'-').zfill(2), b'-0')
b = b'abc'
self.assertEqual(id(b.zfill(0)), id(b))
b = bytearray(b)
self.assertTrue(id(b.zfill(0)) != id(b))
def test_none(self):
for testType in types:
self.assertRaises(TypeError, testType(b'abc').replace, b"new")
self.assertRaises(TypeError, testType(b'abc').replace, b"new", 2)
self.assertRaises(TypeError, testType(b'abc').center, 0, None)
if str != bytes:
self.assertRaises(TypeError, testType(b'abc').fromhex, None)
self.assertRaises(TypeError, testType(b'abc').decode, 'ascii', None)
for fn in ['find', 'index', 'rfind', 'count', 'startswith', 'endswith']:
f = getattr(testType(b'abc'), fn)
self.assertRaises(TypeError, f, None)
self.assertRaises(TypeError, f, None, 0)
self.assertRaises(TypeError, f, None, 0, 2)
self.assertRaises(TypeError, testType(b'abc').replace, None, b'ef')
self.assertRaises(TypeError, testType(b'abc').replace, None, b'ef', 1)
self.assertRaises(TypeError, testType(b'abc').replace, b'abc', None)
self.assertRaises(TypeError, testType(b'abc').replace, b'abc', None, 1)
def test_add_mul(self):
for testType in types:
self.assertRaises(TypeError, lambda: testType(b"a") + 3)
self.assertRaises(TypeError, lambda: 3 + testType(b"a"))
self.assertRaises(TypeError, lambda: "a" * "3")
self.assertRaises(OverflowError, lambda: "a" * (sys.maxint + 1))
self.assertRaises(OverflowError, lambda: (sys.maxint + 1) * "a")
class mylong(long): pass
# multiply
self.assertEqual("aaaa", "a" * 4L)
self.assertEqual("aaaa", "a" * mylong(4L))
self.assertEqual("aaa", "a" * 3)
self.assertEqual("a", "a" * True)
self.assertEqual("", "a" * False)
self.assertEqual("aaaa", 4L * "a")
self.assertEqual("aaaa", mylong(4L) * "a")
self.assertEqual("aaa", 3 * "a")
self.assertEqual("a", True * "a")
self.assertEqual("", False * "a" )
# zero-length string
def test_empty_bytes(self):
for testType in types:
self.assertEqual(testType(b'').title(), b'')
self.assertEqual(testType(b'').capitalize(), b'')
self.assertEqual(testType(b'').count(b'a'), 0)
table = testType(b'10') * 128
self.assertEqual(testType(b'').translate(table), b'')
self.assertEqual(testType(b'').replace(b'a', b'ef'), b'')
self.assertEqual(testType(b'').replace(b'bc', b'ef'), b'')
self.assertEqual(testType(b'').split(), [])
self.assertEqual(testType(b'').split(b' '), [b''])
self.assertEqual(testType(b'').split(b'a'), [b''])
def test_encode_decode(self):
for testType in types:
self.assertEqual(testType(b'abc').decode(), u'abc')
def test_encode_decode_error(self):
for testType in types:
self.assertRaises(TypeError, testType(b'abc').decode, None)
def test_bytes_subclass(self):
for testType in types:
class customstring(testType):
def __str__(self): return 'xyz'
def __repr__(self): return 'foo'
def __hash__(self): return 42
def __mul__(self, count): return b'multiplied'
def __add__(self, other): return 23
def __len__(self): return 2300
def __contains__(self, value): return False
o = customstring(b'abc')
self.assertEqual(str(o), "xyz")
self.assertEqual(repr(o), "foo")
self.assertEqual(hash(o), 42)
self.assertEqual(o * 3, b'multiplied')
self.assertEqual(o + b'abc', 23)
self.assertEqual(len(o), 2300)
self.assertEqual(b'a' in o, False)
class custombytearray(bytearray):
def __init__(self, value):
bytearray.__init__(self)
self.assertEqual(custombytearray(42), bytearray())
class custombytearray(bytearray):
def __init__(self, value, **args):
bytearray.__init__(self)
self.assertEqual(custombytearray(42, x=42), bytearray())
def test_bytes_equals(self):
for testType in types:
x = testType(b'abc') == testType(b'abc')
y = testType(b'def') == testType(b'def')
self.assertEqual(id(x), id(y))
self.assertEqual(id(x), id(True))
x = testType(b'abc') != testType(b'abc')
y = testType(b'def') != testType(b'def')
self.assertEqual(id(x), id(y))
self.assertEqual(id(x), id(False))
x = testType(b'abcx') == testType(b'abc')
y = testType(b'defx') == testType(b'def')
self.assertEqual(id(x), id(y))
self.assertEqual(id(x), id(False))
x = testType(b'abcx') != testType(b'abc')
y = testType(b'defx') != testType(b'def')
self.assertEqual(id(x), id(y))
self.assertEqual(id(x), id(True))
def test_bytes_dict(self):
self.assertTrue('__init__' not in bytes.__dict__.keys())
self.assertTrue('__init__' in bytearray.__dict__.keys())
for testType in types:
extra_str_dict_keys = [ "__cmp__", "isdecimal", "isnumeric", "isunicode"] # "__radd__",
#It's OK that __getattribute__ does not show up in the __dict__. It is
#implemented.
self.assertTrue(hasattr(testType, "__getattribute__"), str(testType) + " has no __getattribute__ method")
for temp_key in extra_str_dict_keys:
self.assertTrue(not temp_key in testType.__dict__.keys())
def test_bytes_to_numeric(self):
for testType in types:
class substring(testType):
def __int__(self): return 1
def __complex__(self): return 1j
def __float__(self): return 1.0
def __long__(self): return 1L
class myfloat(float): pass
class mylong(long): pass
class myint(int): pass
class mycomplex(complex): pass
v = substring(b"123")
self.assertEqual(float(v), 1.0)
self.assertEqual(myfloat(v), 1.0)
self.assertEqual(type(myfloat(v)), myfloat)
self.assertEqual(long(v), 1L)
self.assertEqual(mylong(v), 1L)
self.assertEqual(type(mylong(v)), mylong)
self.assertEqual(int(v), 1)
self.assertEqual(myint(v), 1)
self.assertEqual(type(myint(v)), myint)
# str in 2.6 still supports this, but not in 3.0, we have the 3.0 behavior.
if not is_cli and testType == bytes:
self.assertEqual(complex(v), 123 + 0j)
self.assertEqual(mycomplex(v), 123 + 0j)
else:
self.assertEqual(complex(v), 1j)
self.assertEqual(mycomplex(v), 1j)
class substring(testType): pass
v = substring(b"123")
self.assertEqual(long(v), 123L)
self.assertEqual(int(v), 123)
self.assertEqual(float(v), 123.0)
self.assertEqual(mylong(v), 123L)
self.assertEqual(type(mylong(v)), mylong)
self.assertEqual(myint(v), 123)
self.assertEqual(type(myint(v)), myint)
if testType == str:
# 2.6 allows this, 3.0 disallows this.
self.assertEqual(complex(v), 123+0j)
self.assertEqual(mycomplex(v), 123+0j)
else:
self.assertRaises(TypeError, complex, v)
self.assertRaises(TypeError, mycomplex, v)
def test_compares(self):
a = b'A'
b = b'B'
bb = b'BB'
aa = b'AA'
ab = b'AB'
ba = b'BA'
for testType in types:
for otherType in types:
self.assertEqual(testType(a) > otherType(b), False)
self.assertEqual(testType(a) < otherType(b), True)
self.assertEqual(testType(a) <= otherType(b), True)
self.assertEqual(testType(a) >= otherType(b), False)
self.assertEqual(testType(a) == otherType(b), False)
self.assertEqual(testType(a) != otherType(b), True)
self.assertEqual(testType(b) > otherType(a), True)
self.assertEqual(testType(b) < otherType(a), False)
self.assertEqual(testType(b) <= otherType(a), False)
self.assertEqual(testType(b) >= otherType(a), True)
self.assertEqual(testType(b) == otherType(a), False)
self.assertEqual(testType(b) != otherType(a), True)
self.assertEqual(testType(a) > otherType(a), False)
self.assertEqual(testType(a) < otherType(a), False)
self.assertEqual(testType(a) <= otherType(a), True)
self.assertEqual(testType(a) >= otherType(a), True)
self.assertEqual(testType(a) == otherType(a), True)
self.assertEqual(testType(a) != otherType(a), False)
self.assertEqual(testType(aa) > otherType(b), False)
self.assertEqual(testType(aa) < otherType(b), True)
self.assertEqual(testType(aa) <= otherType(b), True)
self.assertEqual(testType(aa) >= otherType(b), False)
self.assertEqual(testType(aa) == otherType(b), False)
self.assertEqual(testType(aa) != otherType(b), True)
self.assertEqual(testType(bb) > otherType(a), True)
self.assertEqual(testType(bb) < otherType(a), False)
self.assertEqual(testType(bb) <= otherType(a), False)
self.assertEqual(testType(bb) >= otherType(a), True)
self.assertEqual(testType(bb) == otherType(a), False)
self.assertEqual(testType(bb) != otherType(a), True)
self.assertEqual(testType(ba) > otherType(b), True)
self.assertEqual(testType(ba) < otherType(b), False)
self.assertEqual(testType(ba) <= otherType(b), False)
self.assertEqual(testType(ba) >= otherType(b), True)
self.assertEqual(testType(ba) == otherType(b), False)
self.assertEqual(testType(ba) != otherType(b), True)
self.assertEqual(testType(ab) > otherType(a), True)
self.assertEqual(testType(ab) < otherType(a), False)
self.assertEqual(testType(ab) <= otherType(a), False)
self.assertEqual(testType(ab) >= otherType(a), True)
self.assertEqual(testType(ab) == otherType(a), False)
self.assertEqual(testType(ab) != otherType(a), True)
self.assertEqual(testType(ab) == [], False)
self.assertEqual(testType(a) > None, True)
self.assertEqual(testType(a) < None, False)
self.assertEqual(testType(a) <= None, False)
self.assertEqual(testType(a) >= None, True)
self.assertEqual(None > testType(a), False)
self.assertEqual(None < testType(a), True)
self.assertEqual(None <= testType(a), True)
self.assertEqual(None >= testType(a), False)
def test_bytearray(self):
self.assertRaises(TypeError, hash, bytearray(b'abc'))
self.assertRaises(TypeError, bytearray(b'').__setitem__, None, b'abc')
self.assertRaises(TypeError, bytearray(b'').__delitem__, None)
x = bytearray(b'abc')
del x[-1]
self.assertEqual(x, b'ab')
def f():
x = bytearray(b'abc')
x[0:2] = [1j]
self.assertRaises(TypeError, f)
x = bytearray(b'abc')
x[0:1] = [ord('d')]
self.assertEqual(x, b'dbc')
x = bytearray(b'abc')
x[0:3] = x
self.assertEqual(x, b'abc')
x = bytearray(b'abc')
del x[0]
self.assertEqual(x, b'bc')
x = bytearray(b'abc')
x += b'foo'
self.assertEqual(x, b'abcfoo')
b = bytearray(b"abc")
b1 = b
b += b"def"
self.assertEqual(b1, b)
x = bytearray(b'abc')
x += bytearray(b'foo')
self.assertEqual(x, b'abcfoo')
x = bytearray(b'abc')
x *= 2
self.assertEqual(x, b'abcabc')
x = bytearray(b'abcdefghijklmnopqrstuvwxyz')
x[25:1] = b'x' * 24
self.assertEqual(x, b'abcdefghijklmnopqrstuvwxyxxxxxxxxxxxxxxxxxxxxxxxxz')
x = bytearray(b'abcdefghijklmnopqrstuvwxyz')
x[25:0] = b'x' * 25
self.assertEqual(x, b'abcdefghijklmnopqrstuvwxyxxxxxxxxxxxxxxxxxxxxxxxxxz')
tests = ( ((0, 3, None), b'abc', b''),
((0, 2, None), b'abc', b'c'),
((4, 0, 2), b'abc', b'abc'),
((3, 0, 2), b'abc', b'abc'),
((3, 0, -2), b'abc', b'ab'),
((0, 3, 1), b'abc', b''),
((0, 2, 1), b'abc', b'c'),
((0, 3, 2), b'abc', b'b'),
((0, 2, 2), b'abc', b'bc'),
((0, 3, -1), b'abc', b'abc'),
((0, 2, -1), b'abc', b'abc'),
((3, 0, -1), b'abc', b'a'),
((2, 0, -1), b'abc', b'a'),
((4, 2, -1), b'abcdef', b'abcf'),
)
for indexes, input, result in tests:
x = bytearray(input)
if indexes[2] is None:
del x[indexes[0] : indexes[1]]
self.assertEqual(x, result)
else:
del x[indexes[0] : indexes[1] : indexes[2]]
self.assertEqual(x, result)
class myint(int): pass
class intobj(object):
def __int__(self):
return 42
x = bytearray(b'abe')
x[-1] = ord('a')
self.assertEqual(x, b'aba')
x[-1] = IndexableOC(ord('r'))
self.assertEqual(x, b'abr')
x[-1] = Indexable(ord('s'))
self.assertEqual(x, b'abs')
def f(): x[-1] = IndexableOC(256)
self.assertRaises(ValueError, f)
def f(): x[-1] = Indexable(256)
self.assertRaises(ValueError, f)
x[-1] = b'b'
self.assertEqual(x, b'abb')
x[-1] = myint(ord('c'))
self.assertEqual(x, b'abc')
x[0:1] = 2
self.assertEqual(x, b'\x00\x00bc')
x = bytearray(b'abc')
x[0:1] = 2L
self.assertEqual(x, b'\x00\x00bc')
x[0:2] = b'a'
self.assertEqual(x, b'abc')
x[0:1] = b'd'
self.assertEqual(x, b'dbc')
x[0:1] = myint(3)
self.assertEqual(x, b'\x00\x00\x00bc')
x[0:3] = [ord('a'), ord('b'), ord('c')]
self.assertEqual(x, b'abcbc')
def f(): x[0:1] = intobj()
self.assertRaises(TypeError, f)
def f(): x[0:1] = sys.maxint
# mono doesn't throw an OutOfMemoryException on Linux when the size is too large,
# it does get a value error for trying to set capacity to a negative number
if is_mono:
self.assertRaises(ValueError, f)
else:
self.assertRaises(MemoryError, f)
def f(): x[0:1] = sys.maxint+1
self.assertRaises(TypeError, f)
for setval in [b'bar', bytearray(b'bar'), [b'b', b'a', b'r'], (b'b', b'a', b'r'), (98, b'a', b'r'), (Indexable(98), b'a', b'r'), (IndexableOC(98), b'a', b'r')]:
x = bytearray(b'abc')
x[0:3] = setval
self.assertEqual(x, b'bar')
x = bytearray(b'abc')
x[1:4] = setval
self.assertEqual(x, b'abar')
x = bytearray(b'abc')
x[0:2] = setval
self.assertEqual(x, b'barc')
x = bytearray(b'abc')
x[4:0:2] = setval[-1:-1]
self.assertEqual(x, b'abc')
x = bytearray(b'abc')
x[3:0:2] = setval[-1:-1]
self.assertEqual(x, b'abc')
x = bytearray(b'abc')
x[3:0:-2] = setval[-1:-1]
self.assertEqual(x, b'ab')
x = bytearray(b'abc')
x[3:0:-2] = setval[0:-2]
self.assertEqual(x, b'abb')
x = bytearray(b'abc')
x[0:3:1] = setval
self.assertEqual(x, b'bar')
x = bytearray(b'abc')
x[0:2:1] = setval
self.assertEqual(x, b'barc')
x = bytearray(b'abc')
x[0:3:2] = setval[0:-1]
self.assertEqual(x, b'bba')
x = bytearray(b'abc')
x[0:2:2] = setval[0:-2]
self.assertEqual(x, b'bbc')
x = bytearray(b'abc')
x[0:3:-1] = setval[-1:-1]
self.assertEqual(x, b'abc')
x = bytearray(b'abc')
x[0:2:-1] = setval[-1:-1]
self.assertEqual(x, b'abc')
x = bytearray(b'abc')
x[3:0:-1] = setval[0:-1]
self.assertEqual(x, b'aab')
x = bytearray(b'abc')
x[2:0:-1] = setval[0:-1]
self.assertEqual(x, b'aab')
x = bytearray(b'abcdef')
def f():x[0:6:2] = b'a'
self.assertRaises(ValueError, f)
self.assertEqual(bytearray(source=b'abc'), bytearray(b'abc'))
self.assertEqual(bytearray(source=2), bytearray(b'\x00\x00'))
self.assertEqual(bytearray(b'abc').__alloc__(), 4)
self.assertEqual(bytearray().__alloc__(), 0)
def test_bytes(self):
self.assertEqual(hash(b'abc'), hash(b'abc'))
self.assertEqual(b'abc', B'abc')
def test_operators(self):
for testType in types:
self.assertRaises(TypeError, lambda : testType(b'abc') * None)
self.assertRaises(TypeError, lambda : testType(b'abc') + None)
self.assertRaises(TypeError, lambda : None * testType(b'abc'))
self.assertRaises(TypeError, lambda : None + testType(b'abc'))
self.assertEqual(testType(b'abc') * 2, b'abcabc')
if testType == bytearray:
self.assertEqual(testType(b'abc')[0], ord('a'))
self.assertEqual(testType(b'abc')[-1], ord('c'))
else:
self.assertEqual(testType(b'abc')[0], b'a')
self.assertEqual(testType(b'abc')[-1], b'c')
for otherType in types:
self.assertEqual(testType(b'abc') + otherType(b'def'), b'abcdef')
resType = type(testType(b'abc') + otherType(b'def'))
if testType == bytearray or otherType == bytearray:
self.assertEqual(resType, bytearray)
else:
self.assertEqual(resType, bytes)
self.assertEqual(b'ab' in testType(b'abcd'), True)
# 2.6 doesn't allow this for testType=bytes, so test for 3.0 in this case
if testType is not bytes or hasattr(bytes, '__iter__'):
self.assertEqual(ord(b'a') in testType(b'abcd'), True)
self.assertRaises(ValueError, lambda : 256 in testType(b'abcd'))
x = b'abc'
self.assertEqual(x * 1, x)
self.assertEqual(1 * x, x)
self.assertEqual(id(x), id(x * 1))
self.assertEqual(id(x), id(1 * x))
x = bytearray(b'abc')
self.assertEqual(x * 1, x)
self.assertEqual(1 * x, x)
self.assertTrue(id(x) != id(x * 1))
self.assertTrue(id(x) != id(1 * x))
def test_init(self):
for testType in types:
if testType != str: # skip on Cpy 2.6 for str type
self.assertRaises(TypeError, testType, None, 'ascii')
self.assertRaises(TypeError, testType, u'abc', None)
self.assertRaises(TypeError, testType, [None])
self.assertEqual(testType(u'abc', 'ascii'), b'abc')
self.assertEqual(testType(0), b'')
self.assertEqual(testType(5), b'\x00\x00\x00\x00\x00')
self.assertRaises(ValueError, testType, [256])
self.assertRaises(ValueError, testType, [257])
testType(range(256))
def f():
yield 42
self.assertEqual(bytearray(f()), b'*')
def test_slicing(self):
for testType in types:
self.assertEqual(testType(b'abc')[0:3], b'abc')
self.assertEqual(testType(b'abc')[0:2], b'ab')
self.assertEqual(testType(b'abc')[3:0:2], b'')
self.assertEqual(testType(b'abc')[3:0:2], b'')
self.assertEqual(testType(b'abc')[3:0:-2], b'c')
self.assertEqual(testType(b'abc')[3:0:-2], b'c')
self.assertEqual(testType(b'abc')[0:3:1], b'abc')
self.assertEqual(testType(b'abc')[0:2:1], b'ab')
self.assertEqual(testType(b'abc')[0:3:2], b'ac')
self.assertEqual(testType(b'abc')[0:2:2], b'a')
self.assertEqual(testType(b'abc')[0:3:-1], b'')
self.assertEqual(testType(b'abc')[0:2:-1], b'')
self.assertEqual(testType(b'abc')[3:0:-1], b'cb')
self.assertEqual(testType(b'abc')[2:0:-1], b'cb')
self.assertRaises(TypeError, testType(b'abc').__getitem__, None)
def test_ord(self):
for testType in types:
self.assertEqual(ord(testType(b'a')), 97)
self.assertRaisesPartialMessage(TypeError, "expected a character, but string of length 2 found", ord, testType(b'aa'))
def test_pickle(self):
import cPickle
for testType in types:
self.assertEqual(cPickle.loads(cPickle.dumps(testType(range(256)))), testType(range(256)))
@unittest.skipUnless(is_cli, 'IronPython specific test')
def test_zzz_cli_features(self):
import System
import clr
clr.AddReference('Microsoft.Dynamic')
import Microsoft
for testType in types:
self.assertEqual(testType(b'abc').Count, 3)
self.assertEqual(bytearray(b'abc').Contains(ord('a')), True)
self.assertEqual(list(System.Collections.IEnumerable.GetEnumerator(bytearray(b'abc'))), [ord('a'), ord('b'), ord('c')])
self.assertEqual(testType(b'abc').IndexOf(ord('a')), 0)
self.assertEqual(testType(b'abc').IndexOf(ord('d')), -1)
myList = System.Collections.Generic.List[System.Byte]()
myList.Add(ord('a'))
myList.Add(ord('b'))
myList.Add(ord('c'))
self.assertEqual(testType(b'').join([myList]), b'abc')
# bytearray
'''
self.assertEqual(bytearray(b'abc') == 'abc', False)
if not is_net40:
self.assertEqual(Microsoft.Scripting.IValueEquality.ValueEquals(bytearray(b'abc'), 'abc'), False)
'''
self.assertEqual(bytearray(b'abc') == 'abc', True)
self.assertEqual(b'abc'.IsReadOnly, True)
self.assertEqual(bytearray(b'abc').IsReadOnly, False)
self.assertEqual(bytearray(b'abc').Remove(ord('a')), True)
self.assertEqual(bytearray(b'abc').Remove(ord('d')), False)
x = bytearray(b'abc')
x.Clear()
self.assertEqual(x, b'')
x.Add(ord('a'))
self.assertEqual(x, b'a')
self.assertEqual(x.IndexOf(ord('a')), 0)
self.assertEqual(x.IndexOf(ord('b')), -1)
x.Insert(0, ord('b'))
self.assertEqual(x, b'ba')
x.RemoveAt(0)
self.assertEqual(x, b'a')
System.Collections.Generic.IList[System.Byte].__setitem__(x, 0, ord('b'))
self.assertEqual(x, b'b')
# bytes
self.assertRaises(System.InvalidOperationException, b'abc'.Remove, ord('a'))
self.assertRaises(System.InvalidOperationException, b'abc'.Remove, ord('d'))
self.assertRaises(System.InvalidOperationException, b'abc'.Clear)
self.assertRaises(System.InvalidOperationException, b'abc'.Add, ord('a'))
self.assertRaises(System.InvalidOperationException, b'abc'.Insert, 0, ord('b'))
self.assertRaises(System.InvalidOperationException, b'abc'.RemoveAt, 0)
self.assertRaises(System.InvalidOperationException, System.Collections.Generic.IList[System.Byte].__setitem__, b'abc', 0, ord('b'))
lst = System.Collections.Generic.List[System.Byte]()
lst.Add(42)
self.assertEqual(ord(lst), 42)
lst.Add(42)
self.assertRaisesMessage(TypeError, "expected a character, but string of length 2 found", ord, lst)
def test_bytes_hashing(self):
"""test interaction of bytes w/ hashing modules"""
import _sha, _sha256, _sha512, _md5
for hashLib in (_sha.new, _sha256.sha256, _sha512.sha512, _sha512.sha384, _md5.new):
x = hashLib(b'abc')
x.update(b'abc')
#For now just make sure this doesn't throw
temp = hashLib(bytearray(b'abc'))
x.update(bytearray(b'abc'))
def test_cp35493(self):
self.assertEqual(bytearray(u'\xde\xad\xbe\xef\x80'), bytearray(b'\xde\xad\xbe\xef\x80'))
def test_add(self):
self.assertEqual(bytearray(b"abc") + memoryview(b"def"), b"abcdef")
run_test(__name__)
| apache-2.0 | -8,719,912,513,521,506,000 | 44.089717 | 168 | 0.532088 | false | 3.324936 | true | false | false |
offtools/linux-show-player | lisp/modules/gst_backend/elements/user_element.py | 3 | 3423 | # -*- coding: utf-8 -*-
#
# This file is part of Linux Show Player
#
# Copyright 2012-2016 Francesco Ceruti <[email protected]>
#
# Linux Show Player is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Linux Show Player is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Linux Show Player. If not, see <http://www.gnu.org/licenses/>.
from PyQt5.QtCore import QT_TRANSLATE_NOOP
from lisp.backend.media_element import ElementType, MediaType
from lisp.core.has_properties import Property
from lisp.modules.gst_backend.gi_repository import Gst
from lisp.modules.gst_backend.gst_element import GstMediaElement
class UserElement(GstMediaElement):
ElementType = ElementType.Plugin
MediaType = MediaType.Audio
Name = QT_TRANSLATE_NOOP('MediaElementName', 'Custom Element')
bin = Property(default='')
def __init__(self, pipeline):
super().__init__()
self.pipeline = pipeline
self.audio_convert_sink = Gst.ElementFactory.make("audioconvert", None)
# A default assignment for the bin
self.gst_bin = Gst.ElementFactory.make("identity", None)
self.gst_bin.set_property("signal-handoffs", False)
self.audio_convert_src = Gst.ElementFactory.make("audioconvert", None)
pipeline.add(self.audio_convert_sink)
pipeline.add(self.gst_bin)
pipeline.add(self.audio_convert_src)
self.audio_convert_sink.link(self.gst_bin)
self.gst_bin.link(self.audio_convert_src)
self._old_bin = self.gst_bin
self.changed('bin').connect(self.__prepare_bin)
def sink(self):
return self.audio_convert_sink
def src(self):
return self.audio_convert_src
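# Usage sketch (hypothetical pipeline description, not part of this module):
# assigning a gst-launch style string rebuilds the inner bin while the
# pipeline keeps running, e.g.
#   element.bin = 'audioecho delay=500000000 intensity=0.4'
# and an unparsable description falls back to a do-nothing identity element.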
def __prepare_bin(self, value):
if value != '' and value != self._old_bin:
self._old_bin = value
# If in the PLAYING state we need to restart the pipeline after unblocking
playing = self.gst_bin.current_state == Gst.State.PLAYING
# Block the stream
pad = self.audio_convert_sink.sinkpads[0]
probe = pad.add_probe(Gst.PadProbeType.BLOCK, lambda *a: 0, "")
# Unlink the components
self.audio_convert_sink.unlink(self.gst_bin)
self.gst_bin.unlink(self.audio_convert_src)
self.pipeline.remove(self.gst_bin)
# Create the bin; if parsing fails, use a do-nothing element
try:
self.gst_bin = Gst.parse_bin_from_description(value, True)
except Exception:
self.gst_bin = Gst.ElementFactory.make("identity", None)
self.gst_bin.set_property("signal-handoffs", False)
# Link the components
self.pipeline.add(self.gst_bin)
self.audio_convert_sink.link(self.gst_bin)
self.gst_bin.link(self.audio_convert_src)
# Unblock the stream
pad.remove_probe(probe)
if playing:
self.pipeline.set_state(Gst.State.PLAYING) | gpl-3.0 | -7,158,419,560,938,443,000 | 36.626374 | 79 | 0.658779 | false | 3.769824 | false | false | false |
modelkayak/python_signal_examples | energy_fft.py | 1 | 2685 | import numpy as np
import scipy
from matplotlib import pyplot as plt
from numpy import pi as pi
# Plotting logic switches
time_plot = True
freq_plot = True
# Oversample to make things look purty
oversample = 100
# Frequencies to simulate
f_min = 5 #[Hz]
f_max = 10 #[Hz]
f_list = np.arange(f_min,f_max) # Note: arange does not include the stop pt
# Time array
t_start = 0 #[s]
t_stop = oversample/f_min #[s]
f_samp = oversample*f_max #[Hz]
t_step = 1/f_samp #[s]
# Create a time span, but do not care about the number of points.
# This will likely create sinc functions in the FFT.
#t = np.arange(t_start,t_stop,t_step)
# Use N points to make a faster FFT and to avoid
# the addition of zeros at the end of the FFT array.
# The addition of zeros will result in the mulitplication
# of a box filter in the time domain, which results in
# a sinc function in the frequency domain
N = int(np.power(2,np.ceil(np.log2(t_stop/t_step))))
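# e.g. with f_min=5 Hz, f_max=10 Hz and oversample=100: t_stop=20 s and
# t_step=1/1000 s, so t_stop/t_step=20000 and N=2**15=32768 samples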
# Create a time span, but care about the number of points such that
# the signal does not look like a sinc function in the freq. domain.
# Source: U of RI ECE, ELE 436: Comm. Sys., FFT Tutorial
t = np.linspace(t_start,t_stop,num=N,endpoint=True)
# Create random amplitudes
a_list = [np.random.randint(1,10) for i in f_list]
# Create a time signal with random amplitudes for each frequency
x = 0
for a,f in zip(a_list,f_list):
x += a*np.sin(2*pi*f*t)
# Take the FFT of the signal
# Normalize by the number of samples, as is conventional for the DFT
# Take the absolute value because we only care about the magnitude
# of the spectrum.
X = np.abs(np.fft.fft(x)/x.size)
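# For a real sinusoid of amplitude a sampled over (nearly) whole periods,
# this normalized two-sided spectrum peaks at roughly a/2 at +f and -f,
# since the energy is split between positive and negative frequency bins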
# Get the labels for the frequencies, num pts and delta between them
freq_labels = np.fft.fftfreq(N,t[1]-t[0])
# Plot the time signal
if time_plot and not freq_plot:
plt.figure('Time Domain View')
plt.title("Time domain view of signal x")
plt.plot(t,x)
plt.xlim([0,5/f_min])
plt.xlabel("Time [s]")
plt.ylabel("Amplitude")
plt.show()
# Or plot the frequecy
if freq_plot and not time_plot:
plt.figure('Frequency Domain View')
plt.title("Frequency domain view of signal x")
plt.plot(freq_labels,X)
plt.xlim([-f_max,f_max])
plt.show()
# Or plot both
if freq_plot and time_plot:
plt.subplot(211)
plt.title("Time and frequency domain view of real signal x")
plt.plot(t,x)
plt.xlim([0,5/f_min]) # Limit the time shown to a small amount
plt.xlabel("Time [s]")
plt.ylabel("Amplitude")
plt.subplot(212)
plt.plot(freq_labels,X)
plt.xlim([-f_max,f_max]) # Limit the freq shown to a small amount
plt.xlabel("Frequency [Hz]")
plt.ylabel("Magnitude (linear)")
plt.show()
| mit | -7,617,899,990,671,451,000 | 28.505495 | 75 | 0.67933 | false | 3.01009 | false | false | false |
moxon6/chemlab | build/lib/chemlab/graphics/qttrajectory.py | 5 | 12898 | from PyQt4.QtGui import QMainWindow, QApplication, QDockWidget
from PyQt4 import QtGui, QtCore
from PyQt4.QtCore import Qt
import os
from .qtviewer import app
from .qchemlabwidget import QChemlabWidget
from .. import resources
import numpy as np
resources_dir = os.path.dirname(resources.__file__)
class PlayStopButton(QtGui.QPushButton):
play = QtCore.pyqtSignal()
pause = QtCore.pyqtSignal()
def __init__(self):
css = '''
PlayStopButton {
width: 30px;
height: 30px;
}
'''
super(PlayStopButton, self).__init__()
self.setStyleSheet(css)
icon = QtGui.QIcon(os.path.join(resources_dir, 'play_icon.svg'))
self.setIcon(icon)
self.status = 'paused'
self.clicked.connect(self.on_click)
def on_click(self):
if self.status == 'paused':
self.status = 'playing'
icon = QtGui.QIcon(os.path.join(resources_dir, 'pause_icon.svg'))
self.setIcon(icon)
self.play.emit()
else:
self.status = 'paused'
icon = QtGui.QIcon(os.path.join(resources_dir, 'play_icon.svg'))
self.setIcon(icon)
self.pause.emit()
def set_pause(self):
self.status = 'paused'
icon = QtGui.QIcon(os.path.join(resources_dir, 'play_icon.svg'))
self.setIcon(icon)
def set_play(self):
self.status = 'playing'
icon = QtGui.QIcon(os.path.join(resources_dir, 'pause_icon.svg'))
self.setIcon(icon)
class AnimationSlider(QtGui.QSlider):
def __init__(self):
super(AnimationSlider, self).__init__(Qt.Horizontal)
self._cursor_adjustment = 7 #px
def mousePressEvent(self, event):
if event.button() == Qt.LeftButton:
value = self.__pixelPosToRangeValue(event.x()-self._cursor_adjustment)
self.setValue(value)
event.accept()
super(AnimationSlider, self).mousePressEvent(event)
def __pixelPosToRangeValue(self, pos):
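        # Map a pixel position along the groove to a value in the slider's
        # range, using the current style's groove and handle geometry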
opt = QtGui.QStyleOptionSlider()
self.initStyleOption(opt)
style = QtGui.QApplication.style()
gr = style.subControlRect(style.CC_Slider, opt, style.SC_SliderGroove, self)
sr = style.subControlRect(style.CC_Slider, opt, style.SC_SliderHandle, self)
if self.orientation() == QtCore.Qt.Horizontal:
slider_length = sr.width()
slider_min = gr.x()
slider_max = gr.right() - slider_length + 1
else:
slider_length = sr.height()
slider_min = gr.y()
slider_max = gr.bottom() - slider_length + 1
return style.sliderValueFromPosition(self.minimum(), self.maximum(),
pos-slider_min, slider_max-slider_min,
opt.upsideDown)
class TrajectoryControls(QtGui.QWidget):
play = QtCore.pyqtSignal()
pause = QtCore.pyqtSignal()
frame_changed = QtCore.pyqtSignal(int)
speed_changed = QtCore.pyqtSignal()
def __init__(self, parent=None):
super(TrajectoryControls, self).__init__(parent)
self.current_index = 0
self.max_index = 0
self._timer = QtCore.QTimer(self)
self._timer.timeout.connect(self.do_update)
containerhb2 = QtGui.QWidget(parent)
hb = QtGui.QHBoxLayout() # For controls
vb = QtGui.QVBoxLayout()
hb2 = QtGui.QHBoxLayout() # For settings
vb.addWidget(containerhb2)
vb.addLayout(hb)
containerhb2.setLayout(hb2)
containerhb2.setSizePolicy(QtGui.QSizePolicy.Minimum,
QtGui.QSizePolicy.Minimum)
hb2.addWidget(QtGui.QLabel('Speed'))
self._speed_slider = QtGui.QSlider(Qt.Horizontal)
self._speed_slider.resize(100, self._speed_slider.height())
self._speed_slider.setSizePolicy(QtGui.QSizePolicy.Fixed,
QtGui.QSizePolicy.Fixed)
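        # Timer intervals in milliseconds; the list is reversed below so that
        # a higher slider value selects a shorter interval (faster playback)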
self.speeds = np.linspace(15, 250, 11).astype(int)
self.speeds = self.speeds.tolist()
self.speeds.reverse()
self._speed_slider.setMaximum(10)
self._speed_slider.setValue(7)
#self._speed_slider.valueChanged.connect(self.on_speed_changed)
hb2.addWidget(self._speed_slider)
hb2.addStretch(1)
# Control buttons
self.play_stop = PlayStopButton()
hb.addWidget(self.play_stop)
self.slider = AnimationSlider()
hb.addWidget(self.slider, 2)
self._label_tmp = '<b><FONT SIZE=30>{}</b>'
self.timelabel = QtGui.QLabel(self._label_tmp.format('0.0'))
hb.addWidget(self.timelabel)
self._settings_button = QtGui.QPushButton()
self._settings_button.setStyleSheet('''
QPushButton {
width: 30px;
height: 30px;
}''')
icon = QtGui.QIcon(os.path.join(resources_dir, 'settings_icon.svg'))
self._settings_button.setIcon(icon)
self._settings_button.clicked.connect(self._toggle_settings)
hb.addWidget(self._settings_button)
self.play_stop.setFocus()
vb.setSizeConstraint(QtGui.QLayout.SetMaximumSize)
containerhb2.setVisible(False)
self._settings_pan = containerhb2
self.setLayout(vb)
self.speed = self.speeds[self._speed_slider.value()]
# Connecting all the signals
self.play_stop.play.connect(self.on_play)
self.play_stop.pause.connect(self.on_pause)
self.slider.valueChanged.connect(self.on_slider_change)
self.slider.sliderPressed.connect(self.on_slider_down)
self.play_stop.setFocus()
def _toggle_settings(self):
self._settings_pan.setVisible(not self._settings_pan.isVisible())
def on_play(self):
if self.current_index == self.max_index - 1:
# Restart
self.current_index = 0
self._timer.start(self.speed)
def do_update(self):
if self.current_index >= self.max_index:
self.current_index = self.max_index - 1
self._timer.stop()
self.play_stop.set_pause()
else:
self.current_index += 1
# This triggers on_slider_change
self.slider.setSliderPosition(self.current_index)
def next(self, skip=1):
if self.current_index + skip >= self.max_index - 1:
raise StopIteration
else:
self.slider.setValue(self.current_index + skip)
            # The current_index is changed by the slider's valueChanged callback
def goto_frame(self, framenum):
self.slider.setValue(framenum)
def on_pause(self):
self._timer.stop()
def on_slider_change(self, value):
#print 'Slider moved', value
self.current_index = value
self.frame_changed.emit(self.current_index)
def on_slider_down(self):
self._timer.stop()
self.play_stop.set_pause()
def on_speed_changed(self, index):
self.speed = self.speeds[index]
if self._timer.isActive():
self._timer.stop()
self._timer.start(self.speed)
def set_ticks(self, number):
'''Set the number of frames to animate.
'''
self.max_index = number
self.current_index = 0
self.slider.setMaximum(self.max_index-1)
self.slider.setMinimum(0)
self.slider.setPageStep(1)
def set_time(self, t):
stime = format_time(t)
label_tmp = '<b><FONT SIZE=30>{}</b>'
self.timelabel.setText(label_tmp.format(stime))
class QtTrajectoryViewer(QMainWindow):
"""Bases: `PyQt4.QtGui.QMainWindow`
    Interface for viewing a trajectory.
It provides interface elements to play/pause and set the speed of
the animation.
**Example**
To set up a QtTrajectoryViewer you have to add renderers to the
scene, set the number of frames present in the animation by calling
    :py:meth:`~chemlab.graphics.QtTrajectoryViewer.set_ticks` and
define an update function.
Below is an example taken from the function
:py:func:`chemlab.graphics.display_trajectory`::
from chemlab.graphics import QtTrajectoryViewer
# sys = some System
# coords_list = some list of atomic coordinates
v = QtTrajectoryViewer()
sr = v.add_renderer(AtomRenderer, sys.r_array, sys.type_array,
backend='impostors')
br = v.add_renderer(BoxRenderer, sys.box_vectors)
v.set_ticks(len(coords_list))
@v.update_function
def on_update(index):
sr.update_positions(coords_list[index])
br.update(sys.box_vectors)
v.set_text(format_time(times[index]))
v.widget.repaint()
v.run()
.. warning:: Use with caution, the API for this element is not
fully stabilized and may be subject to change.
"""
def __init__(self):
super(QtTrajectoryViewer, self).__init__()
self.controls = QDockWidget()
# Eliminate the dock titlebar
title_widget = QtGui.QWidget(self)
self.controls.setTitleBarWidget(title_widget)
traj_controls = TrajectoryControls(self)
self.controls.setWidget(traj_controls)
# Molecular viewer
self.widget = QChemlabWidget(self)
self.setCentralWidget(self.widget)
self.addDockWidget(Qt.DockWidgetArea(Qt.BottomDockWidgetArea),
self.controls)
self.show()
# Replace in this way
traj_controls.frame_changed.connect(self.on_frame_changed)
self.traj_controls = traj_controls
def set_ticks(self, number):
self.traj_controls.set_ticks(number)
def set_text(self, text):
'''Update the time indicator in the interface.
'''
self.traj_controls.timelabel.setText(self.traj_controls._label_tmp.format(text))
def on_frame_changed(self, index):
self._update_function(index)
def on_pause(self):
self._timer.stop()
def on_slider_change(self, value):
self.current_index = value
self._update_function(self.current_index)
def on_slider_down(self):
self._timer.stop()
self.play_stop.set_pause()
def on_speed_changed(self, index):
self.speed = self.speeds[index]
if self._timer.isActive():
self._timer.stop()
self._timer.start(self.speed)
def add_renderer(self, klass, *args, **kwargs):
'''The behaviour of this function is the same as
:py:meth:`chemlab.graphics.QtViewer.add_renderer`.
'''
renderer = klass(self.widget, *args, **kwargs)
self.widget.renderers.append(renderer)
return renderer
def add_ui(self, klass, *args, **kwargs):
'''Add an UI element for the current scene. The approach is
the same as renderers.
.. warning:: The UI api is not yet finalized
'''
ui = klass(self.widget, *args, **kwargs)
self.widget.uis.append(ui)
return ui
def add_post_processing(self, klass, *args, **kwargs):
pp = klass(self.widget, *args, **kwargs)
self.widget.post_processing.append(pp)
return pp
def run(self):
app.exec_()
def update_function(self, func, frames=None):
'''Set the function to be called when it's time to display a frame.
*func* should be a function that takes one integer argument that
represents the frame that has to be played::
def func(index):
# Update the renderers to match the
# current animation index
'''
# Back-compatibility
if frames is not None:
self.traj_controls.set_ticks(frames)
self._update_function = func
def _toggle_settings(self):
self._settings_pan.setVisible(not self._settings_pan.isVisible())
def format_time(t):
if 0.0 <= t < 100.0:
return '%.1f ps' % t
elif 100.0 <= t < 1.0e5:
return '%.1f ns' % (t/1e3)
elif 1.0e5 <= t < 1.0e8:
return '%.1f us' % (t/1e6)
elif 1.0e8 <= t < 1.0e12:
return '%.1f ms' % (t/1e9)
elif 1.0e12 <= t < 1.0e15:
return '%.1f s' % (t/1e12)
if __name__ == '__main__':
v = QtTrajectoryViewer()
v.show()
app.exec_() | gpl-3.0 | 5,636,350,738,547,121,000 | 30.615196 | 88 | 0.575361 | false | 3.925137 | false | false | false |
gautam1858/tensorflow | tensorflow/python/kernel_tests/numerics_test.py | 15 | 5145 | # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for tensorflow.ops.numerics."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.framework import test_util
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import numerics
from tensorflow.python.platform import test
class VerifyTensorAllFiniteTest(test.TestCase):
def testVerifyTensorAllFiniteSucceeds(self):
x_shape = [5, 4]
x = np.random.random_sample(x_shape).astype(np.float32)
with test_util.use_gpu():
t = constant_op.constant(x, shape=x_shape, dtype=dtypes.float32)
t_verified = numerics.verify_tensor_all_finite(t,
"Input is not a number.")
self.assertAllClose(x, self.evaluate(t_verified))
def testVerifyTensorAllFiniteFails(self):
x_shape = [5, 4]
x = np.random.random_sample(x_shape).astype(np.float32)
my_msg = "Input is not a number."
# Test NaN.
x[0] = np.nan
with test_util.use_gpu():
with self.assertRaisesOpError(my_msg):
t = constant_op.constant(x, shape=x_shape, dtype=dtypes.float32)
t_verified = numerics.verify_tensor_all_finite(t, my_msg)
self.evaluate(t_verified)
# Test Inf.
x[0] = np.inf
with test_util.use_gpu():
with self.assertRaisesOpError(my_msg):
t = constant_op.constant(x, shape=x_shape, dtype=dtypes.float32)
t_verified = numerics.verify_tensor_all_finite(t, my_msg)
self.evaluate(t_verified)
@test_util.run_v1_only("b/120545219")
class NumericsTest(test.TestCase):
def testInf(self):
with self.session(graph=ops.Graph()):
t1 = constant_op.constant(1.0)
t2 = constant_op.constant(0.0)
a = math_ops.div(t1, t2)
check = numerics.add_check_numerics_ops()
a = control_flow_ops.with_dependencies([check], a)
with self.assertRaisesOpError("Inf"):
self.evaluate(a)
def testNaN(self):
with self.session(graph=ops.Graph()):
t1 = constant_op.constant(0.0)
t2 = constant_op.constant(0.0)
a = math_ops.div(t1, t2)
check = numerics.add_check_numerics_ops()
a = control_flow_ops.with_dependencies([check], a)
with self.assertRaisesOpError("NaN"):
self.evaluate(a)
def testBoth(self):
with self.session(graph=ops.Graph()):
t1 = constant_op.constant([1.0, 0.0])
t2 = constant_op.constant([0.0, 0.0])
a = math_ops.div(t1, t2)
check = numerics.add_check_numerics_ops()
a = control_flow_ops.with_dependencies([check], a)
with self.assertRaisesOpError("Inf and NaN"):
self.evaluate(a)
def testPassThrough(self):
with self.session(graph=ops.Graph()):
t1 = constant_op.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3])
checked = array_ops.check_numerics(t1, message="pass through test")
value = self.evaluate(checked)
self.assertAllEqual(np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]), value)
self.assertEqual([2, 3], checked.get_shape())
def testControlFlowCond(self):
predicate = array_ops.placeholder(dtypes.bool, shape=[])
_ = control_flow_ops.cond(predicate,
lambda: constant_op.constant([37.]),
lambda: constant_op.constant([42.]))
with self.assertRaisesRegexp(
ValueError,
r"`tf\.add_check_numerics_ops\(\) is not compatible with "
r"TensorFlow control flow operations such as `tf\.cond\(\)` "
r"or `tf.while_loop\(\)`\."):
numerics.add_check_numerics_ops()
def testControlFlowWhile(self):
predicate = array_ops.placeholder(dtypes.bool, shape=[])
_ = control_flow_ops.while_loop(lambda _: predicate,
lambda _: constant_op.constant([37.]),
[constant_op.constant([42.])])
with self.assertRaisesRegexp(
ValueError,
r"`tf\.add_check_numerics_ops\(\) is not compatible with "
r"TensorFlow control flow operations such as `tf\.cond\(\)` "
r"or `tf.while_loop\(\)`\."):
numerics.add_check_numerics_ops()
if __name__ == "__main__":
test.main()
| apache-2.0 | -8,638,399,950,881,211,000 | 37.395522 | 80 | 0.641399 | false | 3.548276 | true | false | false |
zzsza/TIL | python/crawling/advanced_link_crawler.py | 1 | 1657 | import re
from urllib import robotparser
from urllib.parse import urljoin
from downloader import Downloader
def get_robots_parser(robots_url):
rp = robotparser.RobotFileParser()
rp.set_url(robots_url)
rp.read()
return rp
def get_links(html):
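    # Return every href value found in the page's anchor tags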
webpage_regex = re.compile("""<a[^>]+href=["'](.*?)["']""", re.IGNORECASE)
return webpage_regex.findall(html)
def link_crawler(start_url, link_regex, robots_url=None, user_agent='wswp',
proxies=None, delay=3, max_depth=4, num_retries=2, cache={}, scraper_callback=None):
crawl_queue = [start_url]
seen = {}
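    # Track the crawl depth at which each link was discovered, so the
    # max_depth limit can be enforced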
if not robots_url:
robots_url = '{}/robots.txt'.format(start_url)
rp = get_robots_parser(robots_url)
D = Downloader(delay=delay, user_agent=user_agent, proxies=proxies, cache=cache)
while crawl_queue:
url = crawl_queue.pop()
if rp.can_fetch(user_agent, url):
depth = seen.get(url, 0)
if depth == max_depth:
print('Skipping %s due to depth' % url)
continue
html = D(url, num_retries=num_retries)
if not html:
continue
if scraper_callback:
links = scraper_callback(url, html) or []
else:
links = []
for link in get_links(html) + links:
if re.match(link_regex, link):
abs_link = urljoin(start_url, link)
if abs_link not in seen:
seen[abs_link] = depth + 1
crawl_queue.append(abs_link)
else:
print('Blocked by robots.txt:', url)
| mit | -5,075,583,295,079,017,000 | 33.520833 | 101 | 0.554617 | false | 3.723596 | false | false | false |
rbalda/neural_ocr | env/lib/python2.7/site-packages/django/contrib/gis/geoip/base.py | 334 | 11859 | import os
import re
import warnings
from ctypes import c_char_p
from django.contrib.gis.geoip.libgeoip import GEOIP_SETTINGS
from django.contrib.gis.geoip.prototypes import (
GeoIP_country_code_by_addr, GeoIP_country_code_by_name,
GeoIP_country_name_by_addr, GeoIP_country_name_by_name,
GeoIP_database_info, GeoIP_delete, GeoIP_lib_version, GeoIP_open,
GeoIP_record_by_addr, GeoIP_record_by_name,
)
from django.core.validators import ipv4_re
from django.utils import six
from django.utils.deprecation import RemovedInDjango20Warning
from django.utils.encoding import force_bytes, force_text
# Regular expressions for recognizing the GeoIP free database editions.
free_regex = re.compile(r'^GEO-\d{3}FREE')
lite_regex = re.compile(r'^GEO-\d{3}LITE')
class GeoIPException(Exception):
pass
class GeoIP(object):
# The flags for GeoIP memory caching.
# GEOIP_STANDARD - read database from filesystem, uses least memory.
#
# GEOIP_MEMORY_CACHE - load database into memory, faster performance
# but uses more memory
#
# GEOIP_CHECK_CACHE - check for updated database. If database has been
# updated, reload filehandle and/or memory cache. This option
# is not thread safe.
#
# GEOIP_INDEX_CACHE - just cache the most frequently accessed index
# portion of the database, resulting in faster lookups than
# GEOIP_STANDARD, but less memory usage than GEOIP_MEMORY_CACHE -
# useful for larger databases such as GeoIP Organization and
# GeoIP City. Note, for GeoIP Country, Region and Netspeed
# databases, GEOIP_INDEX_CACHE is equivalent to GEOIP_MEMORY_CACHE
#
# GEOIP_MMAP_CACHE - load database into mmap shared memory ( not available
# on Windows).
GEOIP_STANDARD = 0
GEOIP_MEMORY_CACHE = 1
GEOIP_CHECK_CACHE = 2
GEOIP_INDEX_CACHE = 4
GEOIP_MMAP_CACHE = 8
cache_options = {opt: None for opt in (0, 1, 2, 4, 8)}
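    # Valid cache flag values; only membership is checked when validating
    # the 'cache' argument, the dict values themselves are unused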
# Paths to the city & country binary databases.
_city_file = ''
_country_file = ''
# Initially, pointers to GeoIP file references are NULL.
_city = None
_country = None
def __init__(self, path=None, cache=0, country=None, city=None):
"""
Initializes the GeoIP object, no parameters are required to use default
settings. Keyword arguments may be passed in to customize the locations
of the GeoIP data sets.
* path: Base directory to where GeoIP data is located or the full path
to where the city or country data files (*.dat) are located.
Assumes that both the city and country data sets are located in
this directory; overrides the GEOIP_PATH settings attribute.
* cache: The cache settings when opening up the GeoIP datasets,
and may be an integer in (0, 1, 2, 4, 8) corresponding to
the GEOIP_STANDARD, GEOIP_MEMORY_CACHE, GEOIP_CHECK_CACHE,
GEOIP_INDEX_CACHE, and GEOIP_MMAP_CACHE, `GeoIPOptions` C API
settings, respectively. Defaults to 0, meaning that the data is read
from the disk.
* country: The name of the GeoIP country data file. Defaults to
'GeoIP.dat'; overrides the GEOIP_COUNTRY settings attribute.
* city: The name of the GeoIP city data file. Defaults to
'GeoLiteCity.dat'; overrides the GEOIP_CITY settings attribute.
"""
warnings.warn(
"django.contrib.gis.geoip is deprecated in favor of "
"django.contrib.gis.geoip2 and the MaxMind GeoLite2 database "
"format.", RemovedInDjango20Warning, 2
)
# Checking the given cache option.
if cache in self.cache_options:
self._cache = cache
else:
raise GeoIPException('Invalid GeoIP caching option: %s' % cache)
# Getting the GeoIP data path.
if not path:
path = GEOIP_SETTINGS.get('GEOIP_PATH')
if not path:
raise GeoIPException('GeoIP path must be provided via parameter or the GEOIP_PATH setting.')
if not isinstance(path, six.string_types):
raise TypeError('Invalid path type: %s' % type(path).__name__)
if os.path.isdir(path):
# Constructing the GeoIP database filenames using the settings
# dictionary. If the database files for the GeoLite country
# and/or city datasets exist, then try and open them.
country_db = os.path.join(path, country or GEOIP_SETTINGS.get('GEOIP_COUNTRY', 'GeoIP.dat'))
if os.path.isfile(country_db):
self._country = GeoIP_open(force_bytes(country_db), cache)
self._country_file = country_db
city_db = os.path.join(path, city or GEOIP_SETTINGS.get('GEOIP_CITY', 'GeoLiteCity.dat'))
if os.path.isfile(city_db):
self._city = GeoIP_open(force_bytes(city_db), cache)
self._city_file = city_db
elif os.path.isfile(path):
# Otherwise, some detective work will be needed to figure
# out whether the given database path is for the GeoIP country
# or city databases.
ptr = GeoIP_open(force_bytes(path), cache)
info = GeoIP_database_info(ptr)
if lite_regex.match(info):
# GeoLite City database detected.
self._city = ptr
self._city_file = path
elif free_regex.match(info):
# GeoIP Country database detected.
self._country = ptr
self._country_file = path
else:
raise GeoIPException('Unable to recognize database edition: %s' % info)
else:
raise GeoIPException('GeoIP path must be a valid file or directory.')
def __del__(self):
# Cleaning any GeoIP file handles lying around.
if GeoIP_delete is None:
return
if self._country:
GeoIP_delete(self._country)
if self._city:
GeoIP_delete(self._city)
def __repr__(self):
version = ''
if GeoIP_lib_version is not None:
version += ' [v%s]' % force_text(GeoIP_lib_version())
return '<%(cls)s%(version)s _country_file="%(country)s", _city_file="%(city)s">' % {
'cls': self.__class__.__name__,
'version': version,
'country': self._country_file,
'city': self._city_file,
}
def _check_query(self, query, country=False, city=False, city_or_country=False):
"Helper routine for checking the query and database availability."
# Making sure a string was passed in for the query.
if not isinstance(query, six.string_types):
raise TypeError('GeoIP query must be a string, not type %s' % type(query).__name__)
# Extra checks for the existence of country and city databases.
if city_or_country and not (self._country or self._city):
raise GeoIPException('Invalid GeoIP country and city data files.')
elif country and not self._country:
raise GeoIPException('Invalid GeoIP country data file: %s' % self._country_file)
elif city and not self._city:
raise GeoIPException('Invalid GeoIP city data file: %s' % self._city_file)
# Return the query string back to the caller. GeoIP only takes bytestrings.
return force_bytes(query)
def city(self, query):
"""
Returns a dictionary of city information for the given IP address or
Fully Qualified Domain Name (FQDN). Some information in the dictionary
may be undefined (None).
"""
enc_query = self._check_query(query, city=True)
if ipv4_re.match(query):
# If an IP address was passed in
return GeoIP_record_by_addr(self._city, c_char_p(enc_query))
else:
# If a FQDN was passed in.
return GeoIP_record_by_name(self._city, c_char_p(enc_query))
def country_code(self, query):
"Returns the country code for the given IP Address or FQDN."
enc_query = self._check_query(query, city_or_country=True)
if self._country:
if ipv4_re.match(query):
return GeoIP_country_code_by_addr(self._country, enc_query)
else:
return GeoIP_country_code_by_name(self._country, enc_query)
else:
return self.city(query)['country_code']
def country_name(self, query):
"Returns the country name for the given IP Address or FQDN."
enc_query = self._check_query(query, city_or_country=True)
if self._country:
if ipv4_re.match(query):
return GeoIP_country_name_by_addr(self._country, enc_query)
else:
return GeoIP_country_name_by_name(self._country, enc_query)
else:
return self.city(query)['country_name']
def country(self, query):
"""
Returns a dictionary with the country code and name when given an
IP address or a Fully Qualified Domain Name (FQDN). For example, both
'24.124.1.80' and 'djangoproject.com' are valid parameters.
"""
# Returning the country code and name
return {'country_code': self.country_code(query),
'country_name': self.country_name(query),
}
# #### Coordinate retrieval routines ####
def coords(self, query, ordering=('longitude', 'latitude')):
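        "Returns a coordinate tuple in the given ordering for the query, or None."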
cdict = self.city(query)
if cdict is None:
return None
else:
return tuple(cdict[o] for o in ordering)
def lon_lat(self, query):
"Returns a tuple of the (longitude, latitude) for the given query."
return self.coords(query)
def lat_lon(self, query):
"Returns a tuple of the (latitude, longitude) for the given query."
return self.coords(query, ('latitude', 'longitude'))
def geos(self, query):
"Returns a GEOS Point object for the given query."
ll = self.lon_lat(query)
if ll:
from django.contrib.gis.geos import Point
return Point(ll, srid=4326)
else:
return None
# #### GeoIP Database Information Routines ####
@property
def country_info(self):
"Returns information about the GeoIP country database."
if self._country is None:
ci = 'No GeoIP Country data in "%s"' % self._country_file
else:
ci = GeoIP_database_info(self._country)
return ci
@property
def city_info(self):
"Returns information about the GeoIP city database."
if self._city is None:
ci = 'No GeoIP City data in "%s"' % self._city_file
else:
ci = GeoIP_database_info(self._city)
return ci
@property
def info(self):
"Returns information about the GeoIP library and databases in use."
info = ''
if GeoIP_lib_version:
info += 'GeoIP Library:\n\t%s\n' % GeoIP_lib_version()
return info + 'Country:\n\t%s\nCity:\n\t%s' % (self.country_info, self.city_info)
# #### Methods for compatibility w/the GeoIP-Python API. ####
@classmethod
def open(cls, full_path, cache):
return GeoIP(full_path, cache)
def _rec_by_arg(self, arg):
if self._city:
return self.city(arg)
else:
return self.country(arg)
region_by_addr = city
region_by_name = city
record_by_addr = _rec_by_arg
record_by_name = _rec_by_arg
country_code_by_addr = country_code
country_code_by_name = country_code
country_name_by_addr = country_name
country_name_by_name = country_name
| mit | 2,515,451,417,160,843,300 | 39.613014 | 108 | 0.612362 | false | 4.078061 | false | false | false |
zhanghenry/stocks | tests/utils_tests/test_http.py | 220 | 8102 | from __future__ import unicode_literals
import sys
import unittest
from datetime import datetime
from django.utils import http, six
from django.utils.datastructures import MultiValueDict
class TestUtilsHttp(unittest.TestCase):
def test_same_origin_true(self):
# Identical
self.assertTrue(http.same_origin('http://foo.com/', 'http://foo.com/'))
# One with trailing slash - see #15617
self.assertTrue(http.same_origin('http://foo.com', 'http://foo.com/'))
self.assertTrue(http.same_origin('http://foo.com/', 'http://foo.com'))
# With port
self.assertTrue(http.same_origin('https://foo.com:8000', 'https://foo.com:8000/'))
# No port given but according to RFC6454 still the same origin
self.assertTrue(http.same_origin('http://foo.com', 'http://foo.com:80/'))
self.assertTrue(http.same_origin('https://foo.com', 'https://foo.com:443/'))
def test_same_origin_false(self):
# Different scheme
self.assertFalse(http.same_origin('http://foo.com', 'https://foo.com'))
# Different host
self.assertFalse(http.same_origin('http://foo.com', 'http://goo.com'))
# Different host again
self.assertFalse(http.same_origin('http://foo.com', 'http://foo.com.evil.com'))
# Different port
self.assertFalse(http.same_origin('http://foo.com:8000', 'http://foo.com:8001'))
# No port given
self.assertFalse(http.same_origin('http://foo.com', 'http://foo.com:8000/'))
self.assertFalse(http.same_origin('https://foo.com', 'https://foo.com:8000/'))
def test_urlencode(self):
# 2-tuples (the norm)
result = http.urlencode((('a', 1), ('b', 2), ('c', 3)))
self.assertEqual(result, 'a=1&b=2&c=3')
# A dictionary
result = http.urlencode({'a': 1, 'b': 2, 'c': 3})
acceptable_results = [
# Need to allow all of these as dictionaries have to be treated as
# unordered
'a=1&b=2&c=3',
'a=1&c=3&b=2',
'b=2&a=1&c=3',
'b=2&c=3&a=1',
'c=3&a=1&b=2',
'c=3&b=2&a=1'
]
self.assertIn(result, acceptable_results)
result = http.urlencode({'a': [1, 2]}, doseq=False)
self.assertEqual(result, 'a=%5B%271%27%2C+%272%27%5D')
result = http.urlencode({'a': [1, 2]}, doseq=True)
self.assertEqual(result, 'a=1&a=2')
result = http.urlencode({'a': []}, doseq=True)
self.assertEqual(result, '')
# A MultiValueDict
result = http.urlencode(MultiValueDict({
'name': ['Adrian', 'Simon'],
'position': ['Developer']
}), doseq=True)
acceptable_results = [
# MultiValueDicts are similarly unordered
'name=Adrian&name=Simon&position=Developer',
'position=Developer&name=Adrian&name=Simon'
]
self.assertIn(result, acceptable_results)
def test_base36(self):
# reciprocity works
for n in [0, 1, 1000, 1000000]:
self.assertEqual(n, http.base36_to_int(http.int_to_base36(n)))
if six.PY2:
self.assertEqual(sys.maxint, http.base36_to_int(http.int_to_base36(sys.maxint)))
# bad input
self.assertRaises(ValueError, http.int_to_base36, -1)
if six.PY2:
self.assertRaises(ValueError, http.int_to_base36, sys.maxint + 1)
for n in ['1', 'foo', {1: 2}, (1, 2, 3), 3.141]:
self.assertRaises(TypeError, http.int_to_base36, n)
for n in ['#', ' ']:
self.assertRaises(ValueError, http.base36_to_int, n)
for n in [123, {1: 2}, (1, 2, 3), 3.141]:
self.assertRaises(TypeError, http.base36_to_int, n)
# more explicit output testing
for n, b36 in [(0, '0'), (1, '1'), (42, '16'), (818469960, 'django')]:
self.assertEqual(http.int_to_base36(n), b36)
self.assertEqual(http.base36_to_int(b36), n)
def test_is_safe_url(self):
for bad_url in ('http://example.com',
'http:///example.com',
'https://example.com',
'ftp://exampel.com',
r'\\example.com',
r'\\\example.com',
r'/\\/example.com',
r'\\\example.com',
r'\\example.com',
r'\\//example.com',
r'/\/example.com',
r'\/example.com',
r'/\example.com',
'http:///example.com',
'http:/\//example.com',
'http:\/example.com',
'http:/\example.com',
'javascript:alert("XSS")',
'\njavascript:alert(x)',
'\x08//example.com',
'\n'):
self.assertFalse(http.is_safe_url(bad_url, host='testserver'), "%s should be blocked" % bad_url)
for good_url in ('/view/?param=http://example.com',
'/view/?param=https://example.com',
'/view?param=ftp://exampel.com',
'view/?param=//example.com',
'https://testserver/',
'HTTPS://testserver/',
'//testserver/',
'/url%20with%20spaces/'):
self.assertTrue(http.is_safe_url(good_url, host='testserver'), "%s should be allowed" % good_url)
def test_urlsafe_base64_roundtrip(self):
bytestring = b'foo'
encoded = http.urlsafe_base64_encode(bytestring)
decoded = http.urlsafe_base64_decode(encoded)
self.assertEqual(bytestring, decoded)
def test_urlquote(self):
self.assertEqual(http.urlquote('Paris & Orl\xe9ans'),
'Paris%20%26%20Orl%C3%A9ans')
self.assertEqual(http.urlquote('Paris & Orl\xe9ans', safe="&"),
'Paris%20&%20Orl%C3%A9ans')
self.assertEqual(
http.urlunquote('Paris%20%26%20Orl%C3%A9ans'),
'Paris & Orl\xe9ans')
self.assertEqual(
http.urlunquote('Paris%20&%20Orl%C3%A9ans'),
'Paris & Orl\xe9ans')
self.assertEqual(http.urlquote_plus('Paris & Orl\xe9ans'),
'Paris+%26+Orl%C3%A9ans')
self.assertEqual(http.urlquote_plus('Paris & Orl\xe9ans', safe="&"),
'Paris+&+Orl%C3%A9ans')
self.assertEqual(
http.urlunquote_plus('Paris+%26+Orl%C3%A9ans'),
'Paris & Orl\xe9ans')
self.assertEqual(
http.urlunquote_plus('Paris+&+Orl%C3%A9ans'),
'Paris & Orl\xe9ans')
class ETagProcessingTests(unittest.TestCase):
def test_parsing(self):
etags = http.parse_etags(r'"", "etag", "e\"t\"ag", "e\\tag", W/"weak"')
self.assertEqual(etags, ['', 'etag', 'e"t"ag', r'e\tag', 'weak'])
def test_quoting(self):
quoted_etag = http.quote_etag(r'e\t"ag')
self.assertEqual(quoted_etag, r'"e\\t\"ag"')
class HttpDateProcessingTests(unittest.TestCase):
def test_http_date(self):
t = 1167616461.0
self.assertEqual(http.http_date(t), 'Mon, 01 Jan 2007 01:54:21 GMT')
def test_cookie_date(self):
t = 1167616461.0
self.assertEqual(http.cookie_date(t), 'Mon, 01-Jan-2007 01:54:21 GMT')
def test_parsing_rfc1123(self):
parsed = http.parse_http_date('Sun, 06 Nov 1994 08:49:37 GMT')
self.assertEqual(datetime.utcfromtimestamp(parsed),
datetime(1994, 11, 6, 8, 49, 37))
def test_parsing_rfc850(self):
parsed = http.parse_http_date('Sunday, 06-Nov-94 08:49:37 GMT')
self.assertEqual(datetime.utcfromtimestamp(parsed),
datetime(1994, 11, 6, 8, 49, 37))
def test_parsing_asctime(self):
parsed = http.parse_http_date('Sun Nov 6 08:49:37 1994')
self.assertEqual(datetime.utcfromtimestamp(parsed),
datetime(1994, 11, 6, 8, 49, 37))
| bsd-3-clause | -5,960,138,391,426,787,000 | 40.979275 | 109 | 0.540731 | false | 3.452066 | true | false | false |
andela-bojengwa/talk | venv/lib/python2.7/site-packages/pip/_vendor/requests/sessions.py | 204 | 24273 | # -*- coding: utf-8 -*-
"""
requests.session
~~~~~~~~~~~~~~~~
This module provides a Session object to manage and persist settings across
requests (cookies, auth, proxies).
"""
import os
from collections import Mapping
from datetime import datetime
from .auth import _basic_auth_str
from .compat import cookielib, OrderedDict, urljoin, urlparse
from .cookies import (
cookiejar_from_dict, extract_cookies_to_jar, RequestsCookieJar, merge_cookies)
from .models import Request, PreparedRequest, DEFAULT_REDIRECT_LIMIT
from .hooks import default_hooks, dispatch_hook
from .utils import to_key_val_list, default_headers, to_native_string
from .exceptions import (
TooManyRedirects, InvalidSchema, ChunkedEncodingError, ContentDecodingError)
from .packages.urllib3._collections import RecentlyUsedContainer
from .structures import CaseInsensitiveDict
from .adapters import HTTPAdapter
from .utils import (
requote_uri, get_environ_proxies, get_netrc_auth, should_bypass_proxies,
get_auth_from_url
)
from .status_codes import codes
# formerly defined here, reexposed here for backward compatibility
from .models import REDIRECT_STATI
REDIRECT_CACHE_SIZE = 1000
def merge_setting(request_setting, session_setting, dict_class=OrderedDict):
"""
Determines appropriate setting for a given request, taking into account the
explicit setting on that request, and the setting in the session. If a
setting is a dictionary, they will be merged together using `dict_class`
"""
if session_setting is None:
return request_setting
if request_setting is None:
return session_setting
# Bypass if not a dictionary (e.g. verify)
if not (
isinstance(session_setting, Mapping) and
isinstance(request_setting, Mapping)
):
return request_setting
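    # Start from the session-level settings and let request-level values
    # override them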
merged_setting = dict_class(to_key_val_list(session_setting))
merged_setting.update(to_key_val_list(request_setting))
# Remove keys that are set to None.
for (k, v) in request_setting.items():
if v is None:
del merged_setting[k]
merged_setting = dict((k, v) for (k, v) in merged_setting.items() if v is not None)
return merged_setting
def merge_hooks(request_hooks, session_hooks, dict_class=OrderedDict):
"""
Properly merges both requests and session hooks.
This is necessary because when request_hooks == {'response': []}, the
merge breaks Session hooks entirely.
"""
if session_hooks is None or session_hooks.get('response') == []:
return request_hooks
if request_hooks is None or request_hooks.get('response') == []:
return session_hooks
return merge_setting(request_hooks, session_hooks, dict_class)
class SessionRedirectMixin(object):
def resolve_redirects(self, resp, req, stream=False, timeout=None,
verify=True, cert=None, proxies=None):
"""Receives a Response. Returns a generator of Responses."""
i = 0
hist = [] # keep track of history
while resp.is_redirect:
prepared_request = req.copy()
if i > 0:
# Update history and keep track of redirects.
hist.append(resp)
new_hist = list(hist)
resp.history = new_hist
try:
resp.content # Consume socket so it can be released
except (ChunkedEncodingError, ContentDecodingError, RuntimeError):
resp.raw.read(decode_content=False)
if i >= self.max_redirects:
raise TooManyRedirects('Exceeded %s redirects.' % self.max_redirects)
# Release the connection back into the pool.
resp.close()
url = resp.headers['location']
method = req.method
# Handle redirection without scheme (see: RFC 1808 Section 4)
if url.startswith('//'):
parsed_rurl = urlparse(resp.url)
url = '%s:%s' % (parsed_rurl.scheme, url)
# The scheme should be lower case...
parsed = urlparse(url)
url = parsed.geturl()
# Facilitate relative 'location' headers, as allowed by RFC 7231.
# (e.g. '/path/to/resource' instead of 'http://domain.tld/path/to/resource')
# Compliant with RFC3986, we percent encode the url.
if not parsed.netloc:
url = urljoin(resp.url, requote_uri(url))
else:
url = requote_uri(url)
prepared_request.url = to_native_string(url)
# Cache the url, unless it redirects to itself.
if resp.is_permanent_redirect and req.url != prepared_request.url:
self.redirect_cache[req.url] = prepared_request.url
# http://tools.ietf.org/html/rfc7231#section-6.4.4
if (resp.status_code == codes.see_other and
method != 'HEAD'):
method = 'GET'
# Do what the browsers do, despite standards...
# First, turn 302s into GETs.
if resp.status_code == codes.found and method != 'HEAD':
method = 'GET'
# Second, if a POST is responded to with a 301, turn it into a GET.
# This bizarre behaviour is explained in Issue 1704.
if resp.status_code == codes.moved and method == 'POST':
method = 'GET'
prepared_request.method = method
# https://github.com/kennethreitz/requests/issues/1084
if resp.status_code not in (codes.temporary_redirect, codes.permanent_redirect):
if 'Content-Length' in prepared_request.headers:
del prepared_request.headers['Content-Length']
prepared_request.body = None
headers = prepared_request.headers
try:
del headers['Cookie']
except KeyError:
pass
extract_cookies_to_jar(prepared_request._cookies, prepared_request, resp.raw)
prepared_request._cookies.update(self.cookies)
prepared_request.prepare_cookies(prepared_request._cookies)
# Rebuild auth and proxy information.
proxies = self.rebuild_proxies(prepared_request, proxies)
self.rebuild_auth(prepared_request, resp)
# Override the original request.
req = prepared_request
resp = self.send(
req,
stream=stream,
timeout=timeout,
verify=verify,
cert=cert,
proxies=proxies,
allow_redirects=False,
)
extract_cookies_to_jar(self.cookies, prepared_request, resp.raw)
i += 1
yield resp
def rebuild_auth(self, prepared_request, response):
"""
When being redirected we may want to strip authentication from the
request to avoid leaking credentials. This method intelligently removes
and reapplies authentication where possible to avoid credential loss.
"""
headers = prepared_request.headers
url = prepared_request.url
if 'Authorization' in headers:
# If we get redirected to a new host, we should strip out any
# authentication headers.
original_parsed = urlparse(response.request.url)
redirect_parsed = urlparse(url)
if (original_parsed.hostname != redirect_parsed.hostname):
del headers['Authorization']
# .netrc might have more auth for us on our new host.
new_auth = get_netrc_auth(url) if self.trust_env else None
if new_auth is not None:
prepared_request.prepare_auth(new_auth)
return
def rebuild_proxies(self, prepared_request, proxies):
"""
This method re-evaluates the proxy configuration by considering the
environment variables. If we are redirected to a URL covered by
NO_PROXY, we strip the proxy configuration. Otherwise, we set missing
proxy keys for this URL (in case they were stripped by a previous
redirect).
This method also replaces the Proxy-Authorization header where
necessary.
"""
headers = prepared_request.headers
url = prepared_request.url
scheme = urlparse(url).scheme
new_proxies = proxies.copy() if proxies is not None else {}
if self.trust_env and not should_bypass_proxies(url):
environ_proxies = get_environ_proxies(url)
proxy = environ_proxies.get(scheme)
if proxy:
new_proxies.setdefault(scheme, environ_proxies[scheme])
if 'Proxy-Authorization' in headers:
del headers['Proxy-Authorization']
try:
username, password = get_auth_from_url(new_proxies[scheme])
except KeyError:
username, password = None, None
if username and password:
headers['Proxy-Authorization'] = _basic_auth_str(username, password)
return new_proxies
class Session(SessionRedirectMixin):
"""A Requests session.
Provides cookie persistence, connection-pooling, and configuration.
Basic Usage::
>>> import requests
>>> s = requests.Session()
>>> s.get('http://httpbin.org/get')
200
"""
__attrs__ = [
'headers', 'cookies', 'auth', 'proxies', 'hooks', 'params', 'verify',
'cert', 'prefetch', 'adapters', 'stream', 'trust_env',
'max_redirects',
]
def __init__(self):
#: A case-insensitive dictionary of headers to be sent on each
#: :class:`Request <Request>` sent from this
#: :class:`Session <Session>`.
self.headers = default_headers()
#: Default Authentication tuple or object to attach to
#: :class:`Request <Request>`.
self.auth = None
#: Dictionary mapping protocol to the URL of the proxy (e.g.
#: {'http': 'foo.bar:3128'}) to be used on each
#: :class:`Request <Request>`.
self.proxies = {}
#: Event-handling hooks.
self.hooks = default_hooks()
#: Dictionary of querystring data to attach to each
#: :class:`Request <Request>`. The dictionary values may be lists for
#: representing multivalued query parameters.
self.params = {}
#: Stream response content default.
self.stream = False
#: SSL Verification default.
self.verify = True
#: SSL certificate default.
self.cert = None
#: Maximum number of redirects allowed. If the request exceeds this
#: limit, a :class:`TooManyRedirects` exception is raised.
self.max_redirects = DEFAULT_REDIRECT_LIMIT
#: Should we trust the environment?
self.trust_env = True
#: A CookieJar containing all currently outstanding cookies set on this
#: session. By default it is a
#: :class:`RequestsCookieJar <requests.cookies.RequestsCookieJar>`, but
#: may be any other ``cookielib.CookieJar`` compatible object.
self.cookies = cookiejar_from_dict({})
# Default connection adapters.
self.adapters = OrderedDict()
self.mount('https://', HTTPAdapter())
self.mount('http://', HTTPAdapter())
# Only store 1000 redirects to prevent using infinite memory
self.redirect_cache = RecentlyUsedContainer(REDIRECT_CACHE_SIZE)
def __enter__(self):
return self
def __exit__(self, *args):
self.close()
def prepare_request(self, request):
"""Constructs a :class:`PreparedRequest <PreparedRequest>` for
transmission and returns it. The :class:`PreparedRequest` has settings
merged from the :class:`Request <Request>` instance and those of the
:class:`Session`.
:param request: :class:`Request` instance to prepare with this
session's settings.
"""
cookies = request.cookies or {}
# Bootstrap CookieJar.
if not isinstance(cookies, cookielib.CookieJar):
cookies = cookiejar_from_dict(cookies)
# Merge with session cookies
merged_cookies = merge_cookies(
merge_cookies(RequestsCookieJar(), self.cookies), cookies)
# Set environment's basic authentication if not explicitly set.
auth = request.auth
if self.trust_env and not auth and not self.auth:
auth = get_netrc_auth(request.url)
p = PreparedRequest()
p.prepare(
method=request.method.upper(),
url=request.url,
files=request.files,
data=request.data,
json=request.json,
headers=merge_setting(request.headers, self.headers, dict_class=CaseInsensitiveDict),
params=merge_setting(request.params, self.params),
auth=merge_setting(auth, self.auth),
cookies=merged_cookies,
hooks=merge_hooks(request.hooks, self.hooks),
)
return p
def request(self, method, url,
params=None,
data=None,
headers=None,
cookies=None,
files=None,
auth=None,
timeout=None,
allow_redirects=True,
proxies=None,
hooks=None,
stream=None,
verify=None,
cert=None,
json=None):
"""Constructs a :class:`Request <Request>`, prepares it and sends it.
Returns :class:`Response <Response>` object.
:param method: method for the new :class:`Request` object.
:param url: URL for the new :class:`Request` object.
:param params: (optional) Dictionary or bytes to be sent in the query
string for the :class:`Request`.
:param data: (optional) Dictionary or bytes to send in the body of the
:class:`Request`.
:param json: (optional) json to send in the body of the
:class:`Request`.
:param headers: (optional) Dictionary of HTTP Headers to send with the
:class:`Request`.
:param cookies: (optional) Dict or CookieJar object to send with the
:class:`Request`.
:param files: (optional) Dictionary of ``'filename': file-like-objects``
for multipart encoding upload.
:param auth: (optional) Auth tuple or callable to enable
Basic/Digest/Custom HTTP Auth.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a (`connect timeout, read
timeout <user/advanced.html#timeouts>`_) tuple.
:type timeout: float or tuple
:param allow_redirects: (optional) Set to True by default.
:type allow_redirects: bool
:param proxies: (optional) Dictionary mapping protocol to the URL of
the proxy.
:param stream: (optional) whether to immediately download the response
content. Defaults to ``False``.
:param verify: (optional) if ``True``, the SSL cert will be verified.
A CA_BUNDLE path can also be provided.
:param cert: (optional) if String, path to ssl client cert file (.pem).
If Tuple, ('cert', 'key') pair.
"""
method = to_native_string(method)
# Create the Request.
req = Request(
method = method.upper(),
url = url,
headers = headers,
files = files,
data = data or {},
json = json,
params = params or {},
auth = auth,
cookies = cookies,
hooks = hooks,
)
prep = self.prepare_request(req)
proxies = proxies or {}
settings = self.merge_environment_settings(
prep.url, proxies, stream, verify, cert
)
# Send the request.
send_kwargs = {
'timeout': timeout,
'allow_redirects': allow_redirects,
}
send_kwargs.update(settings)
resp = self.send(prep, **send_kwargs)
return resp
def get(self, url, **kwargs):
"""Sends a GET request. Returns :class:`Response` object.
:param url: URL for the new :class:`Request` object.
:param \*\*kwargs: Optional arguments that ``request`` takes.
"""
kwargs.setdefault('allow_redirects', True)
return self.request('GET', url, **kwargs)
def options(self, url, **kwargs):
"""Sends a OPTIONS request. Returns :class:`Response` object.
:param url: URL for the new :class:`Request` object.
:param \*\*kwargs: Optional arguments that ``request`` takes.
"""
kwargs.setdefault('allow_redirects', True)
return self.request('OPTIONS', url, **kwargs)
def head(self, url, **kwargs):
"""Sends a HEAD request. Returns :class:`Response` object.
:param url: URL for the new :class:`Request` object.
:param \*\*kwargs: Optional arguments that ``request`` takes.
"""
kwargs.setdefault('allow_redirects', False)
return self.request('HEAD', url, **kwargs)
def post(self, url, data=None, json=None, **kwargs):
"""Sends a POST request. Returns :class:`Response` object.
:param url: URL for the new :class:`Request` object.
:param data: (optional) Dictionary, bytes, or file-like object to send in the body of the :class:`Request`.
:param json: (optional) json to send in the body of the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
"""
return self.request('POST', url, data=data, json=json, **kwargs)
def put(self, url, data=None, **kwargs):
"""Sends a PUT request. Returns :class:`Response` object.
:param url: URL for the new :class:`Request` object.
:param data: (optional) Dictionary, bytes, or file-like object to send in the body of the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
"""
return self.request('PUT', url, data=data, **kwargs)
def patch(self, url, data=None, **kwargs):
"""Sends a PATCH request. Returns :class:`Response` object.
:param url: URL for the new :class:`Request` object.
:param data: (optional) Dictionary, bytes, or file-like object to send in the body of the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
"""
return self.request('PATCH', url, data=data, **kwargs)
def delete(self, url, **kwargs):
"""Sends a DELETE request. Returns :class:`Response` object.
:param url: URL for the new :class:`Request` object.
:param \*\*kwargs: Optional arguments that ``request`` takes.
"""
return self.request('DELETE', url, **kwargs)
def send(self, request, **kwargs):
"""Send a given PreparedRequest."""
# Set defaults that the hooks can utilize to ensure they always have
# the correct parameters to reproduce the previous request.
kwargs.setdefault('stream', self.stream)
kwargs.setdefault('verify', self.verify)
kwargs.setdefault('cert', self.cert)
kwargs.setdefault('proxies', self.proxies)
# It's possible that users might accidentally send a Request object.
# Guard against that specific failure case.
if not isinstance(request, PreparedRequest):
raise ValueError('You can only send PreparedRequests.')
checked_urls = set()
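        # Resolve any cached permanent redirects up front; the visited set
        # guards against a loop in the redirect cache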
while request.url in self.redirect_cache:
checked_urls.add(request.url)
new_url = self.redirect_cache.get(request.url)
if new_url in checked_urls:
break
request.url = new_url
# Set up variables needed for resolve_redirects and dispatching of hooks
allow_redirects = kwargs.pop('allow_redirects', True)
stream = kwargs.get('stream')
timeout = kwargs.get('timeout')
verify = kwargs.get('verify')
cert = kwargs.get('cert')
proxies = kwargs.get('proxies')
hooks = request.hooks
# Get the appropriate adapter to use
adapter = self.get_adapter(url=request.url)
# Start time (approximately) of the request
start = datetime.utcnow()
# Send the request
r = adapter.send(request, **kwargs)
# Total elapsed time of the request (approximately)
r.elapsed = datetime.utcnow() - start
# Response manipulation hooks
r = dispatch_hook('response', hooks, r, **kwargs)
# Persist cookies
if r.history:
# If the hooks create history then we want those cookies too
for resp in r.history:
extract_cookies_to_jar(self.cookies, resp.request, resp.raw)
extract_cookies_to_jar(self.cookies, request, r.raw)
# Redirect resolving generator.
gen = self.resolve_redirects(r, request,
stream=stream,
timeout=timeout,
verify=verify,
cert=cert,
proxies=proxies)
# Resolve redirects if allowed.
history = [resp for resp in gen] if allow_redirects else []
# Shuffle things around if there's history.
if history:
# Insert the first (original) request at the start
history.insert(0, r)
# Get the last request made
r = history.pop()
r.history = history
if not stream:
r.content
return r
def merge_environment_settings(self, url, proxies, stream, verify, cert):
"""Check the environment and merge it with some settings."""
# Gather clues from the surrounding environment.
if self.trust_env:
# Set environment's proxies.
env_proxies = get_environ_proxies(url) or {}
for (k, v) in env_proxies.items():
proxies.setdefault(k, v)
# Look for requests environment configuration and be compatible
# with cURL.
if verify is True or verify is None:
verify = (os.environ.get('REQUESTS_CA_BUNDLE') or
os.environ.get('CURL_CA_BUNDLE'))
# Merge all the kwargs.
proxies = merge_setting(proxies, self.proxies)
stream = merge_setting(stream, self.stream)
verify = merge_setting(verify, self.verify)
cert = merge_setting(cert, self.cert)
return {'verify': verify, 'proxies': proxies, 'stream': stream,
'cert': cert}
def get_adapter(self, url):
"""Returns the appropriate connnection adapter for the given URL."""
for (prefix, adapter) in self.adapters.items():
if url.lower().startswith(prefix):
return adapter
# Nothing matches :-/
raise InvalidSchema("No connection adapters were found for '%s'" % url)
def close(self):
"""Closes all adapters and as such the session"""
for v in self.adapters.values():
v.close()
def mount(self, prefix, adapter):
"""Registers a connection adapter to a prefix.
Adapters are sorted in descending order by key length."""
self.adapters[prefix] = adapter
keys_to_move = [k for k in self.adapters if len(k) < len(prefix)]
for key in keys_to_move:
self.adapters[key] = self.adapters.pop(key)
def __getstate__(self):
state = dict((attr, getattr(self, attr, None)) for attr in self.__attrs__)
state['redirect_cache'] = dict(self.redirect_cache)
return state
def __setstate__(self, state):
redirect_cache = state.pop('redirect_cache', {})
for attr, value in state.items():
setattr(self, attr, value)
self.redirect_cache = RecentlyUsedContainer(REDIRECT_CACHE_SIZE)
for redirect, to in redirect_cache.items():
self.redirect_cache[redirect] = to
def session():
"""Returns a :class:`Session` for context-management."""
return Session()
| mit | 4,329,638,928,754,115,600 | 34.589443 | 115 | 0.604194 | false | 4.437294 | false | false | false |
bala4901/odoo | addons/l10n_cn/__openerp__.py | 91 | 1827 | # -*- encoding: utf-8 -*-
##############################################################################
#
# Copyright (C) 2009 Gábor Dukai
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
{
'name': '中国会计科目表 - Accounting',
'version': '1.0',
'category': 'Localization/Account Charts',
'author': 'openerp-china.org',
'maintainer':'openerp-china.org',
'website':'http://openerp-china.org',
'url': 'http://code.google.com/p/openerp-china/source/browse/#svn/trunk/l10n_cn',
'description': """
添加中文省份数据
科目类型\会计科目表模板\增值税\辅助核算类别\管理会计凭证簿\财务会计凭证簿
============================================================
""",
'depends': ['base','account'],
'demo': [],
'data': [
'account_chart.xml',
'l10n_chart_cn_wizard.xml',
'base_data.xml',
],
'license': 'GPL-3',
'auto_install': False,
'installable': True,
'images': ['images/config_chart_l10n_cn.jpeg','images/l10n_cn_chart.jpeg'],
}
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 | 198,857,496,429,203,550 | 35.765957 | 85 | 0.573495 | false | 3.348837 | false | false | false |
buzzer/pr2_imagination | race_simulation_run/tools/race_fluent_modification.py | 2 | 6269 | #! /usr/bin/env python
import yaml
import random
import roslib
class Fluent(yaml.YAMLObject):
yaml_tag='!Fluent'
def __setstate__(self, state):
"""
        PyYAML does not call __init__; this is an init replacement.
"""
self.properties = []
self.Class_Instance = state['Class_Instance']
# fix for bug:
if type(self.Class_Instance[0]) == type(False):
self.Class_Instance[0] = '"On"'
if self.Class_Instance[0] == 'On':
self.Class_Instance[0] = '"On"'
self.StartTime = state['StartTime']
self.FinishTime = state['FinishTime']
for preProp in state['Properties']:
self.properties.append(Property(preProp[0], preProp[1], preProp[2]))
def toYamlString(self, line_break='\n'):
yString = ''
yString += '!Fluent' + line_break
yString += '{:<16} {:<20}'.format('Class_Instance: ', '[' + self.Class_Instance[0] + ', ' + self.Class_Instance[1] + ']') + line_break
yString += '{:<16} {:<20}'.format('StartTime: ', str(self.StartTime)) + line_break
yString += '{:<16} {:<20}'.format('FinishTime: ', str(self.FinishTime)) + line_break
if len(self.properties) > 0:
yString += 'Properties:' + line_break
for prop in self.properties:
yString += prop.toYamlString()
else:
yString += 'Properties: []' + line_break
return yString
def __str__(self):
return self.toYamlString()
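# For reference, the YAML a Fluent round-trips through looks roughly like this
# (class, instance and property values below are purely illustrative):
#   !Fluent
#   Class_Instance:  [Table, table1]
#   StartTime:       0
#   FinishTime:      100
#   Properties:
#    - [hasPosX, "float", 1.5]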
class Property:
def __init__(self, role_type, filler_type, role_filler):
self.role_type = role_type
self.filler_type = filler_type
self.role_filler = role_filler
for key, value in self.__dict__.items():
if type(value) == type(False):
setattr(self, key, 'On')
#if value == 'On':
# setattr(self, key, '"On"')
def toYamlString(self, line_break='\n'):
return ' - [{}, {}, {}]'.format(self.role_type, '"' + self.filler_type + '"', self.role_filler) + line_break
def __str__(self):
return self.toYamlString()
class FluentPoseModification(Fluent):
yaml_tag='!FluentPoseModification'
GROUP_CHOICE_MAP = {}
def __setstate__(self,state):
"""
        PyYAML does not call __init__; this is an init replacement.
"""
# apply group state...
if not FluentPoseModification.GROUP_CHOICE_MAP.has_key(state['Group']):
choice = random.randint(0, state['Choices']-1)
FluentPoseModification.GROUP_CHOICE_MAP[state['Group']] = choice
self.choice = FluentPoseModification.GROUP_CHOICE_MAP[state['Group']]
self.Instance = state['Instance']
self.Modifications = state['Modifications']
self.Attachments = state['Attachments']
# fix for bug:
#if type(self.Class_Instance[0]) == type(False):
# self.Class_Instance[0] = '"On"'
#if self.Class_Instance[0] == 'On':
# self.Class_Instance[0] = '"On"'
#self.StartTime = state['StartTime']
#self.FinishTime = state['FinishTime']
def getMod(self, propertyString):
for mod in self.Modifications:
if mod[0] == propertyString:
return mod[1][self.choice]
return 0
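# Note: all FluentPoseModification entries sharing the same 'Group' reuse one
# randomly drawn choice index, so an instance and everything listed in its
# 'Attachments' receive identical property offsets from getMod().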
if __name__ == '__main__':
# filepath = roslib.packages.get_pkg_dir('race_simulation_run') + '/data/test.yaml'
# changedFilepath = roslib.packages.get_pkg_dir('race_simulation_run') + '/data/output.yaml'
# replacementsPath = roslib.packages.get_pkg_dir('race_simulation_run') + '/data/replace.yaml'
# # load fluents:
# fluents = []
# with open(filepath) as f:
# for fluent in yaml.load_all(f):
# fluents.append(fluent)
# # load modifications:
# fluentPoseModifications = []
# with open(replacementsPath) as f:
# for poseMod in yaml.load_all(f):
# fluentPoseModifications.append(poseMod)
# # modify fluents poses:
# for fluent in fluents[:]:
# for poseMod in fluentPoseModifications:
# if (poseMod.Instance == fluent.Class_Instance[1]) \
# or (fluent.Class_Instance[1] in poseMod.Attachments):
# for prop in fluent.properties:
# if poseMod.getMod(prop.role_type) != 0:
# prop.role_filler += poseMod.getMod(prop.role_type)
# # generate new file:
# with open(changedFilepath, 'w') as cf:
# string = ('---\n').join(str(fluent) for fluent in fluents)
# cf.write(string)
# load initial knowledge and spawn objects fluents
initialPath = roslib.packages.get_pkg_dir('race_static_knowledge') + '/data/race_initial_knowledge.yaml'
spawnPath = roslib.packages.get_pkg_dir('race_simulation_run') + '/data/spawn_objects.yaml'
replacementsPath = roslib.packages.get_pkg_dir('race_simulation_run') + '/data/replace.yaml'
#replacementsPath = roslib.packages.get_pkg_dir('race_simulation_run') + '/data/replace_simple.yaml'
tempFilePath = roslib.packages.get_pkg_dir('race_simulation_run') + '/data/output.yaml'
# load fluents:
fluents = []
with open(initialPath) as f:
for fluent in yaml.load_all(f):
fluents.append(fluent)
with open(spawnPath) as f:
for fluent in yaml.load_all(f):
fluents.append(fluent)
# load modifications:
fluentPoseModifications = []
with open(replacementsPath) as f:
for poseMod in yaml.load_all(f):
fluentPoseModifications.append(poseMod)
# modify fluents poses:
for fluent in fluents[:]:
for poseMod in fluentPoseModifications:
if (poseMod.Instance == fluent.Class_Instance[1]) \
or (fluent.Class_Instance[1] in poseMod.Attachments):
for prop in fluent.properties:
if poseMod.getMod(prop.role_type) != 0:
prop.role_filler += poseMod.getMod(prop.role_type)
# generate new file:
with open(tempFilePath, 'w') as cf:
string = ('---\n').join(str(fluent) for fluent in fluents)
cf.write(string)
| bsd-2-clause | -449,458,975,980,125,250 | 36.094675 | 142 | 0.580156 | false | 3.461623 | false | false | false |
nirmeshk/oh-mainline | vendor/packages/twisted/twisted/protocols/sip.py | 20 | 41745 | # -*- test-case-name: twisted.test.test_sip -*-
# Copyright (c) Twisted Matrix Laboratories.
# See LICENSE for details.
"""Session Initialization Protocol.
Documented in RFC 2543.
[Superseded by RFC 3261]
This module contains a deprecated implementation of HTTP Digest authentication.
See L{twisted.cred.credentials} and L{twisted.cred._digest} for its new home.
"""
# system imports
import socket, time, sys, random, warnings
from zope.interface import implements, Interface
# twisted imports
from twisted.python import log, util
from twisted.python.deprecate import deprecated
from twisted.python.versions import Version
from twisted.python.hashlib import md5
from twisted.internet import protocol, defer, reactor
from twisted import cred
import twisted.cred.error
from twisted.cred.credentials import UsernameHashedPassword, UsernamePassword
# sibling imports
from twisted.protocols import basic
PORT = 5060
# SIP headers have short forms
shortHeaders = {"call-id": "i",
"contact": "m",
"content-encoding": "e",
"content-length": "l",
"content-type": "c",
"from": "f",
"subject": "s",
"to": "t",
"via": "v",
}
longHeaders = {}
for k, v in shortHeaders.items():
longHeaders[v] = k
del k, v
statusCodes = {
100: "Trying",
180: "Ringing",
181: "Call Is Being Forwarded",
182: "Queued",
183: "Session Progress",
200: "OK",
300: "Multiple Choices",
301: "Moved Permanently",
302: "Moved Temporarily",
303: "See Other",
305: "Use Proxy",
380: "Alternative Service",
400: "Bad Request",
401: "Unauthorized",
402: "Payment Required",
403: "Forbidden",
404: "Not Found",
405: "Method Not Allowed",
406: "Not Acceptable",
407: "Proxy Authentication Required",
408: "Request Timeout",
409: "Conflict", # Not in RFC3261
410: "Gone",
411: "Length Required", # Not in RFC3261
413: "Request Entity Too Large",
414: "Request-URI Too Large",
415: "Unsupported Media Type",
416: "Unsupported URI Scheme",
420: "Bad Extension",
421: "Extension Required",
423: "Interval Too Brief",
480: "Temporarily Unavailable",
481: "Call/Transaction Does Not Exist",
482: "Loop Detected",
483: "Too Many Hops",
484: "Address Incomplete",
485: "Ambiguous",
486: "Busy Here",
487: "Request Terminated",
488: "Not Acceptable Here",
491: "Request Pending",
493: "Undecipherable",
500: "Internal Server Error",
501: "Not Implemented",
502: "Bad Gateway", # no donut
503: "Service Unavailable",
504: "Server Time-out",
505: "SIP Version not supported",
513: "Message Too Large",
600: "Busy Everywhere",
603: "Decline",
604: "Does not exist anywhere",
606: "Not Acceptable",
}
specialCases = {
'cseq': 'CSeq',
'call-id': 'Call-ID',
'www-authenticate': 'WWW-Authenticate',
}
def dashCapitalize(s):
    ''' Capitalize a string, making sure to treat - as a word separator '''
return '-'.join([ x.capitalize() for x in s.split('-')])
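# e.g. dashCapitalize('content-length') -> 'Content-Length', while
# dashCapitalize('www-authenticate') -> 'Www-Authenticate', which is why the
# specialCases mapping above overrides such headers.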
def unq(s):
if s[0] == s[-1] == '"':
return s[1:-1]
return s
def DigestCalcHA1(
pszAlg,
pszUserName,
pszRealm,
pszPassword,
pszNonce,
pszCNonce,
):
m = md5()
m.update(pszUserName)
m.update(":")
m.update(pszRealm)
m.update(":")
m.update(pszPassword)
HA1 = m.digest()
if pszAlg == "md5-sess":
m = md5()
m.update(HA1)
m.update(":")
m.update(pszNonce)
m.update(":")
m.update(pszCNonce)
HA1 = m.digest()
return HA1.encode('hex')
DigestCalcHA1 = deprecated(Version("Twisted", 9, 0, 0))(DigestCalcHA1)
def DigestCalcResponse(
HA1,
pszNonce,
pszNonceCount,
pszCNonce,
pszQop,
pszMethod,
pszDigestUri,
pszHEntity,
):
m = md5()
m.update(pszMethod)
m.update(":")
m.update(pszDigestUri)
if pszQop == "auth-int":
m.update(":")
m.update(pszHEntity)
HA2 = m.digest().encode('hex')
m = md5()
m.update(HA1)
m.update(":")
m.update(pszNonce)
m.update(":")
if pszNonceCount and pszCNonce: # pszQop:
m.update(pszNonceCount)
m.update(":")
m.update(pszCNonce)
m.update(":")
m.update(pszQop)
m.update(":")
m.update(HA2)
hash = m.digest().encode('hex')
return hash
DigestCalcResponse = deprecated(Version("Twisted", 9, 0, 0))(DigestCalcResponse)
_absent = object()
class Via(object):
"""
A L{Via} is a SIP Via header, representing a segment of the path taken by
the request.
See RFC 3261, sections 8.1.1.7, 18.2.2, and 20.42.
@ivar transport: Network protocol used for this leg. (Probably either "TCP"
or "UDP".)
@type transport: C{str}
@ivar branch: Unique identifier for this request.
@type branch: C{str}
@ivar host: Hostname or IP for this leg.
@type host: C{str}
@ivar port: Port used for this leg.
@type port C{int}, or None.
@ivar rportRequested: Whether to request RFC 3581 client processing or not.
@type rportRequested: C{bool}
@ivar rportValue: Servers wishing to honor requests for RFC 3581 processing
should set this parameter to the source port the request was received
from.
@type rportValue: C{int}, or None.
@ivar ttl: Time-to-live for requests on multicast paths.
@type ttl: C{int}, or None.
@ivar maddr: The destination multicast address, if any.
@type maddr: C{str}, or None.
@ivar hidden: Obsolete in SIP 2.0.
@type hidden: C{bool}
@ivar otherParams: Any other parameters in the header.
@type otherParams: C{dict}
"""
def __init__(self, host, port=PORT, transport="UDP", ttl=None,
hidden=False, received=None, rport=_absent, branch=None,
maddr=None, **kw):
"""
Set parameters of this Via header. All arguments correspond to
attributes of the same name.
To maintain compatibility with old SIP
code, the 'rport' argument is used to determine the values of
C{rportRequested} and C{rportValue}. If None, C{rportRequested} is set
to True. (The deprecated method for doing this is to pass True.) If an
integer, C{rportValue} is set to the given value.
Any arguments not explicitly named here are collected into the
C{otherParams} dict.
"""
self.transport = transport
self.host = host
self.port = port
self.ttl = ttl
self.hidden = hidden
self.received = received
if rport is True:
warnings.warn(
"rport=True is deprecated since Twisted 9.0.",
DeprecationWarning,
stacklevel=2)
self.rportValue = None
self.rportRequested = True
elif rport is None:
self.rportValue = None
self.rportRequested = True
elif rport is _absent:
self.rportValue = None
self.rportRequested = False
else:
self.rportValue = rport
self.rportRequested = False
self.branch = branch
self.maddr = maddr
self.otherParams = kw
def _getrport(self):
"""
Returns the rport value expected by the old SIP code.
"""
if self.rportRequested == True:
return True
elif self.rportValue is not None:
return self.rportValue
else:
return None
def _setrport(self, newRPort):
"""
L{Base._fixupNAT} sets C{rport} directly, so this method sets
C{rportValue} based on that.
@param newRPort: The new rport value.
@type newRPort: C{int}
"""
self.rportValue = newRPort
self.rportRequested = False
rport = property(_getrport, _setrport)
def toString(self):
"""
Serialize this header for use in a request or response.
"""
s = "SIP/2.0/%s %s:%s" % (self.transport, self.host, self.port)
if self.hidden:
s += ";hidden"
for n in "ttl", "branch", "maddr", "received":
value = getattr(self, n)
if value is not None:
s += ";%s=%s" % (n, value)
if self.rportRequested:
s += ";rport"
elif self.rportValue is not None:
s += ";rport=%s" % (self.rport,)
etc = self.otherParams.items()
etc.sort()
for k, v in etc:
if v is None:
s += ";" + k
else:
s += ";%s=%s" % (k, v)
return s
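# A Via header value, as produced by Via.toString() and consumed by
# parseViaHeader() below, looks roughly like this (host, branch and received
# values are illustrative):
#   SIP/2.0/UDP 10.0.0.1:5060;branch=z9hG4bK74bf9;received=192.0.2.1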
def parseViaHeader(value):
"""
Parse a Via header.
@return: The parsed version of this header.
@rtype: L{Via}
"""
parts = value.split(";")
sent, params = parts[0], parts[1:]
protocolinfo, by = sent.split(" ", 1)
by = by.strip()
result = {}
pname, pversion, transport = protocolinfo.split("/")
if pname != "SIP" or pversion != "2.0":
raise ValueError, "wrong protocol or version: %r" % value
result["transport"] = transport
if ":" in by:
host, port = by.split(":")
result["port"] = int(port)
result["host"] = host
else:
result["host"] = by
for p in params:
        # it's the comment-stripping dance!
p = p.strip().split(" ", 1)
if len(p) == 1:
p, comment = p[0], ""
else:
p, comment = p
if p == "hidden":
result["hidden"] = True
continue
parts = p.split("=", 1)
if len(parts) == 1:
name, value = parts[0], None
else:
name, value = parts
if name in ("rport", "ttl"):
value = int(value)
result[name] = value
return Via(**result)
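# For the illustrative header value quoted just above this function,
# parseViaHeader() would yield a Via with transport="UDP", host="10.0.0.1",
# port=5060, branch="z9hG4bK74bf9" and received="192.0.2.1".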
class URL:
"""A SIP URL."""
def __init__(self, host, username=None, password=None, port=None,
transport=None, usertype=None, method=None,
ttl=None, maddr=None, tag=None, other=None, headers=None):
self.username = username
self.host = host
self.password = password
self.port = port
self.transport = transport
self.usertype = usertype
self.method = method
self.tag = tag
self.ttl = ttl
self.maddr = maddr
if other == None:
self.other = []
else:
self.other = other
if headers == None:
self.headers = {}
else:
self.headers = headers
def toString(self):
l = []; w = l.append
w("sip:")
if self.username != None:
w(self.username)
if self.password != None:
w(":%s" % self.password)
w("@")
w(self.host)
if self.port != None:
w(":%d" % self.port)
if self.usertype != None:
w(";user=%s" % self.usertype)
for n in ("transport", "ttl", "maddr", "method", "tag"):
v = getattr(self, n)
if v != None:
w(";%s=%s" % (n, v))
for v in self.other:
w(";%s" % v)
if self.headers:
w("?")
w("&".join([("%s=%s" % (specialCases.get(h) or dashCapitalize(h), v)) for (h, v) in self.headers.items()]))
return "".join(l)
def __str__(self):
return self.toString()
def __repr__(self):
return '<URL %s:%s@%s:%r/%s>' % (self.username, self.password, self.host, self.port, self.transport)
def parseURL(url, host=None, port=None):
"""Return string into URL object.
URIs are of of form 'sip:[email protected]'.
"""
d = {}
if not url.startswith("sip:"):
raise ValueError("unsupported scheme: " + url[:4])
parts = url[4:].split(";")
userdomain, params = parts[0], parts[1:]
udparts = userdomain.split("@", 1)
if len(udparts) == 2:
userpass, hostport = udparts
upparts = userpass.split(":", 1)
if len(upparts) == 1:
d["username"] = upparts[0]
else:
d["username"] = upparts[0]
d["password"] = upparts[1]
else:
hostport = udparts[0]
hpparts = hostport.split(":", 1)
if len(hpparts) == 1:
d["host"] = hpparts[0]
else:
d["host"] = hpparts[0]
d["port"] = int(hpparts[1])
if host != None:
d["host"] = host
if port != None:
d["port"] = port
for p in params:
if p == params[-1] and "?" in p:
d["headers"] = h = {}
p, headers = p.split("?", 1)
for header in headers.split("&"):
k, v = header.split("=")
h[k] = v
nv = p.split("=", 1)
if len(nv) == 1:
d.setdefault("other", []).append(p)
continue
name, value = nv
if name == "user":
d["usertype"] = value
elif name in ("transport", "ttl", "maddr", "method", "tag"):
if name == "ttl":
value = int(value)
d[name] = value
else:
d.setdefault("other", []).append(p)
return URL(**d)
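# A quick, illustrative example of the mapping performed here:
#   parseURL("sip:[email protected]:5070;transport=udp")
# yields a URL with username="alice", host="example.com", port=5070 and
# transport="udp".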
def cleanRequestURL(url):
"""Clean a URL from a Request line."""
url.transport = None
url.maddr = None
url.ttl = None
url.headers = {}
def parseAddress(address, host=None, port=None, clean=0):
"""Return (name, uri, params) for From/To/Contact header.
@param clean: remove unnecessary info, usually for From and To headers.
"""
address = address.strip()
# simple 'sip:foo' case
if address.startswith("sip:"):
return "", parseURL(address, host=host, port=port), {}
params = {}
name, url = address.split("<", 1)
name = name.strip()
if name.startswith('"'):
name = name[1:]
if name.endswith('"'):
name = name[:-1]
url, paramstring = url.split(">", 1)
url = parseURL(url, host=host, port=port)
paramstring = paramstring.strip()
if paramstring:
for l in paramstring.split(";"):
if not l:
continue
k, v = l.split("=")
params[k] = v
if clean:
# rfc 2543 6.21
url.ttl = None
url.headers = {}
url.transport = None
url.maddr = None
return name, url, params
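# Illustrative example:
#   parseAddress('"Bob" <sip:[email protected]>;tag=abc')
# returns ('Bob', <URL for sip:[email protected]>, {'tag': 'abc'}).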
class SIPError(Exception):
def __init__(self, code, phrase=None):
if phrase is None:
phrase = statusCodes[code]
Exception.__init__(self, "SIP error (%d): %s" % (code, phrase))
self.code = code
self.phrase = phrase
class RegistrationError(SIPError):
"""Registration was not possible."""
class Message:
"""A SIP message."""
length = None
def __init__(self):
self.headers = util.OrderedDict() # map name to list of values
self.body = ""
self.finished = 0
def addHeader(self, name, value):
name = name.lower()
name = longHeaders.get(name, name)
if name == "content-length":
self.length = int(value)
self.headers.setdefault(name,[]).append(value)
def bodyDataReceived(self, data):
self.body += data
def creationFinished(self):
if (self.length != None) and (self.length != len(self.body)):
raise ValueError, "wrong body length"
self.finished = 1
def toString(self):
s = "%s\r\n" % self._getHeaderLine()
for n, vs in self.headers.items():
for v in vs:
s += "%s: %s\r\n" % (specialCases.get(n) or dashCapitalize(n), v)
s += "\r\n"
s += self.body
return s
def _getHeaderLine(self):
raise NotImplementedError
class Request(Message):
"""A Request for a URI"""
def __init__(self, method, uri, version="SIP/2.0"):
Message.__init__(self)
self.method = method
if isinstance(uri, URL):
self.uri = uri
else:
self.uri = parseURL(uri)
cleanRequestURL(self.uri)
def __repr__(self):
return "<SIP Request %d:%s %s>" % (id(self), self.method, self.uri.toString())
def _getHeaderLine(self):
return "%s %s SIP/2.0" % (self.method, self.uri.toString())
class Response(Message):
"""A Response to a URI Request"""
def __init__(self, code, phrase=None, version="SIP/2.0"):
Message.__init__(self)
self.code = code
if phrase == None:
phrase = statusCodes[code]
self.phrase = phrase
def __repr__(self):
return "<SIP Response %d:%s>" % (id(self), self.code)
def _getHeaderLine(self):
return "SIP/2.0 %s %s" % (self.code, self.phrase)
class MessagesParser(basic.LineReceiver):
"""A SIP messages parser.
Expects dataReceived, dataDone repeatedly,
in that order. Shouldn't be connected to actual transport.
"""
version = "SIP/2.0"
acceptResponses = 1
acceptRequests = 1
state = "firstline" # or "headers", "body" or "invalid"
debug = 0
def __init__(self, messageReceivedCallback):
self.messageReceived = messageReceivedCallback
self.reset()
def reset(self, remainingData=""):
self.state = "firstline"
self.length = None # body length
self.bodyReceived = 0 # how much of the body we received
self.message = None
self.setLineMode(remainingData)
def invalidMessage(self):
self.state = "invalid"
self.setRawMode()
def dataDone(self):
# clear out any buffered data that may be hanging around
self.clearLineBuffer()
if self.state == "firstline":
return
if self.state != "body":
self.reset()
return
if self.length == None:
# no content-length header, so end of data signals message done
self.messageDone()
elif self.length < self.bodyReceived:
# aborted in the middle
self.reset()
else:
# we have enough data and message wasn't finished? something is wrong
raise RuntimeError, "this should never happen"
def dataReceived(self, data):
try:
basic.LineReceiver.dataReceived(self, data)
except:
log.err()
self.invalidMessage()
def handleFirstLine(self, line):
"""Expected to create self.message."""
raise NotImplementedError
def lineLengthExceeded(self, line):
self.invalidMessage()
def lineReceived(self, line):
if self.state == "firstline":
while line.startswith("\n") or line.startswith("\r"):
line = line[1:]
if not line:
return
try:
a, b, c = line.split(" ", 2)
except ValueError:
self.invalidMessage()
return
if a == "SIP/2.0" and self.acceptResponses:
# response
try:
code = int(b)
except ValueError:
self.invalidMessage()
return
self.message = Response(code, c)
elif c == "SIP/2.0" and self.acceptRequests:
self.message = Request(a, b)
else:
self.invalidMessage()
return
self.state = "headers"
return
else:
assert self.state == "headers"
if line:
# XXX support multi-line headers
try:
name, value = line.split(":", 1)
except ValueError:
self.invalidMessage()
return
self.message.addHeader(name, value.lstrip())
if name.lower() == "content-length":
try:
self.length = int(value.lstrip())
except ValueError:
self.invalidMessage()
return
else:
# CRLF, we now have message body until self.length bytes,
# or if no length was given, until there is no more data
# from the connection sending us data.
self.state = "body"
if self.length == 0:
self.messageDone()
return
self.setRawMode()
def messageDone(self, remainingData=""):
assert self.state == "body"
self.message.creationFinished()
self.messageReceived(self.message)
self.reset(remainingData)
def rawDataReceived(self, data):
assert self.state in ("body", "invalid")
if self.state == "invalid":
return
if self.length == None:
self.message.bodyDataReceived(data)
else:
dataLen = len(data)
expectedLen = self.length - self.bodyReceived
if dataLen > expectedLen:
self.message.bodyDataReceived(data[:expectedLen])
self.messageDone(data[expectedLen:])
return
else:
self.bodyReceived += dataLen
self.message.bodyDataReceived(data)
if self.bodyReceived == self.length:
self.messageDone()
class Base(protocol.DatagramProtocol):
"""Base class for SIP clients and servers."""
PORT = PORT
debug = False
def __init__(self):
self.messages = []
self.parser = MessagesParser(self.addMessage)
def addMessage(self, msg):
self.messages.append(msg)
def datagramReceived(self, data, addr):
self.parser.dataReceived(data)
self.parser.dataDone()
for m in self.messages:
self._fixupNAT(m, addr)
if self.debug:
log.msg("Received %r from %r" % (m.toString(), addr))
if isinstance(m, Request):
self.handle_request(m, addr)
else:
self.handle_response(m, addr)
self.messages[:] = []
def _fixupNAT(self, message, (srcHost, srcPort)):
# RFC 2543 6.40.2,
senderVia = parseViaHeader(message.headers["via"][0])
if senderVia.host != srcHost:
senderVia.received = srcHost
if senderVia.port != srcPort:
senderVia.rport = srcPort
message.headers["via"][0] = senderVia.toString()
elif senderVia.rport == True:
senderVia.received = srcHost
senderVia.rport = srcPort
message.headers["via"][0] = senderVia.toString()
def deliverResponse(self, responseMessage):
"""Deliver response.
Destination is based on topmost Via header."""
destVia = parseViaHeader(responseMessage.headers["via"][0])
# XXX we don't do multicast yet
host = destVia.received or destVia.host
port = destVia.rport or destVia.port or self.PORT
destAddr = URL(host=host, port=port)
self.sendMessage(destAddr, responseMessage)
def responseFromRequest(self, code, request):
"""Create a response to a request message."""
response = Response(code)
for name in ("via", "to", "from", "call-id", "cseq"):
response.headers[name] = request.headers.get(name, [])[:]
return response
def sendMessage(self, destURL, message):
"""Send a message.
@param destURL: C{URL}. This should be a *physical* URL, not a logical one.
@param message: The message to send.
"""
if destURL.transport not in ("udp", None):
raise RuntimeError, "only UDP currently supported"
if self.debug:
log.msg("Sending %r to %r" % (message.toString(), destURL))
self.transport.write(message.toString(), (destURL.host, destURL.port or self.PORT))
def handle_request(self, message, addr):
"""Override to define behavior for requests received
@type message: C{Message}
@type addr: C{tuple}
"""
raise NotImplementedError
def handle_response(self, message, addr):
"""Override to define behavior for responses received.
@type message: C{Message}
@type addr: C{tuple}
"""
raise NotImplementedError
class IContact(Interface):
"""A user of a registrar or proxy"""
class Registration:
def __init__(self, secondsToExpiry, contactURL):
self.secondsToExpiry = secondsToExpiry
self.contactURL = contactURL
class IRegistry(Interface):
"""Allows registration of logical->physical URL mapping."""
def registerAddress(domainURL, logicalURL, physicalURL):
"""Register the physical address of a logical URL.
@return: Deferred of C{Registration} or failure with RegistrationError.
"""
def unregisterAddress(domainURL, logicalURL, physicalURL):
"""Unregister the physical address of a logical URL.
@return: Deferred of C{Registration} or failure with RegistrationError.
"""
def getRegistrationInfo(logicalURL):
"""Get registration info for logical URL.
@return: Deferred of C{Registration} object or failure of LookupError.
"""
class ILocator(Interface):
"""Allow looking up physical address for logical URL."""
def getAddress(logicalURL):
"""Return physical URL of server for logical URL of user.
@param logicalURL: a logical C{URL}.
@return: Deferred which becomes URL or fails with LookupError.
"""
class Proxy(Base):
"""SIP proxy."""
PORT = PORT
locator = None # object implementing ILocator
def __init__(self, host=None, port=PORT):
"""Create new instance.
@param host: our hostname/IP as set in Via headers.
@param port: our port as set in Via headers.
"""
self.host = host or socket.getfqdn()
self.port = port
Base.__init__(self)
def getVia(self):
"""Return value of Via header for this proxy."""
return Via(host=self.host, port=self.port)
def handle_request(self, message, addr):
# send immediate 100/trying message before processing
#self.deliverResponse(self.responseFromRequest(100, message))
f = getattr(self, "handle_%s_request" % message.method, None)
if f is None:
f = self.handle_request_default
try:
d = f(message, addr)
except SIPError, e:
self.deliverResponse(self.responseFromRequest(e.code, message))
except:
log.err()
self.deliverResponse(self.responseFromRequest(500, message))
else:
if d is not None:
d.addErrback(lambda e:
self.deliverResponse(self.responseFromRequest(e.code, message))
)
def handle_request_default(self, message, (srcHost, srcPort)):
"""Default request handler.
Default behaviour for OPTIONS and unknown methods for proxies
        is to forward the message on to the client.
        Since at the moment we are a stateless proxy, that's basically
everything.
"""
def _mungContactHeader(uri, message):
message.headers['contact'][0] = uri.toString()
return self.sendMessage(uri, message)
viaHeader = self.getVia()
if viaHeader.toString() in message.headers["via"]:
# must be a loop, so drop message
log.msg("Dropping looped message.")
return
message.headers["via"].insert(0, viaHeader.toString())
name, uri, tags = parseAddress(message.headers["to"][0], clean=1)
# this is broken and needs refactoring to use cred
d = self.locator.getAddress(uri)
d.addCallback(self.sendMessage, message)
d.addErrback(self._cantForwardRequest, message)
def _cantForwardRequest(self, error, message):
error.trap(LookupError)
del message.headers["via"][0] # this'll be us
self.deliverResponse(self.responseFromRequest(404, message))
def deliverResponse(self, responseMessage):
"""Deliver response.
Destination is based on topmost Via header."""
destVia = parseViaHeader(responseMessage.headers["via"][0])
# XXX we don't do multicast yet
host = destVia.received or destVia.host
port = destVia.rport or destVia.port or self.PORT
destAddr = URL(host=host, port=port)
self.sendMessage(destAddr, responseMessage)
def responseFromRequest(self, code, request):
"""Create a response to a request message."""
response = Response(code)
for name in ("via", "to", "from", "call-id", "cseq"):
response.headers[name] = request.headers.get(name, [])[:]
return response
def handle_response(self, message, addr):
"""Default response handler."""
v = parseViaHeader(message.headers["via"][0])
if (v.host, v.port) != (self.host, self.port):
# we got a message not intended for us?
# XXX note this check breaks if we have multiple external IPs
# yay for suck protocols
log.msg("Dropping incorrectly addressed message")
return
del message.headers["via"][0]
if not message.headers["via"]:
# this message is addressed to us
self.gotResponse(message, addr)
return
self.deliverResponse(message)
def gotResponse(self, message, addr):
"""Called with responses that are addressed at this server."""
pass
class IAuthorizer(Interface):
def getChallenge(peer):
"""Generate a challenge the client may respond to.
@type peer: C{tuple}
@param peer: The client's address
@rtype: C{str}
@return: The challenge string
"""
def decode(response):
"""Create a credentials object from the given response.
@type response: C{str}
"""
class BasicAuthorizer:
"""Authorizer for insecure Basic (base64-encoded plaintext) authentication.
This form of authentication is broken and insecure. Do not use it.
"""
implements(IAuthorizer)
def __init__(self):
"""
This method exists solely to issue a deprecation warning.
"""
warnings.warn(
"twisted.protocols.sip.BasicAuthorizer was deprecated "
"in Twisted 9.0.0",
category=DeprecationWarning,
stacklevel=2)
def getChallenge(self, peer):
return None
def decode(self, response):
# At least one SIP client improperly pads its Base64 encoded messages
for i in range(3):
try:
creds = (response + ('=' * i)).decode('base64')
except:
pass
else:
break
else:
# Totally bogus
raise SIPError(400)
p = creds.split(':', 1)
if len(p) == 2:
return UsernamePassword(*p)
raise SIPError(400)
class DigestedCredentials(UsernameHashedPassword):
"""Yet Another Simple Digest-MD5 authentication scheme"""
def __init__(self, username, fields, challenges):
warnings.warn(
"twisted.protocols.sip.DigestedCredentials was deprecated "
"in Twisted 9.0.0",
category=DeprecationWarning,
stacklevel=2)
self.username = username
self.fields = fields
self.challenges = challenges
def checkPassword(self, password):
method = 'REGISTER'
response = self.fields.get('response')
uri = self.fields.get('uri')
nonce = self.fields.get('nonce')
cnonce = self.fields.get('cnonce')
nc = self.fields.get('nc')
algo = self.fields.get('algorithm', 'MD5')
qop = self.fields.get('qop-options', 'auth')
opaque = self.fields.get('opaque')
if opaque not in self.challenges:
return False
del self.challenges[opaque]
user, domain = self.username.split('@', 1)
if uri is None:
uri = 'sip:' + domain
expected = DigestCalcResponse(
DigestCalcHA1(algo, user, domain, password, nonce, cnonce),
nonce, nc, cnonce, qop, method, uri, None,
)
return expected == response
class DigestAuthorizer:
CHALLENGE_LIFETIME = 15
implements(IAuthorizer)
def __init__(self):
warnings.warn(
"twisted.protocols.sip.DigestAuthorizer was deprecated "
"in Twisted 9.0.0",
category=DeprecationWarning,
stacklevel=2)
self.outstanding = {}
def generateNonce(self):
c = tuple([random.randrange(sys.maxint) for _ in range(3)])
c = '%d%d%d' % c
return c
def generateOpaque(self):
return str(random.randrange(sys.maxint))
def getChallenge(self, peer):
c = self.generateNonce()
o = self.generateOpaque()
self.outstanding[o] = c
return ','.join((
'nonce="%s"' % c,
'opaque="%s"' % o,
'qop-options="auth"',
'algorithm="MD5"',
))
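        # The resulting challenge string looks roughly like this (nonce and
        # opaque are randomly generated; the values below are illustrative):
        #   nonce="1234",opaque="5678",qop-options="auth",algorithm="MD5"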
def decode(self, response):
response = ' '.join(response.splitlines())
parts = response.split(',')
auth = dict([(k.strip(), unq(v.strip())) for (k, v) in [p.split('=', 1) for p in parts]])
try:
username = auth['username']
except KeyError:
raise SIPError(401)
try:
return DigestedCredentials(username, auth, self.outstanding)
except:
raise SIPError(400)
class RegisterProxy(Proxy):
"""A proxy that allows registration for a specific domain.
Unregistered users won't be handled.
"""
portal = None
registry = None # should implement IRegistry
authorizers = {}
def __init__(self, *args, **kw):
Proxy.__init__(self, *args, **kw)
self.liveChallenges = {}
if "digest" not in self.authorizers:
self.authorizers["digest"] = DigestAuthorizer()
def handle_ACK_request(self, message, (host, port)):
# XXX
# ACKs are a client's way of indicating they got the last message
# Responding to them is not a good idea.
# However, we should keep track of terminal messages and re-transmit
# if no ACK is received.
pass
def handle_REGISTER_request(self, message, (host, port)):
"""Handle a registration request.
Currently registration is not proxied.
"""
if self.portal is None:
# There is no portal. Let anyone in.
self.register(message, host, port)
else:
# There is a portal. Check for credentials.
if not message.headers.has_key("authorization"):
return self.unauthorized(message, host, port)
else:
return self.login(message, host, port)
def unauthorized(self, message, host, port):
m = self.responseFromRequest(401, message)
for (scheme, auth) in self.authorizers.iteritems():
chal = auth.getChallenge((host, port))
if chal is None:
value = '%s realm="%s"' % (scheme.title(), self.host)
else:
value = '%s %s,realm="%s"' % (scheme.title(), chal, self.host)
m.headers.setdefault('www-authenticate', []).append(value)
self.deliverResponse(m)
def login(self, message, host, port):
parts = message.headers['authorization'][0].split(None, 1)
a = self.authorizers.get(parts[0].lower())
if a:
try:
c = a.decode(parts[1])
except SIPError:
raise
except:
log.err()
self.deliverResponse(self.responseFromRequest(500, message))
else:
c.username += '@' + self.host
self.portal.login(c, None, IContact
).addCallback(self._cbLogin, message, host, port
).addErrback(self._ebLogin, message, host, port
).addErrback(log.err
)
else:
self.deliverResponse(self.responseFromRequest(501, message))
def _cbLogin(self, (i, a, l), message, host, port):
# It's stateless, matey. What a joke.
self.register(message, host, port)
def _ebLogin(self, failure, message, host, port):
failure.trap(cred.error.UnauthorizedLogin)
self.unauthorized(message, host, port)
def register(self, message, host, port):
"""Allow all users to register"""
name, toURL, params = parseAddress(message.headers["to"][0], clean=1)
contact = None
if message.headers.has_key("contact"):
contact = message.headers["contact"][0]
if message.headers.get("expires", [None])[0] == "0":
self.unregister(message, toURL, contact)
else:
# XXX Check expires on appropriate URL, and pass it to registry
# instead of having registry hardcode it.
if contact is not None:
name, contactURL, params = parseAddress(contact, host=host, port=port)
d = self.registry.registerAddress(message.uri, toURL, contactURL)
else:
d = self.registry.getRegistrationInfo(toURL)
d.addCallbacks(self._cbRegister, self._ebRegister,
callbackArgs=(message,),
errbackArgs=(message,)
)
def _cbRegister(self, registration, message):
response = self.responseFromRequest(200, message)
if registration.contactURL != None:
response.addHeader("contact", registration.contactURL.toString())
response.addHeader("expires", "%d" % registration.secondsToExpiry)
response.addHeader("content-length", "0")
self.deliverResponse(response)
def _ebRegister(self, error, message):
error.trap(RegistrationError, LookupError)
# XXX return error message, and alter tests to deal with
# this, currently tests assume no message sent on failure
def unregister(self, message, toURL, contact):
try:
expires = int(message.headers["expires"][0])
except ValueError:
self.deliverResponse(self.responseFromRequest(400, message))
else:
if expires == 0:
if contact == "*":
contactURL = "*"
else:
name, contactURL, params = parseAddress(contact)
d = self.registry.unregisterAddress(message.uri, toURL, contactURL)
d.addCallback(self._cbUnregister, message
).addErrback(self._ebUnregister, message
)
def _cbUnregister(self, registration, message):
msg = self.responseFromRequest(200, message)
msg.headers.setdefault('contact', []).append(registration.contactURL.toString())
msg.addHeader("expires", "0")
self.deliverResponse(msg)
def _ebUnregister(self, registration, message):
pass
class InMemoryRegistry:
"""A simplistic registry for a specific domain."""
implements(IRegistry, ILocator)
def __init__(self, domain):
self.domain = domain # the domain we handle registration for
self.users = {} # map username to (IDelayedCall for expiry, address URI)
def getAddress(self, userURI):
if userURI.host != self.domain:
return defer.fail(LookupError("unknown domain"))
if self.users.has_key(userURI.username):
dc, url = self.users[userURI.username]
return defer.succeed(url)
else:
return defer.fail(LookupError("no such user"))
def getRegistrationInfo(self, userURI):
if userURI.host != self.domain:
return defer.fail(LookupError("unknown domain"))
if self.users.has_key(userURI.username):
dc, url = self.users[userURI.username]
return defer.succeed(Registration(int(dc.getTime() - time.time()), url))
else:
return defer.fail(LookupError("no such user"))
def _expireRegistration(self, username):
try:
dc, url = self.users[username]
except KeyError:
return defer.fail(LookupError("no such user"))
else:
dc.cancel()
del self.users[username]
return defer.succeed(Registration(0, url))
def registerAddress(self, domainURL, logicalURL, physicalURL):
if domainURL.host != self.domain:
log.msg("Registration for domain we don't handle.")
return defer.fail(RegistrationError(404))
if logicalURL.host != self.domain:
log.msg("Registration for domain we don't handle.")
return defer.fail(RegistrationError(404))
if self.users.has_key(logicalURL.username):
dc, old = self.users[logicalURL.username]
dc.reset(3600)
else:
dc = reactor.callLater(3600, self._expireRegistration, logicalURL.username)
log.msg("Registered %s at %s" % (logicalURL.toString(), physicalURL.toString()))
self.users[logicalURL.username] = (dc, physicalURL)
return defer.succeed(Registration(int(dc.getTime() - time.time()), physicalURL))
def unregisterAddress(self, domainURL, logicalURL, physicalURL):
return self._expireRegistration(logicalURL.username)
| agpl-3.0 | 6,215,315,143,037,585,000 | 30.293103 | 119 | 0.569362 | false | 4.006238 | false | false | false |